Research Challenges in Astronomy

I’ve been at the first APAC All Hands Meeting this week, generally hearing what all the other people in the APAC Grid project are up to and meeting folks from around the country that I only otherwise get to see via Access Grid.

Today it was the turn of some of the science areas to tell us what they are up to and what they see as their big challenges, and the scariest (from an HPC perspective) was the session on Astronomy and Astrophysics by Peter Quinn (formerly of the ESO and now a Premier’s Fellow at UWA).

The most intimidating points I picked up from his presentation were:

  • Data explosion – the doubling time for survey data is currently T2 < 12 months, and big new survey projects such as VST and VISTA will push that to T2 < 6 months!
  • Disk technology’s T2, by contrast, is about 10 years at present (according to Peter), and slowing.
  • The Large Synoptic Survey Telescope is reckoned to be capable of producing 7 PetaBytes of data per annum.
  • The ESO’s data archive is currently (2006) 100TB in 10 racks, using 70kW of power. By 2012 it is forecast to grow to 10PB in 100 racks, consuming 1MW of electricity.
  • A recent Epoch of Reionisation simulation of 5,000³ particles on a 1,000 CPU Opteron cluster used 2 months of CPU time and 10TB physical RAM (about 10GB per core) and produced about 100TB of output data.
  • Catalogue sizes are exploding: in 2000 a catalogue held about 100,000 galaxies; by 2010 that will be 1 billion.
  • Algorithms are not scaling with these data sizes – an algorithm that took 1 second in 2000 will take 3 years in 2010!
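
That last figure is easy to sanity-check. This is my own back-of-envelope sketch rather than anything from Peter’s slides: assume the algorithm in question is something like a naive pairwise comparison that scales as O(N²), and let the catalogue grow from 100,000 to 1 billion objects as above.

```python
# Back-of-envelope check of the "1 second in 2000 -> 3 years in 2010" claim,
# assuming (my assumption) an O(N^2) algorithm such as a naive pairwise
# comparison across the whole catalogue.

n_2000 = 1e5                  # ~100,000 galaxies in a catalogue in 2000
n_2010 = 1e9                  # ~1 billion galaxies forecast for 2010

growth = n_2010 / n_2000      # 10,000x more objects
slowdown = growth ** 2        # O(N^2): run time scales with N squared

seconds = 1.0 * slowdown      # a job that took 1 second in 2000
years = seconds / (365.25 * 24 * 3600)

print(f"Catalogue growth: {growth:,.0f}x")
print(f"Run time: {seconds:,.0f} s, or about {years:.1f} years")
```

A 10,000-fold increase in catalogue size, squared, is a factor of 10⁸, which turns 1 second into roughly 3.2 years – so the claim holds for anything quadratic or worse.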

But these problems pale into insignificance when you consider the massive Square Kilometre Array (SKA) radio telescope, which is forecast to produce 100 ExaBytes (that’s one hundred million TeraBytes) of data annually!

This raises a number of very fundamental issues:

  • The terabit-speed network technologies needed to get the data off the detectors do not exist (yet).
  • There is no storage technology to cope with the volumes of data.
  • This means they will need to process the data on the fly in a highly parallel manner.
  • This is a radio telescope, so there is no time when it cannot take data, unlike an optical ‘scope. This means you cannot somehow buffer the night time data and then process it during the day.
  • If the ESO estimate of 1 megawatt of power for 7 PB is correct, and assuming that power per PB stays roughly constant and they do store all 100 EB of data, then storing one year’s data will need about 14GW of generating capacity.
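
For what it’s worth, the arithmetic behind that last point looks like this (a quick sketch using only the figures quoted above, and assuming power scales linearly with capacity):

```python
# Rough check of the storage power estimate: if ~1 MW runs a ~7 PB archive
# (taking the figures quoted above at face value) and power scales linearly
# with capacity, what does one year of SKA data need?

PB = 1e15
EB = 1e18

power_per_pb_mw = 1.0 / 7.0      # ~1 MW per 7 PB
ska_year_bytes = 100 * EB        # one year of SKA output

power_mw = (ska_year_bytes / PB) * power_per_pb_mw
print(f"~{power_mw / 1000:.1f} GW to keep one year of SKA data online")
```

That comes out at roughly 14.3 GW, in line with the ~14 GW figure above.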

Fortunately construction of the SKA isn’t due to start until 2013, so we’ve got a bit of time to solve all these… 🙂

3 thoughts on “Research Challenges in Astronomy”

  1. To put the SKA’s data requirement in numbers that might make a bit more sense – 100 ExaBytes of data a year is a sustained data rate of around 3.3 TeraBytes per second.

    The current largest SATA drive you can buy is 750GB, so you would need 5 of those drives per second to store the data!
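
    For the curious, the conversion works out as follows (a quick sketch; the exact figure wobbles a little depending on whether you count in powers of ten or powers of two):

    ```python
    # 100 EB/year as a sustained rate, and how many 750 GB drives that fills
    # per second (decimal units; binary EiB/TiB gives the ~3.3 TB/s above).

    EB, TB, GB = 1e18, 1e12, 1e9

    bytes_per_year = 100 * EB
    seconds_per_year = 365 * 24 * 3600

    rate = bytes_per_year / seconds_per_year
    print(f"Sustained rate: {rate / TB:.1f} TB/s")          # ~3.2 TB/s

    drive = 750 * GB                                        # largest SATA drive (2006)
    print(f"Drives filled per second: {rate / drive:.1f}")  # ~4.2, call it 5
    ```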

  2. Surely you wouldn’t need to keep all those thousands of drives online and spinning at the same time. Astronomical data isn’t likely to need to be randomly accessed across the entire archive within a single operation, so disks could be shunted online and offline as required, more in the manner of tape drives.

    Hopefully the consequence of this would be a significantly lesser power bill at least.

  3. Forget the power problem – with our current 750GB drives you’d need over 133 million drives a year, or over 365,000 a day (the arithmetic is sketched at the end of this comment).

    Using the 8TB disks that Seagate has forecast for 2013, that’s still over 34,000 drives a day.

    A problem of similar scale (if you can imagine it) is that, if Seagate’s numbers are correct, the bandwidth to a single drive will only be 5Gb/s (that’s less than 1GB/s) – it needs to be more than 3,000 times better to be able to keep up!

    So there is no way around it: they will have to process the data in RAM as it comes out of the receivers and reduce it significantly in size before squirting it out to disk. Either that or they will have to drop data in the detector – that’s not unprecedented: the LHC has data-discard circuitry built into its detectors and only records 1/10th of 1% of the information that comes in, which is why the LHC will only produce 5-10TB a day.
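
    Working the drive arithmetic through in a quick script (only the 750GB and forecast 8TB drive sizes and the 5Gb/s interface figure go in):

    ```python
    # Drive counts and single-drive bandwidth shortfall for 100 EB/year,
    # assuming drives are simply filled and shelved.

    EB, TB, GB = 1e18, 1e12, 1e9
    Gbit = 1e9 / 8                     # one gigabit, in bytes

    data_per_year = 100 * EB
    seconds_per_year = 365 * 24 * 3600

    for size in (750 * GB, 8 * TB):
        per_year = data_per_year / size
        print(f"{size / TB:5.2f} TB drives: {per_year / 1e6:5.1f} million/year, "
              f"or {per_year / 365:,.0f} a day")

    rate = data_per_year / seconds_per_year   # ~3.2 TB/s sustained
    drive_bw = 5 * Gbit                       # forecast 5 Gb/s per drive
    print(f"A single drive is ~{rate / drive_bw:,.0f}x too slow to keep up")
    ```

    In fact the per-drive shortfall is nearer 5,000× than 3,000×, which only strengthens the case for processing in RAM.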
