At SC06 there was a panel discussion on the final day about whether the trend towards more and more cores per socket would be good or bad for HPC. The consensus was that more cores were inevitable: chip makers needed something to make up for stagnating clock speeds, and that need coincided with more and more blank space appearing on the die as transistor sizes shrank.
However, this puts all your memory on the wrong side of the pins from the cores, and HPC will (must) need to find a way to deal with it!
The presentations were really good, and I was a bit sad that I couldn’t take enough notes as the session was packed and I was up near the back. But I’ve just found out that all the slides used are up on the web as PDFs, courtesy of the most amiable Thomas Sterling, who chaired the session.
The most illuminating HPC-related quote was from the slides of Steve Scott, talking about how RAM characteristics have changed over the years:
1979 -> 1999:
16000X density increase
640X uniform access BW increase
500X random access BW increase
25X less per-bit memory bandwidth
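That last figure falls straight out of the first two: density grew 16,000X while uniform-access bandwidth grew only 640X, so the bandwidth available per bit of capacity shrank by the ratio of the two. A quick back-of-envelope check (variable names are mine, the numbers are from the slides):

```python
# Sanity check of Steve Scott's 1979 -> 1999 DRAM numbers.
density_increase = 16000  # per-chip bit density grew 16000X
uniform_bw_increase = 640  # uniform-access bandwidth grew only 640X

# Bandwidth per bit of capacity shrank by the ratio of the two.
per_bit_bw_drop = density_increase / uniform_bw_increase
print(per_bit_bw_drop)  # 25.0, i.e. 25X less bandwidth per bit
```

In other words, every generation of DRAM gives you far more bits than it gives you extra bandwidth to reach them with — which is exactly the problem more cores per socket makes worse.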
My favourite non-HPC quote is from Don Becker’s slides:
My nightmare: An 80 core consumer CPU means your web experience will be 79 3D animated ads roaming over your screen
Be afraid, be very afraid (on both grounds)…
Why don’t they start putting RAM around and in between all these cores? (in case it’s not obvious, I’m not a chip designer 😉 )
I guess I should probably have read the slides *before* writing this, but the Wiggles don’t make for a good background reading soundtrack.
Putting RAM on the die has been suggested, after all that’s exactly what cache is! 🙂