Saturday, January 08, 2005

No more free lunch...

Here's a nice article by Herb Sutter ("The Free Lunch Is Over") making the case that further advances in computing power will come from multiprocessor technology rather than from continuing to ramp up the GHz. Apparently Intel is eventually planning on creating chips with hundreds of cores. Which is fine by me! Programming for parallel machines is somewhat trickier, but natural enough once you get used to it.

Say your program loops over discrete time steps, with different nodes handling different areas of the computation (perhaps subvolumes in a 3-D model). Each node will likely need to work both on data that will be sent to other nodes (perhaps boundary conditions) and on completely internal data. It then makes sense to do the boundary calculations first, send off the results through the network, and then work on the internal data; by the time that is finished, the boundary data from the other nodes should have arrived, allowing the next time step to proceed immediately. There are all sorts of tricks like this to be invented, and most programs that need huge amounts of processing power can be largely parallelized.

Vector processing should be kept in mind too - ClearSpeed's CSX600 chip gets 50 Gflops running at 250 MHz with 96 processing elements, and all that at under 5 watts of power. Some 2,000 of those would get you to 100 Tflops - the current peak for supercomputers, and the estimated power of the human brain - with only 10,000 watts (which would be about 100 times less efficient than the human body - not bad though!).

A.I. research, in particular, should be able to thrive on multiprocessor systems, since the human brain itself is a paragon of parallel design, with some 100 billion neurons, each one connected to some 1,000 others.
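The boundary-first trick described above can be sketched in a few lines. This is a single-process toy, not real message passing: the two Node objects stand in for cluster nodes sharing a 1-D diffusion grid, and the ghost-cell hand-off stands in for a non-blocking network send that would overlap with the interior work (the Node class and all names here are illustrative, not any real MPI API):

```python
ALPHA = 0.1  # diffusion coefficient; stable for ALPHA <= 0.5

class Node:
    """One 'node' owning a strip of a 1-D diffusion grid."""
    def __init__(self, cells):
        self.u = list(cells)      # locally owned cells
        self.ghost_left = 0.0     # copies of neighbours' edge cells
        self.ghost_right = 0.0

    def boundary_values(self):
        # Step 1: compute/extract the data other nodes need -- our edge cells.
        return self.u[0], self.u[-1]

    def step(self):
        # Step 3: once ghost data is in place, update every owned cell:
        # u_new[i] = u[i] + ALPHA * (u[i-1] - 2*u[i] + u[i+1])
        ext = [self.ghost_left] + self.u + [self.ghost_right]
        self.u = [ext[i] + ALPHA * (ext[i-1] - 2*ext[i] + ext[i+1])
                  for i in range(1, len(ext) - 1)]

left = Node([0.0, 0.0, 1.0, 1.0])
right = Node([1.0, 1.0, 0.0, 0.0])

for t in range(10):
    # Step 1: boundary data first...
    _, l_edge = left.boundary_values()
    r_edge, _ = right.boundary_values()
    # Step 2: "send" it; on a cluster this would be a non-blocking send,
    # with the interior computation proceeding while the data is in flight.
    left.ghost_right, right.ghost_left = r_edge, l_edge
    left.ghost_left, right.ghost_right = 0.0, 0.0  # fixed physical edges
    # Step 3: the bulk update runs once neighbour data has arrived.
    left.step(); right.step()

print(left.u, right.u)
```

The payoff of ordering the work this way is that the network transfer costs (almost) nothing: each node is busy with its interior cells while the boundary values travel, so communication latency hides behind computation.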
