Jonas Tangen
Why do supercomputers have to be so big?
We need supercomputers because scientists are doing really awesome work that requires lots of computing time. For some of this work, if we weren’t using supercomputing to break up tasks and make processing faster, it would take years or decades to complete.
By setting hundreds of computer processors to work on a job, you can divide the effective time it takes to do a big calculation by hundreds. If you had thousands of processors, you could divide that time by thousands.
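As a rough illustration of that arithmetic, the ideal speedup simply divides the single-processor time by the processor count; real jobs fall short of this because of communication and coordination overhead. A minimal sketch in Python, with made-up numbers:

```python
# Idealized speedup: splitting a job across N processors divides its time by N.
# The hours and processor counts below are hypothetical, for illustration only.
serial_hours = 10_000  # imagined running time on a single processor

for n_processors in (1, 100, 1_000):
    ideal_hours = serial_hours / n_processors
    print(f"{n_processors:>5} processors -> about {ideal_hours:,.0f} hours")
```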
There are actually mixed opinions in the computing world about what supercomputing is and isn't. What the definitions have in common is the idea that you're spreading a big computational task among many different processors, and maybe even among many computers that each have multiple processors.
If your computational problem can be broken up into many independent tasks, we call that high throughput computing. We’re actually leaders in high throughput computing at UW–Madison.
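A toy picture of high throughput computing, sketched with Python's standard concurrent.futures module: because each task is independent, the tasks can run anywhere, in any order. In practice this is what systems like UW–Madison's HTCondor scheduler are built for, farming independent jobs out across many machines; this single-machine sketch, with a hypothetical analyze function standing in for one real job, just shows the independence.

```python
from concurrent.futures import ProcessPoolExecutor

def analyze(sample: int) -> int:
    """Stand-in for one independent task, e.g. analyzing one data file."""
    return sample * sample  # placeholder computation

if __name__ == "__main__":
    samples = range(100)  # each task depends only on its own input
    with ProcessPoolExecutor() as pool:
        # Tasks share no data, so they can be distributed freely.
        results = list(pool.map(analyze, samples))
    print(f"finished {len(results)} independent tasks")
```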
If a task can't be broken up into independent pieces like that, you can still have a group of computers share the load by working on it together simultaneously.
One example is a simulation of a galaxy in astronomy. There may be many moments in time that have to be calculated. To speed up the processing within each time step, we could split the calculation of different star positions among multiple processors on multiple computers.
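Here is a rough sketch of that idea, assuming a toy simulation where star positions are updated once per time step. NumPy and a process pool split each step's position updates into chunks, with all the physics reduced to a placeholder advance_chunk function; a real simulation would also have to exchange data between chunks, since every star's motion depends on all the others.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

N_STARS, N_STEPS = 10_000, 5  # arbitrary toy sizes

def advance_chunk(chunk: np.ndarray) -> np.ndarray:
    """Placeholder update for one chunk of star positions in one time step."""
    return chunk + 0.01 * np.sin(chunk)  # stand-in for the real physics

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    positions = rng.standard_normal((N_STARS, 3))  # random starting positions
    with ProcessPoolExecutor() as pool:
        for _ in range(N_STEPS):
            # Divide the stars among workers within this time step,
            # then reassemble before moving to the next step.
            chunks = np.array_split(positions, 8)
            positions = np.concatenate(list(pool.map(advance_chunk, chunks)))
    print("final mean position:", positions.mean(axis=0))
```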
I am constantly surprised at the range of research that can benefit from computing.
We have researchers from the physical sciences and engineering, the traditional users of large-scale computing who have been tackling problems with it for decades.
We also work with researchers in the social sciences and humanities, in fields like psychology, economics, and history, who are beginning to see, as a community, the benefits of computing.