The recent discovery of a source of cosmic neutrinos, the mysterious, almost massless “ghost particles” that stream across the universe, is another marvel of modern collaborative science, involving hundreds of scientists, dozens of countries … and a galaxy of data points.
Scientists with the IceCube neutrino detection project, located at the South Pole and run by the University of Wisconsin–Madison, announced in July that they had traced a cosmic neutrino back to its origin: an energy-spewing supermassive black hole at the heart of a galaxy some 4 billion light-years from Earth. Scientists say the discovery will provide a fundamental new tool for seeing the unseeable in the universe.
Much like the detection of gravitational waves in 2015 and the discovery of the Higgs boson in 2012, the IceCube neutrino detection project relies on corralling, analyzing and making sense of massive amounts of data. And a common denominator in each case has been the use of high-throughput computing (HTC) to solve this once-intractable challenge.
Applying advanced HTC technologies to IceCube has been a partnership between the Wisconsin IceCube Particle Astrophysics Center (WIPAC) and the UW–Madison Center for High Throughput Computing (CHTC). Jointly, the two centers worked to translate the long-established leadership of the UW–Madison campus in developing HTC frameworks and technologies into a powerful, worldwide distributed HTC environment.
The HTCondor software, pioneered 30 years ago by Miron Livny, director of core computation technologies at the Morgridge Institute for Research and the John P. Morgridge professor of computer science, is a key element of the IceCube HTC infrastructure. This infrastructure integrates computing clusters at several IceCube collaborating institutions with the CHTC and an international fabric of HTC services provided by the Open Science Grid (OSG).
IceCube is one of the more than 200 research projects at UW–Madison that leverage the unique capabilities of HTCondor to harness massive amounts of computing power in support of their high-throughput applications.
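What harnessing that power looks like in practice: a researcher describes one independent job, and HTCondor queues as many copies as needed and matches each to an available machine. Below is a minimal sketch using the HTCondor Python bindings; the script name, resource requests and job count are hypothetical stand-ins, not IceCube’s production configuration.

```python
# Minimal sketch: queue a batch of independent jobs through HTCondor's
# Python bindings (pip install htcondor). All file names and resource
# requests here are hypothetical, for illustration only.
import htcondor

job = htcondor.Submit({
    "executable": "analyze_events.py",         # hypothetical analysis script
    "arguments": "--input data_$(ProcId).i3",  # $(ProcId) numbers each job
    "request_cpus": "1",
    "request_memory": "2GB",
    "output": "job_$(ProcId).out",
    "error": "job_$(ProcId).err",
    "log": "analysis.log",
})

schedd = htcondor.Schedd()               # the local submit-side scheduler
result = schedd.submit(job, count=1000)  # queue 1,000 independent jobs
print("Submitted job cluster", result.cluster())
```

Because every job is independent, HTCondor is free to run them wherever capacity appears, on a campus cluster one hour and a distant OSG site the next, without the researcher changing a line.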
“The IceCube accomplishment is another chapter in the ongoing story of how HTC is enabling scientific discovery in an ever-growing range of science domains,” Livny says.
Gonzalo Merino, manager of WIPAC computing until July 2018, says the number of computer central processing units (CPUs) tapped by the project through HTCondor and the OSG at least doubled over his five years with the project. The number of graphics processing units (GPUs) increased tenfold.
“The offline analysis of the IceCube data requires easily in the orbit of 10,000 CPU cores and 1,000 GPUs sustained throughout the year,” he says.
The particle detectors embedded in the South Pole ice take about 3,000 “pictures” a second — each one representing potential particle interactions. “The computing problem is we’re dealing with billions of pictures, and need to find a signal in a very small number of them buried in a mountain of data,” he says.
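That needle-in-a-haystack search is, at its core, a filtering problem: apply a cheap test to every event and keep the rare few that pass. The toy example below illustrates the shape of such a filter; the event fields, thresholds and rates are invented, and the real IceCube reconstruction and selection are far more elaborate.

```python
# Illustrative only: a toy filter in the spirit of the selection problem
# described above. Event fields, thresholds and rates are invented; the
# real IceCube reconstruction and cuts are much more sophisticated.
from dataclasses import dataclass
import random

@dataclass
class Event:
    total_charge: float   # summed light recorded by the sensors
    track_like: bool      # does the light pattern resemble a muon track?

def is_candidate(ev: Event) -> bool:
    # Keep only bright, track-like events; treat the rest as background.
    return ev.track_like and ev.total_charge > 300.0

random.seed(42)  # reproducible toy data
events = (Event(random.expovariate(1 / 50.0), random.random() < 0.3)
          for _ in range(1_000_000))
kept = sum(1 for ev in events if is_candidate(ev))
print(f"kept {kept} of 1,000,000 simulated events")
```

A cut like this keeps fewer than one event in a thousand, and because each event is tested independently, the work splits naturally into chunks that can run as separate jobs, exactly the kind of embarrassingly parallel workload HTC is built for.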
HTCondor and the resources provided by the OSG were especially useful in validating IceCube’s identification of the source of a single neutrino detected in September 2017. Merino says that after the discovery, the team was able to quickly pore through all of the IceCube data collected over its 10-year existence and find additional neutrinos originating from the same astronomical object, known as a blazar.
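The archival search boils down to a geometry question: which previously recorded neutrinos point back to the same patch of sky? A bare-bones version of that test, computing the great-circle separation between each event’s reconstructed direction and the blazar, might look like the sketch below. The event list is invented and the coordinates of the blazar TXS 0506+056 are approximate; the actual analysis also weighs energies, arrival times and angular uncertainties.

```python
# Toy version of the archival search described above: keep events whose
# reconstructed direction lies within a small window around the blazar.
# The event list and window size are invented; the coordinates of
# TXS 0506+056 (RA ~77.36 deg, Dec ~+5.69 deg) are approximate.
import math

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between two sky positions."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

BLAZAR_RA, BLAZAR_DEC = 77.36, 5.69  # TXS 0506+056 (approximate)

# Hypothetical reconstructed event directions: (right ascension, declination)
events = [(77.4, 5.7), (120.0, -30.0), (77.1, 6.0)]

nearby = [ev for ev in events
          if angular_separation(*ev, BLAZAR_RA, BLAZAR_DEC) < 1.0]
print(f"{len(nearby)} of {len(events)} events fall within 1 degree")
```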
Merino says he was happy to come across a quote from IceCube director Francis Halzen in The New York Times, in which Halzen marveled at the ability to run the whole IceCube operation from his laptop computer.
“That’s exactly the benefit of our high-throughput infrastructure,” Merino says. “It provides transparent access in the sense that Francis, or any one of our 300 collaborators, can access all those resources around the world from a single point. They don’t need to worry at all where the jobs end up running and can focus on the science.”