Credit: Brookhaven National Laboratory
In high-throughput computing, a workload made up of data from many STAR collisions is processed event by event, in sequence, to give physicists "reconstructed data," the product they need to fully analyze the collisions. High-performance computing breaks the workload into smaller chunks that can be run through separate CPUs to speed up the reconstruction. In this simple illustration, breaking a workload of 15 events into three chunks of five events processed in parallel yields the same product in one-third the time of the high-throughput method. Using 32 CPUs on a supercomputer like Cori can greatly reduce the time it takes to transform the raw data from a real STAR dataset, which contains many millions of events, into useful information physicists can analyze to make discoveries.
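To make the chunking idea concrete, here is a minimal Python sketch of event-level parallelism. It is purely illustrative: the `reconstruct` and `process_chunk` functions are hypothetical stand-ins, not part of STAR's actual reconstruction software, and the real workflow runs on far larger datasets and CPU counts.

```python
import multiprocessing

def reconstruct(event):
    # Hypothetical placeholder for the real reconstruction step,
    # which turns raw detector data for one collision event into
    # physics quantities the experimenters can analyze.
    return f"reconstructed event {event}"

def process_chunk(chunk):
    # Each worker handles its chunk of events sequentially,
    # just as a single high-throughput job would.
    return [reconstruct(event) for event in chunk]

if __name__ == "__main__":
    events = list(range(15))   # a toy workload of 15 events
    n_workers = 3              # three chunks of five events each
    chunk_size = len(events) // n_workers
    chunks = [events[i:i + chunk_size]
              for i in range(0, len(events), chunk_size)]

    # Run the three chunks in parallel; because the events are
    # independent, the wall-clock time is roughly one-third of
    # processing all 15 events in sequence.
    with multiprocessing.Pool(n_workers) as pool:
        results = pool.map(process_chunk, chunks)

    # Flatten the per-chunk results back into one reconstructed dataset.
    reconstructed = [r for chunk in results for r in chunk]
    print(reconstructed)
```

The key point the sketch captures is that collision events are independent of one another, so the dataset can be split and reassembled freely, which is exactly what makes this workload a good fit for a supercomputer's many CPUs.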