Rather than continuing to increase clock speed, mainstream computer hardware has for the last few years advanced by increasing the number of cores. Although the use of parallel computers has been considered "normal" in many industries for over 30 years, for historical reasons parallel algorithms have never seen extensive use in the pipeline simulation industry. In this talk we discuss why that must change, how it has been changing, and some interesting paths the future may take.


Rather than continuing to increase clock speed, mainstream computer hardware has for the last few years advanced by increasing the number of cores. Commodity desktop computers are now available with dozens of general-purpose cores at under $200/core, and with a thousand or more specialized (GPGPU) cores at under a dollar per core. In-house data centers may offer even greater resources for burst-computing applications. Multicore architectures promise extraordinary possibilities for users and extraordinary challenges for developers. If 10, 100, or 1000 inexpensive cores can be effectively directed at a problem of interest, how might that change our whole outlook on what is possible? How about 10,000 processors? How about a million?

From a software vendor's perspective, parallel software is more expensive to develop than sequential software, and once it is developed, clients must be convinced to buy both the software and extra specialized hardware. A "conventional wisdom" grew up that potential clients would automatically turn down such software unless it could run on commodity hardware obtainable through a standard purchase order. Moore's Law kept doubling the speed of single-processor computers every couple of years, and algorithms kept improving, so many problems of interest to vendors and clients could be solved acceptably well without resorting to new methods. However, there have always been subcategories of important problems that were computationally bound. Optimization problems are a good example: there was always a struggle between what one would have liked to do and what could be done with available single-core hardware. It appears likely that the "conventional wisdom" was overstated in at least some instances, where software users might have been quite willing to invest in specialized hardware to solve high-value problems.
It is certainly true that other areas of the energy business were among the pioneers, and remain among the heaviest users, of massively parallel algorithms; geophysical and seismic companies are immediate examples. Regardless of the past, we reached a point a few years ago where computer manufacturers stopped trying to double clock speed every few years and instead began making parallel processing part of the "normal" approach. This shift was driven by a variety of factors, such as increasing problems with heat dissipation and a growing gap between processor improvements and comparatively sluggish improvements in memory speed. Thus were born the dual-core, the quad-core, and the hexa-core processors that have now become familiar. The old "conventional wisdom" has become moot: it is now virtually impossible to buy new hardware that isn't parallel.
