Summary

Seismic processing, and specifically imaging, has always been compute-intensive. Our choice of imaging algorithms is highly influenced by computer architecture. Stacked memory, cloud computing, and arrays of ARM processors are the next wave in computer architecture. These new technologies are likely to change not only our algorithmic choices but also how we approach seismic processing.

Introduction

Computers have played an essential and ever-growing role in seismic processing since the 1950s (Roth, 2004). In the last twenty-five years we've seen the shift from mainframes to massively parallel machines (Biondi and Moorhead, 1992), to shared-memory systems (Smith, 1997), to clusters (Mosher et al., 1996), and to GPUs (Liu et al., 2009; Micikevicius, 2009). There have also been experiments along the way with FPGAs (Nemeth et al., 2008; Fu et al., 2009), Xeon Phis (Zhebel et al., 2013), and even gaming devices (Bednar and Bednar, 2007). While we've been successful in taking advantage of ever-increasing compute power in imaging, other areas of seismic processing have not been as successful in adapting to the changing architecture. In this paper I show how our choice of seismic imaging algorithms has always been strongly influenced by computer architecture. I will then discuss some coming trends in computer architecture and their implications for seismic imaging and processing.

Computer Architectures and Seismic Imaging Algorithms

At the high end of the imaging world, the last twenty years have seen a transition from Kirchhoff pre-stack depth migration to downward-continuation-based methods and now to Reverse Time Migration (RTM). Over this period our choice of imaging algorithm has been strongly influenced by changes in computer architecture.

Kirchhoff Pre-stack Depth Migration

Computer architecture has gone through several revolutions since the 1980s days of mainframes and vector processors. Moore's Law (Moore, 1965) is about transistor density, not computational speed. Computer engineers have continually faced the challenge of making each chip generation capable of more floating-point operations per second (FLOPS) than its predecessor. Increasing the clock speed with every generation was a partial solution (for a time), but the divergence from a simple von Neumann machine grew quickly. Some of the architectural changes can be ascribed to the fundamental problem that increases in memory speed could not keep up with increases in CPU speed (McCalpin, 1995). To counteract this problem, caches, introduced in the 1980s, became standard, with multiple levels appearing in the 1990s. Cache lines, in which multiple sequential bytes are transferred between memory levels, also began to grow during this period. To achieve optimal performance, maximizing spatial and temporal locality became essential.
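To make the locality idea concrete, the sketch below (not from the original paper; the array size and loop bounds are illustrative assumptions) sums the same 2D array twice: once in row-major order, where the inner loop walks contiguous memory and each fetched cache line is fully used, and once in column-major order, where the inner loop strides across cache lines and wastes most of each transfer.

```c
/* Minimal sketch of spatial locality: the same 2D sum traversed in
   row-major vs. column-major order. On a machine with 64-byte cache
   lines, the row-major loop uses every float in each line it fetches;
   the column-major loop touches a new line on nearly every access. */
#include <stdio.h>
#include <stdlib.h>

#define NX 4096  /* illustrative grid size, not from the paper */
#define NY 4096

/* Row-major traversal: inner loop walks contiguous memory. */
double sum_row_major(const float *a) {
    double s = 0.0;
    for (int i = 0; i < NX; i++)
        for (int j = 0; j < NY; j++)
            s += a[(size_t)i * NY + j];
    return s;
}

/* Column-major traversal of the same row-major array: the inner loop
   strides by NY floats, defeating the cache. */
double sum_col_major(const float *a) {
    double s = 0.0;
    for (int j = 0; j < NY; j++)
        for (int i = 0; i < NX; i++)
            s += a[(size_t)i * NY + j];
    return s;
}

int main(void) {
    float *a = malloc((size_t)NX * NY * sizeof *a);
    if (!a) return 1;
    for (size_t k = 0; k < (size_t)NX * NY; k++)
        a[k] = 1.0f;
    /* Both loops compute the identical result; only the memory access
       pattern, and hence the cache behavior, differs. */
    printf("row-major sum: %.0f\n", sum_row_major(a));
    printf("col-major sum: %.0f\n", sum_col_major(a));
    free(a);
    return 0;
}
```

On typical hardware the row-major loop runs several times faster even though both perform exactly the same arithmetic; the same consideration governs how an imaging kernel orders its loops over traces, travel-time tables, and image points.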
