Numerical simulation is becoming an indispensable tool for the Oil & Gas industry. To simulate large numerical models, common practice is to rely on parallel computing. Here, we present key concepts for the development of numerical applications intended to be parallel from the start. The design and implementation of numerical simulators require many decisions, yet engineers rarely treat parallelization as a priority. This is a poor decision, and we present three factors that affect the implementation and performance of a numerical tool.

Memory usage: workstations have very large amounts of memory, so it is tempting to "splurge" on that resource. However, parallel machines have less memory per processor (e.g., BlueGene and GPUs). Reducing the memory footprint of an existing numerical tool in order to parallelize it is time-consuming and error prone. Conversely, code developed under such constraints can run with fine-grained parallelism when needed, yet still run on fatter nodes.

I/O: the numerical core is usually the focus of parallelization. Nonetheless, as applications scale to large processor counts, I/O becomes a bottleneck. An efficient alternative is to implement the I/O routines in HDF5. This has two main advantages: (1) it is parallel and relies on the underlying parallel I/O system (e.g., GPFS); and (2) it is an open standard. Adopting it at a later stage would introduce a different file format into the workflow, requiring converters that can themselves become a bottleneck.

Solvers: discretization of the governing equations leads to large systems that must be assembled and solved several times during the computation. This requires efficient iterative solvers. Interchangeable interfaces to standardized libraries add flexibility, which is fundamental for the ill-conditioned problems found in coupled fluid-flow and stress phenomena.
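The HDF5 strategy above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the file layout, the `pressure` field, and the function names are hypothetical, and it assumes the `h5py` and NumPy packages. The same code becomes collective parallel I/O when the file is opened with `driver="mpio"` and an MPI communicator (via `mpi4py`) against a parallel HDF5 build.

```python
# Minimal sketch of HDF5-based simulation I/O using h5py (assumed available).
# Layout, names, and the 'pressure' field are illustrative, not from the paper.
import numpy as np
import h5py

def write_snapshot(path, pressure, step):
    """Write one snapshot of a hypothetical pressure field to an HDF5 group."""
    # Parallel variant (assumes mpi4py and a parallel HDF5 build):
    #   h5py.File(path, "w", driver="mpio", comm=MPI.COMM_WORLD)
    with h5py.File(path, "w") as f:
        grp = f.create_group(f"step_{step}")
        grp.create_dataset("pressure", data=pressure)
        grp.attrs["time_step"] = step  # self-describing metadata

def read_snapshot(path, step):
    """Read a snapshot back; any HDF5-aware tool can read the same file."""
    with h5py.File(path, "r") as f:
        return f[f"step_{step}/pressure"][...]

field = np.linspace(0.0, 1.0, 8)
write_snapshot("snapshot.h5", field, 0)
assert np.allclose(read_snapshot("snapshot.h5", 0), field)
```

Because HDF5 is an open, self-describing format, downstream tools in the workflow can read these files directly, avoiding the format converters mentioned above.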
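The interchangeable solver interface described above can be sketched as a small abstraction layer. This is an illustrative design under stated assumptions, not the paper's code: the class names are invented, and in practice the backends would wrap standardized libraries (e.g., PETSc or Trilinos bindings) behind the same interface.

```python
# Sketch of an interchangeable linear-solver interface. All names are
# illustrative; real backends would wrap external solver libraries.
from abc import ABC, abstractmethod
import numpy as np

class LinearSolver(ABC):
    """Common interface: the simulator depends only on solve()."""
    @abstractmethod
    def solve(self, A, b, tol=1e-10, max_iter=1000):
        ...

class JacobiSolver(LinearSolver):
    """Simple stationary iteration; converges for diagonally dominant A."""
    def solve(self, A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        D = np.diag(A)
        R = A - np.diagflat(D)
        for _ in range(max_iter):
            x_new = (b - R @ x) / D
            if np.linalg.norm(x_new - x) < tol:
                return x_new
            x = x_new
        return x

class ConjugateGradientSolver(LinearSolver):
    """Krylov method for the symmetric positive-definite systems of flow problems."""
    def solve(self, A, b, tol=1e-10, max_iter=1000):
        x = np.zeros_like(b)
        r = b - A @ x          # initial residual
        p = r.copy()           # initial search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

# The backend becomes a configuration choice, not a code change:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
for solver in (JacobiSolver(), ConjugateGradientSolver()):
    x = solver.solve(A, b)
    assert np.allclose(A @ x, b, atol=1e-8)
```

Keeping the simulator coupled only to the interface is what allows swapping in a more robust preconditioned solver when an ill-conditioned coupled flow/stress system defeats the default choice.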
