Many of the petroleum industry's most widely used uncertainty quantification and optimization workflows, such as experimental design, require significant computational resources (hundreds, if not thousands, of model evaluations) at a scale not currently available within many oil companies. This study demonstrates the efficiency of massively distributed computing for reservoir simulation, uncertainty quantification, and optimization on a commercially available compute cloud.

Commercial compute clouds offer several benefits: 1) virtually unlimited scalability: at a moment's notice, thousands of virtual servers can be launched to meet computational workload; 2) on-demand pricing: one pays only for what is used; and 3) a service-oriented architecture: the cloud is maintained and managed by the service provider, potentially reducing maintenance costs. The end result is that cloud solutions can be more economically viable than provisioning and maintaining private clouds.

In this study, we launched a 3000 ECU Linux cluster on the Amazon EC2™ web service (“EC2”) using MIT StarCluster™ software. The 3000 ECU cluster was built from 150 Red Hat Linux c1.xlarge instances, each with 8 vCPUs (virtual cores), enabling 1200 simultaneous distributed reservoir simulations. An Amazon Machine Image (AMI) with a reservoir simulator installed was created and used to launch the 150 c1.xlarge instances, so that the simulator could run on each of them. In-house optimization software residing on a workstation within Chevron's intranet was connected to a head node on EC2 via a firewall pinhole; the head node was in turn connected to a master node and the 150 worker nodes via the StarCluster software.
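A cluster of this shape could be described to StarCluster with a configuration template along the following lines. This is a sketch only: the section names, key pair, and AMI id are placeholders, not the ones used in the study, although the setting names (KEYNAME, CLUSTER_SIZE, NODE_IMAGE_ID, NODE_INSTANCE_TYPE) are standard StarCluster configuration keys.

```ini
; fragment of ~/.starcluster/config (illustrative values)
[cluster sim-cluster]
KEYNAME = my-keypair              ; placeholder EC2 key pair name
CLUSTER_SIZE = 151                ; one master plus 150 worker nodes
NODE_IMAGE_ID = ami-xxxxxxxx      ; placeholder: custom AMI with the simulator installed
NODE_INSTANCE_TYPE = c1.xlarge    ; 8 vCPUs per instance
```

With such a template in place, `starcluster start sim-cluster` would launch the cluster, and `starcluster terminate sim-cluster` would shut it down when the runs complete.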

The EC2 cluster was then used for ensemble-based optimization of a realistic field model, maximizing NPV by controlling well bottomhole pressures (BHPs). Because a large cluster permits a large ensemble, the ensemble gradient can be computed much more accurately. Using this large distributed compute capacity, an order-of-magnitude speedup in the total time required for optimization was achieved relative to our internal clusters, where the number of cores available to a given user for distributed computing is limited.
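The ensemble gradient referred to above can be illustrated with a minimal sketch. The function and variable names below are hypothetical, and a toy quadratic stands in for the reservoir simulator; the idea is that each ensemble member corresponds to one distributed simulation run, and the gradient is estimated from the cross-covariance between perturbed controls and the resulting objective values, so a larger ensemble (enabled by a larger cluster) means less sampling noise in the estimate.

```python
import numpy as np

def ensemble_gradient(objective, u_mean, n_ensemble=1200, sigma=0.05, seed=0):
    """EnOpt-style gradient estimate of `objective` at controls `u_mean`.

    Each ensemble member is a perturbed control vector; in the study each
    member would be one distributed reservoir simulation on the cluster.
    """
    rng = np.random.default_rng(seed)
    # perturb the controls around the current mean
    U = u_mean + sigma * rng.standard_normal((n_ensemble, u_mean.size))
    # evaluate the objective (NPV) for every member; embarrassingly parallel
    J = np.array([objective(u) for u in U])
    dU = U - U.mean(axis=0)   # control anomalies
    dJ = J - J.mean()         # objective anomalies
    # cross-covariance estimate, scaled by the perturbation variance;
    # sampling error shrinks as the ensemble grows
    return dU.T @ dJ / (n_ensemble - 1) / sigma**2

# toy quadratic in place of the simulator; true gradient at 0 is 2 per component
grad = ensemble_gradient(lambda u: -np.sum((u - 1.0) ** 2), np.zeros(3))
```

The estimate converges toward the true gradient as `n_ensemble` grows, which is why a 1200-way cluster improves gradient quality, not just wall-clock time.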
