
#### Geologic Modeling

Search Results for "The problem of nonstationarity"




1-20 of 85 Search Results for

#### The problem of nonstationarity


Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1990 SEG Annual Meeting, September 23–27, 1990

Paper Number: SEG-1990-1683

... **nonstationarity** in the direct arrival or primary part of the downgoing wavefield. Absorption and time-varying phase shift (possibly the result of normal dispersion) are discussed as possible explanations of the observed **nonstationarity**. These observations are made in the upper 0.9 seconds of two-way travel time ...
Proceedings Papers

Publisher: Society of Petroleum Engineers (SPE)

Paper presented at the SPE Annual Technical Conference and Exhibition, October 6–9, 1996

Paper Number: SPE-36569-MS

... the simplified simulation with the true well-test permeability. The **problem** of **nonstationarity** between the well-test regions is addressed. The results obtained here show that there is a reduction of reservoir uncertainty when well-test information is added. Introduction Most conventional...
Abstract

Reservoir Modeling Constrained to Multiple Well-Test Permeabilities

F.P. Campozana, SPE, UT; Larry W. Lake, SPE, UT; K. Sepehrnoori, SPE, UT

Abstract Well-test permeability derived from pressure transient analysis is important information about the interwell permeability distribution of a reservoir. It is available for many active wells in a reservoir. Nevertheless, no conventional geostatistical technique is able to incorporate this information into a reservoir model. This paper describes an integrated procedure to incorporate well-test permeabilities obtained from several wells into a geostatistical model of a reservoir permeability distribution. Our procedure is based on a simulated annealing algorithm coupled with a steady-state, single-phase flow simulator that solves an inverse problem. A correlation is developed to relate the effective permeability of each well-test region obtained from the simplified simulation with the true well-test permeability. The problem of nonstationarity between the well-test regions is addressed. The results obtained here show that there is a reduction of reservoir uncertainty when well-test information is added. Introduction Most conventional geostatistically generated models use static information only; however, reservoir models generated this way may not match dynamic or field data. History matching is then required. History matching consists of submitting the input parameters of a reservoir model to sometimes arbitrary changes. A flow simulator is then used to observe how these changes impact the production of fluids and the pressure behavior. The model is altered until a satisfactory match between simulated and observed production data is obtained. History matching is cumbersome and time-consuming; it usually does not allow the use of many images of the same reservoir; that is, stochastic modeling is limited. Therefore, it becomes difficult to quantify reservoir uncertainty and its impact on a production forecast.
There are basically three main reasons why well-test data should be included when modeling a reservoir: they are available for many active wells, they are an actual response of the reservoir to pressure changes, and their support volumes cover large areas around wells. The purpose of this paper is to extend the previous works by (1) considering the entire reservoir with multiple well tests, (2) allowing for different statistics in each well-test region, and (3) using a new simulated annealing (SA)-based algorithm coupled with a simple simulator. Background The effective permeability obtained from classical pressure transient analysis represents an average of the small-scale permeability values within the volume affected by the well test. To consider the inverse problem of obtaining a refined grid of a permeability distribution from the knowledge of its average, it is necessary to define (1) which blocks participate in the average and (2) what type of averaging is relevant. Van Poolen proposed the following equation to estimate the radius of investigation of a pressure transient well test: (1) Equation (1) applies to a homogeneous reservoir. However, because of heterogeneity, the actual shape of the drainage area will be irregular, not circular as assumed by Eq. (1). If anisotropy is present, the shape of the drainage area is better approximated by an ellipse. In this case, the use of Eq. (1) to determine the small-scale permeability blocks that should participate in the averaging leads to including some blocks that are not participating in the flow and leaving out others that are. The second problem is to determine the proper averaging to be used. A general procedure commonly used is power averaging.
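The two ingredients this background section names, a radius-of-investigation estimate and a power average over the blocks inside it, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's procedure: since Eq. (1) is not reproduced above, the radius formula is assumed to be the standard field-unit form of Van Poolen's equation, and `power_average` / `radius_of_investigation` are hypothetical helper names.

```python
import numpy as np

def power_average(k, omega):
    """Power average of block permeabilities: omega = 1 is arithmetic,
    omega = -1 is harmonic, omega -> 0 approaches the geometric mean."""
    k = np.asarray(k, dtype=float)
    if abs(omega) < 1e-12:
        return float(np.exp(np.log(k).mean()))  # geometric-mean limit
    return float(np.mean(k ** omega) ** (1.0 / omega))

def radius_of_investigation(k_md, t_hours, phi, mu_cp, ct_per_psi):
    """Radius of investigation (ft), assumed field-unit form:
    r = sqrt(k * t / (948 * phi * mu * ct))."""
    return float(np.sqrt(k_md * t_hours / (948.0 * phi * mu_cp * ct_per_psi)))

k_blocks = [50.0, 200.0, 120.0, 80.0]          # block permeabilities, md
k_arith = power_average(k_blocks, 1.0)          # 112.5 md
k_harm = power_average(k_blocks, -1.0)          # lower bound for series flow
r_inv = radius_of_investigation(100.0, 24.0, 0.2, 1.0, 1e-5)
```

Omega = 1 and omega = -1 bracket the possible effective permeabilities; the exponent appropriate for a given flow geometry lies between these bounds.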

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2010 SEG Annual Meeting, October 17–22, 2010

Paper Number: SEG-2010-3662

... created by Raymond Abma (personal communication) shows a simple curved event (Figure 2a), including both **nonstationarity** and aliasing. In general, existing methods are able to solve only one of the two **problems**. Figure 2b shows the interpolated result using Claerbout's stationary t-x PEF. Note that...
Abstract

SUMMARY Seismic data are often inadequately sampled along spatial axes. Spatially aliased data can produce imaging results with artifacts. We present a new adaptive prediction-error filter (PEF) approach based on regularized nonstationary autoregression, which aims at interpolating aliased seismic data. Instead of using patching, a popular method for handling nonstationarity, we obtain smoothly nonstationary PEF coefficients by solving a regularized least-squares problem. Shaping regularization is used to control the smoothness of adaptive PEFs. Finding the interpolated traces can be treated as another linear least-squares problem that solves for data values rather than filter coefficients. Using benchmark synthetic and real data examples, we successfully apply this method to the problem of seismic trace interpolation. INTRODUCTION The spatial sampling interval is an important factor that controls seismic resolution. Too large a spatial sampling interval leads to aliasing problems, which can adversely affect migration and result in poor lateral resolution of subsurface images. An alternative to expensive dense spatial sampling is interpolation of seismic traces (Spitz, 1991). One important approach to trace interpolation is prediction interpolating methods, mainly an extension of Spitz’s original method, which uses low-frequency non-aliased data to extract antialiasing prediction filters and then interpolates high frequencies beyond aliasing. Claerbout (1992) treated Spitz’s method as a prediction-error filter in the original t-x domain. Porsani (1999) proposed a half-step prediction-filter scheme that makes the interpolation process more efficient. Wang (2002) extended f-x trace interpolation to higher dimensions, the f-x-y domain. Gulunay (2003) introduced an algorithm similar to f-x prediction filtering, which has an elegant representation in the f-k domain.
Naghizadeh and Sacchi (2009) proposed an adaptive f-x interpolation using exponentially weighted recursive least squares. Most recently, Naghizadeh and Sacchi (2010) used a prediction approach similar to Spitz’s method, except that the curvelet transform is involved instead of the Fourier transform. Seismic data are nonstationary, but a standard PEF can only be used to interpolate stationary data (Claerbout, 1992). Patching is a common method to handle nonstationarity (Claerbout, 2010), although it occasionally fails because of its assumption of piecewise constant dips. Crawley et al. (1999) proposed smoothly nonstationary PEFs with “micropatches” and radial smoothing, which typically produce better results than the rectangular patching approach. Fomel (2002) developed a plane-wave destruction (PWD) filter (Claerbout, 1992) as an alternative to the t-x PEF and applied the PWD operator to nonstationary trace interpolation. However, the PWD method depends on the assumption of a small number of smoothly variable seismic dips. In this paper, we use the two-step strategy, similar to that of Claerbout (1992) and Crawley et al. (1999), but calculate the adaptive PEF by using regularized nonstationary autoregression (Fomel, 2009) to deal with both nonstationarity and aliasing. Shaping regularization (Fomel, 2007) controls the locally smooth interpolation. We test the new method by using several benchmark synthetic examples. Results of applying the proposed method to a field example demonstrate that regularized adaptive PEF can be effective in trace interpolation problems, even in the presence of multiple variable dips.
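As a concrete reference point for the stationary baseline that the adaptive method generalizes, the sketch below estimates a short least-squares prediction filter from a single trace; a pure sinusoid is annihilated exactly by a two-coefficient PEF. This is only the textbook stationary case of Claerbout's t-x PEF in one dimension, not the paper's regularized nonstationary autoregression; `estimate_pef` is a hypothetical helper name.

```python
import numpy as np

def estimate_pef(x, order):
    """Least-squares prediction coefficients a with x[n] ~ sum_k a[k] * x[n-1-k].
    The prediction-error filter is then (1, -a[0], ..., -a[order-1])."""
    rows = [x[n - 1 - np.arange(order)] for n in range(order, len(x))]
    A = np.vstack(rows)
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

n = np.arange(200)
x = np.sin(0.3 * n)                    # a stationary, perfectly predictable trace
a = estimate_pef(x, 2)                 # theory: a = [2*cos(0.3), -1]
residual = x[2:] - np.vstack([x[1:-1], x[:-2]]).T @ a
```

An aliased or curved event violates the single-filter assumption underlying this fit, which is exactly the failure mode that smoothly varying, adaptive coefficients are designed to handle.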

Journal Articles

Journal:
SPE Journal

Publisher: Society of Petroleum Engineers (SPE)

*SPE Journal* 9 (04): 429–436.

Paper Number: SPE-84594-PA

Published: 01 December 2004

... Paris center of geostatistics. By analyzing the limitations and the potential of the truncated Gaussian method, Galli et al. 5 found a way to apply this method to a 3D **problem** with vertical **nonstationarity** in the proportions of lithofacies. They showed that this method preserved the consistency of the...
Abstract

Summary The truncated pluri-Gaussian method for modeling geologic facies is appealing not only for the wide variety of textures and shapes that can be generated, but also because of the internal consistency of the stochastic model. This method has not, however, been widely applied in simulating distributions of reservoir facies or in automatic history matching. One reason seems to be that it is fairly difficult to estimate the parameters of the stochastic model that could be used to generate geological facies maps with the desired properties. The second is that because "facies type" is a discrete variable, it is not straightforward to apply the efficient gradient-based minimization method to generate reservoir facies models that honor production data. Nongradient methods, however, are too slow for large field-scale problems. In this paper, the nondifferentiable history-matching problem was replaced with a differentiable problem so that an automatic history-matching technique could be applied to the problem of conditional simulation of facies boundaries generated from the truncated pluri-Gaussian method. The resulting realizations are consistent with both the geostatistical model of the observed facies and the historic production. Application of the method requires efficient computation of the gradient of the objective function with respect to model variables. We present an example five-spot water-injection problem with more than 73,000 model variables conditioned to pressure data at wells. The gradient was computed using the adjoint method, and the minimization routine used a quasi-Newton algorithm. The objective function decreased more than 98% in 13 iterations. Introduction Researchers have been building tools for history matching of permeability and porosity distributions to honor production data for several years.
The assumption is almost always made that the rock properties are distributed randomly and that the randomness can be adequately described by the mean and the spatial covariance of the property fields. If there is more than one type of rock, region, or facies, the assumption is usually made that the boundaries of these regions are known. Bi et al. 1 and Zhang et al. 2 relaxed this restriction by allowing the boundaries of a 3D channel to be adjusted interactively during the history-matching process. While the method worked quite well for a single channel in a background low-permeability facies, it became apparent that the extension to a reservoir with large numbers of channels would be impractical. As a result, we consider the truncated pluri-Gaussian model for the description of facies boundaries. The truncated pluri-Gaussian is attractive for modeling facies for several reasons. The model is capable of generating a wide variety of facies shapes and neighbor relations. The model is based on Gaussian random fields, which are well suited to the current history-matching framework. The truncation, or threshold map, can be described by relatively few parameters. In this paper, we describe progress on two aspects of the history-matching problem. The first problem has to do with the specification of a prior geostatistical model, the purpose of which is to ensure plausibility of realizations. 3,4 This is considerably more complex for the truncated pluri-Gaussian model than for many other geostatistical models because it is necessary to specify at least two covariance models (types, ranges, variances, and orientations), as well as the threshold parameters for the truncation. The second problem is adjustment of the facies boundaries for a fixed set of geostatistical model parameters. This requires efficient minimization of an objective function that is not differentiable.
Background Major improvements in the application of the truncated Gaussian method for lithofacies simulations based on indicators were developed mostly by scholars at the Ecole des Mines de Paris center of geostatistics. By analyzing the limitations and the potential of the truncated Gaussian method, Galli et al. 5 found a way to apply this method to a 3D problem with vertical nonstationarity in the proportions of lithofacies. They showed that this method preserved the consistency of the indicator variograms and cross variograms. The major achievement of this paper is the introduction of the truncated pluri-Gaussian method, which allowed more complex neighbor relations than the standard truncated Gaussian model. In the same period, Le Loc'h et al. 6 showed the flexibility of the truncated pluri-Gaussian method by truncating two Gaussian functions. They pointed out that even if the two underlying Gaussian functions are uncorrelated, the resulting facies sets obtained by truncation are not independent. The correlation depends on the construction of thresholds of lithotypes. Using uncorrelated Gaussian functions, they found that complex theoretical indicator variograms can be produced by combining various anisotropies through the choice of different Gaussian functions. They suggested that the truncation rule applied to the Gaussian functions should be as simple as possible to allow easier control over the problem. Later, Le Loc'h and Galli 7 presented insight into implementing the algorithm both for practical structural analysis and conditional simulations. In demonstrating the influence of the thresholds chosen for truncation, the partition of facies was accomplished using rectangles. But even with this relatively simple thresholding method, it is not at all straightforward to choose appropriate thresholds. The difficulty in estimating model parameters that will result in the desired facies distributions has restricted the practical application of this method.
An example of truncated pluri-Gaussian simulation conditional to facies data at well locations was presented with a very slow convergence. This problem was attributed to the instability of the Gaussian covariance matrix. Lantuejoul 8 discusses the problem of conditioning truncated pluri-Gaussian models to facies observations extensively. Assuming known threshold parameters, the truncated pluri-Gaussian simulation scheme was able to simulate the Gaussian random fields to match given lithofacies observations. As the simulation problem was small, the Markov chain Monte Carlo sampling method was applied to evolve Gaussian random fields. While, once again, the great potential of the truncated pluri-Gaussian method in simulating lithofacies distribution was revealed, two major problems were left unsolved and seem to be limiting the application of this method. First is the difficulty in estimation of geostatistical parameters [i.e., quantities such as the range, the variance, and the covariance type (Gaussian, exponential, spherical, etc.)] and the thresholds for discrimination of facies. Second, the application of the truncated pluri-Gaussian method in practical conditional simulation problems requires more efficient sampling methods to deal with reservoir history-matching problems.
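The truncation idea itself is compact enough to sketch: two independent smooth Gaussian fields are cut by rectangular thresholds to produce a facies map. The code below is a toy illustration of that mechanism only; spectral smoothing stands in for a proper covariance model, and the grid size, thresholds, and function names are assumptions. It does not reproduce the conditional-simulation or history-matching machinery of the paper.

```python
import numpy as np

def gaussian_field(shape, sigma, rng):
    """Smooth pseudo-Gaussian field: white noise low-passed in the Fourier
    domain, then standardized to zero mean and unit variance."""
    noise = rng.standard_normal(shape)
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    transfer = np.exp(-2.0 * (np.pi * sigma) ** 2 * (kx**2 + ky**2))
    field = np.fft.ifft2(np.fft.fft2(noise) * transfer).real
    return (field - field.mean()) / field.std()

def truncate_pluri(z1, z2, t1=0.0, t2=0.0):
    """Rectangular truncation rule: facies 0 where z1 < t1, else facies 1
    where z2 < t2, else facies 2."""
    facies = np.full(z1.shape, 2, dtype=int)
    facies[z2 < t2] = 1
    facies[z1 < t1] = 0
    return facies

rng = np.random.default_rng(0)
z1 = gaussian_field((64, 64), 4.0, rng)
z2 = gaussian_field((64, 64), 4.0, rng)
facies = truncate_pluri(z1, z2)
```

Because both underlying fields are continuous and correlated in space, the facies contacts inherit that spatial structure; using differently structured or correlated fields for z1 and z2 is what produces the richer neighbor relations the method is known for.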

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2015 SEG Annual Meeting, October 18–23, 2015

Paper Number: SEG-2015-5916027

... pre-whitening factor is so commonly used that people usually fail to investigate its hazards, which can smear useful reflectivity information and degrade the ultimate deconvolution result. In this paper, we treat the process as an inversion **problem** and employ the strategy of shaping regularization to...
Abstract

Summary Gabor deconvolution is acknowledged as an effective nonstationary deconvolution method which can simultaneously remove the source signature and attenuation effects. Under the constant-Q theory, the attenuation contours track along hyperbolic trajectories in the time-frequency plane, and smoothing along these trajectories leads to more reasonable estimation of attenuation, source wavelet, and reflectivity. Furthermore, a division operation is required to form the deconvolution operator, and pre-whitening factors are always used to enhance the division stability. However, the pre-whitening factor is so commonly used that people usually fail to investigate its hazards, which can smear useful reflectivity information and degrade the ultimate deconvolution result. In this paper, we treat the process as an inversion problem and employ the strategy of shaping regularization to stabilize it. Moreover, the results achieved from model data and field data demonstrate the effectiveness and robustness of the improved method in precise reflectivity estimation and reasonable compensation. Introduction High-resolution seismic data have long held a pivotal position in reservoir characterization, and this trend is expected to continue. However, earth filtering dissipates energy, especially in the high-frequency components, and distorts phase, which is widely regarded as an important limit on final resolution, especially for deep reservoirs. In industry, inverse Q filtering and deconvolution are regarded as two significant methods for boosting seismic resolution. In 1998, Margrave presented a new nonstationary convolution model which takes the influence of attenuation into consideration.
In recent years, geophysicists have attempted to improve the performance of nonstationary deconvolution (or Gabor deconvolution) from several aspects, such as suitable window functions (Grossman et al., 2002) and reliable smoothing methods (Margrave et al., 2011; Wu and Sun, 2011; Sun et al., 2012; Chen et al., 2012). Among these components, hyperbolic smoothing has emerged as the preferred smoothing method for Gabor deconvolution compared with alternatives such as the boxcar, cross, and regularized smoothing methods.

Journal Articles

*International Journal of Offshore and Polar Engineering* 26 (02): 88–99.

Paper Number: ISOPE-16-26-2-088

Published: 01 June 2016

... wave short-crestedness, shielding effects from the HLV, radiation damping from the MP, and the **nonstationarity** of the process. The influence of each factor on the allowable sea states and operability is assessed. A large number of time-domain simulations are performed, considering random waves, to...
Abstract

Offshore installation operations require careful planning in the design phase to minimize associated risks. This study addresses numerical modeling and time-domain simulations of the lowering operation during installation of a monopile (MP) for an offshore wind turbine (OWT) using a heavy lift vessel (HLV). The purpose is to apply different numerical approaches to obtain the allowable sea states and to assess the operability. Four critical factors regarding the numerical modeling approaches for the coupled HLV-MP lowering process are studied. Those factors include wave short-crestedness, shielding effects from the HLV, radiation damping from the MP, and the nonstationarity of the process. The influence of each factor on the allowable sea states and operability is assessed. A large number of time-domain simulations are performed, considering random waves, to derive the allowable sea states. The results indicate that, although the radiation damping from the MP is secondary, it is essential to consider the other features. The study can be used as a reference for the numerical modeling of relevant offshore operations. Introduction Installation of offshore wind turbine (OWT) components is more challenging than that of land-based wind turbines. It was estimated that the installation and assembly of OWTs make up 20% of the capital costs, compared with approximately 6% for land-based wind turbines (Moné et al., 2015). Because of the low profit margin of the offshore wind industry, it is essential to reduce the installation costs by improving the methodology during the design and planning phases. Because of their structural simplicity and low manufacturing expenses, monopiles (MPs) are the most preferable bottom-fixed foundations for OWTs in shallow water (EWEA, 2014). The installation of an MP consists of several steps. After arriving at the offshore site, an MP is upended to a vertical position, then lowered through the wave zone so that it stands vertically on the seabed.
A hydraulic hammer is used to drive it into the seabed to a predetermined depth. Although MPs are easy to install compared to other foundations, the installations have been carried out with various levels of success because the challenges have not been taken seriously enough (Thomsen, 2011). Therefore, it is of great importance to evaluate and improve the allowable sea states by considering each activity during the operation. More importantly, the allowable sea states for a single operation would affect the installation efficiency of the entire wind farm. For this reason, accurate numerical models are required. There are generally two types of vessels for installation of MPs: the jack-ups and the floating crane vessels. A jack-up vessel provides a stable working platform for the lifting and piling operations. However, the installation and retrieval of the legs of the jack-ups are time-consuming and weather-sensitive. Compared to jack-ups, floating vessels have more flexibility for offshore operations and are effective in mass installations of wind farms because of fast transit between foundations. Floating vessels have been used to install MPs for several large offshore wind farms, e.g., Sheringham Shoal and Greater Gabbard wind farms. Hence, the potential of reducing installation costs by using floating installation vessels is huge.

Journal Articles

Journal:
SPE Journal

Publisher: Society of Petroleum Engineers (SPE)

*SPE Journal* 16 (02): 318–330.

Paper Number: SPE-118916-PA

Published: 11 November 2010

... different facies are given in Table 2. The performance of our proposed approach for simultaneously updating the facies distribution and the petrophysical properties was evaluated using two different test **problems** designed with different features of **nonstationarity**. TABLE 2 MEAN AND STANDARD...
Abstract

Summary The ensemble Kalman filter (EnKF) is a sequential data-assimilation technique that has been shown to work quite well in obtaining conditional facies models from assimilating production data. Because the problem of history matching geological facies is quite complex, most efforts at solving this problem typically assume that facies properties are constant and spatially homogeneous. In this paper, we propose a method for updating both the categorical facies variables and the spatially heterogeneous and nonuniform properties of the facies in a consistent manner within the EnKF framework. Tests of our proposed approach on two representative examples with different features of nonstationarity resulted in satisfactory history-match solutions and geologically consistent estimates of the nonuniform and heterogeneous petrophysical properties.
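For readers unfamiliar with the underlying machinery, the analysis step of a stochastic EnKF can be written in a few lines of linear algebra. The sketch below updates a small Gaussian ensemble toward one noisy observation; it illustrates the generic filter only, under assumed dimensions and hypothetical names, not the paper's facies-and-petrophysics parameterization.

```python
import numpy as np

def enkf_update(X, y_obs, H, r_std, rng):
    """Stochastic EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y_obs: (n_obs,) observation;
    H: (n_obs, n_state) linear observation operator; r_std: obs-error std."""
    n_ens = X.shape[1]
    Y = H @ X                                   # predicted observations
    Xa = X - X.mean(axis=1, keepdims=True)      # state anomalies
    Ya = Y - Y.mean(axis=1, keepdims=True)      # observation anomalies
    C_xy = Xa @ Ya.T / (n_ens - 1)
    C_yy = Ya @ Ya.T / (n_ens - 1) + r_std**2 * np.eye(Y.shape[0])
    K = C_xy @ np.linalg.inv(C_yy)              # ensemble Kalman gain
    perturbed = y_obs[:, None] + r_std * rng.standard_normal(Y.shape)
    return X + K @ (perturbed - Y)

rng = np.random.default_rng(1)
X_prior = rng.standard_normal((2, 500))         # prior ensemble ~ N(0, I)
H = np.array([[1.0, 0.0]])                      # observe the first component only
y_obs = np.array([2.0])
X_post = enkf_update(X_prior, y_obs, H, 0.5, rng)
```

The observed component's posterior mean moves most of the way toward the observation while its spread shrinks; the unobserved component is updated only through the sampled cross-covariance, which is how the filter propagates data influence to unmeasured variables.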

Journal Articles

Journal:
SPE Journal

Publisher: Society of Petroleum Engineers (SPE)

*SPE Journal* 18 (06): 1033–1042.

Paper Number: SPE-163147-PA

Published: 06 May 2013

... This paper recalls and further demonstrates that facies proportions can be modeled by local beta distributions. However, the highly variable shapes of the conditional probability-density functions (PDFs) for the random variables in the field lead to complex **nonstationarity** and nonlinearity issues. A...
Abstract

Summary Conditional beta distributions are proposed with examples to evaluate the probability of intercepting specific proportions of target rocks in well planning. Geological facies or rock-type proportions are random variables p_k(x) at each location x. This paper recalls and further demonstrates that facies proportions can be modeled by local beta distributions. However, the highly variable shapes of the conditional probability-density functions (PDFs) for the random variables in the field lead to complex nonstationarity and nonlinearity issues. A practical and robust approach is to transform the proportion random variables to Gaussian variables, thus enabling the use of classical geostatistics. Although a direct relationship between Gaussian and beta random variables appears intractable, a suitable transformation that involves second-order expectations of proportions is proposed. The conditional parameters of the beta variables are recovered from kriging estimates after back transformation to proportions through Riemann sums.
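The "transform to Gaussian variables" step is, in its simplest empirical form, a rank-based normal-score transform (Gaussian anamorphosis). The sketch below shows that baseline version on beta-distributed proportions; the paper's actual transform, which matches second-order expectations and back-transforms kriging estimates through Riemann sums, is more involved and is not reproduced here. `normal_score` is a hypothetical name, and scipy is assumed available.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_score(p):
    """Empirical normal-score transform: ranks -> open-interval uniform scores
    -> standard-normal quantiles. Rank-preserving for distinct values."""
    u = rankdata(p) / (len(p) + 1.0)    # strictly inside (0, 1)
    return norm.ppf(u)

rng = np.random.default_rng(2)
p = rng.beta(2.0, 5.0, size=1000)       # skewed, bounded facies proportions
z = normal_score(p)                     # approximately standard normal
```

Classical kriging is then carried out on z, and estimates are mapped back to proportions through the empirical quantile relationship.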

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2016 SEG International Exposition and Annual Meeting, October 16–21, 2016

Paper Number: SEG-2016-13873107

... optimum prediction parameters are difficult to determine — in fact in some cases no stationary value of the search parameter can optimally predict all multiples without introducing damaging artifacts. A re-formulation and implementation in the time domain permits time-**nonstationarity** to be enforced in...
Abstract

ABSTRACT Practical internal multiple prediction and removal is a high-priority area of seismic processing technology that has special significance for unconventional plays, where data are complex and sophisticated quantitative interpretation methods are apt to be applied. When the medium is unknown and/or complex, and move-out based discrimination is not possible, inverse scattering based prediction is the method of choice, but challenges remain for its application in certain environments. For instance, when generators are distributed up-shallow and within and below zones of interest, optimum prediction parameters are difficult to determine — in fact in some cases no stationary value of the search parameter can optimally predict all multiples without introducing damaging artifacts. A re-formulation and implementation in the time domain permits time-nonstationarity to be enforced in the search parameter, after which a range of possible data-driven and geology-driven criteria for selecting a search-parameter schedule can be analyzed. 1D and 1.5D versions of the time-nonstationary algorithm are easily derived and can be shown to add a new element of precision to prediction. Merging of these ideas with multidimensional plane-wave domain versions of the algorithm will provide 2D/3D extensions. Presentation Date: Monday, October 17, 2016 Start Time: 4:35:00 PM Location: 142 Presentation Type: ORAL

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2014 SEG Annual Meeting, October 26–31, 2014

Paper Number: SEG-2014-0329

... is proposed, in which the **nonstationarity** is due to the Q attenuation and dispersion effects. Deabsorption preprocessing is then unnecessary, as the NAVA inversion performs AVA inversion and deabsorption simultaneously. Namely, the NAVA inversion integrates the two-step procedure of inverse Q...
Abstract

Summary In this paper, a nonstationary AVA (NAVA) inversion method for field nonstationary prestack data is proposed, in which the nonstationarity is due to the Q attenuation and dispersion effects. Deabsorption preprocessing is then unnecessary, as the NAVA inversion performs AVA inversion and deabsorption simultaneously. Namely, the NAVA inversion integrates the two-step procedure of "inverse Q filtering followed by the AVA inversion" into a single operation. The highlights of the NAVA inversion are that it avoids the intrinsic numerical instability of inverse Q filtering and directly extracts high-resolution parameters from the nonstationary prestack data. Using synthetic noise-free and noisy data results (presented in the companion paper, Part 2), it is shown that this methodology generates superior results compared to the traditional approach of "performing inverse Q filtering followed by the AVA inversion".

Journal Articles

Journal:
SPE Journal

Publisher: Society of Petroleum Engineers (SPE)

*SPE Journal* 6 (02): 137–143.

Paper Number: SPE-71309-PA

Published: 01 June 2001

... seismic amplitude map. A rejection scheme is used, which requires fast, repetitive simulation of gridblock columns and generation of convolutional responses. The **nonstationarity** of the model means that this cannot be achieved using the conventional large kriging system. We use a different, but comparably...
Abstract

Summary We have developed methods of conditioning nonstationary Lévy-stable geostatistical models 1,2 to 3D seismic data. The technique involves adapting the sequential Lévy simulation method so that the convolutional response of the realizations acceptably matches the seismic amplitude map. A rejection scheme is used, which requires fast, repetitive simulation of gridblock columns and generation of convolutional responses. The nonstationarity of the model means that this cannot be achieved using the conventional large kriging system. We use a different, but comparably rapid, method based on storing the relevant parts of a sequential simulation calculation for the column. Working directly with the amplitude traces also has the advantage of avoiding the ambiguities and nonuniqueness involved in inverting the traces to acoustic impedance. The most difficult part of the problem is estimation of the seismic wavelet, and this is often done nonoptimally. We describe a sophisticated method of estimating the wavelet and show that this can yield better-than-expected results. Suitable rejection criteria are proposed, based on reasonable probabilistic models. The application of the technique is demonstrated with a field example. Introduction The use of geostatistical models to characterize uncertainty in the spatial distribution of petroleum reservoir properties is now seen as a fundamentally desirable tool in reserves or forecasting work. Production forecasts can follow a very wide distribution when reservoir heterogeneity is appreciable and where well data are sparse. For this reason, it is clear that conditioning these geostatistical models to auxiliary data such as seismic and production data can only help in reducing forecasting uncertainties and thus improving the quality of management decisions.
Particularly in areas where there are few measurements in wells to constrain the underlying random field, the sheer density of low-resolution measurements like seismic data will clearly constrain the range of variation seen in Monte Carlo realizations of the random field. Nonetheless, it is currently impractical to condition the kinds of geostatistical models used in flow calculations to the full range of seismic data. For both computational and storage reasons, the models usually are constrained only to fully processed seismic data (post-stack, post-migrated), which may represent only 5% or so of the total volume of measurements acquired. In general, the relationship between remotely sensed acoustic properties of rocks with poor resolution (seismic data have resolution ~30 m, at best) and local petrophysical properties with a small scale of support (~1 m) is complex. Any credible conditioning method must take into account these very different scales of support. Most analyses rely implicitly on cross-correlations modeled from the sonic and porosity logs, which assumes that the field-scale seismic processing is consistent with the sonic-log information. The complexity of the relationship between seismic and petrophysical data also means that an analytical derivation of the posterior distribution of the petrophysical properties, conditional to the seismic data, is impossible in general. This conditioning must therefore be carried out using rejection-based sampling techniques such as the Markov Chain Monte Carlo (MCMC) method; candidate realizations must be accepted or rejected using a likelihood function, for example with a Metropolis sampling technique. 3 In the case of seismic data, this likelihood is usually determined by computing a synthetic seismic from the unconditional reservoir realization, and comparing this to the true seismic in some suitably meaningful way.
Such sampling techniques nearly always suffer from high rejection ratios when the conditioning is strong, so it is necessary that the algorithm used to draw realizations of the reservoir properties is extremely efficient. Furthermore, it is important that the forward model used to compute a "synthetic seismic" from a given reservoir realization be simple and computationally rapid. Like other workers, we use a 1D convolutional model, which expresses the amplitudes in a seismic trace at a given common midpoint as a convolution 4 of the reflectivities in the geological profile immediately below with a suitable wavelet. To date, models along these lines have been successfully applied in the context of multi-Gaussian models of the reservoir properties. A general theory for such models is given by Eide, 5 in which the full posterior distribution for the reservoir properties is formally derived. In principle, this solution implies that posterior samples can be drawn explicitly, but the actual form turns out to be computationally infeasible. Some suggestions for suitable approximations to the posterior are given in the context of a sequential simulation method, and some small-scale examples are given. The work of Bortoli et al. 6 also uses the 1D convolutional model, but does not appeal to a full posterior distribution as does that of Eide. 5 A rejection method is used, with the rejection criterion being a threshold correlation coefficient between the true and synthetic seismic which must be exceeded for the realization to be accepted. For reasons to be explained later, we think this is not a particularly good choice of acceptance criterion, although it obviously guarantees a certain similarity between the true and synthetic seismic fields. The methodology we present in this paper extends this previous work into a class of nonstationary, non-Gaussian random field models called Lévy fractal models.
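The 1D convolutional forward model and the correlation-threshold acceptance test discussed above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Ricker wavelet, the impedance-to-reflectivity step, and the threshold `rho_min` are all assumptions (and, as the text notes, a bare correlation threshold is arguably a poor acceptance criterion).

```python
import numpy as np

def ricker(f_peak, dt, half_len):
    """Zero-phase Ricker wavelet sampled at interval dt.  The paper
    estimates the wavelet from data; a Ricker is assumed here."""
    t = np.arange(-half_len, half_len + 1) * dt
    a = (np.pi * f_peak * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def synthetic_trace(impedance, wavelet):
    """1D convolutional model: reflectivities computed from an
    acoustic-impedance column, convolved with the wavelet."""
    z = np.asarray(impedance, dtype=float)
    refl = (z[1:] - z[:-1]) / (z[1:] + z[:-1])
    return np.convolve(refl, wavelet, mode="same")

def accept(true_trace, synth_trace, rho_min=0.7):
    """Correlation-threshold rejection criterion (Bortoli-style;
    the value of rho_min is an assumption)."""
    rho = np.corrcoef(true_trace, synth_trace)[0, 1]
    return rho >= rho_min
```

A rejection sampler would draw candidate gridblock columns, convert each to a synthetic trace with `synthetic_trace`, and keep only candidates passing `accept`.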
Techniques based on stationary models invariably require the removal of trend terms from the data, and this process can be very subjective. A less subjective technique would involve modeling the data with a general nonstationary process of sufficient elasticity to accommodate the trends, but sufficient rigidity to enable adequate estimation of its parameters. Extensive analysis of the spatial behavior of the distribution of increments in wireline log data 7–11 has shown that the distribution of increments usually has heavy tails and is well modeled by a Lévy-stable distribution. The heavy tails in the distribution yield a high probability of large jumps in the spatial field, which aptly mimic the transitions across facies boundaries. Similarly, the width of this distribution as a function of the lag r used to form the increments frequently follows a power-law behavior, which betrays a quality of self-similarity akin to fractals.
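The increment-scaling analysis described above can be illustrated with a short sketch. For simplicity it uses the Gaussian case (the α = 2 member of the Lévy-stable family, i.e., a Brownian random walk), for which the increment width grows as lag^0.5; the cited log studies fit heavier-tailed distributions with α < 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk: the alpha = 2 (Gaussian) member of the Levy-stable family.
# Heavy-tailed well logs would be modeled with alpha < 2 increments instead.
walk = np.cumsum(rng.standard_normal(100_000))

lags = np.array([1, 2, 4, 8, 16, 32, 64])
# Robust width (interquartile range) of the increment distribution per lag.
widths = np.array([np.percentile(walk[lag:] - walk[:-lag], 75)
                   - np.percentile(walk[lag:] - walk[:-lag], 25)
                   for lag in lags])

# Power-law fit width ~ lag**H; Brownian motion gives H ~ 0.5.
H = np.polyfit(np.log(lags), np.log(widths), 1)[0]
```

For real wireline logs, the same power-law fit on the width of Lévy-stable increments yields the self-similarity exponent discussed above.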

Journal Articles

Publisher: Society of Petroleum Engineers (SPE)

*SPE Reservoir Evaluation & Engineering* 5 (01): 68–78.

Paper Number: SPE-76905-PA

Published: 01 February 2002

..., such as Markov Chain Monte Carlo (MCMC) methods, and model extensions to account for

**nonstationarity**, discontinuity, and varying spatial properties at various scales **of** resolution are easily accessible in **the** MRF framework. Our proposed method is computationally efficient and well suited to reconstruct...
Abstract

Summary We propose a hierarchical approach to spatial modeling based on Markov Random Fields (MRF) and multiresolution algorithms in image analysis. Unlike their geostatistical counterparts, which simultaneously specify distributions across the entire field, MRFs are based on a collection of full conditional distributions that rely on the local neighborhoods of each element. This critical focus on local specification provides several advantages: MRFs are computationally tractable and are ideally suited to simulation-based computation, such as Markov Chain Monte Carlo (MCMC) methods, and model extensions to account for nonstationarity, discontinuity, and varying spatial properties at various scales of resolution are easily accessible in the MRF framework. Our proposed method is computationally efficient and well suited to reconstruct fine-scale spatial fields from coarser, multiscale samples (based on seismic and production data) and sparse fine-scale conditioning data (e.g., well data). It is easy to implement, and it can account for the complex, nonlinear interactions between different scales, as well as the precision of the data at various scales, in a consistent fashion. We illustrate our method with a variety of examples that demonstrate the power and versatility of the proposed approach. Finally, a comparison with Sequential Gaussian Simulation with Block Kriging (SGSBK) indicates similar performance with less restrictive assumptions. Introduction A persistent problem in petroleum reservoir characterization is to build a model for flow simulations based on incomplete information. Because of the limited spatial information, any conceptual reservoir model used to describe heterogeneities will, necessarily, have large uncertainty. Such uncertainties can be significantly reduced by integrating multiple data sources into the reservoir model.
1 In general, we have hard data, such as well logs and cores, and soft data, such as seismic traces, production history, conceptual depositional models, and regional geological analyses. Integrating information from this wide variety of sources into the reservoir model is not a trivial task. This is because different data sources scan different length scales of heterogeneity and can have different degrees of precision. 2 Reconciling multiscale data for spatial modeling of reservoir properties is important because different data types provide different information about the reservoir architecture and heterogeneity. It is essential that reservoir models preserve small-scale property variations observed in well logs and core measurements and capture the large-scale structure and continuity observed in global measures such as seismic and production data. A hierarchical model is particularly well suited to address the multiscaled nature of spatial fields, match available data at various levels of resolution, and account for uncertainties inherent in the information. 1–3 Several methods to combine multiscale data have been introduced in the literature, with a primary focus on integrating seismic and well data. 3–9 These include conventional techniques such as cokriging and its variations, 3–6 SGSBK, 7 and Bayesian updating of point kriging. 8,9 Most kriging-based methods are restricted to multi-Gaussian and stationary random fields. 3–9 Therefore, they require data transformation and variogram construction. In practice, variogram modeling with a limited data set can be difficult and strongly user-dependent. Improper variograms can lead to errors and inaccuracies in the estimation. Thus, one might also need to consider the uncertainty in variogram models during estimation. 10 However, conventional geostatistical methods do not provide an effective framework to account for the uncertainty of the variogram. 
Furthermore, most of the multiscale integration algorithms assume a linear relationship between the scales. The objective of this paper is to introduce a novel multiscale data-integration technique that provides a flexible and sound mathematical framework to overcome some of the limitations of conventional geostatistical techniques. Our approach is based on multiscale MRFs 11–14 that can effectively integrate multiple data sources into high-resolution reservoir models for reliable reservoir forecasting. This proposed approach is also ideally suited to simulation-based computations, such as MCMC. 15,16 Methodology Our problem of interest is to generate fine-scale random fields based on sparse fine-scale samples and coarse-scale data. Such situations arise when we have limited point measurements, such as well data, and coarse-scale information based on seismic and/or production data. Our proposed method is a Bayesian approach to spatial modeling based on MRF and multiresolution algorithms in image analysis. Broadly, the method consists of two major parts: constructing a posterior distribution for multiscale data integration using a hierarchical model and implementing MCMC to explore the posterior distribution. Construction of a Posterior Distribution for Multiscale Data Integration. A multiresolution MRF provides an efficient framework to integrate different scales of data hierarchically, provided that the coarse-scale resolution is dependent on the next fine-scale resolution. 11 In general, a hierarchical conditional model over scales 1, …, N (from fine to coarse) can be expressed in terms of the product of conditional distributions, Equation 1, where p(x_n), n = 1, …, N, are MRF models at each scale, and the terms p(x_n | x_{n−1}) express the statistical interactions between different scales. This approach links the various scales stochastically in a direct Bayesian hierarchical modeling framework (Fig. 1).
Knowing the fine-scale field x_n does not completely determine the field at a coarser scale x_{n+1}, but depending on the extent of the dependence structure modeled and estimated, it influences the distribution at the coarser scales to a greater or lesser extent. This enables us to address multiscale problems accounting for the scale and precision of the data at various levels. For clarity of exposition, a hierarchical model for reconciling two different scales of data will be considered below. Equation 2 From this equation, the posterior distribution of the fine-scale random field indexed by 1, given a coarse-scale random field indexed by 2, can be derived as follows.
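A toy version of this two-scale hierarchy can make the construction concrete. The sketch below (all fields and parameters assumed, not taken from the paper) samples a fine-scale 1D field whose prior is a simple smoothness MRF and whose likelihood ties block averages of the fine field to assumed coarse-scale data, using single-site Metropolis updates in place of the paper's MCMC machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

n_fine, block = 16, 4                      # fine cells; fine cells per coarse block
y_coarse = np.array([0.0, 1.0, 2.0, 1.0])  # assumed coarse-scale data (block means)
beta, sigma = 4.0, 0.05                    # smoothness weight; coarse-data noise std

def log_post(x):
    """log p(x1 | x2) up to a constant: MRF smoothness prior plus a
    Gaussian likelihood tying block means of the fine field to y_coarse."""
    prior = -beta * np.sum(np.diff(x) ** 2)
    means = x.reshape(-1, block).mean(axis=1)
    return prior - np.sum((y_coarse - means) ** 2) / (2.0 * sigma ** 2)

x = np.zeros(n_fine)
for _ in range(20000):                     # single-site Metropolis updates
    i = rng.integers(n_fine)
    prop = x.copy()
    prop[i] += 0.2 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop

block_means = x.reshape(-1, block).mean(axis=1)
```

After sampling, the block means of the fine-scale realization honor the coarse data while the MRF prior keeps the field smooth within and across blocks.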

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2011 SEG Annual Meeting, September 18–23, 2011

Paper Number: SEG-2011-3601

... regularization. In

**the** second step, we solve **the** other least-squares system tailored specifically to signal and noise separation. Using synthetic and real data examples, we successfully apply this method to **the** **problem** **of** separating nonstationary signal from two different types of noise. ...
Abstract

ABSTRACT Many natural phenomena, including geologic events and geophysical data, are fundamentally nonstationary. They might exhibit stationarity on a short timescale but eventually alter their behavior in space and time. We extend the application of the adaptive prediction-error filter (PEF) based on regularized nonstationary autoregression, which aims at signal and noise separation in the t-x domain. Instead of using patching, a popular method for handling nonstationarity, we obtain smoothly nonstationary PEF coefficients by solving a regularized least-squares problem with shaping regularization. In the second step, we solve another least-squares system tailored specifically to signal and noise separation. Using synthetic and real data examples, we successfully apply this method to the problem of separating nonstationary signal from two different types of noise.
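A stationary, single-window simplification of the prediction-error idea can be sketched as follows; the paper's contribution is precisely to replace this fixed-coefficient fit (or patching) with smoothly time-varying coefficients obtained by shaping regularization, so the sketch shows only the baseline least-squares PEF.

```python
import numpy as np

rng = np.random.default_rng(2)

n, order = 2000, 4
t = np.arange(n)
signal = np.sin(0.2 * t) + 0.5 * np.sin(0.05 * t)   # predictable component
data = signal + 0.2 * rng.standard_normal(n)        # plus unpredictable noise

# Least-squares prediction filter over one stationary window: predict each
# sample from the previous `order` samples.  (The paper instead lets these
# coefficients vary smoothly via shaping regularization.)
X = np.column_stack([data[order - k - 1:n - k - 1] for k in range(order)])
coeff, *_ = np.linalg.lstsq(X, data[order:], rcond=None)

pred = X @ coeff
residual = data[order:] - pred    # prediction error ~ the unpredictable part
```

The prediction error retains what the filter cannot predict, which is the basis for separating predictable signal from noise.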

Journal Articles

Publisher: Society of Petroleum Engineers (SPE)

*SPE Reservoir Evaluation & Engineering* 10 (01): 77–85.

Paper Number: SPE-93159-PA

Published: 01 February 2007

... within stacked lava domes while accounting for pressure data by means

**of** history matching to address **nonstationarity** in **the** real field. Building a suitable training image is commonly a difficult aspect **of** multipoint methods and poses particular **problems** for volcanic reservoirs. It was accomplished here...
Abstract

Summary A Tcf-class gas field has been producing over several decades in Japan. The reservoir body comprises stacked rhyolite lava domes erupted in a submarine environment. A porous network developed in each dome, and rapid chilling on contact with seawater caused hyaloclastite to be deposited over it. Although hyaloclastite is also porous in this field, its permeability has been reduced dramatically by the presence of clay minerals. Impermeable basaltic sheets and mudstone seams are also present. Each facies plays a specific role in the pressure system. Stratigraphic correlation originally identified multiple reservoirs. Gas has been produced almost exclusively from the largest one. However, following 10 to 20 years of production, the pressures within unexploited reservoirs were observed to have declined at a variety of rates. Unusual localized behavior has also been observed. Because seismic data did not prove particularly informative, we decided to remodel the entire system by specifically using pressure data. We employed a combination of multipoint geostatistics and probability perturbation methods. This approach successfully captured the curved facies boundaries within stacked lava domes while accounting for pressure data by means of history matching to address nonstationarity in the real field. Building a suitable training image is commonly a difficult aspect of multipoint methods and poses particular problems for volcanic reservoirs. It was accomplished here by iteratively adjusting the prototype until satisfactory history matching was achieved with a reasonable number of perturbations. Ambiguous reservoir boundaries were represented stochastically by populating a predetermined model space with pay and nonpay pixels. The modeling results closely simulate measured pressure histories and appear realistic in terms of both facies distributions and reservoir boundaries.
They suggest that uneven pressure declines between different units are caused by the tortuous flow channels that connect them. The results also account for the unusual smaller-scale pressure performances observed. The final training image obtained here indicates more intensive spatial variations in facies than previously appreciated. Original gas in place (OGIP) estimates made with 20 equiprobable realizations are scattered within ±15% of the mean value. Estimates of incremental recovery made by drilling a step-out well reveal greater variation than those made by installing a booster compressor, which quantifies a higher associated geological risk.

Journal Articles

Publisher: Society of Petroleum Engineers (SPE)

*SPE Reservoir Evaluation & Engineering* 7 (06): 416–426.

Paper Number: SPE-81544-PA

Published: 01 December 2004

... (MCMC) methods, and (b) model extensions to account for

**nonstationarities**, discontinuity, and varying spatial properties at various scales **of** resolution are accessible in **the** MRFs. We construct fine-scale porosity distribution from well and seismic data, explicitly accounting for **the** varying scale and...
Abstract

Summary Integrating multiresolution data sources into high-resolution reservoir models for accurate performance forecasting is an outstanding challenge in reservoir characterization. Well logs, cores, and seismic and production data scan different length scales of heterogeneity and have different degrees of precision. Current geostatistical techniques for data integration rely on a stationarity assumption that often is not borne out by field data. Geologic processes can vary abruptly and systematically over the domain of interest. In addition, geostatistical methods require modeling and specification of variograms that can often be difficult to obtain in field situations. In this paper, we present a case study from the Middle East to demonstrate the feasibility of a hierarchical approach to spatial modeling based on Markov random fields (MRFs) and multiresolution algorithms in image analysis. The field is located in Saudi Arabia, south of Riyadh, and produces hydrocarbons from the Unayzah formation, a late Permian siliciclastic reservoir. Our proposed approach provides a powerful framework for data integration accounting for the scale and precision of different data types. Unlike their geostatistical counterparts, which simultaneously specify distributions across the entire field, the MRFs are based on a collection of full conditional distributions that rely on the local neighborhood of each element. This critical focus on local specification provides several advantages: MRFs are far more computationally tractable and are ideally suited to simulation-based computation such as Markov Chain Monte Carlo (MCMC) methods, and model extensions to account for nonstationarities, discontinuity, and varying spatial properties at various scales of resolution are accessible in the MRFs. We construct fine-scale porosity distribution from well and seismic data, explicitly accounting for the varying scale and precision of the data types.
First, we derive a relationship between the neutron porosity and the seismic amplitudes. Second, we integrate the seismically derived coarse-scale porosity with fine-scale well data to generate a 3D fieldwide porosity distribution using MRF. The field application demonstrates the feasibility of this emerging technology for practical reservoir characterization. Introduction The principal goal of reservoir characterization is to provide a reservoir model for accurate reservoir performance prediction. Integrating various data sources is an essential task in reservoir characterization. In general, we have hard data such as well logs and cores and soft data such as seismic traces, production history, a conceptual depositional model, and regional geological analysis. Seismic data in particular can play a major role in enhancing the geological model. It can be a block constraint when generating property distributions at a finer scale. However, integrating such information into the reservoir model is nontrivial. This is because different data sources scan different length scales of heterogeneity and can have different degrees of precision.1 It is essential that reservoir models preserve small-scale property variations observed in well logs and core measurements and capture the large-scale structure and continuity observed in global measurements such as seismic and production data. The large coverage area of seismic data has established that such data sources can play a major role in characterizing the reservoir. Most applications of seismic data for reservoir characterization have focused on the relationship between seismic attributes such as amplitudes or impedance and porosity.2,3 Two basic approaches have been adopted for integrating seismic data into reservoir models. 
For high-resolution seismic data, several geostatistical techniques such as cokriging and collocated cokriging have been proposed to estimate areal distribution of porosity.4-6 On the lower-resolution spectrum, there are methods to combine multiscale data where seismic data impose a block constraint for the finer scale.3,4,6-11 These include techniques such as sequential Gaussian simulation with block kriging3 and Bayesian updating of point kriging.10,11 Most kriging-based methods are restricted to multi-Gaussian and stationary random fields. They therefore require data transformation and variogram construction.3,4,6-11 In practice, variogram modeling with a limited data set can be difficult and strongly user-dependent. Improper variograms can lead to errors and inaccuracies in the estimation. Thus, one might also need to consider the uncertainty in variogram models during estimation.12 However, conventional geostatistical methods do not provide an effective framework to take into account the uncertainty of the variogram. Furthermore, most of the multiscale integration algorithms assume a linear relationship between the scales. An alternative approach to traditional geostatistical methods is based on multiscale MRFs that can effectively integrate diverse data sources into high-resolution reservoir models. MRF methods have been applied widely in image processing13-15 and spatial modeling. In the oil industry, this technique is relatively new. There are limited applications in determining the reservoir facies16,17 distribution and spatial modeling of reservoir properties with synthetic examples.18 However, field-scale application of MRF has remained a challenging goal. In this paper, we further investigate our previously proposed method18 with the main objective of gaining insight into the practical implementation of this technique by a field application in the Middle East.
The particular field studied here, the CNR field in Saudi Arabia, is located south of Riyadh and produces hydrocarbons from the Unayzah formation, a late Permian siliciclastic reservoir. Our goal is to generate a 3D high-resolution porosity model by integrating seismic and well-log data through an MRF method.

Proceedings Papers

Paper presented at The 29th International Ocean and Polar Engineering Conference, June 16–21, 2019

Paper Number: ISOPE-I-19-289

... noise. Earlier works are based on

**the** finite impulse response (FIR) filter and least mean square (LMS) algorithm. By employing steepest-descent search directions, LMS algorithms are computationally simple, numerically robust and widely applied. To solve **the** classical **problem** **of** online frequency...
Abstract

ABSTRACT For an active narrow-band underwater acoustic positioning system, accurately estimating the phase shift of the received signals between the array elements is key. Owing to its adaptability to the ocean environment, the adaptive Wiener filter can reduce noise jamming adaptively. In this paper, the recursive least squares (RLS) algorithm combined with an adaptive notch filter (ANF) is presented. Firstly, two orthogonal sinusoidal signals are used as reference signals to measure the phase. It is shown that the phase estimate converges to the true value within a few iterations and remains stable in subsequent iterative calculations. Then, the RLS-ANF phase shift estimator is obtained by connecting in parallel two adaptive phase estimators, one for each of the received signals. To address the phenomenon of 'quadrant-hopping', the RLS-ANF phase shift estimator is improved. Finally, simulation and experimental results show that the RLS-ANF phase shift estimator has the advantages of fast convergence, good stability, and high accuracy. INTRODUCTION The adaptive notch filter (ANF) is efficient for estimating and extracting parameters of narrow-band signals from background noise. Earlier works are based on the finite impulse response (FIR) filter and the least mean square (LMS) algorithm. By employing steepest-descent search directions, LMS algorithms are computationally simple, numerically robust, and widely applied. To solve the classical problem of online frequency estimation of a sinusoidal signal, Hsu, Ortega, and Damm (1999) propose a new ANF that ensures global convergence. Its major limitations, however, are a relatively slow speed of convergence and sensitivity to variations in the eigenvalue spread of the input correlation matrix (Antoniou and Lu, 2007; Diniz, 2013). To improve the performance of the ANF, extensive further work has been proposed.
Mojiri and Bakhshai (2004) develop a modified ANF to achieve online frequency estimation of periodic, but not necessarily sinusoidal, signals. They also prove that the stability analysis is simpler and that the problem complexity is reduced, even in the case of a pure sinusoidal signal. The frequency auto-tracking adaptive frequency estimator (FATAFE) was put forward (Liang, Yang, and Wang, 2005) to overcome increasing frequency bias and to further reduce estimation bias and variance relative to the adaptive frequency estimator (AFE). Dealing with the problem of frequency estimation of a sinusoidal signal corrupted by broad-band noise, Punchalard, Lorsawatsiri, Loetwassana, Koseeyaporn, Wardkein, and Roeksabutr (2008) propose the second-order adaptive FIR notch filter (AFNF). The performance of the AFNF, including the rate of convergence and the mean square error (MSE), can be controlled simply through the step-size parameter. Because conventional discretization processes may cause deviations in calculating the derivative of the adaptive filter state, Yoon, Bahn, W, Lee, Cho and D (2017) present a new discrete derivative method for LMS-ANF frequency estimators. The new ANF can accurately estimate the frequencies of input signals in various frequency ranges.
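The two-orthogonal-reference phase estimator described in the abstract can be illustrated with an LMS sketch (the paper uses RLS for faster convergence; the carrier frequency, step size, and noise level below are assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)

omega, phase_true, n = 0.3, 0.8, 20000      # assumed frequency, phase, length
t = np.arange(n)
d = np.cos(omega * t + phase_true) + 0.3 * rng.standard_normal(n)

# Two orthogonal sinusoidal references at the known carrier frequency.
r1, r2 = np.cos(omega * t), np.sin(omega * t)

w1 = w2 = 0.0
mu = 0.005                                   # assumed LMS step size
for k in range(n):
    e = d[k] - (w1 * r1[k] + w2 * r2[k])     # error signal
    w1 += mu * e * r1[k]                     # an RLS estimator would replace
    w2 += mu * e * r2[k]                     # these steepest-descent updates

# d = cos(wt)cos(phi) - sin(wt)sin(phi), so w1 -> cos(phi), w2 -> -sin(phi).
phase_est = np.arctan2(-w2, w1)
```

Using `arctan2` on both converged weights resolves the full quadrant of the phase, which is the 'quadrant-hopping' concern the paper addresses for its RLS estimator.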

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2003 SEG Annual Meeting, October 26–31, 2003

Paper Number: SEG-2003-1945

... predictability is estimated with multidimensional prediction-error filters. These filters are time-variant in order to handle

**the** **nonstationarity** **of** **the** seismic data. Attenuation **of** surface-related multiples is illustrated with field data from **the** Gulf **of** Mexico with 2-D and 3-D filters. **The** 3-D filters allow...
Abstract

SUMMARY Multiple attenuation in complex geology remains a very intensive research area. The proposed technique aims to use the spatial predictability of both the signal (primaries) and noise (multiples), in order to perform the multiple attenuation in the time domain. The spatial predictability is estimated with multidimensional prediction-error filters. These filters are time-variant in order to handle the nonstationarity of the seismic data. Attenuation of surface-related multiples is illustrated with field data from the Gulf of Mexico with 2-D and 3-D filters. The 3-D filters allow the best attenuation result. In particular, 3-D filters seem to cope better with inaccuracies present in the multiple model for short offset and diffracted multiples.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the SEG International Exposition and Annual Meeting, September 15–20, 2019

Paper Number: SEG-2019-3199116

...).

**The** **nonstationarity** means frequency components **of** seismic signals vary with time. A time-frequency (TF) analysis tool can transform **the** time-domain nonstationary signals into a TF space with an additional frequency dimension to characterize local spectral variations (Partyka et al., 1999). Often...
Abstract

ABSTRACT Time-frequency (TF) analysis is a useful tool for seismic data processing and interpretation. We propose a new two-step high-resolution TF analysis method involving relevance vector machine (RVM) based wavelet decomposition and the Wigner-Ville distribution (WVD). We first decompose the seismic trace into a series of Ricker wavelets using RVM. Then, we implement WVD on the decomposed wavelets to produce the TF spectrum. By iteratively solving a Bayesian maximum posterior and a type-II maximum likelihood, RVM-based decomposition can obtain the smallest number of Ricker wavelets with different peak frequencies or phases from a preset wavelet dictionary, and can simultaneously invert for the associated sparse reflectivity even in the presence of thin beds. Moreover, RVM decomposition can reconstruct effective signals from the original contaminated data, and thereby has good noise immunity. The WVD of the decomposed wavelets can assemble the TF distribution of the reconstructed signals to approximately characterize the WVD of the original data. Therefore, the linear stack of the WVDs of all decomposed independent wavelets is immune from both the notorious cross-term interferences of the traditional WVD and random noise. A synthetic data example involving thin beds and a field data example are used to demonstrate the effectiveness of the proposed RVM-based WVD TF analysis method and illustrate its advantages over the traditional Gabor transform and the OMP-based TF method. The results show that the proposed RVM-based WVD method is a potentially effective, stable, and high-resolution seismic TF analysis tool. Presentation Date: Monday, September 16, 2019 Session Start Time: 1:50 PM Presentation Start Time: 2:40 PM Location: 214D Presentation Type: Oral
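The wavelet-dictionary idea can be made concrete with a small sketch. For brevity it uses greedy matching pursuit over a dictionary of time-shifted Ricker wavelets rather than the RVM machinery of the paper; the sampling interval, peak frequencies, and two-wavelet test trace are assumptions.

```python
import numpy as np

def ricker(f, t):
    """Ricker wavelet with peak frequency f (Hz) at times t (s)."""
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Dictionary of time-shifted, unit-norm Ricker atoms at a few assumed peak
# frequencies (the paper's preset dictionary also varies wavelet phase).
dt, n = 0.002, 256
t = np.arange(n) * dt
freqs = [20.0, 35.0, 50.0]
atoms, labels = [], []
for f in freqs:
    for shift in range(n):
        w = ricker(f, t - shift * dt)
        atoms.append(w / np.linalg.norm(w))
        labels.append((f, shift))
D = np.array(atoms)                        # each row is one atom

# Synthetic trace: two interfering wavelets, a thin-bed-like pair.
trace = 1.0 * D[labels.index((35.0, 100))] + 0.6 * D[labels.index((20.0, 110))]

# Greedy decomposition (matching pursuit, standing in for RVM): repeatedly
# subtract the best-matching atom from the residual.
residual, picks = trace.copy(), []
for _ in range(2):
    scores = D @ residual
    j = int(np.argmax(np.abs(scores)))
    picks.append((labels[j], float(scores[j])))
    residual = residual - scores[j] * D[j]
```

Each recovered atom has a known peak frequency and time shift, so its TF contribution (e.g., its WVD) can be stacked linearly, avoiding the cross-terms of applying the WVD to the raw trace.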

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2017 SEG International Exposition and Annual Meeting, September 24–29, 2017

Paper Number: SEG-2017-17415496

... to , and (2) we also fail to correctly and . We may choose to solve both

**problems** or . **The** quality **of** image-gathers associated with prestack migration may or may not warrant **the** simultaneous solution. In areas with irregular topography, complex near-surface, and complex subsurface, it may not. What...
Abstract

ABSTRACT In areas with complex near-surface with irregular topography and structurally complex subsurface, there is much uncertainty in rms velocity estimation for prestack time migration, whereas interval velocity estimation for prestack depth migration is despairingly challenging. We often attribute the velocity uncertainty to various factors, including strong lateral velocity variations, heterogeneity, anisotropy, and three-dimensional behavior of complex structures. Nevertheless, it is not easy to identify the cause of and account for the uncertainty, as it often is a combination of the various factors. The analyst struggles when estimating a velocity field, whether it is for prestack time or depth migration. Velocity uncertainty invariably gives rise to erroneously high or low migration velocities, which then causes problems with prestack migration: (1) we fail to , and (2) we also fail to correctly and . We may choose to solve both problems or . The quality of image-gathers associated with prestack migration may or may not warrant the simultaneous solution. In areas with irregular topography, complex near-surface, and complex subsurface, it may not. What then? I propose a workflow, applicable to both 2-D and 3-D seismic data, to solve the two problems with prestack time migration one after the other, which includes construction of a zero-offset wavefield to capture and preserve all reflections and diffractions, followed by zero-offset time migration. The workflow includes construction of an image volume by prestack time migration of shot gathers using a range of constant velocities. This image volume can be used to pick rms velocities for prestack time migration. Yet, the multiplicity of semblance peaks associated with the image volume remains perilous.
We can sum the image panels within the image volume over the velocity axis to obtain a composite image in time, so as to all events in the image volume and avoid committing ourselves inadvertently to a velocity field that most likely would carry some uncertainty. This summation strategy, however, works only if the events within the volume are stationary in time and space. To meet this requirement, we unmigrate each of the image panels within the image volume and then sum over the velocity axis. The resulting unmigrated section is, in effect, equivalent to a zero-offset wavefield. The final step in the workflow is poststack time migration of the zero-offset wavefield. I shall demonstrate this workflow using a field data set from a thrust belt.

Presentation Date: Wednesday, September 27, 2017
Start Time: 10:35 AM
Location: 371A
Presentation Type: ORAL
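The image-volume step of the workflow can be sketched in a few lines of NumPy. Everything below (the synthetic volume, array shapes, velocity range, and the amplitude-based pick) is an illustrative assumption, not the paper's implementation; in particular, the argmax pick is only a crude stand-in for semblance-based rms velocity analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in image volume from prestack time migration with a range of
# constant velocities: axes are (trial velocity, two-way time, midpoint).
# Purely synthetic data for illustration.
n_vel, n_t, n_x = 16, 256, 128
image_volume = rng.normal(size=(n_vel, n_t, n_x))

# Composite image: sum the migrated panels over the velocity axis,
# deferring any commitment to a single velocity field.
composite = image_volume.sum(axis=0)            # shape (n_t, n_x)

# Crude rms velocity pick: at each (t, x), take the trial velocity whose
# panel has the largest absolute migrated amplitude.
vel_axis = np.linspace(1500.0, 4500.0, n_vel)   # m/s, assumed range
picked_rms = vel_axis[np.abs(image_volume).argmax(axis=0)]
```

As the abstract notes, this direct summation is only valid once the panels are unmigrated so that events are stationary across the velocity axis.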

Journal Articles

####
Geologic Modeling of Eagle Ford Facies Continuity Based on Outcrop Images and Depositional Processes

Journal:
SPE Journal

Publisher: Society of Petroleum Engineers (SPE)

*SPE Journal*23 (04): 1359–1371.

Paper Number: SPE-189975-PA

Published: 12 February 2018

...

...the original MPS algorithm, helped in the investigation of further applications of MPS to practical problems. Since that breakthrough, several other MPS algorithms have been proposed (for a comprehensive review, see Tahmasebi and Sahimi 2016a, b). For example, Arpat and Caers (2007) introduced a...
Abstract

Summary Geologic modeling of mudrock reservoirs is complicated by the presence of multiscale heterogeneities and lithofacies lateral discontinuity. The resolution of wireline logs is also too low to capture many small-scale heterogeneities that affect fluid flow. In addition, the large distance between logged wells results in uncertain long-range correlations. Supplementary to wireline log data, high-resolution outcrop images offer a direct representation of detailed heterogeneities and lithofacies connectivity. We used high-resolution panoramic outcrop images to collect data on lithofacies heterogeneity and the role that depositional processes play in this heterogeneity. We then used these data in different classes of reservoir algorithms—two-point-based, object-based, and higher-order statistics—to build a geologic model. To present our methodology, we used data collected from Eagle Ford outcrops in west Texas. We found the higher-order-statistics method to be especially efficient, capable of reproducing details of heterogeneity and lithofacies connectivity.
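The higher-order (multiple-point) statistics idea can be illustrated with a toy direct-sampling loop: fill the simulation grid cell by cell, copying the value from a training-image location whose neighborhood best matches the already-simulated neighbors. The training image, window size, and thresholds below are invented for this sketch and bear no relation to the Eagle Ford data or the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy binary "training image" of thin, laterally continuous facies bands --
# a stand-in for patterns digitized from outcrop photos.
ti = np.zeros((40, 40), dtype=int)
ti[::4] = 1

def direct_sampling(ti, shape, half=2, n_tries=64, threshold=0.1):
    """Minimal direct-sampling MPS sketch: raster-scan the grid, and for
    each cell copy the training-image value whose neighborhood best
    matches the cell's already-simulated neighbors."""
    sim = np.full(shape, -1, dtype=int)   # -1 marks unsimulated cells
    H, W = ti.shape
    for i in range(shape[0]):
        for j in range(shape[1]):
            best_val, best_err = int(rng.integers(0, 2)), np.inf
            for _ in range(n_tries):
                # Random candidate location in the training image,
                # kept away from the edges so the window fits.
                y = rng.integers(half, H - half)
                x = rng.integers(half, W - half)
                err, n_cmp = 0.0, 0
                for dy in range(-half, half + 1):
                    for dx in range(-half, half + 1):
                        si, sj = i + dy, j + dx
                        if (0 <= si < shape[0] and 0 <= sj < shape[1]
                                and sim[si, sj] != -1):
                            err += abs(sim[si, sj] - ti[y + dy, x + dx])
                            n_cmp += 1
                err = err / n_cmp if n_cmp else 0.0
                if err < best_err:
                    best_err, best_val = err, int(ti[y, x])
                if best_err <= threshold:   # good enough -- stop searching
                    break
            sim[i, j] = best_val
    return sim

realization = direct_sampling(ti, (20, 20))
```

Because values are copied jointly with their neighborhoods, the realization tends to reproduce the banded connectivity of the training image, which is what two-point (variogram-based) methods struggle to capture.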
