#### Operator antialiasing (1-20 of 76 search results)


Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2015 SEG Annual Meeting, October 18–23, 2015

Paper Number: SEG-2015-5852752

... enhanced if there are complex multiple generators in the subsurface. In this paper, I propose a simple methodology to perform **antialiasing** in the multiple contribution gather (MCG) domain to reduce the artifacts generated in the SRME predicted multiples. Introduction SRME is a prediction and...
Abstract

Summary Surface related multiple elimination (SRME) is a powerful data-driven tool to remove surface related multiples. However, it has a very strict requirement on the acquisition geometry to obtain a satisfactory result. The shortcomings of the method due to inadequate acquisition are enhanced if there are complex multiple generators in the subsurface. In this paper, I propose a simple methodology to perform antialiasing in the multiple contribution gather (MCG) domain to reduce the artifacts generated in the SRME predicted multiples. Introduction SRME is a prediction and subtraction method (see, e.g., Verschuur and Berkhout, 1997, and Weglein et al., 1997). Surface related multiples are predicted using the appropriately preprocessed input data and then subtracted from the dataset without multiple attenuation. Written explicitly, the 3D SRME prediction is the sum of autoconvolutions of the data:

$$M(x_s, y_s, x_g, y_g) = \sum_{x,y} D(x_s, y_s, x, y) * D(x, y, x_g, y_g)$$

Here M is the predicted multiple for a trace with source location (x_s, y_s) and receiver location (x_g, y_g), D is the input data, and * denotes temporal convolution. The summation is performed over the defined (x, y) aperture. The MCG contains the traces generated by these individual autoconvolutions of the data D, prior to summation. SRME is a very popular and effective algorithm for removing surface related multiples. Ideally, this algorithm requires seismic sources at every receiver location. This prerequisite is not satisfied for field datasets. Figures 1 and 2 (modified from Verschuur, 2006) clearly demonstrate the artifacts generated in the predicted multiples due to inadequate acquisition. Figure 1a is the MCG generated for a single trace using an adequate source spacing. Figure 1b is the MCG generated for the same trace using double the source spacing. The spatial aliasing is very noticeable in Figure 1b. Comparison of the predicted multiples in Figure 2, created by the summation of the MCG, clearly shows the artifacts generated due to inadequate sampling of the source.
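The summation above can be sketched numerically. In the frequency domain the temporal convolutions become products, so one frequency slice of a 2D prediction reduces to a matrix product of the data with itself; the sizes, random values, and trace indices below are purely illustrative, not the paper's data.

```python
import numpy as np

# Sketch of a 2D frequency-domain SRME prediction. D[s, g] holds one
# temporal-frequency slice of the data, indexed by source and receiver
# surface position; sources are assumed co-located with receivers,
# as ideal SRME requires.
rng = np.random.default_rng(0)
n = 8                                   # number of surface positions
D = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Multiple contribution gather (MCG) for output trace (s, g): one entry per
# surface point x, i.e. the individual convolutions before summation.
s, g = 2, 5
mcg = D[s, :] * D[:, g]

# The predicted multiple is the sum over the aperture; for the whole
# frequency slice at once, this is a matrix product of the data with itself.
M = D @ D
assert np.allclose(M[s, g], mcg.sum())
```

Antialiasing in the MCG domain, as the paper proposes, would operate on the `mcg` traces before the final sum.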

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2008 SEG Annual Meeting, November 9–14, 2008

Paper Number: SEG-2008-2386

... limits with just three multiply-add **operations**. One pays a price in bandwidth, however, as the highest input frequency it passes is only half of temporal Nyquist in the event **antialiasing** is required. Linear interpolation, as used in Claerbout's original paper, lifts that restriction, but at a cost in...
Abstract

Summary One quite well known method for antialiasing Kirchhoff migration is the compact and efficient triangle filter method first proposed by Claerbout (1992). In its most efficient Kirchhoff application (Lumley et al., 1994), it requires no extra memory and implements antialias frequency limits with just three multiply-add operations. One pays a price in bandwidth, however, as the highest input frequency it passes is only half of temporal Nyquist in the event antialiasing is required. Linear interpolation, as used in Claerbout's original paper, lifts that restriction, but at a cost in computation and some additional loss of fidelity. Supersampling can replace linear interpolation, but forces a tenfold memory increase to retain frequencies up to 90% of Nyquist. In this abstract, I provide a memory efficient way to modify the original method to permit frequency limits all the way up to Nyquist, while retaining three multiply-add computational efficiency. Introduction Antialiasing, that is, selective low-pass filtering, is used in Kirchhoff migration to suppress energy leaking into the migrated image from coherent events cross-cutting the summation trajectory (see Fig. 1). Given a full description of the geometry of the input traces and the parameters that define the Kirchhoff summation trajectories, Mazzucchelli and Rocca (2001) provide a way of determining quite precisely where and how much antialiasing is required. Wang (2000, 2004) leverages migration wavelet stretch and dip estimates in an image space dealiasing method. Whatever method is selected to determine antialiasing requirements, the result is an upper frequency limit that will generally be different for every pair of input and output samples in the migration. To reduce the computational cost of this extra overhead, a number of approaches can be taken, either individually or in combination:

- Make multiple copies of each input trace, each filtered to a different maximum frequency, and sum samples from these copies according to the local antialias limits.
- Coarsen the calculation of antialias limits to, say, every 5th time sample and every 10th trace, so that the overhead of computing the limits (as distinguished from applying the limits) is small.
- Coarsen the application of antialias limits with small windows, say 4 to 8 samples long, of block extraction of samples from a selected frequency-limited input trace.
- Use a fast, albeit approximate, antialias limit application such as the Claerbout triangle filter method, which integrates each trace forward and backward and then applies a gapped second difference operator to construct each low-pass filtered sample on the fly.

Any attempt to tackle computational load still has to keep in mind two other factors. First, any sizable additional memory overhead will generally degrade migration efficiency, incurring more cache misses, page faults and disk I/O; computing antialiasing twice as fast does no good if the CPUs are idle twice as long. Second, the effects of interpolation incurred because summation trajectories pass between recorded trace samples are an issue: standard interpolation methods are implicitly low-pass filters, so efforts to preserve high frequencies in antialiasing will be wasted if the interpolation removes those frequencies in the Kirchhoff summation.
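The triangle filter named in the last bullet is compact enough to sketch outside of any migration. The construction assumed below is the standard one: two causal integrations (cumulative sums) followed by a gapped second difference, which amounts to convolving with a triangle of half-width `k`; the function name and test signal are ours.

```python
import numpy as np

def triangle_antialias(x, k):
    """Claerbout-style triangle filter: integrate twice, then take a gapped
    second difference (three multiply-adds per output sample). Equivalent to
    convolving x with a normalised triangle of half-width k."""
    xp = np.concatenate([x, np.zeros(k)])       # room for the forward gap
    c2 = np.cumsum(np.cumsum(xp))               # double causal integration
    C = lambda m: c2[m] if m >= 0 else 0.0      # zero before the first sample
    return np.array([(C(t + k - 1) - 2.0 * C(t - 1) + C(t - k - 1)) / k**2
                     for t in range(x.size)])

# Cross-check against an explicit triangle convolution: a triangle is a box
# of length k convolved with itself, normalised by k**2.
rng = np.random.default_rng(1)
x, k = rng.standard_normal(64), 4
tri = np.convolve(np.ones(k), np.ones(k)) / k**2
ref = np.convolve(x, tri)[k - 1 : k - 1 + x.size]
assert np.allclose(triangle_antialias(x, k), ref)
```

In a migration, `k` would vary with the local operator dip so that the effective cutoff tracks the antialias frequency limit at each input/output sample pair.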

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2007 SEG Annual Meeting, September 23–28, 2007

Paper Number: SEG-2007-2255

... ABSTRACT A common method used to protect Kirchhoff migration **operators** from aliasing is to apply an **operator**-dip-dependent lowpass filter to each input trace during summation. This can reduce resolution because higher frequency components of the input data are removed from the migrated image...
Abstract

ABSTRACT A common method used to protect Kirchhoff migration operators from aliasing is to apply an operator-dip-dependent lowpass filter to each input trace during summation. This can reduce resolution because higher frequency components of the input data are removed from the migrated image as output dip increases. Instead of migrating data the conventional way, we apply a wavelet transform over the time axis of the data and then apply Kirchhoff migration to the wavelet coefficients of each wavelet scale separately. We reconstruct the full-bandwidth image from the multi-scale images with an iterative wavelet reconstruction without any anti-aliasing protection. The frequency components of the data that would be aliased by the migration operator are found in the smaller scale or higher frequency/wavenumber bands. The frequency or wavenumber dependent aliased energy is automatically reduced during wavelet reconstruction because it is not consistent across wavelet scales. Using a field data example, we demonstrate that this method produces better resolution and signal-to-noise than conventionally antialiased Kirchhoff migration regardless of the dips of the summation trajectories and the frequency content of the input data.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2004 SEG Annual Meeting, October 10–15, 2004

Paper Number: SEG-2004-1135

... ABSTRACT **Antialiasing** filters in Kirchhoff migration are designed to eliminate migration artifacts caused by **operator** aliasing. Unfortunately, they also reduce the amplitude of dipping events. This paper discusses the **antialiasing** effect on amplitudes and shows how to preserve the dipping...
Abstract

ABSTRACT Antialiasing filters in Kirchhoff migration are designed to eliminate migration artifacts caused by operator aliasing. Unfortunately, they also reduce the amplitude of dipping events. This paper discusses the antialiasing effect on amplitudes and shows how to preserve the dipping energy beyond aliasing frequencies.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2000 SEG Annual Meeting, August 6–11, 2000

Paper Number: SEG-2000-0806

... imaging quality. Therefore, the importance of **antialiasing** has drawn increasing attention. The migration **antialiasing** dip **operators** are determined by spatial directional derivatives on diffraction hyperboloid surfaces or reflection ellipsoid surfaces depending on the different **antialiasing** algorithm, in...
Abstract

Summary Kirchhoff migration is a powerful imaging technique that accommodates the complexity of 3D seismic data. When implemented correctly, this method performs effectively on variable geometries, complex velocity models and steeply dipping events. However, aliasing noise often contaminates the imaging quality. Therefore, the importance of antialiasing has drawn increasing attention. The migration antialiasing dip operators are determined by spatial directional derivatives on diffraction hyperboloid surfaces or reflection ellipsoid surfaces, depending on the antialiasing algorithm. In this paper, we present a new formula that correctly expresses the dip value related to the trace spacing for 3D prestack Kirchhoff time migration.
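The role of the operator dip can be made concrete with the commonly used antialiasing criterion f_max = 1 / (2 Δx |∂t/∂x|) (as in Lumley et al., 1994; not necessarily the exact formula this paper derives). The hyperbola convention below, t(x) = sqrt(t0² + (x/v)²), and all the numbers are illustrative assumptions.

```python
import numpy as np

def fmax_antialias(x, t0, v, dx):
    """Maximum unaliased frequency along a zero-offset diffraction hyperbola
    t(x) = sqrt(t0**2 + (x/v)**2), using f_max = 1 / (2*dx*|dt/dx|)."""
    t = np.sqrt(t0**2 + (x / v) ** 2)       # traveltime on the hyperbola
    dip = np.abs(x) / (v**2 * t)            # analytic operator dip |dt/dx|
    with np.errstate(divide="ignore"):
        return 1.0 / (2.0 * dx * dip)       # inf at the flat apex (x = 0)

# Offsets from the apex [m], with t0 = 1 s, v = 2000 m/s, 25 m trace spacing.
x = np.array([0.0, 500.0, 1000.0, 2000.0])
f = fmax_antialias(x, t0=1.0, v=2000.0, dx=25.0)
# The allowed frequency drops as the operator steepens away from the apex.
assert np.isinf(f[0]) and np.all(np.diff(f[1:]) < 0)
```

This is exactly the trade-off the surrounding abstracts discuss: steeper operator dip forces a lower cutoff, which is what dip-dependent antialiasing filters implement.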

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1991 SEG Annual Meeting, November 10–14, 1991

Paper Number: SEG-1991-1211

... correction upstream oil & gas geophy dmo **operator** amplitude **Antialiasing** and Amplitude Preserving 2-D and 3-D DMO: An Integral Implementation Ron Silva* and Paul Haskey, Simon-Horizon Ltd., England SUMMARY A method is described whereby both anti-aliasing and amplitude preserving DMO can be...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1992 SEG Annual Meeting, October 25–29, 1992

Paper Number: SEG-1992-0995

... ABSTRACT No preview is available for this paper. migration **operator** correction equation upstream oil & gas directivity correction summation weighting factor reservoir characterization kirchhoff migration travel time wavefield extrapolator directivity factor impulse...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1987 SEG Annual Meeting, October 11–15, 1987

Paper Number: SEG-1987-0729

... ABSTRACT No preview is available for this paper. christopher liner **operator** trace zero-offset section operation spatial equation reservoir characterization trace modeling **operator** upstream oil & gas colorado school multichannel reflection data joshua ronen amplitude...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 1988 SEG Annual Meeting, October 30–November 3, 1988

Paper Number: SEG-1988-1113

... everett mobley **operator**-aliasing noise Amplitude and **Antialiasing** Treatment in (x-t) domain DMO Craig J. Beasley and Everett Mobley, Western Geophysical s 17.4 SUMMARY Kirchhoff-summation types of algorithms give an efficient and practical means for accomplishing dip-moveout (DMO). In analogy with...
Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2013 SEG Annual Meeting, September 22–27, 2013

Paper Number: SEG-2013-0584

...Dip-adaptive **operator** anti-aliasing for Kirchhoff migration Zhou Yu*, John Etgen, David Whitcombe, Linda Hodgson and Hui Liu, BP Summary Independent from data and image aliasing, **operator** aliasing occurs when the input data spacing is too coarse and the frequency is too high for the steep migration...
Abstract

Summary Independent of data and image aliasing, operator aliasing occurs when the input data spacing is too coarse and the frequency is too high for the steep migration operator summation trajectory. The conventional method for protecting the Kirchhoff migration operator from aliasing is to apply a low-pass filter to each input trace prior to summation, with an assumption of a flat reflector. This can reduce resolution for dipping structures because the high frequency components of the input data are removed from the migrated image as the output dip increases. Our new operator anti-aliasing method preserves the optimal resolution that the input data provide, while suppressing the aliased energy without any dip constraint. The method is based on dip filtering on the pre-summation gather to remove all non-flat energy. Since the pre-summation gather is often strongly spatially aliased, a local complex wavelet transform-based dip filter is used to avoid spreading the aliasing artifacts. Using both synthetic and field data examples, we demonstrate that this new dip-adaptive operator anti-aliasing produces better resolution and amplitude fidelity than conventional anti-aliasing Kirchhoff migration for dipping reflectors, while leaving flat reflectors unchanged.

Proceedings Papers

Publisher: NACE International

Paper presented at the CORROSION 2001, March 11–16, 2001

Paper Number: NACE-01291

... ABSTRACT Removing DC trends before calculating power spectral densities is a necessary **operation**, but the choice of the method is probably one of the most difficult problems in electrochemical noise measurements. The procedure must be simple and straightforward, must effectively attenuate the...
Abstract

ABSTRACT Removing DC trends before calculating power spectral densities is a necessary operation, but the choice of the method is probably one of the most difficult problems in electrochemical noise measurements. The procedure must be simple and straightforward, and must effectively attenuate the low frequency components without eliminating useful information or creating artifacts. Several procedures will be presented, including moving average removal, linear detrending, polynomial fitting, and analog or digital high-pass filtering, and their effect on electronic and electrochemical signals discussed. The results show that the best technique appears to be polynomial detrending. By contrast, the recently proposed moving average removal method was found to have considerable drawbacks and its use should not be recommended. INTRODUCTION In the analysis of electrochemical noise (EN), when extracting statistical information from the time records of the fluctuations of the electrical quantities (current or voltage), one is often confronted with the problem that the signal sampled does not appear to be stationary, at least within the measurement time T. The signal is said to be drifting, and since the calculation of the power spectral density (PSD) or even of the standard deviation presupposes a stationary process, it is necessary to apply some procedure to the incoming signal so as to eliminate the contribution of what is commonly called its drift. The reasons for this behavior may be different and hard to know: for example, the signal may be stationary but contain frequency components lower than f0 = 1/T, or there may be some slow alteration of the system under study that causes the drift, whether linear or not. In corrosion studies, progressive deterioration of the electrodes, and therefore lack of stationarity, is to be expected in many cases. An illustrative example is given by a random signal superimposed on a linear drift.
In the implementation employed to generate Fig. 1a, white noise (2 mV p-p) in the range from 0 to 300 Hz, produced by a signal generator, was added to a ramp with a 1.6 mV/min slope. Both time records in Fig. 1a and 1b consist of 2048 points, but in the first the sampling rate is 10 Hz with a low-pass antialiasing filter at 3.3 Hz, while in the second it is 100 Hz, the cut-off frequency of the antialiasing filter being set at 33 Hz. For this case there is an analytical solution, and the expression for the PSD is:

$$\Psi_{x,comp}(f) = \Psi_x(f) + \frac{a^2 T}{2\pi^2 f^2} \qquad (1)$$

where $\Psi_{x,comp}$ is the PSD of the composite signal, $\Psi_x$ that of the signal without drift, and the second term, which represents the effect of the drift, contains the slope, a, and the duration, T, of the time record. This component of the PSD, which gives a straight line of slope -2 in the logarithmic plot, is deterministic and not stochastic, so that the error is zero. For this reason, the low-frequency part of the spectra in Fig. 2, which are produced from the fast Fourier transform (FFT) of the two time records, is very smooth. Although the signal sampled in the two figures is the same, since Eq. 1 contains the total time T, which is different in the two cases, the PSD is different, as shown in Fig. 2. The fact that the two PSDs do not join is another indication that the signal is not stationary, so that one can only speak of pseudo-PSDs. If the sampling rate is increased to 1 kHz (antialiasing filter set at 330 Hz), curve c in Fig. 2 is a good representation of the PSD of white noise because the amplitude aT of the drift during the acquisition time is now small compared to the random fluctuations of 2 mV p-p. One has here the analytical proof of the experimentalist's common sense that if the drift is negligible during
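The drift term in Eq. 1 and the effect of detrending can be reproduced in a few lines. The slope and record length below loosely follow the example above, but the noise level is assumed and a plain periodogram stands in for whatever PSD estimator was used.

```python
import numpy as np

# Illustrative reconstruction: white noise superimposed on a linear ramp.
# The ramp contributes a deterministic a^2*T/(2*pi^2*f^2) term to the PSD
# (slope -2 on a log-log plot); least-squares linear detrending removes it.
rng = np.random.default_rng(2)
fs = 10.0                                     # sampling rate [Hz]
n = 2048                                      # samples, as in the records above
t = np.arange(n) / fs
a = 1.6e-3 / 60.0                             # 1.6 mV/min slope, in V/s
x = a * t + 0.5e-3 * rng.standard_normal(n)   # ramp + noise (level assumed)

def periodogram(sig, fs):
    """One-sided periodogram PSD estimate."""
    X = np.fft.rfft(sig)
    return (np.abs(X) ** 2) / (fs * sig.size)

# Linear (order-1 polynomial) detrend by least squares.
x_detr = x - np.polyval(np.polyfit(t, x, 1), t)

raw = periodogram(x, fs)
detr = periodogram(x_detr, fs)
# Detrending suppresses the spurious low-frequency drift energy.
assert detr[1:5].sum() < raw[1:5].sum()
```

Swapping the order-1 fit for a higher-order `np.polyfit` gives the polynomial detrending the abstract recommends.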

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2007 SEG Annual Meeting, September 23–28, 2007

Paper Number: SEG-2007-2344

... along a diffraction surface, they are filtered for waveform phase shaping and pre-filtered in preparation for the **antialiasing** **operation** applied during summation. Since the integration process shifts the waveform shape, the filter works restoring it. The anti-aliasing filter is applied in three...
Abstract

INTRODUCTION SUMMARY Since 3D Prestack Kirchhoff Depth Migration (KPSDM) has become one of the leading imaging tools for hydrocarbon exploration, its accurate and precise handling of the kinematical and dynamical aspects of the wavefield has become center stage in R&D efforts worldwide. A separate paper in these proceedings by the same author describes a modified antialiasing filter weight that corrects for amplitude artifacts observable in earlier designs. Here we continue the efforts of developing an efficient true amplitude migration algorithm by suggesting a simplification of the traditional filtering done during this process that will improve the performance and precision of the results. In most Kirchhoff migration implementations, a triangular smoothing filter is used to avoid high frequency aliasing along the migration operator. This filter is implemented in three steps: causal integration, anti-causal integration, and Laplace-type differentiation along the diffraction stacking surface. In addition, a derivative filter (known as the r–filter) is applied to the input data to correct for the wavelet phase rotation introduced by the Kirchhoff summation. We will find that the standard filtering sequence of applying the r–filter, causal integration, and anti-causal integration can be replaced by just an anti-causal integration. Kirchhoff migration provides one of the best imaging solutions when data are non-uniformly distributed in space. It is also fast and flexible when it comes to input/output geometries. In our quest to make 3D KPSDM better and faster we have found an alternative that achieves a comparable result by replacing the traditional filtering sequence, made of an r–filter, causal integration and anti-causal integration, with a single anti-causal integration filter. Note that this is applicable to time as well as depth migration algorithms.

Before the data are stacked along a diffraction surface, they are filtered for waveform phase shaping and pre-filtered in preparation for the antialiasing operation applied during summation. Since the integration process shifts the waveform shape, the r–filter works to restore it. The anti-aliasing filter is applied in three stages: causal integration, anti-causal integration and then a Laplacian computation rolling along the stacking diffraction surface. By replacing these three filters with a single anti-causal integration, the performance and accuracy of the algorithm can be greatly improved. The results presented here also incorporate the normalization factor in the anti-aliasing filter mentioned above and described elsewhere in these proceedings. These corrections eliminate azimuthally anisotropic amplitude behavior in the migration impulse response as well as amplitude distortions with time and offset introduced by the traditional scaling factor in Lumley et al. (1994) and Abma et al. (1999). THREE FILTERS IN ONE Differentiation in the frequency domain can be accomplished by multiplication by -iω, where ω is the circular frequency; a discrete difference with temporal sampling rate Δt only approximates this, and as Δt goes to zero the approximation becomes an equality. From this it is straightforward to conclude that the three filters, r–filter, causal integration and anti-causal integration, should be equivalent to just an anti-causal integration.
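The collapse of the three filters can be checked numerically in the time domain, where it is almost a tautology: differencing undoes the causal cumulative sum exactly, leaving only the anti-causal integration. Modelling the r–filter as a plain first difference is our assumption; the abstract defines it only as a derivative filter.

```python
import numpy as np

# Sketch: (first difference) o (causal integration) o (anti-causal
# integration) equals a single anti-causal integration.
rng = np.random.default_rng(3)
x = rng.standard_normal(128)

# Anti-causal integration: cumulative sum running backwards in time.
anticausal = np.cumsum(x[::-1])[::-1]

# Causal integration followed by a first difference (our stand-in r-filter).
cascade = np.diff(np.cumsum(anticausal), prepend=0.0)

assert np.allclose(cascade, anticausal)
```

In the continuum the same statement reads (-iω) · (1/(iω)) · (-1/(iω)) = -1/(iω), up to the sign conventions chosen for the transforms.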

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2003 SEG Annual Meeting, October 26–31, 2003

Paper Number: SEG-2003-1091

...: SEP 80, 447-490. Silva, R., 1992, **Antialiasing** and the application of weighting factors in Kirchhoff migration: 62nd Ann. Internat. Mtg., Soc. Expl. Geophys., Expanded Abstracts, 995-998. Sun, J., and Bernitsas, N., 1999, **Antialiasing** **operator** dip in 3D prestack Kirchhoff time migration - an exact...
Abstract

SUMMARY With the widespread adoption of wavefield continuation methods for prestack migration, the concept of operator aliasing warrants revisiting. While zero-offset migration is unaffected, prestack migrations reintroduce the issue. Some situations where this problem arises include subsampling the shot-axes to save shot-profile migration costs and limited cross-line shot locations due to acquisition strategies. These problems are overcome in this treatment with the use of an appropriate source function or band-limiting the energy contributing to the image. We detail a synthetic experiment that shows the ramifications of subsampling the shot axis and the efficacy of addressing the problems introduced with our two approaches. Further, we explain how these methods can be tailored in some situations to include useful energy residing outside of the Nyquist limits.
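The aliasing bookkeeping behind this abstract is simple: subsampling the shot axis doubles the spacing and halves the spatial Nyquist wavenumber, so energy beyond the new limit must be band-limited away (or, as the paper suggests, recovered with an appropriate source function). The cycles-per-metre convention below is an assumption.

```python
# Spatial Nyquist wavenumber for a given shot spacing, in cycles per metre.
def k_nyquist(dx_m: float) -> float:
    return 1.0 / (2.0 * dx_m)

# Dropping every other shot doubles the spacing and halves the Nyquist
# limit, so previously unaliased dips can wrap around.
assert k_nyquist(25.0) == 0.02
assert k_nyquist(50.0) == k_nyquist(25.0) / 2
```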

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2013 SEG Annual Meeting, September 22–27, 2013

Paper Number: SEG-2013-0956

... suboptimal further processing. Matching pursuit Fourier interpolation (MPFI) is a beyond-aliasing interpolation technique for single-component seismic data. The **antialiasing** capabilities of the method can be improved by using priors, which are typically derived from the lower frequencies in the data, and...
Abstract

Summary Seismic data are typically irregularly and sparsely sampled along the spatial coordinates, leading to suboptimal further processing. Matching pursuit Fourier interpolation (MPFI) is a beyond-aliasing interpolation technique for single-component seismic data. The antialiasing capabilities of the method can be improved by using priors, which are typically derived from the lower frequencies in the data, and used to dealias the higher frequencies. In this paper we investigate using a prior derived from a separate, more densely sampled data set. Practical examples are dense-over/sparse-under data and time-lapse data. Tests are done by decimating an existing dataset, deriving the prior from the non-decimated data, and using the priors for the interpolation of the decimated data. It is shown that using priors from a second data set can give a significant uplift in data reconstruction compared with deriving the priors in a conventional way. In particular, some steeply dipping diffraction events are reconstructed better, and a reduction of artefacts is observed.

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2010 SEG Annual Meeting, October 17–22, 2010

Paper Number: SEG-2010-3662

... data to extract **antialiasing** prediction filters and then interpolates high frequencies beyond aliasing. Claerbout (1992) treated Spitz's method as a prediction-error filter in the original t-x domain. Porsani (1999) proposed a half-step prediction-filter scheme that makes the interpolation process...
Abstract

SUMMARY Seismic data are often inadequately sampled along spatial axes. Spatially aliased data can produce imaging results with artifacts. We present a new adaptive prediction-error filter (PEF) approach based on regularized nonstationary autoregression, which aims at interpolating aliased seismic data. Instead of using patching, a popular method for handling nonstationarity, we obtain smoothly nonstationary PEF coefficients by solving a regularized least-squares problem. Shaping regularization is used to control the smoothness of adaptive PEFs. Finding the interpolated traces can be treated as another linear least-squares problem that solves for data values rather than filter coefficients. Using benchmark synthetic and real data examples, we successfully apply this method to the problem of seismic trace interpolation. INTRODUCTION The spatial sampling interval is an important factor that controls seismic resolution. Too large a spatial sampling interval leads to aliasing problems, which can adversely affect migration and result in poor lateral resolution of subsurface images. An alternative to expensive dense spatial sampling is interpolation of seismic traces (Spitz, 1991). One important approach to trace interpolation is prediction interpolating methods, mainly an extension of Spitz's original method, which uses low-frequency non-aliasing data to extract antialiasing prediction filters and then interpolates high frequencies beyond aliasing. Claerbout (1992) treated Spitz's method as a prediction-error filter in the original t-x domain. Porsani (1999) proposed a half-step prediction-filter scheme that makes the interpolation process more efficient. Wang (2002) extended f-x trace interpolation to higher dimensions, the f-x-y domain. Gulunay (2003) introduced an algorithm similar to f-x prediction filtering, which has an elegant representation in the f-k domain.
Naghizadeh and Sacchi (2009) proposed an adaptive f-x interpolation using exponentially weighted recursive least squares. Most recently, Naghizadeh and Sacchi (2010) used a prediction approach similar to Spitz's method, except that the curvelet transform is involved instead of the Fourier transform. Seismic data are nonstationary, but a standard PEF can only be used to interpolate stationary data (Claerbout, 1992). Patching is a common method to handle nonstationarity (Claerbout, 2010), although it occasionally fails in the assumption of piecewise constant dips. Crawley et al. (1999) proposed smoothly nonstationary PEFs with "micropatches" and radial smoothing, which typically produces better results than the rectangular patching approach. Fomel (2002) developed a plane-wave destruction (PWD) filter (Claerbout, 1992) as an alternative to the t-x PEF and applied the PWD operator to nonstationary trace interpolation. However, the PWD method depends on the assumption of a small number of smoothly variable seismic dips. In this paper, we use the two-step strategy, similar to that of Claerbout (1992) and Crawley et al. (1999), but calculate the adaptive PEF by using regularized nonstationary autoregression (Fomel, 2009) to deal with both nonstationarity and aliasing. Shaping regularization (Fomel, 2007) controls the locally smooth interpolation. We test the new method by using several benchmark synthetic examples. Results of applying the proposed method to a field example demonstrate that regularized adaptive PEF can be effective in trace interpolation problems, even in the presence of multiple variable dips.
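A stationary PEF, the building block the adaptive method generalises, can be illustrated in a few lines: estimate prediction coefficients by least squares and check that they predict the signal. The filter order and test signal are arbitrary choices for this sketch; the paper's adaptive version additionally lets the coefficients vary smoothly in time and space under shaping regularization.

```python
import numpy as np

# Minimal stationary prediction-error filter (PEF) sketch: fit an
# autoregression x[t] ~ sum_j c[j] * x[t-1-j] by least squares on a
# noise-free pair of sinusoids (order and frequencies are illustrative).
n, p = 200, 4                           # trace length, filter order
t = np.arange(n)
x = np.sin(0.2 * t) + 0.5 * np.sin(0.53 * t + 1.0)

# Each row of A holds the p samples preceding the corresponding target.
A = np.column_stack([x[p - 1 - j : n - 1 - j] for j in range(p)])
b = x[p:]
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

pred = A @ coef
# Two real sinusoids form exactly an AR(4) process, so the prediction
# error is at machine-precision level.
assert np.max(np.abs(pred - b)) < 1e-8
```

Interpolation schemes in the Spitz family reuse such coefficients, estimated from unaliased low frequencies, to predict the missing (aliased) traces at higher frequencies.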

Proceedings Papers

Paper presented at the International Petroleum Technology Conference, December 7–9, 2009

Paper Number: IPTC-13998-ABSTRACT

... is coarse. One of the key reasons for the loss of bandwidth with large midpoint bins is due to the anti-alias filtering of the migration **operators** that must be done. As discussed by Abma et al. (1999), such filtering is needed to prevent the generation of artifacts. The resolution implications are...
Abstract

This reference is for an abstract only. A full paper was not submitted for this conference. Introduction The word "resolution" is often assumed to refer to the specific case of temporal resolution. In that regard, Kallweit & Wood (1982) observed that when two octaves of bandwidth are present, the limit of temporal resolution can be expressed as 1/(1.4 x FMAX). However, equally important is the issue of spatial resolution. One of the methods proposed by Berkhout (1984) for quantifying spatial resolution is via the use of the "spatial wavelet". Such wavelets demonstrate that better temporal resolution leads to better spatial resolution. A key point in this paper, though, is that this relationship works the other way too. That is, better spatial resolution leads to better temporal resolution. For instance, of great interest spanning from the Gulf of Mexico to the Red Sea is the exploration for reservoirs beneath salt. In order for the migration process to be able to produce high temporal frequencies in the images of reflections beneath salt, the corrugated nature of the top-salt boundary needs to be portrayed faithfully in the velocity model. However, if a smoothed version of that boundary is used instead (as would certainly be the case in the first round of tomography), the spatial resolution of the top salt is lost. This is what leads to a forfeiture of the subsalt temporal resolution. Binning requirements The formulas for spatial wavelets are computed from calculus via the analytic integration of continuous functions. However, seismic data are sampled in time and space, and the imaging calculations use discrete summations. This means the spatial resolution in real surveys is more limited than indicated by the spatial wavelets - and the limitation gets worse when the sampling is coarse. One of the key reasons for the loss of bandwidth with large midpoint bins is due to the anti-alias filtering of the migration operators that must be done. As discussed by Abma et al. 
(1999), such filtering is needed to prevent the generation of artifacts. The resolution implications are demonstrated in Figure 1. A depth-varying velocity function from an onshore survey was used to model diffractions from two closely spaced points in the zone of interest. Those diffractions were then migrated and stacked. The results from two candidate survey designs are shown. The macro designs were identical. However, the source and receiver intervals were selected to yield the 40-ft (12 m) and 80-ft (24 m) CMP bin dimensions in the two surveys respectively. We can see that the 40-ft CMP bin design clearly resolves the two points that are 200 ft (61 m) apart, but the 80-ft design does not. Also, analyses of spectra (not shown) reveal that the temporal bandwidth for the 40-ft case is better than that from the 80-ft scenario - again confirming the inter-relationship of temporal and lateral resolution. This situation is definitely shared in marine surveys too. Such examples not only demonstrate the benefits of the more detailed structural interpretation that can be obtained from small-bin surveys, they also demonstrate the more detailed identification of reservoir properties that can be derived from inversion. Coordinate accuracy requirements Of course hand-in-hand with the drive for greater spatial resolution should be the drive for greater accuracy in source and receiver coordinate information. That is understandably more challenging in the offshore case. To investigate this issue, modeling and subsequent migration tests similar to those performed for Figure 1 were executed for a marine survey design. A velocity function was used from a field where the target was 6130 m deep. After the modeling of the diffraction surfaces was performed, the source and receiver coordinates were perturbed. This caused the migration to be conducted with inaccurate coordinate information. Three scenarios are featured in Figure 2. The panel on the left is used for reference. 
In that case, the correct coordinates were used for the migration. The panel in the middle shows the results obtained when the receiver coordinates were perturbed using a Gaussian distribution characterized by a 3-m standard deviation. That is similar to the type of accuracy that is available from leading-edge acoustic positioning systems. The panel on the right shows the result when the standard deviation was 20 m. That is akin to the type of accuracy that was available in early surveys that relied solely on compasses for streamer navigation data. We can see that the loss of resolution induced by the 3-m inaccuracy is of no great consequence. The two point diffractors that are separated by 30 m are easily resolved. However, those diffractors are not resolved when the standard deviation is 20 m. Note that in this exercise the bin dimensions are 5 m. So, the right-most panel in the figure demonstrates that small bins by themselves are not sufficient for good resolution. Accuracy in coordinates is required too. Enabling technologies Improved (temporal and spatial) resolution requires denser spatial sampling. This naturally implies that massively more shots (via continuous recording techniques) and/or higher channel counts are required in acquisition. Indeed such strategies would seem to be ideal for onshore programs in the Middle East and North Africa where the desert environments place minimal restriction on access. However, in other regions, topography, vegetation, infrastructure, and many other things often severely restrict where shot points can be placed. In those cases, the burden of denser spatial sampling would have to be placed primarily on the channel count. Whatever the case, the quest for better sampling also implies that each shot should ideally be a point (as in the case of a single vibrator) and each receiver should be recorded by a separate channel - otherwise there will be smearing of the signal. 
But this is not to say that it would be sufficient simply to use more channels and more computers. An order of magnitude increase in the number of live channels requires paradigm shifts in data QC, data transfer, and processing. It also requires improvements in things like positioning accuracy - as mentioned above. So assuming all hurdles are overcome, how many live channels would we like to have in each shot? Well frankly, most geophysicists would probably take all that they could get. Today, large "conventional" land and marine acquisition systems might have 4,000 to 5,000 channels. However, some single-sensor land systems have offered up to 30,000 live channels - with further advances to 150,000 channels recently launched. Similarly, marine single-sensor systems can record tens of thousands of live channels - with the main limitation being how many streamers can be towed by the vessel. Final Remarks What we have said here is that resolution is multifaceted. Good temporal resolution does not depend simply on how much high-frequency energy our seismic sources can pump into the ground. Good temporal resolution in the 3D migrated image also requires good spatial sampling. Good spatial sampling requires high channel counts. High channel counts require a paradigm shift in everything from QC procedures to final interpretation. Also, the very definition of "sampling" implies discrete sampling - not mixing. And finally, the big questions of course are just how small do the bins have to be, and how many channels are needed? In other words, what are the requirements in the field design that are needed to meet the requirements in resolution? Projects with which the authors have been involved employed bins as small as 3 m or so. 
Such density is certainly not yet required in most areas, but it might very well be appropriate for specific instances ranging from the SAGD programs of the heavy oil province in Canada to high-resolution surveys in heavily karsted zones of the Middle East. As a matter of practice, proper survey evaluation and design studies need to be conducted to answer these field-specific questions. References Abma, R., Sun, J., & Bernitsas, N., 1999, Antialiasing methods in Kirchhoff migration: Geophysics, 64, 1783–1792. Berkhout, A. J., 1984, Seismic exploration - seismic resolution: a quantitative analysis of resolving power of acoustical echo techniques: Geophys. Press, London. Kallweit, R. S., & Wood, L. C., 1982, The limits of resolution of zero-phase wavelets: Geophysics, 47, 1035–1046.
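The Kallweit & Wood rule quoted in the introduction is simple enough to sketch numerically. A minimal illustration, assuming only the 1/(1.4 x FMAX) relation stated above (the function name is mine, not from the paper):

```python
# Hedged sketch of the Kallweit & Wood (1982) rule: with roughly two octaves
# of bandwidth present, the temporal resolution limit is 1 / (1.4 * FMAX).
def temporal_resolution_limit(f_max_hz: float) -> float:
    """Smallest resolvable two-way time separation, in seconds."""
    return 1.0 / (1.4 * f_max_hz)

# Doubling the upper band limit halves the resolvable separation, which is
# the temporal side of the temporal/spatial trade-off discussed above.
print(round(temporal_resolution_limit(80.0) * 1000, 2))   # limit in ms for an 80-Hz band edge
```

This makes concrete why migration processes that lose high frequencies (e.g., through heavy operator antialiasing on coarse bins) directly degrade the resolvable event separation.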

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2008 SEG Annual Meeting, November 9–14, 2008

Paper Number: SEG-2008-2512

... the essential theoretical equivalence of the two using a simple three line mathematical argument. After that I look at similarities and differences in the implementation of the two methods, from deghosting, obliquity and

**antialiasing**, to memory requirements and out-of-core pipelining, and finally...
Abstract

Summary In this abstract I compare theoretical and practical aspects of the Delft and the Inverse Scattering surface-related multiple attenuation approaches. I first show the essential theoretical equivalence of the two using a simple three-line mathematical argument. After that I look at similarities and differences in the implementation of the two methods, from deghosting, obliquity and antialiasing, to memory requirements and out-of-core pipelining, and finally comparing their relative computational cost. Of particular note, I show that (1) no spatial FFT padding is required to suppress wraparound artifacts and (2) Fourier spatial interpolation does nothing whatsoever to reduce aliasing artifacts. Introduction The fundamental principle of surface-related multiple prediction is that the upcoming energy recorded at any given trace location in a shot produces a downgoing reflected signal that is a secondary source whose response can be predicted by convolving that trace with a suitably tailored shot record whose shot position is at that given trace location. Because this basic convolution implicitly squares the source spectrum, deconvolution is used to flatten the source spectrum so that its square remains flat over the bandwidth of the data. Of course, additional adaptive shaping is needed to fine tune the imperfectly predicted multiples. Interpolation and extrapolation of the input data to fill in missing inner offsets is a necessary preprocessing step. In addition to attenuating edge effects, this step accounts for the fact that in a horizontally layered earth multiples at offset 2h arise from primaries at offset h. In addition, it is generally helpful to interpolate each shot record 2:1. This last preprocessing improvement is due to Bill Dragoset (pers. comm.) who notes that the convolution will take two dips p and q and turn them into a steeper dip p+q. 
Since the multiple estimate involves summation across such convolved traces, sufficiently aliased energy will stack in as background fuzz. By preinterpolating 2:1, an original dip of p msec/trace becomes p/2 instead, suppressing its contribution to aliasing. Deghosting is generally needed to separate the up and downgoing shot and recorded traces. This operation depends upon the source and receiver depths and the angles at which the seismic energy arrives at the surface. An obliquity correction can be applied at the same time as deghosting if done in the Fourier or τ-p domains. The cosine obliquity term compensates for the difference between amplitude and energy. Theory Let us initially assume that we have arranged our shots and receivers on a common grid, with the shot and receiver spacings both equal to the grid spacing. After a temporal Fourier transform to constant frequency slices, a nice way of visualizing the basic Delft (Verschuur et al., 1988) operation of predicting multiples, M, is as a matrix multiplication of a data matrix, D, and a proxy primary matrix, Q, which is typically the input data itself or the output of a previous demultiple iteration.
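Dragoset's dip-addition observation has a one-line numerical analogue: convolving two delayed spikes produces a spike at the sum of the delays. Trace by trace, that is exactly why an event of dip p combined with one of dip q emerges at dip p+q, and why halving the trace spacing by 2:1 interpolation halves the dip in ms/trace. A hedged NumPy sketch (illustrative only, not the paper's code):

```python
import numpy as np

# A spike delayed by n1 samples convolved with a spike delayed by n2 samples
# peaks at n1 + n2: delays, and hence dips in ms/trace, add under the SRME
# convolution. 2:1 preinterpolation halves the dip and so eases aliasing.
n1, n2, n = 30, 45, 128
a = np.zeros(n); a[n1] = 1.0
b = np.zeros(n); b[n2] = 1.0
c = np.convolve(a, b)            # full convolution, length 2n - 1
print(int(np.argmax(c)))         # -> 75, i.e. n1 + n2
```

Replacing the spikes with band-limited wavelets shifts nothing: the convolution's delay (dip) still sums, which is the whole bookkeeping behind the 2:1 preinterpolation recommendation.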

Proceedings Papers

Publisher: Society of Exploration Geophysicists

Paper presented at the 2003 SEG Annual Meeting, October 26–31, 2003

Paper Number: SEG-2003-1130

... Summary

**Operator**/Imaging aliasing introduced in Kirchhoff migration is often tackled by trace tapering, aperture truncation or time and offset-variant filtering. The latter approach is the most suitable. However, most implementations and published results using this technique are derived for...
Abstract

Summary Operator/Imaging aliasing introduced in Kirchhoff migration is often tackled by trace tapering, aperture truncation, or time- and offset-variant filtering. The latter approach is the most suitable. However, most implementations and published results using this technique are derived for Kirchhoff time migration and assume constant-velocity media. In this paper, we introduce an antialiasing filter for ray-based pre-stack depth migration and for general heterogeneous velocity models. We illustrate the benefits of such a scheme on a numerical example. Introduction Three kinds of spatial aliasing can occur during the migration process, all leading to poor and ambiguous images. The three categories, data, image, and operator aliasing, have been extensively discussed by Lumley et al. (1994) and Biondi (2001).
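The time- and offset-variant filtering this paper builds on reduces to a standard criterion: the local traveltime slope of the migration operator and the trace spacing together set the highest frequency that can be summed without operator aliasing. A minimal sketch of that criterion (after Lumley et al., 1994; the function name is mine, and this is not the paper's ray-based implementation):

```python
import math

# Operator-antialiasing criterion: for a local operator traveltime slope
# |dT/dx| (in s/m) and trace spacing dx (in m), input frequencies above
# f_max = 1 / (2 * |dT/dx| * dx) alias along the operator and should be
# filtered from the trace before it is summed into the image.
def max_unaliased_frequency(dT_dx: float, dx: float) -> float:
    slope = abs(dT_dx)
    return math.inf if slope == 0.0 else 1.0 / (2.0 * slope * dx)

# Steeper operator dip or coarser trace spacing lowers the usable bandwidth;
# e.g. a slope of 0.4 ms/m with 25-m traces allows only about 50 Hz.
print(max_unaliased_frequency(dT_dx=4e-4, dx=25.0))
```

In a heterogeneous model the slope dT/dx varies along the operator, so the cutoff must be evaluated locally, which is precisely the generalization the abstract targets.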

Journal Articles

Journal:
SPE Drilling & Completion

Publisher: Society of Petroleum Engineers (SPE)

*SPE Drilling & Completion*(2021)

Paper Number: SPE-204032-PA

Published: 25 February 2021

... back-drive dynamic drilling conference drillstring design back-drive event bit rotation speed torque drilling

**operation** texas exhibition rotation speed spe annual technical conference oscillation frequency spe iadc drilling conference In North American shale drilling applications...
Abstract

Summary North American shale drilling is a fast-paced environment where downhole drilling equipment is pushed to the limits for the maximum rate of penetration (ROP). Downhole mud motor power sections have rapidly advanced to deliver more horsepower and torque, resulting in different downhole dynamics that have not been identified in the past. High-frequency (HF) compact drilling dynamics recorders embedded in the drill bit, mud motor bit box, and motor top subassembly (top-sub) provide unique measurements to fully understand the reaction of the steerable-motor power section under load relative to the type of rock being drilled. Three-axis shock, gyro, and temperature sensors placed above and below the power section measure the dynamic response of power transfer to the bit and associated losses caused by back-drive dynamics. Detection of back-drive from surface measurements is not possible, and many measurement-while-drilling (MWD) systems do not have the measurement capability to identify the problem. Motor back-drive dynamics severity is dependent on many factors, including formation type, bit type, power section, weight on bit, and drillpipe size. The torsional energy stored and released in the drillstring can be high because of the interaction between surface rotation speed/torque output and mud motor downhole rotation speed/torque. Torsional drillstring energy wind-up and release results in variable power output at the bit, inconsistent rate of penetration, rapid fatigue on downhole equipment, and motor or drillstring backoffs and twistoffs. A new mechanism of motor back-drive dynamics caused by the use of an MWD pulser above a steerable motor has been discovered. 
HF continuous gyro sensors and pressure sensors were deployed to capture the mechanism in which a positive mud pulser reduces the mud flow through the motor, and hence the bit rotation speed, by as much as one-third, creating a propensity for a bit to come to a complete stop in certain conditions and for the motor to rotate the drillstring backward. We have observed the backward rotation of a polycrystalline diamond compact (PDC) drill bit during severe stick-slip and back-drive events (−50 rev/min above the motor), confirming that the bit rotated backward for 9 milliseconds (ms) every 133.3 ms (at 7.5 Hz), using a 1,000-Hz continuous sampling/recording in-bit gyro. In one field test, multiple drillstring dynamics recorders were used to measure the motor back-drive severity along the drillstring. It was discovered that the back-drive dynamics are worse at the drillstring, approximately 1,110 ft behind the bit, than those measured at the motor top-sub position. These dynamics caused drillstring backoffs and twistoffs in a particular field. A motor back-drive mitigation tool was used in the field to compare the runs with and without the mitigation tool while keeping the surface drilling parameters nearly the same. The downhole drilling dynamics sensors were used to confirm that the mitigation tool significantly reduced stick-slip and eliminated the motor back-drive dynamics in the same depth interval. Detailed analysis of the HF embedded downhole sensor data provides an in-depth understanding of mud motor back-drive dynamics. The cause, severity, reduction in drilling performance, and risk of incident can be identified, allowing performance and cost gains to be realized. This paper will detail the advantages of understanding and reducing motor back-drive dynamics, a topic that has not commonly been discussed in the past.

Proceedings Papers

Publisher: Offshore Technology Conference

Paper presented at the Offshore Technology Conference, May 4–7, 1998

Paper Number: OTC-8680-MS

... rsi bandwidth technology conference output space input data interpolation

**operator** frequency input sample theorem waveform output buffer reservoir characterization nyquist frequency upstream oil & gas sampling unambiguous signal recovery trajectory sample interval normal...
Abstract

Abstract A method is presented for processing multi-channel seismic data that recovers, simultaneously and unambiguously, signal frequencies above and below the temporal Nyquist of the input data. By exploiting signal stationarity and redundancy, I show that the sample interval of a single input channel does not uniquely determine the maximum recoverable frequency. The method is tested using normal moveout (NMO) and stack on synthetic model data. Frequencies of 10 to 225 Hz are recovered from input data sampled at 4 ms. I then apply the method to real data. The results demonstrate that antialias strategies based on the one-dimensional Shannon-Whittaker sampling theorem (1) impose an unnecessary limit on the ability to recover high frequencies and maximize signal resolution. The method is applicable wherever the trajectory of the signal to be imaged irregularly intersects the sampling grid. It is appropriate for resolving both temporal and spatial aliasing concerns. Introduction Since the inception of digital signal processing, concerns about signal aliasing have played a major role in determining useable signal bandwidth and the cost of obtaining that bandwidth. For example, the authors of a 1991 article in The Leading Edge observed that "...the major use of Nyquist's work in geophysics is the elimination of alias frequencies on digitally recorded seismic data. The Nyquist frequency is the highest frequency that can be obtained for a given sampling interval." (2). Such concerns have led the seismic industry to systematically restrict potential signal frequencies to below the Nyquist frequency by the application of antialiasing filters based on the Whittaker-Shannon sampling theorem. This paper describes a simple methodology for the recovery of signal components in excess of the Nyquist limit predicted by the one-dimensional Whittaker-Shannon sampling theorem. 
With this method, an optimum balance of image bandwidth and signal-to-noise ratio may be achieved by simple processing parameter choices. The methodology is applicable whenever the signal trajectory is irregularly intersected by a sampling grid of two or more dimensions. Aliasing It is generally accepted that the digital sampling interval and the upper limit of the recoverable signal spectrum are inextricably linked. Authors usually cite the one-dimensional Whittaker-Shannon sampling theorem, which states that the maximum recoverable frequency (f_Nyq) in an evenly-sampled function is given by f_Nyq = 1/(2Δt), where Δt is the sampling interval (in seconds). Frequencies in the input function that exceed this Nyquist frequency are, if not removed prior to sampling, said to be aliased (or folded) in the sampled output. Aliasing is generally understood to mean that those frequencies above f_Nyq are irretrievably lost due to being "mixed" with those below f_Nyq. Commonly, aliasing has been dealt with by one or both of two methods: Alias reduction method #1 Antialias filtering is almost always used to significantly reduce contamination by aliased frequencies whenever analog data are sampled (e.g., during field recording) or whenever the sampling interval is increased. While usually successful in preventing anticipated aliasing problems, this filtering can result in a loss of valuable signal both above and below f_Nyq. Also, additional equipment and/or processing costs are usually incurred.
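The folding described above is easy to reproduce on a single channel. A minimal illustration, tied to the abstract's own numbers (225 Hz recovered from 4-ms data, where f_Nyq = 125 Hz), of why one channel alone cannot distinguish a tone above Nyquist from its alias:

```python
import numpy as np

# On a single channel sampled at dt = 4 ms (f_Nyq = 125 Hz), a 225-Hz cosine
# is sample-for-sample identical to its folded 25-Hz alias (|225 - 250| = 25 Hz).
# This is the one-channel ambiguity that the multi-channel method exploits
# signal redundancy across traces to resolve.
dt = 0.004                            # 4-ms sampling interval
t = np.arange(256) * dt
hi = np.cos(2 * np.pi * 225.0 * t)    # tone above Nyquist
lo = np.cos(2 * np.pi * 25.0 * t)     # its alias below Nyquist
print(bool(np.allclose(hi, lo)))      # -> True: indistinguishable on one channel
```

Only extra information, here the irregular intersection of the signal trajectory with a multi-dimensional sampling grid, can break the tie between the two candidates.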
