Summary

We present a method for reducing random noise and certain types of coherent noise in post-stack 3-D and 2-D seismic data using principal component analysis combined with structure-oriented filtering. Such noise contaminates seismic data, and its removal improves the continuity of reflections and of amplitudes, yields better-quality input for further processing such as spectral whitening and high-resolution curvature, and provides a better data set for structural interpretation.

The noise-reduced data set also permits auto-picking routines to operate more consistently and over larger areas of the data volume than they can on the original data.

Introduction

In our industry, seismic data is acquired to understand the structure and stratigraphy of the earth; any information that does not assist in this understanding is considered noise.

Noise may be broken down into two broad categories: random and coherent. Random noise arises from subtle motions in the earth that cannot be related to any individual source and that have no consistent character in time or in space.

Coherent noise comes from some specific source that is not related to transmission from a controlled source, reflection in the subsurface plane of the section, and reception at the receiver locations. Coherent noise often appears on a seismic section as a steeply dipping event with a continuous waveform. Current acquisition and processing practices are quite effective in reducing both types of noise, but surveys are still sometimes contaminated by random noise (more often in land surveys) and by coherent noise (more often in marine surveys).

We use a multi-trace principal component analysis of a sub-volume of a survey to identify the optimum surface at each sample in the survey and to attenuate energy in that sub-volume which does not correspond to that surface (Marfurt et al., 1999). By limiting the dip rates over which we analyze, we are able to eliminate some types of coherent noise as well as random noise. We complete the processing by applying a form of structure-oriented filtering (Luo et al., 2002; Hocker and Fehmers, 2002) to preserve and enhance legitimate discontinuities in the data.
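The structure-oriented filtering step can be illustrated with a small sketch. The Kuwahara-style window selection below is one common edge-preserving scheme in the spirit of Luo et al. (2002): each sample is replaced by the mean of the most uniform window containing it, so averaging never crosses a sharp discontinuity such as a fault. The function name, the one-dimensional setting, and the window-selection rule are our own simplifications for illustration, not the authors' implementation.

```python
import numpy as np

def edge_preserving_smooth(values, half_width=2):
    """Kuwahara-style edge-preserving smoothing (illustrative sketch).

    `values` is assumed to hold samples already aligned along the
    local structural dip. For each sample we examine every window of
    length (half_width + 1) that contains it, and replace the sample
    with the mean of the window having the smallest variance. A window
    that straddles a discontinuity has high variance, so it is never
    chosen, and sharp edges survive the smoothing.
    """
    n = len(values)
    w = half_width
    out = np.empty(n)
    for i in range(n):
        best_var, best_mean = np.inf, values[i]
        # Candidate windows of length w + 1 that include sample i.
        for start in range(max(0, i - w), min(i, n - w - 1) + 1):
            win = values[start:start + w + 1]
            v = np.var(win)
            if v < best_var:
                best_var, best_mean = v, np.mean(win)
        out[i] = best_mean
    return out
```

For example, a clean step (a fault-like discontinuity) passes through unchanged, while oscillatory noise on a flat event is averaged down.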

Theory

For each sample in the data set, we use a small sub-volume, usually 9 traces and 11 samples, to compute a surface at that point. Using that sub-volume, we scan over a range of dips in three dimensions (Figure 1) until we find the dip which best represents the geologic surface at that point. The technique we use to determine that best dip is a principal component technique wherein each level of the sub-volume is plotted as a 9-dimensional vector in which each axis corresponds to one trace of the sub-volume (Figure 2). For each dip rate, the vectors form a cluster, and the optimum dip corresponds to the smallest cluster. The weighted center of this cluster is projected back to the axis of the trace at the center of the sub-volume and output as the conditioned data.
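As a rough illustration of this dip scan, the sketch below flattens an 11-sample by 9-trace sub-volume along each candidate dip, treats each time level as a 9-dimensional vector, measures cluster tightness as the fraction of energy outside the first principal component, and outputs the first-component (rank-1) reconstruction at the center sample. The function name, the integer-sample shifts, and the SVD-based tightness measure are our assumptions; the paper's exact dip scan and weighting are not specified here.

```python
import numpy as np

def pc_dip_filter_sample(subvol, dips):
    """Estimate the best dip for one sample via PC cluster tightness.

    subvol: 2-D array (n_levels, n_traces) centered on the sample,
            e.g. 11 levels x 9 traces as in the text.
    dips:   candidate dips to scan, in samples per trace.
    Returns (best_dip, filtered_value), where filtered_value is the
    rank-1 reconstruction at the center trace and center level.
    """
    n_lev, n_tr = subvol.shape
    c_lev, c_tr = n_lev // 2, n_tr // 2
    best_dip, best_val, best_misfit = None, None, np.inf
    for dip in dips:
        # Shift each trace so a plane with this dip becomes flat
        # (nearest-sample shifts; edges are padded by clipping).
        X = np.empty((n_lev, n_tr))
        for j in range(n_tr):
            shift = int(round(dip * (j - c_tr)))
            idx = np.clip(np.arange(n_lev) + shift, 0, n_lev - 1)
            X[:, j] = subvol[idx, j]
        # Each level of X is a 9-D vector; if the dip is right, these
        # vectors form a tight cluster along one principal axis, so
        # the energy outside the first singular value is small.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        misfit = 1.0 - s[0] ** 2 / max(np.sum(s ** 2), 1e-12)
        if misfit < best_misfit:
            rank1 = s[0] * np.outer(U[:, 0], Vt[0])
            best_dip, best_val, best_misfit = dip, rank1[c_lev, c_tr], misfit
    return best_dip, best_val
```

Applied to a synthetic dipping event, the scan recovers the true dip, and the rank-1 output at the center sample approximates the noise-free amplitude; energy not fitting the chosen surface is discarded by the rank-1 projection.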
