Poststack Processing Steps for Preconditioning Seismic Data

Seismic data are usually contaminated with two common types of noise, random and coherent. If not tackled appropriately, such noise prevents accurate imaging of the subsurface. Small-scale geologic features, such as thin channels or subtle faults, might not be seen clearly in the presence of noise. Similarly, seismic attributes generated from noise-contaminated data are compromised in quality, and so is their interpretation. Noise reduction techniques have been developed for poststack and prestack seismic data and are implemented wherever appropriate to enhance the signal-to-noise ratio and achieve the goals set for reservoir characterization exercises.

While coherent noise is usually handled during processing of the seismic data, mean and median filters are commonly used for random noise suppression on poststack data, but they tend to smear discontinuities. A more desirable choice is a structure-oriented filter, which enhances laterally continuous events by reducing randomly distributed noise without suppressing details in the reflection events that are consistent with the local structure. Event focusing and reduced background noise are usually clearly evident after structure-oriented filtering. Attribute computation on such preconditioned data yields more promising results, and thus a more reliable interpretation. Much of this work is carried out on poststack seismic data.
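
As a rough illustration of the concept (not the particular structure-oriented algorithm used in any given software package), the sketch below estimates a local dip by cross-correlating short windows on adjacent traces and then averages samples along that dip rather than horizontally, so that dipping reflections are not smeared. Function names and parameters are hypothetical, and a 2-D poststack section stored as a NumPy array of shape samples-by-traces is assumed.

```python
import numpy as np

def local_dip(section, i_trace, i_samp, half_win=10, max_lag=5):
    """Estimate local dip (in samples per trace) by cross-correlating a short
    window on the current trace with its right-hand neighbour."""
    ref = section[i_samp - half_win:i_samp + half_win + 1, i_trace]
    best_lag, best_cc = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        seg = section[i_samp - half_win + lag:i_samp + half_win + 1 + lag,
                      i_trace + 1]
        cc = float(np.dot(ref, seg))
        if cc > best_cc:
            best_cc, best_lag = cc, lag
    return best_lag

def structure_oriented_mean(section, half_aperture=2, half_win=10, max_lag=5):
    """Average each sample along the locally estimated dip instead of
    horizontally (simplified dip-guided mean filter)."""
    ns, ntr = section.shape
    out = section.copy()
    pad = half_win + max_lag * half_aperture + 1
    for itr in range(half_aperture, ntr - half_aperture - 1):
        for isamp in range(pad, ns - pad):
            dip = local_dip(section, itr, isamp, half_win, max_lag)
            vals = [section[isamp + k * dip, itr + k]
                    for k in range(-half_aperture, half_aperture + 1)]
            out[isamp, itr] = np.mean(vals)
    return out

# Usage: filtered = structure_oriented_mean(stacked_section)
```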

For prestack data analysis, such as extraction of amplitude-versus-offset (AVO) attributes (intercept/gradient analysis) or simultaneous impedance inversion, the input seismic data must be preconditioned in an amplitude-preserving manner. After the prestack data have undergone an amplitude-friendly processing flow up to prestack migration and normal moveout (NMO) correction, a few simple preconditioning steps are generally adopted to get the data ready for the next stage. These steps usually include generating partial stacks (which tone down the random noise), bandpass filtering (which removes anomalously high or low frequencies), further random noise removal (using algorithms such as tau-p or FXY, or workflows based on structure-oriented filtering), trim statics (to flatten the NMO-corrected reflection events in the gathers) and muting (which zeroes out reflection amplitudes beyond the offset or angle chosen as the limit of useful reflection signal).
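
As an illustration of one of these steps, the sketch below applies residual trim statics to an NMO-corrected gather (hypothetical function name; the gather is assumed to be a NumPy array of shape samples-by-offsets): each trace is cross-correlated with a pilot trace formed by stacking the gather and is shifted by the lag of maximum correlation within a small allowed window.

```python
import numpy as np

def trim_statics(gather, max_shift=4):
    """Flatten an NMO-corrected gather with small, trace-by-trace time shifts.

    gather    : 2-D array (n_samples, n_traces), NMO-corrected
    max_shift : largest allowed static, in samples
    """
    pilot = gather.mean(axis=1)              # simple stack used as pilot trace
    ns, ntr = gather.shape
    out = np.zeros_like(gather)
    for itr in range(ntr):
        trace = gather[:, itr]
        # full cross-correlation, then restrict to lags within +/- max_shift
        xcorr = np.correlate(trace, pilot, mode="full")
        lags = np.arange(-ns + 1, ns)
        keep = np.abs(lags) <= max_shift
        best_lag = lags[keep][np.argmax(xcorr[keep])]
        # apply the static as an integer-sample shift, zero-padding the ends
        out[:, itr] = np.roll(trace, -best_lag)
        if best_lag > 0:
            out[-best_lag:, itr] = 0.0
        elif best_lag < 0:
            out[:-best_lag, itr] = 0.0
    return out

# Usage: flattened = trim_statics(nmo_gather, max_shift=4)
```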

Fresh Ideas Needed

The success of AVO attribute extraction or simultaneous impedance inversion depends on how well the preconditioning processes have conditioned the prestack seismic data. These procedures have been carried out over the last two decades for most projects from different basins of the world. More recently, however, it has been found that such procedures might not be enough for data acquired over unconventional resource plays or subsalt reservoirs. In such cases, fresh ideas need to be implemented to enhance the signal-to-noise ratio of the prestack seismic data before they are put through the subsequent attribute analysis.

In the Delaware Basin, above the Bone Spring Formation (which is very prolific and the most-drilled zone these days) lies a thick column of siliciclastics comprising the Brushy Canyon, Cherry Canyon and Bell Canyon formations. These units are in turn overlain by evaporites and thin red beds comprising the Castile (anhydrite), Salado (halite), Rustler (dolomite) and Dewey Lake (continental red beds) formations. Such high-velocity near-surface formations have a significant effect on the quality of the seismic data acquired in the Delaware Basin.

Besides the lack of continuity of reflection events, one of the problems seen on seismic data from this basin is that the near-offset traces are very noisy, and even after application of the above-mentioned processes their quality is not acceptable. A way out of this situation is to replace the near-angle stack with the intercept stack, which may exhibit a higher signal-to-noise ratio.
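
A minimal sketch of how an intercept stack can be computed (hypothetical function name; assumes an angle gather stored as a NumPy array of shape samples-by-angles, with the incidence angle of each trace known) is a sample-by-sample least-squares fit of the two-term Shuey approximation, R(theta) = A + B sin^2(theta), with the intercept A standing in for the noisy near-angle stack.

```python
import numpy as np

def intercept_gradient(angle_gather, angles_deg):
    """Two-term Shuey fit R(theta) = A + B*sin^2(theta) for every time sample.

    angle_gather : 2-D array (n_samples, n_angles) of reflection amplitudes
    angles_deg   : 1-D array of incidence angles (degrees), one per trace
    Returns the intercept (A) and gradient (B) traces.
    """
    s2 = np.sin(np.radians(angles_deg)) ** 2
    # design matrix [1, sin^2(theta)], shared by all time samples
    G = np.column_stack([np.ones_like(s2), s2])
    # least-squares solution for all samples at once: coeffs has shape (2, n_samples)
    coeffs, *_ = np.linalg.lstsq(G, angle_gather.T, rcond=None)
    intercept, gradient = coeffs[0], coeffs[1]
    return intercept, gradient

# Usage: A, B = intercept_gradient(gather, angles_deg=np.arange(2, 33, 2))
# The A traces, gathered over all CDPs, form the intercept stack that could
# stand in for a noisy near-angle stack.
```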

Quite often it is observed that the P-reflectivity or S-reflectivity data extracted from AVO analysis appear noisier than the final migrated data obtained with the conventional processing stream, even though the latter might include processes that are not all amplitude-friendly. This observation suggests exploring whether one or more poststack processing steps could be used to precondition prestack seismic data before putting them through, for example, simultaneous impedance inversion.

A typical poststack processing sequence applied to prestack time-migrated and stacked seismic data might include FX deconvolution, multiband CDP-consistent scaling, Q-compensation, deconvolution, bandpass filtering and further noise removal using a nonlinear adaptive process. Each of these processes is applied with a specific objective in mind. In FX deconvolution, used for attenuating random noise, the seismic signal in the frequency-offset domain is represented as complex sinusoids that are predictable in the x-direction; random noise, on the other hand, is unpredictable and can thus be rejected.
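
As a rough illustration of this prediction idea (hypothetical function name; assumes a 2-D stacked section stored as a NumPy array of shape samples-by-traces), the sketch below transforms each trace to the frequency domain, designs a short complex prediction filter across traces for every frequency slice, and keeps the predictable part as signal. Production FX deconvolution additionally works in overlapping spatial and temporal windows, which is omitted here.

```python
import numpy as np

def fx_decon(section, flen=4, prewhiten=0.01):
    """Simplified FX deconvolution of a 2-D stacked section.

    section   : 2-D array (n_samples, n_traces)
    flen      : length of the complex prediction filter, in traces
    prewhiten : fraction of the diagonal added for numerical stability
    """
    ns, ntr = section.shape
    spec = np.fft.rfft(section, axis=0)          # frequency-offset (FX) domain
    clean = np.zeros_like(spec)
    for ifreq in range(spec.shape[0]):
        d = spec[ifreq, :]
        rows = ntr - flen
        if rows <= flen:                          # too few traces to design a filter
            clean[ifreq, :] = d
            continue
        # one-step forward prediction: predict d[i+flen] from the flen previous traces
        D = np.array([d[i:i + flen][::-1] for i in range(rows)])
        y = d[flen:]
        A = D.conj().T @ D
        A += prewhiten * np.trace(A).real / flen * np.eye(flen)
        a = np.linalg.solve(A, D.conj().T @ y)
        pred = d.copy()
        pred[flen:] = D @ a                       # predictable (signal) part
        clean[ifreq, :] = pred
    return np.fft.irfft(clean, n=ns, axis=0)

# Usage: denoised = fx_decon(stacked_section, flen=4)
```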

Sometimes, due to near-surface conditions, spatial variations in amplitude and frequency are seen in different parts of the same inline, or from one inline to another in the same 3-D seismic volume. Application of multiband CDP-consistent scaling tends to balance the frequency and amplitude content laterally. In this process, the stacked seismic data are decomposed into two or more frequency bands, and scalars are computed from the RMS amplitudes of each individual frequency band of the stacked data. The computed scalars are applied to the individual bands, which are then summed back to obtain the final scaled data.
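
A minimal sketch of this band-by-band balancing (hypothetical function names; assumes a 2-D stacked section of shape samples-by-CDPs and a sampling interval in seconds) is given below: the section is split into frequency bands with zero-phase bandpass filters, a per-CDP RMS amplitude is computed for each band, scalars are derived so that each band's RMS follows a laterally smoothed version of itself, and the scaled bands are summed back.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandsplit(section, bands_hz, dt):
    """Split a stacked section into frequency bands with zero-phase
    Butterworth bandpass filters applied along the time axis."""
    nyq = 0.5 / dt
    bands = []
    for lo, hi in bands_hz:
        b, a = butter(4, [lo / nyq, hi / nyq], btype="band")
        bands.append(filtfilt(b, a, section, axis=0))
    return bands

def cdp_consistent_scaling(section, dt, bands_hz=((5, 20), (20, 40), (40, 80)),
                           smooth_cdps=51):
    """Balance amplitudes laterally, band by band (simplified sketch)."""
    scaled = []
    kernel = np.ones(smooth_cdps) / smooth_cdps
    for band in bandsplit(section, bands_hz, dt):
        rms = np.sqrt(np.mean(band ** 2, axis=0)) + 1e-12    # one value per CDP
        target = np.convolve(rms, kernel, mode="same")       # smoothed RMS trend
        scaled.append(band * (target / rms))                 # per-CDP scalar
    # sum the individually scaled bands back into one balanced section
    return np.sum(scaled, axis=0)

# Usage: balanced = cdp_consistent_scaling(stacked_section, dt=0.002)
```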

Q-compensation is a process adopted for correcting the inelastic attenuation of the seismic wavefield in the subsurface; an amplitude-only Q-compensation is usually applied. The inelastic attenuation is quantified in terms of the quality factor, Q, which can be determined from the seismic data or from VSP data. In case such a computation proves cumbersome or challenging, a constant Q value considered appropriate for the interval of interest is used.
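
A minimal sketch of an amplitude-only correction with a constant Q (hypothetical function name; assumes a single trace and a sampling interval dt in seconds) is shown below: the trace is taken through a short-time Fourier transform, each time-frequency cell is boosted by the inverse of the theoretical decay exp(-pi f t / Q), with a gain cap to avoid amplifying noise at late times and high frequencies, and the trace is reconstructed.

```python
import numpy as np
from scipy.signal import stft, istft

def q_compensate(trace, dt, q=80.0, max_gain_db=20.0):
    """Amplitude-only Q-compensation of a single trace (simplified sketch).

    trace       : 1-D array of amplitudes
    dt          : sampling interval in seconds
    q           : constant quality factor assumed for the interval
    max_gain_db : cap on the applied gain
    """
    fs = 1.0 / dt
    f, t, z = stft(trace, fs=fs, nperseg=128)
    # the theoretical amplitude decay is exp(-pi * f * t / Q); invert it, capped
    gain = np.exp(np.pi * np.outer(f, t) / q)
    gain = np.minimum(gain, 10.0 ** (max_gain_db / 20.0))
    _, compensated = istft(z * gain, fs=fs, nperseg=128)
    return compensated[:len(trace)]

# Usage: trace_q = q_compensate(trace, dt=0.002, q=80.0)
```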

Enhancing Frequency Content

A long time-window deconvolution can also be applied to the data with appropriate parameters, which tends to compress the embedded wavelet and thus enhance the frequency content of the data. This step is usually followed by bandpass filtering to remove unwanted frequencies that might have been generated by the deconvolution. Remnant noise can be handled with a different approach, in which signal and noise are modeled separately, depending on the nature of the noise, and the noise is then attenuated in a nonlinear adaptive fashion. Such a workflow can be more effective than FX deconvolution alone. That all the above-stated processes are amplitude-friendly can be checked by carrying out gradient analysis on the data before and after their application.
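
As an illustration of the deconvolution-plus-bandpass portion of this step (the nonlinear adaptive noise attenuation is not shown), the sketch below applies a zero-lag Wiener (spiking) deconvolution designed from the trace autocorrelation over a long window, followed by a zero-phase bandpass filter. Function names and parameter values are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import butter, filtfilt, fftconvolve

def spiking_decon(trace, nfilt=80, prewhiten=0.001):
    """Zero-lag Wiener (spiking) deconvolution of a single trace."""
    # autocorrelation over the whole (long) design window, positive lags only
    ac = np.correlate(trace, trace, mode="full")[len(trace) - 1:]
    r = ac[:nfilt].copy()
    r[0] *= (1.0 + prewhiten)                 # prewhitening for stability
    rhs = np.zeros(nfilt)
    rhs[0] = ac[0]                            # desired output: spike at zero lag
    filt = solve_toeplitz(r, rhs)
    return fftconvolve(trace, filt, mode="full")[:len(trace)]

def bandpass(trace, dt, low_hz=8.0, high_hz=90.0):
    """Zero-phase bandpass to remove frequencies generated by the decon."""
    nyq = 0.5 / dt
    b, a = butter(4, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, trace)

# Usage: out = bandpass(spiking_decon(trace), dt=0.002)
```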

A careful consideration of the different steps in the above preconditioning sequence prompted us to apply some of them to the near-, mid- and far-angle stack data going into simultaneous impedance inversion and to compare the results with those obtained the conventional way. Four angle stacks were created for a seismic data volume from the Delaware Basin by dividing the complete angle-of-incidence range of 0 to 32 degrees into a near-angle stack (0-8 degrees), mid1-angle stack (8-16 degrees), mid2-angle stack (16-24 degrees) and far-angle stack (24-32 degrees). Figures 1 and 2 illustrate the advantage of following through on this processing sequence: the near- and far-angle stacks were subjected to many of the processing steps mentioned above, and a comparison is shown with the conventional processing application. The overall signal-to-noise ratio is enhanced, and stronger reflections come through after application of the proposed poststack processing steps. A similar enhancement in reflection quality is seen on the mid1- and mid2-angle stacks, but is not shown here due to space constraints. To ensure that these processing steps have preserved true-amplitude information, gradient analysis was carried out on various reflection events selected at random from the near-, mid1-, mid2- and far-angle stack traces, and one such comparison is shown in figure 3. The amplitude trend after the proposed preconditioning shows a variation similar to that obtained with the conventional processing flow.
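
For reference, a minimal sketch of forming such angle stacks from an angle gather (hypothetical function name; assumes the incidence angle of every trace is known) is given below, dividing the 0 to 32-degree range into the four 8-degree bands used here.

```python
import numpy as np

def make_angle_stacks(angle_gather, angles_deg,
                      bands=((0, 8), (8, 16), (16, 24), (24, 32))):
    """Average the traces of an angle gather within each angle band.

    angle_gather : 2-D array (n_samples, n_angle_traces)
    angles_deg   : 1-D array of incidence angles, one per trace
    bands        : (min, max) angle ranges for the near, mid1, mid2 and far stacks
    Returns a 2-D array (n_samples, n_bands), one stacked trace per band.
    """
    angles_deg = np.asarray(angles_deg)
    stacks = []
    for lo, hi in bands:
        sel = (angles_deg >= lo) & (angles_deg < hi)
        stacks.append(angle_gather[:, sel].mean(axis=1))
    return np.column_stack(stacks)

# Usage: near, mid1, mid2, far = make_angle_stacks(
#            gather, angles_deg=np.arange(1, 33, 2)).T
```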

In figures 4 and 5 we show a similar comparison of P-impedance and VP/VS sections obtained using the proposed workflow and the conventional one. Notice again that the overall data quality is enhanced (as indicated by the pink arrows), which is expected to lead to a more accurate interpretation.

Conclusion

The poststack processing steps usually applied to prestack-migrated and stacked data yield volumes that exhibit better quality in terms of reflection strength, signal-to-noise ratio and frequency content compared with data passed through true-amplitude processing alone. Some of these poststack processing steps can be applied as preconditioning to the near-, mid- and far-angle stacks used in simultaneous impedance inversion. We have illustrated the application of such a workflow with data examples from the Delaware Basin, and the results are convincing in terms of the value added to the P-impedance and VP/VS data. Proper quality checks need to be run at each individual step to ensure that no amplitude distortions take place at any stage of the preconditioning sequence.
