Reservoir Characterization Processes Explained with Workflows, Flowcharts

The processing of land seismic data entails a series of steps through which the data passes, including sorting, static correction, deconvolution, residual static correction, velocity analysis, migration, stacking, filtering and scaling. Some tasks, such as velocity analysis, are interactive, while others are automated. The processing of seismic data from the same area often follows the same proven series of steps or sequence, barring any unforeseen issues that need to be addressed. Such a sequence of tasks designed to process data from their initiation to completion is referred to as a “workflow.”
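
To make the idea concrete, here is a minimal sketch (in Python, assuming NumPy) of a processing workflow expressed as an ordered list of step functions; the step names follow the sequence listed above, but the function bodies are placeholders standing in for a real processing system.

```python
# A workflow as an ordered list of step functions; bodies are placeholders.
import numpy as np

def sort_traces(data): return data         # sort field records into gathers
def static_correction(data): return data   # elevation/weathering statics
def deconvolution(data): return data       # spiking or predictive decon
def residual_statics(data): return data    # surface-consistent residual statics
def velocity_analysis(data): return data   # interactive in practice
def migration(data): return data           # time or depth migration
def stack(data): return data               # stack the corrected gathers
def filter_and_scale(data): return data    # band-pass filter and amplitude scaling

WORKFLOW = [sort_traces, static_correction, deconvolution, residual_statics,
            velocity_analysis, migration, stack, filter_and_scale]

def run_workflow(data, steps=WORKFLOW):
    """Pass the data through each step in the order the workflow prescribes."""
    for step in steps:
        data = step(data)
    return data

processed = run_workflow(np.zeros((1000, 240)))   # dummy gather: samples x traces
```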

Workflows help explain the exercise at hand logically, be it for seismic data processing, reservoir characterization, algorithm structure or more detailed artificial intelligence applications. Once the complete workflow for a project is laid out defining the different steps, it provides the necessary framework to delegate tasks, monitor progress, eliminate redundancy, improve efficiency and reduce the turn-around time. Much like a protocol in medical diagnosis and analysis, a good workflow provides a clear understanding of the overall process, minimizes errors and allows for any tweaks that may be required in terms of optimization or automation.

The origin of workflows is often attributed to two individuals: Frederick Taylor, a mechanical engineer, for his work on scientific management theories, and Henry Gantt (of Gantt chart fame) for his theories on project management. Today, workflows are used not only in scientific and engineering environments, but for almost any type of business venture.

Visualizing Workflow

Once a workflow has been prototyped, or later validated, the steps can be communicated to the team at large using a flowchart. Whereas the workflow defines the process in a more general way, the flowchart describes the linkages and decisions between the constituent building blocks in diagrammatic form. Due to this rather subtle difference, the two terms are often used interchangeably.

Usually, workflows are depicted with broad rectangular shapes and minimal use of symbols or special signs. In contrast, flowcharts depict boxes of various shapes (ovals, diamonds, parallelograms, etc.) connected with arrows to better explain the flow of the process, such as the flowchart symbols provided by Microsoft PowerPoint.

Figure 1 shows the workflow adopted for a reservoir characterization exercise aimed at discrimination of shallow seismic anomalies in the Barents Sea. For better visual impact, the workflows are sometimes made pictorial as seen to the right.

Figure 1: Workflow adopted for discrimination of seismic anomalies. Modified from Chopra et al., 2017

After animating vertical slices through the migrated and stacked seismic amplitude volume, a common starting point is to compute coherence, which provides insight not only into the tectonic fabric, but also into the depositional environment, ranging from regional features to more prospect-focused anomalies (figure 1a).
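
As an illustration of the coherence computation, the sketch below implements a simple semblance-based coherence on a 2-D section (time by trace) in Python with NumPy. It is only a toy version: production coherence algorithms work on 3-D volumes, are dip-steered and may use eigenstructure or energy-ratio formulations, and the window sizes and synthetic section here are arbitrary assumptions.

```python
# Minimal semblance-based coherence on a 2-D section (n_samples x n_traces).
import numpy as np

def semblance_coherence(section, half_traces=1, half_samples=2, eps=1e-10):
    """Return a coherence attribute in [0, 1] for each sample of the section."""
    n_t, n_x = section.shape
    coh = np.ones_like(section, dtype=float)
    for ix in range(half_traces, n_x - half_traces):
        for it in range(half_samples, n_t - half_samples):
            win = section[it - half_samples: it + half_samples + 1,
                          ix - half_traces: ix + half_traces + 1]
            num = np.sum(np.sum(win, axis=1) ** 2)       # energy of the stacked trace
            den = win.shape[1] * np.sum(win ** 2) + eps  # total energy of all traces
            coh[it, ix] = num / den
    return coh

# Toy example: a laterally continuous reflector gives high coherence,
# purely random noise gives low coherence.
rng = np.random.default_rng(0)
sec = rng.normal(scale=0.1, size=(200, 60))
sec[100, :] += 1.0                                       # a flat "reflector"
c = semblance_coherence(sec)
print(c[100, 30], c[50, 30])                             # near 1.0 vs much lower
```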

With a deeper understanding of the geologic framework, the next step is to decompose the seismic signal into its constituent frequencies. Spectral decomposition is complementary to coherence: it is more sensitive to lateral changes in thickness and better differentiates changes in lithology, porosity and pore fluids, which helps identify and delineate stratigraphic traps. In the context of direct hydrocarbon indicators, spectral decomposition measures the frequency dependence of reflections from fluid-saturated rocks. In this example, the reflection coefficient of a DHI was three times stronger at 14 hertz than at 50 hertz, which the authors hypothesized was associated with a gas-charged reservoir, because higher frequencies suffer higher attenuation while traversing hydrocarbon reservoirs. The two images in figure 1b depict the higher amplitudes at 20 hertz (upper image), while much weaker amplitudes are seen on the 50-hertz display (lower image).
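
The sketch below shows one simple way to perform spectral decomposition on a single trace, using a short-time Fourier transform from SciPy and extracting iso-frequency amplitude curves near 20 and 50 hertz. The sample interval, window length and synthetic trace are assumptions for illustration; commercial tools typically use continuous-wavelet or matching-pursuit decompositions rather than a fixed-window STFT.

```python
# Spectral decomposition of a single trace via the short-time Fourier transform.
import numpy as np
from scipy.signal import stft

dt = 0.004                                   # 4-ms sample interval (assumed)
fs = 1.0 / dt
t = np.arange(0, 2.0, dt)
# Synthetic trace: a 20-Hz burst near 0.8 s and a weaker 50-Hz burst near 1.4 s.
trace = (np.sin(2 * np.pi * 20 * t) * np.exp(-((t - 0.8) / 0.1) ** 2)
         + 0.3 * np.sin(2 * np.pi * 50 * t) * np.exp(-((t - 1.4) / 0.1) ** 2))

freqs, times, Zxx = stft(trace, fs=fs, nperseg=64, noverlap=48)

def iso_frequency_amplitude(target_hz):
    """Amplitude of the STFT along the row closest to the target frequency."""
    i = np.argmin(np.abs(freqs - target_hz))
    return np.abs(Zxx[i, :])

amp_20 = iso_frequency_amplitude(20.0)       # bright where the 20-Hz event lives
amp_50 = iso_frequency_amplitude(50.0)       # bright where the 50-Hz event lives
print(times[np.argmax(amp_20)], times[np.argmax(amp_50)])
```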

Advancing the Workflow

Given these encouraging results from the poststack data analysis stage, the interpretation team decided that more quantitative analysis was justified. Modern workflows call the transition from one stage of the decision-making process to a more advanced stage a “stage gate.” The next stage was to transform prestack seismic amplitudes into P- and S-impedance attributes. Figure 1c shows a representative P-impedance section computed on the original gathers, which shows an encouraging low-impedance anomaly at the target level.
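
The study used prestack simultaneous inversion, which involves wavelet estimation, a low-frequency model and constrained optimization. To convey only the basic idea of turning reflectivity back into impedance, the following is a minimal recursive (trace-integration) sketch on a toy impedance log; it is a simplified stand-in, not the method used in the article.

```python
# Recursive impedance inversion: Z[i+1] = Z[i] * (1 + r[i]) / (1 - r[i]).
import numpy as np

def recursive_impedance(reflectivity, z0):
    """Rebuild an impedance profile from a reflectivity series and a start value z0."""
    z = np.empty(len(reflectivity) + 1)
    z[0] = z0
    for i, r in enumerate(reflectivity):
        z[i + 1] = z[i] * (1.0 + r) / (1.0 - r)
    return z

# Round trip on a toy impedance log: impedance -> reflectivity -> impedance.
z_true = np.array([4000., 4000., 4500., 4500., 3800., 3800., 5000.])  # low-impedance zone
refl = (z_true[1:] - z_true[:-1]) / (z_true[1:] + z_true[:-1])
z_rec = recursive_impedance(refl, z_true[0])
print(np.allclose(z_rec, z_true))   # True: the recursion exactly undoes the reflectivity
```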

While inverting the data, we noticed that the spectra of the wavelets extracted from the near-, mid- and far-angle stacks exhibited a lowering in frequency content from the near- to the far-angle stack, via the mid-angle stack, as well as an overall roll-off on the higher-frequency side. These effects are a manifestation of offset/angle-dependent attenuation as well as of the changes in the seismic wavelet with time/depth. Given our encouraging results, we decided to better condition (another stage gate) the input prestack gathers by balancing or flattening the spectra of the near-, mid- and far-angle stacks and bringing them to the same level. The resulting P- and S-impedance attributes better defined the low-impedance anomalies indicated by the pink arrows in figure 1d.
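
A minimal sketch of the spectral-balancing idea is given below: the far-angle trace is shaped toward the amplitude spectrum of the near-angle trace using a smoothed spectral-ratio operator, keeping the far trace's phase. The smoothing length and the synthetic traces are assumptions; commercial gather conditioning performs this in a time-variant and often surface-consistent manner.

```python
# Shape the far-stack amplitude spectrum toward the near-stack spectrum.
import numpy as np
from scipy.ndimage import uniform_filter1d

def balance_spectrum(far_trace, near_trace, smooth_bins=11, eps=1e-6):
    """Return far_trace filtered so its smoothed amplitude spectrum matches near_trace."""
    FAR = np.fft.rfft(far_trace)
    NEAR = np.fft.rfft(near_trace)
    far_amp = uniform_filter1d(np.abs(FAR), smooth_bins)
    near_amp = uniform_filter1d(np.abs(NEAR), smooth_bins)
    shaping = near_amp / (far_amp + eps)          # smoothed spectral ratio
    return np.fft.irfft(FAR * shaping, n=len(far_trace))

# Toy example: the "far" trace is a low-passed copy of the "near" trace.
rng = np.random.default_rng(1)
near = rng.normal(size=500)
far = uniform_filter1d(near, 5)                   # crude stand-in for attenuation
balanced = balance_spectrum(far, near)
```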

The inversion process provided impedances that tied the well control. However, a more accurate assessment of the hydrocarbon-bearing zones required estimates of porosity and volume of clay (Vclay). To do so, we passed through one more stage gate and applied extended elastic impedance (EEI) to better integrate the P- and S-impedances with the porosity, gamma-ray and water-saturation well logs.

The crossplot in figure 1e shows that chi angles of 28 degrees and 22 degrees optimized the correlation of the EEI curves with the Vclay and effective-porosity petrophysical reservoir parameters. Figure 1f shows the resulting slice through the Vclay volume with the petrophysical log curve overlaid on it. A reasonably good match enhances our confidence in the application of the workflow to this data volume and suggests it can serve as a template for similar surveys acquired in this basin.
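
As an illustration of how such chi angles can be found, the sketch below computes EEI curves from Vp, Vs and density logs using the common formulation of Whitcombe et al. (2002) and scans chi for the angle whose EEI curve best correlates with a target petrophysical curve. The synthetic "logs" and the normalization by the log means are placeholders, not values from the study.

```python
# Chi-angle scan for extended elastic impedance (EEI).
import numpy as np

def eei(vp, vs, rho, chi_deg):
    """EEI(chi) after Whitcombe et al. (2002); normalization uses the log means."""
    chi = np.radians(chi_deg)
    vp0, vs0, rho0 = vp.mean(), vs.mean(), rho.mean()
    k = np.mean((vs / vp) ** 2)
    p = np.cos(chi) + np.sin(chi)
    q = -8.0 * k * np.sin(chi)
    r = np.cos(chi) - 4.0 * k * np.sin(chi)
    return vp0 * rho0 * (vp / vp0) ** p * (vs / vs0) ** q * (rho / rho0) ** r

def best_chi(vp, vs, rho, target, chis=np.arange(-90, 91)):
    """Return the chi angle whose EEI curve correlates best with the target log."""
    corrs = [np.corrcoef(eei(vp, vs, rho, c), target)[0, 1] for c in chis]
    i = int(np.argmax(np.abs(corrs)))
    return chis[i], corrs[i]

# Synthetic logs, only to show the mechanics of the scan.
rng = np.random.default_rng(2)
vp = 3000 + 500 * rng.random(300)
vs = vp / 1.9
rho = 2.3 + 0.2 * rng.random(300)
vclay = 0.5 + 0.3 * rng.random(300)
print(best_chi(vp, vs, rho, vclay))
```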

Deterministic versus Geostatistical Methods

In the May, June, and July 2015 installments of Geophysical Corner, we described the different types of seismic impedance inversion methods (poststack and prestack) that are commonly referred to as deterministic. These methods provide the transformation of seismic amplitudes into impedance (and elastic properties) in different ways, but only within the bandwidth of the seismic data.

Geostatistical inversion methods – also referred to as stochastic inversion methods – generate multiple realizations of elastic properties that have higher resolution (higher-frequency content) and are consistent with the seismic data as well as the well data. Away from the wells, the midrange of frequencies is constrained by the seismic data. In contrast, the low- and high-frequency ranges are constrained to represent statistical variations and trends consistent with the limited well control; here, multiple high-frequency models still fit the seismic data. The availability of multiple elastic realizations, or models, also allows a quantification of the uncertainty associated with the results.

The deterministic methods make use of low-frequency trends of elastic properties, along with other constraints, to produce the output absolute elastic-property attributes. The geostatistical methods, on the other hand, make use of a priori models that include the spatial variation of elastic properties or lithology types through 3-D variograms and probability density functions. The pdfs are usually computed in a Bayesian framework, wherein a smooth best-estimate model is computed for VP, VS and Rho, which, when combined with a seismic likelihood function associated with the multiple input seismic angle stacks, allows the calculation of a posterior probability distribution.

Workflow for Geostatistical Inversion

Because geostatistical inversion simultaneously provides lithology types and continuous elastic-property variations, a probabilistic analysis of the results can be carried out. A typical geostatistical, or stochastic, inversion workflow in a Bayesian framework may be depicted as shown in figure 2.

A fine-scale stratigraphic grid covering the broad reservoir interval is constructed with the use of horizons interpreted on the seismic data. Within each interval (macrolayer), a number of microlayers with a specified mean thickness are generated, consistent with the depositional model for the area. This model is populated with VP, VS and Rho values obtained from the available well logs.

Vertical and horizontal variograms are also generated from the seismic data, or from attributes derived therefrom, depending on the geological features present in the reservoir.
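
For illustration, the sketch below computes an experimental vertical semivariogram from a regularly sampled log or trace; a model variogram (spherical, exponential, etc.) would then be fitted to such a curve before it is used in the inversion. The toy log is an assumption.

```python
# Experimental semivariogram: gamma(h) = 0.5 * mean[(z(i+h) - z(i))^2].
import numpy as np

def experimental_variogram(values, max_lag):
    """Return lags (in samples) and the corresponding semivariances."""
    lags = np.arange(1, max_lag + 1)
    gamma = np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2) for h in lags])
    return lags, gamma

# Toy log with short-scale correlation (a scaled random walk).
rng = np.random.default_rng(3)
log = np.cumsum(rng.normal(size=500)) * 0.01 + 6.0
lags, gamma = experimental_variogram(log, max_lag=50)
print(gamma[:5])        # semivariance grows with lag for a correlated series
```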

Together with these inputs, the angle-dependent wavelets for the different angle stacks and the generated pdfs are used to generate multiple P- and S-impedance realizations, which are then used for the simulation of facies and/or petrophysical reservoir properties. More advanced workflows allow the parameterization to be based on rock physics (e.g., porosity, fluid content, sand/shale ratio), which is then converted to high-resolution impedance models.
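
To illustrate what "multiple realizations consistent with a spatial model" means, the sketch below draws correlated Gaussian realizations from an exponential covariance using a Cholesky factorization. Real geostatistical inversion conditions the realizations to the wells and to the seismic angle stacks through the Bayesian machinery described above; none of that conditioning is shown here, and the mean, standard deviation and correlation range are arbitrary assumptions.

```python
# Multiple correlated realizations from an exponential covariance model.
import numpy as np

def correlated_realizations(n_samples, n_real, mean, std, corr_range, seed=0):
    """Draw n_real realizations with covariance std^2 * exp(-|h| / corr_range)."""
    rng = np.random.default_rng(seed)
    h = np.abs(np.subtract.outer(np.arange(n_samples), np.arange(n_samples)))
    cov = std ** 2 * np.exp(-h / corr_range)
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(n_samples))   # jitter for stability
    return mean + L @ rng.normal(size=(n_samples, n_real))

reals = correlated_realizations(200, n_real=50, mean=6500.0, std=300.0, corr_range=10)
print(reals.shape, reals.mean(axis=1)[:3])   # mean of realizations -> smooth estimate
```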

A couple of points are worth mentioning here. First, the extra level of detail seen on the high-resolution geostatistical inversion outputs does not come from the seismic data, but from the inclusion of the vertical variograms. Second, if a mean of the different realizations is computed, it correlates well with the equivalent deterministic inversion output, which is what we would expect.

Geostatistical inversion serves as a useful tool for the interpretation of thin reservoirs, or of highly variable reservoirs for which a detailed model is needed, especially for the planning of production well patterns.

Flowchart for Structure-Oriented Filtering

Figure 3 is a flowchart describing the structure-oriented filtering of seismic data. It is designed to apply filtering along seismic events and, in so doing, to remove random noise and enhance lateral continuity without smearing discontinuities or other geologic features such as thin channels. The key to structure-oriented filtering is to differentiate between the dip and azimuth of the reflector and those of the overlying noise. Once the dip and azimuth have been estimated (as shown by the steps in a and b), a filter is applied to enhance the signal along the reflector, while ensuring that no discontinuity is present in the analysis window; this decision step is depicted by the diamond-shaped box at (c). A common filter used for this process is the PC filter, where PC stands for “principal component.”
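
The decision logic of the flowchart can be sketched as follows: smooth along the reflector only where a coherence (or similar discontinuity) attribute indicates the analysis window is free of edges, and leave the sample untouched otherwise. For brevity the dip is assumed flat and a simple mean filter stands in for the principal-component filter; a full implementation would steer the window along the estimated dip and azimuth.

```python
# Edge-preserving, structure-oriented smoothing driven by a coherence attribute.
import numpy as np

def structure_oriented_mean(section, coherence, half_traces=2, coh_threshold=0.8):
    """section, coherence: 2-D arrays (n_samples, n_traces); returns filtered section."""
    n_t, n_x = section.shape
    out = section.copy()
    for ix in range(half_traces, n_x - half_traces):
        for it in range(n_t):
            if coherence[it, ix] >= coh_threshold:    # no discontinuity detected
                out[it, ix] = section[it, ix - half_traces: ix + half_traces + 1].mean()
            # else: keep the original sample, preserving the discontinuity
    return out

# Toy example: a "faulted" reflector with the fault zone flagged as low coherence.
rng = np.random.default_rng(4)
sec = rng.normal(scale=0.1, size=(100, 40))
sec[50, :20] += 1.0                                   # reflector on the left block only
coh = np.ones_like(sec)
coh[:, 18:22] = 0.0                                   # mark the fault zone as incoherent
filtered = structure_oriented_mean(sec, coh)
```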

In conclusion, workflows or flowcharts help describe, in a simple way, the different step-by-step tasks in a long procedure adopted for the computation of a reservoir property, and thus provide a deeper understanding of the overall process. When different members of a reservoir characterization team adopt the same workflow or flowchart, they minimize the scope for error.

(Editor's Note: The Geophysical Corner is a regular column in the EXPLORER, edited by Satinder Chopra, founder and president of SamiGeo, Calgary, Canada, and a past AAPG-SEG Joint Distinguished Lecturer.)
