I first encountered the term “quantitative interpretation” in the context of seismic data around the turn of the century when I arrived at a client site to provide consultancy services. I was introduced to their Quantitative Interpretation Department, where I was told that QI was defined as “the use of seismic data to make predictions away from well locations.” At that time, most of the staff in the department were focused on performing relatively simple deterministic inversions.
Much has changed since then. Workflows have become more sophisticated, software and computing power have advanced significantly, data quality has improved, and practitioners have accumulated far more experience. However, as the field has progressed, the term “quantitative interpretation” – or simply “QI” – has become so widely used that it has, in many cases, lost its original meaning. Today, QI is often a buzzword, applied to such a broad range of practices that it can be nearly meaningless without further clarification. Its popularity has led to the creation of dedicated QI departments, job titles, workshops, conference sessions and even entire companies built around the concept.
A Case Study of Misapplied Terminology
While the work carried out under the umbrella of QI is undoubtedly valuable, the ubiquitous and sometimes imprecise use of the term can lead to confusion and misaligned expectations. To illustrate this concern, I will present a case that highlights the challenges and potential pitfalls associated with the overuse or misapplication of the term “quantitative interpretation.”
The example I use is drawn from a paper we published in Interpretation in 2017, which details the characterization of a carbonate buildup. The goal of the study was to reduce uncertainty in locating the top of the carbonate and to quantify porosity variations within it, using three angle stacks of seismic data and a single well. Three different approaches were compared, all of which fall under the broad concept of quantitative interpretation described earlier.
The first approach involved generating relative elastic impedance volumes by inverting the near- and far-angle stack seismic data. These inversions were calibrated using well data, with estimated wavelets applied to the seismic response. The results were then interpreted by cross-plotting the near- and far-angle elastic impedance data, and facies were classified based on the values observed in these plots. The classification was done using polygon capture, where the polygons were adjusted to produce facies categories that could be roughly associated with shale, gas-carbonate and brine-carbonate lithologies.
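The polygon-capture step can be sketched in a few lines. The facies names match the study, but the sample points and polygon vertices below are purely illustrative placeholders, not values from the paper:

```python
import numpy as np
from matplotlib.path import Path

# Near/far relative elastic impedance pairs, one per seismic sample.
# These four points stand in for a full volume of inverted samples.
points = np.array([(-1.0, -1.0), (1.0, -1.0), (0.5, 1.0), (5.0, 5.0)])

# Illustrative polygons in the near-vs-far cross-plot; in practice the
# interpreter draws and adjusts these around well-calibrated clusters.
polygons = {
    "shale":           Path([(-3, -3), (0, -3), (0, 0), (-3, 0)]),
    "gas_carbonate":   Path([(0, -3), (3, -3), (3, 0), (0, 0)]),
    "brine_carbonate": Path([(-3, 0), (3, 0), (3, 3), (-3, 3)]),
}

# Assign each sample to the first polygon that contains it.
facies = np.full(len(points), "unclassified", dtype=object)
for name, poly in polygons.items():
    mask = poly.contains_points(points) & (facies == "unclassified")
    facies[mask] = name
```

Note that each sample simply inherits the label of the polygon it falls in – nothing in this step enforces geological or rock-physics consistency.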
While the resulting image showed broad conformance with expectations, the predicted distribution of facies did not match geological reality. This highlights a key issue in QI workflows: even when the methodology appears sound, the results can still conflict with the true geological framework.
The second approach applied a standard deterministic pre-stack simultaneous inversion. In this case, the three angle stacks were inverted together using individual wavelets to produce models of absolute acoustic impedance, VP/VS ratio, and density. To achieve this, a model was required to supply the low-frequency information that seismic amplitudes could not provide. The model was constructed using seismic velocities that had been calibrated to the well data. These velocities were then transformed into acoustic impedance, VP/VS, and density using relationships derived separately for the overburden shales, gas carbonate and brine carbonate, based on the well data.
This model incorporated preliminary interpretations for the top of the carbonate and the fluid contact within the carbonate. Regularization was applied within the inversion to stabilize the results, which included constraining relationships between the different elastic properties. However, because only a single set of relationships could be applied to the entire inversion, this approach became a compromise, particularly regarding its relevance to different facies.
The inversion results were then interpreted in terms of facies using polygon capture, adjusting the polygons to match the well data as closely as possible. This approach allowed for the interpretation of more facies than the previous method. Despite this, the results were neither geologically nor geophysically realistic for several reasons. For example, no fluid contact was observed in the VP/VS data, despite one being expected based on rock physics analysis. Additionally, many data points fell into regions of the P-impedance vs. VP/VS space that did not correspond to either shale or carbonate, suggesting the presence of hybrid rocks. This issue arose from the limited angle range and bandwidth of the input data, which compromised the integrity of the workflow. In some cases, data points interpreted as brine were directly above those interpreted as gas. Furthermore, the inversion suggested the presence of a body of gas carbonate floating above the flank of the main carbonate buildup, which is geologically implausible.
The third approach involved a facies-based inversion. The input seismic data and wavelets used in this method were the same as those for the deterministic inversion. However, this approach differs in that it employs a geostatistical workflow where the expected facies are identified prior to inversion. Depth-dependent elastic property probability density functions (PDFs) are then assigned to each facies, including correlations between the various elastic properties.
Prior probabilities for each facies are also assigned based on the geological framework, which allows for a more refined treatment of uncertainty around the top of the carbonate and the fluid contact. This means that, where prior geological knowledge is available, uncertainties can be reduced during the inversion process. The inversion is performed within a Bayesian framework, solving simultaneously for both the elastic properties and the facies.
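The Bayesian update at the heart of this approach can be sketched for a single sample: each facies has a prior probability (from the geological framework) and an elastic-property PDF, and Bayes’ rule combines them into a posterior. The PDF means and covariances below are invented for illustration, and a real facies-based inversion solves this jointly in 3-D rather than sample by sample:

```python
import numpy as np

# Facies-conditional PDFs for (ln AI, ln Vp/Vs): mean vector and covariance.
# These values are invented stand-ins for the depth-dependent, well-derived
# PDFs used in a real facies-based inversion.
pdfs = {
    "shale":           (np.array([8.9, 0.80]), np.array([[0.010, 0.002],
                                                         [0.002, 0.004]])),
    "gas_carbonate":   (np.array([9.1, 0.55]), np.array([[0.012, 0.001],
                                                         [0.001, 0.003]])),
    "brine_carbonate": (np.array([9.2, 0.65]), np.array([[0.012, 0.001],
                                                         [0.001, 0.003]])),
}

def facies_posterior(x, priors):
    """Posterior facies probabilities for one elastic-property sample x."""
    post = {}
    for name, (mu, cov) in pdfs.items():
        d = x - mu
        # Bivariate Gaussian likelihood of x under this facies' PDF.
        like = np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / (
            2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
        post[name] = priors[name] * like
    total = sum(post.values())
    return {k: v / total for k, v in post.items()}
```

Setting the gas-carbonate prior to zero below the interpreted fluid contact, for example, guarantees a zero posterior there – which is exactly how prior geological knowledge reduces uncertainty in the inversion.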
One of the key advantages of this approach is that it operates in 3-D, enabling the inversion to respect both lateral and vertical continuity of facies in a manner consistent with the geological framework. The results from this method show a significant improvement over the previous two approaches. Unlike the earlier methods, the inversion produces geologically and geophysically realistic results. For example, all data points classified as shale show elastic properties consistent with shale at their respective depths, which was not the case in the other methods.
The Role of Rock Physics
The differences in results across the three approaches are striking, even though the same data were used in each case. While it might seem that the differences stem from the type of inversion technique employed, this would overlook a more fundamental issue. The primary difference lies in how, and to what extent, rock physics is used to integrate the data and shape the results. The way prior geological knowledge is incorporated plays an equally critical role in the outcomes.
In the first approach, neither rock physics nor geological knowledge is considered. The method relies purely on seismic data to infer facies, without integrating any physical or geological constraints.
In the second approach, rock physics is used in a limited capacity. It helps build a low-frequency model, applies regularization in the inversion, and is used to interpret the inversion results in terms of facies. However, there is a critical disconnect: the regularization process during the inversion does not use the same rock physics relationships that were employed to create the low-frequency model or interpret the results. While geological knowledge is applied to build the low-frequency model, it is not integrated into the inversion process itself, leading to inconsistencies in the final results.
In contrast, the third approach applies rock physics consistently throughout the entire workflow – from the inversion itself, which estimates both elastic properties and facies, to the final facies prediction. Geological knowledge is incorporated directly by setting facies priors and enforcing facies continuity throughout the 3-D inversion. This consistent treatment not only brings all the available data together but also ensures that rock physics principles are embedded in the final results.
It is important to note that this is not an argument against the usefulness of any workflow, nor is it meant to suggest that these are the only approaches available. The choice of a specific workflow will depend on the objectives of the study, the quality and availability of the data and the time constraints. However, what is crucial to understand is that not all QI workflows will deliver the same quality of results, even when using the same input data.
For instance, one should not expect a simple elastic impedance approach to yield results equivalent to those from a facies-based inversion. The success of a study in achieving its objectives is closely tied to how well rock physics is integrated into the overall workflow. If expectations are set incorrectly – perhaps due to an oversimplified understanding of what a particular QI approach can deliver – this can lead to criticisms that QI is not being done effectively.
In short, the quality of results is directly linked to the depth of integration between seismic data, rock physics and geological knowledge within the chosen workflow. Properly managing these expectations and understanding the limitations and strengths of each approach are key to a successful QI study.
A More Precise Definition of QI
“QI” has become a widely used term, yet it needs a clearer definition than the often-quoted description of simply “using seismic amplitudes to predict away from wells.”
I propose the following definition:
Quantitative interpretation is the application of rock physics to integrate geophysical data within a geological framework to generate quantitative predictions of subsurface properties. These predictions aim to address fundamental scientific and business questions of value.
By adopting this definition, we can better set realistic expectations for a QI study. Key questions to ask when evaluating a QI approach include: How and to what degree is rock physics being used to integrate the data? How is prior geological knowledge incorporated?
Answering these questions helps ensure that the workflow is designed appropriately for the objectives and that the results are grounded in a solid geological and geophysical understanding.
Editor’s note: Some of the terms used in this article have been discussed in previous articles in the Geophysical Corner columns, and if in doubt, the readers can clarify by referring to those articles: Elastic impedance (October 2012); prestack simultaneous inversion (June 2015).