A New Approach to Automated Well-Log Correlation in Three Dimensions

Geologists will soon have a set of software tools that can automatically correlate hundreds of stratigraphic tops across thousands of well logs.

Zoltán Sylvester, a research scientist at the Bureau of Economic Geology at the University of Texas at Austin and a member of the Quantitative Clastics Laboratory industrial affiliate program, said the new approach – a Python software package called ChronoLog – will potentially change how such compilations are done.

“I think it challenges the idea that only careful manual correlation can result in high-quality geological cross sections and maps. The output is a chronostratigraphic diagram with all the wells, meaning the user can quickly generate arbitrary cross sections and maps,” he said.

Further, he said, the software “expects an input of a number of normalized well-log curves, and the approximate tops and bases for the overall interval of interest.”
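In practice, preparing that input amounts to clipping each curve to the interval of interest and rescaling it to a common range. The sketch below is only an illustration of that step using NumPy with synthetic data; the min-max normalization and the depth values are assumptions for the example, not ChronoLog's documented requirements.

```python
import numpy as np

def normalize_log(depth, curve, top, base):
    """Clip a log curve to the interval of interest and rescale it to [0, 1].

    'top' and 'base' are the approximate bounding depths of the interval;
    min-max rescaling is just one simple choice of normalization.
    """
    mask = (depth >= top) & (depth <= base)
    d, c = depth[mask], curve[mask]
    c = (c - np.nanmin(c)) / (np.nanmax(c) - np.nanmin(c))
    return d, c

# Illustrative data: a synthetic gamma-ray-like curve sampled every 0.5 ft
depth = np.arange(5000.0, 7000.0, 0.5)
gr = 60 + 40 * np.sin(depth / 50.0) + np.random.default_rng(0).normal(0, 5, depth.size)

interval_depth, gr_norm = normalize_log(depth, gr, top=5500.0, base=6500.0)
```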

The Birth of ChronoLog

Sylvester said the entire process came about after experimenting with an algorithm called “dynamic time warping,” which has been used for years, for example, in speech recognition.

“It is relatively straightforward to correlate two logs with this algorithm,” he said, adding that by performing pairwise correlations, he could create correlation panels that looked convincing.
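The underlying algorithm is standard enough that a bare-bones version fits in a few lines of NumPy. The sketch below is a textbook dynamic-time-warping implementation applied to two synthetic curves, not ChronoLog's own code; the returned path lists the sample-to-sample ties between the two logs.

```python
import numpy as np

def dtw(a, b):
    """Basic dynamic time warping between two 1-D curves.

    Returns the accumulated-cost matrix and the optimal warping path,
    i.e. the (i, j) sample pairs that tie curve 'a' to curve 'b'.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Trace the optimal path back from the end of both curves
    path, i, j = [(n - 1, m - 1)], n, m
    while (i, j) != (1, 1):
        if i == 1:
            j -= 1
        elif j == 1:
            i -= 1
        else:
            step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        path.append((i - 1, j - 1))
    return cost[1:, 1:], path[::-1]

# Two synthetic logs, the second a stretched-and-squeezed copy of the first
x = np.linspace(0, 6 * np.pi, 200)
log_a = np.sin(x) + 0.1 * np.cos(5 * x)
log_b = np.interp(np.linspace(0, 1, 260) ** 1.2, np.linspace(0, 1, 200), log_a)

_, path = dtw(log_a, log_b)   # each (i, j) in 'path' is a depth-to-depth tie
```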

[Figure 4: A map of all the well pairs that were used in the correlation – more than 15,000 pairs in this case]

“However, I was unable to close any loops; and as a result, I was unable to create maps. Every time I tried to close a loop, most of the tops would not land exactly where they were supposed to land,” he added.

It was a significant challenge, but he then came across a remarkable 2014 paper by Wheeler and Hale, geophysicists at the Colorado School of Mines, who introduced the concept of stretching-and-squeezing logs so that they fit into a chronostratigraphic diagram and suggested a computationally efficient solution for minimizing the errors that come from the conflicting correlations.

“Every horizontal line in such a diagram is a correlation. Creating this Wheeler diagram is possible through a global optimization that minimizes the conflicts between correlations that come from different well pairs,” Sylvester explained.
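The flavor of that optimization can be shown with a toy example. Wheeler and Hale's actual formulation adjusts the full stretch-and-squeeze shifts along each log, but the essence of loop closure survives in a much simpler least-squares problem: given conflicting pairwise estimates of relative shift between wells, solve for one consistent value per well.

```python
import numpy as np

def adjust_shifts(pairs, observed, n_wells):
    """Least-squares loop closure for pairwise relative shifts.

    pairs    : list of (i, j) well-index pairs
    observed : observed relative shift for each pair (shift_j - shift_i)
    Returns one consistent shift per well (well 0 pinned at zero), so
    conflicting pairwise correlations are reconciled globally.
    """
    A = np.zeros((len(pairs) + 1, n_wells))
    b = np.zeros(len(pairs) + 1)
    for k, (i, j) in enumerate(pairs):
        A[k, i], A[k, j] = -1.0, 1.0
        b[k] = observed[k]
    A[-1, 0] = 1.0   # pin the first well to remove the free constant
    shifts, *_ = np.linalg.lstsq(A, b, rcond=None)
    return shifts

# Toy example: three wells whose pairwise estimates do not close the loop exactly
pairs = [(0, 1), (1, 2), (0, 2)]
observed = [10.0, 5.0, 17.0]              # 10 + 5 != 17, so the loop has a 2-unit conflict
print(adjust_shifts(pairs, observed, 3))  # the misfit is distributed across the wells
```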

He said that due to this optimization, the correlations are not always based on local lithology, as the program often realizes that it is best to correlate a sandstone to a mudstone in order to find the best larger-scale solution.

He and his staff then developed “ChronoLog” and combined it with the blocking method (based on the continuous wavelet transform) to segment the stratigraphy in a meaningful way and create cross sections and maps.
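One way to picture that kind of wavelet-based blocking, offered here only as a rough stand-in for the method ChronoLog actually uses, is to convolve the log with a Ricker wavelet at a single scale and treat strong local extrema of the response as block boundaries; a full continuous wavelet transform would repeat this over many scales.

```python
import numpy as np

def ricker(points, width):
    """Ricker ('Mexican hat') wavelet sampled on 'points' samples."""
    t = np.linspace(-points / 2, points / 2, points)
    a = 2 / (np.sqrt(3 * width) * np.pi ** 0.25)
    return a * (1 - (t / width) ** 2) * np.exp(-(t ** 2) / (2 * width ** 2))

def block_boundaries(curve, width=20, threshold=0.5):
    """Pick candidate block boundaries from one scale of a wavelet transform.

    Local extrema of the wavelet response that exceed 'threshold' times the
    maximum response are taken as boundaries; the width and threshold here
    are arbitrary illustrative choices.
    """
    response = np.convolve(curve, ricker(10 * width, width), mode="same")
    mag = np.abs(response)
    is_peak = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:]) & \
              (mag[1:-1] > threshold * mag.max())
    return np.where(is_peak)[0] + 1
```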

“As the number of wells increases, the number of well pairs that are likely to have valuable information for correlation explodes,” he said.

If, for example, you have five wells, there are 10 potential well pairs that could be correlated. For a hundred wells, there are 4,950 pairs; for a thousand, 499,500.

“In reality,” said Sylvester, “you don’t want to use all these well pairs because some of the wells are too far from each other; but it is obvious that it is impossible for even the most patient geoscientist to correlate all the well pairs that are potentially useful.”
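The combinatorics, and the distance screening Sylvester describes, are easy to reproduce. In the sketch below the well locations are random and the cutoff distance is an arbitrary placeholder, not a value from his workflow; with n wells there are n(n − 1)/2 possible pairs, which is what explodes as the well count grows.

```python
import numpy as np
from itertools import combinations

def usable_pairs(xy, max_distance):
    """Return all well pairs closer together than 'max_distance'.

    xy is an (n, 2) array of well locations; distant pairs are dropped
    because they carry little useful correlation information.
    """
    return [(i, j) for i, j in combinations(range(len(xy)), 2)
            if np.hypot(*(xy[i] - xy[j])) <= max_distance]

rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 100.0, size=(100, 2))   # 100 wells -> 4,950 possible pairs
print(len(usable_pairs(xy, max_distance=20.0)))
```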

Permian Basin as a Model

QCL is a UT-Austin Jackson School industry research collaboration focusing on the sedimentology and stratigraphy of clastic depositional systems, with applications in reservoir modeling, uncertainty in subsurface stratigraphic correlation, and source-to-sink predictions for frontier exploration. Specifically, Sylvester’s research focuses on the geomorphology and stratigraphy of clastic depositional systems, using outcrop, subsurface and remote sensing data.

“As most of the datasets we have applied this technology on come from the Permian Basin, one of the recurring comments we receive is that the stratigraphy of the Permian Basin is relatively simple, so it is not too surprising that a software tool that does not know anything about sequence stratigraphy, unconformities or maximum flooding surfaces still produces reasonable results. However, those who have correlated well logs in the basinal deposits of the Permian Basin know that in many places, there is a short-range variability in these rocks and correlation of nearby wells is far from trivial, largely due to the presence of channels and mass-transport deposits,” Sylvester explained.

Many “micro-decisions,” as he calls them, must be made by the manual interpreter in such situations, and the process quickly becomes tiresome.

“At least for me,” he qualified.

What’s encouraging is that the benefits of the new software can extend to plays beyond the Permian Basin.

“Initial results from other basins (for example, the North Slope of Alaska) suggest that useful results can be generated in settings with significantly higher geological and logistical complexity (e.g., deviated wells in densely faulted prograding delta deposits),” he said.

Further, Sylvester said it is not necessary for the computer to “know” advanced stratigraphic concepts in order to generate useful stratigraphic correlations.

“The simple idea of stretching-and-squeezing well logs into a chronostratigraphic diagram is quite powerful.”

Sylvester was scheduled to speak at this year’s ACE in Houston and explain in greater detail how this new software will enable faster and more objective mapping than manual correlation. The software, he believes, will be of interest not only to geoscientists who work in areas with a large number of wells, but to anyone who works with 1-D signals that can be correlated.

Comments (1)

Automated well-log correlation: an open problem since the 1960s

Correlation of wireline logs is a standard method used in the construction of regional stratigraphic models (Sharland et al., 2001) and in detailed reservoir-unit correlations (Hulstrand et al., 1985). Every exploration or development geology study incorporates log correlation as a prerequisite to reservoir modeling.

Automated correlation of wireline logs has been attempted since the 1960s, beginning with the early work of Prof. W. Schwarzacher (1964), and developed into the methods of “Quantitative Stratigraphy” under the IGCP-148 project. My own earliest work on automated correlation, in a study done in 1987, used the program published by Kwon (1977). “Correlator” was a computer program released by the Kansas Geological Survey in 2003. Doveton (1986) used methods to integrate information from multiple log traces into a unified single litho-column for correlation. More recently, Pollock et al. (2017) used dynamic time warping (DTW) to correlate thousands of wells in unconventional oil and gas fields. In my own unpublished work done in “R”, Levellie segmentation and DTW were attempted in 2016 for the Bab field in onshore Abu Dhabi.

Episode mining is a method developed in computer science for mining data available as a long sequence of events. Rather than considering individual log values, the segmented “zones” may be treated as events or entities for correlation. Combining the principles of the Markovian models of Schwarzacher (1975) with episode mining is one path toward automated well-log correlation of professionally acceptable quality.

Automated log correlation has remained an enigma and an unsatisfactorily solved problem for nearly 60 years. It is not yet a front-end technology of reliable quality, mainly due to the inability to use multiple log traces coherently.
6/24/2020 5:18:04 PM
