Exploration and development drilling targeting the upper Middle Pennsylvanian Red Fork Sandstone has been going on since 1979 in the western part of the Anadarko basin of Oklahoma. The play produces gas and gas condensate from fine-grained, low-permeability sandstone in stratigraphic traps at depths ranging from 12,000 to 14,000 feet. Because of those depths and the reservoir characteristics, the Red Fork play is highly sensitive to gas prices, and good-quality reservoir sandstone is difficult to predict because of the complexity of the depositional environment.
Clearly, then, determining where all those potential hydrocarbons are located requires integration of rock and log data. Logs need to be calibrated to cores in order to estimate depositional environments accurately and to make a reasonable assessment of diagenetic overprints.
Since 1998, 7,551 barrels of oil and 316,515 thousand cubic feet of total gas have been discovered.
There’s much more down there, and that’s what we’re talking about here.
Fangyu Li, a postdoctoral research associate in the College of Engineering at the University of Georgia, said the latest technology in multispectral coherence, specifically techniques for bringing seismic data together with other sources of information to better map all that, is developing to the point where scientists can see more of what’s down there, and see it more clearly. Fangyu presented his doctoral dissertation, “Seismic Data Multi-Spectral Analysis, Attenuation Estimation and Seismic Sequence Stratigraphy Enhancement Applied to Conventional and Unconventional Reservoirs,” last year, and the Red Fork Sandstone was one of the reservoirs studied.
Advances in Multispectral Coherence
Some 20 years after its inception, seismic coherence has become a routine tool. Coherence volumes are used to delineate structural and stratigraphic discontinuities such as channels, faults and fractures, to highlight incoherent zones such as karst collapse and mass-transport complexes, and to identify subtle tectonic and sedimentary features that might otherwise be overlooked on conventional amplitude volumes. The better the technology gets, obviously, the better the picture.
Fangyu has kept an eye on that technology.
“About coherence techniques in general, there are three generations: C1, cross-correlation-based; C2, semblance-based; and C3, decomposition-based,” he said.
That is not all the technology that has been developed, either. Fangyu said other methods to calculate coherence have been invented, such as those based on higher-order statistics, “but C1, C2 and C3 are the most common to date.”
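To make the distinction concrete, here is a minimal sketch of the C2 (semblance) measure, the middle of those three generations. It assumes a small gather of neighboring traces around one output bin; the function name and window sizes are illustrative, not taken from Fangyu’s code.

```python
import numpy as np

def semblance_coherence(gather, half_window=2):
    """Semblance-based (C2) coherence over a small analysis gather.

    gather : 2-D array, shape (n_samples, n_traces) -- e.g., the trace
             at the output bin plus its nearest neighbors.
    half_window : vertical half-width of the analysis window, in samples.
    Returns one coherence value in [0, 1] per time sample.
    """
    n_samples, n_traces = gather.shape
    coh = np.zeros(n_samples)
    for t in range(half_window, n_samples - half_window):
        win = gather[t - half_window:t + half_window + 1, :]
        stacked_energy = np.sum(np.sum(win, axis=1) ** 2)  # energy of the stacked trace
        total_energy = n_traces * np.sum(win ** 2)         # scaled total energy
        coh[t] = stacked_energy / total_energy if total_energy > 0 else 0.0
    return coh
```

Coherent traces stack constructively and push the ratio toward one; a fault or channel edge cutting through the gather pulls it toward zero.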
His own multispectral coherence is based on C3 coherence, meaning that instead of using the broadband seismic data directly, he uses spectral voices as the input.
“The innovations are how to use the spectral voices and how to display the results,” he said, and this, he believes, is the method’s advantage over others.
“We noticed that the coherence images calculated from different spectral voices show different features, which is important for geophysical and geological interpretation. The coherence images at different frequencies can show different stages of the incised valley,” he said.
The multispectral approach combines the coherence images from different frequencies into a single map that shows features at different scales together.
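In outline, that workflow looks something like the sketch below: split each trace into band-limited spectral voices, run coherence on each voice, then merge the per-frequency images. The band edges, filter choice and the minimum-based merge here are illustrative assumptions; the published C3-based approach operates on the voices’ covariance matrices rather than combining finished coherence maps.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spectral_voices(trace, fs, bands):
    """Split one trace into band-limited 'spectral voices'.

    trace : 1-D array of seismic amplitudes.
    fs    : sampling frequency in Hz.
    bands : list of (low, high) passbands in Hz,
            e.g. [(10, 20), (20, 40), (40, 60)].
    """
    voices = []
    for low, high in bands:
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        voices.append(sosfiltfilt(sos, trace))  # zero-phase band-pass filter
    return voices

# Illustrative merge: compute coherence per voice (e.g., with the
# semblance sketch above), then keep the minimum at every sample so a
# discontinuity visible at any one frequency survives into the single map.
# combined = np.minimum.reduce([semblance_coherence(v) for v in voice_gathers])
```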
Better Hardware Needed
The advances in seismic coherence, he said, are in the delineation, because one usually wants to interpret the most broadband data possible. That, of course, means the hardware has had to catch up with the software over the years.
“Speaking of making the work easier, computer hardware development needs to be mentioned,” he said. “First, the storage: because we generate spectral coherence images, one result per frequency slice, the output is tens or hundreds of times larger than the original. Second, the computation power, since the coherence calculation needs to be done repeatedly on every frequency slice. Ten years ago, it would have been too slow.”
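A back-of-the-envelope example of that storage point, with purely illustrative numbers:

```python
# One full-bandwidth coherence volume vs. one volume per spectral voice.
base_gb = 20     # size of a single coherence volume (illustrative)
n_voices = 60    # e.g., voices from 5 to 64 Hz at 1-Hz increments
print(f"{base_gb * n_voices} GB")  # 1200 GB -- 60x the original output
```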
You need – and this is almost too obvious a point – a good monitor, as well.
“In addition, the displays are better than before, so more detail can be shown,” said Fangyu.
“If the data quality is not good, there is no value in looking deeper into it. So in the future, as the data quality keeps increasing, the coherence results will get better and better,” he said.
The most important step, then, in coherence computation is to ensure that the processed data exhibit high bandwidth, are accurately imaged, and are free of multiples and other types of coherent noise. Once in the interpreter’s hands, many seismic amplitude volumes benefit from subsequent post-stack structure-oriented filtering and spectral balancing.
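Of those preconditioning steps, spectral balancing is the easiest to sketch. The version below applies a single average gain curve to a 2-D array of traces; it is a minimal illustration only, since production implementations are typically time-variant and more carefully regularized.

```python
import numpy as np

def spectral_balance(traces, eps=1e-3):
    """Flatten the average amplitude spectrum of a (n_traces, n_samples)
    array with one gain curve -- a minimal post-stack sketch."""
    spec = np.fft.rfft(traces, axis=-1)
    avg_amp = np.mean(np.abs(spec), axis=0)                 # average spectrum
    gain = avg_amp.max() / (avg_amp + eps * avg_amp.max())  # boost weak bands
    return np.fft.irfft(spec * gain, n=traces.shape[-1], axis=-1)
```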
New techniques keep delivering more, and more important, information.
“With the new information, detailed interpretation can be made, and better reservoir characterization can be expected,” said Fangyu.
Much of this work is being done at the Attribute-Assisted Seismic Processing and Interpretation (AASPI) consortium at the University of Oklahoma, where Fangyu completed his doctoral research.
He warns, though, that the wealth of information is only part of the story. Sometimes there is too much of it, and it can be contradictory.
“For example, if you are provided a coherence image from the full bandwidth data, you can find discontinuities. However, now you are provided two spectral coherence images from different frequencies, and they are showing different features. What is your interpretation?”
As for the skills needed in those charged with such work, it’s about what you’d expect, Fangyu said.
“The interpreters need to know what features to look for and what spectral responses correspond to what structures or rock types,” he said.