Analyzing Big Data For Big Discovery

Seismic attribute analysis technology

Unconventional plays are producing hundreds of wells within relatively small spaces, and all are cranking out more data than any human could possibly track. It’s a fact that has driven the oil and gas industry to search for technology that can intelligently quantify such data and increase the odds of a discovery.

The best place to look these days?

Try Amazon and Google, said AAPG member Kurt Marfurt, a professor of geophysics at the University of Oklahoma and frequent contributor to the EXPLORER’s Geophysical Corner.

“Technology designed to find patterns is migrating from the marketing industry into geology and geophysics,” Marfurt said, “from the good people at Amazon suggesting you buy a structural geology book based on your most recent purchase, to those at Google popping up a coupon on your smart phone when your GPS tells them you are, once again, standing in the Cheerios aisle.

“As the size of seismic volumes and the number of seismic attributes increase, this technology allows us to more rapidly extract and subsequently analyze patterns buried in the data,” Marfurt added.

Marfurt gave a presentation on the roots of seismic attribute analysis technology and its future use in the oil and gas industry at the Society of Exploration Geophysicists’ annual conference last month in Denver.

Expanding on that talk, he described how a process called “seismic clustering” uses algorithms that can help find sweet spots and bypassed pay that can be overlooked by conventional methods of seismic attribute interpretation.

Marfurt predicted that seismic clustering will be the fastest growing attribute analysis technology in the industry in coming years.

The money behind this technology, he said, is enormous.

While businesses are analyzing “big data” looking for improved ways of selling products and services, governments are analyzing telecommunication and financial data for bad guys – money launderers, drug traffickers and terrorists.

How It Works

“Supervised algorithms” are interpreter-driven, Marfurt explained.

Using well logs, microseismic events and production as hard but sparse data, the interpreter “trains” the computer to construct patterns that relate the hard data to softer, more continuous attribute data, such as impedance, spectral components and attenuation, thereby predicting a property of interest, such as areas of more effective reservoir completion.
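
As a rough illustration of that supervised step, the sketch below trains a small neural-network regressor on synthetic, made-up attribute values sampled at wells and then applies it across a flattened attribute volume; every array name and number here is a placeholder rather than part of any published workflow.

```python
# Hedged sketch of supervised attribute-to-property prediction.
# All data here are synthetic placeholders; a real workflow would read
# attribute volumes and well-based labels from interpretation software.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# "Soft" attributes sampled along the wellbores: impedance, spectral
# component, attenuation (columns), one row per calibration point.
X_wells = rng.normal(size=(200, 3))
# "Hard" sparse data: e.g., a completion-effectiveness measure per point.
y_wells = X_wells @ np.array([0.6, -0.3, 0.2]) + 0.1 * rng.normal(size=200)

scaler = StandardScaler().fit(X_wells)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_wells), y_wells)

# Apply the trained relationship everywhere in the attribute volume
# (flattened to n_voxels x n_attributes) to predict the property of interest.
X_volume = rng.normal(size=(10000, 3))
predicted_property = model.predict(scaler.transform(X_volume))
print(predicted_property.shape)
```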

In contrast, “unsupervised algorithms” are data-driven. The data “speak for themselves”: the algorithm searches for patterns and relationships that the interpreter may not have contemplated, thereby avoiding human bias.

One possible application of “unsupervised learning” would be to map bypassed pay, which by definition has not been sampled by the well bore and completion process.
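
One hedged way to picture how that might work in practice: cluster the attribute vectors with no labels at all, then flag any clusters that contain no well penetrations as candidates for a closer look. The data, cluster count and well indices below are invented for illustration.

```python
# Hedged sketch of unsupervised attribute clustering; synthetic data only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Attribute vectors for every voxel (or map bin): rows = samples,
# columns = attributes such as impedance, curvature, spectral magnitude.
attributes = rng.normal(size=(5000, 4))

# Let the data speak for themselves: no well labels are used in the fit.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(attributes)
labels = kmeans.labels_

# Suppose these sample indices fall inside existing wellbores (illustrative).
well_indices = rng.choice(len(attributes), size=50, replace=False)
sampled_clusters = set(labels[well_indices])

# Clusters never touched by a well are candidates worth a closer look,
# e.g., possible bypassed pay or a facies missed by the drilling program.
unsampled = [c for c in range(6) if c not in sampled_clusters]
print("clusters with no well control:", unsampled)
```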

Hard data also can be human experience.

“Interpreters are excellent at finding Waldo in children’s books,” Marfurt said.

In this supervised learning application, the interpreter defines the facies by picking representative examples of each. It is as important to pick the facies that are not of economic interest (such as a tight limestone) as it is those of interest (such as a high-porosity chert); fractured non-porous chert and shale make up other components of the Mississippi Lime play in the Mid-Continent. In this application, unsupervised learning may identify facies that were not tagged by any well control, such as an incised sandstone channel.

Currently, neural networks are the most established supervised clustering algorithms, and self-organizing maps remain the most established unsupervised clustering algorithms, Marfurt said. The relationships are in general nonlinear.
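
For readers curious about the mechanics, here is a minimal, from-scratch self-organizing map written for illustration only; commercial implementations are considerably more sophisticated, and the grid size, learning rate and synthetic attribute vectors below are arbitrary choices.

```python
# Minimal from-scratch self-organizing map (SOM) sketch, for illustration
# only; commercial attribute-clustering tools are far more elaborate.
import numpy as np

rng = np.random.default_rng(2)
attributes = rng.normal(size=(2000, 4))       # synthetic attribute vectors

grid_x, grid_y, n_attr = 8, 8, attributes.shape[1]
weights = rng.normal(size=(grid_x, grid_y, n_attr))   # prototype vectors

n_iter, lr0, sigma0 = 5000, 0.5, 3.0
ii, jj = np.meshgrid(np.arange(grid_x), np.arange(grid_y), indexing="ij")

for t in range(n_iter):
    x = attributes[rng.integers(len(attributes))]
    # Best-matching unit: the grid node whose prototype is closest to the sample.
    dist = np.linalg.norm(weights - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(dist), dist.shape)
    # Learning rate and neighborhood radius decay with iteration.
    frac = t / n_iter
    lr = lr0 * (1.0 - frac)
    sigma = sigma0 * (1.0 - frac) + 0.5
    # Gaussian neighborhood pulls nearby prototypes toward the sample.
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

# Each sample is assigned to its best-matching node; the 2-D node index
# becomes an unsupervised "seismic facies" label the interpreter can map.
d = np.linalg.norm(attributes[:, None, None, :] - weights, axis=3)
facies = np.array([np.unravel_index(np.argmin(di), di.shape) for di in d])
print(facies[:5])
```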

Correlating fracture density measured along a horizontal image log to seismic attributes is one of the more challenging of these applications.

For example, fractures are correlated to strain, which in turn is measured by seismic curvature – but fractures initiate only after a critical amount of deformation. The fracture density then increases with an increasing amount of strain, until the rocks are saturated with fractures.

At this point, further deformation is accommodated by moving the rocks along the fractures, which are now renamed “faults.”
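
That progression can be pictured with a simple saturating curve. The functional form and numbers below are purely illustrative and are not Marfurt’s model or a calibrated rock-physics relationship.

```python
# Illustrative-only model of fracture density versus strain: zero below a
# critical strain, rising toward saturation, after which slip on existing
# fractures ("faults") accommodates further deformation.
import numpy as np

def fracture_density(strain, critical_strain=0.01, saturation=1.0, rate=200.0):
    """Hypothetical saturating response; all parameters are made up."""
    excess = np.maximum(strain - critical_strain, 0.0)
    return saturation * (1.0 - np.exp(-rate * excess))

strain = np.linspace(0.0, 0.05, 6)
print(list(zip(strain.round(3), fracture_density(strain).round(2))))
```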

Breaking Down the Data

Obviously, volumetric attribute calculations generate entire data volumes – even land surveys can run to gigabytes in size, Marfurt said.

Yet, if interpreters want to evaluate alternative attribute expressions of their geology, they may need to sift through dozens of attribute volumes to identify patterns of interest.

Software vendors are now marketing programs that build on interpreter experience.

The interpreter defines important facies of interest on key lines, perhaps those with well control. The software then sifts through the multiple data volumes to determine which attributes best differentiate the desired facies.

“I predict that an interpreter will be able to interactively add and subtract attribute volumes to determine which combination of attributes differentiate a given facies of interest,” Marfurt said, “thereby testing alternative hypotheses.”
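
A hedged sketch of that sifting step appears below, using an off-the-shelf random-forest importance ranking as a stand-in for whatever proprietary method a vendor might actually use; the attribute names and synthetic facies labels are placeholders.

```python
# Hedged sketch: rank which attributes best separate interpreter-picked
# facies labels on key lines. Synthetic data; names are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
attribute_names = ["P-impedance", "coherence", "curvature", "spectral_30Hz"]

# Samples picked by the interpreter on key lines, with facies labels.
X = rng.normal(size=(600, len(attribute_names)))
facies = (X[:, 0] - 0.8 * X[:, 2] + 0.2 * rng.normal(size=600) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, facies)

# Higher importance = attribute contributes more to separating the facies;
# an interactive tool could add or drop attributes and re-rank on the fly.
for name, imp in sorted(zip(attribute_names, clf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:15s} {imp:.2f}")
```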

Seismic attributes, such as impedance inversion, are only as good as the seismic data and the background model that went into them, he continued.

In a typical workflow:

  • The petrophysicist generates well log crossplots to define lithologies or geomechanical behavior in terms of P- and S-impedances.
  • Next, the seismic interpreter ties wells to the seismic data and builds a background P- and S-impedance model.
  • The seismic data are cleaned up, or “conditioned,” and inverted.
  • The inversion results are combined with the crossplot model to generate a volume of the desired parameter, such as brittleness (a brief code sketch of this step follows the list).
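
A minimal sketch of those final steps might look like the following, turning inverted P- and S-impedance into a simple brittleness-style proxy. The proxy used here (a normalized inverse Poisson’s ratio) and the synthetic impedance values are generic illustrations, not the crossplot model a particular petrophysicist would supply.

```python
# Hedged sketch of the inversion-to-brittleness step; the proxy below is a
# generic illustration, not any specific team's crossplot model.
import numpy as np

rng = np.random.default_rng(4)

# Pretend these came out of a simultaneous prestack inversion.
p_impedance = rng.uniform(8000.0, 14000.0, size=10000)          # Ip = rho * Vp
s_impedance = p_impedance / rng.uniform(1.6, 2.2, size=10000)   # Is = rho * Vs

# Density cancels in the ratio, so Vp/Vs follows directly from Ip/Is.
vp_vs = p_impedance / s_impedance
poisson = (vp_vs**2 - 2.0) / (2.0 * (vp_vs**2 - 1.0))

# Simple proxy: lower Poisson's ratio -> more brittle, rescaled to 0..1
# over the range seen in this volume (illustrative normalization only).
brittleness = (poisson.max() - poisson) / (poisson.max() - poisson.min())
print(brittleness.min(), brittleness.max())
```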

While this workflow requires technology experts, the computer can remember the keystrokes that the experts have made. One software vendor has prototyped a technology that links these keystrokes, allowing the automatic updating of the final brittleness prediction when a new well measurement, such as a dipole sonic log, becomes available.

Similar updates may be driven by improvements in seismic data conditioning.

In addition, the interpretation team can generate a measure of confidence in the final output by tweaking assumptions in the original well log interpretation and crossplots.

More Than Technology

While technology is improving, fundamental understanding of the seismic response to different geologic features is critical, Marfurt said.

Most of these advancements will be done the hard way, through careful case studies and effective workflows, he added. Once such workflows have been prototyped and validated by seasoned interpreters, they can be emulated in computer software and distributed to the interpretation community at large.

“One of the more common challenges,” he said, “is how to link attributes that delineate specific architectural elements of a depositional system that can be imaged by seismic data to reservoir features of interest that fall below seismic resolution.”

The classic example is the prediction of sand in the point bars of a shale-filled meandering channel. The expression of sand injectites on coherence images and of differential compaction on curvature images are relatively recent observations.

AAPG member and Robert R. Berg Outstanding Research award winner Henry Posamentier, who helped pioneer the modern approach to sequence stratigraphy, calls these features “FLTs,” or funny-looking-things.

Recognition of such FLTs – and placing them in a proper depositional, diagenetic or tectonic framework – is key to conventional interpretation, in which the human interpreter “clusters” such features into a geologic model.

Computer software will follow in the footsteps of such innovation.
