Basin Modeling on the Verge of Major Advances

To understand the future of basin modeling, you need to know the ABCs of some relevant computing tools.

Also D and E.

In coming decades, these tools in combination will enable the industry to do basin modeling at a greatly advanced level of interpretation and scale.

Here's a quick guide.

  1. Analytics is the identification and interpretation of meaningful patterns in data, drawing on computer processing, statistics, and both real-time and historical monitoring. In short, it is the systematic analysis of data.
  2. Bandwidth optimization refers to techniques for increasing data transfer speeds and efficiencies, especially across wide-area networks.
  3. Cloud computing uses a network of servers hosted on the Internet to store, manage and process data, providing shared computer processing resources and data to computers and other devices on demand.
  4. Distributed computing allows networked computers to communicate and coordinate their actions to perform a defined series of tasks or to achieve a goal. Data processing power can be amalgamated in a distributed network, along with shared data, software programs and storage devices.
  5. Edge computing enhances cloud computing systems by performing processing at or near the source of the data, at the edge of the network. It reduces the bandwidth needed between the data sources and data processors by performing analytics and knowledge generation near the data source.

Big Data

Advances in basin modeling today target the handling and utilization of enormous sets of data. One well site can produce terabytes of data daily, said Sashi Gunturu, founder and CEO of Petrabytes Inc. in Houston.

Petrabytes analyzes oilfield data in the cloud with 3-D and 4-D visualization of large datasets, applying an artificial intelligence (AI) model and using analytics throughout the asset lifecycle in seismic, drilling, completion and reservoir monitoring.

Gunturu said the company converts data to images “rather than trying to process every single point of data, which might not be so effective.” The outcome is pattern recognition for basin modeling based on millions of images, he said.

“If you have a million images, it's almost impossible for a human being to process. That's where you need an AI approach,” Gunturu said. “We are able to do this because of the scale of the cloud, and edge computing.”

StoRM, or “stochastic rock modeling,” uses geological information to set up probabilistic distributions of input parameters. Image courtesy of the Norwegian University of Science and Technology.
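
Gunturu did not spell out the data-to-image conversion step, but the general idea can be sketched in a few lines of Python: fold a stream of point measurements into a 2-D array and treat that array as a grayscale image a vision model can scan for patterns. The window width and the synthetic trace below are illustrative assumptions, not Petrabytes' workflow.

    import numpy as np

    def readings_to_image(readings, width=256):
        """Fold a 1-D stream of sensor readings into a 2-D grayscale 'image'.

        Each row is one consecutive window of the stream, so recurring patterns
        in the signal become visual texture that an image-recognition model can
        learn from, instead of being handled point by point.
        """
        readings = np.asarray(readings, dtype=float)
        n_rows = len(readings) // width               # drop any ragged tail
        grid = readings[: n_rows * width].reshape(n_rows, width)

        # Scale to 0-255 so the array can be stored as an 8-bit image file.
        lo, hi = grid.min(), grid.max()
        scaled = (grid - lo) / (hi - lo + 1e-12)
        return (scaled * 255).astype(np.uint8)

    # Illustrative only: a synthetic trace standing in for well-site sensor data.
    trace = np.sin(np.linspace(0, 200 * np.pi, 256 * 512)) + 0.1 * np.random.randn(256 * 512)
    tile = readings_to_image(trace)
    print(tile.shape)  # (512, 256): one image tile ready for a vision model

In practice, tiles like this would be generated by the thousands and fed to an image-recognition model trained to flag patterns of interest.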

Data that's already gone through some processing and analysis is often referred to as “rich data.” A key advantage of edge computing comes from processing data at the data-capture site.
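
As a rough illustration of that pattern (not Petrabytes' software), the Python sketch below collapses a window of raw well-site readings into a compact summary record at the edge, so only the summary needs to travel onward; the sensor name and summary fields are assumptions made for the example.

    import json
    import statistics

    def summarize_at_edge(raw_readings, sensor_id):
        """Reduce a window of raw well-site readings to a compact 'rich data' record.

        The heavy reduction happens here, at the data-capture site; only the
        small summary record would be transmitted onward to the cloud.
        """
        return {
            "sensor_id": sensor_id,
            "count": len(raw_readings),
            "mean": statistics.fmean(raw_readings),
            "stdev": statistics.pstdev(raw_readings),
            "min": min(raw_readings),
            "max": max(raw_readings),
        }

    # Illustrative only: thousands of raw points collapse to one short JSON payload.
    window = [4.97 + 0.01 * (i % 7) for i in range(10_000)]
    payload = json.dumps(summarize_at_edge(window, sensor_id="WELL-01-P1"))
    print(len(window), "raw values ->", len(payload), "characters transmitted")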

“You compute as much as you can at the well-site location, and you only transmit rich data to the cloud,” Gunturu said.

In addition to real-time, sensor-captured data from the field, publicly available historic data can be analyzed and added to enhance the basin modeling process, according to Gunturu.
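
A minimal sketch of that kind of integration, assuming a public well-history table and a table of recent measurements that share a well identifier (the tables, columns and values are all invented for the example):

    import pandas as pd

    # Hypothetical tables: a public well-history record and recent sensor
    # readings, joined on an assumed shared well identifier.
    public_history = pd.DataFrame({
        "well": ["31/2-1", "31/2-2"],
        "spud_year": [1979, 1980],
        "reported_td_m": [3500.0, 3200.0],
    })
    live_measurements = pd.DataFrame({
        "well": ["31/2-1", "31/2-1", "31/2-2"],
        "timestamp": pd.to_datetime(["2024-05-01", "2024-05-02", "2024-05-01"]),
        "downhole_pressure_mpa": [31.2, 31.4, 28.9],
    })

    # One integrated view: each real-time reading carries its well's public history.
    combined = live_measurements.merge(public_history, on="well", how="left")
    print(combined)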

“The big thing is combining the public data with the active, measuring data. It's a combination of the pre- and post-processing with real-time modeling,” he noted. “At the end of the day, the interpretation is an integration of all this.”

“The big piece is, as the data gets bigger and bigger you need a distributed infrastructure. Onshore it works really well because it's all well connected. Offshore, the data transmission and the connectivity might not be so effective,” he said.

Petrabytes hopes to establish an industry platform – essentially an interconnected digital work and processing area – for analyzing and imaging oil and gas data using cloud, distributed and edge computing, Gunturu said.

“We want to develop this collaborative part and scale it to a much higher level, especially with the sensing. We want to be that scientific platform, like a Google Docs for that platform,” he said.

Stochastic Modeling

In other research, advanced computer-processing power has been applied to defining and refining data for basin modeling.

Krzysztof Jan Zieba is a researcher in the Department of Geoscience and Petroleum at the Norwegian University of Science and Technology in Trondheim, Norway, where he works on StoRM, a stochastic rock modeling tool (a stochastic analysis uses randomly determined input parameters).

Traditionally, input data for basin models are determined based on “most trusted” values, Zieba said. Those values are usually derived from seismic data and from geological information such as onshore outcrops and wells.

As a result, predictions of present and past rock properties and hydrocarbon accumulations may be based on “inherently biased” single-parameter values, he noted.

“The quality of the modeling relies on the availability of certain input data that are often unavailable or expensive. In the worst-case scenario, erroneous basin models might lead to wrong exploration targets,” he said.

To address that problem, StoRM uses geological information to set up probabilistic distributions of input parameters.

Many basin input values can be described as either ranges or probabilistic distributions based on available data, Zieba said. For example, in Monte Carlo solutions, randomly sampled values of input parameters are used to create multiple alternative basin models.
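
A bare-bones Monte Carlo sketch along those lines, in Python, with invented parameter names and distributions and a toy forward model standing in for a real basin simulator:

    import math
    import random

    # Illustrative priors for two basin-model inputs; the names, ranges and the
    # toy forward model are assumptions for this sketch, not StoRM's inputs.
    def sample_inputs(rng):
        return {
            "initial_porosity": rng.uniform(0.35, 0.55),               # fraction
            "compaction_coeff_per_km": rng.normalvariate(0.45, 0.08),
        }

    def forward_model(params, burial_depth_km=2.5):
        """Toy stand-in for a basin forward model: porosity after burial."""
        return params["initial_porosity"] * math.exp(
            -params["compaction_coeff_per_km"] * burial_depth_km
        )

    rng = random.Random(42)
    # Each random draw of the inputs yields one alternative model of the basin.
    alternative_models = [(p, forward_model(p)) for p in (sample_inputs(rng) for _ in range(10_000))]
    print(len(alternative_models), "alternative porosity predictions")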

But existing stochastic basin modeling does not test the interrelations between values of various parameters, Zieba observed.

That can lead to unlikely or even impossible results, where randomly sampled values of the input parameters cannot occur together based on geological knowledge or additional real-world measurements, he said.

“In our approach, we model basin infill history and related parameter changes forward in time from deposition of the first layer to the present by using randomly sampled input values. The modeling is an iterative process where millions of modeling runs are conducted one after another,” Zieba explained.

“From millions of individual sets of input values, only a small fraction produces a rock column that matches the measurements. Only the matching values can be considered as likely ones, while the remaining ones need to be rejected in basin models,” he said.

In this approach to analytics, each modeling run is calibrated to real-world observations and measurements as a reality check, Zieba said. A key calibration method compares modeled rock-unit depth boundaries to seismic depths or borehole data, he said.
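
Put together, the sample-run-compare-reject loop can be sketched as follows; the toy forward model, the “measured” calibration value and the tolerance are placeholders for the illustration, not StoRM's physics or calibration data.

    import math
    import random

    def forward_model(initial_porosity, compaction_coeff, burial_depth_km=2.5):
        """Toy stand-in for one forward basin-infill run: porosity after burial."""
        return initial_porosity * math.exp(-compaction_coeff * burial_depth_km)

    MEASURED_POROSITY = 0.18     # placeholder calibration measurement
    TOLERANCE = 0.005            # how closely a run must match to be accepted

    rng = random.Random(7)
    accepted = []
    for _ in range(1_000_000):                             # runs conducted one after another
        phi0 = rng.uniform(0.35, 0.55)                     # randomly sampled inputs
        c = rng.normalvariate(0.45, 0.08)
        if abs(forward_model(phi0, c) - MEASURED_POROSITY) <= TOLERANCE:
            accepted.append((phi0, c))                     # only matching input sets are kept

    share = 100 * len(accepted) / 1_000_000
    print(f"accepted {len(accepted):,} of 1,000,000 runs ({share:.2f}% judged likely)")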

Calibrations can also involve more sophisticated data, such as net erosion thickness, paleo-water depth indicators or relations between paleoenvironments and sedimentation/erosion rates.

Future Development

The fully realized future of basin modeling doesn't exist quite yet. Gunturu cited the need for reduced compute time, better imaging, greater capability in identifying sweet spots, a seamless workflow and lower processing costs.

Better interconnectivity among computing tools is also needed, especially to give operators real-time capabilities.

“That loop doesn't exist yet,” Gunturu said.

But the industry is on the verge of the enhanced processing speeds and expanded data inputs from seismic surveys and sensors that will enable geoscientists to produce basin models at a level of quality and scope never seen before.
