The Oil Industry's Big Hurdle for Big Data

People who think about data in the upstream oil and gas industry are surprisingly in agreement about the future.

Looking at the decades ahead, will the industry struggle with a need for greater computing power to process and analyze rapidly growing amounts of data? Or will the challenge come from capturing enough data to understand complex exploration and production operations?

Experts in seismic and other industry data seem to agree the answer will be:

C) None of the above.

Kristian Johansen is chief executive officer of TGS in Houston, a worldwide force in capturing and processing geophysical data.

“Cloud computing is the future trend of computation,” Johansen said.

“The seismic industry’s challenge is how to get the data in and out from the cloud effectively, because we are dealing with thousands of times or more data than the other industries,” he noted.

Ricardo Bertocco, partner with management consulting group Bain & Company in Dallas, served as a principal author on the company’s report “Big Data Analytics in Oil and Gas.”

“Right now the bottleneck is transmitting data. The cloud can help,” Bertocco said.

Instead of struggling to collect or process data, the oil and gas industry of the future will face its biggest challenges in combining data types and in storing, accessing, moving, managing and manipulating data.

Better Machines, Better Integration

Not that the industry doesn’t want bigger and better computers. Input and output (I/O) of data can be a serious hurdle, but even today, huge data collections are demanding improved computational abilities.

“Our industry demand for computing has been and will be always ahead of the IT industry. Algorithms exist today that would require thousands of times more powerful computers than today’s supercomputers,” Johansen said.

“And yes, as we said earlier, data I/O is a very big challenge when we are moving into cloud computing,” he added.
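
As a rough illustration of what that I/O challenge looks like in practice, the sketch below uses Python and the boto3 library to push a large seismic file into cloud object storage in concurrent multipart chunks rather than as one serial stream. The bucket and file names are hypothetical, Amazon S3 stands in for any cloud store, and this is a minimal sketch under those assumptions, not how TGS or any vendor actually moves data.

    # Minimal sketch: parallel multipart upload of a large seismic file
    # to cloud object storage. Bucket and file names are hypothetical.
    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")

    # Split the file into 64 MB parts and move 16 parts at a time, so a
    # multi-gigabyte SEG-Y volume is not pushed as a single serial stream.
    config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,
        multipart_chunksize=64 * 1024 * 1024,
        max_concurrency=16,
        use_threads=True,
    )

    s3.upload_file(
        Filename="survey_full_azimuth.segy",   # hypothetical local file
        Bucket="seismic-archive",              # hypothetical bucket
        Key="surveys/survey_full_azimuth.segy",
        Config=config,
    )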

Moore’s Law holds that computer processing power doubles roughly every two years. That isn’t a scientific law, but an observation by Intel co-founder Gordon Moore about the steady doubling of the number of transistors per square inch on integrated circuits.

A continuing debate questions whether that kind of computing advance can go on indefinitely, but Moore’s Law has held up for more than 50 years. Whatever the future pace of growth, no one doubts an ongoing increase in computing power.
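
A quick back-of-envelope calculation shows what that track record amounts to: a doubling every two years compounds to a roughly 33-million-fold increase over 50 years.

    # Moore's Law arithmetic: doubling every two years, compounded.
    years = 50
    growth = 2 ** (years / 2)
    print(f"Growth over {years} years: about {growth:,.0f}x")  # ~33,554,432x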

“In a general sense computational power – the hardware – continues to grow. Some types of challenges may be simply waiting for the hardware to catch up,” said Richard Gibson Jr., a professor in the Department of Geology & Geophysics at Texas A&M University in College Station, Texas, and an expert in seismic and microseismic applications.

“One of the main challenges will be integrating the larger and larger amounts of data and different types of data,” he noted. “It becomes more and more challenging for one human being or even a team of human beings to understand what that data is telling you.”

Gibson said he defines Big Data in oil and gas not only by the quantity of data, but by the variety of data types being captured, combined and processed.

“What might be fundamentally different here is combining the types of data we have, like seismic, with other types of data. As we take these kinds of data and try to combine them, it becomes different,” Gibson noted.

“One example might be looking at unconventional reservoirs,” he said.

In that example, microseismic data might be combined with geological data, well logs, rock mechanics, reservoir characteristics and engineering data such as pressures and fluid composition.

“In a bottom-line sense, the goal is to tell engineers how to come up with optimal reservoir-management strategies to produce hydrocarbons from unconventional resources,” Gibson said.
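
One way to picture that kind of integration is a single table that lines up measurements of different types against a shared key such as depth. The Python sketch below uses pandas to attach the nearest well-log sample to each microseismic event; the column names and values are hypothetical stand-ins, and real workflows combine far more data types with far more care.

    # Minimal sketch of cross-discipline data integration: join
    # microseismic events to the nearest well-log sample by depth.
    # All column names and values are hypothetical.
    import pandas as pd

    # Well-log samples at measured depths (meters).
    logs = pd.DataFrame({
        "depth_m": [2500.0, 2500.5, 2501.0, 2501.5],
        "gamma_ray_api": [85.0, 92.0, 110.0, 98.0],
        "sonic_us_per_ft": [70.1, 72.4, 75.0, 73.2],
    }).sort_values("depth_m")

    # Microseismic events located during a frac stage.
    events = pd.DataFrame({
        "depth_m": [2500.3, 2501.2],
        "magnitude": [-2.1, -1.8],
        "stage": [4, 4],
    }).sort_values("depth_m")

    # Attach the nearest log sample (within 1 m) to each event, yielding
    # one table that mixes geophysical and engineering measurements.
    combined = pd.merge_asof(events, logs, on="depth_m",
                             direction="nearest", tolerance=1.0)
    print(combined)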

Geophysical companies already combine seismic data with some other types of field data for a better understanding of a reservoir, so a movement toward more data integration in the future seems like a good bet.

“Integrating geological interpretation and well logs into seismic data processing enables TGS to provide clients with high-quality seismic attributes that are correlated better to reservoir properties,” Johansen said.

“Better sampling of the subsurface structures – finer grids and full azimuth acquisition – in data acquisition and improved analytical correlation of seismic data to rock properties in processing will be the key factors in the future to reduce E&P risks,” he predicted.

And Johansen said the seismic industry is now seeing more applications of artificial intelligence, especially in “classifying new data according to known classifications of lithology and hydrocarbon indicators,” and he expects even more AI “in the near future, especially with more and more powerful computers coming.”
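
A hedged sketch of that kind of application: train a classifier on seismic attributes labeled with known lithologies, then let it classify attributes pulled from newly acquired data. The features, labels and numbers below are hypothetical stand-ins, with scikit-learn used purely for illustration.

    # Minimal sketch: classify new seismic samples against known
    # lithology labels. All features and values are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [amplitude, dominant frequency (Hz), impedance] for a
    # sample whose lithology is already known.
    X_train = np.array([
        [0.82, 35.0, 6.1],
        [0.40, 28.0, 8.9],
        [0.75, 33.0, 6.4],
        [0.35, 25.0, 9.2],
    ])
    y_train = ["sand", "shale", "sand", "shale"]

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    # Classify attributes extracted from newly acquired seismic data.
    X_new = np.array([[0.78, 34.0, 6.2]])
    print(model.predict(X_new))  # e.g., ['sand']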

At some point in the future, the challenge of Big Data in oil and gas may grow into an “Enormous Data” challenge. Bertocco foresees a huge increase in the amount of data being captured.

“There’s going to be more and more data coming out of equipment,” he said.

Right now only about 15 percent of oil and gas equipment generates captured data, but “going forward, we have to believe that almost 100 percent of the equipment will produce data,” he said.

Bertocco thinks the industry will be dealing with terabytes of data per well and several hundred terabytes per field. Today, “it’s impossible for anybody to have the kind of computing capability” that will be needed in the future, he said.
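
Taken at face value, those estimates add up quickly. A rough calculation with assumed figures:

    # Back-of-envelope storage arithmetic using assumed figures.
    tb_per_well = 2        # assumed terabytes of data per well
    wells_in_field = 200   # assumed number of wells in the field
    print(f"About {tb_per_well * wells_in_field} TB per field")  # 400 TB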

Future Development

Because it already handles very large quantities of seismic and other data, the oil and gas industry probably sees itself as a leader in data use. According to Bertocco, it’s just the opposite.

“Oil and gas is still lagging every other industry, barring construction, in capturing and using data. Which is amazing,” he said.

It’s also ironic, because 20 years ago oil and gas was a pioneer in Big Data capture and analysis, Bertocco said. Since then it has been surpassed by most other industries, from finance to airlines.

In part that isn’t surprising, since the E&P sector is just now coming out of a miserable downturn with not enough money to spend on anything, including IT. But Bertocco thinks the industry has already started a major five-year push in developing data capabilities.

He said three areas of development will be crucial:

People

This involves creating more collaborative work structures as well as building internal knowledge capacity and adding expertise, Bertocco said.

“There aren’t that many (data) scientists available today, and most of them are employed by other industries,” he observed.

Information Technology

Beyond the need to upgrade and add computing capacity, oil and gas companies have to deal with legacy systems that hinder progress, he said.

“A lot of companies have been struggling with very complicated and cumbersome IT systems,” Bertocco said. “We want to be sure information flows across the organization. Those systems can be, and are as of today, a big impediment.”

Change Management

In computing-related change management, less than 15 percent of companies rate themselves as satisfied and 30 percent fail, Bertocco said. An effective digital roadmap is essential, he noted.

“Unless you’re clear on how you’re going to drive the transformation, it just won’t happen,” he said.

The puzzle facing oil and gas in future decades is how to make enormous quantities of data both usable and useful. Maybe by the year 2100, the industry can look back and see how all the pieces fit together.
