Artificial intelligence has the potential to transform American business, making it more efficient, productive and profitable. It has the ability to improve the entire supply chain from research to development to execution.
It would be nice if it got it right.
Susan Nash, AAPG’s director of innovation, emerging science and technology (and interim director of AAPG), said figuring out how to make sure it gets it right is the responsibility of everyone. Above all, subject matter experts need to be at the heart of the endeavor.
And that includes the energy sector.
“The advent of generative AI is transforming the approach to exploration and mergers and acquisitions, making the integrity of AI-generated data pivotal for critical investment decisions,” she said.
Among the issues facing those who use AI is how to guard against misinterpretation of the data – Nash said such incorrect results are known in the business as “AI hallucinations” – and these artificially induced fever dreams, if you will, can compromise data integrity, increase risk and kill efficiency.
Further, in the industry, she said, there are four main misconceptions that surround artificial intelligence:
- Generative AI, a type of AI that creates new content like text, images, videos and audio, will replace geoscientists, engineers and those in the field.
- Large language models, a type of AI model that uses powerful algorithms to understand and generate human language, won’t need much data.
- Data used to train the LLM won’t need to be vetted.
- Results gleaned from AI will either be completely trustworthy or completely wrong.
Finding the Right Nail …
Generative AI is just now starting to be widely used in the oil and gas industry, and industry professionals are trying to figure out how to apply it.
“People are still determining the best use for it, and also how it can help us use data and information that has been stranded or made invisible because it only existed in paper files (or mylar or microfiche) and had not yet been digitized,” said Nash.
Pushpesh Sharma, senior product manager at AspenTech based in Bedford, Mass., echoed Nash’s concerns and said that overall, companies want to use AI and know they need it, even if they’re not exactly sure about its place in their operations.
“But the broader trend shows a cautious optimism toward AI. While some rely heavily on AI for pattern recognition and predictive modeling, there is a recognized gap in implementing rigorous checks and balances,” he said.
That’s a huge concern. Industry, he believes, needs to focus more on harmonizing AI outputs with traditional expertise; without domain-informed validation, reliance on AI could exacerbate risk rather than mitigate it.
“AI is a tool and we should use it as a tool. It is not the answer but just a means to get to an answer faster,” said Sharma.
Kim Padeletti, head of energy data insights for Amazon Web Services, said to combat the overpromise of AI at her firm, AWS champions “customer obsession,” and takes a “human in the lead” approach, emphasizing subsurface business-led framing of the challenge in every aspect – “from the design of your prompt to the workflow design.”
In her view the energy industry has been cautious about AI, sometimes to a fault.
“This caution can lead companies to miss opportunities due to paralysis over AI’s potential inaccuracies or caution to adopt at scale. When treated as a partner to human expertise, AI can uncover patterns and insights that might take years to surface otherwise,” she said.
Guard Rails Needed
On the other hand, she agrees that without robust checks and balances, overconfidence in AI outputs can lead to costly missteps.
One way AI could get it wrong involves reservoir characterization. A generative AI model might extrapolate seismic data incorrectly, leading to predictions of continuous reservoirs in faulted terrain.
“In production,” Sharma said, “AI may predict optimistic recovery factors without considering operational challenges like water breakthrough or formation damage.”
Guarding against such incorrect AI insights starts with understanding how the models work, Nash said: “These LLMs are trained on vast volumes of data and use parameters to generate original output for tasks like answering questions, collating information and completing sentences.”
Other safeguards include retrieval-augmented generation, or RAG, which grounds a large language model’s output in documents retrieved from a trusted source.
“This assures the user of integrity of the gen-AI response because it will point back to the actual documents referred to (for example, a journal article), or the company database or repository (for example, production data or contracts),” she said.
Further, RAG extends the already powerful capabilities of LLMs to specific domains without the need to retrain the model.
“Being able to cite the source (as in RAG) helps it stay relevant and useful in a number of different use cases and contexts,” said Nash.
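The retrieve-then-cite pattern Nash describes can be sketched in a few lines. This is a minimal illustration, not a production system: the in-memory document store, the file names and the keyword-overlap retrieval are all hypothetical stand-ins for a real vector database and LLM.

```python
# Minimal sketch of retrieval-augmented generation (RAG) with citation.
# The documents and file names below are invented for illustration.

DOCUMENTS = {
    "well_report_12.pdf": "Well 12 showed water breakthrough at 1,850 m in 2021.",
    "seismic_memo_07.pdf": "The seismic survey indicates a sealing fault east of block 7.",
    "contract_2019.pdf": "The 2019 farm-out contract covers blocks 5 through 9.",
}

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query
    (a real system would use embeddings and a vector index)."""
    terms = set(query.lower().split())
    scored = sorted(
        DOCUMENTS.items(),
        key=lambda item: len(terms & set(item[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_citation(query: str) -> str:
    """Ground the response in retrieved text and point back to the source.

    A real pipeline would pass the retrieved context to an LLM as part of
    the prompt; here we simply echo it so the citation mechanism is visible.
    """
    source, context = retrieve(query)[0]
    return f"{context} [source: {source}]"

print(answer_with_citation("Where was water breakthrough observed?"))
```

Because every response carries a pointer back to a specific document, a reviewer can check the claim against the original file rather than trusting the model’s synthesis.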
Citing sources only works if due diligence is applied when data is ingested into and shared with generative AI large language models, ensuring the data is clean, uniform, harmonized, classified and easy to share.
“Further,” said Nash, “it is important to put in rules and guard rails to make sure that results are not generated that do not correlate with the science or actual data.”
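The guard rails Nash describes can take the form of simple post-hoc checks that flag AI output falling outside expert-defined bounds. The parameter names and plausibility ranges below are illustrative assumptions; in practice the rules would come from subject matter experts and the actual data.

```python
# Minimal sketch of rule-based guard rails on AI-generated estimates.
# Field names and bounds are hypothetical examples, not industry standards.

PLAUSIBLE_RANGES = {
    "porosity": (0.0, 0.45),          # fraction; higher values are implausible rock
    "recovery_factor": (0.05, 0.70),  # fraction of oil in place
    "net_to_gross": (0.0, 1.0),
}

def validate(ai_output: dict[str, float]) -> list[str]:
    """Return violations where an AI estimate falls outside expert bounds."""
    violations = []
    for name, value in ai_output.items():
        lo, hi = PLAUSIBLE_RANGES.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            violations.append(f"{name}={value} outside plausible range [{lo}, {hi}]")
    return violations

# An overly optimistic recovery factor is flagged for expert review
# rather than flowing silently into an investment decision.
print(validate({"porosity": 0.18, "recovery_factor": 0.92}))
```

Results that pass return an empty list; anything flagged is routed to a geoscientist instead of being accepted at face value.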
Padeletti also said AI’s “intelligent guard rails” will allow geoscientists more innovation and freedom. Having well-curated, high-quality and high-quantity data in a schema format, such as the OSDU data standard, can significantly improve the accuracy and reliability of AI models in the energy sector.
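Schema-based curation of the kind Padeletti describes can be sketched as a vetting gate that rejects malformed records before they reach an AI pipeline. The field names below are invented for illustration and are not the actual OSDU schema.

```python
# Minimal sketch of schema-based data vetting before AI ingestion.
# The required fields are hypothetical, not the real OSDU schema.

REQUIRED_FIELDS = {"well_id": str, "depth_m": float, "formation": str}

def conforms(record: dict) -> bool:
    """Accept only records with every required field of the expected type."""
    return all(
        field in record and isinstance(record[field], expected)
        for field, expected in REQUIRED_FIELDS.items()
    )

records = [
    {"well_id": "W-101", "depth_m": 2450.0, "formation": "Wolfcamp"},
    {"well_id": "W-102", "depth_m": "unknown"},  # malformed: missing/bad fields
]
clean = [r for r in records if conforms(r)]
print(len(clean))  # only the well-formed record survives the gate
```

Keeping malformed records out of training and retrieval is one of the cheapest ways to reduce hallucination risk, since a model cannot cite clean data it never received.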
At this year’s NAPE, an energy business panel, “Enhancing Decision-Making in Exploration and M&A: Strategies to Counteract Generative AI Misconceptions,” will be presented – Nash will moderate – and will address how generative AI is transforming the approach to exploration and to mergers and acquisitions, making the integrity of AI-generated data pivotal for critical investment decisions. The panel will also delve into protective measures against the misinterpretation of data – those AI hallucinations to which Nash referred earlier. The goal is for attendees to gain insights from industry leaders on effective strategies to reinforce AI’s role in augmenting decision-making, thereby adding tangible value to exploration and merger and acquisition ventures. Joining Nash, Sharma and Padeletti will be Andrew Muñoz, chief operating officer of 4Cast, and Sashi Gunturu, co-founder of Petrabytes Corp.
‘From Hallucination to Confidence’
One of the issues to be discussed is whether the oil and gas industry is ready to tackle the behemoth that is AI at the gates.
Sharma is concerned.
“No. There is a noticeable shortage of professionals equipped to bridge data science expertise and domain-specific knowledge. AI oversight often falls into the gaps between geoscientists and data scientists, resulting in a lack of accountability,” he said.
He said building multidisciplinary teams and fostering specialized training programs can help address this shortage.
“The training should focus on limitations of the AI models and highlight possible problem areas,” he said.
Ultimately, Padeletti said, the goal is to merge AI’s analytical prowess with geoscientists’ creative insights.
“This synergy frees geologists from routine tasks and supports them in focusing on higher-level problem-solving.”
Sharma said he hopes we are moving from “hallucination to confidence.”
For her part, Padeletti believes responsible AI in energy exploration could “unlock billions and drive massive value creation.”