Mountains of oil and gas production data are available, but many in the industry distrust and disregard it because of imperfect information and poorly regulated reporting standards.
Despite these misgivings, Sean Clifford, an operations support reservoir engineer at Apache Corp., says public data can become a valuable tool.
He and colleague Tim Torres developed a methodology to exploit public data to refine their analysis of multi-fractured horizontal well performance in the Midland Basin. They presented their findings in a paper entitled “Using a Systematic, Bayesian Approach to Unlock the True Value of Public Data; Midland Basin Study” at the recent Unconventional Resources Technology Conference (URTeC) in Austin.
The project employed outlier identification, probabilistic forecasting tools and Bayesian calibration to sharpen that analysis.
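The paper's specific physical model and priors aren't detailed in the article. As a minimal sketch of what Bayesian calibration of a rate-time model can look like, the snippet below assumes an Arps hyperbolic decline and fits it to noisy monthly rates with a simple grid posterior; all parameters and noise levels are illustrative, not from the study:

```python
import numpy as np

def arps_rate(t, qi, di, b):
    """Arps hyperbolic decline: rate at month t."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def grid_posterior(t, q_obs, qi_grid, di_grid, b=1.0, sigma=50.0):
    """Normalized posterior over (qi, di) with a flat prior and
    Gaussian measurement noise on monthly rates."""
    logp = np.zeros((len(qi_grid), len(di_grid)))
    for i, qi in enumerate(qi_grid):
        for j, di in enumerate(di_grid):
            resid = q_obs - arps_rate(t, qi, di, b)
            logp[i, j] = -0.5 * np.sum((resid / sigma) ** 2)
    p = np.exp(logp - logp.max())
    return p / p.sum()

# Synthetic well: 12 months of noisy "public" data
rng = np.random.default_rng(0)
t = np.arange(12)
q_true = arps_rate(t, qi=800.0, di=0.15, b=1.0)
q_obs = q_true + rng.normal(0, 50, size=t.size)

qi_grid = np.linspace(400, 1200, 81)
di_grid = np.linspace(0.05, 0.30, 51)
post = grid_posterior(t, q_obs, qi_grid, di_grid)

# Posterior-mean parameters drive the calibrated forecast;
# re-running this as new months arrive updates the estimate.
qi_hat = (post.sum(axis=1) * qi_grid).sum()
di_hat = (post.sum(axis=0) * di_grid).sum()
forecast_24 = arps_rate(np.arange(24), qi_hat, di_hat, 1.0)
```

In practice the posterior would be refreshed each month as new production volumes are reported, which is the "continually updated" behavior the researchers describe.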
The researchers also had access to higher-quality internal data, which allowed better estimation and calibration against the public production data set, and they continually updated their forecasts as new data became available.
“The public data is believed to be dishonest and unreliable because of the production allocation method that relies on imperfect well test data and unregulated reporting standards. E&P operators are not incentivized to provide accurate well-level production data, and they are in fact strongly against sharing such data to protect competitive advantages,” Clifford said.
This can make it difficult for third-party companies to accurately allocate lease-level production volumes to the well-level, given limited data and no way to calibrate the allocation model, Clifford said.
“Public production data sources typically provide well-level monthly production volumes and all pertinent well header information. In addition, they can provide well test data used in the allocation process and descriptions of their allocation method. The allocated monthly production volumes are most useful to us to predict well performance for each well in the population, however we rely heavily on the additional well test and well header data as well,” he said.
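The allocation step described above can be sketched with a hypothetical pro-rata scheme (the actual third-party algorithm is not public): a lease-level monthly volume is split among wells in proportion to their most recent test rates. Well IDs and volumes below are made up for illustration:

```python
def allocate_lease_volume(lease_volume, well_tests):
    """Pro-rata allocation of a lease-level monthly volume to wells,
    weighted by each well's most recent test rate.

    well_tests: dict mapping well ID -> test rate (e.g. bbl/d).
    Returns dict of well ID -> allocated monthly volume.
    """
    total_rate = sum(well_tests.values())
    if total_rate <= 0:
        raise ValueError("well tests must contain a positive total rate")
    return {well: lease_volume * rate / total_rate
            for well, rate in well_tests.items()}

# Example: a 30,000 bbl lease month split across three wells
alloc = allocate_lease_volume(30_000, {"42-329-A1": 500.0,
                                       "42-329-A2": 300.0,
                                       "42-329-A3": 200.0})
# Allocations always sum back to the lease total by construction
```

The weakness Clifford points to is visible here: if the test rates are stale or inaccurate, every well-level number is wrong, even though the lease total is honored exactly.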
Compensating for Weaknesses
Clifford said one of the major challenges in the study was convincing technical staff members to trust the validity and accuracy of the project’s results because of the generally poor reputation of public production data. Gaining confidence in the third-party production allocation algorithm was also a challenge, he said.
“I believe the most surprising conclusion is how accurately we are able to match our internal estimates at a field-level for horizontal wells in the Midland Basin using the public production data set,” Clifford said.
That doesn’t necessarily mean public data is more accurate than people think it is.
“(Public data) is very commonly wrong at the well-level, but I believe that it is possible to use the data responsibly in order to derive well performance estimates at an aggregate level where the allocation uncertainty is reduced significantly,” he said.
“We are alluding to the use of the public production data in this way as the ‘true value’ in the paper title. Our evaluation technique does not correct any weaknesses in the public data set, however our machine learning forecasting algorithm minimizes the influence from such weaknesses by fitting a calibrated physical model to the rate-time data.”
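One way to see why aggregation washes out allocation error: as long as each lease's reported total is distributed exactly among its wells, per-well errors cancel when summed to the field level. The toy NumPy demonstration below uses entirely synthetic data, not figures from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_leases, wells_per_lease = 50, 4

# Hidden true well-level volumes and the lease totals built from them
true_wells = rng.uniform(1_000, 10_000, size=(n_leases, wells_per_lease))
lease_totals = true_wells.sum(axis=1)

# A mis-calibrated allocator: noisy weights, but each lease still
# distributes exactly its reported lease total among its wells.
weights = true_wells * rng.lognormal(0.0, 0.5, size=true_wells.shape)
alloc = lease_totals[:, None] * weights / weights.sum(axis=1, keepdims=True)

# Per-well relative error is large; field-level error is essentially zero.
well_err = np.abs(alloc - true_wells) / true_wells
field_err = abs(alloc.sum() - true_wells.sum()) / true_wells.sum()
```

This is the sense in which well-level public data can be "very commonly wrong" yet still support accurate field-level estimates.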
In the paper abstract, the authors said their research indicated that recently drilled wells (c. 2015–16) are forecast to recover significantly more reserves than early asset developments, nearly twice as much in some areas.
Clifford said companies with access to higher quality data sets would be able to apply similar techniques. “However, any party invested in the oil and gas industry should benefit from improved reserve estimation practices like these.”
“The biggest takeaway from the study is our ability to establish confidence in estimation by providing a performance track record of forecast accuracy and stability. We have been tracking our predictions since February 2016, and we continue to do so every month as new data is incorporated,” he said.
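A track record like the one Clifford describes can be kept as a simple table of forecast vintages. The sketch below (hypothetical numbers and field names, not the study's actual figures) scores each monthly forecast's error against the latest estimate and its change from the prior vintage, the two quantities he calls accuracy and stability:

```python
def forecast_track_record(vintages, latest_eur):
    """Score a chronological series of forecast vintages.

    vintages: list of (label, forecast_eur) in chronological order.
    latest_eur: current best estimate used as the accuracy benchmark.
    Returns one record per vintage with percent error and
    month-over-month percent change (None for the first vintage).
    """
    record, prev = [], None
    for label, f in vintages:
        error = (f - latest_eur) / latest_eur * 100.0
        change = None if prev is None else (f - prev) / prev * 100.0
        record.append({"vintage": label,
                       "error_pct": round(error, 1),
                       "change_pct": None if change is None else round(change, 1)})
        prev = f
    return record

# Hypothetical EUR forecasts (Mbbl) starting February 2016
history = [("2016-02", 410.0), ("2016-06", 455.0), ("2016-12", 470.0)]
track = forecast_track_record(history, latest_eur=480.0)
```

Publishing a table like this alongside each evaluation is what lets a reader judge how much to trust the next forecast.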
“We feel this research is valuable because many evaluation experts and organizations do not provide a track record of prediction accuracy. It is difficult to make a confident investment decision without understanding the underlying uncertainty or accuracy associated with a prediction. This paper demonstrates a workflow to support and improve decision making by conveying confidence in future predictions,” he said.