Supercomputing Needs in Oil, Gas Industry to Keep Growing

The oil and gas industry continues to ramp up its supercomputing capabilities as it explores for oil and gas in complex frontier basins and seeks to enhance production recovery.

Oil and gas majors such as Total S.A. and BP plc have expanded, or plan to expand, their supercomputing capabilities to meet their growing high-performance computing (HPC) needs.

Earlier this year, Total reported it would upgrade the storage capacity of its Pangea supercomputer from its current 16 petabytes to around 26 petabytes by 2016.

The upgrade was planned as an option when Pangea Phase 1 was installed, said Philippe Malzac, executive vice president of information systems at Total, in a statement to Rigzone. The company decided to pursue the upgrade due to the growth in seismic acquisition density and the use of new and more sophisticated algorithms, as well as to fulfill the needs of Total’s explorers and reservoir engineers.

The move toward exploring for oil and gas not only in deepwater, but also away from known provinces and into promising but complex frontier domains, is driving the need for greater supercomputing power in the oil and gas industry, François Alabert, vice president of Exploration Technologies at Total, told Rigzone.

Total is exploring prospects and fields in the deeper portions of petroleum basins, sometimes hidden 30,000 feet beneath geological strata, which makes them difficult for operators to image with the acoustic seismic waves used to map the subsurface.

“Increasingly precise seismic algorithms are therefore needed to see and map them, to increase the chance of success and to design safe, cost-effective well operations to probe them,” Alabert said.

Oil and gas companies are also exploring complex frontier oil and gas plays, either offshore at extreme continental edges or onshore in mountainous areas, where geological risks are high because traps are more subtle, requiring substantial imaging compute power, Alabert explained.

“Last but not least, maximizing oil and gas recovery and value from all of our fields uses increasingly precise engineering models to optimally design development schemes and production methods: petroleum engineering represents a growing demand in supercomputing resources,” Alabert noted.

Since 2000, the oil and gas industry has been continuously investing in high-performance computing, increasing its computing power approximately tenfold every three years, Alabert explained.

“The primary objective has been to improve seismic imaging in deeper and more complex parts of the subsurface, thanks to ever more precise algorithms.”

“Without such computational power, it is fair to say that exploration and development of many large offshore petroleum basins like the Gulf of Mexico, the Gulf of Guinea, or Brazil’s deep offshore, would not have been possible.”
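
For context, a tenfold increase every three years works out to a compound annual growth factor of about 10^(1/3), or roughly 2.15, so computing power more than doubles each year at that pace. A minimal Python sketch of that arithmetic follows; the starting capacity is a hypothetical placeholder, not a figure reported by Total or any other operator.

```python
# Illustrative arithmetic for "roughly 10x more computing power every three years".
# The starting capacity is a hypothetical placeholder, not a real company's figure.
annual_factor = 10 ** (1 / 3)  # ~2.15x growth per year
print(f"Implied annual growth factor: {annual_factor:.2f}x")

capacity_pflops = 1.0  # hypothetical starting capacity, in petaflops
for year in range(1, 10):
    capacity_pflops *= annual_factor
    print(f"After year {year}: ~{capacity_pflops:,.1f} petaflops")
```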

In late May, Australia-based Woodside Energy said it would use IBM’s Watson supercomputer as part of the company’s next steps in data science. The Watson supercomputer – which beat well-known Jeopardy game show champions Ken Jennings and Brad Rutter in a 2011 match – will be trained by Woodside engineers, allowing users to surface, in seconds, evidence-weighted insights from large volumes of unstructured and historical data contained in project reports.

Watson is part of Woodside’s strategy of using predictive data science to leverage more than 30 years of collective knowledge and experience as a leading liquefied natural gas operator and to maintain a strong competitive edge, Woodside said in a May 27 press statement. Through this strategy, the company will gain a new toolkit in the form of evidence-based predictive data science that will cut costs and boost efficiencies across the company.

“Data science is the essential next chapter in knowledge management, enabling the company to unlock collective intelligence,” said Shaun Gregory, Woodside senior vice president strategy, science and technology, in the May 27 press release. “Data science, underpinned by an exponentially increasing volume and variety of data and the rapidly decreasing cost of computing, is likely to be a major disruptive technology in our industry over the next decade.”

The cognitive advisory service “Lesson Learned” will be delivered via the cloud and will scale engineers’ knowledge, making insights and information quickly accessible to a wide group of employees and potentially leading to faster resolutions, improved process flow and better operational outcomes.

BP completed an upgrade of its computing systems in November 2014, growing its supercomputing systems from 2.2 petaflops to 3.8 petaflops.

“We are currently evaluating new technology options in advance of expected 2016 growth, but no plans are finalized yet,” Keith Gray, BP’s director of technical computing and high-performance computing, told Rigzone. Gray runs BP’s new Center for High Performance Computing at the company’s Westlake Campus in Houston.

BP centralized its HPC capabilities in Houston in 1999, around the time of the BP-Amoco merger.

“Back then, HPC was housed in the main office complex, but our needs to grow computing power, to be positioned for future technology, and to mitigate hurricane risks justified the construction of the BP Center for High Performance Computing in a new build in 2013,” Gray said.

The HPC team has employed state-of-the-art technology since it was founded.

“The typical life cycle of a new computer is three years, and we diligently drive out old technologies to make room for breakthroughs that enable new imaging capabilities,” Gray explained. “Due to our partnerships with Intel and HP, we are often able to use and deploy new technologies before they reach the general market.”

The Center for High Performance Computing is BP’s strategic resource for research computing.

“Our computers have large memory, fast networks and huge storage capabilities that speed development of new seismic imaging, giving BP a competitive advantage. This allows us to process information and conduct analysis very quickly and cost effectively to meet the technical challenges of our business,” Gray said.

Last year, Italy-based Eni S.p.A. placed its second major HPC system into operation. The upgraded system, with 1,500 IBM server nodes, 3,000 new Intel processors and 3,000 NVIDIA GPU accelerators, will allow Eni to boost its computational capability to 3 petaflops and more effectively support exploration and reservoir activities, the company said on its website.

Exploration and production companies aren’t the only companies in the oil and gas industry seeking greater supercomputing power. In March of this year, seismic firm Petroleum Geo-Services (PGS) reported that supercomputer manufacturer Cray would provide the company with a new 5-petaflop Cray XC40 supercomputer and a Cray Sonexion 2000 storage system.

“With the Cray supercomputer, our imaging capabilities will leapfrog to a whole new level,” said Guillaume Cambois, executive vice president imaging & engineering with PGS, in a statement. “We are using this technology to secure our market lead in broadband imaging and position ourselves for the future. With access to the greater compute efficiency and reliability of the Cray system, we can extract the full potential of our complex GeoStreamer imaging technologies, such as SWIM and CWI.”

According to Cray, energy companies have typically led the way in the commercial HPC space in creating and implementing new technologies as a means to reduce risk and optimize rewards.

“The energy industry is facing massive increases in compute and storage requirements as an integral component of their core business, and companies are searching for technology partnerships to help them achieve their goal of providing the world with energy.”

In the past, the oil and gas industry’s efforts to improve seismic processing throughput have come from a match between the availability of newer central processing units featuring faster clock speeds and new algorithms designed to take advantage of these faster processors and larger storage capacities, according to a recent blog post by Seattle-based Cray.

“But current survey and seismic simulation techniques involve incredibly large datasets and complex algorithms that face limits when run on commodity clusters,” Cray said in the blog. “These complex computing requirements mean that you can’t just throw processing power at your problems. Systems that meet the performance requirements of the world’s largest and most complex scientific and research problems must be built from the ground up.”

The oil and gas industry’s current reality means that companies must find ways to use more data more effectively and make better decisions from their analyses. This has resulted in a huge influx of information entering simulations, modelling and other supercomputing tasks.

“At the same time, the algorithms needed to support these efforts have become much more complex and demanding.”

All industries, but oil and gas in particular, will continue to invest in supercomputing because of Big Data, David Pope, strategic technical lead for U.S. Energy at SAS, told Rigzone in an interview.

“You can’t separate out supercomputing and Big Data,” said Pope. “They really go hand in hand.”

The oil and gas industry’s need to process more data in a timely manner – much of it coming from sensors in digital oil and gas fields, but also from sources such as handwritten technician notes – means that the industry’s supercomputing needs will grow, said Pope. Not only will oil and gas companies continue to invest in supercomputing, but Pope thinks they should invest more, given that the decline in oil prices that started last summer has made the business environment even more competitive.
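
As a loose illustration of combining those structured and unstructured sources, the sketch below joins sensor readings with keyword-flagged technician notes for a single well using pandas; the well IDs, column names, readings and note text are all invented for this example and do not come from any operator’s actual system.

```python
# Hypothetical sketch: pairing structured sensor readings with free-text
# technician notes for one well. All data and column names are invented.
import pandas as pd

sensor = pd.DataFrame({
    "timestamp": pd.to_datetime(["2015-06-01 00:00", "2015-06-01 01:00"]),
    "well_id": ["W-01", "W-01"],
    "wellhead_pressure_psi": [2150.0, 2310.0],
})

notes = pd.DataFrame({
    "timestamp": pd.to_datetime(["2015-06-01 00:40"]),
    "well_id": ["W-01"],
    "note": ["Observed minor vibration on choke valve; scheduled inspection."],
})

# Flag notes that mention potential equipment issues with a simple keyword match,
# then attach each note to the most recent earlier sensor reading for that well.
notes["flagged"] = notes["note"].str.contains("vibration|leak|corrosion", case=False)
merged = pd.merge_asof(
    notes.sort_values("timestamp"),
    sensor.sort_values("timestamp"),
    on="timestamp",
    by="well_id",
)
print(merged)
```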

The decline in hardware prices, along with advances in chip and memory capability, is allowing the industry to solve problems it couldn’t solve before.

“It’s not about how fast you can do X, but the fact that you now run scenarios A, B, C, D, E, and F in the same time it took before to do X,” Pope explained.

This ability to run multiple scenarios at once allows companies to “fail fast,” helping decision-makers make more informed decisions about the future.
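
As a rough sketch of Pope’s point about running scenarios A through F side by side rather than one at a time, the Python example below evaluates several hypothetical development scenarios concurrently with the standard multiprocessing module; the simulate() function and its parameters are invented stand-ins for a real reservoir or seismic workload.

```python
# Minimal sketch of running several "what-if" scenarios in parallel instead of
# sequentially. simulate() is a toy stand-in for a real simulation workload.
from multiprocessing import Pool

def simulate(scenario):
    name, injection_rate = scenario
    # Toy "model": recovery improves with injection rate, with diminishing returns.
    recovery_factor = 0.30 + 0.10 * (1 - 1 / (1 + injection_rate))
    return name, round(recovery_factor, 3)

if __name__ == "__main__":
    scenarios = [("A", 0.5), ("B", 1.0), ("C", 2.0), ("D", 4.0), ("E", 8.0), ("F", 16.0)]
    with Pool() as pool:
        for name, recovery in pool.map(simulate, scenarios):
            print(f"Scenario {name}: estimated recovery factor {recovery}")
```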

The upstream oil and gas industry in particular has always invested heavily in supercomputing capacity because of seismic modelling, and will continue to do so. The increasing amount of real-time and near real-time data is allowing companies to update their models not only to drill wells optimally, but also to identify safety issues before an incident can occur, Pope said. Sensors, an Internet of Things technology, are a growing trend not only in upstream, but also in midstream and downstream oil and gas, allowing for better decision-making.
