Big Data Growth Continues in Seismic Surveys
Geophysical firm CGG officially entered the Big Data market in 1971 with its first 3D seismic acquisition survey for the oil and gas industry.
The company had no commercial system to store or analyze data, and had to create systems to ferret out information critical for clients, said Hovey Cox, senior vice president of marketing & strategy for geology, geophysics and reservoir at CGG. Since then, the company has been pushing the edge in terms of its ability to collect and analyze data, helping its clients make better decisions.
Historically, most of CGG’s analytical capabilities have been deterministic, but today the company has a broad spectrum of statistical and geostatistical analysis tools, as well as new Big Data techniques, to glean more insight from data. Techniques such as wide azimuth – which illuminates a target from as many angles as possible – are also generating significantly more data.
The amount of data that CGG collects from its land and marine seismic surveys has grown considerably in recent years, Cox said during a webinar on how Big Data is driving innovation and business success in oil and gas. A land seismic survey conducted in 2005 had 400,000 sensors per square kilometer; by 2009, that number had reached 36 million. Over the same period, the average volume of data gathered on an eight-hour shift grew from 100 gigabytes to more than 2 terabytes, Cox said.
The number of channels – or pixels – on a crew grew from 8,000 in 2005 to 40,000 in 2009, and the number of computers used for processing data grew from three desktop PCs in 2005 to a 72-PC cluster in 2009. This year, CGG fielded more than 100,000 channels on a land seismic crew, with the goal of reaching one million, said Cox.
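The 2005-to-2009 figures Cox cited imply very different growth rates for sensors, data volume, and channel counts. A minimal sketch, using only the numbers reported above, makes the multiples explicit (the figure labels and helper function are illustrative, not from CGG):

```python
# Illustrative sketch (not CGG code): growth multiples for the
# 2005 -> 2009 land seismic figures cited in the article.

def growth_factor(before: float, after: float) -> float:
    """How many times larger the 2009 figure is than the 2005 figure."""
    return after / before

figures = {
    "sensors per sq km": (400_000, 36_000_000),
    "data per 8-hour shift (GB)": (100, 2_000),  # 100 GB -> ~2 TB
    "channels per crew": (8_000, 40_000),
}

for name, (v2005, v2009) in figures.items():
    print(f"{name}: {growth_factor(v2005, v2009):.0f}x growth")
```

Sensor density grew roughly 90-fold while per-shift data volume grew about 20-fold, which is consistent with the article's picture of acquisition density outpacing even the rapid growth in recorded data.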
Marine seismic surveys are more complex, with a vessel towing a streamer spread a kilometer wide and roughly the length of a 10K race. CGG uses multiple vessels to create wide- and full-azimuth views – opening a second eye, in effect, to see in stereo beneath the sea floor. These streamer spreads represent the largest moving infrastructure on earth, and can be seen from space as they move and collect data.
“These surveys are intense from all points of operations, requiring a tremendous amount of logistics and management of data,” Cox noted.
The data gathered in seismic surveys is enormous in sheer volume.
“Every seven days, data the equivalent of a U.S. Library of Congress in size is gathered by each seismic vessel,” said Cox.
In its StagSeis survey of the Gulf of Mexico, more than 1.5 petabytes of data were gathered, and CGG received four copies of the data from its vessels – very large data sets by any measure.
The information CGG gathers in its seismic surveys contains both Big Data and Fast Data. In 2009, the Fast Data flowing from its seismic operations was arriving at between 200 and 400 megabytes per second, Cox noted.
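To put those streaming rates in perspective, a back-of-envelope conversion shows what a sustained 200–400 MB/s feed accumulates over a day. This is a rough sketch using decimal (SI) units, not a figure from the webinar:

```python
# Back-of-envelope sketch (not from the article): convert the reported
# Fast Data rates of 200-400 MB/s into daily volumes, assuming a
# sustained feed and decimal units (1 TB = 1,000,000 MB).

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def daily_volume_tb(rate_mb_per_s: float) -> float:
    """Terabytes accumulated per day at a sustained rate in MB/s."""
    return rate_mb_per_s * SECONDS_PER_DAY / 1_000_000

low = daily_volume_tb(200)   # about 17.3 TB/day
high = daily_volume_tb(400)  # about 34.6 TB/day
print(f"Sustained 200-400 MB/s works out to {low:.1f}-{high:.1f} TB per day")
```

At roughly 17 to 35 terabytes per day per operation, it is easy to see how a vessel accumulates Library of Congress-scale volumes within a week, as Cox describes above.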
A significant amount of time is also involved in crunching these data sets, with resolution and speed both critical. The raw data gathered in these surveys looks like a solid red mat, and requires a lengthy processing sequence to go from solid red to a format in which patterns can be detected. The company has 40 computing centers worldwide; one of its larger computing centers is in the range of major processing centers such as Sequoia, handling more than 100 petabytes of active data per day.
Finer sampling and fuller azimuth mean more resolution in the raw data, and a better view of legacy and target data for better decision-making. Step-changes in subsalt imaging allowed CGG to extract more detail in 2014 from Gulf of Mexico seismic data acquired in 2006, allowing more confidence in 3D interpretation for well placement and in identifying offsetting prospects, Cox said. CGG can now see details such as channels cut by historic rivers.
Senior Editor | Rigzone