Big Data Growth Continues in Seismic Surveys
“Taking data and being able to bring it together is allowing CGG to better understand rock properties in seismic surveys,” said Cox.
CGG’s data continues to grow in volume, variety and velocity, said Cox, adding that the company will continue to invest in Big Data technologies to manage it. That investment includes Objectivity’s technology, which CGG has used for the past decade to store processed data, metadata and derived data.
Object Modelling Behind Objectivity Technology
Founded in the late 1980s, Objectivity is a developer of high-performance distributed object data technology, with deep domain expertise in fast data fusion and decades of experience in “beyond petabyte” data volumes, said Brian Clark, corporate VP of product at Objectivity, during the webinar.
The technology is based on object data modelling, which grew out of the telecommunications and manufacturing industries’ need to handle a diverse array of data. Object data modelling provided the agility needed to support complex, multidimensional queries. As other data models and associated techniques proliferated, the technology settled into a niche role, but the rise of Big Data and complex sensor data has triggered renewed, wider use of object data modelling, which supports what is called multi-dimensional indexing.
Like the other companies with whom Objectivity works, CGG needed a technology that would allow it to handle large amounts of data very quickly. Objectivity’s technology fuses Big Data and Fast Data together so that the data can be analyzed and generate business value for a company.
CGG and Schlumberger (through its WesternGeco division) are facing similar challenges. Acquisition methods such as wide-azimuth surveys generate far more high-density data than earlier methods did, and overall, seismic surveys are gathering more data than ever. These larger, denser data volumes are challenging the cost-effectiveness of current systems. Beyond cost, another driver behind the adoption of Big Data technology is the need to run richer analytics on a particular data set or subset of the data.
Objectivity’s technology combines Big Data (data gathered from sources such as mobile devices, historically used for batch analytics that can wait) with Fast Data: sensor data flowing from the many sources of Internet of Things (IoT) technologies. The incoming data is transformed through filtering, refining, cleaning and value-added processing, then stored via Objectivity’s application alongside existing data on Hadoop as needed.
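The filter-refine-store flow described above can be sketched generically. This is a minimal, hypothetical illustration of cleaning a sensor stream before storage; the field names and quality threshold are invented, and it does not represent Objectivity’s actual API.

```python
def refine(readings, min_quality=0.5):
    """Toy filter/clean step for streaming sensor data.

    Drops low-quality readings and normalizes field names before storage.
    (Illustrative only; schema and threshold are assumptions.)
    """
    for r in readings:
        if r.get("quality", 0.0) < min_quality:
            continue                       # filter: discard noisy samples
        yield {"sensor": r["id"],          # refine: normalize the schema
               "value": round(r["raw"], 3)}

raw_stream = [
    {"id": "s1", "raw": 0.12345, "quality": 0.9},
    {"id": "s2", "raw": 0.5,     "quality": 0.2},  # filtered out
]
cleaned = list(refine(raw_stream))
```

In a real pipeline the cleaned records would then be written to the object store or to Hadoop, rather than kept in a list.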
Objectivity’s “secret sauce” is a federated database, which is actually a collection of databases that can be spread around the network. The processing can be spread around as well. Due to its architecture, a client application can access data anywhere in the federation – it doesn’t need to know the actual location, said Clark.
“Within the database, we organized the objects and relationships into containers, which are a way of clustering a logical group of objects together for efficient physical access,” Clark said. “In the object database world, the relationships between the data are as important, if not more important, than the data itself. Each object has its own unique 64-bit ID, meaning that we can address thousands of trillions of unique objects and thousands of petabytes of storage.”
No matter how many objects are stored, finding an object by its ID is very fast. Because the system is designed as a distributed architecture, data and processing can be placed where they are needed and spread across the network, allowing for a more cost-effective, scalable solution.
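The ideas of containers, 64-bit object IDs and constant-time lookup can be illustrated with a short sketch. All names and the ID layout here are hypothetical, chosen for illustration; Objectivity’s real implementation differs.

```python
import itertools

class FederatedStore:
    """Toy sketch: objects clustered into containers, addressed by a 64-bit ID.

    In this sketch the high 32 bits of an ID name the container and the low
    32 bits name the slot, so lookup costs two dictionary hits no matter how
    many objects exist. (Assumed layout, not Objectivity's actual format.)
    """

    def __init__(self):
        self._containers = {}            # container id -> {slot -> object}
        self._next_slot = itertools.count()

    def put(self, container_id, obj):
        slot = next(self._next_slot)
        self._containers.setdefault(container_id, {})[slot] = obj
        return (container_id << 32) | slot    # 64-bit object ID

    def get(self, object_id):
        container_id, slot = object_id >> 32, object_id & 0xFFFFFFFF
        return self._containers[container_id][slot]

store = FederatedStore()
# Relationships are stored as object IDs, alongside the data itself.
survey_id = store.put(1, {"type": "survey", "name": "wide-azimuth"})
trace_id = store.put(1, {"type": "trace", "parent": survey_id})
```

In a real federation the containers would live on different machines, with the client resolving an ID to a location transparently, which is the location-independent access Clark describes.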
Why are objects a good fit? Originally, data fusion dealt with hard data or sensor data, and objects are a good way to express this information. Information fusion brings much more soft, or unstructured data, such as email, social media, text, audio and video, into the picture. Expressing this data as objects allows for a better understanding of this data, such as meaning, context, and ontologies. Both objects and relationships are needed to express this information, Clark said.
Besides CGG, the company has worked with organizations that have been doing the equivalent of information fusion for many years, including Boeing, government agencies, telco and network customers, technology partners such as Databricks and Intel, and systems integrator (SI) partners such as Raytheon.
The coming market for the industrial Internet of Things not only is huge in volume, but in the commercial value of using data to solve business problems. The vast majority of data for industrial IoT will come from sensors delivering streaming and Fast Data, said Clark. At the moment, the biggest obstacle is making sensor data useful to analytics.
Senior Editor | Rigzone