Data Volume, Efficiency Changing Oil, Gas High Performance Computing Needs

Because of security concerns, cloud use in high performance computing hasn’t caught on in the oil and gas industry the way Hadoop-type solutions have, said Holdaway, but he anticipates that HPC will move toward Hadoop and the cloud because of the need to quickly access and store data.

“As they push boundaries in new algorithms, it takes a lot more sophisticated algorithms to run in parallel on lots of data.”

US Industries See HPC as ‘Cost-Effective’ for R&D

According to the Council on Competitiveness report, Solve, a publication of the High Performance Computing Initiative, HPC is viewed by industry leaders as a cost-effective tool for speeding up the research and development process, and two-thirds of all U.S.-based companies that use HPC say that “increasing performance of computational models is a matter of competitive survival.”

While U.S. industry representatives surveyed struggled to imagine specific discoveries and innovations from HPC, they were confident that their organizations could consume up to 1,000x increases in capability and capacity in a relatively short amount of time. However, software scalability is the most significant limiting factor in achieving the next 10x improvement in performance, and it remains one of the most significant factors in reaching 1,000x. The links between the U.S. government and industry also need to be strengthened so that U.S. industry can benefit from government leadership in supercomputing investment.

In mid-November, the U.S. Department of Energy announced two new HPC awards to help put the United States on the fast track to next-generation exascale computing, which DOE said would help advance U.S. leadership in scientific research and promote America’s economic and national security.

Secretary of Energy Ernest Moniz announced $325 million to build two state-of-the-art supercomputers at DOE’s Oak Ridge and Lawrence Livermore National Laboratories. In early 2014, the joint Collaboration of Oak Ridge, Argonne and Lawrence Livermore (CORAL) was established to leverage supercomputing investments, streamline procurement processes and reduce costs in developing supercomputers that, when fully deployed, will be five to seven times more powerful than the fastest systems available in the United States today.

DOE also announced approximately $100 million to further develop extreme-scale supercomputing technologies as part of FastForward2, a research and development program. Computing industry leaders such as NVIDIA, AMD, Cray, IBM and Intel will lead the joint project between the DOE Office of Science and the National Nuclear Security Administration.

“DOE and its National Labs have always been at the forefront of HPC and we expect that critical supercomputing investments like CORAL and FastForward2 will again lead to transformational advancements in basic science, national defense, environmental and energy research that rely on simulations of complex physical systems and analysis of massive amounts of data,” Moniz said in a Nov. 14 press release.

