


Abstracts

The Value of Seismic Imaging and the Contributions of HPC

Peter Carragher, Vice President Exploration, BP

Coming soon.


Developing a Curriculum to support High Performance Computing

Jay Boisseau, Director, Texas Advanced Computing Center

Coming soon.


Who will write the software?

Dave Hale, Colorado School of Mines

Most students of geoscience today program computers with MATLAB. But many important problems in geophysical computing today require more than array processing on a laptop, and graduates who know only MATLAB are unlikely to even attempt to solve such problems. (I have numerous examples.) Moreover, few geoscience graduates today are prepared to program computers in the fundamentally new ways required to exploit a doubling of the number of cores every two years or so. One way to fill this gap between problems and skills is to recruit computer science graduates to work alongside geoscience graduates. Indeed, this approach is already used today by many companies that hire geoscientists. I work with computer science students at Mines. But experience in industry and academia leads me to favor a different approach: get more geoscience students excited about computing. My reasoning is simple. The happiest and, therefore, most productive people I know have both the ability and responsibility to implement their own great ideas. Let's discuss ways to make people happy.
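
As a purely illustrative sketch (not taken from the talk), the contrast Hale describes might look like the following: the same per-trace filtering written once in serial, MATLAB-style fashion, and once spread across all available cores using Python's standard multiprocessing module. The traces and the filter are hypothetical stand-ins for real geophysical work.

    # Illustrative sketch only: per-trace filtering run serially and then in
    # parallel across CPU cores. The "traces" and the filter are toy stand-ins.
    from multiprocessing import Pool

    import numpy as np


    def process_trace(trace):
        """Toy per-trace kernel: a 5-point moving average (stand-in for real work)."""
        window = np.ones(5) / 5.0
        return np.convolve(trace, window, mode="same")


    if __name__ == "__main__":
        traces = [np.random.randn(1000) for _ in range(64)]   # synthetic gather

        # Serial, MATLAB-style: one trace after another on a single core.
        serial = [process_trace(t) for t in traces]

        # Parallel: the same work spread across all available cores.
        with Pool() as pool:
            parallel = pool.map(process_trace, traces)

        assert all(np.allclose(s, p) for s, p in zip(serial, parallel))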


Quo Vadis, Oilfield? Populo Fidemu – in technology we excel!

André B. Erlich, Schlumberger Limited

As those of you who have been in this industry for more than five years know, energy trends are patently unsustainable over long periods of time—socially, environmentally or economically. From the oil shock of the ’70s to the over-capacity of the ’80s, from the more recent huge demand increases of developing countries to the current recession—it’s been a wild ride on a bumpy road. And while this is not the platform for a discussion about peak oil or sustainable energy, few would dispute the idea that hydrocarbons will remain the leading energy source for a few generations to come.

Nonetheless, although E&P investment nearly tripled between 2000 and 2008, almost no additional oil production capacity has been added during the past decade. With long-term energy needs growing and fossil fuels filling the bulk of this demand, the challenge of satisfying the world’s need for energy through exploration and production from new and aging reservoirs remains a tremendous endeavor. Increasing production and recovery from current fields, and discovering and putting into production new reservoirs in complex geographical and geological environments, can be achieved only through technology. The history of the oilfield has already taught us this lesson.

In our industry, the risks and costs of doing business are growing, and only technology can help mitigate these risks. As we attempt to produce from deeper and more complex reservoirs, new exploration and production technologies are deployed to enable economic recovery of resources—e.g. single-sensor seismic, intelligent completions, the so-called digital oilfield. The capacity to “see” thin and complex reservoirs through seismic and electromagnetic exploration, to image subtle reservoir structures, to improve drilling in complex geologies and thin beds with real-time updates of the sub-surface model, to position a well miles below the Earth’s surface and control the wellbore to produce at optimum recovery rates—all this “magic” requires the ability to acquire, process and integrate information across the entire spectrum of geo-scientific disciplines. To develop and master such technological breadth requires a tremendous and sustained level of investment in R&D in each of the areas where the challenges are encountered—which is to say, around the world (Russia, Saudi Arabia, Brazil, China, etc.). However, for R&D to overcome these challenges requires rapid adoption of new operational techniques and workflows in the field—no easy feat in an industry as conservative as ours.

To revolutionize the oilfield with state-of-the-art technology and workflows—i.e. to transform it into the so-called “digital oilfield” or “smart field”—we must change the way we collaborate and are connected, and we must change the way information and decisions flow from exploration to drilling to production, and probably all the way to the end-user. To succeed, the digitally enabled oilfield must be supported by appropriate infrastructure and operational processes, as well as by people with the multi-disciplinary expertise necessary to solve the problems. In other words, we must bring the necessary expertise to the problem rather than the other way around. This means bringing the appropriate technical expertise to the operation for better decision-making, and making coaching and support available to younger engineers and scientists so that help is provided where it is needed. This in turn leads to changed work practices, which lead to significant efficiency gains.

Our own experience has shown that remote drilling operations centers can multiply drilling engineer productivity two- to three-fold when measured by the number of wells that can be supervised simultaneously. With relatively few people at the Schlumberger operations support center in Aberdeen, for example, we can follow operations on up to 28 rigs simultaneously with full backup from petrotechnical experts and a team of seasoned drilling experts.

Likewise, intelligent energy concepts linking people and technology are already enabling us to use our human resources more effectively. Gathering experts at operations centers remote from the well or field makes their expertise available to a wider audience, and provides greater support to the younger generation of engineers recruited over the past three or four years. In spite of the increased speed of employee development offered by new technology such as remote control centers, however, we must begin moving even faster. Indeed, the typical experience curve of the western oil company today shows few new recruits and a peak of experienced staff preparing to leave the industry over the next few years. In response, the industry undertook extensive recruitment efforts, which have since led to a much-improved outlook. As of 2010, a new generation of engineers and scientists is joining the oil and gas industry. Still, this will not change the soon-to-be-felt retirement effect, nor will it diminish the challenge of bringing a new workforce on line faster. Competency development and knowledge management practices will play significant roles in keeping our workforce current, and will add to the effect of new technologies and processes. This will demand change in the way we work, change in the way we train, and change in the way we expect tomorrow’s workforce to arrive at technical decisions.

We can learn from other industries by looking at how they manage similar challenges. For example, because the global airline fleet is expected to double over the next 20 years, flight training institutions are expecting to have to train some 16,000 new commercial airline pilots every year over this period to satisfy industry growth and replace retiring baby-boomer aircrew who often gained their training in the military. The training methods of the past will not suffice. The time is simply not available to allow pilots to proceed to captaincy by moving from smaller to larger aircraft. New methods of simulation and visualization will help develop knowledge and promote experience sooner. New training programs are expected to move new pilots directly into an airliner’s co-pilot position in less than a year. Better prepared instructors will be required to create practical real-world learning scenarios in which judgment can be developed and exercised. Continuous learning will be needed at all stages of a pilot’s career. All in all, this doesn’t sound very different from the training and development challenges that we face in the E&P industry.


Education and Training Requirements for HPC in Oil and Gas

Panel Discussion
Panel members: Peter Carragher, Jay Boisseau, Dave Hale, Andre Erlich, Scott Lathrop.

Panel moderators: Bill Symes (Rice), Keith Gray (BP)

Computational science and engineering is an increasingly essential and integral tool in all science and engineering research and development. Numerous reports have identified the critical need to prepare a larger and more diverse community of science, technology, engineering and mathematics (STEM) practitioners. These include Computational Science: Ensuring America's Competitiveness (PITAC, 2005); NSF’s Cyberinfrastructure Vision for 21st Century Discovery (2007); Fostering Learning in the Networked World: The Cyberlearning Opportunity and Challenge, a 21st Century Agenda for the National Science Foundation (2008); and Strategies and Policies to Support and Advance Education in e-Science (2009).

The need exists for a broad and diverse population of computational science and engineering (CS&E) practitioners with high performance computing (HPC) knowledge and experience to form the cornerstone for the advancement of science and engineering discovery for generations to come. To meet this need, we must establish a coordinated and sustained effort to better prepare the national and global workforce.

Computationally intensive science and engineering spans a broad range of technologies including high performance computing, high throughput computing, data management and analysis, scientific visualization, science and engineering instruments, sensor networks, high bandwidth networking, and collaborative and learning environments.  The needs and potential for impact span all science and engineering fields.

As the computational infrastructure rapidly evolves from terascale computing today, to sustained petascale computing in 2011, to exascale computing by 2020, the demand grows for a knowledgeable and skilled workforce able to make effective use of these technologies. It is therefore more critical than ever to address the training and education of the workforce for generations to come.

There are numerous efforts underway today to capture the needs and requirements of the community to advance science and engineering; to define the computational science and HPC competencies needed in the workforce; to identify training and education resources available for live, synchronous, and asynchronous learning; to identify gaps in those offerings that need to be developed and deployed; and to identify effective strategies to recruit, engage, prepare and sustain a larger and more diverse global workforce for the future.

The CS&E and HPC training and education needs directly affect, and therefore must directly involve, people from academia, industry and government, to ensure that all available resources are brought to bear on the broad needs facing society.

From my direct involvement and role in these efforts, I can confidently say that TeraGrid, Blue Waters, the Virtual School of Computational Science and Engineering, and HPC University seek your needs and requirements, and your participation, in working together to maximize our limited resources for the greatest impact today and well into the future.

References:

  • TeraGrid: www.teragrid.org
  • Blue Waters: http://www.ncsa.illinois.edu/BlueWaters/
  • Virtual School: www.vscse.org
  • HPC University: www.hpcuniv.org
  • Ralph Regula School for Computational Science: http://www.rrscs.org/
  • Council on Competitiveness Survey Report: http://www.compete.org/publications/date/2005/
  • Education, Outreach, and Training for High-Performance Computing, Computing in Science and Engineering, Volume 10, Issue 5, pp. 40-45, September 2008.
  • Science and Engineering on Petascale Computers, Computing in Science and Engineering, Volume 11, Issue 5, pp. 7-9, September 2009.
  • TeraGrid and Open Science Grid white paper to address workforce development among students: “National Workforce Development: Preparing the Next Generation of Practitioners and Educators Who Will Enable Scientific Discovery Through Effective and Sustained Use of Cyberinfrastructure.”
  • Owens, Linda and Sowmya Anand. November 2009. “GLCPC Virtual School of Computational Science & Engineering: 2009 Summer Schools on Petascale and Many-Core Processors: Final Analytic Report.” Survey Research Laboratory, University of Illinois at Chicago.
  • S.C. Glotzer, et al., “International Assessment of R&D in Simulation-Based Engineering and Science,” World Technology Evaluation Center, 2009. Sponsored by NSF, DOE, DOD, NIH, NIST, NASA.
  • S.C. Glotzer and P.T. Cummings, “Inventing a New America through Discovery and Innovation in Science, Engineering and Medicine: A Vision for Research and Development in Simulation-Based Engineering and Science in the Next Decade,” WTEC, in press.


2009 in Review: Breakthroughs and Trends

Andy Bechtolsheim, Arista

Each year technology advances: processors get faster, storage gets larger, and networks move more bits. Each year, critical applications in exploration geophysics demand more computational capacity to build more accurate models, lower costs, and improve safety.

This session will pause and look back at some of the key technologies introduced in 2009 and how they will impact future deployments. We will then look ahead to technologies being developed and on the horizon that will impact the architectures of this critical infrastructure.

Focal areas include the significant cost reductions and performance increases occurring in the networking sector as new silicon disrupts legacy incumbents; the introduction and adoption of 10/40/100 Gigabit Ethernet in HPC; the use and re-use of graphics processors for increased floating-point performance; and the changes in network and computational architectures being driven by new application demands and new infrastructure capabilities.


Large, Dynamic, and Pay as you Go: HPC on the Cloud

David Powers, Eli Lilly and Jason Stowe, CycleComputing

We will lay out the business drivers, inside and outside Eli Lilly, that led us to cloud computing as an option for HPC. We will then present details of migrating various applications, including those with pipeline stages and execution dependencies. Specifically, we will outline the migration of a proteomics workflow, along with benchmark data on various instance types, bandwidth, and I/O speeds.
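
As a hedged illustration of what "pipeline stages and dependencies" can mean in practice (the stage names below are hypothetical and not taken from the Eli Lilly workflow), a staged job graph can be expressed directly in code: independent chunks run concurrently, and each later stage waits only on the results it needs.

    # Hypothetical sketch of a staged workflow with dependencies; the stage
    # names are illustrative and not taken from the proteomics pipeline itself.
    from concurrent.futures import ThreadPoolExecutor


    def search_spectra(chunk):
        return f"matches({chunk})"      # stand-in for a compute-heavy cloud stage


    def score(matches):
        return f"scored({matches})"     # depends on one chunk's search results


    def merge(results):
        return sorted(results)          # final stage: depends on all scored chunks


    if __name__ == "__main__":
        chunks = [f"chunk{i}" for i in range(8)]
        with ThreadPoolExecutor(max_workers=4) as pool:
            # Independent chunks run concurrently; waiting on each future
            # expresses the stage-to-stage dependency.
            match_futures = [pool.submit(search_spectra, c) for c in chunks]
            scored = [score(f.result()) for f in match_futures]
        print(merge(scored))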


Intel’s 48-core research chip and the future of many-core processors

Tim Mattson, Intel

This discussion looks at a futuristic concept chip from Intel called the “Single-chip Cloud Computer” (SCC). This research chip contains 48 fully programmable IA cores. It also includes a high-speed on-chip network, along with newly invented power-management techniques that allow all 48 cores to operate on as little as 25 watts, or at 125 watts when running at maximum performance. The long-term research goal of this project is to understand how to build and program highly scalable microprocessors with hundreds of cores. These future many-core chips would support entirely new classes of applications. Over the next year, we plan to share approximately 100 of these chips with industry and academic partners to engage in hands-on research on new applications, scalable operating systems, and programming models that utilize the unique features of this chip.
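
The SCC is programmed in a message-passing style rather than through cache-coherent shared memory. As a rough analogy only (using standard MPI via mpi4py, not the SCC's own library), the sketch below passes a token around a ring of ranks, each rank standing in for one core.

    # Rough analogy only: ranks stand in for cores, and messages stand in for
    # on-chip network traffic; this is standard MPI, not the SCC library.
    # Run with, e.g.:  mpiexec -n 4 python ring.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    left = (rank - 1) % size
    right = (rank + 1) % size

    if rank == 0:
        comm.send(0, dest=right)            # start the token moving
        token = comm.recv(source=left)      # receive it after a full loop
    else:
        token = comm.recv(source=left)
        comm.send(token + 1, dest=right)    # increment and forward

    print(f"rank {rank} saw token {token}")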


Optimizing Reverse Time Migration on CPUs, GPGPUs and FPGAs

Bob Clapp, Stanford
with Haohuan Fu, Stanford and Olav Lindtjorn, Schlumberger

The optimal platform for Reverse Time Migration (RTM) has recently become a topic of significant debate, with proponents of the Central Processing Unit (CPU), the General Purpose Graphics Processing Unit (GPGPU), and the Field Programmable Gate Array (FPGA) all claiming superiority. The difficulty in comparing these three platforms for RTM performance is that the underlying architectures lead to significantly different algorithmic approaches. The flexibility of the CPU allows significant algorithmic changes, which can yield more than an order-of-magnitude improvement in performance. The GPGPU's large number of computational threads and high overall memory bandwidth provide a significant uplift, but they require a simpler algorithmic approach that demands more computation for the same size problem. The FPGA's streaming programming model results in an attractively different cost metric; it limits some kinds of algorithmic complexity while enabling others, and the current lack of a standardized high-level language is problematic.
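
For readers unfamiliar with RTM internals, the computational core being tuned on each platform is a finite-difference wave-propagation stencil. The sketch below is a minimal 2D acoustic example with an illustrative constant-velocity model; it shows the shape of the inner loop, not any of the authors' optimized implementations.

    # Minimal 2D acoustic finite-difference sketch of the stencil at the heart
    # of RTM propagation; grid sizes, velocity, and constants are illustrative.
    import numpy as np

    nz, nx, nt = 200, 200, 500
    dz = dx = 10.0                    # grid spacing (m)
    dt = 0.001                        # time step (s), within the CFL limit here
    v = np.full((nz, nx), 2000.0)     # constant velocity model (m/s)

    p_prev = np.zeros((nz, nx))
    p_curr = np.zeros((nz, nx))
    p_curr[nz // 2, nx // 2] = 1.0    # impulsive source at the grid center

    c = (v * dt) ** 2
    for _ in range(nt):
        # Second-order Laplacian on the interior points (the "stencil").
        lap = np.zeros_like(p_curr)
        lap[1:-1, 1:-1] = (
            (p_curr[2:, 1:-1] - 2 * p_curr[1:-1, 1:-1] + p_curr[:-2, 1:-1]) / dz**2
            + (p_curr[1:-1, 2:] - 2 * p_curr[1:-1, 1:-1] + p_curr[1:-1, :-2]) / dx**2
        )
        # Leapfrog time update: this inner loop is what gets tuned per platform.
        p_next = 2 * p_curr - p_prev + c * lap
        p_prev, p_curr = p_curr, p_next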


Multi-core evaluation and performance analysis of the ECLIPSE and INTERSECT Reservoir simulation codes

Owen Brazell, Schlumberger
with Steve Messenger & Najib Abusalbi, Schlumberger and Paul Fjertad, Chevron

In this paper we present a performance evaluation of the various Schlumberger simulation codes, with emphasis on how these codes will perform on the new generation of multi-core chips. We focus primarily on Intel-based chips, comparing the newer Nehalem EP, EX and Westmere chips with the older Harpertown chips, and we also compare these results with the performance of the AMD Istanbul and Shanghai chips. The various simulator codes (ECLIPSE, E300 and ECLIPSE Frontsim) use a mix of MPI message passing and multi-threading, and we compare the performance of the older Fortran codes with modern codes built on C++. We also show performance figures for the latest simulator code from Schlumberger and Chevron, INTERSECT. This code, released in late 2009, is based on C++ and was designed from the ground up to be parallel, unlike the earlier codes, and it shows some differences in performance on the newer multi-core chips.
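
As a generic illustration of the kind of measurement involved (not the ECLIPSE or INTERSECT benchmark harness), scaling studies of this sort typically time an identical workload at several core counts and report speedup against the single-core run:

    # Illustrative harness only: time the same set of tasks at several core
    # counts and report speedup relative to the single-core baseline.
    import time
    from multiprocessing import Pool

    import numpy as np


    def cell_update(seed):
        # Stand-in for per-domain simulation work.
        rng = np.random.default_rng(seed)
        a = rng.random((300, 300))
        return np.linalg.eigvalsh(a @ a.T).sum()


    if __name__ == "__main__":
        tasks = list(range(64))
        baseline = None
        for ncores in (1, 2, 4, 8):
            start = time.perf_counter()
            with Pool(processes=ncores) as pool:
                pool.map(cell_update, tasks)
            elapsed = time.perf_counter() - start
            baseline = baseline or elapsed
            print(f"{ncores:2d} cores: {elapsed:6.2f} s  speedup {baseline / elapsed:4.1f}x")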


HPC - The Next 5 Years

Panel Discussion
Panel Members: Andy Bechtolsheim, David Powers, Tim Mattson, Bob Clapp, Owen Brazell, Paulius Micikevicius

Panel moderators: David Judson (WesternGeco) and Scott Morton (Hess)

A number of future-looking technologies will be discussed at this workshop. The afternoon panel will attempt to address how HPC technologies will play out over the next 5 years.

Introductory Position Talks

Paulius Micikevicius - The Present and Future of Parallelism on GPUs
Many seismic codes are amenable to data-parallel implementations, the traditional domain of GPUs.  In this talk we will briefly survey the similarities and differences between the way parallelism is achieved on NVIDIA, AMD, and Intel GPUs, and how this is exposed to the developer at the source code level.  We will also speculate how this may change in the next few years.
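
As a hedged illustration (not from the talk) of the one-thread-per-output-element style referred to here, the kernel below uses Numba's CUDA support in Python; the way the same idea is expressed differs across the NVIDIA, AMD, and Intel toolchains.

    # Illustrative data-parallel kernel: one GPU thread per output element.
    # Requires an NVIDIA GPU with CUDA and the numba package installed.
    import numpy as np
    from numba import cuda


    @cuda.jit
    def saxpy(a, x, y, out):
        i = cuda.grid(1)              # global thread index
        if i < out.size:              # guard against the ragged last block
            out[i] = a * x[i] + y[i]


    if __name__ == "__main__":
        n = 1 << 20
        x = np.random.rand(n).astype(np.float32)
        y = np.random.rand(n).astype(np.float32)
        out = np.zeros_like(x)
        threads = 256
        blocks = (n + threads - 1) // threads
        saxpy[blocks, threads](np.float32(2.0), x, y, out)
        assert np.allclose(out, 2.0 * x + y)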

William C. Brantley, Ph.D. - AMD HPC Future
AMD's future is the fusion of CPUs and GPUs.

