
Abstracts

Solid State Disk Impacts on Seismic Imaging and Modeling

Guillaume Thomas-Collignon, CGGVeritas;  Jacob Liberman, Dell;  and Joshua Mora, AMD
with Jean-Yves Blanc & Riju John, CGGVeritas  and Philippe Thierry, Intel Corporation
For seismic imaging and modeling workloads, the scalability of high performance computing systems is bounded by parameters such as floating-point capability, memory bandwidth and latency, interconnect bandwidth and latency, and the storage subsystem. In this presentation we will describe the performance impact of the latest Solid State Disk (SSD) technology on RTM and other algorithms. We will review the exceptional properties of SSD devices and how to leverage them, and will also discuss their limitations.


Bandwidth, Throughput, IOPS, and FLOPS - the delicate balance

Bill Menger, ConocoPhillips Company
with Dan Wieder and Dave Glover, ConocoPhillips Company
ConocoPhillips recently completed a large reverse time migration project on our 1250-node cluster. Even though we completed the project "on time and under budget", there were times when we found ourselves wondering if the jobs would ever complete. We will review the elements necessary to obtain satisfactory throughput for thousands of jobs when they share a common network and a common file system. We will also share lessons learned as we built and deployed this large cluster over a three-month span from purchase to full operation.


Getting it right without knowing the answer: experience with quality assurance in a large-scale seismic simulation project

Bill Symes, Rice University
with Igor Terentyev, and Tetyana Vdovina, Rice University
The SEG Advanced Modeling ("SEAM") project aims to produce highly detailed subsurface models and corresponding synthetic data, primarily to aid the seismic research community in evaluating imaging technology. The Phase I model is acoustic, for cost reasons, and emulates a deepwater subsalt exploration objective in the Gulf of Mexico. The model is 40 km x 35 km x 15 km in extent, and areal data for 60,000 shots will be acquired, accurate to perhaps 25 Hz. The authors participated in developing the quality assurance program for this project, which involved construction of a benchmark modeling code and of a method for assessing the accuracy of large-scale simulations for which no analytic solution is available. In this talk, we will explain how we validated the benchmark code using Richardson extrapolation, and illustrate the difficulties caused by the slow convergence actually observed in nominally high-order FD schemes. Comparison of benchmark data with that provided by multiple vendors gives us confidence that the QC method accurately assesses RMS amplitude errors, and also that these errors are uncomfortably large. The results suggest that current FD technology must be improved substantially if it is to support modeling QC based on sample-by-sample accuracy of approximate solutions of wave equations.
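
As a hypothetical illustration (not the authors' code) of the kind of convergence check Richardson extrapolation provides, the observed order of accuracy of an FD scheme can be estimated from the same output quantity computed at grid spacings h, h/2, and h/4; the ratio of successive differences should approach 2^p for a scheme of order p. A minimal Python sketch with invented sample values:

    # Hypothetical sketch: estimate the observed order of convergence of an
    # FD solver from the same output quantity computed at spacings h, h/2, h/4.
    import math

    def observed_order(u_h, u_h2, u_h4):
        """Richardson-style estimate of the convergence order p from three runs."""
        return math.log2(abs(u_h - u_h2) / abs(u_h2 - u_h4))

    # Invented values (not SEAM data); a genuinely second-order scheme
    # should give an estimate close to 2.
    print(observed_order(1.0400, 1.0100, 1.0025))   # -> 2.0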


Verification of Complex Codes

Curt Ober, Sandia National Laboratories

Over the past several years, verifying and validating complex codes at Sandia National Laboratories has become a major part of code development. These activities address two important aspects of simulation modeling: determining whether the models have been correctly implemented (verification), and determining whether the correct models have been selected (validation). In this talk, we will focus on verification and discuss the basics of code verification and its application to a few codes and problems at Sandia.


ECLIPSE: Performance Benchmarking and Profiling

Gilad Shainer, Mellanox Technologies
with Tong Liu, Mellanox Technologies; Owen Brazell, Schlumberger
Schlumberger's ECLIPSE Reservoir Engineering software is a widely used oil and gas reservoir numerical simulation suite. Like many other High Performance Computing (HPC) applications, ECLIPSE runs in a complex ecosystem of hardware and software components. Maximizing ECLIPSE performance requires a deep understanding of how each component impacts the overall solution. However, as new hardware and software comes to market, design decisions are often based on assumptions or projections rather than empirical testing. This presentation removes the guesswork from cluster design for ECLIPSE by providing best practices for increased performance and productivity. It includes scalability testing, interconnect performance comparisons, job placement strategies, and power efficiency considerations. It also introduces an ongoing collaboration between Dell, AMD, and Mellanox dedicated to publishing timely application-specific best practices and performance data.


Benchmarking computers for seismic processing and imaging

Evgeny Kurin, GEOLAB Ltd.
A set of deliberately simple tests is suggested for estimating the performance of computers used for seismic data processing and imaging. Such tests have proved helpful for evaluating new hardware and for locating bottlenecks in existing hardware/software installations. The tests are designed so that each corresponds to a given subsystem (or subsystems) of the computer and at the same time adequately simulates a class of processing algorithms. The tests are open-source and can be downloaded from http://geocomputing.narod.ru/benchmark.html.

CPU

“Pure” CPU performance is a critical issue for 1D processing algorithms. We consider an implementation of the discrete Fourier transform to be a good representative algorithm, and therefore use it as the test for “pure” CPU performance.
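
A minimal sketch of such a test, assuming NumPy's FFT as a stand-in for the benchmark's own Fourier implementation (trace and sample counts are arbitrary), might look as follows:

    # Sketch of a "pure" CPU test: Fourier-transform a block of traces and
    # report a throughput figure. NumPy's FFT stands in for the benchmark's
    # own implementation; sizes are arbitrary.
    import time
    import numpy as np

    def fft_benchmark(n_traces=10000, n_samples=4096, repeats=3):
        rng = np.random.default_rng(0)
        traces = rng.standard_normal((n_traces, n_samples)).astype(np.float32)
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            np.fft.rfft(traces, axis=1)      # forward transform of every trace
            best = min(best, time.perf_counter() - t0)
        return n_traces / best               # traces transformed per second

    print(f"{fft_benchmark():.0f} traces/s")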

CPU + memory

For a multi-channel algorithm, we can hardly expect all the traces in a data portion to fit into the CPU cache. Thus, memory access time becomes a critical issue for overall performance. We suggest the following test to simulate the operation of multi-channel algorithms: randomly select two traces from a gather held in memory and convolve them in the time domain.
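
A minimal sketch of this test, with gather dimensions and pair count chosen arbitrarily so that the gather is much larger than a typical CPU cache:

    # Sketch of the CPU + memory test: pick two random traces from a gather
    # held in memory and convolve them in the time domain. The gather is
    # sized to exceed the CPU cache so that memory access time matters.
    import time
    import numpy as np

    def convolution_benchmark(n_traces=20000, n_samples=2000, n_pairs=200):
        rng = np.random.default_rng(0)
        gather = rng.standard_normal((n_traces, n_samples)).astype(np.float32)  # ~160 MB
        t0 = time.perf_counter()
        for _ in range(n_pairs):
            i, j = rng.integers(0, n_traces, size=2)   # random pair of traces
            np.convolve(gather[i], gather[j])          # time-domain convolution
        return n_pairs / (time.perf_counter() - t0)    # convolutions per second

    print(f"{convolution_benchmark():.1f} pairs/s")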

I/O

In order to simulate real-life workflows related to disk input/output, we suggest the following tests:

  • Successive trace reading/writing,
  • Sort-like reading/writing.
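
A minimal sketch of the two access patterns above, assuming fixed-length traces stored back to back in a plain binary file (no SEG-Y headers) with arbitrary sizes:

    # Sketch of the I/O tests: read traces from a binary file successively and
    # in a shuffled ("sort-like") order, reporting MB/s for each pattern.
    # Note: the OS page cache will flatter these figures unless the file is
    # much larger than RAM or the cache is dropped between runs.
    import os
    import time
    import numpy as np

    N_TRACES, N_SAMPLES = 10000, 1000
    TRACE_BYTES = N_SAMPLES * 4                      # float32 samples

    def read_traces(path, order):
        t0 = time.perf_counter()
        with open(path, "rb") as f:
            for i in order:
                f.seek(int(i) * TRACE_BYTES)
                f.read(TRACE_BYTES)
        return N_TRACES * TRACE_BYTES / (time.perf_counter() - t0) / 1e6

    rng = np.random.default_rng(0)
    rng.standard_normal((N_TRACES, N_SAMPLES)).astype(np.float32).tofile("traces.bin")
    print("successive:", read_traces("traces.bin", np.arange(N_TRACES)), "MB/s")
    print("sort-like :", read_traces("traces.bin", rng.permutation(N_TRACES)), "MB/s")
    os.remove("traces.bin")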

Cluster I/O

  • Successive and sort-like trace reading/writing by a number of processes on cluster nodes.
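
A hypothetical sketch of the cluster variant, assuming mpi4py is available and a trace file like the one above resides on the shared file system; each MPI process runs the successive-read pattern at the same time, the slowest process defines the aggregate rate, and the script is launched with mpirun across the nodes of interest:

    # Sketch of the cluster I/O test: every MPI rank reads the shared trace
    # file successively; aggregate bandwidth uses the slowest rank's time.
    import time
    from mpi4py import MPI

    N_TRACES, TRACE_BYTES = 10000, 4000              # matches the file above

    comm = MPI.COMM_WORLD
    comm.Barrier()                                   # start all ranks together
    t0 = time.perf_counter()
    with open("traces.bin", "rb") as f:
        for _ in range(N_TRACES):
            f.read(TRACE_BYTES)
    slowest = comm.allreduce(time.perf_counter() - t0, op=MPI.MAX)
    if comm.Get_rank() == 0:
        total_mb = comm.Get_size() * N_TRACES * TRACE_BYTES / 1e6
        print(f"aggregate: {total_mb / slowest:.1f} MB/s on {comm.Get_size()} ranks")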

What does Performance mean in HPC?

Andrew Jones, NAG
with Rob Meyer, NAG
This talk will discuss the interplay between performance and productivity in HPC, and what productivity means. It will cover the changing nature of HPC hardware and its impact on users, especially issues of scalability. The role of applications and algorithms will be highlighted. The talk will describe how to differentiate an HPC service so that it fulfils its potential as an enabler of competitive advantage in the business process.
