
Abstracts

Off-Road to Exascale

Mark Seager, Lawrence Livermore National Laboratory

The technical challenges for exascale systems are daunting and will require large investments to overcome. This talk will cover some of these hardware and software challenges and potential technology solutions. However, the evolution of these technologies over the next ten years will yield systems with fundamentally different characteristics than the systems we have today and are deploying in the petascale era. Two of these ramifications, colloquially known as “flops are free, it’s the memory that will cost you” and “the noise in the system,” will be central to exascale systems. We will explore these ramifications and their dramatic impact on applications. In addition, we propose one possible technology BIG BET in the BIG DATA area.


Software and Algorithms for Exascale

Kathy Yelick, University of California at Berkeley

Despite the availability of petascale systems for scientific computing, demand for computational capability grows unabated, with areas of national and commercial interest including global climate change, alternative energy sources, defense and medicine, as well as basic science. Past growth in the high end has relied on a combination of faster clock speeds and larger systems, but the clock speed benefits of Moore's Law have ended, and 200-cabinet petascale machines are near a practical limit. Future system designs will instead be constrained by power density and total system power demand, resulting in radically different architectures. The challenges associated with exascale computing will require broad research activities across computer science, including the development of new algorithms, programming models, system software and computer architecture. In future computing systems, performance and energy optimization will be the combined responsibility of hardware and software developers. Since data movement dominates energy use in a computing system, minimizing the movement of data throughout the memory and communication fabric is essential. In this talk I will describe some of the open problems in programming models and algorithm design, along with promising approaches used so far. Overall, the trends in hardware demand that the community undertake a broad set of research activities to sustain the growth in computing performance that users have come to expect.
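
As a rough illustration of why this matters, the toy energy model below compares the energy spent on arithmetic with the energy spent moving operands for a simple vector update; the per-operation figures are purely hypothetical placeholders, not measurements of any real system, and serve only to show how quickly data movement can dwarf computation.

    # Toy energy model for the vector update y[i] = y[i] + a * x[i].
    # The per-operation energies are HYPOTHETICAL placeholders for illustration.
    PJ_PER_FLOP = 10.0        # assumed energy per floating-point operation (pJ)
    PJ_PER_DRAM_BYTE = 500.0  # assumed energy per byte moved to/from DRAM (pJ)

    n = 10**8                 # vector length
    flops = 2 * n             # one multiply and one add per element
    bytes_moved = 3 * 8 * n   # read x and y, write y (8-byte doubles)

    energy_compute_j = flops * PJ_PER_FLOP / 1e12
    energy_data_j = bytes_moved * PJ_PER_DRAM_BYTE / 1e12
    print(f"compute: {energy_compute_j:.3f} J, data movement: {energy_data_j:.3f} J")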


What We Need and What We Can Do With an Exascale System

Scott Morton, Hess Corporation; Henri Calandra, Total; John Etgen, Rice University

Abstract coming soon.


Serious Games

Tony Elam, Rice University

We’ve got to do a better job of training our employees and educating our students. The times, they are a-changin’, and so are our options. Serious Games and Virtual Environments will be explored as viable vehicles for education, training, and select business applications such as decision support. Serious Games will be defined, and the current technology and environments will be described. Multiple examples will be shown for both gaming simulations and virtual environments, including a more in-depth view of OilSim. In addition, the Houston Serious Games Research Consortium will be highlighted along with some example projects. The presentation will conclude with insight into some future serious games technologies and capabilities.


The Impact of Cluster Filesystem Parameters on Seismic Data Processing Performance

Evgeny Kurin, GEOLAB; Alexander Naumov, T-Platforms

As an example of a time-consuming seismic processing procedure, we consider a practical implementation of the popular surface-related multiple elimination (SRME) algorithm for 3D wide-azimuth (WAZ) data. The computational kernel of the algorithm consists of pair-wise convolutions of seismic traces. In every pair, at least one trace is usually missing and has to be reconstructed either from another trace or from a set of traces.
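
As a minimal sketch of the kind of kernel involved (illustrative only: the trace lengths and the simple weighted-average reconstruction below are assumptions, not the authors' implementation), the elementary operation looks roughly like this:

    import numpy as np

    def convolve_pair(trace_a, trace_b):
        """Convolve two seismic traces (1-D sample arrays): the elementary
        operation repeated tens of thousands of times per output trace."""
        return np.convolve(trace_a, trace_b)

    def reconstruct_missing(neighbour_traces, weights=None):
        """Stand-in for missing-trace reconstruction: a simple weighted average
        of nearby traces; real data reconstruction is far more elaborate."""
        neighbour_traces = np.asarray(neighbour_traces)
        if weights is None:
            weights = np.full(len(neighbour_traces), 1.0 / len(neighbour_traces))
        return np.average(neighbour_traces, axis=0, weights=weights)

    # Example: one pair in which the second trace is missing and must be rebuilt.
    rng = np.random.default_rng(0)
    trace_a = rng.standard_normal(1000)           # a recorded trace
    neighbours = rng.standard_normal((4, 1000))   # traces around the missing one
    trace_b = reconstruct_missing(neighbours)
    contribution = convolve_pair(trace_a, trace_b)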

Processing a real 3D WAZ dataset with the 3D SRME algorithm is a computational challenge, because such datasets are commonly as large as dozens of terabytes, and obtaining one output trace usually requires as many as tens of thousands of convolutions and I/O operations. We show that the computations can be effectively parallelized for both conventional multi-core and hybrid GPU-based systems, so that I/O becomes the only bottleneck.

The two ways to overcome this are (i) redesigning the initially quasi-random I/O pattern and (ii) tuning the cluster filesystem. We analyze methods of getting the maximum performance out of a given compute cluster. Besides mid-range clusters widely used in the industry, we run our numerical experiments on two systems from the Top500 list, both installed at Moscow State University: LOMONOSOV (#17 on the recent Top500 list) and SKIF-MSU (#179). These supercomputers have cluster filesystems of different types, and we study the performance and scalability of both systems in terms of their cluster I/O throughput for the given seismic processing problem.
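
A minimal sketch of idea (i), reordering scattered trace reads into a few large sequential requests, is given below; the fixed record size and single-file layout are simplifying assumptions, not details of the authors' system.

    def batch_sequential_reads(record_indices, record_bytes, max_gap_records=16):
        """Group quasi-random trace-read requests (record indices within one file)
        into contiguous runs, so the filesystem sees a few large sequential reads
        instead of many small random ones. Returns (offset, length) pairs."""
        batches = []
        for idx in sorted(set(record_indices)):
            if batches and idx - batches[-1][-1] <= max_gap_records:
                batches[-1].append(idx)
            else:
                batches.append([idx])
        return [(b[0] * record_bytes, (b[-1] - b[0] + 1) * record_bytes)
                for b in batches]

    # Example: seven scattered requests collapse into two large reads.
    print(batch_sequential_reads([7, 3, 4, 5, 120, 121, 9], record_bytes=4096))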


Scalable Real-Time Formation Mapping Inversion on the Cloud

George Kutiev, Schlumberger

Formation evaluation and mapping while drilling requires an inversion that searches a 20+ dimensional space. In addition to the inherent computational complexity, the inversion results have to be available in real time (within 1 min of receiving downhole measurements) in order to affect steering decisions in a timely manner.

To address this challenge, a fast and highly scalable distributed parallel implementation was designed using cloud-based, on-demand techniques. The "cloud" consists of multi-core machines distributed throughout the US, Europe, Asia and South America.
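
A minimal local sketch of the scheduling idea is shown below, using a process pool as a stand-in for the cloud workers; the placeholder misfit function, the 20-dimensional random candidates, and the worker count are illustrative assumptions, not the actual inversion.

    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def misfit(params):
        """Placeholder objective: the real system would compare a forward-modelled
        tool response against the downhole measurements."""
        return float(np.sum((np.asarray(params) - 0.5) ** 2))

    def evaluate_candidates(candidates, max_workers=8):
        """Fan candidate models out to workers and keep the best one."""
        with ProcessPoolExecutor(max_workers=max_workers) as pool:
            costs = list(pool.map(misfit, candidates))
        best = int(np.argmin(costs))
        return candidates[best], costs[best]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        candidates = rng.uniform(0.0, 1.0, size=(1000, 20)).tolist()  # 20-D search space
        best_params, best_cost = evaluate_candidates(candidates)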

The implementation allows multiple users and several drilling operations to be supported concurrently, based on available resources, often gaining a speedup factor of 200-300 over the previous approach, which used a single local machine.

Employing fault-tolerant and automated re-scheduling techniques has virtually eliminated the chance of failure during these critical around-the-clock operations. High-grade encryption enables data to be sent to and received from client locations via heterogeneous networks, including 3G/4G broadband wireless connections to thin-client laptops.


A Global HPC-Based Model Building Process for Hydrocarbon Quantification Services

Feyzi Inanc, Baker Hughes Inc.

The oil and gas industry employs very sophisticated technologies for hydrocarbon exploration and production purposes. Some of those technologies utilize simulation- and model-based approaches. Due to the nature of the industry, exploration and production locations are globally distributed, but the high-performance computing resources set up to meet the demand, and the people who process the data and build the models, are usually clustered around certain centers. Consequently, there is a pressing demand for automated data communication processes between the remote centers and the high-performance computing centers.

BHI nuclear cased-hole technologies provide reservoir hydrocarbon saturation measurement services during the production stages. These services involve not only the logging tools and data acquisition but also interpretation of the data acquired by the pulsed-neutron-generator-based logging tools. The interpretation requires models representing the reservoirs. The wellbore completion data needed for building models is usually proprietary information that can be obtained from the operators only after a contract is obtained. In addition, wellbore completions, wellbore fluids, and reservoir fluids show a large degree of variation and cannot be represented by pre-computed generic completion models. Building completion-specific models requires very intense Monte Carlo-based neutron/photon transport computations. A typical model requires about 30-45 CPU days on a single processor, and multiple models are normally required for a single well. Therefore, there is a demand for high-performance computers to meet the computing needs. In addition, the required turnaround time between a model request from operations and delivery of the model back to operations is usually less than four business days.
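
A rough back-of-the-envelope calculation shows why this turnaround forces the work onto a cluster; the model count per well and the assumption of near-linear Monte Carlo scaling are illustrative, not figures from the paper.

    # Hypothetical sizing: cores needed to deliver all models within the window,
    # assuming the Monte Carlo transport runs scale almost linearly across cores.
    cpu_days_per_model = 45   # upper end of the 30-45 CPU-day range quoted above
    models_per_well = 3       # assumed; "multiple models" are needed per well
    turnaround_days = 4       # required delivery window (business days)

    cores_needed = cpu_days_per_model * models_per_well / turnaround_days
    print(f"~{cores_needed:.0f} cores kept busy for the full window, per well")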

In this presentation we provide preliminary information about nuclear cased-hole services and their requirements. We then describe how a group of nuclear scientists provides models to the field operations using high-performance computing resources and automated web-based data communications.


The Use of GPUs for Reservoir Simulation

Garfield Bowen, Schlumberger

The paper reviews the use of GPU-type hardware for a general-purpose reservoir simulator in a commercial environment. The paper focuses on the challenges associated with parallelization of algorithms that are not naturally parallel, in particular the solution of systems of linear equations, where global communication is critical. The trade-off between convergence speed and reliability on the one hand and the degree of parallelism available on the other is discussed with reference to benchmarking of a range of real-world simulation models using a linear solver implemented on the GPU. An analysis of the costs and benefits of utilizing GPU-type architectures is made with a view to a full commercial release. Based on this analysis, the requirements and character of future algorithms that can fully exploit the hardware currently under development are discussed.
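
The convergence-versus-parallelism trade-off can be seen even in textbook iterations: a Jacobi sweep updates every unknown independently and so maps naturally onto a GPU, while a Gauss-Seidel sweep converges in fewer iterations but is inherently sequential. The sketch below illustrates only this generic trade-off on a small model problem; it is not the solver discussed in the paper.

    import numpy as np

    def jacobi(A, b, iters):
        """Each component is updated independently: highly parallel, slower to converge."""
        x = np.zeros_like(b)
        D = np.diag(A)
        R = A - np.diagflat(D)
        for _ in range(iters):
            x = (b - R @ x) / D
        return x

    def gauss_seidel(A, b, iters):
        """Each update uses values refreshed within the sweep: faster convergence,
        but the inner loop is sequential."""
        x = np.zeros_like(b)
        for _ in range(iters):
            for i in range(len(b)):
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    # Small diagonally dominant test system.
    n = 50
    A = 4.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    b = np.ones(n)
    x_ref = np.linalg.solve(A, b)
    for iters in (5, 20):
        print(iters,
              np.linalg.norm(jacobi(A, b, iters) - x_ref),
              np.linalg.norm(gauss_seidel(A, b, iters) - x_ref))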


Data Center Efficiency - Contained Hot Aisle Technology Drives Down PGS' Operating Costs

David Baldwin, Petroleum Geo-Services

Between 2008 and 2010, Petroleum Geo-Services (PGS) embarked on a program to revamp and rebuild their mega-centers in the UK and US. One of the main focuses of the program was to contain the spiraling energy costs associated with running major data centers and to this end we concentrated on how we could more effectively cool the data halls – a major contributor to any data center’s operating cost.

The London data center uses a combination of evaporative cooling and hot aisle containment to give an award-winning reduction in PUE (power usage effectiveness) from the ‘traditional’ level of 2.4 to 1.2. This is an astonishing achievement, saving PGS half the yearly electrical costs in London and leading to the center being declared ‘Europe’s most efficient data center’. As a comparison, Google’s most efficient data hall averages 1.18 – a figure PGS can beat when the center is in full production.
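
The claim of halving the electrical bill follows directly from the definition of PUE (total facility power divided by IT equipment power); the quick check below assumes an unchanged, purely illustrative IT load.

    # PUE = total facility power / IT equipment power, so for a fixed IT load
    # the total power drawn (and hence the electricity cost) scales with PUE.
    it_load_kw = 1000.0                 # assumed IT load, for illustration only
    total_before_kw = 2.4 * it_load_kw  # at the 'traditional' PUE of 2.4
    total_after_kw = 1.2 * it_load_kw   # at the achieved PUE of 1.2
    print(total_after_kw / total_before_kw)  # 0.5, i.e. half the electrical cost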

Our Houston data center was built in 2010 and expanded on the experiences in the UK. Deploying evaporative cooling was not cost-effective because of the warmer and wetter climate we enjoy in Texas, so, working with a co-location partner, CyrusOne, to customize the data hall, we developed the hot aisle concept further to reduce the running costs of the center by one third, achieving a PUE of 1.6 – a level the EPA would describe as using ‘best practices’. What is more impressive is that the return on investment on the containment system in Houston is estimated to be 9 months.

In this session, we will look at how PGS deployed evaporative cooling technology and hot aisle containment in the real world, how these are driving down our data center operating costs, and how these concepts could benefit your data center.