***Please Note: This is NOT SEG Special Session 7, which will be held Thursday, November 13th.***


ABSTRACTS

8:35 - 9:05 Architectural Balance and Enabling Large Scale Parallelism: The SiCortex Cluster Systems
Matt Riley, Chief Engineer - SiCortex


Large-scale parallelism is predicated on two key features: the inherent parallelism of the algorithm, and the execution platform's ability to support the necessary interprocessor coordination and communication. SiCortex systems are aimed at solving the interprocessor communication problem for parallel programs that require strong interprocessor interactions. We'll discuss the SiCortex hardware and software architecture, as well as some measures we've taken to allow programmers to take advantage of the available algorithmic parallelism.


9:05 - 9:35 Acceleration of Prestack Visualization and Attribute Extraction
Steve Briggs, V.P. Integration & Deployment - Headwave


The use of graphics processors makes visualization and analysis of terabyte-scale datasets feasible for Headwave customers. Analytic functions have grown to encompass semblance, gradient-intercept, and other non-traditional attributes. The availability of FFTs opens significant potential for user-designed filters and analysis process flows.
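
As a rough illustration of this kind of GPU attribute kernel, the sketch below computes the classic semblance coefficient over a sliding time window of a prestack gather, one CUDA thread per output sample. It is a minimal, hypothetical example; the kernel name, data layout, and parameters are illustrative assumptions, not Headwave's software.

    // semblance_sketch.cu -- a hypothetical illustration, not Headwave code.
    // Classic semblance: S(t) = sum_window (stacked amplitude)^2 / (N * sum_window energy),
    // computed per output sample over a sliding window, one thread per sample.
    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void semblance(const float *traces, // ntraces x nsamples, row-major
                              float *out,          // nsamples - window values
                              int ntraces, int nsamples, int window)
    {
        int t = blockIdx.x * blockDim.x + threadIdx.x;
        if (t >= nsamples - window) return;

        float num = 0.0f, den = 0.0f;
        for (int k = t; k < t + window; ++k) {
            float stack = 0.0f, energy = 0.0f;
            for (int i = 0; i < ntraces; ++i) {
                float a = traces[i * nsamples + k];
                stack  += a;      // amplitude stacked across traces
                energy += a * a;  // total energy across traces
            }
            num += stack * stack;
            den += energy;
        }
        out[t] = (den > 0.0f) ? num / (ntraces * den) : 0.0f;
    }

    int main()
    {
        const int ntraces = 48, nsamples = 2048, window = 32;
        const int nout = nsamples - window;

        float *d_traces, *d_out;
        cudaMalloc(&d_traces, (size_t)ntraces * nsamples * sizeof(float));
        cudaMalloc(&d_out, nout * sizeof(float));
        cudaMemset(d_traces, 0, (size_t)ntraces * nsamples * sizeof(float)); // placeholder input

        semblance<<<(nout + 255) / 256, 256>>>(d_traces, d_out, ntraces, nsamples, window);
        cudaDeviceSynchronize();

        cudaFree(d_traces);
        cudaFree(d_out);
        return 0;
    }

Real prestack flows batch many gathers and windows per launch; the point here is only that attribute math of this sort maps naturally onto thousands of GPU threads.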


9:35 - 10:05 Processor Architecture: Past, Present, Future
Steve Wallach, Chief Scientist and Co-founder - Convey


Computer architecture at the processor level will be surveyed from the 1960s to the present. Using this history as a base, the fundamentals of hardware and system software design will be applied to describe future processor architectures. The objective is to design the fastest possible heterogeneous uniprocessor, which is subsequently used to scale up.


10:05 - 11:05 Panel Discussion: Converts and Skeptics Discuss Accelerated Computing
Jan Odegard, Panel Moderator, Executive Director, Ken Kennedy Institute - Rice University


The jury is still very much out on accelerator technology for high performance computing. Over the years the computational science and engineering community has had a love-hate relationship with innovations that promised to revolutionize the HPC space. Most claimed to do this by offering order-of-magnitude improvements in performance, if only users would completely rewrite their codes. In most cases these have been blips on the radar: general-purpose microprocessor designs have adopted (borrowed or stolen) what they could from these innovations, leaving the rest to fall by the wayside.

What, if anything, is different today? One critical difference is that we cannot change physics, and hence vendors' ability to continue scaling clock speeds at the rate we have become accustomed to, while keeping processors cool, is very limited. As a result we no longer see the performance boost we used to get with every new generation of microprocessors. However, the fact that Moore's law still holds (we can still pack roughly twice as many transistors on a chip every 18 months) means processor vendors are looking at how to extract performance from tight integration and on-chip replication (multi-core and many-core). As a result we have an increasing number of multi-core processors, and most vendors have roadmaps toward massive on-chip parallelism. Systems such as Roadrunner at Los Alamos National Laboratory (the first system to achieve a sustained petaflop/s on the Linpack benchmark, leveraging the IBM Cell processor) are helping validate the promise of accelerators such as GPGPUs, Cells, FPGAs, and ASICs.

NVIDIA and AMD recently released graphics processors that support massively parallel double-precision computation, and both companies are providing software support for new programming models (e.g., CUDA and Brook+/CAL). Intel is developing a technology code-named "Larrabee" that includes a many-core programming model and performance analysis for several applications, and IBM is continuing to develop Cell and related technologies. Each of these technologies promises performance beyond what will be available with general-purpose microprocessors. For all their potential benefits, however, they are not without challenges. Industry is struggling to decide when and how to embrace and adopt accelerators as part of its HPC resource portfolio. Getting performance is still hard, programming models, while emerging, are still rudimentary, and porting code between processor generations is not flawless. The lack of good programming models and of efficient code migration creates a conundrum for the industry: rewriting code for every new generation of processors is not cost-effective, and yet the promise of performance today is appealing.
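
To make the programming-model discussion concrete, here is a minimal, hypothetical CUDA sketch of a double-precision AXPY (y = a*x + y), the kind of simple data-parallel kernel these models expose. It is only an illustration of the CUDA style, not vendor sample code.

    // daxpy_sketch.cu -- a minimal illustration of the CUDA data-parallel style.
    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // One thread per element: y[i] = a * x[i] + y[i], in double precision.
    __global__ void daxpy(int n, double a, const double *x, double *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main()
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(double);

        // Host data.
        double *hx = (double *)malloc(bytes), *hy = (double *)malloc(bytes);
        for (int i = 0; i < n; ++i) { hx[i] = 1.0; hy[i] = 2.0; }

        // Device copies.
        double *dx, *dy;
        cudaMalloc(&dx, bytes);
        cudaMalloc(&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        // Launch one thread per element, 256 threads per block.
        daxpy<<<(n + 255) / 256, 256>>>(n, 3.0, dx, dy);
        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);

        printf("y[0] = %f (expected 5.0)\n", hy[0]);
        cudaFree(dx); cudaFree(dy);
        free(hx); free(hy);
        return 0;
    }

Even this toy example shows why porting is non-trivial: the data movement and the kernel launch are explicit and tied to the platform's programming model.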

This panel will attempt to take the pulse of innovation around massive parallelism and accelerator technology, in order to better understand how the technology is being adopted. The panel will also discuss barriers to broader adoption and increased productivity.

11:45 - 12:15 Building a New Data Center
William Deigaard, Director of Networking, Telecommunications and Data Center - Rice University


In today's outsourced environment, academic institutions are not generally expected to be engaged in the design, development, and construction of state-of-the-art data center facilities. Rice University, however, has recently completed such a project. This talk will cover the logic behind the project, describe the design details, show the final results, articulate the plan for growth, and share lessons learned throughout.


12:15 - 12:45 Learnings from the NCSA and R Systems Data Centers
Brian Kucic, Founding Principal - R Systems NA, Inc.


This talk describes the design of the X-treme Cooling Data Center in NCSA's Advanced Computation Building for high-density computing. The facility was designed in 1999 and continues to meet the high heat loads of today's HPC servers.

In March 2008, R Systems installed what became the 44th-fastest supercomputer on the June 2008 Top500 list. The facility was completed in four weeks, and the machine was "racked and stacked" and placed into production within two weeks of delivery.

12:45 - 1:30 Panel Discussion: Best Practices and Future Requirements for HPC Data Centers
Keith Gray, Panel Moderator, Manager of High Performance Computing - BP


The panel for Best Practices and Future Requirements for HPC Data Centers will:

 -  Review the growth in power and cooling requirements for HPC Data Centers
 -  Discuss the forecast for power and cooling requirements in the next 10 years
 -  Share best practices for how people have solved facilities challenges
 -  Discuss innovative solutions being developed for HPC facilities

1:30 - 2:00 Best Practices in Systems Administration for Large Clusters
Cindy Crooks, HPC Consultant - BP
Kim Andrews, Manager, Research Computing - Rice

The successful support of a research computing community requires the maintenance and coordination of several critical components. All of these are necessary for success, but none are sufficient on their own:

  • An uninterrupted supply of power and cooling
  • An ample pool of computing hardware
  • A stable operating system
  • Scalable and efficient application software with a usable development environment
  • A fair and efficient queuing system that provides required resources on demand
  • Clear documentation that enables easy and efficient access to resources
  • Desktop support for end-user entry into the systems

Creating this environment is absolutely essential for building user communities that take ownership of the center and are committed to it. In this presentation we discuss the administration of HPC centers, the methods used to address the relevant issues, and the reasons for using those methods.

 

