This talk will cover trends in microprocessor design and show that on-chip multithreading and multiprocessing will bring higher levels of parallelism to even mainstream server platforms — levels that compiler writers and application developers will need to take advantage of.
Dr. Dileep Bhandarkar
is an IEEE Fellow, and a Distinguished Alumnus of the Indian Institute of
Technology, Bombay, where he received his B. Tech in Electrical Engineering.
He also holds an M.S. and a Ph.D. in Electrical Engineering from Carnegie Mellon
University, and has done graduate work in Business Administration at the
University of Dallas.
He is currently Architect at Large in Intel's Enterprise Platforms Group. His previous positions have included Director of the Enterprise Architecture Lab and Director of Strategic Planning for Intel Architecture processors and chipsets.
Prior to joining Intel in 1995, he spent almost 18 years at Digital Equipment Corporation, where he managed processor and system architecture and performance analysis work related to the VAX, PRISM, MIPS, and Alpha architectures. Before that, he worked for four years in the research labs at Texas Instruments in a variety of areas, including magnetic bubble memories, charge-coupled devices, fault-tolerant memories, and computer architecture.
Dr. Bhandarkar holds 15 U.S. patents and has published more than 30 technical papers in journals and conference proceedings. He is also the author of a book titled Alpha Architecture and Implementations.
The National Renewable Energy Laboratory (NREL) in Golden, Colorado is the nation's premier laboratory for renewable energy and energy efficiency research. The laboratory's mission is to develop renewable energy and energy efficiency technologies and practices and advance related science and engineering to address the nation's energy and environmental goals. A new computational sciences initiative at NREL seeks to dramatically increase the computational expertise and capabilities of the Lab. Integrating numerical simulation and information technology into the laboratory research agenda presents unique challenges and opportunities. In this talk we will discuss the wide variety of scientific research being pursued at the laboratory and the role that computational science can play in helping to improve energy efficiency research and to dramatically reduce the cost of exploiting renewable energy technologies. This talk will also discuss challenges faced in developing large numerical applications for parallel high performance computing systems and architectural support for numerical methods of choice.
NREL's web page is: http://www.nrel.gov/
Dr. Steve Hammond
Computational Sciences Director
Science and Technology Directorate
National Renewable Energy Laboratory
Golden, Colorado
Steve is the Computational Sciences Director at the National Renewable Energy Laboratory (NREL). Prior to joining NREL in March 2002, Steve spent nine and a half years at the National Center for Atmospheric Research in Boulder, CO. During his last six years at NCAR, Steve managed the computational science section of the Scientific Computing Division. Before joining NCAR, Steve was a postdoctoral researcher at the European Center for Advanced Scientific Computing in Toulouse, France; a Visiting Research Associate at the Research Institute for Advanced Computer Science, NASA Ames Research Center, Moffett Field, CA; and a Computer Scientist at the Corporate Research and Development Center, General Electric Co., Schenectady, New York. His areas of expertise include parallel numerical computing, graph partitioning and mapping, parallel algorithms, interconnection networks, and strategic planning.
PhD Computer Science, Rensselaer Polytechnic Institute, Troy, New York (1992)
MS Computer Science, University of Rochester, Rochester, New York (1984)
BA Mathematics, University of Rochester, Rochester, New York (1983)
The continuing proliferation of high-quality network connectivity brings with it the possibility of building applications that can use globally distributed resources to achieve new performance levels. The Computational Grid is an emerging paradigm for supporting such applications in which programs draw computational "power" from a global resource pool the same way appliances draw electrical power from a power utility -- seamlessly, ubiquitously, and anonymously.
Attempting to realize this paradigm requires new approaches to system software design, compiler optimization, security, and distributed resource allocation. At the same time, useful and often well-understood techniques from the performance evaluation and system simulation communities take on new roles in a Computational Grid context.
In this talk we will discuss some of the research challenges inherent to the Computational Grid approach, and some of the novel investigations that attempt to meet those challenges. In particular, the ability to estimate quantitatively the performance that will be deliverable to an application, despite fluctuating resource contention and availability, has emerged as a fundamentally necessary capability. Both compilation and run time systems must adapt program structure to the performance that is deliverable from the available resources from moment to moment. Similarly, distributed services must be architected to support automatic re-configuration in response to performance changes, giving them an almost autonomic quality.
In addition, we will survey some of the efforts to transition successful research results into both production usage and the commercial sector. Technology advances derived from early Grid research have begun to have influence outside the computer systems research community. We will present some of these early successes and discuss their likely future impact.
Dr. Rich Wolski
is an assistant professor at the University of California, Santa
Barbara and also leads the Grid Systems Thrust within NSF's Partnership
for Advanced Computational Infrastructure (NPACI). His Grid research efforts
have centered around the development of on-line performance forecasting
techniques, and automatic program scheduling agents for Grid applications. In
addition, the software system he and his group have been developing for
performance forecasting -- called the Network Weather Service -- is one of
four infrastructures included in the first release of the NSF
Middleware Initiative (NMI) national-scale software base.
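The on-line performance forecasting that the abstract and bio describe can be illustrated with a simple time-series predictor. The sketch below uses exponential smoothing over a stream of resource measurements; it is an illustrative example of the general technique, not Network Weather Service code, and the smoothing factor and the bandwidth series are assumptions chosen for demonstration.

```python
def forecast_series(measurements, alpha=0.5):
    """Return one-step-ahead forecasts via exponential smoothing.

    Each forecast is made *before* the corresponding measurement is seen:
        estimate[t+1] = alpha * measurement[t] + (1 - alpha) * estimate[t]
    """
    forecasts = []
    estimate = measurements[0]  # seed the estimate with the first observation
    for m in measurements:
        forecasts.append(estimate)          # prediction for this time step
        estimate = alpha * m + (1 - alpha) * estimate  # fold in the new sample
    return forecasts

# Example: noisy bandwidth measurements (MB/s) for a shared network link
# that drops in capacity midway through -- the kind of fluctuating resource
# availability a Grid scheduler must adapt to.
bandwidth = [10.0, 9.0, 11.0, 10.5, 4.0, 4.5, 5.0, 4.8]
predictions = forecast_series(bandwidth)
```

A real forecasting system would typically run several such predictors in parallel (mean-based, median-based, smoothing with different factors) and, at each step, report the forecast from whichever predictor has had the lowest recent error.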