High Performance Virtual Machines -- 1996 DARPA ITO Summary
PROJECT SUMMARY
DARPA Order Number E313
Contractor:
The Board of Trustees of the University of Illinois
801 South Wright Street
Urbana, Illinois 61801
PRINCIPAL INVESTIGATOR
Andrew A. Chien
Department of Computer Science
University of Illinois
1304 West Springfield Avenue
Urbana, Illinois 61801
Phone: 217-333-6844
Fax: 217-244-6500
Email: achien@cs.uiuc.edu
Co-PIs
Daniel A. Reed and David A. Padua
Department of Computer Science/University of Illinois
Email: reed@cs.uiuc.edu
Email: padua@cs.uiuc.edu
Related Information
http://www-csag.cs.uiuc.edu/projects/hpvm.html
Objective
High Performance Virtual Machines (HPVMs) can increase the
accessibility and delivered performance of distributed computational
resources for high performance computing applications. Successful
HPVMs will reduce the effort required to build efficient parallel
applications on distributed resources, increase the performance
delivered to those applications, and extend parallel software tools
from existing parallel systems to distributed environments.
Approach
The rapidly increasing performance of low-cost computing systems has
produced a rich environment for desktop, distributed, and wide-area
computing. However, this wealth of computational resources has not
been effectively harnessed for high performance computing. High
Performance Virtual Machines (HPVMs) are a new technology that
leverages the software tools and understanding of parallel computation
developed on scalable parallel systems to exploit distributed
computing resources. The objective is to reduce the effort required to
build high performance applications on distributed systems.
High Performance Virtual Machines depend on building a uniform,
portable abstraction -- a virtual machine -- with predictable, high
performance characteristics. To successfully insulate application
programs, a virtual machine must (1) deliver a large fraction of the
underlying hardware performance, (2) virtualize resources to provide
portability and to reduce the effort in building application programs,
and (3) deliver predictable, high performance. The project is
developing novel technology that leverages commodity components
(hardware and software) to deliver high performance communication over
cluster and wide area interconnects, predictable communication and
computation, coordinated scheduling, and uniform access to resources
(e.g. files, mass storage, embedded sensors).
The HPVM project involves not only the development of novel
communication, scheduling, and resource management technologies, but
also the dissemination of a series of software releases which embody
these ideas.
1996 Accomplishments
New Start
FY 1997 Plans
The major objectives of the High Performance Virtual Machines project for fiscal year 1997 are:
- Develop and distribute implementations of Fast Messages 2.0
(low-level datagram and scatter/gather) and the Message Passing
Interface (MPI) which run with high performance atop Myricom's Myrinet
in a Windows NT environment. This basic infrastructure will enable
many HPC message passing applications to become cluster-capable.
- Design, develop, and distribute global address space interfaces for
clusters, including a shmem put/get library (adapted from the Cray
T3x) and Pacific Northwest Laboratory's Global Arrays. This
implementation will support operation across Myricom's Myrinet and
Windows NT. High performance implementations of these APIs will
increase the range of parallel applications that can execute
efficiently on clusters.
- Document, robustify, and release HPVM 1.0, the first release of a
High Performance Virtual Machine. This system will include the high
performance communication interfaces above and appropriate glue
software for making cluster use convenient (batch managers, job
monitors, etc.), and will execute with high performance within Windows
NT and across Myricom's Myrinet.
- Explore the delivery of dynamic coscheduling technology in Windows
NT and, as appropriate, build external modules and internal hooks to
support efficient coscheduling of threads in a parallel computation
(microsecond scale) while simultaneously supporting timesharing
workloads across the computing nodes in a cluster. Demonstrate a
prototype that supports efficient coscheduling in Windows NT.
- Obtain alternative cluster interconnects (Tandem's ServerNet and
Digital's Memory Channel) and explore implementation of the High
Performance Virtual Machine APIs atop them. Evaluate the interconnects
and select those appropriate for HPVM implementation.
- Explore integration of HPVM access and communication implementations
with emerging wide area metacomputing systems such as Globus and
Legion. As appropriate, define interfaces that enable these systems
to exploit HPVMs as high performance computing elements within the
metacomputing infrastructure.
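The first plan item rests on the handler-carrying datagram model of Fast Messages: each message names a handler, and the receiver invokes pending handlers when it extracts messages, rather than copying data at send time. As a rough illustration of that model only (the names Node, send, and extract are invented for this sketch and are not the FM 2.0 API), the delivery scheme can be modeled as:

```python
from collections import deque

class Node:
    """Toy model of one endpoint in a handler-carrying datagram layer."""
    def __init__(self):
        self.queue = deque()   # pending incoming messages
        self.handlers = {}     # handler id -> callable

    def register(self, hid, fn):
        self.handlers[hid] = fn

def send(dest, hid, payload):
    """Deposit a message on the destination's queue; no receiver action yet."""
    dest.queue.append((hid, payload))

def extract(node):
    """Drain pending messages, invoking the handler each message names."""
    while node.queue:
        hid, payload = node.queue.popleft()
        node.handlers[hid](payload)

# Usage: the receiver registers a handler; the sender names it in the message.
received = []
rx = Node()
rx.register(0, received.append)
send(rx, 0, b"hello")
extract(rx)   # the handler runs here, at extraction time, not at send time
```

Decoupling message arrival from handler execution is what lets such a layer avoid interrupts and extra copies on the critical path.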
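The put/get interfaces in the second plan item expose each node's memory as a segment of a global address space that remote nodes can read and write one-sidedly, without a matching receive on the target. A toy model of that semantics (the names PE, put, and get follow the shmem style but are invented here; this is not the Cray shmem or Global Arrays API):

```python
class PE:
    """Toy processing element owning one segment of a global address space."""
    def __init__(self, nwords):
        self.mem = [0] * nwords

def put(target, offset, values):
    """One-sided write into the target PE's segment; target takes no action."""
    target.mem[offset:offset + len(values)] = values

def get(source, offset, nwords):
    """One-sided read from the source PE's segment."""
    return source.mem[offset:offset + nwords]

# Usage: PE 0 deposits data directly into PE 1's memory, then reads it back.
pes = [PE(8), PE(8)]
put(pes[1], 0, [10, 20, 30])
assert get(pes[1], 0, 3) == [10, 20, 30]
```

Because neither put nor get involves the remote processor, such interfaces suit irregular applications whose communication patterns cannot be paired into send/receive couples in advance.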
Technology Transition
Illinois Fast Messages and MPI for Fast Messages have been successfully transferred to over 150 sites in corporations, national laboratories, research labs, and academia.
Prepared 24 Oct 1996