Princeton University / Intel Research
A Blueprint for Introducing Disruptive Technology into the Internet
A new class of geographically distributed network services is emerging, and the most effective way to design, evaluate, and deploy these services is by using an overlay-based testbed. Unlike conventional network testbeds, however, we advocate an approach that supports both researchers who want to develop new services and clients who want to use them. This dual use, in turn, suggests four design principles that are not widely supported in existing testbeds: services should be able to run continuously and access a slice of the overlay's resources; control over resources should be distributed; overlay management services should be unbundled and run in their own slices; and APIs should be designed to promote application development. This talk describes this high-level vision and reports on the status of, and plans for, realizing it in PlanetLab.
Larry Peterson is a Professor of Computer Science at Princeton University. Prior to joining Princeton, he was on the faculty at the University of Arizona. He is currently on leave from Princeton, working at Intel Research--Berkeley, where he leads the PlanetLab project. His research focuses on systems-related issues in computer networks, and he is a co-author of the textbook "Computer Networks: A Systems Approach". Professor Peterson has served as the Editor-in-Chief of the ACM Transactions on Computer Systems, and on the editorial boards for IEEE/ACM Transactions on Networking, the IEEE Journal on Selected Areas in Communication, and the ACM Transactions on Embedded Systems. He was the program chair of the inaugural HotNets workshop and the 19th SOSP. Peterson received his Ph.D. from Purdue University in 1985.
University of Wisconsin
Taking Stock of Grid Technologies - Accomplishments and Challenges
Miron Livny received a B.Sc. degree in Physics and Mathematics in 1975 from the Hebrew University, and M.Sc. and Ph.D. degrees in Computer Science from the Weizmann Institute of Science in 1978 and 1984, respectively. Since 1983 he has been on the Computer Sciences Department faculty at the University of Wisconsin-Madison, where he is currently a Professor of Computer Sciences and leads the Condor project.
University of Glasgow and University of Edinburgh
Structured Data and the Grid: Access and Integration
Structured digital data are now fundamental to all branches of science and engineering; they play a major role in medical research and diagnosis, and underpin business and governmental decision processes. The advent of ubiquitous Internet computing and the ambitions of today's enterprises have led to widespread collaboration in the creation, curation, publication, management, and exploitation of these structured collections. As the scale of data increases and the sophistication of analyses develops, the combination of large-scale data management and intensive computation using distributed resources becomes a recurrent necessity. This is a natural application for grids. The talk will identify the requirements raised by addressing data-intensive computations over a grid, report on progress towards building services that meet those requirements, and pose the research issues that they provoke.
Dr. Malcolm Atkinson FRSE is the Director of the UK National e-Science Centre and a professor at the Computing Science Department, University of Glasgow, and the School of Informatics at the University of Edinburgh. He leads the OGSA-DAI project and the e-Science Institute programme. His research concerns the integration of programming and databases to deliver large-scale and long-lived systems.
University of California, San Diego
GEON: Cyberinfrastructure for the Geosciences
In this talk, we will provide an overview of the GEON project, a multi-institution collaboration of IT and Earth Science researchers developing cyberinfrastructure for the Geosciences. The geoscientists in the project have identified two integrative science questions, related to terrane recognition and intra-plate deformation, which motivate the development of the cyberinfrastructure. The regions of study are in the mid-Atlantic and the Rocky Mountains. The project is developing tools and technologies related to knowledge representation, data integration, grid computing, and advanced visualization to enable geoscientists to address complex science questions. We will describe how the GEON Grid is being designed and implemented to serve the needs of this community. It is anticipated that multiple additional Geoscience groups will participate in this grid infrastructure. One that has recently received funding is the Chronos project, whose mission is to develop curated databases and associated access and visualization tools in order to produce a dynamic, global timescale for framing Earth history events and processes. Another that is underway is the HydroInformatics project, whose goal is to develop a national-scale Hydrologic Information System for monitoring surface and ground water resources. The presentation will highlight the Grid-enabled Mediation Services (GeMS) being developed at the San Diego Supercomputer Center to facilitate integrated access to replicated, distributed information.
Chaitan Baru is Program Co-Director for Data and Knowledge Systems (DAKS) at the San Diego Supercomputer Center, University of California San Diego. The SDSC DAKS group is involved in R&D activities related to data and knowledge management technologies in support of computational science and the Grid. Baru also leads the Knowledge and Data Engineering Lab of the California Institute for Telecommunications and Information Technology Cal(IT)2, UC San Diego.
Prior to joining SDSC, Baru worked in the Database Group at IBM Toronto and the Database Technology Institute at the IBM Almaden Research Center, where he led one of the groups responsible for the design and development of DB2 Parallel Edition (released December 1995). He also led a performance group, which published the industry’s first TPC-D decision support benchmark, in December 1995. Before joining IBM, Baru was Assistant Professor of CSE at the University of Michigan, Ann Arbor. He is Principal Investigator of GEON—The Geosciences Network—and one of the co-Investigators of the Biomedical Informatics Research Network (BIRN). He also leads other NSF-funded projects at SDSC in information integration and grid benchmarking.
Baru received his B.Tech from the Indian Institute of Technology (IIT), Madras, and M.E. and Ph.D. from the University of Florida, Gainesville.
Argonne National Lab & University of Chicago
Grid Services as Research Enabler
How should the research community respond to the emergence of the Open Grid Services Architecture? Will standardization of Grid interfaces encourage or stifle creativity? I argue that the definition of OGSA presents a major opportunity for the Grid research community, due to its encouragement of the critical mass, impact, large-scale deployment, and synergies required for rapid progress across a range of fronts. I propose some specific steps that the research community can take to encourage, contribute to, and leverage OGSA developments.
Ian Foster is a Senior Scientist and Associate Director of the Mathematics and Computer Science Division at Argonne National Laboratory, Professor of Computer Science at the University of Chicago, and Senior Fellow in the Argonne/University of Chicago Computation Institute. He currently co-leads the Globus Project with Dr. Carl Kesselman of the USC Information Sciences Institute, as well as a number of other major Grid initiatives, including the Earth System Grid (funded by the US Department of Energy) and the GriPhyN and GRIDS Center projects (funded by the National Science Foundation). In 2002, the Globus Toolkit received an R&D 100 Award and was named the most promising technology development of the year by R&D Magazine.
Foster leads computer science projects developing advanced distributed computing technologies and parallel tools, as well as computational science efforts applying advanced computing techniques to scientific problems—in areas such as climate modeling and the analysis of data from physics experiments. He has chaired numerous conferences in the US and abroad dealing with distributed computing, supercomputing, and high-performance computing. In May of 2003, Foster (with his colleague Kesselman) was presented the 2002 Lovelace Medal from the British Computer Society for work on the Globus Project and Grid computing. In June of 2003, Foster will be presented with the University of Chicago's Distinguished Service Award.
Foster has written frequently in popular and academic journals about the Grid distributed computing concept, which enables large-scale aggregation and sharing of computational, data, and other resources across national and institutional boundaries. He has written and co-edited three books on Grid computing, parallel program design, and parallel programming.
University of Southern California
NEESgrid: Earthquake Engineering Meets the Grid
The discipline of Earthquake Engineering focuses on understanding how our physical infrastructure—buildings, bridges, roads, and so on—responds to earthquakes. Traditionally, this understanding has been achieved through experimental observation: either by collecting data from real earthquakes, or by subjecting models to simulated earthquakes using large, expensive test facilities such as shake tables, reaction walls, and centrifuges. To the earthquake engineering community, these experimental facilities play the same role that supercomputers do for the numerical simulation community: they are critical to advancing the discipline, they are expensive, and they are often more effectively used if shared. In addition, numerical simulation is playing an increasingly important role in earthquake engineering; consequently, the computational infrastructure complements the experimental infrastructure to form the core platform on which earthquake engineering advances. The emergence of Grid infrastructure creates an exciting opportunity for the earthquake engineering community. At its most basic, the Grid offers better access to the experimental infrastructure that is so critical to understanding the response of structures. More significantly, however, the Grid offers the possibility of creating entirely new types of experiments: combining experimental facilities with numerical simulation to create a new virtual facility on which even greater understanding can be obtained. Recognizing the impact that this type of across-the-board sharing and collaboration can have on earthquake engineering, the National Science Foundation has created a program called the Network for Earthquake Engineering Simulation (NEES). In my talk, I will describe NEESgrid, a Grid-based infrastructure that is being developed to advance this vision of a new, network-based approach to earthquake engineering experimentation.
I will discuss the NEESgrid architecture and describe how NEESgrid is being used to perform new classes of earthquake engineering experiments.
Dr. Carl Kesselman is the Director of the Center for Grid Technologies at the Information Sciences Institute and a Research Associate Professor of Computer Science at the University of Southern California. He received a Ph.D. in Computer Science from the University of California at Los Angeles, a Master of Science in Electrical Engineering from the University of Southern California, and Bachelor's degrees in Electrical Engineering and Computer Science from the University of Buffalo. Dr. Kesselman’s current research interests span all aspects of Grid computing, including basic infrastructure, security, resource management, high-level services, and Grid applications. Together with Dr. Ian Foster, he co-leads the Globus Project™, one of the leading Grid research projects in the world. An important result of the Globus Project has been the development of the Globus Toolkit™, which has become the de facto standard for Grid computing. Dr. Kesselman has received the 1997 Global Information Infrastructure Next Generation Internet award, the 2002 R&D 100 award, the 2002 R&D Editors' Choice award, and the Ada Lovelace Medal from the British Computer Society.
CERN, European Organization for Nuclear Research
The EU DataGrid: Building and Operating a Large Scale Grid Infrastructure
The EU DataGrid project (EDG) aims to develop a large-scale research testbed for Grid computing. The project is in its final phase, and a large-scale testbed has been up and running continuously since the beginning of 2002. Three application domains are using this testbed to explore the potential Grid computing has for their production environments: Particle Physics, Earth Observation, and Biomedicine. The EDG testbed, spanning some 20 major sites across Europe as well as sites in the US and Asia, offers over 10,000 CPUs and 15 TB of storage to its more than 350 users; it is one of the largest Grid infrastructures in the world.
In this talk we present the architecture of the EDG middleware and critically review its design. Based on the experience our user community has gained during production tests, we identify successes of EDG as well as areas that need further improvement, and discuss which changes will be applied to strengthen the software over the next few months. We end the talk with an outlook on how EDG middleware is expected to be further applied and maintained after the end of the project.
Dr. Erwin Laure received his PhD degree in Business Administration and Computer Science in 2001 from the University of Vienna. After working as a research assistant at the Institute for Software Science of the University of Vienna, he joined the European Organization for Nuclear Research (CERN) in 2002. He has been working in the data management area of the EU DataGrid (EDG) project and is currently the Technical Coordinator of EDG. His research interests include grid computing, with a focus on data management in grid environments, as well as programming environments, languages, compilers, and runtime systems for parallel and distributed computing.