The following is a seminar report on Grids and Grid technologies for wide-area distributed computing. It explains each of the terms distributed computing, Grid, and Grid technology in detail.
Distributed computing utilizes a network of many computers, each accomplishing a portion of an overall task, to achieve a computational result much more quickly than with a single computer. In addition to a higher level of computing power, distributed computing also allows many users to interact and connect openly. Different forms of distributed computing allow for different levels of openness, and most people accept that a higher degree of openness in a distributed computing system is beneficial.

The segment of the Internet most people are most familiar with, the World Wide Web, is also the most recognizable use of distributed computing in the public arena. Many different computers make everything one does while browsing the Internet possible, with each computer assigned a special role within the system. A home computer, for example, runs the browser and decodes the information being sent, making it accessible to the end user. A server at the Internet service provider acts as a gateway between the home computer and the greater Internet. These servers communicate with the computers that comprise the domain name system to decide which computers to talk to based on the URL the end user enters. In addition, each web page is hosted on yet another computer.

Another type of distributed computing is known as grid computing. Grid computing consists of many computers operating together remotely, often simply using the idle processor power of ordinary computers. The highest-visibility example of this form of distributed computing is the SETI@home project of the Search for Extra-Terrestrial Intelligence (SETI). SETI makes available a free piece of software that a home user may install on a computer; the software runs when the computer is left idle. Many home computers are also examples of distributed computing, albeit less drastic ones.
By using multiple processors in the same machine, a computer can run separate processes and reach a higher level of efficiency than otherwise. Many home computers now take advantage of multiprocessing, as well as a similar practice known as multithreading, to achieve much higher speeds than their single-processor counterparts.
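The parallelism described above can be sketched in Python's standard library; the range split, worker count, and sum-of-squares task below are purely illustrative:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Compute the sum of squares over one slice of the overall range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    # Split one large task into four independent slices, one per worker process.
    slices = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, slices))
    # The combined result matches the single-process computation.
    assert total == sum(i * i for i in range(1_000_000))
```

Each worker runs on a separate processor where available, which is exactly the multiprocessing speed-up the text refers to.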
What is Grid computing?
- The last decade has seen a substantial increase in commodity computer and network performance, mainly as a result of faster hardware and more sophisticated software.
- Nevertheless, there are still problems, in the fields of science, engineering, and business, which cannot be effectively dealt with using the current generation of commodity computers.
- In fact, due to their size and complexity, these problems are often very numerically and/or data intensive and consequently require a variety of heterogeneous resources that are not available on a single machine.
- A number of teams have conducted experimental studies on the cooperative use of geographically distributed resources unified to act as a single powerful computer.
- This new approach is known by several names, such as metacomputing, scalable computing, global computing, Internet computing, and, more recently, peer-to-peer or Grid computing.
- The early efforts in Grid computing started as a project to link supercomputing sites, but have now grown far beyond their original intent.
- In fact, many applications can benefit from the Grid infrastructure, including collaborative engineering, data exploration, high-throughput computing, and of course distributed supercomputing.
- Moreover, due to the rapid growth of the Internet and Web, there has been a rising interest in Web-based distributed computing, and many projects have been started and aim to exploit the Web as an infrastructure for running coarse-grained distributed and parallel applications.
- In this context, the Web has the capability to be a platform for parallel and collaborative work as well as a key technology to create a pervasive and ubiquitous Grid-based infrastructure.
This paper aims to present the state-of-the-art of Grid computing and attempts to survey the major international efforts in developing this emerging technology.
Grids And Grid Technologies
The popularity of the Internet as well as the availability of powerful computers and high-speed network technologies as low-cost commodity components is changing the way we use computers today.
These technology opportunities have led to the possibility of using distributed computers as a single, unified computing resource, leading to what is popularly known as Grid computing.
The term Grid is chosen as an analogy to a power Grid that provides consistent, pervasive, dependable, transparent access to electricity irrespective of its source.
Grids enable the sharing, selection, and aggregation of a wide variety of resources including supercomputers, storage systems, data sources, and specialized devices (see Figure 1) that are geographically distributed and owned by different organizations for solving large-scale computational and data intensive problems in science, engineering, and commerce.
The concept of Grid computing started as a project to link geographically dispersed supercomputers, but now it has grown far beyond its original intent. The Grid infrastructure can benefit many applications, including collaborative engineering, data exploration, high-throughput computing, and distributed supercomputing.
A Grid can be viewed as a seamless, integrated computational and collaborative environment (see Figure 1) and a high-level view of activities within the Grid is shown in Figure 2.
The users interact with the Grid resource broker to solve problems, which in turn performs resource discovery, scheduling, and the processing of application jobs on the distributed Grid resources.
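The broker's two roles, resource discovery and scheduling, can be sketched as follows. The resource attributes and the "most free CPUs" policy are hypothetical simplifications for illustration, not any particular middleware's algorithm:

```python
# Hypothetical resource-broker sketch: discovery filters resources by the
# job's requirements, scheduling picks one and reserves capacity on it.

def discover(resources, min_cpus):
    """Resource discovery: keep resources that satisfy the job's needs."""
    return [r for r in resources if r["free_cpus"] >= min_cpus]

def schedule(job, resources):
    """Scheduling: pick the candidate with the most free CPUs."""
    candidates = discover(resources, job["cpus"])
    if not candidates:
        raise RuntimeError("no suitable resource found")
    best = max(candidates, key=lambda r: r["free_cpus"])
    best["free_cpus"] -= job["cpus"]   # reserve capacity for the job
    return best["name"]

resources = [
    {"name": "cluster-a", "free_cpus": 16},
    {"name": "cluster-b", "free_cpus": 64},
]
print(schedule({"cpus": 32}, resources))   # prints "cluster-b", the only fit
```

A real broker would also handle job staging, monitoring, and result collection, as the surrounding text describes.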
From the end-user point of view, Grids can be used to provide the following types of services:
- Computational services
These are concerned with providing secure services for executing application jobs on distributed computational resources individually or collectively.
Resource brokers provide the services for the collective use of distributed resources. A Grid providing computational services is often called a computational Grid.
Some examples of computational Grids are NASA IPG, the World Wide Grid, and the NSF TeraGrid.
- Data services
These are concerned with providing secure access to distributed datasets and their management.
To provide scalable storage and access to datasets, they may be replicated, catalogued, and even stored at different locations to create an illusion of mass storage.
The processing of datasets is carried out using computational Grid services; such a combination is commonly called a data Grid. Sample applications that need such services for the management, sharing, and processing of large datasets are high-energy physics and accessing distributed chemical databases for drug design.
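A minimal sketch of the replication and cataloguing idea, with hypothetical logical file names, sites, and paths chosen purely for illustration:

```python
# Illustrative replica-catalogue sketch: a logical file name maps to the
# physical copies held at different sites. All names here are made up.
catalogue = {}

def register(lfn, site, path):
    """Record that a physical copy of logical file `lfn` exists at `site`."""
    catalogue.setdefault(lfn, []).append((site, path))

def locate(lfn, preferred_site=None):
    """Return one physical copy, preferring a replica at the caller's site."""
    replicas = catalogue.get(lfn, [])
    for site, path in replicas:
        if site == preferred_site:
            return site, path
    return replicas[0] if replicas else None

register("lhc/run42.dat", "cern", "/data/run42.dat")
register("lhc/run42.dat", "fnal", "/mirror/run42.dat")
assert locate("lhc/run42.dat", preferred_site="fnal") == ("fnal", "/mirror/run42.dat")
```

The catalogue gives users the illusion of a single mass store while the replicas stay geographically distributed.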
- Application services
These are concerned with application management and providing access to remote software and libraries transparently.
The emerging technologies such as Web services are expected to play a leading role in defining application services.
They build on computational and data services provided by the Grid.
An example system that can be used to develop such services is NetSolve.
- Information services
These are concerned with the extraction and presentation of meaningful data by using the services of computational, data, and/or application services.
The low-level details handled at this level include the way that information is represented, stored, accessed, shared, and maintained.
Given its key role in many scientific endeavors, the Web is the obvious point of departure for this level.
- Knowledge services
These are concerned with the way that knowledge is acquired, used, retrieved, published, and maintained to assist users in achieving their particular goals and objectives.
Knowledge is understood as information applied to achieve a goal, solve a problem, or execute a decision.
An example of this is data mining for automatically building new knowledge.
To build a Grid, the development and deployment of a number of services is required. These include security, information, directory, resource allocation, and payment mechanisms in an open environment; and high-level services for application development, execution management, resource aggregation, and scheduling.
Grid applications (typically multidisciplinary and large-scale processing applications) often couple resources that cannot be replicated at a single site, or which may be globally located for other practical reasons.
These are some of the driving forces behind the foundation of global Grids.
In this light, the Grid allows users to solve larger or new problems by pooling together resources that could not be easily coupled before.
Hence, the Grid is not only a computing infrastructure for large applications; it is also a technology that can bond and unify remote and diverse distributed resources ranging from meteorological sensors to data vaults, and from parallel supercomputers to personal digital organizers.
As such, it will provide pervasive services to all users that need them.
General Principles of Grid Construction
This section briefly highlights some of the general principles that underlie the construction of the Grid.
In particular, the idealized design features that are required by a Grid to provide users with a seamless computing environment are discussed. Four main aspects characterize a Grid.
- Multiple administrative domains and autonomy
Grid resources are geographically distributed across multiple administrative domains and owned by different organizations.
The autonomy of resource owners needs to be honored along with their local resource management and usage policies.
- Heterogeneity
A Grid involves a multiplicity of resources that are heterogeneous in nature and encompasses a vast range of technologies.
- Scalability
A Grid might grow from a few integrated resources to millions. This raises the problem of potential performance degradation as the size of a Grid increases. Consequently, applications that require a large number of geographically distributed resources must be designed to be latency and bandwidth tolerant.
- Dynamicity or adaptability
In a Grid, resource failure is the rule rather than the exception.
In fact, with so many resources in a Grid, the probability of some resource failing is high.
Resource managers or applications must tailor their behavior dynamically and use the available resources and services efficiently and effectively.
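This adaptive behavior can be sketched as a retry-with-failover loop. The 50% failure probability below is a simulated stand-in for real resource failure, not a property of any actual system:

```python
import random

# Minimal failover sketch: resource failure is treated as the normal case,
# so the manager retries a task on alternative resources until one succeeds.

def run_on(resource, task):
    """Simulate running a task on an unreliable resource."""
    if random.random() < 0.5:          # simulated failure model
        raise ConnectionError(f"{resource} unavailable")
    return f"{task} done on {resource}"

def run_with_failover(task, resources):
    """Adapt to failures by trying each available resource in turn."""
    for resource in resources:
        try:
            return run_on(resource, task)
        except ConnectionError:
            continue                   # adapt: move on to the next resource
    raise RuntimeError("all resources failed")
```

A production resource manager would add time-outs, checkpointing, and rescheduling, but the dynamic reaction to failure is the same in spirit.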
The steps necessary to realize a Grid include:
- The integration of individual software and hardware components into a combined networked resource (e.g. a single system image cluster);
- The deployment of:
– low-level middleware to provide a secure and transparent access to resources;
– user-level middleware and tools for application development and the aggregation of distributed resources;
- The development and optimization of distributed applications to take advantage of the available resources and infrastructure.
The components that are necessary to form a Grid are as follows.
- Grid fabric:
This consists of all the globally distributed resources that are accessible from anywhere on the Internet.
These resources could be computers (such as PCs or Symmetric Multi-Processors) running a variety of operating systems (such as UNIX or Windows), storage devices, databases, and special scientific instruments such as a radio telescope or particular heat sensor.
- Core Grid middleware:
This offers core services such as remote process management, co-allocation of resources, storage access, information registration and discovery, security, and aspects of Quality of Service (QoS) such as resource reservation and trading.
- User-level Grid middleware:
This includes application development environments, programming tools, and resource brokers for managing resources and scheduling application tasks for execution on global resources.
- Grid applications and portals:
Grid applications are typically developed using Grid-enabled languages and utilities such as HPC++ or MPI.
An example application, such as parameter simulation or a grand-challenge problem, would require computational power, access to remote data sets, and may need to interact with scientific instruments.
Grid portals offer Web-enabled application services, where users can submit and collect results for their jobs on remote resources through the Web.
In attempting to facilitate the collaboration of multiple organizations running diverse autonomous heterogeneous resources, a number of basic principles should be followed so that the Grid environment:
- Does not interfere with the existing site administration or autonomy;
- Does not compromise existing security of users or remote sites;
- Does not need to replace existing operating systems, network protocols, or services;
- Allows remote sites to join or leave the environment whenever they choose;
- Does not mandate the programming paradigms, languages, tools, or libraries that a user wants;
- Provides a reliable and fault tolerant infrastructure with no single point of failure;
- Provides support for heterogeneous components;
- Uses standards, and existing technologies, and is able to interact with legacy applications;
- Provides appropriate synchronization and component program linkage.
As one would expect, a Grid environment must be able to interoperate with a whole spectrum of current and emerging hardware and software technologies.
An obvious analogy is the Web.
Users of the Web do not care if the server they are accessing is on a UNIX or Windows platform.
From the client browser’s point of view, they ‘just’ want their requests to Web services handled quickly and efficiently.
In the same way, a user of a Grid does not want to be bothered with details of its underlying hardware and software infrastructure.
A user is really only interested in submitting their application to the appropriate resources and getting correct results back in a timely fashion.
An ideal Grid environment will therefore provide access to the available resources in a flawless manner such that physical discontinuities, such as the differences between platforms, network protocols, and administrative boundaries become completely transparent.
In effect, the Grid middleware turns a completely heterogeneous environment into a virtually homogeneous one.
Grid Computing Projects
- There are many international Grid projects worldwide, which can be hierarchically categorized as integrated Grid systems, core middleware, user-level middleware, and application-driven efforts.
A description of two community-driven forums promoting wide-area distributed computing technologies, applications, and standards is given below.
- Global Grid Forum (GGF): a community-initiated forum of individual researchers and practitioners working on distributed computing or Grid technologies.
- Peer-to-Peer (P2P) Working Group: organized to facilitate and accelerate the advance of best practices for a P2P computing infrastructure.
3.1. Globus
The Globus project is a U.S. multi-institutional research effort that seeks to enable the construction of computational Grids. Globus provides a software infrastructure that enables applications to handle distributed heterogeneous computing resources as a single virtual machine.
Globus provides basic services and capabilities that are required to construct a computational Grid.
The toolkit consists of a set of components that implement basic services, such as security, resource location, resource management, and communications.
Consequently, rather than providing a uniform programming model, such as the object-oriented model, Globus provides a 'bag of services' which developers of specific tools or applications can use to meet their own particular needs.
Globus is constructed as a layered architecture in which high-level global services are built upon essential low-level core local services.
The Globus toolkit is modular, and an application can exploit Globus features, such as resource management or information infrastructure, without using the Globus communication libraries.
Globus provides application developers with a practical means of implementing a range of services to provide a wide-area application execution environment.
3.2. Legion
Legion provides a software infrastructure so that a system of heterogeneous, geographically distributed, high-performance machines can interact seamlessly.
Legion attempts to provide users, at their workstations, with a single, coherent, virtual machine.
In the Legion system the following apply.
- Everything is an object:
Objects represent all hardware and software components. Each object is an active process that responds to method invocations from other objects within the system.
- Classes manage their instances:
Every Legion object is defined and managed by its own active class object.
Class objects are given system-level capabilities; they can create new instances, schedule them for execution, activate or deactivate an object, as well as provide state information to client objects.
- Users can define their own classes:
As in other object-oriented systems users can override or redefine the functionality of a class. This feature allows functionality to be added or removed to meet a user’s needs.
Legion core objects support the basic services needed by the metasystem.
The Legion system supports the following set of core object types.
- Classes and metaclasses: Classes can be considered managers and policy makers.
Metaclasses are classes of classes.
- Host objects: Host objects are abstractions of processing resources; they may represent a single processor or multiple hosts and processors.
- Vault objects: Vault objects represent persistent storage, but only for the purpose of maintaining the state of an Object Persistent Representation (OPR).
- Implementation objects and caches: Implementation objects hide the storage details of object implementations and can be thought of as equivalent to executable files in UNIX.
Implementation cache objects provide objects with a cache of frequently used data.
- Binding agent: A binding agent maps object IDs to physical addresses. Binding agents can cache bindings and organize themselves into hierarchies and software combining trees.
- Context objects and context spaces: Context objects map context names to Legion object IDs, allowing users to name objects with arbitrary-length string names.
Context spaces consist of directed graphs of context objects that name and organize information.
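As an illustration of the binding-agent idea above, here is a minimal sketch with hypothetical object IDs, addresses, and lookup behavior; it is not Legion's implementation:

```python
# Sketch of a Legion-style binding agent: it maps object IDs to physical
# addresses and caches bindings, falling back to a (simulated)
# authoritative lookup table on a cache miss.

class BindingAgent:
    def __init__(self, authority):
        self.authority = authority     # authoritative id -> address table
        self.cache = {}                # cached bindings

    def bind(self, object_id):
        if object_id in self.cache:    # fast path: reuse a cached binding
            return self.cache[object_id]
        address = self.authority[object_id]
        self.cache[object_id] = address
        return address

agent = BindingAgent({"obj-17": "host-a:4001", "obj-42": "host-b:4002"})
assert agent.bind("obj-42") == "host-b:4002"
assert "obj-42" in agent.cache         # a second lookup would hit the cache
```

In Legion proper, binding agents can also organize themselves into hierarchies and combining trees, which this flat sketch omits.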
Legion objects are independent, active, and capable of communicating with each other via unordered non-blocking calls.
Like other object-oriented systems, the set of methods of an object describes its interface. The Legion interfaces are described in an Interface Definition Language (IDL).
The Legion system uses an object-oriented approach, which potentially makes it ideal for designing and implementing complex distributed computing environments.
However, using an object-oriented methodology does not come without a raft of problems, many of these being tied-up with the need for Legion to interact with legacy applications and services.
3.3. GridSim
GridSim is a toolkit for the modeling and simulation of Grid resources and application scheduling.
It provides a comprehensive facility for the simulation of different classes of heterogeneous resources, users, applications, resource brokers, and schedulers.
It supports primitives for application composition, information services for resource discovery, and interfaces for assigning application tasks to resources and managing their execution.
The GridSim toolkit resource modeling facilities are used to simulate worldwide Grid resources managed under time- or space-shared scheduling policies.
In GridSim, application tasks/jobs are modeled as Gridlet objects that contain all the information related to the job and the execution management details, such as job length in MI (million instructions), disk I/O operations, input and output file sizes, and the job originator.
The broker uses GridSim’s job management protocols and services to map a Gridlet to a resource and manage it throughout its lifecycle.
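The Gridlet description above might be modeled as follows. The field names and the MIPS-based time estimate are illustrative of the concept, not GridSim's actual (Java) API:

```python
from dataclasses import dataclass

# Illustrative Gridlet-style job description, following the attributes
# named in the text: length in MI, I/O file sizes, and the originator.

@dataclass
class Gridlet:
    job_id: int
    length_mi: float        # job length in million instructions (MI)
    input_size: int         # input file size in bytes
    output_size: int        # output file size in bytes
    originator: str         # user or broker that submitted the job

def run_time(gridlet, resource_mips):
    """Estimated execution time on a resource rated in MIPS."""
    return gridlet.length_mi / resource_mips

g = Gridlet(1, length_mi=42_000, input_size=10**6, output_size=10**5,
            originator="user@site-a")
assert run_time(g, resource_mips=1_000) == 42.0   # 42 000 MI at 1 000 MIPS
```

A broker in this model would compare such estimates across resources when mapping Gridlets, as the text describes.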
3.4. Gridbus
The Gridbus (GRID computing and BUSiness) toolkit project is engaged in the design and development of cluster and Grid middleware technologies for service-oriented computing.
It provides end-to-end services to aggregate or lease services of distributed resources depending on their availability, capability, performance, and cost.
The key objective of the Gridbus project is to develop fundamental, next-generation cluster and grid technologies that support utility computing.
The following initiatives are being carried out as part of the Gridbus project.
- At the Grid level, the project extends previous work on Grid economy and scheduling to:
(a) Different application models;
(b) Different economy models;
(c) Data models; and
(d) Architecture models—both grids and P2P networks.
- A GridBank (GB) mechanism supports a secure Grid-wide accounting and payment handling to enable both cooperative and competitive economy models for resource sharing.
- The GridSim simulator is being extended to support simulation of these concepts for performance evaluation.
- GUI tools are being developed to enable distributed processing of legacy applications.
- The technologies are being applied to various application domains (high-energy physics, brain activity analysis, drug discovery, data mining, GridEmail, automated management of e-commerce).
3.5. UNICORE
UNICORE (UNiform Interface to COmputing REsources) provides a uniform interface for job preparation, and seamless and secure access to supercomputer resources.
It hides the system and site-specific idiosyncrasies from the users to ease the development of distributed applications. Distributed applications within UNICORE are defined as multipart applications where the different parts may run on different computer systems asynchronously or they can be sequentially synchronized.
A UNICORE job contains a multipart application augmented by the information about the destination systems, the resource requirements, and the dependencies between the different parts.
From a structural viewpoint a UNICORE job is a recursive object containing job groups and tasks. Job groups themselves consist of other job groups and tasks.
UNICORE jobs and job groups carry the information of the destination system for the included tasks.
A task is the unit that boils down to a batch job for the destination system.
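The recursive job structure just described can be sketched as follows; the class names and the flattening helper are illustrative, not UNICORE's implementation:

```python
# Sketch of a recursive UNICORE-style job: a job group contains tasks and
# other job groups, and each part carries its destination system.

class Task:
    def __init__(self, name, destination):
        self.name, self.destination = name, destination

class JobGroup:
    def __init__(self, destination, parts):
        self.destination = destination
        self.parts = parts             # tasks and/or nested job groups

def flatten(part):
    """Expand a job into the batch tasks destined for each target system."""
    if isinstance(part, Task):
        return [(part.name, part.destination)]
    return [t for p in part.parts for t in flatten(p)]

job = JobGroup("site-a", [Task("preprocess", "site-a"),
                          JobGroup("site-b", [Task("solve", "site-b")])])
assert flatten(job) == [("preprocess", "site-a"), ("solve", "site-b")]
```

Dependencies between parts, which UNICORE also records, are omitted here for brevity.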
The design goals for UNICORE include a uniform and easy to use GUI, an open architecture based on the concept of an abstract job, a consistent security architecture, minimal interference with local administrative procedures, exploitation of existing and emerging technologies, a zero administration user interface through a standard Web browser and Java applets.
UNICORE is designed to support batch jobs; it does not allow for interactive processes.
At the application level asynchronous metacomputing is supported, allowing for independent and dependent parts of a UNICORE job to be executed on a set of distributed systems.
The user is provided with a unique UNICORE user-ID for uniform access to all UNICORE sites.
3.6. Information Power Grid
The NAS Systems Division is leading the effort to build and test NASA’s IPG , a network of high performance computers, data storage devices, scientific instruments, and advanced user interfaces.
The overall mission of the IPG is to provide NASA’s scientific and engineering communities with a substantial increase in their ability to solve problems that depend on the use of large-scale and/or distributed resources.
The project team is focused on creating an infrastructure and services to locate, combine, integrate, and manage resources from across the NASA centers.
An important goal of the IPG is to produce a common view of these resources, while at the same time providing a model for distributed management and local control.
3.7. NetSolve
NetSolve is a client/server application designed to solve computational science problems in a distributed environment.
The NetSolve system is based around loosely coupled distributed systems, connected via a LAN or WAN.
NetSolve clients can be written in C and Fortran, and can use Matlab or the Web to interact with the server.
A NetSolve server can use any scientific package to provide its computational software. Communication within NetSolve is via sockets. Good performance is ensured by a load-balancing policy that enables NetSolve to use the available computational resources as efficiently as possible. NetSolve offers the ability to search for computational resources on a network, choose the best one available, solve a problem (with retry for fault tolerance), and return the answer to the user.
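The server-selection behavior might look like the following sketch; the server table, problem names, and least-loaded policy are hypothetical simplifications, not NetSolve's actual agent logic:

```python
# Illustrative NetSolve-style selection: the client names a problem, and a
# simple load-balancing policy picks the least-loaded server offering it.

servers = [
    {"host": "sol-1", "problems": {"dgesv", "fft"}, "load": 0.7},
    {"host": "sol-2", "problems": {"dgesv"},        "load": 0.2},
    {"host": "sol-3", "problems": {"fft"},          "load": 0.1},
]

def select_server(problem):
    """Pick the best available server for a named computational problem."""
    capable = [s for s in servers if problem in s["problems"]]
    if not capable:
        raise LookupError(f"no server solves {problem!r}")
    return min(capable, key=lambda s: s["load"])["host"]

assert select_server("dgesv") == "sol-2"   # least-loaded capable server
```

On failure, a real client would retry the next-best server, which provides the fault tolerance mentioned above.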
3.8. Ninf
Ninf is a client/server-based system for global computing.
It allows access to multiple remote computational and database servers.
Ninf clients can semitransparently access remote computational resources from languages such as C and Fortran.
Global computing applications can be built easily by using the Ninf remote libraries as it hides the complexities of the underlying system.
3.9. Gateway: desktop access to high-performance computational resources
The Gateway system offers a programming paradigm implemented over a virtual Web of accessible resources.
A Gateway application is based around a computational graph visually edited by end-users, using Java applets.
Modules are written by a module developer, a person who may have only limited knowledge of the system on which the modules will run.
They need not concern themselves with issues such as: allocating resources, how to run the modules on various machines, creating inter-module connections, sending and receiving data between modules, or how to run several modules concurrently on a single machine.
This is handled by WebFlow. The Gateway system hides the configuration, management, and coordination mechanisms from developers, allowing them to concentrate on developing their modules.
The goals of the Gateway system are:
- to provide a problem-oriented interface (a Web portal) to more effectively utilize high performance computing resources from the desktop via a Web browser;
- this ‘point & click’ view hides the underlying complexities and details of the resources, and creates a seamless interface between the user’s problem description on their desktop system and the heterogeneous resources;
- the high-performance computing resources include computational resources such as supercomputers or workstation clusters, storage, such as disks, databases, and backing store, collaborative tools, and visualization servers. Gateway is implemented as a three-tier system, as shown in Figure 3.
Tier 1 is a high-level frontend for visual programming, steering, runtime data analysis and visualization, as well as collaboration.
This tier is built on top of the Web and object-oriented commodity standards.
Tier 2 is middleware and is based on distributed, object-based, scalable, and reusable Web servers and object brokers.
Tier 3 consists of back-end services, such as those shown in Figure 3.
The middle tier of the architecture is based on a network of Gateway servers.
The user accesses the Gateway system via a portal Web page emanating from the secure gatekeeper Web server.
The portal implements the first component of Gateway security: user authentication and the generation of the user credentials that are used to grant access to resources.
The Web server creates a session for each authorized user and gives permission to download the front-end applet that is used to create, restore, run, and control user applications.
The main functionality of the Gateway server is to manage user sessions. A session is established automatically after the authorized user is connected to the system by creating a user context that is basically an object that stores the user applications.
The application consists of one or more Gateway modules. The Gateway modules are CORBA objects conforming to the JavaBeans model. The application’s functionality can be embedded directly into the body of the module or, more typically, the module serves as a proxy for specific back-end services.
The Gateway servers also provide a number of generic services, such as access to databases and remote file systems.
The most prominent service is the job service that provides secure access to high-performance computational servers. This service is accessed through a metacomputing API, such as the Globus toolkit API.
To interoperate with Globus there must be at least one Gateway node capable of executing Globus commands.
To enable this interaction at least one host will need to run a Globus and Gateway server.
This host serves as a 'bridge' between the two domains. Here, Globus is an optional, high-performance (and secure) back-end, while Gateway serves as a high-level, Web-accessible visual interface and a job broker for Globus.
A Grid platform can be used for many different types of applications. Grid-aware applications can be categorized into five main classes:
- Distributed supercomputing (e.g. stellar dynamics);
- High-throughput (e.g. parametric studies);
- On-demand (e.g. smart instruments);
- Data intensive (e.g. data mining);
- Collaborative (e.g. collaborative design).
A new emerging class of application that can benefit from the Grid is:
- Service-oriented computing (e.g. application service providers, and access to remote software and hardware resources driven by users' QoS requirements).
There are several reasons for programming applications on a Grid, for example:
- to exploit the inherent distributed nature of an application;
- to decrease the turnaround/response time of a huge application;
- to allow the execution of an application which is outside the capabilities of a single (sequential or parallel) architecture;
- to exploit the affinity between an application component and Grid resources with a specific functionality.
Although wide-area distributed supercomputing has been a popular application of the Grid, a large number of other applications can benefit from the Grid.
Applications in these categories come from the science, engineering, commerce, and education fields. Existing applications developed for clusters using a standard message-passing interface (e.g. MPI) can run on Grids without change, since MPI implementations for Grid environments are available.
- Many of the applications exploiting computational Grids are embarrassingly parallel in nature.
- The nodes in these Grids work simultaneously on different parts of the problem and pass results to a central system for postprocessing.
- Grid resources can be used to solve grand challenge problems in areas such as biophysics, chemistry, biology, scientific instrumentation, drug design, tomography, high energy physics, data mining, financial analysis, nuclear simulations, material science, chemical engineering, environmental studies, climate modeling, weather prediction, molecular biology, neuroscience/brain activity analysis, structural analysis, mechanical CAD/CAM, and astrophysics.
- In the past, applications were developed as monolithic entities.
- A monolithic application is typically a single executable program that does not rely on outside resources and cannot access or offer services to other applications in a dynamic and cooperative manner.
- The majority of the scientific and engineering (S&E) as well as business-critical applications of today are still monolithic.
- These applications are typically written using just one programming language.
- They are generally computational intensive, batch processed, and their elapsed times are measured in several hours or days.
- Good examples of applications in the S&E area are: Gaussian , PAM-Crash , and Fluent.
Today, the situation is rapidly changing, and a new style of application development based on components has become more popular. With component-based applications, programmers do not start from scratch but build new applications by reusing existing off-the-shelf components and applications. Furthermore, these components may be distributed across a wide-area network. Components are defined by the public interfaces that specify their functions as well as the protocols they may use to communicate with other components. An application in this model becomes a dynamic network of communicating objects. This basic distributed-object design philosophy is having a profound impact on all aspects of information-processing technology. We are already seeing a shift in the software industry towards investment in software components and away from handcrafted, stand-alone applications. In addition, within the industry, a technology war is being waged over the design of the component composition architecture.
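The idea that components are known only through their public interfaces, and that an application is a pipeline of such communicating parts, can be sketched as follows. The component names (`Normalizer`, `Smoother`) and the `compose` helper are invented for illustration; any real component framework would add discovery, remote communication, and lifecycle management on top of this core idea.

```python
from typing import Protocol

class Component(Protocol):
    """A component is known only through its public interface."""
    def process(self, data: list[float]) -> list[float]: ...

class Normalizer:
    """Scales values so the largest magnitude becomes 1."""
    def process(self, data):
        peak = max(abs(x) for x in data) or 1.0
        return [x / peak for x in data]

class Smoother:
    """Replaces each value with the mean of a trailing window."""
    def __init__(self, window=2):
        self.window = window
    def process(self, data):
        out = []
        for i in range(len(data)):
            chunk = data[max(0, i - self.window + 1):i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

def compose(*components: Component):
    """Build an application as a pipeline of communicating components."""
    def app(data):
        for c in components:
            data = c.process(data)
        return data
    return app

app = compose(Normalizer(), Smoother(window=2))
print(app([0.0, 4.0, 2.0]))  # prints [0.0, 0.5, 0.75]
```

Note that `compose` depends only on the `process` interface, never on the concrete classes; any third-party component honoring that interface can be dropped into the pipeline, which is precisely the reuse the component model promises.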
Meanwhile, we are witnessing an impressive transformation of the way that research is conducted. Research is becoming increasingly interdisciplinary; some studies foresee future research being conducted in virtual laboratories, in which scientists and engineers routinely perform their work without regard to their physical location. They will be able to interact with colleagues, access instrumentation, share data and computational resources, and access information in digital libraries. All scientific and technical journals will be available online, allowing readers to download documents and other forms of information and manipulate them to interactively explore the published research. This exciting vision has a direct impact on the next generation of computer applications and on the way they will be designed and developed. The complexity of future applications will grow rapidly, and time-to-market pressure will mean that applications can no longer be built from scratch. Hence, mainly for cost reasons, it is foreseeable that no single company or organization will be able to, for example, create by itself complex and diverse software, or hire and train all the expertise necessary to build an application. This will heighten the movement towards component frameworks, enabling rapid construction from third-party, ready-to-use components.
In general, such applications tend to be multidisciplinary and multimodular, written by several development teams using several programming languages, and built on heterogeneous, multisource data, which can be mobile and interactive. Their execution will take a few minutes or hours. In particular, future S&E applications will be multidisciplinary: composed of several different disciplinary modules coupled into a single modeling system (e.g. fluids and structures in an aeronautics code), or composed of several different levels of analysis combined within a single discipline (e.g. linear, Euler, and Navier–Stokes aerodynamics). Some of these components will be characterized by high-performance requirements. Thus, in order to achieve better performance, the challenge will be to map each component onto the candidate computational resource available on the Grid that has the highest degree of affinity with that software component. There are several examples of such integrated multidisciplinary applications reported in the literature, in science and engineering fields including aeronautics (e.g. simulation of aircraft), geophysics (e.g. environmental and global climate modeling), biological systems, drug design, and plasma physics. In all these areas, there is a strong interest in developing increasingly sophisticated applications that couple ever more advanced simulations of very diverse physical systems. Several other fields where parallel and distributed simulation technologies have been successfully applied are also reported in the literature; in particular, applications in areas such as the design of complex systems, education and training, entertainment, military applications, social and business collaborations, telecommunications, and transportation have been described.
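Mapping each component onto the Grid resource with the highest affinity, as discussed above, is at heart a matchmaking problem. The sketch below uses an invented, deliberately simple scoring rule (a large bonus for a matching capability tag, plus free CPUs as a tie-breaker); real Grid schedulers use far richer resource descriptions and policies.

```python
def best_resource(component, resources):
    """Choose the Grid resource with the highest affinity for a component.
    Toy scoring rule: a large bonus for a matching capability tag,
    plus the number of free CPUs as a tie-breaker."""
    def affinity(resource):
        bonus = 100 if component["needs"] in resource["tags"] else 0
        return bonus + resource["free_cpus"]
    return max(resources, key=affinity)

# Hypothetical resource catalogue and application component (names invented).
resources = [
    {"name": "cluster-a", "tags": {"mpi", "x86"}, "free_cpus": 64},
    {"name": "vector-b",  "tags": {"vector"},     "free_cpus": 8},
    {"name": "gpu-c",     "tags": {"gpu"},        "free_cpus": 16},
]
fluids_module = {"name": "fluids", "needs": "vector"}
print(best_resource(fluids_module, resources)["name"])  # prints vector-b
```

Even in this toy form, the key point survives: the component declares what it needs, the resources advertise what they offer, and the mapper picks the pairing with the strongest affinity rather than simply the biggest machine.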
Conclusions and Future Trends
There are currently a large number of projects and a diverse range of new and emerging Grid developmental approaches being pursued. These systems range from Grid frameworks to application testbeds, and from collaborative environments to batch submission mechanisms.
It is difficult to predict the future in a field such as information technology where the technological advances are moving very rapidly. Hence, it is not an easy task to forecast what will become the ‘dominant’ Grid approach. Windows of opportunity for ideas and products seem to open and close in the ‘blink of an eye’. However, some trends are evident.
One such trend is the growing interest in the use of Java and Web services for network computing. The Java programming language successfully addresses several key issues that accelerate the development of Grid environments, such as heterogeneity and security.
It also removes the need to install programs remotely; the minimum execution environment is a Java-enabled Web browser. Java, with its related technologies and growing repository of tools and utilities, is having a huge impact on the growth and development of Grid environments.
It is also very hard to ignore the presence of the Common Object Request Broker Architecture (CORBA) in the background.
We believe that frameworks incorporating CORBA services will be very influential on the design of future Grid environments.
Two other emerging Java technologies for Grid and P2P computing are Jini and JXTA. The Jini architecture exemplifies a network-centric, service-based approach to computer systems.
Jini replaces the notions of peripherals, devices, and applications with that of network-available services.
Jini helps break down the conventional view of what a computer is, while including new classes of services that work together in a federated architecture.
The ability to move code from the server to its client is the core difference between the Jini environment and other distributed systems, such as CORBA and the Distributed Component Object Model (DCOM).
Whatever technology or computing infrastructure becomes predominant or most popular, it can be guaranteed that at some stage in the future its star will wane. Historically, in the field of computer research and development, this has been observed repeatedly. The lesson to be drawn is that, in the long term, backing only one technology can be an expensive mistake. The framework that provides a Grid environment must therefore be adaptable, malleable, and extensible.
As technology and fashions change, it is crucial that Grid environments evolve with them. We also observe that Grid computing has serious social consequences: it is going to have as revolutionary an effect as the railroads did in the American Midwest in the early 19th century. Instead of a 30–40 year lead time to see its effects, however, its impact is going to be felt much faster.
We conclude by noting that the effects of Grids are going to change the world so quickly that mankind will struggle to react and adapt in the face of the challenges and issues they present. Developments in Grid computing are accelerating with the advent of these new and emerging technologies, and the analogies with the generation and delivery of electricity are hard to ignore; the implications are enormous. The Grid is analogous to the electricity (power) grid, and the vision is that, at some stage in the future, our computing needs will be satisfied in the same pervasive and ubiquitous manner as electric power, with (almost) dependable, consistent, pervasive, and inexpensive access to resources irrespective of where they are physically located and where they are accessed from.