Autonomic Computing

CONTENTS

ABSTRACT

INTRODUCTION

WHAT IS AUTONOMIC COMPUTING?

KEY ELEMENTS OF AUTONOMIC COMPUTING

FUNDAMENTALS OF AUTONOMIC COMPUTING

AUTONOMIC COMPUTING AND CURRENT COMPUTING SYSTEM

AUTONOMIC COMPUTING ARCHITECTURE

NEED FOR AUTONOMIC COMPUTING

BENEFITS

CHALLENGES

CONCLUSIONS

ABSTRACT

Imagine a world where computers fix their own problems before you even know something is wrong. IBM is building that world with a range of autonomic computing capabilities across its product lines, helping organizations control an increasingly complex and expensive IT environment. The goal of autonomic computing is to create systems that run themselves, capable of high-level functioning while keeping the system's complexity invisible to the user.

As computing systems evolve, the number of degrees of freedom that must be managed to keep them efficient grows continuously. An autonomic computing system would control the functioning of computer applications and systems without input from the user, in the same way that the autonomic nervous system regulates body systems without conscious input from the individual.

Autonomic computing aims to develop computer systems capable of self-management, to overcome the rapidly growing complexity of managing computing systems, and to reduce the barrier that complexity poses to further growth. In other words, autonomic computing refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding intrinsic complexity from operators and users. An autonomic system makes decisions on its own, using high-level policies; it constantly checks and optimizes its status and automatically adapts itself to changing conditions.

In this report, I present an overview of autonomic computing and its fundamental features. The aim of this seminar report is to explain the basic concepts of autonomic computing.

Submitted By

Saurabh S. Gilalkar

IIIrd year, Computer Engineering.

Government Polytechnic, Amravati.

INTRODUCTION

"Civilization advances by extending the number of important operations which we can perform without thinking about them." - Alfred North Whitehead

This quote from the preeminent mathematician Alfred North Whitehead holds both the lock and the key to the next era of computing. It implies a threshold moment, surpassed only after humans have been able to automate increasingly complex tasks in order to achieve forward momentum.

There is every reason to believe that we are at just such a threshold right now in computing. The millions of businesses, the billions of humans that compose them, and the trillions of devices that they will depend upon all require the services of the I/T industry to keep them running. And it's not just a matter of numbers. It's the complexity of these systems and the way they work together that is creating a shortage of skilled I/T workers to manage all of the systems. The high-tech industry has spent decades creating computer systems with ever-mounting degrees of complexity to solve a wide variety of business problems. Ironically, complexity itself has become part of the problem. It's a problem that's not going away; it will grow exponentially, just as our dependence on technology has.

The solution may lie in automation, or creating a new capacity where important computing operations can run without the need for human intervention. On October 15, 2001, Paul Horn, senior vice president of IBM Research, addressed the Agenda conference, an annual meeting of preeminent technological minds held in Arizona. In his speech, and in a document he distributed there, he suggested a solution: build computer systems that regulate themselves much in the same way our nervous system regulates and protects our bodies.

This new model of computing is called autonomic computing. The good news is that some components of this technology are already up and running. However, complete autonomic systems do not yet exist. This is not a proprietary solution. It's a radical change in the way businesses, academia, and even the government design, develop, manage and maintain computer systems. Autonomic computing calls for a whole new area of study and a whole new way of conducting business.

WHAT IS AUTONOMIC COMPUTING?

“Autonomic computing” is a new vision of computing initiated by IBM: the ability of systems to be more self-managing. Autonomic computing is the next generation of integrated computer technology that will allow networks to manage themselves with little or no human intervention. By choosing the word “autonomic,” we are making an analogy with the autonomic nervous system, which controls many organs and muscles in the human body, sending the impulses that regulate heart rate, breathing, and other functions without conscious thought or effort. The autonomic nervous system frees our conscious brain from the burden of having to deal with vital but lower-level functions. In the same way, autonomic computing will free system administrators from many of today's routine management and operational tasks.

Autonomic computing is the result of the realization that unless we begin to build computing systems that reduce the complexity for those who use and manage them, we will not have the time or the expertise to unravel problems arising in newer systems.

Autonomic computing is about freeing IT professionals to focus on high-value tasks by making technology work smarter. This means letting computing systems and infrastructure take care of managing themselves. Ultimately, it means writing business policies and goals and letting the infrastructure configure, heal, and optimize itself according to those policies while protecting itself from malicious activities. Self-managing computing systems have the ability to manage themselves and dynamically adapt to change in accordance with business policies and objectives.

KEY ELEMENTS OF AUTONOMIC COMPUTING

The elements of autonomic computing can be summarized in eight key points, as follows:

1. Knows Itself

An autonomic computing system must be capable of taking continual stock of itself, its connections, devices and resources, and know which are to be shared or protected.

2. Configures Itself

An autonomic computing system must be able to configure and reconfigure itself dynamically as needs dictate.

3. Optimizes Itself

An autonomic computing system must constantly search for ways to optimize performance.

4. Heals Itself

An autonomic computing system must perform self-healing by redistributing resources and reconfiguring itself to work around any dysfunctional elements.

5. Protects Itself

An autonomic computing system must be able to monitor security and protect itself from attack.

6. Adapts Itself

An autonomic computing system must be able to recognize and adapt to the needs of coexisting systems within its environment.

7. Opens Itself

An autonomic computing system must work with shared, open technologies; proprietary solutions are not compatible with the autonomic computing ideology.

8. Hides Itself

An autonomic computing system will anticipate the optimized resources needed while keeping its complexity hidden.

FUNDAMENTALS OF AUTONOMIC COMPUTING

In order to incorporate these “self-managing” characteristics, future autonomic computing systems will have four fundamental features.

Fig.1: Autonomic Computing Attributes

Self-Configuring

Systems adapt automatically to dynamically changing environments. When hardware and software systems have the ability to define themselves “on the fly,” they are self-configuring. This aspect of self-management means that new features, software, and servers can be dynamically added to the enterprise infrastructure with no disruption of services. Systems must be designed to provide this aspect at a feature level, with capabilities such as plug-and-play devices, configuration setup wizards, and wireless server management. These features allow functions to be added dynamically to the enterprise infrastructure with minimum human intervention. Self-configuring includes not only the ability of each individual system to configure itself on the fly, but also the ability of systems within the enterprise to configure themselves into the e-business infrastructure of the enterprise.
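
As an illustration only, the following minimal Python sketch shows new components announcing themselves to a registry so the rest of the system can pick up the change without a restart. The names used here (ComponentRegistry, register, active_components) are hypothetical and not part of any existing product.

    # Minimal sketch of self-configuration: components announce themselves to a
    # registry, and the rest of the system adapts without manual reconfiguration.
    class ComponentRegistry:
        def __init__(self):
            self._components = {}

        def register(self, name, endpoint):
            # A newly added server or service announces itself "on the fly".
            self._components[name] = endpoint
            print(f"Configured new component: {name} at {endpoint}")

        def active_components(self):
            # Other parts of the system discover the current configuration here
            # instead of relying on hand-edited configuration files.
            return dict(self._components)

    registry = ComponentRegistry()
    registry.register("web-server-2", "10.0.0.12:8080")   # added with no downtime
    print(registry.active_components())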

Self-Healing

Systems discover, diagnose, and react to disruptions. For a system to be self-healing, it must be able to recover from a failed component by first detecting and isolating that component, taking it off line, fixing or replacing it, and reintroducing the fixed or replacement component into service without any apparent application disruption. Systems will also need to predict problems and take action to prevent failures from having an impact on applications. The self-healing objective must be to minimize all outages in order to keep enterprise applications up and available at all times.
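
The cycle just described (detect, isolate, repair, reintroduce) can be sketched in a few lines of Python. This is a toy illustration under assumed names (Component, heal), not a description of any real product.

    # Minimal sketch of the self-healing cycle: detect a failed component,
    # take it off line, repair or replace it, and return it to service.
    class Component:
        def __init__(self, name):
            self.name = name
            self.healthy = True

    def heal(components):
        for c in components:
            if not c.healthy:                      # 1. detect the failure
                print(f"Isolating {c.name}")       # 2. take it off line
                c.healthy = True                   # 3. repair or replace (stubbed out)
                print(f"Reintroducing {c.name}")   # 4. return it to service

    pool = [Component("db-node-1"), Component("db-node-2")]
    pool[1].healthy = False   # simulate a fault
    heal(pool)                # meanwhile, requests keep flowing to db-node-1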

Self-Optimizing

Systems monitor and tune resources automatically. Self-optimization requires hardware and software systems to maximize resource utilization efficiently to meet end-user needs without human intervention. Some systems already include industry-leading technologies such as logical partitioning, dynamic workload management, and dynamic server clustering. These kinds of capabilities should be extended across multiple heterogeneous systems to provide a single collection of computing resources that can be managed by a “logical” workload manager across the enterprise. Resource allocation and workload management must allow dynamic redistribution of workloads to systems that have the necessary resources to meet workload requirements. Similarly, storage, databases, networks, and other resources must be continually tuned to enable efficient operation even in unpredictable environments.
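
A self-optimizing behavior can be as simple as the following Python sketch, which tunes one resource parameter (a worker count) toward a target utilization without human intervention; the numbers and names are assumptions made for illustration.

    # Minimal sketch of self-optimization: a feedback rule that resizes a worker
    # pool in proportion to how far measured utilization is from a target.
    def tune_workers(current_workers, measured_utilization, target=0.7):
        return max(1, round(current_workers * measured_utilization / target))

    workers = 4
    for utilization in (0.9, 0.95, 0.6, 0.3):      # simulated load samples
        workers = tune_workers(workers, utilization)
        print(f"utilization={utilization:.2f} -> workers={workers}")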

Self-Protecting

Systems anticipate, detect, identify, and protect themselves from attacks from anywhere. Self-protecting systems must have the ability to define and manage user access to all computing resources within the enterprise, to protect against unauthorized resource access, to detect intrusions and report and prevent these activities as they occur, and to provide backup and recovery capabilities that are as secure as the original resource-management systems.
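
The following minimal Python sketch illustrates one narrow slice of self-protection: detecting a suspicious pattern of failed logins and blocking the account involved. The threshold and names are assumptions for illustration, not a recommended security design.

    # Minimal sketch of self-protection: detect repeated failed logins and
    # block the account automatically, reporting the suspected intrusion.
    from collections import Counter

    FAILED_LOGIN_LIMIT = 5
    failed_logins = Counter()
    blocked = set()

    def record_login(user, success):
        if success:
            failed_logins[user] = 0
            return
        failed_logins[user] += 1
        if failed_logins[user] >= FAILED_LOGIN_LIMIT and user not in blocked:
            blocked.add(user)   # take the resource offline, as described above
            print(f"Blocking {user}: possible intrusion attempt")

    for _ in range(6):
        record_login("guest", success=False)   # repeated failures trigger a block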

AUTONOMIC COMPUTING AND CURRENT COMPUTING SYSTEM

IBM frequently cites four aspects of self-management, which the following table summarizes. Early autonomic systems may treat these aspects as distinct, with different product teams creating solutions that address each one separately. Ultimately, these aspects will be emergent properties of a general architecture, and the distinctions will blur into a more general notion of self-maintenance. The four aspects of self-management (self-configuring, self-healing, self-optimizing, and self-protecting) are compared below.

Self-configuration
Current computing: Corporate data centers have multiple vendors and platforms. Installing, configuring, and integrating systems is time consuming and error prone.
Autonomic computing: Automated configuration of components and systems follows high-level policies; the rest of the system adjusts automatically and seamlessly.

Self-optimization
Current computing: Systems have hundreds of manually set, nonlinear tuning parameters, and their number increases with each release.
Autonomic computing: Components and systems continually seek opportunities to improve their own performance and efficiency.

Self-healing
Current computing: Problem determination in large, complex systems can take a team of programmers weeks.
Autonomic computing: The system automatically detects, diagnoses, and repairs localized software and hardware problems.

Self-protection
Current computing: Detection of and recovery from attacks and cascading failures is manual.
Autonomic computing: The system automatically defends against malicious attacks or cascading failures, using early warning to anticipate and prevent system-wide failures.

AUTONOMIC COMPUTING ARCHITECTURE

The autonomic computing architecture concepts provide a mechanism for discussing, comparing, and contrasting the approaches different vendors use to deliver self-managing attributes in an autonomic computing system. The autonomic computing architecture starts from the premise that implementing self-managing attributes involves an intelligent control loop. This loop collects information from the system, makes decisions, and then adjusts the system as necessary. An intelligent control loop can enable the system to do such things as:

·  Self-configure, by installing software when it detects that software is missing

·  Self-heal, by restarting a failed element

·  Self-optimize, by adjusting the current workload when it observes an increase in capacity

·  Self-protect, by taking resources offline if it detects an intrusion attempt.
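
The intelligent control loop described above can be sketched in a few lines of Python. This is only an illustration of the collect, decide, and adjust cycle; the monitor, analyze, and execute functions are stand-ins for real instrumentation and actuation.

    # Minimal sketch of an intelligent control loop: monitor the system,
    # decide whether anything needs to change, then adjust the system.
    import random
    import time

    def monitor():
        # Collect information from the system (here: a simulated CPU reading).
        return {"cpu": random.uniform(0.0, 1.0)}

    def analyze(metrics):
        # Decide whether an adjustment is needed.
        return "scale_up" if metrics["cpu"] > 0.8 else None

    def execute(action):
        # Adjust the system as necessary.
        if action == "scale_up":
            print("Adding capacity to absorb the load")

    for _ in range(3):            # a real loop would run continuously
        plan = analyze(monitor())
        if plan:
            execute(plan)
        time.sleep(0.1)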

A control loop can be provided by a resource provider, which embeds the loop in the runtime environment for a particular resource. In this case, the control loop is configured through the manageability interface provided for that resource (for example, a hard drive). In some cases, the control loop may be hard-wired or hard-coded, so it is not visible through the manageability interface.

Autonomic systems will be interactive collections of autonomic elements: individual system constituents that contain resources and deliver services to humans and other autonomic elements. An autonomic element will typically consist of one or more managed elements coupled with a single autonomic manager that controls and represents them. At the core of an autonomic element is a control loop that integrates the manager with the managed element.

In an autonomic environment, autonomic elements work together, communicating with each other and with high-level management tools. They regulate themselves and, sometimes, each other. They can proactively manage the system while hiding the inherent complexity of these activities from end users and IT professionals. Another aspect of the autonomic computing architecture is shown in the diagram below. This portion of the architecture details the functions that can be provided for the control loops. The architecture organizes each control loop into two major elements: a managed element and an autonomic manager. A managed element is what the autonomic manager controls; an autonomic manager is the component that implements the control loop.

Fig.2: Autonomic Computing Architecture

Managed Element:

The managed element is a controlled system component. The managed element is essentially equivalent to what is found in ordinary non-autonomic systems, although it can be adapted to enable the autonomic manager to monitor and control it. The managed element could be a hardware resource, such as storage, a CPU, or a printer, or a software resource, such as a database, a directory service, or a large legacy system. At the highest level, the managed element could be an e-utility, an application service, or even an individual business. The managed element is controlled through its sensors and effectors.

The sensors provide mechanisms to collect information about the state and state transition of an element.

The effectors are mechanisms that change the state (configuration) of an element.

The combination of sensors and effectors forms the manageability interface that is available to an autonomic manager. As shown in the figure above by the black lines connecting the elements on the sensor and effector sides of the diagram, the architecture encourages the idea that sensors and effectors are linked together. For example, a configuration change that occurs through the effectors should be reflected as a configuration-change notification through the sensor interface.
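
To make the sensor and effector idea concrete, here is a minimal Python sketch of a managed element exposing sensors and effectors, with an autonomic manager closing the control loop through that manageability interface. The class names and the cache-sizing policy are assumptions made only for illustration.

    # Minimal sketch: sensors report the element's state, effectors change it,
    # and the autonomic manager senses, decides, and acts through them.
    class ManagedElement:
        def __init__(self):
            self._config = {"cache_mb": 128}

        # Sensor: collect information about the element's state.
        def read_state(self):
            return dict(self._config)

        # Effector: change the state (configuration) of the element.
        def apply_change(self, key, value):
            self._config[key] = value
            # A change made through an effector should also surface through the
            # sensors, reflecting the linked sensor/effector idea above.

    class AutonomicManager:
        def __init__(self, element):
            self.element = element

        def control_step(self, observed_cache_misses):
            state = self.element.read_state()                 # sense
            if observed_cache_misses > 100:                   # analyze and plan
                self.element.apply_change("cache_mb", state["cache_mb"] * 2)  # act

    element = ManagedElement()
    AutonomicManager(element).control_step(observed_cache_misses=250)
    print(element.read_state())   # the cache size has been doubled to 256 MB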