Taking its name from the self-regulating behaviour of the human autonomic nervous system, the field of autonomic computing seeks to automate the assembly, maintenance, and management of software systems.
“Autonomic Computing — Creating Self-Evolving Software Systems” (ACCESS) is the first EPSRC-funded project in the area of autonomic computing. ACCESS seeks to address some of the key research issues of autonomic computing, emphasising development of the knowledge required to create semi-autonomous systems capable of automatically adapting to their respective environments in response to ongoing change.
Autonomic agents will be able to negotiate with their human principals and each other, using evolved agent-human hybrid languages, about all interactions with, and within, the digital domain. The requirements and concerns of each and every one of us will be expressed and drive the assembly, personalisation, and evolution of our respective portals of just-in-time, just-sufficient services.
The presentations from the workshop are available in PDF format, with accompanying presenter photographs.
Tim Millea
Tim Putnam
The Stakeholders' Requirements of ACCESS (Diana Griffiths)
Mike Evans
The Semantic Web and Ontologies for ACCESS (Richard Newman)
Autonomic Computing Self-Configuration: Vision & Challenges (Christine Draper)
Also present: Prof. Rachel Harrison.
After each presentation, the speaker was invited to put forward a research question for consideration by the attendees. These questions, and their possible answers, are summarised below.
Assuming that services can be described in terms of provisions and requirements, in some ontology, then techniques exist to compose services described in these ontologies. Work on Petri nets, graph composition, and Web Services (e.g. OWL-S) may inform approaches.
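Composition over such descriptions can be sketched as a search over service applications, where each service consumes a set of required terms and yields a set of provided terms. The following is a minimal illustration only; the service names and ontology terms are invented for the example and are not drawn from OWL-S or any real registry.

```python
# Hypothetical sketch: composing services by matching provisions to requirements.
# Each service is (requires, provides), both sets of ontology terms.
from collections import deque

def compose(services, have, want):
    """Breadth-first search: find an ordering of services that extends the
    terms in `have` to cover `want`. Returns the plan, or None if impossible."""
    queue = deque([(frozenset(have), [])])
    seen = {frozenset(have)}
    while queue:
        state, plan = queue.popleft()
        if set(want) <= state:
            return plan
        for name, (requires, provides) in services.items():
            if set(requires) <= state:
                nxt = state | set(provides)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None

services = {
    "geocode":  ({"address"}, {"coordinates"}),
    "forecast": ({"coordinates"}, {"weather"}),
}
print(compose(services, {"address"}, {"weather"}))  # ['geocode', 'forecast']
```

A richer treatment would attach Petri-net or graph semantics to the state transitions, but the matching step above is the common core.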
There are two clear hurdles to involving agents in human affairs. The first is initial usage; the second is extending that involvement to trusted usage.
The initial hurdle is convincing users that agents could be of any use at all, and are trustworthy — analogous to people's first steps in online shopping. This first step is aided by incentives (such as a low price), commercial endorsement, and observation of successful early adopters.
Increasing use is a self-referential process. Widespread adoption will provide measures of quality, trust networks, and established markets and standards. As users and companies become used to agents achieving modest goals, they will move towards business-critical systems (initially in the form of advisors/expert systems, and then as fully-autonomous agents).
The language model is necessarily a cache of market terms (and thus a compound ontology), drawn from the markets relevant to the user's requirements. We additionally expect that it would contain mappings between market terms (across different markets and domains), together with translations and profiles relating the user's idiomatic language to the user-independent domain ontologies.
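The layering described above — idiomatic phrase, neutral domain term, market-specific term — can be sketched as a pair of lookups. Every term, market name, and profile entry below is an invented assumption used purely to illustrate the shape of the mappings:

```python
# Hypothetical sketch: a compound ontology as a cache of per-market term maps,
# plus a user profile translating idiomatic phrases into neutral domain terms.
market_terms = {
    "travel":  {"lodging": "hotel_booking"},
    "finance": {"lodging": "accommodation_expense"},
}

user_profile = {"somewhere to stay": "lodging"}  # idiom -> neutral term

def translate(phrase, market):
    """Map a user's idiomatic phrase into the target market's vocabulary,
    falling back to the phrase itself when no mapping is cached."""
    neutral = user_profile.get(phrase, phrase)
    return market_terms.get(market, {}).get(neutral, neutral)

print(translate("somewhere to stay", "travel"))   # hotel_booking
print(translate("somewhere to stay", "finance"))  # accommodation_expense
```

The point of the sketch is that the same user utterance resolves to different market terms depending on context, which is exactly the role the compound ontology plays.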
We first observed that as systems become more complex, they wield more influence over our surroundings. The difference between the terms “nuclear war” and “nuclear power” is small, and yet the meanings are completely different. This underlines the importance of expressing requirements accurately — but how can we make sure they are? We identify two different scenarios. Firstly, a system may be testable in a “test-tube” environment, in which case feedback may be gained by trial and error, executing the system without fear of the consequences. If this cannot take place, it is vital that the system be correct on its first execution. This problem is compounded by the fact that many system stakeholders may not be clear about their own requirements, let alone able to specify them accurately. The issue of trust also plays a part here: if the second party in a contract is well trusted to perform the task correctly, not all requirements may need to be specified. Finally, we identify the increased leverage offered by an autonomic system. Because requirements are expressed at a high level, simple user actions may have greatly magnified consequences.
It is not feasible to attempt to model the nature of all things in order to give the best chance to an agent attempting to infer the meaning of expressed requirements. We can only model a modest subset. The degree to which we do this reflects the level of sophistication achieved by the system, and the maturity of the interaction process between user and system when expressing requirements.
There are clear high-level parallels between IBM's autonomic architecture, where a business process description is provided and the system ensures its maintenance, and ACCESS, where human requirements are dynamically satisfied. We also observed that autonomic systems in the ACCESS sense are able to act as service providers to IBM's customers' autonomic systems — e.g. in a datacentre, where software components vended and assembled in the software market would provide conversion and QoS tools.
The term “agent-oriented” is subjective, in that it depends on the level of detail at which the agent paradigm is applied. An entire system, although it may be designed around the concept of agents, may itself be seen as a single agent acting on behalf of multiple stakeholders. Looking in closer detail, the system may, or may not, be made up of agents.