The Challenge of Self-adaptive Systems for E-commerce
HANS WEIGAND
INFOLAB, Tilburg University, PO Box 90153, Tilburg, 5000 LE, The Netherlands
(E-mail: [anonymized])
WILLEM-JAN VAN DEN HEUVEL
INFOLAB, Tilburg University, PO Box 90153, Tilburg, 5000 LE, The Netherlands
(E-mail: [anonymized])
Abstract
For future E-commerce systems that are engaged in many dynamic trading relationships, the ability to adapt themselves smoothly will increasingly become a critical property. In this paper, we first define the basic semantic structure of a collaborative process. Then we introduce a formal framework for self-adaptive systems. We argue that self-adaptive systems should specify goals explicitly, and propose a goal-based architecture. We further argue that for systems that operate in a shared environment with other systems, self-adaptation should be extended with co-adaptation. We define four levels of co-adaptation, and present an argumentation mechanism that can be used to enable co-adaptation at the higher levels.
Key words: self-adaptive systems, co-adaptation
1. Introduction
Enterprises are teaming up to better cope with ever-increasing customer requirements and to become more competitive, defining and executing collaborative processes with potentially unknown trading partners on the fly (Smith and Fingar 2003). Unfortunately, these collaborative processes, also called c-processes, are not stable, but tend to be in flux all of the time, e.g., to conform to evolving industry standards, accommodate new partners, or to be aligned with changed or new legal rules. This implies that collaborations need to be adapted continually. Given the current portfolio of enterprise applications, which is characterized by fragmentation of ''frozen'' business processes over multiple (stovepipe) applications, redundancy of applications and database systems, and many complex interfaces to enhance interoperation, this is increasingly becoming a Herculean, but nevertheless business-critical, task. Drawing on a recent study by IBM, Salehie and Tahvildari state that 40% of all investments in IT are used just trying to get technologies to work together (Salehie and Tahvildari 2005).
According to current estimates, maintenance already makes up about 70–80% of all costs during an enterprise application's lifecycle (Pfleeger and Atlee 2006). In order to cut down the costs of rather tedious and labor-intensive routine maintenance tasks, and to establish more effective mechanisms to deal with change, autonomic computing is touted in industry as an effective solution (Kephart and Chess 2003). In a nutshell, autonomic computing provides a contemporary paradigm for managing computing resources, eliminating the need for human interference (Ganek and Corbi 2003). This is to be achieved by giving enterprise systems not only full awareness of their internal model, but also the ability to adapt themselves, particularly on non-functional properties such as performance, fault-tolerance and security. However, it is fair to say that autonomic computing is mainly a vision, and there is not much of an actual solution to be shown yet.
The main objective of this paper is to develop and explore a framework for self-adaptive systems to leverage interoperability with other systems. It also introduces some solution components. In particular, we will argue that in the context of collaborative processes, self-adaptation cannot be implemented effectively without some level of co-adaptation, which in turn requires a generic argumentation system to be in place. The practical relevance of this paper is that it provides architectural patterns for self-adaptation and some solution ingredients. The theoretical relevance is that it clarifies the concept of self-adaptation and what is needed to achieve self-adaptation, particularly in an environment of mutually dependent interacting systems.
The structure of this paper is as follows. In Section 2, we introduce a basic logical framework for collaborative processes. In the next three sections, we build up a self-adaptation framework in three steps: first, in Section 3, we explore adaptability; next, in Section 4, we present a framework for self-adaptation; and we extend this with a framework for co-adaptation in Section 5. In Section 6, we work out one solution component for co-adaptive systems in the form of an argumentation system. Section 7 concludes with a summary of results and an overview of future research.
2. A Logical Framework for Collaborative Processes
In this section, we present a basic semantic framework for collaborative processes that provides the context in which we want to explore self-adaptation and co-adaptation.
In Weigand, Verharen and Dignum (1997), a formal language called L_ill is described with which an integrated semantics for information and communication systems can be expressed. It is an extension of Dynamic Deontic Logic, and the semantics of speech acts is described using preconditions and postconditions. For example, the postcondition of an authorized request is that the Hearer is obliged to perform the requested action. Pre- and postconditions have also been used in agent communication languages such as KQML and FIPA-ACL (Chaib-draa and Dignum 2002). For example, the precondition of KQML's tell message states that the sender believes what it tells and that it knows that the receiver wants to know that the sender believes it. The postcondition of sending the tell message is that the receiver can conclude that the sender believes the content of the message. In a similar vein, FIPA-ACL uses feasibility preconditions and rational effects. There have been critical discussions about this approach (Chaib-draa and Dignum 2002). Some have argued that the semantics should not be based on mental states, but on social commitments (Singh 2000). Others have proposed to ground the semantics in the notion of sign conventions (Jones and Parent 2003). The semantics that we propose here is in line with these latter two approaches in the sense that we agree that the effect on the social world should be at the core. This is also in accordance with Habermas's theory of communicative action (Habermas 1984). Where our approach differs from the latter two is in the Habermasian assumption that intersubjective truth (common ground) is established by a joint act of speaker and hearer (cf. Clark 1996).
The semantics that we propose in this paper is built on this assumption. The theory of communicative action is based on the notion of a validity claim. Speakers make claims, and when these claims are accepted or conceded, they turn into common ground. In this way, coordination can be achieved. The general scheme is as follows.
Definition
Inference scheme for communication semantics.
For all φ being a well-formed formula, and i, j being conversational roles:

    [claim(i, j, φ); accept(j, i, φ)] agreed_{i,j}(φ)

end
In words: when φ has been claimed and has been accepted, then it is agreed. For example, when some customer claims that the seller should take care of the insurance of the transport, and the seller accepts this claim, (only) then there is an agreed upon obligation. Evidently, there must be a ground for the customer claim, so in an abstract sense, as a default, the obligation can be said to exist before the agreement, but there may always be specific circumstances that invalidate the default obligation. The effect of the communicative action is that the obligation is materialized.
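To make the scheme concrete, the following minimal Python sketch (our own illustration; the class and predicate names are hypothetical, not part of L_ill) models how a claim followed by a matching accept turns a formula into part of the agreed set:

```python
# Minimal sketch of the claim/accept inference scheme; all names are
# illustrative, not part of L_ill.

class Conversation:
    def __init__(self):
        self.pending = set()   # claims made but not yet accepted
        self.agreed = set()    # the Agreed: common ground at the instance level

    def claim(self, i, j, phi):
        """Role i claims phi towards role j."""
        self.pending.add((i, j, phi))

    def accept(self, j, i, phi):
        """Role j accepts i's earlier claim: agreed_{i,j}(phi) now holds."""
        self.pending.remove((i, j, phi))   # raises KeyError if never claimed
        self.agreed.add(phi)

conv = Conversation()
conv.claim("customer", "seller", "obliged(seller, insure_transport)")
conv.accept("seller", "customer", "obliged(seller, insure_transport)")
assert "obliged(seller, insure_transport)" in conv.agreed
```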
The modality agreed is rather weak from a logical perspective. It does adhere to conjunction distribution:

    agreed(φ ∧ ψ) → agreed(φ) ∧ agreed(ψ)

but (just as the knowledge and belief modalities in modern epistemic logic) it is not necessarily closed under implication; the following schema is not valid in general:

    agreed(φ → ψ) ∧ agreed(φ) → agreed(ψ)
In terms of Habermas, the agreed is also described as the situation definition. In this paper, we will use the term shared model for what is agreed upon at the instance level, and common ground for the agreed upon rules and specifications (Stamper 2000).
The fact that φ has the status ''agreed'' does not say anything about internal beliefs. Depending on whether the hearer is convinced of the sincerity and trustworthiness of the speaker, he will infer (or not) that φ is believed by the speaker and believe it himself. This inference may be important, but it is not critical for the conversation process itself, because what counts for the communication partners is what is agreed upon.
Using the basic communication semantics above, we are able to describe the effects of conversational acts once we know which claims are made with a certain conversational act. Among the many claims that could be made by a speaker when performing a conversational act, we distinguish the following essential categories:

– claims about the ''agreed'' – such as presuppositions. The object of the claim can be of many kinds, including the categories given below. To distinguish these claims from the new claims, we use the modality agreed.
– references – when performing conversational acts, the speaker has to refer to certain objects: a person, a product item, a delivery, etc. By referring to an object, the speaker claims that this object exists, and by accepting this claim, the object becomes part of the shared model. The next time the speaker refers to the same object, this reference is an agreed reference and the claim falls in the first category. Under ''references'' we also categorize identity claims about objects of the form x = y.
– authorization claims – when performing a conversational act, the speaker claims that he is authorized to perform the act.
– claims about actor obligations – what an actor or set of actors should do. Obligations can be in one of the following states: created, cancelled, violated, fulfilled.
– claims about actions to be performed – the things to be achieved in the business process: desired, intended, started, finished, approved. These phases correspond roughly to the well-known action cycle of Norman (Norman 1990).

This categorization is not meant to be exhaustive, but it covers the most important cases in our context. For each of the claim types, there is also a corresponding accept action.
Example
We consider a purchase event. When the Buyer X.com sends the purchase order to the Seller Y.com, this is a request for a delivery d containing the following claims:

– references:
  • object(Buyer, subject), object(Seller, subject),
  • object(X.com, subject), object(Y.com, subject),
  • object(d, delivery),
  • Buyer = X.com, Seller = Y.com
– authorized(Buyer, PurchaseOrder) – Buyer is permitted to make this request.
– created(obligation(Seller, d)) – Seller is obliged to bring about d, that is, to deliver the requested goods.
– intended(d) – the action state of the event d is ''intended'' (''to be done'').
The conversation starts with the action status of d being already ''desired''. This may have been established earlier in the conversation; otherwise this claim functions as a presupposition. The Buyer claims that the delivery is to be performed now (intended), and claims that the Seller (with executor role) is obliged to perform it. Both claims can be challenged by the Seller, but if they are accepted, they lead to an obligation for the executor and a state change of the action itself (from intended to started).
The obligation claims and action state claims are closely related, because it would be odd when an action is considered to be intended but no one is responsible for the execution, or vice versa. However, these odd situations can happen in complex settings, for example, when the Seller withdraws after having committed because of force majeure and his obligation is cancelled. By separating the two claims, it also becomes possible to accommodate the situation that the hearer accepts one claim but challenges the other.
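One way to picture this separation is as two loosely coupled state machines, which is what makes the odd situations above representable. The sketch below is our own illustration, with the state names taken from the claim categories of this section:

```python
# Sketch: obligation states and action states as separate state machines.
OBLIGATION_STATES = {"created", "cancelled", "violated", "fulfilled"}
ACTION_STATES = ["desired", "intended", "started", "finished", "approved"]

class Obligation:
    def __init__(self, debtor, action):
        self.debtor, self.action, self.state = debtor, action, "created"

    def cancel(self):          # e.g., withdrawal because of force majeure
        self.state = "cancelled"

class Action:
    def __init__(self, name):
        self.name, self.state = name, "desired"

    def advance(self):         # one step along Norman's action cycle
        i = ACTION_STATES.index(self.state)
        if i < len(ACTION_STATES) - 1:
            self.state = ACTION_STATES[i + 1]

# Accepting both claims from the purchase-order example:
d = Action("delivery d"); d.advance()          # desired -> intended
o = Obligation("Seller", d)
# The "odd" situation: the obligation is cancelled while d stays intended.
o.cancel()
print(d.state, o.state)                        # intended cancelled
```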
3. A Framework for Adaptability
A business collaboration is typically defined in the form of a protocol, message and data type definitions, domain ontologies, and collaboration profiles (ebXML) or a contract. Collectively these definitions establish a shared syntax for the message exchange and a shared semantics. The definitions can cover functional and non-functional aspects and establish the common ground for the interaction to be successful. In current E-commerce systems, the definitions are often specified in some proprietary XML-based language that is relatively close to the actual implementation and notably hard to reconfigure and change, but there is a tendency to conceptualize definitions using formal languages that allow logical inferences, and do not assume in-depth knowledge about the implementation-level communication and transaction protocols that are used, nor the component model that is chosen (e.g., .NET or J2EE). Examples of higher-level specifications of (parts of) business collaborations include FMEC (Kimbrough and Moore 1997), the layered approach of (Weigand and Van den Heuvel 1999) and the contract language BCL (Linington et al. 2004). The advantage of such a high-level specification is similar to the advantage of the ''architecture model-based'' approach advocated in the self-adaptive system literature (Garlan and Schmerl 2002; Valetto and Kaiser 2002) and the MDA approach (Kleppe, Warmer and Bast 2003) in the field of interoperable systems.
Even when such languages are used, however, adaptation of definitions remains a delicate process. A fundamental problem, in our view, is that of effect indeterminacy. Given a desired change, it is not easy to find out which part of the specification should be adapted, and given a certain change in the specification, it is not easy to know the effect that the change has on the rest of the specification and the implementation. We do not pretend to have an overall solution for the effect indeterminacy problem (which can be seen as a variant of the wicked ''frame problem'' in AI), but suggest that a goal-based architecture provides part of a solution to this problem, as such an architecture tells you the purpose of a certain component and what the possible alternatives are. It means that the specification is organized using a goal/means structure that has been explored extensively in so-called goal-oriented requirements engineering (cf. Lamsweerde 2001; Mylopoulos, Chung and Yu 1999). One may distinguish between hard goals and soft goals. A hard goal includes a clear specification of when it is actually reached, while a soft goal does not have such clear criteria. Some authors distinguish between operational and strategic goals, formulating a strategic goal as a non-operational objective and operational goals as constraints on the actions that can be performed by a system (Dardenne, van Lamsweerde and Fickas 1993). Goals are attained using some means involving plans, resources and soft goals. Often, one goal may be achieved by applying alternative means, and goals may be recursively decomposed into sub-goals (refinement), resulting in a goal hierarchy with OR and AND branching.
In the context of c-processes, it is useful to make an explicit distinction between collaborative goals, also named interaction goals (Cheong and Winikoff 2005), and system goals. Collaborative goals are shared between two or more (trading) partners, expressing what they would like to attain by collaborating, while system goals denote goals of individual actors. In order to effectuate c-processes, system goals should be aligned with collaborative goals.
To set up a goal hierarchy for a certain application, such as a c-process, will be quite an investment, as this knowledge is often left implicit. However, given such a structure it becomes feasible to replace one ''means'' by another ''means'' if the new means satisfies at least the same goals as the old means. The required satisfaction must be ascertained or validated somehow.
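As an illustration of such a goal/means structure, consider the following sketch of a hypothetical in-memory goal hierarchy; the substitution check is a plain set comparison that merely stands in for the validation approaches discussed next:

```python
# Sketch: a goal hierarchy with AND/OR refinement and means substitution.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    kind: str = "AND"                    # how sub-goals combine: "AND" or "OR"
    subgoals: list = field(default_factory=list)

@dataclass
class Means:
    name: str
    satisfies: set                       # names of (sub)goals this means attains

def can_replace(old: Means, new: Means) -> bool:
    """A new means may replace an old one if it satisfies at least the same
    goals; real validation (constraint checking, certification, or user
    approval) would go beyond this set comparison."""
    return old.satisfies <= new.satisfies

delivery = Goal("reliable delivery", "AND",
                [Goal("transport"), Goal("insurance", "OR")])
courier = Means("courier contract", {"transport", "insurance"})
own_fleet = Means("own fleet", {"transport"})
print(can_replace(courier, own_fleet))   # False: the insurance goal is not covered
```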
There are three ways to achieve this validation: by formal constraint checking, by certification, or by user approval. The first approach requires that the satisfaction criteria be formalized in a logical language that allows automated model checking. For certain properties, this is an attractive option, but based on past experience, we think it will provide only a partial solution. The second approach requires that trustworthy statements can be acquired, from some certification authority, about what the component supports (e.g., this component supports that protocol, or this protocol supports non-repudiation as defined in standard X). This should be explored as much as possible, but we can expect it to work for generic components only. The third option is to rely on the user. In a self-adaptive system, we should try to minimize the user involvement. So, given the limitations of each approach, we need a careful combination of all three.
We propose a goal hierarchy based on two dimensions, one for the functional goals and one for the non-functional goals. From a language/action perspective (Dietz 2005), at least three levels can be distinguished in the functional goal structure: the collaboration level, the symbolic level and the physical level (see Figure 1). At the collaboration level, the business transaction is described in terms of value exchanges and communicative actions; these business transactions are realized by information exchange, and this is the second, symbolic level. Finally, the information must somehow be transferred physically as messages via a computer network, or by paper. This is addressed at the physical level. We have added a fourth, context layer on top of the collaboration level that describes the values and cultural norms of the society in which the collaboration takes place, as well as the strategic goals of each partner. These values can be used to motivate the collaboration goals when these goals are put under discussion (cf. Section 6).
The second dimension of the goal hierarchy expresses the non-functional goals such as security, fault tolerance and performance. These goals guide the various design choices that have to be made when decomposing a business transaction. Following the Model-Driven Architecture style (Kleppe, Warmer and Bast 2003), a distinction can be made between the computation-independent model (CIM), the platform-independent model (PIM) and the platform-specific model (PSM) level, which correspond directly to the layers above. CIM goals are associated with the business level and concerned with abstract goals such as a low-cost strategy and shortening delivery times. Key considerations in the second, PIM layer are the coordination of the collaborative processes (centralized or decentralized coordination) and transaction management (short- or long-lived transactions, or no transactions at all). Lastly, the PSM level captures implementation choices. For example, if at the PIM level the collaborative process goal is to enable long-running transactions, at the PSM level one may choose between applying WS-Transactions, CORBA OTS or developing a proprietary transaction protocol. Note that adapting the system by replacing one component by another – e.g., a proprietary transaction protocol by WS-Transactions – is only possible when the new component fulfils the same functional goals and the non-functional goals (to the degree of satisfaction required at that time).
Figure 1. A two-dimensional stratified goal hierarchy.
4. A Framework for Self-Adaptation
In this section, we introduce a background theory for self-adaptive systems, a taxonomy of self-adaptive systems and a framework distinguishing between first-order and second-order self-adaptation.
4.1. Adaptation in CAS
Autonomic computing has been inspired by a large body of biological theory on immune systems, pertaining to a special category of adaptive systems. Of particular relevance is the theory of Complex Adaptive Systems (CAS). A complex adaptive system is defined as a system composed of interacting agents, which respond to stimuli, exhibiting stimulus-response behavior that can be defined in terms of rules. Agents adapt by changing their rules as experience accumulates, and can be aggregated into meta-agents whose behavior may be complex and emergent, i.e., not determinable by analysis of lower-level agents (Holland 1995). In analogy to the human body (e.g., the human brain), autonomic system components may protect themselves against external threats and adapt themselves, without the overall system being aware of that. In case of major disruptions, autonomic components may generate algedonic signals to the system to request human intervention.
Adaptation is a central concept for CAS systems in general, and autonomic systems in particular. In its biological context, adaptation refers to the capability of a living system to adapt to its environment. From an enterprise computing perspective, adaptive systems pertain to a special flavor of enterprise systems (or components) that may adapt themselves to changes by modifying, removing or adding the rules that govern their behavior. While improving its capacity to evolve in a changed context, the system accumulates new knowledge, learning about ''good'' and ''bad'' practices.
While evolving constantly, each CAS system occupies a special niche, which is defined by the sum of its emergent behavior and characteristics. In case a CAS system ceases to exist, another system may reoccupy the niche. From an enterprise computing perspective, this may sound strange; after all, applications are designed and deployed intentionally, to serve a specific purpose, so they should not just vanish or evolve in completely different directions. However, there is not necessarily a conflict. From the point of view of the higher-level system that deploys the component for a specific purpose, the component may seem to be completely predictable and under control. However, from an external observer perspective, the component does evolve over time (either semi-autonomously or by modifications brought about by the higher-level system, which are not visible) and it may also become obsolete and be replaced by another component.
A basic prerequisite for progressive adaptation is that individual CAS systems learn to avoid certain behavior when it does not result in the satisfaction of one of the system's (sub)goals. CAS systems are thus equipped with the capability to make inferences and predict the outcome of some actions, by comparing their current state with states they have been in in the past. Anticipation of events is achieved by using internal models. We may distinguish between two elementary types of internal models: tacit models and overt models. Tacit internal models are capable of producing an implicit prediction of a future state, given a current action. Tacit models are typically used by rather simple systems, without sensory capabilities. Overt models depend on more sophisticated information about the environment, acquired by sensors of the system, to explore explicit (alternative) scenarios for establishing a particular goal.
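The tacit/overt distinction can be caricatured in a few lines of code; this sketch is purely illustrative (the rules and the scoring function are invented for the example):

```python
# Sketch: tacit vs. overt internal models for anticipation.

def tacit_model(action):
    # implicit prediction hard-wired into the rules, no sensing involved
    return {"lower_price": "more_orders", "raise_price": "fewer_orders"}[action]

def overt_model(sensed_state, candidate_actions, simulate):
    # explicit exploration of alternative scenarios using sensor data
    return max(candidate_actions, key=lambda a: simulate(sensed_state, a))

def simulate(state, action):             # hypothetical domain-specific scorer
    return state["demand"] + (1 if action == "lower_price" else -1)

best = overt_model({"demand": 5}, ["lower_price", "raise_price"], simulate)
print(tacit_model("lower_price"), best)  # more_orders lower_price
```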
4.2. Towards an adaptation taxonomy
Adaptation may be structured along three orthogonal dimensions (see Figure 2). The first dimension captures the object of adaptation (the 'What' dimension). We distinguish between two types of artifacts that may be adapted: goals and behavior (realized functionality). This distinction can be said to be relative. However, we posit that for each system at a given point in time, the distinction can be made, and is relevant when we assess the impact of adaptation. Behavior can be further refined into tasks (e.g., a sorting task) and processes, and for processes it is useful to distinguish between internal processes not visible outside and processes in the interface that typically can be changed without affecting the core functionality.
The second perspective concentrates on the way in which we may adapt systems (the 'How' dimension). Basically, we may discern two techniques to steer the adaptation process: reasoning and learning. Reasoning entails the ability to make predictions given a collection of facts. This typically requires an extensive causal model, but data mining can also be used. Learning is understood here as the process of acquiring knowledge on the basis of experience, that is, trial and error. Techniques that may be adopted for learning include Bayesian rules, reinforcement learning, and evolutionary computation. Imitation is a special kind of learning based on the experience of others rather than one's own experience.

Figure 2. Adaptation taxonomy.
The third dimension of the adaptation taxonomy emphasizes the time dimension (the 'When' dimension); adaptation may occur proactively or reactively. Reactive adaptation of systems is performed using predetermined responses to external signals. Proactive (pre-emptive) adaptation aims at accommodating future changes by anticipating potential scenarios, and typically relies on continuous monitoring of the environment.
4.3. Self-Adaptive systems
Adaptation of software is typically considered at three levels of granularity: (1) the cross-system level, (2) the system level, and (3) the component level. Each of these levels has its specific requirements for adaptation. Regardless of these levels, adaptation may be either left to the discretion of external designers or performed (partially) by the system itself. The latter category of adaptation is named self-adaptation.
Self-adaptive systems evaluate their behavior, using reflection mechanisms, and modify themselves when their behavior does not conform to predefined goals, or when the system can be optimized in terms of non-functional properties such as performance, security and stability. We call that part of the system that is modifiable the adaptable system.

Figure 3. Self-adaptive system architecture.
Figure 3 depicts a framework defining and relating essential concepts for self-adaptation. Systems execute actions that have an effect on the environment. As outlined, self-adaptive systems constantly sense incoming signals and compare them with the action rules that make up the core of the adaptable system. The comparison can be a complex process, during which not only the current status (from the sense data) is considered, but also a predicted status. That is why a memory component can be necessary. The memory component captures past states of the system, relating them to incoming and outgoing events. Having a memory component, we allow for both event-driven and state-driven comparison: in the former case, it is a particular event that may match a certain rule and trigger some action, whereas in the latter case, it is the internal model (included in the memory, and constantly adapted on the basis of incoming signals) that is compared to condition values.
So far, the system is nothing more than what is called a feedback system in standard system theory (Owens 1978), in which only the environment is adapted to. In order to introduce self-adaptation, we first distinguish between the goals of the system and an operational way of achieving the goals, the adaptable system. The latter can be thought of as a set of Event-Condition-Action rules, but how these action rules are implemented is not essential. The performance of the system is monitored under a perpetual testing strategy (Osterweil et al. 1996) in the form of performance indicators. In case the configuration of the adaptable system is not a one-shot event but a continuous effort to find the optimal configuration for realizing the goals, we talk about (first-order) self-adaptation. The extreme case is that after each action cycle, the system determines a new configuration of action rules based on the goals and the result (measured by the performance indicators). The determination of the configuration can be a random explorative process in the beginning, while gradually becoming more effective. This is the typical setting for reinforcement learning. A less extreme case is when the adaptable system incorporates a certain plan to achieve the goal that is not evaluated after each action cycle but only after some period or when a breakdown prompts for it.
First-order self-adaptation is bound to limitations, as the goals may be infeasible or may have become so. Hence, alternatively, the system may adapt its goals. This requires a second-order adaptive system (double-loop learning, Argyris and Schön 1974). Although this kind of adaptation is often projected inside the adaptive system, thereby raising its complexity, we prefer to model it as a combination of two systems, where the higher system (manager) monitors the behavior of the lower system (agent) and subsequently may adapt the goals of the agent (Figure 4). The manager also has its own goals that it cannot change, and presumably it is itself an agent of a higher-level system, ultimately the human user. So in general, a self-adaptive system is realized not by one agent but by a hierarchically organized agent society (cf. Ciancarini, Omicini and Zambonelli 2000).
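A minimal sketch of the manager/agent decomposition (hypothetical names; the performance figure is a stub) may clarify the division of labor: the agent cannot touch its own goals, while the manager can relax them but not its own:

```python
# Sketch: second-order adaptation as a manager/agent pair (names illustrative).

class Agent:
    def __init__(self, goals):
        self.goals = goals                 # set by the manager, not by itself

    def performance_indicator(self):       # reported upward to the manager
        return 0.4                         # stub: chronically low performance

class Manager:
    def __init__(self, own_goals, agent):
        self.own_goals = own_goals         # fixed for the manager itself
        self.agent = agent

    def monitor(self):
        # If the agent's goals prove infeasible, the manager relaxes them
        # (double-loop learning); its own goals it cannot change.
        if self.agent.performance_indicator() < self.own_goals["min_perf"]:
            self.agent.goals["delivery_time_days"] += 1

agent = Agent({"delivery_time_days": 2})
Manager({"min_perf": 0.8}, agent).monitor()
print(agent.goals)                         # {'delivery_time_days': 3}
```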
5. A Framework for Co-Adaptation
Traditionally, co-adaptive systems pertain to multiple interacting constituents of a system that learn concurrently in relation to one another. Thus, co-adaptive systems denote a special type of self-adaptive systems for which the environment is not a black box, but a gray box including other self-adaptive systems that are embedded in the same environment. This means that self-adaptation can no longer be achieved in splendid isolation: adapting one system can have an impact on the other system, and vice versa. An immediate impact of adapting a system may be that the communication with the other system breaks down, completely or partially. But even if the communication does still work, independent adaptation by associated systems may have unexpected negative consequences for both.
Hence there is a need for coordinated adaptation. We define co-adaptive systems as systems that enable coordinated adaptation through direct symbolic interaction (cf. Section 3), rather than indirectly by the effect of the actions on the environment. For interoperable systems, like the ones that support business conversations, co-adaptation is essential, as these systems depend heavily on agreed upon protocols, data standards and other shared specifications (it is exactly at the interfaces that most of the adaptations have to be made). So changes to the system, such as moving from EDIFACT to XML, from non-secure to secure connections, or from Dutch to English naming, cannot be made unilaterally.

Figure 4. Second-order self-adaptive system.
For the interaction between two systems, we distinguish between the shared space, the shared model and the common ground (Figure 5). The shared space is what links the two systems together so that they can communicate at all. In a computer network, the shared space is the physical network between the two computers. The shared space may also be a physical context, such as the environment where ants leave so-called pheromones to communicate with each other. Abstracting from the devices to send and receive signals in this environment, the shared space is the backbone over which systems synchronize their shared model. It corresponds to the physical level of collaborative systems as outlined in Section 3. The shared model captures the symbolic and collaboration level and is the representation of some part of the world designated by the signals. We use the term shared model here as an abstract notion; whether there is an explicit shared model, such as the message list for two communicating computers, or not, as is probably the case with the ants, is not relevant.
The common ground is whatever the systems use in interpreting symbols. This can include a shared domain ontology, interaction protocols, and a shared set of inference rules. Again, this common ground need not be explicit to be there and to have effect, but if it is explicit and in declarative format, it is of course much easier to manipulate and adapt. Because of the common ground, the shared model will typically not contain just messages, but also several interpretations, for example, a shared view of the obligations that derive from these messages (cf. Section 2). Essentially, the common ground contains the definition of the shared space, the definition of the shared model (in terms of interpretation rules) and the mapping between the shared space and the shared model.
Figure 5. Co-adaptive system architecture.
5.1. Levels of Co-Adaptation
We can now define the first level of co-adaptation to be the communication that occurs when the two systems adapt their shared model using signals transmitted in the shared space. The continuous and collaborative effort of putting signals into the shared space and taking signals out of the shared space is to be viewed as a mechanism aimed at synchronizing their shared models. A file replication mechanism in a P2P network illustrates this level of co-adaptation well (Figure 6).

Figure 6. Level-1 co-adaptation in a P2P network.
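In code, level-1 co-adaptation reduces to state synchronization through the shared space. A sketch of the file replication example, with the shared space abstracted to a send callback (illustrative only):

```python
# Sketch: level-1 co-adaptation as shared-model synchronization (cf. Figure 6).

shared_model_1 = {"File-1", "File-2", "File-3"}
shared_model_2 = {"File-1", "File-2"}

def synchronize(model_a, model_b, send):
    """Exchange signals over the shared space until both shared models
    designate the same set of files."""
    for item in model_a - model_b:
        send(item)                  # signal placed in the shared space
        model_b.add(item)           # counterpart takes the signal out

synchronize(shared_model_1, shared_model_2, send=print)  # prints File-3
assert shared_model_1 == shared_model_2
```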
Now it may be the case that inserting the signal into the shared model, interpreting it and subsequently expanding this model raises a logical conflict (Figure 7). In our framework, these conflicts relate to the validity claims in the signal. For example, the signal may be a purchase order (φ) with the implicit assumption that the ordered goods are available (ψ). If the goods are not available, this claim is not valid, and an argumentation process will start to solve the issue. This second level of co-adaptation is worked out below. In the figure, a distinction is made between the Shared Model and the Private DB. In a co-adaptive system, the Memory component foreseen in the self-adaptive system framework can be decomposed into one or more private databases and a number of shared models (as many as there are collaborative partners) that should be mutually consistent but are not necessarily synchronized.

Figure 7. Level 2 co-adaptation.
The third level of co-adaptation occurs when the common ground is adapted. A simple example is when a symbol gets a new meaning, or when the syntactic structure of the symbol changes. At this point, traditional type checking systems typically raise an error, and perhaps signal an error message to the other system.
A level-3 co-adaptive system would try to adapt the common ground in such a way that the problem is solved (Figure 8). Note that this adaptation of the common ground is not needed for all breakdowns. It may also be the case that the sending system has made an error, or that an error has occurred in the shared space. In that case, the obvious solution is retransmission, which we regard as part of the level-1 co-adaptation. Furthermore, it may be the case that the common ground leaves room for various interaction styles, and the only adaptation that is needed is switching from one style to another. An example is a modem that supports multiple data rates
and negotiates dynamically with the counter-party which data rate to use. This flexibility is built into the common ground, and so the common ground itself need not be adapted.

Figure 8. Level 3 adaptations.
Finally, it may occur that the common ground includes goals of the system that it cannot change itself. In that case, we talk about a fourth level of co-adaptation. It cannot be done without the active involvement of the manager of the agent. Actually, in this case a breakdown signal is reported to the manager through the agent's performance indicators. In most situations, the two agents involved will have different managers. What happens then is a co-adaptation process between the two managers, on the basis of their common ground. The intended result of this co-adaptation process is an agreement on a new common ground, which may involve changing the goals of the agents below them. The claim that we make in this paper is that, ultimately, co-adaptation should be supported on all four levels.
The basic message processing algorithms are pictured in Figure 9: one for processing operational messages and one for processing updates to the common ground. If a received message does not adhere to the agreed upon format, an error message is sent; otherwise it is accepted (level-1 co-adaptation). If validity claims are not acceptable, a ''non-agree'' message is sent to the other party; this is the beginning of an argumentation process worked out in the next section (level-2 co-adaptation). Otherwise, the claims are accepted and the shared models are synchronized. The second algorithm says that when an update of the common ground is received, it is either accepted (level-3 co-adaptation) or forwarded to the Manager (level-4 co-adaptation).

Figure 9. Basic message processing algorithms for co-adaptive systems.
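Read as code, the two algorithms might look as follows; this is our own sketch of the description above, with all helper classes and predicates stubbed:

```python
# Sketch of the two algorithms of Figure 9; every class and helper here
# is an illustrative stand-in, not part of an existing API.

def send(kind, payload):
    print(kind, payload)

class CommonGround:
    def __init__(self, message_fields, rules):
        self.message_fields, self.rules = message_fields, rules

    def well_formed(self, msg):
        return set(msg) == self.message_fields

    def compatible(self, update):
        return update.get("backward_compatible", False)

    def apply(self, update):
        self.rules.update(update["rules"])

def acceptable(claims, shared_model):
    # a validity claim conflicts if its negation is already agreed
    return all("not " + c not in shared_model for c in claims)

def process_message(msg, cg, shared_model):
    if not cg.well_formed(msg):
        send("error", msg)                  # level-1: format breakdown
    elif not acceptable(msg["claims"], shared_model):
        send("non-agree", msg)              # level-2: argumentation starts
    else:
        shared_model.update(msg["claims"])  # claims accepted, models synced
        send("accept", msg)

def process_cg_update(update, cg, escalate):
    if cg.compatible(update):
        cg.apply(update)                    # level-3: adapt common ground
    else:
        escalate(update)                    # level-4: forward to the Manager

cg = CommonGround({"claims"}, rules={})
model = set()
process_message({"claims": ["intended(d)"]}, cg, model)   # prints: accept ...
```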
6. Argumentation
Second-level co-adaptation requires some way of synchronizing shared models, for which argumentation can offer a solution. In recent years, dialogue systems for argumentation have received interest in several fields of AI, particularly in AI and law (Prakken 2000, 2001) and agent communication and negotiation (Amgoud, Belabbes and Prade 2005; Bentahar, Moulin and Chaib-draa 2003). In argumentation theory (Toulmin 1969), formal dialogue systems have been developed for so-called ''persuasion'' or ''critical discussion'' (Mackenzie 1979; Walton and Krabbe 1995 – other types are information-seeking dialogues and inquiry dialogues). The dialogue system in this case regulates the use of speech acts for such things as making or challenging a claim, accepting, withdrawing or arguing. The proponent of a claim aims at making the opponent concede (accept) his claim; the opponent instead aims at making the proponent withdraw his claim. Such a dialogue ends when one of the players has fulfilled his aim. The following definition of a dialogue system is based on the work of Prakken referred to above and is also used in (De Moor and Weigand, fc).
6.1. A basic dialogue system

Prakken defines a dialogue system (in particular, a protocol for persuasion by dispute, PPD for short) as a tuple consisting of many elements. We have slightly adapted and simplified his system in the following definition. Our reformulation reflects the fact that in co-adaptation, the communicative action should be focused on resolving conflicts rather than winning arguments.
Definition
A protocol for persuasion by dispute (PPD) consists of the following elements: Players, Acts, Replies, Moves, Comms, Rules, Resolution, as defined below.

Players, typically represented with the characters S and H.

Acts, the set of discussion acts: claim(φ), argue(Φ so φ), why(φ), retract(φ), accept(φ), where φ is a wff and ''Φ so φ'' is an argument with premises Φ and conclusion φ.

Replies, a function that defines for each act what the possible reply acts are (see Table 1). They can be characterized as either agreeing or disagreeing.

Moves, the set of all well-formed moves. An initial move is a pair <Player, Act>, a responding move is a triple <Player, Act, Move>, where the third component indicates the move to which the current move responds.
Table 1. Argumentation acts and replies (based on Prakken 2001).

ACTS                      Disagreeing replies                         Agreeing replies
claim φ                   why φ; argue Φ so ¬φ                        accept φ
why φ                     argue Φ so φ                                retract φ
accept φ                  (none)                                      (none)
retract φ                 (none)                                      (none)
argue A: Φ so ψ           argue B: Φ' so ψ' (where argument B         accept A;
(where A identifies       challenges or undercuts A);                 accept φ_i
this argument)            why φ_i (where φ_i ∈ Φ);
                          retract C (where C is an argument
                          challenged by A)
Comms is a function that assigns to each player at each stage of a dialogue a set of propositions to which the player is committed at that stage. At the start, these can be considered empty or equal to the Agreed.

Rules is a function that for any dialogue state specifies the allowed moves at that point, given the dialogue so far and the players' commitments.

Resolution is a function that determines how the discussion result is established. One way of establishing the result is to determine who is the ''winner'', which is subsequently defined as the one whose argument cannot be defeated.
The dialogical status of a move indicates its status in the discussion. A certain claim is ''in'' when it has been made and not challenged. If it has been challenged, it becomes ''out'', until the challenge itself is effectively replied to (see the publications referred to for a formal definition). Another important element is Comms, the commitments of the discussion partners at that state. These commitments are to be understood here in the context of the rational discussion; they represent what the player adheres to, even if it is only for the sake of the argument (they do not correspond with responsibilities for action). During a discussion, the partners can take on various commitments, of which only a part is agreed.
The logical semantics of the possible discussion acts can be represented using Comms. For example, the effect of argue(Φ so ψ) is that the conclusion ψ and the premises in Φ are added to the commitments of the speaker. The preconditions of the discussion acts refer to the commitments as well. One general condition is that Comms must be left consistent. More specific preconditions are given in Table 2, based on (Prakken 2001). The relationship between Comms and the Agreed (Section 2) is simply that the Agreed is the intersection of the two Comms.

Table 2. Pre- and postcondition rules. Note that the moves can only affect the commitments of the speaker of the act, not of the hearer.

Move            Preconditions                               Postcondition (effects on the speaker's commitments)
claim φ         Comms ∪ {φ} is consistent                   Comms := Comms ∪ {φ}
argue Φ so ψ    (none)                                      Comms := Comms ∪ {ψ} ∪ Φ
retract φ       φ ∈ Comms (explicitly added)                Comms := Comms \ {φ}
accept φ        φ ∉ Comms; Comms do not justify ¬φ          Comms := Comms ∪ {φ}
why φ           Comms do not justify φ                      (no change)

Each dialogue system specifies somehow what the allowed moves are at some point (the Rules component). In the literature, no single or optimal set of rules can be found, but there are some general norms that seem to be necessary in any rational discussion. Non-repetition: if moves m_i and m_j are both replies to M, then their content should be different. Relevance: a move is relevant iff it replies to a relevant target, where a target is relevant iff any attacking reply to it changes the dialogical status of the initial move; every move (except the initial move) should be relevant. No self-contradiction: it is not allowed to concede to a proposition if the opposite is justified by the player's own commitments (the speaker may have changed his mind, but then he should first retract his earlier commitment).
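A commitment store that enforces these pre- and postconditions can be sketched in a few lines; the justification test is reduced to membership, which is a stand-in for real inference, and all names are illustrative:

```python
# Sketch: a commitment store enforcing the Table 2 rules (illustrative).

class CommitmentStore:
    def __init__(self, agreed=()):
        self.comms = set(agreed)       # starts empty or equal to the Agreed

    def justifies(self, phi):
        return phi in self.comms       # stand-in for real inference

    def negate(self, phi):
        return phi[4:] if phi.startswith("not ") else "not " + phi

    def claim(self, phi):
        assert self.negate(phi) not in self.comms   # Comms stays consistent
        self.comms.add(phi)

    def argue(self, premises, conclusion):
        self.comms |= set(premises) | {conclusion}

    def retract(self, phi):
        self.comms.discard(phi)

    def accept(self, phi):
        assert phi not in self.comms and not self.justifies(self.negate(phi))
        self.comms.add(phi)

buyer, seller = CommitmentStore(), CommitmentStore()
buyer.claim("process")
seller.argue(["process -> available", "not available"], "not process")
print(buyer.comms & seller.comms)   # the Agreed: empty, the dispute is open
```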
6.2. Argumentation and co-adaptation

So far we have followed the main line of Prakken's work. We think that this dialogue system is instrumental in supporting level-2 co-adaptation, but in its current form it addresses the formal aspects of the argumentation only. As such, it will leave many disputes unresolved. What typically happens is that a dialogue starts, and the parties exchange some arguments until they arrive at one or more propositions p that are not derivable anymore (basic beliefs). Suppose the parties disagree on the value of p. The dialogue system does not provide a solution (unless the Resolution has some arbitrary decision rule, e.g., that in such a case the first claimant always wins, but this is not desirable).
In order to increase the chances of successful resolution, we propose to add data authority rules to the common ground. A data authority rule states which agent is an authoritative source for a certain class of propositions. For example, it can be stated that each agent is authoritative with respect to data concerning itself or the company it represents. For certain types of data, third parties may be declared to be authoritative. In this way, many conflicts can be resolved. There are a few special cases to consider; both are illustrated in the sketch following this list.
– incompleteness. If there is no data authority rule for some proposition p, then we adopt the following rule: if the parties make contradictory claims, the issue remains unresolved. If one party makes a claim and the other does not want to claim the opposite (because he simply does not know – for him, the truth value is unknown), that means that the other should concede.
– inconsistency. If according to the data authority rules both parties are authoritative, then the same rule is applied: the issue remains unresolved. Note that this means that overlapping authority rules are as ''bad'' as incomplete rules, and hence it does not make much sense to have them. However, the criteria used in the data authority rules can be diverse, and so it may be hard to exclude overlapping a priori.
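A sketch of how such a resolution function could combine authority rules with the incompleteness and inconsistency defaults (the names and the authority assignment are hypothetical):

```python
# Sketch: conflict resolution with data authority rules (illustrative).

def resolve(issue, claims, authority_of):
    """claims: dict party -> claimed truth value (True/False/None=unknown);
    authority_of: returns the set of parties authoritative for this issue."""
    auth = authority_of(issue)
    values = {v for p, v in claims.items() if v is not None and p in auth}
    if len(values) == 1:
        return values.pop()               # the authoritative source decides
    if len(values) > 1:
        return "unresolved (escalate)"    # inconsistency: overlapping authority
    stated = {v for v in claims.values() if v is not None}
    if len(stated) == 1:
        return stated.pop()               # incompleteness: the other concedes
    return "unresolved (escalate)"        # incompleteness with contradiction

# Each agent is authoritative for data about itself:
authority = lambda issue: {"seller"} if issue == "goods_available" else set()
print(resolve("goods_available", {"buyer": True, "seller": False}, authority))
# -> False: the Seller is authoritative about its own stock
```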
Furthermore, we note that our co-adaptation framework allows for escalation. When a certain issue remains unresolved between two agents, this is considered a breakdown that triggers the Managers to start a dialogue. Using both data authority rules and escalation, the number of unresolved issues can be reduced drastically. The remaining ones are brought to the attention of the human user.
Let us illustrate by a simple example how the argumentation framework supports co-adaptation. Consider the level-2 co-adaptation in Figure 7. The Buyer claims that the order must be processed (φ), assuming that the goods are available (ψ). Both parties agree that the claim is only valid when the goods are available (φ → ψ). When Seller receives and interprets the claim, he will notice the logical conflict (ψ ∧ ¬ψ). According to Table 1, he can reply with a why or an argue move. Since he has an argument, he replies with the move ''argue Φ so ¬φ'', where Φ is {φ → ψ, ¬ψ}. As Buyer agrees on the first part of Φ and has no way of attacking the second part (he does not know the status of ψ, and realizes that Seller is authoritative on this proposition), he can only reply by an accept move, in which he accepts the argue move and hence also adds ¬ψ to his commitments, that is, his part of the Shared Model in terms of Section 5.
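Tracing this dialogue against the commitment rules of Table 2 gives the following sketch (with φ encoded as ''process'' and ψ as ''available''; the bookkeeping is inlined and purely illustrative):

```python
# Sketch: the Buyer/Seller dialogue as commitment bookkeeping.
common = {"process -> available"}            # both agree the claim needs psi
buyer, seller = set(common), set(common)

buyer.add("process")                         # Buyer: claim phi
seller |= {"not available", "not process"}   # Seller: argue {phi -> psi, not psi} so not phi
# Buyer: accept. Seller is authoritative on availability, so Buyer retracts
# phi (no self-contradiction) and takes over the premises and the conclusion.
buyer.discard("process")
buyer |= {"not available", "not process"}

print(buyer & seller)   # the new Agreed, i.e. the synchronized Shared Model part
```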
7. Conclusion
In this paper, we have made a first step towards a generic co-adaptive framework for business collaboration that is not only able to support operational e-commerce transactions but also able to adapt the collaboration definition itself. In particular, this paper distinguishes between self-adaptation and co-adaptation, and introduces a stratified framework for each. A limitation of the paper is that the aspects of adaptability, self-adaptation and co-adaptation are not integrated yet.
Most current work on self-adaptation focuses on individual systems only, although configurations of self-adaptive components are considered as well. However, in a shared environment, self-adaptive systems had better coordinate their adaptation efforts. This is all the more true when we realize that systems usually do not exist for their own sake, but to serve another system. So we suggest that self-adaptation should not be studied only at the micro level of an individual system, but also at the macro level of interacting systems. Interesting questions here are: What are the emergent properties? What can make the macro system unstable? What infrastructure is desirable (e.g., matching functions, contracting mechanisms) to support co-adaptation and its initialization? Answers to these questions may be found in the area of Multi-Agent Systems, where these issues have been dealt with for some time.
The advantage of using a co-adaptive system architecture is not only the increase in autonomy of the system, and an accompanying reduction of maintenance costs. It can also make the specification more transparent, as all the self-adaptation and co-adaptation is already built in, and need not be specified anymore by the developer. The developer can concentrate on the goals that he wants to set, including the performance indicators.
The co-adaptation framework presented here is an initial research product. In the future, we wish to develop formal underpinnings for the various types of adaptation. Also, we plan to extend the framework with other generic mechanisms. It is fair to say that a main emphasis of the research efforts in the field of Information Systems in the past few decades was to find generic representations, such as conceptual data models, generic ontologies and general-purpose communication languages. This approach has proven to be quite successful, but we believe it is not sufficient, at least not for realizing the autonomic computing vision. Therefore, we envision a new challenge for the coming decade: to complement generic representations with generic mechanisms, of which self-adaptation, co-adaptation and imitation are good examples to start with.
References
Amgoud, L., S. Belabbes and H. Prade (2005). ''Towards a Formal Framework for the Search of a Consensus between Autonomous Agents,'' in: Proc. of the Fourth Int. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS '05), The Netherlands, July 25–29, 2005. ACM Press, New York.
Argyris, C. and D. Schön (1974). Theory in Practice: Increasing Professional Effectiveness. San Francisco: Jossey-Bass.
Bentahar, J., B. Moulin, and B. Chaib-draa (2003). ''Commitment and Argument Network: A New
Formalism for Agent Communication,’’ in: AAMAS Workshop on Agent Communication Languages
and Conversation Policies, Melbourne, 2003.
Chaib-draa, B. and F. Dignum (2002). ''Trends in Agent Communication Language,'' Computational
Intelligence 18(2), 89–101.
Cheong, C. and M. Winikoff (2005). ''Hermes: A Methodology for Goal Oriented Agent Interactions,'' in: Proceedings of the Fourth Int. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS '05), The Netherlands, July 25–29, 2005. ACM Press, New York.
Ciancarini, P., A. Omicini, and F. Zambonelli (2000). ''Multiagent Systems Engineering: The Coordination Viewpoint,'' in: Intelligent Agents VI (ATAL99), LNAI. Springer-Verlag, Berlin.
Clark, H. (1996). Using Language. Cambridge University Press.
Dardenne, A., A. van Lamsweerde and S. Fickas (1993). ''Goal-Directed Requirements Acquisition,'' Science of Computer Programming 20(1–2), 3–50.
Dietz, J. L. G. (2005). Enterprise Ontology – Theory and Methodology . Berlin: Springer-Verlag.
Ganek, A. G. and T. A. Corbi (2003). ''The Dawning of the Autonomic Computing Era,'' IBM Systems Journal 42(1), 5–18.
Garlan, D. and B. Schmerl (2002). ''Model-based Adaptation for Self-Healing Systems,'' in: 1st Workshop on Self-Healing Systems. ACM Press, New York.
Habermas, J. (1984). The Theory of Communicative Action, I. Beacon Press.
Holland, J. H. (1995). Hidden Order: How Adaptation Builds Complexity. Cambridge, MA: Perseus Books.
Jones, A. J. and X. Parent. (2003). ‘‘Conventional Signalling Acts and Conversation,’’ in: AAMAS
Workshop on Agent Communication Languages and Conversation Policies, Melbourne, 2003.
Kephart, J. O. and D. M. Chess (2003). ''The Vision of Autonomic Computing,'' IEEE Computer 36(1), 41–50.
Kimbrough, S. O. and S. A. Moore (1997). ‘‘On Automated Message Processing in Electronic Commerce
and Work Support Systems: Speech act Theory and Expressive Felicity,’’ ACM Transactions on
Information Systems 15(4), 321–367.
Kleppe, A. G., J. Warmer, and W. Bast. (2003). MDA Explained: The Model Driven Architecture: Practice
and Promise . Addison-Wesley Longman Publishing Co., Inc.
van Lamsweerde, A. (2001). ''Goal-Oriented Requirements Engineering: A Guided Tour,'' in: Proc. RE'01, Toronto, August 2001, pp. 249–263.
Linington, P. F., Z. Milosevic, J. Cole, S. Gibson, S. Kulkarni and S. Neal (2004). ''A Unified Behavioural Model and a Contract Language for Extended Enterprise,'' Data and Knowledge Engineering 51(1), 5–30.
Mackenzie, J. D. (1979). ‘‘Question-Begging in Non-Cumulative Systems,’’ Journal of Philosophical Logic
8, 117–133.
Mylopoulos, J., L. Chung and E. Yu (1999). ‘‘From Object-oriented to Goal-oriented Requirements
Analysis,’’ Communications of the ACM 42(1), 31–37.
de Moor, A. and H. Weigand (fc). ''Formalizing the Evolution of Virtual Communities,'' Information Systems (accepted for publication).
Norman, D. A. (1990). The Design of Everyday Things . New York: Doubleday.
Osterweil, L. J., L. A. Clarke, D. J. Richardson, and M. Young. (1996). ‘‘Perpetual Testing,’’ in:
Proceedings of the Ninth International Software Quality Week.
Owens, D. H. (1978). Feedback and Multivariate Systems . Stevenage: Peter Peregrinus Ltd.
Pfleeger, S. L. and J. M. Atlee (2006). Software Engineering – Theory and Practice, 3rd ed. Pearson Int.
Prakken, H. (2000). ''On Dialogue Systems with Speech Acts, Arguments and Counterarguments,'' in: Proc. of the 7th European Workshop on Logic for Artificial Intelligence (JELIA'2000), LNAI 1919. Springer-Verlag, pp. 224–238.
Prakken, H. (2001). ''Modeling Reasoning about Evidence in Legal Procedure,'' in: Proc. ICAIL-2001, St. Louis, Missouri.
Salehie, M. and L. Tahvildari (2005). ''Autonomic Computing: Emerging Trends and Open Problems,'' in: Proceedings of the 2005 Workshop on Design and Evolution of Autonomic Application Software (DEAS '05), St. Louis, Missouri, May 21, 2005. ACM Press, New York.
Singh, M. P. (2000). ‘‘Social Semantics for Agent Communication Languages’’. in F. Dignum, & M.
Greaves, (Eds) Issues in Agent Communication (pp. 31–45). Berlin: Springer-Verlag.
Smith, H. and P. Fingar. (2003). Business Process Management: The Third Wave . Meghan-Kiffer Press.
Stamper, R. (2000). ''New Directions for Systems Analysis and Design,'' in J. Filipe (ed.), Enterprise Information Systems (pp. 14–39). London: Kluwer Academic Publ.
Toulmin, S. E. (1969). The Uses of Argument . Cambridge University Press.
Valetto, G. and G. Kaiser (2002). ''A Case Study in Software Adaptation,'' in: Proc. 1st Workshop on Self-Healing Systems. New York: ACM Press.
Walton, D. N. and E. Krabbe (1995). Commitment in Dialogue. Basic Concepts of Interpersonal Reasoning .
Albany NY: State Univ of New York Press.
Weigand, H., E. Verharen, and F. Dignum (1997). ''Integrated Semantics for Information and Communication Systems,'' in: Meersman, R. and L. Mark (eds.), Proc. of IFIP DS-6 ''Database Application Semantics,'' Stone Mountain, Georgia.
Weigand, H. and W. J. van den Heuvel (1999). ''Meta-Patterns for Electronic Commerce Transactions based on FLBC,'' International Journal on Electronic Commerce 3(2), 45–66.