
L10. Agent Negotiations
• When
• Definition and concepts
• Strategies – negotiation modeling
• Examples – a buyer-seller negotiation

When do negotiations occur?
• Task and resource allocation
• Recognition of conflicts
• Improved coherence for the agent society
• Deciding on organizational structure

Definitions of Negotiation
• Davis & Smith: negotiation is a process of improving agreement (reducing inconsistency and uncertainty) on common viewpoints or plans through the exchange of relevant information.
1. Two-way exchange of information (e.g. between 2 agents)
2. Evaluation of the information from each agent's individual perspective
3. Possible final agreement

Related Elements
• Negotiation – three main structures:
1. Language
2. Decision
3. Process

[Diagram: NEGOTIATION at the center of a triangle formed by its three structures – Language, Decision, Process]

Negotiation Problem Domains
Three-level hierarchy:
1. Task-Oriented
   – Non-conflicting jobs/tasks
   – Jobs/tasks can be redistributed among agents (for mutual benefit)
2. State-Oriented
   – Superset of the task-oriented domain
   – Goals/jobs/tasks can have side effects (i.e. conflicting)
   – Negotiation over joint plans/schedules for agents
3. Worth-Oriented
   – Superset of the state-oriented domain
   – Each goal has a rating or value (e.g. numeric)
   – Negotiation over joint plans/schedules/goal relaxation

Postmen Problem Domain
Type: task-oriented
Situation:
• Several postmen are located at a post office
• Post arrives at the post office
• The post is to be delivered by the postmen to private postal boxes, which are geographically (spatially) distributed
• Which postman should deliver which post, and to where?

[Diagram: Postmen Domain (TOD) – a delivery graph rooted at the Post Office with locations a–f, to be covered by postmen 1 and 2]

Blocks World Problem Domain
Type: state-oriented
Situation: agents have their own agendas on how to stack various colored blocks. Blocks are a shared resource. How can the agents' actions be coordinated to resolve conflicting block moves?

[Diagram: Slotted Blocks World (SOD) – blocks 1, 2, 3 arranged in slots]

Multiagent Tile World Problem Domain
Type: worth-oriented
Situation: agents operate on a grid on which there are tiles that need to be pushed into holes. Different holes have different values, and there are also obstacles. How can the agents' actions be coordinated to resolve conflicting tile moves and reach good compromises with respect to the values the agents obtain?

[Diagram: The Multi-Agent Tileworld (WOD) – agents A and B on a grid with tiles, obstacles, and holes of different values]

Building Blocks
• Domain
  – A precise definition of what a goal is
  – Agent operations
• Negotiation protocol
  – A definition of a deal
  – A definition of utility
  – A definition of the conflict deal
• Negotiation strategy
  – In equilibrium
  – Incentive-compatible

Task-Oriented Domain – formal description
Described by a tuple <T, A, c>:
• T – the set of all tasks (all possible actions in the domain)
• A – the list of agents
• c – a monotonic cost function mapping each set of tasks to a real number

Possible Deals
1. ({a}, {b})
2. ({b}, {a})
3. ({a, b}, ∅)
4. (∅, {a, b})
5. ({a}, {a, b})  – the conflict deal
6. ({b}, {a, b})
7. ({a, b}, {a})
8. ({a, b}, {b})
9. ({a, b}, {a, b})

Formal Description of a "Deal"
A deal is a pair (D1, D2) such that:
  D1 ∪ D2 = T1 ∪ T2
• T1 – Agent 1's original tasks
• T2 – Agent 2's original tasks
• D1 – Agent 1's new tasks – the result of the deal
• D2 – Agent 2's new tasks – the result of the deal

Utility Function
Given an encounter <T1, T2>, the utility of deal d to agent k is:
  utility_k(d) = c(T_k) − cost_k(d)
• d = <D1, D2>
• c(T_k) is the stand-alone cost to agent k (the cost of achieving its goal with no help)
• cost_k(d) = c(D_k)
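A minimal Python sketch of the objects defined above – the tuple <T, A, c>, a deal, and the utility of a deal. The cost function used here (simple task counting) is an illustrative stand-in, not the cost function of the parcel-delivery example that follows:

```python
def c(tasks: frozenset) -> float:
    """Stand-in monotonic cost function: cost = number of tasks."""
    return float(len(tasks))

def is_deal(d1: frozenset, d2: frozenset, t1: frozenset, t2: frozenset) -> bool:
    """A deal (D1, D2) must jointly cover all original tasks: D1 ∪ D2 = T1 ∪ T2."""
    return d1 | d2 == t1 | t2

def utility(original_tasks: frozenset, new_tasks: frozenset) -> float:
    """utility_k(d) = c(T_k) - cost_k(d), where cost_k(d) = c(D_k)."""
    return c(original_tasks) - c(new_tasks)

# Example encounter <T1, T2> over two delivery tasks a and b.
T1, T2 = frozenset({"a"}), frozenset({"a", "b"})
deal = (frozenset({"a"}), frozenset({"b"}))        # D1 = {a}, D2 = {b}
print(is_deal(deal[0], deal[1], T1, T2))           # True
print(utility(T1, deal[0]), utility(T2, deal[1]))  # 0.0 1.0
```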

Example: parcel delivery domain – utility
[Diagram: a distribution point with delivery locations a and b, each at distance 1]
Cost function: c(∅) = 0, c({a}) = 1, c({b}) = 1, c({a, b}) = 3

Deal                    Utility for agent 1    Utility for agent 2
1. ({a}, {b})                    0                      2
2. ({b}, {a})                    0                      2
3. ({a, b}, ∅)                  -2                      3
4. (∅, {a, b})                   1                      0
5. ({a}, {a, b})                 0                      0
6. ({b}, {a, b})                 0                      0
7. ({a, b}, {a})                -2                      2
8. ({a, b}, {b})                -2                      2
9. ({a, b}, {a, b})             -2                      0

Deals
All nine deals:
1. ({a}, {b})   2. ({b}, {a})   3. ({a, b}, ∅)   4. (∅, {a, b})   5. ({a}, {a, b})
6. ({b}, {a, b})   7. ({a, b}, {a})   8. ({a, b}, {b})   9. ({a, b}, {a, b})

Individually rational: ({a}, {b}), ({b}, {a}), (∅, {a, b}), ({a}, {a, b}), ({b}, {a, b})
Pareto optimal: ({a}, {b}), ({b}, {a}), ({a, b}, ∅), (∅, {a, b})
Negotiation set (individually rational and Pareto optimal): ({a}, {b}), ({b}, {a}), (∅, {a, b})
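A small, self-contained Python sketch that reproduces these sets from the parcel-delivery costs above, by enumerating all deals and filtering the individually rational and Pareto-optimal ones:

```python
from itertools import product

COST = {frozenset(): 0, frozenset("a"): 1, frozenset("b"): 1, frozenset("ab"): 3}
T1, T2 = frozenset("a"), frozenset("ab")          # the agents' original task sets

def utility(original, new):
    return COST[original] - COST[new]

all_tasks = T1 | T2
subsets = [frozenset(s) for s in [(), ("a",), ("b",), ("a", "b")]]
deals = [(d1, d2) for d1, d2 in product(subsets, repeat=2) if d1 | d2 == all_tasks]

def utils(deal):
    d1, d2 = deal
    return utility(T1, d1), utility(T2, d2)

# Individually rational: no worse than the conflict deal (utility >= 0 for both agents).
rational = [d for d in deals if all(u >= 0 for u in utils(d))]

# Pareto optimal: no other deal is at least as good for both agents and strictly better for one.
def dominated(d):
    u1, u2 = utils(d)
    return any((v1 >= u1 and v2 >= u2) and (v1 > u1 or v2 > u2)
               for v1, v2 in map(utils, deals))
pareto = [d for d in deals if not dominated(d)]

negotiation_set = [d for d in rational if d in pareto]
for d in negotiation_set:
    print(tuple(sorted(d[0])), tuple(sorted(d[1])), utils(d))
# The three printed deals match the slide: (∅, {a, b}), ({a}, {b}), ({b}, {a}).
```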

The Negotiation Set Illustrated

Pareto optimality: Named after Vilfredo Pareto, Pareto optimality is a measure of efficiency. An outcome of a game is Pareto optimal if there is no other outcome that makes every player at least as well off and at least one player strictly better off. That is, a Pareto-optimal outcome cannot be improved upon without hurting at least one player.

Negotiation Protocols
• Agents use a product-maximizing negotiation protocol (as in Nash bargaining theory)
• It should be a symmetric PMM (product-maximizing mechanism)
• Examples: the 1-step protocol, the monotonic concession protocol, …

The Monotonic Concession Protocol
The rules of this protocol are as follows (see the sketch after this list):
• Negotiation proceeds in rounds
• On round 1, agents simultaneously propose a deal from the negotiation set
• Agreement is reached if one agent finds that the deal proposed by the other is at least as good as or better than its own proposal
• If no agreement is reached, then negotiation proceeds to another round of simultaneous proposals
• In round u + 1, no agent is allowed to make a proposal that is less preferred by the other agent than the deal it proposed at time u
• If neither agent makes a concession in some round u > 0, then negotiation terminates with the conflict deal
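A sketch of the protocol loop in Python. The deals, their utilities, and the naive placeholder strategy are illustrative assumptions; only the round structure, the acceptance test, and the conflict rule follow the slide:

```python
from typing import Callable, Dict, Tuple

Deal = str
UTILS: Dict[Deal, Tuple[float, float]] = {        # toy utilities (agent 1, agent 2)
    "d1": (4.0, 1.0), "d2": (3.0, 2.0), "d3": (2.0, 3.0), "d4": (1.0, 4.0),
}
CONFLICT = (0.0, 0.0)                             # conflict-deal utilities

def run_protocol(propose_next: Callable[[int, Deal, Deal], Deal]) -> Tuple[float, float]:
    # Round 1: each agent proposes its own most preferred deal.
    p1 = max(UTILS, key=lambda d: UTILS[d][0])
    p2 = max(UTILS, key=lambda d: UTILS[d][1])
    while True:
        # Agreement: one agent likes the other's proposal at least as much as its own
        # (if both do, either deal could be chosen; agent 1 accepts here).
        if UTILS[p2][0] >= UTILS[p1][0]:
            return UTILS[p2]
        if UTILS[p1][1] >= UTILS[p2][1]:
            return UTILS[p1]
        # Otherwise both may concede; monotonicity: a new proposal must not be
        # worse for the opponent than the previous one.
        n1, n2 = propose_next(1, p1, p2), propose_next(2, p2, p1)
        assert UTILS[n1][1] >= UTILS[p1][1] and UTILS[n2][0] >= UTILS[p2][0]
        if n1 == p1 and n2 == p2:                 # nobody conceded -> conflict deal
            return CONFLICT
        p1, p2 = n1, n2

# A trivial placeholder strategy: agent 1 concedes one step each round, agent 2 never does.
def naive(agent: int, mine: Deal, theirs: Deal) -> Deal:
    if agent == 2:
        return mine
    order = sorted(UTILS, key=lambda d: UTILS[d][0], reverse=True)
    return order[min(order.index(mine) + 1, len(order) - 1)]

print(run_protocol(naive))                        # converges to an agreement or the conflict deal
```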

The Zeuthen Strategy
Three problems:
• What should an agent's first proposal be? Its most preferred deal
• On any given round, who should concede? The agent least willing to risk conflict
• If an agent concedes, then how much should it concede? Just enough to change the balance of risk

Willingness to Risk Conflict
• Suppose you have conceded a lot. Then:
  – Your proposal is now near the conflict deal
  – In case conflict occurs, you are not much worse off
  – You are more willing to risk conflict
• An agent will be more willing to risk conflict if the difference in utility between its current proposal and the conflict deal is low
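A sketch of the standard Zeuthen risk computation implied above: risk_i = (U_i(own proposal) − U_i(other's proposal)) / U_i(own proposal), with the conflict-deal utility taken as 0; the agent with the smaller risk is less willing to risk conflict and should concede. The numeric utilities below are made up for illustration:

```python
def risk(u_own: float, u_theirs: float) -> float:
    """Zeuthen risk: fraction of utility lost by accepting the opponent's proposal,
    measured against the conflict deal (utility 0)."""
    if u_own <= 0:          # nothing to lose by conflict -> maximal willingness to risk it
        return 1.0
    return (u_own - u_theirs) / u_own

# Toy numbers: agent 1 values its own proposal at 4 and the opponent's at 2;
# agent 2 values its own proposal at 3 and the opponent's at 1.
risk1, risk2 = risk(4.0, 2.0), risk(3.0, 1.0)      # 0.5 and 0.666...
concede = "agent 1" if risk1 < risk2 else "agent 2" if risk2 < risk1 else "both"
print(risk1, risk2, concede)                       # agent 1 is less willing to risk conflict
```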

Nash Equilibrium Again…
• The Zeuthen strategy is in Nash equilibrium: under the assumption that one agent is using the strategy, the other can do no better than use it himself…
• This is of particular interest to the designer of automated agents. It does away with any need for secrecy on the part of the programmer. An agent's strategy can be publicly known, and no other agent designer can exploit the information by choosing a different strategy. In fact, it is desirable that the strategy be known, to avoid inadvertent conflicts.

Nash equilibrium: A Nash equilibrium, named after John Nash, is a set of strategies, one for each player, such that no player has an incentive to unilaterally change her action. Players are in equilibrium if a change in strategies by any one of them would lead that player to earn less than if she remained with her current strategy. For games in which players randomize (mixed strategies), the expected or average payoff must be at least as large as that obtainable by any other strategy.
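As a small illustration of this definition (not taken from the slides), the following sketch checks every pure-strategy profile of a 2×2 game, here the classic prisoner's dilemma payoffs, for the Nash property:

```python
PAYOFFS = {                       # (row action, col action) -> (row payoff, col payoff)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
ACTIONS = ("C", "D")

def is_nash(row: str, col: str) -> bool:
    """No player can gain by a unilateral deviation from (row, col)."""
    u_row, u_col = PAYOFFS[(row, col)]
    row_ok = all(PAYOFFS[(r, col)][0] <= u_row for r in ACTIONS)
    col_ok = all(PAYOFFS[(row, c)][1] <= u_col for c in ACTIONS)
    return row_ok and col_ok

print([(r, c) for r in ACTIONS for c in ACTIONS if is_nash(r, c)])   # [('D', 'D')]
```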

A Hybrid Negotiation Model
• Based on the original Bazaar model
• Takes wholesalers into consideration
• Uses game theory to generate the initial strategy
• Combines common and public knowledge

Extended Bazaar model - a brief description
• A 10-tuple <G, W, D, S, A, H, Ω, P, C, E>:
  – G, a set of players
  – W, a set of wholesalers
  – D, a set of negotiation issues
  – S, a set of agreements over each issue
  – A, a set of all possible actions
  – H, a set of history sequences
  – Ω, a set of relevant information entities
  – P, a set of subjective probability distributions
  – C, a set of communication costs
  – E, a set of evaluation functions

Extended Bazaar model – in a bilateral case
• A 10-tuple <G, W, D, S, A, H, Ω, P, C, E> where:
  – G, a seller and a buyer
  – W, a wholesaler
  – D, a single issue – the product price
  – S, price offers/counter-offers
  – A, possible price offers/counter-offers
  – H, a sequence of price offers/counter-offers at each negotiation round, (a_k | k = 1, 2, …, K)

– continued …
• A 10-tuple <G, W, D, S, A, H, Ω, P, C, E> where:
  – Ω, a set of knowledge entities x that a seller/buyer has about the environment (average price, economic situation, …) and about the counter-party (RP, payoff function, type, …)
  – P, subjective probability distributions of hypotheses on a belief: P_[h,1](x), P_[h,2](x)
  – C, communication costs for a seller or buyer to continue another negotiation round
  – E, evaluation functions E_i: (P_[i,h](x) | x ∈ Ω_i, Pf_i, a) → utility(g_i), a ∈ A_i, E_i ∈ E, i = 1, 2

– continued …
• E, two evaluation functions, one for the seller and one for the buyer:
  E_i: (P_[i,h](x) | x ∈ Ω_i, Pf_i, a) → utility(g_i), a ∈ A_i, E_i ∈ E, i = 1, 2
  For any action a, the result falls into one of three types:
  U_i = 1.0 → {agreement: accept}, U_i = 0.0 → {agreement: quit}, and 0.0 < U_i < 1.0 → {new agreement}

Making a decision over price only
• Accept: if price(a_k^seller) < RP_buyer, then E_1(a_k) = 1, a_k = accept
• Quit: if (price(a_k^seller) − RP_seller ≤ C_1) ∧ (price(a_k^seller) > RP_buyer), then E_1(a_k) = 0, a_k = quit
• Fitness: f_1(s_kj) = 1 − (CP_buyer(j) − RP_seller)/(RP_buyer − RP_seller), for RP_buyer − C_1 > CP_buyer(j) > RP_seller, where s_kj = CP_buyer(j) ∈ S_1, j = 1, 2, …, N_p
  s_kj0 is selected as the counter-offer if f_1(s_kj0) = max_j{ f_1(s_kj) }
• s_kj0 = RP_seller plus a small increment that is regarded as a psychological factor
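A hedged Python sketch of this accept/quit/counter-offer rule. The reservation prices, the communication cost C1, the candidate price grid, and the small psychological margin are all assumed example values:

```python
RP_BUYER, RP_SELLER_EST, C1, MARGIN = 120.0, 100.0, 2.0, 1.0   # assumed example values

def fitness(cp_buyer):
    """f1(s_kj) = 1 - (CP_buyer(j) - RP_seller) / (RP_buyer - RP_seller)."""
    return 1.0 - (cp_buyer - RP_SELLER_EST) / (RP_BUYER - RP_SELLER_EST)

def decide(seller_offer):
    # Accept: the seller's price is already below the buyer's reservation price.
    if seller_offer < RP_BUYER:
        return "accept", seller_offer
    # Quit: the offer is above RP_buyer and there is at most C1 of room left above RP_seller.
    if seller_offer > RP_BUYER and seller_offer - RP_SELLER_EST <= C1:
        return "quit", None
    # Counter-offer: pick the candidate price with the highest fitness, i.e. the one
    # closest to the estimated RP_seller, plus the assumed psychological margin.
    candidates = [p * 1.0 for p in range(int(RP_SELLER_EST) + 1, int(RP_BUYER - C1))]
    best = max(candidates, key=fitness)
    return "counter", max(best, RP_SELLER_EST + MARGIN)

print(decide(140.0))   # ('counter', 101.0)
print(decide(115.0))   # ('accept', 115.0)
```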

Learning with Bayesian rule updating
• P_[h_1k, 1](B_j | h_1k) = P_[h_1(k−1), 1](B_j) · P_[h_1k, 1](h_1k | B_j) / ( Σ_{j=1}^{b} P_[h_1k, 1](h_1k | B_j) · P_[h_1(k−1), 1](B_j) )   (1)
• P_[h_1k, 1](h_1k | B_j) = 1 − |(h_1k/(1 − ) + WP_1k + wp)/2 − B_j| / ((h_1k/(1 − ) + WP_1k + wp)/2)   (2)
• RP_seller = Σ_{j=1}^{b} P_[h_1k, 1](B_j | h_1k) · B_j
  – P_[h_1k, 1](B_j | h_1k) is the posterior distribution
  – P_[h_1(k−1), 1](B_j) is the prior distribution
  – h_1k is the newly incoming information
  – B_j is a hypothesis on the belief RP_seller
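A minimal sketch of the Bayesian update in (1) over discrete hypotheses B_j about RP_seller. The likelihood function below is only a stand-in for (2), whose exact form (the wholesale-price terms) is not fully recoverable from the slide, so treat it as an assumption; the offer sequence and hypothesis grid are example values:

```python
def bayes_update(prior, hypotheses, likelihood):
    """posterior[j] ∝ prior[j] * likelihood(h | B_j), normalised as in (1)."""
    weights = [p * likelihood(b) for p, b in zip(prior, hypotheses)]
    total = sum(weights)
    return [w / total for w in weights]

hypotheses = [100.0, 105.0, 110.0, 115.0]          # candidate values B_j for RP_seller
prior = [0.25] * 4                                 # flat prior for illustration

def likelihood_for_offer(offer):
    # Stand-in for (2): offers closer to a hypothesised RP are deemed more likely under it.
    return lambda b: max(1e-6, 1.0 - abs(offer - b) / offer)

posterior = prior
for offer in (140.0, 135.0, 130.0):                # observed seller offers (example values)
    posterior = bayes_update(posterior, hypotheses, likelihood_for_offer(offer))

rp_estimate = sum(p * b for p, b in zip(posterior, hypotheses))   # RP_seller = Σ P(B_j|h)·B_j
print([round(p, 3) for p in posterior], round(rp_estimate, 2))
```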

Enhanced extended Bazaar model
• Instead of setting the probability of each hypothesis to P_{k=0}(B_j) = 1/b, P_{k=0}(B_j) is calculated for each j.
• Collect publicly available information (a list of prices) to estimate the counter-party's possible demand (RP):
  RP'_seller = ( Σ_i GP_i + Σ_j (WP_j + wp) ) / (u + v)   (3)
• Find a solution using the estimated demand:
  max (RP_buyer − x)(x − RP'_seller),  x = (RP_buyer + RP'_seller)/2   (4)
• Initialize the probability distribution:
  P'(B_j) = 1 − |x − B_j| / x   (5)
  P_{k=0}(B_j) = P'(B_j) / Σ_j P'(B_j)   (6)
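A sketch of the initialization steps (3)–(6). All price values (GP_i, WP_j, wp, RP_buyer, and the hypothesis grid) are assumed example numbers:

```python
def initial_distribution(public_prices, wholesale_prices, wp, rp_buyer, hypotheses):
    # (3) estimated seller reservation price from u public prices and v wholesale prices
    u, v = len(public_prices), len(wholesale_prices)
    rp_seller_est = (sum(public_prices) + sum(w + wp for w in wholesale_prices)) / (u + v)
    # (4) the price maximising (RP_buyer - x)(x - RP'_seller) is the midpoint
    x = (rp_buyer + rp_seller_est) / 2.0
    # (5)-(6) unnormalised closeness to x, then normalise into a distribution
    raw = [1.0 - abs(x - b) / x for b in hypotheses]
    total = sum(raw)
    return rp_seller_est, x, [r / total for r in raw]

rp_est, x, p0 = initial_distribution(
    public_prices=[118.0, 122.0, 120.0],    # GP_i (example values)
    wholesale_prices=[95.0, 98.0],          # WP_j (example values)
    wp=5.0,                                 # additive term paired with WP_j in (3)
    rp_buyer=130.0,
    hypotheses=[100.0, 105.0, 110.0, 115.0])
print(round(rp_est, 2), round(x, 2), [round(p, 3) for p in p0])
```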

Updating the probability distribution

K    Offer    Counter offer    P(B1)    P(B2)    P(B3)    P(B4)
0     ---          ---          0.17     0.26     0.33     0.24
1     140         107.9         0.16     0.22     0.29     0.33
2     135         109.7         0.07     0.18     0.46     0.29
3     130         110.2         0.03     0.14     0.61     0.22

Comparisons
The normalized joint utility is defined as:
  JointUtility = (price_agreed − RP_seller) · (RP_buyer − price_agreed) / (RP_buyer − RP_seller)²   (7)
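Equation (7) transcribed directly, with illustrative reservation prices:

```python
def joint_utility(price_agreed, rp_seller, rp_buyer):
    """Normalized joint utility (7) of an agreed price."""
    return ((price_agreed - rp_seller) * (rp_buyer - price_agreed)
            / (rp_buyer - rp_seller) ** 2)

print(joint_utility(price_agreed=110.0, rp_seller=100.0, rp_buyer=130.0))  # ≈ 0.222
```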

– continued …

System configuration

A Real World Trading Oriented Market-driven Model for Negotiation Agent
Yoshizo Ishihara and Runhe Huang
Faculty of Computer and Information Sciences, Hosei University, Tokyo, Japan

[Diagram: a buyer agent and a seller agent exchanging bids in a negotiation]

Negotiation Factors
• Sim's model is guided by the following four negotiation factors:
  – Trading Opportunity
  – Trading Competition
  – Trading Time
  – Trading Eagerness of the agent itself
• The spread k' between an agent's bid/offer and that of others in the next trading cycle is determined as: [formula in the original slide]

Our Improved Model
• We improved Sim's model in 2004, using a Bayesian updating rule to learn the opponent's eagerness.
• An agent can make a concession for its opponent's motivation.
• The spread k' is redefined as: [formula in the original slide]

A Precondition
• In both Sim's model and our improved model, a negotiation agent has the same behaviors and actions toward all trading partners.
[Figure: the agent offers the same price, $800, to every partner]

A Real World Trading
• In fact, a negotiation strategy between a buyer and a seller is kept secret and unknown to others.
[Figure: the strategies used with other partners are unknown]

A Revised Model
• The revised market-driven model takes each trading partner as an individual with different strategies and actions.
[Figure: different prices, $750 and $850, offered to different partners, each unknown to the others]

The competition factor in the previous model
[Diagram: buyers a[1] … a[m] and sellers b[1] … b[n] fully connected through the traded item]
• Each trading partner has the same number of competitors.
• Each seller gets the same number of demands.
• Each buyer gets the same number of supplies.

Individual Competition (IC)
[Diagram: buyers a[1] … a[m] and sellers b[1] … b[n] individually connected through the traded items]
• IC is the probability that the buyer agent a will become the supplied target for the requested items from the seller agent b.
• A buyer requests i items; a seller has s supplies and sum(i) = d demands.
• If s ≥ d, then [formula in the original slide]
• If s < d, then [formula in the original slide]

Apply to Conflict Probability
• IC = 1 does not affect the previous conflict probability.
• A lower IC makes the conflict probability higher.
• IC = 0 makes the conflict probability 1.
• Example: higher demands make higher IC.
[Plot: conflict probability Pc (0 to 1) against IC (0 to 1), compared with the previous model's value; supply and demand determine IC]

Individual Opportunity (IO)
• The learnt opponent eagerness will affect the opportunity.
• The probability that buyer agent a will obtain a utility v with seller agent b:
  – If Pc = 0.0: Pc → 0.001
  – If Pc < 0.5: [formula in the original slide]
  – If Pc = 0.5: [formula in the original slide]
  – If Pc > 0.5: [formula in the original slide]
  – If Pc = 1.0: Pc → 0.999

Revised Negotiation Strategy
• To bring its offer close to the target value, the agent makes an amount of concession based on the time-dependent strategy: [formula and condition in the original slide]

Relationship among factors
[Diagram: relationship among the factors – Supplies & Demands, Individual Competition, Conflict Probability, Learnt Opponent Eagerness, Agent Eagerness, Individual Opportunity, Spread, Plausible Offer, Deadline & Present Time, Time Strategy, and Next Bid]

Negotiation Results
Each value shows: Bid Price, Learnt Opponent Eagerness, Individual Opportunity
[Results chart in the original slide]

Negotiation Results
Each value shows: Bid Price, Learnt Opponent Eagerness, Individual Opportunity
[Results chart in the original slide]

References:
http://www.csc.liv.ac.uk/~mjw/pubs/gdn2001.pdf
http://www.ecs.soton.ac.uk/~mml/papers/ker99-2.pdf
http://crpit.com/confpapers/CRPITV4Rahwan.pdf
http://xenia.media.mit.edu/~guttman/research/pubs/amet98.pdf
http://www.umiacs.umd.edu/users/sarit/Articles/acai01.pdf
http://www-agki.tzi.de/ecai00-mas/lopes.pdf