LECTURE 7: Reaching Agreements
An Introduction to MultiAgent Systems
http://www.csc.liv.ac.uk/~mjw/pubs/imas
Chapters 14, 15, and 16 in the Second Edition; Chapter 7 in the First Edition

Reaching Agreements
• How do agents reach agreements when they are self-interested?
• In an extreme case (zero-sum encounter) no agreement is possible, but in most scenarios there is potential for mutually beneficial agreement on matters of common interest
  (Another special case: common-payoff games; what's the challenge there?)
• The capabilities of negotiation and argumentation are central to the ability of an agent to reach such agreements

What is mechanism design?
• In mechanism design, we get to design the game (or mechanism)
  – e.g., the rules of the auction, marketplace, election, …
• Goal is to obtain good outcomes when agents behave strategically (game-theoretically)
• Mechanism design is often considered part of game theory
• 2007 Nobel Prize in Economics

Mechanisms, Protocols, and Strategies
• Negotiation is governed by a particular mechanism, or protocol
• The mechanism defines the "rules of encounter" between agents
• Mechanism design is designing mechanisms so that they have certain desirable properties
• Given a particular protocol, how can a particular strategy be designed that individual agents can use?

Mechanism Design
Desirable properties of mechanisms:
• Convergence/guaranteed success
• Maximizing social welfare
• Pareto efficiency
• Individual rationality
• Stability
• Simplicity
• Distribution

Auctions
• An auction takes place between an agent known as the auctioneer and a collection of agents known as the bidders
• The goal of the auction is for the auctioneer to allocate the good to one of the bidders
• In most settings the auctioneer desires to maximize the price (i.e., maximize its revenue); bidders desire to minimize price

Auction Parameters
• Goods can have
  – private value
  – public/common value
  – correlated value
• Winner determination may be
  – first price
  – second price
• Bids may be
  – open cry
  – sealed bid
• Bidding may be
  – one shot
  – ascending
  – descending

English Auctions
• Most commonly known type of auction:
  – first price
  – open cry
  – ascending
• Dominant strategy is for an agent to successively bid a small amount more than the current highest bid until the price reaches their valuation, then withdraw
• Susceptible to:
  – winner's curse
  – shills

Dutch Auctions
• Dutch auctions are examples of open-cry descending auctions:
  – auctioneer starts by offering the good at an artificially high value
  – auctioneer lowers the offer price until some agent makes a bid equal to the current offer price
  – the good is then allocated to the agent that made the offer

First-Price Sealed-Bid Auctions
• First-price sealed-bid auctions are one-shot auctions:
  – there is a single round
  – bidders submit a sealed bid for the good
  – the good is allocated to the agent that made the highest bid
  – the winner pays the price of the highest bid
• Best strategy is to bid less than true valuation

Vickrey Auctions
• Vickrey auctions are:
  – second-price
  – sealed-bid
• The good is awarded to the agent that made the highest bid, at the price of the second-highest bid
• Bidding your true valuation is the dominant strategy in Vickrey auctions
• Susceptible to antisocial behavior

Lies and Collusion
• The various auction protocols are susceptible to lying on the part of the auctioneer, and collusion among bidders, to varying degrees
• All four auctions (English, Dutch, First-Price Sealed-Bid, Vickrey) can be manipulated by bidder collusion
• A dishonest auctioneer can exploit the Vickrey auction by lying about the 2nd-highest bid
• Shills can be introduced to inflate bidding prices in English auctions

Example: (single-item) auctions
• Sealed-bid auction: every bidder submits a bid in a sealed envelope
• First-price sealed-bid auction: highest bid wins, pays amount of own bid
• Second-price sealed-bid auction: highest bid wins, pays amount of second-highest bid
• Example: bid 1: $10, bid 2: $5, bid 3: $1
  – first-price: bid 1 wins, pays $10
  – second-price: bid 1 wins, pays $5
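
A minimal sketch of the two payment rules just described, using the bids from the example above (the function name and bidder labels are illustrative, not from the slides):

```python
# Sketch: winner and payment in first-price vs. second-price sealed-bid auctions.

def sealed_bid_outcome(bids, second_price=False):
    """Return (winner, payment) for a single-item sealed-bid auction.

    bids: dict mapping bidder name -> bid amount.
    second_price: if True, the winner pays the second-highest bid (Vickrey);
                  otherwise the winner pays their own bid (first-price).
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, highest = ranked[0]
    payment = ranked[1][1] if second_price else highest
    return winner, payment

bids = {"bidder 1": 10, "bidder 2": 5, "bidder 3": 1}
print(sealed_bid_outcome(bids, second_price=False))  # ('bidder 1', 10)
print(sealed_bid_outcome(bids, second_price=True))   # ('bidder 1', 5)
```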

Which auction generates more revenue?
• Each bid depends on
  – the bidder's true valuation for the item (utility = valuation − payment),
  – the bidder's beliefs over what others will bid (→ game theory), and
  – the auction mechanism used
• In a first-price auction, it does not make sense to bid your true valuation
  – Even if you win, your utility will be 0…
• In a second-price auction (we will see this next), it always makes sense to bid your true valuation
• A likely outcome for the first-price mechanism: bid 1: $5, bid 2: $4, bid 3: $1; a likely outcome for the second-price mechanism: bid 1: $10, bid 2: $5, bid 3: $1
• Are there other auctions that perform better? How do we know when we have found the best one?

Bidding truthfully is optimal in the Vickrey auction!
• What should a bidder with value v bid? Let b = highest bid among the other bidders
  – Option 1: Win the item at price b, get utility v − b
  – Option 2: Lose the item, get utility 0
• Would like to win if and only if v − b > 0, and bidding truthfully accomplishes exactly this!
• We say the Vickrey auction is strategy-proof

Collusion in the Vickrey auction
• Example: two colluding bidders
  – v1 = first colluder's true valuation
  – v2 = second colluder's true valuation
  – b = highest bid among the other bidders
• If the colluders bid truthfully, colluder 1 wins and pays colluder 2's bid; if colluder 2 does not bid, colluder 1 wins and pays only b
• The difference is the gain to be distributed among the colluders

Analyzing the expected revenue of the first-price and second-price (Vickrey) auctions
• First-price auction: the probability that no bid is higher than b is (b·n/(n−1))^n (for b < (n−1)/n)
  – This is the cumulative distribution function of the highest bid
• The probability density function is the derivative: n·b^(n−1)·(n/(n−1))^n
• Expected value of the highest bid: n·(n/(n−1))^n · ∫₀^((n−1)/n) b^n db = (n−1)/(n+1)
• Second-price auction: the probability that there are not two bids higher than b is b^n + n·b^(n−1)·(1−b)
  – This is the cumulative distribution function of the second-highest bid
• The probability density function is the derivative: n·b^(n−1) + n(n−1)·b^(n−2)·(1−b) − n·b^(n−1) = n(n−1)·(b^(n−2) − b^(n−1))
• Expected value: (n−1) − n(n−1)/(n+1) = (n−1)/(n+1)
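
A quick Monte Carlo check of the calculation above, under the same assumptions (n risk-neutral bidders, valuations i.i.d. uniform on [0, 1], first-price bidders using the equilibrium bid v·(n−1)/n, second-price bidders bidding truthfully). This script is an illustrative sketch, not part of the original slides:

```python
import random

def expected_revenues(n, trials=200_000, seed=0):
    """Estimate expected auctioneer revenue for first- and second-price auctions
    with n bidders whose valuations are i.i.d. uniform on [0, 1]."""
    rng = random.Random(seed)
    first_total = second_total = 0.0
    for _ in range(trials):
        vals = sorted(rng.random() for _ in range(n))
        # First-price equilibrium: each bidder bids v * (n - 1) / n,
        # so revenue is the highest such bid.
        first_total += vals[-1] * (n - 1) / n
        # Second-price with truthful bidding: revenue is the second-highest valuation.
        second_total += vals[-2]
    return first_total / trials, second_total / trials

n = 5
print(expected_revenues(n))   # both estimates close to (n - 1) / (n + 1)
print((n - 1) / (n + 1))      # 0.666...
```

Both estimates converge to (n−1)/(n+1), as the revenue equivalence theorem on the next slide predicts.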

Revenue equivalence theorem
• Suppose valuations for the single item are drawn i.i.d. from a continuous distribution over [0, 1] (with no "gaps"), and agents are risk-neutral
• Then, any two auction mechanisms that
  – in equilibrium always allocate the item to the bidder with the highest valuation, and
  – give an agent with valuation 0 an expected utility of 0,
  will lead to the same expected revenue for the auctioneer

(As an aside) what if bidders are not risk-neutral?
• Behavior in second-price/English does not change, but behavior in first-price/Dutch does
• Risk-averse bidders: first-price/Dutch will get higher expected revenue than second-price/English
• Risk-seeking bidders: second-price/English will get higher expected revenue than first-price/Dutch

(As an aside) interdependent valuations
• E.g., bidding on drilling rights for an oil field
• Each bidder i has its own geologists who do tests, based on which the bidder assesses an expected value vi of the field
• If you win, it is probably because the other bidders' geologists' tests turned out worse, and the oil field is not actually worth as much as you thought
  – The so-called winner's curse
• Hence, bidding vi is no longer a dominant strategy in the second-price auction
• In English auctions, you can update your valuation based on other agents' bids, so they are no longer equivalent to second-price
• In these settings, English > second-price > first-price/Dutch in terms of revenue

Two More Examples
• Generalized Second Price Auctions (like Google AdWords)
• All-pay Auctions

Generalized Second Price Auctions
• n bidders, k slots
• Each slot has a click probability, α1 through αk, with α1 ≥ α2 ≥ … ≥ αk
• Each bidder has a value for one slot, vi, and submits a bid, bi
• Each bidder will have to pay a price, pi, for their slot
• Utility of a bidder is ui = αi(vi − pi)
• Order the bidders by their bids; give the top slot to the highest bidder, the second slot to the second-highest bidder, …
• Each bidder pays the bid of the next-highest bidder below them: pi = b(i+1)

Generalized Second Price Auctions
• The GSP auction is not truthful. For example:
  – Two slots, with α1 = 1 and α2 = 0.4
  – Three bidders, with v1 = 7, v2 = 6, v3 = 1
  – Bidding b1 = 7, b2 = 6, b3 = 1 is not a Nash equilibrium
  – Telling the truth, the highest bidder pays 6 and gets utility of 1 (i.e., α1 × (7 − 6)); if that bidder had bid 5 instead, they would pay 1 and get utility of 2.4 (i.e., 0.4 × (7 − 1))
• How does this differ from the second-price auctions you've seen?
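
The example can be checked with a small sketch of the GSP rule described above (slot i goes to the i-th highest bid, which pays the next-highest bid; utility is αi(vi − pi)). The numbers are the ones from the slide; the function and variable names are illustrative:

```python
def gsp(bids, alphas):
    """Allocate slots by bid and charge each winner the next-highest bid.

    bids: dict bidder -> bid; alphas: click probabilities of the slots.
    Returns dict bidder -> (slot index, price paid).
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = {}
    for slot in range(min(len(alphas), len(ranked))):
        bidder, _ = ranked[slot]
        # Price is the bid just below this one (0 if there is none).
        price = ranked[slot + 1][1] if slot + 1 < len(ranked) else 0
        outcome[bidder] = (slot, price)
    return outcome

alphas = [1.0, 0.4]
values = {"b1": 7, "b2": 6, "b3": 1}

def utility(bidder, outcome):
    if bidder not in outcome:
        return 0.0
    slot, price = outcome[bidder]
    return alphas[slot] * (values[bidder] - price)

truthful = gsp({"b1": 7, "b2": 6, "b3": 1}, alphas)
deviated = gsp({"b1": 5, "b2": 6, "b3": 1}, alphas)
print(utility("b1", truthful))   # 1.0  (wins slot 1, pays 6)
print(utility("b1", deviated))   # 2.4  (wins slot 2, pays 1)
```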

Do not visit this web site:

From Wikipedia, "Bidding Fee Auctions"
"Participants pay a non-refundable fee to purchase bids. Each of the bids increases the price of the item by a small amount, such as one penny…and extends the time of the auction by a few seconds. Bid prices vary by site and quantity purchased at a time, but generally cost 10–150 times the price of the bidding increment. The auctioneer receives the money paid for each bid, plus the final price of the item.
"For example, if an item worth 1,000 currency units (dollars, euros, etc.) sells at a final price of 60, and a bid costing 1 raises the price of the item by 0.01, the auctioneer receives 6,000 for the 6,000 bids and 60 as the final price, a total of 6,060, a profit of 5,060. If the winning bidder used 150 bids in the process, they would have paid 150 for the bids plus 60 for the final price, a total of 210 and a saving of 790. All the other, losing, bidders collectively paid 5,850 and received nothing."
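
The arithmetic in that quote can be reproduced in a few lines (the variable names are just illustrative labels for the quantities in the example):

```python
# Reproduce the bidding-fee ("penny") auction example from the quote above.
item_value = 1000        # worth of the item in currency units
final_price = 60         # price the item reaches
bid_fee = 1              # cost of each bid
increment = 0.01         # how much each bid raises the price
winner_bids_used = 150   # bids placed by the eventual winner

total_bids = final_price / increment                     # 6,000 bids placed in total
auctioneer_income = total_bids * bid_fee + final_price   # bid fees plus final price
auctioneer_profit = auctioneer_income - item_value
winner_cost = winner_bids_used * bid_fee + final_price
winner_saving = item_value - winner_cost
losers_paid = (total_bids - winner_bids_used) * bid_fee

print(auctioneer_income, auctioneer_profit)  # 6060.0 5060.0
print(winner_cost, winner_saving)            # 210 790
print(losers_paid)                           # 5850.0
```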

All-pay Auctions
• The item is awarded to the highest bidder, as in a conventional auction
• But all bidders must pay what they bid, regardless of whether they win
• Used to model situations like political lobbying, looking for a job, architectural competitions, etc.
• Max-profit and sum-profit models of auctioneer revenue (i.e., what does the auctioneer get?)
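
A small sketch of an all-pay outcome, under one common reading of the two revenue models (sum-profit: the auctioneer collects every bid; max-profit: the auctioneer benefits only from the highest bid, as in a crowdsourcing contest where only the best submission is used). That reading is an assumption on my part, not stated on the slide:

```python
def all_pay_outcome(bids):
    """All-pay auction: highest bid wins, but every bidder pays their own bid.

    Returns (winner, sum_profit_revenue, max_profit_revenue), where the two
    revenue figures follow the assumed sum-profit / max-profit interpretation
    described in the text above.
    """
    winner = max(bids, key=bids.get)
    sum_profit = sum(bids.values())   # every bid is collected
    max_profit = bids[winner]         # only the best "submission" is used
    return winner, sum_profit, max_profit

print(all_pay_outcome({"lobbyist A": 8, "lobbyist B": 5, "lobbyist C": 2}))
# ('lobbyist A', 15, 8)
```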

Subject of Current HU Group Research
• Mergers and Collusion in All-Pay Auctions and Crowdsourcing Contests, Omer Lev, Maria Polukarov, Yoram Bachrach, and Jeffrey S. Rosenschein. The Twelfth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013), Saint Paul, Minnesota, May 2013.
• Agent Failures in All-Pay Auctions, Yoad Lewenberg, Omer Lev, Yoram Bachrach, and Jeffrey S. Rosenschein. The Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI 2013), Beijing, August 2013.

Negotiation
• Auctions are only concerned with the allocation of goods: richer techniques for reaching agreements are required
• Negotiation is the process of reaching agreements on matters of common interest
• Any negotiation setting will have four components:
  – A negotiation set: possible proposals that agents can make
  – A protocol
  – Strategies, one for each agent, which are private
  – A rule that determines when a deal has been struck and what the agreement deal is
• Negotiation usually proceeds in a series of rounds, with every agent making a proposal at every round

Negotiation in Task-Oriented Domains
"Imagine that you have three children, each of whom needs to be delivered to a different school each morning. Your neighbor has four children, and also needs to take them to school. Delivery of each child can be modeled as an indivisible task. You and your neighbor can discuss the situation, and come to an agreement that it is better for both of you (for example, by carrying the other's child to a shared destination, saving him the trip). There is no concern about being able to achieve your task by yourself. The worst that can happen is that you and your neighbor won't come to an agreement about setting up a car pool, in which case you are no worse off than if you were alone. You can only benefit (or do no worse) from your neighbor's tasks. Assume, though, that one of my children and one of my neighbors' children both go to the same school (that is, the cost of carrying out these two deliveries, or two tasks, is the same as the cost of carrying out one of them). It obviously makes sense for both children to be taken together, and only my neighbor or I will need to make the trip to carry out both tasks."
--- Rules of Encounter, Rosenschein and Zlotkin, 1994

Machines Controlling and Sharing Resources
• Electrical grids (load balancing)
• Telecommunications networks (routing)
• PDAs (schedulers)
• Shared databases (intelligent access)
• Traffic control (coordination)

Heterogeneous, Self-motivated Agents
The systems:
• are not centrally designed
• do not have a notion of global utility
• are dynamic (e.g., new types of agents)
• will not act "benevolently" unless it is in their interest to do so

The Aim of the Research
• Social engineering for communities of machines
  – The creation of interaction environments that foster certain kinds of social behavior
• The exploitation of game theory tools for high-level protocol design

Broad Working Assumption
• Designers (from different companies, countries, etc.) come together to agree on standards for how their automated agents will interact (in a given domain)
• They discuss various possibilities and their tradeoffs, and agree on protocols, strategies, and social laws to be implemented in their machines

Attributes of Standards
• Efficient: Pareto optimal
• Stable: No incentive to deviate
• Simple: Low computational and communication cost
• Distributed: No central decision-maker
• Symmetric: Agents play equivalent roles
Goal: designing protocols for specific classes of domains that satisfy some or all of these attributes

Distributed Artificial Intelligence (DAI)
• Distributed Problem Solving (DPS)
  – Centrally designed systems, built-in cooperation, have a global problem to solve
• Multi-Agent Systems (MAS)
  – Group of utility-maximizing heterogeneous agents co-existing in the same environment, possibly competitive

Phone Call Competition Example
• Customer wishes to place a long-distance call
• Carriers simultaneously bid, sending proposed prices
• Phone automatically chooses the carrier (dynamically)
  MCI: $0.18   AT&T: $0.20   Sprint: $0.23

Best Bid Wins
• Phone chooses the carrier with the lowest bid
• Carrier gets the amount that it bid
  MCI: $0.18   AT&T: $0.20   Sprint: $0.23

Attributes of the Mechanism
✓ Distributed
✓ Symmetric
✗ Stable
✗ Simple
✗ Efficient
• Carriers have an incentive to invest effort in strategic behavior
  MCI: "Maybe I can bid as high as $0.21..." ($0.18)   AT&T: $0.20   Sprint: $0.23

Best Bid Wins, Gets Second Price (Vickrey Auction)
• Phone chooses the carrier with the lowest bid
• Carrier gets the amount of the second-best price
  MCI: $0.18   AT&T: $0.20   Sprint: $0.23

Attributes of the Vickrey Mechanism
✓ Distributed
✓ Symmetric
✓ Stable
✓ Simple
✓ Efficient
• Carriers have no incentive to invest effort in strategic behavior
  MCI: "I have no reason to overbid..." ($0.18)   AT&T: $0.20   Sprint: $0.23

Domain Theory
• Task Oriented Domains
  – Agents have tasks to achieve
  – Task redistribution
• State Oriented Domains
  – Goals specify acceptable final states
  – Side effects
  – Joint plan and schedules
• Worth Oriented Domains
  – Function rating states' acceptability
  – Joint plan, schedules, and goal relaxation

Postmen Domain (TOD)
[Figure: a graph of nodes a–f around a Post Office; agents 1 and 2 have letters to deliver to marked nodes]

Database Domain (TOD)
[Figure: agents 1 and 2 each send a query to a common database: "All female employees making over $50,000 a year." and "All female employees with more than three children."]

Fax Domain (TOD)
[Figure: agents 1 and 2 have faxes to send to destinations a–f]
• Cost is only to establish the connection

Slotted Blocks World (SOD)
[Figure: blocks numbered 1–3 arranged in slots]

The Multi-Agent Tileworld (WOD)
[Figure: a grid containing agents A and B, tiles, holes with scores (e.g., 2 and 5), and obstacles]

TODs Defined
• A TOD is a triple ⟨T, Ag, c⟩ where
  – T is the (finite) set of all possible tasks
  – Ag = {1, …, n} is the set of participating agents
  – c : ℘(T) → ℝ⁺ defines the cost of executing each subset of tasks
• An encounter is a collection of tasks ⟨T1, …, Tn⟩ where Ti ⊆ T for each i ∈ Ag

Building Blocks
✓ Domain
  – A precise definition of what a goal is
  – Agent operations
• Negotiation Protocol
  – A definition of a deal
  – A definition of utility
  – A definition of the conflict deal
• Negotiation Strategy
  – In equilibrium
  – Incentive-compatible

Deals in TODs
• Given an encounter ⟨T1, T2⟩, a deal is an allocation of the tasks T1 ∪ T2 to the agents 1 and 2
• The cost to i of deal d = ⟨D1, D2⟩ is c(Di), and will be denoted costi(d)
• The utility of deal d to agent i is: utilityi(d) = c(Ti) − costi(d)
• The conflict deal, Θ, is the deal consisting of the tasks originally allocated. Note that utilityi(Θ) = 0 for all i ∈ Ag
• Deal d is individual rational if it weakly dominates the conflict deal
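
A sketch of these definitions for the two-agent case. The cost function used here (one unit per distinct task) is only a stand-in chosen for illustration; any function c : ℘(T) → ℝ⁺ would do:

```python
def cost(tasks):
    """Illustrative cost function: one unit per distinct task (a stand-in for c)."""
    return len(set(tasks))

def deal_utilities(encounter, deal):
    """utility_i(d) = c(T_i) - cost_i(d) for a two-agent encounter.

    encounter: (T1, T2), the tasks originally allocated to agents 1 and 2.
    deal:      (D1, D2), a proposed reallocation of T1 union T2.
    """
    return tuple(cost(Ti) - cost(Di) for Ti, Di in zip(encounter, deal))

encounter = ({"a", "b"}, {"b", "c"})      # both agents were allocated task b
conflict_deal = encounter                  # everyone keeps their own tasks
swap_deal = ({"a", "b", "c"}, set())       # agent 1 does everything

print(deal_utilities(encounter, conflict_deal))  # (0, 0): conflict deal gives utility 0
print(deal_utilities(encounter, swap_deal))      # (-1, 2): not individual rational for agent 1
```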

The Negotiation Set
• The set of deals over which agents negotiate are those that are:
  – individual rational
  – Pareto efficient

The Negotiation Set Illustrated
[Figure: the negotiation set shown in the space of the two agents' utilities]

Negotiation Protocols
• Agents use a product-maximizing negotiation protocol (as in Nash bargaining theory)
• It should be a symmetric PMM (product-maximizing mechanism)
• Examples: one-step protocol, monotonic concession protocol, …

The Monotonic Concession Protocol
Rules of this protocol are as follows:
• Negotiation proceeds in rounds
• On round 1, agents simultaneously propose a deal from the negotiation set
• Agreement is reached if one agent finds that the deal proposed by the other is at least as good as, or better than, its own proposal
• If no agreement is reached, then negotiation proceeds to another round of simultaneous proposals
• In round u + 1, no agent is allowed to make a proposal that is less preferred by the other agent than the deal it proposed at time u
• If neither agent makes a concession in some round u > 0, then negotiation terminates with the conflict deal

The Zeuthen Strategy
Three problems:
• What should an agent's first proposal be?
  – Its most preferred deal
• On any given round, who should concede?
  – The agent least willing to risk conflict
• If an agent concedes, then how much should it concede?
  – Just enough to change the balance of risk

Willingness to Risk Conflict
• Suppose you have conceded a lot. Then:
  – Your proposal is now near the conflict deal
  – In case conflict occurs, you are not much worse off
  – You are more willing to risk conflict
• An agent will be more willing to risk conflict if the difference in utility between its current proposal and the conflict deal is low
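
A sketch of the risk calculation behind this slide and the concession rule of the previous one: an agent's willingness to risk conflict can be measured as the utility it would lose by accepting the other's proposal, divided by the utility it would lose by causing conflict, and the agent with the lower risk concedes. The function names and the numbers in the example are illustrative, not from the slides:

```python
def risk(my_utility_of_mine, my_utility_of_theirs):
    """Zeuthen-style willingness to risk conflict:
    (utility of my own proposal - utility of the other's proposal)
    divided by the utility of my own proposal
    (taken as 1 when my own proposal is worth 0 to me, i.e., nothing left to lose)."""
    if my_utility_of_mine == 0:
        return 1.0
    return (my_utility_of_mine - my_utility_of_theirs) / my_utility_of_mine

def who_concedes(u1_own, u1_other, u2_own, u2_other):
    """Return the agent that should concede this round (the one less willing to risk conflict)."""
    r1, r2 = risk(u1_own, u1_other), risk(u2_own, u2_other)
    return "agent 1" if r1 < r2 else "agent 2"

# Agent 1 values its own proposal at 6 and agent 2's proposal at 2;
# agent 2 values its own proposal at 5 and agent 1's proposal at 4.
print(who_concedes(6, 2, 5, 4))  # agent 2 risks only 1/5 vs. agent 1's 4/6, so agent 2 concedes
```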

Nash Equilibrium Again…
• The Zeuthen strategy is in Nash equilibrium: under the assumption that one agent is using the strategy, the other can do no better than use it himself…
• This is of particular interest to the designer of automated agents. It does away with any need for secrecy on the part of the programmer. An agent's strategy can be publicly known, and no other agent designer can exploit the information by choosing a different strategy. In fact, it is desirable that the strategy be known, to avoid inadvertent conflicts.

Building Blocks
✓ Domain
  – A precise definition of what a goal is
  – Agent operations
✓ Negotiation Protocol
  – A definition of a deal
  – A definition of utility
  – A definition of the conflict deal
• Negotiation Strategy
  – In equilibrium
  – Incentive-compatible

Deception in TODs
• Deception can benefit agents in two ways:
  – Phantom and decoy tasks: pretending that you have been allocated tasks you have not
  – Hidden tasks: pretending not to have been allocated tasks that you have been

Negotiation with Incomplete Information
[Figure: Postmen Domain graph with nodes a–h around the Post Office; agent 1 has letters to b and f, agent 2 has a letter to e]
• What if the agents don't know each other's letters?

1-Phase Game: Broadcast Tasks
[Figure: agent 1 broadcasts its letters (b, f) and agent 2 broadcasts its letter (e)]
• Agents will flip a coin to decide who delivers all the letters

Hiding Letters
[Figure: agent 1 declares only its letter to f, hiding its letter to b; agent 2 declares its letter to e]
• They then agree that agent 2 delivers to f and e

Another Possibility for Deception
[Figure: both agents 1 and 2 have letters to nodes b and c]
• They will agree to flip a coin to decide who goes to b and who goes to c

Phantom Letter
[Figure: agent 1 declares letters to b, c, and a phantom letter to d; agent 2 declares letters to b and c]
• They agree that agent 1 goes to c

Negotiation over Mixed Deals
• A mixed deal ⟨D1, D2⟩ : p means the agents will perform ⟨D1, D2⟩ with probability p, and the symmetric deal ⟨D2, D1⟩ with probability 1 − p
• Theorem: With mixed deals, agents can always agree on the "all-or-nothing" deal, where D1 is T1 ∪ T2 and D2 is the empty set

Hiding Letters with Mixed All-or-Nothing Deals
[Figure: as in the earlier hiding example, agent 1 declares only its letter to f, hiding b; agent 2 declares its letter to e]
• They will agree on the mixed deal where agent 1 has a 3/8 chance of delivering to f and e (see next slide…)

Maximize Product of Utilities
• A deal's utility for an agent is the cost of the conflict deal minus the cost of his part of the deal
• Agent 1's utility: 6 − 8p (will do the whole circuit with probability p)
• Agent 2's utility: 8 − 8(1 − p)
• To maximize the product, the two utilities must be equal: 6 − 8p = 8 − 8(1 − p), so p = 3/8
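
A quick numerical check of the product-maximization step above; it is a sketch that simply scans a grid of values of p:

```python
# Utilities from the slide: agent 1 gets 6 - 8p, agent 2 gets 8 - 8(1 - p) = 8p.
def product_of_utilities(p):
    return (6 - 8 * p) * (8 - 8 * (1 - p))

best_p = max((i / 1000 for i in range(1001)), key=product_of_utilities)
print(best_p)                        # 0.375, i.e., p = 3/8
print(6 - 8 * best_p, 8 * best_p)    # both utilities equal 3.0 at the optimum
```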

Phantom Letters with Mixed Deals
[Figure: as in the earlier phantom example, agent 1 declares letters to b, c, and a phantom letter to d; agent 2 declares letters to b and c]
• They will agree on the mixed deal where agent 1 has a 3/4 chance of delivering all the letters, lowering his expected utility

Sub-Additive TODs
TOD ⟨T, Ag, c⟩ is sub-additive if for all finite sets of tasks X, Y in T we have:
c(X ∪ Y) ≤ c(X) + c(Y)

Sub-Additivity
[Figure: two overlapping sets of tasks X and Y]
c(X ∪ Y) ≤ c(X) + c(Y)

Sub-Additive TODs
• The Postmen Domain, Database Domain, and Fax Domain are sub-additive.
• The "Delivery Domain" (where postmen don't have to return to the Post Office) is not sub-additive.

Incentive Compatible Mechanisms
Sub-Additive TODs:
           Hidden   Phantom
  Pure     L        L
  A/N      T        ↑
  Mix      L        T/P
• L means "there exists a beneficial lie in some encounter"
• T means "truth telling is dominant; there never exists a beneficial lie, for all encounters"
• T/P means "truth telling is dominant, if a discovered lie carries a sufficient penalty"
• A/N signifies all-or-nothing mixed deals

Incentive Compatible Mechanisms
[Figure: the hidden-letter and phantom-letter examples shown alongside the sub-additive table above]
Theorem: For all encounters in all sub-additive TODs, when using a PMM over all-or-nothing deals, no agent has an incentive to hide a task.

Incentive Compatible Mechanisms
           Hidden   Phantom
  Pure     L        L
  A/N      T        ↑
  Mix      L        T/P
• Explanation of the up-arrow: If it is never beneficial in a mixed deal encounter to use a phantom lie (with penalties), then it is certainly never beneficial to do so in an all-or-nothing mixed deal encounter (which is just a subset of the mixed deal encounters)

Decoy Tasks
• Decoy tasks, however, can be beneficial even with all-or-nothing deals
[Figure: a Postmen Domain example in which a decoy task pays off]
Sub-Additive TODs:
           Hidden   Phantom   Decoy
  Pure     L        L         L
  A/N      T        ↑         L
  Mix      L        T/P       L
• Decoy lies are simply phantom lies where the agent is able to manufacture the task (if necessary) to avoid discovery of the lie by the other agent.

Decoy Tasks
(table as on the previous slide)
• Explanation of the down-arrow: If there exists a beneficial decoy lie in some all-or-nothing mixed deal encounter, then there certainly exists a beneficial decoy lie in some general mixed deal encounter (since all-or-nothing mixed deals are just a subset of general mixed deals)

Decoy Tasks
(table as on the previous slide)
• Explanation of the horizontal arrow: If there exists a beneficial phantom lie in some pure deal encounter, then there certainly exists a beneficial decoy lie in some pure deal encounter (since decoy lies are simply phantom lies where the agent is able to manufacture the task if necessary)

Concave TODs
TOD ⟨T, Ag, c⟩ is concave if for all finite sets of tasks Y and Z in T, and X ⊆ Y, we have:
c(Y ∪ Z) − c(Y) ≤ c(X ∪ Z) − c(X)
Concavity implies sub-additivity

Concavity
[Figure: a set Z and nested sets X ⊆ Y]
The cost Z adds to X is more than the cost it adds to Y. (Z − X is a superset of Z − Y)

Concave TODs
• The Database Domain and Fax Domain are concave (not the Postmen Domain, unless restricted to trees).
[Figure: a Postmen Domain graph with sets of nodes X, Y (all blue nodes), and Z]
• This example was not concave: Z adds 0 to X, but adds 2 to its superset Y (all blue nodes)

Three-Dimensional Incentive Compatible Mechanism Table
Theorem: For all encounters in all concave TODs, when using a PMM over all-or-nothing deals, no agent has any incentive to lie.
Concave TODs:
           Hidden   Phantom   Decoy
  Pure     L        L         L
  A/N      T        T         T
  Mix      L        T         T
Sub-Additive TODs:
           Hidden   Phantom   Decoy
  Pure     L        L         L
  A/N      T        ↑         L
  Mix      L        T/P       L

Modular TODs
TOD ⟨T, Ag, c⟩ is modular if for all finite sets of tasks X, Y in T we have:
c(X ∪ Y) = c(X) + c(Y) − c(X ∩ Y)
Modularity implies concavity

Modularity
[Figure: two overlapping sets of tasks X and Y]
c(X ∪ Y) = c(X) + c(Y) − c(X ∩ Y)

Modular TODs
• The Fax Domain is modular (not the Database Domain nor the Postmen Domain, unless restricted to a star topology).
• Even in modular TODs, hiding tasks can be beneficial in general mixed deals
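
The three definitions (sub-additive, concave, modular) can be checked mechanically for a small task set by enumerating subsets. The sketch below does this for an illustrative cost function; the task set and cost function are assumptions for the example, not taken from the slides:

```python
from itertools import chain, combinations

TASKS = {"a", "b", "c"}

def subsets(s):
    """All subsets of s, as frozensets (including the empty set)."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def cost(ts):
    """Illustrative cost function: each task costs 1 (independent tasks)."""
    return len(ts)

def is_subadditive(c, tasks):
    return all(c(x | y) <= c(x) + c(y) for x in subsets(tasks) for y in subsets(tasks))

def is_concave(c, tasks):
    return all(c(y | z) - c(y) <= c(x | z) - c(x)
               for y in subsets(tasks) for z in subsets(tasks)
               for x in subsets(y))          # X ranges over subsets of Y

def is_modular(c, tasks):
    return all(c(x | y) == c(x) + c(y) - c(x & y)
               for x in subsets(tasks) for y in subsets(tasks))

print(is_subadditive(cost, TASKS), is_concave(cost, TASKS), is_modular(cost, TASKS))
# True True True, consistent with modularity implying concavity implying sub-additivity
```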

Three-Dimensional Incentive Compatible Mechanism Table
[Table: incentive-compatibility results (L, T, T/P) for Hidden, Phantom, and Decoy lies under Pure, All-or-Nothing, and Mixed deals, across Modular, Concave, and Sub-Additive TODs]

Related Work
• Similar analysis made of State Oriented Domains, where the situation is more complicated
• Coalitions (more than two agents; Kraus, Shechory)
• Mechanism design (Sandholm, Nisan, Tennenholtz, Ephrati, Kraus)
• Other models of negotiation (Kraus, Sycara, Durfee, Lesser, Gasser, Gmytrasiewicz)
• Consensus mechanisms, voting techniques, economic models (Procaccia, Bachrach, Elkind, Walsh, Wellman, Ephrati)

Conclusions
• By appropriately adjusting the rules of encounter by which agents must interact, we can influence the private strategies that designers build into their machines
• The interaction mechanism should ensure the efficiency of multi-agent systems
  Rules of Encounter → Efficiency

Conclusions
• To maintain efficiency over time of dynamic multi-agent systems, the rules must also be stable
• The use of formal tools enables the design of efficient and stable mechanisms, and the precise characterization of their properties
  Stability · Formal Tools

Argumentation
• Argumentation is the process of attempting to convince others of something
• Gilbert (1994) identified 4 modes of argument:
  1. Logical mode: "If you accept that A and that A implies B, then you must accept that B"
  2. Emotional mode: "How would you feel if it happened to you?"
  3. Visceral mode: "Cretin!"
  4. Kisceral mode: "This is against Christian teaching!"

Logic-based Argumentation
Basic form of logical arguments is as follows:
Database ⊢ (Sentence, Grounds)
where:
• Database is a (possibly inconsistent) set of logical formulae
• Sentence is a logical formula known as the conclusion
• Grounds is a set of logical formulae such that:
  – Grounds is a subset of Database; and
  – Sentence can be proved from Grounds

Attack and Defeat
• Let (φ1, Γ1) and (φ2, Γ2) be arguments from some database Δ…
• Then (φ2, Γ2) can be defeated (attacked) in one of two ways:
  – (φ1, Γ1) rebuts (φ2, Γ2) if φ1 ≡ ¬φ2
  – (φ1, Γ1) undercuts (φ2, Γ2) if φ1 ≡ ¬ψ2 for some ψ2 in Γ2
• A rebuttal or undercut is known as an attack

Abstract Argumentation
• Concerned with the overall structure of the argument (rather than the internals of arguments)
• Write x → y to mean
  – "argument x attacks argument y"
  – "x is a counterexample of y"
  – "x is an attacker of y"
  where we are not actually concerned as to what x, y are
• An abstract argument system is a collection of arguments together with a relation "→" saying what attacks what
• An argument is out if it has an undefeated attacker, and in if all of its attackers are defeated
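
A sketch of the in/out idea on this slide, computed as an iterated labelling of a small attack graph: repeatedly mark an argument in when all of its attackers are out, and out when some attacker is in. The example graph is made up purely for illustration:

```python
def labelling(arguments, attacks):
    """Label arguments 'in' (all attackers out), 'out' (some attacker in),
    or 'undecided' (e.g., arguments caught in attack cycles)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    label = {}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in label:
                continue
            if all(label.get(b) == "out" for b in attackers[a]):
                label[a], changed = "in", True
            elif any(label.get(b) == "in" for b in attackers[a]):
                label[a], changed = "out", True
    return {a: label.get(a, "undecided") for a in arguments}

# Hypothetical chain of attacks: d attacks c, c attacks b, b attacks a.
args = {"a", "b", "c", "d"}
attacks = {("d", "c"), ("c", "b"), ("b", "a")}
print(labelling(args, attacks))
# d: in (unattacked), c: out, b: in, a: out
```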

An Example Abstract Argument System
[Figure: a collection of arguments with arrows showing the attack relation]