
Device (Addressing &) Discovery
Prasun Dewan
Department of Computer Science, University of North Carolina
dewan@unc.edu

Addressing Devices vs. Traditional Servers
- Server and network are always around: static address
- A centralized, heavyweight scheme is acceptable to install and give unique addresses to servers
- Client expected to know the traditional server address
  - imap.cs.unc.edu, www.cnn.com
- With so many dynamic devices and ad-hoc networks, a lightweight decentralized scheme is needed
- Devices may be dynamically added on ad-hoc networks: dynamic address
- Client may not know or care about the device server address
  - print to the nearest printer
  - turn off all light bulbs
- Discovery phase possible: a logical name can be bound to a physical name dynamically
- Implies later binding, and thus a discovery phase

0. Addressing
- Control point and device get an address
  - Use a DHCP server
  - Else use Auto IP
- What is Auto IP?
  - IETF draft "Automatically Choosing an IP Address in an Ad-Hoc IPv4 Network"
- What steps does it take?
  - Pick an address in the 169.254/16 range
  - Check to see if it is used (ARP)
  - Periodically check for a DHCP server
- Could use DNS and include a DNS client
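
A minimal Java sketch of the Auto IP loop above, under stated assumptions: the retry bound and timeout are invented, and since the standard library exposes no ARP, InetAddress.isReachable stands in for the ARP probe.

    import java.net.InetAddress;
    import java.util.Random;

    public class AutoIp {
        // Pick a tentative address in the 169.254/16 range.
        static InetAddress pickCandidate(Random rng) throws Exception {
            int b3 = 1 + rng.nextInt(254);   // 1..254
            int b4 = 1 + rng.nextInt(254);
            return InetAddress.getByName("169.254." + b3 + "." + b4);
        }

        public static InetAddress chooseAddress() throws Exception {
            Random rng = new Random();
            for (int attempt = 0; attempt < 10; attempt++) {   // assumed retry bound
                InetAddress candidate = pickCandidate(rng);
                // Auto IP probes with ARP here; isReachable is a rough stand-in.
                if (!candidate.isReachable(500)) {
                    return candidate;                          // nobody answered: claim it
                }
            }
            throw new IllegalStateException("no free 169.254/16 address found");
        }

        public static void main(String[] args) throws Exception {
            System.out.println("Tentative link-local address: " + chooseAddress());
            // A full implementation would keep this address and periodically
            // retry DHCP, switching over when a server appears.
        }
    }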

Discovery: Active and Passive
- SSDP solution: hybrid approach
  - Advertisement has a lifetime
  - Can simulate a pure push model
- HTTP over UDP
  - What if a message gets lost? Must send each UDP message 3 times
  - Solution over TCP planned
[Diagram: server and client exchanging SSDP messages]

Issues raised by UPnP discovery
- Scaling problem of multicast-based discovery
- Auto shut-off problem
- Simple-minded search (not attribute-based)
- Lack of access control

Jini Discovery
[Diagram: the client holds a Java type and an attribute template and calls lookup() on the NS; the server holds a service object and attributes and calls join(); both discover() the NS, which announce()s itself]

Jini Discovery
- Discovery is of a Java object reference
  - Can be used directly to invoke methods or register for events
  - Language-based solution
- Can search by type
  - But a type is a Java interface/class, e.g. edu.unc.cs.appliances.Printer
  - Can use inheritance for matching, e.g. edu.unc.cs.appliances.Output
  - Versioning problems: the client's type version may not be the same as the server's

Jini Discovery
- Service has typed attributes
  - Color printing, local printer name, physical location, document format, paper size, resolution, access list
  - Some of this info may come from a third party (local admin): physical location, local name, access list
- Client specifies
  - Type of service
  - Template of attributes
    - Non-null values are filled in by discovery, e.g. user interface

Jini Lookup
- A special name server is expected on the network
  - Servers join
    - Can register an object by value (local) or by reference (remote)
    - Service has a lease
  - Clients look up
    - A proxy of a by-reference object is loaded
    - A copy of by-value objects is loaded; may behave as a smart proxy
    - The class of the object may also be dynamically loaded
  - Servers and clients discover it using a LookupDiscovery object, which multicasts
  - The discovery service multicasts to announce its existence
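
A sketch of both sides of this exchange against the Jini APIs (net.jini.*). The Printer interface, lease duration, and wait-based plumbing are invented for illustration; a real by-value service must be serializable, a remote one an exported proxy.

    import net.jini.core.lookup.ServiceItem;
    import net.jini.core.lookup.ServiceRegistrar;
    import net.jini.core.lookup.ServiceTemplate;
    import net.jini.discovery.DiscoveryEvent;
    import net.jini.discovery.DiscoveryListener;
    import net.jini.discovery.LookupDiscovery;

    public class JiniSketch {
        interface Printer { void print(String doc); }   // hypothetical service type

        public static void main(String[] args) throws Exception {
            // Multicast discovery of lookup services (the NS above).
            LookupDiscovery disco = new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
            disco.addDiscoveryListener(new DiscoveryListener() {
                public void discovered(DiscoveryEvent ev) {
                    for (ServiceRegistrar registrar : ev.getRegistrars()) {
                        try {
                            // Server side: join() with a service object and attributes.
                            Printer service = doc -> System.out.println("printing " + doc);
                            registrar.register(new ServiceItem(null, service, null),
                                               60_000L /* lease in ms (assumed) */);

                            // Client side: lookup() by Java type; inheritance matches subtypes.
                            ServiceTemplate tmpl =
                                new ServiceTemplate(null, new Class[] { Printer.class }, null);
                            Printer found = (Printer) registrar.lookup(tmpl);
                            if (found != null) found.print("hello");
                        } catch (Exception e) { e.printStackTrace(); }
                    }
                }
                public void discarded(DiscoveryEvent ev) { }
            });
            Thread.sleep(10_000);  // crude wait for multicast discovery
        }
    }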

Jini Discovery
[Diagram: the same lookup()/join()/discover()/announce() picture as before]
- What if no name server?

Peer Discovery
- What if no name server?
  - Client uses some special method in the server to find the server?
  - Server uses some special method in the client to announce the service?
  - Actually, can reuse the methods of lookup-based discovery.

Peer Lookup
- What if no name server?
  - Client pretends to be a lookup service
    - multicasting the lookup-service announcement
    - replying to lookup-service searches?
  - Servers can send it join information; the client filters it

Dual?
- What if no name server?
  - Server pretends to be a name server
    - sending announcements of its service
    - replying to lookup-service searches
  - Clients can send it lookup information; the server filters it
  - Every lookup request sent to every server?

Service Location Protocol
- What if no name server?
  - Client multicasts the lookup request rather than unicasting it
  - More network traffic?
- SLP address: IP address, port number, type-dependent path
- Not bound to Java
[Diagram: client mlookup() multicast to servers, which announce() their service and attributes]

SLP with Lookup and DHCP
[Diagram: client lookup() and server join()/register go to the NS; both client and server find the NS via a DHCP lookup(NS)]

No DHCP
[Diagram: without DHCP, the NS announce()s itself and is discover()ed by both client and server; lookup() and join() then proceed as before]

Replicated Lookups
- Joins are sent to all NSs discovered (not via multicast!)
- Not considered desirable to discover all NSs.
[Diagram: client lookup() at NS1; NS2 discovered via discover(l1); the server join()s both]

Scoped Discovery
[Diagram: Server 1 (scope "legal") joins NS1, which announces scope "legal"; Server 2 (scope "accounts") joins NS2, which announces scope "accounts"; the client's lookup() uses discover(legal)]

Peer Scoped Discovery
[Diagram: the client's mlookup(legal) reaches Server 1 (scope "legal") but not Server 2 (scope "accounts")]

SLP Scaling
- NS discovered through DHCP
- Also through UDP-based multicast
  - Repeated multicasts list who has been found so far
- NS replicated
  - Synchronized via multicasts of joins to detected lookups
- Lookups partitioned
  - Room 115 vs. 150, Legal vs. Accounts Payable
  - a la DNS
  - Partitions can overlap
- Wide-area scaling?
  - Every service contacts every name server discovered in its partition.

Wide-Area Extension (WASRV)
- Divide the world into SLP domains
- Each domain has
  - An advertising agent
    - Multicasts services to other domains
    - Configuration necessary: only selected services are multicast, otherwise everything is shared
  - A brokering agent
    - Listens to multicasts from remote advertising agents
    - Advertises those services to the local domain

WASRV Limitations addressed by SDS
- Wide-area multicast "ill-advised"
- Configuration necessary to determine what is multicast
  - Otherwise everything is shared
- Linear scaling

Controlled Partitioning
- Partitioning automatically selected
  - Based on query criteria?
    - Globe, OceanStore, Tapestry, Chord, Freenet, DataInterface
    - location = UNC -> UNC server
    - location = UCB -> Berkeley server
  - Works as long as there is a single criterion
    - location = UNC & type = printer: all printers and all UNC devices in one domain
    - type = printer & model = 2001: all printers, all 2001 models, and all UNC devices in one domain
    - Popular criteria (2001 models) can lead to a bottleneck

Query Flooding
- No controlled partitioning
- Query sent to all partitions
  - Service announcement sent to a specific (arbitrary or local) partition
  - Queries are sent frequently
- No control over query rate
  - Scaling problem

Centralization
- A single central name server
  - Napster, Web search engines
  - Multi-criteria search
  - Bottleneck

DNS Hybrid
- Hierarchical scheme
- A single central name server at the root level
  - All queries and service announcements contact it
- Forwards requests to partitioned lower-level servers based on a single-criterion query
- Works because of
  - Positive and negative caching
  - Low update rates

SDS Hybrid
- Partitioning
- Centralization
- Query flooding

SDS Query Filtering
- Service announcement given to a specific NS node: the local domain node
- Query given to a specific NS node: the local domain node
- NS advertisement on a well-known multicast address
  - The address can be used to find an NS using an expanding ring search
    - Increase the TTL until one is found (see the sketch after this list)
- From each NS, all other nodes are reachable
- Information reachable from a neighbour is summarized
  - Summarization is lossy aggregation
  - Hashing
  - Can give false positives
- Can direct a query not satisfied locally to matching neighbours
  - Multiple neighbours because of false positives
  - Can do this in parallel
  - Intelligent query flooding
  - Recursive algorithm
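
A sketch of that expanding ring search in Java: multicast a probe with a doubling TTL until some NS replies. The group address, port, and message format are placeholders.

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;
    import java.net.SocketTimeoutException;

    public class ExpandingRingSearch {
        public static InetAddress findNameServer() throws Exception {
            InetAddress group = InetAddress.getByName("239.255.0.1"); // assumed group
            int port = 5555;                                          // assumed port
            byte[] probe = "WHO_IS_NS".getBytes("UTF-8");
            try (MulticastSocket sock = new MulticastSocket()) {
                sock.setSoTimeout(1000);
                for (int ttl = 1; ttl <= 32; ttl *= 2) {   // widen the ring each round
                    sock.setTimeToLive(ttl);               // limits how far the probe travels
                    sock.send(new DatagramPacket(probe, probe.length, group, port));
                    try {
                        DatagramPacket reply = new DatagramPacket(new byte[512], 512);
                        sock.receive(reply);               // any NS reply ends the search
                        return reply.getAddress();
                    } catch (SocketTimeoutException e) {
                        // nothing in this ring; double the TTL and try again
                    }
                }
            }
            return null; // no NS found within the largest ring
        }
    }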

Mesh vs. Tree
- Mesh
  - Need to detect cycles
    - TTL
    - Or give unique IDs to queries to avoid repetition
- Tree
  - Can summarize information in one direction
    - Parent summarizes children
  - Upper nodes are bottlenecks
    - Centralization a la DNS
    - Queries start at any node rather than always at the root
    - Upper nodes may not be contacted
  - Service announcements propagated to all ancestors
    - Bandwidth used for propagation is bounded: the more bandwidth, the more responsive to changes
    - Summaries are propagated, so the load is lower: the more concise the summary, the more query flooding

Summarization Filters
- All-pass filter
  - Low update load
  - High query load: must contact all neighbours
- Send the complete description to the neighbour
  - No loss in aggregation
  - High update load
  - No flooding
- Need something in between
  - False positives OK
  - False negatives not OK (those nodes are not searched)

Centroid Indexed Terminals Filter
- WHOIS++/LDAP
- For each attribute, send all values of that attribute
  - Service 1: location = "UNC", model = "2001"
  - Service 2: location = "Duke", model = "2000"
  - Summary: location: "UNC", "Duke"; model: "2001", "2000"
  - False positives: location = "UNC", model = "2000"; location = "Duke", model = "2001"
  - No false negatives

Cross Terminals Filter
- For each service description, create hashes of the attribute cross products
  - Service: location = "UNC", model = "2001"
  - Possible matching queries: location = "UNC"; model = "2001"; location = "UNC" & model = "2001"
  - If the actual query hash equals a possible matching-query hash, then positive
- Bound the number of attributes considered to avoid an exponential number of cross products
- Must also find the cross products of the query, because which service attributes were used in the cross products is unknown
  - Otherwise false negatives
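
A sketch of the cross-terminal computation: hash every non-empty subset of the attribute-value pairs, capped at a maximum subset size. The hash function and cap are placeholders, and pairs are assumed to be in a canonical order on both sides.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class CrossTerms {
        // Hashes of all non-empty av-pair subsets of size <= maxAttrs.
        static Set<Integer> crossTerms(List<String> avPairs, int maxAttrs) {
            Set<Integer> hashes = new HashSet<>();
            int n = avPairs.size();
            for (int mask = 1; mask < (1 << n); mask++) {
                if (Integer.bitCount(mask) > maxAttrs) continue;  // bound the products
                List<String> subset = new ArrayList<>();
                for (int i = 0; i < n; i++)
                    if ((mask & (1 << i)) != 0) subset.add(avPairs.get(i));
                hashes.add(subset.hashCode());  // stand-in for a real hash function
            }
            return hashes;
        }

        public static void main(String[] args) {
            List<String> service = List.of("location=UNC", "model=2001");
            List<String> query = List.of("location=UNC", "model=2001");
            // The query's cross terms must be computed too, since we do not know
            // which attribute subsets the service side hashed.
            boolean possibleMatch = !java.util.Collections.disjoint(
                    crossTerms(service, 3), crossTerms(query, 3));
            System.out.println("possible match: " + possibleMatch);
        }
    }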

Bloom-filtered Cross Terminals
- Given a list of hashes: d1, d2, ..., dn
- Create a compressed word of size L using hashing salts s1, s2, ..., sn
- Bit x is set if hash(di + sj) mod L = x for some i, j
- An item d is in the list if hash(d + sj) mod L is set for all j
- What happens when a service is deleted?
  - Keep a reference count with each set bit
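
A sketch of this salted, reference-counted Bloom filter in Java: a bit x is set when hash(d + sj) mod L = x, membership requires every salted position to be set, and per-bit counters support deletion. Sizes and salts are arbitrary.

    public class CountingBloom {
        private final int[] refCount;   // reference count per bit, enables delete
        private final String[] salts;

        public CountingBloom(int size, String[] salts) {
            this.refCount = new int[size];
            this.salts = salts;
        }

        private int position(String d, String salt) {
            return Math.floorMod((d + salt).hashCode(), refCount.length);
        }

        public void add(String d) {
            for (String s : salts) refCount[position(d, s)]++;
        }

        public void remove(String d) {                  // service deletion
            for (String s : salts) refCount[position(d, s)]--;
        }

        public boolean mightContain(String d) {         // false positives possible,
            for (String s : salts)                      // false negatives not
                if (refCount[position(d, s)] == 0) return false;
            return true;
        }

        public static void main(String[] args) {
            CountingBloom f = new CountingBloom(1024, new String[] {"s1", "s2", "s3"});
            f.add("location=UNC&model=2001");
            System.out.println(f.mightContain("location=UNC&model=2001")); // true
            f.remove("location=UNC&model=2001");
            System.out.println(f.mightContain("location=UNC&model=2001")); // false
        }
    }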

False Positives in BCT
- Cross products
- Limiting the number of attributes
- Bloom filters

Range queries
- Query: Location = University of *; Model > 2000; Type = printer
- The Bloom filter elides the query, ignoring range attributes: Type = printer
- Keeps a list of false positives with queries
  - Negative caching

Tree construction
- Support multiple hierarchies
- Can be specified by a configuration file
  - Admin domains: edu, com
- Computed automatically
  - Network topology (hops)
  - Geographic location (distance)
- A query specifies which domain to consider in the search.
- A special primary domain guarantees coverage of all nodes.

Tree construction
- A node can serve multiple levels in the hierarchy
- Child nodes can be dynamically added/deleted
- Services and clients continuously listen for domain and address announcements
- Replication for fault tolerance
  - Multicast address representing the replicas
- Replicas and parents listen for heartbeats

Other SDP Features
- Security
- Hierarchical rather than flat attributes

SDP Security
Kinds of security:
- Access control
  - An arbitrary client cannot discover an arbitrary service
  - Arbitrary clients and services can invoke NS methods (lookup() and join())
- Authentication
  - Clients, services, and NS
- Privacy
  - Service descriptions
  - Not queries or NS announcements

Access Control Mechanisms
- Capability lists vs. access lists
- Access lists for persistence
  - The AC server keeps them
  - The client (not the NS) contacts the AC server
- Derived capability lists for performance
  - A la an open call returning a file handle
  - Given to the client by the AC server after authentication

Authentication Mechanisms
- Trusted machine address, port number
  - Cannot use when the address is variable
- Public key
  - Sender decrypts the message with its own private key
  - Authenticator encrypts the message with the sender's public key
  - Used for client and server

Privacy Mechanisms
- Symmetric: encryption and decryption keys are the same
  - E.g. XOR, Blowfish
- Asymmetric encryption
  - Sender encrypts with the receiver's public key
  - Receiver decrypts with the receiver's private key
  - RSA
- Performance:
  - Blowfish: encryption 2.0 ms; decryption 1.7 ms
  - RSA: encryption 15.5 seconds; decryption 142.5 ms (raising the message to the power of the key)
  - DSA: signature 33.1 ms; verification 133.4 ms
- The NS can get overloaded with asymmetric crypto
- How to establish a symmetric key?

SDS Privacy Mechanisms
- Use asymmetric crypto to establish a symmetric key for some tunable time period
- Use the symmetric key for sending info during that period.

SDP Hierarchical Attributes
- A service is associated with an XML DTD describing its attributes
- join() describes hierarchical attributes
  - Must follow the syntax specified by the DTD
  - Can add tags
- Example description (tag markup not preserved): print466; lws466; 466; yes; yes; http://joker.cs/lws466

SDP Hierarchical Attributes
- lookup() describes a hierarchical template
  - <?xml version="1.0"?> ... yes; yes (tag markup not preserved)
- A DTD for it?
- Bound to a particular type?

An Architecture for a Secure Service Discovery Service
Steven Czerwinski, Todd Hodes, Ben Zhao, Anthony Joseph, Randy Katz
UC Berkeley Internet Scale Research Group

Outline
- Intro
- Architecture
- Security
- Wide Area
- Conclusion

Supporting Ubiquitous Computing
- Ubiquitous computing envisions...
  - Billions of computers and devices available to users
  - Devices seamlessly interacting with all others
  - Networks and computers as an unobtrusive utility
- One problem: locating servers and devices
  - How can you locate a light bulb among billions?
  - The solution must be scalable, fault-tolerant, self-configuring, secure, and support the wide area
- Existing solutions don't adequately address these needs

A Secure Service Discovery Service
The idea: a secure directory tool which tracks services in the network and allows authenticated users to locate them through expressive queries
- Services are applications/devices running in the network
- One piece of the puzzle
  - Helps manage the explosive growth of services
  - Aids configuration by providing indirection
  - Aids in protecting users and services by providing security

Berkeley Service Discovery Service
[Diagram: user czerwin@cs asks "Where is a color printer?"; the query and the returned XML service description name a 443 Phaser printer: io.printer, Soda/443, yes, yes, rmi://batman.cs]

Discovery Services
- Discovery/directory services are not new
  - Provide a mapping of attribute values to domain-specific addresses
  - Examples: telephone books, card catalogs, etc.
- Computer network discovery services
  - DNS, NIS, SAP, Globe, LDAP, Jini LookUp service

Differentiating Discovery Services
- Query routing
  - Implicitly specified by the query (DNS, Globe)
- Queries
  - Query grammar complexity (LDAP vs. DNS)
- Push (advertisements) versus pull (queries)
  - Pull only (DNS) vs. push only (SAP, modulo caching)
- Update rate
  - Short for mobility vs. long for efficient caching

Discovery Services, Cont.
- Bootstrapping
  - "Well-known" local name ("www.")
  - List of unicast addresses (DNS)
  - Well-known global/local multicast address (SAP, SLP)
- Soft state vs. hard state
  - Implicit recovery vs. guaranteed persistence
- Service data
  - Reference (Globe) vs. content (SAP+SDP)
- Security
  - Privacy and authentication

Features of the Berkeley SDS
- Hierarchical network of servers
  - Multiple hierarchies based on query types
- Queries
  - Use XML for service descriptions and queries
- Bootstrapping via multicast announcements
  - Listen on a well-known global channel for all parameters
- Soft-state approach
  - State rebuilt by listening to periodic announcements
- Secure
  - Use certificates/capabilities to authenticate

The Berkeley SDS Architecture
[Diagram: an SDS server hierarchy (UC Berkeley -> Soda Hall, Cory Hall -> Rooms 464, 466) with services (printer, jukebox), a converter, a client "czerwin@cs", a certificate authority, and a capability manager]

The Berkeley SDS Architecture
[Same diagram]
SDS servers:
- Create the hierarchy for query routing
- Store service information and process requests
- Advertise their existence for bootstrapping

The Berkeley SDS Architecture
[Same diagram]
Services:
- Responsible for creating and propagating their XML service description

The Berkeley SDS Architecture
[Same diagram]
Clients:
- The users of the system
- Perform lookup requests via an SDS server

The Berkeley SDS Architecture
[Same diagram]
Certificate authority:
- Provides a tool for authentication
- Distributes certificates to the other components

The Berkeley SDS Architecture
[Same diagram]
Capability manager:
- Maintains access control rights for users
- Distributes capabilities to the other components

How the Pieces Interact...
- Client queries:
  - SDS address learned from server announcements
  - Sends a service specification
  - Gets back the service description and URL
- SDS server announcements:
  - Global multicast address
  - Periodic, for fault detection
  - Provide all parameters
- Service announcements:
  - Multicast address learned from the server
  - Periodic, for soft state
  - Contain the description
[Diagram: client, SDS server, backup SDS server, printer, music server]

Security Goals
- Access control
- Authentication of all components
- Encrypted communication

Security Goals
- Access control
  - Services specify which users may "discover" them
- Authentication of all components
  - Protects against masquerading
  - Holds components accountable for false information
- Encrypted communication
  - Authentication is meaningless without encryption
  - Hides sensitive information (service announcements)
- No protection against denial-of-service attacks

Security Hazards
- All components use certificates for authentication
- Clients (e.g. <soda-admin@cs>):
  - Encryption for two-way communication
  - Have to prove rights
  - Authenticated RMI
- SDS server announcements:
  - Have to sign information
  - No privacy needed
  - Signed broadcasts
- Service announcements:
  - Only the intended server can decrypt
  - Signed descriptions to validate
  - Secure one-way broadcasts

Secure One-Way Broadcasts
- The service signs the description with its private key (DSA)
- The signed description is encrypted with a session key using symmetric encryption (Blowfish)
- The session key is encrypted with the server's public key using asymmetric encryption (RSA)
- Key idea: use an asymmetric algorithm to encrypt the symmetric key

Secure One-Way Broadcasts
- To decode, only the intended server can decrypt the session key (with its private key)
- Use the session key to retrieve the signed service description (Blowfish)
- Cache the session key to skip later asymmetric operations
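
A sketch of the broadcast packaging in JCE terms, following the slides: sign the description with the service's DSA key, encrypt it under a fresh Blowfish session key, and wrap the session key with the server's RSA public key. Key sizes and encodings are assumptions.

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class SecureOneWayBroadcast {
        public static void main(String[] args) throws Exception {
            KeyPairGenerator dsaGen = KeyPairGenerator.getInstance("DSA");
            dsaGen.initialize(1024);                          // 1024-bit to suit SHA1withDSA
            KeyPair serviceKeys = dsaGen.generateKeyPair();   // service's signing key
            KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
            KeyPair serverKeys = rsaGen.generateKeyPair();    // SDS server's key pair

            byte[] description = "<service>printer...</service>".getBytes("UTF-8");

            // 1. Sign the description (DSA).
            Signature signer = Signature.getInstance("SHA1withDSA");
            signer.initSign(serviceKeys.getPrivate());
            signer.update(description);
            byte[] signature = signer.sign();

            // 2. Encrypt description + signature under a fresh session key (Blowfish).
            SecretKey session = KeyGenerator.getInstance("Blowfish").generateKey();
            Cipher blowfish = Cipher.getInstance("Blowfish");
            blowfish.init(Cipher.ENCRYPT_MODE, session);
            byte[] payload = blowfish.doFinal(concat(description, signature));

            // 3. Wrap the session key with the server's RSA public key; only that
            //    server can unwrap it, and it can cache the session key afterwards.
            Cipher rsa = Cipher.getInstance("RSA");
            rsa.init(Cipher.WRAP_MODE, serverKeys.getPublic());
            byte[] wrappedKey = rsa.wrap(session);

            System.out.println(payload.length + " payload bytes, "
                    + wrappedKey.length + " wrapped-key bytes to broadcast");
        }

        static byte[] concat(byte[] a, byte[] b) {
            byte[] out = new byte[a.length + b.length];
            System.arraycopy(a, 0, out, 0, a.length);
            System.arraycopy(b, 0, out, a.length, b.length);
            return out;
        }
    }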

Wide Area
[Diagram: hierarchy Root -> UC Berkeley (UCB Physics, UCB CS -> ISRG -> Room 443, IRAM), Kinko's #123, Stanford U (CS, Physics, Mobile People)]
- Hierarchy motivation: divide responsibility among servers for scalability
- The big question: how are queries routed between servers?

The Wide Area Strategy
- Build hierarchies based upon query criteria
  - Administrative domain
  - Network topology
  - Physical location
- Aggregate service descriptions (lossy)
- Route queries based on aggregation tables
=> Parent-Based Forwarding (PBF)

Service Description Aggregation
- Hash values of tag subsets of a service description are used as the description summary
- The hash list is compressed with a Bloom filter [Bloom 70]
- Fixed-size aggregation tables prevent explosion at the roots
- Guarantees no false negatives
- Can have false positives; the probability is affected by table size
- Algorithm:
  - To add a service: compute the description tag subsets and insert them into the Bloom filter table
  - To query: compute the query tag subsets and examine the corresponding entries in the Bloom filter table for possible matches

Multiple Hierarchies
[Diagram: the administrative hierarchy — Root -> UC Berkeley (UCB Physics, UCB CS -> ISRG -> Room 443, IRAM), Kinko's #123, Stanford U (CS, Physics, Mobile People)]

Multiple Hierarchies
[Diagram: the physical-location hierarchy — Northern California Root -> Berkeley, US (UC Berkeley: UCB Physics; Soda Hall: ISRG, IRAM, Room 443; Hearst St: Kinko's #123) and Stanford, US (Stanford U: CS, Physics, Mobile People)]

Query Routing in Action
[Diagram: client czerwin@cs in Room 443 issues the query <type>fax</type> <color>yes</color>; SDS servers: Berkeley, US -> UC Berkeley, UCB Physics, Soda Hall (ISRG, IRAM), Hearst St (Kinko's #123, which offers a color fax)]

Query Routing in Action
[Same diagram]
The Room 443 server examines its data and tables, and routes the query to its parent.

Query Routing in Action
[Same diagram]
Each server checks its aggregation tables; Hearst St sees a possible hit.

Query Routing in Action
[Same diagram]
Kinko's #123 finds a match and returns the service description.

Conclusion
- A tool for other applications
  - Provides a listing of services in the network
  - XML descriptions allow for flexibility
  - Well-defined security model
  - Fault tolerant, scalable
  - Releasing the local-area implementation as part of Ninja
- Ongoing work
  - Experimenting with the wide-area strategy and caching
- For more information: sds@iceberg.cs.berkeley.edu

INS Issues
- System-supported search
  - System parameters: hop count
  - Application parameters: least loaded printer
- Mobility
  - Node mobility
    - A node may move between discovery and operation
    - End-to-end solution
  - Service mobility
    - The ideal node changes: least loaded printer, closest location
- How to store hierarchical attributes?
- Fault tolerance and availability

System-supported search
- Allow a service to advertise an application-defined metric (load)
  - Single metric: either least loaded or closest printer
- The name server will find the service with the least value of the metric

Mobility
- The client never sees a physical address
  - The query serves as an intentional name for source and destination
- The discovery infrastructure also does message routing
  - Different from query routing in SDS
- Conventional model
  - Get an address from the query
  - Use the address to send the message
- INS model
  - Send the message with the query
  - What if multiple services match?
    - Anycast: send to the service with the least value of the metric
    - Multicast: send to all matching services; cannot use Internet multicast!

Multicast approach
- Internet multicast groups
  - Contain Internet addresses
  - But Internet addresses may change!
- Point-to-point multicast
  - Inefficient because of duplication of messages along common paths
- Overlay routing network of NSs
  - Multiple replicated NSs; the number can vary dynamically
  - Each service and client bound to a specific NS
  - Spanning tree of NSs, based on round-trip latency
  - An NS forwards a message to the appropriate neighbours

Distributed Spanning Tree Construction
- The list of current name servers is kept in a (possibly replicated) NS server
- New name server addition:
  - Gets the list of NSs from the NS server (the NS server serializes requests)
  - Pings all existing NSs
  - Makes the closest NS its neighbour (parent) — see the sketch below
  - Puts itself in the NS server list
- Do the connections make a spanning tree?
  - The NSs are connected
  - n-1 links are made
  - Any connected graph with n-1 links is a tree
  - Nodes are put in the NS server in linear order
  - A later node cannot be the parent of an earlier node
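
A sketch of the join step: probe every existing NS and adopt the nearest as parent. The deck does not say how the ping is done, so RTT is estimated here by timing a TCP connect; host names and the port are placeholders.

    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.List;

    public class SpanningTreeJoin {
        // Estimate RTT to a candidate NS by timing a TCP connect (stand-in for ping).
        static long probe(String host, int port) {
            long start = System.nanoTime();
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 1000);
                return System.nanoTime() - start;
            } catch (Exception e) {
                return Long.MAX_VALUE;  // unreachable candidates are never chosen
            }
        }

        static String chooseParent(List<String> existingNs, int port) {
            String parent = null;
            long best = Long.MAX_VALUE;
            for (String host : existingNs) {        // ping all existing NSs
                long rtt = probe(host, port);
                if (rtt < best) { best = rtt; parent = host; }
            }
            return parent;  // closest NS becomes this node's neighbour (parent)
        }

        public static void main(String[] args) {
            // Host names are placeholders; the list would come from the NS server,
            // which serializes joins, so a later node never parents an earlier one.
            List<String> ns = List.of("ns1.example.org", "ns2.example.org");
            System.out.println("parent: " + chooseParent(ns, 5556));
        }
    }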

Load balancing
- The NS server keeps a list of potential NS sites
- As load gets high
  - New NSs are added at inactive potential sites
  - They dynamically attach themselves to the spanning tree
- As load gets low at a site
  - The NS is killed
  - The tree must adjust: incremental vs. batch
    - Incremental: the NS informs its peers (children), and they rejoin

Storing/Searching Attributes
Service 1 announcement: [service = camera [color = true]]
Service 2 announcement: [service = printer [postscript = true]] [location = 536]
A name record holds: IP address, next-hop NS
[Diagram: name tree — root -> service -> camera -> color -> true (Service 1 name record) and printer -> postscript -> true (Service 2 name record); root -> location -> 536 (Service 2 name record)]
Query: [location = 536] [service = camera]

Storing/Searching Attributes

    Lookup(T, q): Set of name records
      let S be the set of all name-records
      for each av pair (aq, vq) of the query
        if vq = *
          find attribute node aq in T
          intersect S with the name records in subtree(aq)
        else
          find av node (aq, vn) in T
          if vq or vn is a leaf
            intersect S with the name records in subtree(vn)
          else
            intersect S with the name records in Lookup(subtree(vn), subtree(vq))
      return S

Lookup overhead
[Diagram: a name tree of depth d with na attributes per level and rv values per attribute]
- T(d) = na * (ta + tv + T(d-1))
- T(d) = na * (1 + T(d-1)) with hashing
- T(0) = b (intersect all name records with the selected set)
- T(d) = O(na^d * (1 + b))
- With T(0) = O(n), where n is the number of name records, "experimentally" from random name specifiers and trees: T(d) = O(na^d * n)

Fault Tolerance
- All name servers are replicas
  - Each announcement is forwarded to the neighbours
  - Announcements are sent periodically (heartbeat)
- The heartbeat mechanism detects service and NS failure
- If a service fails
  - Its announcement is removed
- If an NS fails
  - Reconfigure the overlay network
- Client queries are sent to a single NS
  - Load balancing
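
A sketch of the soft-state side of this: each announcement refreshes a timestamp, and a periodic purge silently drops services whose heartbeat has stopped, so no explicit de-registration is needed. The lifetime is arbitrary.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class SoftStateTable {
        private static final long LIFETIME_MS = 30_000;  // assumed announcement lifetime
        private final Map<String, Long> lastHeard = new ConcurrentHashMap<>();

        // Called for every periodic announcement (the heartbeat).
        public void announce(String serviceName) {
            lastHeard.put(serviceName, System.currentTimeMillis());
        }

        // Run periodically: entries whose heartbeat stopped simply expire.
        public void purgeExpired() {
            long now = System.currentTimeMillis();
            lastHeard.entrySet().removeIf(e -> now - e.getValue() > LIFETIME_MS);
        }

        public boolean isAlive(String serviceName) {
            return lastHeard.containsKey(serviceName);
        }
    }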

Forwarding Announcements
[Diagram: the name tree — root -> service -> camera -> color -> true (Service 1 name record), printer -> postscript -> true (Service 2 name record), location -> 536]

Forwarding Announcements
[Diagram: the announcement [service = camera [color = true]] [location = 536] is split into [service = camera [color = true]] and [location = 536] and merged into the name tree]

Synthesized GetName

    GetName(r):
      name <- a new empty name-specifier
      root.PTR <- name
      for each parent value node of r
        Trace(parentValueNode, null)
      return name

    Trace(valueNode, name):
      if valueNode.PTR != null
        if name != null
          graft name as a child of valueNode.PTR
      else
        valueNode.PTR <- (valueNode.parentAttribute(), valueNode.value())
        if name != null
          graft name as a child of valueNode.PTR
        Trace(valueNode.parentValue(), valueNode.PTR)

Update time
- t = n * (Tlookup + Tgraft + Tupdate + network_delay)
- Experiments show this is a costly operation
- The solution is to divide the name space into disjoint virtual spaces, a la SLP
  - camera-ne43
  - building-ne43
- Each service heartbeat is not sent to each NS
- Sending tree diffs as a solution?

Applications
- Floorplan: a map-based navigation tool
  - Example service: [service = camera [entity = transmitter] [id = a]] [room = 150]
  - Retrieving the map: [service = locator [entity = server]] [region = first floor]
  - Retrieving devices: [room = 150]

Applications
- Load-balancing printer
  - Unicast: [id = hplj156]
    - Retrieve, delete, submit a job
  - Anycast: [service = printer [entity = spooler]] [room = 517]
  - The printer advertises a metric based on error status, number of enqueued jobs, and length of jobs

Applications
- Camera: a mobile image/video service
  - Request a camera image
    - destination address: [service = camera [entity = transmitter]] [room = 510]
    - source address: [service = camera [entity = receiver] [id = r]] [room = 510]
  - Multicast to receivers: multiple users viewing an image
  - Multicast to senders: a user can move while viewing multiple cameras

NS Caching
- Case 1
  - The camera multicasts an image
  - Some other client subsequently requests the image
- Case 2
  - A client requests an image
  - Then requests the image again, a la web caching
- Caching implemented for any service using INs
  - Assumes an RPC call interface?
- A cache lifetime is given

intentional naming system
William Adjie-Winoto, Hari Balakrishnan, Elliot Schwartz, Jeremy Lilley
MIT Laboratory for Computer Science
http://wind.lcs.mit.edu/
SOSP 17, Kiawah Island Resort, December 14, 1999

Environment
- Heterogeneous network with devices, sensors, and computers
- Dynamism
  - Mobility
  - Performance variability
  - Services "come and go"
  - Services may be composed of groups of nodes
- Example applications
  - Location-dependent mobile apps
  - Network of mobile cameras
- Problem: resource discovery

Design goals and principles
- Expressiveness: names are intentional; apps know what, not where
- Responsiveness: integrate name resolution and message routing (late binding)
- Robustness: decentralized, cooperating resolvers with a soft-state protocol
- Easy configuration: name resolvers self-configure into an overlay network

Naming and service discovery
- Wide-area naming
  - DNS, Global Name Service, Grapevine
- Attribute-based systems
  - X.500, Information Bus, Discover query routing
- Service location
  - IETF SLP, Berkeley Service Discovery Service
- Device discovery
  - Jini, Universal Plug-and-Play
- Intentional Naming System (INS)
  - Mobility & dynamism via late binding
  - Decentralized, serverless operation
  - Easy configuration

INS architecture
[Diagram: clients and services attach to an overlay network of name resolvers; names travel with messages (late binding), supporting intentional anycast and intentional multicast — message routing using intentional names]

Name-specifiers
- Expressive name language (like XML)
- Resolver architecture decoupled from the language
- Providers announce descriptive names; clients make queries
  - Attribute-value matches
  - Wildcard matches
  - Ranges
Examples:
  [vspace = lcs.mit.edu/camera] [building = ne43 [room = 510]] [resolution = 800x600] [access = public] [status = ready]
  [vspace = mit.edu/thermometer] [building = ne43 [floor = 5 [room = *]]] [temperature < 600F] -> data

Name lookups
- Lookup: tree-matching algorithm
  - AND operations among orthogonal attributes
- Polynomial time in the number of attributes
  - O(nd), where n is the number of attributes and d is the depth

Resolver network
- Resolvers exchange routing information about names
- Multicast messages are forwarded via resolvers
- Decentralized construction and maintenance
  - Implemented as an "overlay" network over UDP tunnels
  - Not every node needs to be a resolver
  - Too many neighbors cause overload, but a connected graph is needed
  - The overlay link metric should reflect performance
  - The current implementation builds a spanning tree

Late binding
- The mapping from name to location can change rapidly
- The overlay routing protocol uses triggered updates
- A resolver performs lookup-and-forward
  - lookup(name) is a route; forward along it
- Two styles of message delivery
  - Anycast
  - Multicast

Intentional anycast
- lookup(name) yields all matches
- The resolver selects a location based on an advertised, service-controlled metric
  - E.g., server load
- Tunnels the message to the selected node
- Application-level vs. IP-level anycast
  - The service-advertised metric is meaningful to the application
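
A sketch of the anycast selection: lookup has produced all matching name records, and the resolver picks the one with the smallest service-advertised metric. The record shape and addresses are invented.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    public class IntentionalAnycast {
        record NameRecord(String address, int metric) { }  // metric advertised by service

        // lookup(name) has already returned all matches; pick the least-metric one.
        static Optional<NameRecord> selectAnycast(List<NameRecord> matches) {
            return matches.stream().min(Comparator.comparingInt(NameRecord::metric));
        }

        public static void main(String[] args) {
            List<NameRecord> printers = List.of(
                    new NameRecord("18.26.0.1", 7),   // 7 jobs queued
                    new NameRecord("18.26.0.2", 2));  // 2 jobs queued
            selectAnycast(printers).ifPresent(r ->
                    System.out.println("tunnel message to " + r.address()));
        }
    }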

Intentional multicast
- Use the intentional name as a group handle
- Each resolver maintains a list of neighbors for a name
- Data is forwarded along a spanning tree of the overlay network
  - A shared tree, rather than per-source trees
- Enables more than just receiver-initiated group communication

Robustness
- Decentralized name resolution and routing in a "serverless" fashion
- Names are weakly consistent, like network-layer routes
  - Routing protocol with periodic & triggered updates to exchange names
- Routing state is soft
  - Expires if not updated
  - Robust against service/client failure
  - No need for explicit de-registration

Routing Protocol Scalability
- The name-tree at a resolver carries routing updates for all names (vspace = camera, vspace = 5th-floor)
- Delegate part of this to another INR
- vspace = a set of names with common attributes
- Virtual-space partitioning: each resolver now handles a subset of all vspaces

Applications
- Location-dependent mobile applications
  - Floorplan: a map-based navigation tool
  - Camera: a mobile image/video service
  - Load-balancing printer
  - TV & jukebox service
- Sensor computing
- Network-independent "instant messaging"
- Clients encapsulate state in late-binding applications

Status
- Java implementation of INS & applications
  - Several thousand names on a single Pentium PC; discovery time linear in hops
  - Integration with Jini, XML/RDF descriptions in progress
- Scalability
  - Wide-area implementation in progress
- Deployment
  - Hook the wide-area architecture into DNS
  - Standardize virtual space names (like MIME for devices/services)

Conclusion
- INS is a resource discovery system for dynamic, mobile networks
- Expressiveness: names that convey intent
- Responsiveness: late binding by integrating resolution and routing
- Robustness: soft-state name dissemination with periodic refreshes
- Configuration: resolvers self-configure into an overlay network

Active Name Issues
- Example
  - Printer selected randomly or round-robin
- The INS metric of load is not sufficient
- Need some application-specific code

Active Approaches
- Active networks
  - Install code in Internet routers
  - Can only look at low-level packets: not enough semantics
- Active services
  - Allow an application to install arbitrary code in the network: application-awareness

Active Names
- Active names
  - Install code in name servers
    - enough semantics
    - application transparency
- vs. INS
  - INS installs a metric and data; the extension is to install code
  - declarative vs. procedural
  - security problem: confine the code, a la applets
- vs. HTTP caching
  - special proxies translate names

Active Names
- An active name has a namespace
  - The namespace specifies the program that interprets the name
    - name -> name space program to interpret the name
    - recursively, until the name is a specifier for predefined name space programs such as DNS or HTTP
- Example: round-robin printing (see the sketch below)
  - install a program that listens for printer announcements
  - it picks printers round-robin
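
A sketch of the namespace-program idea with the slide's round-robin printer picker as the program. Every interface and name here is invented for illustration; real resolution would recurse until reaching a predefined program such as DNS or HTTP.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class ActiveNameDemo {
        interface NamespaceProgram { String eval(String rest); }  // invented interface

        static final Map<String, NamespaceProgram> NAMESPACES = new HashMap<>();

        // "printer:" namespace program: picks announced printers round-robin.
        static class RoundRobinPrinters implements NamespaceProgram {
            private final List<String> printers;  // filled from announcements it listens for
            private int next = 0;
            RoundRobinPrinters(List<String> printers) { this.printers = printers; }
            public String eval(String rest) {
                return printers.get(next++ % printers.size());
            }
        }

        // Resolve "namespace:rest" by handing rest to that namespace's program;
        // a full resolver would recurse on the result.
        static String resolve(String activeName) {
            int colon = activeName.indexOf(':');
            NamespaceProgram p = NAMESPACES.get(activeName.substring(0, colon));
            return p.eval(activeName.substring(colon + 1));
        }

        public static void main(String[] args) {
            NAMESPACES.put("printer", new RoundRobinPrinters(List.of("lw1", "lw2", "lw3")));
            System.out.println(resolve("printer:any"));  // lw1
            System.out.println(resolve("printer:any"));  // lw2
        }
    }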

Active Name RPC
- Example
  - Camera data caching
- An active name used in an RPC call
  - The name space program has a notion of data: parameters and the reply
  - So it can cache data

Upward compatibility
- Example: a service name without a transport protocol: www.cs.utexas.edu/home/smith
  - The transport protocol is a concern of the name space program
- The root name space delegates to the WWW-root active name program
- WWW-root implements default web caching
- The response to a URL indicates the name space program to interpret the name
  - www.cs.utexas.edu/home/smith/active/*
  - The original request must be a prefix of the active name program's names
    - www.cs.utexas.edu/* is illegal

Active Name Delegation
- Example
  - Might wish to transcode a camera image (color to black and white)
  - The name space program in the client's NS can transcode, but transcoding nearer the data producer is preferable if the network is slow
- Name delegation
  - Each name space program interprets part of the name and of the data input stream
  - It chooses the next NS and name space program to interpret the rest
  - DNS is a special case

After Methods
- The return path for the result may not be the same as the path for the data
  - The request is forwarded to a more appropriate NS
- Each delegation specifies the return path for the result
  - an active name called the after method
  - pushed on a stack of current active names
- The stack is popped on the way back (see the sketch below)
  - Each node on the return path pops an active name
  - It sends the name, part of the result, and the popped stack to the next NS
  - The leaf destination chooses the closest NS and the top-level after method
- Can influence how the request is serviced on the network
  - Transcoding, adding banner ads
  - The name space resolver and after method at the other return nodes choose the processing and the subsequent node
- Direct call cost: 0.2 s; after-method cost: 3.2 s
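
A sketch of the after-method stack: each delegation pushes a program, and each hop on the return path pops one, so the result is transformed on its way back. All scaffolding here is invented.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.function.UnaryOperator;

    public class AfterMethods {
        public static void main(String[] args) {
            // Each delegation pushes the after method to run on the way back.
            Deque<UnaryOperator<String>> stack = new ArrayDeque<>();
            stack.push(result -> result + " [returned to client]");  // pushed first, runs last
            stack.push(result -> result + " [banner ad added]");
            stack.push(result -> result.toLowerCase());               // e.g. transcoding

            String result = "CAMERA IMAGE";          // produced at the leaf destination
            while (!stack.isEmpty()) {
                result = stack.pop().apply(result);  // each return hop pops one method
            }
            System.out.println(result);
        }
    }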

Security
- The answer could come back from anyone. How to trust it?
- Assume transitive trust
  - A trusts B, B trusts C, so A trusts C
- The client sends a capability
  - Some unforgeable object
  - It is passed along
  - The client trusts anyone who returns it.

Active Names: Flexible Location and Transport of Wide-Area Resources
Amin Vahdat
Systems Tea, April 23, 1999

Active Naming Vision
- Today: a name is a static binding to a physical location and object (DNS, LDAP)
- Want: dynamic, flexible binding to a service/data
  - Server selection among geographic replicas (CISCO, IBM, etc.)
  - Client customization (e.g., distillation, custom CNN)
  - Server customization (e.g., hit counting, ad rotation, etc.)
- An Active Name is a mobile program that invokes a service or acquires data
  - Flexibly supports various naming semantics
  - Minimizes wide-area communication

Outline
- Background
- Active Names
  - Opportunity
  - Implementation
- Examples
  - Programmability
  - Location Independence
  - Composibility
- Conclusions

Current Name Services
- DNS translates machine names to IP addresses
  - Updates propagated over a period of days
  - Bindings cached at the client, TTL invalidation
  - Assumes bindings change slowly and are updated centrally
- The RPC name service binds caller and callee
  - Assumes every service provider is equivalent
- In the wide area: heterogeneous quality of service, depending on the selection of provider

Wide Area Naming Today: HTTP Redirect w/ Dynamic Content
[Diagram: (1) the client asks the DNS server for a name binding; (2) the binding names a host; (3) the client sends the URL to a proxy; (4) the proxy sends the URL over HTTP to the server; (5) a name program at the server may redirect to another HTTP server; (6) data is returned]

Current Attempts to Add Flexibility to Name Binding
- HTTP redirect
- DNS round robin
- Cisco LocalDirector/DistributedDirector
- URNs with sed scripts to mangle names
- Global object IDs (e.g., Globe, Legion)
- Web caches/active caches
- Mobile IP
- ...

The Active Naming Opportunity
- Name translation often incorporates client-specific info
  - Custom home pages (www.cnn.com => your news page)
  - Distilling pictures to client requirements (small B&W for PDAs)
- Naming is often a step in a larger process
- Availability of remotely programmable resources
  - Java, active networks
- Importance of minimizing wide-area latency for requests

Active Naming Implementation
- Clients generate active names: domain:name
- The Active Name Resolver determines the domain-specific program
  - Location independent: can run anywhere
  - Application specific: the name is resolved in a domain-specific manner
- Domain-specific code checks for a cache hit
  - Active caching (hit counters, ad rotation)
- After methods are associated with each active name
  - A list of programs guaranteed to be called after the initial eval
  - Multi-way RPC, anonymization, distillation
  - Client-specific transformation of data

Active Name Resolution
[Diagram: the client sends a name plus after methods (e.g. distillation) to the Active Name Resolver, which holds a domain resolver, cache, and virtual machine; the program returns data, which the after methods transform before it is returned]
- The program is an agent of the service
  - Hit counting, dynamic content generation
- Location independence
  - Can hand off to other resolvers
- After methods perform client-specific transforms
  - Distillation, personalization
- Virtual machine
  - Resource allocation, safety

Multi-Way RPC
- Usually results have to pass all the way back down a hierarchy
  - Adds latency: store-and-forward delays
- Traditional goal: minimize latency
- Multi-way RPC leverages after methods
  - Convention: the last after method transmits the result back to the client
  - Minimizes latency
  - Back-fill for caches?
[Diagram: normally the request goes client -> proxies -> server and the response retraces the proxies; with multi-way RPC the response returns directly to the client]

Change the Socket API?
- Network programming, traditional model:

    ipaddr = gethostbyname("www.cs.duke.edu");
    socket = connect(ipaddr, 80 /* port */);
    write(socket, "GET /index.html HTTP/1.0\n\n");
    read(socket, dataBuffer);

- With active names:

    dataBuffer = ANResolver.Eval("www.cs.duke.edu/index.html");

- Analogs in other areas
  - Filesystems, virtual memory, Java URL objects
  - The programmer does not see inodes or physical page addresses
  - Allows for reorganization under the hood

Security Considerations
- Multi-way RPC
  - Send the request to the local resolver
  - Wait for the answer on a socket
  - The answer could be transmitted by anyone
- Solution: use capabilities (see the sketch below)
  - Associate a capability with each request
  - The capability must come back with the reply
- Future work: integrate with CRISIS transfer certificates
  - Specify the privileges available for name resolution (local state)
  - Inspect the complete chain of transfers linked to replies
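
A sketch of the capability check: attach an unguessable token to each request and trust a reply only if it carries the token back. Token size and bookkeeping are assumptions.

    import java.security.SecureRandom;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ReplyCapabilities {
        private final SecureRandom rng = new SecureRandom();
        private final Map<String, byte[]> pending = new ConcurrentHashMap<>();

        // Issue an unforgeable capability and remember it for this request.
        public byte[] issue(String requestId) {
            byte[] cap = new byte[16];
            rng.nextBytes(cap);
            pending.put(requestId, cap);
            return cap;  // travels with the request through the resolvers
        }

        // A reply may arrive from any node; trust it only if the capability matches.
        public boolean accept(String requestId, byte[] presented) {
            byte[] expected = pending.remove(requestId);
            return expected != null
                && java.security.MessageDigest.isEqual(expected, presented);
        }
    }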

Outline
- Background
- Active Names
  - Opportunity
  - Implementation
- Examples
  - Programmability
  - Location Independence
  - Composibility
- Conclusions

Example: Load Balancing
- DNS round-robin
  - Randomly choose a replica
  - Avoids hotspots
- Distributed Director
  - Route to the nearest replica
  - Geographic locality
- Active naming
  - Previous performance, distance
  - Adaptive
[Diagram: Berkeley clients choosing between a Seattle replica and a Berkeley replica]

Load Balancing Performance
- Optimal load balancing varies with offered load:
  - Low load: choose the closest server
  - High load: distribute load evenly

Example: Mobile Distillation — Client-Specific Naming
- Variables: network, screen
- Clients name a single object
- The returned object is based on the client's network connection and screen
- Current approach [Fox 97]
  - A proxy maintains the client profile
  - It requests the object and distills it
- Active naming
  - Transmit the name + program
  - Flexible distillation point
  - Trades off computation and bandwidth
  - Supports mobile clients

Determine Placement of Computation: First-Cut Placement Algorithm

    serverDist = (evalCost / specintServer * loadAverageServer)
               + (smallFileBW / distillFileSize);
    proxyDist  = (largeFileBW / origFileSize)
               + (evalCost / specintProxy * loadAverageProxy);
    Prob(serverEval) = proxyDist / (proxyDist + serverDist);

- Distribute load based on an estimate of cost
- Feedback confidence information?
- In the wide area
  - Must use approximate information
  - Avoid deterministic decisions
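
The same first-cut rule as runnable code: compute the two cost estimates exactly as in the formulas above and evaluate at the server with the resulting probability, keeping the choice randomized rather than deterministic. The sample values are invented.

    import java.util.Random;

    public class PlacementChoice {
        public static void main(String[] args) {
            // Invented sample values for the quantities in the formulas.
            double evalCost = 50, specintServer = 40, loadAverageServer = 2.0;
            double specintProxy = 20, loadAverageProxy = 0.5;
            double smallFileBW = 9, distillFileSize = 9, largeFileBW = 59, origFileSize = 59;

            double serverDist = evalCost / specintServer * loadAverageServer
                              + smallFileBW / distillFileSize;
            double proxyDist  = largeFileBW / origFileSize
                              + evalCost / specintProxy * loadAverageProxy;

            double pServer = proxyDist / (proxyDist + serverDist);
            boolean evalAtServer = new Random().nextDouble() < pServer;
            System.out.printf("P(server) = %.2f, evaluating at %s%n",
                              pServer, evalAtServer ? "server" : "proxy");
        }
    }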

Importance of Location Independence I
- Distill a 59K image to 9K
- Clients/proxy at UC Berkeley, server at Duke
- The active policy tracks, then beats, the best static policy

Importance of Location Independence II
- Server loaded with 10 competing processes
- It no longer makes sense to perform all distills at the server
- Dynamic placement of computation for optimal performance

Example: Active Caches
- Low (50%) hit rates to proxy caches; causes for misses:
[Chart of miss causes not preserved]

Example: Active Caches
- 50% hit rate to caches
- Active Name Resolvers promise to run domain-specific code to retrieve/enter cache entries
- The cache program implements
  - Ad rotation, server-side include (SSI) expansion, access checking, hit counting
- No magic bullets: have to compose multiple extensions
  - A combination of distillation and server customization outperforms distillation-only by 50% and customization-only by 100%

Related Work
- Active networks:
  - New routing protocols: multicast, anycast, RSVP
  - Must modify routers
  - Bottom of the protocol stack vs. end-to-end app performance
- Active services:
  - No composibility or extensibility
  - Restricted to a single point in the network
  - Unrestricted programming model

Hyper-Active Naming
- Buying stocks
  - Name the stock, requested price, credit card
  - Execute an Active Name applet at the server to purchase
- Cooperating agents
  - Request the best price on a commodity (book)
  - The Active Name applet returns with results and an option to buy
- Print to the nearest printer (mobility)
  - The Active Name applet locates a printer matching the client's prefs
  - Runs a method at the print server to print the document

Discussion
n Query optimization problem
  u Reordering of after-methods
n Placement of wide-area computation
  u What's available for running my job?
  u Estimating remote bandwidth, CPU load
n Integration with CRISIS security
n Resource allocation
n Debugging?
n Performance overheads? 146

Conclusions
n Active Name: mobile program that invokes a service or acquires data
n Prototype demonstrates
  u Feasibility of approach
  u Interesting applications
n Provides dynamic, flexible binding to service/data
  u Server selection among geographic replicas (Rent-A-Server)
  u Client customization (e.g., distillation, custom CNN)
  u Server customization (e.g., hit counting, ad rotation, etc.)
  u Active caching 147

Active vs. Intentional Names
n INS and Active Names
  u Name servers deliver and route messages
  u Name servers are general modules with application-specific features
    F INS: declarative
      • attributes and metric
    F Active Names: procedural
      • name-space resolver programs
  u Overlay
    F Use IP network for routing 148

Sensor Networks
n Sensor networks
  u Name servers directly do routing
    F Not built on top of IP
  u Connected directly to each other with physical point-to-point links
    F Special ad-hoc network separate from the internet
  u No distinction between
    F Client
    F Service
    F Name server
    F Router
  u Every node is a routing sensor and name server
  u Framework for building such networks 149

Distributed Sensors vs. Appliances
n Sensors
  u produce and consume data
n Communication
  u data messages
  u a la event notifications
n Appliances
  u respond to operations
  u send back events
n Communication
  u remote method call
  u event notifications 150

Internet vs. Sensor Networks
Internet
n Plentiful power
n Plentiful bandwidth
n Low delay
n Router throughput an issue
Sensor network
n Scarce power
n Bandwidth dear
n High delay
n Sensor/router nodes powerful compared to bandwidth
  u 3000 instructions take as much power as sending a bit 100 m by radio
n Trade off computation in router for communication
  u Rainfall aggregation near source node
  u Duplicate suppression
  u Avoid flooding, multicast … 151

No IP Address
(Figure: source and sink nodes)
n Client does not use address
  u As in Active Names, INS
n Service does not register address
  u Unlike Active Names, INS
n No special name server 152

Naming and Routing
(Figure: source flooding toward sink; an "away path" leads away from the sink)
n How do nodes communicate?
  u Naming
    F attribute-based
    F as in INS
  u Routing
    F Broadcast to all neighbours 153

Avoiding Away Paths
(Figure: away interest path)
n Sink sends interest to source
n Interest forwarded by intermediate nodes
n Sink and intermediate nodes record gradient: neighbours from which each interest came
  u And update rate, active/inactive interest
n Data matching interest returned to these neighbours if active, at update rate
n Avoiding away paths for interest - directed diffusion 154
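A toy sketch of this gradient bookkeeping (the class and field names are illustrative, not the actual diffusion implementation): each node remembers, per interest, the neighbours the interest arrived from, with a rate and an active flag, and forwards matching data only along those gradients.

import java.util.*;

// Toy sketch of directed-diffusion gradient state at one node. Interests
// arriving from a neighbour set up a gradient; data matching an interest
// flows back only along active gradients, i.e., toward where the interest
// came from, never down "away paths".
public class DiffusionNode {
    static class Gradient {
        final String neighbour;   // neighbour the interest arrived from
        double updateRatePerSec;  // how often that neighbour wants data
        boolean active;
        Gradient(String neighbour, double rate) {
            this.neighbour = neighbour; this.updateRatePerSec = rate; this.active = true;
        }
    }

    // interest key (e.g., "detectAnimal") -> gradients toward interested neighbours
    private final Map<String, List<Gradient>> gradients = new HashMap<>();

    void onInterest(String interest, String fromNeighbour, double rate) {
        gradients.computeIfAbsent(interest, k -> new ArrayList<>())
                 .add(new Gradient(fromNeighbour, rate));
        // A real node would also re-broadcast the interest to its own neighbours.
    }

    // Returns the neighbours matching data should be sent to (no flooding).
    List<String> neighboursForData(String interest) {
        List<String> out = new ArrayList<>();
        for (Gradient g : gradients.getOrDefault(interest, List.of()))
            if (g.active) out.add(g.neighbour);
        return out;
    }
}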

Avoiding Interest Away Paths
(Figure: away path of an interest about interests)
n Away paths of interest
  u Send interest about interests
    F Extends Model-View-Controller
    F Many-to-many communication
  u Must stop recursion
    F Away paths for interest about interest 155

Avoiding Source Overload
(Figure: source activated)
n Source does not generate/sense data until interest about interest arrives
n Interest can be dynamically activated/deactivated at source and intermediate nodes
  u Changes gradient 156

Avoiding Sink Overload
n Update rate associated with interest
  u Recorded in gradient
  u Intermediate nodes forward events at update rate
(Figure: some events not sent because of update rate) 157

Multiple Paths
n Event goes to destination along all paths 158

Avoiding Multiple Paths
n "Source" sends exploratory message to all possible sinks
n Each sink reinforces the path along which the message arrived earliest
n Subsequently only the reinforced path is used
n Exploratory messages periodically sent to update routes
n Local repair on node failure
n Negative reinforcements possible if alternative paths better
  u How determined? 159
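A toy sketch of the sink-side reinforcement step (illustrative names, not the actual protocol code): for each exploratory event, only the neighbour that delivered it first gets reinforced; copies arriving later on slower paths are ignored.

import java.util.HashMap;
import java.util.Map;

// Toy sketch of path reinforcement at a sink: remember the first neighbour
// to deliver each exploratory event and "reinforce" it, i.e., ask it for the
// full data rate; other paths stay at the low exploratory rate.
public class ReinforcingSink {
    private final Map<String, String> firstDeliverer = new HashMap<>();

    // Called for each arriving copy of an exploratory event.
    void onExploratoryEvent(String eventId, String fromNeighbour) {
        if (!firstDeliverer.containsKey(eventId)) {
            firstDeliverer.put(eventId, fromNeighbour); // earliest copy wins
            reinforce(fromNeighbour, eventId);
        }
        // Later copies on slower paths are simply ignored.
    }

    private void reinforce(String neighbour, String eventId) {
        // A real node would resend the interest to this neighbour with a
        // higher update rate; here we only record the decision.
        System.out.println("Reinforcing " + neighbour + " for " + eventId);
    }
}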

Aggregation
(Figure: installed filter)
n Same real-world event may trigger multiple sensors
  u Multiple motion detectors
n Sink may be interested in aggregation of events from multiple sensors
  u Rainfall sensors
n Download filters at intermediate nodes
  u They match attributes
  u Examine matching messages
  u Can respond by doing application-specific aggregation
    F A la active networks 160

Aggregation Kinds/Examples
n Binary
  u There was a detection
n Area
  u Detection in specific quadrant
n Probability
  u 80% chance of detection 161

Two-Level Sensing
(Figure: high-power and low-power sensors)
n Wireless monitoring system
  u Low-power sensors
    F Light and motion detectors
    F Always on
  u High-power sensors
    F Microphones and steerable cameras
    F Triggered by low-power sensors
  u Sink
    F Configured externally by some distant client (where the user is)
    F Must go through some intermediate nodes
  u How configured? 162

Conventional Solution
n Client listens for low-power events
n Triggers high-power events
n Lots of communication 163

Nested Query Solution
n Nested interest sent to secondary sensor
n It registers interest in primary sensors
n Less communication 164

Attribute Matching
Each attribute is a (key, op, val) triple.
n Interest (free variables, formal parameters)
  u class IS interest
  u task EQ "detectAnimal"
  u confidence GT 5.0
  u latitude GE 10.0
  u latitude LE 100.0
  u longitude GE 5.0
  u longitude LE 95.0
  u target IS "4-leg"
n Published data (bound variables, actual parameters)
  u class IS data
  u task IS "detectAnimal"
  u confidence IS 90
  u latitude IS 20.0
  u longitude IS 80.0
  u target IS "4-leg"
Matching = unification; EQ = subclass of? 165

Attribute Matching Algorithm
oneWayMatch(AttributeSet A, AttributeSet B)
  for each unbound a in A {
    matched = false
    for each bound b in B where a.key = b.key
      matched = compare(a.val, b.val, a.op)
    if not matched return false
  }
  return true 166
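A runnable rendering of this one-way match might look like the sketch below; the Attribute shape and operator set are modeled on the previous slide, and values are simplified to numbers (the slide also shows string-valued attributes):

import java.util.List;

// Sketch of the one-way attribute match: every unbound (formal) attribute
// in A must be satisfied by some bound (actual) attribute in B with the
// same key. Types are assumptions modeled on the slide's examples.
public class AttributeMatcher {
    enum Op { IS, EQ, GT, GE, LT, LE }

    record Attribute(String key, Op op, double val, boolean bound) {}

    static boolean oneWayMatch(List<Attribute> a, List<Attribute> b) {
        for (Attribute fa : a) {
            if (fa.bound()) continue;            // only unbound attrs constrain
            boolean matched = false;
            for (Attribute ba : b) {
                if (!ba.bound() || !ba.key().equals(fa.key())) continue;
                matched = compare(fa.val(), ba.val(), fa.op());
            }
            if (!matched) return false;          // some constraint unsatisfied
        }
        return true;
    }

    static boolean compare(double formal, double actual, Op op) {
        return switch (op) {
            case IS, EQ -> actual == formal;     // EQ-as-subclass is not modeled
            case GT -> actual > formal;
            case GE -> actual >= formal;
            case LT -> actual < formal;
            case LE -> actual <= formal;
        };
    }
}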

API
Subscription subscribe(AttributeSet attributeSet, SubscriptionCallback callback);
unsubscribe(Subscription subscription);
Publication publish(AttributeSet attributeSet);
unpublish(Publication publication);
send(Publication publication, AttributeSet sendAttrs);
Filter addFilter(AttributeSet attributeSet, int priority, FilterCallback callback);
removeFilter(Filter filter);
sendMessage(Message message, Handle handle, Agent agent);
sendMessageToNext(Message message, Handle handle);
(Filter code accesses gradient, message, previous and next destination) 167
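To make the flow concrete, here is a hypothetical usage sketch against these signatures; the stub interfaces exist only so the sketch compiles and are assumptions about the API's shape, not its real definitions:

import java.util.List;

// Hypothetical usage of the diffusion API above: a sink subscribes, a
// source publishes and sends, and an intermediate node installs a filter
// that may aggregate or suppress matching messages before forwarding.
public class DiffusionUsage {
    // --- minimal stand-ins for the API's types (assumed shapes) ---
    interface AttributeSet {}
    interface Message {}
    interface Subscription {}
    interface Publication {}
    interface Filter {}
    interface SubscriptionCallback { void onData(Message m); }
    interface FilterCallback { Message onMatch(Message m); }
    interface DiffusionApi {
        Subscription subscribe(AttributeSet a, SubscriptionCallback cb);
        Publication publish(AttributeSet a);
        void send(Publication p, AttributeSet data);
        Filter addFilter(AttributeSet a, int priority, FilterCallback cb);
    }

    static void run(DiffusionApi api, AttributeSet rainfallInterest,
                    AttributeSet rainfallSource, AttributeSet reading) {
        // Sink side: express interest in rainfall events.
        api.subscribe(rainfallInterest, m -> System.out.println("got " + m));

        // Source side: announce what this sensor produces, then send data.
        Publication p = api.publish(rainfallSource);
        api.send(p, reading);

        // Intermediate node: install a filter on matching messages, e.g. to
        // suppress duplicates or average readings (identity here for brevity).
        api.addFilter(rainfallInterest, /*priority=*/10, m -> m);
    }
}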

Two Implementations
n Full size
  u 55 KB code
  u 8 KB data
  u 20 KB library
  u 4 KB data
  u Meant for secondary sensors
n Micro size
  u 3 KB code
  u 100 bytes data
  u Meant for primary sensors
n Micro functionality
  u Single attribute
  u 5 active gradients
  u 10-packet cache
  u 2 relevant bytes/packet
  u No filters 168

Databased Composition
n Queries over multiple devices
  u For each rainfall sensor, average rainfall
  u For each sensor in Tompkins county, current rainfall
  u For next 5 hrs, every 30 minutes, rainfall in Tompkins county 169

Query Kinds (Device Queries)
n Historical
  u For each rainfall sensor, average rainfall
n Snapshot
  u For each sensor in Tompkins county, current rainfall
n Long-running
  u For next 5 hrs, every 30 minutes, rainfall in Tompkins county
  u New kind of query
n Defining device databases and queries? 170
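Using the base and virtual relations defined on the following slides, the three kinds might be written roughly as below; the syntax is modeled on the deck's own $every example and is not necessarily Cougar's exact dialect:

-- Historical: average recorded rainfall per sensor (append-only VR).
SELECT VR.ID, AVG(VR.value)
FROM VRFSensorsGetRainfallLevel VR
GROUP BY VR.ID;

-- Snapshot: current rainfall of each sensor in Tompkins county
-- (the county predicate on coordinates is left elided, as in the deck).
SELECT R.ID, VR.value
FROM RFSensors R, VRFSensorsGetRainfallLevel VR
WHERE R.ID = VR.ID AND R.X = ...;

-- Long-running: periodic rainfall readings, run for the query's duration
-- ($every is the deck's own notation for the update period).
SELECT VR.value
FROM RFSensors R, VRFSensorsGetRainfallLevel VR
WHERE R.ID = VR.ID AND $every(30);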

Cougar Device Model
n Embedded vs. Attached
  u Embedded
    F Network appliance/sensor
  u Attached
    F Regular appliance attached to computer
    F Computer connected to regular device
n Stationary vs. Mobile
n Strongly vs. Intermittently connected
n Local area vs. wide area
n Device database work focuses on stationary devices
n Data Gatherers (sensors) vs. Operation Servers (appliances)? 171

Cougar Device Operations
n Operation model
  u acquire, store, and process data
  u may trigger action in physical world
  u return result
n Synchronous operation
  u returns result immediately
n Asynchronous operation
  u result(s) later
    F abnormal rainfall
    F as event
n Intermittently connected device
  u only asynchronous operations possible
  u device not guaranteed to be connected when operation invoked
  u intermittently connected device as server 172

Defining Device Database
Device DBMS vs. Traditional Relational DBMS
n Device vs. data collection
  u Computed vs. stored values
n Distributed information sources
  u Data needed not available locally
  u May not even be available remotely for intermittently connected devices
n Long-running queries
  u Not modelled by traditional DBMS
Solution
n Base device relations
  u One record for each device
n Virtual relations
  u Records partitioned over distributed nodes
  u Includes results of device operations
n Extended query language over virtual relations 173

Base Relations
(Table columns: ID, X, Y)
n Collection of devices of a particular type
n One record per device
n Attributes
  u device ID
  u X coordinate
  u Y coordinate 174

Virtual Relations
f(a1, …, am): T   (Table columns: ID, a1, …, am, Val, TS)
n Per-device function
  u Attribute for
    F each function argument
    F result
    F global timestamp of result
    F device ID
  u New record added for each new result
  u Append-only relation
  u Each device contributes to part of relation 175

Example
RFSensor
n One function
  u getRainfallLevel(): int
n Base relation
  u RFSensors (columns: Device ID, X, Y)
n Virtual relation
  u VRFSensorsGetRainfallLevel (columns: Device ID, Value, TS) 176

Long-Running Queries
For the next four hours, retrieve every 30 seconds the rainfall level of each sensor IN ... if it is greater than 50 mm.
Query Q:
  SELECT VR.value
  FROM RFSensors R, VRFSensorsGetRainfallLevel VR
  WHERE R.ID = VR.ID AND VR.value > 50
    AND R.X = ... AND $every(30)
Run Q for 4 hours
n 200 devices
n R cardinality = 200
n VR cardinality = 480
n Why join? (plan combines a join and a selection) 177

Execution Strategies
(Diagrams of four plans for query Q)
n No local knowledge: materialized VR; join R.ID = VR.ID and selection VR.value > 50 evaluated remotely at the warehouse
n Local rate info, remote Q: devices send Val every 30 min; join and selection remote
n Local join, remote selection: devices send Val every 30 min if R.ID = VR.ID; VR.value > 50 applied remotely
n Local selection, remote join: devices send Val every 30 min if VR.value > 50; R.ID = VR.ID joined remotely 178

Performance Metrics
n Traditional
  u Throughput
  u Response time
    F Long-running query's running time is implementation-independent
n Sensor-specific
  u Resource usage
    F network, power
  u Reaction time
    F Production-to-consumption time 179

Power Usage Components
n CPU
n Memory access
n Sending message
n Sending N bytes
Cost in joules = Wcpu*CPU + Wram*RAM + Wmsg*Msg + Wbytes*NBytes 180
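A direct transcription of this cost model; the weight values and the example counts are illustrative placeholders, not measured constants:

// Transcription of the slide's power model.
public class PowerModel {
    double wCpu = 1e-9, wRam = 5e-9, wMsg = 1e-4, wBytes = 1e-6; // joules per unit (placeholders)

    // cpu, ram, messages, bytes: resource counts charged to a query plan
    double costJoules(long cpu, long ram, long messages, long bytes) {
        return wCpu * cpu + wRam * ram + wMsg * messages + wBytes * bytes;
    }

    public static void main(String[] args) {
        PowerModel m = new PowerModel();
        // e.g., a plan using 1M instructions, 200K memory accesses,
        // 480 messages of 16 bytes each
        System.out.println(m.costJoules(1_000_000, 200_000, 480, 480 * 16) + " J");
    }
}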

Centralized Warehouse (No Local Knowledge)
n Single location monitors all sensors
n Queries are sent to site
n Works for historical queries
n Wastes resources for long-running queries
  u Irrelevant sites
  u Higher rate than necessary
n What to monitor for long-running queries?
  u Camera direction?
n Centralization
  u Workload
  u Bottleneck 181

Distributed Device Database
n All sensors together form a distributed device database system
n Individual nodes
  u sense on demand
  u do part of query processing
n Better resource utilization
n Know what to monitor
n Historical?
  u based on some long-running query 182


Remote Query Evaluation (Local Rate Info, Remote Q)
(Diagram: join R.ID = VR.ID and selection VR.value > 50 remote; devices send Val every 30 min)
n Only relevant sensors send data
n No relations sent to sensors 184

Local Join, Remote Selection
(Diagram: selection VR.value > 50 remote; device sends Val every 30 min if R.ID = VR.ID)
n Only relevant sensors send values
n Whole relation sent to device
  u communication overhead 185

Execution Strategies: Local Selection, Remote Join
(Diagram: join R.ID = VR.ID remote; device sends Val every 30 min if VR.value > 50)
n Only relevant sensors send values
n Whole relation sent to device
  u communication overhead 186