
Chapter 7 Packet-Switching Networks. Outline: Network Services and Internal Network Operation; Packet Network Topology; Datagrams and Virtual Circuits; Routing in Packet Networks; Shortest Path Routing; ATM Networks; Traffic Management.

Chapter 7 Packet-Switching Networks: Network Services and Internal Network Operation.

Network Layer. The network layer is the most complex layer: it requires the coordinated actions of multiple, geographically distributed network elements (switches & routers), and it must be able to deal with very large scales (billions of users, both people and communicating devices). Biggest challenges: addressing (where should information be directed to?) and routing (what path should be used to get information there?).

Packet Switching. (Figure: a packet entering the network at time t0 and leaving at t1.) Transfer of information as payload in data packets. Packets undergo random delays & possible loss. Different applications impose differing requirements on the transfer of information.

Network Service. (Figure: messages and segments flow from the transport layer of end system α through the network service, network layer, data link layer, and physical layer to end system β.) The network layer can offer a variety of services to the transport layer: connection-oriented or connectionless service; best-effort or delay/loss guarantees.

Network Service vs. Operation. Network service: connectionless (datagram transfer) or connection-oriented (reliable and possibly constant bit rate transfer). Internal network operation: connectionless (IP) or connection-oriented (telephone connection, ATM). Various combinations are possible: connection-oriented service over connectionless operation, or connectionless service over connection-oriented operation. Context & requirements determine what makes sense.

Complexity at the Edge or in the Core? (Figure: end systems α and β communicating across a network over a shared medium; legend: 1 = physical layer entity, 2 = data link layer entity, 3 = network layer entity, 4 = transport layer entity.)

The End-to-End Argument for System Design. An end-to-end function is best implemented at a higher level rather than at a lower level: end-to-end service requires all intermediate components to work properly, and the higher level is better positioned to ensure correct operation. Example: stream transfer service. Establishing an explicit connection for each stream across the network requires all network elements (NEs) to be aware of the connection, and all NEs have to be involved in re-establishing connections in case of a network fault. In connectionless network operation, NEs do not deal with each explicit connection and hence are much simpler in design.

Network Layer Functions. Essential: routing (mechanisms for determining the set of best paths for routing packets; requires the collaboration of network elements); forwarding (transfer of packets from NE inputs to outputs); priority & scheduling (determining the order of packet transmission in each NE). Optional: congestion control, segmentation & reassembly, security.

Chapter 7 Packet-Switching Networks: Packet Network Topology.

End-to-End Packet Network. Packet networks are very different from telephone networks: individual packet streams are highly bursty and user demand can undergo dramatic change, so statistical multiplexing is used to concentrate streams. Peer-to-peer applications stimulated huge growth in traffic volumes. The Internet structure is highly decentralized: paths traversed by packets can go through many networks controlled by different organizations, and no single entity is responsible for end-to-end service.

Access Multiplexing. (Figure: an access MUX concentrating subscriber lines toward the packet network.) Packet traffic from users is multiplexed at the access to the network into aggregated streams. DSL traffic is multiplexed at a DSL Access Multiplexer; cable modem traffic is multiplexed at a Cable Modem Termination System.

Oversubscription. N subscribers are connected at c bps to an access multiplexer, each subscriber is active r/c of the time, and the multiplexer has C = nc bps toward the network; the oversubscription rate is N/n. Find n so that there is at most 1% overflow probability. The feasible oversubscription rate increases with size N:
N    r/c    n    N/n
10   0.01   1    10     (10 extremely lightly loaded users)
10   0.05   3    3.3    (10 very lightly loaded users)
10   0.1    4    2.5    (10 lightly loaded users)
20   0.1    6    3.3    (20 lightly loaded users)
40   0.1    9    4.4    (40 lightly loaded users)
100  0.1    18   5.5    (100 lightly loaded users)
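A minimal sketch of how the n column of this table can be reproduced, assuming each subscriber is independently active with probability p = r/c, so that the number of simultaneously active subscribers is Binomial(N, p):

```python
# Find the smallest n such that P(more than n subscribers active) <= 1%,
# then report the oversubscription rate N/n.
from math import comb

def min_trunks(N, p, overflow=0.01):
    """Smallest n with P(active > n) <= overflow for a Binomial(N, p) load."""
    for n in range(N + 1):
        tail = sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(n + 1, N + 1))
        if tail <= overflow:
            return n
    return N

for N, p in [(10, 0.01), (10, 0.05), (10, 0.1), (20, 0.1), (40, 0.1), (100, 0.1)]:
    n = min_trunks(N, p)
    print(f"N={N:3d}  r/c={p:5.2f}  n={n:2d}  N/n={N/n:.1f}")
```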

Home LANs. (Figure: a home router connecting Ethernet and WiFi devices to the packet network.) LAN access uses Ethernet or WiFi (IEEE 802.11). Private IP addresses are used in the home (192.168.0.x) with Network Address Translation (NAT). A single global IP address from the ISP is issued using the Dynamic Host Configuration Protocol (DHCP).

LAN Concentration. LAN hubs and switches in the access network also aggregate packet streams that flow into switches and routers.

Campus Network Organization. (Figure: campus network with a gateway to the Internet or wide area network, backbone routers, departmental servers, and LANs.) Servers have redundant connectivity to the backbone. The high-speed campus backbone net connects the departmental routers; only outgoing packets leave the LAN through the routers.

Connecting to an Internet Service Provider. (Figure: a campus network connected to its Internet service provider through border routers.) The interdomain level interconnects autonomous systems or domains; the intradomain level is the network administered by a single organization.

Internet Backbone. (Figure: national service providers A, B, and C interconnected through a NAP and private peering.) Network Access Points: set up during the original commercialization of the Internet to facilitate the exchange of traffic. Private peering points: two-party inter-ISP agreements to exchange traffic.

(Figure: (a) national service providers A, B, and C interconnected by NAPs and private peering; (b) detail of a NAP, with routers RA, RB, and RC attached to a LAN together with a route server.)

Key Role of Routing. How to get a packet from here to there? The decentralized nature of the Internet makes routing a major challenge. Interior gateway protocols (IGPs) are used to determine routes within a domain; exterior gateway protocols (EGPs) are used to determine routes across domains. Routes must be consistent & produce stable flows. Scalability is required to accommodate growth; the hierarchical structure of IP addresses is essential to keeping the size of routing tables manageable.

Chapter 7 Packet-Switching Networks: Datagrams and Virtual Circuits.

The Switching Function. Dynamic interconnection of inputs to outputs enables dynamic sharing of the transmission resource. Two fundamental approaches: connectionless, and connection-oriented (call setup control, connection control). (Figure: access networks attached to switches of a backbone network.)

Packet Switching Network. A packet-switching network transfers packets between users; it consists of transmission lines plus packet switches (routers). Its origin is in message switching. Two modes of operation: connectionless and virtual circuit.

Message Switching. (Figure: a message passing from the source through message switches to the destination.) Message switching was invented for telegraphy. Entire messages are multiplexed onto shared lines, stored & forwarded. Headers carry source & destination addresses; routing is done at the message switches. Connectionless.

Message Switching Delay. (Figure: timing diagram of a message of transmission time T crossing source, switch 1, switch 2, and destination.) Minimum delay = 3τ + 3T, where τ is the propagation delay per hop and T the message transmission time. Additional queueing delays are possible at each link.

Long Messages vs. Packets. A 1 Mbit message is sent from source to destination over two hops, each with bit error rate p = 10⁻⁶. How many bits need to be transmitted to deliver the message? Approach 1: send the 1 Mbit message. The probability the message arrives correctly is (1 − 10⁻⁶)^10⁶ ≈ 1/e ≈ 0.37, so on average it takes about 3 transmissions per hop; total bits transmitted ≈ 6 Mbits. Approach 2: send ten 100-kbit packets. The probability a packet arrives correctly is (1 − 10⁻⁶)^10⁵ ≈ 0.9, so on average it takes about 1.1 transmissions per hop; total bits transmitted ≈ 2.2 Mbits.
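A short sketch of the arithmetic behind the two approaches, assuming a block of L bits is simply retransmitted over a hop until it arrives error-free, so the expected number of transmissions per hop is 1/(1 − p)^L:

```python
p = 1e-6          # bit error rate on each of the two hops
hops = 2

def expected_bits(block_bits, total_bits):
    p_ok = (1 - p) ** block_bits          # P[block arrives correctly on one hop]
    tx_per_hop = 1 / p_ok                  # expected transmissions per hop
    return p_ok, tx_per_hop, hops * tx_per_hop * total_bits

print(expected_bits(1_000_000, 1_000_000))
# -> p_ok ~ 0.37, ~2.7 tx/hop, ~5.4e6 bits (the slide rounds to ~3 tx/hop, i.e. ~6 Mbits)
print(expected_bits(100_000, 1_000_000))
# -> p_ok ~ 0.90, ~1.1 tx/hop, ~2.2e6 bits, i.e. ~2.2 Mbits
```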

Packet Switching: Datagram. Messages are broken into smaller units (packets), with source & destination addresses in the packet header. Connectionless: packets are routed independently (datagrams) and may arrive out of order. Pipelining of packets across the network can reduce delay and increase throughput; lower delay than message switching, suitable for interactive traffic.

Packet Switching Delay. Assume three packets corresponding to one message traverse the same path. (Figure: timing diagram of packets 1, 2, 3 crossing three hops.) Minimum delay = 3τ + 5(T/3) (single path assumed). Additional queueing delays are possible at each link. Packet pipelining enables the message to arrive sooner.

Delay for a k-Packet Message over L Hops. (Figure: timing diagrams for the 3-hop, 3-packet case and the general L-hop case.) For 3 hops and 3 packets: the first bit is received at 3τ + 2(T/3), the first bit is released at 3τ + 3(T/3), and the last bit is released at 3τ + 5(T/3). Over L hops: the first bit is received at Lτ + (L−1)P, the first bit is released at Lτ + LP, and the last bit is released at Lτ + LP + (k−1)P, where T = kP and P is the transmission time of one packet.

Routing Tables in Datagram Networks. The route is determined by table lookup: the routing decision involves finding the next hop in the route to a given destination, and the routing table has an entry for each destination specifying the output port that leads to the next hop. For example:
Destination address   Output port
0785                  7
1345                  12
1566                  6
2458                  12
The size of the table becomes impractical for a very large number of destinations.
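A minimal sketch of such a lookup, using the sample entries above; the dictionary and the forward helper are illustrative only, not part of any real router:

```python
# One entry per destination address, giving the output port toward the next hop.
routing_table = {
    "0785": 7,
    "1345": 12,
    "1566": 6,
    "2458": 12,
}

def forward(dest_addr):
    """Return the output port for a packet addressed to dest_addr."""
    return routing_table[dest_addr]   # a KeyError here models an unknown destination

print(forward("1566"))   # -> 6
```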

Example: Internet Routing. The Internet Protocol uses datagram packet switching across networks. Hosts have a two-part IP address: network address + host address. Routers do a table lookup on the network address; networks are treated as data links. This reduces the size of the routing table. In addition, network addresses are assigned so that they can also be aggregated (discussed as CIDR in Chapter 8).

Packet Switching: Virtual Circuit. A call set-up phase sets up pointers along a fixed path through the network, and all packets for a connection follow the same path. An abbreviated header identifies the connection on each link. Packets queue for transmission. Variable bit rates are possible, negotiated during call set-up. Delays are variable and cannot be less than in circuit switching.

Connection Setup. (Figure: connect-request and connect-confirm messages propagating through switches SW 1, SW 2, ..., SW n.) Signaling messages propagate as the route is selected; they identify the connection and set up tables in the switches. Typically a connection is identified by a local tag, the Virtual Circuit Identifier (VCI). Each switch only needs to know how to relate an incoming tag on one input to an outgoing tag on the corresponding output. Once the tables are set up, packets can flow along the path.

Connection Setup Delay. (Figure: timing diagram showing the connect request (CR), connect confirm (CC), packet transfer, and release phases across three hops.) Connection setup delay is incurred before any packet can be transferred. This delay is acceptable for sustained transfer of a large number of packets, but may be unacceptably high if only a few packets are being transferred.

Virtual Circuit Forwarding Tables. Each input port of a packet switch has a forwarding table: look up the entry for the VCI of the incoming packet to determine the output port (next hop) and insert the VCI for the next link. For example:
Input VCI   Output port   Output VCI
12          13            44
15          15            23
27          13            16
58          7             34
Very high speeds are possible. The table can also include priority or other information about how the packet should be treated.
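A minimal sketch of per-input-port VC forwarding with the sample entries above; the function name and packet representation are illustrative assumptions:

```python
# Table for one input port: incoming VCI -> (output port, outgoing VCI).
input_port_table = {
    12: (13, 44),
    15: (15, 23),
    27: (13, 16),
    58: (7, 34),
}

def switch_cell(in_vci, payload):
    """Relabel and forward one packet arriving on this input port with VCI in_vci."""
    out_port, out_vci = input_port_table[in_vci]
    return out_port, (out_vci, payload)      # the packet leaves carrying the new VCI

print(switch_cell(27, b"data"))   # -> (13, (16, b'data'))
```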

Cut-Through Switching. (Figure: timing diagram of packets crossing three hops, with forwarding beginning before the full packet is received.) Minimum delay = 3τ + T. Some networks perform error checking on the header only, so a packet can be forwarded as soon as its header is received & processed; delays are reduced further with cut-through switching.

Message vs. Packet Minimum Delay. Message switching: Lτ + LT = Lτ + (L − 1)T + T. Packet switching: Lτ + LP + (k − 1)P = Lτ + (L − 1)P + T. Cut-through packet switching (immediate forwarding after the header): Lτ + T. The above neglects header processing delays.
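A small sketch evaluating the three formulas side by side; the numeric values of τ, T, L, and k below are arbitrary illustration:

```python
def message_switching_delay(L, tau, T):
    return L * tau + L * T

def packet_switching_delay(L, tau, T, k):
    P = T / k                                  # per-packet transmission time
    return L * tau + L * P + (k - 1) * P       # = L*tau + (L-1)*P + T

def cut_through_delay(L, tau, T):
    return L * tau + T

L, tau, T, k = 3, 0.001, 1.0, 10               # 3 hops, 1 ms per hop, 1 s message
print(message_switching_delay(L, tau, T))      # 3.003 s
print(packet_switching_delay(L, tau, T, k))    # 1.203 s
print(cut_through_delay(L, tau, T))            # 1.003 s
```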

Example: ATM Networks. All information is mapped into short fixed-length packets called cells. Connections are set up across the network: virtual circuits are established and tables are set up at the ATM switches. Several types of network services are offered: constant bit rate connections and variable bit rate connections.

Chapter 7 Packet-Switching Networks: Datagrams and Virtual Circuits; Structure of a Packet Switch.

Packet Switch: Intersection Where Traffic Flows Meet. (Figure: N input lines and N output lines meeting at a switch.) Inputs contain multiplexed flows from access muxes & other packet switches. Flows are demultiplexed at the input and routed and/or forwarded to output ports. Packets are buffered, prioritized, and multiplexed on the output lines.

Generic Packet Switch ("unfolded" view). (Figure: input ports feed ingress line cards, an interconnection fabric transfers packets between line cards, and egress line cards feed the output ports; a controller handles the control path while the line cards and fabric handle the data path.) Controller: routing in small switches, signalling & resource allocation. Ingress line cards: header processing, demultiplexing, routing in large switches. Interconnection fabric: transfer of packets between line cards. Egress line cards: scheduling & priority, multiplexing.

Line Cards. (Figure, folded view: transceiver, framer, network processor, and backplane transceivers connecting the physical ports to the switch fabric and other line cards.) One circuit board serves as both the ingress and egress line card: the transceiver does physical layer processing, the framer does data link layer processing, the network processor does network header processing, and the backplane transceivers handle the physical layer across the fabric plus framing.

Shared Memory Packet Switch. (Figure: ingress processing at ports 1..N, shared memory with queue control and connection control, and output buffering.) Small switches can be built by reading/writing into shared memory.

Crossbar Switches. (Figure: (a) input buffering, (b) output buffering.) Large switches are built from crossbar & multistage space switches. A centralized controller/scheduler is required (who sends to whom, when). Buffering can be done at the input, the output, or both (a performance vs. complexity trade-off).

Self-Routing Switches. (Figure: an 8×8 three-stage self-routing fabric, inputs 0-7 to outputs 0-7.) Self-routing switches do not require a controller: the output port number determines the route. Example: destination 101 → stage 1 takes the lower port, stage 2 the upper port, stage 3 the lower port.

Chapter 7 Packet-Switching Networks: Routing in Packet Networks.

Routing in Packet Networks. (Figure: example network with nodes 1 through 6; each node is a switch or router.) Three possible (loop-free) routes from 1 to 6: 1-3-6, 1-4-5-6, and 1-2-5-6. Which is "best"? Minimum delay? Minimum hops? Maximum bandwidth? Minimum cost? Maximum reliability?

Creating the Routing Tables. Need information on the state of links (link up/down; congested; delay or other metrics). Need to distribute link state information using a routing protocol: what information is exchanged, how often, exchange with neighbors or broadcast/flood. Need to compute routes based on the information: single metric or multiple metrics; single route or alternate routes.

Routing Algorithm Requirements. Responsiveness to changes: topology or bandwidth changes, congestion; rapid convergence of routers to a consistent set of routes; freedom from persistent loops. Optimality: resource utilization, path length. Robustness: continues working under high load, congestion, faults, equipment failures, and incorrect implementations. Simplicity: efficient software implementation, reasonable processing load.

Routing in Virtual-Circuit Packet Networks. (Figure: hosts A, B, C, and D attached to switches/routers 1 through 6, with a VCI assigned on each link of each connection.) The route is determined during connection setup; tables in the switches implement forwarding that realizes the selected route.

Routing Tables in VC Packet Networks. (Figure: at each of nodes 1 through 6, a table maps incoming (node, VCI) to outgoing (node, VCI).) Example, the virtual circuit from A to D: from A with VCI 5 → node 3 with VCI 3 → node 4 with VCI 4 → node 5 with VCI 5 → D with VCI 2.

Routing Tables in Datagram Packet Networks. (Figure: destination → next-node routing tables at nodes 1 through 6 of the example network; e.g., at node 1: destination 2 → next node 2, 3 → 3, 4 → 4, 5 → 2, 6 → 3.)

Non-Hierarchical Addresses and Routing. (Figure: four sites attached to routers 1 through 4, each site holding four unrelated 4-bit addresses; routing tables R1 and R2 must list individual addresses.) With no relationship between addresses & routing proximity, the routing tables require 16 entries each.

Hierarchical Addresses and Routing. (Figure: the same four sites, now numbered so that each site shares a 2-bit prefix; routing tables R1 and R2 list only the prefixes 00, 01, 10, and 11.) The prefix indicates the network where the host is attached, so the routing tables require only 4 entries each.

Specialized Routing. Flooding: useful in starting up a network and in propagating information to all nodes. Deflection routing: a fixed, preset routing procedure; no route synthesis.

Flooding. Send a packet to all nodes in a network when no routing tables are available or when a packet must be broadcast to all nodes (e.g., to propagate link state information). Approach: send the packet on all ports except the one where it arrived. This causes exponential growth in packet transmissions.

(Figure: flooding initiated from node 1 in the example network; successive slides show the hop 1, hop 2, and hop 3 transmissions.)

Limited Flooding. A time-to-live field in each packet limits the number of hops to a certain diameter. Each switch adds its ID before flooding and discards repeats. The source puts a sequence number in each packet; switches record the source address and sequence number and discard repeats.

Deflection Routing. Network nodes forward packets to a preferred port; if the preferred port is busy, the packet is deflected to another port. Works well with regular topologies, e.g., the Manhattan street network: a rectangular array of nodes designated (i, j), where rows alternate as one-way streets and columns alternate as one-way avenues. Bufferless operation is possible; deflection routing has been proposed for optical packet networks, since all-optical buffering is currently not viable.

(Figure: 4×4 Manhattan street network with nodes (0,0) through (3,3); links tunnel from the last column to the first column, or vice versa.)

Example. (Figure: a packet at node (0,2) headed for (1,0) finds its preferred output port busy and is deflected through the Manhattan street network.)

Chapter 7 Packet-Switching Networks: Shortest Path Routing.

Shortest Paths & Routing. Many possible paths connect any given source to any given destination. Routing involves the selection of the path to be used to accomplish a given transfer. Typically it is possible to attach a cost or distance to a link connecting two nodes, and routing can then be posed as a shortest path problem.

Routing Metrics. A metric is a means for measuring the desirability of a path; path length = sum of link costs or distances. Possible metrics: hop count (a rough measure of resources used); reliability (link availability, BER); delay (sum of delays along the path; complex & dynamic); bandwidth ("available capacity" on a path); load (link & router utilization along the path); cost ($$$).

Shortest Path Approaches. Distance vector protocols: neighbors exchange lists of distances to destinations; the best next hop is determined for each destination; Ford-Fulkerson (distributed) shortest path algorithm. Link state protocols: link state information is flooded to all routers; routers have complete topology information; the shortest path (& hence next hop) is calculated; Dijkstra (centralized) shortest path algorithm.

Distance Vector: do you know the way to San Jose? (Figure: road signs pointing toward San Jose with different distances, e.g., 392, 294, 596, and 250.)

Distance Vector. Local signpost: direction and distance. Routing table: for each destination, list the next node and the distance (dest, next, dist). Distance table synthesis: neighbors exchange table entries; determine the current best next hop; inform neighbors periodically and after changes.

Shortest Path to SJ. Focus on how nodes find their shortest path to a given destination node, i.e., SJ (San Jose). If Di is the shortest distance to SJ from node i and j is a neighbor on the shortest path, then Di = Cij + Dj, where Cij is the cost of link (i, j).

But we don't know the shortest paths: node i only has local information from its neighbors j, j', j'' (their distances Dj, Dj', Dj'' and the link costs Cij, Cij', Cij''). Node i picks the current shortest path, i.e., the neighbor minimizing Cij + Dj.

Why Distance Vector Works. SJ sends accurate information; hop-1 nodes calculate their current (next hop, distance) and send it to their neighbors; accurate information about SJ ripples across the network (1 hop, 2 hops, 3 hops from SJ), and the shortest paths converge.

Bellman-Ford Algorithm. Consider the computation for one destination d. Initialization: each node's table has one row for destination d; the distance of node d to itself is zero (Dd = 0); the distance of every other node j to d is infinite (Dj = ∞ for j ≠ d); the next-hop node is nj = -1 to indicate it is not yet defined for j ≠ d. Send step: send the new distance vector to the immediate neighbors across the local links. Receive step: at node i, find the next hop that gives the minimum distance to d, minj { Cij + Dj }; replace the old (next hop, distance) pair by the new one if a new next node or distance is found. Go to the send step.

Bellman-Ford Algorithm. Now consider parallel computation for all destinations d. Initialization: each node has one row for each destination d; the distance of node d to itself is zero (Dd(d) = 0); the distance of every other node j to d is infinite (Dj(d) = ∞ for j ≠ d); the next node is nj = -1 since it is not yet defined. Send step: send the new distance vector to the immediate neighbors across the local links. Receive step: for each destination d, find the next hop that gives the minimum distance to d, minj { Cij + Dj(d) }; replace the old (nj, Di(d)) by the new (nj*, Dj*(d)) if a new next node or distance is found. Go to the send step.
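A sketch of a synchronous run of this computation for the single destination San Jose (node 6); the link costs are an assumption, inferred from the execution tables on the following slides:

```python
INF = float("inf")
links = {(1, 2): 3, (1, 3): 2, (1, 4): 5, (2, 4): 1, (2, 5): 4,
         (3, 4): 2, (3, 6): 1, (4, 5): 3, (5, 6): 2}      # assumed example topology
cost = {}
for (a, b), c in links.items():
    cost.setdefault(a, {})[b] = c
    cost.setdefault(b, {})[a] = c

dest = 6
table = {n: (-1, INF) for n in cost}          # node -> (next hop, distance to dest)
table[dest] = (dest, 0)

for iteration in range(1, 4):
    new = {}
    for i in cost:                            # receive step at every node
        if i == dest:
            new[i] = (dest, 0)
            continue
        nh, d = min(((j, cost[i][j] + table[j][1]) for j in cost[i]),
                    key=lambda t: t[1])
        new[i] = (nh, d) if d < INF else (-1, INF)
    table = new
    print(iteration, {n: table[n] for n in sorted(table) if n != dest})
# Iteration 3 matches the table above: {1:(3,3), 2:(4,4), 3:(6,1), 4:(3,3), 5:(6,2)}
```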

Example: convergence toward San Jose (node 6), with D6 = 0. (Figure: the example network; the table gives, at each iteration, the (next hop, distance) entry stored at nodes 1 through 5 for destination 6.)
Iteration   Node 1   Node 2   Node 3   Node 4   Node 5
Initial     (-1,∞)   (-1,∞)   (-1,∞)   (-1,∞)   (-1,∞)
1           (-1,∞)   (-1,∞)   (6,1)    (-1,∞)   (6,2)
2           (3,3)    (5,6)    (6,1)    (3,3)    (6,2)
3           (3,3)    (4,4)    (6,1)    (3,3)    (6,2)
For instance, in iteration 1 node 3 computes D3 = D6 + 1 = 1 with next hop n3 = 6, and node 5 computes D5 = D6 + 2 = 2 with n5 = 6.

Example: the link between nodes 3 and 6 fails; node 3 loses its route to the destination and a loop is created between nodes 3 and 4, so the distances climb until consistent routes are re-established:
Iteration      Node 1   Node 2   Node 3   Node 4   Node 5
Before break   (3,3)    (4,4)    (6,1)    (3,3)    (6,2)
1              (3,3)    (4,4)    (4,5)    (3,3)    (6,2)
2              (3,7)    (4,4)    (4,5)    (5,5)    (6,2)   (node 4 could have chosen 2 as next node because of a tie)
3              (3,7)    (4,6)    (4,7)    (5,5)    (6,2)   (node 2 could have chosen 5 as next node because of a tie)
4              (2,9)    (4,6)    (4,7)    (5,5)    (6,2)   (node 1 could have chosen 3 as next node because of a tie)

Counting to Infinity Problem. (Figure: (a) linear network 1-2-3-4 with unit link costs; (b) the link between nodes 3 and 4 breaks.) Nodes believe the best path is through each other (the destination is node 4); each entry is (next hop, distance):
Update         Node 1   Node 2   Node 3
Before break   (2,3)    (3,2)    (4,1)
After break    (2,3)    (3,2)    (2,3)
1              (2,3)    (3,4)    (2,3)
2              (2,5)    (3,4)    (2,5)
3              (2,5)    (3,6)    (2,5)
4              (2,7)    (3,6)    (2,7)
5              (2,7)    (3,8)    (2,7)
…

Problem: bad news travels slowly. Remedies: split horizon (do not report a route to a destination to the neighbor from which the route was learned); poisoned reverse (report the route to a destination to the neighbor from which the route was learned, but with infinite distance). Poisoned reverse breaks erroneous direct loops immediately, but does not work on some indirect loops.

Split Horizon with Poisoned Reverse. (Figure: the same linear network 1-2-3-4; the link between nodes 3 and 4 breaks.) Nodes no longer believe the best path is through each other:
Update         Node 1   Node 2   Node 3
Before break   (2,3)    (3,2)    (4,1)
After break    (2,3)    (3,2)    (-1,∞)   Node 2 advertises its route to 4 to node 3 as having distance infinity; node 3 finds there is no route to 4.
1              (2,3)    (-1,∞)   (-1,∞)   Node 1 advertises its route to 4 to node 2 as having distance infinity; node 2 finds there is no route to 4.
2              (-1,∞)   (-1,∞)   (-1,∞)   Node 1 finds there is no route to 4.

Link-State Algorithm. Basic idea: a two-step procedure. Each source node gets a map of all nodes and link metrics (link state) of the entire network, then finds the shortest path on the map from the source node to all destination nodes. Broadcast of link-state information: every node i in the network broadcasts to every other node the IDs of its neighbors (Ni = set of neighbors of i) and the distances to its neighbors ({Cij | j ∈ Ni}). Flooding is a popular method of broadcasting these packets.

Dijkstra Algorithm: finding shortest paths in order. Find the shortest paths from source s to all other destinations. The closest node to s is one hop away; the 2nd closest node to s is one hop away from s or w''; the 3rd closest node to s is one hop away from s, w'', or x. (Figure: nodes s, w, w', w'', x, x', z, z'.)

Dijkstra's Algorithm. N: set of nodes for which the shortest path has already been found. Initialization (start with source node s): N = {s}; Ds = 0 ("s is distance zero from itself"); Dj = Csj for all j ≠ s (the distances of directly-connected neighbors). Step A (find the next closest node i): find i ∉ N such that Di = min Dj over j ∉ N; add i to N; if N contains all the nodes, stop. Step B (update minimum costs): for each node j ∉ N, Dj = min(Dj, Di + Cij), the minimum distance from s to j through node i in N; go to Step A.
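A sketch of the algorithm on the example network of the next slide; the link costs are an assumption, inferred from the execution table:

```python
INF = float("inf")
graph = {1: {2: 3, 3: 2, 4: 5}, 2: {1: 3, 4: 1, 5: 4}, 3: {1: 2, 4: 2, 6: 1},
         4: {1: 5, 2: 1, 3: 2, 5: 3}, 5: {2: 4, 4: 3, 6: 2}, 6: {3: 1, 5: 2}}

def dijkstra(graph, s):
    N = {s}                                                    # shortest path known
    D = {j: graph[s].get(j, INF) for j in graph if j != s}     # initialization
    while len(N) < len(graph):
        i = min((j for j in D if j not in N), key=lambda j: D[j])   # Step A
        N.add(i)
        for j in graph[i]:                                     # Step B: relax through i
            if j != s and j not in N:
                D[j] = min(D[j], D[i] + graph[i][j])
        print(sorted(N), D)
    return D

dijkstra(graph, 1)   # final distances from node 1: D2=3, D3=2, D4=4, D5=5, D6=3
```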

Execution of Dijkstra's Algorithm. (Figure: the example network with nodes 1 through 6 and its link costs.)
Iteration   N                  D2   D3   D4   D5   D6
Initial     {1}                3    2    5    ∞    ∞
1           {1,3}              3    2    4    ∞    3
2           {1,2,3}            3    2    4    7    3
3           {1,2,3,6}          3    2    4    5    3
4           {1,2,3,4,6}        3    2    4    5    3
5           {1,2,3,4,5,6}      3    2    4    5    3

Shortest Paths in Dijkstra's Algorithm. (Figure: the example network redrawn after each iteration, with the growing shortest-path tree from node 1 highlighted.)

Reaction to Failure. If a link fails, the router sets the link distance to infinity & floods the network with an update packet. All routers immediately update their link database & recalculate their shortest paths, so recovery is quick. But watch out for old update messages: add a time stamp or sequence number to each update message and check whether each received update message is new; if new, add it to the database and broadcast it; if older, send an update message back on the arriving link.

Why is Link State Better? Fast, loop-free convergence. Support for precise metrics, and multiple metrics if necessary (throughput, delay, cost, reliability). Support for multiple paths to a destination: the algorithm can be modified to find the best two paths.

Source Routing. The source host selects the path that is to be followed by a packet. Strict: the sequence of nodes in the path is inserted into the header. Loose: a subsequence of the nodes in the path is specified. Intermediate switches read the next-hop address and remove it. The source host needs link state information or access to a route server. Source routing allows the host to control the paths that its information traverses in the network, and is potentially a means for customers to select which service providers they use.

Example. (Figure: source host A attached to node 1 sends a packet carrying the source route 1, 3, 6, B; after node 1 strips its own address the header reads 3, 6, B, and the packet continues to destination host B.)

Chapter 7 Packet-Switching Networks: ATM Networks.

Asynchronous Transfer Mode (ATM). Packet multiplexing and switching with fixed-length packets ("cells"). Connection-oriented, with rich Quality of Service support. Conceived as an end-to-end technology supporting a wide range of services: real-time voice and video, circuit emulation for digital transport, and data traffic with bandwidth guarantees. Detailed discussion in Chapter 9.

ATM Networking. (Figure: voice, video, and packet traffic enter the ATM network through the ATM Adaptation Layer.) End-to-end information transport uses cells; the 53-byte cell provides low delay and fine multiplexing granularity. Support for many services is provided through the ATM Adaptation Layer.

TDM vs. Packet Multiplexing.
                    TDM                          Packet
Variable bit rate   Multirate only               Easily handled
Delay               Low, fixed                   Variable
Burst traffic       Inefficient                  Efficient
Processing          Minimal, very high speed     Header & packet* processing required
(* In the mid-1980s, packet processing was mainly in software and hence slow; by the late 1990s, very high speed packet processing became possible.)

ATM: Attributes of TDM & Packet Switching. (Figure: voice, data packets, and images multiplexed; TDM wastes bandwidth on empty slots, while ATM fills slots with cells carrying packet headers.) The packet structure gives flexibility & efficiency; synchronous slot transmission gives high speed & density.

ATM Switching. The switch carries out table translation and routing. (Figure: voice, data, and video connections arriving on input ports with one VCI and leaving on output ports with a new VCI.) ATM switches can be implemented using shared memory, shared backplanes, or self-routing multi-stage fabrics.

ATM Virtual Connections. Virtual connections are set up across the network and identified by locally-defined tags. The ATM header contains the virtual connection information: an 8-bit Virtual Path Identifier and a 16-bit Virtual Channel Identifier. This gives powerful traffic grooming capabilities: multiple VCs can be bundled within a VP, similar to tributaries in SONET, except that variable bit rates are possible. (Figure: virtual channels carried inside virtual paths on a physical link.)

VPI/VCI Switching & Multiplexing. (Figure: connections a, b, c, d, e crossing ATM switches 1 through 4 and an ATM crossconnect, with VPIs 1, 2, 3, and 5 on the links.) Connections a, b, and c are bundled into a VP at switch 1; the crossconnect switches the VP without looking at the VCIs; the VP is unbundled at switch 2, with VC switching thereafter. The VPI/VCI structure allows the creation of virtual networks.

MPLS & ATM. ATM was initially touted as more scalable than packet switching, with envisioned speeds of 150-600 Mbps. Advances in optical transmission proved ATM to be the less scalable at 10 Gbps: segmentation & reassembly of messages & streams into 48-byte cell payloads is difficult & inefficient, and a header must be processed every 53 bytes vs. 500 bytes on average for packets (the delay due to a 1250-byte packet at 10 Gbps is 1 μsec, while the delay due to a 53-byte cell at 150 Mbps is ≈ 3 μsec). MPLS (Chapter 10) uses tags to transfer packets across virtual circuits in the Internet.

Chapter 7 Packet-Switching Networks: Traffic Management (Packet Level; Flow-Aggregate Level).

Traffic Management. Vehicular traffic management: traffic lights & signals control the flow of traffic in a city street system; the objective is to maximize flow with tolerable delays; priority services include police sirens, cavalcades for dignitaries, bus & high-usage lanes, and trucks allowed only at night. Packet traffic management: multiplexing & access mechanisms control the flow of packet traffic; the objective is to make efficient use of network resources & deliver QoS; priority examples include fault-recovery packets, real-time traffic, enterprise (high-revenue) traffic, and high-bandwidth traffic.

Time Scales & Granularities. Packet level: queueing & scheduling at multiplexing points; determines the relative performance offered to packets over a short time scale (microseconds). Flow level: management of traffic flows & resource allocation to ensure delivery of QoS (milliseconds to seconds); matching traffic flows to the resources available; congestion control. Flow-aggregate level: routing of aggregate traffic flows across the network for efficient utilization of resources and meeting of service levels; "traffic engineering", at the scale of minutes to days.

End-to-End QoS. (Figure: a packet buffer at each of the multiplexing points 1, 2, ..., N−1, N along a path.) A packet traversing the network encounters delay and possible loss at various multiplexing points; end-to-end performance is the accumulation of the per-hop performances.

Scheduling & QoS. End-to-end QoS & resource control: buffer & bandwidth control → performance; admission control to regulate the traffic level. Scheduling concepts: fairness/isolation; priority; aggregation. Fair queueing & variations: WFQ, PGPS. Guaranteed service: WFQ, rate control. Packet dropping: aggregation, drop priorities.

FIFO Queueing. (Figure: arriving packets enter a shared packet buffer feeding the transmission link; packets are discarded when the buffer is full.) All packet flows share the same buffer. Transmission discipline: first-in, first-out. Buffering discipline: discard arriving packets if the buffer is full (alternatives: random discard; push out the head-of-line, i.e. oldest, packet).

FIFO Queueing. Cannot provide differential QoS to different packet flows: different packet flows interact strongly. Statistical delay guarantees are possible via load control, i.e., restricting the number of flows allowed (connection admission control). It is difficult to determine the performance delivered: the finite buffer determines a maximum possible delay, and the buffer size determines the loss probability, but both depend on the arrival & packet length statistics. Variation: packet enqueueing based on queue thresholds, so some packet flows encounter blocking before others (higher loss, lower delay).

FIFO Queueing with Discard Priority. (Figure: (a) a single packet buffer with packet discard when full; (b) the same buffer with a threshold: Class 1 packets are discarded only when the buffer is full, while Class 2 packets are discarded when the threshold is exceeded.)

HOL Priority Queueing. (Figure: separate buffers for high-priority and low-priority packets feeding one transmission link; each buffer discards packets when full; the low-priority queue is served only when the high-priority queue is empty.) The high-priority queue is serviced until empty and therefore has lower waiting time. Buffers can be dimensioned for different loss probabilities. A surge in the high-priority queue can cause the low-priority queue to saturate.

HOL Priority Features. (Figure: delay vs. per-class loads.) Provides differential QoS. Pre-emptive priority: lower classes are invisible to higher classes. Non-preemptive priority: lower classes impact higher classes through residual service times. High-priority classes can hog all of the bandwidth & starve lower-priority classes, so some isolation between classes is needed.

Earliest Due Date Scheduling. (Figure: arriving packets pass through a tagging unit into a sorted packet buffer feeding the transmission link; packets are discarded when the buffer is full.) Packets are queued in order of their "due date": packets requiring low delay get earlier due dates; packets without delay requirements get indefinite or very long due dates.

Fair Queueing / Generalized Processor Sharing. (Figure: packet flows 1..n, each with its own queue, served onto a transmission link of C bits/second by approximated bit-level round-robin service.) Each flow has its own logical queue: this prevents hogging and allows differential loss probabilities. The C bits/sec are allocated equally among the non-empty queues: transmission rate = C / n(t), where n(t) = number of non-empty queues. The idealized system assumes fluid flow from the queues; an implementation requires an approximation: simulate the fluid system and sort packets according to their completion time in the ideal system.

(Figure, fair queueing example 1: two equal-length packets, one in buffer 1 and one in buffer 2, both arriving at t = 0. In the fluid-flow system both packets are served at rate 1/2 and both complete service at t = 2. In the packet-by-packet system, buffer 1 is served first at rate 1 while the packet from buffer 2 waits, then buffer 2 is served at rate 1.)

(Figure, fair queueing example 2: the packet in buffer 1 is twice as long as the packet in buffer 2, both arriving at t = 0. In the fluid-flow system both are served at rate 1/2; the buffer 2 packet completes at t = 2, after which the buffer 1 packet is served at rate 1 until t = 3. In packet-by-packet fair queueing, buffer 2 is served first at rate 1, then buffer 1, which still completes at t = 3.)

(Figure, weighted fair queueing example: equal-length packets arriving at t = 0, with buffer 2 given three times the weight of buffer 1. In the fluid-flow system the buffer 1 packet is served at rate 1/4 and the buffer 2 packet at rate 3/4; after buffer 2 finishes, buffer 1 is served at rate 1 and completes at t = 2. In packet-by-packet weighted fair queueing, buffer 2 is served first at rate 1, then buffer 1 at rate 1.)

Packetized GPS/WFQ. (Figure: arriving packets pass through a tagging unit into a sorted packet buffer feeding the transmission link.) Compute the packet completion time in the ideal system, add the tag to the packet, sort the packets in the queue according to their tags, and serve in HOL order.

Bit-by-Bit Fair Queueing. Assume n flows and n queues; 1 round = 1 cycle serving all n queues. If each queue gets 1 bit per cycle, then the duration of 1 round = the number of active queues. The round number = the number of cycles of service that have been completed. If a packet arrives to an idle queue: finishing time = current round number + packet size in bits. If a packet arrives to an active queue: finishing time = finishing time of the last packet in the queue + packet size.

(Figure: rounds vs. time for buffers 1, 2, ..., n; the number of rounds equals the number of bit transmission opportunities. A packet of length k bits that begins transmission at a given round completes transmission k rounds later.) Differential service: if a traffic flow is to receive twice as much bandwidth as a regular flow, then its packet completion time would be half.

Computing the Finishing Time. Let F(i, k, t) = finish time of the kth packet that arrives at time t to flow i, P(i, k, t) = size of the kth packet that arrives at time t to flow i, and R(t) = round number at time t. Generalize so that R(t) is continuous rather than counting discrete rounds: R(t) grows at a rate inversely proportional to n(t). Fair queueing: F(i, k, t) = max{F(i, k−1, t), R(t)} + P(i, k, t). Weighted fair queueing: F(i, k, t) = max{F(i, k−1, t), R(t)} + P(i, k, t)/wi.
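A sketch of the tag computation for flows that are all backlogged at t = 0, so that R(t) can be taken as 0 and only the per-flow recursion matters (a simplifying assumption to keep the example short):

```python
def finish_tags(packets, weights):
    """packets: {flow: [packet sizes in arrival order]} -> [(tag, flow, size)] sorted."""
    last = {f: 0.0 for f in packets}          # finish tag of the previous packet
    tagged = []
    for f, sizes in packets.items():
        for size in sizes:
            # F = max(F_prev, R) + P/w, with R(t) = 0 under the stated assumption.
            last[f] = max(last[f], 0.0) + size / weights[f]
            tagged.append((last[f], f, size))
    return sorted(tagged)                     # service order = increasing tag

# Two flows with equal-length packets; flow "b" has twice the weight of flow "a".
print(finish_tags({"a": [100, 100], "b": [100, 100]}, {"a": 1, "b": 2}))
# b's packets get tags 50 and 100, a's get 100 and 200, so b is served sooner.
```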

WFQ and Packet QoS. WFQ and its many variations form the basis for providing QoS in packet networks. Very high-speed implementations are available, up to 10 Gbps and possibly higher. WFQ must be combined with other mechanisms to provide end-to-end QoS (next section).

Buffer Management. Packet drop strategy: which packet to drop when the buffers are full. Fairness: protect behaving sources from misbehaving sources. Aggregation: per-flow buffers protect flows from misbehaving flows; full aggregation provides no protection; aggregation into classes provides intermediate protection. Drop priorities: drop packets from the buffer according to priorities; this maximizes network utilization & application QoS; examples: layered video, policing at the network edge. Controlling sources at the edge.

Early or Overloaded Drop. Random early detection: drop packets if the short-term average of the queue exceeds a threshold; the packet drop probability increases linearly with queue length; offending packets may be marked instead. This improves the performance of cooperating TCP sources and increases the loss probability of misbehaving sources.

Random Early Detection (RED). Sources using TCP will reduce their input rate in response to network congestion. Early drop: discard packets before the buffers are full. Random drop causes some sources to reduce their rate before others, causing a gradual reduction in the aggregate input rate. Algorithm: maintain a running average of the queue length; if Qavg < minthreshold, do nothing; if Qavg > maxthreshold, drop the packet; if in between, drop the packet according to a probability. Flows that send more packets are more likely to have packets dropped.
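A sketch of the drop decision; the averaging weight w and the maximum drop probability max_p are assumed configuration parameters that do not appear on the slide, and a linear ramp is used between the two thresholds:

```python
import random

class Red:
    def __init__(self, min_th, max_th, max_p=0.1, w=0.002):
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, w
        self.q_avg = 0.0

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be dropped."""
        self.q_avg = (1 - self.w) * self.q_avg + self.w * queue_len   # running average
        if self.q_avg < self.min_th:
            return False
        if self.q_avg > self.max_th:
            return True
        p = self.max_p * (self.q_avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

red = Red(min_th=5, max_th=15)
drops = sum(red.on_arrival(queue_len=20) for _ in range(10_000))
print(drops)   # large: once the average climbs past max_th nearly every arrival is dropped
```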

Packet Drop Profile in RED. (Figure: probability of packet drop vs. average queue length: 0 up to minth, increasing between minth and maxth, and reaching 1 as the buffer becomes full.)

Chapter 7 Packet-Switching Networks: Traffic Management at the Flow Level.

Congestion. Congestion occurs when a surge of traffic overloads network resources. (Figure: traffic from several nodes converging on congested links of a network.) Approaches to congestion control: preventive approaches (scheduling & reservations) and reactive approaches (detect & throttle/discard).

(Figure: throughput vs. offered load. The ideal effect of congestion control is that resources are used efficiently up to the available capacity; a controlled network sustains its throughput, while an uncontrolled network's throughput collapses as the offered load grows.)

Open-Loop Control. Network performance is guaranteed to all traffic flows that have been admitted into the network; initially developed for connection-oriented networks. Key mechanisms: admission control, policing, traffic shaping, and traffic scheduling.

Admission Control. (Figure: typical bit rate demanded by a variable-bit-rate information source over time, showing the peak rate and the average rate.) Flows negotiate a contract with the network, specifying their requirements: peak, average, and minimum bit rate; maximum burst size; delay and loss requirements. The network computes the resources needed (the "effective" bandwidth). If the flow is accepted, the network allocates resources to ensure the QoS is delivered as long as the source conforms to the contract.

Policing. The network monitors traffic flows continuously to ensure they meet their traffic contract. When a packet violates the contract, the network can discard it or tag it, giving it lower priority; if congestion occurs, tagged packets are discarded first. The leaky bucket algorithm is the most commonly used policing mechanism: the bucket has a specified leak rate for the average contracted rate and a specified depth to accommodate variations in the arrival rate; an arriving packet is conforming if it does not result in overflow.

The leaky bucket algorithm can be used to police the arrival rate of a packet stream. (Figure: water poured irregularly into a leaky bucket that drains at a constant rate.) The leak rate corresponds to the long-term rate; the bucket depth corresponds to the maximum allowable burst arrival. The bucket drains at 1 packet per unit time (assume constant-length packets, as in ATM). Let X = bucket content at the last conforming packet arrival, and let ta − LCT, the time since the last conforming packet arrival, be the depletion in the bucket.

Leaky Bucket Algorithm (flowchart). On arrival of a packet at time ta, with a depletion rate of 1 packet per unit time:
1. X' = X − (ta − LCT): drain the current bucket content by the interarrival time.
2. If X' < 0, set X' = 0 (the bucket had emptied).
3. If X' > L, the arriving packet is nonconforming (it would cause overflow); X and LCT are left unchanged.
4. Otherwise the packet is conforming: X = X' + I and LCT = ta.
Here X = value of the leaky bucket counter, X' = an auxiliary variable, LCT = last conformance time, I = increment per arrival (the nominal interarrival time), and L + I = bucket depth.
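A sketch of the flowchart as code; the arrival times in the example call are arbitrary, while I and L match the example on the next slide:

```python
def police(arrivals, I, L):
    """Return a list of (arrival time, 'conforming' or 'nonconforming')."""
    X, LCT = 0.0, None
    verdicts = []
    for ta in arrivals:
        Xp = X if LCT is None else X - (ta - LCT)   # drain since last conforming arrival
        if Xp < 0:
            Xp = 0.0                                # the bucket had emptied
        if Xp > L:
            verdicts.append((ta, "nonconforming"))  # would overflow; X, LCT unchanged
        else:
            X, LCT = Xp + I, ta                     # conforming packet
            verdicts.append((ta, "conforming"))
    return verdicts

print(police([0, 1, 2, 3, 4, 10, 12], I=4, L=6))
```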

Leaky Bucket Example: I = 4, L = 6. (Figure: packet arrivals over time and the resulting bucket content; arrivals that would push the content above the bucket depth L + I are marked nonconforming.) Nonconforming packets are not allowed into the bucket & hence are not included in the calculations.

Policing Parameters. T = 1 / peak rate; I = nominal interarrival time = 1 / sustainable rate; MBS = maximum burst size. (Figure: a burst of MBS packets spaced T apart, shown against the leaky bucket parameters L and I.)

Dual Leaky Bucket. A dual leaky bucket is used to police PCR, SCR, and MBS: incoming traffic enters leaky bucket 1 (SCR and MBS), where violating cells are tagged or dropped; the untagged traffic then enters leaky bucket 2 (PCR and CDVT), where violating cells are again tagged or dropped, and untagged traffic passes on. PCR = peak cell rate; CDVT = cell delay variation tolerance; SCR = sustainable cell rate; MBS = maximum burst size.

Traffic Shaping. (Figure: traffic is shaped before leaving network A, policed on entering network B, shaped again before leaving network B, and policed on entering network C.) Networks police the incoming traffic flow. Traffic shaping is used to ensure that a packet stream conforms to specific parameters; networks can shape their traffic prior to passing it to another network.

Leaky Bucket Traffic Shaper. (Figure: incoming traffic enters a buffer of size N and is played out by a server at a constant rate, producing shaped traffic.) Incoming packets are buffered and played out periodically to conform to the parameters. Surges in arrivals are buffered & smoothed out; packet loss is possible due to buffer overflow. This shaper is too restrictive, since conforming traffic does not need to be completely smooth.

Token Bucket Traffic Shaper. (Figure: tokens arrive periodically into a token bucket of size K; incoming packets wait in a buffer of size N and are released by the server when tokens are available.) An incoming packet must have a sufficient token before admission into the network; the token rate regulates the transfer of packets. If sufficient tokens are available, packets enter the network without delay. K determines how much burstiness is allowed into the network.
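A sketch of the shaper as a discrete-time simulation, under the assumptions that one token admits one packet, the bucket starts full, and excess packets wait in an unbounded queue rather than being dropped:

```python
from collections import deque

def token_bucket_shape(arrivals, r, K):
    """arrivals[t] = packets arriving in slot t; returns the departures per slot."""
    tokens, queue, out = float(K), deque(), []    # bucket starts full
    for t, a in enumerate(arrivals):
        queue.extend([t] * a)
        tokens = min(K, tokens + r)               # bucket of size K fills at rate r
        sent = 0
        while queue and tokens >= 1:              # each departure consumes one token
            queue.popleft()
            tokens -= 1
            sent += 1
        out.append(sent)
    return out

# A burst of 5 packets followed by silence, shaped with r = 1 token/slot, K = 3.
print(token_bucket_shape([5, 0, 0, 0, 0, 0], r=1, K=3))   # -> [3, 1, 1, 0, 0, 0]
```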

Token Bucket Shaping Effect. The token bucket constrains the traffic from a source to at most b + rt in an interval of length t: a burst of b can be sent instantly, followed by a sustained rate of r per second. (Figure: the bound b + rt as a function of t.)

Packet Transfer with Delay Guarantees. (Figure: (a) a token-bucket-shaped source with arrival curve A(t) = b + rt feeds multiplexer 1, which guarantees a bit rate R > r (e.g., using WFQ), followed by multiplexer 2; (b) the buffer occupancy at multiplexer 1 rises to at most b and then empties after time b/(R − r), after which there is no backlog of packets.) Assume fluid flow for the information. The token bucket allows a burst of b bytes and then r bytes/second. Since R > r, the buffer content at multiplexer 1 is never greater than b bytes, so the delay at that multiplexer is less than b/R. The rate into the second multiplexer is r.

Delay Bounds with WFQ / PGPS. Assume the traffic is shaped to parameters b & r, the schedulers give the flow at least rate R > r, and the path has H hops; m is the maximum packet size for the given flow, M is the maximum packet size in the network, and Rj is the transmission rate on the jth hop. The maximum end-to-end delay that can be experienced by a packet from flow i is given by the bound below.
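Assuming the intended bound is the standard Parekh-Gallager result for WFQ/PGPS under these assumptions, it has the form:

```latex
D_i \;\le\; \frac{b}{R} \;+\; \frac{(H-1)\,m}{R} \;+\; \sum_{j=1}^{H} \frac{M}{R_j}
```

The first term is the worst-case delay of a full burst served at the allocated rate, the second accounts for the flow's own maximum-size packet at each of the first H − 1 hops, and the third adds one maximum-size packet transmission time at each hop.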

Scheduling for Guaranteed Service. Suppose guaranteed bounds on end-to-end delay across the network are to be provided. A call admission control procedure is required to allocate resources & set the schedulers, and traffic flows from sources must be shaped/regulated so that they do not exceed their allocated resources. Strict delay bounds can then be met.

Current View of Router Function. (Figure: control plane with a routing agent, reservation agent, management agent, and admission control maintaining a routing database and a traffic control database; data plane with an input driver, classifier, Internet forwarder, packet scheduler, and output driver.)

Closed-Loop Flow Control. Congestion control: feedback information is used to regulate the flow from sources into the network, based on buffer content, link utilization, etc.; examples are TCP at the transport layer and congestion control at the ATM level; there is delay in effecting the control. End-to-end vs. hop-by-hop feedback. Implicit vs. explicit feedback: with implicit feedback the source deduces congestion from observed behavior; with explicit feedback routers/switches generate messages alerting sources to congestion.

End-to-End vs. Hop-by-Hop Congestion Control. (Figure: (a) end-to-end control, with feedback information returned from the destination to the source of the packet flow; (b) hop-by-hop control, with feedback exchanged at each hop along the path.)

Traffic Engineering. Management exerted at the flow-aggregate level: the distribution of flows in the network to achieve efficient utilization of resources (bandwidth). A shortest path algorithm to route a given flow is not enough: it does not take into account the requirements of a flow (e.g., its bandwidth requirement) and does not take into account the interplay between different flows. The aggregate demand from all flows must be taken into account.

(Figure: (a) shortest path routing congests the link from node 4 to node 8; (b) a better flow allocation distributes the flows more uniformly.)