Networking.pptx
- Slide count: 41
UCS Networking Solutions. Anton Pogrebnyak, Instructor, Fast Lane RCIS
UCS LAN Deep Dive - Agenda
- High-level system overview
  - Unified Ports
  - I/O module
- Fabric Interconnect forwarding modes
  - End-host mode (EHM) vs Switch mode
  - Dynamic and static pinning concepts
- Server connectivity options
  - Cisco VIC 1200 series
- C-Series integration
UCS 6248: Unified Ports
Dynamic port allocation: lossless Ethernet or Fibre Channel
- FC: native Fibre Channel
- Eth: lossless Ethernet: 1/10 GbE, FCoE, iSCSI, NAS
Benefits / use cases:
- Simplify switch purchase: removes port-ratio guesswork
- Flexible LAN and storage convergence based on business needs
- Increased design flexibility
- Removes protocol-specific bandwidth bottlenecks
- Service can be adjusted based on demand for specific traffic
UCS 6248: Unified Ports
Dynamic port allocation: lossless Ethernet or Fibre Channel
- Ports on the base card or the Unified Port GEM module can be either Ethernet or FC
- Only a contiguous range of ports can be configured as Ethernet or FC, and the Ethernet ports must be the first set of ports
- Port type changes take effect after the next reboot of the switch (for base-board ports) or a power-off/on of the GEM (for GEM unified ports)
- Base card: 32 Unified Ports; GEM: 16 Unified Ports
Configuring Unified Ports
Unified Port Screen
- Configured on a per-FI basis
- Slider-based configuration
- A reboot is required for the new port personality to take effect
- Recommendation: configure the ports on the GEM card, so only the GEM needs to be rebooted
UCS Fabric Topologies: Chassis Bandwidth Options (2208XP only)
- 2 x 1 link: 20 Gbps per chassis
- 2 x 2 links: 40 Gbps per chassis
- 2 x 4 links: 80 Gbps per chassis
- 2 x 8 links: 160 Gbps per chassis
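The bandwidth table above is simple arithmetic, which a short sketch makes explicit: two IOMs per chassis (fabric A and fabric B), each with N 10 Gbps fabric links.

```python
def chassis_bandwidth_gbps(links_per_iom: int) -> int:
    """Aggregate chassis bandwidth: 2 IOMs (fabric A + B),
    each with links_per_iom fabric links at 10 Gbps."""
    return 2 * links_per_iom * 10

# 1, 2, 4, 8 links per IOM -> 20, 40, 80, 160 Gbps per chassis
```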
IOM Connections
An IOM (sometimes called a 'Fabric Extender') provides:
- 1 link for internal management
- 10G-KR server-facing links (HIF)
- Fabric links (NIF)
The servers' mezzanine cards use these I/O channels for external connectivity. Each IOM provides a separate, dedicated I/O channel for internal management connectivity.
UCS 2204 IO Module: Dual 20 Gbps to Each Blade Server (UCS-IOM-2204XP)
- Bandwidth increase for improved response, especially for bursty applications
  - 40G to the network
  - 160G to the host (2 x 10G per half-width slot; 4 x 10G per full-width slot)
- Redundant
- Latency lowered to 0.5 µs within the IOM
- Investment protection with backward and forward compatibility
UCS 2208 IO Module: Dual 40 Gbps to Each Blade Server (UCS-IOM-2208XP)
- Bandwidth increase for improved response, especially for bursty applications
  - 80G to the network
  - 320G to the host (4 x 10G per half-width slot; 8 x 10G per full-width slot)
- Redundant
- Latency lowered to 0.5 µs within the IOM
- Investment protection with backward and forward compatibility
220x-XP Architecture
Fabric ports (NIF) face the FI; internal backplane ports (HIF) face the blades. No local switching, ever: all traffic goes up to the FI.
  Feature             2204-XP    2208-XP
  ASIC                Woodside   Woodside
  Fabric ports (NIF)  4          8
  Host ports (HIF)    16         32
  CoS                 8          8
  Latency             ~500 ns    ~500 ns
On-board components: FLASH, DRAM, EEPROM, Chassis Management Controller, control I/O, chassis signals.
Blade Northbound Ports
These interfaces (show int brief in the NX-OS shell) are backplane traces: Eth x/y/z, where
- x = chassis number
- y = always 1
- z = host interface port number
Example output (truncated):
Eth1/1/1   1   eth   access   up     none                   10G(D)   --
(Eth1/1/2 through Eth1/1/12 follow: active ports show type vntag with a bound VIF such as 1365 or 1369, while unused ports show "Administratively down" or "Link not connected".)
UCS Internal Block Diagram
- UCS 6248 Fabric Interconnects (each with 16 x SFP+ plus an expansion module): double the fabric uplinks
- 2208XP IO Modules: quadruple the downlinks
- Midplane connects the IOMs to the Gen 2 x16 mezzanine card on each server blade (IOH/CPU)
IO Module HIF to NIF Pinning: 2208XP, 1 link
Slot 1: HIFs 1-4; Slot 2: 5-8; Slot 3: 9-12; Slot 4: 13-16; Slot 5: 17-20; Slot 6: 21-24; Slot 7: 25-28; Slot 8: 29-32
With a single fabric link, all host interfaces pin to that one FEX-to-Fabric-Interconnect link.
IO Module HIF to NIF Pinning: 2208XP, 4 links
Slot 1: HIFs 1-4; Slot 2: 5-8; Slot 3: 9-12; Slot 4: 13-16; Slot 5: 17-20; Slot 6: 21-24; Slot 7: 25-28; Slot 8: 29-32
With four fabric links, the host interfaces are statically distributed across the four FEX-to-Fabric-Interconnect links.
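The slot-based pinning above can be sketched in a few lines. This is an illustrative model only, assuming a simple round-robin of blade slots over the active links; the real mapping is fixed by the IOM hardware.

```python
def pin_hif_to_nif(hif: int, num_links: int) -> int:
    """Return the fabric link (NIF, 1-based) a host interface (HIF, 1..32)
    pins to, assuming round-robin of slots across active links.
    Illustrative only - not Cisco's published mapping."""
    if num_links not in (1, 2, 4, 8):
        raise ValueError("UCS supports 1, 2, 4, or 8 fabric links")
    slot = (hif - 1) // 4 + 1            # 4 HIFs per half-width slot
    return (slot - 1) % num_links + 1    # distribute slots across links

# With 1 link, every slot pins to link 1; with 4 links, slots cycle over links 1-4.
```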
IOM and Failover
What happens in a 4-link topology when you lose one link?
- Server vNICs pinned to that link lose their data path; the remaining 3 links still pass traffic for the other blade servers
- To recover the failed servers' vNICs, the chassis must be re-acknowledged
- Since only 1-, 2-, and 4-link topologies are supported with discrete pinning, UCS falls back to 2 links for the blade-to-fabric-port mapping
IOM and Failover
Example: losing IOM link 1 of 4 on IOM 1. The switch still carries the remaining active links, but blades 1 and 5, whose mezzanine ports were pinned to that link, lose connectivity through IOM 1.
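The fallback behavior above (4 links dropping to 2 when one link fails) can be sketched as follows. The generalization to "largest supported count not exceeding the surviving links" is my reading of the slide, not a documented algorithm.

```python
SUPPORTED_LINK_COUNTS = (1, 2, 4, 8)

def fallback_link_count(surviving_links: int) -> int:
    """After a link failure, discrete pinning re-maps blades using the
    largest supported link count <= the number of surviving links.
    Assumption drawn from the slide's 4-link -> 2-link example."""
    return max(n for n in SUPPORTED_LINK_COUNTS if n <= surviving_links)

# Losing 1 of 4 links leaves 3 survivors -> pinning falls back to 2 links.
```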
Increased Bandwidth Access to Blades
- 4 links, discrete (today): 10 Gb available per blade; statically pinned to individual fabric links; deterministic path
- 8 links, discrete: 20 Gb available per blade; statically pinned to individual fabric links; deterministic path
- Up to 8 links, port-channel: up to 160 Gb available per blade; statically pinned to the port-channel; increased and shared bandwidth; guaranteed 10 Gb to each blade; higher availability
Port-Channel Pinning
- No slot-based pinning
- No invalid link count for NIF ports
- A VIC 1200 adapter runs its DCE links in the port-channel; a Gen-1 adapter is pinned to the port-channel over its single 10G link
UCS FI and IOM Connectivity: VIF Calculation
- Every eight 10 GbE ports on the FI are controlled by the same Unified Port Controller (UPC)
- Connect all fabric links from an IOM to FI ports on the same UPC
- With fabric port-channeling, the Virtual Interface (VIF) namespace varies depending on how many fabric links are connected to which FI ports:
  - Connecting them to the same UPC (a set of eight ports) maximizes the number of VIFs available to service profiles deployed on the servers
  - If uplink connections are distributed across UPCs, the VIF count decreases. For example, if you connect seven IOM fabric links to FI ports 1-7 but the eighth to FI port 9, the number of available VIFs is based on 1 link (IOM port 8 to FI port 9).
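The UPC grouping rule can be sketched as below. This is my reading of the slide's example (the effective link count for VIF sizing is the smallest per-UPC group of fabric links), offered as an illustration rather than Cisco's documented formula.

```python
def effective_links_for_vifs(fi_ports: list, upc_size: int = 8) -> int:
    """Group FI port numbers by UPC (8 consecutive ports per UPC) and
    return the smallest group size, which drives the VIF namespace.
    Assumption: 'smallest group wins', per the slide's 7+1 example."""
    groups = {}
    for port in fi_ports:
        upc = (port - 1) // upc_size      # ports 1-8 -> UPC 0, 9-16 -> UPC 1, ...
        groups[upc] = groups.get(upc, 0) + 1
    return min(groups.values())

# All 8 links on ports 1-8 (one UPC) -> sized for 8 links.
# 7 links on ports 1-7 plus one on port 9 -> sized for only 1 link.
```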
Abstracting the Logical Architecture
- Physical view: the blade's adapter connects over a cable through IOM A and a 10GE link to switch 6200-A (Eth1/1)
- Logical view: vNIC1 and vHBA1 in the service profile (server) map to vEth1 and vFC1 on FI 6200-A over a virtual cable (VN-Tag)
- Benefits: dynamic, rapid provisioning; state abstraction; location independence (blade or rack)
VN-Tag: Instantiation of Virtual Interfaces
- Virtual interfaces (VIFs) help distinguish between FC and Eth interfaces, and also identify the originating server
- VIFs are instantiated on the FI and correspond to frame-level tags assigned to blade mezzanine cards
- A 6-byte tag (VN-Tag) is prepended by Palo and Menlo adapters as traffic leaves the server, to identify the interface; the VN-Tag associates frames with a VIF
- VIFs are 'spawned off' the server's EthX/Y/Z interfaces (examples follow)
VIFs
- Ethernet and FC are muxed on the same physical links; the concept of virtual interfaces (VIFs) splits Eth from FC
- Two types of VIFs: vEth for Ethernet and FCoE; vFC for FC traffic
- Each EthX/Y/Z or Po interface typically has multiple VIFs attached to it, carrying traffic to and from a server
- To find all VIFs associated with an EthX/Y/Z or Po interface, do this:
UCS 1280 VIC Adapter
Customer benefits (with the UCS 2208 IOM):
- Dual 4 x 10 GE (80 Gb per host)
- VM-FEX scale: up to 112 VM interfaces with ESX 5.0
Feature details:
- Dual 4 x 10 GE port-channels to a single server slot (side A and side B)
- Host connectivity: PCIe Gen 2 x16; bandwidth limit is 32 Gbps
- HW capable of 256 PCIe devices (OS restrictions apply)
- PCIe virtualization, OS independent (same as M81KR); single OS driver image for both M81KR and 1280 VIC
- Fabric failover supported
- Eth hash inputs: source MAC address, destination MAC address, source port, destination port, source IP address, destination IP address, and VLAN
- FC hash inputs: source MAC address, destination MAC address, FC SID and FC DID
Connectivity: IOM to Adapter
- Implicit port-channel between the UCS 1280 VIC adapter and the UCS 2208 IOM
- A vNIC is active on side A or side B
- Flows are placed by a 7-tuple flow-based hash; for example, a 10 Gb FTP flow and a 10 Gb UDP flow can land on different member links
- A vNIC therefore has access to up to 32 Gbps of throughput
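The flow-based placement above can be illustrated with a toy hash. This is not Cisco's actual hash function, just a sketch of the principle: all packets of one flow stay on one member link (so a single flow tops out at 10 Gb), while a vNIC carrying many flows can spread across all members.

```python
import zlib

def pick_member_link(flow: tuple, num_members: int = 4) -> int:
    """Map a 7-tuple flow key onto one port-channel member (0-based).
    Illustrative: the real ASIC hash is not public in this deck."""
    key = "|".join(map(str, flow)).encode()
    return zlib.crc32(key) % num_members

# Two flows of the same vNIC (hypothetical addresses/ports):
flow_ftp = ("00:25:b5:00:00:0a", "00:25:b5:00:00:0b",
            "10.0.0.1", "10.0.0.2", 50000, 21, "tcp")
flow_udp = ("00:25:b5:00:00:0a", "00:25:b5:00:00:0b",
            "10.0.0.1", "10.0.0.2", 50001, 514, "udp")
# Each flow deterministically maps to one 10G member link.
```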
Block Diagram: Next-Gen UCS Fabric Details
- UCS 6248 Fabric Interconnects (16 x SFP+ plus expansion module) connect to 2208XP IO Modules; the midplane connects the IOMs to the 1280 VIC (x16 Gen 2) on each server blade
- 4 x 10 Gbps EtherChannel from the VIC 1280 to each 2208 IO Module; no user configuration required
- vNIC flows are 7-tuple load-balanced across links; each individual flow is limited to 10 Gb
- Fabric failover available
Fabric Forwarding Modes of Operation
End-host mode (EHM), the default:
- No Spanning Tree Protocol (STP); no blocked ports
- The admin differentiates between server and network ports
- Dynamic (or static) server-to-uplink pinning
- No MAC address learning except on the server ports; no unknown-unicast flooding
- Fabric failover (FF) for Ethernet vNICs (not available in switch mode)
Switch mode, user configurable:
- Fabric Interconnects behave like regular Ethernet switches
- STP parameters are locked
End Host Mode
- The FI presents itself as a bunch of hosts to the network; completely transparent to the network's spanning tree
- No STP: simplifies upstream connectivity
- MAC learning on server ports only; L2 switching between servers on the same fabric
- All uplink ports are forwarding, never blocked
End Host Mode: Unicast Forwarding
- MAC/VLAN plus policy-based forwarding; each server is pinned to an uplink port
- Policies prevent packet looping: deja-vu check and RPF
- No uplink-to-uplink forwarding
- No unknown unicast or multicast flooding (IGMP snooping can be disabled on a per-VLAN basis)
End Host Mode: Multicast Forwarding
- Broadcast traffic for a VLAN is pinned to exactly one uplink port (or port-channel), the broadcast listener for that VLAN; broadcast received on other uplinks is dropped
- Server-to-server multicast traffic is locally switched
- RPF and deja-vu checks also apply to multicast traffic
Switch Mode
- The Fabric Interconnect behaves like a normal L2 switch
- Rapid-STP+ prevents loops; STP parameters are not configurable
- Server vNIC traffic follows STP forwarding states; use vPC upstream to get around blocked ports
- VTP is not supported
- MAC address learning on both uplinks and server links
End Host Mode: Dynamic Pinning
- UCSM manages the pinning of vEths to uplinks
- UCSM periodically evaluates the vEth distribution and redistributes the vEths across the uplinks
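The redistribution idea can be sketched as a simple even spread. This is a toy model only; UCSM's actual rebalancing logic is not described in this deck, and the interface names are hypothetical.

```python
def redistribute(veths: list, uplinks: list) -> dict:
    """Spread vEth interfaces evenly across available uplinks
    (toy round-robin; illustrative, not UCSM's real algorithm)."""
    if not uplinks:
        raise ValueError("at least one uplink is required")
    return {veth: uplinks[i % len(uplinks)]
            for i, veth in enumerate(sorted(veths))}

# If an uplink fails, re-running redistribute() with the surviving
# uplinks re-pins every vEth, analogous to UCSM's periodic rebalancing.
pinning = redistribute(["vEth1", "vEth2", "vEth3"], ["Eth1/1", "Eth1/2"])
```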
End Host Mode – Individual Uplinks
Dynamic re-pinning of failed uplinks:
- All uplinks forward for all VLANs; no STP
- Sub-second re-pinning; GARPs aid upstream convergence
- No server NIC disruption: the vNIC stays up while its MAC is re-pinned to a surviving uplink
End Host Mode – Port-Channel Uplinks (Recommended)
- More bandwidth per uplink; per-flow uplink diversity
- No server NIC disruption on a member-link failure; fewer GARPs needed
- Faster bi-directional, sub-second convergence
- Fewer moving parts
End Host Mode – Static Pinning
- The administrator controls the vEth-to-uplink pinning (e.g. vEth1 and vEth2 pinned to the blue uplink, vEth3 to the purple uplink)
- Deterministic traffic flow
- Pinning is configured under the LAN tab -> LAN Pin Groups and assigned under the vNIC
- No re-pinning within the same FI
- Static and dynamic pinning can coexist
Fabric Failover: End Host Mode (only)
- The fabric provides NIC failover capabilities, chosen when defining a service profile
- Traditionally this was done with a NIC bonding driver in the OS; fabric failover works for any OS on bare metal and for hypervisors
- Provides failover for both unicast and multicast traffic
Recommended Topology for Upstream Connectivity
Fabric Interconnects A and B each connect with forwarding Layer 2 links to a vPC/VSS pair at the access/aggregation layer.
C-Series UCSM Integration
- A mix of B- and C-Series is supported (no B-Series required)
- Nexus 2232 fabric extenders; the two LOM ports carry CIMC (management) traffic exclusively, while a PCIe adapter carries data traffic
- Supported servers: C200 M2, C210 M2, C220 M3, C240 M3, C250 M2, C260 M2, C460 M2
- Adapter support: Emulex CNA, QLogic CNA, Intel 10G NIC, Broadcom 10G NIC, Cisco VIC
C-Series UCSM Integration: Single-Wire Management with VIC 1225
- A mix of B- and C-Series is supported (no B-Series required)
- Management and data traffic share the VIC 1225 connection to the Nexus 2232, so a separate GE LOM/CIMC management link is not needed
- Supported servers: C260 M2, C460 M2, C220 M3, C240 M3, C22 M3, C24 M3
Thank you for your attention! Anton Pogrebnyak, a.pogrebnyak@flane.ru