
Network performance measurements
Les Cottrell – SLAC
Prepared for the ICFA-SCIC, CERN, December 8, 2001
Partially funded by the DOE/MICS Field Work Proposal on Internet End-to-end Performance Monitoring (IEPM); also supported by IUPAP

PingER deployment
• Measurements from:
 – 34 monitors in 14 countries
  • 6 DoE Labs, 4 DoE universities
 – Over 600 remote hosts
  • 65 DoE-funded universities
 – Over 72 countries
 – Over 3300 monitor-remote site pairs
 – Measurements go back to Jan-95
 – Reports on RTT, loss, reachability, jitter, reorders, duplicates …
• Countries monitored:
 – Contain 78% of world population
 – 99% of online users of the Internet
• Lightweight (100 bps/host pair)
 – Very useful for inter-regional and poor links; need more intensive measurements for high-performance & Grid sites
• Plan to continue the project

New stuff
• 8 metrics; added: duplicates, out of order, jitter, min RTT, conditional loss probability
• Now have defined ~45 affinity groups + 72 countries & 32 monitoring sites
• FNAL re-involved: cleaning up the database and graphing engine; looking to migrate the database; already much more reliable; creating a web site
• UCL, Daresbury extending to IperfER
• Besides ESnet, there are data sets for the IPv6 6bone, XIWT, and NTON when it existed

Throughput quality improvements
• TCP BW < MSS/(RTT*sqrt(loss)) (a worked example follows this slide)
• 80% annual improvement ~ factor of 10 per 4 years
• [Plot of derived throughput by region: note China; E. Europe keeping up]
• The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm, Mathis, Semke, Mahdavi, Ott, Computer Communication Review 27(3), July 1997
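A worked instance of the Mathis et al. bound quoted above; the MSS, RTT, and loss numbers below are illustrative only, not from the slides.

```python
# TCP BW < MSS / (RTT * sqrt(loss)), per Mathis, Semke, Mahdavi, Ott (1997)
from math import sqrt

def tcp_bw_limit_mbps(mss_bytes, rtt_s, loss):
    """Upper bound on sustained TCP throughput in Mbits/s."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6

# e.g. a 1460-byte MSS path with 100 ms RTT and 0.1% loss:
print(tcp_bw_limit_mbps(1460, 0.100, 0.001))  # ~3.7 Mbps
```

The square root in the denominator is why halving loss buys only ~40% more throughput, while the linear RTT term is why transatlantic paths are hit so much harder than LAN paths at the same loss rate.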

Losses: Rest of world by region
• <1% = good, <2.5% = acceptable, <5% = poor, <12% = very poor, >12% = bad
• E. Europe still has problems
• Middle East, S. & Central America also problem regions

Rest of world by TLD
• Russia poor to bad
• China poor

We need to better understand
• Measurements closer to applications, e.g. FTP
• How to make throughput measurements:
 – Duration & frequency (balance impact against the granularity needed)
 – Windows and/or parallel streams
 – OS dependencies, cpu utilization, interface speeds, security (e.g. ssh)
 – Impact on others; variability on different time-scales
 – Can we use QBSS; can/should the application self-limit?
 – How well does simulation work; how to improve it?
 – How to relate to simpler measurements
 – How does file transfer perform compared to iperf?
 – Is compression useful, and when?
 – How useful is it for the application to get feedback from the network?

How to measure network throughput
• Selected about 2 dozen major collaborator sites in US, CA, JP, FR, CH, IT, UK over the last year
 – Of interest to SLAC (HENP, PPDG, Internet measurement centers)
 – Can get logon accounts
• Use iperf, bbcp (soon bbftp, GridFTP) + traceroute etc.
• Choose window size and # parallel streams (see the measurement-loop sketch after this list):
 – Run for 10 seconds together with ping (loaded)
 – Stop iperf, run ping (unloaded) for 10 seconds
 – Change window or number of streams & repeat
 – Record # streams, window, throughput (Mbits/s), loaded & unloaded ping responses
 – Verify window sizes are set properly using tcpdump; can't believe what the application tells you
• Compare bandwidth measurement tools with iperf, bbcp, bbftp, GridFTP – choose a minimum set, automate
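A minimal sketch of the loaded/unloaded loop described above, assuming an iperf server is already running at the target and using standard iperf client flags (-c, -w, -P, -t); the host name is a placeholder.

```python
import subprocess

HOST = "remote.example.edu"  # hypothetical target running an iperf server

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

for window in ["8K", "16K", "32K", "64K", "100K"]:
    for streams in [1, 2, 4, 8, 16]:
        # Loaded: iperf and ping run together for ~10 s
        iperf = subprocess.Popen(
            ["iperf", "-c", HOST, "-w", window, "-P", str(streams), "-t", "10"],
            stdout=subprocess.PIPE, text=True)
        loaded = run(["ping", "-c", "10", HOST])    # RTT/loss under load
        summary = iperf.communicate()[0]            # wait for iperf to finish
        unloaded = run(["ping", "-c", "10", HOST])  # RTT/loss with iperf stopped
        # Record window, streams, iperf summary, and both ping summaries
        print(window, streams)
        print(summary.strip().splitlines()[-1])
        print(loaded.strip().splitlines()[-1])
        print(unloaded.strip().splitlines()[-1])
```

As the slide warns, the requested window should still be cross-checked with tcpdump, since the kernel may silently clamp it.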

Typical results
• [Plot: throughput vs # streams, one curve per window size: 8 kB, 16 kB, 32 kB, 64 kB, 100 kB; "Today" and the Solaris default window size marked]
• Improves ~linearly with streams for small windows
• High throughput usually = big windows & multiple streams
• Broke the 100 Mbps transatlantic barrier

Windows vs Streams
• Multi-streams often more effective than windows – more agile in the face of congestion (a per-socket sketch follows this slide)
• Often easier to set up:
 – Need root to configure the kernel to set the max window
 – Network components may not support big windows
 – Some OS' treat max windows strangely
• May be able to take advantage of multiple paths
• But:
 – may be considered over-aggressive (RFC 2914)
 – can take more cpu cycles
 – how to know how many streams?
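A minimal sketch of what a generic bulk sender does for each option: request a large per-socket window with SO_SNDBUF (silently capped by the kernel's configured maximum, which is why root/sysctl access matters for really big windows), and open several sockets for parallel streams. The endpoint is a placeholder.

```python
import socket

def make_stream(host, port, window_bytes):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Ask for a big send buffer; the kernel clamps to its configured max.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, window_bytes)
    s.connect((host, port))
    return s

# e.g. 8 parallel streams, each requesting a 64 kB window (hypothetical endpoint)
streams = [make_stream("remote.example.edu", 5001, 64 * 1024) for _ in range(8)]
```

This also makes the trade-off on the slide concrete: the window only needs an unprivileged setsockopt up to the kernel cap, while going beyond the cap needs root; adding streams needs no privilege at all, which is part of why multi-stream transfers are "easier to set up".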

Iperf client CPU utilization
• As expected, increases with throughput (mainly kernel): ~0.7 MHz per Mbit/s
• For fixed throughput:
 – Fewer streams take less cpu
 – E.g. 1-4 streams take 20% less cpu than 8-16 streams for the same throughput (if you can get it)

CPU vs window vs streams vs throughput
• MHz ~ 0.97 * Mbps (a back-of-envelope helper follows this slide)
• Bigger windows = less cpu for fixed throughput
• [Plot: curves shift with increasing window and increasing streams; hooks at the end of the curves = saturation]
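A back-of-envelope helper for the rule of thumb above; the 0.97 MHz-per-Mbps coefficient is the slide's fit, and the example rate is illustrative.

```python
def cpu_mhz_needed(throughput_mbps, mhz_per_mbps=0.97):
    """Rough CPU clock consumed by an iperf-style sender at a given rate."""
    return throughput_mbps * mhz_per_mbps

# e.g. the ~980 Mbits/s gigabit rates quoted later would consume
# roughly a whole ~1 GHz cpu of that era:
print(cpu_mhz_needed(980))  # ~950 MHz
```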

Pathologies
• [Time-series plots:] Diurnal variation often indicates saturation
• Routing change
• Flat time series due to host NIC limit (100 Mbps)

BBCP vs Iperf
• [Scatter plot: bbcp throughput vs iperf throughput] Slope: avg = 0.6, SD = 0.2

Bbcp memory vs disk (=> /tmp)

But … LANL has Maxtor IDE disks

Compression
• 60 Mbyte Objectivity file, using zlib, 8 streams, 64 KB window (a compress-before-send sketch follows this slide)
• Can improve throughput on this link with these hosts (Sun UltraSPARCs with 360 MHz cpus) by more than a factor of 2
• Want to characterize the improvement as a function of (hosts, link speeds, …)
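A minimal sketch of compress-before-send with zlib, the technique the slide measures. Whether it pays off is exactly the open question above: the cpu cost of compression has to be cheaper than the bandwidth it saves, which held for this particular link and these hosts.

```python
import zlib

def send_compressed(data: bytes, sock, chunk=64 * 1024):
    """Stream zlib-compressed data over an already-connected socket."""
    comp = zlib.compressobj()
    for i in range(0, len(data), chunk):
        out = comp.compress(data[i:i + chunk])
        if out:                      # compressobj buffers; may emit nothing yet
            sock.sendall(out)
    sock.sendall(comp.flush())       # flush the remaining compressed tail
```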

Impact on Others
• Make ping measurements with & without iperf loading:
 – Loss loaded (vs unloaded)
 – RTT
• Looking at how to avoid impact: e.g. QBSS/LBE, application pacing, a control loop on stdev(RTT) reducing streams; want to avoid scheduling

File Transfer
• Used bbcp (written by Andy Hanushevsky) – an invocation sketch follows this slide
 – similar methodology to iperf, except runs for the file length rather than a fixed time; provides incremental throughput reports; supports /dev/zero; adding a duration option
 – looked at /afs/, /tmp/, /dev/null
 – checked different file sizes
• Behavior with windows & streams similar to iperf
• Thru_bbcp ~ 0.8 * Thru_iperf
• For modest throughputs (< 50 Mbits/s), rates are independent of whether the destination is /afs/, /tmp/ or /dev/null
• Cpu utilization ~ 1 MHz per Mbit/s, ~20% more than for iperf
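A hedged sketch of driving bbcp from a script, as in the tests above; -s (number of parallel streams) and -w (window size) are bbcp options, while the file and host names are placeholders.

```python
import subprocess

# 8 parallel streams, 64 kB window, copy to a hypothetical remote host
result = subprocess.run(
    ["bbcp", "-s", "8", "-w", "64k",
     "bigfile.dat", "user@remote.example.edu:/tmp/"],
    capture_output=True, text=True)

# bbcp's incremental throughput reports appear in its output;
# the return code tells us whether the transfer completed.
print(result.returncode)
print(result.stdout, result.stderr)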

Application rate-limiting
• Bbcp has transfer rate limiting
 – Could feed network information (e.g. from Web100 or independent pinging) to bbcp to reduce/increase its transfer rate, or change the number of parallel streams (a control-loop sketch follows this slide)
• [Plots: no rate limiting, 64 KB window, 32 streams vs 15 MB/s rate limiting, 64 KB window, 32 streams]
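A minimal sketch of the feedback idea above: watch RTT variability with ping and step an application rate limit down when the path looks congested, up when it looks idle. set_rate_limit() is a hypothetical hook standing in for however the transfer tool exposes rate control, and the thresholds and step factors are illustrative only.

```python
import re, statistics, subprocess, time

def rtt_samples_ms(host, count=10):
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    return [float(m) for m in re.findall(r"time=([\d.]+)", out)]

def control_loop(host, set_rate_limit, rate_mbps=100.0):
    while True:
        rtts = rtt_samples_ms(host)
        if len(rtts) > 1 and statistics.stdev(rtts) > 5.0:  # ms; illustrative
            rate_mbps = max(rate_mbps * 0.7, 1.0)  # back off under congestion
        else:
            rate_mbps = rate_mbps * 1.1            # probe upward gently
        set_rate_limit(rate_mbps)                  # hypothetical tool hook
        time.sleep(30)
```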

Typical QBSS test bed
• Set up QBSS testbed [diagram: Cisco 7200s, with 10 Mbps, 100 Mbps and 1 Gbps links]
• Configure router interfaces (a policy sketch follows this slide):
 – 3 traffic types: QBSS, BE, Priority
 – Define policy, e.g. QBSS > 1%, priority < 30%
 – Apply policy to router interface queues
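A hedged IOS-style sketch of the policy described above, assuming QBSS traffic is marked with DSCP 8 (the QBone Scavenger Service codepoint); class names, the Priority match, and the interface are illustrative, while the 1% and 30% figures are the slide's.

```
! Scavenger (QBSS) gets at least ~1%, Priority is capped near 30%,
! everything else stays best-effort.
class-map match-any QBSS
 match ip dscp 8
class-map match-any PRIO
 match ip dscp ef
policy-map QBSS-POLICY
 class QBSS
  bandwidth percent 1
 class PRIO
  priority percent 30
 class class-default
  fair-queue
interface FastEthernet0/0
 service-policy output QBSS-POLICY
```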

Example of effects
• [Plot: QBSS throttling] kicks in fast (<~1 s)
• Also tried: 1 stream for all, and priority at 30%; 100 Mbps & 2 Gbps bottlenecks
• 2 Gbps is a challenge to saturate: did it at SC2001 with 3 Linux cpus with 5 × 1 Gbps NIC cards and a 2 Gbps trunk from the subnet to the floor network, sending to 17 hosts in 5 countries

Impact on response time (RTT)
• Run ping with iperf loading at various QoS settings, iperf ~93 Mbps:
 – No iperf: ping avg RTT ~300 usec (regardless of QoS)
 – Iperf = QBSS, ping = BE or Priority: RTT ~550 usec (70% greater than unloaded)
 – Iperf = ping QoS (except Priority): RTT ~5 msec (> factor of 10 larger than unloaded)
 – If both ping & iperf have QoS = Priority, ping RTT is very variable since iperf is limited to 30%: RTT quick when iperf is limited, long when iperf transmits

Possible HEP usage
• Apply Priority to lower-volume interactive voice/video-conferencing and real-time control
• Apply QBSS to high-volume data replication
• Leave the rest as Best Effort
• Since 40-65% of bytes to/from SLAC come from a single application, we have modified it to enable setting of the TOS bits (a marking sketch follows this slide)
• Need to identify bottlenecks and implement QBSS there
• Bottlenecks tend to be at the edges, so hope to try with a few HEP sites
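A minimal sketch of the kind of per-application TOS marking mentioned above, assuming QBSS traffic is marked with DSCP 8: the DSCP occupies the upper six bits of the TOS byte, so DSCP 8 is 8 << 2 = 0x20. The endpoint is a placeholder.

```python
import socket

QBSS_TOS = 8 << 2  # DSCP 8 (scavenger) -> TOS byte 0x20

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Mark all packets on this connection as scavenger before connecting
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, QBSS_TOS)
sock.connect(("remote.example.edu", 5001))  # hypothetical endpoint
```

Marking at the sender is what makes the router-side QBSS queueing on the earlier slide possible: the routers classify on the DSCP, not on the application.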

Acknowledgements for SC2001
• Many people assisted in getting accounts, setting up servers, providing advice, software etc.
 – Suresh Man Singh, Harvey Newman, Julian Bunn (Caltech), Andy Hanushevsky, Paola Grosso, Gary Buhrmaster, Connie Logg (SLAC), Olivier Martin (CERN), Loric Totay, Jerome Bernier (IN2P3), Dantong Yu (BNL), Robin Tasker, Paul Kummer (DL), John Gordon (RL), Brian Tierney, Bob Jacobsen (LBL), Stanislav Shalunov (Internet2), Joe Izen (UT Dallas), Linda Winkler, Bill Allcock (ANL), Ruth Pordes, Frank Nagy (FNAL), Emanuele Leonardi (INFN), Chip Watson (JLab), Yukio Karita (KEK), Tom Dunigan (ORNL), Andrew Daviel (TRIUMF), Paul Avery, Greg Goddard (UFL), Paul Barford, Miron Livny (UWisc), Shane Canon (NERSC), Andy Germain (NASA), Richard Baraniuk, Rolf Riedi (Rice)

SC2001 Bandwidth Challenge/demo
• Send data from SLAC/FNAL booth computers (emulating a tier 0 or 1 HENP site) to over 20 other sites with good connections in about 6 countries
 – Throughputs from SLAC range from 3 Mbps to >300 Mbps
• 2 Dell/Linux cpus + 2 GE NICs each, 1.4 Gbps/cpu, plus 1 Dell with 1 GE NIC
 – Can get 980 Mbits/s per cpu with jumbo frames
• Saturate the 2 Gbps connection to the floor network
 – Got 1.6 Gbits/s from booth to floor network
• Apply QBSS to some sites, Priority to a few, and the rest Best Effort
 – See how QBSS works at high speeds
• Competing bulk throughput streams
• Interactive low-throughput streams; look at RTT with ping

Ns-2 thruput & loss predict 90%
• Indicates that on an unloaded link one can get 70% of the available bandwidth without causing noticeable packet loss
• Can get over 80-90% of the available bandwidth
• Can overdrive: no extra throughput BUT extra loss

WAN thruput conclusions
• High FTP performance across WAN links is possible
 – Can do 100s of Mbits/s
 – Even with a 20-30 Mbps bottleneck, can do >100 Gbytes/day
 – Can easily saturate a fast Ethernet interface over the WAN
 – Need GE NICs, > OC3 WANs to improve performance
 – Need high-speed cpus, disks, buses
 – Careful attention to copies, buffering, etc.
• Performance is improving
• OS must support big windows selectable by the application
• Need multiple parallel streams in some cases
• Improvements of 5 to 60 in thruput by using multiple streams & larger windows
• Impacts other users; QBSS looks hopeful

More Information
• IEPM/PingER home site:
 – www-iepm.slac.stanford.edu/
• Bulk throughput site:
 – www-iepm.slac.stanford.edu/monitoring/bulk/
• SC2001 & high-throughput measurements:
 – www-iepm.slac.stanford.edu/monitoring/bulk/sc2001/
• Transfer tools:
 – http://dast.nlanr.net/Projects/Iperf/release.html
 – http://doc.in2p3.fr/bbftp/
 – www.slac.stanford.edu/~abh/bbcp/
 – http://hepwww.rl.ac.uk/Adye/talks/010402-ftp/html/sld015.htm
• TCP tuning:
 – www.ncne.nlanr.net/training/presentations/tcp-tutorial.ppt
 – www-didc.lbl.gov/tcp-wan.html
• QBSS measurements:
 – www-iepm.slac.stanford.edu/monitoring/qbss/measure.html