
DDoS Experiments with Third Party Security Mechanisms
Carla Brodley, Sonia Fahmy, Cristina Nita-Rotaru, Catherine Rosenberg
Current Students: Roman Chertov, Yu-Chun Mao, Kevin Robbins
Undergraduate Student: Christopher Kanich
June 9th, 2004

Year 1 Objectives
• Understand the testing requirements of different types of detection and defense mechanisms:
  - We focus on network-based third party mechanisms
• Design, integrate, and deploy a methodology for performing realistic and reproducible DDoS experiments:
  - Tools to configure traffic and attacks
  - Tools for automation of experiments, measurements, and effective visualization of results
  - Integration of multiple software components built by others
• Gain insight into the phenomenology of attacks, including their first-order and second-order effects, and their impact on detection mechanisms

Year 1 Accomplishments
• Designed and implemented experimental tools (to be demoed):
  - Automated measurement tools, routing/security mechanism log processing tools, and graph plotting tools
  - Automated configuration of interactive and replayed background traffic, routing, attacks, and measurements
  - Scriptable event system to control and synchronize events at multiple nodes
• Installed and configured the following software:
  - Quagga/Zebra, WebStone, ManHunt, Sentivist
• Performed experiments and obtained preliminary results
• Generated requirements for DETER to easily support the testing of third party products
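As a minimal sketch of the kind of graph-plotting step listed above, the snippet below reads per-second throughput measurements and saves a plot with matplotlib. The CSV file name and column layout (seconds, bytes per second, no header) are assumptions for illustration, not the actual tool's format.

```python
# A minimal sketch of an automated plotting step, assuming measurements were
# dumped to a CSV of "seconds_since_start,bytes_per_second" rows with no header.
# The file names and column layout are illustrative, not the real tool's format.
import csv
import matplotlib.pyplot as plt

def plot_throughput(csv_path: str, out_path: str) -> None:
    times, rates = [], []
    with open(csv_path) as f:
        for row in csv.reader(f):
            times.append(float(row[0]))
            rates.append(float(row[1]) * 8 / 1e6)   # bytes/s -> Mbit/s
    plt.figure(figsize=(8, 3))
    plt.plot(times, rates)
    plt.xlabel("Time (s)")
    plt.ylabel("Throughput (Mbit/s)")
    plt.tight_layout()
    plt.savefig(out_path)

if __name__ == "__main__":
    plot_throughput("throughput.csv", "throughput.png")   # hypothetical file names
```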

Why Third Party Products?
1. No Insider Information: we do not control or understand the internals of the mechanisms, therefore we cannot customize tests.
2. Vendor Neutrality: we have no incentive to design experiments for either success or failure.
3. Requirements for DETER: third party tools were not designed for DETER; therefore, we can uncover setup and implementation challenges for DETER.
4. User Perspective: understanding the effectiveness of popular tools to defend against attacks will benefit many user communities.
• Selected mechanisms: Symantec ManHunt v3.0 and Network Flight Recorder (NFR) Sentivist.

Why ManHunt and Sentivist?
• Provide DDoS detection and response
• Use coordinated distributed detection sensors
  - We only test the single-sensor configuration for now
• Available in a software-only form that runs on Red Hat Linux
  - In contrast, many commercial solutions are available only as hardware boxes (e.g., Mazu Networks Enforcer), and some require Microsoft Windows XP, which makes remotely experimenting with them difficult on the current DETER testbed
• Obtained both ManHunt and Sentivist at no cost
• The mechanisms serve as a proof of concept for:
  - Experimental methodology and tools
  - Identifying DETER testbed requirements for testing third-party commercial mechanisms

Symantec ManHunt Claims
Currently, we only focus on ManHunt detection capabilities:
• "Use protocol anomaly detection and traffic monitoring to detect DDoS attacks, including zero-day attacks."
• "Provide session termination, traceback capabilities using 'FlowChaser,' QoS filters, and handoff responses across domains for DDoS protection."
• "Provide the ability to coordinate distributed detection sensors."
• "Detection at up to 2 gigabits per second traffic."
• "Identifies unknown attacks via analysis engine."
Source: http://enterprisesecurity.symantec.com/products.cfm?ProductID=156&EID=0

Attacks Studied
• Tools like Stacheldraht, TFN, and Trinoo should be sanitized first to ensure that they will not attempt to contact daemons outside the testbed
• We experiment with a few recently published attacks:
  - Tunable randomization of Src and Dst [A. Hussain, J. Heidemann, and C. Papadopoulos. A framework for classifying denial of service attacks. SIGCOMM 2003]
  - UDP constant/square wave flooding [A. Kuzmanovic and E. W. Knightly. Low-rate targeted denial of service attacks. SIGCOMM 2003]
  - RST reflection (response to unsolicited ACKs)
  - ICMP echo request reflection
  - ICMP echo flooding
  - SYN flooding with variable rates

Experimental Goals
1. Identify challenges associated with testing third party products on DETER
2. Identify the impact of different attack parameters on application-level and network-level metrics
3. Identify the impact of the traffic selected to train an anomaly detection mechanism on false alarms
• How? Our experiments vary the following factors (sketched below):
  - The mix of attacks
  - Attack parameters, e.g., on and off periods
  - Background traffic during the training and testing phases
  - Security mechanisms: ManHunt and Sentivist
• Our current victim is an Apache web server and a subset of its clients
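A small illustration of how these factors combine into individual runs: the sketch below enumerates a full-factorial experiment matrix. The specific factor values are placeholders chosen for illustration, not the settings actually used in the experiments.

```python
# A small sketch of enumerating the experiment matrix described above.
# The factor values are illustrative placeholders, not the actual settings.
from itertools import product

ATTACKS = ["udp_square_wave", "rst_reflection", "syn_flood"]
ON_OFF_PERIODS_MS = [(60, 200), (100, 500)]          # (burst, period) pairs
BACKGROUND_TRAFFIC = ["webstone_only", "webstone_plus_replay"]
MECHANISMS = ["ManHunt", "Sentivist"]

runs = [
    {"attack": a, "burst_ms": b, "period_ms": p, "background": bg, "mechanism": m}
    for a, (b, p), bg, m in product(ATTACKS, ON_OFF_PERIODS_MS, BACKGROUND_TRAFFIC, MECHANISMS)
]

for i, run in enumerate(runs):
    print(i, run)          # each entry becomes one testbed run
```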

Experimental Setup
• Topology: generated by GT-ITM [Calvert/Zegura, 1996] and adapted to DETER by observing:
  - Limit of 4 on router degree
  - Cannot employ power-law or small-world topologies
  - Delays and bandwidths consume nodes
• Quagga/Zebra [http://www.quagga.net/]: introduces BGP routers that generate dynamic routing traffic
• WebStone [http://mindcraft.com/webstone]: creates interactive WWW traffic with 40 clients at 5 sites
  - File sizes: 500 B, 5 kB, 500 kB, 5 MB with decreasing request frequency
• Replayed NZIX traffic from 2 hosts mapped to all hosts [http://pma.nlanr.net/Traces/long/nzix2.html]
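To make the web workload concrete, here is a minimal sketch of drawing request sizes so that larger files are requested less often, as on this slide. The exact weights are assumptions, since the slide only states that request frequency decreases with file size.

```python
# A minimal sketch of a WebStone-like request-size mix: the four file sizes
# from the slide, with weights that merely decrease with size. The weights are
# assumptions; the slide only says larger files are requested less frequently.
import random

FILE_SIZES_BYTES = [500, 5_000, 500_000, 5_000_000]
WEIGHTS = [0.50, 0.30, 0.15, 0.05]   # hypothetical decreasing frequencies

def next_request_size() -> int:
    """Draw one request size, with larger files requested less often."""
    return random.choices(FILE_SIZES_BYTES, weights=WEIGHTS, k=1)[0]

if __name__ == "__main__":
    print([next_request_size() for _ in range(10)])
```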

Topology (figure)

Square Wave Experiment
• Varies: square wave attack burst length l
  - Number/location of attacker(s), attack period T, and rate R were also varied, but results are not reported here
• Objectives:
  - Understand attack effectiveness
  - Identify attack effects on routing
  - Identify attack effects on application-level and network-level metrics at multiple nodes
  - Identify when a mechanism starts identifying attacks
(Figure: square wave sending at rate R for a burst of length l, idle for T − l, in each period T)
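A minimal sketch of the square-wave pattern defined above, expressed as an instantaneous rate function and a burst schedule. The parameter values in the example (60 ms bursts, 200 ms period, as in the demo; 10,000 pkt/s peak rate) are illustrative, not the full set used in the experiments.

```python
# Square-wave attack schedule: send at rate R (pkt/s) for a burst of length l
# seconds, stay idle for T - l seconds, and repeat with period T.
# The example parameter values below are illustrative.

def square_wave_rate(t: float, R: float, T: float, l: float) -> float:
    """Instantaneous target sending rate (pkt/s) at time t."""
    return R if (t % T) < l else 0.0

def burst_schedule(duration: float, T: float, l: float):
    """(burst_start, burst_end) intervals within [0, duration)."""
    intervals, start = [], 0.0
    while start < duration:
        intervals.append((start, min(start + l, duration)))
        start += T
    return intervals

if __name__ == "__main__":
    R, T, l = 10_000.0, 0.200, 0.060        # hypothetical peak rate; 200 ms period, 60 ms bursts
    print(square_wave_rate(0.03, R, T, l))  # inside a burst -> 10000.0
    print(square_wave_rate(0.10, R, T, l))  # in the idle gap -> 0.0
    print(burst_schedule(1.0, T, l)[:3])
```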

Impact on Throughput (figure)

Impact on Routing
Excerpt from the Zebra BGP logs (2004/06/05, 14:24–14:26, peers 10.1.39.3 and 10.1.44.3):
  [Error] bgp_read_packet error: Connection reset by peer
  sending KEEPALIVE
  rcvd UPDATE w/ attr: nexthop 0.0
  rcvd UPDATE about 10.0.254.0/24 -- withdrawn
  rcvd UPDATE w/ attr: nexthop 10.1.24.2
  rcvd 10.0.254.0/24
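A minimal sketch of the kind of routing-log processing mentioned among the Year 1 tools: scan a Zebra/Quagga bgpd log and count route withdrawals and peer resets per minute. The log file name and the exact line layout assumed by the regular expressions are illustrative.

```python
# Sketch of a bgpd log summarizer: count withdrawn-route UPDATEs and peer
# resets per minute. The log path and exact line layout are assumptions.
import re
from collections import Counter

WITHDRAWN = re.compile(r"^(\d{4}/\d{2}/\d{2} \d{2}:\d{2}).*rcvd UPDATE.*withdrawn")
RESET = re.compile(r"^(\d{4}/\d{2}/\d{2} \d{2}:\d{2}).*Connection reset by peer")

def summarize(log_path: str):
    withdrawals, resets = Counter(), Counter()
    with open(log_path) as log:
        for line in log:
            if (m := WITHDRAWN.match(line)):
                withdrawals[m.group(1)] += 1     # bucket by minute
            elif (m := RESET.match(line)):
                resets[m.group(1)] += 1
    return withdrawals, resets

if __name__ == "__main__":
    w, r = summarize("bgpd.log")                 # hypothetical log file name
    for minute in sorted(set(w) | set(r)):
        print(minute, "withdrawn:", w[minute], "resets:", r[minute])
```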

Aggregate Packet Statistics (figure)

Aggregate Application-level Metrics (figure)

Demo
• RST reflection and tuned square wave attack (60 ms–200 ms)
• Objectives:
  - Illustrate ease of experimental setup with our tool on DETER
  - Identify attack effects on application-level and network-level metrics at multiple nodes
  - Identify attack effects on ManHunt
• Experiment timeline (in seconds):
  - 0: quagga/zebra router setup
  - 220: host setup
  - 223/224: start WebStone and replay
  - 274: RST reflection begins
  - 474: RST reflection ends
  - 524: square wave begins
  - 674: square wave ends
  - 900: end of demo
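A minimal sketch of how a scriptable event system could drive this timeline, assuming a much simpler design than the actual tool: events are (time offset, shell command) pairs fired from a control node. The shell scripts named below are placeholders, not the real tool or attack names.

```python
# Sketch of a timeline-driven event scheduler for the demo above.
# The shell commands are placeholders, not the actual tools.
import sched
import subprocess
import time

TIMELINE = [
    (0,   "setup_routers.sh"),        # quagga/zebra router setup (placeholder script)
    (220, "setup_hosts.sh"),          # host setup
    (224, "start_traffic.sh"),        # start WebStone clients and trace replay
    (274, "start_rst_reflection.sh"),
    (474, "stop_rst_reflection.sh"),
    (524, "start_square_wave.sh"),
    (674, "stop_square_wave.sh"),
    (900, "end_demo.sh"),
]

def run(cmd: str) -> None:
    print(f"[{time.strftime('%H:%M:%S')}] firing: {cmd}")
    subprocess.run(cmd, shell=True, check=False)   # tolerate failing placeholders

def main() -> None:
    scheduler = sched.scheduler(time.monotonic, time.sleep)
    for offset, cmd in TIMELINE:
        scheduler.enter(offset, 1, run, argument=(cmd,))
    scheduler.run()                                # blocks until the last event fires

if __name__ == "__main__":
    main()
```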

Lessons Learned
• Insights into sensitivity to the emulation environment
  - Some effects we observe may not be observed on actual routers, and vice versa (architecture and buffer sizes)
  - Emulab and DETER results differ significantly for the same test scenario (CPU speed)
  - Limit on the degree of router nodes, delays, bandwidths
• Difficulties in testing third party products
  - Products (hardware or software) connect to hubs, switches, or routers
  - Layer 2/layer 3 emulation and automatic discovery/allocation can simplify DETER use for testing third party mechanisms
  - Due to licenses, we need to control machine selection in DETER
  - Windows XP is required to test some products, e.g., the Sentivist administration interface
  - Difficult to evaluate performance when the mechanism is a black box, e.g., we cannot mark attack traffic and must rely solely on knowledge of the attack

Plans for Years 2 and 3
• Formulate a testing methodology for DETER:
  - Design increasingly high-fidelity experiments and better tools to be made available to the DETER/EMIST teams
  - Identify simulation/emulation artifacts
  - Understand the impact of scale, including topology and statistical properties of traffic
  - Gain better insight into the phenomenology of attacks/defenses, including their second-order effects, and how each is affected by experimental parameters
  - Develop a taxonomy of testable claims that security mechanisms make, and map each class of claims into realistic experiments and metrics to validate such claims

Summary
• Identified challenges when testing third party mechanisms, providing feedback on requirements to the DETER testbed design team
• Understood the design of high-fidelity experiments (e.g., topology, dynamic routing, interactive traffic)
• Contributed to the collection of EMIST/DETER tools: experimental setup, attack mix, and measurement tools
• Demonstrated the power of the DETER testbed by presenting a subset of representative experiments