

  • Number of slides: 174

SERENE Spring School, Birkbeck College, UK, April 14, 2010
Tools to Make Objective Information Security Decisions — The Trust Economics Methodology
Aad van Moorsel, Newcastle University Centre for Cybercrime and Computer Security, aad.vanmoorsel@newcastle.ac.uk

motivation

security and trust
data loss: http://www.youtube.com/watch?v=JCyAwYv0Ly0
identity theft: http://www.youtube.com/watch?v=CS9ptA3Ya9E
worms: http://www.youtube.com/watch?v=YqMt7aNBTq8
http://www.informationweek.com/news/software/showArticle.jhtml?articleID=221400323
© Aad van Moorsel, Newcastle University, 2010

security and trust
security: protection of a system against malicious attacks
information security: preservation of confidentiality, integrity and availability of information
CIA properties:
• confidentiality
• integrity
• availability

why metrics and why quantification?
two uses:
• gives the ability to monitor the quality of a system as it is being used (using measurement)
• gives the ability to predict the future quality of a design or a system (using modelling)
a good metric is critical to make measurement and modelling useful

security and trust: a personal, subjective perception of the quality of a system, or a personal, subjective decision to rely on a system
evaluation trust: the subjective probability by which an individual A expects that another individual B performs a given action on which A’s welfare depends (Gambetta 1988)
decision trust: the willingness to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible (McKnight and Chervany 1996)

security and trust
what is more important, security or trust?
“Security Theatre and Balancing Risks” (Bruce Schneier)
http://www.cato.org/dailypodcast/podcastarchive.php?podcast_id=812

defining the problem space: ontology of security

ontologies
• a collection of interrelated terms and concepts that describe and model a domain
• used for knowledge sharing and reuse
• provide machine-understandable meaning to data
• expressed in a formal ontology language (e.g. OWL, DAML+OIL)

ontology features
• common understanding of a domain
– formally describes concepts and their relationships
– supports consistent treatment of information
– reduces misunderstandings
• explicit semantics
– machine-understandable descriptions of terms and their relationships
– allows expressive statements to be made about the domain
– reduces interpretation ambiguity
– enables interoperability

ontology features (cont.)
• expressiveness
– ontologies are built using expressive languages
– languages able to represent formal semantics
– enable human and software interpretation and reasoning
• sharing information
– information can be shared, used and reused
– supported by explicit semantics
– applications can interoperate through a shared understanding of information

ontology example

ontology structure
• ontologies describe data semantics
• express semantics by:
– defining information representation building blocks
– describing relationships between building blocks
– describing relationships within building blocks
• building blocks are classes, individuals and properties

ontology structure (cont.)
• classes
– represent groups of object instances with similar properties
– related to the object class concept in OO programming
– general statements can be made to include all of a class’ member objects at once
• individuals
– represent class object instances
– similar to objects in OO programming, but do not have associated functionality

ontology structure (cont.)
• properties
– associate an individual to a value
– values can be simple data values or an object
– individuals may have multiple properties
– similar to accessor methods in OO programming
– can be associated with multiple unrelated classes, leading to reusability of property descriptions

relating information
• need to describe relationships between classes, individuals and properties
• the most important relationships are:
– individual to class: “is an instance of”
– individual to property: “has value of”
– restrictions between classes and properties
• individual to class
– the relationship between an individual and its owning class must be explicitly stated
– supports identification of a class’ members

relating information (cont.)
• individual to property
– individuals have values described by properties
– the relationship allows the specification of values for particular attributes of the individual
• restrictions between classes and properties
– can define which classes have which properties
– can constrain property values to be of a certain class (range) or to only describe particular classes (domain)
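As a concrete illustration, the building blocks and relationships above can be mimicked with a toy triple store in Python. All class, individual and property names below are invented for the example; a real ontology would be written in OWL and processed by a reasoner.

```python
# A toy triple store illustrating the building blocks described above:
# classes, individuals ("is an instance of"), properties ("has value of"),
# and a domain restriction on a property.

triples = set()

def add(s, p, o):
    triples.add((s, p, o))

# class membership: individual -> class (names are invented examples)
add("WebServer01", "isInstanceOf", "Asset")
add("SQLInjection", "isInstanceOf", "Threat")

# property value: individual -> attribute
add("WebServer01", "hasOwner", "IT-Department")

# a crude domain restriction: 'hasOwner' may only describe Assets
DOMAIN = {"hasOwner": "Asset"}

def check_domains():
    """Return every property use whose subject violates the domain."""
    violations = []
    for s, p, o in triples:
        cls = DOMAIN.get(p)
        if cls and (s, "isInstanceOf", cls) not in triples:
            violations.append((s, p, o))
    return violations

print(check_domains())  # no violations in the data above
```

Adding, say, `("SQLInjection", "hasOwner", ...)` would then show up as a domain violation, since SQLInjection is a Threat, not an Asset.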

core information security ontology elements
• information assets being accessed
– information that is of value to the organisation, which individuals interact with and which must be secured to retain its value
• the vulnerabilities that users create
– within IT infrastructure, but also within the processes that a ‘user’ may partake in
• the intentional or unintentional threats user actions pose
– not just to IT infrastructure, but to process security and productivity
• the potential process controls that may be used and their identifiable effects
– these may be technical, but also actions within a business process
• this formalised content is then encoded in an ontology
– represented in the Web Ontology Language (OWL)

security ontology: relationships
Fenz, ASIACCS’09, Formalizing Information Security Knowledge

security ontology: concepts
Fenz, ASIACCS’09, Formalizing Information Security Knowledge

security ontology: example of fire threat
Fenz, ASIACCS’09, Formalizing Information Security Knowledge

base metrics

objective
• understand the basic metrics important when assessing IT systems

classifying metrics (part one)
quantitative vs. qualitative: quantitative metrics can be expressed through some number, while qualitative metrics are concerned with TRUE or FALSE
quality-of-service (QoS) metrics: express a grade of service as a quantitative metric
non-functional properties are system properties beyond the strictly necessary functional properties
IT management is mostly about quantitative/QoS/non-functional metrics

common metrics (performance)
response time, waiting time, propagation delay
(timeline figure: a user request incurs a propagation delay, arrives at the server buffer, waits, is processed, and the server’s reply incurs a second propagation delay before reaching the user; response time from the server perspective excludes the two propagation delays, response time from the client perspective includes them)

quantitative metrics
if you measure a metric several times, you will not always get exactly the same result
a metric can be represented as a random variable X, which has characteristics such as mean, standard deviation, variance and distribution
this also holds for response time: mean response time, variance of the response time, distribution of the response time
the next section shows how to derive these characteristics from a set of measurement data
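Deriving those characteristics from measurement data is mechanical; a minimal sketch, using invented response-time samples:

```python
# Mean, variance, standard deviation and a distribution summary
# (the 90th percentile) of a set of response-time measurements.
# The sample values are made up for illustration.
import statistics

samples_ms = [102, 98, 115, 97, 130, 101, 99, 250, 103, 105]

mean = statistics.mean(samples_ms)       # mean response time
var = statistics.variance(samples_ms)    # sample variance
sd = statistics.stdev(samples_ms)        # standard deviation
# one point of the empirical distribution: the 90th-percentile value
p90 = sorted(samples_ms)[int(0.9 * len(samples_ms)) - 1]

print(f"mean={mean:.1f} ms, stdev={sd:.1f} ms, p90={p90} ms")
```

Note how the single 250 ms outlier barely moves the mean but inflates the variance, which is why looking at the distribution (percentiles) matters.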

classifying metrics (part two)
performance metrics: timing and usage metrics
• CPU load
• throughput
• response time
dependability or reliability metrics: metrics related to accidental failures
• MTTF
• availability
• reliability
security metrics: metrics related to malicious failures (attacks)
• ?
business metrics: metrics related to cost or benefits
• number of buy transactions on a web site
• cost of ownership
• return on investment

common metrics (performance)
throughput = number of tasks a resource can complete per time unit:
• jobs per second
• requests per second
• millions of instructions per second (MIPS)
• floating-point operations per second (FLOPS)
• packets per second
• kilobits per second (Kbps)
• transactions per second
• . . .

common metrics (performance)
capacity = maximum sustainable number of tasks
load = offered number of tasks
overload = load is higher than capacity
utilization = the fraction of resource capacity in use (CPU, bandwidth); for a CPU, this corresponds to the fraction of time the resource is busy (sometimes imprecisely called the CPU load)

relation between performance metrics
utilization versus response time, throughput and load
(figure: three plots; response time grows without bound as utilization approaches 1; utilization rises with load until it saturates at 1, beyond which the system is in overload; throughput grows linearly with utilization)
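The explosive growth of response time near full utilization can be reproduced with the classic single-server (M/M/1) queueing formula R = S / (1 − U); the M/M/1 assumption and the 10 ms service time are my own illustration, not from the slides.

```python
# Response time vs. utilization under the M/M/1 formula R = S / (1 - U).
# service_time is an invented example value.

service_time = 0.010  # 10 ms per request

def response_time(utilization):
    if utilization >= 1.0:
        return float("inf")   # overload: the queue grows without bound
    return service_time / (1.0 - utilization)

for u in (0.5, 0.9, 0.99):
    print(f"U={u:.2f}: R={response_time(u) * 1000:.0f} ms")
```

At 50% utilization the response time is only twice the service time, but at 99% it is a hundred times the service time, which is the hyperbolic shape sketched in the slide.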

common metrics (dependability/reliability)
systems with failures alternate between up and down periods:
operating (up) → failure → failed (down) → repair → operating (up) → . . .

common metrics (dependability/reliability)
Mean Time To Failure (MTTF) = average length of an up-time period
Mean Time To Repair (MTTR) = average length of a down-time period
availability = fraction of time the system is up = MTTF / (MTTF + MTTR)
unavailability = 1 − availability = fraction of down time
reliability at time t = probability the system does not go down before t

relation between dependability metrics
availability and the associated yearly down time:

availability   yearly down time
0.9            37 days
0.99           4 days
0.999          9 hours
0.9999         50 minutes
0.99999        5 minutes
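The down-time column follows directly from the definition of unavailability; a quick check:

```python
# Yearly down time implied by an availability level:
# down time = (1 - availability) * one year.

def yearly_downtime_hours(availability):
    return (1.0 - availability) * 365 * 24

for a in (0.9, 0.99, 0.999, 0.9999, 0.99999):
    print(f"{a}: {yearly_downtime_hours(a):.2f} hours down per year")
```

This reproduces the table above: 0.9 gives 876 hours (about 37 days), and five nines gives roughly 5 minutes per year.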

relation between dependability metrics
if you have a system with 1 day MTTF and 1 hour MTTR, would you work on the repair time or the failure time to improve the availability?

availability   required MTTF if MTTR = 1 hour   required MTTR if MTTF = 1 day
0.96           1 day                            1 hour
0.99           4 days                           14 minutes
0.999          6 weeks                          1½ minutes
0.9999         14 months                        9 seconds
0.99999        11 years                         1 second
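The table can be reproduced by solving the availability formula for the missing quantity: from A = MTTF / (MTTF + MTTR) we get MTTF = MTTR · A/(1 − A) and MTTR = MTTF · (1 − A)/A. A worked check using the slide’s numbers:

```python
# Solving A = MTTF / (MTTF + MTTR) for the required MTTF or MTTR.

def required_mttf(avail, mttr_hours):
    return mttr_hours * avail / (1.0 - avail)

def required_mttr(avail, mttf_hours):
    return mttf_hours * (1.0 - avail) / avail

# with MTTR fixed at 1 hour, five nines needs an MTTF of ~11 years:
print(required_mttf(0.99999, 1) / (24 * 365), "years")
# with MTTF fixed at 1 day, five nines needs repairs in ~1 second:
print(required_mttr(0.99999, 24) * 3600, "seconds")
```

The asymmetry answers the slide’s question: with a 1 day MTTF, only shrinking the repair time to around a second reaches five nines, while leaving MTTR at an hour would require failures to be eleven years apart.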

five nines

conclusion
discussed the basic metrics important when managing IT systems:
• performance metrics: throughput, utilization, response time
• dependability metrics: availability, MTTF, MTTR
• security metrics
• business metrics

security metrics

measure security
Lord Kelvin: “what you cannot measure, you cannot manage”
how true is this:
• in science?
• in engineering?
• in business?
and how true is: “we only manage what we measure”?

some related areas: performance
measuring performance:
• we know CPU speed (and Intel measured it)
• we can easily measure sustained load (throughput)
• we can model reasonably well for performance (queueing, simulations)
• we’re pretty good at adapting systems for performance, through load balancing etc.
• there is a TOP500 for supercomputers
• we buy PCs based on performance, and their performance is advertised
• companies buy equipment based on performance

some related areas: availability
measuring availability:
• we do not know much about CPU reliability (although Intel measures it)
• it is easy, but time-consuming, to measure down time
• we can model reasonably well for availability (Markov chains), although we do not know parameter values for fault and failure occurrences
• we’re rather basic at adapting systems for availability, but there are various fault-tolerance mechanisms
• there is no TOP500 for reliable computers
• we do not buy PCs based on availability, and their availability is rarely advertised
• companies buy equipment based on availability only for top-end applications (e.g. goods and finance administration of supermarket chains)

how about security?
measuring security:
• we do not know much about the level of CPU security (and Intel does not know how to measure it)
• it is possible to measure security breaches, but how much do they tell you?
• we do not know how to model for levels of security; for instance, we do not know what attacks look like
• we’re only just starting to research adapting systems for security, although many security mechanisms are available
• there is no TOP500 for secure computers
• we do not buy PCs based on privacy or security, and their privacy/security is rarely advertised
• companies are very concerned about security, but do not know how to measure it and show improvements

what’s special about security
• security is a hybrid between a functional and a non-functional (performance/availability) property
• it is tempting to think security is binary, secured or not: a common mistake
• security deals with loss and attacks
– you can measure after the fact, but would like to predict
– maybe loss can still be treated like accidental failures (as in availability)
– attacks certainly require knowledge of attackers: how they act, when they act, what they will invent
• a security level (even if we somehow divined it) is meaningless on its own:
– what are the possible consequences?
– how do people react to it (risk-averse?)

how do people now measure security?
reporting after the fact:
• industry and government are obligated to report breaches (NIST database and others)
• measure how many non-spam emails got through, etc.
some predictive metrics as a ‘substitute’ for security:
• how many CPU cycles are needed to break an encryption technique?
• risk analysis: likelihood × impact, summed over all breaches, but we know neither likelihood nor impact
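The risk-analysis formula above is easy to write down; the hard part, as the slide says, is that its inputs are guesses. A sketch in which every number is invented:

```python
# Summed likelihood x impact over breach types, i.e. an annualized
# loss estimate. All likelihoods and impacts below are invented;
# in practice neither is reliably known.

breaches = {
    # breach type: (estimated yearly likelihood, estimated impact in GBP)
    "laptop theft":  (0.30, 20_000),
    "phishing":      (0.80, 5_000),
    "db compromise": (0.02, 500_000),
}

annualized_risk = sum(p * impact for p, impact in breaches.values())
print(f"estimated annual loss: {annualized_risk:,.0f} GBP")
```

Note how the rare, high-impact breach dominates the total, exactly the regime in which the likelihood estimate is least trustworthy.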

why is measurable security important?
without good measures:
• security is sold as all or nothing
• security purchase decisions are based on scare tactics
• system configurations (including cloud, SaaS) cannot be judged for the resulting security

CIA metrics
how about CIA?
confidentiality: keep the organisation’s data confidential (privacy for the organisation)
integrity: data is unaltered
availability: data is available for use
• you can score them, and sum them up (see CVSS later)
• you can measure integrity and availability if there is centralized control
• you cannot easily predict them

good metrics
in practice, good metrics should be:
• consistently measured, without subjective criteria
• cheap to gather, preferably in an automated way
• a cardinal number or percentage, not qualitative labels
• expressed using at least one unit of measure (defects, hours, . . .)
• contextually specific: relevant enough to make decisions
from Jaquith’s book ‘Security Metrics’

good metrics
in practice, metrics cover four different aspects:
• perimeter defenses
– # spam detected, # viruses detected
• coverage and control
– # laptops with antivirus software, # patches per month
• availability and reliability
– host uptime, help desk response time
• application risks
– vulnerabilities per application, assessment frequency for an application
from Jaquith’s book ‘Security Metrics’

good metrics
how good do you think the metrics from Jaquith’s book ‘Security Metrics’ are?
they are the best we can do now but, as the next slide shows, there are a lot of open issues

good metrics
ideally, good metrics should:
• not measure the process used to design, implement or manage the system, but the system itself
• not depend on things you will never know (such as in risk management)
• be predictive about future security, not just report the past
but these are very challenging properties we do not yet know how to deliver

trust metrics

trust
[Gambetta]: “expectation of an actor (trustor) that another actor (trustee) will emit certain behavior, in a context where the trustor may not be able to monitor the trustee, and where the outcome of a social phenomenon involving the trustor is dependent on that behavior of the trustee”

trust: a very complex notion

trust is needed when uncertainty exists, either because
– something is not known, or
– one does not trust the source of the information
uncertainty in online transactions:
– the customer cannot monitor the provider
– the customer does not believe the legal institutions in a different country will act honestly in case of disputes
– the customer does not believe a technological claim, e.g. transactional properties
uncertainty may even exist when a proof is given: why believe the proof is correct?

trust
uncertainty may even exist when a proof is given: why believe the proof is correct?
that is, uncertainty is in the eye of the beholder
so, by definition, no technology solution exists for trust
note furthermore:
‘trust is cheap, security is expensive’
‘the more trust exists, the less security is needed’

trust
no technology solution exists for trust, but we can still work toward improving trust
let us look at trust in the cloud, where cooperation between parties leads to business interactions
to establish cooperation, trust is needed
we study cooperation; tools to establish cooperation are trust enablers (theory of Axelrod)

how does trust develop?

Axelrod on cooperation
mutuality of preferences: cooperation is beneficial for all
shadow of the future: if one does not cooperate, it may hurt later
sanctioning: if one does not cooperate, other parties (legal institutions, mafia, . . .) will sanction

how does trust develop?

Scott on institutions
to achieve Axelrod’s incentives, we use institutions: constraints on the behaviour of participating parties
• a legal institution can be expected to deter parties from interactions that are against the law (‘regulative institutions’)
• a norm from society or parents will influence how parties behave (‘normative institutions’)
• the expectation that all parties are in it to make money will restrict choices (‘cognitive institutions’)

Scott on institutions

technological institutions
for online interactions, technology also constrains behaviour
examples of technological institutions:
• a dedicated communication link instead of the WWW reduces uncertainty
• encryption reduces the danger of losing important information (perhaps at the cost of performance jitter)
• automated contracts and non-repudiation
• automated agent-based negotiation
• . . .

removing uncertainty through institutions
(figure: starting from all states, institutions successively remove states: states that would not be economically rational, states impossible because of laws, and states impossible because technology does not support them)

one party’s view, including (dis)trust
(figure: from all states to the states remaining from one party’s perspective)

cooperation
a party will participate if it believes there are beneficial states
a party will remain participating as long as the states remain beneficial (this can be relaxed)
a market will remain functioning if
– there is overlap in states for the participating parties
– the states remain in the overlap area

conclusion
an institutional framework for trust in cloud and other online interactions
agents may implement state machines that
– depict the state the agent believes to be in
– determine the next action to take
this is a basis for middleware for trust (uncertainty tolerance)

web of trust and reputation systems

trust and reputation
reputation is a belief about a person’s (or thing’s) character or standing (Jøsang, 2009)
one could call it ‘trust in the dependability of someone or something’
so: reputation of B = expected [dependability trust in B]
a quantitative metric!

trust and reputation
reputation is a belief about a person’s (or thing’s) character or standing (Jøsang, 2009)
characteristics:
• reputation is public
• reputation is shared, but not necessarily adopted by all people (they might not ‘trust’ the reputation number)
reputation is the trust level reported by someone else (person A), but you may decide:
1. you do not trust the judgement of A
2. you do not trust the honesty of A
so you can have:
• I trust you despite your bad reputation
• I trust you because of your good reputation

security and trust
from Jøsang, IFIPTM 2009
• add cost to this figure
• add productivity loss to this figure
• add the emergence of new applications to this figure

is trust transitive?
from Jøsang, IFIPTM 2009

PGP: Pretty Good Privacy
PGP is an asymmetric key solution:
– symmetric: we share a password
– asymmetric: we each have a public and a private key, and only share the public one
uses of asymmetric keys:
1. signing: I use my private key to encrypt a document; everyone can use my public key to decrypt it, but if the decryption succeeds, it is proof it came from me
2. encryption: I use your public key to encrypt a document; only your private key can decrypt it
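Both uses can be demonstrated with textbook RSA and tiny numbers. This is a toy of my own to show the mechanics only: no padding, insecure key sizes; real PGP implementations use vetted cryptographic libraries.

```python
# Toy textbook-RSA demonstration of the two asymmetric-key uses above.
# Tiny numbers, no padding -- NOT secure; for illustration only.

p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (Python 3.8+ modular inverse)

message = 65

# 1. signing: transform with the PRIVATE key, check with the public key
signature = pow(message, d, n)
assert pow(signature, e, n) == message   # anyone can verify it came from me

# 2. encryption: transform with the PUBLIC key, recover with the private key
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message  # only the private key recovers it

print("signature:", signature, "ciphertext:", ciphertext)
```

The asymmetry is the whole point: the same key pair gives proof of origin in one direction and confidentiality in the other.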

PGP: Pretty Good Privacy
can we trust in PGP?
main issue: if you give me your public key, why should I trust that it comes from you and has not been tampered with?
solutions:
1. PKI: Public Key Infrastructure
2. Web of Trust (PGP’s solution)

PKI
a Certification Authority (CA) assures the identity of the owner of a public key
if you want a private/public key pair, you go to the CA:
• it checks who you are and what kind of trust people can place in you
• all your details and the public key are put in a public key certificate
• you receive the public key certificate and the private/public key pair
when you send the public key certificate to someone, the person can check with the CA whether you are who you say you are

PKI
from Jøsang, IFIPTM 2009

Web of Trust
no certification authority
• you, A, as a new participant, create your own public key certificate
• you go to someone, B, who is already in the web of trust, and ask that person to sign your public key certificate
• anyone who trusts B will now trust your public key
• others may ask you to sign their public key certificates
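The chain “anyone who trusts B will now trust your public key” can be sketched as reachability through signatures. This simplification is mine: it treats trust as fully transitive, whereas real PGP distinguishes marginal from complete trust; all names are invented.

```python
# Web-of-trust sketch: accept a key if a chain of certificate
# signatures leads back to a key you already trust directly.

signed_by = {            # key -> set of keys that signed its certificate
    "A": {"B"},          # B signed A's certificate
    "B": {"C"},          # C signed B's certificate
    "D": set(),          # nobody vouches for D
}

def accepts(my_trusted_keys, key):
    """Do I accept `key`, starting from keys I trust directly?"""
    seen, frontier = set(), set(my_trusted_keys)
    while frontier:
        k = frontier.pop()
        if k == key:
            return True
        seen.add(k)
        # I also accept any key whose certificate an accepted key signed
        frontier |= {x for x, signers in signed_by.items()
                     if signers & ({k} | seen)} - seen
    return False

print(accepts({"C"}, "A"))   # C signed B, B signed A: accepted
print(accepts({"C"}, "D"))   # no signature chain to D: rejected
```

This also makes the contrast with PKI concrete: there is no single root, only whatever chains happen to reach your personal set of trusted keys.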

reputation systems
assume a system with a number of participants
• all participants give feedback about other participants
• some mathematical formula computes the resulting reputation
– reputation = sum of positive scores − sum of negative scores (eBay)
– reputation = average of all scores
– reputation = (1 + sum positive) / (1 + sum of all scores)
– . . .
• the reputation computation can be centralized or distributed
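The three formulas above can be written down directly; the feedback counts below are invented for illustration.

```python
# The three reputation formulas listed above, applied to one set of
# feedback: 90 positive and 10 negative ratings.

def ebay_score(pos, neg):
    return pos - neg                       # sum positive - sum negative

def average_score(scores):
    return sum(scores) / len(scores)       # average of all scores

def smoothed_ratio(pos, neg):
    # (1 + sum positive) / (1 + sum of all scores)
    return (1 + pos) / (1 + pos + neg)

pos, neg = 90, 10
print(ebay_score(pos, neg))                # absolute score: grows with volume
print(round(smoothed_ratio(pos, neg), 3))  # ratio in (0, 1), volume-damped
```

The choice of formula matters: the eBay-style sum rewards sheer transaction volume, while the ratio forms normalize it away.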

data collection: honeypots and security-breach data banks

Aad van Moorsel, Newcastle University Centre for Cybercrime and Computer Security, aad.vanmoorsel@newcastle.ac.uk

honeypots

a honeypot
• pretends to be a resource with value to attackers, but is actually isolated and monitored
• in order to misguide attackers and analyze their behaviour

honeypots

two types:
– high-interaction:
  • real services, real OS, real applications; higher risk of being used to break in or attack others
  • honeynets (two or more honeypots in a network)
– low-interaction:
  • emulated services; low risk
  • honeypots like nepenthes

an example of a hacker in a honeypot

  SSH-1.99-OpenSSH_3.0
  SSH-2.0-GOBBLES
  GGGGO*GOBBLE*
  uname -a; id
  OpenBSD pufferfish 3.0 GENERIC#94 i386
  uid=0(root) gid=0(wheel) groups=0(wheel)
  ps -aux | more
  USER    PID  %CPU %MEM  VSZ  RSS  TT  STAT  STARTED   TIME     COMMAND
  root  16042  0.0  0.1   372  256  ??  R     2:48PM    0:00.00  more (sh)
  root  25892  0.0  0.2   104  452  ??  Ss    Tue02PM   0:00.14  syslogd
  root  13304  0.0  0.1    64  364  ??  Is    Tue02PM   0:00.00  portmap
  ...
  root      1  0.0  0.1   332  200  ??  Is    Tue02PM   0:00.02  /sbin/init
  id
  uid=0(root) gid=0(wheel) groups=0(wheel)
  who
  cat inetd.conf

the last command is an attempt to edit the configuration file for network services

data from a honeypot

data from 2003: number of different 'attack' sources
Pouget, Dacier, Debar: “Attack processes found on the Internet”

data from honeypots

a lot of other data can be obtained
• how do worms propagate?
• how do attackers use zombies?
• what kinds of attackers exist, and which ones start denial-of-service attacks?
• which countries do the attacks come from?
• ...
Pouget, Dacier, Debar: “Attack processes found on the Internet”

honeypots

a honeynet is a network of honeypots and other information system resources
T. Holz, “Honeypots and Malware Analysis—Know Your Enemy”

honeynets

three tasks in a honeynet:
1. data capture
2. data analysis
3. data control: especially high-interaction honeypots are vulnerable to being misused by attackers; the data flow must be controlled so that attacks neither come inside the organisation nor reach other innocent parties
T. Holz, “Honeypots and Malware Analysis—Know Your Enemy”

US-CERT security vulnerabilities

United States Computer Emergency Readiness Team

people submit vulnerability notes, e.g.:
Vulnerability Note VU#120541, SSL and TLS protocols renegotiation vulnerability: “A vulnerability exists in SSL and TLS protocols that may allow attackers to execute an arbitrary HTTP transaction.” Credit: Marsh Ray of PhoneFactor

CVSS scoring in US-CERT

US-CERT uses a scoring system to determine how serious a vulnerability is: the Common Vulnerability Scoring System (CVSS)
P. Mell et al., “CVSS—A Complete Guide to the CVSS Version 2.0”

CVSS

BaseScore = 0.6 x Impact + 0.4 x Exploitability

Impact = 10.41 x (1 - (1 - ConfImpact) x (1 - IntegImpact) x (1 - AvailImpact))

ConfImpact = case ConfidentialityImpact of
• none: 0.0
• partial: 0.275
• complete: 0.660
...
P. Mell et al., “CVSS—A Complete Guide to the CVSS Version 2.0”
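The impact sub-score above can be computed directly; a sketch (note: the BaseScore line on the slide is a simplification, and the full CVSS v2 formula also subtracts 1.5 and applies a factor f(Impact); Exploitability has its own sub-formula, omitted here):

```python
# CVSS v2 weights for the none/partial/complete impact levels,
# as listed on the slide (same table is used for C, I and A)
WEIGHTS = {"none": 0.0, "partial": 0.275, "complete": 0.660}

def impact(conf, integ, avail):
    """CVSS v2 impact sub-score: 10.41 * (1 - (1-C)(1-I)(1-A))."""
    c, i, a = WEIGHTS[conf], WEIGHTS[integ], WEIGHTS[avail]
    return 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))

# total loss of confidentiality, integrity and availability scores ~10
print(round(impact("complete", "complete", "complete"), 1))  # 10.0
# partial confidentiality loss only is far less severe
print(round(impact("partial", "none", "none"), 1))           # 2.9
```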

statistics for modelling and measurement

Aad van Moorsel, Newcastle University Centre for Cybercrime and Computer Security, aad.vanmoorsel@newcastle.ac.uk

objective

• understand basic statistics so you can take your own measurements

measurements

x1, x2, x3, ..., xN are statistically independent measurement samples (for instance response time measurements)

mathematically, xi, i = 1...N, are realizations of a random variable X (for instance X represents the measured response time)

an estimator is a function of the samples

X has a mean value (expectation), E[X]

what would be a reasonable estimator for E[X]?

unbiased estimators

X̂ represents the estimator based on the measured response time X; the real response time is represented by the random variable R

mathematically, an unbiased estimator implies E[X̂] = E[X]

source of estimator bias:
• mathematical subtleties (see the standard deviation for one example)

for an experimentalist, an unbiased experiment also implies E[X] = E[R]

sources of experiment bias:
• measurements not precise
• measurement points not placed exactly correctly
• environment of the experiments not representative

unbiased estimators

mean: x̄ = (1/N) Σ xi

variance (Var): s² = (1/(N-1)) Σ (xi - x̄)²

standard deviation (SD): s = √s²

100α percentile: the smallest sample x such that a fraction of at least α of the samples is ≤ x
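The estimators above, written out in plain Python (function names are ours; the percentile follows the "smallest x such that" definition on the slide):

```python
import math

def mean(xs):
    """Unbiased estimator of E[X]: the sample mean."""
    return sum(xs) / len(xs)

def variance(xs):
    """Unbiased sample variance: divide by N-1, not N."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def std_dev(xs):
    # the square root of the unbiased variance is itself slightly biased:
    # one of the 'mathematical subtleties' mentioned on the previous slide
    return math.sqrt(variance(xs))

def percentile(xs, alpha):
    """Smallest sample x such that a fraction >= alpha of samples is <= x."""
    ordered = sorted(xs)
    k = math.ceil(alpha * len(xs))
    return ordered[max(k, 1) - 1]

data = [1, 3, 2]
print(mean(data), variance(data), std_dev(data))  # 2.0 1.0 1.0
print(percentile(data, 0.5))                      # 2
```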

confidence intervals

central limit theorem: if I carry out an experiment often enough and average the results, then this average converges and takes on values close to a Normal distribution

this allows one to construct “p-percent confidence intervals”: the real value lies with p percent certainty within the confidence interval around the estimated value

note: that implies that 100 - p percent of the time you're off...
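The central limit theorem can be checked by simulation; a small sketch (the sample sizes are arbitrary choices of ours):

```python
import random
import statistics

random.seed(42)
n, repeats = 200, 500   # samples per experiment, number of repeated experiments

# average n uniform(0,1) samples, and repeat the whole experiment many times
averages = [statistics.mean(random.random() for _ in range(n))
            for _ in range(repeats)]

# the averages cluster around the true mean 0.5, approximately normally,
# with spread sigma/sqrt(n) = sqrt(1/12)/sqrt(200) ~ 0.020
print(round(statistics.mean(averages), 2))   # close to 0.5
print(round(statistics.stdev(averages), 3))  # close to 0.020
```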

confidence interval

p-percent confidence interval: x̄ ± cp · SD / √N

where the constant cp depends on p and can be read from tables of Normal distribution percentiles:
p = 95%: cp = 1.96
p = 99%: cp = 2.58

example

you measure the response time of a web site: x1 = 1 sec, x2 = 3 sec, x3 = 2 sec

using the unbiased estimators: x̄ = 2 sec and SD = 1 sec

95% confidence interval: [0.87, 3.13]
99% confidence interval: [0.51, 3.49]

increase the number of samples (with the same estimated mean and SD), 99% confidence interval:
N = 100: [1.74, 2.26]
N = 10000: [1.97, 2.03]
N = 1000000: [1.997, 2.003]
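The same computation in code, using the standard two-sided normal percentile z ≈ 1.96 for a 95% interval (a sketch; `conf_interval` is our name):

```python
import math
import statistics

def conf_interval(xs, z):
    """Confidence interval mean +/- z * SD / sqrt(N), assuming i.i.d. samples."""
    m = statistics.mean(xs)
    half_width = z * statistics.stdev(xs) / math.sqrt(len(xs))  # stdev uses N-1
    return m - half_width, m + half_width

samples = [1, 3, 2]                    # response times in seconds
lo, hi = conf_interval(samples, 1.96)  # 1.96 = 95% two-sided normal percentile
print(round(lo, 2), round(hi, 2))      # 0.87 3.13
```

With only three samples the interval is wide; as the slide's larger-N figures show, the half-width shrinks with 1/√N.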

confidence intervals

prerequisite: independent samples from identical distributions

the confidence interval statistics in this section apply only when samples are statistically independent and representative of the identical distribution (called i.i.d., or independent and identically distributed)

examples of dependence:
• response times of subsequent web page customers: they might both be influenced by the server being busy
• down time of a system on consecutive days: both days might be influenced by a virus that spread just around those days

examples of independence:
• subsequent throws of a die or tosses of a coin
• in systems, samples are often reasonably independent, but you can never be sure

dealing with dependent samples

if you incorrectly assume independence, your confidence interval will typically be much narrower than it should be: you will have more confidence in your estimate than is justified

to deal with dependence you can use the method of batch means (also called the method of batches)

the method of batches puts dependent samples in the same batch; one can then assume independence between the batches (and the confidence interval theory applies again)

method of batch means

1. take the number of batches M to be 30 or more
2. create the batches: put x1 to xN/M in batch 1, xN/M+1 to x2N/M in batch 2, etc. (note: you will have M batches, each with N/M samples)
3. compute the mean of each batch and call this yj for the j-th batch
4. then compute confidence intervals using y1 to yM instead of the original samples x1 to xN

explanation: the batches will be reasonably independent, as long as each batch contains a large number of samples

handy check: try it with different values of M (for instance both 30 and 100 batches); if the batches are independent, the resulting confidence intervals should be about the same (unfortunately, the opposite doesn't hold, so use with care)

conclusion

we have discussed the basics behind doing good system analysis:
• metrics
• statistics

human factors and economic drivers

Aad van Moorsel, Newcastle University Centre for Cybercrime and Computer Security, aad.vanmoorsel@newcastle.ac.uk

trust economics

• research project funded by the UK Government's Technology Strategy Board (TSB)
• members include both academic institutions and industrial organisations
  – universities: Newcastle, Bath, UCL, Aberdeen
  – companies: HP Labs Bristol, Merrill Lynch, National Grid
• the work builds on attempts to further the understanding of a number of diverse factors in information security management, and how they are related
  – human factors (UCL, Bath)
  – economics of security (Bath, Aberdeen)
  – business processes (HP, Merrill Lynch)
  – security management tools (Newcastle)
• it encapsulates insights from these strands of research

trust economics

• human factors
  – user behaviour studies (e.g. USB sticks, password behaviours)
  – rationalising human/security interaction
• economics of security
  – economic modelling
  – formalising relationships between security choices and economic influences
• business processes
  – utility functions
  – system modelling
• security management tools
  – information security & human factors ontology
  – information security knowledge base tools

Newcastle ontology

• explicitly represents human factors in information security management, and relates them to infrastructure components and processes that are familiar to a Chief Information Security Officer (CISO)
  – elements of IT infrastructure that must be secured and controlled
  – security standards that guide policy decisions (e.g. ISO 27K)
• provides support for IT security management decision-making
  – clearly and unambiguously represents elements of the IT security infrastructure, but also the human behaviours that may arise or be controlled as a result of deploying specific security mechanisms
• provides a means of encoding expert knowledge of human factors as it relates to IT security management
  – this knowledge was previously unavailable to CISOs, or hidden away in a form that made it difficult to communicate

use of an ontology

• an ontology helps to formally describe concepts and taxonomies within information security management
  – the exact meanings of terms can often be lost as they are communicated or internalised within an organisation
• use of an ontology provides scope to identify the interdependencies between various elements of information security management
• an ontology can also be used to relate and communicate different domains of knowledge across shared concepts, and demonstrate the hierarchies that exist within them
• an ontology provides scope to reason about captured knowledge
• reasoning may be achieved manually or by a machine

ontology structure

ontology – password policy case study

case study – recall methods

case study – password reset function

conclusion

Aad van Moorsel, Newcastle University Centre for Cybercrime and Computer Security, aad.vanmoorsel@newcastle.ac.uk

conclusion part 1: assessment of security and trust

• used an ontology to describe the problem space
• noted subtle differences between security and trust
• realised that several security metrics have been proposed and used (CIA, CVSS, Jaquith's), but that
  – security metrics are extremely challenging to define effectively: how to make them representative and predictive?
• noted that trust metrics are used in reputation systems and the web of trust, but that these are
  – equally challenging to make representative and predictive
• provided a basic methodology for good measurement studies

you now know some basics; apply them in the field and improve on the methodologies used in information security assessment!

trust economics methodology

trust economics methodology for security decisions

trade-off: legal issues, human tendencies, business concerns, ...
a model of the information system
stakeholders discuss

trust economics research

from the trust economics methodology, the following research follows:
1. identify human, business and technical concerns
2. develop and apply mathematical modelling techniques
3. glue concerns, models and presentation together using a trust economics information security ontology
4. use the models to improve the stakeholders' discourse and decisions

our involvement

1. identify human, business and technical concerns
   – we are working on a case study in Access Management (Maciej, James, with Geoff and Hilary from Bath)
2. develop and apply mathematical modelling techniques
   – we are generalising concepts to model human behaviour, and are validating them with data collection (Rob, Simon, with Doug, Robin and Bill from UIUC)
   – we are doing a modelling case study in DRM (Wen)
3. glue concerns, models and presentation together using a trust economics information security ontology
   – we developed an information security ontology, taking into account human behavioural aspects (Simon)
   – we made an ontology editing tool for CISOs (John)
   – we are working on a collaborative web-based tool (John, Simon, Stefan from SBA, Austria)
4. use the models to improve the stakeholders' discourse and decisions
   – using a participatory design methodology, we are working with CISOs to do a user study (Simon, Philip and Angela from UCL)

example of the trust economics methodology: passwords

Information Security Management

find out about how users behave and what the business issues are:

CISO 1: Transport is a big deal.
Interviewer 1: We're trying to recognise this in our user classes.
CISO 1: We have engineers on the road, have lots of access, and are more gifted in IT.
Interviewer 1: Do you think it would be useful to configure different user classes?
CISO 1: I think it's covered.
Interviewer 1: And different values, different possible consequences if a loss occurs. I'm assuming you would want to be able to configure.
CISO 1: Yes. E.g. a customer list might or might not be very valuable.
Interviewer 1: And be able to configure links with different user classes and the assets.
CISO 1: Yes, if you could, absolutely.
Interviewer 1: We're going to stick with defaults at first and allow configuration if needed later. So, the costs of the password policy: running costs, helpdesk staff, trade-off of helpdesk vs. productivity.
CISO 1: That's right.

Information Security Management

find out about how users behave and what the business issues are. Discussion of "Productivity Losses":

CISO 2: But it's proportional to the amount they earn. This is productivity. E.g. $1m salary but bring $20m into the company. There are expense people and productivity people.
Interviewer 1: We have execs, "road warrior", office drone. Drones are just a cost.
Interviewer 2: And the 3 groups have different threat scenarios.
CISO 2: Risk of over-complicating it; hard to work out who is an income-earner and what proportion is income-earning.
Interviewer 2: But this is a good point.
CISO 2: Make it parameterisable, at the choice of the CISO.
…
CISO 2: So, need to be able to drill down into productivity, cost, especially in a small company.

a model of the IT system

tool to communicate the result to a CISO

an information security ontology incorporating human-behavioural implications

Simon Parkin, Aad van Moorsel (Newcastle University Centre for Cybercrime and Computer Security, UK); Robert Coles (Bank of America Merrill Lynch, UK)

trust economics ontology

• we want to have a set of tools that implement the trust economics methodology
• it needs to work for different case studies
• we need a way to represent, maintain and interrelate relevant information
• glue between
  – problem space: technical, human, business
  – models
  – interfaces

using an ontology

• we chose to use an ontology to address these requirements, because:
  – an ontology helps to formally define concepts and taxonomies
  – an ontology serves as a means to share knowledge, potentially across different disciplines
  – an ontology can relate fragments of knowledge and identify interdependencies

business, behaviour and security

• example: password management
  – there is a need to balance security and ease-of-use
  – a complex password may be hard to crack, but might also be hard to remember
• is there a way to:
  – identify our choices in these situations?
  – consider the potential outcomes of our choices in a reasoned manner?

requirements

• standards should be represented
  – information security mechanisms are guided by policies, which are increasingly informed by standards
• the usability and security behaviours of staff must be considered
  – the information assets being accessed;
  – the vulnerabilities that users create;
  – the intentional or unintentional threats user actions pose; and
  – the potential process controls that may be used and their identifiable effects
• CISOs must be able to relate ontology content to the security infrastructure they manage
  – the representation of human factors and external standards should be clear, unambiguous, and illustrate interdependencies

information security ontology

• we created an ontology to represent the human-behavioural implications of information security management decisions
  – it makes the potential human-behavioural implications visible and comparable
• ontology content is aligned with information security management guidelines
  – we chose the ISO 27002 “Code of Practice” standard
  – it provides a familiar context for information security managers (e.g. CISOs, CIOs, etc.)
  – formalised content is encoded in the Web Ontology Language (OWL)
• human factors researchers and CISOs can contribute expertise within an ontology framework that connects their respective domains of knowledge
  – input from industrial partners and human factors researchers helps to make the ontology relevant and useful to prospective users

ontology - overview

ontology – password policy example

example – password memorisation

example – recall methods

example – password reset function

conclusions

• CISOs need an awareness of the human-behavioural implications of their security management decisions
• human factors researchers need a way to contribute their expertise and align it with concepts that are familiar to CISOs
  – standards
  – IT infrastructure
  – business processes
• we provided an ontology as a solution
  – it serves as a formalised base of knowledge
  – it is one piece of the Trust Economics tools

an ontology for structured systems economics

Adam Beautement (UCL, HP Labs); David Pym (HP Labs, University of Bath)

ontology to link with the models

thus far, the trust economics ontology represents technology and human behavioural issues

how do we glue this to the mathematical models?

ontology

example process algebra model

conclusion on the trust economics ontology

this is work in progress:
- added human behavioural aspects to IT security concepts
- provided an abstraction that allows IT to be represented, tailored to the process algebraic model

to do:
- complete as well as simplify...
- the proof is in the pudding: someone needs to use it in a case study

an ontology editor and a community ontology

John Mace (project student), Simon Parkin, Aad van Moorsel; Stefan Fenz (SBA, Austria)

stakeholders

• Chief Information Security Officers (CISOs)
• human factors researchers
• ontology experts

current ontology development

• requires use of an ontology creation tool
• graphical or text-based tools
• both create a machine-readable ontology file from user input
• the user must define the underlying ontology structure

current development issues

• knowledge of ontology development and tools is required
• development knowledge is held by ontology experts, not by those whose knowledge requires capture
• current tools are complex and largely aimed at ontology experts
• the process is time-consuming and error prone

how would you want to write ontology content?

proposed solution

• a simple, intuitive tool to create/modify an ontology in graphical form
• captures the knowledge of domain experts while removing the need to know ontology construction techniques
• the underlying information security ontology structure is predefined
• an interactive help system and mechanisms to minimise error

implementation overview

ontology editor

adding a new concept

ontology diagram

Java translation program

ontology file

• written in the machine-readable Web Ontology Language (OWL)
• created using the OWL API
• file structure:
  – header
  – classes
  – data properties
  – object properties
  – individuals

ontology file example

summary

• there is a need for an information security ontology editing tool
• the proposed tool allows domain experts to develop an ontology without knowledge of ontology construction
• it delivers machine-readable ontology files
• it simplifies the development process
• it allows further development of a ‘base’ ontology

future developments

• the ontology is too large for a small group to develop effectively
• a vast array of knowledge is held globally
• ontology development needs to be a collaborative process to be effective
• a web-oriented collaborative editing tool
• basis for a 3rd year dissertation

user evaluation for trust economics software

Simon Parkin, Aad van Moorsel; Philip Inglesant, Angela Sasse (UCL)

participatory design of a trust economics tool

assume we have all the pieces together:
• ontology
• models
• CISO interfaces

what should the tool look like? we conduct a participatory design study with CISOs from:
• ISS
• UCL
• National Grid

method: get a wish list from the CISOs, show a mock-up tool and collect feedback, improve, add a model in the background, try it out with the CISOs, etc.

information security management

find out about how users behave and what the business issues are:

CISO 1: Transport is a big deal.
Interviewer 1: We're trying to recognise this in our user classes.
CISO 1: We have engineers on the road, have lots of access, and are more gifted in IT.
Interviewer 1: Do you think it would be useful to configure different user classes?
CISO 1: I think it's covered.
Interviewer 1: And different values, different possible consequences if a loss occurs. I'm assuming you would want to be able to configure.
CISO 1: Yes. E.g. a customer list might or might not be very valuable.
Interviewer 1: And be able to configure links with different user classes and the assets.
CISO 1: Yes, if you could, absolutely.
Interviewer 1: We're going to stick with defaults at first and allow configuration if needed later. So, the costs of the password policy: running costs, helpdesk staff, trade-off of helpdesk vs. productivity.
CISO 1: That's right.

information security management

find out about how users behave and what the business issues are. Discussion of "Productivity Losses":

CISO 2: But it's proportional to the amount they earn. This is productivity. E.g. $1m salary but bring $20m into the company. There are expense people and productivity people.
Interviewer 1: We have execs, "road warrior", office drone. Drones are just a cost.
Interviewer 2: And the 3 groups have different threat scenarios.
CISO 2: Risk of over-complicating it; hard to work out who is an income-earner and what proportion is income-earning.
Interviewer 2: But this is a good point.
CISO 2: Make it parameterisable, at the choice of the CISO.
…
CISO 2: So, need to be able to drill down into productivity, cost, especially in a small company.

example of the trust economics methodology: access management

Maciej Machulak (also funded by JISC SMART)
James Turland (funded by EPSRC AMPS)
Wen Zeng (for DRM)
Aad van Moorsel
Geoff Duggan, Hilary Johnson (University of Bath)

SMART project description

• the SMART (Student-Managed Access to Online Resources) project will develop an online data access management system based on the User-Managed Access (UMA) Web protocol, deploy it within Newcastle University and evaluate the system through a user study
  – the project team will also contribute to the standardisation effort of the UMA protocol by actively participating in the User-Managed Access Work Group (UMA WG, chartered by the Kantara Initiative)

project description - UMA

• User-Managed Access protocol
  – allows an individual to control the authorization of data sharing and service access made between online services on the individual's behalf

Source: http://kantarainitiative.org/confluence/display/uma/UMA+Explained

project description – objectives

• objectives:
  – Define scenario for UMA use case within Higher Education (HE) environments
  – Develop UMA-based authorisation solution
  – Deploy the UMA-based solution within Newcastle University:
    • Integrate the system with institutional Web applications
    • Evaluate the system through a user study
  – Contribute with the scenario, software and project findings to the UMA WG and actively participate in the standardisation effort of the UMA Web protocol
  – Demonstrate, document and disseminate project outputs

trust economics applied to access management

• we build the application
• we build models to quantify trust or CIA properties
• we investigate user interfaces and user behaviour to input into the model

related: we also build DRM models, trading off productivity and confidentiality

modelling concepts and model validation

Rob Cain (funded by HP)
Simon Parkin
Aad van Moorsel
Doug Eskin (funded by HP)
Robin Berthier, Bill Sanders (University of Illinois at Urbana-Champaign)

project objectives

• performance models traditionally have not included human behavioural aspects
• we want to have generic modelling constructs to represent human behaviour, tendencies and choices:
  – compliance budget
  – risk propensity
  – impact of training
  – role-dependent behaviour
• we want to validate our models with collected data
  – offline data, such as from interviews
  – online data, measured 'live'
• we want to optimise the data collection strategy
• in some cases, it makes sense to extend our trust economics methodology with a strategy for data collection
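The "compliance budget" construct listed above can be illustrated with a minimal sketch (the class, numbers and task names here are hypothetical illustrations, not taken from the actual project models): a user is willing to spend only a finite amount of effort on security tasks that bring no personal benefit, and stops complying once that budget runs out.

```python
class Employee:
    """Toy 'compliance budget' model of a user: a finite stock of effort
    is available for security tasks with no personal benefit; once it is
    exhausted, the user stops complying."""

    def __init__(self, compliance_budget, task_effort):
        self.budget = compliance_budget      # remaining effort units
        self.task_effort = task_effort       # effort cost of one secure task

    def perform_task(self):
        """Return True if the user complies (e.g. encrypts a USB stick)."""
        if self.budget >= self.task_effort:
            self.budget -= self.task_effort  # compliance draws down the budget
            return True
        return False                         # budget spent: non-compliance


def compliance_rate(budget, effort, n_tasks):
    """Fraction of n_tasks that one employee performs securely."""
    emp = Employee(budget, effort)
    return sum(emp.perform_task() for _ in range(n_tasks)) / n_tasks
```

For example, with a budget of 10 effort units and tasks costing 1 unit each, the user complies with only the first 10 of 40 tasks, a rate of 0.25; in such a sketch, training or incentives would enter the model by raising the budget or lowering the perceived effort of a task.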

presentation of Möbius

sample Möbius results

[chart: Utility (HB Score) vs. probability of encryption, without compliance-budget feedback; utility ranges roughly 220–380 over encryption probabilities 0–1]

sample Möbius results (cont.)

[chart: Utility (HB Score) vs. probability of encryption, using compliance-budget feedback; utility ranges roughly 220–380 over encryption probabilities 0–1]
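The general shape of such utility-versus-policy curves can be reproduced with a small Monte Carlo sketch; the parameters and cost structure below are invented for illustration and are not those of the actual Möbius model. Each data transfer yields a benefit, and an unencrypted transfer that is lost incurs a breach cost, so utility depends on the probability that users encrypt.

```python
import random

def simulate_utility(p_encrypt, n_transfers=1000, benefit=1.0,
                     breach_cost=5.0, p_loss=0.1, seed=1):
    """Toy Monte Carlo model (hypothetical parameters): each transfer
    earns `benefit`; an unencrypted transfer is lost with probability
    `p_loss`, costing `breach_cost`.  `p_encrypt` is the policy/behaviour
    parameter swept on the x-axis of the charts above."""
    rng = random.Random(seed)  # fixed seed for reproducible curves
    utility = 0.0
    for _ in range(n_transfers):
        encrypted = rng.random() < p_encrypt
        utility += benefit
        if not encrypted and rng.random() < p_loss:
            utility -= breach_cost
    return utility

# Sweep the policy parameter from 0 to 1, as in the Möbius experiments:
curve = {p / 10: simulate_utility(p / 10) for p in range(11)}
```

This toy version omits the user's effort cost of encrypting and the compliance-budget feedback, which is what differentiates the two charts above; adding a budget-dependent probability of compliance would couple the two sketches.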

criticality of using data

• the goal of using data is to provide credibility to the model:
  – by defining and tuning input parameters according to the individual organization
  – by assessing the validity of prediction results
• issues:
  – numerous data sources
  – collection and processing phases are expensive and time consuming
  – no strategy to drive data monitoring
  – mismatch between the model and the data that can be collected

data collection approach

[diagram: Stakeholders and Data Sources feed the Model; data serves (a) input parameter definition and (b) output validation, with sources classified by Cost / Quality and parameters ranked by Importance]

1. Design specialized model according to requirements
2. Classify potential data sources according to their cost and quality
3. Optimize collection of data according to parameter importance
4. Run data validation and execute model

data sources classification

• Cost:
  – Cost to obtain
  – Time to obtain
  – Transparency
  – Legislative process
• Quality:
  – Accuracy
  – Applicability
• Importance:
  – Influence of parameter value on output
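One way to act on this classification, corresponding to step 3 of the approach (optimize collection according to parameter importance), is a greedy selection of sources under a collection budget. The sources, scores and budget below are illustrative placeholders, not measured values.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    name: str
    cost: float        # money/time needed to obtain the data
    quality: float     # accuracy and applicability, in [0, 1]
    importance: float  # influence of the fed parameter on the output, in [0, 1]

def select_sources(sources, budget):
    """Greedy sketch: rank sources by expected value (importance x quality)
    per unit cost, then take them in that order while the budget allows."""
    ranked = sorted(sources,
                    key=lambda s: s.importance * s.quality / s.cost,
                    reverse=True)
    chosen, spent = [], 0.0
    for s in ranked:
        if spent + s.cost <= budget:
            chosen.append(s)
            spent += s.cost
    return chosen

# Illustrative sources, loosely named after those in the parameter tables;
# all scores are invented:
sources = [
    DataSource("IT security survey", cost=1, quality=0.6, importance=0.5),
    DataSource("interview with IT directors", cost=5, quality=0.9, importance=0.9),
    DataSource("public gov. budget data", cost=2, quality=0.7, importance=0.3),
]
picked = [s.name for s in select_sources(sources, budget=6)]
```

With these made-up scores, the cheap survey and the high-importance interviews fit the budget of 6, while the low-importance public data is dropped.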

Organization Budget Parameters

| input/output | Category | Parameter | Description | Variables | Influence | Data Sources and Cost |
| in | Budget | Total security investment | IT budget. Default is 100 | | medium | IT security survey (http://www.gartner.com, http://www.gocsi.com); interview with IT directors; public gov. budget data |
| in | Budget | Training investment | Training budget. Always, one-off 100 | USB stick = 100, software = 0, install and maintenance = 0 | | |
| in | Budget | Support proportion of budget | Experimental value. Proportion of Active Security Budget/Investment used for support | Low / Medium / High | low | interview with IT directors; public gov. budget data |
| in | Budget | Monitoring proportion of budget | Experimental value. 1 – (Support proportion of budget) | | high | interview with IT directors; public gov. budget data |

Overall Human Parameters

| input/output | Category | Parameter | Description | Influence | Data Sources and Cost |
| in | User behavior | Compliance budget | Effort willing to spend conforming with security policy that doesn't benefit you | | User survey |
| in | User behavior | Perceived benefit of task | Effort willing to put in without using compliance budget. Generalised: understanding, investment, incentives | | User survey |

password: probability of break-in

| input/output | Category | Parameter | Variables | Influence |
| in | Culture of organization | Prob. of leaving default password | Organization policy, user training | medium |
| in | User behavior | Password strength (compromised by brute-force attack) | Password strength, attacker determination | medium |
| in | Attacker | Password strength determination threshold | | medium |
| in | User behavior | Password update frequency | Organization policy, user training | medium |
| in | User behavior | Prob. of being locked out when password is forgotten | Organization policy, user training | medium |
| in | User interface | Prob. of finding lost password | Efficiency of password recovery tech. | medium |
| in | User interface | Prob. of needing support (#support queries / #users) | Prob. of forgetting password | medium |
| in | User behavior | Management reprimands | | medium |
| in | User behavior | Negative support experiences | | medium |
| out | User behavior | Prob. password can be compromised | | high |
| out | Security | Availability (#successful data transfers) | | high |
| out | Security | Confidentiality (#exposures + #reveals) | | high |

data collection research

four sub-problems:
• determine which data is needed to validate the model:
  – provide input parameter values
  – validate output parameters
• technical implementation of the data collection
• optimize data collection such that cost is within a certain bound: need to find the important parameters and trade off with the cost of collecting them
• add data collection to the trust economics methodology:
  – a data collection strategy will be associated with the use of a model
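Finding the "important parameters" in the third sub-problem can be sketched with a one-at-a-time sensitivity analysis: bump each input by a small relative amount and observe the relative change in the model output. The toy model and parameter names below are hypothetical, chosen only to make the sketch runnable.

```python
def importance(model, params, name, delta=0.1):
    """One-at-a-time sensitivity: relative change in model output when
    parameter `name` is increased by the fraction `delta`."""
    base = model(params)
    bumped = {**params, name: params[name] * (1 + delta)}
    return abs(model(bumped) - base) / abs(base)

# Hypothetical toy model: utility grows with budget, falls with breach prob.
def toy_model(p):
    return p["budget"] - 50 * p["p_breach"]

params = {"budget": 100.0, "p_breach": 0.2}

# Rank parameters by sensitivity; data collection effort (and cost) would
# then be focused on the top-ranked ones.
ranking = sorted(params, key=lambda n: importance(toy_model, params, n),
                 reverse=True)
```

In this toy model a 10% bump in the budget moves the output far more than a 10% bump in the breach probability, so the budget parameter would get the data collection effort first.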

conclusion

trust economics research in Newcastle:
• ontology for human behavioural aspects, incl. editor and community version
• tool design with CISOs
• modelling: DRM and Access Management
• data collection strategies for validation

work to be done:
• generic ontology for trust economics, underlying the tools
• actual tool building
• evaluation of the methodology

trust economics info

http://www.trust-economics.org/

Publications:
• An Information Security Ontology Incorporating Human-Behavioural Implications. Simon Parkin, Aad van Moorsel, Robert Coles. International Conference on Security of Information and Networks, 2009.
• Risk Modelling of Access Control Policies with Human-Behavioural Factors. Simon Parkin and Aad van Moorsel. International Workshop on Performability Modeling of Computer and Communication Systems, 2009.
• A Knowledge Base for Justified Information Security Decision-Making. Daria Stepanova, Simon Parkin, Aad van Moorsel. International Conference on Software and Data Technologies, 2009.
• Architecting Dependable Access Control Systems for Multi-Domain Computing Environments. Maciej Machulak, Simon Parkin, Aad van Moorsel. Architecting Dependable Systems VI, R. de Lemos, J. Fabre, C. Gacek, F. Gadducci and M. ter Beek (Eds.), Springer, LNCS 5835, pp. 49–75, 2009.
• Trust Economics Feasibility Study. Robert Coles, Jonathan Griffin, Hilary Johnson, Brian Monahan, Simon Parkin, David Pym, Angela Sasse and Aad van Moorsel. Workshop on Resilience Assessment and Dependability Benchmarking, 2008.
• The Impact of Unavailability on the Effectiveness of Enterprise Information Security Technologies. Simon Parkin, Rouaa Yassin-Kassab and Aad van Moorsel. International Service Availability Symposium, 2008.

Technical reports:
• Architecture and Protocol for User-Controlled Access Management in Web 2.0 Applications. Maciej Machulak, Aad van Moorsel. CS-TR 1191, 2010.
• Ontology Editing Tool for Information Security and Human Factors Experts. John Mace, Simon Parkin, Aad van Moorsel. CS-TR 1172, 2009.
• Use Cases for User-Centric Access Control for the Web. Maciej Machulak, Aad van Moorsel. CS-TR 1165, 2009.
• A Novel Approach to Access Control for the Web. Maciej Machulak, Aad van Moorsel. CS-TR 1157, 2009.
• Proceedings of the First Trust Economics Workshop. Philip Inglesant, Maciej Machulak, Simon Parkin, Aad van Moorsel, Julian Williams (Eds.). CS-TR 1153, 2009.
• A Trust-economic Perspective on Information Security Technologies. Simon Parkin, Aad van Moorsel. CS-TR 1056, 2007.