
- Number of slides: 66
Outline
- Introduction
- Background
- Distributed DBMS Architecture
- Distributed Database Design
  - Fragmentation
  - Data Location
- Semantic Data Control
- Distributed Query Processing
- Distributed Transaction Management
- Parallel Database Systems
- Distributed Object DBMS
- Database Interoperability
- Current Issues

Distributed DBMS © 1998 M. Tamer Özsu & Patrick Valduriez
Design Problem
In the general setting: making decisions about the placement of data and programs across the sites of a computer network, as well as possibly designing the network itself.
In a distributed DBMS, the placement of applications entails:
- placement of the distributed DBMS software; and
- placement of the applications that run on the database
Dimensions of the Problem
- Access pattern behavior: static vs. dynamic
- Level of knowledge: partial information vs. complete information
- Level of sharing: data only vs. data + program
Distribution Design
Top-down
- mostly in designing systems from scratch
- mostly in homogeneous systems
Bottom-up
- when the databases already exist at a number of sites
Top-Down Design (process flow)
Requirements Analysis (objectives) → Conceptual Design and View Design (with view integration and user input), producing the GCS, access information, and the ES's → Distribution Design, producing the LCS's → Physical Design, producing the LIS's. User input also feeds the distribution design step.
Distribution Design Issues
- Why fragment at all?
- How to fragment?
- How much to fragment?
- How to test correctness?
- How to allocate?
- Information requirements?
Fragmentation
Can't we just distribute relations? What is a reasonable unit of distribution?
- relation
  - views are subsets of relations → locality
  - extra communication
- fragments of relations (sub-relations)
  - concurrent execution of a number of transactions that access different portions of a relation
  - views that cannot be defined on a single fragment will require extra processing
  - semantic data control (especially integrity enforcement) more difficult
Fragmentation Alternatives – Horizontal
PROJ1: projects with budgets less than $200,000
PROJ2: projects with budgets greater than or equal to $200,000

PROJ1
PNO  PNAME              BUDGET  LOC
P1   Instrumentation    150000  Montreal
P2   Database Develop.  135000  New York

PROJ2
PNO  PNAME              BUDGET  LOC
P3   CAD/CAM            250000  New York
P4   Maintenance        310000  Paris
P5   CAD/CAM            500000  Boston
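The horizontal alternative above can be sketched in a few lines. This is an illustrative toy model (not from the book): the relation is a list of dicts, and each fragment is a relational selection over it.

```python
# Toy PROJ relation, with the tuples from the example above.
PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation",   "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P2", "PNAME": "Database Develop.", "BUDGET": 135000, "LOC": "New York"},
    {"PNO": "P3", "PNAME": "CAD/CAM",           "BUDGET": 250000, "LOC": "New York"},
    {"PNO": "P4", "PNAME": "Maintenance",       "BUDGET": 310000, "LOC": "Paris"},
    {"PNO": "P5", "PNAME": "CAD/CAM",           "BUDGET": 500000, "LOC": "Boston"},
]

def select(relation, predicate):
    """Relational selection: keep the tuples satisfying the predicate."""
    return [t for t in relation if predicate(t)]

# Horizontal fragments defined by a predicate on BUDGET.
PROJ1 = select(PROJ, lambda t: t["BUDGET"] < 200000)   # budgets < $200,000
PROJ2 = select(PROJ, lambda t: t["BUDGET"] >= 200000)  # budgets >= $200,000
```

Note that the union of the two fragments gives back all of PROJ, which anticipates the reconstruction rule discussed later.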
Fragmentation Alternatives – Vertical
PROJ1: information about project budgets
PROJ2: information about project names and locations

PROJ1
PNO  BUDGET
P1   150000
P2   135000
P3   250000
P4   310000
P5   500000

PROJ2
PNO  PNAME              LOC
P1   Instrumentation    Montreal
P2   Database Develop.  New York
P3   CAD/CAM            New York
P4   Maintenance        Paris
P5   CAD/CAM            Boston
Degree of Fragmentation
A finite number of alternatives, ranging from whole relations (coarsest) down to individual tuples or attributes (finest). The design problem is finding the suitable level of partitioning within this range.
Correctness of Fragmentation
Completeness
- Decomposition of relation R into fragments R1, R2, …, Rn is complete if and only if each data item in R can also be found in some Ri
Reconstruction
- If relation R is decomposed into fragments R1, R2, …, Rn, then there should exist some relational operator ∇ such that R = ∇(1≤i≤n) Ri
Disjointness
- If relation R is decomposed into fragments R1, R2, …, Rn, and data item di is in Rj, then di should not be in any other fragment Rk (k ≠ j)
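For horizontal fragmentation the three rules can be checked mechanically. The sketch below is illustrative (not from the book) and models tuples as hashable Python tuples, with set union as the reconstruction operator.

```python
def is_complete(R, fragments):
    """Completeness: every data item of R appears in some fragment."""
    return set(R) <= set().union(*fragments)

def reconstructs(R, fragments):
    """Reconstruction: for horizontal fragments, union rebuilds R exactly."""
    return set().union(*fragments) == set(R)

def is_disjoint(fragments):
    """Disjointness: no data item appears in two distinct fragments."""
    seen = set()
    for frag in fragments:
        if seen & set(frag):
            return False
        seen |= set(frag)
    return True

# A toy relation and a candidate fragmentation of it.
R = {("P1", 150000), ("P2", 135000), ("P3", 250000)}
F = [{("P1", 150000), ("P2", 135000)}, {("P3", 250000)}]
```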
Allocation Alternatives
Non-replicated
- partitioned: each fragment resides at only one site
Replicated
- fully replicated: each fragment at each site
- partially replicated: each fragment at some of the sites
Rule of thumb: if (read-only queries / update queries) ≥ 1, replication is advantageous; otherwise replication may cause problems
Comparison of Replication Alternatives

                      Full replication      Partial replication   Partitioning
QUERY PROCESSING      Easy                  Same difficulty       Same difficulty
DIRECTORY MANAGEMENT  Easy or nonexistent   Same difficulty       Same difficulty
CONCURRENCY CONTROL   Moderate              Difficult             Easy
RELIABILITY           Very high             High                  Low
REALITY               Possible application  Realistic             Possible application
Information Requirements
Four categories:
- Database information
- Application information
- Communication network information
- Computer system information
Fragmentation
Horizontal Fragmentation (HF)
- Primary Horizontal Fragmentation (PHF)
- Derived Horizontal Fragmentation (DHF)
Vertical Fragmentation (VF)
Hybrid Fragmentation (HF)
PHF – Information Requirements
Database information
- relationships (links) among relations:
  SKILL(TITLE, SAL) —L1→ EMP(ENO, ENAME, TITLE) —L2→ ASG(ENO, PNO, RESP, DUR) ←L3— PROJ(PNO, PNAME, BUDGET, LOC)
- cardinality of each relation: card(R)
PHF – Information Requirements
Application information
- simple predicates: given R[A1, A2, …, An], a simple predicate pj is
  pj : Ai θ Value
  where θ ∈ {=, <, ≤, >, ≥, ≠}, Value ∈ Di, and Di is the domain of Ai.
  For relation R we define Pr = {p1, p2, …, pm}
  Example:
  PNAME = "Maintenance"
  BUDGET ≤ 200000
- minterm predicates: given R and Pr = {p1, p2, …, pm}, define M = {m1, m2, …, mz} as
  M = { mi | mi = ∧(pj ∈ Pr) pj* }, 1 ≤ j ≤ m, 1 ≤ i ≤ z
  where pj* = pj or pj* = ¬pj
PHF – Information Requirements
Example:
m1: PNAME = "Maintenance" ∧ BUDGET ≤ 200000
m2: NOT(PNAME = "Maintenance") ∧ BUDGET ≤ 200000
m3: PNAME = "Maintenance" ∧ NOT(BUDGET ≤ 200000)
m4: NOT(PNAME = "Maintenance") ∧ NOT(BUDGET ≤ 200000)
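Minterm enumeration is mechanical: each simple predicate appears either in natural or negated form, giving 2^m conjunctions. A hypothetical sketch (predicates as Python callables over a tuple-as-dict):

```python
from itertools import product

def minterms(Pr):
    """Yield each minterm as a callable conjunction: every simple predicate
    is taken either as-is (True) or negated (False)."""
    for signs in product([True, False], repeat=len(Pr)):
        def m(t, signs=signs):
            return all(p(t) == s for p, s in zip(Pr, signs))
        yield m

# The two simple predicates from the example above.
p1 = lambda t: t["PNAME"] == "Maintenance"
p2 = lambda t: t["BUDGET"] <= 200000
M = list(minterms([p1, p2]))  # 2^2 = 4 minterms, m1..m4
```

Since the minterms partition the tuple space, any given tuple satisfies exactly one of them.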
PHF – Information Requirements
Application information
- minterm selectivities: sel(mi)
  The number of tuples of the relation that would be accessed by a user query specified according to a given minterm predicate mi.
- access frequencies: acc(qi)
  The frequency with which a user application qi accesses data. Access frequency for a minterm predicate can also be defined.
Primary Horizontal Fragmentation
Definition:
Rj = σFj(R), 1 ≤ j ≤ w
where Fj is a selection formula, which is (preferably) a minterm predicate.
Therefore, a horizontal fragment Ri of relation R consists of all the tuples of R that satisfy a minterm predicate mi.
Given a set of minterm predicates M, there are as many horizontal fragments of relation R as there are minterm predicates. This set of horizontal fragments is also referred to as the set of minterm fragments.
PHF – Algorithm
Given: a relation R and the set of simple predicates Pr
Output: the set of fragments of R = {R1, R2, …, Rw} which obey the fragmentation rules
Preliminaries:
- Pr should be complete
- Pr should be minimal
Completeness of Simple Predicates
A set of simple predicates Pr is said to be complete if and only if, for the minterm fragments defined on Pr, any two tuples of the same minterm fragment have the same probability of being accessed by any application.
Example:
- Assume PROJ[PNO, PNAME, BUDGET, LOC] has two applications defined on it:
  (1) Find the budgets of projects at each location.
  (2) Find projects with budgets less than $200000.
Completeness of Simple Predicates
According to (1),
Pr = {LOC=“Montreal”, LOC=“New York”, LOC=“Paris”}
which is not complete with respect to (2). Modify:
Pr = {LOC=“Montreal”, LOC=“New York”, LOC=“Paris”, BUDGET≤200000, BUDGET>200000}
which is complete.
Minimality of Simple Predicates
If a predicate influences how fragmentation is performed (i.e., causes a fragment f to be further fragmented into, say, fi and fj), then there should be at least one application that accesses fi and fj differently. In other words, the simple predicate should be relevant in determining a fragmentation. If all the predicates of a set Pr are relevant, then Pr is minimal.
Relevance condition:
acc(mi)/card(fi) ≠ acc(mj)/card(fj)
Minimality of Simple Predicates
Example:
Pr = {LOC=“Montreal”, LOC=“New York”, LOC=“Paris”, BUDGET≤200000, BUDGET>200000}
is minimal (in addition to being complete). However, if we add PNAME = “Instrumentation”, then Pr is no longer minimal.
COM_MIN Algorithm
Given: a relation R and a set of simple predicates Pr
Output: a complete and minimal set of simple predicates Pr' for Pr
Rule 1: a relation or fragment is partitioned into at least two parts which are accessed differently by at least one application.
COM_MIN Algorithm
Initialization:
- find a pi ∈ Pr such that pi partitions R according to Rule 1
- set Pr' = {pi}; Pr ← Pr – {pi}; F ← {fi}
Iteratively add predicates to Pr' until it is complete:
- find a pj ∈ Pr such that pj partitions some fk (defined according to a minterm predicate over Pr') according to Rule 1
- set Pr' = Pr' ∪ {pj}; Pr ← Pr – {pj}; F ← F ∪ {fj}
- if ∃ pk ∈ Pr' which is nonrelevant, then Pr' ← Pr' – {pk}; F ← F – {fk}
PHORIZONTAL Algorithm
Makes use of COM_MIN to perform fragmentation.
Input: a relation R and a set of simple predicates Pr
Output: a set of minterm predicates M according to which relation R is to be fragmented
- Pr' ← COM_MIN(R, Pr)
- determine the set M of minterm predicates
- determine the set I of implications among pi ∈ Pr
- eliminate the contradictory minterms from M
PHF – Example
Two candidate relations: PAY and PROJ.
Fragmentation of relation PAY
- Application: check the salary info and determine raise.
- Employee records kept at two sites → application run at two sites
- Simple predicates
  p1: SAL ≤ 30000
  p2: SAL > 30000
  Pr = {p1, p2}, which is complete and minimal: Pr' = Pr
- Minterm predicates
  m1: SAL ≤ 30000
  m2: NOT(SAL ≤ 30000), i.e., SAL > 30000
PHF – Example

PAY1
TITLE       SAL
Mech. Eng.  27000
Programmer  24000

PAY2
TITLE        SAL
Elect. Eng.  40000
Syst. Anal.  34000
PHF – Example
Fragmentation of relation PROJ
- Applications:
  (1) Find the name and budget of projects given their number; issued at three sites.
  (2) Access project information according to budget; one site accesses BUDGET ≤ 200000, the other accesses BUDGET > 200000.
- Simple predicates
  For application (1):
  p1: LOC = “Montreal”
  p2: LOC = “New York”
  p3: LOC = “Paris”
  For application (2):
  p4: BUDGET ≤ 200000
  p5: BUDGET > 200000
- Pr = Pr' = {p1, p2, p3, p4, p5}
PHF – Example
Fragmentation of relation PROJ continued
- Minterm fragments left after elimination
  m1: (LOC = “Montreal”) ∧ (BUDGET ≤ 200000)
  m2: (LOC = “Montreal”) ∧ (BUDGET > 200000)
  m3: (LOC = “New York”) ∧ (BUDGET ≤ 200000)
  m4: (LOC = “New York”) ∧ (BUDGET > 200000)
  m5: (LOC = “Paris”) ∧ (BUDGET ≤ 200000)
  m6: (LOC = “Paris”) ∧ (BUDGET > 200000)
PHF – Example

PROJ1
PNO  PNAME            BUDGET  LOC
P1   Instrumentation  150000  Montreal

PROJ2
PNO  PNAME              BUDGET  LOC
P2   Database Develop.  135000  New York

PROJ4
PNO  PNAME    BUDGET  LOC
P3   CAD/CAM  250000  New York

PROJ6
PNO  PNAME        BUDGET  LOC
P4   Maintenance  310000  Paris
PHF – Correctness
Completeness
- Since Pr' is complete and minimal, the selection predicates are complete
Reconstruction
- If relation R is fragmented into FR = {R1, R2, …, Rr}, then R = ∪(Ri ∈ FR) Ri
Disjointness
- Minterm predicates that form the basis of fragmentation should be mutually exclusive.
Derived Horizontal Fragmentation
Defined on a member relation of a link according to a selection operation specified on its owner.
- Each link is an equijoin.
- Equijoin can be implemented by means of semijoins.
Link structure:
SKILL(TITLE, SAL) —L1→ EMP(ENO, ENAME, TITLE) —L2→ ASG(ENO, PNO, RESP, DUR) ←L3— PROJ(PNO, PNAME, BUDGET, LOC)
DHF – Definition
Given a link L where owner(L) = S and member(L) = R, the derived horizontal fragments of R are defined as
Ri = R ⋉ Si, 1 ≤ i ≤ w
where w is the maximum number of fragments that will be defined on R, and
Si = σFi(S)
where Fi is the formula according to which the primary horizontal fragment Si is defined.
DHF – Example
Given link L1 where owner(L1) = SKILL and member(L1) = EMP:
EMP1 = EMP ⋉ SKILL1
EMP2 = EMP ⋉ SKILL2
where
SKILL1 = σSAL≤30000(SKILL)
SKILL2 = σSAL>30000(SKILL)

EMP1
ENO  ENAME      TITLE
E3   A. Lee     Mech. Eng.
E4   J. Miller  Programmer
E7   R. Davis   Mech. Eng.

EMP2
ENO  ENAME     TITLE
E1   J. Doe    Elect. Eng.
E2   M. Smith  Syst. Anal.
E5   B. Casey  Syst. Anal.
E6   L. Chu    Elect. Eng.
E8   J. Jones  Syst. Anal.
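The semijoin that derives the member fragments can be sketched directly. This is an illustrative toy model (abbreviated data, not the full example): the member keeps exactly the tuples whose join-attribute value appears in the owner fragment.

```python
def semijoin(R, S, attr):
    """R semijoin S on attr: tuples of R that have a matching tuple in S."""
    keys = {s[attr] for s in S}
    return [r for r in R if r[attr] in keys]

# Abbreviated owner and member relations.
SKILL = [
    {"TITLE": "Mech. Eng.",  "SAL": 27000},
    {"TITLE": "Programmer",  "SAL": 24000},
    {"TITLE": "Elect. Eng.", "SAL": 40000},
    {"TITLE": "Syst. Anal.", "SAL": 34000},
]
EMP = [
    {"ENO": "E1", "ENAME": "J. Doe",    "TITLE": "Elect. Eng."},
    {"ENO": "E3", "ENAME": "A. Lee",    "TITLE": "Mech. Eng."},
    {"ENO": "E4", "ENAME": "J. Miller", "TITLE": "Programmer"},
]

# Primary horizontal fragments of the owner (SKILL) drive the
# derived fragments of the member (EMP).
SKILL1 = [s for s in SKILL if s["SAL"] <= 30000]
SKILL2 = [s for s in SKILL if s["SAL"] > 30000]
EMP1 = semijoin(EMP, SKILL1, "TITLE")
EMP2 = semijoin(EMP, SKILL2, "TITLE")
```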
DHF – Correctness
Completeness
- Referential integrity
- Let R be the member relation of a link whose owner is relation S, which is fragmented as FS = {S1, S2, …, Sn}. Furthermore, let A be the join attribute between R and S. Then, for each tuple t of R, there should be a tuple t' of S such that t[A] = t'[A]
Reconstruction
- Same as primary horizontal fragmentation.
Disjointness
- Simple join graphs between the owner and the member fragments.
Vertical Fragmentation
Has been studied within the centralized context
- design methodology
- physical clustering
More difficult than horizontal, because more alternatives exist.
Two approaches:
- grouping attributes into fragments
- splitting the relation into fragments
Vertical Fragmentation
Overlapping fragments
- grouping
Non-overlapping fragments
- splitting
We do not consider the replicated key attributes to be overlapping.
Advantage: easier to enforce functional dependencies (for integrity checking, etc.)
VF – Information Requirements
Application information
- Attribute affinities
  A measure that indicates how closely related the attributes are. This is obtained from more primitive usage data.
- Attribute usage values
  Given a set of queries Q = {q1, q2, …, qq} that will run on the relation R[A1, A2, …, An],
  use(qi, Aj) = 1 if attribute Aj is referenced by query qi, 0 otherwise
  use(qi, •) can be defined accordingly.
VF – Definition of use(qi, Aj)
Consider the following 4 queries for relation PROJ:
q1: SELECT BUDGET FROM PROJ WHERE PNO=Value
q2: SELECT PNAME, BUDGET FROM PROJ
q3: SELECT PNAME FROM PROJ WHERE LOC=Value
q4: SELECT SUM(BUDGET) FROM PROJ WHERE LOC=Value
Let A1 = PNO, A2 = PNAME, A3 = BUDGET, A4 = LOC.

use   A1  A2  A3  A4
q1     1   0   1   0
q2     0   1   1   0
q3     0   1   0   1
q4     0   0   1   1
VF – Affinity Measure aff(Ai, Aj)
The attribute affinity measure between two attributes Ai and Aj of a relation R[A1, A2, …, An], with respect to the set of applications Q = {q1, q2, …, qq}, is defined as follows:
aff(Ai, Aj) = Σ (over all queries that access both Ai and Aj) Σ (over all sites) (access frequency of the query at that site × number of accesses per execution)
VF – Calculation of aff(Ai, Aj)
Assume each query in the previous example accesses the attributes once during each execution. Also assume the access frequencies:

      S1  S2  S3
q1    15  20  10
q2     5   0   0
q3    25  25  25
q4     3   0   0

Then aff(A1, A3) = 15·1 + 20·1 + 10·1 = 45, and the attribute affinity matrix AA is

      A1  A2  A3  A4
A1    45   0  45   0
A2     0  80   5  75
A3    45   5  53   3
A4     0  75   3  78
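The AA matrix above can be reproduced with a short sketch. The matrices below encode the running example: use[q][j] is use(qq+1, Aj+1), and acc[q][s] is the access frequency of that query at each of the three sites.

```python
# Attribute usage matrix from the example (rows q1..q4, columns A1..A4).
use = [
    [1, 0, 1, 0],  # q1: PNO, BUDGET
    [0, 1, 1, 0],  # q2: PNAME, BUDGET
    [0, 1, 0, 1],  # q3: PNAME, LOC
    [0, 0, 1, 1],  # q4: BUDGET, LOC
]
# Access frequencies at sites S1, S2, S3 (rows q1..q4).
acc = [
    [15, 20, 10],
    [5, 0, 0],
    [25, 25, 25],
    [3, 0, 0],
]

def aff(i, j):
    """Affinity of Ai and Aj: total frequency of queries that use both
    (each query accesses each attribute once per execution)."""
    return sum(sum(acc[q]) for q in range(len(use)) if use[q][i] and use[q][j])

AA = [[aff(i, j) for j in range(4)] for i in range(4)]
```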
VF – Clustering Algorithm
Take the attribute affinity matrix AA and reorganize the attribute orders to form clusters where the attributes in each cluster demonstrate high affinity to one another.
The Bond Energy Algorithm (BEA) has been used for clustering of entities. BEA finds an ordering of entities (in our case attributes) such that the global affinity measure AM (the affinity of each Ai and Aj with their neighbors) is maximized.
Bond Energy Algorithm
Input: the AA matrix
Output: the clustered affinity matrix CA, which is a perturbation of AA
- Initialization: place and fix one of the columns of AA in CA.
- Iteration: place the remaining n–i columns in the remaining i+1 positions in the CA matrix. For each column, choose the placement that makes the most contribution to the global affinity measure.
- Row order: order the rows according to the column ordering.
Bond Energy Algorithm
“Best” placement? Define the contribution of a placement:
cont(Ai, Ak, Aj) = 2·bond(Ai, Ak) + 2·bond(Ak, Aj) – 2·bond(Ai, Aj)
where
bond(Ax, Ay) = Σ(z=1..n) aff(Az, Ax)·aff(Az, Ay)
BEA – Example
Consider the previous AA matrix and the corresponding CA matrix where A1 and A2 have been placed. Place A3:
Ordering (0-3-1):
cont(A0, A3, A1) = 2·bond(A0, A3) + 2·bond(A3, A1) – 2·bond(A0, A1)
                 = 2·0 + 2·4410 – 2·0 = 8820
Ordering (1-3-2):
cont(A1, A3, A2) = 2·bond(A1, A3) + 2·bond(A3, A2) – 2·bond(A1, A2)
                 = 2·4410 + 2·890 – 2·225 = 10150
Ordering (2-3-4):
cont(A2, A3, A4) = 1780
BEA – Example
Therefore, the CA matrix has the form

      A1  A3  A2
A1    45  45   0
A2     0   5  80
A3    45  53   5
A4     0   3  75
BEA – Example
When A4 is placed, the final form of the CA matrix (after row reorganization) is

      A1  A3  A2  A4
A1    45  45   0   0
A3    45  53   5   3
A2     0   5  80  75
A4     0   3  75  78
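The bond and cont values used in the example can be verified with a short sketch. The AA values are the ones computed earlier; A0 denotes the empty boundary column, whose bond with anything is 0.

```python
# Attribute affinity values from the running example.
AA = {
    ("A1", "A1"): 45, ("A1", "A2"): 0,  ("A1", "A3"): 45, ("A1", "A4"): 0,
    ("A2", "A1"): 0,  ("A2", "A2"): 80, ("A2", "A3"): 5,  ("A2", "A4"): 75,
    ("A3", "A1"): 45, ("A3", "A2"): 5,  ("A3", "A3"): 53, ("A3", "A4"): 3,
    ("A4", "A1"): 0,  ("A4", "A2"): 75, ("A4", "A3"): 3,  ("A4", "A4"): 78,
}
ATTRS = ["A1", "A2", "A3", "A4"]

def bond(x, y):
    """bond(Ax, Ay) = sum over z of aff(Az, Ax) * aff(Az, Ay);
    the empty boundary column A0 bonds 0 with everything."""
    if "A0" in (x, y):
        return 0
    return sum(AA[(z, x)] * AA[(z, y)] for z in ATTRS)

def cont(i, k, j):
    """Contribution of placing column Ak between columns Ai and Aj."""
    return 2 * bond(i, k) + 2 * bond(k, j) - 2 * bond(i, j)
```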
VF – Algorithm
How can you divide a set of clustered attributes {A1, A2, …, Am} into two (or more) sets {A1, A2, …, Ai} and {Ai+1, …, Am} such that there are no (or minimal) applications that access both (or more than one) of the sets?
In the CA matrix, this corresponds to a point along the diagonal that splits the attributes into a top-left block TA = {A1, …, Ai} and a bottom-right block BA = {Ai+1, …, Am}.
VF – Algorithm
Define:
TQ = set of applications that access only TA
BQ = set of applications that access only BA
OQ = set of applications that access both TA and BA
and
CTQ = total number of accesses to attributes by applications that access only TA
CBQ = total number of accesses to attributes by applications that access only BA
COQ = total number of accesses to attributes by applications that access both TA and BA
Then find the point along the diagonal that maximizes
z = CTQ·CBQ – COQ²
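The binary split search can be sketched as follows. This is an illustrative toy (assumed data): attributes appear in clustered order, and each query carries its total access frequency and the set of attributes it uses, taken from the running PROJ example.

```python
# Clustered attribute order (as produced by BEA in the example).
order = ["A1", "A3", "A2", "A4"]
# query -> (total access frequency, attributes used)
queries = {
    "q1": (45, {"A1", "A3"}),
    "q2": (5,  {"A2", "A3"}),
    "q3": (75, {"A2", "A4"}),
    "q4": (3,  {"A3", "A4"}),
}

def best_split(order, queries):
    """Try every split of the clustered order into TA/BA and return the
    split maximizing z = CTQ*CBQ - COQ^2."""
    best = None
    for i in range(1, len(order)):
        TA, BA = set(order[:i]), set(order[i:])
        CTQ = sum(f for f, A in queries.values() if A <= TA)  # only TA
        CBQ = sum(f for f, A in queries.values() if A <= BA)  # only BA
        COQ = sum(f for f, A in queries.values() if A & TA and A & BA)
        z = CTQ * CBQ - COQ ** 2
        if best is None or z > best[0]:
            best = (z, TA, BA)
    return best
```

On this data the best split puts {A1, A3} (PNO, BUDGET) in one fragment and {A2, A4} (PNAME, LOC) in the other, matching the vertical fragmentation shown earlier.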
VF – Algorithm
Two problems:
- Cluster forming in the middle of the CA matrix
  - shift a row up and a column left and apply the algorithm to find the “best” partitioning point
  - do this for all possible shifts
  - cost: O(m²)
- More than two clusters
  - m-way partitioning
  - try 1, 2, …, m–1 split points along the diagonal and try to find the best point for each of these
  - cost: O(2^m)
VF – Correctness
A relation R, defined over attribute set A and key K, generates the vertical partitioning FR = {R1, R2, …, Rr}.
Completeness
- The following should be true for A: A = ∪ A(Ri)
Reconstruction
- Reconstruction can be achieved by R = ⋈K Ri, ∀Ri ∈ FR
Disjointness
- TIDs are not considered to be overlapping since they are maintained by the system
- Duplicated keys are not considered to be overlapping
Hybrid Fragmentation
R is first fragmented horizontally, and each horizontal fragment is then fragmented vertically:
R —HF→ {R1, R2}; R1 —VF→ {R11, R12}; R2 —VF→ {R21, R22, R23}
Fragment Allocation
Problem statement: given
F = {F1, F2, …, Fn}   fragments
S = {S1, S2, …, Sm}   network sites
Q = {q1, q2, …, qq}   applications
find the "optimal" distribution of F to S.
Optimality
- Minimal cost
  - communication + storage + processing (read & update)
  - cost in terms of time (usually)
- Performance
  - response time and/or throughput
- Constraints
  - per-site constraints (storage & processing)
Information Requirements
Database information
- selectivity of fragments
- size of a fragment
Application information
- access types and numbers
- access localities
Computer system information
- unit cost of storing data at a site
- unit cost of processing at a site
Communication network information
- bandwidth
- latency
- communication overhead
Allocation
File Allocation Problem (FAP) vs Database Allocation Problem (DAP):
- Fragments are not individual files
  - relationships have to be maintained
- Access to databases is more complicated
  - remote file access model not applicable
  - relationship between allocation and query processing
- Cost of integrity enforcement should be considered
- Cost of concurrency control should be considered
Allocation – Information Requirements
Database information
- selectivity of fragments
- size of a fragment
Application information
- number of read accesses of a query to a fragment
- number of update accesses of a query to a fragment
- a matrix indicating which queries update which fragments
- a similar matrix for retrievals
- originating site of each query
Site information
- unit cost of storing data at a site
- unit cost of processing at a site
Network information
- communication cost/frame between two sites
- frame size
Allocation Model
General form:
min(Total Cost)
subject to
- response time constraint
- storage constraint
- processing constraint
Decision variable:
xij = 1 if fragment Fi is stored at site Sj, 0 otherwise
Allocation Model
Total Cost = Σ (over all queries) query processing cost
           + Σ (over all sites) Σ (over all fragments) cost of storing a fragment at a site
Storage cost (of fragment Fj at site Sk) = (unit storage cost at Sk) × (size of Fj) × xjk
Query processing cost (for one query) = processing component + transmission component
Allocation Model
Query processing cost
Processing component = access cost + integrity enforcement cost + concurrency control cost
- Access cost = Σ (over all sites) Σ (over all fragments) (no. of update accesses + no. of read accesses) × xij × local processing cost at a site
- Integrity enforcement and concurrency control costs can be similarly calculated
Allocation Model
Query processing cost
Transmission component = cost of processing updates + cost of processing retrievals
- Cost of updates = Σ (over all sites) Σ (over all fragments) update message cost + Σ (over all sites) Σ (over all fragments) acknowledgment cost
- Retrieval cost = Σ (over all fragments) min (over all sites) (cost of retrieval command + cost of sending back the result)
Allocation Model
Constraints
- Response time: execution time of query ≤ max. allowable response time for that query
- Storage constraint (for a site): Σ (over all fragments) storage requirement of a fragment at that site ≤ storage capacity of that site
- Processing constraint (for a site): Σ (over all queries) processing load of a query at that site ≤ processing capacity of that site
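The model structure above can be made concrete with a small sketch. All numbers below are assumed toy values (not from the book), and only the storage cost, a simplified access cost, and the storage constraint are modeled; the transmission component and other constraints would be added the same way.

```python
# Toy instance: 2 fragments, 2 sites, 2 queries.
size = [100, 60]           # size of fragments F1, F2
unit_storage = [1.0, 2.0]  # unit storage cost at sites S1, S2
capacity = [120, 120]      # storage capacity of each site
# access[q][i]: read + update accesses of query q to fragment Fi
access = [[10, 0], [5, 8]]
local_cost = [0.5, 0.4]    # local processing cost at each site

def total_cost(x):
    """Storage cost plus a simplified access-cost term, as in the model:
    each term is weighted by the 0/1 decision variable x[i][j]."""
    storage = sum(unit_storage[j] * size[i] * x[i][j]
                  for i in range(2) for j in range(2))
    processing = sum(access[q][i] * x[i][j] * local_cost[j]
                     for q in range(2) for i in range(2) for j in range(2))
    return storage + processing

def feasible(x):
    """Storage constraint: per site, allocated fragment sizes fit capacity."""
    return all(sum(size[i] * x[i][j] for i in range(2)) <= capacity[j]
               for j in range(2))

x = [[1, 0], [0, 1]]  # candidate allocation: F1 at S1, F2 at S2
```

An exact solver would enumerate (or branch-and-bound over) the feasible x matrices and keep the minimum-cost one, which is exactly what makes the problem NP-complete at scale.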
Allocation Model
Solution methods
- FAP is NP-complete
- DAP is also NP-complete
Heuristics based on
- single commodity warehouse location (for FAP)
- knapsack problem
- branch and bound techniques
- network flow
Allocation Model
Attempts to reduce the solution space
- assume all candidate partitionings are known; select the “best” partitioning
- ignore replication at first
- sliding window on fragments