
Computational Genomics Lecture 7c: Hidden Markov Models (HMMs) © Ydo Wexler & Dan Geiger (Technion) and Nir Friedman (HU). Modified by Benny Chor (TAU)

Outline
• Finite, or Discrete, Markov Models
• Hidden Markov Models
• Three major questions:
  Q1: Compute the probability of a given sequence of observations.
  A1: The forward-backward dynamic programming algorithm (Baum-Welch).
  Q2: Compute the most probable sequence of states, given a sequence of observations.
  A2: Viterbi's dynamic programming algorithm.
  Q3: Learn the best model, given a sequence of observations.
  A3: The Expectation Maximization (EM) heuristic.

Markov Models
A discrete (finite) system:
• N distinct states.
• Begins (at time t=1) in some initial state(s).
• At each time step (t=1, 2, …) the system moves from the current state to the next state (possibly the same as the current state) according to transition probabilities associated with the current state.
This kind of system is called a finite, or discrete, Markov model, also known as a probabilistic finite automaton. Named after Andrei Andreyevich Markov (1856-1922).

Outline
• Markov Chains (Markov Models)
• Hidden Markov Chains (HMMs)
• Algorithmic Questions
• Biological Relevance

Discrete Markov Model: Example
• A discrete Markov model with 5 states.
• Each a_ij represents the probability of moving from state i to state j.
• The a_ij are given in a matrix A = {a_ij}.
• The probability of starting in a given state i is p_i; the vector p holds these start probabilities.
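As a concrete illustration of this representation, the sketch below stores a small transition matrix A and start vector p as numpy arrays, checks that the rows of A sum to 1, and samples a state path. The 3-state numbers are made up for illustration; they are not the values from the slide's 5-state figure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state chain (the slide's 5-state figure is not reproduced here).
# Row i of A holds the transition probabilities out of state i.
A = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
p = np.array([0.5, 0.3, 0.2])            # start probabilities p_i

assert np.allclose(A.sum(axis=1), 1.0)   # each row of A must sum to 1
assert np.isclose(p.sum(), 1.0)

def sample_path(A, p, T):
    """Sample a state sequence X_1..X_T from the Markov model (A, p)."""
    states = [rng.choice(len(p), p=p)]    # X_1 ~ p
    for _ in range(T - 1):
        # X_{t+1} is drawn from the row of A indexed by the current state.
        states.append(rng.choice(A.shape[1], p=A[states[-1]]))
    return states

print(sample_path(A, p, 10))
```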

Markov Property
• Markov Property: The state of the system at time t+1 depends only on the state of the system at time t:
  P(X_{t+1} = x_{t+1} | X_t = x_t, X_{t-1} = x_{t-1}, …, X_1 = x_1) = P(X_{t+1} = x_{t+1} | X_t = x_t)
(Diagram: the chain X_1 → X_2 → X_3 → X_4 → X_5.)

Markov Chains: Stationarity
In general, a process is called stationary if its transition probabilities are independent of t, namely
  P(X_{t+1} = j | X_t = i) = p_ij for every t.
This means that if the system is in state i, the probability that it will next move to state j is p_ij, no matter what the value of t is. This property clearly holds for our Markov models.

Simple Minded Weather Example
• raining today → rain tomorrow: p_rr = 0.4
• raining today → no rain tomorrow: p_rn = 0.6
• not raining today → rain tomorrow: p_nr = 0.2
• not raining today → no rain tomorrow: p_nn = 0.8
Pr(Dry Wed. Dec. 14 → Rainy Fri. Dec. 16)?
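The question asks for a two-step transition probability: dry on Wednesday, rainy two days later on Friday. A minimal numpy sketch of the arithmetic (the worked answer below is mine, derived from the four probabilities above):

```python
import numpy as np

# States: 0 = rain, 1 = no rain; each row sums to 1.
P = np.array([[0.4, 0.6],
              [0.2, 0.8]])

# Two steps ahead: dry (state 1) on Wednesday -> rainy (state 0) on Friday.
P2 = P @ P
print(P2[1, 0])   # 0.2*0.4 + 0.8*0.2 = 0.24
```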

Simple Minded Weather Example
Transition matrix for our example:

            rain   no rain
  rain      0.4    0.6
  no rain   0.2    0.8

• Note that rows sum to 1 (but columns don't).
• Such a matrix is called a stochastic matrix.
• If the rows and the columns of a matrix all sum to 1, we have a doubly stochastic matrix.
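A small helper makes the row-sum and column-sum checks explicit (a sketch; the function name is my own, not from the lecture):

```python
import numpy as np

def is_stochastic(M, axis=1):
    """Row-stochastic if axis=1, column-stochastic if axis=0."""
    return bool(np.all(M >= 0) and np.allclose(M.sum(axis=axis), 1.0))

P = np.array([[0.4, 0.6],
              [0.2, 0.8]])
print(is_stochastic(P))                          # True: rows sum to 1
print(is_stochastic(P) and is_stochastic(P, 0))  # False: not doubly stochastic
```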

Coke vs. Pepsi (a central cultural dilemma)
Given that a person's last cola purchase was Coke™, there is a 90% chance that her next cola purchase will also be Coke™. If that person's last cola purchase was Pepsi™, there is an 80% chance that her next cola purchase will also be Pepsi™.
(Diagram: Coke stays Coke with probability 0.9, switches to Pepsi with 0.1; Pepsi stays Pepsi with 0.8, switches to Coke with 0.2.)

Coke vs. Pepsi
Given that a person is currently a Pepsi purchaser, what is the probability that she will purchase Coke two purchases from now?
The transition matrices (rows/columns ordered Coke, Pepsi) are, for one purchase ahead,
  P = [0.9 0.1; 0.2 0.8]
and for two purchases ahead
  P² = [0.83 0.17; 0.34 0.66],
so the answer is P²[Pepsi, Coke] = 0.34.

Coke vs. Pepsi
Given that a person is currently a Coke drinker, what is the probability that she will purchase Pepsi three purchases from now?
Three purchases ahead, P³ = [0.781 0.219; 0.438 0.562], so the answer is P³[Coke, Pepsi] = 0.219.
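Both answers can be verified with a few lines of numpy (a sketch of the matrix-power computation the slides carry out):

```python
import numpy as np

# States: 0 = Coke, 1 = Pepsi.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

P2 = np.linalg.matrix_power(P, 2)
P3 = np.linalg.matrix_power(P, 3)
print(P2[1, 0])   # Pepsi now -> Coke in two purchases: 0.34
print(P3[0, 1])   # Coke now -> Pepsi in three purchases: 0.219
```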

Coke vs. Pepsi
Assume each person makes one cola purchase per week. Suppose 60% of all people now drink Coke, and 40% drink Pepsi. What fraction of people will be drinking Coke three weeks from now?
Let (Q_0, Q_1) = (0.6, 0.4) be the initial probabilities. We regard Coke as state 0 and Pepsi as state 1, and we want to find P(X_3 = 0):
  P(X_3 = 0) = [(Q_0, Q_1) P³]_0 = 0.6 · 0.781 + 0.4 · 0.438 = 0.6438.
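Equivalently, propagate the initial distribution through three steps of the chain (a minimal sketch; the variable names are mine):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
q = np.array([0.6, 0.4])   # initial distribution (Coke, Pepsi)

for _ in range(3):         # three weekly purchases
    q = q @ P
print(q[0])                # P(X_3 = Coke) = 0.6438
```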

Equilibrium (Stationary) Distribution
• Suppose 60% of all people now drink Coke, and 40% drink Pepsi. What fraction will be drinking Coke 10, 1000, 10000, … weeks from now?
• For each week, the probability is well defined. But does it converge to some equilibrium distribution [p_0, p_1]?
• If it does, then the equations
  0.9 p_0 + 0.2 p_1 = p_0
  0.1 p_0 + 0.8 p_1 = p_1
must hold, yielding p_0 = 2/3, p_1 = 1/3.
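Iterating the chain from the 60/40 start shows this convergence numerically (a minimal sketch; 100 iterations is an arbitrary choice, far more than this small chain needs):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
q = np.array([0.6, 0.4])

for _ in range(100):   # repeated application of the transition matrix
    q = q @ P
print(q)               # ~ [0.6667, 0.3333] = [2/3, 1/3]
```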

Equilibrium (Stationary) Distribution
Whether or not there is a stationary distribution, and whether or not it is unique if it does exist, are determined by certain properties of the process.
• Irreducible means that every state is accessible from every other state.
• Aperiodic means that returns to a state are not confined to multiples of some period greater than 1; for an irreducible chain, it suffices that some state can transition to itself.
• Positive recurrent means that the expected return time is finite for every state.
http://en.wikipedia.org/wiki/Markov_chain

Equilibrium (Stationary) Distribution
• If the Markov chain is positive recurrent, there exists a stationary distribution. If it is positive recurrent and irreducible, there exists a unique stationary distribution, and furthermore the process constructed by taking the stationary distribution as the initial distribution is ergodic (defined shortly). Then the average of a function f over samples of the Markov chain equals the average with respect to the stationary distribution:
  lim_{T→∞} (1/T) Σ_{t=1..T} f(X_t) = Σ_i π_i f(i).
http://en.wikipedia.org/wiki/Markov_chain

Equilibrium (Stationary) Distribution
• Writing P for the transition matrix, a stationary distribution is a row vector π which satisfies the equation
  πP = π.
• In this case, the stationary distribution π is a left eigenvector of the transition matrix, associated with the eigenvalue 1.
http://en.wikipedia.org/wiki/Markov_chain
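A direct, non-iterative way to obtain π is to extract the eigenvector of P transposed for eigenvalue 1 and normalize it to sum to 1 (a minimal sketch):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Left eigenvectors of P are (right) eigenvectors of P transpose.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.isclose(vals, 1.0))])
pi = pi / pi.sum()   # normalize to a probability vector (also fixes the sign)
print(pi)            # [2/3, 1/3], matching the hand calculation above
```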

Discrete Markov Model - Example
• States – Rainy: 1, Cloudy: 2, Sunny: 3
• Transition matrix A – (a 3×3 table of transition probabilities, shown as a figure on the slide)
• Problem – given that the weather on day 1 (t=1) is sunny (3), compute the probability of a specified weather sequence over the following days (the sequence itself appears on the slide)

Discrete Markov Model – Example (cont.)
The answer is the product of the start probability and the transition probabilities along the given sequence: for a state sequence q_1, q_2, …, q_T,
  P(sequence | model) = p_{q_1} · a_{q_1 q_2} · a_{q_2 q_3} · … · a_{q_{T-1} q_T}.
(The slide evaluates this for its specific sequence.)

Ergodicity
• Ergodic model: strongly connected - there is a directed path with positive transition probabilities from each state i to each state j (but not necessarily a complete directed graph).

Third Example: The Friendly Gambler
The game starts with $10 in the gambler's pocket. At each round:
• the gambler wins $1 with probability p, or
• the gambler loses $1 with probability 1-p.
The game ends when the gambler goes broke (no sister in the bank), or accumulates a capital of $100 (including the initial capital). Both $0 and $100 are absorbing states (or boundaries).
(Diagram: states 0, 1, 2, …, N-1, N in a line; from each interior state, move right with probability p and left with probability 1-p; start at $10.)

Third Example: The Friendly Gambler
Recall: irreducible means that every state is accessible from every other state; aperiodic means that returns to a state are not confined to multiples of some period greater than 1; positive recurrent means that for every state, the expected return time is finite. If the Markov chain is positive recurrent, there exists a stationary distribution.
Is the gambler's chain positive recurrent? Does it have a stationary distribution (independent of the initial distribution)?
(Diagram: the same chain as above, with absorbing states 0 and N.)
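A short simulation makes the answer tangible: every run is eventually absorbed at 0 or N, so the interior states are transient and the long-run behavior depends on the start state. This is a sketch under assumptions of my choosing (p = 0.5, N = 100, 1000 trials), not part of the original slides:

```python
import numpy as np

rng = np.random.default_rng(1)

def gamble(start=10, N=100, p=0.5, max_steps=1_000_000):
    """Run one gambler's-ruin game; return the absorbing state reached."""
    x = start
    steps = 0
    while 0 < x < N and steps < max_steps:
        x += 1 if rng.random() < p else -1
        steps += 1
    return x

results = [gamble() for _ in range(1000)]
# For a fair game (p = 0.5), P(ruin) = 1 - start/N = 0.9 here.
print(sum(r == 0 for r in results) / len(results))   # fraction ruined, ~0.9
```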