
Algorithmic Methods for Transient Analysis of Continuous Time Markov Chains
Explore the transient analysis of continuous time Markov chains, including finding equilibrium distributions, transient probabilities, matrix decomposition, and transient measures. Learn about the homogeneous, stationary, and irreducible properties of Markov chains. Discover how to derive the generator matrix and equilibrium equations, and the equivalence between discrete and continuous time Markov chains.
Lecture 2: Algorithmic Methods for Transient Analysis of Continuous Time Markov Chains
Dr. Ahmad Al Hanbali, Industrial Engineering Department, University of Twente, a.alhanbali@utwente.nl
Lecture 2
This lecture deals with continuous time Markov chains, as opposed to the discrete time Markov chains of Lecture 1. Objectives:
1. Find the equilibrium distribution
2. Find the transient probabilities, via
   a. matrix decomposition
   b. the uniformization method
3. Find transient measures
Background (1)
Let {X(t) : t >= 0} denote a continuous time stochastic process with state space {0, 1, ..., N}.
X(t) is a Markov chain if, for every s, t >= 0 and every state j, the conditional transition probability satisfies
  P(X(t + s) = j | X(u), u <= s) = P(X(t + s) = j | X(s)).
X(t) is homogeneous (or stationary) if
  P(X(t + s) = j | X(s) = i) = P(X(t) = j | X(0) = i) = p_ij(t).
X(t) is irreducible if all states can communicate.
Background (2)
Define the (infinitesimal) transition rate from state i to state j (i != j) of the Markov process as
  q_ij = lim_{Δ -> 0} p_ij(Δ) / Δ.
Let {T_n : n = 0, 1, ...} denote the transition epochs of the CTMC (by convention T_0 = 0). Then, for t >= 0,
  P(T_n - T_{n-1} <= t | X(T_{n-1}) = i, X(T_n) = j) = 1 - exp(-q_i t),
where q_i = sum_{j != i} q_ij is the total outgoing rate of state i, equivalently
  q_i = lim_{Δ -> 0} (1 - p_ii(Δ)) / Δ.
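These transition dynamics translate directly into a simulation scheme: from state i, hold for an exponential time with rate q_i, then jump to j with probability q_ij / q_i. A minimal Python sketch, using numpy; the 3-state generator Q is an invented example, not taken from the slides:

```python
import numpy as np

def simulate_ctmc(Q, x0, t_end, rng):
    """Simulate a CTMC sample path from generator Q, starting in x0, until t_end.

    Returns the jump times and the states entered at those times.
    """
    n = Q.shape[0]
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while True:
        q_i = -Q[x, x]                      # total outgoing rate of current state
        if q_i <= 0:                        # absorbing state: stay forever
            break
        t += rng.exponential(1.0 / q_i)     # sojourn time ~ Exp(q_i)
        if t >= t_end:
            break
        probs = Q[x].copy()
        probs[x] = 0.0
        probs /= q_i                        # jump probabilities q_ij / q_i
        x = rng.choice(n, p=probs)
        times.append(t)
        states.append(x)
    return times, states

# Illustrative 3-state generator (rows sum to zero).
Q = np.array([[-3.0, 2.0, 1.0],
              [ 1.0, -4.0, 3.0],
              [ 2.0, 2.0, -4.0]])
rng = np.random.default_rng(1)
times, states = simulate_ctmc(Q, 0, 10.0, rng)
```

The jump chain never revisits the same state on consecutive steps; self-loops only appear later, after uniformization.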
Background (3)
Let q_ii = -q_i. The matrix Q = [q_ij], 0 <= i, j <= N, is called the generator of the continuous time Markov chain (CTMC). Note that each row sums to zero: sum_j q_ij = 0.
Let π = (π_0, ..., π_N) denote the equilibrium probabilities. The equilibrium equations of the CTMC read
  π_j q_j = sum_{i != j} π_i q_ij,   for all j,
or in matrix form,
  πQ = 0,   πe = 1.
Idea: take advantage of the methods developed for discrete time Markov chains (in Lecture 1).
An equivalent discrete time Markov chain
The equilibrium distribution π can be obtained from an equivalent Markov chain via an elementary transformation. Let Δ be a real number such that 0 < Δ <= min_i (1/q_i), and let P = I + ΔQ.
P is a stochastic matrix, i.e., its entries lie between 0 and 1 and its rows sum to 1. Further, πP = π if and only if πQ = 0.
The Markov chain with transition matrix P is a discretization of the Markov process with generator Q, with time step Δ.
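This transformation is easy to check numerically. The sketch below (numpy, with the same invented 3-state generator used throughout these notes) builds P = I + ΔQ, verifies that it is stochastic, and confirms that the solution of πQ = 0, πe = 1 is also invariant for P:

```python
import numpy as np

# Illustrative 3-state generator (not from the slides); rows sum to zero.
Q = np.array([[-3.0, 2.0, 1.0],
              [ 1.0, -4.0, 3.0],
              [ 2.0, 2.0, -4.0]])

q = -np.diag(Q)                # outgoing rates q_i
delta = 1.0 / q.max()          # largest admissible step: Delta = min_i (1/q_i)
P = np.eye(len(Q)) + delta * Q

# P is stochastic: entries in [0, 1], rows summing to 1.
assert np.all(P >= 0) and np.allclose(P.sum(axis=1), 1.0)

# Solve pi Q = 0, pi e = 1 by replacing one (redundant) balance equation
# with the normalization condition.
A = np.vstack([Q.T[:-1], np.ones(len(Q))])
b = np.zeros(len(Q)); b[-1] = 1.0
pi = np.linalg.solve(A, b)

# The same pi is invariant for P: pi P = pi (+ Delta * pi Q = pi).
assert np.allclose(pi @ P, pi)
```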
Uniformization of a CTMC
To have the same mean sojourn time in all states per visit, the uniformization of a CTMC introduces fictitious transitions from states to themselves.
Let 0 < Δ <= min_i (1/q_i), and introduce a fictitious transition from state i to itself with rate (1/Δ - q_i). This yields:
1. The equilibrium distribution of Q does not change.
2. The outgoing rate from state i becomes q_i + (1/Δ - q_i) = 1/Δ, the same for all states.
3. The equilibrium distribution of the uniformized Markov process of Q is the same as that of the Markov chain with transition matrix P (= I + ΔQ) embedded at the transition epochs (jumps).
The transitions of the uniformized process take place according to a Poisson process with rate 1/Δ.
Equilibrium distribution
All methods developed for solving the equilibrium equations of a discrete time Markov chain can be applied to the uniformized Markov chain with transition matrix P.
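For instance, the power method from the discrete time setting can be run directly on the uniformized chain. A minimal sketch, again with an invented 3-state generator:

```python
import numpy as np

def power_method(P, tol=1e-12, max_iter=100_000):
    """Fixed-point iteration pi_{k+1} = pi_k P for the equilibrium distribution."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # uniform starting vector
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).max() < tol:
            return new
        pi = new
    raise RuntimeError("power method did not converge")

# Illustrative generator and its uniformized chain P = I + delta * Q.
Q = np.array([[-3.0, 2.0, 1.0],
              [ 1.0, -4.0, 3.0],
              [ 2.0, 2.0, -4.0]])
delta = 1.0 / (-np.diag(Q)).max()
P = np.eye(3) + delta * Q

pi = power_method(P)
assert np.allclose(pi @ Q, 0.0, atol=1e-8)       # pi also solves pi Q = 0
```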
Transient behavior of a CTMC
Kolmogorov's equations are needed for the transient analysis. Define the transient probability
  p_ij(t) = P(X(t) = j | X(0) = i).
Then, for 0 <= s < t (Chapman-Kolmogorov),
  p_ij(t) = sum_k p_ik(s) p_kj(t - s).
Kolmogorov's equations are a set of differential equations for p_ij(t).
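The Chapman-Kolmogorov identity can be checked numerically with scipy's matrix exponential (the representation P(t) = exp(Qt) is derived on the next slide); the generator is an invented example:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-3.0, 2.0, 1.0],
              [ 1.0, -4.0, 3.0],
              [ 2.0, 2.0, -4.0]])

# P(t) = exp(Qt); check the Chapman-Kolmogorov identity P(t) = P(s) P(t - s).
t, s = 2.0, 0.7
P_t = expm(Q * t)
assert np.allclose(P_t, expm(Q * s) @ expm(Q * (t - s)))
assert np.allclose(P_t.sum(axis=1), 1.0)   # each row of P(t) is a distribution
```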
Transient behavior of a CTMC (cnt'd)
Let P(t) be the matrix with (i, j) entry p_ij(t). Kolmogorov's forward equations are derived by letting s approach t from below (the backward equations by letting s approach 0):
  P'(t) = P(t) Q,   P(0) = I.
Hence
  P(t) = sum_{n >= 0} (Qt)^n / n! = exp(Qt).
Truncating this infinite sum directly is numerically inefficient, since Q has both positive and negative elements, which causes cancellation between terms.
Matrix decomposition method
Let λ_k, k = 0, ..., N, be the (N + 1) eigenvalues of Q. Let l_k and r_k be the left and right eigenvectors corresponding to λ_k, normalized such that l_k r_k = 1 and l_k r_j = 0 for j != k. The generator then reads
  Q = V D V^{-1},
where V^{-1} is the matrix whose rows are the l_k, D is the diagonal matrix with entries λ_k, and V is the matrix whose columns are the r_k.
Matrix decomposition method (cnt'd)
The transient probability matrix then reads
  P(t) = sum_{n >= 0} (Qt)^n / n! = V exp(Dt) V^{-1} = sum_k exp(λ_k t) r_k l_k.
What is the interpretation of P(∞)? What conditions should the λ_k satisfy when t tends to infinity? What is the disadvantage of matrix decomposition?
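The decomposition is a few lines of numpy (assuming Q is diagonalizable). For an irreducible generator one eigenvalue is 0 and the rest have negative real part, so as t grows only the λ = 0 term survives: its rank-one matrix r_0 l_0 has identical rows, the limiting distribution. The generator below is an invented example:

```python
import numpy as np
from scipy.linalg import expm

Q = np.array([[-3.0, 2.0, 1.0],
              [ 1.0, -4.0, 3.0],
              [ 2.0, 2.0, -4.0]])

# Right eigendecomposition Q = V D V^{-1}; the rows of V^{-1} are the left
# eigenvectors l_k, automatically normalized so that l_k r_k = 1.
lam, V = np.linalg.eig(Q)
V_inv = np.linalg.inv(V)

def P_decomp(t):
    """P(t) = V exp(Dt) V^{-1} = sum_k exp(lambda_k t) r_k l_k."""
    return (V * np.exp(lam * t)) @ V_inv

t = 1.5
assert np.allclose(P_decomp(t).real, expm(Q * t))

# One eigenvalue is 0; its term r_0 l_0 is P(infinity), with all rows equal.
assert np.isclose(lam.real.max(), 0.0, atol=1e-10)
idx = np.argmax(lam.real)
P_inf = np.outer(V[:, idx], V_inv[idx]).real
assert np.allclose(P_inf, P_inf[0])   # limit is independent of the start state
```

The disadvantage hinted at on the slide: eigendecomposition can be numerically ill-conditioned, and Q may not be diagonalizable at all.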
Uniformization method
Let 0 < Δ <= min_i (1/q_i) and P = I + ΔQ. Conditioning on N, the number of transitions in (0, t), which is Poisson distributed with mean t/Δ, gives
  P(t) = exp(Qt) = sum_{n >= 0} exp(-t/Δ) (t/Δ)^n / n! P^n.
Truncating the latter sum at the first M terms gives a good approximation, with
  M = max{20, t/Δ + 5 sqrt(t/Δ)}.
It is best to take the largest possible value Δ = min_i (1/q_i).
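The algorithm is a weighted sum of powers of the stochastic matrix P, with Poisson weights updated recursively. A sketch with an invented generator; note the truncation error is exactly the Poisson tail beyond M, which sets the achievable tolerance:

```python
import math
import numpy as np
from scipy.linalg import expm

def transient_uniformization(Q, t):
    """Approximate P(t) = exp(Qt) by uniformization with Poisson weights."""
    delta = 1.0 / (-np.diag(Q)).max()        # largest admissible step
    P = np.eye(Q.shape[0]) + delta * Q
    a = t / delta                            # Poisson mean t/Delta
    M = int(max(20, a + 5 * math.sqrt(a)))   # truncation level from the slide
    result = np.zeros_like(Q)
    weight = math.exp(-a)                    # Poisson(a) pmf at n = 0
    Pn = np.eye(Q.shape[0])
    for n in range(M + 1):
        result += weight * Pn                # add exp(-a) a^n/n! * P^n
        Pn = Pn @ P
        weight *= a / (n + 1)                # recursive Poisson pmf update
    return result

Q = np.array([[-3.0, 2.0, 1.0],
              [ 1.0, -4.0, 3.0],
              [ 2.0, 2.0, -4.0]])
t = 2.0
assert np.allclose(transient_uniformization(Q, t), expm(Q * t), atol=1e-4)
```

Unlike the raw series for exp(Qt), every term here is nonnegative, so no cancellation occurs.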
Occupancy time: mean
The occupancy time of a state is the sojourn time in that state during (0, T). Note that it depends on the state at time 0.
Let m_ij(T) denote the mean occupancy time in state j during (0, T) given initial state i. Then
  m_ij(T) = E[ ∫_0^T 1{X(t) = j} dt | X(0) = i ] = ∫_0^T P(X(t) = j | X(0) = i) dt = ∫_0^T p_ij(t) dt.
In matrix form,
  M(T) = [m_ij(T)] = ∫_0^T P(t) dt.
Mean occupancy time (cnt'd)
Using the uniformized process (Δ, P),
  M(T) = ∫_0^T sum_{n >= 0} exp(-t/Δ) (t/Δ)^n / n! P^n dt = sum_{n >= 0} P^n ∫_0^T exp(-t/Δ) (t/Δ)^n / n! dt.
Note that
  ∫_0^T exp(-t/Δ) (t/Δ)^n / n! dt = Δ P(Y_{n+1} <= T) = Δ (1 - F_n(T)),
where Y_{n+1}, a sum of n + 1 independent exp(1/Δ) random variables, is the epoch of the (n+1)-st transition, and F_n(T) = sum_{k=0}^{n} exp(-T/Δ) (T/Δ)^k / k! is the probability that the Poisson number of transitions in [0, T] is at most n. We find that
  M(T) = Δ sum_{n >= 0} (1 - F_n(T)) P^n.
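This series can be cross-checked against direct numerical integration of P(t) = exp(Qt). A sketch with an invented generator; the row sums of M(T) must equal T, since the chain spends total time T somewhere:

```python
import math
import numpy as np
from scipy.linalg import expm

def mean_occupancy(Q, T, n_max=200):
    """M(T) = Delta * sum_n (1 - F_n(T)) P^n via the uniformized chain."""
    delta = 1.0 / (-np.diag(Q)).max()
    P = np.eye(Q.shape[0]) + delta * Q
    a = T / delta
    M = np.zeros_like(Q)
    Pn = np.eye(Q.shape[0])
    pmf = math.exp(-a)
    cdf = pmf                                  # running Poisson cdf F_n(T)
    for n in range(n_max):
        M += delta * (1.0 - cdf) * Pn
        Pn = Pn @ P
        pmf *= a / (n + 1)
        cdf += pmf
    return M

Q = np.array([[-3.0, 2.0, 1.0],
              [ 1.0, -4.0, 3.0],
              [ 2.0, 2.0, -4.0]])
T = 3.0

# Cross-check against trapezoidal integration of P(t) = exp(Qt) over [0, T].
ts = np.linspace(0, T, 3001)
vals = np.array([expm(Q * t) for t in ts])
M_quad = ((vals[:-1] + vals[1:]) / 2 * np.diff(ts)[:, None, None]).sum(axis=0)
assert np.allclose(mean_occupancy(Q, T), M_quad, atol=1e-4)
assert np.allclose(mean_occupancy(Q, T).sum(axis=1), T)
```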
Cumulative distribution of occupancy time
Let O(T) denote the total sojourn time during [0, T] in a subset of states A. Then, for 0 <= x < T,
  P(O(T) <= x) = sum_{n >= 0} exp(-T/Δ) (T/Δ)^n / n! sum_{k=0}^{n} Q(n, k) sum_{j=k}^{n} C(n, j) (x/T)^j (1 - x/T)^{n-j},
where Q(n, k) is the probability that the uniformized process visits A k times during [0, T] given that it makes n transitions. Proof (see Tijms 2003 for details):
1. Condition on the Poisson number of transitions of the uniformized process being n; given n transitions, the transition epochs are distributed as n independent uniforms on [0, T].
2. The occupancy time is smaller than x if the uniformized process visits A k times out of the n visits and at least k of these transitions happen before x. The former probability is Q(n, k), and the latter is a tail probability of a binomial distribution with success probability x/T.
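The only nontrivial ingredient is Q(n, k), which can be computed by dynamic programming over (state, visit count) pairs. The sketch below is an illustration under stated assumptions: the counting convention (the n + 1 interval-states, including the initial one, are counted) is one reading of the slide, and the generator and subset A are invented; only sanity checks (range and monotonicity of the cdf) are asserted:

```python
import math
import numpy as np

def visit_counts(P, start, A, n):
    """Q(n, .): distribution of the number of interval-states in A, given that
    the uniformized chain P makes n transitions starting in `start`."""
    S = P.shape[0]
    dp = np.zeros((S, n + 2))                 # dp[s, k]: in state s, k visits so far
    dp[start, 1 if start in A else 0] = 1.0
    for _ in range(n):
        new = np.zeros_like(dp)
        for k in range(n + 1):
            mass = dp[:, k] @ P               # one transition of the chain
            for s in range(S):
                new[s, k + (1 if s in A else 0)] += mass[s]
        dp = new
    return dp.sum(axis=0)                     # vector of length n + 2

def occupancy_cdf(Qgen, start, A, T, x, n_max=40):
    """P(O(T) <= x), 0 <= x < T: Poisson mixture of binomial tail probabilities."""
    delta = 1.0 / (-np.diag(Qgen)).max()
    P = np.eye(Qgen.shape[0]) + delta * Qgen
    a, p = T / delta, x / T
    total, pmf = 0.0, math.exp(-a)            # Poisson(a) pmf at n = 0
    for n in range(n_max):
        Qk = visit_counts(P, start, A, n)
        inner = sum(Qk[k] * sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
                                for j in range(k, n + 1))
                    for k in range(n + 1))    # k = n+1 means O(T) = T > x
        total += pmf * inner
        pmf *= a / (n + 1)
    return total

Qgen = np.array([[-3.0, 2.0, 1.0],
                 [ 1.0, -4.0, 3.0],
                 [ 2.0, 2.0, -4.0]])
A, T = {1, 2}, 2.0
xs = np.linspace(0.0, T * 0.99, 21)
cdf = [occupancy_cdf(Qgen, 0, A, T, x) for x in xs]
assert all(0.0 <= c <= 1.0 + 1e-9 for c in cdf)
assert all(c2 >= c1 - 1e-9 for c1, c2 in zip(cdf, cdf[1:]))
```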
Moments of occupancy time
Proposition: the m-th moment of O(T) is given by
  E[O(T)^m] = m! Δ^m sum_{n >= 0} exp(-T/Δ) (T/Δ)^{n+m} / (n+m)! sum_{k=1}^{n+1} C(k+m-1, m) Q(n, k).
Proposition: given that the chain starts in equilibrium, the second moment of the occupancy time in the subset A during [0, T] satisfies
  E[O(T)^2] = 2 Δ^2 sum_{n >= 0} ( sum_{j=n+2}^{∞} (j - n - 1) exp(-T/Δ) (T/Δ)^j / j! ) π^0 P^n e^0,
where π_i is the steady-state probability of the Markov chain in state i, π^0 is the row vector with i-th entry equal to π_i if i ∈ A and zero otherwise, and e^0 is the column vector with i-th entry equal to 1 if i ∈ A and zero otherwise.
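The first proposition can be implemented with the same Q(n, k) dynamic program and cross-checked: for m = 1 it must reproduce the mean occupancy time sum_{j in A} ∫_0^T p_ij(t) dt. A sketch with an invented generator and subset (the visit-counting convention is the same assumption as before):

```python
import math
import numpy as np
from scipy.linalg import expm

def visit_counts(P, start, A, n):
    """Q(n, .): distribution of the number of interval-states in A after n
    transitions of the chain P started in `start`."""
    S = P.shape[0]
    dp = np.zeros((S, n + 2))
    dp[start, 1 if start in A else 0] = 1.0
    for _ in range(n):
        new = np.zeros_like(dp)
        for k in range(n + 1):
            mass = dp[:, k] @ P
            for s in range(S):
                new[s, k + (1 if s in A else 0)] += mass[s]
        dp = new
    return dp.sum(axis=0)

def occupancy_moment(Qgen, start, A, T, m, n_max=60):
    """E[O(T)^m] = m! Delta^m sum_n e^{-T/D} (T/D)^{n+m}/(n+m)!
                   * sum_{k=1}^{n+1} C(k+m-1, m) Q(n, k)."""
    delta = 1.0 / (-np.diag(Qgen)).max()
    P = np.eye(Qgen.shape[0]) + delta * Qgen
    a = T / delta
    total = 0.0
    for n in range(n_max):
        Qk = visit_counts(P, start, A, n)
        inner = sum(math.comb(k + m - 1, m) * Qk[k] for k in range(1, n + 2))
        total += math.exp(-a) * a**(n + m) / math.factorial(n + m) * inner
    return math.factorial(m) * delta**m * total

Qgen = np.array([[-3.0, 2.0, 1.0],
                 [ 1.0, -4.0, 3.0],
                 [ 2.0, 2.0, -4.0]])
A, T, start = {1, 2}, 2.0, 0

# Cross-check m = 1 against E[O(T)] = sum_{j in A} int_0^T p_ij(t) dt.
ts = np.linspace(0, T, 2001)
vals = np.array([expm(Qgen * t)[start] for t in ts])
mean_direct = ((vals[:-1] + vals[1:]) / 2 * np.diff(ts)[:, None]).sum(axis=0)[list(A)].sum()
assert abs(occupancy_moment(Qgen, start, A, T, 1) - mean_direct) < 1e-3
```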
References
V. G. Kulkarni, Modeling, Analysis, Design, and Control of Stochastic Systems. Springer, New York, 1999.
H. C. Tijms, A First Course in Stochastic Models. Wiley, New York, 2003.
http://www.win.tue.nl/~iadan/algoritme/