Model Building For ARIMA Time Series

Model building for an ARIMA time series consists of three steps:
1. Identification
2. Estimation
3. Diagnostic checking

ARIMA Model Building
Identification: determination of p, d and q

To identify an ARIMA(p, d, q) model we make extensive use of the autocorrelation function {ρ_h : -∞ < h < ∞} and the partial autocorrelation function {φ_kk : 0 ≤ k < ∞}.

The sample autocovariance function {C_x(h) : -∞ < h < ∞} and the sample autocorrelation function {r_h : -∞ < h < ∞} are defined below:

C_x(h) = (1/T) Σ_{t=1}^{T-|h|} (x_t - x̄)(x_{t+|h|} - x̄) and r_h = C_x(h) / C_x(0).

The divisor is T; some statisticians use T - h. (If T is large, both give approximately the same results.)
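A minimal sketch of these definitions in Python (numpy only, divisor T as on the slide; `sample_acf` is an illustrative name, not from the slides):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocovariances C_x(h), h = 0..max_lag (divisor T), and r_h."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    xbar = x.mean()
    C = np.array([np.sum((x[:T - h] - xbar) * (x[h:] - xbar)) / T
                  for h in range(max_lag + 1)])
    return C, C / C[0]   # r_h = C_x(h) / C_x(0)
```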

It can be shown that

Var(r_k) ≈ (1/T)(1 + 2(ρ_1² + … + ρ_q²)) for k > q.

Thus, assuming ρ_k = 0 for k > q,

s.e.(r_k) ≈ √[(1/T)(1 + 2(r_1² + … + r_q²))].

The sample partial autocorrelation function is defined by: φ̂_kk is the last coefficient obtained when the Yule-Walker equations of order k are solved with the sample autocorrelations r_1, …, r_k in place of ρ_1, …, ρ_k.

It can be shown that, for an AR(p) series,

Var(φ̂_kk) ≈ 1/T for k > p,

so s.e.(φ̂_kk) ≈ 1/√T can be used to judge which sample partial autocorrelations are negligible.
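The sample PACF can be computed from the sample ACF by solving, for each k, the k×k Yule-Walker system in the r's and keeping the last coefficient. A sketch using scipy's Toeplitz solver (it reuses the hypothetical `sample_acf` helper above):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def sample_pacf(x, max_lag):
    """phi_kk for k = 1..max_lag, via the Yule-Walker characterisation."""
    _, r = sample_acf(x, max_lag)
    pacf = []
    for k in range(1, max_lag + 1):
        phi = solve_toeplitz(r[:k], r[1:k + 1])  # R_k * phi = (r_1..r_k)'
        pacf.append(phi[-1])                     # last coefficient is phi_kk
    return np.array(pacf)
```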

Identification of an ARIMA process
Determining the values of p, d, q

• Recall that if a process is non-stationary, one of the roots of the autoregressive operator is equal to one.
• This will cause the limiting value of the autocorrelation function to be non-zero.
• Thus a non-stationary process is identified by an autocorrelation function that does not tail away to zero quickly or cut off after a finite number of steps.

To determine the value of d. Note: the autocorrelation function of a stationary ARMA time series satisfies the difference equation

ρ_h = β_1 ρ_{h-1} + β_2 ρ_{h-2} + … + β_p ρ_{h-p} for h > q.

The solution to this equation has the general form

ρ_h = c_1 (1/r_1)^h + c_2 (1/r_2)^h + … + c_p (1/r_p)^h,

where r_1, r_2, …, r_p are the roots of the polynomial β(x) = 1 - β_1 x - β_2 x² - … - β_p x^p.

For a stationary ARMA time series the roots r_1, r_2, …, r_p have absolute value greater than 1; therefore ρ_h → 0 as h → ∞. If the ARMA time series is non-stationary, some of the roots r_1, r_2, …, r_p have absolute value equal to 1, and ρ_h does not die out as h → ∞.

[Figure: sample ACFs of a stationary series (tails away to zero quickly) and a non-stationary series (decays very slowly)]

• If the process is non-stationary, then first differences of the series are computed to determine if that operation results in a stationary series.
• The process is continued until a stationary time series is found.
• This then determines the value of d (a sketch of an automated version of this check follows).
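A minimal sketch of this differencing loop, assuming the augmented Dickey-Fuller test from statsmodels as a stand-in for the visual ACF check the slides use:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def choose_d(x, max_d=2, alpha=0.05):
    """Difference until a unit root is rejected; return the implied d."""
    y = np.asarray(x, dtype=float)
    for d in range(max_d + 1):
        if adfuller(y)[1] < alpha:   # p-value small: stationary at this d
            return d
        y = np.diff(y)               # move from the d-th to the (d+1)-th difference
    return max_d                     # fall back to the largest d tried
```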

Identification
Determination of the values of p and q.

To determine the values of p and q we use the graphical properties of the autocorrelation function and the partial autocorrelation function. Again recall the following: the ACF of an MA(q) series cuts off after lag q, while the PACF of an AR(p) series cuts off after lag p.

More specifically, some typical patterns of the autocorrelation function and the partial autocorrelation function for some important ARMA series are as follows.

Patterns of the ACF and PACF of AR(2) time series: in the shaded region the roots of the AR operator are complex.

Patterns of the ACF and PACF of MA(2) time series: in the shaded region the roots of the MA operator are complex.

Patterns of the ACF and PACF of ARMA(1, 1) time series. Note: the patterns exhibited by the ACF and the PACF give important and useful information relating to the values of the parameters of the time series.

Summary: to determine p and q, use the following table.

            ACF                PACF
MA(q)       Cuts off after q   Tails off
AR(p)       Tails off          Cuts off after p
ARMA(p, q)  Tails off          Tails off

Note: usually p + q ≤ 4. There is no harm in over-identifying the time series (allowing more parameters in the model than necessary; we can always test to determine if the extra parameters are zero).
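In practice these patterns are read off ACF/PACF plots. A sketch using statsmodels' plotting helpers on a simulated AR(1) series (the simulated series is illustrative only):

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima_process import arma_generate_sample

x = arma_generate_sample(ar=[1, -0.7], ma=[1], nsample=300)  # AR(1), beta_1 = 0.7

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(x, lags=40, ax=axes[0])     # should tail off for an AR series
plot_pacf(x, lags=40, ax=axes[1])    # should cut off after lag p = 1
plt.show()
```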

Examples

The data

The data

Possible Identifications
1. d = 0, p = 1, q = 1
2. d = 1, p = 0, q = 1

ACF and PACF for x_t, Δx_t and Δ²x_t (Sunspot Data)

Possible Identification
1. d = 0, p = 2, q = 0

ACF and PACF for x_t, Δx_t and Δ²x_t (IBM Stock Price Data)

Possible Identification
1. d = 1, p = 0, q = 0

Estimation of ARIMA parameters

Preliminary Estimation Using the Method of Moments
Equate sample statistics to population parameters.

Estimation of the parameters of an MA(q) series. The theoretical autocorrelation function in terms of the parameters of an MA(q) process is given by

ρ_h = (α_h + α_1 α_{h+1} + … + α_{q-h} α_q) / (1 + α_1² + … + α_q²) for h = 1, …, q, and ρ_h = 0 for h > q.

To estimate α_1, α_2, …, α_q we solve the system of equations r_h = ρ_h(α_1, …, α_q), h = 1, 2, …, q.

This set of equations is non-linear and generally very difficult to solve. For q = 1 the equation becomes

r_1 = α_1 / (1 + α_1²).

Thus r_1 α_1² - α_1 + r_1 = 0. This equation has the two solutions

α_1 = (1 ± √(1 - 4 r_1²)) / (2 r_1).

One solution will result in the MA(1) time series being invertible (|α_1| < 1); that is the one we keep.
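A sketch of this moment estimator; it solves the quadratic and returns the invertible root (`ma1_moment_estimate` is an illustrative name):

```python
import numpy as np

def ma1_moment_estimate(r1):
    """Solve r1*a^2 - a + r1 = 0; return the invertible root (|a| < 1)."""
    if abs(r1) >= 0.5:
        raise ValueError("no real solution: an MA(1) has |rho_1| < 0.5")
    roots = np.roots([r1, -1.0, r1])
    return roots[np.abs(roots) < 1][0]
```

For the chemical concentration example later in these notes, r_1 = -0.413 gives -0.53, matching the slide.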

For q = 2 the equations become

r_1 = (α_1 + α_1 α_2) / (1 + α_1² + α_2²) and r_2 = α_2 / (1 + α_1² + α_2²).

Estimation of the parameters of an ARMA(p, q) series. We use a similar technique. Namely: obtain an expression for ρ_h in terms of β_1, β_2, …, β_p; α_1, …, α_q, and set up p + q equations for the estimates of β_1, β_2, …, β_p; α_1, α_2, …, α_q by replacing ρ_h by r_h.

Estimation of the parameters of an ARMA(p, q) series. Example: the ARMA(1, 1) process. The expressions for ρ_1 and ρ_2 in terms of β_1 and α_1 are

ρ_1 = (1 + α_1 β_1)(α_1 + β_1) / (1 + α_1² + 2 α_1 β_1) and ρ_2 = β_1 ρ_1.

Further, Var(x_t) = σ² (1 + α_1² + 2 α_1 β_1) / (1 - β_1²).

Thus the expressions for the estimates of β_1 and σ² are

β̂_1 = r_2 / r_1 and σ̂² = C_x(0)(1 - β̂_1²) / (1 + α̂_1² + 2 α̂_1 β̂_1).

Hence

r_1 = (1 + α_1 β̂_1)(α_1 + β̂_1) / (1 + α_1² + 2 α_1 β̂_1),

or

(β̂_1 - r_1) α_1² + (1 + β̂_1² - 2 r_1 β̂_1) α_1 + (β̂_1 - r_1) = 0.

This is a quadratic equation in α_1 which can be solved.
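A sketch putting the two moment equations together for the ARMA(1, 1) case (illustrative names; the invertible root is kept, as on the next slide):

```python
import numpy as np

def arma11_moment_estimate(r1, r2):
    """beta_1 = r2/r1, then solve
    (b - r1)*a^2 + (1 + b^2 - 2*r1*b)*a + (b - r1) = 0 for alpha_1."""
    b1 = r2 / r1
    roots = np.roots([b1 - r1, 1 + b1**2 - 2*r1*b1, b1 - r1])
    a1 = roots[np.abs(roots) < 1][0]   # keep the invertible solution
    return b1, a1
```

With r_1 = 0.570 and r_2 = 0.495 (the chemical concentration data below) this returns β̂_1 ≈ 0.87 and α̂_1 ≈ -0.48, matching the worked example.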

Example (Chemical Concentration Data): the time series was identified as either an ARIMA(1, 0, 1) time series or an ARIMA(0, 1, 1) series. If we use the first identification, then the series x_t is an ARMA(1, 1) series.

Identifying the series x_t as an ARMA(1, 1) series: the autocorrelation at lag 1 is r_1 = 0.570 and the autocorrelation at lag 2 is r_2 = 0.495. Thus the estimate of β_1 is 0.495/0.570 = 0.87. Also the quadratic equation becomes

0.298 α_1² + 0.764 α_1 + 0.298 = 0,

which has the two solutions -0.48 and -2.08. Again we select as our estimate of α_1 the solution -0.48, resulting in an invertible estimated series.

Since δ = μ(1 - β_1), the estimate of δ can be computed as δ̂ = x̄(1 - β̂_1) = 2.25. Thus the identified model in this case is

x_t = 0.87 x_{t-1} + u_t - 0.48 u_{t-1} + 2.25

If we use the second identification, then the series Δx_t = x_t - x_{t-1} is an MA(1) series with r_1 = -0.413. Thus the estimate of α_1 is

α̂_1 = (1 - √(1 - 4 r_1²)) / (2 r_1) = -0.53.

The estimate α̂_1 = -0.53 corresponds to an invertible time series. This is the solution that we will choose.

The estimate of the parameter μ is the sample mean. Thus the identified model in this case is

Δx_t = u_t - 0.53 u_{t-1} + 0.002, or x_t = x_{t-1} + u_t - 0.53 u_{t-1} + 0.002 (an ARIMA(0, 1, 1) model).

This compares with the other identification:

x_t = 0.87 x_{t-1} + u_t - 0.48 u_{t-1} + 2.25 (an ARIMA(1, 0, 1) model).

Preliminary Estimation of the Parameters of an AR(p) Process

The regression coefficients β_1, β_2, …, β_p and the autocorrelation function ρ_h satisfy the Yule-Walker equations:

ρ_h = β_1 ρ_{h-1} + β_2 ρ_{h-2} + … + β_p ρ_{h-p}, h = 1, 2, …, p,

and

σ² = σ_x² (1 - β_1 ρ_1 - β_2 ρ_2 - … - β_p ρ_p).

The Yule-Walker equations can be used to estimate the regression coefficients β_1, β_2, …, β_p using the sample autocorrelation function, by replacing ρ_h with r_h:

r_h = β̂_1 r_{h-1} + β̂_2 r_{h-2} + … + β̂_p r_{h-p}, h = 1, 2, …, p,

and

σ̂² = C_x(0)(1 - β̂_1 r_1 - β̂_2 r_2 - … - β̂_p r_p).
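Because the coefficient matrix of this system is Toeplitz in the sample autocorrelations, solving it is a one-liner; a sketch reusing the hypothetical `sample_acf` helper from earlier (statsmodels ships an equivalent, statsmodels.regression.linear_model.yule_walker):

```python
from scipy.linalg import solve_toeplitz

def yule_walker_estimate(x, p):
    """AR(p) estimates from the sample Yule-Walker equations."""
    C, r = sample_acf(x, p)
    beta = solve_toeplitz(r[:p], r[1:p + 1])   # R_p * beta = (r_1..r_p)'
    sigma2 = C[0] * (1 - beta @ r[1:p + 1])    # estimated innovation variance
    return beta, sigma2
```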

Example: considering the data in Example 1 (Sunspot Data), the time series was identified as an AR(2) time series. The autocorrelation at lag 1 is r_1 = 0.807 and the autocorrelation at lag 2 is r_2 = 0.429. The equations for the estimators of the parameters of this series are

β̂_1 + 0.807 β̂_2 = 0.807 and 0.807 β̂_1 + β̂_2 = 0.429,

which have solution β̂_1 = 1.321, β̂_2 = -0.637. Since δ = μ(1 - β_1 - β_2), it can be estimated as δ̂ = x̄(1 - β̂_1 - β̂_2) = 14.9.

Thus the identified model in this case is

x_t = 1.321 x_{t-1} - 0.637 x_{t-2} + u_t + 14.9

Maximum Likelihood Estimation of the Parameters of an ARMA(p, q) Series

The method of Maximum Likelihood Estimation selects as estimators of a set of parameters θ_1, θ_2, …, θ_k the values that maximize

L(θ_1, θ_2, …, θ_k) = f(x_1, x_2, …, x_N; θ_1, θ_2, …, θ_k),

where f(x_1, x_2, …, x_N; θ_1, θ_2, …, θ_k) is the joint density function of the observations x_1, x_2, …, x_N. L(θ_1, θ_2, …, θ_k) is called the likelihood function.

It is important to note that finding the values θ_1, θ_2, …, θ_k that maximize L(θ_1, θ_2, …, θ_k) is equivalent to finding the values that maximize

l(θ_1, θ_2, …, θ_k) = ln L(θ_1, θ_2, …, θ_k).

l(θ_1, θ_2, …, θ_k) is called the log-likelihood function.

Again let {u_t : t ∈ T} be identically distributed and uncorrelated with mean zero. In addition assume that each u_t is normally distributed. Consider the time series {x_t : t ∈ T} defined by the equation

(*) x_t = β_1 x_{t-1} + β_2 x_{t-2} + … + β_p x_{t-p} + δ + u_t + α_1 u_{t-1} + α_2 u_{t-2} + … + α_q u_{t-q}

Assume that x_1, x_2, …, x_N are observations on the time series up to time t = N. To estimate the p + q + 2 parameters β_1, …, β_p; α_1, …, α_q; δ, σ² by the method of Maximum Likelihood Estimation we need to find the joint density function of x_1, x_2, …, x_N:

f(x_1, x_2, …, x_N | β_1, …, β_p; α_1, …, α_q; δ, σ²) = f(x | β, α, δ, σ²).

We know that u_1, u_2, …, u_N are independent normal with mean zero and variance σ². Thus the joint density function of u_1, u_2, …, u_N is

g(u_1, u_2, …, u_N; σ²) = g(u; σ²) = (2πσ²)^{-N/2} exp(-(1/(2σ²)) Σ_{t=1}^{N} u_t²).

It is difficult to determine the exact density function of x_1, x_2, …, x_N from this information. However, if we assume that p starting values on the x-process, x* = (x_{1-p}, x_{2-p}, …, x_0), and q starting values on the u-process, u* = (u_{1-q}, u_{2-q}, …, u_0), have been observed, then the conditional distribution of x = (x_1, x_2, …, x_N) given x* and u* can easily be determined.

The system of equations

x_1 = β_1 x_0 + β_2 x_{-1} + … + β_p x_{1-p} + δ + u_1 + α_1 u_0 + α_2 u_{-1} + … + α_q u_{1-q}
x_2 = β_1 x_1 + β_2 x_0 + … + β_p x_{2-p} + δ + u_2 + α_1 u_1 + α_2 u_0 + … + α_q u_{2-q}
…
x_N = β_1 x_{N-1} + β_2 x_{N-2} + … + β_p x_{N-p} + δ + u_N + α_1 u_{N-1} + α_2 u_{N-2} + … + α_q u_{N-q}

can be solved for

u_1 = u_1(x, x*, u*; β, α, δ)
u_2 = u_2(x, x*, u*; β, α, δ)
…
u_N = u_N(x, x*, u*; β, α, δ).

(The Jacobian of the transformation is 1.)

Then the joint density of x given x* and u* is given by

f(x | x*, u*) = (2πσ²)^{-N/2} exp(-(1/(2σ²)) Σ_{t=1}^{N} u_t²(x, x*, u*; β, α, δ)).

Let

L(β, α, δ, σ² | x, x*, u*) = f(x | x*, u*) = the "conditional likelihood function".

The "conditional log-likelihood function" is

l(β, α, δ, σ²) = ln L(β, α, δ, σ²) = -(N/2) ln(2πσ²) - (1/(2σ²)) Σ_{t=1}^{N} u_t²(x, x*, u*; β, α, δ).

The values that maximize l(β, α, δ, σ²) are the values that minimize

S(β, α, δ) = Σ_{t=1}^{N} u_t²(x, x*, u*; β, α, δ),

with

σ̂² = S(β̂, α̂, δ̂) / N.

Comment: the minimization of S(β, α, δ) requires an iterative numerical minimization procedure to find the minimizing values, for example:
• Steepest descent
• Simulated annealing
• etc. (a sketch using a general-purpose optimizer follows)
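As a sketch of what such a minimization looks like for an ARMA(1, 1): the residuals follow the recursion u_t = x_t - β_1 x_{t-1} - δ - α_1 u_{t-1}, so S(β, α, δ) can be coded directly and handed to a general-purpose optimizer (Nelder-Mead here, rather than the methods listed above; the zero starting values anticipate the next slides):

```python
import numpy as np
from scipy.optimize import minimize

def css(params, x, x_start=0.0, u_start=0.0):
    """Conditional sum of squares S(beta, alpha, delta) for an ARMA(1,1)."""
    beta, alpha, delta = params
    x_prev, u_prev, s = x_start, u_start, 0.0
    for xt in x:
        ut = xt - beta * x_prev - delta - alpha * u_prev
        s += ut * ut
        x_prev, u_prev = xt, ut
    return s

def fit_arma11_css(x):
    x = np.asarray(x, dtype=float)
    start = [0.1, 0.1, x.mean() * 0.9]            # rough starting guesses
    res = minimize(css, start, args=(x,), method="Nelder-Mead")
    beta, alpha, delta = res.x
    return beta, alpha, delta, res.fun / len(x)   # sigma^2_hat = S / N
```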

Comment: the computation of u_t(x, x*, u*; β, α, δ) for specific parameter values can be achieved by using the forecast equations.

Comment: the minimization of S(β, α, δ) assumes we know the starting values of the time series {x_t : t ∈ T} and {u_t : t ∈ T}, namely x* and u*.

Approaches:
1. Use estimated values, e.g. x* = (x̄, …, x̄) and u* = (0, …, 0).
2. Use forecasting and backcasting equations to estimate the values.

Backcasting: if the time series {x_t : t ∈ T} satisfies the equation

x_t = β_1 x_{t-1} + … + β_p x_{t-p} + δ + u_t + α_1 u_{t-1} + … + α_q u_{t-q},

it can also be shown to satisfy the backward equation

x_t = β_1 x_{t+1} + … + β_p x_{t+p} + δ + v_t + α_1 v_{t+1} + … + α_q v_{t+q}.

Both equations result in a time series with the same mean, variance and autocorrelation function. In the same way that the first equation can be used to forecast into the future, the second equation can be used to backcast into the past.
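A minimal sketch of backcasting for a pure AR(p), applying the backward equation recursively in deviations from the mean (an illustrative helper, not from the slides; for models with an MA part the backcasts involve the backward residuals v_t as well):

```python
import numpy as np

def backcast_ar(x, beta):
    """Backcast x_0, x_{-1}, ..., x_{1-p} for an AR(p) with coefficients beta."""
    xbar = np.mean(x)
    z = list(np.asarray(x, dtype=float) - xbar)         # z[0] = x_1 - xbar
    starts = []
    for _ in range(len(beta)):
        z0 = sum(b * z[i] for i, b in enumerate(beta))  # one step backwards
        z.insert(0, z0)
        starts.append(z0 + xbar)
    return starts    # [x_0, x_{-1}, ..., x_{1-p}]
```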

Approaches to handling starting values of the series {x_t : t ∈ T} and {u_t : t ∈ T}:
1. Initially start with the values x* = (x̄, …, x̄) and u* = (0, …, 0).
2. Estimate the parameters of the model using Maximum Likelihood estimation and the conditional likelihood function.
3. Use the estimated parameters to backcast the components of x*. The backcasted components of u* will still be zero.

4. Repeat steps 2 and 3 until the estimates stabilize.

This algorithm is an application of the E-M algorithm. This general algorithm is frequently used when there are missing values:
• The E stands for Expectation (using a model to estimate the missing values).
• The M stands for Maximization (here, Maximum Likelihood Estimation, the process used to estimate the parameters of the model).

Some examples using:
• Minitab
• Statistica
• S-Plus
• SAS
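As a rough modern analogue of those packages, the estimation step can be reproduced with Python's statsmodels; the order (1, 0, 1) and the coefficients used to simulate the stand-in series below are taken from the chemical concentration example above:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

# Simulated ARMA(1,1) data in place of the real series (illustrative only).
x = arma_generate_sample(ar=[1, -0.87], ma=[1, -0.48], nsample=200) + 17.0

result = ARIMA(x, order=(1, 0, 1), trend="c").fit()   # maximum likelihood fit
print(result.summary())                               # coefficients, diagnostics
```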