The convergence rate of the Kalman filter is relatively fast, but the implementation is more complex than that of LMS-based algorithms.

The Kalman filter is a linear optimum filter that recursively minimizes the mean squared error.

Recall that the cost function is defined by J(k) = E[e²(k)]. The following procedure lists the steps of the Kalman filter algorithm; a code sketch of the complete procedure follows the list.

  1. Initialize the parametric vector w(k) using a small positive number ε.

    w(0) = [ε, ε, ..., ε]^T

  2. Initialize the data vector φ(k).

    φ(0) = [0, 0, ..., 0]^T

  3. Initialize the k × k matrix P(0).

    P(0) = diag(ε, ε, ..., ε)

  4. For k = 1, update the data vector φ(k) based on φ(k-1) and the current input data u(k) and output data y(k).
  5. Compute the predicted response ŷ(k) by solving the following equation:

    ŷ(k) = φ^T(k)·w(k)

  6. Compute the error e(k) by solving the following equation:

    e(k) = y(k) - ŷ(k)

  7. Update the Kalman gain vector K(k) defined by the following equation:

    K(k) = P(k)·φ(k) / (Q_M + φ^T(k)·P(k)·φ(k))

    Q_M is the variance of the measurement noise, and P(k) is a k × k matrix whose initial value P(0) is defined in step 3.
  8. Update the parametric vector w(k).

    w(k+1) = w(k) + e(k)·K(k)

  9. Update the P(k) matrix.

    P(k+1) = P(k) - K(k)·φ^T(k)·P(k) + Q_P

    Q_P is the correlation matrix of the process noise.
  10. Stop if the error e(k) is small enough; otherwise, set k = k + 1 and repeat steps 4–10.
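
The following sketch is a minimal NumPy implementation of steps 1–10, intended only as an illustration of the recursion. It assumes an FIR regressor, so step 4 simply shifts the newest input sample u(k) into φ(k); an ARX-style regressor would also shift in past output samples. The function name kalman_identify, the scalar measurement-noise variance q_m, and the diagonal process-noise matrix Q_P = q_p·I are assumptions made for this example, not details taken from the procedure above.

    import numpy as np

    def kalman_identify(u, y, n, eps=1e-2, q_m=1e-3, q_p=1e-6, tol=None):
        """Estimate n filter parameters from input u and output y (steps 1-10)."""
        w = np.full(n, eps)        # step 1: w(0) = [eps, ..., eps]^T
        phi = np.zeros(n)          # step 2: phi(0) = [0, ..., 0]^T
        P = eps * np.eye(n)        # step 3: P(0) = diag(eps, ..., eps)
        Q_P = q_p * np.eye(n)      # assumed process-noise correlation matrix

        for k in range(len(u)):
            # step 4: shift the newest input sample into the data vector
            # (FIR regressor assumed; an ARX regressor would also use past outputs)
            phi = np.concatenate(([u[k]], phi[:-1]))

            y_hat = phi @ w                          # step 5: predicted response
            e = y[k] - y_hat                         # step 6: prediction error
            K = (P @ phi) / (q_m + phi @ P @ phi)    # step 7: Kalman gain vector
            w = w + e * K                            # step 8: parameter update
            P = P - np.outer(K, phi @ P) + Q_P       # step 9: covariance update

            if tol is not None and abs(e) < tol:     # step 10: stop when the error is small
                break
        return w

For example, identifying a hypothetical 3-tap FIR system from noisy measurements recovers coefficients close to the true values:

    # identify a 3-tap FIR system (coefficients chosen for illustration)
    rng = np.random.default_rng(0)
    w_true = np.array([0.5, -0.3, 0.2])
    u = rng.standard_normal(500)
    y = np.convolve(u, w_true)[:500] + 0.01 * rng.standard_normal(500)
    print(kalman_identify(u, y, n=3))   # approaches w_true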