The Least Mean Squares (LMS) algorithm is one of the most widely used and understood adaptive algorithms.

The LMS method minimizes the cost function J(k) = E[e²(k)], where e(k) is the prediction error.

The parameter vector w(k) updates according to the following equation:

w(k+1) = w(k) + μe(k)φ(k)

k is the iteration number, μ is the step-size, which is a positive constant, and φ(k) is the data vector built from the past input data u(k) and output data y(k). φ(k) is defined by the following equation:

φ(k) = [−y(k−1), …, −y(k−n), u(k−1), …, u(k−m)]ᵀ

where n and m are the numbers of past output and input samples in the model.
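The data vector can be assembled directly from the definition above. The following is a minimal sketch; the helper name make_phi and its argument layout are illustrative, not part of any standard API.

```python
import numpy as np

def make_phi(y, u, n, m, k):
    """Build the data vector phi(k) = [-y(k-1), ..., -y(k-n),
    u(k-1), ..., u(k-m)]^T from past outputs and inputs.

    y, u : 1-D arrays of output and input samples
    n, m : number of past output and input samples in the model
    k    : current time index; requires k >= max(n, m)
    """
    past_outputs = [-y[k - i] for i in range(1, n + 1)]  # -y(k-1), ..., -y(k-n)
    past_inputs = [u[k - j] for j in range(1, m + 1)]    # u(k-1), ..., u(k-m)
    return np.array(past_outputs + past_inputs)
```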

The following procedure describes how to implement the LMS algorithm:

  1. Initialize the step-size μ.
  2. Initialize the parameter vector w(k) using a small positive number ε:
     w(0) = [ε, ε, …, ε]ᵀ
  3. Initialize the data vector φ(k):
     φ(0) = [0, 0, …, 0]ᵀ
  4. For k = 1, update the data vector φ(k) based on φ(k−1) and the current input data u(k) and output data y(k).
  5. Compute the predicted response ŷ(k) using the following equation:
     ŷ(k) = φᵀ(k)w(k)
  6. Compute the error e(k) using the following equation:
     e(k) = y(k) − ŷ(k)
  7. Update the parameter vector w(k):
     w(k+1) = w(k) + μe(k)φ(k)
  8. Stop if the error is small enough; otherwise, set k = k + 1 and repeat steps 4–8.
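The procedure above can be sketched as follows. This is a minimal illustration, not a definitive implementation; the function name lms_identify, the model orders n and m, and the stopping tolerance tol are assumptions for the example.

```python
import numpy as np

def lms_identify(u, y, n, m, mu, eps=1e-3, tol=1e-6):
    """Estimate model parameters with the LMS recursion
    w(k+1) = w(k) + mu * e(k) * phi(k)."""
    w = np.full(n + m, eps)           # step 2: w(0) = [eps, ..., eps]^T
    k0 = max(n, m)                    # earliest index with a full data vector
    for k in range(k0, len(y)):
        # step 4: phi(k) = [-y(k-1), ..., -y(k-n), u(k-1), ..., u(k-m)]^T
        phi = np.concatenate([-y[k - n:k][::-1], u[k - m:k][::-1]])
        y_hat = phi @ w               # step 5: predicted response
        e = y[k] - y_hat              # step 6: prediction error
        w = w + mu * e * phi          # step 7: parameter update
        if abs(e) < tol:              # step 8: stop when the error is small
            break
    return w
```

For example, data generated by y(k) = −0.5·(−y(k−1))·(−1) is confusing to write inline, so consider a first-order model y(k) = −a·y(k−1) + b·u(k−1) with a = 0.5, b = 1.0; running lms_identify with n = m = 1 and a small μ drives w toward [0.5, 1.0].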

Selecting the step-size μ is important because it directly affects the rate of convergence and the stability of the LMS algorithm. The convergence rate is usually proportional to μ: the larger the step-size, the faster the convergence. However, a step-size that is too large can cause the LMS algorithm to become unstable. The following inequality describes the valid range of the step-size μ.

0 < μ < μmax

μmax is the maximum step-size that maintains stability of the LMS algorithm and depends on the statistical properties of the stimulus signal. No single fixed step-size μ achieves fast convergence while maintaining stability for every stimulus signal. For better performance, use a self-adjusting step-size, as in the Normalized Least Mean Squares (NLMS) algorithm.
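The NLMS idea can be sketched in one update step. This is an illustrative sketch, assuming the common normalization by the instantaneous power φᵀ(k)φ(k); the names nlms_update, mu_bar, and the regularizer delta are assumptions for the example, not part of the preceding text.

```python
import numpy as np

def nlms_update(w, phi, y_k, mu_bar=0.5, delta=1e-6):
    """One NLMS step. The effective step-size mu_bar / (delta + phi^T phi)
    normalizes the update by the instantaneous signal power, so a fixed
    mu_bar works across input scalings; delta guards against division by
    zero when phi is nearly zero."""
    e = y_k - phi @ w                                   # prediction error
    w_next = w + (mu_bar / (delta + phi @ phi)) * e * phi
    return w_next, e
```

Because the step-size is divided by φᵀ(k)φ(k), rescaling the input by a constant factor leaves the parameter update essentially unchanged, which is the self-adjusting behavior described above.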