You can transform a linear system of equations into one whose solution is simpler to compute by factoring the input matrix into a product of simpler matrices. The LU decomposition technique factors the input matrix as the product of a lower triangular matrix and an upper triangular matrix. Other commonly used factorization methods are Cholesky, QR, and the Singular Value Decomposition (SVD). You can use these factorization methods to solve many matrix problems, such as solving linear systems of equations, inverting a matrix, and finding the determinant of a matrix.
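The library routines themselves are not reproduced here; as a minimal sketch of the factor-then-solve pattern, the example below uses SciPy (an outside library, assumed purely for illustration) to LU-factor a matrix once and then reuse the factors. Reusing the factors is what makes the decomposition approach cheap when you solve against many right-hand sides.

```python
# Illustrative sketch with NumPy/SciPy (not the routines this manual
# documents): solve Ax = b via LU factorization with partial pivoting.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

lu, piv = lu_factor(A)       # factor A = P L U once
x = lu_solve((lu, piv), b)   # reuse the factors for each right-hand side
print(x)                     # solution of A @ x = b, here [1. 2.]
```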
If the input matrix A is symmetric and positive definite, an LU factorization can be computed such that A = UᵀU, where U is an upper triangular matrix. This is the Cholesky factorization. It requires only about half the work and half the storage of the LU factorization of a general matrix by Gaussian elimination. You can determine whether a matrix is positive definite by using CheckPosDef.
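A hedged sketch of the same idea with SciPy (assumed only for illustration, not the interface documented here): cho_factor returns an upper triangular factor by default, matching the A = UᵀU convention above, and a failed factorization is itself a standard positive-definiteness test, comparable in spirit to CheckPosDef.

```python
# Sketch with SciPy (assumed for illustration): Cholesky-factor a
# symmetric positive definite matrix and solve with the factor.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])        # symmetric positive definite
b = np.array([6.0, 5.0])

c, low = cho_factor(A)            # about half the work of a general LU
x = cho_solve((c, low), b)
print(np.allclose(A @ x, b))      # True; here x = [1. 1.]
```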
A matrix Q is orthogonal if its columns are orthonormal, that is, if QᵀQ = I, the identity matrix. The QR factorization technique factors a matrix as the product of an orthogonal matrix Q and an upper triangular matrix R, that is, A = QR. QR factorization is useful for both square and rectangular matrices. A number of algorithms are possible for QR factorization, such as the Householder transformation, the Givens transformation, and the Fast Givens transformation.
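To illustrate QR on a rectangular matrix, the sketch below (NumPy, again assumed purely for demonstration) factors a 3 × 2 matrix and uses the factors to solve an overdetermined least-squares problem, a standard application of QR.

```python
# Sketch with NumPy (assumed): QR-factor a tall matrix and solve
# the least-squares problem min ||A x - b||.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])            # 3 x 2, more rows than columns
b = np.array([1.0, 2.0, 2.0])

Q, R = np.linalg.qr(A)                # A = Q R
print(np.allclose(Q.T @ Q, np.eye(2)))  # columns of Q are orthonormal
x = np.linalg.solve(R, Q.T @ b)       # solve R x = Q^T b (R is triangular)
print(x)                              # least-squares line fit
```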
The Singular Value Decomposition (SVD) method decomposes a matrix into the product of three matrices: A = USVᵀ. U and V are orthogonal matrices. S is a diagonal matrix whose diagonal entries are called the singular values of A. The singular values of A are the nonnegative square roots of the eigenvalues of AᵀA, and the columns of U and V, called the left and right singular vectors, are orthonormal eigenvectors of AAᵀ and AᵀA, respectively. SVD is useful for solving analysis problems such as computing the rank, norm, condition number, and pseudoinverse of a matrix.
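Each of the analysis quantities named above falls directly out of the singular values, as the following NumPy sketch (assumed for illustration) shows.

```python
# Sketch with NumPy (assumed): rank, 2-norm, and condition number
# all come from the singular values of A.
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
U, s, Vt = np.linalg.svd(A)   # s holds singular values, descending

rank = np.sum(s > 1e-12)      # count of nonzero singular values
norm2 = s[0]                  # 2-norm is the largest singular value
cond = s[0] / s[-1]           # 2-norm condition number
print(rank, norm2, cond)      # 2 2.0 2.0
```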
The pseudoinverse of a scalar σ is defined as 1/σ if σ ≠ 0, and zero otherwise. For nonzero scalars, the pseudoinverse therefore coincides with the ordinary inverse. You can now define the pseudoinverse of a diagonal matrix by transposing the matrix and then taking the scalar pseudoinverse of each entry. The pseudoinverse of a general real m × n matrix A, denoted by A†, is then given by
A† = VS†Uᵀ
The pseudoinverse exists regardless of whether the matrix is square or rectangular. If A is square and nonsingular, the pseudoinverse is the same as the usual matrix inverse. You can use PseudoInverse and CxPseudoInverse to compute the pseudoinverse of real and complex matrices, respectively.
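PseudoInverse and CxPseudoInverse themselves are not reproduced here; the sketch below builds A† = VS†Uᵀ from the SVD in NumPy (assumed for illustration) and checks the result against NumPy's own pinv.

```python
# Sketch with NumPy (assumed, in place of PseudoInverse): build
# A-dagger = V S-dagger U^T from the SVD of a rectangular matrix.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])                  # rectangular, full column rank

U, s, Vt = np.linalg.svd(A, full_matrices=False)
s_dag = np.where(s > 1e-12, 1.0 / s, 0.0)   # scalar pseudoinverse per sigma
A_dag = Vt.T @ np.diag(s_dag) @ U.T         # A-dagger = V S-dagger U^T

print(np.allclose(A_dag, np.linalg.pinv(A)))  # True
print(np.allclose(A_dag @ A, np.eye(2)))      # left inverse: full column rank
```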