Two matrices, A and B, are equal if they have the same number of rows and columns and all their corresponding elements are equal. Multiplying a matrix A by a scalar α multiplies every element of the matrix by that scalar. That is,
C = αA, where ci, j = αai, j | (A)
For example,
$2\begin{bmatrix} 1 & 3 \\ 4 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 6 \\ 8 & 4 \end{bmatrix}$ | (B)
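As a quick numerical check of Equation A, the following sketch scales a small matrix with NumPy. NumPy is used here purely for illustration, and the matrix values are arbitrary rather than taken from the original example.

```python
import numpy as np

A = np.array([[1, 3],
              [4, 2]])

# Scalar multiplication scales every element (Equation A).
print(2 * A)
# [[2 6]
#  [8 4]]
```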
Two (or more) matrices can be added or subtracted only if they have the same number of rows and columns. If both matrices A and B have m rows and n columns, their sum C is an m × n matrix defined as C = A ± B, where ci, j = ai, j ± bi, j. For example,
$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} + \begin{bmatrix} 5 & 0 \\ 1 & 2 \end{bmatrix} = \begin{bmatrix} 6 & 2 \\ 4 & 6 \end{bmatrix}$ | (C)
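A minimal sketch of element-wise addition and subtraction, again using NumPy for illustration with arbitrary 2 × 2 matrices:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 0],
              [1, 2]])

# Element-wise sum and difference; both inputs must have the same shape.
print(A + B)   # [[6 2]
               #  [4 6]]
print(A - B)   # [[-4  2]
               #  [ 2  2]]
```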
For multiplication of two matrices, the number of columns of the first matrix must be equal to the number of rows of the second matrix. If matrix A has m rows and n columns and matrix B has n rows and p columns, their product C is an m × p matrix defined as C = AB, where
$c_{i,j} = \sum_{k=1}^{n} a_{i,k}\, b_{k,j}$ | (D)
For example,
$\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$ | (E)
So you multiply the elements of the first row of A by the corresponding elements of the first column of B and add the results to get the element in the first row and first column of C. Similarly, to calculate the element in the ith row and jth column of C, multiply the elements in the ith row of A by the corresponding elements in the jth column of B, and then add them all. This is shown pictorially in the following figure.
Matrix multiplication, in general, is not commutative; that is, AB ≠ BA. Also, multiplying a matrix by the identity matrix returns the original matrix.
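The following sketch forms the product of Equation D explicitly from row-column sums, checks it against NumPy's built-in matrix product, and confirms that AB and BA generally differ while the identity matrix leaves A unchanged. NumPy and the example values are used only for illustration.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Equation D: c[i, j] = sum over k of a[i, k] * b[k, j]
m, n = A.shape
p = B.shape[1]
C = np.zeros((m, p))
for i in range(m):
    for j in range(p):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))

print(np.allclose(C, A @ B))             # True: the loops match the built-in product
print(np.array_equal(A @ B, B @ A))      # False: matrix multiplication is not commutative
print(np.array_equal(np.eye(2) @ A, A))  # True: the identity matrix leaves A unchanged
```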
If X represents a vector and Y represents another vector, the dot product of these two vectors is obtained by multiplying the corresponding elements of each vector and adding the results. This is denoted by
$X \cdot Y = \sum_{i=1}^{n} x_i y_i$ | (F)
where n is the number of elements in X and Y. Both vectors must have the same number of elements. The dot product is a scalar quantity and has many practical applications.
For example, consider the vectors a = 2i + 4j and b = 2i + j in a two-dimensional rectangular coordinate system, as shown in the following figure.
Then the dot product of these two vectors is given by
a · b = (2)(2) + (4)(1) = 8 | (G)
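The dot product of Equation F is easy to verify numerically for the example vectors a = 2i + 4j and b = 2i + j; the NumPy call below is for illustration only.

```python
import numpy as np

a = np.array([2, 4])
b = np.array([2, 1])

# Equation F: multiply corresponding elements and sum the results.
print(sum(x * y for x, y in zip(a, b)))  # 8
print(np.dot(a, b))                      # 8, the same result
```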
The angle α between these two vectors is given by
α = cos⁻¹((a · b) / (|a| |b|)) | (H)
where |a| denotes the magnitude of a.
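Using Equation H, the angle between the same two example vectors works out to about 36.9 degrees, as the following illustrative sketch shows.

```python
import numpy as np

a = np.array([2, 4])
b = np.array([2, 1])

# Equation H: alpha = arccos( (a . b) / (|a| |b|) )
cos_alpha = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
alpha = np.degrees(np.arccos(cos_alpha))
print(round(alpha, 2))  # approximately 36.87 degrees
```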
As a second application, consider a body on which a constant force a acts, as shown in the following figure. The work W done by a in displacing the body is defined as the product of |d| and the component of a in the direction of displacement d. That is,
W = |d| (|a| cos θ) = a · d | (I)

where θ is the angle between the force a and the displacement d.
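Equation I reduces the work calculation to a dot product. The force and displacement below are arbitrary values chosen for illustration; they are not taken from the original figure.

```python
import numpy as np

force = np.array([3.0, 4.0])         # constant force a (arbitrary example)
displacement = np.array([5.0, 0.0])  # displacement d along the x axis

# Equation I: W = |d| * (component of the force along d) = a . d
W = np.dot(force, displacement)
print(W)  # 15.0
```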
In contrast to the dot product, the outer product of two vectors X and Y is a matrix. The (i, j)th element of this matrix is obtained using the formula
a(i, j) = xi × yj | (J) |
For example,
$\begin{bmatrix} 1 \\ 2 \end{bmatrix} \begin{bmatrix} 3 & 4 & 5 \end{bmatrix} = \begin{bmatrix} 3 & 4 & 5 \\ 6 & 8 & 10 \end{bmatrix}$ | (K)
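A short sketch of Equation J, building the outer product element by element and comparing it with NumPy's np.outer. The vectors are the same arbitrary values used in Equation K above; NumPy is used only for illustration.

```python
import numpy as np

x = np.array([1, 2])
y = np.array([3, 4, 5])

# Equation J: element (i, j) of the outer product is x[i] * y[j].
outer = np.array([[xi * yj for yj in y] for xi in x])
print(outer)
# [[ 3  4  5]
#  [ 6  8 10]]
print(np.array_equal(outer, np.outer(x, y)))  # True
```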
To understand eigenvalues and eigenvectors, start with the classical definition. Given an n × n matrix A, the problem is to find a scalar λ and a nonzero vector x such that
Ax = λx | (L) |
In Equation L, λ is an eigenvalue and x is the eigenvector that corresponds to it. An eigenvector of a matrix is a nonzero vector that does not rotate when the matrix is applied to it. Similar matrices have the same eigenvalues.
Calculating eigenvalues and eigenvectors is a fundamental technique of linear algebra and, once you understand what they represent, allows you to solve many problems, such as systems of differential equations. Consider an eigenvector x of a matrix A as a nonzero vector that does not rotate when it is multiplied by A, except perhaps to point in precisely the opposite direction. x may change length or reverse its direction, but it will not turn sideways. In other words, there is some scalar constant λ such that Equation L holds. The value λ is an eigenvalue of A.
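To see Equation L numerically, the following sketch computes an eigenvalue and eigenvector of a small arbitrary matrix with NumPy and verifies that Ax = λx. This is purely illustrative and is not the interface described later in this section.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Eigen-decomposition: eigenvalues[k] and eigenvectors[:, k] form one pair.
eigenvalues, eigenvectors = np.linalg.eig(A)
lam = eigenvalues[0]
x = eigenvectors[:, 0]

# Equation L: A x = lambda x (the eigenvector is only scaled, not rotated).
print(np.allclose(A @ x, lam * x))  # True
```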
Consider the following example. One of the eigenvectors of the matrix A, where
A = … | (M)
is
x = … | (N)
Multiplying the matrix A by the vector x simply causes x to be expanded by a factor of 6.85. Hence, the value 6.85 is one of the eigenvalues of the matrix A. For any nonzero constant α, the vector αx also is an eigenvector with eigenvalue λ because
A(αx) = αAx = λαx | (O) |
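Equation O can be checked the same way: scaling an eigenvector by any nonzero constant leaves it an eigenvector with the same eigenvalue. This continues the illustrative NumPy sketch above.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
lam, x = eigenvalues[0], eigenvectors[:, 0]

# Equation O: alpha * x is still an eigenvector with the same eigenvalue.
alpha = -4.2
print(np.allclose(A @ (alpha * x), lam * (alpha * x)))  # True
```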
An eigenvector of a matrix thus determines a direction in which the matrix expands or shrinks any vector lying in that direction, and the expansion or contraction factor is given by the corresponding eigenvalue. A generalized eigenvalue problem is to find a scalar λ and a nonzero vector x such that
Ax = λBx | (P) |
where B is another n × n matrix.
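For the generalized problem of Equation P, SciPy's scipy.linalg.eig accepts a second matrix B. The sketch below is illustrative only; the matrices are arbitrary and the call is not one of the functions documented in this section.

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Equation P: A x = lambda B x
eigenvalues, eigenvectors = eig(A, B)
lam = eigenvalues[0]
x = eigenvectors[:, 0]
print(np.allclose(A @ x, lam * (B @ x)))  # True
```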
The following are some important properties of eigenvalues and eigenvectors:

- An n × n matrix has n eigenvalues, counting multiplicity, and they can be complex even when the matrix is real.
- The sum of the eigenvalues equals the trace of the matrix, and their product equals its determinant.
- A real symmetric matrix has real eigenvalues, and its eigenvectors corresponding to distinct eigenvalues are orthogonal.
- A matrix is singular if and only if at least one of its eigenvalues is zero.
Eigenvalue problems have many practical applications in science and engineering. For example, the stability of a structure and its natural modes and frequencies of vibration are determined by the eigenvalues and eigenvectors of an appropriate matrix. Eigenvalues also are very useful in analyzing numerical methods, such as the convergence analysis of iterative methods for solving systems of algebraic equations and the stability analysis of methods for solving systems of differential equations.
SymEigenValueVector and GenEigenValueVector both include the inputMatrix input parameter, which is an n-by-n real square matrix. Use SymEigenValueVector to input a symmetric matrix; use GenEigenValueVector to input a general matrix. A symmetric matrix always has real eigenvalues and eigenvectors. A general matrix has no special property such as symmetry or triangular structure.
Use the outputChoice parameter to specify what to compute. A value of 0 computes only the eigenvalues; a value of 1 computes both the eigenvalues and the eigenvectors. Computing the eigenvectors in addition to the eigenvalues is computationally expensive, so set outputChoice according to what your application actually needs. Also, a symmetric matrix requires less computation than a nonsymmetric matrix, so choose the matrix type carefully.
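As an analogy to the outputChoice and matrix-type trade-offs described above, NumPy likewise provides cheaper values-only routines and separate routines for symmetric matrices. This sketch is illustrative only and does not show the documented SymEigenValueVector or GenEigenValueVector interface.

```python
import numpy as np

S = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric matrix: eigenvalues are guaranteed real

# Values only (cheaper), or values and vectors together (more expensive).
values_only = np.linalg.eigvalsh(S)
values, vectors = np.linalg.eigh(S)

G = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # general matrix: eigenvalues may be complex

general_values = np.linalg.eigvals(G)  # complex pair: 1j and -1j
print(values_only, general_values)
```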