Term
Conventions for writing matrices |
|
Definition
[image] When multiplying A (m x n) and B (n x p): [image] |
|
|
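A minimal NumPy sketch of the shape rule above (illustrative matrices, not from the card): multiplying an m x n matrix by an n x p matrix is only defined when the inner dimensions match, and the result is m x p.

    import numpy as np

    A = np.arange(6).reshape(2, 3)       # 2 x 3
    B = np.arange(12).reshape(3, 4)      # 3 x 4

    C = A @ B                            # inner dimensions (3) match, so C is 2 x 4
    print(C.shape)                       # (2, 4)

    # Entry (i, j) is the dot product of row i of A with column j of B
    i, j = 1, 2
    print(C[i, j] == A[i, :] @ B[:, j])  # True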
Term
The transpose of a matrix |
|
Definition
[image] Transposition rules: [image] |
|
|
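The transposition rules themselves are in the missing images; as a sketch, the usual reverse-order rule (AB)^T = B^T A^T and the involution (A^T)^T = A can be checked numerically with illustrative matrices:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((2, 3))
    B = rng.random((3, 4))

    print(np.allclose((A @ B).T, B.T @ A.T))  # reverse-order rule: (AB)^T = B^T A^T
    print(np.allclose(A.T.T, A))              # transposing twice returns the original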
Term
The determinant of a matrix |
|
Definition
2 x 2: [image] 3 x 3: [image] |
|
|
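A sketch of the 2 x 2 formula (ad - bc) and a 3 x 3 cofactor expansion along the first row, checked against np.linalg.det on an illustrative matrix:

    import numpy as np

    def det2(m):
        # ad - bc for [[a, b], [c, d]]
        return m[0, 0] * m[1, 1] - m[0, 1] * m[1, 0]

    def det3(m):
        # cofactor expansion along the first row
        return (m[0, 0] * det2(m[np.ix_([1, 2], [1, 2])])
                - m[0, 1] * det2(m[np.ix_([1, 2], [0, 2])])
                + m[0, 2] * det2(m[np.ix_([1, 2], [0, 1])]))

    M = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
    print(det3(M), np.linalg.det(M))   # both give 18 (up to floating-point error)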
Term
Properties of the determinant |
|
Definition
|
|
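The properties themselves are not listed on this card; as a sketch, a few standard ones (det(AB) = det(A)det(B), det(A^T) = det(A), and det(kA) = k^n det(A) for an n x n matrix) can be checked numerically on illustrative matrices:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((3, 3))
    B = rng.random((3, 3))
    k, n = 2.0, 3

    print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # multiplicative
    print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                       # unchanged by transposition
    print(np.isclose(np.linalg.det(k * A), k**n * np.linalg.det(A)))              # scaling A by k scales det by k^n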
Term
Matrices as transformations of the basis vectors |
|
Definition
[image] It is clear that the three columns of matrix A are the result of transforming the unit basis vectors. Therefore, to determine a matrix we need to find where the three basis vectors map to under the transformation. |
|
|
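A quick numerical check of the statement above, using an illustrative matrix: applying A to each unit basis vector returns the corresponding column of A.

    import numpy as np

    A = np.array([[1., 2., 0.],
                  [0., 3., 1.],
                  [4., 0., 5.]])
    I = np.eye(3)

    for i in range(3):
        e_i = I[:, i]                          # i-th unit basis vector
        print(np.allclose(A @ e_i, A[:, i]))   # A e_i is exactly column i of A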
Term
Rotation of a vector in 2D derivation |
|
Definition
[image] The rotation is positive in the anticlockwise direction. |
|
|
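The derivation itself is in the missing image; assuming the usual convention stated above (positive angle = anticlockwise), the standard argument tracks where the two basis vectors are sent:

    \mathbf{e}_1 = \begin{pmatrix}1\\0\end{pmatrix} \mapsto \begin{pmatrix}\cos\theta\\ \sin\theta\end{pmatrix},
    \qquad
    \mathbf{e}_2 = \begin{pmatrix}0\\1\end{pmatrix} \mapsto \begin{pmatrix}-\sin\theta\\ \cos\theta\end{pmatrix}
    \quad\Rightarrow\quad
    R(\theta) = \begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}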
Term
Rotation of a vector in 2D |
|
Definition
[image] Therefore, a clockwise rotation would be the inverse of this matrix. [image] This leads to the fact that: [image] A matrix satisfying this is known as an orthogonal matrix. Furthermore, [image], making it a 'proper orthogonal' matrix, or a 'rotation' matrix. |
|
|
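A NumPy sanity check of the claims above, assuming R is the standard anticlockwise 2D rotation matrix: its inverse equals its transpose (orthogonal), and its determinant is +1 (proper orthogonal, i.e. a rotation matrix).

    import numpy as np

    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    print(np.allclose(R.T @ R, np.eye(2)))     # R^T R = I, so R^{-1} = R^T (orthogonal)
    print(np.allclose(np.linalg.inv(R), R.T))  # the clockwise rotation is simply R^T = R(-theta)
    print(np.isclose(np.linalg.det(R), 1.0))   # det = +1: 'proper orthogonal' / rotation matrix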
Term
|
Definition
Let Q represent an arbitrary rotation about some axis. Q is determined by considering the effect on the basis vectors. [image] |
|
|
Term
Rotation about the y-axis |
|
Definition
When deciding the direction of an axis rotation (the direction [image] rotates), use the right-hand rule: point the axis being rotated about towards you, then rotate the other two axes anticlockwise (x -> y, y -> z, z -> x). [image] |
|
|
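The matrix in the missing image is presumably the standard rotation about the y-axis; assuming the convention described above (a positive angle sends z towards x), it takes the form sketched below:

    import numpy as np

    def rot_y(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[ c, 0., s],
                         [0., 1., 0.],
                         [-s, 0., c]])

    theta = np.pi / 2
    z_axis = np.array([0., 0., 1.])
    print(np.round(rot_y(theta) @ z_axis, 6))  # a quarter turn about y sends z to +x: [1. 0. 0.]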
Term
Orthonormal vectors and orthogonal matrices |
|
Definition
By the very nature of rotations, it is clear that the vectors q1, q2 and q3 are themselves unit vectors and mutually perpendicular.
Such a set of vectors is said to be orthonormal, and a matrix whose columns form such a set is called an orthogonal (orthonormal) matrix. |
|
|
Term
|
Definition
[image] This is because: [image] |
|
|
Term
Change of basis (components) for a vector |
|
Definition
The basis vectors of the coordinate system are rotated [image] by a matrix R. Relative to the new basis, a vector 'a' from the original system can be said to have rotated in the opposite direction [image]. [image] |
|
|
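A numerical sketch of the statement above, assuming the rotated basis vectors are the columns of R: the same physical vector a has components R^T a (equivalently R^{-1} a) with respect to the new basis, i.e. it appears rotated the opposite way.

    import numpy as np

    theta = 0.5
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])   # rotates the basis anticlockwise by theta

    a = np.array([1.0, 2.0])          # components of a in the original basis
    a_new = R.T @ a                   # components of the same vector in the rotated basis

    # Re-expressing the new components in the original basis recovers a
    print(np.allclose(R @ a_new, a))  # True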
Term
Transformation 'matrices' |
|
Definition
|
|
Term
Applications of this transformation of a matrix |
|
Definition
|
|
Term
Eigenvalues and eigenvectors |
|
Definition
Eigenvectors are the vectors whose direction does not change after a transformation has occurred; only their magnitude changes. [image] A non-zero vector that obeys this relation is an eigenvector of A, and the scalar [image] is the corresponding eigenvalue. The eigenvalue is the scale factor applied to the magnitude of x, and the eigenvector x gives the direction. 1. The eigenvalue can be 0, but the eigenvector cannot be the zero vector. 2. If x is an eigenvector of A, then so is kx for any non-zero value of k (only the direction matters). 3. Usually, eigenvectors are normalised, and they are represented as u after normalisation. |
|
|
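A short NumPy illustration of the relation on this card, A x = λ x, using an illustrative matrix:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])

    eigenvalues, eigenvectors = np.linalg.eig(A)     # columns of `eigenvectors` are the eigenvectors
    for lam, x in zip(eigenvalues, eigenvectors.T):
        print(np.allclose(A @ x, lam * x))           # A x = lambda x holds for each pair
        print(np.isclose(np.linalg.norm(x), 1.0))    # NumPy returns them already normalised (the 'u' form)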
Term
Determining the eigenvalues (the characteristic equation) |
|
Definition
[image] This equation can only have a non-trivial solution when the matrix (A - [image]I) maps the (non-zero) vector x to the zero vector. This can only happen if the determinant of the mapping is zero, i.e. it has no inverse. [image] Expanding this determinant will always give a polynomial of order n, often referred to as the characteristic equation. [image] This polynomial will always have n solutions (some of which may be complex). [image] |
|
|
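As a sketch, for a 2 x 2 matrix the characteristic equation det(A - λI) = 0 expands to λ^2 - tr(A)λ + det(A) = 0; its roots agree with NumPy's eigenvalue solver (illustrative matrix):

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])

    # 2 x 2 characteristic equation: lambda^2 - tr(A) lambda + det(A) = 0
    coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
    roots = np.roots(coeffs)

    print(np.sort(roots))                  # [1. 3.]
    print(np.sort(np.linalg.eigvals(A)))   # the same eigenvalues from the built-in solver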
Term
Determining the eigenvectors |
|
Definition
For each eigenvalue [image], we need to find the corresponding eigenvector [image] that satisfies (A - [image]I)x = 0. [image] Plug in each eigenvalue. Delete one row of the matrix, and set one of the components of the eigenvector to 1 (you can do this because only the direction of the eigenvector matters). Then solve for the remaining components of the eigenvector, and finally normalise it. [image] |
|
|
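A worked sketch of the recipe above for the illustrative matrix A = [[2, 1], [1, 2]] and its eigenvalue λ = 3: (A - 3I)x = 0 reduces to -x1 + x2 = 0, so setting x1 = 1 forces x2 = 1, and normalising gives u = (1, 1)/sqrt(2).

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])
    lam = 3.0                              # one eigenvalue of A

    M = A - lam * np.eye(2)                # (A - lambda I); its rows are linearly dependent
    # Keep only the first row: M[0,0]*x1 + M[0,1]*x2 = 0. Set x1 = 1 (the scale is free) and solve for x2.
    x1 = 1.0
    x2 = -M[0, 0] * x1 / M[0, 1]
    x = np.array([x1, x2])

    u = x / np.linalg.norm(x)              # normalise
    print(u, np.allclose(A @ u, lam * u))  # [0.7071 0.7071] True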
Term
Symmetric matrices |
|
Definition
A matrix A is said to be symmetric if [image].
1. If S is a real, symmetric matrix then there is a complete set of real eigenvalues and eigenvectors for S.
2. If S is a real, symmetric matrix then the eigenvectors of S are orthogonal.
It can also be said that the eigenvectors are orthogonal with respect to S. If the matrix is not symmetric it can have complex eigenvalues and its eigenvectors are unlikely to be orthogonal.
An antisymmetric matrix is defined as: [image] |
|
|
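A quick check of properties 1 and 2 on an illustrative real symmetric matrix, using NumPy's symmetric eigensolver: the eigenvalues come out real and the eigenvectors are mutually orthogonal.

    import numpy as np

    S = np.array([[4., 1., 2.],
                  [1., 3., 0.],
                  [2., 0., 5.]])             # real and symmetric: S == S.T

    eigenvalues, U = np.linalg.eigh(S)       # eigh is the solver for symmetric/Hermitian matrices
    print(np.isrealobj(eigenvalues))         # property 1: the eigenvalues are real
    print(np.allclose(U.T @ U, np.eye(3)))   # property 2: the eigenvectors are orthonormal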
Term
Defective matrices |
|
Definition
Not all n x n matrices have n linearly independent eigenvectors. If an eigenvalue happens to be a repeated root of the characteristic equation, it is not always possible to find n linearly independent eigenvectors, making the matrix a 'defective matrix'. |
|
|
Term
Diagonalisation of a matrix |
|
Definition
[image] The condition for the existence of [image] is that det(U) [image] 0, which requires all the eigenvectors to be linearly independent. If, in addition, matrix A is symmetric, then [image]. [image] |
|
|
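A sketch of the relations presumably shown in the images: with U built from the eigenvectors (as columns) and Lambda the diagonal matrix of eigenvalues, A = U Lambda U^{-1}, and U^{-1} reduces to U^T when A is symmetric. The matrix is illustrative.

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])                              # symmetric, so U will be orthogonal

    eigenvalues, U = np.linalg.eig(A)                     # columns of U are the eigenvectors
    Lambda = np.diag(eigenvalues)

    print(not np.isclose(np.linalg.det(U), 0.0))          # det(U) != 0: eigenvectors linearly independent
    print(np.allclose(A, U @ Lambda @ np.linalg.inv(U)))  # A = U Lambda U^{-1}
    print(np.allclose(np.linalg.inv(U), U.T))             # symmetric A: U^{-1} = U^T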
Term
Understanding diagonalisation |
|
Definition
It can be said that [image] is the matrix S expressed in the new coordinate system created when the unit basis vectors are rotated to align with the directions of the eigenvectors. Suppose the matrix R maps our original coordinate system to the coordinate system aligned with [image] ([image], where S' is the transformation expressed in the rotated coordinate system). Then R [image] U, and therefore: [image] |
|
|
Term
Repeated multiplication by a matrix |
|
Definition
Let A be a 3 x 3 matrix whose eigenvectors are linearly independent (but not necessarily orthogonal). [image] Hence, [image] has the same eigenvectors as A, but with the eigenvalues raised to the power of n. |
|
|
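A numerical check of the claim above on an illustrative matrix: A^n = U Lambda^n U^{-1}, i.e. A^n keeps the eigenvectors of A while the eigenvalues are raised to the power n.

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])
    n = 5

    eigenvalues, U = np.linalg.eig(A)
    An_via_eig = U @ np.diag(eigenvalues**n) @ np.linalg.inv(U)    # U Lambda^n U^{-1}

    print(np.allclose(np.linalg.matrix_power(A, n), An_via_eig))   # same as multiplying A by itself n times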
Term
Repeated multiplication to infinity |
|
Definition
Since the eigenvectors of A are linearly independent, any vector in 3D space can be expressed as a linear combination of the three. [image] After repeated multiplication the largest eigenvalue will dominate, so that: [image] |
|
|
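A sketch of the dominance of the largest eigenvalue: repeatedly applying A to an arbitrary starting vector (renormalising as we go) converges to the direction of the eigenvector with the largest |λ|; this is the idea behind power iteration. The matrix is illustrative.

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 2.]])                 # largest eigenvalue 3, eigenvector along (1, 1)

    x = np.array([1.0, 0.0])                 # arbitrary start (a mixture of all the eigenvectors)
    for _ in range(50):
        x = A @ x
        x /= np.linalg.norm(x)               # renormalise so the components stay finite

    print(np.round(x, 6))                    # ~ [0.707107 0.707107], the dominant eigenvector
    print(np.round(x @ A @ x, 6))            # Rayleigh quotient ~ 3.0, the largest eigenvalue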
Term
Generalised eigenvalue problem |
|
Definition
|
|
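The generalised eigenvalue problem is normally written A x = λ B x, reducing to the ordinary problem when B = I. A minimal sketch with illustrative symmetric matrices, using SciPy's generalised symmetric solver:

    import numpy as np
    from scipy.linalg import eigh

    A = np.array([[6., 2.],
                  [2., 3.]])
    B = np.array([[2., 0.],
                  [0., 1.]])                      # both symmetric, B positive definite

    eigenvalues, X = eigh(A, B)                   # solves A x = lambda B x
    for lam, x in zip(eigenvalues, X.T):
        print(np.allclose(A @ x, lam * (B @ x)))  # True for every generalised eigenpair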