Term
matrix |
Definition
a rectangular array of numbers called entries / elements
m rows x n columns
indexed a_ij |
|
|
Term
diagonal matrix |
Definition
all nondiagonal entries are 0 |
|
|
Term
identity matrix (I) |
Definition
diagonal matrix where all diagonal entries are 1 |
|
|
Term
matrices A and B are equal if and only if... |
|
Definition
same size and same entries in each index |
|
|
Term
matrix addition (A + B) |
Definition
A and B must be same size
A + B = [aij + bij] |
|
|
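A minimal sketch of entrywise addition in plain Python (the 2x2 matrices here are made-up examples, and `mat_add` is a hypothetical helper name):

```python
# Entrywise matrix addition: A and B must be the same size.
def mat_add(A, B):
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "sizes must match"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_add(A, B))  # [[6, 8], [10, 12]]
```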
Term
scalar multiplication of a matrix |
|
Definition
kA = [k * aij]: multiply every entry of A by the scalar k |
|
Term
zero matrix (O) |
Definition
matrix with all elements = 0 |
|
|
Term
matrix multiplication (A * B) |
Definition
cols of matrix A must equal rows of matrix B
(m x n) * (n x r) -> produces m x r matrix
A * B = [ai . bj] (entry i,j is the dot product of row i of A with col j of B) |
|
|
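The size rule and dot-product definition above can be sketched in plain Python (`mat_mul` is a hypothetical helper name; the example matrices are made up):

```python
# (m x n) * (n x r) -> m x r; entry [i][j] is the dot product of
# row i of A with column j of B.
def mat_mul(A, B):
    m, n, r = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "cols of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(r)]
            for i in range(m)]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[1, 0],
     [0, 1],
     [1, 1]]             # 3 x 2
print(mat_mul(A, B))     # 2 x 2 result: [[4, 5], [10, 11]]
```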
Term
properties of matrix addition / multiplication |
|
Definition
AB != BA in general (matrix multiplication is not commutative)
A(BC) = (AB)C
A(kB) = k(AB)
A(B+C) = AB + AC
(A+B)C = AC + BC
IA = AI = A
A^3 = A * A * A |
|
|
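A quick check of the non-commutativity property with two made-up 2x2 matrices (`mat_mul` is a hypothetical helper, not part of the card):

```python
# Dot-product definition of matrix multiplication (square case).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]
print(mat_mul(A, B))  # [[2, 1], [1, 1]]
print(mat_mul(B, A))  # [[1, 1], [1, 2]]  -> AB != BA for this pair
```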
Term
transpose of a matrix (A^T) |
Definition
rowi(A^T) = coli(A)
flip the matrix so that the nth row becomes the nth col of the new matrix |
|
|
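The rule rowi(A^T) = coli(A) in plain Python (`transpose` is a hypothetical helper; the matrix is a made-up example):

```python
# Row i of the transpose is column i of the original.
def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
print(transpose(A))      # 3 x 2: [[1, 4], [2, 5], [3, 6]]
```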
Term
a matrix is symmetric iff |
|
Definition
A = A^T
[aij] = [aji]
think reflecting across the diagonal |
|
|
Term
properties of transposing a matrix |
|
Definition
(A^T)^T = A
(kA)^T = k(A^T)
(A^r)^T = (A^T)^r (r > 0)
(A + B)^T = A^T + B^T
(AB)^T = (B^T)(A^T)
|
|
|
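The reversal in (AB)^T = (B^T)(A^T) can be checked numerically with two made-up 2x2 matrices (`mat_mul` and `transpose` are hypothetical helper names):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
lhs = transpose(mat_mul(A, B))                 # (AB)^T
rhs = mat_mul(transpose(B), transpose(A))      # B^T A^T -- note reversed order
print(lhs == rhs)  # True
```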
Term
the inverse of a matrix A |
|
Definition
A is an nxn (square) matrix
A^-1 is an nxn matrix such that
A(A^-1) = (A^-1)A = I
if A is invertible, then A^-1 is unique |
|
|
Term
null space of A, null(A) |
Definition
null(A) is the set of all vectors x such that Ax = 0
(A times the vector x is the zero vector)
or, null(A) = {x | Ax = 0}
observation:
A is invertible iff null(A) = {0 vector} |
|
|
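A sketch of the observation above: if some nonzero x satisfies Ax = 0, then null(A) != {0}, so A cannot be invertible. The matrix and vector here are made-up examples, and `mat_vec` is a hypothetical helper:

```python
# Multiply a matrix by a column vector.
def mat_vec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[1, 2],
     [2, 4]]          # second row is 2x the first
x = [2, -1]           # a nonzero vector
print(mat_vec(A, x))  # [0, 0] -> x is in null(A), so A is not invertible
```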
Term
basis of a subspace S |
Definition
a set of vectors that
1) spans S
2) is linearly independent
e.g. a basis for R2 is {[1 0], [0 1]}: two linearly independent vectors whose linear combinations reach all of R2 |
|
|
Term
finding bases for row(A) and col(A)
(the row and col space of matrix A)
|
|
Definition
row(A) = span({nonzero rows of r.r.e.f. of A})
col(A) = span({columns of *A* that have leading terms in the r.r.e.f. of A})
WARNING: col(A) != col(r.r.e.f.(A)) (generally) |
|
|
Term
is a 2x2 matrix [[a b][c d]] invertible? |
|
Definition
iff ad - bc != 0 (the determinant is nonzero) |
|
|
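A sketch of the 2x2 test, using the standard closed-form inverse (1/(ad-bc)) * [[d, -b], [-c, a]]; `inverse_2x2` is a hypothetical helper name:

```python
# A 2x2 matrix [[a, b], [c, d]] is invertible iff ad - bc != 0;
# then A^-1 = (1/(ad-bc)) * [[d, -b], [-c, a]].
def inverse_2x2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        return None  # determinant is 0 -> not invertible
    return [[d / det, -b / det], [-c / det, a / det]]

print(inverse_2x2([[1, 2], [3, 4]]))  # det = -2 -> [[-2.0, 1.0], [1.5, -0.5]]
print(inverse_2x2([[1, 2], [2, 4]]))  # det = 0  -> None
```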
Term
elementary matrix |
Definition
any matrix obtained by performing one elementary row operation on I
1) swap two rows
2) multiply a row by a scalar
3) add a multiple of one row to another |
|
|
Term
fundamental theorem of invertible matrices |
|
Definition
TFAE (for an nxn matrix A):
a) A is invertible
b) Ax = b has the unique solution x = (A^-1)(b) for all b in R^n
c) Ax = 0 has only the trivial solution x = 0
d) [A | 0] has no free variables: [A | 0] -> [I | 0]
e) there are elementary row operations R1 ... Rk that transform A into I
let Ei = the elementary matrix of Ri
Ek ... E3E2E1A = I, so
A = (E1^-1)(E2^-1)...(Ek^-1), a product of elementary matrices
(each Ei is invertible, and (Ek ... E1)^-1 = E1^-1 ... Ek^-1)
f) rank(A) = n
g) nullity(A) = 0
h) columns of A are linearly independent
i) columns span R^n
j) columns are a basis of R^n
k) rows of A are linearly independent
l) rows span R^n
m) rows are a basis of R^n
|
|
|
Term
a subspace in Rn is a set of vectors S such that |
|
Definition
1) 0 vector is in S
2) if u, v in S, then u + v in S (closed under addition)
3) if u in S, c in R, then cu in S (closed under scalar multiplication)
thus, a subspace S is a set of vectors such that you can't use linear combinations to get out
equivalent: if u, v in S and c, d in R, then cu + dv in S (closed under linear combinations) |
|
|
Term
row space and col space of A |
|
Definition
row(A) = the subspace of R^n spanned by the rows of A
col(A) = the subspace of R^m spanned by the cols of A |
|
|
Term
dimension of a subspace S, dim(S) |
Definition
dim(S) is the number of vectors in a basis of S
therefore, dim(col(A)) == dim(row(A))
because dim(row(A)) = # nonzero rows in rref(A)
= # of leading 1s in rref(A)
= dim(col(A)) |
|
|
Term
rank of a matrix A, rank(A) |
Definition
rank(A) = dim(row(A))
= dim(col(A))
easy corollary:
rank(A) = rank(A^T) |
|
|
Term
nullity of a matrix A, nullity(A) |
Definition
nullity(A) = dim(null(A))
observe:
nullity(A) = # free variables in rref(A) aug. with 0 vector
free variables of rref = basis vectors of null space |
|
|
Term
S is a subspace of Rn with basis B = {v1 ... vk}
For every vector v in S, there is a unique linear combination
c1v1 + c2v2 + ... + ckvk = v
these c's are referred to as what? |
|
Definition
the coordinates of v with respect to B
[c1 ... ck] = coordinate vector of v with respect to B |
|
|
Term
for an nxn matrix A, the determinant of A = ? |
|
Definition
det(A) = a11 det(A11) - a12 det(A12) + ... + (-1)^(n+1) a1n det(A1n)
A1j = matrix A with row 1 and col j "struck" out
= summation (j = 1 to n) a1j * C1j, where C1j = (-1)^(1+j) det(A1j)
^ the cofactor expansion (along row 1) |
|
|
Term
Laplace expansion theorem |
|
Definition
the determinant of a nxn matrix A = [aij] (n >= 2) can be computed as
det(A) = ai1Ci1 + ai2Ci2 + ... + ainCin
(expansion along row i)
or
det(A) = a1jC1j + a2jC2j + ... + anjCnj
(expansion down column j) |
|
|
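The cofactor expansion on the two cards above can be sketched as a recursive function in plain Python (`det` is a hypothetical helper name; it always expands along row 1, which the Laplace expansion theorem says gives the same answer as any other row or column):

```python
# Recursive cofactor expansion along row 1:
# det(A) = sum over j of (-1)^(1+j) * a1j * det(A1j),
# where A1j strikes out row 1 and col j.
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in A[1:]]  # strike row 0, col j
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))             # 1*4 - 2*3 = -2
print(det([[2, 0, 0],
           [5, 3, 0],
           [1, 4, 7]]))                  # lower triangular: 2*3*7 = 42
```

The second example also illustrates the triangular-matrix card: the determinant comes out as the product of the diagonal entries.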
Term
the determinant of an upper/lower triangular matrix is ... |
|
Definition
the product of its diagonal entries |
|
|
Term
properties of determinants |
Definition
a) if A has an all-0 row or col, det(A) = 0
b) if B comes from swapping two rows of A, then det(B) = -det(A)
c) if A has two equal rows, then det(A) = 0
d) if B comes from multiplying a row or col of A by a constant k, then det(B) = k*det(A)
e) if A, B, C are the same except the ith row (or col) of C is the sum of the ith row (or col) of A and B, then det(C) = det(A) + det(B)
f) if B comes from adding a multiple of row i in A to row j (i != j), then det(A) = det(B)
theorems:
if E is an elementary matrix, then det(EB) = det(E)det(B)
A is invertible iff det(A) != 0
|
|
|