Inner Product
If u and v are vectors in R^n, then we regard u and v as n x 1 matrices. The transpose u^T is a 1 x n matrix, and the matrix product u^T v is a 1 x 1 matrix, which we write as a single real number without brackets. The number u^T v is called the inner product of u and v, and is often written as u . v
Essentially this is the dot product familiar from calculus and physics classes
Let u, v, and w be vectors in R^n, and let c be a scalar. Then
a. u . v = v . u
b. (u + v) . w = u . w + v . w
c. (cu) . v = c(u . v) = u . (cv)
d. u . u >= 0, and u . u = 0 if and only if u = 0
The length (or norm) of v is the nonnegative scalar ||v|| = sqrt(v . v). Dividing v by its length gives a unit vector in the same direction, a process called normalizing
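A quick sketch in Python with NumPy (the vectors u and v here are made-up examples) showing the dot product, its symmetry, the length ||v||, and normalization:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

# Inner product u . v, computed as u^T v
print(np.dot(u, v))                   # 11.0
print(np.dot(u, v) == np.dot(v, u))   # True: u . v = v . u

# Length (norm) of v: ||v|| = sqrt(v . v)
length = np.linalg.norm(v)            # 5.0
print(np.isclose(length, np.sqrt(np.dot(v, v))))  # True

# Normalizing v: divide by its length to get a unit vector
unit = v / length
print(np.linalg.norm(unit))           # 1.0
```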
Orthogonal Sets
A set of vectors {u1, ..., up} in R^n is said to be an orthogonal set if each pair of distinct vectors from the set is orthogonal, that is, if ui . uj = 0 whenever i ≠ j.
An orthogonal basis for a subspace W of R^n is a basis for W that is also an orthogonal set.
If S = {u1, ..., up} is an orthogonal set of nonzero vectors in R^n, then S is linearly independent and hence is a basis for the subspace spanned by S.
Let {u1, ..., up} be an orthogonal basis for a subspace W of R^n. For each y in W, the weights in the linear combination y = c1u1 + ... + cpup are given by cj = (y . uj)/(uj . uj) for j = 1, ..., p.
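As a minimal sketch, assuming a made-up orthogonal basis of R^3, the weights come straight from the formula above with no system solving:

```python
import numpy as np

# A made-up orthogonal basis for R^3 and a target vector y
u1, u2, u3 = np.array([1., 1., 0.]), np.array([1., -1., 0.]), np.array([0., 0., 1.])
y = np.array([3., 1., 5.])

# Each weight is c_j = (y . u_j) / (u_j . u_j)
c = [np.dot(y, u) / np.dot(u, u) for u in (u1, u2, u3)]
print(c)                          # [2.0, 1.0, 5.0]

# Reconstruct y from the weights
print(c[0]*u1 + c[1]*u2 + c[2]*u3)   # [3. 1. 5.]
```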
An m x n matrix U has orthonormal columns if and only if U^T U = I.
Let U be an m x n matrix with orthonormal columns, and let x and y be in R^n. Then ||Ux|| = ||x||, (Ux) . (Uy) = x . y, and (Ux) . (Uy) = 0 if and only if x . y = 0.
Orthogonal Projections
The Orthogonal Decomposition Theorem: Let W be a subspace of R^n. Then each y in R^n can be written uniquely in the form y = yhat + z, where yhat is in W and z is in W^⊥. In fact, if {u1, ..., up} is any orthogonal basis of W, then yhat = (y . u1)/(u1 . u1) u1 + ... + (y . up)/(up . up) up, and z = y - yhat.
If y is in W = Span{u1, ..., up}, then proj_W y = y.
The Best Approximation Theorem: Let W be a subspace of R^n, let y be any vector in R^n, and let yhat be the orthogonal projection of y onto W. Then yhat is the closest point in W to y.
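A small sketch of the decomposition y = yhat + z, assuming a made-up orthogonal basis for W; the names proj, y_hat, and z are illustrative, not from the book:

```python
import numpy as np

# Project y onto W = Span{u1, u2} using an orthogonal basis (made-up data)
u1, u2 = np.array([1., 1., 0.]), np.array([1., -1., 0.])
y = np.array([3., 1., 5.])

def proj(y, basis):
    """Orthogonal projection of y onto the span of an orthogonal basis."""
    return sum((np.dot(y, u) / np.dot(u, u)) * u for u in basis)

y_hat = proj(y, [u1, u2])      # [3. 1. 0.] -- the closest point in W to y
z = y - y_hat                  # [0. 0. 5.] -- lies in W-perp
print(np.dot(z, u1), np.dot(z, u2))   # 0.0 0.0, so z is orthogonal to W
print(np.linalg.norm(z))              # 5.0, the distance from y to W
```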
Gram-Schmidt Process
Essentially this builds on orthogonal projections, but with different steps: the process turns a set of linearly independent vectors into an orthogonal (or orthonormal) set that spans the same space as the original set
To save space: given a basis {x1, ..., xp} for a subspace W, the process builds an orthogonal basis {v1, ..., vp} step by step, with v1 = x1 and each subsequent vk equal to xk minus its projections onto the previously constructed v1, ..., v(k-1); see the sketch below.
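A plain sketch in NumPy (the modified variant, which subtracts projections from the running vector for numerical stability and normalizes as it goes); gram_schmidt and the sample matrix X are illustrative names, not the book's:

```python
import numpy as np

def gram_schmidt(X):
    """Turn the linearly independent columns of X into an orthonormal basis
    for the same column space (plain sketch: no pivoting or rank checks)."""
    basis = []
    for x in X.T:                       # iterate over columns of X
        v = x.astype(float)
        for q in basis:
            v = v - np.dot(v, q) * q    # subtract projection onto earlier q's
        basis.append(v / np.linalg.norm(v))
    return np.column_stack(basis)

X = np.array([[1., 0.], [1., 1.], [0., 1.]])
Q = gram_schmidt(X)
print(np.round(Q.T @ Q, 10))   # identity matrix: the columns are orthonormal
```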
QR factorization: if A is an m x n matrix with linearly independent columns, then A can be factored as A = QR, where Q is an m x n matrix whose columns form an orthonormal basis for Col A (for example, from the Gram-Schmidt process) and R is an n x n upper triangular invertible matrix. In spirit it plays a role similar to the LU factorization.
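NumPy ships a QR routine; a minimal check that A = QR for a made-up matrix with linearly independent columns:

```python
import numpy as np

A = np.array([[1., 0.], [1., 1.], [0., 1.]])   # made-up matrix, LI columns
Q, R = np.linalg.qr(A)    # Q has orthonormal columns, R is upper triangular
print(np.allclose(A, Q @ R))   # True: A = QR
print(np.round(R, 4))          # upper triangular factor
```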
Least-Squares Problems
If A is m x n and b is in R^m, a least-squares solution of Ax = b is an xhat in R^n such that ||b - A xhat|| <= ||b - Ax|| for all x in R^n
The set of least-squares solutions of Ax = b coincides with the nonempty set of solutions of the normal equations A^T A x = A^T b
Let A be an m x n matrix. The following statements are logically equivalent:
a. The equation Ax = b has a unique least-squares solution for each b in R^m.
b. The columns of A are linearly independent.
c. The matrix A^T A is invertible.
When these statements are true, the least-squares solution xhat is given by
xhat = (A^T A)^{-1} A^T b
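A sketch comparing the normal-equations formula against NumPy's built-in least-squares solver on a made-up overdetermined system:

```python
import numpy as np

# Overdetermined, made-up system: fit Ax ~ b in the least-squares sense
A = np.array([[1., 1.], [1., 2.], [1., 3.]])
b = np.array([1., 2., 2.])

# Normal equations: xhat = (A^T A)^{-1} A^T b (A has LI columns here)
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)

# Same answer from the library routine, which avoids forming A^T A
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_ls))   # True
```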
Inner Product Spaces
An inner product on a vector space V is a function that, to each pair of vectors u and v in V, associates a real number ⟨u, v⟩ and satisfies the following axioms, for all u, v, and w in V and all scalars c:
1. ⟨u, v⟩ = ⟨v, u⟩
2. ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
3. ⟨cu, v⟩ = c⟨u, v⟩
4. ⟨u, u⟩ >= 0, and ⟨u, u⟩ = 0 if and only if u = 0
Vector Spaces & Subspaces
A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars, subject to ten axioms. The axioms must hold for all vectors u, v, and w in V and for all scalars c and d
A subspace of a vector space V is a subset H of V that has three properties:
a. The zero vector of V is in H.
b. H is closed under vector addition. That is, for each u and v in H, the sum u + v is in H.
c. H is closed under multiplication by scalars. That is, for each u in H and each scalar c, the vector cu is in H.
If v1, ..., vp are in a vector space V, then Span{v1, ..., vp} is a subspace of V.
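One way to test membership in such a subspace numerically: w lies in Span{v1, v2} exactly when the system whose coefficient matrix has columns v1, v2 is consistent. The data below is made up for illustration:

```python
import numpy as np

# Is w in Span{v1, v2}? Ask whether [v1 v2] c = w has a solution.
v1, v2 = np.array([1., 2., 0.]), np.array([0., 1., 1.])
w = np.array([2., 5., 1.])     # made-up test vector

V = np.column_stack([v1, v2])
c, *_ = np.linalg.lstsq(V, w, rcond=None)
print(np.allclose(V @ c, w))   # True here: w = 2*v1 + 1*v2, so w is in the span
```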
Null, Column, Row Spaces and Linear Transformations
The null space of an m x n matrix A, written as Nul A, is the set of all solutions of the homogeneous equation Ax = 0. In set notation,
Nul A = {x : x is in R^n and Ax = 0}
In simpler terms, Nul A is defined implicitly: a vector belongs to Nul A exactly when it satisfies the condition Ax = 0, and no explicit list of its elements is given. Solving Ax = 0 and writing the solution set in parametric vector form produces an explicit description of Nul A.
The null space of an m x n matrix A is a subspace of R^n. Equivalently, the set of all solutions to a system Ax = 0 of m homogeneous linear equations in n unknowns is a subspace of R^n
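A sketch using SymPy (exact arithmetic) to get a basis for Nul A of a made-up matrix; each returned vector satisfies Av = 0:

```python
import sympy as sp

# Nul A as a subspace: sympy returns basis vectors for the solutions of Ax = 0
A = sp.Matrix([[1, 2, 3], [2, 4, 6]])   # made-up rank-1 matrix
for v in A.nullspace():
    print(v.T)          # a basis vector of Nul A
    print((A * v).T)    # [0, 0]: it satisfies Av = 0
```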
The column space of an m x n matrix A, written as Col A, is the set of all linear combinations of the columns of A. If A = [a1 ... an], then
Col A = Span{a1, ..., an}
The column space of an m x n matrix A is a subspace of R^m
A linear transformation T from a vector space V into a vector space W is a rule that assigns to each vector x in V a unique vector T(x) in W, such that
1. T(u + v) = T(u) + T(v) for all u, v in V
2. T(cu) = cT(u) for all u in V and all scalars c
The kernel (or null space) of T is the set of all u in V such that T(u) = 0 (the zero vector in W).
The range of T is the set of all vectors in W of the form T(x) for some x in V.
Linearly Independent Sets and Bases
An indexed set of vectors {v1, v2, ..., vp} in V is said to be linearly independent (LI) if the vector equation c1v1 + c2v2 + ... + cpvp = 0 has ONLY the trivial solution c1 = 0, ..., cp = 0
The set {v1, v2, ..., vp} is said to be linearly dependent (LD) if there is a nontrivial solution to c1v1 + c2v2 + ... + cpvp = 0
An indexed set {v1, ..., vp} of two or more vectors, with v1 ≠ 0, is LD if and only if some vj (with j > 1) is a linear combination of the preceding vectors v1, ..., v(j-1).
The Spanning Set Theorem: Let S = {v1, ..., vp} be a set in a vector space V, and let H = Span{v1, ..., vp}.
a. If one of the vectors in S, say vk, is a linear combination of the remaining vectors in S, then the set formed from S by removing vk still spans H.
b. If H ≠ {0}, some subset of S is a basis for H.
The pivot columns of a matrix A form a basis for Col A
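SymPy's rref also reports which columns are pivot columns, so a basis for Col A of a made-up matrix can be read off directly:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0], [2, 4, 1], [3, 6, 1]])  # made-up; col 2 = 2 * col 1
rref, pivots = A.rref()
print(pivots)                         # (0, 2): indices of the pivot columns
basis = [A.col(j) for j in pivots]    # the pivot columns of A itself
print([list(b) for b in basis])       # a basis for Col A
```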
Coordinate Systems
The Unique Representation Theorem: Let B = {b1, ..., bn} be a basis for a vector space V. Then for each x in V, there exists a unique set of scalars c1, ..., cn such that x = c1b1 + ... + cnbn
Suppose B = {b1, ..., bn} is a basis for a vector space V and x is in V. The coordinates of x relative to the basis B are the weights c1, ..., cn such that x = c1b1 + ... + cnbn
Let B = {b1, ..., bn} be a basis for a vector space V. Then the coordinate mapping x -> [x]_B is a one-to-one linear transformation from V onto R^n
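Finding [x]_B amounts to solving a linear system whose coefficient matrix has the basis vectors as columns; the basis and x below are made up:

```python
import numpy as np

# Coordinates of x relative to a basis B = {b1, b2} of R^2
b1, b2 = np.array([1., 0.]), np.array([1., 2.])
x = np.array([3., 4.])

P_B = np.column_stack([b1, b2])   # change-of-coordinates matrix [b1 b2]
c = np.linalg.solve(P_B, x)       # the B-coordinate vector [x]_B
print(c)                          # [1. 2.], since x = 1*b1 + 2*b2
```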
The Dimension of a Vector Space
If a vector space V is spanned by a finite set, then V is said to be finite-dimensional, and the dimension of V, written as dim V, is the number of vectors in a basis for V. The dimension of the zero vector space {0} is defined to be zero. If V is not spanned by a finite set, then V is said to be infinite-dimensional
If a vector space V has a basis B = {b1, ..., bn}, then any set in V containing more than n vectors must be linearly dependent
If a vector space V has a basis of n vectors, then every basis of V must consist of n vectors
The rank of an m x n matrix A is the dimension of the column space of A, and the nullity of A is the dimension of the null space of A
The Rank Theorem: The dimensions of the column space and the null space of an m x n matrix A satisfy the equation rank A + nullity A = n (the number of columns of A)
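A quick numerical check of the Rank Theorem on a made-up 2 x 3 matrix, using SciPy's null_space helper for the nullity:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.], [2., 4., 6.]])   # made-up 2 x 3 matrix of rank 1
rank = np.linalg.matrix_rank(A)              # dim Col A
nullity = null_space(A).shape[1]             # dim Nul A
print(rank, nullity, rank + nullity == A.shape[1])   # 1 2 True
```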
Change of Basis
Let B = {b1, ..., bn} and C = {c1, ..., cn} be bases of a vector space V. Then there is a unique n x n matrix P_{C<-B} such that [x]_C = P_{C<-B} [x]_B
The columns of P_{C<-B} are the C-coordinate vectors of the vectors in the basis B. That is, P_{C<-B} = [ [b1]_C [b2]_C ... [bn]_C ]
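A sketch computing P_{C<-B} for two made-up bases of R^2: since column j of P_{C<-B} solves C-matrix times column = b_j, one solve with a matrix right-hand side does it:

```python
import numpy as np

# Two made-up bases of R^2, stored as columns of B and C
B = np.column_stack([[1., 0.], [1., 1.]])
C = np.column_stack([[2., 0.], [0., 1.]])

# Column j of P_{C<-B} is [b_j]_C, the solution of C @ col = b_j
P_CB = np.linalg.solve(C, B)

x_B = np.array([3., 1.])        # some B-coordinate vector
x = B @ x_B                     # the actual vector x
x_C = P_CB @ x_B                # convert coordinates: [x]_C = P_{C<-B}[x]_B
print(np.allclose(C @ x_C, x))  # True
```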
Eigenvectors and Eigenvalues
An eigenvector of an n x n matrix A is a nonzero vector x such that Ax=λx for some scalar λ. A scalar λ is called an eigenvalue of A if there is a nontrivial solution x of Ax=λx; such an x is called an eigenvector corresponding to λ.
The eigenvalues of a triangular matrix are the entries on its main diagonal
If v1, ..., vr are eigenvectors that correspond to distinct eigenvalues λ1, ..., λr of an n x n matrix A, then the set {v1, ..., vr} is linearly independent
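A sketch with NumPy's eig on a made-up 2 x 2 matrix with distinct eigenvalues, checking Ax = λx and the independence of the eigenvectors:

```python
import numpy as np

A = np.array([[4., 1.], [2., 3.]])   # made-up matrix
vals, vecs = np.linalg.eig(A)        # columns of vecs are eigenvectors
print(vals)                          # distinct eigenvalues: 5 and 2

# Check Ax = lambda*x for each pair
for lam, x in zip(vals, vecs.T):
    print(np.allclose(A @ x, lam * x))       # True

# Distinct eigenvalues -> the eigenvectors are linearly independent
print(np.linalg.matrix_rank(vecs) == 2)      # True
```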
The Characteristic Equation
A scalar λ is an eigenvalue of an n x n matrix A if and only if λ satisfies the characteristic equation det(A - λI) = 0
Properties of Determinants: Let A and B be n x n matrices.
a. A is invertible if and only if det A ≠ 0
b. det AB = (det A)(det B)
c. det A^T = det A
d. A row replacement operation on A does not change the determinant. A row interchange changes the sign of the determinant. A row scaling also scales the determinant by the same scalar factor.
Some of these properties also appear in the Invertible Matrix Theorem (IMT).
Let A be an n x n matrix. Then A is invertible if and only if:
r. The number 0 is not an eigenvalue of A
If n x n matrices A and B are similar, then they have the same characteristic polynomial and hence the same eigenvalues (with the same multiplicities)
Diagonalization
The Diagonalization Theorem:
An n x n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. In fact, A = PDP^{-1}, with D a diagonal matrix, if and only if the columns of P are n linearly independent eigenvectors of A. In this case, the diagonal entries of D are the eigenvalues of A that correspond, respectively, to the eigenvectors in P
An n x n matrix with n distinct eigenvalues is diagonalizable.
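Continuing the same made-up matrix from above: with distinct eigenvalues it is diagonalizable, and A = PDP^{-1} makes powers of A cheap to compute:

```python
import numpy as np

A = np.array([[4., 1.], [2., 3.]])   # made-up matrix; eigenvalues 5 and 2
vals, P = np.linalg.eig(A)           # distinct eigenvalues => diagonalizable
D = np.diag(vals)

# A = P D P^{-1}, and then A^k = P D^k P^{-1}
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True
print(np.allclose(np.linalg.matrix_power(A, 3),
                  P @ D**3 @ np.linalg.inv(P)))   # True
```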
Let A be an n x n matrix whose distinct eigenvalues are λ1, ..., λp.
a. For 1 ≤ k ≤ p, the dimension of the eigenspace for λk is less than or equal to the multiplicity of the eigenvalue λk.
b. The matrix A is diagonalizable if and only if the sum of the dimensions of the eigenspaces equals n, and this happens if and only if (i) the characteristic polynomial factors completely into linear factors and (ii) the dimension of the eigenspace for each λk equals the multiplicity of λk.
c. If A is diagonalizable and Bk is a basis for the eigenspace corresponding to λk for each k, then the total collection of vectors in the sets B1, ..., Bp forms an eigenvector basis for R^n.
Eigenvectors and Linear Transformations
Let V be a vector space. An eigenvector of a linear transformation T : V -> V is a nonzero vector x in V such that T(x) = λx for some scalar λ. A scalar λ is called an eigenvalue of T if there is a nontrivial solution x of T(x) = λx; such an x is called an eigenvector corresponding to λ.
Diagonal Matrix Representation: Suppose A = PDP^{-1}, where D is a diagonal n x n matrix. If B is the basis for R^n formed from the columns of P, then D is the B-matrix for the transformation x -> Ax
Complex Eigenvalues
The matrix eigenvalue-eigenvector theory already developed for R^n applies equally well to C^n. So a complex scalar λ satisfies det(A - λI) = 0 if and only if there is a nonzero vector x in C^n such that Ax = λx. We call λ a (complex) eigenvalue and x a (complex) eigenvector corresponding to λ.
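A rotation matrix is a classic case with no real eigenvalues; NumPy returns the complex conjugate pair, and Ax = λx still holds over C^n:

```python
import numpy as np

# Rotation by 90 degrees: no real eigenvalues, a complex conjugate pair instead
theta = np.pi / 2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vals, vecs = np.linalg.eig(A)
print(vals)                    # approximately [0+1j, 0-1j]

# The eigenvalue-eigenvector equation holds over C^n
for lam, x in zip(vals, vecs.T):
    print(np.allclose(A @ x, lam * x))   # True
```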
THE ONE AND ONLY Invertible Matrix Theorem