So, the semester begins. I suppose these notes will be helpful.

Lec 1. Introduction


Some notations.

Lec 2. Algebraic structures and spaces


A recap of linear algebra and discrete math.
Some of it has become vague; I need to find the corresponding English notation for better understanding.

Practice 1. Matrix properties


Begin by recalling some old linear algebra notation.

Practice 2. System of Linear Equations


As the name suggests, this is mainly concerned with linear equations, eigenvalues, eigenvectors, etc.

Lec 3. Matrix invariants and non-invariants


1. Matrix invariants and their properties

Characteristic polynomial

  • $\det(\lambda I - A) = \det(\lambda I - B)$

Algebraic spectra of eigenvalues of similar matrices

Determinants

  • $\det(A) = \det(B) = \prod_i \lambda_i$

Traces

  • $\mathrm{tr}(A) = \mathrm{tr}(B) = \sum_i \lambda_i$

Ranks

2. Matrix non-invariants

Geometric spectra of eigenvectors

Norms of similar matrices

  • Euclidean norm
  • Operator norm
  • Infinity norm (row norm)

Algebraic spectra of singular numbers of similar matrices

Condition numbers of similar matrices
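
A minimal numpy sketch of the above (the matrices A and T below are illustrative; T is any invertible matrix): $B = T^{-1} A T$ is similar to A, so the listed invariants coincide, while the norms and condition numbers generally differ.

```python
import numpy as np

# Hedged sketch: B = T^{-1} A T is similar to A, so the invariants listed above
# coincide, while norms and condition numbers generally do not.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])          # any invertible matrix works
B = np.linalg.inv(T) @ A @ T

# Invariants of similar matrices: eigenvalues, determinant, trace, rank
print(np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B))))
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))
print(np.isclose(np.trace(A), np.trace(B)))
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B))

# Non-invariants: spectral norm and condition number usually differ
print(np.linalg.norm(A, 2), np.linalg.norm(B, 2))
print(np.linalg.cond(A), np.linalg.cond(B))
```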

Lec 4. Singular value decomposition


Definition

The singular value decomposition of a real-valued matrix N of dimension (m×n) is its factorization
$N = U \Sigma V^T$, where U is an orthogonal (m×m) matrix and V is an orthogonal (n×n) matrix; U and V form the left and right singular bases and have the following properties:
$U U^T = U^T U = I$, $V V^T = V^T V = I$
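
A minimal numpy sketch of the definition (the matrix N below is illustrative): compute the SVD and verify the factorization and the orthogonality properties.

```python
import numpy as np

# Hedged sketch of the SVD of a real (m x n) matrix N (N is illustrative).
N = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])            # m = 3, n = 2

U, s, Vt = np.linalg.svd(N)           # N = U @ Sigma @ V^T
Sigma = np.zeros(N.shape)
Sigma[:len(s), :len(s)] = np.diag(s)  # embed singular values into an (m x n) matrix

print(np.allclose(N, U @ Sigma @ Vt))                 # factorization holds
print(np.allclose(U @ U.T, np.eye(3)),                # U orthogonal
      np.allclose(Vt @ Vt.T, np.eye(2)))              # V orthogonal
```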

Condition number

The condition number of an arbitrary (n×n) matrix N is a positive-valued scalar characteristic of this matrix
$C\{N\} = \|N\|_p \|N^{-1}\|_p$
The condition number based on the spectral norm takes the form
$C\{N\} = \|N\|_2 \|N^{-1}\|_2 = a_M(N)\, a_m^{-1}(N)$, where $a_M(N)$ and $a_m(N)$ are the largest and smallest singular values of N.
The latter shows that the condition number of the matrix of the linear algebraic problem $\eta = N \chi$ characterizes the degree of flattening of the ellipsoid obtained by mapping a sphere of unit radius (geometric definition of the condition number).
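
A minimal numpy sketch (the matrix N below is illustrative): the spectral condition number equals the ratio of the largest to the smallest singular value.

```python
import numpy as np

# Hedged sketch: spectral condition number C{N} = sigma_max / sigma_min.
N = np.array([[100.0, 0.0],
              [0.0,   0.1]])

s = np.linalg.svd(N, compute_uv=False)
print(s[0] / s[-1])                                            # sigma_max / sigma_min
print(np.linalg.norm(N, 2) * np.linalg.norm(np.linalg.inv(N), 2))  # ||N||_2 ||N^{-1}||_2
print(np.linalg.cond(N, 2))                                    # same value via numpy
```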

Lec 5. Canonical Forms of Matrices


Definition of similar matrices

Two matrices A and $\bar{A}$ are called similar if they define the same linear operator A with respect to different pairs of bases $(e, f)$ and $(\bar{e}, \bar{f})$.

Definition of canonical form

The canonical form of the matrix of a linear operator is a form of the matrix that is built in accordance with a certain rule.

The use of canonical forms helps solve one of the following possible problems: reducing the amount of matrix calculations by minimizing the number of nonzero matrix elements; facilitating the analysis of the structure of the linear operator’s space; ensuring the computational stability of all matrix procedures by reducing the condition number of the linear operator’s matrix; etc.

Basic canonical forms are built on two algebraic spectra of the original matrix A given in an arbitrary basis.

1. Diagonal canonical form of a matrix

IFF all eigenvalues are real and simple (distinct, not multiple).
$\bar{A} = \mathrm{diag}\{\lambda_i,\ i = 1, \dots, n\}$

2. Block diagonal canonical form of a matrix

Can be constructed when the spectrum of eigenvalues contains non-multiple complex conjugate eigenvalues.
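For a simple complex conjugate pair $\lambda_{1,2} = \alpha \pm i\beta$, the corresponding real diagonal block can be taken (up to the sign convention used in the lectures) as the 2×2 matrix $[\alpha\ \beta; -\beta\ \alpha]$, and $\bar{A}$ consists of such blocks along the diagonal.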

3. Combined block-diagonal form

Can be constructed when the algebraic spectrum of eigenvalues contains only simple real and complex conjugate eigenvalues.
n_r - number of real eigenvalues
n_c - number of complex conjugate eigenvalues

4. Jordan canonical form

Can be constructed when the algebraic spectrum of eigenvalues contains multiple (repeated) real eigenvalues.

The Jordan canonical form can also be constructed for the case when the spectrum of eigenvalues of the matrix contains multiple complex conjugate eigenvalues.
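
A hedged sympy sketch (the matrix A below is illustrative, with the repeated eigenvalue 3): compute the Jordan form and the similarity transformation.

```python
import sympy as sp

# Hedged sketch: sympy returns a transformation P and the Jordan form J
# such that A = P J P^{-1} (A is illustrative, char poly (lambda - 3)^2).
A = sp.Matrix([[2, 1],
               [-1, 4]])

P, J = A.jordan_form()
sp.pprint(J)                                         # single Jordan block [[3, 1], [0, 3]]
print(sp.simplify(P * J * P.inv() - A) == sp.zeros(2, 2))
```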

Canonical Forms Constructed on the Algebraic Spectrum of the Characteristic Polynomial

Also called normal or Frobenius canonical form

Canonical controllable form
$\bar{A} = [0 1 0 … 0; 0 0 1 … 0; …; 0 0 0 …1; -a_n … -a_1]$
Canonical observable form
It is $\bar{A}^T$
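
A minimal numpy sketch (the matrix A below is illustrative): build the canonical controllable (Frobenius) form from the characteristic polynomial coefficients $[1, a_1, \dots, a_n]$ and check that it has the same eigenvalues as A.

```python
import numpy as np

# Hedged sketch: Frobenius form with superdiagonal ones and last row [-a_n, ..., -a_1].
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

coeffs = np.poly(A)                  # [1, a_1, ..., a_n] of det(lambda*I - A)
n = A.shape[0]
A_F = np.zeros((n, n))
A_F[:-1, 1:] = np.eye(n - 1)         # ones on the superdiagonal
A_F[-1, :] = -coeffs[1:][::-1]       # last row: [-a_n, ..., -a_1]

# Same characteristic polynomial, hence the same eigenvalues
print(np.allclose(np.sort(np.linalg.eigvals(A_F)), np.sort(np.linalg.eigvals(A))))
```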

Matrices of similarity transformation to canonical forms

Statement 1. The columns of the matrix M, which transforms a simple-structure matrix A to diagonal form, are the eigenvectors of the matrix A.
Statement 2. The eigenvectors of the canonical controllable form A_F of a simple-structure matrix A are constructed according to the Vandermonde scheme.
Statement 3. Let the matrix A_F be the canonical form of a simple-structure matrix. Then the matrix $A_F$ can be transformed into the diagonal matrix $\Lambda$ with the help of the Vandermonde matrix $M_V$, whose columns are its eigenvectors, via $\Lambda = M_V^{-1} A_F M_V$. (A numerical check of Statements 1 and 3 is sketched below.)
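
A minimal numpy sketch of Statements 1 and 3 (the matrix A below is illustrative): diagonalize A with its eigenvector matrix M, and diagonalize the Frobenius form A_F with the Vandermonde matrix M_V.

```python
import numpy as np

# Hedged sketch: M^{-1} A M = Lambda (Statement 1), M_V^{-1} A_F M_V = Lambda (Statement 3).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

lam, M = np.linalg.eig(A)                        # columns of M are eigenvectors of A
print(np.allclose(np.linalg.inv(M) @ A @ M, np.diag(lam)))        # Statement 1

coeffs = np.poly(A)                              # Frobenius form as in the previous sketch
n = A.shape[0]
A_F = np.zeros((n, n))
A_F[:-1, 1:] = np.eye(n - 1)
A_F[-1, :] = -coeffs[1:][::-1]

M_V = np.vander(lam, increasing=True).T          # columns [1, lambda_i, ..., lambda_i^(n-1)]^T
print(np.allclose(np.linalg.inv(M_V) @ A_F @ M_V, np.diag(lam)))  # Statement 3
```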

Practice 5. Calculation of similarity transformation matrices


Begin with several examples; basically, follow the previous steps.
Remember the notation of the Vandermonde matrix.
Also, the condition for a matrix to be transformable into a diagonal matrix is that its matrix of eigenvectors has full rank,
that is, that the eigenvectors are all linearly independent.

Lec 7. Linear and quadratic forms


Definition 1: Scalar function of a vector

Definition 2: Real linear form

A function whose domain of definition is the linear space R^n and whose range of values is the set of real numbers R is called a real linear form.

We can see that a linear form is a scalar function of a vector argument.

Definition 3: Kernel of the linear form

The set of all vectors x for which $F_1(x) = 0$ is called the kernel of the linear form (functional) and is denoted $N(F_1)$.

Definition 4: Quadratic form

The rank of a quadratic form is the rank of its matrix A.

Canonical form of quadratic form

With the non-degenerate transformation $x = T y$:
$F_2(y) = \sum_{i=1}^{n} \lambda_i y_i^2$
Here $\lambda_i$ ($i = 1, \dots, n$) are the eigenvalues of the matrix A, and the columns of the matrix T are pairwise orthogonal normalized eigenvectors of A (the norm of each eigenvector is equal to 1).
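
A minimal numpy sketch (the symmetric matrix A below is illustrative): the columns of T returned by eigh are orthonormal eigenvectors, so $T^T A T = \Lambda$ and $F_2(y) = \sum_i \lambda_i y_i^2$.

```python
import numpy as np

# Hedged sketch: reduce F_2(x) = x^T A x to canonical form via x = T y.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, T = np.linalg.eigh(A)                      # orthonormal eigenvectors of symmetric A
print(lam)                                      # eigenvalues lambda_i: [1. 3.]
print(np.allclose(T.T @ A @ T, np.diag(lam)))   # diagonal (canonical) form of the quadratic form
```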

Positive and negative definiteness of a quadratic form

Write a quadratic form as a scalar product with weight A: $F(x) = \langle Ax, x \rangle$.

  1. The quadratic form $F(x) = \langle Ax, x \rangle$ is called positively (negatively) definite if $\langle Ax, x \rangle > 0$ ($\langle Ax, x \rangle < 0$) for $x \neq 0$ and $\langle Ax, x \rangle = 0$ for $x = 0$.
    The quadratic form $F(x) = \langle Ax, x \rangle$ is called nonnegatively (nonpositively) definite if it takes only nonnegative (nonpositive) values and becomes zero not only for $x = 0$.
    If the quadratic form $F(x)$ takes values of different signs for different $x$, then the form is called indefinite.

    Theorem 1. If all eigenvalues of the matrix of a quadratic form are positive, then it is positive definite. If all eigenvalues are negative, then it is negative definite.
    Theorem 2 (Sylvester’s criterion for the sign-definiteness of a quadratic form).
    For a quadratic form to be positive definite, it is necessary and sufficient that all the angular (leading principal) minors of its matrix be positive.
    For a quadratic form to be negative definite, it is necessary and sufficient that the signs of the angular minors of its matrix alternate as follows: $\delta_1 < 0, \delta_2 > 0, \delta_3 < 0, \dots$ (a small numerical check of both theorems is sketched below).
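
A minimal numpy sketch of Theorems 1 and 2 (the matrix A below is illustrative and positive definite): check the eigenvalue signs and the angular (leading principal) minors.

```python
import numpy as np

# Hedged sketch: positive definiteness of <Ax, x> via eigenvalues and via Sylvester's criterion.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

print(np.all(np.linalg.eigvalsh(A) > 0))                 # Theorem 1: all eigenvalues positive
minors = [np.linalg.det(A[:k, :k]) for k in range(1, 4)]  # angular minors delta_1..delta_3
print(all(m > 0 for m in minors))                        # Theorem 2: all angular minors positive
```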

The rules for differentiating matrices with respect to scalar variables, and scalar and vector functions of a vector argument with respect to vector variables

Definition 6: derivative of a matrix with respect to a scalar variable

Let $A$ be a matrix whose elements are functions $A_{ij} = A_{ij}(q)$ of a scalar variable $q$. Then the derivative of the matrix $A(q)$ is the matrix $A_q = \partial A(q)/\partial q$ composed of the derivatives of its elements $A_{ij,q} = \partial A_{ij}(q)/\partial q$ with respect to the variable $q$, which can be written in the form $A_q = \mathrm{row}\{\mathrm{col}(A_{ij,q};\ i = 1, \dots, m);\ j = 1, \dots, n\}$
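
A minimal sympy sketch of Definition 6 (the matrix A and variable q below are illustrative): the derivative with respect to the scalar q is taken element by element.

```python
import sympy as sp

# Hedged sketch: dA(q)/dq is computed element by element.
q = sp.symbols('q')
A = sp.Matrix([[q**2, sp.sin(q)],
               [sp.exp(q), 3*q]])

A_q = A.diff(q)                    # dA_ij/dq for every element
sp.pprint(A_q)                     # [[2*q, cos(q)], [exp(q), 3]]
```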

Definition 7: derivative of a scalar function of vector argument.

The number of formulas is large; check them on the slides.

Definition 8: derivative of vector function of a vector argument

Same as Def. 7