
Lec 7. Linear and quadratic forms

Recall the following concepts:

1) Scalar function of a vector
2) Linear and quadratic forms
3) Positive and negative definiteness of a quadratic form
4) The rules for differentiating matrices with respect to scalar variables, and scalar and vector functions of a vector argument with respect to vector variables


Prac 6. Properties of matrix functions

I. Main properties

  1. The matrix function of the matrix $f(A)$ preserves the block-diagonal form of the matrix A: if A is a diagonal matrix $A=diag\{\lambda_i\}$, then $f(A)=diag\{f(\lambda_i)\}$; i.e. if $\sigma\{A\}=\{\lambda_1,\lambda_2,\ldots,\lambda_n\}$, then $\sigma\{f(A)\}=\{f(\lambda_1),f(\lambda_2),\ldots,f(\lambda_n)\}$

2. Examples

  1. $e^A =I+A+\frac{1}{2!}A^2+\ldots=\sum_{i=0}^\infty\frac{1}{i!}A^i$
  2. $\cos A = I-\frac{1}{2!}A^2 +\frac{1}{4!}A^4 -\frac{1}{6!}A^6+\ldots$
  3. $\sin A = A-\frac{1}{3!}A^3 +\frac{1}{5!}A^5 -\frac{1}{7!}A^7+\ldots$

3. Some facts

$\cos(2A)=2\cos^2(A)-I$

$\sin(2A)=2\sin(A)\cos(A)$

$\cos^2(A)+\sin^2(A)=I$

$\sin(A\pm B)=\sin(A)\cos(B)\pm \cos(A)\sin(B)$ if $AB=BA$

$\cos(A\pm B)=\cos(A)\cos(B)\mp \sin(A)\sin(B)$ if $AB=BA$
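These identities can be checked numerically with SciPy's matrix cosine and sine (a quick sketch; the matrix `A` below is an arbitrary example, since the identities hold for any square matrix):

```python
import numpy as np
from scipy.linalg import cosm, sinm

# Arbitrary square test matrix (the identities hold for any square A).
A = np.array([[0.3, 1.2],
              [-0.5, 0.8]])
I = np.eye(2)

# cos^2(A) + sin^2(A) = I
lhs = cosm(A) @ cosm(A) + sinm(A) @ sinm(A)
assert np.allclose(lhs, I)

# Double-angle formulas (A always commutes with itself):
assert np.allclose(cosm(2 * A), 2 * cosm(A) @ cosm(A) - I)  # cos(2A) = 2cos^2(A) - I
assert np.allclose(sinm(2 * A), 2 * sinm(A) @ cosm(A))      # sin(2A) = 2sin(A)cos(A)
```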

4. Methods

  1. Approximate method
  2. The exact method based on eigenvalues

    $\Lambda=M^{-1}AM$

    where $\Lambda=\left[\begin{array}{ccc}\lambda_1&0&0\\0&\ddots&0\\0&0&\lambda_n\end{array}\right],\quad M=[\xi_1,\ldots,\xi_n]$

    $F(A)=M\,F(\Lambda)\,M^{-1}$
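The eigenvalue method can be sketched in NumPy; since $\Lambda=M^{-1}AM$, we have $f(A)=M\,f(\Lambda)\,M^{-1}$. Here `A` is an arbitrary diagonalizable example and $f=\cos$:

```python
import numpy as np
from scipy.linalg import cosm

# Exact (eigenvalue) method: f(A) = M f(Lambda) M^{-1}.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # diagonalizable example, eigenvalues 2 and 3

lam, M = np.linalg.eig(A)           # Lambda = diag(lam), M = [xi_1, ..., xi_n]
f_A = M @ np.diag(np.cos(lam)) @ np.linalg.inv(M)

# Cross-check against SciPy's direct matrix cosine.
assert np.allclose(f_A, cosm(A))
```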

5. Matrix exponent

  1. $e^{At}=I+At+\frac{1}{2!}A^2 t^2+\frac{1}{3!}A^3 t^3+\ldots=\sum_{i=0}^\infty\frac{1}{i!}A^it^i$

Properties:

  1. If $A=0$ or $t=0$, then $e^{At}=I$
  2. If $AB=BA$, then $e^{At}e^{Bt}=e^{(A+B)t}$
  3. In the general case $AB\ne BA$ and $e^{At}e^{Bt}\ne e^{(A+B)t}$
  4. $e^{At}e^{A\tau}=e^{A(t+\tau)}$
  5. $\frac{d}{dt}e^{At}=Ae^{At}=e^{At}\cdot A$
  6. For matrices of simple structure ($\lambda_i\ne\lambda_j$ for $i\ne j$, $Im(\lambda_i)=0$, $i=1,\ldots,n$) the matrix exponent is calculated as $e^{At}=Me^{\Lambda t}M^{-1}$, where $\Lambda=\left[\begin{array}{ccc}\lambda_1&0&0\\0&\ddots&0\\0&0&\lambda_n\end{array}\right],\ M=[\xi_1,\ldots,\xi_n]$
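Several of these properties can be verified with SciPy's `expm` (a sketch; the matrices `A` and `B` are arbitrary examples, with `B` built as a polynomial in `A` so that the two commute):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = A @ A + np.eye(2)               # B is a polynomial in A, hence AB = BA
t, tau = 0.7, 0.4

# Property 1: A = 0 or t = 0 gives the identity matrix.
assert np.allclose(expm(A * 0.0), np.eye(2))

# Property 2: for commuting matrices, e^{At} e^{Bt} = e^{(A+B)t}.
assert np.allclose(expm(A * t) @ expm(B * t), expm((A + B) * t))

# Property 4: e^{At} e^{A tau} = e^{A(t+tau)}.
assert np.allclose(expm(A * t) @ expm(A * tau), expm(A * (t + tau)))

# Property 5: d/dt e^{At} = A e^{At}, checked by a central finite difference.
h = 1e-6
deriv = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)
assert np.allclose(deriv, A @ expm(A * t), atol=1e-5)
```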

Lec 8. Matrix functions. Matrix Exponent

Definitions:

Consider a square matrix A, $dim(A)=n\times n$.

  1. A scalar function (SFM) of a square matrix A is a function $f(A)$ that implements the mapping

    $f(A): R^{n\times n}\to R,$

    where $R$ is the set of real numbers.

    Examples: determinant, trace, norm and condition number of the matrix.

  2. A vector function of a square matrix A is a function $f(A)$ that implements the mapping

    $f(A): R^{n\times n}\to R^n,$

    where $R^n$ is n-dimensional real space.

    Examples: vectors consisting of the elements of the algebraic spectra of eigenvalues and singular values.

Matrix series and matrix functions of matrices

The matrix function of a matrix (MFM) implements the mapping $F(A): R^{n\times n}\to R^{n\times n}$

  1. Let $f(\alpha)$ be a scalar power series (polynomial) with respect to a scalar variable $\alpha$. Then the scalar series $f(\alpha)$ generates a matrix function $f(A)$ of the matrix A in the form of a matrix series, if in the representation $(1)$ for $f(\alpha)$ the scalar variable is replaced by the matrix $A$.

Hamilton-Cayley theorem.

A square matrix $A$ with characteristic polynomial

$D(\lambda)=det(\lambda I-A)=\lambda^n+a_1\lambda^{n-1}+\ldots+a_{n-1}\lambda+a_n$

sets its characteristic polynomial to zero, so that the matrix relation

$D(A)=A^n+a_1A^{n-1}+\ldots+a_{n-1}A+a_nI=0$

is satisfied, where $0$ is the $(n\times n)$ null matrix.
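The theorem is easy to check numerically; `np.poly` returns the coefficients $1,a_1,\ldots,a_n$ of the characteristic polynomial, and the matrix here is an arbitrary example:

```python
import numpy as np

# Numerical check of the Hamilton-Cayley theorem: D(A) = 0.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

coeffs = np.poly(A)                  # [1, a_1, ..., a_n] of det(lambda*I - A)
n = A.shape[0]

# Evaluate D(A) = A^n + a_1 A^{n-1} + ... + a_n I by Horner's scheme.
D_A = np.zeros_like(A)
for c in coeffs:
    D_A = D_A @ A + c * np.eye(n)

assert np.allclose(D_A, np.zeros((n, n)))
```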

Using the Hamilton-Cayley theorem, we introduce the following definitions:

  1. A polynomial (power series) $\varphi(\alpha)$ with respect to a scalar variable $\alpha$ is called an annihilating polynomial of a square matrix A if the condition $\varphi(A)=0$ holds.

The annihilating polynomial of the matrix A is, by virtue of the Hamilton-Cayley theorem, primarily its characteristic polynomial.

  1. The annihilating polynomial $\psi(\alpha)$ of the least degree m with the leading coefficient at $\alpha^m$ equal to one is called the minimal polynomial of the matrix A.

    Construct the expansion of the polynomial $f(\alpha)$ $(1)$, which defines the matrix function of the matrix $f(A)$ in the form (2), modulo the minimal polynomial $\psi(\alpha)$ of the matrix A:

    $f(\alpha)=\psi(\alpha)q(\alpha)+r(\alpha)$

    where the polynomial $r(\alpha)$ has degree $deg(r(\alpha))$ less than the degree $deg(\psi(\alpha))$ of the minimal polynomial $\psi(\alpha)$ of matrix A.

  2. Let the polynomial $f(\alpha)$ with respect to the scalar variable $\alpha$ be represented in the form (5); then, since $\psi(A)=0$, the matrix function $f(A)$ can be written in the minimal form $f(A)=r(A)$.

Properties of a matrix function of a matrix

  1. The matrix function of the matrix $f(A)$ preserves the spectrum of eigenvalues of the matrix: $\sigma\{f(A)\}=\{f(\lambda_i),\ i=1,\ldots,n\}$

  2. The matrix function of the matrix $f(A)$ preserves the matrix similarity relation, i.e. if A is similar to $B$ ($B=T^{-1}AT$), then $f(B)=T^{-1}f(A)T$

  3. The matrix function of the matrix $f(A)$ preserves the block-diagonal form of the matrix A: if A is a diagonal matrix $A=diag\{a_i\}$, then $f(A)=diag\{f(a_i)\}$

Main ways to calculate matrix exponent

  1. Numerical way

    Based on the transition from continuous time $t$ to discrete time $k$, expressed by the number of discrete intervals of duration $\Delta t$: $t=k(\Delta t)$

  2. Diagonalization method (eigenvalue method)

    It is applied to matrices of simple structure,

    for which the relation

    $\Lambda=M^{-1}AM$

    holds, with $\Lambda=diag\{\lambda_i,i=1,\ldots,n\}$, and

    M is the matrix of eigenvectors of A

  3. Method based on reduction to normal Jordan form

    Applies to matrices whose eigenvalue spectrum contains r multiple eigenvalues $\lambda_i$ of multiplicity $m_i$ each. Then the matrix similarity relation $J=M^{-1}AM$ holds (J is the normal Jordan form of A),

    and for the matrix exponent $e^{At}=Me^{Jt}M^{-1}$.

  4. Laplace transform method

    Calculation of the inverse Laplace transform of the resolvent $(sI-A)^{-1}$, in the form $e^{At}=L^{-1}\{(sI-A)^{-1}\}$

    To expand the resolvent without matrix inversion, the Faddeev-Leverrier algorithm is used, based on the representation

    $(sI-A)^{-1}=\frac{1}{D(s)}\left(H_0s^{n-1}+H_1s^{n-2}+\ldots+H_{n-1}\right)$

    The $(n\times n)$ matrices $H_i\ (i=0,\ldots,n-1)$ and the coefficients of the characteristic equation are calculated using a recurrent procedure.

    Now the resolvent can be rewritten in this expanded form,

    and the matrix exponent takes the form of the inverse Laplace transform of this expansion.

Matrix Inversion using the Hamilton-Cayley theorem

Matrix relation (3)

$D(A)=A^n+a_1A^{n-1}+\ldots+a_{n-1}A+a_nI=0$

which is the analytical content of the Hamilton-Cayley theorem, can be written in the form

$A^n+a_1A^{n-1}+\ldots+a_{n-1}A=-a_nI$ (20)

Multiply (20) on the right by $A^{-1}$:

$A^{n-1}+a_1A^{n-2}+\ldots+a_{n-1}I=-a_nA^{-1}$ (21)

Solve (21) with respect to the inverse matrix:

$A^{-1}=-\frac{1}{a_n}\left(A^{n-1}+a_1A^{n-2}+\ldots+a_{n-1}I\right)$ (22)

We obtained an algorithmic basis for matrix inversion. Matrix relation (22) has a positive property: it is insensitive to the conditionality of the inverted matrix.

The disadvantage of inverting matrices using expression (22) is the need to know the coefficients of the characteristic polynomial. Therefore, the proposed inversion procedure will not cause noticeable difficulties for the case of sparse matrices, and it is especially convenient to use it when inverting matrices given in the Frobenius form, as the coefficients of the characteristic polynomial are explicitly present in it.
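A minimal sketch of inversion by this relation; the characteristic-polynomial coefficients are taken from `np.poly` purely for illustration (in practice they would be known, e.g. from the Frobenius form):

```python
import numpy as np

# Inversion via the Hamilton-Cayley theorem:
# A^{-1} = -(1/a_n) (A^{n-1} + a_1 A^{n-2} + ... + a_{n-1} I)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
n = A.shape[0]
a = np.poly(A)                       # [1, a_1, ..., a_n]

# Horner evaluation of A^{n-1} + a_1 A^{n-2} + ... + a_{n-1} I
P = np.zeros_like(A)
for c in a[:-1]:                     # all coefficients except a_n
    P = P @ A + c * np.eye(n)

A_inv = -P / a[-1]
assert np.allclose(A_inv @ A, np.eye(n))
```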


Prac 7. Hamilton-Cayley theorem.

Consequences

Any square matrix $A\ (n\times n)$ with characteristic polynomial

$D(\lambda)=det(\lambda I-A)=\lambda^n+a_1\lambda^{n-1}+\ldots+a_{n-1}\lambda+a_n$

satisfies its own characteristic equation, i.e.

$D(A)=A^n+a_1A^{n-1}+\ldots+a_{n-1}A+a_nI=0$

The proof of the theorem

By the theorem, calculation of a matrix function can be represented as a finite sum $f(A)=\sum_{i=0}^{n-1}\beta_iA^i$

Interpolation

The Lagrange interpolation polynomial

consider the polynomial matrix

Successive multiplications of these equalities by $\lambda^p$ and $A^p$ will give

Sylvester theorem for $\lambda_i\ne \lambda_j$, $i\ne j,\ i=1,\ldots,n,\ j=1,\ldots, n$:

$f(A)=\sum_{i=1}^n f(\lambda_i)\prod_{j\ne i}\frac{A-\lambda_jI}{\lambda_i-\lambda_j}=\sum_{i=1}^n f(\lambda_i)Z_i$
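A direct implementation of Sylvester's formula for distinct eigenvalues (the function name `sylvester` and the test matrix are illustrative; the result is cross-checked against SciPy's `expm` for $f=\exp$):

```python
import numpy as np
from scipy.linalg import expm

def sylvester(f, A):
    """f(A) = sum_i f(lambda_i) * prod_{j != i} (A - lambda_j I)/(lambda_i - lambda_j)."""
    lam = np.linalg.eigvals(A)
    n = A.shape[0]
    F = np.zeros((n, n), dtype=complex)
    for i in range(n):
        Z = np.eye(n, dtype=complex)          # Lagrange basis matrix Z_i
        for j in range(n):
            if j != i:
                Z = Z @ (A - lam[j] * np.eye(n)) / (lam[i] - lam[j])
        F += f(lam[i]) * Z
    return F.real

A = np.array([[2.0, 1.0],
              [0.0, 5.0]])                    # distinct eigenvalues 2 and 5
assert np.allclose(sylvester(np.exp, A), expm(A))
```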

The Becker Formula

where $D$ is the Vandermonde determinant, and

$D_{n-1}$ is the determinant $D$ with the $(n-1)$-th row replaced with a row

High degree matrices

Let us use the Sylvester formula to calculate $A^p$, as given in formula $(1)$

$Z_i$ does not depend on $p$.

If $p$ is very large, then we can neglect $\lambda_2^p,\ldots,\lambda_n^p$ compared to $\lambda_1^p$ (assuming $\lambda_1$ is the dominant eigenvalue).
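A sketch of this dominant-eigenvalue approximation, $A^p\approx\lambda_1^pZ_1$; the matrix is an arbitrary example with $|\lambda_1|$ well separated from the other eigenvalue:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 0.5]])           # eigenvalues 3 (dominant) and 0.5
lam = np.linalg.eigvals(A)
i = int(np.argmax(np.abs(lam)))      # index of the dominant eigenvalue
lam1 = lam[i]

# Z_1 = prod_{j != i} (A - lambda_j I)/(lambda_1 - lambda_j)
Z1 = np.eye(2)
for j, l in enumerate(lam):
    if j != i:
        Z1 = Z1 @ (A - l * np.eye(2)) / (lam1 - l)

p = 20
approx = (lam1 ** p) * Z1            # dominant-term approximation of A^p
exact = np.linalg.matrix_power(A, p)

# Relative error decays like (|lambda_2|/|lambda_1|)^p
err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
assert err < 1e-9
```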

Fadeev-LeVerrier algorithm

Any square matrix $A\ (n\times n)$ with characteristic polynomial $D(\lambda)=c_0\lambda^n+c_1\lambda^{n-1}+\ldots+c_n$, $c_0=1$.

The Fadeev-LeVerrier algorithm is based on the following recursion rule for matrices $B_0,\dots, B_n$ and coefficients $c_0,\dots,c_n$:

$B_0=I,\quad c_k=-\frac{1}{k}tr(AB_{k-1}),\quad B_k=AB_{k-1}+c_kI,\quad k=1,\ldots,n$
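A minimal sketch of the recursion (the function name is illustrative; `np.poly` is used only to cross-check the coefficients):

```python
import numpy as np

def faddeev_leverrier(A):
    """B_0 = I, c_k = -trace(A B_{k-1})/k, B_k = A B_{k-1} + c_k I."""
    n = A.shape[0]
    B = np.eye(n)
    coeffs = [1.0]                   # leading coefficient c_0 = 1
    Bs = [B]
    for k in range(1, n + 1):
        AB = A @ B
        c = -np.trace(AB) / k
        coeffs.append(c)
        B = AB + c * np.eye(n)
        Bs.append(B)
    return np.array(coeffs), Bs

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
coeffs, Bs = faddeev_leverrier(A)

assert np.allclose(coeffs, np.poly(A))     # matches det(lambda*I - A)
assert np.allclose(Bs[-1], 0)              # B_n = 0 (Hamilton-Cayley)
```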

Consequences


Lec 9. SISO Models of Continuous and Discrete-Time Systems

Definitions

A mathematical model of a dynamic system is the mathematical description of the relationship between the variables of the system, characterizing its behaviour.

A mathematical model allows us to study the behaviour of the system when it is exposed to physical signals (independent variables: reference and control influences and disturbances).

A control system is an interconnection of elements forming a system configuration to provide a desired response.

System theory provides basis for analysis of a system.

The input-output relationship represents cause-and-effect relationship of the process, which in turn, represents a processing of the input signal to provide an output signal variable.

The mathematical description depends on the type of converted signals.

Transformation

Continuous-time signal transformation: dynamic systems are called continuous, and differential equations are used to describe them.

Discrete-time signal transformation: signals are sampled over a discrete interval $\Delta t$ at time points $t=k(\Delta t)$, where $k$ is discrete time expressed in the number of discrete intervals; such dynamic systems are called discrete, and recurrent (difference) equations are used to describe them.

SISO

Consider a nonlinear continuous dynamic system with one input and one output, described by a nonlinear ordinary differential equation of the n-th order.

If we linearize (1), leaving the dependent variables on the left side and the independent variables on the right side, then we obtain a linear (linearized) differential equation.

Dynamic systems whose mathematical models can be represented in the form of equation (2) are continuous linear systems.

When all the coefficients of equation (2) are constant, the system is called stationary.

Linearization of a system in the input-output form (SISO)

Consider a system in form (1) and in a steady state (equilibrium position)

It means that

It is required to obtain a linearized model in the neighborhood of the equilibrium position

Introduce new coordinates - deviations from the equilibrium state. Check slides for more detail.

SISO mathematical models of discrete control systems

In the control system, the functions of the controller can be performed by a digital (discrete) device. Such devices are implemented in the form of microcomputers, microcontrollers, microprocessors, interfaced with digital-to-analog converters.

The input of information into a discrete device is carried out at certain time intervals; therefore, for a mathematical description and analysis of the quality of discrete systems, it is necessary to develop a special method. A discrete system operates on data obtained from a continuous signal by sampling its values at equally spaced time intervals. The result is a time sequence of data called a discrete signal. The transition from continuous time $t$ to discrete moments of time is carried out according to the formula $t=k\Delta t$, where $k$ is an integer taking the values $k=0,1,2,\ldots$


Lec 10. MIMO Models of Continuous and Discrete-Time Systems

Questions

What are the disadvantages of Input-Output models? Why was there a request for the Input-State-Output (MIMO, State Space) model?

The class of models Input-Output historically appeared from the theory of electrical circuits. Up to a certain point, it fully satisfied the needs of developers of dynamic systems.

Let us write the Input-Output model in an explicit form:

where $u(\nu)$ is the input function, and $\nu$ is continuous time $t$ in the case of continuous objects or systems, and discrete time $k$ in the case of discrete ones.

As control problems became more complicated, the Input-Output description of systems proved limiting: it was found that when using State Space models it is much more convenient to take into account the existing physical ideas about the mechanisms of the system.

This problem was solved by parametrizing the relation of form (0)

where the parameter vector $x(\nu)$ is called the state vector (or simply the state) of the dynamic system.

Definition

  1. The minimum set of parameters that completely removes the uncertainty of the input-output relationship of a dynamic object $y(\nu)=\delta(u(\nu))$ is called a state vector (or simply a state).

    If the state of a dynamic system $x(\nu_s)$ at some moment $\nu=\nu_s$ is known, then the response of the system at $\nu\ge\nu_s$ will be uniquely determined only by the state $x(\nu_s)$ and the control signal $u(\nu)$ at $\nu\ge\nu_s$.

  2. Let us define a dynamic system as an eight-component macrovector

    where $U$ is the set of instantaneous values of $r$-dimensional input (control) signals, $U\in R^r$;

    $X$ is the set of $n$-dimensional states, $X\in R^n$;
    $Y$ is the set of instantaneous values of $m$-dimensional outputs;
    $T$ is the set of time points forming the interval of control and observation;
    $\Omega$ is the set of admissible input signals;
    $\Gamma$ is the set of output values;

    $\lambda$ is the system transition function from some previous state $x$ at the time moment $\tau\in T$ to the next state $x$ at the time moment $t$ under input signal $U$;

    $\delta$ is the system output function, which defines the rule for obtaining the instantaneous value of the output $Y$ at the time moment $t\in T$ under transition of the system from some previous state $x$ at the time moment $\tau\in T$ under input signal $U$.

    We will use the reduced definition of a dynamic system, omitting the description of the sets Ω and Γ , i.e. define a dynamic system as a six-component macrovector

MIMO-Models of continuous control systems.

Unforced and forced response of the system. Fundamental and transition matrices. Construction of MIMO-Models of continuous systems by transfer functions.

The transition functions $\lambda$ and $\delta$ in continuous systems are given in the following form:

Where $x\in R^n,\ y\in R^m, u\in R^r,\ \dot{x}(t)=\frac{d}{dt}x(t).$

If the rules $\lambda$ and $\delta$ in the description of continuous systems can be represented as a composition of linear operations of addition and multiplication of matrices by a vector, then such systems are linear. MIMO means multi-input, multi-output.

For linear continuous dynamic systems, the description of the functions 𝜆 and 𝛿 takes the form

where $A$ is the $(n\times n)$ state matrix, $B$ is the $(n\times r)$ control matrix, $C$ is the $(m\times n)$ output matrix, and $D$ is the $(m\times r)$ input-output matrix. In our further investigation we suppose that $D=0$.

Transfer matrix (function)

Let us apply the Laplace transform to equations (5), (6):

where $x(0)=x(t)\vert_{t=0}$, and $U(s),X(s),Y(s)$ are the Laplace transforms of $u(t)$, $x(t)$, $y(t)$.

Resolve the resulting expressions with respect to $X(s)$ and $Y(s)$

When the initial state of the control system is zero, (7) becomes $Y(s)=C(sI-A)^{-1}B\,U(s)$
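The transfer matrix $C(sI-A)^{-1}B$ can be obtained numerically with `scipy.signal.ss2tf`; the state-space matrices below are a hypothetical second-order SISO example:

```python
import numpy as np
from scipy.signal import ss2tf

# Hypothetical 2nd-order SISO system with D = 0.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
# den holds the coefficients of det(sI - A) = s^2 + 3s + 2
assert np.allclose(den, [1.0, 3.0, 2.0])
assert np.allclose(num.ravel(), [0.0, 0.0, 1.0])   # G(s) = 1/(s^2 + 3s + 2)
```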

Unforced and forced components of state

Let us consider a linear continuous system described by equations (5), (6), which is defined by the MIMO-model in differential form with a zero matrix $D$

Integral form

If we use the principle of superposition, which is valid for linear systems, then we can write

as the unforced and forced components, respectively: the motion caused by $x(0)\ne 0$ and the movement generated by $u(t)\ne 0$

The general form of the integral state-space model (MIMO) of a linear continuous system takes the form of eq. (13); see more on the slides:

$\Phi(t)$ – fundamental matrix of the system
$\Phi(t,\tau)=\Phi(t)\Phi^{-1}(\tau)$ – transition matrix of the system
$w(t)=C\Phi(t,0)B=C\Phi(t)B$ – weight matrix of the system

Discrete-time MIMO-models

A discrete system is a system in which, at least in one element, with a continuous change in the input value, the output value does not change continuously, but has the form of separate pulses that appear at certain intervals.

The transition function $\lambda$ and the output function $\delta$ in discrete systems are given in the following form

In a linear discrete system, they are written in the form

A discrete system makes sampling with an interval of duration ∆𝑡 from the state and output variables of a continuous dynamic process. State variables between sampling moments change in accordance with the integral state model of a continuous system, output variables change according to the same law, and input (control) variables between sampling moments are fixed at the level of values at the previous sampling moment.
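For an input held constant between sampling moments, this sampling can be made exact: $x[k+1]=A_dx[k]+B_du[k]$ with $A_d=e^{A\Delta t}$ and $B_d=\big(\int_0^{\Delta t}e^{A\tau}d\tau\big)B$. A sketch using the standard augmented-matrix identity (the system matrices are an arbitrary example):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical continuous system x' = Ax + Bu, sampling interval dt.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
dt = 0.1
n, r = A.shape[0], B.shape[1]

# Augmented-matrix identity: expm([[A, B], [0, 0]] * dt) = [[Ad, Bd], [0, I]].
M = np.zeros((n + r, n + r))
M[:n, :n] = A
M[:n, n:] = B
E = expm(M * dt)
Ad, Bd = E[:n, :n], E[:n, n:]

assert np.allclose(Ad, expm(A * dt))
# For invertible A, Bd = A^{-1}(Ad - I)B.
assert np.allclose(Bd, np.linalg.solve(A, (Ad - np.eye(n)) @ B))
```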