The Finite Element Method - Fundamentals - Matrix Algebra [3]

Matrix Algebra

The FEM is a numerical approach which results in systems of equations, often involving many thousands of unknowns. Matrix algebra turns out to be very convenient when dealing with such systems, as we will see later on (assembly of systems etc.).

In this third chapter of the FEM course I will show you some basic fundamentals of matrix algebra. However, no proofs will be given; otherwise I would never come to an end typing in all the LaTeX expressions. If you want to learn more about matrix algebra in general, I recommend watching the Linear Algebra course by Gilbert Strang (math professor @MIT):

So what we are doing is basically recalling the elementary results of matrix algebra.

Let’s start! :thumbsup:


1. Definitions

A matrix consists of so-called components. These components are ordered in rows and columns. If the number of rows or columns equals one, the matrix is one-dimensional; otherwise we have a two-dimensional matrix.

Example: We consider now a column matrix, where the number of columns is equal to one.
We write vectors of this type with a lower-case letter in bold type.

\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \end{bmatrix}; \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \end{bmatrix};\mathbf{c} = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ \end{bmatrix} \tag{1}

where, for instance, a_1, a_2 and a_3 are the so-called components of the matrix \mathbf{a}.
The specific dimension of a matrix is given by the number of rows and columns.

Matrices \mathbf{a} and \mathbf{b} are 3 x 1, whereas the dimension of matrix \mathbf{c} is 5 x 1.

You first indicate the row THEN the column.

Transpose

The transpose of our matrix \mathbf{a} is \mathbf{a}^\intercal

\mathbf{a}^\intercal = [a_1 \ a_2 \ a_3] \tag{2}

which now has the dimension 1 x 3 (remember: first row, then column).
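If you want to try this out yourself, here is a minimal NumPy sketch (the component values are made up for illustration):

```python
import numpy as np

# Column matrix a with dimension 3 x 1 (arbitrary example values)
a = np.array([[1.0], [2.0], [3.0]])

print(a.shape)    # (3, 1) -> first the row count, then the column count
print(a.T.shape)  # (1, 3) -> the transpose swaps the two dimensions
```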

2-D Matrix

We write a two-dimensional matrix usually with an upper-case letter in bold:

\mathbf{B} = \begin{bmatrix} B_{11} & B_{12} & B_{13} \\ B_{21} & B_{22} & B_{23} \\ B_{31} & B_{32} & B_{33}\\ \end{bmatrix}; \mathbf{C} = \begin{bmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \\ C_{31} & C_{32} \\ C_{41} & C_{42} \\ \end{bmatrix} \tag{3}

\mathbf{B} is said to be a square matrix since the number of rows equals the number of columns. (The dimension of \mathbf{B} is 3 x 3 and that of \mathbf{C} is 4 x 2.)

How do we get the transpose of this matrix now? Is this hard? Not at all! Just interchange rows and columns!

\mathbf{B}^\intercal = \begin{bmatrix} B_{11} & B_{21} & B_{31} \\ B_{12} & B_{22} & B_{32} \\ B_{13} & B_{23} & B_{33}\\ \end{bmatrix} \tag{4}

Another property a square matrix can have is symmetry: \mathbf{B} is symmetric if \mathbf{B} = \mathbf{B}^\intercal.
We will often encounter symmetric matrices. For example, if a continuum body is in static equilibrium, it can be demonstrated that the components of the Cauchy stress tensor at every material point satisfy the equilibrium equations (Cauchy’s equations of motion for zero acceleration).

At the same time, the principle of conservation of angular momentum requires that the sum of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components instead of the original nine.
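A quick way to check symmetry numerically is to compare a matrix with its transpose; a small NumPy sketch with arbitrary example values:

```python
import numpy as np

# A symmetric example matrix (values chosen arbitrarily)
B = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 4.0, 5.0]])

# B is symmetric exactly when B equals its transpose
print(np.array_equal(B, B.T))  # True
```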

Diagonal Matrix

If all components outside the diagonal of a matrix are zero, the matrix is termed a diagonal matrix.

\mathbf{A} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 8 & 0 & 0 \\ 0 & 0 & 7 & 0 \\ 0 & 0 & 0 & 3 \\ \end{bmatrix} \tag{5}

Unit Matrix

If all the diagonal components of a diagonal matrix are equal to 1, the matrix is a so-called unit matrix.
We often write a unit matrix as \mathbf{I}.

\mathbf{I} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{bmatrix} \tag{6}

Zero Matrix

If all components of a matrix are zero, we call the matrix a zero matrix \mathbf{0}.

\mathbf{0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ \end{bmatrix} \tag{7}
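NumPy has convenient constructors for these special matrices; a short sketch mirroring (5), (6) and (7):

```python
import numpy as np

A = np.diag([1.0, 8.0, 7.0, 3.0])  # diagonal matrix from (5)
I = np.eye(3)                      # 3 x 3 unit (identity) matrix
Z = np.zeros((2, 2))               # 2 x 2 zero matrix
```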

2. Addition and Subtraction

If two matrices have the same dimension, they can be added and subtracted.

\mathbf{c} = \mathbf{a} \pm \mathbf{b} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \end{bmatrix} \pm \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \end{bmatrix} = \begin{bmatrix} a_1 \pm b_1 \\ a_2 \pm b_2 \\ a_3 \pm b_3 \\ \end{bmatrix} \tag{8}

For transposed matrices we have

\begin{align} \mathbf{c}^\intercal& =\mathbf{a}^\intercal \pm \mathbf{b}^\intercal = [a_1 \ a_2 \ a_3] \pm [b_1 \ b_2 \ b_3]\\ & =[a_1 \pm b_1 \ a_2 \pm b_2 \ a_3 \pm b_3 ] \end{align} \tag{9}

and for a two-dimensional matrix we have

\mathbf{C} = \mathbf{A} \pm \mathbf{B} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ A_{31} & A_{32} \\ \end{bmatrix} \pm \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \\ B_{31} & B_{32} \\ \end{bmatrix} = \begin{bmatrix} A_{11} \pm B_{11} & A_{12} \pm B_{12} \\ A_{21} \pm B_{21} & A_{22} \pm B_{22} \\ A_{31} \pm B_{31} & A_{32} \pm B_{32} \\ \end{bmatrix} \tag{10}

Some rules follow:

\begin{align} \mathbf{a} \pm \mathbf{b}& = \pm \mathbf{b} + \mathbf{a}\\ \mathbf{a}^\intercal \pm \mathbf{b}^\intercal & = \pm \mathbf{b}^\intercal + \mathbf{a}^\intercal\\ \mathbf{A} \pm \mathbf{B}& = \pm \mathbf{B} + \mathbf{A}\\ \end{align} \tag{11}

Moreover, we have the following rules

\begin{align} (\mathbf{a} \pm \mathbf{b})^\intercal& = \mathbf{a}^\intercal \pm \mathbf{b}^\intercal \\ (\mathbf{a}^\intercal \pm \mathbf{b}^\intercal)^\intercal& = \mathbf{a} \pm \mathbf{b} \\ (\mathbf{A} \pm \mathbf{B})^\intercal& = \mathbf{A}^\intercal \pm \mathbf{B}^\intercal \\ \end{align} \tag{12}
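As a sanity check, a small NumPy sketch (with made-up values) verifies rule (12):

```python
import numpy as np

# Arbitrary example values to check (A + B)^T = A^T + B^T
A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
B = np.array([[6.0, 5.0], [4.0, 3.0], [2.0, 1.0]])

print(np.array_equal((A + B).T, A.T + B.T))  # True
```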

3. Multiplication

A matrix can be multiplied by a number c, which means that each component is multiplied by c.

c \mathbf{a} = c \begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ \end{bmatrix} = \begin{bmatrix} ca_1 \\ ca_2 \\ ca_3 \\ \end{bmatrix} \tag{13}

The same rule applies to two-dimensional matrices.

Scalar product

The scalar product of two matrices \mathbf{a} and \mathbf{b} having the same dimension is defined according to

\boxed{\mathbf{a}^\intercal \mathbf{b} = \sum_{i=1}^n a_i b_i} \tag{14}

where n is the number of rows. It follows that

\mathbf{a}^\intercal \mathbf{b} = \mathbf{b}^\intercal \mathbf{a} \tag{15}

With \mathbf{a} and \mathbf{b} given from the beginning we obtain

\mathbf{a}^\intercal \mathbf{b} = [a_1 \ a_2 \ a_3] \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \end{bmatrix} = a_1 b_1 + a_2 b_2 + a_3 b_3 \tag{16}

The length of \mathbf{a} or \mathbf{a}^\intercal is denoted by \vert \mathbf{a} \vert and defined by

\vert \mathbf{a} \vert = (a^2_{1} + a^2_{2} + ... + a^2_{n})^{\frac{1}{2}} \tag{17}
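A minimal NumPy sketch of (14), (15) and (17), again with arbitrary example values:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Scalar product (14): a^T b = sum_i a_i b_i
print(a @ b)              # 32.0
print(np.dot(b, a))       # the same result, illustrating (15)

# Length (17): |a| = sqrt(a_1^2 + ... + a_n^2)
print(np.linalg.norm(a))  # 3.7416...
```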

Let us now generalize the multiplication rule (14) to more complicated situations.
Let \mathbf{A} have the dimension m x n and \mathbf{x} the dimension n x 1; then the matrix product \mathbf{Ax} defines a column matrix \mathbf{c} with the dimension m x 1 according to

\boxed{\mathbf{c} = \mathbf{Ax} \quad \text{where} \quad c_i = \sum_{j=1}^n A_{ij} x_j} \tag{18}
\mathbf{c} = \mathbf{A} \mathbf{x} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ A_{31} & A_{32} \\ \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \end{bmatrix} = \begin{bmatrix} A_{11} x_1 + A_{12} x_2\\ A_{21} x_1 + A_{22} x_2\\ A_{31} x_1 + A_{32} x_2\\ \end{bmatrix} \tag{19}

In a similar manner let now \mathbf{x} have the dimension m x 1 and \mathbf{A} the dimension m x n , then the matrix product \mathbf{x}^\intercal \mathbf{A} defines a row matrix \mathbf{c}^\intercal with the dimension 1 x n according to

\mathbf{c}^\intercal = \mathbf{x}^\intercal \mathbf{A} \quad \text{where} \quad c_j = \sum_{i=1}^m x_i A_{ij} \tag{20}

An example would be

\begin{align} \mathbf{c}^\intercal = \mathbf{x}^\intercal \mathbf{A}& = [x_1 \quad x_2] \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ \end{bmatrix} \\ \tag{21} &=[x_1 A_{11} + x_2 A_{21} \quad x_1 A_{12} + x_2 A_{22} \quad x_1 A_{13} + x_2 A_{23}] \end{align}

Finally, we can carry out matrix multiplication of two-dimensional matrices. If \mathbf{A} has the dimension m x n and \mathbf{B} the dimension n x p, the matrix product \mathbf{AB} defines the matrix \mathbf{C} with the dimension m x p according to

\boxed{\mathbf{C} = \mathbf{A} \mathbf{B} \quad \text{where} \quad C_{ij} = \sum_{k=1}^n A_{ik} B_{kj}} \tag{22}

See the following example

\begin{align} \mathbf{C} = \mathbf{AB}& = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ A_{31} & A_{32} \\ \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \\ \end{bmatrix}& \\ & = \begin{bmatrix} A_{11} B_{11} + A_{12} B_{21} & A_{11} B_{12} + A_{12} B_{22} \\ A_{21} B_{11} + A_{22} B_{21} & A_{21} B_{12} + A_{22} B_{22} \\ A_{31} B_{11} + A_{32} B_{21} & A_{31} B_{12} + A_{32} B_{22} \\ \end{bmatrix} \tag{23} \end{align}

We keep in mind that matrix multiplication is only defined if the matrices possess the correct dimensions!

Moreover, if \mathbf{A} has the dimension m x n and \mathbf{B} the dimension n x m, both multiplications \mathbf{AB} and \mathbf{BA} are defined BUT

\mathbf{AB} \neq \mathbf{BA} \tag{24}

Why? Because \mathbf{AB} defines a matrix with the dimension m x m , whereas \mathbf{BA} defines a matrix with the dimension n x n.

Even if m = n, (24) will in general hold true.

\mathbf{AB} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \\ \end{bmatrix} = \begin{bmatrix} A_{11} B_{11} + A_{12} B_{21} & A_{11} B_{12} + A_{12} B_{22} \\ A_{21} B_{11} + A_{22} B_{21} & A_{21} B_{12} + A_{22} B_{22} \\ \end{bmatrix} \tag{25}

and

\mathbf{BA} = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \\ \end{bmatrix} \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ \end{bmatrix} = \begin{bmatrix} B_{11} A_{11} + B_{12} A_{21} & B_{11} A_{12} + B_{12} A_{22} \\ B_{21} A_{11} + B_{22} A_{21} & B_{21} A_{12} + B_{22} A_{22} \\ \end{bmatrix} \tag{26}
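A tiny NumPy example (values chosen arbitrarily) makes the non-commutativity visible:

```python
import numpy as np

# Arbitrary 2 x 2 example values
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

print(A @ B)  # [[2. 1.], [4. 3.]]
print(B @ A)  # [[3. 4.], [1. 2.]]  -> AB != BA in general
```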

Some useful identities

\mathbf{AI} = \mathbf{IA} = \mathbf{A} \tag{27}

We also keep in mind that

\boxed{(\mathbf{AB})^\intercal = \mathbf{B}^\intercal \mathbf{A}^\intercal} \tag{28}

(28) implies

(\mathbf{ABC})^\intercal = ((\mathbf{AB})\mathbf{C})^\intercal = \mathbf{C}^\intercal (\mathbf{AB})^\intercal = \mathbf{C}^\intercal \mathbf{B}^\intercal \mathbf{A}^\intercal \tag{29}

we also have

(\mathbf{Ax})^\intercal = \mathbf{x}^\intercal \mathbf{A}^\intercal \tag{30}

Moreover, it follows trivially that if c denotes a scalar (number) then

c \mathbf{AB} = \mathbf{A}(c \mathbf{B}) \tag{31}

Finally, we recall the distributive law from ordinary algebra, which also holds for matrices!

\begin{align} (\mathbf{A} + \mathbf{B}) \mathbf{x}& = \mathbf{Ax} + \mathbf{Bx} \\ \mathbf{x}^\intercal (\mathbf{A} + \mathbf{B})& = \mathbf{x}^\intercal \mathbf{A}+ \mathbf{x}^\intercal \mathbf{B} \\ (\mathbf{A} + \mathbf{B}) \mathbf{C}& = \mathbf{AC} + \mathbf{BC} \\ \mathbf{C} (\mathbf{A} + \mathbf{B})& = \mathbf{CA} + \mathbf{CB} \tag{32} \end{align}

Tip: If you want to get the resulting dimension of a matrix product, write down both matrix dimensions. For instance, matrix \mathbf{A} has the dimension 1 x m and matrix \mathbf{B} the dimension m x n. We write

\begin{matrix} 1 & x & m \\ m & x & n \\ \end{matrix}

In the next step we erase the following components

\require{cancel} \begin{matrix} 1 & x & \cancel{m} \\ \cancel{m} & x & n \\ \end{matrix}

What is left over is 1 x n, which is the resulting matrix dimension. So simple! :slight_smile:


Excursus

We often encounter matrices as representations of tensors. The order (rank) of the tensor tells us what kind of object we are dealing with.

\begin{array}{c|c} Rank&Object\\ \hline 0 & scalar\\ 1 & vector\\ 2 & N \ x \ N \ matrix\\ \geq{3} & tensor\\ \end{array}

Do tensors of the order \geq{3} exist? Yes!

A simple example would be a tensor of order 4, namely the elasticity (stiffness) tensor, having 81 components.

Hooke’s law in tensor notation looks like this

\sigma_{ij} = c_{ijkl} \cdot \epsilon_{kl}

The stiffness tensor has to be of order four so that every component of \sigma can depend on every component of \epsilon.
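If you want to play with this, here is a sketch using NumPy's einsum for the contraction in Hooke's law. I assume the standard isotropic form of c_{ijkl} built from two Lamé parameters; the numeric values are made up:

```python
import numpy as np

# Made-up Lame parameters for an isotropic material
lam, mu = 1.0, 0.5
d = np.eye(3)  # Kronecker delta

# Fourth-order stiffness tensor: 3 x 3 x 3 x 3 = 81 components
c = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d)
             + np.einsum('il,jk->ijkl', d, d)))

# Arbitrary small strain state
eps = np.array([[1e-3, 0.0, 0.0],
                [0.0,  0.0, 0.0],
                [0.0,  0.0, 0.0]])

# sigma_ij = c_ijkl eps_kl: contract over the indices k and l
sigma = np.einsum('ijkl,kl->ij', c, eps)
print(sigma)
```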


4. Determinant

For the square matrix \mathbf{A}, it is possible to calculate the so-called determinant det \mathbf{A} of \mathbf{A}. Let \mathbf{A} be of dimension n x n. If n = 1 then, by definition, det \mathbf{A} = A_{11}. Before it is possible to determine det \mathbf{A} for n \geq 2, some definitions have to be made.

If row number i and column number k are deleted, a new square matrix with dimension (n - 1) x (n - 1) emerges. The determinant of this matrix is called a minor of \mathbf{A} and is denoted by det M_{ik}. The cofactor A^c_{ik} of \mathbf{A} is defined by

\boxed{A^c_{ik} = (-1)^{i+k} \ det \ M_{ik}} \tag{33}

According to the expansion formula of Laplace we then have

\boxed{det \mathbf{A} = \sum_{k=1}^n A_{ik} A^c_{ik}} \tag{34}

where i indicates any row number in the range 1 \leq i \leq n .
As an example consider \mathbf{A} given by

\mathbf{A} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ \end{bmatrix} \tag{35}

It follows that

det \ M_{11} = A_{22}; \quad det \ M_{12} = A_{21}; \quad det \ M_{21} = A_{12}; \quad det \ M_{22} = A_{11} \tag{36}

Suppose we use formula (34) and choose i = 1; then with formulae (33) and (36) we obtain the following result

\begin{align} det \mathbf{A}& =\sum_{k=1}^2 A_{1k} A^c_{1k} = A_{11}A^c_{11} + A_{12}A^c_{12}\\ & =A_{11} (-1)^{1+1} \ det \ M_{11} + A_{12} (-1)^{1+2} \ det \ M_{12} \\ & =A_{11}A_{22} - A_{12}A_{21} \tag{37} \end{align}

We can verify that the same result is obtained if we choose i = 2 instead of i = 1 in (34).
(34) expresses the determinant as a certain linear combination of all components A_{ik} in the arbitrary row i. The expansion formula of Laplace can also be written as a linear combination of all components in an arbitrary column. We have

\boxed{det \mathbf{A} = \sum_{k=1}^n A_{kj} A^c_{kj}} \tag{38}

where j indicates any column number in the range 1 \leq j \leq n. As an example consider \mathbf{A} given as in (35) and choose j = 1. Then with (38),(33) and (36) we obtain

\begin{align} det \mathbf{A}& =\sum_{k=1}^2 A_{k1} A^c_{k1} = A_{11}A^c_{11} + A_{21}A^c_{21}\\ & =A_{11} (-1)^{1+1} \ det \ M_{11} + A_{21} (-1)^{2+1} \ det \ M_{21} \\ & =A_{11}A_{22} - A_{21}A_{12} \end{align}

in accordance with (37).
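For the curious, here is a direct implementation of the Laplace expansion (34) as a small Python sketch; the function name det_laplace is my own, and, as noted right below, this approach is far too expensive for practical use:

```python
import numpy as np

def det_laplace(A):
    """Determinant via Laplace expansion along the first row, cf. (34).

    Fine for tiny matrices; the O(n!) cost makes it unsuitable in practice.
    """
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for k in range(n):
        # Minor: delete row 0 and column k
        minor = np.delete(np.delete(A, 0, axis=0), k, axis=1)
        total += (-1) ** k * A[0, k] * det_laplace(minor)
    return total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(det_laplace(A), np.linalg.det(A))  # -2.0  -2.0...
```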

In practice, the direct use of Laplace's expansion formulae in numerical calculations is unsuitable since too many operations are necessary.
In theory, however, these expansion formulae are very important as they enable one to establish a number of properties of determinants.

It can be shown

\boxed{ \begin{align} 1.& \text{If a row (or column) consists of zeros, the determinant is zero;}\\ 2.& \text{If two rows (or columns) are proportional, the determinant is zero;}\\ 3.& \text{If a row (or column) is multiplied by a factor k, the determinant is also multiplied by k.}\\ \end{align} }

For later purposes it is convenient to recall that a row (or column) operation is an operation where all components of one row (or column) are multiplied by a certain factor and then added to another row (or column). We then have

\boxed{ \begin{align} 4.& \text{Row (or column) operations do not change the determinant;}\\ 5.& \text{If two rows (or columns) are interchanged, the determinant changes its sign.} \end{align} }

Finally, we note that

det \mathbf{A}^\intercal = det \mathbf{A} \tag{39}
det \mathbf{(AB)} = det \mathbf{A} \; det \mathbf{B} \tag{40}

whereas

det \mathbf{(A+B)} \neq det \mathbf{A} + det \mathbf{B} \tag{41}

5. Inverse Matrix

The inverse \mathbf{A}^{-1} of a square matrix \mathbf{A} is defined by

\mathbf{A}^{-1} \mathbf{A} = \mathbf{AA^{-1}} = \mathbf{I} \tag{42}

By means of the cofactor A^c_{ik} of \mathbf{A} as defined by (33), it is possible to present an explicit expression for the inverse matrix \mathbf{A}^{-1}.
For all values of i and k, we are able to construct a square matrix of dimension n x n by means of (33). If this matrix is transposed, we obtain the so-called adjoint matrix of \mathbf{A} given by

adj \mathbf{A} = \begin{bmatrix} A^c_{11} & A^c_{12} & \cdots & A^c_{1n} \\ A^c_{21} & A^c_{22} & \cdots & A^c_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A^c_{n1} & A^c_{n2} & \cdots & A^c_{nn} \\ \end{bmatrix}^\intercal \tag{43}

The inverse matrix is given by

\boxed{\mathbf{A}^{-1} = adj \mathbf{A} / det \mathbf{A}} \tag{44}

Please remember that \mathbf{A}^{-1} only exists if det \mathbf{A} \neq 0. A square matrix \mathbf{A} for which det \mathbf{A} = 0 is called a singular matrix. We recall that

(\mathbf{A^{-1}})^\intercal = (\mathbf{A^\intercal})^{-1} \tag{45}
(\mathbf{AB})^{-1} = \mathbf{B}^{-1}\mathbf{A}^{-1} \tag{46}

and the latter expression implies

(\mathbf{ABC})^{-1} = \mathbf{C}^{-1}(\mathbf{AB})^{-1} = \mathbf{C}^{-1}\mathbf{B}^{-1}\mathbf{A}^{-1} \tag{47}

We note that if

\mathbf{A}^{-1} = \mathbf{A}^\intercal \tag{48}

then the matrix is called an orthogonal matrix. If \mathbf{A} is orthogonal, we have \mathbf{A^\intercal A} = \mathbf{A A^\intercal} = \mathbf{I}.
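As an illustration, a small Python sketch of formula (44); the helper name inverse_adjugate is my own, and in practice you would simply call np.linalg.inv:

```python
import numpy as np

def inverse_adjugate(A):
    """Inverse via (44): A^{-1} = adj A / det A (requires det A != 0)."""
    n = A.shape[0]
    cof = np.empty_like(A, dtype=float)
    for i in range(n):
        for k in range(n):
            # Cofactor (33): signed determinant of the minor
            minor = np.delete(np.delete(A, i, axis=0), k, axis=1)
            cof[i, k] = (-1) ** (i + k) * np.linalg.det(minor)
    # The adjoint matrix (43) is the transposed cofactor matrix
    return cof.T / np.linalg.det(A)

A = np.array([[4.0, 7.0], [2.0, 6.0]])
print(inverse_adjugate(A))
print(np.linalg.inv(A))  # same result
```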

6. Linear equations: Number of equations is equal to number of unknowns

Consider the following system of equations

\boxed{\mathbf{Ax} = \mathbf{b}} \tag{49}

where \mathbf{A} is a square matrix (n x n) and \mathbf{x} and \mathbf{b} both have the dimension (n x 1).

If \mathbf{b} = \mathbf{0} then (49) is called a homogeneous system of equations; otherwise we call it an inhomogeneous system of equations.

Assume that

det \mathbf{A} \neq 0 \tag{50}

then by premultiplying (49) by \mathbf{A}^{-1} we get

\mathbf{A}^{-1} \mathbf{Ax} = \mathbf{A}^{-1} \mathbf{b}

which, due to (42), gives us

\boxed{\mathbf{x} = \mathbf{A}^{-1} \mathbf{b}} \tag{51}

For the homogeneous system of equations

\mathbf{Ax} = \mathbf{0} \tag{52}

the condition det \mathbf{A} \neq 0 implies that only the trivial solution \mathbf{x} = \mathbf{0} exists. A non-trivial solution of (52) therefore requires that det \mathbf{A} = 0.

Homogeneous systems of equations are of importance, for instance in buckling and vibration problems.

For homogeneous systems of equations, we have

\mathbf{Ax} = \mathbf{0} \tag{53}
\begin{align} •& \text{If det $\mathbf{A}$ = 0, a non-trivial solution exists}\\ •& \text{If det $\mathbf{A}$ $\neq$ 0, no non-trivial solution exists} \end{align}

For inhomogeneous systems of equations, we have

\mathbf{Ax} = \mathbf{b}; \qquad \mathbf{b} \neq \mathbf{0} \tag{54}
\boxed{\begin{align} •& \text{If det $\mathbf{A}$ $\neq$ 0, one unique solution given by $(51)$ exists}\\ •& \text{If det $\mathbf{A}$ = 0, no unique solution exists.} \end{align}}

Depending on the specific \mathbf{b}-matrix, we may have no solution or an infinite number of solutions if det \mathbf{A} = 0.

In practical numerical calculations, the direct establishment of the inverse \mathbf{A}^{-1} is too cumbersome, and other approaches like the common and efficient Gauss elimination are used.

In Gauss elimination, the equations are combined in such a manner that the lower left part of the new coefficient matrix consists of zeros. To get there, a multiple of the first equation is added to the second one such that the first component of the second row in the new coefficient matrix becomes zero. A multiple of the first equation is then added to the third equation such that the first component of the third row becomes zero, and so on, column by column, until the whole lower left part has been eliminated.

Equation (54) is then transformed into the form

\mathbf{A'x} = \mathbf{b'} \tag{55}

We write \mathbf{A'} and \mathbf{b'} to emphasize that the new coefficient matrix and right-hand side have changed as a result of our elimination process. \mathbf{A'} has the form

\mathbf{A'} = \begin{bmatrix} A'_{11} & A'_{12} &A'_{13} & \cdots & A'_{1n}\\ 0 & A'_{22} & A'_{23} & \cdots & A'_{2n}\\ 0 & 0 & A'_{33} & \cdots & A'_{3n} \\ \vdots & \vdots & \vdots& \ddots &\vdots\\ 0 & 0 & 0 & \cdots & A'_{nn}\\ \end{bmatrix} \tag{56}

As you can see, the lower left part of our matrix consists of zeros. We can also say that the system of equations has been triangularized. The diagonal elements A'_{11}, A'_{22}, …, A'_{nn} are called pivot elements.

Easy-peasy now! From the last equation, we obtain

x_n = \frac{b'_n}{A'_{nn}} \tag{57}

This solution is substituted into equation number n - 1 to provide x_{n-1}, and we continue in this manner until we have obtained all \mathbf{x}-components. We call this process back-substitution.
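Here is a minimal Python sketch of the whole procedure, forward elimination followed by back-substitution. It assumes non-zero pivot elements (no pivoting), so treat it as an illustration of (55)-(57) rather than production code; in practice use np.linalg.solve:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by forward elimination and back-substitution.

    Minimal sketch without pivoting: all pivot elements must be non-zero.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out the lower left part, column by column
    for j in range(n - 1):
        for i in range(j + 1, n):
            factor = A[i, j] / A[j, j]
            A[i, j:] -= factor * A[j, j:]
            b[i] -= factor * b[j]
    # Back-substitution (57): start with x_n = b'_n / A'_nn
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_solve(A, b), np.linalg.solve(A, b))  # [0.8 1.4] [0.8 1.4]
```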

7. Linear equations: number of equations is different from number of unknowns

We consider the linear homogeneous system of equations \mathbf{Ax} = \mathbf{0} where \mathbf{A} has the dimension m x n, whereas \mathbf{x} has the dimension n x 1 and \mathbf{0} the dimension m x 1 (furthermore n > m).

\mathbf{Ax} = \mathbf{0} \tag{58}
\boxed{• \text{For n > m, i.e. more unknowns than equations, we have at least n - m non-trivial solutions}}

8. Quadratic forms and positive definiteness

For a square matrix \mathbf{A} consider the quantity \mathbf{x^\intercal Ax}, which is a scalar (a number). This quantity is called a quadratic form. If

\mathbf{x^\intercal Ax} > 0 \qquad \text{for all} \quad \mathbf{x} \neq \mathbf{0} \tag{59}

then the matrix \mathbf{A} is said to be positive definite. We will see later on why positive definite matrices are very important!

From (59) it follows that the determinant of a positive definite matrix is different from zero. Assume that (59) holds and that det \mathbf{A} = 0.
Since det \mathbf{A} = 0, it is possible to determine a non-trivial solution to the homogeneous system of equations \mathbf{Ax} = \mathbf{0}, which means that (59) is violated. Therefore det \mathbf{A} \neq 0 must hold for positive definite matrices, i.e.

\boxed{\text{If $\mathbf{A}$ is positive definite then det $\mathbf{A}$ $\neq$ 0}} \tag{60}

The converse does not hold! If det \mathbf{A} \neq 0, we cannot conclude that \mathbf{A} is positive definite.

Also keep in mind that the square matrix \mathbf{A} is called positive semi-definite if

\boxed{\mathbf{x^\intercal Ax} \geq 0 \qquad \text{for all} \quad \mathbf{x} \neq \mathbf{0}} \tag{61}
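A practical way to test (59) for a symmetric matrix is to attempt a Cholesky factorization, which succeeds exactly for symmetric positive definite matrices; a short sketch (the helper name is my own):

```python
import numpy as np

def is_positive_definite(A):
    """Check (59) for a symmetric matrix via a Cholesky factorization."""
    try:
        np.linalg.cholesky(A)
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2.0, -1.0], [-1.0, 2.0]])))  # True
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))    # False
```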

9. Partitioning

To facilitate matrix multiplications, it may be useful to partition a matrix into so-called submatrices. The idea is demonstrated below, where the lines show the partitioning

\mathbf{A}=\begin{bmatrix} \begin{array}{cc|c} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ \hline A_{31} & A_{32} & A_{33} \end{array} \end{bmatrix} \tag{62}

Using this partitioning the matrix \mathbf{A} can be written in the following form

\mathbf{A}=\begin{bmatrix} \mathbf{B} & \mathbf{C} \\ \mathbf{D} & \mathbf{E} \\ \end{bmatrix} \tag{63}

where

\mathbf{B} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \\ \end{bmatrix} \tag{64}
\mathbf{C} = \begin{bmatrix} A_{13} \\ A_{23} \\ \end{bmatrix} \tag{65}
\mathbf{D} = \begin{bmatrix} A_{31} & A_{32} \\ \end{bmatrix} \tag{66}
\mathbf{E} = \begin{bmatrix} A_{33} \end{bmatrix} \tag{67}
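In NumPy, partitioning is just slicing; a small sketch with arbitrary values:

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)  # arbitrary 3 x 3 example

B = A[:2, :2]  # upper left  2 x 2 block
C = A[:2, 2:]  # upper right 2 x 1 block
D = A[2:, :2]  # lower left  1 x 2 block
E = A[2:, 2:]  # lower right 1 x 1 block

# Reassembling the blocks recovers A, matching (63)
print(np.array_equal(np.block([[B, C], [D, E]]), A))  # True
```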

10. Differentiation and integration

Consider the matrix \mathbf{A} given by

\mathbf{A} = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ \end{bmatrix} \tag{68}

If the components of the matrix depend on a variable x, we define matrix differentiation as

\frac{d \mathbf{A}}{dx} = \begin{bmatrix} \frac{d A_{11}}{dx} & \frac{d A_{12}}{dx} & \frac{d A_{13}}{dx} \\ \frac{d A_{21}}{dx} & \frac{d A_{22}}{dx} & \frac{d A_{23}}{dx} \\ \end{bmatrix} \tag{69}

So all components are differentiated with respect to x. Suppose we have a system of equations

\mathbf{A}(x)\mathbf{b} = \mathbf{f}(x) \tag{70}

where \mathbf{A} and \mathbf{f} depend on x, whereas \mathbf{b} is constant. Differentiation of (70) yields

\frac{d \mathbf{A}}{dx} \mathbf{b} = \frac{d \mathbf{f}}{dx} \tag{71}

Integration of the matrix \mathbf{A} given by (68) is defined as an integration of each component, i.e.

\int \mathbf{A}\;dx = \begin{bmatrix} \int A_{11}\;dx & \int A_{12}\;dx & \int A_{13}\;dx \\ \int A_{21}\;dx & \int A_{22}\;dx & \int A_{23}\;dx \\ \end{bmatrix}
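Since the components here are functions of x, SymPy is a natural fit for experimenting with (69) and componentwise integration; a minimal sketch with a made-up example matrix:

```python
import sympy as sp

x = sp.symbols('x')

# Arbitrary example matrix whose components depend on x
A = sp.Matrix([[x**2, sp.sin(x)],
               [3*x,  1        ]])

print(A.diff(x))       # componentwise derivative, as in (69)
print(A.integrate(x))  # componentwise integration
```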

Sources:
• Introduction to the Finite Element Method - Niels Ottosen & Hans Petersson
• Cauchy stress tensor - Wikipedia
• Tensor Rank -- from Wolfram MathWorld
• Gaussian elimination - Wikipedia


This was the third part for FEM fundamentals. The following chapter will be about the direct approach for discrete systems.

If you find any mistakes or have wishes for the next chapters please let me know.

First chapter: The Finite Element Method - Fundamentals - Introduction [1]
Second chapter: The Finite Element Method - Fundamentals - Physical Problems [2]

Do not hesitate to ask questions. Avoid writing me a private message because your question(s) might also be useful for others.


Hi jousefm,
Brilliant video! Wish I had had a tutor like that at uni. I have only watched the video up to now but will work through the tutorial later. Keep up the good work.


Hi @jousefm,

This is really helpful. Thank you so much for taking the time to write these posts.

I look forward to your next chapter,

Many Thanks,
Darren Lynch


Hi Darren (@1318980) ,

thank you for your kind words! I hope I can start the next chapter soon. The formatting part is the most time-consuming; it takes me about one day to finish a chapter :smiley:

But if you guys can profit from it, it’s all worth it :wink:

You’re welcome!

Jousef