Finite Element Analysis (FEA) is the simulation of a physical phenomenon using the numerical technique called the Finite Element Method (FEM). Engineers use it to reduce the number of physical prototypes and experiments and to optimize components in the design phase, developing better products faster.
Mathematics is necessary to comprehensively understand and quantify physical phenomena such as structural or fluid behavior, thermal transport, wave propagation, and the growth of biological cells. Most of these processes are described by partial differential equations (PDEs). For a computer to solve these PDEs, however, numerical techniques have been developed over the last few decades, and one of the most prominent today is Finite Element Analysis.
Differential equations describe not only processes of nature but also physical phenomena encountered in engineering mechanics. These partial differential equations (PDEs) are complicated equations that must be solved to compute relevant quantities of a structure (like stresses ($\sigma$), strains ($\epsilon$), etc.) in order to estimate its behavior under a given load. It is important to know that FEA only gives an approximate solution: it is a numerical approach to solving these partial differential equations. Put simply, FEA is a numerical method used to predict how a part or assembly behaves under given conditions. It is the basis for modern simulation software and helps engineers find weak spots, areas of tension, etc. in their designs. The results of an FEA-based simulation are usually depicted via a color scale that shows, for example, the pressure distribution over the object.
Depending on one's perspective, FEA can be said to have its origin in the work of Euler in the 18th century. However, the earliest mathematical papers on Finite Element Analysis can be found in the works of Schellbach [1851] and Courant [1943].
FEA was independently developed by engineers in different companies and industries to address structural mechanics problems related to aerospace and civil engineering. The development for real-life applications started around the mid-1950s, as papers by Turner, Clough, Martin and Topp [1956], Argyris [1957], and Babuska and Aziz [1972] show. The books by Zienkiewicz [1971] and Strang and Fix [1973] also laid the foundations for future developments in FEA.
To run a simulation, a mesh consisting of up to millions of small elements that together form the shape of the structure needs to be created. Calculations are made for every single element, and combining the individual results gives the final result for the structure. The approximations just mentioned are usually polynomial and are, in fact, interpolations over the elements: we know values at certain points within an element but not at every point. These 'certain points' are called nodal points and are often located at the boundary of the element. The accuracy with which a variable varies is expressed by the approximation, which can be linear, quadratic, cubic, etc. To get a better understanding of approximation techniques, we will look at a one-dimensional bar. Consider the true temperature distribution T(x) along the bar in the picture below:
Let's assume we know the temperature of this bar at 5 specific positions (numbers 1–5 in the illustration). Now the question is: how can we predict the temperature in between these points? A linear approximation is quite good, but there are better ways to represent the real temperature distribution. If we choose a quadratic approximation, the temperature distribution along the bar is much smoother. Nevertheless, we see that irrespective of the polynomial degree, the distribution over the bar is known once we know the values at the nodal points. If we had an infinite bar, we would have an infinite number of unknowns (DEGREES OF FREEDOM (DOF)). But in this case, we have a problem with a finite number of unknowns:
A system with a finite number of unknowns is called a discrete system. A system with an infinite number of unknowns is called a continuous system.
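The nodal-value idea can be sketched in a few lines of Python; the nodal positions and temperatures below are assumed values, not taken from the illustration:

```python
import numpy as np

# Temperature of the bar is known at 5 nodal points only (assumed values);
# everything in between is interpolated, as described in the text.
x_nodes = np.array([0.0, 0.25, 0.5, 0.75, 1.0])     # 5 nodal positions
T_nodes = np.array([20.0, 32.0, 45.0, 38.0, 25.0])  # assumed temperatures

def interpolate(x, degree=1):
    """Approximate T(x) between the nodes: piecewise linear for degree 1,
    a least-squares polynomial fit of the nodal values for higher degrees
    (a stand-in for higher-order approximations)."""
    if degree == 1:
        return np.interp(x, x_nodes, T_nodes)
    coeffs = np.polyfit(x_nodes, T_nodes, degree)
    return np.polyval(coeffs, x)

T_linear = interpolate(0.6, degree=1)     # linear approximation at x = 0.6
T_quadratic = interpolate(0.6, degree=2)  # smoother quadratic approximation
```

At the nodal points themselves the piecewise-linear approximation reproduces the known values exactly; between them, the result depends on the chosen polynomial degree.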
For the purpose of approximation, we can write the following relation for a field quantity $u(x)$:

$$u(x) = u^h(x) + e(x) \tag{1}$$
where $u^h(x)$ is the approximation, which differs from the real solution $u(x)$ by the error term $e(x)$, which is not known beforehand. This error term has to become negligibly small for the approximate solution $u^h(x)$ to be valid. The question that arises is how $u^h(x)$ can be parametrized in a discrete function space. One possible answer is to express $u^h(x)$ as a sum of shape functions $\varphi_i(x)$ with coefficients $\alpha_i$:
$$u^h(x) = \sum_{i=1}^{n} \alpha_i \varphi_i(x) \tag{2}$$
The line illustrated at the top shows this principle for a 1D problem: $u$ can represent the temperature along the length of a rod that is heated non-uniformly. In our case, there are four elements along the x-axis, and the function defines a linear approximation of the temperature, illustrated by dots along the line.
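Equation (2) can be made concrete with piecewise-linear "hat" shape functions on a 4-element mesh; the coefficients $\alpha_i$ below are assumed illustrative values. With linear hats, $u^h$ interpolates, i.e. $u^h(x_i) = \alpha_i$ at every node:

```python
import numpy as np

# Piecewise-linear 'hat' shape functions on a 4-element mesh of [0, 1];
# the coefficients alpha_i are assumed values for illustration only.
nodes = np.linspace(0.0, 1.0, 5)                 # 5 nodes, 4 elements
alphas = np.array([0.0, 1.0, 0.5, 2.0, 0.0])     # assumed coefficients

def hat(i, x):
    """phi_i: 1 at node i, 0 at all other nodes, linear in between."""
    x = np.asarray(x, dtype=float)
    phi = np.zeros_like(x)
    xi = nodes[i]
    if i > 0:                                    # rising flank (left element)
        xl = nodes[i - 1]
        m = (x >= xl) & (x <= xi)
        phi[m] = (x[m] - xl) / (xi - xl)
    if i < len(nodes) - 1:                       # falling flank (right element)
        xr = nodes[i + 1]
        m = (x >= xi) & (x <= xr)
        phi[m] = (xr - x[m]) / (xr - xi)
    return phi

def u_h(x):
    """Eq. (2): u_h(x) = sum_i alpha_i * phi_i(x)."""
    x = np.atleast_1d(x)
    return sum(alphas[i] * hat(i, x) for i in range(len(nodes)))
```

Evaluating `u_h` at a node returns the corresponding coefficient; between nodes the result is the linear blend of the two neighboring coefficients.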
One of the biggest advantages of Finite Element Analysis is that the discretization can be varied per element: we can use smaller elements in regions where high gradients of $u$ are expected, or raise the polynomial degree of the basis functions where the steepness of the function must be captured more accurately.
Before proceeding with the Finite Element Analysis itself, it is important to understand the different types of PDEs and their suitability for FEA. This matters to everyone, irrespective of one's motivation for using finite element analysis. One should constantly remind oneself that FEA is a tool, and any tool is only as good as its user.
PDEs can be categorized as elliptic (quite smooth solutions), hyperbolic (supporting solutions with discontinuities), and parabolic (describing time-dependent diffusion problems). When solving these differential equations, boundary and/or initial conditions need to be provided; the necessary inputs depend on the type of PDE. Examples in each category are the Poisson equation (elliptic), the wave equation (hyperbolic), and the heat (Fourier) equation (parabolic).
There are two main approaches to solving elliptic PDEs: finite difference methods and variational (or energy) methods. FEA falls into the second category. Variational approaches are primarily based on the philosophy of energy minimization.
Hyperbolic PDEs are commonly associated with jumps in solutions; the wave equation, for instance, is a hyperbolic PDE. Owing to the existence of discontinuities (or jumps) in solutions, the original FEA technology (the Bubnov–Galerkin method) was believed to be unsuitable for solving hyperbolic PDEs. However, over the years, modifications have been developed to extend the applicability of FEA technology.
It is important to consider the consequences of using a numerical framework that is unsuitable for the chosen type of PDE. Such usage leads to problems known as "improperly posed": small changes in the domain parameters can lead to large oscillations in the solutions, or the solutions may exist only on part of the domain or time interval, and they are not reliable. A well-posed problem, in contrast, has a unique solution that depends continuously on the given data. For reliability, it is therefore extremely important that the problem be well posed.
The mathematical models of heat conduction and elastostatics covered in this series consist of (partial) differential equations with initial conditions as well as boundary conditions. This is also referred to as the so-called strong form of the problem. A few examples of strong forms are given in Table 1 below.
| Differential equation | Physical problem | Quantities |
|---|---|---|
| $\frac{d}{dx}\left(Ak\frac{dT}{dx}\right)+Q=0$ | One-dimensional heat flow | $T$ = temperature, $A$ = area, $k$ = thermal conductivity, $Q$ = heat supply |
| $\frac{d}{dx}\left(AE\frac{du}{dx}\right)+b=0$ | Axially loaded elastic bar | $u$ = displacement, $A$ = area, $E$ = Young's modulus, $b$ = axial loading |
| $S\frac{d^2w}{dx^2}+p=0$ | Transversely loaded flexible string | $w$ = deflection, $S$ = string force, $p$ = lateral loading |
| $\frac{d}{dx}\left(AD\frac{dc}{dx}\right)+Q=0$ | One-dimensional diffusion | $c$ = ion concentration, $A$ = area, $D$ = diffusion coefficient, $Q$ = ion supply |
| $\frac{d}{dx}\left(A\gamma\frac{dV}{dx}\right)+Q=0$ | One-dimensional electric current | $V$ = voltage, $A$ = area, $\gamma$ = electric conductivity, $Q$ = electric charge supply |
| $\frac{d}{dx}\left(A\frac{D^2}{32\mu}\frac{dp}{dx}\right)+Q=0$ | Laminar flow in a pipe (Poiseuille flow) | $p$ = pressure, $A$ = area, $D$ = diameter, $\mu$ = viscosity, $Q$ = fluid supply |

Table 1: Second-order differential equations
Second-order partial differential equations demand a high degree of smoothness of the solution $u(x)$: the second derivative of the displacement has to exist and be continuous! This also imposes requirements on parameters that cannot be influenced, such as the geometry (sharp edges) and the material parameters (different Young's moduli within a material).
To develop the finite element formulation, the partial differential equations must be restated in an integral form called the weak form. The weak form and the strong form are equivalent! In stress analysis, the weak form is called the principle of virtual work.
$$\int_0^l \frac{dw}{dx}\,AE\,\frac{du}{dx}\,dx = \left(wA\bar{t}\right)_{x=0} + \int_0^l w\,b\,dx \qquad \forall\, w \text{ with } w(l)=0 \tag{3}$$
The given equation is the so-called weak form (in this case, the weak formulation for elastostatics). The name indicates that solutions to the weak form do not need to be as smooth as solutions of the strong form, which implies weaker continuity requirements.
Keep in mind that a solution satisfying the weak form is also a solution of its strong counterpart. Also remember that the trial solutions $u(x)$ must satisfy the displacement boundary conditions. This is an essential property of the trial solutions, which is why these boundary conditions are called essential boundary conditions.
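As a sketch of how the weak form (3) leads to a linear system, the snippet below assembles a Galerkin finite element model of the axially loaded bar with linear elements, for an assumed constant load $b$, zero traction ($\bar{t}=0$) at $x=0$, and the essential condition $u(l)=0$. In this 1D constant-coefficient case, the FE solution happens to match the analytical solution $u = b(l^2-x^2)/(2AE)$ at the nodes:

```python
import numpy as np

# Galerkin FE sketch for the bar of Eq. (3): -(AE u')' = b on (0, l),
# essential BC u(l) = 0, zero traction at x = 0. A, E, b, l and the
# element count are assumed illustrative values.
A, E, b, l = 1.0, 100.0, 5.0, 2.0
n_el = 8
nodes = np.linspace(0.0, l, n_el + 1)
h = nodes[1] - nodes[0]

# Element stiffness int phi_i' AE phi_j' dx and consistent load int phi_i b dx
ke = (A * E / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
fe = (b * h / 2.0) * np.array([1.0, 1.0])

K = np.zeros((n_el + 1, n_el + 1))
f = np.zeros(n_el + 1)
for e in range(n_el):
    K[e:e + 2, e:e + 2] += ke
    f[e:e + 2] += fe

# Impose the essential boundary condition u(l) = 0 by eliminating the last node.
u = np.zeros(n_el + 1)
u[:-1] = np.linalg.solve(K[:-1, :-1], f[:-1])

u_exact = b / (2.0 * A * E) * (l**2 - nodes**2)   # analytical solution
```

Note how the essential condition is built into the trial space (the last degree of freedom is eliminated), while the zero-traction condition at $x=0$ enters naturally through the weak form and needs no explicit treatment.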
If you want to read about the equivalence between the weak and strong formulation, please read more in the forum topic about the equivalence between the weak and strong formulation of PDEs for FEA.
The Finite Element Analysis can also be derived from a variational principle. In the case of one-dimensional elastostatics, the principle of minimum potential energy holds for conservative systems. An equilibrium position is stable if the potential energy $\Pi$ of the system is at a minimum: every infinitesimal disturbance of the stable position leads to an energetically unfavorable state and induces a restoring reaction. A simple example is a glass bottle standing on the ground, where it has minimum potential energy. If it falls over, nothing happens except for a loud noise. If it stands on the corner of a table and falls to the ground, the bottle will likely break, since it carries more energy towards the ground. The variational principle makes use of this fact: the lower the energy level, the less likely we are to get a wrong solution. The total potential energy $\Pi$ of a system consists of the work of the inner forces (strain energy)
$$A_i = \int_0^l \underbrace{\frac{1}{2}E(x)A(x)\left(\frac{du}{dx}\right)^2}_{\frac{1}{2}\sigma\epsilon\,A(x)}\,dx \tag{4}$$
and the work of the external forces
$$A_a = A(x)\,\bar{t}(x)\,u(x)\Big|_{\Gamma_t} \tag{5}$$
The total energy is:
$$\Pi = A_i - A_a \tag{6}$$
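In discrete form, after FE discretization, $\Pi$ becomes the quadratic function $\Pi(u) = \tfrac{1}{2}u^{T}Ku - u^{T}f$, whose minimizer is exactly the solution of $Ku = f$. A minimal sketch with an assumed 3-DOF stiffness matrix and load vector (not taken from the text):

```python
import numpy as np

# Discrete counterpart of Eq. (6): after discretization, the total potential
# energy is Pi(u) = 1/2 u^T K u - u^T f, and equilibrium is its minimizer.
# K and f are assumed values for a small 3-dof example system.
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  4.0]])
f = np.array([1.0, 0.0, 0.5])

def total_potential(u):
    """Pi = A_i - A_a: strain energy minus work of the external forces."""
    return 0.5 * u @ K @ u - u @ f

# Setting grad Pi = K u - f = 0 recovers the usual linear FE system.
u_eq = np.linalg.solve(K, f)

# Any perturbation raises the energy, since K is symmetric positive definite.
energy_eq = float(total_potential(u_eq))
energy_perturbed = float(total_potential(u_eq + 0.1))  # shift every dof by 0.1
```

This is why the variational and weak-form routes end up at the same linear system: stationarity of $\Pi$ is the discrete equilibrium condition.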
Find more about the minimum potential energy in our related forum topic.
One of the most overlooked issues in computational mechanics that affects accuracy is mesh convergence: how small do the elements need to be so that the results of an analysis are not affected by further changes in the mesh size?
The figure above shows the convergence of a quantity of interest with increasing number of degrees of freedom. As depicted in the figure, it is important to first identify the quantity of interest. At least three runs with increasing mesh density need to be considered; as the mesh density increases, the quantity of interest converges towards a particular value. If two subsequent mesh refinements do not change the result substantially, one can assume the result to have converged.
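The refinement loop described above can be sketched as follows. The quantity of interest here is a surrogate energy integral $\int_0^1 (u')^2\,dx$ evaluated from a piecewise-linear interpolant of an assumed field $u(x) = \sin(\pi x)$, and the mesh is refined until two subsequent results differ by less than 0.1%:

```python
import numpy as np

def energy_qoi(n_el):
    """Surrogate quantity of interest int_0^1 (u')^2 dx, computed from a
    piecewise-linear interpolant of u(x) = sin(pi x) on n_el equal elements."""
    x = np.linspace(0.0, 1.0, n_el + 1)
    u = np.sin(np.pi * x)
    du = np.diff(u) / np.diff(x)        # element-wise constant gradient
    return float(np.sum(du**2 * np.diff(x)))

# h-refinement loop: halve the element size until two subsequent meshes
# change the quantity of interest by less than 0.1 %.
results = []
n = 4
while True:
    results.append((n, energy_qoi(n)))
    if len(results) >= 2:
        prev, curr = results[-2][1], results[-1][1]
        if abs(curr - prev) / abs(curr) < 1e-3:
            break
    n *= 2
```

The exact value of this integral is $\pi^2/2 \approx 4.9348$; the loop stops once successive refinements agree to within the chosen tolerance, which is exactly the convergence check described in the text.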
Turning to mesh refinement, it is not always necessary to refine the mesh in the entire model. St. Venant's principle states that the local stresses in one region do not affect the stresses elsewhere. Hence, from a physical point of view, the model can be refined only in particular regions of interest, with a transition zone from coarse to fine mesh. There are two types of refinement (h- and p-refinement), as shown in the figure above: h-refinement reduces the element size, while p-refinement increases the polynomial order of the elements.
Here it is important to distinguish between the geometric effect and mesh convergence: meshing a curved surface with straight (linear) elements requires more elements (i.e., mesh refinement) to capture the boundary accurately. Mesh refinement leads to a significant reduction in errors, and such refinement can increase the convergence of solutions without increasing the size of the overall problem being solved.
Now that the importance of convergence has been discussed, how can convergence be measured? What is a quantitative measure for convergence? The first way is to compare with analytical solutions or experimental results.
$$e_u = u - u^h \tag{7}$$

where $u$ is the analytical solution for the displacement field,

$$e_\epsilon = \epsilon - \epsilon^h \tag{8}$$

where $\epsilon$ is the analytical solution for the strain field, and

$$e_\sigma = \sigma - \sigma^h \tag{9}$$

where $\sigma$ is the analytical solution for the stress field.
As shown in the equations above, several errors can be defined for displacements, strains, and stresses. These errors could be used for comparison, and they need to decrease with mesh refinement. However, in an FEA mesh, the quantities are calculated at various points (nodal and Gauss points), so it needs to be determined where, and at how many points, the error should be evaluated.
$$\lVert e_u \rVert = c\,h^p \tag{10}$$

where $c$ is a constant, $h$ is the characteristic element size, and $p$ is the rate of convergence.
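Taking logarithms of Eq. (10) gives $\log e = \log c + p \log h$, so the rate $p$ can be estimated as the slope of a straight-line fit of measured errors against element size in log-log coordinates. A sketch using the max-norm interpolation error of an assumed field $u(x) = \sin(\pi x)$, for which linear elements should give $p \approx 2$:

```python
import numpy as np

def interp_error(n_el):
    """Max-norm error of the piecewise-linear interpolant of sin(pi x)
    on n_el equal elements, sampled on a fine grid."""
    x_nodes = np.linspace(0.0, 1.0, n_el + 1)
    x_fine = np.linspace(0.0, 1.0, 20001)
    u_h = np.interp(x_fine, x_nodes, np.sin(np.pi * x_nodes))
    return float(np.max(np.abs(np.sin(np.pi * x_fine) - u_h)))

ns = [4, 8, 16, 32, 64]
hs = np.array([1.0 / n for n in ns])
errs = np.array([interp_error(n) for n in ns])

# log e = log c + p * log h  ->  the slope of the log-log fit estimates p.
p, log_c = np.polyfit(np.log(hs), np.log(errs), 1)
```

Halving $h$ should reduce the error roughly by a factor of $2^p$, which is a useful sanity check for any convergence study.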
Learn more about how to measure convergence.
Finite Element Analysis started with significant promise in modeling several mechanical applications related to aerospace and civil engineering, and its applications are just starting to reach their potential. One of the most exciting prospects is its application to coupled problems, such as fluid-structure interaction; thermo-mechanical, thermo-chemical, and thermo-chemo-mechanical problems; piezoelectric, ferroelectric, and electromagnetic problems; and other relevant areas:
With static analysis, you can analyze linear static and nonlinear quasi-static structures. In a linear case with an applied static load, only a single step is needed to determine the structural response. Geometric, contact, and material nonlinearity can be taken into account. An example is the bearing pad of a bridge.
Dynamic analysis helps you analyze the dynamic response of a structure that experiences dynamic loads over a specific time frame. To model structural problems realistically, you can also analyze the impact of loads as well as displacements. An example is the impact on a human skull, with or without a helmet.
Eigenfrequencies and eigenmodes of a structure due to vibration can be simulated using modal analysis. The peak response of a structure or system under a given load can be simulated with harmonic analysis. An example is the start of an engine.
As discussed earlier in the section on PDEs, traditional FEM technology has demonstrated shortcomings in modeling problems related to fluid mechanics, wave propagation, etc. Several improvements have been made over the last two decades to improve the solution process and extend the applicability of finite element analysis to a wide range of problems. Some of the important ones still in use include:
The Bubnov–Galerkin method requires continuity of displacements across elements. Problems like contact, fracture, and damage, however, involve discontinuities and jumps that cannot be directly handled by the Finite Element Method. To overcome this shortcoming, XFEM was born in the 1990s. XFEM works through the enrichment of the shape functions with Heaviside step functions. Extra degrees of freedom are assigned to the nodes around the point of discontinuity so that the jumps can be represented.
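A minimal 1D sketch of this Heaviside enrichment (all nodal and enrichment values below are assumed): on a single element, the standard linear approximation $N_1 u_1 + N_2 u_2$ is augmented with $H(x - x_c)(N_1 a_1 + N_2 a_2)$, so the extra degrees of freedom $a_1, a_2$ carry the jump at $x_c$:

```python
import numpy as np

# One enriched 1D element on [0, 1] with a discontinuity at x_c = 0.4.
# Standard part: N1*u1 + N2*u2; enrichment: H(x - x_c) * (N1*a1 + N2*a2).
# All nodal values (u1, u2) and enrichment dofs (a1, a2) are assumed.
x_c = 0.4

def u_h(x, u1, u2, a1, a2):
    N1, N2 = 1.0 - x, x                              # linear shape functions
    H = np.where(np.asarray(x) >= x_c, 1.0, 0.0)     # Heaviside enrichment
    return N1 * u1 + N2 * u2 + H * (N1 * a1 + N2 * a2)

# The extra dofs carry the jump: evaluate just left and right of x_c.
left = float(u_h(x_c - 1e-9, 1.0, 2.0, 0.5, 0.5))
right = float(u_h(x_c + 1e-9, 1.0, 2.0, 0.5, 0.5))
jump = right - left
```

Without the enrichment terms, the approximation would be forced to be continuous across $x_c$; with them, the jump is represented inside the element without any remeshing.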
GFEM was introduced around the same time as XFEM, in the 1990s. It combines the features of traditional FEM and meshless methods. Shape functions are primarily defined in global coordinates and further multiplied by a partition of unity to create local elemental shape functions. One advantage of GFEM is that it prevents remeshing around singularities.
In several problems, like contact or incompressibility, constraints are imposed using Lagrange multipliers. The extra degrees of freedom arising from the Lagrange multipliers are solved for independently, and the system of equations is solved as a coupled system.
hp-FEM combines automatic mesh refinement (h-refinement) with an increase in polynomial order (p-refinement). This is not the same as doing h- and p-refinement separately: when automatic hp-refinement is used and an element is divided into smaller elements (h-refinement), each of these elements can also have a different polynomial order.
DGFEM has shown significant promise for applying the idea of finite elements to hyperbolic equations, where traditional Finite Element Methods have been weak. In addition, it has shown promise for bending and incompressible problems, which are commonly observed in most material processes. Here, additional constraints are added to the weak form, including a penalty parameter (to prevent interpenetration) and terms for the equilibrium of stresses between elements.
The Finite Element Analysis (FEA) component of SimScale enables you to virtually test and predict the behavior of structures, and hence solve complex structural engineering problems subjected to static and dynamic loading conditions. The platform uses scalable numerical methods that can handle problems that would otherwise be very challenging due to complex loading, geometries, or material properties.
1. Jacob Fish and Ted Belytschko, "A First Course in Finite Elements", Wiley, 2007
2. R. Courant, "Variational methods for the solution of problems of equilibrium and vibrations", 1943
3. K. Schellbach, "Probleme der Variationsrechnung", Berlin, 1851