
Structural Analysis Numerics

The Numerics panel contains the settings that control the equation solvers for FEA Structural Analysis models. These include the selection of algorithms, the type of residuals and their threshold values, the maximum number of iterations, time integration schemes, etc. Following is a detailed description of each setting and its effect on the solution process.

The default values for these parameters are adequate for most situations, but advanced users can leverage the available options to optimize their solution process for speed, robustness, and precision.

Linear Equations Solver

Figure 1: Default numeric parameters for the linear equations solver

Linear equation solvers are algorithms that solve the general linear equation:

$$ Ku=F $$

for the unknowns \(u\), given a matrix of coefficients \(K\) and an independent term \(F\). This type of equation arises naturally in the Finite Element Method as a result of the discretization of the continuum: when a domain of continuous variables is converted into the discrete finite element domain, the variables are computed at a finite number of points, and a set of linear equations relates them.
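As a concrete sketch of where \(Ku=F\) comes from, consider two 1D spring elements in series, fixed at the left end. The stiffness values and load below are made-up illustration numbers, not SimScale defaults:

```python
# Hypothetical illustration: assembling K u = F for two 1D spring elements
# in series, fixed at the left end. Stiffnesses k1, k2 and the load P are
# made-up values for demonstration only.
k1, k2 = 100.0, 200.0   # element stiffnesses [N/m]
P = 10.0                # force applied at the free end [N]

# Global stiffness matrix for nodes (0, 1, 2); node 0 is fixed, so only
# the 2x2 block for the free nodes (1, 2) remains after applying the BC.
K = [[k1 + k2, -k2],
     [-k2,      k2]]
F = [0.0, P]

# The system is small enough to solve directly (Cramer's rule).
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
u1 = (F[0] * K[1][1] - F[1] * K[0][1]) / det
u2 = (K[0][0] * F[1] - K[1][0] * F[0]) / det
print(u1, u2)  # displacements of nodes 1 and 2 -> 0.1, 0.15
```

The result matches the hand calculation: the full load passes through both springs, so \(u_1 = P/k_1 = 0.1\) and \(u_2 = u_1 + P/k_2 = 0.15\).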

Many optimized algorithms have been developed to efficiently solve the general linear equation. The two main families are the direct solvers and the iterative solvers:

  • Direct solvers decompose the coefficient matrix into factors by leveraging the matrix properties, and then find the solution by simple operations such as matrix-vector multiplication.
  • Iterative solvers start with a guess solution and then implement a process of successive approximations until convergence is reached.

Direct solvers are generally preferred because they are more robust, but their higher memory consumption limits the number of equations they can handle. For larger models, an iterative solver may be the only option. Iterative solvers also allow the user to set a convergence tolerance, which is very useful for optimizing run time. Following is a description of the available solvers in SimScale and their setup parameters:
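The role of the convergence tolerance and iteration cap can be sketched with a toy iterative solver. The following Jacobi iteration is my own minimal illustration (not the algorithm SimScale uses) of how those two parameters control the run:

```python
# Toy Jacobi iteration for K u = F, illustrating the roles of a convergence
# tolerance and a maximum iteration count. All values are made-up examples.
def jacobi(K, F, tol=1e-10, max_iter=1000):
    n = len(F)
    u = [0.0] * n  # initial guess
    for it in range(max_iter):
        u_new = []
        for i in range(n):
            s = sum(K[i][j] * u[j] for j in range(n) if j != i)
            u_new.append((F[i] - s) / K[i][i])
        # residual norm ||F - K u_new||_inf, used as the convergence check
        res = max(abs(F[i] - sum(K[i][j] * u_new[j] for j in range(n)))
                  for i in range(n))
        u = u_new
        if res < tol:
            return u, it + 1  # converged within tolerance
    return u, max_iter        # iteration cap reached without converging

K = [[300.0, -200.0], [-200.0, 200.0]]
F = [0.0, 10.0]
u, iters = jacobi(K, F)
print(u, iters)
```

Loosening `tol` ends the run earlier with a less accurate solution; a direct solver has no such knob, since it computes the solution in one factorization pass.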

Direct Solvers

The following parameters are common to all solvers in this category:

  • Precision singularity detection: Numerical precision for the matrix singularity assessment. A negative value turns the detection off.
  • Stop if singular: Stop the computation if the singularity assessment is positive. Can be disabled at the risk of obtaining wrong results. For nonlinear solutions, it can be replaced by the Newton convergence criteria.


MUMPS

MUltifrontal Massively Parallel Sparse direct solver. It is optimized for parallelization and for handling sparse matrices, such as those arising from the Finite Element discretization.

  • Force symmetric: Force the matrix to be treated as symmetric.
  • Matrix type: Specifies the type of matrix for the solver:
    • Asymmetric
    • Automatic detection
    • Symmetric positive definite
    • Symmetric indefinite
  • Memory percentage for pivoting: Proportion of memory reserved on top of the estimated amount for pivoting operations.
  • Linear system relative residual: Residual for the quality measurement of the solution. If negative, the assessment is disabled.
  • Preprocessing: Enables the pre-analysis of the matrix to optimize the computation.
  • Renumbering method: Algorithm for reordering the matrix equations. It has a big impact on the memory consumption of the solution process:
    • AMD: Uses the Approximate Minimum Degree method.
    • SCOTCH: Is a powerful renumbering tool, suited for most scenarios and the standard choice for MUMPS.
    • AMF: Uses the Approximate Minimum Fill method.
    • PORD: Is a renumbering tool included in MUMPS.
    • QAMD: Is a variant of AMD with automatic detection of quasi-dense matrix lines.
    • Automatic: MUMPS automatically selects the renumbering method.
  • Postprocessing: Enables additional refinement iterations to reduce the residuals in the solution. The available options are:
    • Inactive
    • Active
    • Automatic
  • Distributed matrix storage: If enabled, the matrix storage is split across the different processes. If disabled, one copy of the matrix is saved for each process.
  • Memory management: Selects the trade-off between RAM and disk usage:
    • Automatic: Lets the solver decide the optimal setting.
    • In-core: Optimizes for calculation time by storing all objects in memory.
    • Memory demand evaluation: Delivers an assessment of the optimal settings in the solver log, without actually computing the solution.
    • Out-of-core: Optimizes for memory usage by storing objects on disk.
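To illustrate why the renumbering method matters for memory, here is a minimal sketch of the Reverse Cuthill-McKee idea (my own toy example, not the internals of the tools listed above): reordering the nodes clusters the nonzero entries near the diagonal, which reduces fill-in, and therefore memory, during factorization.

```python
# Minimal Reverse Cuthill-McKee reordering sketch (illustrative only; the
# renumbering tools listed above are far more sophisticated).
from collections import deque

def rcm_order(adj):
    """adj: dict node -> set of neighbour nodes. Returns a node ordering."""
    visited, order = set(), []
    # start from a node of minimum degree (a common heuristic)
    for start in sorted(adj, key=lambda n: len(adj[n])):
        if start in visited:
            continue
        queue = deque([start])
        visited.add(start)
        while queue:
            node = queue.popleft()
            order.append(node)
            # enqueue unvisited neighbours in increasing-degree order
            for nb in sorted(adj[node] - visited, key=lambda n: len(adj[n])):
                visited.add(nb)
                queue.append(nb)
    return order[::-1]  # the "reverse" step of RCM

def bandwidth(adj, order):
    """Matrix bandwidth implied by a given node ordering."""
    pos = {n: i for i, n in enumerate(order)}
    return max(abs(pos[a] - pos[b]) for a in adj for b in adj[a])

# A small chain-like mesh graph, numbered badly on purpose.
adj = {0: {4}, 1: {4, 3}, 2: {3}, 3: {1, 2}, 4: {0, 1}}
print(bandwidth(adj, sorted(adj)))      # bandwidth with the original numbering
print(bandwidth(adj, rcm_order(adj)))   # reduced bandwidth after reordering
```

A smaller bandwidth means the factorization creates fewer new nonzeros, which is exactly the memory effect the Renumbering method setting controls.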


LDLT

Performs a classic Gauss elimination process on the coefficient matrix.
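As a sketch of the idea (an illustration of the textbook algorithm, not the production implementation), Gauss elimination reduces the system to upper-triangular form and then back-substitutes:

```python
# Illustrative Gauss elimination with back-substitution. No pivoting is
# performed, so this assumes a well-conditioned matrix; production solvers
# pivot for numerical stability.
def gauss_solve(K, F):
    n = len(F)
    A = [row[:] + [f] for row, f in zip(K, F)]  # augmented matrix [K | F]
    # forward elimination: zero out entries below the diagonal
    for col in range(n):
        for row in range(col + 1, n):
            factor = A[row][col] / A[col][col]
            for j in range(col, n + 1):
                A[row][j] -= factor * A[col][j]
    # back substitution on the resulting upper-triangular system
    u = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * u[j] for j in range(i + 1, n))
        u[i] = (A[i][n] - s) / A[i][i]
    return u

print(gauss_solve([[300.0, -200.0], [-200.0, 200.0]], [0.0, 10.0]))
```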


Multfront

Performs a multifrontal matrix factorization to build an LU or Cholesky decomposition of the sparse matrix.

  • Renumbering method: Selects the algorithm for matrix pre-processing:
    • MDA: Choose this option for large models with 50,000 or more degrees of freedom.
    • MD: Choose this option for smaller models.
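The factorization step itself can be sketched as follows. This is a plain dense Doolittle LU decomposition of my own, purely for illustration; multifrontal solvers organize the same work efficiently on sparse matrices:

```python
# Illustrative dense LU (Doolittle) factorization: K = L U, after which
# K u = F is solved by one forward and one backward substitution.
def lu_decompose(K):
    n = len(K)
    L = [[float(i == j) for j in range(n)] for i in range(n)]  # unit lower
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):      # row i of U
            U[i][j] = K[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (K[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, F):
    n = len(F)
    y = [0.0] * n  # forward substitution: L y = F
    for i in range(n):
        y[i] = F[i] - sum(L[i][k] * y[k] for k in range(i))
    u = [0.0] * n  # backward substitution: U u = y
    for i in range(n - 1, -1, -1):
        u[i] = (y[i] - sum(U[i][k] * u[k] for k in range(i + 1, n))) / U[i][i]
    return u

L, U = lu_decompose([[300.0, -200.0], [-200.0, 200.0]])
print(lu_solve(L, U, [0.0, 10.0]))
```

Once factored, the same \(L\) and \(U\) can be reused for multiple right-hand sides, which is one reason direct solvers are attractive when many load cases share the same stiffness matrix.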

Iterative Solvers

The following parameters are common to all solvers in this category:

  • Max iterations: Maximum allowed iterations of the solution algorithm. If set to zero, the algorithm estimates the value automatically.
  • Convergence threshold: Target value for the solution residual. If after any iteration the residual falls below this value, the algorithm stops.
  • Preconditioner: Selects the algorithm used to precondition the matrix for an optimal solution search.
    • MUMPS LDLT: Complete Cholesky decomposition in single precision.
      • Update rate: Reactualization interval for the preconditioner, in number of iterations.
      • Memory percentage for pivoting: Reserved memory for pivoting operations.
    • Incomplete LDLT: Incomplete Cholesky decomposition.
      • Matrix completeness: Sets the level of completeness for the approximation of the preconditioning matrix to the inverse. A larger value means a more complete approximation.
      • Preconditioner matrix growth: Speed of filling of the incomplete approximation matrix.


PETSc

Uses different algorithms and components from the Portable, Extensible Toolkit for Scientific Computation (PETSc) library.

  • Algorithm: Selects the solution algorithm:
    • CG: Conjugate Gradient
    • CR: Conjugate Residual
    • GCR: Generalized Conjugate Residual
    • GMRES: Generalized Minimal Residual, the best compromise between robustness and speed.
  • Preconditioner: In addition to the two common preconditioners, the following options are available for this solver:
    • Jacobi: Diagonal preconditioning.
    • SOR: Successive over-relaxation.
    • Inactive: No preconditioning of the matrix is performed.
  • Renumbering method: Algorithm for pre-processing of the matrix:
    • RCMK: Reverse Cuthill-McKee
    • Inactive
  • Distributed matrix storage: If enabled, the matrix storage is split across the different processes. If disabled, one copy of the matrix is saved for each process.


GCPC

Uses the standard Conjugate Gradient method for the solution, combined with preconditioning of the matrix.
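For reference, a bare (unpreconditioned) Conjugate Gradient iteration looks like the sketch below. This is the textbook algorithm on a made-up symmetric positive definite example, not SimScale's implementation:

```python
# Illustrative Conjugate Gradient iteration for a symmetric positive
# definite K (plain CG without preconditioning; example data is made up).
def cg(K, F, tol=1e-10, max_iter=100):
    n = len(F)
    u = [0.0] * n
    r = F[:]              # residual r = F - K u (u starts at zero)
    p = r[:]              # initial search direction
    rs = sum(x * x for x in r)
    for it in range(max_iter):
        Kp = [sum(K[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Kp[i] for i in range(n))
        u = [u[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Kp[i] for i in range(n)]
        rs_new = sum(x * x for x in r)
        if rs_new ** 0.5 < tol:   # convergence threshold on the residual norm
            return u, it + 1
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return u, max_iter

u, iters = cg([[300.0, -200.0], [-200.0, 200.0]], [0.0, 10.0])
print(u, iters)
```

In exact arithmetic CG converges in at most \(n\) iterations for an \(n \times n\) system; in practice, a good preconditioner is what keeps the iteration count low for large, ill-conditioned FEA matrices.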



Last updated: January 22nd, 2022
