History of Simulation
May 27, 2021
Computer simulation of physical processes is as universal in science and engineering as salt and pepper on the dinner table. Simulations are involved in advanced research, product development, and just about every weather report we see.
Finite element analysis (FEA), the simulation of a physical process using the finite element method (FEM), originated in the 1950s. Over the following 30 years, numerical analysis techniques were progressively built into the technology base of FEA.
In this 2-part blog, we take you on a chronological journey through the history of FEA, from numerical analysis to the cloud computing technology that we use today.
In the beginning, there was numerical analysis
Ancient Egyptians described numerical algorithms as root-finding methods for solving equations (c. 1650 BC). While algorithmic mathematics flourished in India and China, the Greeks refined geometry. Later, Greek mathematicians including Archimedes and Eudoxus of Cnidus used numerical approximation, the basis of the numerical integration used today, to calculate areas and volumes of geometric figures. These numerical methods played a crucial role in the development of calculus by Sir Isaac Newton and Gottfried Leibniz.
Most physical phenomena, such as those found in astronomy, geodesy, physics, and engineering, can be mathematically modeled using calculus. Exact or closed-form solutions for these models are difficult to find, especially over complex domains, which motivated the development of numerical methods to address the practical side of mathematics.
Following the application of numerical methods to unlock the mystery of gravitational force, many scientists used numerical analysis to solve problems in solid and fluid mechanics. Numerical methods were applied to electric fields and magnetism in the 19th century, and were extended to relativity and quantum mechanics in the 20th century.
The figure below shows a coarse classification of numerical analysis with the caveat that there is overlap between these areas. The figure shows a sampling of the numerous methods developed through the history of numerical analysis.
Now we’ll dive into the details about linear/nonlinear algebra, interpolation theory, ordinary/partial differential equations, and their relevance to the development of computer modeling and simulation using FEM as we know it today.
Linear and non-linear algebra
Efficiently solving linear and nonlinear equations and systems of equations stands at the core of finite element analysis. In the modern age, solving a system of linear equations comes down to matrix operations such as factorization and multiplication.
Direct algorithms such as Gaussian elimination, LU decomposition, Cholesky factorization, and the Thomas algorithm, as well as iterative methods for solving systems of linear equations such as Jacobi, Gauss-Seidel, predictor-corrector, and successive over-relaxation, are used in modern FEM solvers as needed.
The choice of solver and matrix multiplication algorithm is dictated by the type of matrices appearing in the numerical analysis. Sparse, block, diagonally dominant, and Hilbert matrices are a few types that require or fit specialized multiplication methods. Nonlinear equation solvers rely on numerical methods such as Newton-Raphson, bisection and secant, fixed-point iteration, or combinations of the bisection and secant methods.
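To make the iterative idea concrete, here is a minimal sketch of the Gauss-Seidel method in plain Python (an illustrative example, not taken from any particular FEM solver; convergence is guaranteed here because the sample matrix is diagonally dominant):

```python
def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
    """Iteratively solve A x = b, sweeping through the unknowns and
    immediately reusing each freshly updated value."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        change = 0.0
        for i in range(n):
            # Sum of off-diagonal terms using the most recent values
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new_xi = (b[i] - s) / A[i][i]
            change = max(change, abs(new_xi - x[i]))
            x[i] = new_xi
        if change < tol:  # stop once a full sweep barely moves the solution
            break
    return x

# 4x + y = 9 and 2x + 5y = 13, whose exact solution is x = 16/9, y = 17/9
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 13.0]
print(gauss_seidel(A, b))
```

Direct methods like Gaussian elimination reach the answer in a fixed number of operations, while iterative methods like this one trade that certainty for low memory use, which is why they dominate for the large sparse systems FEM produces.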
Interpolation and approximation theory
Numerical analysis relies on interpolation to generate new data points from a set of existing data points. This concept is also relevant to FEM. The solution of an FEM analysis is usually computed at “integration points” inside the finite element, and it needs to be interpolated, using the nearest-neighbor interpolation method, for example, at the finite element nodes.
There are many ways to interpolate in numerical analysis:
- Polynomial interpolation performs this operation using high-order polynomial functions
- Spline interpolation, on the other hand, performs interpolation by piecewise polynomials
- Trigonometric and multivariate interpolations are additional methods widely used in the field
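As a small illustration of the first method, polynomial interpolation can be written in the classical Lagrange form (a sketch for this post, not production interpolation code):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial that passes
    through the points (xs, ys) at a new location x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                # Basis polynomial: 1 at xi, 0 at every other data point
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Three points on y = x^2; a degree-2 interpolant reproduces it exactly
xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(lagrange_interpolate(xs, ys, 1.5))  # 2.25
```

High-order polynomials through many points can oscillate badly between the data, which is exactly why the piecewise (spline) approach in the second bullet exists.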
A snapshot of a linear regression analysis (redacted for copyright protection) using least-squares approximation, taken from the thermal analysis of a medical device, is shown in figure 2, below.
Linear regression and confidence interval plots using the least-squares optimization method
We cannot discuss interpolation without mentioning approximation theory. This theory describes how complex functions can best be represented using simpler functions, and it quantifies the error introduced by the approximation.
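The least-squares fit behind a regression plot like figure 2 is a simple instance of approximation: rather than passing through every point, the line minimizes the total squared error. A minimal sketch (illustrative only; the actual analysis in the figure is not reproduced here):

```python
def linear_least_squares(xs, ys):
    """Fit y ~ a*x + b by minimizing the sum of squared residuals,
    using the closed-form normal-equation solution for a line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)            # spread of x
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(xs, ys))                  # x-y covariance
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Noise-free data on y = 2x + 1 recovers the slope and intercept exactly
a, b = linear_least_squares([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```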
Ordinary and partial differential equations
Partial differential equations (PDEs) and ordinary differential equations (ODEs) are used to describe physical phenomena in mathematical terms. Solving the differential equations used to model engineering and physical problems over complex domains, such as fuel combustion in an engine manifold, requires numerical analysis, which yields an approximate solution. Numerical integration is usually used to find a numerical approximation of the solution of ODEs.
Numerical methods used to solve ODEs are usually classified as explicit or implicit. Explicit methods estimate the future solution using only values known at the current time; implicit methods determine the future solution by solving an equation that involves the unknown future state itself.
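The distinction is easiest to see on the decay equation dy/dt = -k*y, where the simplest member of each family, forward (explicit) and backward (implicit) Euler, can be written in a few lines (an illustrative sketch; for this linear equation the implicit update happens to have a closed form):

```python
def forward_euler(k, y0, h, steps):
    """Explicit: y_{n+1} = y_n + h * f(t_n, y_n) for dy/dt = -k*y."""
    y = y0
    for _ in range(steps):
        y = y + h * (-k * y)
    return y

def backward_euler(k, y0, h, steps):
    """Implicit: y_{n+1} = y_n + h * f(t_{n+1}, y_{n+1}).
    Here the equation y_{n+1} = y_n - h*k*y_{n+1} is solved in closed form."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + k * h)
    return y

# Both approximate y(1) = exp(-2) ~ 0.1353 for k=2, y0=1, h=0.01
print(forward_euler(2.0, 1.0, 0.01, 100))
print(backward_euler(2.0, 1.0, 0.01, 100))
```

The implicit step costs more per step in general (a nonlinear solve rather than a formula), but it stays stable at large step sizes where the explicit method blows up, which is why stiff engineering problems favor implicit schemes.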
PDEs, prevalent in translating physical phenomena into mathematical terms, are solved using finite difference, finite element, finite volume, boundary element, and meshfree methods. These methods are well adapted to initial value and boundary value problems, and they lend themselves naturally to parallelization and cloud computing.
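Of these, the finite difference method is the simplest to sketch. For the 1D heat equation u_t = alpha * u_xx, one explicit time step replaces the spatial derivative with differences of neighboring values (a toy example with made-up grid values, not a production PDE solver):

```python
def heat_step(u, alpha, dx, dt):
    """One explicit finite-difference step of u_t = alpha * u_xx on a 1D
    grid with fixed (Dirichlet) end values. Stable when alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    interior = [
        u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])  # discrete u_xx
        for i in range(1, len(u) - 1)
    ]
    return [u[0]] + interior + [u[-1]]  # boundary values held fixed

# A hot interior point diffusing toward two cold walls
u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(10):
    u = heat_step(u, alpha=1.0, dx=1.0, dt=0.25)
print(u)
```

Each grid point updates independently from its neighbors' previous values, which is the property that makes such methods so amenable to the parallel and cloud execution mentioned above.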
As we can see, numerical analysis is extremely important for the development of FEM, a technique we use today in all fields of science. We will explore the history and significant developments of FEM in part 2 of this series.