P-Fea Course: Introduction

Take this course to learn what’s “under the hood”

This course is intended for students and developers who want to learn how to write a p-adaptive code using the stressRefine library. It will start with the theory behind p-adaptivity, then take you through the practical aspects of creating p-adaptive elements using the library and writing analysis routines that perform p-adaptive analysis. It will continue on to show how to do analysis of local regions. This will all be presented first with simplified elements and solution assembly routines; then I’ll cover some code optimization and parallelization.

Having taken at least one finite element course is assumed as a prerequisite, along with familiarity with basic numerical methods concepts, but topics like efficient equation solution will be covered.

Suggested Reading:

Szabo, B, and Babuska, I, Finite Element Analysis, Wiley, NY, 1991. Superb text for understanding the implementation of hierarchical finite elements.

Zienkiewicz, O, and Taylor, R, Finite Element Method, Volume 1: The Basis, Butterworth, 2000. Excellent introduction to FEA in general, plus error estimation and adaptivity concepts.

Lesson 1- FEA Displacement Approximations And Convergence

Approximating Functions

Finite Element Analysis uses an integral expression over the entire model being solved, rather than solving the governing equations (such as the equilibrium equations in elasticity) point by point throughout a model.  A series of functions is used to approximate the unknowns, such as the displacements u,v,w in the x,y,z directions in structural analysis:

u = ∑_{i=1}^{N} a_i f_i(x,y,z),   v = ∑_{i=1}^{N} b_i g_i(x,y,z),   w = ∑_{i=1}^{N} c_i h_i(x,y,z)

These functions have to satisfy some conditions, such as continuity and the displacement boundary conditions (for example, if you have an immovable support, the displacements must be zero there).

The most famous use of this approach is the Ritz method, also known as Rayleigh-Ritz. A series of functions with unknown coefficients, like those in the equation above, is inserted into an integral expression for the potential energy (strain energy minus the work done by the loads on the boundary). If the energy is E, then the minimum potential energy is found by setting the derivatives of E with respect to the unknown coefficients to zero, for example

∂E/∂a_i = 0 for i = 1 to N,

for the displacements in the x direction (u), and similar expressions for v and w. This results in a linear system of equations that is solved for the unknown coefficients.

Consider a 1D example, solving for the deflection of a beam of length 100. Now u represents the lateral deflection of the beam, and we approximate it with some functions:

u = ∑_{i=1}^{N} a_i f_i(x)

A popular choice is f_i(x) = sin(iπx/L), where L is the length of the beam. Since L = 100, here’s what the first few functions look like:

These functions are not able to produce a nonzero value for displacement on the boundary. So if there’s an enforced displacement, we could introduce a linear function that’s nonzero on the boundary or a local function that’s nonzero on the boundary and decays rapidly into the interior, like the blue or orange functions shown:

These boundary functions can be used in addition to the sin functions.

This technique works well with functions that are smooth and linearly independent, and often gives accurate results with only a few functions. It also helps if the functions are orthogonal to each other, or close to it: we want ∫ f_i(x) f_j(x) dx over the body, in this case the length of the beam, to be close to 0 if i ≠ j and close to 1 if i = j (after the functions are normalized), and that works out well for the sin functions.
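
To make the beam example concrete, here is a minimal Rayleigh-Ritz sketch in Python (NumPy only). It assumes a simply supported beam under a uniform load, with the load q and the bending stiffness EI set to 1 purely for illustration (those values are my assumptions, not from the discussion above). The potential energy is E = ∫ (EI/2)(u″)² dx − ∫ q u dx, and setting ∂E/∂a_i = 0 gives the linear system K a = F. Because the sin functions are orthogonal, K comes out essentially diagonal.

```python
import numpy as np

# Rayleigh-Ritz for a simply supported beam (illustrative values, not from the text):
# length L = 100, uniform load q = 1, bending stiffness EI = 1.
# Basis: f_i(x) = sin(i*pi*x/L), which already satisfies u(0) = u(L) = 0.
L, q, EI, N = 100.0, 1.0, 1.0, 9
x = np.linspace(0.0, L, 2001)                  # integration grid

def f(i, x):                                   # basis function
    return np.sin(i * np.pi * x / L)

def fpp(i, x):                                 # its second derivative
    return -(i * np.pi / L) ** 2 * np.sin(i * np.pi * x / L)

# dE/da_i = 0 leads to K a = F with
# K_ij = integral of EI * f_i'' * f_j'' dx and F_i = integral of q * f_i dx
K = np.zeros((N, N))
F = np.zeros(N)
for i in range(1, N + 1):
    F[i - 1] = np.trapz(q * f(i, x), x)
    for j in range(1, N + 1):
        K[i - 1, j - 1] = np.trapz(EI * fpp(i, x) * fpp(j, x), x)

a = np.linalg.solve(K, F)

# Midspan deflection vs. the exact value 5 q L^4 / (384 EI)
u_mid = sum(a[i - 1] * f(i, L / 2) for i in range(1, N + 1))
print("Ritz midspan deflection :", u_mid)
print("exact midspan deflection:", 5 * q * L**4 / (384 * EI))

# Orthogonality of the basis makes K (essentially) diagonal
offdiag = K - np.diag(np.diag(K))
print("largest off-diagonal / largest diagonal entry of K:",
      abs(offdiag).max() / np.diag(K).max())
```

With only the first nine sin functions, the midspan deflection agrees with the exact value 5qL⁴/(384EI) to within a small fraction of a percent, which illustrates why the method gives accurate results with few functions when good functions can be found.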

To illustrate the concept of completeness, suppose we omitted f_1 in the figure above. The series would be incomplete, and even if we used an infinite number of functions, it might not converge to an accurate answer. The typical functions used in conventional finite element codes are complete, and will converge to an accurate answer as long as enough elements are used (a fine enough mesh). In a p-type finite element code, the mesh can be kept fixed, and an accurate answer will result as long as the polynomial order is high enough.
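
Here is a small numerical illustration of that point; the target displacement u(x) = x(L − x) and the least-squares fit are my own choices for the demo, not anything from the beam problem above.

```python
import numpy as np

# Completeness illustration (my own demo, not from the text): least-squares fit of
# a target displacement u(x) = x*(L - x) with the sin basis, with and without f_1.
L = 100.0
x = np.linspace(0.0, L, 2001)
u_target = x * (L - x)

def basis_matrix(indices):
    """Columns are f_i(x) = sin(i*pi*x/L) for the requested indices i."""
    return np.column_stack([np.sin(i * np.pi * x / L) for i in indices])

for N in (5, 20, 80):
    candidates = (("complete f_1..f_N", basis_matrix(range(1, N + 1))),
                  ("f_1 omitted      ", basis_matrix(range(2, N + 1))))
    for name, A in candidates:
        coeffs, *_ = np.linalg.lstsq(A, u_target, rcond=None)
        rms = np.sqrt(np.trapz((A @ coeffs - u_target) ** 2, x) / L)
        print(f"N = {N:2d}, {name}: rms error = {rms:.3e}")
```

The complete series keeps improving as N grows, while the set with f_1 omitted stalls at the size of the missing f_1 component no matter how many more functions are added.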

Now suppose we wanted to solve a 1D problem where the exact solution for displacement looks like this:

We could use a lot of the sin functions shown above, plus add in the boundary functions. But that is only easy to do in 1D, and in 2D for simpler shapes like rectangles. For complex 3D shapes it gets a lot harder to come up with functions that satisfy the boundary conditions and are continuous. One way to get around this is to use functions that are piecewise continuous. If we subdivide the line in the figure above into multiple segments, we can represent the displacement in each segment with simpler functions like polynomials. For example, we can use linear functions or quadratic Lagrange polynomials:

These functions have the property that each is equal to 1 at its own node (such as at the ends of the segment) and zero at all other nodes. Continuity is assured if the displacement is the same at nodes shared by adjacent segments. The segments are the “elements” in this 1D example. Now, to follow the very wiggly displacement above well, we’d need a lot of the linear elements; fewer of the quadratic elements would be needed. We can take this further by introducing higher order elements:
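
Before moving on to those, here is a quick sketch (plain Python, not stressRefine code) of the linear and quadratic Lagrange shape functions just described, written in the segment’s natural coordinate r in [-1, 1], including the “1 at its own node, 0 at the others” property that makes continuity through shared nodal values work.

```python
import numpy as np

# 1D Lagrange shape functions on the natural coordinate r in [-1, 1] (illustrative
# sketch, not stressRefine code).  Linear element: nodes at r = -1, 1.
# Quadratic element: nodes at r = -1, 0, 1.

def lagrange_shape(nodes, r):
    """Evaluate the Lagrange shape functions for the given node locations at r."""
    N = []
    for i, ri in enumerate(nodes):
        val = 1.0
        for j, rj in enumerate(nodes):
            if j != i:
                val *= (r - rj) / (ri - rj)
        N.append(val)
    return np.array(N)

lin_nodes = [-1.0, 1.0]
quad_nodes = [-1.0, 0.0, 1.0]

# Each function is 1 at its own node and 0 at the others (Kronecker delta property)
for nodes in (lin_nodes, quad_nodes):
    print(np.round([lagrange_shape(nodes, r) for r in nodes], 12))

# The displacement in the segment is interpolated from nodal values:
# u(r) = sum_i N_i(r) * u_i   (nodal values below are made up for illustration)
u_nodes = np.array([0.0, 0.5, -0.2])
print("u(0.3) =", lagrange_shape(quad_nodes, 0.3) @ u_nodes)
```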

The higher order functions in the figure above are the 1D versions of the basis functions proposed by professors Szabo and Babuska [1]. In this example we could get by with only one of these segments if we went to a high enough polynomial order. But there is a limit to how high the polynomial order can go in practice, because using very high polynomial order is computationally intensive. This means that in more complicated problems we’d still need multiple elements.

So there are two basic approaches to adaptivity. If we refer to the length of a segment as “h”, the basic idea with simpler elements, like the linear and quadratic ones above, is to achieve accuracy by keeping the type of element fixed (for example, it remains quadratic) and making “h” smaller by using more elements. This is h-adaptivity, which is achieved by adaptive mesh refinement. An alternative is to use higher order elements where the polynomial order can be raised. Keeping the number of elements fixed, we successively increase the p-order to achieve accuracy. This is p-adaptivity. It has the advantage that it can be achieved entirely inside the finite element analysis code, while h-adaptivity requires collaboration between the analysis code and the automesher. The disadvantage of the p-adaptive approach is that the elements are more complicated. This is especially true if you have a library of existing elements that would have to be converted to p-elements. The purpose of the stressRefine library is to provide tools to make such conversion easier, and in this course we’ll show in detail how to use the library for that purpose.
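
For reference, here is a 1D sketch of hierarchical shape functions of this kind, built from integrated Legendre polynomials in the style of [1]. The normalization is one common choice and the details here are mine, not necessarily what stressRefine uses internally. The property to notice is that every mode above linear vanishes at the segment ends, so raising the polynomial order adds functions without disturbing the nodal values already there.

```python
import numpy as np
from numpy.polynomial import legendre

# 1D hierarchical shape functions on xi in [-1, 1]: two linear nodal modes plus
# higher-order modes built from integrated Legendre polynomials (a common
# construction in the style of [1]; normalization details are illustrative).

def P(p, xi):
    """Legendre polynomial P_p evaluated at xi."""
    return legendre.legval(xi, [0.0] * p + [1.0])

def hierarchical_modes(p_order, xi):
    """All shape functions up to polynomial order p_order, evaluated at xi."""
    modes = [0.5 * (1.0 - xi), 0.5 * (1.0 + xi)]        # linear nodal modes
    for p in range(2, p_order + 1):
        # integral of P_{p-1} from -1 to xi, written via the identity
        # (2p-1) * P_{p-1} = d/dxi (P_p - P_{p-2}), then normalized
        modes.append((P(p, xi) - P(p - 2, xi)) / np.sqrt(4.0 * p - 2.0))
    return modes

xi = np.linspace(-1.0, 1.0, 5)
for i, mode in enumerate(hierarchical_modes(5, xi)):
    print(f"mode {i}:", np.round(mode, 4))

# Every higher-order mode is zero at the element ends xi = -1 and +1,
# so raising the p-order never changes the end-point (nodal) values.
ends = np.array([-1.0, 1.0])
for p in range(2, 6):
    print(f"order {p} mode at the ends:", np.round(hierarchical_modes(p, ends)[-1], 12))
```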

The piecewise continuity approach is extended to 3D by using 3D elements. These are typically simple shapes like bricks (hexahedra), wedges (pentahedra), or “tets” (tetrahedra). The conventional h version of quadratic elements looks like this:

These are called “serendipity elements”, which are based on a variation of the 1D Lagrange polynomials shown above. This is simplest to explain for the brick (c). Lagrange polynomials can be used in x, y, and z, and multiplied together. But this would require additional nodes on the middle of the faces of the element, as well as in the interior of the volume. It turns out the functions can be modified so that such nodes are not needed, which simplifies the work of meshing, and good accuracy is still achieved. I’ll omit the details of that because my focus is on p-adaptivity, but it’s explained well in [2], and there’s a discussion online here.

Until now I haven’t discussed mapping of elements. The basic shape shown above must be capable of being distorted into physical space, shown here in 2D:

The “natural coordinates” of the original elements are r,s,t. These need to be mapped to the physical coordinates x,y,z. To do this, we assume the position anywhere in the element is described by approximating functions called “shape functions”:

x = ∑_{i=1}^{N} a_i f_i(r,s,t),   y = ∑_{i=1}^{N} b_i g_i(r,s,t),   z = ∑_{i=1}^{N} c_i h_i(r,s,t)

You can use more or fewer functions to represent the shape than the displacements. This means that more or fewer coefficients, or “parameters”, are used. If fewer are used for the mapping than for the displacements, the element is “subparametric”. If the same number are used, the element is “isoparametric”, and if more are used for the mapping, the element is “superparametric”. Superparametric elements are not the best idea because they can have trouble representing rigid body motion. Isoparametric elements are the most commonly used conventional finite elements. The elements in stressRefine start out isoparametric at polynomial order 2, then become subparametric. I’ll go into more detail about that in a future lesson.

An obvious choice for the shape functions is to use the same functions used for the displacements, so not only is the same number of functions used, but the same functions themselves. This is typically what is meant by a conventional isoparametric element.
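
Here is a minimal 2D sketch of the isoparametric idea using the standard 4-node bilinear quadrilateral (the node coordinates and nodal displacements are made-up values for illustration). The same shape functions map the natural coordinates (r, s) to physical (x, y) and interpolate the displacements, and their derivatives give the Jacobian of the mapping that is needed when element integrals are evaluated.

```python
import numpy as np

# Isoparametric 4-node quadrilateral: the same bilinear shape functions describe
# both the geometry (x, y) and the displacements (u, v).  Coordinates and
# displacements below are made-up values for illustration.

corner = np.array([[-1, -1], [1, -1], [1, 1], [-1, 1]], dtype=float)  # (r_i, s_i)

def shape(r, s):
    """Bilinear shape functions N_i(r, s) = 1/4 (1 + r r_i)(1 + s s_i)."""
    return 0.25 * (1.0 + r * corner[:, 0]) * (1.0 + s * corner[:, 1])

def dshape(r, s):
    """Derivatives: column 0 is dN_i/dr, column 1 is dN_i/ds."""
    dNdr = 0.25 * corner[:, 0] * (1.0 + s * corner[:, 1])
    dNds = 0.25 * corner[:, 1] * (1.0 + r * corner[:, 0])
    return np.column_stack([dNdr, dNds])

# Physical (distorted) element: node coordinates in x, y
xy = np.array([[0.0, 0.0], [4.0, 0.5], [4.5, 3.0], [0.5, 2.5]])

r, s = 0.25, -0.4                        # a point in natural coordinates
x_phys = shape(r, s) @ xy                # mapped physical position
J = dshape(r, s).T @ xy                  # Jacobian (rows: d/dr, d/ds of x and y)

print("physical point:", x_phys)
print("Jacobian:\n", J)
print("det J         :", np.linalg.det(J))

# Displacements are interpolated with the *same* shape functions (isoparametric)
uv_nodes = np.array([[0.0, 0.0], [0.010, 0.0], [0.012, 0.003], [0.002, 0.002]])
print("displacement at (r, s):", shape(r, s) @ uv_nodes)
```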

Higher order polynomial elements are created by blending the 1D polynomials shown above in 2 and 3 dimensions. Here is what that looks like for the version created by Szabo and Babuska [1]:

Figure 1

I’ve discussed these functions in detail in a previous post. I discussed there how the number of each type of function must be chosen for completeness. Continuity requirements for these elements are trickier than for conventional elements. For conventional elements, continuity is assured as long as adjacent elements have the same nodes on shared faces and edges. An obvious violation of that would occur if a linear element were placed next to a quadratic one. If the displacement causes the quadratic element’s edge to become curved, the edge of the linear element cannot follow, so a crack opens up that is not present in the physical model:

The same issue arises with an incompatible mesh, which can occur, for example, at a part boundary in an assembly. There are methods to resolve this, such as “glued contact” constraints, that will be covered in a future lesson.

For p-elements, it is not enough to assure that the nodes, which are only at the corners of the element, are compatible. If adjacent elements are at different polynomial orders, a discontinuity occurs at a shared edge. But they can be discontinuous at a shared edge or face even at the same p-order if you’re not careful in the definition of edges and faces, as discussed previously. Assuring continuity when using higher order polynomial functions will be covered in detail in the next lesson, where for the first time we’ll take a peek at some code.

Comments on the Ritz Method

Minimizing potential energy, as is done in the Ritz method, is only valid for linear problems and the subset of nonlinear problems for which a potential energy function exists. But there are more general alternatives, such as the Galerkin method, that work for general nonlinear problems. The idea is the same: approximating functions are introduced into a global equation integrated over the body, and you come up with a set of equations to determine the unknown coefficients. However, if the governing equations are nonlinear and/or time dependent, then so are the equations for the coefficients. This requires “time marching” numerical techniques and solution methods like modified Newton-Raphson. The Galerkin method is also readily generalized to other physical systems.
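
To show what the Galerkin (weighted-residual) form looks like in practice, here is a sketch for the simple problem −u″ = f on (0, L) with u = 0 at both ends and f = 1, which I chose because it has the easy exact solution u = x(L − x)/2. Multiplying the residual by each basis function, integrating over the body, and integrating by parts leads to the same kind of linear system K a = F; for a linear problem like this one, Galerkin and Ritz give identical equations, but Galerkin still applies when no potential energy function exists.

```python
import numpy as np

# Galerkin sketch for -u'' = f on (0, L), u(0) = u(L) = 0, with f = 1 (my own
# illustrative problem).  Weak form: integral of u' w' dx = integral of f w dx
# for every weight function w = f_j, with u expanded in the sin basis.
L, N = 100.0, 7
x = np.linspace(0.0, L, 2001)

def f_basis(i, x):
    return np.sin(i * np.pi * x / L)

def f_basis_d(i, x):                              # first derivative
    return (i * np.pi / L) * np.cos(i * np.pi * x / L)

K = np.zeros((N, N))
F = np.zeros(N)
for i in range(1, N + 1):
    F[i - 1] = np.trapz(1.0 * f_basis(i, x), x)   # load term, f = 1
    for j in range(1, N + 1):
        K[i - 1, j - 1] = np.trapz(f_basis_d(i, x) * f_basis_d(j, x), x)

a = np.linalg.solve(K, F)
u_h = sum(a[i - 1] * f_basis(i, x) for i in range(1, N + 1))

u_exact = x * (L - x) / 2.0                       # exact solution of -u'' = 1
print("max |u_h - u_exact|:", np.abs(u_h - u_exact).max())
print("max |u_exact|      :", np.abs(u_exact).max())
```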

Up until the 1950s, before the advent of finite elements, a great deal of effort was put into coming up with functions that would work for more complex shapes. In the US, the National Advisory Committee for Aeronautics (NACA), predecessor to NASA, published a lot of solutions obtained this way for shapes useful in aeronautics. One of my professors at Stanford, Jean Mayers, was one of the experts who had contributed in that era. This all seems obsolete now with the advent of finite elements. But there is the sticky point that automeshers do not always succeed in creating the desired mesh, so it’s intriguing to revisit a meshless approach more similar to the Ritz method. That is exactly what was done with the external method in SimSolid. Implementing this requires sophisticated techniques to assure continuity at part boundaries, similar to the issue for incompatible meshes, which we’ll touch upon in a future lesson.

Summary

We’ve covered approaches to come up with approximation functions for displacements that can represent arbitrary motion in complex domains. We’re not done with that, because the major issue of assuring continuity for higher order elements remains. That will be the subject of lesson 2. Once we have approximating functions, how do we choose how many are needed to achieve the desired accuracy? We need a way to estimate errors in the current solution. Then, from the error estimate, we need a way to figure out how many functions are needed. For h-adaptivity, we use that to estimate the required element size, while in the p-method we use it to estimate the required polynomial order. That is the subject of lesson 3.

Homework problem: The volume functions for hierarchical elements are 0 at all element nodes, edges, and faces. Do they play any role in continuity? If not, why are they needed?

References:

  1. Szabo, B, and Babuska, I, Finite Element Analysis, Wiley, NY, 1991.
  2. Zienkiewicz, O, and Taylor, R, Finite Element Method, Volume 1: The Basis, Butterworth, 2000.
