Source for Full StressRefine functionality, and P-Fea lesson 3

The source for the UI and analysis engine (SRwithMkl) for stressRefine is now available on the stressRefine GitHub in two new repositories, fullUI and fullEngine. This includes multi-threaded and optimized elements and solution, and full breakdown functionality. The full version also has the recent enhancement that you can get a convergence plot of maximum principal stress as well as von Mises stress. There is also a choice for “Max Custom Stress” under result options. This allows you to use a user subroutine to supply your own stress criterion. To use this, modify the routine SRmodel::UpdateCustomCriterion in the SRwithMkl project. The latest versions of the UI and engine executables are available here (they are in the 2019exe folder). If you have previously installed stressRefine with stressRefineSetup.exe, these replace the executables SRwithMkl.exe and SRui.exe in your stressRefine folder.
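As an illustration of what such a criterion might look like, here is a minimal sketch of a "signed von Mises" criterion (the von Mises stress given the sign of the hydrostatic stress). The function name and argument list are assumptions for illustration only; the data actually available inside SRmodel::UpdateCustomCriterion should be taken from the existing routine in the SRwithMkl project.

#include <cmath>

// Hypothetical user criterion: "signed von Mises" stress, i.e. the von Mises
// stress carrying the sign of the hydrostatic stress. The argument list is an
// assumption; adapt it to whatever stress data SRmodel::UpdateCustomCriterion
// actually has available.
double customStressCriterion(double sx, double sy, double sz,
                             double txy, double tyz, double txz)
{
    double vm = std::sqrt(0.5 * ((sx - sy) * (sx - sy) +
                                 (sy - sz) * (sy - sz) +
                                 (sz - sx) * (sz - sx)) +
                          3.0 * (txy * txy + tyz * tyz + txz * txz));
    double hydro = sx + sy + sz;
    return (hydro < 0.0) ? -vm : vm;
}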

There are two potential uses for the stressRefine source. The first is to learn p-adaptivity, following along with the P-Fea course, or to work p-adaptivity into your own code. For that purpose, the library version is recommended, which uses the projects SRwithMklProj, SRlibSimpleProj, and SRUiProj. This version is simplified to be more readable, but it is less efficient for larger models.

The other use for the stressRefine source is for users who want the full functionality, but may want to make minor modifications. For this the more efficient version is recommended, which uses the projects fullEngine and fullUI.

The Nastran translator bdfTranslate in project bdfTranslateProj is compatible with either version.

You will notice a change in the UI if you’ve used the previous version. Previously, when it fired up separate executables like SRwithMkl, it suppressed the console window and read the redirected output from the other process, displaying a status line and progress bar. There is a regression bug in VS2019 in C# (the language of the UI), so the output redirection doesn’t work properly. I simplified things by just using the console window instead. It makes the code simpler and more robust and is just as functional.

These are all still Windows projects; this week I will concentrate on coming up with Linux makefiles.

P-Fea Course lesson 3 has also been added.

P-Fea Course. Lesson 3- Converting Element Stiffness Routine to p-Adaptive

The stiffness matrix for a conventional isoparametric finite element takes the form:

Kel = ∫ BᵀCB dV

where C is the material stiffness, or constitutive, matrix, and B expresses the strain-displacement relations. If the strain is stored as a 1D vector e = [ex, ey, ez, γxy, γxz, γyz] and the displacements as u = [ux, uy, uz], then

e = [B]u.

The strain-displacement matrix B has a 6×3 submatrix for each basis function hj, relating the six strain components to that function’s three displacement coefficients.
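For reference, in standard small-strain elasticity (the textbook form, not copied from the stressRefine source), that submatrix, using the strain ordering above, is:

B_j =
\begin{bmatrix}
\partial h_j/\partial x & 0 & 0 \\
0 & \partial h_j/\partial y & 0 \\
0 & 0 & \partial h_j/\partial z \\
\partial h_j/\partial y & \partial h_j/\partial x & 0 \\
\partial h_j/\partial z & 0 & \partial h_j/\partial x \\
0 & \partial h_j/\partial z & \partial h_j/\partial y
\end{bmatrix}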

Evaluating B is the only difference between a conventional element routine and an element with p-adaptive functions. In a conventional routine, the shape functions are used for the mapping, and the same functions are used for the basis functions. In a p-adaptive element, different functions are used for mapping and displacement. In stressRefine, quadratic serendipity functions are used for the mapping, and higher order polynomials for the displacements.

The x, y, and z derivatives of the basis functions needed in B follow from the chain rule: [∂hj/∂x, ∂hj/∂y, ∂hj/∂z]ᵀ = J⁻¹ [∂hj/∂r, ∂hj/∂s, ∂hj/∂t]ᵀ, where r,s,t are the natural coordinates in an element and J⁻¹ is the inverse of the Jacobian of the mapping from r,s,t, evaluated at each integration point. The integral over the volume is also converted into an integral in natural coordinates using the determinant of the mapping |J|. J is computed from the shape functions. It is invertible to J⁻¹ unless the element is invalid (e.g. too highly distorted).

Examining the code.

A simple implementation of a p-adaptive stiffness matrix is in the routine CalculateStiffnessMatrix in SRelement.cpp in the stressRefine library SRlibSimple.

The line

int nfun = globalFunctionNumbers.GetNum();

looks up the number of displacement basis functions in the element, which is calculated from the polynomial order of each edge of the element.

The integration points for the element are determined with the call to model.math.FillGaussPoints. The stressRefine library uses degenerated brick Gauss quadrature for tets and wedges. This could be made more efficient by using the triangular points developed by Cowper [1] for the r,s quadrature in wedges and the tetrahedral points developed by JinYun [2]. These do not go up to the higher polynomial orders needed by stressRefine, but could be used when the polynomial order is low enough, switching to the degenerated Gauss quadrature when needed.

There is a quadrature loop in the element stiffness routine:

for (gp = 0; gp < nint; gp++)

Inside that loop, the natural coordinates and the quadrature weight are determined in model.math.GetGP3d, and J⁻¹ and |J| are calculated in FillMapping. An error is raised if |J| is too small. However, before the element routines are called, stressRefine tests the mapping of each element and attempts to recover by partially flattening curved elements, as discussed previously.
The derivatives of the basis functions ∂hj/∂r, etc are calculated at each integration point with the call FillBasisFuncs.

After this the x,y and z derivatives of the basis functions can be calculated.

BᵀC is calculated in fillBTC, which accounts for the zeros in B and C.

The product (BᵀC) times B is then calculated in FillKel33GenAnisoRowWise.

This returns a 3×3 submatrix kel33 of the element matrix, which is stored in the appropriate location in the symmetrically-stored element stiffness matrix.
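To summarize the structure of the loop, here is a generic, self-contained sketch of the accumulation Kel += w·|J|·BᵀCB performed at each Gauss point. It does not use the library’s data structures or routine signatures; B is assumed stored dense and row-major, 6 × 3n for an element with n basis functions.

#include <vector>

// Generic accumulation of an element stiffness matrix by Gauss quadrature:
//   Kel += w * |J| * B^T * C * B   at each integration point.
// B is 6 x ndof, C is 6 x 6, Kel is ndof x ndof, all stored row-major.
// This is a sketch of the idea only, not the stressRefine data layout.
void accumulateStiffness(const std::vector<double>& B,   // 6 x ndof
                         const double C[6][6],           // constitutive matrix
                         double wDetJ,                    // weight times |J|
                         int ndof,                        // 3 * number of basis functions
                         std::vector<double>& Kel)        // ndof x ndof, accumulated
{
    // BTC = B^T * C, size ndof x 6
    std::vector<double> BTC(ndof * 6, 0.0);
    for (int i = 0; i < ndof; i++)
        for (int k = 0; k < 6; k++)
            for (int m = 0; m < 6; m++)
                BTC[i * 6 + k] += B[m * ndof + i] * C[m][k];

    // Kel += wDetJ * BTC * B
    for (int i = 0; i < ndof; i++)
        for (int j = 0; j < ndof; j++) {
            double kij = 0.0;
            for (int k = 0; k < 6; k++)
                kij += BTC[i * 6 + k] * B[k * ndof + j];
            Kel[i * ndof + j] += wDetJ * kij;
        }
}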

Converting an existing conventional element stiffness routine

Any stiffness routine will have a loop over the integration points, and loops for the rows and columns over the number of basis functions in the element.

A few modifications need to be made. This discussion assumes the setup functions for stressRefine have been called, and that for each conventional element for which p-adaptivity will be used, a corresponding SRelement has been created. This was discussed in lesson 2. Then the library function CountElementFunctions in SRbasis can be called to determine the number of element functions. FillGaussPoints in model.math will calculate and store the integration points and return their number. The numerical integration in the element must be modified to use this increased number of points, appropriate to the number of basis functions for the polynomial order of the element.

The derivatives of the higher-order polynomial basis functions, ∂hj/∂r, etc., must be used instead of the derivatives of the shape functions. These can be calculated directly using ElementBasisFuncs in SRbasis. Everything else in the element routine is unchanged.

Stress Recovery

The stresses in an element can be directly computed from

σ = [D]ε = [D][B]u once the displacements u are known.

This is computed the same way as for a conventional element, except, as for the case with the element stiffness matrix, the displacement gradients in [B] are calculated using the higher-order polynomial basis functions, not the conventional shape functions.
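For an isotropic material, the final [D]ε step is just Hooke’s law applied to the recovered strains. A minimal stand-in (not the library’s routine), using the strain ordering from the stiffness discussion above:

#include <array>

// Isotropic Hooke's law: stress = D * strain, with strain and stress ordered
// [ex, ey, ez, gxy, gxz, gyz] (engineering shear strains). E is Young's
// modulus, nu is Poisson's ratio. A minimal stand-in for the [D] multiply,
// not the stressRefine implementation.
std::array<double, 6> stressFromStrain(const std::array<double, 6>& e,
                                       double E, double nu)
{
    double c = E / ((1.0 + nu) * (1.0 - 2.0 * nu));
    double lambda = c * nu;               // Lame constant
    double twoMu = c * (1.0 - 2.0 * nu);  // = E/(1+nu)
    double mu = 0.5 * twoMu;              // shear modulus G
    double trace = e[0] + e[1] + e[2];
    std::array<double, 6> s;
    s[0] = lambda * trace + twoMu * e[0];
    s[1] = lambda * trace + twoMu * e[1];
    s[2] = lambda * trace + twoMu * e[2];
    s[3] = mu * e[3];   // tau = G * gamma
    s[4] = mu * e[4];
    s[5] = mu * e[5];
    return s;
}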

Adapting the polynomial order

Between this and the previous lesson, we can calculate the stiffness matrix for an element given the polynomial orders of each of its edges. In the next lesson we’ll cover adaptivity: after determining the solution for the current polynomial order, errors are calculated, and used to estimate the required polynomial order to achieve the desired accuracy.

Homework problem: Is the element routine CalculateStiffnessMatrix “thread-safe” if we want to calculate the elements in parallel?

References

  1. Cowper, G, “Gaussian Quadrature Formulae For Triangles”, Int J Num Meth Engr, 1973
  2. Jinyun, Yu, “Symmetric Gaussian Quadrature Formulae For Tetrahedral Regions”, Computer Meth. Appl. Mech. Engr, 1984.

P-Fea Lesson 2 and Source Update

I just published the second lesson in the P-Fea course. I’ve heard from more than one person that I used GitHub incorrectly when I uploaded the source in zip files. Sorry! I’m new to this open-source game. I’m working on fixing that with proper repositories, which should be available later today. They will still require the use of Visual Studio to compile. I verified that they work with the latest free version, VS2019 Community. Next I’ll be working on providing makefiles so they can be built from the command line on both Windows and Linux.

P-Fea Course. Lesson 2- Basis Function Continuity

This course will continuously refer to [1]. That book is highly recommended as a reference. Detailed handwritten notes of important derivations for stressRefine are also available here.

For conventional finite elements the basis functions used to represent displacement are continuous at interfaces between elements as long as the adjacent elements have the same nodes at the interface. These functions are often called “shape functions” because they’re the same functions used to represent the element mapping from natural coordinates to physical coordinates, which determines the element’s physical shape. Here we will make a distinction: “basis functions” is used to describe the functions representing displacement, while “mapping functions” (interchangeable with “shape functions”) is used to describe the functions representing the element mapping.

Before I get to how continuity of the basis functions is assured, let me describe a slight tweak I made in stressRefine to the classic hierarchical basis functions developed by Szabo and Babuska [1].


Variation used in stressRefine:

In the classic functions the only nodes are at the element corners and the edge functions start at polynomial order 2. In the stressRefine functions, the nodal functions are the same as the conventional finite element quadratic functions, which requires the introduction of mid-edge nodes, and the edge functions start at polynomial order 3. The advantage of this change is that if the element is at polynomial order 2, it is identical to a conventional quadratic isoparametric element. A possible disadvantage would be if this degraded the conditioning of the element stiffness. (The element stiffness matrices are assembled into a linear system Ku = f. If K is not well-conditioned, slight errors in f can cause large errors in u.) But I was able to show numerically that the element stiffness matrices are as well conditioned with the modified functions as with the classic ones, at least up to polynomial order 8.

So for the stressRefine functions, continuity across element interfaces is assured if the adjacent elements have the same nodes, up until polynomial order 2. Additional work is needed to assure continuity at shared edges and faces for polynomial orders higher than 2.

To show why this is so for the case of edges, consider the odd-numbered polynomial functions, for polynomial orders 3, 5, 7, …. In the figure shown (for a 2D mesh), the upper element has the shared edge defined from corner 1 to corner 2 in its local element numbering, while the lower element has the shared edge defined from corner 2 to corner 3, so the edge runs backwards relative to the other element. This means that the local definition of the p3 (short for polynomial order 3) function is different, and for the lower element that function has to be multiplied by -1 to assure continuity.

The situation is more complicated for faces. Consider an exploded view of two adjacent elements that share a face with global nodes 10, 11, 12:

But for the left element these correspond to local nodes 1,2,3 while on the right element they correspond to local nodes 3,1,2. So for the two elements the natural coordinates on the face rf, sf are defined differently, and the basis functions are misoriented on the right face with respect to the left.

This brings up an interesting historical aside about Mechanica. My colleague and cofounder Christos Katsis handled everything related to coding up the basis functions. He introduced the concept that there is a global face, and local faces can be rotated (or even reflected) compared to it. There is then a coordinate transformation to make sure the local face is compatible with the global face. It is further complicated by the fact that we need to relate the element’s natural coordinates r,s,t to the face natural coordinates rf, sf. So there was a 3×2 transformation relating r,s,t to the local face, and a second 2×2 transformation relating the local face to the global face. None of this is computationally intensive, but it was tricky to code. Christos took care of all of this while we were in full-blown early start-up mode, with tons of distractions. A major feat of concentration!

When I started to develop stressRefine I was not looking forward to the equivalent of all of that, but fortunately found there’s an easier way to handle it. There was a hint in Szabo and Babuska’s textbook [1], when describing implementation of the hierarchical basis functions. When discussing the 2D basis functions for a triangle, instead of referring to local coordinates r and s they used l2 − l1 and 2l3 − 1, where l1, l2, and l3 are the area coordinates of the face. From the definition of the area coordinates, r = l2 − l1 and s = 2l3 − 1. I realized that they had done this because it guarantees continuity of basis functions for elements sharing the same face as long as the nodes of the face are numbered consistently. So we just need to introduce the concept of how the local node numbering of the face relates to the node numbering of the global face.

The global face is defined by the first element that owns that face. So in the example above, the left-hand element is encountered first, and the global node numbering of the face is local node 1 = global node 10, local node 2 = global node 11, and local node 3 = global node 12. Now when we encounter the face again in the right-hand element, the local nodes of the face are 3,1,2. When we work with the basis functions of the face, compatibility is assured if we use rf = l1 − l3 and sf = 2l2 − 1 when referring to the face for the second element. This is for 2D; it turns out it generalizes to 3D if we use the same idea with volume coordinates instead of area coordinates. This also takes care of relating the element r,s,t to the face rf, sf.

I’ll show how this is coded up below. In short, we introduce a variable gno for “global node order”, so for the example face on the right hand element gno(1) = 3, gno(2) = 1, and gno(3) = 2.

All of this works for quadrilaterals in 2D, and for bricks and wedges in 3D as well. But for bricks, for example, we define rf, sf using the linear mapping functions of the brick, N1 through N8, instead of volume coordinates.

Details of Basis functions for a tet

Nodal functions: These have the value 1 at one node (corner or midedge) and 0 at all others, so they are the same as the quadratic shape functions for a 10-noded tet [2]. For example, N1 = (2L1 − 1)L1 is a typical function for a corner node, and N5 = 4L1L2 is a typical function for a midedge node.
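For reference, all ten nodal functions can be written compactly in terms of the volume coordinates. This is the standard 10-node tet; the midedge numbering below follows the common Nastran-style convention and may differ from the library’s internal ordering.

// Nodal (quadratic) functions of a 10-node tet in terms of the volume
// coordinates L1..L4 (L1 + L2 + L3 + L4 = 1). Corners 1-4 first, then the
// midedge nodes 5-10 on edges 1-2, 2-3, 1-3, 1-4, 2-4, 3-4 (Nastran-style
// ordering, assumed here for illustration).
void tetNodalFuncs(double L1, double L2, double L3, double L4, double N[10])
{
    N[0] = L1 * (2.0 * L1 - 1.0);
    N[1] = L2 * (2.0 * L2 - 1.0);
    N[2] = L3 * (2.0 * L3 - 1.0);
    N[3] = L4 * (2.0 * L4 - 1.0);
    N[4] = 4.0 * L1 * L2;   // midedge 1-2
    N[5] = 4.0 * L2 * L3;   // midedge 2-3
    N[6] = 4.0 * L1 * L3;   // midedge 1-3
    N[7] = 4.0 * L1 * L4;   // midedge 1-4
    N[8] = 4.0 * L2 * L4;   // midedge 2-4
    N[9] = 4.0 * L3 * L4;   // midedge 3-4
}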

The edge, face, and volume functions are the same as described in [1] except the edge functions start at p3.

Edge Functions: These are nonzero on one element edge and zero on all others. They start at p3 because the quadratic edge functions are the same as the nodal midedge functions. They are the integrals of Legendre polynomials shown here, blended with volume coordinates so that they are zero on all other edges:

bn = 4 LI LJ φp(re)/(1 − re²), where φp is the integral of the Legendre polynomial over the edge for polynomial number p, re = LJ − LI is the natural coordinate on the edge, and n is the basis function number in the element. The factor LI LJ, a product of volume coordinates, assures this reduces to the nonzero 1D function φp on the edge with corners I, J, and is 0 on all other edges and on all faces except those containing corners I, J.

Face Functions: These are nonzero interior to one face, zero on all edges, and blend to 0 on all other faces.

bn = 4 LI LJ LK Ppr(rf) Pps(sf), where Ppr is the Legendre polynomial in the rf direction and pr is the polynomial number in that direction, Pps is the Legendre polynomial in the sf direction and ps is the polynomial number in that direction, rf = LJ − LI and sf = 2LK − 1 are the natural coordinates on the face, and n is the basis function number in the element. The factor LI LJ LK, a product of volume coordinates, assures this is nonzero interior to the face with corners I, J, K, and is 0 on all edges and on all other faces.

Volume functions: These are 0 on all edges and faces, and nonzero in the interior of the element.

bn = LI LJ LK LL Ppr(r) Pps(s) Ppt(t), where Ppr is the Legendre polynomial in the r direction and pr is the polynomial number in that direction, and similarly for s and t. The product of all four volume coordinates assures that these are 0 on all edges and faces.

Wedges and Bricks

Similar techniques are used to blend the 1D functions for wedges and bricks. For calculating element stiffnesses, we need the derivatives of the basis functions with respect to the element natural coordinates r,s,t. This is a little messy but not difficult to code once you have the formulae derived. The basis functions and their derivatives for tets, bricks, and wedges are described in my notes here. Note that we do not have to actually evaluate the integrals of the Legendre polynomials; there are recursion formulae for computing them, described in [1].

Use of “global node order”: I described assuring continuity of the basis functions above. For edges we just need to correct for the case where the local edge runs backwards relative to the global edge. A direction variable, which is +/- 1, is assigned to each edge, and the basis function is multiplied by “direction” to assure continuity (the same thing could be achieved by switching I and J in the edge basis function if the edge runs backwards, but the direction flag is easier). For the face functions of a triangular face we need to use the “global node order” described above. So in the face functions shown above, I, J, K would normally be the element local corner numbers associated with local corner numbers 1, 2, 3 of the face. Instead we use I = gno(1), J = gno(2), and K = gno(3). In the example above, for the face on the right-hand element shared by the two elements in the exploded view, gno(1) = 3, gno(2) = 1, and gno(3) = 2. Note that the discussion here is “1-based”, but the code is in C++ so in the code it is “0-based”, e.g. gno(1) is really gno[0] in the code.
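As a sketch of the bookkeeping (the idea only, not the library’s actual data structures), the direction flag and the global node order can be computed from the corner node numbers when a global edge or face is re-encountered:

#include <array>

// Direction flag for a global edge: +1 if the element-local edge runs in the
// same sense as the global edge (same corner node order), -1 if reversed.
int edgeDirection(int localN1, int localN2, int globalN1, int globalN2)
{
    return (localN1 == globalN1 && localN2 == globalN2) ? 1 : -1;
}

// "Global node order" for a triangular face: gno[i] is the position, in the
// element-local face corner list, of global face corner i (0-based here,
// 1-based in the text above). The first element that owns the face defines
// the global corner order.
std::array<int, 3> globalNodeOrder(const std::array<int, 3>& localFaceNodes,
                                   const std::array<int, 3>& globalFaceNodes)
{
    std::array<int, 3> gno{};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++)
            if (globalFaceNodes[i] == localFaceNodes[j])
                gno[i] = j;
    return gno;
}

For the example face above, localFaceNodes = {11, 12, 10} and globalFaceNodes = {10, 11, 12} give gno = {2, 0, 1}, i.e. 3, 1, 2 in the 1-based numbering of the text.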

This may all seem messy, but it is only necessary if you dive deep into the bowels of the basis function routines, such as TetBasisFuncs in file basis.cpp in SRlibSimple. To actually use the library, two setup actions need to be performed:

Create Global Edges: For all elements in your code, create corresponding SRelements. This is done by first allocating space for them:

model.allocateElements(nel)

where nel is the number of elements. Then loop over your elements, and, for each, call

model.CreateElem(id, userid, nnodes, nodes[], mat)

where nnodes is the number of corner and midedge nodes in the element, e.g. 10 for a quadratic tet, nodes is the vector of node numbers for the element (corners first, followed by midedge nodes), and “mat” contains the element material properties. This is all described in a programmer’s guide I uploaded here. The global edges automatically get created on the fly, with the “direction” flag assigned.

Create Global Faces:

After nodes and elements have been defined, just call

model.FillGlobalFaces()

This creates global faces for all element faces in the model, and assigns the global node order variable. It also determines which elements own each global face, and which faces are boundary faces (only have one owner).
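Putting the two setup steps together, the calling sequence looks roughly like the sketch below. The calls to model.allocateElements, model.CreateElem, and model.FillGlobalFaces are the ones named above; the element loop and the getMyElement* helpers are placeholders for whatever your own code provides, and the argument types should be checked against the programmer’s guide.

// Sketch of the two setup steps. allocateElements, CreateElem, and
// FillGlobalFaces are the library calls named above; the getMyElement*
// helpers and the material handle are placeholders for your own code.
void setUpStressRefineModel(int nel)
{
    model.allocateElements(nel);
    for (int id = 0; id < nel; id++) {
        int userid = getMyElementUserId(id);     // your element's external id
        int nnodes = getMyElementNumNodes(id);   // e.g. 10 for a quadratic tet
        int* nodes = getMyElementNodes(id);      // corners first, then midedges
        int mat = getMyElementMaterial(id);      // material handle
        model.CreateElem(id, userid, nnodes, nodes, mat);
    }
    // after all nodes and elements are defined:
    model.FillGlobalFaces();
}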

Subsequently the basis function routines can be called as needed and continuity will be assured.

Homework problem: The volume coordinates for a tetrahedron have the property that they are 1 at a single node, and 0 at the opposite face. For example L1 is 1 at node 1, and 0 on face 2,3,4. Given that property, explain why the blending works properly for the edge, face, and volume functions: For edge functions, the functions are nonzero on the edge, and zero on all other edges and all faces that don’t touch the edge. For face functions, the functions are nonzero on the face, 0 on all other faces. For volume functions, the functions are only nonzero in the interior of the volume.

References

  1. Szabo, B, and Babuska, I, Finite Element Analysis, Wiley, NY, 1991
  2. Zienkiewicz, O, and Taylor, R, Finite Element Method. Volume 1- The Basis, Butterworth, 2000

stressRefine Source on github

The stressRefine source is now on GitHub at stressrefine/sros. This is still the Windows version that requires Visual Studio to build (there is a free download for VS here; click the download button under the “community version”). I put a readme.txt on GitHub that explains how to build stressRefine using VS. There is also the document “Using The StressRefine source.pdf” that explains which of the four projects you need depending on your purpose. To just link the stressRefine library to your own executable you only need “SRlibSimple”.

My next step is a Linux version, for which I’ll provide a new readme to explain how to build, as well as makefiles.

OpenSource StressRefine Uploaded

The source code for stressRefine has been uploaded here. It is all free per the terms of the GNU General Public License. There is a zip file stressRefineSource.zip containing four folders: bdftranslate (the Nastran translator), SRui (the user interface), SRlibSimple (the stressRefine library), and srwithmkl (an executable analysis engine using the library). All are free per the GNU GPL. However, the analysis engine srwithmkl needs the Intel Pardiso direct solver from the Intel MKL library, and use of that library is subject to the Intel license. This is all explained in the document “Using the stressRefine source”, which is at the same link above. Use of the GNU GPL for code that needs a separate proprietary library linked to it is allowed, though a bit awkward, as explained here. I’ll be working on replacing Intel Pardiso to remove this limitation. Only srwithmkl needs Pardiso, so the other three, bdftranslate, SRui, and SRlibSimple, are not affected.

Please see the document “Using the stressRefine source” for instructions on how to make use of it. Options are to use the library only, with your own Fea code executable; to use the library linked to srwithmkl to create an analysis engine; or to build all four of the folders mentioned above to get a fully functioning UI, translator, and analysis engine.

The first lesson in the P-Fea course was posted today. It is an introduction and theoretical background. Starting with next Monday’s lesson I’ll dive into topics that will explain the code in the stressRefine library in depth.

P-Fea Course: Introduction

Take this course to learn what’s “under the hood”

This course is intended to teach students and developers who want to learn how to write a p-adaptive code using the stressRefine library. It will start with the theory behind p-adaptivity, then take you through the practical aspects of creating p-adaptive elements using the library, and writing analysis routines that do p-adaptive analysis. It will continue on to show how to do analysis of local regions. This will all be presented first with simplified elements and solution assembly routines, then I’ll cover some code optimization and parallelization.

Having taken at least one finite element course will be assumed as a prerequisite. Familiarity with basic numerical methods concepts will be assumed, but concepts like efficient equation solution will be covered.

Suggested Reading:

Szabo, B, and Babuska, I, Finite Element Analysis, Wiley, NY, 1991. Superb text for understanding the implementation of hierarchical finite elements.

Zienkiewicz, O, and Taylor, R, Finite Element Method. Volume 1- The Basis, Butterworth, 2000. Excellent introduction to Fea in general, plus error estimation and adaptivity concepts.

Lesson 1- FEA Displacement Approximations And Convergence

Approximating Functions

Finite Element Analysis uses an integral expression over the entire model being solved, rather than solving the governing equations (such as the equilibrium equations in elasticity) point by point throughout a model.  A series of functions is used to approximate the unknowns, such as the displacements u,v,w in the x,y,z directions in structural analysis:

u = ∑i=1 to N ai fi(x,y,z)   v = ∑i=1 to N bi gi(x,y,z)   w = ∑i=1 to N ci hi(x,y,z)

These functions have to satisfy some conditions such as continuity, and displacement boundary conditions (for example if you have an immovable support, the displacements must be zero there).

The most famous use of this approach is the Ritz method, also known as Rayleigh-Ritz. A series of unknown functions like those in the equation above is inserted in an integral expression for the potential energy (strain energy plus work done on the boundary). If the energy is E, then the minimum potential energy is found by setting the derivatives of E with respect to the unknown coefficients to zero, for example

∂E/∂ai = 0 for i = 1 to N,

for the displacements in the x direction (u), and similar expressions for v and w. This results in a linear system of equations that is solved for the unknown coefficients.
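For a linear problem the potential energy is quadratic in the unknown coefficients, so the stationarity conditions are linear. Schematically (with a symmetric matrix K and vector f coming from the energy integral):

E = \tfrac{1}{2}\,\mathbf{a}^{T} K \mathbf{a} - \mathbf{a}^{T}\mathbf{f},
\qquad
\frac{\partial E}{\partial a_i} = \sum_{j=1}^{N} K_{ij} a_j - f_i = 0
\;\;\Rightarrow\;\; K\,\mathbf{a} = \mathbf{f}.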

Consider a one-D example, solving for the deflection of a beam of length 100. Now u represents the lateral deflection of the beam, and we approximate it with some functions:

u = ∑i=1 to N ai fi(x)

A popular choice is fi(x) = sin(iπx/L), where L is the length of the beam. Since L = 100, here’s what the first few functions look like:

These functions are not able to produce a nonzero value for displacement on the boundary. So if there’s an enforced displacement, we could introduce a linear function that’s nonzero on the boundary or a local function that’s nonzero on the boundary and decays rapidly into the interior, like the blue or orange functions shown:

These boundary functions can be used in addition to the sin functions.

This technique works well with functions that are smooth and linearly independent, and often gives accurate results with a few functions. It also helps if the functions are orthogonal to each other, or close to it: we want ∫fi(x)fj(x)dx over the body, in this case the length of the beam, to be close to 0 if i ≠ j (and, with the functions normalized, 1 if i = j), and that works out well for the sin functions.
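As a quick numerical check of that orthogonality (a throwaway snippet, not part of stressRefine), the trapezoidal rule gives ∫ sin(iπx/L) sin(jπx/L) dx = 0 over the beam for i ≠ j and L/2 for i = j, so the functions are orthogonal and become orthonormal if scaled by √(2/L):

#include <cmath>
#include <cstdio>

// Trapezoidal-rule check that sin(i*pi*x/L) are orthogonal on [0, L]:
// the integral of f_i*f_j is 0 for i != j and L/2 for i == j.
int main()
{
    const double pi = 3.14159265358979323846;
    const double L = 100.0;    // beam length, as in the example above
    const int n = 2000;        // integration intervals
    for (int i = 1; i <= 3; i++) {
        for (int j = 1; j <= 3; j++) {
            double sum = 0.0;
            for (int k = 0; k <= n; k++) {
                double x = L * k / n;
                double fifj = std::sin(i * pi * x / L) * std::sin(j * pi * x / L);
                sum += (k == 0 || k == n) ? 0.5 * fifj : fifj;
            }
            sum *= L / n;
            std::printf("i=%d j=%d  integral = %g\n", i, j, sum);
        }
    }
    return 0;
}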

To illustrate the concept of completeness, suppose we omitted f1 in the figure above. The series would be incomplete, and even if we used an infinite number of functions, it might not converge to an accurate answer. The typical functions used in conventional finite element codes are complete, and will converge to an accurate answer as long as enough elements are used (a fine enough mesh). In a p-type finite element code, the mesh can be kept fixed, and an accurate answer will result as long as the polynomial order is high enough.

Now suppose we wanted to solve a 1D problem where the exact solution for displacement looks like this:

We could use a lot of the sin functions shown above, plus add in the boundary functions. But that’s a lot easier to do in 1D, or in 2D for simple shapes like rectangles. For complex 3D shapes it gets much harder to come up with functions that can satisfy the boundary conditions and be continuous. One way to get around this is to use functions that are piecewise continuous. If we subdivide the line in the figure above into multiple segments, we can represent the displacement in each segment with simpler functions such as polynomials. For example we can use linear functions or quadratic Lagrange polynomials:

These functions have the property that they are nonzero at one node (such as at the ends of the segment) and zero at all other nodes. Continuity is assured if the displacement is the same at nodes shared by adjacent segments. The segments are the “elements” in this 1D example. To follow the very wiggly displacement above well, we’d need a lot of the linear elements; fewer of the quadratic elements would be needed. We can take this further by introducing higher order elements:
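For reference, on a segment with natural coordinate ξ running from −1 to 1 and nodes at ξ = −1, 0, 1 (the middle one being the midside node), the quadratic Lagrange polynomials are:

N_1(\xi) = \tfrac{1}{2}\,\xi(\xi - 1), \qquad
N_2(\xi) = 1 - \xi^{2}, \qquad
N_3(\xi) = \tfrac{1}{2}\,\xi(\xi + 1).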

These are the 1D versions of the basis functions proposed by professors Szabo and Babuska [1]. In this example we could get by with only one of these segments if we went to a high enough polynomial order. But there is a limit to how high the polynomial order can go in practice, because using very high polynomial order is computationally intensive. This means that in more complicated problems we’d still need multiple elements. So there are two basic approaches to adaptivity. If we refer to the length of a segment as “h”, the basic idea with simpler elements, like the linear and quadratic ones above, is to achieve accuracy by keeping the type of element fixed (for example it remains quadratic) and making “h” smaller by using more elements. This is h-adaptivity, which is achieved by adaptive mesh refinement. An alternative is to use higher order elements where the polynomial order can be raised. Keeping the number of elements fixed, we successively increase the p-order to achieve accuracy. This is p-adaptivity. It has the advantage that it can be achieved entirely inside the finite element analysis code, while h-adaptivity requires collaboration between the analysis code and the automesher. The disadvantage of the p-adaptive approach is that the elements are more complicated. This is especially true if you have a library of existing elements that would have to be converted to p-elements. The purpose of the stressRefine library is to provide tools to make such a conversion easier, and in this course we’ll go into detail showing how to use the library for that purpose.

The piecewise continuity approach is extended to 3D by using 3D elements. These are typically simple shapes like bricks (hexahedra), wedges (pentahedra), or “tets” (tetrahedra). The conventional h version of quadratic elements looks like this:

These are called “serendipity elements”, which are based on a variation of the 1D Lagrange polynomials shown above. This is simplest to explain for the brick (c). Lagrange polynomials can be used in x, y, and z, and multiplied together. But this would require additional nodes on the middle of the faces of the element, as well as in the interior of the volume. It turns out the functions can be modified so that such nodes are not needed, which simplifies the work of meshing, and good accuracy is still achieved. I’ll omit the details because my focus is on p-adaptivity, but it’s explained well in [2], and there’s a discussion online here. Until now I haven’t discussed the mapping of elements. The basic shape shown above must be capable of being distorted into physical space, shown here in 2D:

The “natural coordinates” of the original elements are r,s,t. These need to be mapped to the physical coordinates x,y,z. To do this, we assume the position anywhere in the element is described by approximating functions called “shape functions”:

x = ∑i=1 to N ai fi(r,s,t)   y = ∑i=1 to N bi gi(r,s,t)   z = ∑i=1 to N ci hi(r,s,t)

You can use more or fewer functions to represent the shape than the displacements. This means that more or fewer coefficients, or “parameters”, are used. If fewer are used for the mapping than for the displacement, the element is “subparametric”. If the same number are used, the element is “isoparametric”, and if more are used for the mapping the element is “superparametric”. Superparametric elements are not the best idea because they can have trouble representing rigid body motion. Isoparametric elements are the most commonly used conventional finite elements. The elements in stressRefine start out isoparametric at polynomial order 2, then become subparametric. I’ll go into more detail about that in a future lesson.

An obvious choice for the shape functions is to use the same functions used for the displacements, so that not only is the same number of functions used, but the same functions. This is typically what is meant by a conventional isoparametric element.

Higher order polynomial elements are created by blending the 1D polynomials shown above in 2 and 3 dimensions. Here is what that looks like for the version created by Szabo and Babuska [1]:

Figure 1

I’ve discussed these functions in detail in a previous post. I discussed there how the number of each type of function must be chosen for completeness. Continuity requirements for these elements are trickier than for conventional elements. For conventional elements, continuity is assured as long as the elements have the same nodes on shared edges and faces. An obvious violation of that would occur if a linear element were placed next to a quadratic one. If the displacement causes the quadratic element’s edge to become curved, the edge of the linear element cannot follow, so a crack opens up that is not present in the physical model:

The same issue occurs for an incompatible mesh, which can occur, for example, at a part boundary in an assembly. There are methods to resolve this, such as “glued contact” constraints, that will be covered in a future lesson.

For p-elements, it is not enough to assure that the nodes, which are only at the corners of the element, are compatible. If the adjacent elements are at different polynomial orders, discontinuity occurs at a shared edge. But they can be discontinuous at a shared edge or face even at the same p-order if you’re not careful in the definition of edges and faces, as discussed previously. Assuring continuity when using higher order polynomial functions will be covered in detail in the next lesson, where for the first time we’ll take a peek at some code.

Comments on the Ritz Method

Minimizing potential energy, as is done in the Ritz method, is only valid for linear problems and a subset of nonlinear problems for which a potential energy function exists. But there are more general alternatives, such as the Galerkin method, that work for general nonlinear problems. The idea is the same: approximating functions are introduced into a global equation integrated over the body, and you come up with a set of equations to determine the unknown coefficients. However, if the governing equations are nonlinear and/or time dependent, then so are the equations for the coefficients. This requires “time marching” numerical techniques and solution methods like modified Newton-Raphson. The Galerkin method is also readily generalized to other physical systems.

Up into the 1950s, before the advent of finite elements, a great deal of effort was put into coming up with functions that would work for more complex shapes. In the US, the National Advisory Committee for Aeronautics (NACA), predecessor to NASA, published a lot of solutions obtained this way for shapes useful in aeronautics. One of my professors at Stanford, Jean Mayers, was one of the experts who had contributed in that era. This seems to be all obsolete now with the advent of finite elements. But there is the sticky point that automeshers do not always succeed in creating the desired mesh, so it’s intriguing to revisit a meshless approach more similar to the Ritz method. That is exactly what was done with the external method in SimSolid. Implementing this requires sophisticated techniques to assure continuity at part boundaries, which is similar to the issue for incompatible meshes, which we’ll touch upon in a future lesson.

Summary

We’ve covered approaches to come up with approximation functions for displacements that can represent arbitrary motion in complex domains. We’re not done with that, because the major issue of assuring continuity for higher order elements remains. That will be the subject of lesson 2. Once we have approximating functions, how do we choose how many are needed to achieve the desired accuracy? We need a way to estimate the errors in the current solution, and then, from the error estimate, a way to figure out how many functions are needed. For h-adaptivity we use that to estimate the required element size, while in the p-method we use it to estimate the required polynomial order. That is the subject of lesson 3.

Homework problem: The volume functions for hierarchical elements are 0 at all element nodes, edges, and faces. Do they play any role in continuity? If not, why are they needed?

References:

  1. Szabo, B, and Babuska, I, Finite Element Analysis, Wiley, NY, 1991.
  2. Zienkiewicz, O, and Taylor, R, Finite Element Method. Volume 1- The Basis, Butterworth, 2000.