Adaptive mesh refinement is now commonly available in commercial FEA codes. It is a powerful method that, in my experience, works well and can yield high accuracy. I’ve tried several implementations, including SOLIDWORKS Simulation, Ansys, and Autodesk Fusion, with good results. The disadvantages of adaptive mesh refinement, as I see them, are:
- For large models, automeshing can be difficult, and adaptive mesh refinement will fail if the model cannot be remeshed.
- In some commercial implementations, adaptive mesh refinement is not available for mixed meshes such as shells and solids.
The main advantage of p-adaptivity is that more accurate results can be extracted without remeshing. Consider a model with a local stress concentration:
The mesh in the vicinity of the fillet follows the geometry well. But the stresses vary rapidly there, so with conventional FEA this mesh yields a result that is in error by 20%. To achieve more accurate results, the mesh would have to be refined in that vicinity. But this mesh is ideal for p-adaptivity: accurate stresses can be calculated simply by increasing the polynomial orders of the elements.
The realization that modern automeshers in conventional FEA codes produce high-quality meshes (especially if a mesh setting like “curvature-based refinement” is turned on, which I highly recommend) was the genesis of stressRefine. Existing commercial p-adaptive codes like ESRD’s StressCheck and Mechanica (now PTC/Creo Simulate) use elements with higher-order mapping or exact geometry, which are better suited to special meshes that are coarser than conventional FEA meshes. What if I instead developed a new code designed from scratch to use conventional FEA elements? Then accurate stresses could be calculated from FEA meshes directly. Further, local regions of large models could be reanalyzed very quickly for more accurate stress, using the results from the “global” FEA solution. This is also referred to as breakout analysis, and it has been implemented in various commercial codes; for example, it is called “submodeling” in SOLIDWORKS Simulation. But it is typically not adaptive: the user is responsible for manually refining the mesh in the submodel. In StressCheck, breakout analysis requires creating a new local model, with tools provided to transfer loads from an external global FEA model. Both approaches can require considerable user intervention. But with a code like stressRefine that works with conventional FEA meshes and results, and is p-adaptive, the procedure can be completely automated: the breakout region is automatically extracted and then analyzed adaptively.
With that lengthy preamble, now I’ll get into how this is implemented.
The wrong way
The most obvious way to implement p-adaptivity is to use conventional higher-order isoparametric elements: start with a linear 4-node tetrahedral element, go to quadratic by adding mid-edge nodes (the 10-node tet), and keep adding nodes to get cubic, quartic, etc., elements.
The problem is that these elements become numerically ill-conditioned at high p-orders and are much more sensitive to distortion. There can also be severe problems if stresses are calculated directly at the corners of elements, where the distortion effects are particularly bad. Despite this limitation, the method has been implemented in some commercial codes. Back in the 1990s, when Rasna was making a lot of marketing noise, some of our competitors did the quick higher-order isoparametric implementation so they could say “we have p-elements too”. One of them was Cosmos, and their implementation still persists as p-adaptivity in SOLIDWORKS Simulation. Since that code has a robust implementation of adaptive mesh refinement, I recommend using that instead and avoiding their p-implementation. It can demo reasonably well on some small problems, but it is easy to find models for which it fails or gives inaccurate results. I did a head-to-head comparison of stressRefine vs. SOLIDWORKS Simulation p-adaptivity on 25 models of varying complexity. StressRefine got an accurate answer on all of them, in all cases more accurate than SOLIDWORKS Simulation p-adaptivity, which failed or got an inaccurate answer on more than 75% of the models, including some simple ones. This comparison is summarized here. My intent is not to pick on SOLIDWORKS Simulation; I like their product. But poor commercial implementations like this can give the mistaken impression that “p-adaptivity doesn’t work”. It does work, if done correctly.
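The ill-conditioning is easy to demonstrate in 1D. The sketch below (an illustration of the general phenomenon, not code from any product mentioned) builds the mass matrix of a nodal Lagrange basis on equally spaced nodes, the basis a naive higher-order isoparametric element uses, and shows its condition number deteriorating as the order is raised:

```python
import numpy as np

def lagrange_mass_matrix(p):
    """1D mass matrix M_ij = integral of L_i * L_j over [-1, 1] for the
    Lagrange (nodal) basis on p + 1 equally spaced nodes."""
    nodes = np.linspace(-1.0, 1.0, p + 1)
    # Each shape function is 1 at its own node and 0 at all the others.
    coeffs = [np.polyfit(nodes, np.eye(p + 1)[i], p) for i in range(p + 1)]
    M = np.empty((p + 1, p + 1))
    for i in range(p + 1):
        for j in range(p + 1):
            antideriv = np.polyint(np.polymul(coeffs[i], coeffs[j]))
            M[i, j] = np.polyval(antideriv, 1.0) - np.polyval(antideriv, -1.0)
    return M

# Conditioning steadily worsens as more equally spaced nodes are added.
conds = {p: np.linalg.cond(lagrange_mass_matrix(p)) for p in (2, 4, 8)}
```

The same trend shows up in element stiffness matrices, and distortion of the element mapping makes it worse.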
The right way
The robust way to implement p-elements was worked out by professors Szabo and Babuska. They proved convergence theorems for these functions and worked out the details of using them in their book (Szabo and Babuska, 1991).
Finite elements use approximating functions to represent the element mapping and the displacements.
Mapping: The physical coordinates (x, y, z) of a point at natural coordinates (r, s, t) in an element are
x(r,s,t) = ∑ xi Ni    y(r,s,t) = ∑ yi Ni    z(r,s,t) = ∑ zi Ni
where the Ni(r,s,t) are the shape functions and (xi, yi, zi) are the nodal coordinates.
Displacements: The displacement components (u, v, w) of a point at natural coordinates (r, s, t) in an element are
u(r,s,t) = ∑ ui hi    v(r,s,t) = ∑ vi hi    w(r,s,t) = ∑ wi hi
where the hi(r,s,t) are the displacement basis functions.
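As a minimal 1D sketch of these formulas (the nodal values below are made up for illustration), here is the interpolation on a quadratic element edge with nodes at r = -1, 0, +1:

```python
import numpy as np

def shape(r):
    """Quadratic 1D shape functions with nodes at r = -1, 0, +1."""
    return np.array([r * (r - 1) / 2, (1 - r) * (1 + r), r * (r + 1) / 2])

x_nodes = np.array([0.0, 0.6, 1.0])    # hypothetical nodal coordinates
u_nodes = np.array([0.0, 0.01, 0.03])  # hypothetical nodal displacements

r = 0.5
N = shape(r)
x = N @ x_nodes  # x(r) = sum x_i N_i
u = N @ u_nodes  # u(r) = sum u_i h_i; here h_i = N_i (isoparametric)
```

The shape functions sum to 1 everywhere (partition of unity) and reproduce the nodal values at the nodes, which is what makes the interpolation well defined.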
In a conventional isoparametric element, the same functions are used for shape and displacement, for example quadratic serendipity functions. In a “p-element”, different functions are used. As mentioned above, previous commercial p-codes have used higher-order mapping for the shape functions, or exact geometric mapping from the underlying CAD geometry. This has the disadvantage that it can fail to recover strain-free rigid-body motion at low p-order, although the problem goes away as the p-order is raised.
StressRefine uses the same quadratic mapping as conventional FEA codes. For this reason it would not be suitable for the coarser meshes produced by codes like StressCheck or Mechanica, but it works well with conventional meshes such as in the figure above. Since StressRefine starts at p2 when representing displacements, rigid body motion is always represented exactly.
The conventional functions have the value 1 at one of the element’s nodes and 0 at all the others. In 1D they look like this:
The p-basis functions developed by professors Szabo and Babuska are shown here in 1D:
These are called hierarchical functions because when you raise the p-order, the lower functions remain the same. For example, if you have an element at p2, the two p1 functions and the p2 function are present. If you raise it to p3, those functions remain unchanged and the p3 function is added. This is useful computationally: you don’t have to recalculate the element stiffness coefficients for the lower functions, you just add the coefficients for the function you introduced. This is not true of conventional finite elements, where, for example, the functions for a cubic element are completely different from those of a quadratic element.
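The nesting is easy to verify numerically. The sketch below assumes the standard Szabo-Babuska form of the 1D functions, whose derivatives are proportional to Legendre polynomials; it builds the 1D stiffness integrals at p3 and p4 and checks that the p3 block is unchanged:

```python
import numpy as np
from numpy.polynomial import legendre as L

def dbasis(p, x):
    """Derivatives of the 1D hierarchical basis at points x: two linear
    nodal functions, then functions phi_j (j = 2..p) with
    phi_j' = sqrt((2j-1)/2) * P_{j-1} (Legendre polynomial)."""
    rows = [np.full_like(x, -0.5), np.full_like(x, 0.5)]
    for j in range(2, p + 1):
        rows.append(np.sqrt((2 * j - 1) / 2.0) * L.Legendre.basis(j - 1)(x))
    return np.array(rows)

def stiffness(p):
    x, w = L.leggauss(p + 2)  # Gauss rule, exact for these products
    D = dbasis(p, x)
    return (D * w) @ D.T      # K_ij = integral of phi_i' * phi_j'

K3, K4 = stiffness(3), stiffness(4)
# Raising p only borders the old matrix: the p3 block is unchanged.
print(np.allclose(K4[:4, :4], K3))  # True
```

With conventional nodal elements the analogous check fails: every entry of the cubic stiffness matrix differs from the quadratic one.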
The Szabo-Babuska functions are orthogonal in 1D in the energy sense: for any two different functions, the integral over the edge of the product of their derivatives is zero. When they are blended together into a 2D or 3D element, this makes them very well conditioned numerically, even at high p-orders.
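This orthogonality of the derivatives (the quantity that enters the stiffness matrix) can be checked directly. Assuming the standard closed form of the higher-order functions, phi_j = (P_j - P_{j-2}) / sqrt(2(2j-1)), the derivatives are exactly orthonormal:

```python
import numpy as np
from numpy.polynomial import legendre as L

# phi_j' = sqrt((2j-1)/2) * P_{j-1}, the derivative of the closed-form
# Szabo-Babuska edge function phi_j = (P_j - P_{j-2}) / sqrt(2*(2j-1)).
x, w = L.leggauss(12)  # Gauss rule, exact well past order 8
dphi = np.array([np.sqrt((2 * j - 1) / 2.0) * L.Legendre.basis(j - 1)(x)
                 for j in range(2, 9)])  # functions for p = 2..8
G = (dphi * w) @ dphi.T                  # Gram matrix of the derivatives
print(np.allclose(G, np.eye(7)))         # True: orthonormal
```

It is this near-diagonal structure that keeps the element matrices well conditioned as the p-order is raised.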
For a 3D element, the corner functions are calculated separately and are the same as in a conventional linear element: they have the value 1 at one element corner and 0 at the others. For a tetrahedron, for example, they are just the element volume coordinates. For the higher-order edge functions (p2 and up), the 1D functions shown above are used on each edge, but they have to be blended with linear functions so that they are nonzero on one edge and zero on all other element edges. The details are described in Szabo and Babuska’s book.
At polynomial order 3 and up, additional functions are needed. First there are face functions (also called bubble functions), which are nonzero on one face and zero on all other faces. Then there are volume (or internal) functions, which are zero on all faces and nonzero only in the interior of the element. The number of face and volume functions needed at any p-order can be determined from Pascal’s triangle. This ensemble of corner, edge, face, and volume functions forms a complete set: any arbitrary displacement field can be represented by a combination of these functions, which is a crucial prerequisite for convergence.
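The counting can be sketched in a few lines. The per-entity counts below are the standard ones for a hierarchical tetrahedron, and the completeness statement is the check that the total equals the dimension of the complete polynomials of order p in 3D, (p+1)(p+2)(p+3)/6:

```python
def tet_function_counts(p):
    """Number of hierarchical basis functions for a tetrahedron at order p,
    counted per topological entity (4 corners, 6 edges, 4 faces, 1 volume).
    The per-entity counts follow Pascal's triangle."""
    corner = 4
    edge = 6 * max(p - 1, 0)                   # p-1 functions per edge
    face = 4 * max((p - 1) * (p - 2) // 2, 0)  # triangular numbers per face
    volume = max((p - 1) * (p - 2) * (p - 3) // 6, 0)
    return corner + edge + face + volume

# Completeness check: total matches dim of complete 3D polynomials of order p.
for p in range(1, 9):
    assert tet_function_counts(p) == (p + 1) * (p + 2) * (p + 3) // 6
```

For example, at p2 this gives the familiar 10 functions of a quadratic tet, and at p3 it gives 20: 4 corner, 12 edge, and 4 face functions.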
Here are examples of what these functions look like for a tetrahedron (except for the volume functions, which I can’t draw):
The only problem with these functions is that they are incompatible with conventional FEA elements that have midside nodes. For this reason I made a slight variation for use in stressRefine: it uses quadratic nodal functions for the corners and mid-edge nodes of the elements, and for p3 and higher it uses the 1D functions shown in the figure above, blended together in the manner described. This variation looks like this:
It turns out that elements using these functions have numerical conditioning similar to those based on the Szabo-Babuska functions for polynomial orders 1 through 8, so they work fine. I also ran a test suite of about 200 cases with the modified vs. the original functions, and the results came out practically identical, with no degradation in accuracy.
The modified functions have the advantage that at p2 they are identical to conventional quadratic elements. This makes it possible to use a conventional mesh and mix conventional and p-adaptive elements arbitrarily.
A major advantage of using hierarchical functions for adaptivity is that a different polynomial order can be specified on each edge of the model, which allows adjacent elements to have different orders, as illustrated for a simple 2D case below. For the p2 element, the polynomial order is simply set to 4 on the edge it shares with the p4 element. This makes arbitrary adaptivity easy to implement: given local error measures, polynomial orders can be adjusted as needed. For typical models solved in stressRefine, most of the model stays at low polynomial order.
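One way to organize this (a hypothetical sketch; the names Edge, Element, and the edge table are illustrative, not stressRefine’s actual data structures) is to store the polynomial order on shared edge objects, so raising the order of an edge automatically affects both adjacent elements:

```python
class Edge:
    """Hypothetical global edge record carrying its own polynomial order."""
    def __init__(self):
        self.p = 2  # start at p2, like a conventional quadratic mesh

class Element:
    """Hypothetical element holding references into the shared edge table."""
    def __init__(self, edges):
        self.edges = edges

edges = {eid: Edge() for eid in range(5)}
elem_a = Element([edges[0], edges[1], edges[2]])  # 2D triangle: 3 edges
elem_b = Element([edges[2], edges[3], edges[4]])  # shares edge 2 with elem_a

# Adaptivity raises the order only where the error measure demands it;
# both elements see the new order on their common edge automatically.
edges[2].p = 4
```

Because the order lives on the edge rather than on the element, continuity across the shared edge is maintained without any special casing.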
A difficulty arises in extending this to 2 and 3 dimensions with arbitrary polynomial order on any edge: the orientation of element-local edges and faces relative to global model edges and faces must be properly accounted for to ensure continuity of the basis functions across element boundaries. This is not a problem for conventional elements, for which continuity is guaranteed as long as adjacent elements share the same nodes, including corner nodes and any midside nodes on edges or faces. But p-elements have edge and face functions that are not nodally based. A simple 2D example is shown below:
The local node numbering at the corners of two adjacent elements is shown. Consider the polynomial edge functions on an edge, depicted in Fig. 3. The local edge runs backwards between the two elements, so for the “odd” functions (polynomial orders 3, 5, 7, etc.) to be continuous, we have to flip their sign on that edge for one of the elements. In 3D there is a similar situation at faces, but the orientation of the local faces must also be corrected for. This is all a bit tricky to code, but it is not computationally intensive. Once the 3D basis-function library is available, however, it is readily integrated into any element routine. I’ll go into more detail in the next post, describing stressRefine’s element library for anyone who may wish to make use of it.
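The sign flip follows from the parity of the Legendre polynomials. Assuming the closed-form edge functions phi_j = (P_j - P_{j-2}) / sqrt(2(2j-1)), we have phi_j(-xi) = (-1)^j phi_j(xi), so reversing the edge direction leaves even-order functions unchanged but negates the odd ones:

```python
import numpy as np
from numpy.polynomial import legendre as L

def phi(j, xi):
    """1D hierarchical edge function: (P_j - P_{j-2}) / sqrt(2*(2j-1))."""
    return (L.Legendre.basis(j)(xi)
            - L.Legendre.basis(j - 2)(xi)) / np.sqrt(2 * (2 * j - 1))

xi = np.linspace(-1.0, 1.0, 7)
# A neighbor whose local edge runs backwards sees the argument -xi.
# Even orders match automatically; odd orders need a sign flip.
assert np.allclose(phi(3, -xi), -phi(3, xi))  # odd: flip sign for continuity
assert np.allclose(phi(4, -xi), phi(4, xi))   # even: already continuous
```

So the bookkeeping reduces to recording, per element, whether each local edge agrees with the global edge direction, and negating the odd-order coefficients where it does not.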
- Bathe, K, Finite Element Procedures in Engineering Analysis, Prentice-Hall, 1982.
- Szabo, B, and Babuska, I, Finite Element Analysis, Wiley, NY, 1991.