# Implementing P-Adaptivity for Conventional FEA Meshes - Part 2 (Details for Accurate Stresses)

At the end of my previous post I mentioned I would next cover the details of the element and basis function library in stressRefine, for those who might want to make use of it. I am cleaning up the code for that library in order to release it for open-source use and am not quite ready yet, so I have deferred that discussion until the code is available. Instead, in this post I’ll talk about the details of extracting accurate stresses with p-adaptivity.

Stress Calculation

Calculating stresses involves the mapping (shape) functions of the elements and the displacement functions [1]. As is the case for element stiffnesses, the only difference between conventional elements and p-elements is that different displacement functions are used, so converting the stress calculation to p-elements is straightforward. This gives the directly calculated, or “raw”, stress.

In stressRefine, and some other codes, a further step is taken of smoothing the stresses. This is done by choosing smooth, continuous functions to represent the stress variation, and least-squares fitting them to the stresses in a group of elements, commonly referred to as a patch [2]. In stressRefine, the functions chosen to represent the stress are the displacement basis functions described in part 1. The smoothed stresses also tend to be more accurate, and are sometimes referred to as superconvergent [2]. The figure below shows what the “raw” vs. smoothed stresses look like in 1D across several elements:
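
The smoothing step can be illustrated with a minimal 1D sketch. Here a plain polynomial fit stands in for stressRefine’s displacement basis functions, and the sample points and stress values are purely illustrative:

```python
import numpy as np

def smooth_stresses(sample_x, raw_stress, degree=1):
    """Least-squares fit a smooth polynomial to raw stress samples
    taken across a patch of elements (1D illustration only)."""
    coeffs = np.polyfit(sample_x, raw_stress, degree)
    return np.poly1d(coeffs)

# Raw stress sampled at two interior points in each of three adjacent
# elements; the values jump slightly between elements (made-up numbers).
x = np.array([0.2, 0.8, 1.2, 1.8, 2.2, 2.8])
sigma_raw = np.array([1.1, 1.9, 2.3, 3.1, 3.2, 4.1])

# The fitted polynomial is continuous across the whole patch.
sigma_smooth = smooth_stresses(x, sigma_raw)
```

The smoothed field is continuous by construction, while the raw samples are discontinuous at element boundaries; that difference is what the error estimators below exploit.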

Error estimators

In order to perform adaptivity, estimates are needed of the local errors in stress in each element. The stress-smoothing procedure just described provides one such measure. The directly computed or “raw” stresses can be sampled at some points in an element, and their values compared with the superconvergent stresses at the same points. This is the Zienkiewicz-Zhu error norm, which was suggested by Zienkiewicz and Zhu [2] and is widely used.
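
The element-level comparison can be sketched as follows; this is a generic Zienkiewicz-Zhu-style measure (an L2 norm of the raw-minus-recovered difference), not stressRefine’s exact implementation:

```python
import numpy as np

def zz_element_error(raw_at_pts, smooth_at_pts):
    """Zienkiewicz-Zhu style element error: L2 (root-mean-square) norm
    of the difference between raw and recovered (smoothed) stresses
    sampled at the same points in the element."""
    diff = np.asarray(raw_at_pts, dtype=float) - np.asarray(smooth_at_pts, dtype=float)
    return np.sqrt(np.mean(diff ** 2))
```

An element whose raw stresses already agree with the smoothed field gets an error of zero; a large value flags the element for a higher p-order.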

This measure turns out not to be reliable if the elements are quadratic (polynomial order 2), which is awkward because we want to start at p2 in the adaptive loop. StressRefine gets around this by also using an alternate measure. The traction (the dot product of the stress tensor with the normal to a surface) at the faces between adjacent elements must be continuous. For the raw stresses it will not be, unless the results are converged. So the jump in traction from the raw stresses of elements sharing a face is another error measure.
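
The traction-jump measure is simple to state in code. A hedged sketch, with made-up stress tensors for two elements sharing a face whose normal is the x-axis:

```python
import numpy as np

def traction(stress, normal):
    """Traction vector t = sigma . n for a 3x3 stress tensor and unit normal."""
    return np.asarray(stress) @ np.asarray(normal)

def traction_jump(stress_a, stress_b, normal):
    """Magnitude of the jump in traction across a face shared by two
    elements; nonzero jumps indicate the raw stresses are not converged."""
    return np.linalg.norm(traction(stress_a, normal) - traction(stress_b, normal))

# Illustrative raw stresses on either side of a shared face.
sig_a = np.diag([2.0, 1.0, 1.0])
sig_b = np.diag([1.0, 1.0, 1.0])
jump = traction_jump(sig_a, sig_b, [1.0, 0.0, 0.0])
```

As the solution converges, the tractions on either side of every interior face agree and the jump goes to zero.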

Once the error estimate is available for each element, the required p-order in the element to achieve the desired accuracy can be estimated as described in [2]. Now we have all the ingredients for an adaptive algorithm.
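
One simple way to turn an element error into a required p-order is to assume the error decays like a power of p and invert that model. The decay exponent and the capping logic below are assumptions for illustration, not stressRefine’s exact rule:

```python
import math

def required_p(p_now, eta_now, eta_target, rate=2.0, p_max=8):
    """Estimate the polynomial order needed to reach eta_target,
    assuming the error behaves like C * p**(-rate). The exponent,
    the +1 minimum bump, and p_max are illustrative assumptions."""
    if eta_now <= eta_target:
        return p_now  # element already meets the accuracy goal
    p_new = p_now * (eta_now / eta_target) ** (1.0 / rate)
    # Raise by at least one order, round up, and cap at the max supported p.
    return min(p_max, max(p_now + 1, math.ceil(p_new)))
```

In an adaptive loop, each element’s p-order is updated this way after every solve, and the loop exits when no element needs a higher order (or after a fixed number of passes).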

In practice, with most meshes there is marginal improvement after a total of 3 solutions, so stressRefine terminates the loop after 2 passes by default.

The adaptive algorithm described is for adaptive refinement, where the p-order can be varied arbitrarily throughout the mesh. In uniform adaptivity, the entire model is progressed through each polynomial order in turn. A convergence plot of desired quantities, such as energy norms or maximum stress in the model, can then be computed. This can appear safer, because it is possible for adaptive refinement not to converge to the correct answer if the adaptive algorithm increases the p-order incorrectly (the same is actually true for adaptive mesh refinement, if the algorithm does not increase the density of the mesh enough in local regions where needed). The problem is that uniform refinement of a large model can be very computationally intensive.

This can be alleviated by performing the uniform updating only in a local region, for example near a stress concentration, but this is also not guaranteed to converge on the correct solution: it can get the wrong answer if the region chosen for updating is too small.

In my experience on many test cases, adaptive refinement gives good results using the error measures in stressRefine and the adaptive algorithm described above. A series of validation cases is described here. StressRefine does allow uniform adaptivity if you want to double-check your results, but it may be time-consuming for larger models.

Singularities and Sacrificial Elements

I discussed the issue of singularities in my article on accurate stresses from FEA, which was mentioned in a previous post. The stresses at singularities are theoretically infinite, and the stress results in their vicinity are physically meaningless. They strongly affect the results in nearby elements: if any element touches a singularity, the results in the entire element are affected, a phenomenon sometimes referred to as “error pollution”.

Professor Szabo discusses handling singularities at length ([3], p. 169), using the “L-shaped domain” example. He recommends handling them by placing a double row of small elements near the corner to isolate them:

The small elements in the double row are called “sacrificial”. In stressRefine we cannot create such a graded mesh because we are reusing an existing mesh. The next best thing is to automatically identify which elements touch singularities and mark them as sacrificial. Sacrificial elements are kept at polynomial order 2, and are excluded when calculating the maximum stress in the model. This works very well in practice. However, it is still possible, as discussed in the previous post, that the max stress in the model is in the vicinity of a singularity, in which case the results may not be physically meaningful. This situation is detected and a warning is issued to the user.
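
The bookkeeping for sacrificial elements can be sketched in a few lines. The dictionary-based data layout here is hypothetical (stressRefine’s internal structures differ), but the logic matches the text: any element touching a singular node is marked, and marked elements are excluded from the max-stress search:

```python
def mark_sacrificial(elements, singular_nodes):
    """Mark any element that touches a singular node as sacrificial.
    `elements` maps element id -> iterable of node ids (hypothetical layout)."""
    singular = set(singular_nodes)
    return {eid for eid, nodes in elements.items() if singular & set(nodes)}

def model_max_stress(stress_by_element, sacrificial):
    """Maximum stress over the model, excluding sacrificial elements."""
    return max(s for eid, s in stress_by_element.items() if eid not in sacrificial)

# Three elements; node 3 sits on a singularity, so elements 1 and 2
# (which share node 3) are sacrificial and element 3 sets the max stress.
elems = {1: [1, 2, 3], 2: [3, 4, 5], 3: [6, 7, 8]}
sac = mark_sacrificial(elems, [3])
```

In the real code the sacrificial elements would also be pinned to polynomial order 2, which this sketch omits.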

Unsupported Entities in the Mesh

StressRefine only has solid elements, because the regions where important three-dimensional stress concentrations occur are often in the solid portions of the model. But “real world” FEA models are likely to have mixed meshes, which might contain solids, shells, beams, rigid bars, glued contacts, etc. Since stressRefine is designed to calculate more accurate stresses from a mesh that has already been solved with a conventional FEA code, it is straightforward to work around this. As we’ll discuss in the next post, in breakout analysis the assumption is made that the existing solution has already obtained accurate displacements in the “global” model. Using that same assumption, wherever any “unsupported entities” such as non-solid elements or glued contacts touch the solid elements, the displacements from the existing FEA solution are applied as a boundary condition. Thus “unsupported entities” are treated as a special case of breakout analysis, where the local region is the part of the model that contains only solid elements.
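
The interface treatment amounts to finding the nodes shared between the solid mesh and the unsupported entities, and constraining them to the global solution’s displacements. A minimal sketch, with hypothetical id-based data structures:

```python
def interface_bcs(solid_node_ids, unsupported_node_ids, global_disp):
    """Where unsupported entities touch the solid mesh, constrain those
    shared nodes to the displacements from the existing global FEA
    solution. `global_disp` maps node id -> displacement vector
    (hypothetical layout for illustration)."""
    shared = set(solid_node_ids) & set(unsupported_node_ids)
    return {nid: global_disp[nid] for nid in shared}
```

The returned dictionary plays the role of prescribed-displacement boundary conditions on the solid-only local model.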

Here is an example. The girders are touched by shells at the top, and have glued contacts where they touch the legs at the bottom:

Since the stress may not be accurate in the elements where unsupported entities touch the mesh (in fact the situation is often singular, for example if the end of a beam or the edge of a shell touches the mesh), those elements are treated as sacrificial.

Distorted Elements

Automatic meshers in most commercial codes produce good quality meshes as long as settings such as “curvature-based refinement” are turned on. If not, they can sometimes produce distorted elements, especially in the vicinity of curved surfaces. One reason for this is that the meshers often start by producing linear elements, then project the midnodes to curved surfaces as needed. Here is an extreme example in 2D, where a coarse element is next to a curved surface. Projecting the midnode to the surface distorts the element so badly that the element mapping is invalid:

No solver can handle such elements, and you will get error messages like “invalid mapping” or “invalid Jacobian”. Hierarchical p-elements can handle high distortion, but not invalid elements. Rather than just giving a cryptic error message, stressRefine attempts to recover from this situation. Pulling the midside node all the way back so the distorted face is flat fixes the problem, but often makes the mesh follow the geometry poorly and can produce artificial “kinks”, leading to degraded accuracy. Instead, stressRefine pulls midside nodes back the smallest amount necessary to make the mapping valid. This allows the run to proceed, and the results are often fine as long as the stress in the affected elements is not high compared to the maximum in the model. If high stress occurs near a distorted element that had to be modified, a warning is issued. The remedy is to refine the mesh. The situation depicted above is fixed, for example, if two element edges span the arc shown instead of one.
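
The idea of a minimal pull-back can be shown on a single quadratic edge in 1D, where the mapping Jacobian is known to stay positive exactly when the midside node lies within the middle half of the chord. The blend-factor search below is a sketch of the approach; the real check would use the full 3D element Jacobian:

```python
def edge_jacobian_ok(x1, x2, xm):
    """For a quadratic 1D edge with ends x1, x2 and midside node xm, the
    mapping Jacobian stays positive iff xm lies strictly within the
    middle half of the chord."""
    lo = x1 + 0.25 * (x2 - x1)
    hi = x1 + 0.75 * (x2 - x1)
    return lo < xm < hi

def pull_back_midnode(x1, x2, xm, step=0.05):
    """Blend the midside node toward the chord midpoint by the smallest
    factor (in increments of `step`) that makes the mapping valid."""
    mid = 0.5 * (x1 + x2)
    alpha = 0.0
    while not edge_jacobian_ok(x1, x2, (1 - alpha) * xm + alpha * mid):
        alpha += step
        if alpha >= 1.0:
            return mid  # fully flattened edge as a last resort
    return (1 - alpha) * xm + alpha * mid
```

A valid midnode is returned unchanged, while an invalid one is moved just far enough toward the midpoint to restore a positive Jacobian, preserving as much of the surface curvature as possible.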

Coming up…

As described in part 1, stressRefine’s elements use the same quadratic mapping as conventional FEA codes. Further, when the elements are at polynomial order 2 they are exactly equivalent to conventional quadratic FEA elements. Because of this, it is also possible to do “breakout” modeling automatically, to calculate more accurate stress results in local regions of large models. How automatic breakout analysis is implemented will be discussed next.

References

1. Zienkiewicz, O., and Taylor, R., The Finite Element Method, Butterworth-Heinemann, Oxford, 2000.
2. Zienkiewicz, O., and Zhu, J., “Superconvergent Patch Recovery and a posteriori error estimation in the finite element method”, Int. J. Num. Meth. Engr., 33, 1331, 1992.
3. Szabó, B., and Babuška, I., Finite Element Analysis, Wiley, New York, 1991.