
Welcome to My Mechanical Engineering Site

This site has articles about stress analysis and other aspects of mechanical CAE, as well as my software stressRefine. StressRefine automatically extracts more accurate stresses from finite element models without remeshing (see the software page).

I am a mechanical engineer, Ph.D. Stanford, 1980, and was co-inventor and one of the lead developers of Mechanica, a pioneering, easy-to-use p-adaptive finite element code. More details on my background are on the about page.

StressRefine Results on Larger Models

I’ve just added several test cases to the stress validation manual, a couple of which are shown below. The new manual is here.

I also made a demo slide show of creating one of the more interesting models, available here. I’m also making a video of the demo, which will be available shortly.

These larger models show that the solution time in stressRefine stays essentially flat regardless of model size, because only a local region is reanalyzed. The translation time drifts up a little with model size, but even very large models can be translated in less than 30 seconds.

Automatic P-Adaptive Breakout Analysis Using Conventional FEA Meshes

Global-local modeling allows you to reanalyze a local portion of a large model more accurately without repeating the overall analysis (e.g. Fig. 9). This is also referred to as breakout modeling because a local region of the model is broken out of the global model, and the solution to the global model is used as boundary conditions for the local model. It has been available in codes like Nastran and Ansys for some time and is also available in SOLIDWORKS Simulation (which refers to it as “submodeling”).

If we assume that the global solution has accurately predicted displacements and load flow through the overall model, then refinement of a localized region will not affect the overall solution. As long as the boundary of the localized region is chosen so the stress gradients are fairly low at its boundary, the stress calculation will be accurate in the local model.

This is similar to the familiar concept of a free body diagram for analyzing load equilibrium. For the cantilever beam shown below, the stresses in the center portion of the beam can be obtained by analyzing that portion with the forces and moments applied on the dashed boundaries. Alternatively, the displacements from the overall model may be applied on the boundary of the local region. Using displacements for this purpose is useful in finite element analysis because displacements are typically analyzed very accurately using the default meshes produced by most FEA automeshers.
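To make the idea concrete, here is a minimal runnable sketch in Python (my own toy illustration, not stressRefine code). A chain of unit springs stands in for a finite element model: we solve the full chain, then re-solve only an interior sub-chain with the global displacements imposed on its two boundary nodes, and the interior answer is reproduced.

```python
# Toy 1D "breakout" demo: global displacements become local boundary conditions.
import numpy as np

n = 10                                    # unit springs; nodes 0..n
K = np.zeros((n + 1, n + 1))
for e in range(n):                        # assemble the global stiffness
    K[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]])

f = np.zeros(n + 1); f[-1] = 1.0          # node 0 fixed, unit force at the tip
free = np.arange(1, n + 1)
u = np.zeros(n + 1)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])

sub = np.arange(3, 8)                     # "breakout" region: nodes 3..7
Ks = np.zeros((5, 5))                     # stiffness of the sub-chain alone
for e in range(4):
    Ks[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]])
interior, bound = np.arange(1, 4), np.array([0, 4])
ub = u[sub][bound]                        # displacements copied from global
rhs = -Ks[np.ix_(interior, bound)] @ ub   # no applied loads inside this region
ui = np.linalg.solve(Ks[np.ix_(interior, interior)], rhs)

print(np.allclose(ui, u[sub][interior]))  # True: local solve matches global
```

In the toy the agreement is exact because the global solution is exact; in a real FEA model the local result is only as good as the global displacements, which is why the smooth-boundary requirement above matters.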

This approach is particularly advantageous for very large models. The solution time for finite element models is proportional to N³, where N is the number of elements, but the breakout model is much smaller, so its solution time is far less (a 1000-element breakout model can be used to accurately analyze stress in a local region of a million-element global model). Another benefit is for models with multiple element types. For a large Nastran model with a mixture of solids, shells, MPCs, RBARs, beams, etc., you may be interested in obtaining more accurate stress in a bulkhead modeled with solid elements. With breakout analysis, this solid region, or a portion of it, can be analyzed in a local model.
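Taking the stated N³ scaling at face value, a quick back-of-the-envelope sketch shows why the breakout solve is nearly free compared to the global solve (the element counts are illustrative):

```python
# Relative solve time under the N^3 scaling quoted above (illustrative only).
def relative_solve_time(n_elements, n_reference=1.0e6):
    """Solve time for n_elements relative to a million-element model,
    assuming time grows as N**3."""
    return (n_elements / n_reference) ** 3

print(relative_solve_time(1_000))  # 1e-09: the breakout solve is negligible
```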

Automatic Breakout Modeling

Breakout modeling with conventional codes involves:

  1. Identify the local region of interest
  2. Refine the mesh in the local region
  3. Copy the displacement results from the outer (global) model to the boundary of the local region. Since the local region’s mesh is no longer compatible with the mesh of the global model, this requires interpolation.
  4. Solve the local problem

This can require considerable user intervention. The easiest implementation I have seen from a user’s standpoint is submodeling in SOLIDWORKS Simulation. It is only available for assemblies, for which identifying the local regions simply consists of selecting one or more parts from the model tree. Step 3 is automated. It is up to the user to manually refine the mesh in step 2, but that is easier than doing so for the entire model.

I have put considerable effort into completely automating this procedure in stressRefine. I also want to be able to extract local regions from large part models, as well as assembly models. Since I am using a p-adaptive code that is compatible with conventional meshes, the procedure above is simplified. Step 2 is not needed, and step 3 just involves copying the displacements wherever the local region’s mesh touches the global mesh. The user specifies the location of interest as the point of maximum stress in the model, or a selected point.

The specified location becomes the center of the local region. To achieve accurate results, the boundary of this region must be chosen where the solution is fairly smooth, which is done by ensuring that the stress error at the boundary is small compared to the stress at the center of the region. Individual parts can be identified automatically as local regions by detecting element faces that lie on the boundary of the global model or on surfaces with connections (“bsurfs” in Nastran). Experience has shown that it also helps to retain a minimum of a few thousand elements in the mesh for the local region.
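Here is a schematic, runnable sketch of that selection rule in 1D. The mesh, the synthetic stress and error fields, and the 5% threshold are all invented for illustration; this is not stressRefine’s actual criterion or code:

```python
# Grow the local region outward from the point of interest until the
# estimated stress error on its boundary is small relative to the center.
import numpy as np

x = np.linspace(-1.0, 1.0, 201)               # element centers along a line
stress = 100.0 * np.exp(-(x / 0.2) ** 2)      # synthetic concentration at x=0
error = 0.3 * np.abs(np.gradient(stress, x))  # toy error ~ stress gradient

center = np.argmax(stress)                    # point of interest: max stress
threshold = 0.05 * stress[center]             # boundary error must be < 5%

half_width = 1
while True:
    lo, hi = center - half_width, center + half_width
    if lo <= 0 or hi >= len(x) - 1:
        break                                 # region hit the model boundary
    if max(error[lo], error[hi]) < threshold:
        break                                 # boundary is smooth enough
    half_width += 1

print(f"local region spans x in [{x[lo]:.2f}, {x[hi]:.2f}]")
```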

So for the multiple column model shown in the figure above, a local region such as this is chosen for the breakout model:

This is for a single part model. An example of an assembly is shown here:

Once the local region has been identified, the displacement results from the global model are copied to the interface where the local region touches the global mesh. If there are any loads or constraints on the free surfaces of the local model, they must also be applied, such as the pressure in the figure above. For the assembly model with fillets shown, the Nastran result for the maximum von Mises stress was 33 MPa. In less than 5 seconds, stressRefine extracted the local region and solved it with p-adaptivity, obtaining the more accurate result of 38.7 MPa, which is within 1% of the result from a highly refined version of the global model.

Multiple results like these are shown in the validation manual for stressRefine, here. The accuracy of the initial FEA solution varies with the severity of the local stress concentration; the worst error I have seen so far was 35%, for the tube joint model shown:

I am preparing examples with much larger models that I will show here in the near future.

Implementing P-Adaptivity For Conventional FEA Meshes - Part 2 (Details for Accurate Stresses)

At the end of my previous post I mentioned that I would next cover the details of the element and basis function library in stressRefine, for those who might want to make use of it. I am cleaning up the code for that library in order to release it for open-source use and am not quite ready yet, so I have decided to defer that discussion until it is available. Instead, in this post I’ll talk about the details of extracting accurate stresses with p-adaptivity.

Stress Calculation

Calculating stresses involves the mapping (shape) functions of the elements and the displacement functions [1]. As with element stiffnesses, the only difference between conventional elements and p-elements is the displacement functions used, so converting the stress calculation to p-elements is straightforward. This is the directly calculated, or “raw”, stress.
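As a concrete illustration, here is a minimal sketch of the raw stress computation for a single 1D quadratic element (toy numbers throughout, not stressRefine’s code). Swapping in p-element displacement functions would change only the basis derivatives, which is why the conversion is straightforward:

```python
# Raw stress in a 1D quadratic element: sigma = E * du/dx = E * B @ u.
import numpy as np

E = 200e9                                # modulus (Pa)
x_nodes = np.array([0.0, 0.5, 1.0])      # element node coordinates (m)
u_nodes = np.array([0.0, 6e-4, 1e-3])    # nodal displacements (m)

def dN_dxi(xi):
    """Derivatives of the 1D quadratic shape functions at xi in [-1, 1]."""
    return np.array([xi - 0.5, -2.0 * xi, xi + 0.5])

def raw_stress(xi):
    J = dN_dxi(xi) @ x_nodes             # Jacobian dx/dxi of the mapping
    B = dN_dxi(xi) / J                   # strain-displacement row: du/dx = B @ u
    return E * (B @ u_nodes)

for xi in (-1.0, 0.0, 1.0):
    print(f"sigma({xi:+.0f}) = {raw_stress(xi):.3e} Pa")
```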

In stressRefine and some other codes, a further step is taken: the stresses are smoothed. This is done by choosing smooth, continuous functions to represent the stress variation and least-squares fitting them to the stresses in a group of elements, commonly referred to as a patch [2]. In stressRefine, the functions chosen to represent the stress are the displacement basis functions described in part 1. The smoothed stresses also tend to be more accurate and are sometimes referred to as superconvergent [2]. The figure below shows what the “raw” vs. smoothed stresses look like in 1D across several elements:
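A runnable 1D toy of the smoothing step: least-squares fit a smooth function to noisy, discontinuous “raw” stresses sampled across a patch. Here the smooth basis is a simple cubic polynomial for brevity; stressRefine uses its displacement basis functions in 3D:

```python
# Least-squares patch smoothing: fit a smooth basis to raw stress samples.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)                 # sample points across a patch
exact = 50.0 * x**2 + 10.0 * x                # underlying smooth stress field
raw = exact + rng.normal(0.0, 2.0, x.size)    # raw stresses: noisy and jumpy

A = np.vander(x, 4)                           # cubic polynomial basis
coef, *_ = np.linalg.lstsq(A, raw, rcond=None)
smoothed = A @ coef                           # typically closer to exact

print("max |raw - exact|     :", np.max(np.abs(raw - exact)))
print("max |smoothed - exact|:", np.max(np.abs(smoothed - exact)))
```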

Error estimators

To perform adaptivity, estimates are needed of the local stress error in each element. The stress-smoothing procedure just described provides one such measure: the directly computed or “raw” stresses can be sampled at points in an element, and their values compared with the superconvergent stresses at the same points. This is the Zienkiewicz-Zhu error norm, which was first suggested in [3] and is widely used.
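In sketch form (made-up sample values), the per-element measure is just a norm of the difference between raw and smoothed stresses at matching sample points:

```python
# Zienkiewicz-Zhu style per-element error indicator (toy sample values).
import numpy as np

raw = np.array([[98.0, 104.0], [60.0, 66.0], [20.0, 21.0]])      # per element
smooth = np.array([[100.0, 102.0], [62.0, 64.0], [20.5, 20.5]])  # same points

element_error = np.linalg.norm(raw - smooth, axis=1)  # one estimate / element
print(element_error)
```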

This measure turns out not to be reliable if the elements are quadratic (polynomial order 2), which is awkward because we want to start at p2 in the adaptive loop. StressRefine gets around this by also using an alternative measure: the traction (the dot product of the stress tensor with the normal to a surface) at the faces between adjacent elements must be continuous. For the raw stresses it will not be, unless the results are converged. So the jump in raw-stress traction between elements sharing a face is another error measure.
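A sketch of this second measure at one face point, with made-up stress tensors for the two elements sharing the face:

```python
# Traction-jump error indicator: sigma @ n should match across a shared face.
import numpy as np

sigma_a = np.array([[120.0, 15.0, 0.0],    # raw stress in element A (MPa)
                    [ 15.0, 40.0, 5.0],
                    [  0.0,  5.0, 10.0]])
sigma_b = np.array([[112.0, 18.0, 1.0],    # raw stress in element B (MPa)
                    [ 18.0, 38.0, 4.0],
                    [  1.0,  4.0, 12.0]])
n = np.array([1.0, 0.0, 0.0])              # unit normal of the shared face

jump = (sigma_a - sigma_b) @ n             # traction jump vector
print(np.linalg.norm(jump))                # scalar error indicator
```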

Once the error estimate is available for each element, the p-order required in the element to achieve the desired accuracy can be estimated as described in [2]. Now we have all the ingredients for an adaptive algorithm:

[Figure: flowchart of the adaptive solution loop]
In practice, with most meshes there is only marginal improvement after a total of 3 solutions, so stressRefine terminates the loop after 2 adaptive passes by default.
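Here is a schematic driver for that loop. The “solver” and error model are toy stand-ins so the sketch actually runs; only the control flow (start at p2, raise p only where the error estimate demands it, stop after two adaptive passes) mirrors the description above:

```python
# Toy adaptive p-loop: only the control flow mirrors the text above.
import numpy as np

def toy_solve(p_orders):
    """Stand-in solver: pretend each element's error shrinks as 1/p**2."""
    return 1.0 / p_orders.astype(float) ** 2

target = 0.05                       # desired relative error per element
p = np.full(8, 2)                   # every element starts at p2
errors = toy_solve(p)               # initial solution

for _ in range(2):                  # two adaptive passes by default
    needs = errors > target
    if not needs.any():
        break                       # converged everywhere
    p = np.where(needs, p + 1, p)   # raise p only where needed
    errors = toy_solve(p)           # re-solve with the updated p orders

print("final p orders:", p)
```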

Uniform vs. Adaptive P-Refinement

The algorithm described above performs adaptive refinement, where the p order can vary arbitrarily throughout the mesh. In uniform adaptivity, the entire model is progressed through each polynomial order in turn. A convergence plot of desired quantities, such as energy norms or maximum stress in the model, can then be computed. This can appear safer, because it is possible for adaptive refinement not to converge to the correct answer if the adaptive algorithm increases the p order incorrectly (the same is actually true of adaptive mesh refinement, if the algorithm does not increase the density of the mesh enough in local regions where needed). The problem is that uniform refinement of a large model can be very computationally intensive.

This can be alleviated by performing the uniform updating only in a local region, for example near a stress concentration, but this is also not guaranteed to converge on the correct solution: it can get the wrong answer if the region chosen for updating is too small.

In practice, in my experience on many test cases, adaptive refinement gives good results using the error measures in stressRefine and the adaptive algorithm described above. A series of validation cases is described here. StressRefine does allow uniform adaptivity if you want to double-check your results, but it may be time-consuming for larger models.

Singularities and Sacrificial Elements

I discussed the issue of singularities in my article on accurate stresses from FEA, which was mentioned in a previous post. The stresses at singularities are theoretically infinite, and the stress results in their vicinity are physically meaningless. They strongly affect the results in nearby elements: if any element touches a singularity, the results in the entire element are affected, a phenomenon sometimes referred to as “error pollution”.

Professor Szabo discusses handling singularities at length ([3], p. 169), using the “L-shaped domain” example. He recommends isolating them by placing a double row of small elements near the corner:

[Figure: L-shaped domain with a double row of small elements isolating the corner singularity]

The small elements in the double row are called “sacrificial”. In stressRefine we cannot create such a graded mesh, because we are reusing an existing mesh. The next best thing is to automatically identify which elements touch singularities and mark them as sacrificial. Sacrificial elements are kept at polynomial order 2 and are excluded when calculating the maximum stress in the model. This works very well in practice. However, it is still possible, as discussed in the previous post, that the maximum stress in the model is in the vicinity of a singularity, in which case the results may not be physically meaningful. This situation is detected and a warning is issued to the user.

Unsupported Entities in the Mesh

StressRefine only has solid elements, because the regions with important three-dimensional stress concentrations are usually in the solid portions of the model. But “real world” FEA models are likely to have mixed meshes, which might contain solids, shells, beams, rigid bars, glued contacts, etc. Since stressRefine is designed to calculate more accurate stresses from a mesh that has already been solved with a conventional FEA code, it is straightforward to work around this. As we’ll discuss in the next post, breakout analysis assumes that the existing solution has already obtained accurate displacements in the “global” model. Using that same assumption, wherever any “unsupported entities” like non-solid elements or glued contacts touch the solid elements, the displacements from the existing FEA solution are applied as a boundary condition. Thus “unsupported entities” are treated as a special case of breakout analysis, where the local region is the part of the model that contains only solid elements.

Here is an example. The girders are touched by shells at the top and have glued contacts where they touch the legs at the bottom:

[Figure: girder model with shells at the top and glued contacts where the legs attach]

Since the stress may not be accurate in the elements where unsupported entities touch the mesh (in fact the situation is often singular, for example if the end of a beam or the edge of a shell touches the mesh), those elements are treated as sacrificial.

Distorted Elements

Automatic meshers in most commercial codes produce good-quality meshes as long as settings such as “curvature-based refinement” are turned on. If not, they can sometimes produce distorted elements, especially in the vicinity of curved surfaces. One reason for this is that the meshers often start by producing linear elements, then project the midside nodes to curved surfaces as needed. Here is an extreme example in 2D, where a coarse element is next to a curved surface. Projecting the midside node to the surface distorts the element so badly that the element mapping is invalid:

No solver can handle such elements, and you will get error messages like “invalid mapping” or “invalid Jacobian”. Hierarchical p-elements can handle high distortion, but not invalid elements. Rather than just giving a cryptic error message, stressRefine attempts to recover from this situation. Pulling the midside node back so the distorted face is flat fixes the problem, but often makes the mesh follow the geometry poorly and can produce artificial “kinks”, leading to degraded accuracy. Instead, stressRefine pulls midside nodes back the smallest amount necessary to make the mapping valid. This allows the run to proceed, and the results are usually acceptable as long as the stress in the affected elements is not high compared to the maximum in the model. If high stress occurs near a distorted element that had to be modified, a warning is issued; the remedy is to refine the mesh. The situation depicted above is fixed, for example, if two element edges span the arc shown instead of one.
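Here is a runnable 1D cartoon of that fix (my own sketch; the margin and step size are illustrative). For a quadratic edge with end nodes at 0 and 1, the mapping Jacobian stays positive only while the midside node lies strictly between 0.25 and 0.75, so a badly projected midnode outside that band invalidates the mapping. The sketch pulls the node back toward the midpoint only as far as needed:

```python
# Minimal midside-node pull-back: restore a valid (positive) Jacobian.
import numpy as np

def min_jacobian(mid):
    """Smallest dx/dxi over the edge for nodes (0, mid, 1)."""
    xi = np.linspace(-1.0, 1.0, 101)
    dN = np.array([xi - 0.5, -2.0 * xi, xi + 0.5])   # shape fn derivatives
    return (dN.T @ np.array([0.0, mid, 1.0])).min()

mid = 0.9                          # badly projected midnode: invalid mapping
margin = 0.02                      # keep J at least this positive
while min_jacobian(mid) < margin:  # pull back in small steps toward 0.5
    mid += 0.01 * (0.5 - mid)

print(f"adjusted midnode: {mid:.3f}, min J: {min_jacobian(mid):.3f}")
```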

Coming up…

As described in part 1, StressRefine’s elements use the same quadratic mapping as conventional FEA codes. Further, when the elements are at polynomial order 2, they are exactly equivalent to conventional quadratic FEA elements. Because of this, it is also possible to do “breakout” modeling automatically, to calculate more accurate stress results in local regions of large models. How automatic breakout analysis is implemented will be discussed next.

References

  1. Zienkiewicz, O, and Taylor, R, The Finite Element Method, Butterworth-Heinemann, Oxford, 2000.
  2. Zienkiewicz, O, and Zhu, J, “Superconvergent Patch Recovery and a posteriori error estimation in the finite element method”, Int. J. Num. Meth. Engr., 33, 1331, 1992.
  3. Szabo, B, and Babuska, I, Finite Element Analysis, Wiley, NY, 1991.

Implementing P-Adaptivity For Conventional FEA Meshes - Part 1 (Overview)

Adaptive mesh refinement is now commonly available in commercial FEA codes. It is a powerful method that in my experience works well and can yield high accuracy. I’ve tried various implementations, including SOLIDWORKS Simulation, Ansys, and Autodesk Fusion, with good results. The disadvantages of adaptive mesh refinement, as I see them, are:

  • For large models, automeshing can be difficult, and adaptive mesh refinement will fail if the model cannot be remeshed.
  • In some commercial implementations, adaptive mesh refinement is not available for mixed meshes such as shells and solids.

The main advantage of p-adaptivity is that more accurate results can be extracted without remeshing. Consider a model with a local stress concentration:

The mesh in the vicinity of the fillet follows the geometry well. But the stresses vary rapidly in that vicinity, so with conventional FEA this mesh yields a result that is in error by 20%. To achieve more accurate results, the mesh would have to be refined in that vicinity. But this mesh is ideal for p-adaptivity: accurate stresses can be calculated simply by increasing the polynomial orders of the elements.

The realization that modern automeshers in conventional FEA codes produce high-quality meshes (especially if a mesh setting like “curvature-based refinement” is turned on, which I highly recommend) was the genesis of stressRefine. Existing commercial p-adaptive codes like ESRD’s StressCheck and Mechanica (now PTC/Creo Simulate) use elements with higher-order mapping or exact geometry, better suited to special meshes that are coarser than conventional FEA meshes. What if I instead developed a new code designed from scratch to use conventional FEA elements? Then accurate stresses could be calculated from FEA meshes directly. Further, local regions of large models could be reanalyzed very quickly for more accurate stress, using the results from the “global” FEA solution. This is also referred to as breakout analysis and has been implemented in various commercial codes; for example, it is called “submodeling” in SOLIDWORKS Simulation. But it is typically not adaptive: the user is responsible for manually refining the mesh in the submodel. In StressCheck, breakout analysis requires creating a new local model, with tools provided to transfer loads from an external global FEA model. In both these approaches, considerable user intervention can be required. But with a code like stressRefine that works with conventional FEA meshes and results, and is p-adaptive, this procedure can be completely automated: the breakout region is automatically extracted and then analyzed adaptively.

With that lengthy preamble, now I’ll get into how this is implemented.

The wrong way

The most obvious way to implement p-adaptivity is to just use conventional higher-order isoparametric elements. You start with a linear 4-node tetrahedral element, go to quadratic by adding mid-edge nodes (10-node tet), and keep adding more nodes to get cubic, quartic, etc. elements.

The problem is that these elements become numerically poorly conditioned at high p-orders and are much more sensitive to distortion [1]. There can also be severe problems if stresses are calculated directly at the corners of elements, where the distortion effects are particularly bad. Despite this limitation, this method has been implemented in some commercial codes. Back in the 1990s, when Rasna was making a lot of marketing noise, some of our competitors did the quick higher-order isoparametric implementation so they could say “we have p elements too”. One of them was Cosmos, and their implementation still persists as p-adaptivity in SOLIDWORKS Simulation. Since that code has a robust implementation of adaptive mesh refinement, I recommend using that instead and avoiding their p-implementation. It can demo reasonably well on some small problems, but it is easy to find models for which it fails or gets inaccurate results. I did a head-to-head comparison of stressRefine vs. SOLIDWORKS Simulation p-adaptivity on 25 models of varying complexity. StressRefine got an accurate answer on all of them, in all cases more accurate than SOLIDWORKS Simulation p-adaptivity, which failed or got an inaccurate answer on more than 75% of the models, including some simple ones. This comparison is summarized here. My intent is not to pick on SOLIDWORKS Simulation; I like their product. But poor commercial implementations like this can give the mistaken impression that “p-adaptivity doesn’t work”. It does work, if done correctly.

The right way

The robust way to implement p elements was worked out by professors Szabo and Babuska. They proved convergence theorems for these functions and also worked out the details of using them [2].

Finite elements use approximating functions to represent the element mapping and the displacements.

Mapping: The physical coordinates (x,y,z) of a point at natural coordinates (r,s,t) in an element are

x(r,s,t) = ∑ xi Ni     y(r,s,t) = ∑ yi Ni     z(r,s,t) = ∑ zi Ni

where the Ni(r,s,t) are the shape functions.

Displacements: The displacement components (u,v,w) of a point at natural coordinates (r,s,t) in an element are

u(r,s,t) = ∑ ui hi     v(r,s,t) = ∑ vi hi     w(r,s,t) = ∑ wi hi

where the hi(r,s,t) are the displacement basis functions.

In a conventional isoparametric element, the same functions are used for shape and displacement, for example quadratic serendipity functions [1]. In a “p element”, different functions are used. As mentioned above, previous commercial p-codes have used higher-order mapping for the shape functions, or exact geometric mapping from the underlying CAD geometry. This has the disadvantage that it can fail to recover strain-free rigid-body motion at low p-order, although this problem goes away as the p-order is raised.
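In sketch form, for a single 1D quadratic element (toy values): the same summations evaluate coordinates through the shape functions Ni and displacements through the basis functions hi. In the isoparametric case below the two sets of functions are identical; a p-element would substitute different hi:

```python
# Isoparametric evaluation of the two interpolations written out above.
import numpy as np

def N(xi):
    """1D quadratic shape functions at natural coordinate xi in [-1, 1]."""
    return np.array([0.5 * xi * (xi - 1.0), 1.0 - xi**2, 0.5 * xi * (xi + 1.0)])

x_nodes = np.array([0.0, 0.6, 1.0])     # node coordinates (curved mapping)
u_nodes = np.array([0.0, 2e-4, 1e-3])   # nodal displacements

xi = 0.3
print("x(xi) =", N(xi) @ x_nodes)       # x = sum x_i N_i
print("u(xi) =", N(xi) @ u_nodes)       # u = sum u_i h_i (here h_i = N_i)
```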

StressRefine uses the same quadratic mapping as conventional FEA codes. For this reason it would not be suitable for the coarser meshes used by codes like StressCheck or Mechanica, but it works well with conventional meshes such as the one in the figure above. Since stressRefine starts at p2 when representing displacements, rigid-body motion is always represented exactly.

The conventional functions have the value 1 at one of the element’s nodes and 0 at all the others. Here is what they look like in 1D:

The p-basis functions developed by professors Szabo and Babuska [2] are as shown here in 1D:

These are called hierarchical functions because when you raise the p-order, the lower functions remain the same. For example, if you have an element at p2, the two p1 functions and the p2 function are present. If you raise it to p3, those functions remain unchanged and the p3 function is added. This is useful computationally: you don’t have to recalculate the element stiffness matrix coefficients for the lower functions, you just add the coefficients for the function you added. This is not true of conventional finite elements, where, for example, the functions for a cubic element are completely different from those of a quadratic element.

The Szabo-Babuska functions are built from integrals of Legendre polynomials, which makes them orthogonal in 1D: for example, the integral over the length of an edge of the derivative of the p2 function times the derivative of the p3 function is 0. When they are blended together into a 2D or 3D element, this makes the element very well conditioned numerically, even at higher p orders.
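Here is a small sketch of these 1D edge functions, using the closed form φj = (Pj − Pj−2) / √(2(2j−1)) in terms of the Legendre polynomials Pj (equivalent to the scaled integral of Pj−1, following [2]). The quadrature check verifies that they vanish at the edge ends and that their derivatives are orthonormal:

```python
# Szabo-Babuska style 1D hierarchical edge functions from Legendre polynomials.
import numpy as np
from numpy.polynomial import legendre as L

def P(n, xi):
    """Legendre polynomial P_n evaluated at xi."""
    c = np.zeros(n + 1); c[n] = 1.0
    return L.legval(xi, c)

def phi(j, xi):
    """Edge function of order j >= 2: phi_j = (P_j - P_{j-2}) / sqrt(2(2j-1))."""
    return (P(j, xi) - P(j - 2, xi)) / np.sqrt(2.0 * (2 * j - 1))

def dphi(j, xi):
    """Derivative: dphi_j/dxi = sqrt((2j-1)/2) * P_{j-1}."""
    return np.sqrt((2 * j - 1) / 2.0) * P(j - 1, xi)

xi, w = L.leggauss(8)                         # Gauss rule, exact here
print(phi(2, 1.0), phi(3, -1.0))              # both ~0: hierarchic edge modes
print(np.sum(w * dphi(2, xi) * dphi(3, xi)))  # ~0: orthogonal derivatives
print(np.sum(w * dphi(3, xi) * dphi(3, xi)))  # ~1: normalized
```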

For a 3D element, the corner functions are calculated separately and are the same as for a conventional linear element: they have the value 1 at one element corner and 0 at the other corners. For a tetrahedron, for example, they are just the element volume coordinates [2]. For the higher-order edge functions (p2 and up), the 1D functions shown above are used on each edge, but they have to be blended with linear functions so they have nonzero values on one edge and 0 on all other element edges. The details are described in [2].

At polynomial order 3 and up, additional functions are needed. First there are face functions (also called bubble functions), which are nonzero on one face and zero on all other faces. Then there are volume (or internal) functions, which are zero on all faces and nonzero only in the interior of the element volume. The correct number of face and volume functions needed at any p order can be determined from Pascal’s triangle [2]. This ensemble of corner, edge, face, and volume functions forms a complete set: any arbitrary displacement function can be represented by a combination of these functions. This is a crucial prerequisite for convergence.
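As a sketch of how the ensemble fills out a complete polynomial space, here are the standard mode counts for a hierarchic tetrahedron (cf. [2]); the assertion checks that corner, edge, face, and volume functions together total the dimension of a complete degree-p polynomial space in 3D, (p+1)(p+2)(p+3)/6:

```python
# Mode counts for a hierarchic tetrahedron at polynomial order p.
def tet_mode_counts(p):
    """Number of corner/edge/face/volume functions for a tet at order p."""
    corner = 4
    edge = 6 * max(p - 1, 0)                           # p >= 2 adds edge modes
    face = 4 * max((p - 1) * (p - 2) // 2, 0)          # p >= 3 adds face modes
    volume = max((p - 1) * (p - 2) * (p - 3) // 6, 0)  # p >= 4 adds volume modes
    return corner, edge, face, volume

for p in range(1, 6):
    c, e, f, v = tet_mode_counts(p)
    complete = (p + 1) * (p + 2) * (p + 3) // 6        # complete space dimension
    assert c + e + f + v == complete
    print(f"p={p}: corners {c}, edges {e}, faces {f}, volume {v}, total {complete}")
```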

Here are examples of what these functions look like for a tetrahedron, except for the volume functions, which I can’t draw:

The only problem with these functions is that they are incompatible with conventional FEA elements that have midside nodes. For this reason I made a slight variation to them for use in stressRefine: it uses quadratic nodal functions for the corners and mid-edge nodes of the elements, and for p3 and higher it uses the 1D functions shown in the figure above, blended together in the manner described. This variation looks like this:

It turns out that elements using these functions have numerical conditioning similar to elements based on the Szabo-Babuska functions for polynomial orders 1 through 8, so they work fine. I also ran a test suite of about 200 cases using the modified vs. original functions, and the results came out practically identical, with no degradation in accuracy.

The modified functions have the advantage that if the elements are at p2, they are identical to conventional quadratic elements. This means it is possible to use a conventional mesh and have conventional elements and p-adaptive elements arbitrarily mixed together.

A major advantage of using the hierarchical functions for adaptivity is that a different polynomial order can be specified on each edge of the model, which allows adjacent elements to have different orders, as illustrated for a simple 2D case below. For the p2 element, the polynomial order is just set to 4 at the common edge with the p4 element. This makes arbitrary adaptivity easy to implement: given local error measures, polynomial orders can be adjusted as needed. For typical models solved in stressRefine, most of the model stays at low polynomial order.

Element Orientation

A difficulty arises in extending to 2 and 3 dimensions when using higher-order basis functions with arbitrary polynomial order on any edge: the orientation of element local edges and faces relative to global model edges and faces must be properly accounted for to ensure continuity of the basis functions across element boundaries. This is not a problem for conventional elements, for which continuity is guaranteed as long as adjacent elements share the same nodes, including corner nodes and any midside nodes on edges or faces. But p-elements have edge and face functions that are not nodally based. A simple 2D example is shown below:

Assuring continuity at element boundaries.

The local node numbering at the corners of two adjacent elements is shown. Consider the polynomial edge functions on an edge, depicted in Fig. 3. The local edges run backwards between the two elements, so for the “odd” functions (polynomial order 3, 5, 7, etc.) to be continuous, we have to flip their signs at the edge for one of the elements. In 3D there is a similar situation at faces, where the orientation of the local faces must also be corrected for. This is all a bit tricky to code, but it is not computationally intensive. Once the 3D basis function library is available, however, it is readily integrated into any element routine. I’ll go into more detail about this in the next post, describing stressRefine’s element library for anyone who may wish to make use of it.
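A sketch of the sign correction (the ascending-global-node-number convention here is one plausible scheme, not necessarily stressRefine’s). Since the odd-order edge functions are odd in the edge coordinate, an element that traverses the shared edge backwards must flip their signs:

```python
# Sign convention for hierarchical edge functions on shared edges.
def edge_sign(end_nodes, p_order):
    """Sign for an edge function of order p_order on an element edge whose
    endpoint global node ids are given in the element's traversal order."""
    ascending = end_nodes[0] < end_nodes[1]
    if p_order % 2 == 0 or ascending:
        return 1.0          # even functions, or edge already aligned
    return -1.0             # odd function on a reversed edge: flip the sign

# Elements A and B share the edge between global nodes 7 and 12, but
# element B runs it backwards:
print(edge_sign((7, 12), 3))   #  1.0 (element A, aligned)
print(edge_sign((12, 7), 3))   # -1.0 (element B, reversed: flip p3, p5, ...)
print(edge_sign((12, 7), 4))   #  1.0 (even orders are unaffected)
```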

References

  1. Bathe, K, Finite Element Procedures in Engineering Analysis, Prentice-Hall, 1982.
  2. Szabo, B, and Babuska, I, Finite Element Analysis, Wiley, NY, 1991.

Tips On Calculating Accurate Stresses With Altair SimSolid

SimSolid is a very easy-to-use code for calculating accurate stresses. This shows an exceptional range if you think about it: it is most famous for letting you quickly analyze large assemblies, but stress analysis can be at the other end of the spectrum, zooming in to get a localized result at a stress concentration. SimSolid does very well at that also.

Normally when you run SimSolid, you can just accept the default settings. This will run very quickly and do a good job of calculating more global quantities such as displacements or modal frequencies and mode shapes. It will also identify where the local stress “hot spots” are. You can stay in this mode while comparing design alternatives. But this solution, the “first stage”, may not get accurate stresses.

For that, the next step is to run an additional solution with some easy tweaks to the settings, the “second stage”. For smaller models this can be as simple as checking the “adapt to features” checkbox. For larger models, you select one or more parts where the first solution has shown higher stresses and turn on adapt to features for those parts only. This second-stage solution may be slower than the first, but will typically still be a lot faster than trying to do adaptive analysis with FEA codes.

I have described this procedure in detail in the document “Accurate Stress Analysis With Simsolid” available here. This explains some subtleties from a user’s perspective, including issues like dealing with singularities, as well as giving an overview of the theoretical background.

The initial solution in SimSolid identifies the max stress to be at the root of the notch:

Now we move on to the second stage to make sure the stress is accurate. Under solution settings, select the specimen part, check adapt to features, and rerun:

The stress is now very accurate. This is one of the models from the “real world” validation cases I did for SimSolid, available here.

StressRefine - automatically calculating accurate stresses from finite element models, without remeshing

Calculating accurate stresses with finite elements can be a tedious and time-consuming process involving manual mesh refinement, especially for large models. The pioneering software Mechanica, of which I was a co-inventor and a lead developer, made this easier by using automatic p-adaptive analysis. More recently, other codes have added automatic mesh refinement capabilities to address this problem. But this method can fail for large models if the software cannot remesh the model, and it can lead to long solution times, since the entire model is being adaptively remeshed. There are also limitations: for some codes, automatic mesh refinement is not available for mixed shell and solid meshes. And it is not useful for legacy models unless they are recreated in the newer software.

StressRefine addresses this problem by reanalyzing meshes from other FEA codes, using p-adaptivity to calculate stress more accurately. This works very well for the typical meshes produced by commercial FEA automeshers. For large models, stressRefine automatically extracts local regions of the model and solves them quickly and adaptively to calculate more accurate stress results. This is available in other codes, where it is referred to as “breakout modeling” or “submodeling”, but to my knowledge stressRefine is the only code that does this automatically and adaptively. StressRefine can also automatically extract more accurate stresses in the solid elements of models with mixed meshes (such as shells, beams, links, etc.).

StressRefine is available for free download, please see the software page.