P-Fea Course. Lesson 5: Miscellaneous Implementation Issues

———————————————————-

Answer to the homework problem for Lesson 4: “Using equation (2) above, if the normalized error in an element that is currently at p = 2 is 0.5, and the error tolerance is 5 percent (0.05), what is the estimate of the required p order in the next solution pass? Comment on how aggressive the use of this estimate is compared to simply increasing the p order by 1 on each solution pass.”

In equation (2), p1 is 2 (the current p order). The current error, ε1, is 0.5, and the target error, ε2, is 0.05. So the required p order is p2 = p1(ε1/ε2)^(1/p1) = 2(0.5/0.05)^(1/2) = 2(10)^(1/2) ≈ 6.32, which is rounded down to 6 because the fraction is less than 1/2. I would say this is pretty aggressive, especially considering some adaptive algorithms simply increase the p order by 1 each iteration, while this algorithm jumps from p = 2 straight to p = 6.
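As a quick numerical check, here is a minimal sketch of this estimate in C++. The symbolic form p2 = p1(ε1/ε2)^(1/p1) and the function name are my reconstruction from the worked numbers above, not code from stressRefine:

    #include <cmath>
    #include <cstdio>

    // Estimate the p order needed to reach a target error, given the current
    // p order and the current normalized error. Reconstructed from the worked
    // numbers above: pNew = pOld * (errCurrent / errTarget)^(1/pOld).
    static int estimateRequiredP(int pOld, double errCurrent, double errTarget)
    {
        double pNew = pOld * std::pow(errCurrent / errTarget, 1.0 / pOld);
        return static_cast<int>(pNew + 0.5); // round to nearest (6.32 -> 6)
    }

    int main()
    {
        // p = 2, current error 0.5, tolerance 0.05 gives 2*sqrt(10) = 6.32 -> 6
        std::printf("required p = %d\n", estimateRequiredP(2, 0.5, 0.05));
        return 0;
    }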

———————————————————-

There are a few ways in which p-adaptivity is trickier to implement than conventional finite elements. There are also issues with reading meshes that were created specifically for conventional elements and using them for higher order elements, and issues with element quality. These are the subjects of this lesson.

Point vs. Distributed Loads and Constraints

It is common for finite element preprocessors to convert distributed loads and constraints into equivalent nodal ones, but they are only equivalent for conventional elements. I’ll describe that in detail below for loads. The other important issue is that point loads and constraints are actually singular. If they are not processed correctly, there would be localized stress concentrations at them, and the code that detects singularities would end up creating sacrificial elements there. So the first step is to detect the intent. If all the nodes of a face are loaded or constrained the same way, that is assumed to be a face boundary condition, not a nodal one. If a series of faces have the same boundary condition, and there is not a significant slope discontinuity at the shared edge of adjacent faces, that is assumed to be a boundary condition on the underlying surface. Distributed face constraints can be handled readily in Cartesian coordinates. In curvilinear coordinates, the easiest way to handle them is with a penalty method, which I’ll discuss next. Conversion of point loads to equivalent distributed tractions is covered below. A sketch of the intent-detection logic follows.
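Here is a minimal sketch of that intent-detection logic in C++. The data structures and names (Node, Face, isFaceBC, and the 15-degree kink threshold) are illustrative assumptions, not the stressRefine API:

    #include <vector>

    // Hypothetical mesh types for illustration only.
    struct Node { bool hasBC; /* ... BC values and directions ... */ };
    struct Face {
        std::vector<int> nodeIds;
        double edgeAngleToNeighbor; // slope discontinuity across the shared
                                    // edge with the adjacent face, in degrees
    };

    // True if every node of the face carries the same boundary condition,
    // in which case we assume the user intended a face BC, not nodal BCs.
    bool isFaceBC(const Face& face, const std::vector<Node>& nodes)
    {
        for (int id : face.nodeIds)
            if (!nodes[id].hasBC)
                return false;
        return true; // (a real check would also compare the BC values)
    }

    // True if the BC should be promoted from the face to the underlying
    // surface: the neighboring face has the same face BC and the shared
    // edge has no significant slope discontinuity (threshold is an
    // assumption, not a stressRefine setting).
    bool promoteToSurfaceBC(const Face& face, const Face& neighbor,
                            const std::vector<Node>& nodes,
                            double maxKinkDegrees = 15.0)
    {
        return isFaceBC(face, nodes) && isFaceBC(neighbor, nodes)
            && face.edgeAngleToNeighbor < maxKinkDegrees;
    }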

Constraints in Curvilinear Coordinates

In a conventional code a curvilinear constraint such as the one shown on the left can be converted into nodal constraints as on the right. So for a node at 45 degrees, the constraint in the R direction means no translation in the “x-prime” direction at 45 degrees. This won’t work for higher order functions unless they are constrained to 0, but that would make the elements adjacent to the constraint unable to adapt. The mathematical values of the constraints on the higher order functions can be calculated, and in general for curvilinear coordinates they end up being equations involving multiple functions. These can be processed directly, but that can slow down solvers considerably; that was our experience with Mechanica. An easy-to-implement alternative is a penalty method, which essentially uses stiff springs in the constrained directions. As long as the springs are stiff enough compared to the adjacent elements this is accurate.

This approach is used in stressRefine. There is the further advantage that the penalty can be added at the element level, which is done in SRelement::AddPenaltyToStiffnessandEnfDisp. The penalty is calibrated by assuming there is a thin layer, of thickness equal to (element size/Penalty) and of the same material as the element, attached at the boundary. So, for example, if Penalty = 10000, the layer is 0.0001 times the element size. The size is estimated from the distance from the middle of the constrained face to the element centroid. The higher the penalty, the more accurately the constraint is enforced, but too high a penalty degrades the conditioning of the element stiffness matrix; 10000 is a good compromise. Experience with stressRefine has shown that this approach is accurate and does not affect the speed of the equation solution. A sketch of the calibration is below.
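Here is a minimal sketch of how the penalty stiffness can be calibrated using the thin-layer analogy above. The function and parameter names are illustrative, not stressRefine’s actual routine:

    // Penalty "spring" stiffness for a constrained face, calibrated by
    // imagining a thin layer of the element's own material, of thickness
    // (element size / penalty), attached at the boundary. The axial
    // stiffness of such a layer is k = E*A/t. Illustrative sketch only.
    double calibratePenaltyStiffness(double youngsModulus, // E of the element
                                     double faceArea,      // constrained face area
                                     double elementSize,   // e.g. distance from face
                                                           // center to element centroid
                                     double penalty = 10000.0)
    {
        double layerThickness = elementSize / penalty;
        return youngsModulus * faceArea / layerThickness;
    }

Raising the penalty makes the imagined layer thinner and the spring stiffer, enforcing the constraint more tightly at the cost of conditioning, which is why 10000 is a reasonable compromise.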

Distributed Tractions vs. Point Loads

Many finite element codes accept distributed tractions as inputs, for example via the PLOAD4 card in Nastran. But preprocessors often don’t use them, instead converting the tractions to equivalent nodal loads. These are “energy consistent”, in that the point loads do the same virtual work on the boundary as the original tractions, as is discussed in any finite element textbook. But they are consistent only for the p order for which the calculation was done, such as 2 for a conventional quadratic element. After it has been determined, as discussed above, that a series of nodal loads on a surface actually originated from distributed tractions, the traction values are recovered by doing the virtual-work equivalency in reverse. In stressRefine this is called “energy smoothing”, and is done in SRinput::EnergySmoothNodalForces. The idea is sketched below.
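To make “in reverse” concrete: with face shape functions N_i, the energy-consistent nodal forces are f_i = ∫ N_i t dA. If the traction is interpolated from nodal values, t = Σ_j N_j t_j, this becomes f = M t with M_ij = ∫ N_i N_j dA, so recovering the tractions amounts to solving a small linear system. Here is a minimal sketch under that assumption; it is my reading of the virtual-work reversal, not the actual SRinput::EnergySmoothNodalForces implementation:

    #include <vector>

    using Matrix = std::vector<std::vector<double>>;

    // Recover nodal traction values from energy-consistent nodal forces by
    // solving M t = f, where M_ij is the integral of N_i*N_j over the face.
    // Faces have few nodes, so a small dense Gaussian elimination is fine;
    // M is symmetric positive definite, so no pivoting is shown.
    std::vector<double> recoverTractions(Matrix M, std::vector<double> f)
    {
        int n = static_cast<int>(f.size());
        // Forward elimination.
        for (int k = 0; k < n; ++k) {
            for (int i = k + 1; i < n; ++i) {
                double factor = M[i][k] / M[k][k];
                for (int j = k; j < n; ++j)
                    M[i][j] -= factor * M[k][j];
                f[i] -= factor * f[k];
            }
        }
        // Back substitution.
        std::vector<double> t(n);
        for (int i = n - 1; i >= 0; --i) {
            double sum = f[i];
            for (int j = i + 1; j < n; ++j)
                sum -= M[i][j] * t[j];
            t[i] = sum / M[i][i];
        }
        return t; // nodal traction values; interpolate with N to get t(x)
    }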

Element Quality

A major advantage of higher order elements, properly formulated, is that they are relatively distortion-insensitive. The quality of elements from typical automeshers is usually plenty good enough, as long as a mesh setting like “curvature-based” or equivalent is turned on. There is one exception, which was discussed here: automeshers sometimes produce poor quality or even invalid elements when they first create linear elements and then project the midnodes to curved surfaces. Many finite element codes will simply fail when presented with these elements, giving a cryptic error code, and users are typically advised to refine the mesh. This can be frustrating for large models, when you had trouble getting them to mesh in the first place. It is possible to recover from this situation by moving the midnodes away from the surface, but the problem is that you are then no longer solving the right physical model. For example, if you started out with the geometry on the left and ended up solving the one on the right because a midnode was moved to fix an invalid element, you’d end up introducing a singularity that did not exist in the original model.

One thing that can be done is to note that it is not necessary to flatten a curved face entirely to make the element valid. That is depicted below in 2D. There is a vector from the original midnode position to the midpoint of the straight chord, and the midnode can be moved in intermediate steps along that vector.

This is done in stressRefine. When an invalid element is encountered (the determinant of the Jacobian of the mapping is <= 0), curved boundary faces are flattened in successive steps, to find the smallest amount of flattening needed to fix the element. This is carried out before the element stiffness routines are called, in SRmodel::checkElementMapping, which calls SRelement::testMapping for each element; that routine does the flattening if necessary. Elements whose curved faces have been significantly flattened are made “sacrificial” (they are kept at p2 and excluded from calculation of the maximum stress in the model). A sketch of the successive-flattening loop is below.
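Here is a minimal sketch of the successive-flattening loop, in the 2D terms of the figure above. The helper names and the validity callback are illustrative assumptions, not stressRefine’s actual SRelement::testMapping:

    #include <array>
    #include <functional>

    using Vec3 = std::array<double, 3>;

    // Linear interpolation between two points.
    static Vec3 lerp(const Vec3& a, const Vec3& b, double t)
    {
        return { a[0] + t * (b[0] - a[0]),
                 a[1] + t * (b[1] - a[1]),
                 a[2] + t * (b[2] - a[2]) };
    }

    // Move a midnode from its curved position toward the midpoint of the
    // straight chord between the corner nodes, in successive steps, until
    // the element mapping becomes valid (det J > 0 at all test points).
    // Returns the fraction of flattening used. Illustrative sketch only;
    // mappingIsValid would retest the element using the updated midnode.
    double flattenMidnodeUntilValid(Vec3& midnode,
                                    const Vec3& corner1, const Vec3& corner2,
                                    const std::function<bool()>& mappingIsValid,
                                    int numSteps = 10)
    {
        Vec3 curved = midnode;
        Vec3 chordMid = lerp(corner1, corner2, 0.5);
        for (int step = 1; step <= numSteps; ++step) {
            double frac = static_cast<double>(step) / numSteps;
            midnode = lerp(curved, chordMid, frac); // partial flattening
            if (mappingIsValid())
                return frac; // smallest flattening that fixes the element
        }
        return 1.0; // fully flattened (straight chord)
    }

Elements that needed a large fraction of flattening would then be the ones marked sacrificial, as described above.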

Often this has little effect on the accuracy of the solution. The important factor is whether any element that has been significantly flattened is near the maximum stress in the model. This is checked after the solution in SRmodel::checkFlattenedElementHighStress. A warning is issued if this situation is encountered, and it is recommended that the user refine the mesh. This procedure has been able to “rescue” a significant number of models for which Nastran and other FEA codes would “choke” on the mesh, and in practice the warning does not need to be issued very often.

———————————————————-

Homework Problem: Consider the first figure with the constrained hole above. Suppose instead that this was a distributed traction in the R direction, and suppose the preprocessor replaced the traction with nodal loads. We are using higher order elements, so the nodal loads are not energy-equivalent to the original traction. Explain physically why these nodal loads introduce singularities.
