Relax Factors Issue

I am zeroing in on relaxation factors, which, in my opinion, are turning out to be the most pervasive and shocking issue we have come across so far in our quest to accurately verify experimental data with SimScale CFD methods…

In this cfd-online forum post, LuckyTran (a senior member there, with the 6th highest member reputation out of about 150,000 members) presents what seems to be the commonly accepted view of what should happen to stable results (like CL and CD) on a simulation run when relaxation values are changed in order to keep the simulation from diverging.

LuckyTran said this:

Unless your solution is diverging, do not bother with the under-relaxation factors. If it is diverging, do as you did: lower the factors until you get a stable solution, and then raise them again if possible. If not, then it is a hint that there are inherent stability issues in the problem you are trying to solve, or perhaps one of your modeling parameters is incorrect.

All your observations are correct. Lowering the under-relaxation factors will limit the change in the solution per iteration, which will make the residuals appear to change less per iteration. Be careful, as this can give a false sense of "oh, my solution is now converged". That is why residuals are not a good measure of solution convergence; they are really only useful for determining if the solution is diverging. Under-relaxation factors do not change the way the solution evolves in the long run; you still arrive at the same solution more or less (unless the solution diverges), it just takes you longer to get there.

As a test (do this as a mental exercise to not waste compute hours), setting all your under-relaxation factors to 0 will make the residuals constant because the solution does not change between iterations. It is possible to obtain very very very small residuals by setting the under-relaxation factors to very small fractions. Again, this does not mean that the solution is converged.

So in general, and this was obvious from the start, you want the highest under-relaxation factors. In fact, over-relaxation (an under-relaxation factor greater than 1.0) would be ideal in the sense that you would arrive at the converged solution in the least number of iterations. However, because of the numerical schemes involved there are stability problems, and conservative values for the under-relaxation factors are necessary to prevent the solution from diverging.
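To see what this means in practice, here is a minimal Python sketch of a toy fixed-point iteration (nothing to do with SimScale or OpenFOAM internals; the problem and numbers are made up purely for illustration). It shows all three behaviours LuckyTran describes: no relaxation diverges, moderate under-relaxation converges, and a very small factor produces a tiny "residual" while the solution is still far from the answer:

```python
# Toy linear fixed-point problem x = f(x), with f(x) = 2.5 - 1.5*x
# (exact solution x = 1). Relaxed update:
#   x_{k+1} = alpha*f(x_k) + (1 - alpha)*x_k
def relaxed_iterate(alpha, x0=0.0, iters=50):
    x, step = x0, float("nan")
    for _ in range(iters):
        x_new = alpha * (2.5 - 1.5 * x) + (1.0 - alpha) * x
        step = abs(x_new - x)   # per-iteration change, a crude "residual"
        x = x_new
    return x, step

for alpha in (1.0, 0.5, 0.05, 0.01):
    x, step = relaxed_iterate(alpha)
    print(f"alpha={alpha:4.2f}  x={x:12.4e}  step={step:.1e}  error={abs(x - 1.0):.1e}")

# alpha=1.00 diverges (the unrelaxed map is unstable), alpha=0.50 converges,
# and alpha=0.01 reports a per-iteration "residual" roughly 40x smaller than
# the true error -- while alpha=0 would freeze both the solution and the
# residuals entirely. Small residuals do not imply a converged solution.
```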

In the last 24 hours, I used about 1200 core hours on five simulation runs, and they do NOT seem to support the bolded statement above (that you arrive at more or less the same solution regardless of the under-relaxation factors).

These runs come from my ‘NACA0012 - TET vs HEX’ project that is still unfinished.

It will remain unfinished until I can get within ±1% on CL and CD, or until I can show that ±1% using 'best practices' is not yet possible.

Here is the shocking truth (the experimental data point, which serves as the 0% error reference below, is EXP CL=1.11 and EXP CD=0.0117, from NASA TM 4074 at 10° AOA, Mach 0.3 and Re 6,000,000):

(the DEFAULTS are: p0.3 u0.7 k0.3 w0.3)

Sim Run: 2372i32c117m62.9chr_Relax-DEFAULTS shows a CFD CL=1.140 +3% and CFD CD=0.01507 +29%

Sim Run: 5000i32c517m275.7chr_Relax-p0.7u0.3kw0.7 shows a CFD CL=1.19 +7% and CFD CD=0.0109 -7%

Sim Run: 5000i32c629m335.5chr_Relax-pukw0.5 shows a CFD CL=1.16 +4% and CFD CD=0.0135 +15%

Sim Run: 5000i32c712m380.3chr_Relax-p0.35u0.75kw0.35 CFD CL=1.137 +2% and CFD CD=0.01532 +31%

Sim Run: 2500i32c302m161.1chr_Relax-p0.25u0.65kw0.25 CFD CL=1.143 +2% and CFD CD=0.01471 +26%

After I achieve a 'best practice' incompressible sim run, my plan is to do the same for a compressible simulation with the same sim setup…

BUT, until I can figure out what the 'best practice' relaxation values are, I will stick to incompressible…

How can there be ANY ‘best practice’ determination for relax factor values, especially since the results change significantly when the relax factors are changed :question:

If the answer is to begin with the default values and increase each of the 4 relax factors (pukw) individually, until just before the simulation diverges (or stops giving stable results), then it will certainly take thousands of core hours to determine ‘best practice’ relax factors for just this one sim run :sob:
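Just to make that cost concrete, here is a back-of-envelope Python sketch of the one-factor-at-a-time search. The 0.05 step, the 0.95 stability limit and the ~300 core hours per run are my own assumptions for illustration, based loosely on the runs above:

```python
# Rough cost of raising each of the 4 relax factors (p, U, k, omega)
# individually from the defaults until just before divergence.
# Every probe is a full simulation run.
defaults = {"p": 0.3, "U": 0.7, "k": 0.3, "omega": 0.3}
step = 0.05                 # assumed increment per probe
upper_limit = 0.95          # assumed worst-case stability limit
core_hours_per_run = 300    # roughly what the 5000-iteration runs above cost

total_runs = 0
for name, start in defaults.items():
    probes = int(round((upper_limit - start) / step))
    total_runs += probes
    print(f"{name}: up to {probes} probe runs")

print(f"worst case: {total_runs} runs, ~{total_runs * core_hours_per_run} core hours")
# -> 44 runs, ~13,200 core hours, for this ONE mesh and ONE flow condition.
```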

Any other ideas out there :question:


With a little more digging, I think there are two theories 'out there' which should both apply to 'relax factor value optimization':

  1. Varying Relax factor values should produce the SAME converged results (LuckyTran++)…
  2. LOWER residual values should produce more accurate results, if the results are stable…

Unfortunately, we have data for which these two theories cannot both hold for relax factor optimization…

This Sim Run has all LOWER relax factors AND all LOWER residuals,
Sim Run: 2372i32c117m62.9chr_Relax-DEFAULTS shows a CFD CL=1.140 +3% and CFD CD=0.01507 +29%

than this one:
Sim Run: 5000i32c712m380.3chr_Relax-p0.35u0.75kw0.35 CFD CL=1.137 +2% and CFD CD=0.01532 +31%


So I’ll also post some results here from NACA0012 validation runs using Ladson force data for 10° AOA (Cl = 1.0809, Cd = 0.01165).
General setup: cellLimited leastSquares for gradient terms, Gauss linearUpwind grad(U) for div(phi,U), Gauss linear corrected for Laplacians. 2M-cell mesh, y+ of 40, 100% layering.

Relax: 0.5pukw: 1.111Cl +2.7% 0.0125Cd +7.3%. 1e-5 residuals for velocity, almost 1e-3 for p.
Relax: 0.3p0.7u0.3k0.3w (default): 1.07Cl -1% 0.017Cd +46%. Almost 1e-7 residuals for velocity, almost 1e-5 for p.
Relax: 0.35p0.75u0.5k0.5w: 1.07Cl -1% 0.0175Cd +50%. 1e-8 residuals for velocity, almost 1e-7 for p. (This, although Cd is quite different from experiment, is the most tightly converged solution from this set).
Relax: 0.4p0.7u0.5k0.5w: 1.07Cl -1% 0.0171Cd +47%. Almost 1e-7 residuals for velocity, almost 1e-5 for p.
Relax: 0.35p0.75u0.35k0.35w: 1.07Cl -1% 0.0175Cd +50%. Almost 1e-8 residuals for velocity, 1e-6 for p.


Hi everyone, this is a bump so you see this topic…

This PU topic has been created by moving a 16-day-old limited-distribution message to the forum here…

Anyone here have comments?

Is anyone interested in Relax Factors and how they can affect your CFD results :question:

Heyho Dale!

Are you currently working on this? Would be great!

In CFD you have just plenty of stuff to talk about so I guess we will never run out of topics :smiley:

Jousef

Actually, I am looking for help here from any 'smarter than me' person…

We have presented the data and given a detailed discussion of the issue…

The above question needs answering.

AND…

The theory contradictions need to be addressed…

Can you tag some SimScale gurus here?

Well, guess I am out then :wink:

Maybe one of the @cfd_squad can help here, most likely @Ricardopg? :stuck_out_tongue:

Cheers,

Jousef

Dale, Retsam and I discussed this matter quite a bit around a month ago.
The general consensus from (my) research is that the practitioner should optimally start off with the default relaxation factors and adapt them if need be.

But it’s difficult to pinpoint exactly what’s going on with these simulations. NACA airfoils at higher AOAs are tricky. I think it would require more tests, not only with NACA validations but also other projects.
Of course there could also be other things interfering with the results.

Following a solid methodology (estimating errors, running mesh independency studies, sensitivity tests etc.) is key here. Some pointers can be found in Ferziger & Peric; Versteeg & Malalasekera and other books.


I agree, and until we get some answers to my questions from Post 1 and beyond, I believe that the only safe thing we can do here is to never change relax factors from the DEFAULT values.

With the data we have assembled about changing Relax Factors, I can see no way to justify changing them.

Even using DEFAULTs, I am very wary of the reliability of result predictions in any of my CFD simulations…


Hi All,

My experience with relaxation factors comes from needing simulations to converge sufficiently, in exchange for simulation time.

Sometimes a complex mesh, or a mesh for highly complex phenomena, is very difficult to refine, and even after significant iterations of mesh refinement there will still be cells that cause significant numerical fluctuations in the residuals. That’s where the relaxation factors come in. With relaxation factors you are able to influence either the final convergence or the speed of convergence. In exchange, you sacrifice either simulation time (under-relaxation) or convergence stability (over-relaxation).

So all in all, it really depends on what you need. Say your simulation is very complex and difficult to refine, but you have the computing time and resources available: under-relaxation factors might be just what is needed to allow the simulation to converge and get the results you want.

Similarly, if you have a relatively simple case with a good mesh that is taking significantly longer to converge, you can apply over-relaxation to make it converge faster, provided the instability in the residuals does not cause the solver to throw an error.

In use cases, as Ric has pointed out, one would leave the relaxation factors at default and then adjust them as needed. Relaxation factors should not affect the result, merely the convergence. Of course, poor convergence will mean inaccurate results, so keep that in mind.

As for the math, I did some searching and this simple equation popped out of the CFD forums.

Under- and over-relaxation factors control the stability and convergence rate of the iterative process. The under-relaxation factor increases stability, while over-relaxation increases the rate of convergence.

x_(k+1) = w·x_cal + (1 − w)·x_k
y_(k+1) = w·y_cal + (1 − w)·y_k

For 0 < w < 1, the method is known as successive under-relaxation.
For 1 < w < 2, the method is known as successive over-relaxation.

where x_cal is the value calculated based on x_k.
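As a sanity check, here is a minimal Python sketch of that exact update applied to a small 2x2 linear system solved by Jacobi iteration (a toy example of my own, not solver code):

```python
# x_{k+1} = w*x_cal + (1-w)*x_k applied to the system
#   4x +  y = 9
#    x + 3y = 10     (exact solution: x = 17/11, y = 31/11)
# x_cal / y_cal are the values calculated from the previous iterate.
def solve(w, iters=200):
    x, y = 0.0, 0.0
    for _ in range(iters):
        x_cal = (9.0 - y) / 4.0          # from the x-equation
        y_cal = (10.0 - x) / 3.0         # from the y-equation
        x = w * x_cal + (1.0 - w) * x    # under-relax (w < 1) or
        y = w * y_cal + (1.0 - w) * y    # over-relax (w > 1)
    return x, y

for w in (0.5, 1.0, 1.3):
    print(w, solve(w))
# All three land on the same (x, y); only the number of iterations needed
# to get there differs -- which is exactly LuckyTran's claim from Post 1.
```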

If anyone can find a proper mathematical description and break it down that would be great.

Cheers.


Barry, thanks for your research and insight, but it seems to magnify my concerns.

With our data, we have shown that the accepted rules do not seem to hold simultaneously here.

Unfortunately, I do not think the issue will get resolved to my satisfaction until someone actually reviews our data and our sim runs to show us where we have gone wrong.

It seems that we can pick any relax factors and get results of any value we want, while justifying it by citing things like 'we needed to get the residuals to a lower level' or 'it was taking too long to converge'.

Relax factor adjustments just seem like a convenient way to validate CFD results to experimental data to me…

And I see no reason to have faith that the default values actually give us the most accurate results.

I will try to present our results in a way that makes their randomness more clearly visible (if that is even possible).

Does anyone have any ideas on how to show the issue using our data in a clearer manner?

I think it may be time that I move this ‘Relax factors’ topic back to the public forum so that we might get more input on this pervasive issue.

I see that while I was gone, SimScale decided to give us an 'Auto Relax' feature and also to make it the default.

I am totally mystified as to why this was done, given that relax factors are such a problem: they affect the results to such a large degree, and there is a very serious contradiction in the basic theories as to what levels they should be set to.

Can we at least have a full explanation as to what values are used when ‘Auto Relax’ is used :question:

Dale,

I too am struggling with some of these parameters, with much less knowledge than you or most of the other users on the forum. Currently, when Manual is selected, these are the default settings.

[screenshot: default manual relaxation factors]

And these are the settings for Automatic:
[screenshot: automatic relaxation factors]

The automatic factors seem quite low (except for pressure), but I think someone on the forum told me that the automatic factors start out much higher and are then tightened towards the end of the simulation. I just can’t remember where that was said.

HOWEVER,

When doing some research on non-orthogonality, I made a small and basic thread here, on how mesh settings affect non-orthogonal cell count.

For the simulation side, I found that using non-orthogonal corrector loops is necessary when there are higher numbers of non-orthogonal cells, and when these cells have a higher angle (above 70°). I’m not sure how strong the connection is between the corrector loops and relaxation factors, but my assumption is that both are necessary for convergence, especially with meshes that have higher numbers of illegal cells.

Basically, as I understand it, most CFD solvers use the over-relaxed approach developed by Hrvoje Jasak, where the treatment of the explicit term during discretisation allows for fewer corrector loops to achieve convergence. This is best explained in his thesis:

Source: Error Analysis and Estimation for the Finite Volume Method with Applications to Fluid Flows, PhD thesis by Hrvoje Jasak.

PG 113

In this study, the diffusion term will therefore be split into the implicit orthogonal contribution, which includes only the first neighbours of the cell and creates a diagonally equal matrix, and the non-orthogonal correction, which will be added to the source. If it is important to resolve the non-orthogonal parts of the diffusion operators (like in the case of the pressure equation, see Section 3.8), non-orthogonal correctors are included. The system of algebraic equations, Eqn. (3.42), will be solved several times. Each new solution will be used to update the non-orthogonal correction terms, until the desired tolerance is met. It should again be noted that this practice only improves the quality of the matrix but does not guarantee boundedness. If boundedness is essential, the non-orthogonal contribution should be discarded, thus creating a discretisation error described in Section 3.6.

Jasak is saying that if boundedness (convergence) cannot be achieved, the explicit term should be discarded. He then continues:

PG 114

The above discussion concentrates on the analysis of the discretisation on a term-by-term basis. In reality, all of the above coefficients contribute to the matrix, thus influencing the properties of the system. It has been shown that the only terms that actually enhance the diagonal dominance are the linear part of the source and the temporal derivative. In steady-state calculations, the beneficial influence of the temporal derivative on the diagonal dominance does not exist. In order to enable the use of iterative solvers, the diagonal dominance needs to be enhanced in some other way, namely through under-relaxation.

Then some confusing equations … followed by

PG 115

Here, θ_P° represents the value of θ_P from the previous iteration, and α is the under-relaxation factor (0 < α ≤ 1).

And finally the relaxation factor definition from OpenFOAM

Source: OpenFoam

Under-relaxation works by limiting the amount which a variable changes from one iteration to the next, either by modifying the solution matrix and source prior to solving for a field or by modifying the field directly. An under-relaxation factor α, 0 < α ≤ 1, specifies the amount of under-relaxation, ranging from none at all for α = 1 and increasing in strength as α → 0. The limiting case where α = 0 represents a solution which does not change at all with successive iterations. An optimum choice of α is one that is small enough to ensure stable computation but large enough to move the iterative process forward quickly; values of α as high as 0.9 can ensure stability in some cases and anything much below, say, 0.2 are prohibitively restrictive in slowing the iterative process.
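To convince myself that the matrix form (Jasak) and the field form (the OpenFOAM description above) describe the same thing, here is a tiny single-equation Python sketch. The coefficients are toy numbers and the neighbour values are held fixed; this is my own illustration, not OpenFOAM source code:

```python
# One cell of a scalar equation a_P*theta_P = sum_N(a_N*theta_N) + S,
# under-relaxed with factor alpha. Toy coefficients, neighbours held fixed.
alpha = 0.7
a_P, neighbour_sum, S = 4.0, 2.5, 1.0
theta_old = 0.8   # value from the previous iteration

# (1) Implicit/matrix form (Jasak, p. 114): divide the diagonal by alpha and
#     move the excess to the source. Since a_P/alpha > a_P, the diagonal
#     dominance of the matrix is enhanced, which is the whole point.
diag = a_P / alpha
source = neighbour_sum + S + ((1.0 - alpha) / alpha) * a_P * theta_old
theta_implicit = source / diag

# (2) Explicit/field form: solve the unrelaxed equation, then blend the
#     result with the previous value.
theta_solved = (neighbour_sum + S) / a_P
theta_explicit = alpha * theta_solved + (1.0 - alpha) * theta_old

print(theta_implicit, theta_explicit)   # both give 0.8525 for this single
                                        # linear equation with frozen neighbours
```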

So as I understand it: non-orthogonal corrector loops help to achieve accuracy. When the non-orthogonal component (the explicit term) is too large, it must be discarded, leaving only the orthogonal contribution (the implicit term). This leads to the necessity of under-relaxation factors to make up for the deletion of this non-orthogonal component, in order to either get an accurate result (lower relaxation factor values, e.g. 0.2, 0.5) or to let the inaccuracies slide in favor of convergence and speed.

I am taking this assumption from the only source I have read on this topic, Jasak’s thesis. Please correct me, as I am only attempting to understand what is going on here and apply it to my simulations.

Working off this information, I am currently getting some very incorrect numbers on my full-car sims, as shown below. Not to try to change the topic of discussion, but I have changed both of these settings in hopes that they would help.

The first run here shows the following coefficient graph, which is very wrong.

Relaxation type: Automatic
Relaxation factors:
P = 0.3
U = 0.7
K = 0.7
W = 0.7
Non-orthogonal correction = 0
EDIT
Sim time = 21.76 hrs
Core hours = 695.5

Second run: here I reduced the relaxation factors and added a non-orthogonal correction loop.
Relaxation type: Automatic
Relaxation factors:
P = 0.3
U = 0.7
K = 0.5
W = 0.5
Non-orthogonal correction = 1
EDIT
Sim time = 23.23 hrs
Core hours = 743.5

This brings me to the mesh quality as the most likely cause - here

Most of my previous simulations done with SimScale have had an illegal cell count of around 2-500 with a 15-20 million total cell count. Most of the illegal cells are non-orthogonal. These previous simulations also had the max non-orthogonal quality setting at 75°. When changing this value to 70°, the number of non-orthogonal cells jumps to 5000 on current meshes.

Question: does the number of illegal cells (in this case non-orthogonal cells) reported after the mesh is created affect the simulation? That is, do the solvers treat the discretisation of these cells differently based on how many are reported as illegal, or do they treat cells based on their actual degree of non-orthogonality?

Sorry for the long post, I have a bad habit of doing that haha

Dan

Hey @dschroeder & @DaleKramer!

I already forwarded this and there will be some progress related to the relaxation factor! Tagging my colleague @sguenther here in any case.

Best,

Jousef

Hi Dan,

Glad to see you jump in here with an amazing amount of research! :smiley:

As I have time I will respond in detail where I can, but it looks like a lot of what you have come up with is over my head right now, and I will have to research and experiment more when I get some time and more core hours…

Dale

Dale,

I’m sure we both agree that some help on this topic would be useful. I would also like to do tests with smaller geometries, but I don’t have time right now as I urgently need to get results for my thesis. As of now, very few meshes are working due to the 3-adjacent-cell problem. With the only mesh that did succeed, I made 2 simulations with terrible results, in the links I shared above.

I am not sure how mesh quality is related to simulation convergence in terms of reported illegal cells, but I am getting really frustrated and wasting a lot of core hours on this problem.

I am now doing another test where the max non-orthogonality will be reverted back to 75, which will drastically lower the illegal cell count. Then, if by some miracle the mesh succeeds, I will do 2 more simulations with the same settings as before, in an attempt to see the difference. I just hope that the new mesh won’t have the convergence problems that the previous runs had.

I’ll post anything I find, but I could use some help!

Dan

Wow, about 15,000 mesh faces in the vehicle’s geometry…

I see you are only layering the wings and endplates and a BMB face…

Have you used our method of getting 11 or more mesh quality parameters by installing OF and looking at them in ParaView?

I too would be concerned with such a high ratio of illegal cells. (5000 in a 20,000,000 cell mesh)

I am not using AutoRelax at all right now. I use .3p.7u.3kw

I have not played much with non-orthogonal correctors, but it looks like they are in order for you.

Have you got simpler geometries that give results that are not unreal? (I can’t find any of your force results that are not unreal.) If so, what changes caused the issues?

I will look further…

Dale,

I have not checked the mesh quality parameters as I do not have OF installed. I have never used native OF, but I am aware that it can check mesh quality.

I can easily lower the illegal cell count by changing the max non-ortho quality setting to 75°, but I’m still not sure if that has the desired effect.

My simulation setup and methodology are as follows:

  1. I performed all my non-ortho reduction testing on the wheels, as they had a significant amount due to the geometry. This is found in my thread here

  2. I used my research and new strategies to create a successful 1/2 car mesh to validate the results. This project is here

The half-car simulation was done with a max non-ortho quality of 70°, which resulted in the following illegal cell count:

Checking faces in error :
non-orthogonality > 70 degrees : 2682
faces with face pyramid volume < 1e-13 : 15
faces with concavity > 80 degrees : 1
faces with skewness > 4 (internal) or 20 (boundary) : 4
faces with interpolation weights (0…1) < 0.02 : 0
faces with volume ratio of neighbour cells < 0.01 : 3
faces with face twist < 0.01 : 15
faces on cells with determinant < 0.001 : 0

This project was also the original testing ground for my radiator fan / rad fan MRF zone testing. Since this was successful, my plan was to simply make a full-car sim, because I naively thought that this would work no problem.

And here are the stable results with default settings for relaxation factors, with one non-ortho corrector loop:

[screenshot: result values]

[Force plot]

[Coefficients plot]

These results made me think that all was well and good … NOPE.

After the first attempts at a full-car simulation (with yaw, pitch, roll, and steering angle), I knew this would be a battle. I immediately had mesh failures due to memory problems.

  • I reduced the region refinement settings which fixed this

Once the mesh began to work, it failed again due to the 3-adjacent-faces problem.

  • With support from the SimScale staff I was able to get the locations of these points on the geometry and fix it.

Then I finally got a 20-million-cell mesh to work, with rotating fan assembly and all… and the simulation ran out of memory.

This led me to delete the fan assembly to gain some cells back as a buffer. Since I changed the geometry, the 3-adjacent-cell problem came back.

  • Again received quick support from SimScale staff … geometry fixed … good mesh.

Simulation results: wildly wrong, as you saw in the links from my first post.

Now I am still confused, because my half-car sim with half the illegal cells (2682) worked just fine, so I don’t think that’s the problem … but I have no idea.

So to conclude: I have a more complicated mesh working with converged results that also has a large number of non-ortho cells.

Dan