Background mesh sizing and simulation convergence tips

Okay, so I’ve been running these fairly large simulations and encountering strange behavior. They’re about 21 core-hours each, so I could really use some feedback from the experts before I burn more time.

I’m working on lateral stability of an aircraft, so it has both an alpha and a beta angle. I’ve got three walls as velocity inlets with the correct flow vector components. The opposing three walls are pressure outlets with 0 Pa specified. If you go to the linked simulation you’ll find two successful runs under “CAC Deflected”. For the second run I changed the velocity a bit and increased the simulation time.
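(For illustration only: a minimal sketch of one common wind-axis convention for splitting a freestream speed into inlet velocity components from alpha and beta. The axis orientation and the 20 m/s / 12° / 10° numbers are assumptions taken loosely from later posts in this thread, not the actual project settings.)

```python
import math

def inlet_velocity_components(v_mag, alpha_deg, beta_deg):
    """Split the freestream speed into x/y/z components for a given
    angle of attack (alpha) and sideslip angle (beta), using a common
    wind-axis convention; the actual axis orientation may differ."""
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    vx = v_mag * math.cos(a) * math.cos(b)   # streamwise
    vy = v_mag * math.sin(b)                 # lateral (sideslip)
    vz = v_mag * math.sin(a) * math.cos(b)   # vertical (angle of attack)
    return vx, vy, vz

# Example: 20 m/s freestream at alpha = 12 deg, beta = 10 deg
print(inlet_velocity_components(20.0, 12.0, 10.0))
```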

If you then go to “510G Deflected”, which was duplicated from “CAC Deflected”, you’ll find a successful Run 3. After that, I changed the velocity a bit and increased the simulation time. From then on I get results that look like the outlets forget to be outlets and the inlets pressurize the whole volume. If I run the sim for a shorter time (Run 4), this doesn’t seem to happen; when I increase the time, the same behavior returns. If I run it long enough (Alpha12Beta10), I eventually get a failure to converge. Apart from that one, all the runs appear to converge.

Correction: the prior simulation is showing timestep 0. The setting is 800 s with a write interval of 800 s, but it seems the result was not written.

HOWEVER, here is an example of a result showing this weird behavior:

Hi again @jhartung!

At first glance, the domain you are using is way too small to be physically “correct”. Increase the domain size (in every direction, by the way) and see how that affects the results.

Best,

Jousef

Hi Jousef, I like this idea in principle, but when I went to larger meshes I routinely experienced out-of-memory errors, and I am unable to go above the 16-core limit. Any suggestions? Also, do you have any heuristics regarding simulation domain bounds for a given fluid velocity?

Regardless of the domain size, I’m still concerned about the divergent behavior between sim runs. Everything runs well in the duplicated setup and I get results that look consistent with how I would expect the fluid to flow. In this case I’m interested in observing flow patterns and relative differences between geometries, so the correctness of the absolute simulation values isn’t my primary concern.

Okay, on the hunch that the simulation was somehow corrupt, I duplicated the simulation that was working well and re-added the appropriate mesh. It solved properly this time.

Question remains: ways to increase the domain size within the 16 core constraint?

Hi @jhartung!

This is indeed the only restriction, so you cannot make the mesh arbitrarily fine (that would not make sense anyway); the upper limit for you is 16 cores. One option is to use a “coarse” background mesh box and then use geometry primitives to refine the regions of interest.

Regarding the heuristics, I would have to find some papers and look at the application, but I usually go by feel. Maybe @Retsam, @anirudh2821998, @Get_Barried and/or @DaleKramer have more input here.

I can have a look at the divergent behavior later on and let you know what needs to be adapted :+1:

Best,

Jousef

Hi @jhartung,

Here are some hints; as I can see from your posting you already try to KISS (Occam’s razor), so you can benefit from them.
First, since you do not use a tunnel with slip walls (which reflect the pressure), in my opinion you do not need to make the simulation domain bigger. But I’ve seen advice from mesh masters to have roughly 30 or more divisions on the BMB grid; currently you are at 20.

In order to be able to simulate your project on 16 processors, which should be enough for a mesh of your size, you can do the following:

  • Shorten the simulation time. Once convergence stability is reached, nothing new happens in your domain, but processor memory is still being used.
  • Ramp the inlet flow (in your three directions) over the first 10–20 seconds. This gradually adds energy to the simulation, allowing better convergence (reduction of the initial wake). As a result, you can shorten the simulation time by 20–30 %. (Halfway through that ramp period, for a final velocity of 20 m/s, you should be at 14.14 m/s; see the sketch after this list.)
  • There is also an approach I have not experimented with (but @anirudh2821998 has) that allows restarting a simulation. You could run your simulation for 100 seconds in a first try and, if not satisfied with convergence, continue from that end point. This should help with processor memory as well.
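(A minimal sketch of how such a ramp table could be generated. The square-root shape is an assumption chosen so that the speed at half the ramp time is v_final/√2 ≈ 14.14 m/s for a 20 m/s final speed, matching the number quoted above; the CSV column layout expected by the solver is also an assumption.)

```python
import math

def ramp_speeds(v_final=20.0, ramp_time=20.0, dt=1.0):
    """Speed schedule where the dynamic pressure grows roughly linearly:
    v(t) = v_final * sqrt(t / ramp_time). At t = ramp_time / 2 this gives
    v_final / sqrt(2), i.e. about 14.14 m/s for a 20 m/s final speed."""
    steps = int(ramp_time / dt)
    return [(i * dt, v_final * math.sqrt(i * dt / ramp_time))
            for i in range(steps + 1)]

# Write the table as a simple two-column CSV (t in s, v in m/s);
# the exact column layout expected by the solver is an assumption.
with open("inlet_ramp.csv", "w") as f:
    f.write("t,v\n")
    for t, v in ramp_speeds():
        f.write(f"{t:.1f},{v:.3f}\n")
```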

Cheers,

Retsam


Hi @Retsam! Thank you so much for these pointers… I had observed the initial convergence problems during the first few seconds, and my solution was to add initial velocity conditions that match my boundary conditions, but this didn’t work. I’ll attempt a ramp and report back.

Re: convergence, I’ve been struggling to tell from the convergence plot when the simulation has actually converged. Depending on the run, I’ve seen some residuals reach an absolute minimum while others continue a downward trend over time. Over the runs that I’ve performed, I haven’t been able to identify a pattern of which variables exhibit this behavior - it seems completely random. Example below. Would you say this plot converges at ~400 s, or never?

Re: out of memory, in my model this is typically a limitation of meshing rather than of the simulation itself. I’m not sure I understand what a 20–30 BMB grid means. Could you elaborate?

Very interested in how to restart simulation. @anirudh2821998 do you have a post somewhere?

Hi @jhartung! Those are pertinent questions, and I can say that tuning mesh and simulation parameters is like an art. Perhaps formal knowledge can be captured / extracted from thousands of real user cases by an AI agent in the near future. :nerd_face:

  • Personally, when I see omega jittering in the 1e-5 zone, I consider the simulation converged (if you were measuring forces or had probe points installed, you would see the curves flattening at that moment). In your current case, that is around 180–200 s of simulation time. Even if other residuals go further down, no significant impact will be observed.
  • BMB means Background Mesh Box, which in your mesh definition is set to 20 x 20 x 50 divisions. The suggested rule of thumb is to have at least 30 divisions in each direction, which means 30 x 30 x 50.
  • Moreover, the 3D BMB grid is nicer to the simulation algorithms if it has cube-shaped (rather than stretched) cells along all axes. In your case, with a BMB size of 40 x 40 x 50 m, you get good hexahedrons with 20 x 20 x 25 divisions (but that does not seem fine enough), so go for 32 x 32 x 40 (the arithmetic is sketched just after this list).
  • This automatically means you will have more grid cells, hence I suggest making the region refinements slightly smaller, in order to compensate and keep the mesh within your 16-processor limit.
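(A small sketch of the arithmetic behind the cube-cell suggestion above, using the 40 x 40 x 50 m box size and the division counts quoted in this list; it only computes cell edge lengths and aspect ratios.)

```python
def bmb_cell_size(box_m, divisions):
    """Per-axis cell edge lengths for a Background Mesh Box."""
    return tuple(size / n for size, n in zip(box_m, divisions))

box = (40.0, 40.0, 50.0)          # BMB size in metres, as quoted above

for divs in [(20, 20, 50),        # current division counts
             (20, 20, 25),        # cubic cells, but coarse (2 m)
             (32, 32, 40)]:       # cubic cells and finer (1.25 m)
    dx, dy, dz = bmb_cell_size(box, divs)
    aspect = max(dx, dy, dz) / min(dx, dy, dz)
    print(f"{divs}: {dx:.2f} x {dy:.2f} x {dz:.2f} m cells, aspect {aspect:.2f}")
```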

There is also a ‘simplistic’ way I use to observe geometry behaviour over a span of different flow velocities. I start the simulation gently to reach the low end of the velocity range and hold that speed, waiting for the “dust to settle” (around 20–50 seconds, depending on your simulation parameters, but watch omega). Then I add 5–10 % of flow speed and again wait an appropriate time, level by level. In “Simulation control” you need to play with “Delta t” and “Write interval” to be able to observe the different states over the whole simulation.
For accelerating the inlet flow, you already know that a CSV file can be used for that purpose (like for ‘pacing’ your simulation start); a sketch of such a stepped table follows.
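(As an illustration of such a stepped table: the start speed, 7.5 % step, and 30 s hold time below are assumptions in the spirit of the description above, not values from the thread.)

```python
def stepped_speeds(v_start=10.0, step_pct=7.5, hold_s=30.0, levels=5, dt=1.0):
    """Build a (time, speed) staircase: hold each speed for hold_s seconds,
    then raise it by step_pct percent, repeated `levels` times.
    All numbers here are illustrative, not taken from the thread."""
    steps_per_level = int(hold_s / dt)
    rows, v = [], v_start
    for level in range(levels):
        for i in range(steps_per_level):
            rows.append(((level * steps_per_level + i) * dt, v))
        v *= 1.0 + step_pct / 100.0
    rows.append((levels * steps_per_level * dt, v))
    return rows

# Print the first row of each hold level plus the final point
# (with the defaults above there are 30 rows per level).
for t, v in stepped_speeds()[::30]:
    print(f"t = {t:5.1f} s, v = {v:.2f} m/s")
```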

As long as you only wish to see the ‘Solution Field’ (and do not want to extract forces and moments from the simulation), you can play with flow speed and directions this way. Currently, forces and moments do not seem to report correct values with a rotating wind, and I am preparing a case to submit to SimScale support.

Cheers,

Retsam


Hi @Retsam, amazing feedback! Moving to cube-shaped cells in the BMB significantly decreased solving time and memory usage, and resulted in a much more uniform mesh across the model. Here’s a quick mesh clip before and after. You’ll note the addition of a tail region. Even with that, this mesh solved in about half the time!


Next I’ll work on softening the startup of fluid flow and report back.

@jousefm I’d highly recommend you add this tip to your tutorials or meshing docs.


Updated thread title to be more consistent with content.
