The LBM solver can deal with many CAD types and is generally more robust than most solvers with respect to geometry cleanliness: open geometry, poor faces and small faces rarely matter. That said, on the odd occasion you do hit issues or errors, if you inspect the geometry and don't find anything fundamentally wrong, an STL can normally be loaded instead.
The Y+ requirements for LBM tend to be more forgiving than those of the equivalent finite volume methods. For example, the K-omega SST (URANS) model in an FVM implementation has an approximate requirement of 30 < Y+ < 300. In SimScale's LBM implementation, however, the lower bound is not considered a requirement; instead, a more relaxed upper bound of less than 500, and certainly no higher than 1000, is recommended. The solver will additionally warn for Y+ values higher than 2000 in the near-wall voxel.
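A rough a-priori estimate of the near-wall Y+ can help you pick a wall voxel size before running. The sketch below uses a common flat-plate skin-friction correlation; the function name, the correlation choice and the assumed air properties are our own illustrative assumptions, not part of SimScale or Pacefish®.

```python
# Rough a-priori y+ estimate for the near-wall voxel, using the flat-plate
# skin-friction correlation Cf = 0.058 * Re_x^(-0.2). Illustrative only.

RHO_AIR = 1.225    # air density [kg/m^3] (assumed)

def yplus_estimate(u, wall_voxel, length, nu=1.5e-5):
    """u: freestream velocity [m/s], wall_voxel: near-wall voxel size [m],
    length: reference length [m], nu: kinematic viscosity [m^2/s]."""
    re_x = u * length / nu                    # Reynolds number at the reference length
    cf = 0.058 * re_x ** -0.2                 # flat-plate skin-friction coefficient
    tau_w = 0.5 * cf * RHO_AIR * u ** 2       # wall shear stress
    u_tau = (tau_w / RHO_AIR) ** 0.5          # friction velocity
    return wall_voxel * u_tau / nu            # y+ at the near-wall voxel

# Example: 10 m/s wind over a 20 m building with 0.05 m near-wall voxels
# lands just above the ~1000 upper bound discussed above.
print(yplus_estimate(10.0, 0.05, 20.0))
```

If the estimate comes out far above the recommended bound, either refine the surface or apply Reynolds scaling as described later in this article.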
If the Y+ is much higher than expected, such that results are likely to be impacted, the user will be warned in the interface:
‘High velocities encountered that might not be handled by the current mesh resolution. Please check your results and consider refining the mesh further.’
‘Mesh resolution might not be sufficient for correct turbulence modeling. Please check your results and consider refining the mesh.’
This shows a maximum Y+ of 100k, which is obviously wrong and needs to be reduced. The main methods of doing this are to apply some Reynolds scaling (see the section below) or to refine the surface. If the surface is already refined to a reasonable level, scaling is the only option that does not excessively increase the cost of your simulation.
‘Regarding Y+ targets, Pacefish® is much more flexible than FVM codes with wall functions. It has no limitation regarding the low-bound value. The results should not suffer from wall resolution as long as the size of the wallnext voxels is not exceeding 500 to 1000.
K-omega SST Model
From Figure 1 we can see the different regions of the boundary layer and why, when modelling the layer, it is encouraged to avoid the log-law region. The K-omega SST model, however, models the layer up to the first cell and solves from there on, and it has proven to be very accurate in many industries, including aerospace.
Although the K-omega SST model is highly accurate, more accurate models exist, namely LES, or Large Eddy Simulation.
LES is more accurate because it models only the eddies smaller than the grid filter and resolves the flow structures larger than the grid filter size. Among its downfalls is its inability, in its standard form, to model walls, therefore requiring either a very fine mesh or flows where wall interactions are least predominant. Pure LES models such as the 'LES Smagorinsky' model have Y+ requirements similar to the equivalent FVM model, with Y+ around or below 1. This is one of the main reasons LES is a more expensive simulation.
However, if a wall model were added, we could obtain the accuracy improvements without the requirement for such a fine mesh, and this is where the advantage of DES, or Detached Eddy Simulation, comes from.
In the LBM solver two detached eddy models are available: the K-omega SST DDES (Delayed Detached Eddy Simulation) and the K-omega SST IDDES (Improved Delayed Detached Eddy Simulation). The same wall requirements exist for the wall modelling; however, at some point the near-wall region transitions from K-omega SST to LES.
The DES models 'K-omega SST DDES' and 'K-omega SST IDDES' have similar requirements to the URANS 'K-omega SST', since the wall model is based upon the same model:
'When you rerun the simulations please consider using the "SST IDDES" turbulence model instead of plain kOmega-SST and Smagorinsky models. DES turbulence models are a hybrid LES-uRANS model that uses RANS formulation in the boundary layer and LES formulation in the farfield achieving an optimum between both worlds. In the present case, the kOmega-SST model probably swallows some of the transient effects. For good results with the plain Smagorinsky model, wall resolution has to be around or below Y+ of 1.' – (Eugen, 2018)
The difference between DDES and IDDES, however, is that IDDES blends from URANS to LES in the buffer region, which can be approximated as 5 < Y+ < 30, whereas the DDES model blends from URANS to LES in the log-law region, 30 < Y+. Depending upon the Y+ values of your simulation, this might therefore make a difference to which model you select. For example, if your Y+ is around 100, then the DDES model would be better than IDDES; however, if the Y+ is below 5, IDDES would be more suited.
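The selection logic above can be condensed into a small helper. This is purely an illustration of the Y+ ranges just described; the function and its return strings are our own, not part of any SimScale API.

```python
# Illustrative helper: pick between the two LBM DES models based on the
# blending-region y+ ranges described above. Names are ours, not SimScale's.

def suggest_des_model(yplus):
    if yplus < 5:
        # Viscous sublayer resolved; IDDES blends to LES in the buffer region.
        return "K-omega SST IDDES"
    if yplus > 30:
        # Near-wall voxel sits in the log-law region, where DDES blends to LES.
        return "K-omega SST DDES"
    # In the buffer region itself (5 < y+ < 30) IDDES handles the blend.
    return "K-omega SST IDDES"

print(suggest_des_model(100))  # DDES, as in the example above
print(suggest_des_model(2))    # IDDES
```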
Reynolds Scaling Factor
It is commonplace to scale down a model physically for wind tunnel testing, or to slow down a flow; examples are testing a scaled building or a plane in subsonic flow. The Reynolds scaling factor can apply this scaling automatically to a full-scale geometry.
Not only is this scaling important in wind tunnels for obvious sizing reasons, it is also needed in the LBM method. A high Reynolds number creates a thin boundary layer, and a thin boundary layer needs a finer mesh to resolve it. Since the LBM requires a lattice with an aspect ratio of 1 (a perfect cube), refining to the required Y+ values may become expensive. On top of that, if you were to refine to the required level at the surface without scaling, then, because the Courant number is maintained at a value lower than 1, the number of time steps required for the same time scale would increase, further increasing simulation expense.
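The cost argument above is simple arithmetic: in a cubic lattice, each halving of the voxel size multiplies the cell count by 8, and at a fixed Courant number the time step halves too, doubling the number of steps for the same physical time. A back-of-the-envelope sketch (our own illustrative function):

```python
# Relative cost of uniform refinement in a cubic lattice at fixed Courant
# number. Back-of-the-envelope arithmetic only.

def refinement_cost(levels):
    """Relative cell count, step count and total work after `levels`
    halvings of the voxel size."""
    linear = 2 ** levels
    cells = linear ** 3   # cubic lattice: 8x cells per refinement level
    steps = linear        # dt halves with dx at fixed CFL: 2x steps per level
    return cells, steps, cells * steps

print(refinement_cost(1))  # (8, 2, 16): one level costs ~16x the work
print(refinement_cost(2))  # (64, 4, 256)
```

This ~16x-per-level growth is why Reynolds scaling is often preferred over refining all the way to the target Y+.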
The depicted validation case, AIJ Case E, for pedestrian wind comfort is compared to a wind tunnel where the scale of the city is 1:250; a scaling factor of 0.004 is therefore applied. If dealing with a high Reynolds number, it is recommended to review some literature to understand an acceptable scaling factor for the application, or, if in research, to choose the scale factor matching the wind tunnel you are comparing against.
The Reynolds scaling factor is located in an Incompressible LBM simulation under the Model node.
The Reynolds number is defined as Re = U·L/ν, where 'L' is the reference length, 'U' is the velocity and 'ν' is the kinematic viscosity of the fluid. When a scaling factor is applied, instead of sizing the geometry down, the viscosity is increased to ensure that the Reynolds number is reduced by the correct scaling.
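A short sketch of what this means numerically, following the definition above: dividing the viscosity by the scaling factor reduces Re by exactly that factor while the geometry and velocity stay full scale. Variable names are our own; this is not SimScale code.

```python
# Sketch of the Reynolds scaling factor as described in the text: the
# kinematic viscosity is raised so Re drops by the scaling factor.

def reynolds(u, length, nu):
    return u * length / nu          # Re = U*L/nu

def scaled_viscosity(nu, scaling_factor):
    return nu / scaling_factor      # larger nu -> smaller Re

nu_air = 1.5e-5                     # m^2/s, air (assumed)
re_full = reynolds(5.0, 100.0, nu_air)                        # full-scale case
re_scaled = reynolds(5.0, 100.0, scaled_viscosity(nu_air, 0.004))
print(re_scaled / re_full)          # ~0.004, i.e. the 1:250 wind-tunnel scale
```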
We applied a simple rule of thumb where the mesh of the worst-resolved solid is still at most two refinement levels below the best-resolved solid. Because memory consumption scales with second order and computation effort scales with third order, you already have a huge saving relative to resolving all solids at the highest refinement level (93% less memory and 99% less computation time), while at the same time keeping a stable (non-changing) resolution at the wall and getting rid of numeric effects at the transitions. Consider grid transitions at solids to be an EXPENSIVE operation in terms of result quality, even if you do not get any NaNs and do not directly see the effects. This means you can use them, but do so carefully. Try your best to maintain the same refinement level for solids as far as possible. Simply follow the above-mentioned rule of thumb using a VoxelizedVolume with a unidirectional extrusion size of 4 voxels and a directional downstream extrusion of 16 voxels, and you will get very good geometry-adapted meshes that are, in almost any case, far better suited to the simulation than refinement regions built from manual boxes. Generally, consider refinement boxes to be a tool from the Navier-Stokes world: they still work for Pacefish®, but VoxelizedVolumes work much better.
The difference between the standard and LBM solvers is vast, but the SimScale user interface does an excellent job of making the transition between the two as seamless as possible. However, one of the biggest differences that cannot be hidden is the vast number of options available for data exportation.
The reason for this level of control is that ordinary OpenFOAM-based solvers usually run in the steady state, and usually on grids below 20 million cells, so saving the entire result set for the final step is no issue. With the LBM solver, however, it is normal to have grids bigger than 100 million cells, and what is more, since it is transient, results are produced at every time step. A complete result set is so large that it cannot all realistically be returned.
For this reason, the LBM solver offers three main methods of exportation: Transient, Statistics and Snapshot. For each of these, we have the option to specify the interval, the region to be saved, and whether to save surface results, fluid results or both. Let's go through these three options.
If the machine runs out of storage to hold the requested results, an error will start appearing in the logs:
FATAL @ EnSightExport.cpp:3679: EnSight data export to “export/trans_Pedestrian__PACEFISHSPACE__Level__PACEFISHSPACE__SlicePACEFISH” FAILED because of file I/O issue. Please check the access rights and the available disk space at the destination.
If this starts appearing, it is advised to stop the simulation immediately and re-adjust the result controls to reduce the size of the written data: any further data produced is unlikely to be written, so further solve time will not gain you additional results and will simply waste GPU hours.
The amount of data a machine can hold is not an exact science; the results will depend upon the mesh size, the export domain size, the frequency of transient result writes and the time the simulation is run for. So, although it might be hard to judge, simply being conservative and realistic, and putting thought into what you need at the end of a simulation, will likely produce results without error. If errors like the above are observed, it doesn't take long to get an idea of how much data is too much.
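A crude estimate before the run is still better than discovering the problem in the logs. The sketch below multiplies cells, fields, writes and bytes per value; the 4-bytes-per-float figure and the field list are assumptions for illustration, and the actual Pacefish®/EnSight output format will differ.

```python
# Very rough transient-export size estimator, for sanity-checking result
# controls before a run. Assumes 4 bytes per value; illustrative only.

def export_size_gb(cells, fields, writes, bytes_per_value=4):
    return cells * fields * writes * bytes_per_value / 1e9

# 100M-cell domain, velocity (3 components) + pressure, written 1000 times:
print(export_size_gb(100e6, 4, 1000))   # 1600 GB: clearly far too much
# The same fields on a one-voxel-thick slice of ~500k cells:
print(export_size_gb(500e3, 4, 1000))   # 8 GB: manageable
```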
General advice on reducing the size of the results is to be conservative, and this can be elaborated upon. If you are interested in results over a large area, for example peak velocities at various points at pedestrian height in a city, you could simply export transient data for the encompassing area. However, to get good transient results many writes are needed, realistically at every time step, and this will not be possible for a realistic case, i.e. one with cells near or exceeding 100 million and appropriate wall refinements. An alternative is to save a much smaller region: we could slice a region using a small region height, exporting a region one cell thick in the vertical direction and drastically reducing the size of the results. We could be more conservative still: if we know the points we are interested in, we can upload them as a CSV file and export every time step at those points only. This reduces the results footprint drastically, allowing the space to be used for other things and thus getting more out of the simulation.
Another example of being conservative might be wind loading, where you simply want to understand the pressures on the surfaces of a building. You could export fluid and surface data around a city, or reduce it to just the building of interest. Furthermore, we could drop the volume data and export only surface data, reducing the results to two dimensions. Further still, we could take a leaf out of the wind tunnel book and once again introduce points on the surface as a CSV, as virtual tap points, which will export the data at those points only.
In the above two examples, it is up to the user to determine the level of results they require; however, every time you drop a level, a significant amount of space is freed up on the machine, and these methods can lead to highly productive simulation runs.
Transient results are the time-dependent result fields and can be saved at every specified interval. It is generally recommended that only small domains are saved and, if an animation is desired, a small slice saved frequently. If a machine runs out of storage, your simulation will fail, potentially wasting a lot of solve time, so be conservative with the transient output and think about the exact results you need.
Statistics can be analysed as a percentage from the end, at intervals greater than or equal to the time step. The percentage from the end defines where the analysis starts. For example, a percentage from the end of 1 (100%) analyses all data from the beginning of the simulation; however, this might be undesirable, since the flow takes some time to initialise and stabilise to a somewhat periodically steady state. Values such as 0.5 (the last 50% of the simulation) and the default 0.2 (the last 20%, i.e. starting 80% into the simulation) are therefore better.
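The mapping from "percentage from the end" to an averaging start time can be written down explicitly. This is our own sketch of the behaviour described above, not SimScale's implementation:

```python
# How "percentage from the end" maps to the averaging window start,
# per the description above. Illustrative helper only.

def stats_window_start(total_time, fraction_from_end):
    """fraction_from_end=1.0 averages the whole run;
    0.2 averages only the last 20% of it."""
    return total_time * (1 - fraction_from_end)

print(stats_window_start(100.0, 1.0))  # 0.0: includes start-up transients
print(stats_window_start(100.0, 0.2))  # 80.0: skips the first 80% of the run
```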
Probe points can be added as velocity measuring devices (virtual hot wires, pitot tubes, etc.) or to monitor pressure at a point (virtual pressure tap points). The data for each probe is returned as components of velocity and pressure, with the full time sequence returned at the rate specified in the result control.
The format for specifying the Pacefish® probe points is:
This can easily be done in Excel or your spreadsheet software of choice, which can export in .csv format.
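The file can equally be generated from a script. The exact column layout the platform expects is not reproduced in this article, so the label-plus-coordinates layout below is an assumption for illustration; check the current platform documentation before uploading.

```python
# Writing probe locations to a CSV from a script rather than a spreadsheet.
# The label,X,Y,Z layout is an assumed example, not a confirmed format.
import csv

probes = [
    ("pedestrian_1", 12.5, 3.0, 1.5),  # x, y, z in metres (illustrative points)
    ("pedestrian_2", 20.0, 8.0, 1.5),
    ("roof_tap_1", 15.0, 5.0, 32.0),
]

with open("probes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for label, x, y, z in probes:
        writer.writerow([label, x, y, z])
```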
It is important to note that if the time steps are bigger than the requested write frequency, the data is returned at the rate of the time step size, and the user is warned in the interface. This matters when doing a spectral analysis, where the data then arrives at a different frequency than was asked for in the interface. This is true for probe points, force plots, statistical sampling and transient result field returns.
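For spectral analysis the practical consequence is a lower effective Nyquist frequency than the requested interval implies. A small illustrative helper (our own, not part of any SimScale tooling):

```python
# The effective write interval is clamped to the solver time step, per the
# note above; the usable Nyquist frequency drops accordingly.

def effective_interval(requested_interval, dt):
    """Data cannot be written more often than once per time step."""
    return max(requested_interval, dt)

dt = 0.002                                       # solver time step [s] (assumed)
print(effective_interval(0.001, dt))             # 0.002: clamped to the time step
print(1 / (2 * effective_interval(0.001, dt)))   # Nyquist is 250 Hz, not 500 Hz
```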
Last updated: September 22nd, 2020