I could not find a mistake in the setup at first glance, but it might also be something completely different; I will keep digging. In the meantime, my colleague @rszoeke might add his thoughts here, as he knows more about solver logs of this type than I do.
Hi @atanas_kuzev,
the error log you posted above hints that the hard disk size was too small for the case you were running.
Looking at your case, it is clearly not so large that it should struggle on an instance with a couple of hundred GB of hard disk space available.
I checked it with a newer version of the solver, which we will roll out soon (within the next few weeks), and there the problem disappears. You might have hit a case of particularly poor disk space handling that was resolved in a later release.
Until the new version is rolled out to everyone, you could improve the situation by reducing the "number of parallel processes". I saw you were using 8 on a 16-core instance, which is the default setup and usually the fastest. Using 4 instead of 8 will not increase your run time too drastically (we will still use all the processors available, just more via shared-memory parallelism and less via MPI parallelization), but the disk utilization will be reduced significantly in your case; see the sketch below for how the core split works out.
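To make the core split concrete, here is a minimal illustrative sketch (not SimScale's actual launch mechanism; the names and numbers are just assumptions for the example) of how a fixed core budget can be divided between MPI processes and shared-memory threads:

```python
# Illustrative only: splitting a 16-core budget between MPI processes
# and shared-memory (e.g. OpenMP) threads per process.
TOTAL_CORES = 16  # hypothetical instance size


def core_split(mpi_processes: int, total_cores: int = TOTAL_CORES) -> tuple[int, int]:
    """Return (mpi_processes, threads_per_process) for a given split."""
    threads_per_process = total_cores // mpi_processes
    return mpi_processes, threads_per_process


# Default setup: 8 MPI processes x 2 threads each -> higher disk usage in this case
print(core_split(8))  # (8, 2)
# Suggested setup: 4 MPI processes x 4 threads each -> all 16 cores still busy
print(core_split(4))  # (4, 4)
```

Either way all 16 cores stay busy; only the balance between MPI ranks (each writing its own scratch files) and threads within a rank changes.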
My guess is that this odd behavior is caused by the remote force loads you are using. You might also want to switch to an "undeformable" remote force, which should also fix the issue.
Hi @atanas_kuzev,
from now on, all your structural analysis runs will use the new solver version.
Let me know if you see any issues or unexpected behavior.
Hi @atanas_kuzev,
regarding the error, it looks like it is related to one of the result control plots you defined. We are investigating and should have a fix by Monday. As I said, the new version is not yet fully tested, so these kinds of issues are still possible.
Regarding the scaling to only visible parts, I think this is not yet possible, as we only have two options for the scaling, "manual" and "All parts automatic range" - I guess we would need a third one, "Visible parts automatic range".
Maybe @bdaqui knows a work-around for the time being?
@rszoeke There is an experimental version for such a feature. I will check how mature it is at this point and how soon it could potentially be brought to production.
Hi @atanas_kuzev,
We have added a fix for the error you encountered. I've managed to successfully complete your simulation run "Run 1 Augmented Lagrange new solver". Could you try again and let us know if you encounter any errors?
Hi @atanas_kuzev,
I see that the only difference between your simulation and my test is the initial time step. Could you lower the initial time step from 0.1 to 0.025 and try again?
Hi @svanschie,
The simulation (Run 2 Augmented Lagrange new solver fix error) with initial time step = 0.025 also doesn't converge, but this time at t = 0.75 s.