To achieve what you need, there might be an easy way with the current code.
First, know that you can change the time step size by calling
*UpdateStepSize*. You can replace long *DoDynamics* calls with step-by-step
calls to circumvent the problem. That is, replacing

    my_tracker->AddAcc(...);
    DEMSim.DoDynamics(a_long_time);

with

    DEMSim.UpdateStepSize(current_stepsize);
    for (double t = 0.; t < a_long_time; t += current_stepsize) {
        my_tracker->AddAcc(...);
        DEMSim.DoDynamics(current_stepsize);
    }
You may be concerned about the performance, and indeed, transferring an
array to the device at each step will take its toll, but it's probably not
that bad considering how heavy each DEM step is anyway (I may add another
utility that applies a persistent acceleration later on). On the other
hand, splitting a *DoDynamics* call into multiple pieces in a for loop
should by itself affect the performance little, so you should not be worried.
This way, it should be safe to advance the fluid simulation for several
time steps and then advance the DEM simulation by one step. In fact, I do
this in my co-simulations just fine.
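To make that loop structure concrete, here is a rough sketch of the whole
coupling cycle, under a few assumptions: fluid_solver, AdaptiveDt, Advance,
dem_dt and t_end are illustrative names standing in for your fluid-side code,
and the AddAcc argument is left schematic as above.

    double dem_dt = 5e-6;  // illustrative DEM step size
    DEMSim.UpdateStepSize(dem_dt);
    double t = 0.;
    while (t < t_end) {
        // Advance the fluid with its own adaptive steps until one DEM step is covered.
        double covered = 0.;
        while (covered < dem_dt) {
            double fluid_dt = fluid_solver.AdaptiveDt();  // hypothetical fluid-side call
            fluid_solver.Advance(fluid_dt);               // hypothetical fluid-side call
            covered += fluid_dt;
        }
        // Feed the fluid-induced acceleration to DEM and advance DEM by one step.
        my_tracker->AddAcc(...);
        DEMSim.DoDynamics(dem_dt);
        // Any small mismatch between covered and dem_dt can be absorbed by
        // tweaking the step size, as noted below.
        t += covered;
    }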
A note: In theory *UpdateStepSize* should only be used from a synchronized
solver state, meaning after a *DoDynamicsThenSync* call, because the step
size is used to determine how proactive the contact detection has to be.
But if your step size change is a micro tweak, then you should be able to
get away with it even when it follows asynchronous calls, i.e. *DoDynamics*.
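In code, the safe ordering is simply the following (a_long_time, new_stepsize
and another_long_time are illustrative names):

    DEMSim.DoDynamicsThenSync(a_long_time);  // brings the solver to a synchronized state
    DEMSim.UpdateStepSize(new_stepsize);     // safe to change the step size here
    DEMSim.DoDynamics(another_long_time);    // continue with the new step size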
As for the call duration being smaller than the step size (but larger than
0): This is a good question. Right now it should still advance the
simulation by a time step, which puts the simulation time ahead of what you
would expect. So it's better to call *UpdateStepSize* as needed to stay safe.
This behavior might be improved later.
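In the meantime, a simple guard like the following (requested_duration being
an illustrative name) keeps the solver from overshooting:

    // If the requested duration is shorter than the current step size, shrink
    // the step size first so that a single DoDynamics step does not advance
    // the simulation further than intended.
    if (requested_duration < current_stepsize) {
        current_stepsize = requested_duration;
        DEMSim.UpdateStepSize(current_stepsize);
    }
    DEMSim.DoDynamics(requested_duration);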
Thank you,
Ruochun
On Wednesday, May 15, 2024 at 6:27:26 AM UTC+8 [email protected] wrote:
> Thank you for your fast reply, you've been very helpful already.
>
> I'm using the trackers to track granular particles inside a fluid flow.
> Thank you for pointing out the difference in time step size and the time
> duration of the DoDynamics call, I'm pretty sure that is where my error is
> coming from.
> Since we're using an adaptive time stepping for the fluid simulation, it
> can happen that the time step for the flow varies throughout the
> simulation. For this reason I'm running the DoDynamics call with the time
> step size of the fluid simulation. Usually the time step for the flow is
> much smaller than the DEM time step. (*additional question towards the end)
> It would be possible to add an additional time step criterion based on the
> DEM simulation on the fluid side, but this would probably result in
> unnecessarily long simulations, since we haven't fully coupled the system yet.
>
> So when I'm passing the states of my particles, I want them to move
> according to the forces of the fluid. The problem I observed is exactly
> what you described: basically, I'm just applying a short acceleration in the
> first DEM time step, but after that the particle is not accelerated further
> by that force. I was able to recreate some experimental results by
> pre-calculating the resulting velocities from the acceleration but this is
> definitely not a long term solution...
>
> For this particular case it would be handy if the acceleration stayed
> constant over the time steps of a DoDynamics call and was cleared again
> afterwards.
> Is this something that would be easy for me to tweak in the code? Or do
> you maybe have an alternative suggestion for me?
>
> * additional question: I don't know if this will ever be the case in my
> simulation but what would happen if the DoDynamics duration is smaller than
> the DEM time step?
>
> Thank you, Julian
>
> On Tuesday, May 14, 2024 at 7:03:59 PM UTC+2 Ruochun Zhang wrote:
>
>> Hi Julian,
>>
>> Glad that you are able to move on to doing co-simulations.
>>
>> If you use a tracker to add acceleration to some owners, then it affects
>> only the next time step. This is to be consistent with other tracker Set
>> methods (such as SetPos) because, well, they technically only affect the
>> simulation once, too. This is also because setting acceleration with
>> trackers is assumed to be used in a co-simulation, and in this case, the
>> acceleration probably changes at each step. If the acceleration
>> modification were to persist indefinitely, then it would be the user's
>> responsibility to deactivate it once it is not needed. Of course, this is
>> not necessarily the best or most intuitive design choice and I am open to
>> suggestions.
>>
>> The acceleration prescription can only be added before initialization
>> because it is just-in-time compiled into the CUDA kernels to make it more
>> efficient. They are expected to be non-changing during the simulation and,
>> although fixed prescribed motions are very common in DEM simulation, they
>> are perhaps not suitable to be used in co-simulations.
>>
>> If in your test case the added acceleration seems to have no effect, then
>> it's likely that it is too small, or the DoDynamics is called with a time
>> length that is significantly larger than the time step size. If this is not
>> the case and you suspect it is due to a bug, please provide a minimal
>> reproducible example so I can look into it.
>>
>> Thank you,
>> Ruochun
>>
>> On Monday, May 13, 2024 at 9:05:34 PM UTC+8 [email protected] wrote:
>>
>>> Hi Ruochun,
>>> I've upgraded my hardware and now everything is working fine.
>>>
>>> I'm trying to run a co-simulation with the DEM-Engine where it would be
>>> necessary to pass the acceleration for each particle to the simulation.
>>> From the code, I've seen that there are two options: either adding an
>>> acceleration or using a prescribed force/acceleration.
>>>
>>> If I read the comments from the code correctly, the acceleration is only
>>> added for the next time step but not constant over the DoDynamics call?
>>> From my tests it looks like the acceleration has no effect on the
>>> trajectory of my particle.
>>> On the other hand, the prescribed acceleration can only be added during
>>> the initialisation, and not between DoDynamics calls.
>>>
>>> Is there an option to add an acceleration to a particle that affects the
>>> particle over the whole DoDynamics call?
>>>
>>> Thank you for your help
>>> Julian
>>> On Friday, March 29, 2024 at 9:23:29 PM UTC+1 Ruochun Zhang wrote:
>>>
>>>> Hi Julian,
>>>>
>>>> I see. The minimum compute capability tested was 6.1 (10 series). The jump
>>>> from the 9 series to the 10 series is a big one, and DEME is a new package
>>>> that uses newer CUDA features a lot.
>>>> Most likely GTX 970 is not going to support them. Quite a good reason to
>>>> get an upgrade I would say, no?
>>>>
>>>> Thank you,
>>>> Ruochun
>>>> On Saturday, March 30, 2024 at 3:38:40 AM UTC+8 [email protected]
>>>> wrote:
>>>>
>>>>> Hi Ruochun,
>>>>> Thank you for your answer and trying to help me.
>>>>> I have been able to run a simulation in the container using the same
>>>>> image on another GPU machine (a cluster with several NVIDIA RTX 2080Ti w/
>>>>> 12GB).
>>>>> When I try to run a simulation on my local machine, which I use for
>>>>> development purposes (NVIDIA GTX 970 w/ 4GB), the simulation crashes.
>>>>> I also tried to run the simulation outside of a container, and the
>>>>> simulation still crashes with the same error. Other projects using
>>>>> CUDA do run on my local machine, though.
>>>>> Both machines, the cluster and the local machine, run the exact same CUDA
>>>>> version and NVIDIA drivers, so I'm assuming that running the simulation
>>>>> inside the Docker container is not the issue.
>>>>>
>>>>> I'm assuming that there is an issue with the compute capability of
>>>>> my local GPU; is there any kind of minimum hardware requirement?
>>>>>
>>>>> Julian
>>>>>
>>>>> On Friday, March 29, 2024 at 7:57:49 PM UTC+1 Ruochun Zhang wrote:
>>>>>
>>>>>> Just to be clear, DEM-Engine runs on a single GPU as well and there
>>>>>> is no difference other than being (around) half as fast.
>>>>>>
>>>>>> Ruochun
>>>>>>
>>>>>> On Friday, March 29, 2024 at 10:58:18 PM UTC+8 [email protected]
>>>>>> wrote:
>>>>>>
>>>>>>> I was able to run a simulation on a different GPU setup, using 2
>>>>>>> GPUs. Is it not possible to run the DEM-Engine on a single GPU?
>>>>>>>
>>>>>>> On Thursday, March 28, 2024 at 4:55:44 PM UTC+1 Julian Reis wrote:
>>>>>>>
>>>>>>>> Hello,
>>>>>>>>
>>>>>>>> I've tried to set up a Docker container for the DEM-Engine using
>>>>>>>> nvidia/cuda-12.0.1-devel-ubuntu22.04 as the base image.
>>>>>>>> I followed the compile instructions from the GitHub repo and the
>>>>>>>> code compiles fine.
>>>>>>>> When I try to run any of the test cases though, the simulation
>>>>>>>> crashes with the following error:
>>>>>>>> Bus error (core dumped)
>>>>>>>> Right after the following outputs for the demo file
>>>>>>>> SingleSphereCollide:
>>>>>>>> These owners are tracked: 0,
>>>>>>>> Meshes' owner--offset pairs: {1, 0}, {2, 1},
>>>>>>>> kT received a velocity update: 1
>>>>>>>>
>>>>>>>> Are you aware of any problems like this?
>>>>>>>>
>>>>>>>> Julian
>>>>>>>>
>>>>>>>