Hi,
Indeed, there is an issue with the GPU detection code's consistency checks
that trip and abort the run if any of the detected GPUs behaves in
unexpected ways (e.g. runs out of memory during checks).
This should be fixed in an upcoming release, but until then, as you have
observed, you can work around it by hiding the problematic GPUs with
CUDA_VISIBLE_DEVICES.
I think I've temporarily solved this problem. Only when I use
CUDA_VISIBLE_DEVICES to hide the GPUs whose memory is almost fully occupied can
I run GROMACS smoothly (using -gpu_id alone is not enough). I think there may be
a bug in GROMACS's GPU usage model in a multi-GPU environment (it seems like as
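To make the workaround concrete, here is a minimal sketch. It assumes a
hypothetical 4-GPU node where devices 1 and 3 are nearly out of memory; the
device numbers and the mdrun arguments are examples, not from the original
messages. Note that CUDA_VISIBLE_DEVICES renumbers the remaining devices, so
-gpu_id then refers to the new numbering, not the physical one.

```shell
# Assumed scenario: physical GPUs 1 and 3 are nearly full, so we
# expose only GPUs 0 and 2 to the process. Inside the process they
# are renumbered 0 and 1, and GROMACS's detection checks never
# touch the problematic devices.
export CUDA_VISIBLE_DEVICES=0,2
echo "$CUDA_VISIBLE_DEVICES"

# Hypothetical run line: -gpu_id uses the renumbered IDs 0 and 1.
# gmx mdrun -deffnm topol -gpu_id 01
```

This differs from using -gpu_id alone because -gpu_id only selects which of
the detected GPUs to use, while CUDA_VISIBLE_DEVICES prevents the runtime from
detecting (and probing) the hidden devices at all.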