tra added a comment.

In D128914#3643495 <https://reviews.llvm.org/D128914#3643495>, @jhuber6 wrote:

> Yes, it's actually pretty difficult to find a CUDA application using
> `fgpu-rdc`. It seems much more common to just stick everything that's needed
> in the file. I've considered finding a CUDA / HIP benchmark suite and
> comparing compile times using the new driver stuff. The benefit of having
> `fgpu-rdc` be the default is that device code basically behaves exactly like
> host code, and LTO makes `fgpu-rdc` behave like `fno-gpu-rdc`
> performance-wise. The downside, as you mentioned, is compile time.

For what it's worth, NCCL <https://developer.nvidia.com/nccl> is the only 
nontrivial library I'm aware of that needs RDC compilation.
It's also self-contained for RDC purposes: we only need to use RDC on the 
library TUs and do not need to propagate it to all CUDA TUs in the build.
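
To make the RDC requirement concrete, here is a minimal sketch (file and
function names are made up for illustration; this is not NCCL's actual code):
a `__device__` function defined in one TU and referenced from a kernel in
another TU can only be resolved at device-link time, which is what
`-fgpu-rdc` (or nvcc's `-dc`) enables.

  // reduce_impl.cu -- hypothetical library TU defining a device helper.
  __device__ float scaleAndClamp(float v, float s) {
    v *= s;
    return v < 0.0f ? 0.0f : v;
  }

  // kernels.cu -- another hypothetical library TU. The call below is a
  // device-side reference that crosses the TU boundary, so both TUs must
  // be compiled as relocatable device code and resolved at device-link
  // time; without RDC, each TU's device code has to be self-contained.
  extern __device__ float scaleAndClamp(float v, float s);

  __global__ void scaleKernel(float* data, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
      data[i] = scaleAndClamp(data[i], s);
  }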

I believe such 'constrained' RDC compilation will likely be a reasonable 
practical trade-off. It may not become the default compilation mode, but we 
should be able to control where the "fully linked GPU executable" boundary 
sits, and that boundary will not necessarily match the fully-linked host 
executable.
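
Roughly, such a constrained build could look like the sketch below (file
names are hypothetical; the commands use nvcc's separate-compilation flags
since that flow is well established, and the same structure should apply
with clang's `-fgpu-rdc` under the new driver, though the exact device-link
handling may differ):

  # Only the library TUs are compiled as relocatable device code.
  nvcc -dc reduce_impl.cu -o reduce_impl.o
  nvcc -dc kernels.cu -o kernels.o

  # Device link at the library boundary: this is where the "fully linked
  # GPU executable" boundary sits, independent of the host executable.
  nvcc -dlink reduce_impl.o kernels.o -o libfoo_dlink.o

  # Application TUs never need RDC; they compile normally and link
  # against the now self-contained library objects.
  nvcc -c app.cu -o app.o
  nvcc app.o reduce_impl.o kernels.o libfoo_dlink.o -o app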


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D128914/new/

https://reviews.llvm.org/D128914
