tra added inline comments.

================
Comment at: clang/tools/clang-repl/ClangRepl.cpp:27
+static llvm::cl::opt<bool> CudaEnabled("cuda", llvm::cl::Hidden);
+static llvm::cl::opt<std::string> OffloadArch("offload-arch", llvm::cl::Hidden);
+
----------------
Where will clang-repl find the CUDA headers? Generally speaking, `--cuda-path` 
is essential for CUDA compilation, as it's fairly common for users to have more 
than one CUDA SDK version installed, or to have it installed in a non-default 
location.
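
To illustrate (just a sketch, not something in the patch; `CudaPath` and the 
`appendCudaPath` helper are made-up names), I'd expect a hidden option 
mirroring the two above, forwarded to the compiler invocation as 
`--cuda-path=<dir>`:

  #include "llvm/Support/CommandLine.h"
  #include <string>
  #include <vector>

  // Sketch only: let the user point clang-repl at a specific CUDA SDK,
  // mirroring clang's --cuda-path.
  static llvm::cl::opt<std::string>
      CudaPath("cuda-path",
               llvm::cl::desc("Path to the CUDA SDK installation to use"),
               llvm::cl::Hidden);

  // Hypothetical helper: forward the option to the argument list that
  // clang-repl hands to the incremental compiler.
  static void appendCudaPath(std::vector<std::string> &ClangArgv) {
    if (!CudaPath.empty())
      ClangArgv.push_back("--cuda-path=" + CudaPath.getValue());
  }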


================
Comment at: clang/tools/clang-repl/ClangRepl.cpp:137
+
+    ExitOnErr(Interp->LoadDynamicLibrary("libcudart.so"));
+  } else
----------------
Is there any doc describing the big-picture approach to the CUDA REPL 
implementation and how all the pieces tie together?

From the patch I see that we will compile the GPU side of the code to PTX and 
pack it into a fatbinary, but it's not clear how we get from there to actually 
launching the kernels. Loading libcudart.so here also does not appear to be 
tied to anything else. I do not see any direct API calls, and the host-side 
compilation appears to be done without passing the GPU binary to it, which 
would normally trigger generation of the glue code to register the kernels 
with the CUDA runtime. I may be missing something, too.

I assume the gaps will be filled in in future patches, but I'm still curious 
about the overall plan.
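
For reference, this is roughly what has to happen to get from a PTX image to a 
kernel launch if one did it by hand through the driver API (illustrative only, 
not what the patch does; the runtime-API route instead relies on the 
registration glue that clang emits when the GPU binary is passed to the 
host-side compilation):

  #include <cuda.h>

  // Minimal driver-API sequence: load a NUL-terminated PTX image and launch
  // the named kernel with a 1x1x1 grid/block and no arguments.
  static CUresult launchFromPTX(const char *PTX, const char *KernelName) {
    CUdevice Dev;
    CUcontext Ctx;
    CUmodule Mod;
    CUfunction Fn;
    if (CUresult R = cuInit(0)) return R;
    if (CUresult R = cuDeviceGet(&Dev, /*ordinal=*/0)) return R;
    if (CUresult R = cuCtxCreate(&Ctx, /*flags=*/0, Dev)) return R;
    if (CUresult R = cuModuleLoadData(&Mod, PTX)) return R;
    if (CUresult R = cuModuleGetFunction(&Fn, Mod, KernelName)) return R;
    if (CUresult R = cuLaunchKernel(Fn, /*grid=*/1, 1, 1, /*block=*/1, 1, 1,
                                    /*sharedMemBytes=*/0, /*stream=*/nullptr,
                                    /*kernelParams=*/nullptr,
                                    /*extra=*/nullptr))
      return R;
    return cuCtxSynchronize();
  }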




Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D146389/new/

https://reviews.llvm.org/D146389
