v.g.vassilev added a comment.

Generally, looks good to me. I'd like to wait for @Hahnfeld and @tra's feedback 
at least for another week before merging.

@dblaikie, I know that generally we do not want to run tests on the bots, and 
that makes testing this patch quite hard. Do you have a suggestion on how to 
move forward here? In principle, we could have another option where we ask 
the JIT whether it can execute code on the device, if one is available.



================
Comment at: clang/lib/Interpreter/Offload.cpp:1
+//===-------------- Offload.cpp - CUDA Offloading ---------------*- C++ -*-===//
+//
----------------
argentite wrote:
> Hahnfeld wrote:
> > v.g.vassilev wrote:
> > > How about `DeviceOffload.cpp`?
> > Or `IncrementalCUDADeviceParser.cpp` for the moment - not sure what other 
> > classes will be added in the future, and if they should be in the same TU.
> I wanted to avoid "CUDA" in case we use it later for HIP.
Wouldn't `DeviceOffload.cpp` be a better name for the file and its intent?


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D146389/new/

https://reviews.llvm.org/D146389

_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
