zhouronghua wrote:

> From my understanding, we intentionally keep the dependency files restricted 
> to the host for HIP / CUDA. Would this work if we passed `-fdepfile-entry` 
> where we do `-mlink-builtin-bitcode`?
> 
> Right now the files we pass in should probably only be modified if you 
> updated ROCm or CUDA so I can't imagine this being a very common case.

Setting aside whether `-fdepfile-entry` works: even if it does, the following 
issues remain:

1. If the kernel's header dependencies differ across architectures (i.e. 
different headers are included for different architectures), emitting only 
host-side dependencies misses the device-side ones.
2. When a project supports multiple architectures but only some of them are 
being compiled, adding a `-fdepfile-entry` for each architecture introduces 
build dependencies on files for architectures that aren't being compiled, which 
can trigger incremental rebuilds that shouldn't happen.
3. Compared to linking additional device libraries, adding an include to the 
code is a very common change. Requiring programmers to remember to add a 
`-fdepfile-entry` option every time they add an include isn't very friendly.
4. Incremental-build failures are often less obvious than compilation failures 
or functional bugs in the compiled program. When a programmer adds new code and 
rebuilds a Make or CMake project but doesn't see the expected effect, they may 
not immediately suspect an incremental-build issue. They could spend a lot of 
time debugging the code itself before realizing a `-fdepfile-entry` option was 
missing. That adds extra workload.

https://github.com/llvm/llvm-project/pull/176072
_______________________________________________
cfe-commits mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
