================
@@ -231,6 +245,10 @@ linux_projects_to_test=$(exclude-linux $(compute-projects-to-test ${modified_projects}))
 linux_check_targets=$(check-targets ${linux_projects_to_test} | sort | uniq)
 linux_projects=$(add-dependencies ${linux_projects_to_test} | sort | uniq)
 
+linux_runtimes_to_test=$(compute-runtimes-to-test ${linux_projects_to_test})
----------------
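
For context, the new compute-runtimes-to-test call derives, from the projects 
already selected for testing, the runtimes whose test suites should be 
exercised alongside them. A minimal sketch of such a helper, assuming a 
hard-coded clang-to-runtimes mapping (the function body below is an 
illustration, not necessarily what the PR actually implements):

#!/usr/bin/env bash
# Sketch of a compute-runtimes-to-test-style helper: for every project that
# will be tested, emit the runtime libraries whose test suites should also run.
# The clang -> libcxx/libcxxabi/libunwind mapping is an assumption.
function compute-runtimes-to-test() {
  for project in "$@"; do
    case ${project} in
    clang)
      echo "libcxx libcxxabi libunwind"
      ;;
    *)
      # Other projects pull in no extra runtime testing.
      ;;
    esac
  done | tr ' ' '\n' | sort | uniq
}

compute-runtimes-to-test clang llvm   # prints libcxx, libcxxabi, libunwind
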
Endilll wrote:

> I remember recently that clang folks argued to remove the flang tests when 
> changing clang because of the likelihood of breaking Flang with clang tests.

I believe Flang was tested on Clang changes to make sure Clang didn't make any 
incompatible changes to the driver, which Flang reuses. This is a different use 
case from testing Clang itself, and yes, we believed (and likely still do) that it 
didn't bring enough value, especially when it was rendering the whole Clang CI 
unusable.

> I'm not sure the probability of a random LLVM change breaking clang in a way 
> that would only show up with a runtime test and no other LLVM/clang test is 
> higher!

I find it hard to prove this statement either way. But I find it much easier to 
believe that (a) Clang is insufficiently tested; (b) libc++ tests bring a lot 
of coverage. So I believe the value is there, and the cost is building the 
tests and running them, which is as good as it gets.

Additionally, "don't run those Clang tests if Clang is tested because of an 
LLVM change" kind of logic make it harder to reason about CI. Statements like 
"this change has passed Clang CI" would require additional context to 
understand which tests actually ran.
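
To make the concern concrete, a hypothetical sketch of such conditional test 
selection in a pipeline script (variable names here are illustrative only, not 
the actual premerge script):

#!/usr/bin/env bash
# Hypothetical, and argued against above: only run check-clang when clang
# itself was modified, not when it is rebuilt merely as a dependency of an
# LLVM change.
modified_projects="llvm"           # e.g. derived from the changed paths
projects_to_build="llvm;clang"     # clang pulled in as a dependency of llvm

check_targets="check-llvm"
if [[ " ${modified_projects} " == *" clang "* ]]; then
  check_targets="${check_targets} check-clang"
fi

# "This change has passed Clang CI" now depends on *why* clang was built: for
# modified_projects="llvm", check-clang never ran at all.
echo "Building: ${projects_to_build}; running: ${check_targets}"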

> The value / cost ratio isn't clear to me right now.

To me the ratio is rather clear. If you want to push for this, you'd have to 
open a PR yourself.

https://github.com/llvm/llvm-project/pull/93318