labath added a comment.

In D121631#3384124 <https://reviews.llvm.org/D121631#3384124>, @yinghuitan 
wrote:

>> unify with `target.preload-symbols` feature
>
> I do not have a strong opinion on this. One concern is that 
> `target.preload-symbols` provides the full experience, while 
> `symbols.load-on-demand` provides a trade-off experience (for example, some 
> global expression evaluation may fail because we won't automatically hydrate 
> based on a type query).

That is a fair point, but having them separate also has downsides, as we have 
to figure out what the on-demand=true & preload=true combination is going to 
mean. I think it would be strange for on-demand to take precedence in that 
setup, but if it doesn't, then you need to change *two* settings to switch 
from "preload everything" (the current default) to the "on demand" mode. I am 
not particularly invested in any solution, but I think we should make a 
conscious choice one way or the other.
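
To make the ambiguity concrete, a hypothetical session could look like this 
(the setting names come from this patch and from the existing 
`target.preload-symbols` option; which behaviour wins is exactly the open 
question, not something lldb currently defines):

  (lldb) settings set symbols.load-on-demand true
  (lldb) settings show target.preload-symbols
  target.preload-symbols (boolean) = true

With both settings in play, it is not obvious whether symbols are still 
preloaded eagerly or loaded on demand; the user either has to know which 
setting takes precedence, or has to remember to flip `target.preload-symbols` 
to false as well.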

>> I don't think that "run everything with the setting turned on" is a good 
>> testing strategy
>
> I do not have a strong opinion either. I understand the concern is the extra 
> test time. I do want to share that there are many tests that do not set 
> breakpoints, and these exercised the symbol on-demand code path and helped 
> catch real bugs in the implementation.

How many tests/bugs are we talking about here? Were there more than, say, ten 
distinct bugs caught that way?

I see a large difference between running tests to find /new/ bugs and running 
tests to find /regressions/. Our (API) test suite can be run in a lot of 
different ways, which can be used to find new bugs. I have used that myself on 
several occasions. Even without making any changes to the test suite, you 
could run it with your feature enabled through the `--setting` argument to 
dotest (an example follows below). But the fact that you managed to find a bug 
using some combination of dotest arguments does not mean that everyone should 
be running the test suite with those arguments to ensure it doesn't regress. 
It just doesn't scale. When you find (and fix) a bug, you can write a 
dedicated regression test for that bug -- maybe even more than one, to cover 
similar edge cases that happen to work already but could be broken in the 
future.
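
For reference, such a one-off exploratory run could look roughly like this 
(the exact dotest entry point and paths depend on your build tree; this is a 
sketch, not a proposed bot configuration):

  # run the whole API suite with the feature force-enabled for every test
  ./bin/lldb-dotest --setting symbols.load-on-demand=true

Runs like that are a fine way to shake out new bugs; the point above is only 
that they should not become a permanent, separately-maintained test flavour.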

Please don't take this personally. This isn't about you or your feature -- 
that's my standard response to anyone tempted to add new test flavours (in a 
way, it was a mistake to make it too easy to add them). All of what I said 
applies (maybe even more so than here) to some of the existing test categories 
(`gmodules`, for one), but since they're already here, it becomes much harder 
to remove them -- which is why I'm trying to avoid adding any new ones.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D121631/new/

https://reviews.llvm.org/D121631
