tberghammer added a comment.

In http://reviews.llvm.org/D16334#331368, @zturner wrote:

> I don't know, I still disagree.  If something in step-over breaks, I don't
> want to dig through a list of 30 other tests that have nothing to do with the 
> problem, only to find out 2 days later that the problem is actually in step 
> over.  The only reason this helps is because the test suite is insufficient 
> as it is.  But it doesn't need to be!


I agree, but first we should fix the test coverage and only then fix the 
individual tests. Doing it in the opposite order would cause a significant drop 
in quality (we would fix the individual tests but never increase the coverage 
enough).

> The real solution is for people to start thinking about tests more.  I've 
> hounded on this time and time again, but it seems most of the time tests only 
> get added when I catch a CL go by with no tests and request them.  Sometimes 
> they don't even get added then.  "Oh yea this is on my radar, I'll loop back 
> around to it."  <Months go by, no tests>.  Hundreds of CLs have gone in over 
> the past few months, and probably 10 tests have gone in.  *That's* the 
> problem.  People should be spending as much time thinking about how to write 
> tests as they are about how to write the thing they're implementing.  Almost 
> every CL can be tested.  Everything, no matter how small, can be tested.  If 
> the SB tests are too heavyweight, that's what the unit tests are for.  If 
> there's no SB API that does what you need to do to test it, add the SB API.  
> "But I have to design the API first" is not an excuse.  Design it then.


I think we need a different API for tests than the SB API, one that can be 
changed more freely without having to worry about backward compatibility. When 
adding a new feature I try to avoid adding an SB API until I know for sure what 
data I have to expose, because a wrong decision early on will carry forward 
(how many deprecated SB API calls do we have already?).
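
To make the trade-off concrete, here is a minimal sketch of what a check
written against the stable SB API looks like (the SB calls below are real
public entry points; "a.out" and the breakpoint location are placeholders):

    import lldb

    # Every call here is part of the stable public surface, so once a new
    # entry point ships it has to keep working forever -- which is exactly
    # why adding a new SB API for every test is expensive.
    debugger = lldb.SBDebugger.Create()
    debugger.SetAsync(False)
    target = debugger.CreateTarget("a.out")          # placeholder binary
    target.BreakpointCreateByName("main")
    process = target.LaunchSimple(None, None, ".")
    frame = process.GetSelectedThread().GetFrameAtIndex(0)
    assert frame.GetFunctionName() == "main"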

> We've got an entire class of feature that "can't be tested" (the unwinder).  
> There's like 0 unwinding tests.  I get that it's hard, but writing a debugger 
> is hard too, and you guys did it.  I do not believe that we can't write 
> better tests.  Or unwinder tests.  Or core-file debugging tests.

> Really, the solution is for people to stop checking in CLs with no tests, and 
> for people to spend as much time writing their tests as they do the rest of 
> their CLs.  If the problem is that people don't have the time because they've 
> got too much other stuff on their plate, that's not a good excuse and I don't 
> think we should intentionally encourage writing poor tests just because 
> someone's manager doesn't give them enough time to do things the right way.
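
On the unwinder point: a basic backtrace check is already expressible through
the Python SB API. A minimal sketch (assuming a hypothetical test binary
"a.out" where main() calls func_a(), which calls func_b()):

    import lldb

    debugger = lldb.SBDebugger.Create()
    debugger.SetAsync(False)
    target = debugger.CreateTarget("a.out")          # hypothetical fixture
    target.BreakpointCreateByName("func_b")
    process = target.LaunchSimple(None, None, ".")

    # The frame list below is produced by the unwinder; a broken unwind
    # plan would corrupt or truncate it.
    thread = process.GetSelectedThread()
    names = [thread.GetFrameAtIndex(i).GetFunctionName()
             for i in range(thread.GetNumFrames())]
    assert names[:3] == ["func_b", "func_a", "main"]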


It is true that every CL can be tested, but a lot of changes go in to address a 
specific edge case generated by a specific compiler in a strange situation. To 
create a reliable test from such a change we would have to check in a compiled 
binary with the strange/incorrect debug info, and then it becomes a platform- 
and architecture-specific test that is also very hard to debug, because you 
most likely can't recompile the binary with your own compiler. I am not sure we 
want to go down this road.
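
For reference, the kind of test I am describing would look roughly like this
(a sketch; "edge_case.bin" and "problem_function" are hypothetical names for
the checked-in binary and the affected symbol):

    import lldb

    # The fixture is a binary compiled once by the problematic compiler and
    # committed to the repository, so the test only runs meaningfully on the
    # matching platform/architecture and cannot be regenerated locally.
    debugger = lldb.SBDebugger.Create()
    target = debugger.CreateTarget("edge_case.bin")

    # Static inspection of the debug info is enough to reproduce the bug;
    # no process needs to be launched.
    functions = target.FindFunctions("problem_function")
    assert functions.GetSize() == 1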


http://reviews.llvm.org/D16334


