FWIW, I’m supportive of this.

I do find SB API-based tests to be powerful but extremely cumbersome to write. 
If Vedant wants to write 15 different tests for the various cases he's 
covering, it's easy to see that they would be much easier to write with 
FileCheck-style checks. It is very powerful to have the test source and its 
checks in the same file, and it makes writing new kinds of tests much, much 
faster.

There would definitely be a cost if the output of the driver changes in 
significant ways. But it feels like any change to the driver output significant 
enough to be worth making could certainly absorb the cost of writing a script 
to update the existing tests that rely on that output.

And maybe the correct way is to expose the debugger object model in some kind 
of serialized form and match that instead of the driver output, but I'm not 
sure that would be so different: the model could still evolve in ways that 
require the tests to be updated.
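
For instance (purely hypothetical, nothing like this exists today), a test 
could serialize just the pieces of the object model it cares about through the 
SB API and run FileCheck over that text instead of the driver output:

```
# Hypothetical sketch: dump a minimal, stable serialization of the frames
# via the SB API and FileCheck that, instead of matching "bt" output.
for i in range(thread.GetNumFrames()):
    frame = thread.GetFrameAtIndex(i)
    print("frame %d: %s line=%d" % (i, frame.GetFunctionName(),
                                    frame.GetLineEntry().GetLine()))
```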

Fred
 

> On Aug 23, 2018, at 4:25 PM, Vedant Kumar <v...@apple.com> wrote:
> 
> Pinging this because I'd like it to go forward to make testing easier.
> 
> I know folks have concerns about maintaining completeness of the scripting 
> APIs and about keeping the test suite debuggable. I just don't think making 
> FileCheck available in inline tests is counter to those goals :).
> 
> I think this boils down to having a more powerful replacement for 
> `self.expect` in lldbinline tests. Since we're now actively discouraging the 
> use of pexpect during code review, we need some replacement.
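> 
> To make that concrete, here's a rough sketch based on the example further 
> down-thread (`self.filecheck` is the helper proposed in D50751, so the exact 
> signature may differ):
> 
> ```
> # Today: unordered substring matching.
> self.expect("bt", substrs=['main', 'func1', 'func2', 'func3', 'sink'])
> 
> # Proposed: run "bt" and pipe its output through FileCheck, driven by the
> # CHECK lines embedded in main.cpp.
> self.filecheck("bt", "main.cpp")
> ```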
> 
> vedant
> 
>> On Aug 15, 2018, at 12:18 PM, Vedant Kumar <v...@apple.com> wrote:
>> 
>> 
>> 
>>> On Aug 15, 2018, at 12:12 PM, Jason Molenda <jmole...@apple.com> wrote:
>>> 
>>> 
>>> 
>>>> On Aug 15, 2018, at 11:34 AM, Vedant Kumar <v...@apple.com> wrote:
>>>> 
>>>> 
>>>> 
>>>>> On Aug 14, 2018, at 6:19 PM, Jason Molenda <jmole...@apple.com> wrote:
>>>>> 
>>>>> It's more verbose, and it does mean test writers need to learn the public 
>>>>> API, but it's also much more stable and debuggable in the future.
>>>> 
>>>> I'm not sure about this. Having looked at failing SB API tests for a while 
>>>> now, I find them about as easy to navigate and fix as FileCheck tests in 
>>>> llvm.
>>> 
>>> I don't find that to be true.  I see a failing test on line 79 or whatever, 
>>> and depending on what line 79 is doing, I'll throw some self.runCmd("bt") 
>>> or self.runCmd("fr v") calls into the test, re-run, and quickly see what 
>>> the relevant context is. For most simple tests, I can usually spot the 
>>> issue in under a minute.  dotest.py likes to eat output when it's run in 
>>> multiprocess mode these days, so I have to remember to add 
>>> --no-multiprocess.  If I'm adding something that I think is generally 
>>> useful for debugging the test case, I'll add a conditional block testing 
>>> against self.TraceOn() and print things that may help people who are 
>>> running dotest.py with -t trace mode enabled.
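>>> 
>>> Roughly this pattern (an illustrative sketch, not lifted from any specific 
>>> test):
>>> 
>>> if self.TraceOn():
>>>     # Only print when dotest.py is run with -t; this helps whoever debugs
>>>     # the test later without cluttering normal runs.
>>>     self.runCmd("bt")
>>>     self.runCmd("frame variable")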
>> 
>> I do agree that there are effective ways of debugging SB API tests. Having 
>> worked with plenty of FileCheck-based tests in llvm/clang/swift, I find them 
>> to be as easy (or, for me personally, easier) to debug.
>> 
>> 
>>> Sometimes a test is written with a "verify this value" function that is 
>>> run over a variety of different variables during the test, and debugging 
>>> that can take a little more work to understand which context is failing.  
>>> But that kind of test would be harder (or at least much more redundant) to 
>>> express in a FileCheck-style system anyway, so I can't ding it.
>> 
>> 
>> Yep, sounds like a great candidate for a unit test or an SB API test.
>> 
>> 
>>> As for the difficulty of writing SB API tests, you do need to know the 
>>> general architecture of lldb (a target has a process, a process has 
>>> threads, a thread has frames, a frame has variables), the public API, which 
>>> quickly becomes second nature because it is so regular, and the 
>>> testsuite-specific setup and template code.  But is that really so 
>>> intimidating to anyone familiar with lldb?
>> 
>> Not intimidating, no. Cumbersome and slow, absolutely. So much so that I 
>> don't see a way of adequately testing my patches this way. It would just 
>> take too much time.
>> 
>> vedant
>> 
>>> packages/Python/lldbsuite/test/sample_test/TestSampleTest.py is 50 lines 
>>> including comments; there are about ten lines of source related to 
>>> initializing and setting up the testsuite, and then six lines are all 
>>> that's needed to run to a breakpoint, get a local variable, and check its 
>>> value.
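>>> 
>>> From memory, the core of it is roughly this (the breakpoint string, 
>>> variable name, and expected value are made up for illustration; the real 
>>> file differs slightly):
>>> 
>>> (target, process, thread, bkpt) = lldbutil.run_to_source_breakpoint(
>>>     self, "// break here", lldb.SBFileSpec("main.c"))
>>> frame = thread.GetFrameAtIndex(0)
>>> test_var = frame.FindVariable("test_var")
>>> self.assertTrue(test_var.IsValid(), "failed to fetch test_var")
>>> self.assertEqual(test_var.GetValueAsUnsigned(), 10)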
>>> 
>>> 
>>> J
>>> 
>>> 
>>> 
>>>> 
>>>> 
>>>>> It's a higher up-front cost, but we're paid back in being able to develop 
>>>>> lldb more quickly in the future, with our published API behaviors -- the 
>>>>> things that must not be broken -- being tested directly.
>>>> 
>>>> I think the right solution here is to require API tests when new 
>>>> functionality is introduced. We can enforce this during code review. 
>>>> Making it impossible to write tests against the driver's output doesn't 
>>>> seem like the best solution. It means that far fewer tests will be written 
>>>> (note that a test suite run of lldb gives less than 60% code coverage). It 
>>>> also means that the driver's output isn't tested as much as it should be.
>>>> 
>>>> 
>>>>> The lldb driver's output isn't a contract, and treating it like one makes 
>>>>> the debugger harder to innovate in the future.
>>>> 
>>>> I appreciate your experience with this (pattern matching on driver output) 
>>>> in gdb. That said, I think there are reliable, maintainable ways to do 
>>>> this, and proven examples we can learn from in llvm/clang/etc.
>>>> 
>>>> 
>>>>> It's also helpful when adding new features to ensure you've exposed the 
>>>>> feature through the API sufficiently.  The first thing I thought to try 
>>>>> when writing the example below was SBFrame::IsArtificial() (see 
>>>>> SBFrame::IsInlined()) which doesn't exist.  If a driver / IDE is going to 
>>>>> visually indicate artificial frames, they'll need that.
>>>> 
>>>> Sure. That's true, we do need API exposure for new features, and again we 
>>>> can enforce that during code review. The reason you didn't find 
>>>> IsArtificial() is because it's sitting on my disk :). Haven't shared the 
>>>> patch yet.
>>>> 
>>>> vedant
>>>> 
>>>>> 
>>>>> J
>>>>> 
>>>>>> On Aug 14, 2018, at 5:56 PM, Vedant Kumar <v...@apple.com 
>>>>>> <mailto:v...@apple.com>> wrote:
>>>>>> 
>>>>>> It'd be easy to update FileCheck tests when changing the debugger (this 
>>>>>> happens all the time in clang/swift). OTOH, the verbosity of the Python 
>>>>>> API means that fewer tests get written. I see a real need to make 
>>>>>> expressive tests easier to write.
>>>>>> 
>>>>>> vedant
>>>>>> 
>>>>>>> On Aug 14, 2018, at 5:38 PM, Jason Molenda <jmole...@apple.com 
>>>>>>> <mailto:jmole...@apple.com>> wrote:
>>>>>>> 
>>>>>>> I'd argue against this approach; it's exactly why the lit tests don't 
>>>>>>> run against the lldb driver -- they hardcode the output of the lldb 
>>>>>>> driver command into the testsuite, and as we accumulate this style of 
>>>>>>> test it will become much more difficult to change and improve the 
>>>>>>> driver.
>>>>>>> 
>>>>>>> This is a perfect candidate for a normal SB API test.  Run to your 
>>>>>>> breakpoints and check the stack frames.
>>>>>>> 
>>>>>>> f0 = thread.GetFrameAtIndex(0)
>>>>>>> self.assertEqual(f0.GetFunctionName(), "sink")
>>>>>>> self.assertEqual(f0.GetLineEntry().GetLine(), expected_line_number)
>>>>>>> f1 = thread.GetFrameAtIndex(1)
>>>>>>> self.assertTrue(f1.IsArtificial())  # proposed API, not yet upstream
>>>>>>> 
>>>>>>> 
>>>>>>> It's more verbose, but it's also much more explicit about what it's 
>>>>>>> checking, and it's easy to see what has changed if there is a failure.
>>>>>>> 
>>>>>>> 
>>>>>>> J
>>>>>>> 
>>>>>>>> On Aug 14, 2018, at 5:31 PM, Vedant Kumar via lldb-dev <lldb-dev@lists.llvm.org> wrote:
>>>>>>>> 
>>>>>>>> Hello,
>>>>>>>> 
>>>>>>>> I'd like to make FileCheck available within lldb inline tests, in 
>>>>>>>> addition to existing helpers like 'runCmd' and 'expect'.
>>>>>>>> 
>>>>>>>> My motivation is that several tests I'm working on can't be made as 
>>>>>>>> rigorous as they need to be without FileCheck-style checks. In 
>>>>>>>> particular, the 'matching', 'substrs', and 'patterns' arguments to 
>>>>>>>> runCmd/expect don't allow me to verify the ordering of checked input, 
>>>>>>>> to be stringent about line numbers, or to capture & reuse snippets of 
>>>>>>>> text from the input stream.
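>>>>>>>> 
>>>>>>>> For instance (hypothetical CHECK lines, just to illustrate the 
>>>>>>>> capture-and-reuse part that the existing helpers can't express):
>>>>>>>> 
>>>>>>>> ```
>>>>>>>> // CHECK: * frame #0: [[SINK_PC:0x[0-9a-f]+]] a.out`sink()
>>>>>>>> // CHECK-NOT: [[SINK_PC]]
>>>>>>>> ```
>>>>>>>> 
>>>>>>>> i.e. capture the PC of the sink frame and then assert that the same 
>>>>>>>> address never shows up again later in the backtrace.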
>>>>>>>> 
>>>>>>>> I'd be curious to know if anyone else is interested or would be 
>>>>>>>> willing to review this (https://reviews.llvm.org/D50751).
>>>>>>>> 
>>>>>>>> Here's an example of an inline test which benefits from 
>>>>>>>> FileCheck-style checking. This test is trying to check that certain 
>>>>>>>> frames appear in a backtrace when stopped inside of the "sink" 
>>>>>>>> function. Notice that without FileCheck, it's not possible to verify 
>>>>>>>> the order in which frames are printed, and that dealing with line 
>>>>>>>> numbers would be cumbersome.
>>>>>>>> 
>>>>>>>> ```
>>>>>>>> --- a/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
>>>>>>>> +++ b/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
>>>>>>>> @@ -9,16 +9,21 @@
>>>>>>>> 
>>>>>>>> volatile int x;
>>>>>>>> 
>>>>>>>> +// CHECK: frame #0: {{.*}}sink() at main.cpp:[[@LINE+2]] [opt]
>>>>>>>> void __attribute__((noinline)) sink() {
>>>>>>>> -  x++; //% self.expect("bt", substrs = ['main', 'func1', 'func2', 'func3', 'sink'])
>>>>>>>> +  x++; //% self.filecheck("bt", "main.cpp")
>>>>>>>> }
>>>>>>>> 
>>>>>>>> +// CHECK-NEXT: frame #1: {{.*}}func3() {{.*}}[opt] [artificial]
>>>>>>>> void __attribute__((noinline)) func3() { sink(); /* tail */ }
>>>>>>>> 
>>>>>>>> +// CHECK-NEXT: frame #2: {{.*}}func2() at main.cpp:[[@LINE+1]] [opt]
>>>>>>>> void __attribute__((disable_tail_calls, noinline)) func2() { func3(); /* regular */ }
>>>>>>>> 
>>>>>>>> +// CHECK-NEXT: frame #3: {{.*}}func1() {{.*}}[opt] [artificial]
>>>>>>>> void __attribute__((noinline)) func1() { func2(); /* tail */ }
>>>>>>>> 
>>>>>>>> +// CHECK-NEXT: frame #4: {{.*}}main at main.cpp:[[@LINE+2]] [opt]
>>>>>>>> int __attribute__((disable_tail_calls)) main() {
>>>>>>>> func1(); /* regular */
>>>>>>>> return 0;
>>>>>>>> ```
>>>>>>>> 
>>>>>>>> For reference, here's the output of the "bt" command:
>>>>>>>> 
>>>>>>>> ```
>>>>>>>> runCmd: bt
>>>>>>>> output: * thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
>>>>>>>> * frame #0: 0x000000010c6a6f64 a.out`sink() at main.cpp:14 [opt]
>>>>>>>> frame #1: 0x000000010c6a6f70 a.out`func3() at main.cpp:15 [opt] [artificial]
>>>>>>>> frame #2: 0x000000010c6a6f89 a.out`func2() at main.cpp:21 [opt]
>>>>>>>> frame #3: 0x000000010c6a6f90 a.out`func1() at main.cpp:21 [opt] [artificial]
>>>>>>>> frame #4: 0x000000010c6a6fa9 a.out`main at main.cpp:28 [opt]
>>>>>>>> ```
>>>>>>>> 
>>>>>>>> thanks,
>>>>>>>> vedant
> 

_______________________________________________
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
