Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Tamas Berghammer via lldb-dev
Hi Todd,

I attached statistics for the last 100 test runs on the Linux x86_64
builder (http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake).
The data might be a bit noisy because of actual test failures caused by a
temporary regression, but it should give you a general idea of what is
happening.

I will try to create statistics where the results are displayed separately
for each compiler and architecture to get a more detailed view, but it will
take some time. If you want, I can include the list of build numbers for
every outcome, but it will be a very long list (currently it is only
included for Timeout and Failure).

Tamas

On Mon, Sep 14, 2015 at 11:24 PM Todd Fiala via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> On an Ubuntu 14.04 x86_64 system, I'm seeing the following results:
>
> *cmake/ninja/clang-3.6:*
>
> Testing: 395 test suites, 24 threads
> 395 out of 395 test suites processed - TestGdbRemoteKill.py
> Ran 395 test suites (0 failed) (0.00%)
> Ran 478 test cases (0 failed) (0.00%)
>
> Unexpected Successes (6)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestConstVariables.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiBreak.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py
>
>
> *cmake/ninja/gcc-4.9.2:*
>
> 395 out of 395 test suites processed - TestMultithreaded.py
> Ran 395 test suites (1 failed) (0.253165%)
> Ran 457 test cases (1 failed) (0.218818%)
> Failing Tests (1)
> FAIL: LLDB (suite) :: TestRegisterVariables.py
>
> Unexpected Successes (6)
> UNEXPECTED SUCCESS: LLDB (suite) :: TestDataFormatterSynth.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiBreak.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py
> UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py
>
>
> I will look into those.  I suspect some of them are compiler-version
> specific, much like some of the OS X ones I dug into earlier.
> --
> -Todd


test-statistics
Description: Binary data


[lldb-dev] Get vtable info from the image

2015-09-15 Thread Ramkumar Ramachandra via lldb-dev
Hi,

I believe there's now a:

  (gdb) info vtbl ...

and I'm unable to find the equivalent of this in lldb. I usually do:

  (lldb) im look -r -v -s ...

and look for the vtable info in the output, but it doesn't always seem
to be there.

What am I missing?

Thanks.

Ram


Re: [lldb-dev] Get vtable info from the image

2015-09-15 Thread Ramkumar Ramachandra via lldb-dev
Ha, turns out it's

  (lldb) im look -r -v -s "vtable for ..."

We should document this in http://lldb.llvm.org/lldb-gdb.html
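
For example, with a hypothetical class Foo (the class name here is just
illustrative):

  (lldb) image lookup -r -v -s "vtable for Foo"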



Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Todd Fiala via lldb-dev
On Tue, Sep 15, 2015 at 2:57 AM, Tamas Berghammer 
wrote:

> Hi Todd,
>
> I attached statistics for the last 100 test runs on the Linux x86_64
> builder (http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake).
> The data might be a bit noisy because of actual test failures caused by a
> temporary regression, but it should give you a general idea of what is
> happening.
>
>
Thanks, Tamas!  I'll have a look.


> I will try to create statistics where the results are displayed separately
> for each compiler and architecture to get a more detailed view, but it
> will take some time. If you want, I can include the list of build numbers
> for every outcome, but it will be a very long list (currently it is only
> included for Timeout and Failure).
>
>
I'll know better when I have a look at what you provided.  The hole I see
right now is we're not adequately dealing with unexpected successes for
different configurations.  Any reporting around that is helpful.

Thanks!


-- 
-Todd


Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Todd Fiala via lldb-dev
Wow Tamas, this is perfect.  Thanks for pulling that together!

Don't worry about the bigger file.

Thanks much.

-Todd

On Tue, Sep 15, 2015 at 8:56 AM, Tamas Berghammer 
wrote:

> I created a new statistic that separates the data by compiler and
> architecture, and I also extended it to the last 250 builds on the Linux
> build bot. If you would like to see the build IDs for the different
> outcomes, let me know; I have them collected, but it is quite a big file.
>
> Tamas


-- 
-Todd


Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Todd Fiala via lldb-dev
Just to make sure I'm reading these right:

== Compiler: totclang Architecture: x86_64 ==

UnexpectedSuccess
TestMiInterpreterExec.MiInterpreterExecTestCase.test_lldbmi_settings_set_target_run_args_before (250/250 100.00%)
TestRaise.RaiseTestCase.test_restart_bug_with_dwarf (119/250 47.60%)
TestMiSyntax.MiSyntaxTestCase.test_lldbmi_process_output (250/250 100.00%)
TestInferiorAssert.AssertingInferiorTestCase.test_inferior_asserting_expr_dwarf (195/250 78.00%)


This is saying that running the tests with a top-of-tree clang, on x86_64,
we see (for example):
* test_lldbmi_settings_set_target_run_args_before() is always passing,
* test_inferior_asserting_expr_dwarf() is passing most of the time (78%),
* test_restart_bug_with_dwarf() is failing more often than passing.

This is incredibly useful for figuring out the true disposition of a test
on different configurations.  What method did you use to gather that data?




-- 
-Todd


Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Tamas Berghammer via lldb-dev
Yes, you are reading it correctly (by totclang we mean the top-of-tree clang
at the time the test suite was run).

The cmake builder runs in GCE and uploads all test logs to Google Cloud
Storage (including full host and server logs). I used a Python script (also
running in GCE) to download this data and parse the test output from the
test traces.
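
Roughly, that kind of parsing could look like this (a minimal sketch in
Python 2.7; the logs/build-*.txt layout and the regex are illustrative
assumptions, not the actual script or GCS bucket layout):

import collections
import glob
import re

# Tally per-test outcomes across downloaded per-build dotest logs.
OUTCOME_RE = re.compile(
    r'^(UNEXPECTED SUCCESS|FAIL|TIMEOUT): LLDB \(suite\) :: (\S+)')

counts = collections.defaultdict(collections.Counter)
logs = glob.glob('logs/build-*.txt')
for path in logs:
    with open(path) as f:
        for line in f:
            m = OUTCOME_RE.match(line)
            if m:
                outcome, test = m.groups()
                counts[outcome][test] += 1

for outcome, tests in sorted(counts.items()):
    print(outcome)
    for test, n in tests.most_common():
        print('  %s (%d/%d %.2f%%)' % (test, n, len(logs),
                                       100.0 * n / len(logs)))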


Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Todd Fiala via lldb-dev
> The cmake builder runs in GCE and uploads all test logs to Google Cloud
> Storage (including full host and server logs). I used a Python script
> (also running in GCE) to download this data and parse the test output
> from the test traces.

Are the GCE logs public?  If not, do you know if our buildbot protocol
supports polling this info via another method straight from the build bot?
(The latter is ultimately preferable so we can pull from multiple builders,
e.g. macosx, freebsd, etc.)  I suspect that, worst case, the web interface
could be scripted and the data scraped, but hopefully that isn't necessary.

Thanks again for sharing the info!


Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Siva Chandra via lldb-dev
On Tue, Sep 15, 2015 at 9:25 AM, Todd Fiala via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> > The cmake builder runs in GCE and it uploads all test logs to Google
> Cloud Storage (including full host logs and server logs). I used a python
> script (running also in GCE) to download this data and to parse the test
> output from the test traces.
>
> Are the GCE logs public?  If not, do you know if our buildbot protocol
> supports polling this info via another method straight from the build bot?
>

You are probably looking for this: http://lab.llvm.org:8011/json/help


Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Todd Fiala via lldb-dev
Yep looks like there's a decent interface to it.  Thanks, Siva!

I see there's some docs here too:
http://docs.buildbot.net/current/index.html

On Tue, Sep 15, 2015 at 9:42 AM, Siva Chandra 
wrote:

> You are probably looking for this: http://lab.llvm.org:8011/json/help
>



-- 
-Todd


Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Siva Chandra via lldb-dev
IIRC, doing it from Python is straightforward and simple:

json.load(urllib2.urlopen(<...>))

Could be a little more, but should not be much.
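
Filled in against this builder, a minimal sketch (Python 2; the
'cachedBuilds' and 'text' field names assume the standard buildbot 0.8 JSON
API documented at the /json/help link above):

import json
import urllib2

# Pull summaries of the five most recent cached builds for one builder.
base = 'http://lab.llvm.org:8011/json/builders/lldb-x86_64-ubuntu-14.04-cmake'
builder = json.load(urllib2.urlopen(base))
for num in builder['cachedBuilds'][-5:]:
    build = json.load(urllib2.urlopen('%s/builds/%d' % (base, num)))
    print('build %d: %s' % (num, ' '.join(build.get('text', []))))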



Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Tamas Berghammer via lldb-dev
Unfortunately the GCE logs aren't public at the moment, their volume
(~30MB/build) doesn't make it easy to make them accessible in any way, and
they aren't much more machine-parsable than the stdout from the build.

I think downloading data with the json API won't help, because it will only
list the failures displayed on the Web UI, which don't contain full test
names or any info about the UnexpectedSuccess-es. If you want to download it
from the web interface, then I am pretty sure we have to parse the stdout of
the test runner and change dotest so that it displays more information about
the outcome of the different tests.



Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Siva Chandra via lldb-dev
On Tue, Sep 15, 2015 at 9:59 AM, Tamas Berghammer 
wrote:

> I think downloading data with the json API won't help, because it will
> only list the failures displayed on the Web UI, which don't contain full
> test names or any info about the UnexpectedSuccess-es. If you want to
> download it from the web interface, then I am pretty sure we have to parse
> the stdout of the test runner and change dotest so that it displays more
> information about the outcome of the different tests.
>

I fully support making those changes to dotest. Also, it would be nice to
actually have a stats cron job running along with the master, with a web UI,
something like https://build.chromium.org/p/chromium/stats and
http://build.chromium.org/f/chromium/flakiness/. It's a tall ask, but at the
very least we should have dotest.py put out machine-readable output. This
could be done on request (as in, when a certain flag is set).


Re: [lldb-dev] 7th build slot?

2015-09-15 Thread Ying Chen via lldb-dev
Thanks for the suggestions.
I've changed the descriptions of "clang" to "clang-3.5" as of this build.

We currently have 8 test slots reserved. 1-6 are configurations that should
pass; 7-8 are experimental ones that have known failing tests, like step 8
on the android builder.

Both new test configurations and the total number of test slots are easy to
expand. We could discuss with the team which clang versions to cover.

As for test time, it depends on the thread count, which defaults to the
number of cores. For example, on the bot the first line says "Testing: 396
test suites, 32 threads". It is also related to the build configuration: a
Release build runs faster than a Debug build.

Thanks,
Ying

On Mon, Sep 14, 2015 at 11:46 PM, Todd Fiala  wrote:

> Okay, thanks.
>
> It would be awesome if the summary listed the specific versions of clang
> being used.  TOT is obvious, but "clang" being 3.5 is less so.
>
> Seeing as clang behavior can change between releases, perhaps we can use
> the other slots for other clang versions?
>
> TOT clearly makes sense if more than one is being done. But if 4 slots
> could be dedicated to clang, it seems like:
> TOT (latest, 3.8 now)
> TOT - 1 (3.7)
> TOT - 2 (3.6)
> TOT - 3 (3.5)
>
> Or perhaps:
> TOT (latest, 3.8)
> TOT - 1 (3.7)
> TOT - 2 (3.6)
> TOT - *HISTORICALLY INTERESTING CLANG* (perhaps 3.4, 3.2, 3.0, whatever).
>
> Just some thoughts.  Nice that the tests run so fast.  I've only got them
> running in 2.5 minutes here, looks like your build bot does it in 1.5 :-)
>
> -Todd
>
> On Mon, Sep 14, 2015 at 10:49 PM, Chaoren Lin  wrote:
>
>> We gave each bot 8 slots due to some limitation with the build master.
>> Slots 1-6 will notify us of failures, and slots 7-8 are sort of reserved
>> for tests/builds that are expected to be broken. Ying can provide more
>> details.
>>
>> On Mon, Sep 14, 2015 at 10:12 PM, Todd Fiala 
>> wrote:
>>
>>> Hi Chaoren,
>>>
>>> While looking at the Ubuntu 14.04 x86_64 build bot, I was looking at the
>>> test configurations for the test slots.  6 of them are described in the
>>> build summary on the upper right. There is a 7th and 8th test slot that
>>> are not described --- what happens in those slots? Are they just
>>> unconfigured and thus not used?
>>>
>>> Thanks!
>>>
>>> -Todd
>>>
>>
>>
>
>
> --
> -Todd
>


Re: [lldb-dev] Digging into Linux unexpected successes

2015-09-15 Thread Todd Fiala via lldb-dev
Change http://reviews.llvm.org/D12831, in review (waiting on Windows results
for that), adds a test event stream that supports pluggable test event
formatters.  The first formatter I've added is JUnit/XUnit output, to
support the JUnit/XUnit handling built into most commercial and open source
CI solutions.  But the eventing mechanism is intended to support a much
wider range of applications, including outputting to different formats and
displaying test results as they occur in different viewers/controllers.
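
For reference, JUnit/XUnit output is conventionally XML shaped roughly like
this (a sketch only; the exact elements and attributes the new formatter
emits may differ, and the failing test method name here is hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<testsuites>
  <testsuite name="LLDB" tests="457" failures="1">
    <testcase classname="TestMiSyntax" name="test_lldbmi_process_output"
              time="0.85"/>
    <testcase classname="TestRegisterVariables"
              name="test_register_variables" time="1.20">
      <failure message="assertion failed"/>
    </testcase>
  </testsuite>
</testsuites>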

-Todd




-- 
-Todd


Re: [lldb-dev] 7th build slot?

2015-09-15 Thread Todd Fiala via lldb-dev
Thanks for the info, Ying!



-- 
-Todd


Re: [lldb-dev] e not working when debugging llvm pass

2015-09-15 Thread carr27 via lldb-dev

Hi Greg,

Thanks for your response.  -fstandalone-debug was what I needed.

The rest of this email is just FYI:

Originally the command:

(lldb) image lookup -t llvm::BasicBlock

Just output a blank line.  I was using a self-compiled version of lldb
(release_37), so it seems your #2 was my case.


I was hitting the assert because I compiled everything with 
-DCMAKE_BUILD_TYPE=Debug.  I had built 'Debug' to try to diagnose my issue.


Thanks for your help,
Scott


On 09/14/2015 06:42 PM, Greg Clayton wrote:

On Sep 13, 2015, at 4:26 PM, carr27 via lldb-dev  
wrote:

Hello,

I'm working on an LLVM pass that I'm trying to debug with LLDB, but I'm
running into a few problems.  I'm generally following this tutorial [1], but
I run my pass with opt instead of from inside clang. So I run:

$ lldb ../build/bin/opt
(lldb) break set --name MyPass::runOnModule
(lldb) run -load ../build/lib/LLVMMyPass.so -MyPass test.ll

My runOnModule is just:

bool MyPass::runOnModule(Module &M) {
  for (auto &F : M) {
    for (auto &BB : F) {
      for (auto &I : BB) {
        I.dump(); // step to here
      }
    }
  }
  return false; // the pass must report whether it modified the module
}

It hits the breakpoint correctly, then I single-step into the innermost
loop.  Then I try the following LLDB commands:

"e I.dump()" gives me what I expect (it prints the LLVM instruction).

"e BB.dump()" gives me:
error: no member named 'dump' in 'llvm::BasicBlock'
error: 1 errors parsing expression

...but llvm::BasicBlock definitely does have a member named dump.


We would need to look at what LLDB thinks the llvm::BasicBlock type looks like. 
What does LLDB say in response to:

(lldb) image lookup -t llvm::BasicBlock

You may be running into a case where the debug info only has a forward
declaration of llvm::BasicBlock... Let me know what the output of the above
command looks like. The other thing is that a lot of space-saving measures
are enabled by default, which omit class definitions from the debug info if
they aren't directly used in the current source files. If you have an ELF
file for clang that you can make available, I can take a look and see what I
can see.



"e M.dump()" gives me:
lldb: /home/carr27/dataconf-workspace2/llvm/tools/clang/lib/AST/RecordLayoutBuilder.cpp:2883: const 
clang::ASTRecordLayout &clang::ASTContext::getASTRecordLayout(const clang::RecordDecl *) const: 
Assertion `D && "Cannot get layout of forward declarations!"' failed.
./lldb.sh: line 1:  6736 Aborted (core dumped)

Sounds like we are trying to complete a forward declaration and things go
bad.  One of the drawbacks of llvm and clang is that the library likes to
assert when it is unhappy. This is fine for development, but when you ship a
debugger that uses clang, we don't want it asserting and killing the
debugger.

I don't know exactly what is going on, but it seems like LLDB doesn't know 
about some of the LLVM classes.  Can anyone give me a pointer as to what might 
be wrong?

A few things can be going on:
1 - You are using the 3.6 release, which is not good enough for debugging on
Linux. You will need to use the 3.7 release, or preferably top-of-tree SVN
for llvm/clang/LLDB.
2 - The compiler that is compiling llvm/clang is pulling tricks and not
emitting all of the debug info you need. clang has many optimizations that
try to limit the amount of debug info it emits, and it will often omit base
class definitions and many other things. If you are building with clang,
there is an option you can pass: -fstandalone-debug. This ensures that debug
info isn't partially emitted in a way that would affect a debugger's ability
to reconstruct a type from the debug info.
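
For example, a minimal sketch of enabling that in a cmake build (assuming
clang/clang++ as the host compiler, since the flag is clang-specific; the
source path is illustrative):

  $ CC=clang CXX=clang++ cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
        -DCMAKE_CXX_FLAGS="-fstandalone-debug" ../llvm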

Let me know what your "llvm::BasicBlock" looks like and which of the above 
cases you think you might be running into.

Greg




[lldb-dev] [Bug 24827] test events: add test backtraces for fail/xfail

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24827

Todd Fiala  changed:

   What|Removed |Added

   Assignee|lldb-dev@lists.llvm.org |todd.fi...@gmail.com



[lldb-dev] [Bug 24827] New: test events: add test backtraces for fail/xfail

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24827

Bug ID: 24827
   Summary: test events: add test backtraces for fail/xfail
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: todd.fi...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

Depends on http://reviews.llvm.org/D12831 landing.

Once that's in, add Python backtraces to the test events when a FAIL/XFAIL
occurs.  Reflect this in the xUnit formatter as well.



[lldb-dev] [Bug 24828] test events: add "announce all tests that will run"

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24828

Todd Fiala  changed:

   What|Removed |Added

   Assignee|lldb-dev@lists.llvm.org |todd.fi...@gmail.com



[lldb-dev] [Bug 24828] New: test events: add "announce all tests that will run"

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24828

Bug ID: 24828
   Summary: test events: add "announce all tests that will run"
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: todd.fi...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

Depends on http://reviews.llvm.org/D12831 landing.

Add a pre-run pass for each inferior dotest.py on test startup that announces
all the test methods that will run.  Then, if an inferior crashes or times out,
we can intelligently re-run test methods that got dropped (and likely skipping
the last started-but-not-finished test), or at least report on the test methods
that should have run but didn't (e.g. ERROR: 15 un-run tests).



[lldb-dev] [Bug 24829] New: test events: emit flakey pass/fail as unique from normal pass or xfail

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24829

Bug ID: 24829
   Summary: test events: emit flakey pass/fail as unique from
normal pass or xfail
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: todd.fi...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

This depends on http://reviews.llvm.org/D12831 landing.

Plumb through flakeyFailure and flakeySuccess as valid test completion results
for the test event system.  Then we can do smarter things regarding the
tracking and reporting of flakey-test results.



[lldb-dev] [Bug 24829] test events: emit flakey pass/fail as unique from normal pass or xfail

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24829

Todd Fiala  changed:

   What|Removed |Added

   Assignee|lldb-dev@lists.llvm.org |todd.fi...@gmail.com



[lldb-dev] [Bug 24830] New: parallel test runner drops signal-based inferior exit statuses on the floor

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24830

Bug ID: 24830
   Summary: parallel test runner drops signal-based inferior exit
statuses on the floor
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: todd.fi...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

We do not emit any kind of error if the parallel test runner's inferior
dotest.py generates some kind of exceptional exit.  We simply scan stderr as
normal, and if there's no error/failure in it, we get nothing for the fact that
we --- perhaps --- core dumped or did something else like a SIGINT.
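
For context, POSIX reports death-by-signal as a negative returncode on the
child process object, so a check along these lines would catch it (a sketch
only; the real runner's dotest.py invocation differs):

import subprocess

# Launch one inferior dotest.py; the argument list here is illustrative.
proc = subprocess.Popen(['python', 'dotest.py', '-v'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
if proc.returncode < 0:
    # A negative returncode means the inferior was killed by that signal.
    print('ERROR: inferior dotest.py died with signal %d' % -proc.returncode)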



[lldb-dev] [Bug 24830] parallel test runner drops signal-based inferior exit statuses on the floor

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24830

Todd Fiala  changed:

   What|Removed |Added

   Assignee|lldb-dev@lists.llvm.org |todd.fi...@gmail.com



[lldb-dev] [Bug 24831] New: cmake + ninja: 'ninja lldb' misses lib/python2.7 build dependency on Linux

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24831

Bug ID: 24831
   Summary: cmake + ninja: 'ninja lldb' misses lib/python2.7 build
dependency on Linux
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: todd.fi...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

When building with 'ninja lldb' per the LLDB webpage, on Linux this misses
building the dependent lib/python2.7 directory and the python lldb module.  Fix
this.



Re: [lldb-dev] Testing through api vs. commands

2015-09-15 Thread via lldb-dev
> > > > I do still think we need some tests that verify commands run, but I 
> > > > think those tests should focus not on doing complicated interactions 
> > > > with the debugger, and instead just verifying that things parse 
> > > > correctly and the command is configured correctly, with the underlying 
> > > > functionality being tested by the api tests.
> > > >
> > > > Thoughts?

I agree that we should have both testing methods - SB API *and* commands,
because we should be testing the user command interface too!

Instead, to fix the Windows vs. other issue, I would suggest writing a
sub-class that won't expect the missing params based on platform.

In any case, there's a lot I never could figure out how to do in the SB
API that I could only do via commands.  For example, how do you test
that a trailing space at the end of the expr --language option's argument
is trimmed in the SB API?

-Dawn


[lldb-dev] [Bug 24833] New: cmake + ninja: 'ninja lldb' misses lldb-server dependency on Linux

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24833

Bug ID: 24833
   Summary: cmake + ninja: 'ninja lldb' misses lldb-server
dependency on Linux
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: todd.fi...@gmail.com
CC: llvm-b...@lists.llvm.org
Classification: Unclassified

On Linux, lldb-server is a dependency of the lldb target.  Fix the build system
to recognize this so that:
ninja lldb
produces an lldb that can do something.
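
In the meantime, a workaround sketch is to name the dependency explicitly
(ninja accepts multiple targets):

  $ ninja lldb lldb-server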



[lldb-dev] [Bug 24833] cmake + ninja: 'ninja lldb' misses lldb-server dependency on Linux

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=24833

Todd Fiala  changed:

   What|Removed |Added

   Assignee|lldb-dev@lists.llvm.org |todd.fi...@gmail.com



Re: [lldb-dev] Testing through api vs. commands

2015-09-15 Thread Zachary Turner via lldb-dev
I agree that we should test the command interface, but

a) they should be explicitly marked as interface tests.
b) There should be MUCH fewer.
c) It should only verify that typing a particular command maps to the right
core sequence of public / private API calls.  Not that the debugger
functionality works as expected (since that is already tested through API
tests).
d) Results of these interface tests should also not be *verified* through
the use of self.expect, but instead through the API.  (Relying on the text
being formatted a specific way is obviously problematic, as opposed to just
verifying the actual state of the debugger.)

c is probably the hardest, because it's hard to verify that a command does
what you think it does without actually having it do the thing.  It's
possible with some refactoring, but not something that can be whipped up in
a day or two.



Re: [lldb-dev] Testing through api vs. commands

2015-09-15 Thread Jim Ingham via lldb-dev

> On Sep 15, 2015, at 4:23 PM, d...@burble.org wrote:
> 
> I do still think we need some tests that verify commands run, but I think 
> those tests should focus not on doing complicated interactions with the 
> debugger, and instead just verifying that things parse correctly and the 
> command is configured correctly, with the underlying functionality being 
> tested by the api tests.
> 
> Thoughts?
> 
> I agree that we should have both testing methods - SB API *and* commands,
> because we should be testing the user command interface too!
> 
> Instead, to fix the Windows vs. other issue, I would suggest writing a
> sub-class that won't expect the missing params based on platform.
> 
> In any case, there's a lot I never could figure out how to do in the SB
> API that I could only do via commands.  For example, how do you test
> that a trailing space at the end of the expr --language option's argument
> is trimmed in the SB API?

I'm not quite sure I understand what you mean by this example.  It sounds
like you are asking how to test peculiarities of the command-line
language-name option parser through the SB API.  Not sure that makes sense.

But if there's anything you can do with the lldb command line that you can't do 
with the SB API's that is a bug.  Please file bugs (or maybe ask "how would I 
do that" here first then file a bug if you get no answer.)

Note, you might have to do some work to marshal information in a way that
looks like what some of the complex commands produce.  But even some of the
complex printers, like the frame & thread status, are available through the
SB API's, and anything else useful like that should also be exposed at some
point.

If we had much more time we might have built the Command Line on top of the SB 
API's, but that would have made it really hard to bootstrap the thing.  In 
theory, somebody could go back & re-implement the lldb command line on top of 
the SB API's - much like what was done with the MI commands.  In actuality that 
would be a lot of effort that could be better spent somewhere else.  But not 
doing it that way means we have to be careful not to add stuff to the command 
line without adding some way to get at it with the API's.

Jim


> 
> -Dawn



[lldb-dev] [Bug 20446] Test failures on Ubuntu 14.04 x86_64 guest under VirtualBox Windows x86_64 Host

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=20446

Todd Fiala  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |FIXED

--- Comment #2 from Todd Fiala  ---
I'm a-guessing this is super stale.  I'll refile if/when I get on a VirtualBox
Linux guest again.



[lldb-dev] [Bug 20446] Test failures on Ubuntu 14.04 x86_64 guest under VirtualBox Windows x86_64 Host

2015-09-15 Thread via lldb-dev
https://llvm.org/bugs/show_bug.cgi?id=20446

Todd Fiala  changed:

   What|Removed |Added

 Resolution|FIXED   |INVALID
