Re: [lldb-dev] Saving and restoring STDIN in the ScriptInterpreter

2020-04-07 Thread Pavel Labath via lldb-dev
Hi Davide,

I believe your guess about background processes is correct. I think that
the lldb process is stopped (or is continually getting stopped and
restarted) by SIGTTOU.


Macro: int SIGTTOU

This is similar to SIGTTIN, but is generated when a process in a
background job attempts to write to the terminal or ***set its modes***.
Again, the default action is to stop the process. SIGTTOU is only
generated for an attempt to write to the terminal if the TOSTOP output
mode is set; see Output Modes.
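
For illustration, here is a minimal sketch (Python, using the stdlib termios module; the helper names are hypothetical, not lldb's) of the kind of guard that avoids the stop: only touch the terminal when the fd really is one, and ignore SIGTTOU around the restore:

```python
import os
import signal
import termios

def save_tty_state(fd):
    # Calling tcsetattr() from a background job raises SIGTTOU and stops
    # the process -- so do nothing unless fd is actually a terminal.
    if not os.isatty(fd):
        return None
    return termios.tcgetattr(fd)

def restore_tty_state(fd, state):
    if state is None:
        return
    # Ignore SIGTTOU for the duration of the restore so a background
    # job is not stopped by the kernel while resetting terminal modes.
    old_handler = signal.signal(signal.SIGTTOU, signal.SIG_IGN)
    try:
        termios.tcsetattr(fd, termios.TCSADRAIN, state)
    finally:
        signal.signal(signal.SIGTTOU, old_handler)

# A pipe is not a tty, so saving its "terminal state" is a no-op.
r, w = os.pipe()
state = save_tty_state(r)
restore_tty_state(r, state)
os.close(r)
os.close(w)
```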


Saving/restoring tty state before/after entering the python interpreter
does not sound like an unreasonable thing to do. However, I do see two
problems with this code:
- it unconditionally uses STDIN_FILENO -- it should use
SBDebugger::GetInputFileHandle (or equivalent) instead
- it has no test, and it's impossible to track down why exactly it exists
and whether it is really needed

With that in mind, I don't have a problem with deleting this code (and
later re-adding it properly, if needed) -- I might even say it's a good
idea. I cannot guarantee this will solve your problem completely, since
any other operation which will attempt to access stdin will trigger the
same problem. However, this, in combination with
SBDebugger::SetInputFileHandle(/dev/null) should in theory be sufficient
since nothing should be accessing the process stdin.
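
On the driving-script side, the same effect can be had by redirecting the child's stdin before it starts. A sketch (Python; `cat` stands in here for the real lldb invocation, which is not assumed to be installed):

```python
import subprocess

def run_without_tty_stdin(cmd):
    # stdin=DEVNULL is the programmatic equivalent of `cmd < /dev/null`:
    # the child sees immediate EOF instead of blocking on (or
    # reconfiguring) the controlling terminal.
    return subprocess.run(
        cmd,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )

# `cat` with stdin at /dev/null exits immediately with empty output.
proc = run_without_tty_stdin(["cat"])
```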

That said, if you just want to make your creduce script work, redirecting
stdin to /dev/null (lldb ... < /dev/null) should also do the trick.

> Hi Pavel, Jonas,
> 
> I was trying to reduce a bug through c-reduce, so I decided to write an
> SBAPI script to make it easier.
> I found out that, after the first iteration, the reduction gets stuck
> forever.
> I sampled the process and I saw the following (trimmed for readability).
> 
> Call graph:
> […]
> 8455 lldb_private::CommandInterpreter::GetScriptInterpreter(bool)  (in _lldb.so) + 84  [0x111aff826]
>   8455 lldb_private::PluginManager::GetScriptInterpreterForLanguage(lldb::ScriptLanguage, lldb_private::CommandInterpreter&)  (in _lldb.so) + 99  [0x111a1efcf]
>     8455 lldb_private::ScriptInterpreterPython::CreateInstance(lldb_private::CommandInterpreter&)  (in _lldb.so) + 26  [0x111d128f4]
>       8455 std::__1::shared_ptr std::__1::shared_ptr::make_shared(lldb_private::CommandInterpreter&&&)  (in _lldb.so) + 72  [0x111d1b976]
>         8455 lldb_private::ScriptInterpreterPython::ScriptInterpreterPython(lldb_private::CommandInterpreter&)  (in _lldb.so) + 353  [0x111d11ff3]
>           8455 lldb_private::ScriptInterpreterPython::InitializePrivate()  (in _lldb.so) + 494  [0x111d12594]
>             8455 (anonymous namespace)::InitializePythonRAII::~InitializePythonRAII()  (in _lldb.so) + 146  [0x111d1b446]
>               8455 lldb_private::TerminalState::Restore() const  (in _lldb.so) + 74  [0x111ac8268]
>                 8455 tcsetattr  (in libsystem_c.dylib) + 110  [0x7fff7b95b585]
>                   8455 ioctl  (in libsystem_kernel.dylib) + 151  [0x7fff7ba19b44]
>                     8455 __ioctl  (in libsystem_kernel.dylib) + 10  [0x7fff7ba19b5a]
> 
> 
> It looks like lldb gets stuck forever in `tcsetattr()`, and there are no
> other threads waiting so it’s not entirely obvious to me why it’s
> waiting there.
> I was never able to reproduce this with an interactive session, I
> suspect this is somehow related to the fact that c-reduce spawns a
> thread in the background, hence it doesn’t have a TTY associated.
> I looked at the code that does this, and I wasn’t really able to find a
> reason why we need to do this work. Jim thinks it might have been needed
> historically.
> `git blame` doesn’t really help that much either. If I remove the code,
> everything still passes and it’s functional, but before moving forward
> with this I would like to collect your opinions.
> 
> $ git diff
> diff --git
> a/lldb/source/Plugins/ScriptInterpreter/Python/ScriptInterpreterPython.cpp
> b/lldb/source/Plugins/ScriptInterpreter/Python/ScriptInterpreterPython.cpp
> index ee94a183e0d..c53b3bd0fb6 100644
> ---
> a/lldb/source/Plugins/ScriptInterpreter/Python/ScriptInterpreterPython.cpp
> +++
> b/lldb/source/Plugins/ScriptInterpreter/Python/ScriptInterpreterPython.cpp
> @@ -224,10 +224,6 @@ struct InitializePythonRAII {
>  public:
>    InitializePythonRAII()
>        :

[lldb-dev] [Bug 45454] New: Race condition in debugserver stdout processing during application exit.

2020-04-07 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=45454

Bug ID: 45454
   Summary: Race condition in debugserver stdout processing during
application exit.
   Product: lldb
   Version: 10.0
  Hardware: PC
OS: All
Status: NEW
  Severity: normal
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: uldis.kalni...@gmail.com
CC: jdevliegh...@apple.com, llvm-b...@lists.llvm.org

There is a race condition in debugserver where application state processing
thread sends event_proc_state_changed before all the STDOUT is posted to
internal buffers.

Explanation:

Debugserver processes STDOUT in a separate thread that runs select on the
socket attached to the debugged process's STDOUT/STDERR. This thread reads
the socket contents, posts them to an internal STDOUT buffer, and emits an
eBroadcastBitSTDOUT event. When the application finishes, the sockets
receive EOF and the thread exits.

When the debugged application exits, the process state thread flushes any
output available in the internal STDOUT buffer and emits an
"event_proc_state_changed" event, which is picked up by the main event
processing loop and sent to lldb as the last message.

If the state processing thread gets to the STDOUT buffer before the STDOUT
processing thread can update the buffer, the last output from the
application is lost.

The state processing thread probably needs to join the STDOUT thread before
checking whether any last STDOUT is left in there. It seems this is already
done, in some form, for the profiling data thread.

-- 
You are receiving this mail because:
You are the assignee for the bug.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] Upstreaming Reproducer Capture/Replay for the API Test Suite

2020-04-07 Thread Pavel Labath via lldb-dev
Hi Jonas, Davide,

I am not exactly thrilled by the ever-growing number of "modes" our test
suite can be run in. However, it seems that's a battle I am destined to
lose, so I'll just repeat what I've been saying for some time now.

I don't believe that either of these funny "modes" should be the _only_
way to test a given piece of code. Using the extra modes to increase
test coverage is fine, and I can certainly appreciate the value of this
kind of exploratory testing (I've added some temporary modes locally
myself when working on various patches), but I still believe that every
patch should have an accompanying test(s) which can run in the default
"mode" and with as few dependencies as possible.

I believe Jonas is aware of that, and his existing work on reproducers
reflects that philosophy, but I think it's still important to spell this
out.

regards,
pl

On 06/04/2020 23:32, Davidino Italiano via lldb-dev wrote:
> 
> 
>> On Apr 6, 2020, at 2:24 PM, Jonas Devlieghere via lldb-dev 
>>  wrote:
>>
>> Hi everyone,
>>
>> Reproducers in LLDB are currently tested through (1) unit tests, (2) 
>> dedicated end-to-end shell tests and (3) the `lldb-check-repro` suite which 
>> runs all the shell tests against a replayed reproducer. While this already 
>> provides great coverage, we're still missing out on about 800 API tests. 
>> These tests are particularly interesting to the reproducers, because as 
>> opposed to the shell tests, which only exercise a subset of SB API calls 
>> used to implement the driver, they cover the majority of the API surface.
>>
>> To further qualify reproducer and to improve test coverage, I want to 
>> capture and replay the API test suite as well. Conceptually, this can be 
>> split up into two stages: 
>>
>>  1. Capture a reproducer and replay it with the driver. This exercises the 
>> reproducer instrumentation (serialization and deserialization) for all the 
>> APIs used in our test suite. While a bunch of issues with the reproducer 
>> instrumentation can be detected at compile time, a large subset only 
>> triggers through assertions at runtime. However, this approach by itself 
>> only verifies that we can (de)serialize API calls and their arguments. It 
>> has no knowledge of the expected results and therefore cannot verify the 
>> results of the API calls.
>>
>>  2. Capture a reproducer and replay it with dotest.py. Rather than having 
>> the command line driver execute every API call one after another, we can 
>> have dotest.py call the Python API as it normally would, intercept the call, 
>> replay it from the reproducer, and return the replayed result. The 
>> interception can be hidden behind the existing LLDB_RECORD_* macros, which 
>> contain sufficient type info to drive replay. It then simply re-invokes 
>> itself with the arguments deserialized from the reproducer and returns that 
>> result. Just as with the shell tests, this approach allows us to reuse the 
>> existing API tests, completely transparently, to check the reproducer output.
>>
>> I have worked on this over the past month and have shown that it is possible 
>> to achieve both stages. I have a downstream fork that contains the necessary 
>> changes.
>>
>> All the runtime issues found in stage 1 have been fixed upstream. With the 
>> exception of about 30 tests that fail because the GDB packets diverge during 
>> replay, all the tests can be replayed with the driver.
>>
>> About 120 tests, which include the 30 mentioned earlier, fail to replay in 
>> stage 2. This isn't entirely unexpected: just like with the shell tests, 
>> there are tests that simply are not expected to work. The reproducers don't 
>> currently capture the output of the inferior, and synchronization through 
>> external files won't work either, as those paths will get remapped by the 
>> VFS. This requires manual triage.
>>
>> I would like to start upstreaming this work so we can start running this in 
>> CI. The majority of the changes are limited to the reproducer 
>> instrumentation, but some changes are needed in the test suite as well, and 
>> there would be a new decorator to skip the unsupported tests. I'm splitting 
>> up the changes in self-contained patches, but wanted to send out this RFC 
>> with the bigger picture first.
> 
> I personally believe this is a required step to make sure:
> a) Reproducers can jump from being a prototype idea to something that can 
> actually run in production
> b) Whenever we add a new test [or presumably a new API] we get coverage 
> for-free.
> c) We have a verification mechanism to make sure we don’t regress across the 
> large API surface, and not only what the unittests & shell tests cover.
> 
> I personally would be really glad to see this being upstreamed. I also would 
> like to thank you for doing the work in a downstream branch until you proved 
> this was achievable.
> 
> —
> D
> 

[lldb-dev] SBValues that are synthetic has API issues when trying to get the child by name:

2020-04-07 Thread Greg Clayton via lldb-dev
   3   int main(int argc, const char **argv) {
   4     std::atomic<int> ai;
   5     ai = argc;
-> 6     ai = argc + 1;
   7     return 0;
   8   }


(lldb) fr var ai
(std::atomic) ai = {
  Value = 1
}
(lldb) frame var --raw ai
(std::__1::atomic) ai = {
  std::__1::__atomic_base = {
std::__1::__atomic_base = {
  __a_ = 1
}
  }
}

So we have a synthetic child provider. But if we do:

(lldb) script
>>> v = lldb.frame.FindVariable('ai')
>>> print(v.GetNumChildren())
1
>>> print(v.GetChildAtIndex(0))
(int) Value = 1

But if we ask for it by name it doesn't work:

>>> print(v.GetChildMemberWithName('Value'))
No value

Bug? Intentional?
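
Until this is resolved, the lookup can be done by hand, since GetChildAtIndex() does see the synthetic children. A workaround sketch, shown against a minimal stand-in class because the real lldb.SBValue needs a live debug session:

```python
def child_by_name(value, name):
    # Walk the (synthetic) children by index and match the name
    # manually, instead of relying on GetChildMemberWithName().
    for i in range(value.GetNumChildren()):
        child = value.GetChildAtIndex(i)
        if child is not None and child.GetName() == name:
            return child
    return None

class FakeValue:
    """Minimal stand-in for lldb.SBValue, for demonstration only."""
    def __init__(self, name, children=()):
        self._name, self._children = name, list(children)
    def GetName(self):
        return self._name
    def GetNumChildren(self):
        return len(self._children)
    def GetChildAtIndex(self, i):
        return self._children[i]

ai = FakeValue("ai", [FakeValue("Value")])
```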


[lldb-dev] [Bug 45471] New: SymbolFileDWARF::ParseVariableDIE consider all constant variables as "static"

2020-04-07 Thread via lldb-dev
https://bugs.llvm.org/show_bug.cgi?id=45471

Bug ID: 45471
   Summary: SymbolFileDWARF::ParseVariableDIE consider all
constant variables as "static"
   Product: lldb
   Version: unspecified
  Hardware: PC
OS: All
Status: NEW
  Severity: enhancement
  Priority: P
 Component: All Bugs
  Assignee: lldb-dev@lists.llvm.org
  Reporter: ditali...@apple.com
CC: jdevliegh...@apple.com, llvm-b...@lists.llvm.org

SymbolFileDWARF::ParseVariableDIE has the following code:

  if (location_is_const_value_data)
scope = eValueTypeVariableStatic;
  else {
scope = eValueTypeVariableLocal;

So every variable that just has a constant value, and doesn't have a
location, is treated as a file static. That means, for instance, if you
build this:

volatile int a;
int main() {
  {
    int b = 3;
    a;
  }
}


If you break on line 5 and run to it, you get:

(lldb) frame var 
(lldb)

But `b` is a local variable, so we should print it.
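
One possible direction for a fix, sketched purely as illustrative pseudologic (the function name, parameters, and "global" fallback are assumptions for the sketch, not lldb's actual enumerators): decide the scope from the enclosing DIE first, and only classify const-value variables as static when they are not inside a function.

```python
def classify_variable(has_const_value, parent_is_function):
    # If the variable's DIE is nested inside a function DIE, it is a
    # local regardless of whether its value is a DW_AT_const_value.
    if parent_is_function:
        return "local"
    # Only at file scope does a const-value variable behave like a
    # static; otherwise treat it as a plain global (assumed fallback).
    return "static" if has_const_value else "global"
```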



Re: [lldb-dev] SBValues that are synthetic has API issues when trying to get the child by name:

2020-04-07 Thread Jim Ingham via lldb-dev
Definitely a bug.  ValueObjectSynthetic overrides both GetChildMemberWithName 
and GetIndexOfChildWithName so that if you have a synthetic value, it will look 
in the synthetic children to match the name, not in the underlying value's 
type.  Not sure why this isn’t working.

Jim



> On Apr 7, 2020, at 5:34 PM, Greg Clayton via lldb-dev 
>  wrote:
> 
>   3   int main(int argc, const char **argv) {
>   4     std::atomic<int> ai;
>   5     ai = argc;
> -> 6    ai = argc + 1;
>   7     return 0;
>   8   }
> 
> 
> (lldb) fr var ai
> (std::atomic) ai = {
>  Value = 1
> }
> (lldb) frame var --raw ai
> (std::__1::atomic) ai = {
>  std::__1::__atomic_base = {
>std::__1::__atomic_base = {
>  __a_ = 1
>}
>  }
> }
> 
> So we have a synthetic child provider. But if we do:
> 
> (lldb) script
> >>> v = lldb.frame.FindVariable('ai')
> >>> print(v.GetNumChildren())
> 1
> >>> print(v.GetChildAtIndex(0))
> (int) Value = 1
> 
> But if we ask for it by name it doesn't work:
> 
> >>> print(v.GetChildMemberWithName('Value'))
> No value
> 
> Bug? Intentional?