Re: [lldb-dev] [cfe-dev] [GitHub] RFC: Enforcing no merge commit policy

2019-03-20 Thread Zachary Turner via lldb-dev
It sounds like we need to get someone from the Foundation (chandlerc@,
lattner@, tanya@, someone else?) to reach out to them offline about this.

On Wed, Mar 20, 2019 at 11:23 AM Arthur O'Dwyer 
wrote:

> On Wed, Mar 20, 2019 at 2:19 PM Tom Stellard via cfe-dev <
> cfe-...@lists.llvm.org> wrote:
>
>> On 03/20/2019 10:41 AM, Zachary Turner wrote:
>> >
>> > On Tue, Mar 19, 2019 at 12:00 PM Tom Stellard via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>> >
>> > Hi,
>> >
>> > I would like to follow up on the previous thread[1], where there
>> was a consensus
>> > to disallow merge commits in the llvm github repository, and start
>> a discussion
>> > about how we should enforce this policy.
>> >
>> > Unfortunately, GitHub does not provide a convenient way to fully
>> enforce this policy.
>> >
>> >
>> > Why isn't this enforceable with a server-side pre-receive hook?
>>
>> GitHub[1] only supports pre-receive hooks in the 'Enterprise Server'
>> plan, which is for self-hosted github instances.
>>
>
> AIUI, the GitHub team is perfectly willing to help out the LLVM project in
> whatever way LLVM needs, including but not limited to turning on
> server-side hooks for us.
> https://twitter.com/natfriedman/status/1086470665832607746
>
> Server-side hooks are *the* answer to this problem. There is no problem.
> You just use a server-side hook.
>
> (Whether or not to use GitHub PRs is an orthogonal question. You can use
> hooks with PRs, or hooks without PRs; PRs with hooks, or PRs without hooks.)
>
> –Arthur
>
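For readers following along: a pre-receive hook is a script git runs on the server once per push, reading one "<old-sha> <new-sha> <refname>" line per updated ref on stdin and rejecting the push with a non-zero exit. A minimal sketch of a merge-rejecting hook (illustrative only — not an official LLVM or GitHub hook, and the policy details here are assumptions) could look like:

```shell
#!/bin/sh
# Sketch of a pre-receive hook that rejects pushes containing merge commits.
# git feeds "<old-sha> <new-sha> <refname>" lines on stdin, one per ref.
reject_merge_commits() {
  zero=0000000000000000000000000000000000000000
  while read old new ref; do
    [ "$new" = "$zero" ] && continue        # ref deletion: nothing to check
    if [ "$old" = "$zero" ]; then
      range="$new"                          # new ref: check all reachable commits
    else
      range="$old..$new"
    fi
    # --merges lists only commits with more than one parent
    if [ -n "$(git rev-list --merges "$range")" ]; then
      echo "rejected: merge commits are not allowed on $ref" >&2
      return 1
    fi
  done
  return 0
}
# In a real hook the script would simply end with: reject_merge_commits
```

The same check could also be wired into an `update` hook; either way it has to run server-side, which is the access GitHub would need to grant.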
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [GitHub] RFC: Enforcing no merge commit policy

2019-03-20 Thread Zachary Turner via lldb-dev
On Tue, Mar 19, 2019 at 12:00 PM Tom Stellard via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi,
>
> I would like to follow up on the previous thread[1], where there was a
> consensus
> to disallow merge commits in the llvm github repository, and start a
> discussion
> about how we should enforce this policy.
>
> Unfortunately, GitHub does not provide a convenient way to fully enforce
> this policy.
>
>
Why isn't this enforceable with a server-side pre-receive hook?


Re: [lldb-dev] LLDB not loading any debug information on windows

2019-03-14 Thread Zachary Turner via lldb-dev
You actually can get DWARF on Windows; I made this work a long time ago,
before we were able to generate PDB, while I was still porting LLDB to
Windows. Since PDB wasn't a thing at the time, DWARF was necessary in order
to get anything working.

I don't know what the state of it is today, though, and I'd definitely
consider it unsupported at minimum.
On Thu, Mar 14, 2019 at 5:49 AM  wrote:

> (Resend, remembering to add lldb-dev back this time)
>
> Asking for DWARF on Windows generally doesn't get you any info at all.
>
>
>
> FTR, the `-glldb` option means generate DWARF, "tuned" for LLDB.  Clang
> understands three "debugger tunings" which are gdb, lldb, and sce. The
> distinctions are minor and not relevant here.
>
> --paulr
>
>
>
>
>
> *From:* lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] *On Behalf Of*
> Zachary Turner via lldb-dev
> *Sent:* Wednesday, March 13, 2019 8:07 PM
> *To:* Adrian McCarthy
> *Cc:* LLDB
> *Subject:* Re: [lldb-dev] LLDB not loading any debug information on
> windows
>
>
>
> Two things stand out to me as odd here.
>
>
>
> 1) -glldb.  -g is supposed to specify the debug information format, either
> dwarf, codeview, or whichever is the default.  I've never heard of anyone
> using -glldb (or for that matter -ggdb).  Just -g, -gcodeview, or -gdwarf.
>
>
>
> 2) You're using clang instead of clang-cl.  While it's possible to make
> things work, we designed clang-cl specifically to avoid these kinds of
> issues, so I would first try running `clang-cl /Z7 main.c` and see if
> things suddenly start working better.
>
>
>
> To be honest, I'm surprised it's even generating a PDB at all with the
> given command line, but then again it's not a codepath anyone has really
> tested, so maybe it's generating "something".
>
>
>
> On Wed, Mar 13, 2019 at 5:01 PM Adrian McCarthy via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Sorry for the delay.  There's definitely something going wrong here.
>
>
>
> If you specify the .pdb file (target symbols add a.pdb), it iterates
> through the objfile plugins to see if any match, and none of them do
> (because a PDB file is not a "module").
>
>
>
> If you specify the .exe file (target symbols add a.exe), it matches an
> objfile plugin and creates the symbol vendor, but the symbol vendor says
> the symbol file is the .exe itself rather than the .pdb, so it appears to
> work but no symbols are actually loaded.
>
>
>
> If you specify the .exe with -s (target symbols add -s a.exe), you again
> get silent failure as in the previous case.
>
>
>
> I'll look at this some more tomorrow to see if I can figure out what this
> code path is supposed to be doing.
>
>
>
> On Mon, Mar 4, 2019 at 11:00 AM Christoph Baumann via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Hey,
>
> In order to try lldb on Windows, I built (with the clang compiler and lld
> linker (v7.0.1)) llvm, clang, lld and of course lldb from the latest source
> with the following command line:
>
>
>
> > cmake -G Ninja -DCMAKE_C_COMPILER=clang-cl -DCMAKE_CXX_COMPILER=clang-cl
> -DCMAKE_LINKER=lld-link -DLLDB_RELOCATABLE_PYTHON=1
> -DLLDB_PYTHON_HOME="C:\program files\python37" -DLLVM_BUILD_TESTS=0
> -DLLVM_BUILD_BENCHMARKS=0 -DLLVM_BUILD_EXAMPLES=0
> -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGET_ARCH=host
> -DCMAKE_INSTALL_PREFIX="..\release" ..\src\llvm
>
> >ninja install
>
>
>
> Here is my little program I used to test lldb:
>
>
>
> > //main.c
>
> > #include <stdio.h>
>
> >
>
> > int a=10;
>
> >
>
> > int main(int argc, char *argv[]){
>
> > for(int i=0; i<argc; i++){
> >   printf("%s\n", argv[i]);
>
> > }
>
> > return(0);
>
> > }
>
>
>
> I compiled the above with "clang main.c -glldb -o a.exe", which generated
> the executable a.exe and corresponding debug information a.pdb.
>
> I launched lldb with "lldb a.exe" and tried to load the debug information
> with "target symbols add a.pdb"; however, this resulted in "error: symbol
> file [….]\a.pdb does not match any existing module".
>
>
>
> I am using Windows 10 Pro 64-bit; both my test program and lldb were
> compiled for the x64 target.
>
>
>
> I have also tried the prebuilt llvm/lldb binaries (v8.0.0, v7.0.1) found
> on llvm.org, same result.
>
>
>
> I feel like I am missing something (unless lldb just does not work on
> Windows yet).
>
>
>
> (On a sidenote, compiling with -gdwarf-5 makes clang crash. I can send the
> debug information clang spits out once my debug build finishes.)
>
>
>
> Greetings
>
>
>


Re: [lldb-dev] LLDB not loading any debug information on windows

2019-03-13 Thread Zachary Turner via lldb-dev
Two things stand out to me as odd here.

1) -glldb.  -g is supposed to specify the debug information format, either
dwarf, codeview, or whichever is the default.  I've never heard of anyone
using -glldb (or for that matter -ggdb).  Just -g, -gcodeview, or -gdwarf.

2) You're using clang instead of clang-cl.  While it's possible to make
things work, we designed clang-cl specifically to avoid these kinds of
issues, so I would first try running `clang-cl /Z7 main.c` and see if
things suddenly start working better.

To be honest, I'm surprised it's even generating a PDB at all with the
given command line, but then again it's not a codepath anyone has really
tested, so maybe it's generating "something".

On Wed, Mar 13, 2019 at 5:01 PM Adrian McCarthy via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Sorry for the delay.  There's definitely something going wrong here.
>
> If you specify the .pdb file (target symbols add a.pdb), it iterates
> through the objfile plugins to see if any match, and none of them do
> (because a PDB file is not a "module").
>
> If you specify the .exe file (target symbols add a.exe), it matches an
> objfile plugin and creates the symbol vendor, but the symbol vendor says
> the symbol file is the .exe itself rather than the .pdb, so it appears to
> work but no symbols are actually loaded.
>
> If you specify the .exe with -s (target symbols add -s a.exe), you again
> get silent failure as in the previous case.
>
> I'll look at this some more tomorrow to see if I can figure out what this
> code path is supposed to be doing.
>
> On Mon, Mar 4, 2019 at 11:00 AM Christoph Baumann via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hey,
>>
>> In order to try lldb on Windows, I built (with the clang compiler and lld
>> linker (v7.0.1)) llvm, clang, lld and of course lldb from the latest source
>> with the following command line:
>>
>>
>>
>> > cmake -G Ninja -DCMAKE_C_COMPILER=clang-cl
>> -DCMAKE_CXX_COMPILER=clang-cl -DCMAKE_LINKER=lld-link
>> -DLLDB_RELOCATABLE_PYTHON=1 -DLLDB_PYTHON_HOME="C:\program files\python37"
>> -DLLVM_BUILD_TESTS=0 -DLLVM_BUILD_BENCHMARKS=0 -DLLVM_BUILD_EXAMPLES=0
>> -DCMAKE_BUILD_TYPE=Release -DLLVM_TARGET_ARCH=host
>> -DCMAKE_INSTALL_PREFIX="..\release" ..\src\llvm
>>
>> >ninja install
>>
>>
>>
>> Here is my little program I used to test lldb:
>>
>>
>>
>> > //main.c
>>
>> > #include <stdio.h>
>>
>> >
>>
>> > int a=10;
>>
>> >
>>
>> > int main(int argc, char *argv[]){
>>
>> > for(int i=0; i<argc; i++){
>> >   printf("%s\n", argv[i]);
>>
>> > }
>>
>> > return(0);
>>
>> > }
>>
>>
>>
>> I compiled the above with "clang main.c -glldb -o a.exe", which generated
>> the executable a.exe and corresponding debug information a.pdb.
>>
>> I launched lldb with "lldb a.exe" and tried to load the debug information
>> with "target symbols add a.pdb"; however, this resulted in "error: symbol
>> file [….]\a.pdb does not match any existing module".
>>
>>
>>
>> I am using Windows 10 Pro 64-bit; both my test program and lldb were
>> compiled for the x64 target.
>>
>>
>>
>> I have also tried the prebuilt llvm/lldb binaries (v8.0.0, v7.0.1) found
>> on llvm.org, same result.
>>
>>
>>
>> I feel like I am missing something (unless lldb just does not work on
>> Windows yet).
>>
>>
>>
>> (On a sidenote, compiling with -gdwarf-5 makes clang crash. I can send
>> the debug information clang spits out once my debug build finishes.)
>>
>>
>>
>> Greetings
>>
>>


[lldb-dev] DEBUG_PRINTF() macro

2019-03-13 Thread Zachary Turner via lldb-dev
Apparently we have a macro called DEBUG_PRINTF() which, if you compile LLDB
with a special pre-processor setting enabled, will cause certain messages
to be printed to stdout while running LLDB.

Does anyone use this?  This seems like a kind of hacky alternative to
tracepoints and/or pretty printers, and in some cases is causing otherwise
dead code to be compiled into the binary, so there's some benefit to
removing it.

Is anyone opposed to removing all of this?
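For context, the pattern in question generally looks something like the sketch below (hypothetical names — LLDB's real macro and its guarding define may differ): call sites compile to nothing unless a special preprocessor setting is defined, which is exactly how code that exists only to feed the macro ends up dead-but-compiled in normal builds.

```c
/* Hypothetical sketch of the macro pattern being discussed; the names
 * LLDB_DEBUG_PRINTF_ENABLED and DEBUG_PRINTF's exact expansion are
 * assumptions, not LLDB's actual code. */
#include <stdio.h>

static int g_debug_messages = 0;   /* demo counter for emitted messages */

#ifdef LLDB_DEBUG_PRINTF_ENABLED
#define DEBUG_PRINTF(...) (g_debug_messages++, fprintf(stderr, __VA_ARGS__))
#else
/* Expands to nothing: the arguments are never evaluated, so any code
 * that exists only to compute them is dead in normal builds. */
#define DEBUG_PRINTF(...) do { } while (0)
#endif

static void resolve_symbol(const char *name) {
  DEBUG_PRINTF("resolving symbol %s\n", name);  /* no-op unless enabled */
}
```

Building with the flag defined (e.g. `-DLLDB_DEBUG_PRINTF_ENABLED`) turns the call sites back into real fprintf calls.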


Re: [lldb-dev] Needs help contributing to lldb-vscode.

2019-03-12 Thread Zachary Turner via lldb-dev
This seems reasonable to me.  It's worth pointing out though that in
regards to the last comment "IMO it's good to make this lldb-vscode more
general so that it can be used by other debugger frontends besides vscode",
despite the name lldb-vscode, there is actually nothing here that is
specific to VSCode.  It reads DAP requests on stdin and responds with DAP
responses on stdout.  That's literally it.  The only thing vscode specific
about it is the names of the source files and some internal classes.  I
actually wouldn't be opposed to changing it to lldb-dap.

On Tue, Mar 12, 2019 at 12:34 PM Leonard Mosescu via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Greg, what do you think?
>
>
> On Tue, Mar 12, 2019 at 11:50 AM Qianli Ma  wrote:
>
>> Hi lldb community,
>>
>> I am currently working on a project related to lldb. I'd like to write a
>> DAP RPC server similar to lldb-vscode.cc, but one that exports I/O to
>> internal RPC clients. Doing so requires me to reuse some functions
>> defined in lldb-vscode.cc. However, as those functions are defined using
>> forward declarations, I am not able to do that.
>>
>> I'd like to refactor the code a bit. More specifically, I'd like to
>> extract all helper functions in lldb-vscode.cc into a separate file and
>> create a header for it.  BTW, IMO it's good to make this lldb-vscode
>> more general so that it can be used by other debugger frontends besides
>> vscode.
>>
>> Please let me know WDYT and how I can proceed to submit changes for
>> review.
>>
>> Thanks and Regards
>> Qianli
>>


Re: [lldb-dev] Status of DWARF64 in LLDB

2019-03-11 Thread Zachary Turner via lldb-dev
Given that:

1) LLVM doesn't produce DWARF64
2) GCC has to be patched to produce DWARF64
3) LLDB's support is only partial but is untested and appears to be missing
major pieces in order for it to work
4) It's of questionable use as there are several viable alternatives

Would it be reasonable to propose a patch removing the incomplete support
from LLDB?  We can always add it back later when someone is ready to really
support and test it properly, and the history in the repository will show
what code would need to be changed to get back to at least where the
support is today (which again, appears to not fully work).

If we can go this route, it makes merging the two DWARF parsing
implementations quite a bit simpler.

On Mon, Mar 11, 2019 at 3:33 PM Adrian Prantl  wrote:

>
>
> > On Mar 11, 2019, at 12:45 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > I want to ask what the status of DWARF64 in LLDB is.  I can tell there's
> some support for it by reading the code, but it seems to have zero test
> coverage so it's not clear to me that anyone depends on it.  For example, I
> know that clang and LLVM will not even generate DWARF64, so if anyone is
> depending on it, they must be debugging programs built with some other
> toolchain.
>
> AFAIR, Apple's tools only generate/support DWARF32. After implementing
> type-uniquing in dsymutil we didn't see any individual .dSYM bundles that
> came even close to the 4GB watermark.
>
> >
> > I'm looking at unifying LLDB's DWARF parser with LLVM's, and this is the
> biggest functional difference I can see.
> >
> > Certainly we can improve LLVM's support for consuming DWARF64, but it's
> a question of priorities.  If nobody is actively depending on this, then
> taking a regression here could be on the table and then re-prioritizing
> adding back support in the future if / when we actually need it.
>
> -- adrian
>


Re: [lldb-dev] Status of DWARF64 in LLDB

2019-03-11 Thread Zachary Turner via lldb-dev
Thanks Jan,

That was my suspicion as well.  If it's true that DWARF64 support is
currently non-functional, then I think the easiest path forward is to
remove any traces of it from LLDB as a way of bringing the two
implementations closer together.

I'll tinker around with this idea in a local branch while waiting to see if
anyone else has any input.

On Mon, Mar 11, 2019 at 1:25 PM Jan Kratochvil 
wrote:

> On Mon, 11 Mar 2019 20:45:48 +0100, Zachary Turner via lldb-dev wrote:
> > I want to ask what the status of DWARF64 in LLDB is.
>
> IMO there isn't any; for example:
> lldb/source/Plugins/SymbolFile/DWARF/DIERef.cpp
> is using bits 32..63 for additional info (DWO file offset/index for
> example)
> while only bits 0..31 are used for DIE offset inside .debug_info section.
>
> lldb/include/lldb/Core/dwarf.h
> #ifdef DWARFUTILS_DWARF64
> but nobody ever defines DWARFUTILS_DWARF64 and so it uses:
> typedef uint32_t dw_offset_t; // Dwarf Debug Information Entry offset
>                               // for any offset into the file
>
>
> > For example, I know that clang and LLVM will not even generate DWARF64,
>
> Even GCC needs to be patched to generate DWARF64.
>
>
> > Certainly we can improve LLVM's support for consuming DWARF64, but it's a
> > question of priorities.
>
> I think it is never needed in real world as long as one uses DWP and/or
> -fdebug-types-section.  Red Hat is using neither (for DWZ postprocessing)
> and
> so I did hit this limit of unsupported DWARF64 in GNU utilities [attached].
>
>
> Jan
>
>
>
> -- Forwarded message --
> From: Jan Kratochvil 
> To: x...@redhat.com
> Cc:
> Bcc:
> Date: Wed, 8 Aug 2018 15:14:16 +0200
> Subject: DWARF64 for Chromium with 10GB of DWARF
> Hello,
>
> LLDB people were talking about 6GB Chromium binaries. So I checked Fedora
> Chromium but:
> # Debuginfo packages aren't very useful here. If you need to debug
> # you should do a proper debug build (not implemented in this spec
> yet)
> %global debug_package %{nil}
> and it uses no '-g' during compilation.
>
> After enabling Chromium debug info [attached, it has only -g2, not -g3] I
> got:
> obj/mojo/public/cpp/system/system/message_pipe.o:(.debug_loc+0x1bcc):
> relocation truncated to fit: R_X86_64_32 against `.debug_info'
> ...
> obj/third_party/blink/renderer/platform/platform/fe_convolve_matrix.o:(.debug_info+0x4b5dc):
> additional relocation overflows omitted from the output
> collect2: error: ld returned 1 exit status
>
> Which is logical as Chromium has 8GB of .debug_info section. I found no gcc
> option to enable 64-bit DWARF so I had to patch GCC for that [attached].
>
> But then the rpmbuild failed a different way:
>
> /usr/lib/rpm/debugedit:
> BUILDROOT/chromium-67.0.3396.87-2.fc28.x86_64/usr/lib64/chromium-browser/libEGL.so:
> 64-bit DWARF not supported
> ...
> eu-strip: elf32_updatefile.c:336: __elf64_updatemmap: Assertion
> `dl->data.d.d_size <= (shdr->sh_size - (GElf_Off) dl->data.d.d_off)' failed.
> /usr/lib/rpm/find-debuginfo.sh: line 231: 3998449 Aborted
>  eu-strip --remove-comment $r $g ${keep_remove_args} -f "$1" "$2"
> double free or corruption (out)
> /usr/lib/rpm/find-debuginfo.sh: line 231: 3998529 Aborted
>  eu-strip --remove-comment $r $g ${keep_remove_args} -f "$1" "$2"
> dwz:
> ./etc/chromium/native-messaging-hosts/remoting_user_session-67.0.3396.87-2.fc28.x86_64.debug:
> 64-bit DWARF not supported
>
> DWZ would be unable to handle it on x86_64 even if it did support DWARF64
> as
> rpmbuild limits it to 110e6 DIEs while this DWARF has 500e6 DIEs.
>
> Google AFAIK builds it with -gsplit-dwarf and then one can pack the *.dwo
> files into EXEC.dwp by /usr/bin/dwp (llvm-dwp in the Google case).
> DWZ-like optimization is then achieved by -fdebug-types-section.
> But then rpmbuild is not prepared to handle *.dwp (like current *.debug).
>
> The 10GB DWARF64 binary if anyone is interested:
> https://www.jankratochvil.net/t/chromium-headless_shell.xz
>
> Then it is questionable whether to deal with DWARF64/DWP just for Chromium.
> Customers really do not have binaries of this DWARF size?
>
>
> Jan


[lldb-dev] Status of DWARF64 in LLDB

2019-03-11 Thread Zachary Turner via lldb-dev
I want to ask what the status of DWARF64 in LLDB is.  I can tell there's
some support for it by reading the code, but it seems to have zero test
coverage so it's not clear to me that anyone depends on it.  For example, I
know that clang and LLVM will not even generate DWARF64, so if anyone is
depending on it, they must be debugging programs built with some other
toolchain.

I'm looking at unifying LLDB's DWARF parser with LLVM's, and this is the
biggest functional difference I can see.

Certainly we can improve LLVM's support for consuming DWARF64, but it's a
question of priorities.  If nobody is actively depending on this, then
taking a regression here could be on the table and then re-prioritizing
adding back support in the future if / when we actually need it.
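For anyone weighing the cost of supporting it: the two formats differ right from the unit header. Per the DWARF specification, a unit's "initial length" is 4 bytes unless those bytes are the escape value 0xffffffff, in which case the unit is DWARF64 and an 8-byte length follows. A minimal reader sketch (not LLDB's or LLVM's actual parser; assumes the buffer matches the host's endianness):

```c
/* Sketch of reading a DWARF "initial length" field and classifying the
 * unit as DWARF32 or DWARF64, per the spec's 0xffffffff escape value. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static uint64_t read_initial_length(const uint8_t *p, int *is_dwarf64,
                                    size_t *header_bytes) {
  uint32_t len32;
  memcpy(&len32, p, sizeof(len32));
  if (len32 == 0xffffffffu) {      /* DWARF64 escape value */
    uint64_t len64;
    memcpy(&len64, p + 4, sizeof(len64));
    *is_dwarf64 = 1;
    *header_bytes = 12;            /* 4-byte escape + 8-byte length */
    return len64;
  }
  *is_dwarf64 = 0;
  *header_bytes = 4;
  return len32;                    /* plain DWARF32 length */
}
```

Every offset and section-relative reference in a DWARF64 unit widens to 8 bytes as well, which is why partial support in one spot (like a 32-bit offset typedef) breaks everything downstream.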


[lldb-dev] Host is now dependency free

2019-03-08 Thread Zachary Turner via lldb-dev
It's been a long time coming and a lot of work to get here, but Host is now
dependency free.  While this may not be enforced in the Xcode project
(unless someone changes it to not link against any other libraries /
targets), as of r355730 this is enforced in the CMake build, so if new
dependencies are introduced, it will break most non-OSX build bots.

Mostly just throwing this out there so people are aware.

The good news is that this gets us one step closer to a shared libraries
build as well as a real C++ modules build, as well as being able to create
small debugger-related tools that are not full blown debuggers.

For the curious, the remaining cycles in the build graph (as well as their
outgoing edge counts) are:

4 deps to break: lldb/Commands [3->] lldb/Expression [1->] lldb/Commands
5 deps to break: lldb/Plugins/SymbolFile/DWARF [4->] lldb/Expression [1->]
lldb/Plugins/SymbolFile/DWARF
5 deps to break: lldb/Plugins/Language/ObjC [4->]
lldb/Plugins/LanguageRuntime/ObjC/AppleObjCRuntime [1->]
lldb/Plugins/Language/ObjC
6 deps to break: lldb/Interpreter [1->] lldb/Breakpoint [5->]
lldb/Interpreter
6 deps to break: lldb/Plugins/ScriptInterpreter/Python [2->] lldb/API [4->]
lldb/Plugins/ScriptInterpreter/Python
13 deps to break: lldb/Plugins/Language/ObjC [12->] lldb/Symbol [1->]
lldb/Plugins/Language/ObjC
14 deps to break: lldb/Interpreter [10->] lldb/DataFormatters [4->]
lldb/Interpreter
22 deps to break: lldb/Plugins/SymbolFile/PDB [21->] lldb/Symbol [1->]
lldb/Plugins/SymbolFile/PDB
23 deps to break: lldb/Target [3->] lldb/DataFormatters [20->] lldb/Target
26 deps to break: lldb/Expression [1->] lldb/Plugins/ExpressionParser/Clang
[25->] lldb/Expression
29 deps to break: lldb/Plugins/Language/CPlusPlus [26->] lldb/Core [3->]
lldb/Plugins/Language/CPlusPlus
29 deps to break: lldb/Plugins/Language/ObjC [27->] lldb/Core [2->]
lldb/Plugins/Language/ObjC
29 deps to break: lldb/Core [14->] lldb/DataFormatters [15->] lldb/Core
33 deps to break: lldb/Expression [30->] lldb/Symbol [3->] lldb/Expression
37 deps to break: lldb/Expression [33->] lldb/Core [4->] lldb/Expression
38 deps to break: lldb/Target [1->] lldb/Plugins/Language/ObjC [37->]
lldb/Target
42 deps to break: lldb/Interpreter [19->] lldb/Target [23->]
lldb/Interpreter
42 deps to break: lldb/Breakpoint [39->] lldb/Core [3->] lldb/Breakpoint
49 deps to break: lldb/Interpreter [25->] lldb/Core [24->] lldb/Interpreter
51 deps to break: lldb/Target [4->] lldb/Plugins/ExpressionParser/Clang
[47->] lldb/Target
55 deps to break: lldb/Plugins/SymbolFile/DWARF [54->] lldb/Symbol [1->]
lldb/Plugins/SymbolFile/DWARF
62 deps to break: lldb/Plugins/ExpressionParser/Clang [58->] lldb/Symbol
[4->] lldb/Plugins/ExpressionParser/Clang
69 deps to break: lldb/Target [38->] lldb/Breakpoint [31->] lldb/Target
72 deps to break: lldb/Target [13->] lldb/Expression [59->] lldb/Target
72 deps to break: lldb/Utility [71->] lldb [1->] lldb/Utility
104 deps to break: lldb/Target [64->] lldb/Symbol [40->] lldb/Target
128 deps to break: lldb/Target [7->] lldb/Plugins/Process/Utility [121->]
lldb/Target
201 deps to break: lldb/Core [110->] lldb/Symbol [91->] lldb/Core
227 deps to break: lldb/Target [127->] lldb/Core [100->] lldb/Target

Found by running scripts/analyze-project-deps.py


Re: [lldb-dev] RFC: Eliminate LLDB_DISABLE_PYTHON

2019-03-07 Thread Zachary Turner via lldb-dev
Yes, Pavel pointed out one specific case where it is used, and that case
definitely needs to be supported.

We've talked in the past about fixing the layering in such a way that all
Python related code is in ScriptInterpreterPython, but there's definitely a
non-trivial amount of work needed to make that possible.  And I agree, if
that were the case today, then you could just turn it off trivially.

On Thu, Mar 7, 2019 at 12:48 PM Jim Ingham  wrote:

>
>
> > On Mar 7, 2019, at 11:37 AM, Zachary Turner  wrote:
> >
> >
> >
> > On Thu, Mar 7, 2019 at 11:03 AM Jim Ingham via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > Even though you can just use debugserver/lldb-server and debug remotely,
> many people find it handy to be able to run a debugger directly on the
> device they are using.  But requiring that you port Python and bundle it
> with your embedded platform just to get a debugger there is a pretty big
> ask.  So it is still quite useful to be able to build lldb without Python.
> This option hasn't been broken all that frequently in our uses, but if it
> is being a problem, the better solution would be to set up a bot to build
> with this option, so we can make sure it keeps working.
> >
> > That's true, but there's a maintenance burden associated with this
> handiness, and if nobody uses it that much (or at all?), it's better IMO to
> optimize for low maintenance.  Every time I've tried this configuration it
> has been broken, which leads me to believe that in practice nobody is
> actually using it.  If that really is the case, I'd rather it be gone.  I
> don't think we should keep code around just in case, without specific
> evidence that it's providing value.
>
>
> It does get used, though we might be able to get away from that at some
> point.  But I still think requiring any new platform that might want to run
> a debugger to get Python up first is unfortunate, and we shouldn't lightly
> require it.
>
> But also, this isn't just about Python in particular.  Everything in lldb
> that touches the script interpreter should be going through the abstract
> ScriptInterpreter class.  The only place where the fact that the script
> interpreter happens to be Python should show up is in the
> ScriptInterpreterPython.  So building without Python should be as simple as
> not building the ScriptInterpreterPython and not initializing that plugin.
> The maintenance burden for this option should be trivial.  Something is
> broken that LLDB_DISABLE_PYTHON has gotten entangled deeper in lldb.  I'd
> much prefer we fix that.
>
> It would be really cool, for instance, if lldb could support a Python2 and
> a Python3 script interpreter to ease the transition for folks with lots of
> legacy Python 2 code (such folks exist).  lldb was designed to support that
> sort of thing.
>
> Or maybe at some point we should support some other new hotness language.
> I'm not sure it is good to bind ourselves inextricably to Python.
>
> Jim
>
>


Re: [lldb-dev] RFC: Eliminate LLDB_DISABLE_PYTHON

2019-03-07 Thread Zachary Turner via lldb-dev
Does lldb-server for Android currently use this flag?  I was under the
impression it just linked against Python anyway.

On Thu, Mar 7, 2019 at 11:50 AM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> On 07/03/2019 20:29, Davide Italiano via lldb-dev wrote:
> > I'm in favor of this. FWIW, I will be surprised if lldb works at all
> > with that option.
> >
>
> I would actually be also surprised if it works. However, the option I
> think is important to keep is to be able to build lldb-server without
> python around. Nobody uses the lldb client on android (in fact, I'm
> pretty sure it doesn't work), but lldb-server is used there. Getting a
> working python for android is hard, and it seems counterproductive to
> ask somebody to do it, when lldb-server does not even use python.
>
> If we can factor the code in such a way that code requiring python is
> not built/used when building lldb-server, then making python support
> mandatory for the rest seems reasonable to me.
>
> pl


Re: [lldb-dev] RFC: Eliminate LLDB_DISABLE_PYTHON

2019-03-07 Thread Zachary Turner via lldb-dev
On Thu, Mar 7, 2019 at 11:03 AM Jim Ingham via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Even though you can just use debugserver/lldb-server and debug remotely,
> many people find it handy to be able to run a debugger directly on the
> device they are using.  But requiring that you port Python and bundle it
> with your embedded platform just to get a debugger there is a pretty big
> ask.  So it is still quite useful to be able to build lldb without Python.
> This option hasn't been broken all that frequently in our uses, but if it
> is being a problem, the better solution would be to set up a bot to build
> with this option, so we can make sure it keeps working.
>

That's true, but there's a maintenance burden associated with this
handiness, and if nobody uses it that much (or at all?), it's better IMO to
optimize for low maintenance.  Every time I've tried this configuration it
has been broken, which leads me to believe that in practice nobody is
actually using it.  If that really is the case, I'd rather it be gone.  I
don't think we should keep code around just in case, without specific
evidence that it's providing value.


Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-03-06 Thread Zachary Turner via lldb-dev
On Mon, Mar 4, 2019 at 10:32 AM Zachary Turner  wrote:

> On Sat, Mar 2, 2019 at 2:56 PM Adrian Prantl  wrote:
>
>>
>>- It becomes testable as an independent component, because you can
>>just send requests to it and dump the results and see if they make sense.
>>Currently there is almost zero test coverage of this aspect of LLDB apart
>>from what you can get after going through many levels of indirection via
>>spinning up a full debug session and doing things that indirectly result 
>> in
>>symbol queries.
>>
>> You are right that the type system debug info ingestion and AST
>> reconstruction is primarily tested end-to-end.
>>
> Do you consider this something worth addressing by testing the debug info
> ingestion in isolation?
>

Wanted to bump this thread for visibility.  If nothing else, I'm
interested in an answer to this question.  Because if people agree that it
would be valuable to test this going forward, we should work out a plan
about what such tests would look like and how to refactor the code
appropriately to make it possible.


Re: [lldb-dev] new tool (core2yaml) + a new top-level library (Formats)

2019-03-06 Thread Zachary Turner via lldb-dev
Well, all of the actual yamlization code in obj2yaml and yaml2obj is
library-ized, so you could always add the real code there, then have
core2yaml just link against it.

On Wed, Mar 6, 2019 at 5:11 AM Pavel Labath  wrote:

> On 05/03/2019 22:52, Zachary Turner wrote:
> >
> >
> > On Tue, Mar 5, 2019 at 1:47 PM Jonas Devlieghere via lldb-dev
> > mailto:lldb-dev@lists.llvm.org>> wrote:
> >
> >
> > I don't know much about the minidump format or code, but it sounds
> > reasonable to me to have support for it in yaml2obj, which would be
> > a sufficient motivation to have the code live there. As you mention
> > in your footnote, MachO core files are already supported, and it
> > sounds like ELF could reuse a bunch of existing code as well. So
> > having everything in LLVM would give you even more symmetry. I also
> > doubt anyone would mind having more fine grained yamlization, even
> > if you cannot use it to reduce a test it's nicer to see structure
> > than a binary blob (imho). Anyway, that's just my take, I guess this
> > is more of a question for the LLVM mailing list.
> >
> > A lot of obj2yaml output is just "Section Name" / "Section Contents" and
> > then a long hex string consisting of the contents.  Since a core file is
> > an ELF file, this would already be supported for obj2yaml today (in
> > theory)
>
> Actually, even this is not true. An ELF *core file* is an *ELF file*,
> but it contains no sections. It contains "segments" instead. :P obj2yaml
> has absolutely no support for segments, so if you try to yamlize a
> core file with it, you will get empty output.
>
> Interestingly, yaml2obj does contain some support for segments, but it's
> extremely limited, and can only be used to create very simple
> "executable" files. Core files still cannot be represented there right
> now, as yaml2obj is still very section-centric.
>
>
> However, I do see the appeal in having a single tool for yamlization of
> various "object" file formats, so I am going to send an email to
> llvm-dev and see what the response is like there. I'd encourage anyone
> interested in this to voice your opinion there too.
>
> regards,
> pavel
>
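As a rough sketch of where this could go: a yaml2obj input describing an ET_CORE file via program headers might look like the following. The key names (`ProgramHeaders`, `FirstSec`, `LastSec`) follow the ELF schema of later yaml2obj versions, which postdate this thread, and the note contents are invented; note that segment contents are still described via sections, which is exactly the section-centricity being discussed:

```yaml
--- !ELF
FileHeader:
  Class:   ELFCLASS64
  Data:    ELFDATA2LSB
  Type:    ET_CORE
  Machine: EM_X86_64
Sections:
  - Name:    .note0
    Type:    SHT_NOTE
    Content: "0500000014000000010000"   # invented note blob, not a real NT_PRSTATUS
ProgramHeaders:
  - Type:     PT_NOTE
    FirstSec: .note0
    LastSec:  .note0
```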


Re: [lldb-dev] new tool (core2yaml) + a new top-level library (Formats)

2019-03-05 Thread Zachary Turner via lldb-dev
On Tue, Mar 5, 2019 at 1:47 PM Jonas Devlieghere via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

>
> I don't know much about the minidump format or code, but it sounds
> reasonable to me to have support for it in yaml2obj, which would be a
> sufficient motivation to have the code live there. As you mention in your
> footnote, MachO core files are already supported, and it sounds like ELF
> could reuse a bunch of existing code as well. So having everything in LLVM
> would give you even more symmetry. I also doubt anyone would mind having
> more fine grained yamlization, even if you cannot use it to reduce a test
> it's nicer to see structure than a binary blob (imho). Anyway, that's just
> my take, I guess this is more of a question for the LLVM mailing list.
>

A lot of obj2yaml output is just "Section Name" / "Section Contents" and
then a long hex string consisting of the contents.  Since a core file is an
ELF file, this would already be supported for obj2yaml today (in theory),
but I also agree that specific knowledge of breaking it down into finer
grained fields and subfields, and actually parsing the core, is probably
not useful for anything else in LLVM.



>
>
>> Discussion topic #3: Use of .def files in lldb. In one of the patches I
>> create a .def textual header to be used for avoiding repetitive code
>> when dealing with various constants. This is fairly common practice in llvm,
>> but would be a first in lldb.
>>
>
> I think this is a good idea. Although not exactly the same, we already got
> our feet wet with a tablegen file in the driver.
>
+1


Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-03-04 Thread Zachary Turner via lldb-dev
On Sat, Mar 2, 2019 at 2:56 PM Adrian Prantl  wrote:

>
> On Feb 25, 2019, at 10:21 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Hi all,
>
> We've got some internal efforts in progress, and one of those would
> benefit from debug info parsing being out of process (independently of
> whether or not the rest of LLDB is out of process).
>
> There's a couple of advantages to this, which I'll enumerate here:
>
>- It improves one source of instability in LLDB which has been known
>to be problematic -- specifically, that debug info can be bad and handling
>this can often be difficult and bring down the entire debug session.  While
>other efforts have been made to address stability by moving things out of
>process, they have not been upstreamed, and even if they had I think we
>would still want this anyway, for reasons that follow.
>
> Where do you draw the line between debug info and the in-process part of
> LLDB? I'm asking because I have never seen the mechanical parsing of DWARF
> to be a source of instability; most crashes in LLDB are when reconstructing
> Clang ASTs because we're breaking some subtle and badly enforced invariants
> in Clang's Sema. Perhaps parsing PDBs is less stable? If you do mean at the
> AST level then I agree with the sentiment that it is a common source of
> crashes, but I don't see a good way of moving that component out of
> process. Serializing ASTs or types in general is a hard problem, and I'd
> find the idea of inventing yet another serialization format for types that
> we would have to develop, test, and maintain quite scary.
>
If anything I think parsing PDBs is more stable.  There is close to zero
flexibility in how types and symbols can be represented in PDB / CodeView,
and on top of that, there are very few producers.  Combined, this means we
can assume almost everything about the structure of the records.

Yes the crashes *happen* at the AST level (most of them anyway, not all -
there are definitely examples of crashing in the actual parsing code), but
the fact that there is so much flexibility in how records can be specified
in DWARF exacerbates the problem by complicating the parsing code, which is
then not well tested because of all the different code paths.



>
>- It becomes testable as an independent component, because you can
>just send requests to it and dump the results and see if they make sense.
>Currently there is almost zero test coverage of this aspect of LLDB apart
>from what you can get after going through many levels of indirection via
>spinning up a full debug session and doing things that indirectly result in
>symbol queries.
>
> You are right that the type system debug info ingestion and AST
> reconstruction is primarily tested end-to-end.
>
Do you consider this something worth addressing by testing the debug info
ingestion in isolation?


>
> The big win here, at least from my point of view, is the second one.
> Traditional symbol servers operate by copying entire symbol files (DSYM,
> DWP, PDB) from some machine to the debugger host.  These can be very large
> -- we've seen 12+ GB in some cases -- which ranges from "slow bandwidth
> hog" to "complete non-starter" depending on the debugger host and network.
>
>
> 12 GB sounds suspiciously large. Do you know how this breaks down between
> line table, types, and debug locations? If it's types, are you
> deduplicating them? For comparison, the debug info of LLDB (which contains
> two compilers and a debugger) compresses to under 500MB, but perhaps the
> binaries you are working with are really just that much larger.
>
They really are that large.



>
> In this kind of scenario, one could theoretically run the debug info
> process on the same NAS, cloud, or whatever as the symbol server.  Then,
> rather than copying over an entire symbol file, it responds only to the
> query you issued -- if you asked for a type, it just returns a packet
> describing the type you requested.
>
> The API itself would be stateless (so that you could make queries for
> multiple targets in any order) as well as asynchronous (so that responses
> might arrive out of order).  Blocking could be implemented in LLDB, but
> having the server be asynchronous means multiple clients could connect to
> the same server instance.  This raises interesting possibilities.  For
> example, one can imagine thousands of developers connecting to an internal
> symbol server on the network and being able to debug remote processes or
> core dumps over slow network connections or on machines with very little
> storage (e.g. chromebooks).
>
>
> You *could* just run LLDB remotely ;-

Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-03-01 Thread Zachary Turner via lldb-dev
On Wed, Feb 27, 2019 at 4:35 PM Frédéric Riss  wrote:

>
> On Feb 27, 2019, at 3:14 PM, Zachary Turner  wrote:
>
>
>
> On Wed, Feb 27, 2019 at 2:52 PM Frédéric Riss  wrote:
>
>> On Feb 27, 2019, at 10:12 AM, Zachary Turner  wrote:
>>
>>
>>
>> For what it's worth, in an earlier message I mentioned that I would
>> probably build the server by using mostly code from LLVM, and making sure
>> that it supported the union of things currently supported by LLDB and
>> LLVM's DWARF parsers.  Doing that would naturally require merging the two
>> (which has been talked about for a long time) as a pre-requisite, and I
>> would expect that for testing purposes we might want something like
>> llvm-dwarfdump but that dumps a higher level description of the information
>> (if we change our DWARF emission code in LLVM for example, to output the
>> exact same type in slightly different ways in the underlying DWARF, we
>> wouldn't want our test to break, for example).  So for example imagine you
>> could run something like `lldb-dwarfdump -lookup-type=foo a.out` and it
>> would dump some description of the type that is resilient to insignificant
>> changes in the underlying DWARF.
>>
>>
>> At which level do you consider the “DWARF parser” to stop and the
>> debugger policy to start? In my view, the DWARF parser stops at the DwarfDIE
>> boundary. Replacing it wouldn’t get us closer to a higher-level abstraction.
>>
> At the level where you have an alternative representation and no
> longer have to access the debug info.  In LLDB today, this
> "representation" is a combination of LLDB's own internal symbol hierarchy
> (e.g. lldb_private::Type, lldb_private::Function, etc) and the Clang AST.
> Once you have constructed those 2 things, the DWARF parser is out of the
> picture.
>
> A lot of the complexity in processing raw DWARF comes from handling
> different versions of the DWARF spec (e.g. supporting DWARF 4 & DWARF 5),
> collecting and interpreting the subset of attributes which happens to be
> present, following references to other parts of the DWARF, and then at the
> end of all this (or perhaps during all of this), dealing with "partial
> information" (e.g. something that would have saved me a lot of trouble was
> missing, now I have to do extra work to find it).
>
> I'm treating DWARF expressions as an exception though, because it would be
> somewhat tedious and not provide much value to convert those into some text
> format and then evaluate the text representation of the expression since
> it's already in a format suitable for processing.  So for this case, you
> could just encode the byte sequence into a hex string and send that.
>
> I hinted at this already, but part of the problem (at least in my mind) is
> that our "DWARF parser" is intermingled with the code that *interprets the
> parsed DWARF*.  We parse a little bit, build something, parse a little bit
> more, add on to the thing we're building, etc.  This design is fragile and
> makes error handling difficult, so part of what I'm proposing is a
> separation here, where "parse as much as possible, and return an
> intermediate representation that is as finished as we are able to make it".
>
> This part is independent of whether DWARF parsing is out of process
> however.  That's still useful even if DWARF parsing is in process, and
> we've talked about something like that for a long time, whereby we have
> some kind of API that says "give me the thing, handle all errors
> internally, and either return me a thing which I can trust or an error".
> I'm viewing "thing which I can trust" as some representation which is
> separate from the original DWARF, and which we could test -- for example --
> by writing a tool which dumps this representation
>
>
> Ok, here we are talking about something different (which you might have
> been expressing since the beginning and I misinterpreted). If you want to
> decouple dealing with DIEs from creating ASTs as a preliminary, then I
> think this would be super valuable and it addresses my concerns about
> duplicating the AST creation logic.
>
> I’m sure Greg would have comments about the challenges of lazily parsing
> the DWARF in such a design.
>
Well, I was originally talking about both lumped into one thing.  Because
this is a necessary precursor to having it be out of process :)

Since we definitely agree on this portion, the question then becomes:
Suppose we have this firm API boundary across which we either return errors
or things that can be trusted.  What are the things which can be trusted?
Are they DIEs?  I'm not sure they should be, because we'd have to
synthesize DIEs on the fly in the case where we got something that was bad
but we tried to "fix" it (in order to sanitize the debug info into
something the caller can make basic assumptions about).  And additionally,
it doesn't really make the client's job much easier as far as parsing goes.

So, I think it should build up a little bit higher representation of the
debug 

Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-27 Thread Zachary Turner via lldb-dev
On Wed, Feb 27, 2019 at 2:52 PM Frédéric Riss  wrote:

> On Feb 27, 2019, at 10:12 AM, Zachary Turner  wrote:
>
>
>
> For what it's worth, in an earlier message I mentioned that I would
> probably build the server by using mostly code from LLVM, and making sure
> that it supported the union of things currently supported by LLDB and
> LLVM's DWARF parsers.  Doing that would naturally require merging the two
> (which has been talked about for a long time) as a pre-requisite, and I
> would expect that for testing purposes we might want something like
> llvm-dwarfdump but that dumps a higher level description of the information
> (if we change our DWARF emission code in LLVM for example, to output the
> exact same type in slightly different ways in the underlying DWARF, we
> wouldn't want our test to break, for example).  So for example imagine you
> could run something like `lldb-dwarfdump -lookup-type=foo a.out` and it
> would dump some description of the type that is resilient to insignificant
> changes in the underlying DWARF.
>
>
> At which level do you consider the “DWARF parser” to stop and the debugger
> policy to start? In my view, the DWARF parser stops at the DwarfDIE
> boundary. Replacing it wouldn’t get us closer to a higher-level abstraction.
>
At the level where you have an alternative representation and no
longer have to access the debug info.  In LLDB today, this
"representation" is a combination of LLDB's own internal symbol hierarchy
(e.g. lldb_private::Type, lldb_private::Function, etc) and the Clang AST.
Once you have constructed those 2 things, the DWARF parser is out of the
picture.

A lot of the complexity in processing raw DWARF comes from handling
different versions of the DWARF spec (e.g. supporting DWARF 4 & DWARF 5),
collecting and interpreting the subset of attributes which happens to be
present, following references to other parts of the DWARF, and then at the
end of all this (or perhaps during all of this), dealing with "partial
information" (e.g. something that would have saved me a lot of trouble was
missing, now I have to do extra work to find it).

I'm treating DWARF expressions as an exception though, because it would be
somewhat tedious and not provide much value to convert those into some text
format and then evaluate the text representation of the expression since
it's already in a format suitable for processing.  So for this case, you
could just encode the byte sequence into a hex string and send that.

I hinted at this already, but part of the problem (at least in my mind) is
that our "DWARF parser" is intermingled with the code that *interprets the
parsed DWARF*.  We parse a little bit, build something, parse a little bit
more, add on to the thing we're building, etc.  This design is fragile and
makes error handling difficult, so part of what I'm proposing is a
separation here, where "parse as much as possible, and return an
intermediate representation that is as finished as we are able to make it".

This part is independent of whether DWARF parsing is out of process
however.  That's still useful even if DWARF parsing is in process, and
we've talked about something like that for a long time, whereby we have
some kind of API that says "give me the thing, handle all errors
internally, and either return me a thing which I can trust or an error".
I'm viewing "thing which I can trust" as some representation which is
separate from the original DWARF, and which we could test -- for example --
by writing a tool which dumps this representation



>
> At that point you're already 90% of the way towards what I'm proposing,
> and it's useful independently.
>
>
> I think that “90%” figure is a little off :-) But please don’t take my
> questions as opposition to the general idea. I find the idea very
> interesting, and we could maybe use something similar internally so I am
> interested. That’s why I’m asking questions.
>

Hmm, well I think the 90% figure is pretty accurate.  Because if we
envision a hypothetical command line tool which ingests DWARF from a binary
or set of binaries, and has some command line interface that allows you to
query it in the same way our SymbolFile plugins can be queried, and dumps
its output in some intermediate format (maybe JSON, maybe something else)
and is sufficiently descriptive to make a Clang AST or build LLDB's
internal symbol & type hierarchy out of it, then at that point the only
thing missing from my original proposal is a socket to send that over the
wire and something on the other end to make the Clang AST and LLDB type /
symbol hierarchy.


Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-27 Thread Zachary Turner via lldb-dev
GSYM, as I understand it, is basically just an evolution of Breakpad
symbols.  It doesn't contain full fidelity debug information (type
information, function parameters, etc).

On Tue, Feb 26, 2019 at 5:56 PM  wrote:

> When I see this "parsing DWARF and turning it into something else" it is
> very reminiscent of what clayborg is trying to do with GSYM.  You're both
> talking about leveraging LLVM's parser, which is great, but I have to
> wonder if there isn't more commonality being left on the table.  Just
> throwing that thought out there; I don't have anything specific to suggest.
>
> --paulr
>
>
>
> *From:* lldb-dev [mailto:lldb-dev-boun...@lists.llvm.org] *On Behalf Of*
> Frédéric Riss via lldb-dev
> *Sent:* Tuesday, February 26, 2019 5:40 PM
> *To:* Zachary Turner
> *Cc:* LLDB
> *Subject:* Re: [lldb-dev] RFC: Moving debug info parsing out of process
>
>
>
>
>
>
>
> On Feb 26, 2019, at 4:52 PM, Zachary Turner  wrote:
>
>
>
>
>
> On Tue, Feb 26, 2019 at 4:49 PM Frédéric Riss  wrote:
>
>
>
> On Feb 26, 2019, at 4:03 PM, Zachary Turner  wrote:
>
>
>
> I would probably build the server by using mostly code from LLVM.  Since
> it would contain all of the low level debug info parsing libraries, i would
> expect that all knowledge of debug info (at least, in the form that
> compilers emit it in) could eventually be removed from LLDB entirely.
>
>
>
> That’s quite an ambitious goal.
>
>
>
> I haven’t looked at the SymbolFile API, what do you expect the exchange
> currency between the server and LLDB to be? Serialized compiler ASTs? If
> that’s the case, it seems like you need a strong rev-lock between the
> server and the client. Which in turn adds quite some complexity to the
> rollout of new versions of the debugger.
>
> Definitely not serialized ASTs, because you could be debugging some
> language other than C++.  Probably something more like JSON, where you
> parse the debug info and send back some JSON representation of the type /
> function / variable the user requested, which can almost be a direct
> mapping to LLDB's internal symbol hierarchy (e.g. the Function, Type, etc
> classes).  You'd still need to build the AST on the client
>
>
>
> This seems fairly easy for Function or symbols in general, as it’s easy to
> abstract their few properties, but as soon as you get to the type system, I
> get worried.
>
>
>
> Your representation needs to have the full expressivity of the underlying
> debug info format. Inventing something new in that space seems really
> expensive. For example, every piece of information we add to the debug info
> in the compiler would need to be handled in multiple places:
>
>  - the server code
>
>  - the client code that talks to the server
>
>  - the current “local" code (for a pretty long while)
>
> Not ideal. I wish there was a way to factor at least the last 2.
>
>
>
> But maybe I’m misunderstanding exactly what you’d put in your JSON. If
> it’s very close to the debug format (basically a JSON representation of the
> DWARF or the PDB), then it becomes more tractable as the client code can be
> the same as the current local one with some refactoring.
>
>
>
> Fred
>
>
>
>
>
> So, for example, all of the efforts to merge LLDB and LLVM's DWARF parsing
> libraries could happen by first implementing inside of LLVM whatever
> functionality is missing, and then using that from within the server.  And
> yes, I would expect lldb to spin up a server, just as it does with
> lldb-server today if you try to debug something.  It finds the lldb-server
> binary and runs it.
>
>
>
> When I say "switching the default", what I mean is that if someday this
> hypothetical server supports everything that the current in-process parsing
> codepath supports, we could just delete that entire codepath and switch
> everything to the out of process server, even if that server were running
> on the same physical machine as the debugger client (which would be
> functionally equivalent to what we have today).
>
>
>
> (I obviously knew what you meant by "switching the default”, I was trying
> to ask about how… to which the answer is by spinning up a local server)
>
>
>
> Do you envision LLDB being able to talk to more than one server at the
> same time? It seems like this could be useful to debug a local build while
> still having access to debug symbols for your dependencies that have their
> symbols in a central repository.
>
>
>
> I hadn't really thought of this, but it certainly seems possible.  Since
> the API is stateless, it could send requests to any server it wanted, with
> some mechanism of selecting between them.
>
>
>


Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-27 Thread Zachary Turner via lldb-dev
On Tue, Feb 26, 2019 at 5:39 PM Frédéric Riss  wrote:

>
> On Feb 26, 2019, at 4:52 PM, Zachary Turner  wrote:
>
>
>
> On Tue, Feb 26, 2019 at 4:49 PM Frédéric Riss  wrote:
>
>>
>> On Feb 26, 2019, at 4:03 PM, Zachary Turner  wrote:
>>
>> I would probably build the server by using mostly code from LLVM.  Since
>> it would contain all of the low level debug info parsing libraries, i would
>> expect that all knowledge of debug info (at least, in the form that
>> compilers emit it in) could eventually be removed from LLDB entirely.
>>
>>
>> That’s quite an ambitious goal.
>>
>> I haven’t looked at the SymbolFile API, what do you expect the exchange
>> currency between the server and LLDB to be? Serialized compiler ASTs? If
>> that’s the case, it seems like you need a strong rev-lock between the
>> server and the client. Which in turn adds quite some complexity to the
>> rollout of new versions of the debugger.
>>
> Definitely not serialized ASTs, because you could be debugging some
> language other than C++.  Probably something more like JSON, where you
> parse the debug info and send back some JSON representation of the type /
> function / variable the user requested, which can almost be a direct
> mapping to LLDB's internal symbol hierarchy (e.g. the Function, Type, etc
> classes).  You'd still need to build the AST on the client
>
>
> This seems fairly easy for Function or symbols in general, as it’s easy to
> abstract their few properties, but as soon as you get to the type system, I
> get worried.
>
> Your representation needs to have the full expressivity of the underlying
> debug info format. Inventing something new in that space seems really
> expensive. For example, every piece of information we add to the debug info
> in the compiler would need to be handled in multiple places:
>  - the server code
>  - the client code that talks to the server
>  - the current “local" code (for a pretty long while)
> Not ideal. I wish there was a way to factor at least the last 2.
>
How often does this actually happen though?  The C++ type system hasn't
really undergone very many fundamental changes over the years.  I mocked up
a few samples of what some JSON descriptions would look like, and it didn't
seem terrible.  It certainly is some work -- there's no denying -- but I
think a lot of the "expressivity" of the underlying format is actually more
accurately described as "flexibility".  What I mean by this is that there
are both many different ways to express the same thing, as well as many
entities that can express different things depending on how they're used.
An intermediate format gives us a way to eliminate all of that flexibility
and instead offer consistency, which makes client code much simpler.  In a
way, this is a similar benefit to what one gets by compiling a source
language down to LLVM IR and then operating on the LLVM IR because you have
a much simpler grammar to deal with, along with more semantic restrictions
on what kind of descriptions you form with that grammar (to be clear: JSON
itself is not restrictive, but we can make our schema restrictive).
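For concreteness, a hedged mock-up of what one such restrictive schema entry for a struct type might look like; every field name here is invented, since the thread never fixes a schema:

```json
{
  "kind": "struct",
  "name": "foo",
  "size_bytes": 16,
  "members": [
    { "name": "value", "type": "int", "offset_bytes": 0 },
    { "name": "next", "type": { "kind": "pointer", "pointee": "foo" }, "offset_bytes": 8 }
  ]
}
```

The point of such a schema is that there is exactly one way to say "struct foo with these members", regardless of how the producer chose to lay the equivalent DIEs out in DWARF.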

For what it's worth, in an earlier message I mentioned that I would
probably build the server by using mostly code from LLVM, and making sure
that it supported the union of things currently supported by LLDB and
LLVM's DWARF parsers.  Doing that would naturally require merging the two
(which has been talked about for a long time) as a pre-requisite, and I
would expect that for testing purposes we might want something like
llvm-dwarfdump but that dumps a higher level description of the information
(if we change our DWARF emission code in LLVM for example, to output the
exact same type in slightly different ways in the underlying DWARF, we
wouldn't want our test to break, for example).  So for example imagine you
could run something like `lldb-dwarfdump -lookup-type=foo a.out` and it
would dump some description of the type that is resilient to insignificant
changes in the underlying DWARF.

At that point you're already 90% of the way towards what I'm proposing, and
it's useful independently.

>


Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-26 Thread Zachary Turner via lldb-dev
On Tue, Feb 26, 2019 at 4:49 PM Frédéric Riss  wrote:

>
> On Feb 26, 2019, at 4:03 PM, Zachary Turner  wrote:
>
> I would probably build the server by using mostly code from LLVM.  Since
> it would contain all of the low level debug info parsing libraries, i would
> expect that all knowledge of debug info (at least, in the form that
> compilers emit it in) could eventually be removed from LLDB entirely.
>
>
> That’s quite an ambitious goal.
>
> I haven’t looked at the SymbolFile API, what do you expect the exchange
> currency between the server and LLDB to be? Serialized compiler ASTs? If
> that’s the case, it seems like you need a strong rev-lock between the
> server and the client. Which in turn adds quite some complexity to the
> rollout of new versions of the debugger.
>
Definitely not serialized ASTs, because you could be debugging some
language other than C++.  Probably something more like JSON, where you
parse the debug info and send back some JSON representation of the type /
function / variable the user requested, which can almost be a direct
mapping to LLDB's internal symbol hierarchy (e.g. the Function, Type, etc
classes).  You'd still need to build the AST on the client


>
> So, for example, all of the efforts to merge LLDB and LLVM's DWARF parsing
> libraries could happen by first implementing inside of LLVM whatever
> functionality is missing, and then using that from within the server.  And
> yes, I would expect lldb to spin up a server, just as it does with
> lldb-server today if you try to debug something.  It finds the lldb-server
> binary and runs it.
>
> When I say "switching the default", what I mean is that if someday this
> hypothetical server supports everything that the current in-process parsing
> codepath supports, we could just delete that entire codepath and switch
> everything to the out of process server, even if that server were running
> on the same physical machine as the debugger client (which would be
> functionally equivalent to what we have today).
>
>
> (I obviously knew what you meant by "switching the default”, I was trying
> to ask about how… to which the answer is by spinning up a local server)
>
> Do you envision LLDB being able to talk to more than one server at the
> same time? It seems like this could be useful to debug a local build while
> still having access to debug symbols for your dependencies that have their
> symbols in a central repository.
>

I hadn't really thought of this, but it certainly seems possible.  Since
the API is stateless, it could send requests to any server it wanted, with
some mechanism of selecting between them.
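To illustrate the statelessness being described (all names invented, no wire format is settled in this thread): each request carries everything needed to answer it, and an id lets asynchronous responses be matched up even when they arrive out of order or come from different servers.

```json
{ "id": 42, "method": "lookupType", "params": { "module": "a.out", "name": "foo" } }
```

with a corresponding response:

```json
{ "id": 42, "result": { "kind": "struct", "name": "foo", "size_bytes": 16 } }
```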

>


Re: [lldb-dev] RFC: Moving debug info parsing out of process

2019-02-26 Thread Zachary Turner via lldb-dev
I would probably build the server by using mostly code from LLVM.  Since it
would contain all of the low level debug info parsing libraries, i would
expect that all knowledge of debug info (at least, in the form that
compilers emit it in) could eventually be removed from LLDB entirely.

So, for example, all of the efforts to merge LLDB and LLVM's DWARF parsing
libraries could happen by first implementing inside of LLVM whatever
functionality is missing, and then using that from within the server.  And
yes, I would expect lldb to spin up a server, just as it does with
lldb-server today if you try to debug something.  It finds the lldb-server
binary and runs it.

When I say "switching the default", what I mean is that if someday this
hypothetical server supports everything that the current in-process parsing
codepath supports, we could just delete that entire codepath and switch
everything to the out of process server, even if that server were running
on the same physical machine as the debugger client (which would be
functionally equivalent to what we have today).

On Tue, Feb 26, 2019 at 3:46 PM Frédéric Riss  wrote:

>
> On Feb 25, 2019, at 10:21 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Hi all,
>
> We've got some internal efforts in progress, and one of those would
> benefit from debug info parsing being out of process (independently of
> whether or not the rest of LLDB is out of process).
>
> There's a couple of advantages to this, which I'll enumerate here:
>
>- It improves one source of instability in LLDB which has been known
>to be problematic -- specifically, that debug info can be bad and handling
>this can often be difficult and bring down the entire debug session.  While
>other efforts have been made to address stability by moving things out of
>process, they have not been upstreamed, and even if they had I think we
>would still want this anyway, for reasons that follow.
>- It becomes theoretically possible to move debug info parsing not
>just to another process, but to another machine entirely.  In a broader
>sense, this decouples the physical debug info location (and for that
>matter, representation) from the debugger host.
>- It becomes testable as an independent component, because you can
>just send requests to it and dump the results and see if they make sense.
>Currently there is almost zero test coverage of this aspect of LLDB apart
>from what you can get after going through many levels of indirection via
>spinning up a full debug session and doing things that indirectly result in
>symbol queries.
>
> The big win here, at least from my point of view, is the second one.
> Traditional symbol servers operate by copying entire symbol files (DSYM,
> DWP, PDB) from some machine to the debugger host.  These can be very large
> -- we've seen 12+ GB in some cases -- which ranges from "slow bandwidth
> hog" to "complete non-starter" depending on the debugger host and network.
> In this kind of scenario, one could theoretically run the debug info
> process on the same NAS, cloud, or whatever as the symbol server.  Then,
> rather than copying over an entire symbol file, it responds only to the
> query you issued -- if you asked for a type, it just returns a packet
> describing the type you requested.
>
> The API itself would be stateless (so that you could make queries for
> multiple targets in any order) as well as asynchronous (so that responses
> might arrive out of order).  Blocking could be implemented in LLDB, but
> having the server be asynchronous means multiple clients could connect to
> the same server instance.  This raises interesting possibilities.  For
> example, one can imagine thousands of developers connecting to an internal
> symbol server on the network and being able to debug remote processes or
> core dumps over slow network connections or on machines with very little
> storage (e.g. chromebooks).
>
>
> On the LLDB side, all of this is hidden behind the SymbolFile interface,
> so most of LLDB doesn't have to change at all.   While this is in
> development, we could have SymbolFileRemote and keep the existing local
> codepath the default, until such time that it's robust and complete enough
> that we can switch the default.
>
> Thoughts?
>
>
> Interesting idea.
>
> Would you build the server using the pieces we have in the current
> SymbolFile implementations? What do you mean by “switching the default”? Do
> you expect LLDB to spin up a server if there’s none configured in the
> environment?
>
> Fred
>


[lldb-dev] RFC: Moving debug info parsing out of process

2019-02-25 Thread Zachary Turner via lldb-dev
Hi all,

We've got some internal efforts in progress, and one of those would benefit
from debug info parsing being out of process (independently of whether or
not the rest of LLDB is out of process).

There's a couple of advantages to this, which I'll enumerate here:

   - It improves one source of instability in LLDB which has been known to
   be problematic -- specifically, that debug info can be bad and handling
   this can often be difficult and bring down the entire debug session.  While
   other efforts have been made to address stability by moving things out of
   process, they have not been upstreamed, and even if they had I think we
   would still want this anyway, for reasons that follow.
   - It becomes theoretically possible to move debug info parsing not just
   to another process, but to another machine entirely.  In a broader sense,
   this decouples the physical debug info location (and for that matter,
   representation) from the debugger host.
   - It becomes testable as an independent component, because you can just
   send requests to it and dump the results and see if they make sense.
   Currently there is almost zero test coverage of this aspect of LLDB apart
   from what you can get after going through many levels of indirection via
   spinning up a full debug session and doing things that indirectly result in
   symbol queries.

The big win here, at least from my point of view, is the second one.
Traditional symbol servers operate by copying entire symbol files (DSYM,
DWP, PDB) from some machine to the debugger host.  These can be very large
-- we've seen 12+ GB in some cases -- which ranges from "slow bandwidth
hog" to "complete non-starter" depending on the debugger host and network.
In this kind of scenario, one could theoretically run the debug info
process on the same NAS, cloud, or whatever as the symbol server.  Then,
rather than copying over an entire symbol file, it responds only to the
query you issued -- if you asked for a type, it just returns a packet
describing the type you requested.

The API itself would be stateless (so that you could make queries for
multiple targets in any order) as well as asynchronous (so that responses
might arrive out of order).  Blocking could be implemented in LLDB, but
having the server be asynchronous means multiple clients could connect to
the same server instance.  This raises interesting possibilities.  For
example, one can imagine thousands of developers connecting to an internal
symbol server on the network and being able to debug remote processes or
core dumps over slow network connections or on machines with very little
storage (e.g. chromebooks).
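To make the "respond only to the query you issued" idea concrete, here is a toy sketch of a stateless request/response pair. The wire format, field names, and the use of JSON packets are all assumptions for illustration; no such protocol exists in LLDB today.

```python
import json

# Stand-in for the server-side debug info index (in reality this would be
# backed by a parsed DSYM/DWP/PDB, potentially many gigabytes in size).
INDEX = {
    ("a.out", "type", "Foo"): {
        "kind": "struct",
        "byte_size": 16,
        "members": [{"name": "x", "type": "int"},
                    {"name": "y", "type": "double"}],
    },
}

def handle_request(raw):
    # Each request names the module and the query, so the server keeps no
    # per-client state and requests can be served in any order.
    req = json.loads(raw)
    result = INDEX.get((req["module"], req["query"], req["name"]))
    return json.dumps({"id": req["id"], "found": result is not None,
                       "result": result})

reply = json.loads(handle_request(
    '{"id": 1, "module": "a.out", "query": "type", "name": "Foo"}'))
```

The response describes one type in a few hundred bytes, instead of shipping the entire multi-gigabyte symbol file to the debugger host.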


On the LLDB side, all of this is hidden behind the SymbolFile interface, so
most of LLDB doesn't have to change at all.   While this is in development,
we could have SymbolFileRemote and keep the existing local codepath the
default, until such time that it's robust and complete enough that we can
switch the default.

Thoughts?


Re: [lldb-dev] [RFC]The future of pexpect

2019-01-31 Thread Zachary Turner via lldb-dev
It's worth mentioning that pexpect is basically unusable on Windows, so
there's still that.

On Thu, Jan 31, 2019 at 11:40 AM Pavel Labath  wrote:

> On 31/01/2019 19:51, Zachary Turner wrote:
>  > FileCheck the ansi escape codes seems like one possibility.
>  >
>  > In general I think you don't actually need to test true interactivity,
>  > because the odds of there being a problem in the 2-3 lines of code that
>  > convert the keyboard press to something else in LLDB are very unlikely
>  > to be problematic, and the rest can be mocked.
>
>
> On 31/01/2019 20:02, Jim Ingham wrote:
> > All the traffic back and forth with the terminal happens in the
> IOHandlerEditLine.  We should be able to get our hands on the Debuggers
> IOHandler and feed characters directly to it, and read the results.  So we
> should be able to write this kind of test by driving the debugger to
> whatever state you need with SB API and then just run one command and get
> the output string directly from the IOHandler.  We should be able to then
> scan that output for color codes.  I don't think we need an external
> process inspection tool to do this sort of thing.
> >
>
>
> Libedit expect to work with a real terminal, so to test the code that
> interacts with libedit (and there's more than 3 lines of that), you'll
> need something that can create a pty, and read and write characters to
> it, regardless of whether you drive the test through FileCheck or SB API.
>
> "creating a pty, and reading and writing to it" is pretty much the
> definition of pexpect.
>
> I am not saying either of this approaches can't be made to work, but I
> am not sure who is going to do it. I fear that we are shooting ourselves
> in the foot banning pexpect and then pushing patches without tests
> because "it's hard".
>
> Just for fun, I tried to write a test to check the coloring of the
> prompt via pexpect. It was _literally_ three lines long:
>
> def test_colored_prompt_comes_out_right(self):
>  child = pexpect.spawn(lldbtest_config.lldbExec)
>  child.expect_exact("(lldb) \x1b[1G\x1b[2m(lldb) \x1b[22m\x1b[8G")
>
>
> BTW: I am not proposing we spend heroic efforts trying to port pexpect
> 2.4 to python3. But I would consider using a newer version of pexpect to
> write tests ***where it makes sense to do so***. At least until someone
> comes up with a better (and not vapourware) alternative...
>
> pl
>
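For reference, the core of what pexpect provides -- create a pty, write to it, read back what the program printed to its terminal -- can be sketched with just the Python standard library (POSIX only; an illustration of the mechanism, not a pexpect replacement):

```python
import os
import pty
import subprocess

def run_in_pty(argv):
    # Run a command with a pseudo-terminal as its stdio and capture output.
    master, slave = pty.openpty()
    proc = subprocess.Popen(argv, stdin=slave, stdout=slave, stderr=slave)
    os.close(slave)  # parent keeps only the master end
    chunks = []
    while True:
        try:
            data = os.read(master, 1024)
        except OSError:  # Linux raises EIO once the child closes the pty
            break
        if not data:
            break
        chunks.append(data)
    proc.wait()
    os.close(master)
    return b"".join(chunks)

output = run_in_pty(["echo", "hello"])  # note: a pty turns "\n" into "\r\n"
```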


Re: [lldb-dev] [RFC]The future of pexpect

2019-01-31 Thread Zachary Turner via lldb-dev
FileCheck the ansi escape codes seems like one possibility.

In general I think you don't actually need to test true interactivity,
because the odds of there being a problem in the 2-3 lines of code that
convert the keyboard press to something else in LLDB are very unlikely to
be problematic, and the rest can be mocked.



On Thu, Jan 31, 2019 at 10:42 AM Pavel Labath  wrote:

> On 31/01/2019 19:26, Zachary Turner wrote:
> > Was the test failing specifically in the keyboard handler for up arrow,
> > or was it failing in the command history searching code?  Because if
> > it's the latter, then we could have a command which searches the command
> > history.
> >
>
> The patch is r351313, if you want to look at it in detail. But, I don't
> think this one example matters too much, since we will always have some
> code which deals with the interactivity of the terminal. That will need
> to be tested somehow.
>
> Another example: we have a fairly complex piece of code that makes sure
> our (lldb) prompt comes out in color. How do we write a test for that?
>


Re: [lldb-dev] [RFC]The future of pexpect

2019-01-31 Thread Zachary Turner via lldb-dev
Was the test failing specifically in the keyboard handler for up arrow, or
was it failing in the command history searching code?  Because if it's the
latter, then we could have a command which searches the command history.

On Thu, Jan 31, 2019 at 10:23 AM Davide Italiano via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> On Thu, Jan 31, 2019 at 10:09 AM Pavel Labath  wrote:
> >
> > On 31/01/2019 02:32, Davide Italiano via lldb-dev wrote:
> > > As you probably know (I didn’t), lldb embeds its own version of
> > > `pexpect-2.4`, which doesn’t support python3.
> > > This is the (relatively short) list of tests relying on pexpect:
> > >
> > > testcases/tools/lldb-mi/syntax/TestMiSyntax.py:import pexpect
> > >  # 7 (EOF)
> > > testcases/tools/lldb-mi/lldbmi_testcase.py:import pexpect
> > > testcases/tools/lldb-mi/signal/TestMiSignal.py:import pexpect
> > > testcases/tools/lldb-mi/signal/TestMiSignal.py:import pexpect
> > > testcases/lldbtest.py:import pexpect
> > > testcases/driver/batch_mode/TestBatchMode.py:import pexpect
> > > testcases/driver/batch_mode/TestBatchMode.py:import pexpect
> > > testcases/driver/batch_mode/TestBatchMode.py:import pexpect
> > > testcases/driver/batch_mode/TestBatchMode.py:import pexpect
> > > testcases/lldbpexpect.py:import pexpect
> > > testcases/terminal/TestSTTYBeforeAndAfter.py:import pexpect
> > > testcases/darwin_log.py:import pexpect
> > > testcases/macosx/nslog/TestDarwinNSLogOutput.py:import pexpect
> > > testcases/benchmarks/stepping/TestSteppingSpeed.py:import
> pexpect
> > > testcases/benchmarks/frame_variable/TestFrameVariableResponse.py:
> > >import pexpect
> > >
> testcases/benchmarks/turnaround/TestCompileRunToBreakpointTurnaround.py:
> > > import pexpect
> > >
> testcases/benchmarks/turnaround/TestCompileRunToBreakpointTurnaround.py:
> > > import pexpect
> > > testcases/benchmarks/expression/TestExpressionCmd.py:import
> pexpect
> > > testcases/benchmarks/expression/TestRepeatedExprs.py:import
> pexpect
> > > testcases/benchmarks/expression/TestRepeatedExprs.py:import
> pexpect
> > > testcases/benchmarks/startup/TestStartupDelays.py:import
> pexpect
> > > testcases/functionalities/command_regex/TestCommandRegex.py:
> > > import pexpect
> > >
> testcases/functionalities/single-quote-in-filename-to-lldb/TestSingleQuoteInFilename.py:
> > > import pexpect
> > > testcases/functionalities/format/TestFormats.py:import pexpect
> > >
> > > (I count 14, but there might be something else).
> > >
> > > I audited all of them and from what I see they’re almost all testing
> the driver.
> > > I had a chat with my coworkers and we agreed it's reasonable to
> > > replace them with lit tests (as they're just running commands).
> > > This would allow us to get rid of an external dependency, which
> > > happened to be cause of trouble in the past.
> > >
> > > Are there any objections?
> > >
> > > Thanks,
> > >
> >
> > I'm not a fan of pexpect, and if these tests can be converted to lit,
> > then I'm all for it. But I do have a question.
> >
> > There is a class of tests that cannot be written in the current lit
> > framework, but they can with pexpect. A couple of weeks ago we had a
> > patch fixing a bug where pressing up arrow while searching through the
> > command history caused a crash. In the end a test for this was not
> > included because it was hard for a reason unrelated to pexpect, but
> > without pexpect (or something equivalent) writing a test for this would
> > be impossible.
> >
>
> I don't know about this, to be honest. Maybe lit should grow an
> interactive mode somehow to accommodate this functionality?
> I'm not an expert in how it's implemented so that could be hard to achieve.
> FWIW, I haven't seen anything that really requires interactivity, but
> I have to admit I haven't looked really deeply.
>
> > What's our story for testing interactive command-line functionalities?
> > The way I see it, if we don't use pexpect, we'll either have to use some
> > other tool which simulates a realistic terminal, or write our own. (We
> > already have one attempt for this in
> > unittests/Editline/EditlineTest.cpp, but this would need more work to be
> > fully functional.)
> >
> > pl
> >
> >
> > PS: Does anyone actually use the benchmark tests? Can we just delete
> them?
>
> I don't know. Maybe Jim knows. I personally don't use them.


Re: [lldb-dev] [RFC]The future of pexpect

2019-01-30 Thread Zachary Turner via lldb-dev
This would be great. All of these tests have always been disabled on
Windows so converting them to lit tests would increase test coverage there
as well.

On Wed, Jan 30, 2019 at 6:00 PM Alex Langford via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> +1
>
> Thanks for bringing this up. I'd like to see this happen!
>
> - Alex
>
> On 1/30/19, 5:33 PM, "lldb-dev on behalf of Davide Italiano via lldb-dev"
> 
> wrote:
>
> As you probably know (I didn’t), lldb embeds its own version of
> `pexpect-2.4`, which doesn’t support python3.
> This is the (relatively short) list of tests relying on pexpect:
>
> testcases/tools/lldb-mi/syntax/TestMiSyntax.py:import pexpect
> # 7 (EOF)
> testcases/tools/lldb-mi/lldbmi_testcase.py:import pexpect
> testcases/tools/lldb-mi/signal/TestMiSignal.py:import pexpect
> testcases/tools/lldb-mi/signal/TestMiSignal.py:import pexpect
> testcases/lldbtest.py:import pexpect
> testcases/driver/batch_mode/TestBatchMode.py:import pexpect
> testcases/driver/batch_mode/TestBatchMode.py:import pexpect
> testcases/driver/batch_mode/TestBatchMode.py:import pexpect
> testcases/driver/batch_mode/TestBatchMode.py:import pexpect
> testcases/lldbpexpect.py:import pexpect
> testcases/terminal/TestSTTYBeforeAndAfter.py:import pexpect
> testcases/darwin_log.py:import pexpect
> testcases/macosx/nslog/TestDarwinNSLogOutput.py:import pexpect
> testcases/benchmarks/stepping/TestSteppingSpeed.py:import
> pexpect
> testcases/benchmarks/frame_variable/TestFrameVariableResponse.py:
>   import pexpect
>
> testcases/benchmarks/turnaround/TestCompileRunToBreakpointTurnaround.py:
>import pexpect
>
> testcases/benchmarks/turnaround/TestCompileRunToBreakpointTurnaround.py:
>import pexpect
> testcases/benchmarks/expression/TestExpressionCmd.py:import
> pexpect
> testcases/benchmarks/expression/TestRepeatedExprs.py:import
> pexpect
> testcases/benchmarks/expression/TestRepeatedExprs.py:import
> pexpect
> testcases/benchmarks/startup/TestStartupDelays.py:import
> pexpect
> testcases/functionalities/command_regex/TestCommandRegex.py:
> import pexpect
>
> testcases/functionalities/single-quote-in-filename-to-lldb/TestSingleQuoteInFilename.py:
>import pexpect
> testcases/functionalities/format/TestFormats.py:import pexpect
>
> (I count 14, but there might be something else).
>
> I audited all of them and from what I see they’re almost all testing
> the driver.
> I had a chat with my coworkers and we agreed it's reasonable to
> replace them with lit tests (as they're just running commands).
> This would allow us to get rid of an external dependency, which
> happened to be cause of trouble in the past.
>
> Are there any objections?
>
> Thanks,
>
> --
> Davide


Re: [lldb-dev] LLDB not loading any debug information on windows

2019-01-16 Thread Zachary Turner via lldb-dev
Can you try clang-cl.exe /Z7 main.cpp instead of a clang.exe command line?

On Wed, Jan 16, 2019 at 10:56 PM Christoph Baumann via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hey,
>
> I wrote a simple hello-world program to test lldb on windows:
>
>
>
>   #include <stdio.h>
>
>   int main(int argc, char* argv[]){
>
> printf("hello world");
>
> return(0);
>
>   }
>
>
>
> I'm compiling with 'clang -g main.c -o main.exe', which produces the
> output files 'main.exe', 'main.pdb' and 'main.lnk'.
>
> When I now fire up lldb and create a new target with 'target create
> main.exe', lldb does not appear to load any debug information and
> source-level debugging is not available. Trying to load debug symbols with
> 'target symbols add main.pdb' results in '…does not match any existing
> module'.
>
>
>
> I'm using Windows 10 Pro and LLVM tools built from the latest source.
>
>
>
> I wonder if I'm missing something or if it's just lldb not working
> properly on Windows yet.
>
>


Re: [lldb-dev] LLDB bot health

2019-01-11 Thread Zachary Turner via lldb-dev
I own this one: lldb-x86-windows-msvc2015

It can be removed, especially now that Stella's is strictly better than
mine was even when it was working.

There will probably be an effort on our side to get the Linux bots up and
running again "soon", but I don't have an exact timeline right now.

On Fri, Jan 11, 2019 at 3:12 PM Davide Italiano 
wrote:

> On Fri, Jan 11, 2019 at 3:07 PM Stella Stamenova 
> wrote:
> >
> > Thanks Davide,
> >
> > I think several of these bots have not been maintained for a while. One
> thing we could do is try to ping the owners and see if it's possible to
> update the bots or if they're no longer useful, then remove them.
> >
>
> I agree. I don't know who owns these bots; is there an easy way to find
> out? (Or just cc them on this e-mail.)
> We can then ask Galina to just remove the bots if nobody maintains them.
>
> Thanks,
>
> --
> Davide
>


[lldb-dev] RFC: Simplifying SymbolFile interface

2019-01-09 Thread Zachary Turner via lldb-dev
The native PDB symbol file plugin is, I think, mostly complete.  It's at
least as good as the old Windows-only PDB plugin in 90% of ways, while
actually being significantly better in others (for example, a test that
took over 2 minutes to run with the Windows-only PDB plugin now takes
about 2 seconds with the native PDB plugin).

While implementing this, I ran into several things that made my life quite
difficult, and only later found out that I could have saved myself a lot of
headache and time if the SymbolFile interface had been a little simpler and
easier to understand.

Specifically, I'd like to remove the heavy use of SymbolContext in the
SymbolFile / SymbolVendor interface and replace it with narrower, more
targeted parameter lists.

Consider the case of someone calling FindTypes.  In theory, today they can
fill out any combination of Target, Module, Function, Block, CompileUnit,
LineEntry, and Symbol.  That's 2^7 different possible ways the function can
be called.  While obviously not all of these combinations make sense, the
fact is that it greatly increases the API surface, which is bad for test
coverage, bad for ease of understanding, bad for usability, and leads to a
lot of dead code.

For a person implementing this function for the first time, and who may not
know all the details about how the rest of LLDB works, this is quite
daunting because there's an inherent desire to implement the function
faithfully "just in case", since they don't know all of the different ways
the function might be called.

This results in wasted time on the developer's part, because they end up
implementing a bunch of functionality that is essentially dead code.

We can certainly document for every single function "The implementor should
be prepared to handle the case of fields X, Y, and Z being set, and handle
it in such and such way", but I think it's easier to just change the
interface to be more clear in the first place.


Here are the cases I identified, and a proposal for how I could change the
interface.

1) SymbolFile::ParseTypes(SymbolContext&)
  * In the entire codebase, this is only called with a CompileUnit set.  We
should change this to be ParseTypesForCompileUnit(CompileUnit&) so that the
interface is self-documenting.  A patch with this change is here [
https://reviews.llvm.org/D56462]

2) SymbolFile::ParseDeclsForContext(CompilerDeclContext)
  * This is intended to only be used for parsing variables in a block.  But
should it be recursive?  It's impossible to tell from the function name, so
callers can't use it correctly and implementors can't implement it
correctly.  I spent 4 days trying to implement a generic version of this
function for the NativePDB plugin only to find out that I only actually
cared about block variables.  I would propose changing this to
ParseVariableDeclsForBlock(Block&).

3) These functions:
 * ParseCompileUnitLanguage(SymbolContext&)
 * ParseCompileUnitFunctions(SymbolContext&)
 * ParseCompileUnitLineTable(SymbolContext&)
 * ParseCompileUnitDebugMacros(SymbolContext&)
 * ParseCompileUnitSupportFiles(SymbolContext&)

are only for CompileUnits (as the names imply).  I propose changing the
parameter from a SymbolContext& to a CompileUnit&.

4) SymbolFile::ParseFunctionBlocks(SymbolContext&)
 * This is intended to be used when the SymbolContext's m_function member is
set.  I propose changing this to SymbolFile::ParseFunctionBlocks(Function&).

5) SymbolFile::ParseVariablesForContext(CompilerDeclContext)
* This function is only called with the Context being a CompileUnit,
Function, and Block.  But does it need to be recursive?  For a Function and
Block it seems to be assumed to be recursive, and for a CompileUnit it
seems to be assumed to not be recursive.  For the former case, it's not
actually clear how this function differs from ParseGlobalVariables, and for
the latter case I would propose changing this to
ParseImmediateVariablesForBlock(Block&).

6) SymbolFile::FindTypes(SymbolContext&).
* This function is only called with the m_module field set, and since a
SymbolFile is already tied to a module anyway, the parameter appears
unnecessary.  I propose changing this to SymbolFile::FindAllTypes()

7) SymbolFile::FindNamespace(SymbolContext&, ConstString, DeclContext*) is
only called with default-constructed (i.e. null) SymbolContexts, making the
first parameter unnecessary.  I propose changing this to
FindNamespace(ConstString, DeclContext*)


8)   Module::FindTypes(SymbolContext &, ConstString, bool , size_t ,
DenseSet &, TypeList&):

* After the change in #6, we can propagate this change upwards for greater
benefit.  The first parameter in Module::FindTypes(SymbolContext&, ...) now
becomes unnecessary (and in fact, it was kind of unnecessary to begin with
since in every case, the SymbolContext actually just had a single member
set, which was equal to the this pointer of the Module from which this
function was called).  So I propose deleting this parameter.
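Taken together, the proposals describe an interface where every method names exactly the context it needs. A Python sketch of that shape (LLDB's interface is C++; the method names follow the proposals above, but the classes are illustrative stand-ins, with a trivial null implementation to show how little a conforming symbol file must do):

```python
from abc import ABC, abstractmethod

class SymbolFile(ABC):
    # Each method takes the one entity it operates on -- no catch-all
    # SymbolContext, so no dead "just in case" code paths to implement.

    @abstractmethod
    def parse_types_for_compile_unit(self, compile_unit): ...

    @abstractmethod
    def parse_variable_decls_for_block(self, block): ...

    @abstractmethod
    def parse_function_blocks(self, function): ...

    @abstractmethod
    def find_all_types(self): ...

    @abstractmethod
    def find_namespace(self, name, decl_context): ...

class NullSymbolFile(SymbolFile):
    # A minimal conforming implementation, e.g. for a stripped module
    # with no debug info at all.
    def parse_types_for_compile_unit(self, compile_unit): return []
    def parse_variable_decls_for_block(self, block): return []
    def parse_function_blocks(self, function): return []
    def find_all_types(self): return []
    def find_namespace(self, name, decl_context): return None
```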

Re: [lldb-dev] Unreliable process attach on Linux

2019-01-05 Thread Zachary Turner via lldb-dev
I'd be curious to see if the PID of the process that is failed to attach to
is the same as one of the PIDs of a process that was previously attached to
(and if so, if it is the first such case where a PID is recycled).

On Sat, Jan 5, 2019 at 4:42 AM Florian Weimer via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> * Jan Kratochvil:
>
> > On Fri, 04 Jan 2019 17:38:42 +0100, Florian Weimer via lldb-dev wrote:
> >> Run it in a loop like this:
> >>
> >> $ while ./test-attach ; do date; done
> >>
> >> On Linux x86-64 (Fedora 29), with LLDB 7 (lldb-7.0.0-1.fc29.x86_64) and
> >> kernel 4.19.12 (kernel-4.19.12-301.fc29.x86_64), after 100 iterations or
> >> so, attaching to the newly created process fails:
> >>
> >> test-attach: SBTarget::Attach failed: lost connection
> >
> > FYI after 3 runs it still runs fine with your reproducer both with
> system
> > lldb-devel-7.0.0-1.fc29.x86_64 and COPR
> > lldb-experimental-devel-8.0.0-0.20190102snap0.fc29.x86_64 (=trunk), part
> > running without /usr/lib/debug and part with.
>
> Well, that's odd.  Shall I try to reproduce this on a lab machine?
>
> > Fedora 29 x86_64 + kernel-4.19.10-300.fc29.x86_64
> >
> > (I haven't investigated the code why it could fail this way.)
>
> First, I want to get more logging data out of LLDB.  Maybe this will
> tell us where things go wrong.
>
> Thanks,
> Florian


Re: [lldb-dev] Signedness of scalars built from APInt(s)

2019-01-04 Thread Zachary Turner via lldb-dev
It seems like we have 3 uses of this constructor in LLDB.

IRInterpreter.cpp: Constructs a Scalar for an llvm::Constant.
IRForTarget.cpp:  Constructs a Scalar for an llvm::Constant.
ClangASTContext.cpp: bitcasts an APFloat to an APInt.

The first two we should just treat constants in LLVM IR as signed, so we
could construct an APSInt at the call-sites with signed=true.

The third seems like a bug, we should just have a Scalar constructor that
takes an APFloat directly.  We already have one that takes a float and it
just sets the `APFloat m_float` member variable, I don't know why we're
jumping through this bitcast hoop (which is probably wrong for negative
floats anyway).
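The bitcast problem is easy to demonstrate outside LLVM with plain IEEE-754: reinterpreting a float's bits as an integer yields the encoding, not the value, and for negative floats the two diverge completely. A Python sketch, where `struct` performs the same reinterpretation a bitcast does:

```python
import struct

def float_bits(value):
    # Reinterpret the IEEE-754 single-precision encoding of `value` as a
    # 32-bit unsigned integer -- the moral equivalent of the bitcast.
    return struct.unpack("<I", struct.pack("<f", value))[0]

bits = float_bits(-1.0)
# bits == 0xBF800000: sign bit plus exponent/mantissa fields, which is
# nothing like the two's-complement integer -1 (0xFFFFFFFF).
```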

On Fri, Jan 4, 2019 at 3:38 PM Zachary Turner  wrote:

> On Fri, Jan 4, 2019 at 3:23 PM Jonas Devlieghere 
> wrote:
>
>> On Fri, Jan 4, 2019 at 3:13 PM Zachary Turner  wrote:
>>
>>> I don't think #2 is a correct change.  Just because the sign bit is set
>>> doesn't mean it's signed.  Is the 4-byte value 0x80000000 signed or
>>> unsigned?  It's a trick question, because there's not enough information!
>>> If it was written "int x = 0x80000000" then it's signed (and negative).  If
>>> it was written "unsigned x = 0x80000000" then it's unsigned (and
>>> positive).  What about the 4-byte value 0x1?  Still a trick!  If it was
>>> written "int x = 1" then it's signed (and positive), and if it was written
>>> "unsigned x = 1" then it's unsigned (and positive).
>>>
>>> My point is that signedness of the *type* does not necessarily imply
>>> signedness of the value, and vice versa.
>>>
>>> APInt is purely a bit-representation and a size, there is no information
>>> whatsoever about whether the *type* is signed.  It doesn't make sense to
>>> say "is this APInt negative?" without additional information.
>>>
>>> With APSInt, on the other hand, it does make sense to ask that
>>> question.  If you have an APSInt where isSigned() is true, *then* you can
>>> use the sign bit to determine whether it's negative.  And if you have an
>>> APSInt where isSigned() is false, then the "sign bit" is not actually a
>>> sign bit at all, it is just an extra power of 2 for the unsigned value.
>>>
>>> This is my understanding of the classes, someone correct me if I'm wrong.
>>>
>>
>>> IIUC though, the way to fix this is by using APSInt throughout the
>>> class, and delete all references to APInt.
>>>
>>
>> I think we share the same understanding. If we know at every call site
>> whether the type is signed or not then I totally agree, we should only use
>> APSInt. The reason I propose doing (2) first is for the first scenario you
>> described, where you don't know. Turning it into an explicit APSInt is as
>> bad as using an APInt and looking at the value. The latter has the
>> advantage that it conveys that you don't know, while the other may or may
>> not be a lie.
>>
>
> Do we ever not know though?  And if so, then why don't we know whether the
> type is supposed to be signed or unsigned?  Because guessing is always
> going to be wrong sometimes.
>


Re: [lldb-dev] Signedness of scalars built from APInt(s)

2019-01-04 Thread Zachary Turner via lldb-dev
On Fri, Jan 4, 2019 at 3:23 PM Jonas Devlieghere 
wrote:

> On Fri, Jan 4, 2019 at 3:13 PM Zachary Turner  wrote:
>
>> I don't think #2 is a correct change.  Just because the sign bit is set
>> doesn't mean it's signed.  Is the 4-byte value 0x80000000 signed or
>> unsigned?  It's a trick question, because there's not enough information!
>> If it was written "int x = 0x80000000" then it's signed (and negative).  If
>> it was written "unsigned x = 0x80000000" then it's unsigned (and
>> positive).  What about the 4-byte value 0x1?  Still a trick!  If it was
>> written "int x = 1" then it's signed (and positive), and if it was written
>> "unsigned x = 1" then it's unsigned (and positive).
>>
>> My point is that signedness of the *type* does not necessarily imply
>> signedness of the value, and vice versa.
>>
>> APInt is purely a bit-representation and a size, there is no information
>> whatsoever about whether the *type* is signed.  It doesn't make sense to
>> say "is this APInt negative?" without additional information.
>>
>> With APSInt, on the other hand, it does make sense to ask that question.
>> If you have an APSInt where isSigned() is true, *then* you can use the sign
>> bit to determine whether it's negative.  And if you have an APSInt where
>> isSigned() is false, then the "sign bit" is not actually a sign bit at all,
>> it is just an extra power of 2 for the unsigned value.
>>
>> This is my understanding of the classes, someone correct me if I'm wrong.
>>
>
>> IIUC though, the way to fix this is by using APSInt throughout the class,
>> and delete all references to APInt.
>>
>
> I think we share the same understanding. If we know at every call site
> whether the type is signed or not then I totally agree, we should only use
> APSInt. The reason I propose doing (2) first is for the first scenario you
> described, where you don't know. Turning it into an explicit APSInt is as
> bad as using an APInt and looking at the value. The latter has the
> advantage that it conveys that you don't know, while the other may or may
> not be a lie.
>

Do we ever not know though?  And if so, then why don't we know whether the
type is supposed to be signed or unsigned?  Because guessing is always
going to be wrong sometimes.


Re: [lldb-dev] Signedness of scalars built from APInt(s)

2019-01-04 Thread Zachary Turner via lldb-dev
I don't think #2 is a correct change.  Just because the sign bit is set
doesn't mean it's signed.  Is the 4-byte value 0x80000000 signed or
unsigned?  It's a trick question, because there's not enough information!
If it was written "int x = 0x80000000" then it's signed (and negative).  If
it was written "unsigned x = 0x80000000" then it's unsigned (and
positive).  What about the 4-byte value 0x1?  Still a trick!  If it was
written "int x = 1" then it's signed (and positive), and if it was written
"unsigned x = 1" then it's unsigned (and positive).

My point is that signedness of the *type* does not necessarily imply
signedness of the value, and vice versa.

APInt is purely a bit-representation and a size, there is no information
whatsoever about whether the *type* is signed.  It doesn't make sense to
say "is this APInt negative?" without additional information.

With APSInt, on the other hand, it does make sense to ask that question.
If you have an APSInt where isSigned() is true, *then* you can use the sign
bit to determine whether it's negative.  And if you have an APSInt where
isSigned() is false, then the "sign bit" is not actually a sign bit at all,
it is just an extra power of 2 for the unsigned value.

This is my understanding of the classes, someone correct me if I'm wrong.

IIUC though, the way to fix this is by using APSInt throughout the class,
and delete all references to APInt.

On Fri, Jan 4, 2019 at 2:58 PM Jonas Devlieghere 
wrote:

> If I understand the situation correctly I think we should do both. I'd
> start by doing (2) to improve the current behavior and add a constructor
> for APSInt. We can then audit the call sites and migrate to APSInt where
> it's obvious that the type is signed. That should match the semantics of
> both classes?
>
> On Fri, Jan 4, 2019 at 2:00 PM Davide Italiano 
> wrote:
>
>> On Fri, Jan 4, 2019 at 1:57 PM Davide Italiano 
>> wrote:
>> >
>> > While adding support for 512-bit integers in `Scalar`, I figured I
>> > could add some coverage.
>> >
>> > TEST(ScalarTest, Signedness) {
>> >  auto s1 = Scalar(APInt(32, 12, false /* isSigned */));
>> >  auto s2 = Scalar(APInt(32, 12, true /* isSigned */ ));
>> >  ASSERT_EQ(s1.GetType(), Scalar::e_uint); // fails
>> >  ASSERT_EQ(s2.GetType(), Scalar::e_sint); // pass
>> > }
>> >
>> > The result of `s1.GetType()` is Scalar::e_sint.
>> > This is because an APInt can't distinguish between "int patatino = 12"
>> > and "uint patatino = 12".
>> > The correct class in `llvm` to do that is `APSInt`.
>> >
>>
>> Please note that this is also broken in the case where you have
>> APInt(32 /* bitWidth */, -323);
>> because of the way the constructor is implemented.
>>
>> --
>> Davide
>>
>


Re: [lldb-dev] When should ArchSpecs match?

2018-12-07 Thread Zachary Turner via lldb-dev
“Unknown” is a perfectly fine value for the os though, and I’m not
suggesting to change that.

My point is simply that Jason’s situation (baremetal) is one that is not
even expressible by the Triple syntax. As long as there’s some enum value
that describes the situation (of which unknown is a valid choice), the
problem goes away.
On Fri, Dec 7, 2018 at 8:06 AM  wrote:

> We use 2 triples for Hexagon:
>
> hexagon-unknown-elf (which becomes hexagon-unknown-unknown-elf
> internally), and hexagon-unknown-linux.
>
>
>
> We follow the Linux standard and add in magic to the elf to identify it as
> a Linux binary. But in the hexagon-unknown-elf case we have no way to
> distinguish between standalone (no OS, running on our simulator) or QuRT
> (proprietary OS, could be running on hardware or simulator). In fact, the
> same shared library that has no OS calls (just standard library calls that
> go into the appropriate .so) could run under either one.
>
>
>
> I think requiring a value for every OS would be a non-starter for us.
>
>
>
> --
>
> Ted Woodward
>
> Qualcomm Innovation Center, Inc.
>
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux
> Foundation Collaborative Project
>
>
>
> *From:* lldb-dev  *On Behalf Of *Zachary
> Turner via lldb-dev
> *Sent:* Friday, December 7, 2018 4:38 AM
> *To:* Pavel Labath 
> *Cc:* LLDB 
> *Subject:* Re: [lldb-dev] When should ArchSpecs match?
>
>
>
> We can already say that with OSType::Unknown. That’s different than “I
> know that no OS exists”
>
> On Fri, Dec 7, 2018 at 12:00 AM Pavel Labath  wrote:
>
> On 07/12/2018 01:22, Jason Molenda via lldb-dev wrote:
> > Oh sorry I missed that.  Yes, I think a value added to the OSType for
> NoOS or something would work.  We need to standardize on a textual
> representation for this in a triple string as well, like 'none'.  Then with
> arm64-- and arm64-*-* as UnknownVendor + UnknownOS we can have these marked
> as "compatible" with any other value in the case Adrian is looking at.
> >
> >
>
> Sounds good to me.
>
> As another data point, it is usually impossible to tell from looking at
> an ELF file which os it is intended to run on. You can tell the
> architecture because it's right in the elf header, but that's about it.
> Some OSs get around this by adding a special section like
> .this.is.an.android.binary, but not all of them. So in general, we need
> to be able to say "I have no idea which OS is this binary intended for".
>
> pl
>
>


Re: [lldb-dev] When should ArchSpecs match?

2018-12-07 Thread Zachary Turner via lldb-dev
We can already say that with OSType::Unknown. That’s different than “I know
that no OS exists”
On Fri, Dec 7, 2018 at 12:00 AM Pavel Labath  wrote:

> On 07/12/2018 01:22, Jason Molenda via lldb-dev wrote:
> > Oh sorry I missed that.  Yes, I think a value added to the OSType for
> NoOS or something would work.  We need to standardize on a textual
> representation for this in a triple string as well, like 'none'.  Then with
> arm64-- and arm64-*-* as UnknownVendor + UnknownOS we can have these marked
> as "compatible" with any other value in the case Adrian is looking at.
> >
> >
>
> Sounds good to me.
>
> As another data point, it is usually impossible to tell from looking at
> an ELF file which os it is intended to run on. You can tell the
> architecture because it's right in the elf header, but that's about it.
> Some OSs get around this by adding a special section like
> .this.is.an.android.binary, but not all of them. So in general, we need
> to be able to say "I have no idea which OS is this binary intended for".
>
> pl
>


Re: [lldb-dev] When should ArchSpecs match?

2018-12-06 Thread Zachary Turner via lldb-dev
That's what I mean though, perhaps we could add a value to the OSType
enumeration like BareMetal or None to explicitly represent this.  the
SubArchType enum has NoSubArch, so it's not without precedent.  As long as
you can express it in the triple format, the problem goes away.

On Thu, Dec 6, 2018 at 3:55 PM Jason Molenda  wrote:

> There is genuinely no OS in some cases, like people who debug the software
> that runs in a keyboard or a mouse.  And to higher-level coprocessors in a
> modern phones; the SOCs on all these devices have a cluster of processors,
> and only some of them are running an identifiable operating system, like
> iOS or Android.
>
> I'll be honest, it's not often that we'll be debugging an arm64-apple-none
> target and have to decide whether an arm64-apple-ios binary should be
> loaded or not.  But we need some way to express this kind of environment.
>
>
> > On Dec 6, 2018, at 3:50 PM, Zachary Turner  wrote:
> >
> > Is there some reason we can’t define vendors, environments, arches, and
> oses for all supported use cases? That way “there is no os” would not ever
> be a thing.
> > On Thu, Dec 6, 2018 at 3:37 PM Jason Molenda via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > I think the confusing thing is when "unspecified" means "there is no OS"
> or "there is no vendor" versus "vendor/OS is unspecified".
> >
> > Imagine debugging a firmware environment where we have a cpu arch, and
> we may have a vendor, but we specifically do not have an OS.  Say
> armv7-apple-none (I make up "none", I don't think that's valid).  If lldb
> is looking for a binary and it finds one with armv7-apple-ios, it should
> reject that binary, they are incompatible.
> >
> > As opposed to a triple of "armv7-*-*" saying "I know this is an armv7
> system target, but I don't know anything about the vendor or the OS" in
> which case an armv7-apple-ios binary is compatible.
> >
> > My naive reading of "arm64-*-*" means vendor & OS are unspecified and
> should match anything.
> >
> > My naive reading of "arm64" is that it is the same as "arm64-*-*".
> >
> > I don't know what a triple string looks like where we specify "none" for
> a field.  Is it armv7-apple-- ?  I know Triple has Unknown enums, but
> "Unknown" is ambiguous between "I don't know it yet" versus "It not any
> Vendor/OS".
> >
> > Some of the confusion is the textual representation of the triples, some
> of it is the llvm Triple class not having a way to express (afaik) "do not
> match this field against anything" aka "none".
> >
> >
> >
> > > On Dec 6, 2018, at 3:19 PM, Adrian Prantl via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > >
> > > I was puzzled by the behavior of ArchSpec::IsExactMatch() and
> IsCompatibleMatch() yesterday, so I created a couple of unit tests to
> document the current behavior. Most of the tests make perfect sense, but a
> few edge cases really don't behave like I would have expected them to.
> > >
> > >>  {
> > >>ArchSpec A("arm64-*-*");
> > >>ArchSpec B("arm64-apple-ios");
> > >>ASSERT_FALSE(A.IsExactMatch(B));
> > >>// FIXME: This looks unintuitive and we should investigate whether
> > >>// this is the desired behavior.
> > >>ASSERT_FALSE(A.IsCompatibleMatch(B));
> > >>  }
> > >>  {
> > >>ArchSpec A("x86_64-*-*");
> > >>ArchSpec B("x86_64-apple-ios-simulator");
> > >>ASSERT_FALSE(A.IsExactMatch(B));
> > >>// FIXME: See above, though the extra environment complicates
> things.
> > >>ASSERT_FALSE(A.IsCompatibleMatch(B));
> > >>  }
> > >>  {
> > >>ArchSpec A("x86_64");
> > >>ArchSpec B("x86_64-apple-macosx10.14");
> > >>// FIXME: The exact match also looks unintuitive.
> > >>ASSERT_TRUE(A.IsExactMatch(B));
> > >>ASSERT_TRUE(A.IsCompatibleMatch(B));
> > >>  }
> > >>
> > >
> > > Particularly, I believe that:
> > > - ArchSpec("x86_64-*-*") and ArchSpec("x86_64") should behave the same.
> > > - ArchSpec("x86_64").IsExactMatch("x86_64-apple-macosx10.14") should
> be false.
> > > - ArchSpec("x86_64-*-*").IsCompatibleMatch("x86_64-apple-macosx")
> should be true.
> > >
> > > Does anyone disagree with any of these statements?
> > >
> > > I fully understand that changing any of these behaviors will
> undoubtedly break one or the other edge case, but I think it would be
> important to build on a foundation that actually makes sense if we want to
> be able to reason about the architecture matching logic at all.
> > >
> > > let me know what you think!
> > > -- adrian
> > > ___
> > > lldb-dev mailing list
> > > lldb-dev@lists.llvm.org
> > > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> >
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>


Re: [lldb-dev] When should ArchSpecs match?

2018-12-06 Thread Zachary Turner via lldb-dev
Is there some reason we can’t define vendors, environments, arches, and
oses for all supported use cases? That way “there is no os” would not ever
be a thing.
On Thu, Dec 6, 2018 at 3:37 PM Jason Molenda via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I think the confusing thing is when "unspecified" means "there is no OS"
> or "there is no vendor" versus "vendor/OS is unspecified".
>
> Imagine debugging a firmware environment where we have a cpu arch, and we
> may have a vendor, but we specifically do not have an OS.  Say
> armv7-apple-none (I make up "none", I don't think that's valid).  If lldb
> is looking for a binary and it finds one with armv7-apple-ios, it should
> reject that binary, they are incompatible.
>
> As opposed to a triple of "armv7-*-*" saying "I know this is an armv7
> system target, but I don't know anything about the vendor or the OS" in
> which case an armv7-apple-ios binary is compatible.
>
> My naive reading of "arm64-*-*" means vendor & OS are unspecified and
> should match anything.
>
> My naive reading of "arm64" is that it is the same as "arm64-*-*".
>
> I don't know what a triple string looks like where we specify "none" for a
> field.  Is it armv7-apple-- ?  I know Triple has Unknown enums, but
> "Unknown" is ambiguous between "I don't know it yet" versus "It not any
> Vendor/OS".
>
> Some of the confusion is the textual representation of the triples, some
> of it is the llvm Triple class not having a way to express (afaik) "do not
> match this field against anything" aka "none".
>
>
>
> > On Dec 6, 2018, at 3:19 PM, Adrian Prantl via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > I was puzzled by the behavior of ArchSpec::IsExactMatch() and
> IsCompatibleMatch() yesterday, so I created a couple of unit tests to
> document the current behavior. Most of the tests make perfect sense, but a
> few edge cases really don't behave like I would have expected them to.
> >
> >>  {
> >>ArchSpec A("arm64-*-*");
> >>ArchSpec B("arm64-apple-ios");
> >>ASSERT_FALSE(A.IsExactMatch(B));
> >>// FIXME: This looks unintuitive and we should investigate whether
> >>// this is the desired behavior.
> >>ASSERT_FALSE(A.IsCompatibleMatch(B));
> >>  }
> >>  {
> >>ArchSpec A("x86_64-*-*");
> >>ArchSpec B("x86_64-apple-ios-simulator");
> >>ASSERT_FALSE(A.IsExactMatch(B));
> >>// FIXME: See above, though the extra environment complicates things.
> >>ASSERT_FALSE(A.IsCompatibleMatch(B));
> >>  }
> >>  {
> >>ArchSpec A("x86_64");
> >>ArchSpec B("x86_64-apple-macosx10.14");
> >>// FIXME: The exact match also looks unintuitive.
> >>ASSERT_TRUE(A.IsExactMatch(B));
> >>ASSERT_TRUE(A.IsCompatibleMatch(B));
> >>  }
> >>
> >
> > Particularly, I believe that:
> > - ArchSpec("x86_64-*-*") and ArchSpec("x86_64") should behave the same.
> > - ArchSpec("x86_64").IsExactMatch("x86_64-apple-macosx10.14") should be
> false.
> > - ArchSpec("x86_64-*-*").IsCompatibleMatch("x86_64-apple-macosx") should
> be true.
> >
> > Does anyone disagree with any of these statements?
> >
> > I fully understand that changing any of these behaviors will undoubtedly
> break one or the other edge case, but I think it would be important to
> build on a foundation that actually makes sense if we want to be able to
> reason about the architecture matching logic at all.
> >
> > let me know what you think!
> > -- adrian
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>


Re: [lldb-dev] The lit test driver gets killed because of SIGHUP

2018-12-04 Thread Zachary Turner via lldb-dev
Do you know if it’s Darwin specific? If so, maybe someone internally can
offer guidance on how to diagnose (like on the kernel team)?

When you aren’t using the lit driver, does the signal still get delivered
(and we just handle it better), or does it not get delivered at all?
On Tue, Dec 4, 2018 at 9:12 PM Jonas Devlieghere 
wrote:

>
>
> On Tue, Dec 4, 2018 at 19:11 Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Have you tried an strace to see if it tells you who is sending the signal?
>
>
> I used DTrace with the default kill.d script. It shows who sends what
> signal and there was nothing interesting other than debugserver sending
> signal 17 (SIGSTOP) to the inferior. This makes me think that the signal
> might be coming from the kernel?
>
>
>> On Tue, Dec 4, 2018 at 6:49 PM Jonas Devlieghere via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hi everyone,
>>>
>>> Since we switched to lit as the test driver we've been seeing it getting
>>> killed as the result of a SIGHUP signal. The problem doesn't reproduce on
>>> every machine and there seems to be a correlation between number of
>>> occurrences and thread count.
>>>
>>> Davide and Raphael spent some time narrowing down what particular test
>>> is causing this and it seems that TestChangeProcessGroup.py is always
>>> involved. However it never reproduces when running just this test. I was
>>> able to reproduce pretty consistently with the following filter:
>>>
>>> ./bin/llvm-lit ../llvm/tools/lldb/lit/Suite/ --filter="process"
>>>
>>> Bisecting the test itself didn't help much, the problem reproduces as
>>> soon as we attach to the inferior.
>>>
>>> At this point it is still not clear who is sending the SIGHUP and why
>>> it's reaching the lit test driver. Fred suggested that it might have
>>> something to do with process groups (which would be an interesting
>>> coincidence given the previously mentioned test) and he suggested having
>>> the test run in different process groups. Indeed, adding a call to
>>> os.setpgrp() in lit's executeCommand and having a different process group
>>> per test prevent us from seeing this. Regardless of this issue I think it's
>>> reasonable to have tests run in their process group, so if nobody objects I
>>> propose adding this to lit in llvm.
>>>
>>> Still, I'd like to understand where the signal is coming from and fix
>>> the root cause in addition to the symptom. Maybe someone here has an idea
>>> of what might be going on?
>>>
>>> Thanks,
>>> Jonas
>>>
>>> PS
>>>
>>> 1. There's two places where we send a SIGHUP ourself, with that code
>>> removed we still receive the signal, which suggests that it might be coming
>>> from Python or the OS.
>>> 2. If you're able to reproduce you'll see that adding an early return
>>> before the attach in TestChangeProcessGroup.py hides/prevents the problem.
>>> Moving the return down one line and it pops up again.
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
> --
> Sent from my iPhone
>


Re: [lldb-dev] The lit test driver gets killed because of SIGHUP

2018-12-04 Thread Zachary Turner via lldb-dev
Have you tried an strace to see if it tells you who is sending the signal?
On Tue, Dec 4, 2018 at 6:49 PM Jonas Devlieghere via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi everyone,
>
> Since we switched to lit as the test driver we've been seeing it getting
> killed as the result of a SIGHUP signal. The problem doesn't reproduce on
> every machine and there seems to be a correlation between number of
> occurrences and thread count.
>
> Davide and Raphael spent some time narrowing down what particular test is
> causing this and it seems that TestChangeProcessGroup.py is always
> involved. However it never reproduces when running just this test. I was
> able to reproduce pretty consistently with the following filter:
>
> ./bin/llvm-lit ../llvm/tools/lldb/lit/Suite/ --filter="process"
>
> Bisecting the test itself didn't help much, the problem reproduces as soon
> as we attach to the inferior.
>
> At this point it is still not clear who is sending the SIGHUP and why it's
> reaching the lit test driver. Fred suggested that it might have something
> to do with process groups (which would be an interesting coincidence given
> the previously mentioned test) and he suggested having the test run in
> different process groups. Indeed, adding a call to os.setpgrp() in lit's
> executeCommand and having a different process group per test prevent us
> from seeing this. Regardless of this issue I think it's reasonable to have
> tests run in their process group, so if nobody objects I propose adding
> this to lit in llvm.
>
> Still, I'd like to understand where the signal is coming from and fix the
> root cause in addition to the symptom. Maybe someone here has an idea of
> what might be going on?
>
> Thanks,
> Jonas
>
> PS
>
> 1. There's two places where we send a SIGHUP ourself, with that code
> removed we still receive the signal, which suggests that it might be coming
> from Python or the OS.
> 2. If you're able to reproduce you'll see that adding an early return
> before the attach in TestChangeProcessGroup.py hides/prevents the problem.
> Moving the return down one line and it pops up again.
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>


Re: [lldb-dev] Debugging Python scripts (backtraces, variables) with LLDB

2018-11-20 Thread Zachary Turner via lldb-dev
On Tue, Nov 20, 2018 at 8:51 AM Alexandru Croitor via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

>
> I would appreciate, if someone could point me to some relevant code that
> does something similar to what I'm asking, so I could use it as a base
> point for exploration.
>
> Many thanks.


Not sure how much it will help you, but on Windows if you're using MS
Visual Studio, their debugger does this.  You can seamlessly step between
managed and native code and see Python callstacks interspersed with  native
callstacks.  It's all open source but it's quite a lot of code to dig
through, and unless you have a Windows machine, you won't be able to play
around with it anyway.

https://github.com/Microsoft/PTVS/tree/master/Python/Product


[lldb-dev] Best way to support multiple compilers from lit tests

2018-11-18 Thread Zachary Turner via lldb-dev
One of the issues we've faced a couple of times (and will continue to face)
is that lit is generally built around substitutions and command lines, but
different compilers will generally have different substitutions, and worse
-- different command line syntaxes.

The most recent example of this is the stop-hooks test that was added which
uses the %cc and %cxx substitutions.  This will work fine with GCC or
clang, but not with clang-cl, where we need an entirely different command
line.

If we're going to grow this test suite, I think we're going to need a
solution to this.

The main issue I want to address here is that we need a way to abstract
over the difference between command line syntaxes.  gcc and clang are
pretty similar, but they can occasionally differ in minor ways.  But
clang-cl will always be different.

One idea I've had here is to extend lit with the ability to create our own
custom prefix commands (similar to RUN lines), but where we can provide lit
with the prefix and a function to call to execute it.  The thinking being
that we can add something like a COMPILE and LINK command, which you could
invoke like this:

// COMPILE: source=%p/Inputs/foo.cpp \
// COMPILE:   opt=none \
// COMPILE:   compiler=default \
// COMPILE:   out=%t.obj \
// COMPILE:   link=no
// LINK: obj=%t.obj \
// LINK:   linker=default \
// LINK:   nodefaultlib \
// LINK:   entry=main \
// LINK:   out=%t.exe

Here "default" means whatever lit decides, which would usually depend on
how you configured your CMake.  But for some tests you could specify an
explicit compiler, for example, you could say compiler=gcc or
compiler=msvc, and the test would fail if those compilers are not
configured.

This is actually very similar in spirit to how dotest.py's "builders" work,
but extended to lit.

If we go this route, the first step would be to extend lit with a general
notion of pluggable commands, independently of LLDB.

After that, we could implement the COMPILE and LINK commands in LLDB,
perhaps even reusing much of builder.py from the dotest sutite.


Note that I'm not attempting to address the idea of running the test suite
with different compilers in the same run.  I have further ideas for how to
address that, but I don't think we need to do that right now as it's not an
immediate need.


Does anyone have any thoughts on this?


[lldb-dev] Problems `target variable` command.

2018-11-09 Thread Zachary Turner via lldb-dev
I tried to run this command:

(lldb) target variable "std::numeric_limits::max_exponent"

In Variable.cpp:386 we run a regex against the input string which results
in the above string being cut down to just `std::numeric_limits`.  So then
I search my debug info and don't find anything.

What I need is for this string to be passed precisely as is to
SymbolFile::FindGlobalVariables.

My question is: is this just a limitation of `target variable` that is by
design, or can this be fixed?

Note that I think even C++14 variable templates are broken because of this,
so if someone writes:

template <typename T>
constexpr T Pi = T(3.1415926535897932385L);

And inside of LLDB they write `target variable Pi<long double>` it won't work.

I think the DWARF and PDB are fundamentally different here in that in DWARF
you've got a DW_TAG_variable whose name is Pi and it will have
DW_TAG_template_type_parameter of type long double.  However, in PDB the
only way to find this is by actually searching for the string Pi (or
whatever instantiation you want).  If you just search for Pi it will be
impossible to find.

So, I think there are two problems to fix:

1) We need to search for the exact thing the user types, and if that fails,
then try using the Regex expression.  That will fix the problem for PDB.

2) It doesn't even work for DWARF currently because the regex filter throws
away the <long double>.  So it does find all of the DW_TAG_variables whose name is
Pi, but then it tries to evaluate <long double> as a sub-expression, which
obviously isn't correct.

Thoughts?


Re: [lldb-dev] [llvm-dev] [cfe-dev] [Call for Volunteers] Bug triaging

2018-11-09 Thread Zachary Turner via lldb-dev
I had considered a libraries/Backends:Other as well that would be separate
from libraries/Other

On Fri, Nov 9, 2018 at 11:20 AM Derek Schuff  wrote:

> I wonder if backends are a special case to the heuristic of "let's not
> make a bug component for code components that are too small".  LLVM is
> factored to cleanly separate backend code, to the point where it's the one
> thing you can leave out at compile time; this can disincentivize people to
> care about bugs in backends that they don't use (and conversely backends
> seem like the most common/best supported out-of-tree use case). There's
> obviously a lot of variance in how actively-developed the backends are and
> how many people care about them, but it seems like if we care enough to
> have the code in-tree then maybe we care enough to have a bug component too.
>
> On Fri, Nov 9, 2018 at 10:45 AM Kristof Beyls via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
>
>> Hi Zach,
>>
>> Thanks for elaborating.
>> I like your proposal. I agree it still groups per area of expertise. And
>> it makes the set of components we have easier to manage.
>> Before making changes though I hope to hear opinions from others on this.
>> What do others think?
>>
>> Thanks,
>>
>> Kristof
>>
>>
>> On 9 Nov 2018, at 18:05, Zachary Turner  wrote:
>>
>> To elaborate, I didn't mean to group all components with less than 10
>> bugs into one massive component.  Rather, to do it separately for each
>> subcomponent.  Grouping by expertise is fine, but I would argue that a
>> component with 2 or 3 bugs filed per year is not a very useful component.
>> There has to be some kind of bar for having a component otherwise you end
>> up in the situation we have now.
>>
>> If you apply this algorithm to the existing set of components, you end up
>> with something like this:
>>
>> Clang:
>> * New Bugs
>> * C++
>> * Frontend
>> * Formatter
>> * LLVM Codegen
>> * Static Analyzer
>> * Driver
>> * Modules
>> * libclang
>> * Other
>>
>> clang-tools
>> * clang-tidy
>> * Other
>>
>> compiler-rt
>> * All Bugs
>>
>> Documentation
>> * All Bugs
>>
>> libc++
>> * All Bugs
>>
>> libraries
>> * Backend:X86
>> * Scalar Optimizations
>> * Common Code Generator Code

Re: [lldb-dev] [cfe-dev] [Call for Volunteers] Bug triaging

2018-11-09 Thread Zachary Turner via lldb-dev
To elaborate, I didn't mean to group all components with less than 10 bugs
into one massive component.  Rather, to do it separately for each
subcomponent.  Grouping by expertise is fine, but I would argue that a
component with 2 or 3 bugs filed per year is not a very useful component.
There has to be some kind of bar for having a component; otherwise you end
up in the situation we have now.

If you apply this algorithm to the existing set of components, you end up
with something like this:

Clang:
* New Bugs
* C++
* Frontend
* Formatter
* LLVM Codegen
* Static Analyzer
* Driver
* Modules
* libclang
* Other

clang-tools
* clang-tidy
* Other

compiler-rt
* All Bugs

Documentation
* All Bugs

libc++
* All Bugs

libraries
* Backend:X86
* Scalar Optimizations
* Common Code Generator Code
* Backend:AMDGPU
* Loop Optimizer
* Backend:WebAssembly
* Backend:ARM
* DebugInfo
* Backend:AArch64
* MC
* GlobalISel
* Core LLVM classes
* Global Analyses
* Interprocedural Optimizations
* Support Libraries
* Backend:PowerPC
* Linker
* Transformation Utilities
* Other

lld
* ELF
* COFF
* Other

lldb
* All Bugs

LNT
* All Bugs

new-bugs
* All Bugs

OpenMP
* Clang Compiler Support
* Runtime Support

Packaging
* All Bugs

Phabricator
* All Bugs

Polly
* All Bugs

Runtime Libraries
* libprofile

Test Suite
* All Bugs

tools
* All Bugs

Website
* All Bugs

XRay
* All Bugs

I don't think it's helpful to have what essentially amounts to lots of dead
components, because it causes confusion for bug reporters as well as
triagers.  I also don't think the above split is radically different from
what is already there, and for the most part, it still *is* organized by
expertise.  It also means you need to find fewer volunteers to add
themselves to the cc list for various components.  Instead of needing to
find a separate volunteer for Hexagon, MSP430, PTX, RISC-V, Sparc, Bitcode
Writer, and MCJIT, each of which has only 1 bug (so in each case
you're looking for a needle in a haystack to find the right person and get
them to volunteer), you only need to find 1 for all of them, and there's a
good chance that person will be at least somewhat familiar with backends in
general and so know who the right person to talk to is in each case.

Anyway, just my thoughts.

On Fri, Nov 9, 2018 at 12:19 AM Kristof Beyls  wrote:

> Hi Zach,
>
> Thanks for putting the data in a spreadsheet - that’s easier to navigate.
>
> And thanks for re-raising the question whether we have the right
> components in bugzilla.
> As I think this could be an area for lots of different opinions, without
> any near-perfect solution, it has the potential to be a discussion that
> drags on for a long time.
> I thought half of all bugs not getting triaged was a serious enough
> problem to try and tackle first (with this mail thread) before aiming to
> improve the component breakdown in bugzilla.
> I think that setting default-cc lists on the components we have currently
> is largely orthogonal to reducing/merging components, as we can always
> merge default-cc lists when we merge components.
>
>
> On actually coming up with a refined list of components: I think we’ll
> need to define/agree first on what guiding principles we follow when
> deciding something is worthwhile to be a separate component.
> Over the past few weeks I’ve heard a number of different options, ranging
> over:
>
>
>- Just make a component for every sub-directory in the source code.
>- Just make a component for every library that gets build in the LLVM
>build.
>- Make components so that each component has a significant enough
>number of issues raised against it (I’m trying to paraphrase what you’re
>proposing below).
>
>
> In my mind, the guiding principle should be:
>
>- Components should reflect an area of expertise, so that each
>component can have a set of recognised people that can triage and/or fix
>bugs against that component.
>
>
> If we’d follow that principle, I think we should not merge all components
> with less than 10 bugs reported into an “Other” component.
> I do agree that some merging could still probably be done. E.g. maybe all
> the “clang/C++11”, “clang/C++14”, “clang/C++17”, “clang/C++2a” could be
> merged into a single component.
>
> So in summary:
>
>- I don’t think we need to delay assigning
>volunteers-for-triaging/default-cc lists to components. If we merge
>components later on, we can merge cc lists, or ask the volunteers for the
>relevant components if they want to remain on the default-cc list for the
>merged component.
>- My opinion is that we should define components based on areas of
>expertise.
>
>
> Thanks,
>
> Kristof
>
> On 8 Nov 2018, at 20:39, Zachary Turner  wrote:
>
> Just so I'm clear, are we going to attempt to clean up and/or merge the
> components?  If we are, it makes sense to do that before we start putting
> ourselves as default CC's on the various components since they will just
> change.  If not, it would be nice to get some clarification on that now.

Re: [lldb-dev] [cfe-dev] [Call for Volunteers] Bug triaging

2018-11-08 Thread Zachary Turner via lldb-dev
Just so I'm clear, are we going to attempt to clean up and/or merge the
components?  If we are, it makes sense to do that before we start putting
ourselves as default CC's on the various components since they will just
change.  If not, it would be nice to get some clarification on that now.

I've put the above list into a spreadsheet so people can sort / filter it
as they see fit.  The link is here:

https://docs.google.com/spreadsheets/d/1aeU6P_vN2c63mpkilqni26U7XtEBDbzZYPFnovwr3FI/edit#gid=0

I think a good starting point would be to get rid of any component with
less than 10 bugs reported so far this year and merge them all into an
"Other" component.
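That cutoff rule can be sketched in a few lines of Python; the counts here
are illustrative stand-ins, not the real 2018 numbers:

```python
# Hypothetical bug counts per component (illustrative numbers only).
counts = {
    "clang/C++": 296,
    "libraries/Backend: X86": 202,
    "libraries/Backend: MSP430": 1,
    "libraries/MCJIT": 1,
    "lld/ELF": 120,
}

def merge_small_components(counts, threshold=10):
    """Merge every component with fewer than `threshold` bugs into 'Other'."""
    merged = {}
    for component, n in counts.items():
        key = component if n >= threshold else "Other"
        merged[key] = merged.get(key, 0) + n
    return merged

# The two single-bug backends collapse into one "Other" bucket (count 2),
# while the high-traffic components are kept as-is.
print(merge_small_components(counts))
```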

On Thu, Nov 8, 2018 at 8:11 AM Kristof Beyls via cfe-dev <
cfe-...@lists.llvm.org> wrote:

> Hi,
>
> Yesterday, I’ve landed a description for how reported bugs should be
> flowing through the various stages of a bug’s life (triage, fixing,
> closing, …) at http://llvm.org/docs/BugLifeCycle.html.
> Thanks for the many many people who provided ideas and feedback for this!
>
> With there now being a description of what is expected during bug triaging
> (http://llvm.org/docs/BugLifeCycle.html#triaging-bugs), we're looking for
> more volunteers to actually do the bug triaging.
> About half of all raised bugs currently don’t seem to get triaged.
>
> The idea is to have one or more volunteers for each of the well over 100
> different product/component combinations we have in bugzilla.
> If you volunteer to help with triaging bugs against a specific component,
> we’ll add you to the default cc list for that component, so that when a new
> bug is raised against that component, you’ll get notified automatically
> through email. For components with few reported bugs, a single triager may
> suffice. For the high-traffic components, we’ll probably need multiple
> volunteers.
> I’ve provided the list of product/components below that had bugs reported
> against in 2018, together with how many bugs were reported against them
> this year so far, as an indication for which components may need more
> volunteers.
>
> I do want to highlight the “new-bugs/new bugs”, “clang/-New Bugs”
> components as those tend to be components people file bugs against if they
> don’t have a clue which part of clang/llvm is causing the issue they’re
> seeing. I believe that you don’t need to be an expert to be able to triage
> most of those bugs. If you want to learn more about llvm, volunteering to
> triage those bugs may be an interesting way to learn a lot more yourself.
>
> How can you get added to the default cc list/volunteer?
> * Preferred way: raise a bug against “Bugzilla Admin”/“Products” to get
> yourself added to the default cc list of the components of your choice.
> * Other way: email bugs-ad...@lists.llvm.org
> * Yet another way: just reply to this mail.
>
> Thanks,
>
> Kristof
>
> new-bugs/new bugs: 535 bugs raised in 2018 (so far)
> clang/C++: 296 bugs raised in 2018 (so far)
> clang/-New Bugs: 260 bugs raised in 2018 (so far)
> libraries/Backend: X86: 202 bugs raised in 2018 (so far)
> libraries/Scalar Optimizations: 152 bugs raised in 2018 (so far)
> clang/Frontend: 120 bugs raised in 2018 (so far)
> lld/ELF: 120 bugs raised in 2018 (so far)
> clang/Formatter: 108 bugs raised in 2018 (so far)
> lldb/All Bugs: 102 bugs raised in 2018 (so far)
> clang/LLVM Codegen: 100 bugs raised in 2018 (so far)
> clang-tools-extra/clang-tidy: 87 bugs raised in 2018 (so far)
> clang/Static Analyzer: 84 bugs raised in 2018 (so far)
> libraries/Common Code Generator Code: 78 bugs raised in 2018 (so far)
> libc++/All Bugs: 67 bugs raised in 2018 (so far)
> lld/COFF: 64 bugs raised in 2018 (so far)
> libraries/Backend: AMDGPU: 60 bugs raised in 2018 (so far)
> libraries/Loop Optimizer: 44 bugs raised in 2018 (so far)
> lld/All Bugs: 30 bugs raised in 2018 (so far)
> clang/Driver: 30 bugs raised in 2018 (so far)
> Runtime Libraries/libprofile library: 29 bugs raised in 2018 (so far)
> libraries/Backend: WebAssembly: 27 bugs raised in 2018 (so far)
> libraries/Backend: ARM: 25 bugs raised in 2018 (so far)
> clang-tools-extra/Other: 25 bugs raised in 2018 (so far)
> libraries/DebugInfo: 25 bugs raised in 2018 (so far)
> OpenMP/Clang Compiler Support: 23 bugs raised in 2018 (so far)
> compiler-rt/compiler-rt: 21 bugs raised in 2018 (so far)
> libraries/Backend: AArch64: 19 bugs raised in 2018 (so far)
> clang/C++11: 19 bugs raised in 2018 (so far)
> libraries/MC: 18 bugs raised in 2018 (so far)
> Build scripts/cmake: 17 bugs raised in 2018 (so far)
> clang/Modules: 17 bugs raised in 2018 (so far)
> libraries/GlobalISel: 17 bugs raised in 2018 (so far)
> OpenMP/Runtime Library: 15 bugs raised in 2018 (so far)
> libraries/Global Analyses: 14 bugs raised in 2018 (so far)
> libraries/Core LLVM classes: 14 bugs raised in 2018 (so far)
> clang/libclang: 14 bugs raised in 2018 (so far)
> Documentation/General docs: 13 bugs raised in 2018 (so far)
> Packaging/deb packages: 13 bugs raised in 2018 (so far)
> li

Re: [lldb-dev] [RFC] OS Awareness in LLDB

2018-10-31 Thread Zachary Turner via lldb-dev
I don’t totally agree with this. I think there are a lot of useful os
awareness tasks in user mode. For example, you’re debugging a deadlock and
want to understand the state of other mutexes, who owns them, etc. or you
want to examine open file descriptors. In the case of a heap corruption you
may wish to study the internal structures of your process’s heap, or even
lower level, the os virtual memory page table structures.

There’s quite a lot you can still do in user mode, but definitely there is
more in kernel mode. As Leonard said, try out WinDbg, as a lot of this stuff
already exists, so it’s a good reference
On Wed, Oct 31, 2018 at 12:08 PM Alexander Polyakov via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi Leonard,
>
> I think it will be kernel-mode debugging since debugging an application in
> user mode is not an OS awareness imo. Of course, some of kernel's modules
> might run in user-mode, but it will be ok I think.
>
> Thanks for your reference, I'll take a look at it.
>
> Also, I found out that ARM supports OS awareness in their DS-5 debugger.
> They have a mechanism for adding new operating systems. All you need to do
> is to describe OS' model (thread's or task's structure for example). I
> think that is how it might be done in LLDB.
>
> On Wed, Oct 31, 2018 at 9:26 PM Leonard Mosescu 
> wrote:
>
>> Hi Alexander, are you interested in user-mode, kernel-mode debugging or
>> both?
>>
>> For reference, the current state of the art regarding OS-awareness
>> debugging is debugging tools for windows
>>  
>> (windbg
>> & co.). This is not surprising since the tools were developed alongside
>> Windows. Obviously they are specific to Windows, but it's good example of
>> how the OS-awareness might look like.
>>
>>
>> On Mon, Oct 29, 2018 at 11:37 AM, Alexander Polyakov via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hi lldb-dev,
>>>
>>> I'm a senior student at Saint Petersburg State University. One of my
>>> possible diploma topics is "OS Awareness in LLDB". Generally, the OS
>>> awareness extends a debugger to provide a representation of the OS threads
>>> - or tasks - and other relevant data structures, typically semaphores,
>>> mutexes, or queues.
>>>
>>> I want to ask the community if OS awareness is interesting for LLDB
>>> users and developers? The main goal is to create some base on top of LLDB
>>> that can be extended to support awareness for different operating systems.
>>>
>>> Also, if you have a good article or other useful information about OS
>>> awareness, please share it with me.
>>>
>>> Thanks in advance!
>>>
>>> --
>>> Alexander
>>>
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>>
>>
>
> --
> Alexander
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] [llvm-dev] [RFC] LLVM bug lifecycle BoF - triaging

2018-10-31 Thread Zachary Turner via lldb-dev
I can tell you that in LLDB we already do get CC'ed on the list for every
bug.  I will grant you that the volume of bugs in LLDB is much lower than
other lists, but I find it very helpful.  It gives visibility to bugs that
would otherwise be seen by nobody.

On the other hand, I'm intentionally unsubscribed from llvm-bugs because it
just generates an unbelievable volume of email.  Checking the archives,
there were over 700 emails in October.  I'm just not going to sign up for
that, and if all llvm bugs started going to llvm-dev I would probably even
go one step further and unsubscribe from llvm-dev.


Slightly unrelated, but has there been any specific guidance or proposals
of how to re-organize the components?   They all look way too specific to
me.  For example, in clang we have:

C++
C++17
C++11
C++14
C++2a
CUDA
Documentation
Driver
Formatter
Frontend
Headers
libclang
LLVM Codegen
Modules
OpenCL
Static Analyzer
Tooling

Can we cut this down to about 4?  I'll take a stab at it:

Standards Conformance
Tooling
Codegen Quality
Other

I don't actively work on clang so feel free to ignore this, it's just a
strawman attempt at doing something.

The motivation here is that if people can quickly and easily identify the
set of components they're interested in they are more willing to subscribe
themselves to those components.

I'm guessing that of the existing set of components, there is a significant
amount of overlap among the set of components that individual contributors
are interested in, which suggests we can compress most of them down quite a
bit.

On Wed, Oct 31, 2018 at 11:25 AM Richard Smith via cfe-dev <
cfe-...@lists.llvm.org> wrote:

> On Wed, 31 Oct 2018, 10:47 David Greene via cfe-dev <
> cfe-...@lists.llvm.org wrote:
>
>> Richard Smith via cfe-dev  writes:
>>
>> > In fact, I think it'd be entirely reasonable to subscribe cfe-dev to
>> > all clang bugs (fully subscribe -- email on all updates!). I don't see
>> > any reason whatsoever why a bug update should get *less* attention
>> > than non-bug development discussion.
>>
>> Some of us are on space-limited machines (I'm thinking of personal
>> equipment, not corporate infrastructure) and getting all bug updates for
>> components could put a real squeeze on things.
>>
>> I agree that cfe-bugs, for example, should get copied on all updates but
>> those updates should be opt-in.
>>
>
> Assuming we go that way, do you think it's reasonable for someone to want
> to subscribe to cfe-dev but not cfe-bugs? What's the use case for that? If
> it's email volume, that choice would prioritize the discussion of "I'm not
> sure this is a bug" or "what's going on here?" plus general dev discussion
> and announcements (cfe-dev) over the discussion of "I'm confident that this
> is a bug" (cfe-bugs).
>
> Perhaps we should have a separate cfe-announce list for people who want to
> stay informed but not drink from the firehose of development discussion
> (current cfe-dev plus clang bug updates).
>
>  -David
>
>
>> ___
>> cfe-dev mailing list
>> cfe-...@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>>
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] `ClangASTSource::IgnoreName` C++ false positives

2018-10-31 Thread Zachary Turner via lldb-dev
The first thing I would try is to see where the language is getting set to
objective c and force it to c++, just as an experiment. Depending on where it
happens, it may be possible to initialize it from the debug info (or
hardcode it).

But since ObjC assumptions are baked into several places, this has the
potential to break some things.
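The experiment being suggested — gating the ignore list on the frame's
source language so C++ locals named `id` or `Class` stay visible — can be
sketched like this. The `should_ignore_name` helper and the plain-string
language values are hypothetical stand-ins, not the real LLDB API:

```python
# Names that Objective-C reserves and that the expression parser currently
# suppresses unconditionally; they are perfectly legal C++ identifiers.
OBJC_RESERVED = {"id", "Class"}

def should_ignore_name(name, language):
    """Only suppress the ObjC-reserved names when the current frame's
    source language (hypothetically reported as a plain string here,
    e.g. from the debug info) is actually Objective-C."""
    if language in ("objc", "objc++"):
        return name in OBJC_RESERVED
    return False

print(should_ignore_name("id", "objc"))  # ignored in ObjC frames
print(should_ignore_name("id", "c++"))   # visible in C++ frames
```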
On Wed, Oct 31, 2018 at 6:54 AM Aleksandr Urakov <
aleksandr.ura...@jetbrains.com> wrote:

> Sorry, I have somehow missed the discussion continuation there. Yes, it's
> a very similar problem, thanks. But unfortunately none of the workarounds
> mentioned there works in this situation...
>
> On Wed, Oct 31, 2018 at 4:32 PM Zachary Turner  wrote:
>
>> It seems like we hit this issue in different contexts almost at the same
>> time (see my thread several days ago about “problem formatting value
>> objects”). That might at least give you some context about why things
>>
>> I wish ObjC assumptions weren’t so deeply embedded, but alas it is the
>> case.
>>
>> Hopefully Jim or someone has ideas on how to fix this properly.
>> On Wed, Oct 31, 2018 at 5:08 AM Aleksandr Urakov <
>> aleksandr.ura...@jetbrains.com> wrote:
>>
>>> Hello,
>>>
>>> I've tried to use a check like `if (m_ast_context->getLangOpts().ObjC)
>>> ...`, but it seems that it's always true. How can we else determine here if
>>> the Objective-C case is used? Or if we can't, where can we move `if (name
>>> == id_name || name == Class_name)` to make it Objective-C only? What
>>> regressions Objective-C users would have if we would remove this check from
>>> here?
>>>
>>> Regards,
>>> Alex
>>>
>>> On Wed, Oct 24, 2018 at 7:14 PM Aleksandr Urakov <
>>> aleksandr.ura...@jetbrains.com> wrote:
>>>
 Hi all!

 There are two hardcoded names to ignore in the
 `ClangASTSource::IgnoreName` function, "Class" and "id", they are valid
 names for C++. It seems that they were added for the Objective-C case. But
 the problem is that when they are in locals they are blocking expressions
 evaluation.

 For example for the next code:

 int main() {
   int x = 5;
   int id = 7;
   int y = 8;
   return 0;
 }

 if you'll break on `return 0` and will try to `print x`, then you'll
 get an error like `no member named 'id' in namespace '$__lldb_local_vars'
 `.

 Do you have any ideas, how can we fix it?

 Regards,
 Alex

>>>
>>>
>>> --
>>> Aleksandr Urakov
>>> Software Developer
>>> JetBrains
>>> http://www.jetbrains.com
>>> The Drive to Develop
>>>
>>
>
> --
> Aleksandr Urakov
> Software Developer
> JetBrains
> http://www.jetbrains.com
> The Drive to Develop
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] `ClangASTSource::IgnoreName` C++ false positives

2018-10-31 Thread Zachary Turner via lldb-dev
It seems like we hit this issue in different contexts almost at the same
time (see my thread several days ago about “problem formatting value
objects”). That might at least give you some context about why things

I wish ObjC assumptions weren’t so deeply embedded, but alas it is the case.

Hopefully Jim or someone has ideas on how to fix this properly.
On Wed, Oct 31, 2018 at 5:08 AM Aleksandr Urakov <
aleksandr.ura...@jetbrains.com> wrote:

> Hello,
>
> I've tried to use a check like `if (m_ast_context->getLangOpts().ObjC)
> ...`, but it seems that it's always true. How can we else determine here if
> the Objective-C case is used? Or if we can't, where can we move `if (name
> == id_name || name == Class_name)` to make it Objective-C only? What
> regressions Objective-C users would have if we would remove this check from
> here?
>
> Regards,
> Alex
>
> On Wed, Oct 24, 2018 at 7:14 PM Aleksandr Urakov <
> aleksandr.ura...@jetbrains.com> wrote:
>
>> Hi all!
>>
>> There are two hardcoded names to ignore in the
>> `ClangASTSource::IgnoreName` function, "Class" and "id", they are valid
>> names for C++. It seems that they were added for the Objective-C case. But
>> the problem is that when they are in locals they are blocking expressions
>> evaluation.
>>
>> For example for the next code:
>>
>> int main() {
>>   int x = 5;
>>   int id = 7;
>>   int y = 8;
>>   return 0;
>> }
>>
>> if you'll break on `return 0` and will try to `print x`, then you'll get
>> an error like `no member named 'id' in namespace '$__lldb_local_vars'`.
>>
>> Do you have any ideas, how can we fix it?
>>
>> Regards,
>> Alex
>>
>
>
> --
> Aleksandr Urakov
> Software Developer
> JetBrains
> http://www.jetbrains.com
> The Drive to Develop
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Problem formatting class types

2018-10-26 Thread Zachary Turner via lldb-dev
Ok that was it, it was because my type was called Class. Oops!
On Fri, Oct 26, 2018 at 4:28 PM Jim Ingham  wrote:

> Most C++ classes and C structs don't have data formatters, particularly
> not classes that you write yourself.
>
> The way value printing works in lldb is that we start by making the
> ValueObject for the value from its Type, so at that stage it is just a
> direct view of the members of the object.  That is done without help of the
> data formatters, reading instead directly from the object's type.  Then we
> consult our type match -> summary/synthetic children registries and we
> construct a summary or a set of "synthetic children" (or both) for the
> object if we find any matches there.  Then the ValueObjectPrinter prints
> the object using the Type based ValueObject, the Summary and the Synthetic
> Children, and there's a print options object that says whether to use the
> raw view, the summary and/or the synthetic children.
>
> But for a type lldb knows nothing about, there won't be any entries in the
> formatter maps, so you should just see the direct Type based children in
> that case.
>
> --raw sets the right options in the print option object to get the printer
> to just use the strict Type based view of the object, with no formatters
> applied.
>
> In your case, you used "Class" as your type name and  Class is a special
> name in ObjC and there happens to be a formatter for that.  You can always
> figure out what formatters apply to the result of an expression with the
> "type {summary/synthetic} info" command.  For your example, I see (my
> variable of type Class was called myClass):
>
> (lldb) type summary info myClass
> summary applied to (Class) myClass is:  (not cascading) (hide value) (skip
> pointers) (skip references) Class summary provider
> (lldb) type synthetic info myClass
> synthetic applied to (Class) myClass is:  Class synthetic children
>
> On macOS those summary/synthetic child providers are in the objc
> category.  The info output should really print the category as well, that
> would be helpful.  But you can do "type summary list" and then find the
> summary in that list and go from there to the category.  Ditto for "type
> synthetic".
>
> What do you get from that?
>
> Jim
>
> > On Oct 26, 2018, at 3:34 PM, Zachary Turner  wrote:
> >
> > So, the second command works, but the first one doesn't.  It doesn't
> give any error, but on the other hand, it doesn't change the results of
> printing the variable.  When I run type category list though, I get this:
> >
> > (lldb) type category list
> > Category: default (enabled)
> > Category: VectorTypes (enabled, applicable for language(s):
> objective-c++)
> > Category: system (enabled, applicable for language(s): objective-c++)
> >
> > So it looks like the behavior I'm seeing is already with the default
> category.  Does this sound right?  Which code path is supposed to get
> executed to format it as a C++ class?
> >
> > On Fri, Oct 26, 2018 at 10:25 AM Jim Ingham  wrote:
> > Remove the "not"...
> >
> > Jim
> >
> > > On Oct 26, 2018, at 10:24 AM, Jim Ingham  wrote:
> > >
> > > But at the minimum, not loading formatters for a language that we can
> determine isn't used in this program seems like something we should try to
> avoid.
> >
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Problem formatting class types

2018-10-26 Thread Zachary Turner via lldb-dev
So, the second command works, but the first one doesn't.  It doesn't give
any error, but on the other hand, it doesn't change the results of printing
the variable.  When I run type category list though, I get this:

(lldb) type category list
Category: default (enabled)
Category: VectorTypes (enabled, applicable for language(s): objective-c++)
Category: system (enabled, applicable for language(s): objective-c++)

So it looks like the behavior I'm seeing is already with the default
category.  Does this sound right?  Which code path is supposed to get
executed to format it as a C++ class?

On Fri, Oct 26, 2018 at 10:25 AM Jim Ingham  wrote:

> Remove the "not"...
>
> Jim
>
> > On Oct 26, 2018, at 10:24 AM, Jim Ingham  wrote:
> >
> > But at the minimum, not loading formatters for a language that we can
> determine isn't used in this program seems like something we should try to
> avoid.
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Problem formatting class types

2018-10-26 Thread Zachary Turner via lldb-dev
Note that I also tried this with a linux / DWARF executable and had the
same result.

On Fri, Oct 26, 2018 at 3:21 AM Zachary Turner  wrote:

> Hello,
>
> I've got this code:
>
> class Class {
>   int x = 0;
>   short y = 1;
>   char z = 'z';
> } C;
>
> int main(int argc, char **argv) {
>   __debugbreak();
>   return 0;
> }
>
> and I run the following LLDB session:
>
> lldb.exe -f foo.exe
> (lldb) target create "foo.exe"
> Current executable set to 'foo.exe' (x86_64).
> (lldb) run
> Process 24604 launched: 'foo.exe' (x86_64)
> Process 24604 stopped
> * thread #1, stop reason = Exception 0x8003 encountered at address
> 0x7ff70a0b1017
> frame #0: 0x7ff70a0b1018 foo.exe`main(argc=-1123614720,
> argv=0x7ff70a0b1000) at foo.cpp:19
>16
>17   int main(int argc, char **argv) {
>18 __debugbreak();
> -> 19 return 0;
>20   }
> (lldb) p C
> (Class) $0 =
> (lldb)
>
> The issue is, of course, that it doesn't display the members of the class
> C.  The type support in PDB is fine, so it's not that.  For example:
>
> (lldb) type lookup Class
> class Class {
> int x;
> short y;
> char z;
> }
>
> And it can definitely find C in memory:
>
> (lldb) p &C
> (Class *) $1 = 0x7ff70a0b3000
>
> Instead, the issue seems to be related to the value object formatter.  I
> tried to track this down but this code is pretty complicated.  However,
> there are two issues that I was able to discover:
>
> 1) It's using the objective C class formatter.  Obviously I'm not using
> objective C, so that seems wrong right off the bat.  Specifically, the
> "Synthetic children front end" is the ObjCClassSyntheticChildrenFrontEnd.
>
> 2) Because of #1, when it calls CalculateNumChildren() in Cocoa.cpp, it
> returns 0.  I would expect it to be calling some function somewhere that
> returns 3, because there are 3 members of the class.
>
> What's strange is that I don't see anything in the CPlusPlusLanguage
> plugin that provides a SyntheticChildrenFrontEnd that examines the
> CxxRecordDecl and looks for children, so I don't know how this is supposed
> to work anywhere.  But I know it must work somewhere, so I assume I'm just
> missing something and I need to find out the right place to hook A up to B
> and things will just work.
>
> Any pointers on what the expected code path that this should be taking is,
> so I can try to figure out where I might be going off path?
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Problem formatting class types

2018-10-26 Thread Zachary Turner via lldb-dev
Hello,

I've got this code:

class Class {
  int x = 0;
  short y = 1;
  char z = 'z';
} C;

int main(int argc, char **argv) {
  __debugbreak();
  return 0;
}

and I run the following LLDB session:

lldb.exe -f foo.exe
(lldb) target create "foo.exe"
Current executable set to 'foo.exe' (x86_64).
(lldb) run
Process 24604 launched: 'foo.exe' (x86_64)
Process 24604 stopped
* thread #1, stop reason = Exception 0x8003 encountered at address
0x7ff70a0b1017
frame #0: 0x7ff70a0b1018 foo.exe`main(argc=-1123614720,
argv=0x7ff70a0b1000) at foo.cpp:19
   16
   17   int main(int argc, char **argv) {
   18 __debugbreak();
-> 19 return 0;
   20   }
(lldb) p C
(Class) $0 =
(lldb)

The issue is, of course, that it doesn't display the members of the class
C.  The type support in PDB is fine, so it's not that.  For example:

(lldb) type lookup Class
class Class {
int x;
short y;
char z;
}

And it can definitely find C in memory:

(lldb) p &C
(Class *) $1 = 0x7ff70a0b3000

Instead, the issue seems to be related to the value object formatter.  I
tried to track this down but this code is pretty complicated.  However,
there are two issues that I was able to discover:

1) It's using the objective C class formatter.  Obviously I'm not using
objective C, so that seems wrong right off the bat.  Specifically, the
"Synthetic children front end" is the ObjCClassSyntheticChildrenFrontEnd.

2) Because of #1, when it calls CalculateNumChildren() in Cocoa.cpp, it
returns 0.  I would expect it to be calling some function somewhere that
returns 3, because there are 3 members of the class.

What's strange is that I don't see anything in the CPlusPlusLanguage plugin
that provides a SyntheticChildrenFrontEnd that examines the CxxRecordDecl
and looks for children, so I don't know how this is supposed to work
anywhere.  But I know it must work somewhere, so I assume I'm just missing
something and I need to find out the right place to hook A up to B and
things will just work.

Any pointers on what the expected code path that this should be taking is,
so I can try to figure out where I might be going off path?
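For comparison, the shape of what a record-based synthetic children front
end would do can be sketched in plain Python. `RecordSyntheticChildren` and
the dict-of-members stand-in are hypothetical; the real LLDB interface
operates on ValueObjects:

```python
# Pure-Python sketch of a "synthetic children front end" for a plain
# record type: report how many children exist and hand each one back
# by index or by name.
class RecordSyntheticChildren:
    def __init__(self, members):
        # [(name, value), ...] in declaration order
        self.members = list(members.items())

    def num_children(self):
        return len(self.members)

    def get_child_index(self, name):
        for i, (n, _) in enumerate(self.members):
            if n == name:
                return i
        return -1

    def get_child_at_index(self, i):
        return self.members[i]

# The Class from the example above: int x = 0; short y = 1; char z = 'z';
frontend = RecordSyntheticChildren({"x": 0, "y": 1, "z": "z"})
print(frontend.num_children())  # -> 3, not the 0 the ObjC formatter returns
```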
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Type lookup by basename vs. qualified name

2018-10-23 Thread Zachary Turner via lldb-dev
I was trying to implemented type lookup for qualified names (e.g. in
namespaces), and I noticed we have this code in Module.cpp:

  if (Type::GetTypeScopeAndBasename(type_name_cstr, type_scope,
                                    type_basename, type_class)) {
    // Check if "name" starts with "::" which means the qualified type starts
    // from the root namespace and implies an exact match. The typenames we
    // get back from clang do not start with "::" so we need to strip this off
    // in order to get the qualified names to match.
    exact_match = type_scope.consume_front("::");

    ConstString type_basename_const_str(type_basename);
    if (FindTypes_Impl(sc, type_basename_const_str, nullptr, append,
                       max_matches, searched_symbol_files, typesmap)) {
      typesmap.RemoveMismatchedTypes(type_scope, type_basename, type_class,
                                     exact_match);
      num_matches = typesmap.GetSize();
    }
  } else {


Basically we are stripping the namespace and scope, and only passing the
basename to the SymbolVendor.  I guess then the SymbolFile will find any
types which match the basename, which could be anything, and the
RemoveMismatchedTypes will filter this down to only the ones that are in
the namespace.

I don't know what this is like in DWARF-land, but for PDB this is
*precisely* what we do not want to do.  Types are indexed in an internal
hash table by fully qualified name, so we need the fully scoped name in the
SymbolFile plugin otherwise we basically have to do the equivalent of a
full table scan for what could be an O(1) operation.
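To make the cost difference concrete, here is a sketch of the two lookup strategies.  The names are hypothetical and the splitting is simplified (it ignores template arguments, which the real Type::GetTypeScopeAndBasename has to handle):

```python
def get_type_scope_and_basename(name):
    # "a::b::Class" -> ("a::b::", "Class"); simplified analogue of
    # Type::GetTypeScopeAndBasename.
    scope, sep, basename = name.rpartition("::")
    return (scope + sep if sep else "", basename)

# PDB-style: types are hashed by fully qualified name, so a lookup with
# the full name is a single O(1) probe.
types_by_fqn = {
    "ns::Class": "UDT ns::Class",
    "other::Class": "UDT other::Class",
}

def find_type_fqn(name):
    return types_by_fqn.get(name)

# Basename-only lookup: every entry has to be split and compared (the
# "full table scan"), with wrong-scope matches filtered out afterwards,
# which is what RemoveMismatchedTypes ends up doing.
def find_types_by_basename(basename, scope):
    return [t for fqn, t in types_by_fqn.items()
            if get_type_scope_and_basename(fqn) == (scope, basename)]
```

The second function touches every key even though only one can match, which is the inefficiency the mail is describing.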

If I change this to pass the fully scoped name, is this going to break
SymbolFileDWARF?


Re: [lldb-dev] [llvm-dev] Should we stop supporting building with Visual Studio?

2018-10-10 Thread Zachary Turner via lldb-dev
So IIUC this is all 1 big solution, one component of which is LLVM? How do you
get them all together in 1 big solution?
On Wed, Oct 10, 2018 at 7:16 AM Nicolas Capens 
wrote:

> Hi Zachary,
>
> We use LLVM JIT in SwiftShader, which is used by Google Chrome and Android
> (Emulator). Most development takes place in Visual Studio, where it builds
> as part of the rest of the SwiftShader solution. So we care about LLVM
> source files compiling successfully within Visual Studio.
>
> Would it be reasonable to at least ensure that major releases (7.0, 8.0,
> etc.) build with Visual Studio? We don't care much about breakages in
> between releases, and the other issues you listed don't affect us much
> either due to using custom solution/project files.
>
> Thanks for your consideration,
> Nicolas Capens
>
> On Sun, Oct 7, 2018 at 4:51 PM Zachary Turner via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
>
>> This has been on my mind for quite some time, but recently it's been
>> popping up more and more seeing some of the issues people have run into.
>>
>> Before people get the wrong idea, let me make one thing clear.  **I am
>> not proposing we stop supporting the CMake Visual Studio generator.  I am
>> only proposing we stop supporting actually compiling with the generated
>> project**.  Yes the distinction is important, and I'll elaborate more on
>> why later.  First though, here are some of the issues with the VS generator:
>>
>> 1) Using MSBuild is slower than Ninja.
>> 2) Unless you remember to pass -Thost=x64 on the command line, you won't
>> be able to successfully build.  We can (and have) updated the documentation
>> to indicate this, but it's not intuitive and still bites people because for
>> some reason this is not the default.
>> 3) Even if you do pass -Thost=x64 to CMake, it will apparently still fail
>> sometimes.  See this thread for details:
>> http://lists.llvm.org/pipermail/cfe-dev/2018-October/059609.html.  It
>> seems the parallel build scheduler does not do a good job and can bring a
>> machine down.  This is not the first time though, every couple of months
>> there's a thread about how building or running tests from within VS doesn't
>> work.
>> 4) Supporting it is a continuous source of errors and mistakes when
>> writing tests.  The VS generator outputs a project which can build Debug /
>> Release with a single project.  This means that `CMAKE_BUILD_TYPE=Debug` is
>> a no-op on this generator.  The reason this matters for the test suite is
>> because `${CMAKE_CURRENT_BINARY_DIR}` isn't sufficient to identify the
>> location of the binaries.  You need 
>> `${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_CFG_INTDIR}`
>> instead.
>>
>> There is a continuous source of problems in our CMake [1, 2, 3, 4, 5].
>> It also affects tests, and every time someone adds a new lit site
>> configuration, they have to remember to add this magic block of code:
>>
>> # Support substitution of the tools_dir with user parameters. This is
>> # used when we can't determine the tool dir at configuration time.
>> try:
>> config.llvm_tools_dir = config.llvm_tools_dir % lit_config.params
>> config.llvm_shlib_dir = config.llvm_shlib_dir % lit_config.params
>> except KeyError:
>> e = sys.exc_info()[1]
>> key, = e.args
>> lit_config.fatal("unable to find %r parameter, use
>> '--param=%s=VALUE'" % (key,key))
>>
>> to the file (even though only about 2 people actually understand what
>> this does), which has caused problems several times.
>>
>> 5) VSCode and Visual Studio both support opening CMake projects directly
>> now, which bypasses MSBuild.  I don't know how well Visual Studio supports
>> LLVM's CMake, but the last time I tried it with VSCode on Linux it worked
>> fine.
>>
>> 
>>
>> I mentioned earlier that the distinction between not *building* with a
>> VS-generated project and not supporting the VS generator is important.
>>
>> I don't want to speak for everyone, but I believe that *most* people use
>> the VS generator because they want IDE support for their projects.  They
>> want to be able to browse code, hit F5 to debug, F9 to set breakpoints,
>> etc.  They don't necessarily care that Ctrl+Shift+B is how the code is
>> generated versus some other incantation.  I'm asserting that it's possible
>> to still have all the things people actually want from the VS generator
>> without actually building from inside of VS.  In fact, I've been doing this
>> for several years.  The workflow is:
>>
>> 1) Run CMake twice, generating to separate output directories.  Once
>> using -G "Visual Studio 15 2017" and once using -G Ninja, each to different
>> directories.
>>
>> 2) Open the VS one.  You have full IDE support.
>>
>> 3) Instead of hitting Ctrl+Shift+B to build, have a command prompt window
>> open and type ninja.  Wait for it to complete.  If you want to you can make
>> a custom tool command in Visual Studio so that you can access this from a
>> keyboard shortcut.
>>
>> 4) When you want to debug, set your startup project (as you normally
>> would), right click and hit properties, go to Debugging, change Command
>> from $(TargetPath) to <the executable you want to debug>.

Re: [lldb-dev] [cfe-dev] Should we stop supporting building with Visual Studio?

2018-10-09 Thread Zachary Turner via lldb-dev
On Tue, Oct 9, 2018 at 12:49 AM Csaba Raduly  wrote:

> On Sun, Oct 7, 2018 at 10:51 PM Zachary Turner via cfe-dev
>  wrote:
>
> > 1) Run CMake twice, generating to separate output directories.  Once
> using -G "Visual Studio 15 2017" and once using -G Ninja, each to different
> directories.
> >
> > 2) Open the VS one.  You have full IDE support.
> >
> > 3) Instead of hitting Ctrl+Shift+B to build, have a command prompt
> window open and type ninja.  Wait for it to complete.
>
> If there were errors, eyeball-grep the console output and manually
> navigate to the affected file/line. No thanks.

I don’t find this to be a problem in practice.  You have to eyeball grep
the output anyway to figure out which  line to double click in the build
output window.  Usually it’s a file you have open in which case you don’t
have to manually navigate to it.  If it’s not then yes you have to manually
open the file, but you don’t have to manually navigate to the line.  You
can hit Ctrl+F7 to compile just that file in VS and then double click.
Since the ninja build is faster anyway though, the whole process doesn’t
actually end up taking that much more time.  Note that if you were to add a
custom tool command to do your build for you and output to the VS console
window, you could still double click lines there and it would be exactly
the same as if you built from inside VS


>
> > If you want to you can make a custom tool command in Visual Studio so
> that you can access this from a keyboard shortcut.
> >
> > 4) When you want to debug, set your startup project (as you normally
> would), right click and hit properties, go to Debugging, change Command
> from $(TargetPath) to <the executable you want to debug>.
> >
>
> Make some changes in the source, hit build, and wonder why the changes
> don't appear in the debuggee. (because they got compiled into the VS
> dir, not the ninja dir).


This will never happen.  An incremental build takes about 10-20 seconds,
compared to 5-10 minutes for a full build.  If you hit Build in VS you will
notice because you probably don’t want to sit around for 5-10 minutes.
Moreover, this is just muscle memory which would cause you to accidentally
hit build in VS.  I’ve probably only done this 3-4 times in as many years.

>


Re: [lldb-dev] [cfe-dev] Should we stop supporting building with Visual Studio?

2018-10-08 Thread Zachary Turner via lldb-dev
On Mon, Oct 8, 2018 at 12:29 PM  wrote:

> I build with the VS project. I find it more convenient to do that than
> have VS and a cmd window open to run ninja. Especially when I’ve got more
> than 1 copy of VS open looking at different release trains. I wouldn’t mind
> using ninja to build, but only if it worked when I right click on lldb and
> select “Build”.
>

This seems like a bit of an extreme position to take.   We shouldn't be
making decisions about supported configurations based on what keyboard /
mouse incantation is used to invoke a command.  The important thing is
whether or not there's a reasonable substitute for peoples' existing
workflows.  Pushing (for example) Ctrl+Alt+B instead of Ctrl+Shift+B I
consider reasonable (or typing ninja from a command prompt I also consider
reasonable).


Re: [lldb-dev] Parsing Line Table to determine function prologue?

2018-10-08 Thread Zachary Turner via lldb-dev
I see.  It's not the end of the world because I can just parse the whole
line table when requested.  It's just that in PDB-land the format is such
that a) I know the exact address of the prologue and epilogue at the time I
parse the function record, and b) when parsing the line table, I can
quickly scan to the function's address range, so parsing the whole table
is less efficient than necessary.  But it's definitely sufficient.

On Mon, Oct 8, 2018 at 12:41 PM Jim Ingham  wrote:

> A single sequence in the line table needs to be run from beginning to end
> to make sense of it.  It doesn't really have addresses, it generally has a
> start address, then a sequence of "increment line, increment address"
> instructions.  So you have to run the state machine to figure out what the
> addresses are.
>
> However, the line table does not have to be one continuous sequence.  The
> DWARF docs state this explicitly, and there is an "end_sequence"
> instruction to implement this.  I can't see any reason why you couldn't get
> the compiler to emit line tables in per-function sequences, and have the
> debugger optimize reading the line table by first scanning for sequence
> ends to get the map of chunks -> addresses, and then reading the line table
> in those chunks.  I don't think anybody does this, however.  clang emitted
> the whole CU as one sequence in the few examples I had sitting around.
>
> Jim
>
>
> > On Oct 8, 2018, at 12:28 PM, Leonard Mosescu via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > Even if we do need to parse the line table, could it be done just for
> the function in question?  The debug info tells us the function's address
> range, so is there some technical reason why it couldn't parse the line
> table only for the given address range?
> >
> > My understanding is that there's one DWARF .debug_line "program" per CU,
> and normally you'd need to "execute" the whole line number program.
> >
> > On Sat, Oct 6, 2018 at 8:05 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > While implementing native PDB support I noticed that LLDB is asking to
> parse an entire compile unit's line table in order to determine if 1
> address is a function prologue or epilogue.
> >
> > Is this necessary in DWARF-land?  It would be nice if I could just pass
> the prologue and epilogue byte size directly to the constructor of the
> lldb_private::Function object when I construct it.
> >
> > It seems unnecessary to parse the entire line table just to set a
> breakpoint by function name, but this is what ends up happening.
> >
> > Even if we do need to parse the line table, could it be done just for
> the function in question?  The debug info tells us the function's address
> range, so is there some technical reason why it couldn't parse the line
> table only for the given address range?
> >
>
>
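The state machine Jim describes (a start address, a stream of increment instructions, and an explicit end_sequence terminator) can be sketched as a toy interpreter.  This is only illustrative: real DWARF encodes the advances as special opcodes rather than plain tuples.

```python
# Marker for the instruction that terminates a sequence, analogous to
# DWARF's DW_LNE_end_sequence.
END_SEQUENCE = "end_sequence"

def run_line_program(start_address, start_line, ops):
    """Run a toy line-number program, yielding (address, line) rows.

    Each op is either (address_delta, line_delta) or END_SEQUENCE.  To
    learn any row's address you must run the machine from the start,
    which is why the whole sequence gets parsed."""
    addr, line = start_address, start_line
    rows = [(addr, line)]
    for op in ops:
        if op == END_SEQUENCE:
            break
        addr_delta, line_delta = op
        addr += addr_delta
        line += line_delta
        rows.append((addr, line))
    return rows
```

A per-function sequence, as Jim suggests, would let a consumer skip straight to the chunk covering one function instead of running the machine over the entire CU.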


Re: [lldb-dev] [cfe-dev] Should we stop supporting building with Visual Studio?

2018-10-08 Thread Zachary Turner via lldb-dev
On Mon, Oct 8, 2018 at 11:54 AM Stephen Kelly via cfe-dev <
cfe-...@lists.llvm.org> wrote:

>
> > 3) Even if you do pass -Thost=x64 to CMake, it will apparently still
> > fail sometimes.  See this thread for details:
> > http://lists.llvm.org/pipermail/cfe-dev/2018-October/059609.html.  It
> > seems the parallel build scheduler does not do a good job and can bring
> > a machine down.  This is not the first time though, every couple of
> > months there's a thread about how building or running tests from within
> > VS doesn't work.
>
> I don't know any more about this. It would be good to know more than
> that it can "apparently fail sometimes".
>
>
Sadly that's part of the problem.  Very few people actually use the Visual
Studio generator for building, so a lot of times when we get people with
issues, nobody knows how to help (or the person that does know doesn't see
the thread).  So they get a response like "hmm, not many people actually
use that workflow, can you try this instead?"

I feel bad when I can't help, and that's part of why I made this proposal
in the first place, because fewer supported options in the configuration
matrix means people are more likely to find someone who understands the
problem when something goes wrong.


Re: [lldb-dev] [cfe-dev] Should we stop supporting building with Visual Studio?

2018-10-08 Thread Zachary Turner via lldb-dev
On Mon, Oct 8, 2018 at 7:42 AM Greg Bedwell  wrote:

> Thanks for raising this.
>
> This is a topic I've been interested in for a while too, as I've had to do
> a few of those lit.site.cfg fix-ups that you mention (in fact I have one
> sitting unreviewed at https://reviews.llvm.org/D40522 although I've not
> pinged it in a long time so I'll need to double check that it's still an
> issue).  There are also other issues.  For
> example LLVM_ENABLE_ABI_BREAKING_CHECKS is implemented in such a way that
> by default the value is defined at CMake time based on the value of
> LLVM_ENABLE_ASSERTIONS which gets confusing with the Visual Studio
> generator where the value of LLVM_ENABLE_ASSERTIONS does not necessarily
> correspond to whether assertions are enabled or not.
>
> As I understand it, what you're proposing is to not support building for
> any configs that return true for GENERATOR_IS_MULTI_CONFIG.  This includes
> all of the Visual Studio generators, but also the Xcode generator.  I'm not
> an Xcode user. Does anyone make use of that generator or is it entirely
> replaced in practice by single-config generators, i.e. Ninja?
>
I haven't heard of anyone using the Xcode generated project.  In fact, LLDB
maintains its own hand-created Xcode project precisely because the CMake
one is considered "unusable".  That said, I don't personally use Xcode or a
Mac, so I can't speak for if anyone else might be using the Xcode generator.


>
> We're still using the Visual Studio generators in production at Sony at
> the moment.  This is largely because until recently they were actually
> faster than Ninja for us due to the availability of distributed builds on
> our network.  We've recently patched in support for our system into our
> private branch of Ninja now so in theory it should be faster/on-par again
> but we've not yet pulled the trigger on making them the default.  If
> there's consensus that this is the way forward, then we'll definitely need
> some time to make the change internally.  I'm only speaking personally in
> this reply as I'll need to discuss with the rest of the team before we can
> reach a position, but basically I wouldn't want the conclusion of this
> thread to be "No dissenting voices, so here's an immediate patch to remove
> support!"
>
There's a patch up right now to add support for /MP.
https://reviews.llvm.org/D52193.  In theory this should also help unless
you have your own distributed build system.  I'm curious what was actually
faster though.  I've found hitting Ctrl+Shift+B from within Visual Studio
to be much slower, but it seems like a lot of that time is going to MSBuild
resolving dependencies and stuff.  Like it sometimes takes over 30 seconds
before it even starts doing *anything*.


>
> I've not tried the workflow you describe.  I'll try it out in the coming
> days to see how it works for me.  My main concerns are:
>
> * How far will it raise the barrier of entry to new developers?  My
> impression is that a lot of students coming to LLVM for the first time,
> first build out of the box with Visual Studio before later discovering this
> magical thing called Ninja that will speed things up.  Potentially this
> could be mitigated with good enough documentation in the getting started
> guide I expect.
>
There's a couple of ways we can mitigate this.  We can print a warning when
using the VS generator, and we can update the getting started guide.  But
I'm not sure it will raise the barrier of entry much, if at all.  Right now
new developers are struggling with building and running even with VS.
Every couple of weeks there's posts about how the test suite wouldn't run,
or something is running out of heap space, or they forgot to use
-Thost=x64.


>
> * LLVM's CMake is super-slow on Windows, and we'd need to run it twice
> whenever there are project changes.  This could be a significant drawback
> in the proposed workflow but I'll need to try it before I can say that for
> sure.
>
I mentioned this in my response to Aaron, but just to re-iterate here, you
only ever need to run CMake on the VS project if you actually want to edit
a file that has been added, which is pretty rare.  I have gone several
months without re-generating and it works fine.  This is actually a big
improvement over the VS-generator-only workflow.  FWIW, my experience is
that the Ninja generator is at least twice as fast as the Visual Studio
generator.


Re: [lldb-dev] [cfe-dev] Should we stop supporting building with Visual Studio?

2018-10-08 Thread Zachary Turner via lldb-dev
Yes i listed 5 steps, but 2 of them (#2 and #5) are exactly what you
already do.

#1 I only actually do every couple of weeks, which is an improvement over
building inside VS. when you build inside vs you have to close and reopen
the solution every time you sync, which is really slow. You don’t actually
have to regenerate the ide solution unless you need to edit a file that was
added, which is rare. I’ve gone several months without regenerating the vs
solution.

#3 is not *that* much different than what we already do. 6 of one, half
dozen of another.

#4 is the only real diff, if you build from the ide this just works, with
this workflow there’s 1 extra step. But you only really have to do it the
first time.
On Mon, Oct 8, 2018 at 7:35 AM Aaron Ballman  wrote:

> On Sun, Oct 7, 2018 at 4:51 PM Zachary Turner via cfe-dev
>  wrote:
> >
> > This has been on my mind for quite some time, but recently it's been
> popping up more and more seeing some of the issues people have run into.
> >
> > Before people get the wrong idea, let me make one thing clear.  **I am
> not proposing we stop supporting the CMake Visual Studio generator.  I am
> only proposing we stop supporting actually compiling with the generated
> project**.  Yes the distinction is important, and I'll elaborate more on
> why later.  First though, here are some of the issues with the VS generator:
> >
> > 1) Using MSBuild is slower than Ninja.
> > 2) Unless you remember to pass -Thost=x64 on the command line, you won't
> be able to successfully build.  We can (and have) updated the documentation
> to indicate this, but it's not intuitive and still bites people because for
> some reason this is not the default.
> > 3) Even if you do pass -Thost=x64 to CMake, it will apparently still
> fail sometimes.  See this thread for details:
> http://lists.llvm.org/pipermail/cfe-dev/2018-October/059609.html.  It
> seems the parallel build scheduler does not do a good job and can bring a
> machine down.  This is not the first time though, every couple of months
> there's a thread about how building or running tests from within VS doesn't
> work.
> > 4) Supporting it is a continuous source of errors and mistakes when
> writing tests.  The VS generator outputs a project which can build Debug /
> Release with a single project.  This means that `CMAKE_BUILD_TYPE=Debug` is
> a no-op on this generator.  The reason this matters for the test suite is
> because `${CMAKE_CURRENT_BINARY_DIR}` isn't sufficient to identify the
> location of the binaries.  You need
> `${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_CFG_INTDIR}` instead.
> >
> > There is a continuous source of problems in our CMake [1, 2, 3, 4, 5].
> It also affects tests, and every time someone adds a new lit site
> configuration, they have to remember to add this magic block of code:
> >
> > # Support substitution of the tools_dir with user parameters. This is
> > # used when we can't determine the tool dir at configuration time.
> > try:
> > config.llvm_tools_dir = config.llvm_tools_dir % lit_config.params
> > config.llvm_shlib_dir = config.llvm_shlib_dir % lit_config.params
> > except KeyError:
> > e = sys.exc_info()[1]
> > key, = e.args
> > lit_config.fatal("unable to find %r parameter, use
> '--param=%s=VALUE'" % (key,key))
> >
> > to the file (even though only about 2 people actually understand what
> this does), which has caused problems several times.
> >
> > 5) VSCode and Visual Studio both support opening CMake projects directly
> now, which bypasses MSBuild.  I don't know how well Visual Studio supports
> LLVM's CMake, but the last time I tried it with VSCode on Linux it worked
> fine.
> >
> > 
> >
> > I mentioned earlier that the distinction between not *building* with a
> VS-generated project and not supporting the VS generator is important.
> >
> > I don't want to speak for everyone, but I believe that *most* people use
> the VS generator because they want IDE support for their projects.  They
> want to be able to browse code, hit F5 to debug, F9 to set breakpoints,
> etc.  They don't necessarily care that Ctrl+Shift+B is how the code is
> generated versus some other incantation.  I'm asserting that it's possible
> to still have all the things people actually want from the VS generator
> without actually building from inside of VS.  In fact, I've been doing this
> for several years.  The workflow is:
> >
> > 1) Run CMake twice, generating to separate output directories.  Once
> using -G "Visual Studio 15 2017" and once using -G Ninja, each to different
> directories.
> >
> > 2) Open the VS one.  You have full IDE support.
> >
> > 3) Instead of hitting Ctrl+Shift+B to build, have a command prompt
> window open and type ninja.  Wait for it to complete.  If you want to you
> can make a custom tool command in Visual Studio so that you can access this
> from a keyboard shortcut.
> >
> > 4) When you want to debug, set your startup project (as you normally
> > would), right click and hit properties, go to Debugging, change Command
> > from $(TargetPath) to <the executable you want to debug>.

Re: [lldb-dev] [cfe-dev] Should we stop supporting building with Visual Studio?

2018-10-07 Thread Zachary Turner via lldb-dev
What would the variable do?  Ninja and VS are generators, the only way to
specify them is with the -G option to cmake.  If you use the VS generator,
there's no way I'm aware of to make it use ninja instead of MSBuild when
you hit Ctrl+Shift+B.

That said, type ninja in a command prompt is not a terrible burden, but
even if it is, people can always just create a custom Tool command that
runs ninja in the specified working directory, and bind it to some keyboard
combination so the workflow is almost exactly the same as what they are
using today.

On Sun, Oct 7, 2018 at 8:32 PM Hussien Hussien  wrote:

> Can we just create a CMAKE variable (eg. LLVM_USE_NINJA_BUILD) that's set
> to ON by default, but allow users to turn it OFF at their discretion?
>
> I do know that VS2017 supports CMAKE build integration through Ninja.
>
> On Sun, Oct 7, 2018 at 4:51 PM Zachary Turner via cfe-dev <
> cfe-...@lists.llvm.org> wrote:
>
>> This has been on my mind for quite some time, but recently it's been
>> popping up more and more seeing some of the issues people have run into.
>>
>> Before people get the wrong idea, let me make one thing clear.  **I am
>> not proposing we stop supporting the CMake Visual Studio generator.  I am
>> only proposing we stop supporting actually compiling with the generated
>> project**.  Yes the distinction is important, and I'll elaborate more on
>> why later.  First though, here are some of the issues with the VS generator:
>>
>> 1) Using MSBuild is slower than Ninja.
>> 2) Unless you remember to pass -Thost=x64 on the command line, you won't
>> be able to successfully build.  We can (and have) updated the documentation
>> to indicate this, but it's not intuitive and still bites people because for
>> some reason this is not the default.
>> 3) Even if you do pass -Thost=x64 to CMake, it will apparently still fail
>> sometimes.  See this thread for details:
>> http://lists.llvm.org/pipermail/cfe-dev/2018-October/059609.html.  It
>> seems the parallel build scheduler does not do a good job and can bring a
>> machine down.  This is not the first time though, every couple of months
>> there's a thread about how building or running tests from within VS doesn't
>> work.
>> 4) Supporting it is a continuous source of errors and mistakes when
>> writing tests.  The VS generator outputs a project which can build Debug /
>> Release with a single project.  This means that `CMAKE_BUILD_TYPE=Debug` is
>> a no-op on this generator.  The reason this matters for the test suite is
>> because `${CMAKE_CURRENT_BINARY_DIR}` isn't sufficient to identify the
>> location of the binaries.  You need 
>> `${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_CFG_INTDIR}`
>> instead.
>>
>> There is a continuous source of problems in our CMake [1, 2, 3, 4, 5].
>> It also affects tests, and every time someone adds a new lit site
>> configuration, they have to remember to add this magic block of code:
>>
>> # Support substitution of the tools_dir with user parameters. This is
>> # used when we can't determine the tool dir at configuration time.
>> try:
>> config.llvm_tools_dir = config.llvm_tools_dir % lit_config.params
>> config.llvm_shlib_dir = config.llvm_shlib_dir % lit_config.params
>> except KeyError:
>> e = sys.exc_info()[1]
>> key, = e.args
>> lit_config.fatal("unable to find %r parameter, use
>> '--param=%s=VALUE'" % (key,key))
>>
>> to the file (even though only about 2 people actually understand what
>> this does), which has caused problems several times.
>>
>> 5) VSCode and Visual Studio both support opening CMake projects directly
>> now, which bypasses MSBuild.  I don't know how well Visual Studio supports
>> LLVM's CMake, but the last time I tried it with VSCode on Linux it worked
>> fine.
>>
>> 
>>
>> I mentioned earlier that the distinction between not *building* with a
>> VS-generated project and not supporting the VS generator is important.
>>
>> I don't want to speak for everyone, but I believe that *most* people use
>> the VS generator because they want IDE support for their projects.  They
>> want to be able to browse code, hit F5 to debug, F9 to set breakpoints,
>> etc.  They don't necessarily care that Ctrl+Shift+B is how the code is
>> generated versus some other incantation.  I'm asserting that it's possible
>> to still have all the things people actually want from the VS generator
>> without actually building from inside of VS.  In fact, I've been doing this
>> for several years.  The workflow is:
>>
>> 1) Run CMake twice, generating to separate output directories.  Once
>> using -G "Visual Studio 15 2017" and once using -G Ninja, each to different
>> directories.
>>
>> 2) Open the VS one.  You have full IDE support.
>>
>> 3) Instead of hitting Ctrl+Shift+B to build, have a command prompt window
>> open and type ninja.  Wait for it to complete.  If you want to you can make
>> a custom tool command in Visual Studio so that you can access this from a
>> keyboard shortcut.
>>
>> 4) When you want to debug, set your startup project (as you normally
>> would), right click and hit properties, go to Debugging, change Command
>> from $(TargetPath) to <the executable you want to debug>.

[lldb-dev] Should we stop supporting building with Visual Studio?

2018-10-07 Thread Zachary Turner via lldb-dev
This has been on my mind for quite some time, but recently it's been
popping up more and more seeing some of the issues people have run into.

Before people get the wrong idea, let me make one thing clear.  **I am not
proposing we stop supporting the CMake Visual Studio generator.  I am only
proposing we stop supporting actually compiling with the generated
project**.  Yes the distinction is important, and I'll elaborate more on
why later.  First though, here are some of the issues with the VS generator:

1) Using MSBuild is slower than Ninja.
2) Unless you remember to pass -Thost=x64 on the command line, you won't be
able to successfully build.  We can (and have) updated the documentation to
indicate this, but it's not intuitive and still bites people because for
some reason this is not the default.
3) Even if you do pass -Thost=x64 to CMake, it will apparently still fail
sometimes.  See this thread for details:
http://lists.llvm.org/pipermail/cfe-dev/2018-October/059609.html.  It seems
the parallel build scheduler does not do a good job and can bring a machine
down.  This is not the first time though, every couple of months there's a
thread about how building or running tests from within VS doesn't work.
4) Supporting it is a continuous source of errors and mistakes when writing
tests.  The VS generator outputs a project which can build Debug / Release
with a single project.  This means that `CMAKE_BUILD_TYPE=Debug` is a no-op
on this generator.  The reason this matters for the test suite is because
`${CMAKE_CURRENT_BINARY_DIR}` isn't sufficient to identify the location of
the binaries.  You need `${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_CFG_INTDIR}`
instead.

There is a continuous source of problems in our CMake [1, 2, 3, 4, 5].  It
also affects tests, and every time someone adds a new lit site
configuration, they have to remember to add this magic block of code:

# Support substitution of the tools_dir with user parameters. This is
# used when we can't determine the tool dir at configuration time.
try:
    config.llvm_tools_dir = config.llvm_tools_dir % lit_config.params
    config.llvm_shlib_dir = config.llvm_shlib_dir % lit_config.params
except KeyError:
    e = sys.exc_info()[1]
    key, = e.args
    lit_config.fatal("unable to find %r parameter, use '--param=%s=VALUE'"
                     % (key, key))

to the file (even though only about 2 people actually understand what this
does), which has caused problems several times.
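For anyone wondering what that magic block actually does: it performs
old-style Python %-substitution of lit --param values into paths that were
configured with placeholders, because with a multi-config generator the
Debug/Release subdirectory isn't known until lit runs.  A minimal standalone
sketch (the path template and parameter name here are made up for
illustration, not taken from LLVM's actual configuration):

```python
# Sketch of lit's %-style parameter substitution used in the block above.
# "params" stands in for lit_config.params; the path template is hypothetical.
params = {"build_mode": "Debug"}

# A multi-config (VS) build configures the tools dir with a %(build_mode)s
# placeholder, because the Debug/Release subdirectory isn't known until
# lit is invoked with --param build_mode=...
tools_dir_template = "C:/llvm-build/%(build_mode)s/bin"

try:
    tools_dir = tools_dir_template % params
except KeyError as e:
    (key,) = e.args
    raise SystemExit("unable to find %r parameter, use '--param=%s=VALUE'"
                     % (key, key))

print(tools_dir)  # -> C:/llvm-build/Debug/bin
```

With a single-config generator (e.g. Ninja) the template contains no
placeholder, the substitution is a no-op, and the whole block is dead weight.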

5) VSCode and Visual Studio both support opening CMake projects directly
now, which bypasses MSBuild.  I don't know how well Visual Studio supports
LLVM's CMake, but the last time I tried it with VSCode on Linux it worked
fine.



I mentioned earlier that the distinction between not *building* with a
VS-generated project and not supporting the VS generator is important.

I don't want to speak for everyone, but I believe that *most* people use
the VS generator because they want IDE support for their projects.  They
want to be able to browse code, hit F5 to debug, F9 to set breakpoints,
etc.  They don't necessarily care whether Ctrl+Shift+B or some other
incantation is how the code gets built.  I'm asserting that it's possible
to still have all the things people actually want from the VS generator
without actually building from inside of VS.  In fact, I've been doing this
for several years.  The workflow is:

1) Run CMake twice, once using -G "Visual Studio 15 2017" and once using -G
Ninja, each generating into a separate output directory.

2) Open the VS one.  You have full IDE support.

3) Instead of hitting Ctrl+Shift+B to build, have a command prompt window
open and type ninja.  Wait for it to complete.  If you want to you can make
a custom tool command in Visual Studio so that you can access this from a
keyboard shortcut.

4) When you want to debug, set your startup project (as you normally
would), right click and hit properties, go to Debugging, change Command
from $(TargetPath) to .

5) Hit F5.

In short, with only 2 simple additional steps (run CMake an extra time, and
type a path into a window), people can have the exact workflow they are
used to, plus faster builds, minus all of the problems and complexities
associated with building from within VS.

And we can simplify our CMake logic and lit configuration files as well.




[1] - https://reviews.llvm.org/D43096
[2] - https://reviews.llvm.org/D46642
[3] - https://reviews.llvm.org/D45918
[4] - https://reviews.llvm.org/D45333
[5] - https://reviews.llvm.org/D46334
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Parsing Line Table to determine function prologue?

2018-10-06 Thread Zachary Turner via lldb-dev
While implementing native PDB support I noticed that LLDB is asking to
parse an entire compile unit's line table in order to determine whether a
single address is in a function prologue or epilogue.

Is this necessary in DWARF-land?  It would be nice if I could just pass the
prologue and epilogue byte size directly to the constructor of the
lldb_private::Function object when I construct it.

It seems unnecessary to parse the entire line table just to set a
breakpoint by function name, but this is what ends up happening.

Even if we do need to parse the line table, could it be done just for the
function in question?  The debug info tells us the function's address
range, so is there some technical reason why it couldn't parse the line
table only for the given address range?
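In DWARF, the line-number program marks the first instruction past the
prologue with a prologue_end flag, and the function's address range is known
from the debug info, so in principle only the rows inside that range are
needed.  A rough sketch of that restricted lookup, using a made-up row
format rather than any real DWARF reader API:

```python
# Hypothetical line-table rows: (address, line, prologue_end).
# Only rows inside the function's [low_pc, high_pc) range are needed
# to compute the prologue size, which is the point of the question above.
def prologue_size(rows, low_pc, high_pc):
    in_range = [r for r in rows if low_pc <= r[0] < high_pc]
    for addr, _line, prologue_end in sorted(in_range):
        if prologue_end:
            return addr - low_pc  # bytes between entry and first "real" insn
    return 0  # no prologue_end marker found

rows = [
    (0x1000, 10, False),  # push rbp / mov rbp, rsp ...
    (0x1004, 10, True),   # first statement after the prologue
    (0x1010, 11, False),
    (0x2000, 50, False),  # a different function's rows
]
print(prologue_size(rows, 0x1000, 0x1020))  # -> 4
```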
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] RFC: Replacing all PDB code with non-Windows specific implementation

2018-10-02 Thread Zachary Turner via lldb-dev
To clarify, #1 is saying to re-implement the API **in LLVM** so that LLDB
transparently just works with no code changes.  While #2 is saying to
re-implement the plugin **in LLDB** to not use that API at all, and instead
use the low-level API that parses records directly from the file.

On Tue, Oct 2, 2018 at 1:57 PM Zachary Turner  wrote:

> Currently our PDBASTParser and SymbolFilePDB can only work on Windows
> because it relies on a builtin Windows library.
>
> In LLVM now we have full ability to read, parse, and interpret the
> contents of PDB files at the byte level.  There are two approaches to
> getting this working in LLDB.
>
> 1) Re-implement all the APIs that LLDB is currently using in terms of
> LLVM's native PDB parsing code.  This would be the most transparent
> solution from LLDB's point of view.
>
> 2) Re-implement the code in LLDB in terms of LLVM's low level PDB API.
>
> Originally there was someone working on #1, but I'm having second thoughts
> about whether that is the best approach.  The API in question has a lot of
> "architecture overhead" associated with it, both in terms of runtime cost
> and implementation cost.  It essentially aims to be a one-size-fits-all
> abstraction over every possible use case for consuming debug info.  So in
> order to implement #1 you end up doing a lot of work that isn't strictly
> necessary for LLDB's use case: mechanical code that exists only to fit
> with the design.
>
> But LLDB doesn't exactly need all of that.  So I started thinking about
> #2.  Instead of spending weeks / months completing this API, then finding
> all the places where the APIs differ semantically in subtle ways that
> require changing the user code, we can just get rid of the existing
> implementation and re-implement existing functionality in terms of the low
> level PDB functionality of LLVM.
>
> Obviously, until it's at parity with the existing Windows-only
> implementation, this would be done side-by-side so the existing
> implementation would stay and still be the default.  We could put the new
> implementation behind an environment variable or something for testing
> purposes (and use it unconditionally on non-Windows).
>
>
> I'm going to experiment with this by implementing a SymbolFilePDBNative
> plugin, but I want to see if anyone has strong objections to this approach.
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] RFC: Replacing all PDB code with non-Windows specific implementation

2018-10-02 Thread Zachary Turner via lldb-dev
Currently our PDBASTParser and SymbolFilePDB can only work on Windows
because it relies on a builtin Windows library.

In LLVM now we have full ability to read, parse, and interpret the contents
of PDB files at the byte level.  There are two approaches to getting this
working in LLDB.

1) Re-implement all the APIs that LLDB is currently using in terms of
LLVM's native PDB parsing code.  This would be the most transparent
solution from LLDB's point of view.

2) Re-implement the code in LLDB in terms of LLVM's low level PDB API.

Originally there was someone working on #1, but I'm having second thoughts
about whether that is the best approach.  The API in question has a lot of
"architecture overhead" associated with it, both in terms of runtime cost
and implementation cost.  It essentially aims to be a one-size-fits-all
abstraction over every possible use case for consuming debug info.  So in
order to implement #1 you end up doing a lot of work that isn't strictly
necessary for LLDB's use case: mechanical code that exists only to fit with
the design.

But LLDB doesn't exactly need all of that.  So I started thinking about
#2.  Instead of spending weeks / months completing this API, then finding
all the places where the APIs differ semantically in subtle ways that
require changing the user code, we can just get rid of the existing
implementation and re-implement existing functionality in terms of the low
level PDB functionality of LLVM.

Obviously, until it's at parity with the existing Windows-only
implementation, this would be done side-by-side so the existing
implementation would stay and still be the default.  We could put the new
implementation behind an environment variable or something for testing
purposes (and use it unconditionally on non-Windows).


I'm going to experiment with this by implementing a SymbolFilePDBNative
plugin, but I want to see if anyone has strong objections to this approach.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] LLDB Reproducers

2018-09-20 Thread Zachary Turner via lldb-dev
For the first, I think 99% of the time the bug is not caused by the
sequence of gdb remote packets.  The sequence of gdb remote packets just
happens to be the means by which the debugger was put into the state in
which it failed.  If there is another, stable way of getting the debugger
into the same state, this part is solvable.

The second issue you raised does seem like something that would require
human intervention to specify the expected state, though, as part of a test.
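Jim's point below, that the gdb-remote Provider should record the requests
it receives and not just the answers, amounts to a divergence check during
replay.  A rough sketch of that idea (the packet names are illustrative,
and this is not LLDB's actual implementation):

```python
# Sketch of a replay provider that detects when a newer lldb issues a
# gdb-remote request that wasn't in the recorded session.
class ReplayDivergence(Exception):
    pass

class PacketReplayer:
    def __init__(self, recording):
        # recording: ordered list of (request, response) pairs
        self.recording = list(recording)
        self.pos = 0

    def handle(self, request):
        if self.pos >= len(self.recording):
            raise ReplayDivergence("request past end of recording: %r" % request)
        expected, response = self.recording[self.pos]
        if request != expected:
            # A future lldb changed its packet sequence; fail loudly
            # instead of going off the rails.
            raise ReplayDivergence("expected %r, got %r" % (expected, request))
        self.pos += 1
        return response

replayer = PacketReplayer([("qSupported", "multiprocess+"),
                           ("qProcessInfo", "pid:1234;")])
print(replayer.handle("qSupported"))  # -> multiprocess+
```

The failure mode Jim describes is exactly the mismatch branch: the replay
cannot proceed, but at least it reports why instead of answering the wrong
request.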

On Wed, Sep 19, 2018 at 11:17 AM Jim Ingham  wrote:

> There are a couple of problems with using these reproducers in the
> testsuite.
>
> The first is that we make no commitments that a future lldb will
> implement the "same" session with the same sequence of gdb-remote packet
> requests.  We often monkey around with lldb's sequences of requests to make
> things go faster.  So some future lldb will end up making a request that
> wasn't in the data from the reproducer, and at that point we won't really
> know what to do.  The Provider for gdb-remote packets should record the
> packets it receives - not just the answers it gives - so it can detect this
> error and not go off the rails.  But I'm pretty sure it isn't worth the
> effort to try to get lldb to maintain all the old sequences it used in the
> past in order to support keeping the reproducers alive.  But this does mean
> that this is an unreliable way to write tests.
>
> The second is that the reproducers as described have no notion of
> "expected state".  They are meant to go along with a bug report where the
> "x was wrong" part is not contained in the reproducer.  That would be an
> interesting thing to think about adding, but I think the problem space here
> is complicated enough already...  You can't write a test if you don't know
> the correct end state.
>
> Jim
>
>
> > On Sep 19, 2018, at 10:59 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > I assume that reproducing race conditions is out of scope?
> >
> > Also, will it be possible to incorporate these reproducers into the test
> suite somehow?  It would be nice if we could create a tar file similar to a
> linkrepro, check in the tar file, and then have a test where you don't have
> to write any python code, any Makefile, any source code, or any anything
> for that matter.  It just enumerates all of these repro tar files in a
> certain location and runs that test.
> >
> > On Wed, Sep 19, 2018 at 10:48 AM Leonard Mosescu via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > Great, thanks. This means that the lldb-server issues are not in scope
> for this feature, right?
> >
> > On Wed, Sep 19, 2018 at 10:09 AM, Jonas Devlieghere <
> jdevliegh...@apple.com> wrote:
> >
> >
> >> On Sep 19, 2018, at 6:49 PM, Leonard Mosescu 
> wrote:
> >>
> >> Sounds like a fantastic idea.
> >>
> >> How would this work when the behavior of the debugee process is
> non-deterministic?
> >
> > All the communication between the debugger and the inferior goes through
> the
> > GDB remote protocol. Because we capture and replay this, we can reproduce
> > without running the executable, which is particularly convenient when
> you were
> > originally debugging something on a different device for example.
> >
> >>
> >> On Wed, Sep 19, 2018 at 6:50 AM, Jonas Devlieghere via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >> Hi everyone,
> >>
> >> We all know how hard it can be to reproduce an issue or crash in LLDB.
> There
> >> are a lot of moving parts and subtle differences can easily add up. We
> want to
> >> make this easier by generating reproducers in LLDB, similar to what
> clang does
> >> today.
> >>
> >> The core idea is as follows: during normal operation we capture whatever
> >> information is needed to recreate the current state of the debugger.
> When
> >> something goes wrong, this becomes available to the user. Someone else
> should
> >> then be able to reproduce the same issue with only this data, for
> example on a
> >> different machine.
> >>
> >> It's important to note that we want to replay the debug session from the
> >> reproducer, rather than just recreating the current state. This ensures
> that we
> >> have access to all the events leading up to the problem, which are
> usually far
> >> more important than the error state itself.
> >>
> >> # High Level Design
> >>
> >> Concretely we want to extend LLDB in

Re: [lldb-dev] [RFC] LLDB Reproducers

2018-09-19 Thread Zachary Turner via lldb-dev
By the way, several weeks / months ago I had an idea for exposing a
debugger object model.  That would be one very powerful way to create
reproducers, but it would be a large effort.  The idea is that if every
important part of your debugger is represented by some component in a
debugger object model, and all interactions (including internal
interactions) go through the object model, then you can record every state
change to the object model and replay it.
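The object-model idea is essentially event sourcing: if every mutation goes
through the model, the log of mutations is itself the reproducer.  A toy
sketch of that pattern (all names invented; nothing here is real LLDB
state):

```python
# Sketch of the "debugger object model" idea: every state change goes
# through the model as an event, so a session can be replayed by
# re-applying the recorded events to a fresh model.
class ObjectModel:
    def __init__(self):
        self.state = {}
        self.log = []

    def apply(self, path, value, record=True):
        self.state[path] = value
        if record:
            self.log.append((path, value))

    def replay_into(self, other):
        # Re-apply recorded mutations without re-recording them.
        for path, value in self.log:
            other.apply(path, value, record=False)

model = ObjectModel()
model.apply("target.path", "/bin/ls")
model.apply("process.state", "stopped")

fresh = ObjectModel()
model.replay_into(fresh)
print(fresh.state == model.state)  # -> True
```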

On Wed, Sep 19, 2018 at 10:59 AM Zachary Turner  wrote:

> I assume that reproducing race conditions is out of scope?
>
> Also, will it be possible to incorporate these reproducers into the test
> suite somehow?  It would be nice if we could create a tar file similar to a
> linkrepro, check in the tar file, and then have a test where you don't have
> to write any python code, any Makefile, any source code, or any anything
> for that matter.  It just enumerates all of these repro tar files in a
> certain location and runs that test.
>
> On Wed, Sep 19, 2018 at 10:48 AM Leonard Mosescu via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Great, thanks. This means that the lldb-server issues are not in scope
>> for this feature, right?
>>
>> On Wed, Sep 19, 2018 at 10:09 AM, Jonas Devlieghere <
>> jdevliegh...@apple.com> wrote:
>>
>>>
>>>
>>> On Sep 19, 2018, at 6:49 PM, Leonard Mosescu  wrote:
>>>
>>> Sounds like a fantastic idea.
>>>
>>> How would this work when the behavior of the debugee process is
>>> non-deterministic?
>>>
>>>
>>> All the communication between the debugger and the inferior goes through
>>> the
>>> GDB remote protocol. Because we capture and replay this, we can reproduce
>>> without running the executable, which is particularly convenient when
>>> you were
>>> originally debugging something on a different device for example.
>>>
>>>
>>> On Wed, Sep 19, 2018 at 6:50 AM, Jonas Devlieghere via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
 Hi everyone,

 We all know how hard it can be to reproduce an issue or crash in LLDB.
 There
 are a lot of moving parts and subtle differences can easily add up. We
 want to
 make this easier by generating reproducers in LLDB, similar to what
 clang does
 today.

 The core idea is as follows: during normal operation we capture whatever
 information is needed to recreate the current state of the debugger.
 When
 something goes wrong, this becomes available to the user. Someone else
 should
 then be able to reproduce the same issue with only this data, for
 example on a
 different machine.

 It's important to note that we want to replay the debug session from the
 reproducer, rather than just recreating the current state. This ensures
 that we
 have access to all the events leading up to the problem, which are
 usually far
 more important than the error state itself.

 # High Level Design

 Concretely we want to extend LLDB in two ways:

 1.  We need to add infrastructure to _generate_ the data necessary for
 reproducing.
 2.  We need to add infrastructure to _use_ the data in the reproducer
 to replay
 the debugging session.

 Different parts of LLDB will have different definitions of what data
 they need
 to reproduce their path to the issue. For example, capturing the
 commands
 executed by the user is very different from tracking the dSYM bundles
 on disk.
 Therefore, we propose to have each component deal with its needs in a
 localized
 way. This has the advantage that the functionality can be developed and
 tested
 independently.

 ## Providers

 We'll call a combination of (1) and (2) for a given component a
 `Provider`. For
 example, we'd have a provider for user commands and a provider for
 dSYM files.
 A provider will know how to keep track of its information, how to
 serialize it
 as part of the reproducer as well as how to deserialize it again and
 use it to
 recreate the state of the debugger.

 With one exception, the lifetime of the provider coincides with that of
 the
 `SBDebugger`, because that is the scope of what we consider here to be
 a single
 debug session. The exception would be the provider for the global
 module cache,
 because it is shared between multiple debuggers. Although it would be
 conceptually straightforward to add a provider for the shared module
 cache,
 this significantly increases the complexity of the reproducer framework
 because
 of its implication on the lifetime and everything related to that.

 For now we will ignore this problem which means we will not replay the
 construction of the shared module cache but rather build it up during
 replaying, as if the current debug session was the first and only one
 using it.
 The impact of doing so 
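The Provider concept quoted above can be sketched roughly as follows.  All
class and method names here are invented for illustration; the proposal does
not specify this interface:

```python
# Rough sketch of the proposed Provider idea: each component records what
# it needs during a session and can serialize/deserialize it for replay.
import json

class Provider:
    """One component's record/replay logic."""
    def record(self, event): ...
    def serialize(self): ...
    def deserialize(self, data): ...

class CommandProvider(Provider):
    """Hypothetical provider for user commands."""
    def __init__(self):
        self.commands = []

    def record(self, event):
        self.commands.append(event)

    def serialize(self):
        return json.dumps(self.commands)

    def deserialize(self, data):
        self.commands = json.loads(data)

class Generator:
    """Owns the providers for one debug session (one SBDebugger)."""
    def __init__(self, providers):
        self.providers = providers

    def save(self):
        # Serialize every provider into one reproducer bundle.
        return {name: p.serialize() for name, p in self.providers.items()}

gen = Generator({"commands": CommandProvider()})
gen.providers["commands"].record("breakpoint set -n main")
gen.providers["commands"].record("run")
reproducer = gen.save()
print(reproducer["commands"])  # -> ["breakpoint set -n main", "run"]
```

A Loader would do the reverse: hand each provider its slice of the bundle
and let it recreate the session state.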

Re: [lldb-dev] [RFC] LLDB Reproducers

2018-09-19 Thread Zachary Turner via lldb-dev
I assume that reproducing race conditions is out of scope?

Also, will it be possible to incorporate these reproducers into the test
suite somehow?  It would be nice if we could create a tar file similar to a
linkrepro, check in the tar file, and then have a test where you don't have
to write any python code, any Makefile, any source code, or any anything
for that matter.  It just enumerates all of these repro tar files in a
certain location and runs that test.

On Wed, Sep 19, 2018 at 10:48 AM Leonard Mosescu via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Great, thanks. This means that the lldb-server issues are not in scope for
> this feature, right?
>
> On Wed, Sep 19, 2018 at 10:09 AM, Jonas Devlieghere <
> jdevliegh...@apple.com> wrote:
>
>>
>>
>> On Sep 19, 2018, at 6:49 PM, Leonard Mosescu  wrote:
>>
>> Sounds like a fantastic idea.
>>
>> How would this work when the behavior of the debugee process is
>> non-deterministic?
>>
>>
>> All the communication between the debugger and the inferior goes through
>> the
>> GDB remote protocol. Because we capture and replay this, we can reproduce
>> without running the executable, which is particularly convenient when you
>> were
>> originally debugging something on a different device for example.
>>
>>
>> On Wed, Sep 19, 2018 at 6:50 AM, Jonas Devlieghere via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hi everyone,
>>>
>>> We all know how hard it can be to reproduce an issue or crash in LLDB.
>>> There
>>> are a lot of moving parts and subtle differences can easily add up. We
>>> want to
>>> make this easier by generating reproducers in LLDB, similar to what
>>> clang does
>>> today.
>>>
>>> The core idea is as follows: during normal operation we capture whatever
>>> information is needed to recreate the current state of the debugger. When
>>> something goes wrong, this becomes available to the user. Someone else
>>> should
>>> then be able to reproduce the same issue with only this data, for
>>> example on a
>>> different machine.
>>>
>>> It's important to note that we want to replay the debug session from the
>>> reproducer, rather than just recreating the current state. This ensures
>>> that we
>>> have access to all the events leading up to the problem, which are
>>> usually far
>>> more important than the error state itself.
>>>
>>> # High Level Design
>>>
>>> Concretely we want to extend LLDB in two ways:
>>>
>>> 1.  We need to add infrastructure to _generate_ the data necessary for
>>> reproducing.
>>> 2.  We need to add infrastructure to _use_ the data in the reproducer to
>>> replay
>>> the debugging session.
>>>
>>> Different parts of LLDB will have different definitions of what data
>>> they need
>>> to reproduce their path to the issue. For example, capturing the commands
>>> executed by the user is very different from tracking the dSYM bundles on
>>> disk.
>>> Therefore, we propose to have each component deal with its needs in a
>>> localized
>>> way. This has the advantage that the functionality can be developed and
>>> tested
>>> independently.
>>>
>>> ## Providers
>>>
>>> We'll call a combination of (1) and (2) for a given component a
>>> `Provider`. For
>>> example, we'd have a provider for user commands and a provider for dSYM
>>> files.
>>> A provider will know how to keep track of its information, how to
>>> serialize it
>>> as part of the reproducer as well as how to deserialize it again and use
>>> it to
>>> recreate the state of the debugger.
>>>
>>> With one exception, the lifetime of the provider coincides with that of
>>> the
>>> `SBDebugger`, because that is the scope of what we consider here to be a
>>> single
>>> debug session. The exception would be the provider for the global module
>>> cache,
>>> because it is shared between multiple debuggers. Although it would be
>>> conceptually straightforward to add a provider for the shared module
>>> cache,
>>> this significantly increases the complexity of the reproducer framework
>>> because
>>> of its implication on the lifetime and everything related to that.
>>>
>>> For now we will ignore this problem which means we will not replay the
>>> construction of the shared module cache but rather build it up during
>>> replaying, as if the current debug session was the first and only one
>>> using it.
>>> The impact of doing so is significant, as no issue caused by the shared
>>> module
>>> cache will be reproducible, but does not limit reproducing any issue
>>> unrelated
>>> to it.
>>>
>>> ## Reproducer Framework
>>>
>>> To coordinate between the data from different components, we'll need to
>>> introduce a global reproducer infrastructure. We have a component
>>> responsible
>>> for reproducer generation (the `Generator`) and for using the reproducer
>>> (the
>>> `Loader`). They are essentially two ways of looking at the same unit of
>>> replayable work.
>>>
>>> The Generator keeps track of its providers and whether or not we need to
>>> generate a reproducer. When a 

Re: [lldb-dev] Symtab for PECOFF

2018-08-31 Thread Zachary Turner via lldb-dev
That would be my thought, yea
On Fri, Aug 31, 2018 at 1:21 AM Aleksandr Urakov <
aleksandr.ura...@jetbrains.com> wrote:

> Thanks for the reply!
>
> Yes, the function search is implemented in the way similar to what you
> have described (and even the search in a symbol file is done before the
> search in a symtab). But for the Module::FindSymbolsWithNameAndType function I
> can't find any relevant function in the SymbolFile. Do you mean that we
> need to extend the SymbolFile interface with such a function (which will
> search all public symbols by the name and the type), and then implement it
> in derived classes?
>
> On Thu, Aug 30, 2018 at 6:03 PM Zachary Turner  wrote:
>
>> It seems reasonable to me to say that if the symbol is not found in the
>> executables symtab, it will fall back to searching in the symbol file..
>> this logic doesn’t even need to be specific to PDB
>> On Thu, Aug 30, 2018 at 7:00 AM Aleksandr Urakov via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Hello!
>>>
>>> I'm working on expression evaluation on Windows, and currently I'm
>>> trying to make JIT evaluation work.
>>>
>>> When I'm trying to evaluate the next expression:
>>>
>>> print S::x
>>>
>>>
>>> on the next code:
>>>
>>> struct S {
>>>   static int x;
>>>   void foo() { }
>>> };
>>> int S::x = 5;
>>>
>>> int main() {
>>>   S().foo(); // here
>>>   return 0;
>>> }
>>>
>>>
>>> the evaluation requires JIT (but printing global variables does not, and
>>> I can't figure out what the key difference is between a class static
>>> variable and a global variable in this case).
>>>
>>> During symbols resolving IRExecutionUnit::FindInSymbols is used, and it
>>> searches a symbol among functions (which is not our case), and then calls
>>> Module::FindSymbolsWithNameAndType for each module in the list. This
>>> function looks symbols up in a Symtab, which is retrieved through a
>>> SymbolVendor, and it retrieves one from an ObjectFile. ELF files
>>> contain symbols for such variables in their symbol tables, but the
>>> problem is that PE files usually contain info about exported (and imported)
>>> symbols only, so the lookup in Symtab fails.
>>>
>>> I think that we need somehow to retrieve a symbols info from a symbol
>>> file. I thought that we can emit a Symtab from a SymbolFile just like
>>> from an ObjectFile (and for now implement it for SymbolFilePDB only),
>>> but I'm not sure if this solution is good. How can we solve the problem
>>> else?
>>>
>>> --
>>> Aleksandr Urakov
>>> Software Developer
>>> JetBrains
>>> http://www.jetbrains.com
>>> The Drive to Develop
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>
>
> --
> Aleksandr Urakov
> Software Developer
> JetBrains
> http://www.jetbrains.com
> The Drive to Develop
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Symtab for PECOFF

2018-08-30 Thread Zachary Turner via lldb-dev
It seems reasonable to me to say that if the symbol is not found in the
executables symtab, it will fall back to searching in the symbol file..
this logic doesn’t even need to be specific to PDB
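The fallback Zachary describes could look something like this.  Everything
here is hypothetical (plain dicts standing in for the symtab and symbol
file); it is not LLDB's actual lookup code:

```python
# Sketch of the suggested lookup fallback: try the object file's symtab
# first, then fall back to the symbol file (e.g. the PDB).
def find_symbol(name, symtab, symbol_file):
    sym = symtab.get(name)
    if sym is not None:
        return sym
    # PE/COFF symtabs typically only carry exported/imported symbols,
    # so consult the symbol file for everything else.
    return symbol_file.get(name)

symtab = {"exported_func": 0x1000}        # what a PE symtab might expose
symbol_file = {"S::x": 0x2040,            # what the PDB knows about
               "exported_func": 0x1000}

print(hex(find_symbol("S::x", symtab, symbol_file)))  # -> 0x2040
```

Since the fallback lives above any particular symbol-file plugin, the same
logic would work for PDB, DWARF, or anything else, which is the point being
made here.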
On Thu, Aug 30, 2018 at 7:00 AM Aleksandr Urakov via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hello!
>
> I'm working on expression evaluation on Windows, and currently I'm
> trying to make JIT evaluation work.
>
> When I'm trying to evaluate the next expression:
>
> print S::x
>
>
> on the next code:
>
> struct S {
>   static int x;
>   void foo() { }
> };
> int S::x = 5;
>
> int main() {
>   S().foo(); // here
>   return 0;
> }
>
>
> the evaluation requires JIT (but printing global variables does not, and
> I can't figure out what the key difference is between a class static
> variable and a global variable in this case).
>
> During symbols resolving IRExecutionUnit::FindInSymbols is used, and it
> searches a symbol among functions (which is not our case), and then calls
> Module::FindSymbolsWithNameAndType for each module in the list. This
> function looks symbols up in a Symtab, which is retrieved through a
> SymbolVendor, and it retrieves one from an ObjectFile. ELF files contain
> symbols for such variables in their symbol tables, but the problem is
> that PE files usually contain info about exported (and imported) symbols
> only, so the lookup in Symtab fails.
>
> I think that we need somehow to retrieve a symbols info from a symbol
> file. I thought that we can emit a Symtab from a SymbolFile just like
> from an ObjectFile (and for now implement it for SymbolFilePDB only), but
> I'm not sure if this solution is good. How can we solve the problem else?
>
> --
> Aleksandr Urakov
> Software Developer
> JetBrains
> http://www.jetbrains.com
> The Drive to Develop
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-23 Thread Zachary Turner via lldb-dev
I’m fine with it. I still would like to see inline tests ported to a custom
lit test format eventually, but this seems orthogonal to that and it can be
done in addition to this
On Thu, Aug 23, 2018 at 4:25 PM Vedant Kumar  wrote:

> Pinging this because I'd like this to go forward to make testing easier.
>
> I know folks have concerns about maintaining completeness of the scripting
> APIs and about keeping the test suite debuggable. I just don't think making
> FileCheck available in inline tests is counter to those goals :).
>
> I think this boils down to having a more powerful replacement for
> `self.expect` in lldbinline tests. As we're actively discouraging use of
> pexpect during code review now, we need some replacement.
>
> vedant
>
> On Aug 15, 2018, at 12:18 PM, Vedant Kumar  wrote:
>
>
>
> On Aug 15, 2018, at 12:12 PM, Jason Molenda  wrote:
>
>
>
> On Aug 15, 2018, at 11:34 AM, Vedant Kumar  wrote:
>
>
>
> On Aug 14, 2018, at 6:19 PM, Jason Molenda  wrote:
>
> It's more verbose, and it does mean test writers need to learn the public
> API, but it's also much more stable and debuggable in the future.
>
>
> I'm not sure about this. Having looked at failing sb api tests for a while
> now, I find them about as easy to navigate and fix as FileCheck tests in
> llvm.
>
>
> I don't find that to be true.  I see a failing test on line 79 or
> whatever, and depending on what line 79 is doing, I'll throw in some
> self.runCmd("bt")'s or self.runCmd("fr v") to the test, re-run, and see
> what the relevant context is quickly. For most simple tests, I can usually
> spot the issue in under a minute.  dotest.py likes to eat output when it's
> run in multiprocess mode these days, so I have to remember to add
> --no-multiprocess.  If I'm adding something that I think is generally
> useful to debug the test case, I'll add a conditional block testing against
> self.TraceOn() and print things that may help people who are running
> dotest.py with -t trace mode enabled.
>
>
> I do agree that there are effective ways of debugging sb api tests. Having
> worked with plenty of filecheck-based tests in llvm/clang/swift, I find
> them to be as easy (or easier for me personally) to debug.
>
>
> Sometimes there is a test written so it has a "verify this value" function
> that is run over a variety of different variables during the test
> timeframe, and debugging that can take a little more work to understand the
> context that is failing.  But that kind of test would be harder (or at
> least much more redundant) to express in a FileCheck style system anyway,
> so I can't ding it.
>
>
>
> Yep, sounds like a great candidate for a unit test or an SB API test.
>
>
> As for the difficulty of writing SB API tests, you do need to know the
> general architecture of lldb (a target has a process, a process has
> threads, a thread has frames, a frame has variables), the public API which
> quickly becomes second nature because it is so regular, and then there's
> the testsuite specific setup and template code.  But is that that
> intimidating to anyone familiar with lldb?
>
>
> Not intimidating, no. Cumbersome and slow, absolutely. So much so that I
> don't see a way of adequately testing my patches this way. It would just
> take too much time.
>
> vedant
>
> packages/Python/lldbsuite/test/sample_test/TestSampleTest.py is 50 lines
> including comments; there's about ten lines of source related to
> initializing / setting up the testsuite, and then 6 lines is what's needed
> to run to a breakpoint, get a local variable, check the value.
>
>
> J
>
>
>
>
>
> It's a higher up front cost but we're paid back in being able to develop
> lldb more quickly in the future, where our published API behaviors are
> being tested directly, and the things that must not be broken.
>
>
> I think the right solution here is to require API tests when new
> functionality is introduced. We can enforce this during code review. Making
> it impossible to write tests against the driver's output doesn't seem like
> the best solution. It means that far fewer tests will be written (note that
> a test suite run of lldb gives less than 60% code coverage). It also means
> that the driver's output isn't tested as much as it should be.
>
>
> The lldb driver's output isn't a contract, and treating it like one makes
> the debugger harder to innovate in the future.
>
>
> I appreciate your experience with this (pattern matching on driver input)
> in gdb. That said, I think there are reliable/maintainable ways to do this,
> and proven examples we can learn from in llvm/clang/etc.
>
>
> It's also helpful when adding new features to ensure you've exposed the
> feature through the API sufficiently.  The first thing I thought to try
> when writing the example below was SBFrame::IsArtificial() (see
> SBFrame::IsInlined()) which doesn't exist.  If a driver / IDE is going to
> visually indicate artificial frames, they'll need that.
>
>
> Sure. That's true, we do need API exposure for new fe

Re: [lldb-dev] PDB symbol reader supports C++ only?

2018-08-21 Thread Zachary Turner via lldb-dev
I think Aaron added that code for when the language is not set, but he can
clarify.

Off the top of my head, I guess it helps with demangling symbols. E.g., you
can’t demangle symbols from a TU without knowing what the language is.
There could be other reasons though. For example each language is going to
have an ABI with respect to the generated code. This is used for unwinding,
stepping, jitting code to run in the target, etc. all of those could be
affected by the language.

Maybe someone else can chime in with more reasons
On Tue, Aug 21, 2018 at 7:10 PM Vadim Chugunov  wrote:

> Would you mind going into a bit more detail on what sort of problems an
> unknown language could cause?   I'd like to understand the issue before
> jumping in to fix anything.  AFAIK, in the case of DWARF symbols, debug
> info for unknown languages is still used, so it wouldn't be the first for
> LLDB...
>
> Also, the second fragment
> 
> checks for specific file extensions, which is an unreliable method, IMO,
> since there are more extensions in use for C++ alone.  Code could also be
> generated by a template engine, which will probably use a different
> extension, etc.   I'd rather not just hardcode '.rs' for Rust.
> I was hoping Aaron could comment on why this is necessary (i.e. why not
> just trust the language flag?)
>
> Thanks!
>
> On Mon, Aug 20, 2018 at 7:35 PM Zachary Turner  wrote:
>
>> Various parts of lldb require knowing the source language. It’s possible
>> that things will mostly work if you report that the language is c++, but
>> you’ll probably get errors in other areas. It goes all the way down to the
>> CodeView level, where certain cv records indicate the original source
>> language. Can you check cvconst.h (ships with DIA SDK) and look for the
>> enumeration corresponding to source language? Does it have a value for
>> Rust? I’m guessing it doesn’t. When you generate PDBs for Rust you probably
>> need to put some unique value there, and then we could properly set the
>> language in lldb
>> On Mon, Aug 20, 2018 at 7:15 PM Vadim Chugunov  wrote:
>>
>>> Hi!
>>> I've been investigating why LLDB refuses to set breakpoints in Rust
>>> source files when using PDB debug info on Windows...  This seems to stem
>>> from a couple of checks here
>>> 
>>> and here
>>> .
>>>
>>> I am wondering, what is the backstory there?  Are those still
>>> necessary?  I tried disabling them and Rust debugging worked just fine...
>>>
>>>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
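For reference, the enumeration Zachary mentions is CV_CFL_LANG in DIA's cvconst.h. A minimal sketch of the kind of language mapping the PDB plugin would need follows; the C and C++ values are the documented ones, but the Rust entry is a hypothetical placeholder, since (as suspected in the thread) the enum had no official Rust value:

```python
# CV_CFL_LANG values from DIA's cvconst.h (subset; verify against your SDK).
CV_CFL_C   = 0x00
CV_CFL_CXX = 0x01
# Hypothetical: the enum had no official Rust value at the time of this
# thread, so a Rust PDB producer would have to claim an unused code.
CV_CFL_RUST_PLACEHOLDER = 0xFF

def cv_lang_to_lldb_name(code):
    """Map a CodeView source-language code to an lldb language name."""
    return {
        CV_CFL_C: "c",
        CV_CFL_CXX: "c++",
        CV_CFL_RUST_PLACEHOLDER: "rust",
    }.get(code, "unknown")

print(cv_lang_to_lldb_name(CV_CFL_CXX))  # c++
```

Anything outside the known set falls back to "unknown", which is roughly the situation the thread describes for Rust PDBs today.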


Re: [lldb-dev] PDB symbol reader supports C++ only?

2018-08-20 Thread Zachary Turner via lldb-dev
Various parts of lldb require knowing the source language. It’s possible
that things will mostly work if you report that the language is c++, but
you’ll probably get errors in other areas. It goes all the way down to the
CodeView level, where certain cv records indicate the original source
language. Can you check cvconst.h (ships with DIA SDK) and look for the
enumeration corresponding to source language? Does it have a value for
Rust? I’m guessing it doesn’t. When you generate PDBs for Rust you probably
need to put some unique value there, and then we could properly set the
language in lldb
On Mon, Aug 20, 2018 at 7:15 PM Vadim Chugunov  wrote:

> Hi!
> I've been investigating why LLDB refuses to set breakpoints in Rust source
> files when using PDB debug info on Windows...  This seems to stem from a
> couple of checks here
> 
> and here
> .
>
> I am wondering, what is the backstory there?  Are those still necessary?
> I tried disabling them and Rust debugging worked just fine...
>
>


Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-15 Thread Zachary Turner via lldb-dev
What do your patches do, out of curiosity?

On Wed, Aug 15, 2018 at 12:45 PM Vedant Kumar  wrote:

>
> On Aug 15, 2018, at 12:27 PM, Zachary Turner  wrote:
>
> Back to the original proposal, my biggest concern is that a single inline
> test could generate many FileCheck invocations.  This could cause
> measurable performance impact on the test suite.  Have you considered this?
>
>
> That's a good point. I hadn't considered that. My thoughts on that are;
>
> - It's relatively cheap to create a FileCheck process. If the build is
> (A|T)sanified, we can copy in a non-sanitized FileCheck to speed things up.
>
> - Based on the time it takes to run check-{llvm,clang} locally, which have
> ~56,000 FileCheck invocations, my intuition is that the overhead ought to
> be manageable.
>
> - The status quo is doing Python's re.search over a chunk of command
> output. My (unverified) intuition is that FileCheck won't be slower than
> that. Actually, FileCheck has an algorithmic advantage because it doesn't
> re-scan the input text from the beginning of the text each time it tries to
> match a substring. `self.expect` does.
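The algorithmic point above — `self.expect` searches the whole output from the start for every pattern, while FileCheck resumes from the previous match — can be illustrated with a small self-contained sketch (a toy model of the two matching strategies, not the actual lldbsuite or FileCheck code):

```python
import re

def match_rescan(text, patterns):
    # self.expect-style: each pattern is searched from the start of the
    # whole text, so ordering between patterns is never verified and the
    # input is re-scanned for every pattern.
    return all(re.search(p, text) for p in patterns)

def match_sequential(text, patterns):
    # FileCheck-style: each pattern must match at or after the end of the
    # previous match, so ordering is enforced and matching resumes where
    # the last match left off.
    pos = 0
    for p in patterns:
        m = re.compile(p).search(text, pos)
        if m is None:
            return False
        pos = m.end()
    return True

bt = "frame #0: sink\nframe #1: func3\nframe #2: func2\n"
# Out-of-order patterns: the rescanning matcher "passes" anyway.
print(match_rescan(bt, ["func2", "sink"]))      # True
print(match_sequential(bt, ["func2", "sink"]))  # False
```

The second matcher is both stricter (it catches ordering bugs) and cheaper on large outputs, which is the advantage claimed above.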
>
>
>
> Another possible solution is what I mentioned earlier, basically to expose
> a debugger object model.  This would allow you to accomplish what you want
> without FileCheck, while simultaneously making many other types of tests
> easier to write.  On the other hand, it’s a larger
> effort to create this system, but I think long term it would pay back
> enormously (it’s even useful as a general purpose debugger feature, not
> limited to testing)
>
>
> I'd volunteer to work on that. At the moment I really need to get some
> form of testing put together for my patches soon.
>
> vedant
>
>
> On Tue, Aug 14, 2018 at 5:31 PM Vedant Kumar via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hello,
>>
>> I'd like to make FileCheck available within lldb inline tests, in
>> addition to existing helpers like 'runCmd' and 'expect'.
>>
>> My motivation is that several tests I'm working on can't be made as
>> rigorous as they need to be without FileCheck-style checks. In particular,
>> the 'matching', 'substrs', and 'patterns' arguments to runCmd/expect don't
>> allow me to verify the ordering of checked input, to be stringent about
>> line numbers, or to capture & reuse snippets of text from the input stream.
>>
>> I'd curious to know if anyone else is interested or would be willing to
>> review this (https://reviews.llvm.org/D50751).
>>
>> Here's an example of an inline test which benefits from FileCheck-style
>> checking. This test is trying to check that certain frames appear in a
>> backtrace when stopped inside of the "sink" function. Notice that without
>> FileCheck, it's not possible to verify the order in which frames are
>> printed, and that dealing with line numbers would be cumbersome.
>>
>> ```
>> ---
>> a/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
>> +++
>> b/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
>> @@ -9,16 +9,21 @@
>>
>>  volatile int x;
>>
>> +// CHECK: frame #0: {{.*}}sink() at main.cpp:[[@LINE+2]] [opt]
>>  void __attribute__((noinline)) sink() {
>> -  x++; //% self.expect("bt", substrs = ['main', 'func1', 'func2',
>> 'func3', 'sink'])
>> +  x++; //% self.filecheck("bt", "main.cpp")
>>  }
>>
>> +// CHECK-NEXT: frame #1: {{.*}}func3() {{.*}}[opt] [artificial]
>>  void __attribute__((noinline)) func3() { sink(); /* tail */ }
>>
>> +// CHECK-NEXT: frame #2: {{.*}}func2() at main.cpp:[[@LINE+1]] [opt]
>>  void __attribute__((disable_tail_calls, noinline)) func2() { func3(); /*
>> regular */ }
>>
>> +// CHECK-NEXT: frame #3: {{.*}}func1() {{.*}}[opt] [artificial]
>>  void __attribute__((noinline)) func1() { func2(); /* tail */ }
>>
>> +// CHECK-NEXT: frame #4: {{.*}}main at main.cpp:[[@LINE+2]] [opt]
>>  int __attribute__((disable_tail_calls)) main() {
>>func1(); /* regular */
>>return 0;
>> ```
>>
>> For reference, here's the output of the "bt" command:
>>
>> ```
>> runCmd: bt
>> output: * thread #1, queue = 'com.apple.main-thread', stop reason =
>> breakpoint 1.1
>>   * frame #0: 0x00010c6a6f64 a.out`sink() at main.cpp:14 [opt]
>> frame #1: 0x00010c6a6f70 a.out`func3() at main.cpp:15 [opt]
>> [artificial]
>> frame #2: 0x00010c6a6f89 a.out`func2() at main.cpp:21 [opt]
>> frame #3: 0x00010c6a6f90 a.out`func1() at main.cpp:21 [opt]
>> [artificial]
>> frame #4: 0x00010c6a6fa9 a.out`main at main.cpp:28 [opt]
>> ```
>>
>> thanks,
>> vedant
>>
>


Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-15 Thread Zachary Turner via lldb-dev
Back to the original proposal, my biggest concern is that a single inline
test could generate many FileCheck invocations.  This could cause
measurable performance impact on the test suite.  Have you considered this?

Another possible solution is what I mentioned earlier, basically to expose
a debugger object model.  This would allow you to accomplish what you want
without FileCheck, while simultaneously making many other types of tests
easier to write.  On the other hand, it’s a larger
effort to create this system, but I think long term it would pay back
enormously (it’s even useful as a general purpose debugger feature, not
limited to testing)

On Tue, Aug 14, 2018 at 5:31 PM Vedant Kumar via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hello,
>
> I'd like to make FileCheck available within lldb inline tests, in addition
> to existing helpers like 'runCmd' and 'expect'.
>
> My motivation is that several tests I'm working on can't be made as
> rigorous as they need to be without FileCheck-style checks. In particular,
> the 'matching', 'substrs', and 'patterns' arguments to runCmd/expect don't
> allow me to verify the ordering of checked input, to be stringent about
> line numbers, or to capture & reuse snippets of text from the input stream.
>
> I'd curious to know if anyone else is interested or would be willing to
> review this (https://reviews.llvm.org/D50751).
>
> Here's an example of an inline test which benefits from FileCheck-style
> checking. This test is trying to check that certain frames appear in a
> backtrace when stopped inside of the "sink" function. Notice that without
> FileCheck, it's not possible to verify the order in which frames are
> printed, and that dealing with line numbers would be cumbersome.
>
> ```
> ---
> a/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
> +++
> b/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
> @@ -9,16 +9,21 @@
>
>  volatile int x;
>
> +// CHECK: frame #0: {{.*}}sink() at main.cpp:[[@LINE+2]] [opt]
>  void __attribute__((noinline)) sink() {
> -  x++; //% self.expect("bt", substrs = ['main', 'func1', 'func2',
> 'func3', 'sink'])
> +  x++; //% self.filecheck("bt", "main.cpp")
>  }
>
> +// CHECK-NEXT: frame #1: {{.*}}func3() {{.*}}[opt] [artificial]
>  void __attribute__((noinline)) func3() { sink(); /* tail */ }
>
> +// CHECK-NEXT: frame #2: {{.*}}func2() at main.cpp:[[@LINE+1]] [opt]
>  void __attribute__((disable_tail_calls, noinline)) func2() { func3(); /*
> regular */ }
>
> +// CHECK-NEXT: frame #3: {{.*}}func1() {{.*}}[opt] [artificial]
>  void __attribute__((noinline)) func1() { func2(); /* tail */ }
>
> +// CHECK-NEXT: frame #4: {{.*}}main at main.cpp:[[@LINE+2]] [opt]
>  int __attribute__((disable_tail_calls)) main() {
>func1(); /* regular */
>return 0;
> ```
>
> For reference, here's the output of the "bt" command:
>
> ```
> runCmd: bt
> output: * thread #1, queue = 'com.apple.main-thread', stop reason =
> breakpoint 1.1
>   * frame #0: 0x00010c6a6f64 a.out`sink() at main.cpp:14 [opt]
> frame #1: 0x00010c6a6f70 a.out`func3() at main.cpp:15 [opt]
> [artificial]
> frame #2: 0x00010c6a6f89 a.out`func2() at main.cpp:21 [opt]
> frame #3: 0x00010c6a6f90 a.out`func1() at main.cpp:21 [opt]
> [artificial]
> frame #4: 0x00010c6a6fa9 a.out`main at main.cpp:28 [opt]
> ```
>
> thanks,
> vedant
>


Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-14 Thread Zachary Turner via lldb-dev
On Tue, Aug 14, 2018 at 6:58 PM Jason Molenda  wrote:

>
>
> > On Aug 14, 2018, at 6:39 PM, Zachary Turner  wrote:
> >
> > Having bugs also makes the debugger harder to innovate in the future:
> > not having tests leads to having bugs, and SB API tests lead to not
> > having tests.
>
> Yes, lldb does not have these problems -- because we learned from our
> decades working on gdb, and did not repeat that mistake.  To be honest,
> lldb is such a young debugger - barely a decade old, depending on how you
> count it, that ANY testsuite approach would be fine at this point.  Add a
> couple more decades and we'd be back into the hole that gdb was in.  {I
> have not worked on gdb in over a decade, so I don't know how their testing
> methodology may be today}

That doesn’t mean that the current approach is the final word.  As new
people come onto the project, new ideas come forth and we should entertain
them rather than deciding that all decisions are set in stone forever.

For example, the object model based approach I mentioned earlier would not
have any of the problems that you’ve described from gdb.  Just because one
set of problems has been solved doesn’t mean we should declare victory and
say there’s no point in trying to solve the remaining problems too.  And
right now, the problem is that we need to be coming up with a way to make
tests easier to write so that people will actually write them


Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-14 Thread Zachary Turner via lldb-dev
Having bugs also makes the debugger harder to innovate in the future:
not having tests leads to having bugs, and SB API tests lead to not having
tests. At the end of the day, it doesn’t matter how stable the tests are if
there aren't enough of them. There should be about 10x-20x as many tests as
there are currently, and that will simply never happen under the current
approach. If it means we need to have multiple different styles of test, so
be it. The problem we face right now has nothing to do with command output
changing, and in fact I don't think that we've *ever* had this problem. So
we should be focusing on problems we have, not problems we don't have.

Note that it is not strictly necessary for a test to check the debugger's
command output. There could be a different set of commands whose only
purpose is to print information for the purposes of debugging. One idea
would be to introduce the notion of a debugger object model, where you
print various aspects of the debugger's state with an object-like syntax.
For example,

(lldb) p debugger.targets
~/foo (running, pid: 123)

(lldb) p debugger.targets[0].threads[0].frames[1]
int main(int argc=3, char **argv=0x12345678) + 0x72

(lldb) p debugger.targets[0].threads[0].frames[1].params[0]
int argc=3

(lldb) p debugger.targets[0].breakpoints
[1] main.cpp:72

Etc. You can get arbitrarily granular and expose every detail of the
debugger's internal state this way, and the output is so simple that you
never have to worry about it changing.

That said, I think history has shown that limiting ourselves to SB API
tests, despite all the theoretical benefits, leads to insufficient test
coverage. So while it has benefits, it also has problems for which we need
a better solution
On Tue, Aug 14, 2018 at 6:19 PM Jason Molenda via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> It's more verbose, and it does mean test writers need to learn the public
> API, but it's also much more stable and debuggable in the future.  It's a
> higher up front cost but we're paid back in being able to develop lldb more
> quickly in the future, where our published API behaviors are being tested
> directly, and the things that must not be broken.  The lldb driver's output
> isn't a contract, and treating it like one makes the debugger harder to
> innovate in the future.
>
> It's also helpful when adding new features to ensure you've exposed the
> feature through the API sufficiently.  The first thing I thought to try
> when writing the example below was SBFrame::IsArtificial() (see
> SBFrame::IsInlined()) which doesn't exist.  If a driver / IDE is going to
> visually indicate artificial frames, they'll need that.
>
> J
>
> > On Aug 14, 2018, at 5:56 PM, Vedant Kumar  wrote:
> >
> > It'd be easy to update FileCheck tests when changing the debugger (this
> happens all the time in clang/swift). OTOH, the verbosity of the python API
> means that fewer tests get written. I see a real need to make expressive
> tests easier to write.
> >
> > vedant
> >
> >> On Aug 14, 2018, at 5:38 PM, Jason Molenda  wrote:
> >>
> >> I'd argue against this approach because it's exactly why the lit tests
> don't run against the lldb driver -- they're hardcoding the output of the
> lldb driver command into the testsuite and these will eventually make it
> much more difficult to change and improve the driver as we've accumulated
> this style of test.
> >>
> >> This is a perfect test for a normal SB API.  Run to your breakpoints
> and check the stack frames.
> >>
> >> f0 = thread.GetFrameAtIndex(0)
> >> check that f0.GetFunctionName() == sink
> >> check that f0.IsArtifical() == True
> >> check that f0.GetLineEntry().GetLine() == expected line number
> >>
> >>
> >> it's more verbose, but it's also much more explicit about what it's
> checking, and easy to see what has changed if there is a failure.
> >>
> >>
> >> J
> >>
> >>> On Aug 14, 2018, at 5:31 PM, Vedant Kumar via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>>
> >>> Hello,
> >>>
> >>> I'd like to make FileCheck available within lldb inline tests, in
> addition to existing helpers like 'runCmd' and 'expect'.
> >>>
> >>> My motivation is that several tests I'm working on can't be made as
> rigorous as they need to be without FileCheck-style checks. In particular,
> the 'matching', 'substrs', and 'patterns' arguments to runCmd/expect don't
> allow me to verify the ordering of checked input, to be stringent about
> line numbers, or to capture & reuse snippets of text from the input stream.
> >>>
> >>> I'd curious to know if anyone else is interested or would be willing
> to review this (https://reviews.llvm.org/D50751).
> >>>
> >>> Here's an example of an inline test which benefits from
> FileCheck-style checking. This test is trying to check that certain frames
> appear in a backtrace when stopped inside of the "sink" function. Notice
> that without FileCheck, it's not possible to verify the order in which
> frames are printed, and that dealing with line numb

Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-14 Thread Zachary Turner via lldb-dev
On Tue, Aug 14, 2018 at 5:56 PM Vedant Kumar  wrote:

>
>
> On Aug 14, 2018, at 5:34 PM, Zachary Turner  wrote:
>
> I’ve thought about this in the past but the conclusion I came to is that
> lldbinline tests are actually just filecheck tests in disguise. Why do we
> need both? I’d rather delete the lldbinline infrastructure entirely and
> make a new lit TestFormat that basically does what lldbinline already does
>
>
> An inline test does more than simply pattern-matching input. It builds a
> program, sets breakpoints, etc. I'd rather make this existing
> infrastructure easier to use than come up with something new.
>
> vedant
>

Right, but only one specific type of lit test  depends on pattern matching,
and those are the SHTest format tests.  You can make an arbitrary test
format, including one that builds programs, set breakpoints etc.  the
format and structure of an lldbinline test need not even change at all
(except that I think we could eliminate the .py file).  The lldbinline
tests are about as close to a drop in fit for lit as we can get, the only
thing that needs to happen is the code in lldbinline.py needs to move to
something called InlineTestFormat.py and then be hooked into lit


Re: [lldb-dev] Using FileCheck in lldb inline tests

2018-08-14 Thread Zachary Turner via lldb-dev
I’ve thought about this in the past but the conclusion I came to is that
lldbinline tests are actually just filecheck tests in disguise. Why do we
need both? I’d rather delete the lldbinline infrastructure entirely and
make a new lit TestFormat that basically does what lldbinline already does
On Tue, Aug 14, 2018 at 5:31 PM Vedant Kumar via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hello,
>
> I'd like to make FileCheck available within lldb inline tests, in addition
> to existing helpers like 'runCmd' and 'expect'.
>
> My motivation is that several tests I'm working on can't be made as
> rigorous as they need to be without FileCheck-style checks. In particular,
> the 'matching', 'substrs', and 'patterns' arguments to runCmd/expect don't
> allow me to verify the ordering of checked input, to be stringent about
> line numbers, or to capture & reuse snippets of text from the input stream.
>
> I'd curious to know if anyone else is interested or would be willing to
> review this (https://reviews.llvm.org/D50751).
>
> Here's an example of an inline test which benefits from FileCheck-style
> checking. This test is trying to check that certain frames appear in a
> backtrace when stopped inside of the "sink" function. Notice that without
> FileCheck, it's not possible to verify the order in which frames are
> printed, and that dealing with line numbers would be cumbersome.
>
> ```
> ---
> a/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
> +++
> b/lldb/packages/Python/lldbsuite/test/functionalities/tail_call_frames/unambiguous_sequence/main.cpp
> @@ -9,16 +9,21 @@
>
>  volatile int x;
>
> +// CHECK: frame #0: {{.*}}sink() at main.cpp:[[@LINE+2]] [opt]
>  void __attribute__((noinline)) sink() {
> -  x++; //% self.expect("bt", substrs = ['main', 'func1', 'func2',
> 'func3', 'sink'])
> +  x++; //% self.filecheck("bt", "main.cpp")
>  }
>
> +// CHECK-NEXT: frame #1: {{.*}}func3() {{.*}}[opt] [artificial]
>  void __attribute__((noinline)) func3() { sink(); /* tail */ }
>
> +// CHECK-NEXT: frame #2: {{.*}}func2() at main.cpp:[[@LINE+1]] [opt]
>  void __attribute__((disable_tail_calls, noinline)) func2() { func3(); /*
> regular */ }
>
> +// CHECK-NEXT: frame #3: {{.*}}func1() {{.*}}[opt] [artificial]
>  void __attribute__((noinline)) func1() { func2(); /* tail */ }
>
> +// CHECK-NEXT: frame #4: {{.*}}main at main.cpp:[[@LINE+2]] [opt]
>  int __attribute__((disable_tail_calls)) main() {
>func1(); /* regular */
>return 0;
> ```
>
> For reference, here's the output of the "bt" command:
>
> ```
> runCmd: bt
> output: * thread #1, queue = 'com.apple.main-thread', stop reason =
> breakpoint 1.1
>   * frame #0: 0x00010c6a6f64 a.out`sink() at main.cpp:14 [opt]
> frame #1: 0x00010c6a6f70 a.out`func3() at main.cpp:15 [opt]
> [artificial]
> frame #2: 0x00010c6a6f89 a.out`func2() at main.cpp:21 [opt]
> frame #3: 0x00010c6a6f90 a.out`func1() at main.cpp:21 [opt]
> [artificial]
> frame #4: 0x00010c6a6fa9 a.out`main at main.cpp:28 [opt]
> ```
>
> thanks,
> vedant
>


Re: [lldb-dev] LLDB nightly benchmarks and flamegraphs

2018-08-03 Thread Zachary Turner via lldb-dev
This is really cool.  Maybe you could do it for all of LLVM too?  It would
be nice if, instead of cycling through each benchmark on a set interval,
there were just a dropdown box where you could select the one you wanted to
see.

On Fri, Aug 3, 2018 at 3:37 PM Raphael Isemann via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi everyone,
>
> I wanted to share a (hopefully useful) service for LLDB that I added
> recently:
>
> If you go to https://teemperor.de/lldb-bench/ you'll now see graphs
> that show the instruction count and memory usage of the last LLDB
> nightlies (one per day). If you click on a graph you'll see a flame
> graph that shows how much time we spent in each function when running
> the benchmark. The graph should make it pretty obvious where the good
> places for optimizations are.
>
> You can see all graphs without the slide show under
> https://teemperor.de/lldb-bench/static.html.
>
> The source code of every benchmark can be found here:
> https://github.com/Teemperor/lldb-bench If you want to add a
> benchmark, just make a PR to that repository and I'll merge it. See
> the README of the repo for instructions.
>
> I'll add more benchmarks in the future, but you are welcome to add your
> own.
>
> Also, if you for some reason don't appreciate my amazing GNUplot
> markup skills and prefer your own graphs, you can just grab the raw
> benchmark data from here: https://teemperor.de/lldb-bench/data/ The
> data format is just the time, git-commit and the
> instruction-count/memoryInKB value (depending if it's a `.mem.dat` or
> a `.inst.dat`).
>
> On a side note: Today's spike in memory is related to changes in the
> build setup, not a LLDB change. I don't expect too many of these
> spikes to happen in the future because the benchmark framework is now
> hopefully stable enough.
>
> Cheers,
>
> - Raphael
>


Re: [lldb-dev] LLDB tests duplicated between lldb-suite and lit

2018-07-17 Thread Zachary Turner via lldb-dev
Yea, removing them is probably fine.

On Tue, Jul 17, 2018 at 11:14 AM Stella Stamenova 
wrote:

> Hey all,
>
>
>
> I’ve been looking at some of the test failures on Windows and this led me
> to realize that there are at least several tests that are duplicated
> between lldb-suite and the lit tests. This appears to have been on purpose
> circa 2016 as a proof of concept for moving tests from lldb-suite to lit. I
> think this is confusing and we should pick a set (lit or lldb-suite) and
> remove the second set. Also, if we decide to stick with the lit tests, I
> think they will need to be updated as right now they are not all
> functioning as expected.
>
>
>
> For example, the test TestCallStdStringFunction exists both for lit and
> lldb-suite. It is expected to fail on Windows because windows does not
> correctly support expressions. However, the test fails in lldb-suite and
> **passes** in lit, and it passes for the wrong reason (see details below).
> I suspect there may be other places in the duplicated tests where we think
> we’ve checked something, but we really haven’t validated it correctly.
>
>
>
> My suggestion is that we remove the lit versions of the duplicated tests
> rather than fixing them as the lldb-suite set appears to be working
> correctly.
>
>
>
> Thanks,
>
> -Stella
>
>
>
> P.S. Here are the details on TestCallStdStringFunction:
>
>
>
> Here is what the test attempts to do:
>
>
>
> breakpoint set --file call-function.cpp --line 52
>
> run
>
> print str
>
> # CHECK: Hello world
>
> print str.c_str()
>
> # CHECK: Hello world
>
>
>
> In the lldb-suite version these CHECKs would have verified the output of
> the print immediately, but because of how lit works, these are verified
> together at the end. Since the executable itself prints “Hello World” a
> couple of times, even though the print expressions fail, “Hello World” can
> be found twice in the output, so the test succeeds.
>
>
>
> Moreover, the test sets a breakpoint that it expects to hit before calling
> the two print statements. In the lldb-suite version, the test verifies that
> the breakpoint was set, this version doesn’t and it happens to fail to set
> the breakpoint. So when the test is calling “print”, the executable has
> already run through the end, so even if expressions worked correctly on
> windows, this would have failed since we would have made the call after the
> executable finished. At the very least, this test needs an additional CHECK
> statement to verify that either the breakpoint was set or it was hit.
>
>
>
> Looking at the other duplicated tests, we have the potential for similar
> issues. They also all use CHECK rather than CHECK-DAG, so we should at
> least update them to use CHECK-DAG.
>
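The false positive described above — CHECK lines satisfied by the program's own "Hello world" output rather than by the debugger's print expressions — can be reproduced with a toy sequential matcher. The session transcript below is hypothetical (the error lines are invented for illustration), and the matcher is a sketch of lit/FileCheck-style end-of-run checking, not its actual implementation:

```python
import re

# Hypothetical combined session output: the breakpoint was never hit and
# both print expressions failed, yet the inferior itself still printed
# "Hello world" twice.
session_output = """\
(lldb) breakpoint set --file call-function.cpp --line 52
(lldb) run
Hello world
Hello world
(lldb) print str
error: expression failed
(lldb) print str.c_str()
error: expression failed"""

def checks_pass(output, patterns):
    # Sequential end-of-run matching: each pattern must match after the
    # end of the previous match.
    pos = 0
    for p in patterns:
        m = re.compile(p).search(output, pos)
        if m is None:
            return False
        pos = m.end()
    return True

# The duplicated lit test's two CHECKs are satisfied by the program's own
# output, so the test passes for the wrong reason:
print(checks_pass(session_output, ["Hello world", "Hello world"]))  # True

# Adding a CHECK that the breakpoint was actually set catches the failure:
print(checks_pass(session_output,
                  [r"Breakpoint 1: .*call-function\.cpp",
                   "Hello world", "Hello world"]))  # False
```

This is exactly the extra CHECK the message suggests: verify the breakpoint before trusting any output that the inferior could have produced on its own.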


Re: [lldb-dev] error: process launch failed: Lost debug server connection

2018-07-12 Thread Zachary Turner via lldb-dev
You might not get a reply but usually the turnaround time is < 24 hours.
If it's not let me know and I'll find out who to ask.

On Thu, Jul 12, 2018 at 7:50 AM NeckTwi via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I’m getting the same error even if I debug on the same device.
>
> necktwi@pi:/home/necktwi/Workspace/RemoteDebugTest/build$ lldb-server-6.0
> platform --listen "*:1234" --server
> Connection established.
>
> necktwi@pi:/home/necktwi/Workspace/RemoteDebugTest/build$ lldb-6.0
> (lldb) platform select remote-linux
>   Platform: remote-linux
>  Connected: no
> (lldb) platform connect connect://localhost:1234
>   Platform: remote-linux
> OS Version: 4.9.78 (4.9.78-v7+)
> Kernel: #1084 SMP Thu Jan 25 18:05:49 GMT 2018
>   Hostname: pi.RemoteDebugTest.com
> 
>  Connected: yes
> WorkingDir: /home/necktwi/Workspace/RemoteDebugTest/build
> (lldb) file RemoteDebugTest
> Current executable set to 'RemoteDebugTest' (arm).
> (lldb) run
> error: process launch failed: Lost debug server connection
> (lldb)
>
> I think it’s a bug. I tried to sign up for bugs-ad...@lists.llvm.org but
> I didn’t get a reply, though I sent a mail with my email ID and name.
>
> … NeckTwi
>
>
>
>


Re: [lldb-dev] How LLDB plug-ins loaded on Windows?

2018-07-10 Thread Zachary Turner via lldb-dev
Is it? If lldb is totally broken without that patch, we should upstream it,
no?
On Tue, Jul 10, 2018 at 8:52 PM Aaron Smith 
wrote:

> This patch is needed for lldb to work on Windows.
>
> https://reviews.llvm.org/D12245
>
>
> --
> *From:* Zachary Turner 
> *Sent:* Friday, July 6, 2018 5:17 AM
> *To:* Salahuddin Khan
> *Cc:* Aaron Smith; Adrian McCarthy; Stella Stamenova;
> lldb-dev@lists.llvm.org
>
> *Subject:* Re: [lldb-dev] How LLDB plug-ins loaded on Windows?
>
> It’s been a while since i was close to this code, so adding people who
> have been in there more recently
>
> On Thu, Jul 5, 2018 at 1:12 PM Salahuddin Khan  wrote:
>
>> Hi Zachary,
>>
>>
>>
>> Ahh ok, thanks for your quick response.
>>
>>
>>
>> I was hoping to use LLDB on Windows (to debug Go code), which I know is
>> DWARF based (and I thought would work on Windows). But I couldn’t seem
>> to set a breakpoint in C/C++/Go, so I thought perhaps plug-ins weren’t
>> being loaded (although it’s not possible to ‘load’ a .lib file; they have
>> to be included at link time).
>>
>>
>>
>> However, longer term I’m also hoping to replace the kernel debugger for
>> my own operating system (a personal OS written from scratch) with LLDB. My
>> OS uses the PE file format and has PDBs too – currently compiled on Windows
>> using a very old version of the Windows DDK. I’m in the process of moving
>> to a new build system using clang. I was using DIA to some degree and
>> debugging from Windows, but I eventually hope to be able to debug one
>> system from another also running the OS, so non-Windows support would be
>> good.
>>
>>
>>
>> Any idea which pieces are missing on Windows? I’m probably going to start
>> debugging lldb to figure it out, but knowing what is needed would help
>> significantly.
>>
>>
>>
>> Thanks,
>>
>> Salah
>>
>>
>>
>> *From:* Zachary Turner 
>> *Sent:* Thursday, July 05, 2018 12:56 PM
>> *To:* Salahuddin Khan 
>> *Cc:* lldb-dev@lists.llvm.org
>> *Subject:* Re: [lldb-dev] How LLDB plug-ins loaded on Windows?
>>
>>
>>
>> Plugin is a bit misleading. All “plugins” are compiled into lldb. Plugins
>> are really just a layering abstraction.
>>
>> To answer your question, pdb works currently but is limited in
>> functionality. First, it only supports limited usage scenarios, and second
>> it requires Windows. It’s currently built on top of DIA. If you need PDB
>> support on non Windows it will be quite a bit of work (although there’s
>> people making gradual progress on it). If you need it on Windows it
>> basically works but you’ll have to fill in some missing pieces. Several
>> other people have been submitting patches in this area as well recently
>>
>> On Thu, Jul 5, 2018 at 12:47 PM Salahuddin Khan via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>> Hi All,
>>
>>
>>
>> I’m somewhat puzzled by the plug-ins in LLDB, specifically on Windows.
>>
>>
>>
>> When examining the lib directory after building LLVM/LLDB, I noticed a
>> lot of lldbPlugin*.lib files. However, it’s not clear if or how these are
>> included in LLDB.
>>
>>
>>
>> Here’s one example:
>>
>> lldbPluginSymbolFilePDB.lib
>>
>>
>>
>> Are these compiled into lldb.exe and if so, how are they invoked? I’m
>> trying to determine if PDB symbols are currently working, and if not, what
>> would be required to make them work.
>>
>>
>>
>> Thanks,
>>
>> Salah
>>
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>> 
>>
>>


Re: [lldb-dev] How LLDB plug-ins loaded on Windows?

2018-07-05 Thread Zachary Turner via lldb-dev
It’s been a while since i was close to this code, so adding people who have
been in there more recently

On Thu, Jul 5, 2018 at 1:12 PM Salahuddin Khan  wrote:

> Hi Zachary,
>
>
>
> Ahh ok, thanks for your quick response.
>
>
>
> I was hoping to use LLDB on Windows (to debug Go code), which I know is
> DWARF based (and I thought would work on Windows). But I couldn’t seem
> to set a breakpoint in C/C++/Go, so I thought perhaps plug-ins weren’t
> being loaded (although it’s not possible to ‘load’ a .lib file; they have
> to be included at link time).
>
>
>
> However, longer term I’m also hoping to replace the kernel debugger for my
> own operating system (a personal OS written from scratch) with LLDB. My OS
> uses the PE file format and has PDBs too – currently compiled on Windows
> using a very old version of the Windows DDK. I’m in the process of moving
> to a new build system using clang. I was using DIA to some degree and
> debugging from Windows, but I eventually hope to be able to debug one
> system from another also running the OS, so non-Windows support would be
> good.
>
>
>
> Any idea which pieces are missing on Windows? I’m probably going to start
> debugging lldb to figure it out, but knowing what is needed would help
> significantly.
>
>
>
> Thanks,
>
> Salah
>
>
>
> *From:* Zachary Turner 
> *Sent:* Thursday, July 05, 2018 12:56 PM
> *To:* Salahuddin Khan 
> *Cc:* lldb-dev@lists.llvm.org
> *Subject:* Re: [lldb-dev] How LLDB plug-ins loaded on Windows?
>
>
>
> Plugin is a bit misleading. All “plugins” are compiled into lldb. Plugins
> are really just a layering abstraction.
>
> To answer your question, pdb works currently but is limited in
> functionality. First, it only supports limited usage scenarios, and second
> it requires Windows. It’s currently built on top of DIA. If you need PDB
> support on non Windows it will be quite a bit of work (although there’s
> people making gradual progress on it). If you need it on Windows it
> basically works but you’ll have to fill in some missing pieces. Several
> other people have been submitting patches in this area as well recently
>
> On Thu, Jul 5, 2018 at 12:47 PM Salahuddin Khan via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Hi All,
>
>
>
> I’m somewhat puzzled by the plug-ins in LLDB, specifically on Windows.
>
>
>
> When examining the lib directory after building LLVM/LLDB, I noticed a
> lot of lldbPlugin*.lib files. However, it’s not clear if or how these are
> included in LLDB.
>
>
>
> Here’s one example:
>
> lldbPluginSymbolFilePDB.lib
>
>
>
> Are these compiled into lldb.exe and if so, how are they invoked? I’m
> trying to determine if PDB symbols are currently working, and if not, what
> would be required to make them work.
>
>
>
> Thanks,
>
> Salah
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> 
>
>


Re: [lldb-dev] How LLDB plug-ins loaded on Windows?

2018-07-05 Thread Zachary Turner via lldb-dev
Plugin is a bit misleading. All “plugins” are compiled into lldb. Plugins
are really just a layering abstraction.

To answer your question, pdb works currently but is limited in
functionality. First, it only supports limited usage scenarios, and second
it requires Windows. It’s currently built on top of DIA. If you need PDB
support on non Windows it will be quite a bit of work (although there’s
people making gradual progress on it). If you need it on Windows it
basically works but you’ll have to fill in some missing pieces. Several
other people have been submitting patches in this area as well recently
On Thu, Jul 5, 2018 at 12:47 PM Salahuddin Khan via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi All,
>
>
>
> I’m somewhat puzzled by the plug-ins in LLDB, specifically on Windows.
>
>
>
> When examining the lib directory after building LLVM/LLDB, I noticed a
> lot of lldbPlugin*.lib files. However, it’s not clear if or how these are
> included in LLDB.
>
>
>
> Here’s one example:
>
> lldbPluginSymbolFilePDB.lib
>
>
>
> Are these compiled into lldb.exe and if so, how are they invoked? I’m
> trying to determine if PDB symbols are currently working, and if not, what
> would be required to make them work.
>
>
>
> Thanks,
>
> Salah
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>


Re: [lldb-dev] RFC: libtrace

2018-06-27 Thread Zachary Turner via lldb-dev
suppose process A (single threaded) is tracing process B (2 threads). If
trace events happen on both threads of B, then the second thread can’t
continue until both threads’ trace events have been fully handled,
synchronously. If process A has a second thread though, the tracer thread
can enqueue work via a lock free queue (or worst case scenario, a mutex),
and continue immediately. So it seems less overhead this way.

That said, there seems to be no harm in exposing the lowest levels of the
API with all of their os specific quirks, and one could be built on top
that standardizes the assumptions and requirements
On Wed, Jun 27, 2018 at 12:56 AM Pavel Labath  wrote:

> On Wed, 27 Jun 2018 at 01:14, Zachary Turner via lldb-dev
>  wrote:
> >
> > Yes that’s what I’ve been thinking about as well.
> >
> > One thing I’ve been giving a lot of thought to is whether to serialize
> the handling of trace events.  I want to balance the “this is a library and
> you should be able to get it to work for you no matter what your use case
> is” aspect with the “you really just don’t want to go there, we know what’s
> best for you” aspect.  Then there’s the  fact that not all platforms behave
> the same, but we’d like a consistent set of expectations that makes it easy
> to use for everyone.
> >
> > So I’m leaning towards having the library serialize all trace events,
> because it’s a nice common denominator that every platform can implement.
> >
> > To be clear though, I don’t mean that if 2 processes are being traced
> simultaneously and A stops followed by B stopping, then the tool will
> necessarily block before handling  B’s stop.  I just mean that A and B’s
> stop handlers will be invoked on a single thread (not the threads which are
> tracing  A or B).
> >
> > So A stops, posts its stop event on the blessed thread and waits.  Then
> B stops and does the same thing.  A’s handler runs, for whatever reason
> decides it will continue later, saves off the event somewhere, then
> processes B’s.  Later something happens, it decides to continue A, signals
> A’s thread which wakes up.
> >
> > I think this kind of design eliminates a large class of race conditions
> without sacrificing any performance.
> >
>
> Does this mean that you will always have to have at least two threads
> (the one doing the tracing and the one where stop handlers are
> invoked)? Because if that's true, then I'm not sure I buy the
> no-performance-sacrifice part. Given that with ptrace (on linux at
> least, but I think that holds for some other OSs too), all debugging
> operations have to happen on a specific thread, if that thread is not
> the one where the core logic happens, you will have to do a lot of
> ping-pong to do all the debugging operations (read/write
> registers/memory, set breakpoints, etc.). Of all the use cases, the
> one where this matters most may be actually yours -- I'm not sure I
> understand it fully but if the goal is to have as little impact on the
> traced process, then this is going to be a problem, because every
> microsecond you spend context-switching between these two threads is a
> microsecond when the target process is not executing. In lldb-server
> we avoid these context switches (and race conditions!) by being single
> threaded. It think it would be good to keep things this way by having
> the new api (the lowest layers of it?) accessible in a single-threaded
> manner, at least on platforms where this is possible (everything
> except windows, I guess).
>


Re: [lldb-dev] RFC: libtrace

2018-06-26 Thread Zachary Turner via lldb-dev
Yes that’s what I’ve been thinking about as well.

One thing I’ve been giving a lot of thought to is whether to serialize the
handling of trace events.  I want to balance the “this is a library and you
should be able to get it to work for you no matter what your use case is”
aspect with the “you really just don’t want to go there, we know what’s
best for you” aspect.  Then there’s the  fact that not all platforms behave
the same, but we’d like a consistent set of expectations that makes it easy
to use for everyone.

So I’m leaning towards having the library serialize all trace events,
because it’s a nice common denominator that every platform can implement.

To be clear though, I don’t mean that if 2 processes are being traced
simultaneously and A stops followed by B stopping, then the tool will
necessarily block before handling  B’s stop.  I just mean that A and B’s
stop handlers will be invoked on a single thread (not the threads which are
tracing  A or B).

So A stops, posts its stop event on the blessed thread and waits.  Then B
stops and does the same thing.  A’s handler runs, for whatever reason
decides it will continue later, saves off the event somewhere, then
processes B’s.  Later something happens, it decides to continue A, signals
A’s thread which wakes up.

I think this kind of design eliminates a large class of race conditions
without sacrificing any performance.

LLDB doesn’t currently work like this, but it would be nice not to end up
with another split similar to the dwarf split, so I’m curious if you can
think of any fundamental assumptions of LLDB’s architecture that this would
violate.  This way we’d at least know that it’s possible to use the api in
lldb (assuming it does everything lldb needs obviously)

Thoughts?

On Tue, Jun 26, 2018 at 1:09 PM Jim Ingham  wrote:

> You'd probably need to pull the Unwinder in if you want backtraces, but
> that part shouldn't be that hard to disentangle.  I don't think you'd need
> much else?
>
> Basing your work on NativeProcess rather than lldb proper would also cut
> the number of observer processes in half and avoid the context switches
> between the server and the debugger.  That seems more appropriate for a
> lightweight tool.
>
> Jim
>
>
> > On Jun 26, 2018, at 12:59 PM, Jim Ingham via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > So you aren't planning to print values at all, just stop points (i.e.
> you are only interested in the line table and function symbols part of
> DWARF)?
> >
> > Given what you've described so far, I'm wondering if what you really
> want is the NativeProcess classes with some symbol-file reading pulled in?
> Is there anything that you couldn't do from there?
> >
> > Jim
> >
> >
> >> On Jun 26, 2018, at 12:48 PM, Zachary Turner 
> wrote:
> >>
> >> no expression parser or knowledge of any specific programming language.
> >>
> >> Basically I just mean that the parsing of the native DWARF format
> itself is in scope, but anything beyond that is out of scope.  For
> symbolication we have things like llvm-symbolizer that already just work
> and are built on top of LLVM's dwarf parsing code.  Similarly, LLDB's type
> system could be built on top of it as well.  Given that I think everyone
> mostly agrees that unifying on one DWARF parser is a good idea in
> principle, this would mean no functional change from LLDB's point of view,
> it would just continue to do exactly what it does regarding parsing C++
> expressions and converting these into types that clang understands.
> >>
> >> It will probably be useful someday to have an expression parser and
> language specific type system, but when that comes I don't think we'd want
> anything radically different than what LLDB already has.
> >>
> >> On Tue, Jun 26, 2018 at 12:26 PM Jim Ingham  wrote:
> >> Just to be clear, by "no clang integration" do you mean "no expression
> parser" or do you mean something more radical?  For instance, adding a
> TypeSystem and its DWARF parser for C family languages that uses a
> different underlying representation than Clang AST's to store the results
> would be a lot of work that wouldn't be terribly interesting to lldb.  I
> don't think that's what you meant, but wanted to be sure.
> >>
> >> Jim
> >>
> >>> On Jun 26, 2018, at 11:58 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> We have been thinking internally about a lightweight llvm-based
> ptracer.  To address one question up front: the primary way in which this
> differs from LLDB is

Re: [lldb-dev] [llvm-dev] RFC: libtrace

2018-06-26 Thread Zachary Turner via lldb-dev
Ahh, thanks.  I thought those changes never landed, but it's good to hear
that they did.

On Tue, Jun 26, 2018 at 1:49 PM Adrian Prantl  wrote:

>
> > On Jun 26, 2018, at 1:38 PM, Zachary Turner  wrote:
> >
> >> On Tue, Jun 26, 2018 at 1:28 PM Adrian Prantl 
> wrote:
> >>
> >>> > On Jun 26, 2018, at 11:58 AM, Zachary Turner via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
> >>> > A good example of this would be LLDB’s DWARF parsing code, which is
> more featureful than LLVM’s but has kind of evolved in parallel.  Sinking
> this into LLVM would be one early target of such an effort, although over
> time there would likely be more.
> >>>
> >>> As you are undoubtedly aware we've been carefully rearchitecting
> LLVM's DWARF parser over the last few years to eventually become featureful
> enough so that LLDB could use it, so any help on that front would be most
> welcome. As long as we are careful to not regress in performance/lazyness,
> features and fault-tolerance, deduplicating the implementations can only be
> good for LLVM and LLDB.
> >>>
> >> Yea, this is the general idea.   Has anyone actively been working on
> this specific effort recently?  To my knowledge someone started and then
> never finished, but the efforts also never made it upstream, so my
> understanding is that it's a goal, but one that nobody has made significant
> headway on.
> >
> That's not true. Greg Clayton started the effort in 2016 and landed many
> of the ground-breaking changes. The design ideas fleshed out during that
> initial effort (thanks to David Blaikie who spent a lot of time reviewing
> the new interfaces!) such as improved error handling were then picked up
> by the entire team of contributors who worked on DWARF 5 support in LLVM,
> and we've continued down that path ever since. The greatly improved
> llvm-dwarfdump was also born out of this effort, for example. We also paid
> attention that every refactoring of LLDB DWARF parser code would bring it
> closer to the new LLVM parser interface to narrow the gaps between the
> implementations.
>
> -- adrian
>
>


Re: [lldb-dev] [llvm-dev] RFC: libtrace

2018-06-26 Thread Zachary Turner via lldb-dev
On Tue, Jun 26, 2018 at 1:28 PM Adrian Prantl  wrote:

>
>
> > On Jun 26, 2018, at 11:58 AM, Zachary Turner via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
> >
> > Hi all,
> >
> > We have been thinking internally about a lightweight llvm-based
> ptracer.  To address one question up front: the primary way in which this
> differs from LLDB is that it targets a more narrow use case -- there is no
> scripting support, no clang integration, no dynamic extensibility, no
> support for running jitted code in the target, and no user interface.  We
> have several use cases internally that call for varying levels of
> functionality from such a utility, and being able to use as little as
> possible of the library as is necessary for the given task is important for
> the scale in which we wish to use it.
> >
> > We are still in early discussions and planning, but I think this would
> be a good addition to the LLVM upstream.  Since we’re approaching this as a
> set of small isolated components, my thinking is to work on this completely
> upstream, directly under the llvm project (as opposed to making a separate
> subproject), but I’m open to discussion if anyone feels differently.
> >
> > LLDB has solved a lot of the difficult problems needed for such a tool.
> So in the spirit of code reuse, we think it’s worth trying to componentize
> LLDB by sinking pieces into LLVM and rebasing LLDB as well as these smaller
> tools on top of these components, so that smaller tools can reduce code
> duplication and contribute to the overall health of the code base.
>
> Do you have a rough idea of what components specifically the new tool
> would need to function?
>

* process & thread control
* platform agnostic ptrace wrapper (not all platforms even have ptrace, and
those that do the usage and capabilities vary quite a bit)
* install various kinds of traps
* monitor cpu performance counters
* symbol file parsing
* symbol resolution (name <-> addr and line <-> addr)
* unwinding and backtrace generation



>
> >  At the same time we think that in doing so we can break things up into
> more granular pieces, ultimately exposing a larger testing surface and
> enabling us to create exhaustive tests, giving LLDB more fine grained
> testing of important subsystems.
>
> Are you thinking of the new utility as something that would naturally live
> in llvm/tools or as something that would live in the LLDB repository?
>
I would rather put it under LLDB and then link LLDB against certain pieces
in cases where that makes sense.


>
> >
> > A good example of this would be LLDB’s DWARF parsing code, which is more
> featureful than LLVM’s but has kind of evolved in parallel.  Sinking this
> into LLVM would be one early target of such an effort, although over time
> there would likely be more.
>
> As you are undoubtedly aware we've been carefully rearchitecting LLVM's
> DWARF parser over the last few years to eventually become featureful enough
> so that LLDB could use it, so any help on that front would be most welcome.
> As long as we are careful to not regress in performance/lazyness, features
> and fault-tolerance, deduplicating the implementations can only be good for
> LLVM and LLDB.
>
> Yea, this is the general idea.   Has anyone actively been working on this
specific effort recently?  To my knowledge someone started and then never
finished, but the efforts also never made it upstream, so my understanding
is that it's a goal, but one that nobody has made significant headway on.


Re: [lldb-dev] RFC: libtrace

2018-06-26 Thread Zachary Turner via lldb-dev
The various NativeProcess implementations are definitely a good starting
point and I'll probably be looking at them to understand all the ins and
outs of each platform.  I'm not sure if the API / interface we want will be
the same, so I don't think we can just copy it all down.  But a lot of the
core logic we probably can.  Depending on how much of it we end up
implementing and how close we get to the current functionality of the
NativeProcess classes, this could be another area for code reuse similar to
what I mentioned with the DWARF reading.  i.e. we could write lots of
low-level tests of the tracing functionality specifically, then update the
NativeProcess implementations to use this.

On Tue, Jun 26, 2018 at 1:09 PM Jim Ingham  wrote:

> You'd probably need to pull the Unwinder in if you want backtraces, but
> that part shouldn't be that hard to disentangle.  I don't think you'd need
> much else?
>
> Basing your work on NativeProcess rather than lldb proper would also cut
> the number of observer processes in half and avoid the context switches
> between the server and the debugger.  That seems more appropriate for a
> lightweight tool.
>
> Jim
>
>
> > On Jun 26, 2018, at 12:59 PM, Jim Ingham via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > So you aren't planning to print values at all, just stop points (i.e.
> you are only interested in the line table and function symbols part of
> DWARF)?
> >
> > Given what you've described so far, I'm wondering if what you really
> want is the NativeProcess classes with some symbol-file reading pulled in?
> Is there anything that you couldn't do from there?
> >
> > Jim
> >
> >
> >> On Jun 26, 2018, at 12:48 PM, Zachary Turner 
> wrote:
> >>
> >> no expression parser or knowledge of any specific programming language.
> >>
> >> Basically I just mean that the parsing of the native DWARF format
> itself is in scope, but anything beyond that is out of scope.  For
> symbolication we have things like llvm-symbolizer that already just work
> and are built on top of LLVM's dwarf parsing code.  Similarly, LLDB's type
> system could be built on top of it as well.  Given that I think everyone
> mostly agrees that unifying on one DWARF parser is a good idea in
> principle, this would mean no functional change from LLDB's point of view,
> it would just continue to do exactly what it does regarding parsing C++
> expressions and converting these into types that clang understands.
> >>
> >> It will probably be useful someday to have an expression parser and
> language specific type system, but when that comes I don't think we'd want
> anything radically different than what LLDB already has.
> >>
> >> On Tue, Jun 26, 2018 at 12:26 PM Jim Ingham  wrote:
> >> Just to be clear, by "no clang integration" do you mean "no expression
> parser" or do you mean something more radical?  For instance, adding a
> TypeSystem and its DWARF parser for C family languages that uses a
> different underlying representation than Clang AST's to store the results
> would be a lot of work that wouldn't be terribly interesting to lldb.  I
> don't think that's what you meant, but wanted to be sure.
> >>
> >> Jim
> >>
> >>> On Jun 26, 2018, at 11:58 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>>
> >>> Hi all,
> >>>
> >>> We have been thinking internally about a lightweight llvm-based
> ptracer.  To address one question up front: the primary way in which this
> differs from LLDB is that it targets a more narrow use case -- there is no
> scripting support, no clang integration, no dynamic extensibility, no
> support for running jitted code in the target, and no user interface.  We
> have several use cases internally that call for varying levels of
> functionality from such a utility, and being able to use as little as
> possible of the library as is necessary for the given task is important for
> the scale in which we wish to use it.
> >>>
> >>> We are still in early discussions and planning, but I think this would
> be a good addition to the LLVM upstream.  Since we’re approaching this as a
> set of small isolated components, my thinking is to work on this completely
> upstream, directly under the llvm project (as opposed to making a separate
> subproject), but I’m open to discussion if anyone feels differently.
> >>>
> >>> LLDB has solved a lot of the difficult problems needed for such a
> tool.  So in the spirit of code reuse, we 

Re: [lldb-dev] RFC: libtrace

2018-06-26 Thread Zachary Turner via lldb-dev
no expression parser or knowledge of any specific programming language.

Basically I just mean that the parsing of the native DWARF format itself is
in scope, but anything beyond that is out of scope.  For symbolication we
have things like llvm-symbolizer that already just work and are built on
top of LLVM's dwarf parsing code.  Similarly, LLDB's type system could be
built on top of it as well.  Given that I think everyone mostly agrees that
unifying on one DWARF parser is a good idea in principle, this would mean
no functional change from LLDB's point of view, it would just continue to
do exactly what it does regarding parsing C++ expressions and converting
these into types that clang understands.

It will probably be useful someday to have an expression parser and
language specific type system, but when that comes I don't think we'd want
anything radically different than what LLDB already has.

On Tue, Jun 26, 2018 at 12:26 PM Jim Ingham  wrote:

> Just to be clear, by "no clang integration" do you mean "no expression
> parser" or do you mean something more radical?  For instance, adding a
> TypeSystem and its DWARF parser for C family languages that uses a
> different underlying representation than Clang AST's to store the results
> would be a lot of work that wouldn't be terribly interesting to lldb.  I
> don't think that's what you meant, but wanted to be sure.
>
> Jim
>
> > On Jun 26, 2018, at 11:58 AM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > Hi all,
> >
> > We have been thinking internally about a lightweight llvm-based
> ptracer.  To address one question up front: the primary way in which this
> differs from LLDB is that it targets a more narrow use case -- there is no
> scripting support, no clang integration, no dynamic extensibility, no
> support for running jitted code in the target, and no user interface.  We
> have several use cases internally that call for varying levels of
> functionality from such a utility, and being able to use as little as
> possible of the library as is necessary for the given task is important for
> the scale in which we wish to use it.
> >
> > We are still in early discussions and planning, but I think this would
> be a good addition to the LLVM upstream.  Since we’re approaching this as a
> set of small isolated components, my thinking is to work on this completely
> upstream, directly under the llvm project (as opposed to making a separate
> subproject), but I’m open to discussion if anyone feels differently.
> >
> > LLDB has solved a lot of the difficult problems needed for such a tool.
> So in the spirit of code reuse, we think it’s worth trying to componentize
> LLDB by sinking pieces into LLVM and rebasing LLDB as well as these smaller
> tools on top of these components, so that smaller tools can reduce code
> duplication and contribute to the overall health of the code base.  At the
> same time we think that in doing so we can break things up into more
> granular pieces, ultimately exposing a larger testing surface and enabling
> us to create exhaustive tests, giving LLDB more fine grained testing of
> important subsystems.
> >
> > A good example of this would be LLDB’s DWARF parsing code, which is more
> featureful than LLVM’s but has kind of evolved in parallel.  Sinking this
> into LLVM would be one early target of such an effort, although over time
> there would likely be more.
> >
> > Anyone have any thoughts / strong opinions on this proposal, or where
> the code should live?  Also, does anyone have any suggestions on things
> they’d like to see come out of this?  Whether it’s a specific new tool, new
> functionality to an existing tool, an architectural or design change to
> some existing tool or library, or something else entirely, all feedback and
> ideas are welcome.
> >
> > Thanks,
> > Zach
> >
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>


[lldb-dev] RFC: libtrace

2018-06-26 Thread Zachary Turner via lldb-dev
Hi all,

We have been thinking internally about a lightweight llvm-based ptracer.
To address one question up front: the primary way in which this differs
from LLDB is that it targets a more narrow use case -- there is no
scripting support, no clang integration, no dynamic extensibility, no
support for running jitted code in the target, and no user interface.  We
have several use cases internally that call for varying levels of
functionality from such a utility, and being able to use as little as
possible of the library as is necessary for the given task is important for
the scale in which we wish to use it.

We are still in early discussions and planning, but I think this would be a
good addition to the LLVM upstream.  Since we’re approaching this as a set
of small isolated components, my thinking is to work on this completely
upstream, directly under the llvm project (as opposed to making a separate
subproject), but I’m open to discussion if anyone feels differently.

LLDB has solved a lot of the difficult problems needed for such a tool.  So
in the spirit of code reuse, we think it’s worth trying to componentize LLDB
by sinking pieces into LLVM and rebasing LLDB as well as these smaller
tools on top of these components, so that smaller tools can reduce code
duplication and contribute to the overall health of the code base.  At the
same time we think that in doing so we can break things up into more
granular pieces, ultimately exposing a larger testing surface and enabling
us to create exhaustive tests, giving LLDB more fine grained testing of
important subsystems.

A good example of this would be LLDB’s DWARF parsing code, which is more
featureful than LLVM’s but has kind of evolved in parallel.  Sinking this
into LLVM would be one early target of such an effort, although over time
there would likely be more.

Anyone have any thoughts / strong opinions on this proposal, or where the
code should live?  Also, does anyone have any suggestions on things they’d
like to see come out of this? Whether it’s a specific new tool, new
functionality to an existing tool, an architectural or design change to
some existing tool or library, or something else entirely, all feedback and
ideas are welcome.

Thanks,

Zach
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] clang::VersionTuple

2018-06-18 Thread Zachary Turner via lldb-dev
+1 for limiting the scope of a variable as much as possible
On Mon, Jun 18, 2018 at 7:57 AM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Thanks. I am going to submit the patch then.
>
> On Fri, 15 Jun 2018 at 19:56, Jim Ingham  wrote:
> > > On Jun 15, 2018, at 3:44 AM, Pavel Labath via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> > >
> > > Hello again,
> > >
> > > Just a quick update on the state of this.
> > >
> > > I've managed to move VersionTuple from clang to llvm. I've also
> > > created  to switch over our version
> > > handling to that class.
> > >
> > > Could I interest anyone in taking a quick look at the patch?
> >
> >
> > Somehow I can’t log into Phabricator from home so I can’t comment right
> now, but I took a look.
> >
> > In some of your changes in the SB files you do:
> >
> >   if (PlatformSP platform_sp = GetSP())
> > version = platform_sp->GetOSVersion();
> >
> > I don’t like putting initializers in if statements like this because I
> always have to think twice about whether you meant “==“.  Moreover, all of
> the surrounding code does it differently:
> >
> >   PlatformSP platform_sp = GetSP()
> >   if (platform_sp)
> > version = platform_sp->GetOSVersion();
> >
> > so switching to the other form in a couple of places only kinda forces
> the double-take.  But that’s a little nit.
>
> I've rechecked the llvm style guide. It doesn't say anything about
> this particular issue, but this syntax is used throughout the examples
> demonstrating other things.
>
> What I like about this syntax is that it makes it clear that the
> variable has no meaning outside of the if block, which is the same
> reason we declare variables inside the for() statement. But those are
> microscopic details I'd leave to the discretion of whoever is writing
> the code.
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Do we have any infrastructure for creating mini dump files from a loaded process or from a core file?

2018-06-13 Thread Zachary Turner via lldb-dev
Yea, I think something like this would actually make a useful llvm
utility.  Call it llvm-core or something, and it links against the library
LLVMCoreFile.  We could move all the code for consuming and producing
Windows minidumps and Unix / Mach-O corefiles from LLDB down into
LLVMCoreFile, write a tool like llvm-core that can manipulate or inspect
them, then have LLDB use it.  Kill 2 birds with one stone that way IMO.

On Wed, Jun 13, 2018 at 2:56 PM Jason Molenda  wrote:

> fwiw I had to prototype a new LC_NOTE load command a year ago in Mach-O
> core files, to specify where the kernel binary was located.  I wrote a
> utility to add the data to an existing corefile - both load command and
> payload - and it was only about five hundred lines of C++.  I didn't link
> against anything but libc, it's such a simple task I didn't sweat trying
> to find an object-file-reader/writer library.  ELF may be more complicated
> though.
>
> > On Jun 13, 2018, at 2:51 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >
> > What about the case where you already have a Unix core file and you
> aren't in a debugger but just want to convert it?  It seems like we could
> have a standalone utility that did that (one could imagine doing the
> reverse too).  I'm wondering if it wouldn't be possible to do this as a
> library or something that didn't have any dependencies on LLDB, that way a
> standalone tool could link against this library, and so could LLDB.  I
> think this would improve its usefulness quite a bit.
> >
> > On Wed, Jun 13, 2018 at 2:42 PM Greg Clayton  wrote:
> > The goal is to take a live process (regular process just stopped, or a
> core file) and run "save_minidump ..." as a command and export a minidump
> file that can be sent elsewhere. Unix core files are too large to always
> send and they are less useful if they are not examined in the machine that
> they were produced on. So LLDB gives us the connection to the live process,
> and we can then create a minidump file. I am going to create a python
> module that can do this for us.
> >
> > Greg
> >
> >
> >> On Jun 13, 2018, at 2:29 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >>
> >> Also, if the goal is to have this upstream somewhere, it would be nice
> to have this be a standalone tool.  This seems like something that
> you shouldn't be required to start up a debugger to do, and it probably
> doesn't have many (or any, for that matter) dependencies on the rest of LLDB.
> >>
> >> On Wed, Jun 13, 2018 at 1:58 PM Leonard Mosescu 
> wrote:
> >> That being said, it's not exactly trivial to produce a good minidump.
> Crashpad has a native & cross-platform minidump writer, that's what I'd
> start with.
> >>
> >> Addendum: I realized after sending the email that if the goal is to
> convert core files -> LLDB -> minidump a lot of the complexity found in
> Crashpad can be avoided, so perhaps writing an LLDB minidump writer from
> scratch would not be too bad.
> >>
> >> On Wed, Jun 13, 2018 at 1:50 PM, Leonard Mosescu 
> wrote:
> >> The minidump format is more or less documented in MSDN.
> >>
> >> That being said, it's not exactly trivial to produce a good minidump.
> Crashpad has a native & cross-platform minidump writer, that's what I'd
> start with.
> >>
> >> On Wed, Jun 13, 2018 at 1:38 PM, Adrian McCarthy via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >> Zach's right.  On Windows, lldb can produce a minidump, but it just
> calls out to a Microsoft library to do so.  We don't have any
> platform-agnostic code for producing a minidump.
> >>
> >> I've also pinged another Googler who I know might be interested in
> converting between minidumps and core files (the opposite direction) to see
> if he has any additional info.  I don't think he's on lldb-dev, though, so
> I'll act as a relay if necessary.
> >>
> >> On Wed, Jun 13, 2018 at 12:07 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
> >> We can’t produce them, but you should check out the source code of
> google breakpad / crashpad which can.
> >>
> >> That said it’s a pretty simple format, there may be enough in our
> consumer code that should allow you to produce them
> >>
> >>
> >> ___
> >> lldb-dev mailing list
> >> lldb-dev@lists.llvm.org
> >> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> 

Re: [lldb-dev] Do we have any infrastructure for creating mini dump files from a loaded process or from a core file?

2018-06-13 Thread Zachary Turner via lldb-dev
Also one could imagine using it for many other things too.  For example,
given a windows dump file, strip it down (i.e. remove heap, etc), or
similar types of options for operating on Unix core files to remove certain
types of info etc

On Wed, Jun 13, 2018 at 2:51 PM Zachary Turner  wrote:

> What about the case where you already have a Unix core file and you aren't
> in a debugger but just want to convert it?  It seems like we could have a
> standalone utility that did that (one could imagine doing the reverse
> too).  I'm wondering if it wouldn't be possible to do this as a library or
> something that didn't have any dependencies on LLDB, that way a standalone
> tool could link against this library, and so could LLDB.  I think this
> would improve its usefulness quite a bit.
>
> On Wed, Jun 13, 2018 at 2:42 PM Greg Clayton  wrote:
>
>> The goal is to take a live process (regular process just stopped, or a
>> core file) and run "save_minidump ..." as a command and export a minidump
>> file that can be sent elsewhere. Unix core files are too large to always
>> send and they are less useful if they are not examined in the machine that
>> they were produced on. So LLDB gives us the connection to the live process,
>> and we can then create a minidump file. I am going to create a python
>> module that can do this for us.
>>
>> Greg
>>
>>
>> On Jun 13, 2018, at 2:29 PM, Zachary Turner via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>> Also, if the goal is to have this upstream somewhere, it would be nice to
>> have this be a standalone tool.  This seems like something that you
>> shouldn't be required to start up a debugger to do, and it probably doesn't
>> have many (or any, for that matter) dependencies on the rest of LLDB.
>>
>> On Wed, Jun 13, 2018 at 1:58 PM Leonard Mosescu 
>> wrote:
>>
>>> That being said, it's not exactly trivial to produce a good minidump.
>>>> Crashpad  <https://chromium.googlesource.com/crashpad/crashpad>has a
>>>> native & cross-platform minidump writer, that's what I'd start with.
>>>>
>>>
>>> Addendum: I realized after sending the email that if the goal is to
>>> convert core files -> LLDB -> minidump a lot of the complexity found in
>>> Crashpad can be avoided, so perhaps writing an LLDB minidump writer from
>>> scratch would not be too bad.
>>>
>>> On Wed, Jun 13, 2018 at 1:50 PM, Leonard Mosescu 
>>> wrote:
>>>
>>>> The minidump format is more or less documented in MSDN
>>>> <https://msdn.microsoft.com/en-us/library/windows/desktop/ms679293(v=vs.85).aspx>
>>>> .
>>>>
>>>> That being said, it's not exactly trivial to produce a good minidump. 
>>>> Crashpad
>>>> <https://chromium.googlesource.com/crashpad/crashpad>has a native &
>>>> cross-platform minidump writer, that's what I'd start with.
>>>>
>>>> On Wed, Jun 13, 2018 at 1:38 PM, Adrian McCarthy via lldb-dev <
>>>> lldb-dev@lists.llvm.org> wrote:
>>>>
>>>>> Zach's right.  On Windows, lldb can produce a minidump, but it just
>>>>> calls out to a Microsoft library to do so.  We don't have any
>>>>> platform-agnostic code for producing a minidump.
>>>>>
>>>>> I've also pinged another Googler who I know might be interested in
>>>>> converting between minidumps and core files (the opposite direction) to 
>>>>> see
>>>>> if he has any additional info.  I don't think he's on lldb-dev, though, so
>>>>> I'll act as a relay if necessary.
>>>>>
>>>>> On Wed, Jun 13, 2018 at 12:07 PM, Zachary Turner via lldb-dev <
>>>>> lldb-dev@lists.llvm.org> wrote:
>>>>>
>>>>>> We can’t produce them, but you should check out the source code of
>>>>>> google breakpad / crashpad which can.
>>>>>>
>>>>>> That said it’s a pretty simple format, there may be enough in our
>>>>>> consumer code that should allow you to produce them
>>>>>>
>>>>>>
>>>>>> ___
>>>>>> lldb-dev mailing list
>>>>>> lldb-dev@lists.llvm.org
>>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>>>
>>>>>>
>>>>>
>>>>> ___
>>>>> lldb-dev mailing list
>>>>> lldb-dev@lists.llvm.org
>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>>
>>>>>
>>>>
>>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>>
>>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Do we have any infrastructure for creating mini dump files from a loaded process or from a core file?

2018-06-13 Thread Zachary Turner via lldb-dev
What about the case where you already have a Unix core file and you aren't
in a debugger but just want to convert it?  It seems like we could have a
standalone utility that did that (one could imagine doing the reverse
too).  I'm wondering if it wouldn't be possible to do this as a library or
something that didn't have any dependencies on LLDB, that way a standalone
tool could link against this library, and so could LLDB.  I think this
would improve its usefulness quite a bit.

On Wed, Jun 13, 2018 at 2:42 PM Greg Clayton  wrote:

> The goal is to take a live process (regular process just stopped, or a
> core file) and run "save_minidump ..." as a command and export a minidump
> file that can be sent elsewhere. Unix core files are too large to always
> send and they are less useful if they are not examined in the machine that
> they were produced on. So LLDB gives us the connection to the live process,
> and we can then create a minidump file. I am going to create a python
> module that can do this for us.
>
> Greg
>
>
> On Jun 13, 2018, at 2:29 PM, Zachary Turner via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> Also, if the goal is to have this upstream somewhere, it would be nice to
> have this be a standalone tool.  This seems like something that you
> shouldn't be required to start up a debugger to do, and it probably doesn't
> have many (or any, for that matter) dependencies on the rest of LLDB.
>
> On Wed, Jun 13, 2018 at 1:58 PM Leonard Mosescu 
> wrote:
>
>> That being said, it's not exactly trivial to produce a good minidump.
>>> Crashpad  <https://chromium.googlesource.com/crashpad/crashpad>has a
>>> native & cross-platform minidump writer, that's what I'd start with.
>>>
>>
>> Addendum: I realized after sending the email that if the goal is to
>> convert core files -> LLDB -> minidump a lot of the complexity found in
>> Crashpad can be avoided, so perhaps writing an LLDB minidump writer from
>> scratch would not be too bad.
>>
>> On Wed, Jun 13, 2018 at 1:50 PM, Leonard Mosescu 
>> wrote:
>>
>>> The minidump format is more or less documented in MSDN
>>> <https://msdn.microsoft.com/en-us/library/windows/desktop/ms679293(v=vs.85).aspx>
>>> .
>>>
>>> That being said, it's not exactly trivial to produce a good minidump. 
>>> Crashpad
>>> <https://chromium.googlesource.com/crashpad/crashpad>has a native &
>>> cross-platform minidump writer, that's what I'd start with.
>>>
>>> On Wed, Jun 13, 2018 at 1:38 PM, Adrian McCarthy via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
>>>> Zach's right.  On Windows, lldb can produce a minidump, but it just
>>>> calls out to a Microsoft library to do so.  We don't have any
>>>> platform-agnostic code for producing a minidump.
>>>>
>>>> I've also pinged another Googler who I know might be interested in
>>>> converting between minidumps and core files (the opposite direction) to see
>>>> if he has any additional info.  I don't think he's on lldb-dev, though, so
>>>> I'll act as a relay if necessary.
>>>>
>>>> On Wed, Jun 13, 2018 at 12:07 PM, Zachary Turner via lldb-dev <
>>>> lldb-dev@lists.llvm.org> wrote:
>>>>
>>>>> We can’t produce them, but you should check out the source code of
>>>>> google breakpad / crashpad which can.
>>>>>
>>>>> That said it’s a pretty simple format, there may be enough in our
>>>>> consumer code that should allow you to produce them
>>>>>
>>>>>
>>>>> ___
>>>>> lldb-dev mailing list
>>>>> lldb-dev@lists.llvm.org
>>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>>
>>>>>
>>>>
>>>> ___
>>>> lldb-dev mailing list
>>>> lldb-dev@lists.llvm.org
>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>
>>>>
>>>
>> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Do we have any infrastructure for creating mini dump files from a loaded process or from a core file?

2018-06-13 Thread Zachary Turner via lldb-dev
Also, if the goal is to have this upstream somewhere, it would be nice to
have this be a standalone tool.  This seems like something that you
shouldn't be required to start up a debugger to do, and it probably doesn't
have many (or any, for that matter) dependencies on the rest of LLDB.

On Wed, Jun 13, 2018 at 1:58 PM Leonard Mosescu  wrote:

> That being said, it's not exactly trivial to produce a good minidump.
>> Crashpad  <https://chromium.googlesource.com/crashpad/crashpad>has a
>> native & cross-platform minidump writer, that's what I'd start with.
>>
>
> Addendum: I realized after sending the email that if the goal is to
> convert core files -> LLDB -> minidump a lot of the complexity found in
> Crashpad can be avoided, so perhaps writing an LLDB minidump writer from
> scratch would not be too bad.
>
> On Wed, Jun 13, 2018 at 1:50 PM, Leonard Mosescu 
> wrote:
>
>> The minidump format is more or less documented in MSDN
>> <https://msdn.microsoft.com/en-us/library/windows/desktop/ms679293(v=vs.85).aspx>
>> .
>>
>> That being said, it's not exactly trivial to produce a good minidump. 
>> Crashpad
>> <https://chromium.googlesource.com/crashpad/crashpad>has a native &
>> cross-platform minidump writer, that's what I'd start with.
>>
>> On Wed, Jun 13, 2018 at 1:38 PM, Adrian McCarthy via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Zach's right.  On Windows, lldb can produce a minidump, but it just
>>> calls out to a Microsoft library to do so.  We don't have any
>>> platform-agnostic code for producing a minidump.
>>>
>>> I've also pinged another Googler who I know might be interested in
>>> converting between minidumps and core files (the opposite direction) to see
>>> if he has any additional info.  I don't think he's on lldb-dev, though, so
>>> I'll act as a relay if necessary.
>>>
>>> On Wed, Jun 13, 2018 at 12:07 PM, Zachary Turner via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
>>>> We can’t produce them, but you should check out the source code of
>>>> google breakpad / crashpad which can.
>>>>
>>>> That said it’s a pretty simple format, there may be enough in our
>>>> consumer code that should allow you to produce them
>>>>
>>>>
>>>> ___
>>>> lldb-dev mailing list
>>>> lldb-dev@lists.llvm.org
>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>
>>>>
>>>
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>>
>>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

