Re: [lldb-dev] Is GetLogIf**All**CategoriesSet useful?

2022-01-19 Thread Jonas Devlieghere via lldb-dev


> On Jan 19, 2022, at 10:25 AM, Jim Ingham  wrote:
> 
> 
> 
>> On Jan 19, 2022, at 6:40 AM, Pavel Labath  wrote:
>> 
>> Hi all,
>> 
>> In case you haven't noticed, I'd like to draw your attention to the 
>> in-flight patches (https://reviews.llvm.org/D117382, 
>> https://reviews.llvm.org/D117490) whose goal is to clean up, improve, and 
>> streamline the logging infrastructure.
>> 
>> I don't want to go into technical details here (they're on the patches), but 
>> the general idea is to replace statements like 
>> GetLogIf(Any/All)CategoriesSet(LIBLLDB_LOG_CAT1 | LIBLLDB_LOG_CAT2)
>> with
>> GetLogIf(Any/All)(LLDBLog::Cat1 | LLDBLog::Cat2)
>> i.e., drop macros and make use of templates to make the function calls 
>> shorter and safer.
>> 
>> The reason I'm writing this email is to ask about the "All" versions of 
>> these logging functions. Do you find them useful in practice?
>> 
>> I'm asking that because I've never used this functionality. While I can't 
>> find anything wrong with the concept in theory, practically I think it's 
>> just confusing to have some log message appear only for some combination of 
>> enabled channels. It might have made some sense when we had a "verbose" 
>> logging channel, but that one is long gone (we still have a verbose logging 
>> *flag*).
>> 
>> In fact, out of all our GetLogIf calls (1203), less than 1% (11*) uses the 
>> GetLogIfAll form with more than one category. Of those, three are in tests, 
>> one is definitely a bug (it combines the category with 
>> LLDB_LOG_OPTION_VERBOSE), and the others (7) are of questionable usefulness 
>> (to me anyway).
>> 
>> If we got rid of this, we could simplify the logging calls even further and 
>> have something like:
>> Log *log = GetLog(LLDBLog::Process);
>> everywhere.
> 
> The only time I’ve ever “used” GetLogIfAll was when I added another LOG 
> option to a log call, not noticing it was “All”, finding the new log didn’t 
> work, and going back to switch “All” to “Any”.
> 
> I vote for removing it.

+1 

> 
> Jim
> 
> 
>> 
>> cheers,
>> pl
>> 
>> (*) I used this command to count:
>> $ git grep -e LogIfAll -A 1 | fgrep -e '|' | wc -l
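
For reference, here is the shape of the change written out as call sites (a
rough sketch; the exact enumerators and helper signatures are the ones defined
in the patches linked above):

    // Today: macro-based category masks, with "Any" and "All" variants.
    Log *log_before = GetLogIfAllCategoriesSet(LIBLLDB_LOG_PROCESS | LIBLLDB_LOG_STEP);

    // With the in-flight patches: a typed enum and function templates.
    Log *log_after = GetLogIfAll(LLDBLog::Process | LLDBLog::Step);

    // If the "All" variants are dropped as proposed, the common case shrinks to:
    Log *log = GetLog(LLDBLog::Process);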

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [llvm-dev] [RFC] Building LLVM-Debuginfod

2021-10-05 Thread Jonas Devlieghere via lldb-dev
+lldb-dev

On Mon, Oct 4, 2021 at 11:01 PM Petr Hosek via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> Two major factors are compatibility with a broad range of platforms (our
> toolchain is already being used by developers on Linux, macOS, Windows) and
> permissive license (our goal is to provide a permissively licensed,
> self-contained toolchain with a complete set of binary tools that support
> debuginfod).
>
> We are also thinking about some potential future extensions that would
> make sense for the LLVM implementation. For example, we are planning on
> adopting GSYM as the symbolization format and we would like to
> support GSYM in debuginfod in the future.
>
> On Wed, Sep 29, 2021 at 10:40 AM Frank Ch. Eigler via llvm-dev <
> llvm-...@lists.llvm.org> wrote:
>
>> Hi -
>>
>> As a developer of elfutils/debuginfod, I read with interest your
>> intent to build an llvm reimplementation of the debuginfod stack.
>> Best of luck, enjoy!
>>
>> I'm curious whether there were any indications that the existing code
>> base couldn't be used due to problems of some sort.  AIUI, licensing
>> compatibility with LLVM is moot for out-of-process binaries like the
>> debuginfod server and the debuginfod-find client.  -L symlink loops
>> are a SMOP.  Was mach-o support the only real showstopper?
>>
>> - FChE
>>
>> ___
>> LLVM Developers mailing list
>> llvm-...@lists.llvm.org
>> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>>
> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] LLDB Reproducers

2021-09-23 Thread Jonas Devlieghere via lldb-dev
Hey everyone,

Over the course of the past year, I've come to the conclusion that the
reproducers still require a non-trivial amount of investment to reach
production quality. Unfortunately, I don't have the bandwidth to make that
happen.

Reproducers are inherently all-or-nothing: they either faithfully reproduce
the issue or they don't. There is no middle ground and we've seen that the
smallest bug or shortcoming in LLDB's reproducer infrastructure can render
a reproducer useless. At the same time, the information that's part of the
reproducer is generally valuable in and of itself. The list of commands,
the executable and symbol files are often things we need during an
investigation.

My plan is to transform the reproducers into something that resembles a
sysdiagnose on Apple platforms: an archive containing a variety of
information to help diagnose a bug, but without the machinery to
automatically reproduce the bug. This essentially means keeping a subset of
the "capture" side of the reproducer infrastructure but dropping
the "replay" part.

Unless anyone objects and volunteers to maintain (and improve) the current
reproducer functionality, I'm going to move in the direction outlined above
and start ripping out the parts of the reproducers that don't serve their
new purpose.

Cheers,
Jonas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Removing linux mips support

2021-03-09 Thread Jonas Devlieghere via lldb-dev
+1

This all sounds in line with the expectations we've laid out on the mailing
list in the past for platform/language support.

On Tue, Mar 9, 2021 at 12:24 AM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi all,
>
> I propose to remove support for linux mips debugging. This basically
> amounts to deleting
> source/Plugins/Process/Linux/NativeRegisterContextLinux_mips64.{cpp,h}.
> My reasons for doing that are:
>
> - This code is unmaintained (last non-mechanical change was in 2017) and
> untested (no public buildbots), so we don't know if even basic
> functionality works, or if it indeed builds.
>
> - At the same time, it is carrying a lot of technical debt, which is leaking 
> out of the mips-specific files, and interfering with other development
> efforts. The last instance of this is D96766, which is adding FreeBSD
> mips support, but needs to work around linux specific knowledge leaking
> into supposedly generic code. This one should be fixable relatively
> easily (these days we already have precedents for similar things in x86
> and arm code), but it needs someone who is willing to do that.
>
> But that is not all. To support mips, we introduced two new fields into
> the RegisterInfo struct (dynamic_size_dwarf_{expr_bytes,len}). These are
> introducing a lot of clutter in all our RegisterInfo definitions (which
> we have **a lot** of) and are not really consistent with the long term
> vision of the gdb-remote protocol usage in lldb. These days, we have a
> different mechanism for this (added to support a similar feature in
> arm), it would be better to implement this feature in terms of that. I
> would tout this (removal of these fields) as the main benefit of
> dropping mips support.
>
> So, unless someone is willing to address these issues (I'm happy to provide 
> support where I can), I propose we drop mips support. Generic mips
> support will remain (and hopefully be better tested) thanks to the
> FreeBSD mips port, so re-adding mips support should be a matter of
> reimplementing the linux bits.
>
> regards,
> Pavel
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
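
The RegisterInfo fields in question look roughly like this (abbreviated
sketch; the real struct lives in lldb/include/lldb/lldb-private-types.h, has
more members, and the types here are approximate):

    #include <cstddef>
    #include <cstdint>

    // Sketch only: the two trailing, mips-only members that every register
    // definition currently has to carry.
    struct RegisterInfoSketch {
      const char *name;
      uint32_t byte_size;
      const uint8_t *dynamic_size_dwarf_expr_bytes; // DWARF expression that
                                                     // computes the size at runtime
      size_t dynamic_size_dwarf_len;                 // length of that expression
    };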
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Updating or removing lldb's copy of unittest2

2021-01-28 Thread Jonas Devlieghere via lldb-dev
Hey David,

On Thu, Jan 28, 2021 at 2:46 AM David Spickett via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I came across a minor bug writing some lldb-server tests where single
> line diffs weren't handled correctly by unittest2. Turns out they are
> handled in the latest version, but the third_party/ version is older than that.
>
> https://bugs.python.org/issue9174
> https://hg.python.org/unittest2/rev/96e432563d53 (though I think the
> commit title is a mistake)
>
> So I thought of cherry picking that one thing (assuming licensing
> would allow me to), or even updating the whole copy (a lot of churn
> for a single fix). Then I remembered that llvm in general has been
> moving to Python3.
>

I made an attempt to update the vendored unittest2 module in the past [1].
I diffed our vendored version with the release it was based on, updated the
module and re-applied the changes. That was the easy part. The more
intrusive part is that the testing framework changed the way it deals with
expected failures. The old version used exceptions, while the new framework
only looks at asserts that fail. I don't remember the details, but we are
relying on that mechanism somehow and as a result a bunch of tests failed.
The good thing is that this uncovered a few tests that were XFAILed but
were really failing for unrelated reasons (i.e. Python throwing
an exception because the test was plain wrong, rather than an assertion
triggering or what was being tested not working). Anyway, hopefully this
isn't too much work, but at the time something more important came up and I
haven't had time to look at this again since.


> Looking at https://lldb.llvm.org/resources/build.html it doesn't
> explicitly say Python2 isn't supported, but only Python3 is mentioned
> so I assume lldb is now Python3 only.
>

LLVM dropped support for Python 2 at the beginning of this year [2]. For
LLDB specifically, I've asked for a bit more time before we start making
"Python 2 incompatible" changes [3] as we still have to maintain Python 2
support internally. We're actively working to drop that requirement.


> If that is correct, is it worth me investigating using Python3's built
> in unittest module instead, and removing our copy of unittest2?
>

I'm in favor of dropping a vendored dependency, assuming of course we can
get rid of the modification we rely on today. If we go that route, I want to
ask that we land this after the 13 release is branched.

Cheers,
Jonas

[1]
https://github.com/JDevlieghere/llvm-project/tree/update-vendored-unittest2
[2] https://lists.llvm.org/pipermail/llvm-dev/2020-December/147372.html
[3] https://lists.llvm.org/pipermail/lldb-dev/2020-August/016388.html


> Thanks,
> David Spickett.
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] Moving away from epydoc for the LLDB SB API documentation

2020-11-25 Thread Jonas Devlieghere via lldb-dev
Another issue with epydoc is that it currently doesn't list properties. The
checked-in documentation from the old days had them, but I never got epydoc
to generate them (and to be fair, I never really tried). Instead I looked at
alternatives as well. The main issue I found is that, while it's easy to trick
epydoc (see lldb/docs/CMakeLists.txt) into parsing the bindings without
actually needing liblldb to be built (building it is out of the question for
the server that renders the docs), all the other alternatives I tried would
attempt to do an `import lldb`, which would obviously fail without the dylib.
More important things came up and I never really followed up on this. Maybe
it's easy to hack around that (but please, no static bindings), but I think
it's an important thing to consider.

I feel very similarly to Jordan. I like Sphinx because it's already used by
LLVM and LLDB, but unless it can do this without a separate plugin, I'm not
sure how much that really matters. The biggest pro for me is that it looks and
feels like a lot of existing Python documentation. That said, pdoc3 looks and
feels a bit nicer, and it seems to have been around for a while and to be
actively developed. I'm pretty indifferent between the two, so as a tie-breaker
I'd go with the one that requires the least amount of modification.

On Tue, Nov 24, 2020 at 8:13 PM Jordan Rupprecht via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Based on looks alone, your Sphinx example feels the most polished to me.
> And it'd be consistent with the main LLVM docs, which is nice. However, the
> pdoc3 feels much more *usable* (easier to skim through, I love the
> one-pager-ness of it), so that's where my vote is going too.
>
> It'd be nice if pdoc3 had anchors on the **right** side, e.g. if
> you're skimming through and find something you want to link, you can do so
> without having to look it up again on the left. Many doc systems (including
> the main LLDB docs) have a "¶" symbol that appears next to each header when
> hovering for this. Also, the UI feels excessively large/bulky; it'd be nice
> to make it more compact. Both of these things seem like minor issues that
> could be tweaked -- if pdoc3 doesn't already support it, it probably isn't
> too hard to send a patch.
>
> On Tue, Nov 24, 2020 at 5:29 AM Raphael “Teemperor” Isemann via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hi all,
>>
>> some of you might have noticed that the Python SB API documentation on
>> the website hasn't been regenerated in more than a year. I pinged Andrei
>> about fixing the broken generation script. While I was trying to recreate
>> the doc generation setup on my machine I noticed that our documentation is
>> generated by epydoc which is unmaintained and had its last release 12 years
>> ago. It seems we also have some mocking setup for our native _lldb module
>> in place that stopped working with Python 3.
>>
>> While the setup we currently have will probably work for a bit longer
>> (assuming no one upgrades the web server generating the API docs), I would
>> propose we migrate to a newer documentation generator. That would not only
>> make this whole setup a bit more future-proof but we probably could also
>> make the API docs a bit more user-friendly while we're at it.
>>
>> From what I can see we have at least three alternative generators:
>>
>> 1. pydoctor (epydoc's maintained fork) - Example LLDB docs:
>> https://teemperor.de/pub/pydoctor/index.html
>> Pros:
>>   + Doesn't really change the user-experience compared to epydoc.
>> Cons:
>>   - Doesn't really change the user-experience compared to epydoc.
>>   - The website is rather verbose and you need to click around a lot to
>> find anything.
>>   - Horrible user-experience when viewed on mobile.
>>   - No search from what I can see.
>>   - It seems we can't filter out certain types we don't care about (like
>> Swig generated variables/wrappers etc.)
>>   - It doesn't include LLDB's globals/enum values in the API (even when I
>> manually document them in the source). This seems to be a Python thing where
>> opinions are split on how/if globals are supposed to be documented.
>>   - Somehow ignores certain doc strings (I assume it fails to parse them
>> because of the embedded code examples).
>>
>>
>> 2. sphinx (which is also generating the rest of the LLVM websites) -
>> Example LLDB docs: https://teemperor.de/pub/sphinx/index.html
>> Pros:
>>   + The most flexible alternative, so we potentially could fix all the
>> issues we have if we spend enough time implementing plugins.
>>   + We already use sphinx for generating the website. We however don't
>> use its autodoc plugin for actually generating documentation from what I
>> can see.
>> Cons:
>>   - The two plugins I tried for autogenerating our API are hard to modify
>> for our needs (e.g. to implement filters for SWIG generated vars/wrappers).
>>   - In general sphinx is much better if we were to hand-write dedicated
>> Python documentation files, but I don't think we want 

Re: [lldb-dev] [RFC] Segmented Address Space Support in LLDB

2020-11-10 Thread Jonas Devlieghere via lldb-dev

> On Nov 10, 2020, at 12:58 PM, Zdenek Prikryl  wrote:
> 
> Hi all,
> 
> Just for the record, we have successfully implemented the wrapping of addr_t 
> into a class to support multiple address spaces. The info about address space 
> is stored in the ELF file, so we get the info from the ELF parser and then pass 
> it to the rest of the system. The CLI/MI interface has been extended as well, so 
> the user can select which address space they want for memory printing. Similarly, 
> we patched expression evaluation, disassembler, etc.

That's really interesting, I'm excited to hear that this is feasible and has 
been done before. Is this code available publicly and/or is this something 
you'd be willing to upstream (with our help)? 

> 
> If the address wrap is part of the upstream version, it will be awesome :-)...
> 
> Best regards.
> 
> On 10/20/20 9:30 PM, Ted Woodward via lldb-dev wrote:
>> I agree with Pavel about the larger picture - we need to know the driver 
>> behind address spaces before we can discuss a workable solution.
>> 
>> I've dealt with 2 use cases - Harvard architecture cores, and low level 
>> hardware debugging.
>> 
>> A Harvard architecture core has separate instruction and data memories. 
>> These often use the same addresses, so to distinguish between them you need 
>> address spaces. The Motorola DSP56300 had 1 program and 2 data memories, 
>> called p, x and y. p:100, x:100 and y:100 were all separate memories, so 
>> "address 100" isn't enough to get what the user needed to see.
>> 
>> For low level hardware debugging (often using JTAG), many devices let you 
>> access memories in ways like "virtual using the TLB", or "virtual == 
>> physical, through the core", or "physical, through the SoC, not cached". 
>> Memory spaces, done right, can give the user the flexibility to pick how to 
>> view memory.
>> 
>> 
>> Are these the use cases you were envisioning, Jonas?
>> 
>>> -Original Message-
>>> From: lldb-dev  On Behalf Of Pavel Labath
>>> via lldb-dev
>>> Sent: Tuesday, October 20, 2020 12:51 PM
>>> To: Jonas Devlieghere ; LLDB >> d...@lists.llvm.org>
>>> Subject: [EXT] Re: [lldb-dev] [RFC] Segmented Address Space Support in
>>> LLDB
>>> 
>>> There are a lot of things that are unclear to me about this proposal. The
>>> mechanics of representing a segmented address are one thing, but I think
>>> that the really interesting part will be the interaction with the rest of 
>>> lldb. Like
>>> - What's going to be the source of this address space information? Is it 
>>> going
>>> to be statically baked into lldb (a function of the target architecture?), 
>>> or
>>> dynamically retrieved from the target or platform we're debugging? How
>>> would that work?
>>> - How is this going to interact with Object/SymbolFile classes? Are you
>>> expecting to use existing object and symbol formats for address space
>>> information, or some custom ones? AFAIK, none of the existing formats
>>> actually support encoding address space information (though that hasn't
>>> stopped people from trying).
>>> 
>>> Without understanding the bigger picture it's hard for me to say whether the
>>> proposed large scale refactoring is a good idea. Nonetheless, I am doubtful 
>>> of
>>> the viability of that approach. Some of my reasons for that are:
>>> - not all addr_ts represent an actual address -- sometimes that is a 
>>> difference
>>> between two addresses, which still uses addr_t, as that's guaranteed to fit.
>>> - relatedly to that, there is a difference (I'd expect) between the 
>>> operations
>>> supported by the two types. addr_t supports all integral operations (though 
>>> I
>>> hope we don't use all of them), but I wouldn't expect to be able to do the
>>> same with a SegmentedAddress. For one, I'd expect it wouldn't be possible
>>> to add two SegmentedAddresses together (which is possible for addr_t).
>>> OTOH, adding a SegmentedAddress and an addr_t would probably be fine?
>>> Should subtracting two SegmentedAddresses result in an addr_t? But
>>> only if they have matching address spaces (and assert otherwise)?
>>> - I'd also be worried about over-generalizing specialized code which can
>>> afford to work with plain addresses, and where the added address space
>>> would be a nuisance (or a source of bugs). E.g. ELF has no notion of address
>>> space, so I don't think I'd find it 

Re: [lldb-dev] Deleting lldb/utils/test/

2020-10-27 Thread Jonas Devlieghere via lldb-dev
Last time I looked at these nothing seemed relevant (anymore) to me either.
I'm in favor of getting rid of the directory.

On Tue, Oct 27, 2020 at 10:26 AM Vedant Kumar via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi,
>
> I'm considering deleting the lldb/utils/test/ directory as a cleanup. Does
> anyone have a reason to keep these scripts around?
>
> Here are the files in the directory:
>
> % ls lldb/utils/test
> README-disasm       README-lldb-disasm   README-run-until-faulted
> disasm.py           lldb-disasm.py       llvm-mc-shell.py
> main.c              ras.py               run-dis.py
> run-until-faulted.py
>
> AFAICT:
>
> - disasm.py would've been helpful before lldb gained a 'disassemble'
> command, but it doesn't seem useful anymore
> - ditto for lldb-disasm.py; this one also seems quite Darwin-specific (I'm
> pretty sure it doesn't work anymore)
> - llvm-mc-shell.py might be useful if you want to type bytes by hand and
> see the disassembly, but even then, seems better to just do `echo ""
> | llvm-mc`
> - ras.py isn't running the test suite properly, also (imo) seems like an
> ersatz Jenkins replacement
> - main.c is just an example program
> - run-dis.py looks like a driver for stress-testing lldb's disassembly
> command, but it looks very iOS/Darwin specific and has likely outlived its
> usefulness
> - run-until-faulted.py runs a program up to 100 times to see if any of the
> runs fail; I suspect most users would reach for a shell one-liner before
> looking for something like this script
>
> Thoughts?
>
> thanks,
> vedant
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [RFC] Segmented Address Space Support in LLDB

2020-10-19 Thread Jonas Devlieghere via lldb-dev
We want to support segmented address spaces in LLDB. Currently, all of
LLDB’s external API, command line interface, and internals assume that an
address in memory can be addressed unambiguously as an addr_t (aka
uint64_t). To support a segmented address space we’d need to extend addr_t
with a discriminator (an aspace_t) to uniquely identify a location in
memory. This RFC outlines what would need to change and how we propose to
do that.

### Addresses in LLDB

Currently, LLDB has two ways of representing an address:

 - Address object. Mostly represents addresses as Section+offset for a
binary image loaded in the Target. An Address in this form can persist
across executions, e.g. an address breakpoint in a binary image that loads
at a different address every execution. An Address object can represent
memory not mapped to a binary image. Heap, stack, and jitted items will all be
represented as the uint64_t load address of the object, and cannot persist
across multiple executions. You must have the Target object available to
get the current load address of an Address object in the current process
run. Some parts of lldb do not have a Target available to them, so they
require that the Address can be devolved to an addr_t (aka uint64_t) and
passed in.
 - The addr_t (aka uint64_t) type. Primarily used when receiving input
(e.g. from a user on the command line) or when interacting with the
inferior (reading/writing memory) for addresses that need not persist
across runs. Also used when reading DWARF and in our symbol tables to
represent file offset addresses, where the size of an Address object would
be objectionable.

## Proposal

### Address + ProcessAddress

 - The Address object gains a segment discriminator member variable.
Everything that creates an Address will need to provide this segment
discriminator.
 - A ProcessAddress object which is a uint64_t and a segment discriminator
as a replacement for addr_t. ProcessAddress objects would not persist
across multiple executions. Similar to how you can create an addr_t from an
Address+Target today, you can create a ProcessAddress given an
Address+Target. When we pass around addr_ts today, they would be replaced
with ProcessAddress, with the exception of symbol tables where the added
space would be significant, and we do not believe we need segment
discriminators today.
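
A minimal sketch of the shape being described, purely for illustration (the
proposal fixes the concept of a value plus an address space discriminator, not
these exact names, types, or widths):

    #include <cstdint>

    using addr_t = uint64_t;   // stand-in for lldb::addr_t
    using aspace_t = uint32_t; // width intentionally left unspecified here

    struct ProcessAddress {
      addr_t value = UINT64_MAX;  // UINT64_MAX mirrors LLDB_INVALID_ADDRESS
      aspace_t address_space = 0; // which segment/space 'value' belongs to
    };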

### Address Only

Extend the lldb_private::Address class to be the one representation of
locations, including file-based ones valid before running, file addresses
resolved in a process, and process specific addresses (heap/stack/JIT code)
that are only valid during a run. That is attractive because it would
provide a uniform interface to any “where is something” question you would
ask, either about symbols in files, variables in stack frames, etc.

At present, when we resolve a Section+Offset Address to a “load address” we
provide a Target to the resolution API.  Providing the Target externally
makes sense because a Target knows whether the Section is present or not
and can unambiguously return a load address. We could continue that
approach since the Target always holds only one process, or extend it to
allow passing in a Process when resolving non-file backed addresses.  But
this would make the conversion from addr_t uses to Address uses more
difficult, since we will have to push the Target or Process into all the
APIs that make use of just an addr_t.  Using a single Address class seems
less attractive when you have to provide an external entity to make sense
of it at all the use sites.
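
For concreteness, the existing resolution step looks roughly like this (a
sketch using the current Address::GetLoadAddress API; a segmented variant
would need a Target or Process, and an address space, at every such call site):

    #include "lldb/Core/Address.h"
    #include "lldb/Target/Target.h"

    // Today: a section+offset Address plus a Target yields a plain addr_t.
    lldb::addr_t Resolve(const lldb_private::Address &address,
                         lldb_private::Target &target) {
      lldb::addr_t load_addr = address.GetLoadAddress(&target);
      // LLDB_INVALID_ADDRESS means the section isn't loaded (or there is no
      // process); callers already handle that case.
      return load_addr;
    }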

We could improve this situation by including a Process (as a weak pointer)
and filling that in on the boundaries where, in the current code, we go from an
Address to a process specific addr_t.  That would make the conversion
easier, but add complexity.  Since Addresses are ubiquitous, you won’t know
what any given Address you’ve been handed actually contains.  It could even
have been resolved for another process than the current one.  Making
Address usage-dependent in this way reduces the attractiveness of the
solution.

## Approach

Replacing all the instances of addr_t by hand would be a lot of work.
Therefore we propose writing a clang-based tool to automate this menial
task. The tool would update function signatures and replace uses of addr_t
inside those functions to get the addr_t from the ProcessAddress or Address
and return the appropriate object for functions that currently return an
addr_t. The goal of this tool is to generate one big NFC patch. This tool
need not be perfect; at some point it will be more work to improve the
tool than fixing up the remaining code by hand. After this patch LLDB would
still not really understand address spaces but it will have everything in
place to support them.

Once all the APIs are updated, we can start working on the functional
changes. This means actually interpreting the aspace_t values and making
sure they don’t get dropped.

Finally, when all this work is done 

Re: [lldb-dev] [llvm-dev] HTTP library in LLVM

2020-08-31 Thread Jonas Devlieghere via lldb-dev
On Mon, Aug 31, 2020 at 4:38 PM Petr Hosek via llvm-dev <
llvm-...@lists.llvm.org> wrote:

> There are several options, I've looked at couple of them and the one I
> like the most so far is https://github.com/yhirose/cpp-httplib for a few
> reasons:
>
> * It's MIT licensed.
> * It supports Linux, macOS and Windows (and presumably other platforms).
> * It doesn't have any dependencies, it can optionally use zlib and OpenSSL.
> * It's a modern C++11 implementation, the entire library is a single
> header.
>

This looks appealing indeed. Out of curiosity, what are the other
alternatives you considered?
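
For anyone who hasn't looked at the library yet, the client side really is
just a few lines. A rough sketch (the host and build-id are placeholders, the
/buildid/<ID>/debuginfo path follows the debuginfod web API, and HTTPS
additionally requires building with OpenSSL support):

    #include "httplib.h" // single-header cpp-httplib
    #include <string>

    std::string FetchDebugInfo(const std::string &build_id) {
      httplib::Client client("http://debuginfod.example.com");
      auto res = client.Get(("/buildid/" + build_id + "/debuginfo").c_str());
      if (res && res->status == 200)
        return res->body; // raw debug info bytes
      return {};
    }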


>
> On Mon, Aug 31, 2020 at 4:31 PM Eric Christopher 
> wrote:
>
>> +LLDB Dev  as well for visibility. +Pavel Labath
>>  since he and I have talked about such things.
>>
>> On Mon, Aug 31, 2020 at 7:26 PM David Blaikie  wrote:
>>
>>> [+debug info folks, just as FYI - since the immediate question's more
>>> about 3rd party library deps than the nuances of DWARF, etc]
>>>
>>> I'd imagine avoiding writing such a thing from scratch would be
>>> desirable, but that the decision might depend somewhat on what libraries
>>> out there you/we would consider including, what their licenses and further
>>> dependencies are.
>>>
>>> On Mon, Aug 31, 2020 at 4:22 PM Petr Hosek via llvm-dev <
>>> llvm-...@lists.llvm.org> wrote:
>>>
 We're considering implementing [debuginfod](
 https://sourceware.org/elfutils/Debuginfod.html) library in LLVM.
 Initially, we'd like to start with the client implementation, which would
 enable debuginfod support in tools like llvm-symbolizer, but later we'd
 also like to provide LLVM-based debuginfod server implementation.

 debuginfod uses HTTP and so we need an HTTP library, ideally one that
 supports both client and server.

 The question is, would it be acceptable to use an existing C++ HTTP
 library or would it be preferred to implement an HTTP library in LLVM from
 scratch?
 ___
 LLVM Developers mailing list
 llvm-...@lists.llvm.org
 https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev

>>> ___
> LLVM Developers mailing list
> llvm-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Deprecating Python2 and adding type-annotations to the python API

2020-08-03 Thread Jonas Devlieghere via lldb-dev
Hi Nathan,

Thanks for bringing this up. I've been expecting this question for a while
now.

Python 2 is end-of-life and we should move to Python 3. I'm pretty sure
nobody here disagrees with that. Unfortunately though, we still have
consumers, both internally and externally, that still rely on it. We're
actively making an effort to change that, but we're not quite there yet.

That said, I think we should continue moving in that direction. In line
with the rest of LLVM moving to Python 3 by the end of the year, we've
already made it the default. All our bots on GreenDragon are also building
against Python 3.

As a first step, for the next release, I propose we remove the fallback to
Python 2 and make it the only supported configuration. At the same time we
can convert any scripts and tools (I'm thinking of the lit configurations,
the lldb-dotest and lldb-repro wrappers, etc) to be Python 3 only. During
this time however, we'd ask that the bindings and the test suite remain
compatible with Python 2. Given that Python 3 is the only supported
configuration for developers, we'd take on the burden of maintaining Python
2 compatibility in the test suite and correcting (accidental)
incompatibilities.

When the 12.0 release is cut, we can reconsider the situation. If we're
still not ready by then to drop Python 2 support, I  propose another
intermediate step where we remove Python 2 support from the upstream
repository, but ask the community to not actively modernize the test suite
and the bindings. In this situation we'd be dealing with the merge
conflicts in our downstream fork and this would avoid an endless number of
conflicts in the test suite.

Finally, presumably after the 13.0 release, we'd drop that last
requirement.

Please let me know if you think that sounds like a reasonable timeline.

Thanks,
Jonas


On Mon, Aug 3, 2020 at 3:11 PM Nathan Lanza via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> As a user of the lldb python scripting API I would love to see type
> annotations on the scripting APIs. I posted this diff the other day -
> https://reviews.llvm.org/D84269. Pavel commented that this would require
> deprecating python2 and that the recent 11.0 branch cut might make this a
> good time to do this. So how do people feel about removing python2 support
> and moving the APIs more towards modern python3?
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Break setting aliases...

2020-07-21 Thread Jonas Devlieghere via lldb-dev
I don't mind adding the two-letter commands, but I also don't really see
the value in being able to say `bs` instead of `b s -y`. Until either
becomes muscle memory, both require a little cognitive overhead of thinking
"breakpoint set -y" or "breakpoint source". As a user there would be more
value in knowing that the latter is really `breakpoint set -y` which then
allows you to query the help.

If I understand correctly the problem with `b` is that the regex can't
distinguish easily between what it should parse and what it should forward
to `breakpoint set`. Additionally, because it's essentially a
mini-language, you can't be sure whether something with a colon is a symbol
name containing a colon, or a file and line/column number separated by colons.

I think we should be able to solve the first issue by making `b` a proper
first-class command instead of a regex alias, taking the exact same options
as `breakpoint set`. I think our existing command object argument parser
should be able to parse this and return the remaining "free form" argument,
which we can then parse as a mini-language like we do today. Of course this
would remain suboptimal, but would be strictly better than what we have
today and address the original problem you're trying to solve. Furthermore,
with a first-class command we can do a better job on the help front which
is really underwhelming for _regexp_break command aliases.

That leaves the second problem, which would be solved by the new two-letter
commands but not by changing `b`. From a purity perspective I'd lean
towards the new commands, but as a user I doubt I would use them. I set
almost all my breakpoints with `b` and I don't see a compelling reason to
change to `bs`. So that leaves me with using `b` most of the time, until I
do need to pass some extra option at which point I'll probably just use
`breakpoint set` directly.

TL;DR: Given how widely used `b` is I'd rather improve that and turn it
from a 98% solution into a 99% solution instead of adding new commands.


On Tue, Jul 21, 2020 at 10:22 AM Jim Ingham via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> When we were first devising commands for lldb, we tried to be really
> parsimonious with the one & two letter unique command strings that lldb
> ships with by default.  I was trying to leave us as much flexibility as
> possible as we evolved, and I also wanted to make sure we weren’t taking up
> all the convenient short commands, leaving a cramped space for user aliases.
>
> The _regex_break command was added (and aliased by default to ‘b’) as a
> way to allow quick access for various common breakpoint setting options.
> However it suffers from the problem that you can only provide the options
> that are recognized by the _regexp_break command aliases.  For instance,
> you can’t add the -h option to make a hardware breakpoint.  Because the
> _regexp_break command works by passing the command through a series of
> regexes, stopping at the first match, trying to extend the regular
> expressions to also include “anything else” while not causing one regex to
> claim a command that was really meant for a regex further on in the series
> is really tricky.
>
> That makes it kind of a wall for people.  As soon as you need to do
> anything it doesn’t support you have to go to a command that is not known
> to you (since “b” isn’t related to “break set” in any way that a normal
> user can actually see.)
>
> However, lldb has been around for a while and we only have two unique
> commands of the form “b[A-Za-z]” in the current lldb command set (br and
> bt).  So I think it would be okay for us to take up a few more second
> letter commands to make setting breakpoints more convenient.  I think
> adding:
>
> bs (break source) -> break set -y
> ba (break address) -> break set -a
> bn (break name) -> break set -n
>
> would provide a convenient way to set the most common classes of
> breakpoints while not precluding access to all the other options available
> to “break set”.  We could still leave “b” by itself for the _regex_break
> command - people who’ve figured out it’s intricacies shouldn’t lose their
> investment.  This would be purely additive.
>
> What do people think?
>
> Jim
>
>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Minimum required swig version?

2020-04-16 Thread Jonas Devlieghere via lldb-dev
On Thu, Apr 16, 2020 at 2:42 PM Davidino Italiano via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

>
>
> On Apr 16, 2020, at 2:28 PM, Ted Woodward via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
> http://lldb.llvm.org/resources/build.html says we need swig 2 or later:
>
> If you want to run the test suite, you’ll need to build LLDB with Python
> scripting support.
> · Python 
> · SWIG  2 or later.
>
> I don’t think this is correct anymore.
>
> test/API/python_api/sbenvironment/TestSBEnvironment.py has this line:
> env.Set("FOO", "bar", overwrite=True)
>
> lldb built with swig 2.0.11 fails this test with the error:
> env.Set("FOO", "bar", overwrite=True)
> TypeError: Set() got an unexpected keyword argument 'overwrite'
>
> It works when lldb is built with swig 3.0.8.
>
>
>
> Yes, we bumped the swig requirements.
> Swig-2, among others, don’t support python 3 correctly.
>

I think you're confusing SWIG 1.x and SWIG 2.x. We bumped the requirements
to 2, because that's the first version that correctly supported Python 3.
Personally I don't mind bumping the version again, but this seems more like
a bug that we should be able to fix with SWIG 2.


>
> Feel free to submit a patch.
>
> —
> D
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [RFC] Upstreaming Reproducer Capture/Replay for the API Test Suite

2020-04-06 Thread Jonas Devlieghere via lldb-dev
Hi everyone,

Reproducers in LLDB are currently tested through (1) unit tests, (2)
dedicated end-to-end shell tests and (3) the `lldb-check-repro` suite which
runs all the shell tests against a replayed reproducer. While this already
provides great coverage, we're still missing out on about 800 API tests.
These tests are particularly interesting to the reproducers, because as
opposed to the shell tests, which only exercise a subset of SB API calls
used to implement the driver, they cover the majority of the API surface.

To further qualify the reproducers and to improve test coverage, I want to
capture and replay the API test suite as well. Conceptually, this can be
split up in two stages:

 1. Capture a reproducer and replay it with the driver. This exercises the
reproducer instrumentation (serialization and deserialization) for all the
APIs used in our test suite. While a bunch of issues with the reproducer
instrumentation can be detected at compile time, a large subset only
triggers through assertions at runtime. However, this approach by itself
only verifies that we can (de)serialize API calls and their arguments. It
has no knowledge of the expected results and therefore cannot verify the
results of the API calls.

 2. Capture a reproducer and replay it with dotest.py. Rather than having
the command line driver execute every API call one after another, we can
have dotest.py call the Python API as it normally would, intercept the
call, replay it from the reproducer, and return the replayed result. The
interception can be hidden behind the existing LLDB_RECORD_* macros, which
contains sufficient type info to drive replay. It then simply re-invokes
itself with the arguments deserialized from the reproducer and returns that
result. Just as with the shell tests, this approach allows us to reuse the
existing API tests, completely transparently, to check the reproducer
output.

I have worked on this over the past month and have shown that it is
possible to achieve both stages. I have a downstream fork that contains the
necessary changes.

All the runtime issues found in stage 1 have been fixed upstream. With the
exception of about 30 tests that fail because the GDB packets diverge
during replay, all the tests can be replayed with the driver.

About 120 tests, which include the 30 mentioned earlier, fail to replay for
stage 2. This isn't entirely unexpected: just like with the shell tests, there
are tests that simply are not expected to work. The reproducers don't
currently capture the output of the inferior, and synchronization through
external files won't work either, as those paths will get remapped by the
VFS. This requires manual triage.

I would like to start upstreaming this work so we can start running this in
CI. The majority of the changes are limited to the reproducer
instrumentation, but some changes are needed in the test suite as well, and
there would be a new decorator to skip the unsupported tests. I'm splitting
up the changes in self-contained patches, but wanted to send out this RFC
with the bigger picture first.

Please let me know what you think!

Cheers,
Jonas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] lldb-instr not working

2020-03-23 Thread Jonas Devlieghere via lldb-dev
Hi Walter,

lldb-instr needs a compile_commands.json file to figure out the exact
compiler invocation for every file. Can you verify that the file exists in
the directory you're running lldb-instr from?

Cheers,
Jonas

On Mon, Mar 23, 2020 at 1:29 PM Walter via lldb-dev 
wrote:

> Hi, I've recently tried to use lldb-instr, as mentioned in
> https://lldb.llvm.org/resources/sbapi.html, but I'm having the following
> issue when running it on darwin.
>
> ./lldb-instr
> > LLVM ERROR: Unable to find target for this triple (no targets are
> registered)
>
> Is this a known issue? Or should lldb-instr be built in a special way to
> make it aware of the local compilation target?
>
> Does anyone know anything about this?
>
> Thanks!
>
> - Walter
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB Website is not being updated

2020-03-02 Thread Jonas Devlieghere via lldb-dev
I removed the version number from the home page.

To github.com:llvm/llvm-project.git
   adc69729ec8..c77fc00eec0  master -> master

I'm not sure we should have the banner because (1) the documentation
doesn't change as much as llvm/clang and (2) we don't archive the old
documentation so there's nothing to link to.

On Mon, Mar 2, 2020 at 1:47 PM Adrian Prantl  wrote:

> Ah, that's great! We should probably just don't print a version on this
> variant and/or add a box like the one at the top of http://llvm.org/docs/.
>
> thanks,
> adrian
>
> On Mar 2, 2020, at 1:43 PM, Jonas Devlieghere 
> wrote:
>
> Hey Adrian,
>
> The version is hard-coded in lldb/docs/conf.py, we just need to update it
> there. As far as I know the website is being updated nightly.
>
> Cheers,
> Jonas
>
> On Mon, Mar 2, 2020 at 1:41 PM Adrian Prantl  wrote:
>
>> Hello Tanya,
>>
>> I just looked at the LLDB website and it still displays the out-of-date
>> "version 8" page. Did you get a chance to investigate this in the mean time?
>>
>> thanks,
>> adrian
>>
>> > On Nov 24, 2019, at 8:30 AM, Tanya Lattner 
>> wrote:
>> >
>> > I’ll have to check the status of moving the scripts over and what is
>> going on. But yes, this is all related to moving to GitHub and modifying
>> scripts that used to be either on a post-commit hook or nightly cron.
>> >
>> > -Tanya
>> >
>> >> On Nov 21, 2019, at 3:04 PM, Jonas Devlieghere 
>> wrote:
>> >>
>> >> I see a bunch of errors here:
>> >> http://lists.llvm.org/pipermail/www-scripts/2019-November/thread.html
>> >>
>> >> Is this possibly related to the Github/monorepo transition?
>> >>
>> >> -- Jonas
>> >>
>> >> On Thu, Nov 21, 2019 at 10:52 AM Adrian Prantl via lldb-dev
>> >>  wrote:
>> >>>
>> >>> Hello Tanya,
>> >>>
>> >>> it looks like the cron job that is supposed to be updating the LLDB
>> website isn't running or is otherwise blocked at the moment. You can see on
>> https://lldb.llvm.org that it says "Welcome to the LLDB version 8
>> documentation!". We also recently removed the "Why a New Debugger?"
>> headline and that change isn't showing up either.
>> >>>
>> >>> Would you mind taking a look?
>> >>>
>> >>> thanks for your help,
>> >>> Adrian
>> >>> ___
>> >>> lldb-dev mailing list
>> >>> lldb-dev@lists.llvm.org
>> >>> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>> >
>>
>>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB Website is not being updated

2020-03-02 Thread Jonas Devlieghere via lldb-dev
Hey Adrian,

The version is hard-coded in lldb/docs/conf.py, we just need to update it
there. As far as I know the website is being updated nightly.

Cheers,
Jonas

On Mon, Mar 2, 2020 at 1:41 PM Adrian Prantl  wrote:

> Hello Tanya,
>
> I just looked at the LLDB website and it still displays the out-of-date
> "version 8" page. Did you get a chance to investigate this in the mean time?
>
> thanks,
> adrian
>
> > On Nov 24, 2019, at 8:30 AM, Tanya Lattner 
> wrote:
> >
> > I’ll have to check the status of moving the scripts over and what is
> going on. But yes, this is all related to moving to GitHub and modifying
> scripts that used to be either on a post-commit hook or nightly cron.
> >
> > -Tanya
> >
> >> On Nov 21, 2019, at 3:04 PM, Jonas Devlieghere 
> wrote:
> >>
> >> I see a bunch of errors here:
> >> http://lists.llvm.org/pipermail/www-scripts/2019-November/thread.html
> >>
> >> Is this possibly related to the Github/monorepo transition?
> >>
> >> -- Jonas
> >>
> >> On Thu, Nov 21, 2019 at 10:52 AM Adrian Prantl via lldb-dev
> >>  wrote:
> >>>
> >>> Hello Tanya,
> >>>
> >>> it looks like the cron job that is supposed to be updating the LLDB
> website isn't running or is otherwise blocked at the moment. You can see on
> https://lldb.llvm.org that it says "Welcome to the LLDB version 8
> documentation!". We also recently removed the "Why a New Debugger?"
> headline and that change isn't showing up either.
> >>>
> >>> Would you mind taking a look?
> >>>
> >>> thanks for your help,
> >>> Adrian
> >>> ___
> >>> lldb-dev mailing list
> >>> lldb-dev@lists.llvm.org
> >>> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> >
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB_PYTHON_HOME

2020-02-27 Thread Jonas Devlieghere via lldb-dev
So to make my previous explanation more concrete:

On Thu, Feb 27, 2020 at 11:05 AM Jonas Devlieghere 
wrote:

>
>
> On Thu, Feb 27, 2020 at 10:53 AM Adrian McCarthy 
> wrote:
>
>> Thanks for the info.  Setting Python3_ROOT_DIR solves the problem.
>>
>> Looking at the cmake output from before setting Python3_ROOT_DIR, cmake
>> looks for Python twice and finds it at the two different locations.
>>
>> Early on:
>>
>> -- Found PythonInterp: C:/Python36/python.exe (found version "3.6.8")
>>
>
^ This is using the "old" (CMake < 3.12) way of finding the Python
interpreter.


>
>> Which looks good (modulo the incorrect slash direction).  But later:
>>
>> -- Found Python3: C:/Program Files (x86)/Microsoft Visual
>> Studio/Shared/Python37_64/python.exe (found version "3.7.5") found
>> components:  Interpreter Development
>> -- Found PythonInterpAndLibs: C:/Program Files (x86)/Microsoft Visual
>> Studio/Shared/Python37_64/libs/python37.lib
>>
>
^ This is using the "new" (CMake > 3.12) way of finding the Python
interpreter and libraries.


>
>> Which is where the discrepancy comes in.  Note that only C:\Python36 is
>> in my PATH.
>>
>> It's frustrating that this keeps breaking.  Last time, I had to purge all
>> but one Python installation from my machine to get it to make a consistent
>> choice.  But I just upgraded to VS 2019, and it smuggled in its own version.
>>
>> So why are there two searches anyway?  And why do they have different
>> algorithms that lead to different results?  (I'm not sure _how_ it ever
>> found the Microsoft copy, since there's nothing in the process environment
>> that points that way.)
>>
>
> The reason there's two searches is because LLVM and LLDB have different
> requirements. LLVM just needs a python interpreter to run some scripts.
> LLDB on the other hand needs an interpreter and a matching Python library
> to link against. Before CMake 3.12, finding the interpreter and the
> libraries were two separate calls with no guarantee that they matched.
> This led to all kinds of issues, where you're linking against one
> version of Python and then trying to run the test suite with a totally
> different interpreter. There were other problems on Windows, which meant
> that we had our own hand-rolled implementation to find Python.
>
> This was all fixed in CMake 3.12. With FindPython{2,3} you know you'll
> have a matching interpreter and library. It also fixed all the problems we
> had to work around for Windows. Unfortunately, LLVM's minimum CMake version
> is 3.4, so we can't use it yet. For LLDB on Windows we agreed that the
> benefits of using FindPython3 are worth bumping the minimum required CMake
> version (see lldb/CMakeLists.txt, line 2-4). Once LLVM moves to CMake 3.12
> or later, all these problems should be fixed. We can then call FindPython3
> once and rely on everything being consistent.
>
>
>>
>> On Thu, Feb 27, 2020 at 10:23 AM Jonas Devlieghere 
>> wrote:
>>
>>> Hey Adrian,
>>>
>>> Config.h gets generated by expanding the corresponding CMake variables.
>>> If you look at LLDBConfig.cmake, you can see that LLDB_PYTHON_HOME is
>>> computed from PYTHON_EXECUTABLE. The problem appears to be that somehow CMake
>>> ignored your specified PYTHON_HOME and decided to pick a different Python.
>>> I'm not sure why though, because I use a similar CMake invocation on
>>> Windows.
>>>
>>> > cmake ..\llvm-project\llvm -G Ninja -DCMAKE_BUILD_TYPE=RelWithDebInfo
>>> -DLLVM_ENABLE_PROJECTS="llvm;clang;lldb;lld" -DLLVM_ENABLE_ASSERTIONS=OFF
>>> -DLLVM_ENABLE_ZLIB=FALSE -DLLDB_ENABLE_PYTHON=TRUE
>>> -DPYTHON_HOME="C:/Program Files/Python36/"
>>>
>>> According to FindPython3 (
>>> https://cmake.org/cmake/help/v3.12/module/FindPython3.html), you can
>>> set Python3_ROOT_DIR as a hint. Can you give that a try? If that works we
>>> should populate that variable from PYTHON_HOME in
>>> FindPythonInterpAndLibs.cmake.
>>>
>>> Cheers,
>>> Jonas
>>>
>>> On Thu, Feb 27, 2020 at 10:10 AM Adrian McCarthy via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
 Is there documentation on how lldb\include\lldb\host\config.h is
 generated?  I'm again having the problem of the config trying to point to
 the wrong Python installation.

 When I run cmake, I explicitly point PYTHON_HOME to C:\Python36 like
 this:

 cmake -GNinja -DLLVM_TEMPORARILY_ALLOW_OLD_TOOLCHAIN=ON
 -DCMAKE_BUILD_TYPE=Debug -DLLDB_TEST_DEBUG_TEST_CRASHES=1
 -DPYTHON_HOME=C:\Python36
 -DLLDB_TEST_COMPILER=D:\src\llvm\build\ninja\bin\clang.exe
 ..\..\llvm-project\llvm -DLLVM_ENABLE_ZLIB=OFF
 -DLLVM_ENABLE_PROJECTS="clang;lld;lldb"

 But the generated Config.h contains:

 #define LLDB_PYTHON_HOME "C:/Program Files (x86)/Microsoft Visual
 Studio/Shared/Python37_64"


 And the mismatch causes my build to fail because it goes looking for
 python37_d.dll, which is apparently not part of the Microsoft distribution.
 

Re: [lldb-dev] LLDB_PYTHON_HOME

2020-02-27 Thread Jonas Devlieghere via lldb-dev
On Thu, Feb 27, 2020 at 10:53 AM Adrian McCarthy 
wrote:

> Thanks for the info.  Setting Python3_ROOT_DIR solves the problem.
>
> Looking at the cmake output from before setting Python3_ROOT_DIR, cmake
> looks for Python twice and finds it at the two different locations.
>
> Early on:
>
> -- Found PythonInterp: C:/Python36/python.exe (found version "3.6.8")
>
> Which looks good (modulo the incorrect slash direction).  But later:
>
> -- Found Python3: C:/Program Files (x86)/Microsoft Visual
> Studio/Shared/Python37_64/python.exe (found version "3.7.5") found
> components:  Interpreter Development
> -- Found PythonInterpAndLibs: C:/Program Files (x86)/Microsoft Visual
> Studio/Shared/Python37_64/libs/python37.lib
>
> Which is where the discrepancy comes in.  Note that only C:\Python36 is in
> my PATH.
>
> It's frustrating that this keeps breaking.  Last time, I had to purge all
> but one Python installation from my machine to get it to make a consistent
> choice.  But I just upgraded to VS 2019, and it smuggled in its own version.
>
> So why are there two searches anyway?  And why do they have different
> algorithms that lead to different results?  (I'm not sure _how_ it ever
> found the Microsoft copy, since there's nothing in the process environment
> that points that way.)
>

The reason there's two searches is because LLVM and LLDB have different
requirements. LLVM just needs a python interpreter to run some scripts.
LLDB on the other hand needs an interpreter and a matching Python library
to link against. Before CMake 3.12, finding the interpreter and the
libraries were two separate calls with no guarantee that they matched.
This led to all kinds of issues, where you're linking against one
version of Python and then trying to run the test suite with a totally
different interpreter. There were other problems on Windows, which meant
that we had our own hand-rolled implementation to find Python.

This was all fixed in CMake 3.12. With FindPython{2,3} you know you'll have
a matching interpreter and library. It also fixed all the problems we had
to work around for Windows. Unfortunately, LLVM's minimum CMake version is
3.4, so we can't use it yet. For LLDB on Windows we agreed that the
benefits of using FindPython3 are worth bumping the minimum required CMake
version (see lldb/CMakeLists.txt, line 2-4). Once LLVM moves to CMake 3.12
or later, all these problems should be fixed. We can then call FindPython3
once and rely on everything being consistent.


>
> On Thu, Feb 27, 2020 at 10:23 AM Jonas Devlieghere 
> wrote:
>
>> Hey Adrian,
>>
>> Config.h gets generated by expanding the corresponding CMake variables.
>> If you look at LLDBConfig.cmake, you can see that LLDB_PYTHON_HOME is
>> computed from PYTHON_EXECUTABLE. The problem appears to be that somehow CMake
>> ignored your specified PYTHON_HOME and decided to pick a different Python.
>> I'm not sure why though, because I use a similar CMake invocation on
>> Windows.
>>
>> > cmake ..\llvm-project\llvm -G Ninja -DCMAKE_BUILD_TYPE=RelWithDebInfo
>> -DLLVM_ENABLE_PROJECTS="llvm;clang;lldb;lld" -DLLVM_ENABLE_ASSERTIONS=OFF
>> -DLLVM_ENABLE_ZLIB=FALSE -DLLDB_ENABLE_PYTHON=TRUE
>> -DPYTHON_HOME="C:/Program Files/Python36/"
>>
>> According to FindPython3 (
>> https://cmake.org/cmake/help/v3.12/module/FindPython3.html), you can set
>> Python3_ROOT_DIR as a hint. Can you give that a try? If that works we
>> should populate that variable from PYTHON_HOME in
>> FindPythonInterpAndLibs.cmake.
>>
>> Cheers,
>> Jonas
>>
>> On Thu, Feb 27, 2020 at 10:10 AM Adrian McCarthy via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Is there documentation on how lldb\include\lldb\host\config.h is
>>> generated?  I'm again having the problem of the config trying to point to
>>> the wrong Python installation.
>>>
>>> When I run cmake, I explicitly point PYTHON_HOME to C:\Python36 like
>>> this:
>>>
>>> cmake -GNinja -DLLVM_TEMPORARILY_ALLOW_OLD_TOOLCHAIN=ON
>>> -DCMAKE_BUILD_TYPE=Debug -DLLDB_TEST_DEBUG_TEST_CRASHES=1
>>> -DPYTHON_HOME=C:\Python36
>>> -DLLDB_TEST_COMPILER=D:\src\llvm\build\ninja\bin\clang.exe
>>> ..\..\llvm-project\llvm -DLLVM_ENABLE_ZLIB=OFF
>>> -DLLVM_ENABLE_PROJECTS="clang;lld;lldb"
>>>
>>> But the generated Config.h contains:
>>>
>>> #define LLDB_PYTHON_HOME "C:/Program Files (x86)/Microsoft Visual
>>> Studio/Shared/Python37_64"
>>>
>>>
>>> And the mismatch causes my build to fail because it goes looking for
>>> python37_d.dll, which is apparently not part of the Microsoft distribution.
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB_PYTHON_HOME

2020-02-27 Thread Jonas Devlieghere via lldb-dev
Hey Adrian,

Config.h gets generated by expanding the corresponding CMake variables. If
you look at LLDBConfig.cmake, you can see that LLDB_PYTHON_HOME is computed
from PYTHON_EXECUTABLE. The problem appears to be that CMake somehow ignored your
specified PYTHON_HOME and decided to pick a different Python. I'm not sure
why though, because I use a similar CMake invocation on Windows.

> cmake ..\llvm-project\llvm -G Ninja -DCMAKE_BUILD_TYPE=RelWithDebInfo
-DLLVM_ENABLE_PROJECTS="llvm;clang;lldb;lld" -DLLVM_ENABLE_ASSERTIONS=OFF
-DLLVM_ENABLE_ZLIB=FALSE -DLLDB_ENABLE_PYTHON=TRUE
-DPYTHON_HOME="C:/Program Files/Python36/"

According to FindPython3 (
https://cmake.org/cmake/help/v3.12/module/FindPython3.html), you can set
Python3_ROOT_DIR as a hint. Can you give that a try? If that works we
should populate that variable from PYTHON_HOME in
FindPythonInterpAndLibs.cmake.
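
Concretely, something like this (untested; Python3_ROOT_DIR is the documented
FindPython3 hint, forward slashes avoid escaping issues, and "..." stands for
the rest of your existing options):

  cmake -GNinja ..\..\llvm-project\llvm -DPython3_ROOT_DIR=C:/Python36 ...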

Cheers,
Jonas

On Thu, Feb 27, 2020 at 10:10 AM Adrian McCarthy via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Is there documentation on how lldb\include\lldb\host\config.h is
> generated?  I'm again having the problem of the config trying to point to
> the wrong Python installation.
>
> When I run cmake, I explicitly point PYTHON_HOME to C:\Python36 like this:
>
> cmake -GNinja -DLLVM_TEMPORARILY_ALLOW_OLD_TOOLCHAIN=ON
> -DCMAKE_BUILD_TYPE=Debug -DLLDB_TEST_DEBUG_TEST_CRASHES=1
> -DPYTHON_HOME=C:\Python36
> -DLLDB_TEST_COMPILER=D:\src\llvm\build\ninja\bin\clang.exe
> ..\..\llvm-project\llvm -DLLVM_ENABLE_ZLIB=OFF
> -DLLVM_ENABLE_PROJECTS="clang;lld;lldb"
>
> But the generated Config.h contains:
>
> #define LLDB_PYTHON_HOME "C:/Program Files (x86)/Microsoft Visual
> Studio/Shared/Python37_64"
>
>
> And the mismatch causes my build to fail because it goes looking for
> python37_d.dll, which is apparently not part of the Microsoft distribution.
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Moving lldbsuite API tests

2020-02-10 Thread Jonas Devlieghere via lldb-dev
I'm very excited about this. Thank you for taking the time and effort
to make this happen!

On Mon, Feb 10, 2020 at 9:01 AM Jordan Rupprecht  wrote:
>
> Later today I'm planning to land D71151, which moves 
> lldb/packages/Python/lldbsuite/test to lldb/test/API, and removes the 
> lldb/test/API/testcases symlink. This is a large move, so I expect it will 
> cause conflict for many outstanding patches with lldb tests. However, it 
> should hopefully make testing the lldbsuite/test/api a little easier: for 
> example, you can now run "ninja check-lldb-api-lang" to run that subdirectory 
> of tests, which matches how ninja targets work for the rest of llvm testing. 
> (Note: "ninja check-lldb" still works without re-invoking cmake, but you need 
> to run cmake again to get all the sub targets). It also removes the symlink, 
> which confuses some tools, and makes file navigation confusing.
>
> I have verified no tests got lost in the move via:
> (cd /path/to/src/llvm-build/tools/lldb/test && /usr/bin/python 
> /path/to/src/llvm-build/dev/./bin/llvm-lit -sv 
> /path/to/src/llvm-project/lldb/test/API --show-tests)
> which shows no diffs before/after my patch
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] How/where to add test cases in LLDB.

2020-01-10 Thread Jonas Devlieghere via lldb-dev
Hey Sourabh,

You'll want to take a look at the existing tests in `test/Shell`.
These tests run one or more shell commands and use FileCheck to
verify that the output matches what you expect. These tests use lldb's
batch mode, where lldb commands are executed one after another, rather
than interactively.

A good example is test/Shell/Commands/command-backtrace.test

More info about lit and FileCheck:
https://llvm.org/docs/TestingGuide.html
https://llvm.org/docs/CommandGuide/FileCheck.html
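
To give you an idea of the shape of such a test, a minimal one could look
roughly like this (hypothetical contents, not an actual test from the tree;
%lldb and FileCheck are provided by lit):

  # RUN: %lldb -b -o 'version' | FileCheck %s
  # CHECK: lldb version

For your macro case you would first build a small binary with debug info (if
I remember correctly the shell tests have substitutions like %clang_host for
that), run your expression command against it, and CHECK for the expected
value in the output.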

Cheers,
Jonas

On Fri, Jan 10, 2020 at 12:24 PM Sourabh Singh Tomar via lldb-dev
 wrote:
>
> Hello Everyone,
>
> I've wrote a patch for extending support for new forms[DWARFv5] in macro 
> section. Planning to file a review soon.
>
> But I'm stuck with writing test case for this. My feature works well when 
> using LLDB in interactively, but I need to write a test case to accompany my 
> patch.
>
> To put things in perspective:
> here' s how I tested it interactively:
>
> $lldb a.out
> $display MACRO1  -- macro defined in source/a.out
> $ b main
> $ run
> LLDB output
> - Hook 1 (expr -- MACRO1)
> (int) $0 = 4
> Process 18381 stopped
> * thread #1, name = 'a.out', stop reason = breakpoint 1.1
>
> Need to write/add test case for this so that it will be run by "make 
> check-lldb", any ideas/pointers how to proceed forward from here.
>
> Thanks in anticipation!
> Sourabh Singh Tomar
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Optional Dependencies in LLDB

2020-01-08 Thread Jonas Devlieghere via lldb-dev
On Wed, Jan 8, 2020 at 2:46 PM Adrian Prantl  wrote:
>
>
>
> > On Jan 6, 2020, at 11:17 AM, Jonas Devlieghere via lldb-dev 
> >  wrote:
> >
> > Hey everyone,
> >
> > I just wanted to let you know that most of the work is complete for
> > auto-detecting optional dependencies in LLDB. Unless explicitly
> > specified, optional dependencies like editline will be enabled when
> > available and disabled otherwise.
>
> This "explicitly specified" mode makes it possible to declare that I want it 
> to be hard error if an optional dependency is missing (e.g., to avoid 
> silently dropping editline support by accident)?

Correct, setting any of the LLDB_ENABLE_* to ON and the dependency not
being found will cause a CMake configuration error.
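
For example (illustrative command line, <llvm-src> being your llvm-project/llvm
checkout):

  cmake -G Ninja <llvm-src> -DLLVM_ENABLE_PROJECTS="clang;lldb" -DLLDB_ENABLE_LIBEDIT=ON

With that, a missing libedit stops the configure step with an error instead
of silently producing an LLDB without editline support.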

>
> -- adrian
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Optional Dependencies in LLDB

2020-01-08 Thread Jonas Devlieghere via lldb-dev
Yes, that's correct. This was added in edadb818e5b.

On Tue, Jan 7, 2020 at 11:19 PM Martin Storsjö  wrote:
>
> On Tue, 7 Jan 2020, Jonas Devlieghere wrote:
>
> > After trying it out I concluded that it should be easy enough to check
> > for the static bindings flag in FindPythonInterpAndLibs.cmake so I've
> > implemented your suggestion in fc6f15d4d2c. Thanks again for bringing
> > this up.
>
> Awesome, thanks!
>
> Do I understand this correctly that something similar still would be
> needed for Lua though?
>
> // Martin
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Optional Dependencies in LLDB

2020-01-07 Thread Jonas Devlieghere via lldb-dev
After trying it out I concluded that it should be easy enough to check
for the static bindings flag in FindPythonInterpAndLibs.cmake so I've
implemented your suggestion in fc6f15d4d2c. Thanks again for bringing
this up.

On Tue, Jan 7, 2020 at 1:01 PM Jonas Devlieghere  wrote:
>
> On Tue, Jan 7, 2020 at 12:52 PM Martin Storsjö  wrote:
> >
> > On Mon, 6 Jan 2020, Jonas Devlieghere via lldb-dev wrote:
> >
> > > I just wanted to let you know that most of the work is complete for
> > > auto-detecting optional dependencies in LLDB. Unless explicitly
> > > specified, optional dependencies like editline will be enabled when
> > > available and disabled otherwise. This is different from  the old
> > behavior, where optional dependencies that were enabled by
> > > default would cause an error at configuration time. The motivation is
> > > to make it easier to build LLDB by making things "just work" out of
> > > the box.
> >
> > I think (didn't test at the moment, just browsed the cmakefiles) one case
> > that still isn't handled properly, is the interaction between python/lua
> > and swig. If python or lua are detected, they are enabled, and then the
> > build strictly requires SWIG to be present.
>
> Yup, good point, that's still on my TODO list.
>
> > I think we should check for SWIG first and require it to be present before
> > automatically enabling python and lua.
>
> That would make sense, but I haven't gone that route because
> downstream (Swift) we have the Python bindings checked-in. So it's not
> necessary to have SWIG in order to enable Python. Of course downstream
> shouldn't direct what we do upstream, but if I can figure out a
> solution that minimizes divergence I strongly prefer that. I'm hoping
> to get to this soon.
>
> >
> > // Martin
> >
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Optional Dependencies in LLDB

2020-01-07 Thread Jonas Devlieghere via lldb-dev
On Tue, Jan 7, 2020 at 12:52 PM Martin Storsjö  wrote:
>
> On Mon, 6 Jan 2020, Jonas Devlieghere via lldb-dev wrote:
>
> > I just wanted to let you know that most of the work is complete for
> > auto-detecting optional dependencies in LLDB. Unless explicitly
> > specified, optional dependencies like editline will be enabled when
> > available and disabled otherwise. This is different from  the old
> > behavior, where optional dependencies that were enabled by
> > default would cause an error at configuration time. The motivation is
> > to make it easier to build LLDB by making things "just work" out of
> > the box.
>
> I think (didn't test at the moment, just browsed the cmakefiles) one case
> that still isn't handled properly, is the interaction between python/lua
> and swig. If python or lua are detected, they are enabled, and then the
> build strictly requires SWIG to be present.

Yup, good point, that's still on my TODO list.

> I think we should check for SWIG first and require it to be present before
> automatically enabling python and lua.

That would make sense, but I haven't gone that route because
downstream (Swift) we have the Python bindings checked-in. So it's not
necessary to have SWIG in order to enable Python. Of course downstream
shouldn't direct what we do upstream, but if I can figure out a
solution that minimizes divergence I strongly prefer that. I'm hoping
to get to this soon.

>
> // Martin
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Fwd: [Lldb-commits] Optional Dependencies in LLDB

2020-01-06 Thread Jonas Devlieghere via lldb-dev
Hey Greg,

On Mon, Jan 6, 2020 at 12:28 PM Greg Clayton  wrote:
>
> editline seems like a bad example as being optional as I wouldn't want to use 
> a LLDB that doesn't have editline support. Or are we taking care of this by 
> having the cmake settings files (lldb/cmake/caches/*.cmake) for each system 
> contain the right invocations and most people are expected to use those?

This argument could be made for most of the optional dependencies.
Having said that, there are also use cases where you don't care about
editline at all. Think about people making changes to the clang API
who want to build LLDB to ensure they didn't break anything. As a
developer, you'll have the dependencies whether they're mandatory or
not, so nothing really changes. This does become important when you're
planning to distribute lldb, which is where I think the caches are an
excellent idea.

> Is there an easy way to show all of the LLDB_ENABLE_* values or do we need to 
> search all CMakeLists.txt for this value?

They're currently only listed in LLDBConfig.cmake

add_optional_dependency(LLDB_ENABLE_LIBEDIT ...
add_optional_dependency(LLDB_ENABLE_CURSES ...
add_optional_dependency(LLDB_ENABLE_LZMA ...
add_optional_dependency(LLDB_ENABLE_LUA ...
add_optional_dependency(LLDB_ENABLE_PYTHON ...

I'll add an entry on the build page to make this easier to discover.
Thanks for bringing this up!

> > On Jan 6, 2020, at 11:15 AM, Jonas Devlieghere via lldb-commits 
> >  wrote:
> >
> > Hey everyone,
> >
> > I just wanted to let you know that most of the work is complete for
> > auto-detecting optional dependencies in LLDB. Unless explicitly
> > specified, optional dependencies like editline will be enabled when
> > available and disabled otherwise. This is different from  the old
> > behavior, where optional dependencies that were enabled by
> > default would cause an error at configuration time. The motivation is
> > to make it easier to build LLDB by making things "just work" out of
> > the box.
> >
> > All optional dependencies are now controlled by an LLDB_ENABLE_* CMake
> > flag. The default value for these variables is "Auto", which causes
> > the dependency to be enabled based on whether it was found. It's still
> > possible to obtain the old behavior by setting the corresponding CMake
> > variable to "On" or "Off" respectively.
> >
> > If you have a configuration where you were depending on the old
> > behavior where the dependency being enabled or disabled by default,
> > you might want to consider passing LLDB_ENABLE_*=On/Off to CMake to
> > ensure the dependency is required or ignored respectively.
> >
> > TL;DR Optional dependencies in LLDB are controlled by LLDB_ENABLE_*
> > CMake flags and are auto-detected by default. You can return to the
> > old behavior by setting the variables to "On" or "Off" respectively.
> > ___
> > lldb-commits mailing list
> > lldb-comm...@lists.llvm.org
> > https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Optional Dependencies in LLDB

2020-01-06 Thread Jonas Devlieghere via lldb-dev
Hey everyone,

I just wanted to let you know that most of the work is complete for
auto-detecting optional dependencies in LLDB. Unless explicitly
specified, optional dependencies like editline will be enabled when
available and disabled otherwise. This is different from the old
behavior, where optional dependencies that were enabled by
default would cause an error at configuration time. The motivation is
to make it easier to build LLDB by making things "just work" out of
the box.

All optional dependencies are now controlled by an LLDB_ENABLE_* CMake
flag. The default value for these variables is "Auto", which causes
the dependency to be enabled based on whether it was found. It's still
possible to obtain the old behavior by setting the corresponding CMake
variable to "On" or "Off" respectively.

If you have a configuration where you were depending on the old
behavior of the dependency being enabled or disabled by default,
you might want to consider passing LLDB_ENABLE_*=On/Off to CMake to
ensure the dependency is required or ignored respectively.
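
For example (the flags are as listed above, the rest of the command line is
illustrative; <llvm-src> is your llvm-project/llvm checkout):

  cmake <llvm-src> -DLLDB_ENABLE_PYTHON=On    # hard requirement, error if not found
  cmake <llvm-src> -DLLDB_ENABLE_PYTHON=Off   # never enabled, even if Python is available
  cmake <llvm-src>                            # Auto (default): enabled only when found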

TL;DR Optional dependencies in LLDB are controlled by LLDB_ENABLE_*
CMake flags and are auto-detected by default. You can return to the
old behavior by setting the variables to "On" or "Off" respectively.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] Supporting Lua Scripting in LLDB

2019-12-09 Thread Jonas Devlieghere via lldb-dev
Given that the response so far has been positive, I've put up the
patches for review:

https://reviews.llvm.org/D71232
https://reviews.llvm.org/D71234
https://reviews.llvm.org/D71235

Jonas

On Mon, Dec 9, 2019 at 9:27 AM Jonas Devlieghere  wrote:
>
> On Mon, Dec 9, 2019 at 1:55 AM Pavel Labath  wrote:
> >
> > I think this would be a very interesting project, and would allow us to
> > flesh out the details of the script interpreter interface.
> >
> > A lot of the complexity in our python code comes from the fact that
> > python can be (a) embedded into lldb and (b) lldb can be embedded into
> > python. It's been a while since I worked with lua, but from what I
> > remember, lua was designed to make (a) easy., and I don't think (b) was
> > ever a major goal (though it can always be done ways, of course)..
> >
> > Were you intending to implement both of these directions or just one of
> > them ((a), I guess)?
>
> Thanks for pointing this out. Indeed, my goal is only to support (a)
> for exactly the reasons you brought up.
>
> > The reason I am asking this is because doing only (a) will definitely
> > make lua support simpler than python, but it will also mean it won't be
> > a "python-lite".
> >
> > Both of these options are fine -- I just want to understand where you're
> > going with this. It also has some impact on the testing strategy, as our
> > existing python tests are largely using mode (b).
>
> That's part of my motivation for *not* doing (b). I really don't want
> to create/maintain another (Lua driven) test suite.
>
> > Another question I'm interested in is how deeply will this
> > multi-interpreter thing go? Will it be a build time option, will it be
> > selectable at runtime, but we'll have only one script interpreter per
> > SBDebugger, or will we be able to freely mix'n'match scripting languages?
>
> There is one script interpreter per debugger. As far as I can tell
> from the code this is already enforced.
>
> > I think the last option would be best because of data formatters
> > (otherwise one would have a problem if some of his data formatters are
> > written in python and others in lua), but it would also create a lot
> > more of new api surface, as one would have to worry about consistency of
> > the lua and python views of lldb, etc.
>
> That's an interesting problem I didn't think of. I'm definitely not
> excited about having the same data formatter implemented in both
> scripting languages. Mixing scripting languages makes sense for when
> your LLDB is configured to support both Python and Lua, but what do
> you do for people that want only Lua? They might still want to
> re-implement some data formatters they care about... Anyway, given
> that we don't maintain/ship data formatters in Python ourselves, maybe
> this isn't that big of an issue at all?
>
> > On 09/12/2019 01:25, Jonas Devlieghere via lldb-dev wrote:
> > > Hi everyone,
> > >
> > > Earlier this year, when I was working on the Python script
> > > interpreter, I thought it would be interesting to see what it would
> > > take to support other scripting languages in LLDB. Lua, being designed
> > > to be embedded, quickly came to mind. The idea remained in the back of
> > > my head, but I never really got around to it, until now.
> > >
> > > I was pleasantly surprised to see that it only took me a few hours to
> > > create a basic but working prototype. It supports running single
> > > commands as well as an interactive interpreter and has access to most
> > > of the SB API through bindings generated by SWIG. Of course it's far
> > > from complete.
> > >
> > > Before I invest more time in this, I'm curious to hear what the
> > > community thinks about adding support for another scripting language
> > > to LLDB. Do we need both Lua and Python?
> > >
> > > Here are some of the reasons off the top of my head as to why the
> > > answer might be
> > > "yes":
> > >
> > >  - The cost for having another scripting language is pretty small. The
> > > Lua script interpreter is very simple and SWIG can reuse the existing
> > > interfaces to generate the bindings.
> > >  - LLDB is designed to support multiple script interpreters, but in
> > > reality we only have one. Actually exercising this property ensures
> > > that we don't unintentionally break that design assumptions.
> > >  - The Python script interpreter is complex. It's hard to figure out
> > > what's really needed to support another language. The L

Re: [lldb-dev] [RFC] Supporting Lua Scripting in LLDB

2019-12-09 Thread Jonas Devlieghere via lldb-dev
On Mon, Dec 9, 2019 at 1:55 AM Pavel Labath  wrote:
>
> I think this would be a very interesting project, and would allow us to
> flesh out the details of the script interpreter interface.
>
> A lot of the complexity in our python code comes from the fact that
> python can be (a) embedded into lldb and (b) lldb can be embedded into
> python. It's been a while since I worked with lua, but from what I
> remember, lua was designed to make (a) easy., and I don't think (b) was
> ever a major goal (though it can always be done ways, of course)..
>
> Were you intending to implement both of these directions or just one of
> them ((a), I guess)?

Thanks for pointing this out. Indeed, my goal is only to support (a)
for exactly the reasons you brought up.

> The reason I am asking this is because doing only (a) will definitely
> make lua support simpler than python, but it will also mean it won't be
> a "python-lite".
>
> Both of these options are fine -- I just want to understand where you're
> going with this. It also has some impact on the testing strategy, as our
> existing python tests are largely using mode (b).

That's part of my motivation for *not* doing (b). I really don't want
to create/maintain another (Lua driven) test suite.

> Another question I'm interested in is how deeply will this
> multi-interpreter thing go? Will it be a build time option, will it be
> selectable at runtime, but we'll have only one script interpreter per
> SBDebugger, or will we be able to freely mix'n'match scripting languages?

There is one script interpreter per debugger. As far as I can tell
from the code this is already enforced.

> I think the last option would be best because of data formatters
> (otherwise one would have a problem if some of his data formatters are
> written in python and others in lua), but it would also create a lot
> more of new api surface, as one would have to worry about consistency of
> the lua and python views of lldb, etc.

That's an interesting problem I didn't think of. I'm definitely not
excited about having the same data formatter implemented in both
scripting languages. Mixing scripting languages makes sense for when
your LLDB is configured to support both Python and Lua, but what do
you do for people that want only Lua? They might still want to
re-implement some data formatters they care about... Anyway, given
that we don't maintain/ship data formatters in Python ourselves, maybe
this isn't that big of an issue at all?

> On 09/12/2019 01:25, Jonas Devlieghere via lldb-dev wrote:
> > Hi everyone,
> >
> > Earlier this year, when I was working on the Python script
> > interpreter, I thought it would be interesting to see what it would
> > take to support other scripting languages in LLDB. Lua, being designed
> > to be embedded, quickly came to mind. The idea remained in the back of
> > my head, but I never really got around to it, until now.
> >
> > I was pleasantly surprised to see that it only took me a few hours to
> > create a basic but working prototype. It supports running single
> > commands as well as an interactive interpreter and has access to most
> > of the SB API through bindings generated by SWIG. Of course it's far
> > from complete.
> >
> > Before I invest more time in this, I'm curious to hear what the
> > community thinks about adding support for another scripting language
> > to LLDB. Do we need both Lua and Python?
> >
> > Here are some of the reasons off the top of my head as to why the
> > answer might be
> > "yes":
> >
> >  - The cost for having another scripting language is pretty small. The
> > Lua script interpreter is very simple and SWIG can reuse the existing
> > interfaces to generate the bindings.
> >  - LLDB is designed to support multiple script interpreters, but in
> > reality we only have one. Actually exercising this property ensures
> > that we don't unintentionally break that design assumptions.
> >  - The Python script interpreter is complex. It's hard to figure out
> > what's really needed to support another language. The Lua script
> > interpreter on the other hand is pretty straightforward. Common code
> > can be shared by both.
> >  - Currently Python support is disabled for some targets, like Android
> > and iOS. Lua could enable scripting for these environments where
> > having all of Python is overkill or undesirable.
> >
> > Reasons why the answer might be "no":
> >
> >  - Are our users going to use this?
> >  - Supporting Python is an ongoing pain. Do we really want to risk
> > burdening ourselves with another scripting language?
> >  - The Python API is ver

[lldb-dev] [RFC] Supporting Lua Scripting in LLDB

2019-12-08 Thread Jonas Devlieghere via lldb-dev
Hi everyone,

Earlier this year, when I was working on the Python script
interpreter, I thought it would be interesting to see what it would
take to support other scripting languages in LLDB. Lua, being designed
to be embedded, quickly came to mind. The idea remained in the back of
my head, but I never really got around to it, until now.

I was pleasantly surprised to see that it only took me a few hours to
create a basic but working prototype. It supports running single
commands as well as an interactive interpreter and has access to most
of the SB API through bindings generated by SWIG. Of course it's far
from complete.
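
To make that a bit more concrete, a session could look roughly like this
(a hypothetical sketch; the option name, the prompt and the bindings are all
part of the work-in-progress prototype and may well change):

  $ lldb --script-language lua
  (lldb) script
  >>> t = lldb.debugger:CreateTarget("/bin/ls")
  >>> print(t:GetExecutable():GetFilename())
  ls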

Before I invest more time in this, I'm curious to hear what the
community thinks about adding support for another scripting language
to LLDB. Do we need both Lua and Python?

Here are some of the reasons off the top of my head as to why the
answer might be
"yes":

 - The cost for having another scripting language is pretty small. The
Lua script interpreter is very simple and SWIG can reuse the existing
interfaces to generate the bindings.
 - LLDB is designed to support multiple script interpreters, but in
reality we only have one. Actually exercising this property ensures
that we don't unintentionally break that design assumption.
 - The Python script interpreter is complex. It's hard to figure out
what's really needed to support another language. The Lua script
interpreter on the other hand is pretty straightforward. Common code
can be shared by both.
 - Currently Python support is disabled for some targets, like Android
and iOS. Lua could enable scripting for these environments where
having all of Python is overkill or undesirable.

Reasons why the answer might be "no":

 - Are our users going to use this?
 - Supporting Python is an ongoing pain. Do we really want to risk
burdening ourselves with another scripting language?
 - The Python API is very well tested. We'd need to add tests for the
Lua bindings as well. It's unlikely this will match the coverage of
Python, and probably even undesirable, because what's the point of
testing the same thing twice. Also, do we want to risk fragmenting
tests across two scripting languages?

There's probably a bunch more stuff that I didn't even think of. :-)

Personally I lean towards "yes" because I feel the benefits outweigh
the costs, but of course that remains to be seen. Please let me know
what you think!

If you're curious about what this looks like, you can find the patches
on my fork on GitHub:
https://github.com/JDevlieghere/llvm-project/tree/lua

Cheers,
Jonas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB Website is not being updated

2019-11-21 Thread Jonas Devlieghere via lldb-dev
I see a bunch of errors here:
http://lists.llvm.org/pipermail/www-scripts/2019-November/thread.html

Is this possibly related to the Github/monorepo transition?

-- Jonas

On Thu, Nov 21, 2019 at 10:52 AM Adrian Prantl via lldb-dev
 wrote:
>
> Hello Tanya,
>
> it looks like the cron job that is supposed to be updating the LLDB website 
> isn't running or is otherwise blocked at the moment. You can see on 
> https://lldb.llvm.org that it says "Welcome to the LLDB version 8 
> documentation!". We also recently removed the "Why a New Debugger?" headline 
> and that change isn't showing up either.
>
> Would you mind taking a look?
>
> thanks for your help,
> Adrian
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Segfault using the lldb python module with a non-Xcode python binary

2019-11-14 Thread Jonas Devlieghere via lldb-dev
I've put up a patch to extend the documentation: https://reviews.llvm.org/D70252

Please have a look and let me know if you have any comments or suggestions!

On Thu, Nov 14, 2019 at 9:17 AM Jonas Devlieghere  wrote:
>
> Hey António,
>
> If I understand correctly, you're trying to mix between two versions
> of the Python interpreter and library. That's not something that's
> supported and has always been an issue. Internally we get the
> occasional bug report where somebody install python via homebrew or
> python.org and the corresponding interpreter ends up first in their
> path because /usr/local/bin comes before /usr/bin. The same is true if
> you build your own LLDB, if you link against Homebrew's Python3 dylib,
> you need to use the Homebrew interpreter. In CMake we try really hard
> to ensure those two are in sync for when we run the (Python) test
> suite.
>
> On macOS Catalina, there's a shim for python3 in /usr/bin/ that will
> launch the interpreter from Xcode, which matches what the LLDB from
> Xcode is linked against.
>
> Given that this is an issue that comes up frequently, I'm going to add
> a bit of documentation about this on the LLDB website.
>
> Best,
> Jonas
>
> On Wed, Nov 13, 2019 at 10:53 PM António Afonso via lldb-dev
>  wrote:
> >
> > I'm building lldb with python3 support by using the framework that is 
> > shipped with the latest versions of Xcode.
> >
> > I'm able to build and run lldb just fine but if I try to use the lldb 
> > python module on a python binary that is not the one from Xcode it 
> > segfaults when creating the module. I then tried with the stock lldb from 
> > Xcode and found the exact same issue ☹. I don’t think this was a problem 
> > before?
> >
> >
> >
> > I'm not sure why this happens and I wasn't able to debug the issue. I've 
> > already tried with a binary that has the exact same version of python but 
> > still the same problem:
> >
> >
> >
> > Works fine with the Xcode binary:
> >
> > $ PYTHONPATH=`lldb -P` 
> > /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.7/bin/python3
> >
> > Python 3.7.3 (default, Sep 18 2019, 14:29:06)
> >
> > [Clang 11.0.0 (clang-1100.0.33.8)] on darwin
> >
> > Type "help", "copyright", "credits" or "license" for more information.
> >
> > >>> import lldb
> >
> > >>>
> >
> >
> >
> > Fails with any other:
> >
> > $ PYTHONPATH=`lldb -P` /Users/aadsm/.pyenv/versions/3.7.3/bin/python
> >
> > Python 3.7.3 (default, Nov 12 2019, 23:19:54)
> >
> > [Clang 11.0.0 (clang-1100.0.33.8)] on darwin
> >
> > Type "help", "copyright", "credits" or "license" for more information.
> >
> > >>> import lldb
> >
> > Segmentation fault: 11
> >
> >
> >
> > I attached lldb to see where it was failing and it's right after liblldb is 
> > loaded and python is trying to create the module itself, in the 
> > PyModule_Create2 function 
> > (https://github.com/python/cpython/blob/master/Objects/moduleobject.c#L173-L179).
> >
> > The disassembly shows:
> >
> >
> >
> > Process 89097 stopped
> >
> > * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS 
> > (code=1, address=0x10)
> >
> > frame #0: 0x00010f4cae5b Python3`PyModule_Create2 + 27
> >
> > Python3`PyModule_Create2:
> >
> > ->  0x10f4cae5b <+27>: movq   0x10(%rax), %rdi
> >
> > 0x10f4cae5f <+31>: callq  0x10f5823b0   ; 
> > _PyImport_IsInitialized
> >
> > 0x10f4cae64 <+36>: testl  %eax, %eax
> >
> > 0x10f4cae66 <+38>: je 0x10f4cae77   ; <+55>
> >
> > Target 0: (Python) stopped.
> >
> > (lldb) dis
> >
> > Python3`PyModule_Create2:
> >
> > 0x10f4cae40 <+0>:  pushq  %rbp
> >
> > 0x10f4cae41 <+1>:  movq   %rsp, %rbp
> >
> > 0x10f4cae44 <+4>:  pushq  %r14
> >
> > 0x10f4cae46 <+6>:  pushq  %rbx
> >
> > 0x10f4cae47 <+7>:  movl   %esi, %r14d
> >
> > 0x10f4cae4a <+10>: movq   %rdi, %rbx
> >
> > 0x10f4cae4d <+13>: leaq   0x2226ac(%rip), %rax  ; _PyRuntime
> >
> > 0x10f4cae54 <+20>: movq   0x5a0(%rax), %rax
> >
> > ->  0x10f4cae5b <+27>: movq   0x10(%rax), %rdi
> >
> > 0x10f4cae5f <+31>: callq  0x10f5823b0   ; 
> > _PyImport_IsInitialized
> >
> > 0x10f4cae64 <+36>: testl  %eax, %eax
> >
> > 0x10f4cae66 <+38>: je 0x10f4cae77   ; <+55>
> >
> > 0x10f4cae68 <+40>: movq   %rbx, %rdi
> >
> > 0x10f4cae6b <+43>: movl   %r14d, %esi
> >
> > 0x10f4cae6e <+46>: popq   %rbx
> >
> > 0x10f4cae6f <+47>: popq   %r14
> >
> > 0x10f4cae71 <+49>: popq   %rbp
> >
> > 0x10f4cae72 <+50>: jmp0x10f4cae90   ; 
> > _PyModule_CreateInitialized
> >
> > 0x10f4cae77 <+55>: leaq   0x14f111(%rip), %rdi  ; "Python import 
> > machinery not initialized"
> >
> > 0x10f4cae7e <+62>: callq  0x10f593d40   ; Py_FatalError
> >
> > 0x10f4cae83 <+67>: nopw   %cs:(%rax,%rax)
> >
> > 0x10f4cae8d <+77>: nopl   (%rax)
> >
> >
> >
> > Not really sure how to debug this besides trying to build my own version of 
> > 

Re: [lldb-dev] Segfault using the lldb python module with a non-Xcode python binary

2019-11-14 Thread Jonas Devlieghere via lldb-dev
Hey António,

If I understand correctly, you're trying to mix two different versions
of the Python interpreter and library. That's not something that's
supported and has always been an issue. Internally we get the
occasional bug report where somebody installs Python via Homebrew or
python.org and the corresponding interpreter ends up first in their
path because /usr/local/bin comes before /usr/bin. The same is true if
you build your own LLDB: if you link against Homebrew's Python3 dylib,
you need to use the Homebrew interpreter. In CMake we try really hard
to ensure those two are in sync for when we run the (Python) test
suite.

On macOS Catalina, there's a shim for python3 in /usr/bin/ that will
launch the interpreter from Xcode, which matches what the LLDB from
Xcode is linked against.
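
A quick way to see what you are actually mixing (plain shell, nothing
LLDB-specific about it):

  lldb -P                                      # module path the lldb you run expects
  which python3                                # interpreter you are about to launch
  python3 -c 'import sys; print(sys.version)'  # and its version

If the interpreter in the last two lines doesn't come from the same
installation that lldb was linked against, you'll end up in the situation
described above.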

Given that this is an issue that comes up frequently, I'm going to add
a bit of documentation about this on the LLDB website.

Best,
Jonas

On Wed, Nov 13, 2019 at 10:53 PM António Afonso via lldb-dev
 wrote:
>
> I'm building lldb with python3 support by using the framework that is shipped 
> with the latest versions of Xcode.
>
> I'm able to build and run lldb just fine but if I try to use the lldb python 
> module on a python binary that is not the one from Xcode it segfaults when 
> creating the module. I then tried with the stock lldb from Xcode and found 
> the exact same issue ☹. I don’t think this was a problem before?
>
>
>
> I'm not sure why this happens and I wasn't able to debug the issue. I've 
> already tried with a binary that has the exact same version of python but 
> still the same problem:
>
>
>
> Works fine with the Xcode binary:
>
> $ PYTHONPATH=`lldb -P` 
> /Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.7/bin/python3
>
> Python 3.7.3 (default, Sep 18 2019, 14:29:06)
>
> [Clang 11.0.0 (clang-1100.0.33.8)] on darwin
>
> Type "help", "copyright", "credits" or "license" for more information.
>
> >>> import lldb
>
> >>>
>
>
>
> Fails with any other:
>
> $ PYTHONPATH=`lldb -P` /Users/aadsm/.pyenv/versions/3.7.3/bin/python
>
> Python 3.7.3 (default, Nov 12 2019, 23:19:54)
>
> [Clang 11.0.0 (clang-1100.0.33.8)] on darwin
>
> Type "help", "copyright", "credits" or "license" for more information.
>
> >>> import lldb
>
> Segmentation fault: 11
>
>
>
> I attached lldb to see where it was failing and it's right after liblldb is 
> loaded and python is trying to create the module itself, in the 
> PyModule_Create2 function 
> (https://github.com/python/cpython/blob/master/Objects/moduleobject.c#L173-L179).
>
> The disassembly shows:
>
>
>
> Process 89097 stopped
>
> * thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS 
> (code=1, address=0x10)
>
> frame #0: 0x00010f4cae5b Python3`PyModule_Create2 + 27
>
> Python3`PyModule_Create2:
>
> ->  0x10f4cae5b <+27>: movq   0x10(%rax), %rdi
>
> 0x10f4cae5f <+31>: callq  0x10f5823b0   ; 
> _PyImport_IsInitialized
>
> 0x10f4cae64 <+36>: testl  %eax, %eax
>
> 0x10f4cae66 <+38>: je 0x10f4cae77   ; <+55>
>
> Target 0: (Python) stopped.
>
> (lldb) dis
>
> Python3`PyModule_Create2:
>
> 0x10f4cae40 <+0>:  pushq  %rbp
>
> 0x10f4cae41 <+1>:  movq   %rsp, %rbp
>
> 0x10f4cae44 <+4>:  pushq  %r14
>
> 0x10f4cae46 <+6>:  pushq  %rbx
>
> 0x10f4cae47 <+7>:  movl   %esi, %r14d
>
> 0x10f4cae4a <+10>: movq   %rdi, %rbx
>
> 0x10f4cae4d <+13>: leaq   0x2226ac(%rip), %rax  ; _PyRuntime
>
> 0x10f4cae54 <+20>: movq   0x5a0(%rax), %rax
>
> ->  0x10f4cae5b <+27>: movq   0x10(%rax), %rdi
>
> 0x10f4cae5f <+31>: callq  0x10f5823b0   ; 
> _PyImport_IsInitialized
>
> 0x10f4cae64 <+36>: testl  %eax, %eax
>
> 0x10f4cae66 <+38>: je 0x10f4cae77   ; <+55>
>
> 0x10f4cae68 <+40>: movq   %rbx, %rdi
>
> 0x10f4cae6b <+43>: movl   %r14d, %esi
>
> 0x10f4cae6e <+46>: popq   %rbx
>
> 0x10f4cae6f <+47>: popq   %r14
>
> 0x10f4cae71 <+49>: popq   %rbp
>
> 0x10f4cae72 <+50>: jmp0x10f4cae90   ; 
> _PyModule_CreateInitialized
>
> 0x10f4cae77 <+55>: leaq   0x14f111(%rip), %rdi  ; "Python import 
> machinery not initialized"
>
> 0x10f4cae7e <+62>: callq  0x10f593d40   ; Py_FatalError
>
> 0x10f4cae83 <+67>: nopw   %cs:(%rax,%rax)
>
> 0x10f4cae8d <+77>: nopl   (%rax)
>
>
>
> Not really sure how to debug this besides trying to build my own version of 
> python and see if I can repro (I don't have this issue on linux). I’ve also 
> checked the sys.abiflags and both binaries have the same ones.
>
> Has anyone experienced this before or has any pointers to debug it?
>
> - Afonso
>
> --
> Best regards,
> António Afonso
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org

Re: [lldb-dev] [cfe-dev] RFC: Using GitHub Actions for CI testing on the release/* branches

2019-11-12 Thread Jonas Devlieghere via lldb-dev
Hey Tom,

That sounds really useful. Would it be possible to include LLDB as
well? We have a subset of tests (unit & lit) that can be run without
Python/SWIG by passing LLDB_DISABLE_PYTHON=ON to CMake.
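
Roughly (an untested sketch, mirroring the existing clang/lld job):

  cmake -G Ninja ../llvm-project/llvm -DLLVM_ENABLE_PROJECTS="clang;lldb" -DLLDB_DISABLE_PYTHON=ON
  ninja check-lldb

With Python disabled, check-lldb should be reduced to the unit tests and the
lit tests, which is exactly the subset that makes sense for this kind of bot.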

Thanks,
Jonas

On Tue, Nov 12, 2019 at 2:35 AM Hans Wennborg via cfe-dev
 wrote:
>
> On Tue, Nov 12, 2019 at 1:32 AM Tom Stellard via lldb-dev
>  wrote:
> >
> > Hi,
> >
> > I would like to start using GitHub Actions[1] for CI testing on the 
> > release/*
> > branches.  As far as I know we don't have any buildbots listening to the
> > release branches, and I think GitHub Actions are a good way for us to 
> > quickly
> > bring-up some CI jobs there.
> >
> > My proposal is to start by adding two post-commit CI jobs to the 
> > release/9.x branch.
> > One for building and testing (ninja check-all) llvm/clang/lld on Linux,
> > Windows, and Mac, and another for detecting ABI changes since the 9.0.0 
> > release.
> >
> > I have already implemented these two CI jobs in my llvm-project fork on 
> > GitHub[2][3],
> > but in order to get these running in the main repository, I would need to:
> >
> > 1. Create a new repository in the LLVM organization called 'actions' for 
> > storing some custom
> > builds steps for our CI jobs (see [4]).
> > 2. Commit yaml CI definitions to the .github/workflows directory in the 
> > release/9.x
> > branch.
> >
> > In the future, I would also like to add build and test jobs for other 
> > sub-projects
> > once I am able to get those working.
> >
> > In addition to being used for post-commit testing, having these CI 
> > definitions in the
> > main tree will make it easier for me (or anyone) to do pre-commit testing 
> > for the
> > release branch in a personal fork.  It will also allow me to experiment 
> > with some new
> > workflows to help make managing the releases much easier.
> >
> > I think this will be a good way to test Actions in a low traffic 
> > environment to
> > see if they are something we would want to use for CI on the master branch.
> >
> > Given that we are close to the end of the 9.0.1 cycle, unless there are any
> > strong objections, I would like to get this enabled by Mon Nov 18, to 
> > maximize its
> > usefulness.  Let me know what you think.
> >
> > Thanks,
> > Tom
> >
> > [1] https://github.com/features/actions
> > [2] 
> > https://github.com/tstellar/llvm-project/commit/952d80e8509ecc95797b2ddbf1af40abad2dcf4e/checks?check_suite_id=305765621
> > [3] 
> > https://github.com/tstellar/llvm-project/commit/6d74f1b81632ef081dffa1e0c0434f47d4954423/checks?check_suite_id=303074176
> > [4] https://github.com/tstellar/actions
>
> Sounds great to me!
> ___
> cfe-dev mailing list
> cfe-...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] issue with lldb9 and python3.5

2019-10-28 Thread Jonas Devlieghere via lldb-dev
On Mon, Oct 28, 2019 at 10:04 AM Jonas Devlieghere
 wrote:
>
> On Mon, Oct 28, 2019 at 9:32 AM Tom Stellard  wrote:
> >
> > On 10/28/2019 09:29 AM, Jonas Devlieghere wrote:
> > > Yes, Python 3.5 is not supported. We "officially" support Python 2.7
> > > and Python 3.7. I'm sorry if we forgot that in the release notes.
> > >
> >
> > Is there a specific reason why 3.5 is not supported?  Is it
> > because of this issue?
>
> Not really other than the lack of testing/CI.
>
> - The Linux bots are all running with Python 2.7.
> - I know that on macOS we ran into issues with some older versions. I
> don't remember if it was this particular issue and I'm not even sure
> if that was using Python 3.5 or Python 3.6. Our bots on GreenDragon
> all run with Python 3.7.
> - Stella's Windows bot is running Python 3.6 so we should consider
> that supported as well.

For completeness, Python 2.7 is not supported on Windows at all. The
docs specify Python 3.5 or later. Maybe we should bump that to 3.6
too?

>
> >
> > -Tom
> >
> > > On Mon, Oct 28, 2019 at 7:06 AM Tom Stellard via lldb-dev
> > >  wrote:
> > >>
> > >> + lldb-dev
> > >>
> > >> On 10/28/2019 07:06 AM, Tom Stellard wrote:
> > >>> On 10/28/2019 03:50 AM, Romaric Jodin via lldb-dev wrote:
> >  Hi everyone,
> > 
> >  I have lldb crashing since I've updated to lldb9. Seems like there is 
> >  a issue with python3.5. Everything seems to work fine with python3.7.
> >  Am I missing something? Or is it a known issue?
> > 
> > >>>
> > >>> We have seen this too with python 3.6, but we haven't found the root 
> > >>> cause yet.
> > >>> For now, we've worked around this by disabling the readline module with 
> > >>> the
> > >>> attached patch.
> > >>>
> > >>> -Tom
> > >>>
> >  $ lldb
> >  (lldb) script
> >   #0 0x7f3d324c9c2a 
> >  llvm::sys::PrintStackTrace(llvm::raw_ostream&) 
> >  (/home/rjodin/work/dpu_tools3/build/lib/libLLVM-9.so+0x6bfc2a)
> >   #1 0x7f3d324c7af5 llvm::sys::RunSignalHandlers() 
> >  (/home/rjodin/work/dpu_tools3/build/lib/libLLVM-9.so+0x6bdaf5)
> >   #2 0x7f3d324c7c0c SignalHandler(int) 
> >  (/home/rjodin/work/dpu_tools3/build/lib/libLLVM-9.so+0x6bdc0c)
> >   #3 0x7f3d31bfe0e0 __restore_rt 
> >  (/lib/x86_64-linux-gnu/libpthread.so.0+0x110e0)
> >   #4 0x7f3d2d18f81b PyModule_GetState 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x6881b)
> >   #5 0x7f3d230e1621 _init 
> >  (/usr/lib/python3.5/lib-dynload/readline.cpython-35m-x86_64-linux-gnu.so
> >   +0x3621)
> >   #6 0x7f3d2e3dece1 rl_initialize 
> >  (/usr/lib/x86_64-linux-gnu/libedit.so.2+0x1dce1)
> >   #7 0x7f3d230e1f3e _init 
> >  (/usr/lib/python3.5/lib-dynload/readline.cpython-35m-x86_64-linux-gnu.so
> >   +0x3f3e)
> >   #8 0x7f3d2d32d710 _PyImport_LoadDynamicModuleWithSpec 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x206710)
> >   #9 0x7f3d2d330fe7 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x209fe7)
> >  #10 0x7f3d2d198259 PyCFunction_Call 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x71259)
> >  #11 0x7f3d2d2c8ff2 PyEval_EvalFrameEx 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a1ff2)
> >  #12 0x7f3d2d38b074 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264074)
> >  #13 0x7f3d2d2c7adf PyEval_EvalFrameEx 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a0adf)
> >  #14 0x7f3d2d2c96ad PyEval_EvalFrameEx 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
> >  #15 0x7f3d2d2c96ad PyEval_EvalFrameEx 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
> >  #16 0x7f3d2d2c96ad PyEval_EvalFrameEx 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
> >  #17 0x7f3d2d2c96ad PyEval_EvalFrameEx 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
> >  #18 0x7f3d2d38b074 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264074)
> >  #19 0x7f3d2d38b153 PyEval_EvalCodeEx 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264153)
> >  #20 0x7f3d2d21e558 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0xf7558)
> >  #21 0x7f3d2d2faa37 PyObject_Call 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1d3a37)
> >  #22 0x7f3d2d2fce1b _PyObject_CallMethodIdObjArgs 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1d5e1b)
> >  #23 0x7f3d2d32effa PyImport_ImportModuleLevelObject 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x207ffa)
> >  #24 0x7f3d2d2cd248 
> >  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a6248)
> >    

Re: [lldb-dev] issue with lldb9 and python3.5

2019-10-28 Thread Jonas Devlieghere via lldb-dev
On Mon, Oct 28, 2019 at 9:32 AM Tom Stellard  wrote:
>
> On 10/28/2019 09:29 AM, Jonas Devlieghere wrote:
> > Yes, Python 3.5 is not supported. We "officially" support Python 2.7
> > and Python 3.7. I'm sorry if we forgot that in the release notes.
> >
>
> Is there a specific reason why 3.5 is not supported?  Is it
> because of this issue?

Not really other than the lack of testing/CI.

- The Linux bots are all running with Python 2.7.
- I know that on macOS we ran into issues with some older versions. I
don't remember if it was this particular issue and I'm not even sure
if that was using Python 3.5 or Python 3.6. Our bots on GreenDragon
all run with Python 3.7.
- Stella's Windows bot is running Python 3.6 so we should consider
that supported as well.

>
> -Tom
>
> > On Mon, Oct 28, 2019 at 7:06 AM Tom Stellard via lldb-dev
> >  wrote:
> >>
> >> + lldb-dev
> >>
> >> On 10/28/2019 07:06 AM, Tom Stellard wrote:
> >>> On 10/28/2019 03:50 AM, Romaric Jodin via lldb-dev wrote:
>  Hi everyone,
> 
>  I have lldb crashing since I've updated to lldb9. Seems like there is a 
>  issue with python3.5. Everything seems to work fine with python3.7.
>  Am I missing something? Or is it a known issue?
> 
> >>>
> >>> We have seen this too with python 3.6, but we haven't found the root 
> >>> cause yet.
> >>> For now, we've worked around this by disabling the readline module with 
> >>> the
> >>> attached patch.
> >>>
> >>> -Tom
> >>>
>  $ lldb
>  (lldb) script
>   #0 0x7f3d324c9c2a 
>  llvm::sys::PrintStackTrace(llvm::raw_ostream&) 
>  (/home/rjodin/work/dpu_tools3/build/lib/libLLVM-9.so+0x6bfc2a)
>   #1 0x7f3d324c7af5 llvm::sys::RunSignalHandlers() 
>  (/home/rjodin/work/dpu_tools3/build/lib/libLLVM-9.so+0x6bdaf5)
>   #2 0x7f3d324c7c0c SignalHandler(int) 
>  (/home/rjodin/work/dpu_tools3/build/lib/libLLVM-9.so+0x6bdc0c)
>   #3 0x7f3d31bfe0e0 __restore_rt 
>  (/lib/x86_64-linux-gnu/libpthread.so.0+0x110e0)
>   #4 0x7f3d2d18f81b PyModule_GetState 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x6881b)
>   #5 0x7f3d230e1621 _init 
>  (/usr/lib/python3.5/lib-dynload/readline.cpython-35m-x86_64-linux-gnu.so 
>  +0x3621)
>   #6 0x7f3d2e3dece1 rl_initialize 
>  (/usr/lib/x86_64-linux-gnu/libedit.so.2+0x1dce1)
>   #7 0x7f3d230e1f3e _init 
>  (/usr/lib/python3.5/lib-dynload/readline.cpython-35m-x86_64-linux-gnu.so 
>  +0x3f3e)
>   #8 0x7f3d2d32d710 _PyImport_LoadDynamicModuleWithSpec 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x206710)
>   #9 0x7f3d2d330fe7 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x209fe7)
>  #10 0x7f3d2d198259 PyCFunction_Call 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x71259)
>  #11 0x7f3d2d2c8ff2 PyEval_EvalFrameEx 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a1ff2)
>  #12 0x7f3d2d38b074 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264074)
>  #13 0x7f3d2d2c7adf PyEval_EvalFrameEx 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a0adf)
>  #14 0x7f3d2d2c96ad PyEval_EvalFrameEx 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
>  #15 0x7f3d2d2c96ad PyEval_EvalFrameEx 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
>  #16 0x7f3d2d2c96ad PyEval_EvalFrameEx 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
>  #17 0x7f3d2d2c96ad PyEval_EvalFrameEx 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
>  #18 0x7f3d2d38b074 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264074)
>  #19 0x7f3d2d38b153 PyEval_EvalCodeEx 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264153)
>  #20 0x7f3d2d21e558 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0xf7558)
>  #21 0x7f3d2d2faa37 PyObject_Call 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1d3a37)
>  #22 0x7f3d2d2fce1b _PyObject_CallMethodIdObjArgs 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1d5e1b)
>  #23 0x7f3d2d32effa PyImport_ImportModuleLevelObject 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x207ffa)
>  #24 0x7f3d2d2cd248 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a6248)
>  #25 0x7f3d2d198279 PyCFunction_Call 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x71279)
>  #26 0x7f3d2d2faa37 PyObject_Call 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1d3a37)
>  #27 0x7f3d2d389b77 PyEval_CallObjectWithKeywords 
>  (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x262b77)
>  #28 0x7f3d2d2c57cb 

Re: [lldb-dev] issue with lldb9 and python3.5

2019-10-28 Thread Jonas Devlieghere via lldb-dev
Yes, Python 3.5 is not supported. We "officially" support Python 2.7
and Python 3.7. I'm sorry if we forgot that in the release notes.

On Mon, Oct 28, 2019 at 7:06 AM Tom Stellard via lldb-dev
 wrote:
>
> + lldb-dev
>
> On 10/28/2019 07:06 AM, Tom Stellard wrote:
> > On 10/28/2019 03:50 AM, Romaric Jodin via lldb-dev wrote:
> >> Hi everyone,
> >>
> >> I have lldb crashing since I've updated to lldb9. Seems like there is a 
> >> issue with python3.5. Everything seems to work fine with python3.7.
> >> Am I missing something? Or is it a known issue?
> >>
> >
> > We have seen this too with python 3.6, but we haven't found the root cause 
> > yet.
> > For now, we've worked around this by disabling the readline module with the
> > attached patch.
> >
> > -Tom
> >
> >> $ lldb
> >> (lldb) script
> >>  #0 0x7f3d324c9c2a llvm::sys::PrintStackTrace(llvm::raw_ostream&) 
> >> (/home/rjodin/work/dpu_tools3/build/lib/libLLVM-9.so+0x6bfc2a)
> >>  #1 0x7f3d324c7af5 llvm::sys::RunSignalHandlers() 
> >> (/home/rjodin/work/dpu_tools3/build/lib/libLLVM-9.so+0x6bdaf5)
> >>  #2 0x7f3d324c7c0c SignalHandler(int) 
> >> (/home/rjodin/work/dpu_tools3/build/lib/libLLVM-9.so+0x6bdc0c)
> >>  #3 0x7f3d31bfe0e0 __restore_rt 
> >> (/lib/x86_64-linux-gnu/libpthread.so.0+0x110e0)
> >>  #4 0x7f3d2d18f81b PyModule_GetState 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x6881b)
> >>  #5 0x7f3d230e1621 _init 
> >> (/usr/lib/python3.5/lib-dynload/readline.cpython-35m-x86_64-linux-gnu.so 
> >> +0x3621)
> >>  #6 0x7f3d2e3dece1 rl_initialize 
> >> (/usr/lib/x86_64-linux-gnu/libedit.so.2+0x1dce1)
> >>  #7 0x7f3d230e1f3e _init 
> >> (/usr/lib/python3.5/lib-dynload/readline.cpython-35m-x86_64-linux-gnu.so 
> >> +0x3f3e)
> >>  #8 0x7f3d2d32d710 _PyImport_LoadDynamicModuleWithSpec 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x206710)
> >>  #9 0x7f3d2d330fe7 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x209fe7)
> >> #10 0x7f3d2d198259 PyCFunction_Call 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x71259)
> >> #11 0x7f3d2d2c8ff2 PyEval_EvalFrameEx 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a1ff2)
> >> #12 0x7f3d2d38b074 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264074)
> >> #13 0x7f3d2d2c7adf PyEval_EvalFrameEx 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a0adf)
> >> #14 0x7f3d2d2c96ad PyEval_EvalFrameEx 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
> >> #15 0x7f3d2d2c96ad PyEval_EvalFrameEx 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
> >> #16 0x7f3d2d2c96ad PyEval_EvalFrameEx 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
> >> #17 0x7f3d2d2c96ad PyEval_EvalFrameEx 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a26ad)
> >> #18 0x7f3d2d38b074 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264074)
> >> #19 0x7f3d2d38b153 PyEval_EvalCodeEx 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264153)
> >> #20 0x7f3d2d21e558 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0xf7558)
> >> #21 0x7f3d2d2faa37 PyObject_Call 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1d3a37)
> >> #22 0x7f3d2d2fce1b _PyObject_CallMethodIdObjArgs 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1d5e1b)
> >> #23 0x7f3d2d32effa PyImport_ImportModuleLevelObject 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x207ffa)
> >> #24 0x7f3d2d2cd248 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a6248)
> >> #25 0x7f3d2d198279 PyCFunction_Call 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x71279)
> >> #26 0x7f3d2d2faa37 PyObject_Call 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1d3a37)
> >> #27 0x7f3d2d389b77 PyEval_CallObjectWithKeywords 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x262b77)
> >> #28 0x7f3d2d2c57cb PyEval_EvalFrameEx 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x19e7cb)
> >> #29 0x7f3d2d38b074 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264074)
> >> #30 0x7f3d2d38b153 PyEval_EvalCodeEx 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264153)
> >> #31 0x7f3d2d2c145b PyEval_EvalCode 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x19a45b)
> >> #32 0x7f3d2d2ce2cd 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a72cd)
> >> #33 0x7f3d2d198259 PyCFunction_Call 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x71259)
> >> #34 0x7f3d2d2c8ff2 PyEval_EvalFrameEx 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x1a1ff2)
> >> #35 0x7f3d2d38b074 
> >> (/usr/lib/x86_64-linux-gnu/libpython3.5m.so.1.0+0x264074)
> >> #36 

Re: [lldb-dev] Rust support in LLDB, again

2019-09-30 Thread Jonas Devlieghere via lldb-dev
Hi Vadim,

On Sat, Sep 28, 2019 at 4:00 PM Vadim Chugunov via lldb-dev
 wrote:
>
> Hi,
> Last year there was an effort led by Tom Tromey to add Rust language support 
> into LLDB.  He had implemented a fairly complete language plugin, however it 
> was not accepted into mainline because of supportability concerns. I guess 
> these concerns had some merit, because this change did not survive even in 
> Rust's private branch due to the difficulty of rebasing on top of LLVM 9.

Unless my memory is failing me, I don't think we ever explicitly
rejected Rust's language plugin. We removed a few other language
plugins (Go, Java) that were not maintained and were becoming an
increasing burden on the community. At the same time we agreed that we
didn't want to make the same mistake again. Some of the things that
come to mind are having a working implementation, testing, CI, etc. If
the Rust community can show that they're dedicated to maintaining Rust
support in LLDB, I wouldn't expect a lot of resistance. I just bring
this up because I don't want to discourage anyone from adding support
for new languages to LLDB.

> I am wondering if there's a more limited version of this, that can be merged 
> into mainline:
> In terms of its memory model, Rust is not that far off from C++, so treating 
> Rust types is if they were C++ types basically works.  There is only one 
> major problem: currently LLDB cannot deal with tagged unions, which Rust code 
> uses quite heavily.   When such a type is encountered, LLDB just emits an 
> empty struct, which makes it impossible to examine the contents.
>
> My tentative proposal is to modify LLDB's DWARFASTParserClang to handle 
> DW_TAG_variant et al, and create a C++ approximation of these types, e.g. as 
> a polymorphic class, or just an untagged union.   This would provide at least 
> a minimal level of functionality for Rust (and possibly other languages) and 
> be a much lesser maintenance burden on LLDB core team.
> What would y'all say?

The people that actually work on this code should answer this, but
personally I don't have strong objections to this. That said, of
course I would prefer to have a (maintained) language plugin instead.
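
For anyone unfamiliar with the problem: a Rust enum is a tagged union, i.e. a
discriminant plus one payload per variant. A minimal sketch of the kind of C++
approximation being proposed above -- purely illustrative, with made-up type
and field names, not code from any actual patch -- might look like this for a
two-variant enum where one variant carries a u32:

#include <cstdint>

// Hypothetical shape a DWARF parser could synthesize for such an enum:
// an explicit discriminant next to an untagged union of the per-variant
// payloads.
struct ApproximatedEnum {
  uint32_t discriminant;             // which variant is active
  union {
    struct {} Empty;                 // variant without a payload
    struct { uint32_t value; } Data; // variant carrying the u32
  } payload;                         // untagged union of the variants
};

Something like this would at least let the user see the discriminant and both
payload interpretations instead of an empty struct, leaving variant selection
to the reader or to a data formatter.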

PS: Are there other changes that live downstream that are not Rust
specific and would benefit upstream LLDB and would potentially improve
Rust debugging?

Jonas
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] Restructuring the (command) tests

2019-08-30 Thread Jonas Devlieghere via lldb-dev
Sounds good to me :-)

On Fri, Aug 30, 2019 at 1:27 PM Davide Italiano via lldb-dev
 wrote:
>
> On Fri, Aug 30, 2019 at 1:44 AM Raphael “Teemperor” Isemann via
> lldb-dev  wrote:
> >
> > Hi all,
> >
> > I have to admit I’m getting a bit confused lately where to put tests. 
> > Especially for testing LLDB commands it’s not obvious where to put files as 
> > we test some commands directly in the top-level test folder (e.g. quit, 
> > help, settings), some are in /functionalities with a _command suffix (e.g. 
> > target), some are in /functionalities without any suffix (e.g. register), 
> > some tests are split by subcommand (process, frame) and some are in the 
> > top-level folder with the _command prefix (e.g. expression). This makes it 
> > hard to figure out where to find or create tests for specific commands. 
> > Also setting a LIT_FILTER for just testing CommandObject* changes is not 
> > possible.
> >
> > I would propose we restructure at least the command tests into 
> > “test/commands/${command_name}/${subcommand_name_or_functionality}/“ such 
> > as “test/commands/process/launch”. The LIT_FILTER for these things would be 
> > “commands/“.
> >
> > I don’t see any disadvantages from this as
> > * downstreams usually don’t fiddle around with the existing tests, so 
> > there should hopefully be no merge conflicts from this.
> > * git blame can handle this change as we only move files/directories and 
> > don’t touch their contents.
> > * it’s very little work to actually do this.
> >
> > I’m not sure if there is a need to restructure any other tests but I think 
> > if there are no objections in this thread, then I assume everyone can just 
> > take a few seconds and restructure their own tests.
> >
>
> +1
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Anybody using the GUI?

2019-08-23 Thread Jonas Devlieghere via lldb-dev
Hi Greg,

We're more than a year later and I haven't seen any development on the
GUI. While I personally think this could be a really cool feature,
I've never been able to use it because it's missing too many things to
be useful for now. When I talk to people who know about this feature,
I hear either frustration or disappointment that it doesn't work
(yet). I (personally) haven't found anyone that is actively using it.
As such, can we remove it until we have resources to do it right and
provide our users with something they can rely on?

Thanks,
Jonas

On Wed, Apr 11, 2018 at 11:47 AM Greg Clayton via lldb-dev
 wrote:
>
> And yes many people I know are using this including myself.
>
> > On Apr 11, 2018, at 11:08 AM, Davide Italiano  wrote:
> >
> > Good day.
> > While trying to implement a command in lldb I noticed lldb has this
> > awesome `gui` command that opens an ncurses GUI.
> > I find it really useful and I wanted to play with it a bit, but I
> > wasn't really able to get it working.
> > In particular, I tried to press enter on `target create` or `attach`
> > but nothing happens.
> >
> > Greg, as you wrote the original implementation, can you please explain
> > how this is supposed to work? Are you actively interested in
> > maintaining this mode?
> >
> > Thanks!
> >
> > --
> > Davide
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [patch] char8_t support (plus dlang UTF8/16/32)

2019-08-19 Thread Jonas Devlieghere via lldb-dev
Hi James,

Thanks for working on this. I've opened a code review for your patch:
https://reviews.llvm.org/D66447

I've had to make some modifications for it to compile and added a test.

Cheers,
Jonas

On Sun, Aug 18, 2019 at 6:34 PM James Blachly via lldb-dev
 wrote:
>
> Dear LLDB developers:
>
> I have added support for C++20 char8_t, as well as support for dlang's 
> char/wchar/dchar types. As I am not a professional developer, and the 
> submission-review-merge process for LLVM projects seems somewhat byzantine, I 
> wanted to offer this up on the list in the hopes that others find it useful 
> and someone will be able to integrate it.
>
> kind regards
> James
>
>
> Using an example program that defines each of the unicode types as a single 
> character as well as string, we see a major improvement.
>
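> A minimal sketch of such a program (illustrative only -- roughly the shape of
> the program used, not its exact source; char8_t needs -std=c++2a or later):
>
> int main() {
>   char8_t  c8  = u8'\0';
>   char16_t c16 = u'\0';
>   char32_t c32 = U'\x7fff';
>   char8_t  str8[]    = u8"Hello UTF8";
>   char8_t  *str8ptr  = str8;
>   char16_t str16[]   = u"Hello UTF16";
>   char16_t *str16ptr = str16;
>   char32_t str32[]   = U"Hello UTF32";
>   char32_t *str32ptr = str32;
>   return 0;  // break here, then run "frame variable"
> }
>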
> BEFORE:
> (lldb) frame v
> error: need to add support for DW_TAG_base_type 'char8_t' encoded with DW_ATE 
> = 0x10, bit_size = 8
> (void) c8 = 
>
> (char16_t) c16 = U+ u'\0'
> (char32_t) c32 = U+0x7fff U'翿'
> (void [11]) str8 = ([0] = <error: unable to determine byte size.>, [1] = 
> <error: unable to determine byte size.>, [2] = <error: unable to determine 
> byte size.>, [3] = <error: unable to determine byte size.>, [4] = <error: 
> unable to determine byte size.>, [5] = <error: unable to determine byte 
> size.>, [6] = <error: unable to determine byte size.>, [7] = <error: unable 
> to determine byte size.>, [8] = <error: unable to determine byte size.>, 
> [9] = <error: unable to determine byte size.>, [10] = <error: unable to 
> determine byte size.>)
> (void *) str8ptr = 0x7fffe3d9
> (char16_t [12]) str16 = u"Hello UTF16"
> (char16_t *) str16ptr = 0x7fffe3b0 u"Hello UTF16"
> (char32_t [12]) str32 = U"Hello UTF32"
> (char32_t *) str32ptr = 0x7fffe370 U"Hello UTF32"
>
> AFTER:
> (lldb) frame v
> (char8_t) c8 = 0x00 u8'\0'
> (char16_t) c16 = U+ u'\0'
> (char32_t) c32 = U+0x7fff U'翿'
> (char8_t [11]) str8 = u8"Hello UTF8"
> (char8_t *) str8ptr = 0x7fffe3c9 u8"Hello UTF8"
> (char16_t [12]) str16 = u"Hello UTF16"
> (char16_t *) str16ptr = 0x7fffe3a0 u"Hello UTF16"
> (char32_t [12]) str32 = U"Hello UTF32"
> (char32_t *) str32ptr = 0x7fffe360 U"Hello UTF32"
>
>
>
>
> diff --git a/include/lldb/lldb-enumerations.h 
> b/include/lldb/lldb-enumerations.h
> index f9830c04b..e7189dc9d 100644
> --- a/include/lldb/lldb-enumerations.h
> +++ b/include/lldb/lldb-enumerations.h
> @@ -167,6 +167,7 @@ enum Format {
>eFormatOctal,
>eFormatOSType, // OS character codes encoded into an integer 'PICT' 'text'
>   // etc...
> +  eFormatUnicode8,
>eFormatUnicode16,
>eFormatUnicode32,
>eFormatUnsigned,
> diff --git a/source/Plugins/Language/CPlusPlus/CPlusPlusLanguage.cpp 
> b/source/Plugins/Language/CPlusPlus/CPlusPlusLanguage.cpp
> index 0b3c31816..15e0a82bd 100644
> --- a/source/Plugins/Language/CPlusPlus/CPlusPlusLanguage.cpp
> +++ b/source/Plugins/Language/CPlusPlus/CPlusPlusLanguage.cpp
> @@ -853,6 +853,14 @@ static void 
> LoadSystemFormatters(lldb::TypeCategoryImplSP cpp_category_sp) {
>
>// FIXME because of a bug in the FormattersContainer we need to add a 
> summary
>// for both X* and const X* ()
> +  AddCXXSummary(
> +  cpp_category_sp, lldb_private::formatters::Char8StringSummaryProvider,
> +  "char8_t * summary provider", ConstString("char8_t *"), string_flags);
> +  AddCXXSummary(cpp_category_sp,
> +lldb_private::formatters::Char8StringSummaryProvider,
> +"char8_t [] summary provider",
> +ConstString("char8_t \\[[0-9]+\\]"), string_array_flags, 
> true);
> +
>AddCXXSummary(
>cpp_category_sp, lldb_private::formatters::Char16StringSummaryProvider,
>"char16_t * summary provider", ConstString("char16_t *"), 
> string_flags);
> @@ -890,6 +898,9 @@ static void LoadSystemFormatters(lldb::TypeCategoryImplSP 
> cpp_category_sp) {
>.SetHideItemNames(true)
>.SetShowMembersOneLiner(false);
>
> +  AddCXXSummary(
> +  cpp_category_sp, lldb_private::formatters::Char8SummaryProvider,
> +  "char8_t summary provider", ConstString("char8_t"), widechar_flags);
>AddCXXSummary(
>cpp_category_sp, lldb_private::formatters::Char16SummaryProvider,
>"char16_t summary provider", ConstString("char16_t"), widechar_flags);
> diff --git a/source/Plugins/Language/CPlusPlus/CxxStringTypes.cpp 
> b/source/Plugins/Language/CPlusPlus/CxxStringTypes.cpp
> index 959079070..3ea7589d8 100644
> --- a/source/Plugins/Language/CPlusPlus/CxxStringTypes.cpp
> +++ b/source/Plugins/Language/CPlusPlus/CxxStringTypes.cpp
> @@ -32,6 +32,31 @@ using namespace lldb;
>  using namespace lldb_private;
>  using namespace lldb_private::formatters;
>
> +bool lldb_private::formatters::Char8StringSummaryProvider(
> +    ValueObject &valobj, Stream &stream, const TypeSummaryOptions &) {
> +  ProcessSP process_sp = valobj.GetProcessSP();
> +  if (!process_sp)
> +return false;
> +
> +  lldb::addr_t valobj_addr = GetArrayAddressOrPointerValue(valobj);
> +  if (valobj_addr == 0 || valobj_addr == LLDB_INVALID_ADDRESS)
> +return false;
> +
> +  StringPrinter::ReadStringAndDumpToStreamOptions options(valobj);
> +  options.SetLocation(valobj_addr);
> +  options.SetProcessSP(process_sp);
> +  options.SetStream(&stream);
> +  

Re: [lldb-dev] How do I use lit to only run the lldb test suite, now that dotest multiprocessing capabilities have been removed?

2019-08-09 Thread Jonas Devlieghere via lldb-dev
So far the only thing that changed by removing multiprocess is that
`--no-multiprocess` is always enabled. Everything else you describe is
still possible, and will continue to be possible.

Even when we remove the driver functionality from dotest.py this will
all continue to work. The only difference is that dotest.py will
operate on a single file after being invoked by some other driver. The
problem you describe with the dual arguments seems relatively easy to
fix by loading a different configuration in lit. It's not something
I've looked at yet, because everything we care about is configurable
from CMake or overridable by passing --params to lit.


On Fri, Aug 9, 2019 at 1:30 PM Ted Woodward  wrote:
>
> Hi Jonas,
>
> What I need is a way to run the test suite with arbitrary command line 
> arguments. Sometimes I want to run one or more tests with -f, sometimes I 
> want to run one or more test files with -p, and sometimes I want to run the 
> entire suite, either in parallel or 1 at a time (--no-multiprocess). I might 
> be running from a directory where I've built lldb (but not clang, using clang 
> from an arbitrary location), a directory where I've built everything (but in 
> this case we set everything up with cmake), or a directory where I've 
> checkout out just the sources and have copied the binaries from a 
> distribution, so no cmake.
>
> dotest used to handle these cases; how do I handle them now?
>
> In our environment, lldb will launch and connect to the hexagon simulator, 
> much like how it launches and connects to debugserver or lldb-server. But it 
> has to use a specific version of clang, because we've found that if we have 
> mismatches in clang/simulator/RTOS, bad things can happen. User code is in a 
> shared library that gets loaded by a wrapper run under the RTOS. The RTOS, 
> wrapper and user code all need to be built with the same complier and c/c++ 
> libraries.
>
> Ted
>
> > -Original Message-
> > From: Jonas Devlieghere 
> > Sent: Friday, August 9, 2019 3:18 PM
> > To: Ted Woodward 
> > Cc: LLDB 
> > Subject: [EXT] Re: How do I use lit to only run the lldb test suite, now 
> > that
> > dotest multiprocessing capabilities have been removed?
> >
> > Hey Ted,
> >
> > On Thu, Aug 8, 2019 at 2:08 PM Ted Woodward 
> > wrote:
> > >
> > > Thanks Jonas.
> > >
> > > Is full support for --param fairly recent? I tried it with a version of 
> > > our
> > master, based on top-of-tree from about a month ago, and it didn't work 
> > quite
> > right. It's passing the dotest args, but it's also generating some args, so 
> > I'm
> > seeing odd effects.
> >
> > It's not something I touched recently, but it's always possible they made 
> > some
> > changes in LLVM.
> >
> > > Here is my run line:
> > > bin/python bin/llvm-lit /local/mnt/ted/8.4/llvm/lldb/lit/Suite --param
> > 'dotest-args=-A v66 -C /prj/dsp/qdsp6/release/internal/HEXAGON/branch-
> > 8.4/linux64/latest/Tools/bin/hexagon-clang --executable
> > /local/scratch/ted/8.4/build/bin/lldb -t -v -f
> > RecursiveTypesTestCase.test_recursive_type_1_dwarf'
> > >
> > > I only want to run RecursiveTyepsTestCase.test_recursive_type_1_dwarf,
> > but it's running the whole test suite.
> >
> > Do you know about the lldb-dotest binary? You can still use it to invoke a
> > single test, similar to how lit does it. You should be able to just pass 
> > your
> > arguments to that.
> >
> > Here's a dotest line from the run:
> > >
> > > /local/mnt/ted/8.4/build/bin/python
> > > /local/mnt/ted/8.4/llvm/lldb/test/dotest.py -q --arch=v66 -s
> > > /local/mnt/ted/8.4/build/lldb-test-traces --build-dir
> > > /local/mnt/ted/8.4/build/lldb-test-build.noindex -S nm -u CXXFLAGS -u
> > > CFLAGS --executable /local/mnt/ted/8.4/build/./bin/lldb --dsymutil
> > > /local/mnt/ted/8.4/build/./bin/dsymutil --filecheck
> > > /local/mnt/ted/8.4/build/./bin/FileCheck -C
> > > /local/mnt/ted/8.4/build/./bin/clang --env ARCHIVER=/usr/bin/ar --env
> > > OBJCOPY=/usr/bin/objcopy -A v66 -C
> > > /prj/dsp/qdsp6/release/internal/HEXAGON/branch-8.4/linux64/latest/Tool
> > > s/bin/hexagon-clang --executable /local/scratch/ted/8.4/build/bin/lldb
> > > -t -v -f RecursiveTypesTestCase.test_recursive_type_1_dwarf --env
> > > LLVM_LIBS_DIR=/local/mnt/ted/8.4/build/./lib
> > > /local/mnt/ted/8.4/llvm/lldb/packages/Python/lldbsuite/test/functional
> > > ities/breakpoint/debugbreak -p TestDebugBreak.py
> > >
> > >
> > > It's got both --arch= and -A, -C is set to my build directory clang as 
> > > well as the
> > clang I told it to use, --executable is set twice, and it's got -f
> > RecursiveTypesTestCase.test_recursive_type_1_dwarf and -p
> > TestDebugBreak.py .
> >
> > Both lit and lldb-dotest are configured using the dotest arguments that we 
> > can
> > configure at CMake configuration time. That would explain where the extra
> > options come from. If those are not the ones you want, you can still invoke
> > dotest.py directly.
> >
> > >
> > > These tests that do a "process launch" 

Re: [lldb-dev] How do I use lit to only run the lldb test suite, now that dotest multiprocessing capabilities have been removed?

2019-08-09 Thread Jonas Devlieghere via lldb-dev
Hey Ted,

On Thu, Aug 8, 2019 at 2:08 PM Ted Woodward  wrote:
>
> Thanks Jonas.
>
> Is full support for --param fairly recent? I tried it with a version of our 
> master, based on top-of-tree from about a month ago, and it didn't work quite 
> right. It's passing the dotest args, but it's also generating some args, so 
> I'm seeing odd effects.

It's not something I touched recently, but it's always possible they
made some changes in LLVM.

> Here is my run line:
> bin/python bin/llvm-lit /local/mnt/ted/8.4/llvm/lldb/lit/Suite --param 
> 'dotest-args=-A v66 -C 
> /prj/dsp/qdsp6/release/internal/HEXAGON/branch-8.4/linux64/latest/Tools/bin/hexagon-clang
>  --executable /local/scratch/ted/8.4/build/bin/lldb -t -v -f 
> RecursiveTypesTestCase.test_recursive_type_1_dwarf'
>
> I only want to run RecursiveTyepsTestCase.test_recursive_type_1_dwarf, but 
> it's running the whole test suite.

Do you know about the lldb-dotest binary? You can still use it to
invoke a single test, similar to how lit does it. You should be able
to just pass your arguments to that.

Here's a dotest line from the run:
>
> /local/mnt/ted/8.4/build/bin/python 
> /local/mnt/ted/8.4/llvm/lldb/test/dotest.py -q --arch=v66 -s 
> /local/mnt/ted/8.4/build/lldb-test-traces --build-dir 
> /local/mnt/ted/8.4/build/lldb-test-build.noindex -S nm -u CXXFLAGS -u CFLAGS 
> --executable /local/mnt/ted/8.4/build/./bin/lldb --dsymutil 
> /local/mnt/ted/8.4/build/./bin/dsymutil --filecheck 
> /local/mnt/ted/8.4/build/./bin/FileCheck -C 
> /local/mnt/ted/8.4/build/./bin/clang --env ARCHIVER=/usr/bin/ar --env 
> OBJCOPY=/usr/bin/objcopy -A v66 -C 
> /prj/dsp/qdsp6/release/internal/HEXAGON/branch-8.4/linux64/latest/Tools/bin/hexagon-clang
>  --executable /local/scratch/ted/8.4/build/bin/lldb -t -v -f 
> RecursiveTypesTestCase.test_recursive_type_1_dwarf --env 
> LLVM_LIBS_DIR=/local/mnt/ted/8.4/build/./lib 
> /local/mnt/ted/8.4/llvm/lldb/packages/Python/lldbsuite/test/functionalities/breakpoint/debugbreak
>  -p TestDebugBreak.py
>
>
> It's got both --arch= and -A, -C is set to my build directory clang as well 
> as the clang I told it to use, --executable is set twice, and it's got -f 
> RecursiveTypesTestCase.test_recursive_type_1_dwarf and -p TestDebugBreak.py .

Both lit and lldb-dotest are configured using the dotest arguments
that we can configure at CMake configuration time. That would explain
where the extra options come from. If those are not the ones you want,
you can still invoke dotest.py directly.

>
> These tests that do a "process launch" (which is most of them) invoke the 
> hexagon simulator, but it was never launched. There was also only 1 testcase 
> built in /local/mnt/ted/8.4/build/lldb-test-build.noindex - 
> types/TestRecursiveTypes.test_recursive_type_1_dwarf .

Other than the extra arguments I can't think of any reason why this
would behave differently. Lit is just a simple wrapper that invokes
dotest.py with the right arguments.

> This does not have the patch that removes multiprocess support from dotest.
>
> Ted
>
> > -Original Message-
> > From: Jonas Devlieghere 
> > Sent: Thursday, August 8, 2019 2:50 PM
> > To: Ted Woodward 
> > Cc: LLDB 
> > Subject: [EXT] Re: How do I use lit to only run the lldb test suite, now 
> > that
> > dotest multiprocessing capabilities have been removed?
> >
> > Hey Ted,
> >
> > 1. You can run just the dotest-tests by pointing lit at the `lit/Suite` 
> > directory.
> > 2. You can pass arguments to dotest by passing `dotest-args` in --param.
> >
> > The invocation would look something like this:
> >
> > /path/to/llvm/bin/llvm-lit /path/to/lldb/lit/Suite --param 
> > 'dotest-args=--foo --
> > bar'
> >
> > Hope that helps,
> > Jonas
> >
> > On Thu, Aug 8, 2019 at 9:31 AM Ted Woodward 
> > wrote:
> > >
> > > RE: https://reviews.llvm.org/D65311
> > >
> > >
> > >
> > > Internally we use dotest to run the lldb test suite with various RTOS
> > configurations for the test binaries. In these runs we don’t care about the 
> > lit
> > tests or the unit tests, because they are OS agnostic. We do this by 
> > specifying
> > the compiler, lldb, and test flavor (static testcase + os, dynamic library 
> > testcase
> > loaded by an OS image, dynamic library testcase loaded by an OS image
> > running the OS’ debug stub).
> > >
> > >
> > >
> > > With the multiprocess testrunner removed, how do I have lit:
> > >
> > > Only run the lldb test suite
> > > Run dotest with specific arguments
> > >
> > >
> > >
> > > We’re not running cmake; we’re taking an existing tools build and running
> > the tests from the source directory using the toolset.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] How do I use lit to only run the lldb test suite, now that dotest multiprocessing capabilities have been removed?

2019-08-08 Thread Jonas Devlieghere via lldb-dev
Hey Ted,

1. You can run just the dotest-tests by pointing lit at the
`lit/Suite` directory.
2. You can pass arguments to dotest by passing `dotest-args` in --param.

The invocation would look something like this:

/path/to/llvm/bin/llvm-lit /path/to/lldb/lit/Suite --param
'dotest-args=--foo --bar'

Hope that helps,
Jonas

On Thu, Aug 8, 2019 at 9:31 AM Ted Woodward  wrote:
>
> RE: https://reviews.llvm.org/D65311
>
>
>
> Internally we use dotest to run the lldb test suite with various RTOS 
> configurations for the test binaries. In these runs we don’t care about the 
> lit tests or the unit tests, because they are OS agnostic. We do this by 
> specifying the compiler, lldb, and test flavor (static testcase + os, dynamic 
> library testcase loaded by an OS image, dynamic library testcase loaded by an 
> OS image running the OS’ debug stub).
>
>
>
> With the multiprocess testrunner removed, how do I have lit:
>
> Only run the lldb test suite
> Run dotest with specific arguments
>
>
>
> We’re not running cmake; we’re taking an existing tools build and running the 
> tests from the source directory using the toolset.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] test suite issue with Python2.7/3.5

2019-07-23 Thread Jonas Devlieghere via lldb-dev
The 7.0 branch is not compatible with Python 3, at least not if you're not
on Windows. The first release that is compatible will be 9.0, which is currently
being qualified. This includes a bunch of compatibility fixes, and a newer
version of the vendored pexpect (4.6). As you've noticed, using different
versions of Python will not work either.

Your only option is to use Python 2.7, both for building LLDB and for
running the test suite. Making sure the 2.7 interpreter is first in your
PATH should be sufficient. Alternatively, you can explicitly pass
-DPYTHON_EXECUTABLE=/path/to/python27.
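
For example, a configure along these lines (the paths are placeholders for
your own checkout and interpreter; shown only as an illustration):

  cmake -G Ninja -DPYTHON_EXECUTABLE=/usr/bin/python2.7 /path/to/llvm
  ninja check-lldb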

Cheers,
Jonas

On Tue, Jul 23, 2019 at 7:38 AM Romaric Jodin via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hi everyone,
>
> I'm trying to run the test suite on lldb and I'm having some issues with
> Python. I'm on branch 7.0.
>
> When I build lldb, I've got a folder "python3.5/site-packages" generated.
> So I believe that the build system found python3 in my environment.
> But when I run the testsuite (using "ninja check-lldb"), I've got this
> issue:
>
> Traceback (most recent call last):
>   File
> "/home/rjodin/work/dpu_tools/llvm/lldb/lldb/packages/Python/lldbsuite/test/decorators.py",
> line 113, in wrapper
> func(*args, **kwargs)
>   File
> "/home/rjodin/work/dpu_tools/llvm/lldb/lldb/packages/Python/lldbsuite/test/decorators.py",
> line 341, in wrapper
> return func(self, *args, **kwargs)
>   File
> "/home/rjodin/work/dpu_tools/llvm/lldb/lldb/packages/Python/lldbsuite/test/functionalities/command_regex/TestCommandRegex.py",
> line 39, in test_command_regex
> child.expect_exact(prompt)
>   File
> "/home/rjodin/work/dpu_tools/llvm/lldb/lldb/third_party/Python/module/pexpect-2.4/pexpect.py",
> line 1386, in expect_exact
> if type(pattern_list) in types.StringTypes or pattern_list in (
> AttributeError: module 'types' has no attribute 'StringTypes'
>
> It seems that it's because the code in "pexpect.py" is not compatible with
> python3.5.
> If I force the system to use python2.7, I've got another issue because of
> the way "_lldb.so" is built (with python3.5):
>
> Traceback (most recent call last):
>   File "/home/rjodin/work/dpu_tools/llvm/lldb/lldb/test/dotest.py", line
> 7, in 
> lldbsuite.test.run_suite()
>   File
> "/home/rjodin/work/dpu_tools/llvm/lldb/lldb/packages/Python/lldbsuite/test/dotest.py",
> line 1180, in run_suite
> import lldb
>   File
> "/home/rjodin/package-sdk-2019.3.0/upmem-internal/usr/share/upmem/lib/python3.5/site-packages/lldb/__init__.py",
> line 39, in 
> import _lldb
> ImportError: dynamic module does not define init function (init_lldb)
>
>
> What do I do wrong?
> Thanks,
> --
> *Romaric JODIN*
> UPMEM
> *Software Engineer*
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Cannot use system debugserver for testing

2019-07-19 Thread Jonas Devlieghere via lldb-dev
I think this was because of a change in llvm which broke codesigning of
debugserver: https://reviews.llvm.org/D64965

On Fri, Jul 19, 2019 at 10:36 AM Gábor Márton via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Actually, it is embarrassing (perhaps for macOS and not for me) that after
> a reboot the problem is gone.
> Perhaps after "sudo /usr/sbin/DevToolsSecurity --enable" a reboot is
> required, but could not find anything official about that.
>
> On Fri, Jul 19, 2019 at 7:20 PM Gábor Márton 
> wrote:
>
>> This might not be related to the debugserver, I just realized that I get
>> "error: process exited with status -1 (Error 1)"
>> even with the simplest main.c.
>> This may be some kind of security issue on mac OS...
>> Though I've checked and I have SIP disabled and I have executed "sudo
>> /usr/sbin/DevToolsSecurity --enable".
>>
>> On Fri, Jul 19, 2019 at 4:46 PM Gábor Márton 
>> wrote:
>>
>>> Hi Stefan,
>>>
>>> Since the commit
>>> "[CMake] Always build debugserver on Darwin and allow tests to use the
>>> system's one"
>>> I cannot use the system debugserver for testing.
>>> I receive the following error message from lldb when I execute "ninja
>>> check-lldb":
>>> ```
>>> runCmd: run
>>> runCmd failed!
>>> error: process exited with status -1 (Error 1)
>>> ```
>>>
>>> I do set up "-DLLDB_USE_SYSTEM_DEBUGSERVER=ON" with cmake so I see
>>> ```
>>> -- LLDB tests use out-of-tree debugserver:
>>> /Library/Developer/CommandLineTools/Library/PrivateFrameworks/LLDB.framework/Resources/debugserver
>>> ```
>>>
>>> Also, I have inspected the following test output
>>> ```
>>> Command invoked: /usr/bin/python
>>> /Users/egbomrt/llvm2/git/llvm/tools/lldb/test/dotest.py -q --arch=x86_64 -s
>>> /Users/egbomrt/llvm2/build/release_assert/lldb-test-traces --build-dir
>>> /Users/egbomrt/llvm2/build/release_assert/lldb-test-build.noindex -S nm -u
>>> CXXFLAGS -u CFLAGS --executable
>>> /Users/egbomrt/llvm2/build/release_assert/./bin/lldb --dsymutil
>>> /Users/egbomrt/llvm2/build/release_assert/./bin/dsymutil --filecheck
>>> /Users/egbomrt/llvm2/build/release_assert/./bin/FileCheck -C
>>> /Users/egbomrt/llvm2/build/release_assert/bin/clang --codesign-identity -
>>> --out-of-tree-debugserver --arch x86_64 -t --env TERM=vt100 -p
>>> TestCModules.py --results-port 49931 -S nm --inferior -p TestCModules.py
>>> /Users/egbomrt/llvm2/git/llvm/tools/lldb/packages/Python/lldbsuite/test/lang/c/modules
>>> --event-add-entries worker_index=0:int
>>>   1 out of 736 test suites processed - TestCModules.py
>>> ```
>>> so it seems like the argument for --out-of-tree-debugserver is missing...
>>>
>>> Could you please advise?
>>>
>>> Thank you,
>>> Gabor
>>>
>> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] LLDB BoF @ LLVM Dev Meeting

2019-07-09 Thread Jonas Devlieghere via lldb-dev
Hey everyone,

I'm proposing an LLDB BoF at the upcoming LLVM developer meeting. Currently
I have a pretty generic abstract, but I think it would be good to have a
more concrete agenda.

> LLDB has seen an influx of contributions over the past year, with the
highest level of activity we've seen in the past 4 years. Let's use this
BoF to discuss everybody's goals and identify places where we can
synchronize our efforts.

Let's use this e-mail thread to collect topics we'd like to discuss.

Thanks,
Jonas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] Removing lldb-mi

2019-07-05 Thread Jonas Devlieghere via lldb-dev
Thank you for doing this, Raphael. I believe this shows that it's possible
to keep lldb-mi alive, without today's maintenance burden on the LLDB
community, a solution that seems to appease everyone's concerns in this
thread. I hope this sparks interest for somebody to step up as a
maintainer.

I went ahead and created a diff to add the proposed deprecations to the
LLVM release notes: https://reviews.llvm.org/D64254
I'll put up another diff to remove the code, which we can land once LLVM 9
has branched.

Thank you,
Jonas

On Thu, Jul 4, 2019 at 12:24 PM Raphael “Teemperor” Isemann via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> I just went forward with this and made a quick test repo with an
> out-of-tree lldb-mi that compiles against the system LLDB:
> https://github.com/Teemperor/lldb-mi This seems to work fine with the
> exception of the python tests which require LLDB’s python code for testing
> which isn’t installed alongside LLDB. I guess we will have to see if we
> copy the related test code there or we just rewrite the test suite (which
> is anyway broken). On the upside, we can now just use Travis for CI as we
> don’t have to compile LLVM/Clang/LLDB, so that’s nice.
>
> I’m in favor of deprecating lldb-mi with 9.0.0 and then we can give
> downstream time until 10.0.0 (or X.0.0 :) ) to package out-of-tree lldb-mi
> for users. Given how simple lldb-mi is, this seems like a reasonable
> timeframe.
>
> - Raphael
>
>
> On Jul 4, 2019, at 9:51 AM, Davide Italiano via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>
>
> On Thu, Jul 4, 2019 at 12:58 AM Zdenek Prikryl via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> We're using it with Eclipse and Eclipse based product, so I'd like to
>> keep as well! :-)...
>>
>> Zdenek
>>
>
> I do understand that there's desire from people to keep this around (from
> an user perspective), but I guess this fundamentally misses Jonas' original
> mail point.
> lldb-mi has been unmaintained for a long time (at least the past two years
> from what I can tell), and we tried to use it in emacs without success.
> It has never been a priority for many of the parties putting effort in
> lldb and I'm under the impression the situation won't change in the
> foreseeable future.
> Unless somebody steps up as maintainer I don't think there's a lot of
> future for the tool.
> Maybe a good compromise would be that of having lldb-mi living in a
> separate repo somewhere on GitHub, as it only uses the SBAPI, which is
> public and set in stone?
>
> --
> Davide
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Docs] Unify build instructions

2019-07-03 Thread Jonas Devlieghere via lldb-dev
Hey Stefan,

> On Jul 3, 2019, at 3:29 PM, Stefan Gränitz  wrote:
> 
> Hey Jonas, thanks for the initiative! I actually started doing the same today 
> but then got distracted. Your changes certainly put it in much better shape!
> Before touching anything again, maybe it's worth some feedback first:
> 
> * In "Building LLDB with CMake & Ninja": Should we update the build 
> instructions for the monorepo?

I was thinking about this myself. I believe it's part of the LLVM getting 
started page, which we link initially. On the other hand, it might be good to 
point out that you need libcxx on macOS. I'll leave the decision up to you. 

> 
> * In "Building LLDB with CMake and Other Generators": Using an IDE generator 
> for all of LLDB/LLVM/Clang/libc++ results in a huge workspace that is not 
> really manageable anymore (at least in Xcode). Do you think it's worth 
> explaining here that we can generate the IDE workspace for standalone LLDB 
> and have the dependencies in a provided Ninja build-tree?

Yep, definitely! I was going to ask you to complete this section as you've been 
working on this. 

> 
> * Either as another section or (maybe for now) as a note in the existing 
> per-OS sections, I would explain how to use the CMake caches. I think that 
> could be useful with the above two proposals.

Sounds good!

> 
> What do you think?
> I will rebase my current state on yours on Monday and then submit a proposal 
> during the week.

Thanks!

> 
> Best
> Stefan
> 
>> On 3. Jul 2019, at 22:51, Jonas Devlieghere via Phabricator 
>>  wrote:
>> 
>> JDevlieghere closed this revision.
>> JDevlieghere added a comment.
>> 
>> Landed in rL365081 
>> 
>> 
>> CHANGES SINCE LAST ACTION
>> https://reviews.llvm.org/D64154/new/
>> 
>> https://reviews.llvm.org/D64154
>> 
>> 
>> 
> 

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [RFC] Removing lldb-mi

2019-07-01 Thread Jonas Devlieghere via lldb-dev
Hi everyone,

After long consideration, I want to propose removing lldb-mi from the
repository. It is basically unmaintained, doesn't match the LLDB code
style, and worst of all the tests are unreliable if not already disabled.
As far as I can tell it's missing core functionality to be usable from
something like say emacs.

Thanks,
Jonas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB Website

2019-05-07 Thread Jonas Devlieghere via lldb-dev
hanges before to prevent any
>>>> issue.
>>>>
>>>> -Tanya
>>>>
>>>> On Apr 29, 2019, at 10:26 AM, Jonas Devlieghere 
>>>> wrote:
>>>>
>>>> I've merged the aforementioned patch.
>>>>
>>>> Tanya, can you give generating the python docs another shot?
>>>>
>>>> Thanks,
>>>> Jonas
>>>>
>>>> On Fri, Apr 26, 2019 at 4:29 PM Jonas Devlieghere <
>>>> jo...@devlieghere.com> wrote:
>>>>
>>>>> I've put up a patch to make it possible to generate the python
>>>>> reference without building lldb at all:
>>>>> https://reviews.llvm.org/D61216
>>>>>
>>>>> PS: The website isn't updating anymore, is that because of the python
>>>>> reference generation?
>>>>>
>>>>> On Wed, Apr 24, 2019 at 11:46 AM Ted Woodward 
>>>>> wrote:
>>>>>
>>>>>> That's the issue - lldb-python-doc depends on liblldb. From
>>>>>> docs/CMakeLists.txt:
>>>>>>
>>>>>> if(EPYDOC_EXECUTABLE)
>>>>>>   find_program(DOT_EXECUTABLE dot)
>>>>>> if(DOT_EXECUTABLE)
>>>>>>   set(EPYDOC_OPTIONS ${EPYDOC_OPTIONS} --graph all --dotpath
>>>>>> ${DOT_EXECUTABLE})
>>>>>> endif()
>>>>>> set(DOC_DIR "${CMAKE_CURRENT_SOURCE_DIR}/doc")
>>>>>> file(MAKE_DIRECTORY "${DOC_DIR}")
>>>>>> #set(ENV{PYTHONPATH}
>>>>>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib/python2.7/site-packages)
>>>>>> add_custom_target(lldb-python-doc
>>>>>>   ${EPYDOC_EXECUTABLE}
>>>>>>   --html
>>>>>>   lldb
>>>>>>   -o ${CMAKE_CURRENT_BINARY_DIR}/python_reference
>>>>>>   --name "LLDB python API"
>>>>>>   --url "http://lldb.llvm.org"
>>>>>>   ${EPYDOC_OPTIONS}
>>>>>>   DEPENDS swig_wrapper liblldb
>>>>>>   WORKING_DIRECTORY
>>>>>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib${LLVM_LIBDIR_SUFFIX}/python2.7/site-packages
>>>>>>   COMMENT "Generating LLDB Python API reference with epydoc"
>>>>>> VERBATIM
>>>>>> )
>>>>>> endif(EPYDOC_EXECUTABLE)
>>>>>>
>>>>>>
>>>>>> > -Original Message-
>>>>>> > From: lldb-dev  On Behalf Of
>>>>>> Pavel Labath
>>>>>> > via lldb-dev
>>>>>> > Sent: Wednesday, April 24, 2019 1:16 AM
>>>>>> > To: Jonas Devlieghere ; Tanya Lattner
>>>>>> > 
>>>>>> > Cc: LLDB 
>>>>>> > Subject: [EXT] Re: [lldb-dev] LLDB Website
>>>>>> >
>>>>>> > On 24/04/2019 03:19, Jonas Devlieghere via lldb-dev wrote:
>>>>>> > >
>>>>>> > >
>>>>>> > > On Tue, Apr 23, 2019 at 6:04 PM Jonas Devlieghere
>>>>>> > > mailto:jo...@devlieghere.com>> wrote:
>>>>>> > >
>>>>>> > >
>>>>>> > >
>>>>>> > > On Tue, Apr 23, 2019 at 5:43 PM Tanya Lattner <
>>>>>> tanyalatt...@llvm.org
>>>>>> > > <mailto:tanyalatt...@llvm.org>> wrote:
>>>>>> > >
>>>>>> > >
>>>>>> > >
>>>>>> > >> On Apr 23, 2019, at 5:06 PM, Jonas Devlieghere
>>>>>> > >> mailto:jo...@devlieghere.com>>
>>>>>> wrote:
>>>>>> > >>
>>>>>> > >>
>>>>>> > >>
>>>>>> > >> On Tue, Apr 23, 2019 at 5:00 PM Tanya Lattner
>>>>>> > >> mailto:tanyalatt...@llvm.org>>
>>>>>> wrote:
>>>>>> > >>
>>>>>> > >>
>>>>>> > >>
>>>>>> > >>> On Apr 23, 2019, at 11:54 AM, Jonas Devlieghere
>>>>>> > >>> >>>>> jo...@devlieghere.com>>
>>>>>> > wrote:
>>>>>> > >

Re: [lldb-dev] LLDB Website

2019-05-07 Thread Jonas Devlieghere via lldb-dev
>> wrote:
>>>>
>>>>> That's the issue - lldb-python-doc depends on liblldb. From
>>>>> docs/CMakeLists.txt:
>>>>>
>>>>> if(EPYDOC_EXECUTABLE)
>>>>>   find_program(DOT_EXECUTABLE dot)
>>>>> if(DOT_EXECUTABLE)
>>>>>   set(EPYDOC_OPTIONS ${EPYDOC_OPTIONS} --graph all --dotpath
>>>>> ${DOT_EXECUTABLE})
>>>>> endif()
>>>>> set(DOC_DIR "${CMAKE_CURRENT_SOURCE_DIR}/doc")
>>>>> file(MAKE_DIRECTORY "${DOC_DIR}")
>>>>> #set(ENV{PYTHONPATH}
>>>>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib/python2.7/site-packages)
>>>>> add_custom_target(lldb-python-doc
>>>>>   ${EPYDOC_EXECUTABLE}
>>>>>   --html
>>>>>   lldb
>>>>>   -o ${CMAKE_CURRENT_BINARY_DIR}/python_reference
>>>>>   --name "LLDB python API"
>>>>>   --url "http://lldb.llvm.org"
>>>>>   ${EPYDOC_OPTIONS}
>>>>>   DEPENDS swig_wrapper liblldb
>>>>>   WORKING_DIRECTORY
>>>>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib${LLVM_LIBDIR_SUFFIX}/python2.7/site-packages
>>>>>   COMMENT "Generating LLDB Python API reference with epydoc"
>>>>> VERBATIM
>>>>> )
>>>>> endif(EPYDOC_EXECUTABLE)
>>>>>
>>>>>
>>>>> > -Original Message-
>>>>> > From: lldb-dev  On Behalf Of Pavel
>>>>> Labath
>>>>> > via lldb-dev
>>>>> > Sent: Wednesday, April 24, 2019 1:16 AM
>>>>> > To: Jonas Devlieghere ; Tanya Lattner
>>>>> > 
>>>>> > Cc: LLDB 
>>>>> > Subject: [EXT] Re: [lldb-dev] LLDB Website
>>>>> >
>>>>> > On 24/04/2019 03:19, Jonas Devlieghere via lldb-dev wrote:
>>>>> > >
>>>>> > >
>>>>> > > On Tue, Apr 23, 2019 at 6:04 PM Jonas Devlieghere
>>>>> > > mailto:jo...@devlieghere.com>> wrote:
>>>>> > >
>>>>> > >
>>>>> > >
>>>>> > > On Tue, Apr 23, 2019 at 5:43 PM Tanya Lattner <
>>>>> tanyalatt...@llvm.org
>>>>> > > <mailto:tanyalatt...@llvm.org>> wrote:
>>>>> > >
>>>>> > >
>>>>> > >
>>>>> > >> On Apr 23, 2019, at 5:06 PM, Jonas Devlieghere
>>>>> > >> mailto:jo...@devlieghere.com>>
>>>>> wrote:
>>>>> > >>
>>>>> > >>
>>>>> > >>
>>>>> > >> On Tue, Apr 23, 2019 at 5:00 PM Tanya Lattner
>>>>> > >> mailto:tanyalatt...@llvm.org>>
>>>>> wrote:
>>>>> > >>
>>>>> > >>
>>>>> > >>
>>>>> > >>> On Apr 23, 2019, at 11:54 AM, Jonas Devlieghere
>>>>> > >>> mailto:jo...@devlieghere.com
>>>>> >>
>>>>> > wrote:
>>>>> > >>>
>>>>> > >>> Hey Tanya,
>>>>> > >>>
>>>>> > >>> On Tue, Apr 23, 2019 at 11:51 Tanya Lattner
>>>>> > >>> mailto:tanyalatt...@llvm.org>>
>>>>> wrote:
>>>>> > >>>
>>>>> > >>> Jonas,
>>>>> > >>>
>>>>> > >>> Ignore what I said before as these do need to be
>>>>> > >>> separate targets. It appears the new targets are
>>>>> > >>> running doxygen. This isn’t something we
>>>>> typically do
>>>>> > >>> as a post commit hook since it takes awhile. I’ll
>>>>> > >>> need to do this via the doxygen nightly script.
>>>>> Any
>>>>> > >>> concerns?
>>>>> > >>>
>>>>> > >>> That sounds perfect. Can we still do the regular
>>>>> website
>>>>> > >>> post commit?
>>&

Re: [lldb-dev] LLDB Website

2019-05-07 Thread Jonas Devlieghere via lldb-dev
Hey Tanya,

That's great. I see the Python documentation is online now!

Unfortunately it appears that the Sphinx part still isn't updating. I
pushed a bunch of changes last week and none have made it to the homepage
yet. I checked the www-scripts mailing list but I don't see any failures
for LLDB. Do you know what's up here?

Thanks,
Jonas

On Tue, May 7, 2019 at 12:19 AM Tanya Lattner  wrote:

> Ignore this. svn wasn’t actually updating the src tree. It works! I just
> need doxygen script to finish and it will be confirmed tonight.
>
>
> -Tanya
>
> On May 6, 2019, at 11:55 PM, Tanya Lattner  wrote:
>
> I’m not sure it is working. To clarify, nothing in LLVM should be compiled
> to build the python docs correct?
>
> So I shouldn’t see this?
> *Scanning dependencies of target liblldb_exports*
> [  0%] *Creating export file for liblldb*
> [  0%] Built target liblldb_exports
> *Scanning dependencies of target LLVMDemangle*
> [  0%] Building CXX object
> lib/Demangle/CMakeFiles/LLVMDemangle.dir/Demangle.cpp.o
> [  0%] Building CXX object
> lib/Demangle/CMakeFiles/LLVMDemangle.dir/ItaniumDemangle.cpp.o
> [  0%] Building CXX object
> lib/Demangle/CMakeFiles/LLVMDemangle.dir/MicrosoftDemangle.cpp.o
> [  0%] Building CXX object
> lib/Demangle/CMakeFiles/LLVMDemangle.dir/MicrosoftDemangleNodes.cpp.o
> [  0%] *Linking CXX static library ../libLLVMDemangle.a*
> [  0%] Built target LLVMDemangle
> *Scanning dependencies of target LLVMSupport*
> [  0%] Building CXX object
> lib/Support/CMakeFiles/LLVMSupport.dir/AArch64TargetParser.cpp.o
> [  0%] Building CXX object
> lib/Support/CMakeFiles/LLVMSupport.dir/ARMTargetParser.cpp.o
> [  0%] Building CXX object
> lib/Support/CMakeFiles/LLVMSupport.dir/AMDGPUMetadata.cpp.o
> [  0%] Building CXX object
> lib/Support/CMakeFiles/LLVMSupport.dir/APFloat.cpp.o
> [  0%] Building CXX object
> lib/Support/CMakeFiles/LLVMSupport.dir/APInt.cpp.o
>
> Do I need any additional config options?
>
> Thanks,
> Tanya
>
> On May 3, 2019, at 8:58 AM, Jonas Devlieghere 
> wrote:
>
> Hey Tanya,
>
> It appears the website is still stuck. It hasn't picked up my changes from
> earlier this week. Please let me know if there's anything I can do to help.
>
> Thanks,
> Jonas
>
> On Wed, May 1, 2019 at 10:40 PM Tanya Lattner 
> wrote:
>
>> I will give this a shot. I did remove the changes before to prevent any
>> issue.
>>
>> -Tanya
>>
>> On Apr 29, 2019, at 10:26 AM, Jonas Devlieghere 
>> wrote:
>>
>> I've merged the aforementioned patch.
>>
>> Tanya, can you give generating the python docs another shot?
>>
>> Thanks,
>> Jonas
>>
>> On Fri, Apr 26, 2019 at 4:29 PM Jonas Devlieghere 
>> wrote:
>>
>>> I've put up a patch to make it possible to generate the python reference
>>> without building lldb at all: https://reviews.llvm.org/D61216
>>>
>>> PS: The website isn't updating anymore, is that because of the python
>>> reference generation?
>>>
>>> On Wed, Apr 24, 2019 at 11:46 AM Ted Woodward 
>>> wrote:
>>>
>>>> That's the issue - lldb-python-doc depends on liblldb. From
>>>> docs/CMakeLists.txt:
>>>>
>>>> if(EPYDOC_EXECUTABLE)
>>>>   find_program(DOT_EXECUTABLE dot)
>>>> if(DOT_EXECUTABLE)
>>>>   set(EPYDOC_OPTIONS ${EPYDOC_OPTIONS} --graph all --dotpath
>>>> ${DOT_EXECUTABLE})
>>>> endif()
>>>> set(DOC_DIR "${CMAKE_CURRENT_SOURCE_DIR}/doc")
>>>> file(MAKE_DIRECTORY "${DOC_DIR}")
>>>> #set(ENV{PYTHONPATH}
>>>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib/python2.7/site-packages)
>>>> add_custom_target(lldb-python-doc
>>>>   ${EPYDOC_EXECUTABLE}
>>>>   --html
>>>>   lldb
>>>>   -o ${CMAKE_CURRENT_BINARY_DIR}/python_reference
>>>>   --name "LLDB python API"
>>>>   --url "http://lldb.llvm.org"
>>>>       ${EPYDOC_OPTIONS}
>>>>   DEPENDS swig_wrapper liblldb
>>>>   WORKING_DIRECTORY
>>>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib${LLVM_LIBDIR_SUFFIX}/python2.7/site-packages
>>>>   COMMENT "Generating LLDB Python API reference with epydoc"
>>>> VERBATIM
>>>> )
>>>> endif(EPYDOC_EXECUTABLE)
>>>>
>>>>
>>>> > -Original Message-
>>>> > From: lldb-dev  On Behalf Of Pavel
>>>> Labath
>>>> > via lldb-dev
>>>>

Re: [lldb-dev] LLDB Website

2019-05-06 Thread Jonas Devlieghere via lldb-dev
Friendly ping.

On Fri, May 3, 2019 at 8:58 AM Jonas Devlieghere 
wrote:

> Hey Tanya,
>
> It appears the website is still stuck. It hasn't picked up my changes from
> earlier this week. Please let me know if there's anything I can do to help.
>
> Thanks,
> Jonas
>
> On Wed, May 1, 2019 at 10:40 PM Tanya Lattner 
> wrote:
>
>> I will give this a shot. I did remove the changes before to prevent any
>> issue.
>>
>> -Tanya
>>
>> On Apr 29, 2019, at 10:26 AM, Jonas Devlieghere 
>> wrote:
>>
>> I've merged the aforementioned patch.
>>
>> Tanya, can you give generating the python docs another shot?
>>
>> Thanks,
>> Jonas
>>
>> On Fri, Apr 26, 2019 at 4:29 PM Jonas Devlieghere 
>> wrote:
>>
>>> I've put up a patch to make it possible to generate the python reference
>>> without building lldb at all: https://reviews.llvm.org/D61216
>>>
>>> PS: The website isn't updating anymore, is that because of the python
>>> reference generation?
>>>
>>> On Wed, Apr 24, 2019 at 11:46 AM Ted Woodward 
>>> wrote:
>>>
>>>> That's the issue - lldb-python-doc depends on liblldb. From
>>>> docs/CMakeLists.txt:
>>>>
>>>> if(EPYDOC_EXECUTABLE)
>>>>   find_program(DOT_EXECUTABLE dot)
>>>> if(DOT_EXECUTABLE)
>>>>   set(EPYDOC_OPTIONS ${EPYDOC_OPTIONS} --graph all --dotpath
>>>> ${DOT_EXECUTABLE})
>>>> endif()
>>>> set(DOC_DIR "${CMAKE_CURRENT_SOURCE_DIR}/doc")
>>>> file(MAKE_DIRECTORY "${DOC_DIR}")
>>>> #set(ENV{PYTHONPATH}
>>>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib/python2.7/site-packages)
>>>> add_custom_target(lldb-python-doc
>>>>   ${EPYDOC_EXECUTABLE}
>>>>   --html
>>>>   lldb
>>>>   -o ${CMAKE_CURRENT_BINARY_DIR}/python_reference
>>>>   --name "LLDB python API"
>>>>   --url "http://lldb.llvm.org"
>>>>   ${EPYDOC_OPTIONS}
>>>>   DEPENDS swig_wrapper liblldb
>>>>   WORKING_DIRECTORY
>>>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib${LLVM_LIBDIR_SUFFIX}/python2.7/site-packages
>>>>   COMMENT "Generating LLDB Python API reference with epydoc"
>>>> VERBATIM
>>>> )
>>>> endif(EPYDOC_EXECUTABLE)
>>>>
>>>>
>>>> > -Original Message-
>>>> > From: lldb-dev  On Behalf Of Pavel
>>>> Labath
>>>> > via lldb-dev
>>>> > Sent: Wednesday, April 24, 2019 1:16 AM
>>>> > To: Jonas Devlieghere ; Tanya Lattner
>>>> > 
>>>> > Cc: LLDB 
>>>> > Subject: [EXT] Re: [lldb-dev] LLDB Website
>>>> >
>>>> > On 24/04/2019 03:19, Jonas Devlieghere via lldb-dev wrote:
>>>> > >
>>>> > >
>>>> > > On Tue, Apr 23, 2019 at 6:04 PM Jonas Devlieghere
>>>> > > mailto:jo...@devlieghere.com>> wrote:
>>>> > >
>>>> > >
>>>> > >
>>>> > > On Tue, Apr 23, 2019 at 5:43 PM Tanya Lattner <
>>>> tanyalatt...@llvm.org
>>>> > > <mailto:tanyalatt...@llvm.org>> wrote:
>>>> > >
>>>> > >
>>>> > >
>>>> > >> On Apr 23, 2019, at 5:06 PM, Jonas Devlieghere
>>>> > >> mailto:jo...@devlieghere.com>>
>>>> wrote:
>>>> > >>
>>>> > >>
>>>> > >>
>>>> > >> On Tue, Apr 23, 2019 at 5:00 PM Tanya Lattner
>>>> > >> mailto:tanyalatt...@llvm.org>>
>>>> wrote:
>>>> > >>
>>>> > >>
>>>> > >>
>>>> > >>> On Apr 23, 2019, at 11:54 AM, Jonas Devlieghere
>>>> > >>> mailto:jo...@devlieghere.com
>>>> >>
>>>> > wrote:
>>>> > >>>
>>>> > >>> Hey Tanya,
>>>> > >>>
>>>> > >>> On Tue, Apr 23, 2019 at 11:51 Tanya Lattner
>>>> > >>> mailto:tanyalatt...@llvm.org>>
>>>> wrote:
>>>> > >>>
>>>> > >>> Jonas,
>

Re: [lldb-dev] LLDB Website

2019-05-03 Thread Jonas Devlieghere via lldb-dev
Hey Tanya,

It appears the website is still stuck. It hasn't picked up my changes from
earlier this week. Please let me know if there's anything I can do to help.

Thanks,
Jonas

On Wed, May 1, 2019 at 10:40 PM Tanya Lattner  wrote:

> I will give this a shot. I did remove the changes before to prevent any
> issue.
>
> -Tanya
>
> On Apr 29, 2019, at 10:26 AM, Jonas Devlieghere 
> wrote:
>
> I've merged the aforementioned patch.
>
> Tanya, can you give generating the python docs another shot?
>
> Thanks,
> Jonas
>
> On Fri, Apr 26, 2019 at 4:29 PM Jonas Devlieghere 
> wrote:
>
>> I've put up a patch to make it possible to generate the python reference
>> without building lldb at all: https://reviews.llvm.org/D61216
>>
>> PS: The website isn't updating anymore, is that because of the python
>> reference generation?
>>
>> On Wed, Apr 24, 2019 at 11:46 AM Ted Woodward 
>> wrote:
>>
>>> That's the issue - lldb-python-doc depends on liblldb. From
>>> docs/CMakeLists.txt:
>>>
>>> if(EPYDOC_EXECUTABLE)
>>>   find_program(DOT_EXECUTABLE dot)
>>> if(DOT_EXECUTABLE)
>>>   set(EPYDOC_OPTIONS ${EPYDOC_OPTIONS} --graph all --dotpath
>>> ${DOT_EXECUTABLE})
>>> endif()
>>> set(DOC_DIR "${CMAKE_CURRENT_SOURCE_DIR}/doc")
>>> file(MAKE_DIRECTORY "${DOC_DIR}")
>>> #set(ENV{PYTHONPATH}
>>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib/python2.7/site-packages)
>>> add_custom_target(lldb-python-doc
>>>   ${EPYDOC_EXECUTABLE}
>>>   --html
>>>   lldb
>>>   -o ${CMAKE_CURRENT_BINARY_DIR}/python_reference
>>>   --name "LLDB python API"
>>>   --url "http://lldb.llvm.org"
>>>   ${EPYDOC_OPTIONS}
>>>   DEPENDS swig_wrapper liblldb
>>>   WORKING_DIRECTORY
>>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib${LLVM_LIBDIR_SUFFIX}/python2.7/site-packages
>>>   COMMENT "Generating LLDB Python API reference with epydoc" VERBATIM
>>> )
>>> endif(EPYDOC_EXECUTABLE)
>>>
>>>
>>> > -Original Message-
>>> > From: lldb-dev  On Behalf Of Pavel
>>> Labath
>>> > via lldb-dev
>>> > Sent: Wednesday, April 24, 2019 1:16 AM
>>> > To: Jonas Devlieghere ; Tanya Lattner
>>> > 
>>> > Cc: LLDB 
>>> > Subject: [EXT] Re: [lldb-dev] LLDB Website
>>> >
>>> > On 24/04/2019 03:19, Jonas Devlieghere via lldb-dev wrote:
>>> > >
>>> > >
>>> > > On Tue, Apr 23, 2019 at 6:04 PM Jonas Devlieghere
>>> > > mailto:jo...@devlieghere.com>> wrote:
>>> > >
>>> > >
>>> > >
>>> > > On Tue, Apr 23, 2019 at 5:43 PM Tanya Lattner <
>>> tanyalatt...@llvm.org
>>> > > <mailto:tanyalatt...@llvm.org>> wrote:
>>> > >
>>> > >
>>> > >
>>> > >> On Apr 23, 2019, at 5:06 PM, Jonas Devlieghere
>>> > >> mailto:jo...@devlieghere.com>>
>>> wrote:
>>> > >>
>>> > >>
>>> > >>
>>> > >> On Tue, Apr 23, 2019 at 5:00 PM Tanya Lattner
>>> > >> mailto:tanyalatt...@llvm.org>>
>>> wrote:
>>> > >>
>>> > >>
>>> > >>
>>> > >>> On Apr 23, 2019, at 11:54 AM, Jonas Devlieghere
>>> > >>> mailto:jo...@devlieghere.com>>
>>> > wrote:
>>> > >>>
>>> > >>> Hey Tanya,
>>> > >>>
>>> > >>> On Tue, Apr 23, 2019 at 11:51 Tanya Lattner
>>> > >>> mailto:tanyalatt...@llvm.org>>
>>> wrote:
>>> > >>>
>>> > >>> Jonas,
>>> > >>>
>>> > >>> Ignore what I said before as these do need to be
>>> > >>> separate targets. It appears the new targets are
>>> > >>> running doxygen. This isn’t something we typically
>>> do
>>> > >>> as a post commit hook since it takes awhile. I’ll
>>> > >>> need to do this via the doxygen nightly script. Any
>>> > >>>  

Re: [lldb-dev] LLDB Website

2019-04-29 Thread Jonas Devlieghere via lldb-dev
I've merged the aforementioned patch.

Tanya, can you give generating the python docs another shot?

Thanks,
Jonas

On Fri, Apr 26, 2019 at 4:29 PM Jonas Devlieghere 
wrote:

> I've put up a patch to make it possible to generate the python reference
> without building lldb at all: https://reviews.llvm.org/D61216
>
> PS: The website isn't updating anymore, is that because of the python
> reference generation?
>
> On Wed, Apr 24, 2019 at 11:46 AM Ted Woodward  wrote:
>
>> That's the issue - lldb-python-doc depends on liblldb. From
>> docs/CMakeLists.txt:
>>
>> if(EPYDOC_EXECUTABLE)
>>   find_program(DOT_EXECUTABLE dot)
>> if(DOT_EXECUTABLE)
>>   set(EPYDOC_OPTIONS ${EPYDOC_OPTIONS} --graph all --dotpath
>> ${DOT_EXECUTABLE})
>> endif()
>> set(DOC_DIR "${CMAKE_CURRENT_SOURCE_DIR}/doc")
>> file(MAKE_DIRECTORY "${DOC_DIR}")
>> #set(ENV{PYTHONPATH}
>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib/python2.7/site-packages)
>> add_custom_target(lldb-python-doc
>>   ${EPYDOC_EXECUTABLE}
>>   --html
>>   lldb
>>   -o ${CMAKE_CURRENT_BINARY_DIR}/python_reference
>>   --name "LLDB python API"
>>   --url "http://lldb.llvm.org"
>>   ${EPYDOC_OPTIONS}
>>   DEPENDS swig_wrapper liblldb
>>   WORKING_DIRECTORY
>> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib${LLVM_LIBDIR_SUFFIX}/python2.7/site-packages
>>   COMMENT "Generating LLDB Python API reference with epydoc" VERBATIM
>> )
>> endif(EPYDOC_EXECUTABLE)
>>
>>
>> > -Original Message-
>> > From: lldb-dev  On Behalf Of Pavel
>> Labath
>> > via lldb-dev
>> > Sent: Wednesday, April 24, 2019 1:16 AM
>> > To: Jonas Devlieghere ; Tanya Lattner
>> > 
>> > Cc: LLDB 
>> > Subject: [EXT] Re: [lldb-dev] LLDB Website
>> >
>> > On 24/04/2019 03:19, Jonas Devlieghere via lldb-dev wrote:
>> > >
>> > >
>> > > On Tue, Apr 23, 2019 at 6:04 PM Jonas Devlieghere
>> > > mailto:jo...@devlieghere.com>> wrote:
>> > >
>> > >
>> > >
>> > > On Tue, Apr 23, 2019 at 5:43 PM Tanya Lattner <
>> tanyalatt...@llvm.org
>> > > <mailto:tanyalatt...@llvm.org>> wrote:
>> > >
>> > >
>> > >
>> > >> On Apr 23, 2019, at 5:06 PM, Jonas Devlieghere
>> > >> mailto:jo...@devlieghere.com>>
>> wrote:
>> > >>
>> > >>
>> > >>
>> > >> On Tue, Apr 23, 2019 at 5:00 PM Tanya Lattner
>> > >> mailto:tanyalatt...@llvm.org>>
>> wrote:
>> > >>
>> > >>
>> > >>
>> > >>> On Apr 23, 2019, at 11:54 AM, Jonas Devlieghere
>> > >>> mailto:jo...@devlieghere.com>>
>> > wrote:
>> > >>>
>> > >>> Hey Tanya,
>> > >>>
>> > >>> On Tue, Apr 23, 2019 at 11:51 Tanya Lattner
>> > >>> mailto:tanyalatt...@llvm.org>>
>> wrote:
>> > >>>
>> > >>> Jonas,
>> > >>>
>> > >>> Ignore what I said before as these do need to be
>> > >>> separate targets. It appears the new targets are
>> > >>> running doxygen. This isn’t something we typically
>> do
>> > >>> as a post commit hook since it takes awhile. I’ll
>> > >>> need to do this via the doxygen nightly script. Any
>> > >>> concerns?
>> > >>>
>> > >>> That sounds perfect. Can we still do the regular website
>> > >>> post commit?
>> > >>
>> > >> Yes, so it will do docs-lldb-html on every commit.
>> > >>
>> > >>
>> > >> Perfect!
>> > >>
>> > >>
>> > >> So I am able to generate the cpp reference docs:
>> > >> https://lldb.llvm.org/cpp_reference/index.html
>> > >>
>> > >> However, the main website links to
>> > >> https://lldb.llvm.org/cpp_reference/html/index.html. Do
>> > >> you want the html in that url? 

Re: [lldb-dev] LLDB Website

2019-04-26 Thread Jonas Devlieghere via lldb-dev
I've put up a patch to make it possible to generate the python reference
without building lldb at all: https://reviews.llvm.org/D61216

PS: The website isn't updating anymore, is that because of the python
reference generation?

On Wed, Apr 24, 2019 at 11:46 AM Ted Woodward  wrote:

> That's the issue - lldb-python-doc depends on liblldb. From
> docs/CMakeLists.txt:
>
> if(EPYDOC_EXECUTABLE)
>   find_program(DOT_EXECUTABLE dot)
> if(DOT_EXECUTABLE)
>   set(EPYDOC_OPTIONS ${EPYDOC_OPTIONS} --graph all --dotpath
> ${DOT_EXECUTABLE})
> endif()
> set(DOC_DIR "${CMAKE_CURRENT_SOURCE_DIR}/doc")
> file(MAKE_DIRECTORY "${DOC_DIR}")
> #set(ENV{PYTHONPATH}
> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib/python2.7/site-packages)
> add_custom_target(lldb-python-doc
>   ${EPYDOC_EXECUTABLE}
>   --html
>   lldb
>   -o ${CMAKE_CURRENT_BINARY_DIR}/python_reference
>   --name "LLDB python API"
>   --url "http://lldb.llvm.org"
>   ${EPYDOC_OPTIONS}
>   DEPENDS swig_wrapper liblldb
>   WORKING_DIRECTORY
> ${CMAKE_CURRENT_BINARY_DIR}/../../../lib${LLVM_LIBDIR_SUFFIX}/python2.7/site-packages
>   COMMENT "Generating LLDB Python API reference with epydoc" VERBATIM
> )
> endif(EPYDOC_EXECUTABLE)
>
>
> > -Original Message-
> > From: lldb-dev  On Behalf Of Pavel
> Labath
> > via lldb-dev
> > Sent: Wednesday, April 24, 2019 1:16 AM
> > To: Jonas Devlieghere ; Tanya Lattner
> > 
> > Cc: LLDB 
> > Subject: [EXT] Re: [lldb-dev] LLDB Website
> >
> > On 24/04/2019 03:19, Jonas Devlieghere via lldb-dev wrote:
> > >
> > >
> > > On Tue, Apr 23, 2019 at 6:04 PM Jonas Devlieghere
> > > mailto:jo...@devlieghere.com>> wrote:
> > >
> > >
> > >
> > > On Tue, Apr 23, 2019 at 5:43 PM Tanya Lattner <
> tanyalatt...@llvm.org
> > > <mailto:tanyalatt...@llvm.org>> wrote:
> > >
> > >
> > >
> > >> On Apr 23, 2019, at 5:06 PM, Jonas Devlieghere
> > >> mailto:jo...@devlieghere.com>> wrote:
> > >>
> > >>
> > >>
> > >> On Tue, Apr 23, 2019 at 5:00 PM Tanya Lattner
> > >> mailto:tanyalatt...@llvm.org>> wrote:
> > >>
> > >>
> > >>
> > >>> On Apr 23, 2019, at 11:54 AM, Jonas Devlieghere
> > >>> mailto:jo...@devlieghere.com>>
> > wrote:
> > >>>
> > >>> Hey Tanya,
> > >>>
> > >>> On Tue, Apr 23, 2019 at 11:51 Tanya Lattner
> > >>> mailto:tanyalatt...@llvm.org>>
> wrote:
> > >>>
> > >>> Jonas,
> > >>>
> > >>> Ignore what I said before as these do need to be
> > >>> separate targets. It appears the new targets are
> > >>> running doxygen. This isn’t something we typically do
> > >>> as a post commit hook since it takes awhile. I’ll
> > >>> need to do this via the doxygen nightly script. Any
> > >>> concerns?
> > >>>
> > >>> That sounds perfect. Can we still do the regular website
> > >>> post commit?
> > >>
> > >> Yes, so it will do docs-lldb-html on every commit.
> > >>
> > >>
> > >> Perfect!
> > >>
> > >>
> > >> So I am able to generate the cpp reference docs:
> > >> https://lldb.llvm.org/cpp_reference/index.html
> > >>
> > >> However, the main website links to
> > >> https://lldb.llvm.org/cpp_reference/html/index.html. Do
> > >> you want the html in that url? I can change the alias. We
> > >> strip for other doxygen.
> > >>
> > >>
> > >> Let's keep it without the html. I'll update a link on the
> > >> website and add a redirect.
> > >>
> > >>
> > >> As for python docs, what is required to build those? It's
> > >> not showing up as a target for me.
> > >>
> > >>
> > >> This is probably because you don't have `epydoc` installed
> > >> (sudo pip insta

Re: [lldb-dev] LLDB Website

2019-04-23 Thread Jonas Devlieghere via lldb-dev
On Tue, Apr 23, 2019 at 6:04 PM Jonas Devlieghere 
wrote:

>
>
> On Tue, Apr 23, 2019 at 5:43 PM Tanya Lattner 
> wrote:
>
>>
>>
>> On Apr 23, 2019, at 5:06 PM, Jonas Devlieghere 
>> wrote:
>>
>>
>>
>> On Tue, Apr 23, 2019 at 5:00 PM Tanya Lattner 
>> wrote:
>>
>>>
>>>
>>> On Apr 23, 2019, at 11:54 AM, Jonas Devlieghere 
>>> wrote:
>>>
>>> Hey Tanya,
>>>
>>> On Tue, Apr 23, 2019 at 11:51 Tanya Lattner 
>>> wrote:
>>>
 Jonas,

 Ignore what I said before as these do need to be separate targets. It
 appears the new targets are running doxygen. This isn’t something we
 typically do as a post commit hook since it takes awhile. I’ll need to do
 this via the doxygen nightly script. Any concerns?

>>>
>>> That sounds perfect. Can we still do the regular website post commit?
>>>
>>>
>>> Yes, so it will do docs-lldb-html on every commit.
>>>
>>
>> Perfect!
>>
>>
>>>
>>> So I am able to generate the cpp reference docs:
>>> https://lldb.llvm.org/cpp_reference/index.html
>>>
>>> However, the main website links to
>>> https://lldb.llvm.org/cpp_reference/html/index.html. Do you want the
>>> html in that url? I can change the alias. We strip for other doxygen.
>>>
>>
>> Let's keep it without the html. I'll update a link on the website and add
>> a redirect.
>>
>>
>>>
>>> As for python docs, what is required to build those? It's not showing up
>>> as a target for me.
>>>
>>
>> This is probably because you don't have `epydoc` installed (sudo pip
>> install epydoc).
>> I think you'll have to re-run cmake after for it to pick it up. The
>> corresponding target should then be `lldb-python-doc`.
>>
>> https://lldb.llvm.org/cpp_reference/index.html
>>
>>
>> Well installing epydoc did the trick, but I don’t think the doxygen
>> script is the right place for this target. I have not dug into it yet but
>> it appears to require some LLVM libraries and is building those. I’m
>> letting it finish to verify it builds but I’ll have to sort out the best
>> way of doing this on the server. We have other scripts that generate other
>> documentation that build parts of LLVM. Ideally, I would want to leverage
>> that and reduce build times.
>>
>
> Yeah, the annoying thing about the Python documentation is that it builds
> the C++ API, then runs swig to generate the Python wrapper, and finally
> generates the docs from that.
> I wonder if we can just use the static bindings that are checked-in
> instead. I will look into that later today/tomorrow.
>

Right, so the reason is that we don't have the static bindings on llvm.org
(we have them for swift-lldb on GitHub).
Maybe we should check them in upstream too? That's something the community
will have to weigh in on...


>
>
>>
>> -Tanya
>>
>>
>>
>>>
>>> Thanks,
>>> Tanya
>>>
>>
>> Thanks again for doing this.
>>
>> Cheers,
>> Jonas
>>
>>
>>>
>>>
>>>
>>>
 -Tanya

>>>
>>> Thank you!
>>>
>>>
 On Apr 23, 2019, at 11:45 AM, Tanya Lattner 
 wrote:

 Anytime new targets are added, the script has to be modified. Is there
 a way you can put them all under a top level html target? Or is there a
 reason not to?

 -Tanya

 On Apr 19, 2019, at 12:17 PM, Jonas Devlieghere 
 wrote:

 Hey Tanya,

 Thanks again for migrating the LLDB website so it is generated with
 Sphinx!

 I made a change yesterday that hasn't been propagated yet. It looks
 like it might have something to do with
 http://lists.llvm.org/pipermail/www-scripts/2019-April/007524.html.

 Also, as the result of this change the following two links are broken:

 https://lldb.llvm.org/cpp_reference/
 https://lldb.llvm.org/python_reference/

 Could we make the script generate those two folders as well? The
 corresponding CMake targets are lldb-cpp-doc and lldb-python-doc.

 Thank you,
 Jonas

 PS: I've included lldb-dev in CC so everyone knows we're working on the
 missing documentation.



 --
>>> Sent from my iPhone
>>>
>>>
>>>
>>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB Website

2019-04-23 Thread Jonas Devlieghere via lldb-dev
On Tue, Apr 23, 2019 at 5:43 PM Tanya Lattner  wrote:

>
>
> On Apr 23, 2019, at 5:06 PM, Jonas Devlieghere 
> wrote:
>
>
>
> On Tue, Apr 23, 2019 at 5:00 PM Tanya Lattner 
> wrote:
>
>>
>>
>> On Apr 23, 2019, at 11:54 AM, Jonas Devlieghere 
>> wrote:
>>
>> Hey Tanya,
>>
>> On Tue, Apr 23, 2019 at 11:51 Tanya Lattner 
>> wrote:
>>
>>> Jonas,
>>>
>>> Ignore what I said before as these do need to be separate targets. It
>>> appears the new targets are running doxygen. This isn’t something we
>>> typically do as a post commit hook since it takes awhile. I’ll need to do
>>> this via the doxygen nightly script. Any concerns?
>>>
>>
>> That sounds perfect. Can we still do the regular website post commit?
>>
>>
>> Yes, so it will do docs-lldb-html on every commit.
>>
>
> Perfect!
>
>
>>
>> So I am able to generate the cpp reference docs:
>> https://lldb.llvm.org/cpp_reference/index.html
>>
>> However, the main website links to
>> https://lldb.llvm.org/cpp_reference/html/index.html. Do you want the
>> html in that url? I can change the alias. We strip for other doxygen.
>>
>
> Let's keep it without the html. I'll update a link on the website and add
> a redirect.
>
>
>>
>> As for python docs, what is required to build those? It's not showing up
>> as a target for me.
>>
>
> This is probably because you don't have `epydoc` installed (sudo pip
> install epydoc).
> I think you'll have to re-run cmake after for it to pick it up. The
> corresponding target should then be `lldb-python-doc`.
>
> https://lldb.llvm.org/cpp_reference/index.html
>
>
> Well installing epydoc did the trick, but I don’t think the doxygen script
> is the right place for this target. I have not dug into it yet but it
> appears to require some LLVM libraries and is building those. I’m letting
> it finish to verify it builds but I’ll have to sort out the best way of
> doing this on the server. We have other scripts that generate other
> documentation that build parts of LLVM. Ideally, I would want to leverage
> that and reduce build times.
>

Yeah, the annoying thing about the Python documentation is that it builds
the C++ API, then runs swig to generate the Python wrapper, and finally
generates the docs from that.
I wonder if we can just use the static bindings that are checked-in
instead. I will look into that later today/tomorrow.


>
> -Tanya
>
>
>
>>
>> Thanks,
>> Tanya
>>
>
> Thanks again for doing this.
>
> Cheers,
> Jonas
>
>
>>
>>
>>
>>
>>> -Tanya
>>>
>>
>> Thank you!
>>
>>
>>> On Apr 23, 2019, at 11:45 AM, Tanya Lattner 
>>> wrote:
>>>
>>> Anytime new targets are added, the script has to be modified. Is there a
>>> way you can put them all under a top level html target? Or is there a
>>> reason not to?
>>>
>>> -Tanya
>>>
>>> On Apr 19, 2019, at 12:17 PM, Jonas Devlieghere 
>>> wrote:
>>>
>>> Hey Tanya,
>>>
>>> Thanks again for migrating the LLDB website so it is generated with
>>> Sphinx!
>>>
>>> I made a change yesterday that hasn't been propagated yet. It looks like
>>> it might have something to do with
>>> http://lists.llvm.org/pipermail/www-scripts/2019-April/007524.html.
>>>
>>> Also, as the result of this change the following two links are broken:
>>>
>>> https://lldb.llvm.org/cpp_reference/
>>> https://lldb.llvm.org/python_reference/
>>>
>>> Could we make the script generate those two folders as well? The
>>> corresponding CMake targets are lldb-cpp-doc and lldb-python-doc.
>>>
>>> Thank you,
>>> Jonas
>>>
>>> PS: I've included lldb-dev in CC so everyone knows we're working on the
>>> missing documentation.
>>>
>>>
>>>
>>> --
>> Sent from my iPhone
>>
>>
>>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB Website

2019-04-23 Thread Jonas Devlieghere via lldb-dev
On Tue, Apr 23, 2019 at 5:00 PM Tanya Lattner  wrote:

>
>
> On Apr 23, 2019, at 11:54 AM, Jonas Devlieghere 
> wrote:
>
> Hey Tanya,
>
> On Tue, Apr 23, 2019 at 11:51 Tanya Lattner  wrote:
>
>> Jonas,
>>
>> Ignore what I said before as these do need to be separate targets. It
>> appears the new targets are running doxygen. This isn’t something we
>> typically do as a post commit hook since it takes awhile. I’ll need to do
>> this via the doxygen nightly script. Any concerns?
>>
>
> That sounds perfect. Can we still do the regular website post commit?
>
>
> Yes, so it will do docs-lldb-html on every commit.
>

Perfect!


>
> So I am able to generate the cpp reference docs:
> https://lldb.llvm.org/cpp_reference/index.html
>
> However, the main website links to
> https://lldb.llvm.org/cpp_reference/html/index.html. Do you want the html
> in that url? I can change the alias. We strip for other doxygen.
>

Let's keep it without the html. I'll update a link on the website and add a
redirect.


>
> As for python docs, what is required to build those? It's not showing up
> as a target for me.
>

This is probably because you don't have `epydoc` installed (sudo pip
install epydoc).
I think you'll have to re-run cmake after for it to pick it up. The
corresponding target should then be `lldb-python-doc`.

https://lldb.llvm.org/cpp_reference/index.html


>
> Thanks,
> Tanya
>

Thanks again for doing this.

Cheers,
Jonas


>
>
>
>
>> -Tanya
>>
>
> Thank you!
>
>
>> On Apr 23, 2019, at 11:45 AM, Tanya Lattner 
>> wrote:
>>
>> Anytime new targets are added, the script has to be modified. Is there a
>> way you can put them all under a top level html target? Or is there a
>> reason not to?
>>
>> -Tanya
>>
>> On Apr 19, 2019, at 12:17 PM, Jonas Devlieghere 
>> wrote:
>>
>> Hey Tanya,
>>
>> Thanks again for migrating the LLDB website so it is generated with
>> Sphinx!
>>
>> I made a change yesterday that hasn't been propagated yet. It looks like
>> it might have something to do with
>> http://lists.llvm.org/pipermail/www-scripts/2019-April/007524.html.
>>
>> Also, as the result of this change the following two links are broken:
>>
>> https://lldb.llvm.org/cpp_reference/
>> https://lldb.llvm.org/python_reference/
>>
>> Could we make the script generate those two folders as well? The
>> corresponding CMake targets are lldb-cpp-doc and lldb-python-doc.
>>
>> Thank you,
>> Jonas
>>
>> PS: I've included lldb-dev in CC so everyone knows we're working on the
>> missing documentation.
>>
>>
>>
>> --
> Sent from my iPhone
>
>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB Website

2019-04-23 Thread Jonas Devlieghere via lldb-dev
Hey Tanya,

On Tue, Apr 23, 2019 at 11:51 Tanya Lattner  wrote:

> Jonas,
>
> Ignore what I said before as these do need to be separate targets. It
> appears the new targets are running doxygen. This isn’t something we
> typically do as a post commit hook since it takes awhile. I’ll need to do
> this via the doxygen nightly script. Any concerns?
>

That sounds perfect. Can we still do the regular website post commit?


> -Tanya
>

Thank you!


> On Apr 23, 2019, at 11:45 AM, Tanya Lattner  wrote:
>
> Anytime new targets are added, the script has to be modified. Is there a
> way you can put them all under a top level html target? Or is there a
> reason not to?
>
> -Tanya
>
> On Apr 19, 2019, at 12:17 PM, Jonas Devlieghere 
> wrote:
>
> Hey Tanya,
>
> Thanks again for migrating the LLDB website so it is generated with Sphinx!
>
> I made a change yesterday that hasn't been propagated yet. It looks like
> it might have something to do with
> http://lists.llvm.org/pipermail/www-scripts/2019-April/007524.html.
>
> Also, as the result of this change the following two links are broken:
>
> https://lldb.llvm.org/cpp_reference/
> https://lldb.llvm.org/python_reference/
>
> Could we make the script generate those two folders as well? The
> corresponding CMake targets are lldb-cpp-doc and lldb-python-doc.
>
> Thank you,
> Jonas
>
> PS: I've included lldb-dev in CC so everyone knows we're working on the
> missing documentation.
>
>
>
> --
Sent from my iPhone
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] LLDB Website

2019-04-23 Thread Jonas Devlieghere via lldb-dev
Friendly ping.

There's a bunch of people that are annoyed by the missing documentation.
I've already addressed most of the other comments about the broken URLs and
missing top level links, but unfortunately that doesn't take effect because
the website isn't updating.

On Fri, Apr 19, 2019 at 12:17 PM Jonas Devlieghere 
wrote:

> Hey Tanya,
>
> Thanks again for migrating the LLDB website so it is generated with Sphinx!
>
> I made a change yesterday that hasn't been propagated yet. It looks like
> it might have something to do with
> http://lists.llvm.org/pipermail/www-scripts/2019-April/007524.html.
>
> Also, as the result of this change the following two links are broken:
>
> https://lldb.llvm.org/cpp_reference/
> https://lldb.llvm.org/python_reference/
>
> Could we make the script generate those two folders as well? The
> corresponding CMake targets are lldb-cpp-doc and lldb-python-doc.
>
> Thank you,
> Jonas
>
> PS: I've included lldb-dev in CC so everyone knows we're working on the
> missing documentation.
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Where did the python/c++ API documentation go?

2019-04-22 Thread Jonas Devlieghere via lldb-dev
Yep, I sent http://lists.llvm.org/pipermail/lldb-dev/2019-April/014992.html
on Friday.

It's unfortunate that the website isn't updating either, because I added
some instructions on how to generate the docs locally, as an alternative to
what Jim suggested.

https://reviews.llvm.org/rGf7f03622eca68d11f3d2407ab497dbe83c13db63

Cheers,
Jonas

On Mon, Apr 22, 2019 at 11:41 AM Jim Ingham via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Well, you can probably look at the copy in the sources (www/index.html &
> so forth).  But we do need to get this fixed as external folks who don't
> have a checkout do rely on this.
>
> Jim
>
>
> > On Apr 22, 2019, at 11:37 AM, Ted Woodward  wrote:
> >
> > Great, thanks Jim. Glad to see people are already on this.
> >
> > But where do I go if I need to look at the python API right now?
> > (besides web.archive.org, which is what I ended up doing)
> >
> >> -Original Message-
> >> From: jing...@apple.com 
> >> Sent: Monday, April 22, 2019 1:35 PM
> >> To: Ted Woodward 
> >> Cc: LLDB 
> >> Subject: [EXT] Re: [lldb-dev] Where did the python/c++ API documentation
> >> go?
> >>
> >>
> >>
> >>> On Apr 22, 2019, at 10:59 AM, Ted Woodward via lldb-dev <lldb-dev@lists.llvm.org> wrote:
> >>>
> >>> The new LLDB website at http://lldb.llvm.org has an external resources
> >> page:
> >>> http://lldb.llvm.org/resources/external.html
> >>>
> >>> It has 2 entries on it for Documentation:
> >>> https://lldb.llvm.org/python_reference/index.html
> >>> https://lldb.llvm.org/cpp_reference/html/index.html
> >>>
> >>> Both of these lead to “404 Not Found”:
> >>> The requested URL /python_reference/index.html was not found on this
> >> server.
> >>> The requested URL /cpp_reference/html/index.html was not found on this
> >> server.
> >>>
> >>> Where do I go to find the python/c++ API documentation now?
> >>>
> >>> BTW, I don’t think LLDB’s API documentation is an “External Resource”.
> >> Those links should be on the main page, along with a link to the llvm
> main
> >> page.
> >>
> >> Both of these are known issues.  Jonas is going to move the API docs to
> the top
> >> level.  IIUC this also needs some website admin intervention, which
> Jonas is
> >> waiting on.
> >>
> >> Jim
> >>
> >>
> >>> ___
> >>> lldb-dev mailing list
> >>> lldb-dev@lists.llvm.org
> >>> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> >
>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] LLDB Website

2019-04-19 Thread Jonas Devlieghere via lldb-dev
Hey Tanya,

Thanks again for migrating the LLDB website so it is generated with Sphinx!

I made a change yesterday that hasn't been propagated yet. It looks like it
might have something to do with
http://lists.llvm.org/pipermail/www-scripts/2019-April/007524.html.

Also, as the result of this change the following two links are broken:

https://lldb.llvm.org/cpp_reference/
https://lldb.llvm.org/python_reference/

Could we make the script generate those two folders as well? The
corresponding CMake targets are lldb-cpp-doc and lldb-python-doc.

Thank you,
Jonas

PS: I've included lldb-dev in CC so everyone knows we're working on the
missing documentation.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] Adding a clang-style LLVM.h (or, "Are you tired of typing 'llvm::' everywhere ?")

2019-04-17 Thread Jonas Devlieghere via lldb-dev
Hey Pavel,

Sounds like a good idea. I don't have a strong opinion on this matter, but
I'm always in favor of improving readability.

Cheers,
Jonas
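
PS: For anyone who hasn't seen clang's version, a minimal sketch of what such
an adopting header could look like is below. The exact set of classes is
illustrative only, not a proposal for the final list.

// Illustrative sketch only: forward-declare the adopted llvm classes, then
// pull them into lldb_private with using-declarations (the same pattern as
// clang/Basic/LLVM.h).
namespace llvm {
class StringRef;
class Twine;
class Error;
template <typename T> class ArrayRef;
template <typename T> class Optional;
template <typename T> class Expected;
template <typename T> class SmallVectorImpl;
} // namespace llvm

namespace lldb_private {
using llvm::ArrayRef;
using llvm::Error;
using llvm::Expected;
using llvm::Optional;
using llvm::SmallVectorImpl;
using llvm::StringRef;
using llvm::Twine;
} // namespace lldb_private

Code in lldb_private could then spell these as StringRef, ArrayRef<T>, etc.,
while the full llvm headers are still needed wherever complete definitions
are required.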

On Wed, Apr 17, 2019 at 3:38 AM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hello all,
>
> some llvm classes are so well-known and widely used that qualifying
> them with "llvm::" serves no useful purpose and only adds visual noise.
> I'm thinking here mainly of ADT classes like String/ArrayRef,
> Optional/Error, etc. I propose we stop explicitly qualifying these classes.
>
> We can implement this proposal the same way as clang solved the same
> problem, which is by creating a special LLVM.h
> <https://github.com/llvm-mirror/clang/blob/master/include/clang/Basic/LLVM.h>
> header in the Utility library. This header would adopt these classes
> into the lldb_private namespace via a series of forward and "using"
> declarations.
>
> I think clang's LLVM.h contains a well-balanced collection of adopted
> classes, and it should cover the most widely-used classes in lldb too,
> so I propose we use that as a starting point.
>
> What do you think?
>
> regards,
> pavel
>
> PS: I'm not proposing any wholesale removal of "llvm::" qualifiers from
> these types, though I may do some smaller-scale removals if I'm about to
> substantially modify a file.
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Bugzilla default-assignee vs default-cc

2019-04-12 Thread Jonas Devlieghere via lldb-dev
On Fri, Apr 12, 2019 at 10:56 AM Raphael Isemann 
wrote:

> Can't we just have a list like lldb-bugs (similar to llvm-bugs) as the
> default-cc? This way I would have all the bugzilla talk also available
> in my mail, which means I can use my mail client to read comments and
> search them. And people that don't care about every bug update just
> don't subscribe to this list.
>

Would that be in addition to having lldb-dev as the default assignee (as is
the case today) or instead (using unassigned) so Bugzilla traffic only goes
to that list? If you mean the latter, then +1 from me too.


> - Raphael
>
> Am Fr., 12. Apr. 2019 um 10:33 Uhr schrieb Pavel Labath via lldb-dev
> :
> >
> > On 12/04/2019 10:25, Jonas Devlieghere via lldb-dev wrote:
> > > I was talking to one of the Bugzilla admins (Kristof) earlier today and
> > > he pointed out that the default assignee for lldb bugs is the lldb-dev
> > > list, and that it might be better to change that from default-assignee
> > > to default-cc. That way, when the bug gets assigned, the mailing list
> > > continues to get updates.
> > >
> > > I guess changing this depends on what our intentions are: do we just
> > > want to notify the mailing list of new bugs, or do we want to keep the
> > > list in the loop for every bug. The latter might mean more traffic,
> > > depending on how many people actually assign bugs to themselves.
> > >
> > > Please speak up if you think this is worth changing!
> > >
> > > Cheers,
> > > Jonas
> > >
> >
> > I actually think the current behaviour strikes a good balance for the
> > noise-to-signal ratio.
> >
> > Speaking for myself, I wouldn't want to get an email for every update to
> > all lldb bugs out there. However, I like to have an overview of new bugs
> > coming in, and when I find a particular bug interesting, I can always
> > add myself to the cc list so I am notified of updates to the bugs I care
> > about.
> >
> > pl
> > ___
> > lldb-dev mailing list
> > lldb-dev@lists.llvm.org
> > https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Bugzilla default-assignee vs default-cc

2019-04-12 Thread Jonas Devlieghere via lldb-dev
I was talking to one of the Bugzilla admins (Kristof) earlier today and he
pointed out that the default assignee for lldb bugs is the lldb-dev list,
and that it might be better to change that from default-assignee to
default-cc. That way, when the bug gets assigned, the mailing list
continues to get updates.

I guess changing this depends on what our intentions are: do we just want
to notify the mailing list of new bugs, or do we want to keep the list in
the loop for every bug. The latter might mean more traffic, depending on
how many people actually assign bugs to themselves.

Please speak up if you think this is worth changing!

Cheers,
Jonas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Can we remove this platform?

2019-03-27 Thread Jonas Devlieghere via lldb-dev
Thanks for the background, David!

I've removed the platform in r357086.

Cheers,
Jonas

On Wed, Mar 27, 2019 at 5:42 AM David Earlam 
wrote:

> Hi Jonas,
>
> I agree you can remove Kalimba as a platform.
> We'll manage bringing it back upstream should we re-engage with llvm/lldb
> for Kalimba.
>
>
>
> Some background:
>
> As CSR (Cambridge Silicon Radio plc) we experimented with using lldb for
> the Kalimba DSP.
> CSR plc was acquired by Qualcomm in August 2015 and became Qualcomm
> Technologies International, Ltd.
>
> o Kalimba Architecture 3 is a Harvard 24bit word-addressable deeply
> embedded DSP found in
> https://www.qualcomm.com/products/csr8675 used for Bluetooth aptX stereo
> headsets and speakers.
>
> o Kalimba Architecture 4 is a Harvard 32bit octet-addressable deeply
> embedded DSP and application processor
> first used for the multi-core CSRA6810x
> https://www.qualcomm.com/media/documents/files/csra68105-product-brief.pdf
> and now gaining wider adoption in
> https://www.qualcomm.com/products/qcc5100-series and
> https://www.qualcomm.com/products/qcc30xx-series based products which are
> typically
> used for Bluetooth aptX HD earbuds, headphones and speakers.
>
> o Kalimba Architecture 5 is a Harvard 24bit word-addressable deeply
> embedded audio DSP used in
> https://www.qualcomm.com/products/qualcomm-atlas-7 - an in-vehicle info
> and entertainment system-on-chip.
>
> The word-addressable feature of Architecture 3 and 5 Kalimba was nearly a
> total blocker for lldb adoption;
> an issue also faced by Embecosm for the 16bit AAP.
> http://lists.llvm.org/pipermail/llvm-dev/2017-February/109776.html
>
> Being deeply embedded, the cores provide some other unique system-level
> challenges for debug, development and test -
> including memory regions of different widths, power management, hardware
> breakpoint and memory patch units that made
> lldb not quite right for Kalimba. We also care deeply about optimised code
> debug fidelity (for example, our toolchain exploits
> DWARF's DW_LNS_negate_stmt).
>
> Such factors meant work was suspended on Kalimba as an lldb target around
> the time of the Qualcomm acquisition.
>
>
> (*)
> Providing infrastructure to run platform tests upstream is somewhat
> difficult. Development boards and custom debug probes can
> be expensive. Often we are creating the development tools for a new chip
> in advance of any silicon by using FPGAs or proprietary
> instruction set simulators that we cannot share. Nor can you usually
> easily repurpose end-consumer devices
> for tool testing because premium audio devices are also costly, run off a
> battery, and the code and const-data is fixed
> in non-volatile memory or its debug port is not physically accessible or
> locked down since it contains an OEM's
> intellectual property.
>
> best regards,
>
> David Earlam
> Staff-Senior Engineer / Manager.
> Software  Development Tools.
> Qualcomm Technologies International, Ltd.
>
> -Original Message-
> From: lldb-dev  On Behalf Of Pavel
> Labath via lldb-dev
> Sent: 27 March 2019 09:32
> To: Jonas Devlieghere ; LLDB <
> lldb-dev@lists.llvm.org>
> Subject: [EXT] Re: [lldb-dev] Can we remove this platform?
>
> On 26/03/2019 23:16, Jonas Devlieghere via lldb-dev wrote:
> > Yesterday I stumbled upon the initialization code for the "Kalimba"
> > platform. It looks like this was added in 2014 and never had any tests.
> > If nobody is relying on this platform, I propose to remove it.
> >
> > Review: https://reviews.llvm.org/D59850
> >
> > Thanks,
> > Jonas
> >
>
> Sounds good to me. I've had to touch this file a couple of times in the
> past due to interface changes, and I came close to proposing the same thing.
>
>
> [To be fair, none of the other platforms (except a single PlatformDarwin
> tests checking one very particular aspect of it) have specific tests
> either, though most of their code would be exercised if you run the test
> suite against a target supported by that particular platform. However, I
> doubt anyone if doing that for PlatformKalimba these days.]
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] Can we remove this platform?

2019-03-26 Thread Jonas Devlieghere via lldb-dev
Yesterday I stumbled upon the initialization code for the "Kalimba"
platform. It looks like this was added in 2014 and never had any tests. If
nobody is relying on this platform, I propose to remove it.

Review: https://reviews.llvm.org/D59850

Thanks,
Jonas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] new tool (core2yaml) + a new top-level library (Formats)

2019-03-05 Thread Jonas Devlieghere via lldb-dev
Hi Pavel,

On Tue, Mar 5, 2019 at 8:31 AM Pavel Labath via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Hello all,
>
> I have just posted a large-ish patch series for review (D58971, D58973,
> D58975, D58976), and I want to use this opportunity to draw more
> attention to it and highlight various bikeshedding
> opportunities^H^H^Htopics for discussion :).
>
> The new tool is called core2yaml, and its goal is to fill the gap in
> the testing story for core files. As you might know, at present, the
> only way to test core file parsing code (*) is to check in an opaque
> binary blob and have the debugger open that. This presents a couple of
> challenges:
> - it's really hard to review what is inside the core file
> - one has to jump through various hoops to create a "small" core file
> This tools fixes both issues by enabling one to check in text files,
> with human-readable content. The yaml files can also be easily edited to
> prune out the content which is not relevant for the test. While that's
> not my goal at present, I am hoping that this will one day enable us to
> write self-contained tests for the unwinder, as the core file can be
> used to synthesize (or capture) interesting unwinder scenarios.
>
> Since I also needed to find a home for the new code I was writing, I
> thought this would be good opportunity to create a new library for
> various stuff. The goals I was trying to solve are:
> - make the yaml code a library. The reason for that is that we have a
> number of unittests using checked in binaries, and I thought it would be
> nice to be able to convert those to use yaml representation as well.
> - make the existing minidump parsing code more easily accessible. The
> parsing code currently lives in source/Plugins/Process/minidump, and is
> impossible to use without pulling in the rest of lldb (which the tool
> doesn't need).
> The solution I came up with here is a new "Formats" library. I chose a
> fairly generic name, because I realized that we have code for
> (de)serializing a bunch of small formats, which don't really have a good
> place to live in. Currently I needed a parser for linux /proc/PID/maps
> files and minidump files, but I am hoping that a generic name would
> enable us to one day move the gdb-remote protocol code there (which is
> also currently buried in some plugin code, which makes it hard to depend
> on from lldb-server), as well as the future debug-info-server, if it
> ever comes into existence.
>
> Discussion topic #1: The library name and scope.
> There are lots of other ways this could be organized. One of the names I
> considered was "BinaryFormat" for symmetry with llvm, but then I chose
> to drop the "Binary" part as it seemed to me we have plenty of
> non-binary formats as well. As for it's dependencies I currently have it
> depending on Utility and nothing else (as far as lldb libraries go). I
> can imagine using some Host code might be useful there too, but I would
> like to avoid any other lldb dependencies right now. Another question is
> whether this should be a single library or a bunch of smaller ones. I
> chose a single library now because the things I initially plan to put
> there are fairly small (/proc/pid/maps parser is 200 LOC), but I can see
> how we may want to create sub-libraries for things that grow big (the
> debug-info server code might turn out to be one of those) or that have
> some additional dependencies.
>

I don't have strong opinions here, nor do I have a better suggestion for
the name.
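
To make the intended scope a bit more concrete, here is a rough sketch of the
kind of small, self-contained parser that could live in such a library; the
names and the simplified error handling are made up for illustration and are
not the code under review.

#include <cstdint>
#include <sstream>
#include <string>

// Parse one line of a Linux /proc/<pid>/maps file, e.g.
// "7f0e3c000000-7f0e3c021000 r-xp 00000000 08:01 131132 /usr/lib/libfoo.so".
struct MemoryRegionInfo {
  uint64_t start = 0;
  uint64_t end = 0;
  std::string permissions; // e.g. "r-xp"
  std::string name;        // mapped file or pseudo-name, may be empty
};

inline bool ParseMapsLine(const std::string &line, MemoryRegionInfo &region) {
  std::istringstream stream(line);
  char dash = 0;
  std::string offset, device, inode;
  if (!(stream >> std::hex >> region.start >> dash >> region.end >>
        region.permissions >> offset >> device >> inode))
    return false;
  std::getline(stream >> std::ws, region.name); // optional trailing path
  return dash == '-';
}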


> Discussion topic #2: tool name and scope
> A case could be made to integrate this functionality into the llvm
> yaml2obj utilities. Here I chose not to do that because the minidump
> format is not at all implemented in llvm, and I do not see a use case
> for it to be implemented/moved there. A stronger case could be made to
> put the elf core code there, since llvm already supports reading elf
> files. While originally being in favour of that, I eventually adopted
> the view that doing this in lldb would be better because:
> - it would bring more symmetry with minidumps
> - it would enable us to do fine-grained yamlization for things that we
> care about (e.g., registers), which is something that would probably be
> uninteresting to the rest of llvm.
>

I don't know much about the minidump format or code, but it sounds
reasonable for me to have support for it in yaml2obj, which would be a
sufficient motivation to have the code live there. As you mention in your
footnote, MachO core files are already supported, and it sounds like ELF
could reuse a bunch of existing code as well. So having everything in LLVM
would give you even more symmetry. I also doubt anyone would mind having
more fine grained yamlization, even if you cannot use it to reduce a test
it's nicer to see structure than a binary blob (imho). Anyway, that's just
my take, I guess this is more of a question for the LLVM mailing list.
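
For anyone unfamiliar with how yaml2obj-style tools are usually put together,
most of the work is defining llvm::yaml traits for the structures being
(de)serialized. A made-up sketch of the pattern (this is not the real
minidump schema, just an illustration):

#include "llvm/Support/YAMLTraits.h"
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical structure, purely for illustration.
struct ThreadInfo {
  uint64_t ThreadId = 0;
  uint64_t StackStart = 0;
  std::string Registers; // e.g. hex-encoded register context
};

LLVM_YAML_IS_SEQUENCE_VECTOR(ThreadInfo)

namespace llvm {
namespace yaml {
template <> struct MappingTraits<ThreadInfo> {
  static void mapping(IO &io, ThreadInfo &info) {
    io.mapRequired("thread-id", info.ThreadId);
    io.mapRequired("stack-start", info.StackStart);
    io.mapOptional("registers", info.Registers);
  }
};
} // namespace yaml
} // namespace llvm

yaml::Input and yaml::Output can then round-trip a std::vector<ThreadInfo>
between the in-memory representation and the checked-in text file, which is
what makes pruning a test input down to the interesting parts straightforward.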


> Discussion topic #3: Use of .def files in lldb. 

Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-16 Thread Jonas Devlieghere via lldb-dev
I've put up a (WIP) patch for the tool (https://reviews.llvm.org/D56822) in
case anybody is curious about that.

On Tue, Jan 15, 2019 at 1:41 PM Jonas Devlieghere 
wrote:

> I've updated the patch with a new version of the prototype:
> https://reviews.llvm.org/D56322
>
> It uses Pavel's suggestion to use the function address as a runtime ID.
> All the deserialization code is generated using templates, with automatic
> mapping on indices during serialization and deserialization.
>
> I (again) manually added the macros for the same set of functions I had in
> the original prototype. Unsurprisingly this is very error-prone. It's easy
> to forget to add the right macros for the registry, the function, and the
> return type. Some of these things can be detected at compile time, others
> only blow up at run-time. I strongly believe that a tool to add the macros
> is the way forward. It would be more of a developer tool rather than
> something that hooks up in the build process.
>
> Note that it's still a prototype, there are outstanding issues like void
> pointers, callbacks and other types of argument that require some kind of
> additional information to serialize. I also didn't get around yet to the
> lifetime issue that was discussed on IRC last week.
>
> Please let me know what you think.
>
> Thanks,
> Jonas
>
> On Wed, Jan 9, 2019 at 8:58 AM Jonas Devlieghere 
> wrote:
>
>>
>>
>> On Wed, Jan 9, 2019 at 8:42 AM Pavel Labath  wrote:
>>
>>> On 09/01/2019 17:15, Jonas Devlieghere wrote:
>>> >
>>> >
>>> > On Wed, Jan 9, 2019 at 5:05 AM Pavel Labath  wrote:
>>> >
>>> > On 08/01/2019 21:57, Jonas Devlieghere wrote:
>>> >  > Before I got around to coding this up I realized you can't take
>>> the
>>> >  > address of constructors in C++, so the function address won't
>>> > work as an
>>> >  > identifier.
>>> >  >
>>> >
>>> > You gave up way too easily. :P
>>> >
>>> >
>>> > I counted on you having something in mind, it sounded too obvious for
>>> > you to have missed.  ;-)
>>> >
>>> > I realized that constructors are going to be tricky, but I didn't
>>> want
>>> > to dive into those details until I knew if you liked the general
>>> idea.
>>> > The most important thing to realize here is that for the identifier
>>> > thingy to work, you don't actually need to use the address of that
>>> > method/constructor as the identifier. It is sufficient to have
>>> > something
>>> > that can be deterministically computed from the function. Then you
>>> can
>>> > use the address of *that* as the identifier.
>>> >
>>> >
>>> > I was thinking about that yesterday. I still feel like it would be
>>> > better to have this mapping all done at compile time. I was
>>> considering
>>> > some kind of constexpr hashing but that sounded overkill.
>>> >
>>>
>>> Well.. most of this is done through template meta-programming, which
>>> _is_ compile-time. And the fact that I have a use for the new
>>> construct/invoke functions I create this way means that even the space
>>> used by those isn't completely wasted (although I'm sure this could be
>>> made smaller with hard-coded IDs). The biggest impact of this I can
>>> think of is the increased number of dynamic relocations that need to be
>>> performed by the loader, as it introduces a bunch of function pointers
>>> floating around. But even that shouldn't too bad as we have plenty of
>>> other sources of dynamic relocs (currently about 4% of the size of
>>> liblldb and 10% of lldb-server).
>>>
>>
>> Yeah of course, it wasn't my intention to critique your approach. I was
>> talking specifically about the mapping (the std::map) in the prototype,
>> something I asked about earlier in the thread. FWIW I think this would be
>> an excellent trade-off if we don't need a tool to generate code for us. I'm
>> hopeful that we can have the bulk of the deserialization code generated
>> this way; most of the "special" cases are still pretty similar and dealing
>> with basic types. Anyway, that should become clear later today as I
>> integrate this into the lldb prototype.
>>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-15 Thread Jonas Devlieghere via lldb-dev
I've updated the patch with a new version of the prototype:
https://reviews.llvm.org/D56322

It uses Pavel's suggestion to use the function address as a runtime ID. All
the deserialization code is generated using templates, with automatic
mapping on indices during serialization and deserialization.
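
For readers who didn't follow the earlier discussion, the address-to-ID part
boils down to something like the sketch below. This is only an illustration
of the idea; the names are invented, and the actual code is in the patch
linked above.

#include <cstdint>
#include <map>

class FunctionRegistry {
public:
  // Called once per SB API function when its replayer is registered; hands
  // out small integer IDs, and those IDs are what actually gets serialized.
  uint32_t Register(void *function_address) {
    auto insertion = m_ids.insert(
        {function_address, static_cast<uint32_t>(m_ids.size()) + 1});
    return insertion.first->second;
  }

  // Called from the record macros: translate the current function's address
  // into its serialized ID.
  uint32_t GetID(void *function_address) const {
    auto it = m_ids.find(function_address);
    return it == m_ids.end() ? 0 : it->second;
  }

private:
  std::map<void *, uint32_t> m_ids;
};

The IDs stay stable across capture and replay as long as the registrations
happen in a fixed order, which is also why the registration step has to run
during recording and not just during replay.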

I (again) manually added the macros for the same set of functions I had in
the original prototype. Unsurprisingly this is very error-prone. It's easy
to forget to add the right macros for the registry, the function, and the
return type. Some of these things can be detected at compile time, others
only blow up at run-time. I strongly believe that a tool to add the macros
is the way forward. It would be more of a developer tool rather than
something that hooks up in the build process.

Note that it's still a prototype, there are outstanding issues like void
pointers, callbacks and other types of argument that require some kind of
additional information to serialize. I also didn't get around yet to the
lifetime issue that was discussed on IRC last week.

Please let me know what you think.

Thanks,
Jonas

On Wed, Jan 9, 2019 at 8:58 AM Jonas Devlieghere 
wrote:

>
>
> On Wed, Jan 9, 2019 at 8:42 AM Pavel Labath  wrote:
>
>> On 09/01/2019 17:15, Jonas Devlieghere wrote:
>> >
>> >
>> > On Wed, Jan 9, 2019 at 5:05 AM Pavel Labath  wrote:
>> >
>> > On 08/01/2019 21:57, Jonas Devlieghere wrote:
>> >  > Before I got around to coding this up I realized you can't take
>> the
>> >  > address of constructors in C++, so the function address won't
>> > work as an
>> >  > identifier.
>> >  >
>> >
>> > You gave up way too easily. :P
>> >
>> >
>> > I counted on you having something in mind, it sounded too obvious for
>> > you to have missed.  ;-)
>> >
>> > I realized that constructors are going to be tricky, but I didn't
>> want
>> > to dive into those details until I knew if you liked the general
>> idea.
>> > The most important thing to realize here is that for the identifier
>> > thingy to work, you don't actually need to use the address of that
>> > method/constructor as the identifier. It is sufficient to have
>> > something
>> > that can be deterministically computed from the function. Then you
>> can
>> > use the address of *that* as the identifier.
>> >
>> >
>> > I was thinking about that yesterday. I still feel like it would be
>> > better to have this mapping all done at compile time. I was considering
>> > some kind of constexpr hashing but that sounded overkill.
>> >
>>
>> Well.. most of this is done through template meta-programming, which
>> _is_ compile-time. And the fact that I have a use for the new
>> construct/invoke functions I create this way means that even the space
>> used by those isn't completely wasted (although I'm sure this could be
>> made smaller with hard-coded IDs). The biggest impact of this I can
>> think of is the increased number of dynamic relocations that need to be
>> performed by the loader, as it introduces a bunch of function pointers
>> floating around. But even that shouldn't too bad as we have plenty of
>> other sources of dynamic relocs (currently about 4% of the size of
>> liblldb and 10% of lldb-server).
>>
>
> Yeah of course, it wasn't my intention to critique your approach. I was
> talking specifically about the mapping (the std::map) in the prototype,
> something I asked about earlier in the thread. FWIW I think this would be
> an excellent trade-off if we don't need a tool to generate code for us. I'm
> hopeful that we can have the bulk of the deserialization code generated
> this way; most of the "special" cases are still pretty similar and dealing
> with basic types. Anyway, that should become clear later today as I
> integrate this into the lldb prototype.
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-09 Thread Jonas Devlieghere via lldb-dev
On Wed, Jan 9, 2019 at 8:42 AM Pavel Labath  wrote:

> On 09/01/2019 17:15, Jonas Devlieghere wrote:
> >
> >
> > On Wed, Jan 9, 2019 at 5:05 AM Pavel Labath  wrote:
> >
> > On 08/01/2019 21:57, Jonas Devlieghere wrote:
> >  > Before I got around to coding this up I realized you can't take
> the
> >  > address of constructors in C++, so the function address won't
> > work as an
> >  > identifier.
> >  >
> >
> > You gave up way too easily. :P
> >
> >
> > I counted on you having something in mind, it sounded too obvious for
> > you to have missed.  ;-)
> >
> > I realized that constructors are going to be tricky, but I didn't
> want
> > to dive into those details until I knew if you liked the general
> idea.
> > The most important thing to realize here is that for the identifier
> > thingy to work, you don't actually need to use the address of that
> > method/constructor as the identifier. It is sufficient to have
> > something
> > that can be deterministically computed from the function. Then you
> can
> > use the address of *that* as the identifier.
> >
> >
> > I was thinking about that yesterday. I still feel like it would be
> > better to have this mapping all done at compile time. I was considering
> > some kind of constexpr hashing but that sounded overkill.
> >
>
> Well.. most of this is done through template meta-programming, which
> _is_ compile-time. And the fact that I have a use for the new
> construct/invoke functions I create this way means that even the space
> used by those isn't completely wasted (although I'm sure this could be
> made smaller with hard-coded IDs). The biggest impact of this I can
> think of is the increased number of dynamic relocations that need to be
> performed by the loader, as it introduces a bunch of function pointers
> floating around. But even that shouldn't too bad as we have plenty of
> other sources of dynamic relocs (currently about 4% of the size of
> liblldb and 10% of lldb-server).
>

Yeah of course, it wasn't my intention to critique your approach. I was
talking specifically about the mapping (the std::map) in the prototype,
something I asked about earlier in the thread. FWIW I think this would be
an excellent trade-off if we don't need a tool to generate code for us. I'm
hopeful that we can have the bulk of the deserialization code generated
this way; most of the "special" cases are still pretty similar and dealing
with basic types. Anyway, that should become clear later today as I
integrate this into the lldb prototype.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-09 Thread Jonas Devlieghere via lldb-dev
On Wed, Jan 9, 2019 at 5:05 AM Pavel Labath  wrote:

> On 08/01/2019 21:57, Jonas Devlieghere wrote:
> > Before I got around to coding this up I realized you can't take the
> > address of constructors in C++, so the function address won't work as an
> > identifier.
> >
>
> You gave up way too easily. :P
>

I counted on you having something in mind, it sounded too obvious for you
to have missed.  ;-)

> I realized that constructors are going to be tricky, but I didn't want
> to dive into those details until I knew if you liked the general idea.
> The most important thing to realize here is that for the identifier
> thingy to work, you don't actually need to use the address of that
> method/constructor as the identifier. It is sufficient to have something
> that can be deterministically computed from the function. Then you can
> use the address of *that* as the identifier.
>

I was thinking about that yesterday. I still feel like it would be better
to have this mapping all done at compile time. I was considering some kind
of constexpr hashing but that sounded overkill.
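
Concretely, the kind of constexpr hashing alluded to here would be something
along these lines, e.g. hashing a signature string at compile time; purely
illustrative, and probably indeed overkill:

#include <cstdint>

// FNV-1a over a string literal, evaluated at compile time.
constexpr uint64_t HashSignature(const char *str,
                                 uint64_t hash = 0xcbf29ce484222325ULL) {
  return *str ? HashSignature(str + 1,
                              (hash ^ static_cast<uint64_t>(*str)) *
                                  0x100000001b3ULL)
              : hash;
}

static_assert(
    HashSignature("bool SBData::SetDataFromDoubleArray(double*, size_t)") !=
        HashSignature("lldb::SBTarget SBDebugger::GetSelectedTarget()"),
    "different signatures should map to different IDs");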


> I've created a very simple prototype ,
> where I do just that. The way I handle constructors there is that I
> create a special class template (construct), whose instantiations are
> going to be unique for each constructor (I achieve that by making the
> class name and the constructor argument types the template parameters of
> that function). Then I can take the address of the static member
> function inside this class (::doit), and
> use *that* as the ID.


Clever!

> As a nice side-effect, the "doit" method actually does invoke the
> constructor in question, so I can also use that in the replay code to
> treat constructors like any other method that returns an object.
>

This is really neat.
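
Condensed, for anyone skimming the thread, the trick is roughly the
following; this is simplified from the linked prototype, so treat it as an
illustration rather than the exact code:

// A class template whose instantiations are unique per constructor signature.
template <typename Class, typename... Args> struct construct {
  static Class doit(Args... args) { return Class(args...); }
};

// &construct<SBFoo, int, int>::doit is an ordinary function pointer, unique
// to SBFoo(int, int), so its address can serve as the runtime key that the
// registry maps to a serialized ID, and calling doit() during replay
// actually runs the constructor.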


> I also do the same thing for (non-static) member functions via the
> "invoke" template, because even though it is possible to take the
> address of those, it is very hard to do anything else with the retrieved
> pointer. So the effect of this that in the rest of the code, I only have
> to work with free functions, as both constructors and member functions
> are converted into equivalent free functions. I haven't tried to handle
> destructors yet, but I don't think those should pose any problems that
> we haven't encountered already.
>
> The example also shows how you can use templates to automatically
> generate replay code for "simple" (i.e. those where you can
> (de)serialize each argument independently) functions, and then uses that
> to record/replay a very simple API.
>
> You can see it in action like this:
> $ g++ a.cc  # compile
> $ ./a.out 2>/tmp/recording # generate the recording
> SBFoo 47 42
> Method 1 2
> Static 10 11
> $ cat /tmp/recording
> 0  # ID of the constructor
> 47 # constructor arg 1
> 42 # constructor arg 2
> 0x7ffd74d9a0f7 # constructor result
> 1  # id of SBFoo::Method
> 0x7ffd74d9a0f7 # this
> 1  # arg 1
> 2  # arg 2
> 2  # id of SBFoo::Static
> 10 # arg 1
> 11 # arg 2
> $ ./a.out 1 < /tmp/recording # replay the recording
> SBFoo 47 42
> SBFoo 42 47
> Method 1 2
> Static 10 11
>
> Note that when replaying the SBFoo constructor is called twice. This is
> because this code does not attempt to track the object instances in any
> way... it just creates a new one each time. This obviously needs to be
> fixed, but that's independent of the function ID issue.
>
> hope you find that useful,
> pl
>

Definitely, thank you for taking the time to code up a prototype.

Cheers,
Jonas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-08 Thread Jonas Devlieghere via lldb-dev
Before I got around to coding this up I realized you can't take the address
of constructors in C++, so the function address won't work as an
identifier.

On Tue, Jan 8, 2019 at 9:28 AM Jonas Devlieghere 
wrote:

> On Tue, Jan 8, 2019 at 8:27 AM Frédéric Riss  wrote:
>
>>
>>
>> > On Jan 8, 2019, at 1:25 AM, Pavel Labath  wrote:
>> >
>> > On 07/01/2019 22:45, Frédéric Riss wrote:
>> >>> On Jan 7, 2019, at 11:31 AM, Pavel Labath via lldb-dev <
>> lldb-dev@lists.llvm.org > wrote:
>> >>>
>> >>> On 07/01/2019 19:26, Jonas Devlieghere wrote:
>  On Mon, Jan 7, 2019 at 1:40 AM Pavel Labath  wrote:
>> I've been thinking about how could this be done better, and the
>> best
>> (though not ideal) way I came up with is using the functions
>> address as
>> the key. That's guaranteed to be unique everywhere. Of course, you
>> cannot serialize that to a file, but since you already have a
>> central
>> place where you list all intercepted functions (to register their
>> replayers), that place can be also used to assign unique integer
>> IDs to
>> these functions. So then the idea would be that the SB_RECORD
>> macro
>> takes the address of the current function, that gets converted to
>> an ID
>> in the lookup table, and the ID gets serialized.
>>  It sound like you would generate the indices at run-time. How would
>> that work with regards to the the reverse mapping?
>> >>> In the current implementation, SBReplayer::Init contains a list of
>> all intercepted methods, right? Each of the SB_REGISTER calls takes two
>> arguments: The method name, and the replay implementation.
>> >>>
>> >>> I would change that so that this macro takes three arguments:
>> >>> - the function address (the "runtime" ID)
>> >>> - an integer (the "serialized" ID)
>> >>> - the replay implementation
>> >>>
>> >>> This creates a link between the function address and the serialized
>> ID. So when, during capture, a method calls SB_RECORD_ENTRY and passes in
>> the function address, that address can be looked up and translated to an ID
>> for serialization.
>> >>>
>> >>> The only thing that would need to be changed is to have
>> SBReplayer::Init execute during record too (which probably means it
>> shouldn't be called SBReplayer, but whatever..), so that the ID mapping is
>> also available when capturing.
>> >>>
>> >>> Does that make sense?
>> >> I think I understand what you’re explaining, and the mapping side of
>> things makes sense. But I’m concerned about the size and complexity of the
>> SB_RECORD macro that will need to be written. IIUC, those would need to
>> take the address of the current function and the prototype, which is a lot
>> of cumbersome text to type. It seems like having a specialized tool to
>> generate those would be nice, but once you have a tool you also don’t need
>> all this complexity, do you?
>> >> Fred
>> >
>> > Yes, if the tool generates the IDs for you and checks that the macro
>> invocations are correct, then you don't need the function prototype.
>> However, that tool also doesn't come for free: Somebody has to write it,
>> and it adds complexity in the form of an extra step in the build process.
>>
>> Definitely agreed, the complexity has to be somewhere.
>>
>> > My point is that this extended macro could provide all the
>> error-checking benefits of this tool. It's a tradeoff, of course, and the
>> cost here is a more complex macro invocation. I think the choice here is
>> mostly down to personal preference of whoever implements this. However, if
>> I was implementing this, I'd go for an extended macro, because I don't find
>> the extra macro complexity to be too much. For example, this should be the
>> macro invocation for SBData::SetDataFromDoubleArray:
>> >
>> > SB_RECORD(bool, SBData, SetDataFromDoubleArray, (double *, size_t),
>> >array, array_len);
>>
>> Yeah, this doesn’t seem so bad. For some reason I imagined it much more
>> verbose. Note that a verification tool that checks that every SB method is
>> instrumented correctly would still be nice (but it can come as a follow-up).
>>
>
> It sounds like this should work but we should try it out to be sure. I'll
> rework the prototype to use the function address and update the thread with
> my findings. I also like the idea of using templates to generate the
> parsers so I'll try that as well.
>
>
>> > It's a bit long, but it's not that hard to type, and all of this
>> information should be present on the previous line, where
>> SBData::SetDataFromDoubleArray is defined (I deliberately made the macro
>> argument order match the function definition syntax).
>> >
>> > And this approach can be further tweaked. For instance, if we're
>> willing to take the hit of having "weird" function definitions, then we can
>> avoid the repetition altogether, and make the macro define the function too:
>> >
>> > 

Re: [lldb-dev] [RFC] Using Sphinx to generate documentation

2019-01-08 Thread Jonas Devlieghere via lldb-dev
For those interested, I've uploaded the latest version of the generated
HTML:

https://jonasdevlieghere.com/static/lldb/

I'd have to double check but I think that almost everything was ported
over. The biggest issue is that the GDB to LLDB command map is totally
unreadable with the RST generated table. I spent a little time tweaking the
CSS, but this needs some attention. Worst case we'll have to have an HTML
table here.

Theme-wise I went with the one used by clang. I think it's the most
readable and I personally really like the local ToC. The disadvantage is
that it doesn't have a sidebar, so you have to navigate back to "contents"
in the top right corner.

The alternative is the LLVM theme where we can have Sphinx generate the
global ToC in the sidebar. When I tried this it was missing the section
names (e.g. "Goals & Status" as seen on the main page).  Another issue is
that the local ToC gets totally lost beneath it because everything doesn't
fit on the screen. Once I figure out how/if we can include the section
names I'll generate the site with the LLVM theme so people can compare and
give their opinion.

Cheers,
Jonas

On Tue, Jan 8, 2019 at 9:31 AM Jonas Devlieghere 
wrote:

>
>
> On Tue, Jan 8, 2019 at 8:52 AM Stefan Gränitz via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hi Jonas, I think this is a great effort. Thanks!
>>
>> My current reviews do some small updates on the build page. Hope this
>> doesn't get in conflict with your work?
>>
>
> Thanks for the heads up Stefan. This should be fine, I'll copy over your
> change in the rst files.
>
>
>> Best
>> Stefan
>>
>> On 6. Dec 2018, at 18:02, Jonas Devlieghere via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>> Hi everyone,
>>
>> The current LLDB website is written in HTML which is hard to maintain. We
>> have quite a bit of HTML code checked in which can make it hard to
>> differentiate between documentation written by us and documentation
>> generated by a tool. Furthermore I think text/RST files provide a lower
>> barrier for new or casual contributors to fix or update.
>>
>> In line with the other LLVM projects I propose generating the
>> documentation with Sphix. I created a patch (
>> https://reviews.llvm.org/D55376) that adds a new target docs-lldb-html
>> when -DLLVM_ENABLE_SPHINX:BOOL is enabled. I've ported over some pages to
>> give an idea of what this would look like in-tree. Before continuing with
>> this rather tedious work I'd like to get feedback form the community.
>>
>> Initially I started with the theme used by Clang because it's a default
>> theme and doesn't require configuration. If we want to keep the sidebar we
>> could use the one used by LLD.
>>
>> Please let me know what you think.
>>
>> Thanks,
>> Jonas
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>>
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-08 Thread Jonas Devlieghere via lldb-dev
On Tue, Jan 8, 2019 at 8:27 AM Frédéric Riss  wrote:

>
>
> > On Jan 8, 2019, at 1:25 AM, Pavel Labath  wrote:
> >
> > On 07/01/2019 22:45, Frédéric Riss wrote:
> >>> On Jan 7, 2019, at 11:31 AM, Pavel Labath via lldb-dev <
> lldb-dev@lists.llvm.org > wrote:
> >>>
> >>> On 07/01/2019 19:26, Jonas Devlieghere wrote:
>  On Mon, Jan 7, 2019 at 1:40 AM Pavel Labath <pa...@labath.sk> wrote:
> I've been thinking about how could this be done better, and the
> best
> (though not ideal) way I came up with is using the functions
> address as
> the key. That's guaranteed to be unique everywhere. Of course, you
> cannot serialize that to a file, but since you already have a
> central
> place where you list all intercepted functions (to register their
> replayers), that place can be also used to assign unique integer
> IDs to
> these functions. So then the idea would be that the SB_RECORD macro
> takes the address of the current function, that gets converted to
> an ID
> in the lookup table, and the ID gets serialized.
>  It sound like you would generate the indices at run-time. How would
> that work with regards to the the reverse mapping?
> >>> In the current implementation, SBReplayer::Init contains a list of all
> intercepted methods, right? Each of the SB_REGISTER calls takes two
> arguments: The method name, and the replay implementation.
> >>>
> >>> I would change that so that this macro takes three arguments:
> >>> - the function address (the "runtime" ID)
> >>> - an integer (the "serialized" ID)
> >>> - the replay implementation
> >>>
> >>> This creates a link between the function address and the serialized
> ID. So when, during capture, a method calls SB_RECORD_ENTRY and passes in
> the function address, that address can be looked up and translated to an ID
> for serialization.
> >>>
> >>> The only thing that would need to be changed is to have
> SBReplayer::Init execute during record too (which probably means it
> shouldn't be called SBReplayer, but whatever..), so that the ID mapping is
> also available when capturing.
> >>>
> >>> Does that make sense?
> >> I think I understand what you’re explaining, and the mapping side of
> things makes sense. But I’m concerned about the size and complexity of the
> SB_RECORD macro that will need to be written. IIUC, those would need to
> take the address of the current function and the prototype, which is a lot
> of cumbersome text to type. It seems like having a specialized tool to
> generate those would be nice, but once you have a tool you also don’t need
> all this complexity, do you?
> >> Fred
> >
> > Yes, if the tool generates the IDs for you and checks that the macro
> invocations are correct, then you don't need the function prototype.
> However, that tool also doesn't come for free: Somebody has to write it,
> and it adds complexity in the form of an extra step in the build process.
>
> Definitely agreed, the complexity has to be somewhere.
>
> > My point is that this extended macro could provide all the
> error-checking benefits of this tool. It's a tradeoff, of course, and the
> cost here is a more complex macro invocation. I think the choice here is
> mostly down to personal preference of whoever implements this. However, if
> I was implementing this, I'd go for an extended macro, because I don't find
> the extra macro complexity to be too much. For example, this should be the
> macro invocation for SBData::SetDataFromDoubleArray:
> >
> > SB_RECORD(bool, SBData, SetDataFromDoubleArray, (double *, size_t),
> >array, array_len);
>
> Yeah, this doesn’t seem so bad. For some reason I imagined it much more
> verbose. Note that a verification tool that checks that every SB method is
> instrumented correctly would still be nice (but it can come as a follow-up).
>

It sounds like this should work but we should try it out to be sure. I'll
rework the prototype to use the function address and update the thread with
my findings. I also like the idea of using templates to generate the
parsers so I'll try that as well.
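
For illustration, a minimal sketch of the address-to-ID mapping could look
something like this (hypothetical names, not the actual prototype code):

  // A registry that assigns a stable integer ID to each intercepted
  // function's address at registration time; only the integer is ever
  // written to the reproducer file.
  #include <cstdint>
  #include <map>

  class FunctionRegistry {
  public:
    // Called once per intercepted function, during both capture and replay,
    // so the mapping is identical on both sides.
    uint32_t Register(void *function_address) {
      uint32_t id = static_cast<uint32_t>(m_ids.size()) + 1;
      return m_ids.try_emplace(function_address, id).first->second;
    }

    // Used by the capture macro: translate the current function's address
    // into the ID that gets serialized.
    uint32_t GetID(void *function_address) const {
      auto it = m_ids.find(function_address);
      return it == m_ids.end() ? 0 : it->second;
    }

  private:
    std::map<void *, uint32_t> m_ids;
  };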


> > It's a bit long, but it's not that hard to type, and all of this
> information should be present on the previous line, where
> SBData::SetDataFromDoubleArray is defined (I deliberately made the macro
> argument order match the function definition syntax).
> >
> > And this approach can be further tweaked. For instance, if we're willing
> to take the hit of having "weird" function definitions, then we can avoid
> the repetition altogether, and make the macro define the function too:
> >
> > SB_METHOD2(bool, SBData, SetDataFromDoubleArray, double *, array,
> >size_t, array_len, {
> >  // Method body
> > })
>
> I personally don’t like this.
>
> Fred
>
> > This would also enable you to automatically capture method return value
> for the "object" results.
> >
> > pl
>
>

Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-07 Thread Jonas Devlieghere via lldb-dev
On Mon, Jan 7, 2019 at 3:52 AM Tamas Berghammer 
wrote:

> Thanks Pavel for looping me in. I haven't looked into the actual
> implementation of the prototype yet but reading your description I have
> some concern regarding the amount of data you capture as I feel it isn't
> sufficient to reproduce a set of usecases.
>

Thanks Tamas!


> One problem is when the behavior of LLDB is not deterministic for whatever
> reason (e.g. multi threading, unordered maps, etc...). Lets take
> SBModule::FindSymbols() what returns an SBSymbolContextList without any
> specific order (haven't checked the implementation but I would consider a
> random order to be valid). If a user calls this function, then iterates
> through the elements to find an index `I`, calls `GetContextAtIndex(I)` and
> pass the result into a subsequent function then what will we do. Will we
> capture what did `GetContextAtIndex(I)` returned in the trace and use that
> value or will we capture the value of `I`, call `GetContextAtIndex(I)`
> during reproduction and use that value. Doing the first would be correct in
> this case but would mean we don't call `GetContextAtIndex(I)` while doing
> the second case would mean we call `GetContextAtIndex(I)` with a wrong
> index if the order in SBSymbolContextList is non deterministic. In this
> case as we know that GetContextAtIndex is just an accessor into a vector
> the first option is the correct one but I can imagine cases where this is
> not the case (e.g. if GetContextAtIndex would have some useful side effect).
>

Indeed, in this scenario we would replay the call with the same `I`
resulting in an incorrect value. I think the only solution is fixing the
non-determinism. This should be straightforward for lists (some kind of
sensible ordering), but maybe there are other issues I'm not aware of.
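
To illustrate the kind of ordering I mean (hypothetical types, not the real
SBSymbolContextList internals):

  #include <algorithm>
  #include <cstdint>
  #include <string>
  #include <tuple>
  #include <vector>

  struct SymbolResult {
    std::string name;
    uint64_t file_address;
  };

  // Sort on a stable key before the results cross the SB boundary, so equal
  // inputs always produce the same order regardless of thread timing.
  void MakeDeterministic(std::vector<SymbolResult> &results) {
    std::sort(results.begin(), results.end(),
              [](const SymbolResult &lhs, const SymbolResult &rhs) {
                return std::tie(lhs.file_address, lhs.name) <
                       std::tie(rhs.file_address, rhs.name);
              });
  }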


> Other interesting question is what to do with functions taking raw binary
> data in the form of a pointer + size (e.g. SBData::SetData). I think we
> will have to annotate these APIs to make the reproducer system aware of the
> amount of data they have to capture and then allocate these buffers with
> the correct lifetime during replay. I am not sure what would be the best
> way to attach these annotations but I think we might need a fairly generic
> framework because I won't be surprised if there are more situation when we
> have to add annotations to the API. I slightly related question is if a
> function returns a pointer to a raw buffer (e.g. const char* or void*) then
> do we have to capture the content of it or the pointer for it and in either
> case what is the lifetime of the buffer returned (e.g.
> SBError::GetCString() returns a buffer what goes out of scope when the
> SBError goes out of scope).
>

This a good concern and not something I had a good solution for at this
point. For const char* strings we work around this by serializing the actual
string. Obviously that won't always work. Also we have the void* batons for
callbacks, which is another tricky thing that wouldn't be supported. I'm
wondering if we can get away with ignoring these at first (maybe printing
something in the replay logic that warns the user that the reproducer
contains an unsupported function?).
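
As a rough sketch of the distinction between the two cases (hypothetical
helper, not the actual instrumentation):

  #include <iostream>
  #include <string>
  #include <vector>

  struct Recorder {
    // const char * arguments are copied into the stream, so replay doesn't
    // depend on the original pointer still being alive.
    void Record(const char *str) { captured_strings.emplace_back(str ? str : ""); }

    // void * batons are opaque: all we can do is remember that one was seen,
    // so the replayer can warn instead of silently misbehaving.
    void Record(void *baton) {
      (void)baton;
      saw_unsupported_baton = true;
    }

    std::vector<std::string> captured_strings;
    bool saw_unsupported_baton = false;
  };

  void WarnDuringReplay(const Recorder &r) {
    if (r.saw_unsupported_baton)
      std::cerr << "reproducer: session used an unsupported void * baton\n";
  }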


> Additionally I am pretty sure we have at least some functions returning
> various indices what require remapping other then the pointers either
> because they are just indexing into a data structure with undefined
> internal order or they referencing some other resource. Just by randomly
> browsing some of the SB APIs I found for example SBHostOS::ThreadCreate
> what returns the pid/tid for the newly created thread what will have to be
> remapped (it also takes a function as an argument what is a problem as
> well). Because of this I am not sure if we can get away with an
> automatically generated set of API descriptions instead of wring one with
> explicit annotations for the various remapping rules.
>

Fixing the non-determinism should also address this, right?


> If there is interest I can try to take a deeper look into the topic
> sometime later but I hope that those initial thoughts are useful.
>

Thank you. I'll start by incorporating the feedback and ping the thread
when the patch is ready for another look.


> Tamas
>
> On Mon, Jan 7, 2019 at 9:40 AM Pavel Labath  wrote:
>
>> On 04/01/2019 22:19, Jonas Devlieghere via lldb-dev wrote:
>> > Hi Everyone,
>> >
>> > In September I sent out an RFC [1] about adding reproducers to LLDB.
>> > Over the
>> > past few months, I landed the reproducer framework, support for the GDB
>> > remote
>> > protocol and a bunch of preparatory changes. There's still an open code
>> > review
>> > [2] for dealing with files, but that one is currently

Re: [lldb-dev] [Reproducers] SBReproducer RFC

2019-01-07 Thread Jonas Devlieghere via lldb-dev
On Mon, Jan 7, 2019 at 1:40 AM Pavel Labath  wrote:

> On 04/01/2019 22:19, Jonas Devlieghere via lldb-dev wrote:
> > Hi Everyone,
> >
> > In September I sent out an RFC [1] about adding reproducers to LLDB.
> > Over the
> > past few months, I landed the reproducer framework, support for the GDB
> > remote
> > protocol and a bunch of preparatory changes. There's still an open code
> > review
> > [2] for dealing with files, but that one is currently blocked by a
> change to
> > the VFS in LLVM [3].
> >
> > The next big piece of work is supporting user commands (e.g. in the
> > driver) and
> > SB API calls. Originally I expected these two things to be separate, but
> > Pavel
> > made a good case [4] that they're actually very similar.
> >
> > I created a prototype of how I envision this to work. As usual, we can
> > differentiate between capture and replay.
> >
> > ## SB API Capture
> >
> > When capturing a reproducer, every SB function/method is instrumented
> > using a
> > macro at function entry. The added code tracks the function identifier
> > (currently we use its name with __PRETTY_FUNCTION__) and its arguments.
> >
> > It also tracks when a function crosses the boundary between internal and
> > external use. For example, when someone (be it the driver, the python
> > binding
> > or the RPC server) call SBFoo, and in its implementation SBFoo calls
> > SBBar, we
> > don't need to record SBBar. When invoking SBFoo during replay, it will
> > itself
> > call SBBar.
> >
> > When a boundary is crossed, the function name and arguments are
> > serialized to a
> > file. This is trivial for basic types. For objects, we maintain a table
> that
> > maps pointer values to indices and serialize the index.
> >
> > To keep our table consistent, we also need to track return for functions
> > that
> > return an object by value. We have a separate macro that wraps the
> returned
> > object.
> >
> > The index is sufficient because every object that is passed to a
> > function has
> > crossed the boundary and hence was recorded. During replay (see below)
> > we map
> > the index to an address again which ensures consistency.
> >
> > ## SB API Replay
> >
> > To replay the SB function calls we need a way to invoke the corresponding
> > function from its serialized identifier. For every SB function, there's a
> > counterpart that deserializes its arguments and invokes the function.
> These
> > functions are added to the map and are called by the replay logic.
> >
> > Replaying is just a matter looping over the function identifiers in the
> > serialized file, dispatching the right deserialization function, until
> > no more
> > data is available.
> >
> > The deserialization function for constructors or functions that return
> > by value
> > contains additional logic for dealing with the aforementioned indices.
> The
> > resulting objects are added to a table (similar to the one described
> > earlier)
> > that maps indices to pointers. Whenever an object is passed as an
> > argument, the
> > index is used to get the actual object from the table.
> >
> > ## Tool
> >
> > Even when using macros, adding the necessary capturing and replay code is
> > tedious and scales poorly. For the prototype, we did this by hand, but we
> > propose a new clang-based tool to streamline the process.
> >
> > For the capture code, the tool would validate that the macro matches the
> > function signature, suggesting a fixit if the macros are incorrect or
> > missing.
> > Compared to generating the macros altogether, it has the advantage that
> we
> > don't have "configured" files that are harder to debug (without faking
> line
> > numbers etc).
> >
> > The deserialization code would be fully generated. As shown in the
> prototype
> > there are a few different cases, depending on whether we have to account
> for
> > objects or not.
> >
> > ## Prototype Code
> >
> > I created a differential [5] on Phabricator with the prototype. It
> > contains the
> > necessary methods to re-run the gdb remote (reproducer) lit test.
> >
> > ## Feedback
> >
> > Before moving forward I'd like to get the community's input. What do you
> > think
> > about this approach? Do you have concerns or can we be smarter
> > somewhere? Any
> > feedback would be greatly appreciated!
> >
> > Thanks,
> > Jona

Re: [lldb-dev] Signedness of scalars built from APInt(s)

2019-01-04 Thread Jonas Devlieghere via lldb-dev
On Fri, Jan 4, 2019 at 3:13 PM Zachary Turner  wrote:

> I don't think #2 is a correct change.  Just because the sign bit is set
> doesn't mean it's signed.  Is the 4-byte value 0x1000 signed or
> unsigned?  It's a trick question, because there's not enough information!
> If it was written "int x = 0x1000" then it's signed (and negative).  If
> it was written "unsigned x = 0x1000" then it's unsigned (and
> positive).  What about the 4-byte value 0x1?  Still a trick!  If it was
> written "int x = 1" then it's signed (and positive), and if it was written
> "unsigned x = 1" then it's unsigned (and positive).
>
> My point is that signedness of the *type* does not necessarly imply
> signedness of the value, and vice versa.
>
> APInt is purely a bit-representation and a size, there is no information
> whatsoever about whether the *type* is signed.  It doesn't make sense to
> say "is this APInt negative?" without additional information.
>
> With APSInt, on the other hand, it does make sense to ask that question.
> If you have an APSInt where isSigned() is true, *then* you can use the sign
> bit to determine whether it's negative.  And if you have an APSInt where
> isSigned() is false, then the "sign bit" is not actually a sign bit at all,
> it is just an extra power of 2 for the unsigned value.
>
> This is my understanding of the classes, someone correct me if I'm wrong.
>

> IIUC though, the way to fix this is by using APSInt throughout the class,
> and delete all references to APInt.
>

I think we share the same understanding. If we know at every call site
whether the type is signed or not then I totally agree, we should only use
APSInt. The reason I propose doing (2) first is for the first scenario you
described, where you don't know. Turning it into an explicit APSInt is as
bad as using an APInt and looking at the value. The latter has the
advantage that it conveys that you don't know, while the other may or may
not be a lie.


> On Fri, Jan 4, 2019 at 2:58 PM Jonas Devlieghere 
> wrote:
>
>> If I understand the situation correctly I think we should do both. I'd
>> start by doing (2) to improve the current behavior and add a constructor
>> for APSInt. We can then audit the call sites and migrate to APSInt where
>> it's obvious that the type is signed. That should match the semantics of
>> both classes?
>>
>> On Fri, Jan 4, 2019 at 2:00 PM Davide Italiano 
>> wrote:
>>
>>> On Fri, Jan 4, 2019 at 1:57 PM Davide Italiano 
>>> wrote:
>>> >
>>> > While adding support for 512-bit integers in `Scalar`, I figured I
>>> > could add some coverage.
>>> >
>>> > TEST(ScalarTest, Signedness) {
>>> >  auto s1 = Scalar(APInt(32, 12, false /* isSigned */));
>>> >  auto s2 = Scalar(APInt(32, 12, true /* isSigned */ ));
>>> >  ASSERT_EQ(s1.GetType(), Scalar::e_uint); // fails
>>> >  ASSERT_EQ(s2.GetType(), Scalar::e_sint); // pass
>>> > }
>>> >
>>> > The result of `s1.GetType()` is Scalar::e_sint.
>>> > This is because an APInt can't distinguish between "int patatino = 12"
>>> > and "uint patatino = 12".
>>> > The correct class in `llvm` to do that is `APSInt`.
>>> >
>>>
>>> Please note that this is also broken in the case where you have
>>> APInt(32 /* bitWidth */, -323);
>>> because of the way the constructor is implemented.
>>>
>>> --
>>> Davide
>>>
>>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Signedness of scalars built from APInt(s)

2019-01-04 Thread Jonas Devlieghere via lldb-dev
If I understand the situation correctly I think we should do both. I'd
start by doing (2) to improve the current behavior and add a constructor
for APSInt. We can then audit the call sites and migrate to APSInt where
it's obvious that the type is signed. That should match the semantics of
both classes?
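
A minimal sketch of the signedness deduction such a constructor could do (the
enum names are borrowed from the test above; this is not the real
lldb_private::Scalar code):

  #include "llvm/ADT/APSInt.h"

  enum ScalarTypeTag { e_sint, e_uint };

  // With an APSInt the caller's signedness travels with the value, so the
  // Scalar type no longer has to be guessed from the bit pattern.
  ScalarTypeTag TagFromAPSInt(const llvm::APSInt &value) {
    return value.isSigned() ? e_sint : e_uint;
  }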

On Fri, Jan 4, 2019 at 2:00 PM Davide Italiano 
wrote:

> On Fri, Jan 4, 2019 at 1:57 PM Davide Italiano 
> wrote:
> >
> > While adding support for 512-bit integers in `Scalar`, I figured I
> > could add some coverage.
> >
> > TEST(ScalarTest, Signedness) {
> >  auto s1 = Scalar(APInt(32, 12, false /* isSigned */));
> >  auto s2 = Scalar(APInt(32, 12, true /* isSigned */ ));
> >  ASSERT_EQ(s1.GetType(), Scalar::e_uint); // fails
> >  ASSERT_EQ(s2.GetType(), Scalar::e_sint); // pass
> > }
> >
> > The result of `s1.GetType()` is Scalar::e_sint.
> > This is because an APInt can't distinguish between "int patatino = 12"
> > and "uint patatino = 12".
> > The correct class in `llvm` to do that is `APSInt`.
> >
>
> Please note that this is also broken in the case where you have
> APInt(32 /* bitWidth */, -323);
> because of the way the constructor is implemented.
>
> --
> Davide
>
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [Reproducers] SBReproducer RFC

2019-01-04 Thread Jonas Devlieghere via lldb-dev
Hi Everyone,

In September I sent out an RFC [1] about adding reproducers to LLDB. Over
the
past few months, I landed the reproducer framework, support for the GDB
remote
protocol and a bunch of preparatory changes. There's still an open code
review
[2] for dealing with files, but that one is currently blocked by a change to
the VFS in LLVM [3].

The next big piece of work is supporting user commands (e.g. in the driver)
and
SB API calls. Originally I expected these two things to be separate, but
Pavel
made a good case [4] that they're actually very similar.

I created a prototype of how I envision this to work. As usual, we can
differentiate between capture and replay.

## SB API Capture

When capturing a reproducer, every SB function/method is instrumented using
a
macro at function entry. The added code tracks the function identifier
(currently we use its name with __PRETTY_FUNCTION__) and its arguments.

It also tracks when a function crosses the boundary between internal and
external use. For example, when someone (be it the driver, the python
binding
or the RPC server) calls SBFoo, and in its implementation SBFoo calls SBBar,
we
don't need to record SBBar. When invoking SBFoo during replay, it will
itself
call SBBar.

When a boundary is crossed, the function name and arguments are serialized
to a
file. This is trivial for basic types. For objects, we maintain a table that
maps pointer values to indices and serialize the index.

To keep our table consistent, we also need to track the return value for
functions that return an object by value. We have a separate macro that wraps
the returned
object.

The index is sufficient because every object that is passed to a function
has
crossed the boundary and hence was recorded. During replay (see below) we
map
the index to an address again which ensures consistency.
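
A minimal sketch of that pointer-to-index bookkeeping (hypothetical helper,
not the actual implementation):

  #include <cstdint>
  #include <map>

  class ObjectIndexTable {
  public:
    // Returns the existing index for an object, or assigns the next free one
    // the first time the object crosses the SB boundary.
    uint32_t GetIndexForObject(const void *object) {
      auto it = m_indices.find(object);
      if (it != m_indices.end())
        return it->second;
      uint32_t index = static_cast<uint32_t>(m_indices.size()) + 1;
      m_indices[object] = index;
      return index;
    }

  private:
    std::map<const void *, uint32_t> m_indices;
  };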

## SB API Replay

To replay the SB function calls we need a way to invoke the corresponding
function from its serialized identifier. For every SB function, there's a
counterpart that deserializes its arguments and invokes the function. These
functions are added to the map and are called by the replay logic.

Replaying is just a matter of looping over the function identifiers in the
serialized file, dispatching the right deserialization function, until no
more
data is available.

The deserialization function for constructors or functions that return by
value
contains additional logic for dealing with the aforementioned indices. The
resulting objects are added to a table (similar to the one described
earlier)
that maps indices to pointers. Whenever an object is passed as an argument,
the
index is used to get the actual object from the table.
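
The replay side could be sketched along these lines (again hypothetical names,
and simplified compared to the prototype):

  #include <cstdint>
  #include <functional>
  #include <map>

  struct Deserializer; // wraps the serialized file; details omitted

  class Replayer {
  public:
    // Every SB function's deserialization counterpart is registered under
    // the same identifier that was used during capture.
    void Register(uint32_t id, std::function<void(Deserializer &)> callback) {
      m_callbacks[id] = std::move(callback);
    }

    // The main loop: read identifiers until the stream runs out and dispatch
    // each one to its deserialization function.
    void Replay(Deserializer &data,
                std::function<bool(uint32_t &)> read_next_id) {
      uint32_t id;
      while (read_next_id(id))
        m_callbacks.at(id)(data);
    }

    // Objects created during replay are stored by the index that was
    // serialized during capture, so later calls can look them up again.
    void AddObject(uint32_t index, void *object) { m_objects[index] = object; }
    void *GetObject(uint32_t index) const { return m_objects.at(index); }

  private:
    std::map<uint32_t, std::function<void(Deserializer &)>> m_callbacks;
    std::map<uint32_t, void *> m_objects;
  };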

## Tool

Even when using macros, adding the necessary capturing and replay code is
tedious and scales poorly. For the prototype, we did this by hand, but we
propose a new clang-based tool to streamline the process.

For the capture code, the tool would validate that the macro matches the
function signature, suggesting a fixit if the macros are incorrect or
missing.
Compared to generating the macros altogether, it has the advantage that we
don't have "configured" files that are harder to debug (without faking line
numbers etc).

The deserialization code would be fully generated. As shown in the prototype
there are a few different cases, depending on whether we have to account for
objects or not.

## Prototype Code

I created a differential [5] on Phabricator with the prototype. It contains
the
necessary methods to re-run the gdb remote (reproducer) lit test.

## Feedback

Before moving forward I'd like to get the community's input. What do you
think
about this approach? Do you have concerns or can we be smarter somewhere?
Any
feedback would be greatly appreciated!

Thanks,
Jonas

[1] http://lists.llvm.org/pipermail/lldb-dev/2018-September/014184.html
[2] https://reviews.llvm.org/D54617
[3] https://reviews.llvm.org/D54277
[4] https://reviews.llvm.org/D55582
[5] https://reviews.llvm.org/D56322
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] Using Sphinx to generate documentation

2018-12-07 Thread Jonas Devlieghere via lldb-dev

> On Dec 7, 2018, at 4:37 AM, Bruce Mitchener via lldb-dev 
>  wrote:
> 
>> On Fri, Dec 7, 2018 at 6:11 PM Raphael Isemann via lldb-dev 
>>  wrote:
>> I think if we want to actually lower the entry barrier for
>> contributing/fixing things on the website, then the server should do
>> this. From what I know the other LLVM projects also generate the HTML
>> on the server (at least I've never seen anyone commit generated HTML
>> files), so this hopefully shouldn't be too complicated.

Yes, I definitely want to generate it. It should be straightforward to add a 
build bot that does this. I’ll see what is needed for this while I continue 
porting the other pages.

I’d like to have the doxygen and python documentation generated as well. The 
one that’s currently hosted on llvm.org is several years old, last time I 
checked. 

> Agree. Also, there's enough differences between the generated HTML for 
> various versions of the tools that having it happen on the server would be 
> good. 

I’d have to double check how llvm 

>  
>> I think in general this approach is really nice. Thanks a lot for the
>> work @Jonas!
> 
> Indeed!

My pleasure! I’m happy to see the positive reception here.

> 
>  - Bruce
> 
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] [RFC] Using Sphinx to generate documentation

2018-12-06 Thread Jonas Devlieghere via lldb-dev
Hi everyone,

The current LLDB website is written in HTML which is hard to maintain. We have 
quite a bit of HTML code checked in which can make it hard to differentiate 
between documentation written by us and documentation generated by a tool. 
Furthermore I think text/RST files provide a lower barrier for new or casual 
contributors to fix or update.

In line with the other LLVM projects I propose generating the documentation 
with Sphinx. I created a patch (https://reviews.llvm.org/D55376) that adds a new 
target docs-lldb-html when 
-DLLVM_ENABLE_SPHINX:BOOL is enabled. I've ported over some pages to give an 
idea of what this would look like in-tree. Before continuing with this rather 
tedious work I'd like to get feedback from the community.

Initially I started with the theme used by Clang because it's a default theme 
and doesn't require configuration. If we want to keep the sidebar we could use 
the one used by LLD.

Please let me know what you think.

Thanks,
Jonas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] The lit test driver gets killed because of SIGHUP

2018-12-05 Thread Jonas Devlieghere via lldb-dev
On Wed, Dec 5, 2018 at 9:45 AM Pavel Labath  wrote:

> On 05/12/2018 18:36, Jonas Devlieghere wrote:
> > I believe that posix doesn't make this guarantee, but that in reality
> > neither linux nor darwin recycles pids before they wrap around?
>
> Yes, linux tries pretty hard to not recycle pids, but this is hampered
> by the fact that the default pid limit  is 32k and that process and
> thread ids share the same namespace.
>
> So given that we have over 1k tests and each test can easily spawn over
> 32 tids/pids on a 32-core machine (parallel parsing of debug info), we
> can easily run through the whole pid pool in a single test run.
>

Yeah, that makes sense. What I meant is that I don't think it's what's
happening here as we're only running a few tests and I can see the pids are
mostly consecutive.

As I mentioned in the previous e-mail I wrote a Python module in C to check
who's sending the signal. It turns out it's the inferior, the one that's
receiving signal 17 (SIGSTOP) from the debugserver. This actually makes
sense as SIGSTOP is involved in process control. I'd have to double check,
but it would definitely make sense if this signal is sent to the whole
foreground process group, similar to ^C and it would explain why changing
that prevents this from happening.

I'm going to see what happens if I change the SIGSTOP into a SIGINT.
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] The lit test driver gets killed because of SIGHUP

2018-12-05 Thread Jonas Devlieghere via lldb-dev
On Wed, Dec 5, 2018 at 5:01 AM Raphael Isemann via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> @Jonas: Did you confirm it is SIGHUP? I remember that we were not sure
> whether the signal kind was SIGHUP or SIGINT.
>

I'm relatively sure. I added a signal handler to lit and it fires on the
signal.


>
> - Raphael
> On Wed, Dec 5, 2018 at 10:25 AM Pavel Labath via lldb-dev
> wrote:
> >
> > On 05/12/2018 03:49, Jonas Devlieghere via lldb-dev wrote:
> > > Hi everyone,
> > >
> > > Since we switched to lit as the test driver we've been seeing it
> getting killed as the result of a SIGHUP signal. The problem doesn't
> reproduce on every machine and there seems to be a correlation between
> number of occurrences and thread count.
> > >
> > > Davide and Raphael spent some time narrowing down what particular test
> is causing this and it seems that TestChangeProcessGroup.py is always
> involved. However it never reproduces when running just this test. I was
> able to reproduce pretty consistently with the following filter:
> > >
> > > ./bin/llvm-lit ../llvm/tools/lldb/lit/Suite/ --filter="process"
> > >
> > > Bisecting the test itself didn't help much, the problem reproduces as
> soon as we attach to the inferior.
> > >
> > > At this point it is still not clear who is sending the SIGHUP and why
> it's reaching the lit test driver. Fred suggested that it might have
> something to do with process groups (which would be an interesting
> coincidence given the previously mentioned test) and he suggested having
> the test run in different process groups. Indeed, adding a call to
> os.setpgrp() in lit's executeCommand and having a different process group
> per test prevent us from seeing this. Regardless of this issue I think it's
> reasonable to have tests run in their process group, so if nobody objects I
> propose adding this to lit in llvm.
> > >
> > > Still, I'd like to understand where the signal is coming from and fix
> the root cause in addition to the symptom. Maybe someone here has an idea
> of what might be going on?
> > >
> > > Thanks,
> > > Jonas
> > >
> > > PS
> > >
> > > 1. There's two places where we send a SIGHUP ourself, with that code
> removed we still receive the signal, which suggests that it might be coming
> from Python or the OS.
> > > 2. If you're able to reproduce you'll see that adding an early return
> before the attach in TestChangeProcessGroup.py hides/prevents the problem.
> Moving the return down one line and it pops up again.
> > > ___
> > > lldb-dev mailing list
> > > lldb-dev@lists.llvm.org
> > > http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
> > >
> >
> > Hi Jonas,
> >
> > Sounds like you have found an interesting issue to debug. I've tried
> > running the command you mention locally, and I didn't see any failures
> > in 100 runs.


Thank you. This confirms my suspicion that it's likely a Darwin-only thing.


> > There doesn't seem to be anything in the TestChangeProcessGroup which
> > sends a signal, though I can imagine that the act of changing a process
> > group mid-debug could be enough to confuse someone to send it. However,
> > I am having trouble reconciling this with your PS #2, because if
> > attaching is sufficient to trigger this (i.e., no group changing takes
> > place), then this test is not much different than any other test where
> > we spawn an inferior and then attach to it.
>

I agree, I think it might be just coincidence. Also running only this test
never fails so there is some timing involved. It looks like we needed at
least one other process-manipulating test to make it reproduce, but this is
just an observation on my part and hard to confirm.


> > I am aware of one other instance where we send a spurious signal, though
> > it's SIGINT in this case
> > <
> https://github.com/llvm-mirror/lldb/blob/master/source/Plugins/Process/gdb-remote/ProcessGDBRemote.cpp#L3645
> >.
> > The issue there is that we don't check whether the debug server has
> > exited before we send SIGINT to it (which it normally does on its own at
> > the end of debug session). So if the debug server does exit and its pid
> > gets recycled before we get a chance to send this signal, we can end up
> > killing a random process.
>

I believe that posix doesn't make this guarantee, but that in reality
neither linux nor darwin recycles pids before they wrap around? I don't see
this signal in my DTrace output though. What I do see is that debugserver
sends a SIGS

Re: [lldb-dev] The lit test driver gets killed because of SIGHUP

2018-12-04 Thread Jonas Devlieghere via lldb-dev
On Tue, Dec 4, 2018 at 22:03 Zachary Turner  wrote:

> Do you know if it’s Darwin specific? If so, maybe someone internally can
> offer guidance on how to diagnose (like on the kernel team)?
>

Finding that out is part of the reason I sent this mail. We’ve only seen it
on Mac Pros and iMac Pros. I haven’t tried on Linux yet. If someone with a
specced machine would have a go at the command I sent earlier that
would be really appreciated. I can try myself in a VM tomorrow, but if it
doesn’t reproduce it’ll be hard to tell whether it was because of the VM or
not.

When you aren’t using the lit driver, does the signal still get delivered
> (and we just handle it better), or does it not get delivered at all?


I wasn’t able to reproduce with dotest.py or the lldb-dotest wrapper. I
installed a signal handler for SIGHUP and it never triggered. Since the
signal doesn’t show up in my DTrace output I can’t be sure if it actually
reproduces, but the handler doesn’t execute. It also doesn’t fire in the
lit case, which adds to my confusion.


> On Tue, Dec 4, 2018 at 9:12 PM Jonas Devlieghere 
> wrote:
>
>>
>>
>> On Tue, Dec 4, 2018 at 19:11 Zachary Turner via lldb-dev <
>> lldb-dev@lists.llvm.org> wrote:
>>
>>> Have you tried an strace to see if it tells you who is sending the
>>> signal?
>>
>>
>> I used DTrace with the default kill.d script. It shows who sends what
>> signal and there was nothing interesting other than debugserver sending
>> signal 17 (SIGSTOP) to the inferior. This makes me think that the signal
>> might be coming from the kernel?
>>
>>
>>> On Tue, Dec 4, 2018 at 6:49 PM Jonas Devlieghere via lldb-dev <
>>> lldb-dev@lists.llvm.org> wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> Since we switched to lit as the test driver we've been seeing it
>>>> getting killed as the result of a SIGHUP signal. The problem doesn't
>>>> reproduce on every machine and there seems to be a correlation between
>>>> number of occurrences and thread count.
>>>>
>>>> Davide and Raphael spent some time narrowing down what particular test
>>>> is causing this and it seems that TestChangeProcessGroup.py is always
>>>> involved. However it never reproduces when running just this test. I was
>>>> able to reproduce pretty consistently with the following filter:
>>>>
>>>> ./bin/llvm-lit ../llvm/tools/lldb/lit/Suite/ --filter="process"
>>>>
>>>> Bisecting the test itself didn't help much, the problem reproduces as
>>>> soon as we attach to the inferior.
>>>>
>>>> At this point it is still not clear who is sending the SIGHUP and why
>>>> it's reaching the lit test driver. Fred suggested that it might have
>>>> something to do with process groups (which would be an interesting
>>>> coincidence given the previously mentioned test) and he suggested having
>>>> the test run in different process groups. Indeed, adding a call to
>>>> os.setpgrp() in lit's executeCommand and having a different process group
>>>> per test prevent us from seeing this. Regardless of this issue I think it's
>>>> reasonable to have tests run in their process group, so if nobody objects I
>>>> propose adding this to lit in llvm.
>>>>
>>>> Still, I'd like to understand where the signal is coming from and fix
>>>> the root cause in addition to the symptom. Maybe someone here has an idea
>>>> of what might be going on?
>>>>
>>>> Thanks,
>>>> Jonas
>>>>
>>>> PS
>>>>
>>>> 1. There's two places where we send a SIGHUP ourself, with that code
>>>> removed we still receive the signal, which suggests that it might be coming
>>>> from Python or the OS.
>>>> 2. If you're able to reproduce you'll see that adding an early return
>>>> before the attach in TestChangeProcessGroup.py hides/prevents the problem.
>>>> Moving the return down one line and it pops up again.
>>>> ___
>>>> lldb-dev mailing list
>>>> lldb-dev@lists.llvm.org
>>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>>
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>>
>> --
>> Sent from my iPhone
>>
> --
Sent from my iPhone
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] The lit test driver gets killed because of SIGHUP

2018-12-04 Thread Jonas Devlieghere via lldb-dev
On Tue, Dec 4, 2018 at 19:11 Zachary Turner via lldb-dev <
lldb-dev@lists.llvm.org> wrote:

> Have you tried an strace to see if it tells you who is sending the signal?


I used DTrace with the default kill.d script. It shows who sends what
signal and there was nothing interesting other than debugserver sending
signal 17 (SIGSTOP) to the inferior. This makes me think that the signal
might be coming from the kernel?


> On Tue, Dec 4, 2018 at 6:49 PM Jonas Devlieghere via lldb-dev <
> lldb-dev@lists.llvm.org> wrote:
>
>> Hi everyone,
>>
>> Since we switched to lit as the test driver we've been seeing it getting
>> killed as the result of a SIGHUP signal. The problem doesn't reproduce on
>> every machine and there seems to be a correlation between number of
>> occurrences and thread count.
>>
>> Davide and Raphael spent some time narrowing down what particular test is
>> causing this and it seems that TestChangeProcessGroup.py is always
>> involved. However it never reproduces when running just this test. I was
>> able to reproduce pretty consistently with the following filter:
>>
>> ./bin/llvm-lit ../llvm/tools/lldb/lit/Suite/ --filter="process"
>>
>> Bisecting the test itself didn't help much, the problem reproduces as
>> soon as we attach to the inferior.
>>
>> At this point it is still not clear who is sending the SIGHUP and why
>> it's reaching the lit test driver. Fred suggested that it might have
>> something to do with process groups (which would be an interesting
>> coincidence given the previously mentioned test) and he suggested having
>> the test run in different process groups. Indeed, adding a call to
>> os.setpgrp() in lit's executeCommand and having a different process group
>> per test prevent us from seeing this. Regardless of this issue I think it's
>> reasonable to have tests run in their process group, so if nobody objects I
>> propose adding this to lit in llvm.
>>
>> Still, I'd like to understand where the signal is coming from and fix the
>> root cause in addition to the symptom. Maybe someone here has an idea of
>> what might be going on?
>>
>> Thanks,
>> Jonas
>>
>> PS
>>
>> 1. There's two places where we send a SIGHUP ourself, with that code
>> removed we still receive the signal, which suggests that it might be coming
>> from Python or the OS.
>> 2. If you're able to reproduce you'll see that adding an early return
>> before the attach in TestChangeProcessGroup.py hides/prevents the problem.
>> Moving the return down one line and it pops up again.
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>
-- 
Sent from my iPhone
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] The lit test driver gets killed because of SIGHUP

2018-12-04 Thread Jonas Devlieghere via lldb-dev
Hi everyone,

Since we switched to lit as the test driver we've been seeing it getting killed 
as the result of a SIGHUP signal. The problem doesn't reproduce on every 
machine and there seems to be a correlation between number of occurrences and 
thread count. 

Davide and Raphael spent some time narrowing down what particular test is 
causing this and it seems that TestChangeProcessGroup.py is always involved. 
However it never reproduces when running just this test. I was able to 
reproduce pretty consistently with the following filter:

./bin/llvm-lit ../llvm/tools/lldb/lit/Suite/ --filter="process"

Bisecting the test itself didn't help much, the problem reproduces as soon as 
we attach to the inferior. 

At this point it is still not clear who is sending the SIGHUP and why it's 
reaching the lit test driver. Fred suggested that it might have something to do 
with process groups (which would be an interesting coincidence given the 
previously mentioned test) and he suggested having the test run in different 
process groups. Indeed, adding a call to os.setpgrp() in lit's executeCommand 
and having a different process group per test prevent us from seeing this. 
Regardless of this issue I think it's reasonable to have tests run in their 
process group, so if nobody objects I propose adding this to lit in llvm. 
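
For anyone curious why the process group makes a difference, here is a tiny
POSIX illustration (this is not lit's change; lit would just call the
equivalent of os.setpgrp() before running each test):

  #include <signal.h>
  #include <stdio.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main() {
    pid_t child = fork();
    if (child == 0) {
      setpgid(0, 0); // the child becomes leader of its own process group
      pause();       // wait here until a signal arrives
      _exit(0);
    }
    setpgid(child, child); // parent does the same to avoid a startup race
    kill(-child, SIGTERM); // a negative pid signals the whole process group
    waitpid(child, nullptr, 0);
    printf("parent was not part of the signalled group\n");
    return 0;
  }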

Still, I'd like to understand where the signal is coming from and fix the root 
cause in addition to the symptom. Maybe someone here has an idea of what might 
be going on? 

Thanks,
Jonas

PS
 
1. There's two places where we send a SIGHUP ourself, with that code removed we 
still receive the signal, which suggests that it might be coming from Python or 
the OS.  
2. If you're able to reproduce you'll see that adding an early return before 
the attach in TestChangeProcessGroup.py hides/prevents the problem. Moving the 
return down one line and it pops up again. 
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [cfe-dev] "devirtualizing" files in the VFS

2018-11-27 Thread Jonas Devlieghere via lldb-dev
Hi Sam,

Does extending the Status with a path sound reasonable? This would work similarly 
to the current Name field, which is controlled by UseExternalName. 
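
Roughly, the idea would be something like this (hypothetical field names, not
the actual llvm::vfs::Status):

  #include <optional>
  #include <string>

  struct StatusSketch {
    // The name as seen through the VFS, possibly remapped, as with
    // UseExternalName today.
    std::string Name;
    // The underlying "external" path, only present when the entry is backed
    // by a real file on disk.
    std::optional<std::string> ExternalPath;
  };
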

Please let me know what you think.

Thanks,
Jonas 

> On Nov 15, 2018, at 10:10 AM, Jonas Devlieghere via cfe-dev 
>  wrote:
> 
> 
>> On Nov 15, 2018, at 3:34 AM, Whisperity > > wrote:
>> 
>> I am really not sure if adding real file system functionality strictly into 
>> the VFS is a good approach. This "ExternalFileSystem" thing sounds weird to 
>> me.
> 
> The `ExternalFileSystem` was an attempt to provide a more limited interface 
> while exposing the "external" path in a way that made sense for the 
> RedirectingFileSystem. Like Sam said in the review it's not great because it 
> only does half of the work. 
> 
>> Does LLDB need to *write* the files through the VFS? I'm not sure perhaps a 
>> "WritableVFS" could be implemented, or the necessary casting/conversion 
>> options.
> 
> Most likely yes because of LLDB's design that abstracts over files without 
> prior knowledge about whether they'd only get read or written. However 
> wouldn't it suffer from the exact same problems? 
> 
>> In case:
>>  - there is a real path behind the file --- you could spawn an 
>> llvm::RealFileSystem (the fqdn might not be this after the migration patch!) 
>> and use that to obtain the file's buffer.
> 
> I'm not sure I follow what you have in mind here. Can you give a little more 
> detail?
> 
>> How can you be sure the file actually exists on the FS? That's what the VFS 
>> should be all about, hiding this abstraction... if you *are* sure it exists, 
>> or want to make sure, you need to pull the appropriate realFS from the VFS 
>> Overlay (most tools have an overlay of a memoryFS above the realFS).
> 
> That makes sense, for LLDB's use case we would be happy having just a real or 
> redirecting filesystem (with fall through). 
> 
>> What I am not sure about is extending the general interface in a way that it 
>> caters to a particular (or half of a particular) use case.
> 
> I totally understand this sentiment but I don't think that's totally fair. 
> Finding files in different locations is an important feature of the VFS; when 
> it was introduced in 2014 this was the only use case. The "devirtualization" 
> aspect is unfortunate because of the need for native IO. 
> 
>> For example, in CodeCompass, we used a custom VFS implementation that 
>> "hijacked" the overlay and included itself between the realFS and the 
>> memoryFS. It obtains files from the database!
>> 
>> See:
>> https://github.com/Ericsson/CodeCompass/blob/a1a7b10e3a9e2e4f493135ea68566cee54adc081/plugins/cpp_reparse/service/src/databasefilesystem.cpp#L191-L224
>>  
>> 
>> 
>> These files *do not necessarily* (in 99% of the cases, not at all) exist on 
>> the hard drive at the moment of the code wanting to pull the file, hence why 
>> we implemented this to give the file source buffer from DB. The ClangTool 
>> that needs this still gets the memoryFS for its own purposes, and for the 
>> clang libraries, the realFS is still under there.
>> 
>> Perhaps the "Status" type could be extended to carry extra information? 
>> https://github.com/Ericsson/CodeCompass/blob/a1a7b10e3a9e2e4f493135ea68566cee54adc081/plugins/cpp_reparse/service/src/databasefilesystem.cpp#L85-L87
>>  
>> 
> 
> This sounds like an interesting idea. We already have the option to expose 
> the external name here, would it be reasonable to also expose the external 
> path here? (of course being an optional)
> 
>> 
>> Sam McCall via cfe-dev wrote (on Thu, Nov 15, 2018, at 12:02):
>> I'd like to get some more perspectives on the role of the VirtualFileSystem 
>> abstraction in llvm/Support.
>> (The VFS layer has recently moved from Clang to LLVM, so crossposting to 
>> both lists)
>> 
>> https://reviews.llvm.org/D54277  proposed 
>> adding a function to VirtualFileSystem to get the underlying "real file" 
>> path from a VFS path. LLDB is starting to use VFS for some filesystem 
>> interactions, but wants/needs to keep using native IO (FILE*, file 
>> descriptors) for others. There's some more context/discussion in the review.
>> 
>> My perspective is coloured by work on clang tooling, clangd etc. There we 
>> rely on VFS to ensure code (typically clang library code) works in a variety 
>> of environments, e.g:
>> in an IDE the edited file is consistently used rather than the one on disk
>> clang-tidy checks work on a local codebase, but our code review tool also 
>> runs them as a service
>> This works because all IO goes through the VFS, so VFSes 

Re: [lldb-dev] [cfe-dev] "devirtualizing" files in the VFS

2018-11-15 Thread Jonas Devlieghere via lldb-dev


> On Nov 15, 2018, at 3:34 AM, Whisperity  wrote:
> 
> I am really not sure if adding real file system functionality strictly into 
> the VFS is a good approach. This "ExternalFileSystem" thing sounds weird to 
> me.

The `ExternalFileSystem` was an attempt to provide a more limited interface 
while exposing the "external" path in a way that made sense for the 
RedirectingFileSystem. Like Sam said in the review it's not great because it 
only does half of the work. 

> Does LLDB need to *write* the files through the VFS? I'm not sure perhaps a 
> "WritableVFS" could be implemented, or the necessary casting/conversion 
> options.

Most likely yes because of LLDB's design that abstracts over files without 
prior knowledge about whether they'd only get read or written. However wouldn't 
it suffer from the exact same problems? 

> In case:
>  - there is a real path behind the file --- you could spawn an 
> llvm::RealFileSystem (the fqdn might not be this after the migration patch!) 
> and use that to obtain the file's buffer.

I'm not sure I follow what you have in mind here. Can you give a little more 
detail?

> How can you be sure the file actually exists on the FS? That's what the VFS 
> should be all about, hiding this abstraction... if you *are* sure it exists, 
> or want to make sure, you need to pull the appropriate realFS from the VFS 
> Overlay (most tools have an overlay of a memoryFS above the realFS).

That makes sense, for LLDB's use case we would be happy having just a real or 
redirecting filesystem (with fall through). 

> What I am not sure about is extending the general interface in a way that it 
> caters to a particular (or half of a particular) use case.

I totally understand this sentiment but I don't think that's totally fair. 
Finding files in different locations is an important feature of the VFS; when 
it was introduced in 2014 this was the only use case. The "devirtualization" 
aspect is unfortunate because of the need for native IO. 

> For example, in CodeCompass, we used a custom VFS implementation that 
> "hijacked" the overlay and included itself between the realFS and the 
> memoryFS. It obtains files from the database!
> 
> See:
> https://github.com/Ericsson/CodeCompass/blob/a1a7b10e3a9e2e4f493135ea68566cee54adc081/plugins/cpp_reparse/service/src/databasefilesystem.cpp#L191-L224
>  
> 
> 
> These files *do not necessarily* (in 99% of the cases, not at all) exist on 
> the hard drive at the moment of the code wanting to pull the file, hence why 
> we implemented this to give the file source buffer from DB. The ClangTool 
> that needs this still gets the memoryFS for its own purposes, and for the 
> clang libraries, the realFS is still under there.
> 
> Perhaps the "Status" type could be extended to carry extra information? 
> https://github.com/Ericsson/CodeCompass/blob/a1a7b10e3a9e2e4f493135ea68566cee54adc081/plugins/cpp_reparse/service/src/databasefilesystem.cpp#L85-L87
>  
> 

This sounds like an interesting idea. We already have the option to expose the 
external name here, would it be reasonable to also expose the external path 
here? (of course being an optional)

> 
> Sam McCall via cfe-dev wrote (on Thu, Nov 15, 2018, at 12:02):
> I'd like to get some more perspectives on the role of the VirtualFileSystem 
> abstraction in llvm/Support.
> (The VFS layer has recently moved from Clang to LLVM, so crossposting to both 
> lists)
> 
> https://reviews.llvm.org/D54277  proposed 
> adding a function to VirtualFileSystem to get the underlying "real file" path 
> from a VFS path. LLDB is starting to use VFS for some filesystem 
> interactions, but wants/needs to keep using native IO (FILE*, file 
> descriptors) for others. There's some more context/discussion in the review.
> 
> My perspective is coloured by work on clang tooling, clangd etc. There we 
> rely on VFS to ensure code (typically clang library code) works in a variety 
> of environments, e.g:
> in an IDE the edited file is consistently used rather than the one on disk
> clang-tidy checks work on a local codebase, but our code review tool also 
> runs them as a service
> This works because all IO goes through the VFS, so VFSes are substitutable. 
> We tend to rely on the static type system to ensure this (most people write 
> lit tests that use the real FS).
> 
> Adding facilities to use native IO together with VFS works against this, e.g. 
> a likely interface is
>   // Returns the OS-native path to the specified virtual file.
>   // Returns None if Path doesn't describe a native file, or its path is 
> unknown.
>   Optional FileSystem::getNativePath(string 

Re: [lldb-dev] "devirtualizing" files in the VFS

2018-11-15 Thread Jonas Devlieghere via lldb-dev
HI Sam,

Thanks again for taking the time to discuss this. 

> On Nov 15, 2018, at 3:02 AM, Sam McCall  wrote:
> 
> I'd like to get some more perspectives on the role of the VirtualFileSystem 
> abstraction in llvm/Support.
> (The VFS layer has recently moved from Clang to LLVM, so crossposting to both 
> lists)
> 
> https://reviews.llvm.org/D54277  proposed 
> adding a function to VirtualFileSystem to get the underlying "real file" path 
> from a VFS path. LLDB is starting to use VFS for some filesystem 
> interactions, but wants/needs to keep using native IO (FILE*, file 
> descriptors) for others. There's some more context/discussion in the review.
> 
> My perspective is coloured by work on clang tooling, clangd etc. There we 
> rely on VFS to ensure code (typically clang library code) works in a variety 
> of environments, e.g:
> in an IDE the edited file is consistently used rather than the one on disk
> clang-tidy checks work on a local codebase, but our code review tool also 
> runs them as a service
> This works because all IO goes through the VFS, so VFSes are substitutable. 
> We tend to rely on the static type system to ensure this (most people write 
> lit tests that use the real FS).

I want to emphasize that I don't have any intention of breaking any of those or 
other existing use cases. I opted for the virtual file system because it 
provides 95% of the functionality that's needed for reproducers: the real 
filesystem and the redirecting file system. It has the yaml mapping writer and 
reader, the abstraction level above the two, etc. It feels silly to implement 
everything again in LLDB (actually it would be more like copy/pasting 
everything) just because we miss that 5%, so I'm really motivated to find a 
solution that works for all of us :-) 

> Adding facilities to use native IO together with VFS works against this, e.g. 
> a likely interface is
>   // Returns the OS-native path to the specified virtual file.
>   // Returns None if Path doesn't describe a native file, or its path is 
> unknown.
>   Optional FileSystem::getNativePath(string Path)
> Most potential uses of such a function are going to produce code that doesn't 
> work well with arbitrary VFSes.
> Anecdotally, filesystems are confusing, and most features exposed by VFS end 
> up getting misused if possible.

You're right and this is a problem/limitation for LLDB as well. This was the 
motivation for the `ExternalFileSystem` (please forgive me for the terrible 
name, just wanted to get the code up in phab) because it had "some" semantic 
meaning for both implementations. But I also understand your concerns there. 

> So those are my reasons for pushing back on this change, but I'm not sure how 
> strong they are.
> I think broadly the alternatives for LLDB are:
> make a change like this to the VFS APIs
> migrate to actually doing IO using VFS (likely a lot of work)
> know which concrete VFSes they construct, and track the needed info externally
> stop using VFS, and build separate abstractions for tracking remapping of 
> native files etc
> abandon the new features that depend on this file remapping

Can you elaborate on what you have in mind for (3) and how it differs from (4)?

> As a purist, 2 and 4 seem like the cleanest options, but that's easy to say 
> when it's someone else's work.
> What path should we take here?

I'll withhold from answering this as I'm one of the stakeholders ;-) 

> 
> Cheers, Sam

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] skip some tests with "check-lldb"

2018-09-20 Thread Jonas Devlieghere via lldb-dev

> On Sep 20, 2018, at 2:25 PM, Gábor Márton  wrote:
> 
> Hi Jonas,
> 
> Thanks for the clarification.
> Finally I could skip some of the tests by using a negative lookahead
> regex passed to --filter. I wouldn't say that this is so convenient
> and easy to overview, but works.
> For example, to skip the TestCalculatorMode.py and TestPrintArray.py
> tests I used the following argument (appended to the normal lit
> command line reported by "ninja check-lldb -v" ) :
> --filter '^((?!(TestCalculatorMode\.py|TestPrintArray\.py)).)*$'
> Would be beneficial though to have a --exclude besides --filter to
> avoid using a complex negative lookahead regex.

I have to agree that this looks overly complex. Maybe it's something to bring
up on the llvm-dev mailing list as a general improvement for lit?

> Cheers,
> Gabor
> On Thu, Sep 20, 2018 at 12:53 PM Jonas Devlieghere
>  wrote:
>> 
>> When using lit as the driver (which is the case for check-lldb), the test
>> granularity is at file-level: lit invokes dotest.py for every test file. You
>> should be able to specify files to skip with lit's --filter.
>> 
>> With regards to threading, lit will schedule one instance of dotest.py on 
>> every
>> thread, which processes one test file. Unless you passed specific options for
>> dotest.py, the latter will run the different tests within that one file on
>> different threads as well. (IIRC)
>> 
>> On Sep 19, 2018, at 7:20 PM, Gábor Márton via lldb-dev 
>>  wrote:
>> 
>> That's okay, but is it possible to skip a few tests, when using lit? I was 
>> thinking about moving the test files I want to skip, but that has obvious 
>> drawbacks. Also --filter does not seem so useful in this case.
>> 
>> On Wed, 19 Sep 2018, 18:46 ,  wrote:
>>> 
>>> Unless you pass  --no-multiprocess to dotest, it should detect how many 
>>> cores your system has and use them.
>>> 
>>> --
>>> Ted Woodward
>>> Qualcomm Innovation Center, Inc.
>>> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
>>> Foundation Collaborative Project
>>> 
>>> -Original Message-
>>> From: lldb-dev  On Behalf Of Gábor Márton 
>>> via lldb-dev
>>> Sent: Wednesday, September 19, 2018 11:04 AM
>>> To: lldb-dev@lists.llvm.org
>>> Subject: Re: [lldb-dev] skip some tests with "check-lldb"
>>> 
>>> I just realized that `dotest.py` has a --thread option. Is that the one 
>>> which is used during the lit test (`ninja check-lldb`) ?
>>> 
>>> On Wed, Sep 19, 2018 at 6:00 PM Gábor Márton  wrote:
 
 Hi,
 
 I'd like to skip some tests when I run "ninja check-lldb", because they 
 fail.
 I am on release_70 branch.
 I know I could use dotest.py directly, but that would exercise only one 
 thread.
 Is there a way to execute the tests parallel on all cores and in the
 same time skip some of the tests?
 
 Thanks,
 Gabor
>>> ___
>>> lldb-dev mailing list
>>> lldb-dev@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>>> 
>> ___
>> lldb-dev mailing list
>> lldb-dev@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev
>> 
>> 

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] skip some tests with "check-lldb"

2018-09-20 Thread Jonas Devlieghere via lldb-dev
When using lit as the driver (which is the case for check-lldb), the test
granularity is at file-level: lit invokes dotest.py for every test file. You
should be able to specify files to skip with lit's --filter. 

With regards to threading, lit will schedule one instance of dotest.py on every
thread, which processes one test file. Unless you passed specific options for
dotest.py, the latter will run the different tests within that one file on
different threads as well. (IIRC)

> On Sep 19, 2018, at 7:20 PM, Gábor Márton via lldb-dev 
>  wrote:
> 
> That's okay, but is it possible to skip a few tests, when using lit? I was 
> thinking about moving the test files I want to skip, but that has obvious 
> drawbacks. Also --filter does not seem so useful in this case.
> 
> On Wed, 19 Sep 2018, 18:46 ,  > wrote:
> Unless you pass  --no-multiprocess to dotest, it should detect how many cores 
> your system has and use them.
> 
> --
> Ted Woodward
> Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux 
> Foundation Collaborative Project
> 
> -----Original Message-----
> From: lldb-dev  On Behalf Of Gábor Márton via 
> lldb-dev
> Sent: Wednesday, September 19, 2018 11:04 AM
> To: lldb-dev@lists.llvm.org 
> Subject: Re: [lldb-dev] skip some tests with "check-lldb"
> 
> I just realized that `dotest.py` has a --thread option. Is that the one which 
> is used during the lit test (`ninja check-lldb`) ?
> 
> On Wed, Sep 19, 2018 at 6:00 PM Gábor Márton  wrote:
> >
> > Hi,
> >
> > I'd like to skip some tests when I run "ninja check-lldb", because they 
> > fail.
> > I am on release_70 branch.
> > I know I could use dotest.py directly, but that would exercise only one 
> > thread.
> > Is there a way to execute the tests parallel on all cores and in the 
> > same time skip some of the tests?
> >
> > Thanks,
> > Gabor
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org 
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev 
> 
> 
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] [RFC] LLDB Reproducers

2018-09-19 Thread Jonas Devlieghere via lldb-dev


> On Sep 19, 2018, at 6:49 PM, Leonard Mosescu  wrote:
> 
> Sounds like a fantastic idea. 
> 
> How would this work when the behavior of the debugee process is 
> non-deterministic?

All the communication between the debugger and the inferior goes through the
GDB remote protocol. Because we capture and replay this, we can reproduce
without running the executable, which is particularly convenient when you were
originally debugging something on a different device for example. 
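
For a sense of what that traffic looks like, you can already dump it today
with LLDB's packet logging:

  (lldb) log enable gdb-remote packets

The idea is to capture essentially that packet stream so it can be played back
later without the original target.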

> 
> On Wed, Sep 19, 2018 at 6:50 AM, Jonas Devlieghere via lldb-dev 
> <lldb-dev@lists.llvm.org> wrote:
> Hi everyone,
> 
> We all know how hard it can be to reproduce an issue or crash in LLDB. There
> are a lot of moving parts and subtle differences can easily add up. We want to
> make this easier by generating reproducers in LLDB, similar to what clang does
> today.
> 
> The core idea is as follows: during normal operation we capture whatever
> information is needed to recreate the current state of the debugger. When
> something goes wrong, this becomes available to the user. Someone else should
> then be able to reproduce the same issue with only this data, for example on a
> different machine.
> 
> It's important to note that we want to replay the debug session from the
> reproducer, rather than just recreating the current state. This ensures that 
> we
> have access to all the events leading up to the problem, which are usually far
> more important than the error state itself.
> 
> # High Level Design
> 
> Concretely we want to extend LLDB in two ways:
> 
> 1.  We need to add infrastructure to _generate_ the data necessary for
> reproducing.
> 2.  We need to add infrastructure to _use_ the data in the reproducer to 
> replay
> the debugging session.
> 
> Different parts of LLDB will have different definitions of what data they need
> to reproduce their path to the issue. For example, capturing the commands
> executed by the user is very different from tracking the dSYM bundles on disk.
> Therefore, we propose to have each component deal with its needs in a 
> localized
> way. This has the advantage that the functionality can be developed and tested
> independently.
> 
> ## Providers
> 
> We'll call a combination of (1) and (2) for a given component a `Provider`. 
> For
> example, we'd have a provider for user commands and a provider for dSYM 
> files.
> A provider will know how to keep track of its information, how to serialize it
> as part of the reproducer as well as how to deserialize it again and use it to
> recreate the state of the debugger.
> 
> With one exception, the lifetime of the provider coincides with that of the
> `SBDebugger`, because that is the scope of what we consider here to be a 
> single
> debug session. The exception would be the provider for the global module 
> cache,
> because it is shared between multiple debuggers. Although it would be
> conceptually straightforward to add a provider for the shared module cache,
> this significantly increases the complexity of the reproducer framework 
> because
> of its implication on the lifetime and everything related to that.
> 
> For now we will ignore this problem which means we will not replay the
> construction of the shared module cache but rather build it up during
> replaying, as if the current debug session was the first and only one using 
> it.
> The impact of doing so is significant, as no issue caused by the shared module
> cache will be reproducible, but does not limit reproducing any issue unrelated
> to it.
> 
> ## Reproducer Framework
> 
> To coordinate between the data from different components, we'll need to
> introduce a global reproducer infrastructure. We have a component responsible
> for reproducer generation (the `Generator`) and for using the reproducer (the
> `Loader`). They are essentially two ways of looking at the same unit of
> replayable work.
> 
> The Generator keeps track of its providers and whether or not we need to
> generate a reproducer. When a problem occurs, LLDB will request the Generator
> to generate a reproducer. When LLDB finishes successfully, the Generator 
> cleans
> up anything it might have created during the session. Additionally, the
> Generator populates an index, which is part of the reproducer, and used by the
> Loader to discover what information is available.
> 
> When a reproducer is passed to LLDB, we want to use its data to replay the
> debug session. This is coordinated by the Loader. Through the index created by
> the Generator, different components know what data (Providers) are available,
> and how to use them.
> 
> It's important to note that in order to create a complete reproducer, we wi

[lldb-dev] [RFC] LLDB Reproducers

2018-09-19 Thread Jonas Devlieghere via lldb-dev
Hi everyone,

We all know how hard it can be to reproduce an issue or crash in LLDB. There
are a lot of moving parts and subtle differences can easily add up. We want to
make this easier by generating reproducers in LLDB, similar to what clang does
today.

The core idea is as follows: during normal operation we capture whatever
information is needed to recreate the current state of the debugger. When
something goes wrong, this becomes available to the user. Someone else should
then be able to reproduce the same issue with only this data, for example on a
different machine.

It's important to note that we want to replay the debug session from the
reproducer, rather than just recreating the current state. This ensures that we
have access to all the events leading up to the problem, which are usually far
more important than the error state itself.

# High Level Design

Concretely we want to extend LLDB in two ways:

1.  We need to add infrastructure to _generate_ the data necessary for
reproducing.
2.  We need to add infrastructure to _use_ the data in the reproducer to replay
the debugging session.

Different parts of LLDB will have different definitions of what data they need
to reproduce their path to the issue. For example, capturing the commands
executed by the user is very different from tracking the dSYM bundles on disk.
Therefore, we propose to have each component deal with its needs in a localized
way. This has the advantage that the functionality can be developed and tested
independently.

## Providers

We'll call a combination of (1) and (2) for a given component a `Provider`. For
example, we'd have a provider for user commands and a provider for dSYM files.
A provider will know how to keep track of its information, how to serialize it
as part of the reproducer as well as how to deserialize it again and use it to
recreate the state of the debugger.

With one exception, the lifetime of the provider coincides with that of the
`SBDebugger`, because that is the scope of what we consider here to be a single
debug session. The exception would be the provider for the global module cache,
because it is shared between multiple debuggers. Although it would be
conceptually straightforward to add a provider for the shared module cache,
this significantly increases the complexity of the reproducer framework because
of its implication on the lifetime and everything related to that.

For now we will ignore this problem which means we will not replay the
construction of the shared module cache but rather build it up during
replaying, as if the current debug session was the first and only one using it.
The impact of doing so is significant, as no issue caused by the shared module
cache will be reproducible, but does not limit reproducing any issue unrelated
to it.

## Reproducer Framework

To coordinate between the data from different components, we'll need to
introduce a global reproducer infrastructure. We have a component responsible
for reproducer generation (the `Generator`) and for using the reproducer (the
`Loader`). They are essentially two ways of looking at the same unit of
replayable work.

The Generator keeps track of its providers and whether or not we need to
generate a reproducer. When a problem occurs, LLDB will request the Generator
to generate a reproducer. When LLDB finishes successfully, the Generator cleans
up anything it might have created during the session. Additionally, the
Generator populates an index, which is part of the reproducer, and used by the
Loader to discover what information is available.

When a reproducer is passed to LLDB, we want to use its data to replay the
debug session. This is coordinated by the Loader. Through the index created by
the Generator, different components know what data (Providers) are available,
and how to use them.

It's important to note that in order to create a complete reproducer, we will
require data from our dependencies (llvm, clang, swift) as well. This means
that either (a) the infrastructure needs to be accessible from our dependencies
or (b) an API is provided that allows us to query this. We plan to address
this issue when it arises for the respective Generator.

# Components

We have identified a list of minimal components needed to make reproducing
possible. We've divided those into two groups: explicit and implicit inputs.

Explicit inputs are inputs from the user to the debugger.

-   Command line arguments
-   Settings
-   User commands
-   Scripting Bridge API

In addition to the components listed above, LLDB has a bunch of inputs that are
not passed explicitly. It's often these that make reproducing an issue complex.

-   GDB Remote Packets
-   Files containing debug information (object files, dSYM bundles)
-   Clang headers
-   Swift modules

Every component would have its own provider and is free to implement it as it
sees fit. For example, as we expect to have a large number of GDB remote
packets, the provider might 

Re: [lldb-dev] Moderator needed for lldb-commits

2018-07-09 Thread Jonas Devlieghere via lldb-dev
Hi Tanya, 

I'd be happy to take care of this!  

Cheers,
Jonas

> On Jul 6, 2018, at 6:01 PM, Tanya Lattner via lldb-dev 
>  wrote:
> 
> LLDB Developers,
> 
> Moderators are needed for the lldb-commits mailing list. Is anyone interested 
> in helping out?
> 
> Thanks,
> Tanya
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


Re: [lldb-dev] Issues (resolved) with running lldb test-suite on Ubuntu 18.04 LTS

2018-06-29 Thread Jonas Devlieghere via lldb-dev
Hi Puyan,

> On Jun 29, 2018, at 7:30 PM, Puyan Lotfi via lldb-dev 
>  wrote:
> 
> Just a heads up, I had run into some issues running make check-lldb. I found 
> the solution to be setting:
> 
> PYTHON_INCLUDE_DIRS=/usr/include/python2.7
> PYTHON_LIBRARIES=/usr/lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so
> 
> prior to running cmake. Of course python2.7-dev needs to be installed prior. 
> 
> I don’t know if this can be done a better way through pyenv or something like 
> that, but I just thought I'd put that out there.

We’ve had similar issues with Python on Darwin. One potential cause of problems 
is linking against a different version of Python than the interpreter. If you 
install Python through python.org or Homebrew, CMake finds that version for the 
interpreter, but the system one for libpython2.7.dylib.

-- Found PythonLibs: /usr/lib/libpython2.7.dylib (found version "2.7.10")   
<- System Python
-- Found PythonInterp: /usr/local/bin/python2.7 (found version "2.7.15")
<- Homebrew Python 

I’ve been told that as of version 3.12 of CMake, there will be a new interface 
to FindPython that will ensure consistency and differentiate between Python 2 
and Python 3 (which I presume is your issue here). I don’t know if CMake 
supports doing different things depending on the version, but hopefully this 
will be resolved in the future. In the meantime (on macOS) I just unlinked the 
Python interpreter from /usr/local/bin. 
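
For reference, the configure step Puyan describes would look roughly like this
(untested sketch; the exact library path and source directory depend on your
distribution and checkout layout):

  $ cmake -G Ninja \
      -DPYTHON_INCLUDE_DIRS=/usr/include/python2.7 \
      -DPYTHON_LIBRARIES=/usr/lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so \
      ../llvm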

> 
> PL
> ___
> lldb-dev mailing list
> lldb-dev@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev


[lldb-dev] London LLVM Social Thursday July 19

2018-06-29 Thread Jonas Devlieghere via lldb-dev
Hi everyone,

We’re excited to invite you to the second LLVM social in London on Thursday, 
July 19.

We'll meet at Drake & Morgan (6 Pancras Square, Kings Cross, N1C 4AG) at 6:30 
pm for an informal evening of LLVM-related discussions over drinks.

If you can, please help us plan and RSVP here: 
https://www.meetup.com/LLVM-Clang-Cambridge-social/events/252228264/

Cheers,
Jonas
___
lldb-dev mailing list
lldb-dev@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-dev

