labath added a comment.

In D74398#1930021 <https://reviews.llvm.org/D74398#1930021>, @jarin wrote:

> In D74398#1929019 <https://reviews.llvm.org/D74398#1929019>, @labath wrote:
>
> > Thanks. Could you also add the other kind of test (the one with inline asm) 
> > I mentioned? In an ideal world we'd have a test case for every boundary 
> > condition, but we're pretty far from that right now. Even so, one test case 
> > like that would be nice.
>
>
> Pavel, thank you for the careful review! I still do not quite understand what 
> the asm test should look like and where it should live. Are you asking me to 
> create a new directory with a compiled program in it and to build the 
> machinery to stop at the right place and check the stack? If so, that sounds 
> like a fairly time-consuming undertaking, and I am afraid I cannot invest 
> much more time into this.


Yes, that is kind of what I am proposing, but I don't think it should be that 
hard. The creation of a new directory is necessary because of how our test 
suite expects there to be one inferior per directory, and we want to build a 
new inferior here. The inferior would basically consist of a single `main` 
function containing something like the inline asm I mentioned previously (it 
probably doesn't compile right now, but I hope it shows the general idea). 
"Stopping at the right place" is implemented via the `int3` opcode. Then the 
test would just run the program, wait for it to stop, and examine the 
jThreadsInfo response. It should expect to see one thread containing two 
(three?) memory records. The only tricky part would be validating the contents 
of those records, as they will not have fully static data due to the 
unpredictability of the %rsp value. But the test could compare that to the 
value of %rsp it gets through the `p` packet. Or maybe the inferior could align 
the stack to, say, a 1MB boundary, and then the test could verify that rsp/rbp 
have the expected values modulo 1MB.
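
To make that a bit more concrete, a rough sketch of the inferior could look 
something like this (untested, x86-64/AT&T only; the magic value and the exact 
frame setup are just placeholders, not the actual asm from my earlier comment):

  // Rough sketch of the inferior's main.c (hypothetical, x86-64 only). It puts
  // a recognizable value on the stack, sets up a small fake frame, and stops
  // at a known point so the test can inspect the jThreadsInfo response.
  int main(void) {
    __asm__ volatile(
        "pushq %%rbp\n\t"                     // save the caller's frame pointer
        "movabsq $0xfeedfacef00d, %%rax\n\t"  // magic value for the test to find
        "pushq %%rax\n\t"                     // stored right below the saved rbp
        "movq %%rsp, %%rbp\n\t"               // establish the fake frame
        "int3\n\t"                            // stop here; test reads the stack
        "addq $8, %%rsp\n\t"                  // drop the magic value
        "popq %%rbp\n\t"                      // restore the caller's frame pointer
        : /* no outputs */ : /* no inputs */ : "rax", "memory");
    return 0;
  }

The test would then run this, wait for the int3 stop, read %rsp/%rbp via the 
`p` packet, and check that the memory records in the jThreadsInfo reply cover 
the bytes around them (including the magic value). The 1MB-alignment variant 
would make the expected addresses even more predictable.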

> Perhaps it is better if we stick this patch only into our lldb branch, this 
> one should be pretty easy for us to rebase.

That's something you'll have to decide. I'm still very much interested in 
having this patch, but I am also trying to maintain a higher bar for test 
coverage. I see this as sort of similar to the register reading tests we added 
recently (`lldb/test/Shell/Register`). Before that, we only had tests which try 
to read all registers and expect to get /a/ value. That's sort of similar to 
what your first test does -- it is somewhat useful, and it is cross-platform, 
but it doesn't really check all that much. Also, if we ever find a bug in this 
code, it will be impossible to write a regression test for it using that method.

This other test would be more similar to the "new" register tests. They are a 
bit trickier to write, but they enable you to write stronger assertions about 
what the code is actually supposed to do -- not just in the "normal" case, but 
also in various boundary conditions.

> As for the timings with local lldb on Linux, I see jThreadsInfo for 100 
> threads with fairly shallow stacks (10 entries or so) taking 20ms, as opposed 
> to 4ms before this change. The jThreadsInfo reply size is 80kB, up from 11kB. 
> While this seems excessive, I do not see any extra memory traffic for getting 
> a thread list with the patch, whereas without this patch we get over 100 
> roundtrips (each with at least 512 bytes of payload) to display the top of 
> the stack of each thread.

Thanks for checking this out. 20ms for 100 threads doesn't sound too bad. I am 
guessing the statistics would look better if you also showed how many packets 
this saved. I would expect a net saving from all the memory read packets we can 
avoid sending now.
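
(Back-of-the-envelope, using your numbers: the reply grew by about 69kB (80kB 
vs 11kB), while the 100+ memory-read round trips it replaces carried at least 
~51kB of payload, so the total traffic is in the same ballpark and we save 
over 100 request/response round trips.)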


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D74398/new/

https://reviews.llvm.org/D74398


