omjavaid added inline comments.

================
Comment at: lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp:1357
+
+  if (error.Fail())
+    return error;
----------------
DavidSpickett wrote:
> omjavaid wrote:
> > DavidSpickett wrote:
> > > DavidSpickett wrote:
> > > > DavidSpickett wrote:
> > > > > omjavaid wrote:
> > > > > > Is the ptrace request a success if the number of tags requested
> > > > > > is not equal to the number of tags read? If not, then this and
> > > > > > the following condition may be redundant.
> > > > > Well, `PtraceWrapper` doesn't check the iovec, but I'll check the
> > > > > kernel source to see whether it's actually possible for it to fail
> > > > > that way.
> > > > In `linux/arch/arm64/kernel/mte.c` `__access_remote_tags` there is
> > > > a comment:
> > > > ```
> > > > +/*
> > > > + * Access MTE tags in another process' address space as given in mm. Update
> > > > + * the number of tags copied. Return 0 if any tags copied, error otherwise.
> > > > + * Inspired by __access_remote_vm().
> > > > + */
> > > > ```
> > > > 
> > > > *any tags* being the key words.
> > > > 
> > > > So the scenario is:
> > > > * ask to read from addr X in page 0, with a length of pagesize+some
> > > >   so the range spills into page 1
> > > > * kernel can access page 0, reads tags until the end of the page
> > > > * tries to access page 1 to read the rest, fails, returns 0 (success)
> > > >   since *some* tags were read
> > > > * we see that the ptrace call succeeded, but with fewer tags than we
> > > >   expected
> > > > 
> > > > I don't think it's worth dealing with this corner case here, since
> > > > lldb will look before it leaps. It would have errored much earlier
> > > > because either page 1 isn't in the tracee's memory regions or it
> > > > wasn't MTE enabled.
> > > > 
> > > Added a comment in the code too.
> > This means emitting fewer than the requested number of tags is
> > legitimate. However, we set the tags vector size equal to whatever we
> > requested. We also set an error string which is not actually used
> > anywhere; it could be removed in favor of a log message, to help with
> > debugging if needed.
> > 
> > Also, we need to resize the vector after the ptrace request so that we
> > use this size in the gdb-remote reply.
> I'll log that error in GDBRemoteCommunicationServerLLGS.cpp.
> 
> I'll do what you suggested to support partial reads on the server side.
> Then the lldb client can error if it doesn't get what it expected.
> (Testing partial reads on the lldb side is going to be very difficult
> anyway, since we'd need a valid smaps entry to even issue one.)
If we are following an approach similar to the m/M gdb-remote packets for
tags, then it's fine to read partial data when part of the memory in the
requested address range was inaccessible. Maybe make an appropriate
adjustment to the command output; I don't recall what we currently emit
for partial memory reads, but it should be something like
<tags not available>.
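
For reference, here is a minimal standalone sketch (not the patch's actual
code) of detecting a partial tag read and resizing the buffer, assuming the
documented PTRACE_PEEKMTETAGS semantics where the kernel updates `iov_len`
with the number of tags actually copied. The `ReadTags` helper and its
shape are hypothetical:

```
#include <cstdint>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <vector>

// 33 is PTRACE_PEEKMTETAGS on arm64 Linux; defined here as a fallback for
// illustration when building against headers that lack it.
#ifndef PTRACE_PEEKMTETAGS
#define PTRACE_PEEKMTETAGS 33
#endif

// Hypothetical helper: read one tag byte per 16-byte granule starting at
// addr in the tracee `pid`.
bool ReadTags(pid_t pid, uintptr_t addr, size_t num_tags,
              std::vector<uint8_t> &tags) {
  tags.resize(num_tags);
  struct iovec iov;
  iov.iov_base = tags.data();
  iov.iov_len = tags.size();
  if (ptrace((__ptrace_request)PTRACE_PEEKMTETAGS, pid,
             reinterpret_cast<void *>(addr), &iov) < 0)
    return false;
  // The call can succeed having copied fewer tags than requested, e.g.
  // when the range spills into an inaccessible page. Shrink the vector so
  // the gdb-remote reply reflects the number of tags actually read.
  tags.resize(iov.iov_len);
  return true;
}
```

Resizing to `iov.iov_len` is what lets the client detect a partial read
from the reply length.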


================
Comment at: lldb/source/Plugins/Process/Linux/NativeProcessLinux.cpp:1345
+
+  tags.resize((range.GetByteSize() / details->manager->GetGranuleSize()) *
+              details->manager->GetBytesPerTag());
----------------
DavidSpickett wrote:
> omjavaid wrote:
> > Is there a difference between Granule and GranuleSize?
> Granule is used to mean the current granule you're in. So if you were at
> address 0x10, you'd be in the [0x10, 0x20) granule. GranuleSize is the
> size of each granule.
> 
> If I saw `manager->GetGranule` I'd expect it to be something like
> `std::pair<addr_t, addr_t> GetGranule(addr_t addr);`.
> As in: tell me which granule I'm in.
> 
> Though I admit this is more an issue of `ExpandToGranule` not being
> clear enough, rather than `GetGranuleSize` being too redundant.
> `AlignToGranule(s)` perhaps? But then you'd ask "align how?", hence
> "ExpandTo". Anyway.
Right, I guess we can stay with the current nomenclature. Thanks for the
detailed explanation.
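
To make the granule arithmetic concrete, here is a minimal sketch using
MTE's 16-byte granules and one tag byte per granule. The free-standing
`ExpandToGranule` and `TagBufferSize` names are illustrative only, not the
patch's actual MemoryTagManager interface:

```
#include <cstdint>
#include <utility>

constexpr uint64_t kGranuleSize = 16; // MTE granule: 16 bytes
constexpr uint64_t kBytesPerTag = 1;  // MTE stores one tag byte per granule

// Expand [addr, addr + len) so it begins and ends on granule boundaries.
// Returns the new base address and the expanded byte size.
std::pair<uint64_t, uint64_t> ExpandToGranule(uint64_t addr, uint64_t len) {
  uint64_t begin = addr & ~(kGranuleSize - 1);
  uint64_t end = (addr + len + kGranuleSize - 1) & ~(kGranuleSize - 1);
  return {begin, end - begin};
}

// Buffer size needed to hold the tags for an expanded range, mirroring
// tags.resize((range.GetByteSize() / GetGranuleSize()) * GetBytesPerTag()).
uint64_t TagBufferSize(uint64_t expanded_byte_size) {
  return (expanded_byte_size / kGranuleSize) * kBytesPerTag;
}
```

For example, ExpandToGranule(0x1001, 0x20) yields [0x1000, 0x1030), which
needs a 3-byte tag buffer.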


================
Comment at: lldb/test/API/tools/lldb-server/memory-tagging/main.c:13
+  // Wait for lldb-server to stop us
+  while (1) {
+  }
----------------
DavidSpickett wrote:
> omjavaid wrote:
> > An infinite loop in the test program may not be a good idea.
> I'll check what the timeouts are on the expect packet sequence. I think
> the program would get killed eventually if we didn't see the output
> we're looking for.
> (I could do something more CPU-friendly: sleep/halt/wait on something.)
In the past I have seen the LLDB buildbot pile up executables which Python
couldn't clean up for whatever reason. It's better if the executable can
safely exit within a reasonable period.
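
As an illustration of the kind of bounded, CPU-friendly wait being
suggested here (the 60-second figure is an assumption, not from the
patch):

```
#include <unistd.h>

int main(void) {
  // Wait for lldb-server to stop us, but give up after roughly 60 seconds
  // so a stray executable cannot pile up on the buildbot.
  for (int i = 0; i < 60; ++i)
    sleep(1);
  return 0;
}
```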


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D95601/new/

https://reviews.llvm.org/D95601

_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits
