[Lldb-commits] [lldb] 852a4bd - Change the Sanitizer report breakpoint callbacks to asynchronous.

2022-10-03 Thread Jim Ingham via lldb-commits

Author: Jim Ingham
Date: 2022-10-03T18:10:28-07:00
New Revision: 852a4bdb25d145884f61cd812e66611e65fd2dd9

URL: 
https://github.com/llvm/llvm-project/commit/852a4bdb25d145884f61cd812e66611e65fd2dd9
DIFF: 
https://github.com/llvm/llvm-project/commit/852a4bdb25d145884f61cd812e66611e65fd2dd9.diff

LOG: Change the Sanitizer report breakpoint callbacks to asynchronous.

The synchronous callbacks are not intended to start the target running
during the callback, and doing so is flakey.  This patch converts them
to being regular async callbacks, and adds some testing for sequential
reports that have caused problems in the field.

Differential Revision: https://reviews.llvm.org/D134927
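
For reference, the behaviour the new test exercises can be driven from the Python SB API roughly as follows. This is a minimal sketch, not part of the patch: the binary name, launch arguments, and the assumption that every stop is a sanitizer report are illustrative only.

# Sketch: with the report callbacks now asynchronous, each sanitizer report
# surfaces as a public stop with reason eStopReasonInstrumentation and the
# process can simply be resumed to get the next report.
import lldb

dbg = lldb.SBDebugger.Create()
dbg.SetAsync(False)
target = dbg.CreateTarget("a.out")            # built with -fsanitize=undefined
process = target.LaunchSimple(None, None, ".")

while process.GetState() == lldb.eStateStopped:
    thread = process.GetSelectedThread()
    if thread.GetStopReason() != lldb.eStopReasonInstrumentation:
        break
    stream = lldb.SBStream()
    if thread.GetStopReasonExtendedInfoAsJSON(stream):
        print(stream.GetData())               # report description, file, line, ...
    process.Continue()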

Added: 


Modified: 
lldb/include/lldb/Breakpoint/Breakpoint.h

lldb/source/Plugins/InstrumentationRuntime/ASan/InstrumentationRuntimeASan.cpp

lldb/source/Plugins/InstrumentationRuntime/MainThreadChecker/InstrumentationRuntimeMainThreadChecker.cpp

lldb/source/Plugins/InstrumentationRuntime/TSan/InstrumentationRuntimeTSan.cpp

lldb/source/Plugins/InstrumentationRuntime/UBSan/InstrumentationRuntimeUBSan.cpp
lldb/source/Target/StopInfo.cpp
lldb/test/API/functionalities/ubsan/basic/TestUbsanBasic.py
lldb/test/API/functionalities/ubsan/basic/main.c

Removed: 




diff  --git a/lldb/include/lldb/Breakpoint/Breakpoint.h 
b/lldb/include/lldb/Breakpoint/Breakpoint.h
index 701d0823bd282..7490982cb05ba 100644
--- a/lldb/include/lldb/Breakpoint/Breakpoint.h
+++ b/lldb/include/lldb/Breakpoint/Breakpoint.h
@@ -382,7 +382,10 @@ class Breakpoint : public std::enable_shared_from_this<Breakpoint>,
   /// \param[in] is_synchronous
   ///    If \b true the callback will be run on the private event thread
   ///    before the stop event gets reported.  If false, the callback will get
-  ///    handled on the public event thread after the stop has been posted.
+  ///    handled on the public event thread while the stop event is being
+  ///    pulled off the event queue.
+  ///    Note: synchronous callbacks cannot cause the target to run, in
+  ///    particular, they should not try to run the expression evaluator.
   void SetCallback(BreakpointHitCallback callback, void *baton,
                    bool is_synchronous = false);
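
For comparison, user-level callbacks attached through the SB API always take the non-synchronous path, so they are free to inspect state and, unlike synchronous callbacks, to evaluate expressions. A minimal sketch; the module, function, and symbol names below are illustrative, not taken from the patch:

# sanitizer_report.py -- a scripted breakpoint callback registered via the
# SB API.  Returning False would tell LLDB to auto-continue after the hit.
import lldb

def report_hit(frame, bp_loc, internal_dict):
    thread = frame.GetThread()
    print("report breakpoint hit on thread", thread.GetIndexID())
    return True

def attach(target):
    # Symbol name is an example; the sanitizer plugins resolve their own symbols.
    bp = target.BreakpointCreateByName("__ubsan_on_report")
    bp.SetScriptCallbackFunction("sanitizer_report.report_hit")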
 

diff  --git 
a/lldb/source/Plugins/InstrumentationRuntime/ASan/InstrumentationRuntimeASan.cpp
 
b/lldb/source/Plugins/InstrumentationRuntime/ASan/InstrumentationRuntimeASan.cpp
index 4746112873112..72dfbd5866224 100644
--- 
a/lldb/source/Plugins/InstrumentationRuntime/ASan/InstrumentationRuntimeASan.cpp
+++ 
b/lldb/source/Plugins/InstrumentationRuntime/ASan/InstrumentationRuntimeASan.cpp
@@ -299,14 +299,15 @@ void InstrumentationRuntimeASan::Activate() {
   if (symbol_address == LLDB_INVALID_ADDRESS)
 return;
 
-  bool internal = true;
-  bool hardware = false;
+  const bool internal = true;
+  const bool hardware = false;
+  const bool sync = false;
   Breakpoint *breakpoint =
   process_sp->GetTarget()
   .CreateBreakpoint(symbol_address, internal, hardware)
   .get();
   breakpoint->SetCallback(InstrumentationRuntimeASan::NotifyBreakpointHit, 
this,
-  true);
+  sync);
   breakpoint->SetBreakpointKind("address-sanitizer-report");
   SetBreakpointID(breakpoint->GetID());
 

diff  --git 
a/lldb/source/Plugins/InstrumentationRuntime/MainThreadChecker/InstrumentationRuntimeMainThreadChecker.cpp
 
b/lldb/source/Plugins/InstrumentationRuntime/MainThreadChecker/InstrumentationRuntimeMainThreadChecker.cpp
index a5c23615309d5..e22cc86116ce5 100644
--- 
a/lldb/source/Plugins/InstrumentationRuntime/MainThreadChecker/InstrumentationRuntimeMainThreadChecker.cpp
+++ 
b/lldb/source/Plugins/InstrumentationRuntime/MainThreadChecker/InstrumentationRuntimeMainThreadChecker.cpp
@@ -214,8 +214,9 @@ void InstrumentationRuntimeMainThreadChecker::Activate() {
   .CreateBreakpoint(symbol_address, /*internal=*/true,
 /*hardware=*/false)
   .get();
+  const bool sync = false;
   breakpoint->SetCallback(
-  InstrumentationRuntimeMainThreadChecker::NotifyBreakpointHit, this, 
true);
+  InstrumentationRuntimeMainThreadChecker::NotifyBreakpointHit, this, 
sync);
   breakpoint->SetBreakpointKind("main-thread-checker-report");
   SetBreakpointID(breakpoint->GetID());
 

diff  --git 
a/lldb/source/Plugins/InstrumentationRuntime/TSan/InstrumentationRuntimeTSan.cpp
 
b/lldb/source/Plugins/InstrumentationRuntime/TSan/InstrumentationRuntimeTSan.cpp
index 910992c48a7dc..425b0baa453f8 100644
--- 
a/lldb/source/Plugins/InstrumentationRuntime/TSan/InstrumentationRuntimeTSan.cpp
+++ 
b/lldb/source/Plugins/InstrumentationRuntime/TSan/InstrumentationRuntimeTSan.cpp
@@ -915,14 +915,15 @@ void InstrumentationRuntimeTSan::Activate() {
   if (symbol_address == LLDB_INVALID_ADDRESS)
 return;
 
-  bool internal = true;
-  bool hardware = 

[Lldb-commits] [PATCH] D134927: Make the sanitizer Notify callbacks asynchronous

2022-10-03 Thread Jim Ingham via Phabricator via lldb-commits
This revision was automatically updated to reflect the committed changes.
Closed by commit rG852a4bdb25d1: Change the Sanitizer report breakpoint 
callbacks to asynchronous. (authored by jingham).

Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134927/new/

https://reviews.llvm.org/D134927

Files:
  lldb/include/lldb/Breakpoint/Breakpoint.h
  lldb/source/Plugins/InstrumentationRuntime/ASan/InstrumentationRuntimeASan.cpp
  
lldb/source/Plugins/InstrumentationRuntime/MainThreadChecker/InstrumentationRuntimeMainThreadChecker.cpp
  lldb/source/Plugins/InstrumentationRuntime/TSan/InstrumentationRuntimeTSan.cpp
  
lldb/source/Plugins/InstrumentationRuntime/UBSan/InstrumentationRuntimeUBSan.cpp
  lldb/source/Target/StopInfo.cpp
  lldb/test/API/functionalities/ubsan/basic/TestUbsanBasic.py
  lldb/test/API/functionalities/ubsan/basic/main.c

Index: lldb/test/API/functionalities/ubsan/basic/main.c
===
--- lldb/test/API/functionalities/ubsan/basic/main.c
+++ lldb/test/API/functionalities/ubsan/basic/main.c
@@ -1,4 +1,16 @@
 int main() {
   int data[4];
-  return *(int *)(((char *)&data[0]) + 2); // align line
+  int result = *(int *)(((char *)&data[0]) + 2); // align line
+
+  int *p = data + 5;  // Index 5 out of bounds for type 'int [4]'
+  *p = data + 5;
+  *p = data + 5;
+  *p = data + 5;
+  *p = data + 5;
+  *p = data + 5;
+  *p = data + 5;
+  *p = data + 5;
+  *p = data + 5;
+
+  return 0;
 }
Index: lldb/test/API/functionalities/ubsan/basic/TestUbsanBasic.py
===
--- lldb/test/API/functionalities/ubsan/basic/TestUbsanBasic.py
+++ lldb/test/API/functionalities/ubsan/basic/TestUbsanBasic.py
@@ -86,4 +86,9 @@
         self.assertEqual(os.path.basename(data["filename"]), "main.c")
         self.assertEqual(data["line"], self.line_align)
 
-        self.runCmd("continue")
+        for count in range(0,8):
+            process.Continue()
+            stop_reason = thread.GetStopReason()
+            self.assertEqual(stop_reason, lldb.eStopReasonInstrumentation,
+                             "Round {0} wasn't instrumentation".format(count))
+
Index: lldb/source/Target/StopInfo.cpp
===
--- lldb/source/Target/StopInfo.cpp
+++ lldb/source/Target/StopInfo.cpp
@@ -362,29 +362,21 @@
" not running commands to avoid recursion.");
 bool ignoring_breakpoints =
 process->GetIgnoreBreakpointsInExpressions();
-if (ignoring_breakpoints) {
-  m_should_stop = false;
-  // Internal breakpoints will always stop.
-  for (size_t j = 0; j < num_owners; j++) {
-lldb::BreakpointLocationSP bp_loc_sp =
-bp_site_sp->GetOwnerAtIndex(j);
-if (bp_loc_sp->GetBreakpoint().IsInternal()) {
-  m_should_stop = true;
-  break;
-}
-  }
-} else {
-  m_should_stop = true;
+// Internal breakpoints should be allowed to do their job, we
+// can make sure they don't do anything that would cause recursive
+// command execution:
+if (!m_was_all_internal) {
+  m_should_stop = !ignoring_breakpoints;
+  LLDB_LOGF(log,
+"StopInfoBreakpoint::PerformAction - in expression, "
+"continuing: %s.",
+m_should_stop ? "true" : "false");
+  Debugger::ReportWarning(
+  "hit breakpoint while running function, skipping commands "
+  "and conditions to prevent recursion",
+process->GetTarget().GetDebugger().GetID());
+  return;
 }
-LLDB_LOGF(log,
-  "StopInfoBreakpoint::PerformAction - in expression, "
-  "continuing: %s.",
-  m_should_stop ? "true" : "false");
-Debugger::ReportWarning(
-"hit breakpoint while running function, skipping commands and "
-"conditions to prevent recursion",
-process->GetTarget().GetDebugger().GetID());
-return;
   }
 
   StoppointCallbackContext context(event_ptr, exe_ctx, false);
Index: lldb/source/Plugins/InstrumentationRuntime/UBSan/InstrumentationRuntimeUBSan.cpp
===
--- lldb/source/Plugins/InstrumentationRuntime/UBSan/InstrumentationRuntimeUBSan.cpp
+++ lldb/source/Plugins/InstrumentationRuntime/UBSan/InstrumentationRuntimeUBSan.cpp
@@ -275,8 +275,9 @@
   .CreateBreakpoint(symbol_address, /*internal=*/true,
 /*hardware=*/false)
   .get();
+  const bool sync = false;
   breakpoint->SetCall

[Lldb-commits] [PATCH] D126668: LLDB: Fix resolving nested template parameters

2022-10-03 Thread David Blaikie via Phabricator via lldb-commits
dblaikie added a comment.

FWIW, clang has support for including template parameter DIEs for template 
declarations (-Xclang -debug-forward-template-params). We could enable that by 
default when tuning for lldb (or maybe we're at the tipping point and we should 
enable it by default in general - I pushed back on that originally when Sony 
folks added/proposed it, under the argument that other debuggers didn't need 
this info - but if they do need the info, so be it)

That feature is also turned on by default in `-gsimple-template-names` which 
we're currently working on adding support to lldb for - so ensuring that lldb 
does the best things when declarations do have template parameters would be 
good. So not filtering based on declaration, but I guess based on "<>" in the 
name and the presence/absence of template parameter DIEs as the patch does 
currently?
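
In other words, the signals under discussion are just the DIE's name and its children; a toy sketch of that kind of check (purely illustrative, not the patch's actual logic):

# Toy check: a type whose name carries no '<...>' but which has
# template-parameter child DIEs (as with -gsimple-template-names or
# -debug-forward-template-params) needs its arguments rebuilt from the DIEs.
def template_args_come_from_dies(die_name, has_template_param_dies):
    looks_instantiated = "<" in die_name      # e.g. "Foo<int>" vs plain "Foo"
    return has_template_param_dies and not looks_instantiated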


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D126668/new/

https://reviews.llvm.org/D126668



[Lldb-commits] [PATCH] D134842: Improve dynamic loader support in DynamicLoaderPOSIXDYLD when using core files.

2022-10-03 Thread António Afonso via Phabricator via lldb-commits
aadsm added a comment.

Would it be possible to have a test for this? We would probably have found
this if there had been a test at the time I rewrote this logic.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134842/new/

https://reviews.llvm.org/D134842



[Lldb-commits] [PATCH] D134991: [lldb] Add diagnostics

2022-10-03 Thread Jonas Devlieghere via Phabricator via lldb-commits
JDevlieghere added a comment.

In D134991#3830682, @DavidSpickett 
wrote:

> I like the direction, thanks for working on this.
>
> To no one's surprise, I was thinking about testing :)
>
> - Errors related to the filesystem, e.g. can't open a path, are too complicated 
> to set up on demand. The code looks to be passing on the error, which is good 
> enough most of the time.
> - The callbacks are added in c++, so testing if you add A then remove B and 
> so on doesn't make sense on the same copy of lldb (and upstream will always 
> have the same set of callbacks anyway).
> - Dumping statistics uses an existing method, you just make up the filename. 
> So it's the statistics code's responsibility to test what that function does.
>
> Maybe there could be a smoke test that just checks that the dir is made and 
> has some files ending in `stats.json` in it? I think clang does something 
> like this though they may have a `--please-crash` option too.

I appreciate you thinking this through. Back in the days of the reproducers, we 
would generate a reproducer on a crash (similar to what we do here) or through 
a command (`reproducer generate`). I think we'll want to a command here too for 
the case where something went wrong but without crashing LLDB. I think that 
part should be easy to test and I was planning on adding a test as part of that.

> Unrelated - there's maybe a situation where the same set of callbacks are 
> added in a different order, perhaps? But am I correct in thinking that this 
> isn't an issue because these will be writing to different files? Stats has 
> one set of files, logging has another set, etc. So the end result is always a 
> dir of separate files one or more for each thing that registers.

Yeah, in my mind all the callbacks should be orthogonal. If it wasn't for the 
layering, I'd make it all the responsibility of the Diagnostics class. If 
someone has a better idea than callbacks please let me know.
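
A toy model of that orthogonality (not LLDB code): each subsystem registers its own callback, and each callback writes its own file into the diagnostics directory, so ordering never matters.

# Toy model: independent diagnostic callbacks, one output file each.
import os, tempfile

_callbacks = []

def register(filename, fn):
    _callbacks.append((filename, fn))

def dump():
    outdir = tempfile.mkdtemp(prefix="diagnostics-")
    for filename, fn in _callbacks:
        with open(os.path.join(outdir, filename), "w") as f:
            fn(f)                      # e.g. statistics, logs, version info
    return outdir

register("stats.json", lambda f: f.write("{}\n"))
print("diagnostics written to", dump())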

In D134991#3830744, @labath wrote:

> Adding @rupprecht, as he might be interested in following this.
>
> I don't want to get too involved in this, but the first thought on my mind is 
> "do you really want to do all this from within a signal handler?". The fact 
> that we're running this code means that we're already in a bad state, and it's 
> pretty hard to say how far we will manage to get without triggering another 
> crash.

We had a similar issue for the reproducers. In my experience, we mostly got 
away with it, but as you said, there are no guarantees. It all depends on
exactly what the callback is doing. Maybe that's something that should be 
controllable: e.g. some callbacks might want to do more/less based on whether 
they're called from a signal handler vs when they're triggered from a command.

That said, I see the crash case as a "best effort" kind of thing.

> At the very least, I think we should print the crash dump directory as the 
> first order of business, before we start running random callbacks.

That's a good point.

> There are various ways to solve that, like moving the dumping code to another 
> process, or being very careful about what you run inside the crash handler. 
> The one thing that they all have in common is that they are harder to 
> design/implement than what you have now. So, if you want to try this out, and 
> accept the risk that it may not be enough to capture all the crashes you're 
> interested in, then I won't stand in your way...

Yup that's the approach we took for the reproducer. If you remember, we found 
that copying a bunch of files wasn't the best thing to do in a signal handler 
(surprise surprise) so we moved that work out of process (but still wrote out 
the mapping, similar to clang). I think we can take a similar approach here.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134991/new/

https://reviews.llvm.org/D134991



[Lldb-commits] [PATCH] D134882: [lldb] Move breakpoint hit reset code to Target::CleanupProcess

2022-10-03 Thread Jim Ingham via Phabricator via lldb-commits
jingham added a comment.

That seems fine.  I think it's useful to be able to see breakpoint hit counts 
up to the point where you start a new process.  From looking at the code, it 
looks like putting the clear in CleanupProcess will do that.  If you agree this 
is useful, can you add to the test that the hit count is still 1 between 
process.Kill and connecting to the new process?  Other than that this seems 
fine.
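
Concretely, the requested check could look something like this, reusing the objects from the test added in this revision (exact placement within the test is up to the author):

        # The hit count recorded against the old process should survive the
        # kill...
        process.Kill()
        self.assertEqual(bkpt.GetHitCount(), 1)

        # ...and only be reset once we connect to the new process.
        process = self.connect(target)
        self.assertEqual(bkpt.GetHitCount(), 0)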


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134882/new/

https://reviews.llvm.org/D134882



[Lldb-commits] [PATCH] D135028: [lldb] [gdb-remote] Move ReadPacketWithOutputSupport() to client

2022-10-03 Thread Michał Górny via Phabricator via lldb-commits
This revision was automatically updated to reflect the committed changes.
Closed by commit rGb6c24c161900: [lldb] [gdb-remote] Move 
ReadPacketWithOutputSupport() to client (authored by mgorny).
Herald added a project: LLDB.

Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D135028/new/

https://reviews.llvm.org/D135028

Files:
  lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp
  lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h
  lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp
  lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h


Index: lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h
===
--- lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h
+++ lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h
@@ -190,11 +190,6 @@
   PacketResult ReadPacket(StringExtractorGDBRemote &response,
Timeout<std::micro> timeout, bool sync_on_timeout);
 
-  PacketResult ReadPacketWithOutputSupport(
-  StringExtractorGDBRemote &response, Timeout<std::micro> timeout,
-  bool sync_on_timeout,
-  llvm::function_ref<void(llvm::StringRef)> output_callback);
-
   PacketResult WaitForPacketNoLock(StringExtractorGDBRemote &response,
Timeout<std::micro> timeout,
bool sync_on_timeout);
Index: lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp
===
--- lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp
+++ lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp
@@ -220,23 +220,6 @@
   return result;
 }
 
-GDBRemoteCommunication::PacketResult
-GDBRemoteCommunication::ReadPacketWithOutputSupport(
-StringExtractorGDBRemote &response, Timeout<std::micro> timeout,
-bool sync_on_timeout,
-llvm::function_ref<void(llvm::StringRef)> output_callback) {
-  auto result = ReadPacket(response, timeout, sync_on_timeout);
-  while (result == PacketResult::Success && response.IsNormalResponse() &&
- response.PeekChar() == 'O') {
-response.GetChar();
-std::string output;
-if (response.GetHexByteString(output))
-  output_callback(output);
-result = ReadPacket(response, timeout, sync_on_timeout);
-  }
-  return result;
-}
-
 GDBRemoteCommunication::PacketResult
 GDBRemoteCommunication::ReadPacket(StringExtractorGDBRemote &response,
Timeout<std::micro> timeout,
Index: lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h
===
--- lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h
+++ lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h
@@ -58,6 +58,11 @@
   llvm::StringRef payload, StringExtractorGDBRemote &response,
   std::chrono::seconds interrupt_timeout = std::chrono::seconds(0));
 
+  PacketResult ReadPacketWithOutputSupport(
+  StringExtractorGDBRemote &response, Timeout<std::micro> timeout,
+  bool sync_on_timeout,
+  llvm::function_ref<void(llvm::StringRef)> output_callback);
+
   PacketResult SendPacketAndReceiveResponseWithOutputSupport(
   llvm::StringRef payload, StringExtractorGDBRemote &response,
   std::chrono::seconds interrupt_timeout,
Index: lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp
===
--- lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp
+++ lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp
@@ -194,6 +194,23 @@
   return SendPacketAndWaitForResponseNoLock(payload, response);
 }
 
+GDBRemoteCommunication::PacketResult
+GDBRemoteClientBase::ReadPacketWithOutputSupport(
+StringExtractorGDBRemote &response, Timeout<std::micro> timeout,
+bool sync_on_timeout,
+llvm::function_ref<void(llvm::StringRef)> output_callback) {
+  auto result = ReadPacket(response, timeout, sync_on_timeout);
+  while (result == PacketResult::Success && response.IsNormalResponse() &&
+ response.PeekChar() == 'O') {
+response.GetChar();
+std::string output;
+if (response.GetHexByteString(output))
+  output_callback(output);
+result = ReadPacket(response, timeout, sync_on_timeout);
+  }
+  return result;
+}
+
 GDBRemoteCommunication::PacketResult
 GDBRemoteClientBase::SendPacketAndReceiveResponseWithOutputSupport(
 llvm::StringRef payload, StringExtractorGDBRemote &response,



[Lldb-commits] [lldb] b6c24c1 - [lldb] [gdb-remote] Move ReadPacketWithOutputSupport() to client

2022-10-03 Thread Michał Górny via lldb-commits

Author: Michał Górny
Date: 2022-10-03T18:42:49+02:00
New Revision: b6c24c161900a035f5ea7193f4816b6d192d6ac8

URL: 
https://github.com/llvm/llvm-project/commit/b6c24c161900a035f5ea7193f4816b6d192d6ac8
DIFF: 
https://github.com/llvm/llvm-project/commit/b6c24c161900a035f5ea7193f4816b6d192d6ac8.diff

LOG: [lldb] [gdb-remote] Move ReadPacketWithOutputSupport() to client

Move ReadPacketWithOutputSupport() from GDBRemoteCommunication
to GDBRemoteClientBase.  This function is client-specific and moving
it there simplifies followup patches that split communication into
separate thread.

Sponsored by: The FreeBSD Foundation

Differential Revision: https://reviews.llvm.org/D135028
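
For context, the moved function loops over GDB-remote 'O' (console output) packets, hex-decoding each payload for the output callback until the real response arrives. A small Python model of that protocol-level behaviour (a sketch, not LLDB's code):

# Model of ReadPacketWithOutputSupport: 'O' packets carry hex-encoded program
# output; anything else (including "OK") is the actual response.
def read_packet_with_output(read_packet, on_output):
    packet = read_packet()                        # e.g. "O48656c6c6f0a"
    while packet.startswith("O") and packet != "OK":
        on_output(bytes.fromhex(packet[1:]).decode(errors="replace"))
        packet = read_packet()
    return packet

packets = iter(["O48656c6c6f0a", "T05thread:01;"])
print(read_packet_with_output(lambda: next(packets), print))   # prints "Hello", then the stop reply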

Added: 


Modified: 
lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp
lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h
lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp
lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h

Removed: 




diff  --git a/lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp 
b/lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp
index ae85dbc68336d..394b62559da76 100644
--- a/lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp
+++ b/lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.cpp
@@ -194,6 +194,23 @@ GDBRemoteClientBase::SendPacketAndWaitForResponse(
   return SendPacketAndWaitForResponseNoLock(payload, response);
 }
 
+GDBRemoteCommunication::PacketResult
+GDBRemoteClientBase::ReadPacketWithOutputSupport(
+StringExtractorGDBRemote &response, Timeout<std::micro> timeout,
+bool sync_on_timeout,
+llvm::function_ref<void(llvm::StringRef)> output_callback) {
+  auto result = ReadPacket(response, timeout, sync_on_timeout);
+  while (result == PacketResult::Success && response.IsNormalResponse() &&
+ response.PeekChar() == 'O') {
+response.GetChar();
+std::string output;
+if (response.GetHexByteString(output))
+  output_callback(output);
+result = ReadPacket(response, timeout, sync_on_timeout);
+  }
+  return result;
+}
+
 GDBRemoteCommunication::PacketResult
 GDBRemoteClientBase::SendPacketAndReceiveResponseWithOutputSupport(
 llvm::StringRef payload, StringExtractorGDBRemote &response,

diff  --git a/lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h 
b/lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h
index 4dd67a2b11f7e..b47fee76a2ab5 100644
--- a/lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h
+++ b/lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h
@@ -58,6 +58,11 @@ class GDBRemoteClientBase : public GDBRemoteCommunication, 
public Broadcaster {
   llvm::StringRef payload, StringExtractorGDBRemote &response,
   std::chrono::seconds interrupt_timeout = std::chrono::seconds(0));
 
+  PacketResult ReadPacketWithOutputSupport(
+  StringExtractorGDBRemote &response, Timeout<std::micro> timeout,
+  bool sync_on_timeout,
+  llvm::function_ref<void(llvm::StringRef)> output_callback);
+
   PacketResult SendPacketAndReceiveResponseWithOutputSupport(
   llvm::StringRef payload, StringExtractorGDBRemote &response,
   std::chrono::seconds interrupt_timeout,

diff  --git a/lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp 
b/lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp
index b67cd7efd8846..7daf003fec7ba 100644
--- a/lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp
+++ b/lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.cpp
@@ -220,23 +220,6 @@ GDBRemoteCommunication::PacketResult 
GDBRemoteCommunication::GetAck() {
   return result;
 }
 
-GDBRemoteCommunication::PacketResult
-GDBRemoteCommunication::ReadPacketWithOutputSupport(
-StringExtractorGDBRemote &response, Timeout<std::micro> timeout,
-bool sync_on_timeout,
-llvm::function_ref<void(llvm::StringRef)> output_callback) {
-  auto result = ReadPacket(response, timeout, sync_on_timeout);
-  while (result == PacketResult::Success && response.IsNormalResponse() &&
- response.PeekChar() == 'O') {
-response.GetChar();
-std::string output;
-if (response.GetHexByteString(output))
-  output_callback(output);
-result = ReadPacket(response, timeout, sync_on_timeout);
-  }
-  return result;
-}
-
 GDBRemoteCommunication::PacketResult
 GDBRemoteCommunication::ReadPacket(StringExtractorGDBRemote &response,
Timeout<std::micro> timeout,

diff  --git a/lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h 
b/lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h
index 9e17d6cc65238..4e59bd5ec8be6 100644
--- a/lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h
+++ b/lldb/source/Plugins/Process/gdb-remote/GDBRemoteCommunication.h
@@ -190,11 +190,6 @@ class GDBRemoteCommunication : public Communication {
   PacketResult ReadPacket(StringExtractorGDBRemote &response,
   Timeout timeout, bool sync_

[Lldb-commits] [PATCH] D134906: Have MemoryCache cache addresses that were unreadable, so duplicate read attempts can be suppressed

2022-10-03 Thread Jason Molenda via Phabricator via lldb-commits
jasonmolenda added a comment.

Jim & David, thanks so much for the comments.  The most important question of 
correctness that David asks is whether we might try to read 32k from an address 
and mark that address as inaccessible because a page 24k into the memory read 
failed (for instance).  I tried to be conservative and only mark an address as 
inaccessible when we were able to read 0 bytes (ReadMemory returns partial 
reads).

I think adding a little hint in the error message, "(cached)" would make it 
clearer to an lldb maintainer what happened, and not distract the user with too 
much.

In D134906#3830290, @DavidSpickett 
wrote:

>> If we wanted to track these decisions it would be more appropriate to log 
>> them, but I'm not sure even that is necessary.
>
> Yeah this is a better idea, if we do it at all.
>
> Let me rephrase the question. If I had a memory read failing and I suspected 
> that the cache was marking it as unreadable how would I confirm that? If 
> there's a way to do that when it's really needed, then we're good.

When I was testing/debugging this, I did it by turning the packet log on :)

> Perhaps log only when a new address is added to the unreadable list? Then 
> with the memory log plus the packet log you could figure out the issue, even 
> if you didn't know that the cache had this unreadable address feature before 
> you started investigating.

Yes, I can add a log message to Process when I add an address, good idea.

Let me go through the patch again carefully to make sure I'm correctly handling 
partially-successful reads (not putting anything in the inaccessible memory 
cache) and tweak that error message a tad & add a bit of logging.
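
The conservative rule described above, where only a read that returns zero bytes marks an address unreadable and partial reads never do, amounts to something like this sketch (not the patch itself):

# Sketch: remember only addresses where a read returned nothing at all, so a
# partial read of a large range never poisons the cache.
class UnreadableAddressCache:
    def __init__(self, read_memory):
        self._read = read_memory          # read_memory(addr, size) -> bytes
        self._unreadable = set()

    def read(self, addr, size):
        if addr in self._unreadable:
            return b""                    # duplicate attempt suppressed
        data = self._read(addr, size)
        if len(data) == 0:                # only a fully failed read is cached
            self._unreadable.add(addr)
        return data                       # partial reads pass through untouched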


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134906/new/

https://reviews.llvm.org/D134906



[Lldb-commits] [PATCH] D135029: [lldb] [gdb-remote] Isolate ClientBase from Communication

2022-10-03 Thread Michał Górny via Phabricator via lldb-commits
mgorny added inline comments.



Comment at: lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h:116-118
+  GDBRemoteCommunication &GetCommunication() {
+return m_comm;
+  }

labath wrote:
> Is the plan to make this private/protected at some point, or something like 
> that? Otherwise, I'm not really sure what's the benefit of this over the 
> regular inheritance.
> 
> I like the idea of using composition instead of inheritance (I think we could 
> do something similar with GDBRemoteCommunication and Communication), but 
> right now this seems fairly messy, and the benefit is unclear.
Ideally, yes. However, I don't think I'm going to pursue it that far, i.e. 
someone else will have to take up the effort. And yes, I honestly doubt anybody 
will.

The main goal is to lay the groundwork for D135031, i.e. communication via a separate 
thread. What I've been aiming at is leaving `GetCommunication()` for stuff 
that's unlikely to break when invoked cross-thread (or unlikely to be invoked 
cross-thread), while abstracting away the part of the API that needs to be 
reimplemented to communicate via the comm thread.
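
As a stripped-down illustration of that split (toy code, not the lldb classes): the client owns the communication object, re-exposes only the packet API that a communication-thread implementation would have to reroute, and hands out the raw object for everything else.

# Toy illustration of composition: only the narrow packet API goes through the
# client (and can later be redirected through a dedicated comm thread), while
# a GetCommunication()-style accessor covers calls that are safe cross-thread.
class Communication:
    def send_packet(self, payload): ...
    def read_packet(self): ...
    def set_timeout(self, seconds): ...

class Client:
    def __init__(self, comm):
        self._comm = comm

    def send_packet_and_wait(self, payload):      # would be reimplemented
        self._comm.send_packet(payload)           # to go via the comm thread
        return self._comm.read_packet()

    def communication(self):                      # escape hatch for the rest
        return self._comm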


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D135029/new/

https://reviews.llvm.org/D135029



[Lldb-commits] [PATCH] D134882: [lldb] Move breakpoint hit reset code to Target::CleanupProcess

2022-10-03 Thread Pavel Labath via Phabricator via lldb-commits
labath added a comment.

Here's an even better idea: I'm now doing it in Target::CleanupProcess, right 
next to the code for resetting **watch**point hit counts.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134882/new/

https://reviews.llvm.org/D134882



[Lldb-commits] [PATCH] D134882: [lldb] Move breakpoint hit reset code to Target::CreateProcess

2022-10-03 Thread Pavel Labath via Phabricator via lldb-commits
labath updated this revision to Diff 464700.
labath added a comment.

use CleanupProcess instead


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134882/new/

https://reviews.llvm.org/D134882

Files:
  lldb/source/Target/Process.cpp
  lldb/source/Target/Target.cpp
  lldb/test/API/functionalities/gdb_remote_client/TestProcessConnect.py


Index: lldb/test/API/functionalities/gdb_remote_client/TestProcessConnect.py
===
--- lldb/test/API/functionalities/gdb_remote_client/TestProcessConnect.py
+++ lldb/test/API/functionalities/gdb_remote_client/TestProcessConnect.py
@@ -63,3 +63,54 @@
         self.dbg.GetSelectedTarget().GetProcess().Kill()
         lldbutil.expect_state_changes(self, self.dbg.GetListener(),
                                       self.process(), [lldb.eStateExited])
+    def test_breakpoint_count(self):
+        """
+        Test that breakpoint count gets reset for each new connection.
+        """
+        class MyResponder(MockGDBServerResponder):
+
+            def __init__(self):
+                super().__init__()
+                self.continued = False
+
+            def qfThreadInfo(self):
+                return "m47"
+
+            def qsThreadInfo(self):
+                return "l"
+
+            def setBreakpoint(self, packet):
+                return "OK"
+
+            def readRegister(self, reg):
+                # Pretend we're at the breakpoint after we've been resumed.
+                return "3412" if self.continued else "4747"
+
+            def cont(self):
+                self.continued = True
+                return "T05thread=47;reason:breakpoint"
+
+        # Connect to the first process and set our breakpoint.
+        self.server.responder = MyResponder()
+        target = self.createTarget("a.yaml")
+        process = self.connect(target)
+
+        bkpt = target.BreakpointCreateByAddress(0x1234)
+        self.assertTrue(bkpt.IsValid())
+        self.assertEqual(bkpt.GetNumLocations(), 1)
+
+        # "continue" the process. It should hit our breakpoint.
+        process.Continue()
+        self.assertState(process.GetState(), lldb.eStateStopped)
+        self.assertEqual(bkpt.GetHitCount(), 1)
+
+        # Now kill it, and connect again.
+        process.Kill()
+        self.server.stop()
+        self.server = MockGDBServer(self.server_socket_class())
+        self.server.start()
+
+        process = self.connect(target)
+
+        # The hit count should be reset.
+        self.assertEqual(bkpt.GetHitCount(), 0)
Index: lldb/source/Target/Target.cpp
===
--- lldb/source/Target/Target.cpp
+++ lldb/source/Target/Target.cpp
@@ -177,6 +177,7 @@
   // clean up needs some help from the process.
   m_breakpoint_list.ClearAllBreakpointSites();
   m_internal_breakpoint_list.ClearAllBreakpointSites();
+  ResetBreakpointHitCounts();
   // Disable watchpoints just on the debugger side.
   std::unique_lock<std::recursive_mutex> lock;
   this->GetWatchpointList().GetListMutex(lock);
Index: lldb/source/Target/Process.cpp
===
--- lldb/source/Target/Process.cpp
+++ lldb/source/Target/Process.cpp
@@ -2761,18 +2761,15 @@
 }
 
 Status Process::WillLaunch(Module *module) {
-  GetTarget().ResetBreakpointHitCounts();
   return DoWillLaunch(module);
 }
 
 Status Process::WillAttachToProcessWithID(lldb::pid_t pid) {
-  GetTarget().ResetBreakpointHitCounts();
   return DoWillAttachToProcessWithID(pid);
 }
 
 Status Process::WillAttachToProcessWithName(const char *process_name,
 bool wait_for_launch) {
-  GetTarget().ResetBreakpointHitCounts();
   return DoWillAttachToProcessWithName(process_name, wait_for_launch);
 }
 



[Lldb-commits] [PATCH] D134656: [LLDB][NativePDB] Add class/union layout bit size.

2022-10-03 Thread Pavel Labath via Phabricator via lldb-commits
labath added a comment.

In D134656#3819025, @zequanwu wrote:

>>> For members from parent classes, pdb only has information saying that its 
>>> direct parents class are at some offsets for this class. For class without 
>>> vtable, it's easy to calculate inherited member offsets by adding parent 
>>> class offsets with their member offsets. For class with vtable, it's more 
>>> complicated to calculate the offsets.
>>
>> Yes, so should we take the offsets from the debug info and provide them to 
>> clang (so that it does not have to recompute the offsets) ?
>
> Oh, it's already there. This patch just adds the missing bit_size.

Ah, cool. Ok then.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134656/new/

https://reviews.llvm.org/D134656



[Lldb-commits] [PATCH] D135029: [lldb] [gdb-remote] Isolate ClientBase from Communication

2022-10-03 Thread Pavel Labath via Phabricator via lldb-commits
labath added inline comments.



Comment at: lldb/source/Plugins/Process/gdb-remote/GDBRemoteClientBase.h:116-118
+  GDBRemoteCommunication &GetCommunication() {
+return m_comm;
+  }

Is the plan to make this private/protected at some point, or something like 
that? Otherwise, I'm not really sure what's the benefit of this over the 
regular inheritance.

I like the idea of using composition instead of inheritance (I think we could 
do something similar with GDBRemoteCommunication and Communication), but right 
now this seems fairly messy, and the benefit is unclear.


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D135029/new/

https://reviews.llvm.org/D135029



[Lldb-commits] [PATCH] D134991: [lldb] Add diagnostics

2022-10-03 Thread Pavel Labath via Phabricator via lldb-commits
labath added a subscriber: rupprecht.
labath added a comment.

Adding @rupprecht, as he might be interested in following this.

I don't want to get too involved in this, but the first thought on my mind is 
"do you really want to do all this from within a signal handler?". The fact 
that we're running this code means that we're already in a bad state, and it's 
pretty hard to say how far we will manage to get without triggering another 
crash. At the very least, I think we should print the crash dump directory as 
the first order of business, before we start running random callbacks.

There are various ways to solve that, like moving the dumping code to another 
process, or being very careful about what you run inside the crash handler. The 
one thing that they all have in common is that they are harder to 
design/implement than what you have now. So, if you want to try this out, and 
accept the risk that it may not be enough to capture all the crashes you're 
interested in, then I won't stand in your way...




Comment at: lldb/include/lldb/Target/Statistics.h:180
 
+class Stats {
+public:

What's the difference between this class and DebuggerStats above? e.g. if I 
wanted to add some new method, how would I know where to add it?



Comment at: lldb/source/Utility/Diagnostics.cpp:54
+bool Diagnostics::Dump(raw_ostream &stream) {
+  SmallString<128> diagnostics_dir;
+  std::error_code ec =

I am not sure how common this is, but I have recently seen (not in lldb, but 
another app) a bug, which essentially caused two threads to crash at once (one 
SEGV, one ABRT). In those situations, you probably don't want the crash-printing 
routines to race with each other. You can consider putting a (global) mutex in 
this function, or something like that. (unless the llvm function takes care of 
that already).
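
The suggestion amounts to serializing the whole dump behind one global lock, so two crashing threads cannot interleave their output; a toy sketch:

# Toy sketch of the suggested global mutex around the crash-dump path.
import threading

_dump_lock = threading.Lock()

def dump_diagnostics(callbacks, write):
    with _dump_lock:              # a second crashing thread waits here
        for cb in callbacks:
            cb(write)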


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134991/new/

https://reviews.llvm.org/D134991



[Lldb-commits] [PATCH] D134991: [lldb] Add diagnostics

2022-10-03 Thread David Spickett via Phabricator via lldb-commits
DavidSpickett added a comment.

I like the direction, thanks for working on this.

To no one's surprise, I was thinking about testing :)

- Errors related to the filesystem, e.g. can't open a path, are too complicated 
to set up on demand. The code looks to be passing on the error, which is good 
enough most of the time.
- The callbacks are added in c++, so testing if you add A then remove B and so 
on doesn't make sense on the same copy of lldb (and upstream will always have 
the same set of callbacks anyway).
- Dumping statistics uses an existing method, you just make up the filename. So 
it's the statistics code's responsibility to test what that function does.

Maybe there could be a smoke test that just checks that the dir is made and has 
some files ending in `stats.json` in it? I think clang does something like this 
though they may have a `--please-crash` option too.
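
Such a smoke test might look roughly like the sketch below. The `diagnostics dump` command and the way the output directory is recovered are placeholders, since this patch does not add a command yet:

    def test_diagnostics_smoke(self):
        """Hypothetical smoke test: trigger a diagnostics dump and check the
        resulting directory contains a stats file (assumes `os` is imported)."""
        result = lldb.SBCommandReturnObject()
        self.dbg.GetCommandInterpreter().HandleCommand("diagnostics dump", result)
        outdir = result.GetOutput().strip().split()[-1]   # assumes the dir is printed
        self.assertTrue(any(f.endswith("stats.json") for f in os.listdir(outdir)))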

Unrelated - there's maybe a situation where the same set of callbacks are added 
in a different order, perhaps? But am I correct in thinking that this isn't an 
issue because these will be writing to different files? Stats has one set of 
files, logging has another set, etc. So the end result is always a dir of 
separate files one or more for each thing that registers.




Comment at: lldb/include/lldb/Utility/Diagnostics.h:29
+public:
+  Diagnostics();
+  ~Diagnostics();

Can this be private? Code should only use `Initialize` if I am reading this 
right.



Comment at: lldb/include/lldb/Utility/Log.h:231
llvm::StringRef function, const char *format,
-   Args &&... args) {
+   Args &&...args) {
 Format(file, function,

These seem unrelated but not doing any harm.



Comment at: lldb/source/API/SBDebugger.cpp:222
 
+static void DumpDiagnostics(void* cookie) {
+  Diagnostics::Instance().Dump(llvm::errs());

What is `cookie` used for? (or rather, used elsewhere)



Comment at: lldb/source/Utility/Diagnostics.cpp:43
+  std::lock_guard guard(m_callbacks_mutex);
+  m_callbacks.push_back(callback);
+}

Is it worth adding an assert that the callback is not already in the list?

(and below, that the callback is in fact in the list)


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134991/new/

https://reviews.llvm.org/D134991



[Lldb-commits] [PATCH] D133778: Fix LLDB build on old Linux kernels (pre-4.1)

2022-10-03 Thread David Spickett via Phabricator via lldb-commits
DavidSpickett added a comment.

I only have newer kernels, but after replacing the type with what it looks like on an
older kernel and undefining PERF_ATTR_SIZE_VER5, lldb does build. So it looks 
like this is a good thing to land.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D133778/new/

https://reviews.llvm.org/D133778



[Lldb-commits] [PATCH] D134906: Have MemoryCache cache addresses that were unreadable, so duplicate read attempts can be suppressed

2022-10-03 Thread David Spickett via Phabricator via lldb-commits
DavidSpickett added a comment.

> If we wanted to track these decisions it would be more appropriate to log 
> them, but I'm not sure even that is necessary.

Yeah this is a better idea, if we do it at all.

Let me rephrase the question. If I had a memory read failing and I suspected 
that the cache was marking it as unreadable how would I confirm that? If 
there's a way to do that when it's really needed, then we're good.

Perhaps log only when a new address is added to the unreadable list? Then with 
the memory log plus the packet log you could figure out the issue, even if you 
didn't know that the cache had this unreadable address feature before you 
started investigating.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D134906/new/

https://reviews.llvm.org/D134906



[Lldb-commits] [PATCH] D133778: Fix LLDB build on old Linux kernels (pre-4.1)

2022-10-03 Thread David Spickett via Phabricator via lldb-commits
DavidSpickett added a comment.

Did this ever land? I only see 
https://github.com/llvm/llvm-project/commit/d35702efe73010409c75b1f1b64f205cc3b2f6d3.

We have a short time to get this into 15.0.2, so I'd like to land this if it is 
needed.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D133778/new/

https://reviews.llvm.org/D133778
