> On Apr 30, 2015, at 3:36 PM, Robert Flack <[email protected]> wrote:
>
> Hi,
>
> I've been looking into why expression evaluation is failing when targeting a
> remote Linux machine from a Mac lldb host, and it seems there are only two
> distinct problems remaining, both around memory allocation:
>
> 1. Can't find symbol for mmap.
> 2. Once found, lldb is calling mmap with incorrect constant values for
> MAP_ANON.
>
> For problem 1, the library being linked against (e.g.
> /lib/x86_64-linux-gnu/libc-2.19.so) is copied into a local module cache, but
> we don't copy the unstripped library in
> /usr/lib/debug/lib/x86_64-linux-gnu/libc-2.19.so (I'm assuming we can't call
> mmap from the symtab file given SymbolFileSymtab::FindFunctions returns 0).
> To avoid having to duplicate the symbol discovery (in
> Symbols::LocateExecutableSymbolFile), we should probably ask lldb-platform on
> the target to find the symbol files for the current target (I'm thinking
> Platform::ResolveSymbolFile is the right place).
>
> For problem 2, we're building the argument list to mmap, and the constant for
> MAP_ANON on Mac OS X is 0x1000 whereas on Linux it's 0x20. I'm not sure what
> the right way to fix this is. I could imagine asking Platform to allocate
> memory, but that would likely be an involved change; alternatively, we could
> ask the platform for various OS-specific constant values, which would be
> hard-coded into it when built for the target.
>
So we need to implement the allocate and deallocate memory packets in
lldb-server. They are already implemented in the client, but not in the
server:
addr_t
GDBRemoteCommunicationClient::AllocateMemory (size_t size, uint32_t permissions)
{
    if (m_supports_alloc_dealloc_memory != eLazyBoolNo)
    {
        m_supports_alloc_dealloc_memory = eLazyBoolYes;
        char packet[64];
        const int packet_len = ::snprintf (packet, sizeof(packet), "_M%" PRIx64 ",%s%s%s",
                                           (uint64_t)size,
                                           permissions & lldb::ePermissionsReadable ? "r" : "",
                                           permissions & lldb::ePermissionsWritable ? "w" : "",
                                           permissions & lldb::ePermissionsExecutable ? "x" : "");
        assert (packet_len < (int)sizeof(packet));
        StringExtractorGDBRemote response;
        if (SendPacketAndWaitForResponse (packet, packet_len, response, false) == PacketResult::Success)
        {
            if (response.IsUnsupportedResponse())
                m_supports_alloc_dealloc_memory = eLazyBoolNo;
            else if (!response.IsErrorResponse())
                return response.GetHexMaxU64(false, LLDB_INVALID_ADDRESS);
        }
        else
        {
            m_supports_alloc_dealloc_memory = eLazyBoolNo;
        }
    }
    return LLDB_INVALID_ADDRESS;
}

bool
GDBRemoteCommunicationClient::DeallocateMemory (addr_t addr)
{
    if (m_supports_alloc_dealloc_memory != eLazyBoolNo)
    {
        m_supports_alloc_dealloc_memory = eLazyBoolYes;
        char packet[64];
        const int packet_len = ::snprintf (packet, sizeof(packet), "_m%" PRIx64, (uint64_t)addr);
        assert (packet_len < (int)sizeof(packet));
        StringExtractorGDBRemote response;
        if (SendPacketAndWaitForResponse (packet, packet_len, response, false) == PacketResult::Success)
        {
            if (response.IsUnsupportedResponse())
                m_supports_alloc_dealloc_memory = eLazyBoolNo;
            else if (response.IsOKResponse())
                return true;
        }
        else
        {
            m_supports_alloc_dealloc_memory = eLazyBoolNo;
        }
    }
    return false;
}
Then lldb-server calls mmap itself on the native machine instead of the host
trying to guess which enum values will work on the target.
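A rough, self-contained sketch of what the server side of the client's
"_M<hex-size>,<perms>" packet above could look like. HandleAllocPacket is a
hypothetical name, not the actual lldb-server API, and a real implementation
would have to allocate in the inferior's address space (e.g. by having the
inferior run mmap), not map into the server's own; the point here is only that
the parsing and the mmap call happen with the target's own native constants:

```cpp
#include <sys/mman.h>
#include <cstdint>
#include <cstdlib>
#include <string>

// Hypothetical handler for an "_M<hex-size>,<perms>" allocate packet.
// Returns the mapped address, or UINT64_MAX on failure.
uint64_t HandleAllocPacket(const std::string &packet) {
  const size_t comma = packet.find(',');
  if (packet.compare(0, 2, "_M") != 0 || comma == std::string::npos)
    return UINT64_MAX; // malformed packet
  const uint64_t size =
      strtoull(packet.substr(2, comma - 2).c_str(), nullptr, 16);
  int prot = PROT_NONE;
  for (char c : packet.substr(comma + 1)) {
    if (c == 'r') prot |= PROT_READ;
    if (c == 'w') prot |= PROT_WRITE;
    if (c == 'x') prot |= PROT_EXEC;
  }
  // Because this code is compiled for and runs on the target,
  // MAP_ANONYMOUS has the correct native value automatically.
  void *p = mmap(nullptr, (size_t)size, prot,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return p == MAP_FAILED ? UINT64_MAX : (uint64_t)(uintptr_t)p;
}
```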
We also need a way to ask PlatformLinux to run an allocate/deallocate,
handing it a process. So we can add the following to lldb_private::Platform:
    virtual bool
    SupportsMemoryAllocation ();

    virtual lldb::addr_t
    AllocateMemory (lldb_private::Process *process, size_t size, uint32_t permissions, Error &error);

    virtual Error
    DeallocateMemory (lldb_private::Process *process, lldb::addr_t ptr);
Then lldb_private::Process can get the current platform, ask it whether it
supports allocating memory, and if so call
Platform::AllocateMemory()/Platform::DeallocateMemory().
Then the PlatformLinux can "do the right thing" and use the right defines.
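To make that control flow concrete, here is a minimal sketch of the fallback
logic. The class and method shapes are simplified stand-ins for the proposed
lldb_private::Platform hooks, not real LLDB code:

```cpp
#include <cstddef>
#include <cstdint>

static const uint64_t kInvalidAddr = ~0ULL; // stand-in for LLDB_INVALID_ADDRESS

struct Platform {
  virtual ~Platform () {}
  // Base class: no platform-specific allocation support.
  virtual bool SupportsMemoryAllocation () { return false; }
  virtual uint64_t AllocateMemory (size_t /*size*/, uint32_t /*permissions*/) {
    return kInvalidAddr;
  }
};

struct PlatformLinux : Platform {
  bool SupportsMemoryAllocation () override { return true; }
  uint64_t AllocateMemory (size_t size, uint32_t permissions) override {
    // Would forward to lldb-server's allocate-memory packet, which runs
    // mmap natively with the target's own MAP_ANON value.
    (void)size; (void)permissions;
    return 0x1000; // placeholder address for the sketch
  }
};

// What Process could do: prefer the platform's native allocation, otherwise
// fall back to the existing expression-based path.
uint64_t ProcessAllocateMemory (Platform &platform, size_t size, uint32_t perms) {
  if (platform.SupportsMemoryAllocation ())
    return platform.AllocateMemory (size, perms);
  return kInvalidAddr; // caller falls back to calling mmap via an expression
}
```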
> Anyways, I wanted to send this out to see if anyone had any thoughts on
> either of these issues or was already working on them. I have verified (by
> hacking in the correct const values for linux and placing debug libs in a
> path where they will be found) that this fixes expression evaluation (and 14
> tests start passing) for mac->linux debugging.
>
> Thanks in advance for any suggestions,
> Rob
So my suggestion is to implement the memory allocation/deallocation in
lldb-server: since it runs natively, it avoids the problems we run into when
we evaluate functions remotely using #define values taken from the host
system...
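To illustrate why: the value of MAP_ANON is baked in from the native headers
at compile time, so code built for the target always gets the right constant,
while a cross-platform host can only guess. A trivial demonstration:

```cpp
#include <sys/mman.h>

// MAP_ANON comes from the native headers at compile time:
// 0x20 when built against Linux headers, 0x1000 against Darwin's.
unsigned NativeMapAnon () { return MAP_ANON; }
```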
>
> P.S. the 14 tests passing mac->linux by fixing this (for other people looking
> at cross platform tests):
> Test11588.py
> TestAnonymous.py
> TestBreakpointConditions.py
> TestCPPStaticMethods.py
> TestCStrings.py
> TestCallStdStringFunction.py
> TestDataFormatterCpp.py
> TestDataFormatterStdList.py
> TestExprDoesntBlock.py
> TestExprHelpExamples.py
> TestFunctionTypes.py
> TestPrintfAfterUp.py
> TestSBValuePersist.py
> TestSetValues.py
>
> _______________________________________________
> lldb-dev mailing list
> [email protected]
> http://lists.cs.uiuc.edu/mailman/listinfo/lldb-dev