On Thu, 27 Dec 2007, Vamsee Priya wrote:

> Hi
>
> I have tried LD_PRELOAD and UMEM_DEBUG with my program on SPARC.
> Everything worked. I am also unable to find any bug in my program.
>
> No clue as to what the culprit is...

Are you willing to share the core dump, and/or the application source code 
(for the function active_out, to start with)?

Thx,
FrankH.

>
> Thanks
> Priya
>
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, December 26, 2007 6:52 PM
> To: Vamsee Priya
> Cc: [EMAIL PROTECTED]; opensolaris-discuss@opensolaris.org
> Subject: Re: [osol-discuss] SIGSEGV in
> libc.so.1`_malloc_unlocked on Solaris x86 machine
>
>
>
> On Wed, 26 Dec 2007, Vamsee Priya wrote:
>
>> Hi,
>>
>> From the umem_status output I too agree that something in my program
>> corrupted the memory. I am working out what caused the problem. The same
>> program always works fine on the SPARC platform. Why is it causing
>> problems only on the x86 architecture?
>
> There are differences between CPU architectures that go beyond "this is
> 32-bit, this is 64-bit". Again, data structure alignment/padding rules and
> operand sizes come to mind.
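>
> As a purely illustrative sketch (none of this is from your code), a
> struct like the following already has a different size and layout under
> the 32-bit SPARC and x86 ABIs, because the two ABIs align a double
> differently:
>
>     struct rec {
>         int    id;    /* offset 0 on both ABIs                       */
>         double val;   /* offset 8 on 32-bit SPARC (8-byte alignment),
>                          offset 4 on 32-bit x86 (4-byte alignment)   */
>     };
>     /* sizeof(struct rec) is 16 on 32-bit SPARC but 12 on 32-bit x86.
>      * Code that sizes its malloc() from a hand-computed constant
>      * rather than sizeof() can therefore overflow on one platform
>      * while appearing to work on the other. */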
>
> Have you ever run your program on SPARC with libumem/UMEM_DEBUG? It might
> well fail in the same way (under memory debugging). As said, whether a bug
> such as this causes a program failure or not "depends" - on how lucky you
> are :)
>
> From the stack traces you have, the function active_out() is the place to
> look. You allocate a piece of memory there, do something with it, in the
> process overwrite the buffer beyond its end, and then you try to free it.
> That's when libumem tells you "oh no, not with me ... I know what you
> did ...".
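>
> Purely as a hypothetical sketch of that class of bug - the function names
> get_meta(), active_out() and meta_free() are taken from your stack trace,
> but the bodies below are invented for illustration - it could look like:
>
>     #include <stdlib.h>
>     #include <string.h>
>
>     static char *get_meta(size_t n)
>     {
>         return malloc(n);       /* n <= 80 here: umem_alloc_80 cache  */
>     }
>
>     static void meta_free(char *p)
>     {
>         free(p);                /* libumem verifies the redzone here
>                                    and reports the corruption         */
>     }
>
>     void active_out(void)
>     {
>         size_t n = 72;
>         char *buf = get_meta(n);
>
>         if (buf == NULL)
>             return;
>         memset(buf, 0, n + 1);  /* BUG: writes one byte past the end  */
>         meta_free(buf);
>     }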
>
> FrankH.
>
>>
>> Thanks
>> Priya
>>
>> -----Original Message-----
>> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
>> Of [EMAIL PROTECTED]
>> Sent: Wednesday, December 26, 2007 6:36 PM
>> To: Vamsee Priya
>> Cc: opensolaris-discuss@opensolaris.org
>> Subject: Re: [osol-discuss] SIGSEGV in libc.so.1`_malloc_unlocked
>> on Solaris x86 machine
>>
>>
>>
>>> This is the output I get from umem_status when I attach mdb to the
>>> running process.
>>>
>>> Status:         ready and active
>>> Concurrency:    0
>>> Logs:           (inactive)
>>> Message buffer:
>>> umem allocator: redzone violation: write past end of buffer
>>> buffer=80c7c68  bufctl=80c8ce8  cache: umem_alloc_80
>>
>> This error basically says: you allocated a buffer of some size "X"
>> (<= 80 bytes), but you wrote past the end of that buffer.
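>>
>> For example (illustrative only, not your code), a malloc() of around 72
>> bytes is typically served from the umem_alloc_80 cache, and under
>> UMEM_DEBUG even a single byte written past what you asked for lands in
>> the redzone that libumem checks when the buffer is freed:
>>
>>     char *p = malloc(72);   /* served from the umem_alloc_80 cache      */
>>     p[72] = '\0';           /* write past the end of the 72-byte buffer */
>>     free(p);                /* redzone check fails here: "redzone
>>                                violation: write past end of buffer"     */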
>>
>> The buffer was allocated from this point in your application:
>>
>>> previous transaction on buffer 80c7c68:
>>> thread=1  time=T-0.000415973  slab=80b8ed8  cache: umem_alloc_80
>>> libumem.so.1`?? (0xfef99a48)
>>> libumem.so.1`umem_cache_alloc+0xe1
>>> libumem.so.1`umem_alloc+0x3f
>>> libumem.so.1`malloc+0x23
>>> ipfs_diff.exe`get_meta+0x28e            <--- here
>>> ipfs_diff.exe`active_out+0xb5
>>> ipfs_diff.exe`active+0xe0
>>> ipfs_diff.exe`main+0xd59
>>> ipfs_diff.exe`_start+0x80
>>
>>
>> And it was freed here, at which point the corruption was detected.
>>
>>> umem: heap corruption detected
>>> stack trace:
>>> libumem.so.1`?? (0xfef96099)
>>> libumem.so.1`?? (0xfef98b1c)
>>> libumem.so.1`umem_free+0xf6
>>> libumem.so.1`?? (0xfef97c05)
>>> libumem.so.1`free+0x14
>>> ipfs_diff.exe`meta_free+0xbf
>>> ipfs_diff.exe`active_out+0x44e
>>> ipfs_diff.exe`active+0xe0
>>> ipfs_diff.exe`main+0xd59
>>> ipfs_diff.exe`_start+0x80
>>
>> You could look at the buffer (+ 0t80) and see what was written there.
>>
>> Casper
>>
>>
>>
>>
>
>
>

_______________________________________________
opensolaris-discuss mailing list
opensolaris-discuss@opensolaris.org
