That was a guess. As I said, I turned on the debugger to see when it
starts eating memory. As you can see, the last messages it prints are:
339069000: system.cpu + A0 T0 : 0x852f93.0  :   MOV_R_I : limm   eax,
0x9 : IntAlu :  D=0x0000000000000009
339069500: system.cpu.icache: set be: moving blk 452f80 to MRU
339069500: system.cpu.icache: ReadReq (ifetch) 452f98 hit

After that no further messages are printed, and I can see with the top
command that the memory usage goes up and up until it consumes all memory.
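Since memcheck only reports leaks when the process exits, and here gem5 gets killed before that, a heap profiler may be more useful for seeing where the growth is allocated. Below is a minimal sketch using valgrind's massif tool; the gem5 command line is abbreviated from the one quoted further down, and the exact paths and options are illustrative, so adjust them for your checkout:

```shell
# Massif snapshots the heap periodically, so it can show which call
# sites are allocating even if gem5 never exits cleanly.
valgrind --tool=massif --time-unit=ms \
    ../build/X86/m5.debug ../configs/example/se.py \
    -c tonto_base.amd64-m64-gcc44-nn --cpu-type=detailed \
    --caches --l2cache

# After killing the run, pretty-print the recorded snapshots
# (massif writes massif.out.<pid> in the working directory):
ms_print massif.out.<pid> | less
```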


On 4/27/12, Nilay Vaish <[email protected]> wrote:
> How do you know the instruction at which the memory starts leaking? What
> should we conclude from the instruction trace in your mail? I am unable to
> arrive at any conclusion from the valgrind report that you had attached.
> Apart from the info on uninitialized values, I did not find any useful
> output produced by valgrind.
>
> --
> Nilay
>
> On Fri, 27 Apr 2012, Mahmood Naderan wrote:
>
>> tonto with the test input uses about 4 GB and runs for about 2 seconds
>> on a real machine.
>>
>> I also used the test input with gem5. However, again after tick
>> 300000000, all 30 GB of memory is used and then gem5 is killed. The
>> same behaviour with ref input...
>>
>> I ran the following command:
>> valgrind --tool=memcheck --leak-check=full --track-origins=yes
>> --suppressions=../util/valgrind-suppressions ../build/X86/m5.debug
>> --debug-flags=Cache,ExecAll,Bus,CacheRepl,Context
>> --trace-start=339050000 ../configs/example/se.py -c
>> tonto_base.amd64-m64-gcc44-nn --cpu-type=detailed -F 5000000 --maxtick
>> 10000000 --caches --l2cache --prog-interval=100000
>>
>>
>> I am also attaching the report again. At the instruction where the
>> memory leak begins, you can see:
>> ...
>> 339066000: system.cpu + A0 T0 : 0x83d48d    : call   0x15afe
>> 339066000: system.cpu + A0 T0 : 0x83d48d.0  :   CALL_NEAR_I : limm
>> t1, 0x15afe : IntAlu :  D=0x0000000000015afe
>> 339066500: system.cpu + A0 T0 : 0x83d48d.1  :   CALL_NEAR_I : rdip
>> t7, %ctrl153,  : IntAlu :  D=0x000000000083d492
>> 339067000: system.cpu.dcache: set 9a: moving blk 5aa680 to MRU
>> 339067000: system.cpu.dcache: WriteReq 5aa6b8 hit
>> 339067000: system.cpu + A0 T0 : 0x83d48d.2  :   CALL_NEAR_I : st   t7,
>> SS:[rsp + 0xfffffffffffffff8] : MemWrite :  D=0x000000000083d492
>> A=0x7fffffffe6b8
>> 339067500: system.cpu + A0 T0 : 0x83d48d.3  :   CALL_NEAR_I : subi
>> rsp, rsp, 0x8 : IntAlu :  D=0x00007fffffffe6b8
>> 339068000: system.cpu + A0 T0 : 0x83d48d.4  :   CALL_NEAR_I : wrip   ,
>> t7, t1 : IntAlu :
>> 339068500: system.cpu.icache: set be: moving blk 452f80 to MRU
>> 339068500: system.cpu.icache: ReadReq (ifetch) 452f90 hit
>> 339068500: system.cpu + A0 T0 : 0x852f90    : mov    r10, rcx
>> 339068500: system.cpu + A0 T0 : 0x852f90.0  :   MOV_R_R : mov   r10,
>> r10, rcx : IntAlu :  D=0x0000000000000022
>> 339069000: system.cpu.icache: set be: moving blk 452f80 to MRU
>> 339069000: system.cpu.icache: ReadReq (ifetch) 452f90 hit
>> 339069000: system.cpu + A0 T0 : 0x852f93    : mov    eax, 0x9
>> 339069000: system.cpu + A0 T0 : 0x852f93.0  :   MOV_R_I : limm   eax,
>> 0x9 : IntAlu :  D=0x0000000000000009
>> 339069500: system.cpu.icache: set be: moving blk 452f80 to MRU
>> 339069500: system.cpu.icache: ReadReq (ifetch) 452f98 hit
>>
>>
>> What is your opinion then?
>> Regards,
>>
>> On 4/27/12, Steve Reinhardt <[email protected]> wrote:
>>> Also, if you do run valgrind, use the util/valgrind-suppressions file to
>>> suppress spurious reports.  Read the valgrind docs to see how this
>>> works.
>>>
>>> Steve
>>>
> _______________________________________________
> gem5-users mailing list
> [email protected]
> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>


-- 
// Naderan *Mahmood;
