[gem5-users] Cache model in gem5

2015-07-16 Thread Will
Hi all,


I'm trying to model a new cache with a separate tagging mechanism. Does anyone 
know whether I could use gem5's cache model as RAM only and ignore the tag 
function entirely? 
I'm wondering which would be faster: modifying the current cache model or 
rewriting it.


I would appreciate your help.


Best regards,
Will







___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

Re: [gem5-users] Could cache connected without bus?

2015-07-16 Thread Will
Hi Andreas,


Thank you for your sincere help. I've tried again, and it is OK to connect two 
caches back to back.
I want to model a new cache with several sub-modules, each of which may 
contribute to latency.
So I'm wondering by what means I could connect two memory objects.
Is using ports and packets the best fit for my situation? 
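
For what it's worth, a back-to-back connection in a config script can look 
roughly like this (a sketch only; it assumes the classic memory model, and the 
object names, parameter names, and values are illustrative and vary between 
gem5 versions):

```python
# Hypothetical fragment: chain two caches directly, port to port, no bus.
from m5.objects import BaseCache

first = BaseCache(size='32kB', assoc=2, hit_latency=2,
                  response_latency=2, mshrs=4, tgts_per_mshr=20)
second = BaseCache(size='256kB', assoc=8, hit_latency=10,
                   response_latency=10, mshrs=16, tgts_per_mshr=20)

system.cpu.dcache_port = first.cpu_side   # CPU -> first cache
first.mem_side = second.cpu_side          # first cache -> second, directly
second.mem_side = system.membus.slave     # second cache -> memory bus
```

Requests then travel between the two objects as packets over that port pair, 
and each object contributes its own configured latency.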


I'm new to gem5; many thanks.


Best regards,
Will





At 2015-07-16 22:59:14, "Andreas Hansson"  wrote:

Hi Will,


In general you should be fine to connect two caches back to back. The question 
is, why would you? Why not make one of the caches larger?


Andreas


From: gem5-users  on behalf of Will 

Reply-To: gem5 users mailing list 
Date: Thursday, 16 July 2015 15:56
To: m5-users 
Subject: [gem5-users] Could cache connected without bus?



Hello,


I've attempted to connect two caches without a bus, but I got an error.
Does anybody know whether I can connect two memory objects directly, i.e. 
without a bus?


I would appreciate it if someone could shed some light on this.


Best regards,
Will



-- IMPORTANT NOTICE: The contents of this email and any attachments are 
confidential and may also be privileged. If you are not the intended recipient, 
please notify the sender immediately and do not disclose the contents to any 
other person, use it for any purpose, or store or copy the information in any 
medium. Thank you.

ARM Limited, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ, Registered 
in England & Wales, Company No: 2557590
ARM Holdings plc, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ, 
Registered in England & Wales, Company No: 2548782

Re: [gem5-users] DRAMCtrl: Question on read/write draining while not using the write threshold.

2015-07-16 Thread Prathap Kolakkampadath
Hello Andreas,

I kind of figured out what's going on.

With the modified DRAM controller switching mechanism, the DRAM controller's
write buffer and the LLC write buffers become full, because the DRAM controller
doesn't service writes as long as there are reads in the queue. Once the LLC
write buffer is full, the LLC controller blocks the CPU-side port; as a result,
the core cannot generate any further misses to the LLC. At this point, the DRAM
controller continues to service reads from the read queue; this can generate
evictions at the LLC. These evicted writes get queued at the DRAM controller
port. Once the DRAM read queue is empty, the controller switches to writes.
After a write burst, plus propagation delay, one write buffer is freed and the
core can generate more read fills. However, the read appears at the DRAM
controller only after the writes queued at the port (due to evictions) are
serviced.

Do you think this hypothesis is correct?

Thanks,
Prathap

On Thu, Jul 16, 2015 at 11:44 AM, Prathap Kolakkampadath <
kvprat...@gmail.com> wrote:

> Hello Andreas,
>
>
> Below are the changes:
>
> @@ -1295,7 +1295,8 @@
>
>  // we have so many writes that we have to transition
>  if (writeQueue.size() > writeHighThreshold) {
> -                switch_to_writes = true;
> +                if (readQueue.empty())
> +                    switch_to_writes = true;
>  }
>  }
>
> @@ -1332,7 +1333,7 @@
>  if (writeQueue.empty() ||
>  (writeQueue.size() + minWritesPerSwitch < writeLowThreshold &&
>   !drainManager) ||
> -            (!readQueue.empty() && writesThisTime >= minWritesPerSwitch)) {
> +            !readQueue.empty()) {
>  // turn the bus back around for reads again
>  busState = WRITE_TO_READ;
>
> Previously, I used some bank-reservation schemes and did not use all the
> banks. Now I re-ran without any changes other than the above and still get
> a *mean* writes_per_turnaround of ~15.
> Once the cache is blocked because the write buffers are full, the core
> should be able to send another request to DRAM as soon as one write buffer
> is freed.
> In my system this round-trip time is 45.5 ns [24 (L2 hit + miss latency) +
> 4 (L1 hit + miss latency) + 7.5 (tBurst) + 10 (Xbar request + response)].
> Note that the static latencies are set to 0.
>
> I am trying to figure out the unexpected number of writes processed per
> switch.
> Also attached the gem5 statistics.
>
> Thanks,
> Prathap
>
>
> On Thu, Jul 16, 2015 at 6:06 AM, Andreas Hansson 
> wrote:
>
>>  Hi Prathap,
>>
>>  It sounds like something is going wrong in your write-switching
>> algorithm. Have you verified that a read is actually showing up when you
>> think it is?
>>
>>  If needed, is there any chance you could post the patch on RB, or
>> include the changes in a mail?
>>
>>  Andreas
>>
>>   From: gem5-users  on behalf of Prathap
>> Kolakkampadath 
>> Reply-To: gem5 users mailing list 
>> Date: Thursday, 16 July 2015 00:36
>> To: gem5 users mailing list 
>> Subject: [gem5-users] DRAMCtrl: Question on read/write draining while
>> not using the write threshold.
>>
>> Hello Users,
>>
>>  I have experimented with modifying the DRAM controller write-draining
>> algorithm such that the DRAM controller always processes reads and
>> switches to writes only when the read queue is empty; the controller
>> switches from writes back to reads immediately when a read arrives in the
>> read queue.
>>
>>  With this modification, I ran a very memory-intensive test on four cores
>> simultaneously. Each miss generates a read (line fill) and a write
>> (write-back) to DRAM.
>>
>>  First, let me state what I expect: the DRAM controller continues to
>> process reads; meanwhile the DRAM write queue fills up and eventually
>> fills up the write buffers in the cache, so the LLC locks up and no
>> further reads or writes reach the DRAM from the cores.
>> At this point, the DRAM controller processes reads until the read queue is
>> empty, then switches to writes and processes them until a new read request
>> arrives. Note that the LLC is blocked at this moment. Once a write is
>> processed and the corresponding cache write buffer is cleared, a core can
>> generate a new miss (which generates a line fill first). During this
>> round-trip time (45 ns as observed in my system, with tBURST = 7.5 ns),
>> the DRAM controller can process almost 6 requests (45/7.5). After that it
>> should switch to reads.
>>
>>  However, from the gem5 statistics, I observe that the mean
>> writes_per_turnaround is 30 instead of ~6. I don't understand why this is
>> the case. Can someone help me understand this behaviour?
>>
>>  Thanks,
>>  Prathap
>>
>>

Re: [gem5-users] Why cache misses are decreasing when core frequency increase?

2015-07-16 Thread Andreas Hansson
Hi Nimish,

How do you determine cache misses (what stat are you looking at)? Are you 
running the same workload in the two scenarios (i.e. are the actual 
instructions executed the same)? Is it full system (and if so, are you changing 
the core frequency without the OS knowing about it)?

Can you shed some more light on your experiment? Overall I’d say it’s a bad 
idea to be changing clocks behind the OS’s back...

Andreas

From: gem5-users <gem5-users-boun...@gem5.org> on behalf of Nimish Girdhar <nimi...@tamu.edu>
Reply-To: gem5 users mailing list <gem5-users@gem5.org>
Date: Thursday, 16 July 2015 23:00
To: gem5 users mailing list <gem5-users@gem5.org>
Cc: Gaurav Sharma <gaurav1...@gmail.com>
Subject: Re: [gem5-users] Why cache misses are decreasing when core frequency 
increase?


Does anybody have any idea what might be happening here?
Any help would be appreciated.
Thanks,

On Jul 14, 2015 9:38 AM, "Nimish Girdhar" <nimi...@tamu.edu> wrote:
Hello,

I am trying to use DVFS for my project, but I want the frequency control in 
hardware, so I cannot use the DVFS support provided by gem5, as that is at the 
kernel level. For my project I added each core and its L1 caches to a separate 
clock domain and hacked the code to change the frequency of the domains 
whenever I wanted.

To check whether it works, I fired off two runs: one with the default frequency 
settings (2 GHz), and another in which I doubled the frequency of each domain, 
so each core runs at 4 GHz.

Now, looking at the stats, I see the simulation time dropping to almost half, 
which is expected. But I cannot explain the cache stats: the cache misses for 
all caches also decrease by almost half. Can anybody explain how that happens?

I am running an ARM full-system simulation with the classic memory model. All 
memory settings are default.
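
For reference, the per-core domain setup described above usually looks 
something like this in a config script (a sketch only; the surrounding fs.py 
plumbing, and names such as system.voltage_domain, are assumptions):

```python
# Hypothetical fragment: one SrcClockDomain per core (and its L1 caches).
from m5.objects import SrcClockDomain

for cpu in system.cpu:
    cpu.clk_domain = SrcClockDomain(clock='2GHz',
                                    voltage_domain=system.voltage_domain)

# The hacked-in frequency change then amounts to something like:
# system.cpu[0].clk_domain.clock = '4GHz'
```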

Thanks,
--
Warm regards
Nimish Girdhar
Department of Electrical and Computer Engineering
Texas A&M University


Re: [gem5-users] Why cache misses are decreasing when core frequency increase?

2015-07-16 Thread Nimish Girdhar
Does anybody have any idea what might be happening here?
Any help would be appreciated.
Thanks,
On Jul 14, 2015 9:38 AM, "Nimish Girdhar"  wrote:

> Hello,
>
> I am trying to use DVFS for my project, but I want the frequency control
> in hardware, so I cannot use the DVFS support provided by gem5, as that is
> at the kernel level. For my project I added each core and its L1 caches to
> a separate clock domain and hacked the code to change the frequency of the
> domains whenever I wanted.
>
> To check whether it works, I fired off two runs: one with the default
> frequency settings (2 GHz), and another in which I doubled the frequency
> of each domain, so each core runs at 4 GHz.
>
> Now, looking at the stats, I see the simulation time dropping to almost
> half, which is expected. But I cannot explain the cache stats: the cache
> misses for all caches also decrease by almost half. Can anybody explain
> how that happens?
>
> I am running an ARM full-system simulation with the classic memory model.
> All memory settings are default.
>
> Thanks,
> --
> Warm regards
> Nimish Girdhar
> Department of Electrical and Computer Engineering
> Texas A&M University
>

Re: [gem5-users] Stream benchmark in SE mode, bandwidth does not degrade when run in parallel

2015-07-16 Thread Andreas Hansson
Hi Timo,

Note that even the ‘timing’ core is not representative of anything
realistic. If all you do is dependent loads/stores (lat_mem_rd with
thrashing) you should be OK, but in general, for any performance numbers,
use minor or arm_detailed.

Concerning the low bandwidth, you are running without caches. Have a look
at m5out/config.dot.pdf or preferably m5out/config.dot.svg for an
illustration of the system. If you do not see these files, make sure you
have py-dot installed.

Andreas

On 16/07/2015 22:06, "gem5-users on behalf of Timo Schneider"
 wrote:

>On Thu, 2015-07-16 at 21:33 +0100, Andreas Hansson wrote:
>
>Hi Andreas,
>
>> As a general rule, never use any performance number from atomic mode.
>> Atomic mode is for fast forwarding and warming (and for anything
>> non-temporal). The only notion of time is ‘enough to not confuse the
>>OS’.
>>
>> I would recommend to re-run your experiment with a realistic timing core
>> model (e.g. minor or arm_detailed), and a realistic DRAM controller
>>(e.g.
>> DDR3_1600_x64).
>
>Thanks! That helped! I am using --cpu-type=timing
>--mem-type=DDR3_2133_x64 now and it seems to work --- I get 27.9 MB/s
>for one and 17.5 MB/s for two procs.
>
>I am surprised that it is that low, but I am sure that the total memory
>bandwidth can be configured somewhere.
>
>Regards,
>Timo
>



Re: [gem5-users] Stream benchmark in SE mode, bandwidth does not degrade when run in parallel

2015-07-16 Thread Timo Schneider
On Thu, 2015-07-16 at 21:33 +0100, Andreas Hansson wrote:

Hi Andreas,

> As a general rule, never use any performance number from atomic mode.
> Atomic mode is for fast forwarding and warming (and for anything
> non-temporal). The only notion of time is ‘enough to not confuse the OS’.
> 
> I would recommend to re-run your experiment with a realistic timing core
> model (e.g. minor or arm_detailed), and a realistic DRAM controller (e.g.
> DDR3_1600_x64).

Thanks! That helped! I am using --cpu-type=timing
--mem-type=DDR3_2133_x64 now and it seems to work --- I get 27.9 MB/s
for one and 17.5 MB/s for two procs.

I am surprised that it is that low, but I am sure that the total memory
bandwidth can be configured somewhere.
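
For SimpleMemory at least, peak bandwidth and latency are plain parameters 
(a sketch; the values and the mem_ctrl/mem_ranges names are illustrative):

```python
# Hypothetical fragment: SimpleMemory exposes bandwidth/latency directly.
from m5.objects import SimpleMemory

system.mem_ctrl = SimpleMemory(range=system.mem_ranges[0],
                               latency='30ns',
                               bandwidth='12.8GB/s')
```

For the DDR3 controllers, the achievable bandwidth instead follows from the 
DRAM timing parameters of the controller model.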

Regards,
Timo


Re: [gem5-users] Stream benchmark in SE mode, bandwidth does not degrade when run in parallel

2015-07-16 Thread Andreas Hansson
Hi Timo,

As a general rule, never use any performance number from atomic mode.
Atomic mode is for fast forwarding and warming (and for anything
non-temporal). The only notion of time is ‘enough to not confuse the OS’.

I would recommend to re-run your experiment with a realistic timing core
model (e.g. minor or arm_detailed), and a realistic DRAM controller (e.g.
DDR3_1600_x64).

I hope that explains it.

Andreas

On 16/07/2015 21:26, "gem5-users on behalf of Timo Schneider"
 wrote:

>Hi!
>
>I am running the STREAM benchmark [1] in gem5 in SE mode, like this:
>
>./build/X86/gem5.opt --debug-flags=ExecTicks ./configs/example/se.py
>--mem-type=SimpleMemory -n 1 -c /gem5tests/stream
>
>The bandwidth reported is 2000 MB/s. Now if I run it on two CPUs in
>parallel, like this:
>
>./build/X86/gem5.opt --debug-flags=ExecTicks ./configs/example/se.py
>--mem-type=SimpleMemory -n 2 -c "/gem5tests/stream;/gem5tests/stream"
>
>the reported bandwidth is still ~2000 MB/s, reported by both processes!
>I expected it to be on the order of 1000 MB/s. The same happens with
>different memory types, etc.
>
>Can someone explain why the memory bandwidth is not halved?
>
>Thank you!
>
>



[gem5-users] Stream benchmark in SE mode, bandwidth does not degrade when run in parallel

2015-07-16 Thread Timo Schneider
Hi!

I am running the STREAM benchmark [1] in gem5 in SE mode, like this:

./build/X86/gem5.opt --debug-flags=ExecTicks ./configs/example/se.py 
--mem-type=SimpleMemory -n 1 -c /gem5tests/stream 

The bandwidth reported is 2000 MB/s. Now if I run it on two CPUs in
parallel, like this:

./build/X86/gem5.opt --debug-flags=ExecTicks ./configs/example/se.py 
--mem-type=SimpleMemory -n 2 -c "/gem5tests/stream;/gem5tests/stream"

the reported bandwidth is still ~2000 MB/s, reported by both processes!
I expected it to be on the order of 1000 MB/s. The same happens with
different memory types, etc.

Can someone explain why the memory bandwidth is not halved?

Thank you!



Re: [gem5-users] Could cache connected without bus?

2015-07-16 Thread Andreas Hansson
Hi Will,

In general you should be fine to connect two caches back to back. The question 
is, why would you? Why not make one of the caches larger?

Andreas

From: gem5-users <gem5-users-boun...@gem5.org> on behalf of Will <alpha0...@yeah.net>
Reply-To: gem5 users mailing list <gem5-users@gem5.org>
Date: Thursday, 16 July 2015 15:56
To: m5-users <m5-us...@m5sim.org>
Subject: [gem5-users] Could cache connected without bus?

Hello,

I've attempted to connect two caches without a bus, but I got an error.
Does anybody know whether I can connect two memory objects directly, i.e. 
without a bus?

I would appreciate it if someone could shed some light on this.

Best regards,
Will




[gem5-users] Could cache connected without bus?

2015-07-16 Thread Will
Hello,


I've attempted to connect two caches without a bus, but I got an error.
Does anybody know whether I can connect two memory objects directly, i.e. 
without a bus?


I would appreciate it if someone could shed some light on this.


Best regards,
Will

Re: [gem5-users] DRAMCtrl: Question on read/write draining while not using the write threshold.

2015-07-16 Thread Andreas Hansson
Hi Prathap,

It sounds like something is going wrong in your write-switching algorithm. Have 
you verified that a read is actually showing up when you think it is?

If needed, is there any chance you could post the patch on RB, or include the 
changes in a mail?

Andreas

From: gem5-users <gem5-users-boun...@gem5.org> on behalf of Prathap Kolakkampadath <kvprat...@gmail.com>
Reply-To: gem5 users mailing list <gem5-users@gem5.org>
Date: Thursday, 16 July 2015 00:36
To: gem5 users mailing list <gem5-users@gem5.org>
Subject: [gem5-users] DRAMCtrl: Question on read/write draining while not using 
the write threshold.

Hello Users,

I have experimented with modifying the DRAM controller write-draining algorithm 
such that the DRAM controller always processes reads and switches to writes 
only when the read queue is empty; the controller switches from writes back to 
reads immediately when a read arrives in the read queue.
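
The policy just described can be sketched as a toy queue model (plain Python, 
not gem5 code; servicing one request per tick is a simplifying assumption):

```python
# Toy model of the modified drain policy: service reads whenever any are
# queued; drain writes only while the read queue is empty; a newly arrived
# read pulls the controller straight back to reads.
from collections import deque

def service_order(arrivals):
    """arrivals: iterable of (tick, 'R' or 'W'); one request serviced per tick."""
    pending = deque(sorted(arrivals))
    read_q, write_q, order = deque(), deque(), []
    tick = 0
    while pending or read_q or write_q:
        while pending and pending[0][0] <= tick:
            _, kind = pending.popleft()
            (read_q if kind == 'R' else write_q).append(kind)
        if read_q:                 # reads always have priority
            order.append(read_q.popleft())
        elif write_q:              # writes only when no reads are queued
            order.append(write_q.popleft())
        tick += 1
    return order

# The read arriving at tick 3 preempts the remaining writes:
print(service_order([(0, 'R'), (0, 'W'), (0, 'W'), (0, 'R'), (2, 'W'), (3, 'R')]))
# → ['R', 'R', 'W', 'R', 'W', 'W']
```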

With this modification, I ran a very memory-intensive test on four cores 
simultaneously. Each miss generates a read (line fill) and a write (write-back) 
to DRAM.

First, let me state what I expect: the DRAM controller continues to process 
reads; meanwhile the DRAM write queue fills up and eventually fills up the 
write buffers in the cache, so the LLC locks up and no further reads or writes 
reach the DRAM from the cores.
At this point, the DRAM controller processes reads until the read queue is 
empty, then switches to writes and processes them until a new read request 
arrives. Note that the LLC is blocked at this moment. Once a write is processed 
and the corresponding cache write buffer is cleared, a core can generate a new 
miss (which generates a line fill first). During this round-trip time (45 ns as 
observed in my system, with tBURST = 7.5 ns), the DRAM controller can process 
almost 6 requests (45/7.5). After that it should switch to reads.
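
That expectation is just the ratio of the two numbers:

```python
# Back-of-the-envelope check of the expected writes per turnaround.
round_trip_ns = 45.0  # observed miss round-trip time
t_burst_ns = 7.5      # one DRAM burst (tBURST)

writes_per_turnaround = round_trip_ns / t_burst_ns
print(writes_per_turnaround)  # → 6.0
```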

However, from the gem5 statistics, I observe that the mean writes_per_turnaround 
is 30 instead of ~6. I don't understand why this is the case. Can someone help 
me understand this behaviour?

Thanks,
Prathap



Re: [gem5-users] MSHR Queue Full Handling

2015-07-16 Thread Andreas Hansson
Hi all,

The best way to customise your L1 instances is in the config script itself. If 
you use fs.py, I’d suggest doing it there.

Andreas

From: gem5-users <gem5-users-boun...@gem5.org> on behalf of Prathap Kolakkampadath <kvprat...@gmail.com>
Reply-To: gem5 users mailing list <gem5-users@gem5.org>
Date: Thursday, 16 July 2015 00:00
To: gem5 users mailing list <gem5-users@gem5.org>
Subject: Re: [gem5-users] MSHR Queue Full Handling

Hello Davesh,

I think it should be possible by passing the desired L1 MSHR settings for each 
core while instantiating the dcache in CacheConfig.py.
Also look at the BaseCache constructor to see how these parameters are set.
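
A sketch of what that could look like in CacheConfig.py (parameter names 
follow the classic cache; the per-core values and the loop shape are 
illustrative assumptions):

```python
# Hypothetical fragment: different L1-D MSHR counts per core.
mshrs_per_core = [4, 4, 8, 16]  # illustrative values

for i, cpu in enumerate(system.cpu):
    cpu.dcache.mshrs = mshrs_per_core[i]
    cpu.dcache.tgts_per_mshr = 20
```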


Thanks,
Prathap


[gem5-users] A query regarding cross compilation of the kernel for ARM FS simulation

2015-07-16 Thread rahul shrivastava
Hi,

I have followed this link to set up a DVFS system in an ARM FS simulation:
http://www.m5sim.org/Running_gem5#Experimenting_with_DVFS

One of the steps cross-compiles the kernel with the following toolchain, which
targets hard-float machines:

arm-linux-gnueabihf

However, when I cross-compile the kernel with the above toolchain, start the
FS simulation, and log in from m5term, I can see that the libraries the kernel
uses are for soft-float machines, not hard-float machines:

root@gem5sim:/lib# ls -l | grep -i arm
drwxr-xr-x  3 root root   4096 2011-08-15 11:19 arm-linux-gnueabi
lrwxrwxrwx  1 root root     28 2011-08-15 11:14 ld-linux.so.3 -> arm-linux-gnueabi/ld-2.13.so


I have the following three questions:
1) Shouldn't we cross-compile the kernel with arm-linux-gnueabi instead of
arm-linux-gnueabihf?

2) When I cross-compile my project with arm-linux-gnueabi and try to execute
the binary in a gem5 FS simulation, I get a segfault from an illegal
instruction. Could you shed some light on this?

3) I am using the following options for cross-compiling my project:

-march=armv7 -mthumb -mthumb-interwork -mfpu=vfp -msoft-float

Could you please let me know what other options might be required, if the
options are the issue?


Regards
Rahul