Thanks, Korey! I had already modified se.py for that. But since you mentioned
it, I tried just running this simple program through se.py to see what
happens. Here is what I still get:

command line: build/ALPHA/gem5.opt configs/example/se.py --cmd
tests/test-progs/hello/bin/alpha/linux/hello;tests/test-progs/hello/bin/alpha/linux/hello
--cpu-type detailed --caches --l2cache
Global frequency set at 1000000000000 ticks per second
0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
0: system.remote_gdb.listener: listening for remote gdb #1 on port 7001
**** REAL SIMULATION ****
info: Entering event queue @ 0.  Starting simulation...
info: Increasing stack size by one page.
info: Increasing stack size by one page.
gem5.opt: build/ALPHA/cpu/o3/fetch_impl.hh:645: void
DefaultFetch<Impl>::finishTranslation(Fault, Request*) [with Impl =
O3CPUImpl, Fault = RefCountingPtr<FaultBase>, Request* = Request*]:
Assertion `!finishTranslationEvent.scheduled()' failed.
Program aborted at cycle 9606500
Aborted

Is this a configuration error?
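For reference, the semicolon splitting that se.py does on the --cmd value can
be sketched roughly like this (a minimal standalone sketch; split_workloads is
a hypothetical helper, not the actual se.py code):

```python
# Minimal sketch of splitting a semicolon-separated --cmd value into
# one binary per thread context (hypothetical helper, not real se.py code).
def split_workloads(cmd_string):
    """Split a --cmd value like 'hello;hello' into a list of binaries."""
    return [w for w in cmd_string.split(';') if w]

workloads = split_workloads(
    "tests/test-progs/hello/bin/alpha/linux/hello;"
    "tests/test-progs/hello/bin/alpha/linux/hello")
print(len(workloads))  # two entries -> two SMT thread contexts on one CPU
```

Note that when the shell sees an unquoted semicolon it treats it as a command
separator, so the --cmd value should be quoted on the command line.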

Thanks,
Heba
On Mon, Feb 20, 2012 at 7:56 AM, Korey Sewell <[email protected]> wrote:

> If you look into se.py, there should be some Python code that allows you
> to specify multiple applications in one benchmark string. I believe the
> semicolon is parsed out of the string specifying the workloads and will
> let you load multiple benchmarks per CPU.
>
> If that file hasn't been tweaked too much, you should be able to do
> something like:
> gem5.opt ... --cmd="hello;hello"  --detailed ...
>
> and that would give you two hello world binaries on the O3CPU in SE mode.
>
> (But I would say the most important thing is to go through that se.py file
> and make sure you understand, at a high level, what's going on. Part of the
> "goodness" of gem5 is the ability to configure it through the Python front
> end.)
>
> On Mon, Feb 20, 2012 at 10:45 AM, Heba Saadeldeen <[email protected]> wrote:
>
>> Here is a quote I found about multiprogrammed workloads:
>> " If you're using the O3 model, you can also assign a vector of workload
>> objects to one CPU, in which case the CPU will run all of the workloads
>> concurrently in SMT mode. Note that SE mode has no thread scheduling; if
>> you need a scheduler, run in FS mode and use the fine scheduler built into
>> the Linux kernel."
>>
>> I do not need a scheduler; I just need to run them concurrently on the
>> same CPU in SMT mode. Is there a flag to start SMT mode?
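For what it's worth, the vector-of-workloads assignment that quote describes
might look roughly like this in an SE-mode config (a hypothetical fragment;
the class and parameter names DerivO3CPU, LiveProcess, and numThreads are
assumptions and may differ in your gem5 version):

```python
# Hypothetical SE-mode fragment: assign a vector of workload objects
# to one O3 CPU so it runs them concurrently as SMT threads.
from m5.objects import DerivO3CPU, LiveProcess

cpu = DerivO3CPU()
cpu.numThreads = 2  # one hardware thread context per workload
cpu.workload = [LiveProcess(pid=100, cmd=['hello']),
                LiveProcess(pid=101, cmd=['hello'])]
```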
>>
>> Thanks,
>> Heba
>>
>>
>> On Mon, Feb 20, 2012 at 7:28 AM, Heba Saadeldeen <[email protected]> wrote:
>>
>>> I still do not get it. I see in the CPU code that multiple threads can
>>> exist and run on the same CPU. Even in se.py there is a part that reads
>>> multiple workloads, separated by semicolons, to run on the same CPU. In
>>> O3CPU.py there is also a place where you specify the number of
>>> instructions fetched by each thread. I just want to run the multiple
>>> workloads as threads on the CPU; is that possible?
>>>
>>> Heba
>>>
>>>
>>> On Sat, Feb 18, 2012 at 12:24 PM, Gabe Black <[email protected]> wrote:
>>>
>>>> I'm pretty sure you can't run multiple workloads on the same CPU. SE
>>>> mode doesn't have a scheduler, so there would be no way to switch between
>>>> them. You'll have to use FS mode.
>>>>
>>>> Gabe
>>>>
>>>>
>>>> On 02/17/12 20:09, Heba Saadeldeen wrote:
>>>>
>>>> Hi,
>>>>
>>>> I am trying to run multiple workloads on the same CPU, but I get this
>>>> error:
>>>>
>>>> 0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
>>>> 0: system.remote_gdb.listener: listening for remote gdb #1 on port 7001
>>>> panic: same statistic name used twice!
>>>> name=system.cpu.workload1.num_syscalls
>>>>  @ cycle 0
>>>>
>>>> I also found that I can't use fast-forwarding with multiple workloads
>>>> on the same CPU, because gem5 simulates the fast-forwarded instructions
>>>> on a simple CPU model that does not support more than one workload.
>>>>
>>>> Any help is appreciated!
>>>> Thanks,
>>>> --
>>>> Heba
>>>>
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> gem5-users mailing list
>>>> [email protected]
>>>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>>>>
>>>
>>>
>>>
>>> --
>>> Heba
>>>
>>
>>
>>
>> --
>> Heba
>>
>>
>
>
>
> --
> - Korey
>
>



-- 
Heba
