The existing tick event is an example, as are the tick events in the
other CPUs. The simple CPUs will probably be easier to look at and
understand. Your event will be a new class, and you can define it
alongside the tick event. It can initially be scheduled in the CPU's
init() function in cpu/o3/cpu.cc. You'll want it to have a CPU pointer
which gets set when the object is constructed, and to call a function on
the cpu when the event's process() function is called. The CPU has the
information you want, I presume, so you'll want it to do the work. The
event can immediately reschedule itself for the next point in the future
when it's processed, or the CPU can handle that like it does for the
tick event if you need to be on a cycle boundary.
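
Very roughly, it could look something like this completely untested
sketch. The names (IPCEvent, traceIPC, ipcEvent) are made up, not
anything that exists in the tree; the plumbing just mirrors what
TickEvent in cpu/o3/cpu.hh already does:

class IPCEvent : public Event
{
  private:
    /** Pointer back to the CPU that owns the stats. */
    FullO3CPU<Impl> *cpu;

  public:
    /** Remember which CPU we belong to. */
    IPCEvent(FullO3CPU<Impl> *c) : Event(CPU_Tick_Pri), cpu(c) { }

    /** Do the work on the CPU, then immediately reschedule. */
    void process()
    {
        cpu->traceIPC();  // hypothetical helper, sketched below
        cpu->schedule(this, curTick + cpu->ticks(1000000));
    }

    const char *description() const { return "IPC trace event"; }
};

The CPU-side helper would just wrap your DPRINTF:

template <class Impl>
void
FullO3CPU<Impl>::traceIPC()
{
    DPRINTF(IPC, "cycle: %lld, inst: %lld\n",
            (uint64_t)numCycles.value(),
            (uint64_t)totalCommittedInsts.value());
}

and init() would kick the whole thing off with schedule(ipcEvent,
curTick + ticks(1000000)), where ipcEvent is an IPCEvent member
constructed with this.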
Gabe
Bartosz Wojciechowski wrote:
> Hello again,
>
> I've been trying to go through the code to see how the tick event is
> implemented and scheduled, and I must admit I feel a little overwhelmed
> right now. Could you please give me some more clues on how (and where
> exactly in the CPU-related class hierarchy) to implement a new event
> that will be scheduled every clock cycle, or point me to some other
> examples of such events? Thank you!
>
> Regards,
> Bartosz Wojciechowski
>
>
>> The tick event will not be scheduled if the CPU doesn't have anything
>> to do, for instance while it's waiting for an interrupt. You can create
>> and schedule your own recurring event that will always be scheduled
>> and use it for your DPRINTF.
>>
>> Gabe
>>
>> Bartosz Wojciechowski wrote:
>>
>>> Hello All,
>>>
>>> I'm trying to get traces of IPC every 10^6 clock cycles for every core
>>> in a 4-core setup. To do so, I've defined a new trace flag, and try
>>> using it only once every 1000000 cycles with this:
>>>
>>> if ((uint64_t)numCycles.value() % 1000000 == 0) {
>>>     DPRINTF(IPC, "cycle: %lld, inst: %lld\n",
>>>             (uint64_t)numCycles.value(),
>>>             (uint64_t)totalCommittedInsts.value());
>>> }
>>> (in: template <class Impl> void FullO3CPU<Impl>::tick())
>>>
>>> However, apparently at random, some of the DPRINTFs that I expect do
>>> not appear (or are not invoked), so I am missing some data. Moreover,
>>> this behaviour is not synchronised between cores (CPUs).
>>>
>>> Being new to m5, I'd really appreciate some feedback on what I may be
>>> doing wrong and also whether there is some established method of
>>> generating various traces.
>>>
>>> Thanks in advance,
>>> Bartosz Wojciechowski
>>>
>>> PS. Below is my code and how I use it.
>>>
>>> I've implemented this:
>>>
>>> diff -r 2b5fbdcbfb5d src/cpu/o3/SConscript
>>> --- a/src/cpu/o3/SConscript Fri Nov 26 20:47:23 2010 -0500
>>> +++ b/src/cpu/o3/SConscript Mon Nov 29 14:45:44 2010 +0100
>>> @@ -64,6 +64,7 @@
>>>      Source('store_set.cc')
>>>      Source('thread_context.cc')
>>>
>>> +    TraceFlag('IPC')
>>>      TraceFlag('LSQ')
>>>      TraceFlag('LSQUnit')
>>>      TraceFlag('MemDepUnit')
>>> diff -r 2b5fbdcbfb5d src/cpu/o3/cpu.cc
>>> --- a/src/cpu/o3/cpu.cc Fri Nov 26 20:47:23 2010 -0500
>>> +++ b/src/cpu/o3/cpu.cc Mon Nov 29 14:45:44 2010 +0100
>>> @@ -502,7 +502,10 @@
>>>
>>>      ++numCycles;
>>>
>>> -//    activity = false;
>>> +    // IPC logging
>>> +    if ((uint64_t)numCycles.value() % 1000000 == 0) {
>>> +        DPRINTF(IPC, "cycle: %lld, inst: %lld\n", (uint64_t)numCycles.value(), (uint64_t)totalCommittedInsts.value());
>>> +    }
>>>
>>>      //Tick each of the stages
>>>      fetch.tick();
>>>
>>> And I run it like this:
>>>
>>> time ../../build/ALPHA_SE/m5.opt --outdir=m5_gcc_4 --redirect-stdout \
>>> --stdout-file=out.txt --redirect-stderr --stderr-file=err.txt \
>>> --trace-flags=IPC ../../configs/example/se_tmp.py --detailed --caches \
>>> --l2cache -n4 --maxinst=1000000000 --cmd="gcc;gcc;gcc;gcc" \
>>> --options="166.i -o 466.o;167.i -o 467.o;168.i -o 468.i;169.i -o 469.o"
>>>
>>>
>>>