Hi Mark,

Thanks again for the quick response, really appreciate it. Did you happen
to hit this error, where the skid buffer exceeds its maximum size:

panic: Skidbuffer Exceeded Max Size
 @ cycle 2666580
[skidInsert:build/X86/cpu/o3/rename_impl.hh, line 791]
Memory Usage: 1054688 KBytes
Program aborted at cycle 2666580
Aborted

Any ideas?
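For context, my rough mental model of why that panic might fire (a hedged sketch; the formula and names below are my guesses for illustration, not the actual code in rename_impl.hh):

```python
# Hypothetical sketch of skid-buffer sizing in the rename stage.
# Assumption (not verified against gem5 source): the buffer must hold one
# decode group per cycle of decode-to-rename delay, plus the group
# arriving in the current cycle.

def skid_buffer_max(decode_to_rename_delay, decode_width):
    return (decode_to_rename_delay + 1) * decode_width

# Default pipeline vs. a depth multiplier of 4 (decode width 8 assumed):
print(skid_buffer_max(1, 8))  # 16 instructions
print(skid_buffer_max(4, 8))  # 40 instructions
```

If the buffer's capacity does not grow along with the delay parameters, a stall lasting across the extra delay cycles could push more instructions into the skid buffer than it was sized for, which would explain the panic.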

Regards,
Ankita

On Thu, Apr 12, 2012 at 5:40 PM, Mark Browning <[email protected]> wrote:

> I believe that all the delays commented out are related to paths not
> actually used by any instructions, but they have parameters just defined in
> the code "just in case". Perhaps someone with more intricate knowledge of
> the O3 CPU model can confirm? (Or I could, you know, read the code)
>
> For a total of 20 stages, a "depth" of 4 is correct.
>
> Yes, the width is completely separate. I just included that in case you
> wanted to experiment with that dimension as well (I did).
>
> -Mark
>
>
> On Thu, Apr 12, 2012 at 1:56 PM, Ankita (Garg) Goel 
> <[email protected]>wrote:
>
>> Thanks a lot Mark for the super quick response. So I tried out the
>> suggested changes and my benchmark seems to be running fine. However, have
>> a couple of questions.
>>
>> - Quite a few of the delay parameters seem to be commented out. Don't we
>> have to set those as well?
>> - The width parameters are independent of the pipeline depth, right?
>> - To model a 20-stage pipeline, I have set the depth to 4.
>>
>>
>> Thanks a lot for your help!
>>
>> Regards,
>> Ankita
>>
>>
>> On Thu, Apr 12, 2012 at 1:21 PM, Mark Browning <[email protected]> wrote:
>>
>>> I just did this a few days ago.
>>>
>>> Below is a snippet of my config script (you don't even need to
>>> recompile!). I'm not sure about the commented-out forwarding paths. You
>>> might have been getting assert failures if you didn't increase
>>> forwardComSize and backComSize, which are the sizes of the buffers
>>> connecting the stages.
>>>
>>> Assuming each system.cpu is an O3CPU, and you want a straight-up linear
>>> scaling of the default 5-stage pipeline (the "depth" variable here is
>>> really a depth multiplier):
>>>
>>> for i in xrange(np):
>>>     system.cpu[i].workload = multiprocesses[i]
>>>
>>>     # main pipeline stages
>>>     system.cpu[i].fetchToDecodeDelay  = depth
>>>     system.cpu[i].decodeToRenameDelay = depth
>>>     system.cpu[i].renameToIEWDelay    = 2*depth
>>>     system.cpu[i].iewToCommitDelay    = depth
>>>
>>>     # forwarding paths
>>>     system.cpu[i].wbDepth              = depth
>>>     system.cpu[i].commitToDecodeDelay  = depth
>>>     #system.cpu[i].commitToFetchDelay  = depth
>>>     #system.cpu[i].commitToIEWDelay    = depth
>>>     #system.cpu[i].commitToRenameDelay = depth
>>>     #system.cpu[i].decodeToFetchDelay  = depth
>>>     #system.cpu[i].iewToDecodeDelay    = depth
>>>     #system.cpu[i].iewToFetchDelay     = depth
>>>     #system.cpu[i].iewToRenameDelay    = depth
>>>     #system.cpu[i].issueToExecuteDelay = depth
>>>     #system.cpu[i].renameToDecodeDelay = depth
>>>     #system.cpu[i].renameToFetchDelay  = depth
>>>     system.cpu[i].renameToROBDelay     = depth
>>>
>>>     # inter-stage communication buffers
>>>     system.cpu[i].forwardComSize       = 5*depth
>>>     system.cpu[i].backComSize          = 5*depth
>>>
>>>     # widths
>>>     system.cpu[i].fetchWidth    = width
>>>     system.cpu[i].decodeWidth   = width
>>>     system.cpu[i].dispatchWidth = width
>>>     system.cpu[i].issueWidth    = width
>>>     system.cpu[i].wbWidth       = width
>>>     system.cpu[i].renameWidth   = width
>>>     system.cpu[i].commitWidth   = width
>>>     system.cpu[i].squashWidth   = width
>>>
>>> Good luck!
>>>
>>>
>>> On Thu, Apr 12, 2012 at 1:06 PM, Ankita (Garg) Goel <
>>> [email protected]> wrote:
>>>
>>>> Hi,
>>>>
>>>> I want to model a deeper pipeline in the X86 simulation, around 20-31
>>>> stages. The ISCA tutorial slides mentioned that this could be achieved
>>>> by adding varying amounts of delay between the existing 7 stages in
>>>> O3CPU.py. However, when I add some delays, the simulation aborts due to
>>>> an assert failure. Has anyone tried this, or does anyone have ideas on
>>>> how it could be done?
>>>>
>>>> Thanks a lot for your help!
>>>>
>>>> --
>>>> Regards,
>>>> Ankita
>>>> Graduate Student
>>>> Department of Computer Science
>>>> University of Texas at Austin
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> gem5-users mailing list
>>>> [email protected]
>>>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> Regards,
>> Ankita
>> Graduate Student
>> Department of Computer Science
>> University of Texas at Austin
>>
>>
>>
>>
>
>
>



-- 
Regards,
Ankita
Graduate Student
Department of Computer Science
University of Texas at Austin
