Hi all,

On the Ruby side I implemented a similar feature in the caches instead of
in the core model; see src/mem/ruby/system/BankedArray.

This implementation uses annotations on the state transitions within the
cache coherence protocol to model bank and port conflicts and the bandwidth
to/from the caches. It is a fairly simplistic implementation, and
unfortunately it only applies to Ruby, not the classic caches.
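
If it helps as a point of comparison, the core idea is small enough to
sketch. The snippet below is only an illustration with made-up names (it is
not the actual BankedArray interface): each bank tracks the tick until which
it is busy, an access is allowed only if its bank is free, and a successful
access reserves the bank for the access latency.

    #include <cstdint>
    #include <vector>

    // Illustration only: one "busy until" tick per bank; an access maps
    // to a bank with a simple modulo on its set/block index.
    class BankedArraySketch
    {
      public:
        BankedArraySketch(unsigned num_banks, uint64_t access_latency)
            : numBanks(num_banks), accessLatency(access_latency),
              busyUntil(num_banks, 0) {}

        // True if the bank holding 'idx' is idle at tick 'now'.
        bool tryAccess(uint64_t idx, uint64_t now) const
        {
            return busyUntil[idx % numBanks] <= now;
        }

        // Reserve the bank for the duration of one access.
        void reserve(uint64_t idx, uint64_t now)
        {
            busyUntil[idx % numBanks] = now + accessLatency;
        }

      private:
        unsigned numBanks;
        uint64_t accessLatency;
        std::vector<uint64_t> busyUntil;
    };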

Just thought I'd point this out so we can try to minimize duplicate work if
possible :).

Thanks,

Jason


On Tue, Jan 8, 2013 at 9:20 AM, Andreas Hansson <[email protected]> wrote:

>
>
> > On Jan. 8, 2013, 6:22 a.m., Ali Saidi wrote:
> > > At first glance, this seems like a pretty useful fix. Thanks, Amin!
> Anyone else have thoughts? Andreas, how does this interact with your
> changes?
>
> I have some reservations. I agree that it is a very useful addition; I am
> just not thrilled with the way it is done.
>
> If we really want to separate reads and writes, then we should check them
> separately in recvResponse and deal with them independently (do we really
> want to separate them?). Moreover, we should not send a retry on the next
> cycle, but rather send it when a response comes back and frees up a "port".
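>
> In code I am thinking of something along these lines (only a rough sketch
> with invented names, not code from either patch): track the free read and
> write "ports", refuse the request and remember that a retry is owed when
> none are free, and send the retry from recvResponse once the completed
> access returns its port.
>
>     // Illustration only; not the O3/LSQ code.
>     struct PortTracker
>     {
>         int freeReadPorts;
>         int freeWritePorts;
>         bool retryPending = false;
>
>         PortTracker(int reads, int writes)
>             : freeReadPorts(reads), freeWritePorts(writes) {}
>
>         // Called before sending a request; if no port of the required
>         // type is free, the request is not sent and a retry is owed.
>         bool trySend(bool is_read)
>         {
>             int &free = is_read ? freeReadPorts : freeWritePorts;
>             if (free == 0) {
>                 retryPending = true;
>                 return false;
>             }
>             --free;
>             return true;
>         }
>
>         // Called when a response comes back: the access frees its port,
>         // and that is the point at which the pending retry is sent.
>         void recvResponse(bool is_read)
>         {
>             int &free = is_read ? freeReadPorts : freeWritePorts;
>             ++free;
>             if (retryPending) {
>                 retryPending = false;
>                 sendRetry();  // hypothetical hook to the stalled sender
>             }
>         }
>
>         void sendRetry() { /* poke whoever was refused earlier */ }
>     };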
>
>
> - Andreas
>
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/1422/#review3788
> -----------------------------------------------------------
>
>
> On Sept. 16, 2012, 5:06 p.m., Amin Farmahini wrote:
> >
> > -----------------------------------------------------------
> > This is an automatically generated e-mail. To reply, visit:
> > http://reviews.gem5.org/r/1422/
> > -----------------------------------------------------------
> >
> > (Updated Sept. 16, 2012, 5:06 p.m.)
> >
> >
> > Review request for Default.
> >
> >
> > Description
> > -------
> >
> > I made some changes to O3 to model the bandwidth between O3 and L1. By
> bandwidth I mean the number of requests and responses sent or received each
> cycle (not the amount of data transferred). Before putting a patch online,
> I would like to get your feedback on it.
> > Here is what I did to model cache ports for O3. I limit both the number
> of requests sent by O3 and the number of responses received by O3.
> >
> > For REQUESTS:
> > I keep separate counters for read requests (loads) and write requests
> (stores).
> > LOADS: O3 limits the number of read requests sent each cycle to the
> number of defined cache read ports.
> > STORES: Similarly, O3 limits the number of write requests sent each
> cycle to the number of defined cache write ports.
> > Note that no matter how many ports are defined, there is still only a
> single actual cache port used for all read and write requests, so, just
> like the current gem5 code, only one dcachePort is defined. However, I
> limit the number of requests sent per cycle to the number of cache ports
> defined in the parameters.
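> >
> > A stripped-down sketch of that counting scheme (simplified names, not
> > the actual diff) looks like this:
> >
> >     // Illustration only: per-cycle counters for loads and stores,
> >     // checked against the configured number of cache read/write ports.
> >     struct CachePortLimits
> >     {
> >         const int readPorts;     // from a CPU parameter
> >         const int writePorts;
> >         int loadsSentThisCycle = 0;
> >         int storesSentThisCycle = 0;
> >
> >         CachePortLimits(int rd, int wr)
> >             : readPorts(rd), writePorts(wr) {}
> >
> >         // Reset at the start of every CPU cycle.
> >         void startCycle() { loadsSentThisCycle = storesSentThisCycle = 0; }
> >
> >         // A load/store may go to the cache only while its per-cycle
> >         // counter is below the corresponding port count.
> >         bool canSendLoad() const { return loadsSentThisCycle < readPorts; }
> >         bool canSendStore() const { return storesSentThisCycle < writePorts; }
> >         void sentLoad() { ++loadsSentThisCycle; }
> >         void sentStore() { ++storesSentThisCycle; }
> >     };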
> >
> > For RESPONSES:
> > LOADS: I limit the number of load responses received each cycle to the
> number of cache read ports. Once O3 reaches its load response limit, the
> next load response in that cycle is rejected and the cache port is blocked
> until the next cycle. Note that blocking the cache port also prevents O3
> from receiving write responses, which is not correct, but I have no way to
> block read and write responses separately.
> > STORES: Same as above.
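> >
> > As a similarly rough sketch (again invented names, not the actual code),
> > the response side is a per-cycle limit where returning false stands for
> > "reject the response and block the port until next cycle":
> >
> >     // Illustration only: count responses accepted this cycle and refuse
> >     // any beyond the limit; refusing blocks the single dcachePort, which
> >     // is why write responses get held up along with loads.
> >     struct ResponseLimit
> >     {
> >         const int maxLoadResps;      // = number of cache read ports
> >         const int maxStoreResps;     // = number of cache write ports
> >         int loadRespsThisCycle = 0;
> >         int storeRespsThisCycle = 0;
> >
> >         ResponseLimit(int ld, int st)
> >             : maxLoadResps(ld), maxStoreResps(st) {}
> >
> >         void startCycle() { loadRespsThisCycle = storeRespsThisCycle = 0; }
> >
> >         // Returning false models "reject and block until next cycle".
> >         bool acceptResponse(bool is_load)
> >         {
> >             int &count = is_load ? loadRespsThisCycle : storeRespsThisCycle;
> >             const int limit = is_load ? maxLoadResps : maxStoreResps;
> >             if (count >= limit)
> >                 return false;
> >             ++count;
> >             return true;
> >         }
> >     };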
> >
> > I tried to avoid details such as split packets, pending packets, etc.,
> to keep this a brief overview. I don't believe what I implemented is the
> best way to model cache ports, so your feedback would be appreciated. What
> Steve mentioned seems like a more concrete way to implement it, but my lack
> of knowledge about the bus code in gem5 pushed me toward modifying the O3
> code instead.
> >
> >
> > Diffs
> > -----
> >
> >   src/cpu/o3/lsq_unit_impl.hh UNKNOWN
> >   src/cpu/o3/lsq_impl.hh UNKNOWN
> >   src/cpu/o3/lsq_unit.hh UNKNOWN
> >   src/cpu/base_dyn_inst.hh UNKNOWN
> >   src/cpu/o3/O3CPU.py UNKNOWN
> >   src/cpu/o3/iew_impl.hh UNKNOWN
> >   src/cpu/o3/inst_queue.hh UNKNOWN
> >   src/cpu/o3/inst_queue_impl.hh UNKNOWN
> >   src/cpu/o3/lsq.hh UNKNOWN
> >
> > Diff: http://reviews.gem5.org/r/1422/diff/
> >
> >
> > Testing
> > -------
> >
> > A few small benchmarks, run only in SE mode with the classic memory system.
> >
> >
> > Thanks,
> >
> > Amin Farmahini
> >
> >
>
_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
