What about the underlying request?

--
Nilay


On Thu, 20 Oct 2011, Steve Reinhardt wrote:

That's how it's supposed to work... the target is responsible for deleting
the packet, though it often simply reuses the packet for the response
message.
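
For concreteness, the receive-side convention looks roughly like the mock
below. This is only an illustrative sketch, not the actual Packet/RubyPort
code; the names are just modelled on gem5's Packet API.

#include <cstdio>

// Minimal stand-in for gem5's Packet, just enough to show who frees what.
struct MockPacket {
    explicit MockPacket(bool nr) : needsResp(nr) {}
    bool needsResponse() const { return needsResp; }
    void makeResponse() { isResponse = true; }  // recycle the same object
    bool needsResp;
    bool isResponse = false;
};

// What a receiving target typically does with an incoming request packet.
void handleRequest(MockPacket *pkt)
{
    // ... perform the access ...
    if (pkt->needsResponse()) {
        pkt->makeResponse();   // reuse the request packet as the response
        std::printf("response sent back to the requester\n");
        // ownership travels back with the response; the requester frees it
    } else {
        // fire-and-forget: the target now owns the packet (and its request)
        // and is responsible for deleting them
        delete pkt;
    }
}

int main()
{
    MockPacket *p = new MockPacket(true);
    handleRequest(p);
    delete p;                              // requester frees the recycled response

    handleRequest(new MockPacket(false));  // target deletes this one
    return 0;
}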

Steve

On Thu, Oct 20, 2011 at 11:36 AM, Nilay Vaish <[email protected]> wrote:


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/894/#review1613
-----------------------------------------------------------



src/mem/ruby/system/RubyPort.cc
<http://reviews.m5sim.org/r/894/#comment2096>

   I figured out that the packet may not be processed at this point in
time, but may instead be scheduled for processing at a later time. Is it
assured that the receiver will always delete both the packet and the
request?


- Nilay


On 2011-10-17 23:50:47, Nilay Vaish wrote:

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/894/
-----------------------------------------------------------

(Updated 2011-10-17 23:50:47)


Review request for Default.


Summary
-------

This patch implements the functionality for forwarding invalidations and
replacements from the L1 cache of the Ruby memory system to the O3 CPU.
The implementation adds a list of ports to RubyPort. Whenever a replacement
or an invalidation is performed, the L1 cache forwards it to all of these
ports, which I believe means the LSQ in the case of the O3 CPU. Those who
understand the O3 LSQ should take a close look at the implementation and
figure out (at least qualitatively) whether something is missing or
erroneous.
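
For illustration, here is a rough sketch of the forwarding step. The class
and method names below are made up for the example and are not the actual
RubyPort interface:

#include <cstdint>
#include <cstdio>
#include <vector>

// Stand-in for a CPU-side port; in the real patch this would be a gem5 port
// that delivers an invalidation to the O3 LSQ.
struct CpuSidePort {
    void notifyInvalidate(uint64_t lineAddr) {
        std::printf("invalidate line %#llx delivered to a port\n",
                    (unsigned long long)lineAddr);
    }
};

class RubyPortSketch {
  public:
    void registerPort(CpuSidePort *p) { ports.push_back(p); }

    // Called by the L1 controller on a replacement or an external invalidation.
    void forwardInvalidation(uint64_t lineAddr) {
        for (CpuSidePort *p : ports)
            p->notifyInvalidate(lineAddr);
    }

  private:
    std::vector<CpuSidePort *> ports;  // the list of ports added to RubyPort
};

int main()
{
    CpuSidePort lsqPort;                 // e.g. the O3 LSQ's port
    RubyPortSketch rp;
    rp.registerPort(&lsqPort);
    rp.forwardInvalidation(0x1000);      // as if the L1 just gave up line 0x1000
    return 0;
}

On the O3 side, presumably the LSQ would compare the invalidated line
address against in-flight loads and squash/replay as required.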

This patch only modifies the MESI CMP directory protocol. I will modify the
other protocols once we sort out the major issues surrounding this patch.

My understanding is that this should ensure an SC execution, as long as
Ruby itself can support SC. But I think Ruby does not currently enforce
any particular memory model. A couple of issues that need discussion --

* Can this get into a deadlock? A CPU may not be able to proceed if
  a particular cache block is repeatedly invalidated before the CPU
  can retire the actual load/store instruction. How do we ensure that
  at least one instruction is retired before an invalidation/replacement
  is processed?

* How do we test this implementation? Is it possible to implement some of
  the litmus tests that regularly appear in papers on consistency models,
  or those present in the manuals from AMD and Intel? I have tested that
  Ruby will forward the invalidations, but not the part where the LSQ needs
  to act on them. (A sketch of one such test is included below.)
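
As an example of the kind of litmus test meant above, here is a classic
store-buffering (Dekker-style) program. In practice it would be
cross-compiled for the simulated target and run many times in a loop:

#include <atomic>
#include <cstdio>
#include <thread>

// Store-buffering litmus test: under SC the outcome r0 == 0 && r1 == 0 is
// forbidden.  Relaxed atomics keep the program itself free of undefined
// behaviour; the ordering of interest comes from the CPU/memory system it
// runs on.
std::atomic<int> x{0}, y{0};
int r0 = -1, r1 = -1;

int main()
{
    std::thread t0([] {
        x.store(1, std::memory_order_relaxed);
        r0 = y.load(std::memory_order_relaxed);
    });
    std::thread t1([] {
        y.store(1, std::memory_order_relaxed);
        r1 = x.load(std::memory_order_relaxed);
    });
    t0.join();
    t1.join();

    std::printf("r0=%d r1=%d%s\n", r0, r1,
                (r0 == 0 && r1 == 0) ? "  <-- forbidden under SC" : "");
    return 0;
}

A single run proves nothing, of course; the interesting check is the
histogram of outcomes over many runs compared against what the intended
consistency model allows.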


Diffs
-----

  build_opts/ALPHA_SE_MESI_CMP_directory 92ba80d63abc
  configs/example/se.py 92ba80d63abc
  configs/ruby/MESI_CMP_directory.py 92ba80d63abc
  src/mem/protocol/MESI_CMP_directory-L1cache.sm 92ba80d63abc
  src/mem/protocol/RubySlicc_Types.sm 92ba80d63abc
  src/mem/ruby/system/RubyPort.hh 92ba80d63abc
  src/mem/ruby/system/RubyPort.cc 92ba80d63abc
  src/mem/ruby/system/Sequencer.hh 92ba80d63abc
  src/mem/ruby/system/Sequencer.cc 92ba80d63abc

Diff: http://reviews.m5sim.org/r/894/diff


Testing
-------


Thanks,

Nilay



_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev

