On Sun, 23 Jan 2011, Korey Sewell wrote:

sendFetch() calls sendTiming(), which in turn calls recvTiming() on the cache
port, since the two ports should be bound as peers.

I'm a little unsure how the RubyPort, Sequencer, CacheMemory, and
CacheController (?) fit together right now, but the sendTiming/recvTiming
relationship is still the key concept that connects two memory objects,
unless things have changed.
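
Roughly, the mechanism is just delegation to the bound peer. Here's a minimal
sketch of the idea (not the actual Port class, and the names are simplified):

class Packet;
typedef Packet *PacketPtr;

// Sketch of the peer-port idea: two ports are bound to each other, and
// sendTiming() on one side is nothing more than a call to recvTiming()
// on the other side, so the CPU port's send lands in the connected
// cache port's receive.
class Port
{
  protected:
    Port *peer;

  public:
    Port() : peer(0) { }

    // Binding two memory objects means making their ports peers.
    void setPeer(Port *p) { peer = p; }

    // Returns false if the peer cannot accept the packet right now.
    bool sendTiming(PacketPtr pkt) { return peer->recvTiming(pkt); }

    // Each memory object implements its own receive side.
    virtual bool recvTiming(PacketPtr pkt) = 0;

    virtual ~Port() { }
};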

On Sun, Jan 23, 2011 at 3:51 PM, Nilay Vaish <ni...@cs.wisc.edu> wrote:

I dug more into the code today. There are three paths along which calls
are made to RubyPort::M5Port::recvTiming(), which eventually result in
calls to CacheMemory::lookup().

1. TimingSimpleCPU::sendFetch() - 140 million
2. TimingSimpleCPU::handleReadPacket() - 30 million
3. TimingSimpleCPU::handleWritePacket() - 18 million

The number of times the last two functions are called is very close to the
total number of memory references (48 million) for all the CPUs together.
The number of lookup() calls is about 392 million. If we take the calls to
sendFetch() into account, the ratio of lookup() calls to requests pushed
into Ruby drops to about 2 to 1, from the earlier estimate of 8 to 1.
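
(Working the numbers: 140 + 30 + 18 = 188 million calls into recvTiming() in
total, and 392 / 188 is roughly 2 to 1, whereas the earlier 392 / 48, counting
only the data-side references, is roughly 8 to 1.)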

My question is: why does sendFetch() make calls to recvTiming()?


Some more reading revealed that sendFetch() is calling recvTiming() for instruction cache accesses, whereas the other two calls (handleReadPacket() and handleWritePacket()) are for data cache accesses.
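
To convince myself of the two-port picture, here is a toy version (my own
simplified names, not the actual M5 or Ruby classes): fetches go out through
the instruction port, loads/stores through the data port, and both land in
the cache-side peer's recvTiming().

#include <cstdio>

struct Packet { bool isFetch; };

// Same peer-port idea sketched above, reduced to the minimum.
struct Port
{
    Port *peer = nullptr;
    virtual ~Port() { }
    bool sendTiming(Packet *pkt) { return peer->recvTiming(pkt); }
    virtual bool recvTiming(Packet *) { return true; }
};

// Stand-in for the Ruby side (RubyPort::M5Port in spirit): it just counts
// what arrives through recvTiming().
struct CacheSidePort : Port
{
    long fetches = 0;
    long dataAccesses = 0;

    bool recvTiming(Packet *pkt) override
    {
        if (pkt->isFetch)
            ++fetches;
        else
            ++dataAccesses;
        return true;
    }
};

int main()
{
    Port icachePort, dcachePort;            // CPU side
    CacheSidePort icacheSide, dcacheSide;   // Ruby side

    icachePort.peer = &icacheSide;          // bind ports as peers
    dcachePort.peer = &dcacheSide;

    Packet fetch{true}, load{false}, store{false};
    icachePort.sendTiming(&fetch);   // what sendFetch() does, in spirit
    dcachePort.sendTiming(&load);    // handleReadPacket()
    dcachePort.sendTiming(&store);   // handleWritePacket()

    std::printf("fetches=%ld data=%ld\n",
                icacheSide.fetches, dcacheSide.dataAccesses);
    return 0;
}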

--
Nilay
