On 25 Aug 2016, at 22:09, Andreas Hansson wrote:
Hi all,
Thanks a lot for that reply.
Two thoughts:
1. Does X86 + O3 + classic memory system actually work?
2. The interleaving of “real” timing accesses and the functional “debug” accesses is not well defined. In general, I would encourage you not to assume anything.
If you are indeed using classic (see 1), then I think I know what is causing the issue. In functional cache accesses we check the cache itself before we check the MSHRs. Thus, if a write is done from the perspective of the LSQ, you won’t necessarily see it by means of a functional access. Whether that is a bug (see 2) is something we’d have to decide.
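The lookup-order problem described above can be illustrated with a toy model. This is a hedged sketch, not gem5 code: the class and method names here are hypothetical simplifications, assuming the scenario where a stale copy of a line sits in the cache array while the LSQ's newer write data is parked in an MSHR (e.g. while an upgrade miss is outstanding).

```python
# Toy model of the functional-access ordering issue: checking the cache
# array before the MSHRs can return stale data. Illustrative only; these
# names are hypothetical and do not exist in gem5.

class ToyCache:
    def __init__(self):
        self.lines = {}   # committed cache lines: addr -> value
        self.mshrs = {}   # in-flight misses: addr -> pending write data

    def timing_write(self, addr, value):
        """A store that misses (e.g. an upgrade on a read-only line):
        from the LSQ's perspective the write is done, but the data
        sits in an MSHR until the miss completes."""
        self.mshrs[addr] = value

    def functional_read_cache_first(self, addr):
        """Checks the cache array before the MSHRs, so a write still
        in flight in an MSHR is not observed."""
        if addr in self.lines:
            return self.lines[addr]
        return self.mshrs.get(addr)   # only reached on a cache-array miss

    def functional_read_mshr_first(self, addr):
        """Checks pending MSHR data first, making the LSQ's store
        visible to functional (debug) accesses immediately."""
        if addr in self.mshrs:
            return self.mshrs[addr]
        return self.lines.get(addr)

cache = ToyCache()
cache.lines[0x1000] = "stale"      # old copy already in the cache array
cache.timing_write(0x1000, "new")  # store in flight, parked in an MSHR

print(cache.functional_read_cache_first(0x1000))  # "stale" -> the problem
print(cache.functional_read_mshr_first(0x1000))   # "new"
```

Whether the MSHR-first ordering is the right fix is exactly the "is it a bug (see 2)" question: the interleaving of timing and debug accesses is not well defined in the first place.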
I think that might be related to the only(1) other issue I see on x86 + o3 + classic.
How can we fix this? Why is it not an issue on ARM (which does the same)? Is it because there you only have the register but no memory access?
(1) Well, or not. I found a couple of “timing sensitive” issues for which I have workarounds, either by changing latencies or by adding an extra function call somewhere in my OS code to spread critical accesses further apart. I am happy to provide traces that show these problems. For example, I was hitting the IOAPIC with 64-byte packets rather than 4-byte packets when I enabled the L2 cache; and I have a case where, after a page table walk, I hit a PTE of 0 and the fault is delayed for ages (or not delivered), causing problems on the next access, which expects the page to be mapped in the kernel.
Whatever gets us closer to being able to successfully run this would be good :)
On 25/08/2016, 21:52, "gem5-dev on behalf of Potter, Brandon"
<gem5-dev-boun...@gem5.org on behalf of brandon.pot...@amd.com> wrote:
Hi Bjoern,
Did you ever solve this issue? I see what you're describing, but it's not obvious to me what causes the problem.
No.
_______________________________________________
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev