I'm getting the same error and am working on fixing it, though I'm not very
familiar with the O3 CPU code (or I wasn't before, anyway).

From what I can tell:

*FullO3CPU<Impl>::scheduleThreadExitEvent* schedules the threadExitEvent
for the next cycle, assuming the ROB will be empty by then.
It also sets *exitingThreads[tid] = true*.

By the time it gets to *FullO3CPU<Impl>::exitThreads*, the reorder buffer
still has entries in it.
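
Condensed, the sequence looks roughly like this (paraphrased from my
reading of cpu.cc, not the literal code):

    // FullO3CPU<Impl>::scheduleThreadExitEvent(tid), roughly:
    exitingThreads[tid] = true;                  // mark the thread as exiting
    if (!threadExitEvent.scheduled())
        schedule(threadExitEvent, nextCycle());  // assumes the ROB drains within a cycle

    // One cycle later the event fires, exitThreads() walks the exiting
    // threads and ends up in removeThread(tid), which is where we hit:
    assert(commit.rob->isEmpty(tid));            // the assertion that is failing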

The reorder buffer can only squash 8 instructions per cycle. In my test
case there are 70 instructions in the reorder buffer at the time the halt
is called, so by the normal mechanism it takes 9 cycles to squash them all.
To actually decrement the counter the assertion is complaining about,
*threadEntries[tid]*, the commit stage then has to retire them.
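
In numbers:

    cycles_to_squash = ceil(threadEntries[tid] / squashWidth)
                     = ceil(70 / 8)
                     = 9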

Commit can usually only commit 8 instructions per cycle, but since it is
retiring squashed instructions it clears them all in a single cycle.

I can only guess at why commit is allowed to retire all of its squashed
instructions in one cycle while the ROB has to squash them in chunks, but
the disparity is interesting. Could it be a bug?

I increased the latency at which *scheduleThreadExitEvent* schedules the
exit event, and with that I saw the ROB take its 9 cycles to squash and
commit take one more cycle to fully retire everything.
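
For reference, the experiment was roughly the change below inside
*scheduleThreadExitEvent* (the 12 is just a hand-picked number of cycles,
comfortably larger than the ~10 cycles of drain I measured; it is not an
existing parameter):

    // was: schedule(threadExitEvent, nextCycle());
    if (!threadExitEvent.scheduled())
        schedule(threadExitEvent, clockEdge(Cycles(12)));  // give squash + commit time to drain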

However, in the interim fetch hadn't been disabled, so the ROB filled up
with more instructions. (Fetch only gets disabled when the event triggers,
just before the assertion failure we are seeing.)

My idea for a fix (if this is indeed a bug; I could be wrong) would be one
of two things, depending on whether the ROB is allowed to squash all of its
instructions at once or not.

If it is allowed:
    Squash all the instructions on the cycle the halt is called; then on
the next cycle, before the event triggers (it has a low priority), the
commit stage will retire all of the squashed ROB entries.
    Voila, 0 ROB entries.

Else:
    The CPU needs to deactivate the fetch stage on the cycle it gets the
halt, and then wait roughly *threadEntries[tid] / squashWidth* cycles (or
keep checking every cycle; I don't actually know what is realistic); see
the sketch below.
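
Rough sketch of the second option (none of this exists as-is;
*threadEntries* and *squashWidth* live in the ROB, and the fetch helper is
hypothetical, so the real plumbing would differ):

    // On the cycle the halt arrives: stop fetching for this thread, then
    // arm the exit event far enough out for the ROB to squash in
    // squashWidth-sized chunks, plus one cycle for commit to retire them.
    fetch.deactivateThread(tid);                              // hypothetical helper
    Cycles drain((threadEntries[tid] + squashWidth - 1) / squashWidth + 1);
    schedule(threadExitEvent, clockEdge(drain));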

If this is in fact a bug, and I can get some guidance on what the
realistic solution would be, I can push a fix to develop.

Cheers!

Dan


On Thu, Oct 29, 2020 at 3:35 AM Liao Xiongfei via gem5-users <
gem5-users@gem5.org> wrote:

> Hi Derrick,
>
> I just built gem5 simulator based on code downloaded yesterday.
>
> The simulator crashed with messages below.
>
> **** REAL SIMULATION ****
>
> info: Entering event queue @ 0.  Starting simulation...
>
> Hello world!
>
> Hello world!
>
> gem5.opt: build/X86/cpu/o3/cpu.cc:823: void
> FullO3CPU<Impl>::removeThread(ThreadID) [with Impl = O3CPUImpl; ThreadID =
> short int]: Assertion `commit.rob->isEmpty(tid)' failed.
>
> Program aborted at tick 17776220
>
> --- BEGIN LIBC BACKTRACE ---
> ./build/X86/gem5.opt(+0xa7eff0)[0x7f6734002ff0]
> ./build/X86/gem5.opt(+0xa9352e)[0x7f673401752e]
> /lib/x86_64-linux-gnu/libpthread.so.0(+0x153c0)[0x7f6732fd23c0]
> /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7f673212618b]
> /lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7f6732105859]
> /lib/x86_64-linux-gnu/libc.so.6(+0x25729)[0x7f6732105729]
> /lib/x86_64-linux-gnu/libc.so.6(+0x36f36)[0x7f6732116f36]
> ./build/X86/gem5.opt(+0x43e60a)[0x7f67339c260a]
> ./build/X86/gem5.opt(+0x43e945)[0x7f67339c2945]
> ./build/X86/gem5.opt(+0x43f75d)[0x7f67339c375d]
> ./build/X86/gem5.opt(+0xa87069)[0x7f673400b069]
> ./build/X86/gem5.opt(+0xaa8bf8)[0x7f673402cbf8]
> ./build/X86/gem5.opt(+0xaa99ed)[0x7f673402d9ed]
> ./build/X86/gem5.opt(+0x86e720)[0x7f6733df2720]
> ./build/X86/gem5.opt(+0x3d425f)[0x7f673395825f]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x2a8408)[0x7f6733288408]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x8dd8)[0x7f673305df48]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f67331aad3b]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f6733287de4]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f6733054d6d]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f673305cef6]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x8006b)[0x7f673306006b]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f6733054d6d]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x12fd)[0x7f673305646d]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f67331aad3b]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyFunction_Vectorcall+0x94)[0x7f6733287de4]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x74d6d)[0x7f6733054d6d]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalFrameDefault+0x7d86)[0x7f673305cef6]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(_PyEval_EvalCodeWithName+0x8fb)[0x7f67331aad3b]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(PyEval_EvalCodeEx+0x42)[0x7f67331ab0c2]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(PyEval_EvalCode+0x1f)[0x7f67331ab4af]
> /lib/x86_64-linux-gnu/libpython3.8.so.1.0(+0x1cfaa1)[0x7f67331afaa1]
> --- END LIBC BACKTRACE ---
>
> Aborted (core dumped)
>
> *From:* Derrick.Greenspan via gem5-users [mailto:gem5-users@gem5.org]
> *Sent:* Thursday, 29 October 2020 2:59 PM
> *To:* gem5-users@gem5.org
> *Cc:* Derrick.Greenspan <derrick.greens...@knights.ucf.edu>
> *Subject:* [gem5-users] gem5 pthread regression with O3CPU on x86?
>
> Hi,
>
> Can someone else who has the latest build of gem5 try running
>
> ./build/X86/gem5.debug configs/example/se.py -c 
> 'tests/test-progs/hello/bin/x86/linux/hello;tests/test-progs/hello/bin/x86/linux/hello'
>  --caches --l2cache --l1d_size=32kB --l1i_size=32kB --l2_size=2MB 
> --l1d_assoc=8 --l1i_assoc=8 --l2_assoc=16 --cacheline_size=64 
> --cpu-type=DerivO3CPU --mem-type=DDR4_2400_8x8 --mem-size=8GB 
> --sys-clock=2.6GHz --cpu-clock=2.6GHz -n 2
>
> ...and let me know if you get a crash related to the reorder buffer?  I
> just tried it on Fedora32 and on Arch Linux, and both get the same error.
> It wasn't present in earlier builds of gem5.
>
> All my best,
>
> *Derrick Greenspan*, MSCS
>