Hi,
I am new to gem5 and have some questions about x86 atomic accesses.

As we know, x86 cannot run multicore with the classic cache in Timing mode, because the classic cache does not support LockedRMW. If you run a multicore (O3CPU) system with the classic cache in full-system mode, the system will hang.

By "system hang" I mean gem5 keeps running, but the terminal (m5term) no longer displays anything. (I think most people are familiar with this situation.)

However, there is a patch by Steve Reinhardt (http://reviews.gem5.org/r/2691/) that modifies the classic cache to support x86 atomic accesses.

After applying this patch, some workloads (like parsec.canneal) run to completion and exit the ROI. But some workloads (like parsec.facesim or parsec.bodytrack) hit a kernel panic. (Before I used the patch, I never got a kernel panic, only the system hang.)
(The kernel panic message is below.)

I am very curious: does anyone use this patch? Does it solve the atomic-access problem completely, so that x86 (O3CPU) multicore with the classic cache can run in full-system mode?

I also wonder, if this patch works so well, why it has not been merged into official gem5, so that everyone who downloads the official tree can run x86 in full-system mode without Ruby.

This is my gem5 version:
changeset:   11704:c38fcdaa5fe5
bookmark:    master
tag:         tip
user:        Tony Gutierrez <[email protected]>
date:        Wed Oct 26 22:48:45 2016 -0400
summary:     hsail,gpu-compute: fixes to appease clang++

The only modification I made is in dram_ctrl.cc, to print each request's memory address to a file.
Script: fs.py

Command:
/home/oslab/peng_test/gem5/gem5/build/X86/gem5.opt
/home/oslab/peng_test/gem5/gem5/configs/example/fs.py
--kernel=/home/oslab/Downloads/x86-parsec/binaries/x86_64-vmlinux-2.6.22.9.smp
--disk-image=/home/oslab/Downloads/x86-parsec/disks/x86root-parsec.img -r 7
--cpu-type=DerivO3CPU --sys-clock=5GHz --cpu-clock=5GHz -n 6 --caches
--l1i_size=64kB --l1d_size=64kB --mem-size=3GB

Checkpoint command (after entering the ROI, I press Ctrl+C):
/home/oslab/peng_test/gem5/gem5/build/X86/gem5.opt
/home/oslab/peng_test/gem5/gem5/configs/example/fs.py
--kernel=/home/oslab/Downloads/x86-parsec/binaries/x86_64-vmlinux-2.6.22.9.smp
--disk-image=/home/oslab/Downloads/x86-parsec/disks/x86root-parsec.img -n 6
--mem-size=3GB --checkpoint-at-end

KERNEL PANIC =>
Unable to handle kernel paging request at 000000001cfbb1b0 RIP:
 [<ffffffff803685d0>] radix_tree_lookup+0x20/0x70
PGD bfbc8067 PUD bf0a3067 PMD 0
Oops: 0000 [1] SMP
CPU 5
Modules linked in:
Pid: 866, comm: facesim-work Not tainted 2.6.22.9 #12
RIP: 0010:[<ffffffff803685d0>]  [<ffffffff803685d0>]
radix_tree_lookup+0x20/0x70
RSP: 0000:ffff81000424bd30  EFLAGS: 0000007c
RAX: 0000000000001875 RBX: 000000000000027e RCX: ffff8100bf726168
RDX: 000000001cfbb1b0 RSI: 000000000000027e RDI: ffff8100bf726280
RBP: ffff8100bf726278 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000008ad630 R11: 00000000023eca0f R12: 000000000000027e
R13: ffff8100bfba1f28 R14: ffff8100bfba1ec0 R15: 0000000000000000
FS:  0000000000b65880(0063) GS:ffff8100040d7940(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 000000001cfbb1b0 CR3: 00000000bfb7a000 CR4: 00000000000006e0
Process facesim-work (pid: 866, threadinfo ffff81000424a000, task
ffff8100042221c0)
Stack:  ffffffff80254ef9 ffff8100bf78a400 ffff8100bfbc8000 ffff8100bf726278
 ffffffff80257658 ffff81000424be1c ffff810004195138 ffff8100bf726168
 00000002bfb48e80 ffffffff807b8820 ffff8100bfbc8000 ffff810000000000
Call Trace:
 [<ffffffff80254ef9>] find_get_page+0x29/0x70
 [<ffffffff80257658>] filemap_nopage+0x108/0x2f0
 [<ffffffff80262c1f>] __handle_mm_fault+0x1df/0xc30
 [<ffffffff80598abb>] do_page_fault+0x1fb/0x8c0
 [<ffffffff8059434b>] thread_return+0x0/0x6c5
 [<ffffffff80594147>] schedule+0x117/0x31b
 [<ffffffff80596e7d>] error_exit+0x0/0x84


Code: 44 8b 02 44 89 c0 48 39 34 c5 a0 c4 76 80 73 03 31 c0 c3 44
RIP  [<ffffffff803685d0>] radix_tree_lookup+0x20/0x70
 RSP <ffff81000424bd30>
CR2: 000000001cfbb1b0

This is my first time using the mailing list; if I have made any mistakes, please let me know.

Regards,
Sam
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users