Re: [gem5-users] ignore many simObjects from trace with --trace-ignore=EXPR
Thanks Steve, your guess makes more sense, but the last string is not replacing the previous ones: they all get passed correctly, it just doesn't work correctly somewhere else. When I print the value of "expr" from trace.i after "Trace::ignore.setExpression(expr)" has been executed, it gives me the entire string entered on the command line, so the code in main.py, trace.i, and match.cc seems to work fine. It's just that somewhere along the line the code is not written to ignore more than one SimObject. By the way, when I put in multiple expressions, none of them seems to be considered, even though they all get passed correctly as I explained above; a single expression works fine.

What I am trying to accomplish is just to print the trace for the L2 cache ONLY (e.g. system.l2). Is there any other way to accomplish this, without touching the ignore option, which has apparently never worked? src/mem/cache/cache_impl.hh is the generic code used by all the caches. Where can I go to print the trace only for the L2 cache and ignore the icache and dcache?

On Sat, Oct 11, 2014 at 12:43 PM, Steve Reinhardt wrote:
> It's a little convoluted, but I think I found the problem. Apparently
> having multiple ignore strings hasn't worked in quite some time, if ever.
>
> In src/python/m5/main.py, the ignore strings are passed into C++ one at a
> time:
>
>     for ignore in options.debug_ignore:
>         check_tracing()
>         trace.ignore(ignore)
>
> And it's a little tricky to track down, but trace.ignore() corresponds to
> this swig-wrapped C++ function in src/python/swig/trace.i:
>
>     inline void
>     ignore(const char *expr)
>     {
>         Trace::ignore.setExpression(expr);
>     }
>
> And if you track down setExpression() in src/base/match.cc, you find:
>
>     void
>     ObjectMatch::setExpression(const string &expr)
>     {
>         tokens.resize(1);
>         tokenize(tokens[0], expr, '.');
>     }
>
> So it looks like every time you call trace.ignore(ignore) from python
> you're replacing the list of expressions in 'tokens' with the new string.
> So my guess based on looking at the code is that, when you pass in multiple
> strings, only the last one is actually taking effect---just a guess though.
>
> I see a few ways to fix this.
>
> I think the best way to fix it is to make a single call from python into
> C++ with the entire list of strings. There's already an overloaded version
> of setExpression() for this ('ObjectMatch::setExpression(const vector<string> &expr)').
> So it would just be a matter of exposing that function via swig
> in trace.i, and passing it the full options.debug_ignore list directly (no
> more for loop in python).
>
> If that sounds daunting, a more incremental quick fix would just be to
> change ObjectMatch::setExpression() to append the passed-in expression to
> the token vector instead of overwriting it.
>
> If you do fix it, and particularly if you choose the more comprehensive
> former fix, please upload your patch on reviewboard so we can consider
> putting it back in the main repository.
>
> Thanks,
>
> Steve
>
> On Sat, Oct 11, 2014 at 11:56 AM, Marcus Tshibangu wrote:
>
>> Thanks Steve, you are right, I still have the old version with both debug*
>> and trace* options, but it shouldn't make a difference. It may be that the
>> --trace-ignore option itself only works for a single object, not for
>> multiple objects, because even appending doesn't work for it although it
>> works for other options. For instance, in the following command:
>>
>> --debug-flags=Cache --debug-flags=BaseBus --trace-ignore='system.cpu0'
>> --trace-ignore='system.cpu1' --trace-file=trace.out
>>
>> Cache and BaseBus are appended and I get their traces, but the ignore part
>> is not appended and doesn't work for either of the objects, although it
>> works fine if I only have one ignore object.
>> So could you do me a favor and run the above command in your new version
>> (with trace* replaced by debug*, of course) and see if it's a version
>> issue? If not, where can I go to fix this?
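Steve's "incremental quick fix" (append instead of overwrite) can be illustrated with a small Python sketch. This is not gem5's C++ code, just a model of the intended semantics: the real change would go in ObjectMatch::setExpression() in src/base/match.cc, and the prefix-style match() below is a simplification of gem5's token matching.

```python
# Python model (not gem5's C++) of the appending setExpression() fix:
# each call adds a token list instead of replacing the previous one,
# so several --debug-ignore expressions all stay in effect.

class ObjectMatch:
    def __init__(self):
        self.tokens = []  # one token list per registered expression

    def set_expression(self, expr):
        # Append, rather than 'self.tokens = [expr.split(".")]', which
        # would discard any previously registered expression.
        self.tokens.append(expr.split('.'))

    def match(self, name):
        # Simplified prefix match: 'system.cpu0' ignores system.cpu0
        # and everything below it.
        parts = name.split('.')
        return any(parts[:len(t)] == t for t in self.tokens)

ignore = ObjectMatch()
for expr in ['system.cpu0', 'system.cpu1']:
    ignore.set_expression(expr)

assert ignore.match('system.cpu0.icache')   # first expression still active
assert ignore.match('system.cpu1')          # second one active too
assert not ignore.match('system.l2')        # unrelated objects still trace
```

With the overwrite behavior in the current code, only the last loop iteration would survive, which matches Steve's diagnosis.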
>> >> On Sat, Oct 11, 2014 at 9:37 AM, Steve Reinhardt >> wrote: >> >>> Sometimes you've got to use the source... from src/python/m5/main.py: >>> >>> option("--debug-ignore", metavar="EXPR", action='append', split=':', >>> help="Ignore EXPR sim objects") >>> >>> Apparently colon is supposed to be the delimiter. The 'split' option is >>> a Nate extension (see src/python/m5/options.py), so if colon is not >>> actually working, it could be a bug. The "action='append'" part means that >>> you can also specify --debug-ignore more than once to get the same effect, >>> and that's built in to python optparse, so it's much less likely to be >>> buggy. >>> >>> BTW, the --trace-* options were changed to --debug-* a while ago, so you >>> may have an old version, but you can check your source to see if these >>> comments still apply. >>> >>> Steve >>> >>> >>> >>> On Sat, Oct 11, 2014 at 12:24 AM, Marcus Tshibangu via gem5-users < >>> gem5-users@gem5.org> wrote: >>> when I use *--trace-ignore='system.cpu0'*, I ignore every
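Steve's point that `action='append'` is plain optparse behavior can be checked standalone. A minimal sketch (gem5's custom 'split' extension from src/python/m5/options.py is not reproduced here):

```python
# Demonstrates that optparse's action='append' accumulates repeated
# options into a list, as the --debug-ignore declaration in
# src/python/m5/main.py relies on.
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--debug-ignore", metavar="EXPR", action="append",
                  default=[], help="Ignore EXPR sim objects")

opts, args = parser.parse_args(
    ["--debug-ignore=system.cpu0", "--debug-ignore=system.cpu1"])

# Both occurrences are collected, in command-line order.
assert opts.debug_ignore == ["system.cpu0", "system.cpu1"]
```

So the option values do reach Python as a list; the loss of all but one expression happens later, on the C++ side.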
Re: [gem5-users] Questions on DRAM Controller model
Hi Andreas, users,

I ran the test with the ARM O3 CPU (--cpu-type=detailed) and mem_mode=timing; the results are exactly the same as with mem_mode=atomic. I have partitioned the DRAM banks using software. Both benchmarks, a latency-sensitive one and a bandwidth-sensitive one (both generate only reads), run in parallel using the same DRAM bank. From the stats file, I observe that the expected numbers of L2 misses and DRAM requests are being generated. In my system, the number of L1 MSHRs is 10 and the number of L2 MSHRs is 32, so I expect that when a request from the latency-sensitive benchmark reaches DRAM, the read queue size should be 10. However, what I am observing is that most of the time the queue is not filled, and hence there is less queueing latency and interference. I am using the classic memory system with the default DRAM controller, DDR3_1600_x64. The address mapping is RoRaBaChCo, the page policy is open_adaptive, and the scheduler is frfcfs. Do you have any thoughts on this? How could I debug this further? Appreciate your help.

Thanks,
Prathap Kumar Valsan
Research Assistant
University of Kansas

On Mon, Oct 13, 2014 at 4:21 AM, Andreas Hansson wrote:
> Hi Prathap,
>
> Indeed. The atomic mode is for fast-forwarding only. Once you actually
> want to get some representative performance numbers you have to run in
> timing mode with either the O3 or Minor CPU model.
>
> Andreas
>
> From: Prathap Kolakkampadath
> Date: Monday, 13 October 2014 10:19
> To: Andreas Hansson
> Cc: gem5 users mailing list
> Subject: Re: [gem5-users] Questions on DRAM Controller model
>
> Thanks for your reply. The memory mode which I used is atomic. I think
> I need to run the tests in timing mode. I believe that will show the
> interference and queueing delay seen on real platforms.
>
> Prathap
>
> On Oct 13, 2014 2:55 AM, "Andreas Hansson" wrote:
>
>> Hi Prathap,
>>
>> I don't dare say exactly what is going wrong in your setup, but I am
>> confident that Ruby will not magically make things more representative (it
>> will likely give you a whole lot more problems though). In the end it is
>> all about configuring the building blocks to match the system you want to
>> capture. The crossbars and caches in the classic memory system do make some
>> simplifications, but I have not yet seen a case where they are not
>> sufficiently accurate.
>>
>> Have you looked at the various policy settings in the DRAM controller,
>> e.g. the page policy and address mapping? If you're trying to correlate
>> with a real platform, also see Anthony's ISPASS paper from last year for
>> some sensible steps in simplifying the problem and dividing it into
>> manageable chunks.
>>
>> Good luck.
>>
>> Andreas
>>
>> From: Prathap Kolakkampadath
>> Date: Monday, 13 October 2014 00:29
>> To: Andreas Hansson
>> Cc: gem5 users mailing list
>> Subject: Re: [gem5-users] Questions on DRAM Controller model
>>
>> Hello Andreas/Users,
>>
>> I used to create a checkpoint up to the Linux boot using the Atomic
>> Simple CPU and then restore from this checkpoint to the detailed O3 CPU
>> before running the test. I notice that the mem-mode is set to atomic and
>> not timing. Could that be the reason for the low contention on the memory
>> bus I am observing?
>>
>> Thanks,
>> Prathap
>>
>> On Sun, Oct 12, 2014 at 4:56 PM, Prathap Kolakkampadath <
>> kvprat...@gmail.com> wrote:
>>
>>> Hello Andreas,
>>>
>>> Even after configuring the model like the actual hardware, I am still
>>> not seeing enough interference with the read request under consideration.
>>> I am using the classic memory system model. Since it uses an atomic and
>>> functional packet allocation protocol, I would like to switch to Ruby
>>> (I think it more closely resembles a real platform).
>>>
>>> I am hitting the below problem when I use Ruby.
>>>
>>> /build/ARM/gem5.opt --stats-file=cr1A1.txt configs/example/fs.py
>>> --caches --l2cache --l1d_size=32kB --l1i_size=32kB --l2_size=1MB
>>> --num-cpus=4 --mem-size=512MB
>>> --kernel=/home/prathap/WorkSpace/linux-linaro-tracking-gem5/vmlinux
>>> --disk-image=/home/prathap/WorkSpace/gem5/fullsystem/disks/arm-ubuntu-natty-headless.img
>>> --machine-type=VExpress_EMM
>>> --dtb-file=/home/prathap/WorkSpace/linux-linaro-tracking-gem5/arch/arm/boot/dts/vexpress-v2p-ca15-tc1-gem5_4cpus.dtb
>>> --cpu-type=detailed --ruby --mem-type=ddr3_1600_x64
>>>
>>> Traceback (most recent call last):
>>>   File "", line 1, in
>>>   File "/home/prathap/WorkSpace/gem5/src/python/m5/main.py", line 388, in main
>>>     exec filecode in scope
>>>   File "configs/example/fs.py", line 302, in
>>>     test_sys = build_test_system(np)
>>>   File "configs/example/fs.py", line 138, in build_test_system
>>>     Ruby.create_system(options, test_sys, test_sys.iobus, test_sys._dma_ports)
>>>   File "/home/prathap/WorkSpace/gem5/src/python/m5/SimObject.py", line 825, in __getattr__
>>>     raise AttributeError, err_string
>>> AttributeError: object 'LinuxArmSystem' has no attribute '_dma_ports'
>>> (C++ object is not ye
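For readers unfamiliar with the RoRaBaChCo mapping mentioned earlier in this thread (row:rank:bank:channel:column, from MSB to LSB), here is an illustrative Python decode. The field widths below are assumptions chosen for the sketch, not gem5's exact DDR3_1600_x64 parameters:

```python
# Illustrative decode of a physical address under a RoRaBaChCo-style
# mapping. Field widths (LSB first) are assumed for the sketch:
# 10 column bits, a single channel, 8 banks, 2 ranks, 15 row bits.
FIELDS = [("column", 10), ("channel", 0), ("bank", 3), ("rank", 1), ("row", 15)]

def decode(addr, fields=FIELDS):
    out = {}
    for name, bits in fields:            # peel fields off from the LSB
        out[name] = addr & ((1 << bits) - 1)
        addr >>= bits
    return out

# Two addresses that differ only above the bank bits land in the same
# bank but different rows -> a row-buffer conflict under an open-page
# policy, which is the kind of interference being discussed above.
a = decode(0x400)
b = decode(0x800400)
assert a["bank"] == b["bank"]
assert a["row"] != b["row"]
```

Checking which bits of competing benchmarks' addresses fall into the bank and row fields is one concrete way to verify that software bank partitioning is doing what you expect.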
Re: [gem5-users] Trouble Running Full System with more than 2GB of physical memory
Hi Stevo,

I supposed the problem was in the kernel, that is, a Linux thing, not gem5. There might be some information on the internet about rebuilding a Linux kernel. You probably need to cross-compile it for Alpha, so it's a big effort. I'm still thinking about simulations with more memory without rebuilding the kernel. As soon as I discover something, I'll tell you.

Best regards,
Matheus Alcântara Souza
(Via iPhone)

> On 13/10/2014, at 14:54, Stevenson Jian wrote:
>
> Hi Matheus,
>
> Thanks again for the prompt response. I did an online search of "gem5
> highmem" in hope of finding how to set highmem. However, I wasn't able to
> find a helpful source. I am wondering if you could help point me to how to
> set it?
>
> Thanks!
> Stevo
>
>> On Mon, Oct 13, 2014 at 12:44 PM, Matheus Alcântara Souza wrote:
>> I've done some quick research about that. It seems you need to use the
>> HIGHMEM option. Not sure if it works...
>>
>> Best regards,
>> Matheus Alcântara Souza
>> (Via iPhone)
>>
>>> On 13/10/2014, at 14:29, Stevenson Jian wrote:
>>>
>>> Hi Matheus,
>>>
>>> Thanks for the prompt response.
>>>
>>> I already tried recompiling the console binary. It resulted in new
>>> errors. See the second half of the original post.
>>>
>>> I saw on the website you linked that there are many possible kernel
>>> versions. Which kernel do you recommend that I recompile?
>>>
>>> Thanks!
>>> Stevo
>>>
>>> On Mon, Oct 13, 2014 at 10:42 AM, Matheus Alcântara Souza wrote:
Hello Stevo,
Yes, a different vmlinux. Unfortunately, I have never built a new kernel for gem5. Some information here: http://www.m5sim.org/Compiling_a_Linux_Kernel
Another option is to recompile the console binary. Take a look at this thread: https://www.mail-archive.com/gem5-users@gem5.org/msg03280.html
Best
Matheus
2014-10-13 12:30 GMT-03:00 Stevenson Jian:
> Hi Matheus,
>
> Thanks for the prompt response. I am not certain what you mean by kernel.
> Do you mean use a different vmlinux?
I tried both vmlinux and > vmlinux_2.6.27-gcc_4.3.4. I also tried recompiling system/alpha/palcode/ > and putting the resultant binary in m5_system_2.0b3/binaries. None of > them made any difference. > > Thanks! > Stevo > >> On Mon, Oct 13, 2014 at 10:25 AM, Matheus Alcântara Souza >> wrote: >> I guess it is a kernel problem. Can you check out this? Or try to use >> another kernel? >> >> Atenciosamente, >> Matheus Alcântara Souza >> (Via iPhone) >> >>> Em 13/10/2014, às 12:22, Stevenson Jian via gem5-users >>> escreveu: >>> >> >>> Hi all, >>> >>> I am trying to run PARSEC in Gem5 under full system mode. The >>> benchmarks run correctly when I set the simulated physical memory size >>> to <=2GB. However, I want to simulate a physical memory with 4GB. When >>> I set "return '4000MB'" in line 49 of configs/common/Benchmarks.py to >>> set the simulated physical memory size to 4GB and run PARSEC again >>> (using command "build/ALPHA/gem5.fast configs/example/fs.py -n 2 >>> --script=../parsecRunscripts/blackscholes_2c_simlarge_ckpts.rcS"), I >>> get the following error: >>> **simout** >>> ... >>> panic: M5 panic instruction called at pc = 0xfc31add0. >>> @ cycle 470482786500 >>> [execute:build/ALPHA/arch/alpha/generated/atomic_simple_cpu_exec.cc, >>> line 11210] >>> Memory Usage: 4273680 KBytes >>> Program aborted at cycle 470482786500 >>> Aborted >>> **system.terminal* >>> ... >>> setup: forcing memory size to 33554432K (from -98304K).^M >>> freeing pages 1103:4194304^M >>> reserving pages 1103:1167^M >>> SMP: 2 CPUs probed -- cpu_present_map = 3^M >>> Built 1 zonelists in Zone order, mobility grouping on. 
Total pages: >>> 4165632^M >>> Kernel command line: root=/dev/hda1 console=ttyS0^M >>> PID hash table entries: 4096 (order: 12, 32768 bytes)^M >>> Using epoch = 1900^M >>> Console: colour dummy device 80x25^M >>> console [ttyS0] enabled^M >>> Dentry cache hash table entries: 4194304 (order: 12, 33554432 bytes)^M >>> Inode-cache hash table entries: 2097152 (order: 11, 16777216 bytes)^M >>> Memory: 33265208k/33554432k available (3757k kernel code, 285456k >>> reserved, 261k data, 208k init)^M >>> Unable to handle kernel paging request at virtual address >>> ^M >>> CPU 0 swapper(0): Oops 1^M >>> pc = [] ra = [] ps = 0007Not >>> tainted^M >>> pc is at cache_alloc_refill+0x1ec/0x780^M >>> ra is at cache_alloc_refill+0xcc/0x780^M >>> v0 = 0001 t0 = t1 = ^M >>> t2 = t3 = 0001 t4 = ^M >>> t5 = t6 =
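One way to read the "forcing memory size to 33554432K (from -98304K)" line in the terminal output above: 4000MB, as a byte count, no longer fits in a signed 32-bit integer, and -98304K is exactly what it becomes after wrapping. This is only an interpretation of the log, not a confirmed diagnosis, but it would be consistent with the console binary working up to 2GB and failing beyond it:

```python
# The terminal log above reads: "forcing memory size to 33554432K
# (from -98304K)". Check whether -98304K is what 4000MB looks like
# when a byte count is squeezed through a signed 32-bit integer
# (an interpretation of the log line, not a confirmed diagnosis).

mem_bytes = 4000 * 1024 * 1024        # the '4000MB' set in Benchmarks.py

# Reinterpret as a signed 32-bit value (two's complement wraparound).
as_int32 = mem_bytes & 0xFFFFFFFF
if as_int32 >= 2**31:
    as_int32 -= 2**32

assert as_int32 // 1024 == -98304     # matches the log line exactly
```

If this reading is right, a fix would involve widening the memory-size variable in the console/PALcode sources rather than only changing the kernel.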
Re: [gem5-users] Fullsystem with NoC
Thank you sir! With a refresh of everything, Garnet is working. I now wonder whether the Cluster topology might work. Suppose I have a 4-core "chip" with a crossbar interconnect, and 8 "chips" connected through a 2x4 mesh NoC, so 32 cores in total. Any tips?

Best regards,
Matheus Alcântara Souza
(Via iPhone)

> On 11/10/2014, at 17:20, babak aghaei wrote:
>
> Hi,
> this is possible, but first you must establish the Garnet network and then
> run any benchmark on it.
> best
> ---
> Babak Aghaei
> Ph.D candidate
>
> From: Matheus Alcântara Souza via gem5-users
> To: "gem5-users@gem5.org"
> Sent: Saturday, October 11, 2014 11:27 PM
> Subject: [gem5-users] Fullsystem with NoC
>
> Dear all,
>
> I've been reading the gem5 list for quite some time, with the goal of
> learning how to run applications (such as the PARSEC ones) in fullsystem
> mode over a network-on-chip architecture.
>
> I concluded that this is not possible nowadays. So I wonder if I am wrong?
> If yes, what should I do to run this?
>
> If I'm right, what should be the first thing to check/change to make this
> possible? Maybe the message generator should be adapted, as well as the
> Ruby memory protocols.
>
> Thank you all!
>
> Best regards,
> Matheus Alcântara Souza
> (Via iPhone)
> ___
> gem5-users mailing list
> gem5-users@gem5.org
> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
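The proposed hierarchy can be sanity-checked with some back-of-the-envelope bookkeeping. This is purely illustrative arithmetic, not gem5's Ruby topology API:

```python
# Back-of-the-envelope bookkeeping for the proposed hierarchy
# (illustrative only): 8 "chips" of 4 cores behind a crossbar,
# chips joined by a 2x4 mesh.
ROWS, COLS, CORES_PER_CHIP = 2, 4, 4

chips = ROWS * COLS
cores = chips * CORES_PER_CHIP

# Bidirectional mesh links between adjacent chips (no wraparound):
mesh_links = ROWS * (COLS - 1) + COLS * (ROWS - 1)

assert chips == 8
assert cores == 32
assert mesh_links == 10
```

Whichever topology class ends up modeling this, the controller and link counts it creates should reconcile with numbers like these.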
[gem5-users] Fixed repeat-switch bugs
Hello everybody,

Part of my research project is to run experiments where I constantly switch back and forth between the timing and O3 models (the --repeat-switch option). In particular I am using X86 with both MESI and MOESI. However, after many switches, I faced some problems while draining the O3 model: I either hit the assert(!memReq) assertion in drainSanityCheck() or got stuck in the draining process until I reached the maxtick count and the simulation ended. After some debugging I found the causes of the errors and was able to switch thousands of times per run. Although it worked fine for my project, it is possible that these changes break other parts of the code. In this post I want to do two things:

- Ask if anyone can identify a condition where the modifications I made will result in an error in the simulator
- Make the changes public so other people can use them

In particular, the changes I made are:

src/cpu/o3/fetch_impl.hh
@@ -738,7 +738,7 @@
         decoder[tid]->reset();

         // Clear the icache miss if it's outstanding.
-        if (fetchStatus[tid] == IcacheWaitResponse) {
+        if (fetchStatus[tid] == IcacheWaitResponse || fetchStatus[tid] == IcacheWaitRetry) {
             DPRINTF(Fetch, "[tid:%i]: Squashing outstanding Icache miss.\n", tid);
             memReq[tid] = NULL;

src/cpu/o3/lsq_impl.hh
@@ -175,8 +175,10 @@
     }

     if (retryTid != InvalidThreadID) {
-        DPRINTF(Drain, "Not drained, the LSQ has blocked the caches.\n");
-        drained = false;
+        if (thread[retryTid].isLoadBlocked || thread[retryTid].isStoreBlocked) {
+            DPRINTF(Drain, "Not drained, the LSQ has blocked the caches.\n");
+            drained = false;
+        }
     }

     return drained;

src/cpu/o3/lsq_unit.hh
@@ -466,12 +466,14 @@
     /** The packet that needs to be retried. */
     PacketPtr retryPkt;

+  public: // There may be a better way than making these public, but I need to know when a store or load is blocked
     /** Whether or not a store is blocked due to the memory system. */
     bool isStoreBlocked;

     /** Whether or not a load is blocked due to the memory system. */
     bool isLoadBlocked;
+  private:

     /** Has the blocked load been handled. */
     bool loadBlockedHandled;

src/mem/ruby/system/RubyMemoryControl.cc
@@ -675,7 +675,7 @@
 {
     DPRINTF(RubyMemory, "MemoryController drain\n");
     if (m_event.scheduled()) {
-        deschedule(m_event);
+        // deschedule(m_event); // Why deschedule? If a store request is in flight while draining, it won't be satisfied and the CPU won't drain
     }
     return 0;
 }

If these changes don't cause any other error somewhere else in the code, do you think I should add them as a patch? If so, what is the process?

Thank you very much

--
Alberto Javier Naranjo-Carmona
M.S. Student, Computer Engineering
Texas A&M University, College Station, TX
Re: [gem5-users] Trouble Running Full System with more than 2GB of physical memory
Hi Matheus, Thanks again for the prompt response. I did an online search of "gem5 highmem" in hope of finding how to set highmem. However, I wasn't able to find a helpful source. I am wondering if you could help point me to how to set it? Thanks! Stevo On Mon, Oct 13, 2014 at 12:44 PM, Matheus Alcântara Souza < ticks...@gmail.com> wrote: > I've done a quick research about that. It seems you need to use the > HIGHMEM option. Not sure if it works... > > Atenciosamente, > Matheus Alcântara Souza > (Via iPhone) > > Em 13/10/2014, às 14:29, Stevenson Jian > escreveu: > > Hi Matheus, > > Thanks for the prompt response. > > I already tried recompiling the console binary. It resulted in new errors. > See the second half of the original post. > > I saw on the website you linked that there are many possible kernel > versions. Which kernel do you recommend that I recompile? > > Thanks! > Stevo > > On Mon, Oct 13, 2014 at 10:42 AM, Matheus Alcântara Souza < > ticks...@gmail.com> wrote: > >> Hello Stevo, >> >> Yes, a different vmlinux. Unfortunately, I never build a new kernel to >> gem5. Some information here: >> http://www.m5sim.org/Compiling_a_Linux_Kernel >> >> Other option is to recompile the console binary. Take a look at this >> thread: https://www.mail-archive.com/gem5-users@gem5.org/msg03280.html >> >> Best >> Matheus >> >> 2014-10-13 12:30 GMT-03:00 Stevenson Jian : >> >> Hi Matheus, >>> >>> Thanks for the prompt response. I am not certain what you mean by >>> kernel. Do you mean use a different vmlinux? I tried both vmlinux and >>> vmlinux_2.6.27-gcc_4.3.4. I also tried recompiling >>> system/alpha/palcode/ and putting the resultant binary in >>> m5_system_2.0b3/binaries. None of them made any difference. >>> >>> Thanks! >>> Stevo >>> >>> On Mon, Oct 13, 2014 at 10:25 AM, Matheus Alcântara Souza < >>> ticks...@gmail.com> wrote: >>> I guess it is a kernel problem. Can you check out this? Or try to use another kernel? 
Atenciosamente, Matheus Alcântara Souza (Via iPhone) Em 13/10/2014, às 12:22, Stevenson Jian via gem5-users < gem5-users@gem5.org> escreveu: Hi all, I am trying to run PARSEC in Gem5 under full system mode. The benchmarks run correctly when I set the simulated physical memory size to <=2GB. However, I want to simulate a physical memory with 4GB. When I set "return '4000MB'" in line 49 of configs/common/Benchmarks.py to set the simulated physical memory size to 4GB and run PARSEC again (using command "build/ALPHA/gem5.fast configs/example/fs.py -n 2 --script=../parsecRunscripts/blackscholes_2c_simlarge_ckpts.rcS"), I get the following error: **simout** ... panic: M5 panic instruction called at pc = 0xfc31add0. @ cycle 470482786500 [execute:build/ALPHA/arch/alpha/generated/atomic_simple_cpu_exec.cc, line 11210] Memory Usage: 4273680 KBytes Program aborted at cycle 470482786500 Aborted **system.terminal* ... setup: forcing memory size to 33554432K (from -98304K).^M freeing pages 1103:4194304^M reserving pages 1103:1167^M SMP: 2 CPUs probed -- cpu_present_map = 3^M Built 1 zonelists in Zone order, mobility grouping on. 
Total pages: 4165632^M Kernel command line: root=/dev/hda1 console=ttyS0^M PID hash table entries: 4096 (order: 12, 32768 bytes)^M Using epoch = 1900^M Console: colour dummy device 80x25^M console [ttyS0] enabled^M Dentry cache hash table entries: 4194304 (order: 12, 33554432 bytes)^M Inode-cache hash table entries: 2097152 (order: 11, 16777216 bytes)^M Memory: 33265208k/33554432k available (3757k kernel code, 285456k reserved, 261k data, 208k init)^M Unable to handle kernel paging request at virtual address ^M CPU 0 swapper(0): Oops 1^M pc = [] ra = [] ps = 0007Not tainted^M pc is at cache_alloc_refill+0x1ec/0x780^M ra is at cache_alloc_refill+0xcc/0x780^M v0 = 0001 t0 = t1 = ^M t2 = t3 = 0001 t4 = ^M t5 = t6 = fc07ff00 t7 = fc814000^M s0 = fc80c2a8 s1 = s2 = fc822710^M s3 = fc80c408 s4 = fc07ff30 s5 = ^M s6 = fc80c448^M a0 = fc80c448 a1 = 0009 a2 = 0001^M a3 = 0002 a4 = a5 = 0044^M t8 = t9 = 00200200 t10= ^M t11= fc80c418 pv = fc6bb0f0 at = fc80c428^M gp = fc85bf40 sp = fc817d38^M Trace:^M [] kmem_cache_alloc+0xb8/0xf0^M [] kmem_cache_create+0x1f4/0x550^M [] __start+0x1c/0x20^M ^M Code: 4821f621 4022
Re: [gem5-users] Trouble Running Full System with more than 2GB of physical memory
I've done a quick research about that. It seems you need to use the HIGHMEM option. Not sure if it works... Atenciosamente, Matheus Alcântara Souza (Via iPhone) > Em 13/10/2014, às 14:29, Stevenson Jian escreveu: > > Hi Matheus, > > Thanks for the prompt response. > > I already tried recompiling the console binary. It resulted in new errors. > See the second half of the original post. > > I saw on the website you linked that there are many possible kernel versions. > Which kernel do you recommend that I recompile? > > Thanks! > Stevo > >> On Mon, Oct 13, 2014 at 10:42 AM, Matheus Alcântara Souza >> wrote: >> Hello Stevo, >> >> Yes, a different vmlinux. Unfortunately, I never build a new kernel to gem5. >> Some information here: http://www.m5sim.org/Compiling_a_Linux_Kernel >> >> Other option is to recompile the console binary. Take a look at this thread: >> https://www.mail-archive.com/gem5-users@gem5.org/msg03280.html >> >> Best >> Matheus >> >> 2014-10-13 12:30 GMT-03:00 Stevenson Jian : >> >>> Hi Matheus, >>> >>> Thanks for the prompt response. I am not certain what you mean by kernel. >>> Do you mean use a different vmlinux? I tried both vmlinux and >>> vmlinux_2.6.27-gcc_4.3.4. I also tried recompiling system/alpha/palcode/ >>> and putting the resultant binary in m5_system_2.0b3/binaries. None of them >>> made any difference. >>> >>> Thanks! >>> Stevo >>> On Mon, Oct 13, 2014 at 10:25 AM, Matheus Alcântara Souza wrote: I guess it is a kernel problem. Can you check out this? Or try to use another kernel? Atenciosamente, Matheus Alcântara Souza (Via iPhone) > Em 13/10/2014, às 12:22, Stevenson Jian via gem5-users > escreveu: > > Hi all, > > I am trying to run PARSEC in Gem5 under full system mode. The benchmarks > run correctly when I set the simulated physical memory size to <=2GB. > However, I want to simulate a physical memory with 4GB. 
When I set > "return '4000MB'" in line 49 of configs/common/Benchmarks.py to set the > simulated physical memory size to 4GB and run PARSEC again (using command > "build/ALPHA/gem5.fast configs/example/fs.py -n 2 > --script=../parsecRunscripts/blackscholes_2c_simlarge_ckpts.rcS"), I get > the following error: > **simout** > ... > panic: M5 panic instruction called at pc = 0xfc31add0. > @ cycle 470482786500 > [execute:build/ALPHA/arch/alpha/generated/atomic_simple_cpu_exec.cc, line > 11210] > Memory Usage: 4273680 KBytes > Program aborted at cycle 470482786500 > Aborted > **system.terminal* > ... > setup: forcing memory size to 33554432K (from -98304K).^M > freeing pages 1103:4194304^M > reserving pages 1103:1167^M > SMP: 2 CPUs probed -- cpu_present_map = 3^M > Built 1 zonelists in Zone order, mobility grouping on. Total pages: > 4165632^M > Kernel command line: root=/dev/hda1 console=ttyS0^M > PID hash table entries: 4096 (order: 12, 32768 bytes)^M > Using epoch = 1900^M > Console: colour dummy device 80x25^M > console [ttyS0] enabled^M > Dentry cache hash table entries: 4194304 (order: 12, 33554432 bytes)^M > Inode-cache hash table entries: 2097152 (order: 11, 16777216 bytes)^M > Memory: 33265208k/33554432k available (3757k kernel code, 285456k > reserved, 261k data, 208k init)^M > Unable to handle kernel paging request at virtual address > ^M > CPU 0 swapper(0): Oops 1^M > pc = [] ra = [] ps = 0007Not > tainted^M > pc is at cache_alloc_refill+0x1ec/0x780^M > ra is at cache_alloc_refill+0xcc/0x780^M > v0 = 0001 t0 = t1 = ^M > t2 = t3 = 0001 t4 = ^M > t5 = t6 = fc07ff00 t7 = fc814000^M > s0 = fc80c2a8 s1 = s2 = fc822710^M > s3 = fc80c408 s4 = fc07ff30 s5 = ^M > s6 = fc80c448^M > a0 = fc80c448 a1 = 0009 a2 = 0001^M > a3 = 0002 a4 = a5 = 0044^M > t8 = t9 = 00200200 t10= ^M > t11= fc80c418 pv = fc6bb0f0 at = fc80c428^M > gp = fc85bf40 sp = fc817d38^M > Trace:^M > [] kmem_cache_alloc+0xb8/0xf0^M > [] kmem_cache_create+0x1f4/0x550^M > [] __start+0x1c/0x20^M > ^M > 
Code: 4821f621 402207a1 e43fffe4 a447 a4670008 47f6040a > b4620008 ^M > Kernel panic - not syncing: Attempted to kill the idle task!^M > > > > To solve the above issue, I looked up former posts and got the idea to > recompile system/alpha/console/co
Re: [gem5-users] Trouble Running Full System with more than 2GB of physical memory
Hi Matheus, Thanks for the prompt response. I already tried recompiling the console binary. It resulted in new errors. See the second half of the original post. I saw on the website you linked that there are many possible kernel versions. Which kernel do you recommend that I recompile? Thanks! Stevo On Mon, Oct 13, 2014 at 10:42 AM, Matheus Alcântara Souza < ticks...@gmail.com> wrote: > Hello Stevo, > > Yes, a different vmlinux. Unfortunately, I never build a new kernel to > gem5. Some information here: http://www.m5sim.org/Compiling_a_Linux_Kernel > > Other option is to recompile the console binary. Take a look at this > thread: https://www.mail-archive.com/gem5-users@gem5.org/msg03280.html > > Best > Matheus > > 2014-10-13 12:30 GMT-03:00 Stevenson Jian : > > Hi Matheus, >> >> Thanks for the prompt response. I am not certain what you mean by kernel. >> Do you mean use a different vmlinux? I tried both vmlinux and >> vmlinux_2.6.27-gcc_4.3.4. >> I also tried recompiling system/alpha/palcode/ and putting the resultant >> binary in m5_system_2.0b3/binaries. None of them made any difference. >> >> Thanks! >> Stevo >> >> On Mon, Oct 13, 2014 at 10:25 AM, Matheus Alcântara Souza < >> ticks...@gmail.com> wrote: >> >>> I guess it is a kernel problem. Can you check out this? Or try to use >>> another kernel? >>> >>> Atenciosamente, >>> Matheus Alcântara Souza >>> (Via iPhone) >>> >>> Em 13/10/2014, às 12:22, Stevenson Jian via gem5-users < >>> gem5-users@gem5.org> escreveu: >>> >>> Hi all, >>> >>> I am trying to run PARSEC in Gem5 under full system mode. The benchmarks >>> run correctly when I set the simulated physical memory size to <=2GB. >>> However, I want to simulate a physical memory with 4GB. 
When I set "return >>> '4000MB'" in line 49 of configs/common/Benchmarks.py to set the simulated >>> physical memory size to 4GB and run PARSEC again (using command >>> "build/ALPHA/gem5.fast configs/example/fs.py -n 2 >>> --script=../parsecRunscripts/blackscholes_2c_simlarge_ckpts.rcS"), I get >>> the following error: >>> **simout** >>> ... >>> panic: M5 panic instruction called at pc = 0xfc31add0. >>> @ cycle 470482786500 >>> [execute:build/ALPHA/arch/alpha/generated/atomic_simple_cpu_exec.cc, >>> line 11210] >>> Memory Usage: 4273680 KBytes >>> Program aborted at cycle 470482786500 >>> Aborted >>> **system.terminal* >>> ... >>> setup: forcing memory size to 33554432K (from -98304K).^M >>> freeing pages 1103:4194304^M >>> reserving pages 1103:1167^M >>> SMP: 2 CPUs probed -- cpu_present_map = 3^M >>> Built 1 zonelists in Zone order, mobility grouping on. Total pages: >>> 4165632^M >>> Kernel command line: root=/dev/hda1 console=ttyS0^M >>> PID hash table entries: 4096 (order: 12, 32768 bytes)^M >>> Using epoch = 1900^M >>> Console: colour dummy device 80x25^M >>> console [ttyS0] enabled^M >>> Dentry cache hash table entries: 4194304 (order: 12, 33554432 bytes)^M >>> Inode-cache hash table entries: 2097152 (order: 11, 16777216 bytes)^M >>> Memory: 33265208k/33554432k available (3757k kernel code, 285456k >>> reserved, 261k data, 208k init)^M >>> Unable to handle kernel paging request at virtual address >>> ^M >>> CPU 0 swapper(0): Oops 1^M >>> pc = [] ra = [] ps = 0007Not >>> tainted^M >>> pc is at cache_alloc_refill+0x1ec/0x780^M >>> ra is at cache_alloc_refill+0xcc/0x780^M >>> v0 = 0001 t0 = t1 = ^M >>> t2 = t3 = 0001 t4 = ^M >>> t5 = t6 = fc07ff00 t7 = fc814000^M >>> s0 = fc80c2a8 s1 = s2 = fc822710^M >>> s3 = fc80c408 s4 = fc07ff30 s5 = ^M >>> s6 = fc80c448^M >>> a0 = fc80c448 a1 = 0009 a2 = 0001^M >>> a3 = 0002 a4 = a5 = 0044^M >>> t8 = t9 = 00200200 t10= ^M >>> t11= fc80c418 pv = fc6bb0f0 at = fc80c428^M >>> gp = fc85bf40 sp = fc817d38^M >>> Trace:^M >>> [] 
kmem_cache_alloc+0xb8/0xf0^M >>> [] kmem_cache_create+0x1f4/0x550^M >>> [] __start+0x1c/0x20^M >>> ^M >>> Code: 4821f621 402207a1 e43fffe4 a447 a4670008 47f6040a >>> b4620008 ^M >>> Kernel panic - not syncing: Attempted to kill the idle task!^M >>> >>> >>> >>> To solve the above issue, I looked up former posts and got the idea to >>> recompile system/alpha/console/console.c. I recompiled console.c using the >>> cross compile tool on the Gem5 website ( >>> http://www.m5sim.org/dist/current/alpha_crosstool.tar.bz2). Then I put >>> the compiled console binary in the m5_system_2.0b3 folder. However, now I >>> am getting a different error message: >>> >>> **simout*** >>> info: Entering event queue @ 0. Starting simulation... >>> info: Launchi
Re: [gem5-users] Simulation core of gem5
Have you looked at the comments in src/sim/eventq.hh? Are you interested in parallel simulation or the default single-threaded case?

Steve

On Mon, Oct 13, 2014 at 3:29 AM, fela via gem5-users wrote:
> Hi everyone!
>
> I'm trying to understand the simulation core of gem5. Due to the lack of
> documentation, I post this question hoping to find a response.
> How are events managed in gem5? Is there one clock for all the objects
> and one scheduler, like in SystemC? I read on the site that each object
> schedules its own events, but global events exist in the source code! If
> so, how are the queues synchronized?
>
> Can somebody clarify this point?
>
> Thanks,
>
> Fela
> PhD candidate
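As a conceptual picture of the default single-threaded case, gem5 can be thought of as servicing one tick-ordered queue of scheduled callbacks. The real implementation lives in src/sim/eventq.hh and differs in many details (priorities, intrusive lists, per-queue threading); this is only a minimal Python sketch of the idea:

```python
# Minimal sketch of a tick-ordered event queue in the spirit of gem5's
# single-threaded simulation loop (see src/sim/eventq.hh for the real
# thing): objects schedule callbacks at future ticks, and one loop
# services them strictly in time order.
import heapq
import itertools

class EventQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # tie-breaker: FIFO at equal ticks

    def schedule(self, tick, callback):
        heapq.heappush(self._heap, (tick, next(self._seq), callback))

    def run(self):
        log = []
        while self._heap:
            tick, _, cb = heapq.heappop(self._heap)
            log.append((tick, cb(tick)))
        return log

eq = EventQueue()
eq.schedule(20, lambda t: "dram_response")   # scheduled first...
eq.schedule(10, lambda t: "cache_access")    # ...but serviced second to this
events = eq.run()
assert events == [(10, "cache_access"), (20, "dram_response")]
```

In this picture there is no separate per-object clock: each object simply schedules its own events onto the shared queue, and ordering by tick is what keeps everything synchronized.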
Re: [gem5-users] Trouble Running Full System with more than 2GB of physical memory
Hello Stevo, Yes, a different vmlinux. Unfortunately, I have never built a new kernel for gem5. There is some information here: http://www.m5sim.org/Compiling_a_Linux_Kernel Another option is to recompile the console binary. Take a look at this thread: https://www.mail-archive.com/gem5-users@gem5.org/msg03280.html Best Matheus 2014-10-13 12:30 GMT-03:00 Stevenson Jian : > Hi Matheus, > > Thanks for the prompt response. I am not certain what you mean by kernel. > Do you mean use a different vmlinux? I tried both vmlinux and > vmlinux_2.6.27-gcc_4.3.4. > I also tried recompiling system/alpha/palcode/ and putting the resultant > binary in m5_system_2.0b3/binaries. None of them made any difference. > > Thanks! > Stevo > > On Mon, Oct 13, 2014 at 10:25 AM, Matheus Alcântara Souza < > ticks...@gmail.com> wrote: > >> I guess it is a kernel problem. Can you check out this? Or try to use >> another kernel? >> >> [rest of quoted thread trimmed; see the original message below]
Re: [gem5-users] Trouble Running Full System with more than 2GB of physical memory
Hi Matheus, Thanks for the prompt response. I am not certain what you mean by kernel. Do you mean use a different vmlinux? I tried both vmlinux and vmlinux_2.6.27-gcc_4.3.4. I also tried recompiling system/alpha/palcode/ and putting the resultant binary in m5_system_2.0b3/binaries. None of them made any difference. Thanks! Stevo On Mon, Oct 13, 2014 at 10:25 AM, Matheus Alcântara Souza < ticks...@gmail.com> wrote: > I guess it is a kernel problem. Can you check out this? Or try to use > another kernel? > > Atenciosamente, > Matheus Alcântara Souza > (Via iPhone) > > [rest of quoted thread trimmed; see the original message below]
Re: [gem5-users] Trouble Running Full System with more than 2GB of physical memory
I guess it is a kernel problem. Can you check out this? Or try to use another kernel? Atenciosamente, Matheus Alcântara Souza (Via iPhone) > Em 13/10/2014, às 12:22, Stevenson Jian via gem5-users > escreveu: > > Hi all, > > I am trying to run PARSEC in Gem5 under full system mode. The benchmarks run > correctly when I set the simulated physical memory size to <=2GB. However, I > want to simulate a physical memory with 4GB. [rest of quoted message trimmed; see the original message below]
[gem5-users] Trouble Running Full System with more than 2GB of physical memory
Hi all, I am trying to run PARSEC in Gem5 under full system mode. The benchmarks run correctly when I set the simulated physical memory size to <=2GB. However, I want to simulate a physical memory with 4GB. When I set "return '4000MB'" in line 49 of configs/common/Benchmarks.py to set the simulated physical memory size to 4GB and run PARSEC again (using command "build/ALPHA/gem5.fast configs/example/fs.py -n 2 --script=../parsecRunscripts/blackscholes_2c_simlarge_ckpts.rcS"), I get the following error: **simout** ... panic: M5 panic instruction called at pc = 0xfc31add0. @ cycle 470482786500 [execute:build/ALPHA/arch/alpha/generated/atomic_simple_cpu_exec.cc, line 11210] Memory Usage: 4273680 KBytes Program aborted at cycle 470482786500 Aborted **system.terminal* ... setup: forcing memory size to 33554432K (from -98304K).^M freeing pages 1103:4194304^M reserving pages 1103:1167^M SMP: 2 CPUs probed -- cpu_present_map = 3^M Built 1 zonelists in Zone order, mobility grouping on. Total pages: 4165632^M Kernel command line: root=/dev/hda1 console=ttyS0^M PID hash table entries: 4096 (order: 12, 32768 bytes)^M Using epoch = 1900^M Console: colour dummy device 80x25^M console [ttyS0] enabled^M Dentry cache hash table entries: 4194304 (order: 12, 33554432 bytes)^M Inode-cache hash table entries: 2097152 (order: 11, 16777216 bytes)^M Memory: 33265208k/33554432k available (3757k kernel code, 285456k reserved, 261k data, 208k init)^M Unable to handle kernel paging request at virtual address ^M CPU 0 swapper(0): Oops 1^M pc = [] ra = [] ps = 0007Not tainted^M pc is at cache_alloc_refill+0x1ec/0x780^M ra is at cache_alloc_refill+0xcc/0x780^M v0 = 0001 t0 = t1 = ^M t2 = t3 = 0001 t4 = ^M t5 = t6 = fc07ff00 t7 = fc814000^M s0 = fc80c2a8 s1 = s2 = fc822710^M s3 = fc80c408 s4 = fc07ff30 s5 = ^M s6 = fc80c448^M a0 = fc80c448 a1 = 0009 a2 = 0001^M a3 = 0002 a4 = a5 = 0044^M t8 = t9 = 00200200 t10= ^M t11= fc80c418 pv = fc6bb0f0 at = fc80c428^M gp = fc85bf40 sp = fc817d38^M Trace:^M 
[] kmem_cache_alloc+0xb8/0xf0^M [] kmem_cache_create+0x1f4/0x550^M [] __start+0x1c/0x20^M ^M Code: 4821f621 402207a1 e43fffe4 a447 a4670008 47f6040a b4620008 ^M Kernel panic - not syncing: Attempted to kill the idle task!^M To solve the above issue, I looked up former posts and got the idea to recompile system/alpha/console/console.c. I recompiled console.c using the cross compile tool on the Gem5 website ( http://www.m5sim.org/dist/current/alpha_crosstool.tar.bz2). Then I put the compiled console binary in the m5_system_2.0b3 folder. However, now I am getting a different error message: **simout*** info: Entering event queue @ 0. Starting simulation... info: Launching CPU 1 @ 686481000 panic: M5 panic instruction called at pc = 0x8e41. @ cycle 697785500 [execute:build/ALPHA/arch/alpha/generated/atomic_simple_cpu_exec.cc, line 11210] Memory Usage: 4268400 KBytes Program aborted at cycle 697785500 Aborted **system.terminal* ^MGot Configuration 623 ^Mmemsize FA00 pages 7D000 ^MFirst free page after ROM 0xFC018000 ^MHWRPB 0xFC018000 l1pt 0xFC046000 l2pt 0xFC048000 l3pt_rpb 0xFC04A000 l3pt_kernel 0xFC04E000 l2reserv 0xFC04C000 ^Mkstart = 0xFC31, kend = 0xFC899860, kentry = 0xFC31, numCPUs = 0x2 ^MCPU Clock at 2000 MHz IntrClockFrequency=1024 ^MBooting with 2 processor(s) ^MKSP: 0x20043FE8 PTBR 0x23 ^MKSP: 0x20043FE8 PTBR 0x23 ^MConsole Callback at 0x0, fixup at 0x0, crb offset: 0x790 ^MMemory cluster 0 [0 - 392] ^MMemory cluster 1 [392 - 511608] ^MInitalizing mdt_bitmap addr 0xFC038000 mem_pages 7D000 ^MConsoleDispatch at virt 18D8 phys 188D8 val FC0100A8 ^MBootstraping CPU 1 with sp=0xFC07C000 ^Munix_boot_mem ends at FC07E000 ^Mk_argc = 0 ^Mjumping to kernel at 0xFC31, (PCBB 0xFC018180 pfn 1101) I then ran GDB on gem5.debug to try to locate the source of the second error. 
Here is the gdb output: **GDB* info: kernel located at: /home/xunjian1/scratch/m5_system_2.0b3/binaries/vmlinux_2.6.27-gcc_4.3.4 Listening for system connection on port 3456 0: system.tsunami.io.rtc: Real-time clock set to Thu Jan 1 00:00:00 2009 warn: CoherentBus system.membus has no snooping ports attached! 0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000 0: system.remote_gdb.listener: listeni
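[Editor's note] One hedged observation on the first failure, since the console sources are not shown in this thread: the boot log line "setup: forcing memory size to 33554432K (from -98304K)" is exactly what a signed 32-bit overflow would produce. 4000 MB expressed in bytes exceeds INT32_MAX, and the two's-complement wrap lands on precisely -98304 K, which would explain why sizes <=2GB boot fine while 4GB panics, and why recompiling the console binary is the suggested fix. The function below is hypothetical, purely to demonstrate the arithmetic (note that converting an out-of-range value to a signed type is implementation-defined before C++20, though common compilers wrap modulo 2^32):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical reproduction of the "-98304K" value in the boot log:
// if firmware stores the memory size in bytes in a signed 32-bit
// integer, a 4000 MB configuration wraps past INT32_MAX.
int32_t memsize_bytes_32(int64_t megabytes) {
    // Compute exactly in 64 bits, then truncate to 32 bits the way a
    // 32-bit variable would (two's-complement wrap on common ABIs).
    return static_cast<int32_t>(megabytes * 1024 * 1024);
}
```

With this model, 2000 MB stays positive (2,097,152,000 < 2^31), while 4000 MB wraps to -100,663,296 bytes, i.e. -98304 K, matching the log.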
[gem5-users] Installing Tizen on Gem5
Hi Gem5 developers! Has anyone tried installing/installed Tizen OS on gem5? If so, any tips/directions/documentation would be very welcome. -- Best Regards, Anmol Mohanty
[gem5-users] Simulation core of gem5
Hi everyone! I'm trying to understand the simulation core of gem5. Due to the lack of documentation, I am posting this question hoping to find an answer. How are events managed in gem5? Is there one clock for all the objects and one scheduler, as in SystemC? I read on the site that each object schedules its own events, but global events exist in the source code! If so, how are the queues synchronized? Can somebody clarify this point? Thanks, Fela PhD candidate
Re: [gem5-users] Questions on DRAM Controller model
Hi Prathap, Indeed. The atomic mode is for fast-forwarding only. Once you actually want to get some representative performance numbers you have to run in timing mode with either the O3 or Minor CPU model. Andreas From: Prathap Kolakkampadath <kvprat...@gmail.com> Date: Monday, 13 October 2014 10:19 To: Andreas Hansson <andreas.hans...@arm.com> Cc: gem5 users mailing list <gem5-users@gem5.org> Subject: Re: [gem5-users] Questions on DRAM Controller model Thanks for your reply. The memory mode which I used is atomic. I think I need to run the tests in timing mode, which I believe shows interference and queueing delay similar to real platforms. Prathap [rest of quoted thread trimmed; see the messages below]
Re: [gem5-users] Questions on DRAM Controller model
Thanks for your reply. The memory mode which I used is atomic. I think I need to run the tests in timing mode, which I believe shows interference and queueing delay similar to real platforms. Prathap On Oct 13, 2014 2:55 AM, "Andreas Hansson" wrote: > Hi Prathap, > > I don’t dare say exactly what is going wrong in your setup, but I am > confident that Ruby will not magically make things more representative (it > will likely give you a whole lot more problems though). In the end it is > all about configuring the building blocks to match the system you want to > capture. [rest of quoted thread trimmed; see the messages below]
Re: [gem5-users] Questions on DRAM Controller model
Hi Prathap, I don’t dare say exactly what is going wrong in your setup, but I am confident that Ruby will not magically make things more representative (it will likely give you a whole lot more problems though). In the end it is all about configuring the building blocks to match the system you want to capture. The crossbars and caches in the classic memory system do make some simplifications, but I have not yet seen a case when they are not sufficiently accurate. Have you looked at the various policy settings in the DRAM controller, e.g. the page policy and address mapping? If you’re trying to correlate with a real platform, also see Anthony’s ISPASS paper from last year for some sensible steps in simplifying the problem and dividing it into manageable chunks. Good luck. Andreas From: Prathap Kolakkampadath <kvprat...@gmail.com> Date: Monday, 13 October 2014 00:29 To: Andreas Hansson <andreas.hans...@arm.com> Cc: gem5 users mailing list <gem5-users@gem5.org> Subject: Re: [gem5-users] Questions on DRAM Controller model Hello Andreas/Users, I used to create a checkpoint until Linux boot using the Atomic Simple CPU and then restore from this checkpoint to the detailed O3 CPU before running the test. I notice that the mem-mode is set to atomic and not timing. Will that be the reason for the lower contention on the memory bus I am observing? Thanks, Prathap On Sun, Oct 12, 2014 at 4:56 PM, Prathap Kolakkampadath <kvprat...@gmail.com> wrote: Hello Andreas, Even after configuring the model like the actual hardware, I am still not seeing enough interference on the read request under consideration. I am using the classic memory system model. Since it uses the atomic and functional packet allocation protocol, I would like to switch to Ruby (I think it more closely resembles a real platform). I am hitting into the problem below when I use Ruby.
/build/ARM/gem5.opt --stats-file=cr1A1.txt configs/example/fs.py --caches --l2cache --l1d_size=32kB --l1i_size=32kB --l2_size=1MB --num-cpus=4 --mem-size=512MB --kernel=/home/prathap/WorkSpace/linux-linaro-tracking-gem5/vmlinux --disk-image=/home/prathap/WorkSpace/gem5/fullsystem/disks/arm-ubuntu-natty-headless.img --machine-type=VExpress_EMM --dtb-file=/home/prathap/WorkSpace/linux-linaro-tracking-gem5/arch/arm/boot/dts/vexpress-v2p-ca15-tc1-gem5_4cpus.dtb --cpu-type=detailed --ruby --mem-type=ddr3_1600_x64 Traceback (most recent call last): File "", line 1, in File "/home/prathap/WorkSpace/gem5/src/python/m5/main.py", line 388, in main exec filecode in scope File "configs/example/fs.py", line 302, in test_sys = build_test_system(np) File "configs/example/fs.py", line 138, in build_test_system Ruby.create_system(options, test_sys, test_sys.iobus, test_sys._dma_ports) File "/home/prathap/WorkSpace/gem5/src/python/m5/SimObject.py", line 825, in __getattr__ raise AttributeError, err_string AttributeError: object 'LinuxArmSystem' has no attribute '_dma_ports' (C++ object is not yet constructed, so wrapped C++ methods are unavailable.) What could be the cause of this? Thanks, Prathap On Tue, Sep 9, 2014 at 1:35 PM, Andreas Hansson mailto:andreas.hans...@arm.com>> wrote: Hi Prathap, There are many possible reasons for the discrepancy, and obviously there are many ways of building a memory controller :-). Have you configured the model to look like the actual hardware? The most obvious differences would be in terms of buffer sizes, the page policy, arbitration policy, the threshold before closing a page, the read/write switching, actual timings etc. It is also worth checking if the controller hardware treats writes the same way the model does (early responses, minimise switching). 
Andreas From: Prathap Kolakkampadath <kvprat...@gmail.com> Date: Tuesday, 9 September 2014 18:56 To: Andreas Hansson <andreas.hans...@arm.com> Cc: gem5 users mailing list <gem5-users@gem5.org> Subject: Re: [gem5-users] Questions on DRAM Controller model Hello Andreas, Thanks for your reply. I read your ISPASS paper and got a fair understanding of the architecture. I am trying to reproduce, in the simulator environment, results collected from running synthetic benchmarks (latency and bandwidth) on real hardware. However, I see variations in the results, and I am trying to understand the reasons. The experiment has latency (memory non-intensive with random access) as the primary task and bandwidth (memory intensive with sequential access) as the co-runner task. On real hardware case 1 - 0 corunner : latency of the test is 74.88ns and b/w 854.74MB/s case 2 - 1 corunner : latency of the test is 225.95ns and b/w 283.24MB/s On simulator case 1 - 0 corunner : latency of the test is 76.08ns and b/w 802.25MB/s case 2 - 1 corunner : latency of the test is 93.69ns and b/w 651.57MB/s Case 1, where the latency test runs alone (0 corunner), matches in both environments. However Case 2, when run with
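[Editor's note] The two synthetic tasks Prathap describes are the classic pair: a pointer-chasing latency kernel, where each load's address depends on the previous load (random order defeats prefetching and keeps essentially one request in flight), and a sequential streaming bandwidth kernel with many independent accesses. A sketch of what such kernels typically look like follows; the names and structure are mine, not Prathap's actual benchmarks:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// Build a random cyclic permutation: chase[i] holds the index of the
// next element to visit, so the access pattern is one long random
// cycle through the array.
std::vector<size_t> make_chain(size_t n, unsigned seed) {
    std::vector<size_t> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    // Shuffle all but the first slot so the cycle always starts at 0.
    std::shuffle(idx.begin() + 1, idx.end(), std::mt19937(seed));
    std::vector<size_t> chase(n);
    for (size_t i = 0; i + 1 < n; ++i)
        chase[idx[i]] = idx[i + 1];
    chase[idx[n - 1]] = idx[0];  // close the cycle
    return chase;
}

// "Latency" kernel: each load's address comes from the previous load,
// so requests are fully serialized (memory non-intensive, random).
size_t chase_chain(const std::vector<size_t> &chase, size_t steps) {
    size_t p = 0;
    for (size_t i = 0; i < steps; ++i)
        p = chase[p];
    return p;
}

// "Bandwidth" kernel: independent sequential loads the hardware can
// overlap and prefetch (memory intensive, sequential).
long long stream_sum(const std::vector<long long> &buf) {
    long long s = 0;
    for (long long v : buf)
        s += v;
    return s;
}
```

Timing chase_chain over a buffer larger than the last-level cache gives a per-access latency estimate, while timing stream_sum gives a bandwidth estimate, which matches the roles of the primary and co-runner tasks in the experiment above.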