[gem5-users] Binary Translation in gem5
Hi,

Can anyone please tell me if binary translation is implemented in gem5? I have looked through the code and I am aware of how the M5 description language is used to generate instruction objects and an execute method for each instruction. As far as I understand, each instruction execution is a call to that instruction object's execute method, with no other optimizations or binary translation. Am I missing something here?

Abhishek

___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
[gem5-users] Error when running Garnet Network
Dear Gem5 Users,

I tried to run a sample configuration to start using GARNET in my NoC research, but when I ran the sample commands from http://www.gem5.org/Interconnection_Network, I got the following problem:

gem5 compiled Apr 20 2015 18:30:54
gem5 started Apr 21 2015 17:46:06
gem5 executing on Malkawie
command line: ./build/ALPHA/gem5.debug configs/example/ruby_random_test.py --num-cpus=16 --num-dirs=16 --topology=Mesh --mesh-rows=4 --garnet-network=fixed
Error: could not create sytem for ruby protocol MI_example
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/mmalkawie/new/gem5/src/python/m5/main.py", line 388, in main
    exec filecode in scope
  File "configs/example/ruby_random_test.py", line 108, in <module>
    Ruby.create_system(options, False, system)
  File "/home/mmalkawie/new/gem5/configs/ruby/Ruby.py", line 191, in create_system
    % protocol)
  File "<string>", line 1, in <module>
  File "/home/mmalkawie/new/gem5/configs/ruby/MI_example.py", line 85, in create_system
    clk_domain=system.cpu[i].clk_domain,
  File "/home/mmalkawie/new/gem5/src/python/m5/SimObject.py", line 1156, in __getitem__
    raise IndexError, "Non-zero index '%s' to SimObject" % key
IndexError: Non-zero index '1' to SimObject

The same problem appears when I try to run the simple network.

Thanks,
Regards,
Minu
[gem5-users] ARM 64 bit full system simulation
Dear Gem5 Users,

I am trying to run a 64-bit full system simulation. I execute the following command:

./build/ARM/gem5.opt -r -d test configs/example/fs.py --kernel=/PATH/Gem5_Dev/system/binaries/vmlinux.aarch64.20140821 --machine=VExpress_EMM64 --dtb-file=/PATH/Gem5_Dev/system/binaries/vexpress.aarch64.20140821.dtb --disk-image=linaro-minimal-aarch64.img --script=./script.sh

The contents of the script.sh file are:

#!/bin/bash
echo HELLO WORLD
ls
m5 exit

The simulator boots up fine, although it takes a reasonable amount of time. However, neither the echo command nor the ls is executed. In the end the simulation terminates because of the m5 exit instruction. The output in simout is:

REAL SIMULATION
warn: Existing EnergyCtrl, but no enabled DVFSHandler found.
info: Entering event queue @ 0. Starting simulation...
warn: SCReg: Writing 0 to dcc0:site0:pos0:fn7:dev0
warn: Tried to read RealView I/O at offset 0x60 that doesn't exist
warn: Tried to read RealView I/O at offset 0x48 that doesn't exist
warn: Tried to read RealView I/O at offset 0x8 that doesn't exist
warn: Tried to read RealView I/O at offset 0x48 that doesn't exist
Exiting @ tick 51080808874500 because m5_exit instruction encountered

and the last lines of system.terminal are:

[3.411025] TCP: cubic registered
[3.411026] NET: Registered protocol family 17
[3.411205] VFS: Mounted root (ext2 filesystem) on device 8:1.
[3.411215] devtmpfs: mounted
[3.411223] Freeing unused kernel memory: 208K (ffc000692000 - ffc0006c6000)
INIT: version 2.88 booting
Starting udev
[3.446942] udevd[607]: starting version 182
Starting Bootlog daemon: bootlogd.
[3.532260] random: dd urandom read with 19 bits of entropy available
Populating dev cache
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
hwclock: can't open '/dev/misc/rtc': No such file or directory
Mon Jan 27 08:00:00 UTC 2014
hwclock: can't open '/dev/misc/rtc': No such file or directory
INIT: Entering runlevel: 5
Configuring network interfaces... udhcpc (v1.21.1) started
[3.640781] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Sending discover...
Sending discover...
Sending discover...
No lease, forking to background
done.
Starting rpcbind daemon...rpcbind: cannot create socket for udp6
rpcbind: cannot create socket for tcp6
done.
rpcbind: cannot get uid of '': Success
creating NFS state directory: done
starting statd: done
Starting auto-serial-console: done

Any idea why I can't execute any bash command on the simulator? I have also tried the -b option and the results are the same.

Konstantinos Parasyris
Re: [gem5-users] Error when running Garnet Network
Hi,

First, run this command:

scons build/ALPHA/gem5.opt PROTOCOL=MOESI_CMP_directory RUBY=true

Next, run this command:

./build/ALPHA/gem5.opt configs/example/fs.py --ruby --garnet-network=fixed -n 16 --num-dirs=16 --topology=Mesh

Sent from my iPhone

On Jul 14, 2015, at 2:21 PM, Minu Cherian minu...@gmail.com wrote: [...]
Re: [gem5-users] ARM 64 bit full system simulation
What if you try the aarch64-ubuntu-trusty-headless.img disk image instead of linaro-minimal-aarch64.img?

Mohammad

On Tue, Jul 14, 2015 at 5:08 AM, Konstadinos PARASYRIS kopar...@uth.gr wrote: [...]
[gem5-users] Question on retry requests due to write queue full.
Hello Users,

I am using the classic memory system with the following DRAM controller parameters:

write_buffer_size = 64
write_high_thresh_perc = 85
write_low_thresh_perc = 50
min_writes_per_switch = 18

According to the write draining algorithm, the bus has to turn around to writes when writeQueue.size() > writeHighThreshold. However, when I run some memory intensive benchmarks, I get a high number of write retry requests because the write queue is full, as reported in the gem5 statistics:

# Number of times write queue was full causing retry
system.mem_ctrls.numWrRetry 183731

I am not sure how this could happen, as the DRAM controller drains writes in a batch whenever the write queue size grows beyond the high threshold, which is only 85% of the write buffer size. Is this expected?

Thanks,
Prathap
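The effect asked about above can be reproduced with a toy queue model (a simplified sketch, not the gem5 DRAM controller code; the watermarks below mirror the poster's 85%/50% thresholds, and the arrival/drain rates are made-up numbers): even with high-watermark draining, retries occur whenever writes arrive faster than they can be drained, because draining only bounds the queue's growth rate, not the incoming traffic.

```python
def simulate(arrival_per_cycle, drain_per_cycle,
             size=64, high=54, low=32, cycles=10_000):
    """Return how much write traffic was rejected (forcing retries)."""
    q = 0.0           # current write-queue occupancy
    draining = False  # are we in a write-drain burst?
    retries = 0.0
    for _ in range(cycles):
        q += arrival_per_cycle      # new writes (cache evictions) arrive
        if q > size:                # queue full: reject overflow -> retry
            retries += q - size
            q = size
        if q > high:                # high watermark: switch bus to writes
            draining = True
        if draining:
            q = max(0.0, q - drain_per_cycle)
            if q <= low:            # low watermark: stop draining
                draining = False
    return retries

# Arrivals faster than the drain rate saturate the queue and force retries;
# a drain rate above the arrival rate never fills the queue.
print(simulate(1.0, 0.5) > 0, simulate(0.2, 1.0) == 0)
```

The point of the sketch is that the high threshold only decides *when* draining starts; if the sustained arrival rate exceeds the controller's service rate, the queue still reaches capacity and every further write triggers a retry.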
Re: [gem5-users] Binary Translation in gem5
You are right that each instruction execution is a separate call. There is no binary translation being done.

Steve

On Mon, Jul 13, 2015 at 11:45 PM Abhishek Joshi abhijsh...@yahoo.in wrote: [...]
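For readers unfamiliar with the distinction: this is interpreter-style execution. A minimal sketch of that dispatch model (illustrative only, not gem5 code; the instruction class and register layout here are invented):

```python
# Interpreter-style execution, the model gem5 uses: each decoded instruction
# becomes an object, and executing it means calling that object's execute()
# method. No translated native code is ever generated or cached.
class AddImm:
    """A made-up 'add immediate' instruction object: rd = rs + imm."""
    def __init__(self, rd, rs, imm):
        self.rd, self.rs, self.imm = rd, rs, imm

    def execute(self, regs):
        regs[self.rd] = regs[self.rs] + self.imm

def run(program, regs):
    """Fetch/execute loop: one execute() call per dynamic instruction."""
    pc = 0
    while pc < len(program):
        inst = program[pc]   # "decode" yields an instruction object
        inst.execute(regs)   # the per-instruction virtual call
        pc += 1              # (no branches in this tiny sketch)
    return regs
```

A binary translator would instead compile blocks of guest instructions into host machine code and jump into them, amortizing the dispatch cost; gem5 pays the per-instruction call on every execution, which is what enables its detailed timing models.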
[gem5-users] Why cache misses are decreasing when core frequency increase?
Hello,

I am trying to use DVFS for my project. But I want the frequency control in hardware, so I cannot use the DVFS support provided by gem5, as that is at the kernel level. For my project I added each core and its L1 caches to a different clock domain and hacked the code to change the frequency of the domains whenever I wanted. To check that it works, I fired two runs: one with the default frequency settings (which is 2 GHz), and another in which I doubled the frequency of each domain, so each core runs at 4 GHz. Looking at the stats, I see the simulation time dropping to almost half, which is expected. But I cannot explain the cache stats: the cache misses for all caches also decrease by almost half. Can anybody explain how that is happening?

I am running an ARM full system with the classic memory model. All memory settings are default.

Thanks,
-- Warm regards
Nimish Girdhar
Department of Electrical and Computer Engineering
Texas A&M University
Re: [gem5-users] Question on retry requests due to write queue full.
I think if the benchmark is write intensive, this could happen: writes (cache evictions) may arrive at a rate faster than the rate at which the DRAM controller can process them.

Thanks,
Prathap

On Tue, Jul 14, 2015 at 11:27 AM, Prathap Kolakkampadath kvprat...@gmail.com wrote: [...]
Re: [gem5-users] Question on retry requests due to write queue full.
Hello Andreas,

I think that could be the case. I am running a very memory intensive synthetic benchmark, in which every memory operation generates a read and a write to DRAM.

Thanks,
Prathap

On Tue, Jul 14, 2015 at 12:39 PM, Andreas Hansson andreas.hans...@arm.com wrote: [...]

-- IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
ARM Limited, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ, Registered in England & Wales, Company No: 2557590
ARM Holdings plc, Registered office 110 Fulbourn Road, Cambridge CB1 9NJ, Registered in England & Wales, Company No: 2548782
Re: [gem5-users] MSHR Queue Full Handling
Hi Davesh,

1. promoteDeferredTargets is a "trick" used in the classic memory system where MSHR targets get put "on hold" if we already have a target outstanding that will e.g. give us the block in modified or owned state. Once the first chunk of targets is resolved, and all caches agree on the new state, we then continue with the deferred targets (after promoting them). Thus, the deferred targets are serialised with respect to the normal targets, simplifying any intermittent states.

2. When the queue is full, we block the cpu-side port. This is quite crude, as there could be hits that would have been fine. However, at the moment we take the easy way out. Have a look at setBlocked and recvTimingReq.

I hope that helps.

Andreas

From: Davesh Shingari shingaridav...@gmail.com
Date: Tuesday, 14 July 2015 18:59
Subject: [gem5-users] MSHR Queue Full Handling
[...]
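The blocking behaviour described in point 2 can be sketched with a toy model (illustrative only; the method names loosely mirror recvTimingReq/setBlocked/clearBlocked, but this is not gem5 source and the class is invented):

```python
# Toy model of a cache blocking its cpu-side port when all MSHRs are in use:
# new requests are rejected (the sender must retry) until a response frees
# an MSHR, at which point the port is unblocked again.
class Cache:
    def __init__(self, num_mshrs):
        self.num_mshrs = num_mshrs
        self.mshrs = []          # outstanding misses, one MSHR each
        self.blocked = False     # is the cpu-side port blocked?

    def recv_timing_req(self, addr):
        if self.blocked:
            return False         # crude: reject everything, even would-be hits
        self.mshrs.append(addr)  # allocate an MSHR for this miss
        if len(self.mshrs) == self.num_mshrs:
            self.blocked = True  # ~setBlocked(): queue now full
        return True

    def handle_response(self):
        was_full = len(self.mshrs) == self.num_mshrs
        self.mshrs.pop(0)        # deallocate the serviced MSHR
        if was_full:
            self.blocked = False  # ~clearBlocked(): accept requests again
```

So with 20 L2 MSHRs, the 21st concurrent miss is not dropped; the port simply refuses it and the upstream component retries once an MSHR drains, just as the rejection in the toy model above forces the caller to try again.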
[gem5-users] Discrepancy in Statistics, Clocks and Ticks
Hi,

I ran a simulation dumping stats after every 1 ticks. After the simulation I extracted the clock cycles and took the difference between consecutive readings, which should tell me the number of clock cycles in 1 ticks. As per the gem5 discussion: "A tick is 1ps and your clock cycle could be any number of ticks. For example, a clock cycle of 1ns corresponds to 1000 ticks." If I am understanding correctly, shouldn't the number of clock cycles between 1 ticks be fixed, except where clock gating comes into the picture? Please correct me if I am thinking in the wrong direction.

-- Have a great day!
Thanks and Warm Regards
Davesh Shingari
Master's in Computer Engineering [EE]
Arizona State University
davesh.shing...@asu.edu
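As a concrete sanity check of the tick/cycle relationship quoted above (the frequencies and interval here are assumed example values, not numbers from this thread):

```python
# gem5 models time in ticks; by default 1 tick = 1 picosecond.
TICKS_PER_SECOND = 10**12

def ticks_per_cycle(clk_freq_hz):
    """Length of one clock cycle of the given domain, in ticks."""
    return TICKS_PER_SECOND // clk_freq_hz

# A 1 GHz clock: one cycle is 1 ns = 1000 ticks.
print(ticks_per_cycle(10**9))        # 1000

# For a fixed-frequency domain, the cycles elapsed in a fixed tick
# interval are constant: a 2 GHz domain fits 2000 cycles into 10^6 ticks.
interval_ticks = 10**6
print(interval_ticks // ticks_per_cycle(2 * 10**9))   # 2000
```

So yes: as long as a clock domain's frequency never changes, a fixed tick interval always contains the same number of that domain's cycles; a varying count would indicate the domain's frequency changed (e.g. DVFS) or that the stat being read is not a raw cycle counter.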
Re: [gem5-users] Question on retry requests due to write queue full.
Hi Prathap,

Your writes are probably arriving faster than the controller can actually send them to the DRAM. What is it you're running?

Andreas

From: Prathap Kolakkampadath kvprat...@gmail.com
Date: Tuesday, 14 July 2015 17:27
Subject: [gem5-users] Question on retry requests due to write queue full.
[...]
[gem5-users] MSHR Queue Full Handling
Hi,

In cache_impl.hh, I can see that:

MSHRQueue *mq = mshr->queue;
bool wasFull = mq->isFull();

But wasFull is used only in the following code:

if (mshr->promoteDeferredTargets()) {
    // avoid later read getting stale data while write miss is
    // outstanding.. see comment in timingAccess()
    if (blk) {
        blk->status &= ~BlkReadable;
    }
    mq = mshr->queue;
    mq->markPending(mshr);
    requestMemSideBus((RequestCause)mq->index, clockEdge() + pkt->lastWordDelay);
} else {
    mq->deallocate(mshr);
    if (wasFull && !mq->isFull()) {
        clearBlocked((BlockedCause)mq->index);
    }
}

I have 2 questions:
1. What does the promoteDeferredTargets function do?
2. What happens when the queue is full? For example, in Caches.py I can see that the number of MSHRs for L2 is 20. What happens when the number of misses exceeds 20? Can someone point me to the handling code?

-- Have a great day!
Thanks and Warm Regards
Davesh Shingari
Master's in Computer Engineering [EE]
Arizona State University
dshin...@asu.edu
[gem5-users] classic memory system protocol specification
Hi there,

I'm doing research on verifying the correctness of the protocol communication between components. The criterion I use to judge is the specification of the protocol. What I have right now was found on the gem5 wiki page; I made some modifications according to the tracing results I have, but I'm not very sure it's a hundred percent correct. The specification is the essential part of my research, so I really need to make sure of its correctness.

For the platform, I used se.py with the classic memory system. The command I used for this model is: --cpu-type=timing -n 2 --caches --mem-size=1GB --l1d_size=16kB --l1i_size=16kB. Also, when I compiled gem5.opt, I specified the protocol as mosei_smp_directory. However, since I'm using Ruby, the trace result doesn't show much similarity to this protocol, and I'm not sure I should follow this one.

Attached is my guess at the specification; please tell me if it's not what I should use. I really appreciate it.

Best regards,
Cao