Hi,
I am attaching fs_het.py for reference.

best regards,
Sharjeel

On 25 August 2017 at 17:36, SHARJEEL KHILJI <sharjeelsaeedkhi...@gmail.com>
wrote:

> Hi, thanks for the reply. I am using ex5_big and ex5_LITTLE in the NoC. The
> protocol is MESI Two Level. The build command is as follows.
>
> ./build/ARM/gem5.fast configs/example/fs_het.py --l1d_size=32kB
> --l1i_size=32kB --num-l2caches 4 --l2_size=1MB --cacheline_size=64
> --machine-type=VExpress_GEM5_V1 --kernel 
> /home/khilji/gem5/m5/system/binaries/vmlinux-aarch32
> --disk-image /home/khilji/gem5/m5/system/disks/arm-ubuntu-natty-headless.img
> --dtb-filename 
> /home/khilji/gem5/m5/system/dtb/armv7_gem5_v1_big_little_2_2.dtb
> --num-cpus 4 --cpu-type exynos --big-cpus 2 --little-cpus 2 --big-cpu-clock
> 2GHz --little-cpu-clock 1GHz --ruby --num-dirs=4 --network=garnet2.0
> --topology Mesh_XY --mesh-rows 2 --mem-size 1GB
>
> I think that ex5_big has some issues with Ruby, since I have used it
> separately in a simple NoC with all four cores as ex5_big and got the same
> issue with fs.py. The build command for that is the following:
>
> ./build/ARM/gem5.fast configs/example/fs.py --l1d_size=32kB
> --l1i_size=32kB --num-l2caches 4 --l2_size=1MB --cacheline_size=64
> --machine-type=VExpress_GEM5_V1 --kernel 
> /home/khilji/gem5/m5/system/binaries/vmlinux-aarch32
> --disk-image /home/khilji/gem5/m5/system/disks/arm-ubuntu-natty-headless.img
> --dtb-filename /home/khilji/gem5/m5/system/dtb/armv7_gem5_v1_4cpu.dtb
> --num-cpus=4 --ruby --num-dirs=4 --cpu-type ex5_big --network=garnet2.0
> --topology Mesh_XY --mesh-rows 2 --mem-size 1GB --cpu-clock 1GHz
> --routing_algorithm 1 --work-begin-exit-count=1 --work-end-exit-count=1
>
> Also, does ex5_big support the memory map of VExpress_GEM5_V1? Maybe that
> is the issue. ex5_LITTLE has no issue when it is used in the above NoC
> with four cores. Any suggestions?
>
> best regards,
>
> Sharjeel
>
>
>
>
> On 25 August 2017 at 16:58, Nikos Nikoleris <nikos.nikole...@arm.com>
> wrote:
>
>> Hi Sharjeel,
>>
>>
>> Can you provide some more information about your simulation setup? What
>> ruby protocol are you using and what is the workload you're running? It
>> looks like the simulation is trying to load instructions from an invalid
>> address.
>>
>>
>> Nikos
>>
>>
>>
>> ------------------------------
>> *From:* gem5-users <gem5-users-boun...@gem5.org> on behalf of SHARJEEL
>> KHILJI <sharjeelsaeedkhi...@gmail.com>
>> *Sent:* 24 August 2017 17:06:00
>> *To:* gem5 users mailing list
>> *Subject:* [gem5-users] ex5_big issues with ruby memory system
>>
>> Hi,
>>
>> I am getting the following issue when I use ex5_big with the Ruby memory system.
>>
>> warning : Address 0 is outside of physical memory, stopping fetch
>>
>> which results in the simulation limit being reached. Can anyone tell me
>> whether ex5_big really works with the Ruby memory system?
>>
>> best regards,
>>
>> Sharjeel
>>
>>
>>
>>
>
>
# Copyright (c) 2010-2013, 2016 ARM Limited
# All rights reserved.
#
# The license below extends only to copyright in the software and shall
# not be construed as granting a license to any other intellectual
# property including but not limited to intellectual property relating
# to a hardware implementation of the functionality of the software
# licensed hereunder.  You may use the software subject to the license
# terms below provided that you ensure that this notice is replicated
# unmodified and in its entirety in all distributions of the software,
# modified or unmodified, in source code or in binary form.
#
# Copyright (c) 2012-2014 Mark D. Hill and David A. Wood
# Copyright (c) 2009-2011 Advanced Micro Devices, Inc.
# Copyright (c) 2006-2007 The Regents of The University of Michigan
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met: redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer;
# redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution;
# neither the name of the copyright holders nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors: Ali Saidi
#          Brad Beckmann
import optparse
import os
import sys

import m5
from m5.defines import buildEnv
from m5.objects import *
from m5.util import addToPath, fatal

addToPath('../')

from ruby import Ruby

from common.FSConfig import *
from common.SysPaths import *
from common.Benchmarks import *
from common import Simulation
from common import CacheConfig
from common import MemConfig
from common import CpuConfig
from common.Caches import *
from common import Options
from common.O3_ARM_v7a import O3_ARM_v7a_3
############

from common import ex5_big
from common import ex5_LITTLE


import devices
from devices import AtomicCluster, KvmCluster
#####################

def _using_pdes(root):
    """Determine if the simulator is using multiple parallel event queues"""

    for obj in root.descendants():
        if not m5.proxy.isproxy(obj.eventq_index) and \
               obj.eventq_index != root.eventq_index:
            return True

    return False
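
def _to_ticks(value):
    """Convert a latency given as a string (e.g. '1ms') to ticks.

    Note: this helper is not defined elsewhere in this script but is needed
    by instantiate() below; the definition is a sketch borrowed from gem5's
    configs/example/arm/fs_bigLITTLE.py.
    """
    return m5.ticks.fromSeconds(m5.util.convert.anyToLatency(value))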

#########################

def instantiate(options, checkpoint_dir=None):
    # Setup the simulation quantum if we are running in PDES-mode
    # (e.g., when using KVM)
    root = Root.getInstance()
    if root and _using_pdes(root):
        m5.util.inform("Running in PDES mode with a %s simulation quantum.",
                       options.sim_quantum)
        root.sim_quantum = _to_ticks(options.sim_quantum)

    # Get and load from the chkpt or simpoint checkpoint
    if options.restore_from:
        if checkpoint_dir and not os.path.isabs(options.restore_from):
            cpt = os.path.join(checkpoint_dir, options.restore_from)
        else:
            cpt = options.restore_from

        m5.util.inform("Restoring from checkpoint %s", cpt)
        m5.instantiate(cpt)
    else:
        m5.instantiate()


def run(checkpoint_dir=m5.options.outdir):
    # start simulation (and drop checkpoints when requested)
    while True:
        event = m5.simulate()
        exit_msg = event.getCause()
        if exit_msg == "checkpoint":
            print "Dropping checkpoint at tick %d" % m5.curTick()
            cpt_dir = os.path.join(checkpoint_dir, "cpt.%d" % m5.curTick())
            m5.checkpoint(cpt_dir)
            print "Checkpoint done."
        else:
            print exit_msg, " @ ", m5.curTick()
            break

    sys.exit(event.getCode())
############


# Check if KVM support has been enabled, we might need to do VM
# configuration if that's the case.
have_kvm_support = 'BaseKvmCPU' in globals()
def is_kvm_cpu(cpu_class):
    return have_kvm_support and cpu_class != None and \
        issubclass(cpu_class, BaseKvmCPU)

def cmd_line_template():
    if options.command_line and options.command_line_file:
        print "Error: --command-line and --command-line-file are " \
              "mutually exclusive"
        sys.exit(1)
    if options.command_line:
        return options.command_line
    if options.command_line_file:
        return open(options.command_line_file).read().strip()
    return None

  
##########################
class BigCluster(devices.CpuCluster):
    def __init__(self, system, num_cpus, cpu_clock,
                 cpu_voltage="1.0V"):
        cpu_config = [ CpuConfig.get("O3_ARM_v7a_3"), devices.L1I, devices.L1D,
                    devices.WalkCache, devices.L2 ]
        super(BigCluster, self).__init__(system, num_cpus, cpu_clock,
                                         cpu_voltage, *cpu_config)

class LittleCluster(devices.CpuCluster):
    def __init__(self, system, num_cpus, cpu_clock,
                 cpu_voltage="1.0V"):
        cpu_config = [ CpuConfig.get("MinorCPU"), devices.L1I, devices.L1D,
                       devices.WalkCache, devices.L2 ]
        super(LittleCluster, self).__init__(system, num_cpus, cpu_clock,
                                         cpu_voltage, *cpu_config)

class Ex5BigCluster(devices.CpuCluster):
    def __init__(self, system, num_cpus, cpu_clock,
                 cpu_voltage="1.0V"):
        cpu_config = [ CpuConfig.get("ex5_big"), ex5_big.L1I, ex5_big.L1D,
                    ex5_big.WalkCache, ex5_big.L2 ]
        super(Ex5BigCluster, self).__init__(system, num_cpus, cpu_clock,
                                         cpu_voltage, *cpu_config)

class Ex5LittleCluster(devices.CpuCluster):
    def __init__(self, system, num_cpus, cpu_clock,
                 cpu_voltage="1.0V"):
        cpu_config = [ CpuConfig.get("ex5_LITTLE"), ex5_LITTLE.L1I,
                    ex5_LITTLE.L1D, ex5_LITTLE.WalkCache, ex5_LITTLE.L2 ]
        super(Ex5LittleCluster, self).__init__(system, num_cpus, cpu_clock,
                                         cpu_voltage, *cpu_config)
                              
cpu_types = {
    "atomic" : (AtomicCluster, AtomicCluster),
    "timing" : (BigCluster, LittleCluster),
    "exynos" : (ex5_big , ex5_LITTLE),#(Ex5BigCluster, Ex5LittleCluster),
}
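
# A minimal sketch (an assumption mirroring gem5's fs_bigLITTLE.py; it is not
# called anywhere in this script) of how the cluster classes above would
# normally be selected from --cpu-type and attached to a system:
def _example_build_clusters(system, options):
    big_model, little_model = cpu_types[options.cpu_type]
    if options.big_cpus > 0:
        system.bigCluster = big_model(system, options.big_cpus,
                                      options.big_cpu_clock)
    if options.little_cpus > 0:
        system.littleCluster = little_model(system, options.little_cpus,
                                            options.little_cpu_clock)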

################
def build_test_system(np):
    cmdline = cmd_line_template()
    
    if buildEnv['TARGET_ISA'] == "arm":
        test_sys = makeArmSystem(test_mem_mode, options.machine_type,
                                 options.num_cpus, bm[0], options.dtb_filename,
                                 bare_metal=options.bare_metal,
                                 cmdline=cmdline,
                                 external_memory=options.external_memory_system,
                                 ruby=options.ruby)
        if options.enable_context_switch_stats_dump:
            test_sys.enable_context_switch_stats_dump = True
    else:
        fatal("Incapable of building %s full system!", buildEnv['TARGET_ISA'])

    # Set the cache line size for the entire system
    test_sys.cache_line_size = options.cacheline_size

    # Create a top-level voltage domain
    test_sys.voltage_domain = VoltageDomain(voltage = options.sys_voltage)

    # Create a source clock for the system and set the clock period
    test_sys.clk_domain = SrcClockDomain(clock =  options.sys_clock,
            voltage_domain = test_sys.voltage_domain)

    # Create a CPU voltage domain
    test_sys.cpu_voltage_domain = VoltageDomain()

    # Create a source clock for the CPUs and set the clock period
    test_sys.cpu_clk_domain_little = SrcClockDomain(clock = options.little_cpu_clock,
                                             voltage_domain =
                                             test_sys.cpu_voltage_domain)
    test_sys.cpu_clk_domain_big = SrcClockDomain(clock = options.big_cpu_clock,
                                             voltage_domain =
                                             test_sys.cpu_voltage_domain)                                         
    if options.kernel is not None:
        test_sys.kernel = binary(options.kernel)

    if options.script is not None:
        test_sys.readfile = options.script

    if options.lpae:
        test_sys.have_lpae = True

    if options.virtualisation:
        test_sys.have_virtualization = True

    test_sys.init_param = options.init_param

     
    # Hard-coded heterogeneous CPU list: two TestCPUClass1 cores on the
    # little clock domain (cpu_id 0-1) and two TestCPUClass2 cores on the
    # big clock domain (cpu_id 2-3).
    test_sys.cpu = [
        TestCPUClass1(clk_domain=test_sys.cpu_clk_domain_little, cpu_id=0),
        TestCPUClass1(clk_domain=test_sys.cpu_clk_domain_little, cpu_id=1),
        TestCPUClass2(clk_domain=test_sys.cpu_clk_domain_big, cpu_id=2),
        TestCPUClass2(clk_domain=test_sys.cpu_clk_domain_big, cpu_id=3),
    ]
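    # A possible generalisation (an assumption, not what this script does):
    # derive the CPU list from the --little-cpus/--big-cpus options instead
    # of hard-coding four cores, e.g.
    #   test_sys.cpu = \
    #       [TestCPUClass1(clk_domain=test_sys.cpu_clk_domain_little,
    #                      cpu_id=i) for i in xrange(options.little_cpus)] + \
    #       [TestCPUClass2(clk_domain=test_sys.cpu_clk_domain_big,
    #                      cpu_id=options.little_cpus + i)
    #        for i in xrange(options.big_cpus)]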
        
#################
    # (Earlier experimental code that picked big/little models via cpu_types
    # and CpuConfig and built separate per-cluster CPU lists has been
    # removed; the CPU list is constructed directly above.)
########################

    if options.ruby:
        # Check for timing mode because Ruby does not support atomic accesses
        if options.cpu_type not in ("DerivO3CPU", "TimingSimpleCPU",
                                    "atomic", "timing", "exynos"):
            print >> sys.stderr, "Ruby requires TimingSimpleCPU or O3CPU!!"
            sys.exit(1)

        Ruby.create_system(options, True, test_sys, test_sys.iobus,
                           test_sys._dma_ports)

        # Create a separate clock domain for Ruby
        test_sys.ruby.clk_domain = SrcClockDomain(clock = options.ruby_clock,
                                        voltage_domain = test_sys.voltage_domain)

        # Connect the ruby io port to the PIO bus,
        # assuming that there is just one such port.
        test_sys.iobus.master = test_sys.ruby._io_port.slave
        
##############################
        
        for (i, cpu) in enumerate(test_sys.cpu):
            # Tie the cpu ports to the correct ruby system ports
            cpu.createThreads()
            cpu.createInterruptController()

            cpu.icache_port = test_sys.ruby._cpu_ports[i].slave
            cpu.dcache_port = test_sys.ruby._cpu_ports[i].slave

            if buildEnv['TARGET_ISA'] in ("x86", "arm"):
                cpu.itb.walker.port = test_sys.ruby._cpu_ports[i].slave
                cpu.dtb.walker.port = test_sys.ruby._cpu_ports[i].slave

            if buildEnv['TARGET_ISA'] in "x86":
                cpu.interrupts[0].pio = test_sys.ruby._cpu_ports[i].master
                cpu.interrupts[0].int_master = test_sys.ruby._cpu_ports[i].slave
                cpu.interrupts[0].int_slave = test_sys.ruby._cpu_ports[i].master

########################
        # (A commented-out duplicate of the per-CPU Ruby port wiring, kept
        # from an earlier two-CPU-list experiment, has been removed.)
#########################

    else:
        if options.caches or options.l2cache:
            # By default the IOCache runs at the system clock
            test_sys.iocache = IOCache(addr_ranges = test_sys.mem_ranges)
            test_sys.iocache.cpu_side = test_sys.iobus.master
            test_sys.iocache.mem_side = test_sys.membus.slave
        elif not options.external_memory_system:
            test_sys.iobridge = Bridge(delay='50ns', ranges = test_sys.mem_ranges)
            test_sys.iobridge.slave = test_sys.iobus.master
            test_sys.iobridge.master = test_sys.membus.slave

        # Sanity check
        if options.fastmem:
          #  if TestCPUClass != AtomicSimpleCPU:
          #      fatal("Fastmem can only be used with atomic CPU!")
            if (options.caches or options.l2cache):
                fatal("You cannot use fastmem in combination with caches!")

        if options.simpoint_profile:
            if not options.fastmem:
                # Atomic CPU checked with fastmem option already
                fatal("SimPoint generation should be done with atomic cpu and fastmem")
            if np > 1:
                fatal("SimPoint generation not supported with more than one CPUs")

        for i in xrange(np):
            if options.fastmem:
                test_sys.cpu[i].fastmem = True
            if options.simpoint_profile:
                test_sys.cpu[i].addSimPointProbe(options.simpoint_interval)
            if options.checker:
                test_sys.cpu[i].addCheckerCpu()
            test_sys.cpu[i].createThreads()

        # If elastic tracing is enabled when not restoring from checkpoint and
        # when not fast forwarding using the atomic cpu, then check that the
        # TestCPUClass is DerivO3CPU or inherits from DerivO3CPU. If the check
        # passes then attach the elastic trace probe.
        # If restoring from checkpoint or fast forwarding, the code that does this for
        # FutureCPUClass is in the Simulation module. If the check passes then the
        # elastic trace probe is attached to the switch CPUs.
       # if options.elastic_trace_en and options.checkpoint_restore == None and \
       #     not options.fast_forward:
       #     CpuConfig.config_etrace(TestCPUClass, test_sys.cpu, options)

        CacheConfig.config_cache(options, test_sys)

        MemConfig.config_mem(options, test_sys)

    return test_sys

def build_drive_system(np):
    # driver system CPU is always simple, so is the memory
    # Note this is an assignment of a class, not an instance.
    DriveCPUClass = AtomicSimpleCPU
    drive_mem_mode = 'atomic'
    DriveMemClass = SimpleMemory

    cmdline = cmd_line_template()
    if buildEnv['TARGET_ISA'] == 'alpha':
        drive_sys = makeLinuxAlphaSystem(drive_mem_mode, bm[1], cmdline=cmdline)
    elif buildEnv['TARGET_ISA'] == 'mips':
        drive_sys = makeLinuxMipsSystem(drive_mem_mode, bm[1], cmdline=cmdline)
    elif buildEnv['TARGET_ISA'] == 'sparc':
        drive_sys = makeSparcSystem(drive_mem_mode, bm[1], cmdline=cmdline)
    elif buildEnv['TARGET_ISA'] == 'x86':
        drive_sys = makeLinuxX86System(drive_mem_mode, np, bm[1],
                                       cmdline=cmdline)
    elif buildEnv['TARGET_ISA'] == 'arm':
        drive_sys = makeArmSystem(drive_mem_mode, options.machine_type, np,
                                  bm[1], options.dtb_filename, cmdline=cmdline)

    # Create a top-level voltage domain
    drive_sys.voltage_domain = VoltageDomain(voltage = options.sys_voltage)

    # Create a source clock for the system and set the clock period
    drive_sys.clk_domain = SrcClockDomain(clock =  options.sys_clock,
            voltage_domain = drive_sys.voltage_domain)

    # Create a CPU voltage domain
    drive_sys.cpu_voltage_domain = VoltageDomain()

    # Create a source clock for the CPUs and set the clock period
    drive_sys.cpu_clk_domain = SrcClockDomain(clock = options.cpu_clock,
                                              voltage_domain =
                                              drive_sys.cpu_voltage_domain)

    drive_sys.cpu = DriveCPUClass(clk_domain=drive_sys.cpu_clk_domain,
                                  cpu_id=0)
    drive_sys.cpu.createThreads()
    drive_sys.cpu.createInterruptController()
    drive_sys.cpu.connectAllPorts(drive_sys.membus)
    if options.fastmem:
        drive_sys.cpu.fastmem = True
    if options.kernel is not None:
        drive_sys.kernel = binary(options.kernel)

    if is_kvm_cpu(DriveCPUClass):
        drive_sys.kvm_vm = KvmVM()

    drive_sys.iobridge = Bridge(delay='50ns',
                                ranges = drive_sys.mem_ranges)
    drive_sys.iobridge.slave = drive_sys.iobus.master
    drive_sys.iobridge.master = drive_sys.membus.slave

    # Create the appropriate memory controllers and connect them to the
    # memory bus
    drive_sys.mem_ctrls = [DriveMemClass(range = r)
                           for r in drive_sys.mem_ranges]
    for i in xrange(len(drive_sys.mem_ctrls)):
        drive_sys.mem_ctrls[i].port = drive_sys.membus.master

    drive_sys.init_param = options.init_param

    return drive_sys
##########################

# Add options
parser = optparse.OptionParser()
Options.addCommonOptions(parser)
Options.addFSOptions(parser)
Options.addOptions(parser)

##########
#parser1 = argparse.ArgumentParser(
#        description="Generic ARM big.LITTLE configuration")
#addOptions(parser1)

############


# Add the ruby specific and protocol specific options
if '--ruby' in sys.argv:
    Ruby.define_options(parser)

(options, args) = parser.parse_args()

if args:
    print "Error: script doesn't take any positional arguments"
    sys.exit(1)


# System under test: fix the CPU classes and the memory mode directly instead
# of using Simulation.setCPUClass(options).
TestCPUClass1 = O3_ARM_v7a_3            # cores placed on the little clock domain
TestCPUClass2 = ex5_LITTLE.ex5_LITTLE   # cores placed on the big clock domain
test_mem_mode = 'timing'
# Match the memories with the CPUs, based on the options for the test system
TestMemClass = Simulation.setMemClass(options)

if options.benchmark:
    try:
        bm = Benchmarks[options.benchmark]
    except KeyError:
        print "Error benchmark %s has not been defined." % options.benchmark
        print "Valid benchmarks are: %s" % DefinedBenchmarks
        sys.exit(1)
else:
    if options.dual:
        bm = [SysConfig(disk=options.disk_image, rootdev=options.root_device,
                        mem=options.mem_size, os_type=options.os_type),
              SysConfig(disk=options.disk_image, rootdev=options.root_device,
                        mem=options.mem_size, os_type=options.os_type)]
    else:
        bm = [SysConfig(disk=options.disk_image, rootdev=options.root_device,
                        mem=options.mem_size, os_type=options.os_type)]

# Total CPU count for the test system: little cores plus big cores
np1 = options.little_cpus
np2 = options.big_cpus
np3 = np1 + np2

test_sys = build_test_system(np3)
if len(bm) == 2:
    drive_sys = build_drive_system(np3)
    root = makeDualRoot(True, test_sys, drive_sys, options.etherdump)
elif len(bm) == 1 and options.dist:
    # This system is part of a dist-gem5 simulation
    root = makeDistRoot(test_sys,
                        options.dist_rank,
                        options.dist_size,
                        options.dist_server_name,
                        options.dist_server_port,
                        options.dist_sync_repeat,
                        options.dist_sync_start,
                        options.ethernet_linkspeed,
                        options.ethernet_linkdelay,
                        options.etherdump);
elif len(bm) == 1:
    root = Root(full_system=True, system=test_sys)
else:
    print "Error I don't know how to create more than 2 systems."
    sys.exit(1)

if options.timesync:
    root.time_sync_enable = True

if options.frame_capture:
    VncServer.frame_capture = True
Simulation.setWorkCountOptions(test_sys, options)
instantiate(options)
run()
#
#Simulation.run(options, root, test_sys, FutureClass)

