Hi,

Try the attached xeon configuration.

-Furat

On Tue, Jul 3, 2012 at 2:33 PM, Peter Hornyack <[email protected]> wrote:
> Yes, in the steps that I included I just removed the default config files
> to avoid repeat definition errors from scons.
>
> I've attached a tar.gz file containing the config files and simulation log
> files for two machines: "heterogeneous", which comes from the Marss example
> machine configuration webpage, and "xeon", which I created myself to try to
> simulate an 8-core machine (so it must be built with scons c=8). If the
> attachment is too large, I can post these on the web as well. This is the
> output that I see when I run a simulation with "-machine xeon":
>
> MARSSx86::Command received : -run
> Completed 0 cycles, 0 commits: 0 Hz, 0 insns/sec: rip ffffffff81037f7b
> ffffffff81037f7b ffffffff81037f7b ffffffff81037f7b 00000000004149bf
> ffffffff81037f7b ffffffff81037f7b ffffffff81037f7b
> Segmentation fault (core dumped)
>
> Here's the backtrace from the core file (after building with scons c=8
> debug=1):
>
> Core was generated by `qemu/qemu-system-x86_64 -hda ...'.
> Program terminated with signal 11, Segmentation fault.
> #0  0x00000000005af1ef in Memory::CoherentCache::MESILogic::complete_request (
>     this=0x392d530, queueEntry=0x39086f8, message=...)
>     at ptlsim/build/cache/mesiLogic.cpp:305
> 305             message.arg);
> (gdb) thread apply all bt
>
> Thread 2 (Thread 0x7f7417db3700 (LWP 5801)):
> #0  0x00007f7473b410fe in pthread_cond_timedwait@@GLIBC_2.3.2 ()
>     from /lib/x86_64-linux-gnu/libpthread.so.0
> #1  0x00000000004ba51a in cond_timedwait (ts=0x7f7417db2e60, mutex=0xe3bd00,
>     cond=0xe3bd60) at qemu/posix-aio-compat.c:104
> #2  aio_thread (unused=<optimized out>) at qemu/posix-aio-compat.c:325
> #3  0x00007f7473b3ce9a in start_thread ()
>     from /lib/x86_64-linux-gnu/libpthread.so.0
> #4  0x00007f74729f24bd in clone () from /lib/x86_64-linux-gnu/libc.so.6
> #5  0x0000000000000000 in ?? ()
>
> Thread 1 (Thread 0x7f7475c39740 (LWP 5800)):
> #0  0x00000000005af1ef in Memory::CoherentCache::MESILogic::complete_request (
>     this=0x392d530, queueEntry=0x39086f8, message=...)
>     at ptlsim/build/cache/mesiLogic.cpp:305
> #1  0x0000000000595c0a in Memory::CoherentCache::CacheController::complete_request (
>     this=0x3908490, message=..., queueEntry=0x39086f8)
>     at ptlsim/build/cache/coherentCache.cpp:386
> #2  0x00000000005975c8 in Memory::CoherentCache::CacheController::handle_lower_interconnect (
>     this=0x3908490, message=...)
>     at ptlsim/build/cache/coherentCache.cpp:230
> #3  0x0000000000597817 in Memory::CoherentCache::CacheController::handle_interconnect_cb (
>     this=0x3908490, arg=0x2fb7280)
>     at ptlsim/build/cache/coherentCache.cpp:412
> #4  0x000000000058d3f2 in superstl::TFunctor1<Memory::Controller>::operator() (
>     this=<optimized out>, arg=<optimized out>) at ptlsim/lib/superstl.h:3950
> #5  0x00000000005ffd65 in superstl::Signal::emit (this=<optimized out>,
>     arg=<optimized out>) at ptlsim/build/lib/superstl.cpp:1431
> #6  0x00000000005b668e in Memory::SplitPhaseBus::BusInterconnect::data_broadcast_completed_cb (
>     this=0x396d3e0, arg=0x396d648)
>     at ptlsim/build/cache/splitPhaseBus.cpp:507
> #7  0x00000000005b92e0 in superstl::TFunctor1<Memory::SplitPhaseBus::BusInterconnect>::operator() (
>     this=<optimized out>, arg=<optimized out>)
>     at ptlsim/lib/superstl.h:3950
> #8  0x00000000005ffd65 in superstl::Signal::emit (this=<optimized out>,
>     arg=<optimized out>) at ptlsim/build/lib/superstl.cpp:1431
> #9  0x00000000005aa703 in execute (this=0x2fc9028)
>     at ptlsim/cache/memoryHierarchy.h:110
> #10 Memory::MemoryHierarchy::clock (this=0x2fb5ff0)
>     at ptlsim/build/cache/memoryHierarchy.cpp:106
> #11 0x000000000067c4d7 in BaseMachine::run (this=0x1268320, config=...)
>     at ptlsim/build/sim/machine.cpp:258
> #12 0x000000000068b534 in ptl_simulate () at ptlsim/build/sim/ptlsim.cpp:1357
> #13 0x000000000057f902 in sim_cpu_exec () at qemu/cpu-exec.c:310
> #14 0x000000000041ffe5 in main_loop () at qemu/vl.c:1450
> #15 main (argc=7, argv=0x7fffb576a238, envp=<optimized out>) at qemu/vl.c:3189
>
> I also tried making some adjustments to that xeon config file (like using
> p2p connections between L2/L3 and L3/MEM instead of split_bus), but these
> didn't seem to help. As I said, I'm not sure if I'm doing something wrong or
> if there's a problem in Marss; if possible, it would be helpful to get a
> working machine configuration with an L3 cache to start from. Please let me
> know if I can provide any other information.
>
> Thanks,
> Peter
>
>
> On Tue, Jul 3, 2012 at 8:37 AM, Furat Afram <[email protected]> wrote:
>
>> Hi,
>>
>> You don't need to remove any files; MARSS compiles in all of the available
>> machines, and you can then choose any of them at run time using "-machine".
>>
>> Can you attach the log files in both cases (your own configuration and the
>> example configuration), as well as the machine configuration you created?
>>
>> Thanks,
>> -Furat
>>
>> On Mon, Jul 2, 2012 at 11:11 PM, Peter Hornyack <[email protected]> wrote:
>>
>>> Hello,
>>>
>>> I've been using Marss for a few days and would like to simulate a more
>>> sophisticated machine than those included in the original default.conf
>>> configuration. I found the "Machine Configuration" page on the Marss
>>> website that describes how to edit the machine configuration files:
>>> http://marss86.org/index.php?title=Machine_Configuration. However, when I
>>> use the example configuration from the bottom of that web page, or any
>>> machine configuration of my own that tries to use an L3 cache, I get
>>> errors from Marss.
>>> These are the steps to reproduce the problem (on my Core 2 Duo machine
>>> running Ubuntu 12.04):
>>>
>>> > git clone git://github.com/avadhpatel/marss.git
>>> > cd marss
>>> > rm -f config/atom_core.conf config/default.conf config/moesi.conf config/ooo_core.conf
>>> > edit config/example.conf:
>>>       Copy example configuration from bottom of this page:
>>>       http://marss86.org/index.php?title=Machine_Configuration
>>> > scons c=2
>>> > edit test.cfg:
>>>       -machine heterogeneous
>>>       -bench-name test
>>>       -stats test.stats
>>>       -logfile test.log
>>>       -loglevel 10
>>> > qemu/qemu-system-x86_64 -hda /path/to/ubuntu-kvm-natty-amd64.raw -m 1024 -simconfig test.cfg
>>>
>>> My disk image contains a 64-bit Ubuntu 11.04 distribution, and I see the
>>> output "Simulator is now waiting for a 'run' command" in my terminal. In
>>> the emulated system I now run a program that switches to simulation mode,
>>> and I see the following output:
>>>
>>> PTLCALL type PTLCALL_ENQUEUE
>>> MARSSx86::Command received : -run
>>> Completed 0 cycles, 0 commits: 0 Hz, 0
>>> Completed 461000 cycles, 0 commits: 2302454 Hz, 0
>>> Completed 927000 cycles, 0 commits: 2329693 Hz, 0
>>> insns/sec: rip ffffffff8109c080 ffffffff8100c980
>>> [vcpu 0] thread 0: WARNING: At cycle 1048577, 0 user commits: no
>>> instructions have committed for 1048577 cycles; the pipeline could be
>>> deadlocked
>>> qemu-system-x86_64: ptlsim/build/core/ooo-core/ooo.cpp:876: bool
>>> ooo::OooCore::runcycle(void*): Assertion `0' failed.
>>> Aborted (core dumped)
>>>
>>> If I perform the same steps but don't remove the default config files and
>>> use the default "shared_l2" or "private_L2" machines, then the simulation
>>> runs fine with my test program. I have also created a different machine
>>> configuration with an L3 cache (attempting to simulate an 8-core Intel
>>> Xeon processor) that causes a segfault in Marss during the simulation.
>>>
>>> I'm not sure if this issue is a bug in Marss or a problem due to a bad
>>> machine configuration. If somebody can take a look and offer any advice,
>>> that would be great. The steps that I've included should hopefully make
>>> it easy to reproduce this issue, but I can gladly post my example.conf
>>> file, my other 8-core config file, test.log output, or anything else that
>>> would be helpful.
>>>
>>> Also, if anybody has a working machine configuration with an L3 cache or
>>> an 8-core configuration and can post it, that would also be excellent (I
>>> looked around the mailing list archives a bit for something like this,
>>> but failed to find anything); if I can at least get my hands on a
>>> configuration that works, then I can hopefully tweak it to something
>>> close to the processor that I'm trying to simulate.
>>>
>>> Thanks,
>>> Peter
>>>
>>> _______________________________________________
>>> http://www.marss86.org
>>> Marss86-Devel mailing list
>>> [email protected]
>>> https://www.cs.binghamton.edu/mailman/listinfo/marss86-devel
xeon.conf
Description: Binary data
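
The attached xeon.conf is scrubbed to binary data by the list archive. For readers without the attachment, here is a rough, hypothetical sketch of an 8-core machine definition with private L1/L2 caches and a shared L3, laid out in the YAML style of the wiki's Machine Configuration example. The machine name, module names (l1_32K, l2_256K, l3_8M, dram_cont), sizes, latencies, and the $NUMCORES / UPPER / UPPER2 / LOWER connection conventions are assumptions drawn from the stock config files, not the contents of the actual attachment, and should be checked against config/default.conf before use.

    # Hypothetical sketch only -- not the actual xeon.conf attachment.
    cache:
      l3_8M:                      # assumed module name for a shared L3
        base: wb_cache            # assuming the write-back cache base used by the stock L1/L2 modules
        params:
          SIZE: 8M
          LINE_SIZE: 64
          ASSOC: 16
          LATENCY: 30
          READ_PORTS: 2
          WRITE_PORTS: 2

    machine:
      xeon:
        description: 8-core machine with private L1/L2 and a shared L3 (illustrative)
        min_contexts: 8           # matches a tree built with 'scons c=8'
        cores:
          - type: ooo             # out-of-order core module from the stock core config
            name_prefix: ooo_
            option:
              threads: 1
        caches:
          - type: l1_32K          # assumed per-core private L1 instruction cache
            name_prefix: L1_I_
            insts: $NUMCORES
            option:
              private: true
          - type: l1_32K          # assumed per-core private L1 data cache
            name_prefix: L1_D_
            insts: $NUMCORES
            option:
              private: true
          - type: l2_256K         # assumed per-core private L2
            name_prefix: L2_
            insts: $NUMCORES
            option:
              private: true
              last_private: true  # L2 is the last private level above the shared L3
          - type: l3_8M           # single shared L3 defined above
            name_prefix: L3_
            insts: 1
        memory:
          - type: dram_cont       # stock DRAM controller module
            name_prefix: MEM_
            insts: 1
            option:
              latency: 90
        interconnects:
          - type: p2p             # core <-> L1 and L1 <-> private L2 links
            connections:
              - core_$: I
                L1_I_$: UPPER
              - core_$: D
                L1_D_$: UPPER
              - L1_I_$: LOWER
                L2_$: UPPER
              - L1_D_$: LOWER
                L2_$: UPPER2
          - type: split_bus       # all private L2s share the L3 over a bus
            connections:
              - L2_*: LOWER
                L3_0: UPPER
          - type: p2p             # L3 <-> memory controller
            connections:
              - L3_0: LOWER
                MEM_0: UPPER

If a definition along these lines parses, it would live in config/, be built with scons c=8, and be selected at run time with "-machine xeon" in the simconfig file, as in the steps quoted above.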
_______________________________________________
http://www.marss86.org
Marss86-Devel mailing list
[email protected]
https://www.cs.binghamton.edu/mailman/listinfo/marss86-devel
