[m5-dev] Cron m5test@zizzer /z/m5/regression/do-regression quick

2011-01-22 Thread Cron Daemon
* build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MESI_CMP_directory passed.
* build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MESI_CMP_directory passed.
* build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MESI_CMP_directory passed.
* build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MOESI_CMP_directory passed.
* build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MOESI_CMP_token passed.
* build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MOESI_CMP_token passed.
* build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MOESI_CMP_token passed.
* build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MOESI_CMP_directory passed.
* build/ALPHA_SE_MOESI_hammer/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MOESI_hammer passed.
* build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MOESI_CMP_directory passed.
* build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MOESI_CMP_token passed.
* build/ALPHA_SE_MOESI_hammer/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MOESI_hammer passed.
* build/ALPHA_SE_MOESI_hammer/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MOESI_hammer passed.
* build/ALPHA_SE/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby passed.
* build/ALPHA_SE_MOESI_hammer/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MOESI_hammer passed.
* build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MESI_CMP_directory passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/simple-atomic passed.
* build/ALPHA_SE/tests/fast/quick/30.eio-mp/alpha/eio/simple-atomic-mp passed.
* build/ALPHA_SE/tests/fast/quick/01.hello-2T-smt/alpha/linux/o3-timing passed.
* build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MOESI_CMP_directory passed.
* build/ALPHA_SE/tests/fast/quick/30.eio-mp/alpha/eio/simple-timing-mp passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby passed.
* build/ALPHA_SE/tests/fast/quick/20.eio-short/alpha/eio/simple-timing passed.
* build/ALPHA_SE/tests/fast/quick/20.eio-short/alpha/eio/simple-atomic passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/simple-timing passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/simple-atomic passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/o3-timing passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/inorder-timing passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/o3-timing passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/simple-timing passed.
* build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/simple-timing-ruby passed.
* build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/simple-timing passed.
* build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-atomic-dual passed.
* build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/simple-atomic passed.
* build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/o3-timing passed.
* build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-timing passed.
* build/ALPHA_SE/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby passed.
* build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/inorder-timing passed.
* build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-timing-dual passed.
* build/ALPHA_FS/tests/fast/quick/80.netperf-stream/alpha/linux/twosys-tsunami-simple-atomic passed.
* build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-atomic passed.
* build/POWER_SE/tests/fast/quick/00.hello/power/linux/o3-timing passed.
* build/POWER_SE/tests/fast/quick/00.hello/power/linux/simple-atomic passed.
* build/ALPHA_SE/tests/fast/quick/50.memtest/alpha/linux/memtest passed.
* build/SPARC_SE/tests/fast/quick/00.hello/sparc/linux/simple-timing-ruby passed.
* build/SPARC_SE/tests/fast/quick/02.insttest/sparc/linux/simple-atomic passed.
* build/SPARC_SE/tests/fast/quick/40.m5threads-test-atomic/sparc/linux/simple-atomic-mp passed.
* build/SPARC_SE/tests/fast/quick/40.m5threads-test-atomic/sparc/linux/simple-timing-mp passed.
* build/X86_SE/tests/fast/quick/00.hello/x86/linux/simple-timing-ruby passed.
* build/ARM_SE/tests/fast/quick/00.hello/arm/linux/simple-timing passed.
* build/ARM_SE/tests/fast/quick/00.hello/arm/linux/simple-atomic passed.

[m5-dev] Notification from M5 Bugs

2011-01-22 Thread Flyspray
THIS IS AN AUTOMATED MESSAGE, DO NOT REPLY.

The following task has a new comment added:

FS#337 - Checkpoint Tester Identifies Mismatches (Bugs) for X86_FS
User who did this: - Gabe Black (gblack)

--
I looked at this, and there are still some problems with your command
line.

1. X86_FS_MOESI_hammer doesn't exist in the public repository. I
created it by merging ALPHA_SE_MOESI_hammer and X86_FS.
2. The X86 FS files available publicly don't yet, to the best of my
knowledge, support the --script option to fs.py. It's unnecessary
anyway since the simulation is stopped long, long before it gets to
user land.
3. The version of fs.py in the public repository doesn't seem to want
to run without --kernel being specified. I don't remember for sure if
I added a default, but apparently I didn't (see the example below).
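
For illustration, an invocation consistent with the three points above
might look like this (the build target is the locally merged one
described in point 1, and the kernel path is a placeholder, not a real
file):

    ./build/X86_FS_MOESI_hammer/m5.opt configs/example/fs.py \
        --kernel=/path/to/x86/vmlinux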

I have some basic ideas about what actually makes the checker upset,
but I need to look at it again more carefully.
--

More information can be found at the following URL:
http://www.m5sim.org/flyspray/task/337#comment163

You are receiving this message because you have requested it from the
Flyspray bugtracking system.  You can be removed from future
notifications by visiting the URL shown above.

___
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev


Re: [m5-dev] Review Request: x86: Timing support for pagetable walker

2011-01-22 Thread Gabe Black

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/396/#review796
---



src/arch/x86/pagetable_walker.cc
http://reviews.m5sim.org/r/396/#comment1118

Possible memory leak? I think the sender is responsible for cleaning up the 
packet for atomic accesses, and I don't see where that's being done. I may be 
wrong about atomic mode, or have missed where this is cleaned up.



src/arch/x86/pagetable_walker.cc
http://reviews.m5sim.org/r/396/#comment1116

Possible memory leak?



src/arch/x86/pagetable_walker.cc
http://reviews.m5sim.org/r/396/#comment1115

Instead of putting the majority of this function in an if/else, you could 
split it into two different functions. Then you could make the functional part 
take different parameters and not have to use the req or translation object to 
ferry information back.



src/arch/x86/pagetable_walker.cc
http://reviews.m5sim.org/r/396/#comment1117

Possible memory leak? Same rationale as atomic mode.



src/arch/x86/pagetable_walker.cc
http://reviews.m5sim.org/r/396/#comment1119

Possible memory leak? If write is set to something, we can't just lose it; 
we need to clean it up. That doesn't apply if write is statically allocated, 
but I don't see where that's happening.
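
For illustration, a minimal, self-contained sketch of the ownership rule
described above (the types are stand-ins, not M5's actual Packet and Port
classes, and it assumes the sender does own atomic-mode packets):

    #include <cstdint>

    struct Packet { uint64_t addr; };   // stand-in for the real packet type

    struct Port {
        // Atomic accesses complete synchronously and return a latency.
        uint64_t sendAtomic(Packet *pkt) { (void)pkt; return 1; }
    };

    uint64_t atomicRead(Port &port, uint64_t addr)
    {
        Packet *read = new Packet;
        read->addr = addr;
        uint64_t latency = port.sendAtomic(read);
        // Once sendAtomic() returns, the access is complete and the
        // sender owns the packet again, so it must be freed here;
        // dropping the pointer instead is exactly the leak flagged above.
        delete read;
        return latency;
    }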


- Gabe


On 2011-01-20 16:57:10, Brad Beckmann wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.m5sim.org/r/396/
 ---
 
 (Updated 2011-01-20 16:57:10)
 
 
 Review request for Default, Ali Saidi, Gabe Black, Steve Reinhardt, and 
 Nathan Binkert.
 
 
 Summary
 ---
 
 x86: Timing support for pagetable walker
 
 Move page table walker state to its own object type, and make the
 walker instantiate state for each outstanding walk. By storing the
 states in a queue, the walker is able to handle multiple outstanding
 timing requests. Note that functional walks use separate state
 elements.
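 
 For illustration, the queued-state structure described above might look
 roughly like this (a hedged sketch; class and member names are guesses
 for illustration, not necessarily the patch's):
 
     #include <list>
 
     struct WalkerState { /* per-walk registers, current level, etc. */ };
 
     class Walker {
         // One state object per outstanding timing walk, kept in a
         // queue so several walks can be in flight at once.
         std::list<WalkerState *> currStates;
         // Functional walks are synchronous and use a separate state
         // element, so they never disturb the queued timing walks.
         WalkerState funcState;
     };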
 
 
 Diffs
 -
 
   src/arch/x86/tlb.hh 9f9e10967912 
   src/arch/x86/tlb.cc 9f9e10967912 
   src/arch/x86/pagetable_walker.cc 9f9e10967912 
   src/arch/x86/pagetable_walker.hh 9f9e10967912 
 
 Diff: http://reviews.m5sim.org/r/396/diff
 
 
 Testing
 ---
 
 
 Thanks,
 
 Brad
 


___
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev


Re: [m5-dev] Review Request: x86: Timing support for pagetable walker

2011-01-22 Thread Gabe Black
You're headed in the right direction but aren't quite there yet. If you
split out the functional part of startWalk into its own function, you
can change the signature and pass the address and size by reference.
Then you don't need a request or translation object at all. You could
even put that code right into startFunctional since that's the only
place it's called from.
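
For illustration, a hedged sketch of that refactoring (the signatures
are illustrative, not the actual walker interface):

    #include <cstdint>

    typedef uint64_t Addr;

    class Walker {
      public:
        // The timing entry point keeps its existing request/translation
        // based signature (elided here):
        // void startWalk(RequestPtr req, Translation *translation, ...);

        // The functional walk returns its results through reference
        // parameters, so no Request or Translation object is needed
        // just to ferry the address and size back to the caller.
        void startFunctional(Addr vaddr, Addr &paddr, unsigned &logBytes)
        {
            (void)vaddr;
            // ... walk the page table synchronously, then fill in the
            // translation through the references ...
            paddr = 0;        // placeholder result
            logBytes = 12;    // e.g. a 4 KB page
        }
    };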

Also I think there may be some memory leaks here from packets not being
deleted when they should, and if that's true it's actually my fault from
the original code. It would be nice to verify that and if necessary
clean it up and not duplicate the brokenness.

Gabe


Gabe Black wrote:
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.m5sim.org/r/396/


 src/arch/x86/pagetable_walker.cc
 http://reviews.m5sim.org/r/396/diff/3/?file=9827#file9827line232
 (Diff revision 3, in Walker::WalkerState::startWalk(), line 232)

     walker->port.sendAtomic(read);

 Possible memory leak? I think the sender is responsible for cleaning up the 
 packet for atomic accesses, and I don't see where that's being done. I may be 
 wrong about atomic mode, or have missed where this is cleaned up.

 src/arch/x86/pagetable_walker.cc
 http://reviews.m5sim.org/r/396/diff/3/?file=9827#file9827line239
 (Diff revision 3, in Walker::WalkerState::startWalk(), line 239)

     walker->port.sendAtomic(write);

 Possible memory leak?

 src/arch/x86/pagetable_walker.cc
 http://reviews.m5sim.org/r/396/diff/3/?file=9827#file9827line244
 (Diff revision 3, in Walker::WalkerState::startWalk(), line 244)

     } else {

 Instead of putting the majority of this function in an if/else, you could 
 split it into two different functions. Then you could make the functional 
 part take different parameters and not have to use the req or translation 
 object to ferry information back.

 src/arch/x86/pagetable_walker.cc
 http://reviews.m5sim.org/r/396/diff/3/?file=9827#file9827line246
 (Diff revision 3, in Walker::WalkerState::startWalk(), line 246)

     walker->port.sendFunctional(read);

 Possible memory leak? Same rationale as atomic mode.

 src/arch/x86/pagetable_walker.cc
 http://reviews.m5sim.org/r/396/diff/3/?file=9827#file9827line250
 (Diff revision 3, in Walker::WalkerState::startWalk(), line 250)

     fault = stepWalk(write);

 Possible memory leak? If write is set to something, we can't just lose it; 
 we need to clean it up. That doesn't apply if write is statically allocated, 
 but I don't see where that's happening.

 - Gabe


 On January 20th, 2011, 4:57 p.m., Brad Beckmann wrote:

 Review request for Default, Ali Saidi, Gabe Black, Steve Reinhardt,
 and Nathan Binkert.
 By Brad Beckmann.

 (Updated 2011-01-20 16:57:10)


   Description

 x86: Timing support for pagetable walker

 Move page table walker state to its own object type, and make the
 walker instantiate state for each outstanding walk. By storing the
 states in a queue, the walker is able to handle multiple outstanding
 timing requests. Note that functional walks use separate state
 elements.


   Diffs

 * src/arch/x86/tlb.hh (9f9e10967912)
 * src/arch/x86/tlb.cc (9f9e10967912)
 * src/arch/x86/pagetable_walker.cc (9f9e10967912)
 * src/arch/x86/pagetable_walker.hh (9f9e10967912)

 View Diff http://reviews.m5sim.org/r/396/diff/

 


___
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev


Re: [m5-dev] Error in Simulating Mesh Network

2011-01-22 Thread Gabe Black
You should be able to move that patch ahead of the other patches in your
queue, right? It's so simple I wouldn't expect it to really depend on the
intervening patches.

Gabe

Beckmann, Brad wrote:
 Hi Nilay,

 Yes, I am aware of this problem, and one of the patches 
 (http://reviews.m5sim.org/r/381/) I'm planning to check in does fix this. 
 Unfortunately, those patches are being hung up because I need to do some more 
 work on another one of them, and right now I don't have time to do so. As you 
 can see from the patch, it is a very simple fix, so you may want to apply it 
 locally if it is blocking you.

 Brad


   
 -----Original Message-----
 From: m5-dev-boun...@m5sim.org [mailto:m5-dev-boun...@m5sim.org]
 On Behalf Of Nilay Vaish
 Sent: Thursday, January 20, 2011 6:16 AM
 To: m5-dev@m5sim.org
 Subject: [m5-dev] Error in Simulating Mesh Network

 Brad, I tried simulating a mesh network with four processors.

 ./build/ALPHA_FS_MOESI_hammer/m5.prof ./configs/example/ruby_fs.py
 --maxtick 2000 -n 4 --topology Mesh --mesh-rows 2 --num-l2cache 4
 --num-dir 4

 I receive the following error:

 panic: FIFO ordering violated: [MessageBuffer:  consumer-yes [ [71227521,
 870, 1; ] ]] [Version 1, L1Cache, triggerQueue_in]
   name: [Version 1, L1Cache, triggerQueue_in] current time: 71227512 delta:
 1 arrival_time: 71227513 last arrival_time: 71227521
   @ cycle 35613756000
 [enqueue:build/ALPHA_FS_MOESI_hammer/mem/ruby/buffers/MessageBuffer.cc,
 line 198]

 Do you think that the options I have specified should work correctly?

 Thanks
 Nilay

___
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev


Re: [m5-dev] Review Request: Ruby: Add support for locked memory accesses in X86_FS

2011-01-22 Thread Nilay Vaish

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/392/#review798
---


What's the difference between RMW and its locked version? I know that 
RMW is not handled right now.

- Nilay


On 2011-01-06 16:09:58, Brad Beckmann wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.m5sim.org/r/392/
 ---
 
 (Updated 2011-01-06 16:09:58)
 
 
 Review request for Default, Ali Saidi, Gabe Black, Steve Reinhardt, and 
 Nathan Binkert.
 
 
 Summary
 ---
 
 Ruby: Add support for locked memory accesses in X86_FS
 
 
 Diffs
 -
 
   src/mem/ruby/libruby.hh 9f9e10967912 
   src/mem/ruby/libruby.cc 9f9e10967912 
   src/mem/ruby/system/DMASequencer.cc 9f9e10967912 
   src/mem/ruby/system/RubyPort.cc 9f9e10967912 
   src/mem/ruby/system/Sequencer.cc 9f9e10967912 
 
 Diff: http://reviews.m5sim.org/r/392/diff
 
 
 Testing
 ---
 
 
 Thanks,
 
 Brad
 


___
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev


Re: [m5-dev] Review Request: ruby: support to stallAndWait the mandatory queue

2011-01-22 Thread Arkaprava Basu

Hi Nilay,

You are mostly correct. I believe this patch contains two things:

1. Support in SLICC for stalling and waiting on messages in a message 
buffer when the directory is in a blocking state for that address (i.e. 
cannot process the message at this point), until some event occurs that 
makes consumption of the message possible. When the directory unblocks, 
it provides support for waking up the messages that were hitherto 
waiting (this is precisely why you did not see the mandatory queue being 
popped, but do see WakeUpAllDependants).


2. Changes to the MOESI_hammer protocol that leverage this support.

For the purpose of this particular discussion, the 1st part is the 
relevant one.


As far as I understand, the support in SLICC for waiting and stalling 
was introduced primarily to enhance fairness in the way SLICC handles 
coherence requests. Without this support, when a message arrives at a 
controller that is in a blocking state, it recycles, which means it is 
polled again (and thus looked up again) in 10 cycles (the recycle 
latency is generally set to 10). If multiple messages arrive while the 
controller is in a blocking state for a given address, you can easily 
see that there is NO fairness: a message that arrived latest for the 
blocking address can be served first when the controller unblocks. With 
the new support for stalling and waiting, the blocked messages are put 
in a FIFO queue, thus providing better fairness.
But as you have correctly guessed, another major advantage of this 
support is that it reduces the unnecessary lookups to the cache 
structure that happen due to polling (a.k.a. recycling). So in summary, 
I believe that the problem you are seeing with too many lookups will 
*reduce* when the protocols are adjusted to take advantage of this 
facility. On a related note, I should also mention that another fringe 
benefit of this support is that it helps in debugging coherence 
protocols: with it, coherence protocol traces won't contain thousands of 
debug messages for recycling, which can be pretty annoying for protocol 
writers.
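
For illustration, a minimal C++ sketch of the stall-and-wait idea
described above (names and structure are illustrative only; the real
SLICC-generated code and MessageBuffer implementation differ):

    #include <cstdint>
    #include <deque>
    #include <map>

    typedef uint64_t Addr;
    struct Message { Addr addr; };

    class MessageBuffer {
        std::deque<Message> m_queue;                    // the normal FIFO
        std::map<Addr, std::deque<Message>> m_stalled;  // per-address waiters

      public:
        // Instead of recycling (re-enqueueing the head message and
        // re-polling it every ~10 cycles), park it on a per-address
        // stall queue and stop looking it up entirely.
        void stallAndWait(Addr addr) {
            m_stalled[addr].push_back(m_queue.front());
            m_queue.pop_front();
        }

        // When the controller unblocks the address, requeue the waiters
        // ahead of newer arrivals; iterating in reverse preserves their
        // original FIFO order, which is the fairness win over recycling.
        void wakeUpAllDependents(Addr addr) {
            auto it = m_stalled.find(addr);
            if (it == m_stalled.end())
                return;
            for (auto rit = it->second.rbegin(); rit != it->second.rend(); ++rit)
                m_queue.push_front(*rit);
            m_stalled.erase(it);
        }
    };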


I hope this helps,

Thanks
Arka



On 01/22/2011 06:40 AM, Nilay Vaish wrote:

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/408/#review797
---


I was thinking about why the ratio of the number of memory lookups, as 
reported by gprof, to the number of memory references, as reported in 
stats.txt, is so high.

While I was working with the MESI CMP directory protocol, I had seen that the 
same request from the processor is looked up again and again in the cache if 
the request is waiting for some event to happen. For example, suppose a 
processor asks to load address A, but the cache has no space for holding 
address A. Then it will give up some cache block B before it can bring in 
address A.

The problem is that while cache block B is being given up, it is possible that 
the request made for address A is looked up in the cache again, even though we 
know it cannot possibly be found there. This is because the requests in the 
mandatory queue are recycled until they complete.

Clearly, we should move the request for bringing in address A to a separate 
structure, instead of looking it up again and again. The new structure should 
be consulted whenever an event occurs that could possibly affect the status of 
this request. If we do this, then I think we should see a further reduction in 
the number of lookups; I would expect almost 90% of the lookups to the cache 
to go away. This should also mean about a 5% improvement in simulator 
performance.
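
For illustration, such a structure might look roughly like this (a
hedged sketch of the idea, not code from the patch; all names are
invented):

    #include <cstdint>
    #include <map>
    #include <vector>

    typedef uint64_t Addr;
    struct Request { Addr addr; };

    // Requests parked by the address they are waiting on (e.g. the
    // victim block B being written back). The cache itself is searched
    // again only when a relevant event fires, not on every recycle.
    class PendingTable {
        std::map<Addr, std::vector<Request>> m_waiting;

      public:
        void park(Addr waitingOn, const Request &req) {
            m_waiting[waitingOn].push_back(req);
        }

        // Called when an event occurs (e.g. block B has been evicted);
        // returns the requests that are now worth looking up again.
        std::vector<Request> wake(Addr addr) {
            std::vector<Request> ready;
            auto it = m_waiting.find(addr);
            if (it != m_waiting.end()) {
                ready.swap(it->second);
                m_waiting.erase(it);
            }
            return ready;
        }
    };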

Brad, do you agree with the above reasoning? If I am reading the patch 
correctly, I think it is trying to do just that, though I do not see the 
mandatory queue being popped. Can you explain the purpose of the patch in a 
slightly more verbose manner? If it is doing what I said above, then I think 
we should do this for all the protocols.

- Nilay


On 2011-01-06 16:19:46, Brad Beckmann wrote:

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/408/
---

(Updated 2011-01-06 16:19:46)


Review request for Default, Ali Saidi, Gabe Black, Steve Reinhardt, and Nathan 
Binkert.


Summary
---

ruby: support to stallAndWait the mandatory queue

By stalling and waiting on the mandatory queue instead of recycling it, one can
ensure that no incoming messages are starved when the mandatory queue puts
significant pressure on the L1 cache controller (i.e. the ruby memtester).


Diffs
-

   src/mem/protocol/MOESI_CMP_token-L1cache.sm 9f9e10967912
   src/mem/protocol/MOESI_hammer-cache.sm 9f9e10967912
   src/mem/ruby/buffers/MessageBuffer.hh 9f9e10967912