Re: [gem5-dev] Review Request 2646: mem: Split port retry for all different packet classes
--- This is an automatically generated e-mail. To reply, visit: http://reviews.gem5.org/r/2646/#review5872 --- Ship it! Thanks! - Steve Reinhardt On Feb. 8, 2015, 9:48 a.m., Andreas Hansson wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > http://reviews.gem5.org/r/2646/ > --- > > (Updated Feb. 8, 2015, 9:48 a.m.) > > > Review request for Default. > > > Repository: gem5 > > > Description > --- > > Changeset 10705:1fec273b2946 > --- > mem: Split port retry for all different packet classes > > This patch fixes a long-standing issue with the port flow > control. Before this patch the retry mechanism was shared between all > different packet classes. As a result, a snoop response could get > stuck behind a request waiting for a retry, even if the send/recv > functions were split. This caused message-dependent deadlocks in > stress-test scenarios. > > The patch splits the retry into one per packet (message) class. Thus, > sendTimingReq has a corresponding recvReqRetry, sendTimingResp has > recvRespRetry etc. Most of the changes to the code involve simply > clarifying what type of request a specific object was accepting. > > The biggest change in functionality is in the cache downstream packet > queue, facing the memory. This queue was shared by requests and snoop > responses, and it is now split into two queues, each with their own > flow control, but the same physical MasterPort. These changes fix > the previously seen deadlocks. > > > Diffs > - > > src/arch/x86/pagetable_walker.hh 94901e131a7f > src/arch/x86/pagetable_walker.cc 94901e131a7f > src/cpu/kvm/base.hh 94901e131a7f > src/cpu/minor/fetch1.hh 94901e131a7f > src/cpu/minor/fetch1.cc 94901e131a7f > src/cpu/minor/lsq.hh 94901e131a7f > src/cpu/minor/lsq.cc 94901e131a7f > src/cpu/o3/cpu.hh 94901e131a7f > src/cpu/o3/cpu.cc 94901e131a7f > src/cpu/o3/fetch.hh 94901e131a7f > src/cpu/o3/fetch_impl.hh 94901e131a7f > src/cpu/o3/lsq.hh 94901e131a7f > src/cpu/o3/lsq_impl.hh 94901e131a7f > src/cpu/simple/atomic.hh 94901e131a7f > src/cpu/simple/timing.hh 94901e131a7f > src/cpu/simple/timing.cc 94901e131a7f > src/cpu/testers/directedtest/RubyDirectedTester.hh 94901e131a7f > src/cpu/testers/memtest/memtest.hh 94901e131a7f > src/cpu/testers/memtest/memtest.cc 94901e131a7f > src/cpu/testers/networktest/networktest.hh 94901e131a7f > src/cpu/testers/networktest/networktest.cc 94901e131a7f > src/cpu/testers/rubytest/RubyTester.hh 94901e131a7f > src/cpu/testers/traffic_gen/traffic_gen.hh 94901e131a7f > src/cpu/testers/traffic_gen/traffic_gen.cc 94901e131a7f > src/dev/dma_device.hh 94901e131a7f > src/dev/dma_device.cc 94901e131a7f > src/mem/addr_mapper.hh 94901e131a7f > src/mem/addr_mapper.cc 94901e131a7f > src/mem/bridge.hh 94901e131a7f > src/mem/bridge.cc 94901e131a7f > src/mem/cache/base.hh 94901e131a7f > src/mem/cache/base.cc 94901e131a7f > src/mem/cache/cache.hh 94901e131a7f > src/mem/cache/cache_impl.hh 94901e131a7f > src/mem/coherent_xbar.hh 94901e131a7f > src/mem/coherent_xbar.cc 94901e131a7f > src/mem/comm_monitor.hh 94901e131a7f > src/mem/comm_monitor.cc 94901e131a7f > src/mem/dram_ctrl.hh 94901e131a7f > src/mem/dram_ctrl.cc 94901e131a7f > src/mem/dramsim2.hh 94901e131a7f > src/mem/dramsim2.cc 94901e131a7f > src/mem/external_slave.cc 94901e131a7f > src/mem/mem_checker_monitor.hh 94901e131a7f > src/mem/mem_checker_monitor.cc 94901e131a7f > src/mem/mport.hh 94901e131a7f > src/mem/noncoherent_xbar.hh 94901e131a7f > src/mem/noncoherent_xbar.cc 94901e131a7f > src/mem/packet_queue.hh 94901e131a7f > src/mem/packet_queue.cc
94901e131a7f > src/mem/port.hh 94901e131a7f > src/mem/port.cc 94901e131a7f > src/mem/qport.hh 94901e131a7f > src/mem/ruby/slicc_interface/AbstractController.hh 94901e131a7f > src/mem/ruby/slicc_interface/AbstractController.cc 94901e131a7f > src/mem/ruby/structures/RubyMemoryControl.hh 94901e131a7f > src/mem/ruby/system/DMASequencer.hh 94901e131a7f > src/mem/ruby/system/DMASequencer.cc 94901e131a7f > src/mem/ruby/system/RubyPort.hh 94901e131a7f > src/mem/ruby/system/RubyPort.cc 94901e131a7f > src/mem/simple_mem.hh 94901e131a7f > src/mem/simple_mem.cc 94901e131a7f > src/mem/tport.hh 94901e131a7f > src/mem/tport.cc 94901e131a7f > src/mem/xbar.hh 94901e131a7f > src/mem/xbar.cc 94901e131a7f > src/sim/system.hh 94901e131a7f > > Diff: http://reviews.gem5.org/r/2646/diff/ > > > Testing > --- > > > Thanks, > > Andreas Hansson
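For reference, the pairing described in the commit message can be sketched as a pair of interfaces. This is a minimal illustration with simplified types, not the actual gem5 MasterPort/SlavePort declarations; only the method pairs named in the description (sendTimingReq/recvReqRetry and sendTimingResp/recvRespRetry) are taken from it, and snoop responses follow the same per-class pattern.

    // Minimal sketch of the per-class retry pairing described above.
    // Simplified stand-ins, not the real gem5 port classes.
    struct Packet;                  // opaque payload stand-in
    using PacketPtr = Packet *;

    // Master side: sends timing requests, receives request-class retries.
    struct MasterPortSketch {
        // Returns false if the peer cannot accept the request right now.
        virtual bool sendTimingReq(PacketPtr pkt) = 0;
        // Peer signals that the previously rejected *request* may be resent.
        virtual void recvReqRetry() = 0;
        virtual ~MasterPortSketch() = default;
    };

    // Slave side: sends timing responses, receives response-class retries.
    struct SlavePortSketch {
        virtual bool sendTimingResp(PacketPtr pkt) = 0;
        // Peer signals that the previously rejected *response* may be resent.
        virtual void recvRespRetry() = 0;
        virtual ~SlavePortSketch() = default;
    };
    // Snoop responses get their own retry channel in the same way, so a
    // blocked request can no longer hold up a snoop response.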
Re: [gem5-dev] Review Request 2646: mem: Split port retry for all different packet classes
--- This is an automatically generated e-mail. To reply, visit: http://reviews.gem5.org/r/2646/ --- (Updated Feb. 8, 2015, 5:48 p.m.) Review request for Default. Repository: gem5 Description (updated) --- Changeset 10705:1fec273b2946 --- mem: Split port retry for all different packet classes This patch fixes a long-standing issue with the port flow control. Before this patch the retry mechanism was shared between all different packet classes. As a result, a snoop response could get stuck behind a request waiting for a retry, even if the send/recv functions were split. This caused message-dependent deadlocks in stress-test scenarios. The patch splits the retry into one per packet (message) class. Thus, sendTimingReq has a corresponding recvReqRetry, sendTimingResp has recvRespRetry etc. Most of the changes to the code involve simply clarifying what type of request a specific object was accepting. The biggest change in functionality is in the cache downstream packet queue, facing the memory. This queue was shared by requests and snoop responses, and it is now split into two queues, each with their own flow control, but the same physical MasterPort. These changes fix the previously seen deadlocks. Diffs (updated) - src/arch/x86/pagetable_walker.hh 94901e131a7f src/arch/x86/pagetable_walker.cc 94901e131a7f src/cpu/kvm/base.hh 94901e131a7f src/cpu/minor/fetch1.hh 94901e131a7f src/cpu/minor/fetch1.cc 94901e131a7f src/cpu/minor/lsq.hh 94901e131a7f src/cpu/minor/lsq.cc 94901e131a7f src/cpu/o3/cpu.hh 94901e131a7f src/cpu/o3/cpu.cc 94901e131a7f src/cpu/o3/fetch.hh 94901e131a7f src/cpu/o3/fetch_impl.hh 94901e131a7f src/cpu/o3/lsq.hh 94901e131a7f src/cpu/o3/lsq_impl.hh 94901e131a7f src/cpu/simple/atomic.hh 94901e131a7f src/cpu/simple/timing.hh 94901e131a7f src/cpu/simple/timing.cc 94901e131a7f src/cpu/testers/directedtest/RubyDirectedTester.hh 94901e131a7f src/cpu/testers/memtest/memtest.hh 94901e131a7f src/cpu/testers/memtest/memtest.cc 94901e131a7f src/cpu/testers/networktest/networktest.hh 94901e131a7f src/cpu/testers/networktest/networktest.cc 94901e131a7f src/cpu/testers/rubytest/RubyTester.hh 94901e131a7f src/cpu/testers/traffic_gen/traffic_gen.hh 94901e131a7f src/cpu/testers/traffic_gen/traffic_gen.cc 94901e131a7f src/dev/dma_device.hh 94901e131a7f src/dev/dma_device.cc 94901e131a7f src/mem/addr_mapper.hh 94901e131a7f src/mem/addr_mapper.cc 94901e131a7f src/mem/bridge.hh 94901e131a7f src/mem/bridge.cc 94901e131a7f src/mem/cache/base.hh 94901e131a7f src/mem/cache/base.cc 94901e131a7f src/mem/cache/cache.hh 94901e131a7f src/mem/cache/cache_impl.hh 94901e131a7f src/mem/coherent_xbar.hh 94901e131a7f src/mem/coherent_xbar.cc 94901e131a7f src/mem/comm_monitor.hh 94901e131a7f src/mem/comm_monitor.cc 94901e131a7f src/mem/dram_ctrl.hh 94901e131a7f src/mem/dram_ctrl.cc 94901e131a7f src/mem/dramsim2.hh 94901e131a7f src/mem/dramsim2.cc 94901e131a7f src/mem/external_slave.cc 94901e131a7f src/mem/mem_checker_monitor.hh 94901e131a7f src/mem/mem_checker_monitor.cc 94901e131a7f src/mem/mport.hh 94901e131a7f src/mem/noncoherent_xbar.hh 94901e131a7f src/mem/noncoherent_xbar.cc 94901e131a7f src/mem/packet_queue.hh 94901e131a7f src/mem/packet_queue.cc 94901e131a7f src/mem/port.hh 94901e131a7f src/mem/port.cc 94901e131a7f src/mem/qport.hh 94901e131a7f src/mem/ruby/slicc_interface/AbstractController.hh 94901e131a7f src/mem/ruby/slicc_interface/AbstractController.cc 94901e131a7f src/mem/ruby/structures/RubyMemoryControl.hh 94901e131a7f src/mem/ruby/system/DMASequencer.hh 94901e131a7f
src/mem/ruby/system/DMASequencer.cc 94901e131a7f src/mem/ruby/system/RubyPort.hh 94901e131a7f src/mem/ruby/system/RubyPort.cc 94901e131a7f src/mem/simple_mem.hh 94901e131a7f src/mem/simple_mem.cc 94901e131a7f src/mem/tport.hh 94901e131a7f src/mem/tport.cc 94901e131a7f src/mem/xbar.hh 94901e131a7f src/mem/xbar.cc 94901e131a7f src/sim/system.hh 94901e131a7f Diff: http://reviews.gem5.org/r/2646/diff/ Testing --- Thanks, Andreas Hansson
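The downstream-queue change mentioned in the description (one queue for requests, one for snoop responses, both behind the same physical port) can be pictured with a rough model. The names and structure below are assumptions for illustration only; this is not the code in src/mem/cache.

    // Rough model of a memory-side port with two packet queues, each with
    // its own flow-control state, draining through one physical port.
    // Illustrative only; the member and method names are made up.
    #include <deque>

    struct Packet;
    using PacketPtr = Packet *;

    struct MemSidePortModel {
        std::deque<PacketPtr> reqQueue;        // cache-originated requests
        std::deque<PacketPtr> snoopRespQueue;  // responses to incoming snoops
        bool waitingForReqRetry = false;       // request-class flow control
        bool waitingForSnoopRespRetry = false; // snoop-response-class flow control

        // Stand-ins for the real send functions; false means "try again later".
        bool sendTimingReq(PacketPtr) { return true; }
        bool sendTimingSnoopResp(PacketPtr) { return true; }

        // Each class is sent and retried independently, so a snoop response
        // is never stuck behind a request that is waiting for its retry.
        void trySendReq() {
            if (waitingForReqRetry || reqQueue.empty()) return;
            if (sendTimingReq(reqQueue.front())) reqQueue.pop_front();
            else waitingForReqRetry = true;        // cleared by the request retry
        }
        void trySendSnoopResp() {
            if (waitingForSnoopRespRetry || snoopRespQueue.empty()) return;
            if (sendTimingSnoopResp(snoopRespQueue.front())) snoopRespQueue.pop_front();
            else waitingForSnoopRespRetry = true;  // cleared by the snoop-resp retry
        }
        void recvReqRetry()       { waitingForReqRetry = false; trySendReq(); }
        void recvSnoopRespRetry() { waitingForSnoopRespRetry = false; trySendSnoopResp(); }
    };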
Re: [gem5-dev] Review Request 2646: mem: Split port retry for all different packet classes
> On Feb. 7, 2015, 6:14 p.m., Steve Reinhardt wrote: > > Awesome! This lack of separate channels for requests & responses has > > bugged me for years, I'm really glad to see it addressed. > > > > One minor suggestion: to me, terms like recvRetryReq and recvRetryResp seem > > backward, since 'retry' is the noun and 'req'/'resp' are adjectives. So in > > isolation, recvReqRetry makes more sense, because I'm receiving a retry > > that's specific to requests (i.e., a request-flavored retry), not receiving > > a request for a retry (as recvRetryReq implies). On the other hand, I can > > see how there is some parallelism between sendTimingReq and recvRetryReq. > > But to me the readability in context trumps the parallelism in the abstract > > (which you only see in places like the commit message where the function > > names are juxtaposed). > > > > Just to overcome the effort hurdle, I think this issue can be fixed with: > > perl -pi -e 's/([Rr])(etry)R(eq|esp)/$1$3R$2/' > > Steve Reinhardt wrote: > might want to tack a 'g' onto that substitution just in case Makes sense. We might want to drop "Timing" altogether, and simply have sendReq, sendResp, recvSnoopReq etc. I'll bump the patch with the suggested changes. - Andreas --- This is an automatically generated e-mail. To reply, visit: http://reviews.gem5.org/r/2646/#review5867 --- On Feb. 7, 2015, 5:24 p.m., Andreas Hansson wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > http://reviews.gem5.org/r/2646/ > --- > > (Updated Feb. 7, 2015, 5:24 p.m.) > > > Review request for Default. > > > Repository: gem5 > > > Description > --- > > Changeset 10707:ec8bd9a8e1e2 > --- > mem: Split port retry for all different packet classes > > This patch fixes a long-standing issue with the port flow > control. Before this patch the retry mechanism was shared between all > different packet classes. As a result, a snoop response could get > stuck behind a request waiting for a retry, even if the send/recv > functions were split. This caused message-dependent deadlocks in > stress-test scenarios. > > The patch splits the retry into one per packet (message) class. Thus, > sendTimingReq has a corresponding recvRetryReq, sendTimingResp has > recvRetryResp etc. Most of the changes to the code involve simply > clarifying what type of request a specific object was accepting. > > The biggest change in functionality is in the cache downstream packet > queue, facing the memory. This queue was shared by requests and snoop > responses, and it is now split into two queues, each with their own > flow control, but the same physical MasterPort. These changes fix > the previously seen deadlocks. 
> > > Diffs > - > > src/cpu/minor/fetch1.hh 94901e131a7f > src/cpu/kvm/base.hh 94901e131a7f > src/arch/x86/pagetable_walker.hh 94901e131a7f > src/arch/x86/pagetable_walker.cc 94901e131a7f > src/cpu/minor/fetch1.cc 94901e131a7f > src/cpu/minor/lsq.hh 94901e131a7f > src/cpu/minor/lsq.cc 94901e131a7f > src/cpu/o3/cpu.hh 94901e131a7f > src/cpu/o3/cpu.cc 94901e131a7f > src/cpu/o3/fetch.hh 94901e131a7f > src/cpu/o3/fetch_impl.hh 94901e131a7f > src/cpu/o3/lsq.hh 94901e131a7f > src/cpu/o3/lsq_impl.hh 94901e131a7f > src/cpu/simple/atomic.hh 94901e131a7f > src/cpu/simple/timing.hh 94901e131a7f > src/cpu/simple/timing.cc 94901e131a7f > src/cpu/testers/directedtest/RubyDirectedTester.hh 94901e131a7f > src/cpu/testers/memtest/memtest.hh 94901e131a7f > src/cpu/testers/memtest/memtest.cc 94901e131a7f > src/cpu/testers/networktest/networktest.hh 94901e131a7f > src/cpu/testers/networktest/networktest.cc 94901e131a7f > src/cpu/testers/rubytest/RubyTester.hh 94901e131a7f > src/cpu/testers/traffic_gen/traffic_gen.hh 94901e131a7f > src/cpu/testers/traffic_gen/traffic_gen.cc 94901e131a7f > src/dev/dma_device.hh 94901e131a7f > src/dev/dma_device.cc 94901e131a7f > src/mem/addr_mapper.hh 94901e131a7f > src/mem/addr_mapper.cc 94901e131a7f > src/mem/bridge.hh 94901e131a7f > src/mem/bridge.cc 94901e131a7f > src/mem/cache/base.hh 94901e131a7f > src/mem/cache/base.cc 94901e131a7f > src/mem/cache/cache.hh 94901e131a7f > src/mem/cache/cache_impl.hh 94901e131a7f > src/mem/coherent_xbar.hh 94901e131a7f > src/mem/coherent_xbar.cc 94901e131a7f > src/mem/comm_monitor.hh 94901e131a7f > src/mem/comm_monitor.cc 94901e131a7f > src/mem/dram_ctrl.hh 94901e131a7f > src/mem/dram_ctrl.cc 94901e131a7f > src/mem/dramsim2.hh 94901e131a7f > src/mem/dramsim2.cc 94901e131a7f > src/mem/external_slave.cc 94901e131a7f > src/mem/mem_checker_monitor.hh 94901e131a7f > src/mem/mem_checker_monitor.cc 9
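The handshake being discussed here, and why a per-class retry removes the deadlock, can also be shown with a small self-contained toy. All names below are assumed for illustration (this is not gem5 code): a rejected request simply waits for a request-class retry, while a snoop response still gets through because it has its own channel.

    // Self-contained toy of the per-class retry handshake; not gem5 code.
    #include <iostream>
    #include <string>

    struct ToyPeer {
        bool reqBusy = true;  // the first request attempt will be rejected
        bool recvReq(const std::string &what) {
            if (reqBusy) { std::cout << "peer: busy, rejecting " << what << "\n"; return false; }
            std::cout << "peer: accepted " << what << "\n";
            return true;
        }
        bool recvSnoopResp(const std::string &what) {
            std::cout << "peer: accepted " << what << "\n";  // never blocks in this toy
            return true;
        }
    };

    struct ToyMaster {
        ToyPeer &peer;
        bool reqBlocked = false;
        explicit ToyMaster(ToyPeer &p) : peer(p) {}

        void sendReq(const std::string &what) {
            if (!peer.recvReq(what))
                reqBlocked = true;          // wait for the request-class retry
        }
        void sendSnoopResp(const std::string &what) {
            peer.recvSnoopResp(what);       // independent of the blocked request
        }
        void recvReqRetry(const std::string &what) {
            reqBlocked = false;
            sendReq(what);                  // resend the packet that was rejected
        }
    };

    int main() {
        ToyPeer peer;
        ToyMaster master(peer);
        master.sendReq("ReadReq A");          // rejected; master waits for retry
        master.sendSnoopResp("SnoopResp B");  // still delivered: separate class
        peer.reqBusy = false;                 // downstream resource frees up
        master.recvReqRetry("ReadReq A");     // retry arrives; request resent
    }

With a single shared retry (the pre-patch behaviour), the snoop response in this toy would have had to wait behind the blocked request, which is exactly the message-dependent deadlock the patch removes.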
Re: [gem5-dev] Review Request 2646: mem: Split port retry for all different packet classes
--- This is an automatically generated e-mail. To reply, visit: http://reviews.gem5.org/r/2646/#review5867 --- Awesome! This lack of separate channels for requests & responses has bugged me for years, I'm really glad to see it addressed. One minor suggestion: to me, terms like recvRetryReq and recvRetryResp seem backward, since 'retry' is the noun and 'req'/'resp' are adjectives. So in isolation, recvReqRetry makes more sense, because I'm receiving a retry that's specific to requests (i.e., a request-flavored retry), not receiving a request for a retry (as recvRetryReq implies). On the other hand, I can see how there is some parallelism between sendTimingReq and recvRetryReq. But to me the readability in context trumps the parallelism in the abstract (which you only see in places like the commit message where the function names are juxtaposed). Just to overcome the effort hurdle, I think this issue can be fixed with: perl -pi -e 's/([Rr])(etry)R(eq|esp)/$1$3R$2/' - Steve Reinhardt On Feb. 7, 2015, 9:24 a.m., Andreas Hansson wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > http://reviews.gem5.org/r/2646/ > --- > > (Updated Feb. 7, 2015, 9:24 a.m.) > > > Review request for Default. > > > Repository: gem5 > > > Description > --- > > Changeset 10707:ec8bd9a8e1e2 > --- > mem: Split port retry for all different packet classes > > This patch fixes a long-standing issue with the port flow > control. Before this patch the retry mechanism was shared between all > different packet classes. As a result, a snoop response could get > stuck behind a request waiting for a retry, even if the send/recv > functions were split. This caused message-dependent deadlocks in > stress-test scenarios. > > The patch splits the retry into one per packet (message) class. Thus, > sendTimingReq has a corresponding recvRetryReq, sendTimingResp has > recvRetryResp etc. Most of the changes to the code involve simply > clarifying what type of request a specific object was accepting. > > The biggest change in functionality is in the cache downstream packet > queue, facing the memory. This queue was shared by requests and snoop > responses, and it is now split into two queues, each with their own > flow control, but the same physical MasterPort. These changes fix > the previously seen deadlocks. 
> > > Diffs > - > > src/cpu/minor/fetch1.hh 94901e131a7f > src/cpu/kvm/base.hh 94901e131a7f > src/arch/x86/pagetable_walker.hh 94901e131a7f > src/arch/x86/pagetable_walker.cc 94901e131a7f > src/cpu/minor/fetch1.cc 94901e131a7f > src/cpu/minor/lsq.hh 94901e131a7f > src/cpu/minor/lsq.cc 94901e131a7f > src/cpu/o3/cpu.hh 94901e131a7f > src/cpu/o3/cpu.cc 94901e131a7f > src/cpu/o3/fetch.hh 94901e131a7f > src/cpu/o3/fetch_impl.hh 94901e131a7f > src/cpu/o3/lsq.hh 94901e131a7f > src/cpu/o3/lsq_impl.hh 94901e131a7f > src/cpu/simple/atomic.hh 94901e131a7f > src/cpu/simple/timing.hh 94901e131a7f > src/cpu/simple/timing.cc 94901e131a7f > src/cpu/testers/directedtest/RubyDirectedTester.hh 94901e131a7f > src/cpu/testers/memtest/memtest.hh 94901e131a7f > src/cpu/testers/memtest/memtest.cc 94901e131a7f > src/cpu/testers/networktest/networktest.hh 94901e131a7f > src/cpu/testers/networktest/networktest.cc 94901e131a7f > src/cpu/testers/rubytest/RubyTester.hh 94901e131a7f > src/cpu/testers/traffic_gen/traffic_gen.hh 94901e131a7f > src/cpu/testers/traffic_gen/traffic_gen.cc 94901e131a7f > src/dev/dma_device.hh 94901e131a7f > src/dev/dma_device.cc 94901e131a7f > src/mem/addr_mapper.hh 94901e131a7f > src/mem/addr_mapper.cc 94901e131a7f > src/mem/bridge.hh 94901e131a7f > src/mem/bridge.cc 94901e131a7f > src/mem/cache/base.hh 94901e131a7f > src/mem/cache/base.cc 94901e131a7f > src/mem/cache/cache.hh 94901e131a7f > src/mem/cache/cache_impl.hh 94901e131a7f > src/mem/coherent_xbar.hh 94901e131a7f > src/mem/coherent_xbar.cc 94901e131a7f > src/mem/comm_monitor.hh 94901e131a7f > src/mem/comm_monitor.cc 94901e131a7f > src/mem/dram_ctrl.hh 94901e131a7f > src/mem/dram_ctrl.cc 94901e131a7f > src/mem/dramsim2.hh 94901e131a7f > src/mem/dramsim2.cc 94901e131a7f > src/mem/external_slave.cc 94901e131a7f > src/mem/mem_checker_monitor.hh 94901e131a7f > src/mem/mem_checker_monitor.cc 94901e131a7f > src/mem/mport.hh 94901e131a7f > src/mem/noncoherent_xbar.hh 94901e131a7f > src/mem/noncoherent_xbar.cc 94901e131a7f > src/mem/packet_queue.hh 94901e131a7f > src/mem/packet_queue.cc 94901e131a7f > src/mem/port.hh 94901e131a7f > src/mem/port.cc 94901e131a7f > src/mem/qport.hh 94901e131a7f > src/mem/ruby/slicc_interface/Abstr
Re: [gem5-dev] Review Request 2646: mem: Split port retry for all different packet classes
> On Feb. 7, 2015, 10:14 a.m., Steve Reinhardt wrote: > > Awesome! This lack of separate channels for requests & responses has > > bugged me for years, I'm really glad to see it addressed. > > > > One minor suggestion: to me, terms like recvRetryReq and recvRetryResp seem > > backward, since 'retry' is the noun and 'req'/'resp' are adjectives. So in > > isolation, recvReqRetry makes more sense, because I'm receiving a retry > > that's specific to requests (i.e., a request-flavored retry), not receiving > > a request for a retry (as recvRetryReq implies). On the other hand, I can > > see how there is some parallelism between sendTimingReq and recvRetryReq. > > But to me the readability in context trumps the parallelism in the abstract > > (which you only see in places like the commit message where the function > > names are juxtaposed). > > > > Just to overcome the effort hurdle, I think this issue can be fixed with: > > perl -pi -e 's/([Rr])(etry)R(eq|esp)/$1$3R$2/' might want to tack a 'g' onto that substitution just in case - Steve --- This is an automatically generated e-mail. To reply, visit: http://reviews.gem5.org/r/2646/#review5867 --- On Feb. 7, 2015, 9:24 a.m., Andreas Hansson wrote: > > --- > This is an automatically generated e-mail. To reply, visit: > http://reviews.gem5.org/r/2646/ > --- > > (Updated Feb. 7, 2015, 9:24 a.m.) > > > Review request for Default. > > > Repository: gem5 > > > Description > --- > > Changeset 10707:ec8bd9a8e1e2 > --- > mem: Split port retry for all different packet classes > > This patch fixes a long-standing issue with the port flow > control. Before this patch the retry mechanism was shared between all > different packet classes. As a result, a snoop response could get > stuck behind a request waiting for a retry, even if the send/recv > functions were split. This caused message-dependent deadlocks in > stress-test scenarios. > > The patch splits the retry into one per packet (message) class. Thus, > sendTimingReq has a corresponding recvRetryReq, sendTimingResp has > recvRetryResp etc. Most of the changes to the code involve simply > clarifying what type of request a specific object was accepting. > > The biggest change in functionality is in the cache downstream packet > queue, facing the memory. This queue was shared by requests and snoop > responses, and it is now split into two queues, each with their own > flow control, but the same physical MasterPort. These changes fix > the previously seen deadlocks. 
> > > Diffs > - > > src/cpu/minor/fetch1.hh 94901e131a7f > src/cpu/kvm/base.hh 94901e131a7f > src/arch/x86/pagetable_walker.hh 94901e131a7f > src/arch/x86/pagetable_walker.cc 94901e131a7f > src/cpu/minor/fetch1.cc 94901e131a7f > src/cpu/minor/lsq.hh 94901e131a7f > src/cpu/minor/lsq.cc 94901e131a7f > src/cpu/o3/cpu.hh 94901e131a7f > src/cpu/o3/cpu.cc 94901e131a7f > src/cpu/o3/fetch.hh 94901e131a7f > src/cpu/o3/fetch_impl.hh 94901e131a7f > src/cpu/o3/lsq.hh 94901e131a7f > src/cpu/o3/lsq_impl.hh 94901e131a7f > src/cpu/simple/atomic.hh 94901e131a7f > src/cpu/simple/timing.hh 94901e131a7f > src/cpu/simple/timing.cc 94901e131a7f > src/cpu/testers/directedtest/RubyDirectedTester.hh 94901e131a7f > src/cpu/testers/memtest/memtest.hh 94901e131a7f > src/cpu/testers/memtest/memtest.cc 94901e131a7f > src/cpu/testers/networktest/networktest.hh 94901e131a7f > src/cpu/testers/networktest/networktest.cc 94901e131a7f > src/cpu/testers/rubytest/RubyTester.hh 94901e131a7f > src/cpu/testers/traffic_gen/traffic_gen.hh 94901e131a7f > src/cpu/testers/traffic_gen/traffic_gen.cc 94901e131a7f > src/dev/dma_device.hh 94901e131a7f > src/dev/dma_device.cc 94901e131a7f > src/mem/addr_mapper.hh 94901e131a7f > src/mem/addr_mapper.cc 94901e131a7f > src/mem/bridge.hh 94901e131a7f > src/mem/bridge.cc 94901e131a7f > src/mem/cache/base.hh 94901e131a7f > src/mem/cache/base.cc 94901e131a7f > src/mem/cache/cache.hh 94901e131a7f > src/mem/cache/cache_impl.hh 94901e131a7f > src/mem/coherent_xbar.hh 94901e131a7f > src/mem/coherent_xbar.cc 94901e131a7f > src/mem/comm_monitor.hh 94901e131a7f > src/mem/comm_monitor.cc 94901e131a7f > src/mem/dram_ctrl.hh 94901e131a7f > src/mem/dram_ctrl.cc 94901e131a7f > src/mem/dramsim2.hh 94901e131a7f > src/mem/dramsim2.cc 94901e131a7f > src/mem/external_slave.cc 94901e131a7f > src/mem/mem_checker_monitor.hh 94901e131a7f > src/mem/mem_checker_monitor.cc 94901e131a7f > src/mem/mport.hh 94901e131a7f > src/mem/noncoherent_xbar.hh 94901e131a7f > src/mem/noncoherent_xbar.cc 94901e131a7f > src/mem/packet_queue.hh 94901e131a7f > src/mem/pa
[gem5-dev] Review Request 2646: mem: Split port retry for all different packet classes
--- This is an automatically generated e-mail. To reply, visit: http://reviews.gem5.org/r/2646/ --- Review request for Default. Repository: gem5 Description --- Changeset 10707:ec8bd9a8e1e2 --- mem: Split port retry for all different packet classes This patch fixes a long-standing issue with the port flow control. Before this patch the retry mechanism was shared between all different packet classes. As a result, a snoop response could get stuck behind a request waiting for a retry, even if the send/recv functions were split. This caused message-dependent deadlocks in stress-test scenarios. The patch splits the retry into one per packet (message) class. Thus, sendTimingReq has a corresponding recvRetryReq, sendTimingResp has recvRetryResp etc. Most of the changes to the code involve simply clarifying what type of request a specific object was accepting. The biggest change in functionality is in the cache downstream packet queue, facing the memory. This queue was shared by requests and snoop responses, and it is now split into two queues, each with their own flow control, but the same physical MasterPort. These changes fix the previously seen deadlocks. Diffs - src/cpu/minor/fetch1.hh 94901e131a7f src/cpu/kvm/base.hh 94901e131a7f src/arch/x86/pagetable_walker.hh 94901e131a7f src/arch/x86/pagetable_walker.cc 94901e131a7f src/cpu/minor/fetch1.cc 94901e131a7f src/cpu/minor/lsq.hh 94901e131a7f src/cpu/minor/lsq.cc 94901e131a7f src/cpu/o3/cpu.hh 94901e131a7f src/cpu/o3/cpu.cc 94901e131a7f src/cpu/o3/fetch.hh 94901e131a7f src/cpu/o3/fetch_impl.hh 94901e131a7f src/cpu/o3/lsq.hh 94901e131a7f src/cpu/o3/lsq_impl.hh 94901e131a7f src/cpu/simple/atomic.hh 94901e131a7f src/cpu/simple/timing.hh 94901e131a7f src/cpu/simple/timing.cc 94901e131a7f src/cpu/testers/directedtest/RubyDirectedTester.hh 94901e131a7f src/cpu/testers/memtest/memtest.hh 94901e131a7f src/cpu/testers/memtest/memtest.cc 94901e131a7f src/cpu/testers/networktest/networktest.hh 94901e131a7f src/cpu/testers/networktest/networktest.cc 94901e131a7f src/cpu/testers/rubytest/RubyTester.hh 94901e131a7f src/cpu/testers/traffic_gen/traffic_gen.hh 94901e131a7f src/cpu/testers/traffic_gen/traffic_gen.cc 94901e131a7f src/dev/dma_device.hh 94901e131a7f src/dev/dma_device.cc 94901e131a7f src/mem/addr_mapper.hh 94901e131a7f src/mem/addr_mapper.cc 94901e131a7f src/mem/bridge.hh 94901e131a7f src/mem/bridge.cc 94901e131a7f src/mem/cache/base.hh 94901e131a7f src/mem/cache/base.cc 94901e131a7f src/mem/cache/cache.hh 94901e131a7f src/mem/cache/cache_impl.hh 94901e131a7f src/mem/coherent_xbar.hh 94901e131a7f src/mem/coherent_xbar.cc 94901e131a7f src/mem/comm_monitor.hh 94901e131a7f src/mem/comm_monitor.cc 94901e131a7f src/mem/dram_ctrl.hh 94901e131a7f src/mem/dram_ctrl.cc 94901e131a7f src/mem/dramsim2.hh 94901e131a7f src/mem/dramsim2.cc 94901e131a7f src/mem/external_slave.cc 94901e131a7f src/mem/mem_checker_monitor.hh 94901e131a7f src/mem/mem_checker_monitor.cc 94901e131a7f src/mem/mport.hh 94901e131a7f src/mem/noncoherent_xbar.hh 94901e131a7f src/mem/noncoherent_xbar.cc 94901e131a7f src/mem/packet_queue.hh 94901e131a7f src/mem/packet_queue.cc 94901e131a7f src/mem/port.hh 94901e131a7f src/mem/port.cc 94901e131a7f src/mem/qport.hh 94901e131a7f src/mem/ruby/slicc_interface/AbstractController.hh 94901e131a7f src/mem/ruby/slicc_interface/AbstractController.cc 94901e131a7f src/mem/ruby/structures/RubyMemoryControl.hh 94901e131a7f src/mem/ruby/system/DMASequencer.hh 94901e131a7f src/mem/ruby/system/DMASequencer.cc 94901e131a7f src/mem/ruby/system/RubyPort.hh 
94901e131a7f src/mem/ruby/system/RubyPort.cc 94901e131a7f src/mem/simple_mem.hh 94901e131a7f src/mem/simple_mem.cc 94901e131a7f src/mem/tport.hh 94901e131a7f src/mem/tport.cc 94901e131a7f src/mem/xbar.hh 94901e131a7f src/mem/xbar.cc 94901e131a7f src/sim/system.hh 94901e131a7f Diff: http://reviews.gem5.org/r/2646/diff/ Testing --- Thanks, Andreas Hansson