Hi Tushar,

I looked into what you suggested about modifying the link latencies to
mimic off-chip links. It seems that this, together with modifying the
bandwidth multiplier, is how the original GEMS differentiates the
SMP/CMP simulations (aside from its SMP protocols, which seem to
mostly model a single cache per chip).

However, in comparing the default values that GEMS and gem5 use for
the bandwidth factor and endpoint_bandwidth, I have some questions as
to which is more appropriate.

1)
The actual bandwidth of a link is calculated as bandwidth_factor *
endpoint_bandwidth.

With the default values, that is 16 x 1000 in gem5, while in GEMS it
ranges from (16-72) x 10000.

Given that the endpoint_bandwidth in GEMS is an order of magnitude
larger, is it correct to say that the default endpoint_bandwidth value
in gem5 is *too low* for the CMP case?

Or is the unit not 'thousandths of a byte per cycle' as it is in GEMS?

It would almost seem that the bandwidth modeled is that of an SMP
(less bandwidth available for off-chip access) while the link latency
is that of a CMP (1 cycle).
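For concreteness, here is the arithmetic I am using when comparing the
two (a quick sketch; I am assuming the GEMS interpretation of the
units, i.e. thousandths of a byte per cycle):

```python
# Sketch of the comparison above. Assumes effective link bandwidth is
# simply bandwidth_factor * endpoint_bandwidth; units follow the GEMS
# interpretation (thousandths of a byte per cycle).

def link_bandwidth(bandwidth_factor, endpoint_bandwidth):
    return bandwidth_factor * endpoint_bandwidth

gem5_default = link_bandwidth(16, 1000)    # gem5 defaults
gems_low     = link_bandwidth(16, 10000)   # low end of GEMS range
gems_high    = link_bandwidth(72, 10000)   # high end of GEMS range

print(gem5_default, gems_low, gems_high)
```

So even at the low end of its bandwidth_factor range, GEMS ends up
with 10x the effective link bandwidth of the gem5 defaults.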

2) In MeshDirCorners, if I wanted each L1/L2 pair to be modeled on a
separate chip, I would adjust the int_links latency, correct?

Would I need to adjust the links associated with the corner
directories? I assume not, since the directories would be assumed to
be placed on the corner chips.

I know that the values can be adjusted accordingly; a second opinion
would be helpful, though.
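For question 2, this is roughly the change I have in mind in the
topology file (a hypothetical sketch only: the int_links/ext_links
names follow the Ruby topology scripts, but the latency values are
made up, and the exact attribute names may differ across gem5
versions):

```python
# Hypothetical sketch: raise int_link (router <-> router) latency to
# mimic chip-to-chip hops, while ext_links (controller <-> router)
# keep the on-chip cost. Latency values are illustrative only.

ON_CHIP_LATENCY = 1    # default 1-cycle on-chip hop
OFF_CHIP_LATENCY = 20  # assumed latency for an off-chip hop

for link in int_links:            # treat router-to-router as off-chip
    link.latency = OFF_CHIP_LATENCY

for link in ext_links:            # controller-to-router stays on-chip
    link.latency = ON_CHIP_LATENCY
```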

Thanks,
Malek


On Mon, Apr 2, 2012 at 10:10 AM, Malek Musleh <malek.mus...@gmail.com> wrote:
> Hi Tushar,
>
> Thanks for the clarification. I had thought this earlier on, but when
> reading the tutorial (slides 13-136) and the gem5 wiki
> (MOESI_CMP_directory), where there are references to multiple chips
> and simulating SMPs/CMPs, I was trying to convince myself otherwise.
>
> Malek
>
> On Sun, Apr 1, 2012 at 7:19 PM, Tushar Krishna <tus...@csail.mit.edu> wrote:
>> Hi Malek,
>> The "int" links just mean links internal to the network, and "ext" links 
>> just mean links connecting the network to the external world, i.e. the 
>> cache/directory controllers.
>> They have nothing to do with on-chip vs off-chip.
>>
>> The L2 being shared or private depends on the coherence protocol.
>> If I am not mistaken, the MOESI_CMP_directory models a shared L2 design.
>> If you use N L2's, it just makes it a shared *distributed* L2 design.
>> How you connect up the private L1's and the shared L2(s) depends on the 
>> topology you choose.
>>
>> The gem5 simulator does not have the SMP protocols which GEMS had. It only 
>> has the CMP protocols which inherently makes it a single-chip design.
>> You *could* mimic multiple chips by playing with the link latencies in the 
>> topology file.
>>
>> I am not sure which protocols model private L2's in gem5.
>> The private L2 protocols usually just have a protocol-cache.sm file that 
>> models both the private L1 and L2, instead of separate protocol-L1cache.sm 
>> and protocol-L2cache.sm.
>> I know MOESI_hammer models a private L2, but the L2 tracks only partial 
>> state, and resorts to broadcasts.
>>
>> cheers,
>> Tushar
>>
>>
>> On Apr 1, 2012, at 5:29 PM, Malek Musleh wrote:
>>
>>> Hi Tushar,
>>>
>>> I am looking for clarification between configuring caches to be
>>> on-chip versus off-chip in gem5. It seems to me that each router
>>> represents a single chip, so connecting multiple L1 caches to a
>>> shared L2 via "ext_link" correlates to all of them being on-chip,
>>> and then this chip/router is connected to another router/chip via
>>> "int_link". Is that correct?
>>>
>>> So if I wanted to model private L1/L2 caches, but have all to be
>>> on-chip, I would have the N L1/L2 caches all connected to a single
>>> router through "ext_link" correct?
>>>
>>> However, the code for Crossbar seems to contradict this, because
>>> there are N = len(nodes) int_links and ext_links created, even
>>> though the Crossbar simulates a pure CMP.
>>>
>>> This is all assuming I am using the MOESI_CMP_directory.
>>>
>>> Thanks,
>>>
>>> Malek
>>>
>>> On Mon, Jan 9, 2012 at 5:46 PM, Tushar Krishna <tus...@csail.mit.edu> wrote:
>>>> Hi Malek,
>>>> Hmm yeah checking the condition once should be enough, not sure why it is
>>>> checked twice.
>>>>
>>>> But in any case, the condition being checked is correct, even for your
>>>> topology.
>>>> Each "ext_link" is the link connecting a controller (L1, L2, dir etc) to a
>>>> router, while "int_link" is for links connecting routers. Hence the total
>>>> number of ext links should be equal to the total number of controllers.
>>>>
>>>> In your topology, each router will be connected to one L2 and n L1's via
>>>> ext links.
>>>> Look at how ext_links are added in the current Mesh/MeshDirCorners
>>>> topologies.
>>>>
>>>> cheers,
>>>> Tushar
>>>>
>>>>
>>>> On 1/9/2012 1:24 AM, gem5-users-requ...@gem5.org wrote:
>>>>>
>>>>> Date: Sun, 8 Jan 2012 23:26:19 -0500
>>>>> From: Malek Musleh<malek.mus...@gmail.com>
>>>>> To: gem5 users mailing list<gem5-users@gem5.org>
>>>>> Subject: [gem5-users] Issue Defining new Ruby Topology / Possible Bug in Topology.cc
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>> I am trying to define a new type of network topology to use with the
>>>>> Ruby Memory System. Specifically, I am trying to define a Mesh-type
>>>>> topology that can support multiple (n) L1 caches connected to a shared
>>>>> L2 cache for a per chip node basis, unlike the currently provided
>>>>> Mesh/MeshDirCorners Topologies in which each L2 cache has only a
>>>>> single L1 cache connected to it.
>>>>>
>>>>> However, in attempting to do so (n == 2), I have encountered the
>>>>> following error:
>>>>>
>>>>>
>>>>> Listening for system connection on port 3456
>>>>> fatal: m_nodes (1) != ext_links vector length (0)
>>>>>  @ cycle Topology
>>>>> [build/ALPHA_FS_MOESI_CMP_directory/mem/ruby/network/Topology.cc:83,
>>>>> line 269784]
>>>>> Memory Usage:<extra arg>%d KBytes
>>>>> simout (END)
>>>>>
>>>>> Now, looking at line 83 in Topology.cc, there is the following:
>>>>>
>>>>>     if (m_nodes != params()->ext_links.size() &&
>>>>>         m_nodes != params()->ext_links.size()) {
>>>>>         fatal("m_nodes (%d) != ext_links vector length (%d)\n",
>>>>>               m_nodes != params()->ext_links.size());
>>>>>     }
>>>>>
>>>>> 1) I am not sure if this condition is valid for the type of topology I
>>>>> am trying to create
>>>>> 2) I am not sure why the condition is checked twice?
>>>>>
>>>>> As in, if (cond_a && cond_a) is the same as if (cond_a).
>>>>>
>>>>>
>>>>> Was the second condition meant to be something else, or was the
>>>>> condition accidentally typed twice?
>>>>> Has anyone else created the type of topology I have described?
>>>>>
>>>>> Thanks in advance.
>>>>>
>>>>> Malek
>>>>
>>>>
>>>> _______________________________________________
>>>> gem5-users mailing list
>>>> gem5-users@gem5.org
>>>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>>