On Fri, Feb 3, 2012 at 1:24 AM, Ong Zhong Liang <[email protected]> wrote:

> Hi
> My area of research involves collecting data about the cacheLine state.
> I've made some modifications to the old marss v0.1 code to monitor changes
> in the mesiCache files. With marss v0.2.1, I was hoping to port the changes
> over, but I'm unsure about the way the cache parameters are specified and
> thought I'd ask.
>
> Reading through the most recent email thread about cacheConstants.h (dated
> Sept 30, 2011), it stated that the cacheConstants.h values were no longer
> used. Instead, the values were supposed to be passed in by ptl-qemu.cpp
> through CPUID values.
>
From the 0.2 version we have moved all cache parameters into an
auto-generated file, so users can easily configure cache parameters using
config files. Here is a small attempt to explain how this works:

Step 1: SCons uses the 'ptlsim/tools/config_gen.py' script and the
configuration to generate two files: ptlsim/build/cache/cacheTypes.h and
ptlsim/build/cache/cacheTypes.cpp.  In cacheTypes.cpp, the code generator
creates the different types of cache lines as 'CacheLines' objects (defined
in the 'ptlsim/cache/cacheLines.h' file).  The code generator also creates
a 'get_cachelines' function that returns the cache lines for a given type.

Step 2: Each type of cache controller uses its cache type to get the
desired CacheLines object by calling the 'get_cachelines' function, roughly
as in the sketch below.
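
A self-contained sketch of the pattern (the real types live in
ptlsim/cache/cacheLines.h; the names, template parameters, and string keys
below are hypothetical, since the generated file depends on your machine
config):

    #include <cassert>
    #include <cstring>

    // Minimal stand-ins for the real cacheLines.h types:
    struct CacheLinesBase {
        virtual ~CacheLinesBase() {}
        virtual int set_count() const = 0;
    };

    template <int SETS, int WAYS, int LINESIZE, int LATENCY>
    struct CacheLines : CacheLinesBase {
        int set_count() const { return SETS; }
        // ... tag storage, probe/insert/update logic, etc.
    };

    // Roughly what the generated get_cachelines does: map a cache type
    // from the config onto the matching CacheLines instantiation.
    CacheLinesBase* get_cachelines(const char* type)
    {
        if (!strcmp(type, "l1_32K")) return new CacheLines<64, 8, 64, 2>();
        if (!strcmp(type, "l2_2M"))  return new CacheLines<2048, 16, 64, 5>();
        assert(0 && "unknown cache type");
        return 0;
    }

    // Step 2, from a controller's point of view (sketch):
    //   cacheLines_ = get_cachelines(config_type_name);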

For your experiments you can modify the base CacheLines class to contain
any cache-line-specific data, for example:
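
A minimal sketch of that idea for your cacheLine-state monitoring (the
field and function names here are hypothetical; adapt them to the actual
line structure in ptlsim/cache/cacheLines.h):

    // Per-line bookkeeping added for coherence-state experiments:
    struct CacheLine {
        unsigned long long tag;
        bool valid;
        int  last_state;                   // last observed MESI state
        unsigned long long state_changes;  // transitions seen on this line
    };

    // Call from the coherence logic wherever a line's state is updated:
    inline void record_state_change(CacheLine& line, int new_state)
    {
        if (line.last_state != new_state) {
            line.state_changes++;
            line.last_state = new_state;
        }
    }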

- Avadh

> However, from what I can glean from the source code, the MESI_cache /
> shared L2 configuration uses the coherent cache, which, it seems, stores
> the cacheLines in the same cacheLines_/cacheLineBase class as the older
> version (which still #includes cacheConstants.h). Nevertheless, from
> running the experiment, it seems that the cache sizes do follow the input
> values specified in the config files, even though these exact same
> coherentCache files are used at runtime.
>
> I would greatly appreciate any clarification.
>
> Thank you!
>
> Zhong Liang Ong
> ________________________________________
> From: [email protected] [
> [email protected]] On Behalf Of
> [email protected] [
> [email protected]]
> Sent: Friday, September 30, 2011 12:00 AM
> To: [email protected]
> Subject: Marss86-Devel Digest, Vol 19, Issue 20
>
> Send Marss86-Devel mailing list submissions to
>        [email protected]
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        https://www.cs.binghamton.edu/mailman/listinfo/marss86-devel
> or, via email, send a message with subject or body 'help' to
>        [email protected]
>
> You can reach the person managing the list at
>        [email protected]
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Marss86-Devel digest..."
>
>
> Today's Topics:
>
>   1. Re: regarding multi-p2p (avadh patel)
>   2. Re: regarding multi-p2p (sparsh mittal)
>   3. Release 0.2 (avadh patel)
>   4. use of cache constants (sparsh mittal)
>   5. Re: use of cache constants (DRAM Ninjas)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 28 Sep 2011 09:55:59 -0700
> From: avadh patel <[email protected]>
> To: sparsh mittal <[email protected]>
> Cc: marss86-devel <[email protected]>
> Subject: Re: [marss86-devel] regarding multi-p2p
> Message-ID:
>        <cadtv7ctcvffme99-btq5am_kv48m6jq_ci1bhdwewb9edba...@mail.gmail.com
> >
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Wed, Sep 28, 2011 at 5:59 AM, sparsh mittal <[email protected]> wrote:
>
> > Actually, I was thinking as Paul is thinking, and this may be true for a
> > parallel workload. But what if we have a multi-programmed workload? These
> > workloads don't share any data among different cores, so I hope that in
> > L1 they will not need to share data. In L2 they are sharing the space but
> > not the data. Correct me if I am wrong. I am personally using
> > multi-programmed workloads; that's why I was thinking there is no need
> > for coherence in L1.
> >
> Even if you are running multi-programmed workloads, the kernel will share
> some data. For your scheme to work, you need to implement a special WT-L1
> and L2 cache that makes sure that on each write the cache line is locked
> in the shared L2, so the write update is seen as atomic and hence all
> memory updates are serialized for correctness.
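>
> As a rough illustration of that locking (a sketch only; the class and
> helper names here are hypothetical, not the actual MARSS controller API):
>
>     // A write arriving at the shared L2 from a write-through L1:
>     bool SharedL2::handle_wt_write(W64 phys_addr)
>     {
>         Line* line = lookup(phys_addr);
>         if (line->locked)
>             return false;      // another core is updating: retry later
>         line->locked = true;   // serialize all writes to this line
>         do_write(line);        // perform the update in the shared L2
>         line->locked = false;  // unlock once the update is visible
>         return true;
>     }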
>
> - Avadh
>
> > Thanks and Regards
> > Sparsh Mittal
> >
> >
> >
> >
> > On Wed, Sep 28, 2011 at 12:10 AM, DRAM Ninjas <[email protected]> wrote:
> >
> >> Certainly you can do this, but unless I'm misunderstanding something,
> >> it won't actually be 'correct'. If you have multiple L1 caches, you
> >> have to enforce consistency on them -- which is precisely why you can
> >> only have the bus there be a MESI bus. If you just have all of the
> >> private caches write through, then who knows what value will end up
> >> going back to memory and what each processor will see as the 'right'
> >> value.
> >>
> >> From a simulation standpoint it obviously won't matter, since there's
> >> no data involved in the memoryHierarchy objects, but I don't know if
> >> realism is a concern for you.
> >>
> >>
> >> On Wed, Sep 28, 2011 at 12:09 AM, avadh patel <[email protected]> wrote:
> >>
> >>>
> >>> On Tue, Sep 27, 2011 at 2:59 PM, sparsh mittal <[email protected]> wrote:
> >>>
> >>>> Hello
> >>>> This regards a previous discussion (copied below):
> >>>>
> >>>> 1. So can we say that for multi-core, the L1 cache can only be mesi,
> >>>> as per the existing code?
> >>>>
> >>>
> >>> Now you can also use the moesi cache.
> >>>
> >>>> 2. If I think of implementing p2p_multi, would you give some hints:
> >>>> which files/code sections would be affected (some hints you have
> >>>> already given below)? Any precautions?
> >>>>
> >>> Take a look at the ptlsim/cache/p2p.* files.  You can add an array
> >>> (use 'dynarray') in p2p which can store multiple upper-level
> >>> controllers when they are registered.  When it receives a request, it
> >>> can find the 'dest' controller in the array and forward the request to
> >>> that controller, roughly as in the sketch below.
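> >>>
> >>> (A sketch with hypothetical names, modeled on the existing p2p code;
> >>> the real Interconnect/Controller interfaces may differ:)
> >>>
> >>>     // p2p_multi: many upper controllers, one lower controller.
> >>>     Controller* upperControllers_[MAX_UPPER];  // or a 'dynarray'
> >>>     int upperCount_;
> >>>     Controller* lowerController_;
> >>>
> >>>     void P2PMulti::register_controller(Controller* c, bool is_upper)
> >>>     {
> >>>         if (is_upper) upperControllers_[upperCount_++] = c;
> >>>         else          lowerController_ = c;
> >>>     }
> >>>
> >>>     // Forward each message straight to its 'dest' controller:
> >>>     void P2PMulti::handle_message(Message* msg)
> >>>     {
> >>>         for (int i = 0; i < upperCount_; i++) {
> >>>             if (upperControllers_[i] == msg->dest) {
> >>>                 deliver(upperControllers_[i], msg); // hypothetical helper
> >>>                 return;
> >>>             }
> >>>         }
> >>>         deliver(lowerController_, msg);  // requests go down unchanged
> >>>     }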
> >>>
> >>> - Avadh
> >>>
> >>>>> Summary of my experiments:
> >>>>> 1. The configuration with L1 =mesi runs fine, which you pointed out
> >>>>> 2. The config with L1=write-through, and L1-L2 as mesi does not work
> >>>>> (copied below, item 1, reduced log file attached)
> >>>>>
> >>>>
> >>>> I looked at the configuration and logfile, and the reason it's not
> >>>> working is that the bus interconnect is designed for MESI only. When
> >>>> you attach write-through caches it doesn't work, because the bus
> >>>> waits for a snoop response from all connected controllers, and WT
> >>>> caches are not designed to perform snoop operations and send a
> >>>> response back, so they ignore the request and never respond. (Look
> >>>> for 'responseReceived' in the logfile, near the end.) Due to this
> >>>> behavior the cores wait for the cache request to complete and never
> >>>> make any progress. The sketch below shows the idea.
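> >>>>
> >>>> (Conceptually, the bus does something like this sketch, with
> >>>> hypothetical names:)
> >>>>
> >>>>     // The MESI bus broadcasts a snoop and waits for *every*
> >>>>     // connected controller to answer before completing the request:
> >>>>     for (int i = 0; i < controllerCount_; i++)
> >>>>         controllers_[i]->handle_snoop(msg);
> >>>>     // A WT cache never calls back into responseReceived(), so the
> >>>>     // bus (and the requesting core) waits forever.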
> >>>>
> >>>> 3. The config with L1=write-through, and L1-L2 as p2p does not work
> >>>>> (copied below, item 2, log file has almost nothing)
> >>>>>
> >>>>
> >>>> In this configuration, the first thing is that you used the 'p2p'
> >>>> interconnect to attach all the L1 I/D caches and the L2 cache, but
> >>>> 'p2p' supports connecting only 2 controllers.
> >>>>
> >>>> The solution to this issue is to create a new interconnect module
> >>>> that allows you to send messages directly to the lower cache and send
> >>>> responses back. Developing such a module should not take too long, as
> >>>> you are not going to buffer any requests; you'll just pass each
> >>>> request from source to destination, just like the 'p2p' interconnect
> >>>> does. But unlike 'p2p', this interface will support multiple upper
> >>>> connections and a single lower connection. I suggest you take a look
> >>>> at the p2p interconnect design to see how it passes messages from one
> >>>> end to the other, and create a new interconnect, let's call it
> >>>> 'p2p_multi', that allows multiple upper connections.
> >>>>
> >>>> Thanks and Regards
> >>>> Sparsh Mittal
> >>>>
> >>
> >
>
> ------------------------------
>
> Message: 2
> Date: Wed, 28 Sep 2011 12:54:29 -0500
> From: sparsh mittal <[email protected]>
> To: avadh patel <[email protected]>
> Cc: marss86-devel <[email protected]>
> Subject: Re: [marss86-devel] regarding multi-p2p
> Message-ID:
>        <caf9m4ziwkyzmzl8wg+ve43_rnwkuyxea6n5cj1tnlkuqz4f...@mail.gmail.com
> >
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Wed, Sep 28, 2011 at 11:55 AM, avadh patel <[email protected]> wrote:
>
> >
> > Even if you are running multi-programmed workloads, the kernel will
> > share some data. For your scheme to work, you need to implement a
> > special WT-L1 and L2 cache that makes sure that on each write the cache
> > line is locked in the shared L2, so the write update is seen as atomic
> > and hence all memory updates are serialized for correctness.
> >
> I have done some implementation of multi-p2p.
>
> 1. If I only collect user stats, do I still need to implement what you
> said above?
>
> 2. My understanding of how to apply your comment above is: when an L1
> cache writes to L2, I need to make sure that
>
> a. no other L1 cache is updating L2 at that time, and no other L1 can
> replace that block before it is written;
>
> b. something else?
>
> Thanks a lot for these insights.
>
>
>
> > - Avadh
>
> ------------------------------
>
> Message: 3
> Date: Wed, 28 Sep 2011 11:24:08 -0700
> From: avadh patel <[email protected]>
> To: MARSS List <[email protected]>
> Subject: [marss86-devel] Release 0.2
> Message-ID:
>        <CADtv7CTkMJUxfqtxOuE8L5daf=rfqe3y696ddvhdz_+04eo...@mail.gmail.com
> >
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hi Everyone,
>
> I am very glad to announce the long-due 0.2 release of Marss.
> Here are some of the highlights of this release:
>
> * New Statistics Collection Framework
> * New Modular Core, Cache and Interconnect Framework
> * Upgraded Qemu to 0.14.1 release [from Adnan]
> * Atom Core Model
> * MOESI Cache Model
> * Switch Interconnect Model
> * Machine Configuration for easy designs
> * Periodic Stats Dump support [from Paul]
> * Better Utility scripts [from Paul]
> * Bug fixes related to QEMU interface
>
> It's been a long time since the last release, but we are glad to provide
> a more stable and modular framework that will enable faster development
> of new core and cache modules.
>
> You can download the tar from here:
> https://github.com/avadhpatel/marss/tarball/0.2
>
> With this I have merged all the changes from core-models into the master
> branch, and all bug fixes will apply to the master branch only.
>
> We are looking to compile a list of features for the next (0.3) release;
> if you have something in mind or can contribute, please send us an email.
> A few things I have in mind are:
>
> * Better debugging support (Please send your suggestions)
> * Multi-threaded simulations for higher number of cores
> * Update Qemu to 0.15 or the latest stable release
>
> Once again thanks for your contributions and bug reports.
>
> - Avadh
>
> ------------------------------
>
> Message: 4
> Date: Wed, 28 Sep 2011 17:49:46 -0500
> From: sparsh mittal <[email protected]>
> To: marss86-devel <[email protected]>
> Subject: [marss86-devel] use of cache constants
> Message-ID:
>        <CAF9M4Zi=owBv9uwbfCWT9FVGCqO+VPZdV_qWqK3d=ylw0ys...@mail.gmail.com
> >
> Content-Type: text/plain; charset="iso-8859-1"
>
> Hello
> I was trying to understand the use of the constants defined in
> cacheConstants.h now that the new configuration mechanism is in place.
> When a machine is specified at run time, these constants differ from the
> configured values. These constants are used in the ptl-qemu.cpp file.
> Thanks and Regards
> Sparsh Mittal
>
> ------------------------------
>
> Message: 5
> Date: Thu, 29 Sep 2011 01:42:37 -0400
> From: DRAM Ninjas <[email protected]>
> To: sparsh mittal <[email protected]>
> Cc: marss86-devel <[email protected]>
> Subject: Re: [marss86-devel] use of cache constants
> Message-ID:
>        <CAG_SxBPS_nTEqkxeMM1X=qCra=mjsj5m60dfqx4ehn_mwb0...@mail.gmail.com
> >
> Content-Type: text/plain; charset="iso-8859-1"
>
> The majority (or perhaps all?) of them aren't used anymore with the new
> configuration mechanism. Perhaps a few miscellaneous ones like MEM_REQ_NUM
> are still used, but those aren't that important.
>
>
> ------------------------------
>
> _______________________________________________
> http://www.marss86.org
> Marss86-Devel mailing list
> [email protected]
> https://www.cs.binghamton.edu/mailman/listinfo/marss86-devel
>
>
> End of Marss86-Devel Digest, Vol 19, Issue 20
> *********************************************
>
>
_______________________________________________
http://www.marss86.org
Marss86-Devel mailing list
[email protected]
https://www.cs.binghamton.edu/mailman/listinfo/marss86-devel
