totally agreed :)

On Wed, Oct 19, 2011 at 5:14 PM, Ashish <[email protected]> wrote:

> On Wed, Oct 19, 2011 at 8:26 PM, Raffaele P. Guidi
> <[email protected]> wrote:
> > It's probably my fault: as a former basketball player, I believe that
> > when an assist is missed you have to blame the one who throws the ball
> > :) Also, after having worked alone on DirectMemory for a long time,
> > discussing these matters with skilled professionals like you is a real
> > pleasure.
>
> Well, having worked as a Terracotta Field Engineer, I got to know a few
> things, and that's what I discuss here.
>
> At the ASF, it's "we" - the community - and we are all in this together
> :) so it's no one's fault.
>
> During our conversations, and while watching some commits on GitHub, I
> got a bit confused about DirectMemory's objective. That's why I asked
> the question on the other thread about its core objective.
>
> I feel we should focus on getting a release out with the known features,
> and then slowly add more. Adding documentation would be vital for wide
> adoption.
>
> >
> > Thanks,
> >    Raffaele
> >
> > On Wed, Oct 19, 2011 at 3:33 PM, Ashish <[email protected]> wrote:
> >
> >> On Wed, Oct 19, 2011 at 7:01 PM, Raffaele P. Guidi
> >> <[email protected]> wrote:
> >> > Sorry, Ashish, but I think there must be a misunderstanding: the map
> >> > doesn't contain the actual data, it is just the index to the data
> >> > itself, which is in the off-heap memory. In fact it is a collection
> >> > of Pointer objects, which contain the offset and the length of the
> >> > DirectBuffer that contains the actual byte array. So: replicating
> >> > the map (which is natively offered by both hc and terracotta) means
> >> > replicating the INDEX of the data, not the data itself.
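The index-vs-data split described above can be sketched in plain Java. This is a minimal illustration, not the actual DirectMemory API; all class and method names here are hypothetical:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Sketch of the layout described above: the map holds only small Pointer
// entries (offset + length), while the payload bytes live in a direct
// buffer outside the Java heap.
public class OffHeapIndexSketch {

    // A Pointer is just an index entry: where the value starts and how long it is.
    static final class Pointer {
        final int offset;
        final int length;
        Pointer(int offset, int length) { this.offset = offset; this.length = length; }
    }

    private final ByteBuffer store = ByteBuffer.allocateDirect(1024); // off-heap bytes
    private final Map<String, Pointer> index = new HashMap<>();       // on-heap index

    void put(String key, byte[] value) {
        int offset = store.position();
        store.put(value);                          // copy payload off-heap
        index.put(key, new Pointer(offset, value.length));
    }

    byte[] get(String key) {
        Pointer p = index.get(key);
        if (p == null) return null;
        byte[] out = new byte[p.length];
        ByteBuffer view = store.duplicate();       // independent position/limit
        view.position(p.offset).limit(p.offset + p.length);
        view.get(out);
        return out;
    }

    public static void main(String[] args) {
        OffHeapIndexSketch cache = new OffHeapIndexSketch();
        cache.put("greeting", "hello off-heap".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(cache.get("greeting"), StandardCharsets.UTF_8));
    }
}
```

Replicating `index` replicates only the small Pointer entries, which is the point being made: the byte payloads in `store` never travel with the map.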
> >>
> >> Aha, got it. I thought it was about the data itself, that's why I
> >> brought up the point :)
> >>
> >> >
> >> > Again: replication of the map (index) is one matter; distribution
> >> > of the data is a different question. I'm not proposing to use
> >> > Terracotta or Hazelcast for their caching features but for their
> >> > *clustering* features.
> >>
> >> Got it now :)
> >>
> >> >
> >> > On Wed, Oct 19, 2011 at 2:46 PM, Ashish <[email protected]> wrote:
> >> >
> >> >> On Wed, Oct 19, 2011 at 5:41 PM, Raffaele P. Guidi
> >> >> <[email protected]> wrote:
> >> >> > Also, on replication/distribution, we have two distinct aspects:
> >> >> >
> >> >> >   1. *map replication* - the pointers map has to be replicated
> >> >> >   to all nodes, and each pointer also has to contain a reference
> >> >> >   to the node that "owns" the real data
> >> >> >   2. *communication between nodes* - once a node knows that an
> >> >> >   entry is contained in node "n", it has to ask that node for it
> >> >> >
> >> >> > The first point is easily covered by Terracotta or Hazelcast,
> >> >> > while the second one should be implemented using an RPC mechanism
> >> >> > (Thrift and Avro are both good choices). Another option is to
> >> >> > cover point 1 as well with a custom replication built on top of
> >> >> > the chosen RPC framework - of course this would lead to yet
> >> >> > another (do we really need it?) distributed map implementation.
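The two aspects above could be sketched roughly as follows. All names are hypothetical, and a plain Java interface stands in for what would be a Thrift/Avro RPC service in a real implementation; the "cluster" here is simulated in-process:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the two aspects: a replicated pointer map whose entries name the
// owning node (aspect 1), plus a fetch call standing in for the RPC layer
// (aspect 2). Everything is in-process here for illustration only.
public class DistributedIndexSketch {

    // Replicated index entry: which node owns the bytes, and where they sit there.
    static final class Pointer {
        final String ownerNode;
        final int offset;
        final int length;
        Pointer(String ownerNode, int offset, int length) {
            this.ownerNode = ownerNode; this.offset = offset; this.length = length;
        }
    }

    // Aspect 2: the call a node makes to the owner of an entry. A real system
    // would put a Thrift or Avro service behind this interface.
    interface NodeTransport {
        byte[] fetch(String key);
    }

    // Aspect 1: the pointer map, replicated to every node (by Terracotta or
    // Hazelcast, or by custom replication over the same RPC framework).
    static final Map<String, Pointer> replicatedIndex = new ConcurrentHashMap<>();

    public static void main(String[] args) {
        // Two "nodes", each with its own local store standing in for off-heap memory.
        Map<String, byte[]> storeA = new HashMap<>();
        Map<String, byte[]> storeB = new HashMap<>();
        Map<String, NodeTransport> cluster = new HashMap<>();
        cluster.put("nodeA", storeA::get);
        cluster.put("nodeB", storeB::get);

        // nodeB stores a value locally and publishes a pointer to the index.
        storeB.put("user:42", "payload-on-B".getBytes());
        replicatedIndex.put("user:42", new Pointer("nodeB", 0, 12));

        // nodeA sees the replicated pointer and asks the owner for the bytes.
        Pointer p = replicatedIndex.get("user:42");
        byte[] data = cluster.get(p.ownerNode).fetch("user:42");
        System.out.println(new String(data));
    }
}
```

The design choice under discussion is exactly which box fills each role: the replicated map can come from Terracotta/Hazelcast or from custom replication over the RPC framework, while the transport behind `NodeTransport` is Thrift or Avro either way.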
> >> >>
> >> >> Disagree on this. Be it TC or Hazelcast, they shall cover both
> >> >> points. Let's take the example of Terracotta. It's a client-server
> >> >> architecture with striping on the server side.
> >> >> Now if you choose TC (short for Terracotta), you have 3 options:
> >> >> 1. Use DSO, or Distributed Shared Objects, mode - needs
> >> >> instrumentation and other stuff, not recommended
> >> >> 2. Use Ehcache at the back, and TC takes care of distributing the
> >> >> data
> >> >> 3. Use a Map via the TC Toolkit
> >> >>
> >> >> TC will not let you know where it's storing the key (the keys are
> >> >> in fact stored in an HA manner on the server stripe). That's the
> >> >> beauty of TC: it does the faulting/flushing transparently to the
> >> >> user code.
> >> >>
> >> >> On the Hazelcast side, it does allow you to know where the key is,
> >> >> but the moment you use its client, it becomes transparent to you.
> >> >>
> >> >> IMHO, using any existing cache solution would complicate the user
> >> >> story.
> >> >>
> >> >> Distribution is a nice-to-have feature, and in fact would lead to
> >> >> wider adoption :)
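The point about Hazelcast letting you know where a key lives rests on hash-based partitioning: every member can compute a key's owner locally. This is an illustrative plain-Java sketch of that idea, not Hazelcast's actual API (Hazelcast exposes the real information through its PartitionService); the member addresses and the partition-to-member mapping are simplified assumptions:

```java
import java.util.Arrays;
import java.util.List;

// Sketch of hash-based partition ownership: hash the key into a fixed
// partition space, then map the partition to a cluster member. 271 is
// Hazelcast's default partition count.
public class PartitionOwnerSketch {

    static final int PARTITION_COUNT = 271;

    static int partitionId(Object key) {
        // Math.floorMod keeps the result non-negative for negative hash codes.
        return Math.floorMod(key.hashCode(), PARTITION_COUNT);
    }

    static String ownerOf(Object key, List<String> members) {
        // Simplistic partition-to-member mapping; a real cluster rebalances
        // partitions as members join and leave.
        return members.get(partitionId(key) % members.size());
    }

    public static void main(String[] args) {
        List<String> members = Arrays.asList("10.0.0.1:5701", "10.0.0.2:5701", "10.0.0.3:5701");
        // Every member computes the same owner for the same key, with no lookup.
        System.out.println("user:42 -> " + ownerOf("user:42", members));
    }
}
```

This determinism is what makes the ownership visible from any node; once you go through the Hazelcast client instead, that routing happens under the hood, which is the transparency mentioned above.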
> >> >>
> >> >> >
> >> >> > Keeping things like this is easy - of course making it
> >> >> > efficient/performant is a different story (e.g., should I keep a
> >> >> > local cache of frequently accessed items stored in other nodes?
> >> >> > etc.).
> >> >> >
> >> >> > Ciao,
> >> >> >    R
> >> >> >
> >> >>
> >> >> thanks
> >> >> ashish
> >> >>
> >> >
> >>
> >>
> >>
> >> --
> >> thanks
> >> ashish
> >>
> >> Blog: http://www.ashishpaliwal.com/blog
> >> My Photo Galleries: http://www.pbase.com/ashishpaliwal
> >>
> >
>
>
>
> --
> thanks
> ashish
>
> Blog: http://www.ashishpaliwal.com/blog
> My Photo Galleries: http://www.pbase.com/ashishpaliwal
>
