On May 29, 2013, at 2:17 PM, Dan Berindei <dan.berin...@gmail.com> wrote:

> 
> Cheers
> Dan
> 
> 
> 
> On Tue, May 28, 2013 at 4:20 PM, Galder Zamarreño <gal...@redhat.com> wrote:
> Basically, the difference between Hot Rod then and now is 32 bytes: the new 
> metadata object, a reference to it, and 2 unused longs for immortal entries.
> 
> With embedded metadata (~191 bytes per entry): 
> https://dl.dropboxusercontent.com/u/6148072/with-embedded.png
> With versioned entries (~159 bytes per entry): 
> https://dl.dropboxusercontent.com/u/6148072/with-versioned.png
> 
> And there's more: each internal cache entry holds a reference to an internal 
> cache value. This is useful for some cases (cache stores…etc), but the extra 
> reference plus the value object itself cost 16 bytes per entry (how useful is 
> it really to have a separate value instance that holds just the value? Its 
> usefulness needs reassessing...).
> 
> 
> +1 to remove InternalCacheValue (I thought about storing InternalCacheValues 
> directly in the DataContainer, but creating InternalCacheEntries all the time 
> would probably generate too much garbage).

Interesting point… I think ICEs are used way more than ICVs, so yeah, creating 
them all the time would generate a lot of garbage. I'll investigate this just 
in case.
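
To make the garbage concern concrete, here's a minimal sketch of what storing 
only values and materialising entries on read could look like (the container 
and entry types are simplified stand-ins, not the actual DataContainer API):

    // Simplified stand-ins, not the actual Infinispan DataContainer API.
    import java.util.concurrent.ConcurrentHashMap;

    final class ValueOnlyContainer {
        static final class StoredValue {
            final Object value;
            StoredValue(Object value) { this.value = value; }
        }
        static final class ReadEntry {
            final Object key, value;
            ReadEntry(Object key, Object value) { this.key = key; this.value = value; }
        }

        private final ConcurrentHashMap<Object, StoredValue> store =
                new ConcurrentHashMap<Object, StoredValue>();

        void put(Object key, Object value) { store.put(key, new StoredValue(value)); }

        ReadEntry get(Object key) {
            StoredValue v = store.get(key);
            // A fresh entry wrapper has to be allocated on every read; that is
            // the garbage that creating ICEs all the time would generate.
            return v == null ? null : new ReadEntry(key, v.value);
        }
    }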

> 
>  
> So really, I think the minimum Hot Rod overhead we should aim for is ~143 
> bytes. If each server module could define which ICE class (well, classes, to 
> cover the immortal, mortal…etc cases) to use, designed purely for that server 
> (i.e. Hot Rod just needs value + version; Memcached needs value + version + 
> flags), we could get to that level…
> 
> 
> You mean embed the metadata fields directly in the ICE? Wouldn't that get us 
> back to what we had before introducing the Metadata interface?

Nope. Users can still provide metadata, and that would be stored in a MetadataICE 
instance, with some extra cost. However, for our own internal uses of metadata 
(hotrod, memcached, rest), we can optimise things further with what I said: 
having endpoint-specific ICE/Vs that are stored directly, saving memory compared 
to what we had before (at least 16 bytes per entry).
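
As a rough illustration of the endpoint-specific entries idea (all names below 
are hypothetical, just to show the shape; Hot Rod carrying only value + version 
and Memcached adding flags):

    // Hypothetical shapes for endpoint-specific entries; not actual Infinispan types.
    interface ServerCacheEntry {
        Object key();
        Object value();
    }

    // Hot Rod only needs value + version.
    final class HotRodEntry implements ServerCacheEntry {
        private final Object key, value;
        private final long version;
        HotRodEntry(Object key, Object value, long version) {
            this.key = key; this.value = value; this.version = version;
        }
        public Object key()   { return key; }
        public Object value() { return value; }
        long version()        { return version; }
    }

    // Memcached needs value + version + flags.
    final class MemcachedEntry implements ServerCacheEntry {
        private final Object key, value;
        private final long version;
        private final int flags;
        MemcachedEntry(Object key, Object value, long version, int flags) {
            this.key = key; this.value = value; this.version = version; this.flags = flags;
        }
        public Object key()   { return key; }
        public Object value() { return value; }
        long version()        { return version; }
        int flags()           { return flags; }
    }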

>  
> You still want the metadata to be passed from the client, but for those 
> specialised use cases in Infinispan, we could have a mapping between the 
> metadata type and the type of ICEs created…
> 
> 
> If HotRod knows it always needs a ServerEntryVersion, its HotRodMetadata 
> implementation could implement EntryVersion directly (version() returns this). 
> This could even be combined
> with your idea, so we could have a HotRodInternalCacheEntry that implements 
> InternalCacheEntry, Metadata, and EntryVersion :)

^ Hmmm, interesting prospect. If the metadata passed is also an ICE, it could 
just take it… but there's the matter of mortal/immortal/transient entries to 
deal with too.
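
For reference, a minimal sketch of what Dan's combined class could look like, 
using simplified stand-in interfaces rather than the real 
InternalCacheEntry/Metadata/EntryVersion, and only covering the immortal case:

    // Simplified stand-ins, not the real Infinispan interfaces.
    interface SimpleEntryVersion {}
    interface SimpleMetadata { SimpleEntryVersion version(); }
    interface SimpleInternalCacheEntry { Object key(); Object value(); SimpleMetadata metadata(); }

    // One allocation per entry: the entry is its own metadata and its own version.
    final class HotRodImmortalEntry
            implements SimpleInternalCacheEntry, SimpleMetadata, SimpleEntryVersion {
        private final Object key, value;
        private final long versionId;
        HotRodImmortalEntry(Object key, Object value, long versionId) {
            this.key = key; this.value = value; this.versionId = versionId;
        }
        public Object key()                 { return key; }
        public Object value()               { return value; }
        public SimpleMetadata metadata()    { return this; }   // the entry is its own metadata
        public SimpleEntryVersion version() { return this; }   // ...and its own version
        long versionId()                    { return versionId; }
    }

    // A mortal variant would still need lifespan/created fields, and a transient
    // one maxIdle/lastUsed, so one class per lifecycle combination would be needed.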

> 
>  
> If anyone has any other ideas, shout now! :)
> 
> Cheers,
> 
> On May 28, 2013, at 3:07 PM, Galder Zamarreño <gal...@redhat.com> wrote:
> 
> > All, I've managed to replicate this locally and I'm now looking at the heap 
> > dumps.
> >
> > Cheers,
> >
> > On May 27, 2013, at 5:34 PM, Galder Zamarreño <gal...@redhat.com> wrote:
> >
> >> Hmmmm, not the expected results. There's no way we should be consuming 
> >> more memory per entry. It should definitely be less, particularly for Hot 
> >> Rod and Memcached, and about the same for REST. The fact that all of them 
> >> grow suggests there's an issue in the core impl.
> >>
> >> @Martin, can you generate a heap dump, say for Hot Rod (with 512b entries) 
> >> at the end of the test (when the cache is still populated)?
> >>
> >> Cheers,
> >>
> >> On May 24, 2013, at 2:50 PM, Martin Gencur <mgen...@redhat.com> wrote:
> >>
> >>> Hi,
> >>> so I gave it another try with the latest Infinispan and Infinispan-server 
> >>> snapshots (HEAD: a901168 and bc432fa, respectively). In short, the results 
> >>> are still the same for inVM mode but worse for client-server mode. Of 
> >>> course, I haven't changed anything in the test since last time.
> >>>
> >>> This time, I left out the results for 1MB entries because there's high 
> >>> distortion due to the low number of entries stored in a cache (storing 
> >>> 100MB of data). Previously, the results looked better for HotRod compared 
> >>> to the first round of tests I did for ISPN 5.2. Now the results for HotRod 
> >>> are the worst of the three measurements, while inVM mode remains the 
> >>> same:
> >>>
> >>> HotRod (ISPN 5.2):
> >>> entry size -> overhead per entry
> >>> 512B  -> 174B
> >>> 1kB   -> 178B
> >>> 10kB  -> 176B
> >>>
> >>> HotRod (ISPN 5.3-SNAPSHOT, a few weeks ago)
> >>> 512B  -> 159 (~ -9%)
> >>> 1kB   -> 159 (~ -11%)
> >>> 10kB  -> 154 (this is perhaps affected by the low number of entries 
> >>> stored in mem.)
> >>>
> >>> HotRod (ISPN 5.3-SNAPSHOT, now)
> >>> 512B  -> 191
> >>> 1kB   -> 191 (measured twice)
> >>> 10kB  -> 186 (looks a bit distorted already)
> >>>
> >>> ------------------------
> >>>
> >>> Memcached (ISPN 5.2)
> >>> 512B  -> 184
> >>> 1kB   -> 181
> >>> 10kB  -> 182
> >>>
> >>> Memcached (ISPN 5.3-SNAPSHOT)
> >>> 512B  -> 228
> >>> 1kB   -> 227
> >>> 10kB  -> 235
> >>>
> >>> --------------------------------
> >>>
> >>> REST (ISPN 5.2)
> >>> 512B  -> 208
> >>> 1kB   -> 205
> >>> 10kB  -> 206
> >>>
> >>> REST (ISPN 5.3-SNAPSHOT)
> >>> 512B  -> 247
> >>> 1kB   -> 247
> >>> 10kB  -> 251
> >>>
> >>> ------------------------------------
> >>>
> >>> inVM (ISPN 5.2)
> >>> 512B  -> 151
> >>> 1kB   -> 151
> >>> 10kB  -> 155
> >>>
> >>> inVM (ISPN 5.3-SNAPSHOT)
> >>> 512B  -> 150
> >>> 1kB   -> 150
> >>> 10kB  -> 150
> >>>
> >>>
> >>> Martin
> >>>
> >>>
> >>> On 23.5.2013 18:13, Mircea Markus wrote:
> >>>> Hi Martin,
> >>>>
> >>>> Galder has finalised the remaining bits of ISPN-2281. Is it possible for 
> >>>> you to re-run the test to see where we are with the memory consumption?
> >>>>
> >>>> On 13 May 2013, at 10:44, Galder Zamarreño <gal...@redhat.com> wrote:
> >>>>
> >>>>
> >>>>>> On May 7, 2013, at 4:55 PM, Manik Surtani <msurt...@redhat.com> wrote:
> >>>>>>
> >>>>>>
> >>>>>>> On 7 May 2013, at 15:39, Martin Gencur <mgen...@redhat.com> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>> I can make a blog post once we have this for Memcached and REST. I 
> >>>>>>>> guess it is not ready yet.
> >>>>>>>>
> >>>>>>> Yes please.  Nice work.  :)
> >>>>>>>
> >>>> Cheers,
> >>>>
> >>>
> >
> >
> 
> 


--
Galder Zamarreño
gal...@redhat.com
twitter.com/galderz

Project Lead, Escalante
http://escalante.io

Engineer, Infinispan
http://infinispan.org


_______________________________________________
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev
