+1 to Denis. Near caches are on-heap as well, so I guess we need them.

BTW, the TeamCity Bot uses Guava on-heap caching on top of Ignite (Durable
Memory). Keeping the same Java object instance cached brings a visible
performance boost for really hot code paths; at the very least, it reduces
GC pressure, because offheap->onheap unmarshalling produces a new object
from the JVM's point of view.
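
For illustration, here is a minimal sketch of that pattern with Guava on top
of the Ignite Java API. The BuildRecord type and the cache name are made up
for this example, not taken from the actual TeamCity Bot code:

    import java.util.concurrent.TimeUnit;
    import com.google.common.cache.CacheBuilder;
    import com.google.common.cache.CacheLoader;
    import com.google.common.cache.LoadingCache;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;

    public class OnHeapLayer {
        // BuildRecord is a placeholder value type for this sketch.
        private final IgniteCache<Integer, BuildRecord> offHeap; // durable memory
        private final LoadingCache<Integer, BuildRecord> onHeap; // hot on-heap instances

        public OnHeapLayer(Ignite ignite) {
            offHeap = ignite.getOrCreateCache("builds");
            onHeap = CacheBuilder.newBuilder()
                .maximumSize(10_000)                    // bound heap usage
                .expireAfterAccess(5, TimeUnit.MINUTES) // let cold entries go
                .build(new CacheLoader<Integer, BuildRecord>() {
                    @Override public BuildRecord load(Integer key) {
                        // Miss: unmarshal from durable memory, which allocates a new object.
                        return offHeap.get(key);
                    }
                });
        }

        public BuildRecord get(int key) {
            // Hot path: repeated calls return the same on-heap instance, no unmarshalling.
            return onHeap.getUnchecked(key);
        }
    }

Note that Guava's getUnchecked() does not allow the loader to return null,
so the sketch assumes the key is always present in the Ignite cache.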

Wed, Jun 19, 2019 at 00:35, Denis Magda <dma...@apache.org>:

> Ivan,
>
> I believe that yes, those caches are still extremely useful for
> low-latency use cases. Companies are ready to allocate more RAM in favor of
> lower latencies because off-heap access is still slower than on-heap
> access. There are not that many use cases of this kind, but I can recall
> several companies that exploit on-heap caching a lot.
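>
> As a reference, a minimal sketch of how such an on-heap cache is enabled on
> the Java API (the cache name, the Trade value type and the eviction limit
> are placeholders; `ignite` is an already-started Ignite instance):
>
>     // Classes from org.apache.ignite.configuration and
>     // org.apache.ignite.cache.eviction.lru; Trade is a placeholder type.
>     CacheConfiguration<Integer, Trade> ccfg = new CacheConfiguration<>("trades");
>     // Keep hot entries on-heap in addition to durable memory.
>     ccfg.setOnheapCacheEnabled(true);
>     // Bound the on-heap layer with an LRU eviction policy.
>     ccfg.setEvictionPolicyFactory(new LruEvictionPolicyFactory<>(100_000));
>     IgniteCache<Integer, Trade> cache = ignite.getOrCreateCache(ccfg);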
>
> -
> Denis
>
>
> On Tue, Jun 18, 2019 at 11:42 AM Павлухин Иван <vololo...@gmail.com>
> wrote:
>
> > Do we still need on-heap caches?
> >
> > Tue, Jun 18, 2019 at 21:30, Denis Magda <dma...@apache.org>:
> > >
> > > +1
> > >
> > > Thick clients (aka standard clients) provide comprehensive compute APIs
> > > with peer class loading. That's a huge differentiator for Ignite. Until
> > > thin clients support the compute and ML APIs at the same level as the
> > > standard client does, I would not consider discontinuing the standard
> > > clients. Plus, as Alex outlined, the functional gap is even wider.
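> > >
> > > As a quick illustration of that differentiator (a sketch only; the
> > > printed message is arbitrary), a thick client with peer class loading
> > > enabled can broadcast a closure to the servers without any manual class
> > > deployment:
> > >
> > >     import org.apache.ignite.Ignite;
> > >     import org.apache.ignite.Ignition;
> > >     import org.apache.ignite.configuration.IgniteConfiguration;
> > >
> > >     IgniteConfiguration cfg = new IgniteConfiguration()
> > >         .setClientMode(true)                // start as a thick client node
> > >         .setPeerClassLoadingEnabled(true);  // ship closure classes to servers
> > >
> > >     try (Ignite client = Ignition.start(cfg)) {
> > >         // Runs on every server node; the closure class is deployed automatically.
> > >         client.compute().broadcast(() -> System.out.println("Hello from a server node"));
> > >     }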
> > >
> > > -
> > > Denis
> > >
> > >
> > > On Mon, Jun 17, 2019 at 6:28 AM Alexey Goncharuk
> > > <alexey.goncha...@gmail.com> wrote:
> > >
> > > > Nikolay,
> > > >
> > > > I had this thought too, but I am not too eager to implement it yet. The
> > > > reason is the transaction protocol complexity/performance issues with
> > > > thin clients.
> > > >
> > > > A thick client can communicate with each primary node and coordinate the
> > > > prepare/commit phases. A thin client can only communicate with one node,
> > > > so the change will mean an additional network hop. Of course, we can make
> > > > thin clients implement the same protocol, but it will immediately
> > > > increase the protocol complexity for all platforms.
> > > >
> > > > Plus, we do not have near caches on thin clients, we do not support p2p
> > > > class deployment, etc. Since thin clients are positioned as
> > > > platform-agnostic, I do not think it makes sense to expose the full
> > > > feature set of Ignite to thin clients.
> > > >
> > > > Instead, we can significantly simplify client node configuration: it
> > > > currently requires the same config as a regular Ignite node; however, in
> > > > most cases the configuration can be reduced to just a few host:port
> > > > pairs.
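> > > >
> > > > For comparison (host names and ports below are placeholders), a rough
> > > > sketch of what the two configurations look like today on the Java API;
> > > > the thin client side is already just host:port pairs:
> > > >
> > > >     import java.util.Arrays;
> > > >     import org.apache.ignite.Ignite;
> > > >     import org.apache.ignite.Ignition;
> > > >     import org.apache.ignite.client.IgniteClient;
> > > >     import org.apache.ignite.configuration.ClientConfiguration;
> > > >     import org.apache.ignite.configuration.IgniteConfiguration;
> > > >     import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
> > > >     import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
> > > >
> > > >     // Thick client node: a full IgniteConfiguration with a discovery SPI.
> > > >     IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);
> > > >     cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(
> > > >         new TcpDiscoveryVmIpFinder().setAddresses(
> > > >             Arrays.asList("server1:47500", "server2:47500"))));
> > > >     Ignite thick = Ignition.start(cfg);
> > > >
> > > >     // Thin client: nothing but host:port pairs.
> > > >     IgniteClient thin = Ignition.startClient(
> > > >         new ClientConfiguration().setAddresses("server1:10800", "server2:10800"));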
> > > >
> > > > Mon, Jun 17, 2019 at 15:58, Nikolay Izhikov <nizhi...@apache.org>:
> > > >
> > > > > Alexey.
> > > > >
> > > > > I want to share a thought (just don't dismiss it out of hand :) ).
> > > > >
> > > > > Do we really need "client nodes"?
> > > > >
> > > > > We have the thin client protocol, which is a very convenient way to
> > > > > interact with Ignite.
> > > > > So why do we need one more entity and mode of operation such as a
> > > > > "client node"?
> > > > >
> > > > > From my point of view, client nodes were required back when there was
> > > > > no thin client.
> > > > > Now we have one.
> > > > >
> > > > > Let's simplify Ignite codebase and drop client nodes!
> > > > >
> > > > > How does it sound?
> > > > >
> > > > >
> > > > > On Mon, 17/06/2019 at 15:52 +0300, Alexey Goncharuk wrote:
> > > > > > Nikolay,
> > > > > >
> > > > > > Local caches and Scalar are already in the list :) Added the
> > > > > > outdated metrics point.
> > > > > >
> > > > > > Mon, Jun 17, 2019 at 15:32, Nikolay Izhikov <nizhi...@apache.org>:
> > > > > >
> > > > > > > * Scalar.
> > > > > > > * LOCAL caches.
> > > > > > > * Deprecated metrics.
> > > > > > >
> > > > > > > On Mon, 17/06/2019 at 15:18 +0300, Alexey Goncharuk wrote:
> > > > > > > > Igniters,
> > > > > > > >
> > > > > > > > Even though we are still planning the Ignite 2.8 release, I would
> > > > > > > > like to kick off a discussion related to Ignite 3.0, because the
> > > > > > > > efforts for AI 3.0 will be significantly larger than for AI 2.8,
> > > > > > > > so it is better to start early.
> > > > > > > >
> > > > > > > > As a first step, I would like to discuss the list of things to be
> > > > > > > > removed in Ignite 3.0 (this thread is partially inspired by Denis
> > > > > > > > Magda's IGFS removal thread). I've separated all to-be-removed
> > > > > > > > points from the existing Ignite 3.0 wishlist [1] into a dedicated
> > > > > > > > block and also added a few more things that look right to be
> > > > > > > > dropped.
> > > > > > > >
> > > > > > > > Please share your thoughts; probably there are more outdated
> > > > > > > > things we need to add to the wishlist.
> > > > > > > >
> > > > > > > > As a side question: I think it makes sense to create tickets for
> > > > > > > > such improvements, but how do we track them? Will the 3.0 version
> > > > > > > > suffice, or should we add a separate label?
> > > > >
> > > >
> >
> >
> >
> > --
> > Best regards,
> > Ivan Pavlukhin
> >
>
