The reason we cannot add another memory region to page memory is
performance. Currently, the translation of a page ID to a virtual address
is basically a no-op. If we add dynamic memory expansion, we would need to
maintain some sort of page ID mapping, which adds significant overhead
(around 10%, in my view).
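As a rough illustration (not Ignite's actual code; the class and constants
below are made up), the difference is between pure address arithmetic and an
extra per-access lookup:

    // Hypothetical sketch of the two translation schemes.
    public class PageAddressing {
        static final int PAGE_SIZE = 4096;

        // Single contiguous region: translation is plain arithmetic,
        // essentially a no-op.
        static long toAddressContiguous(long baseAddr, long pageId) {
            return baseAddr + pageId * PAGE_SIZE;
        }

        // Dynamically added regions: every access pays for an extra
        // division and an extra indirection to find the owning segment.
        static long toAddressSegmented(long[] segmentBases, int pagesPerSegment, long pageId) {
            int segment = (int)(pageId / pagesPerSegment);
            long offset = (pageId % pagesPerSegment) * PAGE_SIZE;
            return segmentBases[segment] + offset;
        }

        public static void main(String[] args) {
            long base = 1L << 30;
            System.out.println(toAddressContiguous(base, 42));
            System.out.println(toAddressSegmented(new long[] {base, base + (1L << 20)}, 256, 300));
        }
    }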

I agree with Semyon: we can set the default size based on the memory
available on the machine.
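For illustration, a minimal sketch of deriving such a default from the
machine's physical RAM (the 80% figure is an assumption, and the cast relies
on the HotSpot-specific com.sun.management MXBean):

    import java.lang.management.ManagementFactory;

    public class DefaultMemorySize {
        public static void main(String[] args) {
            com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();

            long totalRam = os.getTotalPhysicalMemorySize();

            // Assumption for illustration: default to 80% of physical RAM.
            long defaultMaxSize = (long)(totalRam * 0.8);

            System.out.println("Physical RAM: " + totalRam
                + " bytes, proposed default page memory: " + defaultMaxSize + " bytes");
        }
    }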

2017-04-18 11:38 GMT+03:00 Semyon Boikov <sboi...@gridgain.com>:

> I think Ignite can behave like the JVM: we can have -Xms/-Xmx settings
> with defaults depending on available memory.
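For reference, a sketch of what such -Xms/-Xmx-style settings could look like
with the memory policy API of the Ignite 2.x line (the exact class and setter
names were still in flux when this thread was written, so treat them as
assumptions rather than the final API):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.configuration.MemoryConfiguration;
    import org.apache.ignite.configuration.MemoryPolicyConfiguration;

    public class XmsXmxStyleConfig {
        public static void main(String[] args) {
            // Analogous to -Xms / -Xmx for a single memory policy.
            MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration()
                .setName("default")
                .setInitialSize(512L * 1024 * 1024)      // like -Xms512m
                .setMaxSize(8L * 1024 * 1024 * 1024);    // like -Xmx8g

            MemoryConfiguration memCfg = new MemoryConfiguration()
                .setDefaultMemoryPolicyName("default")
                .setMemoryPolicies(plc);

            Ignition.start(new IgniteConfiguration().setMemoryConfiguration(memCfg));
        }
    }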
>
> Thanks
>
> On Tue, Apr 18, 2017 at 4:56 AM, Dmitriy Setrakyan <dsetrak...@apache.org>
> wrote:
>
> > Denis,
> >
> > If what you are suggesting is true, then we can always allocate about 80%
> > of available memory by default. By the way, it must also work on Windows,
> > so we should definitely test it.
> >
> > Alexey G, can you comment?
> >
> > D.
> >
> > On Mon, Apr 17, 2017 at 6:17 PM, Denis Magda <dma...@apache.org> wrote:
> >
> > > Dmitriy,
> > >
> > > > Denis, it sounds like with this approach, in case of the
> > > > over-allocation, the system will just get slower and slower and users
> > > > will end up blaming Ignite for it. Am I understanding your suggestion
> > > > correctly?
> > >
> > >
> > > This will not happen (at least on Unix) unless all the nodes actually
> > > use all the allocated memory, either by putting data there or by
> > > otherwise touching the whole memory range.
> > >
> > > > How was this handled in Ignite 1.9?
> > >
> > >
> > > If you are talking about the legacy off-heap implementation, then we
> > > requested small chunks of memory from the operating system rather than
> > > one contiguous memory region as the page memory does. But I would think
> > > of the page memory as of the Java heap: it can likewise request an 8 GB
> > > contiguous region on an 8 GB machine, following the heap settings of the
> > > app, yet the operating system will not hand over the whole range
> > > immediately unless the Java app actually fills the whole heap or special
> > > parameters are used.
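Denis's point can be seen with a small experiment (a sketch only, using the
JDK-internal sun.misc.Unsafe purely for illustration; the behaviour described
in the comments assumes Linux with default overcommit settings):

    import java.lang.reflect.Field;
    import sun.misc.Unsafe;

    public class ReserveVsTouch {
        public static void main(String[] args) throws Exception {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Unsafe unsafe = (Unsafe) f.get(null);

            long size = 4L * 1024 * 1024 * 1024; // ask for 4 GB of address space

            // On Linux with default overcommit settings this typically succeeds
            // even on a smaller machine: only virtual address space is reserved.
            long addr = unsafe.allocateMemory(size);

            // Physical pages are faulted in lazily, only for the range we touch.
            long touched = 64L * 1024 * 1024; // touch just 64 MB of it
            for (long off = 0; off < touched; off += 4096)
                unsafe.putByte(addr + off, (byte) 1);

            // Resident set size (e.g. in `top`) grows by roughly 64 MB, not 4 GB.
            unsafe.freeMemory(addr);
        }
    }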
> > >
> > > All in all, I think it’s safe to use the approach I suggested unless I
> > > am missing something.
> > >
> > > —
> > > Denis
> > >
> > > > On Apr 17, 2017, at 6:05 PM, Dmitriy Setrakyan <dsetrak...@apache.org>
> > > > wrote:
> > > >
> > > > On Mon, Apr 17, 2017 at 6:00 PM, Denis Magda <dma...@apache.org>
> > > > wrote:
> > > >
> > > >> Dmitriy,
> > > >>
> > > >> Each node will request its own contiguous memory region that takes
> > > >> 70-80% of all RAM from the underlying operating system. However, the
> > > >> operating system will not immediately back the nodes' regions with
> > > >> physical pages mapped to RAM, which allows every node's process to
> > > >> start successfully. The nodes will access RAM through virtual memory,
> > > >> which in turn grants access to physical pages whenever needed,
> > > >> applying low-level eviction and swapping techniques.
> > > >>
> > > >
> > > > Denis, it sounds like with this approach, in case of the
> > > > over-allocation, the system will just get slower and slower and users
> > > > will end up blaming Ignite for it. Am I understanding your suggestion
> > > > correctly?
> > > >
> > > > How was this handled in Ignite 1.9?
> > > >
> > > > D.
> > >
> > >
> >
>
