Suhas,

It depends on multiple factors and may need to be
configured/provisioned/tuned based on the use case.

The memory requirement depends on how much data there is and how big each
entry is; you need to account for the per-entry overhead, as well as the GC
impact, which depends on whether the system is read-only, write-heavy, or
constantly modifying the data.
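
As a rough, back-of-the-envelope illustration (the entry count, sizes,
overhead, and headroom figures below are assumptions for the sake of the
example, not Geode constants; measure your real serialized entry sizes),
the arithmetic might look like this in Java:

    public class HeapSizingEstimate {
        public static void main(String[] args) {
            long entryCount       = 10_000_000L; // hypothetical data set size
            long avgKeyBytes      = 64;          // assumed serialized key size
            long avgValueBytes    = 1024;        // assumed serialized value size
            long perEntryOverhead = 150;         // assumed per-entry region overhead
            double headroom       = 2.0;         // slack for GC, temp objects, indexes

            long dataBytes = entryCount * (avgKeyBytes + avgValueBytes + perEntryOverhead);
            long heapBytes = (long) (dataBytes * headroom);

            System.out.printf("Raw data:    %,d MB%n", dataBytes / (1024 * 1024));
            System.out.printf("Heap target: %,d MB%n", heapBytes / (1024 * 1024));
        }
    }

The sizing page linked below walks through this kind of estimate in more
detail.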

CPU comes into the picture based on the number of clients/application
threads performing operations and how they contend for resources. If you
have a large number of clients accessing the data, it is better to
partition the data across multiple servers and provision each server based
on resource availability.
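
For illustration, here is a minimal server-side sketch using the Geode
Java API (the region name "customers" is hypothetical). With a partitioned
region, the data buckets, and the client load that follows them, are
spread across all servers hosting the region:

    import org.apache.geode.cache.Cache;
    import org.apache.geode.cache.CacheFactory;
    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.RegionShortcut;

    public class PartitionedRegionExample {
        public static void main(String[] args) {
            // Run on each server member; Geode balances the buckets
            // (with one redundant copy) across all of them.
            Cache cache = new CacheFactory().create();
            Region<String, String> region = cache
                .<String, String>createRegionFactory(RegionShortcut.PARTITION_REDUNDANT)
                .create("customers");
        }
    }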

You can find additional info at:
https://cwiki.apache.org/confluence/display/GEODE/Sizing+a+Geode+Cluster

Also, note that off-heap storage is supported with Geode; you can minimize
the GC impact by storing the data off-heap:
https://cwiki.apache.org/confluence/display/GEODE/Off-Heap+Memory+Spec
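
As a sketch (the 4096m figure and region name are assumptions; size the
off-heap pool to your data volume), enabling it through the Java API looks
roughly like:

    import org.apache.geode.cache.Cache;
    import org.apache.geode.cache.CacheFactory;
    import org.apache.geode.cache.Region;
    import org.apache.geode.cache.RegionShortcut;

    public class OffHeapExample {
        public static void main(String[] args) {
            // Reserve an off-heap pool for this member, then mark the
            // region so its values are stored there instead of on the
            // heap, keeping them out of the garbage collector's way.
            Cache cache = new CacheFactory()
                .set("off-heap-memory-size", "4096m")
                .create();

            Region<String, byte[]> region = cache
                .<String, byte[]>createRegionFactory(RegionShortcut.PARTITION)
                .setOffHeap(true)
                .create("offHeapData");
        }
    }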

-Anil.

On Wed, Dec 14, 2016 at 10:14 AM, Suhas Gogate <[email protected]> wrote:

> Hi, I am looking for a recommendation on how much heap a single Geode member
> server can handle reasonably well and how many CPU cores it would need
> (assuming the clients are local to the member server, i.e. the network is not
> a bottleneck). In other words, in a typical production deployment, is there a
> recommendation on how much heap a single Geode server should manage, and
> with how many cores?
>
> Also, as we have more and more RAM w/ today’s enterprise grade servers,
> e.g. 128G, 256G, is it better to run multiple Geode servers on such
> machines w/ smaller heap allocations, or one server w/ a large heap size? I
> understand Geode provides a way to distribute replicas across physical
> machines, so multiple servers per single machine should not be a problem in
> this regard. Although a bigger heap could possibly mean bigger GC pauses?
>
> Appreciate your insight.
>
> Thanks & Regards, Suhas