On Mon, Jan 18, 2016 at 3:45 AM, nino martinez wael <
nino.martinez.w...@gmail.com> wrote:

> Hi
>
> We are considering using Ignite as a backend for our client / server
> environment (for Ignite's key/value store). It's a telephony-based
> system, and the idea is that as a telephony call passes through the
> system, Ignite nodes will follow the call, adding and manipulating data.
> Each call would have its own node cluster (as we don't need to share
> data between calls). There could be a lot of concurrent calls; the max
> would probably be around 1000.
>
> So the questions are: how does Ignite handle
> * A cluster size of up to 1000 concurrent clusters (all based on a predicate)?
>

Most likely you mean 1000 nodes. Ignite has been known to be deployed in
1000+ node clusters. However, it sounds like you are trying to achieve data
isolation, so you may need to start up a cache per call, not a node per
call.
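
For illustration, here is a minimal sketch of the cache-per-call idea.
The cache name, key/value types, and sample entries are all hypothetical:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;

    public class CallCacheSketch {
        public static void main(String[] args) {
            // Start (or connect to) a node with the default configuration.
            Ignite ignite = Ignition.start();

            // One cache per call keeps each call's data isolated without
            // starting a separate node per call. "call-12345" is a made-up name.
            IgniteCache<String, String> callCache = ignite.getOrCreateCache("call-12345");

            callCache.put("callerId", "+4512345678");
            callCache.put("state", "RINGING");

            // When the call ends, destroy the cache to release its resources.
            callCache.destroy();
        }
    }

Creating and destroying caches should also be considerably cheaper than
starting and stopping nodes, which matters for your topology-change
question below.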

As Alexey suggested, once you describe your use case better, we will be
able to provide more specific suggestions.


> * A cluster where nodes will come and go and probably are error-prone
>

Ignite nodes can join and leave as needed. However, you should avoid
frequent topology changes, because each join or leave will likely trigger
background rebalancing.
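
If your topology does change often, one knob worth looking at is the
per-cache rebalance delay. A sketch, assuming a cache named "calls" and
an example delay of 10 seconds:

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class RebalanceDelaySketch {
        public static void main(String[] args) {
            CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("calls");

            // Wait 10 seconds after a topology change before rebalancing, so
            // a node that quickly leaves and rejoins does not trigger two
            // rounds of data movement. The 10s value is just an example.
            cacheCfg.setRebalanceDelay(10_000);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setCacheConfiguration(cacheCfg);

            Ignition.start(cfg);
        }
    }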


> * Realtime, it would be a problem if one node dies and the others have
> to wait for confirmation of death etc
>

This generally does not happen in Ignite. Topology changes have little
effect on performance, as all the recovery and rebalancing happens in the
background and the cluster continues to operate.
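
To make sure a node death costs no data while recovery runs in the
background, you can keep backup copies of each partition. A minimal
sketch (the cache name and backup count are just examples):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class BackupConfigSketch {
        public static void main(String[] args) {
            CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("calls");

            // Partition the data across the cluster and keep one backup copy
            // of every partition, so losing a single node loses no entries.
            cacheCfg.setCacheMode(CacheMode.PARTITIONED);
            cacheCfg.setBackups(1);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setCacheConfiguration(cacheCfg);

            Ignition.start(cfg);
        }
    }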


> * OSGi, any issues?
>

Ignite 1.5 added OSGi support. I'm not sure which issues you are concerned
about, but here are links to the documentation:

https://apacheignite.readme.io/docs/osgi-installation-in-karaf
https://apacheignite.readme.io/docs/osgi-supported-modules
https://apacheignite.readme.io/docs/osgi-starting-inside-a-container


>
> Please ask if I need to rephrase any of my questions.
>
> --
> Best regards / Med venlig hilsen
> Nino Martinez
>
