Hello,

How do you decide on the number of continuous queries to start? I see in
one of your pictures that a query is started per key. Usually, a single
query per cache per client is enough, as long as the filter excludes the
records the client is not interested in. You can probably put the
client-specific logic into the filter.
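
A rough sketch of what I mean (MarketData, its getClients() accessor and
the String key type below are just placeholders for your actual model):

import java.io.Serializable;
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryEventFilter;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

// Deployed to the server nodes: only events that pass the filter are sent
// to the client. MarketData and getClients() are placeholders for your model.
class ClientFilter implements CacheEntryEventFilter<String, MarketData>, Serializable {
    private final String clientName;

    ClientFilter(String clientName) {
        this.clientName = clientName;
    }

    @Override public boolean evaluate(CacheEntryEvent<? extends String, ? extends MarketData> e) {
        return e.getValue().getClients().contains(clientName);
    }
}

// On the client node ('ignite' is the started client instance):
IgniteCache<String, MarketData> cache = ignite.cache("myCache");

ContinuousQuery<String, MarketData> qry = new ContinuousQuery<>();

// One filter per client, evaluated on the servers.
qry.setRemoteFilterFactory(FactoryBuilder.factoryOf(new ClientFilter("client-2")));

// Called on the client only for the entries that passed the filter.
qry.setLocalListener(evts -> {
    for (CacheEntryEvent<? extends String, ? extends MarketData> e : evts)
        System.out.println("Update: " + e.getKey() + " -> " + e.getValue());
});

// Keep the cursor open for as long as the client needs the updates.
QueryCursor<Cache.Entry<String, MarketData>> cur = cache.query(qry);

This way each client keeps a single continuous query open, and the traffic
it receives is limited by the filter rather than by the number of queries.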

As for thousands of continuous queries on a single-node cluster, this
sounds like overkill, since the server node has to evaluate all of those
queries for every data change.

-
Denis


On Tue, Feb 11, 2020 at 3:00 AM zork <pranavkhuran...@gmail.com> wrote:

> *Topology*:
>
> Server-1 --> Holds cache myCache, which contains continuously updating
> data such as market data (prices, status, trade time, etc.) for
> instruments. InstrumentId is the key for this cache.
>                    Server-1 runs with the following JVM params:
> -Xms1g, -Xmx6g
>
> Client-1 --> Pushes continuous updates to the cache
>
> Clients 2 & 3 --> Listen for updates on myCache using a separate continuous
> query for every key (i.e. one continuous query per instrumentId).
>
> The Cache Configuration is as follows:
>           cacheMode    PARTITIONED
>           atomicityMode    ATOMIC
>           backups    2
>           readFromBackup    true
>           copyOnRead    true
>           statisticsEnabled    true
>           managementEnabled    true
>
> System hardware: 8 cores, 32 GB RAM
> For now, all the servers and clients mentioned below run on the same machine.
>
> ---------------------------------------------------------
>
> *Scenario*: There are 1000 items in myCache, and client-1 pushes 3 updates
> per second to every item. Let's say both client-2 and client-3 have 1000
> different continuous queries open to listen to every update.
>
> With the above scenario, we observe server-1 alone taking 60-70% CPU and
> about 4 GB of memory.
> With this high number of continuous queries, the machine reaches 100% CPU
> utilization.
>
> *Proposed fix*: Use a single continuous query per client to listen to all
> the updates, i.e. there would be just one continuous query per client, and
> it would listen to all the updates.
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2754/Untitled.png>
>
> But now the problem is that neither client needs to listen to updates on
> all the keys in the cache. So we are thinking of adding another column to
> the Ignite cache, which we can use to filter the updates by checking
> whether the client column of the updated row contains the name of the
> client the filter is being evaluated for. E.g. the new table would look
> like this:
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2754/Untitled2.png>
>
> Would this be the correct way to achieve what we are trying to do? Or
> could this be done in some better way in Ignite?
>
> Follow-up question:
> How many continuous queries can Ignite handle at once with the
> configuration we mentioned, or are there any benchmark values available
> for a particular configuration? Is it fine to have as many as 1000 (or
> even more) continuous queries running at once? If yes, how can we make
> them less CPU-intensive and more performant?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
