Re: Continuous Query remote listener misses some events or respond really late

2017-06-09 Thread Sasha Belyak
Thanks for your reply. From the code I see that you log only entries with
non-null values. If you're absolutely sure that you never put null into the
cache, I will create a load test to reproduce this and file an issue for you.
But it would be great if you moved the logging before the
event.getValue() != null check.
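For illustration, a sketch of that suggestion against the checkStatus filter
from your snippet (the Integer key type is an assumption):

static CacheEntryEventFilter<Integer, Trade> checkStatus(TradeStatus status) {
    return event -> {
        // Log first, so events dropped by the null check are still visible.
        LOG.debug("Remote filter event: key={} value={}", event.getKey(), event.getValue());
        return event.getValue() != null
            && checkStatusPredicate(status).apply(event.getKey(), event.getValue());
    };
}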

On Wednesday, June 7, 2017, begineer wrote:

> Hi.. Sorry, it's quite late to reply. The CQ is set up in the execute
> method of the service, not in init(), but we do have an initialQuery in
> the CQ to scan existing events matching the filter. Below is a snapshot of
> one of the many Ignite services set up to process a trade when it moves to
> a particular status.
>
> As you can see, I have added logs to the remote filter predicate. But
> these logs don't get printed when a trade gets stuck at a particular
> status. So I assume the remote filter does not pick up the events it is
> supposed to track.
>
> // TradeStatus.java
> public enum TradeStatus {
>     NEW, CHANGED, EXPIRED, FAILED, UNCHANGED, SUCCESS
> }
>
> // ChangedTradeService.java
> // (Generic type parameters were stripped by the mail archive; an Integer
> // key is assumed below, since Trade.pkey is an int.)
> import javax.cache.Cache;
> import javax.cache.event.CacheEntryEventFilter;
> import javax.cache.event.CacheEntryUpdatedListener;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.cache.query.ContinuousQuery;
> import org.apache.ignite.cache.query.QueryCursor;
> import org.apache.ignite.cache.query.ScanQuery;
> import org.apache.ignite.lang.IgniteBiPredicate;
> import org.apache.ignite.resources.IgniteInstanceResource;
> import org.apache.ignite.services.Service;
> import org.apache.ignite.services.ServiceContext;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> import static javax.cache.configuration.FactoryBuilder.factoryOf;
>
> /**
>  * Ignite Service which picks up CHANGED trade delivery items.
>  */
> public class ChangedTradeService implements Service {
>
>     private static final Logger LOG = LoggerFactory.getLogger(ChangedTradeService.class);
>
>     // This service tracks trades in the CHANGED status.
>     private final TradeStatus status = TradeStatus.CHANGED;
>
>     @IgniteInstanceResource
>     private transient Ignite ignite;
>     private transient IgniteCache<Integer, Trade> tradeCache;
>     private transient QueryCursor<Cache.Entry<Integer, Trade>> cursor;
>
>     @Override
>     public void init(ServiceContext serviceContext) throws Exception {
>         tradeCache = ignite.cache("tradeCache");
>     }
>
>     @Override
>     public void execute(ServiceContext serviceContext) throws Exception {
>         ContinuousQuery<Integer, Trade> query = new ContinuousQuery<>();
>         query.setLocalListener((CacheEntryUpdatedListener<Integer, Trade>) events ->
>             events.forEach(event -> process(event.getValue())));
>         query.setRemoteFilterFactory(factoryOf(checkStatus(status)));
>         query.setInitialQuery(new ScanQuery<>(checkStatusPredicate(status)));
>         // Assign to the field (not a local variable) so cancel() can close it.
>         cursor = tradeCache.query(query);
>         cursor.forEach(entry -> process(entry.getValue()));
>     }
>
>     private void process(Trade item) {
>         LOG.info("transition started for trade id: {}", item.getPkey());
>         // Move the trade to the next state (e.g. SUCCESS); the next service
>         // (whose CQ is looking for the SUCCESS status) will pick it up for
>         // further processing, and so on.
>         LOG.info("transition finished for trade id: {}", item.getPkey());
>     }
>
>     @Override
>     public void cancel(ServiceContext serviceContext) {
>         cursor.close();
>     }
>
>     static CacheEntryEventFilter<Integer, Trade> checkStatus(TradeStatus status) {
>         return event -> event.getValue() != null
>             && checkStatusPredicate(status).apply(event.getKey(), event.getValue());
>     }
>
>     static IgniteBiPredicate<Integer, Trade> checkStatusPredicate(TradeStatus status) {
>         return (k, v) -> {
>             LOG.debug("Status checking for: {} Event value: {} isStatus: {}",
>                 status, v, v.getStatus() == status);
>             return v.getStatus() == status;
>         };
>     }
> }


Re: Why am I not getting the size of cache?

2017-05-18 Thread Sasha Belyak
Hi,

Why don't you use cache.size() instead of cache.localSize()? localSize()
counts only the entries stored on the local node, and if your Ignite node
connects to a cluster, the two keys from this test may be stored on other
nodes, so you get a local cache size of zero. Look in the log output for
something like "Topology snapshot [ver=1, servers=1, clients=0, CPUs=4,
heap=3.5GB]" to find out how many server nodes are in your current topology.
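For illustration, a minimal sketch of the difference (the cache name
"myCache" is an assumption):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CachePeekMode;

public class CacheSizeExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
            cache.put(1, "a");
            cache.put(2, "b");

            // Cluster-wide count: asks all server nodes, so it sees both entries.
            System.out.println("size = " + cache.size(CachePeekMode.PRIMARY));

            // Local count: only entries whose primary copy lives on this node;
            // it can be 0 if both keys map to other nodes in the cluster.
            System.out.println("localSize = " + cache.localSize(CachePeekMode.PRIMARY));
        }
    }
}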

Best Regards,
Alexander Belyak

2017-05-19 11:36 GMT+07:00 ignitedFox :

> Hi all,
>
> I am using the below code to insert some sample data to my cache, and
> display the count of entries in it.
>
>
>
> But it is showing 0. Could somebody kindly tell me why this is happening
> and
> how to fix this?
>
> Thanks in advance..
>


Re: How Cache Management Works

2017-05-18 Thread Sasha Belyak
Hi,
Ignite splits every cache into a number of partitions and stores those
partitions on the cluster nodes. It decides where to store data in two steps:
1) calculate the partition ID from the key (a long in your case);
2) apply the affinity function to the partition ID and the cluster topology.
For every partition, the affinity function returns a list of nodes: primary,
backup1, backup2, ... (if you specify backups in the cache configuration),
and Ignite stores the partition according to this list.
If you specify a node filter in the cache configuration, Ignite will use only
the matching nodes (i.e. only the filtered nodes participate in the affinity
function), and with the standard affinity function it will try to distribute
an equal number of partitions to all nodes. The docs have nice pictures for
this: https://apacheignite.readme.io/docs/cache-modes#partitioned-mode
What exactly are you interested in?
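For illustration, a minimal sketch of inspecting that two-step mapping with
the Affinity API (it assumes a cache named "myCache" already exists):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class AffinityExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            Affinity<Long> aff = ignite.affinity("myCache");

            long key = 42L;
            int part = aff.partition(key);               // step 1: key -> partition ID
            ClusterNode primary = aff.mapKeyToNode(key); // step 2: partition -> primary node

            System.out.println("key " + key + " -> partition " + part
                + " -> node " + primary.id());
        }
    }
}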

2017-05-18 13:16 GMT+07:00 rushi_rashi :

> Hi,
>
> I have a cache partitioned across 4 nodes, with 6 Ignite instances per
> node. The key used for storing data is of type long.
> I want to know how Ignite decides where to store data.


Re: Write Behind with delete performance

2017-05-10 Thread Sasha Belyak
Yes, if the key hasn't been flushed yet (and the event hasn't been collected
by a flusher thread for flushing), it will simply be overwritten in memory
with the new value (the operation doesn't matter, i.e. an insert can be
overwritten by a delete or vice versa).
So in your example, if the WB cache gets
insert k1=0, k2=2, k3=3 and delete k1=0
and no flush starts between these operations, it eventually writes to the
store:
deleteAll(k1=0), writeAll(k2=2, k3=3).
Of course, the WB cache must still process deleteAll(k1=0), because it can't
know whether the k1 key was already in the store before our example.
You can read about it at
https://apacheignite.readme.io/docs/persistent-store#write-behind-caching
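For illustration, a sketch of the scenario above (ignite is a started
instance and MyStore stands in for your own CacheStore implementation; both
are assumptions):

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Integer> ccfg = new CacheConfiguration<>("wbCache");
ccfg.setWriteThrough(true);
ccfg.setWriteBehindEnabled(true);
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyStore.class));

IgniteCache<Long, Integer> cache = ignite.getOrCreateCache(ccfg);

// Insert k1, k2, k3, then delete k1 before any flush happens:
cache.put(1L, 0);
cache.put(2L, 2);
cache.put(3L, 3);
cache.remove(1L);

// If no flush ran in between, the store eventually sees only
// deleteAll(k1) and writeAll(k2=2, k3=3): the k1 insert was coalesced away.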

2017-05-11 0:24 GMT+07:00 waterg :

> Thank you for the quick explanation and for creating the JIRA tickets.
>
> Just a thought:
> If we insert k1=0, k2=2, k3=3 and delete k1=0, and k1 has not been flushed
> to the store yet, would it be possible to just delete it in memory, without
> flushing it to the store, and call deleteAll only for records like k1 that
> have already been flushed to the store?


Re: Write Behind with delete performance

2017-05-09 Thread Sasha Belyak
Hello Jessie,
this happens because write-behind works as follows:
1) Cache updates are stored in a sorted map: the oldest updates go to the
store first, but if you update a key that is already in the write-behind map
(delete/insert, any operation with the same key), the operations are
coalesced (to reduce the store load), and the new operation keeps the same
order as the old one (i.e. if you put the key-value pairs k1=1, k2=2, k3=0,
k1=3 into the WB store, it is translated to k1=3, k2=2, k3=0, in that order,
not k2=2, k3=0, k1=3).
2) Once the whole WB cache grows to writeBehindFlushSize (or when the
writeBehindFlushFrequency timeout fires... but that is not our case), the
flusher threads start working:
3) All flushers process the sorted map (the WB cache) and:
3.0) check whether they have collected writeBehindBatchSize entries, or
whether the next entry has a different operation than the previous one;
3.1) lock the entry and, if it is not currently being evicted, switch it to
the PENDING state, then unlock it;
3.2) execute writeAll or deleteAll on the store (in step 3.0 the flusher
verified that all entries have the same operation) and switch all keys to
the FLUSHED state.
The flushers work until the cache is empty.

One additional point: if a writer thread tries to update a key that is
already present in the WB map and that key is in the PENDING state, the
writer waits until the key reaches the FLUSHED state.

From this long text we can draw two conclusions:
1) If insert/update and delete operations frequently alternate, the flusher
can't collect a whole batch and will flush only runs of consecutive
inserts/updates or consecutive deletes.
2) If some key is updated very often, writers can frequently end up waiting
for the flusher to flush that key.
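For reference, the knobs mentioned above map onto the cache configuration
like this (a sketch; the values are illustrative assumptions, not
recommendations):

import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Long, Integer> ccfg = new CacheConfiguration<>("wbCache");
ccfg.setWriteBehindEnabled(true);
ccfg.setWriteBehindFlushSize(10_240);     // flushers start when the WB map reaches this size
ccfg.setWriteBehindFlushFrequency(5_000); // ...or every 5 seconds, whichever comes first
ccfg.setWriteBehindBatchSize(512);        // max entries per writeAll/deleteAll batch
ccfg.setWriteBehindFlushThreadCount(4);   // number of flusher threads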

It's not perfect, but I filed two issues to improve it:
1) https://issues.apache.org/jira/browse/IGNITE-5184
2) https://issues.apache.org/jira/browse/IGNITE-5003

And thanks for the excellent description of the problem.

2017-05-10 7:46 GMT+07:00 waterg :

>
> Hello, I've come up with code where:
>
> 1. the writes in parallel work great with the parameters below
>
>
>
>
> BUT, when I start to add deletes to the process, for example 1 remove for
> every 19 puts, the write-behind performance starts to deteriorate, and the
> log looks like the output below:
> you can see the deletes are almost always executed with only 1 record
> each, even though the deleteAll method was called.
>
> Tue May 09 10:49:38 PDT 2017 Write w Delete start
> --
> [1494352178763]---Datebase BATCH upsert:87 entries successful
> [1494352178763]---Datebase BATCH upsert:35 entries successful
> [1494352178780]---Datebase BATCH upsert:100 entries successful
> [1494352178782]---Datebase BATCH upsert:100 entries successful
> [1494352178784]---Datebase BATCH upsert:100 entries successful
> [1494352178884]---Datebase BATCH upsert:100 entries successful
> [1494352178902]---Datebase BATCH upsert:39 entries successful
> [1494352178902]---Datebase BATCH upsert:100 entries successful
> [1494352178903]---Datebase BATCH upsert:39 entries successful
> [1494352178903]---Datebase BATCH upsert:29 entries successful
> [1494352178906]---Datebase BATCH upsert:100 entries successful
> [1494352178906]---Datebase BATCH upsert:39 entries successful
> [1494352178910]---Datebase BATCH upsert:100 entries successful
> [1494352178910]---Datebase BATCH upsert:39 entries successful
> [1494352178923]---Datebase BATCH upsert:39 entries successful
> [1494352178960]---Datebase BATCH upsert:1 entries successful
> [1494352179009]---Datebase BATCH upsert:38 entries successful
> [1494352179023]---Datebase BATCH DELETE:1 entries successful
> [1494352179023]---Datebase BATCH DELETE:1 entries successful
> [1494352179024]---Datebase BATCH DELETE:1 entries successful
> [1494352179038]---Datebase BATCH upsert:39 entries successful
> [1494352179039]---Datebase BATCH DELETE:1 entries successful
> [1494352179039]---Datebase BATCH DELETE:1 entries successful
> [1494352179039]---Datebase BATCH upsert:36 entries successful
> [1494352179043]---Datebase BATCH DELETE:1 entries successful
> [1494352179095]---Datebase BATCH upsert:36 entries successful
> [1494352179135]---Datebase BATCH upsert:1 entries successful
> [1494352179139]---Datebase BATCH DELETE:1 entries successful
> [1494352179143]---Datebase BATCH upsert:1 entries 

Re: Continuous Query remote listener misses some events or respond really late

2017-05-05 Thread Sasha Belyak
As far as I understand, you create the CQ in Service.init(), so the node
running the service is the CQ node. All other nodes in the grid will send CQ
events to this node to be processed by your service, and if you don't
configure a nodeFilter for the service, any node can run it, so any node can
be the CQ node.
But it shouldn't be a problem if you create the CQ in Service.init() and
don't have too heavy a load on your cluster (in any case, if a data-owner
node fails to deliver messages to the node running the service (the CQ node),
you should see it in the logs). If you give some code examples of how you use
the CQ, I can say more.
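If you want the CQ node to be deterministic, you can pin the service with a
node filter. A minimal sketch, assuming a service class ChangedTradeService
and a user attribute "svc.role" set on the target node (both names are
assumptions):

import org.apache.ignite.services.ServiceConfiguration;

ServiceConfiguration svcCfg = new ServiceConfiguration();
svcCfg.setName("changedTradeService");
svcCfg.setService(new ChangedTradeService());
svcCfg.setTotalCount(1);      // one instance cluster-wide
svcCfg.setMaxPerNodeCount(1);
// Deploy only to nodes started with the attribute svc.role=cq,
// so the CQ node is always one of those nodes.
svcCfg.setNodeFilter(node -> "cq".equals(node.attribute("svc.role")));
ignite.services().deploy(svcCfg);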

2017-05-05 17:59 GMT+07:00 begineer :

> Thanks. In my application, all nodes are server nodes.
> And how can we be sure that the node removed from / reconnected to the
> grid is the CQ node? It can be any node.
> Also, is this issue possible in all the scenarios below?
> 1. if the node happens to be the CQ node, or any node?
> 2. the node is removed from the grid forcefully (manual shutdown)
> 3. the node went down for some reason and the grid dropped it
>
> The 3rd one looks like the safe option, since the node is dropped by the
> grid itself, so the grid should be aware of where to shift the CQ. Please
> correct me if I am wrong.
>


Re: Continuous Query remote listener misses some events or respond really late

2017-05-05 Thread Sasha Belyak
If the node with the CQ leaves the grid (or just reconnects to it, in the
case of a client node), you should recreate the CQ, because some cache
updates can happen while the node with the CQ listener can't receive them.
What happens in this case:
1) The node with the changed cache entry processes the CQ; the entry passes
the remote filter, and the node tries to send a continuous query event
message to the CQ node.
2) If the sender node can't push the message for any reason (the sender will
retry a few times), it can't wait for the receiver too long and drops the
message.
3) After the CQ node returns to the cluster, it must recreate the CQ and run
the initialQuery to get such events.
If you are sure that no CQ owner node leaves the grid, we need to keep
digging, because it could be a bug.
And yes, I agree it is not obvious that you must recreate the CQ after a
client reconnect, but that is how Ignite works now.
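For a client node, one way to automate this is a local listener for the
reconnect event. A rough sketch, where recreateContinuousQuery() is a
placeholder for your own method that closes the old cursor and re-runs the
CQ with its initialQuery (note that the event type must be enabled via
IgniteConfiguration.setIncludeEventTypes):

import org.apache.ignite.events.EventType;

ignite.events().localListen(evt -> {
    // Re-running the CQ with setInitialQuery(new ScanQuery<>(...))
    // picks up the updates missed while the node was disconnected.
    recreateContinuousQuery();
    return true; // keep the listener registered
}, EventType.EVT_CLIENT_NODE_RECONNECTED);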

2017-05-05 16:56 GMT+07:00 begineer :

> Umm, actually nothing gets logged in that scenario. However, as you
> indicated earlier, I could see trades get stuck if a node leaves the grid
> (not always).
> Do you know why that happens? Is it a bug?


Re: Continuous Query remote listener misses some events or respond really late

2017-05-04 Thread Sasha Belyak
Can you share your log files?

2017-05-03 19:05 GMT+07:00 begineer :

> 1) How do you use ContinuousQuery: with initialQuery or without?
> With an initial query having the same predicate.
> 2) Did some nodes disconnect when you lose updates?
> No.
> 3) Did you log entries in CQ.localListener? Just to be sure the error is
> in the CQ logic, not in your service logic.
> No log entries in the remote filter, nor in the local listener.
> 4) Can someone update old entries? Maybe they just get into the CQ again
> after 4-5 hours via an external update?
> I tried adding the same events just to trigger the event again; sometimes
> it moves ahead (the event is discovered), sometimes it gets stuck in the
> same state. Also, the CQ detects them on its own after the long delay
> mentioned; we don't add any events in that case.
> Regards,
> Surinder


Re: Continuous Query remote listener misses some events or respond really late

2017-05-03 Thread Sasha Belyak
1) How do you use ContinuousQuery: with an initialQuery or without?
2) Did some nodes disconnect when you lose updates?
3) Did you log entries in CQ.localListener? Just to be sure the error is in
the CQ logic, not in your service logic.
4) Can someone update old entries? Maybe they just get into the CQ again
after 4-5 hours via an external update?

2017-05-03 17:13 GMT+07:00 begineer :

> Hi, thanks for looking into this. It's not easily reproducible; I only see
> it sometimes. Here is my cache and service configuration.
>
> Cache configuration:
>
> readThrough="true"
> writeThrough="true"
> writeBehindEnabled="true"
> writeBehindFlushThreadCount="5"
> backups="1"
> readFromBackup="true"
>
> Service configuration:
>
> maxPerNodeCount="1"
> totalCount="1"
>
> The cache is distributed over 12 nodes.


Re: Continuous Query remote listener misses some events or respond really late

2017-05-03 Thread Sasha Belyak
Hi,
I'm trying to reproduce it on one host (with 6 Ignite server nodes), but
everything works fine for me. Can you share your Ignite configuration, cache
configuration, logs, or a reproducer?

2017-05-02 15:48 GMT+07:00 begineer :

> Hi,
> I am currently facing an intermittent issue with a continuous query. I
> can't really reproduce it, but if anyone has faced this issue, please let
> me know.
> My application is deployed on 12 nodes, with 5-6 services used to detect
> their respective events using continuous queries.
> Let's say I have a cache of type Cache<Integer, Trade>, where Trade looks
> like this:
>
> class Trade {
>     int pkey;
>     String type;
>     ...
>     TradeState state; // enum
> }
>
> The CQ detects a new entry in the cache (with an updated state) and checks
> whether the trade's state matches its remote filter criteria.
> A trade moves from state1 to state5. Each CQ listens for one stage, does
> some processing, and moves the trade to the next state, where the next CQ
> will detect it and act accordingly.
> The problem is that sometimes a trade gets stuck in some state and does
> not move. I have put logs in the remote listener's predicate method (which
> checks the filter criteria), but these logs don't get printed to the
> console. Sometimes the CQ detects events only after 4-5 hours.
> I am using Ignite 1.8.2.
> Has anyone seen this behavior? I will be grateful for any help.


Re: Ignite Web Sessions Caching Failover question

2017-04-28 Thread Sasha Belyak
I think it's a very interesting use case (if I understand you correctly): to
store data locally and, when the client can connect to the cluster, share
the data. But for now this use case isn't supported directly; you should
probably put a local write-behind cache in front of the Apache Ignite client
node and work with that cache from your app. The cache should:
1) have an eviction policy to prevent overflowing the local JVM memory;
2) try to write the cached data in the background, so that all data ends up
in your cluster when it is available. A rough sketch of this idea follows
below.
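A minimal sketch of such a buffer, just to illustrate the idea (all names
and sizes are assumptions, not a production implementation):

import java.util.AbstractMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.ignite.IgniteCache;

class LocalWriteBehindBuffer {
    private final BlockingQueue<Map.Entry<String, String>> queue =
        new LinkedBlockingQueue<>(10_000); // bound prevents local JVM overflow (point 1)
    private final IgniteCache<String, String> cache;

    LocalWriteBehindBuffer(IgniteCache<String, String> cache) {
        this.cache = cache;
        Thread drainer = new Thread(this::drain, "local-wb-drainer");
        drainer.setDaemon(true);
        drainer.start();
    }

    void put(String key, String val) {
        // Crude eviction policy: drop the oldest entry when the buffer is full.
        while (!queue.offer(new AbstractMap.SimpleEntry<>(key, val)))
            queue.poll();
    }

    private void drain() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                Map.Entry<String, String> e = queue.take();
                try {
                    cache.put(e.getKey(), e.getValue()); // fails while the cluster is unreachable
                }
                catch (Exception clusterUnavailable) {
                    queue.offer(e);      // re-queue the entry (point 2)...
                    Thread.sleep(1_000); // ...and back off before retrying
                }
            }
            catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
    }
}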


Re: Apache ignite performance

2017-04-28 Thread Sasha Belyak
Hello Sweta Das!
1. It's not recommended to run performance tests in a virtual environment,
because such environments have too many points where something can work
unexpectedly: memory swapping, virtual file systems, driver problems, and
effects from other virtual machines on the same host. It would probably be
better to run the tests on real hardware first and then compare the results
with your virtual environment.
2. Are you interested in monitoring external events, or events occurring
inside an Ignite cluster?

2017-04-26 23:33 GMT+07:00 sweta Das :

> Hi
> We are evaluating Apache Ignite for compute and data grid on virtual
> machines.
> Can anyone give me ideas or input on the points below?
> 1. Performance or drawbacks of Ignite on virtual machines versus Ignite
> deployed on physical servers? Has anyone seen any memory overcommitting in
> VMs?
> 2. Is there any way to monitor real-time events in production?