Thanks for the replies

> do you execute transactions in parallel? Usually if the keys used in
> transactions do not intersect, you can start several threads and execute
> transactions from them simultaneously.

The timings I posted are for updating 11k entries in a cache that was
pre-loaded with 1 million records. A single transaction was started around
this update. The update also uses bulk methods (i.e. cache.putAll()) to
update all 11k entries in the cache; I observed a tremendous performance
improvement from doing a bulk update.
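For reference, the update described above follows roughly this pattern (a
minimal sketch; the cache name "records", the key/value types, and the
transaction concurrency/isolation settings are placeholders, not the exact
values from my POC code):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class BulkUpdateSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache =
                ignite.getOrCreateCache("records");

            // Collect all 11k updates into a single map first...
            Map<Integer, String> batch = new HashMap<>();
            for (int i = 0; i < 11_000; i++)
                batch.put(i, "updated-" + i);

            // ...then apply them with one putAll() inside a single
            // transaction, instead of 11k individual put() calls.
            try (Transaction tx = ignite.transactions().txStart(
                    TransactionConcurrency.PESSIMISTIC,
                    TransactionIsolation.REPEATABLE_READ)) {
                cache.putAll(batch);
                tx.commit();
            }
        }
    }
}
```

The single putAll() is what gave the big improvement, since it batches the
network round-trips per node instead of paying one per entry.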

> how do you measure and what does your code look like? Also don't forget
> about VM warmup before you start gathering performance statistics.

I monitor it with loggers. It's standalone application code for POC
purposes. The code basically updates cache entries serially from one or
more caches. A snippet of the relevant code is shared here -
http://pastebin.com/ENv6q7Ni

> what is the reason why you started measuring these particular
> transactions? Do you have any specific use case? If your use case is just
> to preload that cache as fast as possible, you can use IgniteDataStreamer:
https://apacheignite.readme.io/docs/data-streamers
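For completeness, the suggested streamer-based preload would look roughly
like this (a sketch only; the cache name "records" and the generated values
are placeholders):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class PreloadSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("records");

            // The streamer batches and routes entries per node, which is
            // much faster than put()/putAll() for initial loading.
            try (IgniteDataStreamer<Integer, String> streamer =
                     ignite.dataStreamer("records")) {
                // Required if entries may already exist and must be updated.
                streamer.allowOverwrite(true);
                for (int i = 0; i < 1_000_000; i++)
                    streamer.addData(i, "record-" + i);
            } // close() flushes any remaining buffered entries
        }
    }
}
```

Note that IgniteDataStreamer is not transactional, so it fits the initial
preload but not the transactional updates described below.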

Our use case is to update cache entries on a user event. This update
practically triggers updates in multiple caches, which in turn trigger
updates in other caches, and so on and so forth. A dependency graph
framework is implemented to determine the next set of updates.
All these updates are to be executed in one transaction. I tried using
the affinity features of Ignite but saw very slow performance with
ignite.compute().affinityCall().
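For context, the affinity-based variant I tried follows roughly this
pattern (a sketch; the "records" cache, key, and update logic are
placeholders for the real dependency-graph updates):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class AffinityCallSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache =
                ignite.getOrCreateCache("records");
            cache.put(42, "initial");

            final int key = 42;
            // Ships the closure to the node that owns 'key', so the
            // get/put below are local on that node. The closure itself
            // still costs a compute round-trip, which is where I saw
            // the slowdown for many small updates.
            String result = ignite.compute().affinityCall("records", key,
                () -> {
                    IgniteCache<Integer, String> local =
                        Ignition.localIgnite().cache("records");
                    String updated = local.get(key) + "-updated";
                    local.put(key, updated);
                    return updated;
                });
            System.out.println(result);
        }
    }
}
```

This colocates the data access, but for our pattern (many small dependent
updates in one transaction) the per-call compute overhead dominated.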

Let me know if you need more details.
Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Slow-Transaction-Performance-tp5548p5614.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.