Hi,

If every transaction updates around 11K entries, it's not surprising that performance suffers, especially when multiple transactions execute in parallel and work with the same subset of keys.

My suggestion is to split such a huge transaction into smaller ones, if that's possible on your side. If that doesn't work for your use case, try OPTIMISTIC/SERIALIZABLE transactions, which should work faster. Also, please see the difference between the isolation levels [1], because I'm not sure OPTIMISTIC/READ_COMMITTED will work for you. Finally, affinity run/call may help if you collocate caches [2] around some entity and send the update to the node that keeps all of the data, avoiding data transfer over the wire. I've put a few rough sketches of these ideas below.
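First, splitting the big update into smaller transactional batches. This is only a minimal sketch and assumes the whole update doesn't have to be atomic; the cache/key/value types, the 'updates' map, and the batch size are placeholders for your actual code (the cache must be TRANSACTIONAL for txStart() to apply):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;

    public class BatchedUpdates {
        private static final int BATCH_SIZE = 1_000; // tune for your workload

        // Commits 'updates' in chunks of BATCH_SIZE, each chunk in its own transaction.
        static void applyInBatches(Ignite ignite, IgniteCache<Integer, String> cache,
            Map<Integer, String> updates) {
            Map<Integer, String> batch = new LinkedHashMap<>();

            for (Map.Entry<Integer, String> e : updates.entrySet()) {
                batch.put(e.getKey(), e.getValue());

                if (batch.size() == BATCH_SIZE) {
                    commitBatch(ignite, cache, batch);
                    batch.clear();
                }
            }

            if (!batch.isEmpty())
                commitBatch(ignite, cache, batch);
        }

        private static void commitBatch(Ignite ignite, IgniteCache<Integer, String> cache,
            Map<Integer, String> batch) {
            // Transaction is AutoCloseable; close() rolls back if commit() was not called.
            try (Transaction tx = ignite.transactions().txStart()) {
                cache.putAll(batch); // keep the bulk putAll() you already use
                tx.commit();
            }
        }
    }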
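Second, the OPTIMISTIC/SERIALIZABLE variant. With this mode the commit may throw TransactionOptimisticException if a concurrent transaction changed the same keys, so you need a retry loop. Again just a sketch, with the same placeholder cache and map:

    import java.util.Map;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.transactions.Transaction;
    import org.apache.ignite.transactions.TransactionConcurrency;
    import org.apache.ignite.transactions.TransactionIsolation;
    import org.apache.ignite.transactions.TransactionOptimisticException;

    public class OptimisticUpdate {
        static void applyWithRetry(Ignite ignite, IgniteCache<Integer, String> cache,
            Map<Integer, String> updates) {
            while (true) {
                try (Transaction tx = ignite.transactions().txStart(
                    TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
                    cache.putAll(updates);
                    tx.commit();

                    return;
                }
                catch (TransactionOptimisticException e) {
                    // A concurrent transaction modified the same keys; retry.
                }
            }
        }
    }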
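Third, routing the update to the data with affinityRun(). Assuming your caches are collocated on some entity key, the closure is executed on the primary node for that key, so the updates stay local. The cache name "entityCache" and the key/value types are made up for the example:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.lang.IgniteRunnable;
    import org.apache.ignite.resources.IgniteInstanceResource;

    public class CollocatedUpdate implements IgniteRunnable {
        @IgniteInstanceResource
        private transient Ignite ignite; // injected on the node that runs the job

        private final int entityKey;

        public CollocatedUpdate(int entityKey) {
            this.entityKey = entityKey;
        }

        @Override public void run() {
            // Runs on the primary node for entityKey, so updates to entries
            // collocated with it don't travel over the wire.
            IgniteCache<Integer, String> cache = ignite.cache("entityCache");

            cache.put(entityKey, "updated value");
        }
    }

    // Usage:
    // ignite.compute().affinityRun("entityCache", entityKey, new CollocatedUpdate(entityKey));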
[1] https://apacheignite.readme.io/v1.6/docs/transactions#optimistic-transactions
[2] https://apacheignite.readme.io/docs/affinity-collocation

— Denis

> On Jun 13, 2016, at 2:32 PM, pragmaticbigdata <[email protected]> wrote:
>
> Thanks for the replies.
>
>> do you execute transactions in parallel? Usually if keys that are used in
>> transactions are not intersected you can start several threads and execute
>> transactions from them simultaneously.
>
> The timings I posted are for updating 11k entries in a cache that was
> pre-loaded with 1 million records. A single transaction was started around
> this update. The update also uses bulk methods (i.e. cache.putAll()) to
> update all 11k entries in the cache. I had observed a tremendous performance
> improvement from doing a bulk update.
>
>> how do you measure and what does your code look like? Also don't forget
>> about VM warmup before you start gathering performance statistics.
>
> I monitor it with loggers. It's standalone application code for POC
> purposes. The code serially updates cache entries from single or multiple
> caches. A snippet of the relevant code is shared here -
> http://pastebin.com/ENv6q7Ni
>
>> what is the reason why you started measuring these particular transactions?
>> Do you have a specific use case? If your use case is just to preload that
>> cache as fast as possible, you can use IgniteDataStreamer for that:
>> https://apacheignite.readme.io/docs/data-streamers
>
> Our use case is to update cache entries on a user event. This update
> triggers updates in multiple caches, which in turn trigger updates in other
> caches, and so on. A dependency graph framework is implemented to determine
> the next set of updates. All these updates are to be performed in one
> transaction. I tried using the affinity features of Ignite but experienced
> very slow performance with ignite.compute().affinityCall().
>
> Let me know if you need more details.
> Thanks.
