Hi all,

I have two questions for you!

*QUESTION 1*

I'm following the example in [1] (a mix between "jdbc transactional" and
"jdbc bulk operations") and I've enabled write-behind; however, after the
first 10k-20k insertions the performance drops *dramatically*.

Based on print statements I've added to the CacheStore, this is what
Ignite is doing:

- writeAll is called with 512 elements (Ignite buffers elements, that's good)
- openConnection with autocommit=true is called each time inside writeAll
(since the session is not stored in ATOMIC mode)
- writeAll is called with 512 elements a few dozen times, each time it
opens a new JDBC connection as mentioned above
- ...
- writeAll called with ONE element (for some reason Ignite stops buffering
elements)
- writeAll is called with ONE element from here on, each time it opens a
new JDBC connection as mentioned above
- ...
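For reference, my writeAll() is essentially the batch-insert pattern from [1]. This is a simplified sketch, not my exact code: the Person value type, the table/column names, and the openConnection() helper are placeholders standing in for my real ones.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Collection;
import javax.cache.Cache;
import javax.cache.integration.CacheWriterException;

// Inside my CacheStore implementation:
@Override
public void writeAll(Collection<Cache.Entry<? extends Long, ? extends Person>> entries) {
    // This is where I see openConnection(autocommit=true) on EVERY call:
    // in ATOMIC mode there is no session to carry the connection across calls.
    try (Connection conn = openConnection(true);
         PreparedStatement st = conn.prepareStatement(
             "INSERT INTO person (id, name) VALUES (?, ?)")) { // placeholder SQL
        for (Cache.Entry<? extends Long, ? extends Person> e : entries) {
            st.setLong(1, e.getKey());
            st.setString(2, e.getValue().getName());
            st.addBatch();
        }
        // One JDBC round trip for the whole buffer -- fast while Ignite
        // hands me 512 entries, painfully slow once it drops to 1.
        st.executeBatch();
    }
    catch (SQLException ex) {
        throw new CacheWriterException("Failed to write entries.", ex);
    }
}
```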

Things to note:

- All config values are the default ones except for write-through and
write-behind, which are both enabled.
- I'm running this as a server node (only one node on the cluster, the
application itself).
- I see the problem even with a big heap (i.e., Ignite is not anywhere near
out of memory).
- I'm using PostgreSQL for this test (it handles around 40k inserts per
second on this machine, so the database shouldn't be the bottleneck).
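For completeness, here are the write-behind knobs I'm leaving at their defaults, written out explicitly with the default values from CacheConfiguration's javadoc (this is a sketch with a placeholder cache name and value type, not my exact config):

```java
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("myCache");

// The only two things I change:
ccfg.setWriteThrough(true);
ccfg.setWriteBehindEnabled(true);

// Defaults I am NOT overriding:
ccfg.setWriteBehindFlushSize(10_240);     // max buffered entries before a forced flush
ccfg.setWriteBehindFlushFrequency(5_000); // flush interval, in ms
ccfg.setWriteBehindFlushThreadCount(1);   // single flusher thread
ccfg.setWriteBehindBatchSize(512);        // matches the 512-entry writeAll() calls I see
```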

What is causing Ignite to stop buffering elements after calling writeAll()
a few dozen times?

*QUESTION 2*

I've read in the docs that ATOMIC mode (the default) is better for
performance, but I don't understand why. If I'm not mistaken, using
TRANSACTIONAL mode would cause the CacheStore to reuse connections instead
of calling openConnection(autocommit=true) on each writeAll().

Shouldn't it be better to use transactional mode?
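To be concrete, the switch I'm asking about is just this one setting (placeholder cache name and value type again):

```java
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("myCache");

// Default is CacheAtomicityMode.ATOMIC; this is the alternative I'm considering.
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
```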

Regards,
Matt

[1]
https://apacheignite.readme.io/docs/persistent-store#section-cachestore-example
