No, only with inserts. I haven't tried removing at this rate yet, but it may
have the same problem.

I'm debugging Ignite's internal code and I may be onto something. The thing
is, Ignite has a cacheMaxSize (aka WriteBehindFlushSize) and a
cacheCriticalSize (which by default is cacheMaxSize * 1.5). When the
write-behind buffer reaches the critical size, Ignite starts writing elements
SYNCHRONOUSLY, as you can see in [1].

I think this only makes things worse: since a single value is flushed at a
time, flushing becomes much slower, which forces Ignite to do even more
synchronous writes.
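
In case it's useful, this is roughly the write-behind tuning I plan to try
next to keep the write cache below the critical size. The values, the cache
name and the MyValue/MyJdbcStore names are just placeholders I made up, not
validated settings:

    import javax.cache.configuration.FactoryBuilder;
    import org.apache.ignite.configuration.CacheConfiguration;

    CacheConfiguration<Long, MyValue> ccfg = new CacheConfiguration<>("myCache");

    // Write-through and write-behind both enabled, as in my test.
    ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyJdbcStore.class));
    ccfg.setWriteThrough(true);
    ccfg.setWriteBehindEnabled(true);

    // Bigger buffer and more flusher threads, so the write cache (hopefully)
    // never grows past cacheCriticalSize = flushSize * 1.5 and falls back to
    // synchronous single-value flushes.
    ccfg.setWriteBehindFlushSize(102_400);    // default is 10_240
    ccfg.setWriteBehindFlushThreadCount(4);   // default is 1
    ccfg.setWriteBehindBatchSize(512);        // entries passed to writeAll()
    ccfg.setWriteBehindFlushFrequency(5_000); // flush interval in ms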

Anyway, I'm still not sure why the cache reaches that level in the first
place when the database is clearly able to keep up with the insertions. I'll
check whether it has to do with the number of open connections or something
else.

Any insight on this is very welcome!

[1]
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStore.java#L620

On Tue, May 2, 2017 at 2:17 PM, Jessie Lin <jessie.jianwei....@gmail.com>
wrote:

> I noticed that behavior when any cache.remove operation is involved. If I
> keep putting stuff in the cache, it seems to work properly.
>
> Do you use remove operation?
>
> On Tue, May 2, 2017 at 9:57 AM, Matt <dromitl...@gmail.com> wrote:
>
>> I'm stuck with this. No matter what config I use (flush size, write
>> threads, etc.), this is the behavior I always get. It's as if Ignite's
>> internal buffer is full and it's trying to write out and get rid of only
>> the oldest element, one at a time.
>>
>> Any ideas, people? What is your CacheStore configuration to avoid this?
>>
>> On Tue, May 2, 2017 at 11:50 AM, Jessie Lin <jessie.jianwei....@gmail.com>
>> wrote:
>>
>>> Hello Matt, thank you for posting. I've noticed similar behavior.
>>>
>>> Would be curious to see the response from the engineering team.
>>>
>>> Best,
>>> Jessie
>>>
>>> On Tue, May 2, 2017 at 1:03 AM, Matt <dromitl...@gmail.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> I have two questions for you!
>>>>
>>>> *QUESTION 1*
>>>>
>>>> I'm following the example in [1] (a mix between "jdbc transactional"
>>>> and "jdbc bulk operations") and I've enabled write-behind; however, after
>>>> the first 10k-20k insertions the performance drops *dramatically*.
>>>>
>>>> Based on prints I've added to the CacheStore (a sketch of the
>>>> instrumented method follows this list), I've noticed that what Ignite is
>>>> doing is this:
>>>>
>>>> - writeAll called with 512 elements (Ignite buffers elements, that's
>>>> good)
>>>> - openConnection with autocommit=true is called each time inside
>>>> writeAll (since no connection is kept in the session in atomic mode)
>>>> - writeAll is called with 512 elements a few dozen times, each time it
>>>> opens a new JDBC connection as mentioned above
>>>> - ...
>>>> - writeAll called with ONE element (for some reason Ignite stops
>>>> buffering elements)
>>>> - writeAll is called with ONE element from here on, each time it opens
>>>> a new JDBC connection as mentioned above
>>>> - ...
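>>>>
>>>> For reference, the instrumented method looks roughly like the sketch
>>>> below. The "vals" table, the MyValue type and the dataSource field are
>>>> placeholders for my actual store (which extends CacheStoreAdapter):
>>>>
>>>>     @Override public void writeAll(
>>>>         Collection<Cache.Entry<? extends Long, ? extends MyValue>> entries) {
>>>>         System.out.println("writeAll: " + entries.size() + " entries");
>>>>
>>>>         // A new connection per call, with autocommit=true, because no
>>>>         // connection is kept in the session in atomic mode.
>>>>         try (Connection conn = dataSource.getConnection();
>>>>              PreparedStatement st = conn.prepareStatement(
>>>>                  "INSERT INTO vals (id, val) VALUES (?, ?)")) {
>>>>             for (Cache.Entry<? extends Long, ? extends MyValue> e : entries) {
>>>>                 st.setLong(1, e.getKey());
>>>>                 st.setString(2, e.getValue().toString());
>>>>                 st.addBatch();
>>>>             }
>>>>
>>>>             st.executeBatch();
>>>>         }
>>>>         catch (SQLException ex) {
>>>>             throw new CacheWriterException(ex);
>>>>         }
>>>>     }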
>>>>
>>>> Things to note:
>>>>
>>>> - All config values are the default ones, except for write-through and
>>>> write-behind, which are both enabled.
>>>> - I'm running this as a server node (only one node on the cluster, the
>>>> application itself).
>>>> - I see the problem even with a big heap (i.e., Ignite is nowhere near
>>>> out of memory).
>>>> - I'm using PostgreSQL for this test (it can easily ingest around 40k
>>>> rows per second on this computer, so that shouldn't be a problem).
>>>>
>>>> What is causing Ignite to stop buffering elements after calling
>>>> writeAll() a few dozen times?
>>>>
>>>> *QUESTION 2*
>>>>
>>>> I've read in the docs that using ATOMIC mode (the default) is better
>>>> for performance, but I don't get why. If I'm not wrong, using
>>>> TRANSACTIONAL mode would cause the CacheStore to reuse connections (i.e.,
>>>> not call openConnection(autocommit=true) on each writeAll()).
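>>>>
>>>> If I understand the CacheStore example in the docs correctly, in
>>>> TRANSACTIONAL mode the store can keep the JDBC connection attached to the
>>>> session and reuse it until sessionEnd(), roughly like this (dataSource is
>>>> again a placeholder):
>>>>
>>>>     /** Store session, injected by Ignite. */
>>>>     @CacheStoreSessionResource
>>>>     private CacheStoreSession ses;
>>>>
>>>>     private Connection connection() throws SQLException {
>>>>         Connection conn = ses.attachment();
>>>>
>>>>         if (conn == null) {
>>>>             conn = dataSource.getConnection();
>>>>             conn.setAutoCommit(false);
>>>>
>>>>             // The same connection is then reused by every write/writeAll
>>>>             // call in the transaction and committed in sessionEnd().
>>>>             ses.attach(conn);
>>>>         }
>>>>
>>>>         return conn;
>>>>     }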
>>>>
>>>> Shouldn't it be better to use transactional mode?
>>>>
>>>> Regards,
>>>> Matt
>>>>
>>>> [1] https://apacheignite.readme.io/docs/persistent-store#section-cachestore-example
>>>>
>>>
>>>
>>
>
