(Inline)

On Fri, Apr 22, 2016, 4:26 PM vkulichenko <valentin.kuliche...@gmail.com>
wrote:
>
> Hi Matt,
>
> I'm confused. The locking does happen at the per-entry level, otherwise it's
> impossible to guarantee data consistency. Two concurrent updates or reads
> for the same key will wait for each other on this lock. But this should not
> cause performance degradation, unless you have very few keys and very high
> contention on them.
>

Based on his report that a lot of threads were waiting on the same locks, I
assumed that's what was happening -- high contention on a few cache keys. I
don't know his use case, but I can imagine scenarios with a fairly small
number of very "hot" entries. It wouldn't necessarily require very few keys
overall, right? Just high contention on a few of them.
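
For concreteness, here's a minimal sketch of the pattern I mean (cache name,
key, and thread count are all made up, not from his code): every thread
updates the same entry, so they serialize on that entry's lock even though
the cache may hold plenty of other keys.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class HotKeyContention {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<String, Long> cache = ignite.getOrCreateCache("counters");

        // Many threads updating the same entry: each update must acquire that
        // entry's lock, so updates queue up even though nothing else in the
        // cache is contended.
        Runnable hotUpdate = () -> {
            for (int i = 0; i < 100_000; i++)
                cache.put("global-counter", System.nanoTime());
        };

        for (int t = 0; t < 16; t++)
            new Thread(hotUpdate).start();
    }
}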

> The only thing I see here is that the value is deserialized on read. This is
> done because JCache requires store-by-value semantics, so we create a copy
> each time you get the value (by deserializing its binary representation).
> You can override this behavior by setting the
> CacheConfiguration.setCopyOnRead(false) property; this should give you a
> performance improvement. Just note that it's not safe to modify the instance
> that you get from the cache this way.
>

Do you think that would be a candidate for the "Performance tips" page in
the docs? I know I've referred to that page a few times recently myself.
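
If it does end up on that page, a short snippet would probably help. Here's a
rough sketch of the setting you're describing (cache name and value type are
placeholders, not from his code):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class CopyOnReadExample {
    // Placeholder value type just for this sketch.
    static class MyValue { }

    public static void main(String[] args) {
        CacheConfiguration<String, MyValue> cfg = new CacheConfiguration<>("myCache");

        // Skip the deserialize-and-copy step on every read. Reads can then
        // return the cached instance itself, so callers must treat it as
        // read-only.
        cfg.setCopyOnRead(false);

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, MyValue> cache = ignite.getOrCreateCache(cfg);

            cache.put("some-key", new MyValue());
            MyValue v = cache.get("some-key");
            // Do NOT modify 'v' here -- as you point out, that's unsafe with
            // copyOnRead disabled.
        }
    }
}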

> -Val
>
>
>
