.NET thin client multithreaded?

2019-08-22 Thread Eduard Llull
Hello everyone,

We have just developed a gRPC service in .NET Core that performs a bunch of
cache gets for every RPC. We've been using the Apache.Ignite NuGet package,
starting the Ignite node in client mode (thick client), but we just changed it
to use the thin client and we see much, much worse response times: from 4ms
average and 15ms 95th percentile, to 21ms average and 86ms 95th percentile,
and the times get even worse under load: peaks of 115ms average, 1s 95th
percentile.

We were expecting some level of degradation in the response times when
changing from the thick to the thin client, but not as much. In fact, trying
to reduce the impact, we've deployed an Ignite node in client mode on every
host where our gRPC service is deployed, and the gRPC service connects to the
local Ignite node.

The gRPC service receives several tens of concurrent requests when under
load, but we instantiate one single IIgniteClient (Ignition.StartClient())
shared by all the threads that are serving the RPC requests. I've seen the
following in the Java Thin Client documentation (
https://apacheignite.readme.io/docs/java-thin-client-initialization-and-configuration#section-multithreading):

Thin client is single-threaded and thread-safe. The only shared resource is
the underlying communication channel, and only one thread reads/writes to
the channel while other threads are waiting for the operation to complete.
Use multiple threads with thin client connection pool to improve performance

Presently thin client has no feature to create multiple threads to improve
throughput. You can create multiple threads by getting thin client
connection from a pool in your application to improve throughput.

But there is no such warning in the .NET thin client documentation (
https://apacheignite-net.readme.io/docs/thin-client).

Is it possible that the huge increase in the response times comes from
contention when multiple gRPC threads are using the same thin client (thus,
the same ClientSocket) to communicate with the cluster?

In the meantime we will use a thin client pool, as recommended in the Java
documentation, to see if it improves the performance.
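
Roughly, what we have in mind is something like the sketch below (Java, since
that is what the quoted documentation describes; in .NET we would do the
equivalent with IIgniteClient instances from Ignition.StartClient(); class and
method names are just illustrative):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

// Minimal round-robin pool of thin-client connections.
public class ThinClientPool implements AutoCloseable {
    private final List<IgniteClient> clients = new ArrayList<>();
    private final AtomicInteger next = new AtomicInteger();

    public ThinClientPool(int size, String... addresses) {
        for (int i = 0; i < size; i++)
            clients.add(Ignition.startClient(new ClientConfiguration().setAddresses(addresses)));
    }

    // Hands out connections round-robin so concurrent requests don't all share one socket.
    public IgniteClient get() {
        return clients.get(Math.floorMod(next.getAndIncrement(), clients.size()));
    }

    @Override public void close() throws Exception {
        for (IgniteClient c : clients)
            c.close();
    }
}

Each RPC handler would then call pool.get().cache("...") instead of going
through the single shared client.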


Thank you very much.


Re: Node failure with "Failed to write buffer." error

2019-08-22 Thread Denis Magda
Ivan, Alex Goncharuk,

The exception trace is not helpful; it's not obvious what the reason might be
or how to address it. How do we tackle these problems?

Ibrahim, please attach all the log files for a detailed look.

-
Denis


On Thu, Aug 22, 2019 at 3:08 AM ihalilaltun 
wrote:

> Hi folks,
>
> We have been experiencing node failures with the error "Failed to write
> buffer." recently. Any ideas or optimizations not to get the error and node
> failure?
>
> Thanks...
>
> [2019-08-22T01:20:55,916][ERROR][wal-write-worker%null-#221][] Critical
> system error detected. Will be handled accordingly to configured handler
> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
> [type=CRITICAL_ERROR, err=class
> o.a.i.i.processors.cache.persistence.StorageException: Failed to write
> buffer.]]
> org.apache.ignite.internal.processors.cache.persistence.StorageException:
> Failed to write buffer.
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3484)
> [ignite-core-2.7.5.jar:2.7.5]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.body(FileWriteAheadLogManager.java:3301)
> [ignite-core-2.7.5.jar:2.7.5]
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> [ignite-core-2.7.5.jar:2.7.5]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
> Caused by: java.nio.channels.ClosedChannelException
> at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110)
> ~[?:1.8.0_201]
> at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:253)
> ~[?:1.8.0_201]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIO.position(RandomAccessFileIO.java:48)
> ~[ignite-core-2.7.5.jar:2.7.5]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.file.FileIODecorator.position(FileIODecorator.java:41)
> ~[ignite-core-2.7.5.jar:2.7.5]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.file.AbstractFileIO.writeFully(AbstractFileIO.java:111)
> ~[ignite-core-2.7.5.jar:2.7.5]
> at
>
> org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3477)
> ~[ignite-core-2.7.5.jar:2.7.5]
> ... 3 more
> [2019-08-22T01:20:55,921][WARN
> ][wal-write-worker%null-#221][FailureProcessor] No deadlocked threads
> detected.
> [2019-08-22T01:20:56,347][WARN
> ][wal-write-worker%null-#221][FailureProcessor] Thread dump at 2019/08/22
> 01:20:56 UTC
>
>
> *Ignite version*: 2.7.5
> *Cluster size*: 16
> *Client size*: 22
> *Cluster OS version*: Centos 7
> *Cluster Kernel version*: 4.4.185-1.el7.elrepo.x86_64
> *Java version* :
> java version "1.8.0_201"
> Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
> Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
>
> Current disk sizes;
> Screen_Shot_2019-08-22_at_12.png
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2515/Screen_Shot_2019-08-22_at_12.png>
>
> Ignite and gc logs;
> ignite-9.zip
> 
> Ignite configuration file;
> default-config.xml
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2515/default-config.xml>
>
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Capacity planning for production deployment on kubernetes

2019-08-22 Thread Denis Magda
Please share the whole log file. It might be the case that something goes
wrong with the volumes you attached to the Ignite pods.

-
Denis


On Thu, Aug 22, 2019 at 8:07 AM Shiva Kumar 
wrote:

> Hi Denis,
>
> Thanks for your response,
> Yes, in our tests we have also seen OOM errors and pod crashes,
> so we will follow the recommendation for RAM requirements. I was also
> checking the Ignite documentation on the disk space required for WAL + WAL
> archive.
> here in this link
> https://apacheignite.readme.io/docs/write-ahead-log#section-wal-archive
>
> it says: the archive size is defined as 4 times the size of the checkpointing
> buffer, and the checkpointing buffer is a function of the data region size (
> https://apacheignite.readme.io/docs/durable-memory-tuning#section-checkpointing-buffer-size
> )
>
> but in this link
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-SubfoldersGeneration
>
> under the *Estimating disk space* section it explains how to estimate the
> disk space required for WAL, but it is not clear. Can you please help me with
> the correct recommendation for calculating the disk space required for
> WAL + WAL archive.
>
> In one of my tests, I configured 4GB for the data region and 10GB for
> WAL + WAL archive, but our pods are crashing as the disk mounted for
> WAL + WAL archive runs out of space.
>
> [ignite@ignite-cluster-ignite-node-2 ignite]$* df -h*
> Filesystem  Size  Used Avail Use% Mounted on
> overlay 158G   39G  112G  26% /
> tmpfs63G 0   63G   0% /dev
> tmpfs63G 0   63G   0% /sys/fs/cgroup
> /dev/vda1   158G   39G  112G  26% /etc/hosts
> shm  64M 0   64M   0% /dev/shm
> */dev/vdq9.8G  9.7G   44M 100% /opt/ignite/wal*
> /dev/vdr 50G  1.4G   48G   3% /opt/ignite/persistence
> tmpfs63G   12K   63G   1% /run/secrets/
> kubernetes.io/serviceaccount
> tmpfs63G 0   63G   0% /proc/acpi
> tmpfs63G 0   63G   0% /proc/scsi
> tmpfs63G 0   63G   0% /sys/firmware
>
>
> and this is the error message on the Ignite node:
>
> "ERROR","JVM will be halted immediately due to the failure:
> [failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class
> o.a.i.IgniteCheckedException: Failed to archive WAL segment
> [srcFile=/opt/ignite/wal/node00-37ea8ba6-3198-46a1-9e9e-38aff27ed9c9/0006.wal,
> dstFile=/opt/ignite/wal/archive/node00-37ea8ba6-3198-46a1-9e9e-38aff27ed9c9/0236.wal.tmp]]]"
>
>
> On Thu, Aug 22, 2019 at 8:04 PM Denis Mekhanikov 
> wrote:
>
>> Shivakumar,
>>
>> Such allocation doesn’t allow full memory utilization, so it’s possible
>> that nodes will crash because of out-of-memory errors.
>> So, it’s better to follow the given recommendation.
>>
>> If you want us to investigate the reasons for the failures, please provide
>> logs and configuration of the failed nodes.
>>
>> Denis
>> On 21 Aug 2019, 16:17 +0300, Shiva Kumar ,
>> wrote:
>>
>> Hi all,
>> we are testing a field use case before deploying in the field, and we want
>> to know whether the resource limits below are suitable for production.
>> There are 3 nodes (3 pods on Kubernetes) running, each having the below
>> configuration:
>>
>> DefaultDataRegion: 60GB
>> JVM: 32GB
>> Resource allocated for each container: 64GB
>>
>> And the Ignite documentation says (JVM + all data regions) should not exceed
>> 70% of the total RAM allocated to each node (container),
>> but we started testing with the above configuration, and for up to 9 days the
>> Ignite cluster was running successfully with some data ingestion, but suddenly
>> the pods crashed and they were unable to recover from the crash.
>> Is the above resource configuration not good for node recovery?
>>
>>


Re: Using Ignite as blob store?

2019-08-22 Thread Denis Magda
How about setting page size to more KBs or MBs based on the average value?
That should work perfectly fine.
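
For example, a minimal sketch (the 8 KB value is only an illustration; pick it
based on your average blob size):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PageSizeConfig {
    public static IgniteConfiguration withLargerPages() {
        // Larger pages mean a multi-KB blob is split into fewer chunks.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration()
            .setPageSize(8 * 1024);

        return new IgniteConfiguration().setDataStorageConfiguration(storageCfg);
    }
}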

-
Denis


On Thu, Aug 22, 2019 at 8:11 AM Shane Duan  wrote:

> Thanks, Ilya. The blob size varies from a few KBs to a few MBs.
>
> Cheers,
> Shane
>
>
> On Thu, Aug 22, 2019 at 5:02 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> How large are these blobs? Ignite is going to divide blobs into <4k
>> chunks. We have no special optimizations for storing large key-value pairs.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> чт, 22 авг. 2019 г. в 02:53, Shane Duan :
>>
>>> Hi Igniters, is it a good idea to use Ignite(with persistence) as a blob
>>> store? I did run some testing with a small dataset, and it looks performing
>>> okay, even with a small off-heap mem for the data region.
>>>
>>> Thanks!
>>>
>>> Shane
>>>
>>


Re: questions

2019-08-22 Thread narges saleh
I am not sure you can find real-world examples where caches can be evenly
partitioned if the partitioning factor is an affinity key. I am comparing this
with the partitioning case in relational databases, say partitioning based on
the month of the year. I definitely don't have 100s of departments, but I do
have 10s of departments, and the departments are very disproportionate in size.
As for the rebalancing case, pods will be added to the system as the volume
increases, so I'd assume that would prompt Ignite to rebalance.

On Thu, Aug 22, 2019 at 11:00 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> 1) No. Ignite only rebalances data when nodes are joining or leaving
> cluster.
> 2) Ignite's affinity is not really well suited to such detailed manual
> assignment. It is assumed that your cache has a large number of partitions
> (e.g. 1024) and data is distributed evenly between all partitions. Having
> department as the affinity key is suboptimal because there are not many
> departments and they usually vary in size. That's the kind of distribution
> that you want to avoid.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> чт, 22 авг. 2019 г. в 18:37, narges saleh :
>
>> Thanks Ilya for the replies.
>> 1) Doesn't Ignite rebalance the nodes if there are additional nodes
>> available and the data doesn't fit the cache's current Ignite node? Consider
>> a scenario where I have 100 pods on a physical node, assuming pod = Ignite
>> node.
>> 2) I am not sure what you mean by confining half of a cache to one cluster
>> and another half to another node. If my affinity key is department id, why
>> can't I have department A in a partitioned cache, with one partition on one node
>> in cluster A and the other partition on another node in another cluster?
>>
>> I might be misunderstanding the whole thing, and I'd appreciate clarification.
>>
>> On Thu, Aug 22, 2019 at 6:52 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> 1) When there is an overflow, either page eviction kicks in, or, if it
>>> is disabled, you get an IgniteOOM, after which the node is no longer
>>> usable. Please avoid overflowing any data regions since there's no graceful
>>> handling currently.
>>> 2) I don't think so. You can't easily confine half of cache's data to
>>> one cluster group and another half to other group.
>>>
>>> Such scenarios are not recommended. We expect that all partitions have
>>> same amount of data. Not that there are a few gargantuan partitions that
>>> don't fit in a single node.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> вт, 20 авг. 2019 г. в 06:29, narges saleh :
>>>
 Hello All,

 I'd appreciate your answers to my questions.

 1) Assuming I use an affinity key among 4 caches, and they all end up on
 the same Ignite node: what happens when there is an overflow? Does the overflow
 data end up on a newly joined node? How do I keep the related data from all the
 caches close to each other when the volume exceeds a single node?

 2) Is there a concept of cluster affinity, meaning having a cluster
 group defined based on some affinity key? For example, if I have two
 departments A and B, can I have a cluster group for department A and
 another for department B?

 Thanks,
 Narges

>>>


Re: questions

2019-08-22 Thread Ilya Kasnacheev
Hello!

1) No. Ignite only rebalances data when nodes are joining or leaving
cluster.
2) Ignite's affinity is not really well suited to such detailed manual
assignment. It is assumed that your cache has a large number of partitions
(e.g. 1024) and data is distributed evenly between all partitions. Having
department as the affinity key is suboptimal because there are not many
departments and they usually vary in size. That's the kind of distribution
that you want to avoid.
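
For illustration, a minimal sketch of the kind of key we expect (class and
field names are made up): collocate on a high-cardinality identifier and keep
the default 1024 partitions, rather than keying affinity on a department:

import org.apache.ignite.cache.affinity.AffinityKeyMapped;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class AffinityExample {
    // Hypothetical key: collocated by customerId (many distinct values), not by department.
    static class OrderKey {
        private final long orderId;

        @AffinityKeyMapped
        private final long customerId;

        OrderKey(long orderId, long customerId) {
            this.orderId = orderId;
            this.customerId = customerId;
        }
    }

    static CacheConfiguration<OrderKey, Object> cacheCfg() {
        // 1024 partitions spread evenly across the nodes.
        return new CacheConfiguration<OrderKey, Object>("orders")
            .setAffinity(new RendezvousAffinityFunction(false, 1024));
    }
}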

Regards,
-- 
Ilya Kasnacheev


чт, 22 авг. 2019 г. в 18:37, narges saleh :

> Thanks Ilya for the replies.
> 1) Doesn't Ignite rebalance the nodes if there are additional nodes
> available and the data doesn't fit the cache's current Ignite node? Consider
> a scenario where I have 100 pods on a physical node, assuming pod = Ignite
> node.
> 2) I am not sure what you mean by confining half of a cache to one cluster
> and another half to another node. If my affinity key is department id, why
> can't I have department A in a partitioned cache, with one partition on one node
> in cluster A and the other partition on another node in another cluster?
>
> I might be misunderstanding the whole thing, and I'd appreciate clarification.
>
> On Thu, Aug 22, 2019 at 6:52 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> 1) When there is an overflow, either page eviction kicks in, or, if it is
>> disabled, you get an IgniteOOM, after which the node is no longer usable.
>> Please avoid overflowing any data regions since there's no graceful
>> handling currently.
>> 2) I don't think so. You can't easily confine half of cache's data to one
>> cluster group and another half to other group.
>>
>> Such scenarios are not recommended. We expect that all partitions have
>> same amount of data. Not that there are a few gargantuan partitions that
>> don't fit in a single node.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> вт, 20 авг. 2019 г. в 06:29, narges saleh :
>>
>>> Hello All,
>>>
>>> I'd appreciate your answers to my questions.
>>>
>>> 1) Assuming I use an affinity key among 4 caches, and they all end up on
>>> the same Ignite node: what happens when there is an overflow? Does the overflow
>>> data end up on a newly joined node? How do I keep the related data from all the
>>> caches close to each other when the volume exceeds a single node?
>>>
>>> 2) Is there a concept of cluster affinity, meaning having a cluster
>>> group defined based on some affinity key? For example, if I have two
>>> departments A and B, can I have a cluster group for department A and
>>> another for department B?
>>>
>>> Thanks,
>>> Narges
>>>
>>


Re: questions

2019-08-22 Thread narges saleh
Thanks Ilya for the replies.
1) Doesn't Ignite rebalance the nodes if there are additional nodes
available and the data doesn't fit the cache's current Ignite node? Consider
a scenario where I have 100 pods on a physical node, assuming pod = Ignite
node.
2) I am not sure what you mean by confining half of a cache to one cluster
and another half to another node. If my affinity key is department id, why
can't I have department A in a partitioned cache, with one partition on one node
in cluster A and the other partition on another node in another cluster?

I might be misunderstanding the whole thing, and I'd appreciate clarification.

On Thu, Aug 22, 2019 at 6:52 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> 1) When there is an overflow, either page eviction kicks in, or, if it is
> disabled, you get an IgniteOOM, after which the node is no longer usable.
> Please avoid overflowing any data regions since there's no graceful
> handling currently.
> 2) I don't think so. You can't easily confine half of cache's data to one
> cluster group and another half to other group.
>
> Such scenarios are not recommended. We expect that all partitions have
> same amount of data. Not that there are a few gargantuan partitions that
> don't fit in a single node.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> вт, 20 авг. 2019 г. в 06:29, narges saleh :
>
>> Hello All,
>>
>> I'd appreciate your answers to my questions.
>>
>> 1) Assuming I use an affinity key among 4 caches, and they all end up on the
>> same Ignite node: what happens when there is an overflow? Does the overflow data
>> end up on a newly joined node? How do I keep the related data from all the caches
>> close to each other when the volume exceeds a single node?
>>
>> 2) Is there a concept of cluster affinity, meaning having a cluster group
>> defined based on some affinity key? For example, if I have two departments
>> A and B, can I have a cluster group for department A and another for
>> department B?
>>
>> Thanks,
>> Narges
>>
>


Re: Using Ignite as blob store?

2019-08-22 Thread Shane Duan
Thanks, Ilya. The blob size varies from a few KBs to a few MBs.

Cheers,
Shane


On Thu, Aug 22, 2019 at 5:02 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> How large are these blobs? Ignite is going to divide blobs into <4k
> chunks. We have no special optimizations for storing large key-value pairs.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> чт, 22 авг. 2019 г. в 02:53, Shane Duan :
>
>> Hi Igniters, is it a good idea to use Ignite(with persistence) as a blob
>> store? I did run some testing with a small dataset, and it looks performing
>> okay, even with a small off-heap mem for the data region.
>>
>> Thanks!
>>
>> Shane
>>
>


Re: Capacity planning for production deployment on kubernetes

2019-08-22 Thread Shiva Kumar
Hi Denis,

Thanks for your response,
Yes, in our tests we have also seen OOM errors and pod crashes,
so we will follow the recommendation for RAM requirements. I was also
checking the Ignite documentation on the disk space required for WAL + WAL
archive.
here in this link
https://apacheignite.readme.io/docs/write-ahead-log#section-wal-archive

it says: the archive size is defined as 4 times the size of the checkpointing
buffer, and the checkpointing buffer is a function of the data region size (
https://apacheignite.readme.io/docs/durable-memory-tuning#section-checkpointing-buffer-size
)

but in this link
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-SubfoldersGeneration

under the *Estimating disk space* section it explains how to estimate the
disk space required for WAL, but it is not clear. Can you please help me with
the correct recommendation for calculating the disk space required for
WAL + WAL archive.

In one of my tests, I configured 4GB for the data region and 10GB for WAL + WAL
archive, but our pods are crashing as the disk mounted for WAL + WAL archive
runs out of space.

[ignite@ignite-cluster-ignite-node-2 ignite]$* df -h*
Filesystem  Size  Used Avail Use% Mounted on
overlay 158G   39G  112G  26% /
tmpfs63G 0   63G   0% /dev
tmpfs63G 0   63G   0% /sys/fs/cgroup
/dev/vda1   158G   39G  112G  26% /etc/hosts
shm  64M 0   64M   0% /dev/shm
*/dev/vdq9.8G  9.7G   44M 100% /opt/ignite/wal*
/dev/vdr 50G  1.4G   48G   3% /opt/ignite/persistence
tmpfs63G   12K   63G   1% /run/secrets/
kubernetes.io/serviceaccount
tmpfs63G 0   63G   0% /proc/acpi
tmpfs63G 0   63G   0% /proc/scsi
tmpfs63G 0   63G   0% /sys/firmware


and this is the error message on the Ignite node:

"ERROR","JVM will be halted immediately due to the failure:
[failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class
o.a.i.IgniteCheckedException: Failed to archive WAL segment
[srcFile=/opt/ignite/wal/node00-37ea8ba6-3198-46a1-9e9e-38aff27ed9c9/0006.wal,
dstFile=/opt/ignite/wal/archive/node00-37ea8ba6-3198-46a1-9e9e-38aff27ed9c9/0236.wal.tmp]]]"


On Thu, Aug 22, 2019 at 8:04 PM Denis Mekhanikov 
wrote:

> Shivakumar,
>
> Such allocation doesn’t allow full memory utilization, so it’s possible
> that nodes will crash because of out-of-memory errors.
> So, it’s better to follow the given recommendation.
>
> If you want us to investigate the reasons for the failures, please provide logs
> and configuration of the failed nodes.
>
> Denis
> On 21 Aug 2019, 16:17 +0300, Shiva Kumar ,
> wrote:
>
> Hi all,
> we are testing a field use case before deploying in the field, and we want to
> know whether the resource limits below are suitable for production.
> There are 3 nodes (3 pods on Kubernetes) running, each having the below
> configuration:
>
> DefaultDataRegion: 60GB
> JVM: 32GB
> Resource allocated for each container: 64GB
>
> And the Ignite documentation says (JVM + all data regions) should not exceed
> 70% of the total RAM allocated to each node (container),
> but we started testing with the above configuration, and for up to 9 days the
> Ignite cluster was running successfully with some data ingestion, but suddenly
> the pods crashed and they were unable to recover from the crash.
> Is the above resource configuration not good for node recovery?
>
>


Re: Capacity planning for production deployment on kubernetes

2019-08-22 Thread Denis Mekhanikov
Shivakumar,

Such allocation doesn’t allow full memory utilization, so it’s possible that
nodes will crash because of out-of-memory errors.
So, it’s better to follow the given recommendation.

If you want us to investigate the reasons for the failures, please provide logs and
configuration of the failed nodes.

Denis
On 21 Aug 2019, 16:17 +0300, Shiva Kumar , wrote:
> Hi all,
> we are testing a field use case before deploying in the field, and we want to
> know whether the resource limits below are suitable for production.
> There are 3 nodes (3 pods on Kubernetes) running, each having the below
> configuration:
>
> DefaultDataRegion: 60GB
> JVM: 32GB
> Resource allocated for each container: 64GB
>
> And the Ignite documentation says (JVM + all data regions) should not exceed
> 70% of the total RAM allocated to each node (container),
> but we started testing with the above configuration, and for up to 9 days the
> Ignite cluster was running successfully with some data ingestion, but suddenly
> the pods crashed and they were unable to recover from the crash.
> Is the above resource configuration not good for node recovery?


Re: Memory usage metric doesn't come down when memory is freed up

2019-08-22 Thread Ilya Kasnacheev
Hello!

When you get an IgniteOOM, your node is in an incorrect state where behavior is
undefined. You should plan carefully to avoid getting IOOM on nodes, or enable
persistence/page eviction.
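
For example, a minimal sketch of enabling page eviction (or persistence) on the
default region; the names mirror the reproducer below but are otherwise
illustrative:

import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;

public class EvictionConfig {
    static DataStorageConfiguration withEviction() {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("Default_Region")
            .setMaxSize(100L * 1024 * 1024)
            // Evict cold data pages instead of failing with an Ignite out-of-memory error...
            .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
            // ...or, alternatively, enable native persistence so data spills to disk:
            // .setPersistenceEnabled(true);

        return new DataStorageConfiguration().setDefaultDataRegionConfiguration(region);
    }
}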

Regards,

On 2019/08/15 13:16:26, colinc  wrote: 
> I am using the Ignite metrics API to monitor memory usage levels in an
> attempt to avoid OOM conditions.
> 
> I have found that the metrics API appears to provide reasonably accurate
> figures as entries are being written to the cache - but usage levels do not
> come down again when entries are removed. I have tried removing all entries,
> individual entries and even destroying the cache. I have also tried waiting
> for a significant period of time.
> 
> Even when the cache is destroyed, the memory usage figure does not appear to
> drop. In fact, removing entries can even cause an increase in the figure.
> Despite the metrics, it is possible to insert new entries following removal
> of the old ones - indicating that space is in fact available.
> 
> A reproducer is below and produces results like:
> 
> Out of memory: CachePartialUpdateException after 89MB
> Memory used: 99.39624%
> Memory used: 99.41027%
> Memory used: 99.41406%
> 
> 
> 
> package mytest;
> 
> import org.apache.ignite.DataRegionMetrics;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.configuration.DataRegionConfiguration;
> import org.apache.ignite.configuration.DataStorageConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.failure.NoOpFailureHandler;
> import org.junit.Test;
> 
> import javax.cache.CacheException;
> 
> public class MemoryTest {
> 
> private static final String CACHE_NAME = "cache";
> private static final String DEFAULT_MEMORY_REGION = "Default_Region";
> private static final long MEM_SIZE = 100L * 1024 * 1024;
> 
> 
> @Test
> public void testOOM() throws InterruptedException {
> try (Ignite ignite = startIgnite("IgniteMemoryMonitorTest1")) {
> fillDataRegion(ignite);
> IgniteCache cache =
> ignite.getOrCreateCache(CACHE_NAME);
> 
> // Clear all entries from the cache to free up memory 
> memUsed(ignite);
> cache.clear();  // Fails here
> cache.put("Key", "Value");
> memUsed(ignite);
> 
> cache.destroy();
> Thread.sleep(5000);
> memUsed(ignite);
> }
> }
> 
> 
> private Ignite startIgnite(String instanceName) {
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setIgniteInstanceName(instanceName);
> cfg.setDataStorageConfiguration(createDataStorageConfiguration());
> cfg.setFailureHandler(new NoOpFailureHandler());
> return Ignition.start(cfg);
> }
> 
> private DataStorageConfiguration createDataStorageConfiguration() {
> return new DataStorageConfiguration()
> .setDefaultDataRegionConfiguration(
> new DataRegionConfiguration()
> .setName(DEFAULT_MEMORY_REGION)
> .setInitialSize(MEM_SIZE)
> .setMaxSize(MEM_SIZE)
> .setMetricsEnabled(true));
> }
> 
> 
> private void fillDataRegion(Ignite ignite) {
> byte[] megabyte = new byte[1024 * 1024];
> 
> int storedDataMB = 0;
> try {
> IgniteCache cache =
> ignite.getOrCreateCache(CACHE_NAME);
> for (int i = 0; i < 200; i++) {
> cache.put(i, megabyte);
> storedDataMB++;
> 
> memUsed(ignite);
> }
> } catch (CacheException e) {
> System.out.println("Out of memory: " +
> e.getClass().getSimpleName() + " after " + storedDataMB + "MB");
> }
> }
> 
> private void memUsed(Ignite ignite) {
> DataRegionConfiguration defaultDataRegionCfg =
> ignite.configuration()
> .getDataStorageConfiguration()
> .getDefaultDataRegionConfiguration();
> String regionName = defaultDataRegionCfg.getName();
> DataRegionMetrics metrics = ignite.dataRegionMetrics(regionName);
> float usedMem = metrics.getPagesFillFactor() *
> metrics.getTotalAllocatedPages() * metrics.getPageSize();
> float pctUsed = 100 * usedMem / defaultDataRegionCfg.getMaxSize();
> System.out.println("Memory used: " + pctUsed + "%");
> }
> } 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> 


Re: One of Ignite pod keeps crashing and not joining the cluster

2019-08-22 Thread Stanislav Lukyanov
Hi,

Please share
- Ignite version you're running
- Exact steps and events (a node was restarted, a client joined, etc)
- Logs of all three servers

Thanks,
Stan

On Mon, Aug 19, 2019 at 3:27 PM radha jai  wrote:

> Hi,
> Ignite is deployed on Kubernetes with 3 replicas of the Ignite
> server. The server was up and running for some days, with data being injected
> successfully; then suddenly I started getting the below error on one of the
> server pods, which is restarting multiple times:
>Failed to process custom exchange task:
> ClientCacheChangeDummyDiscoveryMessage
>  [reqId=6b5f6c50-a8c9-4b04-a461-49bfd0112eb0, cachesToClose=null,
> startCaches=[BgwService]] java.lang.NullPointerException| at
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCachesChanges(CacheAffinitySharedManager.java:635)|
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:391)|
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2475)|
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2620)|
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2539)|
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)|
> at java.lang.Thread.run(Thread.java:748)"
>
> Below is my ignite-xml file:
> ignite-config.xml:
> 
> 
> http://www.springframework.org/schema/beans";
>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>  xmlns:util="http://www.springframework.org/schema/util";
>  xsi:schemaLocation="
>   http://www.springframework.org/schema/beans
>   http://www.springframework.org/schema/beans/spring-beans.xsd
>   http://www.springframework.org/schema/util
>   http://www.springframework.org/schema/util/spring-util.xsd";>
> 
> 
>  
> class="org.apache.ignite.configuration.ConnectorConfiguration">
> value="/opt/ignite/conf/jetty-server.xml" />
>
>  
>  
>  
>class="org.apache.ignite.configuration.DataStorageConfiguration">
>   
>   
>   
>class="org.apache.ignite.configuration.DataRegionConfiguration">
>
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>  
>  
>   
>
> 
> class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
> 
> 
> 
>
>   
>  
> 
> 
> Trans
> Info
> Msg
> 
> 
> 
>  
>   class="org.apache.ignite.configuration.CacheConfiguration">
>
>
>
>
>
>   factory-method="factoryOf">
>
>  
>
>
>  
>
>  
>
>  
>   class="org.apache.ignite.configuration.CacheConfiguration">
>
>
>
>
>
>   factory-method="factoryOf">
>
>  
>
>
>  
>
>  
>
>  
>   class="org.apache.ignite.configuration.CacheConfiguration">
>
>
>
>
>
>   factory-method="factoryOf">
>
>  
>
>
>  
>
>  
>
>  
>   class="org.apache.ignite.configuration.CacheConfiguration">
>
>
>
>
>  
>  
> 
>
> 
> 
>
> Regards
> radha
>


Re: Using Ignite as blob store?

2019-08-22 Thread Ilya Kasnacheev
Hello!

How large are these blobs? Ignite is going to divide blobs into <4k chunks.
We have no special optimizations for storing large key-value pairs.

Regards,
-- 
Ilya Kasnacheev


чт, 22 авг. 2019 г. в 02:53, Shane Duan :

> Hi Igniters, is it a good idea to use Ignite(with persistence) as a blob
> store? I did run some testing with a small dataset, and it looks performing
> okay, even with a small off-heap mem for the data region.
>
> Thanks!
>
> Shane
>


Re: questions

2019-08-22 Thread Ilya Kasnacheev
Hello!

1) When there is an overflow, either page eviction kicks in, or, if it is
disabled, you get an IgniteOOM, after which the node is no longer usable.
Please avoid overflowing any data regions since there's no graceful
handling currently.
2) I don't think so. You can't easily confine half of cache's data to one
cluster group and another half to other group.

Such scenarios are not recommended. We expect that all partitions have same
amount of data. Not that there are a few gargantuan partitions that don't
fit in a single node.

Regards,
-- 
Ilya Kasnacheev


вт, 20 авг. 2019 г. в 06:29, narges saleh :

> Hello All,
>
> I'd appreciate your answers to my questions.
>
> 1) Assuming I use an affinity key among 4 caches, and they all end up on the
> same Ignite node: what happens when there is an overflow? Does the overflow data
> end up on a newly joined node? How do I keep the related data from all the caches
> close to each other when the volume exceeds a single node?
>
> 2) Is there a concept of cluster affinity, meaning having a cluster group
> defined based on some affinity key? For example, if I have two departments
> A and B, can I have a cluster group for department A and another for
> department B?
>
> Thanks,
> Narges
>


Re: Does IgniteCache.containsKey lock the key in a Transaction?

2019-08-22 Thread Denis Mekhanikov
Yohan,

IgniteCache#containsKey(...) locks a key under pessimistic transactions with
REPEATABLE_READ isolation level, just like a get().
And it doesn’t make servers send values back to a requesting node, so basically
it does what you need.
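
For example (a minimal sketch; the cache name and value type are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class ContainsKeyLockExample {
    // Locks the key for the duration of the transaction without pulling the value to this node.
    static void lockWithoutFetch(Ignite ignite, String key) {
        IgniteCache<String, byte[]> cache = ignite.cache("myCache");

        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
            cache.containsKey(key); // acquires the lock; no value is sent back

            // ... other operations protected by the same lock ...

            tx.commit();
        }
    }
}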

Denis

> On 19 Aug 2019, at 14:08, Yohan Fernando  wrote:
> 
> Hi Nattapon,
>  
> Unfortunately explicit locks cannot be used within transactions and an 
> exception will be thrown.
>  
> It seems the only way is to rely on implicit locks using calls like get() and 
> containsKey(). I looked through the ignite source for these methods and it 
> does appear like containsKey delegates to the same call as get() but has a 
> flag about whether to serialize or not so I assume that containsKey might 
> avoid serialization. However I’m not an expert on the Ignite codebase so it 
> would be good if someone can confirm that this is indeed the case.
>  
> Thanks
>  
> Yohan
>  
> From: nattapon <wrong...@gmail.com>
> Sent: 19 August 2019 08:00
> To: user@ignite.apache.org 
> Subject: Re: Does IgniteCache.containsKey lock the key in a Transaction?
>  
> Caution: This email originated from outside of Tudor.
>  
> Hi Yohan,
>  
> There is an IgniteCache.lock(key) method described in
> https://apacheignite.readme.io/docs/distributed-locks .
> Is it suited to your requirement?
>  
> IgniteCache cache = ignite.cache("myCache");
> 
> // Create a lock for the given key
> Lock lock = cache.lock("keyLock");
> try {
> // Acquire the lock
> lock.lock();
>   
> cache.put("Hello", 11);
> cache.put("World", 22);
> }
> finally {
> // Release the lock
> lock.unlock();
> }
>  
> Regards,
> Nattapon
>  
> On Fri, Aug 16, 2019 at 5:23 PM Yohan Fernando wrote:
> Hi All, Does  IgniteCache.containsKey() lock the key in a Transaction similar 
> to IgniteCache.get() ? Basically I want a lightweight call to lock the key 
> without having to Serialize objects from each node within a Transaction. 
>  
>  
> _
> 
> This email, its contents, and any attachments transmitted with it are 
> intended only for the addressee(s) and may be confidential and legally 
> privileged. We do not waive any confidentiality by misdelivery. If you have 
> received this email in error, please notify the sender immediately and delete 
> it. You should not copy it, forward it or otherwise use the contents, 
> attachments or information in any way. Any liability for viruses is excluded 
> to the fullest extent permitted by law.
> 
> Tudor Capital Europe LLP (TCE) is authorised and regulated by The Financial 
> Conduct Authority (the FCA). TCE is registered as a limited liability 
> partnership in England and Wales No: OC340673 with its registered office at 
> 10 New Burlington Street, London, W1S 3BE, United Kingdom
> 



Re: Cache spreading to new nodes

2019-08-22 Thread Denis Mekhanikov
Marco,

IgniteCache.localEntries(...).iterator() will iterate over all entries in the
cache on a local node. So, it doesn't iterate over caches, but over entries
in one cache instead.
It brings entries from off-heap to heap, so data is duplicated during
iteration. But no “local cache” is created. Entries are just brought to
heap, which can be heavy for a garbage collector.

> Yes, I read that I should have set the attributes. However, now it feels
like an unnecessary step? What would that improve, in my case?

Node filters should be stateless and return the same entries on all nodes.
So, make sure, that it’s impossible that this node filter acts differently
on different nodes.
Using an attribute-based node filter is a safe way to choose nodes for
caches since such filter is guaranteed to work identically on every node.
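
For example, a minimal sketch of an attribute-based filter (the attribute name
and value are made up):

import java.util.Collections;

import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.util.AttributeNodeFilter;

public class NodeFilterExample {
    // On the nodes that should host the cache, set a custom user attribute.
    static IgniteConfiguration nodeCfg() {
        return new IgniteConfiguration()
            .setUserAttributes(Collections.singletonMap("cache.group", "optimization"));
    }

    // The cache is then restricted to nodes carrying that attribute; the filter is stateless.
    static CacheConfiguration<String, Object> cacheCfg() {
        return new CacheConfiguration<String, Object>("myCache")
            .setNodeFilter(new AttributeNodeFilter("cache.group", "optimization"));
    }
}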

> I have just one question: you called it "backup filter". Is the
nodeFilter a filter for only backup nodes or was that a typo? I thought it
was a filter for all the nodes for a cache.

Backup filter and node filter are different things.
The one that you specify using CacheConfiguration#setNodeFilter() is used to
choose the nodes where a cache should be stored.

On the other hand, backupFilter is a property of RendezvousAffinityFunction.
It can be used to choose where backup partitions should be stored based on
a location of a primary partition. A possible use-case for it is making
primary and backup partitions be stored on different racks in a datacenter.
As far as I can see, you don’t need this one.

Denis

On 15 Aug 2019, at 10:05, Marco Bernagozzi 
wrote:

Hi,
Sorry, tearing down the project to make a runnable proved to be a much
bigger project than expected. I eventually managed, and the outcome is:
I used to call:
List cacheNames = new ArrayList<>();
ignite.cacheNames().forEach(
n -> {
if (!n.equals("settingsCache")) {

ignite.cache(n).localEntries(CachePeekMode.ALL).iterator().forEachRemaining(a
-> cacheNames.add(a.getKey().toString()));
}
}
);
to check the local caches, which apparently creates a local copy of the
cache in the machine (!?).
Now, I replaced it with:
List cacheNames = new ArrayList<>();
UUID localId = ignite.cluster().localNode().id();
ignite.cacheNames().forEach(
cache -> {
if (!cache.equals("settingsCache")) {
boolean containsCache =
ignite.cluster().forCacheNodes(cache).nodes().stream()
.anyMatch(n -> n.id().equals(localId));
if (containsCache) {
cacheNames.add(cache);
}
}
}
);

And the issue disappeared. Is this intended behaviour? Because it looks
weird to me.

To reply to:
"I think, it’s better not to set it, because otherwise if you don’t trigger
the rebalance, then only one node will store the cache."
With the configuration I posted you, the cache is spread out to the
machines that I use in the setNodeFilter().

Yes, I believe you're correct about the NodeFilter. It should be pointless
to have now, right? That was me experimenting and trying to figure out why
the cache was spreading to new nodes.

fetchNodes() fetches the ids of the local node and the k most empty nodes (
where k is given as an input for each cache). I check how full a node is
based on the code right above, in which I check how many caches a node has.

Yes, I read that I should have set the attributes. However, now it feels
like an unnecessary step? What would that improve, in my case?

 And yes, it makes sense now! Thanks for the clarification. I thought that
the rebalancing was rebalancing something in an uncontrolled way, but turns
out everything was due to my
ignite.cache(n).localEntries(CachePeekMode.ALL) creating a local cache.

I have just one question: you called it "backup filter". Is the nodeFilter
a filter for only backup nodes or was that a typo? I thought it was a
filter for all the nodes for a cache.

On Wed, 14 Aug 2019 at 17:58, Denis Mekhanikov 
wrote:

> Marco,
>
> Rebalance mode set to NONE means that your cache won’t be rebalanced at
> all unless you trigger it manually.
> I think, it’s better not to set it, because otherwise if you don’t trigger
> the rebalance, then only one node will store the cache.
>
> Also the backup filter specified in the affinity function doesn’t seem
> correct to me. It’s always true, since your node filter accepts only those
> nodes, that are in the nodesForOptimization list

Node failure with "Failed to write buffer." error

2019-08-22 Thread ihalilaltun
Hi folks,

We have been experiencing node failures with the error "Failed to write
buffer." recently. Any ideas or optimizations not to get the error and node
failure?

Thanks...

[2019-08-22T01:20:55,916][ERROR][wal-write-worker%null-#221][] Critical
system error detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
[type=CRITICAL_ERROR, err=class
o.a.i.i.processors.cache.persistence.StorageException: Failed to write
buffer.]]
org.apache.ignite.internal.processors.cache.persistence.StorageException:
Failed to write buffer.
at
org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3484)
[ignite-core-2.7.5.jar:2.7.5]
at
org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.body(FileWriteAheadLogManager.java:3301)
[ignite-core-2.7.5.jar:2.7.5]
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
[ignite-core-2.7.5.jar:2.7.5]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
Caused by: java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110)
~[?:1.8.0_201]
at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:253)
~[?:1.8.0_201]
at
org.apache.ignite.internal.processors.cache.persistence.file.RandomAccessFileIO.position(RandomAccessFileIO.java:48)
~[ignite-core-2.7.5.jar:2.7.5]
at
org.apache.ignite.internal.processors.cache.persistence.file.FileIODecorator.position(FileIODecorator.java:41)
~[ignite-core-2.7.5.jar:2.7.5]
at
org.apache.ignite.internal.processors.cache.persistence.file.AbstractFileIO.writeFully(AbstractFileIO.java:111)
~[ignite-core-2.7.5.jar:2.7.5]
at
org.apache.ignite.internal.processors.cache.persistence.wal.FileWriteAheadLogManager$WALWriter.writeBuffer(FileWriteAheadLogManager.java:3477)
~[ignite-core-2.7.5.jar:2.7.5]
... 3 more
[2019-08-22T01:20:55,921][WARN
][wal-write-worker%null-#221][FailureProcessor] No deadlocked threads
detected.
[2019-08-22T01:20:56,347][WARN
][wal-write-worker%null-#221][FailureProcessor] Thread dump at 2019/08/22
01:20:56 UTC


*Ignite version*: 2.7.5 
*Cluster size*: 16 
*Client size*: 22 
*Cluster OS version*: Centos 7 
*Cluster Kernel version*: 4.4.185-1.el7.elrepo.x86_64 
*Java version* : 
java version "1.8.0_201" 
Java(TM) SE Runtime Environment (build 1.8.0_201-b09) 
Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode) 

Current disk sizes:
Screen_Shot_2019-08-22_at_12.png

Ignite and GC logs:
ignite-9.zip

Ignite configuration file:
default-config.xml



-
İbrahim Halil Altun
Senior Software Engineer @ Segmentify
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite backup/restore Cache-wise

2019-08-22 Thread Stanislav Lukyanov
GridGain Snapshots allow you to take a backup on a live, working cluster.
If you can afford to stop the cluster activity while the snapshot is being taken,
you can:
- Deactivate the cluster (e.g. control.sh --deactivate)
- Copy the persistence files (you would need work/binary_meta,
work/marshaller, work/db)
- Activate the cluster

To restore the data:
- Deactivate the cluster
- Place the backup files to the original location
- Activate the cluster

You may try to exclude certain caches from work/db, or only copy certain
caches, but be aware that this is not a supported use case, so you may need
to figure out the correct action sequence that works for you.
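
If it's more convenient, the same deactivate/activate steps can be driven from
code; a minimal sketch (the actual file copy is up to you):

import org.apache.ignite.Ignite;

public class BackupHelper {
    // Wraps a copy of work/db, work/binary_meta and work/marshaller with deactivate/activate.
    static void withClusterDeactivated(Ignite ignite, Runnable copyFiles) {
        ignite.cluster().active(false);    // same effect as control.sh --deactivate
        try {
            copyFiles.run();               // copy (or restore) the persistence files here
        }
        finally {
            ignite.cluster().active(true); // reactivate the cluster
        }
    }
}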

Stan


Question on submitted post

2019-08-22 Thread Pascoe Scholle
Hi there,

How long does it usually take for a post to be seen on the forum? Maybe I
made a mistake, so I will provide my question here. Excuse me if I am being
impatient:


=
Good afternoon everyone,

I have recently run into an issue and I think the problem lies in the
server node configuration. I will attach the output of the stack trace at
the bottom; however, I first wish to explain what the software does and how
we are using Ignite.

I start multiple server nodes with peerClassLoadingEnabled set to true, using a
TcpDiscoveryVmIpFinder, and don't set anything other than a port range for
the ipFinder.

Using the REST protocol, a ComputeTaskAdapter task is executed which starts
a service, and this in turn starts a gRPC server. I have placed some Scala
code below to show what I mean.

class StartService extends ComputeTaskAdapter[String, Any]{
  var ignite: Ignite = null;
  @IgniteInstanceResource
  def setIgnite(someIgnite: Ignite): Unit = {
ignite = someIgnite
  }

 def map(...)={
...
// port is an integer
val server = new GrpcServer(ignite, port);

val service = new ServiceImpl(name, server);
/*
within the method execute of the Service interface, server.start() is called
*/

val serviceconfig = new ServiceConfiguration();
  serviceconfig.setName(name);
  serviceconfig.setTotalCount(1);
  serviceconfig.setMaxPerNodeCount(1);
  ignite.services().deploy(serviceconfig);
...
}

}

This task returns a map with some unimportant variables.

The gRPC server takes the Ignite instance created within the above-mentioned
compute task as a variable; I am not sure if this could be the cause of the
issue.

Using the gRPC protocol, we create a ComputeTask which is executed by the gRPC
server; some more code below:

class GrpcServer(val ignite:Ignite, val port:Int) extends ..Some Grpc
stuff..{

def someGrpcProtocol(request: Message):Future[String]={
val newTask = new SomeTask();

ignite.compute(ignite.cluster()).execute(newTask, someinput);
Future("Request is being processed");
}

}


If a single server node is started, the program runs without problems.
However, adding more nodes and trying to execute the new tasks on a remote
node or on a node that has a certain attribute gives me a massive stack
trace in the face.
Basically, if I want to execute a task on a node where the service and grpc
server do not reside, the exception happens.

I have placed all custom classes within a jar that lies in the libs folder
of the ignite-bin project.
We are currently on version 2.7

If you require anything else just let me know; I'll be on it asap.

Thanks for any help that may come my way.

Cheers!

Here is most of the stack trace:
class org.apache.ignite.binary.BinaryObjectException: Failed to read field
[name=server]
at
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:192)
at
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:875)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1984)
at
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:703)
at
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:188)
at
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:875)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1984)
at
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:703)
at
org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:188)
at
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:875)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
at
org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:313)
at
org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:102)
at
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
at
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10140)
at
org.apache.ignite.internal.processors.job.GridJobWorker.