exceptions in Ignite node when a thin client process ends

2019-07-31 Thread Matan Levy
Hi,

In my application I am using ODBC + Java thin clients to get data from the
cache (stored in Ignite nodes).
Whenever a process using these thin clients finishes, I see this exception in
the node logs:

*[07:45:19,559][SEVERE][grid-nio-worker-client-listener-1-#30][ClientListenerProcessor] Failed to process selector key [ses=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-client-listener-1, igniteInstanceName=null, finished=false, heartbeatTs=1564289118230, hashCode=1829856117, interrupted=false, runner=grid-nio-worker-client-listener-1-#30]]], writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl [locAddr=/0:0:0:0:0:0:0:1:10800, rmtAddr=/0:0:0:0:0:0:0:1:63697, createTime=1564289116225, closeTime=0, bytesSent=1346, bytesRcvd=588, bytesSent0=0, bytesRcvd0=0, sndSchedTime=1564289116235, lastSndTime=1564289116235, lastRcvTime=1564289116235, readsPaused=false, filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter, GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]], accepted=true, markedForClose=false]]]
java.io.IOException: An existing connection was forcibly closed by the remote host
    at sun.nio.ch.SocketDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1104)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2389)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2156)
    at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1797)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at java.lang.Thread.run(Thread.java:748)*



Is there something I can do to avoid these exceptions?
Thanks!
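
For reference, a minimal sketch of a Java thin client that disconnects cleanly on exit, assuming the default thin-client port 10800 and a hypothetical cache name "myCache". Closing the client explicitly (here via try-with-resources) gives the server an orderly disconnect instead of a reset socket, though some Ignite versions may still log the close:

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientShutdown {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800"); // default thin-client port

        // try-with-resources closes the connection before the process exits.
        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
            cache.put(1, "value");
            System.out.println(cache.get(1));
        }
    }
}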


Re: [IgniteKernal%ServerNode] Exception during start processors, node will be stopped and close connections java.lang.NullPointerException: null......cache.GridCacheUtils.affinityNode

2019-07-31 Thread siva
Hi,

Thanks for the reply.

 
1. Whenever the server restarts, we load data into the Ignite cache. I thought
the fix might be in the nightly release (loading disk data into the Ignite
cache on server restart), so that if I could run the nightly version I would
no longer need to reload data on every server restart.

2. When restarting the server while disk-persisted data is present, we get the
NullPointerException.


Thanks







Re: CacheConfiguration#setTypes: deserialization on client

2019-07-31 Thread Denis Magda
Ruslan,

Yes, I believe it's safe to remove that line from your configuration, since it
only enforces that you don't put an object of a different class into the
cache.

-
Denis


On Wed, Jul 31, 2019 at 12:18 AM Ruslan Kamashev  wrote:

> Example of CacheConfiguration:
>
> new CacheConfiguration("exampleCache")
> .setDataRegionName("exampleDataRegion")
> .setSqlSchema("PUBLIC")
> .setCacheMode(CacheMode.PARTITIONED)
> .setBackups(3)
>
> .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
> .setAffinity(getRendezvousAffinityFunction())
> // https://issues.apache.org/jira/browse/IGNITE-11352
> .setStatisticsEnabled(false)
> .setManagementEnabled(true)
> .setTypes(TestKey.class, TestValue.class) // Can I just remove
> this line in my configuration without side effects?
> .setKeyConfiguration(
> new CacheKeyConfiguration()
> .setTypeName(TestKey.class.getTypeName())
> .setAffinityKeyFieldName("name")
> )
> .setQueryEntities(Arrays.asList(
> new QueryEntity(TestKey.class.getName(),
> TestValue.class.getName())
> .setTableName("exampleTable")
> ))
> .setAtomicityMode(CacheAtomicityMode.ATOMIC)
>
>
> On Wed, Jul 31, 2019 at 1:46 AM Denis Magda  wrote:
>
>> Could you please share your configuration?
>>
>> -
>> Denis
>>
>>
>> On Tue, Jul 30, 2019 at 10:37 AM Ruslan Kamashev 
>> wrote:
>>
>>> Related issue https://issues.apache.org/jira/browse/IGNITE-1903
>>> I don't use CacheStore, but I have the same problem with
>>> CacheConfiguration#setTypes.
>>> Could you offer a workaround for solving this problem? Can I just remove
>>> this line in my configuration without side effects?
>>>
>>> Apache Ignite 2.7.0
>>>
>>


Re: Affinity collocation and node rebalancing

2019-07-31 Thread Denis Magda
Hello Lily,

Ignite ensures data consistency at all times, taking care of data rebalancing
scenarios as well. For instance, if there are Table1 and Table2 and they are
collocated (Table2.affinityKey links it to Table1.someKey), then during
rebalancing all the collocated records of both tables will be moved to the
new node together.

In more detail, Ignite rebalances partitions. Once your cluster topology
changes, the affinity function recalculates the partition distribution to
keep it even cluster-wide. As a result, some of the partitions will stay on
existing nodes while the others will be moved to the new one. Collocated data
is always stored in the partition that has the same number as the partition
of the parental record. For instance, if Table1.someKey is mapped to
partition 5 of Table1, then all the records of Table2 for which
Table2.affinityKey == Table1.someKey will be stored in partition 5 of Table2.
If partition 5 needs to be rebalanced, it will be rebalanced for all the
tables you have, so that the new node owns partition 5 of every table. The
only requirement here is that the total number of partitions and the affinity
function must be the same for all the tables (which is true by default).
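
As an illustration, a minimal sketch of how such collocation is typically declared in Java; the key class and cache names here are hypothetical, and the @AffinityKeyMapped field is what routes a Table2 record to the same partition as the referenced Table1 key:

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class Table2Key {
    /** Unique id of the Table2 record. */
    private final long id;

    /** Collocation field: this record is stored in the partition that
     *  holds the Table1 record whose key equals this value. */
    @AffinityKeyMapped
    private final long table1Key;

    public Table2Key(long id, long table1Key) {
        this.id = id;
        this.table1Key = table1Key;
    }
}

// Collocation can be verified through the Affinity API, e.g.:
//   int p1 = ignite.affinity("Table1").partition(42L);
//   int p2 = ignite.affinity("Table2").partition(new Table2Key(7L, 42L));
//   // p1 == p2: both records live in the same partition number.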

-
Denis


On Tue, Jul 30, 2019 at 4:28 PM lhendra  wrote:

> Hi,
>
> I'm totally new to Ignite and have been reading lots of documentation.
> I'm trying to find info on how node rebalancing is handled in relation to
> affinity collocation.
> There are 2 use cases that I want to confirm:
> (1) Create a new table with an affinity key to an existing table.
> (2) Add new columns to existing tables.
> In both cases, let's say we start loading data that causes nodes to fill
> up,
> and we throw in a new node. Does Ignite automatically perform node
> rebalancing WHILE preserving the affinity constraints in any/both cases?
>
> Thanks,
> Lily
>
>
>
>


Re: Ignite 2.7.0: Ignite client:: memory leak

2019-07-31 Thread Denis Magda
Manesh,

Start all the nodes with the following parameters:
https://apacheignite.readme.io/docs/jvm-and-system-tuning#section-getting-heap-dump-on-out-of-memory-errors

The JVM will create a heap dump on failure, and you'll be able to see the
root cause of the leak. If the leak is in Ignite, then please file a BLOCKER ticket.
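
A sketch of the standard HotSpot flags involved (the dump path is a placeholder, and passing them through JVM_OPTS to ignite.sh is an assumption about your launch script):

JVM_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/opt/ignite/dumps" ./ignite.sh config.xml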


-
Denis


On Tue, Jul 30, 2019 at 7:45 AM Mahesh Renduchintala <
mahesh.renduchint...@aline-consulting.com> wrote:

> In fact, in the logs you can see that whenever the print below comes up,
> the memory jumps up by 100-200 MB:
>
> >>Full map updating for 873 groups performed in 16 ms
>
>
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=4c8b23b4, uptime=00:19:13.959]
> ^-- H/N/C [hosts=8, nodes=10, CPUs=48]
> ^-- CPU [cur=8%, avg=7.55%, GC=0%]
> ^-- PageMemory [pages=0]
> *^-- Heap [used=1307MB, free=35.9%, comm=2039MB]*
> ^-- Off-heap [used=0MB, free=-1%, comm=0MB]
> ^-- Outbound messages queue [size=0]
> ^-- Public thread pool [active=0, idle=4, qSize=0]
> ^-- System thread pool [active=0, idle=2, qSize=0]
> 2019-07-30 14:11:48.485  INFO 26 --- [   sys-#167]
> .c.d.d.p.GridDhtPartitionsExchangeFuture : Received full message, will
> finish exchange [node=9e2951dc-e8ad-44e6-9495-83b0e5337511,
> resVer=AffinityTopologyVersion [topVer=1361, minorTopVer=0]]
> 2019-07-30 14:11:48.485  INFO 26 --- [   sys-#167]
> .c.d.d.p.GridDhtPartitionsExchangeFuture : Received full message, need
> merge [curFut=AffinityTopologyVersion [topVer=1357, minorTopVer=0],
> resVer=AffinityTopologyVersion [topVer=1361, minorTopVer=0]]
> 2019-07-30 14:11:48.485  INFO 26 --- [   sys-#167]
> .i.p.c.GridCachePartitionExchangeManager : Merge exchange future on finish
> [curFut=AffinityTopologyVersion [topVer=1357, minorTopVer=0],
> mergedFut=AffinityTopologyVersion [topVer=1358, minorTopVer=0],
> evt=NODE_JOINED, evtNode=864571bd-7235-4fe0-9e52-f3a78f35dbb2,
> evtNodeClient=false]
> 2019-07-30 14:11:48.485  INFO 26 --- [   sys-#167]
> .i.p.c.GridCachePartitionExchangeManager : Merge exchange future on finish
> [curFut=AffinityTopologyVersion [topVer=1357, minorTopVer=0],
> mergedFut=AffinityTopologyVersion [topVer=1359, minorTopVer=0],
> evt=NODE_FAILED, evtNode=20eef25d-b7ec-4340-9da8-1a5a35678ba5,
> evtNodeClient=false]
> 2019-07-30 14:11:48.485  INFO 26 --- [   sys-#167]
> .i.p.c.GridCachePartitionExchangeManager : Merge exchange future on finish
> [curFut=AffinityTopologyVersion [topVer=1357, minorTopVer=0],
> mergedFut=AffinityTopologyVersion [topVer=1360, minorTopVer=0],
> evt=NODE_JOINED, evtNode=9c318eb2-dd21-457c-8d1f-e6d4677e1a55,
> evtNodeClient=true]
> 2019-07-30 14:11:48.486  INFO 26 --- [   sys-#167]
> .i.p.c.GridCachePartitionExchangeManager : Merge exchange future on finish
> [curFut=AffinityTopologyVersion [topVer=1357, minorTopVer=0],
> mergedFut=AffinityTopologyVersion [topVer=1361, minorTopVer=0],
> evt=NODE_FAILED, evtNode=864571bd-7235-4fe0-9e52-f3a78f35dbb2,
> evtNodeClient=false]
> 2019-07-30 14:11:48.861  INFO 26 --- [   sys-#167]
> o.a.i.i.p.c.CacheAffinitySharedManager   : Affinity applying from full
> message performed in 375 ms.
> 2019-07-30 14:11:48.864  INFO 26 --- [   sys-#167]
> .c.d.d.p.GridDhtPartitionsExchangeFuture : Affinity changes applied in 379
> ms.
> 2019-07-30 14:11:48.880  INFO 26 --- [   sys-#167]
> .c.d.d.p.GridDhtPartitionsExchangeFuture : Full map updating for 873 groups
> performed in 16 ms.
> 2019-07-30 14:11:48.880  INFO 26 --- [   sys-#167]
> .c.d.d.p.GridDhtPartitionsExchangeFuture : Finish exchange future
> [startVer=AffinityTopologyVersion [topVer=1357, minorTopVer=0],
> resVer=AffinityTopologyVersion [topVer=1361, minorTopVer=0], err=null]
> 2019-07-30 14:11:48.927  INFO 26 --- [   sys-#167]
> .c.d.d.p.GridDhtPartitionsExchangeFuture : Detecting lost partitions
> performed in 47 ms.
> 2019-07-30 14:11:49.280  INFO 26 --- [   sys-#167]
> .c.d.d.p.GridDhtPartitionsExchangeFuture : Completed partition exchange
> [localNode=4c8b23b4-ce12-4dbb-a7ea-9279711f4008,
> exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion
> [topVer=1357, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode
> [id=20eef25d-b7ec-4340-9da8-1a5a35678ba5, addrs=[0:0:0:0:0:0:0:1%lo,
> 127.0.0.1, 192.168.1.139, 192.168.1.181], sockAddrs=[/192.168.1.181:47500,
> /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500, /192.168.1.139:47500],
> discPort=47500, order=1357, intOrder=696, lastExchangeTime=1564495322589,
> loc=false, ver=2.7.0#20181130-sha1:256ae401, isClient=false], done=true],
> topVer=AffinityTopologyVersion [topVer=1361, minorTopVer=0],
> durationFromInit=4411]
> 2019-07-30 14:11:49.289  INFO 26 --- [ange-worker-#43]
> .i.p.c.GridCachePartitionExchangeManager : Skipping rebalancing (no
> affinity changes) [top=AffinityTopologyVersion [topVer=1361,
> minorTopVer=0], rebTopVer=AffinityTopologyVersion [topVer=-1,
> minorTopVer=0], evt=NODE_JOINED,
> evtNode=20eef25d-b7ec

RE: [IgniteKernal%ServerNode] Exception during start processors, nodewill be stopped and close connections java.lang.NullPointerException:null......cache.GridCacheUtils.affinityNode

2019-07-31 Thread Alexandr Shapkin
Hi,

As far as I can see, [1] is included in the 2.8 nightly build and will most
likely be included in the next minor release.

Please clarify the following questions:
1) What is the point of using the 2.8 nightly build, given that it could be
unstable for now?
2) What is the original issue you have been trying to solve?

[1] and the logs from your initial post look quite different, so I'm
wondering why you think they are the same.

Is the original issue (the NullPointerException) still reproducible on the
latest stable release (2.7.5)?

[1] - https://issues.apache.org/jira/browse/IGNITE-10451

From: siva
Sent: Wednesday, July 31, 2019 2:44 PM
To: user@ignite.apache.org
Subject: Re: [IgniteKernal%ServerNode] Exception during start processors, 
nodewill be stopped and close connections 
java.lang.NullPointerException:null..cache.GridCacheUtils.affinityNode

Hi,
Could you provide some suggestions on the above issue?
1. Is this issue fixed, or do we still need to wait for the next release?
2. Is there any alternative solution for the above issue?

Thanks.






RE: Partition loss event

2019-07-31 Thread Alexandr Shapkin
Hi,

You need to configure the includeEventTypes property before starting the grid
[1].

Xml:
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="includeEventTypes">
        <list>
            <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST"/>
        </list>
    </property>
</bean>


Code:
cfg.setIncludeEventTypes(EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);

[1] - https://apacheignite.readme.io/docs/events#section-configuration
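
For completeness, a minimal sketch of wiring that together with a local listener (the reload logic is a placeholder):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class PartitionLossListener {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            // Recording of this event type must be enabled explicitly.
            .setIncludeEventTypes(EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);

        Ignite ignite = Ignition.start(cfg);

        IgnitePredicate<Event> lsnr = evt -> {
            System.out.println("Partition lost: " + evt);
            // Trigger the full data reload here.
            return true; // keep the listener registered
        };

        ignite.events().localListen(lsnr, EventType.EVT_CACHE_REBALANCE_PART_DATA_LOST);
    }
}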


From: balazspeterfi
Sent: Wednesday, July 31, 2019 4:20 PM
To: user@ignite.apache.org
Subject: Partition loss event

Hi All,

I'm using Ignite 2.7.5 and am trying to add some logic to my service that
would trigger a full data reload in case there was a partition loss event. I
followed the docs and added a listener on the
*EVT_CACHE_REBALANCE_PART_DATA_LOST* event, but my code never gets called. I
was testing it by setting the backups to 0 and cache mode to *PARTITIONED*.
After killing a node I can see errors from Ignite when accessing one of the
keys on the lost partition but for some reason the event listener doesn't
get executed.
This is the error I receive on accessing lost data:

Does anyone have any idea what I might be doing wrong?

Thanks,
Balazs






Re: Ignite 2.7.0: unresponsive: Found long running cache future -

2019-07-31 Thread Loredana Radulescu Ivanoff
Hello,

I have recently encountered this "long-running cache future" message while
doing stress tests on Ignite (we run it embedded inside an application
server). I put CPU pressure on two out of four nodes using stress-ng, which
rendered those two nodes unresponsive for the duration of the test, and then
one of the other two "healthy" nodes started logging the "long running
future" message.

In my opinion, it could be related to a cache operation that cannot complete
because another node (or nodes) does not respond in time. I hope that helps a
little.

On Tue, Jul 30, 2019 at 3:52 PM Denis Magda  wrote:

> Igniters,
>
> Does this remind you of any long-running transaction, starvation, or
> deadlock? It's not obvious from the attached files what caused the issue.
> Mahesh, please share all the logs from all the nodes and apps for analysis.
>
>
> -
> Denis
>
>
> On Sun, Jul 28, 2019 at 8:50 PM Mahesh Renduchintala <
> mahesh.renduchint...@aline-consulting.com> wrote:
>
>> Hi,
>>
>>
>>
>> After a while, an Ignite node becomes unresponsive, printing:
>>
>>
>> >>Found long-running cache future
>>
>>
>> Please check the attached log.
>>
>>
>> What does "Found long-running cache future" mean?
>>
>> Please shed some light.
>>
>>
>> regards
>>
>> mahesh
>>
>>
>>


RE: Running Ignite Cluster using Docker containers

2019-07-31 Thread Alexandr Shapkin
Hello,

With the default configuration, Ignite uses the MulticastIpFinder, which will
try to scan for available Ignite instances within your network automatically.
The default required ports are 47500 for discovery and 47100 for
communication, and localhost (127.0.0.1) is the default alias for the Docker
machine.

I would suggest setting up port forwarding for 47100 as well.
With that default configuration you can connect from your local machine to
Docker at 127.0.0.1:47500 for thick clients and 127.0.0.1:10800 for thin
clients.

You can change the default configuration with the CONFIG_URI parameter [1],
which can also point to a local file such as file:///my-config.xml [2].

You can also try replacing the whole "config" folder with your own
configuration files:
docker run -v /myconfigFolder:/opt/ignite/apache-ignite/config ...
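
Putting the port and volume suggestions together, a sketch of the full command (the host folder path is a placeholder):

docker run --rm --name myignite \
  -p 47500:47500 -p 47100:47100 -p 10800:10800 \
  -v /myConfigFolder:/opt/ignite/apache-ignite/config \
  -it apacheignite/ignite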

[1] - https://apacheignite.readme.io/docs/docker-deployment
[2] - 
https://stackoverflow.com/questions/54714911/run-ignite-docker-with-custom-config-file

From: vitalys
Sent: Wednesday, July 31, 2019 5:12 PM
To: user@ignite.apache.org
Subject: Running Ignite Cluster using Docker containers

Hi,

I am trying to run an Ignite cluster using Docker containers. I have Docker
for Desktop running on my Windows box, and I downloaded the Apache Ignite
2.7.0 image from Docker Hub. I am able to start an Ignite server using this
docker command:

docker run --rm --name myignite -p 47500:47500 -p 47501:47501 -p
10800:10800 -it apacheignite/ignite

I am trying to connect to the server node from my client application, and
here I hit a snag. How do I connect to the server? I opened the image, went
to the "config" folder, and found "default-config.xml" there with no
"IpFinder" section. What ports am I supposed to connect to from the client?
What hostname do I need to specify on the client side, "localhost"?

I looked for documentation in the Apache Ignite tutorial, but very few things
there are dedicated to this topic. Is there any example or some kind of
tutorial on how to run an Ignite cluster using Docker containers?

Thank you in advance.






What happens when a client gets disconnected

2019-07-31 Thread Matt Nohelty
Sorry for the long delay in responding to this issue.  I will work on
replicating this issue in a more controlled test environment and try to
grab thread dumps from there.

In a previous post you mentioned that the blocking in this thread dump should
only happen when a data node is affected, which is usually a server node, and
you also said that near-cache consistency is maintained continuously. If we
have near caching enabled, does that mean clients become data nodes? If
that's the case, does that explain why we are seeing blocking when a client
crashes or hangs?

Assuming this is related to near caching, is there any configuration to
adjust this behavior to give us availability over perfect consistency?
Having a failure on one client ripple across the entire system and
effectively take down all other clients of that cluster is a major problem.
We obviously want to avoid problems like an OOM error or a big GC pause in
the client application but if these things happen we need to be able to
absorb these gracefully and limit the blast radius to just that client
node.
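
For reference, a minimal sketch of how a near cache is typically enabled on a thick client node, assuming a hypothetical cache name "myCache" that already exists on the servers (this illustrates the configuration in question, not necessarily your exact setup):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheOnClient {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true); // join the cluster as a thick client

        try (Ignite ignite = Ignition.start(cfg)) {
            // The near cache keeps local copies of hot entries on this node;
            // keeping those copies consistent ties the client into cache
            // update flows on topology events.
            IgniteCache<Integer, String> cache = ignite.getOrCreateNearCache(
                "myCache", new NearCacheConfiguration<>());

            System.out.println(cache.get(1));
        }
    }
}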


Running Ignite Cluster using Docker containers

2019-07-31 Thread vitalys
Hi,

I am trying to run an Ignite cluster using Docker containers. I have Docker
for Desktop running on my Windows box, and I downloaded the Apache Ignite
2.7.0 image from Docker Hub. I am able to start an Ignite server using this
docker command:

docker run --rm --name myignite -p 47500:47500 -p 47501:47501 -p
10800:10800 -it apacheignite/ignite

I am trying to connect to the server node from my client application, and
here I hit a snag. How do I connect to the server? I opened the image, went
to the "config" folder, and found "default-config.xml" there with no
"IpFinder" section. What ports am I supposed to connect to from the client?
What hostname do I need to specify on the client side, "localhost"?

I looked for documentation in the Apache Ignite tutorial, but very few things
there are dedicated to this topic. Is there any example or some kind of
tutorial on how to run an Ignite cluster using Docker containers?

Thank you in advance.





Partition loss event

2019-07-31 Thread balazspeterfi
Hi All,

I'm using Ignite 2.7.5 and am trying to add some logic to my service that
would trigger a full data reload in case there was a partition loss event. I
followed the docs and added a listener on the
*EVT_CACHE_REBALANCE_PART_DATA_LOST* event, but my code never gets called. I
was testing it by setting the backups to 0 and cache mode to *PARTITIONED*.
After killing a node I can see errors from Ignite when accessing one of the
keys on the lost partition but for some reason the event listener doesn't
get executed.
This is the error I receive on accessing lost data:

Does anyone have any idea what I might be doing wrong?

Thanks,
Balazs





Re: [IgniteKernal%ServerNode] Exception during start processors, node will be stopped and close connections java.lang.NullPointerException: null......cache.GridCacheUtils.affinityNode

2019-07-31 Thread siva
Hi,
Could you provide some suggestions on the above issue?
1. Is this issue fixed, or do we still need to wait for the next release?
2. Is there any alternative solution for the above issue?

Thanks.





Re: NegativeArraySizeException on cluster rebalance

2019-07-31 Thread Dmitriy Govorukhin
Hi Ibrahim,

This looks like a B+tree corruption problem; I believe it was fixed in
https://issues.apache.org/jira/browse/IGNITE-11953
You can find the fix in commit 9c323149db1cee0ff6586389def059a85428b116,
org/apache/ignite/internal/processors/cache/IgniteCacheOffheapManagerImpl.java:1631

Thank you!

On Wed, Jul 17, 2019 at 1:55 PM ihalilaltun 
wrote:

> Hi Igniters,
>
> We are getting a NegativeArraySizeException during rebalancing of one of
> our cache groups. I am adding the details that I could get from the logs:
>
> *Ignite version*: 2.7.5
> *Cluster size*: 16
> *Client size*: 22
> *Cluster OS version*: Centos 7
> *Cluster Kernel version*: 4.4.185-1.el7.elrepo.x86_64
> *Java version* :
> java version "1.8.0_201"
> Java(TM) SE Runtime Environment (build 1.8.0_201-b09)
> Java HotSpot(TM) 64-Bit Server VM (build 25.201-b09, mixed mode)
>
>
> The POJO we use:
> UserPriceDropRecord.txt
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2515/UserPriceDropRecord.txt>
>
> The error we got:
>
> [2019-07-17T10:23:13,162][ERROR][sys-#241][GridDhtPartitionSupplier] Failed to continue supplying [grp=userPriceDropDataCacheGroup, demander=12d8bad8-62a9-465d-aca4-4afa203d6778, topVer=AffinityTopologyVersion [topVer=238, minorTopVer=0], topic=1]
> java.lang.NegativeArraySizeException: null
> at org.apache.ignite.internal.pagemem.PageUtils.getBytes(PageUtils.java:63) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.readFullRow(CacheDataRowAdapter.java:330) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:167) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:108) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.tree.DataRow.<init>(DataRow.java:55) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.tree.CacheDataRowStore.dataRow(CacheDataRowStore.java:92) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.tree.CacheDataTree.getRow(CacheDataTree.java:200) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.tree.CacheDataTree.getRow(CacheDataTree.java:49) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.fillFromBuffer0(BPlusTree.java:5512) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$AbstractForwardCursor.fillFromBuffer(BPlusTree.java:5280) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$AbstractForwardCursor.nextPage(BPlusTree.java:5332) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.next(BPlusTree.java:5566) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$9.onHasNext(IgniteCacheOffheapManagerImpl.java:1185) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteRebalanceIteratorImpl.advance(IgniteRebalanceIteratorImpl.java:79) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteRebalanceIteratorImpl.nextX(IgniteRebalanceIteratorImpl.java:139) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteRebalanceIteratorImpl.next(IgniteRebalanceIteratorImpl.java:185) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.IgniteRebalanceIteratorImpl.next(IgniteRebalanceIteratorImpl.java:37) ~[ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionSupplier.handleDemandMessage(GridDhtPartitionSupplier.java:333) [ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleDemandMessage(GridDhtPreloader.java:404) [ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:424) [ignite-core-2.7.5.jar:2.7.5]
> at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:409) [ignite-core-2.7.5.jar:2.7.5

Re: CacheConfiguration#setTypes: deserialization on client

2019-07-31 Thread Ruslan Kamashev
Example of CacheConfiguration:

new CacheConfiguration("exampleCache")
.setDataRegionName("exampleDataRegion")
.setSqlSchema("PUBLIC")
.setCacheMode(CacheMode.PARTITIONED)
.setBackups(3)

.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC)
.setAffinity(getRendezvousAffinityFunction())
// https://issues.apache.org/jira/browse/IGNITE-11352
.setStatisticsEnabled(false)
.setManagementEnabled(true)
.setTypes(TestKey.class, TestValue.class) // Can I just remove this
line in my configuration without side effects?
.setKeyConfiguration(
new CacheKeyConfiguration()
.setTypeName(TestKey.class.getTypeName())
.setAffinityKeyFieldName("name")
)
.setQueryEntities(Arrays.asList(
new QueryEntity(TestKey.class.getName(),
TestValue.class.getName())
.setTableName("exampleTable")
))
.setAtomicityMode(CacheAtomicityMode.ATOMIC)


On Wed, Jul 31, 2019 at 1:46 AM Denis Magda  wrote:

> Could you please share your configuration?
>
> -
> Denis
>
>
> On Tue, Jul 30, 2019 at 10:37 AM Ruslan Kamashev 
> wrote:
>
>> Related issue https://issues.apache.org/jira/browse/IGNITE-1903
>> I don't use CacheStore, but I have the same problem with
>> CacheConfiguration#setTypes.
>> Could you offer a workaround for solving this problem? Can I just remove
>> this line in my configuration without side effects?
>>
>> Apache Ignite 2.7.0
>>
>