Re: Ignite can't activate

2018-11-28 Thread yangjiajun
Hello.
Here is a reproducer for my case:
1. Start a node with persistence enabled.
2. Create a table without a cache group and create an index on it.
3. Create another table and assign a cache group to it. Use the same name to
create an index on this table.
4. Stop the node.
5. Restart the node and activate it. Then you will see the exception.
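The steps above can be sketched in SQL. This is a minimal illustration, not my exact DDL: the table, column, and index names are hypothetical stand-ins, and `CACHE_GROUP` is the standard Ignite DDL WITH parameter:

```sql
-- Step 2: a table without a cache group, plus an index
CREATE TABLE t1 (id INT PRIMARY KEY, val INT);
CREATE INDEX my_idx ON t1 (val);

-- Step 3: a table assigned to a cache group, with an index of the SAME name
CREATE TABLE t2 (id INT PRIMARY KEY, val INT) WITH "CACHE_GROUP=grp1";
CREATE INDEX my_idx ON t2 (val);

-- Steps 4-5: stop the node, restart it, and activate the cluster.
-- The "Duplicate index name" IgniteException is then thrown during activation.
```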


yangjiajun wrote
> [quoted message trimmed; the full post, including the stack trace, appears
> verbatim under "Ignite can't activate" later in this digest]

Re: Failed to get page IO instance (page content is corrupted) after one node failed when trying to reboot.

2018-11-28 Thread Ray Liu (rayliu)
Here's my analysis.

Looks like I encountered this bug:
https://issues.apache.org/jira/browse/IGNITE-8659
Because in the log file ignite-9d66b750-first-restart-node1.log, I see
[2018-11-29T03:01:39,135][INFO 
][exchange-worker-#162][GridCachePartitionExchangeManager] Rebalancing started 
[top=AffinityTopologyVersion [topVer=11834, minorTopVer=0], evt=NODE_JOINED, 
node=6018393e-a88c-40f5-8d77-d136d5226741]
[2018-11-29T03:01:39,136][INFO 
][exchange-worker-#162][GridDhtPartitionDemander] Starting rebalancing 
[grp=SQL_PUBLIC_WBXSITEACCOUNT, mode=ASYNC, 
fromNode=6018393e-a88c-40f5-8d77-d136d5226741, partitionsCount=345, 
topology=AffinityTopologyVersion [topVer=11834, minorTopVer=0], rebalanceId=47]

But why did rebalancing start only two hours after the node started?
Is it because PME got stuck for two hours?

It also looks like PME got stuck again when rebalancing started (this is when
I restarted node2 and node3), because in the same log file I see
[2018-11-29T03:01:59,443][WARN 
][exchange-worker-#162][GridDhtPartitionsExchangeFuture] Unable to await 
partitions release latch within timeout: ServerLatch [permits=2, 
pendingAcks=[6018393e-a88c-40f5-8d77-d136d5226741, 
75a180ea-78de-4d63-8bd5-291557bd58f4], super=CompletableLatch [id=exchange, 
topVer=AffinityTopologyVersion [topVer=11835, minorTopVer=0]]]

Based on this document,
https://apacheignite.readme.io/docs/rebalancing#section-rebalance-modes,
rebalancing is async by default.
So what is blocking PME this time?

So basically I have three questions.
1. Why can't node1 join the cluster ("Still waiting for initial partition map
exchange" for two hours) when restarted?
Is it because node2 and node3 ingested some new data while node1 was down?

2. Why is node3 blocked by "Unable to await partitions release latch within
timeout" when restarted?

3. Is https://issues.apache.org/jira/browse/IGNITE-8659 the solution?

Andrew, can you take a look please?
I think it's a critical problem, because the only way to get node1 working is
to delete the data and WAL folders.
Needless to say, this causes data loss.

Thanks

Ray wrote:

> [quoted message trimmed; the full post, including the cluster configuration
> and the "Still waiting for initial partition map exchange" log, appears
> verbatim under "RE: Failed to get page IO instance..." later in this digest]
Ignite can't activate

2018-11-28 Thread yangjiajun
Hello.
My Ignite node can't activate after a restart. I have only one node, which runs
version 2.6. The exception that prevents activation is:

[14:18:05,802][SEVERE][exchange-worker-#110][GridCachePartitionExchangeManager]
Failed to wait for completion of partition map exchange (preloading will not
start): GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryCustomEvent
[customMsg=null, affTopVer=AffinityTopologyVersion [topVer=1,
minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=5cb09b16-6b97-4e01-a0c5-0d035293ea2e, addrs=[10.0.209.119],
sockAddrs=[dggphicprb01094/10.0.209.119:9001], discPort=9001, order=1,
intOrder=1, lastExchangeTime=1543471362559, loc=true,
ver=2.6.0#20180710-sha1:669feacc, isClient=false], topVer=1,
nodeId8=5cb09b16, msg=null, type=DISCOVERY_CUSTOM_EVT,
tstamp=1543471364603]], crd=TcpDiscoveryNode
[id=5cb09b16-6b97-4e01-a0c5-0d035293ea2e, addrs=[10.0.209.119],
sockAddrs=[dggphicprb01094/10.0.209.119:9001], discPort=9001, order=1,
intOrder=1, lastExchangeTime=1543471362559, loc=true,
ver=2.6.0#20180710-sha1:669feacc, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=1], discoEvt=DiscoveryCustomEvent [customMsg=null,
affTopVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
super=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=5cb09b16-6b97-4e01-a0c5-0d035293ea2e, addrs=[10.0.209.119],
sockAddrs=[dggphicprb01094/10.0.209.119:9001], discPort=9001, order=1,
intOrder=1, lastExchangeTime=1543471362559, loc=true,
ver=2.6.0#20180710-sha1:669feacc, isClient=false], topVer=1,
nodeId8=5cb09b16, msg=null, type=DISCOVERY_CUSTOM_EVT,
tstamp=1543471364603]], nodeId=5cb09b16, evt=DISCOVERY_CUSTOM_EVT],
added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE,
res=false, hash=1392437407], init=false, lastVer=GridCacheVersion [topVer=0,
order=1543471361380, nodeOrder=0], partReleaseFut=PartitionReleaseFuture
[topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion
[topVer=1, minorTopVer=1], futures=[]], AtomicUpdateReleaseFuture
[topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1], futures=[]],
DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=1], futures=[]], LocalTxReleaseFuture
[topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1], futures=[]],
AllTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=1,
minorTopVer=1], futures=[RemoteTxReleaseFuture
[topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1], futures=[]],
exchActions=null, affChangeMsg=null, initTs=1543471364624,
centralizedAff=false, forceAffReassignment=true, changeGlobalStateE=class
o.a.i.IgniteException: Duplicate index name
[cache=SQL_PUBLIC_TABLE_TEMP_TEST1_R_1_X, schemaName=PUBLIC,
idxName=TABLE_TEMP_99_R_1_X_ROMA3C_BSP_BATCH_ID,
existingTable=TABLE_TEMP_99_R_1_X, table=TABLE_TEMP_TEST1_R_1_X],
done=true, state=DONE, evtLatch=0, remaining=[], super=GridFutureAdapter
[ignoreInterrupts=false, state=DONE, res=class o.a.i.IgniteCheckedException:
Cluster state change failed., hash=721409668]]
class org.apache.ignite.IgniteCheckedException: Cluster state change failed.
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:2697)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:2467)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1149)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:712)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)

I use the "CREATE INDEX IF NOT EXISTS" statement to add indices. According to
my understanding, Ignite does not allow creating two indices with the same
name. Is this a bug? Can I recover my Ignite node?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ODBC driver build error

2018-11-28 Thread Ray
Hi Igor,

Thanks for the reply.
I installed OpenSSL 1.0.2 instead, but there's another error.

Severity  Code     Description  Project  File  Line
Error     LNK1120  1 unresolved externals  odbc
D:\ignite-ignite-2.6\modules\platforms\cpp\project\vs\Win32\Debug\ignite.odbc.dll  1
Error     LNK2019  unresolved external symbol __vsnwprintf_s referenced in
function _StringVPrintfWorkerW@20  odbc
D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\project\vs\odbccp32.lib(dllload.obj)  1




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Pain points of Ignite user community

2018-11-28 Thread Rohan Honwade
Thank you Stan.

Denis, I don’t intend to speak for my employer. The content will be my personal 
opinion.

Regards,
Rohan

> On Nov 28, 2018, at 8:05 PM, Stanislav Lukyanov 
> wrote:
> 
> [quoted message trimmed; it appears verbatim under "RE: Pain points of
> Ignite user community" later in this digest]



RE: Failed to get page IO instance (page content is corrupted) after one node failed when trying to reboot.

2018-11-28 Thread Ray
This issue happened again.

Here's the summary.
I'm running a three-node Ignite 2.6 cluster with this config:


[Spring XML configuration stripped by the mail archive; the surviving
fragments are the Spring beans schema header
(http://www.springframework.org/schema/beans) and the TcpDiscoverySpi
address list:]

node1:49500
node2:49500
node3:49500
I have a few caches set up with TTL and persistence enabled.
I mention this because I checked this thread
http://apache-ignite-users.70518.x6.nabble.com/And-again-Failed-to-get-page-IO-instance-page-content-is-corrupted-td20095.html#a22037
and a few tickets mentioned in it:
https://issues.apache.org/jira/browse/IGNITE-8659
https://issues.apache.org/jira/browse/IGNITE-5874
Other issues are ignored because they're already fixed in 2.6.


Node1 went down because of a long GC pause.
When I tried to restart the Ignite service on node1, the "Still waiting for
initial partition map exchange" warning kept repeating for more than 2 hours:
[WARN ][main][GridCachePartitionExchangeManager] Still waiting for initial
partition map exchange [fut=GridDhtPartitionsExchangeFuture
[firstDiscoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=9d66b750-09a3-4f0e-afa9-7cf24847ee6a, addrs=[10.252.4.60, 127.0.0.1],
sockAddrs=[rpsj1ign001.webex.com/10.252.4.60:49500, /127.0.0.1:49500],
discPort=49500, order=11813, intOrder=5909, lastExchangeTime=1543451981558,
loc=true, ver=2.6.0#20180709-sha1:5faffcee, isClient=false], topVer=11813,
nodeId8=9d66b750, msg=null, type=NODE_JOINED, tstamp=1543451943071],
crd=TcpDiscoveryNode [id=f14c8e36-9a20-4668-b52e-0de64c743700,
addrs=[10.252.10.20, 127.0.0.1],
sockAddrs=[rpsj1ign003.webex.com/10.252.10.20:49500, /127.0.0.1:49500],
discPort=49500, order=2310, intOrder=1158, lastExchangeTime=1543451942304,
loc=false, ver=2.6.0#20180709-sha1:5faffcee, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
[topVer=11813, minorTopVer=0], discoEvt=DiscoveryEvent
[evtNode=TcpDiscoveryNode [id=9d66b750-09a3-4f0e-afa9-7cf24847ee6a,
addrs=[10.252.4.60, 127.0.0.1],
sockAddrs=[rpsj1ign001.webex.com/10.252.4.60:49500, /127.0.0.1:49500],
discPort=49500, order=11813, intOrder=5909, lastExchangeTime=1543451981558,
loc=true, ver=2.6.0#20180709-sha1:5faffcee, isClient=false], topVer=11813,
nodeId8=9d66b750, msg=null, type=NODE_JOINED, tstamp=1543451943071],
nodeId=9d66b750, evt=NODE_JOINED], added=true, initFut=GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=830022440], init=false,
lastVer=null, partReleaseFut=PartitionReleaseFuture
[topVer=AffinityTopologyVersion [topVer=11813, minorTopVer=0],
futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion
[topVer=11813, minorTopVer=0], futures=[]], AtomicUpdateReleaseFuture
[topVer=AffinityTopologyVersion [topVer=11813, minorTopVer=0], futures=[]],
DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=11813,
minorTopVer=0], futures=[]], LocalTxReleaseFuture
[topVer=AffinityTopologyVersion [topVer=11813, minorTopVer=0], futures=[]],
AllTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=11813,
minorTopVer=0], futures=[RemoteTxReleaseFuture
[topVer=AffinityTopologyVersion [topVer=11813, minorTopVer=0],
futures=[]], exchActions=ExchangeActions [startCaches=null,
stopCaches=null, startGrps=[], stopGrps=[], resetParts=null,
stateChangeRequest=null], affChangeMsg=null, initTs=1543451943112,
centralizedAff=false, forceAffReassignment=false, changeGlobalStateE=null,
done=false, state=SRV, evtLatch=0,
remaining=[0126e998-0c18-452f-8f3b-b6dd4b2ae84c,
f14c8e36-9a20-4668-b52e-0de64c743700], super=GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=773110813]]]

So I tried to reboot the Ignite service on node2 and node3.
But only node2 managed to join the cluster; node3 printed "Still waiting for
initial partition map exchange" for more than 30 minutes.

So I stopped all three nodes and restarted the Ignite service on them.
Then I got "Failed to get page IO instance (page content is corrupted)" on
node1:

[ERROR][exchange-worker-#162][] Critical system error detected. Will be
handled accordingly to configured handler 

Re: Swap Space configuration - Exception_Access_Violation

2018-11-28 Thread userx
Hi Andrei,

Please allow me some time; I will paste the entire log.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Misleading docs or bug?

2018-11-28 Thread joseheitor
A JDBC client connection URL with 'streaming=true' runs (much faster than
without it), but no data is inserted into the SQL table. No errors are reported.
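One possible explanation, which is my assumption rather than a confirmed diagnosis: in streaming mode the driver batches rows through an internal data streamer, so inserts only become visible once the buffer is flushed (for example, when the streaming connection is closed). A sketch of how to check, using an illustrative table name:

```sql
-- On the connection whose URL carries streaming=true:
INSERT INTO city (id, name) VALUES (1, 'Dublin');

-- A SELECT from a separate, non-streaming connection may still return 0 rows
-- at this point, because the row sits in the streamer's buffer.

-- Close the streaming connection to force a flush, then re-check:
SELECT COUNT(*) FROM city;
```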



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Pain points of Ignite user community

2018-11-28 Thread Stanislav Lukyanov
Hi,

I expect a write-up on some of the Ignite pitfalls to be out soon – ping me 
next week.

Stan

From: Rohan Honwade
Sent: November 29, 2018, 0:42
To: user@ignite.apache.org
Subject: Pain points of Ignite user community

Hello,

I am currently creating some helpful blog articles for Ignite users. Can 
someone who is active on this mailing list or the StackOverflow Ignite section 
please let me know what the major pain points are that users face when using 
Ignite? 

Regards,
RH



Re: Ignite cache.getAll takes a long time

2018-11-28 Thread Justin Ji
I use the default thread pool settings. Can increasing the system thread pool
size improve performance?

Another question: should I also add that configuration on the server nodes?

Looking forward to your reply.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


GridProcessorAdapter fails to start due to failure to initialise WAL segment on Ignite startup

2018-11-28 Thread Raymond Wilson
I'm using Ignite 2.6 with the C# client.

I have a running cluster that I was debugging. All requests were read-only
(there were no state-mutating operations running in the cluster).

I terminated the one server node in the grid (running in the debugger) to
make a small code change and re-run it (I do this frequently). The node may
have been stopped for longer than the partitioning timeout.

On re-running the server node it failed to start. On re-running the
complete cluster it still failed to start, and all other nodes report
failure to connect to an inactive grid.

Looking at the log for the server node that is failing, I get the following
log showing an exception while initializing a WAL segment. This failure
seems permanent and is unexpected, as we are using the strict WAL atomicity
mode (WalMode.Fsync) for all persisted regions. Is this a recoverable error,
or does this imply data loss? [NB: This is a dev system so no prod data is
affected]


2018-11-29 12:26:09,933 [1] INFO  ImmutableCacheComputeServer
>>> [Apache Ignite ASCII-art startup banner]
>>> ver. 2.6.0#20180710-sha1:669feacc
>>> 2018 Copyright(C) Apache Software Foundation
>>> Ignite documentation: http://ignite.apache.org
2018-11-29 12:26:09,933 [1] INFO  ImmutableCacheComputeServer Config URL:
n/a
2018-11-29 12:26:09,948 [1] INFO  ImmutableCacheComputeServer
IgniteConfiguration [igniteInstanceName=TRex-Immutable, pubPoolSize=50,
svcPoolSize=12, callbackPoolSize=12, stripedPoolSize=12, sysPoolSize=12,
mgmtPoolSize=4, igfsPoolSize=12, dataStreamerPoolSize=12,
utilityCachePoolSize=12, utilityCacheKeepAliveTime=6, p2pPoolSize=2,
qryPoolSize=12, igniteHome=null,
igniteWorkDir=C:\Users\rwilson\AppData\Local\Temp\TRexIgniteData\Immutable,
mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6e4784bc,
nodeId=8f32d0a6-539c-40dd-bc42-d044f28bac73,
marsh=org.apache.ignite.internal.binary.BinaryMarshaller@e4487af,
marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000,
sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1,
metricsUpdateFreq=2000, metricsExpTime=9223372036854775807,
discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000,
ackTimeout=5000, marsh=null, reconCnt=10, reconDelay=2000,
maxAckTimeout=60, forceSrvMode=false, clientReconnectDisabled=false,
internalLsnr=null], segPlc=STOP, segResolveAttempts=2,
waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=1,
commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null,
enableForcibleNodeKill=false, enableTroubleshootingLog=false,
srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@10d68fcd,
locAddr=127.0.0.1, locHost=null, locPort=47100, locPortRange=100,
shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=3,
connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768,
sockRcvBuf=32768, msgQueueLimit=1024, slowClientQueueLimit=0, nioSrvr=null,
shmemSrv=null, usePairedConnections=false, connectionsPerNode=1,
tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=16,
unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null, boundTcpPort=-1,
boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRslvr=null,
ctxInitLatch=java.util.concurrent.CountDownLatch@117e949d[Count = 1],
stopping=false,
metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@6db9f5a4],
evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@5f8edcc5,
colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null],
indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@7a675056,
addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1,
txCfg=org.apache.ignite.configuration.TransactionConfiguration@d21a74c,
cacheSanityCheckEnabled=true, discoStartupDelay=6, deployMode=SHARED,
p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100,
timeSrvPortRange=100, failureDetectionTimeout=1,
clientFailureDetectionTimeout=3, metricsLogFreq=1, hadoopCfg=null,
connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@6e509ffa,
odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
[seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
grpName=null], classLdr=null, sslCtxFactory=null,
platformCfg=PlatformDotNetConfiguration [binaryCfg=null],
binaryCfg=BinaryConfiguration [idMapper=null, nameMapper=null,
serializer=null, compactFooter=true], memCfg=null, pstCfg=null,
dsCfg=DataStorageConfiguration [sysRegionInitSize=41943040,
sysCacheMaxSize=104857600, pageSize=16384, concLvl=0,
dfltDataRegConf=DataRegionConfiguration [name=Default-Immutable,
maxSize=1073741824, initSize=134217728, swapPath=null,
pageEvictionMode=DISABLED, evictionThreshold=0.9, emptyPagesPoolSize=100,
metricsEnabled=false, metricsSubIntervalCount=5,
metricsRateTimeInterval=6, persistenceEnabled=true,
checkpointPageBufSize=0],
storagePath=/persist\TRexIgniteData\Imm

Re: Slow select distinct query on primary key

2018-11-28 Thread yongjec
Here is my Ignite server configuration.

[Spring XML configuration stripped by the mail archive; the surviving
fragments are the Spring beans schema header
(http://www.springframework.org/schema/beans) and the discovery address
127.0.0.1:47500..47509]
And here are my JVM heap flags.

ignite.sh -J-Xms8g -J-Xmx16g
/home/ansible/ignite/apache-ignite-fabric-2.6.0-bin/config/poc1.xml




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Pain points of Ignite user community

2018-11-28 Thread Denis Magda
Hi Rohan,

Sounds interesting, is it gonna be published on Oracle resources? Is Oracle
going to make Ignite available in the cloud? Just curious why the company
is looking towards Ignite.

--
Denis

On Wed, Nov 28, 2018 at 1:42 PM Rohan Honwade 
wrote:

> Hello,
>
> I am currently creating some helpful blog articles for Ignite users. Can
> someone who is active on this mailing list or the StackOverflow Ignite
> section   please let me know what
> are the major pain points that users face when using Ignite?
>
> Regards,
> RH
>


Slow select distinct query on primary key

2018-11-28 Thread yongjec
I am running the SQL query below via sqlline.sh, and I think it is running too
slow (57s). Could someone confirm whether this response time is normal, or
whether I am doing something wrong?

Here is the query:

0: jdbc:ignite:thin://127.0.0.1/> SELECT DISTINCT ACCOUNT_ID FROM
PERF_POSITIONS;
++
|   ACCOUNT_ID   |
++
| 1684   |
| 1201   |
| 1686   |
...
...
| 1441   |
++
1,001 rows selected (57.453 seconds)


My setup is a single Azure VM (CentOS 7) with 16 Cores and 64GB RAM. The
host is idle other than the Ignite process.

My dataset has 50 million rows with a total of 1,001 distinct ACCOUNT_ID
values. Rows are almost evenly distributed among the ACCOUNT_IDs. As you
can see in the table definition below, ACCOUNT_ID is the first column of both
the primary key and the index.

CREATE TABLE PERF_POSITIONS (
ACCOUNT_ID VARCHAR(50) NOT NULL,
EFFECTIVE_DATE DATE NOT NULL,
FREQUENCY CHAR(1) NOT NULL,
SOURCE_ID INTEGER NOT NULL,
SECURITY_ALIAS BIGINT NOT NULL,
POSITION_TYPE VARCHAR(10),
IT VARCHAR(50),
IN VARCHAR(255),
PAI VARCHAR(100),
TIC VARCHAR(100),
GR DOUBLE,
NR DOUBLE,
GRL DOUBLE,
IR DOUBLE,
ABAL DOUBLE,
BG DOUBLE,
EG DOUBLE,
CGD DOUBLE,
CGC DOUBLE,
CFG DOUBLE,
GLG DOUBLE,
IB DOUBLE,
WT DOUBLE,
BL DOUBLE,
EL DOUBLE,
CDL DOUBLE,
CCL DOUBLE,
CL DOUBLE,
GLL DOUBLE,
IBL DOUBLE,
NC DOUBLE,
BP DOUBLE,
EP DOUBLE,
CP DOUBLE,
PR DOUBLE,
BAI DOUBLE,
EAI DOUBLE,
CI DOUBLE,
SF VARCHAR(10),
US VARCHAR(255),
UD DATE,
PRIMARY KEY (ACCOUNT_ID, EFFECTIVE_DATE, FREQUENCY, SOURCE_ID,
SECURITY_ALIAS, POSITION_TYPE)
)
WITH "template=partitioned, backups=1, affinityKey=ACCOUNT_ID,
KEY_TYPE=ie.models.PerfPositionKey, VALUE_TYPE=ie.models.PerfPosition";
CREATE INDEX PERF_POSITIONS_IDX ON PERF_POSITIONS (ACCOUNT_ID,
EFFECTIVE_DATE, FREQUENCY, SOURCE_ID, SECURITY_ALIAS, POSITION_TYPE);


When I run the query, I see the warning below in the log.

[22:32:51,598][WARNING][client-connector-#136][IgniteH2Indexing] Query
execution is too long [time=57260 ms, sql='SELECT DISTINCT
__Z0.ACCOUNT_ID __C0_0
FROM PUBLIC.PERF_POSITIONS __Z0', plan=
SELECT DISTINCT
__Z0.ACCOUNT_ID AS __C0_0
FROM PUBLIC.PERF_POSITIONS __Z0
/* PUBLIC."_key_PK" */
, parameters=[]]
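The `/* PUBLIC."_key_PK" */` line in the plan shows the DISTINCT walking the full 50-million-entry primary key index. Two experiments that might be worth trying (these are suggestions, not confirmed fixes, and the new index name is my own invention):

```sql
-- A narrower single-column index: still one entry per row, but far fewer
-- bytes to scan than the six-column primary key index.
CREATE INDEX PERF_POSITIONS_ACCT_IDX ON PERF_POSITIONS (ACCOUNT_ID);

-- An equivalent rewrite that sometimes yields a different plan:
SELECT ACCOUNT_ID FROM PERF_POSITIONS GROUP BY ACCOUNT_ID;

-- Verify which index the optimizer now chooses:
EXPLAIN SELECT DISTINCT ACCOUNT_ID FROM PERF_POSITIONS;
```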






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Pain points of Ignite user community

2018-11-28 Thread Rohan Honwade
Hello,

I am currently creating some helpful blog articles for Ignite users. Can 
someone who is active on this mailing list or the StackOverflow Ignite section 
please let me know what the major pain points are that users face when using 
Ignite? 

Regards,
RH

Re: C# Custom Affinity Functions

2018-11-28 Thread Pavel Tupitsyn
Hi,

It is a bug, I have filed a ticket [1].
Unfortunately, no workarounds so far.
Thanks for the report.

[1] https://issues.apache.org/jira/browse/IGNITE-10451

On Thu, Nov 15, 2018 at 12:38 AM JoshN  wrote:

> Hi,
>
> We are using ignite 2.6 C#. If we define a custom affinity function
> (written
> in c#) when configuring the cache (using a IgniteConfiguration object not
> spring config) the first call to Ignition.Start is successful. If we
> activate the grid, wait for it to become active then shut the application.
> When we try and start Ignite again calls to Ignition.Start result in:
>
> 
> 
> 
> INFO: Successfully bound to TCP port [port=48500,
> localHost=127.0.0.1/127.0.0.1,
> locNodeId=054c1c83-b316-47df-9b04-c85bd3db31ea]
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Successfully locked persistence storage folder
>
> [C:\Users\jn\AppData\Local\Temp\test\Mutable\Persistence\node00-08c8ec2a-dc07-460a-9114-1fd7c23ec76f]
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Consistent ID used for local node is
> [08c8ec2a-dc07-460a-9114-1fd7c23ec76f] according to persistence data
> storage
> folders
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Resolved directory for serialized binary metadata:
>
> C:\Users\jn\AppData\Local\Temp\test\test\work\binary_meta\node00-08c8ec2a-dc07-460a-9114-1fd7c23ec76f
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Resolved page store work directory:
>
> C:\Users\jn\AppData\Local\Temp\test\Mutable\Persistence\node00-08c8ec2a-dc07-460a-9114-1fd7c23ec76f
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Resolved write ahead log work directory:
>
> C:\Users\jn\AppData\Local\Temp\test\test\WalStore\node00-08c8ec2a-dc07-460a-9114-1fd7c23ec76f
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Resolved write ahead log archive directory:
>
> C:\Users\jn\AppData\Local\Temp\test\Mutable\WalArchive\node00-08c8ec2a-dc07-460a-9114-1fd7c23ec76f
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Started write-ahead log manager [mode=LOG_ONLY]
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Read checkpoint status
>
> [startMarker=C:\Users\jn\AppData\Local\Temp\test\Mutable\Persistence\node00-08c8ec2a-dc07-460a-9114-1fd7c23ec76f\cp\1542227061117-a884e99a-f052-440a-8bef-3e1038e5e829-START.bin,
>
> endMarker=C:\Users\jn\AppData\Local\Temp\test\Mutable\Persistence\node00-08c8ec2a-dc07-460a-9114-1fd7c23ec76f\cp\1542227061117-a884e99a-f052-440a-8bef-3e1038e5e829-END.bin]
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Started page memory [memoryAllocated=100.0 MiB, pages=6344,
> tableSize=498.1 KiB, checkpointBuffer=100.0 MiB]
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Checking memory state [lastValidPos=FileWALPointer [idx=0,
> fileOff=167632, len=53], lastMarked=FileWALPointer [idx=0, fileOff=167632,
> len=53], lastCheckpointId=a884e99a-f052-440a-8bef-3e1038e5e829]
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Found last checkpoint marker
> [cpId=a884e99a-f052-440a-8bef-3e1038e5e829, pos=FileWALPointer [idx=0,
> fileOff=167632, len=53]]
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Applying lost cache updates since last checkpoint record
> [lastMarked=FileWALPointer [idx=0, fileOff=167632, len=53],
> lastCheckpointId=a884e99a-f052-440a-8bef-3e1038e5e829]
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Finished applying WAL changes [updatesApplied=0, time=53ms]
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger info
> INFO: Restoring history for BaselineTopology[id=0]
> Nov 15, 2018 9:24:51 AM org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Exception during start processors, node will be stopped and close
> connections
> class org.apache.ignite.IgniteCheckedException: Failed to start processor:
> GridProcessorAdapter []
> at
>
> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1742)
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:980)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:649)
> at
>
> org.apache.ignite.internal.processors.platform.PlatformAbstractBootstrap.start(PlatformAbstractBootstrap.java:43)
> at
>
> org.apache.ignite.internal.processors.platform.PlatformIgnition.start(PlatformIgnition.java:75)
> Caused by: class org.apache.ignite.IgniteCheckedE

Re: Swap Space configuration - Exception_Access_Violation

2018-11-28 Thread aealexsandrov
Hi,

Possibly you are facing a JDK issue. Could you please provide the full error
message from the core-dump file or a log like the following:

EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x5e5521f4, pid=2980,
tid=0x3220 
# 
# JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build
1.8.0_131-b11) 
# Java VM: Java HotSpot(TM) Client VM (25.131-b11 mixed mode windows-x86 ) 
# Problematic frame: 
# C [MSVCR100.dll+0x121f4] 
# 
# Failed to write core dump. Minidumps are not enabled by default on client
versions of Windows 
# 
# If you would like to submit a bug report, please visit: 
# http://bugreport.java.com/bugreport/crash.jsp 
# The crash happened outside the Java Virtual Machine in native code. 
# See problematic frame for where to report the bug. 

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cluster freeze with SSL enabled and JDK 11

2018-11-28 Thread Ilya Kasnacheev
Hello!

I have tried running the SSL tests again and they seem to pass (only one
test fails, for an unrelated reason).

Can you try running those two nodes as stand-alone processes and see if the
problem persists? I can see that you have an SSL-enabled Tomcat running in
the same VM, which I imagine could interfere with Ignite's SSL.

Note that you will need to apply some load (such as REST cache operations)
to see whether communication indeed works (or doesn't).

Regards,
-- 
Ilya Kasnacheev


ср, 28 нояб. 2018 г. в 16:01, Loredana Radulescu Ivanoff :

> Hello again,
>
> I haven't been able to solve this issue on my own, so I'm hoping you'd be
> able to take another look.
>
> To recap: only with Java 11 and TLS enabled, the second node I bring in
> the cluster never starts up, and remains stuck at "Still waiting for
> initial partition map exchange". The first node keeps logging "Unable to
> await partitions release latch within timeout". To me, this looks like an
> Ignite issue, and no matter what causes the situation (arguably in this
> case an SSL error), there should be a more elegant exit out of it, i.e. the
> second node should give up after a while, if there isn't a better way to
> retry and achieve successful communication. The two nodes are able to
> communicate, and increasing various network timeouts/failure detection
> timeout does not help.
>
> Previously it was mentioned that the Ignite unit test did not show a
> repro. I suggest running a test that uses two different machines, because
> when I run the nodes on the same machine, I do not get a repro either.
>
> Attaching here logging from the two nodes including SSL messages.
>
> Is Ignite support for Java 11 going to be available before Oracle ends
> free commercial support for Java 1.8 in Jan 2019?
>
> Thank you.
>
> On Thu, Oct 25, 2018 at 9:29 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> I have tried to run the test with protocol "TLSv1.2", didn't see any
>> difference.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> ср, 24 окт. 2018 г. в 20:23, Loredana Radulescu Ivanoff <
>> lradu...@tibco.com>:
>>
>>> Hello again,
>>>
>>> I am working on getting the full SSL logs over to you, but I have one
>>> more question in between: TLS v1.3 is enabled by default in JDK 11, and my
>>> app is using TLS v1.2 specifically. There's a known issue that's recently
>>> addressed by the JDK related to TLS v1.3 half close policy, details here:
>>> https://bugs.java.com/view_bug.do?bug_id=8207009
>>>
>>> Would you be able to confirm whether your SSL test runs successfully
>>> when the connecting client/server use TLS v1.2 specifically?
>>>
>>> FYI, I have tried specifically disabling TLS v1.3 using both the
>>> "jdk.tls.client.protocols" and "jdk.tls.server.protocols" system
>>> properties, and also set "jdk.tls.acknowledgeCloseNotify" to true on
>>> both sides as indicated here -
>>> https://bugs.java.com/view_bug.do?bug_id=8208526
>>>
>>> Based on my explorations so far, I think this may be a JDK issue
>>> (specifically in the JSSE provider) that has not been addressed yet. Either
>>> way, do you think there is anything that could be done in Ignite to
>>> explicitly close the connection on both sides in a scenario like this?
>>>
>>> What I can safely share on the SSL logs so far is this (both nodes get
>>> stuck, node 1 in failing to close the SSL connection, node 2 in waiting for
>>> partition exchange)
>>>
>>> Node 1:
>>>
>>> "2018-10-23 14:18:40.981 PDT|SSLEngineImpl.java:715|Closing inbound of
>>> SSLEngine
>>> javax.net.ssl|ERROR|3F|grid-nio-worker-tcp-comm-1-#26%%|2018-10-23
>>> 14:18:40.981 PDT|TransportContext.java:313|Fatal (INTERNAL_ERROR): closing
>>> inbound before receiving peer's close_notify (
>>> "throwable" : {
>>>   javax.net.ssl.SSLException: closing inbound before receiving peer's
>>> close_notify
>>>   at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:129)
>>>   at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
>>>   at
>>> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308)
>>>   at
>>> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
>>>   at
>>> java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:255)
>>>   at
>>> java.base/sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:724)
>>>   at
>>> org.apache.ignite.internal.util.nio.ssl.GridNioSslHandler.shutdown(GridNioSslHandler.java:185)
>>>   at
>>> org.apache.ignite.internal.util.nio.ssl.GridNioSslFilter.onSessionClosed(GridNioSslFilter.java:223)
>>>   at
>>> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionClosed(GridNioFilterAdapter.java:95)
>>>   at
>>> org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onSessionClosed(GridNioServer.java:3447)
>>>   at
>>> org.apache.ignite.internal.util.nio.GridNioFilterChain.onSessionClosed(GridNioFilterChain.java:149)
>>>   at
>>> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridN

Re: MyBatis L2 newby

2018-11-28 Thread aealexsandrov
Hi,

I think you could take a look at the following thread:

http://apache-ignite-users.70518.x6.nabble.com/MyBatis-Ignite-integration-release-td2949.html

As far as I know, this integration was tested with Ignite:

https://apacheignite-mix.readme.io/docs/mybatis-l2-cache
http://www.mybatis.org/ignite-cache/

They also have their own GitHub repository; I guess you can find examples
there.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Swapspace - Getting OOM error ?maxSize more than RAM

2018-11-28 Thread aealexsandrov
Hi,

You faced that OOM issue because your cache was mapped to the default region
(max size 6.4 GB) instead of 500MB_Region (max size 64 GB).

I guess you tried to load more than 6.4 GB of data into this cache.

Generally, you should explicitly set the data region for every cache that
should not be mapped to the default region.
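
Assigning a cache to a non-default region is a one-liner on the cache
configuration. A minimal sketch (the cache name is illustrative; the region
name must match a DataRegionConfiguration declared on the node):

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class RegionMapping {
    public static CacheConfiguration<Integer, byte[]> bigCacheCfg() {
        // "500MB_Region" must match the name of a DataRegionConfiguration
        // declared in the node's DataStorageConfiguration, or the cache
        // will fall back to / fail against the default region.
        CacheConfiguration<Integer, byte[]> cfg = new CacheConfiguration<>("bigCache");
        cfg.setDataRegionName("500MB_Region");
        return cfg;
    }
}
```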

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: control.sh from remote terminal?

2018-11-28 Thread Alexey Kuznetsov
Hi, Jose.

Yes, it can.

Just run control.sh without options and you will see help for its
parameters.

>> control.sh [--host HOST_OR_IP] [--port PORT]
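
For example, to inspect the baseline topology of a remote cluster (the host
below is a placeholder; control.sh talks to a node's TCP connector, port
11211 by default):

```shell
# Placeholder address; point --host at any server node of the cluster.
"$IGNITE_HOME/bin/control.sh" --host 203.0.113.10 --port 11211 --baseline
```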

On Wed, Nov 28, 2018 at 11:25 PM joseheitor  wrote:

> Can control.sh connect to a remote cluster?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
Alexey Kuznetsov


control.sh from remote terminal?

2018-11-28 Thread joseheitor
Can control.sh connect to a remote cluster?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: java.lang.ClassCastException:org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl

2018-11-28 Thread Stanislav Lukyanov
Hi,

Try a full fresh start:
- stop all nodes
- clean work directory (D:\ApacheIgnite2_6\ApacheIgnite\work)
- make sure all nodes use the same configuration
- start nodes again
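
The steps above can be sketched as a POSIX-shell script (all paths and the
startup class name are assumptions for a default install; on Windows use
ignite.bat and delete the work folder manually):

```shell
#!/bin/sh
# Full fresh start of a local Ignite node; adjust paths to your installation.
IGNITE_HOME=/opt/apache-ignite

# 1. Stop all nodes (this is the startup class used by ignite.sh/ignite.bat).
pkill -f org.apache.ignite.startup.cmdline.CommandLineStartup

# 2. Clean the work directory (this removes persisted data and metadata!).
rm -rf "$IGNITE_HOME/work"

# 3-4. Start each node again with the same, shared configuration file.
"$IGNITE_HOME/bin/ignite.sh" config/shared-config.xml &
```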

Stan

From: userx
Sent: 20 ноября 2018 г. 8:11
To: user@ignite.apache.org
Subject: Re: 
java.lang.ClassCastException:org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl

Hi Ilya,

Thank you for your reponse.
Any configuration I choose, like the one below,

[Spring XML configuration stripped by the mailing-list archive; only these
fragments survive:]

http://www.springframework.org/schema/beans";
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
   xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd";>

127.0.0.1:47500..47502

It is still giving the same error.

java.lang.ClassCastException:
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl cannot be cast
to
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Ignite startup is very slow

2018-11-28 Thread Evgenii Zhuravlev
I've asked for the logs of node startup, not the log for stopping the
cluster.

Evgenii

вт, 27 нояб. 2018 г. в 23:19, kvenkatramtreddy :

> Hi,
> Please find attached log.
> ignite_deactivate.log
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1700/ignite_deactivate.log>
>
>
> whole cluster is de-activated and it is going in hung state.
>
> Thanks & Regards,
> Venkat
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Upgrading from 1.9 .... need help

2018-11-28 Thread Denis Magda
Hello Sam,

It's totally fine to have a single data region for all of your caches.
That's the default behavior.

Usually, you create several data regions to fine-tune memory usage by
certain caches. For instance, you can have a region A that consumes 20% of
the available RAM and persists 100% of its data in Ignite persistence, while
another region B might be allocated 80% of RAM and not persist data at all.

Please refer to this section for more details:
https://apacheignite.readme.io/docs/memory-configuration#section-data-regions
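
As a sketch of the two-region setup described above (region names and sizes
are illustrative, not taken from any real configuration):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class Regions {
    public static IgniteConfiguration twoRegions() {
        long gib = 1024L * 1024 * 1024;

        // Region A: smaller, fully persisted via Ignite native persistence.
        DataRegionConfiguration regionA = new DataRegionConfiguration()
            .setName("persisted")
            .setMaxSize(2 * gib)
            .setPersistenceEnabled(true);

        // Region B: larger, in-memory only.
        DataRegionConfiguration regionB = new DataRegionConfiguration()
            .setName("inMemory")
            .setMaxSize(8 * gib)
            .setPersistenceEnabled(false);

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDataRegionConfigurations(regionA, regionB);

        return new IgniteConfiguration().setDataStorageConfiguration(storage);
    }
}
```

A cache then picks its region via CacheConfiguration.setDataRegionName("inMemory");
caches without an explicit region name land in the default region.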

Denis

On Thu, Nov 15, 2018 at 11:01 AM javastuff@gmail.com <
javastuff@gmail.com> wrote:

> Hello Ilya,
>
> Can you please elaborate on "You would need to use some other means to
> limit
> sizes of your caches"?
>
> Are there any guidelines or best practices for defining data region?
>
> Thanks,
> -Sam
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite benchmarking with YCSB

2018-11-28 Thread Ilya Kasnacheev
Hello!

No, you will have to use `jstack` or VisualVM to get a thread dump and then
read it; it is human-readable.
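
A minimal command-line sketch (assumes the JDK's `jps`/`jstack` are on PATH,
exactly one Ignite JVM is running, and the grep patterns for Ignite's thread
pool names are illustrative):

```shell
# Find the Ignite JVM and dump its threads to a file; run this mid-benchmark.
PID=$(jps -l | awk '/ignite/ {print $1; exit}')
jstack "$PID" > ignite-threads.txt

# Rough look at what the striped and public pool threads are doing.
grep -A 2 -E 'sys-stripe|pub-' ignite-threads.txt | head -40
```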

Regards,
-- 
Ilya Kasnacheev


ср, 28 нояб. 2018 г. в 18:07, summasumma :

> Thanks Ilya for the response.
>
> reg: "You will need to do thread dump mid-benchmark to see which thread
> pools are full." --> can you please let me know if there is visor cli or
> command to check thread dump?
>
> Regards,
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite startup is very slow

2018-11-28 Thread Denis Magda
What do you see in the logs? Please share them. It's hard to guess what's
going on in your environment.

--
Denis

On Sun, Nov 18, 2018 at 7:32 PM kvenkatramtreddy 
wrote:

> Hi Team,
>
> Any update on the above ticket? Any hints to investigate further? As per
> my understanding, it seems to be partition map exchange, but I am unable
> to understand why it takes so long on startup. It has only 3 nodes.
>
> I ran the SAR command as well; it showed iowait at 6-10%.
>
>
>
> It is taking time even on the first node's startup.
>
> Can we do anything like a background thread to rebalance the partition
> map exchange while the cluster is online?
>
> Thanks & Regards,
> Venkat
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Using 3rd Party Persistence Together with Ignite Persistence

2018-11-28 Thread Denis Magda
Hello Patrick,

Do you really need to persist to both MySQL and Ignite Persistence? Is it
possible to settle on one of the disk storages only?

--
Denis

On Tue, Nov 27, 2018 at 9:36 AM PatrickEgli  wrote:

> Hi
>
> We're interested in using Apache Ignite for our next project. Our goal is
> to
> write an API with high performance and a high availability on a huge data
> set. In order to get a solid understanding of Apache Ignite we've read the
> documentation.
>
> Our first plan was to use MySQL together with Apache Ignite Persistence.
> Today we saw that Ignite cannot guarantee consistency between native
> persistence and 3rd party persistence.  3rd Party Persistence
> <
> https://apacheignite.readme.io/docs/3rd-party-store#section-using-3rd-party-persistence-together-with-ignite-persistence>
>
>
> Consistency is a really important part for our project. So the 3rd party
> persistence approach is probably the wrong feature for us.
>
> Besides the 3rd party persistence, we are thinking about the Hibernate L2 cache. Could
> this feature be better for us?
>
> What would you recommend us?
>
> Thanks in advance.
>
> Best regards
> Patrick
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite benchmarking with YCSB

2018-11-28 Thread summasumma
Thanks Ilya for the response.

Regarding "You will need to do thread dump mid-benchmark to see which thread
pools are full" -- can you please let me know if there is a Visor CLI
command to check the thread dump?

Regards,








--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite cache.getAll takes a long time

2018-11-28 Thread Justin Ji
Hi all

I have a cluster with 3 server nodes, each with 4 CPU cores and 8 GB memory.
When applications call cache.getAll(set), it often takes a long time, almost
500 ms (about 0.6% of calls), even when the set size is less than 3.

And I do not think it was caused by the network, because the network latency
between the application and the servers is less than 2 milliseconds.

The CPU usage of the Ignite servers was between 3% and 5%, and the QPS is
about 1000 reads and 1000 writes.

Here is my code segment:

Cache:
CacheConfiguration cacheCfg =
DefaultIgniteConfiguration.getCacheConfiguration(IgniteCacheKey.DATA_POINT.getCode());
cacheCfg.setBackups(2);
cacheCfg.setDataRegionName(Constants.THREE_GB_REGION);
   
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
cacheCfg.setWriteThrough(true);
cacheCfg.setWriteBehindEnabled(true);
cacheCfg.setWriteBehindFlushThreadCount(2);
cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
cacheCfg.setWriteBehindFlushSize(409600);
cacheCfg.setWriteBehindBatchSize(1024);
cacheCfg.setStoreKeepBinary(true);
cacheCfg.setQueryParallelism(4);
//2M
cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);


getAll:

   public Map getAll(Set keys) {
long total = totalRead.incrementAndGet();

long startTime = System.currentTimeMillis();
Map all = igniteCache.getAll(keys);
long currentTime = System.currentTimeMillis();

long took = currentTime - startTime;
if (took > 500) {
readGreaterThan500.incrementAndGet();
logger.warn("read complete, cost:{}ms", took);
}
if (total % 1 == 0) {
double percent = 1000.0 * readGreaterThan500.get() / total;
logger.info("percent that more 500ms:{}‰",
readGreaterThan500.get(), total, format.format(percent));
}
return all;
}

putAllAsync:

public IgniteFuture putAllAsync(Map map) {
long total = totalWrite.incrementAndGet();

long startTime = System.currentTimeMillis();

Map objectMap = Maps.newHashMap();
for (Map.Entry entry : map.entrySet()) {
objectMap.put(entry.getKey(),
ignite.binary().toBinary(entry.getValue()));
}
IgniteFuture future = igniteCache.putAllAsync(objectMap);

long currentTime = System.currentTimeMillis();
long took = currentTime - startTime;
if (took > 500) {
writeGreaterThan500.incrementAndGet();
logger.warn("write complete, cost:{}ms", took);
}
if (total % 1 == 0) {
double percent = 1000.0 * writeGreaterThan500.get() / total;
logger.info("percent that more than 500ms :{}‰",
writeGreaterThan500.get(), total,
format.format(percent));
}
return future;
}




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: How to restart Apache Ignite nodes via command line

2018-11-28 Thread Stanislav Lukyanov
Well, as you said, you need to write some scripts. To shut down the cluster
you kill the Ignite processes; to load new configurations you copy the
configuration files. No magic here, and nothing specific to Ignite.
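
A bash sketch of such a restart step, suitable for calling from Ansible (all
paths, file names, and the sleep interval are assumptions for a Linux
deployment):

```shell
#!/bin/bash
# Hypothetical restart step for one node; adjust paths to your deployment.
IGNITE_HOME=/opt/apache-ignite
CONFIG_SRC=/deploy/ignite-config.xml

pkill -f org.apache.ignite.startup.cmdline.CommandLineStartup  # stop the node
sleep 10                                                       # let it leave the topology
cp "$CONFIG_SRC" "$IGNITE_HOME/config/"                        # roll out the new config
nohup "$IGNITE_HOME/bin/ignite.sh" "$IGNITE_HOME/config/ignite-config.xml" \
    > /var/log/ignite-node.log 2>&1 &
```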

Stan

From: Max Barrios
Sent: 28 ноября 2018 г. 0:40
To: user@ignite.apache.org
Subject: How to restart Apache Ignite nodes via command line

Is there any way to restart Apache Ignite nodes via command line? I want to 
* load new configurations 
* shutdown the cluster to upgrade my VM instances
* do general maintenance
and just can't find any documentation that shows me how to do this via the
command line. I have similar devops actions for other technologies running
via ansible/bash scripts, and I want to do the same with Ignite.

Thanks

Max





RE: PublicThreadPoolSize vs FifoQueueCollisionSpi.ParallelJobsNumber

2018-11-28 Thread Stanislav Lukyanov
Hi,

1) No.
With publicThreadPoolSize=16 and parallelJobsNumber=32 you’ll have 32 jobs
submitted to an executor with 16 threads,
which means that 16 jobs will be executing and 16 will be waiting in the public 
thread pool’s queue.

2) Yes – see setWaitingJobsNumber.
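
A sketch of both limits together (the 16/32 values are from the question;
the waiting-jobs cap is an illustrative value):

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi;

public class JobLimits {
    public static IgniteConfiguration cfg() {
        FifoQueueCollisionSpi spi = new FifoQueueCollisionSpi();
        spi.setParallelJobsNumber(32);  // at most 32 jobs activated at once
        spi.setWaitingJobsNumber(100);  // caps the FIFO waiting queue

        return new IgniteConfiguration()
            .setPublicThreadPoolSize(16) // only 16 jobs actually run concurrently
            .setCollisionSpi(spi);
    }
}
```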

Stan

From: Prasad Bhalerao
Sent: 28 ноября 2018 г. 9:20
To: user@ignite.apache.org
Subject: PublicThreadPoolSize vs FifoQueueCollisionSpi.ParallelJobsNumber

Hi,

What will happen in following case:

publicThreadPoolSize is set to 16.

But FifoQueueCollisionSpi is used instead of NoopCollisionSpi.
FifoQueueCollisionSpi.setParallelJobsNumber(32);

1) Will ignite execute 32 jobs in parallel even though the publicThreadPool 
size is set to 16?

2) Is there any configuration to set the FIFO queue size as well, so that the
number of jobs that can be submitted to this queue is limited?

Thanks,
Prasad



Re: Failed to read data from remote connection

2018-11-28 Thread Igor Sapego
Do you shut down the C++ node properly prior to killing the process?

Do these exceptions impact the cluster's functionality in any way?

Best Regards,
Igor


On Wed, Nov 28, 2018 at 8:53 AM wangsan  wrote:

> As I restarted the cpp client many times concurrently, maybe the ZooKeeper
> cluster (Ignite) has some node paths that have been closed.
> From the cpp client logs,
> I can see ZkDiscovery watches 44 first, but the node has been closed:
>
> watchPath=f78ec20a-5458-47b2-86e9-7b7ed0ee4227:0e508bf8-521f-4898-9b83-fc216b35601c:81|44
> About 30 seconds later, it received a communication error resolve request.
> Then it watches another path, 42:
>
> watchPath=5cb0efb1-0d1b-4b54-a8b7-ac3414e7735f:23fb17f7-cdbd-4cee-991a-46041bb0fa26:81|42
> Then I do not understand why it logs "Start check connection process"?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ODBC driver build error

2018-11-28 Thread Igor Sapego
It seems like you are using OpenSSL 1.1, which is not yet supported.
You can use 1.0 instead for now.

Best Regards,
Igor


On Wed, Nov 28, 2018 at 11:53 AM Ray  wrote:

> I try to build ODBC driver on Windows Visual Studio 2017.
> I installed these dependencies
> Windows SDK 7.1
> JDK 8
> Win64 OpenSSL v1.1.1a from https://slproweb.com/products/Win32OpenSSL.html
>
> I set OPENSSL_HOME=C:\Program Files\OpenSSL-Win64
>
> When I try to build, there's these error logs.
>
> SeverityCodeDescription Project FileLine
> Suppression State
> Error   C7525   inline variables require at least '/std:c++17'  odbc
>
> d:\ignite-ignite-2.6\modules\platforms\cpp\odbc\include\ignite\odbc\ssl\ssl_bindings.h
> 133
> Error (active)  E0325   inline specifier allowed on function declarations
> only
> odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\include\ignite\odbc\ssl\ssl_bindings.h
> 133
> Error (active)  E0018   expected a ')'  odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\include\ignite\odbc\ssl\ssl_bindings.h
> 133
> Error (active)  E0065   expected a ';'  odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\include\ignite\odbc\ssl\ssl_bindings.h
> 134
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSL_set_tlsext_host_name_" odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 80
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSL_free_"
> odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 86
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSL_set_connect_state_"odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 91
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSL_free_"
> odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 97
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSL_get_peer_certificate"  odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 103
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "X509_free"
> odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 105
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSL_free_"
> odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 111
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSL_free_"
> odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 124
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSL_write_"odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 152
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSL_pending_"  odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 172
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSL_read_"
> odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 180
> Error (active)  E0109   expression preceding parentheses of apparent call
> must
> have (pointer-to-) function typeodbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 206
> Error (active)  E0109   expression preceding parentheses of apparent call
> must
> have (pointer-to-) function typeodbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 208
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "SSLv23_client_method_" odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 216
> Error (active)  E0020   identifier "SSL_CTRL_OPTIONS" is undefined
> odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 237
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "BIO_new_ssl_connect"   odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 292
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "BIO_set_nbio_" odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 301
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "BIO_set_conn_hostname_"odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 315
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "BIO_free_all"  odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 320
> Error (active)  E0135   namespace "ignite::odbc::ssl" has no member
> "BIO_get_ssl_"  odbc
>
> D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\src\ssl\secure_socket_client.cpp
> 326

Re: Ignite cluster going down frequently

2018-11-28 Thread Hemasundara Rao
Hi  Ilya Kasnacheev,

 Did you get a chance to go through the attached log?
This is one of the critical issues we are facing in our dev environment.
Your input would be of great help in determining what is causing this issue
and finding a probable solution.

Thanks and Regards,
Hemasundar.

On Mon, 26 Nov 2018 at 16:54, Hemasundara Rao <
hemasundara@travelcentrictechnology.com> wrote:

> Hi  Ilya Kasnacheev,
>   I have attached the log file.
>
> Regards,
> Hemasundar.
>
> On Mon, 26 Nov 2018 at 16:50, Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Maybe you have some data in your caches which causes runaway heap usage
>> in your own code. Previously you did not have such data or code which would
>> react in such fashion.
>>
>> It's hard to say, can you provide more logs from the node before it
>> segments?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> пн, 26 нояб. 2018 г. в 14:17, Hemasundara Rao <
>> hemasundara@travelcentrictechnology.com>:
>>
>>> Thank you very much Ilya Kasnacheev for your response.
>>>
>>> We load data initially; after that, only small delta changes are
>>> applied.
>>> The grid-down issue happens after the cluster has been running
>>> successfully for 2 to 3 days.
>>> Once the issue starts, it repeats frequently, and we have no clue why.
>>>
>>> Thanks and Regards,
>>> Hemasundar.
>>>
>>>
>>> On Mon, 26 Nov 2018 at 13:43, Ilya Kasnacheev 
>>> wrote:
>>>
 Hello!

 Node will get segmented if other nodes fail to wait for Discovery
 response from that node. This usually means either network problems or long
 GC pauses causes by insufficient heap on one of nodes.

 Make sure your data load process does not cause heap usage spikes.

 Regards.
 --
 Ilya Kasnacheev


 пт, 23 нояб. 2018 г. в 07:54, Hemasundara Rao <
 hemasundara@travelcentrictechnology.com>:

> Hi All,
> We are running a two-node Ignite server cluster.
> It was running without any issue for almost 5 days. We are using this
> grid for static data. The Ignite process runs with around 8 GB of memory
> after we load our data.
> Suddenly the grid server nodes started going down. We tried 3 times to
> run the server nodes and load the static data; those server nodes went
> down again and again.
>
> Please let us know how to overcome these kind of issue.
>
> Attache the log file and configuration file.
>
> *Following Is the part of log from on server : *
>
> [04:45:58,335][WARNING][tcp-disco-msg-worker-#2%StaticGrid_NG_Dev%][TcpDiscoverySpi]
> Node is out of topology (probably, due to short-time network problems).
> [04:45:58,335][WARNING][disco-event-worker-#41%StaticGrid_NG_Dev%][GridDiscoveryManager]
> Local node SEGMENTED: TcpDiscoveryNode
> [id=8a825790-a987-42c3-acb0-b3ea270143e1, addrs=[10.201.30.63], 
> sockAddrs=[/
> 10.201.30.63:47600], discPort=47600, order=42, intOrder=23,
> lastExchangeTime=1542861958327, loc=true, 
> ver=2.4.0#20180305-sha1:aa342270,
> isClient=false]
> [04:45:58,335][INFO][tcp-disco-sock-reader-#78%StaticGrid_NG_Dev%][TcpDiscoverySpi]
> Finished serving remote node connection [rmtAddr=/10.201.30.64:36695,
> rmtPort=36695
> [04:45:58,337][INFO][tcp-disco-sock-reader-#70%StaticGrid_NG_Dev%][TcpDiscoverySpi]
> Finished serving remote node connection [rmtAddr=/10.201.30.172:58418,
> rmtPort=58418
> [04:45:58,337][INFO][tcp-disco-sock-reader-#74%StaticGrid_NG_Dev%][TcpDiscoverySpi]
> Finished serving remote node connection [rmtAddr=/10.201.10.125:63403,
> rmtPort=63403
> [04:46:01,516][INFO][tcp-comm-worker-#1%StaticGrid_NG_Dev%][TcpDiscoverySpi]
> Pinging node: 6a603d8b-f8bf-40bf-af50-6c04a56b572e
> [04:46:01,546][INFO][tcp-comm-worker-#1%StaticGrid_NG_Dev%][TcpDiscoverySpi]
> Finished node ping [nodeId=6a603d8b-f8bf-40bf-af50-6c04a56b572e, res=true,
> time=49ms]
> [04:46:02,482][INFO][tcp-comm-worker-#1%StaticGrid_NG_Dev%][TcpDiscoverySpi]
> Pinging node: 5ec6ee69-075e-4829-84ca-ae40411c7bc3
> [04:46:02,482][INFO][tcp-comm-worker-#1%StaticGrid_NG_Dev%][TcpDiscoverySpi]
> Finished node ping [nodeId=5ec6ee69-075e-4829-84ca-ae40411c7bc3, 
> res=false,
> time=7ms]
> [04:46:08,283][INFO][tcp-disco-sock-reader-#4%StaticGrid_NG_Dev%][TcpDiscoverySpi]
> Finished serving remote node connection [rmtAddr=/10.201.30.64:48038,
> rmtPort=48038
> [04:46:08,367][WARNING][disco-event-worker-#41%StaticGrid_NG_Dev%][GridDiscoveryManager]
> Restarting JVM according to configured segmentation policy.
> [04:46:08,388][WARNING][disco-event-worker-#41%StaticGrid_NG_Dev%][GridDiscoveryManager]
> Node FAILED: TcpDiscoveryNode [id=20687a72-b5c7-48bf-a5ab-37bd3f7fa064,
> addrs=[10.201.30.64], sockAddrs=[/10.201.30.64:47601],
> discPort=47601, order=41, intOrder=22, lastExchangeTime=1542262724642,
> loc=false, ver=2.4.0#20180305-sha1:aa342270, isClient=false]
>>>
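
The heap/segmentation advice quoted above corresponds to two
IgniteConfiguration properties. A minimal Spring XML sketch, with an
illustrative timeout value (the default is 10000 ms; RESTART_JVM matches the
"Restarting JVM according to configured segmentation policy" line in the log):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Give peers more slack before they declare this node failed,
         e.g. during long GC pauses (default 10000 ms; value illustrative). -->
    <property name="failureDetectionTimeout" value="30000"/>
    <!-- The policy the log shows in effect; STOP and NOOP are alternatives. -->
    <property name="segmentationPolicy" value="RESTART_JVM"/>
</bean>
```

Raising the timeout only masks GC trouble; sizing the heap so the data load
does not cause usage spikes is still the underlying fix.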

ODBC driver build error

2018-11-28 Thread Ray
I'm trying to build the ODBC driver on Windows with Visual Studio 2017.
I installed these dependencies:
Windows SDK 7.1
JDK 8
Win64 OpenSSL v1.1.1a from https://slproweb.com/products/Win32OpenSSL.html

I set OPENSSL_HOME=C:\Program Files\OpenSSL-Win64

When I try to build, I get the following errors.

Severity  Code  Description  Project  File  Line  Suppression State

(All errors are in project "odbc"; paths below are relative to
D:\ignite-ignite-2.6\modules\platforms\cpp\odbc\.)

Error          C7525  inline variables require at least '/std:c++17'
                      include\ignite\odbc\ssl\ssl_bindings.h:133
Error (active) E0325  inline specifier allowed on function declarations only
                      include\ignite\odbc\ssl\ssl_bindings.h:133
Error (active) E0018  expected a ')'
                      include\ignite\odbc\ssl\ssl_bindings.h:133
Error (active) E0065  expected a ';'
                      include\ignite\odbc\ssl\ssl_bindings.h:134
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_set_tlsext_host_name_"
                      src\ssl\secure_socket_client.cpp:80
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_free_"
                      src\ssl\secure_socket_client.cpp:86
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_set_connect_state_"
                      src\ssl\secure_socket_client.cpp:91
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_free_"
                      src\ssl\secure_socket_client.cpp:97
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_get_peer_certificate"
                      src\ssl\secure_socket_client.cpp:103
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "X509_free"
                      src\ssl\secure_socket_client.cpp:105
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_free_"
                      src\ssl\secure_socket_client.cpp:111
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_free_"
                      src\ssl\secure_socket_client.cpp:124
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_write_"
                      src\ssl\secure_socket_client.cpp:152
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_pending_"
                      src\ssl\secure_socket_client.cpp:172
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_read_"
                      src\ssl\secure_socket_client.cpp:180
Error (active) E0109  expression preceding parentheses of apparent call must have (pointer-to-) function type
                      src\ssl\secure_socket_client.cpp:206
Error (active) E0109  expression preceding parentheses of apparent call must have (pointer-to-) function type
                      src\ssl\secure_socket_client.cpp:208
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSLv23_client_method_"
                      src\ssl\secure_socket_client.cpp:216
Error (active) E0020  identifier "SSL_CTRL_OPTIONS" is undefined
                      src\ssl\secure_socket_client.cpp:237
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "BIO_new_ssl_connect"
                      src\ssl\secure_socket_client.cpp:292
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "BIO_set_nbio_"
                      src\ssl\secure_socket_client.cpp:301
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "BIO_set_conn_hostname_"
                      src\ssl\secure_socket_client.cpp:315
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "BIO_free_all"
                      src\ssl\secure_socket_client.cpp:320
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "BIO_get_ssl_"
                      src\ssl\secure_socket_client.cpp:326
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "BIO_free_all"
                      src\ssl\secure_socket_client.cpp:331
Error (active) E0135  namespace "ignite::odbc::ssl" has no member "SSL_connect_"
                      D:\ignite-ignite-2.6\modules\platforms\