[jira] [Updated] (IGNITE-9974) Drop Hadoop assemblies

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-9974:
--
Fix Version/s: (was: 2.9)

> Drop Hadoop assemblies
> --
>
> Key: IGNITE-9974
> URL: https://issues.apache.org/jira/browse/IGNITE-9974
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.7
>Reporter: Peter Ivanov
>Assignee: Peter Ivanov
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After dropping Hadoop binaries from the release delivery (see IGNITE-9953), we 
> also need to drop the assemblies and corresponding files / profiles, if they exist.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12033) Callbacks from striped pool due to async/await may hang cluster

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12033:
---
Fix Version/s: (was: 2.9)
   2.10

> Callbacks from striped pool due to async/await may hang cluster
> ---
>
> Key: IGNITE-12033
> URL: https://issues.apache.org/jira/browse/IGNITE-12033
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, platforms
>Affects Versions: 2.7.5
>Reporter: Ilya Kasnacheev
>Priority: Critical
> Fix For: 2.10
>
>
> Discussed on dev-list:
> http://apache-ignite-developers.2346864.n4.nabble.com/Re-EXTERNAL-Re-Replace-or-Put-after-PutAsync-causes-Ignite-to-hang-td42921.html
> *Must use the public pool for callbacks as the most obvious step.*
> 
> http://apache-ignite-users.70518.x6.nabble.com/Replace-or-Put-after-PutAsync-causes-Ignite-to-hang-td27871.html#a28051
> There's a reproducer project. Long story short, .NET can invoke cache 
> operations with future callbacks, which will be invoked from the striped pool. If 
> such callbacks perform cache operations themselves, those operations may be 
> scheduled to the same stripe and cause a deadlock.
> The code is very simple:
> {code}
> Console.WriteLine("PutAsync");
> await cache.PutAsync(1, "Test");
> Console.WriteLine("Replace");
> cache.Replace(1, "Testing"); // Hangs here
> Console.WriteLine("Wait");
> await Task.Delay(Timeout.Infinite); 
> {code}
> async/await should absolutely not allow any client code to be run from 
> stripes.
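
A minimal Java sketch of the same pattern and a possible workaround (the original reproducer above is C#; the cache name and key values here are illustrative, and listenAsync with an explicit executor is used only to show how to keep user code off the thread that completes the future):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class StripedPoolCallbackSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("test");

        // Risky: the listener may run on a striped-pool thread, and the synchronous
        // replace() inside it can be scheduled to the same stripe and deadlock.
        cache.putAsync(1, "Test").listen(fut -> cache.replace(1, "Testing"));

        // Safer: run the continuation on an explicit user executor.
        ExecutorService exec = Executors.newSingleThreadExecutor();
        cache.putAsync(2, "Test").listenAsync(fut -> cache.replace(2, "Testing"), exec);
    }
}
{code}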



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12794) Scan query fails with an assertion error: Unexpected row key

2020-07-21 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162525#comment-17162525
 ] 

Aleksey Plekhanov commented on IGNITE-12794:


Moved to the next release due to inactivity.

> Scan query fails with an assertion error: Unexpected row key
> 
>
> Key: IGNITE-12794
> URL: https://issues.apache.org/jira/browse/IGNITE-12794
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Denis Mekhanikov
>Assignee: Denis Mekhanikov
>Priority: Major
> Fix For: 2.9
>
> Attachments: ScanQueryExample.java
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Scan query fails with an exception:
> {noformat}
> Exception in thread "main" java.lang.AssertionError: Unexpected row key
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.unswap(GridCacheMapEntry.java:548)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.unswap(GridCacheMapEntry.java:512)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.advance(GridCacheQueryManager.java:3045)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.onHasNext(GridCacheQueryManager.java:2997)
>   at 
> org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
>   at 
> org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
>   at 
> org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:127)
>   at scan.ScanQueryExample.main(ScanQueryExample.java:31)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)"
> {noformat}
> The issue is reproduced when performing concurrent scan queries and updates. 
> A reproducer is attached. You will need to enable asserts in order to 
> reproduce this issue.
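
The attached reproducer is not included in this message; the following is a minimal sketch of the kind of concurrent scan/update workload described above (run with -ea so the assertion can fire; the cache name and value sizes are illustrative):

{code:java}
import java.util.concurrent.CompletableFuture;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ScanQuery;

public class ScanQueryRaceSketch {
    public static void main(String[] args) throws Exception {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, byte[]> cache = ignite.getOrCreateCache("test");

        // Writer: keeps updating a small key range while scans are running.
        CompletableFuture<Void> writer = CompletableFuture.runAsync(() -> {
            for (int i = 0; i < 1_000_000; i++)
                cache.put(i % 1_000, new byte[128]);
        });

        // Reader: repeatedly runs scan queries over the same cache.
        while (!writer.isDone())
            cache.query(new ScanQuery<Integer, byte[]>()).getAll();

        writer.get();
    }
}
{code}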



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12794) Scan query fails with an assertion error: Unexpected row key

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12794:
---
Fix Version/s: (was: 2.9)
   2.10

> Scan query fails with an assertion error: Unexpected row key
> 
>
> Key: IGNITE-12794
> URL: https://issues.apache.org/jira/browse/IGNITE-12794
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8
>Reporter: Denis Mekhanikov
>Assignee: Denis Mekhanikov
>Priority: Major
> Fix For: 2.10
>
> Attachments: ScanQueryExample.java
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Scan query fails with an exception:
> {noformat}
> Exception in thread "main" java.lang.AssertionError: Unexpected row key
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.unswap(GridCacheMapEntry.java:548)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.unswap(GridCacheMapEntry.java:512)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.advance(GridCacheQueryManager.java:3045)
>   at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.onHasNext(GridCacheQueryManager.java:2997)
>   at 
> org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
>   at 
> org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
>   at 
> org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:127)
>   at scan.ScanQueryExample.main(ScanQueryExample.java:31)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)"
> {noformat}
> The issue is reproduced when performing concurrent scan queries and updates. 
> A reproducer is attached. You will need to enable asserts in order to 
> reproduce this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-11970) Excessive use of memory in continuous queries

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-11970:
---
Fix Version/s: (was: 2.9)
   2.10

> Excessive use of memory in continuous queries
> -
>
> Key: IGNITE-11970
> URL: https://issues.apache.org/jira/browse/IGNITE-11970
> Project: Ignite
>  Issue Type: Bug
>Reporter: Igor Belyakov
>Assignee: Igor Belyakov
>Priority: Critical
> Fix For: 2.10
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When we prepare to send an entry into the continuous query's filter and 
> listener, we store it in an instance of CacheContinuousQueryEventBuffer.Batch.
> The batch is an array of entries of size 
> IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE (default is 1000) that stores the 
> currently received entries (we need it for the case of concurrent updates to 
> make sure that we preserve the order of update counters).
> The issue is that when we process a part of the array, we keep references to 
> the processed entries until we exhaust the whole array (only then is it finally 
> cleared). Because of that we may retain up to 999 garbage objects, which can be 
> a lot if the entries are big.
> We need to clear the entries right after we've processed them.
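
A minimal sketch of the proposed fix pattern (the class and method below are illustrative stand-ins, not the actual CacheContinuousQueryEventBuffer.Batch code):

{code:java}
import java.util.function.Consumer;

class Batch<T> {
    /** Sized by IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE (1000 by default). */
    private final Object[] entries = new Object[1000];

    @SuppressWarnings("unchecked")
    void process(Consumer<T> handler) {
        for (int i = 0; i < entries.length; i++) {
            T entry = (T)entries[i];

            if (entry == null)
                continue;

            handler.accept(entry);

            // Drop the reference as soon as the entry is processed instead of
            // waiting until the whole array is exhausted, so it can be GC'd.
            entries[i] = null;
        }
    }
}
{code}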



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-11970) Excessive use of memory in continuous queries

2020-07-21 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162522#comment-17162522
 ] 

Aleksey Plekhanov commented on IGNITE-11970:


Moved to the next release due to inactivity.

> Excessive use of memory in continuous queries
> -
>
> Key: IGNITE-11970
> URL: https://issues.apache.org/jira/browse/IGNITE-11970
> Project: Ignite
>  Issue Type: Bug
>Reporter: Igor Belyakov
>Assignee: Igor Belyakov
>Priority: Critical
> Fix For: 2.9
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> When we prepare to send an entry into the continuous query's filter and 
> listener, we store it in an instance of CacheContinuousQueryEventBuffer.Batch.
> The batch is an array of entries of size 
> IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE (default is 1000) that stores the 
> currently received entries (we need it for the case of concurrent updates to 
> make sure that we preserve the order of update counters).
> The issue is that when we process a part of the array, we keep references to 
> the processed entries until we exhaust the whole array (only then is it finally 
> cleared). Because of that we may retain up to 999 garbage objects, which can be 
> a lot if the entries are big.
> We need to clear the entries right after we've processed them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12930) DistributedProcess fails node if unable to send single message to coordinator

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12930:
---
Priority: Critical  (was: Major)

> DistributedProcess fails node if unable to send single message to coordinator
> -
>
> Key: IGNITE-12930
> URL: https://issues.apache.org/jira/browse/IGNITE-12930
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maxim Muzafarov
>Assignee: Amelchev Nikita
>Priority: Critical
> Fix For: 2.9
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The 
> [DistributedProcess|https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/util/distributed/DistributedProcess.java]
>  fails the local node ({{FailureHandler}} CRITICAL_ERROR thrown) if unable to 
> send a message to the coordinator (e.g. the coordinator fails right before 
> the single message is sent).
> {code:java}
> try {
> ctx.io().sendToGridTopic(p.crdId, 
> GridTopic.TOPIC_DISTRIBUTED_PROCESS, singleMsg, SYSTEM_POOL);
> }
> catch (IgniteCheckedException e) {
> log.error("Unable to send message to coordinator.", e);
> ctx.failure().process(new FailureContext(CRITICAL_ERROR,
> new Exception("Unable to send message to coordinator.", 
> e)));
> }
> {code}
> h4. Expected behaviour
> If a {{ClusterTopologyCheckedException}} occurs, we need to wait for the 
> NODE_LEFT event of the coordinator node and re-initialize the distributed process 
> future.
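
A minimal, self-contained sketch of that expected behaviour (illustrative only: the transport interface and class names are hypothetical, and the DistributedProcess/FailureProcessor internals are not reproduced here):

{code:java}
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

class SingleMessageSender {
    /** Messages that could not be delivered because the coordinator left. */
    private final Map<UUID, Object> pending = new ConcurrentHashMap<>();

    /** Hypothetical stand-in for ctx.io().sendToGridTopic(...). */
    interface Transport {
        void send(UUID crdId, Object msg) throws Exception;
    }

    void sendToCoordinator(Transport io, UUID crdId, Object singleMsg) {
        try {
            io.send(crdId, singleMsg);
        }
        catch (Exception e) {
            // Instead of failing the node with CRITICAL_ERROR: remember the message
            // and re-initialize the process when the coordinator's NODE_LEFT arrives.
            pending.put(crdId, singleMsg);
        }
    }

    /** Called from a discovery listener on NODE_LEFT/NODE_FAILED of the coordinator. */
    void onCoordinatorLeft(UUID oldCrdId, UUID newCrdId, Transport io) {
        Object msg = pending.remove(oldCrdId);

        if (msg != null)
            sendToCoordinator(io, newCrdId, msg);
    }
}
{code}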



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13028) Spring Data integration doesn't introspect the fields of the key object

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13028:
---
Fix Version/s: (was: 2.9)
   2.10

> Spring Data integration doesn't introspect the fields of the key object
> ---
>
> Key: IGNITE-13028
> URL: https://issues.apache.org/jira/browse/IGNITE-13028
> Project: Ignite
>  Issue Type: Improvement
>  Components: spring, springdata
>Affects Versions: 2.8
>Reporter: Denis A. Magda
>Priority: Major
>  Labels: newbie
> Fix For: 2.10
>
>
> Suppose you have key and value POJOs associated with Ignite caches/tables:
> * Key: 
> https://github.com/GridGain-Demos/ignite-spring-data-demo/blob/master/src/main/java/org/gridgain/demo/springdata/model/CityKey.java
> * Value: 
> https://github.com/GridGain-Demos/ignite-spring-data-demo/blob/master/src/main/java/org/gridgain/demo/springdata/model/City.java
> The key object includes a couple of fields ({{id}} and {{countryCode}}) that 
> are not visible to Spring's query auto-generation feature. For instance, 
> you have to use direct queries if you want to get [all the cities with a specific 
> value of the {{id}} 
> field|https://github.com/GridGain-Demos/ignite-spring-data-demo/blob/master/src/main/java/org/gridgain/demo/springdata/dao/CityRepository.java#L42]:
> {code:java}
> @Query("SELECT * FROM City WHERE id = ?")
> public Cache.Entry findById(int id);
> {code}
> If the query auto-generation feature could introspect the metadata of the key, 
> then you would not need to fall back to direct queries and could add the 
> following query method to the repository:
> {code:java}
> public Cache.Entry findById(int id);
> {code}
> The same issue exists if the key is of a simple type (Integer, String, etc.).
> To reproduce, you can use this project:
> https://github.com/GridGain-Demos/ignite-spring-data-demo
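
A compact sketch of the desired outcome (the CityKey/City classes below are simplified hypothetical stand-ins for the ones in the linked demo, and the repository extends plain Spring Data CrudRepository only to keep the sketch self-contained; the JCache API and Spring Data Commons are assumed to be on the classpath):

{code:java}
import java.util.List;
import javax.cache.Cache;
import org.springframework.data.repository.CrudRepository;

class CityKey {
    int id;             // fields the auto-generated queries should be able to see
    String countryCode;
}

class City {
    String name;
    int population;
}

interface CityRepository extends CrudRepository<City, CityKey> {
    // Desired: derived from the method name by introspecting CityKey's fields,
    // with no explicit @Query needed.
    List<Cache.Entry<CityKey, City>> findById(int id);
}
{code}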



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-11238) Possible hang on exchange

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-11238:
---
Fix Version/s: (was: 2.9)
   2.10

> Possible hang on exchange
> -
>
> Key: IGNITE-11238
> URL: https://issues.apache.org/jira/browse/IGNITE-11238
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Reporter: Igor Seliverstov
>Priority: Critical
> Fix For: 2.10
>
>
> Currently we may hang on exchange for a while (two network timeouts) waiting 
> for a latch to be released (see 
> {{GridDhtPartitionsExchangeFuture#waitPartitionRelease releaseLatch}}) in 
> case the topology version being processed has not been added to the discovery 
> history yet, but a client acknowledgement has already been received by the 
> coordinator:
> {code:java}
> [2019-02-06 
> 17:43:17,009][ERROR][sys-#43%mvcc.CacheMvccPartitionedSqlCoordinatorFailoverTest0%][ExchangeLatchManager]
>  Topology AffinityTopologyVersion [topVer=24, minorTopVer=0] not found in 
> discovery history ; consider increasing IGNITE_DISCOVERY_HISTORY_SIZE 
> property. Current value is -1
> class org.apache.ignite.IgniteException: Topology AffinityTopologyVersion 
> [topVer=24, minorTopVer=0] not found in discovery history ; consider 
> increasing IGNITE_DISCOVERY_HISTORY_SIZE property. Current value is -1
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.ExchangeLatchManager.aliveNodesForTopologyVer(ExchangeLatchManager.java:260)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.ExchangeLatchManager.getLatchCoordinator(ExchangeLatchManager.java:302)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.ExchangeLatchManager.processAck(ExchangeLatchManager.java:351)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.ExchangeLatchManager.lambda$new$0(ExchangeLatchManager.java:121)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1561)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1189)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$8.run(GridIoManager.java:1086)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> This way the received ack won't be processed, so we will be waiting for a 
> retry:
> {code:java}
> // Try to resend ack.
> releaseLatch.countDown();
> {code}
> To solve the issue we need to check whether the version is present in the 
> discovery history and put the ack into a pending map if it isn't (see 
> {{ExchangeLatchManager#pendingAcks}}).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-10010) Node halted if second node was stopped, then cache destroyed, then second node returned

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-10010:
---
Fix Version/s: (was: 2.9)
   2.10

> Node halted if second node was stopped, then cache destroyed, then second 
> node returned
> ---
>
> Key: IGNITE-10010
> URL: https://issues.apache.org/jira/browse/IGNITE-10010
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Sergey Kozlov
>Assignee: Alexey Goncharuk
>Priority: Critical
> Fix For: 2.10
>
> Attachments: PersistenceNodeRestartAfterCacheDropSelfTest.java, 
> ignite-gridparitition-nullpointer.zip
>
>
> Partitions cache sizes
> 1. Start 2 nodes with PDS
> 2. Activate cluster
> 3. Connect sqlline.
> 4. Create table {{create table t1(a int, b varchar, primary key(a)) with 
> "ATOMICITY=TRANSACTIONAL_SNAPSHOT,backups=1";}}
> 5. Stop node 1
> 6. Drop table {{drop table t1;}}
> 7. Start node 1
> 8. Node 2 stopped by handler:
> {noformat}
> c:\Work\apache-ignite-2.7.0-SNAPSHOT-bin>bin\ignite.bat server.xml -v -J-DID=1
> Ignite Command Line Startup, ver. 2.7.0-SNAPSHOT#19700101-sha1:DEV
> 2018 Copyright(C) Apache Software Foundation
> [18:04:22,745][INFO][main][IgniteKernal]
> >>>__  
> >>>   /  _/ ___/ |/ /  _/_  __/ __/
> >>>  _/ // (7 7// /  / / / _/
> >>> /___/\___/_/|_/___/ /_/ /___/
> >>>
> >>> ver. 2.7.0-SNAPSHOT#19700101-sha1:DEV
> >>> 2018 Copyright(C) Apache Software Foundation
> >>>
> >>> Ignite documentation: http://ignite.apache.org
> [18:04:22,745][INFO][main][IgniteKernal] Config URL: 
> file:/c:/Work/apache-ignite-2.7.0-SNAPSHOT-bin/server.xml
> [18:04:22,760][INFO][main][IgniteKernal] IgniteConfiguration 
> [igniteInstanceName=null, pubPoolSize=8, svcPoolSize=8, cal
> lbackPoolSize=8, stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, 
> igfsPoolSize=8, dataStreamerPoolSize=8, utilityCacheP
> oolSize=8, utilityCacheKeepAliveTime=6, p2pPoolSize=2, qryPoolSize=8, 
> igniteHome=c:\Work\apache-ignite-2.7.0-SNAPSHO
> T-bin, igniteWorkDir=c:\Work\apache-ignite-2.7.0-SNAPSHOT-bin\work, 
> mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94
> fa3e, nodeId=d02069db-6d0b-4a40-b185-1d95fa330853, marsh=BinaryMarshaller [], 
> marshLocJobs=false, daemon=false, p2pEnabl
> ed=false, netTimeout=5000, sndRetryDelay=1000, sndRetryCnt=3, 
> metricsHistSize=1, metricsUpdateFreq=2000, metricsExpT
> ime=9223372036854775807, discoSpi=TcpDiscoverySpi [addrRslvr=null, 
> sockTimeout=0, ackTimeout=0, marsh=null, reconCnt=10,
>  reconDelay=2000, maxAckTimeout=60, forceSrvMode=false, 
> clientReconnectDisabled=false, internalLsnr=null], segPlc=ST
> OP, segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true, 
> segChkFreq=1, commSpi=TcpCommunicationSp
> i [connectGate=null, 
> connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@22ff4249,
>  enableForcibleNodeKill=false, enableTroubleshootingLog=false, locAddr=null, 
> locHost=null, locPort=47100, locPortRange=1
> 00, shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=60, 
> connTimeout=5000, maxConnTimeout=60, r
> econCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0, 
> slowClientQueueLimit=0, nioSrvr=null, shmemSrv=null, us
> ePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true, 
> filterReachableAddresses=false, ackSndThreshold=32, una
> ckedMsgsBufSize=0, sockWriteTimeout=2000, boundTcpPort=-1, 
> boundTcpShmemPort=-1, selectorsCnt=4, selectorSpins=0, addrRs
> lvr=null, ctxInitLatch=java.util.concurrent.CountDownLatch@2d1ef81a[Count = 
> 1], stopping=false], evtSpi=org.apache.ignit
> e.spi.eventstorage.NoopEventStorageSpi@4c402120, colSpi=NoopCollisionSpi [], 
> deploySpi=LocalDeploymentSpi [], indexingSp
> i=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@815b41f, 
> addrRslvr=null, encryptionSpi=org.apache.ignite.spi.encry
> ption.noop.NoopEncryptionSpi@5542c4ed, clientMode=false, 
> rebalanceThreadPoolSize=1, txCfg=TransactionConfiguration [txSe
> rEnabled=false, dfltIsolation=REPEATABLE_READ, dfltConcurrency=PESSIMISTIC, 
> dfltTxTimeout=0, txTimeoutOnPartitionMapExch
> ange=0, pessimisticTxLogSize=0, pessimisticTxLogLinger=1, 
> tmLookupClsName=null, txManagerFactory=null, useJtaSync=fa
> lse], cacheSanityCheckEnabled=true, discoStartupDelay=6, 
> deployMode=SHARED, p2pMissedCacheSize=100, locHost=127.0.0.
> 1, timeSrvPortBase=31100, timeSrvPortRange=100, 
> failureDetectionTimeout=1, sysWorkerBlockedTimeout=null, clientFailu
> reDetectionTimeout=3, metricsLogFreq=6, hadoopCfg=null, 
> connectorCfg=ConnectorConfiguration [jettyPath=null, hos
> t=null, port=11211, noDelay=true, directBuf=false, sndBufSize=32768

[jira] [Updated] (IGNITE-12456) Cluster Data Store grid gets Corrupted for Load test

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12456:
---
Fix Version/s: (was: 2.9)
   2.10

> Cluster Data Store grid gets Corrupted for Load test
> 
>
> Key: IGNITE-12456
> URL: https://issues.apache.org/jira/browse/IGNITE-12456
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.7
>Reporter: Ravi Kumar Powli
>Assignee: Alexey Goncharuk
>Priority: Critical
> Fix For: 2.10
>
> Attachments: default-config.xml
>
>
> We have a 3-node Apache Ignite cluster set up with Amazon S3 based discovery. We 
> run in an AWS cloud environment with a microservice model of 8 microservices and 
> use Ignite as the session data store. While performing a load test we face data 
> grid issues once the number of clients reaches 40. Once the data grid gets 
> corrupted we lose the session store data, the application stops responding 
> because the session data is also corrupted, and all instances auto-scale down to 
> the initial size of 8. We need to restart Apache Ignite to bring the application 
> back up. Please find the attached Apache Ignite configuration for reference.
> This is impacting the scalability of the microservices. It is very evident 
> that the current state-based architecture will not scale beyond a certain TPS, 
> and the state store, especially Ignite, becomes a single point of failure, 
> stalling the whole microservice cluster.
> Apache Ignite version: 2.7.0
> {code}
> 07:24:46,678][SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%|#2%DataStoreIgniteCache%][G]
>  Blocked system-critical thread has been detected. This can lead to 
> cluster-wide undefined behaviour [threadName=sys-stripe-5, 
> blockedFor=21s]07:24:46,678][SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%|#2%DataStoreIgniteCache%][G]
>  Blocked system-critical thread has been detected. This can lead to 
> cluster-wide undefined behaviour [threadName=sys-stripe-5, 
> blockedFor=21s][07:24:46,680][SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%|#2%DataStoreIgniteCache%][]
>  Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext 
> [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker 
> [name=sys-stripe-5, igniteInstanceName=DataStoreIgniteCache, finished=false, 
> heartbeatTs=1575271465499]]]class org.apache.ignite.IgniteException: 
> GridWorker [name=sys-stripe-5, igniteInstanceName=DataStoreIgniteCache, 
> finished=false, heartbeatTs=1575271465499] at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
>  at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
>  at 
> org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)
>  at 
> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297) 
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$0(ServerImpl.java:2663)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7181)
>  at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700)
>  at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)
>  at 
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)[07:24:52,692][SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%|#2%DataStoreIgniteCache%][G]
>  Blocked system-critical thread has been detected. This can lead to 
> cluster-wide undefined behaviour [threadName=ttl-cleanup-worker, 
> blockedFor=27s][07:24:52,692][SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%|#2%DataStoreIgniteCache%][]
>  Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext 
> [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker 
> [name=ttl-cleanup-worker, igniteInstanceName=DataStoreIgniteCache, 
> finished=false, heartbeatTs=1575271465044]]]class 
> org.apache.ignite.IgniteException: GridWorker [name=ttl-cleanup-worker, 
> igniteInstanceName=DataStoreIgniteCache, finished=false, 
> heartbeatTs=1575271465044] at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
>  at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
>  at 
> or

[jira] [Updated] (IGNITE-12404) .NET: Adopt nullable reference types

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12404:
---
Fix Version/s: (was: 2.9)
   2.10

> .NET: Adopt nullable reference types
> 
>
> Key: IGNITE-12404
> URL: https://issues.apache.org/jira/browse/IGNITE-12404
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Critical
>  Labels: .NET
> Fix For: 3.0, 2.10
>
>   Original Estimate: 40h
>  Remaining Estimate: 40h
>
> .NET 5 is due in November 2020. Microsoft recommends that all library authors 
> adopt nullable annotations on public APIs before that date:
> * https://devblogs.microsoft.com/dotnet/embracing-nullable-reference-types/
> * https://stu.dev/csharp8-doing-unsupported-things/
> * https://www.youtube.com/watch?v=TJiLhRPgyq4&feature=youtu.be
> The adoption can be performed on any language version by using special 
> attributes in the source code (no binary dependency required): 
> https://github.com/manuelroemer/Nullable 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-8409) Ignite gets stuck on IgniteDataStreamer.addData when using Object with AffinityKeyMapped

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-8409:
--
Fix Version/s: (was: 2.9)
   2.10

> Ignite gets stuck on IgniteDataStreamer.addData when using Object with 
> AffinityKeyMapped
> 
>
> Key: IGNITE-8409
> URL: https://issues.apache.org/jira/browse/IGNITE-8409
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.3
>Reporter: Andrey Aleksandrov
>Priority: Critical
> Fix For: 2.10
>
> Attachments: ContextCpty.java, TradeKey.java, TradeKeyNew.java
>
>
> This problem reproduces from time to time when we are streaming the data 
> (TradeKey.java) into an Ignite SQL cache. As the AffinityKeyMapped field we used 
> an object type (ContextCpty.java).
> When we change the AffinityKeyMapped type from an object to a long 
> (TradeKeyNew.java), the problem disappears.
> Investigation showed that we hang in the BPlusTree class, in the following 
> method:
> private Result putDown(final Put p, final long pageId, final long fwdId, 
> final int lvl)
> In this method:
> res = p.tryReplaceInner(pageId, page, fwdId, lvl);
> if (res != RETRY) // Go down recursively.
> res = putDown(p, p.pageId, p.fwdId, lvl - 1);
> if (res == RETRY_ROOT || p.isFinished())
> return res;
> if (res == RETRY)
> checkInterrupted(); //WE ALWAYS GO TO THIS PLACE
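
The attached classes are not reproduced in this message; the following are hypothetical, simplified stand-ins meant only to illustrate the two affinity-key configurations being compared:

{code:java}
import org.apache.ignite.cache.affinity.AffinityKeyMapped;

class ContextCpty {
    String cptyId;
    // equals()/hashCode() omitted in this sketch
}

class TradeKey {        // variant that occasionally hangs IgniteDataStreamer.addData
    long tradeId;

    @AffinityKeyMapped
    ContextCpty cpty;   // affinity key is an object type
}

class TradeKeyNew {     // variant where the problem disappears
    long tradeId;

    @AffinityKeyMapped
    long cptyId;        // affinity key is a primitive long
}
{code}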



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-11302) idleConnectionTimeout TcpComm different on server and client (client default > server custom) lead to wait until client timeout on server side

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-11302:
---
Fix Version/s: (was: 2.9)
   2.10

> idleConnectionTimeout TcpComm different on server and client (client default 
> > server custom) lead to wait until client timeout on server side
> --
>
> Key: IGNITE-11302
> URL: https://issues.apache.org/jira/browse/IGNITE-11302
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.5
>Reporter: ARomantsov
>Assignee: Dmitriy Sorokin
>Priority: Critical
> Fix For: 2.10
>
>
> Server config:
> {code:xml}
> 
>  class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
> 
> 
> 
> Client config
> 
>  class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
> 
> 
> {code}
> The server waits for the default idleConnectionTimeout (10 minutes) before the 
> client failure is detected.
> If both configs set idleConnectionTimeout=1 s, Ignite works according to the 
> config.
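
The XML snippets above were partially stripped in this message; below is a minimal Java sketch of keeping the setting consistent on both sides (the 1-second value is taken from the report and is only illustrative):

{code:java}
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class CommSpiConfigSketch {
    public static IgniteConfiguration config(boolean client) {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

        // Use the same idleConnectionTimeout on servers and clients; a mismatch
        // (custom value on the server, the 10-minute default on the client) is
        // exactly what this issue describes.
        commSpi.setIdleConnectionTimeout(1_000L); // 1 second

        return new IgniteConfiguration()
            .setClientMode(client)
            .setCommunicationSpi(commSpi);
    }
}
{code}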



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-11687) Concurrent WAL replay & log may fail with CRC error on read

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-11687:
---
Fix Version/s: (was: 2.9)
   2.10

> Concurrent WAL replay & log may fail with CRC error on read
> ---
>
> Key: IGNITE-11687
> URL: https://issues.apache.org/jira/browse/IGNITE-11687
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Alexey Goncharuk
>Assignee: Anton Kalashnikov
>Priority: Critical
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The cause is the way {{end}} is calculated for WAL iterator:
> {code}
> if (hnd != null)
> end = hnd.position();
> {code}
> {code}
> @Override public FileWALPointer position() {
> lock.lock();
> try {
> return new FileWALPointer(getSegmentId(), (int)written, 0);
> }
> finally {
> lock.unlock();
> }
> }
> {code}
> Consider a partially written entry. In this case {{written}} has already been 
> updated, so a concurrent WAL replay will attempt to read the incompletely 
> written record, and since {{end}} is not null, the iterator will fail with a CRC 
> error.
> The issue may be rarely reproduced by {{IgniteWalSerializerVersionTest}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12852) Comma in field is not supported by COPY command

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12852:
---
Fix Version/s: (was: 2.9)
   2.10

> Comma in field is not supported by COPY command
> ---
>
> Key: IGNITE-12852
> URL: https://issues.apache.org/jira/browse/IGNITE-12852
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8
>Reporter: YuJue Li
>Assignee: YuJue Li
>Priority: Critical
> Fix For: 2.10
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> CREATE TABLE test(a int,b varchar(100),c int,PRIMARY key(a)); 
>  
> a.csv: 
> 1,"a,b",2 
>  
> COPY FROM '/data/a.csv' INTO test (a,b,c) FORMAT CSV; 
>  
> The COPY command fails because there is a comma in the second field, but this 
> is a fully legal and compliant CSV format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-6324) Transactional cache data partially available after crash.

2020-07-21 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162508#comment-17162508
 ] 

Aleksey Plekhanov commented on IGNITE-6324:
---

Moved to the next release due to inactivity.

> Transactional cache data partially available after crash.
> -
>
> Key: IGNITE-6324
> URL: https://issues.apache.org/jira/browse/IGNITE-6324
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 1.9, 2.1
>Reporter: Stanilovsky Evgeny
>Priority: Critical
> Fix For: 2.9
>
> Attachments: InterruptCommitedThreadTest.java
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If an InterruptedException is raised in client code during PDS store operations, 
> we can end up with an inconsistent cache after restart.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-6324) Transactional cache data partially available after crash.

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-6324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-6324:
--
Fix Version/s: (was: 2.9)
   2.10

> Transactional cache data partially available after crash.
> -
>
> Key: IGNITE-6324
> URL: https://issues.apache.org/jira/browse/IGNITE-6324
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Affects Versions: 1.9, 2.1
>Reporter: Stanilovsky Evgeny
>Priority: Critical
> Fix For: 2.10
>
> Attachments: InterruptCommitedThreadTest.java
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If an InterruptedException is raised in client code during PDS store operations, 
> we can end up with an inconsistent cache after restart.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-10417) notifyDiscoveryListener() call can be lost.

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-10417:
---
Fix Version/s: (was: 2.9)
   2.10

> notifyDiscoveryListener() call can be lost.
> ---
>
> Key: IGNITE-10417
> URL: https://issues.apache.org/jira/browse/IGNITE-10417
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Pavel Voronkin
>Assignee: Pavel Voronkin
>Priority: Critical
> Fix For: 2.10
>
>
> 1) ServerImpl:5455: notifyDiscoveryListener() may not be called when 
> spiState != CONNECTED, for example DISCONNECTING, which is a valid state; 
> nevertheless, inside notifyDiscoveryListener we check spiState == CONNECTED || 
> spiState == DISCONNECTING. We also need to improve logging in 
> notifyDiscoveryListener().
> 2) Improve logging on a duplicated custom discovery event.
> 3) Add an assert if msg.creatorNodeId() doesn't exist in the topology; this is 
> bad behaviour that should lead to a node failure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-3212) Servers get stuck with the warning "Failed to wait for initial partition map exchange" during failover test

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-3212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-3212:
--
Fix Version/s: (was: 2.9)
   2.10

> Servers get stuck with the warning "Failed to wait for initial partition map 
> exchange" during falover test
> --
>
> Key: IGNITE-3212
> URL: https://issues.apache.org/jira/browse/IGNITE-3212
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6
>Reporter: Ksenia Rybakova
>Priority: Critical
> Fix For: 3.0, 2.10
>
>
> Servers being restarted during failover test get stuck after some time with 
> the warning "Failed to wait for initial partition map exchange". 
> {noformat}
> [08:44:41,303][INFO ][disco-event-worker-#80%null%][GridDiscoveryManager] 
> Added new node to topology: TcpDiscoveryNode 
> [id=db557f04-43b7-4e28-ae0d-d4dcf4139c89, addrs=
> [10.20.0.222, 127.0.0.1], sockAddrs=[fosters-222/10.20.0.222:47503, 
> /10.20.0.222:47503, /127.0.0.1:47503], discPort=47503, order=44, intOrder=32, 
> lastExchangeTime=1464
> 363880917, loc=false, ver=1.6.0#20160525-sha1:48321a40, isClient=false]
> [08:44:41,304][INFO ][disco-event-worker-#80%null%][GridDiscoveryManager] 
> Topology snapshot [ver=44, servers=19, clients=1, CPUs=64, heap=160.0GB]
> [08:45:11,455][INFO ][disco-event-worker-#80%null%][GridDiscoveryManager] 
> Added new node to topology: TcpDiscoveryNode 
> [id=6fae61a7-c1c1-40e5-8ad0-8bf5d6c86eb7, addrs=
> [10.20.0.223, 127.0.0.1], sockAddrs=[fosters-223/10.20.0.223:47503, 
> /10.20.0.223:47503, /127.0.0.1:47503], discPort=47503, order=45, intOrder=33, 
> lastExchangeTime=1464
> 363910999, loc=false, ver=1.6.0#20160525-sha1:48321a40, isClient=false]
> [08:45:11,455][INFO ][disco-event-worker-#80%null%][GridDiscoveryManager] 
> Topology snapshot [ver=45, servers=20, clients=1, CPUs=64, heap=170.0GB]
> [08:45:19,942][INFO ][ignite-update-notifier-timer][GridUpdateNotifier] 
> Update status is not available.
> [08:46:20,370][WARN ][main][GridCachePartitionExchangeManager] Failed to wait 
> for initial partition map exchange. Possible reasons are:
>   ^-- Transactions in deadlock.
>   ^-- Long running transactions (ignore if this is the case).
>   ^-- Unreleased explicit locks.
> [08:48:30,375][WARN ][main][GridCachePartitionExchangeManager] Still waiting 
> for initial partition map exchange ...
> {noformat}
> "Failed to wait for partition release future" warnings are on other nodes.
> {noformat}
> [08:09:45,822][WARN 
> ][exchange-worker-#82%null%][GridDhtPartitionsExchangeFuture] Failed to wait 
> for partition release future [topVer=AffinityTopologyVersion [topVer=29, 
> minorTopVer=0], node=cab5d0e0-7365-4774-8f99-d9f131c5d896]. Dumping pending 
> objects that might be the cause:
> [08:09:45,822][WARN 
> ][exchange-worker-#82%null%][GridCachePartitionExchangeManager] Ready 
> affinity version: AffinityTopologyVersion [topVer=28, minorTopVer=1]
> [08:09:45,826][WARN 
> ][exchange-worker-#82%null%][GridCachePartitionExchangeManager] Last exchange 
> future: GridDhtPartitionsExchangeFuture ...
> {noformat}
> Load config:
> - 1 client, 20 servers (5 servers per 1 host)
> - warmup 60
> - duration 66h
> - preload 5M
> - key range 10M
> - operations: PUT PUT_ALL GET GET_ALL INVOKE INVOKE_ALL REMOVE REMOVE_ALL 
> PUT_IF_ABSENT REPLACE
> - backups count 3
> - 3 servers restart every 15 min with 30 sec step, pause between stop and 
> start 5min



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-9764) Node may hang on start if cluster state is in transition

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-9764:
--
Fix Version/s: (was: 2.9)
   2.10

> Node may hang on start if cluster state is in transition
> 
>
> Key: IGNITE-9764
> URL: https://issues.apache.org/jira/browse/IGNITE-9764
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Goncharuk
>Priority: Critical
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.10
>
> Attachments: JoinNodeToTransitionStateClusterTest.java
>
>
> The following sequence of events may cause a node to hang on start.
> The node starts, detects a cluster state transition and waits for it to complete:
> {code}
> "start-node-1" #11804 prio=5 os_prio=0 tid=0x7f9cc4022000 nid=0x1094 
> waiting on condition [0x7f9ffc4c2000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
>   at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1084)
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2033)
>   at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1728)
>   - locked <0x9467c890> (a 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance)
>   at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1156)
>   at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:654)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:917)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:855)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:843)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.startGrid(GridAbstractTest.java:809)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteClusterActivateDeactivateTest.lambda$testConcurrentJoinAndActivate$4(IgniteClusterActivateDeactivateTest.java:539)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteClusterActivateDeactivateTest$$Lambda$99/295822519.call(Unknown
>  Source)
>   at 
> org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:86)
> {code}
> The NIO thread that is supposed to process the message that would complete the 
> exchange is attempting to create a session and get the local node ID:
> {code}
> "grid-nio-worker-tcp-comm-3-#9833%cache.IgniteClusterActivateDeactivateTest3%"
>  #11875 prio=5 os_prio=0 tid=0x7f9c8009e800 nid=0x10dc waiting on 
> condition [0x7f9ff4d76000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.await(IgniteUtils.java:7577)
>   at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.getSpiContext(TcpCommunicationSpi.java:2266)
>   at 
> org.apache.ignite.spi.IgniteSpiAdapter.getLocalNode(IgniteSpiAdapter.java:156)
>   at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.safeLocalNodeId(TcpCommunicationSpi.java:4006)
>   at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.nodeIdMessage(TcpCommunicationSpi.java:3999)
>   at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.access$300(TcpCommunicationSpi.java:271)
>   at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2.onConnected(TcpCommunicationSpi.java:412)
>   at 
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onSessionOpened(GridNioFilterChain.java:251)
>   at 
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionOpened(GridNioFilterAdapter.java:88)
>   at 
> org.apache.ignite.internal.util.nio.GridNioCodecFilter.onSessionOpened(GridNioCodecFilter.java:67)
>   at 
>

[jira] [Updated] (IGNITE-2176) Invalid exceptions when an example can't work with a remote node started with the server classpath (Java 8)

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-2176:
--
Fix Version/s: (was: 2.9)
   2.10

> Invalid exceptions when an example can't work with a remote node 
> started with the server classpath (Java 8)
> -
>
> Key: IGNITE-2176
> URL: https://issues.apache.org/jira/browse/IGNITE-2176
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final, 2.7.5
> Environment: jdk 1.7
> OS X 10.10.2
>Reporter: Ilya Suntsov
>Assignee: Alexey Goncharuk
>Priority: Major
> Fix For: 2.10
>
>
> Steps to reproduce:
> 1. Start one node from IDEA and one more from terminal
> 2. Run StreamVisitorExample (Java 8):
> Exception:
> {noformat}
> Exception in thread "pub-#2%null%" class 
> org.apache.ignite.binary.BinaryInvalidTypeException: 
> org.apache.ignite.examples.java8.streaming.StreamVisitorExample
>   at 
> org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:467)
>   at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadClass(BinaryUtils.java:1330)
>   at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadClass(BinaryUtils.java:1284)
>   at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readClass(BinaryReaderExImpl.java:339)
>   at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.readFixedType(BinaryFieldAccessor.java:835)
>   at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read(BinaryFieldAccessor.java:645)
>   at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:696)
>   at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1450)
>   at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1646)
>   at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read(BinaryFieldAccessor.java:645)
>   at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:696)
>   at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1450)
>   at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:267)
>   at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal(BinaryMarshaller.java:112)
>   at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:271)
>   at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:49)
>   at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:76)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:819)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$1600(GridIoManager.java:103)
>   at 
> org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:782)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.ignite.examples.java8.streaming.StreamVisitorExample
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:348)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:8172)
>   at 
> org.apache.ignite.internal.MarshallerContextAdapter.getClass(MarshallerContextAdapter.java:185)
>   at 
> org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:458)
>   ... 22 more
> {noformat}
> 3. Run StreamTransformerExample (Java 8)
> Exception:
> {noformat}
> Exception in thread "pub-#2%null%" class 
> org.apache.ignite.binary.BinaryInvalidTypeException: 
> org.apache.ignite.examples.java8.streaming.StreamTransformerExample
>   at 
> org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:467)
>   at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadClass(BinaryUtils.java:1330)
>   at 
> org.

[jira] [Updated] (IGNITE-12131) MMAP mode should be disabled by default for WAL manager

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12131:
---
Fix Version/s: (was: 2.9)
   2.10

> MMAP mode should be disabled by default for WAL manager
> ---
>
> Key: IGNITE-12131
> URL: https://issues.apache.org/jira/browse/IGNITE-12131
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey N. Gura
>Assignee: Andrey N. Gura
>Priority: Major
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> MMAP mode has a bunch of problems (see for example 
> http://www.mapdb.org/blog/mmap_files_alloc_and_jvm_crash/ )
> Users are increasingly stumbling over these issues, especially in virtualized 
> environments.
> MMAP should be disabled by default because it requires additional care from the 
> user's standpoint.
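
Until the default is flipped, the mode can be turned off explicitly; a minimal sketch, assuming the IGNITE_WAL_MMAP system property (it has to be set before the node starts, e.g. via -DIGNITE_WAL_MMAP=false on the command line):

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class DisableWalMmapSketch {
    public static void main(String[] args) {
        // Disable memory-mapped WAL segments for this JVM before node start-up.
        System.setProperty("IGNITE_WAL_MMAP", "false");

        Ignite ignite = Ignition.start();
    }
}
{code}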



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13013) Thick client must not open server sockets when used by serverless functions

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13013:
---
Fix Version/s: (was: 2.9)
   2.10

> Thick client must not open server sockets when used by serverless functions
> ---
>
> Key: IGNITE-13013
> URL: https://issues.apache.org/jira/browse/IGNITE-13013
> Project: Ignite
>  Issue Type: Improvement
>  Components: networking
>Affects Versions: 2.8
>Reporter: Denis A. Magda
>Assignee: Ivan Bessonov
>Priority: Critical
> Fix For: 2.10
>
>
> A thick client fails to start if it is used inside a serverless function 
> such as AWS Lambda or Azure Functions. Cloud providers prohibit opening 
> network ports to accept connections on the function's end. In short, the 
> function can only connect to a remote address.
> To reproduce, you can follow this tutorial and swap the thin client (used in 
> the tutorial) with the thick one: 
> https://www.gridgain.com/docs/tutorials/serverless/azure_functions_tutorial
> The thick client needs to support a mode when the communication SPI doesn't 
> create a server socket if the client is used for serverless computing. This 
> improvement looks like an extra task of this initiative: 
> https://issues.apache.org/jira/browse/IGNITE-12438



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13038) Phase out Ignite Web Console

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13038:
---
Fix Version/s: (was: 2.9)
   2.10

> Phase out Ignite Web Console
> 
>
> Key: IGNITE-13038
> URL: https://issues.apache.org/jira/browse/IGNITE-13038
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis A. Magda
>Assignee: Alexey Kuznetsov
>Priority: Critical
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The community voted to stop the development and maintenance of Ignite Web 
> Console:
> http://apache-ignite-developers.2346864.n4.nabble.com/RESULT-VOTE-Stop-Maintenance-of-Ignite-Web-Console-td47548.html
> The following needs to be done:
> * Move the tool's source code from the Ignite core to its own repository 
> (https://github.com/apache/ignite-web-console)
> * Update the repository description highlighting that the tool is no longer 
> supported by the community.
> * Unlist Web Console documentation from the navigation and main menus. Do NOT 
> remove as long as there are many pages referring to the docs. Just hide.
> * Put a callout on all the hidden documentation pages saying that the tool's 
> source code is archived (provide a reference to the repo).
> * Close all the JIRA tickets created for Web Console with the notice that the 
> tool is discontinued and no longer supported.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12550) Add page read latency histogram per data region

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12550:
---
Fix Version/s: (was: 2.9)
   2.10

> Add page read latency histogram per data region
> ---
>
> Key: IGNITE-12550
> URL: https://issues.apache.org/jira/browse/IGNITE-12550
> Project: Ignite
>  Issue Type: Improvement
>  Components: persistence
>Reporter: Alexey Goncharuk
>Assignee: Alexey Goncharuk
>Priority: Major
> Fix For: 2.10
>
>
> During an incident I experienced a large checkpoint mark duration. It was 
> impossible to determine whether this was caused by a stalled disk (a large 
> number of long page reads) or by some other reason.
> Having a metric showing the page read latency histogram would help in such 
> cases.
> We already have a {{pagesRead}} metric, just need to measure the read timings.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12694) A possible partition desync if last supplier has left and returned later.

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12694:
---
Fix Version/s: 2.10

> A possible partition desync if last supplier has left and returned later.
> -
>
> Key: IGNITE-12694
> URL: https://issues.apache.org/jira/browse/IGNITE-12694
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Scherbakov
>Assignee: Alexey Scherbakov
>Priority: Major
> Fix For: 2.10
>
>
> Consider the scenario:
> Two nodes A and B in the grid.
> 1. B is rebalancing from A.
> 2. Before rebalancing is finished A has left, partitions on B have stale data.
> 3. A returns to the topology.
> Rebalancing will not start without manual intervention, because the update 
> counters for partitions will be the same.
> This happens because LWM is assigned during PME before actual updates are 
> loaded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12694) A possible partition desync if last supplier has left and returned later.

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12694:
---
Fix Version/s: (was: 2.9)

> A possible partition desync if last supplier has left and returned later.
> -
>
> Key: IGNITE-12694
> URL: https://issues.apache.org/jira/browse/IGNITE-12694
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Scherbakov
>Assignee: Alexey Scherbakov
>Priority: Major
>
> Consider the scenario:
> Two nodes A and B in the grid.
> 1. B is rebalancing from A.
> 2. Before rebalancing is finished A has left, partitions on B have stale data.
> 3. A returns to the topology.
> Rebalancing will not start without manual intervention, because the update 
> counters for partitions will be the same.
> This happens because LWM is assigned during PME before actual updates are 
> loaded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12610) Disable H2 object cache reliably

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12610:
---
Fix Version/s: (was: 2.9)
   2.10

> Disable H2 object cache reliably
> 
>
> Key: IGNITE-12610
> URL: https://issues.apache.org/jira/browse/IGNITE-12610
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8
>Reporter: Ivan Pavlukhin
>Assignee: Artsiom Panko
>Priority: Major
>  Labels: newbie
> Fix For: 2.10
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Internally H2 maintains a cache of {{org.h2.value.Value}} objects. It can be 
> disabled by using the "h2.objectCache" system property. There is a clear intent 
> to disable this cache, because the system property is set to "false" in 
> {{org.apache.ignite.internal.processors.query.h2.ConnectionManager}}. But 
> apparently this is too late: the property is read by H2 internals before that 
> point. Consequently, the object cache is enabled by default.
> We need to set this property earlier.
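> 
> A minimal sketch of the intended ordering, assuming the property only needs 
> to be set before the first H2 class that reads it is initialized (the 
> {{EarlyH2Settings}} holder and its usage are illustrative, not the actual 
> fix):
> {code:java}
> /** Must be touched before any org.h2.* class is initialized. */
> public final class EarlyH2Settings {
>     static {
>         // Value caching in H2 is controlled by this JVM-wide property and is
>         // read once during H2 class initialization, so it has to be set first.
>         System.setProperty("h2.objectCache", "false");
>     }
> 
>     private EarlyH2Settings() {
>         // No instances.
>     }
> 
>     /** No-op used only to force static initialization at a well-defined point. */
>     public static void init() {
>         // Intentionally empty.
>     }
> }
> {code}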



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12451) Introduce deadlock detection for cache entry reentrant locks

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12451:
---
Fix Version/s: (was: 2.9)
   2.10

> Introduce deadlock detection for cache entry reentrant locks
> 
>
> Key: IGNITE-12451
> URL: https://issues.apache.org/jira/browse/IGNITE-12451
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.7.6
>Reporter: Ivan Rakov
>Assignee: Mirza Aliev
>Priority: Major
> Fix For: 2.10
>
>
> Aside from IGNITE-12365, we still have a possible threat of cache-entry-level 
> deadlock in case of careless usage of JCache mass operations (putAll, 
> removeAll):
> 1. If two different user threads perform putAll on the same two keys in 
> reverse order (with the same primary node), there's a chance that the 
> sys-stripe threads will be deadlocked.
> 2. Even without a direct contract violation from the user side, a HashMap can 
> be passed as the argument for putAll. Even if user threads have called mass 
> operations with two keys in the same order, HashMap iteration order is not 
> strictly defined, which may cause the same deadlock.
> Local deadlock detection should mitigate this issue. We can create a wrapper 
> for ReentrantLock with logic that performs cycle detection in the wait-for 
> graph when we have been waiting for lock acquisition for too long. An 
> exception will be thrown from one of the threads in such a case, failing the 
> user operation but letting the system make progress.
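> 
> A simplified sketch of the wrapper idea. Instead of the full wait-for-graph 
> cycle detection proposed above, this hypothetical {{DetectingLock}} just 
> treats an overly long acquisition wait as a suspected deadlock and fails the 
> caller:
> {code:java}
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.locks.ReentrantLock;
> 
> public class DetectingLock {
>     private final ReentrantLock delegate = new ReentrantLock();
> 
>     /** How long to wait before assuming a deadlock; a tunable, not a constant. */
>     private final long maxWaitMillis;
> 
>     public DetectingLock(long maxWaitMillis) {
>         this.maxWaitMillis = maxWaitMillis;
>     }
> 
>     public void lock() {
>         try {
>             // A full implementation would inspect the wait-for graph here;
>             // the timeout below is only a crude stand-in for that check.
>             if (!delegate.tryLock(maxWaitMillis, TimeUnit.MILLISECONDS))
>                 throw new IllegalStateException("Possible deadlock on cache entry lock " +
>                     "(waited " + maxWaitMillis + " ms), failing the operation.");
>         }
>         catch (InterruptedException e) {
>             Thread.currentThread().interrupt();
> 
>             throw new IllegalStateException("Interrupted while waiting for cache entry lock.", e);
>         }
>     }
> 
>     public void unlock() {
>         delegate.unlock();
>     }
> }
> {code}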



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12414) .NET: Performance: review CopyOnWriteConcurrentDictionary.GetOrAdd usage and locking

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12414:
---
Fix Version/s: (was: 2.9)
   2.10

> .NET: Performance: review CopyOnWriteConcurrentDictionary.GetOrAdd usage and 
> locking
> 
>
> Key: IGNITE-12414
> URL: https://issues.apache.org/jira/browse/IGNITE-12414
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET
> Fix For: 3.0, 2.10
>
>
> CopyOnWriteConcurrentDictionary.GetOrAdd takes the lock right away, while the 
> class assumes frequent reads and infrequent writes. It can be beneficial to 
> check for the key outside of the lock.
> In particular, this often causes contention because of the 
> BinarySystemHandlers.GetWriteHandler call.
> Review other usages of this method.
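> 
> The read-before-lock pattern rendered in Java for illustration (the actual 
> class is .NET; this {{CopyOnWriteMap}} is a hypothetical stand-in, not the 
> Ignite code):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> import java.util.function.Function;
> 
> public class CopyOnWriteMap<K, V> {
>     /** Immutable snapshot; replaced wholesale under the lock on every write. */
>     private volatile Map<K, V> snapshot = new HashMap<>();
> 
>     public V getOrAdd(K key, Function<K, V> factory) {
>         // Fast path: frequent readers never touch the lock.
>         V existing = snapshot.get(key);
> 
>         if (existing != null)
>             return existing;
> 
>         synchronized (this) {
>             // Re-check under the lock: another writer may have added the key.
>             existing = snapshot.get(key);
> 
>             if (existing != null)
>                 return existing;
> 
>             V created = factory.apply(key);
> 
>             Map<K, V> copy = new HashMap<>(snapshot);
>             copy.put(key, created);
>             snapshot = copy;
> 
>             return created;
>         }
>     }
> }
> {code}
> The second lookup under the lock is what keeps concurrent writers of the same 
> key from losing an entry.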



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-8260) Transparent data encryption

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-8260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-8260:
--
Fix Version/s: (was: 2.9)
   2.10

> Transparent data encryption
> ---
>
> Key: IGNITE-8260
> URL: https://issues.apache.org/jira/browse/IGNITE-8260
> Project: Ignite
>  Issue Type: New Feature
>Affects Versions: 2.4
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-18
> Fix For: 2.10
>
>
> The TDE feature should allow the user to protect data stored in persistent 
> storage with a cipher algorithm.
> Design details are described in 
> [IEP-18|https://cwiki.apache.org/confluence/display/IGNITE/IEP-18%3A+Transparent+Data+Encryption].
> When this task is done, a production-ready TDE implementation should be 
> available for Ignite.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12203) Rebalance is loading partitions already loading after cancellation without WAL

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12203:
---
Fix Version/s: (was: 2.9)
   2.10

> Rebalance is loading partitions already loading after cancellation without WAL
> --
>
> Key: IGNITE-12203
> URL: https://issues.apache.org/jira/browse/IGNITE-12203
> Project: Ignite
>  Issue Type: Bug
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I have seen partition miss warnings in the log; after that, rebalance was 
> canceled and forcibly restarted, but over different suppliers. When nodes 
> without storage are added to a persistent cluster, this can lead to several 
> full rebalances. It seems to be because partition states are not updated 
> until the rebalance is finished.
> We should prevent partition eviction until the rebalance has completed on all 
> nodes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12956) Fully prepared TX rolled back on recovery if TX coordinator failed and some primary has only reads

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12956:
---
Fix Version/s: (was: 2.9)
   2.10

> Fully prepared TX rolled back on recovery if TX coordinator failed and some 
> primary has only reads
> --
>
> Key: IGNITE-12956
> URL: https://issues.apache.org/jira/browse/IGNITE-12956
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We have 3 nodes with a partitioned cache with 2 backups.
>  A - tx coordinator.
>  B and C - other nodes.
> Let's start a tx from A and perform
> {noformat}
> cache.put(primaryKeyA, someVal);
> cache.get(primaryKeyB);
> tx.prepare();
> {noformat}
> then kill A.
> Expected: tx recovered and
> {noformat}
> cache.get(primaryKeyA)!=null
> {noformat}
> Actual: tx rolled back and
>  and
> {noformat}
> cache.get(primaryKeyA)==null
> {noformat}
> Reason: Node C has only 1 active transaction (because reads are not propagated 
> to the backup), but it is expected to have 2.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12793) Deadlock in the System Pool on Metadata processing

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12793:
---
Fix Version/s: (was: 2.9)
   2.10

> Deadlock in the System Pool on Metadata processing
> --
>
> Key: IGNITE-12793
> URL: https://issues.apache.org/jira/browse/IGNITE-12793
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8, 2.7.6
>Reporter: Sergey Kosarev
>Assignee: Sergey Chugunov
>Priority: Major
> Fix For: 2.10
>
> Attachments: ignite-12793-threaddump.txt
>
>
> I've recently tried to apply Ilya's idea 
> (https://issues.apache.org/jira/browse/IGNITE-12663) of minimizing thread 
> pools and set the system pool size to 3 in my own tests.
>  It caused a deadlock on a client node, and I think it can happen not only 
> with such small pool values.
> Details are as follows:
>  I'm not using persistence currently (if it matters).
>  On the client node I use Ignite compute to call a job on every server node 
> (there are 3 server nodes in the tests).
> Then I found the following in the logs:
> {noformat}
> [10:55:21] : [Step 1/1] [2020-03-13 10:55:21,773]
> grid-timeout-worker-#8
> [WARN] [o.a.i.i.IgniteKernal] - Possible thread pool starvation detected (no 
> task completed in last 3ms, is system thread pool size large enough?)
>  [10:55:21] : [Step 1/1] ^-- System thread pool [active=3, idle=0, qSize=9]
> {noformat}
> I see in the thread dumps that all 3 system pool workers are doing the same 
> thing - processing job responses:
> {noformat}
>   "sys-#26" #605 daemon prio=5 os_prio=0 tid=0x64a0a800 nid=0x1f34 
> waiting on condition [0x7b91d000]
>  java.lang.Thread.State: WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
>  at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.metadata(CacheObjectBinaryProcessorImpl.java:749)
>  at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$1.metadata(CacheObjectBinaryProcessorImpl.java:250)
>  at 
> org.apache.ignite.internal.binary.BinaryContext.metadata(BinaryContext.java:1169)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.getOrCreateSchema(BinaryReaderExImpl.java:2005)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:285)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.(BinaryReaderExImpl.java:184)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1982)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:702)
>  at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:187)
>  at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:887)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1762)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1797)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.deserializeOrUnmarshal(BinaryUtils.java:2160)
>  at 
> org.apache.ignite.internal.binary.BinaryUtils.doReadCollection(BinaryUtils.java:2091)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1914)
>  at 
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1714)
>  at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:306)
>  at 
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:100)
>  at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
>  at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10493)
>  at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse

[jira] [Updated] (IGNITE-9005) Eviction policy MBeans change failed LifecycleAwareTest on cache name injection

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-9005:
--
Fix Version/s: (was: 2.9)
   2.10

> Eviction policy MBeans change failed LifecycleAwareTest on cache name 
> injection
> ---
>
> Key: IGNITE-9005
> URL: https://issues.apache.org/jira/browse/IGNITE-9005
> Project: Ignite
>  Issue Type: Bug
>Reporter: Dmitry Pavlov
>Assignee: Nikolai Kulagin
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> http://apache-ignite-developers.2346864.n4.nabble.com/MTCGA-new-failures-in-builds-1485687-needs-to-be-handled-td32531.html
> New test failure detected 
> https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=7246907407546697403&branch=%3Cdefault%3E&tab=testDetails
> after merging 
> IGNITE-8776 Eviction policy MBeans are never registered if 
> evictionPolicyFactory is used 
> Reverting the commit makes the test pass.
> Locally the test also failed, with the following message:
> {noformat}
> Unexpected cache name for 
> org.apache.ignite.internal.processors.cache.GridCacheLifecycleAwareSelfTest$TestEvictionPolicy@322714f4
>  expected: but was:
> {noformat}
> The failure message seems to be related to the TestEvictionPolicy instance 
> from the test class.
> It seems that returning the call to 
> cctx.kernalContext().resource().injectCacheName(rsrc, cfg.getName()) should 
> fix the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13054) Prevent nodes with stale data joining the active topology.

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13054:
---
Fix Version/s: (was: 2.9)
   2.10

> Prevent nodes with stale data joining the active topology.
> --
>
> Key: IGNITE-13054
> URL: https://issues.apache.org/jira/browse/IGNITE-13054
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Scherbakov
>Priority: Major
> Fix For: 2.10
>
>
> After IGNITE-13003 the LOST state is preserved when nodes with lost data are 
> returned to the topology after a failure.
> If resetting is performed on a smaller topology, it's possible to get into a 
> situation where a node with the most recent data is returned to an active 
> topology where some keys could already have been modified.
> This should not be allowed because it brings conflicting data into the grid.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12982) NullPointerException on TcpCommunicationMetricsListener for some of the cases

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12982:
---
Fix Version/s: (was: 2.9)
   2.10

> NullPointerException on TcpCommunicationMetricsListener for some of the cases
> -
>
> Key: IGNITE-12982
> URL: https://issues.apache.org/jira/browse/IGNITE-12982
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maxim Muzafarov
>Priority: Major
> Fix For: 2.10
>
>
> The code block below throws a {{NullPointerException}} in some cases. 
> Investigation is required.
> {code}
> @Override public void onMessageSent(GridNioSession ses, Message msg) {
>     Object consistentId = ses.meta(CONSISTENT_ID_META);
> 
>     if (consistentId != null)
>         metricsLsnr.onMessageSent(msg, consistentId);
> }
> {code}
> {code}
> [2020-05-04 
> 18:12:12,991][ERROR][grid-nio-worker-tcp-comm-0-#543%snapshot.IgniteClusterSnapshotSelfTest2%][TestRecordingCommunicationSpi]
>  Failed to process selector key [ses=GridSelectorNioSessionImpl 
> [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=0, 
> bytesRcvd=42, bytesSent=18, bytesRcvd0=42, bytesSent0=18, select=true, 
> super=GridWorker [name=grid-nio-worker-tcp-comm-0, 
> igniteInstanceName=snapshot.IgniteClusterSnapshotSelfTest2, finished=false, 
> heartbeatTs=1588605131981, hashCode=1038334332, interrupted=false, 
> runner=grid-nio-worker-tcp-comm-0-#543%snapshot.IgniteClusterSnapshotSelfTest2%]]],
>  writeBuf=java.nio.DirectByteBuffer[pos=10 lim=32768 cap=32768], 
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], 
> inRecovery=GridNioRecoveryDescriptor [acked=0, resendCnt=0, rcvCnt=0, 
> sentCnt=0, reserved=true, lastAck=0, nodeLeft=false, node=TcpDiscoveryNode 
> [id=7f78d082-6ce9-42b1-ab08-da1fde40, 
> consistentId=snapshot.IgniteClusterSnapshotSelfTest0, addrs=ArrayList 
> [127.0.0.1], sockAddrs=HashSet [/127.0.0.1:47500], discPort=47500, order=1, 
> intOrder=1, lastExchangeTime=1588605131971, loc=false, 
> ver=2.9.0#20200428-sha1:e551fa71, isClient=false], connected=true, 
> connectCnt=0, queueLimit=4096, reserveCnt=1, pairedConnections=false], 
> outRecovery=GridNioRecoveryDescriptor [acked=0, resendCnt=0, rcvCnt=0, 
> sentCnt=0, reserved=true, lastAck=0, nodeLeft=false, node=TcpDiscoveryNode 
> [id=7f78d082-6ce9-42b1-ab08-da1fde40, 
> consistentId=snapshot.IgniteClusterSnapshotSelfTest0, addrs=ArrayList 
> [127.0.0.1], sockAddrs=HashSet [/127.0.0.1:47500], discPort=47500, order=1, 
> intOrder=1, lastExchangeTime=1588605131971, loc=false, 
> ver=2.9.0#20200428-sha1:e551fa71, isClient=false], connected=true, 
> connectCnt=0, queueLimit=4096, reserveCnt=1, pairedConnections=false], 
> closeSocket=true, 
> outboundMessagesQueueSizeMetric=o.a.i.i.processors.metric.impl.LongAdderMetric@69a257d1,
>  super=GridNioSessionImpl [locAddr=/127.0.0.1:47102, 
> rmtAddr=/127.0.0.1:50655, createTime=1588605131981, closeTime=0, 
> bytesSent=18, bytesRcvd=42, bytesSent0=18, bytesRcvd0=42, 
> sndSchedTime=1588605131981, lastSndTime=1588605131981, 
> lastRcvTime=1588605131981, readsPaused=false, 
> filterChain=FilterChain[filters=[GridNioCodecFilter 
> [parser=o.a.i.i.util.nio.GridDirectParser@fc19b0b, directMode=true], 
> GridConnectionBytesVerifyFilter], accepted=true, markedForClose=true]]]
> java.lang.NullPointerException
>   at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$1.onMessageSent(TcpCommunicationSpi.java:803)
>   at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$1.onMessageSent(TcpCommunicationSpi.java:472)
>   at 
> org.apache.ignite.internal.util.nio.GridNioServer.onMessageWritten(GridNioServer.java:1764)
>   at 
> org.apache.ignite.internal.util.nio.GridNioServer.access$1800(GridNioServer.java:99)
>   at 
> org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processWrite0(GridNioServer.java:1665)
>   at 
> org.apache.ignite.internal.util.nio.GridNioServer$DirectNioClientWorker.processWrite(GridNioServer.java:1365)
>   at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2437)
>   at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2201)
>   at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1842)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>   at java.lang.Thread.run(Thread.java:748)
> [2020-05-04 
> 18:12:12,993][ERROR][grid-nio-worker-tcp-comm-0-#543%snapshot.IgniteClusterSnapshotSelfTest2%][TestRecordingCommunicationSpi]
>

[jira] [Updated] (IGNITE-12941) .NET: Support .NET 5

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12941:
---
Fix Version/s: (was: 2.9)
   2.10

> .NET: Support .NET 5
> 
>
> Key: IGNITE-12941
> URL: https://issues.apache.org/jira/browse/IGNITE-12941
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, newbie
> Fix For: 2.10
>
>
> .NET 5 is in preview. Ignite.NET does not seem to work there. Tested on 
> Ubuntu:
> * Install .NET 5 SDK from Snap
> * Create new console app, add Apache.Ignite nuget package
> * Add Ignition.Start
> * dotnet run
> {code}
> Unhandled exception. Apache.Ignite.Core.Common.IgniteException: Failed to 
> load libjvm.so:
> [option=/usr/bin/java, 
> path=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so, 
> error=Unknown error]
> [option=/usr/bin/java, 
> path=/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so, 
> error=/snap/core18/current/lib/x86_64-linux-gnu/libm.so.6: version 
> `GLIBC_2.29' not found (required by 
> /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/amd64/server/libjvm.so)]
>at Apache.Ignite.Core.Impl.Unmanaged.Jni.JvmDll.Load(String 
> configJvmDllPath, ILogger log)
>at Apache.Ignite.Core.Ignition.Start(IgniteConfiguration cfg)
>at Apache.Ignite.Core.Ignition.Start()
>at dotnet5.Program.Main(String[] args) in 
> /home/pavel/w/tests/dotnet5/Program.cs:line 10
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13053) resetLostPartitions is not working if invoked from a node where a cache not started

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13053:
---
Fix Version/s: (was: 2.9)
   2.10

> resetLostPartitions is not working if invoked from a node where a cache not 
> started
> ---
>
> Key: IGNITE-13053
> URL: https://issues.apache.org/jira/browse/IGNITE-13053
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Scherbakov
>Priority: Major
> Fix For: 2.10
>
>
> Reproduced by CachePartitionLossWithRestartsTest and an attempt to reset lost 
> partitions using the node with index=nonAffIdx



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13102) IgniteCache#isClosed() returns false on server node even if the cache had been closed before.

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13102:
---
Fix Version/s: (was: 2.9)
   2.10

> IgniteCache#isClosed() returns false on server node even if the cache had 
> been closed before.
> -
>
> Key: IGNITE-13102
> URL: https://issues.apache.org/jira/browse/IGNITE-13102
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.8, 2.8.1
>Reporter: Sergey Antonov
>Priority: Major
> Fix For: 2.10
>
>
> IgniteCache#isClosed() still returns {{false}} even after 
> {{IgniteCache#close()}}. Only server nodes are affected by this problem.
> Simple reproducer:
> {code:java}
> @Test
> public void test() throws Exception {
>     IgniteEx node = startGrid(0);
> 
>     IgniteCache cache = node.getOrCreateCache(DEFAULT_CACHE_NAME);
> 
>     assertFalse(cache.isClosed());
> 
>     cache.close();
> 
>     // java.lang.AssertionError
>     assertTrue(cache.isClosed());
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13101) Metastore may leave uncompleted write futures during node stop

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13101:
---
Fix Version/s: (was: 2.9)
   2.10

> Metastore may leave uncompleted write futures during node stop
> --
>
> Key: IGNITE-13101
> URL: https://issues.apache.org/jira/browse/IGNITE-13101
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexey Goncharuk
>Priority: Major
> Fix For: 2.10
>
>
> I've got the following thread-dump (only relevant parts are retained) during 
> one of the teamcity runs:
> {code}
> "sys-#103862%baseline.IgniteStableBaselineBinObjFieldsQuerySelfTest0%" 
> #107048 prio=5 os_prio=0 tid=0x7fa2d8009800 nid=0x480d waiting on 
> condition [0x7fa1d1cdc000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
>   at 
> org.apache.ignite.internal.processors.metric.GridMetricManager.remove(GridMetricManager.java:411)
>   at 
> org.apache.ignite.internal.processors.cache.CacheGroupMetricsImpl.remove(CacheGroupMetricsImpl.java:497)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.cleanup(GridCacheProcessor.java:512)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCacheGroup(GridCacheProcessor.java:2901)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCacheGroup(GridCacheProcessor.java:2889)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCacheStopRequestOnExchangeDone(GridCacheProcessor.java:2781)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.onExchangeDone(GridCacheProcessor.java:2878)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onDone(GridDhtPartitionsExchangeFuture.java:2431)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:3832)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:3608)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processSingleMessage(GridDhtPartitionsExchangeFuture.java:3207)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.access$200(GridDhtPartitionsExchangeFuture.java:154)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2.apply(GridDhtPartitionsExchangeFuture.java:2994)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture$2.apply(GridDhtPartitionsExchangeFuture.java:2982)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onReceiveSingleMessage(GridDhtPartitionsExchangeFuture.java:2982)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.processSinglePartitionUpdate(GridCachePartitionExchangeManager.java:1989)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.preprocessSingleMessage(GridCachePartitionExchangeManager.java:524)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager.access$1100(GridCachePartitionExchangeManager.java:182)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$2.onMessage(GridCachePartitionExchangeManager.java:407)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$2.onMessage(GridCachePartitionExchangeManager.java:389)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:3715)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$MessageHandler.apply(GridCachePartitionExchangeManager.java:3694)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCache

[jira] [Updated] (IGNITE-13169) Remove Ignite bean name requirement for Spring Data Repository

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13169:
---
Fix Version/s: (was: 2.9)
   2.10

> Remove Ignite bean name requirement for Spring Data Repository
> --
>
> Key: IGNITE-13169
> URL: https://issues.apache.org/jira/browse/IGNITE-13169
> Project: Ignite
>  Issue Type: Improvement
>  Components: springdata
>Affects Versions: 2.8.1
>Reporter: Semyon Danilov
>Assignee: Semyon Danilov
>Priority: Major
> Fix For: 2.10
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> At the moment IgniteRepositoryFactoryBean requires an Ignite instance bean (or 
> an IgniteConfiguration instance bean) with a specific name.
> There are a couple of problems with that behavior:
> 1) We have a Spring Boot autoconfiguration module which creates a bean with a 
> different name, so Ignite Spring Data won't work out of the box
>  2) That is, actually, not the Spring way to do things: Spring prefers 
> injecting by class; qualifiers like name and order should be used only when 
> necessary
> I propose changing the behavior to "getting the bean by class and not by name"
>  
> This won't require any user code changes, because we only remove the 
> restriction on the Ignite instance bean name
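> 
> A minimal sketch of the proposed lookup, assuming the standard Spring 
> ApplicationContext API (the {{IgniteLookup}} helper itself is illustrative, 
> not the actual factory code):
> {code:java}
> import org.apache.ignite.Ignite;
> import org.springframework.context.ApplicationContext;
> 
> public class IgniteLookup {
>     /** Current behavior: tied to a hard-coded bean name. */
>     public static Ignite byName(ApplicationContext ctx, String beanName) {
>         return (Ignite)ctx.getBean(beanName);
>     }
> 
>     /** Proposed behavior: resolve by type, whatever the bean is called. */
>     public static Ignite byType(ApplicationContext ctx) {
>         return ctx.getBean(Ignite.class);
>     }
> }
> {code}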



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13183) Query timeout redesign

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13183:
---
Fix Version/s: (was: 2.9)
   2.10

> Query timeout redesign
> --
>
> Key: IGNITE-13183
> URL: https://issues.apache.org/jira/browse/IGNITE-13183
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.8.1
>Reporter: Taras Ledkov
>Assignee: Taras Ledkov
>Priority: Major
> Fix For: 2.10
>
>
> *Motivation:*
> Currently the query timeout is set up for each node separately via the node 
> configuration. This property isn't propagated to all nodes of the cluster, 
> and the user cannot change or disable the query timeout without restarting 
> all nodes of the cluster.
> *Proposed fix:*
> - Add the default query timeout property to 
> {{DistributedSqlConfiguration}}. Use the distributed metastore to store and 
> manage the property.
> - Deprecate the {{SqlConfiguration#defaultQueryTimeout}} property and use it 
> to set the initial value of the new property 
> {{DistributedSqlConfiguration#defaultQueryTimeout}}.
> - Add info about the explicit query timeout to {{GridH2QueryRequest}} (boolean 
> flag {{explicitTimeout=false}} by default). This is necessary so that the 
> default timeout may be used for queries from old nodes.
> - When the query timeout is set to zero by an old node ({{explicitTimeout=false}}), 
> we assume this is the default value and use the default timeout for these 
> queries (a sketch of this rule follows).
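> 
> A sketch of the rule from the last two points, assuming a small helper in the 
> query handling path (the method and its parameters are illustrative, not the 
> actual Ignite code):
> {code:java}
> /**
>  * @param reqTimeout Timeout carried by the query request, in milliseconds.
>  * @param explicitTimeout Whether the sender explicitly set the timeout
>  *      (always {@code false} for requests from old nodes).
>  * @param dfltTimeout Cluster-wide default from DistributedSqlConfiguration.
>  * @return Timeout to apply to the query.
>  */
> static long effectiveQueryTimeout(long reqTimeout, boolean explicitTimeout, long dfltTimeout) {
>     // Old nodes always send 0 without the flag; treat it as "not set".
>     if (!explicitTimeout && reqTimeout == 0)
>         return dfltTimeout;
> 
>     return reqTimeout;
> }
> {code}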



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13116) CPP: Can not compile using msvc 14.1

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13116:
---
Fix Version/s: (was: 2.9)
   2.10

> CPP: Can not compile using msvc 14.1
> 
>
> Key: IGNITE-13116
> URL: https://issues.apache.org/jira/browse/IGNITE-13116
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.8.1
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are linking errors when trying to build Ignite C++ with msvc 15.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13174) C++: Add Windows support to CMake build system

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13174:
---
Fix Version/s: (was: 2.9)

> C++: Add Windows support to CMake build system
> --
>
> Key: IGNITE-13174
> URL: https://issues.apache.org/jira/browse/IGNITE-13174
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.8.1
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ticket IGNITE-13078 adds CMake build system support, but only for Linux. We 
> need to make sure it works on Windows and create a TC job for it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12509) CACHE_REBALANCE_STOPPED event raises for wrong caches in case of specified RebalanceDelay

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12509:
---
Fix Version/s: (was: 2.9)
   2.10

> CACHE_REBALANCE_STOPPED event raises for wrong caches in case of specified 
> RebalanceDelay
> -
>
> Key: IGNITE-12509
> URL: https://issues.apache.org/jira/browse/IGNITE-12509
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Rakov
>Assignee: Yaroslav Molochkov
>Priority: Major
>  Labels: newbie
> Fix For: 2.10
>
> Attachments: RebalanceDelayTest.java
>
>
> Steps to reproduce:
> 1. Start in-memory cluster with 2 server nodes
> 2. Start 3 caches with different rebalance delays (e.g. 5, 10 and 15 seconds) 
> and upload some data
> 3. Start localListener for EVT_CACHE_REBALANCE_STOPPED event on one of the 
> nodes.
> 4. Start one more server node.
> 5. Wait for 5 seconds, until the rebalance delay is reached.
> 6. The EVT_CACHE_REBALANCE_STOPPED event is received 3 times (once for each 
> cache), but in fact only 1 cache was rebalanced. The same happens for the 
> rest of the caches.
> As a result, when rebalancing finishes we get the event [CACHE_COUNT] times 
> for each cache instead of once.
> Reproducer attached.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13174) C++: Add Windows support to CMake build system

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13174:
---
Fix Version/s: 2.10

> C++: Add Windows support to CMake build system
> --
>
> Key: IGNITE-13174
> URL: https://issues.apache.org/jira/browse/IGNITE-13174
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.8.1
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ticket IGNITE-13078 adds CMake build system support, but only for Linux. We 
> need to make sure it works on Windows and create a TC job for it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12822) .NET: Build fails on Xamarin

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12822:
---
Fix Version/s: (was: 2.9)
   2.10

> .NET: Build fails on Xamarin
> 
>
> Key: IGNITE-12822
> URL: https://issues.apache.org/jira/browse/IGNITE-12822
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.8
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
> Fix For: 2.10
>
>
> * Create new Xamarin Forms app in Visual Studio
> * Add reference to Apache.Ignite NuGet package
> * Try to rebuild all:
> {code}
> C:\Program Files (x86)\Microsoft Visual 
> Studio\2019\Community\MSBuild\Xamarin\Android\Xamarin.Android.Common.targets(1697,2):
>  error XA2002: Can not resolve reference: `System.Configuration`, referenced 
> by `Apache.Ignite.Core`. Please add a NuGet package or assembly reference for 
> `System.Configuration`, or remove the reference to `Apache.Ignite.Core`.
> {code}
> Xamarin does not include System.Configuration assembly.
> The workaround is to manually add a reference to System.Configuration from 
> anywhere (it is not used at runtime, we just need to satisfy the build):
> {code}
>   <ItemGroup>
>     <Reference Include="System.Configuration">
>       <HintPath>..\..\bin\System.Configuration.dll</HintPath>
>     </Reference>
>   </ItemGroup>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-7369) .NET: Thin client: Transactions

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-7369:
--
Fix Version/s: (was: 2.9)
   2.10

> .NET: Thin client: Transactions
> ---
>
> Key: IGNITE-7369
> URL: https://issues.apache.org/jira/browse/IGNITE-7369
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 2.4
>Reporter: Pavel Tupitsyn
>Assignee: Sergey Stronchinskiy
>Priority: Major
>  Labels: .NET, iep-34
> Fix For: 2.10
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Implement transactions in the thin client protocol and the .NET thin client.
> The main issue: Ignite transactions are tied to a specific thread.
> See how JDBC works around this by starting a dedicated thread.
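> 
> A minimal sketch of the dedicated-thread idea in Java (the {{TxWorker}} 
> wrapper is illustrative only; it shows pinning all transactional work to one 
> thread so the thread-bound transaction is always touched from the same 
> thread):
> {code:java}
> import java.util.concurrent.Callable;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> 
> public class TxWorker implements AutoCloseable {
>     /** Single thread that owns the thread-bound transaction. */
>     private final ExecutorService txThread = Executors.newSingleThreadExecutor();
> 
>     /** Runs a transactional operation on the dedicated thread and waits for it. */
>     public <T> T run(Callable<T> op) throws Exception {
>         return txThread.submit(op).get();
>     }
> 
>     @Override public void close() {
>         txThread.shutdown();
>     }
> }
> {code}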



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13148) Thin Client Continuous Query

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13148:
---
Fix Version/s: (was: 2.9)
   2.10

> Thin Client Continuous Query
> 
>
> Key: IGNITE-13148
> URL: https://issues.apache.org/jira/browse/IGNITE-13148
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms, thin client
>Affects Versions: 2.8
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET
> Fix For: 2.10
>
>   Original Estimate: 96h
>  Time Spent: 51h 20m
>  Remaining Estimate: 9h 10m
>
> Add Continuous Queries to thin client protocol.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13107) ODBC: Memory leak in the tests

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13107:
---
Fix Version/s: (was: 2.9)
   2.10

> ODBC: Memory leak in the tests
> --
>
> Key: IGNITE-13107
> URL: https://issues.apache.org/jira/browse/IGNITE-13107
> Project: Ignite
>  Issue Type: Improvement
>  Components: odbc
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The memory leak, which is reproducible in the TC Windows debug configuration, 
> occurs when the odbc-test unit tests are executed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13282) Fix TcpDiscoveryCoordinatorFailureTest.testClusterFailedNewCoordinatorInitialized()

2020-07-21 Thread Vladimir Steshin (Jira)
Vladimir Steshin created IGNITE-13282:
-

 Summary: Fix 
TcpDiscoveryCoordinatorFailureTest.testClusterFailedNewCoordinatorInitialized()
 Key: IGNITE-13282
 URL: https://issues.apache.org/jira/browse/IGNITE-13282
 Project: Ignite
  Issue Type: Bug
Reporter: Vladimir Steshin
Assignee: Vladimir Steshin






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-7369) .NET: Thin client: Transactions

2020-07-21 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162233#comment-17162233
 ] 

Pavel Tupitsyn commented on IGNITE-7369:


[~GuruStron] please see my comments on GitHub

> .NET: Thin client: Transactions
> ---
>
> Key: IGNITE-7369
> URL: https://issues.apache.org/jira/browse/IGNITE-7369
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 2.4
>Reporter: Pavel Tupitsyn
>Assignee: Sergey Stronchinskiy
>Priority: Major
>  Labels: .NET, iep-34
> Fix For: 2.9
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Implement transactions in the thin client protocol and the .NET thin client.
> The main issue: Ignite transactions are tied to a specific thread.
> See how JDBC works around this by starting a dedicated thread.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13281) Test failed: GridIoManagerFileTransmissionSelfTest#testChunkHandlerInitSizeFail

2020-07-21 Thread Stanilovsky Evgeny (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162212#comment-17162212
 ] 

Stanilovsky Evgeny commented on IGNITE-13281:
-

[~mmuzaf] may be you have ideas ?

> Test failed: 
> GridIoManagerFileTransmissionSelfTest#testChunkHandlerInitSizeFail
> ---
>
> Key: IGNITE-13281
> URL: https://issues.apache.org/jira/browse/IGNITE-13281
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.8.1
>Reporter: Stanilovsky Evgeny
>Priority: Major
> Attachments: Ignite_Tests_2.4_Java_8_9_10_11_Basic_1_24053.log.zip
>
>
> I found this problem on TC (current master)
> {code:java}
> [14:33:27]W:   [org.apache.ignite:ignite-core] class 
> org.apache.ignite.IgniteException: Test exception. Initialization failed
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.internal.managers.communication.GridIoManagerFileTransmissionSelfTest$16.chunkHandler(GridIoManagerFileTransmissionSelfTest.java:777)
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.internal.managers.communication.GridIoManager.createReceiver(GridIoManager.java:3062)
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.internal.managers.communication.GridIoManager.receiveFromChannel(GridIoManager.java:2949)
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processOpenedChannel(GridIoManager.java:2892)
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$4800(GridIoManager.java:243)
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:1234)
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13281) Test failed: GridIoManagerFileTransmissionSelfTest#testChunkHandlerInitSizeFail

2020-07-21 Thread Stanilovsky Evgeny (Jira)
Stanilovsky Evgeny created IGNITE-13281:
---

 Summary: Test failed: 
GridIoManagerFileTransmissionSelfTest#testChunkHandlerInitSizeFail
 Key: IGNITE-13281
 URL: https://issues.apache.org/jira/browse/IGNITE-13281
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.8.1
Reporter: Stanilovsky Evgeny
 Attachments: Ignite_Tests_2.4_Java_8_9_10_11_Basic_1_24053.log.zip

I found this problem on TC (current master)

{code:java}
[14:33:27]W: [org.apache.ignite:ignite-core] class 
org.apache.ignite.IgniteException: Test exception. Initialization failed
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManagerFileTransmissionSelfTest$16.chunkHandler(GridIoManagerFileTransmissionSelfTest.java:777)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.createReceiver(GridIoManager.java:3062)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.receiveFromChannel(GridIoManager.java:2949)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.processOpenedChannel(GridIoManager.java:2892)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4800(GridIoManager.java:243)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:1234)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13281) Test failed: GridIoManagerFileTransmissionSelfTest#testChunkHandlerInitSizeFail

2020-07-21 Thread Stanilovsky Evgeny (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stanilovsky Evgeny updated IGNITE-13281:

Description: 
I found this problem on TC (current master)

{code:java}
[14:33:27]W: [org.apache.ignite:ignite-core] class 
org.apache.ignite.IgniteException: Test exception. Initialization failed
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManagerFileTransmissionSelfTest$16.chunkHandler(GridIoManagerFileTransmissionSelfTest.java:777)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.createReceiver(GridIoManager.java:3062)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.receiveFromChannel(GridIoManager.java:2949)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.processOpenedChannel(GridIoManager.java:2892)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4800(GridIoManager.java:243)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:1234)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
{code}

https://ci.ignite.apache.org/viewLog.html?buildId=5481076


  was:
I found this problem on TC (current master)

{code:java}
[14:33:27]W: [org.apache.ignite:ignite-core] class 
org.apache.ignite.IgniteException: Test exception. Initialization failed
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManagerFileTransmissionSelfTest$16.chunkHandler(GridIoManagerFileTransmissionSelfTest.java:777)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.createReceiver(GridIoManager.java:3062)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.receiveFromChannel(GridIoManager.java:2949)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.processOpenedChannel(GridIoManager.java:2892)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4800(GridIoManager.java:243)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
org.apache.ignite.internal.managers.communication.GridIoManager$7.run(GridIoManager.java:1234)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[14:33:27]W: [org.apache.ignite:ignite-core]at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
{code}



> Test failed: 
> GridIoManagerFileTransmissionSelfTest#testChunkHandlerInitSizeFail
> ---
>
> Key: IGNITE-13281
> URL: https://issues.apache.org/jira/browse/IGNITE-13281
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.8.1
>Reporter: Stanilovsky Evgeny
>Priority: Major
> Attachments: Ignite_Tests_2.4_Java_8_9_10_11_Basic_1_24053.log.zip
>
>
> I found this problem on TC (current master)
> {code:java}
> [14:33:27]W:   [org.apache.ignite:ignite-core] class 
> org.apache.ignite.IgniteException: Test exception. Initialization failed
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.internal.managers.communication.GridIoManagerFileTransmissionSelfTest$16.chunkHandler(GridIoManagerFileTransmissionSelfTest.java:777)
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.internal.managers.communication.GridIoManager.createReceiver(GridIoManager.java:3062)
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.internal.managers.communication.GridIoManager.receiveFromChannel(GridIoManager.java:2949)
> [14:33:27]W:   [org.apache.ignite:ignite-core]at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processOpenedChannel(GridIoManager.java:2892)
> [14:33:27]W:   [org.apache.ignite:igni

[jira] [Commented] (IGNITE-5848) Ignite should support Hibernate 5.2.X

2020-07-21 Thread Scott Feldstein (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-5848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162143#comment-17162143
 ] 

Scott Feldstein commented on IGNITE-5848:
-

When IGNITE-9893 was implemented, it was decided that we would bypass Hibernate 
5.2, so I don't think this is needed.

> Ignite should support Hibernate 5.2.X
> -
>
> Key: IGNITE-5848
> URL: https://issues.apache.org/jira/browse/IGNITE-5848
> Project: Ignite
>  Issue Type: Sub-task
>  Components: cache, hibernate
>Affects Versions: 2.0, 2.1
>Reporter: Nikolay Tikhonov
>Priority: Major
>  Labels: community, java8
> Fix For: 2.10
>
>
> Currently Ignite supports Hibernate 5.1.X
> With Hibernate version of 5.2.X got the following exception:
> {code:java}
> Handler dispatch failed; nested exception is java.lang.AbstractMethodError: 
> org.apache.ignite.cache.hibernate.HibernateEntityRegion$AccessStrategy.putFromLoad(Lorg/hibernate/engine/spi/SharedSessionContractImplementor;Ljava/lang/Object;Ljava/lang/Object;JLjava/lang/Object;Z)Z
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13222) .NET: Consolidate tests - get rid of Tests.DotNetCore folder

2020-07-21 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162107#comment-17162107
 ] 

Ignite TC Bot commented on IGNITE-13222:


{panel:title=Branch: [pull/8064/head] Base: [master] : Possible Blockers 
(9)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}PDS (Indexing){color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=5481811]]
* IgnitePdsWithIndexingCoreTestSuite: IgniteWalRecoveryTest.testRandomCrash - 
Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}PDS 4{color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=5481816]]

{color:#d04437}Cache 2{color} [[tests 0 TIMEOUT , Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=5481797]]

{color:#d04437}MVCC PDS 4{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=5481843]]
* IgnitePdsMvccTestSuite4: 
IgniteClusterActivateDeactivateTestWithPersistenceAndMemoryReuse.testReActivateInReadOnlySimple_5_Servers_4_Clients_FromClient
 - Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}MVCC Cache 7{color} [[tests 0 TIMEOUT , Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=5481837]]

{color:#d04437}Basic 1{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=5481776]]
* IgniteBasicTestSuite: 
BPlusTreeReuseSelfTest.testSizeForRandomPutRmvMultithreadedAsync_16 - Test has 
low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Continuous Query 4{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=5481807]]
* IgniteCacheQuerySelfTestSuite6: 
ContinuousQueryMarshallerTest.testRemoteFilterFactoryServer - Test has low fail 
rate in base branch 2,6% and is not flaky

{color:#d04437}Cache (Failover) 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=5481789]]
* IgniteCacheFailoverTestSuite2: 
GridCachePartitionedFailoverSelfTest.testOptimisticRepeatableReadTxTopologyChange
 - Test has low fail rate in base branch 0,0% and is not flaky

{color:#d04437}Web Sessions{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=5481767]]
* IgniteWebSessionSelfTestSuite: WebSessionSelfTest.testRestarts - Test has low 
fail rate in base branch 0,0% and is not flaky

{panel}
{panel:title=Branch: [pull/8064/head] Base: [master] : New Tests 
(1274)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Platform .NET (Long Running){color} [[tests 
42|https://ci.ignite.apache.org/viewLog.html?buildId=5481821]]
* {color:#013220}exe: 
ClientReconnectCompatibilityTest.TestReconnectToOldNodeDisablesPartitionAwareness
 - PASSED{color}
* {color:#013220}exe: 
0",0).TestPartitionAwarenessDisablesAutomaticallyOnVersionsOlderThan140 - 
PASSED{color}
* {color:#013220}exe: 
0",0).TestCreateCacheWithFullConfigWorksOnAllVersions - PASSED{color}
* {color:#013220}exe: 
0",0).TestComputeOperationsThrowCorrectExceptionWhenFeatureIsMissing - 
PASSED{color}
* {color:#013220}exe: 
0",0).TestClusterOperationsThrowCorrectExceptionOnVersionsOlderThan150 - 
PASSED{color}
* {color:#013220}exe: 
0",0).TestClusterGroupOperationsThrowCorrectExceptionWhenFeatureIsMissing 
- PASSED{color}
* {color:#013220}exe: 0",0).TestCacheOperationsAreSupportedOnAllVersions - 
PASSED{color}
* {color:#013220}exe: 
6",2).TestClusterOperationsThrowCorrectExceptionOnVersionsOlderThan150 - 
PASSED{color}
* {color:#013220}exe: 
6",2).TestClusterGroupOperationsThrowCorrectExceptionWhenFeatureIsMissing 
- PASSED{color}
* {color:#013220}exe: 6",2).TestCacheOperationsAreSupportedOnAllVersions - 
PASSED{color}
* {color:#013220}exe: 
0",1).TestWithExpiryPolicyThrowCorrectExceptionOnVersionsOlderThan150 - 
PASSED{color}
... and 31 new tests

{color:#8b}Platform .NET{color} [[tests 
31|https://ci.ignite.apache.org/viewLog.html?buildId=5481817]]
* {color:#013220}exe: MarshallerTest.TestExplicitMarshaller - PASSED{color}
* {color:#013220}exe: 
ClientProtocolCompatibilityTest.TestCacheOperationsAreSupportedOnAllProtocols(1)
 - PASSED{color}
* {color:#013220}exe: 
ClientProtocolCompatibilityTest.TestCacheOperationsAreSupportedOnAllProtocols(0)
 - PASSED{color}
* {color:#013220}exe: 
ClientProtocolCompatibilityTest.TestClientOlderThanServerConnectsOnClientVersion(1)
 - PASSED{color}
* {color:#013220}exe: 
ClientProtocolCompatibilityTest.TestClientOlderThanServerConnectsOnClientVersion(0)
 - PASSED{color}
* {color:#013220}exe: 
ClientProtocolCompatibilityTest.TestClientNewerThanServerReconnectsOnServerVersion
 - PASSED{color}
* {color:#013220}exe: 
ClientProtocolCompatibilityTest.TestCacheOperationsAreSupportedOnAllProtocols(6)
 - PASSED{color}
* {color:#013220}exe: 
ClientProtocolCompatibilityTest.TestCacheOperationsAreSupportedOnAllProtocols(5)
 - PASSED{color}
* {color:#013220}exe: 
ClientProtocolCompatibilityTest.TestCacheOperationsAreSupportedOnAllProtocols(4)
 - PASSED{

[jira] [Updated] (IGNITE-12996) Remote filter of IgniteEvents has to run inside the Ignite Sandbox.

2020-07-21 Thread Denis Garus (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Garus updated IGNITE-12996:
-
Release Note: A remote filter of IgniteEvents will run on a remote node 
inside the Ignite Sandbox if it is turned on.

> Remote filter of IgniteEvents has to run inside the Ignite Sandbox.
> ---
>
> Key: IGNITE-12996
> URL: https://issues.apache.org/jira/browse/IGNITE-12996
> Project: Ignite
>  Issue Type: Task
>  Components: security
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
> Fix For: 2.9
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Remote filter of IgniteEvents has to run on a remote node inside the Ignite 
> Sandbox if it is turned on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13261) Using transactions or scan queries inside the ignite sandbox can throw an AccessControlException

2020-07-21 Thread Denis Garus (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162098#comment-17162098
 ] 

Denis Garus commented on IGNITE-13261:
--

[~alex_pl] thank you!

> Using transactions or scan queries inside the ignite sandbox can throw an 
> AccessControlException
> 
>
> Key: IGNITE-13261
> URL: https://issues.apache.org/jira/browse/IGNITE-13261
> Project: Ignite
>  Issue Type: Bug
>  Components: security
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
> Fix For: 2.9
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Any subject should be able to use transactions or scan queries inside the 
> ignite sandbox without additional permissions.
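For reference, a minimal sketch (cache name, key/value types and the filter are assumptions) of the two kinds of operations the ticket refers to:

{code:java}
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

public class SandboxedOperationsSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<Integer, String>("myCache")
                .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(ccfg);

            // A transactional update; should not require extra sandbox permissions.
            try (Transaction tx = ignite.transactions().txStart()) {
                cache.put(1, "one");
                tx.commit();
            }

            // A scan query with a remote filter; also expected to work inside the sandbox.
            try (QueryCursor<Cache.Entry<Integer, String>> cur =
                cache.query(new ScanQuery<Integer, String>((k, v) -> k > 0))) {
                for (Cache.Entry<Integer, String> e : cur)
                    System.out.println(e.getKey() + " -> " + e.getValue());
            }
        }
    }
}
{code}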



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12996) Remote filter of IgniteEvents has to run inside the Ignite Sandbox.

2020-07-21 Thread Denis Garus (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162099#comment-17162099
 ] 

Denis Garus commented on IGNITE-12996:
--

[~alex_pl] thank you!

> Remote filter of IgniteEvents has to run inside the Ignite Sandbox.
> ---
>
> Key: IGNITE-12996
> URL: https://issues.apache.org/jira/browse/IGNITE-12996
> Project: Ignite
>  Issue Type: Task
>  Components: security
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
> Fix For: 2.9
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Remote filter of IgniteEvents has to run on a remote node inside the Ignite 
> Sandbox if it is turned on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13016) Fix backward checking of failed node.

2020-07-21 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162076#comment-17162076
 ] 

Aleksey Plekhanov commented on IGNITE-13016:


Cherry-picked to 2.9

> Fix backward checking of failed node.
> -
>
> Key: IGNITE-13016
> URL: https://issues.apache.org/jira/browse/IGNITE-13016
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Backward node connection checking looks weird. What we might improve:
> 1) Address checking could be done in parallel, not sequentially (see the sketch after this description).
> {code:java}
> for (InetSocketAddress addr : nodeAddrs) {
> // Connection refused may be got if node doesn't listen
> // (or blocked by firewall, but anyway assume it is dead).
> if (!isConnectionRefused(addr)) {
> liveAddr = addr;
> break;
> }
> }
> {code}
> 2) Any IO exception should be considered a failed connection, not only 
> connection-refused:
> {code:java}
> catch (ConnectException e) {
> return true;
> }
> catch (IOException e) {
> return false;
> }
> {code}
> 3) The timeout on connection checking should not be constant or hardcoded:
> {code:java}
> sock.connect(addr, 100);
> {code}
> 4) The decision to check the connection should rely on the configured exchange timeout, 
> not on the ping interval
> {code:java}
> // We got message from previous in less than double connection check interval.
> boolean ok = rcvdTime + U.millisToNanos(connCheckInterval) * 2 >= now;
> {code}
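Not the actual patch - just a minimal sketch of the parallel-check idea from item 1, assuming a caller-provided thread pool and an {{isConnectionRefused(addr)}} helper shaped like the one quoted above:

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.stream.Collectors;

public class ParallelAddressCheckSketch {
    /** Returns the first address that accepts a connection, or null if every check fails. */
    static InetSocketAddress firstLiveAddr(Collection<InetSocketAddress> nodeAddrs,
        ExecutorService pool) throws InterruptedException {
        List<Callable<InetSocketAddress>> tasks = nodeAddrs.stream()
            .map(addr -> (Callable<InetSocketAddress>)() -> {
                if (isConnectionRefused(addr))
                    throw new IOException("Connection refused: " + addr);

                return addr;
            })
            .collect(Collectors.toList());

        try {
            // invokeAny() returns as soon as one task succeeds and cancels the rest.
            return pool.invokeAny(tasks);
        }
        catch (ExecutionException ignored) {
            return null; // All addresses failed the check.
        }
    }

    /** Hypothetical helper mirroring the check quoted in the ticket: any IO error counts as "refused". */
    static boolean isConnectionRefused(InetSocketAddress addr) {
        try (Socket sock = new Socket()) {
            sock.connect(addr, 100);

            return false;
        }
        catch (IOException e) {
            return true;
        }
    }
}
{code}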



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13270) Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5

2020-07-21 Thread Stanilovsky Evgeny (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162070#comment-17162070
 ] 

Stanilovsky Evgeny commented on IGNITE-13270:
-

OK, I'll try to reproduce it if you give me the table and index creation queries for the A, 
B and C tables, please.
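For reference, a purely hypothetical sketch of the kind of DDL being asked for, run through the JDBC thin driver; the column names, types and secondary indexes are assumptions, not information provided by the reporter:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTablesSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
            Statement stmt = conn.createStatement()) {
            // All three hypothetical tables share AFFINITYKEY as the affinity column.
            stmt.executeUpdate("CREATE TABLE A (A_ID VARCHAR, AFFINITYKEY VARCHAR, " +
                "PRIMARY KEY (A_ID, AFFINITYKEY)) WITH \"affinity_key=AFFINITYKEY\"");

            stmt.executeUpdate("CREATE TABLE B (B_ID VARCHAR, A_ID VARCHAR, AFFINITYKEY VARCHAR, " +
                "PRIMARY KEY (B_ID, AFFINITYKEY)) WITH \"affinity_key=AFFINITYKEY\"");

            stmt.executeUpdate("CREATE TABLE C (C_ID VARCHAR, B_ID VARCHAR, AFFINITYKEY VARCHAR, " +
                "PRIMARY KEY (C_ID, AFFINITYKEY)) WITH \"affinity_key=AFFINITYKEY\"");

            // Hypothetical secondary indexes on the join columns.
            stmt.executeUpdate("CREATE INDEX B_A_ID_IDX ON B (A_ID, AFFINITYKEY)");
            stmt.executeUpdate("CREATE INDEX C_B_ID_IDX ON C (B_ID, AFFINITYKEY)");
        }
    }
}
{code}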

> Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5
> --
>
> Key: IGNITE-13270
> URL: https://issues.apache.org/jira/browse/IGNITE-13270
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8, 2.8.1
>Reporter: Tanmay Ambre
>Priority: Major
>
> After upgrading to Ignite 2.8.0 from 2.7.5 the following query is causing 
> massive CPU burn. We have an 8-node server cluster for Ignite (with 32 CPUs on 
> each node). With 2.8.0, 2 nodes are maxed out on CPU; with 2.8.1 all nodes 
> are above 80%. On 2.7.5 the same query was performing very nicely. 
> The query is a join of 3 tables sharing the same affinity key. However, we don't 
> know the affinity key when we are querying, and therefore this results in a merge 
> scan. 
> Query:
> select A.A_ID, A.AFFINITYKEY, B.B_ID from A, B, C
> where
> A.A_ID = B.A_ID AND
> B.B_ID = C.B_ID AND
> A.AFFINITYKEY = B.AFFINITYKEY AND
> B.AFFINITYKEY = C.AFFINITYKEY AND
> B.B_ID = ? AND
> C.C_ID = ?
>  
> Performance:
> 2.7.5 ~ 2 to 5 ms
> 2.8.0 ~ 6 to 12 seconds
> 2.8.1 ~ within 3 seconds
>  
> Volume (we have an EDA-based system where messages come in bursts. One burst 
> has approx. 150K events - each event corresponds to one query). 
>  
> Throughput of our Java-based processor (that fires this query):
> 2.7.5 ~ 750 transactions per second
> 2.8.0 ~ 3 transactions per second
> 2.8.1 ~ 6 transactions per second
>  
> CPU burn
> 2.7.5 ~ 10 to 15% on each node
> 2.8.0 ~ 100% on 2 nodes other nodes are idling
> 2.8.1 ~ 80 to 90% on each node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13261) Using transactions or scan queries inside the ignite sandbox can throw an AccessControlException

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-13261:
---
Fix Version/s: 2.9

> Using transactions or scan queries inside the ignite sandbox can throw an 
> AccessControlException
> 
>
> Key: IGNITE-13261
> URL: https://issues.apache.org/jira/browse/IGNITE-13261
> Project: Ignite
>  Issue Type: Bug
>  Components: security
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
> Fix For: 2.9
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Any subject should be able to use transactions or scan queries inside the 
> ignite sandbox without additional permissions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13268) Add indexes manipulation commands to control.sh

2020-07-21 Thread Sergey Chugunov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162037#comment-17162037
 ] 

Sergey Chugunov commented on IGNITE-13268:
--

[~vmalinovskiy], sure, will take a look.

> Add indexes manipulation commands to control.sh
> ---
>
> Key: IGNITE-13268
> URL: https://issues.apache.org/jira/browse/IGNITE-13268
> Project: Ignite
>  Issue Type: Improvement
>  Components: control.sh
>Reporter: Vladimir Malinovskiy
>Assignee: Vladimir Malinovskiy
>Priority: Major
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> These subcommands are to be added to the *--cache* command:
> h2. --indexes_list
> Gets a list of index info. Although filters can be specified via command 
> arguments, lines of output should still be greppable.
> h4. Argument:
>  * *--node-id* is a UUID of node for which to perform the operation. If node 
> isn’t specified explicitly it will be chosen by grid.
>  * *--group-name* is a regular expression corresponding to the group name.
>  * *--cache-name* is a regular expression corresponding to the name of the 
> cache.
>  * *--index-name* is a regular expression that matches the name of the index.
> h2. --indexes-rebuild-status
> Gets a list of indexes that are currently being rebuilt.
> h4. Argument:
>  * *--node-id* is a UUID of node for which to perform the operation. If node 
> isn’t specified explicitly indexes rebuild info will be collected from all 
> nodes.
> h2. --indexes_force_rebuild
> Triggers a forced rebuild of indexes. The following information should be reported in the 
> output:
>  * List of caches that weren’t found.
>  * List of caches that have index rebuild in progress. Indexes rebuild 
> shouldn’t be restarted for these caches.
>  * List of caches for which index rebuild was triggered.
> Indexes rebuild should be performed asynchronously.
> h4. Argument:
>  * *--node-id* is a UUID of node for which to perform the operation. 
> Mandatory parameter.
>  * *--group-names* is a comma-separated list of group names for which to 
> rebuild indexes. Either this option or --cache-names must be specified.
>  * *--cache-names* is a comma-separated list of cache names for which to 
> rebuild indexes. Either this option or --group-names must be specified.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13271) Add new type of WAL records to track/debug atomic updates on backup nodes

2020-07-21 Thread Vyacheslav Koptilin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162020#comment-17162020
 ] 

Vyacheslav Koptilin commented on IGNITE-13271:
--

Hello [~ivan.glukos],

Could you please take a look?

> Add new type of WAL records to track/debug atomic updates on backup nodes
> -
>
> Key: IGNITE-13271
> URL: https://issues.apache.org/jira/browse/IGNITE-13271
> Project: Ignite
>  Issue Type: Task
>Reporter: Vyacheslav Koptilin
>Assignee: Vyacheslav Koptilin
>Priority: Major
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, updates of the same key, for atomic caches, can arrive on backup 
> nodes in a different order and only the most recent one is applied.
> For instance:
> Primary node: update1 -> update2 -> update3
> Backup node: update1 -> update3 -> update2 (which is skipped)
> It seems it would be useful to track/log these updates for testing purposes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13279) Ignition.start failing with error java.lang.IllegalStateException: Failed to parse version: -1595322995-

2020-07-21 Thread Keshava Munegowda (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Keshava Munegowda updated IGNITE-13279:
---
Priority: Critical  (was: Major)

> Ignition.start failing with error java.lang.IllegalStateException: Failed to 
> parse version: -1595322995-
> 
>
> Key: IGNITE-13279
> URL: https://issues.apache.org/jira/browse/IGNITE-13279
> Project: Ignite
>  Issue Type: Bug
>  Components: examples, general
>Affects Versions: 2.8.1
>Reporter: Keshava Munegowda
>Priority: Critical
>
> I am using Apache Ignite: apache-ignite-2.8.1-bin
> I started the Apache Ignite node using: ./bin/ignite.sh 
> ./examples/config/example-ignite.xml
> The node is successfully started with this message:
> ```
> [root@mdw apache-ignite-2.8.1-bin]# ./bin/ignite.sh 
> ./examples/config/example-ignite.xml
> [02:19:43] __ 
> [02:19:43] / _/ ___/ |/ / _/_ __/ __/
> [02:19:43] _/ // (7 7 // / / / / _/
> [02:19:43] /___/\___/_/|_/___/ /_/ /___/
> [02:19:43]
> [02:19:43] ver. 2.8.1#20200521-sha1:86422096
> [02:19:43] 2020 Copyright(C) Apache Software Foundation
> [02:19:43]
> [02:19:43] Ignite documentation: http://ignite.apache.org
> [02:19:43]
> [02:19:43] Quiet mode.
> [02:19:43] ^-- Logging to file 
> '/data/kmg/apache-ignite-2.8.1-bin/work/log/ignite-4135cf96.0.log'
> [02:19:43] ^-- Logging by 'JavaLogger [quiet=true, config=null]'
> [02:19:43] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or 
> "-v" to ignite.\{sh|bat}
> [02:19:43]
> [02:19:43] OS: Linux 3.10.0-1127.el7.x86_64 amd64
> [02:19:43] VM information: OpenJDK Runtime Environment 1.8.0_252-b09 Oracle 
> Corporation OpenJDK 64-Bit Server VM 25.252-b09
> [02:19:43] Please set system property '-Djava.net.preferIPv4Stack=true' to 
> avoid possible problems in mixed environments.
> [02:19:43] Configured plugins:
> [02:19:43] ^-- None
> [02:19:43]
> [02:19:43] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler 
> [tryStop=false, timeout=0, super=AbstractFailureHandler 
> [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
> SYSTEM_CRITICAL_OPERATION_TIMEOUT
> [02:19:46] Message queue limit is set to 0 which may lead to potential OOMEs 
> when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to 
> message queues growth on sender and receiver sides.
> [02:19:46] Security status [authentication=off, tls/ssl=off]
> [02:19:48] Performance suggestions for grid (fix if possible)
> [02:19:48] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
> [02:19:48] ^-- Disable grid events (remove 'includeEventTypes' from 
> configuration)
> [02:19:48] ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
> [02:19:48] ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]' to 
> JVM options)
> [02:19:48] ^-- Set max direct memory size if getting 'OOME: Direct buffer 
> memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
> [02:19:48] ^-- Disable processing of calls to System.gc() (add 
> '-XX:+DisableExplicitGC' to JVM options)
> [02:19:48] ^-- Speed up flushing of dirty pages by OS (alter 
> vm.dirty_expire_centisecs parameter by setting to 500)
> [02:19:48] Refer to this page for more performance suggestions: 
> https://apacheignite.readme.io/docs/jvm-and-system-tuning
> [02:19:48]
> [02:19:48] To start Console Management & Monitoring run 
> ignitevisorcmd.\{sh|bat}
> [02:19:48] Data Regions Configured:
> [02:19:48] ^-- default [initSize=256.0 MiB, maxSize=75.5 GiB, 
> persistence=false, lazyMemoryAllocation=true]
> [02:19:48]
> [02:19:48] Ignite node started OK (id=4135cf96)
> [02:19:48] Topology snapshot [ver=1, locNode=4135cf96, servers=1, clients=0, 
> state=ACTIVE, CPUs=32, offheap=76.0GB, heap=27.0GB]
> [02:19:48] ^-- Baseline [id=0, size=1, online=1, offline=0]
> ```
> Now, I have a benchmarking application, which starts Apache Ignite using 
> the Java API:
> Ignition.start("examples/config/example-ignite.xml");
> This method is failing with below error log:
> ```
> 0 [main] DEBUG org.springframework.core.env.StandardEnvironment - Adding 
> PropertySource 'systemProperties' with lowest search precedence
> 2 [main] DEBUG org.springframework.core.env.StandardEnvironment - Adding 
> PropertySource 'systemEnvironment' with lowest search precedence
> 3 [main] DEBUG org.springframework.core.env.StandardEnvironment - Initialized 
> StandardEnvironment with PropertySources [MapPropertySource@1928301845 
> {name='systemProperties', properties={java.runtime.name=OpenJDK Runtime 
> Environment, 
> sun.boot.library.path=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre/lib/amd64,
>  java.vm.version=25.252-b09, java.vm.vendor=Oracle Corporation, 
> java.vendor.url=http

[jira] [Commented] (IGNITE-13270) Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5

2020-07-21 Thread Tanmay Ambre (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162018#comment-17162018
 ] 

Tanmay Ambre commented on IGNITE-13270:
---

No, our caches are transient. We don't use persistence.

To give you some background on the tests:
 # we do a complete wipe of the data in Ignite (as if it were a new install)
 # activate the cluster
 # Recreate our caches and indexes
 # Load data
 # and then test

We followed the same process for testing all 3 versions of Ignite. 

The problem is that the same query with the same plan works nicely on 2.7.5 but not on 2.8.0 
and 2.8.1 - this is the concern. The CPU burn is an order of magnitude higher in 
2.8.0 and 2.8.1.

 

> Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5
> --
>
> Key: IGNITE-13270
> URL: https://issues.apache.org/jira/browse/IGNITE-13270
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8, 2.8.1
>Reporter: Tanmay Ambre
>Priority: Major
>
> After upgrading to Ignite 2.8.0 from 2.7.5 the following query is causing 
> massive CPU burn. We have an 8-node server cluster for Ignite (with 32 CPUs on 
> each node). With 2.8.0, 2 nodes are maxed out on CPU; with 2.8.1 all nodes 
> are above 80%. On 2.7.5 the same query was performing very nicely. 
> The query is a join of 3 tables sharing the same affinity key. However, we don't 
> know the affinity key when we are querying, and therefore this results in a merge 
> scan. 
> Query:
> select A.A_ID, A.AFFINITYKEY, B.B_ID from A, B, C
> where
> A.A_ID = B.A_ID AND
> B.B_ID = C.B_ID AND
> A.AFFINITYKEY = B.AFFINITYKEY AND
> B.AFFINITYKEY = C.AFFINITYKEY AND
> B.B_ID = ? AND
> C.C_ID = ?
>  
> Performance:
> 2.7.5 ~ 2 to 5 ms
> 2.8.0 ~ 6 to 12 seconds
> 2.8.1 ~ within 3 seconds
>  
> Volume (we have an EDA-based system where messages come in bursts. One burst 
> has approx. 150K events - each event corresponds to one query). 
>  
> Throughput of our Java-based processor (that fires this query):
> 2.7.5 ~ 750 transactions per second
> 2.8.0 ~ 3 transactions per second
> 2.8.1 ~ 6 transactions per second
>  
> CPU burn
> 2.7.5 ~ 10 to 15% on each node
> 2.8.0 ~ 100% on 2 nodes other nodes are idling
> 2.8.1 ~ 80 to 90% on each node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13271) Add new type of WAL records to track/debug atomic updates on backup nodes

2020-07-21 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162017#comment-17162017
 ] 

Ignite TC Bot commented on IGNITE-13271:


{panel:title=Branch: [pull/8060/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/8060/head] Base: [master] : New Tests 
(8)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Service Grid (legacy mode){color} [[tests 
4|https://ci.ignite.apache.org/viewLog.html?buildId=5476089]]
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=0e87c8c6371-d700b7ab-0f16-4eeb-990c-20c47f96ea3f, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=ac5699a3-5f95-430d-bf47-4e99c8ccfb63, topVer=0, msgTemplate=null, 
span=null, nodeId8=ac5699a3, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595254012124]], val2=AffinityTopologyVersion 
[topVer=6283406109545921379, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=0e87c8c6371-d700b7ab-0f16-4eeb-990c-20c47f96ea3f, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=ac5699a3-5f95-430d-bf47-4e99c8ccfb63, topVer=0, msgTemplate=null, 
span=null, nodeId8=ac5699a3, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595254012124]], val2=AffinityTopologyVersion 
[topVer=6283406109545921379, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=34ad1a01-2fa4-4a73-8982-639970c08a05, topVer=0, 
msgTemplate=null, span=null, nodeId8=b67542e3, msg=, type=NODE_JOINED, 
tstamp=1595254012124], val2=AffinityTopologyVersion 
[topVer=1033303141680072443, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=34ad1a01-2fa4-4a73-8982-639970c08a05, topVer=0, 
msgTemplate=null, span=null, nodeId8=b67542e3, msg=, type=NODE_JOINED, 
tstamp=1595254012124], val2=AffinityTopologyVersion 
[topVer=1033303141680072443, minorTopVer=0]]] - PASSED{color}

{color:#8b}Service Grid{color} [[tests 
4|https://ci.ignite.apache.org/viewLog.html?buildId=5476088]]
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=d1b880b4-2903-4fe8-bd8a-b097ae054561, topVer=0, 
msgTemplate=null, span=null, nodeId8=8dc88a81, msg=, type=NODE_JOINED, 
tstamp=1595253956807], val2=AffinityTopologyVersion 
[topVer=3258640746282525568, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=d1b880b4-2903-4fe8-bd8a-b097ae054561, topVer=0, 
msgTemplate=null, span=null, nodeId8=8dc88a81, msg=, type=NODE_JOINED, 
tstamp=1595253956807], val2=AffinityTopologyVersion 
[topVer=3258640746282525568, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=bc0ab8c6371-e815ee84-834c-45d1-a092-51819bb624c3, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=886e9b10-da14-4ffb-aec8-1d8e21dc7638, topVer=0, msgTemplate=null, 
span=null, nodeId8=886e9b10, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595253956807]], val2=AffinityTopologyVersion 
[topVer=-2331752221821955656, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=bc0ab8c6371-e815ee84-834c-45d1-a092-51819bb624c3, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=886e9b10-da14-4ffb-aec8-1d8e21dc7638, topVer=0, msgTemplate=null, 
span=null, nodeId8=886e9b10, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595253956807]], val2=AffinityTopologyVersion 
[topVer=-2331752221821955656, minorTopVer=0]]] - PASSED{color}

{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5476111&buildTypeId=IgniteTests24Java8_RunAll]

> Add new type of WAL records to track/debug atomic updates on backup nodes
> -
>
> Key: IGNITE-13271
>   

[jira] [Commented] (IGNITE-13270) Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5

2020-07-21 Thread Tanmay Ambre (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17162019#comment-17162019
 ] 

Tanmay Ambre commented on IGNITE-13270:
---

The H2 version is 1.4.197.

> Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5
> --
>
> Key: IGNITE-13270
> URL: https://issues.apache.org/jira/browse/IGNITE-13270
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8, 2.8.1
>Reporter: Tanmay Ambre
>Priority: Major
>
> After upgrading to Ignite 2.8.0 from 2.7.5 the following query is causing 
> massive CPU burn. We have an 8-node server cluster for Ignite (with 32 CPUs on 
> each node). With 2.8.0, 2 nodes are maxed out on CPU; with 2.8.1 all nodes 
> are above 80%. On 2.7.5 the same query was performing very nicely. 
> The query is a join of 3 tables sharing the same affinity key. However, we don't 
> know the affinity key when we are querying, and therefore this results in a merge 
> scan. 
> Query:
> select A.A_ID, A.AFFINITYKEY, B.B_ID from A, B, C
> where
> A.A_ID = B.A_ID AND
> B.B_ID = C.B_ID AND
> A.AFFINITYKEY = B.AFFINITYKEY AND
> B.AFFINITYKEY = C.AFFINITYKEY AND
> B.B_ID = ? AND
> C.C_ID = ?
>  
> Performance:
> 2.7.5 ~ 2 to 5 ms
> 2.8.0 ~ 6 to 12 seconds
> 2.8.1 ~ within 3 seconds
>  
> Volume (we have an EDA-based system where messages come in bursts. One burst 
> has approx. 150K events - each event corresponds to one query). 
>  
> Throughput of our Java-based processor (that fires this query):
> 2.7.5 ~ 750 transactions per second
> 2.8.0 ~ 3 transactions per second
> 2.8.1 ~ 6 transactions per second
>  
> CPU burn
> 2.7.5 ~ 10 to 15% on each node
> 2.8.0 ~ 100% on 2 nodes other nodes are idling
> 2.8.1 ~ 80 to 90% on each node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-7369) .NET: Thin client: Transactions

2020-07-21 Thread Sergey Stronchinskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161990#comment-17161990
 ] 

Sergey Stronchinskiy commented on IGNITE-7369:
--

[~ptupitsyn] Submitted for review.

> .NET: Thin client: Transactions
> ---
>
> Key: IGNITE-7369
> URL: https://issues.apache.org/jira/browse/IGNITE-7369
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 2.4
>Reporter: Pavel Tupitsyn
>Assignee: Sergey Stronchinskiy
>Priority: Major
>  Labels: .NET, iep-34
> Fix For: 2.9
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Implement transactions in thin client protocol and .NET thin client.
> Main issue: Ignite transactions are tied to a specific thread.
> See how JDBC works around this by starting a dedicated thread.
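For context, a minimal Java sketch - not the .NET implementation under review - of the dedicated-thread idea mentioned above: every transactional operation is funnelled through a single worker thread, so the transaction stays bound to one thread. A TRANSACTIONAL cache named "myCache" is assumed:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

public class DedicatedTxThreadSketch {
    /** Single thread that owns all transactions started through this helper. */
    private final ExecutorService txThread = Executors.newSingleThreadExecutor();

    private final Ignite ignite;

    DedicatedTxThreadSketch(Ignite ignite) {
        this.ignite = ignite;
    }

    /** Runs the whole transaction on the dedicated thread and waits for it to finish. */
    void putInTx(int key, String val) throws Exception {
        txThread.submit(() -> {
            IgniteCache<Integer, String> cache = ignite.cache("myCache");

            try (Transaction tx = ignite.transactions().txStart()) {
                cache.put(key, val);
                tx.commit();
            }
        }).get();
    }
}
{code}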



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-7369) .NET: Thin client: Transactions

2020-07-21 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-7369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161988#comment-17161988
 ] 

Ignite TC Bot commented on IGNITE-7369:
---

{panel:title=Branch: [pull/7992/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/7992/head] Base: [master] : New Tests 
(81)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Platform .NET{color} [[tests 
42|https://ci.ignite.apache.org/viewLog.html?buildId=5477938]]
* {color:#013220}exe: CacheClientAbstractTxTest.TestTxClose - PASSED{color}
* {color:#013220}exe: 
CacheClientAbstractTxTest.TestTransactionScopeWithManualIgniteTx - PASSED{color}
* {color:#013220}exe: CacheClientAbstractTxTest.TestTransactionScopeSingleCache 
- PASSED{color}
* {color:#013220}exe: CacheClientAbstractTxTest.TestTransactionScopeOptions - 
PASSED{color}
* {color:#013220}exe: CacheClientAbstractTxTest.TestTransactionScopeMultiCache 
- PASSED{color}
* {color:#013220}exe: 
CacheClientAbstractTxTest.TestTransactionScopeAllOperationsSync - PASSED{color}
* {color:#013220}exe: CacheClientAbstractTxTest.TestThrowsIfMultipleStarted - 
PASSED{color}
* {color:#013220}exe: CacheClientAbstractTxTest.TestThrowsIfMultipleStarted - 
PASSED{color}
* {color:#013220}exe: CacheClientAbstractTxTest.TestSuppressedTransactionScope 
- PASSED{color}
* {color:#013220}exe: CacheClientAbstractTxTest.TestNestedTransactionScope - 
PASSED{color}
* {color:#013220}exe: 
CacheClientAbstractTxTest.TestDifferentClientsCanStartTransactions - 
PASSED{color}
... and 31 new tests

{color:#8b}Platform .NET (Core Linux){color} [[tests 
39|https://ci.ignite.apache.org/viewLog.html?buildId=5477940]]
* {color:#013220}dll: CachePartitionedTxTest.TestTimeout - PASSED{color}
* {color:#013220}dll: CachePartitionedTxTest.TestThrowsIfMultipleStarted - 
PASSED{color}
* {color:#013220}dll: 
CachePartitionedTxTest.TestThrowsIfEndAlreadyCompletedTransaction - 
PASSED{color}
* {color:#013220}dll: CachePartitionedTxTest.TestSuppressedTransactionScope - 
PASSED{color}
* {color:#013220}dll: CachePartitionedTxTest.TestNestedTransactionScope - 
PASSED{color}
* {color:#013220}dll: 
CachePartitionedTxTest.TestDifferentClientsCanStartTransactions - PASSED{color}
* {color:#013220}dll: CachePartitionedTxTest.TestClientTransactionConfiguration 
- PASSED{color}
* {color:#013220}dll: CacheClientLocalTxTest.TestWithLabel - PASSED{color}
* {color:#013220}dll: CachePartitionedTxTest.TestTxClose - PASSED{color}
* {color:#013220}dll: 
CachePartitionedTxTest.TestTransactionScopeWithManualIgniteTx - PASSED{color}
* {color:#013220}dll: CachePartitionedTxTest.TestTransactionScopeSingleCache - 
PASSED{color}
... and 28 new tests

{panel}
[TeamCity *-> Run :: .NET* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=5477943&buildTypeId=IgniteTests24Java8_RunAllNet]

> .NET: Thin client: Transactions
> ---
>
> Key: IGNITE-7369
> URL: https://issues.apache.org/jira/browse/IGNITE-7369
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 2.4
>Reporter: Pavel Tupitsyn
>Assignee: Sergey Stronchinskiy
>Priority: Major
>  Labels: .NET, iep-34
> Fix For: 2.9
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Implement transactions in thin client protocol and .NET thin client.
> Main issue: Ignite transactions are tied to a specific thread.
> See how JDBC works around this by starting a dedicated thread.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (IGNITE-13280) Improper index usage, fields enumeration not used with pk index creation.

2020-07-21 Thread Stanilovsky Evgeny (Jira)
Stanilovsky Evgeny created IGNITE-13280:
---

 Summary: Improper index usage, fields enumeration not used with pk 
index creation.
 Key: IGNITE-13280
 URL: https://issues.apache.org/jira/browse/IGNITE-13280
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.8.1
Reporter: Stanilovsky Evgeny
Assignee: Stanilovsky Evgeny


For example:

{code:java}
CREATE TABLE PUBLIC.TEST_TABLE (FIRST_NAME VARCHAR, LAST_NAME VARCHAR, ADDRESS 
VARCHAR, LANG VARCHAR,  CONSTRAINT PK_PERSON PRIMARY KEY (FIRST_NAME, 
LAST_NAME));

CREATE INDEX "idx2" ON PUBLIC.TEST_TABLE (LANG, ADDRESS);
{code}

and the subsequent EXPLAIN output:


{code:java}
SELECT
"__Z0"."FIRST_NAME" AS "__C0_0",
"__Z0"."LAST_NAME" AS "__C0_1",
"__Z0"."ADDRESS" AS "__C0_2",
"__Z0"."LANG" AS "__C0_3"
FROM "PUBLIC"."TEST_TABLE" "__Z0"
/* PUBLIC.IDX2: ADDRESS > 0 */
WHERE "__Z0"."ADDRESS" > 0
{code}

It is erroneous to use "idx2" here, because the first index field, LANG, does not 
match the predicate field ADDRESS.
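For illustration, a minimal sketch of an index whose leading column actually matches the predicate column; the index name is an assumption, and the cache name follows Ignite's default SQL_PUBLIC_ naming for SQL-created tables:

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class MatchingIndexSketch {
    static void createAddressIndex(Ignite ignite) {
        // Leading column ADDRESS matches the WHERE predicate, so the index is usable for the range scan.
        ignite.cache("SQL_PUBLIC_TEST_TABLE").query(
            new SqlFieldsQuery("CREATE INDEX \"idx_addr\" ON PUBLIC.TEST_TABLE (ADDRESS, LANG)")).getAll();
    }
}
{code}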



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13016) Fix backward checking of failed node.

2020-07-21 Thread Vladimir Steshin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Steshin updated IGNITE-13016:
--
Ignite Flags:   (was: Release Notes Required)

> Fix backward checking of failed node.
> -
>
> Key: IGNITE-13016
> URL: https://issues.apache.org/jira/browse/IGNITE-13016
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Backward node connection checking looks weird. What we might improve:
> 1) Address checking could be done in parallel, not sequentially.
> {code:java}
> for (InetSocketAddress addr : nodeAddrs) {
> // Connection refused may be got if node doesn't listen
> // (or blocked by firewall, but anyway assume it is dead).
> if (!isConnectionRefused(addr)) {
> liveAddr = addr;
> break;
> }
> }
> {code}
> 2) Any IO exception should be considered a failed connection, not only 
> connection-refused:
> {code:java}
> catch (ConnectException e) {
> return true;
> }
> catch (IOException e) {
> return false;
> }
> {code}
> 3) The timeout on connection checking should not be constant or hardcoded:
> {code:java}
> sock.connect(addr, 100);
> {code}
> 4) The decision to check the connection should rely on the configured exchange timeout, 
> not on the ping interval
> {code:java}
> // We got message from previous in less than double connection check interval.
> boolean ok = rcvdTime + U.millisToNanos(connCheckInterval) * 2 >= now;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13016) Fix backward checking of failed node.

2020-07-21 Thread Sergey Chugunov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161933#comment-17161933
 ] 

Sergey Chugunov commented on IGNITE-13016:
--

[~vladsz83],

The patch looks good to me, I merged it to master branch in commit 
*03ee85695014ff6aaa87e256d330d32342d34224*.

Thank you for contribution!

> Fix backward checking of failed node.
> -
>
> Key: IGNITE-13016
> URL: https://issues.apache.org/jira/browse/IGNITE-13016
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Backward node connection checking looks weird. What we might improve:
> 1) Address checking could be done in parallel, not sequentially.
> {code:java}
> for (InetSocketAddress addr : nodeAddrs) {
> // Connection refused may be got if node doesn't listen
> // (or blocked by firewall, but anyway assume it is dead).
> if (!isConnectionRefused(addr)) {
> liveAddr = addr;
> break;
> }
> }
> {code}
> 2) Any IO exception should be considered a failed connection, not only 
> connection-refused:
> {code:java}
> catch (ConnectException e) {
> return true;
> }
> catch (IOException e) {
> return false;
> }
> {code}
> 3) The timeout on connection checking should not be constant or hardcoded:
> {code:java}
> sock.connect(addr, 100);
> {code}
> 4) The decision to check the connection should rely on the configured exchange timeout, 
> not on the ping interval
> {code:java}
> // We got message from previous in less than double connection check interval.
> boolean ok = rcvdTime + U.millisToNanos(connCheckInterval) * 2 >= now;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (IGNITE-13016) Fix backward checking of failed node.

2020-07-21 Thread Sergey Chugunov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161933#comment-17161933
 ] 

Sergey Chugunov edited comment on IGNITE-13016 at 7/21/20, 10:35 AM:
-

[~vladsz83],

The patch looks good to me, I merged it to master branch in commit 
*03ee85695014ff6aaa87e256d330d32342d34224*. Please fill in release notes for 
the ticket.

Thank you for contribution!


was (Author: sergeychugunov):
[~vladsz83],

The patch looks good to me, I merged it to master branch in commit 
*03ee85695014ff6aaa87e256d330d32342d34224*.

Thank you for contribution!

> Fix backward checking of failed node.
> -
>
> Key: IGNITE-13016
> URL: https://issues.apache.org/jira/browse/IGNITE-13016
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Vladimir Steshin
>Assignee: Vladimir Steshin
>Priority: Major
>  Labels: iep-45
> Fix For: 2.9
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Backward node connection checking looks weird. What we might improve:
> 1) Address checking could be done in parallel, not sequentially.
> {code:java}
> for (InetSocketAddress addr : nodeAddrs) {
> // Connection refused may be got if node doesn't listen
> // (or blocked by firewall, but anyway assume it is dead).
> if (!isConnectionRefused(addr)) {
> liveAddr = addr;
> break;
> }
> }
> {code}
> 2) Any IO exception should be considered a failed connection, not only 
> connection-refused:
> {code:java}
> catch (ConnectException e) {
> return true;
> }
> catch (IOException e) {
> return false;
> }
> {code}
> 3) The timeout on connection checking should not be constant or hardcoded:
> {code:java}
> sock.connect(addr, 100);
> {code}
> 4) The decision to check the connection should rely on the configured exchange timeout, 
> not on the ping interval
> {code:java}
> // We got message from previous in less than double connection check interval.
> boolean ok = rcvdTime + U.millisToNanos(connCheckInterval) * 2 >= now;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12996) Remote filter of IgniteEvents has to run inside the Ignite Sandbox.

2020-07-21 Thread Denis Garus (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161928#comment-17161928
 ] 

Denis Garus commented on IGNITE-12996:
--

[~alex_pl], could you please review the changes? 
There is one blocker in the TC visa that is also present on the master branch. 
Unfortunately, it blocks getting a green visa. 

> Remote filter of IgniteEvents has to run inside the Ignite Sandbox.
> ---
>
> Key: IGNITE-12996
> URL: https://issues.apache.org/jira/browse/IGNITE-12996
> Project: Ignite
>  Issue Type: Task
>  Components: security
>Reporter: Denis Garus
>Assignee: Denis Garus
>Priority: Major
>  Labels: iep-38
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Remote filter of IgniteEvents has to run on a remote node inside the Ignite 
> Sandbox if it is turned on.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12996) Remote filter of IgniteEvents has to run inside the Ignite Sandbox.

2020-07-21 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161924#comment-17161924
 ] 

Ignite TC Bot commented on IGNITE-12996:


{panel:title=Branch: [pull/8058/head] Base: [master] : Possible Blockers 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}PDS (Indexing){color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=5480797]]

{panel}
{panel:title=Branch: [pull/8058/head] Base: [master] : New Tests 
(10)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Service Grid (legacy mode){color} [[tests 
4|https://ci.ignite.apache.org/viewLog.html?buildId=5475968]]
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=68d41105-a043-4d0f-a77c-108878624641, topVer=0, 
msgTemplate=null, span=null, nodeId8=1248fea0, msg=, type=NODE_JOINED, 
tstamp=1595251915586], val2=AffinityTopologyVersion 
[topVer=-5433749287640349845, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=64b7c6c6371-a1f05a7c-6bc7-4094-a9c1-d88629957897, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=a4873f81-a45e-4593-98b9-f2acbfcbc5e4, topVer=0, msgTemplate=null, 
span=null, nodeId8=a4873f81, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595251915586]], val2=AffinityTopologyVersion 
[topVer=7462137751004898471, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=64b7c6c6371-a1f05a7c-6bc7-4094-a9c1-d88629957897, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=a4873f81-a45e-4593-98b9-f2acbfcbc5e4, topVer=0, msgTemplate=null, 
span=null, nodeId8=a4873f81, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595251915586]], val2=AffinityTopologyVersion 
[topVer=7462137751004898471, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=68d41105-a043-4d0f-a77c-108878624641, topVer=0, 
msgTemplate=null, span=null, nodeId8=1248fea0, msg=, type=NODE_JOINED, 
tstamp=1595251915586], val2=AffinityTopologyVersion 
[topVer=-5433749287640349845, minorTopVer=0]]] - PASSED{color}

{color:#8b}Service Grid{color} [[tests 
4|https://ci.ignite.apache.org/viewLog.html?buildId=5475967]]
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=28c9d64f-92a3-4e11-8f7c-dc630dd9352f, topVer=0, 
msgTemplate=null, span=null, nodeId8=07e642ac, msg=, type=NODE_JOINED, 
tstamp=1595251789472], val2=AffinityTopologyVersion 
[topVer=7217010853577529856, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=28c9d64f-92a3-4e11-8f7c-dc630dd9352f, topVer=0, 
msgTemplate=null, span=null, nodeId8=07e642ac, msg=, type=NODE_JOINED, 
tstamp=1595251789472], val2=AffinityTopologyVersion 
[topVer=7217010853577529856, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=4ae8a6c6371-61a125ed-d1c5-4d83-8daf-6c2dc1262756, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=bc66a8b9-f585-450b-8a62-35b39e4da832, topVer=0, msgTemplate=null, 
span=null, nodeId8=bc66a8b9, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595251789472]], val2=AffinityTopologyVersion 
[topVer=4583702723832650516, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=4ae8a6c6371-61a125ed-d1c5-4d83-8daf-6c2dc1262756, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=bc66a8b9-f585-450b-8a62-35b39e4da832, topVer=0, msgTemplate=null, 
span=null, nodeId8=bc66a8b9, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595251789472]], val2=AffinityTopologyVersion 
[topVer=4583702723832650516, minorTopVer=0]]] - PASSED{color}

{color:#8b}Security{color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=5475983]]
* {color:#013220}SecurityTestSuite: EventsSandboxTest.testRemoteFilter - 
PASSED{color}
* {color:#013220}Secu

[jira] [Created] (IGNITE-13279) Ignition.start failing with error java.lang.IllegalStateException: Failed to parse version: -1595322995-

2020-07-21 Thread Keshava Munegowda (Jira)
Keshava Munegowda created IGNITE-13279:
--

 Summary: Ignition.start failing with error 
java.lang.IllegalStateException: Failed to parse version: -1595322995-
 Key: IGNITE-13279
 URL: https://issues.apache.org/jira/browse/IGNITE-13279
 Project: Ignite
  Issue Type: Bug
  Components: examples, general
Affects Versions: 2.8.1
Reporter: Keshava Munegowda


I am using Apache Ignite: apache-ignite-2.8.1-bin

I started the Apache Ignite node using: ./bin/ignite.sh 
./examples/config/example-ignite.xml

The node is successfully started with this message:
```
[root@mdw apache-ignite-2.8.1-bin]# ./bin/ignite.sh 
./examples/config/example-ignite.xml
[02:19:43] __ 
[02:19:43] / _/ ___/ |/ / _/_ __/ __/
[02:19:43] _/ // (7 7 // / / / / _/
[02:19:43] /___/\___/_/|_/___/ /_/ /___/
[02:19:43]
[02:19:43] ver. 2.8.1#20200521-sha1:86422096
[02:19:43] 2020 Copyright(C) Apache Software Foundation
[02:19:43]
[02:19:43] Ignite documentation: http://ignite.apache.org
[02:19:43]
[02:19:43] Quiet mode.
[02:19:43] ^-- Logging to file 
'/data/kmg/apache-ignite-2.8.1-bin/work/log/ignite-4135cf96.0.log'
[02:19:43] ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[02:19:43] ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or 
"-v" to ignite.\{sh|bat}
[02:19:43]
[02:19:43] OS: Linux 3.10.0-1127.el7.x86_64 amd64
[02:19:43] VM information: OpenJDK Runtime Environment 1.8.0_252-b09 Oracle 
Corporation OpenJDK 64-Bit Server VM 25.252-b09
[02:19:43] Please set system property '-Djava.net.preferIPv4Stack=true' to 
avoid possible problems in mixed environments.
[02:19:43] Configured plugins:
[02:19:43] ^-- None
[02:19:43]
[02:19:43] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler 
[tryStop=false, timeout=0, super=AbstractFailureHandler 
[ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, 
SYSTEM_CRITICAL_OPERATION_TIMEOUT
[02:19:46] Message queue limit is set to 0 which may lead to potential OOMEs 
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to 
message queues growth on sender and receiver sides.
[02:19:46] Security status [authentication=off, tls/ssl=off]
[02:19:48] Performance suggestions for grid (fix if possible)
[02:19:48] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[02:19:48] ^-- Disable grid events (remove 'includeEventTypes' from 
configuration)
[02:19:48] ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[02:19:48] ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]' to JVM 
options)
[02:19:48] ^-- Set max direct memory size if getting 'OOME: Direct buffer 
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[02:19:48] ^-- Disable processing of calls to System.gc() (add 
'-XX:+DisableExplicitGC' to JVM options)
[02:19:48] ^-- Speed up flushing of dirty pages by OS (alter 
vm.dirty_expire_centisecs parameter by setting to 500)
[02:19:48] Refer to this page for more performance suggestions: 
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[02:19:48]
[02:19:48] To start Console Management & Monitoring run ignitevisorcmd.\{sh|bat}
[02:19:48] Data Regions Configured:
[02:19:48] ^-- default [initSize=256.0 MiB, maxSize=75.5 GiB, 
persistence=false, lazyMemoryAllocation=true]
[02:19:48]
[02:19:48] Ignite node started OK (id=4135cf96)
[02:19:48] Topology snapshot [ver=1, locNode=4135cf96, servers=1, clients=0, 
state=ACTIVE, CPUs=32, offheap=76.0GB, heap=27.0GB]
[02:19:48] ^-- Baseline [id=0, size=1, online=1, offline=0]

```

Now, I have a benchmarking application, which starts Apache Ignite using 
the Java API


Ignition.start("examples/config/example-ignite.xml");


This method is failing with below error log:

```
0 [main] DEBUG org.springframework.core.env.StandardEnvironment - Adding 
PropertySource 'systemProperties' with lowest search precedence
2 [main] DEBUG org.springframework.core.env.StandardEnvironment - Adding 
PropertySource 'systemEnvironment' with lowest search precedence
3 [main] DEBUG org.springframework.core.env.StandardEnvironment - Initialized 
StandardEnvironment with PropertySources [MapPropertySource@1928301845 
{name='systemProperties', properties={java.runtime.name=OpenJDK Runtime 
Environment, 
sun.boot.library.path=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre/lib/amd64,
 java.vm.version=25.252-b09, java.vm.vendor=Oracle Corporation, 
java.vendor.url=http://java.oracle.com/, path.separator=:, java.vm.name=OpenJDK 
64-Bit Server VM, file.encoding.pkg=sun.io, user.country=US, 
sun.java.launcher=SUN_STANDARD, sun.os.patch.level=unknown, 
java.vm.specification.name=Java Virtual Machine Specification, 
user.dir=/data/kmg/SBK, java.runtime.version=1.8.0_252-b09, 
java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment, 
java.endorsed.dirs=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre/lib/en

[jira] [Created] (IGNITE-13278) Forgotten logger isInfoEnabled check.

2020-07-21 Thread Stanilovsky Evgeny (Jira)
Stanilovsky Evgeny created IGNITE-13278:
---

 Summary: Forgotten logger isInfoEnabled check.
 Key: IGNITE-13278
 URL: https://issues.apache.org/jira/browse/IGNITE-13278
 Project: Ignite
  Issue Type: Improvement
  Components: general
Affects Versions: 2.8.1
Reporter: Stanilovsky Evgeny
Assignee: Stanilovsky Evgeny



In RO tests with -ea enabled we can get an assertion like this:
{code:java}
java.lang.AssertionError: Logging at INFO level without checking if INFO level 
is enabled: Cluster state was changed from ACTIVE to ACTIVE
at 
org.apache.ignite.testframework.junits.logger.GridTestLog4jLogger.info(GridTestLog4jLogger.java:481)
 ~[ignite-core-tests.jar]
{code}
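For reference, a minimal sketch of the guard the assertion expects; the logger and the message are just placeholders:

{code:java}
import org.apache.ignite.IgniteLogger;

public class GuardedInfoLoggingSketch {
    static void logStateChange(IgniteLogger log, String oldState, String newState) {
        // Check the level before emitting the INFO message, as the test logger asserts.
        if (log.isInfoEnabled())
            log.info("Cluster state was changed from " + oldState + " to " + newState);
    }
}
{code}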




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-12398) Apache Ignite Cluster(Amazon S3 Based Discovery) Nodes getting down if we connect Ignite Visor Command Line Interface

2020-07-21 Thread Aleksey Plekhanov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161897#comment-17161897
 ] 

Aleksey Plekhanov commented on IGNITE-12398:


[~ravimsc], can you please provide more information about your case? Where do 
you start visor, inside S3 or outside? Do you set any additional properties in 
the visor configuration ({{IgniteConfiguration.LocalHost}}, for example)?
I've checked the {{getAddress()}} method with the address patterns you provided, and 
this method returns non-null for such values.
I found some problems with daemon nodes (visor uses a daemon node to join the 
cluster): they join the ring (like server nodes) instead of joining as a 
client. When joined to the ring, IpFinder.registerAddresses is invoked, and if some 
of the addresses passed by visor are unresolvable, an exception like yours can be 
thrown (although I can't see how that happens, since Ignite uses the IP address if the host 
is unresolvable). But you can work around it: just set ClientMode = true in the 
visor configuration, and visor will not join the ring and will connect to the 
cluster like a client.
I think this ticket is not a blocker (a workaround exists and we can't reproduce 
it). I've set the priority to "critical" and targeted the ticket to the next 
release.
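If the node configuration is assembled programmatically rather than taken from an XML file, the suggested workaround boils down to something like the following sketch (discovery settings omitted):

{code:java}
import org.apache.ignite.configuration.IgniteConfiguration;

public class VisorClientModeSketch {
    static IgniteConfiguration clientConfig() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Join the cluster as a client instead of entering the server ring.
        cfg.setClientMode(true);

        return cfg;
    }
}
{code}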

> Apache Ignite Cluster(Amazon S3 Based Discovery) Nodes getting down if we 
> connect Ignite Visor Command Line Interface
> -
>
> Key: IGNITE-12398
> URL: https://issues.apache.org/jira/browse/IGNITE-12398
> Project: Ignite
>  Issue Type: Bug
>  Components: aws, general, s3, visor
>Affects Versions: 2.7
> Environment: Production
>Reporter: Ravi Kumar Powli
>Assignee: Emmanouil Gkatziouras
>Priority: Critical
> Fix For: 2.10
>
>
> We have an Apache Ignite 3-node cluster set up with Amazon S3-based discovery. If 
> we connect to any one cluster node using the Ignite Visor command line interface, it 
> hangs and the cluster nodes (all three nodes) go down. 
> Please find the exception stack trace below.
> {noformat}
> [SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%][] Critical system 
> error detected. Will be handled accordingly to configured handler 
> [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.NullPointerException]]
> java.lang.NullPointerException
> at 
> org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.key(TcpDiscoveryS3IpFinder.java:247)
> at 
> org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.registerAddresses(TcpDiscoveryS3IpFinder.java:205)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddFinishedMessage(ServerImpl.java:4616)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4232)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2816)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2611)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7188)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)
> at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> [10:36:54,600][SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%][] 
> Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: GridWorker 
> [name=tcp-disco-msg-worker, igniteInstanceName=DataStoreIgniteCache, 
> finished=true, heartbeatTs=1574332614423]]]
> class org.apache.ignite.IgniteException: GridWorker 
> [name=tcp-disco-msg-worker, igniteInstanceName=DataStoreIgniteCache, 
> finished=true, heartbeatTs=1574332614423]
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
> at 
> org.apache.ignite.internal.worker.WorkersRegistry.onStopped(WorkersRegistry.java:169)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:153)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)
> at org.

[jira] [Updated] (IGNITE-12398) Apache Ignite Cluster(Amazon S3 Based Discovery) Nodes getting down if we connect Ignite Visor Command Line Interface

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12398:
---
Priority: Critical  (was: Blocker)

> Apache Ignite Cluster(Amazon S3 Based Discovery) Nodes getting down if we 
> connect Ignite Visor Command Line Interface
> -
>
> Key: IGNITE-12398
> URL: https://issues.apache.org/jira/browse/IGNITE-12398
> Project: Ignite
>  Issue Type: Bug
>  Components: aws, general, s3, visor
>Affects Versions: 2.7
> Environment: Production
>Reporter: Ravi Kumar Powli
>Assignee: Emmanouil Gkatziouras
>Priority: Critical
> Fix For: 2.9
>
>
> We have an Apache Ignite 3-node cluster set up with Amazon S3-based discovery. If 
> we connect to any one cluster node using the Ignite Visor command line interface, it 
> hangs and the cluster nodes (all three nodes) go down. 
> Please find the exception stack trace below.
> {noformat}
> [SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%][] Critical system 
> error detected. Will be handled accordingly to configured handler 
> [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.NullPointerException]]
> java.lang.NullPointerException
> at 
> org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.key(TcpDiscoveryS3IpFinder.java:247)
> at 
> org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.registerAddresses(TcpDiscoveryS3IpFinder.java:205)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddFinishedMessage(ServerImpl.java:4616)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4232)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2816)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2611)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7188)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)
> at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> [10:36:54,600][SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%][] 
> Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: GridWorker 
> [name=tcp-disco-msg-worker, igniteInstanceName=DataStoreIgniteCache, 
> finished=true, heartbeatTs=1574332614423]]]
> class org.apache.ignite.IgniteException: GridWorker 
> [name=tcp-disco-msg-worker, igniteInstanceName=DataStoreIgniteCache, 
> finished=true, heartbeatTs=1574332614423]
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
> at 
> org.apache.ignite.internal.worker.WorkersRegistry.onStopped(WorkersRegistry.java:169)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:153)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)
> at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> [10:36:59] Ignite node stopped OK [name=DataStoreIgniteCache, 
> uptime=00:01:13.934]
> {noformat}
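
For context, here is a minimal Java sketch of the S3-based discovery setup described above, 
assuming the TcpDiscoveryS3IpFinder shipped in the ignite-aws module. The bucket name and 
credentials are placeholders; this only illustrates the configuration path that ends up in 
TcpDiscoveryS3IpFinder, not a fix for the NPE:

{code}
import com.amazonaws.auth.BasicAWSCredentials;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder;

public class S3DiscoveryConfigSketch {
    public static void main(String[] args) {
        // S3 IP finder: nodes register their addresses as keys in the given bucket.
        TcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();
        ipFinder.setBucketName("my-discovery-bucket"); // hypothetical bucket name
        ipFinder.setAwsCredentials(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")); // placeholders

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName("DataStoreIgniteCache"); // instance name taken from the log above
        cfg.setDiscoverySpi(discoSpi);

        // Start a node that discovers its peers through the S3 bucket.
        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Node started: " + ignite.cluster().localNode().id());
        }
    }
}
{code}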



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-12398) Apache Ignite Cluster(Amazon S3 Based Discovery) Nodes getting down if we connect Ignite Visor Command Line Interface

2020-07-21 Thread Aleksey Plekhanov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksey Plekhanov updated IGNITE-12398:
---
Fix Version/s: (was: 2.9)
   2.10

> Apache Ignite Cluster(Amazon S3 Based Discovery) Nodes getting down if we 
> connect Ignite Visor Command Line Interface
> -
>
> Key: IGNITE-12398
> URL: https://issues.apache.org/jira/browse/IGNITE-12398
> Project: Ignite
>  Issue Type: Bug
>  Components: aws, general, s3, visor
>Affects Versions: 2.7
> Environment: Production
>Reporter: Ravi Kumar Powli
>Assignee: Emmanouil Gkatziouras
>Priority: Critical
> Fix For: 2.10
>
>
> We have a 3-node Apache Ignite cluster set up with Amazon S3 based discovery. If 
> we connect to any one of the cluster nodes using the Ignite Visor command line 
> interface, it hangs and all three cluster nodes go down. 
> Please find the exception stack trace below.
> {noformat}
> [SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%][] Critical system 
> error detected. Will be handled accordingly to configured handler 
> [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=java.lang.NullPointerException]]
> java.lang.NullPointerException
> at 
> org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.key(TcpDiscoveryS3IpFinder.java:247)
> at 
> org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder.registerAddresses(TcpDiscoveryS3IpFinder.java:205)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddFinishedMessage(ServerImpl.java:4616)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4232)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2816)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2611)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7188)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2700)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)
> at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> [10:36:54,600][SEVERE][tcp-disco-msg-worker-#2%DataStoreIgniteCache%][] 
> Critical system error detected. Will be handled accordingly to configured 
> handler [hnd=NoOpFailureHandler [super=AbstractFailureHandler 
> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext 
> [type=SYSTEM_WORKER_TERMINATION, err=class o.a.i.IgniteException: GridWorker 
> [name=tcp-disco-msg-worker, igniteInstanceName=DataStoreIgniteCache, 
> finished=true, heartbeatTs=1574332614423]]]
> class org.apache.ignite.IgniteException: GridWorker 
> [name=tcp-disco-msg-worker, igniteInstanceName=DataStoreIgniteCache, 
> finished=true, heartbeatTs=1574332614423]
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
> at 
> org.apache.ignite.internal.worker.WorkersRegistry.onStopped(WorkersRegistry.java:169)
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:153)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7119)
> at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> [10:36:59] Ignite node stopped OK [name=DataStoreIgniteCache, 
> uptime=00:01:13.934]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13270) Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5

2020-07-21 Thread Stanilovsky Evgeny (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161895#comment-17161895
 ] 

Stanilovsky Evgeny commented on IGNITE-13270:
-

OK, without a deeper investigation I can only suggest the following:
1. Check that your SQL plans are correct now: explain select ... <- you need to 
verify that the correct indexes are used here.
2. If point 1 shows a wrong plan, or the CPU burn is still in progress, try to 
rebuild the current indexes: stop the nodes one by one, remove or move the 
index.bin file for all (or only the suspicious) caches, and start the node again - 
the indexes will be rebuilt automatically.

And one more point - we are talking about a cluster with persistent caches, aren't we?
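
To make point 1 concrete, here is a minimal Java sketch of checking the plan with EXPLAIN 
through Ignite's SQL API. The table names are taken from the query in the description below; 
the cache name and parameter values are placeholders:

{code}
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ExplainPlanCheck {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // EXPLAIN returns the execution plan as rows of text; look for the expected index names.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "EXPLAIN SELECT A.A_ID, A.AFFINITYKEY, B.B_ID FROM A, B, C " +
                "WHERE A.A_ID = B.A_ID AND B.B_ID = C.B_ID " +
                "AND A.AFFINITYKEY = B.AFFINITYKEY AND B.AFFINITYKEY = C.AFFINITYKEY " +
                "AND B.B_ID = ? AND C.C_ID = ?")
                .setArgs("someBId", "someCId"); // placeholder parameter values

            // Any SQL-enabled cache can be used to submit the query; "A" is assumed here.
            for (List<?> row : ignite.cache("A").query(qry).getAll())
                System.out.println(row.get(0)); // each row is one line of the plan
        }
    }
}
{code}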

> Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5
> --
>
> Key: IGNITE-13270
> URL: https://issues.apache.org/jira/browse/IGNITE-13270
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8, 2.8.1
>Reporter: Tanmay Ambre
>Priority: Major
>
> After upgrading to Ignite 2.8.0 from 2.7.5 the following query is causing 
> massive CPU burn. We have an 8-node server cluster for Ignite (with 32 CPUs on 
> each node). With 2.8.0, 2 nodes are maxed out on CPU; with 2.8.1 all nodes 
> are above 80%. On 2.7.5 the same query was performing very nicely. 
> The query is a join of 3 tables sharing the same affinity key. However, we don't 
> know the affinity key when we are querying, and therefore this results in a merge 
> scan. 
> Query:
> select A.A_ID, A.AFFINITYKEY, B.B_ID from A, B, C
> where
> A.A_ID = B.A_ID AND
> B.B_ID = C.B_ID AND
> A.AFFINITYKEY = B.AFFINITYKEY AND
> B.AFFINITYKEY = C.AFFINITYKEY AND
> B.B_ID = ? AND
> C.C_ID = ?
>  
> Performance:
> 2.7.5 ~ 2 to 5 ms
> 2.8.0 ~ 6 to 12 seconds
> 2.8.1 ~ within 3 seconds
>  
> Volume: we have an EDA-based system where messages come in bursts. One burst 
> has approx. 150K events - each event corresponds to one query. 
>  
> Throughput of our Java-based processor (that fires this query):
> 2.7.5 ~ 750 transactions per second
> 2.8.0 ~ 3 transactions per second
> 2.8.1 ~ 6 transactions per second
>  
> CPU burn
> 2.7.5 ~ 10 to 15% on each node
> 2.8.0 ~ 100% on 2 nodes other nodes are idling
> 2.8.1 ~ 80 to 90% on each node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (IGNITE-13268) Add indexes manipulation commands to control.sh

2020-07-21 Thread Vladimir Malinovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Malinovskiy updated IGNITE-13268:
--
Fix Version/s: 2.10

> Add indexes manipulation commands to control.sh
> ---
>
> Key: IGNITE-13268
> URL: https://issues.apache.org/jira/browse/IGNITE-13268
> Project: Ignite
>  Issue Type: Improvement
>  Components: control.sh
>Reporter: Vladimir Malinovskiy
>Assignee: Vladimir Malinovskiy
>Priority: Major
> Fix For: 2.10
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> These subcommands are to be added to the *--cache* command:
> h2. --indexes_list
> Gets a list of index information. Although filters can be specified via command 
> arguments, lines of output should still be grepable.
> h4. Arguments:
>  * *--node-id* is the UUID of the node on which to perform the operation. If the 
> node isn't specified explicitly, it will be chosen by the grid.
>  * *--group-name* is a regular expression corresponding to the group name.
>  * *--cache-name* is a regular expression corresponding to the name of the 
> cache.
>  * *--index-name* is a regular expression that matches the name of the index.
> h2. --indexes-rebuild-status
> Gets the list of indexes that are currently being rebuilt.
> h4. Argument:
>  * *--node-id* is the UUID of the node on which to perform the operation. If the 
> node isn't specified explicitly, index rebuild info will be collected from all 
> nodes.
> h2. --indexes_force_rebuild
> Triggers a forced rebuild of indexes. The following information should be reported 
> in the output:
>  * List of caches that weren't found.
>  * List of caches that already have an index rebuild in progress. The index 
> rebuild shouldn't be restarted for these caches.
>  * List of caches for which an index rebuild was triggered.
> The index rebuild should be performed asynchronously.
> h4. Arguments:
>  * *--node-id* is the UUID of the node on which to perform the operation. 
> Mandatory parameter.
>  * *--group-names* is a comma-separated list of group names for which to 
> rebuild indexes. Either this option or --cache-names must be specified.
>  * *--cache-names* is a comma-separated list of cache names for which to 
> rebuild indexes. Either this option or --group-names must be specified.
>  
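
For illustration only, the proposed subcommands might be invoked as sketched below. The exact 
syntax is subject to the final implementation, and the node ID, cache and group names are 
placeholders:

{noformat}
control.sh --cache indexes_list --node-id <nodeId> --cache-name '.*Cache' --index-name '.*_idx'
control.sh --cache indexes-rebuild-status --node-id <nodeId>
control.sh --cache indexes_force_rebuild --node-id <nodeId> --cache-names cache1,cache2
{noformat}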



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13268) Add indexes manipulation commands to control.sh

2020-07-21 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161839#comment-17161839
 ] 

Ignite TC Bot commented on IGNITE-13268:


{panel:title=Branch: [pull/8054/head] Base: [master] : Possible Blockers 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}PDS (Indexing){color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=5478780]]

{panel}
{panel:title=Branch: [pull/8054/head] Base: [master] : New Tests 
(30)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Service Grid (legacy mode){color} [[tests 
4|https://ci.ignite.apache.org/viewLog.html?buildId=5475665]]
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=0544c933-2798-4a18-a361-455cc362a6ce, topVer=0, 
msgTemplate=null, span=null, nodeId8=9285b0a4, msg=, type=NODE_JOINED, 
tstamp=1595236191043], val2=AffinityTopologyVersion 
[topVer=2082538619630230610, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=924d68b6371-89738cef-13a0-465b-9939-f7391d1c2551, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=b88f658f-a2e1-4cb1-a294-6e635be87fc7, topVer=0, msgTemplate=null, 
span=null, nodeId8=b88f658f, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595236191043]], val2=AffinityTopologyVersion 
[topVer=-8387876848463035692, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=924d68b6371-89738cef-13a0-465b-9939-f7391d1c2551, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=b88f658f-a2e1-4cb1-a294-6e635be87fc7, topVer=0, msgTemplate=null, 
span=null, nodeId8=b88f658f, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595236191043]], val2=AffinityTopologyVersion 
[topVer=-8387876848463035692, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=0544c933-2798-4a18-a361-455cc362a6ce, topVer=0, 
msgTemplate=null, span=null, nodeId8=9285b0a4, msg=, type=NODE_JOINED, 
tstamp=1595236191043], val2=AffinityTopologyVersion 
[topVer=2082538619630230610, minorTopVer=0]]] - PASSED{color}

{color:#8b}Service Grid{color} [[tests 
4|https://ci.ignite.apache.org/viewLog.html?buildId=5475664]]
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=384a1f88-4f9a-4970-b1b4-dd73aa2bceab, topVer=0, 
msgTemplate=null, span=null, nodeId8=265bf6fc, msg=, type=NODE_JOINED, 
tstamp=1595236109924], val2=AffinityTopologyVersion 
[topVer=-8910998317732334030, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryEvent [evtNode=384a1f88-4f9a-4970-b1b4-dd73aa2bceab, topVer=0, 
msgTemplate=null, span=null, nodeId8=265bf6fc, msg=, type=NODE_JOINED, 
tstamp=1595236109924], val2=AffinityTopologyVersion 
[topVer=-8910998317732334030, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.topologyVersion[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=86e4b7b6371-39295c77-2897-42d2-a8f2-1b8bf5d72136, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=a6624906-a436-475e-aa01-3599a93b22ad, topVer=0, msgTemplate=null, 
span=null, nodeId8=a6624906, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595236109924]], val2=AffinityTopologyVersion 
[topVer=7906347396054888457, minorTopVer=0]]] - PASSED{color}
* {color:#013220}IgniteServiceGridTestSuite: 
ServiceDeploymentProcessIdSelfTest.requestId[Test event=IgniteBiTuple 
[val1=DiscoveryCustomEvent [customMsg=ServiceChangeBatchRequest 
[id=86e4b7b6371-39295c77-2897-42d2-a8f2-1b8bf5d72136, reqs=SingletonList 
[ServiceUndeploymentRequest []]], affTopVer=null, super=DiscoveryEvent 
[evtNode=a6624906-a436-475e-aa01-3599a93b22ad, topVer=0, msgTemplate=null, 
span=null, nodeId8=a6624906, msg=null, type=DISCOVERY_CUSTOM_EVT, 
tstamp=1595236109924]], val2=AffinityTopologyVersion 
[topVer=7906347396054888457, minorTopVer=0]]] - PASSED{color}

{color:#8b}Control Utility{color} [[tests 
22|https://ci.ignite.apache.org/viewLog.html?buildId=5475685]]
* {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerIndexForceRebuildTest.testIndexR

[jira] [Comment Edited] (IGNITE-13270) Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5

2020-07-21 Thread Tanmay Ambre (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161780#comment-17161780
 ] 

Tanmay Ambre edited comment on IGNITE-13270 at 7/21/20, 7:07 AM:
-

Have taken multiple snapshots [for the same thread]

 

1

--

   org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.read 
line: 5896

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1365

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doFind 
line: 1332

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findOne 
line: 1300

   org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.find 
line: 360

   org.h2.index.BaseIndex.find line: 130

   org.h2.index.IndexCursor.find line: 176

   org.h2.table.TableFilter.next line: 471

   org.h2.table.TableFilter.next line: 541

   org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow line: 1452

   org.h2.result.LazyResult.hasNext line: 79

   org.h2.result.LazyResult.next line: 59

   org.h2.command.dml.Select.queryFlat line: 527

   org.h2.command.dml.Select.queryWithoutCache line: 633

   org.h2.command.dml.Query.queryWithoutCacheLazyCheck line: 114

   org.h2.command.dml.Query.query line: 352

   org.h2.command.dml.Query.query line: 333

   org.h2.command.CommandContainer.query line: 114

   org.h2.command.Command.executeQuery line: 202

   org.h2.jdbc.JdbcPreparedStatement.executeQuery line: 114

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery 
line: 824

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer
 line: 912

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0
 line: 417

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest
 line: 242

   org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage 
line: 2138

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$17 
line: 2095

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$$Lambda$482/222427158.onMessage
 line: not available

   
org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage
 line: 3379

   
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener 
line: 1843

   
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0
 line: 1468

   org.apache.ignite.internal.managers.communication.GridIoManager.access$5200 
line: 229

   org.apache.ignite.internal.managers.communication.GridIoManager$9.run line: 
1365

   java.util.concurrent.ThreadPoolExecutor.runWorker line: 1149

   java.util.concurrent.ThreadPoolExecutor$Worker.run line: 624

   java.lang.Thread.run line: 748


was (Author: tambre):
Have taken multiple snapshots

 

1

--

   org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.read 
line: 5896

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1365

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doFind 
line: 1332

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findOne 
line: 1300

   org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.find 
line: 360

   org.h2.index.BaseIndex.find line: 130

   org.h2.index.IndexCursor.find line: 176

   org.h2.table.TableFilter.next line: 471

   org.h2.table.TableFilter.next line: 541

   org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow line: 1452

   org.h2.result.LazyResult.hasNext line: 79

   org.h2.result.LazyResult.next line: 59

   org.h2.command.dml.Select.queryFlat line: 527

   org.h2.command.dml.Select.queryWithoutCache line: 633

   org.h2.command.dml.Query.queryWithoutCacheLazyCheck line: 114

   org.h2.command.dml.Query.query line: 352

   org.h2.command.dml.Query.query line: 333

   org.h2.command.CommandContainer.query line: 114

   org.h2.command.Command.executeQuery line: 202

   org.h2.jdbc.JdbcPreparedStatement.executeQuery line: 114

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery 
line: 824

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer
 line: 912

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0
 line: 417

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest
 line: 242

   org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.o

[jira] [Commented] (IGNITE-13270) Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5

2020-07-21 Thread Tanmay Ambre (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161784#comment-17161784
 ] 

Tanmay Ambre commented on IGNITE-13270:
---

Trace 5: 

   org.h2.value.Value.cache line: 410

   org.h2.value.ValueString.get line: 153

   org.h2.value.ValueString.get line: 134

   org.apache.ignite.internal.processors.query.h2.H2Utils.wrap line: 582

   org.apache.ignite.internal.processors.query.h2.opt.H2CacheRow.wrap line: 169

   org.apache.ignite.internal.processors.query.h2.opt.H2CacheRow.getValue0 
line: 104

   org.apache.ignite.internal.processors.query.h2.opt.H2CacheRow.getValue line: 
86

   org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare line: 
392

   org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare line: 
63

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.compare 
line: 5200

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$AbstractForwardCursor.findLowerBound
 line: 5317

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.fillFromBuffer0
 line: 5588

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$AbstractForwardCursor.fillFromBuffer
 line: 5376

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$AbstractForwardCursor.nextPage
 line: 5428

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.next
 line: 5661

   org.apache.ignite.internal.processors.query.h2.H2Cursor.next line: 66

   org.h2.index.IndexCursor.next line: 316

   org.h2.table.TableFilter.next line: 502

   org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow line: 1452

   org.h2.result.LazyResult.hasNext line: 79

   org.h2.result.LazyResult.next line: 59

   org.h2.command.dml.Select.queryFlat line: 527

   org.h2.command.dml.Select.queryWithoutCache line: 633

   org.h2.command.dml.Query.queryWithoutCacheLazyCheck line: 114

   org.h2.command.dml.Query.query line: 352

   org.h2.command.dml.Query.query line: 333

   org.h2.command.CommandContainer.query line: 114

   org.h2.command.Command.executeQuery line: 202

   org.h2.jdbc.JdbcPreparedStatement.executeQuery line: 114

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery 
line: 824

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer
 line: 912

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0
 line: 417

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest
 line: 242

   org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage 
line: 2138

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$17 
line: 2095

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$$Lambda$482/222427158.onMessage
 line: not available

   
org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage
 line: 3379

   
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener 
line: 1843

   
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0
 line: 1468

   org.apache.ignite.internal.managers.communication.GridIoManager.access$5200 
line: 229

   org.apache.ignite.internal.managers.communication.GridIoManager$9.run line: 
1365

   java.util.concurrent.ThreadPoolExecutor.runWorker line: 1149

   java.util.concurrent.ThreadPoolExecutor$Worker.run line: 624

   java.lang.Thread.run line: 748

> Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5
> --
>
> Key: IGNITE-13270
> URL: https://issues.apache.org/jira/browse/IGNITE-13270
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8, 2.8.1
>Reporter: Tanmay Ambre
>Priority: Major
>
> After upgrading to Ignite 2.8.0 from 2.7.5 the following query is causing 
> massive CPU burn. We have an 8-node server cluster for Ignite (with 32 CPUs on 
> each node). With 2.8.0, 2 nodes are maxed out on CPU; with 2.8.1 all nodes 
> are above 80%. On 2.7.5 the same query was performing very nicely. 
> The query is a join of 3 tables sharing the same affinity key. However, we don't 
> know the affinity key when we are querying, and therefore this results in a merge 
> scan. 
> Query:
> select A.A_ID, A.AFFINITYKEY, B.B_ID from A, B, C
> where
> A.A_ID = B.A_ID AND
> B.B_ID = C.B_ID AND
> A.AFFINITYKEY = B.AFFINITYKEY AND
> B.AFFINITYKEY = C.AFFINITYKEY AND
> B.B_ID = ? AND
> C.C_ID = ?
>  
> Performance:
> 2.7.5 ~ 2 to 5 ms
> 2.8.0 ~ 6 to 12 seconds
> 2.8.1 ~ within 3 seconds
>  
> Volume: we have an EDA-based system where messages come in bursts. One burst 
>

[jira] [Commented] (IGNITE-13270) Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5

2020-07-21 Thread Tanmay Ambre (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161783#comment-17161783
 ] 

Tanmay Ambre commented on IGNITE-13270:
---

Trace 4:

   org.apache.ignite.internal.processors.cache.persistence.DataStructure.read 
line: 341

   org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.read 
line: 5876

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.askNeighbor
 line: 2569

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$1300
 line: 94

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run0
 line: 342

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run
 line: 5709

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run
 line: 278

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run
 line: 5695

   
org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.readPage
 line: 169

   org.apache.ignite.internal.processors.cache.persistence.DataStructure.read 
line: 364

   org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.read 
line: 5896

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1365

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doFind 
line: 1332

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findOne 
line: 1300

   org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.find 
line: 360

   org.h2.index.BaseIndex.find line: 130

   org.h2.index.IndexCursor.find line: 176

   org.h2.table.TableFilter.next line: 471

   org.h2.table.TableFilter.next line: 541

   org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow line: 1452

   org.h2.result.LazyResult.hasNext line: 79

   org.h2.result.LazyResult.next line: 59

   org.h2.command.dml.Select.queryFlat line: 527

   org.h2.command.dml.Select.queryWithoutCache line: 633

   org.h2.command.dml.Query.queryWithoutCacheLazyCheck line: 114

   org.h2.command.dml.Query.query line: 352

   org.h2.command.dml.Query.query line: 333

   org.h2.command.CommandContainer.query line: 114

   org.h2.command.Command.executeQuery line: 202

   org.h2.jdbc.JdbcPreparedStatement.executeQuery line: 114

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery 
line: 824

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer
 line: 912

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0
 line: 417

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest
 line: 242

   org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage 
line: 2138

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$17 
line: 2095

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$$Lambda$482/222427158.onMessage
 line: not available

   
org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage
 line: 3379

   
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener 
line: 1843

   
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0
 line: 1468

   org.apache.ignite.internal.managers.communication.GridIoManager.access$5200 
line: 229

   org.apache.ignite.internal.managers.communication.GridIoManager$9.run line: 
1365

   java.util.concurrent.ThreadPoolExecutor.runWorker line: 1149

   java.util.concurrent.ThreadPoolExecutor$Worker.run line: 624

   java.lang.Thread.run line: 748

> Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5
> --
>
> Key: IGNITE-13270
> URL: https://issues.apache.org/jira/browse/IGNITE-13270
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8, 2.8.1
>Reporter: Tanmay Ambre
>Priority: Major
>
> After upgrading to ignite 2.8.0 fr

[jira] [Commented] (IGNITE-13270) Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5

2020-07-21 Thread Tanmay Ambre (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161781#comment-17161781
 ] 

Tanmay Ambre commented on IGNITE-13270:
---

Trace 2:

   org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.acquirePage 
line: 481

   
org.apache.ignite.internal.processors.cache.persistence.DataStructure.acquirePage
 line: 158

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.acquirePage
 line: 5858

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$AbstractForwardCursor.nextPage
 line: 5417

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$ForwardCursor.next
 line: 5661

   org.apache.ignite.internal.processors.query.h2.H2Cursor.next line: 66

   org.h2.index.IndexCursor.next line: 316

   org.h2.table.TableFilter.next line: 502

   org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow line: 1452

   org.h2.result.LazyResult.hasNext line: 79

   org.h2.result.LazyResult.next line: 59

   org.h2.command.dml.Select.queryFlat line: 527

   org.h2.command.dml.Select.queryWithoutCache line: 633

   org.h2.command.dml.Query.queryWithoutCacheLazyCheck line: 114

   org.h2.command.dml.Query.query line: 352

   org.h2.command.dml.Query.query line: 333

   org.h2.command.CommandContainer.query line: 114

   org.h2.command.Command.executeQuery line: 202

   org.h2.jdbc.JdbcPreparedStatement.executeQuery line: 114

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery 
line: 824

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer
 line: 912

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0
 line: 417

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest
 line: 242

   org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage 
line: 2138

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$17 
line: 2095

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$$Lambda$482/222427158.onMessage
 line: not available

   
org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage
 line: 3379

   
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener 
line: 1843

   
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0
 line: 1468

   org.apache.ignite.internal.managers.communication.GridIoManager.access$5200 
line: 229

   org.apache.ignite.internal.managers.communication.GridIoManager$9.run line: 
1365

   java.util.concurrent.ThreadPoolExecutor.runWorker line: 1149

   java.util.concurrent.ThreadPoolExecutor$Worker.run line: 624

   java.lang.Thread.run line: 748

> Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5
> --
>
> Key: IGNITE-13270
> URL: https://issues.apache.org/jira/browse/IGNITE-13270
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8, 2.8.1
>Reporter: Tanmay Ambre
>Priority: Major
>
> After upgrading to Ignite 2.8.0 from 2.7.5 the following query is causing 
> massive CPU burn. We have an 8-node server cluster for Ignite (with 32 CPUs on 
> each node). With 2.8.0, 2 nodes are maxed out on CPU; with 2.8.1 all nodes 
> are above 80%. On 2.7.5 the same query was performing very nicely. 
> The query is a join of 3 tables sharing the same affinity key. However, we don't 
> know the affinity key when we are querying, and therefore this results in a merge 
> scan. 
> Query:
> select A.A_ID, A.AFFINITYKEY, B.B_ID from A, B, C
> where
> A.A_ID = B.A_ID AND
> B.B_ID = C.B_ID AND
> A.AFFINITYKEY = B.AFFINITYKEY AND
> B.AFFINITYKEY = C.AFFINITYKEY AND
> B.B_ID = ? AND
> C.C_ID = ?
>  
> Performance:
> 2.7.5 ~ 2 to 5 ms
> 2.8.0 ~ 6 to 12 seconds
> 2.8.1 ~ within 3 seconds
>  
> Volume: we have an EDA-based system where messages come in bursts. One burst 
> has approx. 150K events - each event corresponds to one query. 
>  
> Throughput of our Java-based processor (that fires this query):
> 2.7.5 ~ 750 transactions per second
> 2.8.0 ~ 3 transactions per second
> 2.8.1 ~ 6 transactions per second
>  
> CPU burn
> 2.7.5 ~ 10 to 15% on each node
> 2.8.0 ~ 100% on 2 nodes other nodes are idling
> 2.8.1 ~ 80 to 90% on each node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (IGNITE-13270) Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5

2020-07-21 Thread Tanmay Ambre (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161782#comment-17161782
 ] 

Tanmay Ambre commented on IGNITE-13270:
---

Trace 3:

   
org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.readPage
 line: 165

   org.apache.ignite.internal.processors.cache.persistence.DataStructure.read 
line: 364

   org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.read 
line: 5896

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1365

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doFind 
line: 1332

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findOne 
line: 1300

   org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.find 
line: 360

   org.h2.index.BaseIndex.find line: 130

   org.h2.index.IndexCursor.find line: 176

   org.h2.table.TableFilter.next line: 471

   org.h2.table.TableFilter.next line: 541

   org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow line: 1452

   org.h2.result.LazyResult.hasNext line: 79

   org.h2.result.LazyResult.next line: 59

   org.h2.command.dml.Select.queryFlat line: 527

   org.h2.command.dml.Select.queryWithoutCache line: 633

   org.h2.command.dml.Query.queryWithoutCacheLazyCheck line: 114

   org.h2.command.dml.Query.query line: 352

   org.h2.command.dml.Query.query line: 333

   org.h2.command.CommandContainer.query line: 114

   org.h2.command.Command.executeQuery line: 202

   org.h2.jdbc.JdbcPreparedStatement.executeQuery line: 114

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery 
line: 824

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer
 line: 912

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0
 line: 417

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest
 line: 242

   org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage 
line: 2138

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$17 
line: 2095

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$$Lambda$482/222427158.onMessage
 line: not available

   
org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage
 line: 3379

   
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener 
line: 1843

   
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0
 line: 1468

   org.apache.ignite.internal.managers.communication.GridIoManager.access$5200 
line: 229

   org.apache.ignite.internal.managers.communication.GridIoManager$9.run line: 
1365

   java.util.concurrent.ThreadPoolExecutor.runWorker line: 1149

   java.util.concurrent.ThreadPoolExecutor$Worker.run line: 624

   java.lang.Thread.run line: 748

> Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5
> --
>
> Key: IGNITE-13270
> URL: https://issues.apache.org/jira/browse/IGNITE-13270
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8, 2.8.1
>Reporter: Tanmay Ambre
>Priority: Major
>
> After upgrading to Ignite 2.8.0 from 2.7.5 the following query is causing 
> massive CPU burn. We have an 8-node server cluster for Ignite (with 32 CPUs on 
> each node). With 2.8.0, 2 nodes are maxed out on CPU; with 2.8.1 all nodes 
> are above 80%. On 2.7.5 the same query was performing very nicely. 
> The query is a join of 3 tables sharing the same affinity key. However, we don't 
> know the affinity key when we are querying, and therefore this results in a merge 
> scan. 
> Query:
> select A.A_ID, A.AFFINITYKEY, B.B_ID from A, B, C
> where
> A.A_ID = B.A_ID AND
> B.B_ID = C.B_ID AND
> A.AFFINITYKEY = B.AFFINITYKEY AND
> B.AFFINITYKEY = C.AFFINITYKEY AND
> B.B_ID = ? AND
> C.C_ID = ?
>  
> Performance:
> 2.7.5 ~ 2 to 5 ms
> 2.8.0 ~ 6 to 12 seconds
> 2.8.1 ~ within 3 seconds
>  
> Volume (we have a EDA based system wher

[jira] [Commented] (IGNITE-13270) Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5

2020-07-21 Thread Tanmay Ambre (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17161780#comment-17161780
 ] 

Tanmay Ambre commented on IGNITE-13270:
---

Have taken multiple snapshots

 

1

--

   org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.read 
line: 5896

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1365

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findDown 
line: 1374

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doFind 
line: 1332

   
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findOne 
line: 1300

   org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.find 
line: 360

   org.h2.index.BaseIndex.find line: 130

   org.h2.index.IndexCursor.find line: 176

   org.h2.table.TableFilter.next line: 471

   org.h2.table.TableFilter.next line: 541

   org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow line: 1452

   org.h2.result.LazyResult.hasNext line: 79

   org.h2.result.LazyResult.next line: 59

   org.h2.command.dml.Select.queryFlat line: 527

   org.h2.command.dml.Select.queryWithoutCache line: 633

   org.h2.command.dml.Query.queryWithoutCacheLazyCheck line: 114

   org.h2.command.dml.Query.query line: 352

   org.h2.command.dml.Query.query line: 333

   org.h2.command.CommandContainer.query line: 114

   org.h2.command.Command.executeQuery line: 202

   org.h2.jdbc.JdbcPreparedStatement.executeQuery line: 114

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQuery 
line: 824

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeSqlQueryWithTimer
 line: 912

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest0
 line: 417

   
org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor.onQueryRequest
 line: 242

   org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.onMessage 
line: 2138

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.lambda$start$17 
line: 2095

   
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$$Lambda$482/222427158.onMessage
 line: not available

   
org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage
 line: 3379

   
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener 
line: 1843

   
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0
 line: 1468

   org.apache.ignite.internal.managers.communication.GridIoManager.access$5200 
line: 229

   org.apache.ignite.internal.managers.communication.GridIoManager$9.run line: 
1365

   java.util.concurrent.ThreadPoolExecutor.runWorker line: 1149

   java.util.concurrent.ThreadPoolExecutor$Worker.run line: 624

   java.lang.Thread.run line: 748

> Massive CPU burn in 2.8.0 and 2.8.1 after upgrading from 2.7.5
> --
>
> Key: IGNITE-13270
> URL: https://issues.apache.org/jira/browse/IGNITE-13270
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.8, 2.8.1
>Reporter: Tanmay Ambre
>Priority: Major
>
> After upgrading to Ignite 2.8.0 from 2.7.5 the following query is causing 
> massive CPU burn. We have an 8-node server cluster for Ignite (with 32 CPUs on 
> each node). With 2.8.0, 2 nodes are maxed out on CPU; with 2.8.1 all nodes 
> are above 80%. On 2.7.5 the same query was performing very nicely. 
> The query is a join of 3 tables sharing the same affinity key. However, we don't 
> know the affinity key when we are querying, and therefore this results in a merge 
> scan. 
> Query:
> select A.A_ID, A.AFFINITYKEY, B.B_ID from A, B, C
> where
> A.A_ID = B.A_ID AND
> B.B_ID = C.B_ID AND
> A.AFFINITYKEY = B.AFFINITYKEY AND
> B.AFFINITYKEY = C.AFFINITYKEY AND
> B.B_ID = ? AND
> C.C_ID = ?
>  
> Performance:
> 2.7.5 ~ 2 to 5 ms
> 2.8.0 ~ 6 to 12 seconds
> 2.8.1 ~ within 3 seconds
>  
> Volume: we have an EDA-based system where messages come in bursts. One burst 
> has approx. 150K events - each event corresponds to one query. 
>  
> Throughput of our Java-based processor (that fires this query):
> 2.7.5 ~ 750 transactions per second
> 2.8.0 ~ 3 transactions per second
> 2.8.1 ~ 6 transactions per second
>  
> CPU burn
> 2.7.5 ~ 10 to 15% on each node
> 2.8.0 ~ 100% on 2 nodes other nodes are idling
> 2.8.1 ~ 80 to 90% on each node
>  
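
As a reference point for the latency numbers above, here is a minimal Java sketch of how such a 
parameterized join can be issued and timed through Ignite's SQL API; the cache name and argument 
values are placeholders and are not taken from the reporter's code:

{code}
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class JoinLatencyProbe {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<?, ?> cache = ignite.cache("A"); // any SQL-enabled cache; "A" is assumed

            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT A.A_ID, A.AFFINITYKEY, B.B_ID FROM A, B, C " +
                "WHERE A.A_ID = B.A_ID AND B.B_ID = C.B_ID " +
                "AND A.AFFINITYKEY = B.AFFINITYKEY AND B.AFFINITYKEY = C.AFFINITYKEY " +
                "AND B.B_ID = ? AND C.C_ID = ?")
                .setArgs("someBId", "someCId"); // placeholder keys

            long start = System.nanoTime();
            List<List<?>> rows = cache.query(qry).getAll(); // runs the distributed join
            long tookMs = (System.nanoTime() - start) / 1_000_000;

            System.out.println("rows=" + rows.size() + ", took=" + tookMs + " ms");
        }
    }
}
{code}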



--
This message was sent by Atlassian Jira
(v8.3.4#803005)