[jira] [Updated] (IGNITE-8728) Baselined node rejoining crashes other baseline nodes - Duplicate Key Error
[ https://issues.apache.org/jira/browse/IGNITE-8728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahesh Renduchintala updated IGNITE-8728: - Priority: Critical (was: Major) > Baselined node rejoining crashes other baseline nodes - Duplicate Key Error > --- > > Key: IGNITE-8728 > URL: https://issues.apache.org/jira/browse/IGNITE-8728 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.7 >Reporter: Mahesh Renduchintala >Priority: Critical > Attachments: NS1_ignite-9676df15.0.log, NS2_ignite-7cfc8008.0.log, > node-config.xml > > > I have two nodes on which we have 3 tables, which are partitioned. Indexes are > also built on these tables. > For 24 hours the caches work fine. The tables are definitely distributed across > both nodes. > Node 2 reboots due to some issue - goes out of the baseline - comes back and > joins the baseline. The other baseline nodes crash, and in the logs we see a > duplicate key error: > [10:38:35,437][INFO][tcp-disco-srvr-#2|#2][TcpDiscoverySpi] TCP discovery > accepted incoming connection [rmtAddr=/192.168.1.7, rmtPort=45102] > [10:38:35,437][INFO][tcp-disco-srvr-#2|#2][TcpDiscoverySpi] TCP discovery > spawning a new thread for connection [rmtAddr=/192.168.1.7, rmtPort=45102] > [10:38:35,437][INFO][tcp-disco-sock-reader-#12|#12][TcpDiscoverySpi] Started > serving remote node connection [rmtAddr=/192.168.1.7:45102, rmtPort=45102] > [10:38:35,451][INFO][tcp-disco-sock-reader-#12|#12][TcpDiscoverySpi] > Finished serving remote node connection [rmtAddr=/192.168.1.7:45102, > rmtPort=45102] > [10:38:35,457][SEVERE][tcp-disco-msg-worker-#3|#3][TcpDiscoverySpi] > TcpDiscoverySpi's message worker thread failed abnormally. Stopping the node > in order to prevent cluster wide instability. 
> *java.lang.IllegalStateException: Duplicate key* > at org.apache.ignite.cache.QueryEntity.checkIndexes(QueryEntity.java:223) > at org.apache.ignite.cache.QueryEntity.makePatch(QueryEntity.java:174) > at > org.apache.ignite.internal.processors.query.QuerySchema.makePatch(QuerySchema.java:114) > at > org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor.makeSchemaPatch(DynamicCacheDescriptor.java:360) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.validateNode(GridCacheProcessor.java:2536) > at > org.apache.ignite.internal.managers.GridManagerAdapter$1.validateNode(GridManagerAdapter.java:566) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processJoinRequestMessage(ServerImpl.java:3629) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2736) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621) > at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) > [10:38:35,459][SEVERE][tcp-disco-msg-worker-#3|#3][] Critical system error > detected. 
Will be handled accordingly to configured handler [hnd=class > o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext > [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: > Duplicate key]] > java.lang.IllegalStateException: Duplicate key > at org.apache.ignite.cache.QueryEntity.checkIndexes(QueryEntity.java:223) > at org.apache.ignite.cache.QueryEntity.makePatch(QueryEntity.java:174) > at > org.apache.ignite.internal.processors.query.QuerySchema.makePatch(QuerySchema.java:114) > at > org.apache.ignite.internal.processors.cache.DynamicCacheDescriptor.makeSchemaPatch(DynamicCacheDescriptor.java:360) > at > org.apache.ignite.internal.processors.cache.GridCacheProcessor.validateNode(GridCacheProcessor.java:2536) > at > org.apache.ignite.internal.managers.GridManagerAdapter$1.validateNode(GridManagerAdapter.java:566) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processJoinRequestMessage(ServerImpl.java:3629) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2736) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621) > at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) > [10:38:35,460][SEVERE][tcp-disco-msg-worker-#3|#3][] JVM will be halted > immediately due to the failure: [failureCtx=FailureContext > [type=SYSTEM_WORKER_TERMINATION,
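The `IllegalStateException: Duplicate key` in the trace above is the stock message `java.util.stream.Collectors.toMap` throws when two stream elements map to the same key. That is consistent with `QueryEntity.checkIndexes` building a name-keyed map of index definitions and hitting two indexes with the same name while diffing schemas on rejoin — an inference from the trace, not confirmed against Ignite source. A minimal stdlib sketch of that failure mode, with hypothetical index names:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DuplicateKeyDemo {
    /** Builds a name -> definition map, as a schema index check might do.
     *  Index names and definitions here are made up for illustration. */
    static Map<String, String> indexByName(List<String[]> idxs) {
        // Collectors.toMap with no merge function throws
        // IllegalStateException("Duplicate key ...") on a name collision.
        return idxs.stream().collect(Collectors.toMap(i -> i[0], i -> i[1]));
    }

    public static void main(String[] args) {
        try {
            indexByName(List.of(
                new String[] {"PERSON_NAME_IDX", "name ASC"},
                new String[] {"PERSON_NAME_IDX", "name DESC"})); // same name twice
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // message starts with "Duplicate key"
        }
    }
}
```

With distinct names the map builds normally; the exception only fires on a collision, which would explain why the cluster was stable until the node rejoined with a conflicting schema patch.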
[jira] [Updated] (IGNITE-10611) Web Console: Refactor ActivitiesService to use user object instead of id.
[ https://issues.apache.org/jira/browse/IGNITE-10611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov updated IGNITE-10611: -- Summary: Web Console: Refactor ActivitiesService to use user object instead of id. (was: Web Console: Refactor ActivitiesService to use uer object instead of id.) > Web Console: Refactor ActivitiesService to use user object instead of id. > - > > Key: IGNITE-10611 > URL: https://issues.apache.org/jira/browse/IGNITE-10611 > Project: Ignite > Issue Type: Improvement > Components: wizards >Reporter: Alexey Kuznetsov >Assignee: Alexey Kuznetsov >Priority: Major > Fix For: 2.8 > > > It is better to pass a "user" object because it contains more info. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10897) Blocked drop table operations cause strange issues
[ https://issues.apache.org/jira/browse/IGNITE-10897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] JiajunYang updated IGNITE-10897: Description: Steps to reproduce a blocked drop table operation: 1. Create a table and put some data into a node with persistence enabled. 2. Run a select query against the table that takes a long time. 3. Drop the table before the select query ends. Then you will see the drop table operation block until the select query ends. A strange issue caused by blocked drop table operations: 1. Run another drop table operation while there is a blocked drop table operation. This operation will also block. 2. Try to recreate the table with the same name. Then you will see a "table already exists" exception. 3. Try to drop the table again; then you will see a "Table doesn't exist" exception. was: Reproduce steps to get a blocked drop table operation: 1.Create a table and put some data to a node with persistence enabled. 2.Do a select query targeting on the table which runs long time. 3.Drop the table before the select query end. Then u will see drop table operation blocks until the select query end. A strange issue cause by blocked drop table operations: 1.Do another drop table operation when there is a blocked drop table operation.This operation will also block. 2.Try to recreate the table with same name.When you can see "table already exists" exception. 3.Try to drop the table again,then you can see "Table doesn't exist" exception. > Blocked drop table operations cause strange issues > -- > > Key: IGNITE-10897 > URL: https://issues.apache.org/jira/browse/IGNITE-10897 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.6, 2.7 >Reporter: JiajunYang >Priority: Major > > Steps to reproduce a blocked drop table operation: > 1. Create a table and put some data into a node with persistence enabled. > 2. Run a select query against the table that takes a long time. > 3. Drop the table before the select query ends. 
> Then you will see the drop table operation block until the select query ends. > A strange issue caused by blocked drop table operations: > 1. Run another drop table operation while there is a blocked drop table > operation. This operation will also block. > 2. Try to recreate the table with the same name. Then you will see a "table > already exists" exception. > 3. Try to drop the table again; then you will see a > "Table doesn't exist" exception. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
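The blocking in the steps above can be pictured as a reader/writer conflict: the long SELECT holds a shared lock on the table while DROP TABLE needs an exclusive one, so the drop waits for the query to finish. This is only a stdlib analogy built on that assumption, not Ignite's actual table-lock code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DropBlocksDemo {
    /** Returns true if the "DROP" acquired the exclusive lock within the timeout. */
    static boolean tryDropWhileSelectRuns(long timeoutMs) throws InterruptedException {
        ReentrantReadWriteLock tableLock = new ReentrantReadWriteLock();
        CountDownLatch selectStarted = new CountDownLatch(1);

        Thread select = new Thread(() -> {
            tableLock.readLock().lock();       // long-running SELECT holds a shared lock
            selectStarted.countDown();
            try {
                Thread.sleep(500);             // simulate a slow query
            } catch (InterruptedException ignored) {
            } finally {
                tableLock.readLock().unlock();
            }
        });
        select.start();
        selectStarted.await();

        // DROP TABLE needs the exclusive lock; it blocks until the SELECT finishes.
        boolean acquired = tableLock.writeLock().tryLock(timeoutMs, TimeUnit.MILLISECONDS);
        if (acquired)
            tableLock.writeLock().unlock();
        select.join();
        return acquired;
    }
}
```

With a timeout shorter than the query the drop times out; with a longer one it eventually succeeds, matching "drop table operation blocks until the select query ends".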
[jira] [Closed] (IGNITE-10533) Web Console: Images outdated on web site.
[ https://issues.apache.org/jira/browse/IGNITE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Magda closed IGNITE-10533. > Web Console: Images outdated on web site. > - > > Key: IGNITE-10533 > URL: https://issues.apache.org/jira/browse/IGNITE-10533 > Project: Ignite > Issue Type: Task > Components: documentation >Reporter: Alexey Kuznetsov >Assignee: Denis Magda >Priority: Major > > For example see: https://apacheignite-tools.readme.io/docs/ignite-web-console -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10533) Web Console: Images outdated on web site.
[ https://issues.apache.org/jira/browse/IGNITE-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740858#comment-16740858 ] Denis Magda commented on IGNITE-10533: -- Looks good thanks! > Web Console: Images outdated on web site. > - > > Key: IGNITE-10533 > URL: https://issues.apache.org/jira/browse/IGNITE-10533 > Project: Ignite > Issue Type: Task > Components: documentation >Reporter: Alexey Kuznetsov >Assignee: Denis Magda >Priority: Major > > For example see: https://apacheignite-tools.readme.io/docs/ignite-web-console -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10909) GridCacheBalancingStoreSelfTest.testConcurrentLoad flaky test fail in Cache 1
[ https://issues.apache.org/jira/browse/IGNITE-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10909: Ignite Flags: (was: Docs Required) > GridCacheBalancingStoreSelfTest.testConcurrentLoad flaky test fail in Cache 1 > - > > Key: IGNITE-10909 > URL: https://issues.apache.org/jira/browse/IGNITE-10909 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Assignee: Dmitriy Govorukhin >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10909) GridCacheBalancingStoreSelfTest.testConcurrentLoad flaky test fail in Cache 1
[ https://issues.apache.org/jira/browse/IGNITE-10909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10909: Description: {code} junit.framework.AssertionFailedError: Redundant load call. expected:<1> but was:<2> at junit.framework.Assert.fail(Assert.java:57) at junit.framework.Assert.failNotEquals(Assert.java:329) at junit.framework.Assert.assertEquals(Assert.java:78) at junit.framework.Assert.assertEquals(Assert.java:234) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$ConcurrentVerifyStore.load(GridCacheBalancingStoreSelfTest.java:363) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$ConcurrentVerifyStore.load(GridCacheBalancingStoreSelfTest.java:340) at org.apache.ignite.internal.processors.cache.CacheStoreBalancingWrapper.load(CacheStoreBalancingWrapper.java:98) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$2.run(GridCacheBalancingStoreSelfTest.java:180) at org.apache.ignite.testframework.GridTestUtils$7.call(GridTestUtils.java:1306) at org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:84) junit.framework.AssertionFailedError: Redundant load call. 
expected:<1> but was:<2> at junit.framework.Assert.fail(Assert.java:57) at junit.framework.Assert.failNotEquals(Assert.java:329)Failure in thread: Thread [id=25256, name=load-thread-3] at junit.framework.Assert.assertEquals(Assert.java:78) at junit.framework.Assert.assertEquals(Assert.java:234) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$ConcurrentVerifyStore.load(GridCacheBalancingStoreSelfTest.java:363) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$ConcurrentVerifyStore.load(GridCacheBalancingStoreSelfTest.java:340) at org.apache.ignite.internal.processors.cache.CacheStoreBalancingWrapper.load(CacheStoreBalancingWrapper.java:98) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$2.run(GridCacheBalancingStoreSelfTest.java:180) at org.apache.ignite.testframework.GridTestUtils$7.call(GridTestUtils.java:1306) at org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:84) junit.framework.AssertionFailedError: Redundant load call. 
expected:<1> but was:<2> at junit.framework.Assert.fail(Assert.java:57) at junit.framework.Assert.failNotEquals(Assert.java:329) at junit.framework.Assert.assertEquals(Assert.java:78) at junit.framework.Assert.assertEquals(Assert.java:234) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$ConcurrentVerifyStore.load(GridCacheBalancingStoreSelfTest.java:363) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$ConcurrentVerifyStore.load(GridCacheBalancingStoreSelfTest.java:340) at org.apache.ignite.internal.processors.cache.CacheStoreBalancingWrapper.load(CacheStoreBalancingWrapper.java:98) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$2.run(GridCacheBalancingStoreSelfTest.java:180) at org.apache.ignite.testframework.GridTestUtils$7.call(GridTestUtils.java:1306) at org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:84) java.lang.RuntimeException: java.lang.InterruptedException at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$2.run(GridCacheBalancingStoreSelfTest.java:175) at org.apache.ignite.testframework.GridTestUtils$7.call(GridTestUtils.java:1306) at org.apache.ignite.testframework.GridTestThread.run(GridTestThread.java:84) Caused by: java.lang.InterruptedException at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2048) at java.util.concurrent.CyclicBarrier.dowait(CyclicBarrier.java:234) at java.util.concurrent.CyclicBarrier.await(CyclicBarrier.java:362) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$2.run(GridCacheBalancingStoreSelfTest.java:172) ... 2 more junit.framework.AssertionFailedError: Redundant load call. 
expected:<1> but was:<2> at junit.framework.Assert.fail(Assert.java:57) at junit.framework.Assert.failNotEquals(Assert.java:329) at junit.framework.Assert.assertEquals(Assert.java:78) at junit.framework.Assert.assertEquals(Assert.java:234) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$ConcurrentVerifyStore.load(GridCacheBalancingStoreSelfTest.java:363) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$ConcurrentVerifyStore.load(GridCacheBalancingStoreSelfTest.java:340) at org.apache.ignite.internal.processors.cache.CacheStoreBalancingWrapper.load(CacheStoreBalancingWrapper.java:98) at org.apache.ignite.cache.store.GridCacheBalancingStoreSelfTest$2.run(GridCacheBalancingStoreSelfTest.java:180) at org.apache.ignite.testframework.GridTestUtils$7.call(GridTestUtils.java:1306)
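The failing assertion "Redundant load call. expected:<1> but was:<2>" checks that `CacheStoreBalancingWrapper` lets concurrent loads of the same key hit the underlying store only once. The following is not the wrapper's actual implementation, just a minimal sketch of that coalescing property using `computeIfAbsent` over an in-flight-future map:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadCoalescingDemo {
    private final ConcurrentHashMap<Integer, CompletableFuture<String>> inFlight = new ConcurrentHashMap<>();
    private final AtomicInteger storeCalls = new AtomicInteger();

    /** The "real" store load: slow, and expected to run once per key. */
    private String loadFromStore(int key) {
        storeCalls.incrementAndGet();
        try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        return "val-" + key;
    }

    /** Concurrent callers for the same key share a single in-flight load. */
    public String load(int key) {
        CompletableFuture<String> f = inFlight.computeIfAbsent(key,
            k -> CompletableFuture.supplyAsync(() -> loadFromStore(k)));
        try {
            return f.join();
        } finally {
            inFlight.remove(key, f);   // allow future reloads once this one completes
        }
    }

    /** Fires `threads` simultaneous load(key) calls; returns how often the store was hit. */
    public static int concurrentLoads(int threads, int key) throws Exception {
        LoadCoalescingDemo demo = new LoadCoalescingDemo();
        CyclicBarrier start = new CyclicBarrier(threads);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++)
            pool.submit(() -> { start.await(); return demo.load(key); });
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return demo.storeCalls.get();
    }
}
```

The flakiness reported here suggests a window in the real wrapper where one caller can slip past the deduplication and trigger a second store call; in this sketch the analogous window would be a caller arriving after the shared future is removed.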
[jira] [Created] (IGNITE-10909) GridCacheBalancingStoreSelfTest.testConcurrentLoad flaky test fail in Cache 1
Dmitriy Govorukhin created IGNITE-10909: --- Summary: GridCacheBalancingStoreSelfTest.testConcurrentLoad flaky test fail in Cache 1 Key: IGNITE-10909 URL: https://issues.apache.org/jira/browse/IGNITE-10909 Project: Ignite Issue Type: Bug Reporter: Dmitriy Govorukhin Assignee: Dmitriy Govorukhin -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-10908) GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange flaky fail with NPE in Service Grid (legacy mode)
[ https://issues.apache.org/jira/browse/IGNITE-10908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin reassigned IGNITE-10908: --- Assignee: Dmitriy Govorukhin > GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange flaky > fail with NPE in Service Grid (legacy mode) > -- > > Key: IGNITE-10908 > URL: https://issues.apache.org/jira/browse/IGNITE-10908 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Assignee: Dmitriy Govorukhin >Priority: Major > Fix For: 2.8 > > > [TC > link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=1053083353985663802=testDetails] > {code} > java.lang.NullPointerException > at > java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106) > at java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097) > at > org.apache.ignite.internal.processors.service.GridServiceProcessor.deployAll(GridServiceProcessor.java:646) > at > org.apache.ignite.internal.processors.service.GridServiceProcessor.deployAll(GridServiceProcessor.java:589) > at > org.apache.ignite.internal.IgniteServicesImpl.deployAllAsync(IgniteServicesImpl.java:254) > at > org.apache.ignite.internal.processors.service.GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange(GridServiceProcessorBatchDeploySelfTest.java:148) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10908) GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange flaky fail with NPE in Service Grid (legacy mode)
[ https://issues.apache.org/jira/browse/IGNITE-10908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10908: Description: [TC link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=1053083353985663802=testDetails] {code} java.lang.NullPointerException at java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106) at java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097) at org.apache.ignite.internal.processors.service.GridServiceProcessor.deployAll(GridServiceProcessor.java:646) at org.apache.ignite.internal.processors.service.GridServiceProcessor.deployAll(GridServiceProcessor.java:589) at org.apache.ignite.internal.IgniteServicesImpl.deployAllAsync(IgniteServicesImpl.java:254) at org.apache.ignite.internal.processors.service.GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange(GridServiceProcessorBatchDeploySelfTest.java:148) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) at java.lang.Thread.run(Thread.java:748) {code} > GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange flaky > fail with NPE in Service Grid (legacy mode) > -- > > Key: IGNITE-10908 > URL: https://issues.apache.org/jira/browse/IGNITE-10908 > Project: Ignite > Issue Type: Bug >Reporter: 
Dmitriy Govorukhin >Priority: Major > Fix For: 2.8 > > > [TC > link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=1053083353985663802=testDetails] > {code} > java.lang.NullPointerException > at > java.util.concurrent.ConcurrentHashMap.replaceNode(ConcurrentHashMap.java:1106) > at java.util.concurrent.ConcurrentHashMap.remove(ConcurrentHashMap.java:1097) > at > org.apache.ignite.internal.processors.service.GridServiceProcessor.deployAll(GridServiceProcessor.java:646) > at > org.apache.ignite.internal.processors.service.GridServiceProcessor.deployAll(GridServiceProcessor.java:589) > at > org.apache.ignite.internal.IgniteServicesImpl.deployAllAsync(IgniteServicesImpl.java:254) > at > org.apache.ignite.internal.processors.service.GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange(GridServiceProcessorBatchDeploySelfTest.java:148) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
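The NPE at `ConcurrentHashMap.replaceNode`, reached via `remove(...)` from `GridServiceProcessor.deployAll`, matches `ConcurrentHashMap`'s documented behaviour: unlike `HashMap`, it forbids null keys, so `remove(null)` throws. Presumably a null key (e.g. a null service name) reached the map — that cause is an assumption; the null-key rule itself is standard:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NullKeyRemoveDemo {
    /** Returns true if map.remove(null) threw NullPointerException. */
    static boolean removeNullThrows(Map<String, String> map) {
        try {
            map.remove(null);
            return false;
        } catch (NullPointerException e) {
            return true;   // ConcurrentHashMap rejects null keys outright
        }
    }

    public static void main(String[] args) {
        System.out.println(removeNullThrows(new ConcurrentHashMap<>())); // true
        System.out.println(removeNullThrows(new HashMap<>()));           // false
    }
}
```

This is why swapping a plain `HashMap` for a `ConcurrentHashMap` can surface latent null-key bugs only under the concurrent variant.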
[jira] [Created] (IGNITE-10908) GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange flaky fail with NPE in Service Grid (legacy mode)
Dmitriy Govorukhin created IGNITE-10908: --- Summary: GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange flaky fail with NPE in Service Grid (legacy mode) Key: IGNITE-10908 URL: https://issues.apache.org/jira/browse/IGNITE-10908 Project: Ignite Issue Type: Bug Reporter: Dmitriy Govorukhin -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10908) GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange flaky fail with NPE in Service Grid (legacy mode)
[ https://issues.apache.org/jira/browse/IGNITE-10908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10908: Ignite Flags: (was: Docs Required) > GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange flaky > fail with NPE in Service Grid (legacy mode) > -- > > Key: IGNITE-10908 > URL: https://issues.apache.org/jira/browse/IGNITE-10908 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Priority: Major > Fix For: 2.8 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10908) GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange flaky fail with NPE in Service Grid (legacy mode)
[ https://issues.apache.org/jira/browse/IGNITE-10908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10908: Fix Version/s: 2.8 > GridServiceProcessorBatchDeploySelfTest.testDeployAllTopologyChange flaky > fail with NPE in Service Grid (legacy mode) > -- > > Key: IGNITE-10908 > URL: https://issues.apache.org/jira/browse/IGNITE-10908 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Priority: Major > Fix For: 2.8 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10907) IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky failed in Basic 1
[ https://issues.apache.org/jira/browse/IGNITE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10907: Description: [TC link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] > IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky > failed in Basic 1 > > > Key: IGNITE-10907 > URL: https://issues.apache.org/jira/browse/IGNITE-10907 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Assignee: Dmitriy Govorukhin >Priority: Major > Fix For: 2.8 > > > [TC > link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10907) IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky failed in Basic 1
[ https://issues.apache.org/jira/browse/IGNITE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10907: Labels: MakeTeamcityGreenAgain (was: ) > IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky > failed in Basic 1 > > > Key: IGNITE-10907 > URL: https://issues.apache.org/jira/browse/IGNITE-10907 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Assignee: Dmitriy Govorukhin >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.8 > > > The main problem is in the test itself: it assumes that the current thread will > steal a job, but there is a case where this assumption does not hold. > [TC > link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] > {code:java} > java.util.concurrent.ExecutionException: java.lang.AssertionError > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.ignite.internal.util.IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor(IgniteUtilsSelfTest.java:1081) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.AssertionError > at 
org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.ignite.internal.util.IgniteUtilsSelfTest.runTask(IgniteUtilsSelfTest.java:1115) > at > org.apache.ignite.internal.util.IgniteUtilsSelfTest.lambda$testDoInParallelWithStealingJobRunTaskInExecutor$1(IgniteUtilsSelfTest.java:1077) > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ... 1 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10907) IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky failed in Basic 1
[ https://issues.apache.org/jira/browse/IGNITE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10907: Description: The main problem is in the test itself: it assumes that the current thread will steal a job, but there is a case where this assumption does not hold. [TC link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] {code:java} java.util.concurrent.ExecutionException: java.lang.AssertionError at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor(IgniteUtilsSelfTest.java:1081) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.runTask(IgniteUtilsSelfTest.java:1115) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.lambda$testDoInParallelWithStealingJobRunTaskInExecutor$1(IgniteUtilsSelfTest.java:1077) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more {code} was: [TC link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] {code} java.util.concurrent.ExecutionException: java.lang.AssertionError at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor(IgniteUtilsSelfTest.java:1081) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.runTask(IgniteUtilsSelfTest.java:1115) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.lambda$testDoInParallelWithStealingJobRunTaskInExecutor$1(IgniteUtilsSelfTest.java:1077) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more {code} > IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky > failed in Basic 1 > > > Key: IGNITE-10907 > URL: https://issues.apache.org/jira/browse/IGNITE-10907 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Assignee: Dmitriy Govorukhin >Priority: Major > Fix For: 2.8 > > > The main problem is in the test itself: it assumes that the current thread will > steal a job, but there is a case where this assumption does not hold. > [TC >
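The race described in the description can be illustrated without Ignite: when a submitting thread and a pool thread both drain the same job queue, the submitter is not guaranteed to "steal" anything, because the pool thread may empty the queue first. This is a stdlib analogy of that timing hazard, not the actual `IgniteUtils.doInParallel` code:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class StealingDemo {
    /** Runs n jobs; a pool thread and the caller both poll the same queue.
     *  Returns { jobs stolen by the caller, total jobs executed }. */
    static int[] runWithStealing(int n) throws InterruptedException {
        Queue<Runnable> jobs = new ConcurrentLinkedQueue<>();
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < n; i++)
            jobs.add(done::incrementAndGet);

        ExecutorService pool = Executors.newFixedThreadPool(1);
        pool.submit(() -> {
            Runnable job;
            while ((job = jobs.poll()) != null) job.run();
        });

        // The caller "steals" whatever the pool thread has not yet claimed.
        int stolen = 0;
        Runnable job;
        while ((job = jobs.poll()) != null) { job.run(); stolen++; }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // Every job runs exactly once (poll is atomic), but stolen can
        // legitimately be 0 -- asserting "the caller stole at least one job"
        // is exactly the kind of check that fails intermittently.
        return new int[] { stolen, done.get() };
    }
}
```

Each job runs exactly once, yet the split between the two threads is scheduler-dependent, which is why a test asserting a particular split is flaky.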
[jira] [Updated] (IGNITE-10907) IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky failed in Basic 1
[ https://issues.apache.org/jira/browse/IGNITE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10907: Description: [TC link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] java.util.concurrent.ExecutionException: java.lang.AssertionError at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor(IgniteUtilsSelfTest.java:1081) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.runTask(IgniteUtilsSelfTest.java:1115) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.lambda$testDoInParallelWithStealingJobRunTaskInExecutor$1(IgniteUtilsSelfTest.java:1077) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more was:[TC link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] > IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky > failed in Basic 1 > > > Key: IGNITE-10907 > URL: https://issues.apache.org/jira/browse/IGNITE-10907 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Assignee: Dmitriy Govorukhin >Priority: Major > Fix For: 2.8 > > > [TC > link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] > java.util.concurrent.ExecutionException: java.lang.AssertionError > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at > org.apache.ignite.internal.util.IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor(IgniteUtilsSelfTest.java:1081) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at 
org.junit.Assert.assertTrue(Assert.java:52) > at > org.apache.ignite.internal.util.IgniteUtilsSelfTest.runTask(IgniteUtilsSelfTest.java:1115) > at > org.apache.ignite.internal.util.IgniteUtilsSelfTest.lambda$testDoInParallelWithStealingJobRunTaskInExecutor$1(IgniteUtilsSelfTest.java:1077) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at >
[jira] [Updated] (IGNITE-10907) IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky failed in Basic 1
[ https://issues.apache.org/jira/browse/IGNITE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10907: Description: [TC link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] {code} java.util.concurrent.ExecutionException: java.lang.AssertionError at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor(IgniteUtilsSelfTest.java:1081) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.runTask(IgniteUtilsSelfTest.java:1115) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.lambda$testDoInParallelWithStealingJobRunTaskInExecutor$1(IgniteUtilsSelfTest.java:1077) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more {code} was: [TC link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] java.util.concurrent.ExecutionException: java.lang.AssertionError at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor(IgniteUtilsSelfTest.java:1081) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.runTask(IgniteUtilsSelfTest.java:1115) at org.apache.ignite.internal.util.IgniteUtilsSelfTest.lambda$testDoInParallelWithStealingJobRunTaskInExecutor$1(IgniteUtilsSelfTest.java:1077) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ... 1 more > IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky > failed in Basic 1 > > > Key: IGNITE-10907 > URL: https://issues.apache.org/jira/browse/IGNITE-10907 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Assignee: Dmitriy Govorukhin >Priority: Major > Fix For: 2.8 > > > [TC > link|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=3644593059787670337=testDetails] > {code} >
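The race described above can be illustrated with plain java.util.concurrent primitives. This is a minimal sketch, not Ignite's actual doInParallelWithStealingJob code: FutureTask.run() is silently a no-op when another thread has already started the task, so whether the caller "steals" the job depends on timing.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class StealRaceSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);

        // The job records which thread actually executed it.
        FutureTask<String> job =
            new FutureTask<>(() -> Thread.currentThread().getName());

        pool.execute(job);

        // The caller tries to "steal" the job by running it itself.
        // FutureTask.run() does nothing if the pool thread won the race,
        // so a test asserting the job always ran in the caller is flaky.
        job.run();

        System.out.println("ran in caller thread: " + "main".equals(job.get()));
        pool.shutdown();
    }
}
```

Depending on scheduling, the printed value is true or false, which is exactly why an unconditional assertion on it fails intermittently.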
[jira] [Updated] (IGNITE-10907) IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky failed in Basic 1
[ https://issues.apache.org/jira/browse/IGNITE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10907: Fix Version/s: 2.8 > IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky > failed in Basic 1 > > > Key: IGNITE-10907 > URL: https://issues.apache.org/jira/browse/IGNITE-10907 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Assignee: Dmitriy Govorukhin >Priority: Major > Fix For: 2.8 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10907) IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky failed in Basic 1
[ https://issues.apache.org/jira/browse/IGNITE-10907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-10907: Ignite Flags: (was: Docs Required) > IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky > failed in Basic 1 > > > Key: IGNITE-10907 > URL: https://issues.apache.org/jira/browse/IGNITE-10907 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Assignee: Dmitriy Govorukhin >Priority: Major > Fix For: 2.8 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-10906) SQL: UPDATE statement allows null for entire value cache object
Pavel Kuznetsov created IGNITE-10906: Summary: SQL: UPDATE statement allows null for entire value cache object Key: IGNITE-10906 URL: https://issues.apache.org/jira/browse/IGNITE-10906 Project: Ignite Issue Type: Bug Components: sql Reporter: Pavel Kuznetsov Currently, the following query does not cause an error: {code:sql} CREATE TABLE SIMPLE (id INT PRIMARY KEY, name VARCHAR) WITH "wrap_value=false, wrap_key=false"; UPDATE SIMPLE SET _val = null; {code} But it should, because the underlying cache does not support null values. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
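Why the statement must fail can be shown with any null-hostile map; a plain-JDK sketch (ConcurrentHashMap stands in for the Ignite cache, which likewise forbids null values):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NullValueSketch {
    public static void main(String[] args) {
        // ConcurrentHashMap is null-hostile, like an Ignite cache.
        Map<Integer, String> cache = new ConcurrentHashMap<>();
        cache.put(1, "one");
        try {
            // The equivalent of "UPDATE SIMPLE SET _val = null":
            // replacing the entire value with null must be rejected.
            cache.put(1, null);
        } catch (NullPointerException e) {
            System.out.println("null value rejected"); // expected path
        }
    }
}
```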
[jira] [Created] (IGNITE-10907) IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky failed in Basic 1
Dmitriy Govorukhin created IGNITE-10907: --- Summary: IgniteUtilsSelfTest.testDoInParallelWithStealingJobRunTaskInExecutor flaky failed in Basic 1 Key: IGNITE-10907 URL: https://issues.apache.org/jira/browse/IGNITE-10907 Project: Ignite Issue Type: Bug Reporter: Dmitriy Govorukhin Assignee: Dmitriy Govorukhin -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10891) IgnitePdsThreadInterruptionTest.testInterruptsOnLFSRead flaky in PDS indexing
[ https://issues.apache.org/jira/browse/IGNITE-10891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740621#comment-16740621 ] Ignite TC Bot commented on IGNITE-10891: {panel:title=-- Run :: All: No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=2771736buildTypeId=IgniteTests24Java8_RunAll] > IgnitePdsThreadInterruptionTest.testInterruptsOnLFSRead flaky in PDS indexing > - > > Key: IGNITE-10891 > URL: https://issues.apache.org/jira/browse/IGNITE-10891 > Project: Ignite > Issue Type: Bug >Reporter: Dmitriy Govorukhin >Assignee: Dmitriy Govorukhin >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.8 > > > The main problem is in the test. The test assumes that no exception will be thrown > during reads while threads are being interrupted, but there is a case when this > condition is not met: we can get an IgniteInterruptedException from > GridCacheAdapter.asyncOpsSem during reads if the current thread was interrupted > before asyncOpsSem.acquire(). > > [TC link to test > history|https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=4074948011040554273=testDetails_IgniteTests24Java8=%3Cdefault%3E] > {code:java} > [2019-01-10 22:46:23,224][ERROR][main][root] Test failed. > junit.framework.AssertionFailedError: Expected: but was: > javax.cache.CacheException: class > org.apache.ignite.IgniteInterruptedException: Failed to wait for asynchronous > operation permit (thread got interrupted).
> at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.assertTrue(Assert.java:22) > at junit.framework.Assert.assertNull(Assert.java:277) > at junit.framework.Assert.assertNull(Assert.java:268) > at > org.apache.ignite.internal.processors.cache.persistence.db.file.IgnitePdsThreadInterruptionTest.testInterruptsOnLFSRead(IgnitePdsThreadInterruptionTest.java:170) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2088) > at java.lang.Thread.run(Thread.java:748) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
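The failure mode the reporter describes (a thread interrupted before asyncOpsSem.acquire()) is standard java.util.concurrent.Semaphore behavior and can be reproduced in isolation; a minimal sketch, not Ignite's actual GridCacheAdapter code:

```java
import java.util.concurrent.Semaphore;

public class InterruptedAcquireSketch {
    public static void main(String[] args) {
        Semaphore asyncOpsSem = new Semaphore(1); // a permit is available

        // Simulate a reader thread that was interrupted before acquire():
        Thread.currentThread().interrupt();

        try {
            asyncOpsSem.acquire();
            System.out.println("acquired");
        } catch (InterruptedException e) {
            // acquire() checks the interrupt status on entry, so this path
            // is taken even though a permit was free - the race the test hits.
            System.out.println("interrupted before acquire");
        }
    }
}
```

Because acquire() throws if the interrupt flag is already set, a read performed after an interrupt always surfaces as an IgniteInterruptedException, violating the test's "no exception" assumption.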
[jira] [Created] (IGNITE-10905) org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException happens during rolling restart of a cluster
Maxim Pudov created IGNITE-10905: Summary: org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException happens during rolling restart of a cluster Key: IGNITE-10905 URL: https://issues.apache.org/jira/browse/IGNITE-10905 Project: Ignite Issue Type: Bug Components: cache Affects Versions: 2.7 Reporter: Maxim Pudov Fix For: 2.8 JVM is halted after this error during rolling restart of a cluster: org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtInvalidPartitionException: Adding entry to partition that is concurrently evicted [grp=cacheGroup_7, part=518, shouldBeMoving=, belongs=true, topVer=AffinityTopologyVersion [topVer=42, minorTopVer=0], curTopVer=AffinityTopologyVersion [topVer=43, minorTopVer=0]] at org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopologyImpl.localPartition0(GridDhtPartitionTopologyImpl.java:950) at org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopologyImpl.localPartition(GridDhtPartitionTopologyImpl.java:825) at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionDemander.handleSupplyMessage(GridDhtPartitionDemander.java:744) at org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPreloader.handleSupplyMessage(GridDhtPreloader.java:387) at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:418) at org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$5.apply(GridCachePartitionExchangeManager.java:408) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1056) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:581) at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$700(GridCacheIoManager.java:101) at 
org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1613) at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569) at org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:127) at org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2768) at org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1529) at org.apache.ignite.internal.managers.communication.GridIoManager.access$4400(GridIoManager.java:127) at org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1498) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10640) Create cluster-wide MetaStorage analogue
[ https://issues.apache.org/jira/browse/IGNITE-10640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740546#comment-16740546 ] Ignite TC Bot commented on IGNITE-10640: {panel:title=-- Run :: All: No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=2770843buildTypeId=IgniteTests24Java8_RunAll] > Create cluster-wide MetaStorage analogue > > > Key: IGNITE-10640 > URL: https://issues.apache.org/jira/browse/IGNITE-10640 > Project: Ignite > Issue Type: New Feature >Reporter: Ivan Bessonov >Assignee: Ivan Bessonov >Priority: Major > Labels: IEP-4, Phase-2 > > Issues like IGNITE-8571 require the ability to store and update some > properties consistently on the whole cluster. It is proposed to implement a > generic way of doing this. The main requirements are: > * read / write / delete; > * surviving node / cluster restart; > * consistency; > * ability to add listeners on changing properties. > The first implementation is going to be based on the local MetaStorage to guarantee > data persistence. The existing MetaStorage API is subject to change as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
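The requirement list (read/write/delete plus change listeners) can be sketched as a single-node, in-memory shape of such a storage. All names here are illustrative, not Ignite's actual DistributedMetaStorage interface, and the sketch ignores the persistence and cluster-consistency requirements:

```java
import java.io.Serializable;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.BiConsumer;

public class MetaStorageSketch {
    private final Map<String, Serializable> data = new ConcurrentHashMap<>();
    private final List<BiConsumer<String, Serializable>> lsnrs = new CopyOnWriteArrayList<>();

    Serializable read(String key) { return data.get(key); }

    void write(String key, Serializable val) {
        data.put(key, val);
        lsnrs.forEach(l -> l.accept(key, val)); // notify listeners on change
    }

    void delete(String key) {
        data.remove(key);
        lsnrs.forEach(l -> l.accept(key, null)); // null signals removal
    }

    void listen(BiConsumer<String, Serializable> lsnr) { lsnrs.add(lsnr); }

    public static void main(String[] args) {
        MetaStorageSketch ms = new MetaStorageSketch();
        ms.listen((k, v) -> System.out.println("changed: " + k + " -> " + v));
        ms.write("prop", "on");
        System.out.println("read: " + ms.read("prop"));
        ms.delete("prop");
    }
}
```

The cluster-wide version would have to replicate each write consistently and replay the state on restart, which is where the local MetaStorage comes in.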
[jira] [Assigned] (IGNITE-10898) Exchange coordinator failover breaks in some cases when node filter is used
[ https://issues.apache.org/jira/browse/IGNITE-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Lantukh reassigned IGNITE-10898: - Assignee: Ilya Lantukh > Exchange coordinator failover breaks in some cases when node filter is used > --- > > Key: IGNITE-10898 > URL: https://issues.apache.org/jira/browse/IGNITE-10898 > Project: Ignite > Issue Type: Bug >Reporter: Alexey Goncharuk >Assignee: Ilya Lantukh >Priority: Critical > Fix For: 2.8 > > Attachments: NodeWithFilterRestartTest.java > > > Currently, if a node does not pass a cache's node filter, we do not store that > cache's affinity on the node unless the node is the coordinator. This, however, may > fail in the following scenario: > 1) A node passing the node filter joins the cluster. > 2) During the join the coordinator fails, and a new coordinator is selected for which the > previous exchange is completed. > 3) The next coordinator attempts to fetch the affinity, and the joining node resends > the partitions single message, but there are two problems here. First, the exchange > fast-reply does not wait for the new affinity initialization, which results in an > {{IllegalStateException}}. Second, such an attempt to fetch affinity may lead > either to a deadlock or to incorrectly fetched affinity (basically, the coordinator > must be in consensus with the other nodes passing the node filter). > The attached test reproduces the issue. > I suggest always calculating and keeping affinity on all nodes, even ones not > passing the filter. In this case, there will be no need to fetch and > recalculate affinity ({{initCoordinatorCaches}} will go away). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9250) Replace CachesRegistry by ClusterCachesInfo
[ https://issues.apache.org/jira/browse/IGNITE-9250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Kalashnikov updated IGNITE-9250: -- Description: We now have duplicated registered caches (and groups): they are held in ClusterCachesInfo, the main storage, and also in CachesRegistry. This looks redundant and can lead to inconsistency of the cache info. The main difference is that CachesRegistry is filled on exchange, while ClusterCachesInfo is filled on a cache change message received before the exchange. was:Now we have duplicate of registerCaches(and groups). They holds in ClusterCachesInfo - main storage, and also they holds in CachesRegistry. It looks like redundantly and can lead to unconsistancy of caches info. > Replace CachesRegistry by ClusterCachesInfo > --- > > Key: IGNITE-9250 > URL: https://issues.apache.org/jira/browse/IGNITE-9250 > Project: Ignite > Issue Type: Improvement >Reporter: Anton Kalashnikov >Assignee: Anton Kalashnikov >Priority: Major > > We now have duplicated registered caches (and groups): they are held in > ClusterCachesInfo, the main storage, and also in CachesRegistry. This > looks redundant and can lead to inconsistency of the cache info. > The main difference is that CachesRegistry is filled on exchange, while ClusterCachesInfo > is filled on a cache change message received before the exchange. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10827) auto close iterator for query cursor when all data has been read
[ https://issues.apache.org/jira/browse/IGNITE-10827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740496#comment-16740496 ] Yury Gerzhedovich commented on IGNITE-10827: [~vozerov], thanks for the notes. All of them are fixed. Please check again. Tests are running; waiting for the results. > auto close iterator for query cursor when all data has been read > > > Key: IGNITE-10827 > URL: https://issues.apache.org/jira/browse/IGNITE-10827 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Yury Gerzhedovich >Assignee: Yury Gerzhedovich >Priority: Major > Labels: sql > Fix For: 2.8 > > > There is a QueryCursorImpl class which we use as the main implementation of a cursor > over H2 iterators. As of now, we call the close method explicitly when we suppose that > all data has already been read. It would be better to call the close() method when the > iterator's hasNext() method returns false. Such an implementation will be safer and > simpler to use. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9250) Replace CachesRegistry by ClusterCachesInfo
[ https://issues.apache.org/jira/browse/IGNITE-9250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Kalashnikov updated IGNITE-9250: -- Description: We now have duplicated registered caches (and groups): they are held in ClusterCachesInfo, the main storage, and also in CachesRegistry. This looks redundant and can lead to inconsistency of the cache info. (was: Now we have duplicate of registerCaches(and groups). They holds in ClusterCachesInfo - main storage, and also they holds in CacheAffinitySharedManager.CachesInfo. It looks like redundantly and can lead to unconsistancy of caches info.) > Replace CachesRegistry by ClusterCachesInfo > --- > > Key: IGNITE-9250 > URL: https://issues.apache.org/jira/browse/IGNITE-9250 > Project: Ignite > Issue Type: Improvement >Reporter: Anton Kalashnikov >Assignee: Anton Kalashnikov >Priority: Major > > We now have duplicated registered caches (and groups): they are held in > ClusterCachesInfo, the main storage, and also in CachesRegistry. This > looks redundant and can lead to inconsistency of the cache info. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9250) Replace CachesRegistry by ClusterCachesInfo
[ https://issues.apache.org/jira/browse/IGNITE-9250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anton Kalashnikov updated IGNITE-9250: -- Summary: Replace CachesRegistry by ClusterCachesInfo (was: Replace CacheAffinitySharedManager.CachesInfo by ClusterCachesInfo) > Replace CachesRegistry by ClusterCachesInfo > --- > > Key: IGNITE-9250 > URL: https://issues.apache.org/jira/browse/IGNITE-9250 > Project: Ignite > Issue Type: Improvement >Reporter: Anton Kalashnikov >Assignee: Anton Kalashnikov >Priority: Major > > We now have duplicated registered caches (and groups): they are held in > ClusterCachesInfo, the main storage, and also in > CacheAffinitySharedManager.CachesInfo. This looks redundant and can lead > to inconsistency of the cache info. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-8596) SQL: remove unnecessary index lookups when query parallelism is enabled
[ https://issues.apache.org/jira/browse/IGNITE-8596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov updated IGNITE-8596: - Issue Type: Improvement (was: Task) > SQL: remove unnecessary index lookups when query parallelism is enabled > --- > > Key: IGNITE-8596 > URL: https://issues.apache.org/jira/browse/IGNITE-8596 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 2.5 >Reporter: Vladimir Ozerov >Assignee: Andrew Mashenkov >Priority: Major > Labels: iep-24, performance > Fix For: 2.8 > > > See the > {{org.apache.ignite.internal.processors.query.h2.twostep.GridMapQueryExecutor#onQueryRequest}} > method. If a table is segmented, we will submit as many SQL requests as there are > segments. But consider the case when the target cache partition(s) are already > defined by the user or derived through partition pruning. In this case most of the > segments will not contain useful information and will return an empty result set. At > the same time these queries may trigger index or data page scans, thus > consuming resources for no reason. > To mitigate the problem, we should not submit SQL requests to segments we are > not interested in. > Note that it is not sufficient to simply skip SQL requests on the mapper, because > the reducer expects a separate response for every request. We should fix both the local > mapper logic and the protocol. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
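The idea behind the optimization can be sketched in a few lines. This assumes a simple modulo mapping from partition to index segment, which is illustrative rather than Ignite's exact assignment rule: once partition pruning has narrowed the query to a few partitions, only the segments hosting those partitions need a SQL request.

```java
import java.util.Set;
import java.util.TreeSet;

public class SegmentPruningSketch {
    // Hypothetical partition-to-segment mapping for a table with N segments.
    static int segmentFor(int part, int segments) {
        return part % segments;
    }

    public static void main(String[] args) {
        int segments = 4;
        Set<Integer> prunedParts = Set.of(5, 9); // partitions derived by pruning

        // Only these segments need a SQL request; the others would return
        // empty results while still paying for index scans.
        Set<Integer> targetSegments = new TreeSet<>();
        for (int p : prunedParts)
            targetSegments.add(segmentFor(p, segments));

        System.out.println(targetSegments); // both 5 and 9 map to segment 1
    }
}
```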
[jira] [Closed] (IGNITE-8875) Add JMX methods to block\unblock new incoming connections from thin clients.
[ https://issues.apache.org/jira/browse/IGNITE-8875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov closed IGNITE-8875. > Add JMX methods to block\unblock new incoming connections from thin clients. > > > Key: IGNITE-8875 > URL: https://issues.apache.org/jira/browse/IGNITE-8875 > Project: Ignite > Issue Type: Improvement > Components: jdbc, odbc, thin client >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Minor > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (IGNITE-8875) Add JMX methods to block\unblock new incoming connections from thin clients.
[ https://issues.apache.org/jira/browse/IGNITE-8875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov resolved IGNITE-8875. -- Resolution: Incomplete > Add JMX methods to block\unblock new incoming connections from thin clients. > > > Key: IGNITE-8875 > URL: https://issues.apache.org/jira/browse/IGNITE-8875 > Project: Ignite > Issue Type: Improvement > Components: jdbc, odbc, thin client >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Minor > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-10576) MVCC TX: Rework UpdateSourceIterator to mix operation types
[ https://issues.apache.org/jira/browse/IGNITE-10576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov reassigned IGNITE-10576: - Assignee: (was: Andrew Mashenkov) > MVCC TX: Rework UpdateSourceIterator to mix operation types > --- > > Key: IGNITE-10576 > URL: https://issues.apache.org/jira/browse/IGNITE-10576 > Project: Ignite > Issue Type: Improvement > Components: mvcc >Reporter: Igor Seliverstov >Priority: Major > Fix For: 2.8 > > > The current UpdateSourceIterator implementation doesn't suit the Cache API's needs. > It should be able to mix operation types per key. > For example, we may execute a putAll operation where half of the keys have > values and half don't; in this case we should mix DELETE operations for the > null-value keys and PUT operations for the others. > Another use case is a transform operation, which should turn into a number of > PUT/UPDATE/DELETE operations on a backup node. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
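The per-key mixing described above boils down to choosing an operation type from each entry's value. A minimal sketch with an illustrative enum and mapping rule (not Ignite's actual UpdateSourceIterator types):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MixedOpsSketch {
    // Hypothetical operation types for a batch update.
    enum Op { PUT, DELETE }

    // A null value in the batch means the key should be removed.
    static Op opFor(Object val) {
        return val == null ? Op.DELETE : Op.PUT;
    }

    public static void main(String[] args) {
        // A putAll batch where half the keys carry values and half don't.
        Map<String, String> batch = new LinkedHashMap<>();
        batch.put("a", "1");
        batch.put("b", null);

        batch.forEach((k, v) -> System.out.println(k + " -> " + opFor(v)));
    }
}
```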
[jira] [Created] (IGNITE-10902) [ML] Implement a few regression metrics in one RegressionMetrics class
Aleksey Zinoviev created IGNITE-10902: - Summary: [ML] Implement a few regression metrics in one RegressionMetrics class Key: IGNITE-10902 URL: https://issues.apache.org/jira/browse/IGNITE-10902 Project: Ignite Issue Type: Sub-task Components: ml Affects Versions: 2.8 Reporter: Aleksey Zinoviev Assignee: Aleksey Zinoviev Look for possible metrics in Spark, Smile, Scikit-learn -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10827) auto close iterator for query cursor when all data has been read
[ https://issues.apache.org/jira/browse/IGNITE-10827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yury Gerzhedovich updated IGNITE-10827: --- Description: There is a QueryCursorImpl class which we use as the main implementation of a cursor over H2 iterators. As of now, we call the close method explicitly when we suppose that all data has already been read. It would be better to call the close() method when the iterator's hasNext() method returns false. Such an implementation will be safer and simpler to use. (was: There is QueryCursorImpl class which we use as main realization of cursor for H2 iterators. As of now we call close method explicit when we suppose that all data already read. Will be better call close() method when iterator hasNext() method return false or when next() method of iterator throw any Exception. Such realization will be more safely and simple to use.) > auto close iterator for query cursor when all data has been read > > > Key: IGNITE-10827 > URL: https://issues.apache.org/jira/browse/IGNITE-10827 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Yury Gerzhedovich >Assignee: Yury Gerzhedovich >Priority: Major > Labels: sql > Fix For: 2.8 > > > There is a QueryCursorImpl class which we use as the main implementation of a cursor > over H2 iterators. As of now, we call the close method explicitly when we suppose that > all data has already been read. It would be better to call the close() method when the > iterator's hasNext() method returns false. Such an implementation will be safer and > simpler to use. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
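The proposed behavior can be sketched as a small iterator wrapper. This is a minimal illustration, not Ignite's QueryCursorImpl: the wrapper runs its close action the first time hasNext() returns false, so callers that drain the iterator never need an explicit close.

```java
import java.util.Iterator;
import java.util.List;

public class AutoCloseIterator<T> implements Iterator<T> {
    private final Iterator<T> delegate;
    private final Runnable closeAction;
    private boolean closed;

    AutoCloseIterator(Iterator<T> delegate, Runnable closeAction) {
        this.delegate = delegate;
        this.closeAction = closeAction;
    }

    @Override public boolean hasNext() {
        boolean has = delegate.hasNext();
        if (!has && !closed) {
            closed = true;
            closeAction.run(); // release the underlying cursor eagerly
        }
        return has;
    }

    @Override public T next() { return delegate.next(); }

    public static void main(String[] args) {
        Iterator<Integer> it = new AutoCloseIterator<>(
            List.of(1, 2).iterator(),
            () -> System.out.println("closed"));

        while (it.hasNext())
            System.out.println(it.next());
    }
}
```

A real cursor would still want an explicit close path for callers that abandon iteration early; the auto-close only covers the fully-drained case.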
[jira] [Updated] (IGNITE-10902) [ML] Implement a few regression metrics in one RegressionMetrics class
[ https://issues.apache.org/jira/browse/IGNITE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10902: -- Description: Look for possible metrics in Spark, Smile, Scikit-learn [https://scikit-learn.org/stable/modules/model_evaluation.html] [https://github.com/haifengl/smile/tree/master/core/src/main/java/smile/validation] was: Look for possible metrics in Spark, Smile, Scikit-learn [https://scikit-learn.org/stable/modules/model_evaluation.html] > [ML] Implement a few regression metrics in one RegressionMetrics class > -- > > Key: IGNITE-10902 > URL: https://issues.apache.org/jira/browse/IGNITE-10902 > Project: Ignite > Issue Type: Sub-task > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Major > > Look for possible metrics in Spark, Smile, Scikit-learn > [https://scikit-learn.org/stable/modules/model_evaluation.html] > [https://github.com/haifengl/smile/tree/master/core/src/main/java/smile/validation] > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10902) [ML] Implement a few regression metrics in one RegressionMetrics class
[ https://issues.apache.org/jira/browse/IGNITE-10902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10902: -- Description: Look for possible metrics in Spark, Smile, Scikit-learn [https://scikit-learn.org/stable/modules/model_evaluation.html] was:Look for possible metrics in Spark, Smile, Scikit-learn > [ML] Implement a few regression metrics in one RegressionMetrics class > -- > > Key: IGNITE-10902 > URL: https://issues.apache.org/jira/browse/IGNITE-10902 > Project: Ignite > Issue Type: Sub-task > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Major > > Look for possible metrics in Spark, Smile, Scikit-learn > [https://scikit-learn.org/stable/modules/model_evaluation.html] > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-10904) [ML] Refactor all examples with regression to use RegressionMetrics
Aleksey Zinoviev created IGNITE-10904: - Summary: [ML] Refactor all examples with regression to use RegressionMetrics Key: IGNITE-10904 URL: https://issues.apache.org/jira/browse/IGNITE-10904 Project: Ignite Issue Type: Sub-task Components: ml Affects Versions: 2.8 Reporter: Aleksey Zinoviev Assignee: Aleksey Zinoviev Fix For: 2.8 Look through all regression examples and add the RegressionMetrics usage as a final step
[jira] [Created] (IGNITE-10903) [ML] Provide an example with training of regression model and its evaluation
Aleksey Zinoviev created IGNITE-10903: - Summary: [ML] Provide an example with training of regression model and its evaluation Key: IGNITE-10903 URL: https://issues.apache.org/jira/browse/IGNITE-10903 Project: Ignite Issue Type: Sub-task Components: ml Affects Versions: 2.8 Reporter: Aleksey Zinoviev Assignee: Aleksey Zinoviev Fix For: 2.8 It could be parametric or non-parametric regression -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-5438) JDBC thin: support query timeout
[ https://issues.apache.org/jira/browse/IGNITE-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740471#comment-16740471 ] Alexander Lapin commented on IGNITE-5438: - Ready for preliminary review. > JDBC thin: support query timeout > > > Key: IGNITE-5438 > URL: https://issues.apache.org/jira/browse/IGNITE-5438 > Project: Ignite > Issue Type: Task > Components: jdbc >Affects Versions: 2.0 >Reporter: Taras Ledkov >Assignee: Alexander Lapin >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > The {{setQueryTimeout}} method of JDBC {{Statement}} must be supported. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10870) [ML] Add an example for KNN/LogReg and multi-class task full Iris dataset
[ https://issues.apache.org/jira/browse/IGNITE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10870: -- Priority: Major (was: Critical) > [ML] Add an example for KNN/LogReg and multi-class task full Iris dataset > - > > Key: IGNITE-10870 > URL: https://issues.apache.org/jira/browse/IGNITE-10870 > Project: Ignite > Issue Type: Sub-task > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Major > Fix For: 2.8 > > > Add one or two examples for KNN/LogReg and the Iris dataset with 3 classes
[jira] [Updated] (IGNITE-10901) [ML][Umbrella] Add support of regression metrics to evaluate regression models
[ https://issues.apache.org/jira/browse/IGNITE-10901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10901: -- Summary: [ML][Umbrella] Add support of regression metrics to evaluate regression models (was: [ML][Umbrella] Add support of regression metrics to evaluate regression) > [ML][Umbrella] Add support of regression metrics to evaluate regression models > -- > > Key: IGNITE-10901 > URL: https://issues.apache.org/jira/browse/IGNITE-10901 > Project: Ignite > Issue Type: Improvement > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Major > > Look at scikit-learn metrics like > |*Regression*| | | > |‘explained_variance’|[{{metrics.explained_variance_score}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.explained_variance_score.html#sklearn.metrics.explained_variance_score]| > | > |‘neg_mean_absolute_error’|[{{metrics.mean_absolute_error}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html#sklearn.metrics.mean_absolute_error]| > | > |‘neg_mean_squared_error’|[{{metrics.mean_squared_error}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html#sklearn.metrics.mean_squared_error]| > | > |‘neg_mean_squared_log_error’|[{{metrics.mean_squared_log_error}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_log_error.html#sklearn.metrics.mean_squared_log_error]| > | > |‘neg_median_absolute_error’|[{{metrics.median_absolute_error}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.median_absolute_error.html#sklearn.metrics.median_absolute_error]| > | > |‘r2’|[{{metrics.r2_score}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html#sklearn.metrics.r2_score]| -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-10901) [ML][Umbrella] Add support of regression metrics to evaluate regression
Aleksey Zinoviev created IGNITE-10901: - Summary: [ML][Umbrella] Add support of regression metrics to evaluate regression Key: IGNITE-10901 URL: https://issues.apache.org/jira/browse/IGNITE-10901 Project: Ignite Issue Type: Improvement Components: ml Affects Versions: 2.8 Reporter: Aleksey Zinoviev Assignee: Aleksey Zinoviev Look at scikit-learn metrics like |*Regression*| | | |‘explained_variance’|[{{metrics.explained_variance_score}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.explained_variance_score.html#sklearn.metrics.explained_variance_score]| | |‘neg_mean_absolute_error’|[{{metrics.mean_absolute_error}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html#sklearn.metrics.mean_absolute_error]| | |‘neg_mean_squared_error’|[{{metrics.mean_squared_error}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html#sklearn.metrics.mean_squared_error]| | |‘neg_mean_squared_log_error’|[{{metrics.mean_squared_log_error}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_log_error.html#sklearn.metrics.mean_squared_log_error]| | |‘neg_median_absolute_error’|[{{metrics.median_absolute_error}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.median_absolute_error.html#sklearn.metrics.median_absolute_error]| | |‘r2’|[{{metrics.r2_score}}|https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html#sklearn.metrics.r2_score]| -- This message was sent by Atlassian JIRA (v7.6.3#76005)
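For reference, the three most common metrics in the scikit-learn table above can be computed in a few lines of plain Java. This is a hedged sketch (the `SimpleRegressionMetrics` class is hypothetical), not the `RegressionMetrics` API this umbrella ticket introduces:

```java
// Plain-Java sketch of mean absolute error, mean squared error and R^2,
// matching the definitions behind the scikit-learn metrics listed above.
class SimpleRegressionMetrics {
    /** Mean absolute error: average of |y_i - yHat_i|. */
    static double mae(double[] truth, double[] pred) {
        double sum = 0;
        for (int i = 0; i < truth.length; i++)
            sum += Math.abs(truth[i] - pred[i]);
        return sum / truth.length;
    }

    /** Mean squared error: average of (y_i - yHat_i)^2. */
    static double mse(double[] truth, double[] pred) {
        double sum = 0;
        for (int i = 0; i < truth.length; i++) {
            double d = truth[i] - pred[i];
            sum += d * d;
        }
        return sum / truth.length;
    }

    /** Coefficient of determination: 1 - SS_res / SS_tot. */
    static double r2(double[] truth, double[] pred) {
        double mean = 0;
        for (double y : truth)
            mean += y;
        mean /= truth.length;

        double ssRes = 0, ssTot = 0;
        for (int i = 0; i < truth.length; i++) {
            ssRes += (truth[i] - pred[i]) * (truth[i] - pred[i]);
            ssTot += (truth[i] - mean) * (truth[i] - mean);
        }
        return 1 - ssRes / ssTot;
    }
}
```

Note that scikit-learn exposes the error metrics as negated scores (`neg_mean_absolute_error` etc.) only so that higher is always better in its model-selection tooling; the underlying formulas are the ones above.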
[jira] [Updated] (IGNITE-9634) [ML] Trainers as pipeline parameters that can be varied
[ https://issues.apache.org/jira/browse/IGNITE-9634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-9634: - Priority: Major (was: Minor) > [ML] Trainers as pipeline parameters that can be varied > --- > > Key: IGNITE-9634 > URL: https://issues.apache.org/jira/browse/IGNITE-9634 > Project: Ignite > Issue Type: Sub-task > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Major > Fix For: 2.8 > > > Based > http://apache-ignite-developers.2346864.n4.nabble.com/ML-New-Feature-Trainers-as-pipeline-parameters-that-can-be-varied-td35132.html -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (IGNITE-10391) MVCC: Invoke request fails on backup while rebalance is in progress.
[ https://issues.apache.org/jira/browse/IGNITE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov resolved IGNITE-10391. --- Resolution: Duplicate > MVCC: Invoke request fails on backup while rebalance is in progress. > > > Key: IGNITE-10391 > URL: https://issues.apache.org/jira/browse/IGNITE-10391 > Project: Ignite > Issue Type: Bug > Components: mvcc >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: mvcc_stabilization_stage_1, transactions > Fix For: 2.8 > > > Invoke request fails with an assertion error on a backup while rebalance is in progress. > The enlist request handler expects an entry processor instead of a value in case of an > Invoke operation, > but when rebalance is in progress we pass the entry history to the backup side. This > triggers the assertion. > We have to handle this case correctly: apply the history first and then the entry > processor. >
[jira] [Updated] (IGNITE-8250) Adopt Fuzzy CMeans to PartitionedDatasets
[ https://issues.apache.org/jira/browse/IGNITE-8250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-8250: - Priority: Trivial (was: Minor) > Adopt Fuzzy CMeans to PartitionedDatasets > - > > Key: IGNITE-8250 > URL: https://issues.apache.org/jira/browse/IGNITE-8250 > Project: Ignite > Issue Type: Improvement > Components: ml >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Trivial > > Add Model/Trainer, tests, example -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9497) [ML] Add Pipeline support to Cross-Validation process
[ https://issues.apache.org/jira/browse/IGNITE-9497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-9497: - Priority: Major (was: Minor) > [ML] Add Pipeline support to Cross-Validation process > - > > Key: IGNITE-9497 > URL: https://issues.apache.org/jira/browse/IGNITE-9497 > Project: Ignite > Issue Type: New Feature > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Major > Fix For: 2.8 > > > Change the API of ParamGrid.addHyperParam to support meta-information about > the Pipeline stage > Add a method to Cross-Validation to support evaluating the whole Pipeline process > and injecting hyper-parameters from the ParamGrid
[jira] [Updated] (IGNITE-9936) [ML] Make readable the models output in RandomForestClassificationExample
[ https://issues.apache.org/jira/browse/IGNITE-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-9936: - Priority: Major (was: Minor) > [ML] Make readable the models ouput in RandomForestClassificationExample > > > Key: IGNITE-9936 > URL: https://issues.apache.org/jira/browse/IGNITE-9936 > Project: Ignite > Issue Type: Improvement > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Major > Fix For: 2.8 > > > The output is > >>> Trained model: Models composition [ > aggregator = [OnMajorityPredictionsAggregator], > models = [ > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@7d3d101b, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@30c8681, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5cdec700, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6d026701, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@78aa1f72, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@1f75a668, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@35399441, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@4b7dc788, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6304101a, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5170bcf4, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@2812b107, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@df6620a, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@4e31276e, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@1a72a540, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@27d5a580, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@198d6542, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5e403b4a, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5117dd67, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5be49b60, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@2931522b, > 
org.apache.ignite.ml.tree.randomforest.data.TreeRoot@7674b62c, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@19e7a160, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@662706a7, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@45a4b042, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@16b2bb0c, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@327af41b, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6cb6decd, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@c7045b9, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@f99f5e0, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6aa61224, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@30bce90b, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@3e6f3f28, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@7e19ebf0, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@2474f125, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@7357a011, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@3406472c, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5717c37, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@68f4865, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@4816278d, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@4eaf3684, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@40317ba2, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@3c01cfa1, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@45d2ade3, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@727eb8cb, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@39d9314d, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@b978d10, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5b7a8434, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5c45d770, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@2ce6c6ec, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@1bae316d, > 
org.apache.ignite.ml.tree.randomforest.data.TreeRoot@147a5d08, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6676f6a0, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@7cbd9d24, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@1672fe87, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5026735c, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@1b45c0e, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@11f0a5a1, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@10f7f7de, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@73a8da0f, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@50dfbc58, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@4416d64f, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6bf08014, >
[jira] [Updated] (IGNITE-9936) [ML] Make readable the models output in RandomForestClassificationExample
[ https://issues.apache.org/jira/browse/IGNITE-9936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-9936: - Priority: Minor (was: Major) > [ML] Make readable the models ouput in RandomForestClassificationExample > > > Key: IGNITE-9936 > URL: https://issues.apache.org/jira/browse/IGNITE-9936 > Project: Ignite > Issue Type: Improvement > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Minor > Fix For: 2.8 > > > The output is > >>> Trained model: Models composition [ > aggregator = [OnMajorityPredictionsAggregator], > models = [ > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@7d3d101b, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@30c8681, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5cdec700, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6d026701, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@78aa1f72, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@1f75a668, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@35399441, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@4b7dc788, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6304101a, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5170bcf4, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@2812b107, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@df6620a, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@4e31276e, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@1a72a540, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@27d5a580, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@198d6542, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5e403b4a, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5117dd67, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5be49b60, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@2931522b, > 
org.apache.ignite.ml.tree.randomforest.data.TreeRoot@7674b62c, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@19e7a160, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@662706a7, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@45a4b042, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@16b2bb0c, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@327af41b, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6cb6decd, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@c7045b9, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@f99f5e0, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6aa61224, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@30bce90b, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@3e6f3f28, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@7e19ebf0, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@2474f125, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@7357a011, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@3406472c, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5717c37, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@68f4865, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@4816278d, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@4eaf3684, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@40317ba2, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@3c01cfa1, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@45d2ade3, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@727eb8cb, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@39d9314d, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@b978d10, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5b7a8434, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5c45d770, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@2ce6c6ec, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@1bae316d, > 
org.apache.ignite.ml.tree.randomforest.data.TreeRoot@147a5d08, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6676f6a0, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@7cbd9d24, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@1672fe87, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@5026735c, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@1b45c0e, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@11f0a5a1, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@10f7f7de, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@73a8da0f, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@50dfbc58, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@4416d64f, > org.apache.ignite.ml.tree.randomforest.data.TreeRoot@6bf08014, >
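The fix this ticket asks for amounts to giving the tree classes a human-readable `toString()` instead of the default `ClassName@hashCode` output shown above. A minimal sketch of the idea, using a hypothetical `TreeNode` class rather than Ignite's actual `TreeRoot`:

```java
// Hedged sketch: a decision-tree node whose toString() prints the split
// structure, instead of inheriting Object's ClassName@hashCode form.
class TreeNode {
    final int featureIdx;      // feature index used for the split (-1 for leaves)
    final double threshold;    // split threshold
    final TreeNode left, right;
    final Double leafValue;    // non-null only for leaf nodes

    TreeNode(int featureIdx, double threshold, TreeNode left, TreeNode right) {
        this.featureIdx = featureIdx;
        this.threshold = threshold;
        this.left = left;
        this.right = right;
        this.leafValue = null;
    }

    TreeNode(double leafValue) {
        this.featureIdx = -1;
        this.threshold = 0;
        this.left = null;
        this.right = null;
        this.leafValue = leafValue;
    }

    @Override public String toString() {
        if (leafValue != null)
            return "leaf(" + leafValue + ")";
        // Recursively render the subtree as a readable rule.
        return "if (x[" + featureIdx + "] < " + threshold + ") then " + left + " else " + right;
    }
}
```

With something like this, the "Models composition" dump would show the actual split rules of each tree rather than object identities.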
[jira] [Updated] (IGNITE-10870) [ML] Add an example for KNN/LogReg and multi-class task full Iris dataset
[ https://issues.apache.org/jira/browse/IGNITE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10870: -- Priority: Minor (was: Major) > [ML] Add an example for KNN/LogReg and multi-class task full Iris dataset > - > > Key: IGNITE-10870 > URL: https://issues.apache.org/jira/browse/IGNITE-10870 > Project: Ignite > Issue Type: Sub-task > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Minor > Fix For: 2.8 > > > Add one or two examples for KNN/LogReg and the Iris dataset with 3 classes
[jira] [Updated] (IGNITE-10869) [ML] Add MultiClass classification metrics
[ https://issues.apache.org/jira/browse/IGNITE-10869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10869: -- Priority: Minor (was: Major) > [ML] Add MultiClass classification metrics > -- > > Key: IGNITE-10869 > URL: https://issues.apache.org/jira/browse/IGNITE-10869 > Project: Ignite > Issue Type: Sub-task > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Minor > Fix For: 2.8 > > > Add ability to calculate multiple metrics (as binary metrics) for multiclass > classification > It can be merged with OneVsRest approach -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10900) Print a warning if native persistence is used without an explicit consistent ID
[ https://issues.apache.org/jira/browse/IGNITE-10900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740463#comment-16740463 ] Stanislav Lukyanov commented on IGNITE-10900: - There are some pros and cons of adding a warning when there is no explicit consistent ID. The benefits are described in the Description. The main downside is that this will be yet another warning printed for the default configuration - and also for the examples. This is always confusing for a new user ("I've just started Ignite and I'm already doing something wrong?"). Also, currently there is no easy way to change consistent ID per node when the configuration file/bean is shared between all nodes. However, the warning doesn't break any behavior, and the benefit of having consistent IDs in more installations seems more important than the downsides. > Print a warning if native persistence is used without an explicit consistent > ID > --- > > Key: IGNITE-10900 > URL: https://issues.apache.org/jira/browse/IGNITE-10900 > Project: Ignite > Issue Type: Bug >Reporter: Stanislav Lukyanov >Priority: Major > > Experience shows that when Native Persistence is enabled, it is better to > explicitly set ConsistentIDs than use the autogenerated ones. > First, it simplifies managing the baseline topology. It is much easier to > manage it via control.sh when the nodes have stable and meaningful names. > Second, it helps to avoid certain shoot-yourself-in-the-foot issues. E.g. if > one loses all the data of a baseline node, when that node is restarted it > doesn't have its old autogenerated consistent ID - so it is not a part of the > baseline anymore. This may be unexpected and confusing. > Finally, having explicit consistent IDs improves the general stability of the > setup - one knows what the set of nodes is, where they run and what they're > called. > All in all, it seems beneficial to urge users to explicitly configure > consistent IDs.
We can do this by introducing a warning that is printed every > time a new consistent ID is automatically generated. It should also be > printed when a node doesn't have an explicit consistent ID and picks up one > from an existing persistence folder.
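For a node started from a Spring XML file, the explicit consistent ID recommended above is a single property on `IgniteConfiguration` (a sketch; `node-1` is a placeholder and has to be unique per node, which is also why a configuration file shared between all nodes makes this awkward, as the comment notes):

```xml
<!-- Sketch: set an explicit consistent ID so the node keeps a stable,
     meaningful name in the baseline topology across restarts.
     "node-1" is a placeholder value; every node needs its own. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="consistentId" value="node-1"/>
</bean>
```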
[jira] [Updated] (IGNITE-10899) Service Grid: disconnecting during node stop may lead to deadlock
[ https://issues.apache.org/jira/browse/IGNITE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Daradur updated IGNITE-10899: Description: In a rare case {{onDisconneced}} may be called during node stopping and deadlock may occur because of {{ServiceDeploymentManage#stopProcessong}} blocks busyLock and not release it intentionally. The issue has been found on TeamCity in Zookeeper's suite with the following stack trace: {code:java} disco-notifier-worker-#569118%client4%" #609288 prio=5 os_prio=0 tid=0x7f905b440800 nid=0x3f6fbd sleeping[0x7f9383efd000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at org.apache.ignite.internal.util.GridSpinReadWriteLock.writeLock(GridSpinReadWriteLock.java:204) at org.apache.ignite.internal.util.GridSpinBusyLock.block(GridSpinBusyLock.java:76) at org.apache.ignite.internal.processors.service.ServiceDeploymentManager.stopProcessing(ServiceDeploymentManager.java:137) at org.apache.ignite.internal.processors.service.IgniteServiceProcessor.stopProcessor(IgniteServiceProcessor.java:261) at org.apache.ignite.internal.processors.service.IgniteServiceProcessor.onDisconnected(IgniteServiceProcessor.java:429) at org.apache.ignite.internal.IgniteKernal.onDisconnected(IgniteKernal.java:4010) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:819) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.lambda$onDiscovery$0(GridDiscoveryManager.java:602) - locked <0xf7ecdfa0> (a java.lang.Object) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4$$Lambda$25/2087171109.run(Unknown Source) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body0(GridDiscoveryManager.java:2696) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body(GridDiscoveryManager.java:2734) at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) at java.lang.Thread.run(Thread.java:748) {code} was: In a rare case, when {{onDisconneced}} may be called during node stopping deadlock may occur because of {{ServiceDeploymentManage#stopProcessong}} blocks busyLock and not release it intended. The issue found on TeamCity Zookeeper suite with the following stack trace: {CODE} disco-notifier-worker-#569118%client4%" #609288 prio=5 os_prio=0 tid=0x7f905b440800 nid=0x3f6fbd sleeping[0x7f9383efd000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at org.apache.ignite.internal.util.GridSpinReadWriteLock.writeLock(GridSpinReadWriteLock.java:204) at org.apache.ignite.internal.util.GridSpinBusyLock.block(GridSpinBusyLock.java:76) at org.apache.ignite.internal.processors.service.ServiceDeploymentManager.stopProcessing(ServiceDeploymentManager.java:137) at org.apache.ignite.internal.processors.service.IgniteServiceProcessor.stopProcessor(IgniteServiceProcessor.java:261) at org.apache.ignite.internal.processors.service.IgniteServiceProcessor.onDisconnected(IgniteServiceProcessor.java:429) at org.apache.ignite.internal.IgniteKernal.onDisconnected(IgniteKernal.java:4010) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:819) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.lambda$onDiscovery$0(GridDiscoveryManager.java:602) - locked <0xf7ecdfa0> (a java.lang.Object) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4$$Lambda$25/2087171109.run(Unknown Source) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body0(GridDiscoveryManager.java:2696) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body(GridDiscoveryManager.java:2734) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) at 
java.lang.Thread.run(Thread.java:748) {CODE} > Service Grid: disconnecting during node stop may lead to deadlock > - > > Key: IGNITE-10899 > URL: https://issues.apache.org/jira/browse/IGNITE-10899 > Project: Ignite > Issue Type: Task > Components: managed services >Affects Versions: 2.7 >Reporter: Vyacheslav Daradur >Assignee: Vyacheslav Daradur >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.8 > > > In a rare case {{onDisconnected}} may be called during node stopping and > deadlock may occur because {{ServiceDeploymentManager#stopProcessing}} > blocks busyLock and intentionally does not release it. > The issue has been found on TeamCity in Zookeeper's suite
[jira] [Updated] (IGNITE-10899) Service Grid: disconnecting during node stop may lead to deadlock
[ https://issues.apache.org/jira/browse/IGNITE-10899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Daradur updated IGNITE-10899: Description: In a rare case {{onDisconneced}} may be called during node stopping and deadlock may occur because of {{ServiceDeploymentManage#stopProcessong}} blocks busyLock and not release it intentionally. The issue has been found on TeamCity in [Zookeeper's suite|https://ci.ignite.apache.org/viewLog.html?buildId=2768270=IgniteTests24Java8_ZooKeeperDiscovery2] with the following stack trace: {code:java} disco-notifier-worker-#569118%client4%" #609288 prio=5 os_prio=0 tid=0x7f905b440800 nid=0x3f6fbd sleeping[0x7f9383efd000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at org.apache.ignite.internal.util.GridSpinReadWriteLock.writeLock(GridSpinReadWriteLock.java:204) at org.apache.ignite.internal.util.GridSpinBusyLock.block(GridSpinBusyLock.java:76) at org.apache.ignite.internal.processors.service.ServiceDeploymentManager.stopProcessing(ServiceDeploymentManager.java:137) at org.apache.ignite.internal.processors.service.IgniteServiceProcessor.stopProcessor(IgniteServiceProcessor.java:261) at org.apache.ignite.internal.processors.service.IgniteServiceProcessor.onDisconnected(IgniteServiceProcessor.java:429) at org.apache.ignite.internal.IgniteKernal.onDisconnected(IgniteKernal.java:4010) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:819) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.lambda$onDiscovery$0(GridDiscoveryManager.java:602) - locked <0xf7ecdfa0> (a java.lang.Object) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4$$Lambda$25/2087171109.run(Unknown Source) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body0(GridDiscoveryManager.java:2696) at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body(GridDiscoveryManager.java:2734) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) at java.lang.Thread.run(Thread.java:748) {code} was: In a rare case {{onDisconneced}} may be called during node stopping and deadlock may occur because of {{ServiceDeploymentManage#stopProcessong}} blocks busyLock and not release it intentionally. The issue has been found on TeamCity in Zookeeper's suite with the following stack trace: {code:java} disco-notifier-worker-#569118%client4%" #609288 prio=5 os_prio=0 tid=0x7f905b440800 nid=0x3f6fbd sleeping[0x7f9383efd000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at org.apache.ignite.internal.util.GridSpinReadWriteLock.writeLock(GridSpinReadWriteLock.java:204) at org.apache.ignite.internal.util.GridSpinBusyLock.block(GridSpinBusyLock.java:76) at org.apache.ignite.internal.processors.service.ServiceDeploymentManager.stopProcessing(ServiceDeploymentManager.java:137) at org.apache.ignite.internal.processors.service.IgniteServiceProcessor.stopProcessor(IgniteServiceProcessor.java:261) at org.apache.ignite.internal.processors.service.IgniteServiceProcessor.onDisconnected(IgniteServiceProcessor.java:429) at org.apache.ignite.internal.IgniteKernal.onDisconnected(IgniteKernal.java:4010) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:819) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.lambda$onDiscovery$0(GridDiscoveryManager.java:602) - locked <0xf7ecdfa0> (a java.lang.Object) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4$$Lambda$25/2087171109.run(Unknown Source) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body0(GridDiscoveryManager.java:2696) at 
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body(GridDiscoveryManager.java:2734) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) at java.lang.Thread.run(Thread.java:748) {code} > Service Grid: disconnecting during node stop may lead to deadlock > - > > Key: IGNITE-10899 > URL: https://issues.apache.org/jira/browse/IGNITE-10899 > Project: Ignite > Issue Type: Task > Components: managed services >Affects Versions: 2.7 >Reporter: Vyacheslav Daradur >Assignee: Vyacheslav Daradur >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.8 > > > In a rare case {{onDisconneced}} may be called during node stopping and > deadlock may occur because of
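The failure mode above — a worker still inside a busy section while the stop path tries to take the busy lock exclusively — can be illustrated with a plain `ReentrantReadWriteLock`. This is a hypothetical stand-alone sketch, not Ignite's `GridSpinBusyLock`; the class and method names are made up for illustration:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class BusyLockSketch {
    /**
     * Models the reported deadlock: a thread still inside a busy section
     * (read lock held, like the disconnect callback above) prevents the
     * stop path from ever acquiring the lock exclusively (write lock,
     * like stopProcessing() blocking the busy lock).
     */
    public static boolean stopCanProceedWhileBusy() throws InterruptedException {
        ReentrantReadWriteLock busyLock = new ReentrantReadWriteLock();

        busyLock.readLock().lock(); // onDisconnected() enters a busy section
        try {
            // stopProcessing() tries to block the busy lock exclusively;
            // read-to-write upgrade is not possible, so this cannot succeed
            // while the busy section is still entered.
            return busyLock.writeLock().tryLock(100, TimeUnit.MILLISECONDS);
        } finally {
            busyLock.readLock().unlock();
        }
    }
}
```

The fix direction is the usual one for busy-lock shutdown: the stop path must not block on the busy lock from a thread that may itself be inside a busy section.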
[jira] [Updated] (IGNITE-9250) Replace CacheAffinitySharedManager.CachesInfo by ClusterCachesInfo
[ https://issues.apache.org/jira/browse/IGNITE-9250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-9250: --- Ignite Flags: (was: Docs Required) > Replace CacheAffinitySharedManager.CachesInfo by ClusterCachesInfo > -- > > Key: IGNITE-9250 > URL: https://issues.apache.org/jira/browse/IGNITE-9250 > Project: Ignite > Issue Type: Improvement >Reporter: Anton Kalashnikov >Assignee: Anton Kalashnikov >Priority: Major > > The registered caches (and groups) are currently duplicated: they are held in > ClusterCachesInfo, the main storage, and also in > CacheAffinitySharedManager.CachesInfo. This looks redundant and can lead > to inconsistency of the caches info. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10697) [ML] Add Frequency Encoding
[ https://issues.apache.org/jira/browse/IGNITE-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10697: -- Priority: Trivial (was: Major) > [ML] Add Frequency Encoding > --- > > Key: IGNITE-10697 > URL: https://issues.apache.org/jira/browse/IGNITE-10697 > Project: Ignite > Issue Type: New Feature > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Trivial > Fix For: 2.8 > > > Encode each value as its fraction among all the labels. This can work with linear > models if the frequency is correlated with the target value. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
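As a sketch of the idea in plain Java (not the eventual Ignite ML preprocessor API — the class and method names here are illustrative), frequency encoding replaces each category with its observed fraction:

```java
import java.util.HashMap;
import java.util.Map;

public class FrequencyEncoderSketch {
    /** Maps each category to its fraction among all observed values. */
    public static Map<String, Double> fit(String[] values) {
        Map<String, Double> freq = new HashMap<>();

        // Count occurrences of each category.
        for (String v : values)
            freq.merge(v, 1.0, Double::sum);

        // Normalize counts to fractions of the total number of labels.
        for (Map.Entry<String, Double> e : freq.entrySet())
            e.setValue(e.getValue() / values.length);

        return freq;
    }
}
```

For example, for the input {"a", "a", "b", "c"} the encoder maps "a" to 0.5 and "b" and "c" to 0.25 each; a linear model can then consume these fractions directly as features.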
[jira] [Updated] (IGNITE-10711) [ML] [Umbrella] Provide metrics to evaluate the quality of model
[ https://issues.apache.org/jira/browse/IGNITE-10711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10711: -- Priority: Minor (was: Major) > [ML] [Umbrella] Provide metrics to evaluate the quality of model > > > Key: IGNITE-10711 > URL: https://issues.apache.org/jira/browse/IGNITE-10711 > Project: Ignite > Issue Type: New Feature > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Minor > Fix For: 2.8 > > > This is an umbrella ticket for all metric-related tickets -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10870) [ML] Add an example for KNN/LogReg and multi-class task full Iris dataset
[ https://issues.apache.org/jira/browse/IGNITE-10870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10870: -- Priority: Critical (was: Major) > [ML] Add an example for KNN/LogReg and multi-class task full Iris dataset > - > > Key: IGNITE-10870 > URL: https://issues.apache.org/jira/browse/IGNITE-10870 > Project: Ignite > Issue Type: Sub-task > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Critical > Fix For: 2.8 > > > Add one or two examples for KNN/LogReg on the full Iris dataset with 3 classes -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7022) Use QuadTree for kNN performance
[ https://issues.apache.org/jira/browse/IGNITE-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-7022: - Priority: Trivial (was: Minor) > Use QuadTree for kNN performance > > > Key: IGNITE-7022 > URL: https://issues.apache.org/jira/browse/IGNITE-7022 > Project: Ignite > Issue Type: Improvement > Components: ml >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Trivial > > The kNN implementation is currently not very fast. Its performance could be improved > with [https://en.wikipedia.org/wiki/Quadtree] > Benchmarks should be provided as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7328) Improve Labeled Dataset loading from txt file
[ https://issues.apache.org/jira/browse/IGNITE-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-7328: - Priority: Trivial (was: Minor) > Improve Labeled Dataset loading from txt file > - > > Key: IGNITE-7328 > URL: https://issues.apache.org/jira/browse/IGNITE-7328 > Project: Ignite > Issue Type: New Feature > Components: ml >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Trivial > > 1. Wouldn't it be better to parse rows in-place (not to save them as strings > at first)? In the current implementation we would need to keep the dataset in > memory twice, which might be a problem for big datasets. > 2. What about the case when a dataset contains non-numerical data? Do > we consider this case, or will some other "DatasetLoader" be used for such > purposes? > 3. Just an idea: in case we don't want to fail on bad data (99% of cases), it > would be great to report the quality of the loaded dataset, such as the number > of missed rows/values. > 4. Should a row that doesn't contain the required number of columns be > considered "bad data" rather than breaking parsing with an > IndexOutOfBoundsException? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7327) Add CSV loading to Labeled Dataset with Loader
[ https://issues.apache.org/jira/browse/IGNITE-7327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-7327: - Priority: Trivial (was: Minor) > Add CSV loading to Labeled Dataset with Loader > --- > > Key: IGNITE-7327 > URL: https://issues.apache.org/jira/browse/IGNITE-7327 > Project: Ignite > Issue Type: New Feature > Components: ml >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Trivial > > Comment from [~dmitrievanthony] > Lots of datasets (from Kaggle for example) are supplied in CSV format with > header line. In connection with it does it make sense to: > Use some CSV parsing (it's a bit more complicated than just splitting by > comma)? > Add ability to use first header line as a source for so called feature names? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8571) Baseline auto-adjust feature
[ https://issues.apache.org/jira/browse/IGNITE-8571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin reassigned IGNITE-8571: -- Assignee: (was: Ivan Bessonov) > Baseline auto-adjust feature > > > Key: IGNITE-8571 > URL: https://issues.apache.org/jira/browse/IGNITE-8571 > Project: Ignite > Issue Type: New Feature >Reporter: Eduard Shangareev >Priority: Major > Labels: IEP-4, Phase-2 > > Now we have only one way to change the BLAT - manually update it via control.sh > or API. > We need to add the possibility to change it automatically, adjusting to the > current topology. > So, I propose 3 new parameters which would be responsible for tuning this > feature. > 1. Flag autoAdjustEnabled - true/false. Easy. Manual baseline control or auto > adjusting of the baseline. > 2. autoAdjustTimeout - the time we would wait after the actual topology > change. It would be reset if a new discovery event (node join/exit) > happened. > 3. autoAdjustMaxTimeout - the time we would wait from the first discovery > event in the chain. Once it is reached, we would change the BLAT right away > (no matter whether another node join/exit happened or not). > We need to change the API in the following way: > 1. org.apache.ignite.IgniteCluster > *Add* > isBaselineAutoAdjustEnabled() > setBaselineAutoAdjustEnabled(boolean enabled); > setBaselineAutoAdjustTimeout(long timeoutInMs); > setBaselineAutoAdjustMaxTimeout(long timeoutInMs); > 2. org.apache.ignite.configuration.IgniteConfiguration > *Add* > IgniteConfiguration setBaselineAutoAdjustEnabled(boolean enabled); > IgniteConfiguration setBaselineAutoAdjustTimeout(long timeoutInMs); > IgniteConfiguration setBaselineAutoAdjustMaxTimeout(long timeoutInMs); > Also, we need to ensure that all nodes have the same parameters. > And we should be able to survive the coordinator leaving during parameter > changes. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
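The interaction of the two proposed timeouts can be sketched as follows. This is a hypothetical model of the described semantics, not Ignite code: each discovery event restarts the quiet period of autoAdjustTimeout, while autoAdjustMaxTimeout caps the total wait from the first event in the chain.

```java
public class AutoAdjustDeadlineSketch {
    private final long timeout;    // autoAdjustTimeout
    private final long maxTimeout; // autoAdjustMaxTimeout

    private long firstEvtTs = -1;  // first discovery event in the current chain
    private long lastEvtTs = -1;   // latest discovery event

    public AutoAdjustDeadlineSketch(long timeout, long maxTimeout) {
        this.timeout = timeout;
        this.maxTimeout = maxTimeout;
    }

    /** A node joined or left: restart the quiet period. */
    public void onDiscoveryEvent(long now) {
        if (firstEvtTs < 0)
            firstEvtTs = now;

        lastEvtTs = now;
    }

    /** Moment at which the BLAT would be changed automatically. */
    public long adjustDeadline() {
        // Wait 'timeout' after the last event, but never longer than
        // 'maxTimeout' after the first event in the chain.
        return Math.min(lastEvtTs + timeout, firstEvtTs + maxTimeout);
    }
}
```

With timeout=100 and maxTimeout=300, events at t=0, 90 and 180 give a deadline of 280 (last event plus timeout); a further event at t=250 pins the deadline at 300, the cap from the first event.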
[jira] [Updated] (IGNITE-10529) [ML][Umbrella] Add Confusion Matrix support for classification algorithms
[ https://issues.apache.org/jira/browse/IGNITE-10529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10529: -- Priority: Major (was: Minor) > [ML][Umbrella] Add Confusion Matrix support for classification algorithms > - > > Key: IGNITE-10529 > URL: https://issues.apache.org/jira/browse/IGNITE-10529 > Project: Ignite > Issue Type: New Feature > Components: ml >Affects Versions: 2.8 >Reporter: Aleksey Zinoviev >Assignee: Aleksey Zinoviev >Priority: Major > Fix For: 2.8 > > > This is an umbrella ticket for Confusion Matrix Support -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Closed] (IGNITE-10391) MVCC: Invoke request fails on backup while rebalance is in progress.
[ https://issues.apache.org/jira/browse/IGNITE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov closed IGNITE-10391. - > MVCC: Invoke request fails on backup while rebalance is in progress. > > > Key: IGNITE-10391 > URL: https://issues.apache.org/jira/browse/IGNITE-10391 > Project: Ignite > Issue Type: Bug > Components: mvcc >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: mvcc_stabilization_stage_1, transactions > Fix For: 2.8 > > > The Invoke request fails with an assertion on a backup while rebalance is in progress. > The enlist request handler expects an entry processor instead of a value in case of > an Invoke operation, but when rebalance is in progress we pass the entry history to > the backup side, which triggers the assertion. > We have to handle this case correctly: apply the history first, then the entry > processor. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Closed] (IGNITE-10254) MVCC: invokeAll may hang on unstable topology.
[ https://issues.apache.org/jira/browse/IGNITE-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov closed IGNITE-10254. - > MVCC: invokeAll may hang on unstable topology. > --- > > Key: IGNITE-10254 > URL: https://issues.apache.org/jira/browse/IGNITE-10254 > Project: Ignite > Issue Type: Bug > Components: mvcc >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: Hanging, mvcc_stabilization_stage_1 > Fix For: 2.8 > > > Test IgniteCacheEntryProcessorNodeJoinTest.testEntryProcessorNodeLeave() > hangs with TRANSACTIONAL_SNAPSHOT cache mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10145) [ML] Implement ROC AUC metric
[ https://issues.apache.org/jira/browse/IGNITE-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10145: -- Priority: Minor (was: Major) > [ML] Implement ROC AUC metric > - > > Key: IGNITE-10145 > URL: https://issues.apache.org/jira/browse/IGNITE-10145 > Project: Ignite > Issue Type: New Feature > Components: ml >Reporter: Yury Babak >Assignee: Aleksey Zinoviev >Priority: Minor > Fix For: 2.8 > > > Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from > prediction scores. > We want to implement this score for our models. > Some links: > * [wiki|https://en.wikipedia.org/wiki/Receiver_operating_characteristic] > * [google > dev|https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Reopened] (IGNITE-10391) MVCC: Invoke request fails on backup while rebalance is in progress.
[ https://issues.apache.org/jira/browse/IGNITE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov reopened IGNITE-10391: --- > MVCC: Invoke request fails on backup while rebalance is in progress. > > > Key: IGNITE-10391 > URL: https://issues.apache.org/jira/browse/IGNITE-10391 > Project: Ignite > Issue Type: Bug > Components: mvcc >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: mvcc_stabilization_stage_1, transactions > Fix For: 2.8 > > > The Invoke request fails with an assertion on a backup while rebalance is in progress. > The enlist request handler expects an entry processor instead of a value in case of > an Invoke operation, but when rebalance is in progress we pass the entry history to > the backup side, which triggers the assertion. > We have to handle this case correctly: apply the history first, then the entry > processor. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-10900) Print a warning if native persistence is used without an explicit consistent ID
Stanislav Lukyanov created IGNITE-10900: --- Summary: Print a warning if native persistence is used without an explicit consistent ID Key: IGNITE-10900 URL: https://issues.apache.org/jira/browse/IGNITE-10900 Project: Ignite Issue Type: Bug Reporter: Stanislav Lukyanov Experience shows that when Native Persistence is enabled, it is better to explicitly set ConsistentIDs than to use the autogenerated ones. First, it simplifies managing the baseline topology. It is much easier to manage it via control.sh when the nodes have stable and meaningful names. Second, it helps to avoid certain shoot-yourself-in-the-foot issues. E.g. if one loses all the data of a baseline node, when that node is restarted it doesn't have its old autogenerated consistent ID - so it is not a part of the baseline anymore. This may be unexpected and confusing. Finally, having explicit consistent IDs improves the general stability of the setup - one knows what the set of nodes is, where they run and what they're called. All in all, it seems beneficial to urge users to explicitly configure consistent IDs. We can do this by introducing a warning that is printed every time a new consistent ID is automatically generated. It should also be printed when a node doesn't have an explicit consistent ID and picks up one from an existing persistence folder. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
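A minimal sketch of the suggested practice, assuming the standard public configuration API; the ID "node1" and the rest of the values are example placeholders, not recommendations:

```java
// Explicitly set the consistent ID on a node with native persistence enabled.
IgniteConfiguration cfg = new IgniteConfiguration()
    .setConsistentId("node1") // stable, meaningful baseline identity
    .setDataStorageConfiguration(new DataStorageConfiguration()
        .setDefaultDataRegionConfiguration(new DataRegionConfiguration()
            .setPersistenceEnabled(true)));

Ignition.start(cfg);
```

With an explicit ID like this, the node rejoins the baseline under the same name even after its local persistence files are lost, avoiding the surprise described above.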
[jira] [Commented] (IGNITE-9290) Make remove explicit locks async when node left.
[ https://issues.apache.org/jira/browse/IGNITE-9290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740456#comment-16740456 ] Alexey Goncharuk commented on IGNITE-9290: -- [~amashenkov], I've re-checked, since we do not submit the task to the striped pool, nothing else should be changed. Thanks, merged to master. > Make remove explicit locks async when node left. > > > Key: IGNITE-9290 > URL: https://issues.apache.org/jira/browse/IGNITE-9290 > Project: Ignite > Issue Type: Bug > Components: cache >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Critical > Labels: deadlock, iep-25 > Fix For: 2.8 > > > GridCacheMvccManager.removeExplicitNodeLocks() runs synchronously in the discovery > and exchange threads. This introduces unnecessary delays in the discovery and > exchange process. > Also, this may cause a deadlock on node stop if a user transaction holds an > entry lock and awaits some Ignite manager response (e.g. cache store or DR or > CQ), as managers stop right after the last exchange has finished, so they > can't detect that the node is stopping. > > [1] > [http://apache-ignite-developers.2346864.n4.nabble.com/Synchronous-tx-entries-unlocking-in-discovery-exchange-threads-td33827.html] > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (IGNITE-10391) MVCC: Invoke request fails on backup while rebalance is in progress.
[ https://issues.apache.org/jira/browse/IGNITE-10391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov resolved IGNITE-10391. --- Resolution: Fixed Fixed within IGNITE-10794 > MVCC: Invoke request fails on backup while rebalance is in progress. > > > Key: IGNITE-10391 > URL: https://issues.apache.org/jira/browse/IGNITE-10391 > Project: Ignite > Issue Type: Bug > Components: mvcc >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: mvcc_stabilization_stage_1, transactions > Fix For: 2.8 > > > The Invoke request fails with an assertion on a backup while rebalance is in progress. > The enlist request handler expects an entry processor instead of a value in case of > an Invoke operation, but when rebalance is in progress we pass the entry history to > the backup side, which triggers the assertion. > We have to handle this case correctly: apply the history first, then the entry > processor. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-10899) Service Grid: disconnecting during node stop may lead to deadlock
Vyacheslav Daradur created IGNITE-10899: --- Summary: Service Grid: disconnecting during node stop may lead to deadlock Key: IGNITE-10899 URL: https://issues.apache.org/jira/browse/IGNITE-10899 Project: Ignite Issue Type: Task Components: managed services Affects Versions: 2.7 Reporter: Vyacheslav Daradur Assignee: Vyacheslav Daradur Fix For: 2.8 In a rare case, {{onDisconnected}} may be called during node stop, and a deadlock may occur because {{ServiceDeploymentManager#stopProcessing}} blocks the busy lock and intentionally does not release it. The issue was found on TeamCity in the Zookeeper suite with the following stack trace: {CODE} disco-notifier-worker-#569118%client4%" #609288 prio=5 os_prio=0 tid=0x7f905b440800 nid=0x3f6fbd sleeping[0x7f9383efd000] java.lang.Thread.State: TIMED_WAITING (sleeping) at java.lang.Thread.sleep(Native Method) at org.apache.ignite.internal.util.GridSpinReadWriteLock.writeLock(GridSpinReadWriteLock.java:204) at org.apache.ignite.internal.util.GridSpinBusyLock.block(GridSpinBusyLock.java:76) at org.apache.ignite.internal.processors.service.ServiceDeploymentManager.stopProcessing(ServiceDeploymentManager.java:137) at org.apache.ignite.internal.processors.service.IgniteServiceProcessor.stopProcessor(IgniteServiceProcessor.java:261) at org.apache.ignite.internal.processors.service.IgniteServiceProcessor.onDisconnected(IgniteServiceProcessor.java:429) at org.apache.ignite.internal.IgniteKernal.onDisconnected(IgniteKernal.java:4010) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:819) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.lambda$onDiscovery$0(GridDiscoveryManager.java:602) - locked <0xf7ecdfa0> (a java.lang.Object) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4$$Lambda$25/2087171109.run(Unknown Source) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body0(GridDiscoveryManager.java:2696) 
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body(GridDiscoveryManager.java:2734) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) at java.lang.Thread.run(Thread.java:748) {CODE} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10145) [ML] Implement ROC AUC metric
[ https://issues.apache.org/jira/browse/IGNITE-10145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Zinoviev updated IGNITE-10145: -- Summary: [ML] Implement ROC AUC metric (was: [ML] ROC AUC score) > [ML] Implement ROC AUC metric > - > > Key: IGNITE-10145 > URL: https://issues.apache.org/jira/browse/IGNITE-10145 > Project: Ignite > Issue Type: New Feature > Components: ml >Reporter: Yury Babak >Assignee: Aleksey Zinoviev >Priority: Major > Fix For: 2.8 > > > Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC) from > prediction scores. > We want to implement this score for our models. > Some links: > * [wiki|https://en.wikipedia.org/wiki/Receiver_operating_characteristic] > * [google > dev|https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10884) Failure to perform non-MVCC SQL from transactions
[ https://issues.apache.org/jira/browse/IGNITE-10884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740430#comment-16740430 ] Roman Kondakov commented on IGNITE-10884: - [~gvvinblade], please review. TC is green. > Failure to perform non-MVCC SQL from transactions > - > > Key: IGNITE-10884 > URL: https://issues.apache.org/jira/browse/IGNITE-10884 > Project: Ignite > Issue Type: Bug > Components: mvcc, sql >Affects Versions: 2.7 >Reporter: Ilya Kasnacheev >Assignee: Roman Kondakov >Priority: Blocker > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > MVCC was added with the expectation that it would not affect existing KV > transactional code, nor SQL on non-TRANSACTIONAL_SNAPSHOT caches. > However, this turned out not to be the case: if you open an OPTIMISTIC > SERIALIZABLE transaction and do a SQL query to fetch data from a table, an exception > will be thrown with *Only pessimistic repeatable read transactions are > supported at the moment* > {code} > Exception in thread "main" javax.cache.CacheException: Only pessimistic > repeatable read transactions are supported at the moment. > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697) > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636) > at > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388) > at > IgniteTransactionTester.testTransactionException(IgniteTransactionTester.java:53) > at IgniteTransactionTester.main(IgniteTransactionTester.java:38) > Caused by: class > org.apache.ignite.internal.processors.query.IgniteSQLException: Only > pessimistic repeatable read transactions are supported at the moment. 
> at > org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:690) > at > org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:671) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.runQueryTwoStep(IgniteH2Indexing.java:1793) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunDistributedQuery(IgniteH2Indexing.java:2610) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:2315) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:2209) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2135) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2130) > at > org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2144) > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:685) > ... 4 more > {code} > This is a major regression towards 2.6. Please see linked reproducer > (IgniteTransactionTester class). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10898) Exchange coordinator failover breaks in some cases when node filter is used
[ https://issues.apache.org/jira/browse/IGNITE-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk updated IGNITE-10898: -- Fix Version/s: 2.8 > Exchange coordinator failover breaks in some cases when node filter is used > --- > > Key: IGNITE-10898 > URL: https://issues.apache.org/jira/browse/IGNITE-10898 > Project: Ignite > Issue Type: Bug >Reporter: Alexey Goncharuk >Priority: Critical > Fix For: 2.8 > > > Currently if a node does not pass cache node filter, we do not store this > cache affinity on the node unless the node is coordinator. This, however, may > fail in the following scenario: > 1) A node passing node filter joins cluster > 2) During the join coordinator fails, new coordinator is selected for which > previous exchange is completed > 3) Next coordinator attempts to fetch the affinity, and joining node resends > partitions single message, but there are two problems here. First, exchange > fast-reply does not wait for the new affinity initialization which results in > {{IllegalStateException}}. Second, such an attempt to fetch affinity may lead > either to deadlock or to incorrectly fetched affinity (basically, coordinator > must be in consensus with other nodes passing node filter) > Test attached reproduces the issue. > I suggest to always calculate and keep affinity on all nodes, even ones not > passing the filter. In this case, there will be no need to fetch and > recalculate affinity ({{initCoordinatorCaches}} will go away. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10897) Blocked drop table operations cause strange issues
[ https://issues.apache.org/jira/browse/IGNITE-10897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev updated IGNITE-10897: - Component/s: sql > Blocked drop table operations cause strange issues > -- > > Key: IGNITE-10897 > URL: https://issues.apache.org/jira/browse/IGNITE-10897 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.6, 2.7 >Reporter: JiajunYang >Priority: Major > > Steps to reproduce a blocked drop table operation: > 1. Create a table and put some data to a node with persistence enabled. > 2. Do a long-running select query targeting the table. > 3. Drop the table before the select query ends. > Then you will see that the drop table operation blocks until the select query ends. > A strange issue caused by blocked drop table operations: > 1. Do another drop table operation while there is a blocked drop table > operation. This operation will also block. > 2. Try to recreate the table with the same name. You will see a "table > already exists" exception. > 3. Try to drop the table again; then you will see a > "Table doesn't exist" exception. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10897) Blocked drop table operations cause strange issues
[ https://issues.apache.org/jira/browse/IGNITE-10897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev updated IGNITE-10897: - Ignite Flags: (was: Docs Required) > Blocked drop table operations cause strange issues > -- > > Key: IGNITE-10897 > URL: https://issues.apache.org/jira/browse/IGNITE-10897 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.6, 2.7 >Reporter: JiajunYang >Priority: Major > > Steps to reproduce a blocked drop table operation: > 1. Create a table and put some data to a node with persistence enabled. > 2. Do a long-running select query targeting the table. > 3. Drop the table before the select query ends. > Then you will see that the drop table operation blocks until the select query ends. > A strange issue caused by blocked drop table operations: > 1. Do another drop table operation while there is a blocked drop table > operation. This operation will also block. > 2. Try to recreate the table with the same name. You will see a "table > already exists" exception. > 3. Try to drop the table again; then you will see a > "Table doesn't exist" exception. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10898) Exchange coordinator failover breaks in some cases when node filter is used
[ https://issues.apache.org/jira/browse/IGNITE-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk updated IGNITE-10898: -- Attachment: NodeWithFilterRestartTest.java > Exchange coordinator failover breaks in some cases when node filter is used > --- > > Key: IGNITE-10898 > URL: https://issues.apache.org/jira/browse/IGNITE-10898 > Project: Ignite > Issue Type: Bug >Reporter: Alexey Goncharuk >Priority: Critical > Fix For: 2.8 > > Attachments: NodeWithFilterRestartTest.java > > > Currently if a node does not pass cache node filter, we do not store this > cache affinity on the node unless the node is coordinator. This, however, may > fail in the following scenario: > 1) A node passing node filter joins cluster > 2) During the join coordinator fails, new coordinator is selected for which > previous exchange is completed > 3) Next coordinator attempts to fetch the affinity, and joining node resends > partitions single message, but there are two problems here. First, exchange > fast-reply does not wait for the new affinity initialization which results in > {{IllegalStateException}}. Second, such an attempt to fetch affinity may lead > either to deadlock or to incorrectly fetched affinity (basically, coordinator > must be in consensus with other nodes passing node filter) > Test attached reproduces the issue. > I suggest to always calculate and keep affinity on all nodes, even ones not > passing the filter. In this case, there will be no need to fetch and > recalculate affinity ({{initCoordinatorCaches}} will go away. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9858) [Test Failed] SystemCacheNotConfiguredTest#test flaky fails on TC (timeout).
[ https://issues.apache.org/jira/browse/IGNITE-9858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740436#comment-16740436 ] Dmitriy Govorukhin commented on IGNITE-9858: [~xtern] Changes look good to me, thanks for the contribution! Merged to master. > [Test Failed] SystemCacheNotConfiguredTest#test flaky fails on TC (timeout). > > > Key: IGNITE-9858 > URL: https://issues.apache.org/jira/browse/IGNITE-9858 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.6 >Reporter: Pavel Pereslegin >Assignee: Pavel Pereslegin >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.8 > > > SystemCacheNotConfiguredTest hangs sometimes on TeamCity (timeout). > Example of such failures on master branch: > [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8==testDetails=-2762467041583095183=TEST_STATUS_DESC=50_IgniteTests24Java8=%3Cdefault%3E] > When using the IP finder in shared mode, each node should register its own address > (except clients, obviously). > The check that the node is a client uses the injected (via DI > @IgniteInstanceResource) Ignite instance (see > {{TcpDiscoveryIpFinderAdapter#initializeLocalAddresses}}). > So when a client and a server start simultaneously, the following scenario is > possible: the Ignite server is injected first, then the Ignite client is injected; > when the SPI is initialized ({{spiStart}}), both nodes assume that the local > node is a client and don't register the local address. > {noformat} > [2018-10-11 18:03:49,794][WARN > ][tcp-client-disco-msg-worker-#57%client%][TcpDiscoverySpi] IP finder > returned empty addresses list. Please check IP finder configuration. Will > retry every 2000 ms. Change 'reconnectDelay' to configure the frequency of > retries.{noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10898) Exchange coordinator failover breaks in some cases when node filter is used
[ https://issues.apache.org/jira/browse/IGNITE-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk updated IGNITE-10898: -- Priority: Critical (was: Major) > Exchange coordinator failover breaks in some cases when node filter is used > --- > > Key: IGNITE-10898 > URL: https://issues.apache.org/jira/browse/IGNITE-10898 > Project: Ignite > Issue Type: Bug > Reporter: Alexey Goncharuk > Priority: Critical > > Currently, if a node does not pass the cache node filter, we do not store this cache's affinity on the node unless the node is the coordinator. This, however, may fail in the following scenario: > 1) A node passing the node filter joins the cluster > 2) During the join the coordinator fails, and a new coordinator is selected for which the previous exchange is completed > 3) The next coordinator attempts to fetch the affinity, and the joining node resends the partitions single message, but there are two problems here. First, exchange fast-reply does not wait for the new affinity initialization, which results in an {{IllegalStateException}}. Second, such an attempt to fetch affinity may lead either to a deadlock or to incorrectly fetched affinity (basically, the coordinator must be in consensus with the other nodes passing the node filter). > The attached test reproduces the issue. > I suggest always calculating and keeping affinity on all nodes, even ones not passing the filter. In this case, there will be no need to fetch and recalculate affinity ({{initCoordinatorCaches}} will go away). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10898) Exchange coordinator failover breaks in some cases when node filter is used
[ https://issues.apache.org/jira/browse/IGNITE-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk updated IGNITE-10898: -- Description: Currently, if a node does not pass the cache node filter, we do not store this cache's affinity on the node unless the node is the coordinator. This, however, may fail in the following scenario: 1) A node passing the node filter joins the cluster 2) During the join the coordinator fails, and a new coordinator is selected for which the previous exchange is completed 3) The next coordinator attempts to fetch the affinity, and the joining node resends the partitions single message, but there are two problems here. First, exchange fast-reply does not wait for the new affinity initialization, which results in an {{IllegalStateException}}. Second, such an attempt to fetch affinity may lead either to a deadlock or to incorrectly fetched affinity (basically, the coordinator must be in consensus with the other nodes passing the node filter). The attached test reproduces the issue. I suggest always calculating and keeping affinity on all nodes, even ones not passing the filter. In this case, there will be no need to fetch and recalculate affinity ({{initCoordinatorCaches}} will go away). > Exchange coordinator failover breaks in some cases when node filter is used > --- > > Key: IGNITE-10898 > URL: https://issues.apache.org/jira/browse/IGNITE-10898 > Project: Ignite > Issue Type: Bug > Reporter: Alexey Goncharuk > Priority: Major > > Currently, if a node does not pass the cache node filter, we do not store this cache's affinity on the node unless the node is the coordinator. This, however, may fail in the following scenario: > 1) A node passing the node filter joins the cluster > 2) During the join the coordinator fails, and a new coordinator is selected for which the previous exchange is completed > 3) The next coordinator attempts to fetch the affinity, and the joining node resends the partitions single message, but there are two problems here. First, exchange fast-reply does not wait for the new affinity initialization, which results in an {{IllegalStateException}}. Second, such an attempt to fetch affinity may lead either to a deadlock or to incorrectly fetched affinity (basically, the coordinator must be in consensus with the other nodes passing the node filter). > The attached test reproduces the issue. > I suggest always calculating and keeping affinity on all nodes, even ones not passing the filter. In this case, there will be no need to fetch and recalculate affinity ({{initCoordinatorCaches}} will go away). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
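The affinity-retention rule described in the issue, and the proposed fix, can be distilled into a tiny decision function. This is an illustrative, stdlib-only sketch (the method names are invented, not Ignite internals), assuming the decision depends only on the node-filter result and coordinator status:

```java
public class AffinityStorageSketch {
    /**
     * Current behavior as described in the issue: a node keeps a cache's
     * affinity only if it passes the cache node filter or is the coordinator.
     */
    static boolean keepsAffinityCurrent(boolean passesFilter, boolean isCoordinator) {
        return passesFilter || isCoordinator;
    }

    /**
     * Proposed fix: always calculate and keep affinity on every node, so a
     * newly elected coordinator never has to fetch it from other nodes
     * (the fetch path, initCoordinatorCaches, becomes unnecessary).
     */
    static boolean keepsAffinityProposed(boolean passesFilter, boolean isCoordinator) {
        return true;
    }

    public static void main(String[] args) {
        // The failure window: a node that does not pass the filter and was not
        // coordinator when affinity was assigned has nothing to serve once it
        // becomes the new coordinator mid-exchange.
        System.out.println("has affinity today: " + keepsAffinityCurrent(false, false));
        System.out.println("has affinity after fix: " + keepsAffinityProposed(false, false));
    }
}
```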
[jira] [Commented] (IGNITE-10754) Query history statistics API
[ https://issues.apache.org/jira/browse/IGNITE-10754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740434#comment-16740434 ] Ignite TC Bot commented on IGNITE-10754: {panel:title=-- Run :: All: Possible Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1} {color:#d04437}Platform .NET{color} [[tests 7|https://ci.ignite.apache.org/viewLog.html?buildId=2770954]] * exe: IgniteConfigurationParityTest.TestIgniteConfiguration - 0,0% fails in last 667 master runs. {color:#d04437}Platform .NET (Core Linux){color} [[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=2770956]] * dll: IgniteConfigurationParityTest.TestIgniteConfiguration - 0,0% fails in last 690 master runs. {panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=2770024buildTypeId=IgniteTests24Java8_RunAll] > Query history statistics API > > > Key: IGNITE-10754 > URL: https://issues.apache.org/jira/browse/IGNITE-10754 > Project: Ignite > Issue Type: Task > Components: sql > Reporter: Yury Gerzhedovich > Assignee: Yury Gerzhedovich > Priority: Major > Labels: iep-29, monitoring > Time Spent: 10m > Remaining Estimate: 0h > > As of now we have query statistics (*_org.apache.ignite.IgniteCache#queryMetrics_*), but they have a few issues. > 1) The measured duration is just the time between the start of execution and returning the cursor to the client; it doesn't include the whole lifetime of the query. > 2) It doesn't know about multi-statement queries. Such queries participate in statistics as a single query, without splitting. > 3) The API to access the statistics is exposed as cache-dependent, however queries don't have such a dependency. > > We need to create a parallel implementation similar to the one we already have. > Use the new infrastructure for tracking running queries developed under IGNITE-10621 and update statistics in the unregister phase. > Expose the API at a higher level than where it is placed now. The right place will be determined later. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10898) Exchange coordinator failover breaks in some cases when node filter is used
[ https://issues.apache.org/jira/browse/IGNITE-10898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk updated IGNITE-10898: -- Ignite Flags: (was: Docs Required) > Exchange coordinator failover breaks in some cases when node filter is used > --- > > Key: IGNITE-10898 > URL: https://issues.apache.org/jira/browse/IGNITE-10898 > Project: Ignite > Issue Type: Bug > Reporter: Alexey Goncharuk > Priority: Major > > Currently, if a node does not pass the cache node filter, we do not store this cache's affinity on the node unless the node is the coordinator. This, however, may fail in the following scenario: > 1) A node passing the node filter joins the cluster > 2) During the join the coordinator fails, and a new coordinator is selected for which the previous exchange is completed > 3) The next coordinator attempts to fetch the affinity, and the joining node resends the partitions single message, but there are two problems here. First, exchange fast-reply does not wait for the new affinity initialization, which results in an {{IllegalStateException}}. Second, such an attempt to fetch affinity may lead either to a deadlock or to incorrectly fetched affinity (basically, the coordinator must be in consensus with the other nodes passing the node filter). > The attached test reproduces the issue. > I suggest always calculating and keeping affinity on all nodes, even ones not passing the filter. In this case, there will be no need to fetch and recalculate affinity ({{initCoordinatorCaches}} will go away). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10884) Failure to perform non-MVCC SQL from transactions
[ https://issues.apache.org/jira/browse/IGNITE-10884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740429#comment-16740429 ] Ignite TC Bot commented on IGNITE-10884: {panel:title=-- Run :: All: No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=2770385buildTypeId=IgniteTests24Java8_RunAll] > Failure to perform non-MVCC SQL from transactions > - > > Key: IGNITE-10884 > URL: https://issues.apache.org/jira/browse/IGNITE-10884 > Project: Ignite > Issue Type: Bug > Components: mvcc, sql > Affects Versions: 2.7 > Reporter: Ilya Kasnacheev > Assignee: Roman Kondakov > Priority: Blocker > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > MVCC was added with the expectation that it would not affect existing KV transactional code, nor SQL on non-TRANSACTIONAL_SNAPSHOT caches. > However, this turned out not to be the case: if you open an OPTIMISTIC SERIALIZABLE transaction and execute an SQL query to fetch data from a table, an exception will be thrown with *Only pessimistic repeatable read transactions are supported at the moment* > {code} > Exception in thread "main" javax.cache.CacheException: Only pessimistic repeatable read transactions are supported at the moment. > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697) > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636) > at > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388) > at > IgniteTransactionTester.testTransactionException(IgniteTransactionTester.java:53) > at IgniteTransactionTester.main(IgniteTransactionTester.java:38) > Caused by: class > org.apache.ignite.internal.processors.query.IgniteSQLException: Only > pessimistic repeatable read transactions are supported at the moment. 
> at > org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:690) > at > org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:671) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.runQueryTwoStep(IgniteH2Indexing.java:1793) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunDistributedQuery(IgniteH2Indexing.java:2610) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:2315) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:2209) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2135) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2130) > at > org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2144) > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:685) > ... 4 more > {code} > This is a major regression towards 2.6. Please see linked reproducer > (IgniteTransactionTester class). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
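The failing check can be distilled into a plain-Java sketch. This is a hypothetical model of the guard surfaced in the stack trace, not the real MvccUtils code; it only mirrors the observed behavior that an SQL query inside any active transaction other than PESSIMISTIC REPEATABLE_READ is rejected with the quoted error:

```java
public class MvccTxGuardSketch {
    enum Concurrency { OPTIMISTIC, PESSIMISTIC }
    enum Isolation { READ_COMMITTED, REPEATABLE_READ, SERIALIZABLE }

    /**
     * Hypothetical distillation of the guard: the reported regression is that
     * this check fires even for caches that never opted into
     * TRANSACTIONAL_SNAPSHOT, breaking previously working code.
     */
    static String trySqlInTx(Concurrency c, Isolation i) {
        if (c != Concurrency.PESSIMISTIC || i != Isolation.REPEATABLE_READ)
            return "Only pessimistic repeatable read transactions are supported at the moment.";
        return "OK";
    }

    public static void main(String[] args) {
        // The reproducer's combination from the issue text:
        System.out.println(trySqlInTx(Concurrency.OPTIMISTIC, Isolation.SERIALIZABLE));
        // The only combination the guard lets through:
        System.out.println(trySqlInTx(Concurrency.PESSIMISTIC, Isolation.REPEATABLE_READ));
    }
}
```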
[jira] [Created] (IGNITE-10898) Exchange coordinator failover breaks in some cases when node filter is used
Alexey Goncharuk created IGNITE-10898: - Summary: Exchange coordinator failover breaks in some cases when node filter is used Key: IGNITE-10898 URL: https://issues.apache.org/jira/browse/IGNITE-10898 Project: Ignite Issue Type: Bug Reporter: Alexey Goncharuk -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9120) Metadata writer does not propagate error to failure handler
[ https://issues.apache.org/jira/browse/IGNITE-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Goncharuk updated IGNITE-9120: - Fix Version/s: 2.8 > Metadata writer does not propagate error to failure handler > --- > > Key: IGNITE-9120 > URL: https://issues.apache.org/jira/browse/IGNITE-9120 > Project: Ignite > Issue Type: Bug >Reporter: Alexand Polyakov >Assignee: Alexand Polyakov >Priority: Major > Fix For: 2.8 > > > In logs > {code:java} > [WARN] [tcp-disco-msg-worker- # 2% DPL_GRID% DplGridNodeName%] > [o.a.i.i.p.c.b.CacheObjectBinaryProcessorImpl] Failed to save metadata for > typeId: 978611101; The exception was selected: there was no space left on the > device{code} > Node does not shut down > The number of stalled transactions begins to grow. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-10897) Blocked drop table operations cause strange issues
JiajunYang created IGNITE-10897: --- Summary: Blocked drop table operations cause strange issues Key: IGNITE-10897 URL: https://issues.apache.org/jira/browse/IGNITE-10897 Project: Ignite Issue Type: Bug Affects Versions: 2.7, 2.6 Reporter: JiajunYang Steps to reproduce a blocked drop table operation: 1. Create a table and put some data into it on a node with persistence enabled. 2. Run a select query against the table that takes a long time. 3. Drop the table before the select query ends. You will then see that the drop table operation blocks until the select query ends. A strange issue caused by blocked drop table operations: 1. Issue another drop table operation while there is a blocked drop table operation. This operation will also block. 2. Try to recreate a table with the same name. You will see a "table already exists" exception. 3. Try to drop the table again, and you will see a "Table doesn't exist" exception. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
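The contradictory errors can be replayed with a small state model. This is an illustrative, stdlib-only sketch with no Ignite APIs; it only encodes the observed ordering in which CREATE still sees the table while the drops are queued behind the long SELECT, and a later DROP no longer finds it:

```java
import java.util.ArrayList;
import java.util.List;

public class DropTableSketch {
    /** Replays the reported sequence and collects the error messages observed. */
    static List<String> replay() {
        List<String> log = new ArrayList<>();
        boolean tableExists = true; // table created with data, long SELECT running
        int queuedDrops = 0;

        // 1) DROP while the SELECT holds the table: the drop only queues up.
        queuedDrops++;
        // A second DROP queues behind the first one.
        queuedDrops++;

        // 2) CREATE with the same name still sees the table (drops haven't run).
        if (tableExists)
            log.add("table already exists");

        // SELECT finishes; the queued drops drain and remove the table.
        tableExists = false;
        queuedDrops = 0;

        // 3) DROP again now fails the other way.
        if (!tableExists)
            log.add("Table doesn't exist");

        return log;
    }

    public static void main(String[] args) {
        System.out.println(replay());
    }
}
```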
[jira] [Commented] (IGNITE-10809) IgniteClusterActivateDeactivateTestWithPersistence.testActivateFailover3 fails in master
[ https://issues.apache.org/jira/browse/IGNITE-10809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740402#comment-16740402 ] Dmitriy Govorukhin commented on IGNITE-10809: - [~sergey-chugunov] Thanks for the contribution! Changes merged to master. > IgniteClusterActivateDeactivateTestWithPersistence.testActivateFailover3 > fails in master > > > Key: IGNITE-10809 > URL: https://issues.apache.org/jira/browse/IGNITE-10809 > Project: Ignite > Issue Type: Bug > Reporter: Sergey Chugunov > Assignee: Sergey Chugunov > Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.8 > > > The test logic involves independently activating two sets of nodes and then joining them into a single cluster. > After the BaselineTopology concept was introduced in version 2.4, this action became prohibited to enforce data integrity. > The test should be refactored to take this into account. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (IGNITE-10254) MVCC: invokeAll may hangs on unstable topology.
[ https://issues.apache.org/jira/browse/IGNITE-10254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov resolved IGNITE-10254. --- Resolution: Duplicate Fixed within IGNITE-10794 > MVCC: invokeAll may hangs on unstable topology. > --- > > Key: IGNITE-10254 > URL: https://issues.apache.org/jira/browse/IGNITE-10254 > Project: Ignite > Issue Type: Bug > Components: mvcc >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: Hanging, mvcc_stabilization_stage_1 > Fix For: 2.8 > > > Test IgniteCacheEntryProcessorNodeJoinTest.testEntryProcessorNodeLeave() > hangs with TRANSACTIONAL_SNAPSHOT cache mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10896) Add ability to use more than one key with control.sh --cache idle_verify
[ https://issues.apache.org/jira/browse/IGNITE-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ARomantsov updated IGNITE-10896: Description: Now I can use only one of the following options 1) --exclude-caches cache1,...,cacheN 2) --cache-filter ALL|SYSTEM|PERSISTENT|NOT_PERSISTENT 3) cache1,...,cacheN I suppose that allowing 1 and 2, or 2 and 3, together would make this command more flexible was: Now I can use only one of the following options 1) --exclude-caches cache1,...,cacheN or 2) --cache-filter ALL|SYSTEM|PERSISTENT|NOT_PERSISTENT 3) cache1,...,cacheN I suppose that allowing 1 and 2, or 2 and 3, together would make this command more flexible > Add ability to use more than one key with control.sh --cache idle_verify > > > Key: IGNITE-10896 > URL: https://issues.apache.org/jira/browse/IGNITE-10896 > Project: Ignite > Issue Type: Improvement > Reporter: ARomantsov > Priority: Major > Fix For: 2.8 > > > Now I can use only one of the following options > 1) --exclude-caches cache1,...,cacheN > 2) --cache-filter ALL|SYSTEM|PERSISTENT|NOT_PERSISTENT > 3) cache1,...,cacheN > I suppose that allowing 1 and 2, or 2 and 3, together would make this command more flexible -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10896) Add ability to use more than one key with control.sh --cache idle_verify
[ https://issues.apache.org/jira/browse/IGNITE-10896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ARomantsov updated IGNITE-10896: Fix Version/s: 2.8 > Add ability to use more than one key with control.sh --cache idle_verify > > > Key: IGNITE-10896 > URL: https://issues.apache.org/jira/browse/IGNITE-10896 > Project: Ignite > Issue Type: Improvement > Reporter: ARomantsov > Priority: Major > Fix For: 2.8 > > > Now I can use only one of the following options > 1) --exclude-caches cache1,...,cacheN or > 2) --cache-filter ALL|SYSTEM|PERSISTENT|NOT_PERSISTENT > 3) cache1,...,cacheN > I suppose that allowing 1 and 2, or 2 and 3, together would make this command more flexible -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-10896) Add ability to use more than one key with control.sh --cache idle_verify
ARomantsov created IGNITE-10896: --- Summary: Add ability to use more than one key with control.sh --cache idle_verify Key: IGNITE-10896 URL: https://issues.apache.org/jira/browse/IGNITE-10896 Project: Ignite Issue Type: Improvement Reporter: ARomantsov Now I can use only one of the following options 1) --exclude-caches cache1,...,cacheN or 2) --cache-filter ALL|SYSTEM|PERSISTENT|NOT_PERSISTENT 3) cache1,...,cacheN I suppose that allowing 1 and 2, or 2 and 3, together would make this command more flexible -- This message was sent by Atlassian JIRA (v7.6.3#76005)
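One way the proposed combination semantics could work is sketched below in stdlib-only Java. Everything here is an assumption about how the combined options might compose (apply --cache-filter first, then subtract --exclude-caches, then intersect with an explicit cache list); the SYSTEM filter is omitted for brevity and all names are illustrative, not control.sh internals:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class IdleVerifySelectionSketch {
    enum Filter { ALL, PERSISTENT, NOT_PERSISTENT }

    /**
     * Selects caches for idle_verify: filter by persistence, drop excluded
     * names, and (if given) keep only explicitly listed caches.
     */
    static Set<String> select(Map<String, Boolean> cachesByPersistence,
                              Filter filter, Set<String> exclude, Set<String> explicit) {
        Set<String> res = new TreeSet<>();
        for (Map.Entry<String, Boolean> e : cachesByPersistence.entrySet()) {
            boolean persistent = e.getValue();
            if (filter == Filter.PERSISTENT && !persistent) continue;
            if (filter == Filter.NOT_PERSISTENT && persistent) continue;
            if (exclude.contains(e.getKey())) continue;
            if (!explicit.isEmpty() && !explicit.contains(e.getKey())) continue;
            res.add(e.getKey());
        }
        return res;
    }

    /** Combines options 2 and 1: --cache-filter PERSISTENT --exclude-caches cache2. */
    static Set<String> demo() {
        Map<String, Boolean> caches = new HashMap<>();
        caches.put("cache1", true);   // persistent
        caches.put("cache2", true);   // persistent but excluded
        caches.put("cache3", false);  // non-persistent
        return select(caches, Filter.PERSISTENT,
            new HashSet<>(Collections.singletonList("cache2")),
            Collections.<String>emptySet());
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```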
[jira] [Assigned] (IGNITE-10755) MVCC: Flaky continuous query tests
[ https://issues.apache.org/jira/browse/IGNITE-10755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Kondakov reassigned IGNITE-10755: --- Assignee: Roman Kondakov > MVCC: Flaky continuous query tests > -- > > Key: IGNITE-10755 > URL: https://issues.apache.org/jira/browse/IGNITE-10755 > Project: Ignite > Issue Type: Bug > Components: mvcc >Reporter: Roman Kondakov >Assignee: Roman Kondakov >Priority: Major > Labels: CQ, MakeTeamcityGreenAgain, mvcc_stabilization_stage_1 > Fix For: 2.8 > > > Some continuous query tests are flaky when MVCC is enabled: > * {{CacheContinuousQueryConcurrentPartitionUpdateTest}} > ** {{testConcurrentUpdatesAndQueryStartMvccTxCacheGroup}} > ** {{testConcurrentUpdatesAndQueryStartMvccTx}} > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9859) Add debug logging on refreshPartitions cause
[ https://issues.apache.org/jira/browse/IGNITE-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740386#comment-16740386 ] Dmitriy Govorukhin commented on IGNITE-9859: [~mshonichev], [~dpavlov] Merged to master, thanks! > Add debug logging on refreshPartitions cause > > > Key: IGNITE-9859 > URL: https://issues.apache.org/jira/browse/IGNITE-9859 > Project: Ignite > Issue Type: Improvement >Affects Versions: 2.5 >Reporter: Max Shonichev >Assignee: Max Shonichev >Priority: Major > Fix For: 2.8 > > Attachments: > IGNITE_9859__add_debug_logging_on_resendPartitions_cause.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Need some additional log messages for debugging PME issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-10794) MVCC: RemoveAll is broken on unstable topology
[ https://issues.apache.org/jira/browse/IGNITE-10794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740383#comment-16740383 ] Igor Seliverstov commented on IGNITE-10794: --- Merged to master. > MVCC: RemoveAll is broken on unstable topology > -- > > Key: IGNITE-10794 > URL: https://issues.apache.org/jira/browse/IGNITE-10794 > Project: Ignite > Issue Type: Bug > Components: mvcc > Reporter: Andrew Mashenkov > Assignee: Andrew Mashenkov > Priority: Critical > Labels: Hanging, mvcc_stabilization_stage_1, transaction > Fix For: 2.8 > > > The enlist batch holds keys and values in array structures. This implies that the keys and vals arrays should be equal in size. > Also, we have an optimization and do not save 'null' vals for the 'remove' operation. > This invariant can become broken on a removeAll operation for 2 entries belonging to partitions in different states (moving and owning). For the first one, its 'mvcc history' will be added to the 'vals' array, but nothing will be added for the second one. 
> Reproducer IgniteCacheEntryProcessorNodeJoinTest.testEntryProcessorNodeLeave > See stacktrace: > {noformat} > java.lang.AssertionError: > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture$Batch.add(GridDhtTxAbstractEnlistFuture.java:1156) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.addToBatch(GridDhtTxAbstractEnlistFuture.java:705) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.processEntry(GridDhtTxAbstractEnlistFuture.java:650) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.continueLoop(GridDhtTxAbstractEnlistFuture.java:533) > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxAbstractEnlistFuture.init(GridDhtTxAbstractEnlistFuture.java:362) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistFuture.enlistLocal(GridNearTxEnlistFuture.java:531) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistFuture.sendBatch(GridNearTxEnlistFuture.java:426) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistFuture.sendNextBatches(GridNearTxEnlistFuture.java:173) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxEnlistFuture.map(GridNearTxEnlistFuture.java:149) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxAbstractEnlistFuture.mapOnTopology(GridNearTxAbstractEnlistFuture.java:342) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxAbstractEnlistFuture.init(GridNearTxAbstractEnlistFuture.java:257) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.updateAsync(GridNearTxLocal.java:2074) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.mvccRemoveAllAsync0(GridNearTxLocal.java:1951) > at > 
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.removeAllAsync0(GridNearTxLocal.java:1670) > at > org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.removeAllAsync(GridNearTxLocal.java:550) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
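The broken invariant can be reproduced with a tiny model of the batch. This is a hypothetical distillation, not the real GridDhtTxAbstractEnlistFuture.Batch: it only shows how a remove against a MOVING partition records the entry's mvcc history while a remove against an OWNING partition records nothing (the 'null'-value optimization), so the size check fails for a mixed batch:

```java
import java.util.ArrayList;
import java.util.List;

public class EnlistBatchSketch {
    final List<Object> keys = new ArrayList<>();
    final List<Object> vals = new ArrayList<>();

    /**
     * Adds a 'remove' for a key; returns false when the modeled assertion
     * ("vals is empty, or keys and vals have equal sizes") would trip.
     */
    boolean add(Object key, boolean partitionMoving) {
        keys.add(key);
        if (partitionMoving)
            vals.add("mvcc-history-of-" + key); // history kept for rebalancing
        // OWNING partition + remove: the null value is skipped as an optimization,
        // so nothing is appended to vals.
        return vals.isEmpty() || vals.size() == keys.size();
    }

    public static void main(String[] args) {
        EnlistBatchSketch b = new EnlistBatchSketch();
        System.out.println(b.add("k1", true));  // moving: history recorded, sizes match
        System.out.println(b.add("k2", false)); // owning: nothing recorded, invariant broken
    }
}
```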
[jira] [Commented] (IGNITE-10884) Failure to perform non-MVCC SQL from transactions
[ https://issues.apache.org/jira/browse/IGNITE-10884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740371#comment-16740371 ] Ignite TC Bot commented on IGNITE-10884: {panel:title=- Run :: MVCC Cache: No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *- Run :: MVCC Cache* Results|https://ci.ignite.apache.org/viewLog.html?buildId=2770400buildTypeId=IgniteTests24Java8_RunMvccCache] > Failure to perform non-MVCC SQL from transactions > - > > Key: IGNITE-10884 > URL: https://issues.apache.org/jira/browse/IGNITE-10884 > Project: Ignite > Issue Type: Bug > Components: mvcc, sql > Affects Versions: 2.7 > Reporter: Ilya Kasnacheev > Assignee: Roman Kondakov > Priority: Blocker > Fix For: 2.8 > > Time Spent: 10m > Remaining Estimate: 0h > > MVCC was added with the expectation that it would not affect existing KV transactional code, nor SQL on non-TRANSACTIONAL_SNAPSHOT caches. > However, this turned out not to be the case: if you open an OPTIMISTIC SERIALIZABLE transaction and execute an SQL query to fetch data from a table, an exception will be thrown with *Only pessimistic repeatable read transactions are supported at the moment* > {code} > Exception in thread "main" javax.cache.CacheException: Only pessimistic repeatable read transactions are supported at the moment. > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:697) > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:636) > at > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:388) > at > IgniteTransactionTester.testTransactionException(IgniteTransactionTester.java:53) > at IgniteTransactionTester.main(IgniteTransactionTester.java:38) > Caused by: class > org.apache.ignite.internal.processors.query.IgniteSQLException: Only > pessimistic repeatable read transactions are supported at the moment. 
> at > org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:690) > at > org.apache.ignite.internal.processors.cache.mvcc.MvccUtils.tx(MvccUtils.java:671) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.runQueryTwoStep(IgniteH2Indexing.java:1793) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunDistributedQuery(IgniteH2Indexing.java:2610) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:2315) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:2209) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2135) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2130) > at > org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2144) > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:685) > ... 4 more > {code} > This is a major regression towards 2.6. Please see linked reproducer > (IgniteTransactionTester class). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9859) Add debug logging on refreshPartitions cause
[ https://issues.apache.org/jira/browse/IGNITE-9859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740357#comment-16740357 ] Ignite TC Bot commented on IGNITE-9859: --- {panel:title=-- Run :: All: No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=2765953buildTypeId=IgniteTests24Java8_RunAll] > Add debug logging on refreshPartitions cause > > > Key: IGNITE-9859 > URL: https://issues.apache.org/jira/browse/IGNITE-9859 > Project: Ignite > Issue Type: Improvement >Affects Versions: 2.5 >Reporter: Max Shonichev >Assignee: Max Shonichev >Priority: Major > Fix For: 2.8 > > Attachments: > IGNITE_9859__add_debug_logging_on_resendPartitions_cause.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Need some additional log messages for debugging PME issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8227) Research possibility and implement JUnit test failure handler for TeamCity
[ https://issues.apache.org/jira/browse/IGNITE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740353#comment-16740353 ] Ignite TC Bot commented on IGNITE-8227: --- {panel:title=-- Run :: All: No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} [TeamCity *-- Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=2762114buildTypeId=IgniteTests24Java8_RunAll] > Research possibility and implement JUnit test failure handler for TeamCity > -- > > Key: IGNITE-8227 > URL: https://issues.apache.org/jira/browse/IGNITE-8227 > Project: Ignite > Issue Type: Test > Reporter: Dmitriy Pavlov > Assignee: Ryabov Dmitrii > Priority: Major > > After IEP-14 (https://cwiki.apache.org/confluence/display/IGNITE/IEP-14+Ignite+failures+handling) we found a lot of TC failures involving unexpected node stops. > To avoid failing suite exit codes, tests have NoOpFailureHandler as the default. > But instead of this, a better handler could be stopNode + fail the currently running test with a message. > This default allows identifying such failures without a log-message fail condition. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-4380) Cache invoke calls can be lost
[ https://issues.apache.org/jira/browse/IGNITE-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16740350#comment-16740350 ] Alexey Goncharuk commented on IGNITE-4380: -- [~NSAmelchev], the key lock future was bothering me and now I finally understood why. You correctly noticed that the prepare future does not wait for the locks to be acquired. Notice, however, that the method which invokes an {{EntryProcessor}} is called {{onEntriesLocked}}, which implies that the entries should be properly locked by the time this method is invoked. The reason it currently does not work is the following piece of code in {{readyLocks}}: {code} if (cacheCtx.isLocal()) continue; {code} We should not skip local entries entirely, but instead handle them differently than {{GridDistributedCacheEntry}}. Please check if we can use a more generic class than {{GridDistributedCacheEntry}} there (maybe with a bit of refactoring). If we cannot, let's just add proper local entries handling in a separate method. Then, in my understanding, there is no need to wait for {{keysLockFuture}}. > Cache invoke calls can be lost > -- > > Key: IGNITE-4380 > URL: https://issues.apache.org/jira/browse/IGNITE-4380 > Project: Ignite > Issue Type: Bug > Components: cache > Affects Versions: 2.0 > Reporter: Semen Boikov > Assignee: Amelchev Nikita > Priority: Critical > Labels: MakeTeamcityGreenAgain > Fix For: 2.8 > > > The recently added test GridCacheAbstractFullApiSelfTest.testInvokeAllMultithreaded fails on TC in various configurations with transactional cache. 
> Example of failure > GridCacheReplicatedOffHeapTieredMultiNodeFullApiSelfTest.testInvokeAllMultithreaded: > {noformat} > junit.framework.AssertionFailedError: expected:<2> but was:<10868> > at junit.framework.Assert.fail(Assert.java:57) > at junit.framework.Assert.failNotEquals(Assert.java:329) > at junit.framework.Assert.assertEquals(Assert.java:78) > at junit.framework.Assert.assertEquals(Assert.java:234) > at junit.framework.Assert.assertEquals(Assert.java:241) > at junit.framework.TestCase.assertEquals(TestCase.java:409) > at > org.apache.ignite.internal.processors.cache.GridCacheAbstractFullApiSelfTest.testInvokeAllMultithreaded(GridCacheAbstractFullApiSelfTest.java:342) > at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at junit.framework.TestCase.runTest(TestCase.java:176) > at > org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:1803) > at > org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:118) > at > org.apache.ignite.testframework.junits.GridAbstractTest$4.run(GridAbstractTest.java:1718) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-10440) Analyse test suites for possible acceleration
[ https://issues.apache.org/jira/browse/IGNITE-10440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleg Ignatenko updated IGNITE-10440: Description: For a bunch of test suites that appear to have longest time to run on Teamcity, find out if it is possible to apply ["scale factor" utilities|https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/testframework/GridTestUtils.java] to speed up these and do necessary rework if it is. (was: For a bunch of test suites that appear to have longest time to run on Teamcity, find out if it is possible to apply ["scale factor" utilities|https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/testframework/GridTestUtils.java|see ScaleFactorUtil here] to seed up these and do necessary rework if it is.) > Analyse test suites for possible acceleration > - > > Key: IGNITE-10440 > URL: https://issues.apache.org/jira/browse/IGNITE-10440 > Project: Ignite > Issue Type: Improvement >Reporter: Alexey Platonov >Assignee: Alexey Platonov >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.8 > > > For a bunch of test suites that appear to have longest time to run on > Teamcity, find out if it is possible to apply ["scale factor" > utilities|https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/testframework/GridTestUtils.java] > to speed up these and do necessary rework if it is. -- This message was sent by Atlassian JIRA (v7.6.3#76005)