[jira] [Updated] (IGNITE-19010) Remove configuration api dependency from api module
[ https://issues.apache.org/jira/browse/IGNITE-19010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Bessonov updated IGNITE-19010: --- Fix Version/s: 3.0.0-beta2 > Remove configuration api dependency from api module > --- > > Key: IGNITE-19010 > URL: https://issues.apache.org/jira/browse/IGNITE-19010 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Bessonov >Assignee: Ivan Bessonov >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > # Remove "ignite-configuration-api" dependency from the "ignite-api" module, > because it's unused and shouldn't be there. > # Remove unnecessary "ignite-configuration" dependencies, replacing them > with "ignite-configuration-api". > # Move "ConfigurationModule" to "ignite-configuration-api". > # Consider other API changes to configuration, in order to avoid using > implementation module as a dependency where possible. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19010) Remove configuration api dependency from api module
Ivan Bessonov created IGNITE-19010: -- Summary: Remove configuration api dependency from api module Key: IGNITE-19010 URL: https://issues.apache.org/jira/browse/IGNITE-19010 Project: Ignite Issue Type: Improvement Reporter: Ivan Bessonov # Remove "ignite-configuration-api" dependency from the "ignite-api" module, because it's unused and shouldn't be there. # Remove unnecessary "ignite-configuration" dependencies, replacing them with "ignite-configuration-api". # Move "ConfigurationModule" to "ignite-configuration-api". # Consider other API changes to configuration, in order to avoid using implementation module as a dependency where possible. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-19010) Remove configuration api dependency from api module
[ https://issues.apache.org/jira/browse/IGNITE-19010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Bessonov reassigned IGNITE-19010: -- Assignee: Ivan Bessonov > Remove configuration api dependency from api module > --- > > Key: IGNITE-19010 > URL: https://issues.apache.org/jira/browse/IGNITE-19010 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Bessonov >Assignee: Ivan Bessonov >Priority: Major > Labels: ignite-3 > > # Remove "ignite-configuration-api" dependency from the "ignite-api" module, > because it's unused and shouldn't be there. > # Remove unnecessary "ignite-configuration" dependencies, replacing them > with "ignite-configuration-api". > # Move "ConfigurationModule" to "ignite-configuration-api". > # Consider other API changes to configuration, in order to avoid using > implementation module as a dependency where possible. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19009) Introduce file transfer support in messaging
Mikhail Pochatkin created IGNITE-19009: -- Summary: Introduce file transfer support in messaging Key: IGNITE-19009 URL: https://issues.apache.org/jira/browse/IGNITE-19009 Project: Ignite Issue Type: Improvement Components: networking Reporter: Mikhail Pochatkin Assignee: Mikhail Pochatkin -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-12483) ReflectionFactory is essential thanks to PlatformDotNetSessionLockResult
[ https://issues.apache.org/jira/browse/IGNITE-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Pavlov updated IGNITE-12483: --- Component/s: (was: binary) > ReflectionFactory is essential thanks to PlatformDotNetSessionLockResult > > > Key: IGNITE-12483 > URL: https://issues.apache.org/jira/browse/IGNITE-12483 > Project: Ignite > Issue Type: Bug >Reporter: Ilya Kasnacheev >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise, usability > Fix For: 2.15 > > Time Spent: 20m > Remaining Estimate: 0h > > We currently treat ReflectionFactory as a nice-to-have thing, so we silently > ignore failures of its reflection: > {code} > try { > Class refFactoryCls = > Class.forName("sun.reflect.ReflectionFactory"); > refFac = > refFactoryCls.getMethod("getReflectionFactory").invoke(null); > ctorFac = > refFac.getClass().getMethod("newConstructorForSerialization", Class.class, > Constructor.class); > } > catch (NoSuchMethodException | ClassNotFoundException | > IllegalAccessException | InvocationTargetException ignored) { > // No-op. > } > {code} > However, it is now essential thanks to the class > PlatformDotNetSessionLockResult, which is always registered during node > start-up and which does not have an empty constructor. 
> So not having access to ReflectionFactory (JBoss will hide it, for example) > will lead to the following cryptic exception (courtesy stack overflow): > {code} > 2019-12-19 09:11:39,355 SEVERE [org.apache.ignite.internal.IgniteKernal] > (ServerService Thread Pool -- 81) Got exception while starting (will rollback > startup routine).: class org.apache.ignite.binary.BinaryObjectException: > Failed to find empty constructor for class: > org.apache.ignite.internal.processors.platform.websession.PlatformDotNetSessionLockResult > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryClassDescriptor.constructor(BinaryClassDescriptor.java:981) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryClassDescriptor.(BinaryClassDescriptor.java:267) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryContext.registerPredefinedType(BinaryContext.java:1063) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryContext.registerPredefinedType(BinaryContext.java:1048) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryContext.(BinaryContext.java:350) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.start(CacheObjectBinaryProcessorImpl.java:208) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1700) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1013) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038) > at > org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:88) > {code} > My suggestions are the following: > - Introduce a warning when ReflectionFactory not found instead of ignoring > exception. 
> - Add an empty constructor to PlatformDotNetSessionLockResult and make sure no > other classes need reflection during start-up. > - (optionally) instead, introduce an error when ReflectionFactory is not found. -- This message was sent by Atlassian Jira (v8.20.10#820010)
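The ticket's first suggestion — log a warning instead of silently swallowing the reflection failure — can be sketched as follows. This is a hypothetical illustration, not code from the Ignite codebase (class name and message wording are invented); it assumes only the JDK.

```java
import java.lang.reflect.Method;

public class ReflectionFactoryProbe {
    /** Returns "available" or "unavailable", logging a warning in the latter case. */
    public static String status() {
        try {
            // Same lookup as in the quoted snippet, but failures are reported.
            Class<?> cls = Class.forName("sun.reflect.ReflectionFactory");
            Method getter = cls.getMethod("getReflectionFactory");
            Object refFac = getter.invoke(null);
            return refFac != null ? "available" : "unavailable";
        }
        catch (ReflectiveOperationException | RuntimeException e) {
            // Proposed change: a warning instead of a silent no-op, so that later
            // "Failed to find empty constructor" errors are easier to diagnose.
            System.err.println("Warning: sun.reflect.ReflectionFactory is not accessible ("
                + e + "); classes without a default constructor cannot be instantiated.");
            return "unavailable";
        }
    }

    public static void main(String[] args) {
        System.out.println(status());
    }
}
```

On a JVM that hides sun.reflect (the JBoss scenario above), this prints the warning once at start-up instead of failing later with the cryptic BinaryObjectException.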
[jira] [Updated] (IGNITE-18976) Affinity broken on thick client after reconnection
[ https://issues.apache.org/jira/browse/IGNITE-18976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Daschinsky updated IGNITE-18976: - Fix Version/s: 2.15 > Affinity broken on thick client after reconnection > -- > > Key: IGNITE-18976 > URL: https://issues.apache.org/jira/browse/IGNITE-18976 > Project: Ignite > Issue Type: Bug > Components: binary >Affects Versions: 2.14 >Reporter: Sergey Kosarev >Assignee: Ivan Daschinsky >Priority: Major > Labels: ise > Fix For: 2.15 > > Attachments: IgniteClientReconnectAffinityTest.java > > Time Spent: 20m > Remaining Estimate: 0h > > 1. Using AffinityKey + BinaryTypeConfiguration > 2. Client is reconnected > 3. Affinity is broken and binary marshalling is broken: > Affinity.partition returns a wrong value: > {noformat} > at org.junit.Assert.failNotEquals(Assert.java:834) > at org.junit.Assert.assertEquals(Assert.java:645) > at org.junit.Assert.assertEquals(Assert.java:631) > at > org.apache.ignite.testframework.junits.JUnitAssertAware.assertEquals(JUnitAssertAware.java:95) > at > org.apache.ignite.internal.IgniteClientReconnectAffinityTest.doReconnectClientAffinityKeyPartition(IgniteClientReconnectAffinityTest.java:213) > at > org.apache.ignite.internal.IgniteClientReconnectAffinityTest.testReconnectClientAnnotatedAffinityKeyWithBinaryConfigPartition(IgniteClientReconnectAffinityTest.java:123) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.apache.ignite.testframework.junits.GridAbstractTest$6.run(GridAbstractTest.java:2504) > at java.lang.Thread.run(Thread.java:748) > {noformat} [^IgniteClientReconnectAffinityTest.java] > Exception on cache.get : > {noformat} > class org.apache.ignite.binary.BinaryObjectException: Failed to serialize > object > [typeName=org.apache.ignite.internal.IgniteClientReconnectAffinityTest$TestAnnotatedKey] > at > org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:916) > at > org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:232) > at > org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:165) > at > org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:152) > at > org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:251) > at > org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:583) > at > org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:1492) > at > org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheKeyObject(CacheObjectBinaryProcessorImpl.java:1287) > at > org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheKeyObject(GridCacheContext.java:1818) > at > org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.getAsync(GridDhtColocatedCache.java:279) > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4759) > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.repairableGet(GridCacheAdapter.java:4725) > at > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1373) > at > 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:1108) > at > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:686) > at > org.apache.ignite.internal.IgniteClientReconnectAffinityTest.doReconnectClientAffinityKeyGet(IgniteClientReconnectAffinityTest.java:180) > at > org.apache.ignite.internal.IgniteClientReconnectAffinityTest.testReconnectClientAnnotatedAffinityKeyWithBinaryConfigGet(IgniteClientReconnectAffinityTest.java:118) > at
[jira] [Updated] (IGNITE-12483) ReflectionFactory is essential thanks to PlatformDotNetSessionLockResult
[ https://issues.apache.org/jira/browse/IGNITE-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Pavlov updated IGNITE-12483: --- Component/s: (was: integrations) > ReflectionFactory is essential thanks to PlatformDotNetSessionLockResult > > > Key: IGNITE-12483 > URL: https://issues.apache.org/jira/browse/IGNITE-12483 > Project: Ignite > Issue Type: Bug > Components: binary >Reporter: Ilya Kasnacheev >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise, usability > Fix For: 2.15 > > Time Spent: 20m > Remaining Estimate: 0h > > We currently treat ReflectionFactory as a nice-to-have thing, so we silently > ignore failures of its reflection: > {code} > try { > Class refFactoryCls = > Class.forName("sun.reflect.ReflectionFactory"); > refFac = > refFactoryCls.getMethod("getReflectionFactory").invoke(null); > ctorFac = > refFac.getClass().getMethod("newConstructorForSerialization", Class.class, > Constructor.class); > } > catch (NoSuchMethodException | ClassNotFoundException | > IllegalAccessException | InvocationTargetException ignored) { > // No-op. > } > {code} > However, it is now essential thanks to the class > PlatformDotNetSessionLockResult, which is always registered during node > start-up and which does not have an empty constructor. 
> So not having access to ReflectionFactory (JBoss will hide it, for example) > will lead to the following cryptic exception (courtesy stack overflow): > {code} > 2019-12-19 09:11:39,355 SEVERE [org.apache.ignite.internal.IgniteKernal] > (ServerService Thread Pool -- 81) Got exception while starting (will rollback > startup routine).: class org.apache.ignite.binary.BinaryObjectException: > Failed to find empty constructor for class: > org.apache.ignite.internal.processors.platform.websession.PlatformDotNetSessionLockResult > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryClassDescriptor.constructor(BinaryClassDescriptor.java:981) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryClassDescriptor.(BinaryClassDescriptor.java:267) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryContext.registerPredefinedType(BinaryContext.java:1063) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryContext.registerPredefinedType(BinaryContext.java:1048) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryContext.(BinaryContext.java:350) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.start(CacheObjectBinaryProcessorImpl.java:208) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1700) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1013) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038) > at > org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:88) > {code} > My suggestions are the following: > - Introduce a warning when ReflectionFactory not found instead of ignoring > exception. 
> - Add an empty constructor to PlatformDotNetSessionLockResult and make sure no > other classes need reflection during start-up. > - (optionally) instead, introduce an error when ReflectionFactory is not found. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-12483) ReflectionFactory is essential thanks to PlatformDotNetSessionLockResult
[ https://issues.apache.org/jira/browse/IGNITE-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Pavlov updated IGNITE-12483: --- Component/s: integrations > ReflectionFactory is essential thanks to PlatformDotNetSessionLockResult > > > Key: IGNITE-12483 > URL: https://issues.apache.org/jira/browse/IGNITE-12483 > Project: Ignite > Issue Type: Bug > Components: binary, integrations >Reporter: Ilya Kasnacheev >Assignee: Aleksey Plekhanov >Priority: Major > Labels: ise, usability > Fix For: 2.15 > > Time Spent: 20m > Remaining Estimate: 0h > > We currently treat ReflectionFactory as a nice-to-have thing, so we silently > ignore failures of its reflection: > {code} > try { > Class refFactoryCls = > Class.forName("sun.reflect.ReflectionFactory"); > refFac = > refFactoryCls.getMethod("getReflectionFactory").invoke(null); > ctorFac = > refFac.getClass().getMethod("newConstructorForSerialization", Class.class, > Constructor.class); > } > catch (NoSuchMethodException | ClassNotFoundException | > IllegalAccessException | InvocationTargetException ignored) { > // No-op. > } > {code} > However, it is now essential thanks to the class > PlatformDotNetSessionLockResult, which is always registered during node > start-up and which does not have an empty constructor. 
> So not having access to ReflectionFactory (JBoss will hide it, for example) > will lead to the following cryptic exception (courtesy stack overflow): > {code} > 2019-12-19 09:11:39,355 SEVERE [org.apache.ignite.internal.IgniteKernal] > (ServerService Thread Pool -- 81) Got exception while starting (will rollback > startup routine).: class org.apache.ignite.binary.BinaryObjectException: > Failed to find empty constructor for class: > org.apache.ignite.internal.processors.platform.websession.PlatformDotNetSessionLockResult > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryClassDescriptor.constructor(BinaryClassDescriptor.java:981) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryClassDescriptor.(BinaryClassDescriptor.java:267) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryContext.registerPredefinedType(BinaryContext.java:1063) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryContext.registerPredefinedType(BinaryContext.java:1048) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.binary.BinaryContext.(BinaryContext.java:350) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.start(CacheObjectBinaryProcessorImpl.java:208) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1700) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1013) > at > deployment.StreamsApp.ear//org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2038) > at > org.jboss.as.ee@18.0.1.Final//org.jboss.as.ee.component.BasicComponent.createInstance(BasicComponent.java:88) > {code} > My suggestions are the following: > - Introduce a warning when ReflectionFactory not found instead of ignoring > exception. 
> - Add an empty constructor to PlatformDotNetSessionLockResult and make sure no > other classes need reflection during start-up. > - (optionally) instead, introduce an error when ReflectionFactory is not found. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19008) .NET: Thin 3.0: Improve logging
[ https://issues.apache.org/jira/browse/IGNITE-19008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19008: Description: .NET client logging should include the following: *Warn level:* * Failed to establish connection to specified endpoint ** Can’t connect ** Failed handshake or magic * Existing connection failed *Info level:* * Partition assignment change events *Debug level:* * All connections * Schema updates * Retries *Trace level:* * All operations was: .NET client logging should include the following: * *Warn level:* ** Failed to establish connection to specified endpoint *** Can’t connect *** Failed handshake or magic ** Existing connection failed * *Info level:* ** Partition assignment change events * *Debug level:* ** All connections ** Schema updates ** Retries * *Trace level:* ** All operations > .NET: Thin 3.0: Improve logging > --- > > Key: IGNITE-19008 > URL: https://issues.apache.org/jira/browse/IGNITE-19008 > Project: Ignite > Issue Type: Improvement > Components: platforms, thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: .NET, ignite-3 > Fix For: 3.0.0-beta2 > > > .NET client logging should include the following: > *Warn level:* > * Failed to establish connection to specified endpoint > ** Can’t connect > ** Failed handshake or magic > * Existing connection failed > *Info level:* > * Partition assignment change events > *Debug level:* > * All connections > * Schema updates > * Retries > *Trace level:* > * All operations -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19007) Java thin 3.0: Improve logging
[ https://issues.apache.org/jira/browse/IGNITE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19007: Description: Java client logging should include the following: *Warn level:* * Failed to establish connection to specified endpoint ** Can’t connect ** Failed handshake or magic * Existing connection failed *Info level:* * Partition assignment change events *Debug level:* * All connections * Schema updates * Retries *Trace level:* * All operations was: Java client logging should include the following: *Warn level:* * Failed to establish connection to specified endpoint ** Can’t connect ** Failed handshake or magic * Existing connection failed *Info level:* * Partition assignment change events *Debug level:* * All connections * Schema updates * Retries *Trace level:* * All operations > Java thin 3.0: Improve logging > -- > > Key: IGNITE-19007 > URL: https://issues.apache.org/jira/browse/IGNITE-19007 > Project: Ignite > Issue Type: Improvement > Components: thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > Java client logging should include the following: > *Warn level:* > * Failed to establish connection to specified endpoint > ** Can’t connect > ** Failed handshake or magic > * Existing connection failed > *Info level:* > * Partition assignment change events > *Debug level:* > * All connections > * Schema updates > * Retries > *Trace level:* > * All operations -- This message was sent by Atlassian Jira (v8.20.10#820010)
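The level assignments in the description above can be captured as a simple event-to-level mapping. The sketch below is hypothetical — the class, method, and event names are illustrative, not the Ignite client's actual API — and uses only the JDK's System.Logger levels.

```java
import java.lang.System.Logger.Level;

public class ClientLogLevels {
    /** Maps the ticket's client events to their proposed log levels. */
    public static Level levelFor(String event) {
        switch (event) {
            case "connection-failed":           // can't connect, failed handshake or magic
            case "existing-connection-failed":
                return Level.WARNING;
            case "partition-assignment-changed":
                return Level.INFO;
            case "connection-established":
            case "schema-updated":
            case "retry":
                return Level.DEBUG;
            default:                            // all other operations
                return Level.TRACE;
        }
    }
}
```

With a mapping like this, the logging call sites stay uniform (e.g. `log.log(ClientLogLevels.levelFor("retry"), ...)`) and the level policy lives in one place.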
[jira] [Updated] (IGNITE-19007) Java thin 3.0: Improve logging
[ https://issues.apache.org/jira/browse/IGNITE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19007: Description: Java client logging should include the following: *Warn level:* * Failed to establish connection to specified endpoint ** Can’t connect ** Failed handshake or magic * Existing connection failed *Info level:* * Partition assignment change events *Debug level:* * All connections * Schema updates * Retries *Trace level:* * All operations was: Java client logging should include the following: *Warn level:* ** Failed to establish connection to specified endpoint *** Can’t connect *** Failed handshake or magic ** Existing connection failed *Info level:* ** Partition assignment change events *Debug level:* ** All connections ** Schema updates ** Retries *Trace level:* ** All operations > Java thin 3.0: Improve logging > -- > > Key: IGNITE-19007 > URL: https://issues.apache.org/jira/browse/IGNITE-19007 > Project: Ignite > Issue Type: Improvement > Components: thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > Java client logging should include the following: > *Warn level:* > * Failed to establish connection to specified endpoint > ** Can’t connect > ** Failed handshake or magic > * Existing connection failed > *Info level:* > * Partition assignment change events > *Debug level:* > * All connections > * Schema updates > * Retries > *Trace level:* > * All operations -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19007) Java thin 3.0: Improve logging
[ https://issues.apache.org/jira/browse/IGNITE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19007: Description: Java client logging should include the following: *Warn level:* ** Failed to establish connection to specified endpoint *** Can’t connect *** Failed handshake or magic ** Existing connection failed *Info level:* ** Partition assignment change events *Debug level:* ** All connections ** Schema updates ** Retries *Trace level:* ** All operations was: Java client logging should include the following: * Warn level: ** Failed to establish connection to specified endpoint *** Can’t connect *** Failed handshake or magic ** Existing connection failed * Info level: ** Partition assignment change events * Debug level: ** All connections ** Schema updates ** Retries * Trace level: ** All operations > Java thin 3.0: Improve logging > -- > > Key: IGNITE-19007 > URL: https://issues.apache.org/jira/browse/IGNITE-19007 > Project: Ignite > Issue Type: Improvement > Components: thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > Java client logging should include the following: > *Warn level:* > ** Failed to establish connection to specified endpoint > *** Can’t connect > *** Failed handshake or magic > ** Existing connection failed > *Info level:* > ** Partition assignment change events > *Debug level:* > ** All connections > ** Schema updates > ** Retries > *Trace level:* > ** All operations -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19008) .NET: Thin 3.0: Improve logging
Pavel Tupitsyn created IGNITE-19008: --- Summary: .NET: Thin 3.0: Improve logging Key: IGNITE-19008 URL: https://issues.apache.org/jira/browse/IGNITE-19008 Project: Ignite Issue Type: Improvement Components: platforms, thin client Affects Versions: 3.0.0-beta1 Reporter: Pavel Tupitsyn Assignee: Pavel Tupitsyn Fix For: 3.0.0-beta2 .NET client logging should include the following: * *Warn level:* ** Failed to establish connection to specified endpoint *** Can’t connect *** Failed handshake or magic ** Existing connection failed * *Info level:* ** Partition assignment change events * *Debug level:* ** All connections ** Schema updates ** Retries * *Trace level:* ** All operations -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19007) Java thin 3.0: Improve logging
Pavel Tupitsyn created IGNITE-19007: --- Summary: Java thin 3.0: Improve logging Key: IGNITE-19007 URL: https://issues.apache.org/jira/browse/IGNITE-19007 Project: Ignite Issue Type: Improvement Components: thin client Affects Versions: 3.0.0-beta1 Reporter: Pavel Tupitsyn Assignee: Pavel Tupitsyn Fix For: 3.0.0-beta2 Java client logging should include the following: * Warn level: ** Failed to establish connection to specified endpoint *** Can’t connect *** Failed handshake or magic ** Existing connection failed * Info level: ** Partition assignment change events * Debug level: ** All connections ** Schema updates ** Retries * Trace level: ** All operations -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19006) Thin 3.0: Improve server-side logging
[ https://issues.apache.org/jira/browse/IGNITE-19006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19006: Description: Server-side logging in *ignite-client-handler* module should include the following: *Warn level:* * Failed handshakes * Operation errors (writeError) * Idle channel events (IdleChannelHandler) *Info level:* * Partition assignment change events *Debug level:* * All connections *Trace level:* * All client operations was: Server-side logging in *ignite-client-handler* module should include the following: * Warn level: * Failed handshakes * Operation errors (writeError) * Idle channel events (IdleChannelHandler) * Info level: * Partition assignment change events * Debug level: * All connections * Trace level: * All client operations > Thin 3.0: Improve server-side logging > - > > Key: IGNITE-19006 > URL: https://issues.apache.org/jira/browse/IGNITE-19006 > Project: Ignite > Issue Type: Improvement > Components: thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > Server-side logging in *ignite-client-handler* module should include the > following: > *Warn level:* > * Failed handshakes > * Operation errors (writeError) > * Idle channel events (IdleChannelHandler) > *Info level:* > * Partition assignment change events > *Debug level:* > * All connections > *Trace level:* > * All client operations -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19006) Thin 3.0: Improve server-side logging
[ https://issues.apache.org/jira/browse/IGNITE-19006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19006: Description: Server-side logging in *ignite-client-handler* module should include the following: * Warn level: * Failed handshakes * Operation errors (writeError) * Idle channel events (IdleChannelHandler) * Info level: * Partition assignment change events * Debug level: * All connections * Trace level: * All client operations was: Server-side logging in *ignite-client-handler* module should include the following: * Warn level: * Failed handshakes * Operation errors (writeError) * Idle channel events (IdleChannelHandler) * Info level: * Partition assignment change events * Debug level: * All connections * Trace level: * All client operations > Thin 3.0: Improve server-side logging > - > > Key: IGNITE-19006 > URL: https://issues.apache.org/jira/browse/IGNITE-19006 > Project: Ignite > Issue Type: Improvement > Components: thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > Server-side logging in *ignite-client-handler* module should include the > following: > * Warn level: >* Failed handshakes >* Operation errors (writeError) >* Idle channel events (IdleChannelHandler) > * Info level: >* Partition assignment change events > * Debug level: >* All connections > * Trace level: >* All client operations -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19006) Thin 3.0: Improve server-side logging
[ https://issues.apache.org/jira/browse/IGNITE-19006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19006: Description: Server-side logging in *ignite-client-handler* module should include the following: * Warn level: * Failed handshakes * Operation errors (writeError) * Idle channel events (IdleChannelHandler) * Info level: * Partition assignment change events * Debug level: * All connections * Trace level: * All client operations was: Server-side logging in *ignite-client-handler* module should include the following: * Warn level: ** Failed handshakes ** Operation errors (writeError) ** Idle channel events (IdleChannelHandler) * Info level: ** Partition assignment change events * Debug level: ** All connections * Trace level: ** All operations > Thin 3.0: Improve server-side logging > - > > Key: IGNITE-19006 > URL: https://issues.apache.org/jira/browse/IGNITE-19006 > Project: Ignite > Issue Type: Improvement > Components: thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > Server-side logging in *ignite-client-handler* module should include the > following: > * Warn level: > * Failed handshakes > * Operation errors (writeError) > * Idle channel events (IdleChannelHandler) > * Info level: > * Partition assignment change events > * Debug level: > * All connections > * Trace level: > * All client operations -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19006) Thin 3.0: Improve server-side logging
Pavel Tupitsyn created IGNITE-19006: --- Summary: Thin 3.0: Improve server-side logging Key: IGNITE-19006 URL: https://issues.apache.org/jira/browse/IGNITE-19006 Project: Ignite Issue Type: Improvement Components: thin client Affects Versions: 3.0.0-beta1 Reporter: Pavel Tupitsyn Assignee: Pavel Tupitsyn Fix For: 3.0.0-beta2 Server-side logging in *ignite-client-handler* module should include the following: * Warn level: ** Failed handshakes ** Operation errors (writeError) ** Idle channel events (IdleChannelHandler) * Info level: ** Partition assignment change events * Debug level: ** All connections * Trace level: ** All operations -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19005) Fix current metric names from prefix.prefix.metric to prefix.prefix.Metric
[ https://issues.apache.org/jira/browse/IGNITE-19005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Gusakov updated IGNITE-19005: Description: JvmMetricSource has metric names that violate the metric naming conventions: memory.heap.init must be replaced by memory.heap.Init, etc. was: JvmMetricSource has metric names that violate the metric naming conventions: memory.heap.init must be replaced by memory.heap.Init, etc. > Fix current metric names from prefix.prefix.metric to prefix.prefix.Metric > -- > > Key: IGNITE-19005 > URL: https://issues.apache.org/jira/browse/IGNITE-19005 > Project: Ignite > Issue Type: Task >Reporter: Kirill Gusakov >Priority: Major > > JvmMetricSource has metric names that violate the metric naming > conventions: > memory.heap.init must be replaced by memory.heap.Init, etc. -- This message was sent by Atlassian Jira (v8.20.10#820010)
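The rename described above is mechanical, so it can be sketched as a small helper, assuming the convention is simply to capitalize the first letter of the last name segment (the helper and its name are hypothetical, not part of JvmMetricSource):

```java
// Sketch of the proposed rename rule: capitalize the final (metric) segment
// while leaving the prefix segments untouched,
// e.g. "memory.heap.init" -> "memory.heap.Init".
public class MetricNameFix {
    public static String capitalizeLastSegment(String name) {
        int lastDot = name.lastIndexOf('.');
        if (lastDot < 0 || lastDot == name.length() - 1) {
            return name; // no prefix, or trailing dot: leave unchanged
        }
        String metric = name.substring(lastDot + 1);
        return name.substring(0, lastDot + 1)
                + Character.toUpperCase(metric.charAt(0))
                + metric.substring(1);
    }
}
```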
[jira] [Updated] (IGNITE-19005) Fix current metric names from prefix.prefix.metric to prefix.prefix.Metric
[ https://issues.apache.org/jira/browse/IGNITE-19005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Gusakov updated IGNITE-19005: Description: JvmMetricSource has metric names that violate the metric naming conventions: memory.heap.init must be replaced by memory.heap.Init, etc. > Fix current metric names from prefix.prefix.metric to prefix.prefix.Metric > -- > > Key: IGNITE-19005 > URL: https://issues.apache.org/jira/browse/IGNITE-19005 > Project: Ignite > Issue Type: Task >Reporter: Kirill Gusakov >Priority: Major > > JvmMetricSource has metric names that violate the metric naming > conventions: > memory.heap.init must be replaced by memory.heap.Init, etc. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19005) Fix current metric names from prefix.prefix.metric to prefix.prefix.Metric
Kirill Gusakov created IGNITE-19005: --- Summary: Fix current metric names from prefix.prefix.metric to prefix.prefix.Metric Key: IGNITE-19005 URL: https://issues.apache.org/jira/browse/IGNITE-19005 Project: Ignite Issue Type: Task Reporter: Kirill Gusakov -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19004) ignite config show is broken on connected state
Aleksandr created IGNITE-19004: -- Summary: ignite config show is broken on connected state Key: IGNITE-19004 URL: https://issues.apache.org/jira/browse/IGNITE-19004 Project: Ignite Issue Type: Bug Components: cli Reporter: Aleksandr Steps to reproduce: * connect to an initialized cluster * node config show (ok) * node config show --node-name < without node name * node config show (broken) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18838) Start and stop of needed RAFT nodes must be moved from TableManager to DistributionZoneManager
[ https://issues.apache.org/jira/browse/IGNITE-18838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Gusakov updated IGNITE-18838: Description: According to the new replication design, distribution zones, not table partitions, are the center of data distribution. So, the current logic for starting/stopping the needed RAFT nodes must be moved to DistributionZoneManager (DZM). *Definition of done:* - DistributionZoneManager listens to the *.pending/*.stable assignment keys for all distribution zones - On an update of a pending key, DZM must start the needed raft group node - On an update of a stable key, DZM must stop the unneeded nodes (i.e., all nodes that don't belong to the (pending ++ stable) set) Important TODO: we must also keep the logic connected with the partition listeners for new tables and index management. was: According to the new replication design, distribution zones, not table partitions, are the center of data distribution. So, the current logic for starting/stopping the needed RAFT nodes must be moved to DistributionZoneManager (DZM). *Definition of done:* - DistributionZoneManager listens to the \*.pending/\*.stable assignment keys for all distribution zones - On an update of a pending key, DZM must start the needed raft group node - On an update of a stable key, DZM must stop the unneeded nodes (i.e., all nodes that don't belong to the (pending ++ stable) set) > Start and stop of needed RAFT nodes must be moved from TableManager to > DistributionZoneManager > -- > > Key: IGNITE-18838 > URL: https://issues.apache.org/jira/browse/IGNITE-18838 > Project: Ignite > Issue Type: Task >Reporter: Kirill Gusakov >Priority: Major > Labels: ignite-3 > > According to the new replication design, distribution zones, not table > partitions, are the center of data distribution. So, the current logic for > starting/stopping the needed RAFT nodes must be moved to DistributionZoneManager (DZM). 
> *Definition of done:* > - DistributionZoneManager listens to the *.pending/*.stable assignment keys for all > distribution zones > - On an update of a pending key, DZM must start the needed raft group node > - On an update of a stable key, DZM must stop the unneeded nodes (i.e., all nodes > that don't belong to the (pending ++ stable) set) > Important TODO: we must also keep the logic connected with the partition > listeners for new tables and index management. -- This message was sent by Atlassian Jira (v8.20.10#820010)
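The stop rule from the definition of done can be sketched as plain set arithmetic over node names (class and method names are hypothetical, not the real DistributionZoneManager API):

```java
import java.util.HashSet;
import java.util.Set;

// Minimal sketch of the stable-key handling described above: on a *.stable
// update, DZM stops every locally started raft node that belongs to neither
// the pending nor the stable assignment set.
public class RaftNodeStopSet {
    public static Set<String> nodesToStop(Set<String> started, Set<String> pending, Set<String> stable) {
        Set<String> keep = new HashSet<>(pending);
        keep.addAll(stable);        // (pending ++ stable)
        Set<String> toStop = new HashSet<>(started);
        toStop.removeAll(keep);     // started \ (pending ++ stable)
        return toStop;
    }
}
```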
[jira] [Updated] (IGNITE-18857) Add rebalance configuration listeners to DistributionZoneManager
[ https://issues.apache.org/jira/browse/IGNITE-18857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Gusakov updated IGNITE-18857: Description: DistributionZoneManager must schedule a new rebalance (update the pending/planned keys, see [https://github.com/apache/ignite-3/blob/main/modules/distribution-zones/tech-notes/rebalance.md]) if the number of replicas in the distribution zone configuration is updated. Also, the new assignments calculation must be moved to DZM. (was: DistributionZoneManager must schedule a new rebalance (update the pending/planned keys, see [https://github.com/apache/ignite-3/blob/main/modules/distribution-zones/tech-notes/rebalance.md]) if the number of replicas in the distribution zone configuration is updated.) > Add rebalance configuration listeners to DistributionZoneManager > > > Key: IGNITE-18857 > URL: https://issues.apache.org/jira/browse/IGNITE-18857 > Project: Ignite > Issue Type: Task >Reporter: Kirill Gusakov >Assignee: Kirill Gusakov >Priority: Major > Labels: ignite-3 > > DistributionZoneManager must schedule a new rebalance (update the > pending/planned keys, see > [https://github.com/apache/ignite-3/blob/main/modules/distribution-zones/tech-notes/rebalance.md]) > if the number of replicas in the distribution zone configuration is updated. > Also, the new assignments calculation must be moved to DZM. -- This message was sent by Atlassian Jira (v8.20.10#820010)
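The listener described above can be sketched as follows, with a deliberately trivial placeholder for the assignments calculation (all names are hypothetical, not the real Ignite API, and the metastore write is shown only as a comment):

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: when the zone's replica count changes, recompute assignments
// and schedule a rebalance by writing the pending key.
public class ReplicasListenerSketch {
    // Trivial placeholder for the real affinity calculation (which, per the
    // ticket, should also move into DZM): take the first `replicas` nodes.
    public static List<String> newAssignments(List<String> zoneNodes, int replicas) {
        return new ArrayList<>(zoneNodes.subList(0, Math.min(replicas, zoneNodes.size())));
    }

    public static void onReplicasUpdate(List<String> zoneNodes, int oldReplicas, int newReplicas) {
        if (oldReplicas == newReplicas) {
            return; // nothing to rebalance
        }
        List<String> assignments = newAssignments(zoneNodes, newReplicas);
        // metastore.put(zoneId + ".assignments.pending." + partId, assignments); // hypothetical
        System.out.println("pending <- " + assignments);
    }
}
```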
[jira] [Updated] (IGNITE-18991) Move stable/planned/pending assignments from table to distribution zone root keys
[ https://issues.apache.org/jira/browse/IGNITE-18991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Gusakov updated IGNITE-18991: Description: As part of moving to distribution-zone-based data management, we need to: * Remove assignments from TableConfiguration and use metastore-based stable assignments instead * Replace the per-table metastore stable/planned/pending assignments with the same assignments per distribution zone, under the corresponding zoneId.* key roots was: As part of moving to distribution-zone-based data management, we need to: * Remove assignments from TableConfiguration > Move stable/planned/pending assignments from table to distribution zone root > keys > - > > Key: IGNITE-18991 > URL: https://issues.apache.org/jira/browse/IGNITE-18991 > Project: Ignite > Issue Type: Task >Reporter: Kirill Gusakov >Priority: Major > > As part of moving to distribution-zone-based data management, we need to: > * Remove assignments from TableConfiguration and use metastore-based stable > assignments instead > * Replace the per-table metastore stable/planned/pending assignments with the same > assignments per distribution zone, under the corresponding zoneId.* key roots > -- This message was sent by Atlassian Jira (v8.20.10#820010)
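The proposed zoneId.* key roots can be illustrated with a small key builder; the exact key layout below is an assumption for illustration, not the final design:

```java
// Illustrative construction of the proposed metastore key roots: the per-table
// assignment keys become per-zone keys under the zoneId.* root.
public class ZoneAssignmentKeys {
    public static String key(int zoneId, String type, int partId) {
        return zoneId + ".assignments." + type + "." + partId;
    }

    public static String stable(int zoneId, int partId) {
        return key(zoneId, "stable", partId);
    }

    public static String pending(int zoneId, int partId) {
        return key(zoneId, "pending", partId);
    }

    public static String planned(int zoneId, int partId) {
        return key(zoneId, "planned", partId);
    }
}
```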
[jira] [Updated] (IGNITE-18991) Move stable/planned/pending assignments from table to distribution zone root keys
[ https://issues.apache.org/jira/browse/IGNITE-18991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Gusakov updated IGNITE-18991: Summary: Move stable/planned/pending assignments from table to distribution zone root keys (was: Remove assignments from table configuration) > Move stable/planned/pending assignments from table to distribution zone root > keys > - > > Key: IGNITE-18991 > URL: https://issues.apache.org/jira/browse/IGNITE-18991 > Project: Ignite > Issue Type: Task >Reporter: Kirill Gusakov >Priority: Major > > As part of moving to distribution-zone-based data management, we need to: > * Remove assignments from TableConfiguration > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18991) Remove assignments from table configuration
[ https://issues.apache.org/jira/browse/IGNITE-18991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Gusakov updated IGNITE-18991: Description: As part of moving to distribution-zone-based data management, we need to: * Remove assignments from TableConfiguration was: We need to remove assignments from the table configuration and rebind the whole logic connected with it to metastore-based assignments > Remove assignments from table configuration > --- > > Key: IGNITE-18991 > URL: https://issues.apache.org/jira/browse/IGNITE-18991 > Project: Ignite > Issue Type: Task >Reporter: Kirill Gusakov >Priority: Major > > As part of moving to distribution-zone-based data management, we need to: > * Remove assignments from TableConfiguration > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-18951) Add JDBC SSL support to the CLI
[ https://issues.apache.org/jira/browse/IGNITE-18951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr reassigned IGNITE-18951: -- Assignee: Aleksandr > Add JDBC SSL support to the CLI > --- > > Key: IGNITE-18951 > URL: https://issues.apache.org/jira/browse/IGNITE-18951 > Project: Ignite > Issue Type: Improvement > Components: cli >Reporter: Ivan Gagarkin >Assignee: Aleksandr >Priority: Critical > Labels: ignite-3 > > As a user, I would like to use a secured JDBC connection in the CLI. > We need to support changes made under IGNITE-18578 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19003) Support of multiple data storages for 1 distribution zone
Kirill Gusakov created IGNITE-19003: --- Summary: Support of multiple data storages for 1 distribution zone Key: IGNITE-19003 URL: https://issues.apache.org/jira/browse/IGNITE-19003 Project: Ignite Issue Type: Task Reporter: Kirill Gusakov Different tables in one distribution zone can request different storage types. So: * TableConfiguration must have a compound reference (zoneId, dataStorageName) * DistributionZoneConfiguration must be able to manage multiple dataStorages instead of one -- This message was sent by Atlassian Jira (v8.20.10#820010)
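The two bullet points can be sketched together, assuming a map of named data storages per zone plus a compound (zoneId, dataStorageName) reference (types and names are illustrative, not the real configuration schema):

```java
import java.util.Map;

// Sketch of the change described above: a zone holds several named data
// storages instead of one, and a table references both the zone and one of
// its storages by name.
public class ZoneWithStorages {
    private final Map<String, String> dataStorages; // storage name -> engine

    public ZoneWithStorages(Map<String, String> dataStorages) {
        this.dataStorages = dataStorages;
    }

    public String engineOf(String dataStorageName) {
        return dataStorages.get(dataStorageName);
    }

    // Compound reference stored in the (hypothetical) table configuration.
    public record StorageRef(int zoneId, String dataStorageName) { }
}
```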
[jira] [Created] (IGNITE-19002) Move dataStorage configuration from table to distribution zone config
Kirill Gusakov created IGNITE-19002: --- Summary: Move dataStorage configuration from table to distribution zone config Key: IGNITE-19002 URL: https://issues.apache.org/jira/browse/IGNITE-19002 Project: Ignite Issue Type: Task Reporter: Kirill Gusakov To implement the distribution-zone-based architecture, we need to move the dataStorage config from TableConfiguration to DistributionZoneConfiguration -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-18475) Huge performance drop with enabled sync write per log entry for RAFT logs
[ https://issues.apache.org/jira/browse/IGNITE-18475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698938#comment-17698938 ] Ivan Bessonov edited comment on IGNITE-18475 at 3/10/23 1:22 PM: - First of all, what are the implications of completely disabling fsync for the log. # If a minority of nodes have been restarted with the loss of log suffix, everything works fine. Nodes are treated according to their real state, log is replicated once again. Case is covered by {{{}ItTruncateSuffixAndRestartTest#testRestartSingleNode{}}}. # If a majority of nodes have been restarted, but only the minority has a loss of log suffix, everything works fine. Case is covered by {{{}ItTruncateSuffixAndRestartTest#testRestartTwoNodes{}}}. This means that, in any situation, if only a minority of nodes lost the log suffix, raft group remains healthy and consistent. # If a majority of nodes have been restarted, with the majority experiencing the loss of log suffix, things become unstable: ## If leader has not been restarted, it may replicate the log suffix to the followers that experienced data loss. If this happened, data will be consistent. ## If leader has been restarted, the re-election will occur. Now it all depends on its results. ### Node with newest data is elected as a leader - everything's fine, data will be consistent after replication. ### Node with data loss is elected as a leader. Two things may happen: {-}If only a single RAFT log entry has been lost{-}, according to a new leader, the group will move into broken state. 
For example: {code:java} // Before start: Node 0 (online) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] 2: LogEntry [type=ENTRY_TYPE_DATA, id=LogId [index=2, term=1], ..., data=1] Node 1 (offline) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] Node 2 (offline) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] // After start: Node 0 (online) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] 2: LogEntry [type=ENTRY_TYPE_DATA, id=LogId [index=2, term=1], ..., data=1] Node 1 (online) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] 2: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=2, term=3], ..., data=1] Node 2 (online) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] 2: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=2, term=3], ..., data=1]{code} The log for node 0 is silently "corrupted"; the data is inconsistent, and the configuration is inconsistent. {*}This is, most likely, a bug in JRaft{*}. The following message can be seen in such a test, instead of an error, for node 0: {code:java} WARNING: Received entries of which the lastLog=2 is not greater than appliedIndex=2, return immediately with nothing changed. {code} # ## ### 2. {-}If multiple log entries have been lost{-}, according to a new leader, the aforementioned bug does not happen. The new majority, which consists of old nodes, will continue working, while the old minority with "newer" data will fail to replicate new updates. To my knowledge, no attempts at snapshot installation would take place. Some data is permanently lost if not recovered manually. Some group nodes require manual cleanup. Otherwise, data is consistent. EDIT: _real conditions are not known. Apparently, the same behavior can be reproduced in both cases, but I have not encountered it in the second case._ 4. 
Full cluster restart, where majority of nodes lose log suffix, seems to be equivalent to 3.2.2 Jira can't handle code blocks inside of lists, sorry for messed formatting was (Author: ibessonov): First of all, what are the implications of completely disabling fsync for the log. # If a minority of nodes have been restarted with the loss of log suffix, everything works fine. Nodes are treated according to their real state, log is replicated once again. Case is covered by {{{}ItTruncateSuffixAndRestartTest#testRestartSingleNode{}}}. # If a majority of nodes have been restarted, but only the minority has a loss of log suffix, everything works fine. Case is covered by {{{}ItTruncateSuffixAndRestartTest#testRestartTwoNodes{}}}. This means that, in any situation, if only a minority of nodes lost the log suffix, raft group remains healthy and consistent. # If a majority of nodes have been restarted, with the majority experiencing the loss of log suffix, things become unstable: ## If leader has not been restarted, it may replicate the log suffix to the followers that experienced data loss. If this happened, data will be consistent. ## If leader has been restarted, the re-election will occur. Now it all depends on its results. ### Node with newest data is elected as a leader -
[jira] [Resolved] (IGNITE-18252) Set 'failIfNoTests' option to 'true' for all ignite-extensions builds
[ https://issues.apache.org/jira/browse/IGNITE-18252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Shishkov resolved IGNITE-18252. Resolution: Fixed > Set 'failIfNoTests' option to 'true' for all ignite-extensions builds > - > > Key: IGNITE-18252 > URL: https://issues.apache.org/jira/browse/IGNITE-18252 > Project: Ignite > Issue Type: Sub-task > Components: extensions >Reporter: Ilya Shishkov >Assignee: Vitaliy Osipov >Priority: Minor > Labels: IEP-59, ise, teamcity > > {{failIfNoTests}} should be set to 'true' *_for all modules_* in order to > prevent situations when a module was not tested. Currently, we have the build > configuration shown below: > {quote} > -pl modules/%DIR_EXTENSION% -am > -Dmaven.test.failure.ignore=true > -DfailIfNoTests=*{color:#DE350B}false{color}* > -Dignite.version=%IGNITE_VERSION% > {quote} -- This message was sent by Atlassian Jira (v8.20.10#820010)
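For reference, the resolved configuration simply flips the highlighted flag, i.e. something like:

```
-pl modules/%DIR_EXTENSION% -am
-Dmaven.test.failure.ignore=true
-DfailIfNoTests=true
-Dignite.version=%IGNITE_VERSION%
```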
[jira] [Comment Edited] (IGNITE-18475) Huge performance drop with enabled sync write per log entry for RAFT logs
[ https://issues.apache.org/jira/browse/IGNITE-18475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698938#comment-17698938 ] Ivan Bessonov edited comment on IGNITE-18475 at 3/10/23 1:13 PM: - First of all, what are the implications of completely disabling fsync for the log? # If a minority of nodes have been restarted with the loss of the log suffix, everything works fine. Nodes are treated according to their real state, and the log is replicated once again. This case is covered by {{{}ItTruncateSuffixAndRestartTest#testRestartSingleNode{}}}. # If a majority of nodes have been restarted, but only a minority has lost the log suffix, everything works fine. This case is covered by {{{}ItTruncateSuffixAndRestartTest#testRestartTwoNodes{}}}. This means that, in any situation, if only a minority of nodes lost the log suffix, the raft group remains healthy and consistent. # If a majority of nodes have been restarted, with the majority experiencing the loss of the log suffix, things become unstable: ## If the leader has not been restarted, it may replicate the log suffix to the followers that experienced data loss. If this happens, data will be consistent. ## If the leader has been restarted, a re-election will occur. Now it all depends on its results. ### The node with the newest data is elected as leader - everything's fine, data will be consistent after replication. ### A node with data loss is elected as leader. Two things may happen: If only a single RAFT log entry has been lost, according to the new leader, the group will move into a broken state. 
For example: {code:java} // Before start: Node 0 (online) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] 2: LogEntry [type=ENTRY_TYPE_DATA, id=LogId [index=2, term=1], ..., data=1] Node 1 (offline) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] Node 2 (offline) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] // After start: Node 0 (online) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] 2: LogEntry [type=ENTRY_TYPE_DATA, id=LogId [index=2, term=1], ..., data=1] Node 1 (online) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] 2: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=2, term=3], ..., data=1] Node 2 (online) 1: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=1, term=1], ..., data=0] 2: LogEntry [type=ENTRY_TYPE_CONFIGURATION, id=LogId [index=2, term=3], ..., data=1]{code} The log for node 0 is silently "corrupted": data is inconsistent, and so is the configuration. {*}This is, most likely, a bug in JRaft{*}. The following message can be seen in such a test for node 0, instead of an error: {code:java} WARNING: Received entries of which the lastLog=2 is not greater than appliedIndex=2, return immediately with nothing changed. {code} 2. If multiple log entries have been lost, according to the new leader, the aforementioned bug does not occur. The new majority, consisting of old nodes, will continue working, while the old minority with "newer" data will fail to replicate new updates. To my knowledge, no snapshot installation attempts would take place. Some data is permanently lost if not recovered manually. Some group nodes require manual cleanup. Otherwise, data is consistent. 4. 
Full cluster restart, where the majority of nodes lose the log suffix, seems to be equivalent to 3.2.2.
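The re-election outcomes described in the comment follow from Raft's vote-granting rule: a voter rejects a candidate only if the candidate's last log entry is older than its own, so once a majority has lost the log suffix, a node that also lost it can still assemble a quorum. A minimal sketch of that rule as applied to the three-node example above (a hypothetical model with invented class and method names, not JRaft code):

```java
import java.util.List;

// Hypothetical model, not JRaft code: a voter grants its vote only if the
// candidate's log is at least as up-to-date as its own (Raft's election rule).
public class ElectionSketch {
    // Each node's log is summarized by the term and index of its last entry.
    record Log(long lastTerm, long lastIndex) {}

    static boolean candidateUpToDate(Log candidate, Log voter) {
        if (candidate.lastTerm() != voter.lastTerm())
            return candidate.lastTerm() > voter.lastTerm();
        return candidate.lastIndex() >= voter.lastIndex();
    }

    // True if the candidate can gather a majority of votes (it votes for itself).
    static boolean canWin(int candidateIdx, List<Log> nodes) {
        int votes = 0;
        for (int i = 0; i < nodes.size(); i++) {
            if (i == candidateIdx || candidateUpToDate(nodes.get(candidateIdx), nodes.get(i)))
                votes++;
        }
        return votes * 2 > nodes.size();
    }

    public static void main(String[] args) {
        // Node 0 kept entry (term=1, index=2); nodes 1 and 2 lost the suffix.
        List<Log> nodes = List.of(new Log(1, 2), new Log(1, 1), new Log(1, 1));
        // A node that lost the suffix can still win: nodes 1 and 2 form a majority.
        System.out.println(canWin(1, nodes)); // true
        // Node 0 can also win, in which case the suffix is re-replicated safely.
        System.out.println(canWin(0, nodes)); // true
    }
}
```

Which of the two outcomes occurs depends on election timing, which is why the scenario is described as unstable rather than always broken.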
[jira] [Commented] (IGNITE-18475) Huge performance drop with enabled sync write per log entry for RAFT logs
[ https://issues.apache.org/jira/browse/IGNITE-18475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698938#comment-17698938 ] Ivan Bessonov commented on IGNITE-18475: > Huge performance drop with enabled sync write per log entry for RAFT logs > - > > Key: IGNITE-18475 > URL: https://issues.apache.org/jira/browse/IGNITE-18475 > Project: Ignite > Issue Type: Task >Reporter: Kirill Gusakov >Assignee: Ivan Bessonov >Priority: Major > Labels: ignite-3 > Time Spent: 10m > Remaining Estimate: 0h > > During the YCSB benchmark runs for ignite-3 beta1 we found out that we have > significant issues with performance for select/insert queries. > One of the root causes of these issues: every log entry is written to rocksdb with > the sync option enabled (which leads to frequent fsync calls). > These issues can be reproduced by localised jmh benchmarks > [SelectBenchmark|https://github.com/gridgain/apache-ignite-3/blob/4b9de922caa4aef97a5e8e159d5db76a3fc7a3ad/modules/runner/src/test/java/org/apache/ignite/internal/benchmark/SelectBenchmark.java#L39] > and > [InsertBenchmark|https://github.com/gridgain/apache-ignite-3/blob/4b9de922caa4aef97a5e8e159d5db76a3fc7a3ad/modules/runner/src/test/java/org/apache/ignite/internal/benchmark/InsertBenchmark.java#L29] > with RaftOptions.sync=true/false: > * jdbc select queries: 115ms vs 4ms > * jdbc insert queries: 70ms vs 2.5ms > (These results received on MacBook Pro (16-inch, 2019) and it
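The gap between sync=true and sync=false comes down to issuing an fsync for every appended log entry. A rough standalone illustration of that cost, not the benchmark from the ticket (file names, entry size, and count are arbitrary):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Rough illustration (not the Ignite benchmark): append N small entries,
// optionally forcing each one to disk, as a sync-per-entry log would.
public class FsyncSketch {
    static long appendEntries(Path file, int count, boolean syncEachWrite) throws IOException {
        long start = System.nanoTime();
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ByteBuffer entry = ByteBuffer.allocate(64); // 64-byte dummy entry
            for (int i = 0; i < count; i++) {
                entry.clear();
                ch.write(entry);
                if (syncEachWrite)
                    ch.force(false); // flush file data (not metadata) per entry
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("fsync-sketch");
        long synced = appendEntries(dir.resolve("synced.log"), 1000, true);
        long buffered = appendEntries(dir.resolve("buffered.log"), 1000, false);
        // On typical hardware the per-entry fsync path is dramatically slower.
        System.out.printf("sync=true: %d ms, sync=false: %d ms%n",
                synced / 1_000_000, buffered / 1_000_000);
    }
}
```

The trade-off the comment analyzes is exactly this: skipping the per-entry force buys the latency back, at the cost of possibly losing a log suffix on crash.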
[jira] [Assigned] (IGNITE-18252) Set 'failIfNoTests' option to 'true' for all ignite-extensions builds
[ https://issues.apache.org/jira/browse/IGNITE-18252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Shishkov reassigned IGNITE-18252: -- Assignee: Vitaliy Osipov > Set 'failIfNoTests' option to 'true' for all ignite-extensions builds > - -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18911) Add tests for examples of extensions
[ https://issues.apache.org/jira/browse/IGNITE-18911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17698927#comment-17698927 ] Ilya Shishkov commented on IGNITE-18911: [~PetrovMikhail], thank you very much for the review. > Add tests for examples of extensions > > > Key: IGNITE-18911 > URL: https://issues.apache.org/jira/browse/IGNITE-18911 > Project: Ignite > Issue Type: Sub-task > Components: extensions >Reporter: Ilya Shishkov >Assignee: Ilya Shishkov >Priority: Minor > Labels: ise > > Modules from the list below do not have tests for extensions: > # Zookeeper Ip Finder > # Spring Transactions > # Spring Boot Thin Client Autoconfigure > # Spring Boot Autoconfigure -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18966) Sql. Custom data types. Fix least restrictive type and nullability.
[ https://issues.apache.org/jira/browse/IGNITE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maksim Zhuravkov updated IGNITE-18966: -- Description: 1. Calcite uses ANY type for the DEFAULT operator and introduction of custom data types caused a regression that broke that rule. 2. Nullable attribute is not correctly set for custom data types - it creates a custom data type with nullability = true when it should be false. 3. Update commonTypeForBinaryComparison to convert to/from custom data type in binary comparison operators. was: 1. Calcite uses ANY type for the DEFAULT operator and the introduction of a custom data type caused a regression that broke that rule. 2. Nullable attribute is not correctly set for custom data types - it creates a custom data type with nullability = true when it should be false. 3. Update commonTypeForBinaryComparison to convert to/from custom data type in binary comparison operators. > Sql. Custom data types. Fix least restrictive type and nullability. > --- > > Key: IGNITE-18966 > URL: https://issues.apache.org/jira/browse/IGNITE-18966 > Project: Ignite > Issue Type: Bug > Components: sql >Reporter: Maksim Zhuravkov >Assignee: Maksim Zhuravkov >Priority: Minor > Labels: calcite2-required, calcite3-required, ignite-3 > Fix For: 3.0.0-beta2 > > > 1. Calcite uses ANY type for the DEFAULT operator and introduction of custom > data types caused a regression that broke that rule. > > 2. Nullable attribute is not correctly set for custom data types - it creates > a custom data type with nullability = true when it should be false. > 3. Update commonTypeForBinaryComparison to convert to/from custom data type > in binary comparison operators. -- This message was sent by Atlassian Jira (v8.20.10#820010)
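Point 2 of IGNITE-18966 concerns nullability derivation: the least restrictive type of several operands should be nullable only when at least one operand is nullable. A toy model of that rule (invented type representation, not the actual Calcite/Ignite type factory):

```java
import java.util.List;

// Hypothetical model of the rule the issue describes: the least restrictive
// type of several operands is nullable only if some operand is nullable.
public class LeastRestrictiveSketch {
    record SimpleType(String name, boolean nullable) {}

    static SimpleType leastRestrictive(List<SimpleType> types) {
        // Name resolution is elided; only the nullability derivation is shown.
        String name = types.get(0).name();
        boolean anyNullable = types.stream().anyMatch(SimpleType::nullable);
        return new SimpleType(name, anyNullable);
    }

    public static void main(String[] args) {
        SimpleType a = new SimpleType("UUID", false);
        SimpleType b = new SimpleType("UUID", false);
        // Deriving nullable=true here, from two non-nullable operands,
        // is the bug described in point 2 of the issue.
        System.out.println(leastRestrictive(List.of(a, b)).nullable()); // false
    }
}
```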
[jira] [Updated] (IGNITE-19001) Sql. Query with distinct aggregate fails (H2 engine).
[ https://issues.apache.org/jira/browse/IGNITE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin updated IGNITE-19001: -- Component/s: sql > Sql. Query with distinct aggregate fails (H2 engine). > - > > Key: IGNITE-19001 > URL: https://issues.apache.org/jira/browse/IGNITE-19001 > Project: Ignite > Issue Type: Bug > Components: sql >Reporter: Pavel Pereslegin >Priority: Major > > Sql fields query fails with assertion error for the query > {code:sql} > SELECT COUNT(*), COUNT(DISTINCT(v)) FROM test > {code} > (if we replace {{count( *)}} with {{count(v)}}, everything will work) > Reproducer: > {code:java} > IgniteCache cache = startGrid().getOrCreateCache(DEFAULT_CACHE_NAME); > cache.query(new SqlFieldsQuery("CREATE TABLE test(id int primary key, v > int)")); > cache.query(new SqlFieldsQuery("SELECT COUNT(*), COUNT(DISTINCT(v)) FROM > test")); > {code} > Error: > {noformat} > java.lang.AssertionError > at > org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.splitAggregate(GridSqlQuerySplitter.java:1732) > at > org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.splitAggregates(GridSqlQuerySplitter.java:1611) > at > org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.splitSelectExpression(GridSqlQuerySplitter.java:1563) > at > org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.splitSelect(GridSqlQuerySplitter.java:1181) > at > org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.splitQueryModel(GridSqlQuerySplitter.java:1131) > at > org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.splitQuery(GridSqlQuerySplitter.java:378) > at > org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.split0(GridSqlQuerySplitter.java:290) > at > org.apache.ignite.internal.processors.query.h2.sql.GridSqlQuerySplitter.split(GridSqlQuerySplitter.java:221) > at > 
org.apache.ignite.internal.processors.query.h2.QueryParser.parseH2(QueryParser.java:552) > at > org.apache.ignite.internal.processors.query.h2.QueryParser.parse0(QueryParser.java:229) > at > org.apache.ignite.internal.processors.query.h2.QueryParser.parse(QueryParser.java:142) > at > org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1006) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor$2.applyx(GridQueryProcessor.java:3111) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor$2.applyx(GridQueryProcessor.java:3082) > at > org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3817) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$3(GridQueryProcessor.java:3128) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:3256) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:3078) > at > org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:3006) > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:819) > at > org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:767) > at > org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:428) > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
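The assertion fires inside the two-phase (map/reduce) query splitter, which has to treat COUNT(DISTINCT ...) specially: per-partition distinct counts cannot simply be summed on the reduce side, so the map phase must ship the distinct values themselves. A small plain-Java illustration of why (not Ignite's splitter code):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustration (not Ignite code) of why COUNT(DISTINCT v) needs special
// splitting: partial distinct counts cannot be summed, so the map phase
// must report the distinct values and the reduce phase merges them.
public class DistinctSplitSketch {
    // Map phase: each partition reports its set of distinct values.
    static Set<Integer> mapPhase(List<Integer> partition) {
        return new HashSet<>(partition);
    }

    // Reduce phase: merge the per-partition sets, then count.
    static long reducePhase(List<Set<Integer>> partials) {
        Set<Integer> merged = new HashSet<>();
        partials.forEach(merged::addAll);
        return merged.size();
    }

    public static void main(String[] args) {
        List<Integer> p0 = List.of(1, 2, 2); // values of v on partition 0
        List<Integer> p1 = List.of(2, 3);    // values of v on partition 1
        long naive = mapPhase(p0).size() + mapPhase(p1).size();
        long correct = reducePhase(List.of(mapPhase(p0), mapPhase(p1)));
        System.out.println("naive sum of partial counts: " + naive);   // 4 (wrong: 2 counted twice)
        System.out.println("merged distinct count: " + correct);       // 3
    }
}
```

Mixing COUNT(*) (which is summable) with COUNT(DISTINCT v) (which is not) in one query forces the splitter to handle both shapes at once, which is where the reported assertion error occurs.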
[jira] [Updated] (IGNITE-19001) Sql. Query with distinct aggregate fails (H2 engine).
[ https://issues.apache.org/jira/browse/IGNITE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin updated IGNITE-19001: -- Ignite Flags: Release Notes Required (was: Docs Required,Release Notes Required) > Sql. Query with distinct aggregate fails (H2 engine). > - > > Key: IGNITE-19001 > URL: https://issues.apache.org/jira/browse/IGNITE-19001 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Pereslegin >Priority: Major -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19001) Sql. Query with distinct aggregate fails (H2 engine).
[ https://issues.apache.org/jira/browse/IGNITE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin updated IGNITE-19001: -- Description: Sql fields query fails with assertion error for the query {code:sql} SELECT COUNT(*), COUNT(DISTINCT(v)) FROM test {code} (if we replace {{count( *)}} with {{count(v)}}, everything will work) Reproducer: {code:java} IgniteCache cache = startGrid().getOrCreateCache(DEFAULT_CACHE_NAME); cache.query(new SqlFieldsQuery("CREATE TABLE test(id int primary key, v int)")); cache.query(new SqlFieldsQuery("SELECT COUNT(*), COUNT(DISTINCT(v)) FROM test")); {code} was: Sql fields query fails with assertion error for the query "SELECT COUNT(*), COUNT(DISTINCT(v)) FROM test".
[jira] [Created] (IGNITE-19001) Sql. Query with distinct aggregate fails (H2 engine).
Pavel Pereslegin created IGNITE-19001: - Summary: Sql. Query with distinct aggregate fails (H2 engine). Key: IGNITE-19001 URL: https://issues.apache.org/jira/browse/IGNITE-19001 Project: Ignite Issue Type: Bug Reporter: Pavel Pereslegin Sql fields query fails with assertion error for the query "SELECT COUNT(*), COUNT(DISTINCT(v)) FROM test". Reproducer: {code:java} IgniteCache cache = startGrid().getOrCreateCache(DEFAULT_CACHE_NAME); cache.query(new SqlFieldsQuery("CREATE TABLE test(id int primary key, v int)")); cache.query(new SqlFieldsQuery("SELECT COUNT(*), COUNT(DISTINCT(v)) FROM test")); {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18911) Add tests for examples of extensions
[ https://issues.apache.org/jira/browse/IGNITE-18911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Petrov updated IGNITE-18911: Ignite Flags: (was: Docs Required,Release Notes Required) > Add tests for examples of extensions > > > Key: IGNITE-18911 > URL: https://issues.apache.org/jira/browse/IGNITE-18911 > Project: Ignite > Issue Type: Sub-task > Components: extensions >Reporter: Ilya Shishkov >Assignee: Ilya Shishkov >Priority: Minor > Labels: ise > > The modules from the list below do not have tests for extensions: > # Zookeeper Ip Finder > # Spring Transactions > # Spring Boot Thin Client Autoconfigure > # Spring Boot Autoconfigure -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18911) Add tests for examples of extensions
[ https://issues.apache.org/jira/browse/IGNITE-18911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698898#comment-17698898 ] Mikhail Petrov commented on IGNITE-18911: - [~shishkovilja] Merged to the master branch. Thank you very much for the contribution. > Add tests for examples of extensions > > > Key: IGNITE-18911 > URL: https://issues.apache.org/jira/browse/IGNITE-18911 > Project: Ignite > Issue Type: Sub-task > Components: extensions >Reporter: Ilya Shishkov >Assignee: Ilya Shishkov >Priority: Minor > Labels: ise > > The modules from the list below do not have tests for extensions: > # Zookeeper Ip Finder > # Spring Transactions > # Spring Boot Thin Client Autoconfigure > # Spring Boot Autoconfigure -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18911) Add tests for examples of extensions
[ https://issues.apache.org/jira/browse/IGNITE-18911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698895#comment-17698895 ] Ilya Shishkov commented on IGNITE-18911: *Test results:* [Spring Boot Autoconfigure|https://ci.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_SpringBootAutoconfigure/7125640?buildTab=tests] [Spring Boot Thin Client Autoconfigure|https://ci2.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_SpringBootThinClientAutoconfigure/7086587?buildTab=tests] [Spring Transactions|https://ci2.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_SpringTransactions/7086589?buildTab=tests] [Zookeeper Ip Finder|https://ci.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_ZookeeperIpFinder/7125543?buildTab=tests] > Add tests for examples of extensions > > > Key: IGNITE-18911 > URL: https://issues.apache.org/jira/browse/IGNITE-18911 > Project: Ignite > Issue Type: Sub-task > Components: extensions >Reporter: Ilya Shishkov >Assignee: Ilya Shishkov >Priority: Minor > Labels: ise > > The modules from the list below do not have tests for extensions: > # Zookeeper Ip Finder > # Spring Transactions > # Spring Boot Thin Client Autoconfigure > # Spring Boot Autoconfigure -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Comment Edited] (IGNITE-18911) Add tests for examples of extensions
[ https://issues.apache.org/jira/browse/IGNITE-18911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698895#comment-17698895 ] Ilya Shishkov edited comment on IGNITE-18911 at 3/10/23 11:49 AM: -- *Test results:* * [Spring Boot Autoconfigure|https://ci.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_SpringBootAutoconfigure/7125640?buildTab=tests] * [Spring Boot Thin Client Autoconfigure|https://ci2.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_SpringBootThinClientAutoconfigure/7086587?buildTab=tests] * [Spring Transactions|https://ci2.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_SpringTransactions/7086589?buildTab=tests] * [Zookeeper Ip Finder|https://ci.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_ZookeeperIpFinder/7125543?buildTab=tests] was (Author: shishkovilja): *Test results:* [Spring Boot Autoconfigure|https://ci.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_SpringBootAutoconfigure/7125640?buildTab=tests] [Spring Boot Thin Client Autoconfigure|https://ci2.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_SpringBootThinClientAutoconfigure/7086587?buildTab=tests] [Spring Transactions|https://ci2.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_SpringTransactions/7086589?buildTab=tests] [Zookeeper Ip Finder|https://ci.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_ZookeeperIpFinder/7125543?buildTab=tests] > Add tests for examples of extensions > > > Key: IGNITE-18911 > URL: https://issues.apache.org/jira/browse/IGNITE-18911 > Project: Ignite > Issue Type: Sub-task > Components: extensions >Reporter: Ilya Shishkov >Assignee: Ilya Shishkov >Priority: Minor > Labels: ise > > The modules from the list below do not have tests for extensions: > # Zookeeper Ip Finder > # Spring Transactions > # Spring Boot Thin Client Autoconfigure > # Spring Boot Autoconfigure -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18915) Thin 3.0: Add SQL tests with UUID columns
[ https://issues.apache.org/jira/browse/IGNITE-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698881#comment-17698881 ] Pavel Tupitsyn commented on IGNITE-18915: - Merged to main: b1bff1ccc68b798b202d81e894ee8fee58417fc5 > Thin 3.0: Add SQL tests with UUID columns > - > > Key: IGNITE-18915 > URL: https://issues.apache.org/jira/browse/IGNITE-18915 > Project: Ignite > Issue Type: Improvement > Components: thin client >Reporter: Igor Sapego >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 10m > Remaining Estimate: 0h > > Now that UUID support is implemented for SQL core > (https://issues.apache.org/jira/browse/IGNITE-16376), we need to add tests > with UUID columns to clients to ensure that everything works well in clients. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-18966) Sql. Custom data types. Fix least restrictive type and nullability.
[ https://issues.apache.org/jira/browse/IGNITE-18966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maksim Zhuravkov reassigned IGNITE-18966: - Assignee: Maksim Zhuravkov > Sql. Custom data types. Fix least restrictive type and nullability. > --- > > Key: IGNITE-18966 > URL: https://issues.apache.org/jira/browse/IGNITE-18966 > Project: Ignite > Issue Type: Bug > Components: sql >Reporter: Maksim Zhuravkov >Assignee: Maksim Zhuravkov >Priority: Minor > Labels: calcite2-required, calcite3-required, ignite-3 > Fix For: 3.0.0-beta2 > > > 1. Calcite uses ANY type for the DEFAULT operator and the introduction of a > custom data type caused a regression that broke that rule. > > 2. Nullable attribute is not correctly set for custom data types - it creates > a custom data type with nullability = true when it should be false. > 3. Update commonTypeForBinaryComparison to convert to/from custom data type > in binary comparison operators. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18993) ODBC: Regression. Missed handling of single quotes
[ https://issues.apache.org/jira/browse/IGNITE-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Daschinsky updated IGNITE-18993: - Fix Version/s: 2.15 > ODBC: Regression. Missed handling of single quotes > -- > > Key: IGNITE-18993 > URL: https://issues.apache.org/jira/browse/IGNITE-18993 > Project: Ignite > Issue Type: Bug > Components: platforms >Affects Versions: 2.8, 2.9, 2.10, 2.11, 2.12, 2.13, 2.14 >Reporter: Ivan Daschinsky >Assignee: Ivan Daschinsky >Priority: Major > Labels: c++, odbc > Fix For: 2.15 > > Time Spent: 20m > Remaining Estimate: 0h > > When {{SQLTables}} is called with the param 'table type' set to {{'TABLES'}} (in > single quotes), it returns an empty table list. > It is quite a common way of calling this procedure (e.g. by INFORMATICA) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18992) CDC: Use single partition topic for metadata updates
[ https://issues.apache.org/jira/browse/IGNITE-18992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Shishkov updated IGNITE-18992: --- Summary: CDC: Use single partition topic for metadata updates (was: Use single partition topic for metadata updates) > CDC: Use single partition topic for metadata updates > - > > Key: IGNITE-18992 > URL: https://issues.apache.org/jira/browse/IGNITE-18992 > Project: Ignite > Issue Type: Task > Components: extensions >Reporter: Ilya Shishkov >Priority: Major > Labels: IEP-59, ise > > In order to read data with guaranteed order, metadata topic must have only > one partition. To achieve this we should: > * Write binary types and marshaller mappings only to a single partition in > {{IgniteToKafkaCdcStreamer#onMappings}} and {{#onTypes}}. > * Assign {{KafkaConsumer}} in {{KafkaToIgniteMetadataUpdater}} to a single > partition. > * Print warnings into logs, when topic has more than one partition. -- This message was sent by Atlassian Jira (v8.20.10#820010)
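The three steps listed in IGNITE-18992 could be sketched against the public kafka-clients API roughly as follows. This is a hedged illustration only: the class name {{MetadataTopicSketch}}, the {{METADATA_PARTITION}} constant, and the method names are invented for this sketch and are not the extension's actual code.

```java
import java.util.List;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

/** Sketch: route all metadata through a single partition so consumption order is guaranteed. */
class MetadataTopicSketch {
    /** Illustrative constant: the single partition used for metadata records. */
    private static final int METADATA_PARTITION = 0;

    /** Producer side: pin every metadata record to the metadata partition explicitly. */
    static void sendMetadata(KafkaProducer<Void, byte[]> producer, String topic, byte[] payload) {
        producer.send(new ProducerRecord<>(topic, METADATA_PARTITION, null, payload));
    }

    /**
     * Consumer side: assign (rather than subscribe) the consumer to exactly one
     * partition, and warn when the topic was created with more than one.
     */
    static void assignMetadataPartition(KafkaConsumer<Void, byte[]> consumer, String topic) {
        int partitions = consumer.partitionsFor(topic).size();
        if (partitions > 1) {
            // The issue asks for a warning here: ordering is only guaranteed with one partition.
            System.err.println("Metadata topic '" + topic + "' has " + partitions
                    + " partitions; ordered reads require exactly one.");
        }
        consumer.assign(List.of(new TopicPartition(topic, METADATA_PARTITION)));
    }
}
```

Using {{assign}} instead of {{subscribe}} avoids group rebalancing entirely, which fits a reader that must see every metadata record in order.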
[jira] [Assigned] (IGNITE-18506) ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky
[ https://issues.apache.org/jira/browse/IGNITE-18506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky reassigned IGNITE-18506: --- Assignee: Evgeny Stanilovsky > ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky > > > Key: IGNITE-18506 > URL: https://issues.apache.org/jira/browse/IGNITE-18506 > Project: Ignite > Issue Type: Bug >Reporter: Roman Puchkovskiy >Assignee: Evgeny Stanilovsky >Priority: Major > Labels: ignite-3, tech-debt > > [https://ci.ignite.apache.org/test/5151808410902558750?currentProjectId=ApacheIgnite3xGradle_Test_IntegrationTests=true=] > In failing runs, nodes leave the cluster and return to it during the test. > This seems to make the session hang (until the 5-minute timeout triggers). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18506) ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky
[ https://issues.apache.org/jira/browse/IGNITE-18506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698866#comment-17698866 ] Evgeny Stanilovsky commented on IGNITE-18506: - The problem will be fixed after [1]. It seems all we need here is to correct the disabled annotation. [1] https://issues.apache.org/jira/browse/IGNITE-18203 > ItDataSchemaSyncTest.checkSchemasCorrectlyRestore() is flaky > > > Key: IGNITE-18506 > URL: https://issues.apache.org/jira/browse/IGNITE-18506 > Project: Ignite > Issue Type: Bug >Reporter: Roman Puchkovskiy >Priority: Major > Labels: ignite-3, tech-debt > > [https://ci.ignite.apache.org/test/5151808410902558750?currentProjectId=ApacheIgnite3xGradle_Test_IntegrationTests=true=] > In failing runs, nodes leave the cluster and return to it during the test. > This seems to make the session hang (until the 5-minute timeout triggers). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18993) ODBC: Regression. Missed handling of single quotes
[ https://issues.apache.org/jira/browse/IGNITE-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698863#comment-17698863 ] Igor Sapego commented on IGNITE-18993: -- Looks good to me. > ODBC: Regression. Missed handling of single quotes > -- > > Key: IGNITE-18993 > URL: https://issues.apache.org/jira/browse/IGNITE-18993 > Project: Ignite > Issue Type: Bug > Components: platforms >Affects Versions: 2.8, 2.9, 2.10, 2.11, 2.12, 2.13, 2.14 >Reporter: Ivan Daschinsky >Assignee: Ivan Daschinsky >Priority: Major > Labels: c++, odbc > Time Spent: 10m > Remaining Estimate: 0h > > When {{SQLTables}} is called with the param 'table type' set to {{'TABLES'}} (in > single quotes), it returns an empty table list. > It is quite a common way of calling this procedure (e.g. by INFORMATICA) -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-18915) Thin 3.0: Add SQL tests with UUID columns
[ https://issues.apache.org/jira/browse/IGNITE-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698856#comment-17698856 ] Igor Sapego commented on IGNITE-18915: -- Looks good to me. > Thin 3.0: Add SQL tests with UUID columns > - > > Key: IGNITE-18915 > URL: https://issues.apache.org/jira/browse/IGNITE-18915 > Project: Ignite > Issue Type: Improvement > Components: thin client >Reporter: Igor Sapego >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > Now that UUID support is implemented for SQL core > (https://issues.apache.org/jira/browse/IGNITE-16376), we need to add tests > with UUID columns to clients to ensure that everything works well in clients. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19000) testCurrentDateTimeTimeStamp fails on Windows
Aleksey Demakov created IGNITE-19000: Summary: testCurrentDateTimeTimeStamp fails on Windows Key: IGNITE-19000 URL: https://issues.apache.org/jira/browse/IGNITE-19000 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 3.0.0-beta2 Reporter: Aleksey Demakov Assignee: Aleksey Demakov Fix For: 3.0.0-beta2 The test testCurrentDateTimeTimeStamp fails on Windows with the following error: exp ts:2023-03-10T10:43:24.767792200, act ts:2023-03-10T10:43:24.767 ==> expected: <true> but was: <false> Expected :true Actual :false org.opentest4j.AssertionFailedError: exp ts:2023-03-10T10:43:24.767792200, act ts:2023-03-10T10:43:24.767 ==> expected: <true> but was: <false> at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151) at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132) at app//org.junit.jupiter.api.AssertTrue.failNotTrue(AssertTrue.java:63) at app//org.junit.jupiter.api.AssertTrue.assertTrue(AssertTrue.java:36) at app//org.junit.jupiter.api.Assertions.assertTrue(Assertions.java:211) at app//org.apache.ignite.internal.sql.engine.ItFunctionsTest.checkDateTimeQuery(ItFunctionsTest.java:91) at app//org.apache.ignite.internal.sql.engine.ItFunctionsTest.testCurrentDateTimeTimeStamp(ItFunctionsTest.java:61) ... It appears SQL truncates the time to 3 digits after the decimal point, thus causing the failure. -- This message was sent by Atlassian Jira (v8.20.10#820010)
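The precision mismatch in the error above can be reproduced with plain {{java.time}}, without Ignite: an {{Instant}} carrying sub-millisecond precision never equals its copy truncated to the 3 fractional digits a TIMESTAMP(3)-style column keeps. A minimal sketch (class and method names are illustrative):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class TimestampTruncation {
    // Truncate an Instant to millisecond precision, as a TIMESTAMP(3) column would.
    static Instant truncateToMillis(Instant ts) {
        return ts.truncatedTo(ChronoUnit.MILLIS);
    }

    public static void main(String[] args) {
        Instant expected = Instant.parse("2023-03-10T10:43:24.767792200Z");
        Instant actual = truncateToMillis(expected);

        System.out.println("exp ts:" + expected); // 2023-03-10T10:43:24.767792200Z
        System.out.println("act ts:" + actual);   // 2023-03-10T10:43:24.767Z

        // A strict equals comparison fails; comparing at millisecond precision passes.
        System.out.println(expected.equals(actual));                   // false
        System.out.println(truncateToMillis(expected).equals(actual)); // true
    }
}
```

Comparing both sides at the same (millisecond) precision is one way such a test can be made platform-independent.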
[jira] [Updated] (IGNITE-18999) Snapshot only primary copies of partitions
[ https://issues.apache.org/jira/browse/IGNITE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolay Izhikov updated IGNITE-18999: - Description: Ignite must provide an option to snapshot only primary copies of partitions. This will improve: * snapshot creation time. * disk usage during snapshot creation. * space amount to store snapshot. This will lead to the following disadvantages during restore process: * rebalance (can be omitted with custom file copy script and cellular affinity) * index.bin rebuild - performance improved in IGNITE-18271 was: Ignite must provide an option to snapshot only primary copies of partitions. This will improve: * snapshot creation time. * disk usage during snapshot creation. * space amount to store snapshot. This will lead to the following disadvantages during restore process: * rebalance (can be omitted with custom file copy script and cellular affinity) * index.bin rebuild - performance improved in IGNITE-18271 > Snapshot only primary copies of partitions > -- > > Key: IGNITE-18999 > URL: https://issues.apache.org/jira/browse/IGNITE-18999 > Project: Ignite > Issue Type: Improvement >Reporter: Nikolay Izhikov >Assignee: Nikolay Izhikov >Priority: Major > Labels: iep-43, ise > > Ignite must provide an option to snapshot only primary copies of partitions. > This will improve: > * snapshot creation time. > * disk usage during snapshot creation. > * space amount to store snapshot. > This will lead to the following disadvantages during restore process: > * rebalance (can be omitted with custom file copy script and cellular > affinity) > * index.bin rebuild - performance improved in IGNITE-18271 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18999) Snapshot only primary copies of partitions
[ https://issues.apache.org/jira/browse/IGNITE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolay Izhikov updated IGNITE-18999: - Labels: iep-43 (was: ) > Snapshot only primary copies of partitions > -- > > Key: IGNITE-18999 > URL: https://issues.apache.org/jira/browse/IGNITE-18999 > Project: Ignite > Issue Type: Improvement >Reporter: Nikolay Izhikov >Priority: Major > Labels: iep-43 > > Ignite must provide an option to snapshot only primary copies of partitions. > This will improve: > * snapshot creation time. > * disk usage during snapshot creation. > * space amount to store snapshot. > This will lead to the following disadvantages during restore process: > * rebalance (can be omitted with custom file copy script and cellular > affinity) > * index.bin rebuild - performance improved in IGNITE-18271 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18999) Snapshot only primary copies of partitions
[ https://issues.apache.org/jira/browse/IGNITE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolay Izhikov updated IGNITE-18999: - Labels: iep-43 ise (was: iep-43) > Snapshot only primary copies of partitions > -- > > Key: IGNITE-18999 > URL: https://issues.apache.org/jira/browse/IGNITE-18999 > Project: Ignite > Issue Type: Improvement >Reporter: Nikolay Izhikov >Priority: Major > Labels: iep-43, ise > > Ignite must provide an option to snapshot only primary copies of partitions. > This will improve: > * snapshot creation time. > * disk usage during snapshot creation. > * space amount to store snapshot. > This will lead to the following disadvantages during restore process: > * rebalance (can be omitted with custom file copy script and cellular > affinity) > * index.bin rebuild - performance improved in IGNITE-18271 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-18999) Snapshot only primary copies of partitions
Nikolay Izhikov created IGNITE-18999: Summary: Snapshot only primary copies of partitions Key: IGNITE-18999 URL: https://issues.apache.org/jira/browse/IGNITE-18999 Project: Ignite Issue Type: Improvement Reporter: Nikolay Izhikov Ignite must provide an option to snapshot only primary copies of partitions. This will improve: * snapshot creation time. * disk usage during snapshot creation. * space amount to store snapshot. This will lead to the following disadvantages during restore process: * rebalance (can be omitted with custom file copy script and cellular affinity) * index.bin rebuild - performance improved in IGNITE-18271 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-18999) Snapshot only primary copies of partitions
[ https://issues.apache.org/jira/browse/IGNITE-18999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolay Izhikov reassigned IGNITE-18999: Assignee: Nikolay Izhikov > Snapshot only primary copies of partitions > -- > > Key: IGNITE-18999 > URL: https://issues.apache.org/jira/browse/IGNITE-18999 > Project: Ignite > Issue Type: Improvement >Reporter: Nikolay Izhikov >Assignee: Nikolay Izhikov >Priority: Major > Labels: iep-43, ise > > Ignite must provide an option to snapshot only primary copies of partitions. > This will improve: > * snapshot creation time. > * disk usage during snapshot creation. > * space amount to store snapshot. > This will lead to the following disadvantages during restore process: > * rebalance (can be omitted with custom file copy script and cellular > affinity) > * index.bin rebuild - performance improved in IGNITE-18271 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-12970) Cluster snapshot must support encryption caches
[ https://issues.apache.org/jira/browse/IGNITE-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolay Izhikov reassigned IGNITE-12970: Assignee: Nikolay Izhikov > Cluster snapshot must support encryption caches > --- > > Key: IGNITE-12970 > URL: https://issues.apache.org/jira/browse/IGNITE-12970 > Project: Ignite > Issue Type: Improvement >Reporter: Maxim Muzafarov >Assignee: Nikolay Izhikov >Priority: Major > Labels: iep-43 > Time Spent: 1h > Remaining Estimate: 0h > > Currently, the cluster snapshot operation does not support including encrypted > caches in the snapshot. The {{EncryptionFileIO}} must be added for copying > cache partition files and their deltas (see IEP-43 for details about copying > cache partition files). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-12970) Cluster snapshot must support encryption caches
[ https://issues.apache.org/jira/browse/IGNITE-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolay Izhikov reassigned IGNITE-12970: Assignee: (was: Nikolay Izhikov) > Cluster snapshot must support encryption caches > --- > > Key: IGNITE-12970 > URL: https://issues.apache.org/jira/browse/IGNITE-12970 > Project: Ignite > Issue Type: Improvement >Reporter: Maxim Muzafarov >Priority: Major > Labels: iep-43 > Time Spent: 1h > Remaining Estimate: 0h > > Currently, the cluster snapshot operation does not support including encrypted > caches in the snapshot. The {{EncryptionFileIO}} must be added for copying > cache partition files and their deltas (see IEP-43 for details about copying > cache partition files). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-18535) Define new classes for versioned tables/indexes schemas
[ https://issues.apache.org/jira/browse/IGNITE-18535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrey Mashenkov reassigned IGNITE-18535: - Assignee: Andrey Mashenkov > Define new classes for versioned tables/indexes schemas > --- > > Key: IGNITE-18535 > URL: https://issues.apache.org/jira/browse/IGNITE-18535 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Bessonov >Assignee: Andrey Mashenkov >Priority: Major > Labels: ignite-3 > > The current approach to schema management is faulty and can't support indexes. > On top of that, it doesn't allow us to truly have multi-versioned historical > data. Once the table is removed, it's removed for good, meaning that > "current" RO transactions will not be able to finish. This is not acceptable. > h3. Schema definitions > What we need to have is the following: > {code:java} > SchemaDefinitions = map {version -> SchemaDefinition} > SchemaDefinition = {timestamp, set {TableDefinition}, set{IndexDefinition}} > TableDefinition = {name, id, array[ColumnDefinition], ...} > IndexDefinition = {name, id, tableId, state, array[IdxColumnDefinition], > ...}{code} > Schema must be versioned, that's the first point. Well, it's already > versioned in "main", here I mean the global versioning to tie everything to > transactions and management of SQL indexes. > Each definition corresponds to a time period during which it represents the "actual" > state of things. It must be used for RO queries, for example. RW transactions > always use the LATEST schema, obviously. > Now, the meaning of the defined values: > * version - a simple auto-incrementing integer value; > * "timestamp" - the schema is considered to be valid from this timestamp > until the timestamp of the "next" version (or "infinity" if the next version > doesn't yet exist); > * most table and index properties are self-explanatory; > * index state - RO or RW. We should differentiate the indexes that are not > yet built from the indexes that are fully available. 
> Currently, it's not too clear where to store this structure. The problem lies > in the realm of metadata synchronization, which is not yet designed. But the > thing is that all nodes must eventually have an up-to-date state, and every > data/index update must be consistent with the version that belongs to the > current operation's timestamp. > There are two likely candidates - Meta-Storage or Configuration. We'll figure > it out later. > h3. Serialization / storage > It would be convenient to only store the oldest version + the collection of > diffs. Every node would unpack that locally, but we would save a lot on > storage space in the meta-storage when the user has a lot of tables/indexes. > This approach would also be beneficial for another reason: we need to know > what's changed between versions. It may be hard to calculate if all that we > have are the definitions themselves. > h3. General thoughts > This may be a good place to start using integer tableId and indexId more > often. UUIDs are too much. What's good is that "serializability" of schemas > gives us an easy way of generating integer ids, just like it's done right now > with configuration. -- This message was sent by Atlassian Jira (v8.20.10#820010)
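The pseudo-definitions in the description above could be modelled with plain Java records. The sketch below is illustrative only: field sets are abbreviated, names follow the description, and nothing here is actual Ignite code.

```java
import java.util.List;
import java.util.Map;

/** Minimal sketch of the versioned-schema structures described in IGNITE-18535. */
class VersionedSchemaSketch {
    /** RO: index not yet built (read-only visibility); RW: fully available. */
    enum IndexState { RO, RW }

    record ColumnDefinition(String name, String type, boolean nullable) {}
    record TableDefinition(String name, int id, List<ColumnDefinition> columns) {}
    record IndexDefinition(String name, int id, int tableId, IndexState state, List<String> columns) {}
    record SchemaDefinition(long timestamp, List<TableDefinition> tables, List<IndexDefinition> indexes) {}

    /** SchemaDefinitions = map {version -> SchemaDefinition}; RW transactions use the latest version. */
    static SchemaDefinition latest(Map<Integer, SchemaDefinition> schemas) {
        int maxVersion = schemas.keySet().stream().mapToInt(Integer::intValue).max().orElseThrow();
        return schemas.get(maxVersion);
    }

    /**
     * An RO query at a timestamp uses the definition valid from its timestamp until
     * the next version's timestamp (or "infinity" if the next version doesn't exist).
     */
    static SchemaDefinition at(Map<Integer, SchemaDefinition> schemas, long ts) {
        return schemas.entrySet().stream()
                .filter(e -> e.getValue().timestamp() <= ts)
                .max(Map.Entry.comparingByKey())
                .map(Map.Entry::getValue)
                .orElseThrow();
    }
}
```

The diff-based serialization the description proposes would sit on top of this: store version 1 in full plus a per-version diff, and rebuild the map locally on each node.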
[jira] [Created] (IGNITE-18998) Handle PrimaryReplica move during the rebalance correctly
Kirill Gusakov created IGNITE-18998: --- Summary: Handle PrimaryReplica move during the rebalance correctly Key: IGNITE-18998 URL: https://issues.apache.org/jira/browse/IGNITE-18998 Project: Ignite Issue Type: Task Reporter: Kirill Gusakov -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-18997) Remove redundant future transformations from PartitionReplicaListener
Aleksandr Polovtcev created IGNITE-18997: Summary: Remove redundant future transformations from PartitionReplicaListener Key: IGNITE-18997 URL: https://issues.apache.org/jira/browse/IGNITE-18997 Project: Ignite Issue Type: Improvement Reporter: Aleksandr Polovtcev Assignee: Aleksandr Polovtcev PartitionReplicaListener has the following pattern in its code: {code:java} if (request instanceof ReplicaSafeTimeSyncRequest) { return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) request) .thenApply(Function.identity()); } {code} This {{thenApply(Function.identity())}} is only needed to make the code compile. This makes little sense, as we can simply use generic wildcards and remove these transformations. -- This message was sent by Atlassian Jira (v8.20.10#820010)
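The compile-only role of {{thenApply(Function.identity())}} can be shown outside of Ignite. In the sketch below, {{ReplicaResponse}} and {{TimestampAware}} are illustrative stand-ins, not the actual Ignite types:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

class WildcardSketch {
    interface ReplicaResponse {}

    record TimestampAware(long ts) implements ReplicaResponse {}

    // Without wildcards: CompletableFuture<TimestampAware> is NOT a subtype of
    // CompletableFuture<ReplicaResponse> (generics are invariant), so an identity
    // thenApply is needed purely to change the generic type of the future.
    static CompletableFuture<ReplicaResponse> withIdentity() {
        CompletableFuture<TimestampAware> f = CompletableFuture.completedFuture(new TimestampAware(1L));
        return f.thenApply(Function.identity());
    }

    // With a wildcard return type the upcast is free: no extra stage,
    // no allocation, the same future object is returned as-is.
    static CompletableFuture<? extends ReplicaResponse> withWildcard() {
        return CompletableFuture.completedFuture(new TimestampAware(1L));
    }
}
```

The wildcard version removes one completion stage per call, which is exactly the redundancy the issue describes.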
[jira] [Updated] (IGNITE-18997) Remove redundant future transformations from PartitionReplicaListener
[ https://issues.apache.org/jira/browse/IGNITE-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtcev updated IGNITE-18997: - Description: PartitionReplicaListener has a following pattern in its code: {code:java} if (request instanceof ReplicaSafeTimeSyncRequest) { return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) request) .thenApply(Function.identity()); {code} This {{thenApply(Function.identity())}} is only needed for code being able to compile. This makes little sense, as we can simply use generic wildcards and remove these transfomations. was: PartitionReplicaListener has a following pattern in its code: {code:java} if (request instanceof ReplicaSafeTimeSyncRequest) { return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) request) .thenApply(Function.identity()); {code} This {{thenApply(Function.identity())}} is only needed for code being able to compile. This makes little sense, as we can simply use generic wildcards and remove these transfomations. > Remove redundant future transformations from PartitionReplicaListener > - > > Key: IGNITE-18997 > URL: https://issues.apache.org/jira/browse/IGNITE-18997 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksandr Polovtcev >Assignee: Aleksandr Polovtcev >Priority: Trivial > Labels: ignite-3 > > PartitionReplicaListener has a following pattern in its code: > {code:java} > if (request instanceof ReplicaSafeTimeSyncRequest) { > return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) > request) > .thenApply(Function.identity()); > {code} > This {{thenApply(Function.identity())}} is only needed for code being able to > compile. This makes little sense, as we can simply use generic wildcards and > remove these transfomations. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18997) Remove redundant future transformations from PartitionReplicaListener
[ https://issues.apache.org/jira/browse/IGNITE-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtcev updated IGNITE-18997: - Description: PartitionReplicaListener has a following pattern in its code: {code:java} if (request instanceof ReplicaSafeTimeSyncRequest) { return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) request) .thenApply(Function.identity()); } {code} This {{thenApply(Function.identity())}} is only needed for code being able to compile. This makes little sense, as we can simply use generic wildcards and remove these transfomations. was: PartitionReplicaListener has a following pattern in its code: {code:java} if (request instanceof ReplicaSafeTimeSyncRequest) { return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) request) .thenApply(Function.identity()); {code} This {{thenApply(Function.identity())}} is only needed for code being able to compile. This makes little sense, as we can simply use generic wildcards and remove these transfomations. > Remove redundant future transformations from PartitionReplicaListener > - > > Key: IGNITE-18997 > URL: https://issues.apache.org/jira/browse/IGNITE-18997 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksandr Polovtcev >Assignee: Aleksandr Polovtcev >Priority: Trivial > Labels: ignite-3 > > PartitionReplicaListener has a following pattern in its code: > {code:java} > if (request instanceof ReplicaSafeTimeSyncRequest) { > return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) > request) > .thenApply(Function.identity()); > } > {code} > This {{thenApply(Function.identity())}} is only needed for code being able to > compile. This makes little sense, as we can simply use generic wildcards and > remove these transfomations. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18997) Remove redundant future transformations from PartitionReplicaListener
[ https://issues.apache.org/jira/browse/IGNITE-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtcev updated IGNITE-18997: - Fix Version/s: 3.0.0-beta2 > Remove redundant future transformations from PartitionReplicaListener > - > > Key: IGNITE-18997 > URL: https://issues.apache.org/jira/browse/IGNITE-18997 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksandr Polovtcev >Assignee: Aleksandr Polovtcev >Priority: Trivial > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > PartitionReplicaListener has a following pattern in its code: > {code:java} > if (request instanceof ReplicaSafeTimeSyncRequest) { > return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) > request) > .thenApply(Function.identity()); > } > {code} > This {{thenApply(Function.identity())}} is only needed for code being able to > compile. This makes little sense, as we can simply use generic wildcards and > remove these transfomations. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-18997) Remove redundant future transformations from PartitionReplicaListener
[ https://issues.apache.org/jira/browse/IGNITE-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aleksandr Polovtcev updated IGNITE-18997:
-----------------------------------------
    Description:
PartitionReplicaListener has the following pattern in its code:
{code:java}
if (request instanceof ReplicaSafeTimeSyncRequest) {
    return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) request)
            .thenApply(Function.identity());
}
{code}
This {{thenApply(Function.identity())}} is only needed for the code to compile. This makes little sense, as we can simply use generic wildcards and remove these transformations.

    was:
PartitionReplicaListener has the following pattern in its code:
{code:java}
if (request instanceof ReplicaSafeTimeSyncRequest) {
    return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) request)
            .thenApply(Function.identity());
}
{code}
This {{thenApply(Function.identity())}} is only needed for the code to compile. This makes little sense, as we can simply use generic wildcards and remove these transformations.

> Remove redundant future transformations from PartitionReplicaListener
> ----------------------------------------------------------------------
>
>          Key: IGNITE-18997
>          URL: https://issues.apache.org/jira/browse/IGNITE-18997
>      Project: Ignite
>   Issue Type: Improvement
>     Reporter: Aleksandr Polovtcev
>     Assignee: Aleksandr Polovtcev
>     Priority: Trivial
>       Labels: ignite-3
>      Fix For: 3.0.0-beta2
>
> PartitionReplicaListener has the following pattern in its code:
> {code:java}
> if (request instanceof ReplicaSafeTimeSyncRequest) {
>     return processReplicaSafeTimeSyncRequest((ReplicaSafeTimeSyncRequest) request)
>             .thenApply(Function.identity());
> }
> {code}
> This {{thenApply(Function.identity())}} is only needed for the code to compile. This makes little sense, as we can simply use generic wildcards and remove these transformations.
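[Editor's note] The change proposed in IGNITE-18997 can be sketched as follows. The class and method names below mirror the issue text but are simplified stand-ins, not Ignite's actual types: a concrete generic return type forces the no-op {{thenApply(Function.identity())}}, while a wildcard return type makes it unnecessary.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class WildcardSketch {
    interface ReplicaRequest {}
    static class ReplicaSafeTimeSyncRequest implements ReplicaRequest {}

    // Before: the concrete return type CompletableFuture<Object> does not
    // accept a CompletableFuture<Void>, so each branch needs the no-op
    // thenApply(Function.identity()) just to re-type the future.
    static CompletableFuture<Object> invokeBefore(ReplicaRequest request) {
        if (request instanceof ReplicaSafeTimeSyncRequest) {
            return processSafeTimeSync((ReplicaSafeTimeSyncRequest) request)
                    .thenApply(Function.identity());
        }
        return CompletableFuture.completedFuture(null);
    }

    // After: a wildcard return type accepts any CompletableFuture directly,
    // so the identity transformation can simply be removed.
    static CompletableFuture<?> invokeAfter(ReplicaRequest request) {
        if (request instanceof ReplicaSafeTimeSyncRequest) {
            return processSafeTimeSync((ReplicaSafeTimeSyncRequest) request);
        }
        return CompletableFuture.completedFuture(null);
    }

    static CompletableFuture<Void> processSafeTimeSync(ReplicaSafeTimeSyncRequest r) {
        return CompletableFuture.completedFuture(null);
    }
}
```

Both variants behave identically at runtime; the wildcard version just drops the compile-time-only transformation, which is why the issue is marked Trivial.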
[jira] [Commented] (IGNITE-17321) Document which api can work with partition awareness
[ https://issues.apache.org/jira/browse/IGNITE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698813#comment-17698813 ]

Julia Bakulina commented on IGNITE-17321:
-----------------------------------------
[~timonin.maksim] thank you for the review!

> Document which api can work with partition awareness
> -----------------------------------------------------
>
>          Key: IGNITE-17321
>          URL: https://issues.apache.org/jira/browse/IGNITE-17321
>      Project: Ignite
>   Issue Type: Improvement
>   Components: thin client
>     Reporter: Luchnikov Alexander
>     Assignee: Julia Bakulina
>     Priority: Minor
>       Labels: documentation, ise
>      Fix For: 2.15
>
>   Time Spent: 2h 50m
>   Remaining Estimate: 0h
>
> Neither the javadoc of org.apache.ignite.configuration.ClientConfiguration#partitionAwarenessEnabled nor the functionality description at https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#partition-awareness describes which APIs this functionality works with, or in which cases. For example, will it work with getAll, or inside a transaction?
> Describe in the documentation and in the javadoc which cases it works in and which APIs it works with.
[jira] [Updated] (IGNITE-17321) Document which api can work with partition awareness
[ https://issues.apache.org/jira/browse/IGNITE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Maksim Timonin updated IGNITE-17321:
------------------------------------
    Fix Version/s: 2.15

> Document which api can work with partition awareness
> -----------------------------------------------------
>
>          Key: IGNITE-17321
>          URL: https://issues.apache.org/jira/browse/IGNITE-17321
>      Project: Ignite
>   Issue Type: Improvement
>   Components: thin client
>     Reporter: Luchnikov Alexander
>     Assignee: Julia Bakulina
>     Priority: Minor
>       Labels: documentation, ise
>      Fix For: 2.15
>
>   Time Spent: 2h 50m
>   Remaining Estimate: 0h
>
> Neither the javadoc of org.apache.ignite.configuration.ClientConfiguration#partitionAwarenessEnabled nor the functionality description at https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#partition-awareness describes which APIs this functionality works with, or in which cases. For example, will it work with getAll, or inside a transaction?
> Describe in the documentation and in the javadoc which cases it works in and which APIs it works with.
[jira] [Updated] (IGNITE-17321) Document which api can work with partition awareness
[ https://issues.apache.org/jira/browse/IGNITE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Maksim Timonin updated IGNITE-17321:
------------------------------------
    Ignite Flags: (was: Docs Required,Release Notes Required)

> Document which api can work with partition awareness
> -----------------------------------------------------
>
>          Key: IGNITE-17321
>          URL: https://issues.apache.org/jira/browse/IGNITE-17321
>      Project: Ignite
>   Issue Type: Improvement
>   Components: thin client
>     Reporter: Luchnikov Alexander
>     Assignee: Julia Bakulina
>     Priority: Minor
>       Labels: documentation, ise
>      Fix For: 2.15
>
>   Time Spent: 2h 50m
>   Remaining Estimate: 0h
>
> Neither the javadoc of org.apache.ignite.configuration.ClientConfiguration#partitionAwarenessEnabled nor the functionality description at https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#partition-awareness describes which APIs this functionality works with, or in which cases. For example, will it work with getAll, or inside a transaction?
> Describe in the documentation and in the javadoc which cases it works in and which APIs it works with.
[jira] [Commented] (IGNITE-17321) Document which api can work with partition awareness
[ https://issues.apache.org/jira/browse/IGNITE-17321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698799#comment-17698799 ]

Ignite TC Bot commented on IGNITE-17321:
----------------------------------------
{panel:title=Branch: [pull/10453/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10453/head] Base: [master] : No new tests found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *--> Run :: All* Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7081265&buildTypeId=IgniteTests24Java8_RunAll]

> Document which api can work with partition awareness
> -----------------------------------------------------
>
>          Key: IGNITE-17321
>          URL: https://issues.apache.org/jira/browse/IGNITE-17321
>      Project: Ignite
>   Issue Type: Improvement
>   Components: thin client
>     Reporter: Luchnikov Alexander
>     Assignee: Julia Bakulina
>     Priority: Minor
>       Labels: documentation, ise
>   Time Spent: 2h 40m
>   Remaining Estimate: 0h
>
> Neither the javadoc of org.apache.ignite.configuration.ClientConfiguration#partitionAwarenessEnabled nor the functionality description at https://ignite.apache.org/docs/latest/thin-clients/java-thin-client#partition-awareness describes which APIs this functionality works with, or in which cases. For example, will it work with getAll, or inside a transaction?
> Describe in the documentation and in the javadoc which cases it works in and which APIs it works with.
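[Editor's note] For readers unfamiliar with the feature being documented in IGNITE-17321: partition awareness lets a thin client send a key-based request directly to the node that owns the key's partition instead of routing everything through one default connection. The sketch below is purely conceptual (Ignite uses its own affinity function and topology metadata, not this code):

```java
import java.util.List;

public class PartitionAwarenessSketch {
    // Map a key to a partition. Ignite's real affinity function differs;
    // a simple hash is used here purely for illustration.
    static int partitionFor(Object key, int partitions) {
        return Math.abs(key.hashCode() % partitions);
    }

    public static void main(String[] args) {
        int partitions = 4;
        // Hypothetical partition -> owning-node assignment, as a
        // partition-aware client would learn it from the cluster.
        List<String> assignment = List.of("node1", "node2", "node1", "node2");

        Object key = 42;
        // A partition-aware client sends get(key)/put(key, ...) straight
        // to targetNode, skipping an extra network hop.
        String targetNode = assignment.get(partitionFor(key, partitions));
        System.out.println("key " + key + " -> " + targetNode);
    }
}
```

The documentation question in the issue is exactly about when this direct routing applies: single-key operations are the obvious case, while multi-key calls such as getAll and operations inside a transaction are the cases the reporter asks to have spelled out.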
[jira] [Commented] (IGNITE-18993) ODBC: Regression. Missed handling of single quotes
[ https://issues.apache.org/jira/browse/IGNITE-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698796#comment-17698796 ]

Ivan Daschinsky commented on IGNITE-18993:
------------------------------------------
[~isapego] Could you review this patch, please?

> ODBC: Regression. Missed handling of single quotes
> --------------------------------------------------
>
>          Key: IGNITE-18993
>          URL: https://issues.apache.org/jira/browse/IGNITE-18993
>      Project: Ignite
>   Issue Type: Bug
>   Components: platforms
>   Affects Versions: 2.8, 2.9, 2.10, 2.11, 2.12, 2.13, 2.14
>     Reporter: Ivan Daschinsky
>     Assignee: Ivan Daschinsky
>     Priority: Major
>       Labels: c++, odbc
>   Time Spent: 10m
>   Remaining Estimate: 0h
>
> When {{SQLTables}} is called with the 'table type' param {{'TABLES'}} (with single quotes), it returns an empty table list.
> It is quite a common way of calling this procedure (used, e.g., by INFORMATICA).
[jira] [Commented] (IGNITE-18993) ODBC: Regression. Missed handling of single quotes
[ https://issues.apache.org/jira/browse/IGNITE-18993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698794#comment-17698794 ]

Ignite TC Bot commented on IGNITE-18993:
----------------------------------------
{panel:title=Branch: [pull/10593/head] Base: [master] : Possible Blockers (1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Platform C++ CMake (Win x64 / Debug){color} [[tests 1|https://ci2.ignite.apache.org/viewLog.html?buildId=7086523]]
* IgniteThinClientTest: ContinuousQueryTestSuite: TestLongEventsProcessingDisconnect - History for base branch is absent.
{panel}
{panel:title=Branch: [pull/10593/head] Base: [master] : New Tests (1063)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#00008b}Platform C++ CMake (Win x64 / Debug){color} [[tests 1063|https://ci2.ignite.apache.org/viewLog.html?buildId=7086523]]
* {color:#013220}IgniteThinClientTest: CacheClientTestSuite: CacheClientGetAndPutIfAbsentComplexKey - PASSED{color}
* {color:#013220}IgniteThinClientTest: ComputeClientTestSuite: EchoTaskNull - PASSED{color}
* {color:#013220}IgniteThinClientTest: ComputeClientTestSuite: EchoTaskPrimitives - PASSED{color}
* {color:#013220}IgniteThinClientTest: ComputeClientTestSuite: EchoTaskObject - PASSED{color}
* {color:#013220}IgniteThinClientTest: ComputeClientTestSuite: EchoTaskGuid - PASSED{color}
* {color:#013220}IgniteThinClientTest: ComputeClientTestSuite: TaskWithTimeout - PASSED{color}
* {color:#013220}IgniteThinClientTest: ComputeClientTestSuite: TaskWithNoFailover - PASSED{color}
* {color:#013220}IgniteThinClientTest: ComputeClientTestSuite: TaskWithNoResultCache - PASSED{color}
* {color:#013220}IgniteThinClientTest: IgniteClientTestSuite: IgniteClientConnection - PASSED{color}
* {color:#013220}IgniteThinClientTest: IgniteClientTestSuite: IgniteClientConnectionFailover - PASSED{color}
* {color:#013220}IgniteThinClientTest: IgniteClientTestSuite: IgniteClientConnectionLimit - PASSED{color}
... and 1052 new tests
{panel}
[TeamCity *--> Run :: CPP* Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7086034&buildTypeId=IgniteTests24Java8_RunCpp]

> ODBC: Regression. Missed handling of single quotes
> --------------------------------------------------
>
>          Key: IGNITE-18993
>          URL: https://issues.apache.org/jira/browse/IGNITE-18993
>      Project: Ignite
>   Issue Type: Bug
>   Components: platforms
>   Affects Versions: 2.8, 2.9, 2.10, 2.11, 2.12, 2.13, 2.14
>     Reporter: Ivan Daschinsky
>     Assignee: Ivan Daschinsky
>     Priority: Major
>       Labels: c++, odbc
>   Time Spent: 10m
>   Remaining Estimate: 0h
>
> When {{SQLTables}} is called with the 'table type' param {{'TABLES'}} (with single quotes), it returns an empty table list.
> It is quite a common way of calling this procedure (used, e.g., by INFORMATICA).
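[Editor's note] The regression in IGNITE-18993 concerns parsing the table-type list passed to {{SQLTables}}: some callers (e.g. INFORMATICA) wrap each type in single quotes, as in {{'TABLE','VIEW'}}, and the driver must strip the quotes before matching. The fix itself is in Ignite's C++ ODBC driver; the snippet below is only an illustrative sketch of the required behavior, written in Java to match the rest of this digest:

```java
import java.util.ArrayList;
import java.util.List;

public class TableTypeParser {
    // Parse an ODBC table-type list such as "'TABLE','VIEW'" or "TABLE,VIEW"
    // into bare type names, tolerating optional surrounding single quotes.
    static List<String> parseTableTypes(String raw) {
        List<String> types = new ArrayList<>();
        for (String token : raw.split(",")) {
            String t = token.trim();
            // Strip surrounding single quotes if present; this is the case
            // the regression missed, which made the match fail and return
            // an empty table list.
            if (t.length() >= 2 && t.startsWith("'") && t.endsWith("'")) {
                t = t.substring(1, t.length() - 1);
            }
            types.add(t);
        }
        return types;
    }

    public static void main(String[] args) {
        System.out.println(parseTableTypes("'TABLE','VIEW'")); // [TABLE, VIEW]
        System.out.println(parseTableTypes("TABLE"));          // [TABLE]
    }
}
```

With quote handling in place, both the quoted and unquoted spellings resolve to the same type names, so quoted requests no longer yield an empty result.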