[jira] [Commented] (IGNITE-21822) Sql. Allow to set time zone on per statement basis

2024-04-08 Thread Pavel Pereslegin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835177#comment-17835177
 ] 

Pavel Pereslegin commented on IGNITE-21822:
---

Need to ensure that a {{TIMESTAMP WITH LOCAL TIME ZONE}} literal used via the 
public API stores its value taking the client's time zone into account. Such a 
test already exists ({{ItCommonApiTest#checkTimestampOperations}}), but it must 
be executed in a time zone other than GMT/UTC, and it seems that TeamCity runs 
in GMT.
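
A minimal sketch (illustrative only, not the existing test code) of how the 
default time zone could be pinned to a non-UTC offset around such a check:
{code:java}
// Illustrative sketch: force a non-UTC default time zone so the
// TIMESTAMP WITH LOCAL TIME ZONE conversion is actually exercised,
// even on agents that run in GMT.
import java.util.TimeZone;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class TimestampLtzTimeZoneTest {
    private TimeZone originalTz;

    @BeforeEach
    void setNonUtcTimeZone() {
        originalTz = TimeZone.getDefault();
        // Any offset different from UTC works; +05:00 is arbitrary.
        TimeZone.setDefault(TimeZone.getTimeZone("GMT+05:00"));
    }

    @AfterEach
    void restoreTimeZone() {
        TimeZone.setDefault(originalTz);
    }

    @Test
    void timestampWithLocalTimeZoneLiteralUsesClientZone() {
        // ... run the same assertions as ItCommonApiTest#checkTimestampOperations ...
    }
}
{code}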

> Sql. Allow to set time zone on per statement basis
> --
>
> Key: IGNITE-21822
> URL: https://issues.apache.org/jira/browse/IGNITE-21822
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> In IGNITE-21551, support for time zone was added to the sql engine. However, 
> only Session was updated, while Statement was left intact. Let's eliminate 
> such a discrepancy by providing an ability to set time zone on per statement 
> basis. 
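
For illustration, a hypothetical shape of such a per-statement setter; neither 
the builder entry point nor the {{timeZoneId(...)}} method below is the actual 
public API, they only sketch the proposed override:
{code:java}
// Hypothetical sketch only: timeZoneId(...) is the proposed per-statement
// override, the rest loosely mirrors the existing statement builder idea.
Statement stmt = statementBuilder()
        .query("SELECT CURRENT_TIMESTAMP")
        .timeZoneId(ZoneId.of("Europe/Paris"))
        .build();
{code}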



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21892) ItPlacementDriverReplicaSideTest testNotificationToPlacementDriverAboutChangeLeader is flaky

2024-04-08 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21892:
-
Epic Link: IGNITE-21389

> ItPlacementDriverReplicaSideTest 
> testNotificationToPlacementDriverAboutChangeLeader is flaky
> 
>
> Key: IGNITE-21892
> URL: https://issues.apache.org/jira/browse/IGNITE-21892
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Zhuravkov
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This test is flaky. Build error:
> {code}
>   java.lang.AssertionError: There are replicas alive [replicas=[group_1]]
> at 
> org.apache.ignite.internal.replicator.ReplicaManager.stop(ReplicaManager.java:658)
> at 
> org.apache.ignite.internal.replicator.ItPlacementDriverReplicaSideTest.lambda$beforeTest$3(ItPlacementDriverReplicaSideTest.java:200)
> at 
> org.apache.ignite.internal.util.IgniteUtils.lambda$closeAll$0(IgniteUtils.java:559)
> at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> at 
> java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
> at 
> java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
> at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
> at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
> at 
> org.apache.ignite.internal.util.IgniteUtils.closeAll(IgniteUtils.java:557)
> at 
> org.apache.ignite.internal.util.IgniteUtils.closeAll(IgniteUtils.java:580)
> at 
> org.apache.ignite.internal.replicator.ItPlacementDriverReplicaSideTest.afterTest(ItPlacementDriverReplicaSideTest.java:214)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
> {code}
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_ModuleReplicator/7987165?expandBuildDeploymentsSection=false&hideTestsFromDependencies=false&expandBuildTestsSection=true&hideProblemsFromDependencies=false&expandBuildProblemsSection=true&expandBuildChangesSection=true&showLog=7987165_489_86.470.489&logFilter=debug&logView=flowAware
> I was not able to reproduce the same error locally; I got an error on the 
> following line instead:
> {code}
> assertTrue(waitForCondition(() -> nodesToReceivedDeclineMsg.size() == 
> placementDriverNodeNames.size(), 10_000));
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21892) ItPlacementDriverReplicaSideTest testNotificationToPlacementDriverAboutChangeLeader is flaky

2024-04-08 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21892:
-
Priority: Major  (was: Minor)

> ItPlacementDriverReplicaSideTest 
> testNotificationToPlacementDriverAboutChangeLeader is flaky
> 
>
> Key: IGNITE-21892
> URL: https://issues.apache.org/jira/browse/IGNITE-21892
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Zhuravkov
>Assignee: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This test is flaky. Build error:
> {code}
>   java.lang.AssertionError: There are replicas alive [replicas=[group_1]]
> at 
> org.apache.ignite.internal.replicator.ReplicaManager.stop(ReplicaManager.java:658)
> at 
> org.apache.ignite.internal.replicator.ItPlacementDriverReplicaSideTest.lambda$beforeTest$3(ItPlacementDriverReplicaSideTest.java:200)
> at 
> org.apache.ignite.internal.util.IgniteUtils.lambda$closeAll$0(IgniteUtils.java:559)
> at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> at 
> java.base/java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177)
> at 
> java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1655)
> at 
> java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484)
> at 
> java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474)
> at 
> java.base/java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> at 
> java.base/java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> at 
> java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> at 
> java.base/java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497)
> at 
> org.apache.ignite.internal.util.IgniteUtils.closeAll(IgniteUtils.java:557)
> at 
> org.apache.ignite.internal.util.IgniteUtils.closeAll(IgniteUtils.java:580)
> at 
> org.apache.ignite.internal.replicator.ItPlacementDriverReplicaSideTest.afterTest(ItPlacementDriverReplicaSideTest.java:214)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
> at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
> {code}
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_IntegrationTests_ModuleReplicator/7987165?expandBuildDeploymentsSection=false&hideTestsFromDependencies=false&expandBuildTestsSection=true&hideProblemsFromDependencies=false&expandBuildProblemsSection=true&expandBuildChangesSection=true&showLog=7987165_489_86.470.489&logFilter=debug&logView=flowAware
> I was not able to reproduce the same error locally; I got an error on the 
> following line instead:
> {code}
> assertTrue(waitForCondition(() -> nodesToReceivedDeclineMsg.size() == 
> placementDriverNodeNames.size(), 10_000));
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-16979) Add support for load testing via Gatling to ducktests

2024-04-08 Thread Sergey Korotkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Korotkov updated IGNITE-16979:
-
Description: 
Add the ability to perform load testing via the ignite-gatling extension in the 
ducktests framework


  was:
Add ability to perform load testing via the Gatling tool in ducktests framework



> Add support for load testing via Gatling to ducktests
> -
>
> Key: IGNITE-16979
> URL: https://issues.apache.org/jira/browse/IGNITE-16979
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Korotkov
>Assignee: Sergey Korotkov
>Priority: Minor
>  Labels: ducktests
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Add the ability to perform load testing via the ignite-gatling extension in 
> the ducktests framework



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-16979) Add support for load testing via Gatling to ducktests

2024-04-08 Thread Sergey Korotkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Korotkov updated IGNITE-16979:
-
Priority: Minor  (was: Major)

> Add support for load testing via Gatling to ducktests
> -
>
> Key: IGNITE-16979
> URL: https://issues.apache.org/jira/browse/IGNITE-16979
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Korotkov
>Assignee: Sergey Korotkov
>Priority: Minor
>  Labels: ducktests
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Add ability to perform load testing via the Gatling tool in ducktests 
> framework



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (IGNITE-16979) Add support for load testing via Gatling to ducktests

2024-04-08 Thread Sergey Korotkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Korotkov reopened IGNITE-16979:
--

> Add support for load testing via Gatling to ducktests
> -
>
> Key: IGNITE-16979
> URL: https://issues.apache.org/jira/browse/IGNITE-16979
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Korotkov
>Assignee: Sergey Korotkov
>Priority: Major
>  Labels: ducktests
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Add ability to perform load testing via the Gatling tool in ducktests 
> framework



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22007) Optimise Read-only index scan for RocksDB

2024-04-08 Thread Philipp Shergalis (Jira)
Philipp Shergalis created IGNITE-22007:
--

 Summary: Optimise Read-only index scan for RocksDB 
 Key: IGNITE-22007
 URL: https://issues.apache.org/jira/browse/IGNITE-22007
 Project: Ignite
  Issue Type: Improvement
  Components: persistence
Reporter: Philipp Shergalis


In IGNITE-21987 we added a readOnlyScan method to SortedIndexStorage, but the 
RocksDB storage currently uses the default method, which relies on the 
read-write implementation.
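
A rough sketch, under the assumption that a snapshot-pinned RocksDB iterator is 
the intended optimisation; the method shape and surrounding interfaces below 
are illustrative, not the actual ignite-storage-rocksdb code:
{code:java}
// Sketch: a read-only scan can pin a RocksDB snapshot and iterate it
// directly, instead of going through the generic read-write scan path.
import java.util.Arrays;

import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksIterator;
import org.rocksdb.Snapshot;

class ReadOnlyIndexScanSketch {
    static void scan(RocksDB db, byte[] lowerBound, byte[] upperBound) {
        Snapshot snapshot = db.getSnapshot();

        try (ReadOptions readOptions = new ReadOptions().setSnapshot(snapshot);
                RocksIterator it = db.newIterator(readOptions)) {
            for (it.seek(lowerBound); it.isValid(); it.next()) {
                if (Arrays.compareUnsigned(it.key(), upperBound) >= 0) {
                    break; // Past the requested range.
                }
                // ... decode the index row from it.key()/it.value() and feed the cursor ...
            }
        } finally {
            db.releaseSnapshot(snapshot);
        }
    }
}
{code}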



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-22003) JRaft Replicator is not stopped when cluster init is canceled

2024-04-08 Thread Tiago Marques Godinho (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-22003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17835000#comment-17835000
 ] 

Tiago Marques Godinho commented on IGNITE-22003:


I think this behaviour might have been caused by another issue.
Let me have a bit more time to investigate before picking this up.

> JRaft Replicator is not stopped when cluster init is canceled
> -
>
> Key: IGNITE-22003
> URL: https://issues.apache.org/jira/browse/IGNITE-22003
> Project: Ignite
>  Issue Type: Bug
>Reporter: Tiago Marques Godinho
>Priority: Major
>  Labels: ignite-3
>
> The JRaft Group Replicator should be stopped when the cluster initialization 
> is canceled.
> My understanding is that this resource is created during the initialization 
> process, and therefore, should be closed properly if it fails. Moreover, it 
> seems that it also keeps references to other resources created in the same 
> context (like the LogManager), preventing them from being properly cleaned.
> Here is a stacktrace of the replicator routine running after the init is 
> canceled:
> {code:java}
> getEntry:387, RocksDbSharedLogStorage 
> (org.apache.ignite.internal.raft.storage.impl)
> getTermFromLogStorage:838, LogManagerImpl 
> (org.apache.ignite.raft.jraft.storage.impl)
> getTerm:834, LogManagerImpl (org.apache.ignite.raft.jraft.storage.impl)
> fillCommonFields:1550, Replicator (org.apache.ignite.raft.jraft.core)
> sendEmptyEntries:749, Replicator (org.apache.ignite.raft.jraft.core)
> sendEmptyEntries:735, Replicator (org.apache.ignite.raft.jraft.core)
> sendHeartbeat:1748, Replicator (org.apache.ignite.raft.jraft.core)
> lambda$onError$8:1065, Replicator (org.apache.ignite.raft.jraft.core)
> call:515, Executors$RunnableAdapter (java.util.concurrent)
> run:264, FutureTask (java.util.concurrent)
> runWorker:1128, ThreadPoolExecutor (java.util.concurrent)
> run:628, ThreadPoolExecutor$Worker (java.util.concurrent)
> run:829, Thread (java.lang){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22006) 'Failed to process the lease granted message' error under load with balance transfer scenario

2024-04-08 Thread Nikita Sivkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Sivkov updated IGNITE-22006:
---
Description: 
*Steps to reproduce:*

Perform a long (about 2 hours) load test with a balance transfer scenario (see 
scenario pseudo code in attachments).

*Expected result:*

No errors happen.

*Actual result:*

Get error in server logs - {{Failed to process the lease granted message}}
{code:java}
2024-04-05 17:50:39:180 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-13][NodeImpl]
 Node <127_part_16/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=3.
2024-04-05 17:50:39:187 +0300 
[WARNING][CompletableFutureDelayScheduler][ReplicaManager] Failed to process 
the lease granted message [msg=LeaseGrantedMessageImpl [force=true, 
groupId=77_part_14, leaseExpirationTimeLong=112219169697759232, 
leaseStartTimeLong=112219161833439373]].
java.util.concurrent.TimeoutException
    at 
java.base/java.util.concurrent.CompletableFuture$Timeout.run(CompletableFuture.java:2792)
    at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
2024-04-05 17:50:39:190 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-34][NodeImpl]
 Node <213_part_14/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=2.{code}

  was:
*Steps to reproduce:*

Perform a long (about 2 hours) load test with a balance transfer scenario (see 
scenario pseudo code in attachments).

*Expected result:*

No errors happen.

*Actual result:*

Get error in server logs - {{Failed to process replica request}}
{code:java}
2024-04-05 17:50:55:802 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-AppendEntries-Processor-2][NodeImpl]
 Node <193_part_15/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=2.
2024-04-05 17:50:55:805 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%Raft-Group-Client-19][ReplicaManager]
 Failed to process replica request [request=TxFinishReplicaRequestImpl 
[commit=false, commitTimestampLong=0, 
enlistmentConsistencyToken=112218720633356321, groupId=123_part_21, 
groups=HashMap {141_part_13=poc-tester-SERVER-192.168.1.27-id-0, 
139_part_9=poc-tester-SERVER-192.168.1.97-id-0, 
193_part_3=poc-tester-SERVER-192.168.1.27-id-0, 
19_part_23=poc-tester-SERVER-192.168.1.27-id-0, 
117_part_17=poc-tester-SERVER-192.168.1.18-id-0, 
45_part_9=poc-tester-SERVER-192.168.1.18-id-0, 
39_part_3=poc-tester-SERVER-192.168.1.18-id-0, 
77_part_4=poc-tester-SERVER-192.168.1.18-id-0, 
105_part_4=poc-tester-SERVER-192.168.1.18-id-0, 
123_part_21=poc-tester-SERVER-192.168.1.97-id-0, 
103_part_9=poc-tester-SERVER-192.168.1.18-id-0, 
161_part_15=poc-tester-SERVER-192.168.1.27-id-0, 
103_part_22=poc-tester-SERVER-192.168.1.27-id-0, 
89_part_10=poc-tester-SERVER-192.168.1.18-id-0, 
39_part_19=poc-tester-SERVER-192.168.1.27-id-0, 
149_part_13=poc-tester-SERVER-192.168.1.27-id-0, 
97_part_24=poc-tester-SERVER-192.168.1.97-id-0, 
83_part_9=poc-tester-SERVER-192.168.1.27-id-0, 
209_part_10=poc-tester-SERVER-192.168.1.27-id-0, 
185_part_5=poc-tester-SERVER-192.168.1.18-id-0, 
117_part_9=poc-tester-SERVER-192.168.1.27-id-0, 
105_part_22=poc-tester-SERVER-192.168.1.18-id-0}, 
timestampLong=112219170129903617, txId=018eaebd-88ba-0001-606d-62250001]].
java.util.concurrent.CompletionException: 
org.apache.ignite.tx.TransactionException: IGN-TX-7 
TraceId:cb1577e6-ec35-47f0-ab7d-56a0687344ed 
java.util.concurrent.TimeoutException
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
    at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:932)
    at 
java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$applyCmdWithRetryOnSafeTimeReorderException$126(PartitionReplicaListener.java:2806)
    at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
    at 
java.base/java.util.concurrent.CompletableFuture$UniWhe

[jira] [Created] (IGNITE-22006) 'Failed to process the lease granted message' error under load with balance transfer scenario

2024-04-08 Thread Nikita Sivkov (Jira)
Nikita Sivkov created IGNITE-22006:
--

 Summary: 'Failed to process the lease granted message' error under 
load with balance transfer scenario
 Key: IGNITE-22006
 URL: https://issues.apache.org/jira/browse/IGNITE-22006
 Project: Ignite
  Issue Type: Bug
Affects Versions: 3.0.0-beta2
 Environment: Cluster of 3 nodes
Reporter: Nikita Sivkov
 Attachments: transfer_ign3.yaml

*Steps to reproduce:*

Perform a long (about 2 hours) load test with a balance transfer scenario (see 
scenario pseudo code in attachments).

*Expected result:*

No errors happen.

*Actual result:*

Get error in server logs - {{Failed to process replica request}}
{code:java}
2024-04-05 17:50:55:802 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-AppendEntries-Processor-2][NodeImpl]
 Node <193_part_15/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=2.
2024-04-05 17:50:55:805 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%Raft-Group-Client-19][ReplicaManager]
 Failed to process replica request [request=TxFinishReplicaRequestImpl 
[commit=false, commitTimestampLong=0, 
enlistmentConsistencyToken=112218720633356321, groupId=123_part_21, 
groups=HashMap {141_part_13=poc-tester-SERVER-192.168.1.27-id-0, 
139_part_9=poc-tester-SERVER-192.168.1.97-id-0, 
193_part_3=poc-tester-SERVER-192.168.1.27-id-0, 
19_part_23=poc-tester-SERVER-192.168.1.27-id-0, 
117_part_17=poc-tester-SERVER-192.168.1.18-id-0, 
45_part_9=poc-tester-SERVER-192.168.1.18-id-0, 
39_part_3=poc-tester-SERVER-192.168.1.18-id-0, 
77_part_4=poc-tester-SERVER-192.168.1.18-id-0, 
105_part_4=poc-tester-SERVER-192.168.1.18-id-0, 
123_part_21=poc-tester-SERVER-192.168.1.97-id-0, 
103_part_9=poc-tester-SERVER-192.168.1.18-id-0, 
161_part_15=poc-tester-SERVER-192.168.1.27-id-0, 
103_part_22=poc-tester-SERVER-192.168.1.27-id-0, 
89_part_10=poc-tester-SERVER-192.168.1.18-id-0, 
39_part_19=poc-tester-SERVER-192.168.1.27-id-0, 
149_part_13=poc-tester-SERVER-192.168.1.27-id-0, 
97_part_24=poc-tester-SERVER-192.168.1.97-id-0, 
83_part_9=poc-tester-SERVER-192.168.1.27-id-0, 
209_part_10=poc-tester-SERVER-192.168.1.27-id-0, 
185_part_5=poc-tester-SERVER-192.168.1.18-id-0, 
117_part_9=poc-tester-SERVER-192.168.1.27-id-0, 
105_part_22=poc-tester-SERVER-192.168.1.18-id-0}, 
timestampLong=112219170129903617, txId=018eaebd-88ba-0001-606d-62250001]].
java.util.concurrent.CompletionException: 
org.apache.ignite.tx.TransactionException: IGN-TX-7 
TraceId:cb1577e6-ec35-47f0-ab7d-56a0687344ed 
java.util.concurrent.TimeoutException
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
    at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:932)
    at 
java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$applyCmdWithRetryOnSafeTimeReorderException$126(PartitionReplicaListener.java:2806)
    at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
    at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.sendWithRetry(RaftGroupServiceImpl.java:550)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.lambda$handleErrorResponse$44(RaftGroupServiceImpl.java:653)
    at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.ignite.tx.TransactionException: IGN-TX-7 
TraceId:cb1577e6-ec35-47f0-ab7d-56a0687344ed 
java.util.concurrent.TimeoutException
    at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$finishTransaction$70(PartitionReplicaListener.java:1867)
    at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(Co

[jira] [Updated] (IGNITE-22005) 'Failed to process replica request' error under load with balance transfer scenario

2024-04-08 Thread Nikita Sivkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Sivkov updated IGNITE-22005:
---
Description: 
*Steps to reproduce:*

Perform a long (about 2 hours) load test with a balance transfer scenario (see 
scenario pseudo code in attachments).

*Expected result:*

No errors happen.

*Actual result:*

Get error in server logs - {{Failed to process replica request}}
{code:java}
2024-04-05 17:50:55:802 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-AppendEntries-Processor-2][NodeImpl]
 Node <193_part_15/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=2.
2024-04-05 17:50:55:805 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%Raft-Group-Client-19][ReplicaManager]
 Failed to process replica request [request=TxFinishReplicaRequestImpl 
[commit=false, commitTimestampLong=0, 
enlistmentConsistencyToken=112218720633356321, groupId=123_part_21, 
groups=HashMap {141_part_13=poc-tester-SERVER-192.168.1.27-id-0, 
139_part_9=poc-tester-SERVER-192.168.1.97-id-0, 
193_part_3=poc-tester-SERVER-192.168.1.27-id-0, 
19_part_23=poc-tester-SERVER-192.168.1.27-id-0, 
117_part_17=poc-tester-SERVER-192.168.1.18-id-0, 
45_part_9=poc-tester-SERVER-192.168.1.18-id-0, 
39_part_3=poc-tester-SERVER-192.168.1.18-id-0, 
77_part_4=poc-tester-SERVER-192.168.1.18-id-0, 
105_part_4=poc-tester-SERVER-192.168.1.18-id-0, 
123_part_21=poc-tester-SERVER-192.168.1.97-id-0, 
103_part_9=poc-tester-SERVER-192.168.1.18-id-0, 
161_part_15=poc-tester-SERVER-192.168.1.27-id-0, 
103_part_22=poc-tester-SERVER-192.168.1.27-id-0, 
89_part_10=poc-tester-SERVER-192.168.1.18-id-0, 
39_part_19=poc-tester-SERVER-192.168.1.27-id-0, 
149_part_13=poc-tester-SERVER-192.168.1.27-id-0, 
97_part_24=poc-tester-SERVER-192.168.1.97-id-0, 
83_part_9=poc-tester-SERVER-192.168.1.27-id-0, 
209_part_10=poc-tester-SERVER-192.168.1.27-id-0, 
185_part_5=poc-tester-SERVER-192.168.1.18-id-0, 
117_part_9=poc-tester-SERVER-192.168.1.27-id-0, 
105_part_22=poc-tester-SERVER-192.168.1.18-id-0}, 
timestampLong=112219170129903617, txId=018eaebd-88ba-0001-606d-62250001]].
java.util.concurrent.CompletionException: 
org.apache.ignite.tx.TransactionException: IGN-TX-7 
TraceId:cb1577e6-ec35-47f0-ab7d-56a0687344ed 
java.util.concurrent.TimeoutException
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:319)
    at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:932)
    at 
java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:907)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$applyCmdWithRetryOnSafeTimeReorderException$126(PartitionReplicaListener.java:2806)
    at 
java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859)
    at 
java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.sendWithRetry(RaftGroupServiceImpl.java:550)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.lambda$handleErrorResponse$44(RaftGroupServiceImpl.java:653)
    at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.ignite.tx.TransactionException: IGN-TX-7 
TraceId:cb1577e6-ec35-47f0-ab7d-56a0687344ed 
java.util.concurrent.TimeoutException
    at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$finishTransaction$70(PartitionReplicaListener.java:1867)
    at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
    ... 16 more
Caused by: java.util.concurrent.CompletionException: 
java.util.concurrent.TimeoutException
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
    at 
java.base/java.util.concurrent.CompletableFut

[jira] [Created] (IGNITE-22005) 'Failed to process replica request' error under load with balance transfer scenario

2024-04-08 Thread Nikita Sivkov (Jira)
Nikita Sivkov created IGNITE-22005:
--

 Summary: 'Failed to process replica request' error under load with 
balance transfer scenario
 Key: IGNITE-22005
 URL: https://issues.apache.org/jira/browse/IGNITE-22005
 Project: Ignite
  Issue Type: Bug
Affects Versions: 3.0.0-beta2
 Environment: Cluster of 3 nodes
Reporter: Nikita Sivkov
 Attachments: transfer_ign3.yaml

*Steps to reproduce:*

Perform a long (about 2 hours) load test with a balance transfer scenario (see 
scenario pseudo code in attachments).

*Expected result:*

No errors happen.

*Actual result:*

Get error in server logs - {{Failed to process delayed response}}
{code:java}
2024-04-05 17:50:50:776 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-1][NodeImpl]
 Node <27_part_23/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=2.
2024-04-05 17:50:50:778 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%Raft-Group-Client-5][ReplicaManager]
 Failed to process delayed response 
[request=ReadWriteSingleRowReplicaRequestImpl 
[commitPartitionId=TablePartitionIdMessageImpl [partitionId=21, tableId=123], 
coordinatorId=3de6f999-7ab9-4405-aff0-ee0c7e4886ce, 
enlistmentConsistencyToken=112218720633356321, full=false, groupId=123_part_21, 
requestType=RW_UPSERT, schemaVersion=1, timestampLong=112219169796915211, 
transactionId=018eaebd-88ba-0001-606d-62250001]]
java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
    at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.sendWithRetry(RaftGroupServiceImpl.java:550)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.lambda$handleErrorResponse$44(RaftGroupServiceImpl.java:653)
    at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.util.concurrent.TimeoutException
    ... 8 more
2024-04-05 17:50:50:780 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-27][NodeImpl]
 Node <99_part_6/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=3. {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22004) 'Failed to process delayed response' error under load with balance transfer scenario

2024-04-08 Thread Nikita Sivkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Sivkov updated IGNITE-22004:
---
Attachment: transfer_ign3.yaml

> 'Failed to process delayed response' error under load with balance transfer 
> scenario
> 
>
> Key: IGNITE-22004
> URL: https://issues.apache.org/jira/browse/IGNITE-22004
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta2
> Environment: Cluster of 3 nodes
>Reporter: Nikita Sivkov
>Priority: Major
>  Labels: ignite-3
> Attachments: transfer_ign3.yaml
>
>
> *Steps to reproduce:*
> Perform a long (about 2 hours) load test with a balance transfer scenario 
> (see scenario pseudo code in attachments).
> *Expected result:*
> No errors happen.
> *Actual result:*
> Get error in server logs - {{Failed to process delayed response}}
> {code:java}
> 2024-04-05 17:50:50:776 +0300 
> [WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-1][NodeImpl]
>  Node <27_part_23/poc-tester-SERVER-192.168.1.97-id-0> is not in active 
> state, currTerm=2.
> 2024-04-05 17:50:50:778 +0300 
> [WARNING][%poc-tester-SERVER-192.168.1.97-id-0%Raft-Group-Client-5][ReplicaManager]
>  Failed to process delayed response 
> [request=ReadWriteSingleRowReplicaRequestImpl 
> [commitPartitionId=TablePartitionIdMessageImpl [partitionId=21, tableId=123], 
> coordinatorId=3de6f999-7ab9-4405-aff0-ee0c7e4886ce, 
> enlistmentConsistencyToken=112218720633356321, full=false, 
> groupId=123_part_21, requestType=RW_UPSERT, schemaVersion=1, 
> timestampLong=112219169796915211, 
> transactionId=018eaebd-88ba-0001-606d-62250001]]
> java.util.concurrent.CompletionException: 
> java.util.concurrent.TimeoutException
>     at 
> java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
>     at 
> java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
>     at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
>     at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>     at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>     at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.sendWithRetry(RaftGroupServiceImpl.java:550)
>     at 
> org.apache.ignite.internal.raft.RaftGroupServiceImpl.lambda$handleErrorResponse$44(RaftGroupServiceImpl.java:653)
>     at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.util.concurrent.TimeoutException
>     ... 8 more
> 2024-04-05 17:50:50:780 +0300 
> [WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-27][NodeImpl]
>  Node <99_part_6/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
> currTerm=3. {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22004) 'Failed to process delayed response' error under load with balance transfer scenario

2024-04-08 Thread Nikita Sivkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Sivkov updated IGNITE-22004:
---
Description: 
*Steps to reproduce:*

Perform a long (about 2 hours) load test with a balance transfer scenario (see 
scenario pseudo code in attachments).

*Expected result:*

No errors happen.

*Actual result:*

Get error in server logs - {{Failed to process delayed response}}
{code:java}
2024-04-05 17:50:50:776 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-1][NodeImpl]
 Node <27_part_23/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=2.
2024-04-05 17:50:50:778 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%Raft-Group-Client-5][ReplicaManager]
 Failed to process delayed response 
[request=ReadWriteSingleRowReplicaRequestImpl 
[commitPartitionId=TablePartitionIdMessageImpl [partitionId=21, tableId=123], 
coordinatorId=3de6f999-7ab9-4405-aff0-ee0c7e4886ce, 
enlistmentConsistencyToken=112218720633356321, full=false, groupId=123_part_21, 
requestType=RW_UPSERT, schemaVersion=1, timestampLong=112219169796915211, 
transactionId=018eaebd-88ba-0001-606d-62250001]]
java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
    at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.sendWithRetry(RaftGroupServiceImpl.java:550)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.lambda$handleErrorResponse$44(RaftGroupServiceImpl.java:653)
    at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.util.concurrent.TimeoutException
    ... 8 more
2024-04-05 17:50:50:780 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-27][NodeImpl]
 Node <99_part_6/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=3. {code}

  was:
*Steps to reproduce:*

Perform a long (about 2 hours) load test with a balance transfer scenario.

*Expected result:*

No errors happen.

*Actual result:*

Get error in server logs - {{Failed to process delayed response}}
{code:java}
2024-04-05 17:50:50:776 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-1][NodeImpl]
 Node <27_part_23/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=2.
2024-04-05 17:50:50:778 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%Raft-Group-Client-5][ReplicaManager]
 Failed to process delayed response 
[request=ReadWriteSingleRowReplicaRequestImpl 
[commitPartitionId=TablePartitionIdMessageImpl [partitionId=21, tableId=123], 
coordinatorId=3de6f999-7ab9-4405-aff0-ee0c7e4886ce, 
enlistmentConsistencyToken=112218720633356321, full=false, groupId=123_part_21, 
requestType=RW_UPSERT, schemaVersion=1, timestampLong=112219169796915211, 
transactionId=018eaebd-88ba-0001-606d-62250001]]
java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
    at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.sendWithRetry(RaftGroupServiceImpl.java:550)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.lambda$handleErrorResponse$44(RaftGroupServiceImpl.java:653)
    at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    at 
ja

[jira] [Created] (IGNITE-22004) 'Failed to process delayed response' error under load with balance transfer scenario

2024-04-08 Thread Nikita Sivkov (Jira)
Nikita Sivkov created IGNITE-22004:
--

 Summary: 'Failed to process delayed response' error under load 
with balance transfer scenario
 Key: IGNITE-22004
 URL: https://issues.apache.org/jira/browse/IGNITE-22004
 Project: Ignite
  Issue Type: Bug
Affects Versions: 3.0.0-beta2
 Environment: Cluster of 3 nodes
Reporter: Nikita Sivkov


*Steps to reproduce:*

Perform a long (about 2 hours) load test with a balance transfer scenario.

*Expected result:*

No errors happen.

*Actual result:*

Get error in server logs - {{Failed to process delayed response}}
{code:java}
2024-04-05 17:50:50:776 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-1][NodeImpl]
 Node <27_part_23/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=2.
2024-04-05 17:50:50:778 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%Raft-Group-Client-5][ReplicaManager]
 Failed to process delayed response 
[request=ReadWriteSingleRowReplicaRequestImpl 
[commitPartitionId=TablePartitionIdMessageImpl [partitionId=21, tableId=123], 
coordinatorId=3de6f999-7ab9-4405-aff0-ee0c7e4886ce, 
enlistmentConsistencyToken=112218720633356321, full=false, groupId=123_part_21, 
requestType=RW_UPSERT, schemaVersion=1, timestampLong=112219169796915211, 
transactionId=018eaebd-88ba-0001-606d-62250001]]
java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException
    at 
java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331)
    at 
java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346)
    at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:632)
    at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
    at 
java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.sendWithRetry(RaftGroupServiceImpl.java:550)
    at 
org.apache.ignite.internal.raft.RaftGroupServiceImpl.lambda$handleErrorResponse$44(RaftGroupServiceImpl.java:653)
    at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.util.concurrent.TimeoutException
    ... 8 more
2024-04-05 17:50:50:780 +0300 
[WARNING][%poc-tester-SERVER-192.168.1.97-id-0%JRaft-Request-Processor-27][NodeImpl]
 Node <99_part_6/poc-tester-SERVER-192.168.1.97-id-0> is not in active state, 
currTerm=3. {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22003) JRaft Replicator is not stopped when cluster init is canceled

2024-04-08 Thread Tiago Marques Godinho (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tiago Marques Godinho updated IGNITE-22003:
---
Description: 
The JRaft Group Replicator should be stopped when the cluster initialization is 
canceled.

My understanding is that this resource is created during the initialization 
process, and therefore, should be closed properly if it fails. Moreover, it 
seems that it also keeps references to other resources created in the same 
context (like the LogManager), preventing them from being properly cleaned.

Here is a stacktrace of the replicator routine running after the init is 
canceled:
{code:java}
getEntry:387, RocksDbSharedLogStorage 
(org.apache.ignite.internal.raft.storage.impl)
getTermFromLogStorage:838, LogManagerImpl 
(org.apache.ignite.raft.jraft.storage.impl)
getTerm:834, LogManagerImpl (org.apache.ignite.raft.jraft.storage.impl)
fillCommonFields:1550, Replicator (org.apache.ignite.raft.jraft.core)
sendEmptyEntries:749, Replicator (org.apache.ignite.raft.jraft.core)
sendEmptyEntries:735, Replicator (org.apache.ignite.raft.jraft.core)
sendHeartbeat:1748, Replicator (org.apache.ignite.raft.jraft.core)
lambda$onError$8:1065, Replicator (org.apache.ignite.raft.jraft.core)
call:515, Executors$RunnableAdapter (java.util.concurrent)
run:264, FutureTask (java.util.concurrent)
runWorker:1128, ThreadPoolExecutor (java.util.concurrent)
run:628, ThreadPoolExecutor$Worker (java.util.concurrent)
run:829, Thread (java.lang){code}

  was:
The JRaft Group Replicator should be stopped when the cluster initialization is 
canceled.

My understanding is that this resource is created during the initialization 
process, and therefore, should be closed properly if it fails. Moreover, it 
seems that it also keeps references to other resources created in the same 
context (like the LogManager), preventing them from being properly cleaned.

 


> JRaft Replicator is not stopped when cluster init is canceled
> -
>
> Key: IGNITE-22003
> URL: https://issues.apache.org/jira/browse/IGNITE-22003
> Project: Ignite
>  Issue Type: Bug
>Reporter: Tiago Marques Godinho
>Priority: Major
>  Labels: ignite-3
>
> The JRaft Group Replicator should be stopped when the cluster initialization 
> is canceled.
> My understanding is that this resource is created during the initialization 
> process, and therefore, should be closed properly if it fails. Moreover, it 
> seems that it also keeps references to other resources created in the same 
> context (like the LogManager), preventing them from being properly cleaned.
> Here is a stacktrace of the replicator routine running after the init is 
> canceled:
> {code:java}
> getEntry:387, RocksDbSharedLogStorage 
> (org.apache.ignite.internal.raft.storage.impl)
> getTermFromLogStorage:838, LogManagerImpl 
> (org.apache.ignite.raft.jraft.storage.impl)
> getTerm:834, LogManagerImpl (org.apache.ignite.raft.jraft.storage.impl)
> fillCommonFields:1550, Replicator (org.apache.ignite.raft.jraft.core)
> sendEmptyEntries:749, Replicator (org.apache.ignite.raft.jraft.core)
> sendEmptyEntries:735, Replicator (org.apache.ignite.raft.jraft.core)
> sendHeartbeat:1748, Replicator (org.apache.ignite.raft.jraft.core)
> lambda$onError$8:1065, Replicator (org.apache.ignite.raft.jraft.core)
> call:515, Executors$RunnableAdapter (java.util.concurrent)
> run:264, FutureTask (java.util.concurrent)
> runWorker:1128, ThreadPoolExecutor (java.util.concurrent)
> run:628, ThreadPoolExecutor$Worker (java.util.concurrent)
> run:829, Thread (java.lang){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22003) JRaft Replicator is not stopped when cluster init is canceled

2024-04-08 Thread Tiago Marques Godinho (Jira)
Tiago Marques Godinho created IGNITE-22003:
--

 Summary: JRaft Replicator is not stopped when cluster init is 
canceled
 Key: IGNITE-22003
 URL: https://issues.apache.org/jira/browse/IGNITE-22003
 Project: Ignite
  Issue Type: Bug
Reporter: Tiago Marques Godinho


The JRaft Group Replicator should be stopped when the cluster initialization is 
canceled.

My understanding is that this resource is created during the initialization 
process, and therefore, should be closed properly if it fails. Moreover, it 
seems that it also keeps references to other resources created in the same 
context (like the LogManager), preventing them from being properly cleaned.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21920) Cover SQL E051-04 (Basic query specification, GROUP BY can contain columns not in <select list>) feature by tests

2024-04-08 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-21920:
--
Fix Version/s: 3.0.0-beta2

> Cover SQL E051-04 (Basic query specification, GROUP BY can contain columns 
> not in <select list>) feature by tests
> -
>
> Key: IGNITE-21920
> URL: https://issues.apache.org/jira/browse/IGNITE-21920
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We don't have any tests at all for the E051-04 (Basic query specification, 
> GROUP BY can contain columns not in <select list>) SQL feature.
> Let's cover it and create tickets to fix any issues found in the covered 
> area.
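
For illustration, a query exercising the feature (table and column names are 
made up): E051-04 allows a grouping column that never appears in the select 
list.
{code:java}
// Illustrative only: dept_id is used in GROUP BY but not in the select list,
// which is exactly what E051-04 permits.
String e051_04 = "SELECT COUNT(*) FROM employees GROUP BY dept_id";
{code}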



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22002) AssertionError: Updated lease start time should be greater than current

2024-04-08 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-22002:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> AssertionError: Updated lease start time should be greater than current
> ---
>
> Key: IGNITE-22002
> URL: https://issues.apache.org/jira/browse/IGNITE-22002
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>
> PrimaryReplicaChangeCommandImpl commands might be reordered, like any other 
> raft commands. For PrimaryReplicaChangeCommand there is no need to enforce 
> the order; it is enough to skip an inconsistent command. Let's replace the 
> assertion with a corresponding check and stop command evaluation when the 
> check fails.
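
A minimal sketch of the proposed change; method and variable names are 
assumptions, not the actual state-machine code:
{code:java}
// Before: assert on lease start time ordering, which fails on reorder.
// After: treat a stale PrimaryReplicaChangeCommand as a no-op.
void applyPrimaryReplicaChange(long updatedLeaseStartTime, long currentLeaseStartTime) {
    if (updatedLeaseStartTime <= currentLeaseStartTime) {
        return; // Reordered/stale command: skip it, keep the current lease state.
    }
    // ... apply the new lease start time ...
}
{code}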



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22002) AssertionError: Updated lease start time should be greater than current

2024-04-08 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-22002:
-
Labels: ignite-3  (was: )

> AssertionError: Updated lease start time should be greater than current
> ---
>
> Key: IGNITE-22002
> URL: https://issues.apache.org/jira/browse/IGNITE-22002
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>
> PrimaryReplicaChangeCommandImpl commands might be reordered, like any other 
> raft commands. For PrimaryReplicaChangeCommand there is no need to enforce 
> the order; it is enough to skip an inconsistent command. Let's replace the 
> assertion with a corresponding check and stop command evaluation when the 
> check fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22002) AssertionError: Updated lease start time should be greater than current

2024-04-08 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-22002:


 Summary: AssertionError: Updated lease start time should be 
greater than current
 Key: IGNITE-22002
 URL: https://issues.apache.org/jira/browse/IGNITE-22002
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


PrimaryReplicaChangeCommandImpl commands might be reordered, like any other 
raft commands. For PrimaryReplicaChangeCommand there is no need to enforce the 
order; it is enough to skip an inconsistent command. Let's replace the 
assertion with a corresponding check and stop command evaluation when the 
check fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21922) Cover SQL E141-01(Basic integrity constraints, NOT NULL constraints) feature by tests

2024-04-08 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-21922:
-

Assignee: Maksim Zhuravkov

> Cover SQL E141-01(Basic integrity constraints, NOT NULL constraints) feature 
> by tests
> -
>
> Key: IGNITE-21922
> URL: https://issues.apache.org/jira/browse/IGNITE-21922
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> We don't have any tests at all for the E141-01 (Basic integrity constraints, 
> NOT NULL constraints) SQL feature.
> Let's cover it and create tickets to fix any issues found in the covered 
> area.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21920) Cover SQL E051-04 (Basic query specification, GROUP BY can contain columns not in <select list>) feature by tests

2024-04-08 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov reassigned IGNITE-21920:
-

Assignee: Andrey Mashenkov

> Cover SQL E051-04 (Basic query specification, GROUP BY can contain columns 
> not in <select list>) feature by tests
> -
>
> Key: IGNITE-21920
> URL: https://issues.apache.org/jira/browse/IGNITE-21920
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
>
> We don't have any tests at all for the E051-04 (Basic query specification, 
> GROUP BY can contain columns not in <select list>) SQL feature.
> Let's cover it and create tickets to fix any issues found in the covered 
> area.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22001) Throw a specific exception if the writeTableAssignmentsToMetastore process was interrupted

2024-04-08 Thread Mikhail Efremov (Jira)
Mikhail Efremov created IGNITE-22001:


 Summary: Throw a specific exception if the 
writeTableAssignmentsToMetastore process was interrupted
 Key: IGNITE-22001
 URL: https://issues.apache.org/jira/browse/IGNITE-22001
 Project: Ignite
  Issue Type: Improvement
Reporter: Mikhail Efremov
Assignee: Mikhail Efremov


h2. The problem

In {{TableManager#writeTableAssignmentsToMetastore:752}} the resulting 
{{CompletableFuture}} may complete with a {{null}} value. For callers of 
{{writeTableAssignmentsToMetastore}} this leads to a sudden 
{{NullPointerException}} with no clear indication of why it happened.
h2. The solution

Instead of returning a {{null}} value, re-throw a more specific exception that 
carries the assignments list being written to the metastore and the table 
identifier; this should help to investigate cases where the method is 
interrupted.
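
A minimal sketch of the intended behaviour; the exception type and message are 
illustrative, and the {{interrupted}} flag, {{tableId}} and {{assignments}} 
stand for values already available inside the method:
{code:java}
// Illustrative only: surface the interruption with the context that is
// already at hand instead of completing the future with null.
if (interrupted) {
    throw new IllegalStateException(String.format(
            "Write of table assignments to the metastore was interrupted [tableId=%s, assignments=%s]",
            tableId, assignments));
}
{code}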



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22000) Sql. Get rid of DdlSqlToCommandConverter

2024-04-08 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-22000:
--
Description: 
Every DDL command goes through two levels of conversion now.

From {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}}.
And from {{DdlCommand}} into {{CatalogCommand}} (using 
{{DdlToCatalogCommandConverter}}).

It looks like we should get rid of this intermediate conversion into 
{{DdlCommand}}, and do only single conversion {{AST}} => {{CatalogCommand}}}.

  was:
Each DDL command passes 2 layer conversion now.
From {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}}.
And right after that from {{DdlCommand}} into {{CatalogCommand}} 
({{DdlToCatalogCommandConverter}}).

It looks like we should get rid of this intermediate conversion into 
{{DdlCommand}}, and do only single conversion {{AST}} => {{CatalogCommand}}}.


> Sql. Get rid of DdlSqlToCommandConverter
> 
>
> Key: IGNITE-22000
> URL: https://issues.apache.org/jira/browse/IGNITE-22000
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3, refactoring
>
> Every DDL command goes through two levels of conversion now.
> From {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}}.
> And from {{DdlCommand}} into {{CatalogCommand}} (using 
> {{DdlToCatalogCommandConverter}}).
> It looks like we should get rid of this intermediate conversion into 
> {{DdlCommand}}, and do only single conversion {{AST}} => {{CatalogCommand}}}.
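
A hypothetical sketch of the target shape; the converter interface name is an 
assumption, while {{SqlNode}} is the Calcite AST node type and 
{{CatalogCommand}} is the existing target type:
{code:java}
// Hypothetical: convert the parsed DDL AST directly into a CatalogCommand,
// with no intermediate DdlCommand step.
interface AstToCatalogCommandConverter {
    CatalogCommand convert(SqlNode ddlAst);
}
{code}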



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22000) Sql. Get rid of DdlSqlToCommandConverter

2024-04-08 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-22000:
--
Description: 
Every DDL command goes through two levels of conversion now.

From {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}}.
And from {{DdlCommand}} into {{CatalogCommand}} (using 
{{DdlToCatalogCommandConverter}}).

It looks like we should do only single conversion {{AST}} => 
{{CatalogCommand}}}.

  was:
Every DDL command goes through two levels of conversion now.

From {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}}.
And from {{DdlCommand}} into {{CatalogCommand}} (using 
{{DdlToCatalogCommandConverter}}).

It looks like we should get rid of this intermediate conversion into 
{{DdlCommand}}, and do only single conversion {{AST}} => {{CatalogCommand}}}.


> Sql. Get rid of DdlSqlToCommandConverter
> 
>
> Key: IGNITE-22000
> URL: https://issues.apache.org/jira/browse/IGNITE-22000
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3, refactoring
>
> Every DDL command goes through two levels of conversion now.
> From {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}}.
> And from {{DdlCommand}} into {{CatalogCommand}} (using 
> {{DdlToCatalogCommandConverter}}).
> It looks like we should do only single conversion {{AST}} => 
> {{CatalogCommand}}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22000) Sql. Get rid of DdlSqlToCommandConverter

2024-04-08 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-22000:
--
Description: 
Every DDL command goes through two levels of conversion now.

From {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}}.
And from {{DdlCommand}} into {{CatalogCommand}} (using 
{{DdlToCatalogCommandConverter}}).

It looks like we should do only single conversion {{AST}} => {{CatalogCommand}}.

  was:
Every DDL command goes through two levels of conversion now.

From {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}}.
And from {{DdlCommand}} into {{CatalogCommand}} (using 
{{DdlToCatalogCommandConverter}}).

It looks like we should do only single conversion {{AST}} => 
{{CatalogCommand}}}.


> Sql. Get rid of DdlSqlToCommandConverter
> 
>
> Key: IGNITE-22000
> URL: https://issues.apache.org/jira/browse/IGNITE-22000
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3, refactoring
>
> Every DDL command goes through two levels of conversion now.
> From {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}}.
> And from {{DdlCommand}} into {{CatalogCommand}} (using 
> {{DdlToCatalogCommandConverter}}).
> It looks like we should do only single conversion {{AST}} => 
> {{CatalogCommand}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22000) Sql. Get rid of DdlSqlToCommandConverter

2024-04-08 Thread Pavel Pereslegin (Jira)
Pavel Pereslegin created IGNITE-22000:
-

 Summary: Sql. Get rid of DdlSqlToCommandConverter
 Key: IGNITE-22000
 URL: https://issues.apache.org/jira/browse/IGNITE-22000
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Pavel Pereslegin


Each DDL command passes 2 layer conversion now.
From {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}}.
And right after that from {{DdlCommand}} into {{CatalogCommand}} 
({{DdlToCatalogCommandConverter}}).

It looks like we should get rid of this intermediate conversion into 
{{DdlCommand}}, and do only single conversion {{AST}} => {{CatalogCommand}}}.
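
A rough sketch of what a single-step conversion could look like (the 
{{SqlNode}} / {{CatalogCommand}} placeholders and the handler registry below are 
illustrative only, not the existing converter code):
{code:java}
import java.util.HashMap;
import java.util.Map;

class AstToCatalogConverterSketch {
    interface SqlNode { String kind(); }   // stand-in for the Calcite AST node
    interface CatalogCommand { }           // stand-in for the catalog command

    @FunctionalInterface
    interface NodeHandler { CatalogCommand convert(SqlNode node); }

    private final Map<String, NodeHandler> handlers = new HashMap<>();

    AstToCatalogConverterSketch() {
        // One handler per DDL node kind, producing a CatalogCommand directly,
        // with no intermediate DdlCommand step.
        handlers.put("CREATE_TABLE", node -> new CatalogCommand() { });
        handlers.put("DROP_TABLE", node -> new CatalogCommand() { });
    }

    CatalogCommand convert(SqlNode node) {
        NodeHandler handler = handlers.get(node.kind());
        if (handler == null) {
            throw new IllegalArgumentException("Unsupported DDL node: " + node.kind());
        }
        return handler.convert(node);
    }
}
{code}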



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21914) Cover SQL T631(IN predicate with one list element) feature by tests

2024-04-08 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov reassigned IGNITE-21914:
-

Assignee: Andrey Mashenkov

> Cover SQL T631(IN predicate with one list element) feature by tests
> ---
>
> Key: IGNITE-21914
> URL: https://issues.apache.org/jira/browse/IGNITE-21914
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
>
> We currently have no tests for the T631 (IN predicate with one list element) 
> SQL feature.
> Let's cover it with tests and file tickets for any issues found in the covered 
> area.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21999) Merge partition free-lists into one

2024-04-08 Thread Ivan Bessonov (Jira)
Ivan Bessonov created IGNITE-21999:
--

 Summary: Merge partition free-lists into one
 Key: IGNITE-21999
 URL: https://issues.apache.org/jira/browse/IGNITE-21999
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Bessonov


The current implementation has 2 free-lists:
 * version chains
 * index tuples

These lists have separate buckets for different types of data pages. There are 
two issues with this approach:
 * overhead on pages - we have to allocate more pages to store buckets
 * overhead on checkpoints - we have to save twice as many free-lists on every 
checkpoint

The reason, to my understanding, is that the FreeList class is parameterized 
with the specific type of data that it stores. That makes little sense to me, 
because the algorithm is always the same and we always use the code from the 
abstract free-list implementation.

What I propose:
 * get rid of abstract implementation and only have the concrete implementation 
of free lists
 * same for data pages
 * serialization code will be fully moved to implementations of Storeable

We're losing some guarantees with this change - we can no longer check that the 
type of the page is correct. My answer to that is that every Storeable could add 
a 1-byte header to its data and validate it on read; that should be enough. If 
we can find a way to store less than 1 byte, even better - I didn't look too 
deeply into that question.
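
A small sketch of the 1-byte header idea (the tag values and the read/write 
helpers below are made up for illustration; the real layout would live in the 
{{Storeable}} implementations):
{code:java}
import java.nio.ByteBuffer;

class StoreableHeaderSketch {
    static final byte VERSION_CHAIN_TAG = 1;   // made-up tag values
    static final byte INDEX_TUPLE_TAG = 2;

    /** Each row is prefixed with a 1-byte tag identifying its type. */
    static void write(ByteBuffer page, byte tag, byte[] payload) {
        page.put(tag);
        page.put(payload);
    }

    /** On read, the tag is validated, so a single shared free-list can still detect type mismatches. */
    static byte[] read(ByteBuffer page, byte expectedTag, int payloadLength) {
        byte tag = page.get();
        if (tag != expectedTag) {
            throw new IllegalStateException("Unexpected row type: expected " + expectedTag + ", got " + tag);
        }
        byte[] payload = new byte[payloadLength];
        page.get(payload);
        return payload;
    }
}
{code}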



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19719) Make CatalogDataStorageDescriptor support for each storage engine

2024-04-08 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-19719:
--
Epic Link: IGNITE-21991  (was: IGNITE-21211)

> Make CatalogDataStorageDescriptor support for each storage engine
> -
>
> Key: IGNITE-19719
> URL: https://issues.apache.org/jira/browse/IGNITE-19719
> Project: Ignite
>  Issue Type: Improvement
> Environment: Catalog service Part 3
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> At the moment, we do not have the ability to implement 
> *org.apache.ignite.internal.catalog.descriptors.CatalogDataStorageDescriptor* 
> for each storage engine; we need to come up with a mechanism for how to do 
> this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21608) Fix catalog compaction.

2024-04-08 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-21608:
--
Epic Link: IGNITE-21991  (was: IGNITE-21211)

> Fix catalog compaction.
> ---
>
> Key: IGNITE-21608
> URL: https://issues.apache.org/jira/browse/IGNITE-21608
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0
>
>
> Catalog compaction was disabled in IGNITE-21585, because we need consensus on 
> all the nodes about the safe timestamp/version the Catalog history could be 
> truncated to.
> Let's enable compaction and fix all the issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20046) Improve the Catalog interfaces

2024-04-08 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-20046:
--
Labels: ignite-3 tech-debt  (was: ignite-3)

> Improve the Catalog interfaces
> --
>
> Key: IGNITE-20046
> URL: https://issues.apache.org/jira/browse/IGNITE-20046
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3, tech-debt
> Fix For: 3.0.0-beta2
>
>
> Catalog-related interfaces and classes are missing javadocs; we need to fix 
> this.
> The CatalogService getters which accept an ID and a timestamp look useless and 
> probably could be removed.
> Let's also change the timestamp type `long->HybridTimestamp`



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19670) Improve CatalogService test coverage.

2024-04-08 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-19670:
--
Epic Link: IGNITE-21991  (was: IGNITE-21211)

> Improve CatalogService test coverage.
> -
>
> Key: IGNITE-19670
> URL: https://issues.apache.org/jira/browse/IGNITE-19670
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3, tech-debt-test
>
> 1. CatalogServiceSelfTest.testCreateTable (+testDropTable) looks a bit 
> complicated. It checks the creation of more than one table. Let's simplify the 
> test by reverting the last changes.
> 2. We use a shared counter to generate unique identifiers for schema objects. 
> Some tests check the schema object id and some don't. Let's move the 
> schema-object id check into a separate test, to verify which commands 
> increment the counter and which don't.
> 3. Let's add a test that checks the ABA problem: e.g. create-drop-create a 
> table (or index) with the same name and check that the object can be resolved 
> correctly by name and by id (with respect to object versioning in the Catalog, 
> of course); see the sketch after this list.
> 4. Move the Catalog operations tests to a separate class.
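
Regarding item 3, a rough outline of the shape of such an ABA check (the tiny 
in-memory "catalog" below only stands in for the real CatalogManager / 
CatalogService API and is not meant as actual test code):
{code:java}
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotEquals;

import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.Test;

class CatalogAbaTestSketch {
    private final Map<Integer, Map<String, Integer>> nameToIdByVersion = new HashMap<>();
    private int version;
    private int idGen;

    @Test
    void createDropCreateWithSameNameYieldsDistinctIds() {
        int firstId = createTable("T");   // catalog version 1
        dropTable("T");                   // catalog version 2
        int secondId = createTable("T");  // catalog version 3

        // The re-created table must be a brand new catalog object.
        assertNotEquals(firstId, secondId);

        // Name resolution against each catalog version must return the matching object.
        assertEquals(firstId, tableId("T", 1));
        assertEquals(secondId, tableId("T", 3));
    }

    // Simplified stand-ins for the real catalog operations.
    private int createTable(String name) {
        Map<String, Integer> snapshot = new HashMap<>(currentSnapshot());
        snapshot.put(name, ++idGen);
        nameToIdByVersion.put(++version, snapshot);
        return idGen;
    }

    private void dropTable(String name) {
        Map<String, Integer> snapshot = new HashMap<>(currentSnapshot());
        snapshot.remove(name);
        nameToIdByVersion.put(++version, snapshot);
    }

    private int tableId(String name, int catalogVersion) {
        return nameToIdByVersion.get(catalogVersion).get(name);
    }

    private Map<String, Integer> currentSnapshot() {
        return nameToIdByVersion.getOrDefault(version, Map.of());
    }
}
{code}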



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21998) .NET: TestSchemaUpdateWhileStreaming is flaky

2024-04-08 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21998:

Description: 
https://ci.ignite.apache.org/test/4007460749051703227?currentProjectId=ApacheIgnite3xGradle_Test&branch=%3Cdefault%3E&expandTestHistoryChartSection=true

Fails due to a 15s timeout or throws an exception:
{code}
Apache.Ignite.IgniteException : Table schema was updated after the transaction 
was started [table=108, startSchema=1, operationSchema=2]
  > Apache.Ignite.IgniteException : 
org.apache.ignite.internal.table.distributed.replicator.IncompatibleSchemaException:
 IGN-TX-12 TraceId:e32316b9-eb22-49b2-a45b-01ecb067da4f Table schema was 
updated after the transaction was started [table=108, startSchema=1, 
operationSchema=2]
  at 
org.apache.ignite.internal.table.distributed.replicator.SchemaCompatibilityValidator.failIfSchemaChangedAfterTxStart(SchemaCompatibilityValidator.java:238)
  at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.failIfSchemaChangedSinceTxStart(PartitionReplicaListener.java:3788)
  at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$validateWriteAgainstSchemaAfterTakingLocks$215(PartitionReplicaListener.java:3713)
  at 
java.base/java.util.concurrent.CompletableFuture.uniApplyNow(CompletableFuture.java:680)
  at 
java.base/java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:658)
  at 
java.base/java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:2094)
  at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.validateWriteAgainstSchemaAfterTakingLocks(PartitionReplicaListener.java:3712)
  at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$processMultiEntryAction$106(PartitionReplicaListener.java:2377)
  at 
java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106)
  at 
java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235)
  at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.processMultiEntryAction(PartitionReplicaListener.java:2349)
  at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$processOperationRequest$11(PartitionReplicaListener.java:644)
  at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.appendTxCommand(PartitionReplicaListener.java:1926)
  at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.processOperationRequest(PartitionReplicaListener.java:644)
  at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.processOperationRequestWithTxRwCounter(PartitionReplicaListener.java:3923)
  at 
org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener.lambda$processRequest$5(PartitionReplicaListener.java:440)
  at 
java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072)
  at 
java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
  at 
java.base/java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:610)
  at 
java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:649)
  at 
java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834)
   at Apache.Ignite.Internal.ClientSocket.DoOutInOpAsyncInternal(ClientOp 
clientOp, PooledArrayBuffer request, Boolean expectNotifications) in 
/opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientSocket.cs:line
 626
   at 
Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAndGetSocketAsync(ClientOp 
clientOp, Transaction tx, PooledArrayBuffer request, PreferredNode 
preferredNode, IRetryPolicy retryPolicyOverride, Boolean expectNotifications) 
in 
/opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
 213
   at Apache.Ignite.Internal.Table.DataStreamer.SendBatchAsync(Table table, 
PooledArrayBuffer buf, Int32 count, PreferredNode preferredNode, IRetryPolicy 
retryPolicy) in 
/opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/DataStreamer.cs:line
 509
   at 
Apache.Ignite.Internal.Table.DataStreamer.<>c__DisplayClass1_0`1.d.MoveNext()
 in 
/opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/DataStreamer.cs:line
 312
--- End of stack trace from previous location ---
   at 
Apache

[jira] [Updated] (IGNITE-21998) .NET: TestSchemaUpdateWhileStreaming is flaky

2024-04-08 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21998:

Description: 
https://ci.ignite.apache.org/test/4007460749051703227?currentProjectId=ApacheIgnite3xGradle_Test&branch=%3Cdefault%3E&expandTestHistoryChartSection=true

> .NET: TestSchemaUpdateWhileStreaming is flaky
> -
>
> Key: IGNITE-21998
> URL: https://issues.apache.org/jira/browse/IGNITE-21998
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>
> https://ci.ignite.apache.org/test/4007460749051703227?currentProjectId=ApacheIgnite3xGradle_Test&branch=%3Cdefault%3E&expandTestHistoryChartSection=true



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21998) .NET: TestSchemaUpdateWhileStreaming is flaky

2024-04-08 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-21998:
---

 Summary: .NET: TestSchemaUpdateWhileStreaming is flaky
 Key: IGNITE-21998
 URL: https://issues.apache.org/jira/browse/IGNITE-21998
 Project: Ignite
  Issue Type: Bug
  Components: thin client
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21889) Remove MVCC code from DataPageIO class

2024-04-08 Thread Julia Bakulina (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julia Bakulina updated IGNITE-21889:

Summary: Remove MVCC code from DataPageIO class  (was: Remove TxState)

> Remove MVCC code from DataPageIO class
> --
>
> Key: IGNITE-21889
> URL: https://issues.apache.org/jira/browse/IGNITE-21889
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Julia Bakulina
>Assignee: Julia Bakulina
>Priority: Major
>  Labels: ise
>
> Delete TxState



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-20095) Sql. Map part of MAP/REDUCE aggregate sometimes can not be moved past Exchange.

2024-04-08 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov resolved IGNITE-20095.
---
Resolution: Duplicate

Fixed within IGNITE-21580

> Sql. Map part of MAP/REDUCE aggregate sometimes can not be moved past 
> Exchange.
> ---
>
> Key: IGNITE-20095
> URL: https://issues.apache.org/jira/browse/IGNITE-20095
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
>
> See examples in MapReduceSortAggregatePlannerTest. 
> It looks like the Map aggregate can't utilize the sorting from the input in 
> some cases.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-20083) Sql. Investigate cost calculation for MAP/REDUCE aggregate.

2024-04-08 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov resolved IGNITE-20083.
---
Fix Version/s: None
   Resolution: Duplicate

Fixed in IGNITE-21580

> Sql. Investigate cost calculation for MAP/REDUCE aggregate.
> ---
>
> Key: IGNITE-20083
> URL: https://issues.apache.org/jira/browse/IGNITE-20083
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
> Fix For: None
>
>
> After removing complex objects between the MAP and REDUCE phases, cost 
> computation changed, and some plans that were expected to choose MAP/REDUCE 
> aggregates now choose colocated aggregation, or the exchange is moved before 
> the MAP phase and no MAP/REDUCE phases are inserted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-21919) Cover SQL E031-03 (Identifiers, Trailing underscore feature) by tests

2024-04-08 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-21919:
-

Assignee: Maksim Zhuravkov

> Cover SQL E031-03 (Identifiers, Trailing underscore feature) by tests
> -
>
> Key: IGNITE-21919
> URL: https://issues.apache.org/jira/browse/IGNITE-21919
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Assignee: Maksim Zhuravkov
>Priority: Major
>  Labels: ignite-3
>
> We currently have no tests for the E031-03 (Identifiers, Trailing underscore) 
> SQL feature.
> Let's cover it with tests and file tickets for any issues found in the covered 
> area.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-20384) Clean up abandoned resources for destroyed tables in catalog

2024-04-08 Thread Iurii Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Iurii Gerzhedovich updated IGNITE-20384:

Epic Link: IGNITE-21991  (was: IGNITE-21211)

> Clean up abandoned resources for destroyed tables in catalog
> 
>
> Key: IGNITE-20384
> URL: https://issues.apache.org/jira/browse/IGNITE-20384
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0
>
>
> We need to clean up abandoned resources (from the vault and the metastore) for 
> tables that have been destroyed (removed from the catalog).
> Perhaps this will be two separate tickets.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-21906) Consider disabling inline in PK index by default

2024-04-08 Thread Aleksandr Polovtcev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtcev resolved IGNITE-21906.
--
Resolution: Won't Fix

> Consider disabling inline in PK index by default
> 
>
> Key: IGNITE-21906
> URL: https://issues.apache.org/jira/browse/IGNITE-21906
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In aipersist/aimem we attempt to inline binary tuples into pages for hash 
> indexes by default. This, in theory, saves us from the necessity of accessing 
> binary tuples from data pages for comparison, which is slower than comparing 
> inlined data.
> But, assuming a good hash distribution, we would only have to do the real 
> comparison for the matched tuple. At the same time, inlined data might be 
> substantially larger than hash+link, meaning that a B+Tree with inlined data 
> has a bigger height, which correlates with slower search speed.
> So, we have both pros and cons for inlining, and the only real way to 
> reconcile them is to compare them with some benchmarks. This is exactly what 
> I propose.
> TL;DR: force the inline size to be 0 for hash indices and benchmark put/get 
> operations with a large enough amount of data.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21906) Consider disabling inline in PK index by default

2024-04-08 Thread Aleksandr Polovtcev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834856#comment-17834856
 ] 

Aleksandr Polovtcev commented on IGNITE-21906:
--

I've run the benchmark a couple more times and every time the results were a 
little bit worse than the baseline. Therefore, we decided to stop further 
investigation and close this ticket.

> Consider disabling inline in PK index by default
> 
>
> Key: IGNITE-21906
> URL: https://issues.apache.org/jira/browse/IGNITE-21906
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Aleksandr Polovtcev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In aipersist/aimem we attempt to inline binary tuples into pages for hash 
> indexes by default. This, in theory, saves us from the necessity of accessing 
> binary tuples from data pages for comparison, which is slower than comparing 
> inlined data.
> But, assuming a good hash distribution, we would only have to do the real 
> comparison for the matched tuple. At the same time, inlined data might be 
> substantially larger than hash+link, meaning that a B+Tree with inlined data 
> has a bigger height, which correlates with slower search speed.
> So, we have both pros and cons for inlining, and the only real way to 
> reconcile them is to compare them with some benchmarks. This is exactly what 
> I propose.
> TL;DR: force the inline size to be 0 for hash indices and benchmark put/get 
> operations with a large enough amount of data.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21490) .NET: Thin 3.0: DataStreamer data removal

2024-04-08 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834851#comment-17834851
 ] 

Pavel Tupitsyn commented on IGNITE-21490:
-

Merged to main: 9a24cc614535c2e668cc0871bec1f1fee5928d29

> .NET: Thin 3.0: DataStreamer data removal
> -
>
> Key: IGNITE-21490
> URL: https://issues.apache.org/jira/browse/IGNITE-21490
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement data removal in .NET data streamer - see Java API changes in 
> IGNITE-21403.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21996) Sql. SET DATA TYPE command allow change from null to not null

2024-04-08 Thread Iurii Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Iurii Gerzhedovich updated IGNITE-21996:

Description: 
As of now, there is no validation when amending a column from NULLABLE to NOT 
NULLABLE using the DDL command
{code:java}
ALTER TABLE ... ALTER COLUMN ... SET DATA TYPE <type> NOT NULL ...{code}
 

Let's fix it.

  was:
As of now absent validation during the amend column from NOT NULLABLE to 
NULLABLE in case using DDL command
{code:java}
ALTER TABLE ... ALTER COLUMN ... SET DATA TYPE   NOT NULL ...{code}
 

Let's fix it.


> Sql. SET DATA TYPE command allow change from null to not null
> -
>
> Key: IGNITE-21996
> URL: https://issues.apache.org/jira/browse/IGNITE-21996
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Priority: Major
>  Labels: ignite-3
>
> As of now, there is no validation when amending a column from NULLABLE to NOT 
> NULLABLE using the DDL command
> {code:java}
> ALTER TABLE ... ALTER COLUMN ... SET DATA TYPE <type> NOT NULL ...{code}
>  
> Let's fix it.
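
For illustration, a reproduction sketch over JDBC (the connection URL is 
illustrative and not verified against the current driver; the point is that the 
last statement is currently accepted even though the column is nullable and 
already contains NULLs, while the expectation is that it gets validated):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AlterColumnNotNullRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
                Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE TABLE t (id INT PRIMARY KEY, v INT)"); // v is nullable
            stmt.executeUpdate("INSERT INTO t VALUES (1, NULL)");
            // Expected after the fix: this ALTER is validated (and rejected here); currently it is accepted.
            stmt.executeUpdate("ALTER TABLE t ALTER COLUMN v SET DATA TYPE INT NOT NULL");
        }
    }
}
{code}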



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21996) Sql. SET DATA TYPE command allow change from null to not null

2024-04-08 Thread Iurii Gerzhedovich (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Iurii Gerzhedovich updated IGNITE-21996:

Summary: Sql. SET DATA TYPE command allow change from null to not null  
(was: Sql. SET DATA TYPE command allow change from not null to null)

> Sql. SET DATA TYPE command allow change from null to not null
> -
>
> Key: IGNITE-21996
> URL: https://issues.apache.org/jira/browse/IGNITE-21996
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Iurii Gerzhedovich
>Priority: Major
>  Labels: ignite-3
>
> As of now absent validation during the amend column from NOT NULLABLE to 
> NULLABLE in case using DDL command
> {code:java}
> ALTER TABLE ... ALTER COLUMN ... SET DATA TYPE   NOT NULL ...{code}
>  
> Let's fix it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-21859) Causality token stays 0 for default zone

2024-04-08 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834838#comment-17834838
 ] 

Mirza Aliev edited comment on IGNITE-21859 at 4/8/24 8:21 AM:
--

The root cause of this problem is the way the default zone is created. It is 
just injected into the Catalog constructor and is not created through the common 
flow of Catalog entity creation; hence the Catalog with this default zone is not 
saved in the meta storage during the initialisation phase, and the default zone 
descriptor doesn't have a correct {{CatalogObjectDescriptor#updateToken()}}. I 
expect this bug will be fixed automatically by the default zone refactoring 
epic, because the initialisation of the default zone will be changed: 
https://issues.apache.org/jira/browse/IGNITE-20613


was (Author: maliev):
The root cause of this problem is the way how the default zone is created. It 
is just injected to the Catalog constructor and is not created through the 
common flow of Catalog entity creation, hence Catalog with this default zone is 
not saved in the meta storage on the initialisation phase and default zone 
descriptor doesn't have correct {{CatalogObjectDescriptor#updateToken()}}. I 
expect this bug is being fixed in the default zone refactoring epic 
https://issues.apache.org/jira/browse/IGNITE-20613

> Causality token stays 0 for default zone
> 
>
> Key: IGNITE-21859
> URL: https://issues.apache.org/jira/browse/IGNITE-21859
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 3.0.0-beta1
>Reporter: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>
> We have a problem: if no alter or other action was performed on the default 
> zone, the causality token in CatalogZoneDescriptor will remain 0. 
> This causes an error on any attempt to rebalance any table in that zone:
> {code}
> [2024-03-27T14:27:22,231][ERROR][%icbt_tacwdws_0%rebalance-scheduler-18][DistributionZoneRebalanceEngine]
>  Failed to update stable keys for tables [[TESTTABLE]]
> {code}
> If we add a stacktrace to the output, we get the following:
> {code}
> [2024-03-27T14:27:22,231][ERROR][%icbt_tacwdws_0%rebalance-scheduler-13][DistributionZoneRebalanceEngine]
>  CATCH, 
>  java.lang.IllegalArgumentException: causalityToken must be greater then zero 
> [causalityToken=0"
>   at 
> org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.dataNodes(CausalityDataNodesEngine.java:139)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.DistributionZoneManager.dataNodes(DistributionZoneManager.java:324)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine.calculateAssignments(DistributionZoneRebalanceEngine.java:346)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.RebalanceRaftGroupEventsListener.doStableKeySwitch(RebalanceRaftGroupEventsListener.java:408)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine$3.lambda$onUpdate$0(DistributionZoneRebalanceEngine.java:294)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at java.base/java.util.HashMap.forEach(HashMap.java:1337) ~[?:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine$3.lambda$onUpdate$1(DistributionZoneRebalanceEngine.java:293)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>  [?:?]
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 
> [?:?]
>   at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
> {code}
> The workaround is to create a zone explicitly and assign that zone to the table. 
> It also wouldn't be a bad idea to print the stacktrace for "Failed to update 
> stable keys for tables", at least at DEBUG log level.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-21859) Causality token stays 0 for default zone

2024-04-08 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834838#comment-17834838
 ] 

Mirza Aliev edited comment on IGNITE-21859 at 4/8/24 8:19 AM:
--

The root cause of this problem is the way how the default zone is created. It 
is just injected to the Catalog constructor and is not created through the 
common flow of Catalog entity creation, hence Catalog with this default zone is 
not saved in the meta storage on the initialisation phase and default zone 
descriptor doesn't have correct {{CatalogObjectDescriptor#updateToken()}}. I 
expect this bug is being fixed in the default zone refactoring epic 
https://issues.apache.org/jira/browse/IGNITE-20613


was (Author: maliev):
The root cause of this problem is the way how the default zone is created. It 
is just injected to the Catalog constructor and is not created through the 
common flow of Catalog entity creation, hence Catalog with this default zone is 
not saved in the meta storage on the initialisation phase and default zone 
descriptor doesn't have correct {{CatalogObjectDescriptor#updateToken()}}. I 
expect this bug is fixed in the default zone refactoring epic 
https://issues.apache.org/jira/browse/IGNITE-20613

> Causality token stays 0 for default zone
> 
>
> Key: IGNITE-21859
> URL: https://issues.apache.org/jira/browse/IGNITE-21859
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 3.0.0-beta1
>Reporter: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>
> We have a problem: if no alter or other action was performed on the default 
> zone, the causality token in CatalogZoneDescriptor will remain 0. 
> This causes an error on any attempt to rebalance any table in that zone:
> {code}
> [2024-03-27T14:27:22,231][ERROR][%icbt_tacwdws_0%rebalance-scheduler-18][DistributionZoneRebalanceEngine]
>  Failed to update stable keys for tables [[TESTTABLE]]
> {code}
> If we add a stacktrace to the output, we get the following:
> {code}
> [2024-03-27T14:27:22,231][ERROR][%icbt_tacwdws_0%rebalance-scheduler-13][DistributionZoneRebalanceEngine]
>  CATCH, 
>  java.lang.IllegalArgumentException: causalityToken must be greater then zero 
> [causalityToken=0"
>   at 
> org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.dataNodes(CausalityDataNodesEngine.java:139)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.DistributionZoneManager.dataNodes(DistributionZoneManager.java:324)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine.calculateAssignments(DistributionZoneRebalanceEngine.java:346)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.RebalanceRaftGroupEventsListener.doStableKeySwitch(RebalanceRaftGroupEventsListener.java:408)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine$3.lambda$onUpdate$0(DistributionZoneRebalanceEngine.java:294)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at java.base/java.util.HashMap.forEach(HashMap.java:1337) ~[?:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine$3.lambda$onUpdate$1(DistributionZoneRebalanceEngine.java:293)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>  [?:?]
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 
> [?:?]
>   at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
> {code}
> The workaround is to create a zone explicitly and assign that zone to the table. 
> It also wouldn't be a bad idea to print the stacktrace for "Failed to update 
> stable keys for tables", at least at DEBUG log level.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (IGNITE-21859) Causality token stays 0 for default zone

2024-04-08 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834838#comment-17834838
 ] 

Mirza Aliev edited comment on IGNITE-21859 at 4/8/24 8:18 AM:
--

The root cause of this problem is the way how the default zone is created. It 
is just injected to the Catalog constructor and is not created through the 
common flow of Catalog entity creation, hence Catalog with this default zone is 
not saved in the meta storage on the initialisation phase and default zone 
descriptor doesn't have correct {{CatalogObjectDescriptor#updateToken()}}. I 
expect this bug is fixed in the default zone refactoring epic 
https://issues.apache.org/jira/browse/IGNITE-20613


was (Author: maliev):
The root cause of this problem is the way how the default zone is created. It 
is just injected to the Catalog constructor and is not created through the 
common flow of Catalog entity creation, hence Catalog with this default zone is 
not saved in the meta storage on the initialisation phase. I expect this bug is 
fixed in the default zone refactoring epic 
https://issues.apache.org/jira/browse/IGNITE-20613

> Causality token stays 0 for default zone
> 
>
> Key: IGNITE-21859
> URL: https://issues.apache.org/jira/browse/IGNITE-21859
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 3.0.0-beta1
>Reporter: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>
> We have a problem: if no alter or other action was performed on the default 
> zone, the causality token in CatalogZoneDescriptor will remain 0. 
> This causes an error on any attempt to rebalance any table in that zone:
> {code}
> [2024-03-27T14:27:22,231][ERROR][%icbt_tacwdws_0%rebalance-scheduler-18][DistributionZoneRebalanceEngine]
>  Failed to update stable keys for tables [[TESTTABLE]]
> {code}
> If we add a stacktrace to the output, we get the following:
> {code}
> [2024-03-27T14:27:22,231][ERROR][%icbt_tacwdws_0%rebalance-scheduler-13][DistributionZoneRebalanceEngine]
>  CATCH, 
>  java.lang.IllegalArgumentException: causalityToken must be greater then zero 
> [causalityToken=0"
>   at 
> org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.dataNodes(CausalityDataNodesEngine.java:139)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.DistributionZoneManager.dataNodes(DistributionZoneManager.java:324)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine.calculateAssignments(DistributionZoneRebalanceEngine.java:346)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.RebalanceRaftGroupEventsListener.doStableKeySwitch(RebalanceRaftGroupEventsListener.java:408)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine$3.lambda$onUpdate$0(DistributionZoneRebalanceEngine.java:294)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at java.base/java.util.HashMap.forEach(HashMap.java:1337) ~[?:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine$3.lambda$onUpdate$1(DistributionZoneRebalanceEngine.java:293)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>  [?:?]
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 
> [?:?]
>   at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
> {code}
> The workaround is to create a zone explicitly and assign that zone to the table. 
> It also wouldn't be a bad idea to print the stacktrace for "Failed to update 
> stable keys for tables", at least at DEBUG log level.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21859) Causality token stays 0 for default zone

2024-04-08 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-21859:
-
Epic Link: IGNITE-20613

> Causality token stays 0 for default zone
> 
>
> Key: IGNITE-21859
> URL: https://issues.apache.org/jira/browse/IGNITE-21859
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 3.0.0-beta1
>Reporter: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>
> We have a problem: if no alter or other action was performed on the default 
> zone, the causality token in CatalogZoneDescriptor will remain 0. 
> This causes an error on any attempt to rebalance any table in that zone:
> {code}
> [2024-03-27T14:27:22,231][ERROR][%icbt_tacwdws_0%rebalance-scheduler-18][DistributionZoneRebalanceEngine]
>  Failed to update stable keys for tables [[TESTTABLE]]
> {code}
> If we add a stacktrace to the output, we get the following:
> {code}
> [2024-03-27T14:27:22,231][ERROR][%icbt_tacwdws_0%rebalance-scheduler-13][DistributionZoneRebalanceEngine]
>  CATCH, 
>  java.lang.IllegalArgumentException: causalityToken must be greater then zero 
> [causalityToken=0"
>   at 
> org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.dataNodes(CausalityDataNodesEngine.java:139)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.DistributionZoneManager.dataNodes(DistributionZoneManager.java:324)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine.calculateAssignments(DistributionZoneRebalanceEngine.java:346)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.RebalanceRaftGroupEventsListener.doStableKeySwitch(RebalanceRaftGroupEventsListener.java:408)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine$3.lambda$onUpdate$0(DistributionZoneRebalanceEngine.java:294)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at java.base/java.util.HashMap.forEach(HashMap.java:1337) ~[?:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine$3.lambda$onUpdate$1(DistributionZoneRebalanceEngine.java:293)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>  [?:?]
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 
> [?:?]
>   at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
> {code}
> The workaround is to create a zone explicitly and assign that zone to the table. 
> It also wouldn't be a bad idea to print the stacktrace for "Failed to update 
> stable keys for tables", at least at DEBUG log level.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-21859) Causality token stays 0 for default zone

2024-04-08 Thread Mirza Aliev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17834838#comment-17834838
 ] 

Mirza Aliev commented on IGNITE-21859:
--

The root cause of this problem is the way how the default zone is created. It 
is just injected to the Catalog constructor and is not created through the 
common flow of Catalog entity creation, hence Catalog with this default zone is 
not saved in the meta storage on the initialisation phase. I expect this bug is 
fixed in the default zone refactoring epic 
https://issues.apache.org/jira/browse/IGNITE-20613

> Causality token stays 0 for default zone
> 
>
> Key: IGNITE-21859
> URL: https://issues.apache.org/jira/browse/IGNITE-21859
> Project: Ignite
>  Issue Type: Task
>Affects Versions: 3.0.0-beta1
>Reporter: Ivan Zlenko
>Priority: Major
>  Labels: ignite-3
>
> We have a problem: if no alter or other action was performed on the default 
> zone, the causality token in CatalogZoneDescriptor will remain 0. 
> This causes an error on any attempt to rebalance any table in that zone:
> {code}
> [2024-03-27T14:27:22,231][ERROR][%icbt_tacwdws_0%rebalance-scheduler-18][DistributionZoneRebalanceEngine]
>  Failed to update stable keys for tables [[TESTTABLE]]
> {code}
> If we add a stacktrace to the output, we get the following:
> {code}
> [2024-03-27T14:27:22,231][ERROR][%icbt_tacwdws_0%rebalance-scheduler-13][DistributionZoneRebalanceEngine]
>  CATCH, 
>  java.lang.IllegalArgumentException: causalityToken must be greater then zero 
> [causalityToken=0"
>   at 
> org.apache.ignite.internal.distributionzones.causalitydatanodes.CausalityDataNodesEngine.dataNodes(CausalityDataNodesEngine.java:139)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.DistributionZoneManager.dataNodes(DistributionZoneManager.java:324)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine.calculateAssignments(DistributionZoneRebalanceEngine.java:346)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.RebalanceRaftGroupEventsListener.doStableKeySwitch(RebalanceRaftGroupEventsListener.java:408)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine$3.lambda$onUpdate$0(DistributionZoneRebalanceEngine.java:294)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at java.base/java.util.HashMap.forEach(HashMap.java:1337) ~[?:?]
>   at 
> org.apache.ignite.internal.distributionzones.rebalance.DistributionZoneRebalanceEngine$3.lambda$onUpdate$1(DistributionZoneRebalanceEngine.java:293)
>  ~[ignite-distribution-zones-9.0.127-SNAPSHOT.jar:?]
>   at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>  [?:?]
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) 
> [?:?]
>   at 
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
>  [?:?]
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  [?:?]
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  [?:?]
>   at java.base/java.lang.Thread.run(Thread.java:829) [?:?]
> {code}
> The workaround is to create a zone explicitly and assign that zone to the table. 
> It also wouldn't be a bad idea to print the stacktrace for "Failed to update 
> stable keys for tables", at least at DEBUG log level.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-21679) Add gatling plugin for load testing to extensions

2024-04-08 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky resolved IGNITE-21679.
--
Resolution: Fixed

[~serge.korotkov] Thanks for your tremendous effort, merged to master. Great 
job!

> Add gatling plugin for load testing to extensions
> -
>
> Key: IGNITE-21679
> URL: https://issues.apache.org/jira/browse/IGNITE-21679
> Project: Ignite
>  Issue Type: Task
>Reporter: Sergey Korotkov
>Assignee: Sergey Korotkov
>Priority: Minor
>  Labels: ducktests
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)