[jira] [Created] (IGNITE-22140) Possible pagination bug in GridCacheQueryManager#runQuery()

2024-04-29 Thread Oleg Valuyskiy (Jira)
Oleg Valuyskiy created IGNITE-22140:
---

 Summary: Possible pagination bug in 
GridCacheQueryManager#runQuery()
 Key: IGNITE-22140
 URL: https://issues.apache.org/jira/browse/IGNITE-22140
 Project: Ignite
  Issue Type: Task
Reporter: Oleg Valuyskiy
Assignee: Oleg Valuyskiy


It looks like there is a pagination bug in the GridCacheQueryManager#runQuery() 
method, caused by the fact that the ‘cnt’ counter does not get reset after 
sending the first page of query results.

It is advised to find out whether the bug really exists and, if so, to fix it.
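A minimal sketch of the suspected pattern (all names are hypothetical; this is not the actual runQuery() code): if the per-page counter is not reset after a page is sent, page boundaries after the first one are computed incorrectly.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration of the suspected bug: a per-page counter that
// must be reset each time a page of results is flushed to the caller.
public class PaginationSketch {
    // Splits 'rows' into pages of 'pageSize', resetting the counter per page.
    static List<List<Integer>> paginate(List<Integer> rows, int pageSize) {
        List<List<Integer>> pages = new ArrayList<>();
        List<Integer> page = new ArrayList<>();
        int cnt = 0; // analogous to the 'cnt' counter in runQuery()

        for (Integer row : rows) {
            page.add(row);

            if (++cnt == pageSize) {
                pages.add(page);
                page = new ArrayList<>();
                cnt = 0; // the reset that the ticket suspects is missing
            }
        }

        if (!page.isEmpty()) {
            pages.add(page); // last, possibly partial, page
        }

        return pages;
    }

    public static void main(String[] args) {
        System.out.println(paginate(List.of(1, 2, 3, 4, 5), 2)); // [[1, 2], [3, 4], [5]]
    }
}
```

Without the `cnt = 0` line, every row after the first page would be flushed as a single-row page, which matches the kind of pagination anomaly described above.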



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-20475) .NET: Thin 3.0: EF Core provider (investigate)

2024-04-29 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842217#comment-17842217
 ] 

Igor Sapego commented on IGNITE-20475:
--

[~ptupitsyn] Looks great to me. Feel free to resolve the ticket.

> .NET: Thin 3.0: EF Core provider (investigate)
> --
>
> Key: IGNITE-20475
> URL: https://issues.apache.org/jira/browse/IGNITE-20475
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> EF Core provider will allow using Ignite as any other DB supported by EF. 
> This makes adoption incredibly easy for the users - a matter of a few changed 
> lines to switch from Postgres/MsSQL/MySQL, keeping all existing code.
> This is also a big undertaking and needs more investigation/PoC.
> * https://learn.microsoft.com/en-us/ef/core/providers/writing-a-provider
> * 
> https://blog.oneunicorn.com/2016/11/11/so-you-want-to-write-an-ef-core-provider/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-22086) Thin 3.0: observableTimestamp is 0 after handshake

2024-04-29 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842216#comment-17842216
 ] 

Igor Sapego commented on IGNITE-22086:
--

[~ptupitsyn] Looks good overall, but I have left one comment that needs to be 
addressed.

> Thin 3.0: observableTimestamp is 0 after handshake
> --
>
> Key: IGNITE-22086
> URL: https://issues.apache.org/jira/browse/IGNITE-22086
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We propagate *observableTimestamp* to client with every response (see 
> *ClientInboundMessageHandler#writeResponseHeader*), but not on handshake. As 
> a result, the very first operation from the client has 
> *observableTimestamp=0*, which can lead to causality issues.
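The causality issue can be illustrated with a small sketch (class and method names here are hypothetical, not the actual protocol classes): the client keeps the maximum observable timestamp seen in server response headers and attaches it to each request. If the handshake response does not carry a timestamp, the first operation necessarily goes out with 0.

```java
// Hypothetical sketch of client-side observable-timestamp tracking.
public class ObservableTimestampSketch {
    private long observableTimestamp; // stays 0 until a response carries a value

    // Called for every server response header (and, per the fix, on handshake too).
    void onResponse(long serverTimestamp) {
        observableTimestamp = Math.max(observableTimestamp, serverTimestamp);
    }

    // Timestamp attached to the next outgoing request.
    long timestampToSend() {
        return observableTimestamp;
    }

    public static void main(String[] args) {
        ObservableTimestampSketch client = new ObservableTimestampSketch();

        // Without a timestamp in the handshake, the very first op sends 0:
        System.out.println(client.timestampToSend()); // 0

        client.onResponse(42L); // first regular response
        System.out.println(client.timestampToSend()); // 42
    }
}
```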



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22139) JDBC request to degraded cluster freezes

2024-04-29 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22139:
--
Summary: JDBC request to degraded cluster freezes  (was: JDBC request to 
degraded cluster stucks)

> JDBC request to degraded cluster freezes
> 
>
> Key: IGNITE-22139
> URL: https://issues.apache.org/jira/browse/IGNITE-22139
> Project: Ignite
>  Issue Type: Bug
>  Components: general, jdbc, networking, persistence
>Affects Versions: 3.0.0-beta1
> Environment: The 2 or 3 nodes cluster running locally.
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Steps to reproduce:*
>  # Create a zone with a replica count equal to the number of nodes (2 or 3, respectively).
>  # Create 10 tables inside the zone.
>  # Insert 100 rows into every table.
>  # Await that the local state of all tables*partitions*nodes is "HEALTHY".
>  # Await that the global state of all tables*partitions*nodes is "AVAILABLE".
>  # Kill the first node with kill -9.
>  # Assert that the local state of all tables*partitions*nodes is "HEALTHY".
>  # Await that the global state of all tables*partitions*nodes is "READ_ONLY" for a 2-node 
> cluster or "DEGRADED" for a 3-node cluster.
>  # Execute a select query over JDBC, connecting to the second node (which is 
> alive).
> *Expected:*
> Data is returned.
> *Actual:*
> The select query at step 9 freezes forever.
> The errors on the server side:
> {code:java}
> 2024-04-30 00:04:02:965 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][AbstractClientService]
>  Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
> java.util.concurrent.TimeoutException.
> 2024-04-30 00:04:02:965 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-StepDownTimer-8][ReplicatorGroupImpl]
>  Fail to check replicator connection to 
> peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
> 2024-04-30 00:04:02:980 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][AbstractClientService]
>  Fail to connect ClusterFailover3NodesTest_cluster_0, exception: 
> java.util.concurrent.TimeoutException.
> 2024-04-30 00:04:02:980 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][ReplicatorGroupImpl]
>  Fail to check replicator connection to 
> peer=ClusterFailover3NodesTest_cluster_0, replicatorType=Follower.
> 2024-04-30 00:04:02:981 +0200 
> [ERROR][%ClusterFailover3NodesTest_cluster_1%JRaft-Response-Processor-1][NodeImpl]
>  Fail to add a replicator, peer=ClusterFailover3NodesTest_cluster_0.
> 2024-04-30 00:04:02:981 +0200 
> [WARNING][ClusterFailover3NodesTest_cluster_1-client-8][RaftGroupServiceImpl] 
> Recoverable error during the request occurred (will be retried on the 
> randomly selected node) [request=WriteActionRequestImpl [command=[0, 9, 41, 
> -117, -128, -8, -15, -83, -4, -54, -57, 1], 
> deserializedCommand=SafeTimeSyncCommandImpl 
> [safeTimeLong=112356769098760202], groupId=26_part_10], peer=Peer 
> [consistentId=ClusterFailover3NodesTest_cluster_0, idx=0], newPeer=Peer 
> [consistentId=ClusterFailover3NodesTest_cluster_1, idx=0]].
> java.util.concurrent.CompletionException: 
> io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection 
> refused: no further information: /192.168.100.5:3344
>   at 
> java.base/java.util.concurrent.CompletableFuture.encodeRelay(CompletableFuture.java:367)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeRelay(CompletableFuture.java:376)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1074)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088)
>   at 
> org.apache.ignite.internal.network.netty.NettyUtils.lambda$toCompletableFuture$0(NettyUtils.java:74)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:590)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:583)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:559)
>   at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:492)
>   at 
> io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:636)
>   at 
> io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:629)
>   at 
> io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:118)
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:326)
>   at 
> io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:342)
>   at 
> io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEve

[jira] [Updated] (IGNITE-22139) JDBC request to degraded cluster freezes

2024-04-29 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22139:
--

[jira] [Updated] (IGNITE-22139) JDBC request to degraded cluster freezes forever

2024-04-29 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22139:
--
Summary: JDBC request to degraded cluster freezes forever  (was: JDBC 
request to degraded cluster freezes)

> JDBC request to degraded cluster freezes forever
> 
>
> Key: IGNITE-22139
> URL: https://issues.apache.org/jira/browse/IGNITE-22139
> Project: Ignite
>  Issue Type: Bug
>  Components: general, jdbc, networking, persistence
>Affects Versions: 3.0.0-beta1
> Environment: The 2 or 3 nodes cluster running locally.
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>

[jira] [Updated] (IGNITE-22139) JDBC request to degraded cluster stucks

2024-04-29 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor updated IGNITE-22139:
--
Summary: JDBC request to degraded cluster stucks  (was: JDBC request to 
degraded cluster stuck)

> JDBC request to degraded cluster stucks
> ---
>
> Key: IGNITE-22139
> URL: https://issues.apache.org/jira/browse/IGNITE-22139
> Project: Ignite
>  Issue Type: Bug
>  Components: general, jdbc, networking, persistence
>Affects Versions: 3.0.0-beta1
> Environment: The 2 or 3 nodes cluster running locally.
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>

[jira] [Created] (IGNITE-22139) JDBC request to degraded cluster stuck

2024-04-29 Thread Igor (Jira)
Igor created IGNITE-22139:
-

 Summary: JDBC request to degraded cluster stuck
 Key: IGNITE-22139
 URL: https://issues.apache.org/jira/browse/IGNITE-22139
 Project: Ignite
  Issue Type: Bug
  Components: general, jdbc, networking, persistence
Affects Versions: 3.0.0-beta1
 Environment: The 2 or 3 nodes cluster running locally.
Reporter: Igor



[jira] [Commented] (IGNITE-21908) Add metrics of distribution among stripes in disruptor

2024-04-29 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842133#comment-17842133
 ] 

Vladislav Pyatkov commented on IGNITE-21908:


The following metrics were added as part of this ticket:
||Name||Description||
|jraft.fsmcaller.disruptor.Batch|Histogram of batch sizes handled in the state machine for partitions|
|jraft.fsmcaller.disruptor.Stripes|Histogram of data distribution across stripes in the state machine for partitions|
|jraft.nodeimpl.disruptor.Batch|Histogram of batch sizes for node operations for partitions|
|jraft.nodeimpl.disruptor.Stripes|Histogram of data distribution across stripes for node operations for partitions|
|jraft.readonlyservice.disruptor.Batch|Histogram of batch sizes for read-only operations for partitions|
|jraft.readonlyservice.disruptor.Stripes|Histogram of data distribution across stripes for read-only operations for partitions|
|jraft.logmanager.disruptor.Batch|Histogram of batch sizes handled in the log for partitions|
|jraft.logmanager.disruptor.Stripes|Histogram of data distribution across stripes in the log for partitions|
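A sketch of how a stripe-distribution histogram could be collected (hypothetical code, not the actual {{MetricSource}} implementation): tally how many events land on each stripe, so the uniformity of the distribution can be estimated.

```java
// Hypothetical sketch: count events per disruptor stripe to estimate
// how uniformly work is distributed.
public class StripeHistogramSketch {
    private final long[] counts; // one counter per stripe

    StripeHistogramSketch(int stripes) {
        this.counts = new long[stripes];
    }

    // Record an event keyed by, e.g., a partition id; the key is mapped
    // to a stripe by modulo, mirroring a typical striping scheme.
    void record(int key) {
        counts[Math.floorMod(key, counts.length)]++;
    }

    long[] snapshot() {
        return counts.clone();
    }

    public static void main(String[] args) {
        StripeHistogramSketch histogram = new StripeHistogramSketch(4);

        for (int partition = 0; partition < 8; partition++) {
            histogram.record(partition);
        }

        // Sequential keys spread perfectly evenly across the 4 stripes:
        System.out.println(java.util.Arrays.toString(histogram.snapshot())); // [2, 2, 2, 2]
    }
}
```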


> Add metrics of distribution among stripes in disruptor
> --
>
> Key: IGNITE-21908
> URL: https://issues.apache.org/jira/browse/IGNITE-21908
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> The metrics are useful for estimating the uniformity of the distribution.
> h3. Implementation notes
> These metrics can be implemented using the common approach based on the 
> {{MetricSource}} interface.
> h3. Definition of done
> Metrics that become available:
> * histogram of batch sizes
> * histogram of how processed operations are distributed across stripes



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-19670) Improve CatalogService test coverage.

2024-04-29 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov reassigned IGNITE-19670:
-

Assignee: (was: Maksim Zhuravkov)

> Improve CatalogService test coverage.
> -
>
> Key: IGNITE-19670
> URL: https://issues.apache.org/jira/browse/IGNITE-19670
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3, tech-debt-test
>
> 1. CatalogServiceSelftTest.testCreateTable (+testDropTable) looks a bit 
> complicated: it checks the creation of more than one table. Let's simplify the 
> test by reverting the last changes.
> 2. We use a shared counter to generate unique identifiers for schema objects. 
> Some tests check the schema object id, and some don't. Let's move the 
> schema object id check into a separate test, to verify which commands 
> increment the counter and which don't.
> 3. Let's add a test that checks the ABA problem, e.g. create-drop-create a 
> table (or index) with the same name and check that the object can be resolved 
> correctly by name and by id (taking object versioning in the Catalog into 
> account, of course).
> 4. Move the Catalog operations tests to a separate class.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-18657) Sql. Decimal casts DECIMAL::VARCHAR

2024-04-29 Thread Iurii Gerzhedovich (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-18657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842003#comment-17842003
 ] 

Iurii Gerzhedovich commented on IGNITE-18657:
-

Right now the queries expected fails, but we have not so user-friendly error 
message:
org.apache.ignite.sql.SqlException: IGN-SQL-9 
TraceId:0460f382-120f-49a4-9ee2-e09ee90d4764 Numeric field overflow

For example, the same error in PostgreSQL looks like this:
ERROR: numeric field overflow DETAIL: A field with precision 3, scale 4 must 
round to an absolute value less than 10^-1.
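For reference, the DECIMAL(3, 3) overflow above follows from precision/scale arithmetic: all three digits sit after the decimal point, so the absolute value must round to below 10^(precision - scale) = 1. A minimal, hypothetical Java sketch of that check (illustrative only, not Ignite code):

```java
import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.RoundingMode;

public class DecimalOverflowSketch {
    // true when v fits into DECIMAL(precision, scale):
    // after rescaling (as a cast would), the unscaled value must have
    // at most `precision` digits, i.e. |unscaled| < 10^precision.
    static boolean fits(BigDecimal v, int precision, int scale) {
        BigDecimal scaled = v.setScale(scale, RoundingMode.HALF_UP);
        BigInteger limit = BigInteger.TEN.pow(precision);
        return scaled.unscaledValue().abs().compareTo(limit) < 0;
    }

    public static void main(String[] args) {
        System.out.println(fits(new BigDecimal("0.1"), 3, 3)); // true: 0.100 has 3 digits
        System.out.println(fits(new BigDecimal("1"), 3, 3));   // false: needs an integer digit
        System.out.println(fits(new BigDecimal("-1"), 3, 3));  // false
    }
}
```

With this check, both '1'::DECIMAL(3, 3) and '-1'::DECIMAL(3, 3) from the test cases below would be rejected.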

> Sql. Decimal casts DECIMAL::VARCHAR
> ---
>
> Key: IGNITE-18657
> URL: https://issues.apache.org/jira/browse/IGNITE-18657
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: calcite2-required, calcite3-required, ignite-3
> Fix For: 3.0.0-beta2
>
>
> The following test cases in test_decimal.test do not fail:
> {code:java}
> # any value >= 1 becomes out of range, though
> skipif ignite3
> statement error
> SELECT '1'::DECIMAL(3, 3)::VARCHAR;
> skipif ignite3
> statement error
> SELECT '-1'::DECIMAL(3, 3)::VARCHAR;
> # various error conditions
> # scale must be bigger than or equal to width
> skipif ignite3
> statement error
> SELECT '0.1'::DECIMAL(3, 4);
> # width/scale out of range
> skipif ignite3
> statement error
> SELECT '0.1'::DECIMAL(1000);
>   {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22138) Sql. Get rid of the ability to set CHARACTER SET for character strings

2024-04-29 Thread Iurii Gerzhedovich (Jira)
Iurii Gerzhedovich created IGNITE-22138:
---

 Summary: Sql. Get rid of the ability to set CHARACTER SET for character 
strings
 Key: IGNITE-22138
 URL: https://issues.apache.org/jira/browse/IGNITE-22138
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Iurii Gerzhedovich


AI doesn't support charsets for character strings, but we can still set one, at 
least through DDL and in CAST expressions.

Let's consider removing CHARACTER SET support from the parser. If that is not 
simple, let's forbid setting it through DDL.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22137) Rename RocksDb storage engine to "rocksdb" in configuration

2024-04-29 Thread Aleksandr Polovtsev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtsev updated IGNITE-22137:
-
Priority: Minor  (was: Major)

> Rename RocksDb storage engine to "rocksdb" in configuration
> ---
>
> Key: IGNITE-22137
> URL: https://issues.apache.org/jira/browse/IGNITE-22137
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtsev
>Priority: Minor
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently {{RocksDbStorageEngine}} is called "rocksDb" in configuration, which 
> is inconsistent with other storage engines, like "aipersist" and "aimem". I 
> propose renaming it to "rocksdb". However, this is an incompatible change in 
> terms of the configuration API, so extra caution must be taken.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22137) Rename RocksDb storage engine to "rocksdb" in configuration

2024-04-29 Thread Aleksandr Polovtsev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtsev updated IGNITE-22137:
-
Description: Currently {{RocksDbStorageEngine}} is called "rocksDb" in 
configuration, which is inconsistent with other storage engines, like 
"aipersist" and "aimem". I propose renaming it to "rocksdb". However, this is 
an incompatible change in terms of the configuration API, so extra caution 
must be taken.

> Rename RocksDb storage engine to "rocksdb" in configuration
> ---
>
> Key: IGNITE-22137
> URL: https://issues.apache.org/jira/browse/IGNITE-22137
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtsev
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently {{RocksDbStorageEngine}} is called "rocksDb" in configuration, which 
> is inconsistent with other storage engines, like "aipersist" and "aimem". I 
> propose renaming it to "rocksdb". However, this is an incompatible change in 
> terms of the configuration API, so extra caution must be taken.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22137) Rename RocksDb storage engine to "rocksdb" in configuration

2024-04-29 Thread Aleksandr Polovtsev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtsev updated IGNITE-22137:
-
Labels: ignite-3  (was: )

> Rename RocksDb storage engine to "rocksdb" in configuration
> ---
>
> Key: IGNITE-22137
> URL: https://issues.apache.org/jira/browse/IGNITE-22137
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtsev
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22137) Rename RocksDb storage engine to "rocksdb" in configuration

2024-04-29 Thread Aleksandr Polovtsev (Jira)
Aleksandr Polovtsev created IGNITE-22137:


 Summary: Rename RocksDb storage engine to "rocksdb" in 
configuration
 Key: IGNITE-22137
 URL: https://issues.apache.org/jira/browse/IGNITE-22137
 Project: Ignite
  Issue Type: Improvement
Reporter: Aleksandr Polovtsev






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22137) Rename RocksDb storage engine to "rocksdb" in configuration

2024-04-29 Thread Aleksandr Polovtsev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtsev updated IGNITE-22137:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Rename RocksDb storage engine to "rocksdb" in configuration
> ---
>
> Key: IGNITE-22137
> URL: https://issues.apache.org/jira/browse/IGNITE-22137
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtsev
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22137) Rename RocksDb storage engine to "rocksdb" in configuration

2024-04-29 Thread Aleksandr Polovtsev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtsev updated IGNITE-22137:
-
Fix Version/s: 3.0.0-beta2

> Rename RocksDb storage engine to "rocksdb" in configuration
> ---
>
> Key: IGNITE-22137
> URL: https://issues.apache.org/jira/browse/IGNITE-22137
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksandr Polovtsev
>Priority: Major
> Fix For: 3.0.0-beta2
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21743) .NET: LINQ: Cast to decimal loses precision

2024-04-29 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21743:

Priority: Minor  (was: Major)

> .NET: LINQ: Cast to decimal loses precision
> ---
>
> Key: IGNITE-21743
> URL: https://issues.apache.org/jira/browse/IGNITE-21743
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET, LINQ, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently we use *as decimal* cast, which results in *scale=0*, truncating 
> any digits after the decimal point.
> See ticket mention in *IgniteQueryExpressionVisitor* - we should specify 
> scale and precision when performing cast to decimal.
> Blocked by IGNITE-21745 : .NET can't read small values with high scale.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (IGNITE-22088) retryPolicy of IgniteClient doesn't work on transaction fail

2024-04-29 Thread Igor (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor resolved IGNITE-22088.
---
Resolution: Not A Bug

From the doc:
_Sets the retry policy. When a request fails due to a connection error, and 
multiple server connections are available, Ignite will retry the request if the 
specified policy allows it._

> retryPolicy of IgniteClient doesn't work on transaction fail
> 
>
> Key: IGNITE-22088
> URL: https://issues.apache.org/jira/browse/IGNITE-22088
> Project: Ignite
>  Issue Type: Bug
>  Components: clients, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Details:*
> IgniteClient does not retry with the configured retryPolicy on a transaction 
> lock. The default retry policy doesn't work either. Debugging also shows that 
> no code inside `RetryReadPolicy` is invoked during a transaction lock exception.
> *Steps to reproduce:*
> Run the next code:
> {code:java}
> AtomicInteger retriesCount = new AtomicInteger(0);
> RetryReadPolicy retry = new RetryReadPolicy() {
> @Override
> public boolean shouldRetry(RetryPolicyContext context) {
> System.out.println("CHECK IF RETRY SHOULD HAPPEN");
> retriesCount.addAndGet(1);
> return super.shouldRetry(context);
> }
> };
> try (IgniteClient igniteClient1 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build();
> IgniteClient igniteClient2 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build())
>  {
> igniteClient1.sql().execute(null, "CREATE TABLE teachers(id INTEGER 
> PRIMARY KEY, name VARCHAR(200))");
> Transaction tr1 = igniteClient1.transactions().begin();
> Transaction tr2 = igniteClient2.transactions().begin();
> igniteClient1.sql().execute(tr1, "INSERT INTO TEACHERS (id, name) VALUES 
> (" + 3 + ", '" + "Pavel" + "')");
> SqlException exception = assertThrows(SqlException.class, () -> 
> igniteClient2.sql().execute(tr2, "SELECT * FROM teachers"));
> assertTrue(exception.getMessage().contains("Failed to acquire a lock due 
> to a possible deadlock "));
> }
> assertEquals(16, retriesCount.get()); {code}
> *Expected:*
> Executed without errors.
> *Actual:*
> Fails on the last step: expected 16 retries, actual 0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-22088) retryPolicy of IgniteClient doesn't work on transaction fail

2024-04-29 Thread Igor (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841970#comment-17841970
 ] 

Igor commented on IGNITE-22088:
---

[~isapego] sorry, misread the doc. Then it is not the issue.

> retryPolicy of IgniteClient doesn't work on transaction fail
> 
>
> Key: IGNITE-22088
> URL: https://issues.apache.org/jira/browse/IGNITE-22088
> Project: Ignite
>  Issue Type: Bug
>  Components: clients, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Details:*
> IgniteClient does not retry with the configured retryPolicy on a transaction 
> lock. The default retry policy doesn't work either. Debugging also shows that 
> no code inside `RetryReadPolicy` is invoked during a transaction lock exception.
> *Steps to reproduce:*
> Run the next code:
> {code:java}
> AtomicInteger retriesCount = new AtomicInteger(0);
> RetryReadPolicy retry = new RetryReadPolicy() {
> @Override
> public boolean shouldRetry(RetryPolicyContext context) {
> System.out.println("CHECK IF RETRY SHOULD HAPPEN");
> retriesCount.addAndGet(1);
> return super.shouldRetry(context);
> }
> };
> try (IgniteClient igniteClient1 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build();
> IgniteClient igniteClient2 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build())
>  {
> igniteClient1.sql().execute(null, "CREATE TABLE teachers(id INTEGER 
> PRIMARY KEY, name VARCHAR(200))");
> Transaction tr1 = igniteClient1.transactions().begin();
> Transaction tr2 = igniteClient2.transactions().begin();
> igniteClient1.sql().execute(tr1, "INSERT INTO TEACHERS (id, name) VALUES 
> (" + 3 + ", '" + "Pavel" + "')");
> SqlException exception = assertThrows(SqlException.class, () -> 
> igniteClient2.sql().execute(tr2, "SELECT * FROM teachers"));
> assertTrue(exception.getMessage().contains("Failed to acquire a lock due 
> to a possible deadlock "));
> }
> assertEquals(16, retriesCount.get()); {code}
> *Expected:*
> Executed without errors.
> *Actual:*
> Fails on the last step: expected 16 retries, actual 0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22048) .NET: Thin 3.0: Add Gradle task to run tests

2024-04-29 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-22048:

Priority: Trivial  (was: Major)

> .NET: Thin 3.0: Add Gradle task to run tests
> 
>
> Key: IGNITE-22048
> URL: https://issues.apache.org/jira/browse/IGNITE-22048
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Add a Gradle task to run .NET tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22049) C++: Thin 3.0: Add Gradle task to run tests

2024-04-29 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-22049:

Priority: Trivial  (was: Major)

> C++: Thin 3.0: Add Gradle task to run tests
> ---
>
> Key: IGNITE-22049
> URL: https://issues.apache.org/jira/browse/IGNITE-22049
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Igor Sapego
>Priority: Trivial
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Add a Gradle task to run C++ tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-22049) C++: Thin 3.0: Add Gradle task to run tests

2024-04-29 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego reassigned IGNITE-22049:


Assignee: Igor Sapego  (was: Pavel Tupitsyn)

> C++: Thin 3.0: Add Gradle task to run tests
> ---
>
> Key: IGNITE-22049
> URL: https://issues.apache.org/jira/browse/IGNITE-22049
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Add a Gradle task to run C++ tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-22088) retryPolicy of IgniteClient doesn't work on transaction fail

2024-04-29 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841967#comment-17841967
 ] 

Igor Sapego commented on IGNITE-22088:
--

[~lunigorn] Originally, the client-side Retry Policy was implemented to retry 
operations on network failures, not on transaction locks. A Retry Policy for 
transactions should probably be implemented on the server side and have a 
different name and implementation.

> retryPolicy of IgniteClient doesn't work on transaction fail
> 
>
> Key: IGNITE-22088
> URL: https://issues.apache.org/jira/browse/IGNITE-22088
> Project: Ignite
>  Issue Type: Bug
>  Components: clients, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Igor
>Priority: Major
>  Labels: ignite-3
>
> *Details:*
> IgniteClient does not retry with the configured retryPolicy on a transaction 
> lock. The default retry policy doesn't work either. Debugging also shows that 
> no code inside `RetryReadPolicy` is invoked during a transaction lock exception.
> *Steps to reproduce:*
> Run the next code:
> {code:java}
> AtomicInteger retriesCount = new AtomicInteger(0);
> RetryReadPolicy retry = new RetryReadPolicy() {
> @Override
> public boolean shouldRetry(RetryPolicyContext context) {
> System.out.println("CHECK IF RETRY SHOULD HAPPEN");
> retriesCount.addAndGet(1);
> return super.shouldRetry(context);
> }
> };
> try (IgniteClient igniteClient1 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build();
> IgniteClient igniteClient2 = 
> IgniteClient.builder().retryPolicy(retry).addresses("localhost:10800").build())
>  {
> igniteClient1.sql().execute(null, "CREATE TABLE teachers(id INTEGER 
> PRIMARY KEY, name VARCHAR(200))");
> Transaction tr1 = igniteClient1.transactions().begin();
> Transaction tr2 = igniteClient2.transactions().begin();
> igniteClient1.sql().execute(tr1, "INSERT INTO TEACHERS (id, name) VALUES 
> (" + 3 + ", '" + "Pavel" + "')");
> SqlException exception = assertThrows(SqlException.class, () -> 
> igniteClient2.sql().execute(tr2, "SELECT * FROM teachers"));
> assertTrue(exception.getMessage().contains("Failed to acquire a lock due 
> to a possible deadlock "));
> }
> assertEquals(16, retriesCount.get()); {code}
> *Expected:*
> Executed without errors.
> *Actual:*
> Fails on the last step: expected 16 retries, actual 0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-22126) C++: Thin 3.0: Implement MapReduce API

2024-04-29 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego reassigned IGNITE-22126:


Assignee: Igor Sapego

> C++: Thin 3.0: Implement MapReduce API
> --
>
> Key: IGNITE-22126
> URL: https://issues.apache.org/jira/browse/IGNITE-22126
> Project: Ignite
>  Issue Type: Improvement
>  Components: compute, platforms, thin client
>Reporter: Vadim Pakhnushev
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
>
> Implement {{ClientTaskExecution}} in C++ client.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22090) Thin 3.0: Avoid TX_BEGIN round-trip

2024-04-29 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-22090:

Epic Link: IGNITE-19479

> Thin 3.0: Avoid TX_BEGIN round-trip
> ---
>
> Key: IGNITE-22090
> URL: https://issues.apache.org/jira/browse/IGNITE-22090
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> IGNITE-19681 implements tx partition awareness, where TX_BEGIN request is 
> performed together with the first enlisted operation. However, this still 
> involves a separate round-trip to start the transaction.
> We can change the protocol to do two things in one go:
> * Start the transaction
> * Enlist first operation
> See the comment from [~ascherbakov]: 
> https://github.com/apache/ignite-3/pull/3640#discussion_r1575943518
> {code}
> This can be optimized even further.
> Currently we still have +1RTT due to begin tx request/response here, which 
> may be sensitive to small transactions.
> A transaction should be started on first map request.
> For this to work logical client tx id should be assigned on client.
> For example, id can consist of local client counter combined with client 
> unique id assigned by server on handshake.
> One bit of 64 bit id is reserved for "first" flag.
> If an operation is "first", the txn is implicitly started.
> {code}
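The bit-packing idea in the quoted comment could be sketched as follows. The field layout (32-bit client id assigned at handshake, 32-bit local counter, sign bit reserved as the "first operation" flag) is an illustrative assumption, not the actual Ignite protocol:

```java
// Hypothetical sketch of a 64-bit client-assigned transaction id:
// [ first flag : 1 bit | client id : 31 bits | local counter : 32 bits ]
public class ClientTxIdSketch {
    private static final long FIRST_FLAG = 1L << 63;

    // clientId must stay below 2^31 so it does not collide with the flag bit
    static long makeTxId(int clientId, int localCounter, boolean first) {
        // mask the counter to avoid sign extension when widening to long
        long id = ((long) clientId << 32) | (localCounter & 0xFFFFFFFFL);
        return first ? (id | FIRST_FLAG) : id;
    }

    // the server implicitly begins the transaction when this flag is set
    static boolean isFirst(long txId) {
        return (txId & FIRST_FLAG) != 0;
    }
}
```

With such an id, the first enlisted operation carries everything the server needs to begin the transaction, removing the separate TX_BEGIN round-trip.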



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19762) Revisit data region management in rocksdb

2024-04-29 Thread Aleksandr Polovtsev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtsev updated IGNITE-19762:
-
Fix Version/s: 3.0.0-beta2

> Revisit data region management in rocksdb
> -
>
> Key: IGNITE-19762
> URL: https://issues.apache.org/jira/browse/IGNITE-19762
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Aleksandr Polovtsev
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Right now it's not really straightforward what these regions mean.
> After https://issues.apache.org/jira/browse/IGNITE-19591, they are associated 
> with a separate rocksdb instance each, in which case it's unfair to call them 
> regions anymore. More like storages.
> But we still use the same non-configurable path, so there are no immediate 
> advantages over it. We must provide an adequate design for regions and 
> storages, basically, and clean up the mess that we currently have.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19762) Revisit data region management in rocksdb

2024-04-29 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841919#comment-17841919
 ] 

Roman Puchkovskiy commented on IGNITE-19762:


The patch looks good to me

> Revisit data region management in rocksdb
> -
>
> Key: IGNITE-19762
> URL: https://issues.apache.org/jira/browse/IGNITE-19762
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Aleksandr Polovtsev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now it's not really straightforward what these regions mean.
> After https://issues.apache.org/jira/browse/IGNITE-19591, they are associated 
> with a separate rocksdb instance each, in which case it's unfair to call them 
> regions anymore. More like storages.
> But we still use the same non-configurable path, so there are no immediate 
> advantages over it. We must provide an adequate design for regions and 
> storages, basically, and clean up the mess that we currently have.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (IGNITE-22000) Sql. Get rid of DdlSqlToCommandConverter

2024-04-29 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin reassigned IGNITE-22000:
-

Assignee: Pavel Pereslegin

> Sql. Get rid of DdlSqlToCommandConverter
> 
>
> Key: IGNITE-22000
> URL: https://issues.apache.org/jira/browse/IGNITE-22000
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ignite-3, refactoring
>
> Every DDL command currently goes through two levels of conversion:
> from {{AST}} into {{DdlCommand}} using {{DdlSqlToCommandConverter}},
> and from {{DdlCommand}} into {{CatalogCommand}} using 
> {{DdlToCatalogCommandConverter}}.
> It looks like we should do only a single conversion: {{AST}} => 
> {{CatalogCommand}}.
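A rough, hypothetical sketch of the proposed collapse. Only the two converter names come from the ticket; every other type here is an illustrative placeholder, not the real Ignite API:

```java
// Placeholder stand-ins for the real types involved in the conversion.
interface SqlAstNode {}
interface DdlCommand {}
interface CatalogCommand {}

// Before: two converters chained, one intermediate representation.
interface DdlSqlToCommandConverter {
    DdlCommand convert(SqlAstNode ast);
}
interface DdlToCatalogCommandConverter {
    CatalogCommand convert(DdlCommand cmd);
}

// After: a single direct AST -> CatalogCommand conversion, dropping the
// intermediate DdlCommand layer entirely.
interface SqlToCatalogCommandConverter {
    CatalogCommand convert(SqlAstNode ast);
}
```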



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21300) Implement disaster recovery for secondary indexes

2024-04-29 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-21300:
-
Fix Version/s: 3.0.0-beta2

> Implement disaster recovery for secondary indexes
> -
>
> Key: IGNITE-21300
> URL: https://issues.apache.org/jira/browse/IGNITE-21300
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> It is possible that if we lose part of the log, some available indexes might 
> become "locally" unavailable. In such a case, we will have to finish the build 
> process a second time.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-22136) Update AI3 documentation

2024-04-29 Thread Igor Gusev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-22136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Gusev updated IGNITE-22136:

Component/s: documentation

> Update AI3 documentation
> 
>
> Key: IGNITE-22136
> URL: https://issues.apache.org/jira/browse/IGNITE-22136
> Project: Ignite
>  Issue Type: Task
>  Components: documentation
>Reporter: Igor Gusev
>Priority: Major
>
> It's been a long time since the last documentation update. Let's make sure the 
> docs are up to date.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22136) Update AI3 documentation

2024-04-29 Thread Igor Gusev (Jira)
Igor Gusev created IGNITE-22136:
---

 Summary: Update AI3 documentation
 Key: IGNITE-22136
 URL: https://issues.apache.org/jira/browse/IGNITE-22136
 Project: Ignite
  Issue Type: Task
Reporter: Igor Gusev


It's been a long time since the last documentation update. Let's make sure the 
docs are up to date.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (IGNITE-19762) Revisit data region management in rocksdb

2024-04-29 Thread Aleksandr Polovtsev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17841870#comment-17841870
 ] 

Aleksandr Polovtsev commented on IGNITE-19762:
--

I'm going to close this issue and remove this TODO, along with any mentions of 
data regions in the storage. This ticket is old, and we should probably create 
a new one if we find out that the current profile configuration is not 
sufficient.

> Revisit data region management in rocksdb
> -
>
> Key: IGNITE-19762
> URL: https://issues.apache.org/jira/browse/IGNITE-19762
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Aleksandr Polovtsev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now it's not really straightforward what these regions mean.
> After https://issues.apache.org/jira/browse/IGNITE-19591, they are associated 
> with a separate rocksdb instance each, in which case it's unfair to call them 
> regions anymore. More like storages.
> But we still use the same non-configurable path, so there are no immediate 
> advantages over it. We must provide an adequate design for regions and 
> storages, basically, and clean up the mess that we currently have.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-19762) Revisit data region management in rocksdb

2024-04-29 Thread Aleksandr Polovtsev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Polovtsev updated IGNITE-19762:
-
Reviewer: Roman Puchkovskiy

> Revisit data region management in rocksdb
> -
>
> Key: IGNITE-19762
> URL: https://issues.apache.org/jira/browse/IGNITE-19762
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Aleksandr Polovtsev
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now it's not really straightforward what these regions mean.
> After https://issues.apache.org/jira/browse/IGNITE-19591, they are associated 
> with a separate rocksdb instance each, in which case it's unfair to call them 
> regions anymore. More like storages.
> But we still use the same non-configurable path, so there are no immediate 
> advantages over it. We must provide an adequate design for regions and 
> storages, basically, and clean up the mess that we currently have.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-22135) Sql. Investigate getting rid of CHAR datatype support.

2024-04-29 Thread Iurii Gerzhedovich (Jira)
Iurii Gerzhedovich created IGNITE-22135:
---

 Summary: Sql. Investigate getting rid of CHAR datatype support.
 Key: IGNITE-22135
 URL: https://issues.apache.org/jira/browse/IGNITE-22135
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Iurii Gerzhedovich


The CHAR type is not widely used by users; however, the type has specific 
requirements to be supported properly and needs significant time to do it right.

Let's consider the possibility of getting rid of the type.
Possible solutions, ordered by priority:
1. Make CHAR an alias for VARCHAR, but take into account that in the future we 
may start supporting the type in the right way, so we need to be sure we can 
support a smooth migration from a version without CHAR support to a new one 
with such support (for example, metadata for such a type should reflect the 
real type, VARCHAR).
2. Forbid the CHAR type altogether. No alias even.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)