[jira] [Commented] (IGNITE-21550) ignite-cdc doesn't expose metrics via push metric exporters

2024-02-23 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820141#comment-17820141
 ] 

Ignite TC Bot commented on IGNITE-21550:


{panel:title=Branch: [pull/11248/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/11248/head] Base: [master] : New Tests 
(2)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}PDS 2{color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=7756026]]
* {color:#013220}IgnitePdsTestSuite2: 
CdcPushMetricsExporterTest.testPushMetricsExporter - PASSED{color}

{color:#8b}Disk Page Compressions 2{color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=7756070]]
* {color:#013220}IgnitePdsCompressionTestSuite2: 
CdcPushMetricsExporterTest.testPushMetricsExporter - PASSED{color}

{panel}
[TeamCity *--> Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7756074&buildTypeId=IgniteTests24Java8_RunAll]

> ignite-cdc doesn't expose metrics via push metric exporters
> ---
>
> Key: IGNITE-21550
> URL: https://issues.apache.org/jira/browse/IGNITE-21550
> Project: Ignite
>  Issue Type: Task
>Reporter: Sergey Korotkov
>Assignee: Sergey Korotkov
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For example, CDC-related metrics are not exposed via the 
> OpenCensusMetricExporterSpi.
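For reference, a minimal sketch (node-side, not the standalone ignite-cdc process) of wiring a push metric exporter via {{IgniteConfiguration#setMetricExporterSpi}}; whether ignite-cdc honours the same SPI wiring is exactly what this ticket is about, so treat the CDC-side behaviour as an assumption.

{code:java}
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi;

public class PushExporterConfig {
    /** Builds a node configuration with a push metric exporter attached. */
    public static IgniteConfiguration withPushExporter() {
        // Push exporter that periodically publishes metrics to OpenCensus.
        OpenCensusMetricExporterSpi exporterSpi = new OpenCensusMetricExporterSpi();
        exporterSpi.setPeriod(1_000); // export period in milliseconds

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setMetricExporterSpi(exporterSpi);
        return cfg;
    }
}
{code}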





[jira] [Created] (IGNITE-21600) Make partition-operations a plain executor

2024-02-23 Thread Roman Puchkovskiy (Jira)
Roman Puchkovskiy created IGNITE-21600:
--

 Summary: Make partition-operations a plain executor
 Key: IGNITE-21600
 URL: https://issues.apache.org/jira/browse/IGNITE-21600
 Project: Ignite
  Issue Type: Improvement
Reporter: Roman Puchkovskiy
Assignee: Roman Puchkovskiy
 Fix For: 3.0.0-beta2


The partition-operations thread pool is currently a striped pool.

It was decided that we don't need it to be striped, so it should be replaced 
with a plain thread pool to avoid confusion.
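Purely for illustration, a self-contained sketch of the difference between the two pool shapes (the class name and pool size are hypothetical, not Ignite code): the striped variant pins every partition to one worker by hashing its id, while the plain variant proposed here lets any free worker pick up the task.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PartitionOperationsPools {
    private static final int THREADS = 8; // hypothetical pool size

    // Striped variant: the same partition always lands on the same worker.
    private static final ExecutorService[] STRIPES = new ExecutorService[THREADS];
    static {
        for (int i = 0; i < THREADS; i++)
            STRIPES[i] = Executors.newSingleThreadExecutor();
    }

    static void submitStriped(int partitionId, Runnable task) {
        STRIPES[Math.floorMod(partitionId, THREADS)].execute(task);
    }

    // Plain variant proposed by this ticket: any free worker picks up the task.
    private static final ExecutorService PLAIN = Executors.newFixedThreadPool(THREADS);

    static void submitPlain(Runnable task) {
        PLAIN.execute(task);
    }
}
{code}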





[jira] [Commented] (IGNITE-20553) Unexpected rebalancing immediately after table creation

2024-02-23 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820074#comment-17820074
 ] 

Roman Puchkovskiy commented on IGNITE-20553:


The patch looks good to me

> Unexpected rebalancing immediately after table creation
> ---
>
> Key: IGNITE-20553
> URL: https://issues.apache.org/jira/browse/IGNITE-20553
> Project: Ignite
>  Issue Type: Bug
>Reporter: Kirill Tkalenko
>Assignee: Alexander Lapin
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> During the implementation of IGNITE-20330, it was discovered that when 
> running 
> {*}org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest#checkSchemasCorrectlyRestore{*},
>  rebalancing may begin right after the table is created, and the test then 
> freezes on the first insert ({*}sql(ignite1, 
> String.format("INSERT INTO " + TABLE_NAME + " VALUES(%d, %d, %d)", i, i, 2 * 
> i));{*}). The situation is not reproduced often; you need to run the test 
> several times.
> h3. Upd#1
> It's a known issue that node restart is broken. Before proceeding with this 
> ticket, the metastorage compaction epic should be finished, especially 
> https://issues.apache.org/jira/browse/IGNITE-20210
> h3. Upd#2
> This ticket should be refined in terms of awaiting the logical topology on 
> any node to be the same as on the CMG leader before creating any table.





[jira] [Commented] (IGNITE-21599) Detect AI2 running and add an error to the log

2024-02-23 Thread Stephen Darlington (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820064#comment-17820064
 ] 

Stephen Darlington commented on IGNITE-21599:
-

I'm not sure we need to explicitly check for AI2, but we do need better errors. 
For example, is the underlying cause that we can't bind to port 10800 (the thin 
client port for both AI2 and AI3), or something similar?
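A self-contained sketch of the kind of pre-flight check that would produce a clearer error: try to bind the configured port before startup and fail with an actionable message if it is already taken. The class name, message wording and the assumption that 10800 is the conflicting port are illustrative only.

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class PortPreflightCheck {
    /** Fails fast with an actionable message if the port is already bound. */
    static void ensurePortFree(int port) {
        try (ServerSocket probe = new ServerSocket()) {
            probe.setReuseAddress(true);
            probe.bind(new InetSocketAddress(port));
        }
        catch (IOException e) {
            throw new IllegalStateException("Port " + port + " is already in use. "
                + "Another process (for example an Apache Ignite 2 node, which also "
                + "serves thin clients on 10800) may be running on this host.", e);
        }
    }

    public static void main(String[] args) {
        ensurePortFree(10800); // default thin client port shared by AI2 and AI3
    }
}
{code}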

> Detect AI2 running and add an error to the log
> --
>
> Key: IGNITE-21599
> URL: https://issues.apache.org/jira/browse/IGNITE-21599
> Project: Ignite
>  Issue Type: Task
>Reporter: Igor Gusev
>Priority: Major
>  Labels: ignite-3
>
> Currently, it is impossible to run AI2 and AI3 in the same VM. If you try to 
> start an AI3 node, it stops without any specific error message that would 
> help with debugging.
> We should check for AI2 so that users know why the node failed to start.





[jira] [Comment Edited] (IGNITE-21599) Detect AI2 running and add an error to the log

2024-02-23 Thread Stephen Darlington (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820063#comment-17820063
 ] 

Stephen Darlington edited comment on IGNITE-21599 at 2/23/24 1:40 PM:
--

Repro:
 # Start Ignite 2 node (with ignite.sh)
 # Start AI3 node (ignite3db start)
 # Initialise AI3 cluster
 # AI3 node dies

There are a bunch of warnings, but the only error I see is:

2024-02-23 13:31:04:208 + 
[ERROR][%defaultNode%start-0][MetaStorageLeaderElectionListener] Unable to 
start Idle Safe Time scheduler


was (Author: sdarlington):
Repro:
 # Start Ignite 2 / GG 8 node (with ignite.sh)
 # Start GG9 node (ignite3db start)
 # Initialise GG9 cluster
 # GG9 node dies

There are a bunch of warnings, but the only error I see is:

2024-02-23 13:31:04:208 + 
[ERROR][%defaultNode%start-0][MetaStorageLeaderElectionListener] Unable to 
start Idle Safe Time scheduler

> Detect AI2 running and add an error to the log
> --
>
> Key: IGNITE-21599
> URL: https://issues.apache.org/jira/browse/IGNITE-21599
> Project: Ignite
>  Issue Type: Task
>Reporter: Igor Gusev
>Priority: Major
>  Labels: ignite-3
>
> Currently, it is impossible to run AI2 and AI3 in the same VM. If you try to 
> start an AI3 node, it stops without any specific error message that would 
> help with debugging.
> We should check for AI2 so that users know why the node failed to start.





[jira] [Assigned] (IGNITE-21428) Additional WITH params for zone, table

2024-02-23 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev reassigned IGNITE-21428:
-

Assignee: Vadim Pakhnushev

> Additional WITH params for zone, table
> --
>
> Key: IGNITE-21428
> URL: https://issues.apache.org/jira/browse/IGNITE-21428
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vadim Kolodin
>Assignee: Vadim Pakhnushev
>Priority: Major
>  Labels: ignite-3
>
> Implement the currently supported WITH params for zones and tables: adjust, 
> filter, dataregion, affinityFunction, etc.





[jira] [Updated] (IGNITE-21284) Internal API for manual raft group configuration update

2024-02-23 Thread Ivan Bessonov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov updated IGNITE-21284:
---
Description: 
We need an API (with implementation) that's analogous to 
"reset-lost-partitions", but with the ability to reuse a living minority of 
nodes.

This API should gather the states of partitions, identify healthy peers, and 
use them as the new raft group configuration (through an update of assignments).

We have to make sure that the node with the latest log index becomes the 
leader, so we will have to propagate the desired minimum log index in 
assignments and use it during voting.
h2. What's implemented

The "resetPartitions" operation in the distributed zone manager. It identifies 
partitions where only a minority of nodes is online (so they won't be able to 
execute "changePeersAsync") and writes "forced pending assignments" for them.

A forced assignment excludes stable nodes that are not present in the pending 
assignment from the new raft group configuration. It also performs a 
"resetPeers" operation on the alive nodes from the stable assignment.

Complete loss of all nodes from the stable assignments is not yet handled; at 
least one node is required so that a leader can be elected.

  was:
We need an API (with implementation) that's analogous to 
"reset-lost-partitions", but with the ability to reuse a living minority of 
nodes.

This API should gather the states of partitions, identify healthy peers, and 
use them as the new raft group configuration (through an update of assignments).

We have to make sure that the node with the latest log index becomes the 
leader, so we will have to propagate the desired minimum log index in 
assignments and use it during voting.


> Internal API for manual raft group configuration update
> ---
>
> Key: IGNITE-21284
> URL: https://issues.apache.org/jira/browse/IGNITE-21284
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need an API (with implementation) that's analogous to 
> "reset-lost-partitions", but with the ability to reuse a living minority of 
> nodes.
> This API should gather the states of partitions, identify healthy peers, and 
> use them as the new raft group configuration (through an update of 
> assignments).
> We have to make sure that the node with the latest log index becomes the 
> leader, so we will have to propagate the desired minimum log index in 
> assignments and use it during voting.
> h2. What's implemented
> The "resetPartitions" operation in the distributed zone manager. It 
> identifies partitions where only a minority of nodes is online (so they won't 
> be able to execute "changePeersAsync") and writes "forced pending 
> assignments" for them.
> A forced assignment excludes stable nodes that are not present in the pending 
> assignment from the new raft group configuration. It also performs a 
> "resetPeers" operation on the alive nodes from the stable assignment.
> Complete loss of all nodes from the stable assignments is not yet handled; at 
> least one node is required so that a leader can be elected.
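A self-contained sketch of the set arithmetic described under "What's implemented" (node ids as plain strings; names are illustrative, not actual Ignite classes): the forced configuration drops stable nodes that are absent from the pending assignment, and "resetPeers" targets the alive members of the stable assignment.

{code:java}
import java.util.Set;
import java.util.stream.Collectors;

public class ForcedAssignments {
    /** Excludes stable nodes that are not present in the pending assignment. */
    static Set<String> forcedConfiguration(Set<String> stable, Set<String> pending) {
        return stable.stream()
            .filter(pending::contains)
            .collect(Collectors.toSet());
    }

    /** Nodes that can actually serve "resetPeers": alive members of the stable assignment. */
    static Set<String> resetPeersTargets(Set<String> stable, Set<String> alive) {
        return stable.stream()
            .filter(alive::contains)
            .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        Set<String> stable = Set.of("A", "B", "C");
        Set<String> pending = Set.of("A", "B");
        Set<String> alive = Set.of("A");

        System.out.println(forcedConfiguration(stable, pending)); // A and B
        System.out.println(resetPeersTargets(stable, alive));     // A only
    }
}
{code}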





[jira] [Created] (IGNITE-21599) Detect AI2 running and add an error to the log

2024-02-23 Thread Igor Gusev (Jira)
Igor Gusev created IGNITE-21599:
---

 Summary: Detect AI2 running and add an error to the log
 Key: IGNITE-21599
 URL: https://issues.apache.org/jira/browse/IGNITE-21599
 Project: Ignite
  Issue Type: Task
Reporter: Igor Gusev


Currently, it is impossible to run AI2 and AI3 in the same VM. If you try to 
start an AI3 node, it stops without any specific error message that would help 
with debugging.

We should check for AI2 so that users know why the node failed to start.





[jira] [Created] (IGNITE-21598) Use environment variables if defined by user

2024-02-23 Thread Igor Gusev (Jira)
Igor Gusev created IGNITE-21598:
---

 Summary: Use environment variables if defined by user
 Key: IGNITE-21598
 URL: https://issues.apache.org/jira/browse/IGNITE-21598
 Project: Ignite
  Issue Type: Task
Reporter: Igor Gusev


Currently, the vars.env file overwrites environment variables, which makes it 
impossible to set them without editing this file.

We should check whether an environment variable is already defined and use it 
if so, and clearly state in the node logs that the user-defined environment 
variable is being used.
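A minimal sketch of the intended precedence in plain Java (the variable name and default are hypothetical, and the real fix would live in the startup scripts rather than in Java code): a value already defined in the user's environment wins over the vars.env default, and the choice is logged.

{code:java}
import java.util.Map;
import java.util.logging.Logger;

public class EnvPrecedence {
    private static final Logger LOG = Logger.getLogger(EnvPrecedence.class.getName());

    /** Returns the user-defined environment variable if present, otherwise the vars.env default. */
    static String resolve(String name, Map<String, String> varsEnvDefaults) {
        String fromEnvironment = System.getenv(name);
        if (fromEnvironment != null) {
            LOG.info("Using environment variable " + name + " defined by the user.");
            return fromEnvironment;
        }
        return varsEnvDefaults.get(name);
    }

    public static void main(String[] args) {
        // Hypothetical default that would normally come from vars.env.
        System.out.println(resolve("IGNITE3_EXTRA_JVM_ARGS",
            Map.of("IGNITE3_EXTRA_JVM_ARGS", "-Xmx4g")));
    }
}
{code}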





[jira] [Created] (IGNITE-21597) Create tables from builders

2024-02-23 Thread Vadim Pakhnushev (Jira)
Vadim Pakhnushev created IGNITE-21597:
-

 Summary: Create tables from builders
 Key: IGNITE-21597
 URL: https://issues.apache.org/jira/browse/IGNITE-21597
 Project: Ignite
  Issue Type: Improvement
Reporter: Vadim Pakhnushev


Implement the currently supported WITH params for zones and tables: adjust, 
filter, dataregion, affinityFunction, etc.





[jira] [Updated] (IGNITE-21597) Create tables from builders

2024-02-23 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev updated IGNITE-21597:
--
Description: Implement builders for creating tables.  (was: Implement the 
currently supported WITH params for zones and tables: adjust, filter, 
dataregion, affinityFunction, etc.)

> Create tables from builders
> ---
>
> Key: IGNITE-21597
> URL: https://issues.apache.org/jira/browse/IGNITE-21597
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vadim Pakhnushev
>Assignee: Vadim Pakhnushev
>Priority: Major
>  Labels: ignite-3
>
> Implement builders for creating tables.
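A hypothetical, self-contained sketch of the builder pattern the ticket asks for; none of these names ({{TableDefinition}}, {{column}}, {{primaryKey}}) are taken from the actual Ignite 3 API.

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Illustration of a builder-based table definition; not the Ignite 3 API. */
public class TableDefinition {
    private final String name;
    private final List<String> columns;
    private final String primaryKey;

    private TableDefinition(String name, List<String> columns, String primaryKey) {
        this.name = name;
        this.columns = columns;
        this.primaryKey = primaryKey;
    }

    static Builder builder(String name) {
        return new Builder(name);
    }

    static class Builder {
        private final String name;
        private final List<String> columns = new ArrayList<>();
        private String primaryKey;

        Builder(String name) { this.name = name; }

        Builder column(String definition) { columns.add(definition); return this; }

        Builder primaryKey(String column) { this.primaryKey = column; return this; }

        TableDefinition build() { return new TableDefinition(name, List.copyOf(columns), primaryKey); }
    }

    @Override public String toString() {
        return "CREATE TABLE " + name + " (" + String.join(", ", columns)
            + ", PRIMARY KEY (" + primaryKey + "))";
    }

    public static void main(String[] args) {
        TableDefinition person = TableDefinition.builder("person")
            .column("id INT NOT NULL")
            .column("name VARCHAR(100)")
            .primaryKey("id")
            .build();
        System.out.println(person);
    }
}
{code}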





[jira] [Assigned] (IGNITE-21597) Create tables from builders

2024-02-23 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev reassigned IGNITE-21597:
-

Assignee: Vadim Pakhnushev

> Create tables from builders
> ---
>
> Key: IGNITE-21597
> URL: https://issues.apache.org/jira/browse/IGNITE-21597
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vadim Pakhnushev
>Assignee: Vadim Pakhnushev
>Priority: Major
>  Labels: ignite-3
>
> Implement the currently supported WITH params for zones and tables: adjust, 
> filter, dataregion, affinityFunction, etc.





[jira] [Commented] (IGNITE-21541) Avoid the partition-operations pool when it does not lead to starvation

2024-02-23 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820021#comment-17820021
 ] 

Vladislav Pyatkov commented on IGNITE-21541:


This issue may not be needed after the thread model is corrected.

> Avoid the partition-operations pool when it does not lead to starvation
> --
>
> Key: IGNITE-21541
> URL: https://issues.apache.org/jira/browse/IGNITE-21541
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> Changing pools, and the parking/unparking that comes with it, leads to an 
> increase in latency. Sometimes we can avoid the extra pool switch, for 
> example for embedded operations.
> {code:title=ReplicaManager#onReplicaMessageReceived}
> ExecutorService stripeExecutor = 
> ReplicationGroupStripes.stripeFor(request.groupId(), requestsExecutor);
> stripeExecutor.execute(() -> handleReplicaRequest(request, 
> senderConsistentId, correlationId));
> {code}
> This code switches to another thread even when it is not necessary.
> h3. Definition of done
> ReplicaManager should not switch threads when that is not needed to prevent 
> starvation (in my opinion, the split is only needed for requests coming from 
> the network thread).
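A self-contained sketch of the "run inline unless the current thread must not block" idea from the definition of done (the network-thread check and all names are hypothetical, not ReplicaManager code):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class InlineOrStripe {
    /** Stand-in for a partition-operations stripe. */
    private static final ExecutorService STRIPE = Executors.newSingleThreadExecutor();

    /** Hypothetical marker; in real code this would identify network/critical threads. */
    static boolean onNetworkThread() {
        return Thread.currentThread().getName().startsWith("network-");
    }

    /** Runs the handler inline when safe, hopping to the stripe only from network threads. */
    static void handle(Runnable handler) {
        if (onNetworkThread())
            STRIPE.execute(handler); // never block the network thread
        else
            handler.run();           // avoid the latency of an extra thread switch
    }

    public static void main(String[] args) {
        handle(() -> System.out.println("handled on " + Thread.currentThread().getName()));
        STRIPE.shutdown();
    }
}
{code}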





[jira] [Updated] (IGNITE-20553) Unexpected rebalancing immediately after table creation

2024-02-23 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-20553:
-
Reviewer: Roman Puchkovskiy

> Unexpected rebalancing immediately after table creation
> ---
>
> Key: IGNITE-20553
> URL: https://issues.apache.org/jira/browse/IGNITE-20553
> Project: Ignite
>  Issue Type: Bug
>Reporter: Kirill Tkalenko
>Assignee: Alexander Lapin
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> During the implementation of IGNITE-20330, it was discovered that when 
> running 
> {*}org.apache.ignite.internal.runner.app.ItDataSchemaSyncTest#checkSchemasCorrectlyRestore{*},
>  rebalancing may begin right after the table is created, and the test then 
> freezes on the first insert ({*}sql(ignite1, 
> String.format("INSERT INTO " + TABLE_NAME + " VALUES(%d, %d, %d)", i, i, 2 * 
> i));{*}). The situation is not reproduced often; you need to run the test 
> several times.
> h3. Upd#1
> It's a known issue that node restart is broken. Before proceeding with this 
> ticket, the metastorage compaction epic should be finished, especially 
> https://issues.apache.org/jira/browse/IGNITE-20210
> h3. Upd#2
> This ticket should be refined in terms of awaiting the logical topology on 
> any node to be the same as on the CMG leader before creating any table.





[jira] [Created] (IGNITE-21596) ItRebalanceTriggersRecoveryTest#testRebalanceTriggersRecoveryWhenUpdatesWereProcessedByAnotherNodesAlready fails with AssertionError

2024-02-23 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-21596:


 Summary: 
ItRebalanceTriggersRecoveryTest#testRebalanceTriggersRecoveryWhenUpdatesWereProcessedByAnotherNodesAlready
 fails with AssertionError
 Key: IGNITE-21596
 URL: https://issues.apache.org/jira/browse/IGNITE-21596
 Project: Ignite
  Issue Type: Bug
Reporter: Alexander Lapin


{code:java}
org.opentest4j.AssertionFailedError: expected: <41> but was: <47>
    at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
    at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
    at app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
    at app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:166)
    at app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:161)
    at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:632)
    at app//org.apache.ignite.internal.rebalance.ItRebalanceTriggersRecoveryTest.testRebalanceTriggersRecoveryWhenUpdatesWereProcessedByAnotherNodesAlready(ItRebalanceTriggersRecoveryTest.java:216)
 {code}





[jira] [Updated] (IGNITE-21596) ItRebalanceTriggersRecoveryTest#testRebalanceTriggersRecoveryWhenUpdatesWereProcessedByAnotherNodesAlready fails with AssertionError

2024-02-23 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-21596:
-
Labels: ignite-3  (was: )

> ItRebalanceTriggersRecoveryTest#testRebalanceTriggersRecoveryWhenUpdatesWereProcessedByAnotherNodesAlready
>  fails with AssertionError
> 
>
> Key: IGNITE-21596
> URL: https://issues.apache.org/jira/browse/IGNITE-21596
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>
> {code:java}
> org.opentest4j.AssertionFailedError: expected: <41> but was: <47>
>     at app//org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
>     at app//org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
>     at app//org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
>     at app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:166)
>     at app//org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:161)
>     at app//org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:632)
>     at app//org.apache.ignite.internal.rebalance.ItRebalanceTriggersRecoveryTest.testRebalanceTriggersRecoveryWhenUpdatesWereProcessedByAnotherNodesAlready(ItRebalanceTriggersRecoveryTest.java:216)
>  {code}





[jira] [Commented] (IGNITE-20745) TableManager.tableAsync(int tableId) is slowing down thin clients

2024-02-23 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17820004#comment-17820004
 ] 

Pavel Tupitsyn commented on IGNITE-20745:
-

Merged to main: eb16d37a15f4ca9c146a3f20c03b0fffd59160b9

> TableManager.tableAsync(int tableId) is slowing down thin clients
> -
>
> Key: IGNITE-20745
> URL: https://issues.apache.org/jira/browse/IGNITE-20745
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> Attachments: ItThinClientPutGetBenchmark.java
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Performance difference between embedded and client modes is affected 
> considerably by the call to *IgniteTablesInternal#tableAsync(int id)*. This 
> call has to be performed on every individual table operation.
> We should make it as fast as possible. Something like a dictionary lookup + 
> quick check for deleted table.
> ||Part||Duration, us||
> |Network & msgpack|19.30|
> |Get table|14.29|
> |Get tuple & serialize|12.86|
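A self-contained sketch of the "dictionary lookup + quick check for a deleted table" idea from the description (class and field names are illustrative, not the actual TableManager implementation):

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class TableCache {
    /** Started tables by id; a stand-in for the manager's internal state. */
    private final Map<Integer, String> startedTables = new ConcurrentHashMap<>();

    /** Ids of tables that were dropped and must not be returned. */
    private final Set<Integer> droppedTables = ConcurrentHashMap.newKeySet();

    void onTableCreated(int tableId, String table) {
        startedTables.put(tableId, table);
    }

    void onTableDropped(int tableId) {
        droppedTables.add(tableId);
        startedTables.remove(tableId);
    }

    /** Fast path: one hash lookup plus a cheap "is it dropped" check, no async round trip. */
    String tableOrNull(int tableId) {
        if (droppedTables.contains(tableId))
            return null;
        return startedTables.get(tableId);
    }
}
{code}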





[jira] [Updated] (IGNITE-21142) "SqlException: Table with name 'N' already exists" upon creating a non-existing table

2024-02-23 Thread Ivan Artiukhov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Artiukhov updated IGNITE-21142:

Priority: Blocker  (was: Major)

> "SqlException: Table with name 'N' already exists" upon creating a 
> non-existing table
> -
>
> Key: IGNITE-21142
> URL: https://issues.apache.org/jira/browse/IGNITE-21142
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Ivan Artiukhov
>Priority: Blocker
>  Labels: ignite-3
>
> h1. Steps
> - Start a cluster of 1 node with default settings, init the cluster.
> - Create 1000 unique tables in a loop using 
> {{{}IgniteCluster.dataSource().getConnection().createStatement(){}}}:
> Example of a DDL request:
>  
> {noformat}
> create table test_table_123(id INTEGER not null, column_1 VARCHAR(50) not 
> null, column_2 VARCHAR(50) not null, column_3 VARCHAR(50) not null, column_4 
> VARCHAR(50) not null, primary key (id)){noformat}
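A minimal sketch of the table-creation loop from the steps above, written against plain JDBC (the thin-client connection URL and the use of DriverManager instead of the test helper {{IgniteCluster.dataSource()}} are assumptions):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateManyTables {
    public static void main(String[] args) throws Exception {
        // Assumed thin-client JDBC URL; adjust host/port to the running cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement stmt = conn.createStatement()) {
            for (int i = 0; i < 1000; i++) {
                stmt.executeUpdate("create table test_table_" + i + "("
                    + "id INTEGER not null, "
                    + "column_1 VARCHAR(50) not null, "
                    + "column_2 VARCHAR(50) not null, "
                    + "column_3 VARCHAR(50) not null, "
                    + "column_4 VARCHAR(50) not null, "
                    + "primary key (id))");
            }
        }
    }
}
{code}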
> h1. Expected result
> All 1000 tables were created successfully.
> h1. Actual result
> The following exception occurs on the server node:
> {noformat}
> 2023-12-21 02:52:27:298 + 
> [INFO][%TablesAmountCapacityTest_cluster_0%JRaft-FSMCaller-Disruptor-metastorage-_stripe_0-0][JdbcQueryEventHandlerImpl]
>  Exception while executing query [query=create table test_table_523(id 
> INTEGER not null, column_1 VARCHAR(50) not null, column_2 VARCHAR(50) not 
> null, column_3 VARCHAR(50) not null, column_4 VARCHAR(50) not null, primary 
> key (id))]
> org.apache.ignite.sql.SqlException: IGN-SQL-6 
> TraceId:12fb2fcf-71f1-4373-b85c-71f10b9b58aa Failed to validate query. Table 
> with name 'PUBLIC.TEST_TABLE_523' already exists
>   at 
> org.apache.ignite.internal.sql.engine.util.SqlExceptionMapperProvider.lambda$mappers$3(SqlExceptionMapperProvider.java:60)
>   at 
> org.apache.ignite.internal.lang.IgniteExceptionMapper.map(IgniteExceptionMapper.java:61)
>   at 
> org.apache.ignite.internal.lang.IgniteExceptionMapperUtil.map(IgniteExceptionMapperUtil.java:149)
>   at 
> org.apache.ignite.internal.lang.IgniteExceptionMapperUtil.mapToPublicException(IgniteExceptionMapperUtil.java:103)
>   at 
> org.apache.ignite.internal.lang.SqlExceptionMapperUtil.mapToPublicSqlException(SqlExceptionMapperUtil.java:49)
>   at 
> org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.wrapIfNecessary(AsyncSqlCursorImpl.java:191)
>   at 
> org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.lambda$requestNextAsync$2(AsyncSqlCursorImpl.java:123)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:950)
>   at 
> java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2340)
>   at 
> org.apache.ignite.internal.sql.engine.AsyncSqlCursorImpl.lambda$requestNextAsync$3(AsyncSqlCursorImpl.java:122)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:990)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:974)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
>   at 
> java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
>   at 
> org.apache.ignite.internal.util.AsyncWrapper.lambda$requestNextAsync$2(AsyncWrapper.java:113)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:990)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:974)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
>   at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.lambda$completeWaitersOnUpdate$0(PendingComparableValuesTracker.java:169)
>   at 
> java.base/java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:122)
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.completeWaitersOnUpdate(PendingComparableValuesTracker.java:169)
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.update(PendingComparableValuesTracker.java:103)
>   at 
> org.apache.ignite.internal.catalog.CatalogManagerImpl$OnUpdateHandlerImpl.lambda$handle$1(CatalogManagerImpl.java:439)
>   at 
> java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFi

[jira] [Commented] (IGNITE-20745) TableManager.tableAsync(int tableId) is slowing down thin clients

2024-02-23 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819991#comment-17819991
 ] 

Igor Sapego commented on IGNITE-20745:
--

Looks good to me.

> TableManager.tableAsync(int tableId) is slowing down thin clients
> -
>
> Key: IGNITE-20745
> URL: https://issues.apache.org/jira/browse/IGNITE-20745
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> Attachments: ItThinClientPutGetBenchmark.java
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Performance difference between embedded and client modes is affected 
> considerably by the call to *IgniteTablesInternal#tableAsync(int id)*. This 
> call has to be performed on every individual table operation.
> We should make it as fast as possible. Something like a dictionary lookup + 
> quick check for deleted table.
> ||Part||Duration, us||
> |Network & msgpack|19.30|
> |Get table|14.29|
> |Get tuple & serialize|12.86|





[jira] [Commented] (IGNITE-20745) TableManager.tableAsync(int tableId) is slowing down thin clients

2024-02-23 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-20745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17819988#comment-17819988
 ] 

Pavel Tupitsyn commented on IGNITE-20745:
-

[~isapego] [~v.pyatkov] please review: 
https://github.com/apache/ignite-3/pull/3279

> TableManager.tableAsync(int tableId) is slowing down thin clients
> -
>
> Key: IGNITE-20745
> URL: https://issues.apache.org/jira/browse/IGNITE-20745
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
> Attachments: ItThinClientPutGetBenchmark.java
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Performance difference between embedded and client modes is affected 
> considerably by the call to *IgniteTablesInternal#tableAsync(int id)*. This 
> call has to be performed on every individual table operation.
> We should make it as fast as possible. Something like a dictionary lookup + 
> quick check for deleted table.
> ||Part||Duration, us||
> |Network & msgpack|19.30|
> |Get table|14.29|
> |Get tuple & serialize|12.86|





[jira] [Updated] (IGNITE-21585) Disable catalog compaction

2024-02-23 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov updated IGNITE-21585:
--
Description: 
Catalog compaction is not synchronized with transactions on nodes.
There are a bunch of issues:
* We expect catalog compaction to be invoked for the minimal LWM across all 
the nodes, but the LWM is a node-local value.
* A recovering RW transaction may require an old catalog version to get the 
list of indexes that should be updated.
* An incoming snapshot may require an outdated catalog version in some cases.
* The LWM waits for RO transactions, but not for RW ones.

So:
1. Let's just do nothing with the catalog history on compaction.
2. Let each node update some key in the metastorage and put its id there, and 
make the node react only to its own updates. On the key update event, the node 
should trigger the destroy event for tables/indexes that are below the LWM.
We can't trigger local events instead, because we need a causality token for VV 
in several components.
3. Fix the recovery procedure if needed.



  was:
Catalog compaction is not synchronized with transactions on nodes.
There are a bunch of issues:
* We expect catalog compaction to be invoked for the minimal LWM across all 
the nodes, but the LWM is a node-local value.
* A recovering RW transaction may require an old catalog version to get the 
list of indexes that should be updated.
* An incoming snapshot may require an outdated catalog version in some cases.
* The LWM waits for RO transactions, but not for RW ones.

So:
1. Let's just do nothing with the catalog history on compaction.
2. Let's update some key with the node id; each node should react only to its 
own updates and skip updates from other nodes. On the key update event, the 
node should trigger the destroy event for tables/indexes that are below the 
LWM.
We can't trigger local events on LWM change, because we need a causality token 
for VV in several components.
3. Fix the recovery procedure if needed.




> Disable catalog compaction
> --
>
> Key: IGNITE-21585
> URL: https://issues.apache.org/jira/browse/IGNITE-21585
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
>
> Catalog compaction is not synchronized with transactions on nodes.
> There are a bunch of issues:
> * We expect catalog compaction to be invoked for the minimal LWM across all 
> the nodes, but the LWM is a node-local value.
> * A recovering RW transaction may require an old catalog version to get the 
> list of indexes that should be updated.
> * An incoming snapshot may require an outdated catalog version in some cases.
> * The LWM waits for RO transactions, but not for RW ones.
> So:
> 1. Let's just do nothing with the catalog history on compaction.
> 2. Let each node update some key in the metastorage and put its id there, 
> and make the node react only to its own updates. On the key update event, 
> the node should trigger the destroy event for tables/indexes that are below 
> the LWM.
> We can't trigger local events instead, because we need a causality token for 
> VV in several components.
> 3. Fix the recovery procedure if needed.
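For illustration, a self-contained sketch of step 2 above (reacting only to the node's own updates of a shared metastorage key); the listener shape and names are hypothetical stand-ins, not the Ignite metastorage API:

{code:java}
import java.util.function.BiConsumer;

public class LwmKeyWatcher {
    /** Hypothetical callback invoked with (writerNodeId, newLwm) when destroy should run. */
    private final BiConsumer<String, Long> destroyTrigger;

    private final String localNodeId;

    LwmKeyWatcher(String localNodeId, BiConsumer<String, Long> destroyTrigger) {
        this.localNodeId = localNodeId;
        this.destroyTrigger = destroyTrigger;
    }

    /** Called on every update of the shared key; only this node's own writes are acted on. */
    void onKeyUpdated(String writerNodeId, long newLwm) {
        if (!localNodeId.equals(writerNodeId))
            return; // skip updates published by other nodes

        // Going through the metastorage (rather than a local event) yields a causality token.
        destroyTrigger.accept(writerNodeId, newLwm);
    }

    public static void main(String[] args) {
        LwmKeyWatcher watcher = new LwmKeyWatcher("node-1",
            (node, lwm) -> System.out.println("Destroy tables/indexes below LWM " + lwm));

        watcher.onKeyUpdated("node-2", 100L); // ignored: not our own write
        watcher.onKeyUpdated("node-1", 100L); // triggers the destroy path
    }
}
{code}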


