[jira] [Updated] (IGNITE-16675) Need to investigate why initialisation of raft groups could be time-consuming.
[ https://issues.apache.org/jira/browse/IGNITE-16675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kirill Gusakov updated IGNITE-16675: Attachment: 1000-starts-cpu-1ms-1node-electSelf-issue.html > Need to investigate why initialisation of raft groups could be > time-consuming. > --- > > Key: IGNITE-16675 > URL: https://issues.apache.org/jira/browse/IGNITE-16675 > Project: Ignite > Issue Type: Task >Reporter: Mirza Aliev >Assignee: Kirill Gusakov >Priority: Major > Labels: ignite-3 > Attachments: 1000-starts-cpu-1ms-1node-electSelf-issue.html, > screenshot-1.png, screenshot-2.png > > > After some investigation that was made under IGNITE-16559 (see the > [comment|https://issues.apache.org/jira/browse/IGNITE-16559?focusedCommentId=17495362&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17495362]), > we came up with the idea that we could investigate why initialisation of > raft groups could be time-consuming. > We see that init phase of starting raft group contains some time-consuming > operations like {{fsync}} or {{RocksDB.open}} > {noformat} > at sun.nio.ch.FileDispatcherImpl.force0(FileDispatcherImpl.java:-1) > at sun.nio.ch.FileDispatcherImpl.force(FileDispatcherImpl.java:82) > at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:461) > at org.apache.ignite.raft.jraft.util.Utils.fsync(Utils.java:366) > at > org.apache.ignite.raft.jraft.storage.io.MessageFile.save(MessageFile.java:94) > at > org.apache.ignite.raft.jraft.storage.impl.LocalRaftMetaStorage.save(LocalRaftMetaStorage.java:114) > at > org.apache.ignite.raft.jraft.storage.impl.LocalRaftMetaStorage.setTermAndVotedFor(LocalRaftMetaStorage.java:186) > at > org.apache.ignite.raft.jraft.core.NodeImpl.electSelf(NodeImpl.java:1271) > at org.apache.ignite.raft.jraft.core.NodeImpl.init(NodeImpl.java:1054) > at org.apache.ignite.raft.jraft.core.NodeImpl.init(NodeImpl.java:126) > at > 
org.apache.ignite.raft.jraft.RaftGroupService.start(RaftGroupService.java:108) > - locked <0x1e42> (a org.apache.ignite.raft.jraft.RaftGroupService) > at > org.apache.ignite.internal.raft.server.impl.JraftServerImpl.startRaftGroup(JraftServerImpl.java:341) > - locked <0x1deb> (a > org.apache.ignite.internal.raft.server.impl.JraftServerImpl) > at > org.apache.ignite.internal.raft.Loza.prepareRaftGroupInternal(Loza.java:193) > at > org.apache.ignite.internal.raft.Loza.prepareRaftGroup(Loza.java:168) > {noformat} > {noformat} > at org.rocksdb.RocksDB.open(RocksDB.java:-1) > at org.rocksdb.RocksDB.open(RocksDB.java:306) > at > org.apache.ignite.raft.jraft.storage.impl.RocksDBLogStorage.openDB(RocksDBLogStorage.java:308) > at > org.apache.ignite.raft.jraft.storage.impl.RocksDBLogStorage.initAndLoad(RocksDBLogStorage.java:221) > at > org.apache.ignite.raft.jraft.storage.impl.RocksDBLogStorage.init(RocksDBLogStorage.java:198) > at > org.apache.ignite.raft.jraft.storage.impl.RocksDBLogStorage.init(RocksDBLogStorage.java:68) > at > org.apache.ignite.raft.jraft.storage.impl.LogManagerImpl.init(LogManagerImpl.java:183) > at > org.apache.ignite.raft.jraft.storage.impl.LogManagerImpl.init(LogManagerImpl.java:65) > at > org.apache.ignite.raft.jraft.core.NodeImpl.initLogStorage(NodeImpl.java:557) > at org.apache.ignite.raft.jraft.core.NodeImpl.init(NodeImpl.java:946) > at org.apache.ignite.raft.jraft.core.NodeImpl.init(NodeImpl.java:126) > at > org.apache.ignite.raft.jraft.RaftGroupService.start(RaftGroupService.java:108) > - locked (a org.apache.ignite.raft.jraft.RaftGroupService) > at > org.apache.ignite.internal.raft.server.impl.JraftServerImpl.startRaftGroup(JraftServerImpl.java:341) > - locked (a > org.apache.ignite.internal.raft.server.impl.JraftServerImpl) > at > org.apache.ignite.internal.raft.Loza.prepareRaftGroupInternal(Loza.java:193) > at > org.apache.ignite.internal.raft.Loza.prepareRaftGroup(Loza.java:168) > {noformat} > We made some pre-investigation, we started a 
raft group on one node 1000 > times; the red line on the screenshot is a local run, the other lines are from TC. The Y > axis shows milliseconds, the X axis represents the test attempt. We measured > {{org.apache.ignite.raft.jraft.RaftGroupService#start}}. In general, we could > see that the attempts are stable and not time-consuming (in some bad > cases, a start could last about 1 second; we saw that behaviour on TC), but > there are some statistical outliers, probably related to GC pauses. > !screenshot-2.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
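For scale, the cost of the {{fsync}} seen at the top of the first stack trace can be checked with a plain-JDK micro-benchmark. This is a standalone sketch (the class and method names are illustrative, not from the Ignite code base) that times {{FileChannel.force(true)}}, the primitive that {{Utils.fsync}} ends up calling according to the trace:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

/** Illustrative micro-benchmark for the fsync cost paid on each raft metadata save. */
public class FsyncCost {
    /** Writes a small record and forces it to disk, returning the average force() latency in microseconds. */
    public static long avgFsyncMicros(int iters) throws IOException {
        Path file = Files.createTempFile("raft-meta", ".bin");
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            ByteBuffer buf = ByteBuffer.allocate(64); // roughly the size of a term/votedFor record
            long totalNanos = 0;
            for (int i = 0; i < iters; i++) {
                buf.rewind();
                ch.write(buf, 0);
                long t = System.nanoTime();
                ch.force(true); // flush data and metadata, as in the traced fsync call
                totalNanos += System.nanoTime() - t;
            }
            return totalNanos / iters / 1_000;
        } finally {
            Files.deleteIfExists(file);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("avg fsync micros: " + avgFsyncMicros(50));
    }
}
```

On a typical local SSD this lands in the hundreds of microseconds to low milliseconds per call, which is consistent with fsync dominating a single-digit-millisecond group start.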
[jira] [Commented] (IGNITE-16675) Need to investigate why initialisation of raft groups could be time-consuming.
[ https://issues.apache.org/jira/browse/IGNITE-16675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17561194#comment-17561194 ] Kirill Gusakov commented on IGNITE-16675: - I made some investigations into the time-consuming parts of {{JraftServerImpl.startRaftGroup}}: * If the started raft group has only one member, it takes ~40ms, because the node immediately runs {{electSelf}} to elect itself as the leader; this process performs a significant amount of IO operations (see the attached flamegraph, startRaftGroup column) * If the raft group has more members, it takes 0-1ms and is not affected by {{electSelf}} So it looks like the performance of {{startRaftGroup}} in general is OK. But we have another issue: {{startRaftGroup}} is synchronized around the whole {{JraftServerImpl}}. This will slow down the following cases: starting a table with many partitions and 1 replica, or starting many tables with any number of partitions and 1 replica. It must be investigated under https://issues.apache.org/jira/browse/IGNITE-16676 > Need to investigate why initialisation of raft groups could be > time-consuming. > --- > > Key: IGNITE-16675 > URL: https://issues.apache.org/jira/browse/IGNITE-16675 > Project: Ignite > Issue Type: Task >Reporter: Mirza Aliev >Assignee: Kirill Gusakov >Priority: Major > Labels: ignite-3 > Attachments: screenshot-1.png, screenshot-2.png > > > After some investigation that was made under IGNITE-16559 (see the > [comment|https://issues.apache.org/jira/browse/IGNITE-16559?focusedCommentId=17495362&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17495362]), > we came up with the idea that we could investigate why initialisation of > raft groups could be time-consuming. 
> We see that init phase of starting raft group contains some time-consuming > operations like {{fsync}} or {{RocksDB.open}} > {noformat} > at sun.nio.ch.FileDispatcherImpl.force0(FileDispatcherImpl.java:-1) > at sun.nio.ch.FileDispatcherImpl.force(FileDispatcherImpl.java:82) > at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:461) > at org.apache.ignite.raft.jraft.util.Utils.fsync(Utils.java:366) > at > org.apache.ignite.raft.jraft.storage.io.MessageFile.save(MessageFile.java:94) > at > org.apache.ignite.raft.jraft.storage.impl.LocalRaftMetaStorage.save(LocalRaftMetaStorage.java:114) > at > org.apache.ignite.raft.jraft.storage.impl.LocalRaftMetaStorage.setTermAndVotedFor(LocalRaftMetaStorage.java:186) > at > org.apache.ignite.raft.jraft.core.NodeImpl.electSelf(NodeImpl.java:1271) > at org.apache.ignite.raft.jraft.core.NodeImpl.init(NodeImpl.java:1054) > at org.apache.ignite.raft.jraft.core.NodeImpl.init(NodeImpl.java:126) > at > org.apache.ignite.raft.jraft.RaftGroupService.start(RaftGroupService.java:108) > - locked <0x1e42> (a org.apache.ignite.raft.jraft.RaftGroupService) > at > org.apache.ignite.internal.raft.server.impl.JraftServerImpl.startRaftGroup(JraftServerImpl.java:341) > - locked <0x1deb> (a > org.apache.ignite.internal.raft.server.impl.JraftServerImpl) > at > org.apache.ignite.internal.raft.Loza.prepareRaftGroupInternal(Loza.java:193) > at > org.apache.ignite.internal.raft.Loza.prepareRaftGroup(Loza.java:168) > {noformat} > {noformat} > at org.rocksdb.RocksDB.open(RocksDB.java:-1) > at org.rocksdb.RocksDB.open(RocksDB.java:306) > at > org.apache.ignite.raft.jraft.storage.impl.RocksDBLogStorage.openDB(RocksDBLogStorage.java:308) > at > org.apache.ignite.raft.jraft.storage.impl.RocksDBLogStorage.initAndLoad(RocksDBLogStorage.java:221) > at > org.apache.ignite.raft.jraft.storage.impl.RocksDBLogStorage.init(RocksDBLogStorage.java:198) > at > org.apache.ignite.raft.jraft.storage.impl.RocksDBLogStorage.init(RocksDBLogStorage.java:68) > at > 
org.apache.ignite.raft.jraft.storage.impl.LogManagerImpl.init(LogManagerImpl.java:183) > at > org.apache.ignite.raft.jraft.storage.impl.LogManagerImpl.init(LogManagerImpl.java:65) > at > org.apache.ignite.raft.jraft.core.NodeImpl.initLogStorage(NodeImpl.java:557) > at org.apache.ignite.raft.jraft.core.NodeImpl.init(NodeImpl.java:946) > at org.apache.ignite.raft.jraft.core.NodeImpl.init(NodeImpl.java:126) > at > org.apache.ignite.raft.jraft.RaftGroupService.start(RaftGroupService.java:108) > - locked (a org.apache.ignite.raft.jraft.RaftGroupService) > at > org.apache.ignite.internal.raft.server.impl.JraftServerImpl.startRaftGroup(JraftServerImpl.java:341) > - locked (a > org.apache.ignite.internal.raft.server.impl.JraftServerImpl) >
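The locking concern above (both {{RaftGroupService.start}} and {{JraftServerImpl.startRaftGroup}} show up as locked monitors in the traces) could in principle be narrowed from one instance-wide monitor to a per-group lock, so starts of independent partitions no longer serialize. A hypothetical sketch, not the actual Ignite implementation; all names are illustrative:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative per-group locking scheme for starting raft groups concurrently. */
public class GroupStarter {
    // One lock object per raft group id, created lazily and atomically.
    private final Map<String, Object> groupLocks = new ConcurrentHashMap<>();
    private final Set<String> started = ConcurrentHashMap.newKeySet();

    /** Returns true if this call started the group, false if it was already started. */
    public boolean startRaftGroup(String groupId) {
        Object lock = groupLocks.computeIfAbsent(groupId, id -> new Object());
        synchronized (lock) { // serializes starts of the SAME group only
            // a real implementation would init storages and the node here
            return started.add(groupId);
        }
    }
}
```

With this shape, starting a table with many single-replica partitions would pay the ~40ms electSelf cost per partition in parallel rather than sequentially behind one monitor.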
[jira] [Commented] (IGNITE-17048) Some failing tests make other tests fail too
[ https://issues.apache.org/jira/browse/IGNITE-17048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17561155#comment-17561155 ] Roman Puchkovskiy commented on IGNITE-17048: The original build is not available on TC anymore, but there is a log in IGNITE-17280 (the log is probably different, but the symptoms are the same) > Some failing tests make other tests fail too > > > Key: IGNITE-17048 > URL: https://issues.apache.org/jira/browse/IGNITE-17048 > Project: Ignite > Issue Type: Bug >Reporter: Aleksandr Polovtcev >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > > It looks like after some tests fail, they do not correctly free the used ports, > because other tests start to fail with a "Failed to start the connection > manager: No available port in range [3346-3346]" message. > This error occurs due to the previous test failing to stop a node with > AssertionError "Raft groups are still running > [602f614c-9d19-4467-bcdf-d248d9e3a410_part_1, > 602f614c-9d19-4467-bcdf-d248d9e3a410_part_0, > 602f614c-9d19-4467-bcdf-d248d9e3a410_part_6, > 602f614c-9d19-4467-bcdf-d248d9e3a410_part_4, > 602f614c-9d19-4467-bcdf-d248d9e3a410_part_3, > 602f614c-9d19-4467-bcdf-d248d9e3a410_part_8, > 602f614c-9d19-4467-bcdf-d248d9e3a410_part_7]" > Example of a failed build: > https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_RunAllTests/6583070 > A possible solution might be to create a diagnostics tool that will print the > name of the process that occupies the blocked port. -- This message was sent by Atlassian Jira (v8.20.10#820010)
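Printing the name of the owning process requires OS-level tools (e.g. lsof or netstat), but a first diagnostic step, confirming from inside the JVM that the configured range really is exhausted, can be sketched in plain Java. This is a hypothetical helper, not part of the Ignite connection manager; all names are illustrative:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

/** Illustrative diagnostics: scan a port range and report the first bindable port. */
public class PortProbe {
    /** Returns the first free port in [from, to], or -1 if every port is occupied
     *  (the "No available port in range" situation from the error message). */
    public static int firstFreePort(int from, int to) {
        for (int p = from; p <= to; p++) {
            try (ServerSocket s = new ServerSocket()) {
                s.setReuseAddress(false);
                s.bind(new InetSocketAddress(p), 1);
                return p; // bound successfully, so the port was free
            } catch (IOException ignored) {
                // port occupied (or not bindable); try the next one
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println("first free port in [3346, 3346]: " + firstFreePort(3346, 3346));
    }
}
```

Run against [3346-3346] on an affected machine, a -1 result would confirm the leaked listener before reaching for lsof to name the process.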
[jira] [Updated] (IGNITE-17280) ItComputeTest.executesColocatedByClassNameWithTupleKey failed on TC
[ https://issues.apache.org/jira/browse/IGNITE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy updated IGNITE-17280: --- Attachment: _Integration_Tests_Module_Runner_3992.log.zip > ItComputeTest.executesColocatedByClassNameWithTupleKey failed on TC > --- > > Key: IGNITE-17280 > URL: https://issues.apache.org/jira/browse/IGNITE-17280 > Project: Ignite > Issue Type: Bug > Components: compute >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > Attachments: _Integration_Tests_Module_Runner_3992.log.zip > > > [https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_IntegrationTests_ModuleRunner/6655808] > > java.lang.AssertionError > java.lang.AssertionError: Raft groups are still running > [b839ce7f-370c-4553-882e-34b471029c9c_part_0, > b839ce7f-370c-4553-882e-34b471029c9c_part_19, > b839ce7f-370c-4553-882e-34b471029c9c_part_8, > b839ce7f-370c-4553-882e-34b471029c9c_part_11, > b839ce7f-370c-4553-882e-34b471029c9c_part_4, > b839ce7f-370c-4553-882e-34b471029c9c_part_15, > b839ce7f-370c-4553-882e-34b471029c9c_part_13, > b839ce7f-370c-4553-882e-34b471029c9c_part_3, > b839ce7f-370c-4553-882e-34b471029c9c_part_14] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17285) Ambiguous output of INDEXES SytemView if cache is created via DDL
[ https://issues.apache.org/jira/browse/IGNITE-17285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Shishkov updated IGNITE-17285: --- Description: There is a difference in the 'COLUMNS' and 'INLINE_SIZE' columns of the 'INDEXES' system view, depending on whether you create an SQL cache by means of QueryEntity or by means of DDL. As you can see in the reproducer [^SqlIndexSystemViewReproducerTest.patch], there are two "equal" attempts to create a cache: via DDL, and via Cache API + QueryEntity. The primary keys contain an equal set of fields, and the affinity fields are the same. The outputs of the system views TABLES, TABLE_COLUMNS and BINARY_METADATA are the same for both ways of cache creation. The table content (i.e. select *) is also the same, if you do not take into account the order of the output. There are example sqlline outputs for the table from the reproducer: # [^create_table.txt] - for the table created by DDL. # [^query_entity.txt] - for the table created by the Cache API. As you can see, the columns and content differ in the INDEXES view: in the case of DDL, the indexes do not have the '_KEY' column and have an explicit set of columns from the primary key. Also, there is a duplication of the affinity column 'ID' for:
{code}
"ID" ASC, "FIRSTNAME" ASC, "LASTNAME" ASC, "ID" ASC
{code}
In the case of creating the table via Cache API + QueryEntity, no explicit primary key columns are shown, but the '_KEY' column is, and there is no duplication of the affinity column 'ID' in the '_key_PK_hash' index. The reproducer dumps the content of the indexes ({{org.h2.index.Index}}) collection, which is obtained from {{GridH2Table#getIndexes}}. It seems that the information differs in this class too.
Example output:
{code:java|title=Cache API + QueryEntity}
Index name              Columns
_key_PK__SCAN_          [_KEY, ID]
_key_PK_hash            [_KEY, ID]
_key_PK                 [_KEY, ID]
AFFINITY_KEY            [ID, _KEY]
PERSONINFO_CITYNAME_IDX [CITYNAME, _KEY, ID]
{code}
{code:java|title=DDL}
Index name              Columns
_key_PK__SCAN_          [ID, FIRSTNAME, LASTNAME]
_key_PK_hash            [_KEY, ID]
_key_PK                 [ID, FIRSTNAME, LASTNAME]
AFFINITY_KEY            [ID, FIRSTNAME, LASTNAME]
PERSONINFO_CITYNAME_IDX [CITYNAME, ID, FIRSTNAME, LASTNAME]
{code}
If such a difference is not a bug, it should be documented. was: There is a difference in the 'COLUMNS' and 'INLINE_SIZE' columns of the 'INDEXES' system view, depending on whether you create an SQL cache by means of QueryEntity or by means of DDL. As you can see in the reproducer [^SqlIndexSystemViewReproducerTest.patch], there are two "equal" attempts to create a cache: via DDL, and via Cache API + QueryEntity. The primary keys contain an equal set of fields, and the affinity fields are the same. The outputs of the system views TABLES, TABLE_COLUMNS and BINARY_METADATA are the same for both ways of cache creation. The table content (i.e. select *) is also the same, if you do not take into account the order of the output. There are example sqlline outputs for the table from the reproducer: # [^create_table.txt] - for the table created by DDL. # [^query_entity.txt] - for the table created by the Cache API. As you can see, the columns and content differ in the INDEXES view: in the case of DDL, the indexes do not have the '_KEY' column and have an explicit set of columns from the primary key. Also, there is a duplication of the affinity column 'ID' for:
{code}
"ID" ASC, "FIRSTNAME" ASC, "LASTNAME" ASC, "ID" ASC
{code}
In the case of creating the table via Cache API + QueryEntity, no explicit primary key columns are shown, but the '_KEY' column is, and there is no duplication of the affinity column 'ID' in the '_key_PK_hash' index. The reproducer dumps the content of the indexes ({{org.h2.index.Index}}) collection, which is obtained from {{GridH2Table#getIndexes}}. It seems that the information differs in this class too.
Example output:
{code:java|title=Cache API + QueryEntity}
Index name              Columns
_key_PK__SCAN_          [_KEY, ID]
_key_PK_hash            [_KEY, ID]
_key_PK                 [_KEY, ID]
AFFINITY_KEY            [ID, _KEY]
PERSONINFO_CITYNAME_IDX [CITYNAME, _KEY, ID]
{code}
{code:java|title=DDL}
Index name              Columns
_key_PK__SCAN_          [ID, FIRSTNAME, LASTNAME]
_key_PK_hash            [_KEY, ID]
_key_PK                 [ID, FIRSTNAME, LASTNAME]
AFFINITY_KEY            [ID, FIRSTNAME, LASTNAME]
PERSONINFO_CITYNAME_IDX [CITYNAME, ID, FIRSTNAME, LASTNAME]
{code}
If such a difference is not a bug, we should document it. > Ambiguous output of INDEXES SytemView if cache is created via DDL > -- > > Key: IGNITE-17285 > URL: https://issues.apache.org/jira/browse/IGNITE-17285 > Project: Ignite > Issue Type: Bug >Reporter: Ilya Shishkov >Priority: Minor > Labels: ise > Attachments: SqlIndexSystemViewReproducerTest.patch, > create_table.txt, query_entity.txt > > > There is a diff
[jira] [Updated] (IGNITE-17285) Ambiguous output of INDEXES SytemView if cache is created via DDL
[ https://issues.apache.org/jira/browse/IGNITE-17285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Shishkov updated IGNITE-17285: --- Labels: ise (was: ) > Ambiguous output of INDEXES SytemView if cache is created via DDL > -- > > Key: IGNITE-17285 > URL: https://issues.apache.org/jira/browse/IGNITE-17285 > Project: Ignite > Issue Type: Bug >Reporter: Ilya Shishkov >Priority: Minor > Labels: ise > Attachments: SqlIndexSystemViewReproducerTest.patch, > create_table.txt, query_entity.txt > > > There is a difference in the 'COLUMNS' and 'INLINE_SIZE' columns of the > 'INDEXES' system view, depending on whether you create an SQL cache by means of QueryEntity and by > means of DDL. > As you can see in the reproducer [^SqlIndexSystemViewReproducerTest.patch], > there are two "equal" attempts to create a cache: via DDL, and via Cache API + > QueryEntity. > The primary keys contain an equal set of fields, and the affinity fields are the same. > The outputs of the system views TABLES, TABLE_COLUMNS and BINARY_METADATA are the > same for both ways of cache creation. The table content (i.e. select *) is also > the same, if you do not take into account the order of the output. > There are example sqlline outputs for the table from the reproducer: > # [^create_table.txt] - for the table created by DDL. > # [^query_entity.txt] - for the table created by the Cache API. > As you can see, the columns and content differ in the INDEXES view: in the case of DDL, > the indexes do not have the '_KEY' column and have an explicit set of columns from the > primary key. Also, there is a duplication of the affinity column 'ID' for: > {code} > "ID" ASC, "FIRSTNAME" ASC, "LASTNAME" ASC, "ID" ASC > {code} > In the case of creating the table via Cache API + QueryEntity, no explicit primary key > columns are shown, but the '_KEY' column is, and there is no duplication of the > affinity column 'ID' in the '_key_PK_hash' index. > The reproducer dumps the content of the indexes ({{org.h2.index.Index}}) collection, which > is obtained from {{GridH2Table#getIndexes}}.
It seems that the information > differs in this class too. > Example output:
> {code:java|title=Cache API + QueryEntity}
> Index name              Columns
> _key_PK__SCAN_          [_KEY, ID]
> _key_PK_hash            [_KEY, ID]
> _key_PK                 [_KEY, ID]
> AFFINITY_KEY            [ID, _KEY]
> PERSONINFO_CITYNAME_IDX [CITYNAME, _KEY, ID]
> {code}
> {code:java|title=DDL}
> Index name              Columns
> _key_PK__SCAN_          [ID, FIRSTNAME, LASTNAME]
> _key_PK_hash            [_KEY, ID]
> _key_PK                 [ID, FIRSTNAME, LASTNAME]
> AFFINITY_KEY            [ID, FIRSTNAME, LASTNAME]
> PERSONINFO_CITYNAME_IDX [CITYNAME, ID, FIRSTNAME, LASTNAME]
> {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17285) Ambiguous output of INDEXES SytemView if cache is created via DDL
[ https://issues.apache.org/jira/browse/IGNITE-17285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Shishkov updated IGNITE-17285: --- Description: There is a difference 'COLUMNS ' and 'INLINE_SIZE' columns content in system view 'INDEXES' , when you create SQL-cache by means of QueryEntity and by means of DDL. As you can see in reproducer [^SqlIndexSystemViewReproducerTest.patch] , there are two "equal" attepmts to create cache: via DDL, and via Cache API + QueryEntity. Primary keys contains equal set of fields, affinity fields are the same. Outputs of system views TABLES, TABLE_COLUMNS and BINARY_METADATA are the same for both ways of cache creation. Table content (i.e. select *) is also the same, if you do not take into account the order of output. There are example sqlline outputs for table from reproducer: # [^create_table.txt] - for table, created by DDL. # [^query_entity.txt] - for table, created by Cache API. As you can see, colums and content differs in INDEXES view: in case of DDL, indexes does not have '_KEY' column, and have explicit set of columns from primary key. Also, there is a duplication of affinity column 'ID' for : {code} "ID" ASC, "FIRSTNAME" ASC, "LASTNAME" ASC, "ID" ASC {code} In case of creation table via Cache API + QueryEntity, no exlicit primary key columns are shown, but '_KEY' column is, and there is no duplication of affinity column 'ID' in '_key_PK_hash' index. Reproducer dumps indexes ({{org.h2.index.Index}}) collection content, which is obtained from {{GridH2Table#getIndexes}}. It seems, that information differs in this class too. 
Example output:
{code:java|title=Cache API + QueryEntity}
Index name               Columns
_key_PK__SCAN_           [_KEY, ID]
_key_PK_hash             [_KEY, ID]
_key_PK                  [_KEY, ID]
AFFINITY_KEY             [ID, _KEY]
PERSONINFO_CITYNAME_IDX  [CITYNAME, _KEY, ID]
{code}
{code:java|title=DDL}
Index name               Columns
_key_PK__SCAN_           [ID, FIRSTNAME, LASTNAME]
_key_PK_hash             [_KEY, ID]
_key_PK                  [ID, FIRSTNAME, LASTNAME]
AFFINITY_KEY             [ID, FIRSTNAME, LASTNAME]
PERSONINFO_CITYNAME_IDX  [CITYNAME, ID, FIRSTNAME, LASTNAME]
{code}
was: There is a difference in the content of the 'COLUMNS' and 'INLINE_SIZE' columns of the 'INDEXES' system view when you create an SQL cache by means of QueryEntity and by means of DDL. As you can see in the reproducer [^SqlIndexSystemViewReproducerTest.patch], there are two "equal" attempts to create a cache: via DDL, and via Cache API + QueryEntity. The primary keys contain an equal set of fields, and the affinity fields are the same. The outputs of the system views TABLES, TABLE_COLUMNS and BINARY_METADATA are the same for both ways of cache creation. The table content (i.e. select *) is also the same, if you do not take into account the order of the output. There are example sqlline outputs for the table from the reproducer: # [^create_table.txt] - for the table created by DDL. # [^query_entity.txt] - for the table created by the Cache API. As you can see, the columns and content differ in the INDEXES view: in case of DDL, the indexes do not have the '_KEY' column and have an explicit set of columns from the primary key. In case of table creation via Cache API + QueryEntity, no explicit columns are shown, but the '_KEY' column is. Also, there is a duplication of the 'ID' column for the '_key_PK_hash' index:
{code}
"ID" ASC, "FIRSTNAME" ASC, "LASTNAME" ASC, "ID" ASC
{code}
The reproducer dumps the contents of the indexes ({{org.h2.index.Index}}) collection, which is obtained from {{GridH2Table#getIndexes}}. It seems that the information differs in this class too.
Example output:
{code:java|title=Cache API + QueryEntity}
Index name               Columns
_key_PK__SCAN_           [_KEY, ID]
_key_PK_hash             [_KEY, ID]
_key_PK                  [_KEY, ID]
AFFINITY_KEY             [ID, _KEY]
PERSONINFO_CITYNAME_IDX  [CITYNAME, _KEY, ID]
{code}
{code:java|title=DDL}
Index name               Columns
_key_PK__SCAN_           [ID, FIRSTNAME, LASTNAME]
_key_PK_hash             [_KEY, ID]
_key_PK                  [ID, FIRSTNAME, LASTNAME]
AFFINITY_KEY             [ID, FIRSTNAME, LASTNAME]
PERSONINFO_CITYNAME_IDX  [CITYNAME, ID, FIRSTNAME, LASTNAME]
{code}
> Ambiguous output of INDEXES SystemView if cache is created via DDL > -- > > Key: IGNITE-17285 > URL: https://issues.apache.org/jira/browse/IGNITE-17285 > Project: Ignite > Issue Type: Bug >Reporter: Ilya Shishkov >Priority: Minor > Attachments: SqlIndexSystemViewReproducerTest.patch, > create_table.txt, query_entity.txt > > > There is a difference in the content of the 'COLUMNS' and 'INLINE_SIZE' columns of the > 'INDEXES' system view when you create an SQL cache by means of QueryEntity and by > means of DDL. > As you can see in the reproducer [^SqlIndexSystemViewReproducerTest.patch
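The divergence in the dumped index metadata can be made explicit with a small self-contained sketch. The index-to-columns mappings below are copied from the reproducer's example output; the class and method names are illustrative, not part of the reproducer itself:

```java
import java.util.*;

public class IndexViewDiff {
    // Columns per index as dumped from GridH2Table#getIndexes for a cache
    // created via Cache API + QueryEntity (values from the reproducer output).
    static Map<String, List<String>> queryEntity() {
        Map<String, List<String>> m = new LinkedHashMap<>();
        m.put("_key_PK__SCAN_", List.of("_KEY", "ID"));
        m.put("_key_PK_hash", List.of("_KEY", "ID"));
        m.put("_key_PK", List.of("_KEY", "ID"));
        m.put("AFFINITY_KEY", List.of("ID", "_KEY"));
        m.put("PERSONINFO_CITYNAME_IDX", List.of("CITYNAME", "_KEY", "ID"));
        return m;
    }

    // Same indexes for the table created via DDL.
    static Map<String, List<String>> ddl() {
        Map<String, List<String>> m = new LinkedHashMap<>();
        m.put("_key_PK__SCAN_", List.of("ID", "FIRSTNAME", "LASTNAME"));
        m.put("_key_PK_hash", List.of("_KEY", "ID"));
        m.put("_key_PK", List.of("ID", "FIRSTNAME", "LASTNAME"));
        m.put("AFFINITY_KEY", List.of("ID", "FIRSTNAME", "LASTNAME"));
        m.put("PERSONINFO_CITYNAME_IDX", List.of("CITYNAME", "ID", "FIRSTNAME", "LASTNAME"));
        return m;
    }

    // Names of indexes whose column lists differ between the two creation paths.
    static Set<String> differing() {
        Set<String> diff = new TreeSet<>();
        Map<String, List<String>> qe = queryEntity(), d = ddl();
        for (String idx : qe.keySet())
            if (!qe.get(idx).equals(d.get(idx)))
                diff.add(idx);
        return diff;
    }

    public static void main(String[] args) {
        // Prints every index name except _key_PK_hash, the only one that matches.
        System.out.println(differing());
    }
}
```

Running this shows that only '_key_PK_hash' agrees between the two paths, which matches the ambiguity reported in the INDEXES view.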
[jira] [Updated] (IGNITE-17284) Fix missing features on the Thin Clients documentation
[ https://issues.apache.org/jira/browse/IGNITE-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maxim Muzafarov updated IGNITE-17284: - Ignite Flags: (was: Docs Required,Release Notes Required) > Fix missing features on the Thin Clients documentation > --- > > Key: IGNITE-17284 > URL: https://issues.apache.org/jira/browse/IGNITE-17284 > Project: Ignite > Issue Type: Task > Components: documentation >Reporter: Maxim Muzafarov >Assignee: Maxim Muzafarov >Priority: Major > Fix For: 2.13 > > Time Spent: 10m > Remaining Estimate: 0h > > DataStreamer support is missing from our documentation pages: > https://ignite.apache.org/docs/latest/thin-clients/getting-started-with-thin-clients > The DataStreamer has been implemented for the .NET thin client: > https://issues.apache.org/jira/browse/IGNITE-14187 > Full list of supported features: > https://cwiki.apache.org/confluence/display/IGNITE/Thin+clients+features > The Retry Policy has also been implemented: > https://issues.apache.org/jira/browse/IGNITE-16026 > https://issues.apache.org/jira/browse/IGNITE-16025 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-17285) Ambiguous output of INDEXES SystemView if cache is created via DDL
Ilya Shishkov created IGNITE-17285: -- Summary: Ambiguous output of INDEXES SystemView if cache is created via DDL Key: IGNITE-17285 URL: https://issues.apache.org/jira/browse/IGNITE-17285 Project: Ignite Issue Type: Bug Reporter: Ilya Shishkov Attachments: SqlIndexSystemViewReproducerTest.patch, create_table.txt, query_entity.txt There is a difference in the content of the 'COLUMNS' and 'INLINE_SIZE' columns of the 'INDEXES' system view when you create an SQL cache by means of QueryEntity and by means of DDL. As you can see in the reproducer [^SqlIndexSystemViewReproducerTest.patch], there are two "equal" attempts to create a cache: via DDL, and via Cache API + QueryEntity. The primary keys contain an equal set of fields, and the affinity fields are the same. The outputs of the system views TABLES, TABLE_COLUMNS and BINARY_METADATA are the same for both ways of cache creation. The table content (i.e. select *) is also the same, if you do not take into account the order of the output. There are example sqlline outputs for the table from the reproducer: # [^create_table.txt] - for the table created by DDL. # [^query_entity.txt] - for the table created by the Cache API. As you can see, the columns and content differ: in case of DDL, the indexes do not have the '_KEY' column and have an explicit set of columns from the primary key. In case of table creation via Cache API + QueryEntity, no explicit columns are shown, but the '_KEY' column is. Also, there is a duplication of the 'ID' column for the '_key_PK_hash' index:
{code}
"ID" ASC, "FIRSTNAME" ASC, "LASTNAME" ASC, "ID" ASC
{code}
The reproducer dumps the contents of the indexes ({{org.h2.index.Index}}) collection, which is obtained from {{GridH2Table#getIndexes}}. It seems that the information differs in this class too.
Example output:
{code:java|title=Cache API + QueryEntity}
Index name               Columns
_key_PK__SCAN_           [_KEY, ID]
_key_PK_hash             [_KEY, ID]
_key_PK                  [_KEY, ID]
AFFINITY_KEY             [ID, _KEY]
PERSONINFO_CITYNAME_IDX  [CITYNAME, _KEY, ID]
{code}
{code:java|title=DDL}
Index name               Columns
_key_PK__SCAN_           [ID, FIRSTNAME, LASTNAME]
_key_PK_hash             [_KEY, ID]
_key_PK                  [ID, FIRSTNAME, LASTNAME]
AFFINITY_KEY             [ID, FIRSTNAME, LASTNAME]
PERSONINFO_CITYNAME_IDX  [CITYNAME, ID, FIRSTNAME, LASTNAME]
{code}
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17284) Fix missing features on the Thin Clients documentation
[ https://issues.apache.org/jira/browse/IGNITE-17284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maxim Muzafarov updated IGNITE-17284: - Summary: Fix missing features on the Thin Clients documentation (was: Fix missing DataStreamer feature on the Thin Clients documentation ) > Fix missing features on the Thin Clients documentation > --- > > Key: IGNITE-17284 > URL: https://issues.apache.org/jira/browse/IGNITE-17284 > Project: Ignite > Issue Type: Task > Components: documentation >Reporter: Maxim Muzafarov >Assignee: Maxim Muzafarov >Priority: Major > Fix For: 2.13 > > > There is lack of DataStreamer support on our documentation pages: > https://ignite.apache.org/docs/latest/thin-clients/getting-started-with-thin-clients > The DataStreamer has been implemented for the .NET thin client. > https://issues.apache.org/jira/browse/IGNITE-14187 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-17284) Fix missing DataStreamer feature on the Thin Clients documentation
Maxim Muzafarov created IGNITE-17284: Summary: Fix missing DataStreamer feature on the Thin Clients documentation Key: IGNITE-17284 URL: https://issues.apache.org/jira/browse/IGNITE-17284 Project: Ignite Issue Type: Task Components: documentation Reporter: Maxim Muzafarov Assignee: Maxim Muzafarov Fix For: 2.13 DataStreamer support is missing from our documentation pages: https://ignite.apache.org/docs/latest/thin-clients/getting-started-with-thin-clients The DataStreamer has been implemented for the .NET thin client: https://issues.apache.org/jira/browse/IGNITE-14187 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (IGNITE-17282) Public thread pool's idle metric is always 0
[ https://issues.apache.org/jira/browse/IGNITE-17282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin resolved IGNITE-17282. --- Resolution: Cannot Reproduce [~liyuj], I'll close the ticket, feel free to open it again if necessary. > Public thread pool's idle metric is always 0 > > > Key: IGNITE-17282 > URL: https://issues.apache.org/jira/browse/IGNITE-17282 > Project: Ignite > Issue Type: Improvement >Affects Versions: 2.10, 2.12, 2.13, 2.11.1 >Reporter: YuJue Li >Priority: Minor > Fix For: 2.14 > > Attachments: 2022-06-30_22-21.png > > > !2022-06-30_22-21.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-17282) Public thread pool's idle metric is always 0
[ https://issues.apache.org/jira/browse/IGNITE-17282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17561115#comment-17561115 ] Pavel Pereslegin commented on IGNITE-17282: --- [~liyuj], I don't see any issues with this public pool metric. This pool allows idle threads to be terminated, each idle thread has a keep-alive timeout (60 seconds) after which it terminates. > Public thread pool's idle metric is always 0 > > > Key: IGNITE-17282 > URL: https://issues.apache.org/jira/browse/IGNITE-17282 > Project: Ignite > Issue Type: Improvement >Affects Versions: 2.10, 2.12, 2.13, 2.11.1 >Reporter: YuJue Li >Priority: Minor > Fix For: 2.14 > > Attachments: 2022-06-30_22-21.png > > > !2022-06-30_22-21.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
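The behaviour Pavel describes (idle workers terminating after a keep-alive timeout, so an "idle threads" gauge legitimately reads 0) can be reproduced with a plain {{java.util.concurrent.ThreadPoolExecutor}}. This is a standalone sketch, not Ignite's actual pool configuration; the 100 ms keep-alive stands in for Ignite's 60 seconds purely to keep the illustration fast:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class IdleThreadDemo {
    public static int idleAfterTimeout() throws InterruptedException {
        // Pool whose idle threads have a 100 ms keep-alive timeout.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 4, 100, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true); // let even core threads die when idle

        pool.submit(() -> {});             // spin up one worker thread
        Thread.sleep(500);                 // wait well past the keep-alive timeout

        int size = pool.getPoolSize();     // idle workers have terminated by now
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(idleAfterTimeout());
    }
}
```

Because every idle worker eventually terminates, a snapshot of "currently idle threads" taken after the timeout is 0, which is consistent with the metric reported in the ticket.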
[jira] [Updated] (IGNITE-17277) Add a user info to the sql queries view
[ https://issues.apache.org/jira/browse/IGNITE-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amelchev Nikita updated IGNITE-17277: - Description: Add information about the user who started a query to the SQL queries system view. > Add a user info to the sql queries view > > > Key: IGNITE-17277 > URL: https://issues.apache.org/jira/browse/IGNITE-17277 > Project: Ignite > Issue Type: Improvement >Reporter: Amelchev Nikita >Assignee: Amelchev Nikita >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Add information about the user who started a query to the SQL queries system view. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17277) Add a user info to the sql queries view
[ https://issues.apache.org/jira/browse/IGNITE-17277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amelchev Nikita updated IGNITE-17277: - Environment: (was: Add a user info who started a query to the queries system view.) > Add a user info to the sql queries view > > > Key: IGNITE-17277 > URL: https://issues.apache.org/jira/browse/IGNITE-17277 > Project: Ignite > Issue Type: Improvement >Reporter: Amelchev Nikita >Assignee: Amelchev Nikita >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17283) ItCmgRaftServiceTest should start Raft groups in parallel
[ https://issues.apache.org/jira/browse/IGNITE-17283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Bessonov updated IGNITE-17283: --- Ignite Flags: (was: Docs Required,Release Notes Required) > ItCmgRaftServiceTest should start Raft groups in parallel > - > > Key: IGNITE-17283 > URL: https://issues.apache.org/jira/browse/IGNITE-17283 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksandr Polovtcev >Assignee: Aleksandr Polovtcev >Priority: Minor > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > Time Spent: 20m > Remaining Estimate: 0h > > ItCmgRaftServiceTest starts a couple of Raft groups sequentially, so the > first group waits for other members to appear before it times out. This leads > to this test running for quite a long time. It is proposed to start these > groups in parallel. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-17283) ItCmgRaftServiceTest should start Raft groups in parallel
[ https://issues.apache.org/jira/browse/IGNITE-17283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17561099#comment-17561099 ] Ivan Bessonov commented on IGNITE-17283: Looks good, thank you for the improvement! > ItCmgRaftServiceTest should start Raft groups in parallel > - > > Key: IGNITE-17283 > URL: https://issues.apache.org/jira/browse/IGNITE-17283 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksandr Polovtcev >Assignee: Aleksandr Polovtcev >Priority: Minor > Labels: ignite-3 > Time Spent: 10m > Remaining Estimate: 0h > > ItCmgRaftServiceTest starts a couple of Raft groups sequentially, so the > first group waits for other members to appear before it times out. This leads > to this test running for quite a long time. It is proposed to start these > groups in parallel. -- This message was sent by Atlassian Jira (v8.20.10#820010)
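The proposed change (kick off every group start first, then wait, so total time is the slowest start rather than the sum) can be sketched with plain {{CompletableFuture}}s. {{startGroup}} here is a hypothetical stand-in for the per-node Raft group start, not the actual test API:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class ParallelStart {
    // Hypothetical stand-in for starting one Raft group on one node; in the
    // real test this is the start call that blocks until peers appear.
    static CompletableFuture<String> startGroup(String node) {
        return CompletableFuture.supplyAsync(() -> node + ": started");
    }

    public static List<String> startAll(List<String> nodes) {
        // Launch all starts before joining any of them, so the groups can
        // discover each other concurrently instead of sequentially timing out.
        List<CompletableFuture<String>> futs = nodes.stream()
            .map(ParallelStart::startGroup)
            .collect(Collectors.toList());
        return futs.stream().map(CompletableFuture::join).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(startAll(List.of("node0", "node1")));
    }
}
```

The key point is the two-pass shape: mapping to futures eagerly starts everything, and only then does the second pass block on completion.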
[jira] [Updated] (IGNITE-17130) Profiles support for CLI configuration
[ https://issues.apache.org/jira/browse/IGNITE-17130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Pochatkin updated IGNITE-17130: --- Description: h3. Config file side: The Ignite CLI currently has only one set of properties. We need to support multiple profiles, which means that the config must have multiple sections, one for each user profile. The INI format completely covers everything we need, so we need to migrate the config file format from properties to INI. The default profile name is [default]. h3. CLI command side: Create profile command:
{code:java}
ignite cli config create-profile
  --name profile_name (REQUIRED)
  --copy-from profile-name (OPTIONAL)
  --activate true/false (OPTIONAL, DEFAULT VALUE = false)
{code}
Activate a profile as the default profile:
{code:java}
ignite cli config use profile_name
{code}
Read/write a property with a profile:
{code:java}
ignite cli config --profile profile_name get/set propertyKey=propertyValue
{code}
Read the current profile:
{code:java}
ignite cli config profile
{code}
was: h3. Config file side: The Ignite CLI currently has only one set of properties. We need to support multiple profiles, which means that the config must have multiple sections, one for each user profile. The INI format completely covers everything we need, so we need to migrate the config file format from properties to INI. The default profile name is [default]. h3. CLI command side: Create profile command:
{code:java}
ignite cli config create-profile
  --name profile_name (REQUIRED)
  --copy-from profile-name (OPTIONAL)
  --activate true/false (OPTIONAL, DEFAULT VALUE = false)
{code}
Activate a profile as the default profile:
{code:java}
ignite cli config use profile_name
{code}
Read/write a property with a profile:
{code:java}
ignite cli config --profile profile_name get/set propertyKey=propertyValue
{code}
> Profiles support for CLI configuration > --- > > Key: IGNITE-17130 > URL: https://issues.apache.org/jira/browse/IGNITE-17130 > Project: Ignite > Issue Type: New Feature >Reporter: Mikhail Pochatkin >Assignee: Mikhail Pochatkin >Priority: Major > Labels: ignite-3, ignite-3-cli-tool > Time Spent: 50m > Remaining Estimate: 0h > > h3. Config file side: > The Ignite CLI currently has only one set of properties. We need to support > multiple profiles, which means that the config must have multiple sections, one for > each user profile. The INI format completely covers everything we need, so we need to > migrate the config file format from properties to INI. > The default profile name is [default]. > h3. CLI command side: > Create profile command: > {code:java} > ignite cli config create-profile > --name profile_name (REQUIRED) > --copy-from profile-name (OPTIONAL) > --activate true/false (OPTIONAL, DEFAULT VALUE = false){code} > Activate a profile as the default profile: > {code:java} > ignite cli config use profile_name {code} > Read/write a property with a profile: > {code:java} > ignite cli config --profile profile_name get/set propertyKey=propertyValue > {code} > Read the current profile: > {code:java} > ignite cli config profile{code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
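The profile-per-section INI layout described above can be sketched with a toy parser. The section names and the {{url}} property are illustrative assumptions, not the final CLI config format:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class IniProfiles {
    // Minimal INI reader: one map of properties per [section], where each
    // section corresponds to a CLI profile. Keys before any section header
    // fall into the "default" profile, mirroring the proposed [default] name.
    public static Map<String, Map<String, String>> parse(String ini) {
        Map<String, Map<String, String>> profiles = new LinkedHashMap<>();
        String current = "default";
        for (String raw : ini.split("\n")) {
            String line = raw.trim();
            if (line.isEmpty() || line.startsWith(";")) continue;   // skip blanks/comments
            if (line.startsWith("[") && line.endsWith("]")) {
                current = line.substring(1, line.length() - 1);     // new profile section
            } else if (line.contains("=")) {
                String[] kv = line.split("=", 2);
                profiles.computeIfAbsent(current, k -> new LinkedHashMap<>())
                        .put(kv[0].trim(), kv[1].trim());
            }
        }
        return profiles;
    }

    public static void main(String[] args) {
        String ini = "[default]\nurl=http://localhost:10300\n[staging]\nurl=http://staging:10300\n";
        System.out.println(parse(ini).keySet());
    }
}
```

A command such as {{ignite cli config use staging}} would then only need to record which section name is active.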
[jira] [Updated] (IGNITE-17279) Mapping of partition states to nodes can erroneously skip lost partitions on the coordinator node
[ https://issues.apache.org/jira/browse/IGNITE-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-17279: - Description: It seems that a coordinator node does not correctly update node2part mapping for lost partitions. {noformat} [test-runner-#1%distributed.CachePartitionLostAfterSupplierHasLeftTest%][root] dump partitions state for : preload sync futures nodeId=b57ca812-416d-40d7-bb4f-27199490 consistentId=distributed.CachePartitionLostAfterSupplierHasLeftTest0 isDone=true nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 consistentId=distributed.CachePartitionLostAfterSupplierHasLeftTest1 isDone=true rebalance futures nodeId=b57ca812-416d-40d7-bb4f-27199490 isDone=true res=true topVer=null remaining: {} nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 isDone=true res=false topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0] remaining: {} partition state localNodeId=b57ca812-416d-40d7-bb4f-27199490 grid=distributed.CachePartitionLostAfterSupplierHasLeftTest0 local part=0 counters=Counter [lwm=200, missed=[], maxApplied=200, hwm=200] fullSize=200 *state=LOST* reservations=0 isAffNode=true nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 part=0 *state=LOST* isAffNode=true ... localNodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 grid=distributed.CachePartitionLostAfterSupplierHasLeftTest1 local part=0 counters=Counter [lwm=0, missed=[], maxApplied=0, hwm=0] fullSize=100 *state=LOST* reservations=0 isAffNode=true nodeId=b57ca812-416d-40d7-bb4f-27199490 part=0 *state=OWNING* isAffNode=true ... {noformat} was: It seems that a coordinator node does not correctly update node2part mapping for lost partitions. 
{noformat} [test-runner-#1%distributed.CachePartitionLostAfterSupplierHasLeftTest%][root] dump partitions state for : preload sync futures nodeId=b57ca812-416d-40d7-bb4f-27199490 consistentId=distributed.CachePartitionLostAfterSupplierHasLeftTest0 isDone=true nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 consistentId=distributed.CachePartitionLostAfterSupplierHasLeftTest1 isDone=true rebalance futures nodeId=b57ca812-416d-40d7-bb4f-27199490 isDone=true res=true topVer=null remaining: {} nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 isDone=true res=false topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0] remaining: {} partition state localNodeId=b57ca812-416d-40d7-bb4f-27199490 grid=distributed.CachePartitionLostAfterSupplierHasLeftTest0 local part=0 counters=Counter [lwm=200, missed=[], maxApplied=200, hwm=200] fullSize=200 state=LOST reservations=0 isAffNode=true nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 part=0 state=LOST isAffNode=true ... localNodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 grid=distributed.CachePartitionLostAfterSupplierHasLeftTest1 local part=0 counters=Counter [lwm=0, missed=[], maxApplied=0, hwm=0] fullSize=100 state=LOST reservations=0 isAffNode=true nodeId=b57ca812-416d-40d7-bb4f-27199490 part=0 state=OWNING isAffNode=true ... {noformat} > Mapping of partition states to nodes can erroneously skip lost partitions on > the coordinator node > - > > Key: IGNITE-17279 > URL: https://issues.apache.org/jira/browse/IGNITE-17279 > Project: Ignite > Issue Type: Bug >Reporter: Vyacheslav Koptilin >Assignee: Vyacheslav Koptilin >Priority: Minor > > It seems that a coordinator node does not correctly update node2part mapping > for lost partitions. 
> {noformat} > [test-runner-#1%distributed.CachePartitionLostAfterSupplierHasLeftTest%][root] > dump partitions state for : > preload sync futures > nodeId=b57ca812-416d-40d7-bb4f-27199490 > consistentId=distributed.CachePartitionLostAfterSupplierHasLeftTest0 > isDone=true > nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 > consistentId=distributed.CachePartitionLostAfterSupplierHasLeftTest1 > isDone=true > rebalance futures > nodeId=b57ca812-416d-40d7-bb4f-27199490 isDone=true res=true topVer=null > remaining: {} > nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 isDone=true res=false > topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0] > remaining: {} > partition state > localNodeId=b57ca812-416d-40d7-bb4f-27199490 > grid=distributed.CachePartitionLostAfterSupplierHasLeftTest0 > local part=0 counters=Counter [lwm=200, missed=[], maxApplied=200, hwm=200] > fullSize=200 *state=LOST* reservations=0 isAffNode=true > nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 part=0 *state=LOST* > isAffNode=true > ... > localNodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 > grid=distributed.CachePartitionLostAfterSupplierHasLeftTest1 > local part=0 counters=Counter [lwm=0, missed=[], maxApplied=0, hwm=0] > fullSize=100 *state=LOST* rese
[jira] [Updated] (IGNITE-17279) Mapping of partition states to nodes can erroneously skip lost partitions on the coordinator node
[ https://issues.apache.org/jira/browse/IGNITE-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-17279: - Description: It seems that a coordinator node does not correctly update node2part mapping for lost partitions. {noformat} [test-runner-#1%distributed.CachePartitionLostAfterSupplierHasLeftTest%][root] dump partitions state for : preload sync futures nodeId=b57ca812-416d-40d7-bb4f-27199490 consistentId=distributed.CachePartitionLostAfterSupplierHasLeftTest0 isDone=true nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 consistentId=distributed.CachePartitionLostAfterSupplierHasLeftTest1 isDone=true rebalance futures nodeId=b57ca812-416d-40d7-bb4f-27199490 isDone=true res=true topVer=null remaining: {} nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 isDone=true res=false topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0] remaining: {} partition state localNodeId=b57ca812-416d-40d7-bb4f-27199490 grid=distributed.CachePartitionLostAfterSupplierHasLeftTest0 local part=0 counters=Counter [lwm=200, missed=[], maxApplied=200, hwm=200] fullSize=200 state=LOST reservations=0 isAffNode=true nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 part=0 state=LOST isAffNode=true ... localNodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 grid=distributed.CachePartitionLostAfterSupplierHasLeftTest1 local part=0 counters=Counter [lwm=0, missed=[], maxApplied=0, hwm=0] fullSize=100 state=LOST reservations=0 isAffNode=true nodeId=b57ca812-416d-40d7-bb4f-27199490 part=0 state=OWNING isAffNode=true ... {noformat} was:It seems that a coordinator node does not correctly update node2part mapping for lost partitions. 
> Mapping of partition states to nodes can erroneously skip lost partitions on > the coordinator node > - > > Key: IGNITE-17279 > URL: https://issues.apache.org/jira/browse/IGNITE-17279 > Project: Ignite > Issue Type: Bug >Reporter: Vyacheslav Koptilin >Assignee: Vyacheslav Koptilin >Priority: Minor > > It seems that a coordinator node does not correctly update node2part mapping > for lost partitions. > {noformat} > [test-runner-#1%distributed.CachePartitionLostAfterSupplierHasLeftTest%][root] > dump partitions state for : > preload sync futures > nodeId=b57ca812-416d-40d7-bb4f-27199490 > consistentId=distributed.CachePartitionLostAfterSupplierHasLeftTest0 > isDone=true > nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 > consistentId=distributed.CachePartitionLostAfterSupplierHasLeftTest1 > isDone=true > rebalance futures > nodeId=b57ca812-416d-40d7-bb4f-27199490 isDone=true res=true topVer=null > remaining: {} > nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 isDone=true res=false > topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0] > remaining: {} > partition state > localNodeId=b57ca812-416d-40d7-bb4f-27199490 > grid=distributed.CachePartitionLostAfterSupplierHasLeftTest0 > local part=0 counters=Counter [lwm=200, missed=[], maxApplied=200, hwm=200] > fullSize=200 state=LOST reservations=0 isAffNode=true > nodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 part=0 state=LOST isAffNode=true > ... > localNodeId=20fdfa4a-ddf6-4229-b25e-38cd8d31 > grid=distributed.CachePartitionLostAfterSupplierHasLeftTest1 > local part=0 counters=Counter [lwm=0, missed=[], maxApplied=0, hwm=0] > fullSize=100 state=LOST reservations=0 isAffNode=true > nodeId=b57ca812-416d-40d7-bb4f-27199490 part=0 state=OWNING > isAffNode=true > ... > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-17280) ItComputeTest.executesColocatedByClassNameWithTupleKey failed on TC
[ https://issues.apache.org/jira/browse/IGNITE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17561072#comment-17561072 ] Roman Puchkovskiy commented on IGNITE-17280: Closed as a duplicate of IGNITE-17048 > ItComputeTest.executesColocatedByClassNameWithTupleKey failed on TC > --- > > Key: IGNITE-17280 > URL: https://issues.apache.org/jira/browse/IGNITE-17280 > Project: Ignite > Issue Type: Bug > Components: compute >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > > [https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_IntegrationTests_ModuleRunner/6655808] > > java.lang.AssertionError: Raft groups are still running > [b839ce7f-370c-4553-882e-34b471029c9c_part_0, > b839ce7f-370c-4553-882e-34b471029c9c_part_19, > b839ce7f-370c-4553-882e-34b471029c9c_part_8, > b839ce7f-370c-4553-882e-34b471029c9c_part_11, > b839ce7f-370c-4553-882e-34b471029c9c_part_4, > b839ce7f-370c-4553-882e-34b471029c9c_part_15, > b839ce7f-370c-4553-882e-34b471029c9c_part_13, > b839ce7f-370c-4553-882e-34b471029c9c_part_3, > b839ce7f-370c-4553-882e-34b471029c9c_part_14] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (IGNITE-17280) ItComputeTest.executesColocatedByClassNameWithTupleKey failed on TC
[ https://issues.apache.org/jira/browse/IGNITE-17280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy resolved IGNITE-17280. Resolution: Duplicate > ItComputeTest.executesColocatedByClassNameWithTupleKey failed on TC > --- > > Key: IGNITE-17280 > URL: https://issues.apache.org/jira/browse/IGNITE-17280 > Project: Ignite > Issue Type: Bug > Components: compute >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > > [https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_IntegrationTests_ModuleRunner/6655808] > > java.lang.AssertionError: Raft groups are still running > [b839ce7f-370c-4553-882e-34b471029c9c_part_0, > b839ce7f-370c-4553-882e-34b471029c9c_part_19, > b839ce7f-370c-4553-882e-34b471029c9c_part_8, > b839ce7f-370c-4553-882e-34b471029c9c_part_11, > b839ce7f-370c-4553-882e-34b471029c9c_part_4, > b839ce7f-370c-4553-882e-34b471029c9c_part_15, > b839ce7f-370c-4553-882e-34b471029c9c_part_13, > b839ce7f-370c-4553-882e-34b471029c9c_part_3, > b839ce7f-370c-4553-882e-34b471029c9c_part_14] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17283) ItCmgRaftServiceTest should start Raft groups in parallel
[ https://issues.apache.org/jira/browse/IGNITE-17283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtcev updated IGNITE-17283: - Issue Type: Improvement (was: Bug) > ItCmgRaftServiceTest should start Raft groups in parallel > - > > Key: IGNITE-17283 > URL: https://issues.apache.org/jira/browse/IGNITE-17283 > Project: Ignite > Issue Type: Improvement >Reporter: Aleksandr Polovtcev >Assignee: Aleksandr Polovtcev >Priority: Minor > Labels: ignite-3 > > ItCmgRaftServiceTest starts a couple of Raft groups sequentially, so the > first group waits for other members to appear before it times out. This leads > to this test running for quite a long time. It is proposed to start these > groups in parallel. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-17283) ItCmgRaftServiceTest should start Raft groups in parallel
Aleksandr Polovtcev created IGNITE-17283: Summary: ItCmgRaftServiceTest should start Raft groups in parallel Key: IGNITE-17283 URL: https://issues.apache.org/jira/browse/IGNITE-17283 Project: Ignite Issue Type: Bug Reporter: Aleksandr Polovtcev Assignee: Aleksandr Polovtcev ItCmgRaftServiceTest starts a couple of Raft groups sequentially, so the first group waits for other members to appear before it times out. This leads to this test running for quite a long time. It is proposed to start these groups in parallel. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-17193) Map IgniteException to problem json
[ https://issues.apache.org/jira/browse/IGNITE-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17561071#comment-17561071 ] Aleksandr commented on IGNITE-17193: Hi [~slava.koptilin], Thanks for the review, I've fixed all comments. > Map IgniteException to problem json > --- > > Key: IGNITE-17193 > URL: https://issues.apache.org/jira/browse/IGNITE-17193 > Project: Ignite > Issue Type: Task >Reporter: Aleksandr >Assignee: Aleksandr >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > Time Spent: 40m > Remaining Estimate: 0h > > According to https://www.rfc-editor.org/rfc/rfc7807.html HTTP API has to > return application/problem+json if an error happens. > https://cwiki.apache.org/confluence/display/IGNITE/IEP-84%3A+Error+handling > describes how Ignite 3 throws exceptions. > The aim of this ticket is to map IgniteException to application/problem+json. > Note, that there is no implementation of IEP-84. So, leave TODOs with Jira > tickets where it is needed. > Mapping strategy: > “title”: a short, human-readable summary of the problem type > “status”: HTTP status code > “code”: error code > “type”: URI to the error documentation (optional) > “detail”: a human-readable explanation of the problem (optional) > “invalid-params”: list of parameters that did not pass the validation > (optional) > “node”: Ignite 3 node name (optional) > “trace-id”: unique identifier that will help to trace the error in the log > (optional). -- This message was sent by Atlassian Jira (v8.20.10#820010)
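Under the mapping strategy listed above, an error response body might look like this (an illustrative sketch: all field values, the error code, and the documentation URI are invented for the example, not taken from Ignite):

```json
{
  "title": "Table already exists",
  "status": 409,
  "code": "TBL-0001",
  "type": "https://example.org/errors/TBL-0001",
  "detail": "Table 'PUBLIC.ACCOUNTS' already exists",
  "invalid-params": [
    { "name": "tableName", "reason": "must be unique within the schema" }
  ],
  "node": "node1",
  "trace-id": "6f8ba937-6a9c-4a47-93f9-4b1f1bf3f2a1"
}
```

Per RFC 7807 such a body is served with the `application/problem+json` content type; the optional members (`type`, `detail`, `invalid-params`, `node`, `trace-id`) can simply be omitted when unknown.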
[jira] [Updated] (IGNITE-17281) Return error groups from REST API
[ https://issues.apache.org/jira/browse/IGNITE-17281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr updated IGNITE-17281: --- Labels: iep-87 ignite-3 (was: ignite-3) > Return error groups from REST API > - > > Key: IGNITE-17281 > URL: https://issues.apache.org/jira/browse/IGNITE-17281 > Project: Ignite > Issue Type: Task >Reporter: Aleksandr >Priority: Major > Labels: iep-87, ignite-3 > > After IGNITE-14931 and IGNITE-17193 are done, traceId and human-readable > error code should be returned from any rest component. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-17282) Public thread pool's idle metric is always 0
YuJue Li created IGNITE-17282: - Summary: Public thread pool's idle metric is always 0 Key: IGNITE-17282 URL: https://issues.apache.org/jira/browse/IGNITE-17282 Project: Ignite Issue Type: Improvement Affects Versions: 2.11.1, 2.13, 2.12, 2.10 Reporter: YuJue Li Fix For: 2.14 Attachments: 2022-06-30_22-21.png !2022-06-30_22-21.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-17281) Return error groups from REST API
Aleksandr created IGNITE-17281: -- Summary: Return error groups from REST API Key: IGNITE-17281 URL: https://issues.apache.org/jira/browse/IGNITE-17281 Project: Ignite Issue Type: Task Reporter: Aleksandr After IGNITE-14931 and IGNITE-17193 are done, traceId and human-readable error code should be returned from any rest component. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17272) Logical recovery works incorrectly for encrypted caches
[ https://issues.apache.org/jira/browse/IGNITE-17272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtcev updated IGNITE-17272: - Reviewer: Ivan Bessonov > Logical recovery works incorrectly for encrypted caches > --- > > Key: IGNITE-17272 > URL: https://issues.apache.org/jira/browse/IGNITE-17272 > Project: Ignite > Issue Type: Bug >Reporter: Aleksandr Polovtcev >Assignee: Aleksandr Polovtcev >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > When encryption is enabled for a particular cache, its WAL records get > encrypted and wrapped in an {{EncryptedRecord}}. This encrypted record type > is considered a {{PHYSICAL}} record, which leads to such records being > omitted during logical recovery regardless of the fact that it can contain > logical records. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-17272) Logical recovery works incorrectly for encrypted caches
[ https://issues.apache.org/jira/browse/IGNITE-17272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17561042#comment-17561042 ] Ignite TC Bot commented on IGNITE-17272: {panel:title=Branch: [pull/10122/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [pull/10122/head] Base: [master] : New Tests (1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1} {color:#8b}PDS (Indexing){color} [[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=6656823]] * {color:#013220}IgnitePdsWithIndexingCoreTestSuite: IgniteLogicalRecoveryEncryptionTest.testRecoverPartitionStates - PASSED{color} {panel} [TeamCity *--> Run :: All* Results|https://ci.ignite.apache.org/viewLog.html?buildId=6656860&buildTypeId=IgniteTests24Java8_RunAll] > Logical recovery works incorrectly for encrypted caches > --- > > Key: IGNITE-17272 > URL: https://issues.apache.org/jira/browse/IGNITE-17272 > Project: Ignite > Issue Type: Bug >Reporter: Aleksandr Polovtcev >Assignee: Aleksandr Polovtcev >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > When encryption is enabled for a particular cache, its WAL records get > encrypted and wrapped in an {{EncryptedRecord}}. This encrypted record type > is considered a {{PHYSICAL}} record, which leads to such records being > omitted during logical recovery regardless of the fact that it can contain > logical records. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-17280) ItComputeTest.executesColocatedByClassNameWithTupleKey failed on TC
Roman Puchkovskiy created IGNITE-17280: -- Summary: ItComputeTest.executesColocatedByClassNameWithTupleKey failed on TC Key: IGNITE-17280 URL: https://issues.apache.org/jira/browse/IGNITE-17280 Project: Ignite Issue Type: Bug Components: compute Reporter: Roman Puchkovskiy Assignee: Roman Puchkovskiy Fix For: 3.0.0-alpha6 [https://ci.ignite.apache.org/buildConfiguration/ignite3_Test_IntegrationTests_ModuleRunner/6655808] java.lang.AssertionError: Raft groups are still running [b839ce7f-370c-4553-882e-34b471029c9c_part_0, b839ce7f-370c-4553-882e-34b471029c9c_part_19, b839ce7f-370c-4553-882e-34b471029c9c_part_8, b839ce7f-370c-4553-882e-34b471029c9c_part_11, b839ce7f-370c-4553-882e-34b471029c9c_part_4, b839ce7f-370c-4553-882e-34b471029c9c_part_15, b839ce7f-370c-4553-882e-34b471029c9c_part_13, b839ce7f-370c-4553-882e-34b471029c9c_part_3, b839ce7f-370c-4553-882e-34b471029c9c_part_14] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17279) Mapping of partition states to nodes can erroneously skip lost partitions on the coordinator node
[ https://issues.apache.org/jira/browse/IGNITE-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-17279: - Description: It seems that a coordinator node does not correctly update node2part mapping for lost partitions. > Mapping of partition states to nodes can erroneously skip lost partitions on > the coordinator node > - > > Key: IGNITE-17279 > URL: https://issues.apache.org/jira/browse/IGNITE-17279 > Project: Ignite > Issue Type: Bug >Reporter: Vyacheslav Koptilin >Assignee: Vyacheslav Koptilin >Priority: Minor > > It seems that a coordinator node does not correctly update node2part mapping > for lost partitions. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17279) Mapping of partition states to nodes can erroneously skip lost partitions on the coordinator node
[ https://issues.apache.org/jira/browse/IGNITE-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-17279: - Priority: Minor (was: Major) > Mapping of partition states to nodes can erroneously skip lost partitions on > the coordinator node > - > > Key: IGNITE-17279 > URL: https://issues.apache.org/jira/browse/IGNITE-17279 > Project: Ignite > Issue Type: Bug >Reporter: Vyacheslav Koptilin >Assignee: Vyacheslav Koptilin >Priority: Minor > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17279) Mapping of partition states to nodes can erroneously skip lost partitions on the coordinator node
[ https://issues.apache.org/jira/browse/IGNITE-17279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-17279: - Ignite Flags: (was: Docs Required,Release Notes Required) > Mapping of partition states to nodes can erroneously skip lost partitions on > the coordinator node > - > > Key: IGNITE-17279 > URL: https://issues.apache.org/jira/browse/IGNITE-17279 > Project: Ignite > Issue Type: Bug >Reporter: Vyacheslav Koptilin >Assignee: Vyacheslav Koptilin >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-17279) Mapping of partition states to nodes can erroneously skip lost partitions on the coordinator node
Vyacheslav Koptilin created IGNITE-17279: Summary: Mapping of partition states to nodes can erroneously skip lost partitions on the coordinator node Key: IGNITE-17279 URL: https://issues.apache.org/jira/browse/IGNITE-17279 Project: Ignite Issue Type: Bug Reporter: Vyacheslav Koptilin Assignee: Vyacheslav Koptilin -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-17193) Map IgniteException to problem json
[ https://issues.apache.org/jira/browse/IGNITE-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17561032#comment-17561032 ] Vyacheslav Koptilin commented on IGNITE-17193: -- Hello [~aleksandr.pakhomov], In general, the proposed patch looks good to me. I left a few comments in your PR. Please take a look. > Map IgniteException to problem json > --- > > Key: IGNITE-17193 > URL: https://issues.apache.org/jira/browse/IGNITE-17193 > Project: Ignite > Issue Type: Task >Reporter: Aleksandr >Assignee: Aleksandr >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > Time Spent: 0.5h > Remaining Estimate: 0h > > According to https://www.rfc-editor.org/rfc/rfc7807.html HTTP API has to > return application/problem+json if an error happens. > https://cwiki.apache.org/confluence/display/IGNITE/IEP-84%3A+Error+handling > describes how Ignite 3 throws exceptions. > The aim of this ticket is to map IgniteException to application/problem+json. > Note, that there is no implementation of IEP-84. So, leave TODOs with Jira > tickets where it is needed. > Mapping strategy: > “title”: a short, human-readable summary of the problem type > “status”: HTTP status code > “code”: error code > “type”: URI to the error documentation (optional) > “detail”: a human-readable explanation of the problem (optional) > “invalid-params”: list of parameters that did not pass the validation > (optional) > “node”: Ignite 3 node name (optional) > “trace-id”: unique identifier that will help to trace the error in the log > (optional). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17193) Map IgniteException to problem json
[ https://issues.apache.org/jira/browse/IGNITE-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-17193: - Ignite Flags: (was: Docs Required,Release Notes Required) > Map IgniteException to problem json > --- > > Key: IGNITE-17193 > URL: https://issues.apache.org/jira/browse/IGNITE-17193 > Project: Ignite > Issue Type: Task >Reporter: Aleksandr >Assignee: Aleksandr >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > Time Spent: 0.5h > Remaining Estimate: 0h > > According to https://www.rfc-editor.org/rfc/rfc7807.html HTTP API has to > return application/problem+json if an error happens. > https://cwiki.apache.org/confluence/display/IGNITE/IEP-84%3A+Error+handling > describes how Ignite 3 throws exceptions. > The aim of this ticket is to map IgniteException to application/problem+json. > Note, that there is no implementation of IEP-84. So, leave TODOs with Jira > tickets where it is needed. > Mapping strategy: > “title”: a short, human-readable summary of the problem type > “status”: HTTP status code > “code”: error code > “type”: URI to the error documentation (optional) > “detail”: a human-readable explanation of the problem (optional) > “invalid-params”: list of parameters that did not pass the validation > (optional) > “node”: Ignite 3 node name (optional) > “trace-id”: unique identifier that will help to trace the error in the log > (optional). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17193) Map IgniteException to problem json
[ https://issues.apache.org/jira/browse/IGNITE-17193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-17193: - Reviewer: Slava Koptilin > Map IgniteException to problem json > --- > > Key: IGNITE-17193 > URL: https://issues.apache.org/jira/browse/IGNITE-17193 > Project: Ignite > Issue Type: Task >Reporter: Aleksandr >Assignee: Aleksandr >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > Time Spent: 0.5h > Remaining Estimate: 0h > > According to https://www.rfc-editor.org/rfc/rfc7807.html HTTP API has to > return application/problem+json if an error happens. > https://cwiki.apache.org/confluence/display/IGNITE/IEP-84%3A+Error+handling > describes how Ignite 3 throws exceptions. > The aim of this ticket is to map IgniteException to application/problem+json. > Note, that there is no implementation of IEP-84. So, leave TODOs with Jira > tickets where it is needed. > Mapping strategy: > “title”: a short, human-readable summary of the problem type > “status”: HTTP status code > “code”: error code > “type”: URI to the error documentation (optional) > “detail”: a human-readable explanation of the problem (optional) > “invalid-params”: list of parameters that did not pass the validation > (optional) > “node”: Ignite 3 node name (optional) > “trace-id”: unique identifier that will help to trace the error in the log > (optional). -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-16655) Volatile RAFT log for pure in-memory storages
[ https://issues.apache.org/jira/browse/IGNITE-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Puchkovskiy reassigned IGNITE-16655: -- Assignee: Roman Puchkovskiy > Volatile RAFT log for pure in-memory storages > - > > Key: IGNITE-16655 > URL: https://issues.apache.org/jira/browse/IGNITE-16655 > Project: Ignite > Issue Type: Improvement >Reporter: Sergey Chugunov >Assignee: Roman Puchkovskiy >Priority: Major > Labels: iep-74, ignite-3 > > h3. Original issue description > For in-memory storage Raft logging can be optimized as we don't need to have > it active when topology is stable. > Each write can directly go to in-memory storage at much lower cost than > synchronizing it with disk so it is possible to avoid writing Raft log. > As nodes don't have any state and always join cluster clean we always need to > transfer full snapshot during rebalancing - no need to keep long Raft log for > historical rebalancing purposes. > So we need to implement API for Raft component enabling configuration of Raft > logging process. > h3. More detailed description > Apparently, we can't completely ignore writing to log. There are several > situations where it needs to be collected: > * During a regular workload, each node needs to have a small portion of log > in case if it becomes a leader. There might be a number of "slow" nodes > outside of "quorum" that require older data to be re-sent to them. Log entry > can be truncated only when all nodes reply with "ack" or fail, otherwise log > entry should be preserved. > * During a clean node join - it will need to apply part of the log that > wasn't included in the full-rebalance snapshot. So, everything, starting with > snapshots applied index, will have to be preserved. > It feels like the second option is just a special case of the first one - we > can't truncate log until we receive all acks. And we can't receive an ack > from the joining node until it finishes its rebalancing procedure. 
> So, it all comes down to aggressive log truncation to keep the log short. > The preserved log can be quite big in reality, so there must be a disk offloading > operation available. > The easiest way to achieve it is to write into a RocksDB instance with WAL > disabled. It'll store everything in memory until the flush, and even then the > amount of flushed data will be small on stable topology. Absence of WAL is > not an issue: the entire RocksDB instance can be dropped on restart, since it's > supposed to be volatile. > To avoid even the smallest flush, we can use an additional volatile structure, > like a ring buffer or a concurrent map, to store part of the log, and transfer > records into RocksDB only on structure overflow. This sounds more complicated > and makes memory management more difficult, but we should take it into > consideration anyway. > * Potentially, we could use a volatile page memory region for this purpose, > since it already has good control over the amount of memory used. But > memory overflow should be carefully processed: usually it's treated as an > error and might even cause node failure. -- This message was sent by Atlassian Jira (v8.20.10#820010)
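The "volatile structure with spill-on-overflow" idea described above can be sketched with plain JDK collections (illustrative only: a `HashMap` stands in for the WAL-less RocksDB instance, entries are plain strings, and none of these class or method names exist in Ignite):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;

// Sketch: recent log entries stay in a bounded in-memory deque; on overflow
// the oldest entry is moved to a slower spill store (RocksDB in the ticket).
class SpillingLog {
    private final int inMemoryLimit;
    private final ArrayDeque<String> hot = new ArrayDeque<>(); // volatile tail of the log
    private final Map<Long, String> spilled = new HashMap<>(); // stand-in for RocksDB
    private long firstHotIndex = 0;                            // log index of hot.peekFirst()

    SpillingLog(int inMemoryLimit) {
        this.inMemoryLimit = inMemoryLimit;
    }

    void append(String entry) {
        hot.addLast(entry);
        if (hot.size() > inMemoryLimit) {
            // Overflow: offload the oldest in-memory entry to the spill store.
            spilled.put(firstHotIndex++, hot.pollFirst());
        }
    }

    int inMemorySize() {
        return hot.size();
    }

    int spilledSize() {
        return spilled.size();
    }
}
```

On stable topology the deque rarely overflows, so the spill store stays almost empty, which mirrors the ticket's observation that flushed data would be small; truncation (dropping acked entries from both structures) is omitted for brevity.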
[jira] [Updated] (IGNITE-17251) IgniteWalConverter is unable to show WAL of encrypted caches
[ https://issues.apache.org/jira/browse/IGNITE-17251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Chugunov updated IGNITE-17251: - Fix Version/s: 2.14 > IgniteWalConverter is unable to show WAL of encrypted caches > > > Key: IGNITE-17251 > URL: https://issues.apache.org/jira/browse/IGNITE-17251 > Project: Ignite > Issue Type: Bug >Reporter: Aleksandr Polovtcev >Assignee: Aleksandr Polovtcev >Priority: Minor > Fix For: 2.14 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > {{RecordDataV1Serializer}} contains a bug when parsing WALs of encrypted > caches. This leads to {{StandaloneWalRecordsIterator}} incorrectly > interpreting encrypted WAL records thus failing to parse the whole WAL if any > of these records are encountered. > The easiest way to reproduce the issue is to populate a cache and print its > WAL with and without encryption enabled. WAL output with encryption enabled > will be significantly shorter. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-17251) IgniteWalConverter is unable to show WAL of encrypted caches
[ https://issues.apache.org/jira/browse/IGNITE-17251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17560998#comment-17560998 ] Roman Puchkovskiy commented on IGNITE-17251: The patch looks good to me, thanks! > IgniteWalConverter is unable to show WAL of encrypted caches > > > Key: IGNITE-17251 > URL: https://issues.apache.org/jira/browse/IGNITE-17251 > Project: Ignite > Issue Type: Bug >Reporter: Aleksandr Polovtcev >Assignee: Aleksandr Polovtcev >Priority: Minor > Time Spent: 1.5h > Remaining Estimate: 0h > > {{RecordDataV1Serializer}} contains a bug when parsing WALs of encrypted > caches. This leads to {{StandaloneWalRecordsIterator}} incorrectly > interpreting encrypted WAL records thus failing to parse the whole WAL if any > of these records are encountered. > The easiest way to reproduce the issue is to populate a cache and print its > WAL with and without encryption enabled. WAL output with encryption enabled > will be significantly shorter. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-17278) TableManager#directTableIds can't be implemented effectively
Ivan Bessonov created IGNITE-17278: -- Summary: TableManager#directTableIds can't be implemented effectively Key: IGNITE-17278 URL: https://issues.apache.org/jira/browse/IGNITE-17278 Project: Ignite Issue Type: Improvement Reporter: Ivan Bessonov Assignee: Ivan Bessonov I propose adding a special method "internalIds" to the direct proxy, so that there is no need to read all tables. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-17246) Get rid of the index partition
[ https://issues.apache.org/jira/browse/IGNITE-17246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17560984#comment-17560984 ] Aleksandr Polovtcev commented on IGNITE-17246: -- Looks good, thank you > Get rid of the index partition > -- > > Key: IGNITE-17246 > URL: https://issues.apache.org/jira/browse/IGNITE-17246 > Project: Ignite > Issue Type: Improvement >Reporter: Kirill Tkalenko >Assignee: Kirill Tkalenko >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > Time Spent: 4h 50m > Remaining Estimate: 0h > > Since indexes will be stored in partition files, index partitions should be > disposed of in the current codebase. > See: > * *org.apache.ignite.internal.pagememory.persistence.store.PageStore#TYPE_IDX* > * *org.apache.ignite.internal.pagememory.PageIdAllocator#INDEX_PARTITION* > * > *org.apache.ignite.internal.pagememory.persistence.store.GroupPageStoreHolder* > * > *org.apache.ignite.internal.pagememory.persistence.store.FilePageStoreManager#INDEX_FILE_NAME* > It will also be useful to correct the flaky > [FilePageStoreManagerTest#testStopAllGroupFilePageStores|https://ci.ignite.apache.org/test/6999203413272911470?currentProjectId=ignite3_Test&branch=%3Cdefault%3E]. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-16760) Performance degradation of IgniteTables#tables after configuration changes
[ https://issues.apache.org/jira/browse/IGNITE-16760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Taras Ledkov updated IGNITE-16760: -- Description: Performance degradation of configuration changes: Steps to reproduce: 1. Start cluster with 3 nodes 2. Run in the loop {code} CREATE TABLE TEST(ID INTEGER PRIMARY KEY, V INTEGER) for (Table t : ign.tables().tables()) { ; } {code} Initially {{IgniteTables#tables}} takes ~0.7 sec. The operation time grows with each iteration; after ~100 iterations it is about 20 sec. was: Performance degradation of configuration changes: Steps to reproduce: 1. Start cluster with 3 nodes 2. Run in the loop {code} CREATE TABLE TEST(ID INTEGER PRIMARY KEY, V INTEGER) DROP TABLE TEST {code} On begin {{IgniteTables#tables}} takes ~ 0.7 sec. The time of the operation is grown. The time after ~100 iteration is about 20 sec. > Performance degradation of IgniteTables#tables after configuration changes > -- > > Key: IGNITE-16760 > URL: https://issues.apache.org/jira/browse/IGNITE-16760 > Project: Ignite > Issue Type: Bug >Affects Versions: 3.0.0-alpha4 >Reporter: Taras Ledkov >Priority: Major > Labels: ignite-3 > > Performance degradation of configuration changes: > Steps to reproduce: > 1. Start cluster with 3 nodes > 2. Run in the loop > {code} > CREATE TABLE TEST(ID INTEGER PRIMARY KEY, V INTEGER) > for (Table t : ign.tables().tables()) { > ; > } > {code} > Initially {{IgniteTables#tables}} takes ~0.7 sec. > The operation time grows with each iteration; > after ~100 iterations it is about 20 sec. -- This message was sent by Atlassian Jira (v8.20.10#820010)
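The degradation can be quantified with a small timing harness like the one below. It is a generic sketch: the `Supplier` is a stand-in for the real Ignite call (`ign.tables().tables()` after each `CREATE TABLE`), so per-iteration growth becomes visible in the collected samples.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

/** Minimal per-iteration latency harness; the Supplier stands in for the Ignite call. */
public class DegradationHarness {
    /** Runs op `iterations` times and returns each iteration's latency in nanoseconds. */
    static List<Long> measure(int iterations, Supplier<?> op) {
        List<Long> samplesNanos = new ArrayList<>(iterations);
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            op.get();
            samplesNanos.add(System.nanoTime() - start);
        }
        return samplesNanos;
    }
}
```

Plotting (or simply printing) the samples makes the "~0.7 sec initially, ~20 sec after ~100 iterations" trend easy to confirm.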
[jira] [Updated] (IGNITE-17252) Introduce Replica, ReplicaServer(?), ReplicaService and ReplicaListener interfaces
[ https://issues.apache.org/jira/browse/IGNITE-17252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Lapin updated IGNITE-17252: - Summary: Introduce Replica, ReplicaServer(?), ReplicaService and ReplicaListener interfaces (was: Introduce Replica, ReplicaService and ReplicaListener interfaces) > Introduce Replica, ReplicaServer(?), ReplicaService and ReplicaListener > interfaces > -- > > Key: IGNITE-17252 > URL: https://issues.apache.org/jira/browse/IGNITE-17252 > Project: Ignite > Issue Type: Improvement >Reporter: Alexander Lapin >Priority: Major > Labels: ignite-3, transaction3_rw > > h2. General context > According to tx design document new abstraction is introduced to encapsulate > replication engine (e.g. Raft) from business logic, called {*}primary > replica{*}: > {code:java} > A primary replica is a replica which serves a special purpose in the > transaction protocol.Only one primary replica can exist at a time. Each > replica is identified by liveness interval (startTs, endTs). All such > intervals are disjoint, so the new primary replica liveness interval can’t > overlap with the previous. Timestamps used for defining the intervals must be > comparable with timestamps assigned to committing transactions. For example, > HLC timestamps can be used for this purpose. > Primary replica is used to execute CC protocol (so all reads and writes go > through it), thus maintaining serializable executions, as described in the > next section. > The simplest implementation would be piggy-backing to RAFT protocol for tying > a primary replica to a RAFT leader. See the leaseholder section from the RAFT > paper for details. For this approach a RAFT leader is identical to a primary > replica node. The endTs is constantly extended using RAFT heart beating. > A primary replica’s status can be voluntarily transferred to another replica. > This is only possible after its liveness interval expires. 
This can be > useful, for example, for balancing RAFT leaders. {code} > Besides obvious lease-based disjoint replication leader detection, the primary > replica is also responsible for handling messages, acting as a storage and > replication pre-and-post-processor. It's up to the replica to > * acquire, release and await locks > * propagate requests to the storage directly > * convert a message to an appropriate replication (Raft) command and propagate > it to the replication engine. > Let's check the following example: > *As-Is (currently):* > {code:java} > // client-side > InternalTable.upsert() > enlistInTx() > raftService.run(upsertCommand) > raftGroupService.sendWithRetry(ActionRequest.of(upsertCommand)) > messagingService().invoke(actionRequest) > // server-side > ActionRequestProcessor.handleRequest(actionRequest) > future = > JraftServerImpl.DelegatingStateMachine.getListener().onBeforeApply(request.command()); > // Lock management > future.handle(actionRequest.command() instanceof WriteCommand ? > applyWrite(actionRequest) : applyRead(actionRequest)){code} > Please pay attention to the *onBeforeApply* step. It was introduced in order to > manage (acquire) locks, with further lock awaiting, *outside* the raft. It is > critical not to occupy the linearized in-raft execution with such lengthy > operations as waiting for locks to be released. > It is worth mentioning that this approach has several disadvantages, e.g. the > onBeforeApply step is executed before the isLeader() check, so it might > acquire a lock on a non-leader node, which is not the expected behavior. > *To-Be (should be implemented):* > {code:java} > // client-side > InternalTable.upsert() > enlistInTx() > replicaService.invoke(upsertRequest, primary=true) > // server-side > Replica.handleRequest(actionRequest) > if (actionRequest.isPrimaryEvaluationExpected()) > checkLease(); // Return failure if not valid > > if (actionRequest instanceof WriteRequest) { > // validate writeRequest locally > > // acquire all locks !locally! 
> fut = txManager.intentWriteLock(table); > > fut.handle(() -> > return > future.of(async(replicationEngine.replicate(ReplicationCommand.of(writeRequest)))) > }{code} > In other words: > * Instead of raftGroupService, replicaService should be used. > * ReplicaService uses messages (actionRequests) instead of raft commands. > * Within the scope of RW transactions, replicaService always sends requests > to the *primary* replica; however, within RO transactions non-primary > replicas will also participate in request handling, so I believe we > should introduce a common Replica instead of a strict PrimaryReplica. > * Replica is aware of requests
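The To-Be flow above can be sketched as plain Java. These are hypothetical shapes, not the final Ignite interfaces: a replica first validates its lease when primary evaluation is expected, then manages locks locally (outside the raft), and only then hands a replication command to the engine.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.locks.ReentrantLock;

/** Hypothetical sketch of the proposed replica-side flow; not the final Ignite API. */
public class ReplicaSketch {
    interface ReplicationEngine {
        CompletableFuture<Void> replicate(String command);
    }

    static class LeaseExpiredException extends RuntimeException {}

    static class Replica {
        private final ReplicationEngine engine;
        private final ReentrantLock tableLock = new ReentrantLock(); // Lock management stays local, outside the raft.
        private final long leaseEndTs;

        Replica(ReplicationEngine engine, long leaseEndTs) {
            this.engine = engine;
            this.leaseEndTs = leaseEndTs;
        }

        /** Handles a write request: lease check -> local lock -> replicate. */
        CompletableFuture<Void> handleWrite(String writeRequest, long nowTs, boolean primaryExpected) {
            // checkLease(): return failure if the lease is no longer valid.
            if (primaryExpected && nowTs > leaseEndTs)
                return CompletableFuture.failedFuture(new LeaseExpiredException());

            tableLock.lock(); // In the real design this would be an async lock intent, not a blocking lock.
            try {
                // Convert the request to a replication command and hand it to the engine.
                return engine.replicate("cmd:" + writeRequest);
            } finally {
                tableLock.unlock();
            }
        }
    }
}
```

The key property the sketch preserves is that lock acquisition and lease validation happen on the replica before anything enters the linearized replication path.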
[jira] [Updated] (IGNITE-17236) inline size usage of index-reader
[ https://issues.apache.org/jira/browse/IGNITE-17236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolay Izhikov updated IGNITE-17236: - Fix Version/s: 2.14 > inline size usage of index-reader > - > > Key: IGNITE-17236 > URL: https://issues.apache.org/jira/browse/IGNITE-17236 > Project: Ignite > Issue Type: Improvement >Reporter: Nikolay Izhikov >Assignee: Nikolay Izhikov >Priority: Minor > Fix For: 2.14 > > Time Spent: 1h > Remaining Estimate: 0h > > It will be useful to analyze and output information about the actual usage of > inline space in an index. > This information can hint at suboptimal usage of space in index > entries. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-17277) Add a user info to the sql queries view
Amelchev Nikita created IGNITE-17277: Summary: Add a user info to the sql queries view Key: IGNITE-17277 URL: https://issues.apache.org/jira/browse/IGNITE-17277 Project: Ignite Issue Type: Improvement Environment: Add info about the user who started a query to the queries system view. Reporter: Amelchev Nikita Assignee: Amelchev Nikita -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17234) "version" and "probe" REST commands should not require authentication
[ https://issues.apache.org/jira/browse/IGNITE-17234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Borunov updated IGNITE-17234: - Reviewer: Andrey Novikov > "version" and "probe" REST commands should not require authentication > - > > Key: IGNITE-17234 > URL: https://issues.apache.org/jira/browse/IGNITE-17234 > Project: Ignite > Issue Type: Improvement > Components: rest >Reporter: Dmitriy Borunov >Assignee: Dmitriy Borunov >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > *Actual:* > /ignite?cmd=version and /ignite?cmd=probe both return: > {code:java} > {"successStatus":2,"error":"Failed to authenticate remote client (secure > session SPI not set?): GridRestRequest [destId=null, > clientId=3fbf0a38-4d80-42f3-9f77-a0ba7e2da396, addr=/127.0.0.1:54649, > cmd=, authCtx=null]","sessionToken":null,"response":null} {code} > *Expected:* > {code:java} > {"successStatus":0,"error":null,"sessionToken":null,"response":"grid has > started"} {code} > These two commands should not require authentication because requiring it could cause a > timeout due to system cache usage (transactions + PME). These commands may > be blocked for some time or time out, which can be interpreted as a cluster > failure. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-17271) Sort out sqlogic tests on Ignite 3
[ https://issues.apache.org/jira/browse/IGNITE-17271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-17271: -- Epic Link: IGNITE-17185 > Sort out sqlogic tests on Ignite 3 > -- > > Key: IGNITE-17271 > URL: https://issues.apache.org/jira/browse/IGNITE-17271 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Taras Ledkov >Priority: Major > Labels: ignite-3, tech-debt > > All the tests from the lists below are muted by renaming {{}} -> > {{_ignored}}. > If a muted test already contains a muted part (the file {{_ignored}} > already exists), the file is renamed to {{_ignored_old}}. > Tests to investigate: > *Implement: {{STRING_AGG}}* > {code} > sql/aggregate/aggregates/test_aggregate_types.test > sql/aggregate/aggregates/test_distinct_string_agg.test > sql/aggregate/aggregates/test_perfect_ht.test > sql/aggregate/aggregates/test_string_agg.test > sql/aggregate/aggregates/test_string_agg_big.test > sql/aggregate/aggregates/test_string_agg_many_groups.test > {code} > *Metadata issue* > {code} > Caused by: java.lang.AssertionError: Unexpected type of result: NULL > at > org.apache.ignite.internal.sql.engine.util.TypeUtils.columnType(TypeUtils.java:343) > at > org.apache.ignite.internal.sql.engine.prepare.PrepareServiceImpl.lambda$resultSetMetadata$5(PrepareServiceImpl.java:271) > at > org.apache.ignite.internal.sql.engine.prepare.LazyResultSetMetadata.columns(LazyResultSetMetadata.java:43) > at > org.apache.ignite.internal.sql.script.SqlScriptRunner.lambda$sql$2(SqlScriptRunner.java:173) > at > java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) > {code} > {code} > sql/aggregate/aggregates/test_aggregate_types_scalar.test > sql/aggregate/aggregates/test_avg.test > sql/aggregate/aggregates/test_scalar_aggr.test > sql/function/generic/test_nvl.test > sql/function/numeric/test_truncate.test > sql/function/string/test_caseconvert.test > sql/function/string/test_initcap.test > 
sql/function/string/test_left.test > sql/function/string/test_repeat.test > sql/function/string/test_replace.test > sql/function/string/test_reverse.test > sql/function/string/test_right.test > sql/function/string/test_substring.test > sql/function/string/test_trim.test > sql/types/null/test_null.test > {code} > *Support INTERVAL TYPE* > {code} > sql/function/interval/test_extract.test > sql/types/interval/test_interval_ops.test > {code} > *Unsorted / Not expected results* > {code} > sql/function/timestamp/test_extract.test > sql/function/timestamp/test_extract_ms.test > sql/function/timestamp/test_timestampadd.test > sql/insert/test_insert_type.test > sql/order/test_order_same_value.test_slow > sql/types/blob/test_blob.test > sql/types/blob/test_blob_function.test > sql/types/blob/test_blob_operator.test > sql/types/blob/test_blob_string.test > sql/types/collections/array.test > sql/types/collections/array.test > sql/types/collections/array.test > sql/types/collections/array.test > sql/types/decimal/cast_from_decimal.test > sql/types/decimal/cast_to_decimal.test > sql/types/interval/interval_constants.test > sql/types/interval/test_interval_addition.test > sql/types/null/test_is_null.test > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-17198) Minimal implementation of pure in-memory storage (with in-memory RAFT)
[ https://issues.apache.org/jira/browse/IGNITE-17198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17560912#comment-17560912 ] Roman Puchkovskiy commented on IGNITE-17198: Thank you for the review > Minimal implementation of pure in-memory storage (with in-memory RAFT) > -- > > Key: IGNITE-17198 > URL: https://issues.apache.org/jira/browse/IGNITE-17198 > Project: Ignite > Issue Type: New Feature > Components: persistence >Reporter: Roman Puchkovskiy >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-alpha6 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > This is about the simplest case possible: stable topology, nodes never > leave, corner cases are not handled. > What has to be done: > # Volatile RAFT meta storage > # Create volatile RAFT storages for a RAFT group (meta and logs) when the RAFT > group serves a volatile storage; create persistent RAFT storages otherwise > What does not need to be done: > # Now, RAFT snapshots use files. We should not change this behavior in this > task -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-17276) CompletableFuture#supplyAsync never completes with Ignite ExecutorService
Alexey Kukushkin created IGNITE-17276: - Summary: CompletableFuture#supplyAsync never completes with Ignite ExecutorService Key: IGNITE-17276 URL: https://issues.apache.org/jira/browse/IGNITE-17276 Project: Ignite Issue Type: Bug Components: compute Affects Versions: 2.13 Reporter: Alexey Kukushkin h3. Scenario: CompletableFuture#supplyAsync with Ignite ExecutorService Given a cluster of two Ignite nodes And an Ignite {{ExecutorService}} for a remote node When a task is executed using CompletableFuture#supplyAsync with the Ignite {{ExecutorService}} Then the task eventually completes Actual result: the task never completes. Waiting on the task's future blocks forever. Note: the scenario works fine if the task is executed on the local Ignite node.
{code:java}
public class Reproducer {
    @Test
    public void supplyIgniteExecutorToCompletableFuture() throws InterruptedException, ExecutionException {
        Function<String, IgniteConfiguration> igniteCfgFactory = igniteName -> new IgniteConfiguration()
            .setIgniteInstanceName(igniteName)
            .setDiscoverySpi(new TcpDiscoverySpi()
                .setIpFinder(new TcpDiscoveryVmIpFinder()
                    .setAddresses(Collections.singleton("127.0.0.1:47500"))));

        try (var ignored = Ignition.start(igniteCfgFactory.apply("server"));
             final var ignite = Ignition.start(igniteCfgFactory.apply("app"))) {
            final var igniteExecutor = ignite.executorService(ignite.cluster().forRemotes());
            final var future = CompletableFuture.supplyAsync(new Task(), igniteExecutor);
            final var result = future.get();
            assertNotNull(result);
        }
    }

    private static class Task implements Supplier<String> {
        @Override
        public String get() {
            return UUID.randomUUID().toString();
        }
    }
}
{code}
-- This message was sent by Atlassian Jira (v8.20.10#820010)
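A likely explanation (an assumption, not confirmed in this issue): {{supplyAsync}} wraps the Supplier in a Runnable that completes the future in-process, so when the executor serializes that Runnable to a remote node, only the remote copy of the future is completed and the local one hangs. A bridge like the sketch below works around this by going through {{submit()}}, whose returned Future carries the result back, and completing a local CompletableFuture explicitly. This is a hypothetical workaround sketch, not an official Ignite API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

/** Workaround sketch: adapt any ExecutorService (including a distributed one)
 *  to CompletableFuture via submit(), avoiding supplyAsync's internal completion path. */
public class ExecutorBridge {
    static <T> CompletableFuture<T> supplyVia(ExecutorService exec, Callable<T> task) {
        CompletableFuture<T> cf = new CompletableFuture<>();
        // Wait for the executor's own Future on a helper thread and complete the LOCAL future.
        Thread waiter = new Thread(() -> {
            try {
                cf.complete(exec.submit(task).get());
            } catch (Throwable t) {
                cf.completeExceptionally(t);
            }
        });
        waiter.setDaemon(true);
        waiter.start();
        return cf;
    }
}
```

With a plain local thread pool the bridge behaves identically to {{supplyAsync}}; with a distributed executor the completion no longer depends on the submitted Runnable running locally.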
[jira] [Created] (IGNITE-17275) Performance testing of pure SQL execution
Yury Gerzhedovich created IGNITE-17275: -- Summary: Performance testing of pure SQL execution Key: IGNITE-17275 URL: https://issues.apache.org/jira/browse/IGNITE-17275 Project: Ignite Issue Type: Improvement Components: sql Reporter: Yury Gerzhedovich Let's try to implement a way for performance testing of pure SQL execution, without the influence of any other component like table infrastructure, RAFT, etc. For example, it could be a separate pluggable storage that generates data on the fly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
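A storage that generates data on the fly could look roughly like this (a sketch with hypothetical names, not an existing Ignite interface): rows are produced deterministically by an iterator and never stored, so the SQL engine can be benchmarked without a real table, RAFT group, or disk behind it.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

/** Sketch of an on-the-fly row source for benchmarking pure SQL execution. */
public class GeneratedRowSource implements Iterator<Object[]> {
    private final long rowCount;
    private long next;

    public GeneratedRowSource(long rowCount) {
        this.rowCount = rowCount;
    }

    @Override
    public boolean hasNext() {
        return next < rowCount;
    }

    /** Row shape: (id, value), generated deterministically; nothing is materialized. */
    @Override
    public Object[] next() {
        if (!hasNext())
            throw new NoSuchElementException();
        long id = next++;
        return new Object[] {id, "val-" + id};
    }
}
```

A scan over such a source measures only the execution pipeline; any difference from a real-table run then isolates the cost of the other components.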