[jira] [Updated] (IGNITE-22668) StripeAwareLogManager may skip commits to underlying Log Storage
[ https://issues.apache.org/jira/browse/IGNITE-22668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtsev updated IGNITE-22668: - Description: This issue was originally discovered when I was trying to start an Ignite cluster with 3 nodes and a table that used a volatile storage profile. While filling this table with data, I encountered multiple internal Raft errors related to the Meta Storage group. The problem is that {{StripeAwareLogManager}} manages multiple log storages from different Raft groups that are colocated on the same stripe in the {{LogManagerDisruptor}}. In my case, some volatile log storages from the table were colocated with the persistent log storage from the Meta Storage. However, when flushing the buffered data inside {{StripeAwareLogManager}}, we have the following code: {code:java} // Since the storage is shared, any batcher can flush it. appendBatchers.iterator().next().commitWriteBatch(); {code} {{commitWriteBatch}} for volatile storages is a no-op, while for persistent storages it writes the data into the DB. If we are unlucky and the {{appendBatcher}} for the volatile storage comes first, then the data for the persistent storage will not get flushed at all (until the next flush, which is enough to break everything). was: This issue was originally discovered when I was trying to start an Ignite cluster with 3 nodes and a table that used a volatile storage profile. While filling this table with data, I encountered multiple internal Raft errors related to the Meta Storage group. The problem is that {{StripeAwareLogManager}} manages multiple log storages from different Raft groups that were colocated on the same stripe in the {{LogManagerDisruptor}}. In my case, some volatile log storages from the table were colocated with the persistent log storage from the Meta Storage.
However, when flushing the buffered data inside {{StripeAwareLogManager}}, we have the following code: {code:java} // Since the storage is shared, any batcher can flush it. appendBatchers.iterator().next().commitWriteBatch(); {code} {{commitWriteBatch}} for volatile storages is a no-op, while for persistent storages it writes the data into the DB. If we are unlucky and the {{appendBatcher}} for the volatile storage comes first, then the data for the persistent storage will not get flushed at all (until the next flush, which is enough to break everything). > StripeAwareLogManager may skip commits to underlying Log Storage > > > Key: IGNITE-22668 > URL: https://issues.apache.org/jira/browse/IGNITE-22668 > Project: Ignite > Issue Type: Bug > Reporter: Aleksandr Polovtsev > Assignee: Aleksandr Polovtsev > Priority: Major > Labels: ignite-3 > > This issue was originally discovered when I was trying to start an Ignite > cluster with 3 nodes and a table that used a volatile storage profile. While > filling this table with data, I encountered multiple internal Raft errors > related to the Meta Storage group. > The problem is that {{StripeAwareLogManager}} manages multiple log storages > from different Raft groups that are colocated on the same stripe in the > {{LogManagerDisruptor}}. In my case, some volatile log storages from the > table were colocated with the persistent log storage from the Meta Storage. > However, when flushing the buffered data inside {{StripeAwareLogManager}}, we > have the following code: > {code:java} > // Since the storage is shared, any batcher can flush it. > appendBatchers.iterator().next().commitWriteBatch(); > {code} > {{commitWriteBatch}} for volatile storages is a no-op, while for persistent > storages it writes the data into the DB.
If we are unlucky and the > {{appendBatcher}} for the volatile storage comes first, then the data for the > persistent storage will not get flushed at all (until the next flush, which > is enough to break everything). -- This message was sent by Atlassian Jira (v8.20.10#820010)
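One possible direction for a fix can be sketched as follows. This is an illustrative sketch only: the AppendBatcher interface below is modelled on the names in the description, not Ignite's actual type, and a complete fix would also have to commit once per distinct underlying storage rather than once per stripe.

```java
import java.util.List;

/** Hypothetical stand-in for the batcher type mentioned in the description. */
interface AppendBatcher {
    /** Returns false for volatile log storages, whose commitWriteBatch() is a no-op. */
    boolean isPersistent();

    void commitWriteBatch();
}

final class StripeFlushSketch {
    /**
     * Commits through a batcher backed by a persistent storage when the stripe
     * contains one, instead of blindly taking the first batcher (which may be
     * a volatile no-op that silently skips the persistent commit).
     */
    static void flush(List<AppendBatcher> appendBatchers) {
        if (appendBatchers.isEmpty()) {
            return;
        }

        appendBatchers.stream()
                .filter(AppendBatcher::isPersistent)
                .findFirst()
                // All storages in the stripe are volatile: any commit is a no-op anyway.
                .orElse(appendBatchers.get(0))
                .commitWriteBatch();
    }
}
```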
[jira] [Updated] (IGNITE-22668) StripeAwareLogManager may skip commits to underlying Log Storage
[ https://issues.apache.org/jira/browse/IGNITE-22668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr Polovtsev updated IGNITE-22668: - Summary: StripeAwareLogManager may skip commits to underlying Log Storage (was: StripeAwareLogManager may skip commits to underlying storage) > StripeAwareLogManager may skip commits to underlying Log Storage > > > Key: IGNITE-22668 > URL: https://issues.apache.org/jira/browse/IGNITE-22668 > Project: Ignite > Issue Type: Bug >Reporter: Aleksandr Polovtsev >Assignee: Aleksandr Polovtsev >Priority: Major > Labels: ignite-3 > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-22668) StripeAwareLogManager may skip commits to underlying storage
Aleksandr Polovtsev created IGNITE-22668: Summary: StripeAwareLogManager may skip commits to underlying storage Key: IGNITE-22668 URL: https://issues.apache.org/jira/browse/IGNITE-22668 Project: Ignite Issue Type: Bug Reporter: Aleksandr Polovtsev Assignee: Aleksandr Polovtsev -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-22667) Optimise RocksDB sorted indexes
Philipp Shergalis created IGNITE-22667: -- Summary: Optimise RocksDB sorted indexes Key: IGNITE-22667 URL: https://issues.apache.org/jira/browse/IGNITE-22667 Project: Ignite Issue Type: Improvement Components: persistence Reporter: Philipp Shergalis We should write a comparator for RocksDB in C++; contact [~ibessonov] for references. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-22664) Insanely memory consuming checkpoints in aipersist
[ https://issues.apache.org/jira/browse/IGNITE-22664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philipp Shergalis updated IGNITE-22664: --- Description: Every page in a persistent page memory delta file uses ~15-20 kilobytes (default settings). Examples. Inserting via storageUpdateHandler. No indexes: - Inserting 1 row at a time, doing a checkpoint after every insertion. A checkpoint saves 7 pages, pageSize is 1024, but the delta file created uses 144 KB. - Inserting a million rows, every checkpoint with ~11000 pages creates delta files of ~180 megabytes. Also, after compaction one row uses ~300 bytes, while the same row in RocksDB uses ~100 bytes. Checkpoints / compaction are hardcoded to use 4 threads. was: Every page in a persistent page memory delta file uses ~15-20 kilobytes. Examples. Inserting via storageUpdateHandler. No indexes: - Inserting 1 row at a time, doing a checkpoint after every insertion. A checkpoint saves 7 pages, pageSize is 1024, but the delta file created uses 144 KB. - Inserting a million rows, every checkpoint with ~11000 pages creates delta files of ~180 megabytes. Also, after compaction one row uses ~300 bytes, while a similar row in RocksDB uses ~100 bytes. > Insanely memory consuming checkpoints in aipersist > -- > > Key: IGNITE-22664 > URL: https://issues.apache.org/jira/browse/IGNITE-22664 > Project: Ignite > Issue Type: Improvement > Components: persistence > Reporter: Philipp Shergalis > Priority: Major > Labels: ignite-3 > > Every page in a persistent page memory delta file uses ~15-20 kilobytes > (default settings). > > Examples. Inserting via storageUpdateHandler. No indexes: > - Inserting 1 row at a time, doing a checkpoint after every insertion. > A checkpoint saves 7 pages, pageSize is 1024, but the delta file created uses > 144 KB.
> - Inserting a million rows, every checkpoint with ~11000 pages creates delta > files of ~180 megabytes. > > Also, after compaction one row uses ~300 bytes, while the same row in RocksDB > uses ~100 bytes. > > Checkpoints / compaction are hardcoded to use 4 threads. -- This message was sent by Atlassian Jira (v8.20.10#820010)
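The reported sizes are internally consistent with the ~15-20 KB claim; a quick back-of-the-envelope check using only the figures from the description:

```java
/** Back-of-the-envelope check of the per-page delta file overhead reported above. */
final class DeltaFileOverhead {
    static long perPageBytes(long deltaFileBytes, long pages) {
        return deltaFileBytes / pages;
    }

    public static void main(String[] args) {
        // 7 pages (pageSize 1024) produced a 144 KB delta file: ~21 KB per page.
        System.out.println(perPageBytes(144 * 1024, 7));
        // ~11000 pages produced ~180 MB delta files: ~17 KB per page.
        System.out.println(perPageBytes(180L * 1024 * 1024, 11_000));
    }
}
```

So each page carries roughly 17-21 KB in the delta file against a 1 KB page size, i.e. more than an order of magnitude of overhead.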
[jira] [Created] (IGNITE-22666) Long RocksDb engine.stop() after intensive writes
Philipp Shergalis created IGNITE-22666: -- Summary: Long RocksDb engine.stop() after intensive writes Key: IGNITE-22666 URL: https://issues.apache.org/jira/browse/IGNITE-22666 Project: Ignite Issue Type: Improvement Reporter: Philipp Shergalis It seems that RocksDB waits for compaction to finish before stopping, so engine.stop() can take 10+ minutes after writing several million rows. In aipersist we skip compaction when the node is stopping; we should add the same behaviour here. -- This message was sent by Atlassian Jira (v8.20.10#820010)
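The proposed behaviour could look roughly like this. The types below are illustrative stand-ins, not Ignite's actual API (in RocksDB terms, cancelling would map to something like CancelAllBackgroundWork):

```java
/** Hypothetical stand-in for a storage with background compaction. */
interface CompactingStorage {
    void cancelBackgroundCompaction();

    void awaitCompaction();

    void close();
}

final class EngineStopSketch {
    /**
     * Stops the storage. When the whole node is stopping, pending compaction
     * is cancelled instead of awaited, so stop() does not block for minutes.
     * Returns what was done, for illustration.
     */
    static String stop(CompactingStorage storage, boolean nodeStopping) {
        if (nodeStopping) {
            storage.cancelBackgroundCompaction();
        } else {
            storage.awaitCompaction();
        }
        storage.close();
        return nodeStopping ? "cancelled" : "awaited";
    }
}
```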
[jira] [Commented] (IGNITE-22536) CDC metrics for rejected entries by conflict resolver
[ https://issues.apache.org/jira/browse/IGNITE-22536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17863071#comment-17863071 ] Maksim Davydov commented on IGNITE-22536: - https://ci2.ignite.apache.org/buildConfiguration/IgniteExtensions_Tests_Cdc/7948184?hideTestsFromDependencies=false=false=false=true > CDC metrics for rejected entries by conflict resolver > - > > Key: IGNITE-22536 > URL: https://issues.apache.org/jira/browse/IGNITE-22536 > Project: Ignite > Issue Type: New Feature > Components: extensions > Reporter: Maksim Davydov > Assignee: Maksim Davydov > Priority: Major > Labels: IEP-59, ise > Time Spent: 20m > Remaining Estimate: 0h > > The metric is needed to track the number of entries rejected by the conflict > resolver during CDC. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-22665) Optimise RocksDB settings and make them configurable
Philipp Shergalis created IGNITE-22665: -- Summary: Optimise RocksDB settings and make them configurable Key: IGNITE-22665 URL: https://issues.apache.org/jira/browse/IGNITE-22665 Project: Ignite Issue Type: Improvement Components: persistence Reporter: Philipp Shergalis Currently we configure DBOptions and column family settings in different places - SharedRocksDBInstanceCreator, RocksDbKeyValueStorage, DefaultLogStorageFactory, etc. Most of these settings are hardcoded, and some important configuration options are omitted (e.g. the shared instance creator doesn't configure max_background_jobs or max_write_buffer_number). -- This message was sent by Atlassian Jira (v8.20.10#820010)
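A minimal sketch of what centralising the settings could look like; the holder class and its defaults are assumptions (the two field names mirror the RocksDB option names mentioned above):

```java
/**
 * Hypothetical single source of RocksDB tuning values that the various
 * creators (SharedRocksDBInstanceCreator, DefaultLogStorageFactory, ...)
 * could read instead of each hardcoding its own.
 */
final class RocksDbTuning {
    final int maxBackgroundJobs;     // mirrors max_background_jobs
    final int maxWriteBufferNumber;  // mirrors max_write_buffer_number

    RocksDbTuning(int maxBackgroundJobs, int maxWriteBufferNumber) {
        this.maxBackgroundJobs = maxBackgroundJobs;
        this.maxWriteBufferNumber = maxWriteBufferNumber;
    }

    /** RocksDB's own default for both options is 2. */
    static RocksDbTuning defaults() {
        return new RocksDbTuning(2, 2);
    }
}
```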
[jira] [Created] (IGNITE-22664) Insanely memory consuming checkpoints in aipersist
Philipp Shergalis created IGNITE-22664: -- Summary: Insanely memory consuming checkpoints in aipersist Key: IGNITE-22664 URL: https://issues.apache.org/jira/browse/IGNITE-22664 Project: Ignite Issue Type: Improvement Components: persistence Reporter: Philipp Shergalis Every page in persistent page memory delta file uses ~15-20 kilobytes. Examples. Inserting via storageUpdateHandler. No indexes: - Inserting 1 row at a time, doing checkpoint after every insertion. Checkpoint saves 7 pages, pageSize is 1024, but delta file created uses up 144 KB. - Inserting million rows, every checkpoint with ~11000 pages creates delta files of ~180 megabytes -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-22663) Assertion error on kill transaction command
Nikita Amelchev created IGNITE-22663: Summary: Assertion error on kill transaction command Key: IGNITE-22663 URL: https://issues.apache.org/jira/browse/IGNITE-22663 Project: Ignite Issue Type: Bug Reporter: Nikita Amelchev Attachments: logs.txt Steps before: 1. Start a few transactions. 2. control.sh --tx --limit 2 --info 3. control.sh --tx --limit 2 --kill Cache cfg: {code:java} ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration().setName("transactionsCache").setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL).setStatisticsEnabled(true); {code} Error: {code:java} 2024-07-04 17:16:26.757 [ERROR][sys-stripe-4-#5][] Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=CRITICAL_ERROR, err=class o.a.i.i.transactions.IgniteTxHeuristicCheckedException: Committing a transaction has produced runtime exception]] org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException: Committing a transaction has produced runtime exception at org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.heuristicException(IgniteTxAdapter.java:789) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:896) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.localFinish(GridDhtTxLocalAdapter.java:786) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.localFinish(GridDhtTxLocal.java:570) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.finishTx(GridDhtTxLocal.java:453) ~[ignite-core-2.16.0.jar:2.16.0] at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitDhtLocalAsync(GridDhtTxLocal.java:498) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitAsync(GridDhtTxLocal.java:513) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:791) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:118) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:542) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.util.future.GridCompoundFuture.checkComplete(GridCompoundFuture.java:350) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.util.future.GridCompoundFuture.markInitialized(GridCompoundFuture.java:339) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1368) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:724) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1130) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.prepareAsync(GridDhtTxLocal.java:390) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:612) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:409) ~[ignite-core-2.16.0.jar:2.16.0] at 
org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:197) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest(IgniteTxHandler.java:174) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.lambda$new$f18f0bb1$1(IgniteTxHandler.java:216) ~[ignite-core-2.16.0.jar:2.16.0] at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1164) [ignite-core-2.16.0.jar:2.16.0] at
[jira] [Updated] (IGNITE-22663) Assertion error on kill transaction command
[ https://issues.apache.org/jira/browse/IGNITE-22663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikita Amelchev updated IGNITE-22663: - Attachment: logs.txt > Assertion error on kill transaction command > --- > > Key: IGNITE-22663 > URL: https://issues.apache.org/jira/browse/IGNITE-22663 > Project: Ignite > Issue Type: Bug >Reporter: Nikita Amelchev >Priority: Critical > Labels: ise > Attachments: logs.txt > > > Steps before: > 1. Start a few transactions. > 2. control.sh --tx --limit 2 --info > 3. control.sh --tx --limit 2 --kill > Cache cfg: > {code:java} > ClientCacheConfiguration cacheCfg = new > ClientCacheConfiguration().setName("transactionsCache").setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL).setStatisticsEnabled(true); > {code} > Error: > {code:java} > 2024-07-04 17:16:26.757 [ERROR][sys-stripe-4-#5][] Critical system error > detected. Will be handled accordingly to configured handler > [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, > super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet > [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], > failureCtx=FailureContext [type=CRITICAL_ERROR, err=class > o.a.i.i.transactions.IgniteTxHeuristicCheckedException: Committing a > transaction has produced runtime exception]] > org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException: > Committing a transaction has produced runtime exception > at > org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.heuristicException(IgniteTxAdapter.java:789) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:896) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocalAdapter.localFinish(GridDhtTxLocalAdapter.java:786) > ~[ignite-core-2.16.0.jar:2.16.0] > at > 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.localFinish(GridDhtTxLocal.java:570) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.finishTx(GridDhtTxLocal.java:453) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitDhtLocalAsync(GridDhtTxLocal.java:498) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.commitAsync(GridDhtTxLocal.java:513) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:791) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.onDone(GridDhtTxPrepareFuture.java:118) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:542) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.util.future.GridCompoundFuture.checkComplete(GridCompoundFuture.java:350) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.util.future.GridCompoundFuture.markInitialized(GridCompoundFuture.java:339) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1368) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.mapIfLocked(GridDhtTxPrepareFuture.java:724) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare(GridDhtTxPrepareFuture.java:1130) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxLocal.prepareAsync(GridDhtTxLocal.java:390) > 
~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:612) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:409) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:197) > ~[ignite-core-2.16.0.jar:2.16.0] > at > org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest(IgniteTxHandler.java:174) >
[jira] [Updated] (IGNITE-22662) Snapshot check as distributed process
[ https://issues.apache.org/jira/browse/IGNITE-22662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Steshin updated IGNITE-22662: -- Labels: ise (was: ) > Snapshot check as distributed process > - > > Key: IGNITE-22662 > URL: https://issues.apache.org/jira/browse/IGNITE-22662 > Project: Ignite > Issue Type: Improvement > Reporter: Vladimir Steshin > Priority: Major > Labels: ise > > The snapshot validation should be a distributed process. Currently, the > validation is implemented as compute tasks/jobs. Stopping or declining two > concurrent 'heavy' checks is not convenient. Also, snapshot check status and > snapshot check metrics are simpler when there is only one validation per > snapshot, implemented similarly to snapshot creation or restoration. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-22662) Snapshot check as distributed process
Vladimir Steshin created IGNITE-22662: - Summary: Snapshot check as distributed process Key: IGNITE-22662 URL: https://issues.apache.org/jira/browse/IGNITE-22662 Project: Ignite Issue Type: Improvement Reporter: Vladimir Steshin The snapshot validation should be a distributed process. Currently, the validation is implemented as compute tasks/jobs. Stopping or declining two concurrent 'heavy' checks is not convenient. Also, snapshot check status and snapshot check metrics are simpler when there is only one validation per snapshot, implemented similarly to snapshot creation or restoration. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-22661) Add catalog version into Assignments
Kirill Sizov created IGNITE-22661: -- Summary: Add catalog version into Assignments Key: IGNITE-22661 URL: https://issues.apache.org/jira/browse/IGNITE-22661 Project: Ignite Issue Type: Improvement Reporter: Kirill Sizov This task emerged from the analysis of IGNITE-22415. The observed failure was caused by a missing table in the catalog. It happens when tables and zones are dropped and the zone is altered. Such activity triggers a pending assignment change, and the table descriptor cannot be found in the latest version of the catalog. The suggested fix is to bundle the catalog version into Assignments and use it instead of taking the latest available one. *Other catalog version issues* The catalog version issue for table counters was fixed in IGNITE-22415. This task covers the fix of the catalog version in the assignment handlers. Indexes are also affected by a mismatching catalog version; there is already a task for that: IGNITE-22656 -- This message was sent by Atlassian Jira (v8.20.10#820010)
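The suggested fix could be shaped roughly like this; the class and accessor names below are assumptions for illustration, not Ignite's actual Assignments API:

```java
import java.util.Set;

/** Sketch: assignments that carry the catalog version they were computed against. */
final class AssignmentsSketch {
    private final Set<String> nodes;
    private final int catalogVersion;

    AssignmentsSketch(Set<String> nodes, int catalogVersion) {
        this.nodes = Set.copyOf(nodes);
        this.catalogVersion = catalogVersion;
    }

    Set<String> nodes() {
        return nodes;
    }

    /**
     * Assignment handlers would resolve table descriptors at this version
     * instead of the latest one, so a table dropped in a newer catalog
     * version is still visible while the pending assignment is processed.
     */
    int catalogVersion() {
        return catalogVersion;
    }
}
```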
[jira] [Commented] (IGNITE-13202) Javadoc HTML can't be generated correctly with maven-javadoc-plugin on JDK 11+
[ https://issues.apache.org/jira/browse/IGNITE-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17863042#comment-17863042 ] Ignite TC Bot commented on IGNITE-13202: {panel:title=Branch: [IGNITE-13202__jdk11_javadoc] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [IGNITE-13202__jdk11_javadoc] Base: [master] : No new tests found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel} [TeamCity *-- Run :: All* Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7922853buildTypeId=IgniteTests24Java8_RunAll] > Javadoc HTML can't be generated correctly with maven-javadoc-plugin on JDK 11+ > -- > > Key: IGNITE-13202 > URL: https://issues.apache.org/jira/browse/IGNITE-13202 > Project: Ignite > Issue Type: Bug > Reporter: Aleksey Plekhanov > Assignee: Maksim Timonin > Priority: Major > Labels: ise > Fix For: 2.17 > > > The Javadoc utility has some bugs which prevent building Ignite Javadocs > correctly. > Building javadoc under JDK 11+ throws the error "The code being documented > uses modules but the packages defined in > [https://docs.oracle.com/javase/8/docs/api] are in the unnamed module". To > work around this error, the argument "source=1.8" can be specified, but there is > another bug related to "source" and "subpackages" argument usage: > [https://bugs.openjdk.java.net/browse/JDK-8193030.] We can still build > javadoc with the {{detectJavaApiLink}} maven-javadoc-plugin option disabled, but > in this case there will be no references to the Java API from the Ignite Javadoc. > Also, there is a bug with the "-exclude" argument in JDK 11+: it doesn't exclude > subpackages of the packages specified for exclusion, so the generated output > contains a lot of javadocs for internal packages. > Javadoc generation command: > {{mvn -DskipTests -Pall-java clean install}} > {{mvn initialize -Pjavadoc}} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-22555) Assertion in ReplicaStateManager.onPrimaryElected
[ https://issues.apache.org/jira/browse/IGNITE-22555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Chudov reassigned IGNITE-22555: - Assignee: Denis Chudov > Assertion in ReplicaStateManager.onPrimaryElected > - > > Key: IGNITE-22555 > URL: https://issues.apache.org/jira/browse/IGNITE-22555 > Project: Ignite > Issue Type: Bug >Reporter: Konstantin Orlov >Assignee: Denis Chudov >Priority: Major > Labels: ignite-3 > > If you increase the number of replicas in default zone to 3, random > integration tests will start to fail with {{java.lang.AssertionError: Replica > is elected as primary but not reserved}}. > Full stack trace: > {code} > [2024-06-21T15:04:04,132][ERROR][%isit_n_0%JRaft-FSMCaller-Disruptormetastorage_stripe_0-0][FailureProcessor] > Critical system error detected. Will be handled accordingly to configured > handler [hnd=NoOpFailureHandler [], failureCtx=FailureContext > [type=CRITICAL_ERROR, err=java.util.concurrent.CompletionException: > java.lang.AssertionError: Replica is elected as primary but not reserved > [groupId=34_part_19].]] > java.util.concurrent.CompletionException: java.lang.AssertionError: Replica > is elected as primary but not reserved [groupId=34_part_19]. > at > java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) > ~[?:?] > at > java.base/java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:346) > ~[?:?] > at > java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:870) > ~[?:?] > at > java.base/java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:883) > ~[?:?] > at > java.base/java.util.concurrent.CompletableFuture.whenComplete(CompletableFuture.java:2257) > ~[?:?] > at > org.apache.ignite.internal.metastorage.server.WatchProcessor.notifyWatches(WatchProcessor.java:246) > ~[main/:?] 
> at > org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$notifyWatches$4(WatchProcessor.java:193) > ~[main/:?] > at > java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072) > ~[?:?] > at > java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478) > ~[?:?] > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > ~[?:?] > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > ~[?:?] > at java.base/java.lang.Thread.run(Thread.java:829) [?:?] > Caused by: java.lang.AssertionError: Replica is elected as primary but not > reserved [groupId=34_part_19]. > at > org.apache.ignite.internal.replicator.ReplicaManager$ReplicaStateContext.assertReservation(ReplicaManager.java:1551) > ~[main/:?] > at > org.apache.ignite.internal.replicator.ReplicaManager$ReplicaStateManager.onPrimaryElected(ReplicaManager.java:1271) > ~[main/:?] > at > org.apache.ignite.internal.event.AbstractEventProducer.fireEvent(AbstractEventProducer.java:88) > ~[main/:?] > at > org.apache.ignite.internal.placementdriver.leases.LeaseTracker.fireEventReplicaBecomePrimary(LeaseTracker.java:404) > ~[main/:?] > at > org.apache.ignite.internal.placementdriver.leases.LeaseTracker$UpdateListener.lambda$onUpdate$2(LeaseTracker.java:198) > ~[main/:?] > at > org.apache.ignite.internal.util.IgniteUtils.inBusyLockAsync(IgniteUtils.java:890) > ~[main/:?] > at > org.apache.ignite.internal.placementdriver.leases.LeaseTracker$UpdateListener.onUpdate(LeaseTracker.java:173) > ~[main/:?] > at > org.apache.ignite.internal.metastorage.server.Watch.onUpdate(Watch.java:67) > ~[main/:?] > at > org.apache.ignite.internal.metastorage.server.WatchProcessor.notifyWatches(WatchProcessor.java:245) > ~[main/:?] > ... 6 more > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-22633) Catalog compaction. Choosing coordinator.
[ https://issues.apache.org/jira/browse/IGNITE-22633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Orlov updated IGNITE-22633: -- Fix Version/s: 3.0.0-beta2 > Catalog compaction. Choosing coordinator. > - > > Key: IGNITE-22633 > URL: https://issues.apache.org/jira/browse/IGNITE-22633 > Project: Ignite > Issue Type: Improvement >Reporter: Pavel Pereslegin >Assignee: Konstantin Orlov >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 10m > Remaining Estimate: 0h > > By design, catalog compaction should only be performed by the coordinator > node. > The catalog compaction coordinator is the same node as the metastorage group > leader. > Basically, we need to implement logic (as part of the CatalogManager) that > will listen to the metastorage leader election > (MetaStorageLeaderElectionListener?) and toggle an internal flag indicating > that the current node can initiate the catalog compaction procedure. -- This message was sent by Atlassian Jira (v8.20.10#820010)
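The flag-toggling logic described above could be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical, and the real implementation would hook into MetaStorageLeaderElectionListener rather than a plain callback:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the internal flag described in the ticket:
// the node that wins the metastorage leader election becomes the
// catalog compaction coordinator.
class CompactionCoordinatorFlag {
    private final AtomicBoolean isCoordinator = new AtomicBoolean(false);
    private final String localNodeName;

    CompactionCoordinatorFlag(String localNodeName) {
        this.localNodeName = localNodeName;
    }

    // Invoked when the metastorage group elects a new leader.
    void onLeaderElected(String leaderNodeName) {
        isCoordinator.set(localNodeName.equals(leaderNodeName));
    }

    // Checked before initiating the catalog compaction procedure.
    boolean canInitiateCompaction() {
        return isCoordinator.get();
    }
}
```

An {{AtomicBoolean}} is enough here because the flag is only ever read by the compaction trigger and written by the election listener.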
[jira] [Comment Edited] (IGNITE-22295) Sql. Allow comparable types for common columns in NATURAL / USING join conditions
[ https://issues.apache.org/jira/browse/IGNITE-22295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17862979#comment-17862979 ] Pavel Pereslegin edited comment on IGNITE-22295 at 7/4/24 9:34 AM: --- Looks like there is no easy fix for this. As described in the linked Calcite ticket, this issue does not affect JOIN...ON because when such an AST is validated, additional CASTs are added for the conditions. For example: {code:sql} select e.empno from emp e join (select '7369' as empno) c on e.empno = c.empno {code} The AST will be transformed into the following: {code:sql} SELECT E.EMPNO FROM CATALOG.SALES.EMP AS E INNER JOIN (SELECT '7369' AS EMPNO) AS C ON E.EMPNO = CAST(C.EMPNO AS INTEGER) {code} But a NATURAL/USING JOIN AST doesn't have conditions, it only has a list of columns, and the validator currently only checks that they are Comparable. h3. Possible solutions: h4. A. Rewrite USING/NATURAL to ON so that the necessary type casts can be added to the conditions. But there is also a difference in processing unqualified column names for NATURAL/USING. 1. When a star ( * ) is used, common columns are displayed once in the output. 2. When an unqualified column name is specified, it is wrapped in a COALESCE over the common columns from both tables, e.g. {code:sql} select empno from emp e join (select 7369 as empno) c using (empno) {code} is transformed to {code:sql} SELECT COALESCE(E.EMPNO, C.EMPNO) AS EMPNO FROM CATALOG.SALES.EMP AS E INNER JOIN (SELECT 7369 AS EMPNO) AS C USING (EMPNO) {code} To keep this behavior, the following methods of {{SqlValidatorImpl}} need to be overridden: * List usingNames(SqlJoin join) * SqlNode expandExprFromJoin(SqlJoin join, SqlIdentifier identifier, SelectScope scope) The problem is that {{expandExprFromJoin}} is a private static method called from the package-private {{SelectExpander}} class, so overriding it requires copy-pasting a lot of code from {{SqlValidatorImpl}} into {{IgniteSqlValidator}}.
At the same time, such a fix on the Calcite side seems *unacceptable*, since doing such AST transformations in a validator is a bad idea (see [this|https://issues.apache.org/jira/browse/CALCITE-6413?focusedCommentId=17862822&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17862822] comment). h4. B. Add the necessary casts on the SqlToRelConverter side. Implementing this approach on the Ignite side seems unacceptable for the same reason: too much code would need to be copy-pasted from Calcite. On the Calcite side, at first glance, such a fix requires two modifications. 1. Add the necessary casts to the conditions in the {{convertUsing}} method. 2. Add the necessary casts to {{COALESCE}} in the {{expandExprFromJoin}} method (maybe this can be done in a separate ticket, but without the cast, COALESCE for an unqualified column will not work with different types). But this requires deeper investigation.
[jira] [Assigned] (IGNITE-22636) Catalog compaction. Implement catalog compaction coordinator.
[ https://issues.apache.org/jira/browse/IGNITE-22636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Pereslegin reassigned IGNITE-22636: - Assignee: Pavel Pereslegin > Catalog compaction. Implement catalog compaction coordinator. > - > > Key: IGNITE-22636 > URL: https://issues.apache.org/jira/browse/IGNITE-22636 > Project: Ignite > Issue Type: Improvement >Reporter: Maksim Zhuravkov >Assignee: Pavel Pereslegin >Priority: Major > Labels: ignite-3 > > The compaction coordinator initiates the process of catalog compaction: > 1. Ask each node in the logical topology for its minimal required time (for > node_i, let's call this time Tmin_node_i). > 2. Choose the minimum of the Tmin_node_i values (let's call it Tmin). > 3. Select the catalog version Cv with catalog timestamp Ct such that Ct < Tmin. > 4. Send a command that calls CatalogManager::compactCatalog with Ct to every > node. > If the Catalog up to version Cv has tables T1, ..., Tn, then: > Each compaction initiation run should be postponed (step 4 is not performed) > if at least one node that owns partitions of the tables T1, ..., Tn is missing > from the current logical topology. > If a remote node that hosts at least one partition leaves the cluster, the > coordinator should wait until this node either re-enters the cluster (and then > retry asking for its minimal time) or is removed from all assignments. In the > latter case the absence of the node may simply be ignored. -- This message was sent by Atlassian Jira (v8.20.10#820010)
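Steps 1-3 of the coordinator algorithm above can be sketched in isolation. The class name, the use of plain longs for timestamps, and the map-based catalog version index are illustrative assumptions, not the actual Ignite API:

```java
import java.util.Collection;
import java.util.Map;
import java.util.NavigableMap;

// Illustrative sketch of steps 1-3: take the minimum of the per-node
// required times (Tmin) and pick the newest catalog version whose
// activation timestamp Ct is strictly below Tmin.
class CompactionTimeSelector {
    /** Step 2: Tmin = min over all Tmin_node_i. */
    static long minRequiredTime(Collection<Long> perNodeMinTimes) {
        return perNodeMinTimes.stream().mapToLong(Long::longValue).min().orElseThrow();
    }

    /**
     * Step 3: given catalog versions keyed by their timestamps Ct, return
     * the version number of the newest version with Ct < Tmin, or -1 if
     * no version qualifies yet.
     */
    static int versionToCompactTo(NavigableMap<Long, Integer> versionsByTimestamp, long tmin) {
        Map.Entry<Long, Integer> entry = versionsByTimestamp.lowerEntry(tmin);
        return entry == null ? -1 : entry.getValue();
    }
}
```

{{NavigableMap.lowerEntry}} gives the strict "Ct < Tmin" semantics directly, which is why a sorted map is a natural index here.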
[jira] [Commented] (IGNITE-22295) Sql. Allow comparable types for common columns in NATURAL / USING join conditions
[ https://issues.apache.org/jira/browse/IGNITE-22295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17862979#comment-17862979 ] Pavel Pereslegin commented on IGNITE-22295: --- Looks like there is no easy fix for this. As described in the linked Calcite ticket, this issue does not affect JOIN...ON because when such an AST is validated, additional CASTs are added for the conditions. For example: {code:sql} select e.empno from emp e join (select '7369' as empno) c on e.empno = c.empno {code} The AST will be transformed into the following: {code:sql} SELECT E.EMPNO FROM CATALOG.SALES.EMP AS E INNER JOIN (SELECT '7369' AS EMPNO) AS C ON E.EMPNO = CAST(C.EMPNO AS INTEGER) {code} But a NATURAL/USING JOIN AST doesn't have conditions, it only has a list of columns, and the validator currently only checks that they are Comparable. h3. Possible solutions: h4. A. Rewrite USING/NATURAL to ON so that the necessary type casts can be added to the conditions. But there is also a difference in processing unqualified column names for NATURAL/USING. 1. When a star ( * ) is used, common columns are displayed once in the output. 2. When an unqualified column name is specified, it is wrapped in a COALESCE over the common columns from both tables, e.g. {code:sql} select empno from emp e join (select 7369 as empno) c using (empno) {code} is transformed to {code:sql} SELECT COALESCE(E.EMPNO, C.EMPNO) AS EMPNO FROM CATALOG.SALES.EMP AS E INNER JOIN (SELECT 7369 AS EMPNO) AS C USING (EMPNO) {code} To keep this behavior, the following methods of {{SqlValidatorImpl}} need to be overridden: * List usingNames(SqlJoin join) * SqlNode expandExprFromJoin(SqlJoin join, SqlIdentifier identifier, SelectScope scope) The problem is that {{expandExprFromJoin}} is a private static method called from the package-private {{SelectExpander}} class, so overriding it requires copy-pasting a lot of code from {{SqlValidatorImpl}} into {{IgniteSqlValidator}}.
At the same time, such a fix on the Calcite side seems unacceptable, since doing such AST transformations in a validator is a bad idea. h4. B. Add the necessary casts on the SqlToRelConverter side. Implementing this approach on the Ignite side seems unacceptable for the same reason: too much code would need to be copy-pasted from Calcite. On the Calcite side, at first glance, such a fix requires two modifications. 1. Add the necessary casts to the conditions in the {{convertUsing}} method. 2. Add the necessary casts to {{COALESCE}} in the {{expandExprFromJoin}} method (this can be done in a separate ticket). But this requires deeper investigation. > Sql. Allow comparable types for common columns in NATURAL / USING join > conditions > - > > Key: IGNITE-22295 > URL: https://issues.apache.org/jira/browse/IGNITE-22295 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Maksim Zhuravkov >Assignee: Pavel Pereslegin >Priority: Minor > Labels: ignite-3 > > As type coercion for NATURAL/USING join conditions is not invoked, such > queries produce incorrect results when the column types of common columns do > not match. -- This message was sent by Atlassian Jira (v8.20.10#820010)
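The ON-rewrite idea from solution A can be illustrated outside of Calcite with a toy condition builder. The class, method, and type names below are illustrative assumptions, and the "cast to the left side's type" rule is a deliberate simplification of Calcite's real coercion logic:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Toy illustration of solution A: rewrite USING (c1, c2, ...) into an
// explicit ON condition, inserting a CAST whenever a common column has
// different types on the two sides of the join. Real coercion rules
// live in Calcite's type system; this only shows the shape of the fix.
class UsingToOnRewriter {
    static String rewrite(List<String> commonColumns,
                          Map<String, String> leftTypes,
                          Map<String, String> rightTypes) {
        return commonColumns.stream()
                .map(col -> {
                    String leftType = leftTypes.get(col);
                    String rightType = rightTypes.get(col);
                    // Matching types need no cast; otherwise cast the
                    // right side to the left side's type (simplified).
                    String rhs = leftType.equals(rightType)
                            ? "R." + col
                            : "CAST(R." + col + " AS " + leftType + ")";
                    return "L." + col + " = " + rhs;
                })
                .collect(Collectors.joining(" AND "));
    }
}
```

The same type comparison would drive the COALESCE wrapping for unqualified columns, which is why both {{convertUsing}} and {{expandExprFromJoin}} are mentioned above.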
[jira] [Created] (IGNITE-22660) C++ 3.0: Add version info to the ODBC dll
Vadim Pakhnushev created IGNITE-22660: - Summary: C++ 3.0: Add version info to the ODBC dll Key: IGNITE-22660 URL: https://issues.apache.org/jira/browse/IGNITE-22660 Project: Ignite Issue Type: Improvement Components: platforms Reporter: Vadim Pakhnushev Assignee: Igor Sapego We need to add version info to the Windows ODBC dll. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-22636) Catalog compaction. Implement catalog compaction coordinator.
[ https://issues.apache.org/jira/browse/IGNITE-22636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maksim Zhuravkov reassigned IGNITE-22636: - Assignee: (was: Maksim Zhuravkov) > Catalog compaction. Implement catalog compaction coordinator. > - > > Key: IGNITE-22636 > URL: https://issues.apache.org/jira/browse/IGNITE-22636 > Project: Ignite > Issue Type: Improvement >Reporter: Maksim Zhuravkov >Priority: Major > Labels: ignite-3 > > The compaction coordinator initiates the process of catalog compaction: > 1. Ask each node in the logical topology for its minimal required time (for > node_i, let's call this time Tmin_node_i). > 2. Choose the minimum of the Tmin_node_i values (let's call it Tmin). > 3. Select the catalog version Cv with catalog timestamp Ct such that Ct < Tmin. > 4. Send a command that calls CatalogManager::compactCatalog with Ct to every > node. > If the Catalog up to version Cv has tables T1, ..., Tn, then: > Each compaction initiation run should be postponed (step 4 is not performed) > if at least one node that owns partitions of the tables T1, ..., Tn is missing > from the current logical topology. > If a remote node that hosts at least one partition leaves the cluster, the > coordinator should wait until this node either re-enters the cluster (and then > retry asking for its minimal time) or is removed from all assignments. In the > latter case the absence of the node may simply be ignored. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-22519) Sql. Numerics. Conversion produces incorrect results
[ https://issues.apache.org/jira/browse/IGNITE-22519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maksim Zhuravkov reassigned IGNITE-22519: - Assignee: Maksim Zhuravkov > Sql. Numerics. Conversion produces incorrect results > > > Key: IGNITE-22519 > URL: https://issues.apache.org/jira/browse/IGNITE-22519 > Project: Ignite > Issue Type: Bug > Components: sql >Reporter: Maksim Zhuravkov >Assignee: Maksim Zhuravkov >Priority: Major > Labels: ignite-3 > > 1. > {noformat} > SELECT CAST('100.1' AS BIGINT); > Values(tuples=[[{ 100 }]]): > {noformat} > *Expected result* > It should be an error because '100.1' cannot be converted to BIGINT/long. > 2. > {noformat} > SELECT CAST(1e39 AS REAL); > Values(tuples=[[{ 1E39 }]]): > {noformat} > *Expected result* > Should be an overflow error > 3. > {noformat} > SELECT CAST(1e39 AS FLOAT); > Values(tuples=[[{ 1E39 }]]): > {noformat} > *Expected result* > Should be an overflow error -- This message was sent by Atlassian Jira (v8.20.10#820010)
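The expected behaviors above can be cross-checked against plain Java conversion semantics. This is only an illustration of why the current results are wrong, not Ignite's actual CAST implementation:

```java
// Illustrates the expected failures described in the ticket using plain
// Java semantics (not Ignite's CAST implementation).
class CastExpectations {
    // Case 1: '100.1' is not a valid integral literal, so a cast to
    // BIGINT should fail rather than silently truncate to 100.
    static boolean isValidBigintLiteral(String s) {
        try {
            Long.parseLong(s);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    // Cases 2 and 3: 1e39 exceeds Float.MAX_VALUE (~3.4e38), so a cast
    // to REAL/FLOAT should raise an overflow error instead of returning
    // the unchanged double value. A finite double that becomes infinite
    // after narrowing to float has overflowed the REAL range.
    static boolean overflowsReal(double v) {
        float f = (float) v;
        return Float.isInfinite(f) && !Double.isInfinite(v);
    }
}
```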
[jira] [Created] (IGNITE-22659) Error floods on logs
Alexander Belyak created IGNITE-22659: - Summary: Error floods on logs Key: IGNITE-22659 URL: https://issues.apache.org/jira/browse/IGNITE-22659 Project: Ignite Issue Type: Bug Components: general Affects Versions: 3.0 Reporter: Alexander Belyak Each particular error writes its own log message, even if there are already thousands of them. For example, in a two-node cluster, if we kill one node we get: *1857* of "[{*}ERROR{*}][%ClusterFailover2NodesTest_cluster_0%JRaft-Common-Executor-0][AbstractClientService] Fail to connect ClusterFailover2NodesTest_cluster_1, exception: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /172.120.1.2:3345." *160 000+* {*}WARNING{*}s of "Recoverable error during the request occurred (will be retried on the randomly selected node)" In a minute! This makes the log system useless, because the original error messages will have been rotated away before anybody has a chance to analyze them. -- This message was sent by Atlassian Jira (v8.20.10#820010)
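A common mitigation for this kind of flood is to throttle repeated identical messages. A minimal sketch, assuming a per-message-key suppression window (the class and method names are hypothetical, not Ignite's logging code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of per-message throttling: write a given message key
// at most once per suppression window, so a storm of identical errors
// produces one line per window instead of thousands.
class ThrottlingLogger {
    private final long windowNanos;
    private final Map<String, Long> lastLoggedAt = new ConcurrentHashMap<>();

    ThrottlingLogger(long windowNanos) {
        this.windowNanos = windowNanos;
    }

    /** Returns true if the message should actually be written. */
    boolean shouldLog(String messageKey, long nowNanos) {
        Long prev = lastLoggedAt.get(messageKey);
        if (prev != null && nowNanos - prev < windowNanos) {
            return false; // suppressed: same error seen within the window
        }
        lastLoggedAt.put(messageKey, nowNanos);
        return true;
    }
}
```

A production version would also count suppressed repeats and emit "N similar messages suppressed" at the end of each window, so the volume information is not lost.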