[jira] [Created] (IGNITE-21255) Java thin 3.0: Cache tables in ClientTables

2024-01-15 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-21255:
---

 Summary: Java thin 3.0: Cache tables in ClientTables
 Key: IGNITE-21255
 URL: https://issues.apache.org/jira/browse/IGNITE-21255
 Project: Ignite
  Issue Type: Improvement
  Components: thin client
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2
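One plausible shape for this improvement is memoizing table handles by name so repeated lookups skip a server round-trip. A minimal sketch with assumed names; this is not the actual Ignite 3 client code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: cache table handles by name so repeated table()
// calls do not hit the server. All names here are illustrative.
class ClientTables {
    private final Map<String, ClientTable> cache = new ConcurrentHashMap<>();

    ClientTable table(String name) {
        // computeIfAbsent yields at most one fetch per name, even under concurrency.
        return cache.computeIfAbsent(name, this::fetchFromServer);
    }

    // Stand-in for the network call that resolves the table on the server.
    private ClientTable fetchFromServer(String name) {
        return new ClientTable(name);
    }

    record ClientTable(String name) {}
}
```

A real implementation would also need invalidation when a table is dropped or its schema changes; the sketch omits that.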






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-21255) Java thin 3.0: Cache tables in ClientTables

2024-01-15 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21255:

Ignite Flags:   (was: Release Notes Required)

> Java thin 3.0: Cache tables in ClientTables
> ---
>
> Key: IGNITE-21255
> URL: https://issues.apache.org/jira/browse/IGNITE-21255
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>






[jira] [Updated] (IGNITE-21255) Java thin 3.0: Cache tables in ClientTables

2024-01-15 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21255:

Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> Java thin 3.0: Cache tables in ClientTables
> ---
>
> Key: IGNITE-21255
> URL: https://issues.apache.org/jira/browse/IGNITE-21255
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>






[jira] [Deleted] (IGNITE-21255) Java thin 3.0: Cache tables in ClientTables

2024-01-15 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn deleted IGNITE-21255:



> Java thin 3.0: Cache tables in ClientTables
> ---
>
> Key: IGNITE-21255
> URL: https://issues.apache.org/jira/browse/IGNITE-21255
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>






[jira] [Updated] (IGNITE-21140) Ignite 3 Disaster Recovery

2024-01-15 Thread Ivan Bessonov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov updated IGNITE-21140:
---
Description: 
This epic covers issues that users may encounter when part of their data 
becomes unavailable for some reason, such as "node is lost", or "part of the 
storage is lost", etc.

The following definitions will be used throughout:

Local partition states. A local property of a replica, storage, state machine, 
etc., associated with the partition:
 * _Healthy_
State machine is running, everything’s fine.
 * _Initializing_
Ignite node is online, but the corresponding raft group is yet to complete its 
initialization.
 * _Snapshot installation_
Full state transfer is taking place. Once it’s finished, the partition will 
become _healthy_ or {_}catching-up{_}. Before that, data can’t be read, and log 
replication is also on pause.
 * _Catching-up_
Node is in the process of replicating data from the leader, and its data is a 
little bit in the past. This state can only be observed from the leader, 
because only the leader has the latest committed index and the state of every 
peer.
 * _Broken_
Something’s wrong with the state machine. Some data might be unavailable for 
reading, log can’t be replicated, and this state won’t be changed automatically 
without intervention.

Global partition states. A global property of a partition that specifies 
its apparent functionality from the user’s point of view:
 * _Available partition_
Healthy partition that can process read and write requests. This means that the 
majority of peers are healthy at the moment.
 * _Read-only partition_
Partition that can process read requests, but can’t process write requests. 
There’s no healthy majority, but there’s at least one alive (healthy/catch-up) 
peer that can process historical read-only queries.
 * _Unavailable partition_
Partition that can’t process any requests.
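The two taxonomies above could be modeled as plain enums, with the global state derived from the local states of the peers. The sketch below uses assumed names and a simple majority rule; it only illustrates the definitions and is not Ignite code:

```java
import java.util.List;

// Illustrative enums for the states defined above; not Ignite's actual types.
enum LocalPartitionState {
    HEALTHY, INITIALIZING, SNAPSHOT_INSTALLATION, CATCHING_UP, BROKEN
}

enum GlobalPartitionState {
    AVAILABLE, READ_ONLY, UNAVAILABLE
}

class PartitionStates {
    // Hypothetical reduction mirroring the definitions: a healthy majority
    // means available; at least one alive peer means read-only; else unavailable.
    static GlobalPartitionState globalOf(List<LocalPartitionState> peers) {
        long healthy = peers.stream()
                .filter(s -> s == LocalPartitionState.HEALTHY).count();
        long alive = peers.stream()
                .filter(s -> s == LocalPartitionState.HEALTHY
                        || s == LocalPartitionState.CATCHING_UP).count();
        if (healthy * 2 > peers.size()) {
            return GlobalPartitionState.AVAILABLE;   // healthy majority
        }
        if (alive > 0) {
            return GlobalPartitionState.READ_ONLY;   // historical reads possible
        }
        return GlobalPartitionState.UNAVAILABLE;
    }
}
```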

Building blocks are a set of operations that can be executed by Ignite or by 
the user in order to improve cluster state.

Each building block must either be an automatic action with configurable 
timeout (if applicable), or a documented API, with mandatory 
diagnostics/metrics that would allow users to make decisions about these 
actions.
 # Offline Ignite node is brought back online, having all recent data.
_Not a disaster recovery mechanism, but worth mentioning._
A node with usable data that doesn’t require a full state transfer will become 
a peer and participate in voting and replication, allowing the partition to be 
_available_ if the majority is healthy. This is the best case for the user, 
where they simply restart offline nodes and the cluster continues being operable.
 # Automatic group scale-down.
Should happen when an Ignite node is offline for too long.
_Not a disaster recovery mechanism, but worth mentioning._
Only happens when the majority is online, meaning that user data is safe.
 # Manual partition restart.
Should be performed manually for broken peers.
 # Manual group peers/learners reconfiguration.
Should be performed on a group manually, if the majority is considered 
permanently lost.
 # Freshly re-entering the group.
Should happen when an Ignite node is returned to the group, but partition data 
is missing.
 # Cleaning the partition data.
If, for some reason, we know that a certain partition on a certain node is 
broken, we may ask Ignite to drop its data and re-enter the group empty (as 
stated in option 5).
Having a dedicated operation for cleaning the partition is preferable, because:
 ## partition data is stored in several storages
 ## not all of them use a “file per partition” storage format, not even close
 ## there’s also the raft log that should most likely be cleaned
 ## and possibly the raft metadata as well
 # Partial truncation of the log’s suffix.
This is a case of partial cleanup of partition data. This operation might be 
useful if we know that there’s junk in the log, but storages are not corrupted, 
so there’s a chance to save some data. Can be replaced with “clean partition 
data”.

In order for the user to make decisions about manual operations, we must 
provide partition states for all partitions in all tables/zones. Both global 
and local states. Global states are more important, because they directly 
correlate with user experience.

Some states will automatically lead to “available” partitions if the system 
overall is healthy and we simply wait for some time. For example, we wait until 
a snapshot installation or a rebalance is complete, and we’re happy. This is 
not considered a building block, because it’s a natural artifact of the 
architecture.

The current list is not exhaustive; it consists of basic actions that we could 
implement to cover a wide range of potential issues.
Any other addition to the list of basic blocks would simply refine it, 
potentially allowing users to recover faster, or with less data being lost.

[jira] [Created] (IGNITE-21256) Internal API for local partition states

2024-01-15 Thread Ivan Bessonov (Jira)
Ivan Bessonov created IGNITE-21256:
--

 Summary: Internal API for local partition states
 Key: IGNITE-21256
 URL: https://issues.apache.org/jira/browse/IGNITE-21256
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Bessonov


Please refer to https://issues.apache.org/jira/browse/IGNITE-21140 for the 
list. We need an API to access the list of local partitions and their states. 
The way to determine them:
 * compare current assignments with replica states
 * check the state machine; it might be broken or installing a snapshot
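A rough sketch of what such an internal API could look like; every name below is an assumption for illustration, not the actual Ignite interface:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical local-partition-state registry; names are illustrative only.
class LocalPartitionStateTracker {
    enum State { HEALTHY, INITIALIZING, SNAPSHOT_INSTALLATION, CATCHING_UP, BROKEN }

    private final Map<Integer, State> states = new ConcurrentHashMap<>();

    // Called by the replica/state-machine layer as conditions change.
    void report(int partitionId, State state) {
        states.put(partitionId, state);
    }

    // The internal API: an immutable snapshot of local partitions and states.
    Map<Integer, State> localStates() {
        return Map.copyOf(states);
    }
}
```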





[jira] [Created] (IGNITE-21257) Public API to get global partition states

2024-01-15 Thread Ivan Bessonov (Jira)
Ivan Bessonov created IGNITE-21257:
--

 Summary: Public API to get global partition states
 Key: IGNITE-21257
 URL: https://issues.apache.org/jira/browse/IGNITE-21257
 Project: Ignite
  Issue Type: Improvement
Reporter: Ivan Bessonov


Please refer to https://issues.apache.org/jira/browse/IGNITE-21140 for the list.

We should use the local partition states, implemented in IGNITE-21256, and 
combine them in a cluster-wide compute call before returning the result to the user.
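The combination step might look roughly like the reduction below, where each node contributes a map of its local states. All names and the majority rule are assumptions, not the actual implementation:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical reduction of per-node local states (as IGNITE-21256 would
// return them) into one global state per partition. Names are illustrative.
class GlobalStateAggregator {
    enum Local { HEALTHY, CATCHING_UP, BROKEN }
    enum Global { AVAILABLE, READ_ONLY, UNAVAILABLE }

    // nodeReports: one map per node, partitionId -> local state.
    static Map<Integer, Global> aggregate(List<Map<Integer, Local>> nodeReports, int replicas) {
        Map<Integer, Integer> healthy = new HashMap<>();
        Map<Integer, Integer> alive = new HashMap<>();
        for (Map<Integer, Local> report : nodeReports) {
            report.forEach((part, state) -> {
                if (state == Local.HEALTHY) healthy.merge(part, 1, Integer::sum);
                if (state != Local.BROKEN) alive.merge(part, 1, Integer::sum);
            });
        }
        Map<Integer, Global> result = new HashMap<>();
        for (Map<Integer, Local> report : nodeReports) {
            for (Integer part : report.keySet()) {
                int h = healthy.getOrDefault(part, 0);
                int a = alive.getOrDefault(part, 0);
                result.put(part, h * 2 > replicas ? Global.AVAILABLE
                        : a > 0 ? Global.READ_ONLY : Global.UNAVAILABLE);
            }
        }
        return result;
    }
}
```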





[jira] [Created] (IGNITE-21258) Handle primary replica events in the striped pool

2024-01-15 Thread Vladislav Pyatkov (Jira)
Vladislav Pyatkov created IGNITE-21258:
--

 Summary: Handle primary replica events in the striped pool
 Key: IGNITE-21258
 URL: https://issues.apache.org/jira/browse/IGNITE-21258
 Project: Ignite
  Issue Type: Improvement
Reporter: Vladislav Pyatkov


h3. Motivation
We seek to decrease the load on the metastorage thread, because while the MC 
thread is occupied, it blocks the handling of other events.
Currently, both primary replica events are handled on the MC thread, though the 
striped executor is a well-suited place to do it.

h3. Definition of done
Move handling of the primary replica events (PRIMARY_REPLICA_ELECTED, 
PRIMARY_REPLICA_EXPIRED) to the striped pool.
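The intended re-dispatch can be sketched with a plain striped executor: events for the same replication group always land on the same stripe, so per-group ordering is preserved while the metastorage thread only enqueues. This is an illustrative sketch, not Ignite's actual striped pool:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical striped pool: one single-threaded stripe per slot, chosen by
// group id, so handlers for a group run in submission order off the MC thread.
class StripedPool {
    private final ExecutorService[] stripes;

    StripedPool(int stripeCount) {
        stripes = new ExecutorService[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = Executors.newSingleThreadExecutor();
        }
    }

    // The caller (e.g. a PRIMARY_REPLICA_ELECTED listener) only enqueues here.
    void execute(int groupId, Runnable handler) {
        stripes[Math.floorMod(groupId, stripes.length)].execute(handler);
    }

    // Drains all stripes; returns true if every task finished in time.
    boolean shutdownAndWait(long timeoutSeconds) {
        boolean done = true;
        for (ExecutorService s : stripes) {
            s.shutdown();
            try {
                done &= s.awaitTermination(timeoutSeconds, TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return done;
    }
}
```

A listener such as onPrimaryReplicaElected would then merely submit its work to the pool instead of running it on the metastorage thread.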





[jira] [Created] (IGNITE-21259) InlineIndexTree corrupted tree exception

2024-01-15 Thread Nikita Amelchev (Jira)
Nikita Amelchev created IGNITE-21259:


 Summary: InlineIndexTree corrupted tree exception
 Key: IGNITE-21259
 URL: https://issues.apache.org/jira/browse/IGNITE-21259
 Project: Ignite
  Issue Type: Bug
Reporter: Nikita Amelchev


{noformat}
2024-01-10 17:06:22.677 [ERROR][build-idx-runner-#163171][] Critical system 
error detected. Will be handled accordingly to configured handler 
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet 
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], 
failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is 
corrupted [groupId=-2039304639, pageIds=[844420686668951], cacheId=-2039304639, 
cacheName=V1, indexName=V1_DATE_IDX, msg=Runtime failure on row: Row@21883308[ 
key: BinaryObject [idHash=1498817286, hash=-2130242954], val: Data hidden due 
to IGNITE_TO_STRING_INCLUDE_SENSITIVE flag. ][ data hidden, data hidden, data 
hidden, data hidden, data hidden 
org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
 B+Tree is corrupted [groupId=-2039304639, pageIds=[844420686668951], 
cacheId=-2039304639, cacheName=V1, indexName=V1_DATE_IDX, msg=Runtime failure 
on row: Row@21883308[ key: BinaryObject [idHash=1498817286, hash=-2130242954], 
val: Data hidden due to IGNITE_TO_STRING_INCLUDE_SENSITIVE flag. ][ data 
hidden, data hidden, data hidden, data hidden, data hidden ]]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexTree.corruptedTreeException(InlineIndexTree.java:561)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2724)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.put(BPlusTree.java:2655)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexImpl.putx(InlineIndexImpl.java:371)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexImpl.onUpdate(InlineIndexImpl.java:348)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.IndexProcessor.lambda$createIndexDynamically$0(IndexProcessor.java:225)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCachePartitionWorker$SchemaIndexCacheVisitorClosureWrapper.apply(SchemaIndexCachePartitionWorker.java:302)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateIndex(GridCacheMapEntry.java:4193)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCachePartitionWorker.processKey(SchemaIndexCachePartitionWorker.java:236)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCachePartitionWorker.processPartition(SchemaIndexCachePartitionWorker.java:191)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCachePartitionWorker.body(SchemaIndexCachePartitionWorker.java:130)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) 
~[ignite-core-14.1.2.jar:14.1.2]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
~[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
~[?:?]
at java.lang.Thread.run(Thread.java:829) ~[?:?]
Caused by: 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTreeRuntimeException:
 java.lang.AssertionError: 144396680232459590
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.doInitFromLink(CacheDataRowAdapter.java:345)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:165)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:136)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexTree.createIndexRow(InlineIndexTree.java:360)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.io.AbstractInlineInnerIO.getLookupRow(AbstractInlineInnerIO.java:129)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.io.AbstractInlineInnerIO.getLookupRow(AbstractInlineInnerIO.java:37)
 ~[ignite-core-14.1.2.jar:14

[jira] [Updated] (IGNITE-21259) InlineIndexTree corrupted tree exception

2024-01-15 Thread Nikita Amelchev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikita Amelchev updated IGNITE-21259:
-
Description: 
{noformat}
2024-01-10 17:06:22.677 [ERROR][build-idx-runner-#163171][] Critical system 
error detected. Will be handled accordingly to configured handler 
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, 
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet 
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], 
failureCtx=FailureContext [type=CRITICAL_ERROR, err=class 
o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is 
corrupted [groupId=-2039304639, pageIds=[844420686668951], cacheId=-2039304639, 
cacheName=V1, indexName=V1_DATE_IDX, msg=Runtime failure on row: Row@21883308[ 
key: BinaryObject [idHash=1498817286, hash=-2130242954], val: Data hidden due 
to IGNITE_TO_STRING_INCLUDE_SENSITIVE flag. ][ data hidden, data hidden, data 
hidden, data hidden, data hidden 
org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
 B+Tree is corrupted [groupId=-2039304639, pageIds=[844420686668951], 
cacheId=-2039304639, cacheName=V1, indexName=V1_DATE_IDX, msg=Runtime failure 
on row: Row@21883308[ key: BinaryObject [idHash=1498817286, hash=-2130242954], 
val: Data hidden due to IGNITE_TO_STRING_INCLUDE_SENSITIVE flag. ][ data 
hidden, data hidden, data hidden, data hidden, data hidden ]]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexTree.corruptedTreeException(InlineIndexTree.java:561)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2724)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.put(BPlusTree.java:2655)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexImpl.putx(InlineIndexImpl.java:371)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexImpl.onUpdate(InlineIndexImpl.java:348)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.IndexProcessor.lambda$createIndexDynamically$0(IndexProcessor.java:225)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCachePartitionWorker$SchemaIndexCacheVisitorClosureWrapper.apply(SchemaIndexCachePartitionWorker.java:302)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateIndex(GridCacheMapEntry.java:4193)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCachePartitionWorker.processKey(SchemaIndexCachePartitionWorker.java:236)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCachePartitionWorker.processPartition(SchemaIndexCachePartitionWorker.java:191)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.query.schema.SchemaIndexCachePartitionWorker.body(SchemaIndexCachePartitionWorker.java:130)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) 
~[ignite-core-14.1.2.jar:14.1.2]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) 
~[?:?]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) 
~[?:?]
at java.lang.Thread.run(Thread.java:829) ~[?:?]
Caused by: 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTreeRuntimeException:
 java.lang.AssertionError: 144396680232459590
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.doInitFromLink(CacheDataRowAdapter.java:345)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:165)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:136)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexTree.createIndexRow(InlineIndexTree.java:360)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.io.AbstractInlineInnerIO.getLookupRow(AbstractInlineInnerIO.java:129)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.io.AbstractInlineInnerIO.getLookupRow(AbstractInlineInnerIO.java:37)
 ~[ignite-core-14.1.2.jar:14.1.2]
at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexTree.getRow(InlineIndexTree.java:

[jira] [Updated] (IGNITE-21258) Handle primary replica events in the striped pool

2024-01-15 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-21258:
---
Description: 
h3. Motivation
We seek to decrease the load on the metastorage thread, because while the MC 
thread is occupied, it blocks the handling of other events.
{code}
placementDriver.listen(PrimaryReplicaEvent.PRIMARY_REPLICA_ELECTED, 
this::onPrimaryReplicaElected);
placementDriver.listen(PrimaryReplicaEvent.PRIMARY_REPLICA_EXPIRED, 
this::onPrimaryReplicaExpired);
{code}
Currently, both primary replica events are handled on the MC thread, though the 
striped executor is a well-suited place to do it.

h3. Definition of done
Move handling of the primary replica events (PRIMARY_REPLICA_ELECTED, 
PRIMARY_REPLICA_EXPIRED) to the striped pool.

  was:
h3. Motivation
We seek to decrease the load on the metastorage thread, because while the MC 
thread is occupied, it blocks the handling of other events.
Currently, both primary replica events are handled on the MC thread, though the 
striped executor is a well-suited place to do it.

h3. Definition of done
Move handling of the primary replica events (PRIMARY_REPLICA_ELECTED, 
PRIMARY_REPLICA_EXPIRED) to the striped pool.


> Handle primary replica events in the striped pool
> -
>
> Key: IGNITE-21258
> URL: https://issues.apache.org/jira/browse/IGNITE-21258
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> We seek to decrease the load on the metastorage thread, because while the MC 
> thread is occupied, it blocks the handling of other events.
> {code}
> placementDriver.listen(PrimaryReplicaEvent.PRIMARY_REPLICA_ELECTED, 
> this::onPrimaryReplicaElected);
> placementDriver.listen(PrimaryReplicaEvent.PRIMARY_REPLICA_EXPIRED, 
> this::onPrimaryReplicaExpired);
> {code}
> Currently, both primary replica events are handled on the MC thread, though 
> the striped executor is a well-suited place to do it.
> h3. Definition of done
> Move handling of the primary replica events (PRIMARY_REPLICA_ELECTED, 
> PRIMARY_REPLICA_EXPIRED) to the striped pool.





[jira] [Updated] (IGNITE-20593) Sql. Add implicit cast coercion rules to DdlSqlToCommandConverter.

2024-01-15 Thread Maksim Zhuravkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Zhuravkov updated IGNITE-20593:
--
Description: 
In order to make the behaviour of types in DDL commands consistent with other 
SQL statements, we need to update `DdlSqlToCommandConverter` to bring it in 
sync with the implicit type coercion rules:
- if type T1 can be converted from T2, then a DEFAULT for type T1 can accept 
values of type T2.


Example:

{code:java}
@Test
public void testDdl() {
sql("CREATE TABLE date_dim (id INTEGER PRIMARY KEY, dim DATE DEFAULT 
'2000-01-01')");
}

//ERROR:

Caused by: java.lang.ClassCastException: class 
org.apache.calcite.sql.SqlCharStringLiteral cannot be cast to class 
org.apache.calcite.sql.SqlUnknownLiteral 
(org.apache.calcite.sql.SqlCharStringLiteral and 
org.apache.calcite.sql.SqlUnknownLiteral are in unnamed module of loader 'app')
at 
org.apache.ignite.internal.sql.engine.prepare.ddl.DdlSqlToCommandConverter.fromLiteral(DdlSqlToCommandConverter.java:837)
... 18 more
{code}


  was:
In order to make the behaviour of types in DDL commands consistent with other 
SQL statements, we need to update `DdlSqlToCommandConverter` to bring it in 
sync with the implicit type coercion rules:
- if type T1 can be converted from T2, then a DEFAULT for type T1 can accept 
values of type T2.



> Sql. Add implicit cast coercion rules to DdlSqlToCommandConverter.
> --
>
> Key: IGNITE-20593
> URL: https://issues.apache.org/jira/browse/IGNITE-20593
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Maksim Zhuravkov
>Priority: Minor
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> In order to make the behaviour of types in DDL commands consistent with other 
> SQL statements, we need to update `DdlSqlToCommandConverter` to bring it in 
> sync with the implicit type coercion rules:
> - if type T1 can be converted from T2, then a DEFAULT for type T1 can accept 
> values of type T2.
> Example:
> {code:java}
> @Test
> public void testDdl() {
> sql("CREATE TABLE date_dim (id INTEGER PRIMARY KEY, dim DATE DEFAULT 
> '2000-01-01')");
> }
> //ERROR:
> Caused by: java.lang.ClassCastException: class 
> org.apache.calcite.sql.SqlCharStringLiteral cannot be cast to class 
> org.apache.calcite.sql.SqlUnknownLiteral 
> (org.apache.calcite.sql.SqlCharStringLiteral and 
> org.apache.calcite.sql.SqlUnknownLiteral are in unnamed module of loader 
> 'app')
>   at 
> org.apache.ignite.internal.sql.engine.prepare.ddl.DdlSqlToCommandConverter.fromLiteral(DdlSqlToCommandConverter.java:837)
>   ... 18 more
> {code}





[jira] [Updated] (IGNITE-20790) Implement Catalog compaction

2024-01-15 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20790:
---
Description: 
The Catalog is managed by the CatalogManager component. It stores Catalog 
versions in the Metastorage.

When a Catalog version is not needed anymore (this has to be defined), we 
should remove it.

Only the full prefixes can be removed. This means that if version N is still 
needed, we cannot remove versions M>N.

A design is needed.
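Prefix-only removal can be illustrated with a toy history keyed by version number; the structure is assumed for illustration and is not the actual CatalogManager:

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Toy catalog history illustrating prefix-only compaction: everything
// strictly below the earliest still-needed version is dropped; nothing at
// or above it may be touched. Not Ignite's actual CatalogManager.
class CatalogHistory {
    private final SortedMap<Integer, String> versions = new TreeMap<>();

    void put(int version, String snapshot) {
        versions.put(version, snapshot);
    }

    // Removes the prefix [.., earliestNeeded); versions >= earliestNeeded survive.
    void compactTo(int earliestNeeded) {
        versions.headMap(earliestNeeded).clear();
    }

    int size() {
        return versions.size();
    }

    boolean contains(int version) {
        return versions.containsKey(version);
    }
}
```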

  was:
When a Catalog version is not needed anymore (this has to be defined), we 
should remove it.

Only the full prefixes can be removed. This means that if version N is still 
needed, we cannot remove versions M>N.


> Implement Catalog compaction
> 
>
> Key: IGNITE-20790
> URL: https://issues.apache.org/jira/browse/IGNITE-20790
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
>
> The Catalog is managed by the CatalogManager component. It stores Catalog 
> versions in the Metastorage.
> When a Catalog version is not needed anymore (this has to be defined), we 
> should remove it.
> Only the full prefixes can be removed. This means that if version N is still 
> needed, we cannot remove versions M>N.
> A design is needed.





[jira] [Updated] (IGNITE-20790) Implement Catalog compaction

2024-01-15 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-20790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-20790:
---
Description: 
The Catalog is managed by the CatalogManager component. It stores Catalog 
versions in the Metastorage.

When a Catalog version is not needed anymore (this has to be defined), we 
should remove it.

Only the full prefixes of the Catalog history (consisting of Catalog versions) 
can be removed. This means that if version N is still needed, we cannot remove 
versions M>N.

A design is needed.

  was:
The Catalog is managed by the CatalogManager component. It stores Catalog 
versions in the Metastorage.

When a Catalog version is not needed anymore (this has to be defined), we 
should remove it.

Only the full prefixes can be removed. This means that if version N is still 
needed, we cannot remove versions M>N.

A design is needed.


> Implement Catalog compaction
> 
>
> Key: IGNITE-20790
> URL: https://issues.apache.org/jira/browse/IGNITE-20790
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
>
> The Catalog is managed by the CatalogManager component. It stores Catalog 
> versions in the Metastorage.
> When a Catalog version is not needed anymore (this has to be defined), we 
> should remove it.
> Only the full prefixes of the Catalog history (consisting of Catalog 
> versions) can be removed. This means that if version N is still needed, we 
> cannot remove versions M>N.
> A design is needed.





[jira] [Updated] (IGNITE-21260) testPubSubStreamer is flaky

2024-01-15 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21260:

Description: 
{code}
java.lang.AssertionError: Failed to wait latch completion, still wait 100 events
  at 
org.apache.ignite.stream.pubsub.PubSubStreamerSelfTest.consumerStream(PubSubStreamerSelfTest.java:214)
  at 
org.apache.ignite.stream.pubsub.PubSubStreamerSelfTest.testPubSubStreamer(PubSubStreamerSelfTest.java:138)
{code}

> testPubSubStreamer is flaky
> ---
>
> Key: IGNITE-21260
> URL: https://issues.apache.org/jira/browse/IGNITE-21260
> Project: Ignite
>  Issue Type: Bug
>  Components: extensions
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>
> {code}
> java.lang.AssertionError: Failed to wait latch completion, still wait 100 
> events
>   at 
> org.apache.ignite.stream.pubsub.PubSubStreamerSelfTest.consumerStream(PubSubStreamerSelfTest.java:214)
>   at 
> org.apache.ignite.stream.pubsub.PubSubStreamerSelfTest.testPubSubStreamer(PubSubStreamerSelfTest.java:138)
> {code}





[jira] [Created] (IGNITE-21260) testPubSubStreamer is flaky

2024-01-15 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-21260:
---

 Summary: testPubSubStreamer is flaky
 Key: IGNITE-21260
 URL: https://issues.apache.org/jira/browse/IGNITE-21260
 Project: Ignite
  Issue Type: Bug
  Components: extensions
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn








[jira] [Commented] (IGNITE-21160) .NET: Thin 3.0: TestExecuteColocatedUpdatesTableCacheOnTableDrop is flaky

2024-01-15 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17806745#comment-17806745
 ] 

Igor Sapego commented on IGNITE-21160:
--

Looks good to me.

> .NET: Thin 3.0: TestExecuteColocatedUpdatesTableCacheOnTableDrop is flaky
> -
>
> Key: IGNITE-21160
> URL: https://issues.apache.org/jira/browse/IGNITE-21160
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Apache.Ignite.Tests.Compute.ComputeTests.TestExecuteColocatedUpdatesTableCacheOnTableDrop
>  is flaky:
> {code}
> Apache.Ignite.TableNotFoundException : Table not found: 46
>   > Apache.Ignite.IgniteException : 
> org.apache.ignite.lang.TableNotFoundException: IGN-TBL-2 
> TraceId:ee0df118-c9cf-499e-9325-6fe842d5251b Table not found: 46
>   at 
> org.apache.ignite.client.handler.ClientPrimaryReplicaTracker.tableNotFoundException(ClientPrimaryReplicaTracker.java:344)
>   at 
> org.apache.ignite.client.handler.ClientPrimaryReplicaTracker.partitionsNoWait(ClientPrimaryReplicaTracker.java:231)
>   at 
> org.apache.ignite.client.handler.ClientPrimaryReplicaTracker.lambda$partitionsAsync$3(ClientPrimaryReplicaTracker.java:224)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.lambda$completeWaitersOnUpdate$0(PendingComparableValuesTracker.java:169)
>   at 
> java.base/java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:122)
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.completeWaitersOnUpdate(PendingComparableValuesTracker.java:169)
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.update(PendingComparableValuesTracker.java:103)
>   at 
> org.apache.ignite.internal.metastorage.server.time.ClusterTimeImpl.updateSafeTime(ClusterTimeImpl.java:146)
>   at 
> org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl.onSafeTimeAdvanced(MetaStorageManagerImpl.java:849)
>   at 
> org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl$1.onSafeTimeAdvanced(MetaStorageManagerImpl.java:456)
>   at 
> org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$advanceSafeTime$9(WatchProcessor.java:322)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
>   at 
> java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:834)
>at Apache.Ignite.Internal.ClientSocket.DoOutInOpAsync(ClientOp clientOp, 
> PooledArrayBuffer request, Boolean expectNotifications) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientSocket.cs:line
>  304
>at 
> Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAndGetSocketAsync(ClientOp
>  clientOp, Transaction tx, PooledArrayBuffer request, PreferredNode 
> preferredNode, IRetryPolicy retryPolicyOverride, Boolean expectNotifications) 
> in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  204
>at Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAsync(ClientOp 
> clientOp, PooledArrayBuffer request, PreferredNode preferredNode, Boolean 
> expectNotifications) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  153
>at Apache.Ignite.Internal.Table.Table.LoadPartitionAssignmentAsync(Int64 
> timestamp) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/Table.cs:line
>  404
>at Apache.Ignite.Internal.Table.Table.GetPartitionAssignmentAsync() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/Table.cs:line
>  239
>at Apache.Ignite.Internal.Table.Table.GetPreferredNode(Int32 
> colocationHash, ITransaction transaction) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/Table.cs:line
>  198
>at 
> Apache.Ignite.Inte

[jira] [Updated] (IGNITE-21181) Failure to resolve a primary replica after stopping a node

2024-01-15 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-21181:
--
Description: 
The scenario: the cluster consists of 3 nodes (0, 1, 2), and the primary 
replica of the sole partition is on node 0. Node 0 is then stopped, and a put is 
attempted via node 2. The partition still has a majority, but the put fails as 
follows:
 
{code:java}
org.apache.ignite.tx.TransactionException: IGN-REP-5 
TraceId:55c59c96-17d1-4efc-8e3c-cca81b8b41ad Failed to resolve the primary 
replica node [consistentId=itrst_ncisasiti_0]
 
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.lambda$enlist$69(InternalTableImpl.java:1749)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946)
at 
java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2266)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlist(InternalTableImpl.java:1739)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlistWithRetry(InternalTableImpl.java:480)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlistInTx(InternalTableImpl.java:301)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.upsert(InternalTableImpl.java:965)
at 
org.apache.ignite.internal.table.KeyValueViewImpl.lambda$putAsync$10(KeyValueViewImpl.java:196)
at 
org.apache.ignite.internal.table.AbstractTableView.lambda$withSchemaSync$1(AbstractTableView.java:111)
at 
java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106)
at 
java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235)
at 
org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:111)
at 
org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:102)
at 
org.apache.ignite.internal.table.KeyValueViewImpl.putAsync(KeyValueViewImpl.java:193)
at 
org.apache.ignite.internal.table.KeyValueViewImpl.put(KeyValueViewImpl.java:185)
at 
org.apache.ignite.internal.raftsnapshot.ItTableRaftSnapshotsTest.putToNode(ItTableRaftSnapshotsTest.java:257)
at 
org.apache.ignite.internal.raftsnapshot.ItTableRaftSnapshotsTest.putToNode(ItTableRaftSnapshotsTest.java:253)
at 
org.apache.ignite.internal.raftsnapshot.ItTableRaftSnapshotsTest.nodeCanInstallSnapshotsAfterSnapshotInstalledToIt(ItTableRaftSnapshotsTest.java:473){code}
 
This can be reproduced using 
ItTableRaftSnapshotsTest#nodeCanInstallSnapshotsAfterSnapshotInstalledToIt().

The reason is that leadership of the partition group is transferred to node 0, 
which means that this node will most probably be selected as primary; after 
that, node 0 is stopped, and then the transaction is started. Node 0 is still 
the leaseholder in the current time interval, but it has already left the 
topology.

We can fix the test to make it await the new primary, which would be present in 
the cluster, or retry the very first transactional request. In the latter case, 
we need to ensure that the request is actually the first and only one, and that 
no other request was sent in a parallel thread; otherwise we can't retry the 
request on another primary.
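A minimal sketch of the retry-the-first-request option, using hypothetical names 
(none of these are Ignite's actual classes): a per-transaction tracker permits a 
retry on a primary-resolution failure only when the failing request is provably 
the first and only request of the transaction, counted atomically.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Hedged sketch, not Ignite's actual API: retry a transactional request on
// another primary only if it is the first and only request of the transaction,
// i.e. no other request may have reached the old primary in a parallel thread.
public class FirstRequestRetry {
    private final AtomicInteger requestsSent = new AtomicInteger();

    public <T> T execute(Supplier<T> request, int maxRetries) {
        // Only the very first enlisted request can be retried safely.
        boolean firstAndOnly = requestsSent.incrementAndGet() == 1;

        for (int attempt = 0; ; attempt++) {
            try {
                return request.get();
            } catch (RuntimeException e) { // stands in for a primary-resolution failure
                if (!firstAndOnly || attempt >= maxRetries) {
                    throw e;
                }
                // Otherwise fall through and retry against the new primary.
            }
        }
    }
}
```

A later request of the same transaction sees a counter above 1 and fails fast 
instead of retrying.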

  was:
The scenario: the cluster consists of 3 nodes (0, 1, 2), and the primary 
replica of the sole partition is on node 0. Node 0 is then stopped, and a put is 
attempted via node 2. The partition still has a majority, but the put fails as 
follows:
 
{code:java}
org.apache.ignite.tx.TransactionException: IGN-REP-5 
TraceId:55c59c96-17d1-4efc-8e3c-cca81b8b41ad Failed to resolve the primary 
replica node [consistentId=itrst_ncisasiti_0]
 
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.lambda$enlist$69(InternalTableImpl.java:1749)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946)
at 
java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2266)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlist(InternalTableImpl.java:1739)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlistWithRetry(InternalTableImpl.java:480)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlistInTx(InternalTableImpl.java:301)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.upsert(InternalTableImpl.java:965)
at 
org.apache.ignite.internal.table.KeyValueViewImpl.lambda$putAsync$10(KeyValueViewImpl.java:196)
at 
org.apache.ignite.internal.table.AbstractTableView.lambda$withSchemaSync$1(AbstractTableView.java:111)
at 
java.base

[jira] [Updated] (IGNITE-21181) Failure to resolve a primary replica after stopping a node

2024-01-15 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov updated IGNITE-21181:
--
Description: 
The scenario: the cluster consists of 3 nodes (0, 1, 2), and the primary 
replica of the sole partition is on node 0. Node 0 is then stopped, and a put is 
attempted via node 2. The partition still has a majority, but the put fails as 
follows:
 
{code:java}
org.apache.ignite.tx.TransactionException: IGN-REP-5 
TraceId:55c59c96-17d1-4efc-8e3c-cca81b8b41ad Failed to resolve the primary 
replica node [consistentId=itrst_ncisasiti_0]
 
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.lambda$enlist$69(InternalTableImpl.java:1749)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946)
at 
java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2266)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlist(InternalTableImpl.java:1739)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlistWithRetry(InternalTableImpl.java:480)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlistInTx(InternalTableImpl.java:301)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.upsert(InternalTableImpl.java:965)
at 
org.apache.ignite.internal.table.KeyValueViewImpl.lambda$putAsync$10(KeyValueViewImpl.java:196)
at 
org.apache.ignite.internal.table.AbstractTableView.lambda$withSchemaSync$1(AbstractTableView.java:111)
at 
java.base/java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:1106)
at 
java.base/java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2235)
at 
org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:111)
at 
org.apache.ignite.internal.table.AbstractTableView.withSchemaSync(AbstractTableView.java:102)
at 
org.apache.ignite.internal.table.KeyValueViewImpl.putAsync(KeyValueViewImpl.java:193)
at 
org.apache.ignite.internal.table.KeyValueViewImpl.put(KeyValueViewImpl.java:185)
at 
org.apache.ignite.internal.raftsnapshot.ItTableRaftSnapshotsTest.putToNode(ItTableRaftSnapshotsTest.java:257)
at 
org.apache.ignite.internal.raftsnapshot.ItTableRaftSnapshotsTest.putToNode(ItTableRaftSnapshotsTest.java:253)
at 
org.apache.ignite.internal.raftsnapshot.ItTableRaftSnapshotsTest.nodeCanInstallSnapshotsAfterSnapshotInstalledToIt(ItTableRaftSnapshotsTest.java:473){code}
 
This can be reproduced using 
ItTableRaftSnapshotsTest#nodeCanInstallSnapshotsAfterSnapshotInstalledToIt().

The reason is that, according to the test, leadership of the partition group is 
transferred to node 0, which means that this node will most probably be selected 
as primary; after that, node 0 is stopped, and then the transaction is started. 
Node 0 is still the leaseholder in the current time interval, but it has already 
left the topology.

We can fix the test to make it await the new primary, which would be present in 
the cluster, or retry the very first transactional request. In the latter case, 
we need to ensure that the request is actually the first and only one, and that 
no other request was sent in a parallel thread; otherwise we can't retry the 
request on another primary.

  was:
The scenario: the cluster consists of 3 nodes (0, 1, 2), and the primary 
replica of the sole partition is on node 0. Node 0 is then stopped, and a put is 
attempted via node 2. The partition still has a majority, but the put fails as 
follows:
 
{code:java}
org.apache.ignite.tx.TransactionException: IGN-REP-5 
TraceId:55c59c96-17d1-4efc-8e3c-cca81b8b41ad Failed to resolve the primary 
replica node [consistentId=itrst_ncisasiti_0]
 
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.lambda$enlist$69(InternalTableImpl.java:1749)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:930)
at 
java.base/java.util.concurrent.CompletableFuture.uniHandleStage(CompletableFuture.java:946)
at 
java.base/java.util.concurrent.CompletableFuture.handle(CompletableFuture.java:2266)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlist(InternalTableImpl.java:1739)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlistWithRetry(InternalTableImpl.java:480)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.enlistInTx(InternalTableImpl.java:301)
at 
org.apache.ignite.internal.table.distributed.storage.InternalTableImpl.upsert(InternalTableImpl.java:965)
at 
org.apache.ignite.internal.table.KeyValueViewImpl.lambda$putAsync$10(KeyValueViewImpl.java:196)
at 
org.apache.ignite.internal.table.AbstractTableView.lambda$withSchemaSync$1(AbstractTableView

[jira] [Created] (IGNITE-21261) Fix exception 'Unknown topic' is never thrown in KafkaToIgniteMetadataUpdater

2024-01-15 Thread Alexey Gidaspov (Jira)
Alexey Gidaspov created IGNITE-21261:


 Summary: Fix exception 'Unknown topic' is never thrown in 
KafkaToIgniteMetadataUpdater
 Key: IGNITE-21261
 URL: https://issues.apache.org/jira/browse/IGNITE-21261
 Project: Ignite
  Issue Type: Task
Reporter: Alexey Gidaspov
Assignee: Alexey Gidaspov


The exception 'Unknown topic' is never thrown in KafkaToIgniteMetadataUpdater 
since the Kafka library was upgraded to 3.4.0.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (IGNITE-21262) Sql. Push down predicate under correlate

2024-01-15 Thread Konstantin Orlov (Jira)
Konstantin Orlov created IGNITE-21262:
-

 Summary: Sql. Push down predicate under correlate
 Key: IGNITE-21262
 URL: https://issues.apache.org/jira/browse/IGNITE-21262
 Project: Ignite
  Issue Type: Bug
  Components: sql
Reporter: Konstantin Orlov


Currently, predicates are not pushed down under a LogicalCorrelate node. As a 
result, complex queries with multiple joins, such as q2, q4, q21, and q22 from 
the TPC-H suite, are executed as a huge cross join of several tables with 
post-filtering afterwards.

This could simply be addressed by introducing {{CoreRules.FILTER_CORRELATE}} to 
the rule set.
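For illustration, here is a toy model of what such a rule does (these are not 
Calcite's real classes, and a real rule must also verify that the predicate 
references only the left input): pushing the Filter below the Correlate lets the 
predicate prune rows before the correlated join produces its potentially huge 
cross product.

```java
// Toy plan nodes, standing in for Calcite's RelNodes (hypothetical names).
public class FilterCorrelatePushdown {
    interface Node {}
    record Scan(String table) implements Node {}
    record Filter(String predicate, Node input) implements Node {}
    record Correlate(Node left, Node right) implements Node {}

    // FILTER_CORRELATE-style rewrite: Filter(Correlate(l, r)) becomes
    // Correlate(Filter(l), r). For simplicity this sketch assumes the
    // predicate references only the left input; the real rule checks that.
    static Node pushDown(Node node) {
        if (node instanceof Filter f && f.input() instanceof Correlate c) {
            return new Correlate(new Filter(f.predicate(), c.left()), c.right());
        }
        return node; // no match: leave the plan unchanged
    }
}
```

In Calcite itself this transformation is performed by 
{{CoreRules.FILTER_CORRELATE}} once the rule is registered with the planner.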



--


[jira] [Commented] (IGNITE-21160) .NET: Thin 3.0: TestExecuteColocatedUpdatesTableCacheOnTableDrop is flaky

2024-01-15 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17806771#comment-17806771
 ] 

Pavel Tupitsyn commented on IGNITE-21160:
-

Merged to main: 4fa174c269bb2ec1db5cfc4a2e978a5d0a43cf73

> .NET: Thin 3.0: TestExecuteColocatedUpdatesTableCacheOnTableDrop is flaky
> -
>
> Key: IGNITE-21160
> URL: https://issues.apache.org/jira/browse/IGNITE-21160
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Apache.Ignite.Tests.Compute.ComputeTests.TestExecuteColocatedUpdatesTableCacheOnTableDrop
>  is flaky:
> {code}
> Apache.Ignite.TableNotFoundException : Table not found: 46
>   > Apache.Ignite.IgniteException : 
> org.apache.ignite.lang.TableNotFoundException: IGN-TBL-2 
> TraceId:ee0df118-c9cf-499e-9325-6fe842d5251b Table not found: 46
>   at 
> org.apache.ignite.client.handler.ClientPrimaryReplicaTracker.tableNotFoundException(ClientPrimaryReplicaTracker.java:344)
>   at 
> org.apache.ignite.client.handler.ClientPrimaryReplicaTracker.partitionsNoWait(ClientPrimaryReplicaTracker.java:231)
>   at 
> org.apache.ignite.client.handler.ClientPrimaryReplicaTracker.lambda$partitionsAsync$3(ClientPrimaryReplicaTracker.java:224)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
>   at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
>   at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.lambda$completeWaitersOnUpdate$0(PendingComparableValuesTracker.java:169)
>   at 
> java.base/java.util.concurrent.ConcurrentMap.forEach(ConcurrentMap.java:122)
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.completeWaitersOnUpdate(PendingComparableValuesTracker.java:169)
>   at 
> org.apache.ignite.internal.util.PendingComparableValuesTracker.update(PendingComparableValuesTracker.java:103)
>   at 
> org.apache.ignite.internal.metastorage.server.time.ClusterTimeImpl.updateSafeTime(ClusterTimeImpl.java:146)
>   at 
> org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl.onSafeTimeAdvanced(MetaStorageManagerImpl.java:849)
>   at 
> org.apache.ignite.internal.metastorage.impl.MetaStorageManagerImpl$1.onSafeTimeAdvanced(MetaStorageManagerImpl.java:456)
>   at 
> org.apache.ignite.internal.metastorage.server.WatchProcessor.lambda$advanceSafeTime$9(WatchProcessor.java:322)
>   at 
> java.base/java.util.concurrent.CompletableFuture$UniRun.tryFire(CompletableFuture.java:783)
>   at 
> java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:478)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:834)
>at Apache.Ignite.Internal.ClientSocket.DoOutInOpAsync(ClientOp clientOp, 
> PooledArrayBuffer request, Boolean expectNotifications) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientSocket.cs:line
>  304
>at 
> Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAndGetSocketAsync(ClientOp
>  clientOp, Transaction tx, PooledArrayBuffer request, PreferredNode 
> preferredNode, IRetryPolicy retryPolicyOverride, Boolean expectNotifications) 
> in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  204
>at Apache.Ignite.Internal.ClientFailoverSocket.DoOutInOpAsync(ClientOp 
> clientOp, PooledArrayBuffer request, PreferredNode preferredNode, Boolean 
> expectNotifications) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/ClientFailoverSocket.cs:line
>  153
>at Apache.Ignite.Internal.Table.Table.LoadPartitionAssignmentAsync(Int64 
> timestamp) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/Table.cs:line
>  404
>at Apache.Ignite.Internal.Table.Table.GetPartitionAssignmentAsync() in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/Table.cs:line
>  239
>at Apache.Ignite.Internal.Table.Table.GetPreferredNode(Int32 
> colocationHash, ITransaction transaction) in 
> /opt/buildagent/work/b8d4df1365f1f1e5/modules/platforms/dotnet/Apache.Ignite/Internal/Table/Table

[jira] [Created] (IGNITE-21263) Failure getting 1 entry out of 10M on a single node cluster

2024-01-15 Thread Ivan Artiukhov (Jira)
Ivan Artiukhov created IGNITE-21263:
---

 Summary: Failure getting 1 entry out of 10M on a single node 
cluster
 Key: IGNITE-21263
 URL: https://issues.apache.org/jira/browse/IGNITE-21263
 Project: Ignite
  Issue Type: Bug
Reporter: Ivan Artiukhov
 Attachments: 1132-logs.zip

AI3 rev. 36450ff06da48c06567d9e79eaabf9c017a651e9

Benchmark which uses the key-value API: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.10/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteClient.java]
 
h1. Steps

Start a 1-node AI3 cluster

Run YCSB benchmark in {{load}} mode to put 10M unique entries:
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=25 -p warmupops=5 -p dataintegrity=true -p 
measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.90 -p 
recordcount=1000 -p operationcount=1000 -s {code}
 Run YCSB benchmark in {{run}} mode to read the loaded 10M entries:
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=25 -p recordcount=25 -p warmupops=5 -p 
dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
hosts=192.168.1.90 -p recordcount=1000 -p operationcount=1000 -s {code}
h1. Expected behavior

All 10M entries were read without errors
h1. Actual behavior

Failure to read 1 entry out of 10M
h1. Notes

Logs, config: [^1132-logs.zip]



--


[jira] [Created] (IGNITE-21264) Severe performance drop during key-value get()

2024-01-15 Thread Ivan Artiukhov (Jira)
Ivan Artiukhov created IGNITE-21264:
---

 Summary: Severe performance drop during key-value get()
 Key: IGNITE-21264
 URL: https://issues.apache.org/jira/browse/IGNITE-21264
 Project: Ignite
  Issue Type: Bug
Reporter: Ivan Artiukhov
 Attachments: 1132-get.png, 1132-logs.zip, 1132-put.png

AI3 rev. 36450ff06da48c06567d9e79eaabf9c017a651e9

Benchmark which uses the key-value API: 
[https://github.com/gridgain/YCSB/blob/ycsb-2023.10/ignite3/src/main/java/site/ycsb/db/ignite3/IgniteClient.java]
 
h1. Steps

Start a 1-node AI3 cluster

Run YCSB benchmark in {{load}} mode to put 10M unique entries:
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteClient -load -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
recordcount=25 -p warmupops=5 -p dataintegrity=true -p 
measurementtype=timeseries -p status.interval=1 -p hosts=192.168.1.90 -p 
recordcount=1000 -p operationcount=1000 -s {code}
 Run YCSB benchmark in {{run}} mode to read the loaded 10M entries:
{code:java}
Command line: -db site.ycsb.db.ignite3.IgniteClient -t -P 
/opt/pubagent/poc/config/ycsb/workloads/workloadc -threads 1 -p 
operationcount=25 -p recordcount=25 -p warmupops=5 -p 
dataintegrity=true -p measurementtype=timeseries -p status.interval=1 -p 
hosts=192.168.1.90 -p recordcount=1000 -p operationcount=1000 -s {code}
h1. Actual behavior

Severe performance drop during {{get()}}.
h2. Single put()

Average throughput:

!1132-put.png!
h2. Single get()

Average throughput:

!1132-get.png!
h1. Notes

Logs, config: [^1132-logs.zip]



--


[jira] [Commented] (IGNITE-21261) Fix exception 'Unknown topic' is never thrown in KafkaToIgniteMetadataUpdater

2024-01-15 Thread Alexey Gidaspov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17806816#comment-17806816
 ] 

Alexey Gidaspov commented on IGNITE-21261:
--

https://tc2.sbt-ignite-dev.ru/viewLog.html?tab=dependencies&depsTab=snapshot&buildId=7705635&buildTypeId=IgniteExtensions_Tests_RunAllTests&fromSakuraUI=true#_expand=block_bt1193-7705635&hpos=&vpos=

> Fix exception 'Unknown topic' is never thrown in KafkaToIgniteMetadataUpdater
> -
>
> Key: IGNITE-21261
> URL: https://issues.apache.org/jira/browse/IGNITE-21261
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Gidaspov
>Assignee: Alexey Gidaspov
>Priority: Major
>  Labels: ise
>
> The exception 'Unknown topic' is never thrown in KafkaToIgniteMetadataUpdater 
> since the Kafka library was upgraded to 3.4.0.



--


[jira] [Commented] (IGNITE-17615) Close local cursors on primary replica expiration

2024-01-15 Thread Alexander Lapin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17806830#comment-17806830
 ] 

Alexander Lapin commented on IGNITE-17615:
--

[~v.pyatkov] LGTM, Thanks!

> Close local cursors on primary replica expiration
> -
>
> Key: IGNITE-17615
> URL: https://issues.apache.org/jira/browse/IGNITE-17615
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Uttsel
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> h3. Motivation
> According to our tx protocol, it’s impossible to commit a transaction if any 
> of the enlisted primary replicas have expired. It also means that there’s no 
> sense in preserving tx-related volatile state such as locks and cursors. Note 
> that it’s still useful to preserve txnState in the txnStateLocalMap because it 
> will ease the write intent resolution procedure. Lock release on primary 
> replica expiration was already implemented, so this ticket is only about 
> closing cursors on primary expiration.
> h3. Definition of Done
>  * On primary replica expiration all RW-scoped cursors are closed.
> h3. Implementation Notes
> 1. In 
> `org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener#onPrimaryExpired`
>  we release all tx locks:
> {code:java}
> futs.add(allOf(txFuts).whenComplete((unused, throwable) -> 
> releaseTxLocks(txId)));
> {code}
> It seems reasonable to reuse the same event to close the cursors. Worth 
> mentioning that the given action should be asynchronous. I believe that we may 
> do the cursor close in the partition striped pool. See 
> StripedThreadPoolExecutor for more details. Another option here is to 
> introduce a special dedicated cleanup thread and use it instead. That will be 
> a part of the TX Resource Cleanup design.
> 2. That was about when to close; now let’s clarify what to close. It seems 
> trivial. We have 
> `org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener#cursors`
>  right in partition replica listeners. We even have corresponding helper 
> method 
> `org.apache.ignite.internal.table.distributed.replicator.PartitionReplicaListener#closeAllTransactionCursors`
>  
> All in all, it seems we just need to substitute
>  
> {code:java}
> futs.add(allOf(txFuts).whenComplete((unused, throwable) -> 
> releaseTxLocks(txId)));{code}
> with
> {code:java}
> futs.add(allOf(txFuts).whenComplete((unused, throwable) -> {
>     releaseTxLocks(txId);
>
>     try {
>         closeAllTransactionCursors(txId);
>     } catch (Exception e) {
>         LOG.warn("Unable to close cursor on primary replica expiration", e);
>     }
> }));{code}
> Tests are tricky though.
>  



--


[jira] [Updated] (IGNITE-20681) Remove limit on write intent switch attempts

2024-01-15 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IGNITE-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

 Kirill Sizov updated IGNITE-20681:
---
Summary: Remove limit on write intent switch attempts  (was: Remove limit 
on cleanup attempts)

> Remove limit on write intent switch attempts
> 
>
> Key: IGNITE-20681
> URL: https://issues.apache.org/jira/browse/IGNITE-20681
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> PartitionReplicaListener#ATTEMPTS_TO_CLEANUP_REPLICA is not actually needed 
> and can be removed. After that, the code of durable cleanup can be refactored 
> a bit in order to unify the logic.



--


[jira] [Updated] (IGNITE-20681) Remove limit on write intent switch attempts

2024-01-15 Thread Jira


 [ 
https://issues.apache.org/jira/browse/IGNITE-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

 Kirill Sizov updated IGNITE-20681:
---
Description: {{WriteIntentSwitchProcessor#ATTEMPTS_TO_SWITCH_WI}} is not 
actually needed and can be removed. After that, the code of durable cleanup can 
be refactored a bit in order to unify the logic.  (was: 
PartitionReplicaListener#ATTEMPTS_TO_CLEANUP_REPLICA is not actually needed and 
can be removed. After that, the code of durable cleanup can be refactored a bit 
in order to unify the logic.)

> Remove limit on write intent switch attempts
> 
>
> Key: IGNITE-20681
> URL: https://issues.apache.org/jira/browse/IGNITE-20681
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> {{WriteIntentSwitchProcessor#ATTEMPTS_TO_SWITCH_WI}} is not actually needed 
> and can be removed. After that, the code of durable cleanup can be refactored 
> a bit in order to unify the logic.



--


[jira] [Assigned] (IGNITE-21262) Sql. Push down predicate under correlate

2024-01-15 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov reassigned IGNITE-21262:
-

Assignee: Konstantin Orlov

> Sql. Push down predicate under correlate
> 
>
> Key: IGNITE-21262
> URL: https://issues.apache.org/jira/browse/IGNITE-21262
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, predicates are not pushed down under a LogicalCorrelate node. As a 
> result, complex queries with multiple joins, such as q2, q4, q21, and q22 from 
> the TPC-H suite, are executed as a huge cross join of several tables with 
> post-filtering afterwards.
> This could simply be addressed by introducing {{CoreRules.FILTER_CORRELATE}} 
> to the rule set.



--


[jira] [Commented] (IGNITE-21262) Sql. Push down predicate under correlate

2024-01-15 Thread Konstantin Orlov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17806882#comment-17806882
 ] 

Konstantin Orlov commented on IGNITE-21262:
---

[~zstan], [~mzhuravkov], folks, do a review please

> Sql. Push down predicate under correlate
> 
>
> Key: IGNITE-21262
> URL: https://issues.apache.org/jira/browse/IGNITE-21262
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, predicates are not pushed down under a LogicalCorrelate node. As a 
> result, complex queries with multiple joins, such as q2, q4, q21, and q22 from 
> the TPC-H suite, are executed as a huge cross join of several tables with 
> post-filtering afterwards.
> This could simply be addressed by introducing {{CoreRules.FILTER_CORRELATE}} 
> to the rule set.



--


[jira] [Updated] (IGNITE-21262) Sql. Push down predicate under correlate

2024-01-15 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-21262:
--
Fix Version/s: 3.0.0-beta2

> Sql. Push down predicate under correlate
> 
>
> Key: IGNITE-21262
> URL: https://issues.apache.org/jira/browse/IGNITE-21262
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, predicates are not pushed down under a LogicalCorrelate node. As a 
> result, complex queries with multiple joins, such as q2, q4, q21, and q22 from 
> the TPC-H suite, are executed as a huge cross join of several tables with 
> post-filtering afterwards.
> This could simply be addressed by introducing {{CoreRules.FILTER_CORRELATE}} 
> to the rule set.



--


[jira] [Updated] (IGNITE-21262) Sql. Push down predicate under correlate

2024-01-15 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-21262:
--
Description: 
Currently, a predicate isn't pushed down under a LogicalCorrelate node. As a 
result, complex queries with multiple joins, such as q2, q4, q21, and q22 from 
the TPC-H suite, are executed as a humongous cross join of several tables with 
post-filtering afterwards.

This could simply be addressed by introducing {{CoreRules.FILTER_CORRELATE}} 
into the rule set.


  was:
Currently, predicate doesn't pushed down under LogicalCorrelate node. As a 
result, complex queries with multiple joins, such q2, q4, q21, q22 from TPC-H 
suite, executed as a humongous cross join of several tables with post 
filtration afterwards.

This could simply be addressed by introducing {{CoreRules.FILTER_CORRELATE}} to 
rule set.


> Sql. Push down predicate under correlate
> 
>
> Key: IGNITE-21262
> URL: https://issues.apache.org/jira/browse/IGNITE-21262
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, a predicate isn't pushed down under a LogicalCorrelate node. As a 
> result, complex queries with multiple joins, such as q2, q4, q21, and q22 from 
> the TPC-H suite, are executed as a humongous cross join of several tables with 
> post-filtering afterwards.
> This could simply be addressed by introducing {{CoreRules.FILTER_CORRELATE}} 
> into the rule set.





[jira] [Commented] (IGNITE-21061) Durable cleanup requires additional replication group command

2024-01-15 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17806887#comment-17806887
 ] 

Vladislav Pyatkov commented on IGNITE-21061:


LGTM

> Durable cleanup requires additional replication group command
> -
>
> Key: IGNITE-21061
> URL: https://issues.apache.org/jira/browse/IGNITE-21061
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> h3. Motivation
> After locks are released, the information must be written to the persistent 
> transaction storage and replicated to all nodes of the commit partition 
> replication group. That is performed by the replication command 
> ({{MarkLocksReleasedCommand}}). As a result, we have an additional 
> replication command for the entire transaction process.
> h3. Implementation notes
> In my opinion, we can resolve this situation in the transaction resolution 
> procedure ({{OrphanDetector}}). Nothing additional needs to be done: the locks 
> are either released by a durable finish from the transaction coordinator or 
> released during recovery by {{OrphanDetector}}.





[jira] [Assigned] (IGNITE-17666) Scan subscription cancel does not close a server side cursor

2024-01-15 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov reassigned IGNITE-17666:
--

Assignee: Vladislav Pyatkov  (was: Denis Chudov)

> Scan subscription cancel does not close a server side cursor
> 
>
> Key: IGNITE-17666
> URL: https://issues.apache.org/jira/browse/IGNITE-17666
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> h3. Motivation
> Read-only transactions start a scan operation and save transaction cursors on 
> the replica side. The cursors stay on the server until they are closed. 
> Read-only transactions finish locally and do not notify all replicas of the 
> transaction.
> h3. Definition of done
> # Enlist node to RO transaction.
> # Prohibit adding operations to the read-only transaction after the 
> transaction is finished.
> # Send the finish transaction request to all enlisted nodes to close cursors.
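The three steps of the definition of done can be sketched as follows (all names are hypothetical, not the actual Ignite API): the transaction tracks enlisted nodes, rejects new operations once it is finished, and reports which nodes must be notified so their cursors get closed.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: hypothetical names, not the actual Ignite transaction API.
public class ReadOnlyTx {
    private final Set<String> enlisted = ConcurrentHashMap.newKeySet();
    private volatile boolean finished;

    // Step 1: enlist the node serving a scan into the RO transaction.
    public void enlist(String nodeId) {
        // Step 2: operations are prohibited after the transaction is finished.
        if (finished)
            throw new IllegalStateException("Transaction is already finished");
        enlisted.add(nodeId);
    }

    public void scan(String nodeId) {
        enlist(nodeId);
        // ... a cursor would be opened on the replica here ...
    }

    // Step 3: finishing returns the enlisted nodes; a real implementation
    // would send a finish request to each of them to close its cursors.
    public Set<String> finish() {
        finished = true;
        return Set.copyOf(enlisted);
    }
}
```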





[jira] [Updated] (IGNITE-21265) Implement cleaner worker

2024-01-15 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov updated IGNITE-21265:
---
Labels: ignite-3  (was: )

> Implement cleaner worker
> 
>
> Key: IGNITE-21265
> URL: https://issues.apache.org/jira/browse/IGNITE-21265
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Scherbakov
>Priority: Major
>  Labels: ignite-3
>
> We need a basic cleaner infrastructure for:
>  # Collecting garbage versions
>  # Collecting expired entries in tables
>  # Collecting expired entries in caches
> The cleaner is a worker thread accepting various types of fine-grained 
> tasks.
> Each task represents a small action: for example, collecting garbage for a 
> single version chain or collecting a single expired record.
> The number of workers can be configured by the user.





[jira] [Created] (IGNITE-21265) Implement cleaner worker

2024-01-15 Thread Alexey Scherbakov (Jira)
Alexey Scherbakov created IGNITE-21265:
--

 Summary: Implement cleaner worker
 Key: IGNITE-21265
 URL: https://issues.apache.org/jira/browse/IGNITE-21265
 Project: Ignite
  Issue Type: Task
Reporter: Alexey Scherbakov


We need a basic cleaner infrastructure for:
 # Collecting garbage versions
 # Collecting expired entries in tables
 # Collecting expired entries in caches

The cleaner is a worker thread accepting various types of fine-grained tasks.
Each task represents a small action: for example, collecting garbage for a single 
version chain or collecting a single expired record.
The number of workers can be configured by the user.
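The description above can be sketched as a pool of worker threads draining a shared queue of small cleanup tasks, with the worker count configurable. All names here are hypothetical, not the actual Ignite API.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: hypothetical names, not the actual Ignite cleaner API.
public class CleanerPool implements AutoCloseable {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();
    private final ExecutorService workers;

    // The number of workers is supplied by configuration.
    public CleanerPool(int workerCount) {
        workers = Executors.newFixedThreadPool(workerCount);
        for (int i = 0; i < workerCount; i++)
            workers.submit(this::drain);
    }

    // A fine-grained task: e.g. collect garbage for one version chain,
    // or collect one expired table/cache record.
    public void submit(Runnable task) {
        tasks.add(task);
    }

    private void drain() {
        try {
            while (!Thread.currentThread().isInterrupted())
                tasks.take().run();
        } catch (InterruptedException ignored) {
            Thread.currentThread().interrupt();
        }
    }

    @Override public void close() {
        workers.shutdownNow();
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger cleaned = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(100);
        try (CleanerPool pool = new CleanerPool(4)) {
            for (int i = 0; i < 100; i++)
                pool.submit(() -> { cleaned.incrementAndGet(); done.countDown(); });
            done.await();
        }
        System.out.println("cleaned " + cleaned.get());
    }
}
```

Keeping each task tiny, as the ticket suggests, lets the pool interleave garbage collection with expiry cleanup without any single task monopolizing a worker.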





[jira] [Updated] (IGNITE-21265) Implement cleaner worker

2024-01-15 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov updated IGNITE-21265:
---
Description: 
We need a basic cleaner infrastructure for:
 # Collecting garbage versions
 # Collecting expired entries in tables
 # Collecting expired entries in caches

The cleaner is a worker thread accepting various types of fine-grained tasks.
Each task represents a small action: for example, collecting garbage for a single 
version chain or collecting a single expired record.
The number of workers can be configured by the user.

  was:
We need a basic cleaner infrastructure for:
 # Collecting garbage version
 # Collecting expired entries on table
 # Collecting expired entries on cache

The cleaner is a worker thread accepting a various types of fine-grained tasks.
Each task represents a small actions: for example, collect garbage for a single 
version chain or to collect single expired record.
Number of workers can be configured by a user.


> Implement cleaner worker
> 
>
> Key: IGNITE-21265
> URL: https://issues.apache.org/jira/browse/IGNITE-21265
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Scherbakov
>Priority: Major
>  Labels: ignite-3
>
> We need a basic cleaner infrastructure for:
>  # Collecting garbage versions
>  # Collecting expired entries in tables
>  # Collecting expired entries in caches
> The cleaner is a worker thread accepting various types of fine-grained 
> tasks.
> Each task represents a small action: for example, collecting garbage for a 
> single version chain or collecting a single expired record.
> The number of workers can be configured by the user.





[jira] [Updated] (IGNITE-21265) Implement cleaner worker

2024-01-15 Thread Alexey Scherbakov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Scherbakov updated IGNITE-21265:
---
Description: 
We need a basic cleaner infrastructure for:
 # Collecting garbage versions
 # Collecting expired entries in tables
 # Collecting expired entries in caches

The cleaner is a worker thread accepting various types of fine-grained tasks.
Each task represents a small action: for example, collecting garbage for a single 
version chain or collecting a single expired record.
The number of workers can be configured by the user.

  was:
We need a basic cleaner infrastructure for:
 # Collecting garbage versions
 # Collecting expired entries on table
 # Collecting expired entries on cache

The cleaner is a worker thread accepting a various types of fine-grained tasks.
Each task represents a small actions: for example, collect garbage for a single 
version chain or to collect single expired record.
Number of workers can be configured by a user.


> Implement cleaner worker
> 
>
> Key: IGNITE-21265
> URL: https://issues.apache.org/jira/browse/IGNITE-21265
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexey Scherbakov
>Priority: Major
>  Labels: ignite-3
>
> We need a basic cleaner infrastructure for:
>  # Collecting garbage versions
>  # Collecting expired entries in tables
>  # Collecting expired entries in caches
> The cleaner is a worker thread accepting various types of fine-grained 
> tasks.
> Each task represents a small action: for example, collecting garbage for a 
> single version chain or collecting a single expired record.
> The number of workers can be configured by the user.





[jira] [Commented] (IGNITE-21064) Refactor authentication naming and enum in Thin Client for clarity

2024-01-15 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17806890#comment-17806890
 ] 

Pavel Tupitsyn commented on IGNITE-21064:
-

The initial idea was exactly as you say: server and client authenticators 
should match. I'm not sure what we are trying to solve here.

> Refactor authentication naming and enum in Thin Client for clarity
> --
>
> Key: IGNITE-21064
> URL: https://issues.apache.org/jira/browse/IGNITE-21064
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Ivan Gagarkin
>Assignee: Ivan Gagarkin
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, the Thin Client utilizes 
> {{org.apache.ignite.security.AuthenticationType}} to specify the 
> authentication method during the handshake process. This approach can be 
> confusing due to its interaction with the type of authentication defined in 
> the configuration. To resolve this, we propose creating a separate 
> enumeration specifically for the client. Additionally, the 'BASIC' 
> authentication type should be renamed to 'PASSWORD' for clearer understanding.





[jira] [Assigned] (IGNITE-15889) Add 'contains' method to Record API

2024-01-15 Thread Mikhail Efremov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Efremov reassigned IGNITE-15889:


Assignee: Mikhail Efremov

> Add 'contains' method to Record API
> ---
>
> Key: IGNITE-15889
> URL: https://issues.apache.org/jira/browse/IGNITE-15889
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Andrey Mashenkov
>Assignee: Mikhail Efremov
>Priority: Major
>  Labels: ignite-3, newbie
>
> There is no method in the Record API with the same semantics as the 'contains' 
> method in KV views.
> Add *RecordView.contains* similar to *KeyValueView.contains*.





[jira] [Created] (IGNITE-21266) [Java Thin Client] PA do not work after cluster restart

2024-01-15 Thread Mikhail Petrov (Jira)
Mikhail Petrov created IGNITE-21266:
---

 Summary: [Java Thin Client] PA do not work after cluster restart
 Key: IGNITE-21266
 URL: https://issues.apache.org/jira/browse/IGNITE-21266
 Project: Ignite
  Issue Type: Bug
Reporter: Mikhail Petrov


Reproducer: 


{code:java}
/** */
public class PartitionAwarenessClusterRestartTest extends ThinClientAbstractPartitionAwarenessTest {
    /** */
    @Test
    public void testGroupNodesAfterClusterRestart() throws Exception {
        prepareCluster();

        initClient(getClientConfiguration(0), 0, 1);

        checkPartitionAwareness();

        stopAllGrids();

        prepareCluster();

        checkPartitionAwareness();
    }

    /** */
    private void checkPartitionAwareness() throws Exception {
        ClientCache<Integer, Integer> cache = client.cache(DEFAULT_CACHE_NAME);

        cache.put(0, 0);

        opsQueue.clear();

        for (int i = 1; i < 1000; i++) {
            cache.put(i, i);

            assertOpOnChannel(nodeChannel(grid(0).affinity(DEFAULT_CACHE_NAME).mapKeyToNode(i).id()), ClientOperation.CACHE_PUT);
        }
    }

    /** */
    private void prepareCluster() throws Exception {
        startGrids(3);

        grid(0).createCache(new CacheConfiguration<>(DEFAULT_CACHE_NAME));
    }
}
{code}






[jira] [Updated] (IGNITE-21266) [Java Thin Client] Partition Awareness does not work after cluster restart

2024-01-15 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-21266:

Ignite Flags: Release Notes Required  (was: Docs Required,Release Notes 
Required)

> [Java Thin Client] Partition Awareness does not work after cluster restart
> --
>
> Key: IGNITE-21266
> URL: https://issues.apache.org/jira/browse/IGNITE-21266
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Priority: Major
>
> Reproducer: 
> {code:java}
> /** */
> public class PartitionAwarenessClusterRestartTest extends ThinClientAbstractPartitionAwarenessTest {
>     /** */
>     @Test
>     public void testGroupNodesAfterClusterRestart() throws Exception {
>         prepareCluster();
>         initClient(getClientConfiguration(0), 0, 1);
>         checkPartitionAwareness();
>         stopAllGrids();
>         prepareCluster();
>         checkPartitionAwareness();
>     }
>     /** */
>     private void checkPartitionAwareness() throws Exception {
>         ClientCache<Integer, Integer> cache = client.cache(DEFAULT_CACHE_NAME);
>         cache.put(0, 0);
>         opsQueue.clear();
>         for (int i = 1; i < 1000; i++) {
>             cache.put(i, i);
>             assertOpOnChannel(nodeChannel(grid(0).affinity(DEFAULT_CACHE_NAME).mapKeyToNode(i).id()), ClientOperation.CACHE_PUT);
>         }
>     }
>     /** */
>     private void prepareCluster() throws Exception {
>         startGrids(3);
>         grid(0).createCache(new CacheConfiguration<>(DEFAULT_CACHE_NAME));
>     }
> }
> {code}





[jira] [Updated] (IGNITE-21266) [Java Thin Client] Partition Awareness does not work after cluster restart

2024-01-15 Thread Mikhail Petrov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Petrov updated IGNITE-21266:

Summary: [Java Thin Client] Partition Awareness does not work after cluster 
restart  (was: [Java Thin Client] PA do not work after cluster restart)

> [Java Thin Client] Partition Awareness does not work after cluster restart
> --
>
> Key: IGNITE-21266
> URL: https://issues.apache.org/jira/browse/IGNITE-21266
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mikhail Petrov
>Priority: Major
>
> Reproducer: 
> {code:java}
> /** */
> public class PartitionAwarenessClusterRestartTest extends ThinClientAbstractPartitionAwarenessTest {
>     /** */
>     @Test
>     public void testGroupNodesAfterClusterRestart() throws Exception {
>         prepareCluster();
>         initClient(getClientConfiguration(0), 0, 1);
>         checkPartitionAwareness();
>         stopAllGrids();
>         prepareCluster();
>         checkPartitionAwareness();
>     }
>     /** */
>     private void checkPartitionAwareness() throws Exception {
>         ClientCache<Integer, Integer> cache = client.cache(DEFAULT_CACHE_NAME);
>         cache.put(0, 0);
>         opsQueue.clear();
>         for (int i = 1; i < 1000; i++) {
>             cache.put(i, i);
>             assertOpOnChannel(nodeChannel(grid(0).affinity(DEFAULT_CACHE_NAME).mapKeyToNode(i).id()), ClientOperation.CACHE_PUT);
>         }
>     }
>     /** */
>     private void prepareCluster() throws Exception {
>         startGrids(3);
>         grid(0).createCache(new CacheConfiguration<>(DEFAULT_CACHE_NAME));
>     }
> }
> {code}





[jira] [Resolved] (IGNITE-19686) Sql. Erroneous processing SUBSTRING with null literals in values

2024-01-15 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky resolved IGNITE-19686.
-
Release Note: resolved in calcite
  Resolution: Won't Do

> Sql. Erroneous processing SUBSTRING with null literals in values
> 
>
> Key: IGNITE-19686
> URL: https://issues.apache.org/jira/browse/IGNITE-19686
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Evgeny Stanilovsky
>Priority: Major
>  Labels: calcite3-required, ignite-3
>
> The following case does not work at the moment:
> {noformat}
> select SUBSTRING(s from i for l) from (values ('abc', null, 2)) as t (s, i, 
> l);
> {noformat}
> It will be fixed once [1] is implemented.
> [1] https://issues.apache.org/jira/browse/CALCITE-5708





[jira] [Commented] (IGNITE-21114) .NET: Thin 3.0: TestZeroRetryLimitDoesNotLimitRetryCount is flaky (timeout)

2024-01-15 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807070#comment-17807070
 ] 

Pavel Tupitsyn commented on IGNITE-21114:
-

* Can't reproduce locally (running repeatedly for 30 minutes)
* TC history shows 100% success rate 
https://ci.ignite.apache.org/test/5024113460172328851?currentProjectId=ApacheIgnite3xGradle_Test&expandTestHistoryChartSection=true
* Linked build does not exist anymore

Closing this ticket for now.

> .NET: Thin 3.0: TestZeroRetryLimitDoesNotLimitRetryCount is flaky (timeout)
> ---
>
> Key: IGNITE-21114
> URL: https://issues.apache.org/jira/browse/IGNITE-21114
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Failed once due to a timeout (exceeded 15 seconds):
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunNetTests/7701632?hideProblemsFromDependencies=false&hideTestsFromDependencies=false&expandBuildTestsSection=true





[jira] [Resolved] (IGNITE-21114) .NET: Thin 3.0: TestZeroRetryLimitDoesNotLimitRetryCount is flaky (timeout)

2024-01-15 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn resolved IGNITE-21114.
-
Resolution: Cannot Reproduce

> .NET: Thin 3.0: TestZeroRetryLimitDoesNotLimitRetryCount is flaky (timeout)
> ---
>
> Key: IGNITE-21114
> URL: https://issues.apache.org/jira/browse/IGNITE-21114
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET, ignite-3
> Fix For: 3.0.0-beta2
>
>
> Failed once due to a timeout (exceeded 15 seconds):
> https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunNetTests/7701632?hideProblemsFromDependencies=false&hideTestsFromDependencies=false&expandBuildTestsSection=true





[jira] [Updated] (IGNITE-21232) Support sending by node ID in MessagingService

2024-01-15 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21232:
---
Summary: Support sending by node ID in MessagingService  (was: Fail sending 
a message if resolved sender's destination node ID differs from target node ID)

> Support sending by node ID in MessagingService
> --
>
> Key: IGNITE-21232
> URL: https://issues.apache.org/jira/browse/IGNITE-21232
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> # Methods of MessagingService that accept a ClusterNode for the recipient must 
> fail the send/invocation attempt (with the same exception that is used when the 
> target node has left) if the destination node ID resolved by the sender differs 
> from the node ID specified in the ClusterNode for the recipient.
>  # Methods that accept a consistentId should behave in the same way: they 
> should just resolve the ClusterNode by the provided consistentId and then 
> proceed as in item 1.





[jira] [Updated] (IGNITE-21232) Support sending by node ID in MessagingService

2024-01-15 Thread Roman Puchkovskiy (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Puchkovskiy updated IGNITE-21232:
---
Description: 
# In addition to methods that use consistentId to identify the recipient, 
methods that use [ephemeral] node ID for this purpose should be added
 # Methods that accept ClusterNode should be removed
 # A method to resolve a ClusterNode by node ID should be added to 
TopologyService
 # Methods that accept consistentId must fail the send/invocation attempt (with 
same exception which is used when the target node has gone) if the resolved 
sender's destination node ID is different from the node ID specified in the 
ClusterNode obtained from the topology at the moment of the call.

  was:
# Methods of MessagingService that accept ClusterNode for recipient must fail 
the send/invocation attempt (with same exception which is used when the target 
node has gone) if the resolved sender's destination node ID is different from 
the node ID specified in the ClusterNode for recipient.
 # Methods that accept consistentId should behave in the same way: they should 
just resolve the ClusterNode by the provided consistentId and then proceed as 
in item 1


> Support sending by node ID in MessagingService
> --
>
> Key: IGNITE-21232
> URL: https://issues.apache.org/jira/browse/IGNITE-21232
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Puchkovskiy
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> # In addition to the methods that use a consistentId to identify the recipient, 
> methods that use the [ephemeral] node ID for this purpose should be added.
>  # Methods that accept a ClusterNode should be removed.
>  # A method to resolve a ClusterNode by node ID should be added to 
> TopologyService.
>  # Methods that accept a consistentId must fail the send/invocation attempt 
> (with the same exception that is used when the target node has left) if the 
> destination node ID resolved by the sender differs from the node ID specified 
> in the ClusterNode obtained from the topology at the moment of the call.
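The resolve-then-fail behaviour described above can be sketched in a few lines (hypothetical names, not the actual Ignite API): a send by node ID looks the node up in the current topology and fails with a "node has left"-style exception when no node with that ID is present.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: hypothetical names, not the actual Ignite networking API.
public class MessagingSketch {
    record ClusterNode(String id, String consistentId) {}

    // Stand-in for the exception used when the target node has left.
    static class RecipientLeftException extends RuntimeException {
        RecipientLeftException(String msg) { super(msg); }
    }

    // consistentId -> node currently in the physical topology
    final Map<String, ClusterNode> topology = new ConcurrentHashMap<>();

    // The proposed TopologyService addition: resolve by [ephemeral] node ID.
    ClusterNode getByNodeId(String nodeId) {
        return topology.values().stream()
                .filter(n -> n.id().equals(nodeId))
                .findFirst()
                .orElse(null);
    }

    // Send by node ID: fail as if the node had gone when it is not resolvable.
    void send(String nodeId, String message) {
        if (getByNodeId(nodeId) == null)
            throw new RecipientLeftException("Node " + nodeId + " has left the topology");
        // ... actual delivery would happen here ...
    }

    public static void main(String[] args) {
        MessagingSketch m = new MessagingSketch();
        m.topology.put("n1", new ClusterNode("id-1", "n1"));
        m.send("id-1", "ping"); // resolvable, so delivery proceeds
        System.out.println("sent to " + m.getByNodeId("id-1").consistentId());
    }
}
```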





[jira] [Created] (IGNITE-21267) Thin 3.0: Async ClientHandlerModule startup

2024-01-15 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-21267:
---

 Summary: Thin 3.0: Async ClientHandlerModule startup
 Key: IGNITE-21267
 URL: https://issues.apache.org/jira/browse/IGNITE-21267
 Project: Ignite
  Issue Type: Bug
  Components: thin client
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2








[jira] [Updated] (IGNITE-21267) Thin 3.0: Async ClientHandlerModule startup

2024-01-15 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21267:

Description: IGNITE-20477 introduces async component startup. Update 
*ClientHandlerModule.start* accordingly.

> Thin 3.0: Async ClientHandlerModule startup
> ---
>
> Key: IGNITE-21267
> URL: https://issues.apache.org/jira/browse/IGNITE-21267
> Project: Ignite
>  Issue Type: Bug
>  Components: thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> IGNITE-20477 introduces async component startup. Update 
> *ClientHandlerModule.start* accordingly.





[jira] [Updated] (IGNITE-21234) Acquired checkpoint read lock waits for schedules checkpoint write unlock sometimes

2024-01-15 Thread Ivan Bessonov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov updated IGNITE-21234:
---
Reviewer: Kirill Tkalenko

> Acquired checkpoint read lock waits for schedules checkpoint write unlock 
> sometimes
> ---
>
> Key: IGNITE-21234
> URL: https://issues.apache.org/jira/browse/IGNITE-21234
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In a situation where we have "too many dirty pages", we trigger a checkpoint 
> and wait until it starts. This can take seconds, because we have to flush 
> free-lists before acquiring the checkpoint write lock. This can cause severe 
> dips in performance for no good reason.
> I suggest introducing two modes for triggering checkpoints when we have too 
> many dirty pages: a soft threshold and a hard threshold.
>  * soft - trigger a checkpoint, but don't wait for its start; just continue 
> all operations as usual. Make it equal to the current threshold: 75% of any 
> existing memory segment must be dirty.
>  * hard - trigger a checkpoint and wait until it starts (the way it behaves 
> right now). Make it higher than the current threshold: 90% of any existing 
> memory segment must be dirty.
> Maybe we should use different values for the thresholds; that should be 
> discussed during the review.
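The proposed two-threshold policy can be sketched as a simple decision function (the 75%/90% values are the ticket's suggestions, and all names here are hypothetical, not the actual Ignite API):

```java
// Sketch only: hypothetical names and suggested threshold values.
public class DirtyPageThresholds {
    static final double SOFT = 0.75; // trigger a checkpoint, don't wait
    static final double HARD = 0.90; // trigger a checkpoint and wait for start

    enum Action { NONE, TRIGGER_ASYNC, TRIGGER_AND_WAIT }

    // Decide what a write operation should do for a given segment fill ratio.
    static Action onWrite(long dirtyPages, long totalPages) {
        double ratio = (double) dirtyPages / totalPages;
        if (ratio >= HARD)
            return Action.TRIGGER_AND_WAIT; // current behaviour, now a last resort
        if (ratio >= SOFT)
            return Action.TRIGGER_ASYNC;    // start flushing early, no write dip
        return Action.NONE;
    }

    public static void main(String[] args) {
        System.out.println(onWrite(70, 100)); // below both thresholds
        System.out.println(onWrite(80, 100)); // above soft only
        System.out.println(onWrite(95, 100)); // above hard
    }
}
```

The key point is that writers only ever block in the narrow band above the hard threshold; most of the time the soft trigger gets the checkpoint started early enough that the hard one is never reached.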





[jira] [Created] (IGNITE-21268) .NET: Thin 3.0: Expose metric names as public constants

2024-01-15 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-21268:
---

 Summary: .NET: Thin 3.0: Expose metric names as public constants
 Key: IGNITE-21268
 URL: https://issues.apache.org/jira/browse/IGNITE-21268
 Project: Ignite
  Issue Type: Improvement
  Components: platforms, thin client
Affects Versions: 3.0.0-beta1
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 3.0.0-beta2








[jira] [Updated] (IGNITE-21268) .NET: Thin 3.0: Expose metric names as public constants

2024-01-15 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-21268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-21268:

Description: 
Currently, the only way for the user to get a full list of available metrics is 
to look at the source code of internal 
[Metrics|https://github.com/apache/ignite-3/blob/81fe252ee7847da92ff770035af00cbb930764c3/modules/platforms/dotnet/Apache.Ignite/Internal/Metrics.cs]
 class.

Let's expose a public class with a set of constants:
* Meter name (Apache.Ignite)
* Metric names (connections-active, etc)

> .NET: Thin 3.0: Expose metric names as public constants
> ---
>
> Key: IGNITE-21268
> URL: https://issues.apache.org/jira/browse/IGNITE-21268
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms, thin client
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> Currently, the only way for the user to get a full list of available metrics 
> is to look at the source code of internal 
> [Metrics|https://github.com/apache/ignite-3/blob/81fe252ee7847da92ff770035af00cbb930764c3/modules/platforms/dotnet/Apache.Ignite/Internal/Metrics.cs]
>  class.
> Let's expose a public class with a set of constants:
> * Meter name (Apache.Ignite)
> * Metric names (connections-active, etc)





[jira] [Commented] (IGNITE-21061) Durable cleanup requires additional replication group command

2024-01-15 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-21061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17807085#comment-17807085
 ] 

Vladislav Pyatkov commented on IGNITE-21061:


Merged 1955452453aebbcb38175b36bec390257c5222a6

> Durable cleanup requires additional replication group command
> -
>
> Key: IGNITE-21061
> URL: https://issues.apache.org/jira/browse/IGNITE-21061
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Vladislav Pyatkov
>Assignee:  Kirill Sizov
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> h3. Motivation
> After locks are released, the information must be written to the persistent 
> transaction storage and replicated to all nodes of the commit partition 
> replication group. That is performed by the replication command 
> ({{MarkLocksReleasedCommand}}). As a result, we have an additional 
> replication command for the entire transaction process.
> h3. Implementation notes
> In my opinion, we can resolve this situation in the transaction resolution 
> procedure ({{OrphanDetector}}). Nothing additional needs to be done: the locks 
> are either released by a durable finish from the transaction coordinator or 
> released during recovery by {{OrphanDetector}}.


