[jira] [Commented] (IGNITE-15067) Add custom destination path to the snapshot API

2022-05-31 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544521#comment-17544521
 ] 

Ignite TC Bot commented on IGNITE-15067:


{panel:title=Branch: [pull/10052/head] Base: [master] : Possible Blockers 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}AOP{color} [[tests 0 Exit Code 
|https://ci2.ignite.apache.org/viewLog.html?buildId=6462513]]

{panel}
{panel:title=Branch: [pull/10052/head] Base: [master] : New Tests 
(7)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Control Utility{color} [[tests 
2|https://ci2.ignite.apache.org/viewLog.html?buildId=6462434]]
* {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerTest.testSnapshotCreateCheckAndRestoreCustomDir - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerWithSSLTest.testSnapshotCreateCheckAndRestoreCustomDir - 
PASSED{color}

{color:#8b}Control Utility (Zookeeper){color} [[tests 
1|https://ci2.ignite.apache.org/viewLog.html?buildId=6462435]]
* {color:#013220}ZookeeperIgniteControlUtilityTestSuite: 
GridCommandHandlerTest.testSnapshotCreateCheckAndRestoreCustomDir - 
PASSED{color}

{color:#8b}Snapshots{color} [[tests 
4|https://ci2.ignite.apache.org/viewLog.html?buildId=6462485]]
* {color:#013220}IgniteSnapshotTestSuite: 
IgniteClusterSnapshotRestoreSelfTest.testClusterSnapshotRestoreFromCustomDir[Encryption=true]
 - PASSED{color}
* {color:#013220}IgniteSnapshotTestSuite: 
IgniteClusterSnapshotRestoreSelfTest.testClusterSnapshotRestoreFromCustomDir[Encryption=false]
 - PASSED{color}
* {color:#013220}IgniteSnapshotTestSuite: 
IgniteClusterSnapshotHandlerTest.testHandlerSnapshotLocation[Encryption=true] - 
PASSED{color}
* {color:#013220}IgniteSnapshotTestSuite: 
IgniteClusterSnapshotHandlerTest.testHandlerSnapshotLocation[Encryption=false] 
- PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=6462626buildTypeId=IgniteTests24Java8_RunAll]

> Add custom destination path to the snapshot API
> 
>
> Key: IGNITE-15067
> URL: https://issues.apache.org/jira/browse/IGNITE-15067
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maxim Muzafarov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: iep-43, ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The default destination path is obtained from the IgniteConfiguration. However, 
> in some circumstances it is useful to set this destination path at runtime. 
> This path must be configured relative to the node working directory and 
> must be accessible from a security point of view.
> Proposed API:
> {code}
> public IgniteFuture createSnapshot(String name, String locPath);
> {code}
> control.sh *create* snapshot command syntax
> {noformat}
> control.(sh|bat) --snapshot create snapshot_name [--dest path] [--sync]
> Parameters:
>   snapshot_name  - Snapshot name.
>   path   - Path to the directory where the snapshot will be saved. If 
> not specified, the default snapshot directory will be used.
>   sync   - Run the operation synchronously, the command will wait for 
> the entire operation to complete. Otherwise, it will be performed in the 
> background, and the command will immediately return control.
> {noformat}
> control.sh *check* snapshot command syntax
> {noformat}
> control.(sh|bat) --snapshot check snapshot_name [--src path]
> Parameters:
>   snapshot_name  - Snapshot name.
>   path   - Path to the directory where the snapshot files are 
> located. If not specified, the default snapshot directory will be used.
> {noformat}
> control.sh *restore* snapshot command syntax
> {noformat}
> control.(sh|bat) --snapshot restore snapshot_name --start [--groups 
> group1,...groupN] [--src path] [--sync]
> Parameters:
>   snapshot_name - Snapshot name.
>   group1,...groupN  - Cache group names.
>   path  - Path to the directory where the snapshot files are 
> located. If not specified, the default snapshot directory will be used.
>   sync  - Run the operation synchronously, the command will wait 
> for the entire operation to complete. Otherwise, it will be performed in the 
> background, and the command will immediately return control.
> {noformat}





[jira] [Commented] (IGNITE-14341) Reduce contention in the PendingEntriesTree when cleaning up expired entries.

2022-05-31 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544514#comment-17544514
 ] 

Ignite TC Bot commented on IGNITE-14341:


{panel:title=Branch: [pull/9992/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/9992/head] Base: [master] : New Tests 
(105)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}PDS 5{color} [[tests 
42|https://ci.ignite.apache.org/viewLog.html?buildId=6599122]]
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testPutRemove_1_20_mm2_1 - PASSED{color}
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testPutRemove_1_20_mp2_1 - PASSED{color}
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testMassiveRemove2_true_range - PASSED{color}
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testMassiveRemove3_true_range - PASSED{color}
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testIterateConcurrentPutRemoveRange_10 - 
PASSED{color}
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testPutRemove_2_40_pp2_1 - PASSED{color}
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testIterateConcurrentPutRemove_10 - PASSED{color}
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testBasicBatchRemove - PASSED{color}
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testMassiveRemove1_true_range - PASSED{color}
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testPutRemove_2_40_mp2_1 - PASSED{color}
* {color:#013220}IgnitePdsTestSuite5: 
BPlusTreePageMemoryImplTest.testPutRemove_2_40_pm2_1 - PASSED{color}
... and 31 new tests

{color:#8b}Basic 4{color} [[tests 
63|https://ci.ignite.apache.org/viewLog.html?buildId=6599116]]
* {color:#013220}IgniteBasicTestSuite2: 
BPlusTreeSelfTest.testMassiveRemove2_true_range - PASSED{color}
* {color:#013220}IgniteBasicTestSuite2: 
BPlusTreeSelfTest.testMassiveRemove3_true_range - PASSED{color}
* {color:#013220}IgniteBasicTestSuite2: 
BPlusTreeSelfTest.testIterateConcurrentPutRemoveRange_10 - PASSED{color}
* {color:#013220}IgniteBasicTestSuite2: 
BPlusTreeSelfTest.testPutRemove_2_40_pp2_1 - PASSED{color}
* {color:#013220}IgniteBasicTestSuite2: 
BPlusTreeSelfTest.testIterateConcurrentPutRemove_10 - PASSED{color}
* {color:#013220}IgniteBasicTestSuite2: BPlusTreeSelfTest.testBasicBatchRemove 
- PASSED{color}
* {color:#013220}IgniteBasicTestSuite2: 
BPlusTreeSelfTest.testMassiveRemove1_true_range - PASSED{color}
* {color:#013220}IgniteBasicTestSuite2: 
BPlusTreeSelfTest.testPutRemove_2_40_mp2_1 - PASSED{color}
* {color:#013220}IgniteBasicTestSuite2: 
BPlusTreeSelfTest.testPutRemove_2_40_pm2_1 - PASSED{color}
* {color:#013220}IgniteBasicTestSuite2: 
BPlusTreeSelfTest.testPutRemove_2_40_mm2_1 - PASSED{color}
* {color:#013220}IgniteBasicTestSuite2: 
BPlusTreeFakeReuseSelfTest.testPutRemove_2_40_mm2_1 - PASSED{color}
... and 52 new tests

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6599135buildTypeId=IgniteTests24Java8_RunAll]

> Reduce contention in the PendingEntriesTree when cleaning up expired entries.
> -
>
> Key: IGNITE-14341
> URL: https://issues.apache.org/jira/browse/IGNITE-14341
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ise
> Fix For: 2.14
>
> Attachments: JmhCacheExpireBenchmark.java, bench3.png, 
> bench_diagram.png, expire1.png, expire2.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, there is a significant performance drop when expired entries are 
> concurrently evicted by threads that perform cache operations (see the 
> attached reproducer):
> {noformat}
> Benchmark  Mode  Cnt Score Error   
> Units
> JmhCacheExpireBenchmark.putWithExpire thrpt3   100,132 ±  21,025  
> ops/ms
> JmhCacheExpireBenchmark.putWithoutExpire  thrpt3  2133,122 ± 559,694  
> ops/ms{noformat}
> Root cause: the pending entries tree (an offheap BPlusTree) is used to track expired 
> entries. After each cache operation (and by the timeout thread) there is an 
> attempt to evict some expired entries. These entries are looked up from 
> the start of the pending entries tree, so there is contention on the first 
> leaf page of that tree.
> All threads wait for the same page lock:
> {noformat}
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> 

[jira] (IGNITE-15834) Read Repair should support arrays and collections as values

2022-05-31 Thread Anton Vinogradov (Jira)


[ https://issues.apache.org/jira/browse/IGNITE-15834 ]


Anton Vinogradov deleted comment on IGNITE-15834:
---

was (Author: ignitetcbot):
{panel:title=Branch: [pull/10047/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10047/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6592739buildTypeId=IgniteTests24Java8_RunAll]

> Read Repair should support arrays and collections as values
> ---
>
> Key: IGNITE-15834
> URL: https://issues.apache.org/jira/browse/IGNITE-15834
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-31
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Commented] (IGNITE-15834) Read Repair should support arrays and collections as values

2022-05-31 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544505#comment-17544505
 ] 

Ignite TC Bot commented on IGNITE-15834:


{panel:title=Branch: [pull/10047/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10047/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6596898buildTypeId=IgniteTests24Java8_RunAll]

> Read Repair should support arrays and collections as values
> ---
>
> Key: IGNITE-15834
> URL: https://issues.apache.org/jira/browse/IGNITE-15834
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-31
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[jira] [Commented] (IGNITE-17005) Implement metrics for a snapshot create operation

2022-05-31 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544495#comment-17544495
 ] 

Ignite TC Bot commented on IGNITE-17005:


{panel:title=Branch: [pull/10036/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10036/head] Base: [master] : New Tests 
(6)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Snapshots With Indexes{color} [[tests 
6|https://ci.ignite.apache.org/viewLog.html?buildId=6599550]]
* {color:#013220}IgniteSnapshotWithIndexingTestSuite: 
IgniteClusterSnapshotMetricsTest.testRestoreSnapshotProgress[Encryption=true] - 
PASSED{color}
* {color:#013220}IgniteSnapshotWithIndexingTestSuite: 
IgniteClusterSnapshotMetricsTest.testRestoreSnapshotError[Encryption=true] - 
PASSED{color}
* {color:#013220}IgniteSnapshotWithIndexingTestSuite: 
IgniteClusterSnapshotMetricsTest.testCreateSnapshotProgress[Encryption=true] - 
PASSED{color}
* {color:#013220}IgniteSnapshotWithIndexingTestSuite: 
IgniteClusterSnapshotMetricsTest.testRestoreSnapshotProgress[Encryption=false] 
- PASSED{color}
* {color:#013220}IgniteSnapshotWithIndexingTestSuite: 
IgniteClusterSnapshotMetricsTest.testRestoreSnapshotError[Encryption=false] - 
PASSED{color}
* {color:#013220}IgniteSnapshotWithIndexingTestSuite: 
IgniteClusterSnapshotMetricsTest.testCreateSnapshotProgress[Encryption=false] - 
PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6599573buildTypeId=IgniteTests24Java8_RunAll]

> Implement metrics for a snapshot create operation
> -
>
> Key: IGNITE-17005
> URL: https://issues.apache.org/jira/browse/IGNITE-17005
> Project: Ignite
>  Issue Type: Task
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: ise
> Fix For: 2.14
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Snapshot create operation can take a long time, we must be able to track its 
> progress using metrics.





[jira] [Created] (IGNITE-17067) Response for the init command should be sent only after a CMG leader was elected

2022-05-31 Thread Aleksandr Polovtcev (Jira)
Aleksandr Polovtcev created IGNITE-17067:


 Summary: Response for the init command should be sent only after a 
CMG leader was elected
 Key: IGNITE-17067
 URL: https://issues.apache.org/jira/browse/IGNITE-17067
 Project: Ignite
  Issue Type: Bug
Reporter: Aleksandr Polovtcev
Assignee: Aleksandr Polovtcev


When a user sends the init command to a cluster, it should return a response 
only after the CMG leader has been elected (it currently returns a response after 
the CMG Raft group has been started). This will improve error reporting (since 
errors can happen after the CMG Raft group has been started) and decrease the chance 
of sending concurrent init commands.
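
For illustration, a rough sketch of the intended ordering; all class and method names below are hypothetical stand-ins, not the actual ignite-3 API:
{code:java}
import java.util.concurrent.CompletableFuture;

/** Illustrative sketch only: the types and methods below are hypothetical stand-ins. */
class InitResponseSketch {
    /** Stand-in for the CMG Raft group service. */
    static class CmgRaftService {
        /** In the real flow this future would complete once a CMG leader has been elected. */
        CompletableFuture<Void> leaderElected() {
            return CompletableFuture.completedFuture(null);
        }
    }

    static CompletableFuture<CmgRaftService> startCmgRaftGroup() {
        return CompletableFuture.completedFuture(new CmgRaftService());
    }

    /** Send the init response only after leader election has completed, not right after group start. */
    static CompletableFuture<String> handleInit() {
        return startCmgRaftGroup()
                .thenCompose(CmgRaftService::leaderElected)
                .thenApply(v -> "init accepted: CMG leader elected");
    }
}
{code}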





[jira] [Updated] (IGNITE-16801) Implement error handling for rebalance onReconfigurationError callback

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-16801:
-
Description: 
We have the {{onReconfigurationError}} listener for handling errors during the 
rebalance, but no implementation yet.

At the moment, it looks like we can receive only one kind of error: 
{{RaftError.ECATCHUP}}.

UPD: 

Retry logic for changePeersAsync has been implemented. 
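
A rough sketch of that retry-on-ECATCHUP pattern; the service interface and names below are simplified stand-ins, not the actual jraft API:
{code:java}
import java.util.List;
import java.util.concurrent.CompletableFuture;

/** Illustrative sketch only: GroupService is a stand-in for the real Raft group client. */
class ChangePeersRetrySketch {
    interface GroupService {
        /** Completes exceptionally if the leader is still catching up (ECATCHUP). */
        CompletableFuture<Void> changePeersAsync(List<String> peers, long term);
    }

    /** Retry changePeersAsync while ECATCHUP is reported; a real implementation would also delay and bound retries. */
    static CompletableFuture<Void> changePeersWithRetry(GroupService svc, List<String> peers, long term) {
        return svc.changePeersAsync(peers, term)
                .handle((res, err) -> err == null
                        ? CompletableFuture.<Void>completedFuture(null)
                        : isCatchUp(err)
                                ? changePeersWithRetry(svc, peers, term)
                                : CompletableFuture.<Void>failedFuture(err))
                .thenCompose(f -> f);
    }

    private static boolean isCatchUp(Throwable err) {
        return String.valueOf(err.getMessage()).contains("ECATCHUP");
    }
}
{code}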

  was:
We have the {{onReconfigurationError}} listener for handling errors during the 
rebalance, but no implementation yet.

At the moment, it looks like we can receive only one kind of error: 
{{RaftError.ECATCHUP}}.


> Implement error handling for rebalance onReconfigurationError callback
> --
>
> Key: IGNITE-16801
> URL: https://issues.apache.org/jira/browse/IGNITE-16801
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> We have the {{onReconfigurationError}} listener for handling errors during 
> the rebalance, but no implementation yet.
> At the moment, it looks like we can receive only one kind of error: 
> {{RaftError.ECATCHUP}}.
> UPD: 
> Retry logic for changePeersAsync has been implemented. 





[jira] [Updated] (IGNITE-16801) Implement error handling for rebalance onReconfigurationError callback

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-16801:
-
Description: 
We have the {{onReconfigurationError}} listener for handling errors during the 
rebalance, but no implementation yet.

At the moment, it looks like we can receive only one kind of error: 
{{RaftError.ECATCHUP}}.

  was:
We have the \{{onReconfigurationError}} listener for handling errors during the 
rebalance, but no implementation yet.

At the moment, it looks like we can receive only one kind of error: 
\{{RaftError.ECATCHUP}}.


> Implement error handling for rebalance onReconfigurationError callback
> --
>
> Key: IGNITE-16801
> URL: https://issues.apache.org/jira/browse/IGNITE-16801
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> We have the {{onReconfigurationError}} listener for handling errors during 
> the rebalance, but no implementation yet.
> At the moment, it looks like we can receive only one kind of error: 
> {{RaftError.ECATCHUP}}.





[jira] [Updated] (IGNITE-16801) Implement error handling for rebalance onReconfigurationError callback

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-16801:
-
Description: 
We have the \{{onReconfigurationError}} listener for handling errors during the 
rebalance, but no implementation yet.

At the moment, it looks like we can receive only one kind of error: 
\{{RaftError.ECATCHUP}}.

  was:
We have the `onReconfigurationError` listener for handling errors during the 
rebalance, but no implementation yet.

At the moment, it looks like we can receive only one kind of error: 
`RaftError.ECATCHUP`.


> Implement error handling for rebalance onReconfigurationError callback
> --
>
> Key: IGNITE-16801
> URL: https://issues.apache.org/jira/browse/IGNITE-16801
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> We have the \{{onReconfigurationError}} listener for handling errors during 
> the rebalance, but no implementation yet.
> At the moment, it looks like we can receive only one kind of error: 
> \{{RaftError.ECATCHUP}}.





[jira] [Updated] (IGNITE-16801) Implement error handling for rebalance onReconfigurationError callbac

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-16801:
-
Summary: Implement error handling for rebalance onReconfigurationError 
callbac  (was: Implement error handling for rebalance )

> Implement error handling for rebalance onReconfigurationError callbac
> -
>
> Key: IGNITE-16801
> URL: https://issues.apache.org/jira/browse/IGNITE-16801
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> We have the `onReconfigurationError` listener for handling errors during the 
> rebalance, but no implementation yet.
> At the moment, it looks like we can receive only one kind of error: 
> `RaftError.ECATCHUP`.





[jira] [Updated] (IGNITE-16801) Implement error handling for rebalance onReconfigurationError callback

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-16801:
-
Summary: Implement error handling for rebalance onReconfigurationError 
callback  (was: Implement error handling for rebalance onReconfigurationError 
callbac)

> Implement error handling for rebalance onReconfigurationError callback
> --
>
> Key: IGNITE-16801
> URL: https://issues.apache.org/jira/browse/IGNITE-16801
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> We have the `onReconfigurationError` listener for handling errors during the 
> rebalance, but no implementation yet.
> At the moment, it looks like we can receive only one kind of error: 
> `RaftError.ECATCHUP`.





[jira] [Commented] (IGNITE-16801) Implement error handling for rebalance

2022-05-31 Thread Alexander Lapin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544471#comment-17544471
 ] 

Alexander Lapin commented on IGNITE-16801:
--

[~maliev] LGTM

> Implement error handling for rebalance 
> ---
>
> Key: IGNITE-16801
> URL: https://issues.apache.org/jira/browse/IGNITE-16801
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Gusakov
>Assignee: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> We have the `onReconfigurationError` listener for handling errors during the 
> rebalance, but no implementation yet.
> At the moment, it looks like we can receive only one kind of error: 
> `RaftError.ECATCHUP`.





[jira] [Assigned] (IGNITE-17062) Add ability to process local events synchronously

2022-05-31 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov reassigned IGNITE-17062:
-

Assignee: Denis Chudov

> Add ability to process local events synchronously
> -
>
> Key: IGNITE-17062
> URL: https://issues.apache.org/jira/browse/IGNITE-17062
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexander Lapin
>Assignee: Denis Chudov
>Priority: Major
>  Labels: ignite-3
>
> The local events (org.apache.ignite.internal.manager.Event) logic should be 
> reworked a bit in order to wait for all subscribers to process the corresponding 
> event before publishing the action. Let's look at an example of the expected 
> behavior:
>  # TableManager prepares a table for creation.
>  # TableManager notifies all subscribers with a Table.CREATE event propagating the 
> to-be-created table.
>  # TableManager (with the help of 
> org.apache.ignite.internal.manager.Producer) waits for all subscribers to 
> acknowledge that the Table.CREATE event was successfully processed. E.g., 
> SqlQueryProcessor prepares the Calcite schemas and sends an ack by completing 
> the event's Future, as is done for ConfigurationRegistry events.
>  # TableManager, on a compound future (allOf over the events), publishes the 
> corresponding action, e.g. makes the table visible to everyone.
> The proposed solution is very similar to what we already have in 
> ConfigurationRegistry:
> {code:java}
> public interface ConfigurationListener {
> /**
>  * Called on property value update.
>  *
>  * @param ctx Notification context.
>  * @return Future that signifies the end of the listener execution.
>  */
> CompletableFuture onUpdate(ConfigurationNotificationEvent ctx);
> }{code}





[jira] [Updated] (IGNITE-15067) Add custom destination path to the snapshot API

2022-05-31 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-15067:
--
Description: 
The default destination path is obtained from the IgniteConfiguration. However, 
in some circumstances it is useful to set this destination path at runtime. This 
path must be configured relative to the node working directory and must be 
accessible from a security point of view.

Proposed API:
{code}
public IgniteFuture createSnapshot(String name, String locPath);
{code}
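
For illustration, a minimal usage sketch, assuming the proposed two-argument method lands on the existing IgniteSnapshot facade; the configuration file and target directory below are just placeholders:
{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class SnapshotToCustomDirExample {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start("config/example-ignite.xml")) {
            // Proposed overload: write the snapshot into the given directory
            // instead of the snapshot path taken from IgniteConfiguration.
            ignite.snapshot().createSnapshot("snapshot_custom_dir", "/mnt/backups/ignite").get();
        }
    }
}
{code}
The {{--dest}} and {{--src}} options described below expose the same path at the command-line level.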

control.sh *create* snapshot command syntax
{noformat}
control.(sh|bat) --snapshot create snapshot_name [--dest path] [--sync]

Parameters:
  snapshot_name  - Snapshot name.
  path   - Path to the directory where the snapshot will be saved. If 
not specified, the default snapshot directory will be used.
  sync   - Run the operation synchronously, the command will wait for 
the entire operation to complete. Otherwise, it will be performed in the 
background, and the command will immediately return control.
{noformat}

control.sh *check* snapshot command syntax
{noformat}
control.(sh|bat) --snapshot check snapshot_name [--src path]

Parameters:
  snapshot_name  - Snapshot name.
  path   - Path to the directory where the snapshot files are located. 
If not specified, the default snapshot directory will be used.
{noformat}

control.sh *restore* snapshot command syntax
{noformat}
control.(sh|bat) --snapshot restore snapshot_name --start [--groups 
group1,...groupN] [--src path] [--sync]

Parameters:
  snapshot_name - Snapshot name.
  group1,...groupN  - Cache group names.
  path  - Path to the directory where the snapshot files are 
located. If not specified, the default snapshot directory will be used.
  sync  - Run the operation synchronously, the command will wait 
for the entire operation to complete. Otherwise, it will be performed in the 
background, and the command will immediately return control.
{noformat}

  was:
The default destination path is obtained from the IgniteConfiguration. However, 
in some circumstances it is useful to set this destination path at runtime. This 
path must be configured relative to the node working directory and must be 
accessible from a security point of view.

Proposed API:
{code}
public IgniteFuture createSnapshot(String name, String locPath);
{code}

control.sh *create* snapshot command syntax
{noformat}
control.(sh|bat) --snapshot create snapshot_name [--dest path] [--sync]

Parameters:
  snapshot_name  - Snapshot name.
  path   - Path to the directory where the snapshot will be saved. 
If not specified, the default snapshot directory will be used.
  sync   - Run the operation synchronously, the command will wait 
for the entire operation to complete. Otherwise, it will be performed in the 
background, and the command will immediately return control.
{noformat}

control.sh *check* snapshot command syntax
{noformat}
control.(sh|bat) --snapshot check snapshot_name [--src path]

Parameters:
  snapshot_name  - Snapshot name.
  path   - Path to the directory where the snapshot files are 
located. If not specified, the default snapshot directory will be used.
{noformat}

control.sh *restore* snapshot command syntax
{noformat}
control.(sh|bat) --snapshot restore snapshot_name --start [--groups 
group1,...groupN] [--src path] [--sync]

Parameters:
  snapshot_name - Snapshot name.
  group1,...groupN  - Cache group names.
  path  - Path to the directory where the snapshot files are 
located. If not specified, the default snapshot directory will be used.
  sync  - Run the operation synchronously, the command will 
wait for the entire operation to complete. Otherwise, it will be performed in 
the background, and the command will immediately return control.
{noformat}


> Add custom destination path to the snapshot API
> 
>
> Key: IGNITE-15067
> URL: https://issues.apache.org/jira/browse/IGNITE-15067
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maxim Muzafarov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: iep-43, ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The default destination path is obtained from the IgniteConfiguration. However, 
> in some circumstances it is useful to set this destination path at runtime. 
> This path must be configured relative to the node working directory and 
> must be accessible from a security point of view.
> Proposed API:
> {code}
> public IgniteFuture createSnapshot(String name, String locPath);
> {code}
> control.sh *create* snapshot command syntax
> {noformat}
> control.(sh|bat) --snapshot create 

[jira] [Updated] (IGNITE-15067) Add custom destination path to the snapshot API

2022-05-31 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-15067:
--
Description: 
The default destination path is obtained from the IgniteConfiguration. However, 
in some circumstances it is useful to set this destination path at runtime. This 
path must be configured relative to the node working directory and must be 
accessible from a security point of view.

Proposed API:
{code}
public IgniteFuture createSnapshot(String name, String locPath);
{code}

control.sh *create* snapshot command syntax
{noformat}
control.(sh|bat) --snapshot create snapshot_name [--dest path] [--sync]

Parameters:
  snapshot_name  - Snapshot name.
  path   - Path to the directory where the snapshot will be saved. 
If not specified, the default snapshot directory will be used.
  sync   - Run the operation synchronously, the command will wait 
for the entire operation to complete. Otherwise, it will be performed in the 
background, and the command will immediately return control.
{noformat}

control.sh *check* snapshot command syntax
{noformat}
control.(sh|bat) --snapshot check snapshot_name [--src path]

Parameters:
  snapshot_name  - Snapshot name.
  path   - Path to the directory where the snapshot files are 
located. If not specified, the default snapshot directory will be used.
{noformat}

control.sh *restore* snapshot command syntax
{noformat}
control.(sh|bat) --snapshot restore snapshot_name --start [--groups 
group1,...groupN] [--src path] [--sync]

Parameters:
  snapshot_name - Snapshot name.
  group1,...groupN  - Cache group names.
  path  - Path to the directory where the snapshot files are 
located. If not specified, the default snapshot directory will be used.
  sync  - Run the operation synchronously, the command will 
wait for the entire operation to complete. Otherwise, it will be performed in 
the background, and the command will immediately return control.
{noformat}

  was:
The default destination path is obtained from the IgniteConfiguration. However, 
in some circumstances it is useful to set this destination path at runtime. This 
path must be configured relative to the node working directory and must be 
accessible from a security point of view.

Proposed API:
{code}
public IgniteFuture createSnapshot(String name, Path locPath);
{code}


> Add custom destination path to the snapshot API
> 
>
> Key: IGNITE-15067
> URL: https://issues.apache.org/jira/browse/IGNITE-15067
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maxim Muzafarov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: iep-43, ise
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The default destination path is obtained from the IgniteConfiguration. However, 
> in some circumstances it is useful to set this destination path at runtime. 
> This path must be configured relative to the node working directory and 
> must be accessible from a security point of view.
> Proposed API:
> {code}
> public IgniteFuture createSnapshot(String name, String locPath);
> {code}
> control.sh *create* snapshot command syntax
> {noformat}
> control.(sh|bat) --snapshot create snapshot_name [--dest path] [--sync]
> Parameters:
>   snapshot_name  - Snapshot name.
>   path   - Path to the directory where the snapshot will be 
> saved. If not specified, the default snapshot directory will be used.
>   sync   - Run the operation synchronously, the command will wait 
> for the entire operation to complete. Otherwise, it will be performed in the 
> background, and the command will immediately return control.
> {noformat}
> control.sh *check* snapshot command syntax
> {noformat}
> control.(sh|bat) --snapshot check snapshot_name [--src path]
> Parameters:
>   snapshot_name  - Snapshot name.
>   path   - Path to the directory where the snapshot files are 
> located. If not specified, the default snapshot directory will be used.
> {noformat}
> control.sh *restore* snapshot command syntax
> {noformat}
> control.(sh|bat) --snapshot restore snapshot_name --start [--groups 
> group1,...groupN] [--src path] [--sync]
> Parameters:
>   snapshot_name - Snapshot name.
>   group1,...groupN  - Cache group names.
>   path  - Path to the directory where the snapshot files are 
> located. If not specified, the default snapshot directory will be used.
>   sync  - Run the operation synchronously, the command will 
> wait for the entire operation to complete. Otherwise, it will be performed in 
> the background, and the command will immediately return 

[jira] [Updated] (IGNITE-17044) [Native Persistence 3.0] End-to-end test for in-memory PageMemory

2022-05-31 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-17044:
-
Reviewer: Aleksandr Polovtcev

[~apolovtcev] Please do a code review.

> [Native Persistence 3.0] End-to-end test for in-memory PageMemory
> -
>
> Key: IGNITE-17044
> URL: https://issues.apache.org/jira/browse/IGNITE-17044
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In-memory PageMemory storage was ported and fully integrated into ignite-3, 
> though only unit tests were written covering this piece of functionality.
> We need to write an end-to-end integration test for PageMemory-based 
> in-memory storage. The test should include:
> * New storage creation with necessary configuration;
> * Simple store/retrieve operations showing that storage actually performs its 
> tasks.





[jira] [Commented] (IGNITE-16980) PageMemoryPartitionStorage#write() can leak page slots

2022-05-31 Thread Sergey Chugunov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544375#comment-17544375
 ] 

Sergey Chugunov commented on IGNITE-16980:
--

[~rpuch], it seems this ticket is about the non-MV version of Storage, and this 
code will eventually be replaced with the MV-enabled version. So I don't see any 
value in fixing it. Could you please close the ticket?

> PageMemoryPartitionStorage#write() can leak page slots
> --
>
> Key: IGNITE-16980
> URL: https://issues.apache.org/jira/browse/IGNITE-16980
> Project: Ignite
>  Issue Type: Bug
>  Components: persistence
>Reporter: Roman Puchkovskiy
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>
> {code:java}
> public void write(DataRow row) throws StorageException {
>     try {
>         TableDataRow dataRow = wrap(row);
>         freeList.insertDataRow(dataRow);
>         tree.put(dataRow);
>     } catch (IgniteInternalCheckedException e) {
>         throw new StorageException("Error writing row", e);
>     }
> }
> {code}
> This code always occupies a slot in a data page, even if the key has already 
> been put to the partition. So, if two puts with the same key occur, one page 
> slot is wasted.





[jira] [Updated] (IGNITE-14341) Reduce contention in the PendingEntriesTree when cleaning up expired entries.

2022-05-31 Thread Pavel Pereslegin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Pereslegin updated IGNITE-14341:
--
Fix Version/s: 2.14

> Reduce contention in the PendingEntriesTree when cleaning up expired entries.
> -
>
> Key: IGNITE-14341
> URL: https://issues.apache.org/jira/browse/IGNITE-14341
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Aleksey Plekhanov
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: ise
> Fix For: 2.14
>
> Attachments: JmhCacheExpireBenchmark.java, bench3.png, 
> bench_diagram.png, expire1.png, expire2.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently, there is a significant performance drop when expired entries are 
> concurrently evicted by threads that perform cache operations (see the 
> attached reproducer):
> {noformat}
> Benchmark  Mode  Cnt Score Error   
> Units
> JmhCacheExpireBenchmark.putWithExpire thrpt3   100,132 ±  21,025  
> ops/ms
> JmhCacheExpireBenchmark.putWithoutExpire  thrpt3  2133,122 ± 559,694  
> ops/ms{noformat}
> Root cause: the pending entries tree (an offheap BPlusTree) is used to track expired 
> entries. After each cache operation (and by the timeout thread) there is an 
> attempt to evict some expired entries. These entries are looked up from 
> the start of the pending entries tree, so there is contention on the first 
> leaf page of that tree.
> All threads wait for the same page lock:
> {noformat}
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.waitAcquireWriteLock(OffheapReadWriteLock.java:503)
>   at 
> org.apache.ignite.internal.util.OffheapReadWriteLock.writeLock(OffheapReadWriteLock.java:244)
>   at 
> org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.writeLock(PageMemoryNoStoreImpl.java:528)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writeLock(PageHandler.java:422)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:350)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:325)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$13200(BPlusTree.java:100)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.doRemoveFromLeaf(BPlusTree.java:4588)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.removeFromLeaf(BPlusTree.java:4567)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.tryRemoveFromLeaf(BPlusTree.java:5196)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Remove.access$6800(BPlusTree.java:4209)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2189)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2165)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removeDown(BPlusTree.java:2165)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doRemove(BPlusTree.java:2076)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.removex(BPlusTree.java:1905)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expireInternal(IgniteCacheOffheapManagerImpl.java:1426)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.expire(IgniteCacheOffheapManagerImpl.java:1375)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:246)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:882){noformat}
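
For context, a minimal configuration that exercises this code path (a cache with per-entry expiration, so every put also drives cleanup of the pending entries tree). This is only an illustrative sketch, not the attached benchmark:
{code:java}
import java.util.concurrent.TimeUnit;

import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryContentionExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Entries expire 1 second after creation, so puts continuously add to and
            // evict from the PendingEntriesTree, hitting its first leaf page.
            CacheConfiguration<Integer, Integer> ccfg = new CacheConfiguration<Integer, Integer>("withExpire")
                .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 1)));

            IgniteCache<Integer, Integer> cache = ignite.getOrCreateCache(ccfg);

            // Running this loop from multiple threads reproduces the contention described above.
            for (int i = 0; i < 100_000; i++)
                cache.put(i, i);
        }
    }
}
{code}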





[jira] [Created] (IGNITE-17066) [Native Persistence 3.0] Implement a listener for deleting/updating data region's configuration

2022-05-31 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-17066:


 Summary: [Native Persistence 3.0] Implement a listener for 
deleting/updating data region's configuration
 Key: IGNITE-17066
 URL: https://issues.apache.org/jira/browse/IGNITE-17066
 Project: Ignite
  Issue Type: Task
Reporter: Kirill Tkalenko


Currently there is a listener for adding a data region configuration, but no 
listener for deleting or updating a data region configuration.





[jira] [Created] (IGNITE-17065) CorruptedTreeException: B+Tree is corrupted(Duplicate row in index)

2022-05-31 Thread YuJue Li (Jira)
YuJue Li created IGNITE-17065:
-

 Summary: CorruptedTreeException: B+Tree is corrupted(Duplicate row 
in index)
 Key: IGNITE-17065
 URL: https://issues.apache.org/jira/browse/IGNITE-17065
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.10
Reporter: YuJue Li
 Attachments: 2021-12-21_13-59-27.574.log

In a pure in-memory cluster, the B+Tree becomes corrupted during node restart.

See the attached log for details.





[jira] [Updated] (IGNITE-17064) ItRaftGroupServiceTest#testTransferLeadership is flaky

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17064:
-
Ignite Flags:   (was: Docs Required,Release Notes Required)

> ItRaftGroupServiceTest#testTransferLeadership is flaky
> --
>
> Key: IGNITE-17064
> URL: https://issues.apache.org/jira/browse/IGNITE-17064
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Blocker
>  Labels: ignite-3
>
> ItRaftGroupServiceTest#testTransferLeadership is flaky with 
> {noformat}
> java.util.concurrent.ExecutionException: class 
> org.apache.ignite.raft.jraft.rpc.impl.RaftException: EBUSY:Changing the 
> configuration
> {noformat}
> There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
> when {{EBUSY:Changing the configuration}} is thrown:
>  
> {noformat}
> if (this.confCtx.isBusy()) {
> // It's very messy to deal with the case when the |peer| 
> received
> // TimeoutNowRequest and increase the term while somehow 
> another leader
> // which was not replicated with the newest configuration has 
> been
> // elected. If no add_peer with this very |peer| is to be 
> invoked ever
> // after nor this peer is to be killed, this peer will spin 
> in the voting
> // procedure and make the each new leader stepped down when 
> the peer
> // reached vote timeout and it starts to vote (because it 
> will increase
> // the term of the group)
> // To make things simple, refuse the operation and force 
> users to
> // invoke transfer_leadership_to after configuration changing 
> is
> // completed so that the peer's configuration is up-to-date 
> when it
> // receives the TimeOutNowRequest.
> LOG.warn(
> "Node {} refused to transfer leadership to peer {} when 
> the leader is changing the configuration.",
> getNodeId(), peer);
> return new Status(RaftError.EBUSY, "Changing the 
> configuration");
> }
> {noformat}
> The current limitation must be investigated.
> It seems the easiest way to fix the test is to rewrite it and retry the 
> transfer leadership invocation.
>  





[jira] [Updated] (IGNITE-17064) ItRaftGroupServiceTest#testTransferLeadership is flaky

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17064:
-
Description: 
ItRaftGroupServiceTest#testTransferLeadership is flaky with 

{noformat}
java.util.concurrent.ExecutionException: class 
org.apache.ignite.raft.jraft.rpc.impl.RaftException: EBUSY:Changing the 
configuration
{noformat}


There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when {{EBUSY:Changing the configuration}} is thrown:

 
{noformat}
if (this.confCtx.isBusy()) {
    // It's very messy to deal with the case when the |peer| received
    // TimeoutNowRequest and increase the term while somehow another leader
    // which was not replicated with the newest configuration has been
    // elected. If no add_peer with this very |peer| is to be invoked ever
    // after nor this peer is to be killed, this peer will spin in the voting
    // procedure and make the each new leader stepped down when the peer
    // reached vote timeout and it starts to vote (because it will increase
    // the term of the group)
    // To make things simple, refuse the operation and force users to
    // invoke transfer_leadership_to after configuration changing is
    // completed so that the peer's configuration is up-to-date when it
    // receives the TimeOutNowRequest.
    LOG.warn(
        "Node {} refused to transfer leadership to peer {} when the leader is changing the configuration.",
        getNodeId(), peer);
    return new Status(RaftError.EBUSY, "Changing the configuration");
}
{noformat}

The current limitation must be investigated.
It seems the easiest way to fix the test is to rewrite it and retry the 
transfer leadership invocation.



 

  was:
ItRaftGroupServiceTest#testTransferLeadership is flaky with 

{noformat}
java.util.concurrent.ExecutionException: class 
org.apache.ignite.raft.jraft.rpc.impl.RaftException: EBUSY:Changing the 
configuration
{noformat}


There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when EBUSY:Changing the configuration is thrown:

 
{noformat}
if (this.confCtx.isBusy()) {
// It's very messy to deal with the case when the |peer| 
received
// TimeoutNowRequest and increase the term while somehow 
another leader
// which was not replicated with the newest configuration has 
been
// elected. If no add_peer with this very |peer| is to be 
invoked ever
// after nor this peer is to be killed, this peer will spin in 
the voting
// procedure and make the each new leader stepped down when the 
peer
// reached vote timeout and it starts to vote (because it will 
increase
// the term of the group)
// To make things simple, refuse the operation and force users 
to
// invoke transfer_leadership_to after configuration changing is
// completed so that the peer's configuration is up-to-date 
when it
// receives the TimeOutNowRequest.
LOG.warn(
"Node {} refused to transfer leadership to peer {} when the 
leader is changing the configuration.",
getNodeId(), peer);
return new Status(RaftError.EBUSY, "Changing the 
configuration");
}
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix test is to rewrite it and repeat transfer 
leadership invocation.



 


> ItRaftGroupServiceTest#testTransferLeadership is flaky
> --
>
> Key: IGNITE-17064
> URL: https://issues.apache.org/jira/browse/IGNITE-17064
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Blocker
>  Labels: ignite-3
>
> ItRaftGroupServiceTest#testTransferLeadership is flaky with 
> {noformat}
> java.util.concurrent.ExecutionException: class 
> org.apache.ignite.raft.jraft.rpc.impl.RaftException: EBUSY:Changing the 
> configuration
> {noformat}
> There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
> when {{EBUSY:Changing the configuration}} is thrown:
>  
> {noformat}
> if (this.confCtx.isBusy()) {
> // It's very messy to deal with the case when the |peer| 
> received
> // TimeoutNowRequest and increase the term while somehow 
> another leader
> // which was not replicated with the newest configuration has 
> been
> // elected. If no add_peer with this very |peer| is to be 
> invoked ever

[jira] [Updated] (IGNITE-17062) Add ability to process local events synchronously

2022-05-31 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-17062:
-
Description: 
The local events (org.apache.ignite.internal.manager.Event) logic should be 
reworked a bit in order to wait for all subscribers to process the corresponding event 
before publishing the action. Let's look at an example of the expected behavior:
 # TableManager prepares a table for creation.
 # TableManager notifies all subscribers with a Table.CREATE event propagating the 
to-be-created table.
 # TableManager (with the help of org.apache.ignite.internal.manager.Producer) 
waits for all subscribers to acknowledge that the Table.CREATE event was successfully 
processed. E.g., SqlQueryProcessor prepares the Calcite schemas and sends an ack by 
completing the event's Future, as is done for ConfigurationRegistry events.
 # TableManager, on a compound future (allOf over the events), publishes the 
corresponding action, e.g. makes the table visible to everyone.

The proposed solution is very similar to what we already have in 
ConfigurationRegistry:
{code:java}
public interface ConfigurationListener {
/**
 * Called on property value update.
 *
 * @param ctx Notification context.
 * @return Future that signifies the end of the listener execution.
 */
CompletableFuture onUpdate(ConfigurationNotificationEvent ctx);
}{code}
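
A minimal sketch of the corresponding producer side, waiting on all listener futures with CompletableFuture.allOf before publishing; the class below is illustrative only, not the actual Producer implementation:
{code:java}
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Function;

/** Illustrative sketch: a producer that publishes an event only after all subscribers acknowledge it. */
class SyncEventProducerSketch<E> {
    private final List<Function<E, CompletableFuture<?>>> listeners = new CopyOnWriteArrayList<>();

    void listen(Function<E, CompletableFuture<?>> listener) {
        listeners.add(listener);
    }

    /** Completes only after every subscriber has completed its future for the event. */
    CompletableFuture<Void> fire(E event) {
        CompletableFuture<?>[] acks = listeners.stream()
                .map(l -> l.apply(event))
                .toArray(CompletableFuture<?>[]::new);

        // E.g. TableManager would make the table visible only after this future completes.
        return CompletableFuture.allOf(acks);
    }
}
{code}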

  was:
Local events (org.apache.ignite.internal.manager.Event) logic should be 
reworked a bit in order to await all signers to process corresponding event 
before publishing the action. Let's check an example of an expected behavior:
 # TableManager prepares table for creation.
 # TableManager notifies all signers with Table.CREATE event propagating given 
to-be-created table.
 # TableManager (with the help of org.apache.ignite.internal.manager.Producer) 
awaits all signers to acknowledge that Table.CREATE event was successfully 
processed. E.g.  SqlQueryProcessor prepares calcite schemes on mentioned above 
Table.CREATE and when ready sends the ack by completing events Future like it's 
done within ConfigurationRegistry events.
 # TableManager on Compound-Like-Future.allOff(events) publishes corresponding 
action, e.g. makes table visible to everyone.


> Add ability to process local events synchronously
> -
>
> Key: IGNITE-17062
> URL: https://issues.apache.org/jira/browse/IGNITE-17062
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>
> The local events (org.apache.ignite.internal.manager.Event) logic should be 
> reworked a bit in order to wait for all subscribers to process the corresponding 
> event before publishing the action. Let's look at an example of the expected 
> behavior:
>  # TableManager prepares a table for creation.
>  # TableManager notifies all subscribers with a Table.CREATE event propagating the 
> to-be-created table.
>  # TableManager (with the help of 
> org.apache.ignite.internal.manager.Producer) waits for all subscribers to 
> acknowledge that the Table.CREATE event was successfully processed. E.g., 
> SqlQueryProcessor prepares the Calcite schemas and sends an ack by completing 
> the event's Future, as is done for ConfigurationRegistry events.
>  # TableManager, on a compound future (allOf over the events), publishes the 
> corresponding action, e.g. makes the table visible to everyone.
> The proposed solution is very similar to what we already have in 
> ConfigurationRegistry:
> {code:java}
> public interface ConfigurationListener {
> /**
>  * Called on property value update.
>  *
>  * @param ctx Notification context.
>  * @return Future that signifies the end of the listener execution.
>  */
> CompletableFuture onUpdate(ConfigurationNotificationEvent ctx);
> }{code}





[jira] [Updated] (IGNITE-17064) ItRaftGroupServiceTest#testTransferLeadership is flaky

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17064:
-
Description: 
ItRaftGroupServiceTest#testTransferLeadership is flaky with 
{{java.util.concurrent.ExecutionException: class 
org.apache.ignite.raft.jraft.rpc.impl.RaftException: EBUSY:Changing the 
configuration}}

There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when EBUSY:Changing the configuration is thrown:

 
{noformat}
if (this.confCtx.isBusy()) {
    // It's very messy to deal with the case when the |peer| received
    // TimeoutNowRequest and increase the term while somehow another leader
    // which was not replicated with the newest configuration has been
    // elected. If no add_peer with this very |peer| is to be invoked ever
    // after nor this peer is to be killed, this peer will spin in the voting
    // procedure and make the each new leader stepped down when the peer
    // reached vote timeout and it starts to vote (because it will increase
    // the term of the group)
    // To make things simple, refuse the operation and force users to
    // invoke transfer_leadership_to after configuration changing is
    // completed so that the peer's configuration is up-to-date when it
    // receives the TimeOutNowRequest.
    LOG.warn(
        "Node {} refused to transfer leadership to peer {} when the leader is changing the configuration.",
        getNodeId(), peer);
    return new Status(RaftError.EBUSY, "Changing the configuration");
}
{noformat}

The current limitation must be investigated.
It seems the easiest way to fix the test is to rewrite it and retry the 
transfer leadership invocation.



 

  was:
ItRaftGroupServiceTest#testTransferLeadership is flaky.

There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when EBUSY:Changing the configuration is thrown:

 
{noformat}
if (this.confCtx.isBusy()) {
// It's very messy to deal with the case when the |peer| 
received
// TimeoutNowRequest and increase the term while somehow 
another leader
// which was not replicated with the newest configuration has 
been
// elected. If no add_peer with this very |peer| is to be 
invoked ever
// after nor this peer is to be killed, this peer will spin in 
the voting
// procedure and make the each new leader stepped down when the 
peer
// reached vote timeout and it starts to vote (because it will 
increase
// the term of the group)
// To make things simple, refuse the operation and force users 
to
// invoke transfer_leadership_to after configuration changing is
// completed so that the peer's configuration is up-to-date 
when it
// receives the TimeOutNowRequest.
LOG.warn(
"Node {} refused to transfer leadership to peer {} when the 
leader is changing the configuration.",
getNodeId(), peer);
return new Status(RaftError.EBUSY, "Changing the 
configuration");
}
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix test is to rewrite it and repeat transfer 
leadership invocation.



 


> ItRaftGroupServiceTest#testTransferLeadership is flaky
> --
>
> Key: IGNITE-17064
> URL: https://issues.apache.org/jira/browse/IGNITE-17064
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Blocker
>  Labels: ignite-3
>
> ItRaftGroupServiceTest#testTransferLeadership is flaky with 
> {{java.util.concurrent.ExecutionException: class 
> org.apache.ignite.raft.jraft.rpc.impl.RaftException: EBUSY:Changing the 
> configuration}}
> There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
> when EBUSY:Changing the configuration is thrown:
>  
> {noformat}
> if (this.confCtx.isBusy()) {
> // It's very messy to deal with the case when the |peer| 
> received
> // TimeoutNowRequest and increase the term while somehow 
> another leader
> // which was not replicated with the newest configuration has 
> been
> // elected. If no add_peer with this very |peer| is to be 
> invoked ever
> // after nor this peer is to be killed, this peer will spin 
> in the voting
> // procedure and make the each new leader stepped down when 
> the peer
> // reached 

[jira] [Updated] (IGNITE-17064) ItRaftGroupServiceTest#testTransferLeadership is flaky

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17064:
-
Description: 
ItRaftGroupServiceTest#testTransferLeadership is flaky with 

{noformat}
java.util.concurrent.ExecutionException: class 
org.apache.ignite.raft.jraft.rpc.impl.RaftException: EBUSY:Changing the 
configuration
{noformat}


There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when EBUSY:Changing the configuration is thrown:

 
{noformat}
if (this.confCtx.isBusy()) {
    // It's very messy to deal with the case when the |peer| received
    // TimeoutNowRequest and increase the term while somehow another leader
    // which was not replicated with the newest configuration has been
    // elected. If no add_peer with this very |peer| is to be invoked ever
    // after nor this peer is to be killed, this peer will spin in the voting
    // procedure and make the each new leader stepped down when the peer
    // reached vote timeout and it starts to vote (because it will increase
    // the term of the group)
    // To make things simple, refuse the operation and force users to
    // invoke transfer_leadership_to after configuration changing is
    // completed so that the peer's configuration is up-to-date when it
    // receives the TimeOutNowRequest.
    LOG.warn(
        "Node {} refused to transfer leadership to peer {} when the leader is changing the configuration.",
        getNodeId(), peer);
    return new Status(RaftError.EBUSY, "Changing the configuration");
}
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix test is to rewrite it and repeat transfer 
leadership invocation.



 

  was:
ItRaftGroupServiceTest#testTransferLeadership is flaky with 
{{java.util.concurrent.ExecutionException: class 
org.apache.ignite.raft.jraft.rpc.impl.RaftException: EBUSY:Changing the 
configuration}}

There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when EBUSY:Changing the configuration is thrown:

 
{noformat}
if (this.confCtx.isBusy()) {
    // It's very messy to deal with the case when the |peer| received
    // TimeoutNowRequest and increase the term while somehow another leader
    // which was not replicated with the newest configuration has been
    // elected. If no add_peer with this very |peer| is to be invoked ever
    // after nor this peer is to be killed, this peer will spin in the voting
    // procedure and make the each new leader stepped down when the peer
    // reached vote timeout and it starts to vote (because it will increase
    // the term of the group)
    // To make things simple, refuse the operation and force users to
    // invoke transfer_leadership_to after configuration changing is
    // completed so that the peer's configuration is up-to-date when it
    // receives the TimeOutNowRequest.
    LOG.warn(
        "Node {} refused to transfer leadership to peer {} when the leader is changing the configuration.",
        getNodeId(), peer);
    return new Status(RaftError.EBUSY, "Changing the configuration");
}
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix test is to rewrite it and repeat transfer 
leadership invocation.



 


> ItRaftGroupServiceTest#testTransferLeadership is flaky
> --
>
> Key: IGNITE-17064
> URL: https://issues.apache.org/jira/browse/IGNITE-17064
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Blocker
>  Labels: ignite-3
>
> ItRaftGroupServiceTest#testTransferLeadership is flaky with 
> {noformat}
> java.util.concurrent.ExecutionException: class 
> org.apache.ignite.raft.jraft.rpc.impl.RaftException: EBUSY:Changing the 
> configuration
> {noformat}
> There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
> when EBUSY:Changing the configuration is thrown:
>  
> {noformat}
> if (this.confCtx.isBusy()) {
> // It's very messy to deal with the case when the |peer| 
> received
> // TimeoutNowRequest and increase the term while somehow 
> another leader
> // which was not replicated with the newest configuration has 
> been
> // elected. If no add_peer with this very |peer| is to be 
> invoked ever
> // after 

[jira] [Updated] (IGNITE-17064) ItRaftGroupServiceTest#testTransferLeadership is flaky

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17064:
-
Description: 
ItRaftGroupServiceTest#testTransferLeadership is flaky.

There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when EBUSY:Changing the configuration is thrown:

 
{noformat}
if (this.confCtx.isBusy()) {
    // It's very messy to deal with the case when the |peer| received
    // TimeoutNowRequest and increase the term while somehow another leader
    // which was not replicated with the newest configuration has been
    // elected. If no add_peer with this very |peer| is to be invoked ever
    // after nor this peer is to be killed, this peer will spin in the voting
    // procedure and make the each new leader stepped down when the peer
    // reached vote timeout and it starts to vote (because it will increase
    // the term of the group)
    // To make things simple, refuse the operation and force users to
    // invoke transfer_leadership_to after configuration changing is
    // completed so that the peer's configuration is up-to-date when it
    // receives the TimeOutNowRequest.
    LOG.warn(
        "Node {} refused to transfer leadership to peer {} when the leader is changing the configuration.",
        getNodeId(), peer);
    return new Status(RaftError.EBUSY, "Changing the configuration");
}
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix the test is to rewrite it and repeat the transfer 
leadership invocation.


 

  was:
ItRaftGroupServiceTest#testTransferLeadership is flaky.

There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when EBUSY:Changing the configuration is thrown:

 
{noformat}
// It's very messy to deal with the case when the |peer| received
// TimeoutNowRequest and increase the term while somehow another leader
// which was not replicated with the newest configuration has been
// elected. If no add_peer with this very |peer| is to be invoked ever
// after nor this peer is to be killed, this peer will spin in the voting
// procedure and make the each new leader stepped down when the peer
// reached vote timeout and it starts to vote (because it will increase
// the term of the group)
// To make things simple, refuse the operation and force users to
// invoke transfer_leadership_to after configuration changing is
// completed so that the peer's configuration is up-to-date when it
// receives the TimeOutNowRequest.
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix the test is to rewrite it and repeat the transfer 
leadership invocation.


 


> ItRaftGroupServiceTest#testTransferLeadership is flaky
> --
>
> Key: IGNITE-17064
> URL: https://issues.apache.org/jira/browse/IGNITE-17064
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Blocker
>  Labels: ignite-3
>
> ItRaftGroupServiceTest#testTransferLeadership is flaky.
> There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
> when EBUSY:Changing the configuration is thrown:
>  
> {noformat}
> if (this.confCtx.isBusy()) {
> // It's very messy to deal with the case when the |peer| 
> received
> // TimeoutNowRequest and increase the term while somehow 
> another leader
> // which was not replicated with the newest configuration has 
> been
> // elected. If no add_peer with this very |peer| is to be 
> invoked ever
> // after nor this peer is to be killed, this peer will spin 
> in the voting
> // procedure and make the each new leader stepped down when 
> the peer
> // reached vote timeout and it starts to vote (because it 
> will increase
> // the term of the group)
> // To make things simple, refuse the operation and force 
> users to
> // invoke transfer_leadership_to after configuration changing 
> is
> // completed so that the peer's configuration is up-to-date 
> when it
> // receives the TimeOutNowRequest.
> LOG.warn(
> "Node {} refused to transfer leadership to peer {} when 
> the leader is changing the configuration.",
> getNodeId(), peer);

[jira] [Updated] (IGNITE-17064) ItRaftGroupServiceTest#testTransferLeadership is flaky

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17064:
-
Description: 
ItRaftGroupServiceTest#testTransferLeadership is flaky.

There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when EBUSY:Changing the configuration is thrown:

 
{noformat}
if (this.confCtx.isBusy()) {
    // It's very messy to deal with the case when the |peer| received
    // TimeoutNowRequest and increase the term while somehow another leader
    // which was not replicated with the newest configuration has been
    // elected. If no add_peer with this very |peer| is to be invoked ever
    // after nor this peer is to be killed, this peer will spin in the voting
    // procedure and make the each new leader stepped down when the peer
    // reached vote timeout and it starts to vote (because it will increase
    // the term of the group)
    // To make things simple, refuse the operation and force users to
    // invoke transfer_leadership_to after configuration changing is
    // completed so that the peer's configuration is up-to-date when it
    // receives the TimeOutNowRequest.
    LOG.warn(
        "Node {} refused to transfer leadership to peer {} when the leader is changing the configuration.",
        getNodeId(), peer);
    return new Status(RaftError.EBUSY, "Changing the configuration");
}
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix test is to rewrite it and repeat transfer 
leadership invocation.



 

  was:
ItRaftGroupServiceTest#testTransferLeadership is flaky.

There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when EBUSY:Changing the configuration is thrown:

 
{noformat}
if (this.confCtx.isBusy()) {
    // It's very messy to deal with the case when the |peer| received
    // TimeoutNowRequest and increase the term while somehow another leader
    // which was not replicated with the newest configuration has been
    // elected. If no add_peer with this very |peer| is to be invoked ever
    // after nor this peer is to be killed, this peer will spin in the voting
    // procedure and make the each new leader stepped down when the peer
    // reached vote timeout and it starts to vote (because it will increase
    // the term of the group)
    // To make things simple, refuse the operation and force users to
    // invoke transfer_leadership_to after configuration changing is
    // completed so that the peer's configuration is up-to-date when it
    // receives the TimeOutNowRequest.
    LOG.warn(
        "Node {} refused to transfer leadership to peer {} when the leader is changing the configuration.",
        getNodeId(), peer);
    return new Status(RaftError.EBUSY, "Changing the configuration");
}
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix the test is to rewrite it and repeat the transfer 
leadership invocation.


 


> ItRaftGroupServiceTest#testTransferLeadership is flaky
> --
>
> Key: IGNITE-17064
> URL: https://issues.apache.org/jira/browse/IGNITE-17064
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Blocker
>  Labels: ignite-3
>
> ItRaftGroupServiceTest#testTransferLeadership is flaky.
> There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
> when EBUSY:Changing the configuration is thrown:
>  
> {noformat}
> if (this.confCtx.isBusy()) {
> // It's very messy to deal with the case when the |peer| 
> received
> // TimeoutNowRequest and increase the term while somehow 
> another leader
> // which was not replicated with the newest configuration has 
> been
> // elected. If no add_peer with this very |peer| is to be 
> invoked ever
> // after nor this peer is to be killed, this peer will spin 
> in the voting
> // procedure and make the each new leader stepped down when 
> the peer
> // reached vote timeout and it starts to vote (because it 
> will increase
> // the term of the group)
> // To make things simple, refuse the operation and force 
> users to
> // invoke transfer_leadership_to after configuration changing 
> is
>

[jira] [Updated] (IGNITE-14972) Thin 3.0: Implement SQL API for java thin client

2022-05-31 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-14972:

Fix Version/s: 3.0.0-alpha5

> Thin 3.0: Implement SQL API for java thin client
> 
>
> Key: IGNITE-14972
> URL: https://issues.apache.org/jira/browse/IGNITE-14972
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql, thin client
>Reporter: Igor Sapego
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We need to implement basic SQL API for java thin client in 3.0. Maybe this 
> task should involve creating an IEP and discussion on the dev list.
> Also, keep in mind that protocol messages themselves should re-use as much as 
> possible JDBC protocol.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-14972) Thin 3.0: Implement SQL API for java thin client

2022-05-31 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-14972:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Thin 3.0: Implement SQL API for java thin client
> 
>
> Key: IGNITE-14972
> URL: https://issues.apache.org/jira/browse/IGNITE-14972
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql, thin client
>Reporter: Igor Sapego
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha5
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We need to implement basic SQL API for java thin client in 3.0. Maybe this 
> task should involve creating an IEP and discussion on the dev list.
> Also, keep in mind that protocol messages themselves should re-use as much as 
> possible JDBC protocol.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-14972) Thin 3.0: Implement SQL API for java thin client

2022-05-31 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544325#comment-17544325
 ] 

Pavel Tupitsyn commented on IGNITE-14972:
-

Merged to main: 
https://github.com/apache/ignite-3/commit/95025fe9046ca278b65c0fc48478a0bf9bc8bdfb

> Thin 3.0: Implement SQL API for java thin client
> 
>
> Key: IGNITE-14972
> URL: https://issues.apache.org/jira/browse/IGNITE-14972
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql, thin client
>Reporter: Igor Sapego
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We need to implement basic SQL API for java thin client in 3.0. Maybe this 
> task should involve creating an IEP and discussion on the dev list.
> Also, keep in mind that protocol messages themselves should re-use as much as 
> possible JDBC protocol.
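
Since the merged API itself is not quoted in this thread, the following is a purely hypothetical illustration of the call shape such a client-side SQL API could expose; none of the class or method names below are taken from the actual change.

{noformat}
// Purely hypothetical shape, not the merged API.
import java.util.List;

interface SqlClient {
    /** Executes a parameterized query and returns the rows eagerly. */
    List<Object[]> execute(String query, Object... args);
}

class SqlClientExample {
    static void printPeople(SqlClient sql) {
        for (Object[] row : sql.execute("SELECT id, name FROM person WHERE id > ?", 10))
            System.out.println(row[0] + " " + row[1]);
    }
}
{noformat}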



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (IGNITE-16478) RocksDB returns unexpected cursor.hasNext equals false after leader is changed

2022-05-31 Thread Denis Chudov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Chudov resolved IGNITE-16478.
---
Resolution: Cannot Reproduce

> RocksDB returns unexpected cursor.hasNext equals false after leader is changed
> --
>
> Key: IGNITE-16478
> URL: https://issues.apache.org/jira/browse/IGNITE-16478
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Assignee: Denis Chudov
>Priority: Blocker
>  Labels: ignite-3
>
> After investigation of https://issues.apache.org/jira/browse/IGNITE-16406 we 
> found a bug in {{PartitionListener#handleScanRetrieveBatchCommand}} when the 
> leader is changed. In some cases, {{cursorDesc.cursor().hasNext()}} returns 
> false, hence the select operation returns an empty response, although it should 
> return several rows. 
> Investigation showed that the cursor returned from 
> {{RocksDbPartitionStorage#scan}}
> is inconsistent after the leader change because its hasNext unexpectedly returns 
> false.
> Further investigation is needed.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-16478) RocksDB returns unexpected cursor.hasNext equals false after leader is changed

2022-05-31 Thread Denis Chudov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544315#comment-17544315
 ] 

Denis Chudov commented on IGNITE-16478:
---

I have merged the current main into ignite-16406-old-main, where this error was 
reproducing. After >20k runs of the test, the error did not reproduce, which 
suggests that it was fixed by IGNITE-16718.

> RocksDB returns unexpected cursor.hasNext equals false after leader is changed
> --
>
> Key: IGNITE-16478
> URL: https://issues.apache.org/jira/browse/IGNITE-16478
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Assignee: Denis Chudov
>Priority: Blocker
>  Labels: ignite-3
>
> After investigation of https://issues.apache.org/jira/browse/IGNITE-16406 we 
> found a bug in {{PartitionListener#handleScanRetrieveBatchCommand}} when the 
> leader is changed. In some cases, {{cursorDesc.cursor().hasNext()}} returns 
> false, hence the select operation returns an empty response, although it should 
> return several rows. 
> Investigation showed that the cursor returned from 
> {{RocksDbPartitionStorage#scan}}
> is inconsistent after the leader change because its hasNext unexpectedly returns 
> false.
> Further investigation is needed.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17064) ItRaftGroupServiceTest#testTransferLeadership is flaky

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17064:
-
Description: 
ItRaftGroupServiceTest#testTransferLeadership is flaky.

There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
when EBUSY:Changing the configuration is thrown:

 
{noformat}
// It's very messy to deal with the case when the |peer| received
// TimeoutNowRequest and increase the term while somehow another leader
// which was not replicated with the newest configuration has been
// elected. If no add_peer with this very |peer| is to be invoked ever
// after nor this peer is to be killed, this peer will spin in the voting
// procedure and make the each new leader stepped down when the peer
// reached vote timeout and it starts to vote (because it will increase
// the term of the group)
// To make things simple, refuse the operation and force users to
// invoke transfer_leadership_to after configuration changing is
// completed so that the peer's configuration is up-to-date when it
// receives the TimeOutNowRequest.
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix the test is to rewrite it and repeat the transfer 
leadership invocation.


 

  was:
ItRaftGroupServiceTest#testTransferLeadership is flaky.

There is a comment in {{ItRaftGroupServiceTest#testTransferLeadership}} about 
the situation when EBUSY:Changing the configuration is thrown:

 
{noformat}
// It's very messy to deal with the case when the |peer| received
// TimeoutNowRequest and increase the term while somehow another leader
// which was not replicated with the newest configuration has been
// elected. If no add_peer with this very |peer| is to be invoked ever
// after nor this peer is to be killed, this peer will spin in the voting
// procedure and make the each new leader stepped down when the peer
// reached vote timeout and it starts to vote (because it will increase
// the term of the group)
// To make things simple, refuse the operation and force users to
// invoke transfer_leadership_to after configuration changing is
// completed so that the peer's configuration is up-to-date when it
// receives the TimeOutNowRequest.
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix the test is to rewrite it and repeat the transfer 
leadership invocation.


 


> ItRaftGroupServiceTest#testTransferLeadership is flaky
> --
>
> Key: IGNITE-17064
> URL: https://issues.apache.org/jira/browse/IGNITE-17064
> Project: Ignite
>  Issue Type: Bug
>Reporter: Mirza Aliev
>Priority: Blocker
>  Labels: ignite-3
>
> ItRaftGroupServiceTest#testTransferLeadership is flaky.
> There is a comment in {{NodeImpl#transferLeadershipTo}} about the situation 
> when EBUSY:Changing the configuration is thrown:
>  
> {noformat}
> // It's very messy to deal with the case when the |peer| 
> received
> // TimeoutNowRequest and increase the term while somehow 
> another leader
> // which was not replicated with the newest configuration has 
> been
> // elected. If no add_peer with this very |peer| is to be 
> invoked ever
> // after nor this peer is to be killed, this peer will spin 
> in the voting
> // procedure and make the each new leader stepped down when 
> the peer
> // reached vote timeout and it starts to vote (because it 
> will increase
> // the term of the group)
> // To make things simple, refuse the operation and force 
> users to
> // invoke transfer_leadership_to after configuration changing 
> is
> // completed so that the peer's configuration is up-to-date 
> when it
> // receives the TimeOutNowRequest.
> {noformat}
> The current limitation must be investigated.
> Seems like the easiest way to fix the test is to rewrite it and repeat the transfer 
> leadership invocation.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17064) ItRaftGroupServiceTest#testTransferLeadership is flaky

2022-05-31 Thread Mirza Aliev (Jira)
Mirza Aliev created IGNITE-17064:


 Summary: ItRaftGroupServiceTest#testTransferLeadership is flaky
 Key: IGNITE-17064
 URL: https://issues.apache.org/jira/browse/IGNITE-17064
 Project: Ignite
  Issue Type: Bug
Reporter: Mirza Aliev


ItRaftGroupServiceTest#testTransferLeadership is flaky.

There is a comment in {{ItRaftGroupServiceTest#testTransferLeadership}} about 
the situation when EBUSY:Changing the configuration is thrown:

 
{noformat}
// It's very messy to deal with the case when the |peer| received
// TimeoutNowRequest and increase the term while somehow another leader
// which was not replicated with the newest configuration has been
// elected. If no add_peer with this very |peer| is to be invoked ever
// after nor this peer is to be killed, this peer will spin in the voting
// procedure and make the each new leader stepped down when the peer
// reached vote timeout and it starts to vote (because it will increase
// the term of the group)
// To make things simple, refuse the operation and force users to
// invoke transfer_leadership_to after configuration changing is
// completed so that the peer's configuration is up-to-date when it
// receives the TimeOutNowRequest.
{noformat}

The current limitation must be investigated.
Seems like the easiest way to fix the test is to rewrite it and repeat the transfer 
leadership invocation.


 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17062) Add ability to process local events synchronously

2022-05-31 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-17062:
-
Description: 
Local events (org.apache.ignite.internal.manager.Event) logic should be 
reworked a bit so that all signers have processed the corresponding event 
before the action is published. Let's walk through an example of the expected 
behavior (a sketch follows the list):
 # TableManager prepares a table for creation.
 # TableManager notifies all signers with the Table.CREATE event, propagating the 
to-be-created table.
 # TableManager (with the help of org.apache.ignite.internal.manager.Producer) 
awaits acknowledgement from all signers that the Table.CREATE event was successfully 
processed. E.g. SqlQueryProcessor prepares Calcite schemas on the Table.CREATE event 
mentioned above and, when ready, sends the ack by completing the event's future, as 
is done for ConfigurationRegistry events.
 # TableManager, on a compound future over the event futures (e.g. 
CompletableFuture.allOf), publishes the corresponding action, e.g. makes the table 
visible to everyone.
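
The sketch below only illustrates the pattern described above with standard JDK types; SyncProducer and the listener shape are hypothetical and do not reflect the real Producer/Event classes.

{noformat}
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Function;

/** Illustrative only: fires an event, collects one future per subscriber ("signer" above)
 *  and completes its result only after every subscriber has acknowledged the event. */
class SyncProducer<E> {
    private final List<Function<E, CompletableFuture<Void>>> listeners = new CopyOnWriteArrayList<>();

    void addListener(Function<E, CompletableFuture<Void>> listener) {
        listeners.add(listener);
    }

    /** Notifies all subscribers and returns a future completed when all acks have arrived. */
    CompletableFuture<Void> fire(E event) {
        return CompletableFuture.allOf(
            listeners.stream().map(l -> l.apply(event)).toArray(CompletableFuture[]::new));
    }
}

class TableCreateExample {
    static CompletableFuture<Void> createTable(SyncProducer<String> producer, String tableName) {
        // Steps 1-3: prepare the table, notify subscribers, wait for all acks.
        return producer.fire(tableName)
            // Step 4: only now publish the action, e.g. make the table visible.
            .thenRun(() -> System.out.println("Table " + tableName + " is now visible"));
    }
}
{noformat}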

> Add ability to process local events synchronously
> -
>
> Key: IGNITE-17062
> URL: https://issues.apache.org/jira/browse/IGNITE-17062
> Project: Ignite
>  Issue Type: Task
>Reporter: Alexander Lapin
>Priority: Major
>  Labels: ignite-3
>
> Local events (org.apache.ignite.internal.manager.Event) logic should be 
> reworked a bit in order to await all signers to process corresponding event 
> before publishing the action. Let's check an example of an expected behavior:
>  # TableManager prepares table for creation.
>  # TableManager notifies all signers with Table.CREATE event propagating 
> given to-be-created table.
>  # TableManager (with the help of 
> org.apache.ignite.internal.manager.Producer) awaits all signers to 
> acknowledge that Table.CREATE event was successfully processed. E.g.  
> SqlQueryProcessor prepares calcite schemes on mentioned above Table.CREATE 
> and when ready sends the ack by completing events Future like it's done 
> within ConfigurationRegistry events.
>  # TableManager on Compound-Like-Future.allOff(events) publishes 
> corresponding action, e.g. makes table visible to everyone.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17063) .NET: Failed to load libjvm.so in some environments

2022-05-31 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-17063:

Priority: Trivial  (was: Major)

> .NET: Failed to load libjvm.so in some environments
> ---
>
> Key: IGNITE-17063
> URL: https://issues.apache.org/jira/browse/IGNITE-17063
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 2.11
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Trivial
>  Labels: .NET
> Fix For: 2.14
>
>
> We rely on "readlink -f /usr/bin/java" to locate the JVM on Linux.
> However, in some cases "readlink" is not in PATH and this fails.
> # Try full path "/usr/bin/readlink" as well as short path
> # Capture stderr when running commands



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-12152) Disk space not getting released after deleting rows from a table or after records expired with expiration policy

2022-05-31 Thread Sergey Chugunov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-12152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544304#comment-17544304
 ] 

Sergey Chugunov commented on IGNITE-12152:
--

[~shm] , this is expected behavior for version 2.7: the Apache Ignite storage 
engine implementation cannot free up disk space on the fly due to how the storage 
is designed at the most fundamental level.

Yet it is possible to reclaim disk space in newer versions, starting from 2.10, 
using the defragmentation mechanism together with Maintenance Mode. I added a link 
to the ticket where the CLI API for defragmentation was implemented.

If you don't mind, I'll close the ticket with a "Won't Fix" resolution.

> Disk space not getting released after deleting rows from a table or after 
> records expired with expiration policy
> 
>
> Key: IGNITE-12152
> URL: https://issues.apache.org/jira/browse/IGNITE-12152
> Project: Ignite
>  Issue Type: Bug
>  Components: documentation, persistence, sql
>Affects Versions: 2.7
>Reporter: shivakumar
>Priority: Critical
>
> To reproduce,
> create a cache group and create a SQL table using that cache group, then 
> insert a considerable number of records into the table and monitor the disk 
> space usage; now stop inserting records and delete the records from the table 
> using *DELETE FROM table;* (not a drop table operation).
> Once this is done, *select count(\*) from table;* shows a count of 0, but the 
> monitored disk space usage stays at the same level as before the rows were 
> deleted.
> When I start inserting records again, the new records re-use the space 
> occupied by the deleted records, but still it is not a good idea to 
> unnecessarily hold the disk resource.
> The same can be reproduced by configuring a cache expiry policy 
> (CreatedExpiryPolicy): after records expire they are not visible in SQL 
> queries (select * from table), but they still hold the disk resource.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-13109) Skip distributed metastorage entries that can not be unmarshalled

2022-05-31 Thread Nikolay Izhikov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544303#comment-17544303
 ] 

Nikolay Izhikov commented on IGNITE-13109:
--

[~sergeychugunov] 

1. It seems that this scenario is not possible on vanilla Ignite.
2. This feature is required only during migration from one Ignite fork to another 
with the same PDS.
3. Products based on Ignite can make private patches to overcome the possible 
issue.

Do you have another reason to have this feature in master?
I can update the patch and merge it if it is really required.

> Skip distributed metastorage entries that can not be unmarshalled
> -
>
> Key: IGNITE-13109
> URL: https://issues.apache.org/jira/browse/IGNITE-13109
> Project: Ignite
>  Issue Type: Bug
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Need to skip distributed metastorage entries that cannot be unmarshalled 
> (these entries can be created by a product that is built on Apache Ignite and 
> incorporates additional features). Otherwise, nodes cannot join the first 
> started node:
> {noformat}
> [SEVERE][main][PersistenceBasicCompatibilityTest1] Got exception while 
> starting (will rollback startup routine).
> class org.apache.ignite.IgniteCheckedException: Failed to start manager: 
> GridManagerAdapter [enabled=true, 
> name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:2035)
> at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1314)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2063)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1703)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1116)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:636)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:562)
> at org.apache.ignite.Ignition.start(Ignition.java:328)
> at 
> org.apache.ignite.testframework.junits.multijvm.IgniteNodeRunner.main(IgniteNodeRunner.java:74)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start 
> SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, 
> marsh=JdkMarshaller 
> [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@77b14724], 
> reconCnt=10, reconDelay=2000, maxAckTimeout=60, soLinger=5, 
> forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null, 
> skipAddrsRandomization=false]
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:302)
> at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:948)
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:2030)
> ... 8 more
> Caused by: class org.apache.ignite.spi.IgniteSpiException: Unable to 
> unmarshal key=ignite.testOldKey
> at 
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:2009)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1116)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:427)
> at 
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2111)
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:299)
> ... 10 more
> {noformat}
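
A minimal, illustrative sketch of the proposed tolerance (not the actual patch): unmarshal each entry defensively and skip, with a warning, anything that cannot be read instead of failing discovery start. The Map/Function shape below stands in for the real metastorage and JdkMarshaller types.

{noformat}
import java.util.Map;
import java.util.function.Function;
import java.util.logging.Logger;

class TolerantMetastorageApplier {
    private static final Logger LOG = Logger.getLogger(TolerantMetastorageApplier.class.getName());

    /** Applies all entries, skipping those whose values cannot be unmarshalled. */
    static void applyEntries(Map<String, byte[]> rawEntries, Function<byte[], Object> unmarshal) {
        for (Map.Entry<String, byte[]> e : rawEntries.entrySet()) {
            Object value;

            try {
                value = unmarshal.apply(e.getValue());
            }
            catch (RuntimeException ex) {
                // The proposal: log and skip instead of aborting the node join.
                LOG.warning("Skipping metastorage entry that cannot be unmarshalled: " + e.getKey());

                continue;
            }

            // ... apply the successfully unmarshalled value to the local state ...
            LOG.fine("Applied entry " + e.getKey() + " -> " + value);
        }
    }
}
{noformat}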



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17063) .NET: Failed to load libjvm.so in some environments

2022-05-31 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-17063:
---

 Summary: .NET: Failed to load libjvm.so in some environments
 Key: IGNITE-17063
 URL: https://issues.apache.org/jira/browse/IGNITE-17063
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 2.11
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 2.14


We rely on "readlink -f /usr/bin/java" to locate the JVM on Linux.
However, in some cases "readlink" is not in PATH and this fails.

# Try full path "/usr/bin/readlink" as well as short path
# Capture stderr when running commands
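
The actual fix lives in the .NET platform code; the Java sketch below only illustrates the two points above under assumed conditions: prefer the absolute /usr/bin/readlink over relying on PATH, and capture error output so a failure is visible instead of silent.

{noformat}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.file.Files;
import java.nio.file.Paths;

final class JvmLocator {
    /** Resolves the real java binary, preferring the absolute readlink path. */
    static String resolveJavaBinary() throws Exception {
        for (String readlink : new String[] {"/usr/bin/readlink", "readlink"}) {
            // Skip the absolute candidate if it does not exist on this machine.
            if (readlink.startsWith("/") && !Files.isExecutable(Paths.get(readlink)))
                continue;

            ProcessBuilder pb = new ProcessBuilder(readlink, "-f", "/usr/bin/java");
            pb.redirectErrorStream(true); // Merge stderr into stdout so errors are captured, not lost.

            Process proc = pb.start();

            String out;
            try (BufferedReader r = new BufferedReader(new InputStreamReader(proc.getInputStream()))) {
                out = r.readLine();
            }

            if (proc.waitFor() == 0 && out != null && !out.isEmpty())
                return out; // e.g. /usr/lib/jvm/.../bin/java, from which libjvm.so can be located.
        }

        throw new IllegalStateException("Could not resolve the java binary via readlink");
    }
}
{noformat}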



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17062) Add ability to process local events synchronously

2022-05-31 Thread Alexander Lapin (Jira)
Alexander Lapin created IGNITE-17062:


 Summary: Add ability to process local events synchronously
 Key: IGNITE-17062
 URL: https://issues.apache.org/jira/browse/IGNITE-17062
 Project: Ignite
  Issue Type: Task
Reporter: Alexander Lapin






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-13109) Skip distributed metastorage entries that can not be unmarshalled

2022-05-31 Thread Sergey Chugunov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544295#comment-17544295
 ] 

Sergey Chugunov commented on IGNITE-13109:
--

[~nizhikov] , could you please add a comment about why this issue was closed 
with a "Won't fix" resolution? It seems to me to be a real problem: some work 
was done, but eventually the issue was silently closed without any additional 
information.

> Skip distributed metastorage entries that can not be unmarshalled
> -
>
> Key: IGNITE-13109
> URL: https://issues.apache.org/jira/browse/IGNITE-13109
> Project: Ignite
>  Issue Type: Bug
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Major
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Need to skip distributed metastorage entries that cannot be unmarshalled 
> (these entries can be created by a product that is built on Apache Ignite and 
> incorporates additional features). Otherwise, nodes cannot join the first 
> started node:
> {noformat}
> [SEVERE][main][PersistenceBasicCompatibilityTest1] Got exception while 
> starting (will rollback startup routine).
> class org.apache.ignite.IgniteCheckedException: Failed to start manager: 
> GridManagerAdapter [enabled=true, 
> name=org.apache.ignite.internal.managers.discovery.GridDiscoveryManager]
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:2035)
> at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1314)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2063)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1703)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1116)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:636)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:562)
> at org.apache.ignite.Ignition.start(Ignition.java:328)
> at 
> org.apache.ignite.testframework.junits.multijvm.IgniteNodeRunner.main(IgniteNodeRunner.java:74)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start 
> SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, 
> marsh=JdkMarshaller 
> [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@77b14724], 
> reconCnt=10, reconDelay=2000, maxAckTimeout=60, soLinger=5, 
> forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null, 
> skipAddrsRandomization=false]
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:302)
> at 
> org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:948)
> at 
> org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:2030)
> ... 8 more
> Caused by: class org.apache.ignite.spi.IgniteSpiException: Unable to 
> unmarshal key=ignite.testOldKey
> at 
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:2009)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1116)
> at 
> org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:427)
> at 
> org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2111)
> at 
> org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:299)
> ... 10 more
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17056) Implement rebalance cancel mechanism

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17056:
-
Epic Link: IGNITE-14209

> Implement rebalance cancel mechanism
> 
>
> Key: IGNITE-17056
> URL: https://issues.apache.org/jira/browse/IGNITE-17056
> Project: Ignite
>  Issue Type: Task
>Reporter: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> There are cases when a current leader cannot perform rebalance on specified 
> set of nodes, for example, when some node from the raft group permanently 
> fails with \{{RaftError#ECATCHUP}}. For such scenario retry mechanism is 
> implemented in IGNITE-16801, but we cannot retry rebalance intent infinitely, 
> so there should be implemented mechanism for canceling a rebalance. 
> Naive canceling could be implemented by removing {{pending key}} and 
> replacing it with {{planned key}}. But this approach has several crucial 
> limitations and may cause inconsistency in the current rebalance protocol, 
> for example, when there is a race between cancel and applying new assignment 
> to the {{stable key}} from the new leader. We can remove {{pending key}} 
> right before applying new assignment to the {{stable key}}, so we cannot 
> resolve peers to ClusterIds, which is made on a union of pending and stable 
> keys. 
> Also there is a case, when we can lost planned rebalance:
>  # Current leader retries failed rebalance
>  # Current leader stops being leader for some reasons and sleeps
>  # New leader performs rebalance and calls 
> {{RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied}}
>  # At this moment old leader wakes up and cancels the current rebalance, so 
> it removes pending and writes to it planned key.
>  # At this moment we receive 
> {{RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied}} from the 
> new leader, see that planned is empty, so we just delete pending key, but 
> this is not correct to delete this key as far as the rebalance that is 
> associated to the removed key hasn't been performed yet.
> Also we should consider separating scenarios for recoverable and 
> unrecoverable errors, because it might be useless to retry rebalance, if some 
> participating node fails with unrecoverable error. 
> Seems like we should properly think about introducing some failure handling 
> for such exceptional scenarios. 
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17056) Implement rebalance cancel mechanism

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17056:
-
Description: 
There are cases when the current leader cannot perform a rebalance on the specified 
set of nodes, for example, when some node from the raft group permanently fails 
with \{{RaftError#ECATCHUP}}. For such a scenario a retry mechanism is implemented 
in IGNITE-16801, but we cannot retry a rebalance intent infinitely, so a mechanism 
for canceling a rebalance should be implemented. 

Naive canceling could be implemented by removing the {{pending key}} and replacing 
it with the {{planned key}}. But this approach has several crucial limitations and 
may cause inconsistency in the current rebalance protocol, for example, when there 
is a race between the cancel and applying a new assignment to the {{stable key}} 
from the new leader. The {{pending key}} could be removed right before the new 
assignment is applied to the {{stable key}}, and then we cannot resolve peers to 
ClusterIds, which is done on a union of the pending and stable keys (see the sketch 
after this description).

Also there is a case when we can lose a planned rebalance:
 # The current leader retries a failed rebalance.
 # The current leader stops being the leader for some reason and sleeps.
 # A new leader performs the rebalance and calls 
{{RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied}}.
 # At this moment the old leader wakes up and cancels the current rebalance, so it 
removes the pending key and writes the planned key into it.
 # At this moment we receive 
{{RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied}} from the 
new leader, see that planned is empty, and just delete the pending key, but it is 
not correct to delete this key because the rebalance associated with the removed 
key hasn't been performed yet.

Also we should consider separating the scenarios for recoverable and unrecoverable 
errors, because it might be useless to retry a rebalance if some participating 
node fails with an unrecoverable error. 
It seems we should properly think about introducing failure handling for such 
exceptional scenarios. 
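
As one illustration of why the naive cancel is racy and what a guarded removal could look like, here is a hypothetical sketch: MetaStore, compareAndRemove and the key names only model the pending/planned keys from this description and are not the real Ignite meta-storage API.

{noformat}
import java.util.Arrays;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

/** Hypothetical key-value facade modelling the rebalance keys; illustrative only. */
interface MetaStore {
    CompletableFuture<Optional<byte[]>> get(String key);

    /** Removes the key only if its current value equals the expected one (CAS-style). */
    CompletableFuture<Boolean> compareAndRemove(String key, byte[] expected);
}

final class RebalanceCanceller {
    /** Cancels a rebalance only if the pending key still holds the assignment being cancelled. */
    static CompletableFuture<Boolean> cancel(MetaStore store, String partId, byte[] cancelledAssignment) {
        String pendingKey = "assignments.pending." + partId;

        return store.get(pendingKey).thenCompose(current ->
            current.isPresent() && Arrays.equals(current.get(), cancelledAssignment)
                // The pending key is still ours: it is safe to remove it atomically.
                ? store.compareAndRemove(pendingKey, cancelledAssignment)
                // A new leader already replaced or consumed the key: do not touch it.
                : CompletableFuture.completedFuture(false));
    }
}
{noformat}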
 

  was:
There are cases when current leader cannot perform rebalance on specified set 
of nodes, for example, when some node from the raft group permanently fails 
with \{{RaftError#ECATCHUP}}. For such scenario retry mechanism is implemented 
in IGNITE-16801, but we cannot retry rebalance intent infinitely, so there 
should be implemented mechanism for canceling a rebalance. 

Naive canceling could be implemented by removing {{pending key}} and replacing 
it with {{planned key}}. But this approach has several crucial limitations and 
may cause inconsistency in the current rebalance protocol, for example, when 
there is a race between cancel and applying new assignment to the {{stable 
key}} from the new leader. We can remove {{pending key}} right before applying 
new assignment to the {{stable key}}, so we cannot resolve peers to ClusterIds, 
which is made on a union of pending and stable keys. 

Also there is a case, when we can lost planned rebalance:
 # Current leader retries failed rebalance
 # Current leader stops being leader for some reasons and sleeps
 # New leader performs rebalance and calls 
{{RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied}}
 # At this moment old leader wakes up and cancels the current rebalance, so it 
removes pending and writes to it planned key.
 # At this moment we receive 
{{RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied}} from the 
new leader, see that planned is empty, so we just delete pending key, but this 
is not correct to delete this key as far as the rebalance that is associated to 
the removed key hasn't been performed yet.

Also we should consider separating scenarios for recoverable and unrecoverable 
errors, because it might be useless to retry rebalance, if some participating 
node fails with unrecoverable error. 
Seems like we should properly think about introducing some failure handling for 
such exceptional scenarios. 
 


> Implement rebalance cancel mechanism
> 
>
> Key: IGNITE-17056
> URL: https://issues.apache.org/jira/browse/IGNITE-17056
> Project: Ignite
>  Issue Type: Task
>Reporter: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> There are cases when a current leader cannot perform rebalance on specified 
> set of nodes, for example, when some node from the raft group permanently 
> fails with \{{RaftError#ECATCHUP}}. For such scenario retry mechanism is 
> implemented in IGNITE-16801, but we cannot retry rebalance intent infinitely, 
> so there should be implemented mechanism for canceling a rebalance. 
> Naive canceling could be implemented by removing {{pending key}} and 
> replacing it with {{planned key}}. But this approach has several crucial 
> limitations and may cause 

[jira] [Updated] (IGNITE-17056) Implement rebalance cancel mechanism

2022-05-31 Thread Mirza Aliev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mirza Aliev updated IGNITE-17056:
-
Description: 
There are cases when current leader cannot perform rebalance on specified set 
of nodes, for example, when some node from the raft group permanently fails 
with \{{RaftError#ECATCHUP}}. For such scenario retry mechanism is implemented 
in IGNITE-16801, but we cannot retry rebalance intent infinitely, so there 
should be implemented mechanism for canceling a rebalance. 

Naive canceling could be implemented by removing {{pending key}} and replacing 
it with {{planned key}}. But this approach has several crucial limitations and 
may cause inconsistency in the current rebalance protocol, for example, when 
there is a race between cancel and applying new assignment to the {{stable 
key}} from the new leader. We can remove {{pending key}} right before applying 
new assignment to the {{stable key}}, so we cannot resolve peers to ClusterIds, 
which is made on a union of pending and stable keys. 

Also there is a case, when we can lost planned rebalance:
 # Current leader retries failed rebalance
 # Current leader stops being leader for some reasons and sleeps
 # New leader performs rebalance and calls 
{{RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied}}
 # At this moment old leader wakes up and cancels the current rebalance, so it 
removes pending and writes to it planned key.
 # At this moment we receive 
{{RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied}} from the 
new leader, see that planned is empty, so we just delete pending key, but this 
is not correct to delete this key as far as the rebalance that is associated to 
the removed key hasn't been performed yet.

Also we should consider separating scenarios for recoverable and unrecoverable 
errors, because it might be useless to retry rebalance, if some participating 
node fails with unrecoverable error. 
Seems like we should properly think about introducing some failure handling for 
such exceptional scenarios. 
 

  was:TBD


> Implement rebalance cancel mechanism
> 
>
> Key: IGNITE-17056
> URL: https://issues.apache.org/jira/browse/IGNITE-17056
> Project: Ignite
>  Issue Type: Task
>Reporter: Mirza Aliev
>Priority: Major
>  Labels: ignite-3
>
> There are cases when current leader cannot perform rebalance on specified set 
> of nodes, for example, when some node from the raft group permanently fails 
> with \{{RaftError#ECATCHUP}}. For such scenario retry mechanism is 
> implemented in IGNITE-16801, but we cannot retry rebalance intent infinitely, 
> so there should be implemented mechanism for canceling a rebalance. 
> Naive canceling could be implemented by removing {{pending key}} and 
> replacing it with {{planned key}}. But this approach has several crucial 
> limitations and may cause inconsistency in the current rebalance protocol, 
> for example, when there is a race between cancel and applying new assignment 
> to the {{stable key}} from the new leader. We can remove {{pending key}} 
> right before applying new assignment to the {{stable key}}, so we cannot 
> resolve peers to ClusterIds, which is made on a union of pending and stable 
> keys. 
> Also there is a case, when we can lost planned rebalance:
>  # Current leader retries failed rebalance
>  # Current leader stops being leader for some reasons and sleeps
>  # New leader performs rebalance and calls 
> {{RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied}}
>  # At this moment old leader wakes up and cancels the current rebalance, so 
> it removes pending and writes to it planned key.
>  # At this moment we receive 
> {{RebalanceRaftGroupEventsListener#onNewPeersConfigurationApplied}} from the 
> new leader, see that planned is empty, so we just delete pending key, but 
> this is not correct to delete this key as far as the rebalance that is 
> associated to the removed key hasn't been performed yet.
> Also we should consider separating scenarios for recoverable and 
> unrecoverable errors, because it might be useless to retry rebalance, if some 
> participating node fails with unrecoverable error. 
> Seems like we should properly think about introducing some failure handling 
> for such exceptional scenarios. 
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-14972) Thin 3.0: Implement SQL API for java thin client

2022-05-31 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544253#comment-17544253
 ] 

Igor Sapego commented on IGNITE-14972:
--

[~ptupitsyn] see my comments in PR

> Thin 3.0: Implement SQL API for java thin client
> 
>
> Key: IGNITE-14972
> URL: https://issues.apache.org/jira/browse/IGNITE-14972
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql, thin client
>Reporter: Igor Sapego
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We need to implement basic SQL API for java thin client in 3.0. Maybe this 
> task should involve creating an IEP and discussion on the dev list.
> Also, keep in mind that protocol messages themselves should re-use as much as 
> possible JDBC protocol.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-14636) Calcite engine. Support for LISTAGG (aka GROUP_CONCAT, STRING_AGG) aggregate function

2022-05-31 Thread Ivan Daschinsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Daschinsky updated IGNITE-14636:
-
Fix Version/s: 2.14

> Calcite engine. Support for LISTAGG (aka GROUP_CONCAT, STRING_AGG) aggregate 
> function
> -
>
> Key: IGNITE-14636
> URL: https://issues.apache.org/jira/browse/IGNITE-14636
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: calcite2-required, calcite3-required
> Fix For: 2.14
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Tests:
> {{aggregate/aggregates/test_aggregate_types.test}}
> {{aggregate/aggregates/test_aggregate_types_scalar.test}}
> {{aggregate/aggregates/test_distinct_string_agg.test_ignore}}
> {{aggregate/aggregates/test_string_agg.test_ignore}}
> {{aggregate/aggregates/test_string_agg_big.test_ignore}}
> {{aggregate/aggregates/test_string_agg_many_groups.test_slow_ignored}}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-14636) Calcite engine. Support for LISTAGG (aka GROUP_CONCAT, STRING_AGG) aggregate function

2022-05-31 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544238#comment-17544238
 ] 

Ignite TC Bot commented on IGNITE-14636:


{panel:title=Branch: [pull/10023/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10023/head] Base: [master] : New Tests 
(4)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}Calcite SQL{color} [[tests 
4|https://ci.ignite.apache.org/viewLog.html?buildId=6576051]]
* {color:#013220}IgniteCalciteTestSuite: test_string_agg_many_groups.test - 
PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: test_distinct_string_agg.test - 
PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: test_string_agg.test - PASSED{color}
* {color:#013220}IgniteCalciteTestSuite: test_string_agg_big.test - 
PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6575181buildTypeId=IgniteTests24Java8_RunAll]

> Calcite engine. Support for LISTAGG (aka GROUP_CONCAT, STRING_AGG) aggregate 
> function
> -
>
> Key: IGNITE-14636
> URL: https://issues.apache.org/jira/browse/IGNITE-14636
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Taras Ledkov
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: calcite2-required, calcite3-required
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Tests:
> {{aggregate/aggregates/test_aggregate_types.test}}
> {{aggregate/aggregates/test_aggregate_types_scalar.test}}
> {{aggregate/aggregates/test_distinct_string_agg.test_ignore}}
> {{aggregate/aggregates/test_string_agg.test_ignore}}
> {{aggregate/aggregates/test_string_agg_big.test_ignore}}
> {{aggregate/aggregates/test_string_agg_many_groups.test_slow_ignored}}



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-17049) Move ignite-spark modules to the Ignite Extensions project

2022-05-31 Thread Maxim Muzafarov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544234#comment-17544234
 ] 

Maxim Muzafarov commented on IGNITE-17049:
--

The changes related to the Apache Ignite Spark 2.4 integration have been moved to 
the master branch.
The changes related to the Apache Ignite Spark 2.3 integration are in the 
release branch: 
[release/ignite-spark-ext-1.0.0|https://github.com/apache/ignite-extensions/tree/release/ignite-spark-ext-1.0.0/modules/spark-ext].

A new Apache Ignite Extensions RDD suite has been created; the tests have passed.
https://ci.ignite.apache.org/viewLog.html?buildId=6598912=IgniteExtensions_Tests_Rdd

> Move ignite-spark modules to the Ignite Extensions project
> --
>
> Key: IGNITE-17049
> URL: https://issues.apache.org/jira/browse/IGNITE-17049
> Project: Ignite
>  Issue Type: Task
>Reporter: Maxim Muzafarov
>Assignee: Maxim Muzafarov
>Priority: Major
> Fix For: 2.14
>
>
> The ignite-spark, ignite-spark2.4 modules must be moved to the Ignite 
> Extension project.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-17002) Indexes rebuild in Maintenance Mode

2022-05-31 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544191#comment-17544191
 ] 

Ignite TC Bot commented on IGNITE-17002:


{panel:title=Branch: [pull/10042/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10042/head] Base: [master] : New Tests 
(9)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}
{color:#8b}PDS (Indexing){color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=6591334]]
* {color:#013220}IgnitePdsWithIndexingTestSuite: 
MaintenanceRebuildIndexUtilsSelfTest.testConstructFromMap - PASSED{color}

{color:#8b}Control Utility{color} [[tests 
8|https://ci.ignite.apache.org/viewLog.html?buildId=6591342]]
* {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerIndexRebuildTest.testCorruptedIndexRebuildCacheWithGroupOnAllNodes
 - PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerIndexRebuildTest.testErrors - PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerIndexRebuildTest.testConsecutiveCommandInvocations - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerIndexRebuildTest.testCorruptedIndexRebuildCacheWithGroup - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerIndexRebuildTest.testCorruptedIndexRebuildCache - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerIndexRebuildTest.testCorruptedIndexRebuildCacheOnAllNodes - 
PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite: 
GridCommandHandlerIndexRebuildTest.testRebuild - PASSED{color}
* {color:#013220}IgniteControlUtilityTestSuite: 
CommandHandlerParsingTest.testIndexRebuildWrongArgs - PASSED{color}

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6598030buildTypeId=IgniteTests24Java8_RunAll]

> Indexes rebuild in Maintenance Mode
> ---
>
> Key: IGNITE-17002
> URL: https://issues.apache.org/jira/browse/IGNITE-17002
> Project: Ignite
>  Issue Type: Improvement
>  Components: control.sh, persistence
>Reporter: Sergey Chugunov
>Assignee: Semyon Danilov
>Priority: Major
> Fix For: 2.14
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Ignite now supports entering Maintenance Mode automatically after index 
> corruption; this was implemented in the linked issue.
> However, there are use cases where a user needs to request a rebuild of specific 
> indexes while in Maintenance Mode, so we need to provide a control.sh API for 
> making such requests.
> Also, for better integration with monitoring tools, it would be nice to provide 
> an API for checking the status of the rebuild tasks and to print a message to 
> the logs when each task finishes and when all tasks are finished (see the 
> sketch below).
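
Not part of this ticket's new control.sh entry point (the exact command syntax is 
not quoted here), but as a hypothetical illustration of the "check rebuild status" 
idea: the existing public Java API already exposes a per-cache future that 
completes when index rebuild finishes. A minimal sketch, assuming a cache named 
"myCache":
{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class IndexRebuildStatusSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Hypothetical cache name, used only for this example.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            // IgniteCache#indexReadyFuture() completes once all SQL indexes
            // of this cache are rebuilt and ready to use.
            cache.indexReadyFuture().listen(fut ->
                System.out.println("Indexes for 'myCache' are ready."));
        }
    }
}
{code}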



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17061) Invalid detection of SPI on configuration validation.

2022-05-31 Thread Vasiliy Sisko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vasiliy Sisko updated IGNITE-17061:
---
Description: 
When I configure different deployment SPIs for different nodes, I receive the 
following message:
_Remote SPI with the same name is not configured_
whereas I expect to get the message:
_Remote SPI with the same name is of different type_
This happens because *org.apache.ignite.spi.IgniteSpiAdapter* uses 
*U.getSimpleName(getClass())* for SPI name generation in 
org.apache.ignite.spi.IgniteSpiAdapter#IgniteSpiAdapter, which looks like a 
mistake.

  was:
When I configure different deployment SPIs for different nodes, I receive the 
following message:
Remote SPI with the same name is not configured
whereas I expect to get the message:
Remote SPI with the same name is of different type
This happens because *org.apache.ignite.spi.IgniteSpiAdapter* uses 
*U.getSimpleName(getClass())* for SPI name generation in 
org.apache.ignite.spi.IgniteSpiAdapter#IgniteSpiAdapter, which looks like a 
mistake.


> Invalid detection of SPI on configuration validation.
> -
>
> Key: IGNITE-17061
> URL: https://issues.apache.org/jira/browse/IGNITE-17061
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.13
>Reporter: Vasiliy Sisko
>Priority: Major
>
> When I configure different deployment SPIs for different nodes, I receive the 
> following message:
> _Remote SPI with the same name is not configured_
> whereas I expect to get the message:
> _Remote SPI with the same name is of different type_
> This happens because *org.apache.ignite.spi.IgniteSpiAdapter* uses 
> *U.getSimpleName(getClass())* for SPI name generation in 
> org.apache.ignite.spi.IgniteSpiAdapter#IgniteSpiAdapter, which looks like a 
> mistake (see the sketch below).
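
For illustration only (the instance names and SPI choices below are hypothetical, 
not taken from the ticket): a minimal sketch of a setup that triggers the 
misleading message is two nodes configured with deployment SPIs of different 
types. UriDeploymentSpi requires the optional ignite-urideploy module on the 
classpath.
{code:java}
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.deployment.local.LocalDeploymentSpi;
import org.apache.ignite.spi.deployment.uri.UriDeploymentSpi;

public class DeploymentSpiMismatchSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg1 = new IgniteConfiguration()
            .setIgniteInstanceName("node1")
            .setDeploymentSpi(new LocalDeploymentSpi());

        IgniteConfiguration cfg2 = new IgniteConfiguration()
            .setIgniteInstanceName("node2")
            // Deployment SPI of a different type than on node1.
            .setDeploymentSpi(new UriDeploymentSpi());

        Ignition.start(cfg1);

        // On join, configuration validation is expected to report
        // "Remote SPI with the same name is of different type",
        // but currently reports "Remote SPI with the same name is not configured".
        Ignition.start(cfg2);
    }
}
{code}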



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17061) Invalid detection of SPI on configuration validation.

2022-05-31 Thread Vasiliy Sisko (Jira)
Vasiliy Sisko created IGNITE-17061:
--

 Summary: Invalid detection of SPI on configuration validation.
 Key: IGNITE-17061
 URL: https://issues.apache.org/jira/browse/IGNITE-17061
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.13
Reporter: Vasiliy Sisko


When I configure different deployment SPIs for different nodes, I receive the 
following message:
Remote SPI with the same name is not configured
whereas I expect to get the message:
Remote SPI with the same name is of different type
This happens because *org.apache.ignite.spi.IgniteSpiAdapter* uses 
*U.getSimpleName(getClass())* for SPI name generation in 
org.apache.ignite.spi.IgniteSpiAdapter#IgniteSpiAdapter, which looks like a 
mistake.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17060) Thin 3.0: Implement script SQL API for java thin client

2022-05-31 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-17060:
---

 Summary: Thin 3.0: Implement script SQL API for java thin client
 Key: IGNITE-17060
 URL: https://issues.apache.org/jira/browse/IGNITE-17060
 Project: Ignite
  Issue Type: Improvement
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17059) Thin 3.0: Implement batch SQL API for java thin client

2022-05-31 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-17059:
---

 Summary: Thin 3.0: Implement batch SQL API for java thin client
 Key: IGNITE-17059
 URL: https://issues.apache.org/jira/browse/IGNITE-17059
 Project: Ignite
  Issue Type: Improvement
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17058) Thin 3.0: Implement reactive SQL API for java thin client

2022-05-31 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-17058:
---

 Summary: Thin 3.0: Implement reactive SQL API for java thin client
 Key: IGNITE-17058
 URL: https://issues.apache.org/jira/browse/IGNITE-17058
 Project: Ignite
  Issue Type: Improvement
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17057) Thin 3.0: Implement synchronous SQL API for java thin client

2022-05-31 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-17057:
---

 Summary: Thin 3.0: Implement synchronous SQL API for java thin 
client
 Key: IGNITE-17057
 URL: https://issues.apache.org/jira/browse/IGNITE-17057
 Project: Ignite
  Issue Type: Improvement
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn






--
This message was sent by Atlassian Jira
(v8.20.7#820007)