[jira] [Commented] (IGNITE-18255) C++ 3.0: Add KeyValueView

2023-04-17 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-18255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713396#comment-17713396
 ] 

Pavel Tupitsyn commented on IGNITE-18255:
-----------------------------------------

[~isapego] looks good to me.

> C++ 3.0: Add KeyValueView
> -------------------------
>
> Key: IGNITE-18255
> URL: https://issues.apache.org/jira/browse/IGNITE-18255
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement KeyValueView in C++ client. Implement column mapping for user 
> objects.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (IGNITE-17961) .NET: Migrate analysis settings from RuleSet to EditorConfig

2023-04-17 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-17961:

Fix Version/s: (was: 2.15)

> .NET: Migrate analysis settings from RuleSet to EditorConfig
> ------------------------------------------------------------
>
> Key: IGNITE-17961
> URL: https://issues.apache.org/jira/browse/IGNITE-17961
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET
>
> See 
> https://learn.microsoft.com/en-us/dotnet/fundamentals/code-analysis/configuration-files
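The migration replaces `<Rule ... Action="..."/>` entries in the legacy RuleSet file with per-diagnostic severity keys in `.editorconfig`. A minimal illustrative fragment (the rule IDs and severities here are examples, not taken from the ticket):

```ini
# .editorconfig replaces the legacy *.ruleset analyzer configuration.
root = true

[*.cs]
# One key per diagnostic, previously a <Rule Id="..." Action="..."/> entry:
dotnet_diagnostic.CA1062.severity = warning
dotnet_diagnostic.SA1600.severity = none
```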





[jira] [Commented] (IGNITE-17961) .NET: Migrate analysis settings from RuleSet to EditorConfig

2023-04-17 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713376#comment-17713376
 ] 

Pavel Tupitsyn commented on IGNITE-17961:
-----------------------------------------

[~alex_pl] this is minor and does not affect users. Removed the fix version.

> .NET: Migrate analysis settings from RuleSet to EditorConfig
> ------------------------------------------------------------
>
> Key: IGNITE-17961
> URL: https://issues.apache.org/jira/browse/IGNITE-17961
> Project: Ignite
>  Issue Type: Improvement
>  Components: platforms
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .NET
>
> See 
> https://learn.microsoft.com/en-us/dotnet/fundamentals/code-analysis/configuration-files





[jira] [Commented] (IGNITE-17853) Fix release build on TC (.NET documentation)

2023-04-17 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713375#comment-17713375
 ] 

Pavel Tupitsyn commented on IGNITE-17853:
-----------------------------------------

[~alex_pl] I'm not working on this ticket, but the documentation step is still 
disabled on TC, so I guess we still need this:
https://ci.ignite.apache.org/admin/editBuildRunners.html?id=buildType:Releases_ApacheIgniteMain_ReleaseBuild

> Fix release build on TC (.NET documentation)
> --------------------------------------------
>
> Key: IGNITE-17853
> URL: https://issues.apache.org/jira/browse/IGNITE-17853
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Reporter: Taras Ledkov
>Assignee: Pavel Tupitsyn
>Priority: Critical
> Fix For: 2.15
>
>
> There is no .NET documentation in the release build.
> [ignite-2.14|https://ci.ignite.apache.org/viewLog.html?buildId=6794645=Releases_ApacheIg[…]_Releases_ApacheIgniteMain=ignite-1377&_focus=70896]
>  release build.
> Build log:
> {code}
> [08:50:17]Step 11/15: Copy .NET docs (Command Line)
> [08:50:17][Step 11/15] Disabled build step Copy .NET docs (Command Line) is 
> skipped
> {code}





[jira] [Updated] (IGNITE-18255) C++ 3.0: Add KeyValueView

2023-04-17 Thread Igor Sapego (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Sapego updated IGNITE-18255:
---------------------------------
Reviewer: Pavel Tupitsyn

> C++ 3.0: Add KeyValueView
> -------------------------
>
> Key: IGNITE-18255
> URL: https://issues.apache.org/jira/browse/IGNITE-18255
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Reporter: Igor Sapego
>Assignee: Igor Sapego
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement KeyValueView in C++ client. Implement column mapping for user 
> objects.





[jira] [Commented] (IGNITE-19286) NPE in case of simultaneous cache destroy and transaction rollback

2023-04-17 Thread Nikita Amelchev (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713200#comment-17713200
 ] 

Nikita Amelchev commented on IGNITE-19286:
------------------------------------------

Merged into the master.

[~alexpl], Thank you for the review.

> NPE in case of simultaneous cache destroy and transaction rollback
> ------------------------------------------------------------------
>
> Key: IGNITE-19286
> URL: https://issues.apache.org/jira/browse/IGNITE-19286
> Project: Ignite
>  Issue Type: Bug
>Reporter: Nikita Amelchev
>Assignee: Nikita Amelchev
>Priority: Major
>  Labels: ise
> Fix For: 2.15
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Reproducer attached. NPE in case of simultaneous cache destroy and 
> transaction rollback:
> {noformat}
> java.lang.NullPointerException: null
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager.notifyEvictions(IgniteTxManager.java:1967)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager.rollbackTx(IgniteTxManager.java:1723)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userRollback(IgniteTxLocalAdapter.java:1103)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.localFinish(GridNearTxLocal.java:3736)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.doFinish(GridNearTxFinishFuture.java:468)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.finish(GridNearTxFinishFuture.java:417)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$24.apply(GridNearTxLocal.java:4032)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$24.apply(GridNearTxLocal.java:4005)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:464)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:348)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:336)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:576)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheCompoundFuture.onDone(GridCacheCompoundFuture.java:56)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:555)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.onComplete(GridNearOptimisticTxPrepareFuture.java:298)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.onDone(GridNearOptimisticTxPrepareFuture.java:274)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFuture.onDone(GridNearOptimisticTxPrepareFuture.java:79)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:543)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.prepareOnTopology(GridNearOptimisticTxPrepareFutureAdapter.java:201)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearOptimisticTxPrepareFutureAdapter.lambda$prepareOnTopology$27f50bf2$1(GridNearOptimisticTxPrepareFutureAdapter.java:234)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$2.apply(GridTimeoutProcessor.java:181)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$2.apply(GridTimeoutProcessor.java:173)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:464)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:348)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:336)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:576)
>  ~[classes/:?]
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:555)
>  ~[classes/:?]
>   at 
> 

[jira] [Updated] (IGNITE-19239) Checkpoint read lock acquisition timeouts during snapshot restore

2023-04-17 Thread Ilya Shishkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Shishkov updated IGNITE-19239:
-----------------------------------
Description: 
Error messages about checkpoint read lock acquisition timeouts and blocked 
critical threads may appear during the snapshot restore process (just after 
the caches start):
{quote} 
[2023-04-06T10:55:46,561][ERROR]\[ttl-cleanup-worker-#475%node%][CheckpointTimeoutLock]
 Checkpoint read lock acquisition has been timed out. 
{quote} 

{quote} 
[2023-04-06T10:55:47,487][ERROR]\[tcp-disco-msg-worker-[crd]\-#23%node%\-#446%node%][G]
 Blocked system-critical thread has been detected. This can lead to 
cluster-wide undefined behaviour \[workerName=db-checkpoint-thread, 
threadName=db-checkpoint-thread-#457%snapshot.BlockingThreadsOnSnapshotRestoreReproducerTest0%,
 {color:red}blockedFor=100s{color}] 
{quote} 

There is also an active exchange process, which finishes with timings like 
the following (approximately equal to the blocking time of the threads): 
{quote} 
[2023-04-06T10:55:52,211][INFO 
]\[exchange-worker-#450%node%][GridDhtPartitionsExchangeFuture] Exchange 
timings [startVer=AffinityTopologyVersion [topVer=1, minorTopVer=5], 
resVer=AffinityTopologyVersion [topVer=1, minorTopVer=5], stage="Waiting in 
exchange queue" (0 ms), ..., stage="Restore partition states" 
({color:red}100163 ms{color}), ..., stage="Total time" ({color:red}100334 
ms{color})] 
{quote} 
 

As I understand it, such errors do not affect the restore, but they can be 
confusing, so it would be good to fix them.

 

How to reproduce:
 # Set the checkpoint frequency to less than the failure detection timeout.
 # Ensure that restoring the cache group partition states takes longer than the 
failure detection timeout, i.e. this applies to sufficiently large caches.

Reproducer: [^BlockingThreadsOnSnapshotRestoreReproducerTest.patch]

  was:
There may be possible error messages about checkpoint read lock acquisition 
timeouts and critical threads blocking during snapshot restore process (just 
after caches start):
{quote} 
[2023-04-06T10:55:46,561][ERROR]\[ttl-cleanup-worker-#475%node%][CheckpointTimeoutLock]
 Checkpoint read lock acquisition has been timed out. 
{quote} 

{quote} 
[2023-04-06T10:55:47,487][ERROR]\[tcp-disco-msg-worker-[crd]\-#23%node%\-#446%node%][G]
 Blocked system-critical thread has been detected. This can lead to 
cluster-wide undefined behaviour \[workerName=db-checkpoint-thread, 
threadName=db-checkpoint-thread-#457%snapshot.BlockingThreadsOnSnapshotRestoreReproducerTest0%,
 {color:red}blockedFor=100s{color}] 
{quote} 

Also there are active exchange process, which finishes with such timings 
(timing will be approximatelly equal to blocking time of threads): 
{quote} 
[2023-04-06T10:55:52,211][INFO 
]\[exchange-worker-#450%node%][GridDhtPartitionsExchangeFuture] Exchange 
timings [startVer=AffinityTopologyVersion [topVer=1, minorTopVer=5], 
resVer=AffinityTopologyVersion [topVer=1, minorTopVer=5], stage="Waiting in 
exchange queue" (0 ms), ..., stage="Restore partition states" 
({color:red}100163 ms{color}), ..., stage="Total time" ({color:red}100334 
ms{color})] 
{quote} 
 

Is I understand, such errors do not affect restoring, but can confuse.

 

How to reproduce:
 # Set checkpoint frequency less than failure detection timeout.
 # Ensure, that cache groups partitions states restoring lasts more than 
failure detection timeout, i.e. it is actual to sufficiently large caches.

Reproducer: [^BlockingThreadsOnSnapshotRestoreReproducerTest.patch]


> Checkpoint read lock acquisition timeouts during snapshot restore
> -----------------------------------------------------------------
>
> Key: IGNITE-19239
> URL: https://issues.apache.org/jira/browse/IGNITE-19239
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ilya Shishkov
>Priority: Minor
>  Labels: iep-43, ise
> Attachments: BlockingThreadsOnSnapshotRestoreReproducerTest.patch
>
>
> Error messages about checkpoint read lock acquisition timeouts and blocked 
> critical threads may appear during the snapshot restore process (just after 
> the caches start):
> {quote} 
> [2023-04-06T10:55:46,561][ERROR]\[ttl-cleanup-worker-#475%node%][CheckpointTimeoutLock]
>  Checkpoint read lock acquisition has been timed out. 
> {quote} 
> {quote} 
> [2023-04-06T10:55:47,487][ERROR]\[tcp-disco-msg-worker-[crd]\-#23%node%\-#446%node%][G]
>  Blocked system-critical thread has been detected. This can lead to 
> cluster-wide undefined behaviour \[workerName=db-checkpoint-thread, 
> threadName=db-checkpoint-thread-#457%snapshot.BlockingThreadsOnSnapshotRestoreReproducerTest0%,
>  {color:red}blockedFor=100s{color}] 
> {quote} 
> There is also an active exchange process, which finishes with timings like 
> the following (approximately equal to the blocking time of the threads): 
> {quote} 
> 

[jira] [Assigned] (IGNITE-19262) Sql. DDL silently ignores transaction

2023-04-17 Thread Andrey Mashenkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Mashenkov reassigned IGNITE-19262:
-----------------------------------------

Assignee: Andrey Mashenkov

> Sql. DDL silently ignores transaction
> -------------------------------------
>
> Key: IGNITE-19262
> URL: https://issues.apache.org/jira/browse/IGNITE-19262
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 3.0.0-beta1
>Reporter: Pavel Tupitsyn
>Assignee: Andrey Mashenkov
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> In *ItSqlSynchronousApiTest*, update *checkDdl*:
> {code:java}
> private static void checkDdl(boolean expectedApplied, Session ses, String 
> sql) {
> Transaction tx = CLUSTER_NODES.get(0).transactions().begin();
> ResultSet res = ses.execute(
> tx,
> sql
> );
> assertEquals(expectedApplied, res.wasApplied());
> assertFalse(res.hasRowSet());
> assertEquals(-1, res.affectedRows());
> res.close();
> tx.rollback();
> }
> {code}
> All tests pass, even though we call rollback. 
> DDL does not support transactions. We should throw an exception when *tx* is 
> not null for a DDL statement, to make this clear to users.
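The proposed guard could look like the following minimal sketch. The class and method names are hypothetical illustrations, not Ignite 3's actual session internals; the point is only that an explicit transaction combined with a DDL statement should fail fast instead of being silently ignored.

```java
// Hypothetical sketch of the proposed check: reject an explicit transaction
// for DDL statements instead of silently ignoring it.
public class DdlTransactionGuard {
    /** Stand-in for the SQL exception type thrown to the user. */
    public static class SqlException extends RuntimeException {
        public SqlException(String msg) { super(msg); }
    }

    /**
     * Called before executing a statement; tx is the user-supplied explicit
     * transaction (null means implicit/autocommit, which is fine for DDL).
     */
    public static void checkTx(Object tx, boolean isDdl) {
        if (isDdl && tx != null) {
            throw new SqlException("DDL doesn't support transactions.");
        }
    }
}
```

With such a check in place, the `checkDdl` test above would fail on `ses.execute(tx, sql)` rather than passing despite the rollback.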





[jira] [Created] (IGNITE-19295) Update openapi-generator

2023-04-17 Thread Vadim Pakhnushev (Jira)
Vadim Pakhnushev created IGNITE-19295:
--------------------------------------

 Summary: Update openapi-generator
 Key: IGNITE-19295
 URL: https://issues.apache.org/jira/browse/IGNITE-19295
 Project: Ignite
  Issue Type: Task
  Components: cli
Reporter: Vadim Pakhnushev
Assignee: Vadim Pakhnushev


As of version 5.4.0, openapi-generator doesn't generate a proper request body 
for lists of objects, which is required for IGNITE-19021. Updating to version 6 
solves this problem, but it introduced an 
[issue|https://github.com/OpenAPITools/openapi-generator/issues/14041] that 
manifested itself after the OpenAPI spec generator was updated in IGNITE-19192.
The temporary solution is to not use a generated client for this particular call 
and to write a client for it by hand instead.
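A hand-written client for one call is only a few lines with the JDK's `java.net.http`. This is a sketch under assumptions: the JSON shape and endpoint URL are illustrative, not the actual payload from IGNITE-19021; it only shows serializing a list of objects into the request body by hand, which is exactly the part the generator got wrong.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.util.List;
import java.util.stream.Collectors;

public class ManualClient {
    /** Serializes a list of names into a JSON array body (hypothetical payload shape). */
    public static String listBody(List<String> names) {
        return names.stream()
                .map(n -> "{\"name\":\"" + n + "\"}")
                .collect(Collectors.joining(",", "[", "]"));
    }

    /** Builds a POST request for an illustrative endpoint; sending is left to the caller. */
    public static HttpRequest post(String url, String jsonBody) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(jsonBody))
                .build();
    }
}
```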





[jira] [Updated] (IGNITE-19295) Update openapi-generator

2023-04-17 Thread Vadim Pakhnushev (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Pakhnushev updated IGNITE-19295:
--------------------------------------
Labels: ignite-3  (was: )

> Update openapi-generator
> ------------------------
>
> Key: IGNITE-19295
> URL: https://issues.apache.org/jira/browse/IGNITE-19295
> Project: Ignite
>  Issue Type: Task
>  Components: cli
>Reporter: Vadim Pakhnushev
>Assignee: Vadim Pakhnushev
>Priority: Major
>  Labels: ignite-3
>
> As of version 5.4.0, openapi-generator doesn't generate a proper request body 
> for lists of objects, which is required for IGNITE-19021. Updating to version 6 
> solves this problem, but it introduced an 
> [issue|https://github.com/OpenAPITools/openapi-generator/issues/14041] that 
> manifested itself after the OpenAPI spec generator was updated in IGNITE-19192.
> The temporary solution is to not use a generated client for this particular call 
> and to write a client for it by hand instead.





[jira] [Created] (IGNITE-19294) Union configuration root for several modules

2023-04-17 Thread Mikhail Pochatkin (Jira)
Mikhail Pochatkin created IGNITE-19294:
---------------------------------------

 Summary: Union configuration root for several modules
 Key: IGNITE-19294
 URL: https://issues.apache.org/jira/browse/IGNITE-19294
 Project: Ignite
  Issue Type: New Feature
Reporter: Mikhail Pochatkin








[jira] [Resolved] (IGNITE-18452) CLI should store only configuration keys instead of whole config

2023-04-17 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin resolved IGNITE-18452.

Resolution: Won't Fix

The CLI now needs access to the whole cluster configuration; we cannot reduce it 
to keys only.

> CLI should store only configuration keys instead of whole config
> ----------------------------------------------------------------
>
> Key: IGNITE-18452
> URL: https://issues.apache.org/jira/browse/IGNITE-18452
> Project: Ignite
>  Issue Type: Improvement
>  Components: cli
>Reporter: Ivan Gagarkin
>Priority: Major
>  Labels: ignite-3
>
> Currently, the CLI stores the whole cluster config and node config for 
> completions, but the keys alone would be enough. 





[jira] [Assigned] (IGNITE-19290) Removing garbage from the partition, taking into account the safe time

2023-04-17 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko reassigned IGNITE-19290:


Assignee: Kirill Tkalenko

> Removing garbage from the partition, taking into account the safe time
> ----------------------------------------------------------------------
>
> Key: IGNITE-19290
> URL: https://issues.apache.org/jira/browse/IGNITE-19290
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-beta2
>
>
> At the moment, we remove garbage from the partition on receiving a low 
> watermark, without taking the partition's safe time into account, which 
> is not entirely correct and needs to be fixed.
> Now garbage removal occurs in:
> * 
> *org.apache.ignite.internal.table.distributed.StorageUpdateHandler#executeBatchGc*
> * *org.apache.ignite.internal.table.distributed.StorageUpdateHandler#vacuum*





[jira] [Created] (IGNITE-19293) Validate cluster configuration before cluster initialization

2023-04-17 Thread Mikhail Pochatkin (Jira)
Mikhail Pochatkin created IGNITE-19293:
--

 Summary: Validate cluster configuration before cluster 
initialization
 Key: IGNITE-19293
 URL: https://issues.apache.org/jira/browse/IGNITE-19293
 Project: Ignite
  Issue Type: New Feature
Reporter: Mikhail Pochatkin


We need to validate the cluster configuration before starting the cluster 
initialization process. If the cluster configuration is not valid, initialization 
shouldn't start, and an error with an explanation should be returned.
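The validate-then-init flow can be sketched as below. The checks themselves are hypothetical placeholders (the ticket does not list concrete validation rules); the shape to note is that validation runs first and returns all problems as explanations rather than starting initialization.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class ClusterConfigValidator {
    /**
     * Validates a parsed cluster configuration. Returns human-readable problems;
     * an empty list means initialization may proceed. The specific rules here
     * are illustrative, not Ignite's actual validation logic.
     */
    public static List<String> validate(Map<String, Object> config) {
        List<String> errors = new ArrayList<>();
        for (String key : config.keySet()) {
            if (key.isBlank()) {
                errors.add("configuration contains a blank key");
            }
        }
        Object security = config.get("security");
        if (security != null && !(security instanceof Map)) {
            errors.add("'security' must be an object, got: "
                    + security.getClass().getSimpleName());
        }
        return errors;
    }
}
```

A caller would run `validate(...)` first and, if the list is non-empty, return it to the user instead of starting the init process.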





[jira] [Created] (IGNITE-19292) Sql. JdbcQueryEventHandlerImpl#getWriterWithStackTrace cut stack trace.

2023-04-17 Thread Evgeny Stanilovsky (Jira)
Evgeny Stanilovsky created IGNITE-19292:
----------------------------------------

 Summary: Sql. JdbcQueryEventHandlerImpl#getWriterWithStackTrace 
cut stack trace.
 Key: IGNITE-19292
 URL: https://issues.apache.org/jira/browse/IGNITE-19292
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 3.0.0-beta1
Reporter: Evgeny Stanilovsky


JdbcQueryEventHandlerImpl#getWriterWithStackTrace cuts the stack trace, so JDBC 
errors store a non-informative single-level error, like:
 
{noformat}
java.sql.BatchUpdateException: IGN-CMN-65535 TraceId:c4a76a49-68f0-4621-8e2d-ed7e12d12382 Remote query execution
    at org.apache.ignite.internal.jdbc.JdbcPreparedStatement.executeBatch(JdbcPreparedStatement.java:124)
    at org.apache.ignite.jdbc.ItJdbcBatchSelfTest.test0(ItJdbcBatchSelfTest.java:426){noformat}
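Keeping the remote levels usually means rebuilding the cause chain on the client side instead of flattening it to one message. A generic sketch (not the actual Ignite fix, just the technique):

```java
public class StackTraceUtil {
    /**
     * Rebuilds a throwable chain from outermost to innermost message, so the
     * client-side exception keeps every remote level instead of only the top one.
     */
    public static RuntimeException withChain(String... remoteMessages) {
        RuntimeException cause = null;
        // Walk from the root cause outward, linking each level to the previous one.
        for (int i = remoteMessages.length - 1; i >= 0; i--) {
            cause = new RuntimeException(remoteMessages[i], cause);
        }
        return cause;
    }
}
```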
 

 





[jira] [Created] (IGNITE-19291) Generate default node configuration file on compile-time

2023-04-17 Thread Mikhail Pochatkin (Jira)
Mikhail Pochatkin created IGNITE-19291:
---------------------------------------

 Summary: Generate default node configuration file on compile-time
 Key: IGNITE-19291
 URL: https://issues.apache.org/jira/browse/IGNITE-19291
 Project: Ignite
  Issue Type: New Feature
Reporter: Mikhail Pochatkin


Currently, all configuration defaults exist only in the schema description (in 
code), so it is not clear to users which configuration values are used. It is 
proposed to generate a HOCON configuration file with all defaults at compile 
time. This file should be shipped with all distributions.
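The generated file could look like this illustrative HOCON fragment; the keys and values below are hypothetical, not the real Ignite 3 defaults:

```hocon
# defaults.conf - generated at compile time from the configuration schema.
network {
  port = 3344                      # default declared in the schema annotation
  membership {
    failureDetectionTimeout = 10000
  }
}
rest {
  port = 10300
}
```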





[jira] [Updated] (IGNITE-19077) Apply cutom cluster config on cluster init

2023-04-17 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin updated IGNITE-19077:
---------------------------------------
Summary: Apply cutom cluster config on cluster init  (was: Investigation: 
Apply cutom cluster config on cluster init)

> Apply cutom cluster config on cluster init
> ------------------------------------------
>
> Key: IGNITE-19077
> URL: https://issues.apache.org/jira/browse/IGNITE-19077
> Project: Ignite
>  Issue Type: Task
>  Components: rest
>Reporter: Aleksandr
>Priority: Major
>  Labels: ignite-3
>
> After IGNITE-18576 it is possible to provide the authentication cluster 
> configuration on cluster init. 
> Looking at ClusterManagementGroupManager#onElectedAsLeader we can see that 
> the REST authentication configuration is applied to the distributed configuration 
> on leader election. This happens because there is no other way to put any 
> values into the cluster configuration on init.
> -This leads to the following situation:-
>  - -cluster init in progress, some REST endpoints are blocked 
> (cluster/configuration for example)-
>  - -cluster initialized, REST is available without auth *anybody can use the 
> REST*-
>  - -authentication configuration is applied to the distributed configuration 
> and REST is secured-
> After IGNITE-18943 this is no longer possible, because the configuration REST 
> endpoints are disabled until cluster initialization has successfully finished.
> It is proposed to extend this approach to the whole cluster configuration. Instead 
> of only the cluster authentication configuration, the init endpoint should accept 
> the whole cluster configuration in HOCON format and apply it as it does currently. 
> The CLI should have an option to provide a HOCON file. This file should be read 
> and passed to the init REST endpoint.
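Passing the user's HOCON file through to the init endpoint could look like the sketch below. The endpoint path and JSON field names follow the general shape of Ignite 3's cluster-init REST call but should be treated as assumptions; the sketch only builds the request, it does not send it.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.file.Files;
import java.nio.file.Path;

public class InitWithConfig {
    /** Reads the user's HOCON file and builds the cluster-init POST request. */
    public static HttpRequest initRequest(String nodeUrl, Path hoconFile) throws Exception {
        String clusterConfig = Files.readString(hoconFile);
        // The HOCON travels as an opaque string; the server parses and applies it.
        String body = "{\"metaStorageNodes\":[\"node1\"],"
                + "\"clusterName\":\"cluster\","
                + "\"clusterConfiguration\":" + quote(clusterConfig) + "}";
        return HttpRequest.newBuilder(URI.create(nodeUrl + "/management/v1/cluster/init"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    /** Minimal JSON string escaping for the embedded HOCON text. */
    static String quote(String s) {
        return "\"" + s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n") + "\"";
    }
}
```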





[jira] [Created] (IGNITE-19290) Removing garbage from the partition, taking into account the safe time

2023-04-17 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-19290:


 Summary: Removing garbage from the partition, taking into account 
the safe time
 Key: IGNITE-19290
 URL: https://issues.apache.org/jira/browse/IGNITE-19290
 Project: Ignite
  Issue Type: Improvement
Reporter: Kirill Tkalenko
 Fix For: 3.0.0-beta2


At the moment, we remove garbage from the partition on receiving a low 
watermark, without taking the partition's safe time into account, which is 
not entirely correct and needs to be fixed.

Now garbage removal occurs in:
* 
*org.apache.ignite.internal.table.distributed.StorageUpdateHandler#executeBatchGc*
* *org.apache.ignite.internal.table.distributed.StorageUpdateHandler#vacuum*
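One plausible reading of the proposed fix is that GC must be bounded by both timestamps, roughly as below. This is a hypothetical sketch, not the actual StorageUpdateHandler code, and the exact relationship between the two bounds is an assumption.

```java
public class GcBound {
    /**
     * Garbage below the low watermark is no longer visible to active readers,
     * but the partition has only applied updates up to its safe time, so GC
     * should not run past whichever of the two timestamps is smaller.
     */
    public static long gcUpTo(long lowWatermark, long partitionSafeTime) {
        return Math.min(lowWatermark, partitionSafeTime);
    }
}
```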





[jira] [Updated] (IGNITE-19077) Investigation: Apply cutom cluster config on cluster init

2023-04-17 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin updated IGNITE-19077:
---------------------------------------
Description: 
After IGNITE-18576 it is possible to provide the authentication cluster configuration 
on cluster init. 

Looking at ClusterManagementGroupManager#onElectedAsLeader we can see that the REST 
authentication configuration is applied to the distributed configuration on 
leader election. This happens because there is no other way to put any 
values into the cluster configuration on init.

-This leads to the following situation:-
 - -cluster init in progress, some REST endpoints are blocked 
(cluster/configuration for example)-
 - -cluster initialized, REST is available without auth *anybody can use the 
REST*-
 - -authentication configuration is applied to the distributed configuration 
and REST is secured-

After IGNITE-18943 this is no longer possible, because the configuration REST endpoints 
are disabled until cluster initialization has successfully finished.

It is proposed to extend this approach to the whole cluster configuration. Instead 
of only the cluster authentication configuration, the init endpoint should accept 
the whole cluster configuration in HOCON format and apply it as it does currently. 

The CLI should have an option to provide a HOCON file. This file should be read 
and passed to the init REST endpoint.

  was:
-  

To fix this issue we have to design the solution for "atomic configuration 
initialization" of something like this.

 

After IGNITE-18576 its possible to provide Authentication cluster configuration 
on cluster init. 

Looking at ClusterManagementGroupManager#onElectedAsLeader we can see that REST 
authentication configuration is applied to the distributed configuration on 
leader election. This happens because there is no any other way to put any 
values to the cluster configuration on init.

~~This leads to the following situation:~~
 - cluster init in progress, some REST endpoints are blocked 
(cluster/configuration for example)
 - cluster initialized, REST is available without auth
*anybody can use the REST*
 - authentication configuration is applied to the distributed configuration and 
REST is secured~~


> Investigation: Apply cutom cluster config on cluster init
> ---------------------------------------------------------
>
> Key: IGNITE-19077
> URL: https://issues.apache.org/jira/browse/IGNITE-19077
> Project: Ignite
>  Issue Type: Task
>  Components: rest
>Reporter: Aleksandr
>Priority: Major
>  Labels: ignite-3
>
> After IGNITE-18576 it is possible to provide the authentication cluster 
> configuration on cluster init. 
> Looking at ClusterManagementGroupManager#onElectedAsLeader we can see that 
> the REST authentication configuration is applied to the distributed configuration 
> on leader election. This happens because there is no other way to put any 
> values into the cluster configuration on init.
> -This leads to the following situation:-
>  - -cluster init in progress, some REST endpoints are blocked 
> (cluster/configuration for example)-
>  - -cluster initialized, REST is available without auth *anybody can use the 
> REST*-
>  - -authentication configuration is applied to the distributed configuration 
> and REST is secured-
> After IGNITE-18943 this is no longer possible, because the configuration REST 
> endpoints are disabled until cluster initialization has successfully finished.
> It is proposed to extend this approach to the whole cluster configuration. Instead 
> of only the cluster authentication configuration, the init endpoint should accept 
> the whole cluster configuration in HOCON format and apply it as it does currently. 
> The CLI should have an option to provide a HOCON file. This file should be read 
> and passed to the init REST endpoint.





[jira] [Updated] (IGNITE-19077) Investigation: Apply cutom cluster config on cluster init

2023-04-17 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin updated IGNITE-19077:
---------------------------------------
Description: 
-  

To fix this issue we have to design the solution for "atomic configuration 
initialization" of something like this.

 

After IGNITE-18576 its possible to provide Authentication cluster configuration 
on cluster init. 

Looking at ClusterManagementGroupManager#onElectedAsLeader we can see that REST 
authentication configuration is applied to the distributed configuration on 
leader election. This happens because there is no any other way to put any 
values to the cluster configuration on init.

~~This leads to the following situation:~~
 - cluster init in progress, some REST endpoints are blocked 
(cluster/configuration for example)
 - cluster initialized, REST is available without auth
*anybody can use the REST*
 - authentication configuration is applied to the distributed configuration and 
REST is secured~~

  was:
Looking at ClusterManagementGroupManager#onElectedAsLeader we can see that REST 
authentication configuration is applied to the distributed configuration on 
leader election. This happens because there is no any other way to put any 
values to the cluster configuration on init. This leads to the following 
situation:
 - cluster init in progress, some REST endpoints are blocked 
(cluster/configuration for example)
 - cluster initialized, REST is available without auth
*anybody can use the REST*
 - authentication configuration is applied to the distributed configuration and 
REST is secured

To fix this issue we have to design the solution for "atomic configuration 
initialization" of something like this.

 

After IGNITE-18576 its possible to provide Authentication cluster configuration 
on cluster init. 


> Investigation: Apply cutom cluster config on cluster init
> ---------------------------------------------------------
>
> Key: IGNITE-19077
> URL: https://issues.apache.org/jira/browse/IGNITE-19077
> Project: Ignite
>  Issue Type: Task
>  Components: rest
>Reporter: Aleksandr
>Priority: Major
>  Labels: ignite-3
>
> -  
> To fix this issue we have to design the solution for "atomic configuration 
> initialization" of something like this.
>  
> After IGNITE-18576 its possible to provide Authentication cluster 
> configuration on cluster init. 
> Looking at ClusterManagementGroupManager#onElectedAsLeader we can see that 
> REST authentication configuration is applied to the distributed configuration 
> on leader election. This happens because there is no any other way to put any 
> values to the cluster configuration on init.
> ~~This leads to the following situation:~~
>  - cluster init in progress, some REST endpoints are blocked 
> (cluster/configuration for example)
>  - cluster initialized, REST is available without auth
> *anybody can use the REST*
>  - authentication configuration is applied to the distributed configuration 
> and REST is secured~~





[jira] [Updated] (IGNITE-19077) Investigation: Apply custom cluster config on cluster init

2023-04-17 Thread Mikhail Pochatkin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Pochatkin updated IGNITE-19077:
---
Description: 
Looking at ClusterManagementGroupManager#onElectedAsLeader we can see that REST 
authentication configuration is applied to the distributed configuration on 
leader election. This happens because there is no other way to put values into 
the cluster configuration on init. This leads to the following 
situation:
 - cluster init in progress, some REST endpoints are blocked 
(cluster/configuration for example)
 - cluster initialized, REST is available without auth
*anybody can use the REST*
 - authentication configuration is applied to the distributed configuration and 
REST is secured

To fix this issue we have to design a solution for "atomic configuration 
initialization" or something like this.

 

After IGNITE-18576 it's possible to provide the authentication cluster 
configuration on cluster init. 

  was:
Looking at ClusterManagementGroupManager#onElectedAsLeader we can see that REST 
authentication configuration is applied to the distributed configuration on 
leader election. This happens because there is no any other way to put any 
values to the cluster configuration on init. This leads to the following 
situation:

- cluster init in progress, some REST endpoints are blocked 
(cluster/configuration for example)
- cluster initialized, REST is available without auth 
*anybody can use the REST*
- authentication configuration is applied to the distributed configuration and 
REST is secured

To fix this issue we have to design the solution for "atomic configuration 
initialization" of something like this. 


> Investigation: Apply custom cluster config on cluster init
> -
>
> Key: IGNITE-19077
> URL: https://issues.apache.org/jira/browse/IGNITE-19077
> Project: Ignite
>  Issue Type: Task
>  Components: rest
>Reporter: Aleksandr
>Priority: Major
>  Labels: ignite-3
>
> Looking at ClusterManagementGroupManager#onElectedAsLeader we can see that 
> REST authentication configuration is applied to the distributed configuration 
> on leader election. This happens because there is no other way to put values 
> into the cluster configuration on init. This leads to the following 
> situation:
>  - cluster init in progress, some REST endpoints are blocked 
> (cluster/configuration for example)
>  - cluster initialized, REST is available without auth
> *anybody can use the REST*
>  - authentication configuration is applied to the distributed configuration 
> and REST is secured
> To fix this issue we have to design a solution for "atomic configuration 
> initialization" or something like this.
>  
> After IGNITE-18576 it's possible to provide the authentication cluster 
> configuration on cluster init. 
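The race described above can be sketched with a simple gate (the class and method names here are hypothetical stand-ins, not the actual Ignite 3 REST internals): requests to secured endpoints are rejected until the authentication configuration has been applied, instead of being served without auth in the meantime.

```java
import java.util.concurrent.CompletableFuture;

/**
 * Hypothetical sketch of an "atomic configuration initialization" gate:
 * the REST facade refuses requests until the authentication configuration
 * reaches the distributed configuration, closing the unsecured window.
 */
public class RestAuthGate {
    private final CompletableFuture<Void> authApplied = new CompletableFuture<>();

    /** Called once the authentication configuration is applied to the distributed configuration. */
    public void onAuthConfigApplied() {
        authApplied.complete(null);
    }

    /** Handles a REST request; rejects it while initialization is still in progress. */
    public String handle(String request) {
        if (!authApplied.isDone()) {
            return "503 cluster initialization in progress";
        }
        return "200 " + request;
    }

    public static void main(String[] args) {
        RestAuthGate gate = new RestAuthGate();
        System.out.println(gate.handle("GET /management/v1/configuration")); // rejected: auth not applied yet
        gate.onAuthConfigApplied();
        System.out.println(gate.handle("GET /management/v1/configuration")); // served after auth is in place
    }
}
```

With IGNITE-18576 in place, the same effect could be reached by completing the gate as part of cluster init itself, so there is no window where REST is reachable without auth.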





[jira] [Assigned] (IGNITE-19289) Restoring snapshots fails after Ignite#destroyCache

2023-04-17 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin reassigned IGNITE-19289:
---

Assignee: Maksim Timonin

> Restoring snapshots fails after Ignite#destroyCache
> ---
>
> Key: IGNITE-19289
> URL: https://issues.apache.org/jira/browse/IGNITE-19289
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
>
> Ignite#destroyCache returns before it actually cleans the cache 
> directory. Then 
> {code:java}
> ignite.destroyCache(CACHE)
> ignite.snapshot().restoreSnapshot(...)
> {code}
> might fail with error:
> {code:java}
> Unable to restore cache group - directory is not empty. Cache group should be 
> destroyed manually before perform restore operation [group=CACHE, 
> dir=/data/tcAgent/work/7bc1c54bc719b67c/work/db/incremental_IncrementalSnapshotRestoreTest1/cache-CACHE]{code}





[jira] [Commented] (IGNITE-17383) IdleVerify hangs when called on inactive cluster with persistence

2023-04-17 Thread Julia Bakulina (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713064#comment-17713064
 ] 

Julia Bakulina commented on IGNITE-17383:
-

There need to be two checks: on the cluster state and on the persistent data region.

What is still to be done:
 * 1st check - add a condition for an inactive cluster in 
IdleVerify.execute(), i.e. before executeTaskByNameOnNode(), in order to fail 
fast;
 * 2nd check - already done;
 * if somebody changes the cluster state after the 1st check, handle it the same 
way as other errors, i.e. return code OK and write the error into idle_verify.txt
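The fail-fast part of the plan can be sketched as follows (the ClusterState enum and the exception message are simplified stand-ins, not the actual control.sh internals): idle_verify should refuse to start on an inactive persistent cluster instead of hanging on a checkpoint that never begins.

```java
/**
 * Sketch of the 1st check: bail out before submitting the verification task
 * when the cluster is inactive and persistence is enabled.
 */
public class IdleVerifySketch {
    enum ClusterState { ACTIVE, INACTIVE }

    static String execute(ClusterState state, boolean persistenceEnabled) {
        // Fail fast instead of waiting for a checkpoint that will never start.
        if (persistenceEnabled && state == ClusterState.INACTIVE) {
            throw new IllegalStateException(
                "idle_verify cannot be performed on an inactive cluster");
        }
        return "idle_verify task submitted";
    }

    public static void main(String[] args) {
        System.out.println(execute(ClusterState.ACTIVE, true)); // active cluster: task proceeds
    }
}
```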

> IdleVerify hangs when called on inactive cluster with persistence
> -
>
> Key: IGNITE-17383
> URL: https://issues.apache.org/jira/browse/IGNITE-17383
> Project: Ignite
>  Issue Type: Bug
>  Components: control.sh
>Reporter: Ilya Shishkov
>Assignee: Julia Bakulina
>Priority: Minor
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> When you call {{control.sh --cache idle_verify}} on an inactive cluster with 
> persistence, the control script hangs and no actions are performed. As you can 
> see below in the 'rest' thread dump, {{VerifyBackupPartitionsTaskV2}} waits for 
> a checkpoint start in {{GridCacheDatabaseSharedManager#waitForCheckpoint}}.
> It seems that we can interrupt the task execution and print a message in the 
> control script output that IdleVerify can't work on an inactive cluster.
> {code:title=Thread dump}
> "rest-#82%ignite-server%" #146 prio=5 os_prio=31 tid=0x7fe0cf97c000 
> nid=0x3607 waiting on condition [0x700010149000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.waitForCheckpoint(GridCacheDatabaseSharedManager.java:1869)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.waitForCheckpoint(IgniteCacheDatabaseSharedManager.java:1107)
>   at 
> org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2$VerifyBackupPartitionsJobV2.execute(VerifyBackupPartitionsTaskV2.java:199)
>   at 
> org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2$VerifyBackupPartitionsJobV2.execute(VerifyBackupPartitionsTaskV2.java:171)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:620)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:7366)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:614)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:539)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>   at 
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1343)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.sendRequest(GridTaskWorker.java:1444)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.processMappedJobs(GridTaskWorker.java:674)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.body(GridTaskWorker.java:540)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:860)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:470)
>   at 
> org.apache.ignite.internal.IgniteComputeImpl.executeAsync0(IgniteComputeImpl.java:514)
>   at 
> org.apache.ignite.internal.IgniteComputeImpl.executeAsync(IgniteComputeImpl.java:496)
>   at 
> org.apache.ignite.internal.visor.verify.VisorIdleVerifyJob.run(VisorIdleVerifyJob.java:70)
>   at 
> org.apache.ignite.internal.visor.verify.VisorIdleVerifyJob.run(VisorIdleVerifyJob.java:35)
>   at org.apache.ignite.internal.visor.VisorJob.execute(VisorJob.java:69)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:620)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:7366)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:614)
>   at 
> 

[jira] [Assigned] (IGNITE-17383) IdleVerify hangs when called on inactive cluster with persistence

2023-04-17 Thread Julia Bakulina (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julia Bakulina reassigned IGNITE-17383:
---

Assignee: (was: Julia Bakulina)

> IdleVerify hangs when called on inactive cluster with persistence
> -
>
> Key: IGNITE-17383
> URL: https://issues.apache.org/jira/browse/IGNITE-17383
> Project: Ignite
>  Issue Type: Bug
>  Components: control.sh
>Reporter: Ilya Shishkov
>Priority: Minor
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> When you call {{control.sh --cache idle_verify}} on an inactive cluster with 
> persistence, the control script hangs and no actions are performed. As you can 
> see below in the 'rest' thread dump, {{VerifyBackupPartitionsTaskV2}} waits for 
> a checkpoint start in {{GridCacheDatabaseSharedManager#waitForCheckpoint}}.
> It seems that we can interrupt the task execution and print a message in the 
> control script output that IdleVerify can't work on an inactive cluster.
> {code:title=Thread dump}
> "rest-#82%ignite-server%" #146 prio=5 os_prio=31 tid=0x7fe0cf97c000 
> nid=0x3607 waiting on condition [0x700010149000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.waitForCheckpoint(GridCacheDatabaseSharedManager.java:1869)
>   at 
> org.apache.ignite.internal.processors.cache.persistence.IgniteCacheDatabaseSharedManager.waitForCheckpoint(IgniteCacheDatabaseSharedManager.java:1107)
>   at 
> org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2$VerifyBackupPartitionsJobV2.execute(VerifyBackupPartitionsTaskV2.java:199)
>   at 
> org.apache.ignite.internal.processors.cache.verify.VerifyBackupPartitionsTaskV2$VerifyBackupPartitionsJobV2.execute(VerifyBackupPartitionsTaskV2.java:171)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:620)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:7366)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:614)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:539)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>   at 
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1343)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.sendRequest(GridTaskWorker.java:1444)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.processMappedJobs(GridTaskWorker.java:674)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.body(GridTaskWorker.java:540)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:860)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:470)
>   at 
> org.apache.ignite.internal.IgniteComputeImpl.executeAsync0(IgniteComputeImpl.java:514)
>   at 
> org.apache.ignite.internal.IgniteComputeImpl.executeAsync(IgniteComputeImpl.java:496)
>   at 
> org.apache.ignite.internal.visor.verify.VisorIdleVerifyJob.run(VisorIdleVerifyJob.java:70)
>   at 
> org.apache.ignite.internal.visor.verify.VisorIdleVerifyJob.run(VisorIdleVerifyJob.java:35)
>   at org.apache.ignite.internal.visor.VisorJob.execute(VisorJob.java:69)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:620)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:7366)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:614)
>   at 
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:539)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125)
>   at 
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1343)
>   at 
> org.apache.ignite.internal.processors.task.GridTaskWorker.sendRequest(GridTaskWorker.java:1444)
>   at 
> 

[jira] [Updated] (IGNITE-19281) Fix flaky SnapshotMXBeanTest#testStatus

2023-04-17 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-19281:

Description: The test uses IgniteSnapshotManager#restoringSnapshotName to 
verify that the snapshot has started. But this method isn't correct: it also 
checks the local future, while it should check only opCtx (as other similar 
methods do)

> Fix flaky SnapshotMXBeanTest#testStatus
> ---
>
> Key: IGNITE-19281
> URL: https://issues.apache.org/jira/browse/IGNITE-19281
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The test uses IgniteSnapshotManager#restoringSnapshotName to verify that the 
> snapshot has started. But this method isn't correct: it also checks the local 
> future, while it should check only opCtx (as other similar methods do)





[jira] [Resolved] (IGNITE-19281) Fix flaky SnapshotMXBeanTest#testStatus

2023-04-17 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin resolved IGNITE-19281.
-
  Reviewer: Nikita Amelchev
Resolution: Fixed

[~NSAmelchev] thanks for the review! Merged to master

> Fix flaky SnapshotMXBeanTest#testStatus
> ---
>
> Key: IGNITE-19281
> URL: https://issues.apache.org/jira/browse/IGNITE-19281
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>






[jira] [Updated] (IGNITE-19281) Fix flaky SnapshotMXBeanTest#testStatus

2023-04-17 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-19281:

Labels: ise  (was: )

> Fix flaky SnapshotMXBeanTest#testStatus
> ---
>
> Key: IGNITE-19281
> URL: https://issues.apache.org/jira/browse/IGNITE-19281
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Labels: ise
> Fix For: 2.16
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>






[jira] [Updated] (IGNITE-19281) Fix flaky SnapshotMXBeanTest#testStatus

2023-04-17 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-19281:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Fix flaky SnapshotMXBeanTest#testStatus
> ---
>
> Key: IGNITE-19281
> URL: https://issues.apache.org/jira/browse/IGNITE-19281
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>






[jira] [Updated] (IGNITE-19281) Fix flaky SnapshotMXBeanTest#testStatus

2023-04-17 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-19281:

Fix Version/s: 2.16

> Fix flaky SnapshotMXBeanTest#testStatus
> ---
>
> Key: IGNITE-19281
> URL: https://issues.apache.org/jira/browse/IGNITE-19281
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
> Fix For: 2.16
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>






[jira] [Updated] (IGNITE-19289) Restoring snapshots fails after Ignite#destroyCache

2023-04-17 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-19289:

Ignite Flags:   (was: Docs Required,Release Notes Required)

> Restoring snapshots fails after Ignite#destroyCache
> ---
>
> Key: IGNITE-19289
> URL: https://issues.apache.org/jira/browse/IGNITE-19289
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Timonin
>Priority: Major
>
> Ignite#destroyCache returns before it actually cleans the cache 
> directory. Then 
> {code:java}
> ignite.destroyCache(CACHE)
> ignite.snapshot().restoreSnapshot(...)
> {code}
> might fail with error:
> {code:java}
> Unable to restore cache group - directory is not empty. Cache group should be 
> destroyed manually before perform restore operation [group=CACHE, 
> dir=/data/tcAgent/work/7bc1c54bc719b67c/work/db/incremental_IncrementalSnapshotRestoreTest1/cache-CACHE]{code}





[jira] [Updated] (IGNITE-17383) IdleVerify hangs when called on inactive cluster with persistence

2023-04-17 Thread Julia Bakulina (Jira)


[jira] [Updated] (IGNITE-19289) Restoring snapshots fails after Ignite#destroyCache

2023-04-17 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-19289:

Labels: ise  (was: )

> Restoring snapshots fails after Ignite#destroyCache
> ---
>
> Key: IGNITE-19289
> URL: https://issues.apache.org/jira/browse/IGNITE-19289
> Project: Ignite
>  Issue Type: Bug
>Reporter: Maksim Timonin
>Priority: Major
>  Labels: ise
>
> Ignite#destroyCache returns before it actually cleans the cache 
> directory. Then 
> {code:java}
> ignite.destroyCache(CACHE)
> ignite.snapshot().restoreSnapshot(...)
> {code}
> might fail with error:
> {code:java}
> Unable to restore cache group - directory is not empty. Cache group should be 
> destroyed manually before perform restore operation [group=CACHE, 
> dir=/data/tcAgent/work/7bc1c54bc719b67c/work/db/incremental_IncrementalSnapshotRestoreTest1/cache-CACHE]{code}





[jira] [Created] (IGNITE-19289) Restoring snapshots fails after Ignite#destroyCache

2023-04-17 Thread Maksim Timonin (Jira)
Maksim Timonin created IGNITE-19289:
---

 Summary: Restoring snapshots fails after Ignite#destroyCache
 Key: IGNITE-19289
 URL: https://issues.apache.org/jira/browse/IGNITE-19289
 Project: Ignite
  Issue Type: Bug
Reporter: Maksim Timonin


Ignite#destroyCache returns before it actually cleans the cache 
directory. Then 
{code:java}
ignite.destroyCache(CACHE)
ignite.snapshot().restoreSnapshot(...)
{code}
might fail with error:
{code:java}
Unable to restore cache group - directory is not empty. Cache group should be 
destroyed manually before perform restore operation [group=CACHE, 
dir=/data/tcAgent/work/7bc1c54bc719b67c/work/db/incremental_IncrementalSnapshotRestoreTest1/cache-CACHE]{code}
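A client-side workaround for this race might look like the sketch below (the polling helper is an assumption for illustration, not an Ignite API): before calling restoreSnapshot, wait until the asynchronous destroy has actually emptied the cache group directory.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Hypothetical helper: poll the cache group directory until the destroy
 * has removed or emptied it, so a subsequent snapshot restore does not fail
 * with "directory is not empty".
 */
public class WaitForCacheDirCleanup {
    static boolean waitUntilEmpty(Path cacheDir, long timeoutMs)
            throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (!Files.exists(cacheDir)) {
                return true; // directory removed entirely
            }
            try (DirectoryStream<Path> files = Files.newDirectoryStream(cacheDir)) {
                if (!files.iterator().hasNext()) {
                    return true; // directory exists but is already empty
                }
            }
            Thread.sleep(100); // destroy still cleaning up; retry
        }
        return false; // timed out: restoring now would fail as in the error above
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("cache-CACHE");
        System.out.println(waitUntilEmpty(dir, 1_000)); // empty temp dir -> true
    }
}
```

A proper fix would instead make destroyCache (or the restore path) observe the directory cleanup itself, but the sketch shows the condition that currently has to hold before restoreSnapshot can succeed.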





[jira] [Updated] (IGNITE-18961) Implement lease prolongation on placement driver side

2023-04-17 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-18961:
---
Description: 
A part of the replication groups' processing logic described in IGNITE-18742 
should be lease prolongation. 

The placement driver should prolong existing leases, saved to a local in-memory 
map, whose lease time is about to expire. Let's say the lease should be 
prolonged when there is less than leaseInterval / 2 time left before 
expiration. Prolongation should be done via a meta storage invoke, without 
direct communication with replicas.

  was:
A part of replication groups' processing logic, described in IGNITE-18742 
should be the lease prolongation. 

Th placement driver should prolong existing leases, saved to local in-memory 
map, which lease time is going to expire. Let say, the lease should be 
prolonged when there is less than leaseInterval / 2 time left before 
expiration. Prolongation should be done via making meta storage invoke, without 
direct communication with replicas.


> Implement lease prolongation on placement driver side
> -
>
> Key: IGNITE-18961
> URL: https://issues.apache.org/jira/browse/IGNITE-18961
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> A part of the replication groups' processing logic described in IGNITE-18742 
> should be lease prolongation. 
> The placement driver should prolong existing leases, saved to a local 
> in-memory map, whose lease time is about to expire. Let's say the lease should 
> be prolonged when there is less than leaseInterval / 2 time left before 
> expiration. Prolongation should be done via a meta storage invoke, without 
> direct communication with replicas.





[jira] [Assigned] (IGNITE-18961) Implement lease prolongation on placement driver side

2023-04-17 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov reassigned IGNITE-18961:
--

Assignee: Vladislav Pyatkov

> Implement lease prolongation on placement driver side
> -
>
> Key: IGNITE-18961
> URL: https://issues.apache.org/jira/browse/IGNITE-18961
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Denis Chudov
>Assignee: Vladislav Pyatkov
>Priority: Major
>  Labels: ignite-3
>
> A part of the replication groups' processing logic described in IGNITE-18742 
> should be lease prolongation. 
> The placement driver should prolong existing leases, saved to a local 
> in-memory map, whose lease time is about to expire. Let's say the lease should 
> be prolonged when there is less than leaseInterval / 2 time left before 
> expiration. Prolongation should be done via a meta storage invoke, without 
> direct communication with replicas.





[jira] [Commented] (IGNITE-19281) Fix flaky SnapshotMXBeanTest#testStatus

2023-04-17 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17713055#comment-17713055
 ] 

Ignite TC Bot commented on IGNITE-19281:


{panel:title=Branch: [pull/10646/head] Base: [master] : Possible Blockers 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Platform .NET (Core Linux){color} [[tests 0 TIMEOUT , Exit Code 
, TC_SERVICE_MESSAGE 
|https://ci2.ignite.apache.org/viewLog.html?buildId=7141740]]

{panel}
{panel:title=Branch: [pull/10646/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7137448buildTypeId=IgniteTests24Java8_RunAll]

> Fix flaky SnapshotMXBeanTest#testStatus
> ---
>
> Key: IGNITE-19281
> URL: https://issues.apache.org/jira/browse/IGNITE-19281
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>






[jira] [Commented] (IGNITE-18756) Awaiting for nodes are appeared in distribution zone data nodes

2023-04-17 Thread Vladislav Pyatkov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17712951#comment-17712951
 ] 

Vladislav Pyatkov commented on IGNITE-18756:


[~Sergey Uttsel] thank you for the contribution.
Merged 6ada47270d21795ba583d6be415bf91f5c64e319

> Awaiting for nodes are appeared in distribution zone data nodes
> ---
>
> Key: IGNITE-18756
> URL: https://issues.apache.org/jira/browse/IGNITE-18756
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Uttsel
>Assignee: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> h3. Motivation
> Awaiting for nodes to appear in the distribution zone data nodes.
> Now, when nodes have started and joined the physical topology, that is enough 
> to create a table with assignments on these nodes. When we get rid of 
> BaselineManager#nodes, the data nodes of the distribution zone will be used to 
> create assignments for partitions.
> In order for the nodes from the physical topology to be in the assignments 
> when a table is being created, we need to wait until these nodes are added to 
> the data nodes of the distribution zone.
> h3. Definition of Done
> At the time of table creation, data nodes from the physical topology must 
> also be in the data nodes of the distribution zone.
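The awaited condition can be sketched as a simple set check (plain string node names and a hypothetical helper, not the actual DistributionZoneManager API): a table may be created only once every node from the physical topology is present in the zone's data nodes.

```java
import java.util.Set;

/**
 * Hypothetical sketch of the Definition of Done: assignments may be computed
 * only when the zone's data nodes already contain the physical topology.
 */
public class DataNodesAwait {
    static boolean readyForAssignments(Set<String> physicalTopology, Set<String> dataNodes) {
        return dataNodes.containsAll(physicalTopology);
    }

    public static void main(String[] args) {
        Set<String> topology = Set.of("node1", "node2");
        // node2 has joined the physical topology but not yet the data nodes: wait.
        System.out.println(readyForAssignments(topology, Set.of("node1")));          // false
        // both nodes are in the data nodes: safe to create table assignments.
        System.out.println(readyForAssignments(topology, Set.of("node1", "node2"))); // true
    }
}
```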





[jira] [Updated] (IGNITE-18756) Awaiting for nodes are appeared in distribution zone data nodes

2023-04-17 Thread Vladislav Pyatkov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladislav Pyatkov updated IGNITE-18756:
---
Summary: Awaiting for nodes are appeared in distribution zone data nodes  
(was: Awaiting for nodes are appeared in distribution zone data nodes.)

> Awaiting for nodes are appeared in distribution zone data nodes
> ---
>
> Key: IGNITE-18756
> URL: https://issues.apache.org/jira/browse/IGNITE-18756
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Sergey Uttsel
>Assignee: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> h3. Motivation
> Awaiting for nodes to appear in the distribution zone data nodes.
> Now, when nodes have started and joined the physical topology, that is enough 
> to create a table with assignments on these nodes. When we get rid of 
> BaselineManager#nodes, the data nodes of the distribution zone will be used to 
> create assignments for partitions.
> In order for the nodes from the physical topology to be in the assignments 
> when a table is being created, we need to wait until these nodes are added to 
> the data nodes of the distribution zone.
> h3. Definition of Done
> At the time of table creation, data nodes from the physical topology must 
> also be in the data nodes of the distribution zone.





[jira] [Updated] (IGNITE-17599) Calcite engine. Support LocalDate/LocalTime types

2023-04-17 Thread Evgeny Stanilovsky (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Stanilovsky updated IGNITE-17599:

Labels: calcite ise  (was: calcite calcite3-required ise)

> Calcite engine. Support LocalDate/LocalTime types
> -
>
> Key: IGNITE-17599
> URL: https://issues.apache.org/jira/browse/IGNITE-17599
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 2.14
>Reporter: Aleksey Plekhanov
>Assignee: Ivan Daschinsky
>Priority: Major
>  Labels: calcite, ise
> Fix For: 2.15
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The H2-based engine works with LocalDate/LocalTime types as 
> java.sql.Date/java.sql.Time types. To check whether a value of the LocalDate 
> type can be inserted into a descriptor with the java.sql.Date type, some logic 
> from {{IgniteH2Indexing.isConvertibleToColumnType}} is used. If the Calcite 
> engine is used without ignite-indexing, this logic is unavailable.
> We should:
>  # Provide an ability to work in the Calcite-based engine with 
> LocalDate/LocalTime types in the same way as java.sql.Date/java.sql.Time types.
>  # Move the {{IgniteH2Indexing.isConvertibleToColumnType}} logic to the core 
> module (perhaps delegating this call from the core to the QueryEngine 
> instance)
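The mapping the issue relies on can be shown with plain JDK calls; this is only the standard java.time / java.sql conversion, not the engine's descriptor-conversion logic.

```java
import java.sql.Date;
import java.sql.Time;
import java.time.LocalDate;
import java.time.LocalTime;

/**
 * LocalDate and (second-precision) LocalTime round-trip losslessly through
 * java.sql.Date and java.sql.Time, which is what lets an engine treat the
 * java.time values the same way as the java.sql ones.
 */
public class SqlTimeMapping {
    public static void main(String[] args) {
        LocalDate localDate = LocalDate.of(2023, 4, 17);
        Date sqlDate = Date.valueOf(localDate);                      // java.time -> java.sql
        System.out.println(sqlDate.toLocalDate().equals(localDate)); // true

        LocalTime localTime = LocalTime.of(12, 30, 15);
        Time sqlTime = Time.valueOf(localTime);                      // note: seconds precision only
        System.out.println(sqlTime.toLocalTime().equals(localTime)); // true
    }
}
```

Note that java.sql.Time keeps only second precision, so LocalTime values with a nanosecond component cannot be represented this way; that is one reason the conversion check has to live somewhere both engines can reach.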


