[jira] [Assigned] (IGNITE-17091) Implement status command

2022-06-22 Thread Aleksandr (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr reassigned IGNITE-17091:
--

Assignee: Aleksandr

> Implement status command
> 
>
> Key: IGNITE-17091
> URL: https://issues.apache.org/jira/browse/IGNITE-17091
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr
>Assignee: Aleksandr
>Priority: Major
>  Labels: ignite-3, ignite-3-cli-tool
>
> Currently, "ignite status" displays the status based on locally running nodes 
> and some hacks. There should be a special endpoint to get the cluster status, 
> and the CLI should call this endpoint.
>  * Create a REST endpoint
>  * Implement the status command based on the new endpoint
> The status command has to display at least: the cluster name, the number of 
> nodes, and whether the cluster is initialized.
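A minimal sketch of the data the new endpoint should return and how the CLI could render it. The `ClusterStatus` shape and `render` method are assumptions based on the ticket's requirements (cluster name, node count, initialized flag), not the actual Ignite 3 API:

```java
// Hypothetical model of the cluster status returned by the new REST endpoint.
// Field names are illustrative, not Ignite's real API.
class ClusterStatus {
    final String clusterName;
    final int nodeCount;
    final boolean initialized;

    ClusterStatus(String clusterName, int nodeCount, boolean initialized) {
        this.clusterName = clusterName;
        this.nodeCount = nodeCount;
        this.initialized = initialized;
    }

    // Render the status the way an "ignite status" CLI command could print it.
    String render() {
        return String.format("[name: %s, nodes: %d, status: %s]",
                clusterName, nodeCount, initialized ? "initialized" : "not initialized");
    }
}
```

The CLI would fetch this object from the endpoint instead of inspecting local processes.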



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-17184) Ignite OSGi Karaf 2.13.0 doesn't exist in maven central

2022-06-22 Thread Valeria Borodina (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557695#comment-17557695
 ] 

Valeria Borodina commented on IGNITE-17184:
---

You can look at camel-examples: they created a new repository 
[https://github.com/apache/camel-karaf] for everything related to Karaf.

Also, Apache Camel has an Ignite component, and they have an example that uses 
it in Karaf.

> Ignite OSGi Karaf 2.13.0 doesn't exist in maven central
> ---
>
> Key: IGNITE-17184
> URL: https://issues.apache.org/jira/browse/IGNITE-17184
> Project: Ignite
>  Issue Type: Bug
>  Components: osgi
>Affects Versions: 2.13
>Reporter: Valeria Borodina
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17216) Ignite modules in maven central aren't "bundle".

2022-06-22 Thread Valeria Borodina (Jira)
Valeria Borodina created IGNITE-17216:
-

 Summary: Ignite modules in maven central aren't "bundle".
 Key: IGNITE-17216
 URL: https://issues.apache.org/jira/browse/IGNITE-17216
 Project: Ignite
  Issue Type: Bug
Affects Versions: 2.13, 2.12
Reporter: Valeria Borodina


In OSGi we use bundles, not just JARs. Ignite-osgi-karaf also performs 
installation as bundles.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (IGNITE-17212) Sql. Add support for DEFAULT operator

2022-06-22 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov reassigned IGNITE-17212:
-

Assignee: Konstantin Orlov

> Sql. Add support for DEFAULT operator
> -
>
> Key: IGNITE-17212
> URL: https://issues.apache.org/jira/browse/IGNITE-17212
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> We need to support DEFAULT operator. This is technically a port of 
> IGNITE-16018



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-16955) Improve logging of rebalance process

2022-06-22 Thread Vyacheslav Koptilin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557555#comment-17557555
 ] 

Vyacheslav Koptilin commented on IGNITE-16955:
--

Hello [~maliev],

Could you please take a look at the PR?

> Improve logging of rebalance process
> 
>
> Key: IGNITE-16955
> URL: https://issues.apache.org/jira/browse/IGNITE-16955
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> We must improve logging of rebalance logic with the following points:
>  - log triggers of rebalance (change of number of replicas at the moment)
>  - log receiving of changes on pending/stable keys
>  - log start/stop raft nodes with the info about the reason
>  - log the result of multi-invoke call (we can use `yield` to detect which 
> branch was executed)
>  - log the call of changePeersAsync on the client side
>  - log the real error of the call above, if error occurred (we need to change 
> the error reporting of sendWithRetry method to support this point)
>  - check if extensions of raft logs are needed for changes in raft group 
> configuration
>  - log the progress of raft replication during rebalance (it can be tricky, 
> because the replicator has no information about the reason for the 
> replication)
>  - logging for 
> onLeaderElected/onNewPeersConfigurationApplied/onReconfigurationError
> All logs should contain the most detailed context:
>  * table
>  * partition
>  * metastorage keys' values (old/new)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16955) Improve logging of rebalance process

2022-06-22 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-16955:
-
Reviewer: Mirza Aliev

> Improve logging of rebalance process
> 
>
> Key: IGNITE-16955
> URL: https://issues.apache.org/jira/browse/IGNITE-16955
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Gusakov
>Assignee: Kirill Gusakov
>Priority: Major
>  Labels: ignite-3
>
> We must improve logging of rebalance logic with the following points:
>  - log triggers of rebalance (change of number of replicas at the moment)
>  - log receiving of changes on pending/stable keys
>  - log start/stop raft nodes with the info about the reason
>  - log the result of multi-invoke call (we can use `yield` to detect which 
> branch was executed)
>  - log the call of changePeersAsync on the client side
>  - log the real error of the call above, if error occurred (we need to change 
> the error reporting of sendWithRetry method to support this point)
>  - check if extensions of raft logs are needed for changes in raft group 
> configuration
>  - log the progress of raft replication during rebalance (it can be tricky, 
> because the replicator has no information about the reason for the 
> replication)
>  - logging for 
> onLeaderElected/onNewPeersConfigurationApplied/onReconfigurationError
> All logs should contain the most detailed context:
>  * table
>  * partition
>  * metastorage keys' values (old/new)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17215) Write ClusterSnapshotRecord to WAL

2022-06-22 Thread Maksim Timonin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maksim Timonin updated IGNITE-17215:

Description: 
For PITR [1], the recovery process is based on a ClusterSnapshot plus archived 
WALs.

It's required to have a point in the WAL which splits the whole WAL into two 
areas:
 # Before this point, all data changes are contained within the ClusterSnapshot, 
and there is no need to recover them from archived WAL files.
 # After this point, all data needs to be recovered from archived WAL files.

It's proposed to write the ClusterSnapshotRecord at the moment the checkpoint 
process starts (after cp#writeLock has been acquired). The ClusterSnapshot 
process guarantees:
 # there are no active transactions (or any data changes) at the moment the 
begin CheckpointRecord is written.
 # the ClusterSnapshot consists of data pages that are materialized within this 
checkpoint process.

Then every logical record after the begin CheckpointRecord doesn't belong to 
the ClusterSnapshot, so it's safe to write the ClusterSnapshotRecord alongside 
the CheckpointRecord.

 

[1] [https://cwiki.apache.org/confluence/pages/editpage.action?pageId=211884314]

  was:
For PITR [1], the recovery process is based on a ClusterSnapshot plus archived 
WALs.

It's required to have a point in the WAL which splits the whole WAL into two 
areas:
 # Before this point, all data changes are contained within the ClusterSnapshot, 
and there is no need to recover them from archived WAL files.
 # After this point, all data needs to be recovered from archived WAL files.

It's proposed to write the ClusterSnapshotRecord at the moment the begin 
CheckpointRecord is written to the WAL. The ClusterSnapshot process guarantees:
 # there are no active transactions (or any data changes) at the moment the 
begin CheckpointRecord is written.
 # the ClusterSnapshot consists of data pages that are materialized within this 
checkpoint process.

Then every logical record after the begin CheckpointRecord doesn't belong to 
the ClusterSnapshot, so it's safe to write the ClusterSnapshotRecord alongside 
the CheckpointRecord.

 

[1] https://cwiki.apache.org/confluence/pages/editpage.action?pageId=211884314


> Write ClusterSnapshotRecord to WAL
> --
>
> Key: IGNITE-17215
> URL: https://issues.apache.org/jira/browse/IGNITE-17215
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Maksim Timonin
>Assignee: Maksim Timonin
>Priority: Major
>
> For PITR [1], the recovery process is based on a ClusterSnapshot plus 
> archived WALs.
> It's required to have a point in the WAL which splits the whole WAL into two 
> areas:
>  # Before this point, all data changes are contained within the 
> ClusterSnapshot, and there is no need to recover them from archived WAL files.
>  # After this point, all data needs to be recovered from archived WAL files.
> It's proposed to write the ClusterSnapshotRecord at the moment the checkpoint 
> process starts (after cp#writeLock has been acquired). The ClusterSnapshot 
> process guarantees:
>  # there are no active transactions (or any data changes) at the moment the 
> begin CheckpointRecord is written.
>  # the ClusterSnapshot consists of data pages that are materialized within 
> this checkpoint process.
> Then every logical record after the begin CheckpointRecord doesn't belong to 
> the ClusterSnapshot, so it's safe to write the ClusterSnapshotRecord 
> alongside the CheckpointRecord.
>  
> [1] 
> [https://cwiki.apache.org/confluence/pages/editpage.action?pageId=211884314]
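The split-point idea above can be sketched as follows. The `WalRecord` model and `replayAfterSnapshot` method are illustrative assumptions, not Ignite's actual WAL API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical WAL record: a ClusterSnapshotRecord marks the point before
// which all changes are already contained in the ClusterSnapshot.
class WalRecord {
    final boolean isClusterSnapshotMarker;
    final String payload;

    WalRecord(boolean marker, String payload) {
        this.isClusterSnapshotMarker = marker;
        this.payload = payload;
    }
}

class PitrRecovery {
    // Replay only the records after the ClusterSnapshotRecord; earlier
    // records are covered by the snapshot and must not be re-applied.
    static List<String> replayAfterSnapshot(List<WalRecord> wal) {
        List<String> applied = new ArrayList<>();
        boolean pastMarker = false;
        for (WalRecord rec : wal) {
            if (rec.isClusterSnapshotMarker) {
                pastMarker = true;
                continue;
            }
            if (pastMarker) {
                applied.add(rec.payload);
            }
        }
        return applied;
    }
}
```

The marker splits the log exactly as the two areas described in the ticket: everything before it is ignored, everything after it is replayed.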



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17215) Write ClusterSnapshotRecord to WAL

2022-06-22 Thread Maksim Timonin (Jira)
Maksim Timonin created IGNITE-17215:
---

 Summary: Write ClusterSnapshotRecord to WAL
 Key: IGNITE-17215
 URL: https://issues.apache.org/jira/browse/IGNITE-17215
 Project: Ignite
  Issue Type: New Feature
Reporter: Maksim Timonin
Assignee: Maksim Timonin


For PITR [1], the recovery process is based on a ClusterSnapshot plus archived 
WALs.

It's required to have a point in the WAL which splits the whole WAL into two 
areas:
 # Before this point, all data changes are contained within the ClusterSnapshot, 
and there is no need to recover them from archived WAL files.
 # After this point, all data needs to be recovered from archived WAL files.

It's proposed to write the ClusterSnapshotRecord at the moment the begin 
CheckpointRecord is written to the WAL. The ClusterSnapshot process guarantees:
 # there are no active transactions (or any data changes) at the moment the 
begin CheckpointRecord is written.
 # the ClusterSnapshot consists of data pages that are materialized within this 
checkpoint process.

Then every logical record after the begin CheckpointRecord doesn't belong to 
the ClusterSnapshot, so it's safe to write the ClusterSnapshotRecord alongside 
the CheckpointRecord.

 

[1] https://cwiki.apache.org/confluence/pages/editpage.action?pageId=211884314



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (IGNITE-17213) Sql. Refactoring of SQL dialects and supported functions enumeration

2022-06-22 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov reassigned IGNITE-17213:
-

Assignee: Konstantin Orlov

> Sql. Refactoring of SQL dialects and supported functions enumeration
> 
>
> Key: IGNITE-17213
> URL: https://issues.apache.org/jira/browse/IGNITE-17213
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> Need to introduce an Ignite dialect with all supported functions enumerated 
> in a single place. This ticket is a port of IGNITE-15128.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17214) Implement HLC

2022-06-22 Thread Sergey Uttsel (Jira)
Sergey Uttsel created IGNITE-17214:
--

 Summary: Implement HLC
 Key: IGNITE-17214
 URL: https://issues.apache.org/jira/browse/IGNITE-17214
 Project: Ignite
  Issue Type: Task
Reporter: Sergey Uttsel
Assignee: Sergey Uttsel






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17214) Implement HLC

2022-06-22 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel updated IGNITE-17214:
---
  Docs Text:   (was: Need to implement Hybrid Logical Clocks, which combine 
logical clocks and physical clocks.)
Description: Need to implement Hybrid Logical Clocks, which combine logical 
clocks and physical clocks.

> Implement HLC
> -
>
> Key: IGNITE-17214
> URL: https://issues.apache.org/jira/browse/IGNITE-17214
> Project: Ignite
>  Issue Type: Task
>Reporter: Sergey Uttsel
>Assignee: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
>
> Need to implement Hybrid Logical Clocks, which combine logical clocks and 
> physical clocks.
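For reference, a minimal sketch of the classic HLC algorithm the ticket refers to. Names and the `long[]{physical, logical}` timestamp encoding are illustrative assumptions, not Ignite's eventual implementation:

```java
// Hypothetical Hybrid Logical Clock sketch: a timestamp pairs a physical
// component (wall-clock millis) with a logical counter used to break ties
// when the physical clock doesn't advance.
class HybridClock {
    private long physical; // latest observed physical time (ms)
    private long logical;  // logical counter within the same physical tick

    // Produce a timestamp for a local or send event.
    synchronized long[] now() {
        long wall = System.currentTimeMillis();
        if (wall > physical) {
            physical = wall;
            logical = 0;
        } else {
            logical++; // physical time didn't advance; bump the counter
        }
        return new long[] {physical, logical};
    }

    // Advance the clock on receiving a remote timestamp, so the result is
    // greater than both the local clock and the remote timestamp.
    synchronized long[] update(long remotePhysical, long remoteLogical) {
        long wall = System.currentTimeMillis();
        long maxPhysical = Math.max(wall, Math.max(physical, remotePhysical));
        if (maxPhysical == physical && maxPhysical == remotePhysical) {
            logical = Math.max(logical, remoteLogical) + 1;
        } else if (maxPhysical == physical) {
            logical++;
        } else if (maxPhysical == remotePhysical) {
            logical = remoteLogical + 1;
        } else {
            logical = 0; // wall clock alone is strictly ahead
        }
        physical = maxPhysical;
        return new long[] {physical, logical};
    }
}
```

Successive `now()` calls are strictly increasing, and `update()` keeps the clock ahead of any remote timestamp it has observed.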



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17213) Sql. Refactoring of SQL dialects and supported functions enumeration

2022-06-22 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-17213:
--
Labels: ignite-3  (was: )

> Sql. Refactoring of SQL dialects and supported functions enumeration
> 
>
> Key: IGNITE-17213
> URL: https://issues.apache.org/jira/browse/IGNITE-17213
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> Need to introduce an Ignite dialect with all supported functions enumerated 
> in a single place. This ticket is a port of IGNITE-15128.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17213) Sql. Refactoring of SQL dialects and supported functions enumeration

2022-06-22 Thread Konstantin Orlov (Jira)
Konstantin Orlov created IGNITE-17213:
-

 Summary: Sql. Refactoring of SQL dialects and supported functions 
enumeration
 Key: IGNITE-17213
 URL: https://issues.apache.org/jira/browse/IGNITE-17213
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Konstantin Orlov


Need to introduce an Ignite dialect with all supported functions enumerated in 
a single place. This ticket is a port of IGNITE-15128.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16870) Extend Schema with ability to specify function as default value generator

2022-06-22 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-16870:
--
Component/s: sql

> Extend Schema with ability to specify function as default value generator
> -
>
> Key: IGNITE-16870
> URL: https://issues.apache.org/jira/browse/IGNITE-16870
> Project: Ignite
>  Issue Type: New Feature
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> In order to make IGNITE-16860 possible, we need to add the ability to specify 
> a function as a default value generator. It's worth noting that the behavior 
> of the KV API and SQL should be consistent, thus this feature should be moved 
> from the SQL runtime to a common place.
> Within this task we need to extend {{ColumnConfigurationSchema}} in order to 
> support several types of default value generators (constant and function for 
> now), as well as introduce a new default value supplier for {{Column}}.
> As a first step, I would propose to support only a few predefined system 
> functions. This could possibly be extended to support an arbitrary function 
> though.
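A hedged sketch of the two kinds of default value suppliers the description mentions. The `DefaultValueSupplier` name and its factory methods are illustrative assumptions, not the actual Ignite 3 schema API:

```java
import java.util.UUID;
import java.util.function.Supplier;

// Hypothetical: a column default is either a constant or a function call.
interface DefaultValueSupplier<T> {
    T get();

    // Constant default, e.g. DEFAULT 0.
    static <T> DefaultValueSupplier<T> constant(T value) {
        return () -> value;
    }

    // Functional default backed by a predefined system function,
    // e.g. a UUID generator.
    static <T> DefaultValueSupplier<T> functional(Supplier<T> fn) {
        return fn::get;
    }
}
```

Both KV writes and SQL INSERTs would consult the same supplier, which is the consistency requirement the description calls out.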



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-16870) Extend Schema with ability to specify function as default value generator

2022-06-22 Thread Konstantin Orlov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557509#comment-17557509
 ] 

Konstantin Orlov commented on IGNITE-16870:
---

The issue is blocked by the lack of DEFAULT operator support.

> Extend Schema with ability to specify function as default value generator
> -
>
> Key: IGNITE-16870
> URL: https://issues.apache.org/jira/browse/IGNITE-16870
> Project: Ignite
>  Issue Type: New Feature
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> In order to make IGNITE-16860 possible, we need to add the ability to specify 
> a function as a default value generator. It's worth noting that the behavior 
> of the KV API and SQL should be consistent, thus this feature should be moved 
> from the SQL runtime to a common place.
> Within this task we need to extend {{ColumnConfigurationSchema}} in order to 
> support several types of default value generators (constant and function for 
> now), as well as introduce a new default value supplier for {{Column}}.
> As a first step, I would propose to support only a few predefined system 
> functions. This could possibly be extended to support an arbitrary function 
> though.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17212) Sql. Add support for DEFAULT operator

2022-06-22 Thread Konstantin Orlov (Jira)
Konstantin Orlov created IGNITE-17212:
-

 Summary: Sql. Add support for DEFAULT operator
 Key: IGNITE-17212
 URL: https://issues.apache.org/jira/browse/IGNITE-17212
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Konstantin Orlov


We need to support DEFAULT operator. This is technically a port of IGNITE-16018



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17212) Sql. Add support for DEFAULT operator

2022-06-22 Thread Konstantin Orlov (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Orlov updated IGNITE-17212:
--
Ignite Flags:   (was: Docs Required,Release Notes Required)

> Sql. Add support for DEFAULT operator
> -
>
> Key: IGNITE-17212
> URL: https://issues.apache.org/jira/browse/IGNITE-17212
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Konstantin Orlov
>Priority: Major
>  Labels: ignite-3
>
> We need to support DEFAULT operator. This is technically a port of 
> IGNITE-16018



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-15818) [Native Persistence 3.0] Checkpoint, lifecycle and file store refactoring and re-implementation

2022-06-22 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-15818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-15818:
-
Fix Version/s: 3.0.0-alpha6

> [Native Persistence 3.0] Checkpoint, lifecycle and file store refactoring and 
> re-implementation
> ---
>
> Key: IGNITE-15818
> URL: https://issues.apache.org/jira/browse/IGNITE-15818
> Project: Ignite
>  Issue Type: Task
>Reporter: Sergey Chugunov
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha6
>
>
> h2. Goal
> Port and refactor core classes implementing page-based persistent store in 
> Ignite 2.x: GridCacheOffheapManager, GridCacheDatabaseSharedManager, 
> PageMemoryImpl, Checkpointer, FileWriteAheadLogManager.
> New checkpoint implementation to avoid excessive logging.
> Store lifecycle clarification to avoid complicated and invasive code of 
> custom lifecycle managed mostly by DatabaseSharedManager.
> h2. Items to pay attention to
> New checkpoint implementation based on split-file storage, new page index 
> structure to maintain disk-memory page mapping.
> File page store implementation should be extracted from 
> GridCacheOffheapManager to a separate entity, target implementation should 
> support new version of checkpoint (split-file store to enable 
> always-consistent store and to eliminate binary recovery phase).
> Support of big pages (256+ kB).
> Support of throttling algorithms.
> h2. References
> New checkpoint design overview is available 
> [here|https://github.com/apache/ignite-3/blob/ignite-14647/modules/vault/README.md]
> h2. Thoughts
> Although there is a technical opportunity to have independent checkpoints for 
> different data regions, managing them could be a nightmare and it's 
> definitely in the realm of optimizations and out of scope right now.
> So, let's assume that there's one good old checkpoint process. There's still 
> a requirement to have checkpoint markers, but they will not have a reference 
> to WAL, because there's no WAL. Instead, we will have to store RAFT log 
> revision per partition. Or not, I'm not that familiar with a recovery 
> procedure that's currently in development.
> Unlike checkpoints in Ignite 2.x, which had DO and REDO operations, the new 
> version will have DO and UNDO. This drastically simplifies both the 
> checkpoint itself and node recovery. But it complicates data access.
> There will be two processes that will share the storage resource: 
> "checkpointer" and "compactor". Let's examine what the compactor should or 
> shouldn't do:
>  * it should not work in parallel with checkpointer, except for cases when 
> there are too many layers (more on that later)
>  * it should merge later checkpoint delta files into main partition files
>  * it should delete checkpoint markers once all merges are completed for it, 
> thus markers are decoupled from RAFT log
> About "cases when there are too many layers": too many layers could 
> compromise reading speed. The number of layers should not increase 
> uncontrollably. So, when a threshold is exceeded, the compactor should start 
> working no matter what. If anything, the writing load can be throttled; 
> reading matters more.
> Recovery procedure:
>  * read the list of checkpoint markers on engines start
>  * remove all data from unfinished checkpoint, if it's there
>  * trim main partition files to their proper size (should check if it's 
> actually beneficial)
> Table start procedure:
>  * read all layer files headers according to the list of checkpoints
>  * construct a list of hash tables (pageId -> pageIndex) for all layers, 
> make it as effective as possible
>  * everything else is just like before
> Partition removal might be tricky, but we'll see. It's tricky in Ignite 2.x 
> after all. "Restore partition states" procedure could be revisited, I don't 
> know how this will work yet.
> How to store hashmaps:
> regular maps might be too much, we should consider roaring map implementation 
> or something similar that'll occupy less space. This is only a concern for 
> in-memory structures. Files on disk may have a list of pairs, that's fine. 
> Generally speaking, checkpoints with a size of 100 thousand pages are close 
> to the top limit for most users. Splitting that to 500 partitions, for 
> example, gives us 200 pages per partition. Entire map should fit into a 
> single page.
> The only exception to these calculations is index.bin. The number of pages 
> per checkpoint can be orders of magnitude higher, so we should keep an eye on 
> it. It'll be the main target for testing/benchmarking. Anyway, 4 kilobytes is 
> enough to fit 512 integer pairs, scaling to 2048 for regular 16-kilobyte 
> pages. The map won't be too big IMO.
> Another important moment 

[jira] [Commented] (IGNITE-17147) Ignite should not talk to kubernetes default service to get its own IP

2022-06-22 Thread laptimus (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557493#comment-17557493
 ] 

laptimus commented on IGNITE-17147:
---

Ignite needs to know its IP address, and in a Kubernetes environment it does so 
by contacting [https://kubernetes.default.svc.cluster.local:443]. But in our 
Kubernetes cluster we have a Calico network policy in place that prevents 
Ignite from talking to [https://kubernetes.default.svc.cluster.local:443].

There should be an alternate way for Ignite to learn its own IP address in a 
Kubernetes environment.

 

thanks

> Ignite should not talk to kubernetes default service to get its own IP
> --
>
> Key: IGNITE-17147
> URL: https://issues.apache.org/jira/browse/IGNITE-17147
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.11.1
> Environment: Kubernetes
>Reporter: laptimus
>Priority: Major
>
> Ignite should not talk to kubernetes default service to get its own IP
> We have a Kubernetes cluster with Calico network policies, and it seems like 
> Ignite is the only application in our cluster that needs access to the 
> Kubernetes default service.
> I see this as a security risk.
> Please implement an alternative way in the IP Finder, as that is the class 
> that talks to the Kubernetes default service to learn the pod IP address.
>  
> thanks



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Comment Edited] (IGNITE-17083) Universal full rebalance procedure for MV storage

2022-06-22 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557475#comment-17557475
 ] 

Roman Puchkovskiy edited comment on IGNITE-17083 at 6/22/22 2:40 PM:
-

A snapshot must have (associated with it) the largest index of an applied 
command included in the snapshot. If a snapshot is created from the 'current 
state' of the state machine, then we can do the following to obtain the index 
corresponding to the state that the state machine had at the beginning of the 
snapshot:
 # Each write is accompanied with a RAFT log index (it's the index of the 
command that executes the write) (this is already suggested in IGNITE-16907)
 # When a write is executed, its index is saved (it's persisted: either 
eventually (during a checkpoint), or immediately)
 # When starting a snapshot, we take a lock making sure that no command is 
executed concurrently with us, and read the current index (corresponding to the 
last executed write). We release the lock immediately after reading it. Then we 
send the index to the recipient node as a part of the snapshot metadata.

NOTE: item 3 allows a command to make more than 1 write.
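The three steps above can be sketched like this. The lock, the `lastAppliedIndex` field, and the method names are illustrative assumptions, not Ignite's actual partition API:

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical partition state: commands update lastAppliedIndex under a
// lock; the snapshot reads it under the same lock, so no command executes
// concurrently with the read (step 3).
class PartitionState {
    private final ReentrantLock commandLock = new ReentrantLock();
    private long lastAppliedIndex;

    // Executed for every RAFT command; a command may perform more than one
    // write (steps 1-2: the index accompanies and is saved with the writes).
    void applyCommand(long raftIndex, Runnable writes) {
        commandLock.lock();
        try {
            writes.run();
            lastAppliedIndex = raftIndex; // persisted eventually or immediately
        } finally {
            commandLock.unlock();
        }
    }

    // Called when a rebalance snapshot starts: hold the lock only long
    // enough to read the index, then release it. The returned index is sent
    // to the recipient as part of the snapshot metadata.
    long snapshotIndex() {
        commandLock.lock();
        try {
            return lastAppliedIndex;
        } finally {
            commandLock.unlock();
        }
    }
}
```

Because all writes of a command happen under the lock, the index read in `snapshotIndex()` never reflects a partly applied command, which is why item 3 tolerates multi-write commands.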


was (Author: rpuch):
A snapshot must have (associated with it) the largest index of an applied 
command included in the snapshot. If a snapshot is created from the 'current 
state' of the state machine, then we can do the following to obtain the index 
corresponding to the state that the state machine had at the beginning of the 
snapshot:
 # Each write is accompanied with a RAFT log index (it's the index of the 
command that executes the write) (this is already suggested in IGNITE-16907)
 # When a write is executed, its index is saved (it's persisted: either 
eventually (during a checkpoint), or immediately)
 # When starting a snapshot, we take a lock making sure that no command is 
executed concurrently with us, and read the current index (corresponding to the 
last executed write). We release the lock immediately after reading it. Then we 
send the index to the recipient node as a part of the snapshot metadata.

> Universal full rebalance procedure for MV storage
> -
>
> Key: IGNITE-17083
> URL: https://issues.apache.org/jira/browse/IGNITE-17083
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>
> The canonical way to do a "full rebalance" in RAFT is to have persisted 
> snapshots of data. This is not always a good idea. First of all, persistent 
> data is already stored somewhere and can be read at any time. Second, for 
> volatile storage this requirement is just absurd.
> So, a "rebalance snapshot" should be streamed from one node to another 
> instead of being written to storage. What's good is that this approach can 
> be implemented independently from the storage engine (with a few adjustments 
> to the storage API, of course).
> h2. General idea
> Once a "rebalance snapshot" operation is triggered, we open a special type of 
> cursor from the partition storage that is able to give us all versioned 
> chains in _some fixed order_. Every time the next chain has been read, it's 
> remembered as the last read (let's call it {{lastRowId}} for now). Then all 
> versions for the specific row id should be sent to the receiver node in 
> "Oldest to Newest" order to simplify insertion.
> This works fine without concurrent load. To account for it we need to have 
> an additional collection of row ids associated with a snapshot. Let's call it 
> {{overwrittenRowIds}}.
> With this in mind, every write command should look similar to this:
> {noformat}
> for (var rebalanceSnapshot : ongoingRebalanceSnapshots) {
>   try (var lock = rebalanceSnapshot.lock()) {
>     if (rowId <= rebalanceSnapshot.lastRowId())
>       continue;
>     if (!rebalanceSnapshot.overwrittenRowIds().put(rowId))
>       continue;
>     rebalanceSnapshot.sendRowToReceiver(rowId);
>   }
> }
> // Now the modification can be freely performed.
> // The snapshot itself will skip everything from the "overwrittenRowIds" 
> collection.{noformat}
> NOTE: rebalance snapshot scan must also return uncommitted write intentions. 
> Their commit will be replicated later from the RAFT log.
> NOTE: receiving side will have to rebuild indexes during the rebalancing. 
> Just like it works in Ignite 2.x.
> NOTE: Technically it is possible to have several nodes entering the cluster 
> that require a full rebalance. So, while triggering a rebalance snapshot 
> cursor, we could wait for other nodes that might want to read the same data 
> and process all of them with a single scan. This is an optimization, 
> obviously.
> h2. Implementation
> The implementation will have to be split into several parts, because we need:
>  * Support 

[jira] [Comment Edited] (IGNITE-17083) Universal full rebalance procedure for MV storage

2022-06-22 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557475#comment-17557475
 ] 

Roman Puchkovskiy edited comment on IGNITE-17083 at 6/22/22 2:39 PM:
-

A snapshot must have (associated with it) the largest index of an applied 
command included in the snapshot. If a snapshot is created from the 'current 
state' of the state machine, then we can do the following to obtain the index 
corresponding to the state that the state machine had at the beginning of the 
snapshot:
 # Each write is accompanied with a RAFT log index (it's the index of the 
command that executes the write) (this is already suggested in IGNITE-16907)
 # When a write is executed, its index is saved (it's persisted: either 
eventually (during a checkpoint), or immediately)
 # When starting a snapshot, we take a lock making sure that no command is 
executed concurrently with us, and read the current index (corresponding to the 
last executed write). We release the lock immediately after reading it. Then we 
send the index to the recipient node as a part of the snapshot metadata.


was (Author: rpuch):
A snapshot must have (associated with it) the largest index of an applied 
command included in the snapshot. If a snapshot is created from the 'current 
state' of the state machine, then we can do the following to obtain the index 
corresponding to the state that the state machine had at the beginning of the 
snapshot:
 # Each write is accompanied with a RAFT log index (it's the index of the 
command that executes the write) (this is already suggested in IGNITE-16907)
 # When a write is executed, its index is saved (it's persisted: either 
eventually (during a checkpoint), or immediately)
 # When starting a snapshot, we take a lock making sure that no write is 
executed concurrently with us, and read the current index (corresponding to the 
last executed write). We release the lock immediately after reading it. Then we 
send the index to the recipient node as a part of the snapshot metadata.

NOTE: for this to work, it's required that each command executes at most one 
write operation; otherwise we might end up in a situation where a command (on 
the recipient node) is only partly applied.
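The three numbered steps above can be sketched as a toy state machine. This is a minimal illustration only; all class and method names are hypothetical and this is not Ignite's actual RAFT or storage API. Commands record their log index under a lock, and a snapshot briefly takes the same lock to read a consistent "last applied index".

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: a state machine that remembers the RAFT log index of
// the last executed write, and a snapshot that reads it consistently.
public class SnapshotIndexSketch {
    private final ReentrantLock commandLock = new ReentrantLock();
    private long lastAppliedIndex;

    /** Executes a write command; the RAFT log index travels with the command. */
    public void executeWrite(long raftIndex, Runnable write) {
        commandLock.lock();
        try {
            write.run();                  // apply the single write operation
            lastAppliedIndex = raftIndex; // persisted eventually or immediately
        } finally {
            commandLock.unlock();
        }
    }

    /** Starts a snapshot: holds the lock only long enough to read the index. */
    public long beginSnapshot() {
        commandLock.lock();
        try {
            return lastAppliedIndex; // sent to the recipient as snapshot metadata
        } finally {
            commandLock.unlock();
        }
    }

    public static void main(String[] args) {
        SnapshotIndexSketch sm = new SnapshotIndexSketch();
        sm.executeWrite(1, () -> {});
        sm.executeWrite(2, () -> {});
        System.out.println(sm.beginSnapshot()); // prints 2
    }
}
```

Because the lock is released immediately after reading, writes are stalled only for the duration of one field read, not for the whole snapshot transfer.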

> Universal full rebalance procedure for MV storage
> -
>
> Key: IGNITE-17083
> URL: https://issues.apache.org/jira/browse/IGNITE-17083
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>
> The canonical way to do a "full rebalance" in RAFT is to keep persisted 
> snapshots of data. This is not always a good idea. First, persistent data is 
> already stored somewhere and can be read at any time. Second, for volatile 
> storages this requirement is simply absurd.
> So, a "rebalance snapshot" should be streamed from one node to another 
> instead of being written to storage. What's good is that this approach can 
> be implemented independently of the storage engine (with a few adjustments 
> to the storage API, of course).
> h2. General idea
> Once a "rebalance snapshot" operation is triggered, we open a special type of 
> cursor from the partition storage that is able to give us all versioned 
> chains in {_}some fixed order{_}. Every time the next chain has been read, 
> it's remembered as the last one read (let's call it {{lastRowId}} for now). 
> Then all versions for that specific row id should be sent to the receiver 
> node in "Oldest to Newest" order to simplify insertion.
> This works fine without concurrent load. To account for it, we need an 
> additional collection of row ids associated with a snapshot. Let's call it 
> {{overwrittenRowIds}}.
> With this in mind, every write command should look similar to this:
> {noformat}
> for (var rebalanceSnapshot : ongoingRebalanceSnapshots) {
>   try (var lock = rebalanceSnapshot.lock()) {
>     if (rowId <= rebalanceSnapshot.lastRowId())
>       continue;
>     if (!rebalanceSnapshot.overwrittenRowIds().put(rowId))
>       continue;
>     rebalanceSnapshot.sendRowToReceiver(rowId);
>   }
> }
> // Now modifications can be freely performed.
> // The snapshot itself will skip everything from the "overwrittenRowIds" 
> collection.{noformat}
> NOTE: rebalance snapshot scan must also return uncommitted write intentions. 
> Their commit will be replicated later from the RAFT log.
> NOTE: receiving side will have to rebuild indexes during the rebalancing. 
> Just like it works in Ignite 2.x.
> NOTE: Technically it is possible to have several nodes entering the cluster 
> that require a full rebalance. So, while triggering a rebalance snapshot 
> cursor, we could wait for other nodes that might want to read the same data 
> and process all of them with a single scan. This is an optimization, 
> obviously.

[jira] [Updated] (IGNITE-17211) Nested polymorphic configuration cannot be without a default type

2022-06-22 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-17211:
-
Description: 
*Problem*
If the polymorphic configuration is nested (sub), then an attempt to create an 
instance of the configuration fails with the error *Polymorphic configuration 
type is not defined*, because the type of the polymorphic configuration is not 
known and can change in the future. To get around this limitation, you have to 
set a default type, which may not always be correct.

*Stack trace example:*
{noformat}
Caused by: java.lang.IllegalStateException: Polymorphic configuration type is 
not defined: 
org.apache.ignite.configuration.schemas.table.ColumnDefaultConfigurationSchema. 
See @PolymorphicConfig documentation.
at 
org.apache.ignite.configuration.schemas.table.ColumnNode.construct(Unknown 
Source)
at 
org.apache.ignite.internal.configuration.util.ConfigurationUtil$3.visitInnerNode(ConfigurationUtil.java:327)
at 
org.apache.ignite.configuration.schemas.table.ColumnNode.traverseChildren(Unknown
 Source)
at 
org.apache.ignite.internal.configuration.util.ConfigurationUtil.addDefaults(ConfigurationUtil.java:308)
at 
org.apache.ignite.internal.configuration.tree.NamedListNode.newElementDescriptor(NamedListNode.java:492)
at 
org.apache.ignite.internal.configuration.tree.NamedListNode.create(NamedListNode.java:156)
{noformat}

*Configuration scheme example:*
{code:java}
@ConfigurationRoot(rootName = "parent", type = LOCAL)
public static class ParentConfigurationSchema {
@ConfigValue
public ChildConfigurationSchema child;
}

@PolymorphicConfig
public static class ChildConfigurationSchema {
public static final String FIRST = "first";
public static final String SECOND = "second";

// When creating a configuration instance, there will be an error if there
// is no default value.
@PolymorphicId
public String type;
}

@PolymorphicConfigInstance(ChildConfigurationSchema.FIRST)
public static class FirstChildConfigurationSchema extends 
ChildConfigurationSchema {
}

@PolymorphicConfigInstance(ChildConfigurationSchema.SECOND)
public static class SecondChildConfigurationSchema extends 
ChildConfigurationSchema {
}
{code}

*Notes on a possible implementation*
* We can consider relaxing the requirement that the polymorphic configuration 
type be defined at the stage of adding default values for configuration 
fields; see 
*org.apache.ignite.internal.configuration.util.ConfigurationUtil#addDefaults*.
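As a hedged illustration of the current workaround (assuming, as elsewhere in the configuration framework, that a field initializer supplies the default value), the schema from the example above only gets past the error when a default type is forced on the {{@PolymorphicId}} field:

```java
@PolymorphicConfig
public static class ChildConfigurationSchema {
    public static final String FIRST = "first";
    public static final String SECOND = "second";

    // Workaround: forcing a default type, even though neither "first" nor
    // "second" may be the right choice for every parent configuration.
    @PolymorphicId
    public String type = FIRST;
}
```

This is exactly the limitation the issue describes: the default is an arbitrary choice made only to satisfy the instantiation check.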

  was:
*Problem*
If the polymorphic configuration is nested (sub), then an attempt to create an 
instance of the configuration fails with the error *Polymorphic configuration 
type is not defined*, because the type of the polymorphic configuration is not 
known and can change in the future. To get around this limitation, you have to 
set a default type, which may not always be correct.

*Stack trace example:*
{noformat}
Caused by: java.lang.IllegalStateException: Polymorphic configuration type is 
not defined: 
org.apache.ignite.configuration.schemas.table.ColumnDefaultConfigurationSchema. 
See @PolymorphicConfig documentation.
at 
org.apache.ignite.configuration.schemas.table.ColumnNode.construct(Unknown 
Source)
at 
org.apache.ignite.internal.configuration.util.ConfigurationUtil$3.visitInnerNode(ConfigurationUtil.java:327)
at 
org.apache.ignite.configuration.schemas.table.ColumnNode.traverseChildren(Unknown
 Source)
at 
org.apache.ignite.internal.configuration.util.ConfigurationUtil.addDefaults(ConfigurationUtil.java:308)
at 
org.apache.ignite.internal.configuration.tree.NamedListNode.newElementDescriptor(NamedListNode.java:492)
at 
org.apache.ignite.internal.configuration.tree.NamedListNode.create(NamedListNode.java:156)
{noformat}

*Configuration scheme example:*
{code:java}
@ConfigurationRoot(rootName = "parent", type = LOCAL)
public static class ParentConfigurationSchema {
@ConfigValue
public ChildConfigurationSchema child;
}

@PolymorphicConfig
public static class ChildConfigurationSchema {
public static final String FIRST = "first";
public static final String SECOND = "second";

// When creating a configuration instance, there will be an error if there
// is no default value.
@PolymorphicId
public String type;
}

@PolymorphicConfigInstance(ChildConfigurationSchema.FIRST)
public static class FirstChildConfigurationSchema extends 
ChildConfigurationSchema {
}

@PolymorphicConfigInstance(ChildConfigurationSchema.SECOND)
public static class SecondChildConfigurationSchema extends 
ChildConfigurationSchema {
}
{code}



> Nested polymorphic configuration cannot be without a default type
> -
>
> Key: IGNITE-17211
> 

[jira] [Updated] (IGNITE-17211) Nested polymorphic configuration cannot be without a default type

2022-06-22 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-17211:
-
Description: 
*Problem*
If the polymorphic configuration is nested (sub), then an attempt to create an 
instance of the configuration fails with the error *Polymorphic configuration 
type is not defined*, because the type of the polymorphic configuration is not 
known and can change in the future. To get around this limitation, you have to 
set a default type, which may not always be correct.

*Stack trace example:*
{noformat}
Caused by: java.lang.IllegalStateException: Polymorphic configuration type is 
not defined: 
org.apache.ignite.configuration.schemas.table.ColumnDefaultConfigurationSchema. 
See @PolymorphicConfig documentation.
at 
org.apache.ignite.configuration.schemas.table.ColumnNode.construct(Unknown 
Source)
at 
org.apache.ignite.internal.configuration.util.ConfigurationUtil$3.visitInnerNode(ConfigurationUtil.java:327)
at 
org.apache.ignite.configuration.schemas.table.ColumnNode.traverseChildren(Unknown
 Source)
at 
org.apache.ignite.internal.configuration.util.ConfigurationUtil.addDefaults(ConfigurationUtil.java:308)
at 
org.apache.ignite.internal.configuration.tree.NamedListNode.newElementDescriptor(NamedListNode.java:492)
at 
org.apache.ignite.internal.configuration.tree.NamedListNode.create(NamedListNode.java:156)
{noformat}

*Configuration scheme example:*
{code:java}
@ConfigurationRoot(rootName = "parent", type = LOCAL)
public static class ParentConfigurationSchema {
@ConfigValue
public ChildConfigurationSchema child;
}

@PolymorphicConfig
public static class ChildConfigurationSchema {
public static final String FIRST = "first";
public static final String SECOND = "second";

// When creating a configuration instance, there will be an error if there
// is no default value.
@PolymorphicId
public String type;
}

@PolymorphicConfigInstance(ChildConfigurationSchema.FIRST)
public static class FirstChildConfigurationSchema extends 
ChildConfigurationSchema {
}

@PolymorphicConfigInstance(ChildConfigurationSchema.SECOND)
public static class SecondChildConfigurationSchema extends 
ChildConfigurationSchema {
}
{code}


  was:
*Problem*
If the polymorphic configuration is nested (sub), then an attempt to create an 
instance of the configuration fails with the error *Polymorphic configuration 
type is not defined*, because the type of the polymorphic configuration is not 
known and can change in the future. To get around this limitation, you have to 
set a default type, which may not always be correct.

*Stack trace example:*
{noformat}
Caused by: java.lang.IllegalStateException: Polymorphic configuration type is 
not defined: 
org.apache.ignite.configuration.schemas.table.ColumnDefaultConfigurationSchema. 
See @PolymorphicConfig documentation.
at 
org.apache.ignite.configuration.schemas.table.ColumnNode.construct(Unknown 
Source)
at 
org.apache.ignite.internal.configuration.util.ConfigurationUtil$3.visitInnerNode(ConfigurationUtil.java:327)
at 
org.apache.ignite.configuration.schemas.table.ColumnNode.traverseChildren(Unknown
 Source)
at 
org.apache.ignite.internal.configuration.util.ConfigurationUtil.addDefaults(ConfigurationUtil.java:308)
at 
org.apache.ignite.internal.configuration.tree.NamedListNode.newElementDescriptor(NamedListNode.java:492)
at 
org.apache.ignite.internal.configuration.tree.NamedListNode.create(NamedListNode.java:156)
{noformat}



> Nested polymorphic configuration cannot be without a default type
> -
>
> Key: IGNITE-17211
> URL: https://issues.apache.org/jira/browse/IGNITE-17211
> Project: Ignite
>  Issue Type: Bug
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: iep-55, ignite-3, tech-debt
> Fix For: 3.0.0-alpha6
>
>
> *Problem*
> If the polymorphic configuration is nested (sub), then an attempt to create 
> an instance of the configuration fails with the error *Polymorphic 
> configuration type is not defined*, because the type of the polymorphic 
> configuration is not known and can change in the future. To get around this 
> limitation, you have to set a default type, which may not always be correct.
> *Stack trace example:*
> {noformat}
> Caused by: java.lang.IllegalStateException: Polymorphic configuration type is 
> not defined: 
> org.apache.ignite.configuration.schemas.table.ColumnDefaultConfigurationSchema.
>  See @PolymorphicConfig documentation.
>   at 
> org.apache.ignite.configuration.schemas.table.ColumnNode.construct(Unknown 
> Source)
>   at 
> 

[jira] [Commented] (IGNITE-17083) Universal full rebalance procedure for MV storage

2022-06-22 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557485#comment-17557485
 ] 

Roman Puchkovskiy commented on IGNITE-17083:


At first sight, it might seem that we could just use MVCC for getting 
snapshots (as we can store the RAFT log index with each MV record). But with 
the current MV stores this will not work, as these stores are not append-only. 
Some commands change data in place (for example, addWrite() replaces an 
existing write intent with a new one), and others even remove records 
(abortWrite()).

So we need to either switch our MV stores to append-only mode or use a 
copy-on-write approach. The latter was chosen here. It also has a bonus: the 
approach does not seem to rely on any MV-related storage specifics.
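The copy-on-write interplay between a rebalance snapshot and the write path can be sketched as runnable Java. The names lastRowId and overwrittenRowIds come from the issue description; everything else here (classes, methods, the use of plain longs for row ids) is a hypothetical simplification, not Ignite's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Hypothetical copy-on-write sketch: before a row ahead of the scan cursor is
// modified, its current version is pushed to the snapshot receiver once, so
// the main scan can later skip it.
public class RebalanceSnapshotSketch {
    long lastRowId;                                  // last row id returned by the scan
    final Set<Long> overwrittenRowIds = new TreeSet<>();
    final List<Long> sentOutOfOrder = new ArrayList<>();

    /** Write path: runs before every modification of {@code rowId}. */
    void beforeWrite(long rowId) {
        if (rowId <= lastRowId)
            return;                                  // already streamed by the scan
        if (!overwrittenRowIds.add(rowId))
            return;                                  // old version already sent
        sentOutOfOrder.add(rowId);                   // send current version to receiver
    }

    /** Scan side: should the main cursor skip this row id? */
    boolean skipInScan(long rowId) {
        return overwrittenRowIds.contains(rowId);
    }

    public static void main(String[] args) {
        RebalanceSnapshotSketch s = new RebalanceSnapshotSketch();
        s.lastRowId = 10;
        s.beforeWrite(5);   // behind the scan cursor: nothing to do
        s.beforeWrite(42);  // ahead of it: old version is sent eagerly
        s.beforeWrite(42);  // second write to the same row: already sent
        System.out.println(s.sentOutOfOrder);        // [42]
        System.out.println(s.skipInScan(42));        // true
    }
}
```

The key property is that each row is sent to the receiver at most once, regardless of how many times it is overwritten during the snapshot.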

> Universal full rebalance procedure for MV storage
> -
>
> Key: IGNITE-17083
> URL: https://issues.apache.org/jira/browse/IGNITE-17083
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>
> The canonical way to do a "full rebalance" in RAFT is to keep persisted 
> snapshots of data. This is not always a good idea. First, persistent data is 
> already stored somewhere and can be read at any time. Second, for volatile 
> storages this requirement is simply absurd.
> So, a "rebalance snapshot" should be streamed from one node to another 
> instead of being written to storage. What's good is that this approach can 
> be implemented independently of the storage engine (with a few adjustments 
> to the storage API, of course).
> h2. General idea
> Once a "rebalance snapshot" operation is triggered, we open a special type of 
> cursor from the partition storage that is able to give us all versioned 
> chains in {_}some fixed order{_}. Every time the next chain has been read, 
> it's remembered as the last one read (let's call it {{lastRowId}} for now). 
> Then all versions for that specific row id should be sent to the receiver 
> node in "Oldest to Newest" order to simplify insertion.
> This works fine without concurrent load. To account for it, we need an 
> additional collection of row ids associated with a snapshot. Let's call it 
> {{overwrittenRowIds}}.
> With this in mind, every write command should look similar to this:
> {noformat}
> for (var rebalanceSnapshot : ongoingRebalanceSnapshots) {
>   try (var lock = rebalanceSnapshot.lock()) {
>     if (rowId <= rebalanceSnapshot.lastRowId())
>       continue;
>     if (!rebalanceSnapshot.overwrittenRowIds().put(rowId))
>       continue;
>     rebalanceSnapshot.sendRowToReceiver(rowId);
>   }
> }
> // Now modifications can be freely performed.
> // The snapshot itself will skip everything from the "overwrittenRowIds" 
> collection.{noformat}
> NOTE: rebalance snapshot scan must also return uncommitted write intentions. 
> Their commit will be replicated later from the RAFT log.
> NOTE: receiving side will have to rebuild indexes during the rebalancing. 
> Just like it works in Ignite 2.x.
> NOTE: Technically it is possible to have several nodes entering the cluster 
> that require a full rebalance. So, while triggering a rebalance snapshot 
> cursor, we could wait for other nodes that might want to read the same data 
> and process all of them with a single scan. This is an optimization, 
> obviously.
> h2. Implementation
> The implementation will have to be split into several parts, because we need:
>  * Support for snapshot streaming in RAFT state machine.
>  * Storage API for this type of scan.
>  * Every storage must implement the new scan method.
>  * Streamer itself should be implemented, along with a specific logic in 
> write commands.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17211) Nested polymorphic configuration cannot be without a default type

2022-06-22 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-17211:


 Summary: Nested polymorphic configuration cannot be without a 
default type
 Key: IGNITE-17211
 URL: https://issues.apache.org/jira/browse/IGNITE-17211
 Project: Ignite
  Issue Type: Bug
Reporter: Kirill Tkalenko
 Fix For: 3.0.0-alpha6


*Problem*
If the polymorphic configuration is nested (sub), then when trying to create an 
instance of the configuration, we will receive the error *Polymorphic 
configuration type is not defined*, since the type of the polymorphic 
configuration is not known and can be changed in the future, in order to get 
around this limitation, you have to set the default type, which may not always 
be correct.

*Stack trace example:*
{noformat}
Caused by: java.lang.IllegalStateException: Polymorphic configuration type is 
not defined: 
org.apache.ignite.configuration.schemas.table.ColumnDefaultConfigurationSchema. 
See @PolymorphicConfig documentation.
at 
org.apache.ignite.configuration.schemas.table.ColumnNode.construct(Unknown 
Source)
at 
org.apache.ignite.internal.configuration.util.ConfigurationUtil$3.visitInnerNode(ConfigurationUtil.java:327)
at 
org.apache.ignite.configuration.schemas.table.ColumnNode.traverseChildren(Unknown
 Source)
at 
org.apache.ignite.internal.configuration.util.ConfigurationUtil.addDefaults(ConfigurationUtil.java:308)
at 
org.apache.ignite.internal.configuration.tree.NamedListNode.newElementDescriptor(NamedListNode.java:492)
at 
org.apache.ignite.internal.configuration.tree.NamedListNode.create(NamedListNode.java:156)
{noformat}




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-17083) Universal full rebalance procedure for MV storage

2022-06-22 Thread Roman Puchkovskiy (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557475#comment-17557475
 ] 

Roman Puchkovskiy commented on IGNITE-17083:


A snapshot must have (associated with it) the largest index of an applied 
command included in the snapshot. If a snapshot is created from the 'current 
state' of the state machine, then we can do the following to obtain the index 
corresponding to the state that the state machine had at the beginning of the 
snapshot:
 # Each write is accompanied by a RAFT log index (it's the index of the 
command that executes the write) (this is already suggested in IGNITE-16907)
 # When a write is executed, its index is saved (it's persisted: either 
eventually (during a checkpoint), or immediately)
 # When starting a snapshot, we take a lock making sure that no write is 
executed concurrently with us, and read the current index (corresponding to the 
last executed write). We release the lock immediately after reading it. Then we 
send the index to the recipient node as a part of the snapshot metadata.

NOTE: for this to work, it's required that each command executes at most one 
write operation; otherwise we might end up in a situation where a command (on 
the recipient node) is only partly applied.

> Universal full rebalance procedure for MV storage
> -
>
> Key: IGNITE-17083
> URL: https://issues.apache.org/jira/browse/IGNITE-17083
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Ivan Bessonov
>Priority: Major
>  Labels: ignite-3
>
> The canonical way to do a "full rebalance" in RAFT is to keep persisted 
> snapshots of data. This is not always a good idea. First, persistent data is 
> already stored somewhere and can be read at any time. Second, for volatile 
> storages this requirement is simply absurd.
> So, a "rebalance snapshot" should be streamed from one node to another 
> instead of being written to storage. What's good is that this approach can 
> be implemented independently of the storage engine (with a few adjustments 
> to the storage API, of course).
> h2. General idea
> Once a "rebalance snapshot" operation is triggered, we open a special type of 
> cursor from the partition storage that is able to give us all versioned 
> chains in {_}some fixed order{_}. Every time the next chain has been read, 
> it's remembered as the last one read (let's call it {{lastRowId}} for now). 
> Then all versions for that specific row id should be sent to the receiver 
> node in "Oldest to Newest" order to simplify insertion.
> This works fine without concurrent load. To account for it, we need an 
> additional collection of row ids associated with a snapshot. Let's call it 
> {{overwrittenRowIds}}.
> With this in mind, every write command should look similar to this:
> {noformat}
> for (var rebalanceSnapshot : ongoingRebalanceSnapshots) {
>   try (var lock = rebalanceSnapshot.lock()) {
>     if (rowId <= rebalanceSnapshot.lastRowId())
>       continue;
>     if (!rebalanceSnapshot.overwrittenRowIds().put(rowId))
>       continue;
>     rebalanceSnapshot.sendRowToReceiver(rowId);
>   }
> }
> // Now modifications can be freely performed.
> // The snapshot itself will skip everything from the "overwrittenRowIds" 
> collection.{noformat}
> NOTE: rebalance snapshot scan must also return uncommitted write intentions. 
> Their commit will be replicated later from the RAFT log.
> NOTE: receiving side will have to rebuild indexes during the rebalancing. 
> Just like it works in Ignite 2.x.
> NOTE: Technically it is possible to have several nodes entering the cluster 
> that require a full rebalance. So, while triggering a rebalance snapshot 
> cursor, we could wait for other nodes that might want to read the same data 
> and process all of them with a single scan. This is an optimization, 
> obviously.
> h2. Implementation
> The implementation will have to be split into several parts, because we need:
>  * Support for snapshot streaming in RAFT state machine.
>  * Storage API for this type of scan.
>  * Every storage must implement the new scan method.
>  * Streamer itself should be implemented, along with a specific logic in 
> write commands.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-17093) Map error codes for cli commands

2022-06-22 Thread Kirill Gusakov (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557458#comment-17557458
 ] 

Kirill Gusakov commented on IGNITE-17093:
-

LGTM, thanks!

> Map error codes for cli commands
> 
>
> Key: IGNITE-17093
> URL: https://issues.apache.org/jira/browse/IGNITE-17093
> Project: Ignite
>  Issue Type: Task
>Reporter: Aleksandr
>Assignee: Vadim Pakhnushev
>Priority: Major
>  Labels: ignite-3, ignite-3-cli-tool
>  Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Invoking the Ignite CLI tool in non-REPL mode produces the following exit 
> codes:
> 0. Successful completion.
> 1. An error occurred during the execution.
> 2. An error occurred while parsing command-line arguments.
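A minimal sketch of the exit-code contract listed above. The mapping of exception kinds to codes is hypothetical (the real CLI derives exit codes from its command framework); only the three code values come from the issue.

```java
// Hypothetical sketch of the exit-code contract described in the issue.
public class ExitCodes {
    static final int SUCCESS = 0;          // successful completion
    static final int EXECUTION_ERROR = 1;  // error during execution
    static final int PARSE_ERROR = 2;      // error parsing command-line args

    /** Maps a failure (or null for success) to a process exit code. */
    static int codeFor(Throwable failure) {
        if (failure == null)
            return SUCCESS;
        // Treat argument-parsing failures specially; everything else maps to 1.
        return failure instanceof IllegalArgumentException ? PARSE_ERROR : EXECUTION_ERROR;
    }

    public static void main(String[] args) {
        System.out.println(codeFor(null));                               // 0
        System.out.println(codeFor(new RuntimeException("boom")));       // 1
        System.out.println(codeFor(new IllegalArgumentException("-x"))); // 2
    }
}
```

In a real CLI entry point, the result of such a mapping would be passed to System.exit() so scripts can branch on it.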



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17210) Modification of documentation on data regions for PageMemory

2022-06-22 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-17210:


 Summary: Modification of documentation on data regions for 
PageMemory
 Key: IGNITE-17210
 URL: https://issues.apache.org/jira/browse/IGNITE-17210
 Project: Ignite
  Issue Type: Improvement
  Components: documentation
Reporter: Kirill Tkalenko
 Fix For: 3.0.0-alpha6


Since the storage engine based on *PageMemory* was split into two 
(*in-memory* and *persistent*), the data region configuration was also split. 
We need to update the documentation in *docs/_docs/config/data-region.adoc* 
accordingly.

We can look at:
* 
*org.apache.ignite.internal.pagememory.configuration.schema.BasePageMemoryDataRegionConfigurationSchema*
* 
*org.apache.ignite.internal.pagememory.configuration.schema.VolatilePageMemoryDataRegionConfigurationSchema*
* 
*org.apache.ignite.internal.pagememory.configuration.schema.PersistentPageMemoryDataRegionConfigurationSchema*



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17209) Allow passing sql query as a parameter

2022-06-22 Thread Vadim Pakhnushev (Jira)
Vadim Pakhnushev created IGNITE-17209:
-

 Summary: Allow passing sql query as a parameter
 Key: IGNITE-17209
 URL: https://issues.apache.org/jira/browse/IGNITE-17209
 Project: Ignite
  Issue Type: Task
Reporter: Vadim Pakhnushev


As stated in 
[IEP-88|https://cwiki.apache.org/confluence/display/IGNITE/IEP-88:+CLI+Tool#IEP88:CLITool-sql],
 the sql command should accept an SQL query without requiring the --exec option.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-17159) Server node failed due to java.lang.AssertionError: Client already created

2022-06-22 Thread Ignite TC Bot (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557445#comment-17557445
 ] 

Ignite TC Bot commented on IGNITE-17159:


{panel:title=Branch: [pull/10090/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
{panel:title=Branch: [pull/10090/head] Base: [master] : No new tests 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=6637438buildTypeId=IgniteTests24Java8_RunAll]

> Server node failed due to java.lang.AssertionError: Client already created
> --
>
> Key: IGNITE-17159
> URL: https://issues.apache.org/jira/browse/IGNITE-17159
> Project: Ignite
>  Issue Type: Bug
>  Components: networking
>Reporter: Semyon Danilov
>Assignee: Semyon Danilov
>Priority: Major
> Fix For: 2.14
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It seems that we release the recovery descriptor prior to removing the 
> communication client from the connection pool. So another thread successfully 
> reserves the descriptor, creates a client, and then fails trying to put the 
> newly created client into the pool, because the stale client hasn't been 
> removed yet. Releasing the descriptor AFTER removing the communication 
> client should fix the issue.
> {noformat}
> at 
> org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.addNodeClient(ConnectionClientPool.java:638)
> at 
> org.apache.ignite.spi.communication.tcp.internal.ConnectionClientPool.reserveClient(ConnectionClientPool.java:242)
> at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:1174)
> at 
> org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:1123)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1817)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:1944)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1265)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1304)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.sendDhtRequests(GridDhtAtomicAbstractUpdateFuture.java:489)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicAbstractUpdateFuture.map(GridDhtAtomicAbstractUpdateFuture.java:445)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1921)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1685)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:319)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:496)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:454)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:267)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1164)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:627)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2073)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2048)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1311)
> at 
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:817)
> at 
> com.autozone.supplychain.csr.receiver.QuantityCacheTupleReceiver.processRecord(QuantityCacheTupleReceiver.java:123)
> at 
> com.autozone.supplychain.csr.receiver.QuantityCacheTupleReceiver.receive(QuantityCacheTupleReceiver.java:47)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:137)
> at 
> 
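The ordering fix described in the issue can be illustrated with a stripped-down sketch. All names here are hypothetical stand-ins for the connection pool and recovery descriptor; the point is only that the descriptor must be released after the stale client is removed, so a concurrent reserver never observes a free descriptor alongside a stale pooled client.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: the fix is purely an ordering change in close().
public class DescriptorOrderingSketch {
    final Map<String, Object> clientPool = new ConcurrentHashMap<>();
    final AtomicBoolean descriptorReserved = new AtomicBoolean(true);

    /** Buggy order: descriptor freed while the stale client is still pooled. */
    void closeBuggy(String nodeId) {
        descriptorReserved.set(false); // another thread may reserve it now...
        clientPool.remove(nodeId);     // ...and collide with this stale entry
    }

    /** Fixed order: remove the stale client first, then release the descriptor. */
    void closeFixed(String nodeId) {
        clientPool.remove(nodeId);
        descriptorReserved.set(false);
    }

    public static void main(String[] args) {
        DescriptorOrderingSketch d = new DescriptorOrderingSketch();
        d.clientPool.put("n1", new Object());
        d.closeFixed("n1");
        System.out.println(d.clientPool.isEmpty());      // true
        System.out.println(d.descriptorReserved.get());  // false
    }
}
```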

[jira] [Commented] (IGNITE-17147) Ignite should not talk to kubernetes default service to get its own IP

2022-06-22 Thread Alexandr Shapkin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557442#comment-17557442
 ] 

Alexandr Shapkin commented on IGNITE-17147:
---

[~laptimus] Could you please elaborate on this task? 

What's your improvement suggestion? 

Am I right that you are referring to this configuration: 
[https://www.gridgain.com/sdk/latest/javadoc/org/apache/ignite/kubernetes/configuration/KubernetesConnectionConfiguration.html#setMasterUrl-java.lang.String-]

and its default value - 'https://kubernetes.default.svc.cluster.local:443'?

> Ignite should not talk to kubernetes default service to get its own IP
> --
>
> Key: IGNITE-17147
> URL: https://issues.apache.org/jira/browse/IGNITE-17147
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.11.1
> Environment: Kubernetes
>Reporter: laptimus
>Priority: Major
>
> Ignite should not talk to kubernetes default service to get its own IP
> We have a Kubernetes cluster with Calico network policies, and it seems that 
> Ignite is the only application in our cluster that needs access to the 
> Kubernetes default service.
> I see this as a security risk.
> Please implement an alternative way in the IP finder, as that is the class 
> that talks to the Kubernetes default service to learn the pod IP address.
>  
> thanks



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17149) Separation of the PageMemoryStorageEngineConfigurationSchema into in-memory and persistent

2022-06-22 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-17149:
-
Ignite Flags: Docs Required

> Separation of the PageMemoryStorageEngineConfigurationSchema into in-memory 
> and persistent
> --
>
> Key: IGNITE-17149
> URL: https://issues.apache.org/jira/browse/IGNITE-17149
> Project: Ignite
>  Issue Type: Task
>Reporter: Kirill Tkalenko
>Assignee: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha6
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Problem*
> At the moment, the 
> *org.apache.ignite.internal.storage.pagememory.configuration.schema.PageMemoryStorageEngineConfigurationSchema*
>  contains configuration for in-memory and persistent 
> *org.apache.ignite.internal.pagememory.configuration.schema.PageMemoryDataRegionConfigurationSchema*,
>  which can be inconvenient for the user for several reasons:
>  * *PageMemoryDataRegionConfigurationSchema* contains the configuration for 
> both the in-memory and the persistent case, which can be confusing because it's 
> not obvious which properties to set for each;
>  * The user does not have the ability to set a different 
> *PageMemoryStorageEngineConfigurationSchema#pageSize* for the in-memory and the 
> persistent cases;
>  * When creating a table through SQL, it would be more convenient for the 
> user to simply specify the engine and use the default region than to specify 
> the data region explicitly; let's look at the examples.
> {code:java}
> CREATE TABLE user (id INT PRIMARY KEY, name VARCHAR(255)) ENGINE pagememory 
> dataRegion='in-memory' 
> CREATE TABLE user (id INT PRIMARY KEY, name VARCHAR(255)) ENGINE pagememory 
> dataRegion='persistent'{code}
> {code:java}
> CREATE TABLE user (id INT PRIMARY KEY, name VARCHAR(255)) ENGINE 
> in-memory-pagememory
> CREATE TABLE user (id INT PRIMARY KEY, name VARCHAR(255)) ENGINE 
> persistent-pagememory
> {code}
> *Implementation proposal*
> Divide by two (in-memory and persistent):
> * 
> *org.apache.ignite.internal.pagememory.configuration.schema.PageMemoryDataRegionConfigurationSchema*
> * 
> *org.apache.ignite.internal.storage.pagememory.configuration.schema.PageMemoryStorageEngineConfigurationSchema*
> * *org.apache.ignite.internal.storage.pagememory.PageMemoryStorageEngine*



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17208) Change storage engine names based on PageMemory

2022-06-22 Thread Kirill Tkalenko (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kirill Tkalenko updated IGNITE-17208:
-
Description: 
PageMemory based storage engine (*pagememory*) has been split into two engines, 
*aimem* (in-memory) and *aipersist* (persistent); these names don't seem as 
convenient and memorable as *rocksdb*, so we need something similar for them.

What needs to be changed in the code:
* 
*org.apache.ignite.internal.storage.pagememory.VolatilePageMemoryStorageEngine#ENGINE_NAME*
* 
*org.apache.ignite.internal.storage.pagememory.PersistentPageMemoryStorageEngine#ENGINE_NAME*
* 
*org.apache.ignite.configuration.schemas.table.TablesConfigurationSchema#defaultDataStorage*
* *org.apache.ignite.example.storage.PersistentPageMemoryStorageExample*
* *org.apache.ignite.example.storage.VolatilePageMemoryStorageExample*
* Failing tests.


  was:
PageMemory based storage engine (*pagememory*) has been split into two *aimem* 
(in-memory) and *aipersist* (persistent), these names don't seem convenient and 
memorable like *rocksdb*, we need something similar for them.

What needs to be changed in the code:
* 
org.apache.ignite.internal.storage.pagememory.VolatilePageMemoryStorageEngine#ENGINE_NAME
* 
org.apache.ignite.internal.storage.pagememory.PersistentPageMemoryStorageEngine#ENGINE_NAME
* 
org.apache.ignite.configuration.schemas.table.TablesConfigurationSchema#defaultDataStorage
* org.apache.ignite.example.storage.PersistentPageMemoryStorageExample
* org.apache.ignite.example.storage.VolatilePageMemoryStorageExample
* Fallen tests.



> Change storage engine names based on PageMemory
> ---
>
> Key: IGNITE-17208
> URL: https://issues.apache.org/jira/browse/IGNITE-17208
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Kirill Tkalenko
>Priority: Major
>  Labels: ignite-3
> Fix For: 3.0.0-alpha6
>
>
> PageMemory based storage engine (*pagememory*) has been split into two 
> engines, *aimem* (in-memory) and *aipersist* (persistent); these names don't 
> seem as convenient and memorable as *rocksdb*, so we need something similar 
> for them.
> What needs to be changed in the code:
> * 
> *org.apache.ignite.internal.storage.pagememory.VolatilePageMemoryStorageEngine#ENGINE_NAME*
> * 
> *org.apache.ignite.internal.storage.pagememory.PersistentPageMemoryStorageEngine#ENGINE_NAME*
> * 
> *org.apache.ignite.configuration.schemas.table.TablesConfigurationSchema#defaultDataStorage*
> * *org.apache.ignite.example.storage.PersistentPageMemoryStorageExample*
> * *org.apache.ignite.example.storage.VolatilePageMemoryStorageExample*
> * Failing tests.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-16881) Integrate MV-storage into current tx implementation

2022-06-22 Thread Alexander Lapin (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557415#comment-17557415
 ] 

Alexander Lapin commented on IGNITE-16881:
--

[~Sergey Uttsel] LGTM

> Integrate MV-storage into current tx implementation
> ---
>
> Key: IGNITE-16881
> URL: https://issues.apache.org/jira/browse/IGNITE-16881
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3, transaction3_rw
>  Time Spent: 11h
>  Remaining Estimate: 0h
>
> The VersionedRowStore contract is replaced with the new MV-storage API, which 
> should be used inside the tx protocol instead of the old VersionedRowStore one.
> Tx.Phase1



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-16881) Integrate MV-storage into current tx implementation

2022-06-22 Thread Alexander Lapin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Lapin updated IGNITE-16881:
-
Reviewer: Alexander Lapin

> Integrate MV-storage into current tx implementation
> ---
>
> Key: IGNITE-16881
> URL: https://issues.apache.org/jira/browse/IGNITE-16881
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3, transaction3_rw
>  Time Spent: 11h
>  Remaining Estimate: 0h
>
> The VersionedRowStore contract is replaced with the new MV-storage API, which 
> should be used inside the tx protocol instead of the old VersionedRowStore one.
> Tx.Phase1



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17208) Change storage engine names based on PageMemory

2022-06-22 Thread Kirill Tkalenko (Jira)
Kirill Tkalenko created IGNITE-17208:


 Summary: Change storage engine names based on PageMemory
 Key: IGNITE-17208
 URL: https://issues.apache.org/jira/browse/IGNITE-17208
 Project: Ignite
  Issue Type: Improvement
Reporter: Kirill Tkalenko
 Fix For: 3.0.0-alpha6


PageMemory based storage engine (*pagememory*) has been split into two engines, 
*aimem* (in-memory) and *aipersist* (persistent); these names don't seem as 
convenient and memorable as *rocksdb*, so we need something similar for them.

What needs to be changed in the code:
* 
org.apache.ignite.internal.storage.pagememory.VolatilePageMemoryStorageEngine#ENGINE_NAME
* 
org.apache.ignite.internal.storage.pagememory.PersistentPageMemoryStorageEngine#ENGINE_NAME
* 
org.apache.ignite.configuration.schemas.table.TablesConfigurationSchema#defaultDataStorage
* org.apache.ignite.example.storage.PersistentPageMemoryStorageExample
* org.apache.ignite.example.storage.VolatilePageMemoryStorageExample
* Failing tests.




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17131) Wrong result if subquery is on the left child of LEFT JOIN operator

2022-06-22 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-17131:
-
Fix Version/s: 2.14

> Wrong result if subquery is on the left child of LEFT JOIN operator
> ---
>
> Key: IGNITE-17131
> URL: https://issues.apache.org/jira/browse/IGNITE-17131
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.13
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Critical
> Fix For: 2.14
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, a query with a filtering subquery on the left side of a LEFT JOIN 
> returns an invalid result. The problem appears to be somewhere inside 
> {{GridSubqueryJoinOptimizer}}.
> The possible workaround is to turn the join rewriting off by setting the 
> system property {{IGNITE_ENABLE_SUBQUERY_REWRITE_OPTIMIZATION}} to false.
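The workaround above can also be applied programmatically, before the node starts. A minimal sketch (the property can equally be passed as a `-D` JVM argument; the class name here is made up for illustration):

```java
public class DisableSubqueryRewrite {
    public static void main(String[] args) {
        // Must be set before the node starts, i.e. before Ignition.start(...),
        // so the SQL engine picks it up when it initializes.
        System.setProperty("IGNITE_ENABLE_SUBQUERY_REWRITE_OPTIMIZATION", "false");

        System.out.println(
            System.getProperty("IGNITE_ENABLE_SUBQUERY_REWRITE_OPTIMIZATION"));
    }
}
```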



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17183) Sql. Test testCurrentDateTimeTimeStamp fails

2022-06-22 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-17183:
-
Labels: ignite-3  (was: )

> Sql. Test testCurrentDateTimeTimeStamp fails
> 
>
> Key: IGNITE-17183
> URL: https://issues.apache.org/jira/browse/IGNITE-17183
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Reporter: Konstantin Orlov
>Assignee: Konstantin Orlov
>Priority: Blocker
>  Labels: ignite-3
> Fix For: 3.0.0-alpha6
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The problem is caused by the test itself. Or, to be more precise, by converting 
> the date value to a string and then comparing.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17178) Notify preconfigured event listeners about node start after resources were injected

2022-06-22 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-17178:
-
Summary: Notify preconfigured event listeners about node start after 
resources were injected  (was: Notify preconfigured event listeners about node 
start after resources were injected.)

> Notify preconfigured event listeners about node start after resources were 
> injected
> ---
>
> Key: IGNITE-17178
> URL: https://issues.apache.org/jira/browse/IGNITE-17178
> Project: Ignite
>  Issue Type: Task
>Reporter: Mikhail Petrov
>Assignee: Mikhail Petrov
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We need to notify preconfigured event listeners about node start after 
> resources were injected.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17176) Add 'set partition' in index api

2022-06-22 Thread Vyacheslav Koptilin (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Koptilin updated IGNITE-17176:
-
Labels: IEP-71 ise  (was: ise)

> Add 'set partition' in index api
> 
>
> Key: IGNITE-17176
> URL: https://issues.apache.org/jira/browse/IGNITE-17176
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Dmitry Frolov
>Priority: Minor
>  Labels: IEP-71, ise
>
> In the SQL API it's possible to 'set partition' to speed up a query. It would 
> be great to have the same feature in the Index API (IEP-71), with support in 
> both thin and thick clients. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-17052) Thin 3.0: Implement query metadata

2022-06-22 Thread Pavel Tupitsyn (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557378#comment-17557378
 ] 

Pavel Tupitsyn commented on IGNITE-17052:
-

Merged to main: 
https://github.com/apache/ignite-3/commit/4119f80b6ddcb56af522ba61f038b82c64382da7

> Thin 3.0: Implement query metadata
> --
>
> Key: IGNITE-17052
> URL: https://issues.apache.org/jira/browse/IGNITE-17052
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> * Pass ColumnMetadata to the client
> * Use metadata to simplify row serialization



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (IGNITE-17206) .NET: Thin client: Add IgniteSet

2022-06-22 Thread Pavel Tupitsyn (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-17206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-17206:

Labels: .NET  (was: )

> .NET: Thin client: Add IgniteSet
> 
>
> Key: IGNITE-17206
> URL: https://issues.apache.org/jira/browse/IGNITE-17206
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms, thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: .NET
>
> Add IgniteSet data structure to .NET thin client.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17207) .NET: Add IgniteSet

2022-06-22 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-17207:
---

 Summary: .NET: Add IgniteSet
 Key: IGNITE-17207
 URL: https://issues.apache.org/jira/browse/IGNITE-17207
 Project: Ignite
  Issue Type: New Feature
  Components: platforms
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn


Add IgniteSet to "thick" .NET API.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (IGNITE-17206) .NET: Thin client: Add IgniteSet

2022-06-22 Thread Pavel Tupitsyn (Jira)
Pavel Tupitsyn created IGNITE-17206:
---

 Summary: .NET: Thin client: Add IgniteSet
 Key: IGNITE-17206
 URL: https://issues.apache.org/jira/browse/IGNITE-17206
 Project: Ignite
  Issue Type: New Feature
  Components: platforms, thin client
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn


Add IgniteSet data structure to .NET thin client.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (IGNITE-17052) Thin 3.0: Implement query metadata

2022-06-22 Thread Igor Sapego (Jira)


[ 
https://issues.apache.org/jira/browse/IGNITE-17052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557347#comment-17557347
 ] 

Igor Sapego commented on IGNITE-17052:
--

[~ptupitsyn] looks good to me

> Thin 3.0: Implement query metadata
> --
>
> Key: IGNITE-17052
> URL: https://issues.apache.org/jira/browse/IGNITE-17052
> Project: Ignite
>  Issue Type: Improvement
>  Components: thin client
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Major
>  Labels: ignite-3
>
> * Pass ColumnMetadata to the client
> * Use metadata to simplify row serialization



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (IGNITE-16893) Implement HLC and clock synchronization logic

2022-06-22 Thread Sergey Uttsel (Jira)


 [ 
https://issues.apache.org/jira/browse/IGNITE-16893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Uttsel reassigned IGNITE-16893:
--

Assignee: Sergey Uttsel

> Implement HLC and clock synchronization logic
> -
>
> Key: IGNITE-16893
> URL: https://issues.apache.org/jira/browse/IGNITE-16893
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexander Lapin
>Assignee: Sergey Uttsel
>Priority: Major
>  Labels: ignite-3
> Attachments: Screenshot from 2022-04-22 16-14-27.png, Screenshot from 
> 2022-04-22 16-16-10.png
>
>
> HLC is similar to LC, but has a physical meaning and accurately represents 
> physical time within a bounded error.
> The rules for updating HLC:
> {code:java}
> Initially l.j = 0; c.j = 0 
> Send or local event 
> l’.j = l.j; 
> l.j = max(l’.j, pt.j); 
> If (l.j = l’.j) then c.j = c.j + 1
> Else c.j = 0;
> Timestamp with l.j, c.j 
> {code}
> Receive event of message m
> {code:java}
> l’.j = l.j; 
> l.j = max(l’.j, l.m, pt.j); 
> If (l.j = l’.j =l.m) then c.j = max(c.j, c.m)+1 
> Elseif (l.j =l’.j) then c.j = c.j + 1 
> Elseif (l.j =l.m) then c.j = c.m + 1 
> Else c.j = 0 
> Timestamp with l.j, c.j{code}
> !Screenshot from 2022-04-22 16-14-27.png!
> The following statements hold true for HLC, represented by (l,c):
>  # For any two events e and f, e <- f => (l.e; c.e) < (l.f; c.f) // 
> Lexicographic comparison
>  # For any event f, l.f >= pt.f
>  # l.f > pt.f => (∃g : g <- f && pt.g = l.f)
>  # For any event f, |l.f - pt.f| < ϶,
> where ϶ represents clock sync uncertainty. For NTP, ϶ ~= 2 * ntp_offset
>  # For any event f, c.f = k && k > 0 => (
> ∃g1; g2; … ; gk : (∀j: 1 <= j < k : gj <- gj+1) && (∀j : 1 <= j < k : l.(gj) 
> = l.f) && (gk hb f)
> )
>  # For any event f, c.f <= |\{g : g <- f && l.g = l.f}|
>  # For any event f, c.f <=  N * (϶ + 1), if a physical clock of a node is 
> incremented by at least one between any two events on that node
> HLC *may overflow* if physical time is not catching up with logical time. In 
> this case only the causal counter can be used to move the timestamp forward. 
> The *bounded time staleness requirement* is aimed at fixing this.
> HLC is not limited by NTP and can be used with other time sync protocols, 
> like PTP.
> h4. Consistent Cut
> HLC can also be used to take a consistent snapshot at a logical time _t_.
> The consistent cut should capture all causal relationships. It can be written 
> down as follows:
> *(e ∈ C) && (e’ <- e) => e’ ∈ C*
> [https://www.cs.cornell.edu/courses/cs5414/2010fa/publications/BM93.pdf]
> !Screenshot from 2022-04-22 16-16-10.png!
> The picture shows two cuts: C is a consistent cut and C’ is an inconsistent 
> cut.
> HLC allows taking a consistent cut at logical time l=t, c=K
> A global time, corresponding to the cut, lies in [t-϶,t]
> h4. HLC Update Rules
> We assume two major event sources for updating HLC for enlisted nodes:
> h5. RAFT events
> RAFT events help to synchronize HLC between RAFT replicas. All RAFT 
> communications are initiated by a leader, and only one leader can exist at a 
> time. This enforces monotonic growth of HLC on raft group replicas. 
> RequestVote and AppendEntries RPC calls are enriched with sender’s HLC. The 
> HLC update rules are applied on receiving messages. RAFT lease intervals are 
> bound to the HLC range.
> h5. Transaction protocol events
> Another source of HLC sync is a transaction protocol. Each message involved 
> in the execution of the transaction carries the sender’s HLC and updates 
> receiver HLC according to rules.
>  
> Tx.Phase1
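The send/receive update rules quoted in the description above can be sketched as a small, self-contained class. This is an illustration only, not Ignite's actual implementation; the class name and the flat `(l, c)` long pair are assumptions, and `pt.j` is passed in explicitly as `physicalTime`:

```java
// Sketch of the HLC update rules from the description above.
// l = logical part derived from physical time, c = causal counter.
public class HybridClock {
    private long l; // highest physical time observed so far
    private long c; // causal counter; breaks ties when l does not advance

    /** Send or local event: l.j = max(l'.j, pt.j); bump c only if l stayed the same. */
    public synchronized long[] onSendOrLocalEvent(long physicalTime) {
        long prev = l;
        l = Math.max(prev, physicalTime);
        c = (l == prev) ? c + 1 : 0;
        return new long[] {l, c};
    }

    /** Receive event of message m carrying (lm, cm): l.j = max(l'.j, l.m, pt.j). */
    public synchronized long[] onReceive(long lm, long cm, long physicalTime) {
        long prev = l;
        l = Math.max(Math.max(prev, lm), physicalTime);
        if (l == prev && l == lm)
            c = Math.max(c, cm) + 1;
        else if (l == prev)
            c = c + 1;
        else if (l == lm)
            c = cm + 1;
        else
            c = 0;
        return new long[] {l, c};
    }
}
```

Timestamps compare lexicographically on (l, c), so a receive always yields a timestamp greater than both the sender's timestamp and the receiver's previous one, which is exactly statement 1 of the list above.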



--
This message was sent by Atlassian Jira
(v8.20.7#820007)