[jira] [Commented] (FLINK-6761) Limitation for maximum state size per key in RocksDB backend

2021-04-29 Thread Flink Jira Bot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17336987#comment-17336987
 ] 

Flink Jira Bot commented on FLINK-6761:
---

This issue was labeled "stale-critical" 7 days ago and has not received any updates 
since, so it is being deprioritized. If this ticket is actually Critical, please 
raise the priority and ask a committer to assign you the issue, or revive the 
public discussion.


> Limitation for maximum state size per key in RocksDB backend
> 
>
> Key: FLINK-6761
> URL: https://issues.apache.org/jira/browse/FLINK-6761
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.2.1, 1.3.0
>Reporter: Stefan Richter
>Priority: Critical
>  Labels: stale-critical
>
> RocksDB's JNI bridge allows for putting and getting {{byte[]}} as keys and 
> values. 
> States that internally use RocksDB's merge operator, e.g. {{ListState}}, can 
> currently merge multiple {{byte[]}} under one key, which will be internally 
> concatenated to one value in RocksDB. 
> This becomes problematic as soon as the accumulated state size under one key 
> grows larger than {{Integer.MAX_VALUE}} bytes. Whenever Java code tries to 
> access a state that grew beyond this limit through merging, we will encounter 
> an {{ArrayIndexOutOfBoundsException}} at best and a segfault at worst.
> This behaviour is problematic, because RocksDB silently stores states that 
> exceed this limitation, but on access (e.g. in checkpointing), the code fails 
> unexpectedly.
> I think the only proper solution to this is for RocksDB's JNI bridge to build 
> on {{(Direct)ByteBuffer}} - which can get around the size limitation - as 
> input and output types, instead of simple {{byte[]}}.
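The 2 GiB ceiling described above follows directly from Java array indexing: a {{byte[]}} is indexed by {{int}}, so no merged value larger than {{Integer.MAX_VALUE}} bytes can be materialized on the Java side of the JNI bridge. A minimal, self-contained sketch of how repeated merge operands cross that limit (plain Java with no Flink or RocksDB dependency; the class name and the 100 MB operand size are illustrative, not taken from Flink code):

```java
import java.util.List;

public class MergeLimitSketch {

    // Simulates RocksDB's merge operator for list-like state: all operands
    // appended under one key are concatenated into a single value.
    static long mergedSize(List<byte[]> operands) {
        long total = 0;
        for (byte[] op : operands) {
            total += op.length;
        }
        return total;
    }

    // A merged value can only be read back into a Java byte[] if its size
    // fits in a (non-negative) int.
    static boolean fitsInByteArray(long size) {
        return size <= Integer.MAX_VALUE;
    }

    public static void main(String[] args) {
        // Suppose each ListState.add() appends a 100 MB serialized element.
        long perAdd = 100L * 1024 * 1024;
        long total = 0;
        int adds = 0;
        while (fitsInByteArray(total + perAdd)) {
            total += perAdd;
            adds++;
        }
        System.out.println(adds);   // prints 20
        System.out.println(total);  // prints 2097152000
    }
}
```

With 100 MB operands, only 20 adds stay below the 2^31 - 1 boundary; RocksDB would happily store the 21st, but reading the merged value back would require a byte[] larger than Java allows, which is exactly the silent-write/failing-read asymmetry the ticket describes.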



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-6761) Limitation for maximum state size per key in RocksDB backend

2021-04-22 Thread Flink Jira Bot (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17329076#comment-17329076
 ] 

Flink Jira Bot commented on FLINK-6761:
---

This critical issue is unassigned, and neither it nor any of its sub-tasks has 
been updated in 7 days, so it has been labeled "stale-critical". If this ticket 
is indeed critical, please assign yourself or post an update, and then remove 
the label. Otherwise, the issue will be deprioritized in 7 days.



[jira] [Commented] (FLINK-6761) Limitation for maximum state size per key in RocksDB backend

2017-07-04 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16073788#comment-16073788
 ] 

mingleizhang commented on FLINK-6761:
-

+1



[jira] [Commented] (FLINK-6761) Limitation for maximum state size per key in RocksDB backend

2017-05-29 Thread Stefan Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16028363#comment-16028363
 ] 

Stefan Richter commented on FLINK-6761:
---

Opened a corresponding issue in the RocksDB tracker:

https://github.com/facebook/rocksdb/issues/2383
