[jira] [Comment Edited] (FLINK-18729) Make flink streaming kafka producer auto discovery partition

2020-07-27 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165593#comment-17165593
 ] 

shengjk1 edited comment on FLINK-18729 at 7/27/20, 11:53 AM:
-

If there are no further questions, I will write the code and update the documentation.


was (Author: shengjk1):
If there are no further questions, I will write the code for it.

> Make flink streaming kafka producer auto discovery partition
> 
>
> Key: FLINK-18729
> URL: https://issues.apache.org/jira/browse/FLINK-18729
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0, 1.10.1, 1.11.0, 1.11.1
>Reporter: shengjk1
>Priority: Major
>
> In the streaming API, the Flink Kafka producer can't automatically discover 
> partitions when the Kafka partitions change, such as when a partition is 
> added. Maybe we can improve this with the parameter 
> flink.partition-discovery.interval-millis
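For context, this property already exists on the consumer side; below is a
minimal sketch of the current consumer usage and of the proposed producer
analog (the producer part is the proposal here, not an existing API):

{code:java}
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "broker:9092");
// Consumer side today: newly added partitions are picked up every 30 s.
props.setProperty("flink.partition-discovery.interval-millis", "30000");

FlinkKafkaConsumer<String> consumer =
      new FlinkKafkaConsumer<>("topic", new SimpleStringSchema(), props);

// Proposed: the producer would honor the same property and periodically
// refresh its cached partition list instead of fetching it only once.
FlinkKafkaProducer<String> producer =
      new FlinkKafkaProducer<>("topic", new SimpleStringSchema(), props);
{code}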



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18729) Flink streaming kafka producer auto discovery partition

2020-07-27 Thread shengjk1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-18729:
-
Description: In the streaming API, the Flink Kafka producer can't automatically 
discover partitions when the Kafka partitions change, such as when a partition 
is added. Maybe we can improve this with the parameter 
flink.partition-discovery.interval-millis  (was: Now, in the streaming API, the 
Flink Kafka producer can't automatically discover partitions when the Kafka 
partitions change, such as when a partition is added. Maybe we can improve this 
with the parameter flink.partition-discovery.interval-millis)

> Flink streaming kafka producer auto discovery partition
> ---
>
> Key: FLINK-18729
> URL: https://issues.apache.org/jira/browse/FLINK-18729
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0, 1.10.1, 1.11.0, 1.11.1
>Reporter: shengjk1
>Priority: Major
>
> In the streaming API, the Flink Kafka producer can't automatically discover 
> partitions when the Kafka partitions change, such as when a partition is 
> added. Maybe we can improve this with the parameter 
> flink.partition-discovery.interval-millis



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18729) Make flink streaming kafka producer auto discovery partition

2020-07-27 Thread shengjk1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-18729:
-
Summary: Make flink streaming kafka producer auto discovery partition  
(was: Flink streaming kafka producer auto discovery partition)

> Make flink streaming kafka producer auto discovery partition
> 
>
> Key: FLINK-18729
> URL: https://issues.apache.org/jira/browse/FLINK-18729
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0, 1.10.1, 1.11.0, 1.11.1
>Reporter: shengjk1
>Priority: Major
>
> In the streaming API, the Flink Kafka producer can't automatically discover 
> partitions when the Kafka partitions change, such as when a partition is 
> added. Maybe we can improve this with the parameter 
> flink.partition-discovery.interval-millis



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18729) Flink streaming kafka producer auto discovery partition

2020-07-27 Thread shengjk1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-18729:
-
Description: Now, in the streaming API, the Flink Kafka producer can't 
automatically discover partitions when the Kafka partitions change, such as 
when a partition is added. Maybe we can improve this with the parameter 
flink.partition-discovery.interval-millis  (was: Now, the Flink streaming Kafka 
producer can't automatically discover partitions when the Kafka partitions 
change, such as when a partition is added. Maybe we can improve this with the 
parameter flink.partition-discovery.interval-millis)

> Flink streaming kafka producer auto discovery partition
> ---
>
> Key: FLINK-18729
> URL: https://issues.apache.org/jira/browse/FLINK-18729
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0, 1.10.1, 1.11.0, 1.11.1
>Reporter: shengjk1
>Priority: Major
>
> Now, in the streaming API, the Flink Kafka producer can't automatically 
> discover partitions when the Kafka partitions change, such as when a partition 
> is added. Maybe we can improve this with the parameter 
> flink.partition-discovery.interval-millis



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18729) Flink streaming kafka producer auto discovery partition

2020-07-27 Thread shengjk1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-18729:
-
Description: Now, the Flink streaming Kafka producer can't automatically 
discover partitions when the Kafka partitions change, such as when a partition 
is added. Maybe we can improve this with the parameter 
flink.partition-discovery.interval-millis  (was: Now, the Flink streaming Kafka 
producer can't automatically discover partitions when the Kafka partitions 
change. Maybe we can improve this with the parameter 
flink.partition-discovery.interval-millis)

> Flink streaming kafka producer auto discovery partition
> ---
>
> Key: FLINK-18729
> URL: https://issues.apache.org/jira/browse/FLINK-18729
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0, 1.10.1, 1.11.0, 1.11.1
>Reporter: shengjk1
>Priority: Major
>
> Now, the Flink streaming Kafka producer can't automatically discover 
> partitions when the Kafka partitions change, such as when a partition is 
> added. Maybe we can improve this with the parameter 
> flink.partition-discovery.interval-millis



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18729) Flink streaming kafka producer auto discovery partition

2020-07-27 Thread shengjk1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-18729:
-
Description: Now, the Flink streaming Kafka producer can't automatically 
discover partitions when the Kafka partitions change. Maybe we can improve this 
with the parameter flink.partition-discovery.interval-millis  (was: Now, the 
Flink Kafka producer can't automatically discover partitions when the Kafka 
partitions change. Maybe we can improve this with the parameter 
flink.partition-discovery.interval-millis)
Summary: Flink streaming kafka producer auto discovery partition  (was: 
Flink kafka producer auto discovery partition)

> Flink streaming kafka producer auto discovery partition
> ---
>
> Key: FLINK-18729
> URL: https://issues.apache.org/jira/browse/FLINK-18729
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0, 1.10.1, 1.11.0, 1.11.1
>Reporter: shengjk1
>Priority: Major
>
> Now, the Flink streaming Kafka producer can't automatically discover 
> partitions when the Kafka partitions change. Maybe we can improve this with 
> the parameter flink.partition-discovery.interval-millis



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18729) Flink kafka producer auto discovery partition

2020-07-27 Thread shengjk1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-18729:
-
Summary: Flink kafka producer auto discovery partition  (was: kafka)

> Flink kafka producer auto discovery partition
> -
>
> Key: FLINK-18729
> URL: https://issues.apache.org/jira/browse/FLINK-18729
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0, 1.10.1, 1.11.0, 1.11.1
>Reporter: shengjk1
>Priority: Major
>
> Now, the Flink Kafka producer can't automatically discover partitions when 
> the Kafka partitions change. Maybe we can improve this with the parameter 
> flink.partition-discovery.interval-millis



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18729) kafka

2020-07-27 Thread shengjk1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-18729:
-
Description: Now, the Flink Kafka producer can't automatically discover 
partitions when the Kafka partitions change. Maybe we can improve this with the 
parameter flink.partition-discovery.interval-millis  (was: Now, the Flink Kafka 
producer can't automatically discover partitions when the Kafka partitions 
change. maybe we can improve this with the parameter 
flink.partition-discovery.interval-millis)

> kafka
> -
>
> Key: FLINK-18729
> URL: https://issues.apache.org/jira/browse/FLINK-18729
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0, 1.10.1, 1.11.0, 1.11.1
>Reporter: shengjk1
>Priority: Major
>
> Now, the Flink Kafka producer can't automatically discover partitions when 
> the Kafka partitions change. Maybe we can improve this with the parameter 
> flink.partition-discovery.interval-millis



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18729) kafka

2020-07-27 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165593#comment-17165593
 ] 

shengjk1 commented on FLINK-18729:
--

If there are no further questions, I will write the code for it.

> kafka
> -
>
> Key: FLINK-18729
> URL: https://issues.apache.org/jira/browse/FLINK-18729
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0, 1.10.1, 1.11.0, 1.11.1
>Reporter: shengjk1
>Priority: Major
>
> Now, the Flink Kafka producer can't automatically discover partitions when 
> the Kafka partitions change. Maybe we can improve this with the parameter 
> flink.partition-discovery.interval-millis



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18729) kafka

2020-07-27 Thread shengjk1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-18729:
-
  Component/s: Connectors / Kafka
Affects Version/s: 1.10.0
   1.10.1
   1.11.0
   1.11.1
  Description: Now, the Flink Kafka producer can't automatically discover 
partitions when the Kafka partitions change. Maybe we can improve this with the 
parameter flink.partition-discovery.interval-millis

> kafka
> -
>
> Key: FLINK-18729
> URL: https://issues.apache.org/jira/browse/FLINK-18729
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.10.0, 1.10.1, 1.11.0, 1.11.1
>Reporter: shengjk1
>Priority: Major
>
> Now, the Flink Kafka producer can't automatically discover partitions when 
> the Kafka partitions change. Maybe we can improve this with the parameter 
> flink.partition-discovery.interval-millis



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18729) kafka

2020-07-27 Thread shengjk1 (Jira)
shengjk1 created FLINK-18729:


 Summary: kafka
 Key: FLINK-18729
 URL: https://issues.apache.org/jira/browse/FLINK-18729
 Project: Flink
  Issue Type: Improvement
Reporter: shengjk1






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-30 Thread shengjk1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-7289:

Comment: was deleted

(was: Thanks [~yunta] and [~mikekap]. I have solved it. As Mike Kaplinskiy 
said, I put the jar, which only includes BackendOptions.class, on the Flink 
boot classpath, not on my application code classpath.)

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.
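As a rough illustration of that interplay, here is a back-of-the-envelope bound
distilled from the RocksDB wiki pages linked above (the names are descriptive,
not actual RocksDB option keys):

{code}
memory_per_rocksdb_instance ~=
      block_cache_size
    + write_buffer_size * max_write_buffer_number * num_column_families
    + index_and_filter_blocks   (unbounded unless cached in the block cache)
{code}

Every term is configured independently, and a TaskManager can host several such
instances, which is why no single knob caps the total.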



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-28 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961652#comment-16961652
 ] 

shengjk1 edited comment on FLINK-7289 at 10/29/19 3:55 AM:
---

Thanks [~yunta] and [~mikekap]. I have solved it. As Mike Kaplinskiy said, I 
put the jar, which only includes BackendOptions.class, on the Flink boot 
classpath, not on my application code classpath.
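For reference, a minimal sketch of that setup (the jar and package names are
illustrative, not from this thread):

{code}
# flink-conf.yaml: register the factory class that lives on the boot classpath
state.backend.rocksdb.options-factory: com.example.BackendOptions

# ship the jar in Flink's lib/ directory instead of inside the user jar
cp backend-options.jar $FLINK_HOME/lib/
{code}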


was (Author: shengjk1):
Thanks [~yunta] and [~mikekap]. I have solved it. As Mike Kaplinskiy said, I 
put the jar, which only includes BackendOptions.class, on the Flink boot 
classpath, not on my application code classpath.

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-28 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16961652#comment-16961652
 ] 

shengjk1 commented on FLINK-7289:
-

Thanks [~yunta] and [~mikekap]. I have solved it. As Mike Kaplinskiy said, I 
put the jar, which only includes BackendOptions.class, on the Flink boot 
classpath, not on my application code classpath.

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-24 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16958725#comment-16958725
 ] 

shengjk1 commented on FLINK-7289:
-

[~yunta] I don't know how to get RocksDB's LOG file.

This is my code:
{code:java}
class BackendOptions implements ConfigurableOptionsFactory {
   protected static final org.slf4j.Logger logger =
         LoggerFactory.getLogger(Main.class);

   // 1 << 30 = 1 GiB memtable budget, charged against a 1 << 18 = 256 KiB LRU cache
   private static final WriteBufferManager writeBufferManager =
         new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

   @Override
   public DBOptions createDBOptions(DBOptions currentOptions) {
      return currentOptions
            .setInfoLogLevel(InfoLogLevel.INFO_LEVEL)
            // forward RocksDB's native log lines into the application log
            .setLogger(new Logger(currentOptions) {
               @Override
               protected void log(InfoLogLevel infoLogLevel, String s) {
                  System.out.println("rocksdb==" + s);
                  logger.debug("rocksdb =={}", s);
               }
            })
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setMaxBackgroundFlushes(3)
            .setWriteBufferManager(writeBufferManager);
   }

   @Override
   public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
      return currentOptions
            .setMinWriteBufferNumberToMerge(2)
            .setMaxWriteBufferNumber(5)
            .setOptimizeFiltersForHits(true)
            .setMaxWriteBufferNumberToMaintain(3)
            .setTableFormatConfig(
                  new BlockBasedTableConfig()
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setBlockCacheSize(256 * 1024 * 1024)
                        // increases read amplification but decreases memory
                        // usage and space amplification
                        .setBlockSize(4 * 32 * 1024));
   }

   @Override
   public OptionsFactory configure(Configuration configuration) {
      return this;
   }
}
{code}
This is my CLI:
{code:java}
-p 3 -ytm 2048M -ynm early -m yarn-cluster -yD "state.backend.rocksdb.ttl.compaction.filter.enabled=true"
{code}
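One standard way to get at RocksDB's native LOG file is to redirect it to a
known directory rather than intercepting log lines in code. A minimal sketch
using the stock RocksDB Java API (the directory is an example):

{code:java}
@Override
public DBOptions createDBOptions(DBOptions currentOptions) {
   return currentOptions
         .setInfoLogLevel(InfoLogLevel.INFO_LEVEL)
         // RocksDB writes its LOG file under this directory instead of
         // next to the SST files in the per-operator data directory.
         .setDbLogDir("/tmp/rocksdb-logs");
}
{code}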

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-22 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956749#comment-16956749
 ] 

shengjk1 edited comment on FLINK-7289 at 10/22/19 7:11 AM:
---

I also rebuilt Flink and used writeBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with 
no OOM events. But eventually it still hit OOM, so I think we also need other, 
easier-to-use parameters to solve this.

 


was (Author: shengjk1):
I also rebuilt Flink and used writeBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with 
no OOM events. But eventually it still hit OOM, so I think we also need other, 
easier-to-use parameters to solve this.
{code:java}
class BackendOptions implements ConfigurableOptionsFactory {

   private static final WriteBufferManager writeBufferManager =
         new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

   @Override
   public DBOptions createDBOptions(DBOptions currentOptions) {
      return currentOptions
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setMaxBackgroundFlushes(3)
            .setWriteBufferManager(writeBufferManager);
   }

   @Override
   public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
      return currentOptions
            .setLevelCompactionDynamicLevelBytes(true)
            .setMinWriteBufferNumberToMerge(2)
            .setMaxWriteBufferNumber(5)
            .setOptimizeFiltersForHits(true)
            .setMaxWriteBufferNumberToMaintain(3)
            .setTableFormatConfig(
                  new BlockBasedTableConfig()
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setBlockCacheSize(256 * 1024 * 1024)
                        .setBlockSize(4 * 32 * 1024));
   }

   @Override
   public OptionsFactory configure(Configuration configuration) {
      return this;
   }
}
{code}
 

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-22 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956749#comment-16956749
 ] 

shengjk1 edited comment on FLINK-7289 at 10/22/19 7:10 AM:
---

I also rebuilt Flink and used writeBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with 
no OOM events. But eventually it still hit OOM, so I think we also need other, 
easier-to-use parameters to solve this.
{code:java}
class BackendOptions implements ConfigurableOptionsFactory {

   private static final WriteBufferManager writeBufferManager =
         new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

   @Override
   public DBOptions createDBOptions(DBOptions currentOptions) {
      return currentOptions
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setMaxBackgroundFlushes(3)
            .setWriteBufferManager(writeBufferManager);
   }

   @Override
   public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
      return currentOptions
            .setLevelCompactionDynamicLevelBytes(true)
            .setMinWriteBufferNumberToMerge(2)
            .setMaxWriteBufferNumber(5)
            .setOptimizeFiltersForHits(true)
            .setMaxWriteBufferNumberToMaintain(3)
            .setTableFormatConfig(
                  new BlockBasedTableConfig()
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setBlockCacheSize(256 * 1024 * 1024)
                        .setBlockSize(4 * 32 * 1024));
   }

   @Override
   public OptionsFactory configure(Configuration configuration) {
      return this;
   }
}
{code}
 


was (Author: shengjk1):
I also rebuilt Flink and used writeBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with 
no OOM events. But eventually it still hit OOM, so I think we also need other, 
easier-to-use parameters to solve this.
{code:java}
class BackendOptions implements ConfigurableOptionsFactory {

   private static final WriteBufferManager writeBufferManager =
         new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

   @Override
   public DBOptions createDBOptions(DBOptions currentOptions) {
      return currentOptions
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setMaxBackgroundFlushes(3)
            .setWriteBufferManager(writeBufferManager);
   }

   @Override
   public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
      return currentOptions
            .setLevelCompactionDynamicLevelBytes(true)
            .setMinWriteBufferNumberToMerge(2)
            .setMaxWriteBufferNumber(5)
            .setOptimizeFiltersForHits(true)
            .setMaxWriteBufferNumberToMaintain(3)
            .setTableFormatConfig(
                  new BlockBasedTableConfig()
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setBlockCacheSize(256 * 1024 * 1024)
                        .setBlockSize(4 * 32 * 1024));
   }

   @Override
   public OptionsFactory configure(Configuration configuration) {
      return this;
   }
}
{code}
 

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in 

[jira] [Comment Edited] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-22 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956749#comment-16956749
 ] 

shengjk1 edited comment on FLINK-7289 at 10/22/19 7:09 AM:
---

I also rebuilt Flink and used writeBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with 
no OOM events. But eventually it still hit OOM, so I think we also need other, 
easier-to-use parameters to solve this.
{code:java}
class BackendOptions implements ConfigurableOptionsFactory {

   private static final WriteBufferManager writeBufferManager =
         new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

   @Override
   public DBOptions createDBOptions(DBOptions currentOptions) {
      return currentOptions
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setMaxBackgroundFlushes(3)
            .setWriteBufferManager(writeBufferManager);
   }

   @Override
   public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
      return currentOptions
            .setLevelCompactionDynamicLevelBytes(true)
            .setMinWriteBufferNumberToMerge(2)
            .setMaxWriteBufferNumber(5)
            .setOptimizeFiltersForHits(true)
            .setMaxWriteBufferNumberToMaintain(3)
            .setTableFormatConfig(
                  new BlockBasedTableConfig()
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setBlockCacheSize(256 * 1024 * 1024)
                        .setBlockSize(4 * 32 * 1024));
   }

   @Override
   public OptionsFactory configure(Configuration configuration) {
      return this;
   }
}
{code}
 


was (Author: shengjk1):
I also rebuilt Flink and used writeBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with 
no OOM events. But eventually it still hit OOM, so I think we also need other, 
easier-to-use parameters to solve this.
{code:java}
class BackendOptions implements ConfigurableOptionsFactory {

   private static final WriteBufferManager writeBufferManager =
         new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

   @Override
   public DBOptions createDBOptions(DBOptions currentOptions) {
      return currentOptions
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setMaxBackgroundFlushes(3)
            .setWriteBufferManager(writeBufferManager);
   }

   @Override
   public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
      return currentOptions
            .setLevelCompactionDynamicLevelBytes(true)
            .setMinWriteBufferNumberToMerge(2)
            .setMaxWriteBufferNumber(5)
            .setOptimizeFiltersForHits(true)
            .setMaxWriteBufferNumberToMaintain(3)
            .setTableFormatConfig(
                  new BlockBasedTableConfig()
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setBlockCacheSize(256 * 1024 * 1024)
                        // increases read amplification but decreases memory
                        // usage and space amplification
                        .setBlockSize(4 * 32 * 1024));
   }

   @Override
   public OptionsFactory configure(Configuration configuration) {
      return this;
   }
}
{code}
 

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same 

[jira] [Comment Edited] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-22 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956749#comment-16956749
 ] 

shengjk1 edited comment on FLINK-7289 at 10/22/19 7:09 AM:
---

I also rebuilt Flink and used writeBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with 
no OOM events. But eventually it still hit OOM, so I think we also need other, 
easier-to-use parameters to solve this.
{code:java}
class BackendOptions implements ConfigurableOptionsFactory {

   private static final WriteBufferManager writeBufferManager =
         new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

   @Override
   public DBOptions createDBOptions(DBOptions currentOptions) {
      return currentOptions
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setMaxBackgroundFlushes(3)
            .setWriteBufferManager(writeBufferManager);
   }

   @Override
   public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
      return currentOptions
            .setLevelCompactionDynamicLevelBytes(true)
            .setMinWriteBufferNumberToMerge(2)
            .setMaxWriteBufferNumber(5)
            .setOptimizeFiltersForHits(true)
            .setMaxWriteBufferNumberToMaintain(3)
            .setTableFormatConfig(
                  new BlockBasedTableConfig()
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setBlockCacheSize(256 * 1024 * 1024)
                        // increases read amplification but decreases memory
                        // usage and space amplification
                        .setBlockSize(4 * 32 * 1024));
   }

   @Override
   public OptionsFactory configure(Configuration configuration) {
      return this;
   }
}
{code}
 


was (Author: shengjk1):
I also rebuilt Flink and used writeBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with 
no OOM events. But eventually it still hit OOM, so I think we also need other, 
easier-to-use parameters to solve this.
class BackendOptions implements ConfigurableOptionsFactory {

   private static final WriteBufferManager writeBufferManager =
         new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

   @Override
   public DBOptions createDBOptions(DBOptions currentOptions) {
      return currentOptions
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setMaxBackgroundFlushes(3)
            .setWriteBufferManager(writeBufferManager);
   }

   @Override
   public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
      return currentOptions
            .setLevelCompactionDynamicLevelBytes(true)
            .setMinWriteBufferNumberToMerge(2)
            .setMaxWriteBufferNumber(5)
            .setOptimizeFiltersForHits(true)
            .setMaxWriteBufferNumberToMaintain(3)
            .setTableFormatConfig(
                  new BlockBasedTableConfig()
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setBlockCacheSize(256 * 1024 * 1024)
                        .setBlockSize(4 * 32 * 1024));
   }

   @Override
   public OptionsFactory configure(Configuration configuration) {
      return this;
   }
}

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make 

[jira] [Commented] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-22 Thread shengjk1 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956749#comment-16956749
 ] 

shengjk1 commented on FLINK-7289:
-

I also rebuilt Flink and used writeBufferManager. For one specific job I'm 
working on, I went from an OOM kill every 5-7 hours to at least 20 hours with 
no OOM events. But eventually it still hit OOM, so I think we also need other, 
easier-to-use parameters to solve this.
class BackendOptions implements ConfigurableOptionsFactory {

   private static final WriteBufferManager writeBufferManager =
         new WriteBufferManager(1 << 30, new LRUCache(1 << 18));

   @Override
   public DBOptions createDBOptions(DBOptions currentOptions) {
      return currentOptions
            .setMaxBackgroundJobs(4)
            .setUseFsync(false)
            .setMaxBackgroundFlushes(3)
            .setWriteBufferManager(writeBufferManager);
   }

   @Override
   public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
      return currentOptions
            .setLevelCompactionDynamicLevelBytes(true)
            .setMinWriteBufferNumberToMerge(2)
            .setMaxWriteBufferNumber(5)
            .setOptimizeFiltersForHits(true)
            .setMaxWriteBufferNumberToMaintain(3)
            .setTableFormatConfig(
                  new BlockBasedTableConfig()
                        .setCacheIndexAndFilterBlocks(true)
                        .setCacheIndexAndFilterBlocksWithHighPriority(true)
                        .setBlockCacheSize(256 * 1024 * 1024)
                        .setBlockSize(4 * 32 * 1024));
   }

   @Override
   public OptionsFactory configure(Configuration configuration) {
      return this;
   }
}

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2019-10-09 Thread shengjk1 (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-7289:

Affects Version/s: 1.7.2
   1.8.2
   1.9.0

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-11608) Translate the "Local Setup Tutorial" page into Chinese

2019-08-14 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907048#comment-16907048
 ] 

shengjk1 commented on FLINK-11608:
--

OK, I have appended an update commit, [~jark] [~WangHW].

> Translate the "Local Setup Tutorial" page into Chinese
> --
>
> Key: FLINK-11608
> URL: https://issues.apache.org/jira/browse/FLINK-11608
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: shengjk1
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/tutorials/local_setup.html
> The markdown file is located in flink/docs/tutorials/local_setup.zh.md
> The markdown file will be created once FLINK-11530 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-11608) Translate the "Local Setup Tutorial" page into Chinese

2019-08-13 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905962#comment-16905962
 ] 

shengjk1 edited comment on FLINK-11608 at 8/13/19 8:51 AM:
---

Hi [~WangHW], thanks for your comments. I also read the [“Flink Translation 
Specifications”|https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications],
 so we can start the work, [~jark]. But I am also confused: I have read [submit 
your 
contribution|https://flink.apache.org/contributing/contribute-documentation.html#submit-your-contribution],
 but I do not know how to update the *.md file. Should I create a new PR, or 
update and commit the change in the old PR?

 


was (Author: shengjk1):
Hi [~WangHW], thanks for your comments. I also read the [“Flink Translation 
Specifications”|https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications],
 so we can start the work. [~jark]. But I am also confused: I have read [submit 
your 
contribution|https://flink.apache.org/contributing/contribute-documentation.html#submit-your-contribution],
 but I do not know how to update the *.md file. Should I create a new PR, or 
update and commit the change in the old PR?

 

> Translate the "Local Setup Tutorial" page into Chinese
> --
>
> Key: FLINK-11608
> URL: https://issues.apache.org/jira/browse/FLINK-11608
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: shengjk1
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/tutorials/local_setup.html
> The markdown file is located in flink/docs/tutorials/local_setup.zh.md
> The markdown file will be created once FLINK-11530 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-11608) Translate the "Local Setup Tutorial" page into Chinese

2019-08-13 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905962#comment-16905962
 ] 

shengjk1 commented on FLINK-11608:
--

Hi [~WangHW], thanks for your comments. I also read the [“Flink Translation 
Specifications”|https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications],
 so we can start the work. [~jark]. But I am also confused: I have read [submit 
your 
contribution|https://flink.apache.org/contributing/contribute-documentation.html#submit-your-contribution],
 but I do not know how to update the *.md file. Should I create a new PR, or 
update and commit the change in the old PR?

 

> Translate the "Local Setup Tutorial" page into Chinese
> --
>
> Key: FLINK-11608
> URL: https://issues.apache.org/jira/browse/FLINK-11608
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: shengjk1
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/tutorials/local_setup.html
> The markdown file is located in flink/docs/tutorials/local_setup.zh.md
> The markdown file will be created once FLINK-11530 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-11789) Checkpoint directories are not cleaned up after job termination

2019-03-05 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16785178#comment-16785178
 ] 

shengjk1 commented on FLINK-11789:
--

I agree, [~yunta]. I also think it would be worthwhile to discuss the current 
checkpoint directory layout; I suggest creating a separate thread in order not 
to blow up the scope of this issue.

> Checkpoint directories are not cleaned up after job termination
> ---
>
> Key: FLINK-11789
> URL: https://issues.apache.org/jira/browse/FLINK-11789
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Major
>
> Flink currently does not clean up all checkpoint directories when a job 
> reaches a globally terminal state. Having configured the checkpoint directory 
> {{checkpoints}}, I observe that after cancelling the job {{JOB_ID}} there are 
> still
> {code}
> checkpoints/JOB_ID/shared
> checkpoints/JOB_ID/taskowned
> {code}
> I think it would be good if we would delete {{checkpoints/JOB_ID}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11789) Checkpoint directories are not cleaned up after job termination

2019-03-01 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781722#comment-16781722
 ] 

shengjk1 commented on FLINK-11789:
--

I think deleting {{checkpoints/JOB_ID}} should cover at least three situations:

  1. the job is killed by the user

  2. the job is canceled and the {{ExternalizedCheckpointCleanup}} is 
{{DELETE_ON_CANCELLATION}}

  3. the job fails

If the user manually specifies {{checkpoints/JOB_ID}} as the savepoint dir, 
this directory should be kept, such as with flink cancel -s 
checkpoints/JOB_ID (a usage sketch follows).
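A usage sketch for that last case (the job id is a placeholder):

{code}
# cancel with a savepoint written under the checkpoint directory;
# the directory passed via -s should then be kept, not cleaned up
flink cancel -s checkpoints/JOB_ID <jobId>
{code}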

 

> Checkpoint directories are not cleaned up after job termination
> ---
>
> Key: FLINK-11789
> URL: https://issues.apache.org/jira/browse/FLINK-11789
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Major
>
> Flink currently does not clean up all checkpoint directories when a job 
> reaches a globally terminal state. Having configured the checkpoint directory 
> {{checkpoints}}, I observe that after cancelling the job {{JOB_ID}} there are 
> still
> {code}
> checkpoints/JOB_ID/shared
> checkpoints/JOB_ID/taskowned
> {code}
> I think it would be good if we would delete {{checkpoints/JOB_ID}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11336) Flink HA didn't remove ZK metadata

2019-03-01 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781683#comment-16781683
 ] 

shengjk1 commented on FLINK-11336:
--

[~till.rohrmann] Yay, I think so too. Thank you!

> Flink HA didn't remove ZK metadata
> --
>
> Key: FLINK-11336
> URL: https://issues.apache.org/jira/browse/FLINK-11336
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: shengjk1
>Assignee: Till Rohrmann
>Priority: Major
> Attachments: image-2019-01-15-19-42-21-902.png
>
>
> Flink HA didn't remove ZK metadata
> such as 
> go to the ZooKeeper CLI: ls /flinkone
> !image-2019-01-15-19-42-21-902.png!
>  
> I suggest we should delete this metadata when the application is canceled or 
> throws an exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11336) Flink HA didn't remove ZK metadata

2019-03-01 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16781618#comment-16781618
 ] 

shengjk1 commented on FLINK-11336:
--

Hi, [~till.rohrmann]

I have other questions and suggestions:

 1. I want to know whether it will also delete invalid directories on HDFS, 
similar to the ZK metadata, because most of the HA metadata is stored on HDFS, 
for example when a job fails.

 2. When the job is canceled, the job's metadata is deleted by default, but I 
think it should also delete the corresponding directories, such as 
{{jobId}}/shared and {{jobId}}/taskowned.

 

 

> Flink HA didn't remove ZK metadata
> --
>
> Key: FLINK-11336
> URL: https://issues.apache.org/jira/browse/FLINK-11336
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: shengjk1
>Assignee: Till Rohrmann
>Priority: Major
> Attachments: image-2019-01-15-19-42-21-902.png
>
>
> Flink HA didn't remove ZK metadata
> such as 
> go to the ZooKeeper CLI: ls /flinkone
> !image-2019-01-15-19-42-21-902.png!
>  
> I suggest we should delete this metadata when the application is canceled or 
> throws an exception



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11725) use flink cli when {{-yn}} * {{-ys}} < {{-p}}, the applied resources are erratic

2019-02-24 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16776504#comment-16776504
 ] 

shengjk1 commented on FLINK-11725:
--

Fixed in 1.7.2.

> use flink cli  when  {{-yn}} * {{-ys}} < {{-p}}, the applied resources are 
> erratic
> --
>
> Key: FLINK-11725
> URL: https://issues.apache.org/jira/browse/FLINK-11725
> Project: Flink
>  Issue Type: Bug
>  Components: ResourceManager
>Affects Versions: 1.7.2
> Environment: flink1.7.2
> linux
> java8
> kafka1.1
> scala 2.1
> CDH 
>Reporter: shengjk1
>Priority: Major
> Attachments: .png, 2.png, 3.png, MainConnect3.java, 
> MainConnect4.java
>
>
> I provide a simple example and a picture of the used resources for this issue.
> The application is a Flink consumer of Kafka; I submit the job like this:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1 -ynm test -p 5 -c MainConnect3 ./flinkDemo-1.0-SNAPSHOT.jar &
> or
> flink-1.7.1/bin/flink run -m yarn-cluster -ynm test -c MainConnect4 ./flinkDemo-1.0-SNAPSHOT.jar &
> From the YARN web UI I see that the total resources are erratic, and in most 
> cases all the remaining resources of the cluster will be used by the application.
>  
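One plausible reading of the erratic allocation (an assumption, not stated in 
the thread): since the FLIP-6 resource management, the number of YARN 
containers follows the requested parallelism rather than -yn, roughly:

{code}
taskmanagers ~= ceil(p / slots_per_tm)   e.g. ceil(5 / 1) = 5 containers, despite -yn 1
{code}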



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11725) use flink cli when {{-yn}} * {{-ys}} < {{-p}}, the applied resources are erratic

2019-02-24 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 closed FLINK-11725.

   Resolution: Fixed
Fix Version/s: 1.7.2

> use flink cli  when  {{-yn}} * {{-ys}} < {{-p}}, the applied resources are 
> erratic
> --
>
> Key: FLINK-11725
> URL: https://issues.apache.org/jira/browse/FLINK-11725
> Project: Flink
>  Issue Type: Bug
>  Components: ResourceManager
>Affects Versions: 1.7.2
> Environment: flink1.7.2
> linux
> java8
> kafka1.1
> scala 2.1
> CDH 
>Reporter: shengjk1
>Priority: Major
> Fix For: 1.7.2
>
> Attachments: .png, 2.png, 3.png, MainConnect3.java, 
> MainConnect4.java
>
>
> I provide a simple example and a picture of the used resources for this issue.
> The application is a Flink consumer of Kafka; I submit the job like this:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1 -ynm test -p 5 -c MainConnect3 ./flinkDemo-1.0-SNAPSHOT.jar &
> or
> flink-1.7.1/bin/flink run -m yarn-cluster -ynm test -c MainConnect4 ./flinkDemo-1.0-SNAPSHOT.jar &
> From the YARN web UI I see that the total resources are erratic, and in most 
> cases all the remaining resources of the cluster will be used by the application.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11725) use flink cli when {{-yn}} * {{-ys}} < {{-p}}, the applied resources are erratic

2019-02-24 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11725:
-
Environment: 
flink1.7.2

linux

java8

kafka1.1

scala 2.1

CDH 

  was:
flink1.7.1

linux

java8

kafka1.1


> use flink cli  when  {{-yn}} * {{-ys}} < {{-p}}, the applied resources are 
> erratic
> --
>
> Key: FLINK-11725
> URL: https://issues.apache.org/jira/browse/FLINK-11725
> Project: Flink
>  Issue Type: Bug
>  Components: ResourceManager
>Affects Versions: 1.7.2
> Environment: flink1.7.2
> linux
> java8
> kafka1.1
> scala 2.1
> CDH 
>Reporter: shengjk1
>Priority: Major
> Attachments: .png, 2.png, 3.png, MainConnect3.java, 
> MainConnect4.java
>
>
> I provide a simple example and a picture of the used resources for this issue.
> The application is a Flink job consuming Kafka. When I submit the job, e.g.:
> flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1   -ynm test -p 5  
> -cMainConnect3  ./flinkDemo-1.0-SNAPSHOT.jar &
> or 
> flink-1.7.1/bin/flink  run -m yarn-cluster   -ynm test  -cMainConnect4  
> ./flinkDemo-1.0-SNAPSHOT.jar &
> the total resources shown on the YARN web UI are erratic, and in most cases 
> the application ends up using all the remaining resources of the cluster.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11725) use flink cli when {{-yn}} * {{-ys}} < {{-p}}, the applied resources are erratic

2019-02-24 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11725:
-
Description: 
I provide a simple example and used resource's picture for this issue 

this application is about flink consumer kafka,  when submit job such as : 

flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1   -ynm test -p 5  
-cMainConnect3  ./flinkDemo-1.0-SNAPSHOT.jar &

or 

flink-1.7.1/bin/flink  run -m yarn-cluster   -ynm test  -cMainConnect4  
./flinkDemo-1.0-SNAPSHOT.jar &

from yarn web ,i see, the total resources are erratic, and in most cases, all 
the remaining resources of the cluster will be used by the application

 

  was:
I provide a simple example and used resource's picture for this issue 

this application is about flink consumer kafka,  when submit job such as : 

flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1   -ynm test -p 5  
-cMainConnect3  ./flinkDemo-1.0-SNAPSHOT.jar &

from yarn web ,i see, the total resources are erratic, and in most cases, all 
the remaining resources of the cluster will be used by the application

 


> use flink cli  when  {{-yn}} * {{-ys}} < {{-p}}, the applied resources are 
> erratic
> --
>
> Key: FLINK-11725
> URL: https://issues.apache.org/jira/browse/FLINK-11725
> Project: Flink
>  Issue Type: Bug
>  Components: ResourceManager
>Affects Versions: 1.7.2
> Environment: flink1.7.1
> linux
> java8
> kafka1.1
>Reporter: shengjk1
>Priority: Major
> Attachments: .png, 2.png, 3.png, MainConnect3.java, 
> MainConnect4.java
>
>
> I provide a simple example and a picture of the used resources for this issue.
> The application is a Flink job consuming Kafka. When I submit the job, e.g.:
> flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1   -ynm test -p 5  
> -cMainConnect3  ./flinkDemo-1.0-SNAPSHOT.jar &
> or 
> flink-1.7.1/bin/flink  run -m yarn-cluster   -ynm test  -cMainConnect4  
> ./flinkDemo-1.0-SNAPSHOT.jar &
> the total resources shown on the YARN web UI are erratic, and in most cases 
> the application ends up using all the remaining resources of the cluster.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-11725) use flink cli when {{-yn}} * {{-ys}} < {{-p}}, the applied resources are erratic

2019-02-24 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774999#comment-16774999
 ] 

shengjk1 edited comment on FLINK-11725 at 2/25/19 2:54 AM:
---

I think there are two ways to solve this issue:

1). Add a check: if {{-yn}} * {{-ys}} < {{-p}}, call System.exit(1) and print 
the message that {{-yn}} * {{-ys}} should be no less than {{-p}} (a sketch 
follows).

2). By default, keep creating TaskManagers with one slot each until {{-yn}} * 
{{-ys}} >= {{-p}}.

Maybe 1) is more user-friendly.
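
For option 1), a minimal sketch of the check I have in mind (the method name 
is hypothetical; this is not the actual CliFrontend code):

{code:java}
// Fail fast when the requested parallelism cannot fit into yn * ys slots.
static void validateYarnResources(int yn, int ys, int p) {
    if (yn * ys < p) {
        System.err.println("-yn * -ys (" + (yn * ys)
                + ") should be no less than -p (" + p + ")");
        System.exit(1);
    }
}
{code}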


was (Author: shengjk1):
i think two ways solve this issue:

1).Add judgment in org.apache.flink.client.cli.CliFrontend  if {{-yn}} * 
{{-ys}} < {{-p}} system.exit(1)  and Print message: {{-yn}} * {{-ys}}  shoule 
more than the {{-p}}

2).Create taskManager with one solt until {{-yn}} * {{-ys}} >= {{-p}}  as 
default

maybe 1) is more friendly to users

> use flink cli  when  {{-yn}} * {{-ys}} < {{-p}}, the applied resources are 
> erratic
> --
>
> Key: FLINK-11725
> URL: https://issues.apache.org/jira/browse/FLINK-11725
> Project: Flink
>  Issue Type: Bug
>  Components: ResourceManager
>Affects Versions: 1.7.2
> Environment: flink1.7.1
> linux
> java8
> kafka1.1
>Reporter: shengjk1
>Priority: Major
> Attachments: .png, 2.png, 3.png, MainConnect3.java, 
> MainConnect4.java
>
>
> I provide a simple example and a picture of the used resources for this issue.
> The application is a Flink job consuming Kafka. When I submit the job, e.g.:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1   -ynm test -p 5  
> -cMainConnect3  ./flinkDemo-1.0-SNAPSHOT.jar &
> the total resources shown on the YARN web UI are erratic, and in most cases 
> the application ends up using all the remaining resources of the cluster.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11725) use flink cli when {{-yn}} * {{-ys}} < {{-p}}, the applied resources are erratic

2019-02-24 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11725:
-
Attachment: MainConnect4.java

> use flink cli  when  {{-yn}} * {{-ys}} < {{-p}}, the applied resources are 
> erratic
> --
>
> Key: FLINK-11725
> URL: https://issues.apache.org/jira/browse/FLINK-11725
> Project: Flink
>  Issue Type: Bug
>  Components: ResourceManager
>Affects Versions: 1.7.2
> Environment: flink1.7.1
> linux
> java8
> kafka1.1
>Reporter: shengjk1
>Priority: Major
> Attachments: .png, 2.png, 3.png, MainConnect3.java, 
> MainConnect4.java
>
>
> I provide a simple example and a picture of the used resources for this issue.
> The application is a Flink job consuming Kafka. When I submit the job, e.g.:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1   -ynm test -p 5  
> -cMainConnect3  ./flinkDemo-1.0-SNAPSHOT.jar &
> the total resources shown on the YARN web UI are erratic, and in most cases 
> the application ends up using all the remaining resources of the cluster.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11725) use flink cli when {{-yn}} * {{-ys}} < {{-p}}, the applied resources are erratic

2019-02-22 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11725:
-
Environment: 
flink1.7.1

linux

java8

kafka1.1

  was:
flink1.7.2

linux

java8

kafka1.1


> use flink cli  when  {{-yn}} * {{-ys}} < {{-p}}, the applied resources are 
> erratic
> --
>
> Key: FLINK-11725
> URL: https://issues.apache.org/jira/browse/FLINK-11725
> Project: Flink
>  Issue Type: Bug
>  Components: ResourceManager
>Affects Versions: 1.7.2
> Environment: flink1.7.1
> linux
> java8
> kafka1.1
>Reporter: shengjk1
>Priority: Major
> Attachments: .png, 2.png, 3.png, MainConnect3.java
>
>
> I provide a simple example and a picture of the used resources for this issue.
> The application is a Flink job consuming Kafka. When I submit the job, e.g.:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1   -ynm test -p 5  
> -cMainConnect3  ./flinkDemo-1.0-SNAPSHOT.jar &
> the total resources shown on the YARN web UI are erratic, and in most cases 
> the application ends up using all the remaining resources of the cluster.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-11725) use flink cli when {{-yn}} * {{-ys}} < {{-p}}, the applied resources are erratic

2019-02-22 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774999#comment-16774999
 ] 

shengjk1 edited comment on FLINK-11725 at 2/22/19 10:53 AM:


I think there are two ways to solve this issue:

1). Add a check in org.apache.flink.client.cli.CliFrontend: if {{-yn}} * 
{{-ys}} < {{-p}}, call System.exit(1) and print the message that {{-yn}} * 
{{-ys}} should be no less than {{-p}}.

2). By default, keep creating TaskManagers with one slot each until {{-yn}} * 
{{-ys}} >= {{-p}}.

Maybe 1) is more user-friendly.


was (Author: shengjk1):
i think two ways solve this issue:

1.Add judgment in org.apache.flink.client.cli.CliFrontend  if \{{-yn}} * 
\{{-ys}} < \{{-p}} system.exit(1)  and Print message: \{{-yn}} * \{{-ys}}  
shoule more than the \{{-p}}

2.Create taskManager with one solt until \{{-yn}} * \{{-ys}} >= \{{-p}}  as 
default

> use flink cli  when  {{-yn}} * {{-ys}} < {{-p}}, the applied resources are 
> erratic
> --
>
> Key: FLINK-11725
> URL: https://issues.apache.org/jira/browse/FLINK-11725
> Project: Flink
>  Issue Type: Bug
>  Components: ResourceManager
>Affects Versions: 1.7.2
> Environment: flink1.7.2
> linux
> java8
> kafka1.1
>Reporter: shengjk1
>Priority: Major
> Attachments: .png, 2.png, 3.png, MainConnect3.java
>
>
> I provide a simple example and a picture of the used resources for this issue.
> The application is a Flink job consuming Kafka. When I submit the job, e.g.:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1   -ynm test -p 5  
> -cMainConnect3  ./flinkDemo-1.0-SNAPSHOT.jar &
> the total resources shown on the YARN web UI are erratic, and in most cases 
> the application ends up using all the remaining resources of the cluster.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11725) use flink cli when {{-yn}} * {{-ys}} < {{-p}}, the applied resources are erratic

2019-02-22 Thread shengjk1 (JIRA)
shengjk1 created FLINK-11725:


 Summary: use flink cli  when  {{-yn}} * {{-ys}} < {{-p}}, the 
applied resources are erratic
 Key: FLINK-11725
 URL: https://issues.apache.org/jira/browse/FLINK-11725
 Project: Flink
  Issue Type: Bug
  Components: ResourceManager
Affects Versions: 1.7.2
 Environment: flink1.7.2

linux

java8

kafka1.1
Reporter: shengjk1
 Attachments: .png, 2.png, 3.png, MainConnect3.java

I provide a simple example and a picture of the used resources for this issue.

The application is a Flink job consuming Kafka. When I submit the job, e.g.:

flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1   -ynm test -p 5  
-cMainConnect3  ./flinkDemo-1.0-SNAPSHOT.jar &

the total resources shown on the YARN web UI are erratic, and in most cases 
the application ends up using all the remaining resources of the cluster.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11725) use flink cli when {{-yn}} * {{-ys}} < {{-p}}, the applied resources are erratic

2019-02-22 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16774999#comment-16774999
 ] 

shengjk1 commented on FLINK-11725:
--

I think there are two ways to solve this issue:

1. Add a check in org.apache.flink.client.cli.CliFrontend: if \{{-yn}} * 
\{{-ys}} < \{{-p}}, call System.exit(1) and print the message that \{{-yn}} * 
\{{-ys}} should be no less than \{{-p}}.

2. By default, keep creating TaskManagers with one slot each until \{{-yn}} * 
\{{-ys}} >= \{{-p}}.

> use flink cli  when  {{-yn}} * {{-ys}} < {{-p}}, the applied resources are 
> erratic
> --
>
> Key: FLINK-11725
> URL: https://issues.apache.org/jira/browse/FLINK-11725
> Project: Flink
>  Issue Type: Bug
>  Components: ResourceManager
>Affects Versions: 1.7.2
> Environment: flink1.7.2
> linux
> java8
> kafka1.1
>Reporter: shengjk1
>Priority: Major
> Attachments: .png, 2.png, 3.png, MainConnect3.java
>
>
> I provide a simple example and a picture of the used resources for this issue.
> The application is a Flink job consuming Kafka. When I submit the job, e.g.:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1   -ynm test -p 5  
> -cMainConnect3  ./flinkDemo-1.0-SNAPSHOT.jar &
> the total resources shown on the YARN web UI are erratic, and in most cases 
> the application ends up using all the remaining resources of the cluster.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11672) Add example for streaming operators' connect

2019-02-20 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11672:
-
Description: add example for streaming operators's connect such as 
\{{datastream1.connect(datastream2)}} in code  (was: add example for streaming 
operator connect in code)
Summary: Add example for streaming operators's connect(was: Add 
example for streaming operator connect  )

> Add example for streaming operators' connect  
> ---
>
> Key: FLINK-11672
> URL: https://issues.apache.org/jira/browse/FLINK-11672
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: shengjk1
>Assignee: shengjk1
>Priority: Major
>
> Add an example for the streaming operators' connect, such as 
> \{{datastream1.connect(datastream2)}}, in code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11672) Add example for streaming operator connect

2019-02-20 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16773601#comment-16773601
 ] 

shengjk1 commented on FLINK-11672:
--

hi, [~StephanEwen], thanks for your attention. I mean to add an example for 
the streaming operators' connect, i.e. {{dataStream1.connect(dataStream2)}}; 
the description was worded incorrectly, and I will update it. A sketch of 
what I mean follows.
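
For illustration, a minimal sketch of the kind of connect example I mean 
(assumed standalone demo code, not the example that will eventually be 
committed):

{code:java}
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoMapFunction;

public class ConnectExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> words = env.fromElements("a", "b", "c");
        DataStream<Integer> numbers = env.fromElements(1, 2, 3);

        // connect() pairs two streams of possibly different types; the
        // CoMapFunction maps each side to a common output type.
        words.connect(numbers)
                .map(new CoMapFunction<String, Integer, String>() {
                    @Override
                    public String map1(String value) {
                        return "str: " + value;
                    }

                    @Override
                    public String map2(Integer value) {
                        return "int: " + value;
                    }
                })
                .print();

        env.execute("connect example");
    }
}
{code}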

> Add example for streaming operator connect  
> 
>
> Key: FLINK-11672
> URL: https://issues.apache.org/jira/browse/FLINK-11672
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: shengjk1
>Assignee: shengjk1
>Priority: Major
>
> Add an example for the streaming operator connect in code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11672) Add example for streaming operator connect

2019-02-20 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11672:
-
Description: add example for streaming operator connect in code  (was: add 
example for streaming operator in code)

> Add example for streaming operator connect  
> 
>
> Key: FLINK-11672
> URL: https://issues.apache.org/jira/browse/FLINK-11672
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: shengjk1
>Assignee: shengjk1
>Priority: Major
>
> Add an example for the streaming operator connect in code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11672) Add example for streaming operator connect

2019-02-20 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11672:
-
Component/s: Examples

> Add example for streaming operator connect  
> 
>
> Key: FLINK-11672
> URL: https://issues.apache.org/jira/browse/FLINK-11672
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples
>Reporter: shengjk1
>Assignee: shengjk1
>Priority: Major
>
> Add an example for the streaming operator in code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11673) add example for streaming operators' broadcast

2019-02-20 Thread shengjk1 (JIRA)
shengjk1 created FLINK-11673:


 Summary: add example for streaming operators' broadcast
 Key: FLINK-11673
 URL: https://issues.apache.org/jira/browse/FLINK-11673
 Project: Flink
  Issue Type: Improvement
  Components: Examples
Reporter: shengjk1
Assignee: shengjk1


Add an example for the streaming operators' broadcast in code; a sketch 
follows.
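
For illustration, a minimal sketch of the kind of broadcast example meant 
here (assumed standalone demo code, not the example that will eventually be 
committed):

{code:java}
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class BroadcastExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(2);

        DataStream<Integer> numbers = env.fromElements(1, 2, 3);

        // broadcast() replicates every record to all parallel instances of
        // the downstream operator instead of partitioning the stream.
        numbers.broadcast()
                .map(new MapFunction<Integer, String>() {
                    @Override
                    public String map(Integer value) {
                        return "got " + value;
                    }
                })
                .print();

        env.execute("broadcast example");
    }
}
{code}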



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11672) Add example for streaming operator connect

2019-02-20 Thread shengjk1 (JIRA)
shengjk1 created FLINK-11672:


 Summary: Add example for streaming operator connect  
 Key: FLINK-11672
 URL: https://issues.apache.org/jira/browse/FLINK-11672
 Project: Flink
  Issue Type: Improvement
Reporter: shengjk1
Assignee: shengjk1


Add an example for the streaming operator in code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-11608) Translate the "Local Setup Tutorial" page into Chinese

2019-02-15 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770027#comment-16770027
 ] 

shengjk1 edited comment on FLINK-11608 at 2/16/19 6:58 AM:
---

hi [~jark]

I want to know when and where I can create a PR for this, because I cannot 
find anything about local_setup.md in flink-web, and I think creating the PR 
for this in the flink repository is not a wise move.

where:

the flink-web repository, the flink repository, or somewhere else

when:

only after FLINK-11529 is merged, or at any time


was (Author: shengjk1):
hi [~jark]  

I want to know when and where I can create a PR for it. 

where:

fling-web repostitories  or flink  repositories or others

when:

PR can only be created after FLINK-11529 is merged  or any time

> Translate the "Local Setup Tutorial" page into Chinese
> --
>
> Key: FLINK-11608
> URL: https://issues.apache.org/jira/browse/FLINK-11608
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: shengjk1
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/tutorials/local_setup.html
> The markdown file is located in flink/docs/tutorials/local_setup.zh.md
> The markdown file will be created once FLINK-11529 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11608) Translate the "Local Setup Tutorial" page into Chinese

2019-02-15 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16770027#comment-16770027
 ] 

shengjk1 commented on FLINK-11608:
--

hi [~jark]

I want to know when and where I can create a PR for it.

where:

the flink-web repository, the flink repository, or somewhere else

when:

only after FLINK-11529 is merged, or at any time

> Translate the "Local Setup Tutorial" page into Chinese
> --
>
> Key: FLINK-11608
> URL: https://issues.apache.org/jira/browse/FLINK-11608
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: shengjk1
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/tutorials/local_setup.html
> The markdown file is located in flink/docs/tutorials/local_setup.zh.md
> The markdown file will be created once FLINK-11529 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (FLINK-11608) Translate the "Local Setup Tutorial" page into Chinese

2019-02-13 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 reassigned FLINK-11608:


Assignee: shengjk1

> Translate the "Local Setup Tutorial" page into Chinese
> --
>
> Key: FLINK-11608
> URL: https://issues.apache.org/jira/browse/FLINK-11608
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: shengjk1
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/tutorials/local_setup.html
> The markdown file is located in flink/docs/tutorials/local_setup.zh.md
> The markdown file will be created once FLINK-11529 is merged.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11469) fix Tuning Checkpoints and Large State doc

2019-01-30 Thread shengjk1 (JIRA)
shengjk1 created FLINK-11469:


 Summary: fix  Tuning Checkpoints and Large State doc
 Key: FLINK-11469
 URL: https://issues.apache.org/jira/browse/FLINK-11469
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.7.1, 1.7.0, 1.6.3, 1.6.2, 1.6.4, 1.7.2, 1.8.0
Reporter: shengjk1


The sample code under the "Tuning RocksDB" subsection of the "Tuning 
Checkpoints and Large State" documentation is wrong.

Affects versions: all versions after 1.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11336) Flink HA didn't remove ZK metadata

2019-01-22 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16749548#comment-16749548
 ] 

shengjk1 commented on FLINK-11336:
--

Yarn (per job or as a session)

> Flink HA didn't remove ZK metadata
> --
>
> Key: FLINK-11336
> URL: https://issues.apache.org/jira/browse/FLINK-11336
> Project: Flink
>  Issue Type: Improvement
>Reporter: shengjk1
>Priority: Major
> Attachments: image-2019-01-15-19-42-21-902.png
>
>
> Flink HA didn't remove the ZK metadata.
> For example, in the ZK CLI: ls /flinkone
> !image-2019-01-15-19-42-21-902.png!
>  
> I suggest we delete this metadata when the application is cancelled or 
> throws an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (FLINK-11336) Flink HA didn't remove ZK metadata

2019-01-22 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16749548#comment-16749548
 ] 

shengjk1 edited comment on FLINK-11336 at 1/23/19 5:53 AM:
---

Yarn (per job or as a session)


was (Author: shengjk1):
Yarn (per job or as a session)

> Flink HA didn't remove ZK metadata
> --
>
> Key: FLINK-11336
> URL: https://issues.apache.org/jira/browse/FLINK-11336
> Project: Flink
>  Issue Type: Improvement
>Reporter: shengjk1
>Priority: Major
> Attachments: image-2019-01-15-19-42-21-902.png
>
>
> Flink HA didn't remove the ZK metadata.
> For example, in the ZK CLI: ls /flinkone
> !image-2019-01-15-19-42-21-902.png!
>  
> I suggest we delete this metadata when the application is cancelled or 
> throws an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11336) Flink HA didn't remove ZK metadata

2019-01-19 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16747322#comment-16747322
 ] 

shengjk1 commented on FLINK-11336:
--

1. No matter how Flink is stopped (cancel, failed with no further retries, 
kill), the metadata is not deleted.

2. After cancel, failed with no further retries, or kill, manually deleting 
the metadata has no effect on newly launched programs, even if there is a 
savepoint.

This is the behavior I observed.

> Flink HA didn't remove ZK metadata
> --
>
> Key: FLINK-11336
> URL: https://issues.apache.org/jira/browse/FLINK-11336
> Project: Flink
>  Issue Type: Improvement
>Reporter: shengjk1
>Priority: Major
> Attachments: image-2019-01-15-19-42-21-902.png
>
>
> Flink HA didn't remove the ZK metadata.
> For example, in the ZK CLI: ls /flinkone
> !image-2019-01-15-19-42-21-902.png!
>  
> I suggest we delete this metadata when the application is cancelled or 
> throws an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11336) Flink HA didn't remove ZK metadata

2019-01-17 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745782#comment-16745782
 ] 

shengjk1 commented on FLINK-11336:
--

I am unfamiliar with batch and bounded streams, so I cannot draw a conclusion 
for those. But for unbounded streams, when the job

    failed with no further retries, or

    was cancelled,

we can remove the metadata. As for restarting, the job can be started 
normally. I have already tried it: no problems with JDK 1.8.0_151, Flink 
1.7.1, CDH 5.13.1.

 

> Flink HA didn't remove ZK metadata
> --
>
> Key: FLINK-11336
> URL: https://issues.apache.org/jira/browse/FLINK-11336
> Project: Flink
>  Issue Type: Improvement
>Reporter: shengjk1
>Priority: Major
> Attachments: image-2019-01-15-19-42-21-902.png
>
>
> Flink HA didn't remove the ZK metadata.
> For example, in the ZK CLI: ls /flinkone
> !image-2019-01-15-19-42-21-902.png!
>  
> I suggest we delete this metadata when the application is cancelled or 
> throws an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11376) flink cli -yn -ys does not take effect if (yn * ys) < parallelism

2019-01-17 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 closed FLINK-11376.

Resolution: Not A Problem

> flink cli -yn -ys does not take effect if (yn * ys) < parallelism 
> (parallelism from env.setParallelism(parallelism))
> -
>
> Key: FLINK-11376
> URL: https://issues.apache.org/jira/browse/FLINK-11376
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
> Environment: java: jdk1.8.0_151
> flink: flink-1.7.1
> CDH:CDH-5.13.1-1.cdh5.13.1.p0.2
>Reporter: shengjk1
>Priority: Major
> Attachments: Main222.java, image-2019-01-17-14-25-34-206.png
>
>
> As the title says: if (yn * ys) < parallelism (with the parallelism set via 
> env.setParallelism(parallelism)), then -yn and -ys do not take effect.
> My application is a Flink streaming job reading Kafka. The Kafka topic has 3 
> partitions, and setParallelism(3) is called in the code. When I submit the 
> job via the CLI:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1 -ys 1 -ynm test 
> -ccom.ishansong.bigdata.Main222 ./flinkDemo-1.0-SNAPSHOT.jar
> the application applies for 4 CPU cores and 4 containers, as shown on the 
> YARN web UI.
> !image-2019-01-17-14-25-34-206.png!
> But if the code does not call env.setParallelism(parallelism), or (yn*ys) > 
> parallelism, then -yn and -ys take effect. If the code does call 
> env.setParallelism(parallelism), the final application resources are 
> multiples of yn and ys; for example, with parallelism=10, yn=1, ys=5, the 
> final application resources are: cpu cores=11, containers=3.
>  
> To make the bug easy to reproduce, I offer the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11377) AbstractYarnClusterDescriptor's validClusterSpecification is not final application resources from yarn if cli -yn -ys not effect

2019-01-17 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745724#comment-16745724
 ] 

shengjk1 commented on FLINK-11377:
--

yes thanks

> AbstractYarnClusterDescriptor's validClusterSpecification is not final 
> application resources from yarn  if   cli  -yn -ys not effect
> 
>
> Key: FLINK-11377
> URL: https://issues.apache.org/jira/browse/FLINK-11377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
> Environment: java: jdk1.8.0_151
> flink: flink-1.7.1
> CDH:CDH-5.13.1-1.cdh5.13.1.p0.2
>Reporter: shengjk1
>Priority: Major
> Attachments: image-2019-01-17-14-57-24-060.png
>
>
> When the CLI -yn -ys options do not take effect, AbstractYarnClusterDescriptor's 
> validClusterSpecification does not match the final application resources 
> allocated by YARN (for -yn -ys not taking effect, refer to 
> https://issues.apache.org/jira/browse/FLINK-11376).
>  
> The CLI:
> flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1 -ys 1  -ynm test     
> -ccom.ishansong.bigdata.Main222  ./flinkDemo-1.0-SNAPSHOT.jar
> AbstractYarnClusterDescriptor's log:
>  org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Cluster 
> specification: ClusterSpecification\{masterMemoryMB=1024, 
> taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}
> but the YARN web UI shows:
> allocated containers=4 and allocated cpu cores=4
> !image-2019-01-17-14-57-24-060.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-11377) AbstractYarnClusterDescriptor's validClusterSpecification is not final application resources from yarn if cli -yn -ys not effect

2019-01-17 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 closed FLINK-11377.

Resolution: Not A Problem

> AbstractYarnClusterDescriptor's validClusterSpecification is not final 
> application resources from yarn  if   cli  -yn -ys not effect
> 
>
> Key: FLINK-11377
> URL: https://issues.apache.org/jira/browse/FLINK-11377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
> Environment: java: jdk1.8.0_151
> flink: flink-1.7.1
> CDH:CDH-5.13.1-1.cdh5.13.1.p0.2
>Reporter: shengjk1
>Priority: Major
> Attachments: image-2019-01-17-14-57-24-060.png
>
>
> When the CLI -yn -ys options do not take effect, AbstractYarnClusterDescriptor's 
> validClusterSpecification does not match the final application resources 
> allocated by YARN (for -yn -ys not taking effect, refer to 
> https://issues.apache.org/jira/browse/FLINK-11376).
>  
> The CLI:
> flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1 -ys 1  -ynm test     
> -ccom.ishansong.bigdata.Main222  ./flinkDemo-1.0-SNAPSHOT.jar
> AbstractYarnClusterDescriptor's log:
>  org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Cluster 
> specification: ClusterSpecification\{masterMemoryMB=1024, 
> taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}
> but the YARN web UI shows:
> allocated containers=4 and allocated cpu cores=4
> !image-2019-01-17-14-57-24-060.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11376) flink cli -yn -ys does not take effect if (yn * ys) < parallelism

2019-01-17 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745721#comment-16745721
 ] 

shengjk1 commented on FLINK-11376:
--

Hi Till Rohrmann, thanks for your comment. I agree with your suggestion, and 
after revisiting the Flink source code I think it is not a bug. I will close 
all the related issues.


thanks
shengjk1


On 01/17/2019 21:41,Till Rohrmann (JIRA) wrote:

[ 
https://issues.apache.org/jira/browse/FLINK-11376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16745055#comment-16745055
 ]

Till Rohrmann commented on FLINK-11376:
---

Hi [~shengjk1], I don't fully understand the problem description. One comment, 
though: {{-yn}} will only be respected by Flink's legacy mode, and the legacy 
mode has been removed with Flink 1.7. I would recommend not setting the 
parallelism in code and instead using the {{-p}} flag of Flink's CLI to 
specify the desired parallelism.

flink cli -yn -ys does not take effect if (yn * ys) < parallelism (parallelism 
from env.setParallelism(parallelism))
URL: https://issues.apache.org/jira/browse/FLINK-11376
Project: Flink
Issue Type: Bug
Affects Versions: 1.7.1
Environment: java: jdk1.8.0_151
flink: flink-1.7.1
CDH:CDH-5.13.1-1.cdh5.13.1.p0.2
Reporter: shengjk1
Priority: Major
Attachments: Main222.java, image-2019-01-17-14-25-34-206.png


As the title says: if (yn * ys) < parallelism (with the parallelism set via 
env.setParallelism(parallelism)), then -yn and -ys do not take effect. But if 
the code does not call env.setParallelism(parallelism), or (yn*ys) > 
parallelism, then -yn and -ys take effect. If the code does call 
env.setParallelism(parallelism), the final application resources are multiples 
of yn and ys; for example, with parallelism=10, yn=1, ys=5, the final 
application resources are: cpu cores=11, containers=3.

To make the bug easy to reproduce, I offer the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


> flink cli -yn -ys does not take effect if (yn * ys) < parallelism 
> (parallelism from env.setParallelism(parallelism))
> -
>
> Key: FLINK-11376
> URL: https://issues.apache.org/jira/browse/FLINK-11376
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
> Environment: java: jdk1.8.0_151
> flink: flink-1.7.1
> CDH:CDH-5.13.1-1.cdh5.13.1.p0.2
>Reporter: shengjk1
>Priority: Major
> Attachments: Main222.java, image-2019-01-17-14-25-34-206.png
>
>
> As the title says: if (yn * ys) < parallelism (with the parallelism set via 
> env.setParallelism(parallelism)), then -yn and -ys do not take effect.
> My application is a Flink streaming job reading Kafka. The Kafka topic has 3 
> partitions, and setParallelism(3) is called in the code. When I submit the 
> job via the CLI:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1 -ys 1 -ynm test 
> -ccom.ishansong.bigdata.Main222 ./flinkDemo-1.0-SNAPSHOT.jar
> the application applies for 4 CPU cores and 4 containers, as shown on the 
> YARN web UI.
> !image-2019-01-17-14-25-34-206.png!
> But if the code does not call env.setParallelism(parallelism), or (yn*ys) > 
> parallelism, then -yn and -ys take effect. If the code does call 
> env.setParallelism(parallelism), the final application resources are 
> multiples of yn and ys; for example, with parallelism=10, yn=1, ys=5, the 
> final application resources are: cpu cores=11, containers=3.
>  
> To make the bug easy to reproduce, I offer the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11376) flink cli -yn -ys does not take effect if (yn * ys) < parallelism

2019-01-16 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11376:
-
Attachment: Main222.java

> flink cli -yn -ys does not take effect if (yn * ys) < parallelism 
> (parallelism from env.setParallelism(parallelism))
> -
>
> Key: FLINK-11376
> URL: https://issues.apache.org/jira/browse/FLINK-11376
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
> Environment: java: jdk1.8.0_151
> flink: flink-1.7.1
> CDH:CDH-5.13.1-1.cdh5.13.1.p0.2
>Reporter: shengjk1
>Priority: Major
> Attachments: Main222.java, image-2019-01-17-14-25-34-206.png
>
>
> As the title says: if (yn * ys) < parallelism (with the parallelism set via 
> env.setParallelism(parallelism)), then -yn and -ys do not take effect.
> My application is a Flink streaming job reading Kafka. The Kafka topic has 3 
> partitions, and setParallelism(3) is called in the code. When I submit the 
> job via the CLI:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1 -ys 1 -ynm test 
> -ccom.ishansong.bigdata.Main222 ./flinkDemo-1.0-SNAPSHOT.jar
> the application applies for 4 CPU cores and 4 containers, as shown on the 
> YARN web UI.
> !image-2019-01-17-14-25-34-206.png!
> But if the code does not call env.setParallelism(parallelism), or (yn*ys) > 
> parallelism, then -yn and -ys take effect. If the code does call 
> env.setParallelism(parallelism), the final application resources are 
> multiples of yn and ys; for example, with parallelism=10, yn=1, ys=5, the 
> final application resources are: cpu cores=11, containers=3.
>  
> To make the bug easy to reproduce, I offer the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11376) flink cli -yn -ys does not take effect if (yn * ys) < parallelism

2019-01-16 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11376:
-
Attachment: (was: Main222.java)

> flink cli -yn -ys does not take effect if (yn * ys) < parallelism 
> (parallelism from env.setParallelism(parallelism))
> -
>
> Key: FLINK-11376
> URL: https://issues.apache.org/jira/browse/FLINK-11376
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
> Environment: java: jdk1.8.0_151
> flink: flink-1.7.1
> CDH:CDH-5.13.1-1.cdh5.13.1.p0.2
>Reporter: shengjk1
>Priority: Major
> Attachments: image-2019-01-17-14-25-34-206.png
>
>
> As the title says: if (yn * ys) < parallelism (with the parallelism set via 
> env.setParallelism(parallelism)), then -yn and -ys do not take effect.
> My application is a Flink streaming job reading Kafka. The Kafka topic has 3 
> partitions, and setParallelism(3) is called in the code. When I submit the 
> job via the CLI:
> flink-1.7.1/bin/flink run -m yarn-cluster -yn 1 -ys 1 -ynm test 
> -ccom.ishansong.bigdata.Main222 ./flinkDemo-1.0-SNAPSHOT.jar
> the application applies for 4 CPU cores and 4 containers, as shown on the 
> YARN web UI.
> !image-2019-01-17-14-25-34-206.png!
> But if the code does not call env.setParallelism(parallelism), or (yn*ys) > 
> parallelism, then -yn and -ys take effect. If the code does call 
> env.setParallelism(parallelism), the final application resources are 
> multiples of yn and ys; for example, with parallelism=10, yn=1, ys=5, the 
> final application resources are: cpu cores=11, containers=3.
>  
> To make the bug easy to reproduce, I offer the code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11377) AbstractYarnClusterDescriptor's validClusterSpecification is not final application resources from yarn if cli -yn -ys not effect

2019-01-16 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11377:
-
Description: 
when cli  -yn -ys not effect ,AbstractYarnClusterDescriptor's 
validClusterSpecification is not final application resources from yarn  (cli  
-yn -ys not effect can refer to 
https://issues.apache.org/jira/browse/FLINK-11376)

 

the cli :

flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1 -ys 1  -ynm test     
-ccom.ishansong.bigdata.Main222  ./flinkDemo-1.0-SNAPSHOT.jar

AbstractYarnClusterDescriptor's log :

 org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Cluster 
specification: ClusterSpecification\{masterMemoryMB=1024, 
taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}

but yarn web ui:

allocated containers=4 and allocated cpu cores=4

!image-2019-01-17-14-57-24-060.png!

  was:
when cli  -yn -ys not effect ,AbstractYarnClusterDescriptor's 
validClusterSpecification is not final application resources from yarn  (cli  
-yn -ys not effect can refer to 
https://issues.apache.org/jira/browse/FLINK-11376)

 

the cli :

flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1 -ys 1  -ynm test     
-ccom.ishansong.bigdata.Main222  ./flinkDemo-1.0-SNAPSHOT.jar

AbstractYarnClusterDescriptor's log :

 org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Cluster 
specification: ClusterSpecification\{masterMemoryMB=1024, 
taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}

but yarn web ui:

!image-2019-01-17-14-57-24-060.png!


> AbstractYarnClusterDescriptor's validClusterSpecification is not final 
> application resources from yarn  if   cli  -yn -ys not effect
> 
>
> Key: FLINK-11377
> URL: https://issues.apache.org/jira/browse/FLINK-11377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
> Environment: java: jdk1.8.0_151
> flink: flink-1.7.1
> CDH:CDH-5.13.1-1.cdh5.13.1.p0.2
>Reporter: shengjk1
>Priority: Major
> Attachments: image-2019-01-17-14-57-24-060.png
>
>
> when cli  -yn -ys not effect ,AbstractYarnClusterDescriptor's 
> validClusterSpecification is not final application resources from yarn  (cli  
> -yn -ys not effect can refer to 
> https://issues.apache.org/jira/browse/FLINK-11376)
>  
> the cli :
> flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1 -ys 1  -ynm test     
> -ccom.ishansong.bigdata.Main222  ./flinkDemo-1.0-SNAPSHOT.jar
> AbstractYarnClusterDescriptor's log :
>  org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Cluster 
> specification: ClusterSpecification\{masterMemoryMB=1024, 
> taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}
> but yarn web ui:
> allocated containers=4 and allocated cpu cores=4
> !image-2019-01-17-14-57-24-060.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11376) flink cli -yn -ys does not take effect if (yn * ys) < parallelism

2019-01-16 Thread shengjk1 (JIRA)
shengjk1 created FLINK-11376:


 Summary: flink cli -yn -ys does not take effect if (yn * ys) < 
parallelism (parallelism from env.setParallelism(parallelism))
 Key: FLINK-11376
 URL: https://issues.apache.org/jira/browse/FLINK-11376
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.7.1
 Environment: java: jdk1.8.0_151

flink: flink-1.7.1

CDH:CDH-5.13.1-1.cdh5.13.1.p0.2
Reporter: shengjk1
 Attachments: Main222.java, image-2019-01-17-14-25-34-206.png

As the title says: if (yn * ys) < parallelism (with the parallelism set via 
env.setParallelism(parallelism)), then -yn and -ys do not take effect. But if 
the code does not call env.setParallelism(parallelism), or (yn*ys) > 
parallelism, then -yn and -ys take effect. If the code does call 
env.setParallelism(parallelism), the final application resources are multiples 
of yn and ys; for example, with parallelism=10, yn=1, ys=5, the final 
application resources are: cpu cores=11, containers=3 (see the sketch below).

To make the bug easy to reproduce, I offer the code.
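
A sketch of the arithmetic that appears to produce "cpu cores=11, 
containers=3" (assuming one vcore per slot plus one extra container and vcore 
for the YARN application master; this is my reading of the numbers, not 
confirmed Flink behavior):

{code:java}
public class ResourceMath {
    public static void main(String[] args) {
        int parallelism = 10;        // env.setParallelism(10)
        int slotsPerTaskManager = 5; // -ys 5
        // ceil(parallelism / slotsPerTaskManager) TaskManagers are needed
        int taskManagers = (parallelism + slotsPerTaskManager - 1) / slotsPerTaskManager; // 2
        int containers = taskManagers + 1;                   // + ApplicationMaster -> 3
        int vcores = taskManagers * slotsPerTaskManager + 1; // + AM vcore -> 11
        System.out.println("containers=" + containers + ", cpu cores=" + vcores);
    }
}
{code}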



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11377) AbstractYarnClusterDescriptor's validClusterSpecification is not final application resources from yarn if cli -yn -ys not effect

2019-01-16 Thread shengjk1 (JIRA)
shengjk1 created FLINK-11377:


 Summary: AbstractYarnClusterDescriptor's validClusterSpecification 
is not final application resources from yarn  if   cli  -yn -ys not effect
 Key: FLINK-11377
 URL: https://issues.apache.org/jira/browse/FLINK-11377
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.7.1
 Environment: java: jdk1.8.0_151

flink: flink-1.7.1

CDH:CDH-5.13.1-1.cdh5.13.1.p0.2
Reporter: shengjk1
 Attachments: image-2019-01-17-14-57-24-060.png

When the CLI -yn -ys options do not take effect, AbstractYarnClusterDescriptor's 
validClusterSpecification does not match the final application resources 
allocated by YARN (for -yn -ys not taking effect, refer to 
https://issues.apache.org/jira/browse/FLINK-11376).

 

the cli :

flink-1.7.1/bin/flink  run -m yarn-cluster -yn 1 -ys 1  -ynm test     
-ccom.ishansong.bigdata.Main222  ./flinkDemo-1.0-SNAPSHOT.jar

AbstractYarnClusterDescriptor's log :

 org.apache.flink.yarn.AbstractYarnClusterDescriptor           - Cluster 
specification: ClusterSpecification\{masterMemoryMB=1024, 
taskManagerMemoryMB=1024, numberTaskManagers=1, slotsPerTaskManager=1}

but yarn web ui:

!image-2019-01-17-14-57-24-060.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11336) Flink HA didn't remove ZK metadata

2019-01-15 Thread shengjk1 (JIRA)
shengjk1 created FLINK-11336:


 Summary: Flink HA didn't remove ZK metadata
 Key: FLINK-11336
 URL: https://issues.apache.org/jira/browse/FLINK-11336
 Project: Flink
  Issue Type: Improvement
Reporter: shengjk1
 Attachments: image-2019-01-15-19-42-21-902.png

Flink HA didn't remove the ZK metadata.

For example, in the ZK CLI: ls /flinkone

!image-2019-01-15-19-42-21-902.png!

I suggest we delete this metadata when the application is cancelled or throws 
an exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-11257) FlinkKafkaConsumer should support assign partition

2019-01-03 Thread shengjk1 (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16732855#comment-16732855
 ] 

shengjk1 commented on FLINK-11257:
--

hi, [~till.rohrmann]. To keep subsequent code compatible, I think we should 
change FlinkKafkaConsumerBase: add a class KafkaTopicAndPartition and an 
interface KafkaDescriptor, have both KafkaTopicAndPartition and 
KafkaTopicsDescriptor implement KafkaDescriptor, change 
FlinkKafkaConsumerBase's topicsDescriptor field to the KafkaDescriptor type, 
and then add a constructor to FlinkKafkaConsumerBase that sets

this.kafkaDescriptor = new 
KafkaTopicsAndPartitionDescriptor(topicAndPartitions, topicAndPartitionPattern);

There may be better ways; discussion is welcome. A sketch follows.
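
A rough sketch of the proposed type structure (the names are taken from the 
comment above; this is a proposal, not Flink's actual connector code):

{code:java}
import java.util.List;

// Common abstraction over "what the consumer subscribes to".
public interface KafkaDescriptor {
}

// Existing behavior: subscribe to whole topics (or a topic pattern).
class KafkaTopicsDescriptor implements KafkaDescriptor {
    private final List<String> topics;

    KafkaTopicsDescriptor(List<String> topics) {
        this.topics = topics;
    }
}

// Proposed: subscribe to explicitly assigned topic partitions.
class KafkaTopicAndPartition implements KafkaDescriptor {
    private final String topic;
    private final int partition;

    KafkaTopicAndPartition(String topic, int partition) {
        this.topic = topic;
        this.partition = partition;
    }
}
{code}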

> FlinkKafkaConsumer should support assign partition 
> ---
>
> Key: FLINK-11257
> URL: https://issues.apache.org/jira/browse/FLINK-11257
> Project: Flink
>  Issue Type: Sub-task
>  Components: Kafka Connector
>Affects Versions: 1.7.1
>Reporter: shengjk1
>Assignee: vinoyang
>Priority: Major
>
> I find that Flink 1.7 also has the universal Kafka connector. If the Kafka 
> connector supported assigning partitions, it would be more complete. For 
> example, if a Kafka topic has 3 partitions and I only use 1 partition, I 
> currently have to read all partitions and then filter, which not only wastes 
> resources but is also relatively inefficient. So I suggest that 
> FlinkKafkaConsumer should support assigning partitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-11257) FlinkKafkaConsumer should support assign partition

2019-01-02 Thread shengjk1 (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shengjk1 updated FLINK-11257:
-
Priority: Major  (was: Minor)

> FlinkKafkaConsumer should support assign partition 
> ---
>
> Key: FLINK-11257
> URL: https://issues.apache.org/jira/browse/FLINK-11257
> Project: Flink
>  Issue Type: New Feature
>  Components: Kafka Connector
>Affects Versions: 1.7.1
>Reporter: shengjk1
>Priority: Major
>
> I find that Flink 1.7 also has the universal Kafka connector. If the Kafka 
> connector supported assigning partitions, it would be more complete. For 
> example, if a Kafka topic has 3 partitions and I only use 1 partition, I 
> currently have to read all partitions and then filter, which not only wastes 
> resources but is also relatively inefficient. So I suggest that 
> FlinkKafkaConsumer should support assigning partitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-11257) FlinkKafkaConsumer should support assign partition

2019-01-02 Thread shengjk1 (JIRA)
shengjk1 created FLINK-11257:


 Summary: FlinkKafkaConsumer should support assign partition 
 Key: FLINK-11257
 URL: https://issues.apache.org/jira/browse/FLINK-11257
 Project: Flink
  Issue Type: New Feature
  Components: Kafka Connector
Affects Versions: 1.7.1
Reporter: shengjk1


I find that Flink 1.7 also has the universal Kafka connector. If the Kafka 
connector supported assigning partitions, it would be more complete. For 
example, if a Kafka topic has 3 partitions and I only use 1 partition, I 
currently have to read all partitions and then filter, which not only wastes 
resources but is also relatively inefficient. So I suggest that 
FlinkKafkaConsumer should support assigning partitions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)