[jira] [Commented] (FLINK-9640) Checkpointing is always aborted if any task has been finished

2018-06-28 Thread Hai Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527141#comment-16527141
 ] 

Hai Zhou commented on FLINK-9640:
-

CC [~StephanEwen], what do you think about this ticket?




> Checkpointing is always aborted if any task has been finished
> 
>
> Key: FLINK-9640
> URL: https://issues.apache.org/jira/browse/FLINK-9640
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0, 1.4.0, 1.5.0, 1.6.0
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Major
> Fix For: 1.6.0
>
>
> Steps to reproduce:
> 1. Build a standalone Flink cluster.
> 2. Submit a test job like the one below:
> {code:java}
> public class DemoJob {
>
>     public static void main(String[] args) throws Exception {
>         final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>         env.disableOperatorChaining();
>         env.setParallelism(4);
>         env.enableCheckpointing(3000);
>         env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
>
>         DataStream<String> inputStream = env.addSource(new StringGeneratorParallelSourceFunction());
>         inputStream.map(String::hashCode).print();
>
>         env.execute();
>     }
>
>     public static class StringGeneratorParallelSourceFunction extends RichParallelSourceFunction<String> {
>
>         private static final Logger LOG = LoggerFactory.getLogger(StringGeneratorParallelSourceFunction.class);
>
>         private volatile boolean running = true;
>         private int index;
>         private int subtask_nums;
>
>         @Override
>         public void open(Configuration parameters) throws Exception {
>             index = getRuntimeContext().getIndexOfThisSubtask();
>             subtask_nums = getRuntimeContext().getNumberOfParallelSubtasks();
>         }
>
>         @Override
>         public void run(SourceContext<String> ctx) throws Exception {
>             while (running) {
>                 String data = UUID.randomUUID().toString();
>                 ctx.collect(data);
>                 LOG.info("subtask_index = {}, emit string = {}", index, data);
>                 Thread.sleep(50);
>                 // Only the subtask whose index equals subtask_nums / 2 stops itself,
>                 // so exactly one source subtask finishes while the others keep running.
>                 if (index == subtask_nums / 2) {
>                     running = false;
>                     LOG.info("subtask_index = {}, finished.", index);
>                 }
>             }
>         }
>
>         @Override
>         public void cancel() {
>             running = false;
>         }
>     }
> }
> {code}
> 3. Observe the JobManager and TaskManager logs; the following entries can be found.
> *taskmanager.log*
> {code:java}
> 2018-06-21 17:05:54,144 INFO  com.yew1eb.flink.demo.DemoJob$StringGeneratorParallelSourceFunction  - subtask_index = 2, emit string = 5b0c2467-ad2e-4b53-b1a4-7a0f64560570
> 2018-06-21 17:05:54,151 INFO  com.yew1eb.flink.demo.DemoJob$StringGeneratorParallelSourceFunction  - subtask_index = 0, emit string = 11af78b3-59ea-467c-a267-7c2238e44ffe
> 2018-06-21 17:05:54,195 INFO  com.yew1eb.flink.demo.DemoJob$StringGeneratorParallelSourceFunction  - subtask_index = 2, finished.
> 2018-06-21 17:05:54,200 INFO  org.apache.flink.runtime.taskmanager.Task  - Source: Custom Source (3/4) (6b2a374bec5f31112811613537dd4fd9) switched from RUNNING to FINISHED.
> 2018-06-21 17:05:54,201 INFO  org.apache.flink.runtime.taskmanager.Task  - Freeing task resources for Source: Custom Source (3/4) (6b2a374bec5f31112811613537dd4fd9).
> 2018-06-21 17:05:54,201 INFO  org.apache.flink.runtime.taskmanager.Task  - Ensuring all FileSystem streams are closed for task Source: Custom Source (3/4) (6b2a374bec5f31112811613537dd4fd9) [FINISHED]
> 2018-06-21 17:05:54,202 INFO  org.apache.flink.yarn.YarnTaskManager  - Un-registering task and sending final execution state FINISHED to JobManager for task Source: Custom Source (6b2a374bec5f31112811613537dd4fd9)
> 2018-06-21 17:05:54,211 INFO  com.yew1eb.flink.demo.DemoJob$StringGeneratorParallelSourceFunction  - subtask_index = 0, emit string = f29f48fd-ca53-4a96-b596-93948c09581d
> {code}
> *jobmanager.log*
> {code:java}
> 2018-06-21 17:05:52,682 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph  - Sink: Unnamed (2/4) (3aee8bf5103065e8b57ebd0e214141ae) switched from DEPLOYING to RUNNING.
> 2018-06-21 17:05:52,683 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph  - Map (2/4) (de90106d04f63cb9ea531308202e233f) switched from DEPLOYING to RUNNING.
> 2018-06-21 17:05:54,219 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph  - Source: Custom Source (3/4) (6b2a374bec5f31112811613537dd4fd9) switched from RUNNING to FINISHED.
> 2018-06-21 17:05:54,224 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph  - Map (3/4) (8f523afb97dc848a9578f9cae5870421) switched from RUNNING to FINISHED.
> 2018-06-21 17:05:54,224 INFO  org.apache.flink.runtime.executiongraph.ExecutionGraph  - Sink: Unnamed (3/4) (39f76a87d20cfc491e11f0b5b08ec5c2) switched from RUNNING to FINISHED.
> 2018-06-21 17:05:55,069 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator  - Checkpoint triggering task Source: Custom Source (3/4) is not being executed at the moment. Aborting checkpoint.
> 2018-06-21 17:05:58,067 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator  - Checkpoint triggering task Source: Custom Source (3/4) is not being executed at the moment. Aborting checkpoint.
> {code}
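The "Aborting checkpoint" log entries above come from the CheckpointCoordinator's pre-flight check: if any task that would trigger the checkpoint is no longer RUNNING, the whole checkpoint is aborted. The following is a minimal, simplified sketch of that check, not the actual Flink source; all class and method names here are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CheckpointTriggerCheckDemo {

    enum ExecutionState { RUNNING, FINISHED }

    // Simplified sketch of the coordinator's pre-flight check: every task
    // that must trigger the checkpoint has to be RUNNING, otherwise the
    // checkpoint is aborted up front.
    static boolean canTriggerCheckpoint(Map<String, ExecutionState> tasksToTrigger) {
        for (Map.Entry<String, ExecutionState> e : tasksToTrigger.entrySet()) {
            if (e.getValue() != ExecutionState.RUNNING) {
                System.out.println("Checkpoint triggering task " + e.getKey()
                        + " is not being executed at the moment. Aborting checkpoint.");
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Mirrors the reproduced scenario: one source subtask has finished
        // while the others are still running.
        Map<String, ExecutionState> tasks = new LinkedHashMap<>();
        tasks.put("Source: Custom Source (1/4)", ExecutionState.RUNNING);
        tasks.put("Source: Custom Source (3/4)", ExecutionState.FINISHED);
        System.out.println("can trigger: " + canTriggerCheckpoint(tasks));
    }
}
```

Under this model a single FINISHED subtask permanently blocks all further checkpoints, which matches the repeated abort messages in the log.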

[jira] [Updated] (FLINK-9640) Checkpointing is always aborted if any task has been finished

2018-06-21 Thread Hai Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-9640:


[jira] [Updated] (FLINK-9640) Checkpointing is always aborted if any task has been finished

2018-06-21 Thread Hai Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-9640:

Affects Version/s: (was: 1.3.2)


[jira] [Created] (FLINK-9640) Checkpointing is always aborted if any task has been finished

2018-06-21 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-9640:
---

 Summary: Checkpointing is always aborted if any task has been finished
 Key: FLINK-9640
 URL: https://issues.apache.org/jira/browse/FLINK-9640
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.5.0, 1.3.2, 1.4.0, 1.3.0, 1.6.0
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.6.0



[jira] [Commented] (FLINK-9525) Missing META-INF/services/*FileSystemFactory in flink-hadoop-fs module

2018-06-05 Thread Hai Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501359#comment-16501359
 ] 

Hai Zhou commented on FLINK-9525:
-

After adding the missing file to the {{flink-hadoop-fs}} module and building Flink from source again, my Flink job works properly.

> Missing META-INF/services/*FileSystemFactory in flink-hadoop-fs module
> --
>
> Key: FLINK-9525
> URL: https://issues.apache.org/jira/browse/FLINK-9525
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystem
>Affects Versions: 1.4.0, 1.5.0, 1.6.0
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Blocker
> Fix For: 1.6.0, 1.5.1
>
> Attachments: wx20180605-142...@2x.png
>
>
> If a Flink job's dependencies include `hadoop-common` and `hadoop-hdfs`, a runtime error is thrown,
> as in this case: 
> [https://stackoverflow.com/questions/47890596/java-util-serviceconfigurationerror-org-apache-hadoop-fs-filesystem-provider-o].
> The root cause: see {{org.apache.flink.core.fs.FileSystem}}.
> This class loads all available file system factories via {{ServiceLoader.load(FileSystemFactory.class)}}.
> Since the {{META-INF/services/org.apache.flink.core.fs.FileSystemFactory}} file on the classpath does not contain an 
> {{org.apache.flink.runtime.fs.hdfs.HadoopFsFactory}} entry, only the {{LocalFileSystemFactory}} ends up being loaded.
> See the attached screenshot for further error messages.
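The failure mode described above is a property of {{java.util.ServiceLoader}}: a provider is discovered only if its fully-qualified class name is listed in a {{META-INF/services/<interface FQN>}} file on the classpath. The small self-contained demo below (names are illustrative, not Flink code) shows that without such a file the loader simply finds nothing, which is exactly why only the factories with a registered service entry are visible to Flink.

```java
import java.util.ServiceLoader;

public class ServiceLoaderDemo {

    // Stand-in for org.apache.flink.core.fs.FileSystemFactory (illustrative only).
    public interface FileSystemFactory {
        String scheme();
    }

    public static void main(String[] args) {
        // No META-INF/services/ServiceLoaderDemo$FileSystemFactory file exists
        // on the classpath, so ServiceLoader discovers zero providers --
        // analogous to HadoopFsFactory being invisible when its entry is
        // missing from the flink-hadoop-fs service file.
        ServiceLoader<FileSystemFactory> loader = ServiceLoader.load(FileSystemFactory.class);
        int count = 0;
        for (FileSystemFactory factory : loader) {
            count++;
        }
        System.out.println("factories discovered: " + count);
    }
}
```

The fix described in the ticket is therefore to ship the provider-configuration file listing {{org.apache.flink.runtime.fs.hdfs.HadoopFsFactory}} inside the {{flink-hadoop-fs}} jar.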



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (FLINK-9525) Missing META-INF/services/*FileSystemFactory in flink-hadoop-fs module

2018-06-05 Thread Hai Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-9525:



[jira] [Updated] (FLINK-9525) Missing META-INF/services/*FileSystemFactory in flink-hadoop-fs module

2018-06-05 Thread Hai Zhou (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-9525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-9525:



[jira] [Created] (FLINK-9525) Missing META-INF/services/*FileSystemFactory in flink-hadoop-fs module

2018-06-05 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-9525:
---

 Summary: Missing META-INF/services/*FileSystemFactory in 
flink-hadoop-fs module
 Key: FLINK-9525
 URL: https://issues.apache.org/jira/browse/FLINK-9525
 Project: Flink
  Issue Type: Bug
  Components: FileSystem
Affects Versions: 1.5.0, 1.4.0
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.5.1
 Attachments: wx20180605-142...@2x.png



[jira] [Created] (FLINK-9498) Disable dependency convergence for "flink-end-to-end-tests"

2018-06-01 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-9498:
---

 Summary: Disable dependency convergence for 
"flink-end-to-end-tests"
 Key: FLINK-9498
 URL: https://issues.apache.org/jira/browse/FLINK-9498
 Project: Flink
  Issue Type: Sub-task
  Components: Build System
Reporter: Hai Zhou
Assignee: Hai Zhou








[jira] [Closed] (FLINK-7952) Add metrics for counting logging events

2018-04-23 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou closed FLINK-7952.
---
Resolution: Won't Fix

> Add metrics for counting logging events 
> 
>
> Key: FLINK-7952
> URL: https://issues.apache.org/jira/browse/FLINK-7952
> Project: Flink
>  Issue Type: Wish
>  Components: Metrics
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Critical
> Fix For: 1.5.0
>
>
> It would be useful to track logging events.
> *impl:*
> Add event counting via a custom Log4J Appender that tracks the number of INFO, WARN, ERROR and FATAL logging events.
> *ref:*
> hadoop-common: [org.apache.hadoop.log.metrics.EventCounter|https://github.com/apache/hadoop/blob/f67237cbe7bc48a1b9088e990800b37529f1db2a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/metrics/EventCounter.java]
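To make the idea concrete, here is a minimal sketch of per-level event counting. The ticket proposes a Log4J Appender; to keep the example self-contained this sketch uses a {{java.util.logging}} Handler instead, so the class and its names are illustrative, not the proposed implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class EventCounterDemo {

    // Counts logging events per level, analogous to Hadoop's EventCounter
    // appender referenced in the ticket (illustrative sketch only).
    static class CountingHandler extends Handler {
        final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

        @Override
        public void publish(LogRecord record) {
            counts.computeIfAbsent(record.getLevel().getName(), k -> new LongAdder()).increment();
        }

        @Override public void flush() {}
        @Override public void close() {}
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setUseParentHandlers(false);  // count only, do not print log lines
        logger.setLevel(Level.ALL);

        CountingHandler handler = new CountingHandler();
        handler.setLevel(Level.ALL);
        logger.addHandler(handler);

        logger.info("hello");
        logger.warning("watch out");
        logger.warning("again");
        logger.severe("boom");

        System.out.println("INFO=" + handler.counts.get("INFO").sum()
                + " WARNING=" + handler.counts.get("WARNING").sum()
                + " SEVERE=" + handler.counts.get("SEVERE").sum());
    }
}
```

The counters could then be exposed through the metrics system as gauges, which is what the ticket asks for.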





[jira] [Commented] (FLINK-9233) Merging state may cause runtime exception when windows trigger onMerge

2018-04-22 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447268#comment-16447268
 ] 

Hai Zhou commented on FLINK-9233:
-

[~StephanEwen], I saw that this {{IllegalArgumentException: Illegal value provided for SubCode.}} bug was fixed in RocksDB's latest release, 5.12.2. Should we upgrade the RocksDB version to 5.12.2?

> Merging state may cause runtime exception when windows trigger onMerge
> ---
>
> Key: FLINK-9233
> URL: https://issues.apache.org/jira/browse/FLINK-9233
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
>Priority: Major
>
> The main logic of my Flink job is as follows:
> {code:java}
> clickStream.coGroup(exposureStream).where(...).equalTo(...)
>         .window(EventTimeSessionWindows.withGap(...))
>         .trigger(new SessionMatchTrigger())
>         .evictor(...)
>         .apply(...);
> {code}
> {code:java}
> public class SessionMatchTrigger extends Trigger<..., TimeWindow> {
>     ReducingStateDescriptor<...> stateDesc = new ReducingStateDescriptor<>(...);
>     ...
>     @Override
>     public boolean canMerge() {
>         return true;
>     }
>
>     @Override
>     public void onMerge(TimeWindow window, OnMergeContext ctx) {
>         ctx.mergePartitionedState(this.stateDesc);
>         ctx.registerEventTimeTimer(window.maxTimestamp());
>     }
>     ...
> }
> {code}
> {panel:title=detailed error logs}
> java.lang.RuntimeException: Error while merging state.
>  at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.mergePartitionedState(WindowOperator.java:895)
>  at com.package.trigger.SessionMatchTrigger.onMerge(SessionMatchTrigger.java:56)
>  at com.package.trigger.SessionMatchTrigger.onMerge(SessionMatchTrigger.java:14)
>  at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onMerge(WindowOperator.java:939)
>  at org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator$1.merge(EvictingWindowOperator.java:141)
>  at org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator$1.merge(EvictingWindowOperator.java:120)
>  at org.apache.flink.streaming.runtime.operators.windowing.MergingWindowSet.addWindow(MergingWindowSet.java:212)
>  at org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator.processElement(EvictingWindowOperator.java:119)
>  at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:207)
>  at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
>  at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:264)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
>  at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.Exception: Error while merging state in RocksDB
>  at org.apache.flink.contrib.streaming.state.RocksDBReducingState.mergeNamespaces(RocksDBReducingState.java:186)
>  at org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.mergePartitionedState(WindowOperator.java:887)
>  ... 12 more
> Caused by: java.lang.IllegalArgumentException: Illegal value provided for SubCode.
>  at org.rocksdb.Status$SubCode.getSubCode(Status.java:109)
>  at org.rocksdb.Status.<init>(Status.java:30)
>  at org.rocksdb.RocksDB.delete(Native Method)
>  at org.rocksdb.RocksDB.delete(RocksDB.java:1110)
>  at org.apache.flink.contrib.streaming.state.RocksDBReducingState.mergeNamespaces(RocksDBReducingState.java:143)
>  ... 13 more
> {panel}
>  
> I found the reason of this error. 
>  Due to Java's
> {RocksDB.Status.SubCode}
> was out of sync with
> {include/rocksdb/status.h:SubCode}
> .
>  When running out of disc space this led to an
> {IllegalArgumentException}
> because of an invalid status code, rather than just returning the 
> corresponding status code without an exception.
>  more details:<[https://github.com/facebook/rocksdb/pull/3050]>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-9233) Merging state may cause runtime exception when windows trigger onMerge

2018-04-22 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447234#comment-16447234
 ] 

Hai Zhou commented on FLINK-9233:
-

I am not quite sure whether upgrading the RocksDB version to 5.9 or higher will 
solve this problem.

[~StephanEwen], [~aljoscha], Do you have any ideas?

> Merging state may cause runtime exception when windows  trigger onMerge
> ---
>
> Key: FLINK-9233
> URL: https://issues.apache.org/jira/browse/FLINK-9233
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
>Priority: Major
>
> The main logic of my Flink job is as follows:
> {code:java}
> clickStream.coGroup(exposureStream).where(...).equalTo(...)
> .window(EventTimeSessionWindows.withGap())
> .trigger(new SessionMatchTrigger)
> .evictor()
> .apply();
> {code}
> {code:java}
> SessionMatchTrigger{
> ReducingStateDescriptor  stateDesc = new ReducingStateDescriptor()
> ...
> public boolean canMerge() {
> return true;
> }
> public void onMerge(TimeWindow window, OnMergeContext ctx) {
> ctx.mergePartitionedState(this.stateDesc);
> ctx.registerEventTimeTimer(window.maxTimestamp());
> }
> 
> }
> {code}
> {panel:title=detailed error logs}
> java.lang.RuntimeException: Error while merging state.
>  at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.mergePartitionedState(WindowOperator.java:895)
>  at 
> com.package.trigger.SessionMatchTrigger.onMerge(SessionMatchTrigger.java:56)
>  at 
> com.package.trigger.SessionMatchTrigger.onMerge(SessionMatchTrigger.java:14)
>  at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onMerge(WindowOperator.java:939)
>  at 
> org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator$1.merge(EvictingWindowOperator.java:141)
>  at 
> org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator$1.merge(EvictingWindowOperator.java:120)
>  at 
> org.apache.flink.streaming.runtime.operators.windowing.MergingWindowSet.addWindow(MergingWindowSet.java:212)
>  at 
> org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator.processElement(EvictingWindowOperator.java:119)
>  at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:207)
>  at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:264)
>  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
>  at java.lang.Thread.run(Thread.java:745)
>  Caused by: java.lang.Exception: Error while merging state in RocksDB
>  at 
> org.apache.flink.contrib.streaming.state.RocksDBReducingState.mergeNamespaces(RocksDBReducingState.java:186)
>  at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.mergePartitionedState(WindowOperator.java:887)
>  ... 12 more
>  Caused by: java.lang.IllegalArgumentException: Illegal value provided for 
> SubCode.
>  at org.rocksdb.Status$SubCode.getSubCode(Status.java:109)
>  at org.rocksdb.Status.<init>(Status.java:30)
>  at org.rocksdb.RocksDB.delete(Native Method)
>  at org.rocksdb.RocksDB.delete(RocksDB.java:1110)
>  at 
> org.apache.flink.contrib.streaming.state.RocksDBReducingState.mergeNamespaces(RocksDBReducingState.java:143)
>  ... 13 more
> {panel}
>  
> I found the reason for this error: Java's {{RocksDB.Status.SubCode}} was out 
> of sync with {{include/rocksdb/status.h:SubCode}}. When running out of disk 
> space, this led to an {{IllegalArgumentException}} because of an invalid 
> status code, rather than just returning the corresponding status code without 
> an exception.
> More details: [https://github.com/facebook/rocksdb/pull/3050]





[jira] [Created] (FLINK-9233) Merging state may cause runtime exception when windows trigger onMerge

2018-04-22 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-9233:
---

 Summary: Merging state may cause runtime exception when windows  
trigger onMerge
 Key: FLINK-9233
 URL: https://issues.apache.org/jira/browse/FLINK-9233
 Project: Flink
  Issue Type: Bug
  Components: State Backends, Checkpointing
Affects Versions: 1.4.0
Reporter: Hai Zhou


The main logic of my Flink job is as follows:
{code:java}
clickStream.coGroup(exposureStream).where(...).equalTo(...)
.window(EventTimeSessionWindows.withGap())
.trigger(new SessionMatchTrigger)
.evictor()
.apply();
{code}
{code:java}
SessionMatchTrigger{

ReducingStateDescriptor  stateDesc = new ReducingStateDescriptor()
...
public boolean canMerge() {
return true;
}


public void onMerge(TimeWindow window, OnMergeContext ctx) {
ctx.mergePartitionedState(this.stateDesc);
ctx.registerEventTimeTimer(window.maxTimestamp());
}

}
{code}
{panel:title=detailed error logs}
java.lang.RuntimeException: Error while merging state.
 at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.mergePartitionedState(WindowOperator.java:895)
 at com.package.trigger.SessionMatchTrigger.onMerge(SessionMatchTrigger.java:56)
 at com.package.trigger.SessionMatchTrigger.onMerge(SessionMatchTrigger.java:14)
 at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.onMerge(WindowOperator.java:939)
 at 
org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator$1.merge(EvictingWindowOperator.java:141)
 at 
org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator$1.merge(EvictingWindowOperator.java:120)
 at 
org.apache.flink.streaming.runtime.operators.windowing.MergingWindowSet.addWindow(MergingWindowSet.java:212)
 at 
org.apache.flink.streaming.runtime.operators.windowing.EvictingWindowOperator.processElement(EvictingWindowOperator.java:119)
 at 
org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:207)
 at 
org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
 at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:264)
 at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
 at java.lang.Thread.run(Thread.java:745)
 Caused by: java.lang.Exception: Error while merging state in RocksDB
 at 
org.apache.flink.contrib.streaming.state.RocksDBReducingState.mergeNamespaces(RocksDBReducingState.java:186)
 at 
org.apache.flink.streaming.runtime.operators.windowing.WindowOperator$Context.mergePartitionedState(WindowOperator.java:887)
 ... 12 more
 Caused by: java.lang.IllegalArgumentException: Illegal value provided for 
SubCode.
 at org.rocksdb.Status$SubCode.getSubCode(Status.java:109)
 at org.rocksdb.Status.<init>(Status.java:30)
 at org.rocksdb.RocksDB.delete(Native Method)
 at org.rocksdb.RocksDB.delete(RocksDB.java:1110)
 at 
org.apache.flink.contrib.streaming.state.RocksDBReducingState.mergeNamespaces(RocksDBReducingState.java:143)
 ... 13 more
{panel}
 

I found the reason for this error: Java's {{RocksDB.Status.SubCode}} was out of 
sync with {{include/rocksdb/status.h:SubCode}}. When running out of disk space, 
this led to an {{IllegalArgumentException}} because of an invalid status code, 
rather than just returning the corresponding status code without an exception.
More details: [https://github.com/facebook/rocksdb/pull/3050]
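The failure mode can be sketched in a few lines of plain Java (the enum below is 
a hypothetical stand-in, not RocksDB's actual {{Status.SubCode}} source): when 
the native library reports a status byte the Java-side enum does not know, the 
lookup throws {{IllegalArgumentException}} instead of returning a status.

```java
// Minimal sketch of the enum-desync failure mode described above.
// The enum and its values are hypothetical, not RocksDB's actual code.
public class SubCodeDesyncDemo {
    // The Java side knows only codes 0-2; imagine the native library
    // was upgraded and now also returns 3 (e.g. "NoSpace" on a full disk).
    enum SubCode {
        NONE((byte) 0), MUTEX_TIMEOUT((byte) 1), LOCK_TIMEOUT((byte) 2);

        private final byte value;
        SubCode(byte value) { this.value = value; }

        // Mirrors the kind of lookup in org.rocksdb.Status$SubCode.getSubCode:
        // an unknown byte value ends in IllegalArgumentException.
        static SubCode getSubCode(byte value) {
            for (SubCode c : SubCode.values()) {
                if (c.value == value) return c;
            }
            throw new IllegalArgumentException("Illegal value provided for SubCode.");
        }
    }

    public static void main(String[] args) {
        System.out.println(SubCode.getSubCode((byte) 1)); // known code: resolves fine
        try {
            SubCode.getSubCode((byte) 3);                 // unknown native code
        } catch (IllegalArgumentException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```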





[jira] [Commented] (FLINK-9091) Failure while enforcing releasability in building flink-json module

2018-04-22 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447205#comment-16447205
 ] 

Hai Zhou commented on FLINK-9091:
-

+1.

> Failure while enforcing releasability in building flink-json module
> ---
>
> Key: FLINK-9091
> URL: https://issues.apache.org/jira/browse/FLINK-9091
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Ted Yu
>Assignee: Hai Zhou
>Priority: Major
> Attachments: f-json.out
>
>
> Got the following when building flink-json module:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce 
> (dependency-convergence) on project flink-json: Some Enforcer rules have 
> failed.   Look above for specific messages explaining why the rule failed. -> 
> [Help 1]
> {code}





[jira] [Commented] (FLINK-9032) Deprecate ConfigConstants#YARN_CONTAINER_START_COMMAND_TEMPLATE

2018-04-22 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447140#comment-16447140
 ] 

Hai Zhou commented on FLINK-9032:
-

Hi [~aljoscha], [~till.rohrmann], what do you think about this?

> Deprecate ConfigConstants#YARN_CONTAINER_START_COMMAND_TEMPLATE
> ---
>
> Key: FLINK-9032
> URL: https://issues.apache.org/jira/browse/FLINK-9032
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Documentation
>Affects Versions: 1.5.0
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Major
> Fix For: 1.5.0, 1.6.0
>
>
> This constant ("yarn.container-start-command-template") has disappeared from 
> the [1.5.0-SNAPSHOT 
> docs|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html].
> We should restore it, and I think it should be renamed to 
> "containerized.start-command-template".
> [~Zentol], what do you think?





[jira] [Commented] (FLINK-9231) Enable SO_REUSEADDR on listen sockets

2018-04-22 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16447110#comment-16447110
 ] 

Hai Zhou commented on FLINK-9231:
-

Hi [~triones], this ticket is now assigned to you, so you can start working on it. :D

> Enable SO_REUSEADDR on listen sockets
> -
>
> Key: FLINK-9231
> URL: https://issues.apache.org/jira/browse/FLINK-9231
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Triones Deng
>Priority: Major
>
> This allows sockets to be bound even if there are sockets
> from a previous application that are still pending closure.
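As a sketch of what enabling this looks like with the standard java.net API 
(illustrative only — Flink's actual servers are Netty-based, so this is not the 
real bootstrap code): the option must be set before {{bind()}}.

```java
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Sketch: enable SO_REUSEADDR before bind(), so the listen socket can be
// bound even while sockets from a previous process linger in TIME_WAIT.
public class ReuseAddrDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket();  // create unbound
        server.setReuseAddress(true);              // must be set before bind()
        server.bind(new InetSocketAddress(0));     // port 0 = any free port
        System.out.println("reuseAddress=" + server.getReuseAddress());
        server.close();
    }
}
```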





[jira] [Assigned] (FLINK-9231) Enable SO_REUSEADDR on listen sockets

2018-04-22 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-9231:
---

Assignee: Triones Deng

> Enable SO_REUSEADDR on listen sockets
> -
>
> Key: FLINK-9231
> URL: https://issues.apache.org/jira/browse/FLINK-9231
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Triones Deng
>Priority: Major
>
> This allows sockets to be bound even if there are sockets
> from a previous application that are still pending closure.





[jira] [Assigned] (FLINK-9212) Port SubtasksAllAccumulatorsHandler to new REST endpoint

2018-04-18 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-9212:
---

Assignee: Hai Zhou

> Port SubtasksAllAccumulatorsHandler to new REST endpoint
> 
>
> Key: FLINK-9212
> URL: https://issues.apache.org/jira/browse/FLINK-9212
> Project: Flink
>  Issue Type: Sub-task
>  Components: REST
>Affects Versions: 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Hai Zhou
>Priority: Blocker
>






[jira] [Commented] (FLINK-9032) Deprecate ConfigConstants#YARN_CONTAINER_START_COMMAND_TEMPLATE

2018-03-26 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16415050#comment-16415050
 ] 

Hai Zhou commented on FLINK-9032:
-

[~Zentol] , What do you think about this ticket?

> Deprecate ConfigConstants#YARN_CONTAINER_START_COMMAND_TEMPLATE
> ---
>
> Key: FLINK-9032
> URL: https://issues.apache.org/jira/browse/FLINK-9032
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Documentation
>Affects Versions: 1.5.0
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Major
> Fix For: 1.5.0, 1.6.0
>
>
> This constant ("yarn.container-start-command-template") has disappeared from 
> the [1.5.0-SNAPSHOT 
> docs|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html].
> We should restore it, and I think it should be renamed to 
> "containerized.start-command-template".
> [~Zentol], what do you think?





[jira] [Assigned] (FLINK-9090) Replace UUID.randomUUID with deterministic PRNG

2018-03-26 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-9090:
---

Assignee: Hai Zhou

> Replace UUID.randomUUID with deterministic PRNG
> ---
>
> Key: FLINK-9090
> URL: https://issues.apache.org/jira/browse/FLINK-9090
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Hai Zhou
>Priority: Major
>
> Currently UUID.randomUUID is called in various places in the code base.
> * It is non-deterministic.
> * It uses a single SecureRandom instance for UUID generation. This takes a 
> JVM-wide lock, which can lead to lock contention and other performance 
> problems.
> We should move to something deterministic by using seeded PRNGs.
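A minimal sketch of the proposed direction (illustrative only, not a concrete 
implementation for Flink): derive UUIDs from a seeded {{java.util.Random}} so a 
run is reproducible, while still setting the version/variant bits of a 
well-formed UUID. Note that {{java.util.Random}} is not cryptographically secure.

```java
import java.util.Random;
import java.util.UUID;

// Sketch: deterministic UUIDs from a seeded PRNG. Same seed -> same sequence,
// and no JVM-wide SecureRandom lock is taken.
public class SeededUuidDemo {
    static UUID nextUuid(Random rng) {
        long hi = rng.nextLong();
        long lo = rng.nextLong();
        // Set the version (4) and variant bits so the result is well-formed.
        hi = (hi & ~0xF000L) | 0x4000L;
        lo = (lo & ~0xC000000000000000L) | 0x8000000000000000L;
        return new UUID(hi, lo);
    }

    public static void main(String[] args) {
        // Two generators with the same seed produce the same UUID.
        boolean same = nextUuid(new Random(42)).equals(nextUuid(new Random(42)));
        System.out.println(same); // prints "true"
    }
}
```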





[jira] [Assigned] (FLINK-9088) Upgrade Nifi connector dependency to 1.6.0

2018-03-26 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-9088:
---

Assignee: Hai Zhou

> Upgrade Nifi connector dependency to 1.6.0
> --
>
> Key: FLINK-9088
> URL: https://issues.apache.org/jira/browse/FLINK-9088
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Hai Zhou
>Priority: Major
>
> Currently the NiFi dependency is 0.6.1.
> We should upgrade to 1.6.0.





[jira] [Assigned] (FLINK-9091) Failure while enforcing releasability in building flink-json module

2018-03-26 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-9091:
---

Assignee: Hai Zhou

> Failure while enforcing releasability in building flink-json module
> ---
>
> Key: FLINK-9091
> URL: https://issues.apache.org/jira/browse/FLINK-9091
> Project: Flink
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Hai Zhou
>Priority: Major
> Attachments: f-json.out
>
>
> Got the following when building flink-json module:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability. See above detailed error message.
> ...
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce 
> (dependency-convergence) on project flink-json: Some Enforcer rules have 
> failed.   Look above for specific messages explaining why the rule failed. -> 
> [Help 1]
> {code}





[jira] [Commented] (FLINK-9068) Website documentation issue - html tag visible on screen

2018-03-23 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412385#comment-16412385
 ] 

Hai Zhou commented on FLINK-9068:
-

It seems I can't find you in the Assignee search box; you may need to apply to 
become a contributor. (Next Monday, you can mail *dev*@flink.apache.org, CC 
[~Zentol], and apply to become an Apache Flink contributor.)

 

The above is not particularly important :), since you can already submit a PR 
to fix it. (Please read the [Contribute 
code|http://flink.apache.org/contribute-code.html] guide before you start to 
work on a code contribution.)

> Website documentation issue - html tag visible on screen
> 
>
> Key: FLINK-9068
> URL: https://issues.apache.org/jira/browse/FLINK-9068
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: SHANKAR GANESH
>Priority: Minor
> Attachments: Screen Shot 2018-03-23 at 7.56.48 PM.png
>
>
> In the documentation at the following url
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/#physical-partitioning]
> In the section which explains the 'Reduce' operator (*Reduce*
> KeyedStream → DataStream), an html tag () is visible.





[jira] [Commented] (FLINK-9068) Website documentation issue - html tag visible on screen

2018-03-23 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16412380#comment-16412380
 ] 

Hai Zhou commented on FLINK-9068:
-

Thank you [~shankarganesh1234] for finding this typo issue. Are you interested 
in fixing it?

> Website documentation issue - html tag visible on screen
> 
>
> Key: FLINK-9068
> URL: https://issues.apache.org/jira/browse/FLINK-9068
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: SHANKAR GANESH
>Priority: Minor
> Attachments: Screen Shot 2018-03-23 at 7.56.48 PM.png
>
>
> In the documentation at the following url
> [https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/stream/operators/#physical-partitioning]
> In the section which explains the 'Reduce' operator (*Reduce*
> KeyedStream → DataStream), an html tag () is visible.





[jira] [Updated] (FLINK-9064) Add Scaladocs link to documentation

2018-03-23 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-9064:

Issue Type: Improvement  (was: Bug)

> Add Scaladocs link to documentation
> ---
>
> Key: FLINK-9064
> URL: https://issues.apache.org/jira/browse/FLINK-9064
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.5.0
>Reporter: Matt Hagen
>Priority: Minor
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Browse to the [Apache Flink 
> Documentation|https://ci.apache.org/projects/flink/flink-docs-master/] page.
> On the sidebar, under the Javadocs link, I recommend that you add a 
> [Scaladocs|https://ci.apache.org/projects/flink/flink-docs-release-1.5/api/scala/index.html#org.apache.flink.api.scala.package]
>  link.
> Thanks!
>  





[jira] [Created] (FLINK-9033) Replace usages of deprecated TASK_MANAGER_NUM_TASK_SLOTS

2018-03-20 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-9033:
---

 Summary: Replace usages of deprecated TASK_MANAGER_NUM_TASK_SLOTS
 Key: FLINK-9033
 URL: https://issues.apache.org/jira/browse/FLINK-9033
 Project: Flink
  Issue Type: Improvement
  Components: Configuration
Affects Versions: 1.5.0
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.5.0


The deprecated ConfigConstants#TASK_MANAGER_NUM_TASK_SLOTS is still used a lot.

We should replace these usages with TaskManagerOptions#NUM_TASK_SLOTS.





[jira] [Updated] (FLINK-9032) Deprecate ConfigConstants#YARN_CONTAINER_START_COMMAND_TEMPLATE

2018-03-20 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-9032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-9032:

Component/s: Configuration

> Deprecate ConfigConstants#YARN_CONTAINER_START_COMMAND_TEMPLATE
> ---
>
> Key: FLINK-9032
> URL: https://issues.apache.org/jira/browse/FLINK-9032
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Documentation
>Affects Versions: 1.5.0
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Major
> Fix For: 1.5.0, 1.6.0
>
>
> This constant ("yarn.container-start-command-template") has disappeared from 
> the [1.5.0-SNAPSHOT 
> docs|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html].
> We should restore it, and I think it should be renamed to 
> "containerized.start-command-template".
> [~Zentol], what do you think?





[jira] [Created] (FLINK-9032) Deprecate ConfigConstants#YARN_CONTAINER_START_COMMAND_TEMPLATE

2018-03-20 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-9032:
---

 Summary: Deprecate 
ConfigConstants#YARN_CONTAINER_START_COMMAND_TEMPLATE
 Key: FLINK-9032
 URL: https://issues.apache.org/jira/browse/FLINK-9032
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.5.0
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.5.0, 1.6.0


This constant ("yarn.container-start-command-template") has disappeared from the 
[1.5.0-SNAPSHOT 
docs|https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html].

We should restore it, and I think it should be renamed to 
"containerized.start-command-template".

[~Zentol], what do you think?






[jira] [Assigned] (FLINK-7624) Add kafka-topic for "KafkaProducer" metrics

2017-09-16 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-7624:
---

Assignee: (was: Hai Zhou)

> Add kafka-topic for "KafkaProducer" metrics
> ---
>
> Key: FLINK-7624
> URL: https://issues.apache.org/jira/browse/FLINK-7624
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Reporter: Hai Zhou
> Fix For: 1.4.0
>
>
> Currently, metric in "KafkaProducer" MetricGroup, Such as:
> {code:java}
> localhost.taskmanager.dc4092a96ea4e54ecdbd13b9a5c209b2.Flink Streaming 
> Job.Sink--MTKafkaProducer08.0.KafkaProducer.record-queue-time-avg
> {code}
> The metric name in the "KafkaProducer" group does not have a kafka-topic name 
> part, so if the job writes data to two different Kafka sinks, these metrics 
> cannot be distinguished.
> I propose modifying the above metric name as follows:
> {code:java}
> localhost.taskmanager.dc4092a96ea4e54ecdbd13b9a5c209b2.Flink Streaming 
> Job.Sink--MTKafkaProducer08.0.KafkaProducer.<kafka topic>.record-queue-time-avg
> {code}
> Best,
> Hai Zhou



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7625) typo in docs metrics sections

2017-09-16 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou closed FLINK-7625.
---
Resolution: Fixed

>  typo in docs metrics sections
> --
>
> Key: FLINK-7625
> URL: https://issues.apache.org/jira/browse/FLINK-7625
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Metrics
>Affects Versions: 1.3.2
>Reporter: Hai Zhou
>Assignee: Hai Zhou
> Fix For: 1.4.0, 1.3.3
>
>
> Infix  Metrics
> Status.JVM.Memory  *Memory.Heap.Used*
> changed to
> Status.JVM.Memory *Heap.Used*





[jira] [Closed] (FLINK-7494) No license headers in ".travis.yml" file

2017-09-16 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou closed FLINK-7494.
---
   Resolution: Fixed
Fix Version/s: 1.4.0

> No license headers in ".travis.yml" file
> 
>
> Key: FLINK-7494
> URL: https://issues.apache.org/jira/browse/FLINK-7494
> Project: Flink
>  Issue Type: Wish
>  Components: Travis
>Reporter: Hai Zhou
>Assignee: Hai Zhou
> Fix For: 1.4.0
>
>
> I will fix the ".travis.yml" file.





[jira] [Created] (FLINK-7626) Add some metric description about checkpoints

2017-09-14 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7626:
---

 Summary: Add some metric description about checkpoints
 Key: FLINK-7626
 URL: https://issues.apache.org/jira/browse/FLINK-7626
 Project: Flink
  Issue Type: Bug
  Components: Documentation, Metrics
Affects Versions: 1.3.2
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.4.0, 1.3.3


Add some metric descriptions to the "Debugging & Monitoring / Metrics" part of the docs:

{noformat}

//Number of total checkpoints (in progress, completed, failed)
totalNumberOfCheckpoints

 //Number of in progress checkpoints.
numberOfInProgressCheckpoints

//Number of successfully completed checkpoints
numberOfCompletedCheckpoints

//Number of failed checkpoints.
numberOfFailedCheckpoints

//Timestamp when the checkpoint was restored at the coordinator.
lastCheckpointRestoreTimestamp

//Buffered bytes during alignment over all subtasks.
lastCheckpointAlignmentBuffered
{noformat}






[jira] [Created] (FLINK-7625) typo in docs metrics sections

2017-09-14 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7625:
---

 Summary:  typo in docs metrics sections
 Key: FLINK-7625
 URL: https://issues.apache.org/jira/browse/FLINK-7625
 Project: Flink
  Issue Type: Bug
  Components: Documentation, Metrics
Affects Versions: 1.3.2
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.4.0, 1.3.3


Infix  Metrics
Status.JVM.Memory  *Memory.Heap.Used*
changed to
Status.JVM.Memory *Heap.Used*





[jira] [Created] (FLINK-7624) Add kafka-topic for "KafkaProducer" metrics

2017-09-14 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7624:
---

 Summary: Add kafka-topic for "KafkaProducer" metrics
 Key: FLINK-7624
 URL: https://issues.apache.org/jira/browse/FLINK-7624
 Project: Flink
  Issue Type: Bug
  Components: Metrics
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.4.0


Currently, metric in "KafkaProducer" MetricGroup, Such as:
{code:java}
localhost.taskmanager.dc4092a96ea4e54ecdbd13b9a5c209b2.Flink Streaming 
Job.Sink--MTKafkaProducer08.0.KafkaProducer.record-queue-time-avg
{code}
The metric name in the "KafkaProducer" group does not have a kafka-topic name 
part, so if the job writes data to two different Kafka sinks, these metrics 
cannot be distinguished.

I propose modifying the above metric name as follows:
{code:java}
localhost.taskmanager.dc4092a96ea4e54ecdbd13b9a5c209b2.Flink Streaming 
Job.Sink--MTKafkaProducer08.0.KafkaProducer.<kafka topic>.record-queue-time-avg
{code}
Best,
Hai Zhou





[jira] [Closed] (FLINK-2428) Clean up unused properties in StreamConfig

2017-09-11 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-2428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou closed FLINK-2428.
---
Resolution: Fixed

> Clean up unused properties in StreamConfig
> --
>
> Key: FLINK-2428
> URL: https://issues.apache.org/jira/browse/FLINK-2428
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API
>Affects Versions: 0.10.0
>Reporter: Stephan Ewen
>Priority: Minor
>
> There is a multitude of unused properties in the {{StreamConfig}}, which 
> should be removed, if no longer relevant.





[jira] [Commented] (FLINK-3542) FlinkKafkaConsumer09 cannot handle changing number of partitions

2017-09-11 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16162054#comment-16162054
 ] 

Hai Zhou commented on FLINK-3542:
-

[~till.rohrmann]
In Flink 1.3.x, does FlinkKafkaConsumer08 also have this problem?


> FlinkKafkaConsumer09 cannot handle changing number of partitions
> 
>
> Key: FLINK-3542
> URL: https://issues.apache.org/jira/browse/FLINK-3542
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Affects Versions: 1.0.0
>Reporter: Till Rohrmann
>Priority: Minor
>
> The current {{FlinkKafkaConsumer09}} cannot handle increasing the number of 
> partitions of a topic while running. The consumer will simply leave the newly 
> created partitions out and thus miss all data which is written to the new 
> partitions. The reason seems to be a static assignment of partitions to 
> consumer tasks when the job is started.
> We should either fix this behaviour or clearly document it in the online and 
> code docs.
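The static-assignment behaviour described above can be sketched with a toy model 
(hypothetical, not the actual FlinkKafkaConsumer09 code): each subtask computes 
its partition set once from the partition count seen at startup, so partitions 
created later belong to no subtask and their data is missed.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the static-assignment problem: the mapping from partitions to
// consumer subtasks is fixed at job start and never recomputed.
public class StaticAssignmentDemo {
    // Assign partition p to subtask p % numSubtasks, using the partition
    // count observed at startup.
    static List<Integer> assignedTo(int subtask, int numSubtasks, int partitionsAtStartup) {
        List<Integer> mine = new ArrayList<>();
        for (int p = 0; p < partitionsAtStartup; p++) {
            if (p % numSubtasks == subtask) mine.add(p);
        }
        return mine;
    }

    public static void main(String[] args) {
        int numSubtasks = 2, partitionsAtStartup = 4;
        // If partitions 4 and 5 are added to the topic after the job started,
        // no subtask recomputes its assignment, so they are simply missed.
        for (int t = 0; t < numSubtasks; t++) {
            System.out.println("subtask " + t + " reads "
                    + assignedTo(t, numSubtasks, partitionsAtStartup));
        }
        // prints "subtask 0 reads [0, 2]" and "subtask 1 reads [1, 3]"
    }
}
```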





[jira] [Commented] (FLINK-6444) Add a check that '@VisibleForTesting' methods are only used in tests

2017-09-11 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16161119#comment-16161119
 ] 

Hai Zhou commented on FLINK-6444:
-

I already have an idea.
When I am not busy with work ;), I will fix it.

> Add a check that '@VisibleForTesting' methods are only used in tests
> 
>
> Key: FLINK-6444
> URL: https://issues.apache.org/jira/browse/FLINK-6444
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Stephan Ewen
>
> Some methods are annotated with {{@VisibleForTesting}}. These methods should 
> only be called from tests.
> This is currently not enforced / checked during the build. We should add such 
> a check.





[jira] [Assigned] (FLINK-7562) uploaded jars sort by upload-time on dashboard page

2017-09-09 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-7562:
---

Assignee: Hai Zhou

> uploaded jars sort by upload-time on dashboard page
> ---
>
> Key: FLINK-7562
> URL: https://issues.apache.org/jira/browse/FLINK-7562
> Project: Flink
>  Issue Type: Wish
>  Components: Webfrontend
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Minor
>
> Currently, uploaded jar packages are shown unordered on the Apache Flink Web 
> Dashboard page.
> I think we should sort them by upload time, which would look friendlier!
> Regards,
> Hai Zhou.





[jira] [Commented] (FLINK-7608) LatencyGauge change to histogram metric

2017-09-08 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16159312#comment-16159312
 ] 

Hai Zhou commented on FLINK-7608:
-

Hi [~rmetzger],
in the org.apache.flink.streaming.api.operators.AbstractStreamOperator class:
{code:java}
/**
 * The gauge uses a HashMap internally to avoid classloading issues
 * when accessing the values using JMX.
 */
protected static class LatencyGauge implements Gauge<Map<String, HashMap<String, Double>>> {
{code}

According to this log, I think this latency-statistics metric should be a 
histogram instead of a gauge. What's your opinion?

Regards,
Hai Zhou

> LatencyGauge change to  histogram metric
> 
>
> Key: FLINK-7608
> URL: https://issues.apache.org/jira/browse/FLINK-7608
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Reporter: Hai Zhou
>Assignee: Hai Zhou
> Fix For: 1.4.0, 1.3.3
>
>
> I used the slf4jReporter [https://issues.apache.org/jira/browse/FLINK-4831] to 
> export metrics to the log file.
> I found:
> {noformat}
> -- Gauges 
> -
> ..
> zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
> Job.Map.0.latency:
>  value={LatencySourceDescriptor{vertexID=1, subtaskIndex=-1}={p99=116.0, 
> p50=59.5, min=11.0, max=116.0, p95=116.0, mean=61.836}}
> zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
> Job.Sink- Unnamed.0.latency: 
> value={LatencySourceDescriptor{vertexID=1, subtaskIndex=0}={p99=195.0, 
> p50=163.5, min=115.0, max=195.0, p95=195.0, mean=161.0}}
> ..
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7608) LatencyGauge change to histogram metric

2017-09-08 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7608:
---

 Summary: LatencyGauge change to  histogram metric
 Key: FLINK-7608
 URL: https://issues.apache.org/jira/browse/FLINK-7608
 Project: Flink
  Issue Type: Bug
  Components: Metrics
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.4.0, 1.3.3


I used the slf4jReporter [https://issues.apache.org/jira/browse/FLINK-4831] to 
export metrics to the log file.
I found:


{noformat}
-- Gauges -
..
zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
Job.Map.0.latency: value={LatencySourceDescriptor{vertexID=1, 
subtaskIndex=-1}={p99=116.0, p50=59.5, min=11.0, max=116.0, p95=116.0, 
mean=61.836}}
zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
Job.Sink- Unnamed.0.latency: value={LatencySourceDescriptor{vertexID=1, 
subtaskIndex=0}={p99=195.0, p50=163.5, min=115.0, max=195.0, p95=195.0, 
mean=161.0}}
..
{noformat}





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7608) LatencyGauge change to histogram metric

2017-09-08 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7608:

Description: 
I used the slf4jReporter [https://issues.apache.org/jira/browse/FLINK-4831] to 
export metrics to the log file.
I found:


{noformat}
-- Gauges -
..
zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
Job.Map.0.latency:
 value={LatencySourceDescriptor{vertexID=1, subtaskIndex=-1}={p99=116.0, 
p50=59.5, min=11.0, max=116.0, p95=116.0, mean=61.836}}
zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
Job.Sink- Unnamed.0.latency: 
value={LatencySourceDescriptor{vertexID=1, subtaskIndex=0}={p99=195.0, 
p50=163.5, min=115.0, max=195.0, p95=195.0, mean=161.0}}
..
{noformat}



  was:
I used the slf4jReporter [https://issues.apache.org/jira/browse/FLINK-4831] to 
export metrics to the log file.
I found:


{noformat}
-- Gauges -
..
zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
Job.Map.0.latency: value={LatencySourceDescriptor{vertexID=1, 
subtaskIndex=-1}={p99=116.0, p50=59.5, min=11.0, max=116.0, p95=116.0, 
mean=61.836}}
zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
Job.Sink- Unnamed.0.latency: value={LatencySourceDescriptor{vertexID=1, 
subtaskIndex=0}={p99=195.0, p50=163.5, min=115.0, max=195.0, p95=195.0, 
mean=161.0}}
..
{noformat}




> LatencyGauge change to  histogram metric
> 
>
> Key: FLINK-7608
> URL: https://issues.apache.org/jira/browse/FLINK-7608
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics
>Reporter: Hai Zhou
>Assignee: Hai Zhou
> Fix For: 1.4.0, 1.3.3
>
>
> I used the slf4jReporter [https://issues.apache.org/jira/browse/FLINK-4831] to 
> export metrics to the log file.
> I found:
> {noformat}
> -- Gauges 
> -
> ..
> zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
> Job.Map.0.latency:
>  value={LatencySourceDescriptor{vertexID=1, subtaskIndex=-1}={p99=116.0, 
> p50=59.5, min=11.0, max=116.0, p95=116.0, mean=61.836}}
> zhouhai-mbp.taskmanager.f3fd3a269c8c3da4e8319c8f6a201a57.Flink Streaming 
> Job.Sink- Unnamed.0.latency: 
> value={LatencySourceDescriptor{vertexID=1, subtaskIndex=0}={p99=195.0, 
> p50=163.5, min=115.0, max=195.0, p95=195.0, mean=161.0}}
> ..
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7604) Add OpenTSDB metrics reporter

2017-09-08 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7604:
---

 Summary: Add OpenTSDB metrics reporter
 Key: FLINK-7604
 URL: https://issues.apache.org/jira/browse/FLINK-7604
 Project: Flink
  Issue Type: Bug
  Components: Metrics
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.4.0, 1.3.3






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7594) Add a SQL CLI client

2017-09-07 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156954#comment-16156954
 ] 

Hai Zhou commented on FLINK-7594:
-

[~wheat9] That sounds great. +1
BTW, it would be even better if Alibaba and Huawei could share their designs ;)

> Add a SQL CLI client
> 
>
> Key: FLINK-7594
> URL: https://issues.apache.org/jira/browse/FLINK-7594
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> At the moment a user can only specify queries within a Java/Scala program 
> which is nice for integrating table programs or parts of it with DataSet or 
> DataStream API. With more connectors coming up, it is time to also provide a 
> programming-free SQL client. The SQL client should consist of a CLI interface 
> and maybe also a REST API. The concrete design is still up for discussion.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-4831) Implement a slf4j metric reporter

2017-09-05 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153866#comment-16153866
 ] 

Hai Zhou commented on FLINK-4831:
-

I recently read up on *Dropwizard Metrics*, and I have some ideas.

> Implement a slf4j metric reporter
> -
>
> Key: FLINK-4831
> URL: https://issues.apache.org/jira/browse/FLINK-4831
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.2
>Reporter: Chesnay Schepler
>Assignee: Hai Zhou
>Priority: Minor
>  Labels: easyfix, starter
>
> For debugging purpose it would be very useful to have a log4j metric 
> reporter. If you don't want to setup a metric backend you currently have to 
> rely on JMX, which a) works a bit differently than other reporters (for 
> example it doesn't extend AbstractReporter) and b) makes it a bit tricky to 
> analyze results as metrics are cleaned up once a job finishes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-4831) Implement a slf4j metric reporter

2017-09-05 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-4831:
---

Assignee: Hai Zhou

> Implement a slf4j metric reporter
> -
>
> Key: FLINK-4831
> URL: https://issues.apache.org/jira/browse/FLINK-4831
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.2
>Reporter: Chesnay Schepler
>Assignee: Hai Zhou
>Priority: Minor
>  Labels: easyfix, starter
>
> For debugging purpose it would be very useful to have a log4j metric 
> reporter. If you don't want to setup a metric backend you currently have to 
> rely on JMX, which a) works a bit differently than other reporters (for 
> example it doesn't extend AbstractReporter) and b) makes it a bit tricky to 
> analyze results as metrics are cleaned up once a job finishes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7585) SimpleCounter not thread safe

2017-09-05 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153798#comment-16153798
 ] 

Hai Zhou commented on FLINK-7585:
-

Hi [~Zentol],

SimpleCounter is not thread-safe.
Should we change the type of *count* to a LongAdder to ensure thread safety?

Cheers,
Hai Zhou
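
As a minimal sketch of the idea, a LongAdder-backed counter could look like this (the class name and methods are simplified for illustration; this is not Flink's actual org.apache.flink.metrics.Counter interface):

```java
import java.util.concurrent.atomic.LongAdder;

/**
 * Thread-safe counter backed by LongAdder. Under contention, LongAdder spreads
 * updates across internal cells, so inc() stays cheap with many threads, at
 * the cost of a slightly more expensive sum() on read.
 */
class ThreadSafeCounter {

    private final LongAdder count = new LongAdder();

    void inc() { count.increment(); }

    void inc(long n) { count.add(n); }

    void dec() { count.decrement(); }

    long getCount() { return count.sum(); }
}
```

The trade-off is that reads reconcile the internal cells, which is usually acceptable for metrics that are written far more often than read.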

> SimpleCounter not thread safe
> -
>
> Key: FLINK-7585
> URL: https://issues.apache.org/jira/browse/FLINK-7585
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Metrics
>Affects Versions: 1.3.2
>Reporter: Hai Zhou
>Assignee: Hai Zhou
> Fix For: 1.4.0, 1.3.3
>
>
> package org.apache.flink.metrics;
> /**
>  * A simple low-overhead {@link org.apache.flink.metrics.Counter} that is not 
> thread-safe.
>  */
> public class SimpleCounter implements Counter {
>   /** the current count. */
>   private long count;



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7585) SimpleCounter not thread safe

2017-09-05 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7585:
---

 Summary: SimpleCounter not thread safe
 Key: FLINK-7585
 URL: https://issues.apache.org/jira/browse/FLINK-7585
 Project: Flink
  Issue Type: Bug
  Components: Build System, Metrics
Affects Versions: 1.3.2
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.4.0, 1.3.3


package org.apache.flink.metrics;

/**
 * A simple low-overhead {@link org.apache.flink.metrics.Counter} that is not 
thread-safe.
 */
public class SimpleCounter implements Counter {

/** the current count. */
private long count;




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7573) Introduce Http protocol connector for Elasticsearch2

2017-09-04 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16153060#comment-16153060
 ] 

Hai Zhou commented on FLINK-7573:
-

Hi [~mingleizhang], a quick note first:
TCP is a transport-layer protocol, while HTTP is an application-layer protocol.
In essence, the two are not directly comparable, since the HTTP protocol is 
built on top of TCP.

> Introduce Http protocol connector for Elasticsearch2
> 
>
> Key: FLINK-7573
> URL: https://issues.apache.org/jira/browse/FLINK-7573
> Project: Flink
>  Issue Type: Improvement
>  Components: ElasticSearch Connector
>Reporter: mingleizhang
>
> Currently, all connectors as far as I have known that merely support the TCP 
> transport protocol of Elasticsearch, but some of company's ES cluster just 
> relies on the HTTP protocol, and close the TCP port on production 
> environment. So, I suggest add a new implemention for creating a HTTP 
> protocol by using {{JestClient}}, which is a Java HTTP Rest client for 
> ElasticSearch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-3831) flink-streaming-java

2017-09-04 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-3831:
---

   Assignee: Hai Zhou
Environment: 
Apache Maven 3.3.9, Java version: 1.8.0_144

Description: 

{noformat}
[INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
flink-streaming-java_2.11 ---
[WARNING] Used undeclared dependencies found:
[WARNING]org.apache.flink:flink-metrics-core:jar:1.4-SNAPSHOT:compile
[WARNING]com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
[WARNING]org.apache.flink:flink-annotations:jar:1.4-SNAPSHOT:compile
[WARNING]org.scala-lang:scala-library:jar:2.11.11:compile
[WARNING]commons-io:commons-io:jar:2.4:compile
[WARNING]org.hamcrest:hamcrest-core:jar:1.3:test
[WARNING]com.fasterxml.jackson.core:jackson-databind:jar:2.7.4:compile
[WARNING]org.apache.flink:flink-java:jar:1.4-SNAPSHOT:compile
[WARNING]org.apache.flink:flink-shaded-hadoop2:jar:1.4-SNAPSHOT:compile
[WARNING]com.data-artisans:flakka-actor_2.11:jar:2.3-custom:compile
[WARNING]org.apache.flink:flink-optimizer_2.11:jar:1.4-SNAPSHOT:compile
[WARNING]org.apache.commons:commons-lang3:jar:3.3.2:compile
[WARNING]org.powermock:powermock-core:jar:1.6.5:test
[WARNING] Unused declared dependencies found:
[WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
[WARNING]log4j:log4j:jar:1.2.17:test
[WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test
{noformat}


> flink-streaming-java
> 
>
> Key: FLINK-3831
> URL: https://issues.apache.org/jira/browse/FLINK-3831
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
> Environment: Apache Maven 3.3.9, Java version: 1.8.0_144
>Reporter: Till Rohrmann
>Assignee: Hai Zhou
>
> {noformat}
> [INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
> flink-streaming-java_2.11 ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.apache.flink:flink-metrics-core:jar:1.4-SNAPSHOT:compile
> [WARNING]com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
> [WARNING]org.apache.flink:flink-annotations:jar:1.4-SNAPSHOT:compile
> [WARNING]org.scala-lang:scala-library:jar:2.11.11:compile
> [WARNING]commons-io:commons-io:jar:2.4:compile
> [WARNING]org.hamcrest:hamcrest-core:jar:1.3:test
> [WARNING]com.fasterxml.jackson.core:jackson-databind:jar:2.7.4:compile
> [WARNING]org.apache.flink:flink-java:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.flink:flink-shaded-hadoop2:jar:1.4-SNAPSHOT:compile
> [WARNING]com.data-artisans:flakka-actor_2.11:jar:2.3-custom:compile
> [WARNING]org.apache.flink:flink-optimizer_2.11:jar:1.4-SNAPSHOT:compile
> [WARNING]org.apache.commons:commons-lang3:jar:3.3.2:compile
> [WARNING]org.powermock:powermock-core:jar:1.6.5:test
> [WARNING] Unused declared dependencies found:
> [WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
> [WARNING]log4j:log4j:jar:1.2.17:test
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7577) flink-core

2017-09-04 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7577:
---

 Summary: flink-core
 Key: FLINK-7577
 URL: https://issues.apache.org/jira/browse/FLINK-7577
 Project: Flink
  Issue Type: Sub-task
  Components: Build System
 Environment: Apache Maven 3.3.9, Java version: 1.8.0_144
Reporter: Hai Zhou
Assignee: Hai Zhou


{noformat}
[INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ flink-core ---
[WARNING] Used undeclared dependencies found:
[WARNING]org.hamcrest:hamcrest-core:jar:1.3:test
[WARNING]org.powermock:powermock-core:jar:1.6.5:test
[WARNING]org.powermock:powermock-reflect:jar:1.6.5:test
[WARNING]org.objenesis:objenesis:jar:2.1:compile
[WARNING] Unused declared dependencies found:
[WARNING]org.xerial.snappy:snappy-java:jar:1.1.1.3:compile
[WARNING]org.hamcrest:hamcrest-all:jar:1.3:test
[WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
[WARNING]log4j:log4j:jar:1.2.17:test
[WARNING]org.joda:joda-convert:jar:1.7:test
[WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test
{noformat}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-3828) flink-runtime

2017-09-04 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-3828:
---

   Assignee: Hai Zhou
Environment: Apache Maven 3.3.9, Java version: 1.8.0_144
Description: 
{noformat}
[INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
flink-runtime_2.11 ---
[WARNING] Used undeclared dependencies found:
[WARNING]org.apache.flink:flink-metrics-core:jar:1.4-SNAPSHOT:compile
[WARNING]io.netty:netty:jar:3.8.0.Final:compile
[WARNING]com.google.code.findbugs:annotations:jar:2.0.1:test
[WARNING]com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
[WARNING]com.typesafe:config:jar:1.2.1:compile
[WARNING]org.apache.flink:flink-annotations:jar:1.4-SNAPSHOT:compile
[WARNING]commons-io:commons-io:jar:2.4:compile
[WARNING]org.hamcrest:hamcrest-core:jar:1.3:test
[WARNING]org.powermock:powermock-api-support:jar:1.6.5:test
[WARNING]javax.xml.bind:jaxb-api:jar:2.2.2:compile
[WARNING]commons-collections:commons-collections:jar:3.2.2:compile
[WARNING]com.fasterxml.jackson.core:jackson-annotations:jar:2.7.4:compile
[WARNING]org.powermock:powermock-core:jar:1.6.5:test
[WARNING]org.powermock:powermock-reflect:jar:1.6.5:test
[WARNING] Unused declared dependencies found:
[WARNING]com.data-artisans:flakka-slf4j_2.11:jar:2.3-custom:compile
[WARNING]org.reflections:reflections:jar:0.9.10:test
[WARNING]org.javassist:javassist:jar:3.18.2-GA:compile
[WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
[WARNING]com.google.code.findbugs:jsr305:jar:1.3.9:compile
[WARNING]log4j:log4j:jar:1.2.17:test
[WARNING]com.twitter:chill_2.11:jar:0.7.4:compile
[WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test
{noformat}


  was:Clean up for flink-runtime


> flink-runtime
> -
>
> Key: FLINK-3828
> URL: https://issues.apache.org/jira/browse/FLINK-3828
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
> Environment: Apache Maven 3.3.9, Java version: 1.8.0_144
>Reporter: Till Rohrmann
>Assignee: Hai Zhou
>
> {noformat}
> [INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
> flink-runtime_2.11 ---
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.apache.flink:flink-metrics-core:jar:1.4-SNAPSHOT:compile
> [WARNING]io.netty:netty:jar:3.8.0.Final:compile
> [WARNING]com.google.code.findbugs:annotations:jar:2.0.1:test
> [WARNING]com.esotericsoftware.kryo:kryo:jar:2.24.0:compile
> [WARNING]com.typesafe:config:jar:1.2.1:compile
> [WARNING]org.apache.flink:flink-annotations:jar:1.4-SNAPSHOT:compile
> [WARNING]commons-io:commons-io:jar:2.4:compile
> [WARNING]org.hamcrest:hamcrest-core:jar:1.3:test
> [WARNING]org.powermock:powermock-api-support:jar:1.6.5:test
> [WARNING]javax.xml.bind:jaxb-api:jar:2.2.2:compile
> [WARNING]commons-collections:commons-collections:jar:3.2.2:compile
> [WARNING]com.fasterxml.jackson.core:jackson-annotations:jar:2.7.4:compile
> [WARNING]org.powermock:powermock-core:jar:1.6.5:test
> [WARNING]org.powermock:powermock-reflect:jar:1.6.5:test
> [WARNING] Unused declared dependencies found:
> [WARNING]com.data-artisans:flakka-slf4j_2.11:jar:2.3-custom:compile
> [WARNING]org.reflections:reflections:jar:0.9.10:test
> [WARNING]org.javassist:javassist:jar:3.18.2-GA:compile
> [WARNING]org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
> [WARNING]com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [WARNING]log4j:log4j:jar:1.2.17:test
> [WARNING]com.twitter:chill_2.11:jar:0.7.4:compile
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.7:test
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7574) flink-clients

2017-09-04 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7574:
---

 Summary: flink-clients
 Key: FLINK-7574
 URL: https://issues.apache.org/jira/browse/FLINK-7574
 Project: Flink
  Issue Type: Sub-task
  Components: Build System
Affects Versions: 1.3.2
 Environment: Apache Maven 3.3.9, Java version: 1.8.0_144
Reporter: Hai Zhou
Assignee: Hai Zhou


{noformat}
[INFO] --- maven-dependency-plugin:2.10:analyze (default-cli) @ 
flink-clients_2.11 ---
[WARNING] Used undeclared dependencies found:
[WARNING]    org.scala-lang:scala-library:jar:2.11.11:compile
[WARNING]    com.data-artisans:flakka-actor_2.11:jar:2.3-custom:compile
[WARNING] Unused declared dependencies found:
[WARNING]    org.hamcrest:hamcrest-all:jar:1.3:test
[WARNING]    org.apache.flink:force-shading:jar:1.4-SNAPSHOT:compile
[WARNING]    org.powermock:powermock-module-junit4:jar:1.6.5:test
[WARNING]    com.google.code.findbugs:jsr305:jar:1.3.9:compile
[WARNING]    log4j:log4j:jar:1.2.17:test
[WARNING]    org.powermock:powermock-api-mockito:jar:1.6.5:test
[WARNING]    org.slf4j:slf4j-log4j12:jar:1.7.7:test
{noformat}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-3827) Flink modules include unused dependencies

2017-09-04 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-3827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-3827:
---

Assignee: Hai Zhou

> Flink modules include unused dependencies
> -
>
> Key: FLINK-3827
> URL: https://issues.apache.org/jira/browse/FLINK-3827
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Assignee: Hai Zhou
>
> A quick look via {{mvn dependency:analyze}} revealed that many Flink modules 
> include dependencies which they don't really need. We should fix this for the 
> next {{1.1}} release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-3827) Flink modules include unused dependencies

2017-09-04 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152285#comment-16152285
 ] 

Hai Zhou commented on FLINK-3827:
-

Hi [~till.rohrmann],
I found via {{mvn dependency:analyze}} that many modules still have 
undeclared/unused dependencies.
Is this still an issue? If yes, I will fix it.


> Flink modules include unused dependencies
> -
>
> Key: FLINK-3827
> URL: https://issues.apache.org/jira/browse/FLINK-3827
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>
> A quick look via {{mvn dependency:analyze}} revealed that many Flink modules 
> include dependencies which they don't really need. We should fix this for the 
> next {{1.1}} release.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-3862) Restructure community website

2017-09-03 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16152077#comment-16152077
 ] 

Hai Zhou commented on FLINK-3862:
-

Hi [~fhue...@gmail.com],
the Flink IRC channel does not look very active; maybe we could consider using 
Slack instead of IRC?


> Restructure community website
> -
>
> Key: FLINK-3862
> URL: https://issues.apache.org/jira/browse/FLINK-3862
> Project: Flink
>  Issue Type: Task
>  Components: Project Website
>Affects Versions: 1.1.0
>Reporter: Till Rohrmann
>Priority: Minor
>
> The community website contains a large section of third party packages. It 
> might make sense to create a dedicated third party packages site to declutter 
> the community site. Furthermore, we should move the IRC communication channel 
> a bit further down in order to encourage people to rather use other 
> communication channels.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-6347) Migrate from Java serialization for MessageAcknowledgingSourceBase's state

2017-09-03 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-6347:
---

Assignee: Hai Zhou

> Migrate from Java serialization for MessageAcknowledgingSourceBase's state
> --
>
> Key: FLINK-6347
> URL: https://issues.apache.org/jira/browse/FLINK-6347
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Hai Zhou
>
> See umbrella JIRA FLINK-6343 for details. This subtask tracks the migration 
> for {{MessageAcknowledgingSourceBase}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-6344) Migrate from Java serialization for BucketingSink's state

2017-09-03 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-6344:
---

Assignee: Hai Zhou

> Migrate from Java serialization for BucketingSink's state
> -
>
> Key: FLINK-6344
> URL: https://issues.apache.org/jira/browse/FLINK-6344
> Project: Flink
>  Issue Type: Sub-task
>  Components: Streaming Connectors
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Hai Zhou
>
> See umbrella JIRA FLINK-6343 for details. This subtask tracks the migration 
> for `BucketingSink`.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-4831) Implement a slf4j metric reporter

2017-09-03 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151947#comment-16151947
 ] 

Hai Zhou commented on FLINK-4831:
-

Hi [~Zentol],
I checked your code, and it looks very good.
But I am not sure what else needs to be modified.
I also found that with *extends AbstractReporter implements Scheduled*, the 
implementation would be easier.
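
To illustrate the scheduled-report idea without reproducing Flink's actual AbstractReporter/Scheduled interfaces, here is a standalone sketch (all names hypothetical) of the report() loop such a reporter would run periodically, formatting registered metrics into one log line:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Supplier;

/**
 * Standalone sketch of a scheduled logging reporter: metrics are registered by
 * name, and report() builds the snapshot line a real reporter would hand to
 * LOG.info(...) each time the scheduler fires.
 */
class LoggingReporterSketch {

    private final Map<String, Supplier<Object>> gauges = new TreeMap<>();

    void register(String name, Supplier<Object> gauge) {
        gauges.put(name, gauge);
    }

    /** Formats one snapshot of all registered gauges. */
    String report() {
        StringBuilder sb = new StringBuilder("-- Gauges --");
        for (Map.Entry<String, Supplier<Object>> e : gauges.entrySet()) {
            sb.append('\n').append(e.getKey())
              .append(": value=").append(e.getValue().get());
        }
        return sb.toString();
    }
}
```

In Flink the scheduler side would come for free from implementing Scheduled, which invokes report() at the configured interval.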

> Implement a slf4j metric reporter
> -
>
> Key: FLINK-4831
> URL: https://issues.apache.org/jira/browse/FLINK-4831
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.1.2
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: easyfix, starter
>
> For debugging purpose it would be very useful to have a log4j metric 
> reporter. If you don't want to setup a metric backend you currently have to 
> rely on JMX, which a) works a bit differently than other reporters (for 
> example it doesn't extend AbstractReporter) and b) makes it a bit tricky to 
> analyze results as metrics are cleaned up once a job finishes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7284) Verify compile time and runtime version of Hadoop

2017-09-03 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16151928#comment-16151928
 ] 

Hai Zhou commented on FLINK-7284:
-

similar issue: https://issues.apache.org/jira/browse/FLINK-2581

> Verify compile time and runtime version of Hadoop
> -
>
> Key: FLINK-7284
> URL: https://issues.apache.org/jira/browse/FLINK-7284
> Project: Flink
>  Issue Type: Improvement
>  Components: Cluster Management
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>
> In order to detect a potential version conflict when running a Flink cluster, 
> built with Hadoop {{x}}, in an environment which provides Hadoop {{y}}, we 
> should automatically check if {{x == y}}. If {{x != y}}, we should terminate 
> with an appropriate error message. This behaviour should also be 
> disengageable if one wants to run Flink explicitly in a different Hadoop 
> environment.
> The check could be done at cluster start up using Hadoops {{VersionInfo}} and 
> the build time Hadoop version info. The latter has to be included in the 
> Flink binaries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7570) Fix broken link on faq page

2017-09-01 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16150915#comment-16150915
 ] 

Hai Zhou commented on FLINK-7570:
-

PR: https://github.com/apache/flink-web/pull/83

> Fix broken link on faq page
> ---
>
> Key: FLINK-7570
> URL: https://issues.apache.org/jira/browse/FLINK-7570
> Project: Flink
>  Issue Type: Bug
>  Components: Project Website
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7570) Fix broken link on faq page

2017-09-01 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7570:
---

 Summary: Fix broken link on faq page
 Key: FLINK-7570
 URL: https://issues.apache.org/jira/browse/FLINK-7570
 Project: Flink
  Issue Type: Bug
  Components: Project Website
Reporter: Hai Zhou
Assignee: Hai Zhou






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-5944) Flink should support reading Snappy Files

2017-09-01 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-5944:
---

Assignee: (was: Hai Zhou)

> Flink should support reading Snappy Files
> -
>
> Key: FLINK-5944
> URL: https://issues.apache.org/jira/browse/FLINK-5944
> Project: Flink
>  Issue Type: New Feature
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ilya Ganelin
>  Labels: features
>
> Snappy is an extremely performant compression format that's widely used 
> offering fast decompression/compression. 
> This can be easily implemented by creating a SnappyInflaterInputStreamFactory 
> and updating the initDefaultInflateInputStreamFactories in FileInputFormat.
> Flink already includes the Snappy dependency in the project. 
> There is a minor gotcha in this. If we wish to use this with Hadoop, then we 
> must provide two separate implementations since Hadoop uses a different 
> version of the snappy format than Snappy Java (which is the xerial/snappy 
> included in Flink). 
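
As a rough illustration of the factory-registration pattern the description refers to, here is a self-contained sketch using the stdlib GZIP streams as a stand-in for Snappy (xerial's SnappyInputStream has a similar wrap-the-stream constructor); the class and method names here are hypothetical, not Flink's actual FileInputFormat API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

/** Maps a file extension to a decompressing stream wrapper. */
interface InflaterStreamFactory {
    InputStream create(InputStream in) throws IOException;
}

class DecompressionRegistry {

    private static final Map<String, InflaterStreamFactory> FACTORIES = new HashMap<>();

    static {
        // A Snappy factory would register the same way, e.g.
        // register("snappy", SnappyInputStream::new) with xerial's snappy-java.
        register("gz", GZIPInputStream::new);
    }

    static void register(String extension, InflaterStreamFactory factory) {
        FACTORIES.put(extension, factory);
    }

    /** Wraps the raw stream if the file extension has a registered codec. */
    static InputStream open(String fileName, InputStream raw) throws IOException {
        int dot = fileName.lastIndexOf('.');
        String ext = dot < 0 ? "" : fileName.substring(dot + 1);
        InflaterStreamFactory f = FACTORIES.get(ext);
        return f == null ? raw : f.create(raw);
    }
}
```

The Hadoop-compatibility caveat in the issue would then amount to registering a second factory for the Hadoop flavor of the Snappy framing.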



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-5944) Flink should support reading Snappy Files

2017-09-01 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-5944:
---

Assignee: Hai Zhou

> Flink should support reading Snappy Files
> -
>
> Key: FLINK-5944
> URL: https://issues.apache.org/jira/browse/FLINK-5944
> Project: Flink
>  Issue Type: New Feature
>  Components: Batch Connectors and Input/Output Formats
>Reporter: Ilya Ganelin
>Assignee: Hai Zhou
>  Labels: features
>
> Snappy is an extremely performant compression format that's widely used 
> offering fast decompression/compression. 
> This can be easily implemented by creating a SnappyInflaterInputStreamFactory 
> and updating the initDefaultInflateInputStreamFactories in FileInputFormat.
> Flink already includes the Snappy dependency in the project. 
> There is a minor gotcha in this. If we wish to use this with Hadoop, then we 
> must provide two separate implementations since Hadoop uses a different 
> version of the snappy format than Snappy Java (which is the xerial/snappy 
> included in Flink). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7562) uploaded jars sort by upload-time on dashboard page

2017-08-30 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7562:
---

 Summary: uploaded jars sort by upload-time on dashboard page
 Key: FLINK-7562
 URL: https://issues.apache.org/jira/browse/FLINK-7562
 Project: Flink
  Issue Type: Wish
  Components: Webfrontend
Reporter: Hai Zhou
Priority: Minor


Currently, the uploaded jar packages are shown in no particular order on the 
Apache Flink Web Dashboard page.
I think we should sort them by upload time, which would look friendlier!

Regards,
Hai Zhou.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-7547) o.a.f.s.api.scala.async.AsyncFunction is not declared Serializable

2017-08-30 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-7547:
---

Assignee: Hai Zhou

> o.a.f.s.api.scala.async.AsyncFunction is not declared Serializable
> --
>
> Key: FLINK-7547
> URL: https://issues.apache.org/jira/browse/FLINK-7547
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.3.2
>Reporter: Elias Levy
>Assignee: Hai Zhou
>Priority: Minor
>
> {{org.apache.flink.streaming.api.scala.async.AsyncFunction}} is not declared 
> {{Serializable}}, whereas 
> {{org.apache.flink.streaming.api.functions.async.AsyncFunction}} is.  This 
> leads to the job not starting as the as async function can't be serialized 
> during initialization.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7560) Add Reset button for Plan Visualizer Page

2017-08-30 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7560:

Description: 
On the http://flink.apache.org/visualizer/ page,
I would like to add a 'Reset' button here:

{code:java}

 Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset

{code}

When you click the "Reset" button, the page will refresh and you can redraw.


!attachment-name.jpg|thumbnail!

  was:
in http://flink.apache.org/visualizer/  page.
I wish add a 'Reset' button at :

{panel:title=My title}
 Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset
{panel}

When you click the "Reset" button, the page will refresh and you can redraw.

!screenshot-1.jpg|thumbnail!



> Add Reset button for Plan Visualizer Page
> -
>
> Key: FLINK-7560
> URL: https://issues.apache.org/jira/browse/FLINK-7560
> Project: Flink
>  Issue Type: Wish
>  Components: Project Website
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> On the http://flink.apache.org/visualizer/ page,
> I would like to add a 'Reset' button here:
> {code:java}
>  Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset
> {code}
> When you click the "Reset" button, the page will refresh and you can redraw.
> !attachment-name.jpg|thumbnail!





[jira] [Updated] (FLINK-7560) Add Reset button for Plan Visualizer Page

2017-08-30 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7560:

Attachment: screenshot-1.png

> Add Reset button for Plan Visualizer Page
> -
>
> Key: FLINK-7560
> URL: https://issues.apache.org/jira/browse/FLINK-7560
> Project: Flink
>  Issue Type: Wish
>  Components: Project Website
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> in http://flink.apache.org/visualizer/  page.
> I wish add a 'Reset' button at :
> {panel:title=My title}
>  Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset
> {panel}
> When you click the "Reset" button, the page will refresh and you can redraw.
> !attachment-name.jpg|thumbnail!





[jira] [Updated] (FLINK-7560) Add Reset button for Plan Visualizer Page

2017-08-30 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7560:

Description: 
in http://flink.apache.org/visualizer/  page.
I wish add a 'Reset' button at :

{panel:title=My title}
 Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset
{panel}

When you click the "Reset" button, the page will refresh and you can redraw.

!screenshot-1.jpg|thumbnail!


  was:
in http://flink.apache.org/visualizer/  page.
I wish add a 'Reset' button at :

{panel:title=My title}
 Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset
{panel}

When you click the "Reset" button, the page will refresh and you can redraw.

!attachment-name.jpg|thumbnail!



> Add Reset button for Plan Visualizer Page
> -
>
> Key: FLINK-7560
> URL: https://issues.apache.org/jira/browse/FLINK-7560
> Project: Flink
>  Issue Type: Wish
>  Components: Project Website
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> in http://flink.apache.org/visualizer/  page.
> I wish add a 'Reset' button at :
> {panel:title=My title}
>  Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset
> {panel}
> When you click the "Reset" button, the page will refresh and you can redraw.
> !screenshot-1.jpg|thumbnail!





[jira] [Updated] (FLINK-7560) Add Reset button for Plan Visualizer Page

2017-08-30 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7560:

Description: 
in http://flink.apache.org/visualizer/  page.
I wish add a 'Reset' button at :

{panel:title=My title}
 Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset
{panel}

When you click the "Reset" button, the page will refresh and you can redraw.

!attachment-name.jpg|thumbnail!


  was:

in http://flink.apache.org/visualizer/  page.
I wish add a 'Reset' button at :

{panel:title=My title}
 Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset
{panel}

When you click the "Reset" button, the page will refresh and you can redraw.




> Add Reset button for Plan Visualizer Page
> -
>
> Key: FLINK-7560
> URL: https://issues.apache.org/jira/browse/FLINK-7560
> Project: Flink
>  Issue Type: Wish
>  Components: Project Website
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Minor
>
> in http://flink.apache.org/visualizer/  page.
> I wish add a 'Reset' button at :
> {panel:title=My title}
>  Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset
> {panel}
> When you click the "Reset" button, the page will refresh and you can redraw.
> !attachment-name.jpg|thumbnail!





[jira] [Created] (FLINK-7560) Add Reset button for Plan Visualizer Page

2017-08-30 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7560:
---

 Summary: Add Reset button for Plan Visualizer Page
 Key: FLINK-7560
 URL: https://issues.apache.org/jira/browse/FLINK-7560
 Project: Flink
  Issue Type: Wish
  Components: Project Website
Reporter: Hai Zhou
Assignee: Hai Zhou
Priority: Minor



On the http://flink.apache.org/visualizer/ page,
I would like to add a 'Reset' button here:

{panel:title=My title}
 Flink Plan Visualizer  | Zoom In | Zoom Out  | Reset
{panel}

When you click the "Reset" button, the page will refresh and you can redraw.







[jira] [Created] (FLINK-7559) Typo in flink-quickstart pom.xml

2017-08-30 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7559:
---

 Summary: Typo in flink-quickstart pom.xml
 Key: FLINK-7559
 URL: https://issues.apache.org/jira/browse/FLINK-7559
 Project: Flink
  Issue Type: Improvement
  Components: Quickstarts
Reporter: Hai Zhou
Assignee: Hai Zhou
Priority: Minor


pom.xml
{code:java}
standard *loggin* framework  ->  standard *logging* framework
{code}

I found this spelling error when I used the Flink Maven quickstart to 
generate a new Flink job project.
:D





[jira] [Created] (FLINK-7555) Use flink-shaded-guava-18 to replace guava dependencies

2017-08-29 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7555:
---

 Summary:  Use flink-shaded-guava-18 to replace guava dependencies
 Key: FLINK-7555
 URL: https://issues.apache.org/jira/browse/FLINK-7555
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.4.0
 Environment: After [#issue 
FLINK-6982|https://issues.apache.org/jira/browse/FLINK-6982],
I still find 40 occurrences of 'import com.google.common' in the project.

Should we replace every 'import com.google.common.*' with 'import 
org.apache.flink.shaded.guava18.com.google.common.*'?

If so, I will open a PR to fix it.
Reporter: Hai Zhou
 Fix For: 1.4.0


!attachment-name.jpg|thumbnail!





[jira] [Updated] (FLINK-7555) Use flink-shaded-guava-18 to replace guava dependencies

2017-08-29 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7555:

Description: 
After [#issue FLINK-6982|https://issues.apache.org/jira/browse/FLINK-6982],
I still find 40 occurrences of 'import com.google.common' in the project.

Should we replace every 'import com.google.common.*' with 'import 
org.apache.flink.shaded.guava18.com.google.common.*'?

If so, I will open a PR to fix it.

  was:!attachment-name.jpg|thumbnail!


>  Use flink-shaded-guava-18 to replace guava dependencies
> 
>
> Key: FLINK-7555
> URL: https://issues.apache.org/jira/browse/FLINK-7555
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
> Fix For: 1.4.0
>
>
> After [#issue FLINK-6982|https://issues.apache.org/jira/browse/FLINK-6982],
> I still find 40 occurrences of 'import com.google.common' in the project.
> Should we replace every 'import com.google.common.*' with 'import 
> org.apache.flink.shaded.guava18.com.google.common.*'?
> If so, I will open a PR to fix it.
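The sweep itself is mechanical and can be sketched as a small rewrite over source text. A minimal sketch (the class and method names are hypothetical; the shaded package prefix is the one proposed in this ticket):

```java
import java.util.Arrays;

public class GuavaImportRewriter {

    // Prefixes as described in the ticket; the shaded package comes from flink-shaded-guava-18.
    static final String PLAIN  = "import com.google.common.";
    static final String SHADED = "import org.apache.flink.shaded.guava18.com.google.common.";

    /** Rewrite every plain guava import in a source file's text to the shaded package. */
    static String rewrite(String source) {
        return source.replace(PLAIN, SHADED);
    }

    /** Count remaining plain guava imports, e.g. to verify nothing is left after the sweep. */
    static long countPlainImports(String source) {
        return Arrays.stream(source.split("\n"))
                     .filter(line -> line.trim().startsWith(PLAIN))
                     .count();
    }

    public static void main(String[] args) {
        String src = "import com.google.common.collect.ImmutableList;\n"
                   + "import java.util.List;\n";
        // Only the guava import line is rewritten; other imports are untouched.
        System.out.println(rewrite(src));
    }
}
```

In practice the same rewrite would be applied file by file over the source tree (e.g. via {{Files.walk}}), with the count used as a final check that no plain guava imports remain.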





[jira] [Updated] (FLINK-7555) Use flink-shaded-guava-18 to replace guava dependencies

2017-08-29 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7555:

Environment: (was: After [#issue 
FLINK-6982|https://issues.apache.org/jira/browse/FLINK-6982],
I still find 40 occurrences of 'import com.google.common' in project .

we should replace all 'import com.google.common.*' to  'import 
org.apache.flink.shaded.guava18.com.google.common.*' ?

if so, I will give a PR to fix it.)

>  Use flink-shaded-guava-18 to replace guava dependencies
> 
>
> Key: FLINK-7555
> URL: https://issues.apache.org/jira/browse/FLINK-7555
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Hai Zhou
> Fix For: 1.4.0
>
>
> !attachment-name.jpg|thumbnail!





[jira] [Closed] (FLINK-7297) Instable Kafka09ProducerITCase.testCustomPartitioning test case

2017-08-29 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou closed FLINK-7297.
---
Resolution: Fixed

> Instable Kafka09ProducerITCase.testCustomPartitioning test case
> ---
>
> Key: FLINK-7297
> URL: https://issues.apache.org/jira/browse/FLINK-7297
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector, Tests
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: test-stability
>
> There seems to be a test instability of 
> {{Kafka09ProducerITCase>KafkaProducerTestBase.testCustomPartitioning}} on 
> Travis.
> https://travis-ci.org/apache/flink/jobs/258538636





[jira] [Closed] (FLINK-7376) Cleanup options class and test classes in flink-clients

2017-08-29 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou closed FLINK-7376.
---
Resolution: Won't Fix

> Cleanup options class and test classes in flink-clients 
> 
>
> Key: FLINK-7376
> URL: https://issues.apache.org/jira/browse/FLINK-7376
> Project: Flink
>  Issue Type: Improvement
>  Components: Client
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Critical
>  Labels: cleanup, test
>






[jira] [Assigned] (FLINK-7376) Cleanup options class and test classes in flink-clients

2017-08-29 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-7376:
---

Assignee: Hai Zhou

> Cleanup options class and test classes in flink-clients 
> 
>
> Key: FLINK-7376
> URL: https://issues.apache.org/jira/browse/FLINK-7376
> Project: Flink
>  Issue Type: Improvement
>  Components: Client
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Critical
>  Labels: cleanup, test
>






[jira] [Commented] (FLINK-7547) o.a.f.s.api.scala.async.AsyncFunction is not declared Serializable

2017-08-29 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145186#comment-16145186
 ] 

Hai Zhou commented on FLINK-7547:
-

Hi [~elevy],
I will open a PR to fix it.

> o.a.f.s.api.scala.async.AsyncFunction is not declared Serializable
> --
>
> Key: FLINK-7547
> URL: https://issues.apache.org/jira/browse/FLINK-7547
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.3.2
>Reporter: Elias Levy
>Priority: Minor
>
> {{org.apache.flink.streaming.api.scala.async.AsyncFunction}} is not declared 
> {{Serializable}}, whereas 
> {{org.apache.flink.streaming.api.functions.async.AsyncFunction}} is.  This 
> leads to the job not starting, as the async function can't be serialized 
> during initialization.





[jira] [Assigned] (FLINK-7438) Some classes are eclipsed by classes in package scala

2017-08-29 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-7438:
---

Assignee: Hai Zhou

> Some classes are eclipsed by classes in package scala
> -
>
> Key: FLINK-7438
> URL: https://issues.apache.org/jira/browse/FLINK-7438
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, DataStream API
>Reporter: Ted Yu
>Assignee: Hai Zhou
>Priority: Minor
>
> Noticed the following during compilation:
> {code}
> [WARNING] 
> /flink/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/WindowedStream.scala:33:
>  warning: imported `OutputTag' is permanently  hidden by definition of 
> object OutputTag in package scala
> [WARNING] import org.apache.flink.util.{Collector, OutputTag}
> [WARNING]  ^
> [WARNING] 
> /flink/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/WindowedStream.scala:33:
>  warning: imported `OutputTag' is permanently  hidden by definition of 
> class OutputTag in package scala
> [WARNING] import org.apache.flink.util.{Collector, OutputTag}
> {code}
> We should avoid the warning e.r.t. OutputTag.
> There may be other occurrences of similar warning.





[jira] [Assigned] (FLINK-6860) update Apache Beam in page Ecosystem

2017-08-28 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-6860:
---

Assignee: Hai Zhou

> update Apache Beam in page Ecosystem
> 
>
> Key: FLINK-6860
> URL: https://issues.apache.org/jira/browse/FLINK-6860
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Xu Mingmin
>Assignee: Hai Zhou
>Priority: Minor
>
> To remove the word {{incubating}} and update the link, --Apache Beam has 
> graduated as a top-level project.





[jira] [Updated] (FLINK-7537) Add InfluxDB Sink for Flink Streaming

2017-08-26 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7537:

Description: 
InfluxDBSink via implementation RichSinkFunction.

[BAHIR-134|https://issues.apache.org/jira/browse/BAHIR-134]

  was:
InfluxDBSink via implementation RichSinkFunction.



> Add InfluxDB Sink for Flink Streaming
> -
>
> Key: FLINK-7537
> URL: https://issues.apache.org/jira/browse/FLINK-7537
> Project: Flink
>  Issue Type: Wish
>  Components: Streaming Connectors
>Affects Versions: 1.3.0
>Reporter: Hai Zhou
>Assignee: Hai Zhou
> Fix For: 1.3.0
>
>
> InfluxDBSink via implementation RichSinkFunction.
> [BAHIR-134|https://issues.apache.org/jira/browse/BAHIR-134]





[jira] [Closed] (FLINK-7538) Add InfluxDB Sink for Flink Streaming

2017-08-26 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou closed FLINK-7538.
---
Resolution: Invalid

> Add InfluxDB Sink for Flink Streaming
> -
>
> Key: FLINK-7538
> URL: https://issues.apache.org/jira/browse/FLINK-7538
> Project: Flink
>  Issue Type: Wish
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>
> InfluxDBSink via implementation RichSinkFunction.





[jira] [Created] (FLINK-7538) Add InfluxDB Sink for Flink Streaming

2017-08-26 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7538:
---

 Summary: Add InfluxDB Sink for Flink Streaming
 Key: FLINK-7538
 URL: https://issues.apache.org/jira/browse/FLINK-7538
 Project: Flink
  Issue Type: Wish
Reporter: Hai Zhou
Assignee: Hai Zhou


InfluxDBSink via implementation RichSinkFunction.






[jira] [Created] (FLINK-7537) Add InfluxDB Sink for Flink Streaming

2017-08-26 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7537:
---

 Summary: Add InfluxDB Sink for Flink Streaming
 Key: FLINK-7537
 URL: https://issues.apache.org/jira/browse/FLINK-7537
 Project: Flink
  Issue Type: Wish
  Components: Streaming Connectors
Affects Versions: 1.3.0
Reporter: Hai Zhou
Assignee: Hai Zhou
 Fix For: 1.3.0


InfluxDBSink via implementation RichSinkFunction.






[jira] [Created] (FLINK-7536) Add MongoDB Source/Sink for Flink Streaming

2017-08-26 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7536:
---

 Summary: Add MongoDB Source/Sink for Flink Streaming
 Key: FLINK-7536
 URL: https://issues.apache.org/jira/browse/FLINK-7536
 Project: Flink
  Issue Type: Wish
  Components: Streaming Connectors
Reporter: Hai Zhou
Assignee: Hai Zhou


MongoSource / MongoSink via implementation RichSourceFunction / 
RichSinkFunction.

[BAHIR-133|https://issues.apache.org/jira/browse/BAHIR-133]





[jira] [Updated] (FLINK-7510) Move some connectors to Apache Bahir

2017-08-25 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7510:

Description: 
Hi Flink community:
Flink is a great stream processing framework that provides a number of 
connectors for multiple data sources and sinks.
However, I suggest moving the less popular connectors to Bahir and keeping the 
popular ones in the main Flink codebase.
E.g.:
flink-connectors/flink-connector-twitter
flink-contrib/flink-connector-wikiedits


I am willing to do the work; if you think this is acceptable, I will create new 
sub-task issues and submit a PR soon.


Regards,
Hai Zhou

the corresponding bahir issue: [https://issues.apache.org/jira/browse/BAHIR-131]

  was:
* flink-connectors *flink-connectors/flink-connector-twitter* & 
*flink-contrib/flink-connector-wikiedits*, They do not seem to be important for 
flink.

Maybe we need to move them to bahir-flink?

Correspond bahir-flink issue 
[#BAHIR-127](https://issues.apache.org/jira/browse/BAHIR-127)

Summary: Move some connectors to Apache Bahir  (was: Move 
"flink-connector-twitter & flink-connector-wikiedits" to bahir-flink)

> Move some connectors to Apache Bahir
> 
>
> Key: FLINK-7510
> URL: https://issues.apache.org/jira/browse/FLINK-7510
> Project: Flink
>  Issue Type: Wish
>  Components: Streaming Connectors
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>
> Hi Flink community:
> Flink is a great stream processing framework that provides a number of 
> connectors for multiple data sources and sinks.
> However, I suggest moving the less popular connectors to Bahir and keeping 
> the popular ones in the main Flink codebase.
> E.g.:
> flink-connectors/flink-connector-twitter
> flink-contrib/flink-connector-wikiedits
> 
> I am willing to do the work; if you think this is acceptable, I will create 
> new sub-task issues and submit a PR soon.
> Regards,
> Hai Zhou
> the corresponding bahir issue: 
> [https://issues.apache.org/jira/browse/BAHIR-131]





[jira] [Updated] (FLINK-7510) Move "flink-connector-twitter & flink-connector-wikiedits" to bahir-flink

2017-08-25 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7510:

Description: 
* flink-connectors *flink-connectors/flink-connector-twitter* & 
*flink-contrib/flink-connector-wikiedits*, They do not seem to be important for 
flink.

Maybe we need to move them to bahir-flink?

Correspond bahir-flink issue 
[#BAHIR-127](https://issues.apache.org/jira/browse/BAHIR-127)

  was:
* flink-connectors *flink-connectors/flink-connector-twitter* & * 
flink-contrib/flink-connector-wikiedits*, They do not seem to be important for 
flink.

Maybe we need to move them to bahir-flink?

Correspond bahir-flink issue 
[#BAHIR-127](https://issues.apache.org/jira/browse/BAHIR-127)


> Move "flink-connector-twitter & flink-connector-wikiedits" to bahir-flink
> -
>
> Key: FLINK-7510
> URL: https://issues.apache.org/jira/browse/FLINK-7510
> Project: Flink
>  Issue Type: Wish
>  Components: Streaming Connectors
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>
> * flink-connectors *flink-connectors/flink-connector-twitter* & 
> *flink-contrib/flink-connector-wikiedits*, They do not seem to be important 
> for flink.
> Maybe we need to move them to bahir-flink?
> Correspond bahir-flink issue 
> [#BAHIR-127](https://issues.apache.org/jira/browse/BAHIR-127)





[jira] [Created] (FLINK-7510) Move "flink-connector-twitter & flink-connector-wikiedits" to bahir-flink

2017-08-25 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7510:
---

 Summary: Move "flink-connector-twitter & 
flink-connector-wikiedits" to bahir-flink
 Key: FLINK-7510
 URL: https://issues.apache.org/jira/browse/FLINK-7510
 Project: Flink
  Issue Type: Wish
  Components: Streaming Connectors
Reporter: Hai Zhou
Assignee: Hai Zhou


The connectors *flink-connectors/flink-connector-twitter* and 
*flink-contrib/flink-connector-wikiedits* do not seem to be important for 
Flink.

Maybe we need to move them to bahir-flink?

Corresponding bahir-flink issue: 
[#BAHIR-127](https://issues.apache.org/jira/browse/BAHIR-127)





[jira] [Updated] (FLINK-7510) Move "flink-connector-twitter & flink-connector-wikiedits" to bahir-flink

2017-08-25 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou updated FLINK-7510:

Description: 
* flink-connectors *flink-connectors/flink-connector-twitter* & * 
flink-contrib/flink-connector-wikiedits*, They do not seem to be important for 
flink.

Maybe we need to move them to bahir-flink?

Correspond bahir-flink issue 
[#BAHIR-127](https://issues.apache.org/jira/browse/BAHIR-127)

  was:
flink-connectors *flink-connectors/flink-connector-twitter* & * 
flink-contrib/flink-connector-wikiedits*, They do not seem to be important for 
flink.

Maybe we need to move them to bahir-flink?

Correspond bahir-flink issue 
[#BAHIR-127](https://issues.apache.org/jira/browse/BAHIR-127)


> Move "flink-connector-twitter & flink-connector-wikiedits" to bahir-flink
> -
>
> Key: FLINK-7510
> URL: https://issues.apache.org/jira/browse/FLINK-7510
> Project: Flink
>  Issue Type: Wish
>  Components: Streaming Connectors
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>
> * flink-connectors *flink-connectors/flink-connector-twitter* & * 
> flink-contrib/flink-connector-wikiedits*, They do not seem to be important 
> for flink.
> Maybe we need to move them to bahir-flink?
> Correspond bahir-flink issue 
> [#BAHIR-127](https://issues.apache.org/jira/browse/BAHIR-127)





[jira] [Created] (FLINK-7494) No license headers in ".travis.yml" file

2017-08-23 Thread Hai Zhou (JIRA)
Hai Zhou created FLINK-7494:
---

 Summary: No license headers in ".travis.yml" file
 Key: FLINK-7494
 URL: https://issues.apache.org/jira/browse/FLINK-7494
 Project: Flink
  Issue Type: Wish
  Components: Travis
Reporter: Hai Zhou
Assignee: Hai Zhou


I will fix the ".travis.yml" file.





[jira] [Commented] (FLINK-4422) Convert all time interval measurements to System.nanoTime()

2017-08-21 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16136194#comment-16136194
 ] 

Hai Zhou commented on FLINK-4422:
-

Hi [~StephanEwen], is this still an issue?
If yes, I would like to work on it:
use the methods provided by 
[https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/util/clock/SystemClock.java]
 to replace the relative/interval time-measurement code.

> Convert all time interval measurements to System.nanoTime()
> ---
>
> Key: FLINK-4422
> URL: https://issues.apache.org/jira/browse/FLINK-4422
> Project: Flink
>  Issue Type: Sub-task
>  Components: Core
>Reporter: Stephan Ewen
>Assignee: Jin Mingjian
>Priority: Minor
>
> In contrast to {{System.currentTimeMillis()}}, {{System.nanoTime()}} is 
> monotonous. To measure delays and time intervals, {{System.nanoTime()}} is 
> hence reliable, while {{System.currentTimeMillis()}} is not.
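The suggested pattern can be sketched in a few lines (the {{IntervalTimer}} name is hypothetical, for illustration; it is not Flink's {{SystemClock}} API):

```java
public class IntervalTimer {

    // Capture the monotonic clock at construction time.
    private final long startNanos = System.nanoTime();

    /** Elapsed time since construction, based on the monotonic nanoTime() clock. */
    public long elapsedMillis() {
        return (System.nanoTime() - startNanos) / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        IntervalTimer timer = new IntervalTimer();
        Thread.sleep(50);
        // A currentTimeMillis()-based difference could even be negative after a
        // wall-clock adjustment (e.g. an NTP step); a nanoTime() difference cannot.
        System.out.println("elapsed ~" + timer.elapsedMillis() + " ms");
    }
}
```

Note that {{System.nanoTime()}} values are only meaningful as differences within the same JVM; they should never be compared to wall-clock timestamps.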





[jira] [Commented] (FLINK-7438) Some classes are eclipsed by classes in package scala

2017-08-21 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135084#comment-16135084
 ] 

Hai Zhou commented on FLINK-7438:
-

Hi [~aljoscha]
I would like to work on this issue.:D

> Some classes are eclipsed by classes in package scala
> -
>
> Key: FLINK-7438
> URL: https://issues.apache.org/jira/browse/FLINK-7438
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, DataStream API
>Reporter: Ted Yu
>Priority: Minor
>
> Noticed the following during compilation:
> {code}
> [WARNING] 
> /flink/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/WindowedStream.scala:33:
>  warning: imported `OutputTag' is permanently  hidden by definition of 
> object OutputTag in package scala
> [WARNING] import org.apache.flink.util.{Collector, OutputTag}
> [WARNING]  ^
> [WARNING] 
> /flink/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/WindowedStream.scala:33:
>  warning: imported `OutputTag' is permanently  hidden by definition of 
> class OutputTag in package scala
> [WARNING] import org.apache.flink.util.{Collector, OutputTag}
> {code}
> We should avoid the warning e.r.t. OutputTag.
> There may be other occurrences of similar warning.





[jira] [Comment Edited] (FLINK-7438) Some classes are eclipsed by classes in package scala

2017-08-21 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135076#comment-16135076
 ] 

Hai Zhou edited comment on FLINK-7438 at 8/21/17 12:01 PM:
---

I think the cause of this problem is 
[https://issues.scala-lang.org/browse/SI-8808].
To illustrate:

{panel:title=org.apache.flink.streaming.api.scala.WindowedStream.scala}
package org.apache.flink.streaming.api.scala

import org.apache.flink.util.OutputTag  // permanently hidden warning here

class WindowedStream {

  @PublicEvolving
  def sideOutputLateData(outputTag: OutputTag[T]): WindowedStream[T, K, W] = {  

   //  this OutputTag class  is 
org.apache.flink.streaming.api.scala.OutputTag.scala
javaStream.sideOutputLateData(outputTag)   
this
  }
..
}
{panel}

{panel:title=org.apache.flink.streaming.api.scala.OutputTag.scala}
package org.apache.flink.streaming.api.scala

import org.apache.flink.util.{OutputTag => JOutputTag}

class OutputTag[T: TypeInformation](id: String) extends JOutputTag[T](id, 
implicitly[TypeInformation[T]])

object OutputTag {
  def apply[T: TypeInformation](id: String): OutputTag[T] = new OutputTag(id)
}
{panel}


The OutputTag actually used in the WindowedStream class is 
"org.apache.flink.streaming.api.scala.OutputTag".
So if we remove "import org.apache.flink.util.OutputTag" from the WindowedStream 
class, the warning will disappear.


was (Author: yew1eb):
I think the reason for this problem is 
[https://issues.scala-lang.org/browse/SI-8808].
this problem:

{panel:title=org.apache.flink.streaming.api.scala.WindowedStream.scala}
package org.apache.flink.streaming.api.scala

import org.apache.flink.util.OutputTag  // permanently hidden warning here

class WindowedStream {

  @PublicEvolving
  def sideOutputLateData(outputTag: OutputTag[T]): WindowedStream[T, K, W] = {  

   //  this OutputTag class  is 
org.apache.flink.streaming.api.scala.OutputTag.scala
javaStream.sideOutputLateData(outputTag)   
this
  }
..
}
{panel}

{panel:title=org.apache.flink.streaming.api.scala.OutputTag.scala}
package org.apache.flink.streaming.api.scala

import org.apache.flink.util.{OutputTag => JOutputTag}

class OutputTag[T: TypeInformation](id: String) extends JOutputTag[T](id, 
implicitly[TypeInformation[T]])

object OutputTag {
  def apply[T: TypeInformation](id: String): OutputTag[T] = new OutputTag(id)
}
{panel}

> Some classes are eclipsed by classes in package scala
> -
>
> Key: FLINK-7438
> URL: https://issues.apache.org/jira/browse/FLINK-7438
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, DataStream API
>Reporter: Ted Yu
>Priority: Minor
>
> Noticed the following during compilation:
> {code}
> [WARNING] 
> /flink/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/WindowedStream.scala:33:
>  warning: imported `OutputTag' is permanently  hidden by definition of 
> object OutputTag in package scala
> [WARNING] import org.apache.flink.util.{Collector, OutputTag}
> [WARNING]  ^
> [WARNING] 
> /flink/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/WindowedStream.scala:33:
>  warning: imported `OutputTag' is permanently  hidden by definition of 
> class OutputTag in package scala
> [WARNING] import org.apache.flink.util.{Collector, OutputTag}
> {code}
> We should avoid the warning e.r.t. OutputTag.
> There may be other occurrences of similar warning.





[jira] [Commented] (FLINK-7438) Some classes are eclipsed by classes in package scala

2017-08-21 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135076#comment-16135076
 ] 

Hai Zhou commented on FLINK-7438:
-

I think the cause of this problem is 
[https://issues.scala-lang.org/browse/SI-8808].
To illustrate:

{panel:title=org.apache.flink.streaming.api.scala.WindowedStream.scala}
package org.apache.flink.streaming.api.scala

import org.apache.flink.util.OutputTag  // permanently hidden warning here

class WindowedStream {

  @PublicEvolving
  def sideOutputLateData(outputTag: OutputTag[T]): WindowedStream[T, K, W] = {  

   //  this OutputTag class  is 
org.apache.flink.streaming.api.scala.OutputTag.scala
javaStream.sideOutputLateData(outputTag)   
this
  }
..
}
{panel}

{panel:title=org.apache.flink.streaming.api.scala.OutputTag.scala}
package org.apache.flink.streaming.api.scala

import org.apache.flink.util.{OutputTag => JOutputTag}

class OutputTag[T: TypeInformation](id: String) extends JOutputTag[T](id, 
implicitly[TypeInformation[T]])

object OutputTag {
  def apply[T: TypeInformation](id: String): OutputTag[T] = new OutputTag(id)
}
{panel}

> Some classes are eclipsed by classes in package scala
> -
>
> Key: FLINK-7438
> URL: https://issues.apache.org/jira/browse/FLINK-7438
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, DataStream API
>Reporter: Ted Yu
>Priority: Minor
>
> Noticed the following during compilation:
> {code}
> [WARNING] 
> /flink/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/WindowedStream.scala:33:
>  warning: imported `OutputTag' is permanently  hidden by definition of 
> object OutputTag in package scala
> [WARNING] import org.apache.flink.util.{Collector, OutputTag}
> [WARNING]  ^
> [WARNING] 
> /flink/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/WindowedStream.scala:33:
>  warning: imported `OutputTag' is permanently  hidden by definition of 
> class OutputTag in package scala
> [WARNING] import org.apache.flink.util.{Collector, OutputTag}
> {code}
> We should avoid the warning w.r.t. OutputTag.
> There may be other occurrences of similar warnings.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7484) com.esotericsoftware.kryo.KryoException: java.lang.IndexOutOfBoundsException: Index: 7, Size: 5

2017-08-21 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16135046#comment-16135046
 ] 

Hai Zhou commented on FLINK-7484:
-

Hi [~shashank734]
Can you upload your code?


> com.esotericsoftware.kryo.KryoException: java.lang.IndexOutOfBoundsException: 
> Index: 7, Size: 5
> ---
>
> Key: FLINK-7484
> URL: https://issues.apache.org/jira/browse/FLINK-7484
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Scala API
>Affects Versions: 1.3.2
> Environment: Flink 1.3.2 , Yarn Cluster, FsStateBackend
>Reporter: Shashank Agarwal
>
> I am using many CEPs and a Global Window. I sometimes get the following error 
> and the application crashes. I have checked and logically there is no flaw in 
> the program. Here ItemPojo is a POJO class and we are using 
> java.util.List[ItemPojo]. We are using the Scala DataStream API; please find 
> the attached logs.
> {code}
> 2017-08-17 10:04:12,814 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - TriggerWindow(GlobalWindows(), 
> ListStateDescriptor{serializer=org.apache.flink.api.common.typeutils.base.ListSerializer@6d36aa3c},
>  co.thirdwatch.trigger.TransactionTrigger@5707c1cb, 
> WindowedStream.apply(WindowedStream.scala:582)) -> Flat Map -> Map -> Sink: 
> Saving CSV Features Sink (1/2) (06c0d4d231bc620ba9e7924b9b0da8d1) switched 
> from RUNNING to FAILED.
> com.esotericsoftware.kryo.KryoException: java.lang.IndexOutOfBoundsException: 
> Index: 7, Size: 5
> Serialization trace:
> category (co.thirdwatch.pojo.ItemPojo)
> underlying (scala.collection.convert.Wrappers$SeqWrapper)
>   at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125)
>   at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>   at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:761)
>   at com.twitter.chill.TraversableSerializer.read(Traversable.scala:43)
>   at com.twitter.chill.TraversableSerializer.read(Traversable.scala:21)
>   at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:679)
>   at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:106)
>   at 
> com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:528)
>   at com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:657)
>   at 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer.copy(KryoSerializer.java:190)
>   at 
> org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:101)
>   at 
> org.apache.flink.api.scala.typeutils.CaseClassSerializer.copy(CaseClassSerializer.scala:32)
>   at 
> org.apache.flink.runtime.state.ArrayListSerializer.copy(ArrayListSerializer.java:74)
>   at 
> org.apache.flink.runtime.state.ArrayListSerializer.copy(ArrayListSerializer.java:34)
>   at 
> org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.get(CopyOnWriteStateTable.java:279)
>   at 
> org.apache.flink.runtime.state.heap.CopyOnWriteStateTable.get(CopyOnWriteStateTable.java:296)
>   at 
> org.apache.flink.runtime.state.heap.HeapListState.add(HeapListState.java:77)
>   at 
> org.apache.flink.streaming.runtime.operators.windowing.WindowOperator.processElement(WindowOperator.java:442)
>   at 
> org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:206)
>   at 
> org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:263)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IndexOutOfBoundsException: Index: 7, Size: 5
>   at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>   at java.util.ArrayList.get(ArrayList.java:429)
>   at 
> com.esotericsoftware.kryo.util.MapReferenceResolver.getReadObject(MapReferenceResolver.java:42)
>   at com.esotericsoftware.kryo.Kryo.readReferenceOrNull(Kryo.java:805)
>   at com.esotericsoftware.kryo.Kryo.readObjectOrNull(Kryo.java:728)
>   at 
> com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:113)
>   ... 22 more
> 2017-08-17 10:04:12,816 INFO  org.apache.flink.runtime.taskmanager.Task   
>   - Freeing task resources for TriggerWindow(GlobalWindows(), 
> ListStateDescriptor{serializer=org.apache.flink.api.common.typeutils.base.ListSerializer@6d36aa3c},
>  co.thirdwatch.trigger.TransactionTrigger@5707c1cb, 
> WindowedStream.apply(WindowedStream.scala:582)) -> Flat Map -> Map -> Sink: 
> Saving CSV Features Sink (1/2) (06c0d4d231bc620ba9e7924b9b0da8d1).
> 

[jira] [Closed] (FLINK-7382) Broken links in `Apache Flink Documentation` page

2017-08-21 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou closed FLINK-7382.
---

> Broken links in `Apache Flink Documentation`  page
> --
>
> Key: FLINK-7382
> URL: https://issues.apache.org/jira/browse/FLINK-7382
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Minor
> Fix For: 1.4.0, 1.3.3
>
>
> Some links in the *External Resources* section are broken.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-7382) Broken links in `Apache Flink Documentation` page

2017-08-21 Thread Hai Zhou (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou reassigned FLINK-7382:
---

Assignee: Hai Zhou

> Broken links in `Apache Flink Documentation`  page
> --
>
> Key: FLINK-7382
> URL: https://issues.apache.org/jira/browse/FLINK-7382
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Hai Zhou
>Assignee: Hai Zhou
>Priority: Minor
> Fix For: 1.4.0, 1.3.3
>
>
> Some links in the *External Resources* section are broken.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7447) Hope add more committer information to "Community & Project Info" page.

2017-08-16 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128502#comment-16128502
 ] 

Hai Zhou commented on FLINK-7447:
-

[~Zentol] 
From the standpoint of a contributor, I still think that makes sense.
Maybe we can refer to other open source projects:
hadoop: http://hadoop.apache.org/who.html
tez: http://tez.apache.org/team-list.html
hbase: http://hbase.apache.org/team-list.html
spark: http://spark.apache.org/committers.html
...
:)


> Hope add more committer information to "Community & Project Info" page.
> ---
>
> Key: FLINK-7447
> URL: https://issues.apache.org/jira/browse/FLINK-7447
> Project: Flink
>  Issue Type: Wish
>  Components: Project Website
>Reporter: Hai Zhou
>
> I wish to add "organization" and "time zone" information to each committer's 
> introduction, and to use email instead of the Apache ID.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (FLINK-7447) Hope add more committer information to "Community & Project Info" page.

2017-08-16 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128409#comment-16128409
 ] 

Hai Zhou edited comment on FLINK-7447 at 8/16/17 6:47 AM:
--

@gregho It makes it easier for a contributor to contact a committer to seek 
help; this is just my personal idea.

Because contributors and committers are in different time zones, communication 
on issues or GitHub is often slow, and it can take a long time to get a reply.

But if a contributor knows the time zone where a committer is located, they can 
choose the right time to communicate with that committer.

See the http://flink.apache.org/community.html#people page.


was (Author: yew1eb):
@gregho It makes it easier for a contributor to contact a committer to seek 
help; this is just my personal idea.

Because contributors and committers are in different time zones, communication 
on issues or GitHub is often slow, and it can take a long time to get a reply.

But if a contributor knows the time zone where a committer is located, they can 
choose the right time to communicate with that committer.


> Hope add more committer information to "Community & Project Info" page.
> ---
>
> Key: FLINK-7447
> URL: https://issues.apache.org/jira/browse/FLINK-7447
> Project: Flink
>  Issue Type: Wish
>  Components: Project Website
>Reporter: Hai Zhou
>
> I wish to add "organization" and "time zone" information to each committer's 
> introduction, and to use email instead of the Apache ID.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7447) Hope add more committer information to "Community & Project Info" page.

2017-08-16 Thread Hai Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16128409#comment-16128409
 ] 

Hai Zhou commented on FLINK-7447:
-

@gregho It makes it easier for a contributor to contact a committer to seek 
help; this is just my personal idea.

Because contributors and committers are in different time zones, communication 
on issues or GitHub is often slow, and it can take a long time to get a reply.

But if a contributor knows the time zone where a committer is located, they can 
choose the right time to communicate with that committer.


> Hope add more committer information to "Community & Project Info" page.
> ---
>
> Key: FLINK-7447
> URL: https://issues.apache.org/jira/browse/FLINK-7447
> Project: Flink
>  Issue Type: Wish
>  Components: Project Website
>Reporter: Hai Zhou
>
> I wish to add "organization" and "time zone" information to each committer's 
> introduction, and to use email instead of the Apache ID.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

