[jira] [Created] (FLINK-8019) Flink streaming job stopped at consuming Kafka data

2017-11-07 Thread Weihua Jiang (JIRA)
Weihua Jiang created FLINK-8019:
---

 Summary: Flink streaming job stopped at consuming Kafka data
 Key: FLINK-8019
 URL: https://issues.apache.org/jira/browse/FLINK-8019
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.3.2
 Environment: We are using Kafka 0.8.2.1 and Flink 1.3.2 in YARN mode.
Reporter: Weihua Jiang


Our Flink streaming job consumes data from Kafka and has worked well for a long 
time. However, in recent days it has stopped consuming data from Kafka several 
times. 

The jstack output shows that it is stuck in LocalBufferPool.requestBuffer. The 
jstack file is attached. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4979: RMQSource support disabling queue declaration

2017-11-07 Thread sihuazhou
GitHub user sihuazhou opened a pull request:

https://github.com/apache/flink/pull/4979

RMQSource support disabling queue declaration

## What is the purpose of the change

This PR fixes 
[FLINK-8018](https://issues.apache.org/jira/browse/FLINK-8018): the RabbitMQ 
connector should support disabling the queueDeclare call, in case the user 
does not have sufficient permissions to declare the queue.
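
For illustration, a minimal sketch of how the new flag could be used (the 
`setQueueDeclaration` setter is the addition proposed by this PR, not an 
existing API; host and credentials are placeholders):

```java
import org.apache.flink.streaming.connectors.rabbitmq.RMQSource;
import org.apache.flink.streaming.connectors.rabbitmq.common.RMQConnectionConfig;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

// Inside the job's main() (sketch): build a connection config that skips
// queueDeclare, for users who lack declare permissions.
RMQConnectionConfig connectionConfig = new RMQConnectionConfig.Builder()
        .setHost("rabbitmq-host")       // placeholder host
        .setPort(5672)
        .setUserName("user")
        .setPassword("password")
        .setVirtualHost("/")
        .setQueueDeclaration(false)     // this PR's flag; defaults to true
        .build();

// Consume from a queue that must already exist on the broker:
RMQSource<String> source = new RMQSource<>(
        connectionConfig, "existing-queue", new SimpleStringSchema());
```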

## Brief change log

  - *Add a queueDeclaration flag to RMQConnectionConfig to enable or 
disable queue declaration; the default value is true*

## Verifying this change

This is a trivial change.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation
  - Does this pull request introduce a new feature? (no)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sihuazhou/flink RMQ_disable_queuedeclare

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4979.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4979


commit ae69a201e863eb21b5cf083d05430fe344ed8342
Author: summerleafs 
Date:   2017-11-08T06:00:55Z

introduce queueDeclaration for RMQConnectionConfig.

commit 4f4fb71aba2be312829f00ced6801e3439e67533
Author: summerleafs 
Date:   2017-11-08T06:32:19Z

fix build.

commit a41b495715acbfd4251f65aa2d023c90e1a7bb94
Author: summerleafs 
Date:   2017-11-08T06:39:50Z

set queueDeclaration default value to true.




---


[jira] [Created] (FLINK-8018) RMQ does not support disabling queueDeclare; when the user has no declaration permissions, it cannot connect

2017-11-07 Thread Sihua Zhou (JIRA)
Sihua Zhou created FLINK-8018:
-

 Summary: RMQ does not support disabling queueDeclare; when the 
user has no declaration permissions, it cannot connect
 Key: FLINK-8018
 URL: https://issues.apache.org/jira/browse/FLINK-8018
 Project: Flink
  Issue Type: Bug
  Components: RabbitMQ Connector
Affects Versions: 1.3.2
Reporter: Sihua Zhou
Assignee: Sihua Zhou


The RabbitMQ connector should support disabling the queueDeclare call, in case 
the user does not have sufficient permissions to declare the queue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7704) Port JobPlanHandler to new REST endpoint

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243348#comment-16243348
 ] 

ASF GitHub Bot commented on FLINK-7704:
---

GitHub user yew1eb opened a pull request:

https://github.com/apache/flink/pull/4978

[FLINK-7704][hotfix][flip6] Fix JobPlanInfoTest package path

This PR fixes the package path of `JobPlanInfoTest`, making it consistent with 
that of `JobPlanInfo` (`org.apache.flink.runtime.rest.messages`).


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yew1eb/flink FLINK-7704-hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4978.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4978


commit fecaeeb14156867021d678f3f7e64776839f545a
Author: yew1eb 
Date:   2017-11-08T03:49:05Z

[FLINK-7704][hotfix][flip6] Fix JobPlanInfoTest package path




> Port JobPlanHandler to new REST endpoint
> 
>
> Key: FLINK-7704
> URL: https://issues.apache.org/jira/browse/FLINK-7704
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, REST, Webfrontend
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Hai Zhou UTC+8
>  Labels: flip-6
> Fix For: 1.4.0
>
>
> Port existing {{JobPlanHandler}} to new REST endpoint.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-8019) Flink streaming job stopped at consuming Kafka data

2017-11-07 Thread Weihua Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weihua Jiang updated FLINK-8019:

Attachment: jstack-20171108-2.log

> Flink streaming job stopped at consuming Kafka data
> ---
>
> Key: FLINK-8019
> URL: https://issues.apache.org/jira/browse/FLINK-8019
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.3.2
> Environment: We are using Kafka 0.8.2.1 and Flink 1.3.2 in YARN mode.
>Reporter: Weihua Jiang
> Attachments: jstack-20171108-2.log
>
>
> Our Flink streaming job consumes data from Kafka and has worked well for a 
> long time. However, in recent days it has stopped consuming data from Kafka 
> several times. 
> The jstack output shows that it is stuck in LocalBufferPool.requestBuffer. The 
> jstack file is attached. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8018) RMQ does not support disabling queueDeclare; when the user has no declaration permissions, it cannot connect

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243453#comment-16243453
 ] 

ASF GitHub Bot commented on FLINK-8018:
---

GitHub user sihuazhou opened a pull request:

https://github.com/apache/flink/pull/4979

RMQSource support disabling queue declaration

## What is the purpose of the change

This PR fixes 
[FLINK-8018](https://issues.apache.org/jira/browse/FLINK-8018): the RabbitMQ 
connector should support disabling the queueDeclare call, in case the user 
does not have sufficient permissions to declare the queue.

## Brief change log

  - *Add a queueDeclaration flag to RMQConnectionConfig to enable or 
disable queue declaration; the default value is true*

## Verifying this change

This is a trivial change.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)

## Documentation
  - Does this pull request introduce a new feature? (no)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sihuazhou/flink RMQ_disable_queuedeclare

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4979.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4979


commit ae69a201e863eb21b5cf083d05430fe344ed8342
Author: summerleafs 
Date:   2017-11-08T06:00:55Z

introduce queueDeclaration for RMQConnectionConfig.

commit 4f4fb71aba2be312829f00ced6801e3439e67533
Author: summerleafs 
Date:   2017-11-08T06:32:19Z

fix build.

commit a41b495715acbfd4251f65aa2d023c90e1a7bb94
Author: summerleafs 
Date:   2017-11-08T06:39:50Z

set queueDeclaration default value to true.




> RMQ does not support disabling queueDeclare; when the user has no declaration 
> permissions, it cannot connect
> 
>
> Key: FLINK-8018
> URL: https://issues.apache.org/jira/browse/FLINK-8018
> Project: Flink
>  Issue Type: Bug
>  Components: RabbitMQ Connector
>Affects Versions: 1.3.2
>Reporter: Sihua Zhou
>Assignee: Sihua Zhou
>
> The RabbitMQ connector should support disabling the queueDeclare call, in 
> case the user does not have sufficient permissions to declare the queue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7704) Port JobPlanHandler to new REST endpoint

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243422#comment-16243422
 ] 

ASF GitHub Bot commented on FLINK-7704:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4978
  
LGTM  


> Port JobPlanHandler to new REST endpoint
> 
>
> Key: FLINK-7704
> URL: https://issues.apache.org/jira/browse/FLINK-7704
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination, REST, Webfrontend
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Hai Zhou UTC+8
>  Labels: flip-6
> Fix For: 1.4.0
>
>
> Port existing {{JobPlanHandler}} to new REST endpoint.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4967: [FLINK-8001] [kafka] Prevent PeriodicWatermarkEmitter fro...

2017-11-07 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4967
  
Merging ..


---


[GitHub] flink issue #4978: [FLINK-7704][hotfix][flip6] Fix JobPlanInfoTest package p...

2017-11-07 Thread tzulitai
Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4978
  
LGTM 👍 


---


[GitHub] flink issue #4959: [FLINK-7998] private scope is changed to public to resolv...

2017-11-07 Thread naeioi
Github user naeioi commented on the issue:

https://github.com/apache/flink/pull/4959
  
Also, can you replace `params.get("order")` with `params.get("orders")` in 
TPCHQuery3.scala, for consistency with the Java version and the input hint?


---


[jira] [Commented] (FLINK-8017) High availability cluster-id option incorrect in documentation

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243227#comment-16243227
 ] 

ASF GitHub Bot commented on FLINK-8017:
---

GitHub user dkelley-accretive opened a pull request:

https://github.com/apache/flink/pull/4976

[FLINK-8017] Fix High availability cluster-id key in documentation

*Thank you very much for contributing to Apache Flink - we are happy that 
you want to help us improve Flink. To help the community review your 
contribution in the best possible way, please go through the checklist below, 
which will get the contribution into a shape in which it can be best reviewed.*

*Please understand that we do not do this to make contributions to Flink a 
hassle. In order to uphold a high standard of quality for code contributions, 
while at the same time managing a large number of contributions, we need 
contributors to prepare the contributions well, and give reviewers enough 
contextual information for the review. Please also understand that 
contributions that do not follow this guide will take longer to review and thus 
typically be picked up with lower priority by the community.*

## Contribution Checklist

  - Make sure that the pull request corresponds to a [JIRA 
issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are 
made for typos in JavaDoc or documentation files, which need no JIRA issue.
  
  - Name the pull request in the form "[FLINK-XXXX] [component] Title of 
the pull request", where *FLINK-XXXX* should be replaced by the actual issue 
number. Skip *component* if you are unsure about which is the best component.
  Typo fixes that have no associated JIRA issue should be named following 
this pattern: `[hotfix] [docs] Fix typo in event time introduction` or 
`[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.

  - Fill out the template below to describe the changes contributed by the 
pull request. That will give reviewers the context they need to do the review.
  
  - Make sure that the change passes the automated tests, i.e., `mvn clean 
verify` passes. You can set up Travis CI to do that following [this 
guide](http://flink.apache.org/contribute-code.html#best-practices).

  - Each pull request should address only one issue, not mix up code from 
multiple issues.
  
  - Each commit in the pull request has a meaningful commit message 
(including the JIRA id)

  - Once all items of the checklist are addressed, remove the above text 
and this checklist, leaving only the filled out template below.


**(The sections below can be removed for hotfixes of typos)**

## What is the purpose of the change

*(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*


## Brief change log

*(for example:)*
  - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
  - *Deployments RPC transmits only the blob storage reference*
  - *TaskManagers retrieve the TaskInfo from the blob cache*


## Verifying this change

*(Please pick either of the following options)*

This change is a trivial rework / code cleanup without any test coverage.

*(or)*

This change is already covered by existing tests, such as *(please describe 
tests)*.

*(or)*

This change added tests and can be verified as follows:

*(example:)*
  - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
  - *Extended integration test for recovery after master (JobManager) 
failure*
  - *Added test that validates that TaskInfo is transferred only once 
across recoveries*
  - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
  - The serializers: (yes / no / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  - The S3 file system connector: (yes / no / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / no)
  - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)

[jira] [Commented] (FLINK-8001) Mark Kafka Consumer as idle if it doesn't have partitions

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243419#comment-16243419
 ] 

ASF GitHub Bot commented on FLINK-8001:
---

Github user tzulitai commented on the issue:

https://github.com/apache/flink/pull/4967
  
Merging ..


> Mark Kafka Consumer as idle if it doesn't have partitions
> -
>
> Key: FLINK-8001
> URL: https://issues.apache.org/jira/browse/FLINK-8001
> Project: Flink
>  Issue Type: Bug
>Reporter: Aljoscha Krettek
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: 1.4.0, 1.3.3
>
>
> In Flink 1.3.x the Kafka Consumer will emit a {{Long.MAX_VALUE}} watermark if 
> it has zero partitions assigned. If this happens and other parallel instances 
> of the Kafka Consumer are marked as idle (which currently never happens by 
> default but does happen in custom forks of our Kafka code) this means that 
> the watermark jumps to {{Long.MAX_VALUE}} downstream.
> In Flink 1.4.x this happens implicitly in the {{PeriodicWatermarkEmitter}} in 
> {{AbstractFetcher}} where the watermark is {{Long.MAX_VALUE}} if we don't 
> have any partitions. This should be changed to mark the source as idle 
> instead, if we don't have any partitions.
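
For illustration, a minimal sketch of the intended behavior (variable names are 
assumptions), using the `markAsTemporarilyIdle()` call that 
`SourceFunction.SourceContext` provides:

```java
// Inside the periodic watermark emitter (sketch):
if (subscribedPartitions.isEmpty()) {
    // Do not emit Long.MAX_VALUE; mark the source idle so downstream
    // operators exclude it from watermark calculation until data arrives.
    sourceContext.markAsTemporarilyIdle();
} else {
    long minAcrossPartitions = computeMinWatermarkAcrossPartitions(); // assumed helper
    sourceContext.emitWatermark(new Watermark(minAcrossPartitions));
}
```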



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7998) Make case classes in TPCHQuery3.java public to allow dynamic instantiation

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243230#comment-16243230
 ] 

ASF GitHub Bot commented on FLINK-7998:
---

Github user naeioi commented on the issue:

https://github.com/apache/flink/pull/4959
  
Also, can you replace `params.get("order")` with `params.get("orders")` in 
TPCHQuery3.scala, for consistency with the Java version and the input hint?


> Make case classes in TPCHQuery3.java public to allow dynamic instantiation
> --
>
> Key: FLINK-7998
> URL: https://issues.apache.org/jira/browse/FLINK-7998
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.3.2
>Reporter: Keren Zhu
>Priority: Minor
>  Labels: easyfix
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Case classes Lineitem, Customer and Order in example TPCHQuery3.java are set 
> to private. This causes an IllegalAccessException exception because of 
> reflection check in dynamic class instantiation. Making them public resolves 
> the problem (which is what implicitly suggested by _case class_ in 
> TPCHQuery3.scala)
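
For illustration, a sketch of the fix: Flink instantiates POJOs reflectively, 
which requires a public class and a public no-argument constructor (field names 
here are illustrative, not the exact example code):

```java
// Before: "private static class Lineitem { ... }" fails with
// IllegalAccessException during reflective instantiation. After:
public static class Lineitem {
    public long orderKey;        // illustrative fields
    public double extendedPrice;
    public double discount;

    public Lineitem() {}         // public no-arg constructor required
}
```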



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4883: [FLINK-4809] Operators should tolerate checkpoint failure...

2017-11-07 Thread PangZhi
Github user PangZhi commented on the issue:

https://github.com/apache/flink/pull/4883
  
@StefanRRichter do we have any update on this PR?


---


[jira] [Comment Edited] (FLINK-8008) PojoTypeInfo should sort fields based on boolean

2017-11-07 Thread Muhammad Imran Tariq (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243362#comment-16243362
 ] 

Muhammad Imran Tariq edited comment on FLINK-8008 at 11/8/17 4:40 AM:
--

I am calling the 
_public PojoTypeInfo(Class<T> typeClass, List<PojoField> fields)_ constructor 
of the class _PojoTypeInfo_.

Below is my code.
{code}
// create a PojoTypeInfo
PojoTypeInfo<Person> sourceType = new PojoTypeInfo<>(Person.class, fieldList);

// create a dataset
DataSet<Person> data = env.createInput(new PojoCsvInputFormat<>(new Path(textPath),
        CsvInputFormat.DEFAULT_LINE_DELIMITER, CsvInputFormat.DEFAULT_FIELD_DELIMITER, sourceType),
    sourceType);

// create a table from this dataset
Table newT = tableEnv.fromDataSet(data);

// sink the table
TableSink sink = new CsvTableSink("fielpath.csv", "|", 1, WriteMode.OVERWRITE);
newT.writeToSink(sink);
{code}

As I said earlier, there are two fields in my POJO class: the first is 
ID (Integer), the second is Age (Double). PojoTypeInfo sorts the fields in 
alphabetical order, but the CSV reader reads the file without reordering the 
columns. When I sink the table, the data type of the Age field (Double) gets 
applied to the ID field. So initially my data in the CSV was:
1,25
2,33
After the sink it becomes:
1.0,25
2.0,33

To avoid this, I want the PojoTypeInfo class not to sort the fields inside its 
constructor.







was (Author: imran.tariq):
I am calling the 
_public PojoTypeInfo(Class<T> typeClass, List<PojoField> fields)_ constructor 
of the class _PojoTypeInfo_.

Below is my code.
{code}
// create a PojoTypeInfo
PojoTypeInfo<Person> sourceType = new PojoTypeInfo<>(Person.class, fieldList);

// create a dataset
DataSet<Person> data = env.createInput(new PojoCsvInputFormat<>(new Path(textPath),
        CsvInputFormat.DEFAULT_LINE_DELIMITER, CsvInputFormat.DEFAULT_FIELD_DELIMITER, sourceType),
    sourceType);

// create a table from this dataset
Table newT = tableEnv.fromDataSet(data);

// sink the table
TableSink sink = new CsvTableSink("D:\\invesco\\POC\\Flink\\rules implementation\\data3.csv", "|", 1,
    WriteMode.OVERWRITE);
newT.writeToSink(sink);
{code}

As I said earlier, there are two fields in my POJO class: the first is 
ID (Integer), the second is Age (Double). PojoTypeInfo sorts the fields in 
alphabetical order, but the CSV reader reads the file without reordering the 
columns. When I sink the table, the data type of the Age field (Double) gets 
applied to the ID field. So initially my data in the CSV was:
1,25
2,33
After the sink it becomes:
1.0,25
2.0,33

To avoid this, I want the PojoTypeInfo class not to sort the fields inside its 
constructor.






> PojoTypeInfo should sort fields based on boolean
> ---
>
> Key: FLINK-8008
> URL: https://issues.apache.org/jira/browse/FLINK-8008
> Project: Flink
>  Issue Type: Improvement
>  Components: DataSet API
>Affects Versions: 1.3.2
>Reporter: Muhammad Imran Tariq
>Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Flink's PojoTypeInfo sorts the fields array that is passed into the 
> constructor. I want to create another constructor that takes a boolean 
> parameter controlling whether or not to sort the fields, as sketched below.
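
For illustration, a sketch of the proposed overload (hypothetical; not part of 
Flink's API):

```java
// The existing constructor always sorts fields alphabetically by name;
// the proposal adds a flag to keep the caller-supplied order:
public PojoTypeInfo(Class<T> typeClass, List<PojoField> fields, boolean sortFields) {
    super(typeClass);
    this.fields = fields.toArray(new PojoField[fields.size()]);
    if (sortFields) {
        // current behavior, now applied only on request
        Arrays.sort(this.fields, new Comparator<PojoField>() {
            @Override
            public int compare(PojoField o1, PojoField o2) {
                return o1.getField().getName().compareTo(o2.getField().getName());
            }
        });
    }
}
```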



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4978: [FLINK-7704][hotfix][flip6] Fix JobPlanInfoTest pa...

2017-11-07 Thread yew1eb
GitHub user yew1eb opened a pull request:

https://github.com/apache/flink/pull/4978

[FLINK-7704][hotfix][flip6] Fix JobPlanInfoTest package path

This PR fixes the package path of `JobPlanInfoTest`, making it consistent with 
that of `JobPlanInfo` (`org.apache.flink.runtime.rest.messages`).


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yew1eb/flink FLINK-7704-hotfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4978.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4978


commit fecaeeb14156867021d678f3f7e64776839f545a
Author: yew1eb 
Date:   2017-11-08T03:49:05Z

[FLINK-7704][hotfix][flip6] Fix JobPlanInfoTest package path




---


[GitHub] flink pull request #4977: [FLINK-7996] [table] Add support for (left.time = ...

2017-11-07 Thread xccui
GitHub user xccui opened a pull request:

https://github.com/apache/flink/pull/4977

[FLINK-7996] [table] Add support for (left.time = right.time) predicates to 
window join

## What is the purpose of the change

This PR adds support for `left.time = right.time` predicates in time-windowed 
joins in the Table API and SQL.
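
For illustration, a sketch of the kind of query this enables (table and column 
names are assumptions):

```java
// Previously a time-windowed join required two range predicates, e.g.
//   L.rowtime BETWEEN R.rowtime - INTERVAL '0' SECOND AND R.rowtime;
// with this change a single equi-time predicate is accepted:
Table result = tableEnv.sqlQuery(
    "SELECT L.id, L.v, R.v " +
    "FROM LeftT AS L, RightT AS R " +
    "WHERE L.id = R.id AND L.rowtime = R.rowtime");
```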

## Brief change log

  - Change `WindowJoinUtil.extractWindowBoundsFromPredicate()` to accept a 
single equi-time predicate.
  - Add tests for the new predicate.
  - Update the documentation.

## Verifying this change

This change can be verified by the added tests in `JoinTest` and 
`JoinITCase`.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (yes / no / don't know)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (docs)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xccui/flink FLINK-7996

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4977.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4977


commit 03d54c672f4a529c1cf0e3af3e74d60fb37febb7
Author: Xingcan Cui 
Date:   2017-11-07T17:17:57Z

[FLINK-7996][table]Add support for (left.time = right.time) predicates to 
window join




---


[jira] [Commented] (FLINK-7996) Add support for (left.time = right.time) predicates to window join.

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243292#comment-16243292
 ] 

ASF GitHub Bot commented on FLINK-7996:
---

GitHub user xccui opened a pull request:

https://github.com/apache/flink/pull/4977

[FLINK-7996] [table] Add support for (left.time = right.time) predicates to 
window join

## What is the purpose of the change

This PR adds support for `left.time = right.time` predicates in time-windowed 
joins in the Table API and SQL.

## Brief change log

  - Change `WindowJoinUtil.extractWindowBoundsFromPredicate()` to accept a 
single equi-time predicate.
  - Add tests for the new predicate.
  - Update the documentation.

## Verifying this change

This change can be verified by the added tests in `JoinTest` and 
`JoinITCase`.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (yes / no / don't know)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (docs)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xccui/flink FLINK-7996

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4977.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4977


commit 03d54c672f4a529c1cf0e3af3e74d60fb37febb7
Author: Xingcan Cui 
Date:   2017-11-07T17:17:57Z

[FLINK-7996][table]Add support for (left.time = right.time) predicates to 
window join




> Add support for (left.time = right.time) predicates to window join.
> ---
>
> Key: FLINK-7996
> URL: https://issues.apache.org/jira/browse/FLINK-7996
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Fabian Hueske
>Assignee: Xingcan Cui
>Priority: Critical
> Fix For: 1.4.0
>
>
> A common operation is to join the result of two window aggregations on the 
> same timestamp. 
> However, window joins do not support equality predicates on time attributes 
> such as {{left.time = right.time}} but require two range predicates such as 
> {{left.time >= right.time AND left.time <= right.time}}.
> This can be fixed in the translation code (the operator does not have to be 
> touched).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8017) High availability cluster-id option incorrect in documentation

2017-11-07 Thread Dan Kelley (JIRA)
Dan Kelley created FLINK-8017:
-

 Summary: High availability cluster-id option incorrect in 
documentation
 Key: FLINK-8017
 URL: https://issues.apache.org/jira/browse/FLINK-8017
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.3.2
Reporter: Dan Kelley
Priority: Minor


The property key in HighAvailabilityOptions.java is 
high-availability.cluster-id, but the documentation states that the key is 
high-availability.zookeeper.path.cluster-id.
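
For illustration, a sketch (an assumption about how such options are declared, 
not verbatim Flink source) of the key, with the old documented key kept only as 
a deprecated alias:

```java
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

public class HaOptionsSketch {
    public static final ConfigOption<String> HA_CLUSTER_ID = ConfigOptions
            .key("high-availability.cluster-id")    // the actual key
            .defaultValue("/default")
            .withDeprecatedKeys("high-availability.zookeeper.path.cluster-id");
}
```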



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4729: [FLINK-7076] [ResourceManager] implement YARN stopWorker ...

2017-11-07 Thread suez1224
Github user suez1224 commented on the issue:

https://github.com/apache/flink/pull/4729
  
@tillrohrmann Updated the PR that addresses your comments. Could you please 
take another look when you have time?


---


[jira] [Commented] (FLINK-7076) Implement container release to support dynamic scaling

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243264#comment-16243264
 ] 

ASF GitHub Bot commented on FLINK-7076:
---

Github user suez1224 commented on the issue:

https://github.com/apache/flink/pull/4729
  
@tillrohrmann Updated the PR that addresses your comments. Could you please 
take another look when you have time?


> Implement container release to support dynamic scaling
> --
>
> Key: FLINK-7076
> URL: https://issues.apache.org/jira/browse/FLINK-7076
> Project: Flink
>  Issue Type: Sub-task
>  Components: ResourceManager
>Reporter: Till Rohrmann
>Assignee: Shuyi Chen
>  Labels: flip-6
>
> In order to support dynamic scaling, the {{YarnResourceManager}} has to be 
> able to dynamically free containers. We have to implement the 
> {{YarnResourceManager#stopWorker}} method.
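
For illustration, a minimal sketch of what {{stopWorker}} has to do (field and 
class names are assumptions), using the YARN client APIs the resource manager 
already holds:

```java
// Release a previously started container back to YARN (sketch):
public boolean stopWorker(YarnWorkerNode workerNode) {
    Container container = workerNode.getContainer();
    try {
        // Stop the container and tell the RM it can be reclaimed,
        // which is what enables scaling the cluster down.
        nodeManagerClient.stopContainer(container.getId(), container.getNodeId());
        resourceManagerClient.releaseAssignedContainer(container.getId());
        return true;
    } catch (Exception e) {
        log.error("Could not stop container {}.", container.getId(), e);
        return false;
    }
}
```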



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4976: [FLINK-8017] Fix High availability cluster-id key ...

2017-11-07 Thread dkelley-accretive
GitHub user dkelley-accretive opened a pull request:

https://github.com/apache/flink/pull/4976

[FLINK-8017] Fix High availability cluster-id key in documentation

*Thank you very much for contributing to Apache Flink - we are happy that 
you want to help us improve Flink. To help the community review your 
contribution in the best possible way, please go through the checklist below, 
which will get the contribution into a shape in which it can be best reviewed.*

*Please understand that we do not do this to make contributions to Flink a 
hassle. In order to uphold a high standard of quality for code contributions, 
while at the same time managing a large number of contributions, we need 
contributors to prepare the contributions well, and give reviewers enough 
contextual information for the review. Please also understand that 
contributions that do not follow this guide will take longer to review and thus 
typically be picked up with lower priority by the community.*

## Contribution Checklist

  - Make sure that the pull request corresponds to a [JIRA 
issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are 
made for typos in JavaDoc or documentation files, which need no JIRA issue.
  
  - Name the pull request in the form "[FLINK-XXXX] [component] Title of 
the pull request", where *FLINK-XXXX* should be replaced by the actual issue 
number. Skip *component* if you are unsure about which is the best component.
  Typo fixes that have no associated JIRA issue should be named following 
this pattern: `[hotfix] [docs] Fix typo in event time introduction` or 
`[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.

  - Fill out the template below to describe the changes contributed by the 
pull request. That will give reviewers the context they need to do the review.
  
  - Make sure that the change passes the automated tests, i.e., `mvn clean 
verify` passes. You can set up Travis CI to do that following [this 
guide](http://flink.apache.org/contribute-code.html#best-practices).

  - Each pull request should address only one issue, not mix up code from 
multiple issues.
  
  - Each commit in the pull request has a meaningful commit message 
(including the JIRA id)

  - Once all items of the checklist are addressed, remove the above text 
and this checklist, leaving only the filled out template below.


**(The sections below can be removed for hotfixes of typos)**

## What is the purpose of the change

*(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*


## Brief change log

*(for example:)*
  - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
  - *Deployments RPC transmits only the blob storage reference*
  - *TaskManagers retrieve the TaskInfo from the blob cache*


## Verifying this change

*(Please pick either of the following options)*

This change is a trivial rework / code cleanup without any test coverage.

*(or)*

This change is already covered by existing tests, such as *(please describe 
tests)*.

*(or)*

This change added tests and can be verified as follows:

*(example:)*
  - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
  - *Extended integration test for recovery after master (JobManager) 
failure*
  - *Added test that validates that TaskInfo is transferred only once 
across recoveries*
  - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes / no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
  - The serializers: (yes / no / don't know)
  - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
  - The S3 file system connector: (yes / no / don't know)

## Documentation

  - Does this pull request introduce a new feature? (yes / no)
  - If yes, how is the feature documented? (not applicable / docs / 
JavaDocs / not documented)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dkelley-accretive/flink FLINK-8017

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4976.patch

[jira] [Commented] (FLINK-7076) Implement container release to support dynamic scaling

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16243101#comment-16243101
 ] 

ASF GitHub Bot commented on FLINK-7076:
---

Github user suez1224 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4729#discussion_r149540088
  
--- Diff: 
flink-yarn/src/test/java/org/apache/flink/yarn/YarnResourceManagerTest.java ---
@@ -0,0 +1,354 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.yarn;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.ResourceManagerOptions;
+import org.apache.flink.runtime.clusterframework.types.AllocationID;
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+import org.apache.flink.runtime.concurrent.ScheduledExecutorServiceAdapter;
+import org.apache.flink.runtime.heartbeat.HeartbeatServices;
+import org.apache.flink.runtime.heartbeat.TestingHeartbeatServices;
+import org.apache.flink.runtime.highavailability.HighAvailabilityServices;
+import 
org.apache.flink.runtime.highavailability.TestingHighAvailabilityServices;
+import 
org.apache.flink.runtime.leaderelection.TestingLeaderElectionService;
+import org.apache.flink.runtime.metrics.MetricRegistry;
+import org.apache.flink.runtime.registration.RegistrationResponse;
+import org.apache.flink.runtime.resourcemanager.JobLeaderIdService;
+import 
org.apache.flink.runtime.resourcemanager.ResourceManagerConfiguration;
+import org.apache.flink.runtime.resourcemanager.SlotRequest;
+import org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager;
+import 
org.apache.flink.runtime.resourcemanager.slotmanager.SlotManagerException;
+import org.apache.flink.runtime.rpc.FatalErrorHandler;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.rpc.TestingRpcService;
+import org.apache.flink.runtime.taskexecutor.SlotReport;
+import org.apache.flink.runtime.taskexecutor.TaskExecutorGateway;
+import org.apache.flink.runtime.testutils.DirectScheduledExecutorService;
+import org.apache.flink.runtime.util.TestingFatalErrorHandler;
+
+import org.apache.flink.util.TestLogger;
+
+import 
org.apache.flink.shaded.guava18.com.google.common.collect.ImmutableList;
+
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.Container;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
+import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.Priority;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.client.api.AMRMClient;
+import org.apache.hadoop.yarn.client.api.NMClient;
+import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.yarn.YarnConfigKeys.ENV_APP_ID;
+import static org.apache.flink.yarn.YarnConfigKeys.ENV_CLIENT_HOME_DIR;
+import static org.apache.flink.yarn.YarnConfigKeys.ENV_CLIENT_SHIP_FILES;
+import static org.apache.flink.yarn.YarnConfigKeys.ENV_FLINK_CLASSPATH;
+import static org.apache.flink.yarn.YarnConfigKeys.ENV_HADOOP_USER_NAME;
+import static org.apache.flink.yarn.YarnConfigKeys.FLINK_JAR_PATH;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.mock;

[GitHub] flink pull request #4729: [FLINK-7076] [ResourceManager] implement YARN stop...

2017-11-07 Thread suez1224
Github user suez1224 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4729#discussion_r149540088
  
--- Diff: 
flink-yarn/src/test/java/org/apache/flink/yarn/YarnResourceManagerTest.java ---
@@ -0,0 +1,354 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.yarn;
+
+import org.apache.flink.api.common.JobID;
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.ResourceManagerOptions;
+import org.apache.flink.runtime.clusterframework.types.AllocationID;
+import org.apache.flink.runtime.clusterframework.types.ResourceID;
+import org.apache.flink.runtime.clusterframework.types.ResourceProfile;
+import org.apache.flink.runtime.concurrent.ScheduledExecutor;
+import org.apache.flink.runtime.concurrent.ScheduledExecutorServiceAdapter;
+import org.apache.flink.runtime.heartbeat.HeartbeatServices;
+import org.apache.flink.runtime.heartbeat.TestingHeartbeatServices;
+import org.apache.flink.runtime.highavailability.HighAvailabilityServices;
+import 
org.apache.flink.runtime.highavailability.TestingHighAvailabilityServices;
+import 
org.apache.flink.runtime.leaderelection.TestingLeaderElectionService;
+import org.apache.flink.runtime.metrics.MetricRegistry;
+import org.apache.flink.runtime.registration.RegistrationResponse;
+import org.apache.flink.runtime.resourcemanager.JobLeaderIdService;
+import 
org.apache.flink.runtime.resourcemanager.ResourceManagerConfiguration;
+import org.apache.flink.runtime.resourcemanager.SlotRequest;
+import org.apache.flink.runtime.resourcemanager.slotmanager.SlotManager;
+import 
org.apache.flink.runtime.resourcemanager.slotmanager.SlotManagerException;
+import org.apache.flink.runtime.rpc.FatalErrorHandler;
+import org.apache.flink.runtime.rpc.RpcService;
+import org.apache.flink.runtime.rpc.TestingRpcService;
+import org.apache.flink.runtime.taskexecutor.SlotReport;
+import org.apache.flink.runtime.taskexecutor.TaskExecutorGateway;
+import org.apache.flink.runtime.testutils.DirectScheduledExecutorService;
+import org.apache.flink.runtime.util.TestingFatalErrorHandler;
+
+import org.apache.flink.util.TestLogger;
+
+import 
org.apache.flink.shaded.guava18.com.google.common.collect.ImmutableList;
+
+import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
+import org.apache.hadoop.yarn.api.records.ApplicationId;
+import org.apache.hadoop.yarn.api.records.Container;
+import org.apache.hadoop.yarn.api.records.ContainerId;
+import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
+import org.apache.hadoop.yarn.api.records.NodeId;
+import org.apache.hadoop.yarn.api.records.Priority;
+import org.apache.hadoop.yarn.api.records.Resource;
+import org.apache.hadoop.yarn.client.api.AMRMClient;
+import org.apache.hadoop.yarn.client.api.NMClient;
+import org.apache.hadoop.yarn.client.api.async.AMRMClientAsync;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.yarn.YarnConfigKeys.ENV_APP_ID;
+import static org.apache.flink.yarn.YarnConfigKeys.ENV_CLIENT_HOME_DIR;
+import static org.apache.flink.yarn.YarnConfigKeys.ENV_CLIENT_SHIP_FILES;
+import static org.apache.flink.yarn.YarnConfigKeys.ENV_FLINK_CLASSPATH;
+import static org.apache.flink.yarn.YarnConfigKeys.ENV_HADOOP_USER_NAME;
+import static org.apache.flink.yarn.YarnConfigKeys.FLINK_JAR_PATH;
+import static org.junit.Assert.assertTrue;
+import static org.mockito.Matchers.any;
+import static org.mockito.Matchers.eq;
+import static org.mockito.Mockito.mock;
 

[jira] [Commented] (FLINK-4809) Operators should tolerate checkpoint failures

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242917#comment-16242917
 ] 

ASF GitHub Bot commented on FLINK-4809:
---

Github user PangZhi commented on the issue:

https://github.com/apache/flink/pull/4883
  
@StefanRRichter do we have any update on this PR?


> Operators should tolerate checkpoint failures
> -
>
> Key: FLINK-4809
> URL: https://issues.apache.org/jira/browse/FLINK-4809
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Reporter: Stephan Ewen
>Assignee: Stefan Richter
> Fix For: 1.4.0
>
>
> Operators should try/catch exceptions in the synchronous and asynchronous 
> part of the checkpoint and send a {{DeclineCheckpoint}} message as a result.
> The decline message should have the failure cause attached to it.
> The checkpoint barrier should be sent anyway as a first step before 
> attempting to make a state checkpoint, to make sure that downstream operators 
> do not block in alignment.
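
For illustration, a sketch of that control flow (method names are hypothetical):

```java
// Barrier first, then snapshot; a failure declines the checkpoint
// instead of failing the job:
void triggerCheckpoint(long checkpointId) throws Exception {
    broadcastCheckpointBarrier(checkpointId);  // downstream alignment never blocks
    try {
        snapshotState(checkpointId);           // synchronous + asynchronous parts
        acknowledgeCheckpoint(checkpointId);
    } catch (Exception cause) {
        // send DeclineCheckpoint with the failure cause attached
        declineCheckpoint(checkpointId, cause);
    }
}
```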



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-4809) Operators should tolerate checkpoint failures

2017-11-07 Thread Jing Fan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242916#comment-16242916
 ] 

Jing Fan commented on FLINK-4809:
-

Do we have any update on the PR? It has been handing for weeks.

> Operators should tolerate checkpoint failures
> -
>
> Key: FLINK-4809
> URL: https://issues.apache.org/jira/browse/FLINK-4809
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Reporter: Stephan Ewen
>Assignee: Stefan Richter
> Fix For: 1.4.0
>
>
> Operators should try/catch exceptions in the synchronous and asynchronous 
> part of the checkpoint and send a {{DeclineCheckpoint}} message as a result.
> The decline message should have the failure cause attached to it.
> The checkpoint barrier should be sent anyway as a first step before 
> attempting to make a state checkpoint, to make sure that downstream operators 
> do not block in alignment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (FLINK-4809) Operators should tolerate checkpoint failures

2017-11-07 Thread Jing Fan (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242916#comment-16242916
 ] 

Jing Fan edited comment on FLINK-4809 at 11/7/17 9:26 PM:
--

Do we have any update on the PR? It has been hanging for weeks.


was (Author: pangzhi):
Do we have any update on the PR? It has been handing for weeks.

> Operators should tolerate checkpoint failures
> -
>
> Key: FLINK-4809
> URL: https://issues.apache.org/jira/browse/FLINK-4809
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Reporter: Stephan Ewen
>Assignee: Stefan Richter
> Fix For: 1.4.0
>
>
> Operators should try/catch exceptions in the synchronous and asynchronous 
> part of the checkpoint and send a {{DeclineCheckpoint}} message as a result.
> The decline message should have the failure cause attached to it.
> The checkpoint barrier should be sent anyway as a first step before 
> attempting to make a state checkpoint, to make sure that downstream operators 
> do not block in alignment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8004) Sample code in Debugging & Monitoring Metrics documentation section is not valid java

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242875#comment-16242875
 ] 

ASF GitHub Bot commented on FLINK-8004:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4965
  
LGTM.


> Sample code in Debugging & Monitoring Metrics documentation section is not 
> valid java 
> --
>
> Key: FLINK-8004
> URL: https://issues.apache.org/jira/browse/FLINK-8004
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Metrics
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Andreas Hecke
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.4.0, 1.5.0
>
>
> Hi, we have stumbled over some documentation inconsistencies in how to 
> use metrics in Flink. It seems there is some invalid Java code posted as 
> samples, such as methods declared as @public and missing return statements; see 
> [here|https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/metrics.html#counter]
> I raised a question on 
> [SO|https://stackoverflow.com/questions/47153424/what-does-public-modifier-mean-in-method-signature]
>  and Fabian asked me to open a Jira issue
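
For reference, a corrected version of the kind of counter sample in question (a 
sketch along the lines of the documented metrics API):

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

public class MyMapper extends RichMapFunction<String, String> {
    private transient Counter counter;

    @Override
    public void open(Configuration config) {
        this.counter = getRuntimeContext()
                .getMetricGroup()
                .counter("myCounter");  // register the counter once
    }

    @Override
    public String map(String value) throws Exception {
        this.counter.inc();
        return value;                   // the broken sample omitted this return
    }
}
```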



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4965: [FLINK-8004][metrics][docs] Fix usage examples

2017-11-07 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4965
  
LGTM.


---


[jira] [Commented] (FLINK-8010) Bump remaining flink-shaded dependencies

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242873#comment-16242873
 ] 

ASF GitHub Bot commented on FLINK-8010:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4971
  
scratch that, i have it in this one, here:
```
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-shaded-guava</artifactId>
-   <version>18.0-1.0</version>
+   <version>18.0-${flink.shaded.version}</version>
```


> Bump remaining flink-shaded dependencies
> 
>
> Key: FLINK-8010
> URL: https://issues.apache.org/jira/browse/FLINK-8010
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0, 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4971: [FLINK-8010][build] Bump remaining flink-shaded versions

2017-11-07 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4971
  
scratch that, i have it in this one, here:
```
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-shaded-guava</artifactId>
-   <version>18.0-1.0</version>
+   <version>18.0-${flink.shaded.version}</version>
```


---


[GitHub] flink issue #4971: [FLINK-8010][build] Bump remaining flink-shaded versions

2017-11-07 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4971
  
@greghogan I also had this in another PR but not this one.


---


[jira] [Commented] (FLINK-8010) Bump remaining flink-shaded dependencies

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242872#comment-16242872
 ] 

ASF GitHub Bot commented on FLINK-8010:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4971
  
@greghogan I also had this in another PR but not this one.


> Bump remaining flink-shaded dependencies
> 
>
> Key: FLINK-8010
> URL: https://issues.apache.org/jira/browse/FLINK-8010
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0, 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7998) Make case classes in TPCHQuery3.java public to allow dynamic instantiation

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242834#comment-16242834
 ] 

ASF GitHub Bot commented on FLINK-7998:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4959
  
Looks good but I have not tested. Can we also fix the parameter swap 
referenced in the JIRA?


> Make case classes in TPCHQuery3.java public to allow dynamic instantiation
> --
>
> Key: FLINK-7998
> URL: https://issues.apache.org/jira/browse/FLINK-7998
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.3.2
>Reporter: Keren Zhu
>Priority: Minor
>  Labels: easyfix
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
> Case classes Lineitem, Customer and Order in example TPCHQuery3.java are set 
> to private. This causes an IllegalAccessException exception because of 
> reflection check in dynamic class instantiation. Making them public resolves 
> the problem (which is what implicitly suggested by _case class_ in 
> TPCHQuery3.scala)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4971: [FLINK-8010][build] Bump remaining flink-shaded versions

2017-11-07 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4971
  
Is anyone else seeing the red/green highlights shifted two lines up for 
this PR? I'm seeing this in multiple browsers and with plug-ins disabled but 
only on this PR.


---


[GitHub] flink issue #4959: [FLINK-7998] private scope is changed to public to resolv...

2017-11-07 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4959
  
Looks good but I have not tested. Can we also fix the parameter swap 
referenced in the JIRA?


---


[jira] [Commented] (FLINK-8010) Bump remaining flink-shaded dependencies

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242848#comment-16242848
 ] 

ASF GitHub Bot commented on FLINK-8010:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4971
  
Is anyone else seeing the red/green highlights shifted two lines up for 
this PR? I'm seeing this in multiple browsers and with plug-ins disabled but 
only on this PR.


> Bump remaining flink-shaded dependencies
> 
>
> Key: FLINK-8010
> URL: https://issues.apache.org/jira/browse/FLINK-8010
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0, 1.5.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7897) Consider using nio.Files for file deletion in TransientBlobCleanupTask

2017-11-07 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-7897:
--
Description: 
nio.Files#delete() provides a better clue as to why the deletion may fail:

https://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#delete(java.nio.file.Path)

Depending on the potential exception, the call to localFile.exists() may be 
skipped.

  was:
nio.Files#delete() provides a better clue as to why the deletion may fail:

https://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#delete(java.nio.file.Path)


Depending on the potential exception, the call to localFile.exists() may be 
skipped.


> Consider using nio.Files for file deletion in TransientBlobCleanupTask
> --
>
> Key: FLINK-7897
> URL: https://issues.apache.org/jira/browse/FLINK-7897
> Project: Flink
>  Issue Type: Improvement
>Reporter: Ted Yu
>Priority: Minor
>
> nio.Files#delete() provides a better clue as to why the deletion may fail:
> https://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#delete(java.nio.file.Path)
> Depending on the potential exception, the call to localFile.exists() may be 
> skipped.
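
For illustration, a minimal sketch of the suggested pattern (method and 
variable names are assumptions):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

// Files.delete() throws a typed exception explaining the failure, so the
// separate localFile.exists() pre-check can be dropped:
static void deleteLocalFile(Path localFile) {
    try {
        Files.delete(localFile);
    } catch (NoSuchFileException e) {
        // file already gone: nothing to do
    } catch (IOException e) {
        // other causes (permissions, non-empty directory, ...) carry the reason
    }
}
```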



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8006) flink-daemon.sh: line 103: binary operator expected

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242636#comment-16242636
 ] 

ASF GitHub Bot commented on FLINK-8006:
---

Github user elbaulp commented on the issue:

https://github.com/apache/flink/pull/4968
  
@greghogan Done, sorry for ignoring the stop script.


> flink-daemon.sh: line 103: binary operator expected
> ---
>
> Key: FLINK-8006
> URL: https://issues.apache.org/jira/browse/FLINK-8006
> Project: Flink
>  Issue Type: Bug
>  Components: Startup Shell Scripts
>Affects Versions: 1.3.2
> Environment: Linux 4.12.12-gentoo #2 SMP x86_64 Intel(R) Core(TM) 
> i3-3110M CPU @ 2.40GHz GenuineIntel GNU/Linux
>Reporter: Alejandro 
>  Labels: easyfix, newbie
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When executing `./bin/start-local.sh` I get
> flink-1.3.2/bin/flink-daemon.sh: line 79: $pid: ambiguous redirect
> flink-1.3.2/bin/flink-daemon.sh: line 103: [: /tmp/flink-Alejandro: binary 
> operator expected
> I solved the problem replacing $pid by "$pid" in lines 79 and 103.
> Should I make a PR to the repo?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4968: [FLINK-8006] [Startup Shell Scripts] Enclosing $pid in qu...

2017-11-07 Thread elbaulp
Github user elbaulp commented on the issue:

https://github.com/apache/flink/pull/4968
  
@greghogan Done, sorry for ignoring the stop script.


---


[jira] [Commented] (FLINK-8006) flink-daemon.sh: line 103: binary operator expected

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242567#comment-16242567
 ] 

ASF GitHub Bot commented on FLINK-8006:
---

Github user elbaulp commented on the issue:

https://github.com/apache/flink/pull/4968
  
@greghogan Didn't realize it's failing too. I am going to fix it.


> flink-daemon.sh: line 103: binary operator expected
> ---
>
> Key: FLINK-8006
> URL: https://issues.apache.org/jira/browse/FLINK-8006
> Project: Flink
>  Issue Type: Bug
>  Components: Startup Shell Scripts
>Affects Versions: 1.3.2
> Environment: Linux 4.12.12-gentoo #2 SMP x86_64 Intel(R) Core(TM) 
> i3-3110M CPU @ 2.40GHz GenuineIntel GNU/Linux
>Reporter: Alejandro 
>  Labels: easyfix, newbie
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When executing `./bin/start-local.sh` I get
> flink-1.3.2/bin/flink-daemon.sh: line 79: $pid: ambiguous redirect
> flink-1.3.2/bin/flink-daemon.sh: line 103: [: /tmp/flink-Alejandro: binary 
> operator expected
> I solved the problem replacing $pid by "$pid" in lines 79 and 103.
> Should I make a PR to the repo?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4968: [FLINK-8006] [Startup Shell Scripts] Enclosing $pid in qu...

2017-11-07 Thread elbaulp
Github user elbaulp commented on the issue:

https://github.com/apache/flink/pull/4968
  
@greghogan Didn't realize it's failing too. I am going to fix it.


---


[jira] [Commented] (FLINK-7978) Kafka011 exactly-once Producer sporadically fails to commit under high parallelism

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242540#comment-16242540
 ] 

ASF GitHub Bot commented on FLINK-7978:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4955
  
Thanks for fixing this! And thanks for the review!

I merged, could you please close the PR?


> Kafka011 exactly-once Producer sporadically fails to commit under high 
> parallelism
> --
>
> Key: FLINK-7978
> URL: https://issues.apache.org/jira/browse/FLINK-7978
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.4.0
>Reporter: Gary Yao
>Assignee: Piotr Nowojski
>Priority: Blocker
> Fix For: 1.4.0
>
>
> The Kafka011 exactly-once producer sporadically fails to commit/confirm the 
> first checkpoint. The behavior seems to be easier reproduced under high job 
> parallelism.
> *Logs/Stacktrace*
> {noformat}
> 10:24:35,347 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator  
>- Completed checkpoint 1 (191029 bytes in 1435 ms).
> 10:24:35,349 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 2/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-12], transactionStartTime=1509787474588} from 
> checkpoint 1
> 10:24:35,349 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 1/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-8], transactionStartTime=1509787474393} from 
> checkpoint 1
> 10:24:35,349 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 0/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-4], transactionStartTime=1509787474448} from 
> checkpoint 1
> 10:24:35,350 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 6/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-34], transactionStartTime=1509787474742} from 
> checkpoint 1
> 10:24:35,350 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 4/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-23], transactionStartTime=1509787474777} from 
> checkpoint 1
> 10:24:35,353 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 10/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-52], transactionStartTime=1509787474930} from 
> checkpoint 1
> 10:24:35,350 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 7/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-35], transactionStartTime=1509787474659} from 
> checkpoint 1
> 10:24:35,349 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 5/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-25], transactionStartTime=1509787474652} from 
> checkpoint 1
> 10:24:35,361 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 18/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-92], transactionStartTime=1509787475043} from 
> checkpoint 1
> 10:24:35,349 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 3/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-15], transactionStartTime=1509787474590} from 
> checkpoint 1
> 10:24:35,361 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 13/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> 

[GitHub] flink issue #4955: [FLINK-7978][kafka] Ensure that transactional ids will ne...

2017-11-07 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4955
  
Thanks for fixing this! And thanks for the review!

I merged, could you please close the PR?


---
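
For readers following along: a minimal sketch of how the exactly-once producer in 
question is wired up, assuming the 1.4-era connector API; the topic name, broker 
address, and checkpoint interval are placeholders, not values from this ticket.

```
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011;
import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper;

public class ExactlyOnceSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // transactions are committed when a checkpoint completes, as in the log above
        env.enableCheckpointing(5_000);

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        // keep below the broker-side transaction.max.timeout.ms
        props.setProperty("transaction.timeout.ms", "900000");

        env.fromElements("a", "b", "c")
           .addSink(new FlinkKafkaProducer011<>(
                "kafka-sink", // target topic (placeholder)
                new KeyedSerializationSchemaWrapper<>(new SimpleStringSchema()),
                props,
                FlinkKafkaProducer011.Semantic.EXACTLY_ONCE));

        env.execute("exactly-once sink sketch");
    }
}
```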


[jira] [Closed] (FLINK-7978) Kafka011 exactly-once Producer sporadically fails to commit under high parallelism

2017-11-07 Thread Aljoscha Krettek (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aljoscha Krettek closed FLINK-7978.
---
Resolution: Fixed

Fixed on release-1.4 in
3cbf467ebdf639df4d7d4da78b7bc2929aa4b5d9
460e27aeb5e246aff0f8137448441c315123608c

Fixed on master in
2949dc43b238b7f689571f007fd3346de3b89ed9
d3aa3f0729e42d48820b3786f463eadc409ece4f

> Kafka011 exactly-once Producer sporadically fails to commit under high 
> parallelism
> --
>
> Key: FLINK-7978
> URL: https://issues.apache.org/jira/browse/FLINK-7978
> Project: Flink
>  Issue Type: Bug
>  Components: Kafka Connector
>Affects Versions: 1.4.0
>Reporter: Gary Yao
>Assignee: Piotr Nowojski
>Priority: Blocker
> Fix For: 1.4.0
>
>
> The Kafka011 exactly-once producer sporadically fails to commit/confirm the 
> first checkpoint. The behavior seems to be easier reproduced under high job 
> parallelism.
> *Logs/Stacktrace*
> {noformat}
> 10:24:35,347 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator  
>- Completed checkpoint 1 (191029 bytes in 1435 ms).
> 10:24:35,349 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 2/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-12], transactionStartTime=1509787474588} from 
> checkpoint 1
> 10:24:35,349 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 1/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-8], transactionStartTime=1509787474393} from 
> checkpoint 1
> 10:24:35,349 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 0/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-4], transactionStartTime=1509787474448} from 
> checkpoint 1
> 10:24:35,350 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 6/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-34], transactionStartTime=1509787474742} from 
> checkpoint 1
> 10:24:35,350 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 4/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-23], transactionStartTime=1509787474777} from 
> checkpoint 1
> 10:24:35,353 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 10/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-52], transactionStartTime=1509787474930} from 
> checkpoint 1
> 10:24:35,350 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 7/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-35], transactionStartTime=1509787474659} from 
> checkpoint 1
> 10:24:35,349 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 5/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-25], transactionStartTime=1509787474652} from 
> checkpoint 1
> 10:24:35,361 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 18/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-92], transactionStartTime=1509787475043} from 
> checkpoint 1
> 10:24:35,349 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 3/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> kafka-sink-1509787467330-15], transactionStartTime=1509787474590} from 
> checkpoint 1
> 10:24:35,361 INFO  
> org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction  - 
> FlinkKafkaProducer011 13/32 - checkpoint 1 complete, committing transaction 
> TransactionHolder{handle=KafkaTransactionState [transactionalId=Sink: 
> 

[jira] [Commented] (FLINK-8008) PojoTypeInfo should sort fields based on boolean

2017-11-07 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242512#comment-16242512
 ] 

Aljoscha Krettek commented on FLINK-8008:
-

What method is this using for reading?

> PojoTypeInfo should sort fields based on boolean
> ---
>
> Key: FLINK-8008
> URL: https://issues.apache.org/jira/browse/FLINK-8008
> Project: Flink
>  Issue Type: Improvement
>  Components: DataSet API
>Affects Versions: 1.3.2
>Reporter: Muhammad Imran Tariq
>Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Flink's PojoTypeInfo sorts the fields array that is passed into its constructor. 
> I want to create another constructor that takes a boolean parameter controlling 
> whether the fields are sorted or not.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7977) bump version of compatibility check for Flink 1.4

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242503#comment-16242503
 ] 

ASF GitHub Bot commented on FLINK-7977:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4945
  
@greghogan I commented on the issue. Yes this is the API compatibility 
check and the API should actually stay stable for all of the 1.x series. That's 
also why I'm wondering why we're comparing against 1.1.4.


> bump version of compatibility check for Flink 1.4
> -
>
> Key: FLINK-7977
> URL: https://issues.apache.org/jira/browse/FLINK-7977
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0
>
>
> Since Flink maintains backward compatibility check for 2 versions, Flink 1.4 
> should check compatibility with 1.2



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4945: [FLINK-7977][build] bump version of compatibility check f...

2017-11-07 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4945
  
@greghogan I commented on the issue. Yes this is the API compatibility 
check and the API should actually stay stable for all of the 1.x series. That's 
also why I'm wondering why we're comparing against 1.1.4.


---


[jira] [Commented] (FLINK-7973) Fix service shading relocation for S3 file systems

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242502#comment-16242502
 ] 

ASF GitHub Bot commented on FLINK-7973:
---

Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4961#discussion_r149449571
  
--- Diff: flink-filesystems/flink-s3-fs-hadoop/pom.xml ---
@@ -33,6 +33,7 @@ under the License.
jar
 

+   
--- End diff --

done (also the other suggestions)


> Fix service shading relocation for S3 file systems
> --
>
> Key: FLINK-7973
> URL: https://issues.apache.org/jira/browse/FLINK-7973
> Project: Flink
>  Issue Type: Bug
>Reporter: Stephan Ewen
>Assignee: Nico Kruber
>Priority: Blocker
> Fix For: 1.4.0
>
>
> The shade plugin relocates services incorrectly currently, applying 
> relocation patterns multiple times.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4961: [FLINK-7973] fix shading and relocating Hadoop for...

2017-11-07 Thread NicoK
Github user NicoK commented on a diff in the pull request:

https://github.com/apache/flink/pull/4961#discussion_r149449571
  
--- Diff: flink-filesystems/flink-s3-fs-hadoop/pom.xml ---
@@ -33,6 +33,7 @@ under the License.
jar
 

+   
--- End diff --

done (also the other suggestions)


---


[jira] [Commented] (FLINK-7977) bump version of compatibility check for Flink 1.4

2017-11-07 Thread Bowen Li (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242495#comment-16242495
 ] 

Bowen Li commented on FLINK-7977:
-

[~greghogan] You're right, it should be API compatibility. Sorry that I got 
confused between the two.

[~rmetzger] what do you think about this issue?

> bump version of compatibility check for Flink 1.4
> -
>
> Key: FLINK-7977
> URL: https://issues.apache.org/jira/browse/FLINK-7977
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0
>
>
> Since Flink maintains backward compatibility check for 2 versions, Flink 1.4 
> should check compatibility with 1.2



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8006) flink-daemon.sh: line 103: binary operator expected

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242480#comment-16242480
 ] 

ASF GitHub Bot commented on FLINK-8006:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4968
  
Do we also need to quote within sub-expressions? Have you looked at the 
stop script?


> flink-daemon.sh: line 103: binary operator expected
> ---
>
> Key: FLINK-8006
> URL: https://issues.apache.org/jira/browse/FLINK-8006
> Project: Flink
>  Issue Type: Bug
>  Components: Startup Shell Scripts
>Affects Versions: 1.3.2
> Environment: Linux 4.12.12-gentoo #2 SMP x86_64 Intel(R) Core(TM) 
> i3-3110M CPU @ 2.40GHz GenuineIntel GNU/Linux
>Reporter: Alejandro 
>  Labels: easyfix, newbie
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When executing `./bin/start-local.sh` I get
> flink-1.3.2/bin/flink-daemon.sh: line 79: $pid: ambiguous redirect
> flink-1.3.2/bin/flink-daemon.sh: line 103: [: /tmp/flink-Alejandro: binary 
> operator expected
> I solved the problem replacing $pid by "$pid" in lines 79 and 103.
> Should I make a PR to the repo?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4968: [FLINK-8006] [Startup Shell Scripts] Enclosing $pid in qu...

2017-11-07 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4968
  
Do we also need to quote within sub-expressions? Have you looked at the 
stop script?


---


[jira] [Commented] (FLINK-8009) flink-dist pulls in flink-runtime's transitive avro/jackson dependency

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242475#comment-16242475
 ] 

ASF GitHub Bot commented on FLINK-8009:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4972
  
This looks good to go! I actually checked it out and manually ran the 
end-to-end tests because it's quicker than waiting for travis. 😅 


> flink-dist pulls in flink-runtime's transitive avro/jackson dependency
> --
>
> Key: FLINK-8009
> URL: https://issues.apache.org/jira/browse/FLINK-8009
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.5.0
>
>
> The promotion of transitive dependencies in flink-runtime causes flink-dist 
> to contain _some_ transitive dependencies from flink-shaded-hadoop. (most 
> notably, avro and codehaus.jackson)
> We will either have to add an exclusion for each dependency to flink-dist, 
> set flink-shaded-hadoop to provided in flink-runtime (hacky, but less 
> intrusive), or remove the promotion and explicitly depend on various akka 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4972: [FLINK-8009][build][runtime] Remove transitive dependency...

2017-11-07 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4972
  
This looks good to go! I actually checked it out and manually ran the 
end-to-end tests because it's quicker than waiting for travis. 😅 


---


[jira] [Commented] (FLINK-7991) Cleanup kafka example jar filters

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242460#comment-16242460
 ] 

ASF GitHub Bot commented on FLINK-7991:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4975

[FLINK-7991][examples][kafka] Cleanup kafka10 example jar

## What is the purpose of the change

This PR cleans up the kafka example shading configuration, removing plenty 
of unnecessary classes from the resulting jar along with several ineffective 
inclusions.

## Brief change log

* remove inclusions for zk, curator, jute, I0Itec, jline and yammer as they 
are ineffective
* narrow down `org.apache.flink.streaming` inclusion to only include the 
kafka connector

## Verifying this change

* Build jar before and after and compare contents

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7991

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4975.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4975


commit cca79f8194ab46aab51f5dfb2f528802a9fbb9d4
Author: zentol 
Date:   2017-11-07T17:12:29Z

[FLINK-7991][examples][kafka] Cleanup kafka10 example jar




> Cleanup kafka example jar filters
> -
>
> Key: FLINK-7991
> URL: https://issues.apache.org/jira/browse/FLINK-7991
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0, 1.5.0
>
>
> The kafka example jar shading configuration includes the following files:
> {code}
> org/apache/flink/streaming/examples/kafka/**
> org/apache/flink/streaming/**
> org/apache/kafka/**
> org/apache/curator/**
> org/apache/zookeeper/**
> org/apache/jute/**
> org/I0Itec/**
> jline/**
> com/yammer/**
> kafka/**
> {code}
> Problems:
> * the inclusion of org.apache.flink.streaming causes large parts of the API 
> (and runtime...) to be included in the jar, along with the _Twitter_ source and 
> *all* other examples
> * the yammer, jline, I0Itec, jute, zookeeper and curator inclusions are 
> ineffective; none of these classes show up in the example jar



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-7905) HadoopS3FileSystemITCase failed on travis

2017-11-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-7905.
---
Resolution: Fixed

> HadoopS3FileSystemITCase failed on travis
> -
>
> Key: FLINK-7905
> URL: https://issues.apache.org/jira/browse/FLINK-7905
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystem, Tests
>Affects Versions: 1.4.0
> Environment: https://travis-ci.org/zentol/flink/jobs/291550295
> https://travis-ci.org/tillrohrmann/flink/jobs/291491026
>Reporter: Chesnay Schepler
>Assignee: Stephan Ewen
>  Labels: test-stability
> Fix For: 1.4.0
>
>
> The {{HadoopS3FileSystemITCase}} is flaky on Travis because its access got 
> denied by S3.
> {code}
> ---
>  T E S T S
> ---
> Running org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
> Tests run: 3, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 3.354 sec <<< 
> FAILURE! - in org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
> testDirectoryListing(org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase)  
> Time elapsed: 0.208 sec  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/testdir: 
> getFileStatus on 
> s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/testdir: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> 9094999D7456C589), S3 Extended Request ID: 
> fVIcROQh4E1/GjWYYV6dFp851rjiKtFgNSCO8KkoTmxWbuxz67aDGqRiA/a09q7KS6Mz1Tnyab4=
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1579)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1249)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4194)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4141)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1256)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1232)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:904)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1553)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:117)
>   at 
> org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.getFileStatus(HadoopFileSystem.java:77)
>   at org.apache.flink.core.fs.FileSystem.exists(FileSystem.java:509)
>   at 
> org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase.testDirectoryListing(HadoopS3FileSystemITCase.java:163)
> testSimpleFileWriteAndRead(org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase)
>   Time elapsed: 0.275 sec  <<< ERROR!
> java.nio.file.AccessDeniedException: 
> s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/test.txt: 
> getFileStatus on 
> s3://[secure]/tests-9273972a-70c2-4f06-862e-d02936313fea/test.txt: 
> com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
> S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 
> B3D8126BE6CF169F), S3 Extended Request ID: 
> T34sn+a/CcCFv+kFR/UbfozAkXXtiLDu2N31Ok5EydgKeJF5I2qXRCC/MkxSi4ymiiVWeSyb8FY=
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1579)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1249)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
>   at 
> 

[GitHub] flink pull request #4975: [FLINK-7991][examples][kafka] Cleanup kafka10 exam...

2017-11-07 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4975

[FLINK-7991][examples][kafka] Cleanup kafka10 example jar

## What is the purpose of the change

This PR cleans up the kafka example shading configuration, removing plenty 
of unnecessary classes from the resulting jar along with several ineffective 
inclusions.

## Brief change log

* remove inclusions for zk, curator, jute, I0Itec, jline and yammer as they 
are ineffective
* narrow down `org.apache.flink.streaming` inclusion to only include the 
kafka connector

## Verifying this change

* Build jar before and after and compare contents

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7991

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4975.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4975


commit cca79f8194ab46aab51f5dfb2f528802a9fbb9d4
Author: zentol 
Date:   2017-11-07T17:12:29Z

[FLINK-7991][examples][kafka] Cleanup kafka10 example jar




---


[jira] [Commented] (FLINK-8012) Table with time attribute cannot be written to CsvTableSink

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242458#comment-16242458
 ] 

ASF GitHub Bot commented on FLINK-8012:
---

GitHub user fhueske opened a pull request:

https://github.com/apache/flink/pull/4974

[FLINK-8012] [table] Fix TableSink config for tables with time attributes.

## What is the purpose of the change

Fix the configuration of TableSinks for Tables with time attributes 
(`TimeIndicatorTypeInfo`).
Time indicators types are internal and must not be exposed to the outside 
(such as TableSinks).

## Brief change log

* the field type of time attributes (rowtime or proctime) is changed to 
their publicly visible type `SQL_TIMESTAMP`.
* an existing test is adapted to check this case.

## Verifying this change

* an existing test method in `TableSinkITCase` was adapted.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): **no**
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
  - The serializers: **no**
  - The runtime per-record code paths (performance sensitive): **no**
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**
  - The S3 file system connector: **no**

## Documentation

  - Does this pull request introduce a new feature? **no**
  - If yes, how is the feature documented? **n/a**


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhueske/flink tableSinkConfig

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4974.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4974


commit ab8120da2e7f751da2b5760e5b17d76264dcfcab
Author: Fabian Hueske 
Date:   2017-11-07T16:57:39Z

[FLINK-8012] [table] Fix TableSink config for tables with time attributes.




> Table with time attribute cannot be written to CsvTableSink
> ---
>
> Key: FLINK-8012
> URL: https://issues.apache.org/jira/browse/FLINK-8012
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.4.0, 1.3.2
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Critical
> Fix For: 1.4.0, 1.3.3
>
>
> A Table with a time attribute ({{TimeIndicatorTypeInfo}}) cannot be written 
> to a {{CsvTableSink}}.
> Trying to do so results in the following exception:
> {code}
> Exception in thread "main" org.apache.flink.table.api.TableException: The 
> time indicator type is an internal type only.
>   at 
> org.apache.flink.table.api.TableEnvironment.org$apache$flink$table$api$TableEnvironment$$validateFieldType$1(TableEnvironment.scala:937)
>   at 
> org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$2.apply(TableEnvironment.scala:963)
>   at 
> org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$2.apply(TableEnvironment.scala:960)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>   at 
> org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:960)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.getConversionMapper(StreamTableEnvironment.scala:289)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:810)
>   at 
> org.apache.flink.table.api.StreamTableEnvironment.writeToSink(StreamTableEnvironment.scala:263)
>   at org.apache.flink.table.api.Table.writeToSink(table.scala:857)
>   at org.apache.flink.table.api.Table.writeToSink(table.scala:830)
> {code}
> The time attribute should be automatically converted into a {{SQL_TIMESTAMP}} 
> type.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4974: [FLINK-8012] [table] Fix TableSink config for tabl...

2017-11-07 Thread fhueske
GitHub user fhueske opened a pull request:

https://github.com/apache/flink/pull/4974

[FLINK-8012] [table] Fix TableSink config for tables with time attributes.

## What is the purpose of the change

Fix the configuration of TableSinks for Tables with time attributes 
(`TimeIndicatorTypeInfo`).
Time indicators types are internal and must not be exposed to the outside 
(such as TableSinks).

## Brief change log

* the field type of time attributes (rowtime or proctime) is changed to 
their publicly visible type `SQL_TIMESTAMP`.
* an existing test is adapted to check this case.

## Verifying this change

* an existing test method in `TableSinkITCase` was adapted.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): **no**
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
  - The serializers: **no**
  - The runtime per-record code paths (performance sensitive): **no**
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**
  - The S3 file system connector: **no**

## Documentation

  - Does this pull request introduce a new feature? **no**
  - If yes, how is the feature documented? **n/a**


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/fhueske/flink tableSinkConfig

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4974.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4974


commit ab8120da2e7f751da2b5760e5b17d76264dcfcab
Author: Fabian Hueske 
Date:   2017-11-07T16:57:39Z

[FLINK-8012] [table] Fix TableSink config for tables with time attributes.




---


[jira] [Commented] (FLINK-5633) ClassCastException: X cannot be cast to X when re-submitting a job.

2017-11-07 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242418#comment-16242418
 ] 

Stephan Ewen commented on FLINK-5633:
-

[~erikvanoosten] Okay, I assume that the reversed class loading should fix that.

Just curious, why are you creating a new reader for each record?

> ClassCastException: X cannot be cast to X when re-submitting a job.
> ---
>
> Key: FLINK-5633
> URL: https://issues.apache.org/jira/browse/FLINK-5633
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, YARN
>Affects Versions: 1.1.4
>Reporter: Giuliano Caliari
>Priority: Minor
>
> I’m running a job on my local cluster and the first time I submit the job 
> everything works but whenever I cancel and re-submit the same job it fails 
> with:
> {quote}
> org.apache.flink.client.program.ProgramInvocationException: The program 
> execution failed: Job execution failed.
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
>   at 
> org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:101)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:400)
>   at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
>   at 
> org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:634)
>   at au.com.my.package.pTraitor.OneTrait.execute(Traitor.scala:147)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$.delayedEndpoint$au$com$my$package$pTraitor$TraitorAppOneTrait$1(TraitorApp.scala:22)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$delayedInit$body.apply(TraitorApp.scala:21)
>   at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
>   at 
> scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
>   at scala.App$$anonfun$main$1.apply(App.scala:76)
>   at scala.App$$anonfun$main$1.apply(App.scala:76)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at 
> scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.App$class.main(App.scala:76)
>   at 
> au.com.my.package.pTraitor.TraitorAppOneTrait$.main(TraitorApp.scala:21)
>   at au.com.my.package.pTraitor.TraitorAppOneTrait.main(TraitorApp.scala)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:419)
>   at 
> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:339)
>   at 
> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:831)
>   at org.apache.flink.client.CliFrontend.run(CliFrontend.java:256)
>   at 
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:1073)
>   at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1120)
>   at org.apache.flink.client.CliFrontend$2.call(CliFrontend.java:1117)
>   at 
> org.apache.flink.runtime.security.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:29)
>   at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1116)
> Caused by: org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:900)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:843)
>   at 
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:843)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>   at 
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> 
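
On the "new reader for each record" question above: the usual pattern is to build 
the reader once per task in {{open()}} and reuse it. A minimal sketch (the schema 
handling and class name are assumptions for illustration, not code from this ticket):

{code}
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DecoderFactory;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class AvroParseFn extends RichMapFunction<byte[], GenericRecord> {

    private final String schemaString; // org.apache.avro.Schema is not serializable
    private transient GenericDatumReader<GenericRecord> reader;

    public AvroParseFn(String schemaString) {
        this.schemaString = schemaString;
    }

    @Override
    public void open(Configuration parameters) {
        // build the reader once per task instead of once per record
        Schema schema = new Schema.Parser().parse(schemaString);
        reader = new GenericDatumReader<>(schema);
    }

    @Override
    public GenericRecord map(byte[] bytes) throws Exception {
        return reader.read(null, DecoderFactory.get().binaryDecoder(bytes, null));
    }
}
{code}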

[jira] [Closed] (FLINK-7481) Binary search with integer overflow possibility

2017-11-07 Thread Hai Zhou UTC+8 (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hai Zhou UTC+8 closed FLINK-7481.
-
Resolution: Won't Fix

> Binary search with integer overflow possibility
> ---
>
> Key: FLINK-7481
> URL: https://issues.apache.org/jira/browse/FLINK-7481
> Project: Flink
>  Issue Type: Bug
>  Components: Core
>Reporter: Baihua Su
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
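
For reference, the classic midpoint pitfall this kind of report refers to, as a 
generic sketch (not the Flink code in question):

{code}
public class MidpointOverflow {
    public static void main(String[] args) {
        int low = 1_500_000_000;
        int high = 2_000_000_000;

        int broken   = (low + high) / 2;     // low + high wraps to a negative int
        int safe     = low + (high - low) / 2;
        int alsoSafe = (low + high) >>> 1;   // unsigned shift also avoids the wrap

        System.out.println(broken);   // -397483648
        System.out.println(safe);     // 1750000000
        System.out.println(alsoSafe); // 1750000000
    }
}
{code}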


[jira] [Commented] (FLINK-7977) bump version of compatibility check for Flink 1.4

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242409#comment-16242409
 ] 

ASF GitHub Bot commented on FLINK-7977:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4945
  
Isn't this API compatibility rather than checkpoint/savepoint 
compatibility? And if the former should not 1.4 be checked against 1.3 (which 
should be checked against 1.2, etc.)?


> bump version of compatibility check for Flink 1.4
> -
>
> Key: FLINK-7977
> URL: https://issues.apache.org/jira/browse/FLINK-7977
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0
>
>
> Since Flink maintains backward compatibility check for 2 versions, Flink 1.4 
> should check compatibility with 1.2



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4945: [FLINK-7977][build] bump version of compatibility check f...

2017-11-07 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4945
  
Isn't this API compatibility rather than checkpoint/savepoint 
compatibility? And if the former should not 1.4 be checked against 1.3 (which 
should be checked against 1.2, etc.)?


---


[jira] [Commented] (FLINK-8009) flink-dist pulls in flink-runtime's transitive avro/jackson dependency

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242407#comment-16242407
 ] 

ASF GitHub Bot commented on FLINK-8009:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4972
  
@aljoscha fixed


> flink-dist pulls in flink-runtime's transitive avro/jackson dependency
> --
>
> Key: FLINK-8009
> URL: https://issues.apache.org/jira/browse/FLINK-8009
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.5.0
>
>
> The promotion of transitive dependencies in flink-runtime causes flink-dist 
> to contain _some_ transitive dependencies from flink-shaded-hadoop. (most 
> notably, avro and codehaus.jackson)
> We will either have to add an exclusion for each dependency to flink-dist, 
> set flink-shaded-hadoop to provided in flink-runtime (hacky, but less 
> intrusive), or remove the promotion and explicitly depend on various akka 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4972: [FLINK-8009][build][runtime] Remove transitive dependency...

2017-11-07 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4972
  
@aljoscha fixed


---


[jira] [Commented] (FLINK-7419) Shade jackson dependency in flink-avro

2017-11-07 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242398#comment-16242398
 ] 

Chesnay Schepler commented on FLINK-7419:
-

We could still relocate it though. The avro dependency in flink-dist is, as I 
understand it, only a fallback; if it causes problems, users can always supply 
their own avro version.

> Shade jackson dependency in flink-avro
> --
>
> Key: FLINK-7419
> URL: https://issues.apache.org/jira/browse/FLINK-7419
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> Avro uses {{org.codehouse.jackson}} which also exists in multiple 
> incompatible versions. We should shade it to 
> {{org.apache.flink.shaded.avro.org.codehouse.jackson}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8009) flink-dist pulls in flink-runtime's transitive avro/jackson dependency

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242350#comment-16242350
 ] 

ASF GitHub Bot commented on FLINK-8009:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4972
  
Currently this fails because of missing entries in `dependencyManagement` 
in the root pom.


> flink-dist pulls in flink-runtime's transitive avro/jackson dependency
> --
>
> Key: FLINK-8009
> URL: https://issues.apache.org/jira/browse/FLINK-8009
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.5.0
>
>
> The promotion of transitive dependencies in flink-runtime causes flink-dist 
> to contain _some_ transitive dependencies from flink-shaded-hadoop. (most 
> notably, avro and codehaus.jackson)
> We will either have to add an exclusion for each dependency to flink-dist, 
> set flink-shaded-hadoop to provided in flink-runtime (hacky, but less 
> intrusive), or remove the promotion and explicitly depend on various akka 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4972: [FLINK-8009][build][runtime] Remove transitive dependency...

2017-11-07 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4972
  
Currently this fails because of missing entries in `dependencyManagement` 
in the root pom.


---


[jira] [Commented] (FLINK-7419) Shade jackson dependency in flink-avro

2017-11-07 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242337#comment-16242337
 ] 

Chesnay Schepler commented on FLINK-7419:
-

I'm not entirely sure. Avro _does_ expose jackson in its public API (which 
they want to remove in AVRO-1605), and I have found at least one mention of 
jackson being used in generated classes.

[~StephanEwen] do you know more about this?

> Shade jackson dependency in flink-avro
> --
>
> Key: FLINK-7419
> URL: https://issues.apache.org/jira/browse/FLINK-7419
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> Avro uses {{org.codehouse.jackson}} which also exists in multiple 
> incompatible versions. We should shade it to 
> {{org.apache.flink.shaded.avro.org.codehouse.jackson}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8009) flink-dist pulls in flink-runtime's transitive avro/jackson dependency

2017-11-07 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242328#comment-16242328
 ] 

Chesnay Schepler commented on FLINK-8009:
-

yes, but they should only be pulled in by flink-avro and not flink-runtime.

> flink-dist pulls in flink-runtime's transitive avro/jackson dependency
> --
>
> Key: FLINK-8009
> URL: https://issues.apache.org/jira/browse/FLINK-8009
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.5.0
>
>
> The promotion of transitive dependencies in flink-runtime causes flink-dist 
> to contain _some_ transitive dependencies from flink-shaded-hadoop. (most 
> notably, avro and codehaus.jackson)
> We will either have to add an exclusion for each dependency to flink-dist, 
> set flink-shaded-hadoop to provided in flink-runtime (hacky, but less 
> intrusive), or remove the promotion and explicitly depend on various akka 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8016) Add documentation for KafkaJsonTableSink

2017-11-07 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-8016:


 Summary: Add documentation for KafkaJsonTableSink
 Key: FLINK-8016
 URL: https://issues.apache.org/jira/browse/FLINK-8016
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table API & SQL
Affects Versions: 1.4.0
Reporter: Fabian Hueske
Assignee: Fabian Hueske
 Fix For: 1.4.0


The documentation of available TableSources and TableSinks should be extended to 
include the KafkaJsonTableSinks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-8015) Add Kafka011JsonTableSink

2017-11-07 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-8015:
-
Issue Type: New Feature  (was: Improvement)

> Add Kafka011JsonTableSink
> -
>
> Key: FLINK-8015
> URL: https://issues.apache.org/jira/browse/FLINK-8015
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.5.0
>Reporter: Fabian Hueske
>
> We offer a TableSource for JSON-encoded Kafka 0.11 topics but no TableSink.
> Since Flink's Kafka producer changed for Kafka 0.11 we need new base classes 
> which we can reuse for TableSinks for later Kafka versions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8015) Add Kafka011JsonTableSink

2017-11-07 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-8015:


 Summary: Add Kafka011JsonTableSink
 Key: FLINK-8015
 URL: https://issues.apache.org/jira/browse/FLINK-8015
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Affects Versions: 1.5.0
Reporter: Fabian Hueske


We offer a TableSource for JSON-encoded Kafka 0.11 topics but no TableSink.
Since Flink's Kafka producer changed for Kafka 0.11 we need new base classes 
which we can reuse for TableSinks for later Kafka versions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8014) Add Kafka010JsonTableSink

2017-11-07 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-8014:


 Summary: Add Kafka010JsonTableSink
 Key: FLINK-8014
 URL: https://issues.apache.org/jira/browse/FLINK-8014
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: Fabian Hueske
Assignee: Fabian Hueske
 Fix For: 1.4.0


We offer a TableSource for JSON-encoded Kafka 0.10 topics but no TableSink.
Since the required base classes are already there, a {{Kafka010JsonTableSink}} 
can easily be added. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8013) Check of AggregateFunction merge function signature is too strict.

2017-11-07 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-8013:


 Summary: Check of AggregateFunction merge function signature is 
too strict.
 Key: FLINK-8013
 URL: https://issues.apache.org/jira/browse/FLINK-8013
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: Fabian Hueske
Assignee: Fabian Hueske


The {{AggregationCodeGenerator}} checks that a user-defined 
{{AggregateFunction}} implements all required methods. However, the check for 
the {{merge(accumulator: ACC, its: java.lang.Iterable\[ACC\]): Unit}} method is 
too strict and rejects valid UDAGGs. 

This happens for more complex accumulators such as 
{{Array\[org.apache.flink.api.java.tuple.Tuple2\[java.lang.Integer, 
java.lang.Float\]\]}} because generic types are lost such that the check of the 
argument types of {{merge}} fails.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
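
For reference, the shape of {{merge}} that the code generator looks for, as a 
minimal sketch (the accumulator and function names are made up for illustration):

{code}
import org.apache.flink.table.functions.AggregateFunction;

public class SumAgg extends AggregateFunction<Long, SumAgg.SumAcc> {

    public static class SumAcc {
        public long sum;
    }

    @Override
    public SumAcc createAccumulator() {
        return new SumAcc();
    }

    @Override
    public Long getValue(SumAcc acc) {
        return acc.sum;
    }

    // found by the code generator via reflection
    public void accumulate(SumAcc acc, Long value) {
        acc.sum += value;
    }

    // the signature being checked: merge(ACC, java.lang.Iterable[ACC])
    public void merge(SumAcc acc, Iterable<SumAcc> its) {
        for (SumAcc other : its) {
            acc.sum += other.sum;
        }
    }
}
{code}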


[GitHub] flink pull request #4973: [FLINK-8011][dist] Set flink-python to provided

2017-11-07 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4973

[FLINK-8011][dist] Set flink-python to provided

## What is the purpose of the change

Minor clean-up in the flink-dist pom. flink-python is now set to provided, 
similar to other libraries, and the shading exclusion was removed.

## Verifying this change

Compile flink-dist and check that flink-python is correctly put in the /lib 
folder.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8011

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4973.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4973


commit 9916af8d863845d81444ee870929ebf29b2d5d54
Author: zentol 
Date:   2017-11-07T16:13:57Z

[FLINK-8011][dist] Set flink-python to provided




---


[jira] [Commented] (FLINK-8011) Set dist flink-python dependency to provided

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242290#comment-16242290
 ] 

ASF GitHub Bot commented on FLINK-8011:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4973

[FLINK-8011][dist] Set flink-python to provided

## What is the purpose of the change

Minor clean-up in the flink-dist pom. flink-python is now set to provided, 
similar to other libraries, and the shading exclusion was removed.

## Verifying this change

Compile flink-dist and check that flink-python is correctly put in the /lib 
folder.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8011

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4973.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4973


commit 9916af8d863845d81444ee870929ebf29b2d5d54
Author: zentol 
Date:   2017-11-07T16:13:57Z

[FLINK-8011][dist] Set flink-python to provided




> Set dist flink-python dependency to provided
> 
>
> Key: FLINK-8011
> URL: https://issues.apache.org/jira/browse/FLINK-8011
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.5.0
>
>
> We can simplify the flink-dist pom by setting the flink-python dependency to 
> provided, which allows us to remove an exclusion from the shade plugin 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-8011) Set dist flink-python dependency to provided

2017-11-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-8011:

Fix Version/s: (was: 1.4.0)

> Set dist flink-python dependency to provided
> 
>
> Key: FLINK-8011
> URL: https://issues.apache.org/jira/browse/FLINK-8011
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.5.0
>
>
> We can simplify the flink-dist pom by setting the flink-python dependency to 
> provided, which allows us to remove an exclusion from the shade plugin 
> configuration.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-8012) Table with time attribute cannot be written to CsvTableSink

2017-11-07 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-8012:


 Summary: Table with time attribute cannot be written to 
CsvTableSink
 Key: FLINK-8012
 URL: https://issues.apache.org/jira/browse/FLINK-8012
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Affects Versions: 1.3.2, 1.4.0
Reporter: Fabian Hueske
Assignee: Fabian Hueske
Priority: Critical
 Fix For: 1.4.0, 1.3.3


A Table with a time attribute ({{TimeIndicatorTypeInfo}}) cannot be written to 
a {{CsvTableSink}}.
Trying to do so results in the following exception:

{code}
Exception in thread "main" org.apache.flink.table.api.TableException: The time 
indicator type is an internal type only.
at 
org.apache.flink.table.api.TableEnvironment.org$apache$flink$table$api$TableEnvironment$$validateFieldType$1(TableEnvironment.scala:937)
at 
org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$2.apply(TableEnvironment.scala:963)
at 
org.apache.flink.table.api.TableEnvironment$$anonfun$generateRowConverterFunction$2.apply(TableEnvironment.scala:960)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at 
org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:960)
at 
org.apache.flink.table.api.StreamTableEnvironment.getConversionMapper(StreamTableEnvironment.scala:289)
at 
org.apache.flink.table.api.StreamTableEnvironment.translate(StreamTableEnvironment.scala:810)
at 
org.apache.flink.table.api.StreamTableEnvironment.writeToSink(StreamTableEnvironment.scala:263)
at org.apache.flink.table.api.Table.writeToSink(table.scala:857)
at org.apache.flink.table.api.Table.writeToSink(table.scala:830)
{code}

The time attribute should be automatically converted into a {{SQL_TIMESTAMP}} 
type.
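
For illustration, a minimal sketch that reproduces the problem and shows the 
intended conversion (class name, paths and the workaround are made up for the 
example, not taken from an actual fix):

{code}
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.sinks.CsvTableSink;

public class TimeIndicatorSinkRepro {
	public static void main(String[] args) throws Exception {
		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
		StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

		DataStream<Tuple2<Long, String>> stream =
			env.fromElements(Tuple2.of(1L, "a"), Tuple2.of(2L, "b"));

		// 'proctime.proctime' appends a time attribute (TimeIndicatorTypeInfo)
		Table table = tEnv.fromDataStream(stream, "id, name, proctime.proctime");

		// Writing the table directly fails during translation with
		// "The time indicator type is an internal type only.":
		// table.writeToSink(new CsvTableSink("/tmp/out.csv", ","));

		// Possible workaround: cast the attribute to SQL_TIMESTAMP first.
		table.select("id, name, proctime.cast(SQL_TIMESTAMP) as ts")
			.writeToSink(new CsvTableSink("/tmp/out.csv", ","));

		env.execute();
	}
}
{code}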



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8009) flink-dist pulls in flink-runtime's transitive avro/jackson dependency

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242278#comment-16242278
 ] 

ASF GitHub Bot commented on FLINK-8009:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4972

[FLINK-8009][build][runtime] Remove transitive dependency promotion

## What is the purpose of the change

This PR removes the dependency promotion from flink-runtime. The promotion 
appears to interact oddly with `optional` dependencies, and is generally 
prone to inducing unforeseen side-effects.

To accomplish the original goal behind the promotion I've added explicit 
dependencies on akka-stream and akka-protobuf, which are the transitive 
dependencies that we want to keep visible after the shading.

For reference, this is a comparison of the dependency footprint of 
flink-runtime as seen from another module (flink-dist), with and without 
dependency promotion:

```
Promotion enabled:
+- org.apache.flink:flink-runtime_2.11:jar:1.4-SNAPSHOT:compile
|  +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
|  +- org.objenesis:objenesis:jar:2.1:compile
|  +- 
org.apache.flink:flink-queryable-state-client-java_2.11:jar:1.4-SNAPSHOT:compile
|  +- org.tukaani:xz:jar:1.0:compile
|  +- org.apache.avro:avro:jar:1.8.2:compile
|  +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
|  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
|  +- com.thoughtworks.paranamer:paranamer:jar:2.7:compile
|  +- commons-codec:commons-codec:jar:1.10:compile
|  +- commons-logging:commons-logging:jar:1.1.3:compile
|  +- commons-lang:commons-lang:jar:2.6:compile
|  +- commons-configuration:commons-configuration:jar:1.7:compile
|  +- commons-digester:commons-digester:jar:1.8.1:compile
|  +- commons-beanutils:commons-beanutils-bean-collections:jar:1.8.3:compile
|  +- commons-io:commons-io:jar:2.4:compile
|  +- org.apache.flink:flink-shaded-netty:jar:4.0.27.Final-1.0:compile
|  +- org.apache.flink:flink-shaded-guava:jar:18.0-1.0:compile
|  +- org.apache.flink:flink-shaded-jackson:jar:2.7.9-2.0:compile
|  +- commons-cli:commons-cli:jar:1.3.1:compile
|  +- org.javassist:javassist:jar:3.18.2-GA:compile
|  +- com.typesafe.akka:akka-actor_2.11:jar:2.4.20:compile
|  +- com.typesafe:config:jar:1.3.0:compile
|  +- org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
|  +- com.typesafe.akka:akka-stream_2.11:jar:2.4.20:compile
|  +- org.reactivestreams:reactive-streams:jar:1.0.0:compile
|  +- com.typesafe:ssl-config-core_2.11:jar:0.2.1:compile
|  +- org.scala-lang.modules:scala-parser-combinators_2.11:jar:1.0.4:compile
|  +- com.typesafe.akka:akka-protobuf_2.11:jar:2.4.20:compile
|  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.20:compile
|  +- org.clapper:grizzled-slf4j_2.11:jar:1.0.2:compile
|  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
|  +- com.twitter:chill_2.11:jar:0.7.4:compile
|  \- com.twitter:chill-java:jar:0.7.4:compile
```

```
Promotion disabled (does NOT include additional akka dependencies):
+- org.apache.flink:flink-runtime_2.11:jar:1.4-SNAPSHOT:compile
|  +- 
org.apache.flink:flink-queryable-state-client-java_2.11:jar:1.4-SNAPSHOT:compile
|  +- commons-io:commons-io:jar:2.4:compile
|  +- org.apache.flink:flink-shaded-netty:jar:4.0.27.Final-1.0:compile
|  +- org.apache.flink:flink-shaded-guava:jar:18.0-1.0:compile
|  +- org.apache.flink:flink-shaded-jackson:jar:2.7.9-2.0:compile
|  +- commons-cli:commons-cli:jar:1.3.1:compile
|  +- org.javassist:javassist:jar:3.18.2-GA:compile
|  +- com.typesafe.akka:akka-actor_2.11:jar:2.4.20:compile
|  |  +- com.typesafe:config:jar:1.3.0:compile
|  |  \- org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
|  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.20:compile
|  +- org.clapper:grizzled-slf4j_2.11:jar:1.0.2:compile
|  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
|  \- com.twitter:chill_2.11:jar:0.7.4:compile
| \- com.twitter:chill-java:jar:0.7.4:compile
```

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8009b

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4972.patch

To close this pull request, 

[GitHub] flink pull request #4972: [FLINK-8009][build][runtime] Remove transitive dep...

2017-11-07 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4972

[FLINK-8009][build][runtime] Remove transitive dependency promotion

## What is the purpose of the change

This PR removes the dependency promotion from flink-runtime. The promotion 
appears to interact oddly with `optional` dependencies, and is generally 
prone to inducing unforeseen side-effects.

To accomplish the original goal behind the promotion I've added explicit 
dependencies on akka-stream and akka-protobuf, which are the transitive 
dependencies that we want to keep visible after the shading.

For reference, this is a comparison of the dependency footprint of 
flink-runtime as seen from another module (flink-dist), with and without 
dependency promotion:

```
Promotion enabled:
+- org.apache.flink:flink-runtime_2.11:jar:1.4-SNAPSHOT:compile
|  +- com.esotericsoftware.minlog:minlog:jar:1.2:compile
|  +- org.objenesis:objenesis:jar:2.1:compile
|  +- 
org.apache.flink:flink-queryable-state-client-java_2.11:jar:1.4-SNAPSHOT:compile
|  +- org.tukaani:xz:jar:1.0:compile
|  +- org.apache.avro:avro:jar:1.8.2:compile
|  +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
|  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
|  +- com.thoughtworks.paranamer:paranamer:jar:2.7:compile
|  +- commons-codec:commons-codec:jar:1.10:compile
|  +- commons-logging:commons-logging:jar:1.1.3:compile
|  +- commons-lang:commons-lang:jar:2.6:compile
|  +- commons-configuration:commons-configuration:jar:1.7:compile
|  +- commons-digester:commons-digester:jar:1.8.1:compile
|  +- commons-beanutils:commons-beanutils-bean-collections:jar:1.8.3:compile
|  +- commons-io:commons-io:jar:2.4:compile
|  +- org.apache.flink:flink-shaded-netty:jar:4.0.27.Final-1.0:compile
|  +- org.apache.flink:flink-shaded-guava:jar:18.0-1.0:compile
|  +- org.apache.flink:flink-shaded-jackson:jar:2.7.9-2.0:compile
|  +- commons-cli:commons-cli:jar:1.3.1:compile
|  +- org.javassist:javassist:jar:3.18.2-GA:compile
|  +- com.typesafe.akka:akka-actor_2.11:jar:2.4.20:compile
|  +- com.typesafe:config:jar:1.3.0:compile
|  +- org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
|  +- com.typesafe.akka:akka-stream_2.11:jar:2.4.20:compile
|  +- org.reactivestreams:reactive-streams:jar:1.0.0:compile
|  +- com.typesafe:ssl-config-core_2.11:jar:0.2.1:compile
|  +- org.scala-lang.modules:scala-parser-combinators_2.11:jar:1.0.4:compile
|  +- com.typesafe.akka:akka-protobuf_2.11:jar:2.4.20:compile
|  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.20:compile
|  +- org.clapper:grizzled-slf4j_2.11:jar:1.0.2:compile
|  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
|  +- com.twitter:chill_2.11:jar:0.7.4:compile
|  \- com.twitter:chill-java:jar:0.7.4:compile
```

```
Promotion disabled (does NOT include additional akka dependencies):
+- org.apache.flink:flink-runtime_2.11:jar:1.4-SNAPSHOT:compile
|  +- 
org.apache.flink:flink-queryable-state-client-java_2.11:jar:1.4-SNAPSHOT:compile
|  +- commons-io:commons-io:jar:2.4:compile
|  +- org.apache.flink:flink-shaded-netty:jar:4.0.27.Final-1.0:compile
|  +- org.apache.flink:flink-shaded-guava:jar:18.0-1.0:compile
|  +- org.apache.flink:flink-shaded-jackson:jar:2.7.9-2.0:compile
|  +- commons-cli:commons-cli:jar:1.3.1:compile
|  +- org.javassist:javassist:jar:3.18.2-GA:compile
|  +- com.typesafe.akka:akka-actor_2.11:jar:2.4.20:compile
|  |  +- com.typesafe:config:jar:1.3.0:compile
|  |  \- org.scala-lang.modules:scala-java8-compat_2.11:jar:0.7.0:compile
|  +- com.typesafe.akka:akka-slf4j_2.11:jar:2.4.20:compile
|  +- org.clapper:grizzled-slf4j_2.11:jar:1.0.2:compile
|  +- com.github.scopt:scopt_2.11:jar:3.5.0:compile
|  \- com.twitter:chill_2.11:jar:0.7.4:compile
| \- com.twitter:chill-java:jar:0.7.4:compile
```
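
The explicit entries would look roughly like this in the flink-runtime pom (a 
sketch; versions are taken from the dependency trees above, and the property 
names are assumptions):

```xml
<dependency>
	<groupId>com.typesafe.akka</groupId>
	<artifactId>akka-stream_${scala.binary.version}</artifactId>
	<version>${akka.version}</version> <!-- 2.4.20 in the trees above -->
</dependency>
<dependency>
	<groupId>com.typesafe.akka</groupId>
	<artifactId>akka-protobuf_${scala.binary.version}</artifactId>
	<version>${akka.version}</version>
</dependency>
```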

## Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (yes)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8009b

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4972.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4972


commit 8fad5f3ccb3b8cf7029a5aa035eb0f52d7878e1d
Author: zentol 
Date:   2017-11-07T15:58:53Z


[jira] [Commented] (FLINK-8009) flink-dist pulls in flink-runtime's transitive avro/jackson dependency

2017-11-07 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242275#comment-16242275
 ] 

Aljoscha Krettek commented on FLINK-8009:
-

Btw, with the recent changes by [~StephanEwen] Avro is supposed to be in 
{{flink-dist}} for backwards compatibility.

> flink-dist pulls in flink-runtime's transitive avro/jackson dependency
> --
>
> Key: FLINK-8009
> URL: https://issues.apache.org/jira/browse/FLINK-8009
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.5.0
>
>
> The promotion of transitive dependencies in flink-runtime causes flink-dist 
> to contain _some_ transitive dependencies from flink-shaded-hadoop. (most 
> notably, avro and codehaus.jackson)
> We will either have to add an exclusion for each dependency to flink-dist, 
> set flink-shaded-hadoop to provided in flink-runtime (hacky, but less 
> intrusive), or remove the promotion and explicitly depend on various akka 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-7073) Create RESTful JobManager endpoint

2017-11-07 Thread Till Rohrmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann reassigned FLINK-7073:


Assignee: Till Rohrmann

> Create RESTful JobManager endpoint
> --
>
> Key: FLINK-7073
> URL: https://issues.apache.org/jira/browse/FLINK-7073
> Project: Flink
>  Issue Type: Sub-task
>  Components: JobManager
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>  Labels: flip-6
>
> In order to communicate from the {{client}} with a running {{JobManager}} we 
> have to provide a RESTful endpoint for job specific operations. These 
> operations are:
> * Cancel job (PUT): Cancels the given job
> * Stop job (PUT): Stops the given job
> * Take savepoint (POST): Takes a savepoint for the given job (How to return 
> the path under which the savepoint was stored? Maybe by always having to 
> specify a path)
> * Poll/Subscribe to notifications
> The REST JobManager endpoint should also serve the information required for 
> the web ui.
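
For illustration, a client-side call against such an endpoint could look 
roughly like this (host, port and path are hypothetical, not the final API):

{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class CancelJobExample {
	public static void main(String[] args) throws Exception {
		// args[0] is the id of the job to cancel (hypothetical endpoint shape)
		URL url = new URL("http://jobmanager:8081/jobs/" + args[0] + "/cancel");
		HttpURLConnection conn = (HttpURLConnection) url.openConnection();
		conn.setRequestMethod("PUT"); // cancel is proposed as a PUT operation
		System.out.println("Response code: " + conn.getResponseCode());
		conn.disconnect();
	}
}
{code}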



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7419) Shade jackson dependency in flink-avro

2017-11-07 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242262#comment-16242262
 ] 

Aljoscha Krettek commented on FLINK-7419:
-

Do you know if generated code uses the Jackson dependency? If yes, I think we 
cannot even shade that.

> Shade jackson dependency in flink-avro
> --
>
> Key: FLINK-7419
> URL: https://issues.apache.org/jira/browse/FLINK-7419
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> Avro uses {{org.codehaus.jackson}} which also exists in multiple 
> incompatible versions. We should shade it to 
> {{org.apache.flink.shaded.avro.org.codehaus.jackson}}.
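
A relocation along these lines in the flink-avro shade-plugin configuration 
would express that (standard maven-shade-plugin syntax; a sketch only, not the 
final change):

{code}
<relocation>
	<pattern>org.codehaus.jackson</pattern>
	<shadedPattern>org.apache.flink.shaded.avro.org.codehaus.jackson</shadedPattern>
</relocation>
{code}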



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-8009) flink-dist pulls in flink-runtime's transitive avro/jackson dependency

2017-11-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-8009:

Summary: flink-dist pulls in flink-runtime's transitive avro/jackson 
dependency  (was: flink-dist contains flink-runtime's transitive hadoop 
dependencies)

> flink-dist pulls in flink-runtime's transitive avro/jackson dependency
> --
>
> Key: FLINK-8009
> URL: https://issues.apache.org/jira/browse/FLINK-8009
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.5.0
>
>
> The promotion of transitive dependencies in flink-runtime causes flink-dist 
> to contain _some_ transitive dependencies from flink-shaded-hadoop. (most 
> notably, avro and codehaus.jackson)
> We will either have to add an exclusion for each dependency to flink-dist, 
> set flink-shaded-hadoop to provided in flink-runtime (hacky, but less 
> intrusive), or remove the promotion and explicitly depend on various akka 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-8009) flink-dist contains flink-runtime's transitive hadoop dependencies

2017-11-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-8009:
---

Assignee: Chesnay Schepler

> flink-dist contains flink-runtime's transitive hadoop dependencies
> --
>
> Key: FLINK-8009
> URL: https://issues.apache.org/jira/browse/FLINK-8009
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.5.0
>
>
> The promotion of transitive dependencies in flink-runtime causes flink-dist 
> to contain _some_ transitive dependencies from flink-shaded-hadoop. (most 
> notably, avro and codehaus.jackson)
> We will either have to add an exclusion for each dependency to flink-dist, 
> set flink-shaded-hadoop to provided in flink-runtime (hacky, but less 
> intrusive), or remove the promotion and explicitly depend on various akka 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8009) flink-dist contains flink-runtime's transitive hadoop dependencies

2017-11-07 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242254#comment-16242254
 ] 

Chesnay Schepler commented on FLINK-8009:
-

avro and jackson are still pulled in. The dependency-reduced pom contains both 
of these dependencies, with neither marked as optional, even though hadoop is. 
I don't know why that happens, though; it really is quite odd.

> flink-dist contains flink-runtime's transitive hadoop dependencies
> --
>
> Key: FLINK-8009
> URL: https://issues.apache.org/jira/browse/FLINK-8009
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.5.0
>
>
> The promotion of transitive dependencies in flink-runtime causes flink-dist 
> to contain _some_ transitive dependencies from flink-shaded-hadoop. (most 
> notably, avro and codehaus.jackson)
> We will either have to add an exclusion for each dependency to flink-dist, 
> set flink-shaded-hadoop to provided in flink-runtime (hacky, but less 
> intrusive), or remove the promotion and explicitly depend on various akka 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8008) PojoTypeInfo should sort fields based on boolean

2017-11-07 Thread Muhammad Imran Tariq (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242255#comment-16242255
 ] 

Muhammad Imran Tariq commented on FLINK-8008:
-

I am using the POJO way to read a CSV file. Say the CSV file has 2 columns:
ID - Integer | AGE - Double
When the file is read, 'id' is the first column in the data, but PojoTypeInfo 
reads the fields in sorted order and places the 'age' column at index 0. As a 
result, the Double datatype is applied to the ID column. That's why I don't 
want the fields to be sorted.
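
For what it's worth, the DataSet API already lets you bind CSV columns to POJO 
fields explicitly, which sidesteps the sorted order (a minimal sketch; the file 
path and field names are made up):

{code}
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class CsvPojoExample {

	public static class Record {
		public int id;
		public double age;

		public Record() {}
	}

	public static void main(String[] args) throws Exception {
		ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

		// pojoType(...) takes the POJO fields in CSV column order, so the
		// Integer column is parsed into 'id' and the Double column into 'age',
		// regardless of how PojoTypeInfo sorts the fields internally.
		DataSet<Record> records = env
			.readCsvFile("file:///tmp/input.csv")
			.pojoType(Record.class, "id", "age");

		records.print();
	}
}
{code}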

> PojoTypeInfo should sort fields based on boolean
> ---
>
> Key: FLINK-8008
> URL: https://issues.apache.org/jira/browse/FLINK-8008
> Project: Flink
>  Issue Type: Improvement
>  Components: DataSet API
>Affects Versions: 1.3.2
>Reporter: Muhammad Imran Tariq
>Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Flink's PojoTypeInfo sorts the fields array that is passed into its 
> constructor. I want to create another constructor that takes a boolean 
> parameter controlling whether the fields are sorted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8008) PojoTypeInfo should sort fields based on boolean

2017-11-07 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242234#comment-16242234
 ] 

Aljoscha Krettek commented on FLINK-8008:
-

What's the reason for not sorting the fields? This is done to get a stable 
order of the fields.

> PojoTypeInfo should sort fields based on boolean
> ---
>
> Key: FLINK-8008
> URL: https://issues.apache.org/jira/browse/FLINK-8008
> Project: Flink
>  Issue Type: Improvement
>  Components: DataSet API
>Affects Versions: 1.3.2
>Reporter: Muhammad Imran Tariq
>Priority: Minor
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Flink's PojoTypeInfo sorts the fields array that is passed into its 
> constructor. I want to create another constructor that takes a boolean 
> parameter controlling whether the fields are sorted.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-8009) flink-dist contains flink-runtime's transitive hadoop dependencies

2017-11-07 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242231#comment-16242231
 ] 

Aljoscha Krettek commented on FLINK-8009:
-

So they are included even though it's {{optional}}?

> flink-dist contains flink-runtime's transitive hadoop dependencies
> --
>
> Key: FLINK-8009
> URL: https://issues.apache.org/jira/browse/FLINK-8009
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0, 1.5.0
>
>
> The promotion of transitive dependencies in flink-runtime causes flink-dist 
> to contain _some_ transitive dependencies from flink-shaded-hadoop. (most 
> notably, avro and codehaus.jackson)
> We will either have to add an exclusion for each dependency to flink-dist, 
> set flink-shaded-hadoop to provided in flink-runtime (hacky, but less 
> intrusive), or remove the promotion and explicitly depend on various akka 
> dependencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7765) Enable dependency convergence

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1624#comment-1624
 ] 

ASF GitHub Bot commented on FLINK-7765:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4777#discussion_r149409270
  
--- Diff: pom.xml ---
@@ -289,6 +289,48 @@ under the License.
 			<version>1.8.2</version>
 		</dependency>
 
+		<dependency>
+			<groupId>org.hamcrest</groupId>
+			<artifactId>hamcrest-core</artifactId>
+			<version>${hamcrest.version}</version>
+		</dependency>
+
+		<dependency>
+			<groupId>org.objenesis</groupId>
+			<artifactId>objenesis</artifactId>
+			<version>2.2</version>
--- End diff --

is there an existing dependency that pulls in 2.2? (I only found 2.1 usages)


> Enable dependency convergence
> -
>
> Key: FLINK-7765
> URL: https://issues.apache.org/jira/browse/FLINK-7765
> Project: Flink
>  Issue Type: Improvement
>  Components: Build System
>Reporter: Piotr Nowojski
>Assignee: Piotr Nowojski
>
> For motivation check https://issues.apache.org/jira/browse/FLINK-7739
> SubTasks of this task depends on one another - to enable convergence in 
> `flink-runtime` it has to be enabled for `flink-shaded-hadoop` first.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4777: [FLINK-7765] Enable dependency convergence

2017-11-07 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4777#discussion_r149409270
  
--- Diff: pom.xml ---
@@ -289,6 +289,48 @@ under the License.
 			<version>1.8.2</version>
 		</dependency>
 
+		<dependency>
+			<groupId>org.hamcrest</groupId>
+			<artifactId>hamcrest-core</artifactId>
+			<version>${hamcrest.version}</version>
+		</dependency>
+
+		<dependency>
+			<groupId>org.objenesis</groupId>
+			<artifactId>objenesis</artifactId>
+			<version>2.2</version>
--- End diff --

is there an existing dependency that pulls in 2.2? (I only found 2.1 usages)


---


[jira] [Commented] (FLINK-8000) Sort REST handler URLs in RestServerEndpoint

2017-11-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242192#comment-16242192
 ] 

ASF GitHub Bot commented on FLINK-8000:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4958
  
you can also remove the comment in the DispatcherRestEndpoint that says to 
register the static file handler last.


> Sort REST handler URLs in RestServerEndpoint
> 
>
> Key: FLINK-8000
> URL: https://issues.apache.org/jira/browse/FLINK-8000
> Project: Flink
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 1.4.0
>Reporter: Till Rohrmann
>Assignee: Till Rohrmann
>Priority: Minor
>  Labels: flip-6
>
> In order to make the {{RestServerEndpoint}} more easily extendable, we should 
> automatically sort the returned list of rest handler when calling 
> {{RestServerEndpoint#initializeHandlers}}. That way the order in which the 
> handlers are added to the list is independent of the actual registration 
> order. This is, for example, important for the static file server which 
> always needs to be registered last.
> I propose to add a special {{String}} {{Comparator}} which considers the 
> character {{':'}} to be the character with the largest value. That way we 
> should always get the following sort order:
> - URLs without path parameters have precedence over similar URLs where parts 
> are replaced by path parameters (e.g. {{/jobs/overview}}, {{/jobs/:jobid}} 
> and {{/jobs/:jobid/config}}, {{/jobs/:jobid/vertices/:vertexId}})
> - Prefixes are sorted before URLs containing the prefix (e.g. {{/jobs}}, 
> {{/jobs/overview}})
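
For illustration, a minimal sketch of such a comparator and the resulting 
order (an illustrative implementation, not necessarily the one that will land 
in {{RestServerEndpoint}}):

{code}
import java.util.Arrays;
import java.util.Comparator;

public class RestHandlerUrlComparator implements Comparator<String> {
	@Override
	public int compare(String a, String b) {
		int len = Math.min(a.length(), b.length());
		for (int i = 0; i < len; i++) {
			char ca = a.charAt(i);
			char cb = b.charAt(i);
			if (ca == cb) {
				continue;
			}
			if (ca == ':') {
				return 1;  // ':' is treated as the largest character
			}
			if (cb == ':') {
				return -1;
			}
			return Character.compare(ca, cb);
		}
		return Integer.compare(a.length(), b.length()); // prefixes sort first
	}

	public static void main(String[] args) {
		String[] urls = {"/jobs/:jobid/config", "/jobs/:jobid", "/jobs/overview", "/jobs"};
		Arrays.sort(urls, new RestHandlerUrlComparator());
		// prints: [/jobs, /jobs/overview, /jobs/:jobid, /jobs/:jobid/config]
		System.out.println(Arrays.toString(urls));
	}
}
{code}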



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4958: [FLINK-8000] Sort Rest handler URLS in RestServerEndpoint

2017-11-07 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4958
  
you can also remove the comment in the DispatcherRestEndpoint that says to 
register the static file handler last.


---


[jira] [Updated] (FLINK-7991) Cleanup kafka example jar filters

2017-11-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7991:

Affects Version/s: 1.5.0

> Cleanup kafka example jar filters
> -
>
> Key: FLINK-7991
> URL: https://issues.apache.org/jira/browse/FLINK-7991
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0, 1.5.0
>
>
> The kafka example jar shading configuration includes the following files:
> {code}
> org/apache/flink/streaming/examples/kafka/**
> org/apache/flink/streaming/**
> org/apache/kafka/**
> org/apache/curator/**
> org/apache/zookeeper/**
> org/apache/jute/**
> org/I0Itec/**
> jline/**
> com/yammer/**
> kafka/**
> {code}
> Problems:
> * the inclusion of org.apache.flink.streaming causes large parts of the API 
> (and runtime...) to be included in the jar, along with the _Twitter_ source and 
> *all* other examples
> * the yammer, jline, I0Itec, jute, zookeeper and curator inclusions are 
> ineffective; none of these classes show up in the example jar
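
A narrowed filter along these lines would address both points (a sketch only; 
the connector include and the final include set would need to be verified 
against the actual jar contents):

{code}
<filter>
	<artifact>*:*</artifact>
	<includes>
		<!-- only the example's own classes and the kafka stack it uses -->
		<include>org/apache/flink/streaming/examples/kafka/**</include>
		<include>org/apache/flink/streaming/connectors/kafka/**</include>
		<include>org/apache/kafka/**</include>
		<include>kafka/**</include>
	</includes>
</filter>
{code}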



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7991) Cleanup kafka example jar filters

2017-11-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler updated FLINK-7991:

Fix Version/s: 1.5.0

> Cleanup kafka example jar filters
> -
>
> Key: FLINK-7991
> URL: https://issues.apache.org/jira/browse/FLINK-7991
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.4.0, 1.5.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0, 1.5.0
>
>
> The kafka example jar shading configuration includes the following files:
> {code}
> org/apache/flink/streaming/examples/kafka/**
> org/apache/flink/streaming/**
> org/apache/kafka/**
> org/apache/curator/**
> org/apache/zookeeper/**
> org/apache/jute/**
> org/I0Itec/**
> jline/**
> com/yammer/**
> kafka/**
> {code}
> Problems:
> * the inclusion of org.apache.flink.streaming causes large parts of the API 
> (and runtime...) to be included in the jar, along with the _Twitter_ source and 
> *all* other examples
> * the yammer, jline, I0Itec, jute, zookeeper and curator inclusions are 
> ineffective; none of these classes show up in the example jar



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-7991) Cleanup kafka example jar filters

2017-11-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-7991:
---

Assignee: Chesnay Schepler

> Cleanup kafka example jar filters
> -
>
> Key: FLINK-7991
> URL: https://issues.apache.org/jira/browse/FLINK-7991
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>
> The kafka example jar shading configuration includes the following files:
> {code}
> org/apache/flink/streaming/examples/kafka/**
> org/apache/flink/streaming/**
> org/apache/kafka/**
> org/apache/curator/**
> org/apache/zookeeper/**
> org/apache/jute/**
> org/I0Itec/**
> jline/**
> com/yammer/**
> kafka/**
> {code}
> Problems:
> * the inclusion of org.apache.flink.streaming causes large parts of the API 
> (and runtime...) to be included in the jar, along with the _Twitter_ source and 
> *all* other examples
> * the yammer, jline, I0Itec, jute, zookeeper and curator inclusions are 
> ineffective; none of these classes show up in the example jar



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4971: [FLINK-8010][build] Bump remaining flink-shaded versions

2017-11-07 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/4971
  
+1 (once travis is green)


---


[jira] [Assigned] (FLINK-7419) Shade jackson dependency in flink-avro

2017-11-07 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-7419:
---

Assignee: Chesnay Schepler  (was: Fang Yong)

> Shade jackson dependency in flink-avro
> --
>
> Key: FLINK-7419
> URL: https://issues.apache.org/jira/browse/FLINK-7419
> Project: Flink
>  Issue Type: Sub-task
>  Components: Build System
>Reporter: Stephan Ewen
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.4.0
>
>
> Avro uses {{org.codehaus.jackson}} which also exists in multiple 
> incompatible versions. We should shade it to 
> {{org.apache.flink.shaded.avro.org.codehaus.jackson}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

