[jira] [Commented] (FLINK-6995) Add a warning to outdated documentation

2017-06-26 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062594#comment-16062594
 ] 

Timo Walther commented on FLINK-6995:
-

Thanks for offering your help [~mingleizhang]. The removal of very old versions 
can only be done by a committer. It would be great if you could open a PR with a 
warning for the docs of the release-1.2 branch, similar to the release-1.1 
warning.

> Add a warning to outdated documentation
> ---
>
> Key: FLINK-6995
> URL: https://issues.apache.org/jira/browse/FLINK-6995
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Timo Walther
>
> When I search for "flink yarn" on Google, the first result is an outdated 0.8 
> release documentation page. We should add a warning to outdated documentation 
> pages.
> There are other problems as well:
> The main page only links to 1.3 and 1.4, but the flink-docs-master 
> documentation links to 1.3, 1.2, 1.1, and 1.0. And each of those packages 
> only links to older releases, so if a user arrives on a 1.2 page they won't 
> see 1.3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6992) add support for IN, NOT IN in tableAPI

2017-06-26 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062605#comment-16062605
 ] 

Timo Walther commented on FLINK-6992:
-

Yes, the IN for a list of literals is contained in the PR. As far as I can 
remember, we could merge this part. The subquery part was a bit hacky, because 
it was not valid for all subqueries. It was only done in `Table.filter`, but 
actually there are special `RexSubQuery` expressions and conversion rules from 
Calcite that we should use if possible.
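
For context, the literal variant under discussion looks roughly like this in 
the Scala Table API (a sketch based on the ticket's example; the field name is 
made up and this is not necessarily the final merged API):

```
// IN over a list of literals, as proposed for the Table API.
val result = table
  .where('a in ("x", "y", "z"))
  .select('a)
```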

> add support for IN, NOT IN in tableAPI
> --
>
> Key: FLINK-6992
> URL: https://issues.apache.org/jira/browse/FLINK-6992
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Ruidong Li
>Assignee: Ruidong Li
>Priority: Minor
>
> to support table.where('a in ("x", "y", "z"))



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-6942) Add E() supported in TableAPI

2017-06-26 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reassigned FLINK-6942:
---

Assignee: Timo Walther

> Add E() supported in TableAPI
> -
>
> Key: FLINK-6942
> URL: https://issues.apache.org/jira/browse/FLINK-6942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Timo Walther
>  Labels: starter
>
> See FLINK-6960 for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6995) Add a warning to outdated documentation

2017-06-26 Thread David Anderson (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062612#comment-16062612
 ] 

David Anderson commented on FLINK-6995:
---

[~mingleizhang] [~twalthr] Adding the warning for 1.2 is both easier and more 
complex than you might expect. The warning is already present, but it has to be 
enabled by setting site.is_latest to false. A PR that does this on the 
release-1.2 branch would be welcome.

But we also need to modify https://flink.apache.org/q/stable-docs.html so that 
it points to the 1.3 release. Currently, all of the old-version warnings still 
point to the 1.2 release. So we also need a PR on the asf-site branch of 
flink-web that modifies q/stable-docs.html to point to 1.3, and that PR needs 
to be merged first.

As for removing the old doc releases, check with [~uce].
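
(For readers following along: since the docs are built with Jekyll, the change 
amounts to setting `is_latest: false` in the release-1.2 branch's `_config.yml`, 
which the doc layout checks with a Liquid guard along the lines of 
`{% unless site.is_latest %} … {% endunless %}` to show the banner; the exact 
markup in the Flink docs may differ.)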

> Add a warning to outdated documentation
> ---
>
> Key: FLINK-6995
> URL: https://issues.apache.org/jira/browse/FLINK-6995
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Timo Walther
>
> When I search for "flink yarn" on Google, the first result is an outdated 0.8 
> release documentation page. We should add a warning to outdated documentation 
> pages.
> There are other problems as well:
> The main page only links to 1.3 and 1.4, but the flink-docs-master 
> documentation links to 1.3, 1.2, 1.1, and 1.0. And each of those packages 
> only links to older releases, so if a user arrives on a 1.2 page they won't 
> see 1.3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4181: Add E() support in Table API

2017-06-26 Thread twalthr
GitHub user twalthr opened a pull request:

https://github.com/apache/flink/pull/4181

Add E() support in Table API

Adds E() to Table API. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/twalthr/flink FLINK-6942

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4181.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4181


commit b84086f0e227a81b4f78669a4998c0b3be12125b
Author: twalthr 
Date:   2017-06-26T07:28:59Z

Add E() support in Table API




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #4181: [FLINK-6942] [table] Add E() support in Table API

2017-06-26 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4181
  
Hi @twalthr, thanks a lot for opening this PR. It looks great to me. My only 
suggestion is to add documentation for the Table API.

Best,
SunJincheng


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4138: [FLINK-6925][table]Add CONCAT/CONCAT_WS supported ...

2017-06-26 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123946739
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * All build-in scalar scalar functions.
--- End diff --

`Built-in scalar runtime functions.`


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4138: [FLINK-6925][table]Add CONCAT/CONCAT_WS supported ...

2017-06-26 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123945022
  
--- Diff: docs/dev/table/sql.md ---
@@ -1573,6 +1573,29 @@ INITCAP(string)
   
 
 
+
+  
+{% highlight text %}
+CONCAT(string1, string2,...)
+{% endhighlight %}
+  
+  
+Returns the string that results from concatenating the 
arguments. Returns NULL if any argument is NULL.
+  
+
+
+
+  
+{% highlight text %}
+CONCAT_WS(separator, string1, string2,...)
+{% endhighlight %}
+  
+  
+Returns the string that results from concatenating the 
arguments and separator. The separator is added between the strings to be 
concatenated. Returns NULL If the separator is NULL.
--- End diff --

Replace `and` with `using a`. Can you add a little example here as well?
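
(Following the MySQL semantics the issue description cites, such an example 
might read: `CONCAT_WS(',', 'a', NULL, 'b')` returns `'a,b'`; NULL values after 
the separator are skipped, while empty strings are kept.)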


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4138: [FLINK-6925][table]Add CONCAT/CONCAT_WS supported ...

2017-06-26 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123946426
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * All build-in scalar scalar functions.
+  */
+class ScalarFunctions {}
+
+object ScalarFunctions {
+
+  def power(a: Double, b: JBigDecimal): Double = {
+    Math.pow(a, b.doubleValue())
+  }
+
+  /**
+    * Returns the string that results from concatenating the arguments.
+    * Returns NULL if any argument is NULL.
+    */
+  @varargs
--- End diff --

This is an internal method. We don't need varargs here; we can use 
`Array[String]`.
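
A minimal sketch of the suggested change (illustrative only, not the code that 
was eventually merged):

```
// Internal runtime method: the code generator can pass an array directly,
// so no @varargs adapter is needed.
def concat(args: Array[String]): String = {
  val sb = new java.lang.StringBuilder
  var i = 0
  while (i < args.length) {
    if (args(i) == null) {
      return null
    }
    sb.append(args(i))
    i += 1
  }
  sb.toString
}
```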


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4138: [FLINK-6925][table]Add CONCAT/CONCAT_WS supported ...

2017-06-26 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123944651
  
--- Diff: docs/dev/table/sql.md ---
@@ -1573,6 +1573,29 @@ INITCAP(string)
   
 
 
+
+  
+{% highlight text %}
+CONCAT(string1, string2,...)
+{% endhighlight %}
+  
+  
+Returns the string that results from concatenating the 
arguments. Returns NULL if any argument is NULL.
--- End diff --

Can you add a little example? `E.g. CONCAT('a', 'b', 'c') returns 'XXX'.`
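
(With the NULL semantics described above, the placeholder would presumably be 
filled in as: `CONCAT('a', 'b', 'c')` returns `'abc'`.)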


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4138: [FLINK-6925][table]Add CONCAT/CONCAT_WS supported ...

2017-06-26 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123946049
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/CodeGenerator.scala
 ---
@@ -1560,6 +1561,13 @@ class CodeGenerator(
 requireArray(array)
 generateArrayElement(this, array)
 
+  case ScalarSqlFunctions.CONCAT | ScalarSqlFunctions.CONCAT_WS =>
+this.config.setNullCheck(false)
--- End diff --

We cannot modify the config here. Maybe it makes sense not to use a 
`CallGenerator` but to add the logic to `ScalarOperators`. Other array functions 
such as `generateArrayCardinality` and `generateArrayElement` are there as well.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4138: [FLINK-6925][table]Add CONCAT/CONCAT_WS supported ...

2017-06-26 Thread twalthr
Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123947651
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * All build-in scalar scalar functions.
+  */
+class ScalarFunctions {}
+
+object ScalarFunctions {
+
+  def power(a: Double, b: JBigDecimal): Double = {
+    Math.pow(a, b.doubleValue())
+  }
+
+  /**
+    * Returns the string that results from concatenating the arguments.
+    * Returns NULL if any argument is NULL.
+    */
+  @varargs
+  def concat(args: String*): String = {
+    val sb = new JStringBuffer
+    var i = 0
+    while (i < args.length) {
+      if (args(i) == null) {
+        return null
+      }
+      sb.append(args(i))
+      i += 1
+    }
+    sb.toString
+  }
+
+  /**
+    * Returns the string that results from concatenating the arguments and separator.
+    * Returns NULL If the separator is NULL.
+    *
+    * Note: CONCAT_WS() does not skip empty strings. However, it does skip any NULL values after
+    * the separator argument.
+    *
+    * @param args The first element of argument is the separator for the rest of the arguments.
+    */
+  @varargs
+  def concat_ws(args: String*): String = {
+    val separator = args(0)
+    if (null == separator) {
+      return null
+    }
+
+    val sb = new JStringBuffer
+
+    var i = 1
+    val dataList = args.filter(null != _)
--- End diff --

Why not do the filtering within the while loop? You are iterating over 
the data twice. We should not use fancy Scala functions in runtime code.
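
A minimal sketch of the single-pass variant the comment suggests (illustrative 
only; it skips NULL values inside the loop instead of pre-filtering):

```
def concat_ws(args: String*): String = {
  val separator = args(0)
  if (separator == null) {
    return null
  }
  val sb = new java.lang.StringBuilder
  var i = 1
  var first = true
  while (i < args.length) {
    if (args(i) != null) {  // skip NULL values in the same pass
      if (!first) {
        sb.append(separator)
      }
      sb.append(args(i))
      first = false
    }
    i += 1
  }
  sb.toString
}
```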


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS supported in SQL

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062643#comment-16062643
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123945022
  
--- Diff: docs/dev/table/sql.md ---
@@ -1573,6 +1573,29 @@ INITCAP(string)
   
 
 
+
+  
+{% highlight text %}
+CONCAT(string1, string2,...)
+{% endhighlight %}
+  
+  
+Returns the string that results from concatenating the 
arguments. Returns NULL if any argument is NULL.
+  
+
+
+
+  
+{% highlight text %}
+CONCAT_WS(separator, string1, string2,...)
+{% endhighlight %}
+  
+  
+Returns the string that results from concatenating the 
arguments and separator. The separator is added between the strings to be 
concatenated. Returns NULL If the separator is NULL.
--- End diff --

Replace `and` with `using a`. Can you add a little example here as well?


> Add CONCAT/CONCAT_WS supported in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...) Returns the string that results from concatenating the 
> arguments. May have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...) 
> * Arguments
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments. 
> The separator is added between the strings to be concatenated. The separator 
> can be a string, as can the rest of the arguments. If the separator is NULL, 
> the result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator -
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second 
> name,Last Name'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS supported in SQL

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062641#comment-16062641
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123946049
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/codegen/CodeGenerator.scala
 ---
@@ -1560,6 +1561,13 @@ class CodeGenerator(
 requireArray(array)
 generateArrayElement(this, array)
 
+  case ScalarSqlFunctions.CONCAT | ScalarSqlFunctions.CONCAT_WS =>
+this.config.setNullCheck(false)
--- End diff --

We cannot modify the config here. Maybe it makes sense not to use a 
`CallGenerator` but to add the logic to `ScalarOperators`. Other array functions 
such as `generateArrayCardinality` and `generateArrayElement` are there as well.


> Add CONCAT/CONCAT_WS supported in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...) Returns the string that results from concatenating the 
> arguments. May have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...) 
> * Arguments
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments. 
> The separator is added between the strings to be concatenated. The separator 
> can be a string, as can the rest of the arguments. If the separator is NULL, 
> the result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator -
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second 
> name,Last Name'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS supported in SQL

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062642#comment-16062642
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123947651
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * All build-in scalar scalar functions.
+  */
+class ScalarFunctions {}
+
+object ScalarFunctions {
+
+  def power(a: Double, b: JBigDecimal): Double = {
+    Math.pow(a, b.doubleValue())
+  }
+
+  /**
+    * Returns the string that results from concatenating the arguments.
+    * Returns NULL if any argument is NULL.
+    */
+  @varargs
+  def concat(args: String*): String = {
+    val sb = new JStringBuffer
+    var i = 0
+    while (i < args.length) {
+      if (args(i) == null) {
+        return null
+      }
+      sb.append(args(i))
+      i += 1
+    }
+    sb.toString
+  }
+
+  /**
+    * Returns the string that results from concatenating the arguments and separator.
+    * Returns NULL If the separator is NULL.
+    *
+    * Note: CONCAT_WS() does not skip empty strings. However, it does skip any NULL values after
+    * the separator argument.
+    *
+    * @param args The first element of argument is the separator for the rest of the arguments.
+    */
+  @varargs
+  def concat_ws(args: String*): String = {
+    val separator = args(0)
+    if (null == separator) {
+      return null
+    }
+
+    val sb = new JStringBuffer
+
+    var i = 1
+    val dataList = args.filter(null != _)
--- End diff --

Why not do the filtering within the while loop? You are iterating over 
the data twice. We should not use fancy Scala functions in runtime code.


> Add CONCAT/CONCAT_WS supported in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...) Returns the string that results from concatenating the 
> arguments. May have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...) 
> * Arguments
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments. 
> The separator is added between the strings to be concatenated. The separator 
> can be a string, as can the rest of the arguments. If the separator is NULL, 
> the result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator -
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second 
> name,Last Name'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS supported in SQL

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062644#comment-16062644
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123944651
  
--- Diff: docs/dev/table/sql.md ---
@@ -1573,6 +1573,29 @@ INITCAP(string)
   
 
 
+
+  
+{% highlight text %}
+CONCAT(string1, string2,...)
+{% endhighlight %}
+  
+  
+Returns the string that results from concatenating the 
arguments. Returns NULL if any argument is NULL.
--- End diff --

Can you add a little example? `E.g. CONCAT('a', 'b', 'c') returns 'XXX'.`


> Add CONCAT/CONCAT_WS supported in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...) Returns the string that results from concatenating the 
> arguments. May have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...) 
> * Arguments
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments. 
> The separator is added between the strings to be concatenated. The separator 
> can be a string, as can the rest of the arguments. If the separator is NULL, 
> the result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator -
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second 
> name,Last Name'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS supported in SQL

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062646#comment-16062646
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123946426
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * All build-in scalar scalar functions.
+  */
+class ScalarFunctions {}
+
+object ScalarFunctions {
+
+  def power(a: Double, b: JBigDecimal): Double = {
+    Math.pow(a, b.doubleValue())
+  }
+
+  /**
+    * Returns the string that results from concatenating the arguments.
+    * Returns NULL if any argument is NULL.
+    */
+  @varargs
--- End diff --

This is an internal method. We don't need varargs here; we can use 
`Array[String]`.


> Add CONCAT/CONCAT_WS supported in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...) Returns the string that results from concatenating the 
> arguments. May have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...) 
> * Arguments
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments. 
> The separator is added between the strings to be concatenated. The separator 
> can be a string, as can the rest of the arguments. If the separator is NULL, 
> the result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator -
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second 
> name,Last Name'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6942) Add E() supported in TableAPI

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062639#comment-16062639
 ] 

ASF GitHub Bot commented on FLINK-6942:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4181
  
Hi @twalthr, thanks a lot for opening this PR. It looks great to me. My only 
suggestion is to add documentation for the Table API.

Best,
SunJincheng


> Add E() supported in TableAPI
> -
>
> Key: FLINK-6942
> URL: https://issues.apache.org/jira/browse/FLINK-6942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Timo Walther
>  Labels: starter
>
> See FLINK-6960 for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS supported in SQL

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062645#comment-16062645
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user twalthr commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123946739
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * All build-in scalar scalar functions.
--- End diff --

`Built-in scalar runtime functions.`


> Add CONCAT/CONCAT_WS supported in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...) Returns the string that results from concatenating the 
> arguments. May have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...) 
> * Arguments
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments. 
> The separator is added between the strings to be concatenated. The separator 
> can be a string, as can the rest of the arguments. If the separator is NULL, 
> the result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator -
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second 
> name,Last Name'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #3943: [FLINK-6617][table] Improve JAVA and SCALA logical plans ...

2017-06-26 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3943
  
@sunjincheng121 the structure looks very good. I would volunteer to go over 
the changes as a whole and merge this, if @fhueske and @wuchong are fine with 
it. Can you rebase the PR one last time?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6617) Improve JAVA and SCALA logical plans consistent test

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062668#comment-16062668
 ] 

ASF GitHub Bot commented on FLINK-6617:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/3943
  
@sunjincheng121 the structure looks very good. I would volunteer to go over 
the changes as a whole and merge this, if @fhueske and @wuchong are fine with 
it. Can you rebase the PR one last time?


> Improve JAVA and SCALA logical plans consistent test
> 
>
> Key: FLINK-6617
> URL: https://issues.apache.org/jira/browse/FLINK-6617
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently, we need some `StringExpression` tests for all JAVA and SCALA APIs,
> such as `GroupAggregations`, `GroupWindowAggregaton` (Session, Tumble), 
> `Calc`, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6617) Improve JAVA and SCALA logical plans consistent test

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062686#comment-16062686
 ] 

ASF GitHub Bot commented on FLINK-6617:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/3943
  
Thanks @twalthr, I'm fine with this, a big +1!


> Improve JAVA and SCALA logical plans consistent test
> 
>
> Key: FLINK-6617
> URL: https://issues.apache.org/jira/browse/FLINK-6617
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently, we need some `StringExpression` tests for all JAVA and SCALA APIs,
> such as `GroupAggregations`, `GroupWindowAggregaton` (Session, Tumble), 
> `Calc`, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #3943: [FLINK-6617][table] Improve JAVA and SCALA logical plans ...

2017-06-26 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/3943
  
Thanks @twalthr, I'm fine with this, a big +1!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink issue #3977: [FLINK-5490] Not allowing sinks to be cleared when gettin...

2017-06-26 Thread aljoscha
Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/3977
  
@deepakmarathe Thanks for working on this. Before merging, could you please 
add a test that verifies that we can in fact get the execution plan and still 
execute the program afterwards? Otherwise, this fix might get lost if some 
other part of Flink changes.
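
A hedged sketch of such a test (DataSet Scala API; `getExecutionPlan()` and the 
`DiscardingOutputFormat` sink are used for illustration, and the object name is 
made up):

```
import org.apache.flink.api.java.io.DiscardingOutputFormat
import org.apache.flink.api.scala._

object PlanThenExecuteCheck {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    env.fromElements(1, 2, 3).map(_ * 2)
      .output(new DiscardingOutputFormat[Int])

    // Getting the plan must not clear the registered sinks ...
    val plan = env.getExecutionPlan()
    assert(plan.nonEmpty)

    // ... so executing the program afterwards should still succeed.
    env.execute("plan-then-execute")
  }
}
```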


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5490) ContextEnvironment.getExecutionPlan() clears Sinks

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062688#comment-16062688
 ] 

ASF GitHub Bot commented on FLINK-5490:
---

Github user aljoscha commented on the issue:

https://github.com/apache/flink/pull/3977
  
@deepakmarathe Thanks for working on this. Before merging, could you please 
add a test that verifies that we can in fact get the execution plan and still 
execute the program afterwards? Otherwise, this fix might get lost if some 
other part of Flink changes.


> ContextEnvironment.getExecutionPlan() clears Sinks
> --
>
> Key: FLINK-5490
> URL: https://issues.apache.org/jira/browse/FLINK-5490
> Project: Flink
>  Issue Type: Bug
>  Components: Client
>Reporter: Robert Schmidtke
>Priority: Trivial
>
> Getting the execution plan via the ExecutionEnvironment causes the sinks to 
> be cleared, effectively resulting in the situation that it is impossible to 
> first obtain the execution plan, and then execute it. Passing false to 
> https://github.com/apache/flink/blob/master/flink-clients/src/main/java/org/apache/flink/client/program/ContextEnvironment.java#L68
>  should fix this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4181: [FLINK-6942] [table] Add E() support in Table API

2017-06-26 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/4181
  
Thanks for the notice. Actually, I updated the docs but might have forgotten 
to add them.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6942) Add E() supported in TableAPI

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062771#comment-16062771
 ] 

ASF GitHub Bot commented on FLINK-6942:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/4181
  
Thanks for the notice. Actually, I updated the docs but might have forgotten 
to add them.


> Add E() supported in TableAPI
> -
>
> Key: FLINK-6942
> URL: https://issues.apache.org/jira/browse/FLINK-6942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Timo Walther
>  Labels: starter
>
> See FLINK-6960 for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6952) Add link to Javadocs

2017-06-26 Thread Aljoscha Krettek (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062774#comment-16062774
 ] 

Aljoscha Krettek commented on FLINK-6952:
-

[~uce] About scaladocs, I think they are still not built correctly and (more 
importantly) there are only Scaladocs for the Scala batch package and no docs 
for code written in Java.

> Add link to Javadocs
> 
>
> Key: FLINK-6952
> URL: https://issues.apache.org/jira/browse/FLINK-6952
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Ufuk Celebi
>Assignee: Ufuk Celebi
>Priority: Minor
> Fix For: 1.2.2, 1.3.1, 1.4.0
>
>
> The project webpage and the docs are missing links to the Javadocs.
> I think we should add them as part of the external links at the bottom of the 
> doc navigation (above "Project Page").
> In the same manner we could add a link to the Scaladocs, but if I remember 
> correctly there was a problem with the build of the Scaladocs. Correct, 
> [~aljoscha]?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4181: [FLINK-6942] [table] Add E() support in Table API

2017-06-26 Thread twalthr
Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/4181
  
I will merge this...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6942) Add E() supported in TableAPI

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062778#comment-16062778
 ] 

ASF GitHub Bot commented on FLINK-6942:
---

Github user twalthr commented on the issue:

https://github.com/apache/flink/pull/4181
  
I will merge this...


> Add E() supported in TableAPI
> -
>
> Key: FLINK-6942
> URL: https://issues.apache.org/jira/browse/FLINK-6942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Timo Walther
>  Labels: starter
>
> See FLINK-6960 for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4181: [FLINK-6942] [table] Add E() support in Table API

2017-06-26 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4181
  
Thanks @twalthr.
+1 to merge.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6942) Add E() supported in TableAPI

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062789#comment-16062789
 ] 

ASF GitHub Bot commented on FLINK-6942:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4181
  
Thanks @twalthr.
+1 to merge.


> Add E() supported in TableAPI
> -
>
> Key: FLINK-6942
> URL: https://issues.apache.org/jira/browse/FLINK-6942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Timo Walther
>  Labels: starter
>
> See FLINK-6960 for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4181: [FLINK-6942] [table] Add E() support in Table API

2017-06-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4181


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6942) Add E() supported in TableAPI

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062791#comment-16062791
 ] 

ASF GitHub Bot commented on FLINK-6942:
---

Github user asfgit closed the pull request at:

https://github.com/apache/flink/pull/4181


> Add E() supported in TableAPI
> -
>
> Key: FLINK-6942
> URL: https://issues.apache.org/jira/browse/FLINK-6942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Timo Walther
>  Labels: starter
> Fix For: 1.4.0
>
>
> See FLINK-6960 for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (FLINK-6942) Add E() supported in TableAPI

2017-06-26 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther resolved FLINK-6942.
-
   Resolution: Fixed
Fix Version/s: 1.4.0

Fixed in 1.4.0: 2d82030277da58040b24a14ff665a9356a4ee4b2

> Add E() supported in TableAPI
> -
>
> Key: FLINK-6942
> URL: https://issues.apache.org/jira/browse/FLINK-6942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Timo Walther
>  Labels: starter
> Fix For: 1.4.0
>
>
> See FLINK-6960 for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4181: [FLINK-6942] [table] Add E() support in Table API

2017-06-26 Thread ch33hau
Github user ch33hau commented on a diff in the pull request:

https://github.com/apache/flink/pull/4181#discussion_r123957141
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
 ---
@@ -924,6 +924,19 @@ object pi {
 }
 
 /**
+  * Returns a value that is closer than any other value to e.
+  */
+object e {
+
+  /**
+* Returns a value that is closer than any other value to e.
--- End diff --

A trivial comment... 
Scala Doc should be
```
/**
 *
 *
```
Instead of 
```
/**
  *
  *
```


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6942) Add E() supported in TableAPI

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062794#comment-16062794
 ] 

ASF GitHub Bot commented on FLINK-6942:
---

Github user ch33hau commented on a diff in the pull request:

https://github.com/apache/flink/pull/4181#discussion_r123957141
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/scala/expressionDsl.scala
 ---
@@ -924,6 +924,19 @@ object pi {
 }
 
 /**
+  * Returns a value that is closer than any other value to e.
+  */
+object e {
+
+  /**
+* Returns a value that is closer than any other value to e.
--- End diff --

A trivial comment... 
Scala Doc should be
```
/**
 *
 *
```
Instead of 
```
/**
  *
  *
```


> Add E() supported in TableAPI
> -
>
> Key: FLINK-6942
> URL: https://issues.apache.org/jira/browse/FLINK-6942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Timo Walther
>  Labels: starter
> Fix For: 1.4.0
>
>
> See FLINK-6960 for detail.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6939) Not store IterativeCondition with NFA state

2017-06-26 Thread Kostas Kloudas (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062812#comment-16062812
 ] 

Kostas Kloudas commented on FLINK-6939:
---

Hi [~jark], thanks for the quick response! I was planning to check out the PR 
for FLINK-6983 today, so I may have more comments later in the day on this. 
Apart from that, I also had some suggestions on your PR. Once again, thanks a 
lot for your work!

> Not store IterativeCondition with NFA state
> ---
>
> Key: FLINK-6939
> URL: https://issues.apache.org/jira/browse/FLINK-6939
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Jark Wu
>Assignee: Jark Wu
> Fix For: 1.4.0
>
>
> Currently, the IterativeCondition is stored with the total NFA state. And 
> de/serialized every time when update/get the NFA state. It is a heavy 
> operation and not necessary. In addition it is a required feature for 
> FLINK-6938.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #3943: [FLINK-6617][table] Improve JAVA and SCALA logical plans ...

2017-06-26 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3943
  
@twalthr, sure! Thanks for taking care of this PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6617) Improve JAVA and SCALA logical plans consistent test

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062813#comment-16062813
 ] 

ASF GitHub Bot commented on FLINK-6617:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3943
  
@twalthr, sure! Thanks for taking care of this PR.


> Improve JAVA and SCALA logical plans consistent test
> 
>
> Key: FLINK-6617
> URL: https://issues.apache.org/jira/browse/FLINK-6617
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently, we need some `StringExpression` tests for all JAVA and SCALA APIs,
> such as `GroupAggregations`, `GroupWindowAggregaton` (Session, Tumble), 
> `Calc`, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (FLINK-4565) Support for SQL IN operator

2017-06-26 Thread Dmytro Shkvyra (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmytro Shkvyra resolved FLINK-4565.
---
Resolution: Resolved

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Dmytro Shkvyra
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.
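
(Uncorrelated means the inner query does not reference the outer row, e.g. 
`SELECT a FROM t1 WHERE a IN (SELECT b FROM t2)`.)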



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #3502: [FLINK-4565] Support for SQL IN operator

2017-06-26 Thread DmytroShkvyra
Github user DmytroShkvyra commented on the issue:

https://github.com/apache/flink/pull/3502
  
@Xpray I'm also wondering when this PR will be merged?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062828#comment-16062828
 ] 

ASF GitHub Bot commented on FLINK-4565:
---

Github user DmytroShkvyra commented on the issue:

https://github.com/apache/flink/pull/3502
  
@Xpray I am also wondering when this PR will be merged.


> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Dmytro Shkvyra
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-7001) Improve performance of Sliding Time Window with pane optimization

2017-06-26 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062827#comment-16062827
 ] 

Stephan Ewen commented on FLINK-7001:
-

Jark, this is a good idea - there have been discussions about this before, and 
there were also some issues that made it difficult.

Do you want to start a FLIP for it? Then we can gather all input on this 
optimization and make sure it works with pluggable state backends, etc.

> Improve performance of Sliding Time Window with pane optimization
> -
>
> Key: FLINK-7001
> URL: https://issues.apache.org/jira/browse/FLINK-7001
> Project: Flink
>  Issue Type: Improvement
>  Components: DataStream API
>Reporter: Jark Wu
>Assignee: Jark Wu
> Fix For: 1.4.0
>
>
> Currently, the implementation of time-based sliding windows treats each 
> window individually and replicates records to each window. For a window of 10 
> minutes that slides by 1 second, the data is replicated 600-fold (10 
> minutes / 1 second). We can optimize sliding windows by dividing them into 
> panes (aligned with the slide), so that we can avoid record duplication and 
> leverage checkpointing.
> I will attach a more detailed design doc to the issue.
> The following issues are similar to this one: FLINK-5387, FLINK-6990
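To make the pane idea concrete, here is a minimal sketch (my own illustration, 
not code from the design doc) of how records map to panes and how a window is 
assembled from them:

{code}
// Sketch: pane assignment for a sliding window (all times in milliseconds).
// Each record is written to exactly one pane instead of to size/slide
// overlapping windows.
object PaneMath {
  def paneIndex(timestamp: Long, slide: Long): Long = timestamp / slide

  // Indices of the panes that make up the window ending at windowEnd
  // (windowEnd is assumed to be aligned with the slide).
  def panesOfWindow(windowEnd: Long, size: Long, slide: Long): Seq[Long] =
    ((windowEnd - size) / slide) until (windowEnd / slide)

  def main(args: Array[String]): Unit = {
    val size  = 10 * 60 * 1000L // 10 minutes
    val slide = 1000L           // 1 second
    // A record is written once, into a single pane ...
    println(paneIndex(timestamp = 123456L, slide)) // 123
    // ... and a window combines size/slide = 600 pre-aggregated panes
    // instead of duplicating every record 600 times.
    println(panesOfWindow(windowEnd = 600000L, size, slide).size) // 600
  }
}
{code}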



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #3943: [FLINK-6617][table] Improve JAVA and SCALA logical plans ...

2017-06-26 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/3943
  
@twalthr I'd be extremely grateful if you could review and merge this PR 
after I rebase it. 



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6617) Improve JAVA and SCALA logical plans consistent test

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062839#comment-16062839
 ] 

ASF GitHub Bot commented on FLINK-6617:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/3943
  
@twalthr I'd be extremely grateful if you could review and merge this PR 
after I rebase it. 



> Improve JAVA and SCALA logical plans consistent test
> 
>
> Key: FLINK-6617
> URL: https://issues.apache.org/jira/browse/FLINK-6617
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently, we need some `StringExpression` tests for all JAVA and SCALA APIs, 
> such as `GroupAggregations`, `GroupWindowAggregaton` (Session, Tumble), 
> `Calc`, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7004) Switch to Travis Trusty image

2017-06-26 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-7004:
---

 Summary: Switch to Travis Trusty image
 Key: FLINK-7004
 URL: https://issues.apache.org/jira/browse/FLINK-7004
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.3.0, 1.2.0, 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Critical
 Fix For: 1.2.2, 1.4.0, 1.3.2


As shown in this PR https://github.com/apache/flink/pull/4167 switching to the 
Trusty image on Travis seems to stabilize the build times.

We should switch for 1.2, 1.3 and 1.4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6991) Inaccessible link under Gelly document

2017-06-26 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062843#comment-16062843
 ] 

Chesnay Schepler commented on FLINK-6991:
-

[~njzhuyuqi] You now have contributor permissions, and can assign issues to 
yourself.

> Inaccessible link under Gelly document
> --
>
> Key: FLINK-6991
> URL: https://issues.apache.org/jira/browse/FLINK-6991
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: mingleizhang
>Assignee: njzhuyuqi
>
> When I visited the web 
> http://flink.apache.org/news/2015/08/24/introducing-flink-gelly.html#top, 
> then at the bottom of the page there is a Gelly Documentation link. It 
> belongs to a non-existent resources. It gives me the following style stuff.
> {noformat}
> No Such Resource
> File not found.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-6991) Inaccessible link under Gelly document

2017-06-26 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler reassigned FLINK-6991:
---

Assignee: njzhuyuqi

> Inaccessible link under Gelly document
> --
>
> Key: FLINK-6991
> URL: https://issues.apache.org/jira/browse/FLINK-6991
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: mingleizhang
>Assignee: njzhuyuqi
>
> When I visited the web 
> http://flink.apache.org/news/2015/08/24/introducing-flink-gelly.html#top, 
> then at the bottom of the page there is a Gelly Documentation link. It 
> belongs to a non-existent resources. It gives me the following style stuff.
> {noformat}
> No Such Resource
> File not found.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6991) Inaccessible link under Gelly document

2017-06-26 Thread mingleizhang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062848#comment-16062848
 ] 

mingleizhang commented on FLINK-6991:
-

Thanks to [~Zentol]. :)

> Inaccessible link under Gelly document
> --
>
> Key: FLINK-6991
> URL: https://issues.apache.org/jira/browse/FLINK-6991
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: mingleizhang
>Assignee: njzhuyuqi
>
> When I visited the web 
> http://flink.apache.org/news/2015/08/24/introducing-flink-gelly.html#top, 
> then at the bottom of the page there is a Gelly Documentation link. It 
> belongs to a non-existent resources. It gives me the following style stuff.
> {noformat}
> No Such Resource
> File not found.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6991) Inaccessible link under Gelly document

2017-06-26 Thread njzhuyuqi (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062856#comment-16062856
 ] 

njzhuyuqi commented on FLINK-6991:
--

[~Zentol] thanks a lot

> Inaccessible link under Gelly document
> --
>
> Key: FLINK-6991
> URL: https://issues.apache.org/jira/browse/FLINK-6991
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: mingleizhang
>Assignee: njzhuyuqi
>
> When I visited the web 
> http://flink.apache.org/news/2015/08/24/introducing-flink-gelly.html#top, 
> then at the bottom of the page there is a Gelly Documentation link. It 
> belongs to a non-existent resources. It gives me the following style stuff.
> {noformat}
> No Such Resource
> File not found.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4182: [FLINK-7004] Switch to Travis Trusty image

2017-06-26 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4182

[FLINK-7004] Switch to Travis Trusty image

Cleaned up version of #4167. With this PR we switch to the trusty image on 
Travis as it appears to have more stable build times.

Other changes include:
* run in a sudo-enabled environment for more memory
* increase java heap size
* replace oraclejdk7 profile since it is no longer supported (see 
https://github.com/travis-ci/travis-ci/issues/7884)
* manually install maven 3.2.5 since trusty ships with 3.3.9

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7004

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4182.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4182


commit e3ed988581a3f0e747d4a80335f86dd1ace07106
Author: zentol 
Date:   2017-06-26T09:44:45Z

[FLINK-7004] Switch to Travis Trusty image

- enable sudo for more memory
- increase java heap size
- replace usage of oraclejdk7 since it is no longer supported
- manually install maven 3.2.5




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7004) Switch to Travis Trusty image

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062860#comment-16062860
 ] 

ASF GitHub Bot commented on FLINK-7004:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4182

[FLINK-7004] Switch to Travis Trusty image

Cleaned up version of #4167. With this PR we switch to the trusty image on 
Travis as it appears to have more stable build times.

Other changes include:
* run in a sudo-enabled environment for more memory
* increase java heap size
* replace oraclejdk7 profile since it is no longer supported (see 
https://github.com/travis-ci/travis-ci/issues/7884)
* manually install maven 3.2.5 since trusty ships with 3.3.9

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 7004

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4182.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4182


commit e3ed988581a3f0e747d4a80335f86dd1ace07106
Author: zentol 
Date:   2017-06-26T09:44:45Z

[FLINK-7004] Switch to Travis Trusty image

- enable sudo for more memory
- increase java heap size
- replace usage of oraclejdk7 since it is no longer supported
- manually install maven 3.2.5




> Switch to Travis Trusty image
> -
>
> Key: FLINK-7004
> URL: https://issues.apache.org/jira/browse/FLINK-7004
> Project: Flink
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 1.2.0, 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.2.2, 1.4.0, 1.3.2
>
>
> As shown in this PR https://github.com/apache/flink/pull/4167 switching to 
> the Trusty image on Travis seems to stabilize the build times.
> We should switch for 1.2, 1.3 and 1.4.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6522) Add ZooKeeper cleanup logic to ZooKeeperHaServices

2017-06-26 Thread Fang Yong (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062864#comment-16062864
 ] 

Fang Yong commented on FLINK-6522:
--

Hi [~till.rohrmann], I would like to pick up this issue. If I understand correctly, all 
data under the directory "/flink/cluster_id" should be removed when 
{{ZooKeeperHaServices#closeAndCleanupAllData}} is called, right?  Thanks

> Add ZooKeeper cleanup logic to ZooKeeperHaServices
> --
>
> Key: FLINK-6522
> URL: https://issues.apache.org/jira/browse/FLINK-6522
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Till Rohrmann
>
> The {{ZooKeeperHaServices}} provide a {{CuratorFramework}} client to access 
> ZooKeeper data. Consequently, all data (also for different job) are stored 
> under the same root node. When 
> {{HighAvailabilityServices#closeAndCleanupAllData}} is called, then this data 
> should be cleaned up. This cleanup logic is currently missing.
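As a rough sketch of the missing cleanup (my assumption of the intended 
behavior; the path layout is illustrative), Curator can drop the cluster's 
znode subtree recursively:

{code}
import org.apache.curator.framework.CuratorFramework

object ZkCleanupSketch {
  // Sketch: remove everything below the cluster's root znode when
  // closeAndCleanupAllData() is called. "/flink/cluster_id" is illustrative.
  def cleanupAllData(client: CuratorFramework, clusterRootPath: String): Unit = {
    if (client.checkExists().forPath(clusterRootPath) != null) {
      // Recursively deletes the node together with all of its children.
      client.delete().deletingChildrenIfNeeded().forPath(clusterRootPath)
    }
  }
}
{code}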



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-3414) Add Scala API for CEP's pattern definition

2017-06-26 Thread Dawid Wysakowicz (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062863#comment-16062863
 ] 

Dawid Wysakowicz commented on FLINK-3414:
-

Hi [~kkl0u], [~dian.fu]

As there is ongoing work to introduce GroupPatterns( [FLINK-6927] ) and 
branching patterns ( [FLINK-4641] ) I would like to start a discussion about 
reworking our Pattern API as a better hierarchical API would help in the 
implementation.

I feel there are a couple of problems with the current API:
# there are lots of places where we throw {{MalformedPatternException}}; it 
would be better to rule out illegal patterns in the API rather than throw 
exceptions at runtime
# the API does not correspond well to the hierarchy of groups and branches
# it is hard to provide a good, intuitive Scala DSL
 
In my branch [cep-api-rebuilt| 
https://github.com/dawidwys/flink/tree/cep-api-rebuilt ] in {{flink-cep-scala}} 
in package {{org.apache.flink.cep.scala.pattern.proto}} I created a proposal 
for a new API written in Scala (better type support, I think). For this API to 
work we would need to port {{NFACompiler.java}} to the new API, which I think 
should also be done in Scala; that would allow us to better leverage Scala's 
type support through pattern matching.

As the current API is not annotated as Public, I feel there is not that much of 
a problem with breaking API compatibility. 

I would love to hear your opinions.
 

> Add Scala API for CEP's pattern definition
> --
>
> Key: FLINK-3414
> URL: https://issues.apache.org/jira/browse/FLINK-3414
> Project: Flink
>  Issue Type: Improvement
>  Components: CEP
>Affects Versions: 1.0.0
>Reporter: Till Rohrmann
>Assignee: Dawid Wysakowicz
>Priority: Minor
>
> Currently, the CEP library only supports a Java API to specify complex event 
> patterns. In order to make it a bit less verbose for Scala users, it would be 
> nice to also add a Scala API for the CEP library. 
> A Scala API would also allow passing Scala's anonymous functions as filter 
> conditions or as a select function, for example, or using partial functions 
> to distinguish between different events.
> Furthermore, the Scala API could be designed to feel a bit more like a DSL:
> {code}
> begin "start" where _.id >= 42 -> "middle_1" as classOf[Subclass] || 
> "middle_2" where _.name equals "foobar" -> "end" where x => x.id <= x.volume
> {code}
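For comparison, a rough sketch (illustrative only, treating the DSL's {{->}} as 
{{followedBy}}) of how the non-branching part of that example maps to the 
builder-style pattern definition; the "middle_2" alternative is left out 
because branching is exactly what the current API cannot express:

{code}
import org.apache.flink.cep.scala.pattern.Pattern

object DslSketch {
  // Minimal event types, just for the illustration.
  class Event(val id: Int, val volume: Int)
  class Subclass(id: Int, volume: Int) extends Event(id, volume)

  // The DSL sketch above, written in today's builder style (branch omitted).
  val pattern = Pattern.begin[Event]("start")
    .where(_.id >= 42)
    .followedBy("middle_1").subtype(classOf[Subclass])
    .followedBy("end")
    .where(x => x.id <= x.volume)
}
{code}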



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4160: {FLINK-6965] Include snappy-java in flink-dist

2017-06-26 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4160#discussion_r123971846
  
--- Diff: tools/travis_mvn_watchdog.sh ---
@@ -182,6 +182,13 @@ check_shaded_artifacts() {
exit 1
fi
 
+   SNAPPY=`cat allClasses | grep '^org/xerial/snappy' | wc -l`
+   if [ AVRO == "0" ]; then
--- End diff --

yes! nice catch...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6965) Avro is missing snappy dependency

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062865#comment-16062865
 ] 

ASF GitHub Bot commented on FLINK-6965:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4160#discussion_r123971846
  
--- Diff: tools/travis_mvn_watchdog.sh ---
@@ -182,6 +182,13 @@ check_shaded_artifacts() {
exit 1
fi
 
+   SNAPPY=`cat allClasses | grep '^org/xerial/snappy' | wc -l`
+   if [ AVRO == "0" ]; then
--- End diff --

yes! nice catch...


> Avro is missing snappy dependency
> -
>
> Key: FLINK-6965
> URL: https://issues.apache.org/jira/browse/FLINK-6965
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 1.3.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.3.2
>
>
> The shading rework made before 1.3 removed a snappy dependency that was 
> accidentally pulled in through hadoop. This is technically alright, until 
> class-loaders rear their ugly heads.
> Our kafka connector can read avro records, which may or may not require 
> snappy. Usually this _should_ be solvable by including the snappy dependency 
> in the user jar if necessary; however, since the kafka connector loads the 
> classes it requires using the system class loader, this doesn't work.
> As such we have to add a separate snappy dependency to flink-core.
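To illustrate the class-loader problem described above (a minimal sketch under 
assumed paths, not connector code):

{code}
import java.net.{URL, URLClassLoader}

object ClassLoaderDemo {
  def main(args: Array[String]): Unit = {
    // Hypothetical user jar that bundles snappy-java.
    val userJar = new URL("file:///path/to/user-job.jar")
    val userLoader =
      new URLClassLoader(Array(userJar), ClassLoader.getSystemClassLoader)

    // Works: the user-code class loader can see classes in the user jar.
    userLoader.loadClass("org.xerial.snappy.Snappy")

    // Throws ClassNotFoundException if snappy is not on the system classpath,
    // which is why bundling it in the user jar does not help the connector.
    ClassLoader.getSystemClassLoader.loadClass("org.xerial.snappy.Snappy")
  }
}
{code}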



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4083: [FLINK-6742] Improve savepoint migration failure e...

2017-06-26 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4083#discussion_r123972388
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/savepoint/SavepointV2.java
 ---
@@ -168,10 +168,27 @@ public static Savepoint 
convertToOperatorStateSavepointV2(
expandedToLegacyIds = true;
}
 
+   if (jobVertex == null) {
+   throw new IllegalStateException(
+   "Could not find task for state with ID 
" + taskState.getJobVertexID() + ". " +
+   "When migrating a savepoint from a 
version < 1.3 please make sure that the topology was not " +
+   "changed through removal of a stateful 
operator or modification of a chain containing a stateful " +
+   "operator.");
+   }
+
List operatorIDs = 
jobVertex.getOperatorIDs();
 
for (int subtaskIndex = 0; subtaskIndex < 
jobVertex.getParallelism(); subtaskIndex++) {
-   SubtaskState subtaskState = 
taskState.getState(subtaskIndex);
+   SubtaskState subtaskState;
+   try {
+   subtaskState = 
taskState.getState(subtaskIndex);
--- End diff --

yes that's true, I'll create a follow-up PR.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6742) Improve error message when savepoint migration fails due to task removal

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062868#comment-16062868
 ] 

ASF GitHub Bot commented on FLINK-6742:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4083#discussion_r123972388
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/savepoint/SavepointV2.java
 ---
@@ -168,10 +168,27 @@ public static Savepoint 
convertToOperatorStateSavepointV2(
expandedToLegacyIds = true;
}
 
+   if (jobVertex == null) {
+   throw new IllegalStateException(
+   "Could not find task for state with ID 
" + taskState.getJobVertexID() + ". " +
+   "When migrating a savepoint from a 
version < 1.3 please make sure that the topology was not " +
+   "changed through removal of a stateful 
operator or modification of a chain containing a stateful " +
+   "operator.");
+   }
+
List operatorIDs = 
jobVertex.getOperatorIDs();
 
for (int subtaskIndex = 0; subtaskIndex < 
jobVertex.getParallelism(); subtaskIndex++) {
-   SubtaskState subtaskState = 
taskState.getState(subtaskIndex);
+   SubtaskState subtaskState;
+   try {
+   subtaskState = 
taskState.getState(subtaskIndex);
--- End diff --

yes that's true, I'll create a follow-up PR.


> Improve error message when savepoint migration fails due to task removal
> 
>
> Key: FLINK-6742
> URL: https://issues.apache.org/jira/browse/FLINK-6742
> Project: Flink
>  Issue Type: Bug
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Gyula Fora
>Assignee: Chesnay Schepler
>Priority: Minor
>  Labels: flink-rel-1.3.1-blockers
>
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointV2.convertToOperatorStateSavepointV2(SavepointV2.java:171)
>   at 
> org.apache.flink.runtime.checkpoint.savepoint.SavepointLoader.loadAndValidateSavepoint(SavepointLoader.java:75)
>   at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreSavepoint(CheckpointCoordinator.java:1090)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-4565) Support for SQL IN operator

2017-06-26 Thread Timo Walther (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062869#comment-16062869
 ] 

Timo Walther commented on FLINK-4565:
-

[~dshkvyra] this issue has been reopened by [~fhueske]. Please do not mark this 
issue as resolved. An issue is resolved once its fix is merged into the master 
branch. 

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>Assignee: Dmytro Shkvyra
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (FLINK-4565) Support for SQL IN operator

2017-06-26 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther reopened FLINK-4565:
-
  Assignee: (was: Dmytro Shkvyra)

> Support for SQL IN operator
> ---
>
> Key: FLINK-4565
> URL: https://issues.apache.org/jira/browse/FLINK-4565
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> It seems that Flink SQL supports the uncorrelated sub-query IN operator. But 
> it should also be available in the Table API and tested.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4183: [FLINK-6969][table]Add support for deferred comput...

2017-06-26 Thread sunjincheng121
GitHub user sunjincheng121 opened a pull request:

https://github.com/apache/flink/pull/4183

[FLINK-6969][table]Add support for deferred computation for group win…

In this PR I have added support for deferred computation for group window 
aggregates.

- [x] General
  - The pull request references the related JIRA issue 
("[FLINK-6969][table]Add support for deferred computation for group window 
aggregates")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunjincheng121/flink FLINK-6969-PR

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4183.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4183


commit 4030a7e84a851404a996b5bd76f805f24c34
Author: sunjincheng121 
Date:   2017-06-23T23:03:45Z

[FLINK-6969][table]Add support for deferred computation for group window 
aggregates




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6969) Add support for deferred computation for group window aggregates

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062870#comment-16062870
 ] 

ASF GitHub Bot commented on FLINK-6969:
---

GitHub user sunjincheng121 opened a pull request:

https://github.com/apache/flink/pull/4183

[FLINK-6969][table]Add support for deferred computation for group win…

In this PR I have added support for deferred computation for group window 
aggregates.

- [x] General
  - The pull request references the related JIRA issue 
("[FLINK-6969][table]Add support for deferred computation for group window 
aggregates")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [x] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunjincheng121/flink FLINK-6969-PR

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4183.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4183


commit 4030a7e84a851404a996b5bd76f805f24c34
Author: sunjincheng121 
Date:   2017-06-23T23:03:45Z

[FLINK-6969][table]Add support for deferred computation for group window 
aggregates




> Add support for deferred computation for group window aggregates
> 
>
> Key: FLINK-6969
> URL: https://issues.apache.org/jira/browse/FLINK-6969
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: sunjincheng
>
> Deferred computation is a strategy to deal with late arriving data and avoid 
> updates of previous results. Instead of computing a result as soon as it is 
> possible (i.e., when a corresponding watermark was received), deferred 
> computation adds a configurable amount of slack time in which late data is 
> accepted before the result is computed. For example, instead of computing a 
> tumbling window of 1 hour at each full hour, we can add a deferred 
> computation interval of 15 minutes to compute the result a quarter past each 
> full hour.
> This approach adds latency but can reduce the number of updates, esp. in use 
> cases where the user cannot influence the generation of watermarks. It is 
> also useful if the data is emitted to a system that cannot update results 
> (files or Kafka). The deferred computation interval should be configured via 
> the {{QueryConfig}}.
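A minimal sketch of the timing semantics (my own illustration; the actual 
{{QueryConfig}} API for this is what the issue proposes and is not fixed yet):

{code}
object DeferredEmission {
  // Sketch: a window result is emitted once the watermark passes the window
  // end plus the configured deferral, so late rows arriving in that interval
  // are still included in the (single) result.
  def readyToEmit(watermark: Long, windowEnd: Long, deferralMs: Long): Boolean =
    watermark >= windowEnd + deferralMs

  def main(args: Array[String]): Unit = {
    // 1-hour tumbling window ending at 13:00 with a 15-minute deferral:
    // the result is emitted when the watermark reaches 13:15.
    val windowEnd = 13L * 60 * 60 * 1000 // 13:00 as millis-of-day
    val deferral  = 15L * 60 * 1000      // 15 minutes
    println(readyToEmit(watermark = windowEnd, windowEnd, deferral))            // false
    println(readyToEmit(watermark = windowEnd + deferral, windowEnd, deferral)) // true
  }
}
{code}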



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4138: [FLINK-6925][table]Add CONCAT/CONCAT_WS supported ...

2017-06-26 Thread sunjincheng121
Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123974055
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * All built-in scalar functions.
+  */
+class ScalarFunctions {}
+
+object ScalarFunctions {
+
+  def power(a: Double, b: JBigDecimal): Double = {
+Math.pow(a, b.doubleValue())
+  }
+
+  /**
+* Returns the string that results from concatenating the arguments.
+* Returns NULL if any argument is NULL.
+*/
+  @varargs
--- End diff --

If we use `Array[String]` we may get an exception as follows:
```
Caused by: org.codehaus.commons.compiler.CompileException: Line 44, Column 
78: No applicable constructor/method found for actual parameters 
"java.lang.String, java.lang.String"; candidates are: "public static 
java.lang.String 
org.apache.flink.table.runtime.functions.ScalarFunctions.concat_ws(java.lang.String[])"
at 
org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11672)
at 
org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:8828)

```
Because the scalar functions `concat`/`concat_ws` really take varargs. What 
do you think?
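For context, a minimal standalone sketch of the `@varargs` effect (my 
illustration, not PR code): the annotation makes scalac emit an extra 
Java-style `String...` bridge method, which is what the Janino-generated 
caller can resolve:

```scala
import scala.annotation.varargs

object VarargsDemo {
  // Without @varargs this compiles to concat(args: Seq[String]) only, so a
  // Java-style call concat("a", "b") from generated code cannot be resolved.
  @varargs
  def concat(args: String*): String = args.mkString

  def main(args: Array[String]): Unit = {
    println(concat("F", "lin", "k")) // Flink
  }
}
```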


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6925) Add CONCAT/CONCAT_WS supported in SQL

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062875#comment-16062875
 ] 

ASF GitHub Bot commented on FLINK-6925:
---

Github user sunjincheng121 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4138#discussion_r123974055
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/functions/ScalarFunctions.scala
 ---
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.runtime.functions
+
+import scala.annotation.varargs
+import java.math.{BigDecimal => JBigDecimal}
+import java.lang.{StringBuffer => JStringBuffer}
+
+/**
+  * All built-in scalar functions.
+  */
+class ScalarFunctions {}
+
+object ScalarFunctions {
+
+  def power(a: Double, b: JBigDecimal): Double = {
+Math.pow(a, b.doubleValue())
+  }
+
+  /**
+* Returns the string that results from concatenating the arguments.
+* Returns NULL if any argument is NULL.
+*/
+  @varargs
--- End diff --

If we use `Array[String]` we may get an exception as follows:
```
Caused by: org.codehaus.commons.compiler.CompileException: Line 44, Column 
78: No applicable constructor/method found for actual parameters 
"java.lang.String, java.lang.String"; candidates are: "public static 
java.lang.String 
org.apache.flink.table.runtime.functions.ScalarFunctions.concat_ws(java.lang.String[])"
at 
org.codehaus.janino.UnitCompiler.compileError(UnitCompiler.java:11672)
at 
org.codehaus.janino.UnitCompiler.findMostSpecificIInvocable(UnitCompiler.java:8828)

```
Because the scalar functions `concat`/`concat_ws` really take varargs. What 
do you think?


> Add CONCAT/CONCAT_WS supported in SQL
> -
>
> Key: FLINK-6925
> URL: https://issues.apache.org/jira/browse/FLINK-6925
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> CONCAT(str1,str2,...) Returns the string that results from concatenating the 
> arguments. May have one or more arguments. If all arguments are nonbinary 
> strings, the result is a nonbinary string. If the arguments include any 
> binary strings, the result is a binary string. A numeric argument is 
> converted to its equivalent nonbinary string form.
> CONCAT() returns NULL if any argument is NULL.
> * Syntax:
> CONCAT(str1,str2,...) 
> * Arguments
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT('F', 'lin', 'k') -> 'Flink'
>   CONCAT('M', NULL, 'L') -> NULL
>   CONCAT(14.3) -> '14.3'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat]
> CONCAT_WS() stands for Concatenate With Separator and is a special form of 
> CONCAT(). The first argument is the separator for the rest of the arguments. 
> The separator is added between the strings to be concatenated. The separator 
> can be a string, as can the rest of the arguments. If the separator is NULL, 
> the result is NULL.
> * Syntax:
> CONCAT_WS(separator,str1,str2,...)
> * Arguments
> ** separator -
> ** str1,str2,... -
> * Return Types
>   string
> * Example:
>   CONCAT_WS(',','First name','Second name','Last Name') -> 'First name,Second 
> name,Last Name'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/string-functions.html#function_concat-ws]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6429) Bump up Calcite version to 1.13

2017-06-26 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062877#comment-16062877
 ] 

Fabian Hueske commented on FLINK-6429:
--

The vote for Calcite 1.13.0 has passed today and it should be available soon.

> Bump up Calcite version to 1.13
> ---
>
> Key: FLINK-6429
> URL: https://issues.apache.org/jira/browse/FLINK-6429
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Reporter: Timo Walther
>
> This is an umbrella issue for all tasks that need to be done once Apache 
> Calcite 1.13 is released.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (FLINK-6310) LocalExecutor#endSession() uses wrong lock for synchronization

2017-06-26 Thread Fang Yong (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fang Yong reassigned FLINK-6310:


Assignee: Fang Yong

> LocalExecutor#endSession() uses wrong lock for synchronization
> --
>
> Key: FLINK-6310
> URL: https://issues.apache.org/jira/browse/FLINK-6310
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Reporter: Ted Yu
>Assignee: Fang Yong
>
> Here is related code:
> {code}
>   public void endSession(JobID jobID) throws Exception {
> synchronized (LocalExecutor.class) {
>   LocalFlinkMiniCluster flink = this.flink;
> {code}
> In other places, lock field is used for synchronization:
> {code}
>   public void start() throws Exception {
> synchronized (lock) {
> {code}
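A minimal sketch of the intended fix (illustrative Scala; LocalExecutor itself 
is Java): guard all mini-cluster state with the same instance-level lock 
object:

{code}
// Sketch: both methods must synchronize on the same monitor; previously
// endSession() locked LocalExecutor.class while start() locked this.lock,
// so the two did not actually exclude each other.
class ExecutorSketch {
  private val lock = new Object

  def start(): Unit = lock.synchronized {
    // ... start the mini cluster ...
  }

  def endSession(): Unit = lock.synchronized {
    // ... access/stop the mini cluster under the same lock ...
  }
}
{code}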



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4184: [FLINK-6310] Use lock object for synchronization

2017-06-26 Thread zjureel
GitHub user zjureel opened a pull request:

https://github.com/apache/flink/pull/4184

[FLINK-6310] Use lock object for synchronization

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zjureel/flink FLINK-6310

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4184.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4184


commit 97e96414a371cb3f8d6119fd847536453c09436d
Author: zjureel 
Date:   2017-06-26T10:18:18Z

[FLINK-6310] Use lock object for synchronization




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6310) LocalExecutor#endSession() uses wrong lock for synchronization

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062879#comment-16062879
 ] 

ASF GitHub Bot commented on FLINK-6310:
---

GitHub user zjureel opened a pull request:

https://github.com/apache/flink/pull/4184

[FLINK-6310] Use lock object for synchronization

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("[FLINK-XXX] Jira 
title text")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [ ] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zjureel/flink FLINK-6310

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4184.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4184


commit 97e96414a371cb3f8d6119fd847536453c09436d
Author: zjureel 
Date:   2017-06-26T10:18:18Z

[FLINK-6310] Use lock object for synchronization




> LocalExecutor#endSession() uses wrong lock for synchronization
> --
>
> Key: FLINK-6310
> URL: https://issues.apache.org/jira/browse/FLINK-6310
> Project: Flink
>  Issue Type: Bug
>  Components: Local Runtime
>Reporter: Ted Yu
>Assignee: Fang Yong
>
> Here is related code:
> {code}
>   public void endSession(JobID jobID) throws Exception {
> synchronized (LocalExecutor.class) {
>   LocalFlinkMiniCluster flink = this.flink;
> {code}
> In other places, lock field is used for synchronization:
> {code}
>   public void start() throws Exception {
> synchronized (lock) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6909) Flink should support Lombok POJO

2017-06-26 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062883#comment-16062883
 ] 

Stephan Ewen commented on FLINK-6909:
-

Can you provide some more information? Why is the Lombok POJO not recognized by 
Flink as a POJO?

> Flink should support Lombok POJO
> 
>
> Key: FLINK-6909
> URL: https://issues.apache.org/jira/browse/FLINK-6909
> Project: Flink
>  Issue Type: Wish
>  Components: Type Serialization System
>Reporter: Md Kamaruzzaman
>Priority: Minor
> Fix For: 1.2.1
>
>
> Project Lombok helps greatly to reduce boilerplate Java code. 
> It seems that Flink does not accept a lombok POJO as a valid pojo. 
> e.g. Here is a POJO defined with lombok:
> @Getter
> @Setter
> @NoArgsConstructor
> public class SimplePojo
> Using this Pojo class to read from a CSV file throws this exception:
> Exception in thread "main" java.lang.ClassCastException: 
> org.apache.flink.api.java.typeutils.GenericTypeInfo cannot be cast to 
> org.apache.flink.api.java.typeutils.PojoTypeInfo
> It would be great if flink supports lombok POJO.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6908) start-cluster.sh accepts batch/streaming mode argument

2017-06-26 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062885#comment-16062885
 ] 

Stephan Ewen commented on FLINK-6908:
-

They are no longer used. +1 for removing them.

> start-cluster.sh accepts batch/streaming mode argument
> --
>
> Key: FLINK-6908
> URL: https://issues.apache.org/jira/browse/FLINK-6908
> Project: Flink
>  Issue Type: Improvement
>  Components: Startup Shell Scripts
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Trivial
>
> The {{start-cluster.sh}} script still accepts an argument for the cluster 
> mode (BATCH|STREAMING).
> As far as I'm aware we no longer use these.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4035: [FLINK-6785] [metrics] Fix ineffective asserts in MetricR...

2017-06-26 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4035
  
I've resolved the checkstyle issues and will start merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6890) flink-dist Jar contains non-shaded Guava dependencies (built with Maven 3.0.5)

2017-06-26 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062889#comment-16062889
 ] 

Stephan Ewen commented on FLINK-6890:
-

Has this been confirmed? If yes, this should be a blocker for Flink 1.2.2 and 
1.3.2.

> flink-dist Jar contains non-shaded Guava dependencies (built with Maven 3.0.5)
> --
>
> Key: FLINK-6890
> URL: https://issues.apache.org/jira/browse/FLINK-6890
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.3.0, 1.2.1
>Reporter: Tzu-Li (Gordon) Tai
>
> See original discussion on ML: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Guava-version-conflict-td13561.html.
> Running {{mvn dependency:tree}} for {{flink-dist}} did not reveal any Guava 
> dependencies.
> This was tested with Maven 3.0.5.
> {code}
> com/google/common/util/concurrent/Futures$CombinedFuture.class
> com/google/common/util/concurrent/Futures$CombinerFuture.class
> com/google/common/util/concurrent/Futures$FallbackFuture$1$1.class
> com/google/common/util/concurrent/Futures$FallbackFuture$1.class
> com/google/common/util/concurrent/Futures$FallbackFuture.class
> com/google/common/util/concurrent/Futures$FutureCombiner.class
> com/google/common/util/concurrent/Futures$ImmediateCancelledFuture.class
> com/google/common/util/concurrent/Futures$ImmediateFailedCheckedFuture.class
> com/google/common/util/concurrent/Futures$ImmediateFailedFuture.class
> com/google/common/util/concurrent/Futures$ImmediateFuture.class
> com/google/common/util/concurrent/Futures$ImmediateSuccessfulCheckedFuture.class
> com/google/common/util/concurrent/Futures$ImmediateSuccessfulFuture.class
> com/google/common/util/concurrent/Futures$MappingCheckedFuture.class
> com/google/common/util/concurrent/Futures$NonCancellationPropagatingFuture$1.class
> com/google/common/util/concurrent/Futures$NonCancellationPropagatingFuture.class
> com/google/common/util/concurrent/Futures$WrappedCombiner.class
> com/google/common/util/concurrent/Futures.class
> com/google/common/util/concurrent/JdkFutureAdapters$ListenableFutureAdapter$1.class
> com/google/common/util/concurrent/JdkFutureAdapters$ListenableFutureAdapter.class
> com/google/common/util/concurrent/JdkFutureAdapters.class
> com/google/common/util/concurrent/ListenableFuture.class
> com/google/common/util/concurrent/ListenableFutureTask.class
> com/google/common/util/concurrent/ListenableScheduledFuture.class
> com/google/common/util/concurrent/ListenerCallQueue$Callback.class
> com/google/common/util/concurrent/ListenerCallQueue.class
> com/google/common/util/concurrent/ListeningExecutorService.class
> com/google/common/util/concurrent/ListeningScheduledExecutorService.class
> com/google/common/util/concurrent/Monitor$Guard.class
> com/google/common/util/concurrent/Monitor.class
> com/google/common/util/concurrent/MoreExecutors$1.class
> com/google/common/util/concurrent/MoreExecutors$2.class
> com/google/common/util/concurrent/MoreExecutors$3.class
> com/google/common/util/concurrent/MoreExecutors$4.class
> com/google/common/util/concurrent/MoreExecutors$Application$1.class
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6785) Ineffective checks in MetricRegistryTest

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062890#comment-16062890
 ] 

ASF GitHub Bot commented on FLINK-6785:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4035
  
I've resolved the checkstyle issues and will start merging.


> Ineffective checks in MetricRegistryTest
> 
>
> Key: FLINK-6785
> URL: https://issues.apache.org/jira/browse/FLINK-6785
> Project: Flink
>  Issue Type: Bug
>  Components: Metrics, Tests
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>
> Several tests in {{MetricRegistryTest}} have reporters doing assertions. By 
> design exceptions from reporters are however catched and logged, and thus 
> can't fail the test.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6868) Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra Connector`

2017-06-26 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062891#comment-16062891
 ] 

Stephan Ewen commented on FLINK-6868:
-

Is this issue resolved?

> Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra 
> Connector`
> -
>
> Key: FLINK-6868
> URL: https://issues.apache.org/jira/browse/FLINK-6868
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Cassandra Connector
>Affects Versions: 1.4.0
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Blocker
> Fix For: 1.4.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Should use `scala.binary.version` for `flink-streaming-scala` in the 
> `Cassandra Connector`.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4109: [FLINK-6898] [metrics] Limit size of operator component i...

2017-06-26 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4109
  
a constant is a good idea, will add it and start merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6898) Limit size of operator component in metric name

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062892#comment-16062892
 ] 

ASF GitHub Bot commented on FLINK-6898:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4109
  
a constant is a good idea, will add it and start merging.


> Limit size of operator component in metric name
> ---
>
> Key: FLINK-6898
> URL: https://issues.apache.org/jira/browse/FLINK-6898
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Critical
> Fix For: 1.4.0, 1.3.2
>
>
> The operator name for some operators (specifically windows) can be very, very 
> long (250+ characters).
> I propose to limit the total space that the operator component can take up in 
> a metric name to 60 characters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6866) ClosureCleaner.clean fails for scala's JavaConverters wrapper classes

2017-06-26 Thread Stephan Ewen (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062894#comment-16062894
 ] 

Stephan Ewen commented on FLINK-6866:
-

Not sure how easily this is solvable. After all, the objects need to be 
serializable in the end. The closure cleaner cannot magically make 
non-serializable elements serializable.

Can you share the code that causes this problem?

Can you avoid having an instantiated Wrapper by the time the class is 
serialized?

> ClosureCleaner.clean fails for scala's JavaConverters wrapper classes
> -
>
> Key: FLINK-6866
> URL: https://issues.apache.org/jira/browse/FLINK-6866
> Project: Flink
>  Issue Type: Bug
>  Components: DataStream API, Scala API
>Affects Versions: 1.2.0, 1.3.0
> Environment: Scala 2.10.6, Scala 2.11.11
> Does not appear using Scala 2.12
>Reporter: SmedbergM
>
> MWE: https://github.com/SmedbergM/ClosureCleanerBug
> MWE console output: 
> https://gist.github.com/SmedbergM/ce969e6e8540da5b59c7dd921a496dc5



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (FLINK-7005) Optimization steps are missing for nested registered tables

2017-06-26 Thread Timo Walther (JIRA)
Timo Walther created FLINK-7005:
---

 Summary: Optimization steps are missing for nested registered 
tables
 Key: FLINK-7005
 URL: https://issues.apache.org/jira/browse/FLINK-7005
 Project: Flink
  Issue Type: Bug
  Components: Table API & SQL
Reporter: Timo Walther
Assignee: Timo Walther


Tables that are registered (implicitly or explicitly) do not pass the first 
three optimization steps:

- decorrelate
- convert time indicators
- normalize the logical plan

E.g. this has the wrong plan right now:

{code}
val table = stream.toTable(tEnv, 'rowtime.rowtime, 'int, 'double, 'float, 
'bigdec, 'string)

val table1 = tEnv.sql(s"""SELECT 1 + 1 FROM $table""") // not optimized
val table2 = tEnv.sql(s"""SELECT myrt FROM $table1""")

val results = table2.toAppendStream[Row]
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #3951: [FLINK-6461] Replace usages of deprecated web port key

2017-06-26 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3951
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6461) Deprecate web-related configuration defaults in ConfigConstants

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062901#comment-16062901
 ] 

ASF GitHub Bot commented on FLINK-6461:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/3951
  
merging.


> Deprecate web-related configuration defaults in ConfigConstants
> ---
>
> Key: FLINK-6461
> URL: https://issues.apache.org/jira/browse/FLINK-6461
> Project: Flink
>  Issue Type: Improvement
>  Components: Webfrontend
>Affects Versions: 1.3.0, 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.4.0
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (FLINK-7005) Optimization steps are missing for nested registered tables

2017-06-26 Thread Timo Walther (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timo Walther updated FLINK-7005:

Affects Version/s: 1.3.0
   1.3.1

> Optimization steps are missing for nested registered tables
> ---
>
> Key: FLINK-7005
> URL: https://issues.apache.org/jira/browse/FLINK-7005
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Affects Versions: 1.3.0, 1.3.1
>Reporter: Timo Walther
>Assignee: Timo Walther
>
> Tables that are registered (implicitly or explicitly) do not pass the first 
> three optimization steps:
> - decorrelate
> - convert time indicators
> - normalize the logical plan
> E.g. this has the wrong plan right now:
> {code}
> val table = stream.toTable(tEnv, 'rowtime.rowtime, 'int, 'double, 'float, 'bigdec, 'string)
> val table1 = tEnv.sql(s"""SELECT 1 + 1 FROM $table""") // not optimized
> val table2 = tEnv.sql(s"""SELECT myrt FROM $table1""")
> val results = table2.toAppendStream[Row]
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4178: [FLINK-7000] Add custom configuration local enviro...

2017-06-26 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4178#discussion_r123979379
  
--- Diff: 
flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/StreamExecutionEnvironment.scala
 ---
@@ -768,6 +768,18 @@ object StreamExecutionEnvironment {
   }
 
   /**
+   * Creates a local execution environment. The local execution 
environment will run the
+   * program in a multi-threaded fashion in the same JVM as the 
environment was created in.
+   *
+   * @param parallelism   The parallelism for the local environment.
+   * @param configuration Pass a custom configuration into the cluster.
+   */
+  def createLocalEnvironment(parallelism: Int, configuration: 
Configuration):
--- End diff --

well that's unfortunate.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-7000) Add custom configuration for StreamExecutionEnvironment#createLocalEnvironment

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062907#comment-16062907
 ] 

ASF GitHub Bot commented on FLINK-7000:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4178#discussion_r123979379
  
--- Diff: 
flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/StreamExecutionEnvironment.scala
 ---
@@ -768,6 +768,18 @@ object StreamExecutionEnvironment {
   }
 
   /**
+   * Creates a local execution environment. The local execution 
environment will run the
+   * program in a multi-threaded fashion in the same JVM as the 
environment was created in.
+   *
+   * @param parallelism   The parallelism for the local environment.
+   * @param configuration Pass a custom configuration into the cluster.
+   */
+  def createLocalEnvironment(parallelism: Int, configuration: 
Configuration):
--- End diff --

well that's unfortunate.


> Add custom configuration for StreamExecutionEnvironment#createLocalEnvironment
> --
>
> Key: FLINK-7000
> URL: https://issues.apache.org/jira/browse/FLINK-7000
> Project: Flink
>  Issue Type: Improvement
>Reporter: Lim Chee Hau
>
> I was doing some local testing in a {{Scala}} environment, and found that 
> there is no straightforward way to pass a custom configuration to 
> {{StreamExecutionEnvironment}} via the {{createLocalEnvironment}} method. 
> This is easily achieved in a {{Java}} environment, since 
> {{StreamExecutionEnvironment}} in {{Java}} has 
> - {{createLocalEnvironment()}}
> - {{createLocalEnvironment(Int)}}
> - {{createLocalEnvironment(Int, Configuration)}}
> whereas Scala only has 2 out of 3 of these methods.
> Not sure if this is a missing implementation; if so, I could create a PR for 
> this.
> Therefore the example in [Local 
> Execution|https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/local_execution.html]
>  would make sense for Scala users as well:
> bq. The LocalEnvironment allows also to pass custom configuration values to 
> Flink.
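For reference, a usage sketch of the proposed Scala overload (the signature is taken from the diff above; the configuration key and value are only illustrative):

{code}
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val conf = new Configuration()
conf.setString("taskmanager.memory.size", "512") // illustrative key/value
val env = StreamExecutionEnvironment.createLocalEnvironment(2, conf)
// jobs built on env now run in the local JVM with the custom configuration applied
{code}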



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4168: [FLINK-6987] Fix erroneous when path containing sp...

2017-06-26 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4168#discussion_r123980520
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java ---
@@ -461,8 +463,9 @@ public LocatableInputSplitAssigner 
getInputSplitAssigner(FileInputSplit[] splits

// take the desired number of splits into account
minNumSplits = Math.max(minNumSplits, this.numSplits);
-   
-   final Path path = this.filePath;
+
+   final Path path = new 
Path(URLDecoder.decode(this.filePath.toString(), 
Charset.defaultCharset().name()));
--- End diff --

I would keep the conversion to a URI for the sake of making fewer changes 
to the test.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6987) TextInputFormatTest fails when run in path containing spaces

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062913#comment-16062913
 ] 

ASF GitHub Bot commented on FLINK-6987:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4168#discussion_r123980520
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java ---
@@ -461,8 +463,9 @@ public LocatableInputSplitAssigner 
getInputSplitAssigner(FileInputSplit[] splits

// take the desired number of splits into account
minNumSplits = Math.max(minNumSplits, this.numSplits);
-   
-   final Path path = this.filePath;
+
+   final Path path = new 
Path(URLDecoder.decode(this.filePath.toString(), 
Charset.defaultCharset().name()));
--- End diff --

I would keep the conversion to a URI for the sake of making fewer changes 
to the test.


> TextInputFormatTest fails when run in path containing spaces
> 
>
> Key: FLINK-6987
> URL: https://issues.apache.org/jira/browse/FLINK-6987
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.3.1
>Reporter: Timo Walther
>Assignee: mingleizhang
>
> The test {{TextInputFormatTest.testNestedFileRead}} fails if the path 
> contains spaces.
> Reason: "Test erroneous"
> I was building Flink on MacOS 10.12.5 and the folder was called "flink-1.3.1 
> 2".



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4156: [FLINK-6655] Add validateAndNormalizeUri method to...

2017-06-26 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4156#discussion_r123980821
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/MemoryArchivist.scala
 ---
@@ -255,4 +255,75 @@ class MemoryArchivist(
   graphs.remove(jobID)
 }
   }
+
+  /**
+* Checks and normalizes the archive path URI. This method first checks 
the validity of the
+* URI (scheme, path, availability of a matching file system) and then 
normalizes the URL
+* to a path.
+*
+* If the URI does not include an authority, but the file system 
configured for the URI has an
+* authority, then the normalized path will include this authority.
+*
+* @param archivePathUri The URI to check and normalize.
+* @return a normalized URI as a Path.
+*
+* @throws IllegalArgumentException Thrown, if the URI misses schema or 
path.
+* @throws IOException Thrown, if no file system can be found for the 
URI's scheme.
+*/
+  @throws[IOException]
+  private def validateAndNormalizeUri(archivePathUri: URI): Path = {
+val scheme = archivePathUri.getScheme
+val path = archivePathUri.getPath
+
+// some validity checks
+if (scheme == null) {
+  throw new IllegalArgumentException("The scheme (hdfs://, file://, 
etc) is null. " +
+"Please specify the file system scheme explicitly in the URI.")
+}
+
+if (path == null) {
+  throw new IllegalArgumentException("The path to store the archive 
job is null. " +
+"Please specify a directory path for archive.")
--- End diff --

-> "for storing job archives."


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4156: [FLINK-6655] Add validateAndNormalizeUri method to...

2017-06-26 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4156#discussion_r123980977
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/MemoryArchivist.scala
 ---
@@ -255,4 +255,75 @@ class MemoryArchivist(
   graphs.remove(jobID)
 }
   }
+
+  /**
+* Checks and normalizes the archive path URI. This method first checks 
the validity of the
+* URI (scheme, path, availability of a matching file system) and then 
normalizes the URL
+* to a path.
+*
+* If the URI does not include an authority, but the file system 
configured for the URI has an
+* authority, then the normalized path will include this authority.
+*
+* @param archivePathUri The URI to check and normalize.
+* @return a normalized URI as a Path.
+*
+* @throws IllegalArgumentException Thrown, if the URI misses schema or 
path.
+* @throws IOException Thrown, if no file system can be found for the 
URI's scheme.
+*/
+  @throws[IOException]
+  private def validateAndNormalizeUri(archivePathUri: URI): Path = {
+val scheme = archivePathUri.getScheme
+val path = archivePathUri.getPath
+
+// some validity checks
+if (scheme == null) {
+  throw new IllegalArgumentException("The scheme (hdfs://, file://, 
etc) is null. " +
+"Please specify the file system scheme explicitly in the URI.")
+}
+
+if (path == null) {
+  throw new IllegalArgumentException("The path to store the archive 
job is null. " +
+"Please specify a directory path for archive.")
+}
+
+if (path.length == 0 || path == "/") {
+  throw new IllegalArgumentException("Cannot use the root directory 
for archive.")
+}
+if (!FileSystem.isFlinkSupportedScheme(archivePathUri.getScheme)) {
+  // skip verification checks for non-flink supported filesystem
+  // this is because the required filesystem classes may not be 
available to the flink client
+  new Path(archivePathUri)
+}
+else {
--- End diff --

we don't need this branch, as we only access the path from the jobmanager.
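A sketch of the simplified tail per this comment (hypothetical, since the rest of the method is cut off in the diff):

{code}
// always validate: this method runs on the JobManager, where the
// file system classes for the scheme should be available
FileSystem.get(archivePathUri) // throws IOException if no file system matches the scheme
new Path(archivePathUri)
{code}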


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4156: [FLINK-6655] Add validateAndNormalizeUri method to...

2017-06-26 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4156#discussion_r123980868
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/MemoryArchivist.scala
 ---
@@ -255,4 +255,75 @@ class MemoryArchivist(
   graphs.remove(jobID)
 }
   }
+
+  /**
+* Checks and normalizes the archive path URI. This method first checks 
the validity of the
+* URI (scheme, path, availability of a matching file system) and then 
normalizes the URL
+* to a path.
+*
+* If the URI does not include an authority, but the file system 
configured for the URI has an
+* authority, then the normalized path will include this authority.
+*
+* @param archivePathUri The URI to check and normalize.
+* @return a normalized URI as a Path.
+*
+* @throws IllegalArgumentException Thrown, if the URI misses schema or 
path.
+* @throws IOException Thrown, if no file system can be found for the 
URI's scheme.
+*/
+  @throws[IOException]
+  private def validateAndNormalizeUri(archivePathUri: URI): Path = {
+val scheme = archivePathUri.getScheme
+val path = archivePathUri.getPath
+
+// some validity checks
+if (scheme == null) {
+  throw new IllegalArgumentException("The scheme (hdfs://, file://, 
etc) is null. " +
+"Please specify the file system scheme explicitly in the URI.")
+}
+
+if (path == null) {
+  throw new IllegalArgumentException("The path to store the archive 
job is null. " +
+"Please specify a directory path for archive.")
+}
+
+if (path.length == 0 || path == "/") {
+  throw new IllegalArgumentException("Cannot use the root directory 
for archive.")
--- End diff --

-> "for storing job archives."


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4156: [FLINK-6655] Add validateAndNormalizeUri method to...

2017-06-26 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4156#discussion_r123980694
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/MemoryArchivist.scala
 ---
@@ -255,4 +255,75 @@ class MemoryArchivist(
   graphs.remove(jobID)
 }
   }
+
+  /**
+* Checks and normalizes the archive path URI. This method first checks 
the validity of the
+* URI (scheme, path, availability of a matching file system) and then 
normalizes the URL
+* to a path.
+*
+* If the URI does not include an authority, but the file system 
configured for the URI has an
+* authority, then the normalized path will include this authority.
+*
+* @param archivePathUri The URI to check and normalize.
+* @return a normalized URI as a Path.
+*
+* @throws IllegalArgumentException Thrown, if the URI misses schema or 
path.
+* @throws IOException Thrown, if no file system can be found for the 
URI's scheme.
+*/
+  @throws[IOException]
+  private def validateAndNormalizeUri(archivePathUri: URI): Path = {
+val scheme = archivePathUri.getScheme
+val path = archivePathUri.getPath
+
+// some validity checks
+if (scheme == null) {
+  throw new IllegalArgumentException("The scheme (hdfs://, file://, 
etc) is null. " +
--- End diff --

Let's include the config option key in the error message.
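For example, the message could name the option directly (a sketch; the key is the one from the linked issue):

{code}
throw new IllegalArgumentException(
  "The scheme (hdfs://, file://, etc) is null for 'jobmanager.archive.fs.dir'. " +
    "Please specify the file system scheme explicitly in the URI.")
{code}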


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] flink pull request #4156: [FLINK-6655] Add validateAndNormalizeUri method to...

2017-06-26 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4156#discussion_r123980754
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/MemoryArchivist.scala
 ---
@@ -255,4 +255,75 @@ class MemoryArchivist(
   graphs.remove(jobID)
 }
   }
+
+  /**
+* Checks and normalizes the archive path URI. This method first checks 
the validity of the
+* URI (scheme, path, availability of a matching file system) and then 
normalizes the URL
+* to a path.
+*
+* If the URI does not include an authority, but the file system 
configured for the URI has an
+* authority, then the normalized path will include this authority.
+*
+* @param archivePathUri The URI to check and normalize.
+* @return a normalized URI as a Path.
+*
+* @throws IllegalArgumentException Thrown, if the URI misses schema or 
path.
+* @throws IOException Thrown, if no file system can be found for the 
URI's scheme.
+*/
+  @throws[IOException]
+  private def validateAndNormalizeUri(archivePathUri: URI): Path = {
+val scheme = archivePathUri.getScheme
+val path = archivePathUri.getPath
+
+// some validity checks
+if (scheme == null) {
+  throw new IllegalArgumentException("The scheme (hdfs://, file://, 
etc) is null. " +
+"Please specify the file system scheme explicitly in the URI.")
+}
+
+if (path == null) {
+  throw new IllegalArgumentException("The path to store the archive 
job is null. " +
--- End diff --

again, config option key. Also, it should say "to store the job archives is 
null,"


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6655) Misleading error message when HistoryServer path is empty

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062918#comment-16062918
 ] 

ASF GitHub Bot commented on FLINK-6655:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4156#discussion_r123980754
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/MemoryArchivist.scala
 ---
@@ -255,4 +255,75 @@ class MemoryArchivist(
   graphs.remove(jobID)
 }
   }
+
+  /**
+* Checks and normalizes the archive path URI. This method first checks 
the validity of the
+* URI (scheme, path, availability of a matching file system) and then 
normalizes the URL
+* to a path.
+*
+* If the URI does not include an authority, but the file system 
configured for the URI has an
+* authority, then the normalized path will include this authority.
+*
+* @param archivePathUri The URI to check and normalize.
+* @return a normalized URI as a Path.
+*
+* @throws IllegalArgumentException Thrown, if the URI misses schema or 
path.
+* @throws IOException Thrown, if no file system can be found for the 
URI's scheme.
+*/
+  @throws[IOException]
+  private def validateAndNormalizeUri(archivePathUri: URI): Path = {
+val scheme = archivePathUri.getScheme
+val path = archivePathUri.getPath
+
+// some validity checks
+if (scheme == null) {
+  throw new IllegalArgumentException("The scheme (hdfs://, file://, 
etc) is null. " +
+"Please specify the file system scheme explicitly in the URI.")
+}
+
+if (path == null) {
+  throw new IllegalArgumentException("The path to store the archive 
job is null. " +
--- End diff --

again, config option key. Also, it should say "to store the job archives is 
null,"


> Misleading error message when HistoryServer path is empty
> -
>
> Key: FLINK-6655
> URL: https://issues.apache.org/jira/browse/FLINK-6655
> Project: Flink
>  Issue Type: Bug
>  Components: History Server
>Affects Versions: 1.3.0
>Reporter: Timo Walther
>Assignee: mingleizhang
>Priority: Minor
>
> If the HistoryServer {{jobmanager.archive.fs.dir}} is set to e.g. {{file://}}, the 
> following exception mentions checkpoints, which is misleading.
> {code}
> java.lang.IllegalArgumentException: Cannot use the root directory for 
> checkpoints.
>   at 
> org.apache.flink.runtime.state.filesystem.FsStateBackend.validateAndNormalizeUri(FsStateBackend.java:358)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist.org$apache$flink$runtime$jobmanager$MemoryArchivist$$archiveJsonFiles(MemoryArchivist.scala:201)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist$$anonfun$handleMessage$1.applyOrElse(MemoryArchivist.scala:107)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist.aroundReceive(MemoryArchivist.scala:65)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6655) Misleading error message when HistoryServer path is empty

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062917#comment-16062917
 ] 

ASF GitHub Bot commented on FLINK-6655:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4156#discussion_r123980821
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/MemoryArchivist.scala
 ---
@@ -255,4 +255,75 @@ class MemoryArchivist(
   graphs.remove(jobID)
 }
   }
+
+  /**
+* Checks and normalizes the archive path URI. This method first checks 
the validity of the
+* URI (scheme, path, availability of a matching file system) and then 
normalizes the URL
+* to a path.
+*
+* If the URI does not include an authority, but the file system 
configured for the URI has an
+* authority, then the normalized path will include this authority.
+*
+* @param archivePathUri The URI to check and normalize.
+* @return a normalized URI as a Path.
+*
+* @throws IllegalArgumentException Thrown, if the URI misses schema or 
path.
+* @throws IOException Thrown, if no file system can be found for the 
URI's scheme.
+*/
+  @throws[IOException]
+  private def validateAndNormalizeUri(archivePathUri: URI): Path = {
+val scheme = archivePathUri.getScheme
+val path = archivePathUri.getPath
+
+// some validity checks
+if (scheme == null) {
+  throw new IllegalArgumentException("The scheme (hdfs://, file://, 
etc) is null. " +
+"Please specify the file system scheme explicitly in the URI.")
+}
+
+if (path == null) {
+  throw new IllegalArgumentException("The path to store the archive 
job is null. " +
+"Please specify a directory path for archive.")
--- End diff --

-> "for storing job archives."


> Misleading error message when HistoryServer path is empty
> -
>
> Key: FLINK-6655
> URL: https://issues.apache.org/jira/browse/FLINK-6655
> Project: Flink
>  Issue Type: Bug
>  Components: History Server
>Affects Versions: 1.3.0
>Reporter: Timo Walther
>Assignee: mingleizhang
>Priority: Minor
>
> If the HistoryServer {{jobmanager.archive.fs.dir}} is set to e.g. {{file://}}, the 
> following exception mentions checkpoints, which is misleading.
> {code}
> java.lang.IllegalArgumentException: Cannot use the root directory for 
> checkpoints.
>   at 
> org.apache.flink.runtime.state.filesystem.FsStateBackend.validateAndNormalizeUri(FsStateBackend.java:358)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist.org$apache$flink$runtime$jobmanager$MemoryArchivist$$archiveJsonFiles(MemoryArchivist.scala:201)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist$$anonfun$handleMessage$1.applyOrElse(MemoryArchivist.scala:107)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist.aroundReceive(MemoryArchivist.scala:65)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6655) Misleading error message when HistoryServer path is empty

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062919#comment-16062919
 ] 

ASF GitHub Bot commented on FLINK-6655:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4156#discussion_r123980868
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/MemoryArchivist.scala
 ---
@@ -255,4 +255,75 @@ class MemoryArchivist(
   graphs.remove(jobID)
 }
   }
+
+  /**
+* Checks and normalizes the archive path URI. This method first checks 
the validity of the
+* URI (scheme, path, availability of a matching file system) and then 
normalizes the URL
+* to a path.
+*
+* If the URI does not include an authority, but the file system 
configured for the URI has an
+* authority, then the normalized path will include this authority.
+*
+* @param archivePathUri The URI to check and normalize.
+* @return a normalized URI as a Path.
+*
+* @throws IllegalArgumentException Thrown, if the URI misses schema or 
path.
+* @throws IOException Thrown, if no file system can be found for the 
URI's scheme.
+*/
+  @throws[IOException]
+  private def validateAndNormalizeUri(archivePathUri: URI): Path = {
+val scheme = archivePathUri.getScheme
+val path = archivePathUri.getPath
+
+// some validity checks
+if (scheme == null) {
+  throw new IllegalArgumentException("The scheme (hdfs://, file://, 
etc) is null. " +
+"Please specify the file system scheme explicitly in the URI.")
+}
+
+if (path == null) {
+  throw new IllegalArgumentException("The path to store the archive 
job is null. " +
+"Please specify a directory path for archive.")
+}
+
+if (path.length == 0 || path == "/") {
+  throw new IllegalArgumentException("Cannot use the root directory 
for archive.")
--- End diff --

-> "for storing job archives."


> Misleading error message when HistoryServer path is empty
> -
>
> Key: FLINK-6655
> URL: https://issues.apache.org/jira/browse/FLINK-6655
> Project: Flink
>  Issue Type: Bug
>  Components: History Server
>Affects Versions: 1.3.0
>Reporter: Timo Walther
>Assignee: mingleizhang
>Priority: Minor
>
> If the HistoryServer {{jobmanager.archive.fs.dir}} is set to e.g. {{file://}}, the 
> following exception mentions checkpoints, which is misleading.
> {code}
> java.lang.IllegalArgumentException: Cannot use the root directory for 
> checkpoints.
>   at 
> org.apache.flink.runtime.state.filesystem.FsStateBackend.validateAndNormalizeUri(FsStateBackend.java:358)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist.org$apache$flink$runtime$jobmanager$MemoryArchivist$$archiveJsonFiles(MemoryArchivist.scala:201)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist$$anonfun$handleMessage$1.applyOrElse(MemoryArchivist.scala:107)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist.aroundReceive(MemoryArchivist.scala:65)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (FLINK-6655) Misleading error message when HistoryServer path is empty

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062920#comment-16062920
 ] 

ASF GitHub Bot commented on FLINK-6655:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4156#discussion_r123980977
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/MemoryArchivist.scala
 ---
@@ -255,4 +255,75 @@ class MemoryArchivist(
   graphs.remove(jobID)
 }
   }
+
+  /**
+* Checks and normalizes the archive path URI. This method first checks 
the validity of the
+* URI (scheme, path, availability of a matching file system) and then 
normalizes the URL
+* to a path.
+*
+* If the URI does not include an authority, but the file system 
configured for the URI has an
+* authority, then the normalized path will include this authority.
+*
+* @param archivePathUri The URI to check and normalize.
+* @return a normalized URI as a Path.
+*
+* @throws IllegalArgumentException Thrown, if the URI misses schema or 
path.
+* @throws IOException Thrown, if no file system can be found for the 
URI's scheme.
+*/
+  @throws[IOException]
+  private def validateAndNormalizeUri(archivePathUri: URI): Path = {
+val scheme = archivePathUri.getScheme
+val path = archivePathUri.getPath
+
+// some validity checks
+if (scheme == null) {
+  throw new IllegalArgumentException("The scheme (hdfs://, file://, 
etc) is null. " +
+"Please specify the file system scheme explicitly in the URI.")
+}
+
+if (path == null) {
+  throw new IllegalArgumentException("The path to store the archive 
job is null. " +
+"Please specify a directory path for archive.")
+}
+
+if (path.length == 0 || path == "/") {
+  throw new IllegalArgumentException("Cannot use the root directory 
for archive.")
+}
+if (!FileSystem.isFlinkSupportedScheme(archivePathUri.getScheme)) {
+  // skip verification checks for non-flink supported filesystem
+  // this is because the required filesystem classes may not be 
available to the flink client
+  new Path(archivePathUri)
+}
+else {
--- End diff --

we don't need this branch, as we only access the path from the jobmanager.


> Misleading error message when HistoryServer path is empty
> -
>
> Key: FLINK-6655
> URL: https://issues.apache.org/jira/browse/FLINK-6655
> Project: Flink
>  Issue Type: Bug
>  Components: History Server
>Affects Versions: 1.3.0
>Reporter: Timo Walther
>Assignee: mingleizhang
>Priority: Minor
>
> If the HistoryServer {{jobmanager.archive.fs.dir}} is set to e.g. {{file://}}, the 
> following exception mentions checkpoints, which is misleading.
> {code}
> java.lang.IllegalArgumentException: Cannot use the root directory for 
> checkpoints.
>   at 
> org.apache.flink.runtime.state.filesystem.FsStateBackend.validateAndNormalizeUri(FsStateBackend.java:358)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist.org$apache$flink$runtime$jobmanager$MemoryArchivist$$archiveJsonFiles(MemoryArchivist.scala:201)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist$$anonfun$handleMessage$1.applyOrElse(MemoryArchivist.scala:107)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist.aroundReceive(MemoryArchivist.scala:65)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

[jira] [Commented] (FLINK-6655) Misleading error message when HistoryServer path is empty

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062921#comment-16062921
 ] 

ASF GitHub Bot commented on FLINK-6655:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4156#discussion_r123980694
  
--- Diff: 
flink-runtime/src/main/scala/org/apache/flink/runtime/jobmanager/MemoryArchivist.scala
 ---
@@ -255,4 +255,75 @@ class MemoryArchivist(
   graphs.remove(jobID)
 }
   }
+
+  /**
+* Checks and normalizes the archive path URI. This method first checks 
the validity of the
+* URI (scheme, path, availability of a matching file system) and then 
normalizes the URL
+* to a path.
+*
+* If the URI does not include an authority, but the file system 
configured for the URI has an
+* authority, then the normalized path will include this authority.
+*
+* @param archivePathUri The URI to check and normalize.
+* @return a normalized URI as a Path.
+*
+* @throws IllegalArgumentException Thrown, if the URI misses schema or 
path.
+* @throws IOException Thrown, if no file system can be found for the 
URI's scheme.
+*/
+  @throws[IOException]
+  private def validateAndNormalizeUri(archivePathUri: URI): Path = {
+val scheme = archivePathUri.getScheme
+val path = archivePathUri.getPath
+
+// some validity checks
+if (scheme == null) {
+  throw new IllegalArgumentException("The scheme (hdfs://, file://, 
etc) is null. " +
--- End diff --

Let's include the config option key in the error message.


> Misleading error message when HistoryServer path is empty
> -
>
> Key: FLINK-6655
> URL: https://issues.apache.org/jira/browse/FLINK-6655
> Project: Flink
>  Issue Type: Bug
>  Components: History Server
>Affects Versions: 1.3.0
>Reporter: Timo Walther
>Assignee: mingleizhang
>Priority: Minor
>
> If the HistoryServer {{jobmanager.archive.fs.dir}} is set to e.g. {{file://}}, the 
> following exception mentions checkpoints, which is misleading.
> {code}
> java.lang.IllegalArgumentException: Cannot use the root directory for 
> checkpoints.
>   at 
> org.apache.flink.runtime.state.filesystem.FsStateBackend.validateAndNormalizeUri(FsStateBackend.java:358)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist.org$apache$flink$runtime$jobmanager$MemoryArchivist$$archiveJsonFiles(MemoryArchivist.scala:201)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist$$anonfun$handleMessage$1.applyOrElse(MemoryArchivist.scala:107)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
>   at 
> scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:33)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.apply(LogMessages.scala:28)
>   at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
>   at 
> org.apache.flink.runtime.LogMessages$$anon$1.applyOrElse(LogMessages.scala:28)
>   at akka.actor.Actor$class.aroundReceive(Actor.scala:467)
>   at 
> org.apache.flink.runtime.jobmanager.MemoryArchivist.aroundReceive(MemoryArchivist.scala:65)
>   at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
>   at akka.actor.ActorCell.invoke(ActorCell.scala:487)
>   at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
>   at akka.dispatch.Mailbox.run(Mailbox.scala:220)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
>   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>   at 
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>   at 
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Closed] (FLINK-6868) Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra Connector`

2017-06-26 Thread Chesnay Schepler (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-6868.
---
Resolution: Fixed

[~StephanEwen] yes, thanks for noticing.

1.4: b2062eeeaf68b37aa91e71b65b5201df04dab992

> Using `scala.binary.version` for `flink-streaming-scala` in `Cassandra 
> Connector`
> -
>
> Key: FLINK-6868
> URL: https://issues.apache.org/jira/browse/FLINK-6868
> Project: Flink
>  Issue Type: Bug
>  Components: Build System, Cassandra Connector
>Affects Versions: 1.4.0
>Reporter: Benedict Jin
>Assignee: Benedict Jin
>Priority: Blocker
> Fix For: 1.4.0
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> Should use `scala.binary.version` for `flink-streaming-scala` in the `Cassandra 
> Connector`.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4022: [FLINK-5488] stop YarnClient before exception is thrown

2017-06-26 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4022
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-5488) yarnClient should be closed in AbstractYarnClusterDescriptor for error conditions

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16062926#comment-16062926
 ] 

ASF GitHub Bot commented on FLINK-5488:
---

Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4022
  
merging.


> yarnClient should be closed in AbstractYarnClusterDescriptor for error 
> conditions
> -
>
> Key: FLINK-5488
> URL: https://issues.apache.org/jira/browse/FLINK-5488
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Reporter: Ted Yu
>Assignee: Fang Yong
>
> Here is one example:
> {code}
> if (jobManagerMemoryMb > maxRes.getMemory()) {
>   failSessionDuringDeployment(yarnClient, yarnApplication);
>   throw new YarnDeploymentException("The cluster does not have the requested resources for the JobManager available!\n"
>     + "Maximum Memory: " + maxRes.getMemory() + "MB Requested: " + jobManagerMemoryMb + "MB. " + NOTE);
> }
> {code}
> yarnClient implements Closeable.
> It should be closed in situations where an exception is thrown.
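One way to guarantee this on every error path is a small helper that stops the client before rethrowing (a hypothetical sketch, not the actual PR):

{code}
import org.apache.hadoop.yarn.client.api.YarnClient

// runs a deployment step and stops the YarnClient before propagating any failure
def withClientCleanup[T](yarnClient: YarnClient)(body: => T): T =
  try body
  catch {
    case t: Throwable =>
      yarnClient.stop() // YarnClient is a Hadoop service; stop() releases its resources
      throw t
  }
{code}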



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #4123: [FLINK-6498] Migrate Zookeeper configuration options

2017-06-26 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4123
  
merging.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

