[jira] [Commented] (FLINK-5488) yarnClient should be closed in AbstractYarnClusterDescriptor for error conditions

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058798#comment-16058798
 ] 

ASF GitHub Bot commented on FLINK-5488:
---

Github user zjureel commented on a diff in the pull request:

https://github.com/apache/flink/pull/4022#discussion_r123425557
  
--- Diff: flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterClientV2.java ---
@@ -146,7 +146,9 @@ public ApplicationStatus getApplicationStatus() {
 
@Override
public void finalizeCluster() {
-   // Do nothing
--- End diff --

I have reverted the changes and fixed another problem that Ted Yu mentioned in 
FLINK-5488, thanks @zentol 


> yarnClient should be closed in AbstractYarnClusterDescriptor for error 
> conditions
> -
>
> Key: FLINK-5488
> URL: https://issues.apache.org/jira/browse/FLINK-5488
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Reporter: Ted Yu
>Assignee: Fang Yong
>
> Here is one example:
> {code}
> if (jobManagerMemoryMb > maxRes.getMemory()) {
>   failSessionDuringDeployment(yarnClient, yarnApplication);
>   throw new YarnDeploymentException("The cluster does not have the requested resources for the JobManager available!\n"
>     + "Maximum Memory: " + maxRes.getMemory() + "MB Requested: " + jobManagerMemoryMb + "MB. " + NOTE);
> }
> {code}
> yarnClient implements Closeable.
> It should be closed in situations where an exception is thrown.
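The suggested fix can be sketched in plain Java. `FakeYarnClient` below is a hypothetical stand-in for Hadoop's `YarnClient`, used only to keep the example self-contained; the point is that the client is closed on the error path before the exception propagates:

```java
import java.io.Closeable;

// Self-contained sketch of the suggested fix. FakeYarnClient is a hypothetical
// stand-in for Hadoop's YarnClient; it only records whether close() was called.
class FakeYarnClient implements Closeable {
    boolean closed = false;

    @Override
    public void close() {
        closed = true;
    }
}

class DeploymentSketch {
    // Close the client on the error path before the exception propagates,
    // mirroring the pattern proposed for AbstractYarnClusterDescriptor.
    static void checkResources(FakeYarnClient yarnClient, int requestedMb, int maxMb) {
        if (requestedMb > maxMb) {
            yarnClient.close(); // release resources before failing
            throw new IllegalStateException(
                "The cluster does not have the requested resources for the JobManager available! "
                    + "Maximum Memory: " + maxMb + "MB Requested: " + requestedMb + "MB.");
        }
        // normal deployment would continue here
    }
}
```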



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink pull request #4022: [FLINK-5488] stop YarnClient before exception is t...

2017-06-21 Thread zjureel
Github user zjureel commented on a diff in the pull request:

https://github.com/apache/flink/pull/4022#discussion_r123425557
  
--- Diff: flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterClientV2.java ---
@@ -146,7 +146,9 @@ public ApplicationStatus getApplicationStatus() {
 
@Override
public void finalizeCluster() {
-   // Do nothing
--- End diff --

I have reverted the changes and fixed another problem that Ted Yu mentioned in 
FLINK-5488, thanks @zentol 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6498) Migrate Zookeeper configuration options

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058795#comment-16058795
 ] 

ASF GitHub Bot commented on FLINK-6498:
---

Github user zjureel commented on a diff in the pull request:

https://github.com/apache/flink/pull/4123#discussion_r123424835
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java ---
@@ -370,11 +370,11 @@ public static String generateZookeeperPath(String root, String namespace) {
 * Return the configured {@link ZkClientACLMode}.
 *
 * @param config The config to parse
-* @return Configured ACL mode or {@link HighAvailabilityOptions#ZOOKEEPER_CLIENT_ACL} if not
+* @return Configured ACL mode or "open" if not
--- End diff --

It's nicer to me too, thanks


> Migrate Zookeeper configuration options
> ---
>
> Key: FLINK-6498
> URL: https://issues.apache.org/jira/browse/FLINK-6498
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Reporter: Chesnay Schepler
>Assignee: Fang Yong
>






[GitHub] flink pull request #4123: [FLINK-6498] Migrate Zookeeper configuration optio...

2017-06-21 Thread zjureel
Github user zjureel commented on a diff in the pull request:

https://github.com/apache/flink/pull/4123#discussion_r123424835
  
--- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java ---
@@ -370,11 +370,11 @@ public static String generateZookeeperPath(String root, String namespace) {
 * Return the configured {@link ZkClientACLMode}.
 *
 * @param config The config to parse
-* @return Configured ACL mode or {@link HighAvailabilityOptions#ZOOKEEPER_CLIENT_ACL} if not
+* @return Configured ACL mode or "open" if not
--- End diff --

It's nicer to me too, thanks




[jira] [Commented] (FLINK-6924) ADD LOG(X) supported in TableAPI

2017-06-21 Thread zjuwangg (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058726#comment-16058726
 ] 

zjuwangg commented on FLINK-6924:
-

This subtask can be assigned to me if possible, :)

> ADD LOG(X) supported in TableAPI
> 
>
> Key: FLINK-6924
> URL: https://issues.apache.org/jira/browse/FLINK-6924
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6891 for detail.





[jira] [Commented] (FLINK-1390) java.lang.ClassCastException: X cannot be cast to X

2017-06-21 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058708#comment-16058708
 ] 

Erik van Oosten commented on FLINK-1390:


See 
https://issues.apache.org/jira/browse/FLINK-5633?focusedCommentId=16058706=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16058706
 for a proper solution.

>  java.lang.ClassCastException: X cannot be cast to X
> 
>
> Key: FLINK-1390
> URL: https://issues.apache.org/jira/browse/FLINK-1390
> Project: Flink
>  Issue Type: Bug
>  Components: YARN
>Affects Versions: 0.8.0
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>
> A user is affected by an issue, which is probably caused by different 
> classloaders being used for loading user classes.
> Current state of investigation:
> - the error happens in yarn sessions (there is only a YARN environment 
> available)
> - the error doesn't happen on the first time the job is being executed. It 
> only happens on subsequent executions.





[jira] [Commented] (FLINK-5633) ClassCastException: X cannot be cast to X when re-submitting a job.

2017-06-21 Thread Erik van Oosten (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058706#comment-16058706
 ] 

Erik van Oosten commented on FLINK-5633:


In case you need throughput (like we do), the caching is indispensable. In 
those cases you can use the following {{SpecificData}} implementation. Simply 
instantiate it once and then pass that singleton instance to every 
{{SpecificDatumReader}}.

{code:scala|title=LocalCachingSpecificData.scala}
import java.lang.reflect.Constructor
import java.util.concurrent.ConcurrentHashMap

import org.apache.avro.Schema
import org.apache.avro.specific.SpecificData
import scala.collection.JavaConverters._

/**
  * This can be used instead of [[SpecificData]] in multi-classloader environments like Flink.
  * This variation removes the JVM singleton constructor cache and replaces it with a
  * cache that is local to the current class loader.
  *
  * If two Flink jobs use the same generated Avro code, they will still have separate
  * instances of the classes because they live in separate class loaders.
  * However, a JVM-wide singleton cache keeps a reference to the class in the first
  * class loader that was loaded. Any subsequent jobs will fail with a
  * [[ClassCastException]] because they will get incompatible classes.
  */
class LocalCachingSpecificData extends SpecificData {
  private val NO_ARG: Array[Class[_]] = Array.empty
  private val SCHEMA_ARG: Array[Class[_]] = Array(classOf[Schema])
  private val CTOR_CACHE: scala.collection.concurrent.Map[Class[_], Constructor[_]] =
    new ConcurrentHashMap[Class[_], Constructor[_]]().asScala

  /** Create an instance of a class.
    * If the class implements [[org.apache.avro.specific.SpecificData.SchemaConstructable]],
    * call a constructor with a [[org.apache.avro.Schema]] parameter, otherwise use a
    * no-arg constructor.
    */
  private def newInstance(c: Class[_], s: Schema): AnyRef = {
    val useSchema = classOf[SpecificData.SchemaConstructable].isAssignableFrom(c)
    val constructor = CTOR_CACHE.getOrElseUpdate(c, {
      val ctor = c.getDeclaredConstructor((if (useSchema) SCHEMA_ARG else NO_ARG): _*)
      ctor.setAccessible(true)
      ctor
    })
    if (useSchema) constructor.newInstance(s).asInstanceOf[AnyRef]
    else constructor.newInstance().asInstanceOf[AnyRef]
  }

  override def createFixed(old: AnyRef, schema: Schema): AnyRef = {
    val c = getClass(schema)
    if (c == null) super.createFixed(old, schema) // delegate to generic
    else if (c.isInstance(old)) old
    else newInstance(c, schema)
  }

  override def newRecord(old: AnyRef, schema: Schema): AnyRef = {
    val c = getClass(schema)
    if (c == null) super.newRecord(old, schema) // delegate to generic
    else if (c.isInstance(old)) old
    else newInstance(c, schema)
  }
}
{code}
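The core idea, a constructor cache scoped to the instance (and therefore to the classloader that created it) rather than to a JVM-wide static, can also be sketched in plain Java. Names below are illustrative, not from Avro or Flink:

```java
import java.lang.reflect.Constructor;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Plain-Java sketch of the same idea: the constructor cache lives on the
// instance (and dies with the classloader that created it) instead of in a
// JVM-wide static map.
class InstanceLocalInstantiator {
    private final ConcurrentMap<Class<?>, Constructor<?>> ctorCache = new ConcurrentHashMap<>();

    Object newInstance(Class<?> c) {
        Constructor<?> ctor = ctorCache.computeIfAbsent(c, cls -> {
            try {
                Constructor<?> k = cls.getDeclaredConstructor(); // no-arg only, for brevity
                k.setAccessible(true);
                return k;
            } catch (NoSuchMethodException e) {
                throw new IllegalStateException(e);
            }
        });
        try {
            return ctor.newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Sharing one such instance per job keeps lookups cached for throughput while avoiding the cross-classloader leak.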

> ClassCastException: X cannot be cast to X when re-submitting a job.
> ---
>
> Key: FLINK-5633
> URL: https://issues.apache.org/jira/browse/FLINK-5633
> Project: Flink
>  Issue Type: Bug
>  Components: Job-Submission, YARN
>Affects Versions: 1.1.4
>Reporter: Giuliano Caliari
>Priority: Minor
>
> I’m running a job on my local cluster and the first time I submit the job 
> everything works but whenever I cancel and re-submit the same job it fails 
> with:
> {quote}
> org.apache.flink.client.program.ProgramInvocationException: The program execution failed: Job execution failed.
>   at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
>   at org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:101)
>   at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:400)
>   at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
>   at org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:634)
>   at au.com.my.package.pTraitor.OneTrait.execute(Traitor.scala:147)
>   at au.com.my.package.pTraitor.TraitorAppOneTrait$.delayedEndpoint$au$com$my$package$pTraitor$TraitorAppOneTrait$1(TraitorApp.scala:22)
>   at au.com.my.package.pTraitor.TraitorAppOneTrait$delayedInit$body.apply(TraitorApp.scala:21)
>   at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
>   at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
>   at scala.App$$anonfun$main$1.apply(App.scala:76)
>   at scala.App$$anonfun$main$1.apply(App.scala:76)
>   at scala.collection.immutable.List.foreach(List.scala:381)
>   at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
>   at scala.App$class.main(App.scala:76)
>   at 

[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6810:
---
Labels:   (was: starter)

> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>
> In this JIRA, will create some sub-task for add specific scalar function, 
> such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}},string-functions {{LPAD}}, etc. 
> I think is good way to let SQL work, and then add TableAPI to supported. So I 
> suggest one scalar function create two sub-task, one is for SQL. another for 
> TableAPI.
> *Note:*
> Every scalar function should add TableAPI doc in  
> {{./docs/dev/table/tableApi.md#built-in-functions}}. 
> Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.
> Welcome anybody to add the sub-task about standard database scalar function.





[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6810:
---
Description: 
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

I think is good way to let SQL work, and then add TableAPI to supported. So I 
suggest one scalar function create two sub-task, one is for SQL. another for 
TableAPI.

*Note:*
Every scalar function should add TableAPI doc in  
{{./docs/dev/table/tableApi.md#built-in-functions}}. 
Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.

Welcome anybody to add the sub-task about standard database scalar function.




  was:
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

I think is good way to let SQL work, and then add TableAPI to supported. So I 
suggest one scalar function create two sub-task, one is for SQL. another for 
TableAPI.

*Note:*
Every scalar function should add TableAPI doc in  
{{./docs/dev/table/tableApi.md#built-in-functions}}. 
Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.

*Welcome anybody to add the sub-task about standard database scalar function. *





> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>
> In this JIRA, will create some sub-task for add specific scalar function, 
> such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}},string-functions {{LPAD}}, etc. 
> I think is good way to let SQL work, and then add TableAPI to supported. So I 
> suggest one scalar function create two sub-task, one is for SQL. another for 
> TableAPI.
> *Note:*
> Every scalar function should add TableAPI doc in  
> {{./docs/dev/table/tableApi.md#built-in-functions}}. 
> Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.
> Welcome anybody to add the sub-task about standard database scalar function.





[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6810:
---
Description: 
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

I think is good way to let SQL work, and then add TableAPI to supported. So I 
suggest one scalar function create two sub-task, one is for SQL. another for 
TableAPI.

*Note:*
Every scalar function should add TableAPI doc in  
{{./docs/dev/table/tableApi.md#built-in-functions}}. 
Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.

*Welcome anybody to add the sub-task about standard database scalar function. *




  was:
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

I think is good way to let SQL work, and then add TableAPI to supported. So I 
suggest one scalar function create two sub-task, one is for SQL. another for 
TableAPI.

*Note:*
Every scalar function should add TableAPI doc in  
{{./docs/dev/table/tableApi.md#built-in-functions}}. 
Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.
*
Welcome anybody to add the sub-task about standard database scalar function. *





> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> In this JIRA, will create some sub-task for add specific scalar function, 
> such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}},string-functions {{LPAD}}, etc. 
> I think is good way to let SQL work, and then add TableAPI to supported. So I 
> suggest one scalar function create two sub-task, one is for SQL. another for 
> TableAPI.
> *Note:*
> Every scalar function should add TableAPI doc in  
> {{./docs/dev/table/tableApi.md#built-in-functions}}. 
> Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.
> *Welcome anybody to add the sub-task about standard database scalar function. 
> *





[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6810:
---
Description: 
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

I think is good way to let SQL work, and then add TableAPI to supported. So I 
suggest one scalar function create two sub-task, one is for SQL. another for 
TableAPI.

*Note:*
Every scalar function should add TableAPI doc in  
{{./docs/dev/table/tableApi.md#built-in-functions}}. 
Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.
*
Welcome anybody to add the sub-task about standard database scalar function. *




  was:
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

I think is good way to let SQL work, and then add TableAPI to supported. So I 
suggest one scalar function create two sub-task, one is for SQL. another for 
TableAPI.

*Note:*
Every scalar function should add TableAPI doc in  
{{./docs/dev/table/tableApi.md#built-in-functions}}. 
Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.

Welcome anybody to add the sub-task about standard database scalar function. 





> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> In this JIRA, will create some sub-task for add specific scalar function, 
> such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}},string-functions {{LPAD}}, etc. 
> I think is good way to let SQL work, and then add TableAPI to supported. So I 
> suggest one scalar function create two sub-task, one is for SQL. another for 
> TableAPI.
> *Note:*
> Every scalar function should add TableAPI doc in  
> {{./docs/dev/table/tableApi.md#built-in-functions}}. 
> Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.
> *
> Welcome anybody to add the sub-task about standard database scalar function. *





[jira] [Commented] (FLINK-6966) Add maxParallelism and UIDs to all operators generated by the Table API / SQL

2017-06-21 Thread Jark Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058694#comment-16058694
 ] 

Jark Wu commented on FLINK-6966:


The key problem is how to generate a deterministic UID for the same query. We 
can learn from {{StreamGraphHasherV1#generateDeterministicHash}} which 
generates the default UID for operators. What should we take into account to 
generate the UID? The aggs, the window types?
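One way to approach this, sketched below with purely illustrative names (this is not Flink's actual hashing scheme), is to hash the logical properties of an operator, such as its kind, aggregate calls, and window type, into an identifier that is identical for identical queries:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrative sketch: derive a deterministic operator UID from the logical
// properties of the query, so the same query always yields the same UID.
class QueryUid {
    static String uidFor(String operatorKind, String... properties) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(operatorKind.getBytes(StandardCharsets.UTF_8));
            for (String p : properties) {
                md.update((byte) 0); // separator so ("ab","c") differs from ("a","bc")
                md.update(p.getBytes(StandardCharsets.UTF_8));
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b));
            }
            return hex.substring(0, 16); // short, stable identifier
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }
}
```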

> Add maxParallelism and UIDs to all operators generated by the Table API / SQL
> -
>
> Key: FLINK-6966
> URL: https://issues.apache.org/jira/browse/FLINK-6966
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Fabian Hueske
>Assignee: sunjincheng
>
> At the moment, the Table API does not assign UIDs and the max parallelism to 
> operators (except for operators with parallelism 1).
> We should do that to avoid problems when rescaling or restarting jobs from 
> savepoints.





[jira] [Assigned] (FLINK-6887) Split up CodeGenerator into several specific CodeGenerator

2017-06-21 Thread Jark Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-6887:
--

Assignee: Jark Wu

> Split up CodeGenerator into several specific CodeGenerator
> --
>
> Key: FLINK-6887
> URL: https://issues.apache.org/jira/browse/FLINK-6887
> Project: Flink
>  Issue Type: Task
>  Components: Table API & SQL
>Reporter: Jark Wu
>Assignee: Jark Wu
> Fix For: 1.4.0
>
>
> Currently, the {{CodeGenerator}} is very huge and a bit difficult to 
> maintain. I suggest splitting it up into several specific {{XXXCodeGenerator}} classes.
> For example, create an {{AggregationFunctionCodeGenerator}} class, make it 
> extend {{CodeGenerator}}, and move the {{def generateAggregations(...)}} 
> method to it. The same applies to {{TableFunctionCollectorCodeGenerator}} and 
> {{InputFormatCodeGenerator}}.
> What do you think? [~fhueske], [~twalthr], [~sunjincheng121]





[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6810:
---
Description: 
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

I think is good way to let SQL work, and then add TableAPI to supported. So I 
suggest one scalar function create two sub-task, one is for SQL. another for 
TableAPI.

*Note:*
Every scalar function should add TableAPI doc in  
{{./docs/dev/table/tableApi.md#built-in-functions}}. 
Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.

Welcome anybody to add the sub-task about standard database scalar function. 




  was:
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

I think is good way to let SQL work, and then add TableAPI to supported. So I 
suggest one scalar function create two sub-task, one is for SQL. another for 
TableAPI.

Welcome anybody to add the sub-task about standard database scalar function. 




> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> In this JIRA, will create some sub-task for add specific scalar function, 
> such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}},string-functions {{LPAD}}, etc. 
> I think is good way to let SQL work, and then add TableAPI to supported. So I 
> suggest one scalar function create two sub-task, one is for SQL. another for 
> TableAPI.
> *Note:*
> Every scalar function should add TableAPI doc in  
> {{./docs/dev/table/tableApi.md#built-in-functions}}. 
> Add SQL doc in {{./docs/dev/table/sql.md#built-in-functions}}.
> Welcome anybody to add the sub-task about standard database scalar function. 





[jira] [Commented] (FLINK-6927) Support pattern group in CEP

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058676#comment-16058676
 ] 

ASF GitHub Bot commented on FLINK-6927:
---

Github user dianfu commented on the issue:

https://github.com/apache/flink/pull/4153
  
@dawidwys @kl0u It will be great if you could take a look at this PR.


> Support pattern group in CEP
> 
>
> Key: FLINK-6927
> URL: https://issues.apache.org/jira/browse/FLINK-6927
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Dian Fu
>Assignee: Dian Fu
>
> We should add support for pattern group. This would enrich the set of 
> supported patterns. For example, users can write patterns like this with this 
> feature available:
> {code}
>  A --> (B --> C.times(3)).optional() --> D
> {code}
> or
> {code}
> A --> (B --> C).times(3) --> D
> {code}
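The difference between the two examples can be illustrated with an ordinary regex analogy, one letter per event (this is only an analogy, not how Flink's CEP NFA works): a quantifier applied to a group covers the whole sub-sequence, not just the last pattern.

```java
import java.util.regex.Pattern;

// Regex analogy for pattern groups: each event is one letter.
class PatternGroupAnalogy {
    // A --> (B --> C.times(3)).optional() --> D : the whole "B C C C" block may be absent
    static final Pattern OPTIONAL_GROUP = Pattern.compile("A(BCCC)?D");

    // A --> (B --> C).times(3) --> D : the "B C" pair repeats three times
    static final Pattern REPEATED_GROUP = Pattern.compile("A(BC){3}D");
}
```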





[GitHub] flink issue #4153: [FLINK-6927] [cep] Support pattern group in CEP

2017-06-21 Thread dianfu
Github user dianfu commented on the issue:

https://github.com/apache/flink/pull/4153
  
@dawidwys @kl0u It will be great if you could take a look at this PR.




[jira] [Created] (FLINK-6979) Add documentation for Aggregation Functions

2017-06-21 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6979:
--

 Summary: Add documentation for Aggregation Functions
 Key: FLINK-6979
 URL: https://issues.apache.org/jira/browse/FLINK-6979
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table API & SQL
Reporter: sunjincheng
 Fix For: 1.4.0


The User-defined Functions documentation is currently lacking a description of 
Aggregation Functions.
The page has a placeholder section with a TODO: ./docs/dev/table/udfs.md.





[jira] [Created] (FLINK-6978) Add documentation for Register User-Defined Functions.

2017-06-21 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6978:
--

 Summary: Add documentation for Register User-Defined Functions.
 Key: FLINK-6978
 URL: https://issues.apache.org/jira/browse/FLINK-6978
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng


The User-defined Functions documentation is currently lacking a description of 
Register User-Defined Functions.
The page has a placeholder section with a TODO: ./docs/dev/table/udfs.md.





[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2017-06-21 Thread Jark Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-6810:
---
Labels: starter  (was: )

> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> In this JIRA, will create some sub-task for add specific scalar function, 
> such as mathematical-function {{LOG}}, date-functions
>  {{DATEADD}},string-functions {{LPAD}}, etc. 
> I think is good way to let SQL work, and then add TableAPI to supported. So I 
> suggest one scalar function create two sub-task, one is for SQL. another for 
> TableAPI.
> Welcome anybody to add the sub-task about standard database scalar function. 





[jira] [Commented] (FLINK-6938) IterativeCondition should support RichFunction interface

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058660#comment-16058660
 ] 

ASF GitHub Bot commented on FLINK-6938:
---

Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4145
  
@dianfu I agree with you. I think we can improve that in another JIRA.
@dawidwys I have addressed all the comments.
@kl0u It'll be great if you can have a look. 

Cheers,
Jark


> IterativeCondition should support RichFunction interface
> 
>
> Key: FLINK-6938
> URL: https://issues.apache.org/jira/browse/FLINK-6938
> Project: Flink
>  Issue Type: Sub-task
>  Components: CEP
>Reporter: Jark Wu
>Assignee: Jark Wu
> Fix For: 1.4.0
>
>
> In FLIP-20, we need IterativeCondition to support an {{open()}} method to 
> compile the generated code once. We do not want to insert an if condition in 
> the {{filter()}} method. So I suggest making IterativeCondition support the 
> {{RichFunction}} interface.
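The motivation can be sketched in plain Java (interface names below are illustrative, not Flink's): one-time setup, such as compiling generated code, happens in `open()`, so `filter()` needs no per-record "already initialized?" branch.

```java
import java.util.function.IntPredicate;

// Illustrative sketch: do expensive one-time setup in open() instead of a
// lazy-init check inside filter().
interface RichCondition<T> {
    default void open() {} // one-time initialization hook

    boolean filter(T value);
}

class CompiledCondition implements RichCondition<Integer> {
    private IntPredicate compiled;

    @Override
    public void open() {
        compiled = v -> v > 10; // stands in for expensive code generation
    }

    @Override
    public boolean filter(Integer value) {
        return compiled.test(value); // assumes open() was called first
    }
}
```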





[GitHub] flink issue #4145: [FLINK-6938][FLINK-6939] [cep] Not store IterativeConditi...

2017-06-21 Thread wuchong
Github user wuchong commented on the issue:

https://github.com/apache/flink/pull/4145
  
@dianfu I agree with you. I think we can improve that in another JIRA.
@dawidwys I have addressed all the comments.
@kl0u It'll be great if you can have a look. 

Cheers,
Jark




[jira] [Assigned] (FLINK-6857) Add global default Kryo serializer configuration to StreamExecutionEnvironment

2017-06-21 Thread mingleizhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingleizhang reassigned FLINK-6857:
---

Assignee: mingleizhang

> Add global default Kryo serializer configuration to StreamExecutionEnvironment
> --
>
> Key: FLINK-6857
> URL: https://issues.apache.org/jira/browse/FLINK-6857
> Project: Flink
>  Issue Type: Improvement
>  Components: Configuration, Type Serialization System
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: mingleizhang
>
> See ML for original discussion: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/KryoException-Encountered-unregistered-class-ID-td13476.html.
> We should have an additional {{setDefaultKryoSerializer}} method that allows 
> overriding the global default serializer that is not tied to specific classes 
> (out-of-the-box Kryo uses the {{FieldSerializer}} if no matches for default 
> serializer settings can be found for a class). Internally in Flink's 
> {{KryoSerializer}}, this would only be a matter of proxying that configured 
> global default serializer for Kryo by calling 
> {{Kryo.setDefaultSerializer(...)}} on the created Kryo instance.
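The fallback behavior described above can be sketched without Kryo (the registry below is illustrative, not Kryo's API): per-class registrations win, and anything unmatched falls back to a configurable default, analogous to {{Kryo.setDefaultSerializer(...)}}.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch of a "global default serializer": per-class
// registrations take precedence; everything else uses the configurable default.
class SerializerRegistry {
    private final Map<Class<?>, Function<Object, String>> byClass = new HashMap<>();
    private Function<Object, String> defaultSerializer = Object::toString;

    void register(Class<?> c, Function<Object, String> s) {
        byClass.put(c, s);
    }

    void setDefaultSerializer(Function<Object, String> s) {
        defaultSerializer = s;
    }

    String serialize(Object o) {
        return byClass.getOrDefault(o.getClass(), defaultSerializer).apply(o);
    }
}
```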





[jira] [Updated] (FLINK-6976) Add STR_TO_DATE supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6976:
---
Labels: starter  (was: )

> Add STR_TO_DATE supported in TableAPI
> -
>
> Key: FLINK-6976
> URL: https://issues.apache.org/jira/browse/FLINK-6976
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6895 for detail.





[jira] [Updated] (FLINK-6977) Add MD5/SHA1/SHA2 supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6977:
---
Description: See FLINK-6926 for detail.  (was: See FLINK-6895 for detail.)

> Add MD5/SHA1/SHA2 supported in TableAPI
> ---
>
> Key: FLINK-6977
> URL: https://issues.apache.org/jira/browse/FLINK-6977
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6926 for detail.





[jira] [Updated] (FLINK-6977) Add MD5/SHA1/SHA2 supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6977:
---
Labels: starter  (was: )

> Add MD5/SHA1/SHA2 supported in TableAPI
> ---
>
> Key: FLINK-6977
> URL: https://issues.apache.org/jira/browse/FLINK-6977
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6926 for detail.





[jira] [Updated] (FLINK-6975) Add CONCAT/CONCAT_WS supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6975:
---
Labels: starter  (was: )

> Add CONCAT/CONCAT_WS supported in TableAPI
> --
>
> Key: FLINK-6975
> URL: https://issues.apache.org/jira/browse/FLINK-6975
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6925 for detail.





[jira] [Updated] (FLINK-6846) Add TIMESTAMPADD supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6846:
---
Description: See FLINK-6811 for detail.  (was: See FLINK-6811)

> Add TIMESTAMPADD supported in TableAPI
> --
>
> Key: FLINK-6846
> URL: https://issues.apache.org/jira/browse/FLINK-6846
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6811 for detail.





[jira] [Updated] (FLINK-6960) Add E() supported in SQL

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6960:
---
Summary: Add E() supported in SQL  (was: Add 
E(2.7182818284590452354),PI(3.14159265358979323846) supported in SQL)

> Add E() supported in SQL
> 
>
> Key: FLINK-6960
> URL: https://issues.apache.org/jira/browse/FLINK-6960
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>  Labels: starter
>
> E=Math.E 
> PI=Math.PI





[jira] [Updated] (FLINK-6960) Add E() supported in SQL

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6960:
---
Description: 
E=Math.E 


  was:
E=Math.E 
PI=Math.PI


> Add E() supported in SQL
> 
>
> Key: FLINK-6960
> URL: https://issues.apache.org/jira/browse/FLINK-6960
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>  Labels: starter
>
> E=Math.E 





[jira] [Updated] (FLINK-6942) Add E() supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6942:
---
Description: See FLINK-6960 for detail.  (was: Add a document for the 
FLINK-6810 related scalar functions)

> Add E() supported in TableAPI
> -
>
> Key: FLINK-6942
> URL: https://issues.apache.org/jira/browse/FLINK-6942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6960 for detail.





[jira] [Updated] (FLINK-6942) Add E() supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6942:
---
Summary: Add E() supported in TableAPI  (was: Add a document for the 
FLINK-6810 related scalar functions)

> Add E() supported in TableAPI
> -
>
> Key: FLINK-6942
> URL: https://issues.apache.org/jira/browse/FLINK-6942
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> Add a document for the FLINK-6810 related scalar functions





[jira] [Created] (FLINK-6977) Add MD5/SHA1/SHA2 supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6977:
--

 Summary: Add MD5/SHA1/SHA2 supported in TableAPI
 Key: FLINK-6977
 URL: https://issues.apache.org/jira/browse/FLINK-6977
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng


See FLINK-6895 for detail.





[jira] [Created] (FLINK-6975) Add CONCAT/CONCAT_WS supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6975:
--

 Summary: Add CONCAT/CONCAT_WS supported in TableAPI
 Key: FLINK-6975
 URL: https://issues.apache.org/jira/browse/FLINK-6975
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng


See FLINK-6925 for detail.





[jira] [Created] (FLINK-6976) Add STR_TO_DATE supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6976:
--

 Summary: Add STR_TO_DATE supported in TableAPI
 Key: FLINK-6976
 URL: https://issues.apache.org/jira/browse/FLINK-6976
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng


See FLINK-6895 for detail.





[jira] [Updated] (FLINK-6974) Add BIN supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6974:
---
Labels: starter  (was: )

> Add BIN supported in TableAPI
> -
>
> Key: FLINK-6974
> URL: https://issues.apache.org/jira/browse/FLINK-6974
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6893 for detail.





[jira] [Updated] (FLINK-6973) Add L/RPAD supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6973:
---
Labels: starter  (was: )

> Add L/RPAD supported in TableAPI
> 
>
> Key: FLINK-6973
> URL: https://issues.apache.org/jira/browse/FLINK-6973
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6892 for detail.





[jira] [Created] (FLINK-6974) Add BIN supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6974:
--

 Summary: Add BIN supported in TableAPI
 Key: FLINK-6974
 URL: https://issues.apache.org/jira/browse/FLINK-6974
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng


See FLINK-6893 for detail.





[jira] [Created] (FLINK-6973) Add L/RPAD supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)
sunjincheng created FLINK-6973:
--

 Summary: Add L/RPAD supported in TableAPI
 Key: FLINK-6973
 URL: https://issues.apache.org/jira/browse/FLINK-6973
 Project: Flink
  Issue Type: Sub-task
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: sunjincheng


See FLINK-6892 for detail.





[jira] [Created] (FLINK-6972) Flink REPL api

2017-06-21 Thread Praveen Kanamarlapudi (JIRA)
Praveen Kanamarlapudi created FLINK-6972:


 Summary: Flink REPL api
 Key: FLINK-6972
 URL: https://issues.apache.org/jira/browse/FLINK-6972
 Project: Flink
  Issue Type: Improvement
  Components: Client
Reporter: Praveen Kanamarlapudi


Can you please develop FlinkIMap (similar to 
[SparkIMain|https://github.com/apache/spark/blob/master/repl/scala-2.10/src/main/scala/org/apache/spark/repl/SparkIMain.scala]), 
a developer API for creating interactive sessions?

I am thinking of adding Flink support to [livy|https://github.com/cloudera/livy/]. 
It would be really helpful for enabling Flink interactive sessions.






[jira] [Updated] (FLINK-6924) ADD LOG(X) supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6924:
---
Labels: starter  (was: )

> ADD LOG(X) supported in TableAPI
> 
>
> Key: FLINK-6924
> URL: https://issues.apache.org/jira/browse/FLINK-6924
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6891 for detail.





[jira] [Updated] (FLINK-6924) ADD LOG(X) supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6924:
---
Summary: ADD LOG(X) supported in TableAPI  (was: ADD LOG supported in 
TableAPI)

> ADD LOG(X) supported in TableAPI
> 
>
> Key: FLINK-6924
> URL: https://issues.apache.org/jira/browse/FLINK-6924
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6891 for detail.





[jira] [Updated] (FLINK-6924) ADD LOG supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6924:
---
Summary: ADD LOG supported in TableAPI  (was: ADD LOG/LPAD/RPAD/BIN 
supported in TableAPI)

> ADD LOG supported in TableAPI
> -
>
> Key: FLINK-6924
> URL: https://issues.apache.org/jira/browse/FLINK-6924
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>  Labels: starter
>
> See FLINK-6891 for detail.





[jira] [Updated] (FLINK-6924) ADD LOG/LPAD/RPAD/BIN supported in TableAPI

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6924:
---
Description: See FLINK-6891 for detail.  (was: See FLINK-6891/ FLINK-6892/ 
FLINK-6893 for detail.)

> ADD LOG/LPAD/RPAD/BIN supported in TableAPI
> ---
>
> Key: FLINK-6924
> URL: https://issues.apache.org/jira/browse/FLINK-6924
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>
> See FLINK-6891 for detail.





[jira] [Updated] (FLINK-6810) Add Some built-in Scalar Function supported

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng updated FLINK-6810:
---
Description: 
In this JIRA, we will create sub-tasks to add specific scalar functions, such as 
the mathematical function {{LOG}}, date functions such as {{DATEADD}}, and 
string functions such as {{LPAD}}.

I think it is a good way to first make a function work in SQL, and then add 
Table API support. So I suggest creating two sub-tasks for each scalar function: 
one for SQL, another for the Table API.

Anybody is welcome to add sub-tasks for standard database scalar functions. 



  was:
In this JIRA, will create some sub-task for add specific scalar function, such 
as mathematical-function {{LOG}}, date-functions
 {{DATEADD}},string-functions {{LPAD}}, etc. 

I think is good way to let SQL work, and then add TableAPI to supported. So I 
suggest one scalar function create two sub-task, one is for SQL. another for 
TableAPI.

Welcome anybody to add the sub-task about standard database scalar function. 



> Add Some built-in Scalar Function supported
> ---
>
> Key: FLINK-6810
> URL: https://issues.apache.org/jira/browse/FLINK-6810
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>
> In this JIRA, we will create sub-tasks to add specific scalar functions, such 
> as the mathematical function {{LOG}}, date functions such as {{DATEADD}}, and 
> string functions such as {{LPAD}}.
> I think it is a good way to first make a function work in SQL, and then add 
> Table API support. So I suggest creating two sub-tasks for each scalar 
> function: one for SQL, another for the Table API.
> Anybody is welcome to add sub-tasks for standard database scalar functions. 





[GitHub] flink pull request #4161: Hot fix a description of over Table API document.

2017-06-21 Thread sunjincheng121
GitHub user sunjincheng121 opened a pull request:

https://github.com/apache/flink/pull/4161

Hot fix a description of over Table API document.

Thanks for contributing to Apache Flink. Before you open your pull request, 
please take the following check list into consideration.
If your changes take all of the items into account, feel free to open your 
pull request. For more information and/or questions please refer to the [How To 
Contribute guide](http://flink.apache.org/how-to-contribute.html).
In addition to going through the list, please provide a meaningful 
description of your changes.

- [ ] General
  - The pull request references the related JIRA issue ("Hot fix a 
description of over Table API document.")
  - The pull request addresses only one issue
  - Each commit in the PR has a meaningful commit message (including the 
JIRA id)

- [x] Documentation
  - Documentation has been added for new functionality
  - Old documentation affected by the pull request has been updated
  - JavaDoc for public methods has been added

- [ ] Tests & Build
  - Functionality added by the pull request is covered by tests
  - `mvn clean verify` has been executed successfully locally or a Travis 
build has passed


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sunjincheng121/flink HotFix-OverDoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4161.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4161


commit 0f938c5e4c130eeb5b6bd450785a0ce4767f2302
Author: sunjincheng121 
Date:   2017-06-22T01:59:42Z

Hotfix a description of over Table API document.






[jira] [Created] (FLINK-6971) Add Alluxio Filesystem in Flink Ecosystem page

2017-06-21 Thread Bin Fan (JIRA)
Bin Fan created FLINK-6971:
--

 Summary: Add Alluxio Filesystem in Flink Ecosystem page
 Key: FLINK-6971
 URL: https://issues.apache.org/jira/browse/FLINK-6971
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: Bin Fan
Priority: Minor


The Flink Ecosystem page (http://flink.apache.org/ecosystem.html) lists a set of 
third-party projects that support working with Flink.

Alluxio (www.alluxio.org) can work with Flink as a Hadoop-compatible file 
system; see more description in 
http://www.alluxio.org/docs/master/en/Running-Flink-on-Alluxio.html. I am 
wondering if I could submit a patch that adds a paragraph about Alluxio under 
http://flink.apache.org/ecosystem.html#third-party-projects and points users to 
the Alluxio-Flink integration page?





[jira] [Updated] (FLINK-6971) Add Alluxio Filesystem in Flink Ecosystem page

2017-06-21 Thread Bin Fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bin Fan updated FLINK-6971:
---
Description: 
The Flink Ecosystem page (http://flink.apache.org/ecosystem.html) lists a set of 
third-party projects that support working with Flink.

Alluxio (www.alluxio.org) can work with Flink as a Hadoop-compatible file 
system; see more description in 
http://www.alluxio.org/docs/master/en/Running-Flink-on-Alluxio.html. I am 
wondering if I could submit a patch that adds a paragraph about Alluxio under 
http://flink.apache.org/ecosystem.html#third-party-projects and points users to 
the Alluxio-Flink integration page?

  was:
Flink Ecosystem page (http://flink.apache.org/ecosystem.html) lists a set of 
third-party projects
that supports working with Flink.

Alluxio (www.alluxio.org) can work with Flink as a Hadoop-compatible file 
system, see more description in 
http://www.alluxio.org/docs/master/en/Running-Flink-on-Alluxio.html. I am 
wondering if I could submit patch to add a paragraph of Alluxio under 
http://flink.apache.org/ecosystem.html#third-party-projects and points users 
the Alluxio-flink integration page?


> Add Alluxio Filesystem in Flink Ecosystem page
> --
>
> Key: FLINK-6971
> URL: https://issues.apache.org/jira/browse/FLINK-6971
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Bin Fan
>Priority: Minor
>
> The Flink Ecosystem page (http://flink.apache.org/ecosystem.html) lists a set 
> of third-party projects that support working with Flink.
> Alluxio (www.alluxio.org) can work with Flink as a Hadoop-compatible file 
> system; see more description in 
> http://www.alluxio.org/docs/master/en/Running-Flink-on-Alluxio.html. I am 
> wondering if I could submit a patch that adds a paragraph about Alluxio under 
> http://flink.apache.org/ecosystem.html#third-party-projects and points users 
> to the Alluxio-Flink integration page?





[jira] [Commented] (FLINK-6379) Implement FLIP-6 Mesos Resource Manager

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058431#comment-16058431
 ] 

ASF GitHub Bot commented on FLINK-6379:
---

Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/3942
  
@tillrohrmann thoughts on this?   I'm open to anything.


> Implement FLIP-6 Mesos Resource Manager
> ---
>
> Key: FLINK-6379
> URL: https://issues.apache.org/jira/browse/FLINK-6379
> Project: Flink
>  Issue Type: Sub-task
>  Components: Mesos
>Reporter: Eron Wright 
>Assignee: Eron Wright 
>
> Given the new ResourceManager of FLIP-6, implement a new 
> MesosResourceManager.   
> The minimal effort would be to implement a new resource manager while 
> continuing to use the various local actors (launch coordinator, task monitor, 
> etc.) which implement the various FSMs associated with Mesos scheduling. 
> The Fenzo library would continue to solve the packing problem of matching 
> resource offers to slot requests.





[GitHub] flink issue #3942: FLINK-6379 Mesos ResourceManager (FLIP-6)

2017-06-21 Thread EronWright
Github user EronWright commented on the issue:

https://github.com/apache/flink/pull/3942
  
@tillrohrmann thoughts on this?   I'm open to anything.




[jira] [Assigned] (FLINK-6966) Add maxParallelism and UIDs to all operators generated by the Table API / SQL

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng reassigned FLINK-6966:
--

Assignee: sunjincheng

> Add maxParallelism and UIDs to all operators generated by the Table API / SQL
> -
>
> Key: FLINK-6966
> URL: https://issues.apache.org/jira/browse/FLINK-6966
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Fabian Hueske
>Assignee: sunjincheng
>
> At the moment, the Table API does not assign UIDs and the max parallelism to 
> operators (except for operators with parallelism 1).
> We should do that to avoid problems when rescaling or restarting jobs from 
> savepoints.





[jira] [Commented] (FLINK-6962) SQL DDL for input and output tables

2017-06-21 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058284#comment-16058284
 ] 

Fabian Hueske commented on FLINK-6962:
--

Can you add some pseudo code for the DDL to sketch the functionality and API 
for this feature?
We will discuss the exact API later, but I think this would be very helpful to 
understand what exactly is proposed by this JIRA.

Thank you!

> SQL DDL for input and output tables
> ---
>
> Key: FLINK-6962
> URL: https://issues.apache.org/jira/browse/FLINK-6962
> Project: Flink
>  Issue Type: New Feature
>  Components: Table API & SQL
>Reporter: Shaoxuan Wang
>Assignee: lincoln.lee
> Fix For: 1.4.0
>
>
> This JIRA adds support to allow users to define the DDL for source and sink 
> tables, including the watermark (on the source table) and the emit SLA (on the 
> result table). The detailed design doc will be attached soon.





[jira] [Updated] (FLINK-6649) Improve Non-window group aggregate with configurable `earlyFire`.

2017-06-21 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-6649:
-
Issue Type: Sub-task  (was: Improvement)
Parent: FLINK-6961

> Improve Non-window group aggregate with configurable `earlyFire`.
> -
>
> Key: FLINK-6649
> URL: https://issues.apache.org/jira/browse/FLINK-6649
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently, the non-windowed group aggregate early-fires at count(1), that is, 
> every row emits an aggregate result. But sometimes users want to configure the 
> count number (`early firing with count[N]`) to reduce the downstream pressure. 
> This JIRA will enable the configuration of `earlyFire` for the non-windowed 
> group aggregate.





[jira] [Created] (FLINK-6970) Add support for late data updates to group window aggregates

2017-06-21 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6970:


 Summary: Add support for late data updates to group window 
aggregates
 Key: FLINK-6970
 URL: https://issues.apache.org/jira/browse/FLINK-6970
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Fabian Hueske


Late arriving data is a common issue for group window aggregates. At the 
moment, the Table API simply drops late arriving records. Alternative approaches 
are deferred computation (FLINK-6969) and late data updates. 

This issue proposes to add late data updates for group window aggregates. 
Instead of discarding the state of a window when the result has been computed, 
the state is kept for a certain time interval. If a late record for a window is 
received within this interval, an updated result is emitted (and the previous 
result is retracted). 
This feature will require a new parameter to the {{QueryConfig}} to configure 
the size of the late data interval.
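The retention rule described above can be sketched as a small predicate: a window has already fired when the watermark passed its end, and its state survives until the watermark also passes the end plus the late-data interval. This is a plain-Java illustration; the class and parameter names are hypothetical, not part of Flink's API:

```java
// Hypothetical sketch: decide whether a late element can still update an
// already-fired window, i.e. whether the window state is still retained.
final class LateDataPolicy {
    private final long lateIntervalMs; // would be read from the QueryConfig

    LateDataPolicy(long lateIntervalMs) {
        this.lateIntervalMs = lateIntervalMs;
    }

    boolean accepts(long elementTs, long windowEndTs, long watermark) {
        // The window already fired: element belongs to it and the watermark passed its end.
        boolean late = elementTs < windowEndTs && watermark >= windowEndTs;
        // State is kept only until windowEnd + lateInterval.
        boolean stateRetained = watermark <= windowEndTs + lateIntervalMs;
        return late && stateRetained; // true => emit update, retract previous result
    }
}
```

For example, with a 15-minute interval an update is still emitted for a late record arriving while the watermark is within 15 minutes of the window end, and the record is dropped afterwards.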





[jira] [Created] (FLINK-6969) Add support for deferred computation for group window aggregates

2017-06-21 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6969:


 Summary: Add support for deferred computation for group window 
aggregates
 Key: FLINK-6969
 URL: https://issues.apache.org/jira/browse/FLINK-6969
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Fabian Hueske


Deferred computation is a strategy to deal with late arriving data and avoid 
updates of previous results. Instead of computing a result as soon as possible 
(i.e., when a corresponding watermark is received), deferred computation adds a 
configurable amount of slack time in which late data is accepted before the 
result is computed. For example, instead of computing a tumbling window of 1 
hour at each full hour, we can add a deferred computation interval of 15 
minutes to compute the result at a quarter past each full hour.

This approach adds latency but can reduce the number of updates, especially in 
use cases where the user cannot influence the generation of watermarks. It is 
also useful if the data is emitted to a system that cannot update results 
(files or Kafka). The deferred computation interval should be configured via 
the {{QueryConfig}}.
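The timing in the example can be written out as simple arithmetic (a hypothetical helper, not a Flink API): with 1-hour tumbling windows and a 15-minute deferral, an event's result fires at a quarter past the hour.

```java
// Hypothetical sketch: firing time of a tumbling window result when a
// deferred-computation interval is added on top of the window end.
final class DeferredFiring {
    // End timestamp of the tumbling window containing `ts` (windows aligned at epoch 0).
    static long windowEnd(long ts, long windowSizeMs) {
        return (ts / windowSizeMs + 1) * windowSizeMs;
    }

    // The result is computed `deferralMs` after the window ends,
    // admitting late data in between.
    static long firingTime(long ts, long windowSizeMs, long deferralMs) {
        return windowEnd(ts, windowSizeMs) + deferralMs;
    }
}
```

For instance, an event at 10:20 (37,200,000 ms) in a 1-hour window with a 15-minute deferral belongs to the window ending at 11:00 and fires at 11:15.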





[jira] [Assigned] (FLINK-992) Create CollectionDataSets by reading (client) local files.

2017-06-21 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-992:
-

Assignee: Neelesh Srinivas Salian  (was: niraj rai)

> Create CollectionDataSets by reading (client) local files.
> --
>
> Key: FLINK-992
> URL: https://issues.apache.org/jira/browse/FLINK-992
> Project: Flink
>  Issue Type: New Feature
>  Components: DataSet API, Python API
>Reporter: Fabian Hueske
>Assignee: Neelesh Srinivas Salian
>Priority: Minor
>  Labels: starter
>
> {{CollectionDataSets}} are a nice way to feed data into programs.
> We could add support to read a client-local file at program construction time 
> using a FileInputFormat, put its data into a CollectionDataSet, and ship its 
> data together with the program.
> This would remove the need to upload small files into DFS which are used 
> together with some large input (stored in DFS).
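The intended effect can be sketched in plain Java: read a client-local file into an in-memory collection at program construction time, so the data ships with the program instead of going through DFS. This is a minimal illustration under that assumption, not the actual Flink API:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Minimal sketch: materialize a client-local file before job submission,
// the way a CollectionDataSet-style input would.
final class LocalFileCollection {
    static List<String> readClientLocal(String path) throws IOException {
        // Read on the client at program construction time; the resulting
        // collection would then be shipped together with the program,
        // with no upload to DFS required.
        return Files.readAllLines(Paths.get(path));
    }
}
```

In the real feature, a FileInputFormat would do the parsing so arbitrary formats (not just text lines) could be materialized this way.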





[jira] [Commented] (FLINK-992) Create CollectionDataSets by reading (client) local files.

2017-06-21 Thread niraj rai (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058224#comment-16058224
 ] 

niraj rai commented on FLINK-992:
-

Please go ahead



> Create CollectionDataSets by reading (client) local files.
> --
>
> Key: FLINK-992
> URL: https://issues.apache.org/jira/browse/FLINK-992
> Project: Flink
>  Issue Type: New Feature
>  Components: DataSet API, Python API
>Reporter: Fabian Hueske
>Assignee: niraj rai
>Priority: Minor
>  Labels: starter
>
> {{CollectionDataSets}} are a nice way to feed data into programs.
> We could add support to read a client-local file at program construction time 
> using a FileInputFormat, put its data into a CollectionDataSet, and ship its 
> data together with the program.
> This would remove the need to upload small files into DFS which are used 
> together with some large input (stored in DFS).





[jira] [Commented] (FLINK-5595) Add links to sub-sections in the left-hand navigation bar

2017-06-21 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058223#comment-16058223
 ] 

Neelesh Srinivas Salian commented on FLINK-5595:


[~wints], if you are not working on this currently, can I assign it to myself?

> Add links to sub-sections in the left-hand navigation bar
> -
>
> Key: FLINK-5595
> URL: https://issues.apache.org/jira/browse/FLINK-5595
> Project: Flink
>  Issue Type: Improvement
>  Components: Project Website
>Reporter: Mike Winters
>Assignee: Mike Winters
>Priority: Minor
>  Labels: newbie, website
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Some pages on the Flink project site (such as 
> http://flink.apache.org/introduction.html) include a table of contents at the 
> top. The sections from the ToC are not exposed in the left-hand nav when the 
> page is active, but this could be a useful addition, especially for longer, 
> content-heavy pages. 





[jira] [Assigned] (FLINK-1890) Add note to docs that ReadFields annotations are currently not evaluated

2017-06-21 Thread Neelesh Srinivas Salian (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Neelesh Srinivas Salian reassigned FLINK-1890:
--

Assignee: Neelesh Srinivas Salian

> Add note to docs that ReadFields annotations are currently not evaluated
> 
>
> Key: FLINK-1890
> URL: https://issues.apache.org/jira/browse/FLINK-1890
> Project: Flink
>  Issue Type: Wish
>  Components: DataSet API
>Reporter: Stefan Bunk
>Assignee: Neelesh Srinivas Salian
>Priority: Minor
>  Labels: starter
>
> In the Scala API, you have the option to declare forwarded fields via the
> {{withForwardedFields}} method.
> It would be nice to have something similar for read fields, as otherwise one 
> needs to create a class, which I personally try to avoid for readability.
> Maybe grouping all annotations in one function and having a first parameter 
> indicating the type of annotation is also an option, if you plan on adding 
> more annotations and want to keep the interface smaller.





[jira] [Commented] (FLINK-1890) Add note to docs that ReadFields annotations are currently not evaluated

2017-06-21 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058219#comment-16058219
 ] 

Neelesh Srinivas Salian commented on FLINK-1890:


[~fhueske], I can pick it up.

> Add note to docs that ReadFields annotations are currently not evaluated
> 
>
> Key: FLINK-1890
> URL: https://issues.apache.org/jira/browse/FLINK-1890
> Project: Flink
>  Issue Type: Wish
>  Components: DataSet API
>Reporter: Stefan Bunk
>Priority: Minor
>  Labels: starter
>
> In the Scala API, you have the option to declare forwarded fields via the
> {{withForwardedFields}} method.
> It would be nice to have something similar for read fields, as otherwise one 
> needs to create a class, which I personally try to avoid for readability.
> Maybe grouping all annotations in one function and have a first parameter 
> indicating the type of annotation is also an option, if you plan on adding 
> more annotations and want to keep the interface smaller.





[jira] [Created] (FLINK-6968) Store streaming, updating tables with unique key in queryable state

2017-06-21 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6968:


 Summary: Store streaming, updating tables with unique key in 
queryable state
 Key: FLINK-6968
 URL: https://issues.apache.org/jira/browse/FLINK-6968
 Project: Flink
  Issue Type: New Feature
  Components: Table API & SQL
Reporter: Fabian Hueske


Streaming tables with a unique key are continuously updated. For example, 
queries with a non-windowed aggregation generate such tables. Commonly, such 
updating tables are emitted via an upsert table sink to an external datastore 
(k-v store, database) to make them accessible to applications.

This issue is about adding a feature to store and maintain such a table as 
queryable state in Flink. By storing the table in Flink's queryable state, we 
do not need an external data store to access the results of the query but can 
query the results directly from Flink.
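The upsert semantics described above can be illustrated with a small, Flink-independent sketch (the class and method names are invented for this example, not Flink API): the view keeps only the latest row per unique key, which is what an upsert sink or a queryable-state view would expose.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.OptionalLong;

// Illustrative sketch only, not Flink API: keep the latest value per unique
// key, mirroring what an upsert sink / queryable-state view would expose.
public class UpsertView {

    private final Map<String, Long> latest = new HashMap<>();

    // An update for an existing key replaces the previous row.
    public void upsert(String key, long value) {
        latest.put(key, value);
    }

    // Point query against the current state of the view.
    public OptionalLong query(String key) {
        Long v = latest.get(key);
        return v == null ? OptionalLong.empty() : OptionalLong.of(v);
    }
}
```

Querying Flink directly would replace the external k-v store in this picture, but the latest-row-per-key contract stays the same.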





[jira] [Commented] (FLINK-992) Create CollectionDataSets by reading (client) local files.

2017-06-21 Thread Neelesh Srinivas Salian (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058218#comment-16058218
 ] 

Neelesh Srinivas Salian commented on FLINK-992:
---

[~nrai], if you are not working on this at the moment, can I assign it to 
myself?

> Create CollectionDataSets by reading (client) local files.
> --
>
> Key: FLINK-992
> URL: https://issues.apache.org/jira/browse/FLINK-992
> Project: Flink
>  Issue Type: New Feature
>  Components: DataSet API, Python API
>Reporter: Fabian Hueske
>Assignee: niraj rai
>Priority: Minor
>  Labels: starter
>
> {{CollectionDataSets}} are a nice way to feed data into programs.
> We could add support to read a client-local file at program construction time 
> using a FileInputFormat, put its data into a CollectionDataSet, and ship its 
> data together with the program.
> This would remove the need to upload small files into DFS which are used 
> together with some large input (stored in DFS).
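A minimal sketch of the client-side half of this idea, reading a client-local file into a collection at program construction time. Only the JDK is used here; the hand-off to Flink (`env.fromCollection(lines)`) is mentioned in a comment and not executed, and all names are invented for this example.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class LocalFileToCollection {

    // Read a client-local file at program construction time. In a Flink
    // program, the returned list could then be shipped with the job via
    // env.fromCollection(lines); only the client-side read is shown here.
    static List<String> readClientLocalFile(String path) {
        try {
            return Files.readAllLines(Paths.get(path));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Self-contained demo: write a small file and read it back.
    static List<String> demo() {
        try {
            Path p = Files.createTempFile("flink-992-demo", ".txt");
            Files.write(p, Arrays.asList("hello", "flink"));
            return readClientLocalFile(p.toString());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Parsing with a `FileInputFormat`, as the issue suggests, would replace the plain `readAllLines` call, but the data would still travel with the program rather than through DFS.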





[jira] [Commented] (FLINK-6966) Add maxParallelism and UIDs to all operators generated by the Table API / SQL

2017-06-21 Thread Fabian Hueske (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058204#comment-16058204
 ] 

Fabian Hueske commented on FLINK-6966:
--

Actually, this is not directly related to {{table.scala}} but to all operators 
generated from SQL and Table API queries.
But you are right. We have to think about how we can generate the UIDs in a 
consistent way without duplicates.
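One conceivable way to get consistent UIDs without duplicates is to derive them deterministically from the operator's position in the translated plan. The sketch below hashes assumed inputs (query id, operator name, index in the plan); it is not Flink's actual mechanism, just an illustration of the idea.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class OperatorUids {

    // Derive a stable operator UID from the operator's position in the plan.
    // The inputs (query id, operator name, plan index) are illustrative
    // assumptions, not part of any Flink API.
    static String uidFor(String queryId, String operatorName, int indexInPlan) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] hash = md.digest(
                (queryId + "#" + operatorName + "#" + indexInPlan)
                    .getBytes(StandardCharsets.UTF_8));
            // Keep the first 8 bytes as a 16-character hex UID.
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 8; i++) {
                sb.append(String.format("%02x", hash[i]));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because the UID depends only on the plan position, re-translating the same query yields the same UIDs, which is what restoring from a savepoint requires.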

> Add maxParallelism and UIDs to all operators generated by the Table API / SQL
> -
>
> Key: FLINK-6966
> URL: https://issues.apache.org/jira/browse/FLINK-6966
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Fabian Hueske
>
> At the moment, the Table API does not assign UIDs and the max parallelism to 
> operators (except for operators with parallelism 1).
> We should do that to avoid problems when rescaling or restarting jobs from 
> savepoints.





[jira] [Updated] (FLINK-6966) Add maxParallelism and UIDs to all operators generated by the Table API / SQL

2017-06-21 Thread Fabian Hueske (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske updated FLINK-6966:
-
Summary: Add maxParallelism and UIDs to all operators generated by the 
Table API / SQL  (was: Add maxParallelism and UIDs to all operators generated 
by the Table API)

> Add maxParallelism and UIDs to all operators generated by the Table API / SQL
> -
>
> Key: FLINK-6966
> URL: https://issues.apache.org/jira/browse/FLINK-6966
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Fabian Hueske
>
> At the moment, the Table API does not assign UIDs and the max parallelism to 
> operators (except for operators with parallelism 1).
> We should do that to avoid problems when rescaling or restarting jobs from 
> savepoints.





[jira] [Commented] (FLINK-6633) Register with shared state registry before adding to CompletedCheckpointStore

2017-06-21 Thread Cliff Resnick (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058158#comment-16058158
 ] 

Cliff Resnick commented on FLINK-6633:
--

Thanks [~srichter], I'll give this a try tomorrow morning EST.

> Register with shared state registry before adding to CompletedCheckpointStore
> -
>
> Key: FLINK-6633
> URL: https://issues.apache.org/jira/browse/FLINK-6633
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Blocker
> Fix For: 1.3.0
>
>
> Introducing placeholders for previously existing shared state requires a 
> change so that shared state first registers with the {{SharedStateRegistry}} 
> (thereby being consolidated) and is only added to a 
> {{CompletedCheckpointStore}} afterwards, so that the consolidated checkpoint 
> is written to stable storage.





[jira] [Commented] (FLINK-6617) Improve JAVA and SCALA logical plans consistent test

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058147#comment-16058147
 ] 

ASF GitHub Bot commented on FLINK-6617:
---

Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3943
  
I think that is a very good idea. A review of a +6k/-4k LOC PR takes a lot 
of time and is likely to be interrupted by other issues. Several smaller PRs 
would be much easier to review and merge. 
It would be great if you could do that @sunjincheng121.

Thank you very much!


> Improve JAVA and SCALA logical plans consistent test
> 
>
> Key: FLINK-6617
> URL: https://issues.apache.org/jira/browse/FLINK-6617
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently, we need some `StringExpression` tests for all Java and Scala APIs,
> such as `GroupAggregations`, `GroupWindowAggregaton` (Session, Tumble), 
> `Calc`, etc.





[GitHub] flink issue #3943: [FLINK-6617][table] Improve JAVA and SCALA logical plans ...

2017-06-21 Thread fhueske
Github user fhueske commented on the issue:

https://github.com/apache/flink/pull/3943
  
I think that is a very good idea. A review of a +6k/-4k LOC PR takes a lot 
of time and is likely to be interrupted by other issues. Several smaller PRs 
would be much easier to review and merge. 
It would be great if you could do that @sunjincheng121.

Thank you very much!


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (FLINK-6960) Add E(2.7182818284590452354),PI(3.14159265358979323846) supported in SQL

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058138#comment-16058138
 ] 

ASF GitHub Bot commented on FLINK-6960:
---

Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4152#discussion_r12335
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/functions/scalarSqlFunctions/MathSqlFunctions.scala
 ---
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.functions.scalarSqlFunctions
+
+import org.apache.calcite.sql.{SqlFunction, SqlFunctionCategory, SqlKind}
+import org.apache.calcite.sql.`type`._
+
+class MathSqlFunctions {
--- End diff --

I think you can also have a stand-alone singleton object in Scala. No need 
to define a class if it is not used.


> Add E(2.7182818284590452354),PI(3.14159265358979323846) supported in SQL
> 
>
> Key: FLINK-6960
> URL: https://issues.apache.org/jira/browse/FLINK-6960
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: sunjincheng
>Assignee: sunjincheng
>  Labels: starter
>
> E=Math.E 
> PI=Math.PI





[GitHub] flink pull request #4152: [FLINK-6960][table] Add E supported in SQL.

2017-06-21 Thread fhueske
Github user fhueske commented on a diff in the pull request:

https://github.com/apache/flink/pull/4152#discussion_r12335
  
--- Diff: 
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/functions/scalarSqlFunctions/MathSqlFunctions.scala
 ---
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.flink.table.functions.scalarSqlFunctions
+
+import org.apache.calcite.sql.{SqlFunction, SqlFunctionCategory, SqlKind}
+import org.apache.calcite.sql.`type`._
+
+class MathSqlFunctions {
--- End diff --

I think you can also have a stand-alone singleton object in Scala. No need 
to define a class if it is not used.




[jira] [Commented] (FLINK-6965) Avro is missing snappy dependency

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058056#comment-16058056
 ] 

ASF GitHub Bot commented on FLINK-6965:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4160#discussion_r123341447
  
--- Diff: flink-core/pom.xml ---
@@ -79,13 +79,12 @@ under the License.

			<groupId>org.apache.avro</groupId>
			<artifactId>avro</artifactId>
-			<exclusions>
-				<exclusion>
-					<groupId>org.xerial.snappy</groupId>
-					<artifactId>snappy-java</artifactId>
-				</exclusion>
-			</exclusions>
+
+
+
--- End diff --

for the avro/snappy case, no. This PR should work as is.


> Avro is missing snappy dependency
> -
>
> Key: FLINK-6965
> URL: https://issues.apache.org/jira/browse/FLINK-6965
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 1.3.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.3.2
>
>
> The shading rework made before 1.3 removed a snappy dependency that was 
> accidentally pulled in through hadoop. This is technically alright, until 
> class-loaders rear their ugly heads.
> Our Kafka connector can read Avro records, which may or may not require 
> snappy. Usually this _should_ be solvable by including the snappy dependency 
> in the user jar if necessary; however, since the Kafka connector loads classes 
> that it requires using the system class loader, this doesn't work.
> As such we have to add a separate snappy dependency to flink-core.





[GitHub] flink pull request #4160: [FLINK-6965] Include snappy-java in flink-dist

2017-06-21 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4160#discussion_r123341447
  
--- Diff: flink-core/pom.xml ---
@@ -79,13 +79,12 @@ under the License.

			<groupId>org.apache.avro</groupId>
			<artifactId>avro</artifactId>
-			<exclusions>
-				<exclusion>
-					<groupId>org.xerial.snappy</groupId>
-					<artifactId>snappy-java</artifactId>
-				</exclusion>
-			</exclusions>
+
+
+
--- End diff --

for the avro/snappy case, no. This PR should work as is.




[jira] [Commented] (FLINK-6923) Kafka connector needs to expose information about in-flight record in AbstractFetcher base class

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16058032#comment-16058032
 ] 

ASF GitHub Bot commented on FLINK-6923:
---

Github user zhenzhongxu commented on the issue:

https://github.com/apache/flink/pull/4149
  
Hi @tzulitai. Yes, we do have a use case where we need to disable Flink 
checkpointing because the time-interval checkpointing model does not work with 
our constraints. We had to trigger Kafka commits by manually taking an offset 
snapshot and committing after the sink flushes (parallel source/sink operators 
are chained together); in this case, the partition offset is not incremented 
until the thread exits the sink operator logic. Now, the only way to make the 
commit accurate is to expose which partition the in-flight message belongs to, 
so that we can consciously add 1 to the offset at the time of snapshotting. 



> Kafka connector needs to expose information about in-flight record in 
> AbstractFetcher base class
> 
>
> Key: FLINK-6923
> URL: https://issues.apache.org/jira/browse/FLINK-6923
> Project: Flink
>  Issue Type: Improvement
>  Components: Kafka Connector
>Reporter: Zhenzhong Xu
>Assignee: Zhenzhong Xu
>Priority: Minor
>
> We have a use case where we have our custom Fetcher implementation that 
> extends AbstractFetcher base class. We need to periodically get current in 
> flight (in processing) records' partition and offset information. 
> This can be easily exposed in AbstractFetcher class.





[GitHub] flink issue #4149: [FLINK-6923] [Kafka Connector] Expose in-processing/in-fl...

2017-06-21 Thread zhenzhongxu
Github user zhenzhongxu commented on the issue:

https://github.com/apache/flink/pull/4149
  
Hi @tzulitai. Yes, we do have a use case where we need to disable Flink 
checkpointing because the time-interval checkpointing model does not work with 
our constraints. We had to trigger Kafka commits by manually taking an offset 
snapshot and committing after the sink flushes (parallel source/sink operators 
are chained together); in this case, the partition offset is not incremented 
until the thread exits the sink operator logic. Now, the only way to make the 
commit accurate is to expose which partition the in-flight message belongs to, 
so that we can consciously add 1 to the offset at the time of snapshotting. 





[jira] [Commented] (FLINK-6633) Register with shared state registry before adding to CompletedCheckpointStore

2017-06-21 Thread Stefan Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057939#comment-16057939
 ] 

Stefan Richter commented on FLINK-6633:
---

[~cre...@gmail.com] I think I have a fix for your problem in this branch: 
https://github.com/StefanRRichter/flink/tree/fixRecoverStandaloneCompeltedCheckpointStore.
 I will probably merge it at some point later this week.

> Register with shared state registry before adding to CompletedCheckpointStore
> -
>
> Key: FLINK-6633
> URL: https://issues.apache.org/jira/browse/FLINK-6633
> Project: Flink
>  Issue Type: Sub-task
>  Components: State Backends, Checkpointing
>Affects Versions: 1.3.0
>Reporter: Stefan Richter
>Assignee: Stefan Richter
>Priority: Blocker
> Fix For: 1.3.0
>
>
> Introducing placeholders for previously existing shared state requires a 
> change so that shared state first registers with the {{SharedStateRegistry}} 
> (thereby being consolidated) and is only added to a 
> {{CompletedCheckpointStore}} afterwards, so that the consolidated checkpoint 
> is written to stable storage.





[jira] [Commented] (FLINK-6967) Fully separate batch and storm examples

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057894#comment-16057894
 ] 

ASF GitHub Bot commented on FLINK-6967:
---

Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4159
  
+1


> Fully separate batch and storm examples
> ---
>
> Key: FLINK-6967
> URL: https://issues.apache.org/jira/browse/FLINK-6967
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples, Storm Compatibility
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.4.0
>
>
> Like the streaming examples (see FLINK-6863) the storm examples have a 
> dependency on the batch examples, exclusively for the WordCount example data.
> I propose to duplicate the test data again for the storm examples.





[GitHub] flink issue #4159: [FLINK-6967] Remove batch-examples dependency from storm ...

2017-06-21 Thread greghogan
Github user greghogan commented on the issue:

https://github.com/apache/flink/pull/4159
  
+1




[GitHub] flink pull request #4136: [FLINK-6940][docs] Clarify the effect of configuri...

2017-06-21 Thread bowenli86
Github user bowenli86 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4136#discussion_r123306543
  
--- Diff: docs/ops/state_backends.md ---
@@ -124,7 +124,7 @@ RocksDBStateBackend is currently the only backend that 
offers incremental checkp
 ## Configuring a State Backend
 
 State backends can be configured per job. In addition, you can define a 
default state backend to be used when the
-job does not explicitly define a state backend.
+job does not explicitly define a state backend. Besides, state backend 
configured per-job will overwrite the default state backend configured in 
`flink-conf.yaml`
--- End diff --

Yeah, this one works too.

How about "State backends can be configured per job in code. In addition, 
you can define a default state backend in flink-conf.yaml that is used when the 
job does not explicitly define a state backend."?
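For context, a minimal `flink-conf.yaml` default that a per-job `env.setStateBackend(...)` call would override. The keys shown are the Flink 1.3-era configuration keys; the checkpoint path is a placeholder:

```yaml
# Default state backend, used only when the job does not set one in code.
state.backend: filesystem
# Placeholder path; any supported file system URI works here.
state.backend.fs.checkpointdir: hdfs:///flink/checkpoints
```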




[jira] [Commented] (FLINK-6940) Clarify the effect of configuring per-job state backend

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057845#comment-16057845
 ] 

ASF GitHub Bot commented on FLINK-6940:
---

Github user bowenli86 commented on a diff in the pull request:

https://github.com/apache/flink/pull/4136#discussion_r123306543
  
--- Diff: docs/ops/state_backends.md ---
@@ -124,7 +124,7 @@ RocksDBStateBackend is currently the only backend that 
offers incremental checkp
 ## Configuring a State Backend
 
 State backends can be configured per job. In addition, you can define a 
default state backend to be used when the
-job does not explicitly define a state backend.
+job does not explicitly define a state backend. Besides, state backend 
configured per-job will overwrite the default state backend configured in 
`flink-conf.yaml`
--- End diff --

Yeah, this one works too.

How about "State backends can be configured per job in code. In addition, 
you can define a default state backend in flink-conf.yaml that is used when the 
job does not explicitly define a state backend."?


> Clarify the effect of configuring per-job state backend 
> 
>
> Key: FLINK-6940
> URL: https://issues.apache.org/jira/browse/FLINK-6940
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.3.0
>Reporter: Bowen Li
>Assignee: Bowen Li
>Priority: Minor
> Fix For: 1.4.0
>
>
> The documentation of having different options configuring flink state backend 
> is confusing. We should add explicit doc explaining configuring a per-job 
> flink state backend in code will overwrite any default state backend 
> configured in flink-conf.yaml





[jira] [Commented] (FLINK-6965) Avro is missing snappy dependency

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057789#comment-16057789
 ] 

ASF GitHub Bot commented on FLINK-6965:
---

Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4160#discussion_r123298637
  
--- Diff: flink-core/pom.xml ---
@@ -79,13 +79,12 @@ under the License.

			<groupId>org.apache.avro</groupId>
			<artifactId>avro</artifactId>
-			<exclusions>
-				<exclusion>
-					<groupId>org.xerial.snappy</groupId>
-					<artifactId>snappy-java</artifactId>
-				</exclusion>
-			</exclusions>
+
+
+
--- End diff --

Side question: is a fix also required for connectors to correctly use the 
user classloader in such cases?


> Avro is missing snappy dependency
> -
>
> Key: FLINK-6965
> URL: https://issues.apache.org/jira/browse/FLINK-6965
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 1.3.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.3.2
>
>
> The shading rework made before 1.3 removed a snappy dependency that was 
> accidentally pulled in through hadoop. This is technically alright, until 
> class-loaders rear their ugly heads.
> Our Kafka connector can read Avro records, which may or may not require 
> snappy. Usually this _should_ be solvable by including the snappy dependency 
> in the user jar if necessary; however, since the Kafka connector loads classes 
> that it requires using the system class loader, this doesn't work.
> As such we have to add a separate snappy dependency to flink-core.





[GitHub] flink pull request #4160: [FLINK-6965] Include snappy-java in flink-dist

2017-06-21 Thread tzulitai
Github user tzulitai commented on a diff in the pull request:

https://github.com/apache/flink/pull/4160#discussion_r123298637
  
--- Diff: flink-core/pom.xml ---
@@ -79,13 +79,12 @@ under the License.

			<groupId>org.apache.avro</groupId>
			<artifactId>avro</artifactId>
-			<exclusions>
-				<exclusion>
-					<groupId>org.xerial.snappy</groupId>
-					<artifactId>snappy-java</artifactId>
-				</exclusion>
-			</exclusions>
+
+
+
--- End diff --

Side question: is a fix also required for connectors to correctly use the 
user classloader in such cases?




[jira] [Commented] (FLINK-6617) Improve JAVA and SCALA logical plans consistent test

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057698#comment-16057698
 ] 

ASF GitHub Bot commented on FLINK-6617:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/3943
  
Maybe I should split this PR into a couple of smaller PRs. What do you 
think? Or do you have any other suggestions, @fhueske @wuchong @twalthr?


> Improve JAVA and SCALA logical plans consistent test
> 
>
> Key: FLINK-6617
> URL: https://issues.apache.org/jira/browse/FLINK-6617
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> Currently, we need some `StringExpression` tests for all Java and Scala APIs,
> such as `GroupAggregations`, `GroupWindowAggregaton` (Session, Tumble), 
> `Calc`, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] flink issue #3943: [FLINK-6617][table] Improve JAVA and SCALA logical plans ...

2017-06-21 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/3943
  
Maybe I should split this PR into a couple of smaller PRs. What do you 
think? Or do you have any other suggestions, @fhueske @wuchong @twalthr?




[jira] [Comment Edited] (FLINK-6966) Add maxParallelism and UIDs to all operators generated by the Table API

2017-06-21 Thread sunjincheng (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057656#comment-16057656
 ] 

sunjincheng edited comment on FLINK-6966 at 6/21/17 3:19 PM:
-

Hi, [~fhueske] do you mean that we need to call {{uid/setUidHash}} when we 
translate to DataStream, and that we provide a unique UID per transformation 
and job? If so, we need to consider all the operators in {{table.scala}}, 
right? 



was (Author: sunjincheng121):
Hi, [~fhueske] do you mean that we need to call {{uid/setUidHash}} when we 
translate to DataStream, and that we provide a unique UID per transformation 
and job, right?


> Add maxParallelism and UIDs to all operators generated by the Table API
> ---
>
> Key: FLINK-6966
> URL: https://issues.apache.org/jira/browse/FLINK-6966
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Fabian Hueske
>
> At the moment, the Table API does not assign UIDs and the max parallelism to 
> operators (except for operators with parallelism 1).
> We should do that to avoid problems when rescaling or restarting jobs from 
> savepoints.





[GitHub] flink pull request #4160: [FLINK-6965] Include snappy-java in flink-dist

2017-06-21 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4160

[FLINK-6965] Include snappy-java in flink-dist

This PR removes the snappy dependency exclusion from flink-core.

Without this dependency, Flink components that work with Avro, and thus 
potentially snappy, fail when loading snappy. This also happens if snappy is 
provided in the user jar, since the Flink component doesn't use the user class 
loader.

To prevent this dependency from being accidentally removed again in the 
future, I modified the `checkShadedArtifacts()` function in the Travis scripts 
to check for the presence of the dependency.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6965

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4160.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4160


commit f0e757776c45382cfc28daefe321d819d5a4a75c
Author: zentol 
Date:   2017-06-21T14:18:34Z

[FLINK-6965] Include snappy-java in flink-dist






[GitHub] flink pull request #4159: [FLINK-6967] Remove batch-examples dependency from...

2017-06-21 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4159

[FLINK-6967] Remove batch-examples dependency from storm examples



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6967

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4159.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4159


commit 9f3a3ae1225a1add63fb9386382160bd75f35dab
Author: zentol 
Date:   2017-06-21T15:06:00Z

[FLINK-6967] Remove batch-examples dependency from storm examples






[jira] [Commented] (FLINK-6967) Fully separate batch and storm examples

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057660#comment-16057660
 ] 

ASF GitHub Bot commented on FLINK-6967:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4159

[FLINK-6967] Remove batch-examples dependency from storm examples



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6967

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4159.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4159


commit 9f3a3ae1225a1add63fb9386382160bd75f35dab
Author: zentol 
Date:   2017-06-21T15:06:00Z

[FLINK-6967] Remove batch-examples dependency from storm examples




> Fully separate batch and storm examples
> ---
>
> Key: FLINK-6967
> URL: https://issues.apache.org/jira/browse/FLINK-6967
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples, Storm Compatibility
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.4.0
>
>
> Like the streaming examples (see FLINK-6863) the storm examples have a 
> dependency on the batch examples, exclusively for the WordCount example data.
> I propose to duplicate the test data again for the storm examples.





[jira] [Created] (FLINK-6967) Fully separate batch and storm examples

2017-06-21 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-6967:
---

 Summary: Fully separate batch and storm examples
 Key: FLINK-6967
 URL: https://issues.apache.org/jira/browse/FLINK-6967
 Project: Flink
  Issue Type: Improvement
  Components: Examples, Storm Compatibility
Affects Versions: 1.4.0
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler
Priority: Minor
 Fix For: 1.4.0


Like the streaming examples (see FLINK-6863) the storm examples have a 
dependency on the batch examples, exclusively for the WordCount example data.

I propose to duplicate the test data again for the storm examples.





[jira] [Commented] (FLINK-6965) Avro is missing snappy dependency

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057665#comment-16057665
 ] 

ASF GitHub Bot commented on FLINK-6965:
---

GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/4160

[FLINK-6965] Include snappy-java in flink-dist

This PR removes the snappy dependency exclusion from flink-core.

Without this dependency, Flink components that work with Avro, and thus 
potentially snappy, fail when loading snappy. This also happens if snappy is 
provided in the user-jar, since the Flink component doesn't use the user class 
loader.

To prevent this dependency from being accidentally removed again in the 
future, I modified the `checkShadedArtifacts()` function in the Travis scripts 
to check for the presence of the dependency.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6965

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/4160.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4160


commit f0e757776c45382cfc28daefe321d819d5a4a75c
Author: zentol 
Date:   2017-06-21T14:18:34Z

[FLINK-6965] Include snappy-java in flink-dist




> Avro is missing snappy dependency
> -
>
> Key: FLINK-6965
> URL: https://issues.apache.org/jira/browse/FLINK-6965
> Project: Flink
>  Issue Type: Bug
>  Components: Type Serialization System
>Affects Versions: 1.3.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
> Fix For: 1.3.2
>
>
> The shading rework made before 1.3 removed a snappy dependency that was 
> accidentally pulled in through hadoop. This is technically alright, until 
> class-loaders rear their ugly heads.
> Our kafka connector can read avro records, which may or may not require 
> snappy. Usually this _should_ be solvable by including the snappy dependency 
> in the user-jar if necessary, however since the kafka connector loads classes 
> that it requires using the system class loader this doesn't work.
> As such we have to add a separate snappy dependency to flink-core.





[jira] [Commented] (FLINK-6426) Update the document of group-window table API

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057640#comment-16057640
 ] 

ASF GitHub Bot commented on FLINK-6426:
---

Github user sunjincheng121 closed the pull request at:

https://github.com/apache/flink/pull/3806


> Update the document of group-window table API
> -
>
> Key: FLINK-6426
> URL: https://issues.apache.org/jira/browse/FLINK-6426
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> 1. Correct the method parameter type error in the group-window Table API 
> document: change ` .window([w: Window] as 'w)` to 
> ` .window([w: WindowWithoutAlias] as 'w)`.
> 2. For consistency between the Table API and SQL, change the heading of the 
> SQL document from "Group Windows" to "Windows".





[jira] [Commented] (FLINK-6966) Add maxParallelism and UIDs to all operators generated by the Table API

2017-06-21 Thread sunjincheng (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057656#comment-16057656
 ] 

sunjincheng commented on FLINK-6966:


Hi [~fhueske], do you mean that we need to call {{uid/setUidHash}} when we 
translate to DataStream? And that we should provide a unique UID per 
transformation and job, right?
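One way to read the question above: the UID should be unique per transformation, yet stable across restarts so that savepoint state can be matched to operators. A purely hypothetical sketch (this is not the Table API's actual scheme; the naming and hashing choices are assumptions) could derive such a UID from stable plan properties:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical scheme: a deterministic UID per transformation, derived from
// stable plan properties (job name, operator name, position in the plan), so
// that restoring from a savepoint can match state back to operators.
public class StableOperatorUid {

    static String uidFor(String jobName, String operatorName, int operatorIndex) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(
                (jobName + "/" + operatorName + "/" + operatorIndex)
                    .getBytes(StandardCharsets.UTF_8));
            // Keep the first 8 bytes as a short hex identifier.
            StringBuilder hex = new StringBuilder();
            for (int i = 0; i < 8; i++) {
                hex.append(String.format("%02x", digest[i]));
            }
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }
}
```

The resulting string could then be passed to the operator's {{uid(...)}} during translation; whether hashing or some other deterministic scheme is appropriate is exactly the open design question in the discussion above.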


> Add maxParallelism and UIDs to all operators generated by the Table API
> ---
>
> Key: FLINK-6966
> URL: https://issues.apache.org/jira/browse/FLINK-6966
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: Fabian Hueske
>
> At the moment, the Table API does not assign UIDs and the max parallelism to 
> operators (except for operators with parallelism 1).
> We should do that to avoid problems when rescaling or restarting jobs from 
> savepoints.





[jira] [Commented] (FLINK-6967) Fully separate batch and storm examples

2017-06-21 Thread Chesnay Schepler (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057655#comment-16057655
 ] 

Chesnay Schepler commented on FLINK-6967:
-

As a bonus this will also resolve FLINK-6759.

> Fully separate batch and storm examples
> ---
>
> Key: FLINK-6967
> URL: https://issues.apache.org/jira/browse/FLINK-6967
> Project: Flink
>  Issue Type: Improvement
>  Components: Examples, Storm Compatibility
>Affects Versions: 1.4.0
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Minor
> Fix For: 1.4.0
>
>
> Like the streaming examples (see FLINK-6863) the storm examples have a 
> dependency on the batch examples, exclusively for the WordCount example data.
> I propose to duplicate the test data again for the storm examples.





[GitHub] flink pull request #3806: [FLINK-6426][table]Update the document of group-wi...

2017-06-21 Thread sunjincheng121
Github user sunjincheng121 closed the pull request at:

https://github.com/apache/flink/pull/3806




[jira] [Commented] (FLINK-6498) Migrate Zookeeper configuration options

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057634#comment-16057634
 ] 

ASF GitHub Bot commented on FLINK-6498:
---

Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4123#discussion_r123271284
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java 
---
@@ -370,11 +370,11 @@ public static String generateZookeeperPath(String 
root, String namespace) {
 * Return the configured {@link ZkClientACLMode}.
 *
 * @param config The config to parse
-* @return Configured ACL mode or {@link 
HighAvailabilityOptions#ZOOKEEPER_CLIENT_ACL} if not
+* @return Configured ACL mode or "open" if not
--- End diff --

This may become outdated; it's better to do something like "or the default 
defined by {@link HighAvailabilityOptions#ZOOKEEPER_CLIENT_ACL}"


> Migrate Zookeeper configuration options
> ---
>
> Key: FLINK-6498
> URL: https://issues.apache.org/jira/browse/FLINK-6498
> Project: Flink
>  Issue Type: Sub-task
>  Components: Distributed Coordination
>Reporter: Chesnay Schepler
>Assignee: Fang Yong
>






[GitHub] flink pull request #4061: [FLINK-6841][table]Using TableSourceTable for both...

2017-06-21 Thread sunjincheng121
Github user sunjincheng121 closed the pull request at:

https://github.com/apache/flink/pull/4061




[GitHub] flink pull request #4123: [FLINK-6498] Migrate Zookeeper configuration optio...

2017-06-21 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/4123#discussion_r123271284
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java 
---
@@ -370,11 +370,11 @@ public static String generateZookeeperPath(String 
root, String namespace) {
 * Return the configured {@link ZkClientACLMode}.
 *
 * @param config The config to parse
-* @return Configured ACL mode or {@link 
HighAvailabilityOptions#ZOOKEEPER_CLIENT_ACL} if not
+* @return Configured ACL mode or "open" if not
--- End diff --

This may become outdated; it's better to do something like "or the default 
defined by {@link HighAvailabilityOptions#ZOOKEEPER_CLIENT_ACL}"




[jira] [Commented] (FLINK-6426) Update the document of group-window table API

2017-06-21 Thread sunjincheng (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057638#comment-16057638
 ] 

sunjincheng commented on FLINK-6426:


Yes, when you restructured the Table API docs, I closed this JIRA but forgot 
to close the PR. I'll close it now. :)

> Update the document of group-window table API
> -
>
> Key: FLINK-6426
> URL: https://issues.apache.org/jira/browse/FLINK-6426
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table API & SQL
>Affects Versions: 1.3.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> 1. Correct the method parameter type error in the group-window Table API 
> document: change ` .window([w: Window] as 'w)` to 
> ` .window([w: WindowWithoutAlias] as 'w)`.
> 2. For consistency between the Table API and SQL, change the heading of the 
> SQL document from "Group Windows" to "Windows".





[jira] [Commented] (FLINK-6841) using TableSourceTable for both Stream and Batch OR remove useless import

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057635#comment-16057635
 ] 

ASF GitHub Bot commented on FLINK-6841:
---

Github user sunjincheng121 closed the pull request at:

https://github.com/apache/flink/pull/4061


> using TableSourceTable for both Stream and Batch OR remove useless import
> -
>
> Key: FLINK-6841
> URL: https://issues.apache.org/jira/browse/FLINK-6841
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> 1. {{StreamTableSourceTable}} has an unused import of {{TableException}}.
> 2. {{StreamTableSourceTable}} only overrides {{getRowType}} of 
> {{FlinkTable}}; I think we can override the method in {{TableSourceTable}} 
> instead. If so, we can use {{TableSourceTable}} for both {{Stream}} and {{Batch}}.
> What do you think? [~fhueske] [~twalthr]





[GitHub] flink issue #4061: [FLINK-6841][table]Using TableSourceTable for both Stream...

2017-06-21 Thread sunjincheng121
Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4061
  
Thanks @fhueske, I noticed your description on the JIRA issue, and it makes 
sense to me. At the current time there really are some differences between 
`StreamTableSourceTable` and `(Batch)TableSourceTable`, such as the 
`watermarks` and `time attributes` you mentioned. So I agree with you, and 
will close both the PR and the JIRA.

Thanks,
SunJincheng




[jira] [Created] (FLINK-6966) Add maxParallelism and UIDs to all operators generated by the Table API

2017-06-21 Thread Fabian Hueske (JIRA)
Fabian Hueske created FLINK-6966:


 Summary: Add maxParallelism and UIDs to all operators generated by 
the Table API
 Key: FLINK-6966
 URL: https://issues.apache.org/jira/browse/FLINK-6966
 Project: Flink
  Issue Type: Improvement
  Components: Table API & SQL
Affects Versions: 1.4.0
Reporter: Fabian Hueske


At the moment, the Table API does not assign UIDs and the max parallelism to 
operators (except for operators with parallelism 1).

We should do that to avoid problems when rescaling or restarting jobs from 
savepoints.





[jira] [Closed] (FLINK-6841) using TableSourceTable for both Stream and Batch OR remove useless import

2017-06-21 Thread sunjincheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-6841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sunjincheng closed FLINK-6841.
--
Resolution: Won't Fix

> using TableSourceTable for both Stream and Batch OR remove useless import
> -
>
> Key: FLINK-6841
> URL: https://issues.apache.org/jira/browse/FLINK-6841
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> 1. {{StreamTableSourceTable}} has an unused import of {{TableException}}.
> 2. {{StreamTableSourceTable}} only overrides {{getRowType}} of 
> {{FlinkTable}}; I think we can override the method in {{TableSourceTable}} 
> instead. If so, we can use {{TableSourceTable}} for both {{Stream}} and {{Batch}}.
> What do you think? [~fhueske] [~twalthr]





[jira] [Commented] (FLINK-6841) using TableSourceTable for both Stream and Batch OR remove useless import

2017-06-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-6841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16057630#comment-16057630
 ] 

ASF GitHub Bot commented on FLINK-6841:
---

Github user sunjincheng121 commented on the issue:

https://github.com/apache/flink/pull/4061
  
Thanks @fhueske, I noticed your description on the JIRA issue, and it makes 
sense to me. At the current time there really are some differences between 
`StreamTableSourceTable` and `(Batch)TableSourceTable`, such as the 
`watermarks` and `time attributes` you mentioned. So I agree with you, and 
will close both the PR and the JIRA.

Thanks,
SunJincheng


> using TableSourceTable for both Stream and Batch OR remove useless import
> -
>
> Key: FLINK-6841
> URL: https://issues.apache.org/jira/browse/FLINK-6841
> Project: Flink
>  Issue Type: Improvement
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: sunjincheng
>
> 1. {{StreamTableSourceTable}} has an unused import of {{TableException}}.
> 2. {{StreamTableSourceTable}} only overrides {{getRowType}} of 
> {{FlinkTable}}; I think we can override the method in {{TableSourceTable}} 
> instead. If so, we can use {{TableSourceTable}} for both {{Stream}} and {{Batch}}.
> What do you think? [~fhueske] [~twalthr]




