[jira] [Commented] (FLINK-7243) Add ParquetInputFormat

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617106#comment-16617106
 ] 

ASF GitHub Bot commented on FLINK-7243:
---

HuangZhenQiu commented on issue #6483: [FLINK-7243][flink-formats] Add parquet 
input format
URL: https://github.com/apache/flink/pull/6483#issuecomment-421894204
 
 
   @lvhuyen 
   For the Array handling issue, I figured it out: it is a List backward-compatibility 
issue. When I did internal testing at my company, there was only one type of list 
schema that needed to be handled. Thanks for digging it out.  
   https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#lists
   
   I created a fix. Please have a look.
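
   For reference, a minimal sketch of the two list encodings involved, abridged 
from the LogicalTypes.md page linked above; it assumes parquet-column (for 
MessageTypeParser) on the classpath and is not the PR's code:

{code:scala}
import org.apache.parquet.schema.MessageTypeParser

object ListCompatSketch extends App {
  // Standard 3-level list representation written by current writers.
  val threeLevel = MessageTypeParser.parseMessageType(
    """message doc {
      |  optional group my_list (LIST) {
      |    repeated group list {
      |      optional binary element (UTF8);
      |    }
      |  }
      |}""".stripMargin)

  // Legacy 2-level representation from older writers; the backward-compatibility
  // rules require readers to accept this form too.
  val twoLevel = MessageTypeParser.parseMessageType(
    """message doc {
      |  optional group my_list (LIST) {
      |    repeated binary array (UTF8);
      |  }
      |}""".stripMargin)

  println(threeLevel)
  println(twoLevel)
}
{code}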


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add ParquetInputFormat
> --
>
> Key: FLINK-7243
> URL: https://issues.apache.org/jira/browse/FLINK-7243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: godfrey he
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
>
> Add a {{ParquetInputFormat}} to read data from an Apache Parquet file. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] HuangZhenQiu commented on issue #6483: [FLINK-7243][flink-formats] Add parquet input format

2018-09-16 Thread GitBox
HuangZhenQiu commented on issue #6483: [FLINK-7243][flink-formats] Add parquet 
input format
URL: https://github.com/apache/flink/pull/6483#issuecomment-421894204
 
 
   @lvhuyen 
   For the Array handling issue, I figured it out: it is a List backward-compatibility 
issue. When I did internal testing at my company, there was only one type of list 
schema that needed to be handled. Thanks for digging it out.  
   https://github.com/apache/parquet-format/blob/master/LogicalTypes.md#lists
   
   I created a fix. Please have a look.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhijiangW commented on issue #6697: [hotfix][benchmarks] Add network broadcast benchmark

2018-09-16 Thread GitBox
zhijiangW commented on issue #6697: [hotfix][benchmarks] Add network broadcast 
benchmark
URL: https://github.com/apache/flink/pull/6697#issuecomment-421886999
 
 
   @pnowojski , I was planning to cover the broadcast-scenario benchmark later 
as a follow-up to #6417, so thanks for adding this in a timely manner. :)
   
   The code looks good to me, and I only have two additional concerns:  
   
   1. Actually there are two ways to realize broadcast semantics. The first way 
is via `RecordWriter#broadcastEmit()`, as you did in this PR. The other way 
is via `RecordWriter#emit(BroadcastChannelSelector)`, and in the current 
`StreamNetworkBenchmarkEnvironment#createRecordWriter` we only define the 
`RoundRobinChannelSelector` for the writer. If we make the `ChannelSelector` 
a parameter, we can also support other partitioner modes, such as `rebalance`, 
`forward`, etc. What do you think about these two approaches? (A sketch of the 
selector-based approach follows this list.)
   
   2. Do we need to make the variable `broadcastMode` a `@Param({"true","false"})` 
to cover the existing tests in `StreamNetworkThroughputBenchmarkTest`? Currently 
there are no tests in `StreamNetworkBroadcastThroughputBenchmarkTest`.
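
   A minimal sketch of the selector-based approach from point 1; the interface 
shape is assumed from the 1.6-era `ChannelSelector`, and the selector class is 
hypothetical, not the benchmark's code:

{code:scala}
// Interface shape assumed from Flink's 1.6-era ChannelSelector, reproduced
// here so the sketch is self-contained.
trait ChannelSelector[T] {
  def selectChannels(record: T, numberOfOutputChannels: Int): Array[Int]
}

// Hypothetical broadcast selector: it targets every channel, so broadcast
// semantics go through the regular RecordWriter#emit() path.
class BroadcastChannelSelector[T] extends ChannelSelector[T] {
  override def selectChannels(record: T, numberOfOutputChannels: Int): Array[Int] =
    Array.tabulate(numberOfOutputChannels)(identity)
}
{code}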


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10348) Solve data skew when consuming data from kafka

2018-09-16 Thread Jiayi Liao (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617054#comment-16617054
 ] 

Jiayi Liao commented on FLINK-10348:


[~elevy] 
1. From my perspective, we use fetch requests to request data, and the 
parameters (fetch size, max wait, etc.) are set by our Flink programs, so it's 
our job to decide how to request the data.
2. About the multiple-input operators: it won't help if there are more partitions 
than the parallelism, because the operator then consumes the newest data and the 
oldest data at the same time and it's hard to generate the watermark.

> Solve data skew when consuming data from kafka
> --
>
> Key: FLINK-10348
> URL: https://issues.apache.org/jira/browse/FLINK-10348
> Project: Flink
>  Issue Type: New Feature
>  Components: Kafka Connector
>Affects Versions: 1.6.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>
> When using the KafkaConsumer, our strategy is to send fetch requests to brokers 
> with a fixed fetch size. Assume topic x has n partitions and there is data skew 
> between the partitions, and we now need to consume topic x from the earliest 
> offset, getting up to the max fetch size of data in every fetch request. The 
> problem is that when a task consumes data from both "big" partitions and 
> "small" partitions, the data in the "big" partitions may become late elements 
> because the "small" partitions are consumed faster.
> *Solution:*
> I think we can leverage two parameters to control this.
> 1. data.skew.check // whether to check data skew
> 2. data.skew.check.interval // the interval between checks
> Every data.skew.check.interval, we check the latest offset of every 
> partition, calculate (latest offset - current offset), and then get the 
> partitions that need to slow down and redefine their fetch size.
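
A minimal sketch of the proposed throttling; the names and the proportional 
scaling rule are hypothetical, assuming per-partition current/latest offsets 
are available:

{code:scala}
object SkewCheckSketch extends App {
  // Hypothetical skew check: partitions that are nearly caught up (small lag)
  // get a smaller fetch size, so the skewed ("big") partitions can catch up.
  case class PartitionOffsets(partition: Int, currentOffset: Long, latestOffset: Long) {
    def lag: Long = math.max(0L, latestOffset - currentOffset)
  }

  def adjustedFetchSizes(parts: Seq[PartitionOffsets], baseFetchSize: Int): Map[Int, Int] = {
    val maxLag = math.max(1L, parts.map(_.lag).max)
    parts.map { p =>
      // Scale each partition's fetch size by its lag relative to the largest lag.
      p.partition -> math.max(1, (baseFetchSize * p.lag.toDouble / maxLag).toInt)
    }.toMap
  }

  // The heavily lagging partition keeps the full fetch size, while the
  // nearly caught-up partition is throttled down.
  println(adjustedFetchSizes(Seq(
    PartitionOffsets(0, 1000, 100000),  // "big" partition, lag 99000
    PartitionOffsets(1, 9900, 10000)    // "small" partition, lag 100
  ), baseFetchSize = 1024))
  // Map(0 -> 1024, 1 -> 1)
}
{code}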



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10319) Too many requestPartitionState would crash JM

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617034#comment-16617034
 ] 

ASF GitHub Bot commented on FLINK-10319:


TisonKun commented on issue #6680: [FLINK-10319] [runtime] Too many 
requestPartitionState would crash JM
URL: https://github.com/apache/flink/pull/6680#issuecomment-421882421
 
 
   @Clarkkkkk Thanks for your review!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Too many requestPartitionState would crash JM
> -
>
> Key: FLINK-10319
> URL: https://issues.apache.org/jira/browse/FLINK-10319
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.7.0
>Reporter: 陈梓立
>Assignee: 陈梓立
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Do not call requestPartitionState on the JM when a partition request fails; this 
> may generate too many RPC requests and block the JM.
> We gain little benefit from checking what state the producer is in, while on the 
> other hand it can crash the JM with too many RPC requests. A Task can always 
> retrigger the partition request from its InputGate; it will fail if the 
> producer has gone and succeed if the producer is alive. Either way, there is no 
> need to ask the JM for help.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10319) Too many requestPartitionState would crash JM

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617035#comment-16617035
 ] 

ASF GitHub Bot commented on FLINK-10319:


TisonKun commented on issue #6680: [FLINK-10319] [runtime] Too many 
requestPartitionState would crash JM
URL: https://github.com/apache/flink/pull/6680#issuecomment-421882434
 
 
   cc @StephanEwen @twalthr 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Too many requestPartitionState would crash JM
> -
>
> Key: FLINK-10319
> URL: https://issues.apache.org/jira/browse/FLINK-10319
> Project: Flink
>  Issue Type: Improvement
>  Components: Distributed Coordination
>Affects Versions: 1.7.0
>Reporter: 陈梓立
>Assignee: 陈梓立
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>
> Do not call requestPartitionState on the JM when a partition request fails; this 
> may generate too many RPC requests and block the JM.
> We gain little benefit from checking what state the producer is in, while on the 
> other hand it can crash the JM with too many RPC requests. A Task can always 
> retrigger the partition request from its InputGate; it will fail if the 
> producer has gone and succeed if the producer is alive. Either way, there is no 
> need to ask the JM for help.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] TisonKun commented on issue #6680: [FLINK-10319] [runtime] Too many requestPartitionState would crash JM

2018-09-16 Thread GitBox
TisonKun commented on issue #6680: [FLINK-10319] [runtime] Too many 
requestPartitionState would crash JM
URL: https://github.com/apache/flink/pull/6680#issuecomment-421882421
 
 
   @Clarkkkkk Thanks for your review!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] TisonKun commented on issue #6680: [FLINK-10319] [runtime] Too many requestPartitionState would crash JM

2018-09-16 Thread GitBox
TisonKun commented on issue #6680: [FLINK-10319] [runtime] Too many 
requestPartitionState would crash JM
URL: https://github.com/apache/flink/pull/6680#issuecomment-421882434
 
 
   cc @StephanEwen @twalthr 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-7243) Add ParquetInputFormat

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-7243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617017#comment-16617017
 ] 

ASF GitHub Bot commented on FLINK-7243:
---

lvhuyen commented on issue #6483: [FLINK-7243][flink-formats] Add parquet input 
format
URL: https://github.com/apache/flink/pull/6483#issuecomment-421880581
 
 
   @HuangZhenQiu Sorry for the late response. 
   I was wrong. The problem exists with PojoInputFormat as well.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add ParquetInputFormat
> --
>
> Key: FLINK-7243
> URL: https://issues.apache.org/jira/browse/FLINK-7243
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Reporter: godfrey he
>Assignee: Zhenqiu Huang
>Priority: Major
>  Labels: pull-request-available
>
> Add a {{ParquetInputFormat}} to read data from an Apache Parquet file. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] lvhuyen commented on issue #6483: [FLINK-7243][flink-formats] Add parquet input format

2018-09-16 Thread GitBox
lvhuyen commented on issue #6483: [FLINK-7243][flink-formats] Add parquet input 
format
URL: https://github.com/apache/flink/pull/6483#issuecomment-421880581
 
 
   @HuangZhenQiu Sorry for the late response. 
   I was wrong. The problem exists with PojoInputFormat as well.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10341) Add option to print flink command when running bin/flink

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16617014#comment-16617014
 ] 

ASF GitHub Bot commented on FLINK-10341:


Clarkkkkk commented on issue #6690: [FLINK-10341][shell script] Add option to 
print flink command when running bin/flink
URL: https://github.com/apache/flink/pull/6690#issuecomment-421879448
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add option to print flink command when running bin/flink
> 
>
> Key: FLINK-10341
> URL: https://issues.apache.org/jira/browse/FLINK-10341
> Project: Flink
>  Issue Type: Improvement
>  Components: Startup Shell Scripts
>Reporter: Jeff Zhang
>Assignee: Jeff Zhang
>Priority: Major
>  Labels: pull-request-available
>
> It would be very useful if I could see the final java command that is used to 
> run a Flink program, especially when I hit a very weird issue as a Flink 
> contributor. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] Clarkkkkk commented on issue #6690: [FLINK-10341][shell script] Add option to print flink command when running bin/flink

2018-09-16 Thread GitBox
Clarkkkkk commented on issue #6690: [FLINK-10341][shell script] Add option to 
print flink command when running bin/flink
URL: https://github.com/apache/flink/pull/6690#issuecomment-421879448
 
 
   +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-10240) Pluggable scheduling strategy for batch jobs

2018-09-16 Thread Zhu Zhu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-10240:

Summary: Pluggable scheduling strategy for batch jobs  (was: Pluggable 
scheduling strategy for batch job)

> Pluggable scheduling strategy for batch jobs
> 
>
> Key: FLINK-10240
> URL: https://issues.apache.org/jira/browse/FLINK-10240
> Project: Flink
>  Issue Type: New Feature
>  Components: Distributed Coordination
>Reporter: Zhu Zhu
>Priority: Major
>  Labels: scheduling
>
> Currently batch jobs are scheduled with the LAZY_FROM_SOURCES strategy: source 
> tasks are scheduled in the beginning, and other tasks are scheduled once 
> their input data are consumable.
> However, consumable input data does not always mean the task can work at 
> once. 
>  
> One example is the hash join operation, where the operator first consumes one 
> side (the build side) to set up a table, then consumes the other side (the 
> probe side) to do the real join work. If the probe side is started early, it 
> just gets stuck on back pressure, as the join operator will not consume data 
> from it before the build stage is done, causing a waste of resources.
> If we have the probe-side task start after the build stage is done, both 
> the build and probe sides can have more computing resources, as they are 
> staggered.
>  
> That's why we think a flexible scheduling strategy is needed, allowing job 
> owners to customize the vertex schedule order and constraints. Better 
> resource utilization usually means better performance.
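
A minimal sketch of the constraint idea for the hash-join example; the data 
model and names are hypothetical, not the proposed API:

{code:scala}
// Hypothetical constraint-based schedule: a vertex becomes schedulable only
// once all of its prerequisite vertices have finished.
object ConstraintSchedulingSketch extends App {
  def schedulable(finished: Set[String],
                  prerequisites: Map[String, Set[String]]): Set[String] =
    prerequisites.collect {
      case (vertex, deps) if !finished(vertex) && deps.subsetOf(finished) => vertex
    }.toSet

  // The probe side (and the join itself) is held back until the build side is done.
  val deps = Map(
    "buildSource" -> Set.empty[String],
    "probeSource" -> Set("buildSource"),
    "join"        -> Set("buildSource"))

  println(schedulable(Set.empty, deps))          // Set(buildSource)
  println(schedulable(Set("buildSource"), deps)) // Set(probeSource, join)
}
{code}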



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10205) Batch Job: InputSplit Fault tolerant for DataSourceTask

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616958#comment-16616958
 ] 

ASF GitHub Bot commented on FLINK-10205:


isunjin commented on issue #6684: [FLINK-10205] Batch Job: InputSplit Fault 
tolerant for DataSource…
URL: https://github.com/apache/flink/pull/6684#issuecomment-421858786
 
 
   I have put the document here; note that it covers not only this issue but 
also other failover improvements:
   
https://docs.google.com/document/d/1FdZdcA63tPUEewcCimTFy9Iz2jlVlMRANZkO4RngIuk/edit?usp=sharing


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Batch Job: InputSplit Fault tolerant for DataSourceTask
> ---
>
> Key: FLINK-10205
> URL: https://issues.apache.org/jira/browse/FLINK-10205
> Project: Flink
>  Issue Type: Sub-task
>  Components: JobManager
>Reporter: JIN SUN
>Assignee: JIN SUN
>Priority: Major
>  Labels: pull-request-available
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Today the DataSourceTask pulls InputSplits from the JobManager to achieve better 
> performance. However, when a DataSourceTask fails and reruns, it will not get 
> the same splits as its previous version. This can introduce inconsistent 
> results or even data corruption.
> Furthermore, if two executions run at the same time (in a batch scenario), 
> these two executions should process the same splits.
> We need to fix this issue to make the inputs of a DataSourceTask 
> deterministic. The proposal is to save all splits into the ExecutionVertex, and 
> the DataSourceTask will pull splits from there.
>  
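
A minimal sketch of the replay idea, with hypothetical names; it only 
illustrates keeping an assignment log per vertex so a rerun sees exactly the 
splits of the first attempt:

{code:scala}
import scala.collection.mutable

// Hypothetical assignment log kept on the master: the first attempt records
// each split it is handed; later attempts replay exactly that sequence.
class SplitAssignmentLog[S](freshSplits: Iterator[S]) {
  private val history = mutable.Map.empty[Int, mutable.Buffer[S]]
  private val cursor  = mutable.Map.empty[(Int, Int), Int] // (vertex, attempt) -> position

  def nextSplit(vertexId: Int, attempt: Int): Option[S] = synchronized {
    val log = history.getOrElseUpdate(vertexId, mutable.Buffer.empty)
    val idx = cursor.getOrElse((vertexId, attempt), 0)
    cursor((vertexId, attempt)) = idx + 1
    if (idx < log.length) {
      Some(log(idx))               // rerun: replay the recorded split
    } else if (freshSplits.hasNext) {
      val s = freshSplits.next()   // first attempt: record a fresh split
      log += s
      Some(s)
    } else {
      None
    }
  }
}
{code}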



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] isunjin commented on issue #6684: [FLINK-10205] Batch Job: InputSplit Fault tolerant for DataSource…

2018-09-16 Thread GitBox
isunjin commented on issue #6684: [FLINK-10205] Batch Job: InputSplit Fault 
tolerant for DataSource…
URL: https://github.com/apache/flink/pull/6684#issuecomment-421858786
 
 
   I have put the document here; note that it covers not only this issue but 
also other failover improvements:
   
https://docs.google.com/document/d/1FdZdcA63tPUEewcCimTFy9Iz2jlVlMRANZkO4RngIuk/edit?usp=sharing


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10259) Key validation for GroupWindowAggregate is broken

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616941#comment-16616941
 ] 

ASF GitHub Bot commented on FLINK-10259:


fhueske commented on issue #6641: [FLINK-10259] [table] Fix key extraction for 
GroupWindows.
URL: https://github.com/apache/flink/pull/6641#issuecomment-421851590
 
 
   Thanks for the review @hequn8128!
   I've updated the PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Key validation for GroupWindowAggregate is broken
> -
>
> Key: FLINK-10259
> URL: https://issues.apache.org/jira/browse/FLINK-10259
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> WindowGroups have multiple equivalent keys (start, end) that should be 
> handled differently from other keys. The {{UpdatingPlanChecker}} uses 
> equivalence groups to identify equivalent keys but the keys of WindowGroups 
> are not correctly assigned to groups.
> This means that we cannot correctly extract keys from queries that use group 
> windows.
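
A minimal sketch of the equivalence-group idea with hypothetical field names; 
window start and end identify the same window, so either field can stand in 
for that group when checking whether a query's output covers the unique key:

{code:scala}
object KeyEquivalenceGroupsSketch extends App {
  // Hypothetical groups: w$start and w$end are equivalent keys of one window.
  val groupOf = Map("w$start" -> "window", "w$end" -> "window", "user" -> "user")

  // The output covers the unique key iff every required group is represented
  // by at least one of its equivalent fields.
  def coversKey(outputFields: Set[String], requiredGroups: Set[String]): Boolean =
    requiredGroups.subsetOf(outputFields.flatMap(groupOf.get))

  println(coversKey(Set("user", "w$end"), Set("user", "window"))) // true
  println(coversKey(Set("user"), Set("user", "window")))          // false
}
{code}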



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] fhueske commented on issue #6641: [FLINK-10259] [table] Fix key extraction for GroupWindows.

2018-09-16 Thread GitBox
fhueske commented on issue #6641: [FLINK-10259] [table] Fix key extraction for 
GroupWindows.
URL: https://github.com/apache/flink/pull/6641#issuecomment-421851590
 
 
   Thanks for the review @hequn8128!
   I've updated the PR.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10259) Key validation for GroupWindowAggregate is broken

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616938#comment-16616938
 ] 

ASF GitHub Bot commented on FLINK-10259:


fhueske commented on a change in pull request #6641: [FLINK-10259] [table] Fix 
key extraction for GroupWindows.
URL: https://github.com/apache/flink/pull/6641#discussion_r217931616
 
 

 ##
 File path: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/runtime/stream/sql/InsertIntoITCase.scala
 ##
 @@ -0,0 +1,406 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.stream.sql
+
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.streaming.api.TimeCharacteristic
+import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.api.{TableEnvironment, Types}
+import org.apache.flink.table.runtime.stream.table.{RowCollector, 
TestRetractSink, TestUpsertSink}
+import org.apache.flink.table.runtime.utils.StreamTestData
+import org.apache.flink.table.utils.MemoryTableSourceSinkUtil
+import org.apache.flink.test.util.{AbstractTestBase, TestBaseUtils}
+import org.junit.Assert._
+import org.junit.Test
+
+import scala.collection.JavaConverters._
+
+class InsertIntoITCase extends AbstractTestBase {
+
+  @Test
+  def testInsertIntoAppendStreamToTableSink(): Unit = {
 
 Review comment:
   Good point!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Key validation for GroupWindowAggregate is broken
> -
>
> Key: FLINK-10259
> URL: https://issues.apache.org/jira/browse/FLINK-10259
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: Fabian Hueske
>Assignee: Fabian Hueske
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0, 1.6.2, 1.5.5
>
>
> WindowGroups have multiple equivalent keys (start, end) that should be 
> handled differently from other keys. The {{UpdatingPlanChecker}} uses 
> equivalence groups to identify equivalent keys but the keys of WindowGroups 
> are not correctly assigned to groups.
> This means that we cannot correctly extract keys from queries that use group 
> windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] fhueske commented on a change in pull request #6641: [FLINK-10259] [table] Fix key extraction for GroupWindows.

2018-09-16 Thread GitBox
fhueske commented on a change in pull request #6641: [FLINK-10259] [table] Fix 
key extraction for GroupWindows.
URL: https://github.com/apache/flink/pull/6641#discussion_r217931616
 
 

 ##
 File path: 
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/runtime/stream/sql/InsertIntoITCase.scala
 ##
 @@ -0,0 +1,406 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.runtime.stream.sql
+
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.streaming.api.TimeCharacteristic
+import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.api.{TableEnvironment, Types}
+import org.apache.flink.table.runtime.stream.table.{RowCollector, 
TestRetractSink, TestUpsertSink}
+import org.apache.flink.table.runtime.utils.StreamTestData
+import org.apache.flink.table.utils.MemoryTableSourceSinkUtil
+import org.apache.flink.test.util.{AbstractTestBase, TestBaseUtils}
+import org.junit.Assert._
+import org.junit.Test
+
+import scala.collection.JavaConverters._
+
+class InsertIntoITCase extends AbstractTestBase {
+
+  @Test
+  def testInsertIntoAppendStreamToTableSink(): Unit = {
 
 Review comment:
   Good point!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10348) Solve data skew when consuming data from kafka

2018-09-16 Thread Elias Levy (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616933#comment-16616933
 ] 

Elias Levy commented on FLINK-10348:


I would suggest the strategy employed by Kafka Streams, which performs a 
best-effort attempt to align streams by selectively fetching from the stream 
with the lowest watermark if there are messages available. 

Rather than implementing something like this within the Kafka connector 
sources, which are independent tasks in Flink, I would suggest implementing it 
within multiple-input operators. The operator can selectively process messages 
from the input stream with the lowest watermark if they are available. Back 
pressure can then take care of slowing down the higher-volume input if 
necessary. 
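
A minimal sketch of that selection rule for a two-input operator; the function 
and its shape are hypothetical:

{code:scala}
object InputAlignmentSketch extends App {
  // Prefer the input with the lowest watermark when both have buffered
  // records; backpressure then throttles the neglected, higher-volume input.
  def pickInput(watermark1: Long, watermark2: Long,
                hasData1: Boolean, hasData2: Boolean): Option[Int] =
    (hasData1, hasData2) match {
      case (true, true)  => Some(if (watermark1 <= watermark2) 1 else 2)
      case (true, false) => Some(1)
      case (false, true) => Some(2)
      case _             => None
    }

  println(pickInput(watermark1 = 100L, watermark2 = 250L,
                    hasData1 = true, hasData2 = true)) // Some(1): input 1 lags behind
}
{code}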

> Solve data skew when consuming data from kafka
> --
>
> Key: FLINK-10348
> URL: https://issues.apache.org/jira/browse/FLINK-10348
> Project: Flink
>  Issue Type: New Feature
>  Components: Kafka Connector
>Affects Versions: 1.6.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>
> When using the KafkaConsumer, our strategy is to send fetch requests to brokers 
> with a fixed fetch size. Assume topic x has n partitions and there is data skew 
> between the partitions, and we now need to consume topic x from the earliest 
> offset, getting up to the max fetch size of data in every fetch request. The 
> problem is that when a task consumes data from both "big" partitions and 
> "small" partitions, the data in the "big" partitions may become late elements 
> because the "small" partitions are consumed faster.
> *Solution:*
> I think we can leverage two parameters to control this.
> 1. data.skew.check // whether to check data skew
> 2. data.skew.check.interval // the interval between checks
> Every data.skew.check.interval, we check the latest offset of every 
> partition, calculate (latest offset - current offset), and then get the 
> partitions that need to slow down and redefine their fetch size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10222) Table scalar function expression parses error when function name equals the exists keyword suffix

2018-09-16 Thread Fabian Hueske (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabian Hueske closed FLINK-10222.
-
   Resolution: Fixed
 Assignee: vinoyang
Fix Version/s: 1.5.5
   1.6.2
   1.4.3

Fixed for 1.7.0 with 4f219f40c3ab8c454ce52a60a857c3a2a0c56451
Fixed for 1.6.2 with 737b988cd901ab39240371de79d1f303153172eb
Fixed for 1.5.5 with d2034ca3ff6cb4736efef2c1918513f40f0d2243
Fixed for 1.4.3 with 2cdec3fb9ed2d2502ce6acdbcf0322e535efab47

> Table scalar function expression parses error when function name equals the 
> exists keyword suffix
> -
>
> Key: FLINK-10222
> URL: https://issues.apache.org/jira/browse/FLINK-10222
> Project: Flink
>  Issue Type: Bug
>  Components: Table API & SQL
>Reporter: vinoyang
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.4.3, 1.6.2, 1.5.5
>
>
> Suffix extraction in ExpressionParser.scala does not actually support 
> extraction of keywords with the same prefix. For example: adding suffix 
> parsing rules for {{a.fun}} and {{a.function}} simultaneously causes 
> exceptions. 
> some discussion : 
> [https://github.com/apache/flink/pull/6432#issuecomment-416127815]
> https://github.com/apache/flink/pull/6585#discussion_r212797015
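
The merged fix appends a negative lookahead so that a keyword no longer matches 
as a prefix of a longer identifier. A minimal sketch of the effect, with the 
character class abridged from the patch:

{code:scala}
object KeywordLookaheadSketch extends App {
  // Without the lookahead, the keyword "asc" also matches the prefix of "ascii".
  val plain   = """(?i)\Qasc\E""".r
  val guarded = """(?i)\Qasc\E(?![_$a-zA-Z0-9])""".r

  println(plain.findPrefixOf("ascii()"))   // Some(asc) -- wrongly swallows the name
  println(guarded.findPrefixOf("ascii()")) // None      -- identifier continues
  println(guarded.findPrefixOf("asc()"))   // Some(asc) -- real keyword still matches
}
{code}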



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10222) Table scalar function expression parses error when function name equals the exists keyword suffix

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616907#comment-16616907
 ] 

ASF GitHub Bot commented on FLINK-10222:


asfgit closed pull request #6622: [FLINK-10222] [table] Table scalar function 
expression parses error when function name equals the exists keyword suffix
URL: https://github.com/apache/flink/pull/6622
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/ExpressionParser.scala b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/ExpressionParser.scala
index 4909d2c79da..a8f73a21bdf 100644
--- a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/ExpressionParser.scala
+++ b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/ExpressionParser.scala
@@ -42,7 +42,8 @@ object ExpressionParser extends JavaTokenParsers with PackratParsers {
 
   // Convert the keyword into an case insensitive Parser
   implicit def keyword2Parser(kw: Keyword): Parser[String] = {
-("""(?i)\Q""" + kw.key + """\E""").r
+("""(?i)\Q""" + kw.key +
+  
"""\E(?![_$a-zA-Z0-9\u005f\u0024\u0061-\u007a\u0041-\u005a\u0030-\u0039])""").r
   }
 
   // Keyword
diff --git a/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/expressions/KeywordParseTest.scala b/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/expressions/KeywordParseTest.scala
new file mode 100644
index 000..05c464a37ea
--- /dev/null
+++ b/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/expressions/KeywordParseTest.scala
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.expressions
+
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.table.expressions.utils.ExpressionTestBase
+import org.apache.flink.types.Row
+import org.junit.{Assert, Test}
+
+/**
+  * Tests keyword as suffix.
+  */
+class KeywordParseTest extends ExpressionTestBase {
+
+  @Test
+  def testFunctionNameContainsSuffixKeyword(): Unit = {
+    // ASC / DESC (no parameter)
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.ascii()")).asInstanceOf[Call]).functionName,
+      "ASCII")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.iiascii()")).asInstanceOf[Call]).functionName,
+      "IIASCII")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.iiasc()")).asInstanceOf[Call]).functionName,
+      "IIASC")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.descabc()")).asInstanceOf[Call]).functionName,
+      "DESCABC")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.abcdescabc()")).asInstanceOf[Call]).functionName,
+      "ABCDESCABC")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.abcdesc()")).asInstanceOf[Call]).functionName,
+      "ABCDESC")
+    // LOG has parameter
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.logabc()")).asInstanceOf[Call]).functionName,
+      "LOGABC")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.abclogabc()")).asInstanceOf[Call]).functionName,
+      "ABCLOGABC")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.abclog()")).asInstanceOf[Call]).functionName,
+      "ABCLOG")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.log2()")).asInstanceOf[Call]).functionName,
+      "LOG2")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.log10()")).asInstanceOf[Call]).functionName,
+      "LOG10")
+  }
+
+  override def testData: Any = new Row(0)
+
+  override def typeInfo: TypeInformation[Any] =
+    new RowTypeInfo().asInstanceOf[TypeInformation[Any]]
+}

[GitHub] asfgit closed pull request #6622: [FLINK-10222] [table] Table scalar function expression parses error when function name equals the exists keyword suffix

2018-09-16 Thread GitBox
asfgit closed pull request #6622: [FLINK-10222] [table] Table scalar function 
expression parses error when function name equals the exists keyword suffix
URL: https://github.com/apache/flink/pull/6622
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/ExpressionParser.scala b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/ExpressionParser.scala
index 4909d2c79da..a8f73a21bdf 100644
--- a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/ExpressionParser.scala
+++ b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/ExpressionParser.scala
@@ -42,7 +42,8 @@ object ExpressionParser extends JavaTokenParsers with PackratParsers {
 
   // Convert the keyword into an case insensitive Parser
   implicit def keyword2Parser(kw: Keyword): Parser[String] = {
-("""(?i)\Q""" + kw.key + """\E""").r
+("""(?i)\Q""" + kw.key +
+  
"""\E(?![_$a-zA-Z0-9\u005f\u0024\u0061-\u007a\u0041-\u005a\u0030-\u0039])""").r
   }
 
   // Keyword
diff --git a/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/expressions/KeywordParseTest.scala b/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/expressions/KeywordParseTest.scala
new file mode 100644
index 000..05c464a37ea
--- /dev/null
+++ b/flink-libraries/flink-table/src/test/scala/org/apache/flink/table/expressions/KeywordParseTest.scala
@@ -0,0 +1,75 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.expressions
+
+import org.apache.flink.api.common.typeinfo.TypeInformation
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.table.expressions.utils.ExpressionTestBase
+import org.apache.flink.types.Row
+import org.junit.{Assert, Test}
+
+/**
+  * Tests keyword as suffix.
+  */
+class KeywordParseTest extends ExpressionTestBase {
+
+  @Test
+  def testFunctionNameContainsSuffixKeyword(): Unit = {
+    // ASC / DESC (no parameter)
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.ascii()")).asInstanceOf[Call]).functionName,
+      "ASCII")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.iiascii()")).asInstanceOf[Call]).functionName,
+      "IIASCII")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.iiasc()")).asInstanceOf[Call]).functionName,
+      "IIASC")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.descabc()")).asInstanceOf[Call]).functionName,
+      "DESCABC")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.abcdescabc()")).asInstanceOf[Call]).functionName,
+      "ABCDESCABC")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.abcdesc()")).asInstanceOf[Call]).functionName,
+      "ABCDESC")
+    // LOG has parameter
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.logabc()")).asInstanceOf[Call]).functionName,
+      "LOGABC")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.abclogabc()")).asInstanceOf[Call]).functionName,
+      "ABCLOGABC")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.abclog()")).asInstanceOf[Call]).functionName,
+      "ABCLOG")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.log2()")).asInstanceOf[Call]).functionName,
+      "LOG2")
+    Assert.assertEquals(
+      ((ExpressionParser.parseExpression("f0.log10()")).asInstanceOf[Call]).functionName,
+      "LOG10")
+  }
+
+  override def testData: Any = new Row(0)
+
+  override def typeInfo: TypeInformation[Any] =
+    new RowTypeInfo().asInstanceOf[TypeInformation[Any]]
+}


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.

[jira] [Commented] (FLINK-10348) Solve data skew when consuming data from kafka

2018-09-16 Thread Jiayi Liao (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616853#comment-16616853
 ] 

Jiayi Liao commented on FLINK-10348:


[~twalthr][~Zentol] What do you think of this feature?

> Solve data skew when consuming data from kafka
> --
>
> Key: FLINK-10348
> URL: https://issues.apache.org/jira/browse/FLINK-10348
> Project: Flink
>  Issue Type: New Feature
>  Components: Kafka Connector
>Affects Versions: 1.6.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>
> When using the KafkaConsumer, our strategy is to send fetch requests to brokers 
> with a fixed fetch size. Assume topic x has n partitions and there is data skew 
> between the partitions, and we now need to consume topic x from the earliest 
> offset, getting up to the max fetch size of data in every fetch request. The 
> problem is that when a task consumes data from both "big" partitions and 
> "small" partitions, the data in the "big" partitions may become late elements 
> because the "small" partitions are consumed faster.
> *Solution:*
> I think we can leverage two parameters to control this.
> 1. data.skew.check // whether to check data skew
> 2. data.skew.check.interval // the interval between checks
> Every data.skew.check.interval, we check the latest offset of every 
> partition, calculate (latest offset - current offset), and then get the 
> partitions that need to slow down and redefine their fetch size.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (FLINK-10350) Add assignTimestampsAndWatermarks in KeyedStream

2018-09-16 Thread Jiayi Liao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-10350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiayi Liao closed FLINK-10350.
--
Resolution: Invalid

> Add assignTimestampsAndWatermarks in KeyedStream
> 
>
> Key: FLINK-10350
> URL: https://issues.apache.org/jira/browse/FLINK-10350
> Project: Flink
>  Issue Type: New Feature
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (FLINK-10351) Ignore specific operator's state when restore from savepoint

2018-09-16 Thread Jiayi Liao (JIRA)
Jiayi Liao created FLINK-10351:
--

 Summary: Ignore specific operator's state when restore from 
savepoint
 Key: FLINK-10351
 URL: https://issues.apache.org/jira/browse/FLINK-10351
 Project: Flink
  Issue Type: New Feature
  Components: State Backends, Checkpointing
Affects Versions: 1.6.0
Reporter: Jiayi Liao
Assignee: Jiayi Liao


The story is that I wanted to change autoWatermarkInterval, but I found that it 
didn't help because the processing time service is restored from the state 
backend.
So I wonder if we can provide a command like --ignoreState  to let 
users abandon the state they don't want.
I think it'll be helpful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (FLINK-6895) Add STR_TO_DATE supported in SQL

2018-09-16 Thread nando (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nando updated FLINK-6895:
-
Comment: was deleted

(was: Do you mean the string to format? It can be a column reference, for 
example:

{code:sql}
UPDATE `table`
 SET `column_as_date` = str_to_date( `column_varchar`, '%d-%m-%Y' );
{code})

> Add STR_TO_DATE supported in SQL
> 
>
> Key: FLINK-6895
> URL: https://issues.apache.org/jira/browse/FLINK-6895
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Jiayi Liao
>Priority: Major
>  Labels: pull-request-available
>
> STR_TO_DATE(str,format) This is the inverse of the DATE_FORMAT() function. It 
> takes a string str and a format string format. STR_TO_DATE() returns a 
> DATETIME value if the format string contains both date and time parts, or a 
> DATE or TIME value if the string contains only date or time parts. If the 
> date, time, or datetime value extracted from str is illegal, STR_TO_DATE() 
> returns NULL and produces a warning.
> * Syntax:
> STR_TO_DATE(str,format) 
> * Arguments
> **str: -
> **format: -
> * Return Types
>   DATETIME/DATE/TIME
> * Example:
>   STR_TO_DATE('01,5,2013','%d,%m,%Y') -> '2013-05-01'
>   SELECT STR_TO_DATE('a09:30:17','a%h:%i:%s') -> '09:30:17'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_str-to-date]
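
A minimal sketch of the first example's semantics; the MySQL-to-Java pattern 
mapping is a hypothetical helper covering only the date specifiers used there, 
not the proposed implementation:

{code:scala}
object StrToDateSketch extends App {
  // Map the MySQL-style specifiers used in the example onto SimpleDateFormat.
  def mysqlToJavaPattern(fmt: String): String =
    fmt.replace("%d", "dd").replace("%m", "MM").replace("%Y", "yyyy")

  def strToDate(s: String, fmt: String): java.util.Date = {
    val sdf = new java.text.SimpleDateFormat(mysqlToJavaPattern(fmt))
    sdf.setLenient(false) // illegal dates should fail rather than roll over
    sdf.parse(s)
  }

  // STR_TO_DATE('01,5,2013','%d,%m,%Y') -> '2013-05-01'
  println(strToDate("01,5,2013", "%d,%m,%Y"))
}
{code}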



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Issue Comment Deleted] (FLINK-6895) Add STR_TO_DATE supported in SQL

2018-09-16 Thread nando (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

nando updated FLINK-6895:
-
Comment: was deleted

(was: Taking a look at this [MySQL 
STR_TO_DATE|https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_str-to-date]
 it seems that the correct approach would be:
* Return a DATETIME ({{java.sql.Timestamp}}) value if the format string 
contains both date and time parts.
* Return a DATE ({{java.sql.Date}}) if the string contains date only.
* Return a TIME ({{java.sql.Time}}) if the string contains time only.

{{java.util.Date}} is a superclass of these three classes (see [Class 
Date|https://docs.oracle.com/javase/7/docs/api/java/util/Date.html]). What do 
you think? Maybe it's a good idea to use this as the return type.)

> Add STR_TO_DATE supported in SQL
> 
>
> Key: FLINK-6895
> URL: https://issues.apache.org/jira/browse/FLINK-6895
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table API & SQL
>Affects Versions: 1.4.0
>Reporter: sunjincheng
>Assignee: Jiayi Liao
>Priority: Major
>  Labels: pull-request-available
>
> STR_TO_DATE(str,format) This is the inverse of the DATE_FORMAT() function. It 
> takes a string str and a format string format. STR_TO_DATE() returns a 
> DATETIME value if the format string contains both date and time parts, or a 
> DATE or TIME value if the string contains only date or time parts. If the 
> date, time, or datetime value extracted from str is illegal, STR_TO_DATE() 
> returns NULL and produces a warning.
> * Syntax:
> STR_TO_DATE(str,format) 
> * Arguments
> **str: -
> **format: -
> * Return Types
>   DATETIME/DATE/TIME
> * Example:
>   STR_TO_DATE('01,5,2013','%d,%m,%Y') -> '2013-05-01'
>   SELECT STR_TO_DATE('a09:30:17','a%h:%i:%s') -> '09:30:17'
> * See more:
> ** [MySQL| 
> https://dev.mysql.com/doc/refman/5.7/en/date-and-time-functions.html#function_str-to-date]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-10341) Add option to print flink command when running bin/flink

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616740#comment-16616740
 ] 

ASF GitHub Bot commented on FLINK-10341:


zjffdu commented on issue #6690: [FLINK-10341][shell script] Add option to 
print flink command when running bin/flink
URL: https://github.com/apache/flink/pull/6690#issuecomment-421770262
 
 
   Thanks for the review @TisonKun , @dawidwys. `bash -x` does help, but my 
scenario is a little different. I am using `bin/flink` in third-party tools 
(notebooks) to launch Flink. The output of the `bin/flink` command will be logged 
in a log file, so printing the flink command is very useful for diagnosis, and 
`bash -x` would be too verbose; it is also not so convenient to switch between 
`bash -x bin/flink` and `bin/flink`. In contrast, setting the env variable 
`PRINT_FLINK_COMMAND` is very convenient for me. Does this scenario make sense 
to you?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add option to print flink command when running bin/flink
> 
>
> Key: FLINK-10341
> URL: https://issues.apache.org/jira/browse/FLINK-10341
> Project: Flink
>  Issue Type: Improvement
>  Components: Startup Shell Scripts
>Reporter: Jeff Zhang
>Assignee: Jeff Zhang
>Priority: Major
>  Labels: pull-request-available
>
> It would be very useful if I could see the final java command that is used to 
> run a Flink program, especially when I hit a very weird issue as a Flink 
> contributor. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] zjffdu commented on issue #6690: [FLINK-10341][shell script] Add option to print flink command when running bin/flink

2018-09-16 Thread GitBox
zjffdu commented on issue #6690: [FLINK-10341][shell script] Add option to 
print flink command when running bin/flink
URL: https://github.com/apache/flink/pull/6690#issuecomment-421770262
 
 
   Thanks for the review @TisonKun , @dawidwys. `bash -x` does help, but my 
scenario is a little different. I am using `bin/flink` in third-party tools 
(notebooks) to launch Flink. The output of the `bin/flink` command will be logged 
in a log file, so printing the flink command is very useful for diagnosis, and 
`bash -x` would be too verbose; it is also not so convenient to switch between 
`bash -x bin/flink` and `bin/flink`. In contrast, setting the env variable 
`PRINT_FLINK_COMMAND` is very convenient for me. Does this scenario make sense 
to you?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-10349) Unify stopActor utils

2018-09-16 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16616699#comment-16616699
 ] 

ASF GitHub Bot commented on FLINK-10349:


TisonKun commented on issue #6701: [FLINK-10349] Unify stopActor utils
URL: https://github.com/apache/flink/pull/6701#issuecomment-421757683
 
 
   cc @tillrohrmann @GJL 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Unify stopActor utils
> -
>
> Key: FLINK-10349
> URL: https://issues.apache.org/jira/browse/FLINK-10349
> Project: Flink
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 1.7.0
>Reporter: 陈梓立
>Assignee: 陈梓立
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.7.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] TisonKun commented on issue #6701: [FLINK-10349] Unify stopActor utils

2018-09-16 Thread GitBox
TisonKun commented on issue #6701: [FLINK-10349] Unify stopActor utils
URL: https://github.com/apache/flink/pull/6701#issuecomment-421757683
 
 
   cc @tillrohrmann @GJL 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services