[GitHub] [flink] flinkbot edited a comment on pull request #17842: [FLINK-24966] [docs] Fix spelling errors in the project

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17842:
URL: https://github.com/apache/flink/pull/17842#issuecomment-974310249


   
   ## CI report:
   
   * 007d765a862fc6c288d08516cc06640f4e6e85ea Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26891)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-13550) Support for CPU FlameGraphs in web UI

2021-11-22 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-13550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447820#comment-17447820
 ] 

David Morávek commented on FLINK-13550:
---

[~jackylau] that's the line number. It's always -2 for native methods. See 
StackTraceElement [1] for more details. Please use our dedicated user mailing 
list [2] for this type of question; it's more convenient for us to help you 
there.

 

[1] [https://docs.oracle.com/javase/8/docs/api/java/lang/StackTraceElement.html]
[2] [https://flink.apache.org/community.html#mailing-lists]
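The convention referenced above can be checked directly against the JDK API: `StackTraceElement` uses a line number of -2 as the sentinel for a native frame, and `isNativeMethod()` is defined in terms of that sentinel. A minimal sketch (the class/method names passed to the constructor are arbitrary placeholders):

```java
// Sketch of the documented StackTraceElement convention: a line number
// of -2 marks a native frame (see the JDK javadoc linked above [1]).
public class NativeFrameDemo {
    public static void main(String[] args) {
        // Construct elements the way the JVM does for native vs. Java frames.
        StackTraceElement nativeFrame =
                new StackTraceElement("Foo", "nativeCall", "Foo.java", -2);
        StackTraceElement javaFrame =
                new StackTraceElement("Foo", "javaCall", "Foo.java", 42);

        // isNativeMethod() reports true exactly for the -2 sentinel.
        System.out.println(nativeFrame.isNativeMethod()); // true
        System.out.println(javaFrame.isNativeMethod());   // false
    }
}
```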

> Support for CPU FlameGraphs in web UI
> -
>
> Key: FLINK-13550
> URL: https://issues.apache.org/jira/browse/FLINK-13550
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: David Morávek
>Assignee: Alexander Fedulov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
> Attachments: image-2021-11-23-13-36-03-269.png
>
>
> For better insight into a running job, it would be useful to have the 
> ability to render a CPU flame graph for a particular job vertex.
> Flink already has a stack-trace sampling mechanism in place, so this should 
> be straightforward to implement.
> This could be done by implementing a new REST API endpoint, which would 
> sample the stack trace the same way the current BackPressureTracker does, only 
> with a different sampling rate and sampling duration.
> [Here|https://www.youtube.com/watch?v=GUNDehj9z9o] is a short demo of the 
> feature.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17842: [FLINK-24966] [docs] Fix spelling errors in the project

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17842:
URL: https://github.com/apache/flink/pull/17842#issuecomment-974310249


   
   ## CI report:
   
   * 007d765a862fc6c288d08516cc06640f4e6e85ea Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26891)
 
   * a3c7861bdf001aec700269cbf344d849f2fc4087 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17822: Release 1.14 kafka3.0 bug

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17822:
URL: https://github.com/apache/flink/pull/17822#issuecomment-971696959


   
   ## CI report:
   
   * 4315c1be1f94367058c85be82e89d1bd623c63a7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26885)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17875: [BP-1.14][FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17875:
URL: https://github.com/apache/flink/pull/17875#issuecomment-976151167


   
   ## CI report:
   
   * 3c8b9142f5707820507d61fc71784f7ed4bc07f1 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26882)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] slinkydeveloper commented on a change in pull request #17811: [FLINK-24754] Implement CHAR/VARCHAR length validation for sinks

2021-11-22 Thread GitBox


slinkydeveloper commented on a change in pull request #17811:
URL: https://github.com/apache/flink/pull/17811#discussion_r754861749



##
File path: 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/runtime/operators/sink/ConstraintEnforcer.java
##
@@ -188,14 +201,13 @@ public void processElement(StreamRecord<RowData> element) throws Exception {
 for (int i = 0; i < charFieldIndices.length; i++) {
     final int fieldIdx = charFieldIndices[i];
     final int precision = charFieldPrecisions[i];
-    final String stringValue = rowData.getString(fieldIdx).toString();
+    final BinaryStringData stringData = (BinaryStringData) rowData.getString(fieldIdx);
 
-    if (stringValue.length() > precision) {
+    if (stringData.getJavaObject().length() > precision) {

Review comment:
   I think invoking `stringData.getJavaObject()` is not safe if you don't 
invoke `stringData.ensureMaterialized()` first, looking at the rest of the code 
in `BinaryStringData`.
   
   Perhaps to get the string length you need either `getSizeInBytes()` or 
`numChars()`? `BinaryStringDataUtil#substringSQL` uses the latter.
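The choice between the two suggested accessors matters for CHAR/VARCHAR precision checks: a byte-based size diverges from the character count as soon as the value contains multi-byte UTF-8 characters. A plain-JDK sketch of that difference (the Flink methods `getSizeInBytes()`/`numChars()` are only paralleled here, not called):

```java
import java.nio.charset.StandardCharsets;

// Plain-JDK illustration of why a byte-based length (analogous to
// BinaryStringData#getSizeInBytes) and a character count (analogous to
// #numChars) disagree for multi-byte UTF-8 data when enforcing a
// CHAR/VARCHAR precision.
public class CharPrecisionDemo {

    // Character-based check: counts UTF-16 code units, like String#length.
    static boolean exceedsByChars(String value, int precision) {
        return value.length() > precision;
    }

    // Byte-based check: counts UTF-8 bytes, which overshoots for non-ASCII.
    static boolean exceedsByBytes(String value, int precision) {
        return value.getBytes(StandardCharsets.UTF_8).length > precision;
    }

    public static void main(String[] args) {
        String accented = "h\u00e9llo"; // 5 characters, 6 UTF-8 bytes
        System.out.println(exceedsByChars(accented, 5)); // false: fits CHAR(5)
        System.out.println(exceedsByBytes(accented, 5)); // true: byte count misleads
    }
}
```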








[GitHub] [flink] flinkbot edited a comment on pull request #17670: [FLINK-24760][docs] Update user document for batch window tvf

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17670:
URL: https://github.com/apache/flink/pull/17670#issuecomment-960539433


   
   ## CI report:
   
   * 9f14fb2c282614088363960f23af3d6c5770ff15 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26814)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17876: [FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17876:
URL: https://github.com/apache/flink/pull/17876#issuecomment-976152397


   
   ## CI report:
   
   * 9b4fa70d5e2f6a31bbb7ad1b84fb8334ac24a46d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26883)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #17842: [FLINK-24966] [docs] Fix spelling errors in the project

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17842:
URL: https://github.com/apache/flink/pull/17842#issuecomment-974310249


   
   ## CI report:
   
   * c847ce6f383ebe2588f1ca020b0865ba226d1dd5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26771)
 
   * 007d765a862fc6c288d08516cc06640f4e6e85ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26891)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] YesOrNo828 commented on a change in pull request #17831: [FLINK-15825][Table SQL/API] Add renameDatabase() to Catalog

2021-11-22 Thread GitBox


YesOrNo828 commented on a change in pull request #17831:
URL: https://github.com/apache/flink/pull/17831#discussion_r754853810



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/HiveCatalog.java
##
@@ -398,6 +398,34 @@ public void alterDatabase(
 }
 }
 
+@Override
+public void renameDatabase(
+        String databaseName, String newDatabaseName, boolean ignoreIfNotExists)
+        throws DatabaseNotExistException, DatabaseAlreadyExistException, CatalogException {
+    checkArgument(
+            !isNullOrWhitespaceOnly(databaseName), "databaseName cannot be null or empty");
+    checkArgument(
+            !isNullOrWhitespaceOnly(newDatabaseName),
+            "newDatabaseName cannot be null or empty");
+
+    try {
+        if (databaseExists(databaseName)) {
+            if (databaseExists(newDatabaseName)) {
+                throw new DatabaseAlreadyExistException(getName(), newDatabaseName);
+            } else {
+                Database database = getHiveDatabase(databaseName);
+                database.setName(newDatabaseName);
+                client.alterDatabase(databaseName, database);

Review comment:
   Hi @shenzhu, according to the source code of 
[hive-2.1.1](https://github.com/apache/hive/blob/rel/release-2.1.1/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java#L766),
 the `alterDatabase` method only allows changing the database's parameters or 
owner.
   Maybe you can consider dropping the database first and then creating a new 
one.
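The drop-and-recreate workaround suggested above can be pictured with a toy in-memory catalog (a hypothetical `InMemoryCatalog`, not a Flink or Hive class; the existence checks mirror the PR's logic):

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory catalog sketching the suggested workaround: when the
// backing store cannot rename a database in place, remove it under the
// old name and re-register its contents under the new one.
public class InMemoryCatalog {

    // database name -> database properties
    private final Map<String, Map<String, String>> databases = new HashMap<>();

    public void createDatabase(String name, Map<String, String> properties) {
        databases.put(name, properties);
    }

    public boolean databaseExists(String name) {
        return databases.containsKey(name);
    }

    public void renameDatabase(String name, String newName) {
        if (!databaseExists(name)) {
            throw new IllegalArgumentException("Database " + name + " does not exist");
        }
        if (databaseExists(newName)) {
            throw new IllegalStateException("Database " + newName + " already exists");
        }
        // "Drop" first, then recreate under the new name.
        Map<String, String> properties = databases.remove(name);
        databases.put(newName, properties);
    }
}
```

Against a real metastore the remove/put pair would become a drop-database plus create-database call, and tables would need to be migrated as well, which is why this is a design decision rather than a drop-in fix.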








[GitHub] [flink] flinkbot edited a comment on pull request #17842: [FLINK-24966] [docs] Fix spelling errors in the project

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17842:
URL: https://github.com/apache/flink/pull/17842#issuecomment-974310249


   
   ## CI report:
   
   * c847ce6f383ebe2588f1ca020b0865ba226d1dd5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26771)
 
   * 007d765a862fc6c288d08516cc06640f4e6e85ea UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17792: [FLINK-24763][fs-connector] LimitableReader should swallow exception when reached limit

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17792:
URL: https://github.com/apache/flink/pull/17792#issuecomment-968524862


   
   ## CI report:
   
   * eca2f327e5f0f82afd0ae6e89ce04545b7d1cab4 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26879)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] gmzz123 edited a comment on pull request #17843: [FLINK-24879][core] All ReducingStateDescriptor constructor should check reduceFunction is not a richFunction.

2021-11-22 Thread GitBox


gmzz123 edited a comment on pull request #17843:
URL: https://github.com/apache/flink/pull/17843#issuecomment-976213242


   @azagrebin @StephanEwen 
   Could you help review it? Thanks!
   I have also created [FLINK-24994](https://issues.apache.org/jira/projects/FLINK/issues/FLINK-24994) about this issue.






[GitHub] [flink] gmzz123 commented on pull request #17843: [FLINK-24879][core] All ReducingStateDescriptor constructor should check reduceFunction is not a richFunction.

2021-11-22 Thread GitBox


gmzz123 commented on pull request #17843:
URL: https://github.com/apache/flink/pull/17843#issuecomment-976213242


   @azagrebin @Stephan Ewen 
   Could you help review it? Thanks!
   I have also created [FLINK-24994](https://issues.apache.org/jira/projects/FLINK/issues/FLINK-24994) about this issue.






[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to speify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26887)
 
   * 6514ef1e95b85e57ecef428241ed005266f12167 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26890)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to speify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26887)
 
   * 6514ef1e95b85e57ecef428241ed005266f12167 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] gmzz123 commented on pull request #17838: [FLINK-24915] [DataStream] fix StreamElementSerializer#deserialize(reuse, source) forgets to handle tag == TAG_STREAM_STATUS.

2021-11-22 Thread GitBox


gmzz123 commented on pull request #17838:
URL: https://github.com/apache/flink/pull/17838#issuecomment-976209663


   @AHeise 
   Could you help review it? Thanks!






[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to speify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26887)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to speify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * b87090dace76f1298934c25b6fdb892853376bb1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26868)
 
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26887)
 
   * 6514ef1e95b85e57ecef428241ed005266f12167 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-ml] lindong28 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-11-22 Thread GitBox


lindong28 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r754839066



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/HasL2.java
##
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.param;
+
+import org.apache.flink.ml.param.DoubleParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+import org.apache.flink.ml.param.WithParams;
+
+/** Interface for the shared L2 regularization param. */
+public interface HasL2<T> extends WithParams<T> {

Review comment:
   Ideally we should make this parameter future-proof.
   
   According to the scikit-learn doc [1], the regularization can be one of 
`l1`, `l2`, or `elasticnet`, and scikit-learn supports all of these choices.
   
   Though Spark provides only the `HasElasticNetParam` without explicit `l1` or 
`l2` choices, the parameter doc suggests that `l1` or `l2` regularization is 
effectively used if user sets the parameter value to be `1` or `2`.
   
   So both scikit-learn and Spark support all three modes. I guess we also want 
to be able to support these three modes in Flink ML, even if we support only 
one for now.
   
   If we add `HasL2` here, how do we expect users to specify `L1` and 
`elasticnet` mode in the future? Should we use a double-valued 
`HasElasticNetParam` like Spark, or use a string-valued `HasPenalty` similar to 
Scikit-learn?
   
   [1] 
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
   
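One way to picture the string-valued `HasPenalty` option floated above: a parameter restricted to the scikit-learn penalty names, validated at set time. These are hypothetical names; the sketch does not use the actual Flink ML `Param` classes:

```java
import java.util.Set;

// Hypothetical sketch of a string-valued penalty parameter covering all
// three regularization modes, modeled loosely on scikit-learn's `penalty`
// option rather than on the real Flink ML Param API.
public class PenaltyParam {

    private static final Set<String> VALID = Set.of("l1", "l2", "elasticnet", "none");

    private String value = "l2"; // default mirrors scikit-learn's default

    public void set(String penalty) {
        if (!VALID.contains(penalty)) {
            throw new IllegalArgumentException("Unsupported penalty: " + penalty);
        }
        value = penalty;
    }

    public String get() {
        return value;
    }
}
```

A string-valued parameter like this leaves room to add modes later without a breaking API change, which is the future-proofing concern raised in the comment.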








[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to speify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * b87090dace76f1298934c25b6fdb892853376bb1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26868)
 
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26887)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to speify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * b87090dace76f1298934c25b6fdb892853376bb1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26868)
 
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26887)
 
   * 0c9ce198380616a92a8b129ca2831b73e8988515 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-ml] lindong28 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-11-22 Thread GitBox


lindong28 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r754833974



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/linear/HasPredictionDetailCol.java
##
@@ -0,0 +1,39 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.param.linear;
+
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.StringParam;
+import org.apache.flink.ml.param.WithParams;
+
+/** Interface for the shared prediction detail param. */
+public interface HasPredictionDetailCol<T> extends WithParams<T> {

Review comment:
   I see. Since the semantics of this column is the probability of the 
prediction result, would it be more intuitive to use `HasProbabilityCol` here? 
The word `detail` is much broader than `probability` and does not convey much 
information about what is in this column.
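For context on what a probability column carries: in logistic regression the reported probability is the sigmoid of the linear score, which a plain-Java sketch makes concrete (illustrative only; this is not the Flink ML implementation):

```java
// Illustrative computation of the probability a logistic regression model
// would emit alongside a prediction: sigmoid(w . x). Not Flink ML code.
public class LogisticProbability {

    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    static double probability(double[] weights, double[] features) {
        double score = 0.0;
        for (int i = 0; i < weights.length; i++) {
            score += weights[i] * features[i];
        }
        return sigmoid(score);
    }

    public static void main(String[] args) {
        double[] w = {1.0, -0.5};
        double[] x = {0.0, 0.0};
        // A zero score sits exactly on the decision boundary.
        System.out.println(probability(w, x)); // 0.5
    }
}
```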








[GitHub] [flink-ml] zhipeng93 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-11-22 Thread GitBox


zhipeng93 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r754834016



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/classification/linear/LogisticRegressionModel.java
##
@@ -0,0 +1,220 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.classification.linear;
+
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.functions.RichMapFunction;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.connector.source.Source;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.connector.file.sink.FileSink;
+import org.apache.flink.connector.file.src.FileSource;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.ml.api.core.Model;
+import 
org.apache.flink.ml.classification.linear.LogisticRegressionModelData.LogisticRegressionModelDataEncoder;
+import 
org.apache.flink.ml.classification.linear.LogisticRegressionModelData.LogisticRegressionModelDataStreamFormat;
+import org.apache.flink.ml.common.broadcast.BroadcastUtils;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import 
org.apache.flink.streaming.api.functions.sink.filesystem.bucketassigners.BasePathBucketAssigner;
+import 
org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;
+import org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.types.Row;
+
+import java.io.IOException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Map;
+
+/** This class implements {@link Model} for {@link LogisticRegression}. */
+public class LogisticRegressionModel
+implements Model<LogisticRegressionModel>,
+LogisticRegressionModelParams<LogisticRegressionModel> {
+
+private Map<Param<?>, Object> paramMap;
+
+private Table model;
+
+public LogisticRegressionModel(Map<Param<?>, Object> paramMap) {
+this.paramMap = paramMap;
+ParamUtils.initializeMapWithDefaultValues(this.paramMap, this);
+}
+
+public LogisticRegressionModel() {
+this(new HashMap<>());
+}
+
+@Override
+public Map<Param<?>, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+StreamTableEnvironment tEnv =
+(StreamTableEnvironment) ((TableImpl) 
model).getTableEnvironment();
+String dataPath = ReadWriteUtils.getDataPath(path);
+FileSink<LogisticRegressionModelData> sink =
+FileSink.forRowFormat(new Path(dataPath), new 
LogisticRegressionModelDataEncoder())
+.withRollingPolicy(OnCheckpointRollingPolicy.build())
+.withBucketAssigner(new BasePathBucketAssigner<>())
+.build();
+ReadWriteUtils.saveMetadata(this, path);
+tEnv.toDataStream(model)
+.map(x -> (LogisticRegressionModelData) x.getField(0))
+.sinkTo(sink)
+.setParallelism(1);
+}
+
+public static LogisticRegressionModel load(StreamExecutionEnvironment env, 
String path)
+throws IOException {
+StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
+Source<LogisticRegressionModelData, ?, ?> source =
+FileSource.forRecordStreamFormat(
+new LogisticRegressionModelDataStreamFormat(),
+ReadWriteUtils.getDataPaths(path))
+  

[GitHub] [flink] flinkbot commented on pull request #17877: [FLINK-24495][python][tests] Upgrade virtualenv version

2021-11-22 Thread GitBox


flinkbot commented on pull request #17877:
URL: https://github.com/apache/flink/pull/17877#issuecomment-976201704


   
   ## CI report:
   
   * 78eee6205acd3e09c5ba404c4880dd9aee9c813a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to specify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * b87090dace76f1298934c25b6fdb892853376bb1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26868)
 
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26887)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17877: [FLINK-24495][python][tests] Upgrade virtualenv version

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17877:
URL: https://github.com/apache/flink/pull/17877#issuecomment-976201704


   
   ## CI report:
   
   * 78eee6205acd3e09c5ba404c4880dd9aee9c813a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26889)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to specify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * b87090dace76f1298934c25b6fdb892853376bb1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26868)
 
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26887)
 
   * 0c9ce198380616a92a8b129ca2831b73e8988515 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #17877: [FLINK-24495][python][tests] Upgrade virtualenv version

2021-11-22 Thread GitBox


flinkbot commented on pull request #17877:
URL: https://github.com/apache/flink/pull/17877#issuecomment-976202050


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 78eee6205acd3e09c5ba404c4880dd9aee9c813a (Tue Nov 23 
06:40:29 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   










[jira] [Updated] (FLINK-24495) Python installdeps hangs

2021-11-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24495:
---
Labels: pull-request-available test-stability  (was: test-stability)

> Python installdeps hangs
> 
>
> Key: FLINK-24495
> URL: https://issues.apache.org/jira/browse/FLINK-24495
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.0
>Reporter: Xintong Song
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.15.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24922=logs=821b528f-1eed-5598-a3b4-7f748b13f261=6bb545dd-772d-5d8c-f258-f5085fba3295=23587
> {code}
> Oct 10 02:30:01 py38-cython create: /__w/1/s/flink-python/.tox/py38-cython
> Oct 10 02:30:04 py38-cython installdeps: pytest, apache-beam==2.27.0, 
> cython==0.29.16, grpcio>=1.29.0,<2, grpcio-tools>=1.3.5,<=1.14.2, 
> apache-flink-libraries
> Oct 10 02:45:22 
> ==
> Oct 10 02:45:22 Process produced no output for 900 seconds.
> Oct 10 02:45:22 
> ==
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] HuangXingBo opened a new pull request #17877: [FLINK-24495][python][tests] Upgrade virtualenv version

2021-11-22 Thread GitBox


HuangXingBo opened a new pull request #17877:
URL: https://github.com/apache/flink/pull/17877


   ## What is the purpose of the change
   
   *This pull request will upgrade the `virtualenv` version.*
   
   
   ## Brief change log
   
 - *Upgrade `virtualenv` version to `20.10.0`*
   
   
   ## Verifying this change
   
   
   This change added tests and can be verified as follows:
   
 - *The original tests.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   










[GitHub] [flink-ml] lindong28 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-11-22 Thread GitBox


lindong28 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r754831435



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/HasEpsilon.java
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.param;
+
+import org.apache.flink.ml.param.DoubleParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+import org.apache.flink.ml.param.WithParams;
+
+/** Interface for the shared epsilon param. */
+public interface HasEpsilon<T> extends WithParams<T> {

Review comment:
   I have similar thoughts as @yunfengzhou-hub. Here are my findings that 
may be useful to consider here.
   
   Spark and Scikit-learn [1] use HasTol for this purpose. Logistic Regression 
wiki [2] mentions tolerance instead of epsilon. I searched on Google for words 
that are commonly used for determining the "termination criteria". It looks 
like tolerance is much more popular than epsilon in the machine learning domain 
(e.g. [3]).
   
   [1] 
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
   [2] https://en.wikipedia.org/wiki/Logistic_regression
   [3] 
https://support.minitab.com/en-us/minitab/18/help-and-how-to/modeling-statistics/regression/how-to/nonlinear-regression/interpret-the-results/all-statistics-and-graphs/methods-and-starting-values/
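If the param were renamed along those lines, a minimal self-contained sketch of the shape could look like the following. Note this is only illustrative: the `Param` class below is a stub standing in for flink-ml's real `Param`/`DoubleParam` types, and `HasTolSketch`/`HasTol` are hypothetical names.

```java
// Self-contained sketch; the real flink-ml Param/WithParams types are
// stubbed out here, so only the naming convention is the point.
public class HasTolSketch {

    // Stub standing in for flink-ml's Param<T>: a name, a description,
    // and a default value.
    static final class Param<T> {
        final String name;
        final String description;
        final T defaultValue;

        Param(String name, String description, T defaultValue) {
            this.name = name;
            this.description = description;
            this.defaultValue = defaultValue;
        }
    }

    // Hypothetical shared param interface mirroring Spark's HasTol,
    // i.e. "tol" instead of "epsilon".
    interface HasTol {
        Param<Double> TOL =
                new Param<>(
                        "tol",
                        "Convergence tolerance for iterative algorithms.",
                        0.1);
    }

    public static void main(String[] args) {
        System.out.println(HasTol.TOL.name + " default=" + HasTol.TOL.defaultValue);
    }
}
```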








[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to specify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * b87090dace76f1298934c25b6fdb892853376bb1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26868)
 
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26887)
 
   * 19e582467aae47a351d2422e63f884fa15065355 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17670: [FLINK-24760][docs] Update user document for batch window tvf

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17670:
URL: https://github.com/apache/flink/pull/17670#issuecomment-960539433


   
   ## CI report:
   
   * 9f14fb2c282614088363960f23af3d6c5770ff15 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26814)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-ml] lindong28 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-11-22 Thread GitBox


lindong28 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r754825324



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/HasEpsilon.java
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.param;
+
+import org.apache.flink.ml.param.DoubleParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+import org.apache.flink.ml.param.WithParams;
+
+/** Interface for the shared epsilon param. */
+public interface HasEpsilon<T> extends WithParams<T> {
+
+Param<Double> EPSILON =
+new DoubleParam(
+"epsilon",
+"Convergence tolerance for iterative algorithms. The 
default value is 0.1",
+0.1,

Review comment:
   I have similar thoughts as @yunfengzhou-hub. Here are my findings that 
may be useful to consider here.
   
   Spark and Scikit-learn [1] use HasTol for this purpose. Logistic Regression 
wiki [2] mentions tolerance instead of epsilon. I searched on Google for words 
that are commonly used for determining the "termination criteria". It looks 
like tolerance is much more popular than epsilon in the machine learning domain 
(e.g. [3]).
   
   [1] 
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
   [2] https://en.wikipedia.org/wiki/Logistic_regression
   [3] 
https://support.minitab.com/en-us/minitab/18/help-and-how-to/modeling-statistics/regression/how-to/nonlinear-regression/interpret-the-results/all-statistics-and-graphs/methods-and-starting-values/








[jira] [Commented] (FLINK-24915) StreamElementSerializer#deserialize(StreamElement reuse, DataInputView source) forgets to handle tag == TAG_STREAM_STATUS

2021-11-22 Thread bx123 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447797#comment-17447797
 ] 

bx123 commented on FLINK-24915:
---

[~arvid] 

[#FLINK-5017] introduced StreamStatus support for this serializer, but it does 
not handle StreamStatus when the object is reused in 
deserialize(StreamElement reuse, DataInputView source).

[#FLINK-1137] states that object reuse is not supported in the current Flink 
version, so I guess this may be why the problem does not occur in production. 
Is that right? FLINK-1137 was implemented in 2015, and I am not sure about the 
current status of object reuse.

Can you help me with this issue? Thanks!
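For illustration, here is a stubbed sketch of the tag dispatch that a reuse-based deserialize overload would need. The tag values, names, and return strings are made up for this sketch and are not the real StreamElementSerializer code; the point is only that the reuse path must cover every tag, including the stream-status one.

```java
public class ReuseDispatchSketch {
    // Hypothetical tag values; the real serializer defines its own constants.
    static final int TAG_REC_WITH_TIMESTAMP = 0;
    static final int TAG_REC_WITHOUT_TIMESTAMP = 1;
    static final int TAG_WATERMARK = 2;
    static final int TAG_STREAM_STATUS = 3;
    static final int TAG_LATENCY_MARKER = 4;

    // Maps a tag read from the input view to the kind of element the
    // reuse overload must reconstruct. A missing case here is exactly
    // the kind of gap the issue describes.
    static String dispatch(int tag) {
        switch (tag) {
            case TAG_REC_WITH_TIMESTAMP:
            case TAG_REC_WITHOUT_TIMESTAMP:
                return "record";
            case TAG_WATERMARK:
                return "watermark";
            case TAG_STREAM_STATUS:
                return "streamStatus"; // the branch the issue says is missing
            case TAG_LATENCY_MARKER:
                return "latencyMarker";
            default:
                throw new IllegalStateException("Corrupt stream, unknown tag: " + tag);
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch(TAG_STREAM_STATUS));
    }
}
```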

> StreamElementSerializer#deserialize(StreamElement reuse, DataInputView 
> source) forgets to handle tag == TAG_STREAM_STATUS
> -
>
> Key: FLINK-24915
> URL: https://issues.apache.org/jira/browse/FLINK-24915
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Reporter: bx123
>Priority: Minor
>  Labels: pull-request-available
>
> When a StreamElement is reused, we also have to handle tag == 
> TAG_STREAM_STATUS, as the path with object reuse disabled does. 
> See also Flink-5017.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)



[GitHub] [flink] beyond1920 commented on pull request #17670: [FLINK-24760][docs] Update user document for batch window tvf

2021-11-22 Thread GitBox


beyond1920 commented on pull request #17670:
URL: https://github.com/apache/flink/pull/17670#issuecomment-976196981


   @flinkbot run azure






[GitHub] [flink] hililiwei commented on a change in pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to specify

2021-11-22 Thread GitBox


hililiwei commented on a change in pull request #17749:
URL: https://github.com/apache/flink/pull/17749#discussion_r754827593



##
File path: 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/DefaultPartTimeExtractor.java
##
@@ -77,29 +79,49 @@
 .toFormatter()
 .withResolverStyle(ResolverStyle.LENIENT);
 
-@Nullable private final String pattern;
+@Nullable private final String extractorPattern;
+@Nullable private final String formatterPattern;
 
-public DefaultPartTimeExtractor(@Nullable String pattern) {
-this.pattern = pattern;
+public DefaultPartTimeExtractor(
+@Nullable String extractorPattern, @Nullable String 
formatterPattern) {
+this.extractorPattern = extractorPattern;
+this.formatterPattern = formatterPattern;
 }
 
 @Override
 public LocalDateTime extract(List<String> partitionKeys, List<String> partitionValues) {
 String timestampString;
-if (pattern == null) {
+if (extractorPattern == null) {
 timestampString = partitionValues.get(0);
 } else {
-timestampString = pattern;
+timestampString = extractorPattern;
 for (int i = 0; i < partitionKeys.size(); i++) {
 timestampString =
 timestampString.replaceAll(
 "\\$" + partitionKeys.get(i), 
partitionValues.get(i));
 }
 }
-return toLocalDateTime(timestampString);
+return toLocalDateTime(timestampString, this.formatterPattern);
 }
 
-public static LocalDateTime toLocalDateTime(String timestampString) {
+public static LocalDateTime toLocalDateTime(
+String timestampString, @Nullable String formatterPattern) {
+
+if (formatterPattern == null) {
+return 
DefaultPartTimeExtractor.toLocalDateTimeDefault(timestampString);
+}

Review comment:
   ```
   public static LocalDateTime toLocalDateTimeDefault(String 
timestampString) {
   try {
   return LocalDateTime.parse(timestampString, TIMESTAMP_FORMATTER);
   } catch (DateTimeParseException e) {
   return LocalDateTime.of(
   LocalDate.parse(timestampString, DATE_FORMATTER), 
LocalTime.MIDNIGHT);
   }
   }
   ```
   Yes, I agree with you. I've tried to delete it, but if we remove 
toLocalDateTimeDefault, do we have to take out the exception handling 
separately?
   The toLocalDateTime method might look like this:
   
   ```
   public static LocalDateTime toLocalDateTime(
   String timestampString, @Nullable String formatterPattern) {
   
   DateTimeFormatter dateTimeFormatter =
   formatterPattern == null
   ? TIMESTAMP_FORMATTER
   : DateTimeFormatter.ofPattern(formatterPattern, 
Locale.ROOT);
   try {
   return LocalDateTime.parse(timestampString, dateTimeFormatter);
   } catch (DateTimeParseException e) {
   if (formatterPattern == null) {
   dateTimeFormatter = DATE_FORMATTER;
   }
   return LocalDateTime.of(
   LocalDate.parse(timestampString, dateTimeFormatter), 
LocalTime.MIDNIGHT);
   }
   }
   ```
   Personally, I feel the catch block is a bit awkward. What do you think? Or 
do you have a better approach? Thanks.
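One alternative worth considering for the catch block: fold the date-only fallback into a single formatter with an optional time section, so no exception-driven control flow is needed. This is just a sketch using plain java.time, not the actual Flink extractor; the custom formatter-pattern branch is simplified away here, and the class/field names are made up.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.ChronoField;
import java.util.Locale;

public class PartTimeParseSketch {

    // One formatter that accepts both "yyyy-MM-dd" and
    // "yyyy-MM-dd HH:mm:ss": the time section is optional, and any
    // missing time fields default to midnight, so no catch is needed.
    static final DateTimeFormatter LENIENT_TS =
            new DateTimeFormatterBuilder()
                    .appendPattern("yyyy-MM-dd")
                    .optionalStart()
                    .appendLiteral(' ')
                    .appendPattern("HH:mm:ss")
                    .optionalEnd()
                    .parseDefaulting(ChronoField.HOUR_OF_DAY, 0)
                    .parseDefaulting(ChronoField.MINUTE_OF_HOUR, 0)
                    .parseDefaulting(ChronoField.SECOND_OF_MINUTE, 0)
                    .toFormatter(Locale.ROOT);

    static LocalDateTime toLocalDateTime(String timestampString) {
        return LocalDateTime.parse(timestampString, LENIENT_TS);
    }

    public static void main(String[] args) {
        System.out.println(toLocalDateTime("2021-11-22"));          // 2021-11-22T00:00
        System.out.println(toLocalDateTime("2021-11-22 10:15:30")); // 2021-11-22T10:15:30
    }
}
```

The same idea could be applied to a user-supplied formatter pattern by appending the optional section only when no explicit pattern is configured.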








[GitHub] [flink] flinkbot edited a comment on pull request #17838: [FLINK-24915] [DataStream] fix StreamElementSerializer#deserialize(reuse, source) forgets to handle tag == TAG_STREAM_STATUS.

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17838:
URL: https://github.com/apache/flink/pull/17838#issuecomment-974023420


   
   ## CI report:
   
   * c8204684d4268cad534759a6c7bb487e17fdbb05 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26836)
 
   * 2e59912934125dc7935502b9fa8c18883feae7d0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26888)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17838: [FLINK-24915] [DataStream] fix StreamElementSerializer#deserialize(reuse, source) forgets to handle tag == TAG_STREAM_STATUS.

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17838:
URL: https://github.com/apache/flink/pull/17838#issuecomment-974023420


   
   ## CI report:
   
   * c8204684d4268cad534759a6c7bb487e17fdbb05 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26836)
 
   * 2e59912934125dc7935502b9fa8c18883feae7d0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink-ml] lindong28 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-11-22 Thread GitBox


lindong28 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r754825324



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/HasEpsilon.java
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.param;
+
+import org.apache.flink.ml.param.DoubleParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+import org.apache.flink.ml.param.WithParams;
+
+/** Interface for the shared epsilon param. */
+public interface HasEpsilon<T> extends WithParams<T> {
+
+    Param<Double> EPSILON =
+            new DoubleParam(
+                    "epsilon",
+                    "Convergence tolerance for iterative algorithms. The default value is 0.1",
+                    0.1,

Review comment:
   I have similar thoughts as @yunfengzhou-hub. Here are my findings that 
may be useful to consider here.
   
   Spark and Scikit-learn [1] use HasTol for this purpose. Logistic Regression 
wiki [2] mentions tolerance instead of epsilon. I searched on Google for words 
that are commonly used for determining the "termination criteria". It looks 
like tolerance is much more popular than epsilon in the machine learning domain 
(e.g. [3]).
   
   [1] 
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
   [2] https://en.wikipedia.org/wiki/Logistic_regression
   [3] 
https://support.minitab.com/en-us/minitab/18/help-and-how-to/modeling-statistics/regression/how-to/nonlinear-regression/interpret-the-results/all-statistics-and-graphs/methods-and-starting-values/








[GitHub] [flink] flinkbot edited a comment on pull request #17874: [FLINK-24046] Refactor the EmbeddedRocksDBStateBackend configuration logic

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17874:
URL: https://github.com/apache/flink/pull/17874#issuecomment-976109395


   
   ## CI report:
   
   * 95f6020ca7df973e5a50a0eab36fa2bd1a33878c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26880)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #17823: [FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17823:
URL: https://github.com/apache/flink/pull/17823#issuecomment-972474977


   
   ## CI report:
   
   * fe017595cf990c90cf53deea9c11288e77c7565a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26881)
 
   
   
   






[GitHub] [flink-ml] lindong28 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-11-22 Thread GitBox


lindong28 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r754820934



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/HasEpsilon.java
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.param;
+
+import org.apache.flink.ml.param.DoubleParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+import org.apache.flink.ml.param.WithParams;
+
+/** Interface for the shared epsilon param. */
+public interface HasEpsilon<T> extends WithParams<T> {
+
+    Param<Double> EPSILON =

Review comment:
   Spark and Scikit-learn [1] use HasTol for this purpose. Logistic 
Regression wiki [2] mentions `tolerance` instead of `epsilon`. I searched on 
Google for words that are commonly used for determining the "termination 
criteria". It looks like `tolerance` is much more popular than epsilon in the 
machine learning domain (e.g. [3]).
   
   How about we use the same `HasTol` as Spark?
   
   BTW, @yunfengzhou-hub asked a similar question in a previous comment. That 
comment was closed without reply. Can we wait for the confirmation from 
reviewers before resolving such comments?
   
   [1] 
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
   [2] https://en.wikipedia.org/wiki/Logistic_regression
   [3] 
https://support.minitab.com/en-us/minitab/18/help-and-how-to/modeling-statistics/regression/how-to/nonlinear-regression/interpret-the-results/all-statistics-and-graphs/methods-and-starting-values/
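If `HasTol` is adopted, the shared param could look like the sketch below. This is a hypothetical, self-contained approximation: `Param`, `DoubleParam`, and `WithParams` here are minimal stand-ins for the Flink ML classes of the same names, not the real API.

```java
// Minimal stand-ins for the Flink ML param classes, only to keep this
// sketch self-contained; the real classes live in org.apache.flink.ml.param.
class Param<T> {
    final String name;
    final String description;
    final T defaultValue;

    Param(String name, String description, T defaultValue) {
        this.name = name;
        this.description = description;
        this.defaultValue = defaultValue;
    }
}

class DoubleParam extends Param<Double> {
    DoubleParam(String name, String description, Double defaultValue) {
        super(name, description, defaultValue);
    }
}

interface WithParams<T> {}

/** Hypothetical HasTol interface, mirroring Spark's shared "tol" param name. */
interface HasTol<T> extends WithParams<T> {
    Param<Double> TOL =
            new DoubleParam(
                    "tol",
                    "Convergence tolerance for iterative algorithms. The default value is 0.1",
                    0.1);
}

public class HasTolSketch {
    public static void main(String[] args) {
        // Interface fields are implicitly public static final.
        System.out.println(HasTol.TOL.name + " -> " + HasTol.TOL.defaultValue);
    }
}
```

A concrete estimator would then simply extend `HasTol<T>` instead of `HasEpsilon<T>`, with no other API change.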

##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/HasFeaturesCol.java
##
@@ -27,7 +27,10 @@
 public interface HasFeaturesCol<T> extends WithParams<T> {
 Param<String> FEATURES_COL =
 new StringParam(
-"featuresCol", "Features column name.", "features", 
ParamValidators.notNull());
+"featuresCol",
+"Name of the features column name.",

Review comment:
   According to Google results (google `name of column or column name`), it 
seems that the original `...column name` is more widely used than `name of 
column...`.
   
   So it seems simpler to use the original `Features column name`?
   
   And the 2nd `name` seems to be redundant here.
   

##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/HasBatchSize.java
##
@@ -0,0 +1,40 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.param;
+
+import org.apache.flink.ml.param.IntParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+import org.apache.flink.ml.param.WithParams;
+
+/** Interface for the shared batchSize param. */
+public interface HasBatchSize<T> extends WithParams<T> {
+
+    Param<Integer> BATCH_SIZE =
+            new IntParam(
+                    "batchSize", "Batch size of training algorithms.", 100, ParamValidators.gt(0));

Review comment:
   How about setting the default value here to 32?
   
   As explained in [1], batch size is typically a power of 2, and according to 
[2], batchSize=32 could be a good starting point.
   
   
   [1] 
https://datascience.stackexchange.com/questions/20179/what-is-the-advantage-of-keeping-batch-size-a-power-of-2





[GitHub] [flink] YuvalItzchakov commented on a change in pull request #17845: [FLINK-24352] [flink-table-planner] Add null check for temporal table check on SqlSnapshot

2021-11-22 Thread GitBox


YuvalItzchakov commented on a change in pull request #17845:
URL: https://github.com/apache/flink/pull/17845#discussion_r754823553



##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/join/LookupJoinTest.scala
##
@@ -525,7 +525,21 @@ class LookupJoinTest(legacyTableSource: Boolean) extends 
TableTestBase with Seri
 verifyTranslationSuccess(sql)
   }
 
-  // 
==
+  @Test
+  def testJoinTemporalTableWithCTE(): Unit = {
+val sql =
+  """
+|WITH MyLookupTable AS (SELECT * FROM MyTable),
+|OtherLookupTable AS (SELECT * FROM LookupTable)
+|SELECT MyLookupTable.b FROM MyLookupTable
+|JOIN OtherLookupTable FOR SYSTEM_TIME AS OF MyLookupTable.proctime AS 
D
+|ON MyLookupTable.a = D.id AND D.age = 10
+  """.stripMargin
+
+verifyTranslationSuccess(sql)

Review comment:
   @tsreaper I can add an additional test that verifies the exec plan, as it 
was similar before and after my change (I originally tried using it in my test, 
but the `NullPointerException` did not reproduce with it).








[GitHub] [flink] YuvalItzchakov commented on a change in pull request #17845: [FLINK-24352] [flink-table-planner] Add null check for temporal table check on SqlSnapshot

2021-11-22 Thread GitBox


YuvalItzchakov commented on a change in pull request #17845:
URL: https://github.com/apache/flink/pull/17845#discussion_r754823553



##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/plan/stream/sql/join/LookupJoinTest.scala
##
@@ -525,7 +525,21 @@ class LookupJoinTest(legacyTableSource: Boolean) extends 
TableTestBase with Seri
 verifyTranslationSuccess(sql)
   }
 
-  // 
==
+  @Test
+  def testJoinTemporalTableWithCTE(): Unit = {
+val sql =
+  """
+|WITH MyLookupTable AS (SELECT * FROM MyTable),
+|OtherLookupTable AS (SELECT * FROM LookupTable)
+|SELECT MyLookupTable.b FROM MyLookupTable
+|JOIN OtherLookupTable FOR SYSTEM_TIME AS OF MyLookupTable.proctime AS 
D
+|ON MyLookupTable.a = D.id AND D.age = 10
+  """.stripMargin
+
+verifyTranslationSuccess(sql)

Review comment:
   @tsreaper I can add an additional test that verifies the exec plan, as it 
was similar before and after my change (I originally tried using it in my test, 
but the `NullPointerException` did not reproduce with it).








[jira] [Commented] (FLINK-24948) Special character in column names breaks JDBC statement parsing

2021-11-22 Thread Paul Lin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447789#comment-17447789
 ] 

Paul Lin commented on FLINK-24948:
--

Yes, I'm working on it. May I ask your opinion on the solution? Should we 
forbid column names with special characters or just fix the parsing of named 
parameters?
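If the choice falls on fixing the parsing, one possible direction — a minimal sketch with hypothetical helper names, assuming MySQL-style backtick quoting, not the actual connector code — is to quote every identifier during statement generation so special characters never reach the SQL parser unescaped:

```java
// Hypothetical sketch: quoting identifiers so column names containing dots
// or colons survive JDBC statement generation (MySQL backtick rules assumed).
public class QuoteIdentifierSketch {

    // Wrap an identifier in backticks, escaping embedded backticks by doubling.
    static String quoteIdentifier(String name) {
        return "`" + name.replace("`", "``") + "`";
    }

    // Build a parameterized INSERT with every identifier quoted.
    static String insertStatement(String table, String[] columns) {
        StringBuilder cols = new StringBuilder();
        StringBuilder params = new StringBuilder();
        for (int i = 0; i < columns.length; i++) {
            if (i > 0) {
                cols.append(", ");
                params.append(", ");
            }
            cols.append(quoteIdentifier(columns[i]));
            params.append("?");
        }
        return "INSERT INTO " + quoteIdentifier(table)
                + " (" + cols + ") VALUES (" + params + ")";
    }

    public static void main(String[] args) {
        // A dotted or colon-containing column name no longer breaks the SQL.
        System.out.println(insertStatement("t", new String[] {"a.b", "c:d"}));
        // -> INSERT INTO `t` (`a.b`, `c:d`) VALUES (?, ?)
    }
}
```

In a real fix the quote character would have to come from the dialect, since the identifier quote differs between databases.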

> Special character in column names breaks JDBC statement parsing
> ---
>
> Key: FLINK-24948
> URL: https://issues.apache.org/jira/browse/FLINK-24948
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.12.4
>Reporter: Paul Lin
>Assignee: Paul Lin
>Priority: Major
>
> Currently, the JDBC connector assumes column names respect Java identifier 
> naming restrictions, but databases that support JDBC may have different 
> naming restrictions. For example, MySQL allows dots and colons in column 
> names. In that case, JDBC connector would have trouble parsing the SQL.
> We could fix this by validating field names in `JdbcDmlOptions`. In addition, 
> it'd be good to clarify the naming restrictions of Flink SQL, so users and 
> connector developers would know the standard.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17813: [FLINK-24802][Table SQL/Planner] Improve cast ROW to STRING

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17813:
URL: https://github.com/apache/flink/pull/17813#issuecomment-970956527


   
   ## CI report:
   
   * d4a10bc3d83bb43c839ae7b54e8bd41a65b94925 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26877)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to speify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * b87090dace76f1298934c25b6fdb892853376bb1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26868)
 
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26887)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to speify DateTi

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17749:
URL: https://github.com/apache/flink/pull/17749#issuecomment-965029957


   
   ## CI report:
   
   * b87090dace76f1298934c25b6fdb892853376bb1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26868)
 
   * 2aa66ef4a13a091a4c6c1b5638e9ed557ab1cf6b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26878)
 
   * 5b63c4ec4e1204cba377c5012b8583e5ab24804b UNKNOWN
   
   
   






[jira] [Commented] (FLINK-12941) Translate "Amazon AWS Kinesis Streams Connector" page into Chinese

2021-11-22 Thread ZhuoYu Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447787#comment-17447787
 ] 

ZhuoYu Chen commented on FLINK-12941:
-

Hi [~jark], I am very interested in this and I would like to do some work for 
Flink. Can I help with this task?
Thank you

> Translate "Amazon AWS Kinesis Streams Connector" page into Chinese
> --
>
> Key: FLINK-12941
> URL: https://issues.apache.org/jira/browse/FLINK-12941
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Minor
>  Labels: auto-unassigned
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/kinesis.html"
>  into Chinese.
>  
> The doc located in "flink/docs/dev/connectors/kinesis.zh.md"





[GitHub] [flink] hililiwei commented on a change in pull request #17749: [FLINK-24758][Connectors / FileSystem] filesystem sink: add partitiontime-extractor.formatter-pattern to allow user to speify

2021-11-22 Thread GitBox


hililiwei commented on a change in pull request #17749:
URL: https://github.com/apache/flink/pull/17749#discussion_r754819067



##
File path: 
flink-table/flink-table-runtime/src/main/java/org/apache/flink/table/filesystem/FileSystemConnectorOptions.java
##
@@ -156,6 +156,22 @@
 .withDescription(
 "The extractor class for implement 
PartitionTimeExtractor interface.");
 
+public static final ConfigOption 
PARTITION_TIME_EXTRACTOR_TIMESTAMP_FORMATTER =
+key("partition.time-extractor.timestamp-formatter")
+.stringType()
+.noDefaultValue()

Review comment:
   The default value is broader than 'yyyy-MM-dd HH:mm:ss'.
   ```
   private static final DateTimeFormatter TIMESTAMP_FORMATTER =
   new DateTimeFormatterBuilder()
   .appendValue(YEAR, 1, 10, SignStyle.NORMAL)
   .appendLiteral('-')
   .appendValue(MONTH_OF_YEAR, 1, 2, SignStyle.NORMAL)
   .appendLiteral('-')
   .appendValue(DAY_OF_MONTH, 1, 2, SignStyle.NORMAL)
   .optionalStart()
   .appendLiteral(" ")
   .appendValue(HOUR_OF_DAY, 1, 2, SignStyle.NORMAL)
   .appendLiteral(':')
   .appendValue(MINUTE_OF_HOUR, 1, 2, SignStyle.NORMAL)
   .appendLiteral(':')
   .appendValue(SECOND_OF_MINUTE, 1, 2, SignStyle.NORMAL)
   .optionalStart()
   .appendFraction(ChronoField.NANO_OF_SECOND, 1, 9, true)
   .optionalEnd()
   .optionalEnd()
   .toFormatter()
   .withResolverStyle(ResolverStyle.LENIENT);
   ```
   
   For example, the default formatter above works for single-digit months, but 
'yyyy-MM-dd HH:mm:ss' does not; the same goes for milliseconds.
   Originally, I used 'yyyy-MM-dd HH:mm:ss' as the default and removed the 
TIMESTAMP_FORMATTER, but I found that this had compatibility issues.
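To make the contrast concrete, here is a small self-contained sketch (plain java.time, no Flink dependencies; fractional seconds omitted for brevity) showing the lenient builder accepting a single-digit month and hour that the fixed-width 'yyyy-MM-dd HH:mm:ss' pattern rejects:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.DateTimeParseException;
import java.time.format.SignStyle;

import static java.time.temporal.ChronoField.*;

public class TimestampFormatterDemo {
    public static void main(String[] args) {
        // Lenient formatter, condensed from the TIMESTAMP_FORMATTER above:
        // each field accepts 1 or 2 digits.
        DateTimeFormatter lenient =
                new DateTimeFormatterBuilder()
                        .appendValue(YEAR, 1, 10, SignStyle.NORMAL)
                        .appendLiteral('-')
                        .appendValue(MONTH_OF_YEAR, 1, 2, SignStyle.NORMAL)
                        .appendLiteral('-')
                        .appendValue(DAY_OF_MONTH, 1, 2, SignStyle.NORMAL)
                        .appendLiteral(' ')
                        .appendValue(HOUR_OF_DAY, 1, 2, SignStyle.NORMAL)
                        .appendLiteral(':')
                        .appendValue(MINUTE_OF_HOUR, 1, 2, SignStyle.NORMAL)
                        .appendLiteral(':')
                        .appendValue(SECOND_OF_MINUTE, 1, 2, SignStyle.NORMAL)
                        .toFormatter();

        String singleDigitMonth = "2021-1-05 8:00:00";

        // The lenient formatter accepts single-digit fields.
        LocalDateTime parsed = LocalDateTime.parse(singleDigitMonth, lenient);
        System.out.println("lenient: " + parsed);

        // The fixed-width pattern rejects the same input: MM and HH
        // each demand exactly two digits.
        DateTimeFormatter strict = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        try {
            LocalDateTime.parse(singleDigitMonth, strict);
            System.out.println("strict: parsed (unexpected)");
        } catch (DateTimeParseException e) {
            System.out.println("strict: rejected single-digit month");
        }
    }
}
```

This is why simply swapping the builder for a 'yyyy-MM-dd HH:mm:ss' default would break existing partition paths with single-digit components.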
   








[jira] [Commented] (FLINK-24696) Translate how to configure unaligned checkpoints into Chinese

2021-11-22 Thread ZhuoYu Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447786#comment-17447786
 ] 

ZhuoYu Chen commented on FLINK-24696:
-

Hi [~pnowojski] and Liebing Yu, I am very interested in this and I would like 
to do some work for Flink. Can I help with this task?
Thank you

> Translate how to configure unaligned checkpoints into Chinese
> -
>
> Key: FLINK-24696
> URL: https://issues.apache.org/jira/browse/FLINK-24696
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation
>Affects Versions: 1.15.0, 1.14.1
>Reporter: Piotr Nowojski
>Priority: Major
> Fix For: 1.15.0
>
>
> As part of FLINK-24695 
> {{docs/content/docs/ops/state/checkpointing_under_backpressure.md}} and 
> {{docs/content/docs/dev/datastream/fault-tolerance/checkpointing.md}} were 
> modified. Those modifications should be translated into Chinese





[jira] [Commented] (FLINK-13550) Support for CPU FlameGraphs in web UI

2021-11-22 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447783#comment-17447783
 ] 

jackylau commented on FLINK-13550:
--

Hi [~arvid] [~dmvk], I have a question about 
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/ops/debugging/flame_graphs/

What is the meaning of the digits? Why can it be a negative number?

!image-2021-11-23-13-36-03-269.png!

> Support for CPU FlameGraphs in web UI
> -
>
> Key: FLINK-13550
> URL: https://issues.apache.org/jira/browse/FLINK-13550
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: David Morávek
>Assignee: Alexander Fedulov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
> Attachments: image-2021-11-23-13-36-03-269.png
>
>
> For better insight into a running job, it would be useful to have the ability 
> to render a CPU flame graph for a particular job vertex.
> Flink already has a stack-trace sampling mechanism in-place, so it should be 
> straightforward to implement.
> This should be done by implementing a new endpoint in REST API, which would 
> sample the stack-trace the same way as current BackPressureTracker does, only 
> with a different sampling rate and length of sampling.
> [Here|https://www.youtube.com/watch?v=GUNDehj9z9o] is a little demo of the 
> feature.





[jira] [Comment Edited] (FLINK-13550) Support for CPU FlameGraphs in web UI

2021-11-22 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447783#comment-17447783
 ] 

jackylau edited comment on FLINK-13550 at 11/23/21, 5:36 AM:
-

Hi [~arvid] [~dmvk], I have a question about 
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/ops/debugging/flame_graphs/

What is the meaning of the digits? Why can it be a negative number?

 

!image-2021-11-23-13-36-03-269.png!


was (Author: jackylau):
Hi [~arvid] [~dmvk], I have a question about 
https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/ops/debugging/flame_graphs/

What is the meaning of the digits? Why can it be a negative number?

!image-2021-11-23-13-36-03-269.png!

> Support for CPU FlameGraphs in web UI
> -
>
> Key: FLINK-13550
> URL: https://issues.apache.org/jira/browse/FLINK-13550
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: David Morávek
>Assignee: Alexander Fedulov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
> Attachments: image-2021-11-23-13-36-03-269.png
>
>
> For better insight into a running job, it would be useful to have the ability 
> to render a CPU flame graph for a particular job vertex.
> Flink already has a stack-trace sampling mechanism in-place, so it should be 
> straightforward to implement.
> This should be done by implementing a new endpoint in REST API, which would 
> sample the stack-trace the same way as current BackPressureTracker does, only 
> with a different sampling rate and length of sampling.
> [Here|https://www.youtube.com/watch?v=GUNDehj9z9o] is a little demo of the 
> feature.





[jira] [Updated] (FLINK-13550) Support for CPU FlameGraphs in web UI

2021-11-22 Thread jackylau (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-13550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jackylau updated FLINK-13550:
-
Attachment: image-2021-11-23-13-36-03-269.png

> Support for CPU FlameGraphs in web UI
> -
>
> Key: FLINK-13550
> URL: https://issues.apache.org/jira/browse/FLINK-13550
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: David Morávek
>Assignee: Alexander Fedulov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
> Attachments: image-2021-11-23-13-36-03-269.png
>
>
> For better insight into a running job, it would be useful to have the ability 
> to render a CPU flame graph for a particular job vertex.
> Flink already has a stack-trace sampling mechanism in-place, so it should be 
> straightforward to implement.
> This should be done by implementing a new endpoint in REST API, which would 
> sample the stack-trace the same way as current BackPressureTracker does, only 
> with a different sampling rate and length of sampling.
> [Here|https://www.youtube.com/watch?v=GUNDehj9z9o] is a little demo of the 
> feature.





[jira] [Commented] (FLINK-24975) Add hooks and extension points to FlinkSQL

2021-11-22 Thread junbiao chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447778#comment-17447778
 ] 

junbiao chen commented on FLINK-24975:
--

I will start a discussion about this issue later

> Add hooks and extension points to FlinkSQL
> --
>
> Key: FLINK-24975
> URL: https://issues.apache.org/jira/browse/FLINK-24975
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: junbiao chen
>Priority: Major
>
> Refer to Spark SQL: https://issues.apache.org/jira/browse/SPARK-18127





[GitHub] [flink] flinkbot edited a comment on pull request #17822: Release 1.14 kafka3.0 bug

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17822:
URL: https://github.com/apache/flink/pull/17822#issuecomment-971696959


   
   ## CI report:
   
   * 3719c0402ec979c619371fcde9f2e7d2c46d69ed Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26805)
 
   * 4315c1be1f94367058c85be82e89d1bd623c63a7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26885)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #17822: Release 1.14 kafka3.0 bug

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17822:
URL: https://github.com/apache/flink/pull/17822#issuecomment-971696959


   
   ## CI report:
   
   * 3719c0402ec979c619371fcde9f2e7d2c46d69ed Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26805)
 
   * 4315c1be1f94367058c85be82e89d1bd623c63a7 UNKNOWN
   
   
   






[GitHub] [flink] godfreyhe commented on pull request #17805: [FLINK-24708][planner] Fix wrong results of the IN operator

2021-11-22 Thread GitBox


godfreyhe commented on pull request #17805:
URL: https://github.com/apache/flink/pull/17805#issuecomment-976162994


   merged






[jira] [Closed] (FLINK-24708) `ConvertToNotInOrInRule` has a bug which leads to wrong result

2021-11-22 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-24708.
--
Resolution: Fixed

Fixed in:
1.15.0: 90e850301e672fc0da293abc55eb446f7ec68ffa
1.14.1: 4315c1be1f94367058c85be82e89d1bd623c63a7
1.13.4: 39134d1c1524f6014e0ce676e471379d386bc659

> `ConvertToNotInOrInRule` has a bug which leads to wrong result
> --
>
> Key: FLINK-24708
> URL: https://issues.apache.org/jira/browse/FLINK-24708
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Assignee: Shengkai Fang
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.15.0, 1.14.1, 1.13.4
>
> Attachments: image-2021-10-29-23-59-48-074.png
>
>
> A user reported this bug on the mailing list; I paste the content here.
> We are in the process of upgrading from Flink 1.9.3 to 1.13.3.  We have 
> noticed that statements with either UPPER(field) or LOWER(field) in the WHERE 
> clause, in combination with an IN, do not always evaluate correctly. 
>  
> The following test case highlights this problem.
>  
>  
> {code:java}
> import org.apache.flink.streaming.api.datastream.DataStream;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.table.api.Schema;
> import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
> public class TestCase {
> public static void main(String[] args) throws Exception {
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
> env.setParallelism(1);
> TestData testData = new TestData();
> testData.setField1("bcd");
> DataStream<TestData> stream = env.fromElements(testData);
> stream.print();  // To prevent 'No operators' error
> final StreamTableEnvironment tableEnvironment = 
> StreamTableEnvironment.create(env);
> tableEnvironment.createTemporaryView("testTable", stream, 
> Schema.newBuilder().build());
> // Fails because abcd is larger than abc
> tableEnvironment.executeSql("select *, '1' as run from testTable 
> WHERE lower(field1) IN ('abcd', 'abc', 'bcd', 'cde')").print();
> // Succeeds because lower was removed
> tableEnvironment.executeSql("select *, '2' as run from testTable 
> WHERE field1 IN ('abcd', 'abc', 'bcd', 'cde')").print();
> // These 4 succeed because the smallest literal is before abcd
> tableEnvironment.executeSql("select *, '3' as run from testTable 
> WHERE lower(field1) IN ('abc', 'abcd', 'bcd', 'cde')").print();
> tableEnvironment.executeSql("select *, '4' as run from testTable 
> WHERE lower(field1) IN ('abc', 'bcd', 'abhi', 'cde')").print();
> tableEnvironment.executeSql("select *, '5' as run from testTable 
> WHERE lower(field1) IN ('cde', 'abcd', 'abc', 'bcd')").print();
> tableEnvironment.executeSql("select *, '6' as run from testTable 
> WHERE lower(field1) IN ('cde', 'abc', 'abcd', 'bcd')").print();
> // Fails because smallest is not first
> tableEnvironment.executeSql("select *, '7' as run from testTable 
> WHERE lower(field1) IN ('cdef', 'abce', 'abcd', 'ab', 'bcd')").print();
> // Succeeds
> tableEnvironment.executeSql("select *, '8' as run from testTable 
> WHERE lower(field1) IN ('ab', 'cdef', 'abce', 'abcdefgh', 'bcd')").print();
> env.execute("TestCase");
> }
> public static class TestData {
> private String field1;
> public String getField1() {
> return field1;
> }
> public void setField1(String field1) {
> this.field1 = field1;
> }
> }
> }
> {code}
>  
> The job produces the following output:
> Empty set
> +-+---++
> |op| field1|    run|
> +-+---++
> |+I|    bcd|  2|
> +-+---++
> 1 row in set
> +-+---++
> |op| field1|    run|
> +-+---++
> |+I|    bcd|  3|
> +-+---++
> 1 row in set
> +-+---++
> |op| field1|    run|
> +-+---++
> |+I|    bcd| 

[GitHub] [flink] godfreyhe closed pull request #17805: [FLINK-24708][planner] Fix wrong results of the IN operator

2021-11-22 Thread GitBox


godfreyhe closed pull request #17805:
URL: https://github.com/apache/flink/pull/17805


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25011) Introduce VertexParallelismDecider

2021-11-22 Thread Lijie Wang (Jira)
Lijie Wang created FLINK-25011:
--

 Summary: Introduce VertexParallelismDecider
 Key: FLINK-25011
 URL: https://issues.apache.org/jira/browse/FLINK-25011
 Project: Flink
  Issue Type: Sub-task
Reporter: Lijie Wang


Introduce VertexParallelismDecider and provide a default implementation.
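Since the ticket does not yet describe the API, here is a hypothetical sketch of what such a decider could look like. The interface name follows the ticket, but the method shape and the volume-based default implementation below are assumptions for illustration only, not Flink's actual design:

```java
import java.util.List;

public class ParallelismDeciderSketch {

    /** Hypothetical interface; the real FLINK-25011 API may differ. */
    interface VertexParallelismDecider {
        int decideParallelism(List<Long> consumedBytesPerResult);
    }

    /** A plausible default: one task per N bytes of consumed data, bounded by a maximum. */
    static class DefaultDecider implements VertexParallelismDecider {
        private final long bytesPerTask;
        private final int maxParallelism;

        DefaultDecider(long bytesPerTask, int maxParallelism) {
            this.bytesPerTask = bytesPerTask;
            this.maxParallelism = maxParallelism;
        }

        @Override
        public int decideParallelism(List<Long> consumedBytesPerResult) {
            long total = consumedBytesPerResult.stream().mapToLong(Long::longValue).sum();
            long needed = (total + bytesPerTask - 1) / bytesPerTask; // ceiling division
            return (int) Math.min(maxParallelism, Math.max(1, needed));
        }
    }

    public static void main(String[] args) {
        VertexParallelismDecider decider = new DefaultDecider(128L, 8);
        // 400 bytes total at 128 bytes per task -> parallelism 4
        System.out.println(decider.decideParallelism(List.of(300L, 100L)));
    }
}
```

A decider like this would let the scheduler pick a vertex parallelism from observed input sizes instead of a static configuration value.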



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24997) count(null) not supported in flink sql query

2021-11-22 Thread zouyunhe (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447767#comment-17447767
 ] 

zouyunhe commented on FLINK-24997:
--

OK [~icshuo] 

> count(null) not supported in flink sql query
> 
>
> Key: FLINK-24997
> URL: https://issues.apache.org/jira/browse/FLINK-24997
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: zouyunhe
>Priority: Major
>
> I use the SQL client to submit a SQL query to a Flink session cluster. The SQL is 
> {code:java}
> select count(null);{code}
>   The submission fails and throws the exception
> {code:java}
> org.apache.flink.table.client.gateway.SqlExecutionException: Could not 
> execute SQL statement.
>         at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:211)
>  ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:231)
>  ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:532) 
> ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.callOperation(CliClient.java:423) 
> ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.lambda$executeStatement$1(CliClient.java:332)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
>         at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:325)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
> Caused by: java.lang.UnsupportedOperationException: Unsupported type 'NULL' 
> to get internal serializer
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.createInternal(InternalSerializers.java:125)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.create(InternalSerializers.java:55)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) 
> ~[?:?]
>         at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
>  ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) 
> ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:550) ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
>  ~[?:?]
>         at 
> java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:517) ~[?:?]
>         at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.<init>(RowDataSerializer.java:73)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.createInternal(InternalSerializers.java:109)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.create(InternalSerializers.java:55)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalTypeInfo.of(InternalTypeInfo.java:83)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCalc.translateToPlanInternal(CommonExecCalc.java:106)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:250)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> 

[jira] [Commented] (FLINK-24997) count(null) not supported in flink sql query

2021-11-22 Thread Shuo Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447766#comment-17447766
 ] 

Shuo Cheng commented on FLINK-24997:


[~zouyunhe] you can follow the similar Jira FLINK-17484

> count(null) not supported in flink sql query
> 
>
> Key: FLINK-24997
> URL: https://issues.apache.org/jira/browse/FLINK-24997
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: zouyunhe
>Priority: Major
>
> (quoted issue description and stack trace elided; identical to the first occurrence above)

[jira] [Created] (FLINK-25010) Speed up hive's createMRSplits by multi thread

2021-11-22 Thread Liu (Jira)
Liu created FLINK-25010:
---

 Summary: Speed up hive's createMRSplits by multi thread
 Key: FLINK-25010
 URL: https://issues.apache.org/jira/browse/FLINK-25010
 Project: Flink
  Issue Type: Improvement
Reporter: Liu


We have thousands of Hive partitions, and the method createMRSplits can take 
much time, for example, ten minutes. We can speed up the process by using 
multiple threads for different partitions.
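The proposal can be sketched with plain JDK concurrency. The `createSplitsForPartition` method below is a hypothetical stand-in for the actual per-partition MR split computation, not Flink's or Hive's real API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSplitCreation {

    /** Hypothetical per-partition work; stands in for the real MR split computation. */
    static List<String> createSplitsForPartition(String partition) {
        return List.of(partition + "#0", partition + "#1");
    }

    /** Fan the per-partition calls out to a fixed thread pool, keeping submission order. */
    static List<String> createSplitsInParallel(List<String> partitions, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<List<String>>> futures = new ArrayList<>();
            for (String p : partitions) {
                futures.add(pool.submit(() -> createSplitsForPartition(p)));
            }
            List<String> all = new ArrayList<>();
            for (Future<List<String>> f : futures) {
                all.addAll(f.get()); // blocks until that partition's splits are ready
            }
            return all;
        } catch (Exception e) {
            throw new RuntimeException("split creation failed", e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(createSplitsInParallel(List.of("p=1", "p=2", "p=3"), 4));
    }
}
```

With independent partitions, split creation time becomes roughly the cost of the slowest partition per pool slot instead of the sum over all partitions.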



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25010) Speed up hive's createMRSplits by multi thread

2021-11-22 Thread Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu updated FLINK-25010:

Component/s: Connectors / Hive

> Speed up hive's createMRSplits by multi thread
> --
>
> Key: FLINK-25010
> URL: https://issues.apache.org/jira/browse/FLINK-25010
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Reporter: Liu
>Priority: Major
>
> We have thousands of Hive partitions, and the method createMRSplits can take 
> much time, for example, ten minutes. We can speed up the process by using 
> multiple threads for different partitions.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] godfreyhe commented on a change in pull request #17666: [FLINK-21327][table-planner-blink] Support window TVF in batch mode

2021-11-22 Thread GitBox


godfreyhe commented on a change in pull request #17666:
URL: https://github.com/apache/flink/pull/17666#discussion_r754795448



##
File path: 
flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/planner/plan/batch/sql/WindowTableFunctionTest.scala
##
@@ -0,0 +1,198 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.batch.sql
+
+import org.apache.flink.api.scala._
+import org.apache.flink.table.api._
+import org.apache.flink.table.planner.utils.TableTestBase
+
+import java.sql.Timestamp
+
+import org.junit.{Before, Test}
+
+class WindowTableFunctionTest extends TableTestBase {

Review comment:
   please add some test cases to verify the projection 
window-table-function transpose

##
File path: 
flink-table/flink-table-planner/src/main/scala/org/apache/flink/table/planner/plan/nodes/common/CommonPhysicalWindowTableFunction.scala
##
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.common
+
+import org.apache.flink.table.api.TableException
+import org.apache.flink.table.planner.plan.logical.{CumulativeWindowSpec, 
HoppingWindowSpec, TimeAttributeWindowingStrategy, TumblingWindowSpec}
+
+import org.apache.calcite.plan.{RelOptCluster, RelTraitSet}
+import org.apache.calcite.rel.`type`.RelDataType
+import org.apache.calcite.rel.metadata.RelMetadataQuery
+import org.apache.calcite.rel.{RelNode, RelWriter, SingleRel}
+
+import scala.collection.JavaConverters._
+
+/**
+ * Base physical RelNode for window table-valued function.
+ */
+abstract class CommonPhysicalWindowTableFunction(
+cluster: RelOptCluster,
+traitSet: RelTraitSet,
+inputRel: RelNode,
+outputRowType: RelDataType,
+val windowing: TimeAttributeWindowingStrategy)
+  extends SingleRel(cluster, traitSet, inputRel) {
+
+  override def deriveRowType(): RelDataType = outputRowType
+
+  override def explainTerms(pw: RelWriter): RelWriter = {
+val inputFieldNames = getInput.getRowType.getFieldNames.asScala.toArray
+super.explainTerms(pw)
+  .item("window", windowing.toSummaryString(inputFieldNames))
+  }
+
+  override def estimateRowCount(mq: RelMetadataQuery): Double = {

Review comment:
   We should add a test case in FlinkRelMdRowCountTest for 
WindowTableFunction




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on a change in pull request #17670: [FLINK-24760][docs] Update user document for batch window tvf

2021-11-22 Thread GitBox


godfreyhe commented on a change in pull request #17670:
URL: https://github.com/apache/flink/pull/17670#discussion_r754793897



##
File path: docs/content.zh/docs/dev/table/sql/queries/window-agg.md
##
@@ -40,7 +40,9 @@ Unlike other aggregations on continuous tables, window 
aggregation do not emit i
 
 ### Windowing TVFs
 
-Flink supports `TUMBLE`, `HOP` and `CUMULATE` types of window aggregations, 
which can be defined on either [event or processing time attributes]({{< ref 
"docs/dev/table/concepts/time_attributes" >}}). See [Windowing TVF]({{< ref 
"docs/dev/table/sql/queries/window-tvf" >}}) for more windowing functions 
information.
+Flink supports `TUMBLE`, `HOP` and `CUMULATE` types of window aggregations.
+For SQL queries on streaming tables, the time attribute field of a window 
table-valued function must be on either [event or processing time 
attributes]({{< ref "docs/dev/table/concepts/time_attributes" >}}). See 
[Windowing TVF]({{< ref "docs/dev/table/sql/queries/window-tvf" >}}) for more 
windowing functions information.
+For SQL on batch tables, the time attribute field of a window table-valued 
function must be an attribute of type `TIMESTAMP` or `TIMESTAMP_LTZ`. 

Review comment:
   +1 for streaming/batch mode




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24997) count(null) not supported in flink sql query

2021-11-22 Thread zouyunhe (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447765#comment-17447765
 ] 

zouyunhe commented on FLINK-24997:
--

[~jark] [~icshuo] Calcite cannot deduce the type, but the exception is thrown 
in `InternalSerializers`; it seems `InternalSerializers` cannot handle the NULL 
type. Should we fix this?

 

> count(null) not supported in flink sql query
> 
>
> Key: FLINK-24997
> URL: https://issues.apache.org/jira/browse/FLINK-24997
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: zouyunhe
>Priority: Major
>
> (quoted issue description and stack trace elided; identical to the first occurrence above)

[jira] [Commented] (FLINK-24997) count(null) not supported in flink sql query

2021-11-22 Thread zouyunhe (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447764#comment-17447764
 ] 

zouyunhe commented on FLINK-24997:
--

[~icshuo] it works by using `select count(cast(null as int))`. Should we enable 
type coercion in the Flink SQL planner to support this query, which would 
conform to users' habits?

> count(null) not supported in flink sql query
> 
>
> Key: FLINK-24997
> URL: https://issues.apache.org/jira/browse/FLINK-24997
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: zouyunhe
>Priority: Major
>
> (quoted issue description and stack trace elided; identical to the first occurrence above)

[jira] [Commented] (FLINK-24446) Casting from STRING to TIMESTAMP_LTZ looses fractional seconds

2021-11-22 Thread Shen Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447763#comment-17447763
 ] 

Shen Zhu commented on FLINK-24446:
--

Hey [~matriv], I'm interested in working on this ticket. Would you mind 
assigning it to me?

> Casting from STRING to TIMESTAMP_LTZ looses fractional seconds
> --
>
> Key: FLINK-24446
> URL: https://issues.apache.org/jira/browse/FLINK-24446
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Marios Trivyzas
>Priority: Major
>
> Currently the method *toTimestamp(str, tz)* from *SqlDateTimeUtils* doesn't 
> accept more than 23 chars in the input, and it also returns a long, which is the 
> millis since epoch, so the rest of the fractional seconds are ignored.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] godfreyhe closed pull request #17806: [FLINK-24708][planner] Fix wrong results of the IN operator

2021-11-22 Thread GitBox


godfreyhe closed pull request #17806:
URL: https://github.com/apache/flink/pull/17806


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17876: [FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17876:
URL: https://github.com/apache/flink/pull/17876#issuecomment-976152397


   
   ## CI report:
   
   * 9b4fa70d5e2f6a31bbb7ad1b84fb8334ac24a46d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26883)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #17876: [FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot commented on pull request #17876:
URL: https://github.com/apache/flink/pull/17876#issuecomment-976152397


   
   ## CI report:
   
   * 9b4fa70d5e2f6a31bbb7ad1b84fb8334ac24a46d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17875: [BP-1.14][FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17875:
URL: https://github.com/apache/flink/pull/17875#issuecomment-976151167


   
   ## CI report:
   
   * 3c8b9142f5707820507d61fc71784f7ed4bc07f1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26882)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #17876: [FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot commented on pull request #17876:
URL: https://github.com/apache/flink/pull/17876#issuecomment-976152309


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9b4fa70d5e2f6a31bbb7ad1b84fb8334ac24a46d (Tue Nov 23 
04:04:30 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] flinkbot commented on pull request #17875: [BP-1.14][FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot commented on pull request #17875:
URL: https://github.com/apache/flink/pull/17875#issuecomment-976151287


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3c8b9142f5707820507d61fc71784f7ed4bc07f1 (Tue Nov 23 
04:01:44 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[GitHub] [flink] flinkbot commented on pull request #17875: [BP-1.14][FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot commented on pull request #17875:
URL: https://github.com/apache/flink/pull/17875#issuecomment-976151167


   
   ## CI report:
   
   * 3c8b9142f5707820507d61fc71784f7ed4bc07f1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17823: [FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17823:
URL: https://github.com/apache/flink/pull/17823#issuecomment-972474977


   
   ## CI report:
   
   * fc2c92648891940ca961617c5a7715d3e8137689 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26686)
 
   * fe017595cf990c90cf53deea9c11288e77c7565a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26881)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] wangyang0918 opened a new pull request #17876: [FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


wangyang0918 opened a new pull request #17876:
URL: https://github.com/apache/flink/pull/17876


   Backport #17823 to 1.14.






[jira] [Commented] (FLINK-24997) count(null) not supported in flink sql query

2021-11-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447762#comment-17447762
 ] 

Jark Wu commented on FLINK-24997:
-

I agree with [~icshuo], but the exception message can be improved. 

> count(null) not supported in flink sql query
> 
>
> Key: FLINK-24997
> URL: https://issues.apache.org/jira/browse/FLINK-24997
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: zouyunhe
>Priority: Major
>
> I used the SQL client to submit a SQL query to a Flink session cluster. The SQL is 
> {code:java}
> select count(null);{code}
>   The submission failed and threw the exception
> {code:java}
> org.apache.flink.table.client.gateway.SqlExecutionException: Could not 
> execute SQL statement.
>         at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:211)
>  ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:231)
>  ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:532) 
> ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.callOperation(CliClient.java:423) 
> ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.lambda$executeStatement$1(CliClient.java:332)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
>         at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:325)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
> Caused by: java.lang.UnsupportedOperationException: Unsupported type 'NULL' 
> to get internal serializer
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.createInternal(InternalSerializers.java:125)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.create(InternalSerializers.java:55)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) 
> ~[?:?]
>         at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
>  ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) 
> ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:550) ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
>  ~[?:?]
>         at 
> java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:517) ~[?:?]
>         at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.<init>(RowDataSerializer.java:73)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.createInternal(InternalSerializers.java:109)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.create(InternalSerializers.java:55)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalTypeInfo.of(InternalTypeInfo.java:83)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCalc.translateToPlanInternal(CommonExecCalc.java:106)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecEdge.translateToPlan(ExecEdge.java:250)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> 

[jira] [Commented] (FLINK-24948) Special character in column names breaks JDBC statement parsing

2021-11-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447761#comment-17447761
 ] 

Jark Wu commented on FLINK-24948:
-

Thanks [~Paul Lin], do you want to provide a fix for this?

> Special character in column names breaks JDBC statement parsing
> ---
>
> Key: FLINK-24948
> URL: https://issues.apache.org/jira/browse/FLINK-24948
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.12.4
>Reporter: Paul Lin
>Assignee: Paul Lin
>Priority: Major
>
> Currently, the JDBC connector assumes column names respect Java identifier 
> naming restrictions, but databases that support JDBC may have different 
> naming restrictions. For example, MySQL allows dots and colons in column 
> names. In that case, the JDBC connector has trouble generating the SQL.
> We could fix this by validating field names in `JdbcDmlOptions`. In addition, 
> it'd be good to clarify the naming restrictions of Flink SQL, so users and 
> connector developers would know the standard.
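As a hedged illustration of one possible direction (a hypothetical helper, not the connector's actual API), quoting identifiers in the generated SQL would make such names safe for MySQL:

```java
public class IdentifierQuoting {

    // Hypothetical helper (not the connector's real API): wrap a column name
    // in MySQL backtick quotes, escaping embedded backticks, so names with
    // dots or colons no longer break the generated statement.
    static String quoteIdentifier(String name) {
        return "`" + name.replace("`", "``") + "`";
    }

    public static void main(String[] args) {
        String column = "user.name:v2"; // legal in MySQL, not a Java identifier
        String sql = "SELECT " + quoteIdentifier(column) + " FROM t";
        System.out.println(sql); // SELECT `user.name:v2` FROM t
    }
}
```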





[GitHub] [flink] flinkbot edited a comment on pull request #17823: [FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


flinkbot edited a comment on pull request #17823:
URL: https://github.com/apache/flink/pull/17823#issuecomment-972474977


   
   ## CI report:
   
   * fc2c92648891940ca961617c5a7715d3e8137689 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=26686)
 
   * fe017595cf990c90cf53deea9c11288e77c7565a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] wangyang0918 opened a new pull request #17875: [BP-1.14][FLINK-24937][e2e] Return correct exit code in build_image

2021-11-22 Thread GitBox


wangyang0918 opened a new pull request #17875:
URL: https://github.com/apache/flink/pull/17875


   Backport #17823 to 1.14.






[jira] [Commented] (FLINK-21415) JDBC connector should support to disable caching missing key

2021-11-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447760#comment-17447760
 ] 

Jark Wu commented on FLINK-21415:
-

[~liliwei], yes, you can add a config option 
{{lookup.cache.caching-missing-key=true}} to switch the behavior, but the 
default value should be true to keep backwards compatibility. 

> JDBC connector should support to disable caching missing key
> 
>
> Key: FLINK-21415
> URL: https://issues.apache.org/jira/browse/FLINK-21415
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.1
>Reporter: Shuai Xia
>Assignee: liwei li
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
>
> When the JDBC lookup finds no data, the cache will still store an empty ArrayList.
> We should add a size check.
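A minimal sketch of the discussed behavior (hypothetical names; the real option and its default were still being settled above): skip caching when the lookup result is empty and caching of missing keys is disabled:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LookupCacheDemo {

    // Hypothetical flag mirroring the discussed
    // 'lookup.cache.caching-missing-key' option (the default would stay
    // true for backwards compatibility).
    static final boolean CACHE_MISSING_KEY = false;

    static final Map<String, List<String>> cache = new HashMap<>();

    // Simulated database lookup that finds nothing for this key.
    static List<String> queryDatabase(String key) {
        return Collections.emptyList();
    }

    static List<String> lookup(String key) {
        List<String> cached = cache.get(key);
        if (cached != null) {
            return cached;
        }
        List<String> rows = queryDatabase(key);
        // Only cache an empty result if caching of missing keys is enabled.
        if (!rows.isEmpty() || CACHE_MISSING_KEY) {
            cache.put(key, rows);
        }
        return rows;
    }

    public static void main(String[] args) {
        lookup("k1");
        System.out.println(cache.containsKey("k1")); // false
    }
}
```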





[jira] [Commented] (FLINK-24997) count(null) not supported in flink sql query

2021-11-22 Thread Shuo Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447758#comment-17447758
 ] 

Shuo Cheng commented on FLINK-24997:


Flink uses Calcite to parse SQL, and Calcite performs strict type validation. 
The literal 'NULL' is not allowed when Calcite cannot deduce its type. You can 
make it work with `select count(cast(null as int))`.

> count(null) not supported in flink sql query
> 
>
> Key: FLINK-24997
> URL: https://issues.apache.org/jira/browse/FLINK-24997
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client, Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: zouyunhe
>Priority: Major
>
> I used the SQL client to submit a SQL query to a Flink session cluster. The SQL is 
> {code:java}
> select count(null);{code}
>   The submission failed and threw the exception
> {code:java}
> org.apache.flink.table.client.gateway.SqlExecutionException: Could not 
> execute SQL statement.
>         at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeOperation(LocalExecutor.java:211)
>  ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:231)
>  ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:532) 
> ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.callOperation(CliClient.java:423) 
> ~[flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.lambda$executeStatement$1(CliClient.java:332)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at java.util.Optional.ifPresent(Optional.java:183) ~[?:?]
>         at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:325)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>  [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
>         at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161) 
> [flink-sql-client_2.12-1.14.0.jar:1.14.0]
> Caused by: java.lang.UnsupportedOperationException: Unsupported type 'NULL' 
> to get internal serializer
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.createInternal(InternalSerializers.java:125)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.create(InternalSerializers.java:55)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:195) 
> ~[?:?]
>         at 
> java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
>  ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:484) ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:474) 
> ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:550) ~[?:?]
>         at 
> java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
>  ~[?:?]
>         at 
> java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:517) ~[?:?]
>         at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.<init>(RowDataSerializer.java:73)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.createInternal(InternalSerializers.java:109)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalSerializers.create(InternalSerializers.java:55)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.runtime.typeutils.InternalTypeInfo.of(InternalTypeInfo.java:83)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.common.CommonExecCalc.translateToPlanInternal(CommonExecCalc.java:106)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase.translateToPlan(ExecNodeBase.java:134)
>  ~[flink-table_2.12-1.14.0.jar:1.14.0]
>         at 
> 

[jira] [Updated] (FLINK-24966) Fix spelling errors in the project

2021-11-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-24966:

Summary: Fix spelling errors in the project  (was: Typo fix)

> Fix spelling errors in the project
> --
>
> Key: FLINK-24966
> URL: https://issues.apache.org/jira/browse/FLINK-24966
> Project: Flink
>  Issue Type: Improvement
>Reporter: jakevin
>Priority: Minor
>  Labels: pull-request-available, starter, typo
>
> Hi, I'm a newcomer to Flink. I found some typos in the project,
> like: `cachable`, `clinet`.
> I'd like to start the first step of participating in the community by fixing 
> typos.
>  





[GitHub] [flink] wuchong commented on a change in pull request #17842: [FLINK-24966] [docs] Fix spelling errors in the project

2021-11-22 Thread GitBox


wuchong commented on a change in pull request #17842:
URL: https://github.com/apache/flink/pull/17842#discussion_r754787598



##
File path: flink-end-to-end-tests/flink-tpcds-test/tpcds-tool/query/query44.sql
##
@@ -1,5 +1,5 @@
 -- start query 1 in stream 0 using template 
../query_templates_qualified/query44.tpl
-select  asceding.rnk, i1.i_product_name best_performing, i2.i_product_name 
worst_performing
+select  ascending.rnk, i1.i_product_name best_performing, i2.i_product_name 
worst_performing

Review comment:
The original TPC-DS query uses `asceding`, so I would prefer not to change 
this file. 

##
File path: 
flink-core/src/test/java/org/apache/flink/core/fs/EntropyInjectorTest.java
##
@@ -169,7 +169,7 @@ public void testWithSafetyNet() throws Exception {
 }
 
 @Test
-public void testClassLoaderFixingFsWithSafeyNet() throws Exception {
+public void testClassLoaderFixingFsWithSafeNet() throws Exception {

Review comment:
   ```suggestion
   public void testClassLoaderFixingFsWithSafetyNet() throws Exception {
   ```

##
File path: 
flink-formats/flink-avro/src/test/java/org/apache/flink/formats/avro/EncoderDecoderTest.java
##
@@ -58,7 +58,7 @@
 public class EncoderDecoderTest {
 
 @Test
-public void testComplexStringsDirecty() {
+public void testComplexStringsDirect() {

Review comment:
   ```suggestion
   public void testComplexStringsDirectly() {
   ```

##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/io/StreamTaskSourceInput.java
##
@@ -106,7 +106,7 @@ public int getNumberOfInputChannels() {
  * from a network input. So that we can checkpoint state of the source and 
all of the other
  * operators at the same time.
  *
- * Also we are choosing to block the source, as a best effort 
optimisation as: - either there
+ * Also we are choosing to block the source, as a best effort 
Optimization as: - either there

Review comment:
   ```suggestion
* Also we are choosing to block the source, as a best effort 
optimization as: - either there
   ```

##
File path: 
flink-core/src/test/java/org/apache/flink/core/fs/EntropyInjectorTest.java
##
@@ -195,7 +195,7 @@ public void testClassLoaderFixingFsWithSafeyNet() throws 
Exception {
 }
 
 @Test
-public void testClassLoaderFixingFsWithoutSafeyNet() throws Exception {
+public void testClassLoaderFixingFsWithoutSafeNet() throws Exception {

Review comment:
   ```suggestion
   public void testClassLoaderFixingFsWithoutSafetyNet() throws Exception {
   ```








[jira] [Commented] (FLINK-21583) Allow comments in CSV format without having to ignore parse errors

2021-11-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447755#comment-17447755
 ] 

Jark Wu commented on FLINK-21583:
-

[~nkruber], could you help to check that? It seems we don't need to set 
'csv.ignore-parse-errors' = 'true' . 

[~liliwei] could you check the behavior on Flink 1.12.1 ? 


> Allow comments in CSV format without having to ignore parse errors
> --
>
> Key: FLINK-21583
> URL: https://issues.apache.org/jira/browse/FLINK-21583
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.12.1
>Reporter: Nico Kruber
>Assignee: liwei li
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Currently, when you pass {{'csv.allow-comments' = 'true'}} to a table 
> definition, you also have to set {{'csv.ignore-parse-errors' = 'true'}} to 
> actually skip the commented-out lines (and the docs mention this prominently 
> as well). This, however, may mask actual parsing errors that you want to be 
> notified of.
> I would like to propose that {{allow-comments}} also skips the 
> commented-out lines automatically, because these shouldn't be used anyway.
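A rough sketch of the proposed behavior (not Flink's actual CSV format code): commented-out lines are dropped before parsing, so genuine parse errors on data lines can still surface without ignore-parse-errors:

```java
import java.util.ArrayList;
import java.util.List;

public class CsvCommentSkip {

    // Sketch only: with allow-comments enabled, lines starting with '#'
    // are skipped up front, so they never count as parse errors and
    // ignore-parse-errors stays free to mean what it says.
    static List<String[]> parse(List<String> lines, boolean allowComments) {
        List<String[]> rows = new ArrayList<>();
        for (String line : lines) {
            if (allowComments && line.startsWith("#")) {
                continue; // commented-out line, not a parse error
            }
            rows.add(line.split(","));
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String> input = List.of("# header comment", "1,foo", "2,bar");
        System.out.println(parse(input, true).size()); // 2
    }
}
```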





[GitHub] [flink] RocMarshal commented on a change in pull request #17789: [FLINK-24351][docs] Translate "JSON Function" pages into Chinese

2021-11-22 Thread GitBox


RocMarshal commented on a change in pull request #17789:
URL: https://github.com/apache/flink/pull/17789#discussion_r754785987



##
File path: docs/data/sql_functions_zh.yml
##
@@ -908,15 +897,14 @@ json:
   - sql: JSON_OBJECTAGG([KEY] key VALUE value [ { NULL | ABSENT } ON NULL ])
 table: jsonObjectAgg(JsonOnNull, keyExpression, valueExpression)
 description: |
-  Builds a JSON object string by aggregating key-value expressions into a 
single JSON object.
+  通过将 key-value 聚合到单个 JSON 对象中,构建 JSON 对象字符串。
 
-  The key expression must return a non-nullable character string. Value 
expressions can be
-  arbitrary, including other JSON functions. If a value is `NULL`, the `ON 
NULL` behavior
-  defines what to do. If omitted, `NULL ON NULL` is assumed by default.
+  键表达式必须返回不为空的字符串。值表达式可以是任意的,包括其他 JSON 函数。
+  如果值为 `NULL`,则 `ON NULL` 行为定义了要执行的操作。如果省略,默认情况下假定为 `NULL ON NULL`。
 
-  Note that keys must be unique. If a key occurs multiple times, an error 
will be thrown.
+  请注意,键必须是唯一的。如果一个键出现多次,将抛出一个错误。
 
-  This function is currently not supported in `OVER` windows.
+  此函数目前在 `OVER` windows 中不受支持。

Review comment:
   ```suggestion
 目前在 `OVER` windows 中不支持此函数。
   ```








[jira] [Commented] (FLINK-24975) Add hooks and extension points to FlinkSQL

2021-11-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447753#comment-17447753
 ] 

Jark Wu commented on FLINK-24975:
-

Yes, this one deserves a FLIP. Btw, this may also relate to FLINK-21283. cc 
[~zhangjun]

> Add hooks and extension points to FlinkSQL
> --
>
> Key: FLINK-24975
> URL: https://issues.apache.org/jira/browse/FLINK-24975
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Reporter: junbiao chen
>Priority: Major
>
> refer to sparkSQL,https://issues.apache.org/jira/browse/SPARK-18127





[jira] [Closed] (FLINK-24976) sink utils not check the schema info between query and sink table

2021-11-22 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-24976.
---
Resolution: Not A Problem

This follows the SQL standard, which does not validate field names on INSERT 
INTO. If you want the insert fields to be explicit in the statement, you can 
specify the insert column list [1], for example: 

{code}
INSERT INTO T(c, b) SELECT x, y FROM S;
{code}

[1]: 
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/insert/

> sink utils not check the schema info between  query and sink table
> --
>
> Key: FLINK-24976
> URL: https://issues.apache.org/jira/browse/FLINK-24976
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.12.5
>Reporter: xiaodao
>Priority: Major
>
> sql like this
> {code:java}
> //CREATE TABLE source
> (
>     id        INT,
>     name      STRING,
>     PROCTIME AS PROCTIME()
> ) WITH (
>       'connector' = 'kafka'
>       ,'topic' = 'da'
>       ,'properties.bootstrap.servers' = 'localhost:9092'
>       ,'properties.group.id' = 'test'
>       ,'scan.startup.mode' = 'earliest-offset'
>       ,'format' = 'json'
>       ,'json.timestamp-format.standard' = 'SQL'
>       ); create table MyResultTable (
>     id int,
>     name string,
>     primary key (id) not enforced
> ) with (
>     'connector' = 'jdbc',
>     'url' = 'jdbc:mysql://localhost:3306/test',
>     'table-name' = 'users',
>     'username' = 'root',
>     'password' = 'abc123'
> );     
> insert into MyResultTable select id as idx, name, age from source; {code}
> In this SQL, the sink table has fields "id" and "name", but my query result 
> has "idx" and "name"; the SQL still executes fine.
> My question is: why are the field names of the query and the sink table not 
> validated? The check is in 
> org.apache.flink.table.planner.sinks.DynamicSinkUtils#validateSchemaAndApplyImplicitCast.
> This can lead to mistakes when there are many fields.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24002) Support count window with the window TVF

2021-11-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447749#comment-17447749
 ] 

Jark Wu commented on FLINK-24002:
-

Thanks [~jingzhang] for the summary. Your proposed syntax looks good to me. 
Regarding your questions:
1. I think it's fine to not emit windows that don't yet have enough records, 
but please make sure the semantics are the same for both batch and streaming.
2. Agree.
3. It depends on the implementation of the count window. For example, the count 
windows of DataStream and Table SQL have different implementations and, of 
course, different state cleanup strategies.
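The full-window-only semantics from point 1 can be sketched in plain Python. This is a hypothetical helper illustrating the discussed behavior, not Flink's DataStream or SQL implementation, and it ignores per-key partitioning for brevity:

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")


def tumble_row(records: Iterable[T], size: int) -> Iterator[List[T]]:
    """Assign records to tumbling count windows of `size` rows.

    A window fires only once it has collected exactly `size` records; a
    trailing partial window is never emitted, so the result is the same
    whether the input is a bounded batch or a stream that ends here.
    """
    buffer: List[T] = []
    for record in records:
        buffer.append(record)
        if len(buffer) == size:
            yield buffer
            buffer = []  # the fired window's state is released here


print(list(tumble_row("abcdefg", 3)))
# → [['a', 'b', 'c'], ['d', 'e', 'f']]  (the trailing 'g' never fires)
```

Note how state cleanup is trivial here because a count window's lifetime ends deterministically after `size` records; the real implementations differ in exactly this bookkeeping.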

> Support count window with the window TVF
> 
>
> Key: FLINK-24002
> URL: https://issues.apache.org/jira/browse/FLINK-24002
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jing Zhang
>Priority: Minor
>
> For a long time, count windows have been supported in the Table API, but not 
> in SQL.
> With the new window TVF syntax, we can also introduce a new window function 
> for count windows.
> For example, the following TUMBLE_ROW assigns windows in 10-row-count 
> intervals:
> {code:sql}
> SELECT *
> FROM TABLE(
>TUMBLE_ROW(
>  data => TABLE inputTable PARTITION BY order_id,
>  size => 10));
> {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] RocMarshal removed a comment on pull request #16962: [FLINK-15352][connector-jdbc] Develop MySQLCatalog to connect Flink with MySQL tables and ecosystem.

2021-11-22 Thread GitBox


RocMarshal removed a comment on pull request #16962:
URL: https://github.com/apache/flink/pull/16962#issuecomment-962460877


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25008) Improve behaviour of NotNullEnforcer when dropping records

2021-11-22 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17447748#comment-17447748
 ] 

Jark Wu commented on FLINK-25008:
-

Sounds good to me. 

> Improve behaviour of NotNullEnforcer when dropping records
> --
>
> Key: FLINK-25008
> URL: https://issues.apache.org/jira/browse/FLINK-25008
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Reporter: Marios Trivyzas
>Priority: Major
>
> By default *NotNullEnforcer* is configured as *ERROR,* so if a record with 
> *null* value(s) for the corresponding column(s) marked as *NOT NULL* is 
> processed, an error is thrown. Users can change the configuration and choose 
> *DROP*, so those records are silently dropped and never reach the sink.
> Maybe it's worth adding another option, like *LOG_AND_DROP*, so that those 
> records are not silently dropped but instead end up in a log, making it 
> easier to debug or post-process the pipeline.
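The three behaviours can be sketched as follows. The enum values mirror the options discussed above, but the function shape and row model are hypothetical; the real enforcer operates on Flink's internal RowData:

```python
import logging
from enum import Enum
from typing import Any, Dict, Iterable, Iterator, List

logger = logging.getLogger("not_null_enforcer")


class NotNullEnforcer(Enum):
    ERROR = "ERROR"                # default: fail the pipeline
    DROP = "DROP"                  # silently discard the record
    LOG_AND_DROP = "LOG_AND_DROP"  # the proposed new option


def enforce(
    rows: Iterable[Dict[str, Any]],
    not_null_cols: List[str],
    mode: NotNullEnforcer,
) -> Iterator[Dict[str, Any]]:
    """Yield only rows that satisfy the NOT NULL constraints."""
    for row in rows:
        violations = [c for c in not_null_cols if row.get(c) is None]
        if not violations:
            yield row
        elif mode is NotNullEnforcer.ERROR:
            raise ValueError(f"NULL in NOT NULL column(s) {violations}: {row}")
        elif mode is NotNullEnforcer.LOG_AND_DROP:
            logger.warning("Dropping record with NULL in %s: %s", violations, row)
        # DROP: fall through and skip the record silently


demo = [{"id": 1, "name": "a"}, {"id": None, "name": "b"}]
print(list(enforce(demo, ["id"], NotNullEnforcer.DROP)))
# → [{'id': 1, 'name': 'a'}]
```

With LOG_AND_DROP the output is identical to DROP, but the offending record leaves a trace in the log, which is exactly the debugging aid the ticket proposes.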



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

