[jira] [Commented] (FLINK-25106) Support tombstone messages in FLINK's "kafka" connector

2021-11-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451578#comment-17451578
 ] 

Martijn Visser commented on FLINK-25106:


[~varunyeligar] Thanks for clarifying. If I understand correctly, you want to 
use the Kafka messages in an upsert fashion (so that you can do both updates 
and deletes). 

I think the requested feature is actually an extension of the upsert-kafka 
connector (to support specific offsets), because the key difference between 
the kafka connector and the upsert-kafka connector is how tombstones are 
currently handled. 

> Support tombstone messages in FLINK's "kafka" connector
> ---
>
> Key: FLINK-25106
> URL: https://issues.apache.org/jira/browse/FLINK-25106
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Varun Yeligar
>Priority: Minor
>
> Currently, FLINK's "kafka" connector ignores all the tombstone messages, 
> whereas the "upsert-kafka" connector supports tombstone messages and sets the 
> type of the row to RowKind.DELETE ([Code 
> Link|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/DynamicKafkaDeserializationSchema.java#L126]).
> I wanted to know if it is feasible to support tombstone messages in "kafka" 
> connector by setting all the value fields to NULL and the RowKind to DELETE. 
> I could also raise a PR with the respective changes if required.
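
A minimal sketch of the proposed behaviour (the class and method names below 
are illustrative, not the actual connector API): when the record value is 
null, emit a DELETE row whose value fields are all NULL instead of dropping 
the record.

{code:java}
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.types.RowKind;
import org.apache.flink.util.Collector;

import org.apache.kafka.clients.consumer.ConsumerRecord;

/** Illustrative helper; the real change would live in the connector's deserialization schema. */
class TombstoneAwareEmitter {

    /**
     * Emits a DELETE row for tombstones; non-tombstone records would go
     * through the normal value deserializer (omitted here).
     */
    void emit(ConsumerRecord<byte[], byte[]> record, int valueArity, Collector<RowData> out) {
        if (record.value() == null) {
            // All fields of a freshly created GenericRowData default to NULL.
            GenericRowData delete = new GenericRowData(valueArity);
            delete.setRowKind(RowKind.DELETE);
            out.collect(delete);
        }
    }
}
{code}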



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] shenzhu commented on pull request #17919: [FLINK-24419][table-planner] Trim to precision when casting to BINARY/VARBINARY

2021-11-30 Thread GitBox


shenzhu commented on pull request #17919:
URL: https://github.com/apache/flink/pull/17919#issuecomment-983372453


   > @shenzhu Thx again for your time and effort here. It would be great to go 
for option 1. since as you already mentioned we're trying to move all the logic 
in `ScalarOperatorGens` into the new java structure under the `CastRule` 
interface.
   
   Hey @matriv , thanks for your feedback!
   I updated this PR with option 1. Would you mind taking a look when you 
have a moment?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-web] crazyzhou opened a new pull request #484: Add Pravega Flink connector 101 blog

2021-11-30 Thread GitBox


crazyzhou opened a new pull request #484:
URL: https://github.com/apache/flink-web/pull/484


   A copy of 
https://cncf.pravega.io/blog/2021/11/01/pravega-flink-connector-101/ 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Zakelly commented on pull request #16964: [FLINK-23791] Enable RocksDB info log by default

2021-11-30 Thread GitBox


Zakelly commented on pull request #16964:
URL: https://github.com/apache/flink/pull/16964#issuecomment-983368636


   After FLINK-24046, we provide default values for each option in 
`RocksDBConfigurableOptions`, which can be loaded via 
`RocksDBResourceContainer`. So I have reopened this PR and simply changed the 
default value to enable the RocksDB info log.
   Does this resolve your concern? @carp84 @NicoK. And @Myasuka please take a 
look. Thanks.
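
   For reference, a minimal sketch of setting that option explicitly from 
user code (assuming the `state.backend.rocksdb.log.level` key defined in 
`RocksDBConfigurableOptions`; the key and value names should be checked 
against the release in use):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbLogLevelExample {
    public static void main(String[] args) throws Exception {
        // Assumption: key/value names come from RocksDBConfigurableOptions
        // (FLINK-24046); setting HEADER_LEVEL restores the old behaviour.
        Configuration conf = new Configuration();
        conf.setString("state.backend.rocksdb.log.level", "INFO_LEVEL");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);
        env.setStateBackend(new EmbeddedRocksDBStateBackend());
    }
}
```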


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] alpreu commented on a change in pull request #17930: [FLINK-24326][docs][connectors] Update elasticsearch sink pages with new unified sink API (FLIP-143) implementation

2021-11-30 Thread GitBox


alpreu commented on a change in pull request #17930:
URL: https://github.com/apache/flink/pull/17930#discussion_r759917751



##
File path: docs/content/docs/connectors/datastream/elasticsearch.md
##
@@ -65,240 +61,90 @@ about how to package the program with the libraries for 
cluster execution.
 
 Instructions for setting up an Elasticsearch cluster can be found
 
[here](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html).
-Make sure to set and remember a cluster name. This must be set when
-creating an `ElasticsearchSink` for requesting document actions against your cluster.
 
 ## Elasticsearch Sink
 
-The `ElasticsearchSink` uses a `TransportClient` (before 6.x) or `RestHighLevelClient` (starting with 6.x) to communicate with an Elasticsearch cluster.
-
 The example below shows how to configure and create a sink:
 
 {{< tabs "51732edd-4218-470e-adad-b1ebb4021ae4" >}}
-{{< tab "java, 5.x" >}}
-```java
-import org.apache.flink.api.common.functions.RuntimeContext;
-import org.apache.flink.streaming.api.datastream.DataStream;
-import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
-import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
-import org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink;
-
-import org.elasticsearch.action.index.IndexRequest;
-import org.elasticsearch.client.Requests;
-
-import java.net.InetAddress;
-import java.net.InetSocketAddress;
-import java.util.ArrayList;
-import java.util.HashMap;
-import java.util.List;
-import java.util.Map;
-
-DataStream<String> input = ...;
-
-Map<String, String> config = new HashMap<>();
-config.put("cluster.name", "my-cluster-name");
-// This instructs the sink to emit after every element, otherwise they would be buffered
-config.put("bulk.flush.max.actions", "1");
-
-List<InetSocketAddress> transportAddresses = new ArrayList<>();
-transportAddresses.add(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9300));
-transportAddresses.add(new InetSocketAddress(InetAddress.getByName("10.2.3.1"), 9300));
-
-input.addSink(new ElasticsearchSink<>(config, transportAddresses, new ElasticsearchSinkFunction<String>() {
-    public IndexRequest createIndexRequest(String element) {
-        Map<String, Object> json = new HashMap<>();
-        json.put("data", element);
-
-        return Requests.indexRequest()
-                .index("my-index")
-                .type("my-type")
-                .source(json);
-    }
-
-    @Override
-    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
-        indexer.add(createIndexRequest(element));
-    }
-}));
-```
-{{< /tab >}}
 {{< tab "java, Elasticsearch 6.x and above" >}}

Review comment:
   I am thinking about removing the version here completely. At the top of 
the page we already have it, and including the minor version might cause even 
more confusion because we do support e.g. 6.5. What do you think?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16964: [FLINK-23791] Enable RocksDB info log by default

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #16964:
URL: https://github.com/apache/flink/pull/16964#issuecomment-904569778


   
   ## CI report:
   
   * 85158d496f87c535525f9e5ad05269038402f02f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22798)
 
   * d03ea828043b4989180de77a23b9582dcf246cd6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27332)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] alpreu commented on a change in pull request #17930: [FLINK-24326][docs][connectors] Update elasticsearch sink pages with new unified sink API (FLIP-143) implementation

2021-11-30 Thread GitBox


alpreu commented on a change in pull request #17930:
URL: https://github.com/apache/flink/pull/17930#discussion_r759916773



##
File path: docs/content/docs/connectors/datastream/elasticsearch.md
##
@@ -43,15 +43,11 @@ of the Elasticsearch installation:
   
   
 
-5.x
-{{< artifact flink-connector-elasticsearch5 >}}
-
-
-6.x
+<= 6.3.1
 {{< artifact flink-connector-elasticsearch6 >}}
 
 
-7 and later versions
+<= 7.5.1

Review comment:
   Please see the comment above. I'll have to check whether there is a 
dependency matrix, or whether we can confirm that with the Docker image used 
in the tests.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] alpreu commented on a change in pull request #17930: [FLINK-24326][docs][connectors] Update elasticsearch sink pages with new unified sink API (FLIP-143) implementation

2021-11-30 Thread GitBox


alpreu commented on a change in pull request #17930:
URL: https://github.com/apache/flink/pull/17930#discussion_r759916214



##
File path: docs/content/docs/connectors/datastream/elasticsearch.md
##
@@ -43,15 +43,11 @@ of the Elasticsearch installation:
   
   
 
-5.x
-{{< artifact flink-connector-elasticsearch5 >}}
-
-
-6.x
+<= 6.3.1

Review comment:
   The dependency version we are using is 6.3.1. I did not check whether we 
can bump it for ES 6, but I had checked for ES 7, where bumping the version 
also required code changes, so I left it as is.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16964: [FLINK-23791] Enable RocksDB info log by default

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #16964:
URL: https://github.com/apache/flink/pull/16964#issuecomment-904569778


   
   ## CI report:
   
   * 85158d496f87c535525f9e5ad05269038402f02f Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22798)
 
   * d03ea828043b4989180de77a23b9582dcf246cd6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-24980) Collect sizes of finished BLOCKING result partitions

2021-11-30 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-24980.
---
Fix Version/s: 1.15.0
   Resolution: Fixed

Done via 3761bbf68c183cf4728353948acea0a2ac40f2a7

> Collect sizes of finished BLOCKING result partitions
> 
>
> Key: FLINK-24980
> URL: https://issues.apache.org/jira/browse/FLINK-24980
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Metrics
>Reporter: Lijie Wang
>Assignee: Lijie Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> The adaptive batch scheduler needs to know the size of each result partition 
> when the task is finished.
> This issue will introduce the *numBytesProduced* counter and register it into 
> {*}TaskIOMetricGroup{*}, to record the size of each result partition. 
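
For context, registering such a counter might look roughly like the sketch 
below (the wrapper class is hypothetical; in the actual change the counter 
would live in TaskIOMetricGroup):

{code:java}
import org.apache.flink.metrics.Counter;
import org.apache.flink.metrics.MetricGroup;

/** Hypothetical sketch: tracks the bytes produced by one result partition. */
class ProducedBytesTracker {

    private final Counter numBytesProduced;

    // 'group' stands in for the task's TaskIOMetricGroup.
    ProducedBytesTracker(MetricGroup group) {
        this.numBytesProduced = group.counter("numBytesProduced");
    }

    // Called for every buffer written to the result partition.
    void onBufferWritten(int bufferSizeInBytes) {
        numBytesProduced.inc(bufferSizeInBytes);
    }
}
{code}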



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink-ml] weibozhao commented on a change in pull request #24: [Flink 24557] - Add knn algorithm to flink-ml

2021-11-30 Thread GitBox


weibozhao commented on a change in pull request #24:
URL: https://github.com/apache/flink-ml/pull/24#discussion_r759912742



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/HasFeatureColsDefaultAsNull.java
##
@@ -0,0 +1,28 @@
+package org.apache.flink.ml.common.param;
+
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+import org.apache.flink.ml.param.WithParams;
+
+/** Params of the names of the feature columns used for training in the input table. */
+public interface HasFeatureColsDefaultAsNull<T> extends WithParams<T> {
+    /**
+     * @cn-name array of feature column names
+     * @cn array of feature column names; all columns are selected by default
+     */
+    Param<String[]> FEATURE_COLS =

Review comment:
   done
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk closed pull request #17905: [FLINK-24980] Introduce numBytesProduced counter into ResultPartition to record the size of result partition.

2021-11-30 Thread GitBox


zhuzhurk closed pull request #17905:
URL: https://github.com/apache/flink/pull/17905


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] weibozhao commented on a change in pull request #24: [Flink 24557] - Add knn algorithm to flink-ml

2021-11-30 Thread GitBox


weibozhao commented on a change in pull request #24:
URL: https://github.com/apache/flink-ml/pull/24#discussion_r759909634



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/classification/knn/KnnModelData.java
##
@@ -0,0 +1,134 @@
+package org.apache.flink.ml.classification.knn;
+
+import org.apache.flink.api.common.serialization.Encoder;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.api.java.typeutils.RowTypeInfo;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.connector.file.src.reader.SimpleStreamFormat;
+import org.apache.flink.core.fs.FSDataInputStream;
+import org.apache.flink.table.types.DataType;
+import org.apache.flink.types.Row;
+
+import com.esotericsoftware.kryo.Kryo;
+import com.esotericsoftware.kryo.io.Input;
+import com.esotericsoftware.kryo.io.Output;
+
+import java.io.IOException;
+import java.io.OutputStream;
+import java.io.Serializable;
+import java.util.ArrayList;
+import java.util.Comparator;
+import java.util.List;
+
+/** Knn model data, which will be used to calculate the distances between nodes. */
+public class KnnModelData implements Serializable, Cloneable {
+private static final long serialVersionUID = -2940551481683238630L;

Review comment:
   done

##
File path: flink-ml-lib/src/main/java/org/apache/flink/ml/common/param/HasK.java
##
@@ -0,0 +1,41 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.common.param;
+
+import org.apache.flink.ml.param.IntParam;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.param.ParamValidators;
+import org.apache.flink.ml.param.WithParams;
+
+/** Interface for the shared K param. */
+public interface HasK<T> extends WithParams<T> {
+
+    /**
+     * topK
+     */
+    Param<Integer> K = new IntParam("k", "k", 10, ParamValidators.gt(0));

Review comment:
   done




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] weibozhao commented on a change in pull request #24: [Flink 24557] - Add knn algorithm to flink-ml

2021-11-30 Thread GitBox


weibozhao commented on a change in pull request #24:
URL: https://github.com/apache/flink-ml/pull/24#discussion_r759909541



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/classification/knn/KnnParams.java
##
@@ -0,0 +1,18 @@
+package org.apache.flink.ml.classification.knn;
+
+import org.apache.flink.ml.common.param.HasFeatureColsDefaultAsNull;
+import org.apache.flink.ml.common.param.HasK;
+import org.apache.flink.ml.common.param.HasLabelCol;
+import org.apache.flink.ml.common.param.HasPredictionCol;
+import org.apache.flink.ml.common.param.HasVectorColDefaultAsNull;
+import org.apache.flink.ml.param.WithParams;
+
+/** Knn parameters. */
+public interface KnnParams<T>
+        extends WithParams<T>,

Review comment:
   done




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25106) Support tombstone messages in FLINK's "kafka" connector

2021-11-30 Thread Varun Yeligar (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451559#comment-17451559
 ] 

Varun Yeligar commented on FLINK-25106:
---

[~MartijnVisser] , In our use case we want to receive all the messages 
(including tombstone messages). We didn't go ahead with the upsert-kafka 
connector because it doesn't support specifying the 
"[specific-offsets|https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/kafka/#scan-startup-specific-offsets]";
 from which we want to start consuming messages.

i.e. the "kafka" connector supports specific-offsets but not tombstone 
messages, whereas the "upsert-kafka" connector supports tombstone messages 
but not specific-offsets.

In our use case, we want to support both (tombstone messages + 
specific-offsets).

Please let me know if there is an alternative way to achieve both.
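
To illustrate the gap (the table name, topic, schema, and offsets below are 
made up): the "kafka" connector accepts the specific-offsets startup mode, 
while "upsert-kafka" does not expose a scan.startup.mode option at all.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SpecificOffsetsExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // A 'kafka' source starting from a specific offset; tombstones are
        // currently skipped by this connector. With 'upsert-kafka', the
        // scan.startup.* options below are not available.
        tEnv.executeSql(
                "CREATE TABLE events (" +
                "  user_id STRING," +
                "  cnt BIGINT" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'events'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'format' = 'json'," +
                "  'scan.startup.mode' = 'specific-offsets'," +
                "  'scan.startup.specific-offsets' = 'partition:0,offset:42'" +
                ")");
    }
}
{code}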

> Support tombstone messages in FLINK's "kafka" connector
> ---
>
> Key: FLINK-25106
> URL: https://issues.apache.org/jira/browse/FLINK-25106
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Core
>Reporter: Varun Yeligar
>Priority: Minor
>
> Currently, FLINK's "kafka" connector ignores all the tombstone messages, 
> whereas the "upsert-kafka" connector supports tombstone messages and sets the 
> type of the row to RowKind.DELETE ([Code 
> Link|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka/src/main/java/org/apache/flink/streaming/connectors/kafka/table/DynamicKafkaDeserializationSchema.java#L126]).
> I wanted to know if it is feasible to support tombstone messages in "kafka" 
> connector by setting all the value fields to NULL and the RowKind to DELETE. 
> I could also raise a PR with the respective changes if required.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17919: [FLINK-24419][table-planner] Trim to precision when casting to BINARY/VARBINARY

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17919:
URL: https://github.com/apache/flink/pull/17919#issuecomment-979659835


   
   ## CI report:
   
   * e7d585c7aab70989fe340f5f14b359952433ddf2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27327)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-25124) A deadlock occurs when the jdbc sink uses two consecutive dimension tables to associate

2021-11-30 Thread shizhengchao (Jira)
shizhengchao created FLINK-25124:


 Summary: A deadlock occurs when the jdbc sink uses two consecutive 
dimension tables to associate
 Key: FLINK-25124
 URL: https://issues.apache.org/jira/browse/FLINK-25124
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: 1.13.1
Reporter: shizhengchao


 

The sql statement is as follows:
{code:java}
INSERT INTO imei_phone_domestic_realtime
  SELECT
  t.data.imei AS imei,
  CAST(t.data.register_date_key AS bigint) AS register_date_key,
  c.agent_type AS channel_name,
  c.agent_short_name,
  c.agent_name,
  c.agent_chinese_name,
  c.isforeign AS agent_market_type,
  p.seriename AS series_name,
  p.salename AS sale_name,
  p.devname AS dev_name,
  p.devnamesource AS dev_name_source,
  p.color,
  p.isforeign AS product_market_type,
  p.carrier,
  p.lcname AS life_cycle,
  IFNULL(p.shipping_price,0) AS shipping_price,
  IFNULL(p.retail_price,0) AS  retail_price
  FROM kafka_imei_phone_domestic_realtime AS t
  LEFT JOIN dim_product FOR SYSTEM_TIME AS OF t.proctime AS p ON 
p.pn=t.item_code
  LEFT JOIN dim_customer FOR SYSTEM_TIME AS OF t.proctime AS c ON 
c.customer_code=t.customer_code
  where t.eventType='update'; {code}
There is a chance that a deadlock occurs:
{code:java}
"jdbc-upsert-output-format-thread-1" Id=84 BLOCKED on 
org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat@649788af 
owned by "Legacy Source Thread - Source: 
TableSourceScan(table=[[default_catalog, default_database, 
kafka_imei_phone_domestic_realtime]], fields=[data, eventType]) -> 
Calc(select=[data, data.item_code AS $f3], where=[(eventType = 
_UTF-16LE'update':VARCHAR(2147483647) CHARACTER SET "UTF-16LE")]) -> 
LookupJoin(table=[default_catalog.default_database.dim_product], 
joinType=[LeftOuterJoin], async=[false], lookup=[pn=$f3], select=[data, $f3, 
pn, color, isforeign, devname, salename, seriename, lcname, carrier, 
devnamesource, shipping_price, retail_price]) -> Calc(select=[data, color, 
isforeign, devname, salename, seriename, lcname, carrier, devnamesource, 
shipping_price, retail_price, data.customer_code AS $f31]) -> 
LookupJoin(table=[default_catalog.default_database.dim_customer], 
joinType=[LeftOuterJoin], async=[false], lookup=[customer_code=$f31], 
select=[data, color, isforeign, devname, salename, seriename, lcname, carrier, 
devnamesource, shipping_price, retail_price, $f31, customer_code, 
agent_short_name, agent_name, isforeign, agent_type, agent_chinese_name]) -> 
Calc(select=[data.imei AS imei, CAST(data.register_date_key) AS 
register_date_key, agent_type AS channel_name, agent_short_name, agent_name, 
agent_chinese_name, isforeign0 AS agent_market_type, seriename AS series_name, 
salename AS sale_name, devname AS dev_name, devnamesource AS dev_name_source, 
color, isforeign AS product_market_type, carrier, lcname AS life_cycle, 
IFNULL(shipping_price, 0:DECIMAL(10, 0)) AS shipping_price, 
IFNULL(retail_price, 0:DECIMAL(10, 0)) AS retail_price]) -> 
NotNullEnforcer(fields=[imei]) -> Sink: 
Sink(table=[default_catalog.default_database.imei_phone_domestic_realtime], 
fields=[imei, register_date_key, channel_name, agent_short_name, agent_name, 
agent_chinese_name, agent_market_type, series_name, sale_name, dev_name, 
dev_name_source, color, product_market_type, carrier, life_cycle, 
shipping_price, retail_price]) (6/12)#0" Id=82
    at 
org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat.lambda$open$0(JdbcBatchingOutputFormat.java:124)
    -  blocked on 
org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat@649788af
    at 
org.apache.flink.connector.jdbc.internal.JdbcBatchingOutputFormat$$Lambda$344/21845506.run(Unknown
 Source)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ...    Number of locked synchronizers = 1
    - java.util.concurrent.ThreadPoolExecutor$Worker@325612a2 {code}
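
An illustrative reduction of the blocking pattern in the dump (a hypothetical 
class, not the actual Flink code): the scheduled flusher thread and the 
task's writing thread synchronize on the same output-format instance, so 
whichever side holds the lock blocks the other.

{code:java}
/** Hypothetical reduction of the contention seen in the thread dump. */
class BatchingOutput {

    // Runs on "jdbc-upsert-output-format-thread-1" (the scheduled flusher).
    synchronized void scheduledFlush() {
        // flush the buffered batch to the database
    }

    // Runs on the task thread (here, the legacy source thread).
    synchronized void write(Object record) {
        // buffer the record; flush when the batch is full
    }
}
{code}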
 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-25099) flink on yarn Accessing two HDFS Clusters

2021-11-30 Thread chenqizhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451554#comment-17451554
 ] 

chenqizhu edited comment on FLINK-25099 at 12/1/21, 7:01 AM:
-

This is my submission CLI example. As you can see, it's very simple:

{code:java}
bin/flink run -t yarn-per-job  -Dexecution.checkpointing.interval=60s 
examples/streaming/TopSpeedWindowing.jar
{code}

If the NodeManager is using the default YARN config, why would it throw 
exceptions like 'java.net.UnknownHostException: flinkcluster'? 
After all, 'flinkcluster' is client-side configuration. It seems contradictory.
[~zuston]



was (Author: libra_816):
This is my submission CLI example. As you can see, it's very simple:

{code:java}
bin/flink run -t yarn-per-job  -Dexecution.checkpointing.interval=60s 
examples/streaming/TopSpeedWindowing.jar
{code}

If the NodeManager is using the default YARN config, why would it throw 
exceptions like 'java.net.UnknownHostException: flinkcluster'? 
After all, 'flinkcluster' is client-side configuration. It seems contradictory.



> flink on yarn Accessing two HDFS Clusters
> -
>
> Key: FLINK-25099
> URL: https://issues.apache.org/jira/browse/FLINK-25099
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, FileSystems, Runtime / State Backends
>Affects Versions: 1.13.3
> Environment: flink : 1.13.3
> hadoop : 3.3.0
>Reporter: chenqizhu
>Priority: Major
> Attachments: flink-chenqizhu-client-hdfsn21n163.log
>
>
> Flink version 1.13 supports configuration of Hadoop properties in 
> flink-conf.yaml via flink.hadoop.*. There is a requirement to write 
> checkpoints to an HDFS cluster with SSDs (called cluster B) to speed up 
> checkpoint writing, but this HDFS cluster is not the default HDFS of the 
> Flink client (called cluster A). flink-conf.yaml is configured with 
> nameservices for both cluster A and cluster B, similar to HDFS federation mode.
> The configuration is as follows:
>  
> {code:java}
> flink.hadoop.dfs.nameservices: ACluster,BCluster
> flink.hadoop.fs.defaultFS: hdfs://BCluster
> flink.hadoop.dfs.ha.namenodes.ACluster: nn1,nn2
> flink.hadoop.dfs.namenode.rpc-address.ACluster.nn1: 10.:9000
> flink.hadoop.dfs.namenode.http-address.ACluster.nn1: 10.:50070
> flink.hadoop.dfs.namenode.rpc-address.ACluster.nn2: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.ACluster.nn2: 10.xx:50070
> flink.hadoop.dfs.client.failover.proxy.provider.ACluster: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> flink.hadoop.dfs.ha.namenodes.BCluster: nn1,nn2
> flink.hadoop.dfs.namenode.rpc-address.BCluster.nn1: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.BCluster.nn1: 10.xx:50070
> flink.hadoop.dfs.namenode.rpc-address.BCluster.nn2: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.BCluster.nn2: 10.x:50070
> flink.hadoop.dfs.client.failover.proxy.provider.BCluster: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> {code}
>  
> However, an error occurred during startup of the job, reported as follows.
> (If the configuration is changed back to the Flink client's default HDFS 
> cluster, the job starts normally: flink.hadoop.fs.defaultFS: hdfs://ACluster)
> {noformat}
> Caused by: BCluster
> java.net.UnknownHostException: BCluster
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:448)
> at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:139)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:374)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:308)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:184)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3414)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
> at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:270)
> at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:68)
> at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:415)
> at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:412)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:412)
> at 
> org.apache

[jira] [Commented] (FLINK-25099) flink on yarn Accessing two HDFS Clusters

2021-11-30 Thread chenqizhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451554#comment-17451554
 ] 

chenqizhu commented on FLINK-25099:
---

This is my submission CLI example. As you can see, it's very simple:

{code:java}
bin/flink run -t yarn-per-job  -Dexecution.checkpointing.interval=60s 
examples/streaming/TopSpeedWindowing.jar
{code}

If the NodeManager is using the default YARN config, why would it throw 
exceptions like 'java.net.UnknownHostException: flinkcluster'? 
After all, 'flinkcluster' is client-side configuration. It seems contradictory.



> flink on yarn Accessing two HDFS Clusters
> -
>
> Key: FLINK-25099
> URL: https://issues.apache.org/jira/browse/FLINK-25099
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, FileSystems, Runtime / State Backends
>Affects Versions: 1.13.3
> Environment: flink : 1.13.3
> hadoop : 3.3.0
>Reporter: chenqizhu
>Priority: Major
> Attachments: flink-chenqizhu-client-hdfsn21n163.log
>
>
> Flink version 1.13 supports configuration of Hadoop properties in 
> flink-conf.yaml via flink.hadoop.*. There is a requirement to write 
> checkpoints to an HDFS cluster with SSDs (called cluster B) to speed up 
> checkpoint writing, but this HDFS cluster is not the default HDFS of the 
> Flink client (called cluster A). flink-conf.yaml is configured with 
> nameservices for both cluster A and cluster B, similar to HDFS federation mode.
> The configuration is as follows:
>  
> {code:java}
> flink.hadoop.dfs.nameservices: ACluster,BCluster
> flink.hadoop.fs.defaultFS: hdfs://BCluster
> flink.hadoop.dfs.ha.namenodes.ACluster: nn1,nn2
> flink.hadoop.dfs.namenode.rpc-address.ACluster.nn1: 10.:9000
> flink.hadoop.dfs.namenode.http-address.ACluster.nn1: 10.:50070
> flink.hadoop.dfs.namenode.rpc-address.ACluster.nn2: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.ACluster.nn2: 10.xx:50070
> flink.hadoop.dfs.client.failover.proxy.provider.ACluster: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> flink.hadoop.dfs.ha.namenodes.BCluster: nn1,nn2
> flink.hadoop.dfs.namenode.rpc-address.BCluster.nn1: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.BCluster.nn1: 10.xx:50070
> flink.hadoop.dfs.namenode.rpc-address.BCluster.nn2: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.BCluster.nn2: 10.x:50070
> flink.hadoop.dfs.client.failover.proxy.provider.BCluster: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> {code}
>  
> However, an error occurred during startup of the job, reported as follows.
> (If the configuration is changed back to the Flink client's default HDFS 
> cluster, the job starts normally: flink.hadoop.fs.defaultFS: hdfs://ACluster)
> {noformat}
> Caused by: BCluster
> java.net.UnknownHostException: BCluster
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:448)
> at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:139)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:374)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:308)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:184)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3414)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
> at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:270)
> at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:68)
> at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:415)
> at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:412)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:412)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:247)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:240)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:228)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 

[jira] [Commented] (FLINK-23324) Postgres of JDBC Connector enable case-sensitive.

2021-11-30 Thread Ada Wong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451550#comment-17451550
 ] 

Ada Wong commented on FLINK-23324:
--

[~jark]  please assign someone to review it.

> Postgres of JDBC Connector enable case-sensitive.
> -
>
> Key: FLINK-23324
> URL: https://issues.apache.org/jira/browse/FLINK-23324
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.13.1, 1.12.4
>Reporter: Ada Wong
>Assignee: Ada Wong
>Priority: Major
>  Labels: pull-request-available
>
> Now the PostgresDialect is case-insensitive. I think this is a bug.
> https://stackoverflow.com/questions/20878932/are-postgresql-column-names-case-sensitive
> https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS
> Could we delete PostgresDialect#quoteIdentifier and use the superclass 
> implementation instead?
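
For illustration, the default quoting the ticket proposes to fall back to is 
roughly the following (a sketch; the real method is JdbcDialect#quoteIdentifier):

{code:java}
/** Illustrative fragment; not the actual dialect class. */
class CaseSensitiveQuoting {

    // Quoted identifiers keep their case in Postgres, while unquoted ones
    // are folded to lower case, which is what makes the current dialect
    // effectively case-insensitive.
    String quoteIdentifier(String identifier) {
        return "\"" + identifier + "\"";
    }
}
{code}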



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-18779) Support the SupportsFilterPushDown interface for LookupTableSource

2021-11-30 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-18779:


Assignee: Yuan Zhu

> Support the SupportsFilterPushDown interface for LookupTableSource
> --
>
> Key: FLINK-18779
> URL: https://issues.apache.org/jira/browse/FLINK-18779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Yuan Zhu
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-18779) Support the SupportsFilterPushDown interface for LookupTableSource

2021-11-30 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451544#comment-17451544
 ] 

Jingsong Lee commented on FLINK-18779:
--

[~straw]  Thanks, CC [~godfreyhe] 

> Support the SupportsFilterPushDown interface for LookupTableSource
> --
>
> Key: FLINK-18779
> URL: https://issues.apache.org/jira/browse/FLINK-18779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25116) Fabric8FlinkKubeClientITCase hangs on Azure

2021-11-30 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-25116:

Labels: test-stability  (was: )

> Fabric8FlinkKubeClientITCase hangs on Azure
> ---
>
> Key: FLINK-25116
> URL: https://issues.apache.org/jira/browse/FLINK-25116
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes, Tests
>Reporter: Yun Tang
>Priority: Major
>  Labels: test-stability
>
> Instance: 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=27208&view=logs&j=bea52777-eaf8-5663-8482-18fbc3630e81&t=b2642e3a-5b86-574d-4c8a-f7e2842bfb14]
>  
> {code:java}
> 2021-11-29T13:18:56.6420610Z Nov 29 13:18:56 Invoking mvn with 
> '/home/vsts/maven_cache/apache-maven-3.2.5/bin/mvn 
> -Dmaven.repo.local=/home/vsts/work/1/.m2/repository 
> -Dmaven.wagon.http.pool=false -Dorg.slf4j.simpleLogger.showDateTime=true 
> -Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS 
> -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=warn
>  --no-snapshot-updates -B -Dhadoop.version=2.8.3 -Dinclude_hadoop_aws 
> -Dscala-2.12  --settings 
> /home/vsts/work/1/s/tools/ci/google-mirror-settings.xml  test 
> -Dlog.dir=/home/vsts/work/_temp/debug_files 
> -Dlog4j.configurationFile=file:///home/vsts/work/1/s/flink-end-to-end-tests/../tools/ci/log4j.properties
>  -Dtest=org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClientITCase'
> 2021-11-29T13:19:16.0638794Z Nov 29 13:19:16 [INFO] --- 
> maven-surefire-plugin:3.0.0-M5:test (default-test) @ flink-kubernetes ---
> 2021-11-29T17:10:39.7133994Z 
> ==
> 2021-11-29T17:10:39.7134714Z === WARNING: This task took already 95% of the 
> available time budget of 282 minutes === {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25117) NoSuchMethodError getCatalog()

2021-11-30 Thread zzt (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451540#comment-17451540
 ] 

zzt commented on FLINK-25117:
-

[~wenlong.lwl]  I used the flink-1.13.3 image with jars from 1.13.3 and it 
produces this problem. I mistyped the version in the Environment field.

> NoSuchMethodError getCatalog()
> --
>
> Key: FLINK-25117
> URL: https://issues.apache.org/jira/browse/FLINK-25117
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.3
> Environment: offical docker image,  flink:1.13.3-scala_2.12
>Reporter: zzt
>Priority: Major
>
> {code:java}
> Flink SQL> insert into `wide_order` (`user_id`, `row_num`, `sum`)
> > select `t`.`receiver_user_id`, `t`.`rowNum`, `t`.`total`
> > from (select `t`.`receiver_user_id`, `t`.`total`, ROW_NUMBER() OVER (ORDER 
> > BY total desc) as `rowNum`
> >       from (select `order_view`.`receiver_user_id`, 
> > sum(`order_view`.`total`) as `total`
> >             from `order_view` where create_time > '2021-11-01 00:24:55.453'
> >             group by `order_view`.`receiver_user_id`) `t`) `t`
> > where `rowNum` <= 1;
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.table.catalog.CatalogManager$TableLookupResult.getCatalog()Ljava/util/Optional;
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.extractTableStats(DatabaseCalciteSchema.java:106)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getStatistic(DatabaseCalciteSchema.java:90)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.lambda$getTable$0(DatabaseCalciteSchema.java:79)
>     at java.util.Optional.map(Optional.java:215)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getTable(DatabaseCalciteSchema.java:74)
>     at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:83)
>     at org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:289)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntryFrom(SqlValidatorUtil.java:1059)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntry(SqlValidatorUtil.java:1016)
>     at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:119)
>     at 
> org.apache.flink.table.planner.plan.FlinkCalciteCatalogReader.getTable(FlinkCalciteCatalogReader.java:86)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter$.appendPartitionAndNullsProjects(PreValidateReWriter.scala:116)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:56)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:47)
>     at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:113)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:107)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:205)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$parseStatement$1(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:90)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.cli.CliClient.parseCommand(CliClient.java:385)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:326)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95)
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
>     ... 1 more
> Shutting down the session...
> done. {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25117) NoSuchMethodError getCatalog()

2021-11-30 Thread zzt (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zzt updated FLINK-25117:

Environment: offical docker image,  flink:1.13.3-scala_2.12  (was: offical 
docker image,  flink:1.13.2-scala_2.12)

> NoSuchMethodError getCatalog()
> --
>
> Key: FLINK-25117
> URL: https://issues.apache.org/jira/browse/FLINK-25117
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.3
> Environment: offical docker image,  flink:1.13.3-scala_2.12
>Reporter: zzt
>Priority: Major
>
> {code:java}
> Flink SQL> insert into `wide_order` (`user_id`, `row_num`, `sum`)
> > select `t`.`receiver_user_id`, `t`.`rowNum`, `t`.`total`
> > from (select `t`.`receiver_user_id`, `t`.`total`, ROW_NUMBER() OVER (ORDER 
> > BY total desc) as `rowNum`
> >       from (select `order_view`.`receiver_user_id`, 
> > sum(`order_view`.`total`) as `total`
> >             from `order_view` where create_time > '2021-11-01 00:24:55.453'
> >             group by `order_view`.`receiver_user_id`) `t`) `t`
> > where `rowNum` <= 1;
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.table.catalog.CatalogManager$TableLookupResult.getCatalog()Ljava/util/Optional;
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.extractTableStats(DatabaseCalciteSchema.java:106)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getStatistic(DatabaseCalciteSchema.java:90)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.lambda$getTable$0(DatabaseCalciteSchema.java:79)
>     at java.util.Optional.map(Optional.java:215)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getTable(DatabaseCalciteSchema.java:74)
>     at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:83)
>     at org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:289)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntryFrom(SqlValidatorUtil.java:1059)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntry(SqlValidatorUtil.java:1016)
>     at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:119)
>     at 
> org.apache.flink.table.planner.plan.FlinkCalciteCatalogReader.getTable(FlinkCalciteCatalogReader.java:86)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter$.appendPartitionAndNullsProjects(PreValidateReWriter.scala:116)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:56)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:47)
>     at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:113)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:107)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:205)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$parseStatement$1(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:90)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.cli.CliClient.parseCommand(CliClient.java:385)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:326)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95)
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
>     ... 1 more
> Shutting down the session...
> done. {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-18779) Support the SupportsFilterPushDown interface for LookupTableSource

2021-11-30 Thread Yuan Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451536#comment-17451536
 ] 

Yuan Zhu edited comment on FLINK-18779 at 12/1/21, 6:35 AM:


Hi, [~twalthr], [~lzljs3620320], [~matriv], [~slinkydeveloper]. No one has 
worked on this ticket for a long time; could anyone assign it to me and 
review my PR?

Looking forward to your comments, thanks.


was (Author: straw):
Hi, [~twalthr] ,[~lzljs3620320], [~matriv], [~slinkydeveloper]. There is no one 
to work on this ticket for a long time, could anyone help assign it to me and 
review my PR.

Looking for your comment, thanks.

> Support the SupportsFilterPushDown interface for LookupTableSource
> --
>
> Key: FLINK-18779
> URL: https://issues.apache.org/jira/browse/FLINK-18779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17971: Update kafka.md

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17971:
URL: https://github.com/apache/flink/pull/17971#issuecomment-983331984


   
   ## CI report:
   
   * 8ae9baf7cb758e0df2e70d399acc08a13a651aea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27330)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25123) Improve expression description in SQL operator

2021-11-30 Thread Wenlong Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenlong Lyu updated FLINK-25123:

Fix Version/s: 1.15.0

> Improve expression description in SQL operator
> --
>
> Key: FLINK-25123
> URL: https://issues.apache.org/jira/browse/FLINK-25123
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
> Fix For: 1.15.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25076) Simplify name of SQL operators

2021-11-30 Thread Wenlong Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenlong Lyu updated FLINK-25076:

Fix Version/s: 1.15.0

> Simplify name of SQL operators
> --
>
> Key: FLINK-25076
> URL: https://issues.apache.org/jira/browse/FLINK-25076
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
> Fix For: 1.15.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25076) Simplify name of SQL operators

2021-11-30 Thread Wenlong Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenlong Lyu updated FLINK-25076:

Component/s: Table SQL / Planner

> Simplify name of SQL operators
> --
>
> Key: FLINK-25076
> URL: https://issues.apache.org/jira/browse/FLINK-25076
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25074) Simplify name of window operators in DS by moving details to description

2021-11-30 Thread Wenlong Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenlong Lyu updated FLINK-25074:

Fix Version/s: 1.15.0

> Simplify name of window operators in DS by moving details to description
> 
>
> Key: FLINK-25074
> URL: https://issues.apache.org/jira/browse/FLINK-25074
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
> Fix For: 1.15.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25074) Simplify name of window operators in DS by moving details to description

2021-11-30 Thread Wenlong Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenlong Lyu updated FLINK-25074:

Component/s: API / DataStream

> Simplify name of window operators in DS by moving details to description
> 
>
> Key: FLINK-25074
> URL: https://issues.apache.org/jira/browse/FLINK-25074
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25073) Introduce Tree Mode description for job vertex

2021-11-30 Thread Wenlong Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenlong Lyu updated FLINK-25073:

Component/s: API / DataStream

> Introduce Tree Mode description for job vertex
> --
>
> Key: FLINK-25073
> URL: https://issues.apache.org/jira/browse/FLINK-25073
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
> Fix For: 1.15.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #17971: Update kafka.md

2021-11-30 Thread GitBox


flinkbot commented on pull request #17971:
URL: https://github.com/apache/flink/pull/17971#issuecomment-983331969






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25073) Introduce Tree Mode description for job vertex

2021-11-30 Thread Wenlong Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenlong Lyu updated FLINK-25073:

Fix Version/s: 1.15.0

> Introduce Tree Mode description for job vertex
> --
>
> Key: FLINK-25073
> URL: https://issues.apache.org/jira/browse/FLINK-25073
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
> Fix For: 1.15.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25072) Introduce description for operator

2021-11-30 Thread Wenlong Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenlong Lyu updated FLINK-25072:

Fix Version/s: 1.15.0

> Introduce description for operator
> --
>
> Key: FLINK-25072
> URL: https://issues.apache.org/jira/browse/FLINK-25072
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-18779) Support the SupportsFilterPushDown interface for LookupTableSource

2021-11-30 Thread Yuan Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451536#comment-17451536
 ] 

Yuan Zhu commented on FLINK-18779:
--

Hi [~twalthr], [~lzljs3620320], [~matriv], [~slinkydeveloper]. No one has worked 
on this ticket for a long time; could anyone assign it to me and review my PR?

> Support the SupportsFilterPushDown interface for LookupTableSource
> --
>
> Key: FLINK-18779
> URL: https://issues.apache.org/jira/browse/FLINK-18779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25070) FLIP-195: Improve the name and structure of vertex and operator name for job

2021-11-30 Thread Wenlong Lyu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenlong Lyu updated FLINK-25070:

Fix Version/s: 1.15.0

> FLIP-195: Improve the name and structure of vertex and operator name for job
> 
>
> Key: FLINK-25070
> URL: https://issues.apache.org/jira/browse/FLINK-25070
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Runtime / Web Frontend, Table SQL / 
> Runtime
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
> Fix For: 1.15.0
>
>
> this is an umbrella issue tracking the improvement of operator/vertex names 
> in flink: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-195%3A+Improve+the+name+and+structure+of+vertex+and+operator+name+for+job



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-18779) Support the SupportsFilterPushDown interface for LookupTableSource

2021-11-30 Thread Yuan Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451536#comment-17451536
 ] 

Yuan Zhu edited comment on FLINK-18779 at 12/1/21, 6:31 AM:


Hi [~twalthr], [~lzljs3620320], [~matriv], [~slinkydeveloper]. No one has worked 
on this ticket for a long time; could anyone assign it to me and review my PR?

Looking forward to your comments, thanks.


was (Author: straw):
Hi, [~twalthr] ,[~lzljs3620320], [~matriv], [~slinkydeveloper]. There is no one 
to work on this ticket for a long time, could anyone help assign it to me and 
review my PR.

> Support the SupportsFilterPushDown interface for LookupTableSource
> --
>
> Key: FLINK-18779
> URL: https://issues.apache.org/jira/browse/FLINK-18779
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25076) Simplify name of SQL operators

2021-11-30 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-25076:
--

Assignee: Wenlong Lyu

> Simplify name of SQL operators
> --
>
> Key: FLINK-25076
> URL: https://issues.apache.org/jira/browse/FLINK-25076
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25074) Simplify name of window operators in DS by moving details to description

2021-11-30 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-25074:
--

Assignee: Wenlong Lyu

> Simplify name of window operators in DS by moving details to description
> 
>
> Key: FLINK-25074
> URL: https://issues.apache.org/jira/browse/FLINK-25074
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25123) Improve expression description in SQL operator

2021-11-30 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-25123:
--

Assignee: Wenlong Lyu

> Improve expression description in SQL operator
> --
>
> Key: FLINK-25123
> URL: https://issues.apache.org/jira/browse/FLINK-25123
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25118) Add vertex index as prefix in vertex name

2021-11-30 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-25118:
--

Assignee: Wenlong Lyu

> Add vertex index as prefix in vertex name
> -
>
> Key: FLINK-25118
> URL: https://issues.apache.org/jira/browse/FLINK-25118
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25073) Introduce Tree Mode description for job vertex

2021-11-30 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-25073:
--

Assignee: Wenlong Lyu

> Introduce Tree Mode description for job vertex
> --
>
> Key: FLINK-25073
> URL: https://issues.apache.org/jira/browse/FLINK-25073
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25072) Introduce description for operator

2021-11-30 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-25072:
--

Assignee: Wenlong Lyu

> Introduce description for operator
> --
>
> Key: FLINK-25072
> URL: https://issues.apache.org/jira/browse/FLINK-25072
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (FLINK-25070) FLIP-195: Improve the name and structure of vertex and operator name for job

2021-11-30 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-25070:
--

Assignee: Wenlong Lyu

> FLIP-195: Improve the name and structure of vertex and operator name for job
> 
>
> Key: FLINK-25070
> URL: https://issues.apache.org/jira/browse/FLINK-25070
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream, Runtime / Web Frontend, Table SQL / 
> Runtime
>Reporter: Wenlong Lyu
>Assignee: Wenlong Lyu
>Priority: Major
>
> this is an umbrella issue tracking the improvement of operator/vertex names 
> in flink: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-195%3A+Improve+the+name+and+structure+of+vertex+and+operator+name+for+job



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25123) Improve expression description in SQL operator

2021-11-30 Thread Wenlong Lyu (Jira)
Wenlong Lyu created FLINK-25123:
---

 Summary: Improve expression description in SQL operator
 Key: FLINK-25123
 URL: https://issues.apache.org/jira/browse/FLINK-25123
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Wenlong Lyu






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] mingwayXue opened a new pull request #17971: Update kafka.md

2021-11-30 Thread GitBox


mingwayXue opened a new pull request #17971:
URL: https://github.com/apache/flink/pull/17971


   fixed kafka markdown error
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25117) NoSuchMethodError getCatalog()

2021-11-30 Thread Wenlong Lyu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451528#comment-17451528
 ] 

Wenlong Lyu commented on FLINK-25117:
-

[~zzt] I would suggest not putting any Flink jars from 1.13.3 into the lib 
directory if you are running the job on Flink 1.13.2. You can either use jars 
from 1.13.2 or switch to the flink:1.13.3 image.
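
One way to confirm where the conflicting class actually comes from is to print 
its code source. A minimal sketch using plain JDK APIs (this is a debugging aid, 
not Flink tooling; the class name is taken from the stack trace below, and it 
should be run with the same classpath the SQL client uses):
{code:java}
public class WhichJar {
    public static void main(String[] args) throws ClassNotFoundException {
        // The class whose method is missing at runtime, per the stack trace.
        Class<?> clazz =
                Class.forName("org.apache.flink.table.catalog.CatalogManager");
        // Prints the jar the class was loaded from; a 1.13.3 path on a
        // 1.13.2 cluster would confirm that lib/ mixes versions.
        System.out.println(
                clazz.getProtectionDomain().getCodeSource().getLocation());
    }
}
{code}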

> NoSuchMethodError getCatalog()
> --
>
> Key: FLINK-25117
> URL: https://issues.apache.org/jira/browse/FLINK-25117
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.3
> Environment: official docker image, flink:1.13.2-scala_2.12
>Reporter: zzt
>Priority: Major
>
> {code:java}
> Flink SQL> insert into `wide_order` (`user_id`, `row_num`, `sum`)
> > select `t`.`receiver_user_id`, `t`.`rowNum`, `t`.`total`
> > from (select `t`.`receiver_user_id`, `t`.`total`, ROW_NUMBER() OVER (ORDER 
> > BY total desc) as `rowNum`
> >       from (select `order_view`.`receiver_user_id`, 
> > sum(`order_view`.`total`) as `total`
> >             from `order_view` where create_time > '2021-11-01 00:24:55.453'
> >             group by `order_view`.`receiver_user_id`) `t`) `t`
> > where `rowNum` <= 1;
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.table.catalog.CatalogManager$TableLookupResult.getCatalog()Ljava/util/Optional;
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.extractTableStats(DatabaseCalciteSchema.java:106)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getStatistic(DatabaseCalciteSchema.java:90)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.lambda$getTable$0(DatabaseCalciteSchema.java:79)
>     at java.util.Optional.map(Optional.java:215)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getTable(DatabaseCalciteSchema.java:74)
>     at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:83)
>     at org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:289)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntryFrom(SqlValidatorUtil.java:1059)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntry(SqlValidatorUtil.java:1016)
>     at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:119)
>     at 
> org.apache.flink.table.planner.plan.FlinkCalciteCatalogReader.getTable(FlinkCalciteCatalogReader.java:86)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter$.appendPartitionAndNullsProjects(PreValidateReWriter.scala:116)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:56)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:47)
>     at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:113)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:107)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:205)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$parseStatement$1(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:90)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.cli.CliClient.parseCommand(CliClient.java:385)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:326)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95)
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
>     ... 1 more
> Shutting down the session...
> done. {code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #16609: [FLINK-23324][connector-jdbc] fix Postgres case-insensitive.

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #16609:
URL: https://github.com/apache/flink/pull/16609#issuecomment-887558020


   
   ## CI report:
   
   * 25f7778c5ee8bde238dff84ae777a859931a6f78 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27302)
 
   * 906f6d18caf7662f1563ecd952bd2eaefd06 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27329)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25102) ParquetFileSystemITCase.testPartialDynamicPartition failed on azure

2021-11-30 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451523#comment-17451523
 ] 

Yun Gao commented on FLINK-25102:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=27319&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=2e426bf0-b717-56bb-ab62-d63086457354&l=11546]

>  ParquetFileSystemITCase.testPartialDynamicPartition failed on azure
> 
>
> Key: FLINK-25102
> URL: https://issues.apache.org/jira/browse/FLINK-25102
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.15.0
>Reporter: Yun Gao
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.15.0
>
>
> {code:java}
> Nov 29 23:00:01   at 
> org.apache.hadoop.conf.Configuration.getResource(Configuration.java:2780)
> Nov 29 23:00:01   at 
> org.apache.hadoop.conf.Configuration.getStreamReader(Configuration.java:3036)
> Nov 29 23:00:01   at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2995)
> Nov 29 23:00:01   at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2968)
> Nov 29 23:00:01   at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
> Nov 29 23:00:01   at 
> org.apache.hadoop.conf.Configuration.get(Configuration.java:1200)
> Nov 29 23:00:01   at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1254)
> Nov 29 23:00:01   at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1479)
> Nov 29 23:00:01   at 
> org.apache.parquet.hadoop.codec.SnappyCodec.createInputStream(SnappyCodec.java:75)
> Nov 29 23:00:01   at 
> org.apache.parquet.hadoop.CodecFactory$HeapBytesDecompressor.decompress(CodecFactory.java:109)
> Nov 29 23:00:01   at 
> org.apache.parquet.hadoop.ColumnChunkPageReadStore$ColumnChunkPageReader$1.visit(ColumnChunkPageReadStore.java:103)
> Nov 29 23:00:01   at 
> org.apache.parquet.hadoop.ColumnChunkPageReadStore$ColumnChunkPageReader$1.visit(ColumnChunkPageReadStore.java:99)
> Nov 29 23:00:01   at 
> org.apache.parquet.column.page.DataPageV1.accept(DataPageV1.java:120)
> Nov 29 23:00:01   at 
> org.apache.parquet.hadoop.ColumnChunkPageReadStore$ColumnChunkPageReader.readPage(ColumnChunkPageReadStore.java:99)
> Nov 29 23:00:01   at 
> org.apache.flink.formats.parquet.vector.reader.AbstractColumnReader.readToVector(AbstractColumnReader.java:154)
> Nov 29 23:00:01   at 
> org.apache.flink.formats.parquet.ParquetVectorizedInputFormat$ParquetReader.nextBatch(ParquetVectorizedInputFormat.java:390)
> Nov 29 23:00:01   at 
> org.apache.flink.formats.parquet.ParquetVectorizedInputFormat$ParquetReader.readBatch(ParquetVectorizedInputFormat.java:358)
> Nov 29 23:00:01   at 
> org.apache.flink.connector.file.src.impl.FileSourceSplitReader.fetch(FileSourceSplitReader.java:67)
> Nov 29 23:00:01   at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
> Nov 29 23:00:01   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:142)
> Nov 29 23:00:01   ... 6 more
> Nov 29 23:00:01 
> Nov 29 23:00:02 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 16.341 s - in 
> org.apache.flink.formats.parquet.ParquetFileCompactionITCase
> Nov 29 23:00:03 [ERROR] Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> Nov 29 23:00:03 [INFO] Running 
> org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase
> Nov 29 23:00:20 [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time 
> elapsed: 17.033 s - in 
> org.apache.flink.formats.parquet.ParquetFsStreamingSinkITCase
> Nov 29 23:00:21 [INFO] 
> Nov 29 23:00:21 [INFO] Results:
> Nov 29 23:00:21 [INFO] 
> Nov 29 23:00:21 [ERROR] Errors: 
> Nov 29 23:00:21 [ERROR] ParquetFileSystemITCase.testPartialDynamicPartition
> Nov 29 23:00:21 [ERROR]   Run 1: Failed to fetch next result
> Nov 29 23:00:21 [INFO]   Run 2: PASS
> Nov 29 23:00:21 [INFO] 
> Nov 29 23:00:21 [INFO] 
> Nov 29 23:00:21 [ERROR] Tests run: 42, Failures: 0, Errors: 1, Skipped: 1
>  {code}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=27235&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=2e426bf0-b717-56bb-ab62-d63086457354&l=11545



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #16609: [FLINK-23324][connector-jdbc] fix Postgres case-insensitive.

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #16609:
URL: https://github.com/apache/flink/pull/16609#issuecomment-887558020


   
   ## CI report:
   
   * 25f7778c5ee8bde238dff84ae777a859931a6f78 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27302)
 
   * 906f6d18caf7662f1563ecd952bd2eaefd06 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25117) NoSuchMethodError getCatalog()

2021-11-30 Thread zzt (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451516#comment-17451516
 ] 

zzt commented on FLINK-25117:
-

Hi [~wenlong.lwl], thanks for your response. I start the sql-client with the 
following command:
{code:bash}
./bin/sql-client.sh embedded --jar lib/flink-connector-kafka_2.12-1.13.3.jar 
lib/flink-connector-jdbc_2.12-1.13.3.jar {code}
`flink-table-api-java` is indeed in the `lib` dir. So I should not include 
`flink-table-api-java` in `lib`? Is there any documentation I can check to see 
which other jars should be excluded?

> NoSuchMethodError getCatalog()
> --
>
> Key: FLINK-25117
> URL: https://issues.apache.org/jira/browse/FLINK-25117
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.3
> Environment: official docker image, flink:1.13.2-scala_2.12
>Reporter: zzt
>Priority: Major
>
> {code:java}
> Flink SQL> insert into `wide_order` (`user_id`, `row_num`, `sum`)
> > select `t`.`receiver_user_id`, `t`.`rowNum`, `t`.`total`
> > from (select `t`.`receiver_user_id`, `t`.`total`, ROW_NUMBER() OVER (ORDER 
> > BY total desc) as `rowNum`
> >       from (select `order_view`.`receiver_user_id`, 
> > sum(`order_view`.`total`) as `total`
> >             from `order_view` where create_time > '2021-11-01 00:24:55.453'
> >             group by `order_view`.`receiver_user_id`) `t`) `t`
> > where `rowNum` <= 1;
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.table.catalog.CatalogManager$TableLookupResult.getCatalog()Ljava/util/Optional;
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.extractTableStats(DatabaseCalciteSchema.java:106)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getStatistic(DatabaseCalciteSchema.java:90)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.lambda$getTable$0(DatabaseCalciteSchema.java:79)
>     at java.util.Optional.map(Optional.java:215)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getTable(DatabaseCalciteSchema.java:74)
>     at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:83)
>     at org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:289)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntryFrom(SqlValidatorUtil.java:1059)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntry(SqlValidatorUtil.java:1016)
>     at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:119)
>     at 
> org.apache.flink.table.planner.plan.FlinkCalciteCatalogReader.getTable(FlinkCalciteCatalogReader.java:86)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter$.appendPartitionAndNullsProjects(PreValidateReWriter.scala:116)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:56)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:47)
>     at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:113)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:107)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:205)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$parseStatement$1(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:90)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.cli.CliClient.parseCommand(CliClient.java:385)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:326)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95)
>     at org.apache.flink.table.client.SqlClient.startClient(S

[jira] [Comment Edited] (FLINK-25117) NoSuchMethodError getCatalog()

2021-11-30 Thread zzt (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451516#comment-17451516
 ] 

zzt edited comment on FLINK-25117 at 12/1/21, 5:54 AM:
---

Hi [~wenlong.lwl], thanks for your response. I start the sql-client with the 
following command:
{code:bash}
./bin/sql-client.sh embedded --jar lib/flink-connector-kafka_2.12-1.13.3.jar 
lib/flink-connector-jdbc_2.12-1.13.3.jar {code}
`flink-table-api-java` is indeed in the `lib` dir. So I should not include 
`flink-table-api-java` in `lib`? Is there any documentation I can check to see 
which other jars should be excluded?


was (Author: JIRAUSER281018):
hi, [~wenlong.lwl] thanks for your reponse. I start the sql-client with 
following cmd:
{code:java}
./bin/sql-client.sh embedded --jar lib/flink-connector-kafka_2.12-1.13.3.jar 
lib/flink-connector-jdbc_2.12-1.13.3.jar {code}
`flink-table-api-java` is indeed in `lib` dir. So I should not include 
`flink-table-api-java` in `lib`? Any document for me to check if other jars to 
exclude?

> NoSuchMethodError getCatalog()
> --
>
> Key: FLINK-25117
> URL: https://issues.apache.org/jira/browse/FLINK-25117
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.3
> Environment: official docker image, flink:1.13.2-scala_2.12
>Reporter: zzt
>Priority: Major
>
> {code:java}
> Flink SQL> insert into `wide_order` (`user_id`, `row_num`, `sum`)
> > select `t`.`receiver_user_id`, `t`.`rowNum`, `t`.`total`
> > from (select `t`.`receiver_user_id`, `t`.`total`, ROW_NUMBER() OVER (ORDER 
> > BY total desc) as `rowNum`
> >       from (select `order_view`.`receiver_user_id`, 
> > sum(`order_view`.`total`) as `total`
> >             from `order_view` where create_time > '2021-11-01 00:24:55.453'
> >             group by `order_view`.`receiver_user_id`) `t`) `t`
> > where `rowNum` <= 1;
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.table.catalog.CatalogManager$TableLookupResult.getCatalog()Ljava/util/Optional;
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.extractTableStats(DatabaseCalciteSchema.java:106)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getStatistic(DatabaseCalciteSchema.java:90)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.lambda$getTable$0(DatabaseCalciteSchema.java:79)
>     at java.util.Optional.map(Optional.java:215)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getTable(DatabaseCalciteSchema.java:74)
>     at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:83)
>     at org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:289)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntryFrom(SqlValidatorUtil.java:1059)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntry(SqlValidatorUtil.java:1016)
>     at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:119)
>     at 
> org.apache.flink.table.planner.plan.FlinkCalciteCatalogReader.getTable(FlinkCalciteCatalogReader.java:86)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter$.appendPartitionAndNullsProjects(PreValidateReWriter.scala:116)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:56)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:47)
>     at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:113)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:107)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:205)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$parseStatement$1(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:90)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.cli.CliClient.parseCommand(CliClient.java:385)
>     at 
> org.apache.flink.ta

[jira] [Commented] (FLINK-25099) flink on yarn Accessing two HDFS Clusters

2021-11-30 Thread Junfan Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451514#comment-17451514
 ] 

Junfan Zhang commented on FLINK-25099:
--

If you could provide more detailed info, such as a submission CLI example, I may 
be able to reproduce it.

My guess is that the NodeManager does not know the Flink-side nameservice, 
because it uses the default YARN cluster config instead of the configured 
flink.hadoop.* properties.

[~libra_816]
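
One quick way to test this guess would be a minimal check run on a NodeManager 
host with only the node's default Hadoop configuration on the classpath. This is 
a hedged sketch using the standard Hadoop client API; the nameservice name 
matches the flink.hadoop.* entries quoted below:
{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class NameserviceCheck {
    public static void main(String[] args) throws Exception {
        // Loads only the node-local *-site.xml files, i.e. roughly what the
        // NodeManager sees during resource localization.
        Configuration conf = new Configuration();
        // This call fails with java.net.UnknownHostException: BCluster when
        // the nameservice is not defined in the node's hdfs-site.xml.
        FileSystem fs = FileSystem.get(URI.create("hdfs://BCluster/"), conf);
        System.out.println("Resolved: " + fs.getUri());
    }
}
{code}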

> flink on yarn Accessing two HDFS Clusters
> -
>
> Key: FLINK-25099
> URL: https://issues.apache.org/jira/browse/FLINK-25099
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, FileSystems, Runtime / State Backends
>Affects Versions: 1.13.3
> Environment: flink : 1.13.3
> hadoop : 3.3.0
>Reporter: chenqizhu
>Priority: Major
> Attachments: flink-chenqizhu-client-hdfsn21n163.log
>
>
> Flink 1.13 supports configuring Hadoop properties in flink-conf.yaml via 
> flink.hadoop.*. There is a requirement to write checkpoints to an HDFS cluster 
> with SSDs (called cluster B) to speed up checkpoint writing, but this cluster 
> is not the default HDFS of the Flink client (called cluster A). 
> flink-conf.yaml is configured with nameservices for both cluster A and 
> cluster B, similar to HDFS federation mode.
> The configuration is as follows:
>  
> {code:java}
> flink.hadoop.dfs.nameservices: ACluster,BCluster
> flink.hadoop.fs.defaultFS: hdfs://BCluster
> flink.hadoop.dfs.ha.namenodes.ACluster: nn1,nn2
> flink.hadoop.dfs.namenode.rpc-address.ACluster.nn1: 10.:9000
> flink.hadoop.dfs.namenode.http-address.ACluster.nn1: 10.:50070
> flink.hadoop.dfs.namenode.rpc-address.ACluster.nn2: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.ACluster.nn2: 10.xx:50070
> flink.hadoop.dfs.client.failover.proxy.provider.ACluster: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> flink.hadoop.dfs.ha.namenodes.BCluster: nn1,nn2
> flink.hadoop.dfs.namenode.rpc-address.BCluster.nn1: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.BCluster.nn1: 10.xx:50070
> flink.hadoop.dfs.namenode.rpc-address.BCluster.nn2: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.BCluster.nn2: 10.x:50070
> flink.hadoop.dfs.client.failover.proxy.provider.BCluster: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> {code}
>  
> However, the following error occurred during job startup. (If the 
> configuration item is changed back to the Flink client's local default HDFS 
> cluster, i.e. flink.hadoop.fs.defaultFS: hdfs://ACluster, the job can start 
> normally.)
> {noformat}
> Caused by: BCluster
> java.net.UnknownHostException: BCluster
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:448)
> at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:139)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:374)
> at org.apache.hadoop.hdfs.DFSClient.(DFSClient.java:308)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:184)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3414)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
> at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:270)
> at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:68)
> at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:415)
> at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:412)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:412)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:247)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:240)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:228)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java

[GitHub] [flink] flinkbot edited a comment on pull request #17970: [FLINK-25122] [flink-dist] Add variable expansion for FLINK_ENV_JAVA_OPTS

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17970:
URL: https://github.com/apache/flink/pull/17970#issuecomment-983310109


   
   ## CI report:
   
   * e8824fe7506c49979f6f151e8daf2eeb13f51c9c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27328)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17970: [FLINK-25122] [flink-dist] Add variable expansion for FLINK_ENV_JAVA_OPTS

2021-11-30 Thread GitBox


flinkbot commented on pull request #17970:
URL: https://github.com/apache/flink/pull/17970#issuecomment-983310907


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e8824fe7506c49979f6f151e8daf2eeb13f51c9c (Wed Dec 01 
05:49:43 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-25122).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25122) flink-dist/src/main/flink-bin/bin/flink does not expand variable for FLINK_ENV_JAVA_OPTS

2021-11-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25122:
---
Labels: pull-request-available pull-requests-available  (was: 
pull-requests-available)

> flink-dist/src/main/flink-bin/bin/flink does not expand variable for 
> FLINK_ENV_JAVA_OPTS
> 
>
> Key: FLINK-25122
> URL: https://issues.apache.org/jira/browse/FLINK-25122
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.12.5
>Reporter: L Z
>Priority: Major
>  Labels: pull-request-available, pull-requests-available
>
> According to the suggestion on 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/ops/debugging/application_profiling/]
>  or 
> [https://nightlies.apache.org/flink/flink-docs-release-1.12/ops/debugging/application_profiling.html]
>  
> after adding
> {code:yaml}
> env.java.opts: "-Xloggc:${FLINK_LOG_PREFIX}.gc.log <...omitted chars...>"
> {code}
> to flink-conf.yaml, the flink CLI `./bin/flink run` fails with:
> {code:bash}
> Invalid file name for use with -Xloggc: Filename can only contain the 
> characters [A-Z][a-z][0-9]-_.%[p|t] but it has been ${FLINK_LOG_PREFIX}.gc.log
> Note %p or %t can only be used once
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot commented on pull request #17970: [FLINK-25122] [flink-dist] Add variable expansion for FLINK_ENV_JAVA_OPTS

2021-11-30 Thread GitBox


flinkbot commented on pull request #17970:
URL: https://github.com/apache/flink/pull/17970#issuecomment-983310109


   
   ## CI report:
   
   * e8824fe7506c49979f6f151e8daf2eeb13f51c9c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25122) flink-dist/src/main/flink-bin/bin/flink does not expand variable for FLINK_ENV_JAVA_OPTS

2021-11-30 Thread L Z (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

L Z updated FLINK-25122:

Labels: pull-requests-available  (was: pull-request-available)

> flink-dist/src/main/flink-bin/bin/flink does not expand variable for 
> FLINK_ENV_JAVA_OPTS
> 
>
> Key: FLINK-25122
> URL: https://issues.apache.org/jira/browse/FLINK-25122
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.12.5
>Reporter: L Z
>Priority: Major
>  Labels: pull-requests-available
>
> According to the suggestion on 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/ops/debugging/application_profiling/]
>  or 
> [https://nightlies.apache.org/flink/flink-docs-release-1.12/ops/debugging/application_profiling.html]
>  
> after adding
> {code:yaml}
> env.java.opts: "-Xloggc:${FLINK_LOG_PREFIX}.gc.log <...omitted chars...>"
> {code}
> to flink-conf.yaml, the flink CLI `./bin/flink run` fails with:
> {code:bash}
> Invalid file name for use with -Xloggc: Filename can only contain the 
> characters [A-Z][a-z][0-9]-_.%[p|t] but it has been ${FLINK_LOG_PREFIX}.gc.log
> Note %p or %t can only be used once
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-25122) flink-dist/src/main/flink-bin/bin/flink does not expand variable for FLINK_ENV_JAVA_OPTS

2021-11-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-25122:
---
Labels: pull-request-available  (was: )

> flink-dist/src/main/flink-bin/bin/flink does not expand variable for 
> FLINK_ENV_JAVA_OPTS
> 
>
> Key: FLINK-25122
> URL: https://issues.apache.org/jira/browse/FLINK-25122
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.12.5
>Reporter: L Z
>Priority: Major
>  Labels: pull-request-available
>
> According to the suggestion on 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/ops/debugging/application_profiling/]
>  or 
> [https://nightlies.apache.org/flink/flink-docs-release-1.12/ops/debugging/application_profiling.html]
>  
> after adding
> {code:yaml}
> env.java.opts: "-Xloggc:${FLINK_LOG_PREFIX}.gc.log <...omitted chars...>"
> {code}
> to flink-conf.yaml, the flink CLI `./bin/flink run` fails with:
> {code:bash}
> Invalid file name for use with -Xloggc: Filename can only contain the 
> characters [A-Z][a-z][0-9]-_.%[p|t] but it has been ${FLINK_LOG_PREFIX}.gc.log
> Note %p or %t can only be used once
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] CodeFerriCode opened a new pull request #17970: [FLINK-25122] [flink-dist] Add variable expansion for FLINK_ENV_JAVA_OPTS

2021-11-30 Thread GitBox


CodeFerriCode opened a new pull request #17970:
URL: https://github.com/apache/flink/pull/17970


   and add FLINK_LOG_PREFIX variable
   
   
   
   ## What is the purpose of the change
   
   Add variable expansion for FLINK_ENV_JAVA_OPTS and introduce the variable 
FLINK_LOG_PREFIX in the flink client script, so that env.java.opts values from 
the documentation, such as "-Xlog:gc*:file=${FLINK_LOG_PREFIX}.gc.log", work.
   
   ## Brief change log
   
 - *Add FLINK_ENV_JAVA_OPTS=$(eval echo ${FLINK_ENV_JAVA_OPTS}) before 
starting the JVM.*
 - *Introduce variable FLINK_LOG_PREFIX, so that env.java.opts value 
containing this variable can get eval correctly.*
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25122) flink-dist/src/main/flink-bin/bin/flink does not expand variable for FLINK_ENV_JAVA_OPTS

2021-11-30 Thread L Z (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

L Z updated FLINK-25122:

Description: 
According to the suggestion on 
[https://nightlies.apache.org/flink/flink-docs-master/docs/ops/debugging/application_profiling/]
 or 
[https://nightlies.apache.org/flink/flink-docs-release-1.12/ops/debugging/application_profiling.html]
 

after adding
{code:yaml}
env.java.opts: "-Xloggc:${FLINK_LOG_PREFIX}.gc.log <...omitted chars...>"
{code}
to flink-conf.yaml, the flink CLI `./bin/flink run` fails with:
{code:bash}
Invalid file name for use with -Xloggc: Filename can only contain the 
characters [A-Z][a-z][0-9]-_.%[p|t] but it has been ${FLINK_LOG_PREFIX}.gc.log
Note %p or %t can only be used once
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.{code}

  was:
After adding 
{code:yaml}
env.java.opts: "-Xloggc:${FLINK_LOG_PREFIX}.gc.log <...omitted chars...>"
{code}
to flink-conf.yaml.

flink CLI fails with
{code:bash}
Invalid file name for use with -Xloggc: Filename can only contain the 
characters [A-Z][a-z][0-9]-_.%[p|t] but it has been ${FLINK_LOG_PREFIX}.gc.log 
Note %p or %t can only be used once Error: Could not create the Java Virtual 
Machine. Error: A fatal exception has occurred. Program will exit.{code}


> flink-dist/src/main/flink-bin/bin/flink does not expand variable for 
> FLINK_ENV_JAVA_OPTS
> 
>
> Key: FLINK-25122
> URL: https://issues.apache.org/jira/browse/FLINK-25122
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client
>Affects Versions: 1.12.5
>Reporter: L Z
>Priority: Major
>
> According to the suggestion on 
> [https://nightlies.apache.org/flink/flink-docs-master/docs/ops/debugging/application_profiling/]
>  or 
> [https://nightlies.apache.org/flink/flink-docs-release-1.12/ops/debugging/application_profiling.html]
>  
> after adding
> {code:yaml}
> env.java.opts: "-Xloggc:${FLINK_LOG_PREFIX}.gc.log <...omitted chars...>"
> {code}
> to flink-conf.yaml, the flink CLI `./bin/flink run` fails with:
> {code:bash}
> Invalid file name for use with -Xloggc: Filename can only contain the 
> characters [A-Z][a-z][0-9]-_.%[p|t] but it has been ${FLINK_LOG_PREFIX}.gc.log
> Note %p or %t can only be used once
> Error: Could not create the Java Virtual Machine.
> Error: A fatal exception has occurred. Program will exit.{code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17969: [FLINK-25031] Job finishes iff all job vertices finish.

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17969:
URL: https://github.com/apache/flink/pull/17969#issuecomment-983248559


   
   ## CI report:
   
   * d761590cdefec2a9d57edec6175e73565ff6be3c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27324)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17919: [FLINK-24419][table-planner] Trim to precision when casting to BINARY/VARBINARY

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17919:
URL: https://github.com/apache/flink/pull/17919#issuecomment-979659835


   
   ## CI report:
   
   * d18299137d194d8f2e10b70480796cfc797a0d69 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27325)
 
   * e7d585c7aab70989fe340f5f14b359952433ddf2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27327)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-24953) Optime hive parallelism inference

2021-11-30 Thread xiangqiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451507#comment-17451507
 ] 

xiangqiao edited comment on FLINK-24953 at 12/1/21, 5:21 AM:
-

Sorry, this problem has already been fixed in the master branch, so I will close 
this Jira.

It is a duplicate of https://issues.apache.org/jira/browse/FLINK-22898


was (Author: xiangqiao):
sorry, this problem has been fixed in master branch. I will close this jira.

> Optime hive parallelism inference
> -
>
> Key: FLINK-24953
> URL: https://issues.apache.org/jira/browse/FLINK-24953
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.13.0, 1.14.0
>Reporter: xiangqiao
>Assignee: xiangqiao
>Priority: Major
>  Labels: pull-request-available
>
> Currently, when I disable hive table source parallelism inference and set 
> parallelism.default: 100 via the following configuration:
> {code:yaml}
> table.exec.hive.infer-source-parallelism: false 
> parallelism.default: 100{code}
> the result is that the parallelism of the hive table source is {*}1{*}; the 
> default parallelism configuration does not take effect.
> I will fix this problem. In the future, when hive table source parallelism 
> inference is disabled, the parallelism of the hive table source will be 
> determined in the following order:
>  
> 1. If table.exec.resource.default-parallelism is set, the configured value 
> will be used
> 2. If parallelism.default is set, the configured value is used
> 3. If the above two configuration items are not set, the default value is 1
>  
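
The quoted order is a simple fallback chain. As a minimal sketch (an assumed 
helper for illustration only, not the actual planner code; the configuration 
keys are the ones named above):
{code:java}
import java.util.Map;

public class SourceParallelism {
    // Fallback order when hive source parallelism inference is disabled.
    static int inferDisabledParallelism(Map<String, String> conf) {
        String v = conf.get("table.exec.resource.default-parallelism");
        if (v != null) {
            return Integer.parseInt(v); // 1. table-level default wins
        }
        v = conf.get("parallelism.default");
        if (v != null) {
            return Integer.parseInt(v); // 2. cluster-wide default
        }
        return 1; // 3. neither is set
    }
}
{code}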



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25122) flink-dist/src/main/flink-bin/bin/flink does not expand variable for FLINK_ENV_JAVA_OPTS

2021-11-30 Thread L Z (Jira)
L Z created FLINK-25122:
---

 Summary: flink-dist/src/main/flink-bin/bin/flink does not expand 
variable for FLINK_ENV_JAVA_OPTS
 Key: FLINK-25122
 URL: https://issues.apache.org/jira/browse/FLINK-25122
 Project: Flink
  Issue Type: Bug
  Components: Command Line Client
Affects Versions: 1.12.5
Reporter: L Z


After adding 
{code:yaml}
env.java.opts: "-Xloggc:${FLINK_LOG_PREFIX}.gc.log <...omitted chars...>"
{code}
to flink-conf.yaml.

The flink CLI fails with:
{code:bash}
Invalid file name for use with -Xloggc: Filename can only contain the characters [A-Z][a-z][0-9]-_.%[p|t] but it has been ${FLINK_LOG_PREFIX}.gc.log
Note %p or %t can only be used once
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
{code}
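
A minimal Java sketch of what expanding the placeholder would mean (an 
illustration only; the real fix belongs in the bin/flink shell script, and the 
values below are hypothetical):
{code:java}
// Substitute the ${FLINK_LOG_PREFIX} placeholder before the value reaches
// the JVM, which is what the bin/flink script currently fails to do.
String opts = "-Xloggc:${FLINK_LOG_PREFIX}.gc.log";
String logPrefix = "/var/log/flink/flink-client-host1"; // hypothetical value
String expanded = opts.replace("${FLINK_LOG_PREFIX}", logPrefix);
// expanded == "-Xloggc:/var/log/flink/flink-client-host1.gc.log"
{code}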



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-24953) Optimize hive parallelism inference

2021-11-30 Thread xiangqiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiangqiao closed FLINK-24953.
-
Resolution: Duplicate

> Optimize hive parallelism inference
> -
>
> Key: FLINK-24953
> URL: https://issues.apache.org/jira/browse/FLINK-24953
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.13.0, 1.14.0
>Reporter: xiangqiao
>Assignee: xiangqiao
>Priority: Major
>  Labels: pull-request-available
>
> Currently, when I disable hive table source parallelism inference via 
> configuration and set parallelism.default: 100:
> {code:java}
> table.exec.hive.infer-source-parallelism: false 
> parallelism.default: 100{code}
> The result is that the parallelism of the hive table source is {*}1{*}, and the 
> configured default parallelism does not take effect.
> I will fix this problem. In the future, when hive table source parallelism 
> inference is disabled, the parallelism of the hive table source will be 
> determined in the following order:
>  
> 1. If table.exec.resource.default-parallelism is set, the configured value 
> will be used
> 2. If parallelism.default is set, the configured value is used
> 3. If the above two configuration items are not set, the default value is 1
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-24953) Optimize hive parallelism inference

2021-11-30 Thread xiangqiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451507#comment-17451507
 ] 

xiangqiao commented on FLINK-24953:
---

Sorry, this problem has already been fixed in the master branch. I will close this jira.

> Optimize hive parallelism inference
> -
>
> Key: FLINK-24953
> URL: https://issues.apache.org/jira/browse/FLINK-24953
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.13.0, 1.14.0
>Reporter: xiangqiao
>Assignee: xiangqiao
>Priority: Major
>  Labels: pull-request-available
>
> Currently, when I disable hive table source parallelism inference via 
> configuration and set parallelism.default: 100:
> {code:java}
> table.exec.hive.infer-source-parallelism: false 
> parallelism.default: 100{code}
> The result is that the parallelism of the hive table source is {*}1{*}, and the 
> configured default parallelism does not take effect.
> I will fix this problem. In the future, when hive table source parallelism 
> inference is disabled, the parallelism of the hive table source will be 
> determined in the following order:
>  
> 1. If table.exec.resource.default-parallelism is set, the configured value 
> will be used
> 2. If parallelism.default is set, the configured value is used
> 3. If the above two configuration items are not set, the default value is 1
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] xiangqiao123 closed pull request #17826: [FLINK-24953][Connectors / Hive] Optimize hive parallelism inference

2021-11-30 Thread GitBox


xiangqiao123 closed pull request #17826:
URL: https://github.com/apache/flink/pull/17826


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xiangqiao123 commented on a change in pull request #17826: [FLINK-24953][Connectors / Hive] Optimize hive parallelism inference

2021-11-30 Thread GitBox


xiangqiao123 commented on a change in pull request #17826:
URL: https://github.com/apache/flink/pull/17826#discussion_r759859765



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveParallelismInference.java
##
@@ -63,10 +63,10 @@ int limit(Long limit) {
 return parallelism;
 }
 
-if (limit != null) {
-parallelism = Math.min(parallelism, (int) (limit / 1000));
+if (!infer || limit == null) {

Review comment:
   Sorry, this problem has already been fixed in the master branch. I will close 
this PR.
   
https://github.com/apache/flink/blob/2d2d92e9812851091d7cee9c9c1764a0f7b4fdc8/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveParallelismInference.java#L62
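   
   For context, a rough sketch of the intent behind this diff (field names are 
taken from the snippet above; clamping to at least 1 is an added assumption, not 
part of the diff):
   
   ```java
   // Apply the LIMIT-based cap only when inference is enabled; otherwise
   // keep the configured parallelism untouched.
   int limit(Long limit) {
       if (!infer || limit == null) {
           return parallelism;
       }
       // Heuristic: roughly 1000 rows per subtask, clamped to at least 1.
       return Math.max(1, Math.min(parallelism, (int) (limit / 1000)));
   }
   ```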




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xiangqiao123 commented on a change in pull request #17826: [FLINK-24953][Connectors / Hive] Optimize hive parallelism inference

2021-11-30 Thread GitBox


xiangqiao123 commented on a change in pull request #17826:
URL: https://github.com/apache/flink/pull/17826#discussion_r759859765



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveParallelismInference.java
##
@@ -63,10 +63,10 @@ int limit(Long limit) {
 return parallelism;
 }
 
-if (limit != null) {
-parallelism = Math.min(parallelism, (int) (limit / 1000));
+if (!infer || limit == null) {

Review comment:
   Sorry, this problem has already been fixed in the master branch. I will close 
this PR.
   
https://github.com/apache/flink/blob/2d2d92e9812851091d7cee9c9c1764a0f7b4fdc8/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveParallelismInference.java#L81




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17919: [FLINK-24419][table-planner] Trim to precision when casting to BINARY/VARBINARY

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17919:
URL: https://github.com/apache/flink/pull/17919#issuecomment-979659835


   
   ## CI report:
   
   * 8e8aaa7df0ebae7ec45b9bc2e61ea8e55d9f45b8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27154)
 
   * d18299137d194d8f2e10b70480796cfc797a0d69 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27325)
 
   * e7d585c7aab70989fe340f5f14b359952433ddf2 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27327)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17919: [FLINK-24419][table-planner] Trim to precision when casting to BINARY/VARBINARY

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17919:
URL: https://github.com/apache/flink/pull/17919#issuecomment-979659835


   
   ## CI report:
   
   * 8e8aaa7df0ebae7ec45b9bc2e61ea8e55d9f45b8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27154)
 
   * d18299137d194d8f2e10b70480796cfc797a0d69 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27325)
 
   * e7d585c7aab70989fe340f5f14b359952433ddf2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] lindong28 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-11-30 Thread GitBox


lindong28 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r759854126



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/classification/linear/LogisticRegression.java
##
@@ -0,0 +1,653 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.classification.linear;
+
+import org.apache.flink.api.common.functions.RichMapFunction;
+import org.apache.flink.api.common.state.ListState;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.base.DoubleComparator;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.typeutils.TupleTypeInfo;
+import org.apache.flink.iteration.DataStreamList;
+import org.apache.flink.iteration.IterationBody;
+import org.apache.flink.iteration.IterationBodyResult;
+import org.apache.flink.iteration.IterationConfig;
+import org.apache.flink.iteration.IterationConfig.OperatorLifeCycle;
+import org.apache.flink.iteration.IterationListener;
+import org.apache.flink.iteration.Iterations;
+import org.apache.flink.iteration.ReplayableDataStreamList;
+import org.apache.flink.iteration.operator.OperatorStateUtils;
+import org.apache.flink.ml.api.Estimator;
+import org.apache.flink.ml.common.broadcast.BroadcastUtils;
+import org.apache.flink.ml.common.datastream.DataStreamUtils;
+import org.apache.flink.ml.common.iteration.TerminateOnMaxIterOrTol;
+import org.apache.flink.ml.linalg.BLAS;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.runtime.state.StateInitializationContext;
+import org.apache.flink.runtime.state.StateSnapshotContext;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
+import org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator;
+import org.apache.flink.streaming.api.operators.BoundedMultiInput;
+import org.apache.flink.streaming.api.operators.BoundedOneInput;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.TwoInputStreamOperator;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.OutputTag;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.commons.collections.IteratorUtils;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+
+/**
+ * This class implements methods to train a logistic regression model. For 
details, see
+ * https://en.wikipedia.org/wiki/Logistic_regression.
+ */
+public class LogisticRegression
+implements Estimator,
+LogisticRegressionParams {
+
+private Map<Param<?>, Object> paramMap = new HashMap<>();
+
+private static final OutputTag> MODEL_OUTPUT =
+new OutputTag>("MODEL_OUTPUT") {};
+
+public LogisticRegression() {
+ParamUtils.initializeMapWithDefaultValues(this.paramMap, this);
+}
+
+@Override
+public Map<Param<?>, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+ReadWriteUtils.saveMetadata(this, path);
+}
+
+public static LogisticRegression load(StreamExecutionEnvironment env, 
String path)
+throws IOException {
+return ReadWriteUtils.load

[GitHub] [flink-ml] lindong28 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-11-30 Thread GitBox


lindong28 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r759853261



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/classification/linear/LogisticRegression.java
##
@@ -0,0 +1,653 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.classification.linear;
+
+import org.apache.flink.api.common.functions.RichMapFunction;
+import org.apache.flink.api.common.state.ListState;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.base.DoubleComparator;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.typeutils.TupleTypeInfo;
+import org.apache.flink.iteration.DataStreamList;
+import org.apache.flink.iteration.IterationBody;
+import org.apache.flink.iteration.IterationBodyResult;
+import org.apache.flink.iteration.IterationConfig;
+import org.apache.flink.iteration.IterationConfig.OperatorLifeCycle;
+import org.apache.flink.iteration.IterationListener;
+import org.apache.flink.iteration.Iterations;
+import org.apache.flink.iteration.ReplayableDataStreamList;
+import org.apache.flink.iteration.operator.OperatorStateUtils;
+import org.apache.flink.ml.api.Estimator;
+import org.apache.flink.ml.common.broadcast.BroadcastUtils;
+import org.apache.flink.ml.common.datastream.DataStreamUtils;
+import org.apache.flink.ml.common.iteration.TerminateOnMaxIterOrTol;
+import org.apache.flink.ml.linalg.BLAS;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.runtime.state.StateInitializationContext;
+import org.apache.flink.runtime.state.StateSnapshotContext;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
+import org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator;
+import org.apache.flink.streaming.api.operators.BoundedMultiInput;
+import org.apache.flink.streaming.api.operators.BoundedOneInput;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.TwoInputStreamOperator;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.OutputTag;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.commons.collections.IteratorUtils;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+
+/**
+ * This class implements methods to train a logistic regression model. For 
details, see
+ * https://en.wikipedia.org/wiki/Logistic_regression.
+ */
+public class LogisticRegression
+implements Estimator,
+LogisticRegressionParams {
+
+private Map<Param<?>, Object> paramMap = new HashMap<>();
+
+private static final OutputTag> MODEL_OUTPUT =
+new OutputTag>("MODEL_OUTPUT") {};
+
+public LogisticRegression() {
+ParamUtils.initializeMapWithDefaultValues(this.paramMap, this);
+}
+
+@Override
+public Map<Param<?>, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+ReadWriteUtils.saveMetadata(this, path);
+}
+
+public static LogisticRegression load(StreamExecutionEnvironment env, 
String path)
+throws IOException {
+return ReadWriteUtils.load

[GitHub] [flink] shenzhu commented on pull request #17788: [FLINK-15826][Table SQL/API] Add renameFunction() to Catalog

2021-11-30 Thread GitBox


shenzhu commented on pull request #17788:
URL: https://github.com/apache/flink/pull/17788#issuecomment-983287432


   > @shenzhu My PR was merged now. If you rebase I think those violations 
should not show up anymore.
   
   Hey @Airblader, thanks for your PR! I just rebased my code; would you mind 
taking a look at this PR when you have a moment?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17945: [FLINK-21027][state] Introduce KeyedStateBackend#isSafeToReuseState for optimization hint

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17945:
URL: https://github.com/apache/flink/pull/17945#issuecomment-981579279


   
   ## CI report:
   
   * 175f81daf4357c8e6e3bafa4204a47a679c35a9e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27208)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17953: [FLINK-24481][docs] Translate buffer debloat documentation to Chinese

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17953:
URL: https://github.com/apache/flink/pull/17953#issuecomment-982382670


   
   ## CI report:
   
   * 613fdd459bdc162d32219604db1eadd907089eed Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27263)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17842: [FLINK-24966] [docs] Fix spelling errors in the project

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17842:
URL: https://github.com/apache/flink/pull/17842#issuecomment-974310249


   
   ## CI report:
   
   * bfbbc0c07ec324af26872c0c5422ae6262358da0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27323)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-25117) NoSuchMethodError getCatalog()

2021-11-30 Thread Wenlong Lyu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451499#comment-17451499
 ] 

Wenlong Lyu commented on FLINK-25117:
-

Hi [~zzt], the method TableLookupResult.getCatalog()Ljava/util/Optional; was 
added in 1.13.3. Please check the dependencies in your custom jars and make sure 
that all Flink libraries (such as flink-table-api-java) are excluded from the 
custom jars.
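
A generic way to verify which jar a conflicting class is actually loaded from 
(a standard JVM trick, not a Flink-specific API):
{code:java}
// Prints the location that CatalogManager$TableLookupResult was loaded from,
// which helps spot a stale flink-table jar bundled into a custom jar.
java.security.CodeSource src = org.apache.flink.table.catalog.CatalogManager
        .TableLookupResult.class.getProtectionDomain().getCodeSource();
System.out.println(src == null ? "bootstrap/system classpath" : src.getLocation());
{code}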

> NoSuchMethodError getCatalog()
> --
>
> Key: FLINK-25117
> URL: https://issues.apache.org/jira/browse/FLINK-25117
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.13.3
> Environment: official docker image, flink:1.13.2-scala_2.12
>Reporter: zzt
>Priority: Major
>
> {code:java}
> Flink SQL> insert into `wide_order` (`user_id`, `row_num`, `sum`)
> > select `t`.`receiver_user_id`, `t`.`rowNum`, `t`.`total`
> > from (select `t`.`receiver_user_id`, `t`.`total`, ROW_NUMBER() OVER (ORDER 
> > BY total desc) as `rowNum`
> >       from (select `order_view`.`receiver_user_id`, 
> > sum(`order_view`.`total`) as `total`
> >             from `order_view` where create_time > '2021-11-01 00:24:55.453'
> >             group by `order_view`.`receiver_user_id`) `t`) `t`
> > where `rowNum` <= 1;
> Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
> Unexpected exception. This is a bug. Please consider filing an issue.
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
>     at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.flink.table.catalog.CatalogManager$TableLookupResult.getCatalog()Ljava/util/Optional;
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.extractTableStats(DatabaseCalciteSchema.java:106)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getStatistic(DatabaseCalciteSchema.java:90)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.lambda$getTable$0(DatabaseCalciteSchema.java:79)
>     at java.util.Optional.map(Optional.java:215)
>     at 
> org.apache.flink.table.planner.catalog.DatabaseCalciteSchema.getTable(DatabaseCalciteSchema.java:74)
>     at 
> org.apache.calcite.jdbc.SimpleCalciteSchema.getImplicitTable(SimpleCalciteSchema.java:83)
>     at org.apache.calcite.jdbc.CalciteSchema.getTable(CalciteSchema.java:289)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntryFrom(SqlValidatorUtil.java:1059)
>     at 
> org.apache.calcite.sql.validate.SqlValidatorUtil.getTableEntry(SqlValidatorUtil.java:1016)
>     at 
> org.apache.calcite.prepare.CalciteCatalogReader.getTable(CalciteCatalogReader.java:119)
>     at 
> org.apache.flink.table.planner.plan.FlinkCalciteCatalogReader.getTable(FlinkCalciteCatalogReader.java:86)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter$.appendPartitionAndNullsProjects(PreValidateReWriter.scala:116)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:56)
>     at 
> org.apache.flink.table.planner.calcite.PreValidateReWriter.visit(PreValidateReWriter.scala:47)
>     at org.apache.calcite.sql.SqlCall.accept(SqlCall.java:139)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:113)
>     at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:107)
>     at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:205)
>     at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:101)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$parseStatement$1(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.gateway.context.ExecutionContext.wrapClassLoader(ExecutionContext.java:90)
>     at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.parseStatement(LocalExecutor.java:176)
>     at 
> org.apache.flink.table.client.cli.CliClient.parseCommand(CliClient.java:385)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeStatement(CliClient.java:326)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInteractive(CliClient.java:297)
>     at 
> org.apache.flink.table.client.cli.CliClient.executeInInteractiveMode(CliClient.java:221)
>     at org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:151)
>     at org.apache.flink.table.client.SqlClient.start(SqlClient.java:95)
>     at org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
>     ... 1 more
> Shutting down the session...
> done. {code}



--
This message was sent by Atlassian Jira

[GitHub] [flink] flinkbot edited a comment on pull request #17956: [FLINK-18779][table sql/planner]Support the SupportsFilterPushDown for LookupTableSource

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17956:
URL: https://github.com/apache/flink/pull/17956#issuecomment-982442976


   
   ## CI report:
   
   * d0558544d5b853381e1e62a0f248c4891f748641 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27322)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Myracle commented on a change in pull request #17826: [FLINK-24953][Connectors / Hive] Optimize hive parallelism inference

2021-11-30 Thread GitBox


Myracle commented on a change in pull request #17826:
URL: https://github.com/apache/flink/pull/17826#discussion_r759839237



##
File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveParallelismInference.java
##
@@ -63,10 +63,10 @@ int limit(Long limit) {
 return parallelism;
 }
 
-if (limit != null) {
-parallelism = Math.min(parallelism, (int) (limit / 1000));
+if (!infer || limit == null) {

Review comment:
   The problem occurs in version 1.13.1. The master branch has already 
resolved it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21573) Support expression reuse in codegen

2021-11-30 Thread Q Kang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451493#comment-17451493
 ] 

Q Kang commented on FLINK-21573:


Hi [~godfreyhe] [~lzljs3620320], I've come up with a simple solution to reuse 
scalar function expressions in `CodeGeneratorContext`, applied only when the 
function is deterministic. This solution has been tested in our production 
environment and is good to go. Would you mind assigning this ticket to me? Thx.
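
As a hand-written analogue of what such reuse achieves for the SQL example 
quoted below (the class and dummy values are illustrative, not the generated 
code itself):
{code:java}
import java.util.Map;

public class ExpressionReuseSketch {
    // Stand-in for the deterministic dump_json_to_map UDF from the example below.
    static Map<String, String> dumpJsonToMap(String json) {
        return Map.of("key1", "a", "key2", "b", "key3", "c"); // dummy result
    }

    public static void main(String[] args) {
        String col1 = "{\"key1\": \"a\"}";
        // Evaluated once and shared, instead of once per projected field.
        Map<String, String> myMap = dumpJsonToMap(col1);
        System.out.println(myMap.get("key1") + myMap.get("key2") + myMap.get("key3"));
    }
}
{code}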

> Support expression reuse in codegen
> ---
>
> Key: FLINK-21573
> URL: https://issues.apache.org/jira/browse/FLINK-21573
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Benchao Li
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> Currently there is no expression reuse in codegen, and this may result in 
> more CPU overhead in some cases. E.g.
> {code:java}
> SELECT my_map['key1'] as key1, my_map['key2'] as key2, my_map['key3'] as key3
> FROM (
>   SELECT dump_json_to_map(col1) as my_map
>   FROM T
> )
> {code}
> `dump_json_to_map` will be called 3 times.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (FLINK-24046) Refactor the relationship between PredefinedOptions and RocksDBConfigurableOptions

2021-11-30 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang resolved FLINK-24046.
--
Resolution: Fixed

merged in master: 2d2d92e9812851091d7cee9c9c1764a0f7b4fdc8

> Refactor the relationship between PredefinedOptions and 
> RocksDBConfigurableOptions
> --
>
> Key: FLINK-24046
> URL: https://issues.apache.org/jira/browse/FLINK-24046
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.14.1
>
>
> RocksDBConfigurableOptions mainly focuses on the settings of DBOptions and 
> ColumnFamilyOptions. The original design of this class was to let users 
> configure RocksDB via configuration options instead of a programmatically 
> implemented RocksDBOptionsFactory.
> To keep the change minimal, the original options in RocksDBConfigurableOptions 
> have no default value, so that DefaultConfigurableOptionsFactory behaves 
> just as before when nothing is set.
> However, this makes the meaning of an option without a default value unclear 
> to users, so we could consider changing the relationship between the two classes.
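
The direction can be summarized as a simple resolution rule (a hypothetical 
sketch with illustrative names, not the actual Flink code): an explicitly 
configured value, when present, overrides the value from the predefined profile.
{code:java}
// Hypothetical resolution rule: prefer an explicitly configured value,
// otherwise fall back to what the predefined options profile supplies.
static <T> T resolve(T predefinedValue, T configuredValueOrNull) {
    return configuredValueOrNull != null ? configuredValueOrNull : predefinedValue;
}
{code}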



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24046) Refactor the relationship between PredefinedOptions and RocksDBConfigurableOptions

2021-11-30 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang updated FLINK-24046:
-
Fix Version/s: 1.15.0
   (was: 1.14.1)

> Refactor the relationship between PredefinedOptions and 
> RocksDBConfigurableOptions
> --
>
> Key: FLINK-24046
> URL: https://issues.apache.org/jira/browse/FLINK-24046
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Zakelly Lan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> RocksDBConfigurableOptions mainly focuses on the settings of DBOptions and 
> ColumnFamilyOptions. The original design of this class was to let users 
> configure RocksDB via configuration options instead of a programmatically 
> implemented RocksDBOptionsFactory.
> To keep the change minimal, the original options in RocksDBConfigurableOptions 
> have no default value, so that DefaultConfigurableOptionsFactory behaves 
> just as before when nothing is set.
> However, this makes the meaning of an option without a default value unclear 
> to users, so we could consider changing the relationship between the two classes.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] Myasuka closed pull request #17874: [FLINK-24046] Refactor the EmbeddedRocksDBStateBackend configuration logic

2021-11-30 Thread GitBox


Myasuka closed pull request #17874:
URL: https://github.com/apache/flink/pull/17874


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-ml] zhipeng93 commented on a change in pull request #28: [Flink-24556] Add Estimator and Transformer for logistic regression

2021-11-30 Thread GitBox


zhipeng93 commented on a change in pull request #28:
URL: https://github.com/apache/flink-ml/pull/28#discussion_r759828179



##
File path: 
flink-ml-lib/src/main/java/org/apache/flink/ml/classification/linear/LogisticRegression.java
##
@@ -0,0 +1,653 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.ml.classification.linear;
+
+import org.apache.flink.api.common.functions.RichMapFunction;
+import org.apache.flink.api.common.state.ListState;
+import org.apache.flink.api.common.state.ListStateDescriptor;
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeutils.base.DoubleComparator;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.api.java.tuple.Tuple3;
+import org.apache.flink.api.java.typeutils.TupleTypeInfo;
+import org.apache.flink.iteration.DataStreamList;
+import org.apache.flink.iteration.IterationBody;
+import org.apache.flink.iteration.IterationBodyResult;
+import org.apache.flink.iteration.IterationConfig;
+import org.apache.flink.iteration.IterationConfig.OperatorLifeCycle;
+import org.apache.flink.iteration.IterationListener;
+import org.apache.flink.iteration.Iterations;
+import org.apache.flink.iteration.ReplayableDataStreamList;
+import org.apache.flink.iteration.operator.OperatorStateUtils;
+import org.apache.flink.ml.api.Estimator;
+import org.apache.flink.ml.common.broadcast.BroadcastUtils;
+import org.apache.flink.ml.common.datastream.DataStreamUtils;
+import org.apache.flink.ml.common.iteration.TerminateOnMaxIterOrTol;
+import org.apache.flink.ml.linalg.BLAS;
+import org.apache.flink.ml.param.Param;
+import org.apache.flink.ml.util.ParamUtils;
+import org.apache.flink.ml.util.ReadWriteUtils;
+import org.apache.flink.runtime.state.StateInitializationContext;
+import org.apache.flink.runtime.state.StateSnapshotContext;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
+import org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator;
+import org.apache.flink.streaming.api.operators.BoundedMultiInput;
+import org.apache.flink.streaming.api.operators.BoundedOneInput;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.TwoInputStreamOperator;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
+import org.apache.flink.table.api.internal.TableImpl;
+import org.apache.flink.util.Collector;
+import org.apache.flink.util.OutputTag;
+import org.apache.flink.util.Preconditions;
+
+import org.apache.commons.collections.IteratorUtils;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Random;
+
+/**
+ * This class implements methods to train a logistic regression model. For 
details, see
+ * https://en.wikipedia.org/wiki/Logistic_regression.
+ */
+public class LogisticRegression
+implements Estimator,
+LogisticRegressionParams {
+
+private Map<Param<?>, Object> paramMap = new HashMap<>();
+
+private static final OutputTag> MODEL_OUTPUT =
+new OutputTag>("MODEL_OUTPUT") {};
+
+public LogisticRegression() {
+ParamUtils.initializeMapWithDefaultValues(this.paramMap, this);
+}
+
+@Override
+public Map<Param<?>, Object> getParamMap() {
+return paramMap;
+}
+
+@Override
+public void save(String path) throws IOException {
+ReadWriteUtils.saveMetadata(this, path);
+}
+
+public static LogisticRegression load(StreamExecutionEnvironment env, 
String path)
+throws IOException {
+return ReadWriteUtils.load

[jira] [Commented] (FLINK-17782) Add array,map,row types support for parquet row writer

2021-11-30 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451487#comment-17451487
 ] 

Jingsong Lee commented on FLINK-17782:
--

[~sujun1020] and [~Runking], sorry for the late response. I will take a look 
at the PR in the next few days; if it goes well, it will be merged into the Flink 
repository soon.

> Add array,map,row types support for parquet row writer
> --
>
> Key: FLINK-17782
> URL: https://issues.apache.org/jira/browse/FLINK-17782
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Jingsong Lee
>Assignee: sujun
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-12-01-11-37-21-884.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] JingsongLi commented on pull request #17542: [FLINK-17782] Add array,map,row types support for parquet row writer

2021-11-30 Thread GitBox


JingsongLi commented on pull request #17542:
URL: https://github.com/apache/flink/pull/17542#issuecomment-983257963


   Sorry for the late response. I will review it in the next few days; I hope 
you are still active.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17788: [FLINK-15826][Table SQL/API] Add renameFunction() to Catalog

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17788:
URL: https://github.com/apache/flink/pull/17788#issuecomment-968179643


   
   ## CI report:
   
   * 1adea585c2e4802e2c6d6b9de46becadc9e3d2c5 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27321)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17782) Add array,map,row types support for parquet row writer

2021-11-30 Thread sujun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sujun updated FLINK-17782:
--
Attachment: image-2021-12-01-11-37-21-884.png

> Add array,map,row types support for parquet row writer
> --
>
> Key: FLINK-17782
> URL: https://issues.apache.org/jira/browse/FLINK-17782
> Project: Flink
>  Issue Type: Sub-task
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Jingsong Lee
>Assignee: sujun
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-12-01-11-37-21-884.png
>
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [flink] flinkbot edited a comment on pull request #17919: [FLINK-24419][table-planner] Trim to precision when casting to BINARY/VARBINARY

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17919:
URL: https://github.com/apache/flink/pull/17919#issuecomment-979659835


   
   ## CI report:
   
   * 8e8aaa7df0ebae7ec45b9bc2e61ea8e55d9f45b8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27154)
 
   * d18299137d194d8f2e10b70480796cfc797a0d69 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27325)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17953: [FLINK-24481][docs] Translate buffer debloat documentation to Chinese

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17953:
URL: https://github.com/apache/flink/pull/17953#issuecomment-982382670


   
   ## CI report:
   
   * 613fdd459bdc162d32219604db1eadd907089eed Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27263)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17919: [FLINK-24419][table-planner] Trim to precision when casting to BINARY/VARBINARY

2021-11-30 Thread GitBox


flinkbot edited a comment on pull request #17919:
URL: https://github.com/apache/flink/pull/17919#issuecomment-979659835


   
   ## CI report:
   
   * 8e8aaa7df0ebae7ec45b9bc2e61ea8e55d9f45b8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=27154)
 
   * d18299137d194d8f2e10b70480796cfc797a0d69 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-25120) Add many kinds of checks in ML Python API

2021-11-30 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-25120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-25120:
-
Parent: FLINK-25121
Issue Type: Sub-task  (was: New Feature)

> Add many kinds of checks in ML Python API
> -
>
> Key: FLINK-25120
> URL: https://issues.apache.org/jira/browse/FLINK-25120
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Library / Machine Learning
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
> Fix For: 0.1.0
>
>
> Add many kinds of checks in ML Python API. These checks include pytest, 
> flake8 and mypy.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-24933) Support ML Python API to implement FLIP-173 and FLIP-174

2021-11-30 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-24933:
-
Parent: FLINK-25121
Issue Type: Sub-task  (was: New Feature)

> Support ML Python API to implement FLIP-173 and FLIP-174
> 
>
> Key: FLINK-24933
> URL: https://issues.apache.org/jira/browse/FLINK-24933
> Project: Flink
>  Issue Type: Sub-task
>  Components: Library / Machine Learning
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.1.0
>
>
> Support ML Python API to implement FLIP-173 and FLIP-174



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-25099) flink on yarn Accessing two HDFS Clusters

2021-11-30 Thread chenqizhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451193#comment-17451193
 ] 

chenqizhu edited comment on FLINK-25099 at 12/1/21, 3:29 AM:
-

When I skip setting defaultFS=flinkcluster and instead specify the checkpoint 
path as hdfs://flinkcluster/x,
the job could not run properly; the status was always INITIALIZING (it seems 
that the jobmanager cannot be started, but I'm not sure why).
Changing the checkpoint path to hdfs:///x makes everything work fine (it 
obviously uses the default HDFS). [~zuston]
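
For reference, the two checkpoint path styles in Java (the setCheckpointStorage 
API is available since Flink 1.13; the directory names below are hypothetical):
{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointPathSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000);
        // Fully qualified: pins checkpoints to the "flinkcluster" nameservice.
        env.getCheckpointConfig().setCheckpointStorage("hdfs://flinkcluster/flink/checkpoints");
        // Scheme-only alternative: resolves against whatever fs.defaultFS is at runtime.
        // env.getCheckpointConfig().setCheckpointStorage("hdfs:///flink/checkpoints");
    }
}
{code}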







was (Author: libra_816):
When I skip setting defaultFS=flinkcluster and instead specify the checkpoint 
path as hdfs://flinkcluster/x,
the job could not run properly; the status was always INITIALIZING (it seems 
that the jobmanager cannot be started, but I'm not sure why).
Changing the checkpoint path to hdfs:///x makes everything work fine (it 
obviously uses the default HDFS). [~zuston]

The following is jobmanager.log

{code:java}
2021-11-30 23:32:44,345 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager at akka://flink/user/rpc/resourcemanager_0 .
2021-11-30 23:32:44,406 INFO  org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - Starting DefaultLeaderElectionService with ZooKeeperLeaderElectionDriver{leaderPath='/leader/dispatcher_lock'}.
2021-11-30 23:32:44,407 INFO  org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Starting the resource manager.
2021-11-30 23:32:44,408 INFO  org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Starting DefaultLeaderRetrievalService with ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/resource_manager_lock'}.
2021-11-30 23:32:44,409 INFO  org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - Starting DefaultLeaderRetrievalService with ZookeeperLeaderRetrievalDriver{retrievalPath='/leader/dispatcher_lock'}.
2021-11-30 23:32:44,409 INFO  org.apache.flink.runtime.dispatcher.runner.DefaultDispatcherRunner [] - DefaultDispatcherRunner was granted leadership with leader id 8d5e0cf1-06da-4648-bd89-f2949356902f. Creating new DispatcherLeaderProcess.
2021-11-30 23:32:44,414 INFO  org.apache.flink.runtime.dispatcher.runner.JobDispatcherLeaderProcess [] - Start JobDispatcherLeaderProcess.
2021-11-30 23:32:44,421 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Starting RPC endpoint for org.apache.flink.runtime.dispatcher.MiniDispatcher at akka://flink/user/rpc/dispatcher_1 .
2021-11-30 23:32:44,449 INFO  org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - Starting DefaultLeaderElectionService with ZooKeeperLeaderElectionDriver{leaderPath='/leader/cad84e92fb8ac17daf839af61fb8f9ae/job_manager_lock'}.
2021-11-30 23:32:44,498 WARN  akka.remote.transport.netty.NettyTransport [] - Remote connection to [null] failed with java.net.ConnectException: Connection refused: flink7/10.21.0.7:26635
2021-11-30 23:32:44,499 WARN  akka.remote.ReliableDeliverySupervisor [] - Association with remote system [akka.tcp://flink@flink7:26635] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@flink7:26635]] Caused by: [java.net.ConnectException: Connection refused: flink7/10.21.0.7:26635]
2021-11-30 23:32:44,504 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
2021-11-30 23:32:44,547 INFO  org.apache.flink.yarn.YarnResourceManagerDriver [] - Recovered 0 containers from previous attempts ([]).
2021-11-30 23:32:44,547 INFO  org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Recovered 0 workers from previous attempt.
2021-11-30 23:32:44,584 WARN  akka.remote.transport.netty.NettyTransport [] - Remote connection to [null] failed with java.net.ConnectException: Connection refused: flink7/10.21.0.7:26635
2021-11-30 23:32:44,585 WARN  akka.remote.ReliableDeliverySupervisor [] - Association with remote system [akka.tcp://flink@flink7:26635] has failed, address is now gated for [50] ms. Reason: [Association failed with [akka.tcp://flink@flink7:26635]] Caused by: [java.net.ConnectException: Connection refused: flink7/10.21.0.7:26635]
2021-11-30 23:32:44,601 INFO  org.apache.hadoop.conf.Configuration [] - resource-types.xml not found
2021-11-30 23:32:44,602 INFO  org.apache.hadoop.yarn.util.resource.ResourceUtils [] - Unable to find 'resource-types.xml'.
2021-11-30 23:32:44,615 INFO  org.apache.flink.runtime.externalresource.ExternalResourceUtils [] - Enabled external resources: []
2021-11-30 23:32:44,620 I

[jira] [Comment Edited] (FLINK-25099) flink on yarn Accessing two HDFS Clusters

2021-11-30 Thread chenqizhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17451486#comment-17451486
 ] 

chenqizhu edited comment on FLINK-25099 at 12/1/21, 3:28 AM:
-

Now the only question is why the defaultFS configuration works correctly on the 
Flink client (from the console log, the client uploads files to HDFS at 
hdfs://flinkcluster/**) but does not take effect on the NodeManager. [~zuston]


was (Author: libra_816):
Now the only question is why the defaultFS configuration works correctly on the 
flink client (from the console log, the client uploads files to the 
HDFS(hdfs://flinkcluster/**)), but it does not work on nodeManager. () [~zuston]
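
A quick way to narrow this down is to print the keys that each side actually 
resolves. Below is a minimal diagnostic sketch, assuming only the plain Hadoop 
client API on the classpath; the class name is illustrative and not part of 
Flink:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical probe: run it once on the Flink client host and once inside a
// YARN container to compare what each classpath resolves.
public class DefaultFsProbe {
    public static void main(String[] args) {
        // new Configuration() loads core-site.xml / hdfs-site.xml found on
        // the classpath of the current process.
        Configuration conf = new Configuration();
        System.out.println("fs.defaultFS     = " + conf.get("fs.defaultFS"));
        System.out.println("dfs.nameservices = " + conf.get("dfs.nameservices"));
        // If "BCluster" is missing on the NodeManager side, the flink.hadoop.*
        // overrides from flink-conf.yaml never reached that process.
    }
}
{code}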

> flink on yarn Accessing two HDFS Clusters
> -
>
> Key: FLINK-25099
> URL: https://issues.apache.org/jira/browse/FLINK-25099
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, FileSystems, Runtime / State Backends
>Affects Versions: 1.13.3
> Environment: flink : 1.13.3
> hadoop : 3.3.0
>Reporter: chenqizhu
>Priority: Major
> Attachments: flink-chenqizhu-client-hdfsn21n163.log
>
>
> Flink 1.13 supports configuring Hadoop properties in flink-conf.yaml via the 
> flink.hadoop.* prefix. There is a requirement to write checkpoints to an HDFS 
> cluster backed by SSDs (called cluster B) to speed up checkpoint writes, but 
> this cluster is not the default HDFS of the Flink client (called cluster A). 
> flink-conf.yaml is therefore configured with nameservices for both cluster A 
> and cluster B, similar to an HDFS federated setup.
> The configuration is as follows:
>  
> {code:java}
> flink.hadoop.dfs.nameservices: ACluster,BCluster
> flink.hadoop.fs.defaultFS: hdfs://BCluster
> flink.hadoop.dfs.ha.namenodes.ACluster: nn1,nn2
> flink.hadoop.dfs.namenode.rpc-address.ACluster.nn1: 10.:9000
> flink.hadoop.dfs.namenode.http-address.ACluster.nn1: 10.:50070
> flink.hadoop.dfs.namenode.rpc-address.ACluster.nn2: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.ACluster.nn2: 10.xx:50070
> flink.hadoop.dfs.client.failover.proxy.provider.ACluster: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> flink.hadoop.dfs.ha.namenodes.BCluster: nn1,nn2
> flink.hadoop.dfs.namenode.rpc-address.BCluster.nn1: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.BCluster.nn1: 10.xx:50070
> flink.hadoop.dfs.namenode.rpc-address.BCluster.nn2: 10.xx:9000
> flink.hadoop.dfs.namenode.http-address.BCluster.nn2: 10.x:50070
> flink.hadoop.dfs.client.failover.proxy.provider.BCluster: 
> org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
> {code}
>  
> However, an error occurred during job startup, reported as follows. 
> (If the configuration is switched back to the Flink client's local default 
> HDFS cluster, i.e. flink.hadoop.fs.defaultFS: hdfs://ACluster, the job 
> starts normally.)
> {noformat}
> Caused by: BCluster
> java.net.UnknownHostException: BCluster
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:448)
> at 
> org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:139)
> at org.apache.hadoop.hdfs.DFSClient.&lt;init&gt;(DFSClient.java:374)
> at org.apache.hadoop.hdfs.DFSClient.&lt;init&gt;(DFSClient.java:308)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:184)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3414)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:158)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3474)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3442)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:524)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
> at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:270)
> at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:68)
> at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:415)
> at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:412)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:412)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:247)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:240)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocaliz
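
The trace above is the classic symptom of a logical HA nameservice that the 
resolving process does not know about: when the dfs.nameservices, 
dfs.ha.namenodes.*, and dfs.namenode.rpc-address.* keys for "BCluster" are 
absent, Hadoop treats "BCluster" as an ordinary hostname and resolution fails. 
A minimal repro sketch, assuming only the plain Hadoop client API (this is not 
the actual NodeManager code path, and the failing frame may differ from the 
trace above):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class NameserviceRepro {
    public static void main(String[] args) throws Exception {
        // Assumes no *-site.xml defining the BCluster nameservice is on the
        // classpath, mimicking a process that never received those keys.
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://BCluster");
        // Without dfs.nameservices / dfs.ha.namenodes.BCluster /
        // dfs.namenode.rpc-address.BCluster.* keys, "BCluster" is taken for a
        // hostname, and resolving it fails with
        // java.net.UnknownHostException: BCluster.
        FileSystem.get(conf);
    }
}
{code}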
