[jira] [Commented] (FLINK-29359) Pulsar Table Connector pom config and packaging

2023-08-30 Thread Yufei Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760716#comment-17760716
 ] 

Yufei Zhang commented on FLINK-29359:
-

[~tison] Yeah, I think when the ticket was created it was still an issue, but I 
didn't update it in time and I'm not aware of the current situation now. But I 
think it should be safe to close this ticket~ 

> Pulsar Table Connector pom config and packaging
> ---
>
> Key: FLINK-29359
> URL: https://issues.apache.org/jira/browse/FLINK-29359
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-29359) Pulsar Table Connector pom config and packaging

2023-08-30 Thread Yufei Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760716#comment-17760716
 ] 

Yufei Zhang edited comment on FLINK-29359 at 8/31/23 4:56 AM:
--

[~tison] Yeah, I think when the ticket was created it was still an issue, but I 
didn't update it in time and I'm not aware of the current situation now. I think 
it should be safe to close this ticket~ 


was (Author: affe):
[~tison] Yeah I think when the ticket is created it was still an issue. But I 
didn't update it timely and I'm unaware of the current situation now. But I 
think it should be safe to close this ticket~ 

> Pulsar Table Connector pom config and packaging
> ---
>
> Key: FLINK-29359
> URL: https://issues.apache.org/jira/browse/FLINK-29359
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32122) Update the Azure Blob Storage document to assist in configuring the MSI provider with a shaded class name

2023-08-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32122:
---
Labels: pull-request-available  (was: )

> Update the Azure Blob Storage document to assist in configuring the MSI 
> provider with a shaded class name
> -
>
> Key: FLINK-32122
> URL: https://issues.apache.org/jira/browse/FLINK-32122
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, Documentation
>Reporter: Surendra Singh Lilhore
>Priority: Minor
>  Labels: pull-request-available
>
> Many users have reported on the mailing list that they are unable to 
> configure the ABFS filesystem as a checkpoint directory. This is often due to 
> ClassNotFoundException errors for Hadoop classes that are configured in the 
> configuration value. For instance, when using MsiTokenProvider for ABFS 
> storage in Flink, it should be configured with the shaded class name. 
> However, many users mistakenly use the Hadoop class name or package instead.
>  
> fs.azure.account.oauth.provider.type: 
> *org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider*
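
The description above boils down to a one-line configuration difference. A sketch of the relevant flink-conf.yaml entry follows; the correct shaded value is the one quoted above, and the commented-out line shows the common mistake described in the ticket:

```yaml
# Common mistake: the plain Hadoop class name, which is not on Flink's classpath
# fs.azure.account.oauth.provider.type: org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider

# Correct: the class name as shaded into Flink's Azure filesystem plugin
fs.azure.account.oauth.provider.type: org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider
```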



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] surendralilhore commented on pull request #22602: [FLINK-32122] Update the Azure Blob Storage document to assist in configuring the MSI provider with a shaded class name

2023-08-30 Thread via GitHub


surendralilhore commented on PR #22602:
URL: https://github.com/apache/flink/pull/22602#issuecomment-1700360398

   @MartijnVisser , Thanks for review.  Updated Chinese document.
   
   Sorry for late reply. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] masteryhx commented on a diff in pull request #23239: [FLINK-26585][state-processor-api] replace implementation of MultiStateKeyIterator with Stream-free implementation

2023-08-30 Thread via GitHub


masteryhx commented on code in PR #23239:
URL: https://github.com/apache/flink/pull/23239#discussion_r1311067683


##
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/MultiStateKeyIterator.java:
##
@@ -46,47 +55,59 @@ public final class MultiStateKeyIterator<K> implements CloseableIterator<K> {
 
     private final KeyedStateBackend<K> backend;
 
-    private final Iterator<K> internal;
+    private Iterator<? extends StateDescriptor<?, ?>> outerIter;
+    private Iterator<K> innerIter;
 
     private final CloseableRegistry registry;
 
     private K currentKey;
 
     public MultiStateKeyIterator(
             List<? extends StateDescriptor<?, ?>> descriptors, KeyedStateBackend<K> backend) {
+
+        outerIter = descriptors.iterator();
+        innerIter = null;
+
         this.descriptors = Preconditions.checkNotNull(descriptors);
         this.backend = Preconditions.checkNotNull(backend);
 
         this.registry = new CloseableRegistry();
-        this.internal =
-                descriptors.stream()
-                        .map(
-                                descriptor ->
-                                        backend.getKeys(
-                                                descriptor.getName(), VoidNamespace.INSTANCE))
-                        .peek(
-                                stream -> {
-                                    try {
-                                        registry.registerCloseable(stream::close);
-                                    } catch (IOException e) {
-                                        throw new RuntimeException(
-                                                "Failed to read keys from configured StateBackend",
-                                                e);
-                                    }
-                                })
-                        .flatMap(stream -> stream)
-                        .iterator();
     }
 
     @Override
     public boolean hasNext() {

Review Comment:
   How about:
   ```
   @Override
   public boolean hasNext() {
       while (innerIter == null || !innerIter.hasNext()) {
           if (!outerIter.hasNext()) {
               return false;
           }

           StateDescriptor<?, ?> descriptor = outerIter.next();
           Stream<K> stream = backend.getKeys(descriptor.getName(), VoidNamespace.INSTANCE);
           innerIter = stream.iterator();
           try {
               registry.registerCloseable(stream::close);
           } catch (IOException e) {
               throw new RuntimeException(
                       "Failed to read keys from configured StateBackend", e);
           }
       }
       return true;
   }
   ```
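
Stripped of the state-backend specifics, the outer/inner pattern in this proposal can be sketched as a standalone class. Names here are hypothetical (`LazyFlatIterator`), and plain `Iterable` sources stand in for the keyed state backend:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

/**
 * Minimal sketch of the outer/inner iterator pattern: a lazy flat-map over
 * iterators that never buffers an inner source just to answer hasNext().
 */
public final class LazyFlatIterator<T> implements Iterator<T> {

    private final Iterator<? extends Iterable<T>> outerIter; // one entry per source
    private Iterator<T> innerIter; // elements of the current source, opened lazily

    public LazyFlatIterator(Iterable<? extends Iterable<T>> sources) {
        this.outerIter = sources.iterator();
        this.innerIter = null;
    }

    @Override
    public boolean hasNext() {
        // Advance the outer iterator only when the current inner one is
        // exhausted, so sources are opened one at a time.
        while (innerIter == null || !innerIter.hasNext()) {
            if (!outerIter.hasNext()) {
                return false;
            }
            innerIter = outerIter.next().iterator();
        }
        return true;
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        return innerIter.next();
    }

    public static void main(String[] args) {
        List<List<Integer>> sources =
                Arrays.asList(Arrays.asList(1, 2), Collections.<Integer>emptyList(), Arrays.asList(3));
        StringBuilder out = new StringBuilder();
        for (Iterator<Integer> it = new LazyFlatIterator<>(sources); it.hasNext(); ) {
            out.append(it.next());
        }
        System.out.println(out); // prints 123
    }
}
```

In the real connector code the inner sources come from KeyedStateBackend#getKeys and must additionally be registered with the CloseableRegistry, as the diff shows.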



##
flink-libraries/flink-state-processing-api/src/test/java/org/apache/flink/state/api/input/MultiStateKeyIteratorTest.java:
##
@@ -125,4 +226,117 @@ public void testIteratorRemovesFromAllDescriptors() 
throws Exception {
 .count());
 }
 }
+
+/** Test for lazy enumeration of inner iterators. */
+@Test
+public void testIteratorPullsSingleKeyFromAllDescriptors() throws 
AssertionError {

Review Comment:
   IIUC, you want to use this case to verify that the number of keys you 
iterate over is correct?
   So you should iterate until hasNext() returns false, right?



##
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/MultiStateKeyIterator.java:
##
@@ -31,13 +31,22 @@
 import java.io.IOException;
 import java.util.Iterator;
 import java.util.List;
+import java.util.NoSuchElementException;
+import java.util.stream.Stream;
 
 /**
  * An iterator for reading all keys in a state backend across multiple partitioned states.
  *
  * <p>To read unique keys across all partitioned states callers must invoke {@link
  * MultiStateKeyIterator#remove}.
  *
+ * <p>Note: This is a replacement for the original implementation, which used streams with a known
+ * flaw in the {@link Stream#flatMap(java.util.function.Function)} implementation that leads to
+ * completely enumerating and buffering nested iterators even for a single call to {@link
+ * MultiStateKeyIterator#hasNext}.
+ *
+ * @see <a href="https://bugs.openjdk.org/browse/JDK-8267359">https://bugs.openjdk.org/browse/JDK-8267359</a>

Review Comment:
   IMO, this comment about why we updated the logic is not necessary; the 
background can be found in the Jira ticket.
   Or you could just add a short description above outerIter, e.g.: "Avoid 
using Stream#flatMap due to xxx, see FLINK-26585 for more details"



##
flink-libraries/flink-state-processing-api/src/main/java/org/apache/flink/state/api/input/MultiStateKeyIterator.java:
##
@@ -46,47 +55,59 @@ public final class MultiStateKeyIterator<K> implements CloseableIterator<K> {
 
     private final KeyedStateBackend<K> backend;
 
-    private final Iterator<K> internal;
+    private Iterator<? extends StateDescriptor<?, ?>> outerIter;

Review Comment:
   Could 

[GitHub] [flink] JunRuiLee commented on pull request #23181: W.I.P only used to run ci.

2023-08-30 Thread via GitHub


JunRuiLee commented on PR #23181:
URL: https://github.com/apache/flink/pull/23181#issuecomment-1700345704

   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] dianfu commented on pull request #23329: [FLINK-32989][python] Fix version parsing issue

2023-08-30 Thread via GitHub


dianfu commented on PR #23329:
URL: https://github.com/apache/flink/pull/23329#issuecomment-1700340731

   Verified that the broken CI could pass with this PR.  


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-29199) Support blue-green deployment type

2023-08-30 Thread Matt Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760707#comment-17760707
 ] 

Matt Wang commented on FLINK-29199:
---

[~nfraison.datadog] I think [double data output] requires corresponding 
handling in the job itself and is not a very general solution. In our internal 
practice, we strictly ensure that at any time only one job is running in a 
Region, because we believe that running two copies of a job has a relatively 
large impact on the business, and all downstream consumers of the job then need 
to deal with duplicate data. However, we have also encountered scenarios that 
require the ability to do gray releases. We are currently exploring the ability 
to roll out new versions of jobs at Region granularity.

> Support blue-green deployment type
> --
>
> Key: FLINK-29199
> URL: https://issues.apache.org/jira/browse/FLINK-29199
> Project: Flink
>  Issue Type: New Feature
>  Components: Kubernetes Operator
> Environment: Kubernetes
>Reporter: Oleg Vorobev
>Priority: Minor
>
> Are there any plans to support blue-green deployment/rollout mode similar to 
> *BlueGreen* in the 
> [flinkk8soperator|https://github.com/lyft/flinkk8soperator] to avoid downtime 
> while updating?
> The idea is to run a new version in parallel with an old one and remove the 
> old one only after the stability condition of the new one is satisfied (like 
> in 
> [rollbacks|https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.1/docs/custom-resource/job-management/#application-upgrade-rollbacks-experimental]).
> For stateful apps with {*}upgradeMode: savepoint{*}, this means: not 
> cancelling an old job after creating a savepoint -> starting new job from 
> that savepoint -> waiting for it to become running/one successful 
> checkpoint/timeout or something else -> cancelling and removing old job.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] flinkbot commented on pull request #23341: Dev flink 6912 test6

2023-08-30 Thread via GitHub


flinkbot commented on PR #23341:
URL: https://github.com/apache/flink/pull/23341#issuecomment-1700332505

   
   ## CI report:
   
   * d5dc0f956e46340210165aa55c11ae92bfad84ed UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-08-30 Thread Shengkai Fang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760705#comment-17760705
 ] 

Shengkai Fang commented on FLINK-32731:
---

> How do you find out whether "it works"?

I just observe whether the test fails again in the daily test runs. But I find 
that I can no longer locate the flink-ci.flink-master-mirror pipeline...

>  It affects 1.18 but the change you documented was only merged to master.

Sure. I will cherry-pick this to release-1.18


> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call 

[GitHub] [flink] WencongLiu commented on pull request #21184: [FLINK-29787][ci] fix ci METHOD_NEW_DEFAULT issue

2023-08-30 Thread via GitHub


WencongLiu commented on PR #21184:
URL: https://github.com/apache/flink/pull/21184#issuecomment-1700319487

   Hello @XComp @liyubin117, I'm currently running into the same problem while 
trying to add a new default method to a class with the @Public annotation. Is 
adding an @PublicEvolving annotation to the newly added default method enough 
to avoid this?
   
   BTW, according to the [API compatibility 
guarantees](https://nightlies.apache.org/flink/flink-docs-master/docs/ops/upgrading/)
 on the Flink website, we shouldn't check binary incompatibility between minor 
versions 
[flink/pom.xml](https://github.com/apache/flink/blob/aa8d93ea239f5be79066b7e5caad08d966c86ab2/pom.xml#L2246C5-L2246C5).
 We need to open an issue to fix it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (FLINK-32975) Enhance equal() for all MapState's iterator

2023-08-30 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-32975.
--
Fix Version/s: 1.19.0
   Resolution: Fixed

Merged 
[aa8d93ea|https://github.com/apache/flink/commit/aa8d93ea239f5be79066b7e5caad08d966c86ab2]
 into master.

> Enhance equal() for all MapState's iterator
> ---
>
> Key: FLINK-32975
> URL: https://issues.apache.org/jira/browse/FLINK-32975
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends, Tests
>Reporter: Rui Xia
>Assignee: Rui Xia
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> This ticket originated from the JUnit version upgrade of the Changelog module.
> The assertThat() in JUnit 5 uses Object#equals to compare two Map.Entry 
> instances. The anonymous Map.Entry class in ChangelogMapState uses the default 
> Object#equals(), which does not compare the contents of two entries.
> This ticket is to add a basic equals() implementation for the Map.Entry<UK, 
> UV> in ChangelogMapState.
> EDIT: To be more general, the equals() for RocksDB MapState's iterator is also 
> missing. It would be better to align the comparison behavior of all 
> MapState#Entry implementations. This ticket will correct them together 
> (RocksDB's and Changelog's).
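
For illustration, a minimal sketch of an entry class satisfying the Map.Entry equals/hashCode contract that the ticket asks for. The class name `StateEntry` is hypothetical; this is not the actual ChangelogMapState code:

```java
import java.util.AbstractMap;
import java.util.Map;
import java.util.Objects;

/** Sketch of a Map.Entry implementation honoring the Map.Entry contract. */
public final class StateEntry<UK, UV> implements Map.Entry<UK, UV> {

    private final UK key;
    private UV value;

    public StateEntry(UK key, UV value) {
        this.key = key;
        this.value = value;
    }

    @Override public UK getKey() { return key; }
    @Override public UV getValue() { return value; }

    @Override
    public UV setValue(UV newValue) {
        UV old = value;
        value = newValue;
        return old;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Map.Entry)) {
            return false;
        }
        // Per the Map.Entry contract: entries are equal iff keys and values are equal.
        Map.Entry<?, ?> other = (Map.Entry<?, ?>) o;
        return Objects.equals(key, other.getKey()) && Objects.equals(value, other.getValue());
    }

    @Override
    public int hashCode() {
        // Also mandated by the contract: key hash XOR value hash.
        return Objects.hashCode(key) ^ Objects.hashCode(value);
    }

    public static void main(String[] args) {
        StateEntry<String, Integer> e = new StateEntry<>("k", 42);
        // AbstractMap.SimpleEntry follows the same contract, so cross-class
        // comparison works in both directions.
        System.out.println(e.equals(new AbstractMap.SimpleEntry<>("k", 42))); // prints true
    }
}
```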



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32963) Make the test "testKeyedMapStateStateMigration" stable

2023-08-30 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760699#comment-17760699
 ] 

Hangxiang Yu commented on FLINK-32963:
--

This is just a simple improvement to make this UT case more reasonable; it 
doesn't affect the main code path or CI.

So I think it's fine that we resolved it only in master.

> Make the test "testKeyedMapStateStateMigration" stable
> --
>
> Key: FLINK-32963
> URL: https://issues.apache.org/jira/browse/FLINK-32963
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.17.1
>Reporter: Asha Boyapati
>Assignee: Asha Boyapati
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.19.0
>
>
> We are proposing to make the following test stable:
> {{org.apache.flink.runtime.state.FileStateBackendMigrationTest.testKeyedMapStateStateMigration}}
> The test is currently flaky because the order of elements returned by the 
> iterator is non-deterministic.
> The following PR fixes the flaky test by making it independent of the order 
> of elements returned by the iterator:
> [https://github.com/apache/flink/pull/23298]
> We detected this using the NonDex tool using the following command:
> {{mvn edu.illinois:nondex-maven-plugin:2.1.1:nondex -pl flink-runtime 
> -DnondexRuns=10 
> -Dtest=org.apache.flink.runtime.state.FileStateBackendMigrationTest#testKeyedMapStateStateMigration}}
> Please see the following Continuous Integration log that shows the flakiness:
> [https://github.com/asha-boyapati/flink/actions/runs/5909136145/job/16029377793]
> Please see the following Continuous Integration log that shows that the 
> flakiness is fixed by this change:
> [https://github.com/asha-boyapati/flink/actions/runs/5909183468/job/16029467973]
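
The fix described above follows a common pattern for this kind of flakiness: collect the iterator's elements into an order-insensitive collection before asserting. A minimal standalone sketch (hypothetical helper, not the actual test code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public final class OrderIndependentCheck {

    /** Collects an iterator into a Set so comparisons ignore iteration order. */
    public static <T> Set<T> toSet(Iterator<T> it) {
        Set<T> result = new HashSet<>();
        while (it.hasNext()) {
            result.add(it.next());
        }
        return result;
    }

    public static void main(String[] args) {
        // Two iterators over the same keys in different orders compare equal as sets,
        // which is what makes a test independent of iteration order.
        Iterator<String> a = Arrays.asList("k1", "k2", "k3").iterator();
        Iterator<String> b = Arrays.asList("k3", "k1", "k2").iterator();
        System.out.println(toSet(a).equals(toSet(b))); // prints true
    }
}
```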



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-29390) Pulsar SQL Connector: SQLClient E2E testing

2023-08-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29390:
---
Labels: pull-request-available  (was: )

> Pulsar SQL Connector: SQLClient E2E testing
> ---
>
> Key: FLINK-29390
> URL: https://issues.apache.org/jira/browse/FLINK-29390
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Assignee: Zili Chen
>Priority: Minor
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-connector-pulsar] tisonkun opened a new pull request, #58: [FLINK-29390] SQLClient E2E testing

2023-08-30 Thread via GitHub


tisonkun opened a new pull request, #58:
URL: https://github.com/apache/flink-connector-pulsar/pull/58

   Run tests.
   
   ## Significant changes
   
   *(Please check any boxes [x] if the answer is "yes". You can first publish 
the PR and check them afterwards, for
   convenience.)*
   
   - [x] Dependencies have been added or upgraded
   - [ ] Public API has been changed (Public API is any class annotated with 
`@Public(Evolving)`)
   - [ ] Serializers have been changed
   - [ ] New feature has been introduced
   - If yes, how is this documented? (not applicable / docs / JavaDocs / 
not documented)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32523) NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted fails with timeout on AZP

2023-08-30 Thread Hangxiang Yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760695#comment-17760695
 ] 

Hangxiang Yu commented on FLINK-32523:
--

[~mapohl] Thanks for the reminder.
I will merge them into 1.16 & 1.17 after their CI passes.

> NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted fails with timeout 
> on AZP
> ---
>
> Key: FLINK-32523
> URL: https://issues.apache.org/jira/browse/FLINK-32523
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.16.2, 1.18.0
>Reporter: Sergey Nuyanzin
>Assignee: Hangxiang Yu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.18.0
>
> Attachments: failure.log
>
>
> This build
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=50795=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=0c010d0c-3dec-5bf1-d408-7b18988b1b2b=8638
>  fails with timeout
> {noformat}
> Jul 03 01:26:35 org.junit.runners.model.TestTimedOutException: test timed out 
> after 10 milliseconds
> Jul 03 01:26:35   at java.lang.Object.wait(Native Method)
> Jul 03 01:26:35   at java.lang.Object.wait(Object.java:502)
> Jul 03 01:26:35   at 
> org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:61)
> Jul 03 01:26:35   at 
> org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.verifyAllOperatorsNotifyAborted(NotifyCheckpointAbortedITCase.java:198)
> Jul 03 01:26:35   at 
> org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted(NotifyCheckpointAbortedITCase.java:189)
> Jul 03 01:26:35   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Jul 03 01:26:35   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Jul 03 01:26:35   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Jul 03 01:26:35   at java.lang.reflect.Method.invoke(Method.java:498)
> Jul 03 01:26:35   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Jul 03 01:26:35   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
> Jul 03 01:26:35   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
> Jul 03 01:26:35   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Jul 03 01:26:35   at java.lang.Thread.run(Thread.java:748)
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (FLINK-32781) Release Testing: Add a metric for back-pressure from the ChangelogStateBackend

2023-08-30 Thread Hangxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hangxiang Yu resolved FLINK-32781.
--
Resolution: Fixed

Verified the metric in the UI, just as [~Yanfei Lei] showed.

> Release Testing: Add a metric for back-pressure from the ChangelogStateBackend
> --
>
> Key: FLINK-32781
> URL: https://issues.apache.org/jira/browse/FLINK-32781
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Yanfei Lei
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: image-2023-08-21-17-38-56-927.png
>
>
> The back-pressure from ChangelogStateBackend is reported as 
> [`changelogBusyTimeMsPerSecond`|https://nightlies.apache.org/flink/flink-docs-master/docs/ops/metrics/#io].
> Its value should be 0 if changelog is not enabled (the default); otherwise 
> it should be a non-negative value. This metric can be seen in the metrics tab 
> of the Flink web UI for any job.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (FLINK-32785) Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support operator-level state TTL configuration

2023-08-30 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1775#comment-1775
 ] 

Jane Chan edited comment on FLINK-32785 at 8/31/23 3:09 AM:


This ticket aims to verify FLINK-31791: Enhance COMPILED PLAN to support 
operator-level state TTL configuration.

More details about this feature and how to use it can be found in this 
[documentation|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/concepts/overview/#configure-operator-level-state-ttl].
 The verification steps are as follows.
h3. Part I: Functionality Verification

1. Start the standalone session cluster and sql client.

2. Execute the following DDL statements.
{code:sql}
CREATE TABLE `default_catalog`.`default_database`.`Orders` (
  `order_id` INT,
  `line_order_id` INT
) WITH (
  'connector' = 'datagen', 
  'rows-per-second' = '5'

); 

CREATE TABLE `default_catalog`.`default_database`.`LineOrders` (
  `line_order_id` INT,
  `ship_mode` STRING
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '5'
);

CREATE TABLE `default_catalog`.`default_database`.`OrdersShipInfo` (
  `order_id` INT,
  `line_order_id` INT,
  `ship_mode` STRING ) WITH (
  'connector' = 'print'
);
{code}
 
3. Generate Compiled Plan
{code:sql}
COMPILE PLAN '/path/to/plan.json' FOR
INSERT INTO OrdersShipInfo 
SELECT a.order_id, a.line_order_id, b.ship_mode 
FROM Orders a JOIN LineOrders b ON a.line_order_id = b.line_order_id;
{code}
 

4. Verify JSON plan content
The generated JSON file should contain the following "state" JSON array for 
StreamJoin ExecNode.
{code:json}
{
"id" : 5,
"type" : "stream-exec-join_1",
"joinSpec" : {
  ...
},
"state" : [ {
  "index" : 0,
  "ttl" : "0 ms",
  "name" : "leftState"
}, {
  "index" : 1,
  "ttl" : "0 ms",
  "name" : "rightState"
} ],
"inputProperties": [...],
"outputType": ...,
"description": ...
}
{code}
h3. Part II: Compatibility Verification

Repeat the previously described steps using the flink-1.17 release, and then 
execute the generated plan using 1.18 via
{code:sql}
EXECUTE PLAN '/path/to/plan-generated-by-old-flink-version.json'
{code}
 


was (Author: qingyue):
This ticket aims to verify FLINK-31791: Enhance COMPILED PLAN to support 
operator-level state TTL configuration.

More details about this feature and how to use it can be found in this 
[documentation|https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/concepts/overview/#configure-operator-level-state-ttl].
 The verification steps are as follows.
h3. Part I: Functionality Verification

1. Start the standalone session cluster and sql client.

2. Execute the following DDL statements.
{code:sql}
CREATE TABLE `default_catalog`.`default_database`.`Orders` (
  `order_id` INT,
  `line_order_id` INT
) WITH (
  'connector' = 'datagen'
); 

CREATE TABLE `default_catalog`.`default_database`.`LineOrders` (
  `line_order_id` INT,
  `ship_mode` STRING
) WITH (
  'connector' = 'datagen'
);

CREATE TABLE `default_catalog`.`default_database`.`OrdersShipInfo` (
  `order_id` INT,
  `line_order_id` INT,
  `ship_mode` STRING ) WITH (
  'connector' = 'print'
);
{code}
 
3. Generate Compiled Plan
{code:sql}
COMPILE PLAN '/path/to/plan.json' FOR
INSERT INTO OrdersShipInfo 
SELECT a.order_id, a.line_order_id, b.ship_mode 
FROM Orders a JOIN LineOrders b ON a.line_order_id = b.line_order_id;
{code}
 

4. Verify JSON plan content
The generated JSON file should contain the following "state" JSON array for 
StreamJoin ExecNode.
{code:json}
{
"id" : 5,
"type" : "stream-exec-join_1",
"joinSpec" : {
  ...
},
"state" : [ {
  "index" : 0,
  "ttl" : "0 ms",
  "name" : "leftState"
}, {
  "index" : 1,
  "ttl" : "0 ms",
  "name" : "rightState"
} ],
"inputProperties": [...],
"outputType": ...,
"description": ...
}
{code}
h3. Part II: Compatibility Verification

Repeat the previously described steps using the flink-1.17 release, and then 
execute the generated plan using 1.18 via
{code:sql}
EXECUTE PLAN '/path/to/plan-generated-by-old-flink-version.json'
{code}
 

> Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support 
> operator-level state TTL configuration
> -
>
> Key: FLINK-32785
> URL: https://issues.apache.org/jira/browse/FLINK-32785
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Sergey Nuyanzin
>Priority: Major
> Fix For: 1.18.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32785) Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support operator-level state TTL configuration

2023-08-30 Thread Jane Chan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760693#comment-17760693
 ] 

Jane Chan commented on FLINK-32785:
---

Hi, [~snuyanzin], 
This issue arises because the test step description was not accurate enough. The 
datagen connector emits 10,000 records per second by default. If the default 
state backend type is hashmap and table.exec.state.ttl is not set, this quickly 
leads to an OOM. I've updated the test procedure description; please let me know 
if there are any other issues.
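For future runs, the OOM can be avoided by capping the datagen rate and setting a state TTL explicitly. A minimal sketch, assuming the standard datagen 'rows-per-second' option and the sql-client SET syntax (the 100 rows/s and 1 h values are illustrative):
{code:sql}
-- Limit datagen to 100 rows/s so unbounded join state grows slowly
CREATE TABLE `Orders` (
  `order_id` INT,
  `line_order_id` INT
) WITH (
  'connector' = 'datagen',
  'rows-per-second' = '100'
);

-- Expire idle join state after 1 hour instead of keeping it forever
SET 'table.exec.state.ttl' = '1 h';
{code}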
 
 
 

> Release Testing: Verify FLIP-292: Enhance COMPILED PLAN to support 
> operator-level state TTL configuration
> -
>
> Key: FLINK-32785
> URL: https://issues.apache.org/jira/browse/FLINK-32785
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Sergey Nuyanzin
>Priority: Major
> Fix For: 1.18.0
>
>






[GitHub] [flink] masteryhx merged pull request #23334: [FLINK-32523][test] Guarantee all operators triggering decline checkpoint together for NotifyCheckpointAbortedITCase#testNotifyCheckpointAborted

2023-08-30 Thread via GitHub


masteryhx merged PR #23334:
URL: https://github.com/apache/flink/pull/23334


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] masteryhx commented on pull request #23333: [FLINK-32523][test] Guarantee all operators triggering decline checkpoint together for NotifyCheckpointAbortedITCase#testNotifyCheckpointAb

2023-08-30 Thread via GitHub


masteryhx commented on PR #2:
URL: https://github.com/apache/flink/pull/2#issuecomment-1700302031

   @flinkbot run azure





[GitHub] [flink-kubernetes-operator] mananmangal closed pull request #648: [FLINK-32700] Support job drain for Savepoint upgrade mode

2023-08-30 Thread via GitHub


mananmangal closed pull request #648: [FLINK-32700] Support job drain for 
Savepoint upgrade mode
URL: https://github.com/apache/flink-kubernetes-operator/pull/648





[GitHub] [flink-kubernetes-operator] mananmangal commented on pull request #648: [FLINK-32700] Support job drain for Savepoint upgrade mode

2023-08-30 Thread via GitHub


mananmangal commented on PR #648:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/648#issuecomment-1700300622

   #661  Created.
   Closing this one due to incorrect rebase.





[jira] [Assigned] (FLINK-29390) Pulsar SQL Connector: SQLClient E2E testing

2023-08-30 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-29390:
-

Assignee: Zili Chen

> Pulsar SQL Connector: SQLClient E2E testing
> ---
>
> Key: FLINK-29390
> URL: https://issues.apache.org/jira/browse/FLINK-29390
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Assignee: Zili Chen
>Priority: Minor
>






[jira] [Closed] (FLINK-31427) Pulsar Catalog support with Schema translation

2023-08-30 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen closed FLINK-31427.
-
Fix Version/s: (was: pulsar-4.0.1)
   Resolution: Later

It's rare for users to use Pulsar as a catalog source. Postponing this for later.

> Pulsar Catalog support with Schema translation
> --
>
> Key: FLINK-31427
> URL: https://issues.apache.org/jira/browse/FLINK-31427
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: pulsar-4.0.0
>Reporter: Yufan Sheng
>Assignee: Yufan Sheng
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
> This task makes Pulsar serve as a Flink catalog. It exposes Pulsar's 
> namespaces as Flink databases and Pulsar topics as Flink tables, so you can 
> easily create tables and databases on Pulsar. The tables can then be 
> consumed by other clients with a valid schema check.





[GitHub] [flink-kubernetes-operator] mananmangal opened a new pull request, #661: FLINK-32700

2023-08-30 Thread via GitHub


mananmangal opened a new pull request, #661:
URL: https://github.com/apache/flink-kubernetes-operator/pull/661

   
   
   ## What is the purpose of the change
   
   This pull request adds an option, 
`kubernetes.operator.job.drain-on-savepoint-deletion`, that indicates whether a 
job should be drained before a FlinkDeployment or FlinkSessionJob is deleted; it 
takes effect only when savepoint on deletion is enabled.
   
   
   ## Brief change log
   
 - Add new configurable option to drain a job before deletion, if the 
savepoint on deletion is enabled
   
   ## Verifying this change
   
   
   This change added tests and can be verified as follows:
   
 - Added unit tests with the new configuration option enabled 
`testSubmitAndDrainOnCleanUpWithSavepoint` in `ApplicationReconcilerTest` and 
`SessionJobReconcilerTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): No
 - The public API, i.e., is any changes to the `CustomResourceDescriptors`: 
yes
 - Core observer or reconciler logic that is regularly executed: yes
   
   ## Documentation
   
 - Does this pull request introduce a new feature? yes
 - If yes, how is the feature documented? docs
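   The option described in this PR would be set per deployment via the resource's Flink configuration. A minimal sketch, assuming the option name from this PR and the usual `flinkConfiguration` placement in the FlinkDeployment spec (resource name and values are illustrative):
{code:yaml}
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: example-job
spec:
  flinkConfiguration:
    # Drain the job (stop with savepoint) before the resource is deleted;
    # per this PR, only takes effect when savepoint on deletion is enabled.
    kubernetes.operator.job.drain-on-savepoint-deletion: "true"
{code}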
   





[jira] [Assigned] (FLINK-29360) Pulsar Table Connector Documentation

2023-08-30 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-29360:
-

Assignee: Zili Chen

> Pulsar Table Connector Documentation
> 
>
> Key: FLINK-29360
> URL: https://issues.apache.org/jira/browse/FLINK-29360
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Assignee: Zili Chen
>Priority: Minor
>






[jira] [Commented] (FLINK-29359) Pulsar Table Connector pom config and packaging

2023-08-30 Thread Zili Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760687#comment-17760687
 ] 

Zili Chen commented on FLINK-29359:
---

[~affe] [~syhily] [~leonard] I'm unsure if this ticket means to support SQL jar 
packaging.

I can see that we already have the module flink-sql-connector-pulsar.

> Pulsar Table Connector pom config and packaging
> ---
>
> Key: FLINK-29359
> URL: https://issues.apache.org/jira/browse/FLINK-29359
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Priority: Minor
>






[jira] [Resolved] (FLINK-29357) Pulsar Table Sink code: implementation

2023-08-30 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen resolved FLINK-29357.
---
Fix Version/s: pulsar-4.1.0
   Resolution: Fixed

master via c71fc862e0d4a782c19f361d3bf581da836cca79

> Pulsar Table Sink code: implementation
> --
>
> Key: FLINK-29357
> URL: https://issues.apache.org/jira/browse/FLINK-29357
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Assignee: Zili Chen
>Priority: Minor
> Fix For: pulsar-4.1.0
>
>






[jira] [Assigned] (FLINK-29357) Pulsar Table Sink code: implementation

2023-08-30 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-29357:
-

Assignee: Zili Chen

> Pulsar Table Sink code: implementation
> --
>
> Key: FLINK-29357
> URL: https://issues.apache.org/jira/browse/FLINK-29357
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Assignee: Zili Chen
>Priority: Minor
>






[jira] [Resolved] (FLINK-29358) Pulsar Table Connector testing

2023-08-30 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen resolved FLINK-29358.
---
Fix Version/s: pulsar-4.1.0
 Assignee: Zili Chen
   Resolution: Fixed

master via c71fc862e0d4a782c19f361d3bf581da836cca79

> Pulsar Table Connector testing
> --
>
> Key: FLINK-29358
> URL: https://issues.apache.org/jira/browse/FLINK-29358
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Assignee: Zili Chen
>Priority: Minor
> Fix For: pulsar-4.1.0
>
>






[jira] [Resolved] (FLINK-29356) Pulsar Table Source code :implementation

2023-08-30 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen resolved FLINK-29356.
---
Fix Version/s: pulsar-4.1.0
   Resolution: Fixed

master via c71fc862e0d4a782c19f361d3bf581da836cca79

> Pulsar Table Source code :implementation
> 
>
> Key: FLINK-29356
> URL: https://issues.apache.org/jira/browse/FLINK-29356
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Assignee: Zili Chen
>Priority: Minor
> Fix For: pulsar-4.1.0
>
>






[jira] [Assigned] (FLINK-29356) Pulsar Table Source code :implementation

2023-08-30 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-29356:
-

Assignee: Zili Chen

> Pulsar Table Source code :implementation
> 
>
> Key: FLINK-29356
> URL: https://issues.apache.org/jira/browse/FLINK-29356
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Pulsar
>Affects Versions: 1.17.0
>Reporter: Yufei Zhang
>Assignee: Zili Chen
>Priority: Minor
>






[GitHub] [flink-connector-pulsar] tisonkun commented on pull request #56: [FLINK-26203] Basic table factory for Pulsar connector

2023-08-30 Thread via GitHub


tisonkun commented on PR #56:
URL: 
https://github.com/apache/flink-connector-pulsar/pull/56#issuecomment-1700296791

   @leonardBang Thank you! Merged.
   
   > sql jar and task for docs can start as well next
   
   Yep. Let me create the related JIRA tickets.





[GitHub] [flink-kubernetes-operator] mananmangal commented on pull request #648: [FLINK-32700] Support job drain for Savepoint upgrade mode

2023-08-30 Thread via GitHub


mananmangal commented on PR #648:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/648#issuecomment-1700295844

   Looks like something went wrong during the rebase; let me create another pull 
request and we can close this one.





[GitHub] [flink] flinkbot commented on pull request #23340: Dev flink 6912 test5

2023-08-30 Thread via GitHub


flinkbot commented on PR #23340:
URL: https://github.com/apache/flink/pull/23340#issuecomment-1700294924

   
   ## CI report:
   
   * 100cdf50addeb89ca565cc2d9e2f864b9b1787fb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] flinkbot commented on pull request #23339: [BP-1.18][FLINK-32821][examples] Include flink-connector-datagen for streaming examples

2023-08-30 Thread via GitHub


flinkbot commented on PR #23339:
URL: https://github.com/apache/flink/pull/23339#issuecomment-1700294811

   
   ## CI report:
   
   * 8b7483b80352548191ca15517e08a5dbeacc8334 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink-connector-pulsar] tisonkun merged pull request #56: [FLINK-26203] Basic table factory for Pulsar connector

2023-08-30 Thread via GitHub


tisonkun merged PR #56:
URL: https://github.com/apache/flink-connector-pulsar/pull/56





[GitHub] [flink] X-czh commented on pull request #23339: [BP-1.18][FLINK-32821][examples] Include flink-connector-datagen for streaming examples

2023-08-30 Thread via GitHub


X-czh commented on PR #23339:
URL: https://github.com/apache/flink/pull/23339#issuecomment-1700293301

   @huwh Could you help take a look?





[jira] [Commented] (FLINK-26203) Support Table API in Pulsar Connector

2023-08-30 Thread Zili Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760680#comment-17760680
 ] 

Zili Chen commented on FLINK-26203:
---

Yeah. I'm updating the tickets here now.

> Support Table API in Pulsar Connector
> -
>
> Key: FLINK-26203
> URL: https://issues.apache.org/jira/browse/FLINK-26203
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Pulsar
>Reporter: Yufei Zhang
>Assignee: Yufan Sheng
>Priority: Minor
>  Labels: Pulsar, auto-deprioritized-major, pull-request-available
>
> Currently Pulsar connector only supports DataStream API. We plan to support 
> Table API as well.





[jira] [Resolved] (FLINK-32798) Release Testing: Verify FLIP-294: Support Customized Catalog Modification Listener

2023-08-30 Thread Qingsheng Ren (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qingsheng Ren resolved FLINK-32798.
---
Resolution: Done

> Release Testing: Verify FLIP-294: Support Customized Catalog Modification 
> Listener
> --
>
> Key: FLINK-32798
> URL: https://issues.apache.org/jira/browse/FLINK-32798
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Hang Ruan
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: result.png, sqls.png, test.png
>
>
> The document about catalog modification listener is: 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/catalogs/#catalog-modification-listener





[jira] [Assigned] (FLINK-32755) Add quick start guide for Flink OLAP

2023-08-30 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo reassigned FLINK-32755:
--

Assignee: xiangyu feng

> Add quick start guide for Flink OLAP
> 
>
> Key: FLINK-32755
> URL: https://issues.apache.org/jira/browse/FLINK-32755
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: xiangyu feng
>Assignee: xiangyu feng
>Priority: Major
>  Labels: pull-request-available
>
> I propose to add a new {{QUICKSTART.md}} guide that provides instructions for 
> beginners to build a production-ready Flink OLAP service using the 
> flink-jdbc-driver, flink-sql-gateway, and a Flink session cluster.





[jira] [Commented] (FLINK-33004) Decoupling topology and network memory to support complex job topologies

2023-08-30 Thread dalongliu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760673#comment-17760673
 ] 

dalongliu commented on FLINK-33004:
---

cc [~guoweijie] 

> Decoupling topology and network memory to support complex job topologies
> 
>
> Key: FLINK-33004
> URL: https://issues.apache.org/jira/browse/FLINK-33004
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.18.0, 1.19.0
>Reporter: dalongliu
>Priority: Major
>
> Currently, the default value of the taskmanager.memory.network.fraction option 
> in Flink is 0.1, and once the job topology becomes complex enough, the job 
> runs with insufficient network buffers. We encountered this issue when running 
> TPC-DS query q9 and worked around it by raising 
> taskmanager.memory.network.fraction to 0.2. Ideally, network memory should be 
> decoupled from the job topology so that arbitrarily complex jobs can be 
> supported.





[jira] [Created] (FLINK-33004) Decoupling topology and network memory to support complex job topologies

2023-08-30 Thread dalongliu (Jira)
dalongliu created FLINK-33004:
-

 Summary: Decoupling topology and network memory to support complex 
job topologies
 Key: FLINK-33004
 URL: https://issues.apache.org/jira/browse/FLINK-33004
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Network
Affects Versions: 1.18.0, 1.19.0
Reporter: dalongliu


Currently, the default value of the taskmanager.memory.network.fraction option 
in Flink is 0.1, and once the job topology becomes complex enough, the job runs 
with insufficient network buffers. We encountered this issue when running TPC-DS 
query q9 and worked around it by raising taskmanager.memory.network.fraction to 
0.2. Ideally, network memory should be decoupled from the job topology so that 
arbitrarily complex jobs can be supported.
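As an interim workaround, the fraction can be raised in the TaskManager configuration. A minimal sketch of the relevant flink-conf.yaml entries (the min/max bounds shown are illustrative values, not recommendations):
{code:yaml}
# Fraction of total Flink memory reserved for network buffers (default: 0.1).
# Raising it to 0.2 worked around the buffer shortage for TPC-DS q9.
taskmanager.memory.network.fraction: 0.2
# Optional hard bounds on the network memory derived from the fraction
taskmanager.memory.network.min: 64mb
taskmanager.memory.network.max: 2gb
{code}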





[jira] [Created] (FLINK-33003) Flink ML add isolationForest algorithm

2023-08-30 Thread zhaozijun (Jira)
zhaozijun created FLINK-33003:
-

 Summary: Flink ML add isolationForest algorithm
 Key: FLINK-33003
 URL: https://issues.apache.org/jira/browse/FLINK-33003
 Project: Flink
  Issue Type: New Feature
  Components: Library / Machine Learning
Reporter: zhaozijun
 Attachments: IsolationForest.zip

I want to use Flink to solve some anomaly-detection problems, but Flink ML 
currently lacks anomaly-detection algorithms, so I want to add the isolation 
forest algorithm to the Flink ML library. During the implementation, to use 
IterationBody I studied the KMeans implementation and used iteration to compute 
the isolation forest's center points. In testing, however, I found that with 
parallelism > 1 and more than one iteration, the job sometimes succeeds and 
sometimes fails (failing to find the broadcast variable). Please help me review 
this and point out my mistake. Thank you 





[jira] [Commented] (FLINK-32798) Release Testing: Verify FLIP-294: Support Customized Catalog Modification Listener

2023-08-30 Thread Hang Ruan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760670#comment-17760670
 ] 

Hang Ruan commented on FLINK-32798:
---

[~renqs] [~zjureel] , I think we could complete this testing task. Thanks.

> Release Testing: Verify FLIP-294: Support Customized Catalog Modification 
> Listener
> --
>
> Key: FLINK-32798
> URL: https://issues.apache.org/jira/browse/FLINK-32798
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.18.0
>Reporter: Qingsheng Ren
>Assignee: Hang Ruan
>Priority: Major
> Fix For: 1.18.0
>
> Attachments: result.png, sqls.png, test.png
>
>
> The document about catalog modification listener is: 
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/catalogs/#catalog-modification-listener





[GitHub] [flink] flinkbot commented on pull request #23338: [FLINK-32952][table-planner] Fix scan reuse with readable metadata and watermark push down get wrong watermark error

2023-08-30 Thread via GitHub


flinkbot commented on PR #23338:
URL: https://github.com/apache/flink/pull/23338#issuecomment-1700235442

   
   ## CI report:
   
   * aa90d3540a792e72dea020a15b7cb67d35962aad UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] wangyang0918 commented on a diff in pull request #23327: [FLINK-32994][runtime] Adds human-readable toString implementations to the LeaderElectionDriver classes

2023-08-30 Thread via GitHub


wangyang0918 commented on code in PR #23327:
URL: https://github.com/apache/flink/pull/23327#discussion_r1311008016


##
flink-runtime/src/main/java/org/apache/flink/runtime/util/ZooKeeperUtils.java:
##
@@ -610,12 +611,41 @@ FileSystemStateStorageHelper 
createFileSystemStateStorage(
 prefix);
 }
 
-/** Creates a ZooKeeper path of the form "/a/b/.../z". */
-public static String generateZookeeperPath(String... paths) {
-return Arrays.stream(paths)
+/** Creates an absolute ZooKeeper path of the form "/a/b/.../z". */
+public static String generateAbsoluteZookeeperPath(String... pathElements) 
{
+return generateZookeeperPath("/", pathElements);
+}
+
+/** Creates a relative ZooKeeper path of the form "a/b/.../z". */
+public static String generateRelativeZooKeeperPath(String... pathElements) 
{
+return generateZookeeperPath("", pathElements);
+}
+
+private static String generateZookeeperPath(String prefix, String... 
pathElements) {
+return Arrays.stream(pathElements)
 .map(ZooKeeperUtils::trimSlashes)
 .filter(s -> !s.isEmpty())
-.collect(Collectors.joining("/", "/", ""));
+.collect(Collectors.joining("/", prefix, ""));
+}
+
+private static boolean isAbsolutePath(String path) {
+return path.startsWith("/");
+}
+
+/** Extracts the parent path from the given {@code path}. */
+public static String extractParentPath(String path) {

Review Comment:
   Got it.






[GitHub] [flink] swuferhong opened a new pull request, #23338: [FLINK-32952][table-planner] Fix scan reuse with readable metadata and watermark push down get wrong watermark error

2023-08-30 Thread via GitHub


swuferhong opened a new pull request, #23338:
URL: https://github.com/apache/flink/pull/23338

   
   
   ## What is the purpose of the change
   
   Cherry-pick to release-1.18.
   
   
   ## Brief change log
   
   ## Verifying this change
   
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? no docs
   





[GitHub] [flink] liuyongvs commented on pull request #22745: [FLINK-31691][table] Add built-in MAP_FROM_ENTRIES function.

2023-08-30 Thread via GitHub


liuyongvs commented on PR #22745:
URL: https://github.com/apache/flink/pull/22745#issuecomment-1700223225

   hi @snuyanzin do you have time to look again?





[jira] [Updated] (FLINK-32990) when execute Plan#translate function with CREATE TABLE AS statement, the CreateTableASOperation as Plan.translate function parameter exception

2023-08-30 Thread Licho Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Licho Sun updated FLINK-32990:
--
Description: 
The `translate` function's comment says a `ModifyOperation` can be passed as a 
parameter, but the implementation does not handle the `CreateTableASOperation` 
type.

I think the function at PlannerBase.scala:L191 (private[flink] def 
translateToRel(modifyOperation: ModifyOperation): RelNode) has no case for 
`CreateTableASOperation`.

  was:The `translate` function comment description `ModifyOperation` could be a 
parameter, but in the implementation function, there isn't a process for the 
`CreateTableASOperation` type.


> when execute Plan#translate function with CREATE TABLE AS statement,  the 
> CreateTableASOperation  as Plan.translate function parameter exception
> 
>
> Key: FLINK-32990
> URL: https://issues.apache.org/jira/browse/FLINK-32990
> Project: Flink
>  Issue Type: Bug
>Reporter: Licho Sun
>Priority: Major
>
> The `translate` function's comment says a `ModifyOperation` can be passed as 
> a parameter, but the implementation does not handle the 
> `CreateTableASOperation` type.
> I think the function at PlannerBase.scala:L191 (private[flink] def 
> translateToRel(modifyOperation: ModifyOperation): RelNode) has no case for 
> `CreateTableASOperation`.





[GitHub] [flink] swuferhong commented on pull request #23285: [FLINK-32952][table-planner] Fix scan reuse with readable metadata and watermark push down get wrong watermark error

2023-08-30 Thread via GitHub


swuferhong commented on PR #23285:
URL: https://github.com/apache/flink/pull/23285#issuecomment-1700165210

   > @swuferhong could you open a cherry-pick pull request for release-1.18 
branch?
   
   Sure.





[jira] [Closed] (FLINK-18445) Short circuit join condition for lookup join

2023-08-30 Thread lincoln lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lincoln lee closed FLINK-18445.
---
Resolution: Fixed

fixed in master: 360b97a710a9711bffbf320db2806c17557bb334

> Short circuit join condition for lookup join
> 
>
> Key: FLINK-18445
> URL: https://issues.apache.org/jira/browse/FLINK-18445
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Runtime
>Reporter: Rui Li
>Assignee: lincoln lee
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> pull-request-available
> Fix For: 1.19.0
>
>
> Consider the following query:
> {code}
> select *
> from probe
> left join
> build for system_time as of probe.ts
> on probe.key=build.key and probe.col is not null
> {code}
> In the current implementation, we look up each probe.key in build to decide 
> whether a match is found. A possible optimization is to skip the lookup for 
> rows whose {{col}} is null.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] lincoln-lil merged pull request #23316: [FLINK-18445][table] Add pre-filter optimization for lookup join

2023-08-30 Thread via GitHub


lincoln-lil merged PR #23316:
URL: https://github.com/apache/flink/pull/23316


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-11526) Support Chinese Website for Apache Flink

2023-08-30 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-11526:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Support Chinese Website for Apache Flink
> 
>
> Key: FLINK-11526
> URL: https://issues.apache.org/jira/browse/FLINK-11526
> Project: Flink
>  Issue Type: New Feature
>  Components: chinese-translation, Project Website
>Reporter: Jark Wu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This issue is an umbrella issue for tracking fully support Chinese for Flink 
> website (flink.apache.org).
> A more detailed description can be found in the proposal doc: 
> https://docs.google.com/document/d/1R1-uDq-KawLB8afQYrczfcoQHjjIhq6tvUksxrfhBl0/edit#



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-15012) Checkpoint directory not cleaned up

2023-08-30 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15012:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
pull-request-available  (was: auto-deprioritized-major auto-unassigned 
pull-request-available stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> Checkpoint directory not cleaned up
> ---
>
> Key: FLINK-15012
> URL: https://issues.apache.org/jira/browse/FLINK-15012
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.9.1
>Reporter: Nico Kruber
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I started a Flink cluster with 2 TMs using {{start-cluster.sh}} and the 
> following config (in addition to the default {{flink-conf.yaml}})
> {code:java}
> state.checkpoints.dir: file:///path/to/checkpoints/
> state.backend: rocksdb {code}
> After submitting a job with checkpoints enabled (every 5s), checkpoints show 
> up, e.g.
> {code:java}
> bb969f842bbc0ecc3b41b7fbe23b047b/
> ├── chk-2
> │   ├── 238969e1-6949-4b12-98e7-1411c186527c
> │   ├── 2702b226-9cfc-4327-979d-e5508ab2e3d5
> │   ├── 4c51cb24-6f71-4d20-9d4c-65ed6e826949
> │   ├── e706d574-c5b2-467a-8640-1885ca252e80
> │   └── _metadata
> ├── shared
> └── taskowned {code}
> If I shut down the cluster via {{stop-cluster.sh}}, these files will remain 
> on disk and not be cleaned up.
> In contrast, if I cancel the job, at least {{chk-2}} will be deleted, but the 
> (empty) directories are still left behind.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-18822) [umbrella] Improve and complete Change Data Capture formats

2023-08-30 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18822:
---
  Labels: auto-deprioritized-major auto-deprioritized-minor auto-unassigned 
 (was: auto-deprioritized-major auto-unassigned stale-minor)
Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> [umbrella] Improve and complete Change Data Capture formats
> ---
>
> Key: FLINK-18822
> URL: https://issues.apache.org/jira/browse/FLINK-18822
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Jark Wu
>Priority: Not a Priority
>  Labels: auto-deprioritized-major, auto-deprioritized-minor, 
> auto-unassigned
>
> This is an umbrella issue to collect new features and improvements and bugs 
> for CDC formats. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32509) avoid using skip in InputStreamFSInputWrapper.seek

2023-08-30 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32509:
---
Labels: pull-request-available stale-major  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither it nor its Sub-Tasks have been updated 
for 60 days. I have gone ahead and added the "stale-major" label to the issue. 
If this ticket is a Major, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be deprioritized.


> avoid using skip in InputStreamFSInputWrapper.seek
> --
>
> Key: FLINK-32509
> URL: https://issues.apache.org/jira/browse/FLINK-32509
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core
>Affects Versions: 1.18.0
>Reporter: Libin Qin
>Priority: Major
>  Labels: pull-request-available, stale-major
>
> `InputStream.skip` does not signal end-of-file by returning -1.
> The Javadoc of InputStream says "The skip method may, for a variety of 
> reasons, end up skipping over some smaller number of bytes, possibly 0."
> FileInputStream, on the other hand, allows skipping any number of bytes past 
> the end of the file.
> As a result, the "seek" method of InputStreamFSInputWrapper will loop forever 
> if the desired position exceeds the end of the file.
>  
> I reproduced it with the following case:
>  
> {code:java}
> byte[] bytes = "flink".getBytes();
> try (InputStream inputStream = new ByteArrayInputStream(bytes)) {
> InputStreamFSInputWrapper wrapper = new InputStreamFSInputWrapper(inputStream);
> wrapper.seek(20);
> } {code}
> I found a commons-io issue that discusses the same problem with skip:
> https://issues.apache.org/jira/browse/IO-203
>  
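A seek that cannot loop forever can be written by falling back from `skip()` to `read()` and treating `read() == -1` as end-of-stream. The sketch below is illustrative only: `SafeSeekWrapper` is a hypothetical name, not Flink's actual `InputStreamFSInputWrapper`, and it assumes a forward-only stream.

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Sketch of a skip-based seek with an explicit EOF check, so that seeking
// past the end of the stream fails fast instead of spinning.
public class SafeSeekWrapper {
    private final InputStream in;
    private long pos;

    public SafeSeekWrapper(InputStream in) {
        this.in = in;
    }

    public void seek(long desired) throws IOException {
        while (pos < desired) {
            long skipped = in.skip(desired - pos);
            if (skipped > 0) {
                pos += skipped;
            } else {
                // skip() may legally return 0 even before EOF; probe with
                // read(), which does return -1 at end-of-stream.
                int b = in.read();
                if (b == -1) {
                    throw new EOFException(
                            "Cannot seek to " + desired + ": stream ends at " + pos);
                }
                pos++;
            }
        }
    }
}
```

With the reproduction case above, `seek(20)` on the 5-byte "flink" stream would then raise an `EOFException` instead of hanging.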



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32264) Add FIELD support in SQL & Table API

2023-08-30 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-32264:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Add FIELD support in SQL & Table API
> 
>
> Key: FLINK-32264
> URL: https://issues.apache.org/jira/browse/FLINK-32264
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Bonnie Varghese
>Assignee: Hanyu Zheng
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.18.0
>
>
> FIELD Function
> Description
> The FIELD function returns the position of a value in a list of values (val1, 
> val2, val3, ...).
> Syntax
> The syntax for the FIELD function is:
> FIELD( value, ...)
> Parameters or Arguments
> value
> The value to find in the list.
> val1, val2, val3, ...
> The list of values that is to be searched.
> Note
> If value is not found in the list of values (val1, val2, val3, ...), the 
> FIELD function will return 0.
> If value is NULL, the FIELD function will return 0.
> If the list of values is NULL, the FIELD function will return 0.
> Example
> Let's look at some FIELD function examples and explore how to use the FIELD 
> function.
> For example:
>  
> {code:java}
> SELECT FIELD('b', 'a', 'b', 'c', 'd', 'e', 'f');
> Result: 2
> SELECT FIELD('B', 'a', 'b', 'c', 'd', 'e', 'f');
> Result: 2
> SELECT FIELD(15, 10, 20, 15, 40);
> Result: 3
> SELECT FIELD('c', 'a', 'b');
> Result: 0
> SELECT FIELD('g', '');
> Result: 0
> SELECT FIELD(null, 'a', 'b', 'c');
> Result: 0
> SELECT FIELD('a', null);
> Result: 0
> {code}
> see also:
> MySQL:https://dev.mysql.com/doc/refman/8.0/en/string-functions.html#function_field
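The semantics above can be captured in a few lines of Java. This is an illustrative sketch, not Flink's planner implementation: `FieldFunction.field` is a hypothetical helper, and string comparison is made case-insensitive to match the `FIELD('B', ...)` example (MySQL's behavior under its default collation).

```java
// Sketch of the FIELD semantics described above: 1-based position of value
// in the list, 0 for NULL value, NULL list, or value not found.
public class FieldFunction {
    public static int field(Object value, Object... list) {
        if (value == null || list == null) {
            return 0; // NULL value or NULL list
        }
        for (int i = 0; i < list.length; i++) {
            Object candidate = list[i];
            if (candidate == null) {
                continue;
            }
            boolean match =
                    (value instanceof String s && candidate instanceof String c)
                            ? s.equalsIgnoreCase(c) // case-insensitive for strings
                            : value.equals(candidate);
            if (match) {
                return i + 1; // positions are 1-based
            }
        }
        return 0; // not found
    }
}
```

For example, `field("b", "a", "b", "c")` returns 2 and `field(15, 10, 20, 15, 40)` returns 3, matching the SQL examples above.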



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-29459) Sink v2 has bugs in supporting legacy v1 implementations with global committer

2023-08-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760626#comment-17760626
 ] 

Martijn Visser commented on FLINK-29459:


[~gaoyunhaii] Sorry for the late reply, let's continue the discussion here, 
since we don't have to involve the change of the API. I know that [~tzulitai] 
is also interested in this topic, so let us know what you think!

> Sink v2 has bugs in supporting legacy v1 implementations with global committer
> --
>
> Key: FLINK-29459
> URL: https://issues.apache.org/jira/browse/FLINK-29459
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.16.0, 1.17.0, 1.15.3
>Reporter: Yun Gao
>Assignee: Yun Gao
>Priority: Major
> Fix For: 1.18.0, 1.16.3, 1.17.2
>
>
> Currently, when supporting Sink implementations that use the version 1 
> interface, there are issues after restoring from a checkpoint following a 
> failover:
>  # In the global committer operator, when restoring SubtaskCommittableManager, 
> the subtask id is replaced with the one of the current operator. The id is 
> originally the id of the sender task (0 ~ N - 1), but after restoring it is 
> always 0. This causes a duplicate-key exception during restore.
>  # In the committer operator, the subtaskId of CheckpointCommittableManagerImpl 
> is always restored to 0 after failover for all subtasks. As a result, the 
> summary sent to the global committer carries the wrong subtask id.
>  # In the committer operator, the checkpoint id of SubtaskCommittableManager is 
> always restored to 1 after failover, so subsequent committables sent to the 
> global committer carry the wrong checkpoint id.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32962) Failure to install python dependencies from requirements file

2023-08-30 Thread Aleksandr Pilipenko (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Pilipenko updated FLINK-32962:

Affects Version/s: 1.18.0

> Failure to install python dependencies from requirements file
> -
>
> Key: FLINK-32962
> URL: https://issues.apache.org/jira/browse/FLINK-32962
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.15.4, 1.16.2, 1.18.0, 1.17.1
>Reporter: Aleksandr Pilipenko
>Priority: Major
>  Labels: pull-request-available
>
> We have encountered an issue where Flink fails to install Python dependencies 
> from a requirements file if the Python environment contains setuptools 
> version 67.5.0 or above.
> Flink job fails with following error:
> {code:java}
> py4j.protocol.Py4JJavaError: An error occurred while calling o118.await.
> : java.util.concurrent.ExecutionException: 
> org.apache.flink.table.api.TableException: Failed to wait job finish
> at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
> at 
> org.apache.flink.table.api.internal.TableResultImpl.awaitInternal(TableResultImpl.java:118)
> at 
> org.apache.flink.table.api.internal.TableResultImpl.await(TableResultImpl.java:81)
> ...
> Caused by: java.util.concurrent.ExecutionException: 
> org.apache.flink.client.program.ProgramInvocationException: Job failed 
> (JobID: 2ca4026944022ac4537c503464d4c47f)
> at 
> java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> at 
> java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
> at 
> org.apache.flink.table.api.internal.InsertResultProvider.hasNext(InsertResultProvider.java:83)
> ... 6 more
> Caused by: org.apache.flink.client.program.ProgramInvocationException: Job 
> failed (JobID: 2ca4026944022ac4537c503464d4c47f)
> at 
> org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:130)
> at 
> java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:642)
> at 
> java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506)
> at 
> java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073)
> ...
> Caused by: java.io.IOException: java.io.IOException: Failed to execute the 
> command: /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install 
> --ignore-installed -r 
> /var/folders/rb/q_3h54_94b57gz_qkbp593vwgn/T/tm_localhost:63904-4178d3/blobStorage/job_2ca4026944022ac4537c503464d4c47f/blob_p-feb04bed919e628d98b1ef085111482f05bf43a1-1f5c3fda7cbdd6842e950e04d9d807c5
>  --install-option 
> --prefix=/var/folders/rb/q_3h54_94b57gz_qkbp593vwgn/T/python-dist-c17912db-5aba-439f-9e39-69cb8c957bec/python-requirements
> output:
> Usage:
>   /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install [options] 
>  [package-index-options] ...
>   /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install [options] -r 
>  [package-index-options] ...
>   /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install [options] 
> [-e]  ...
>   /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install [options] 
> [-e]  ...
>   /Users/z3d1k/.pyenv/versions/3.9.17/bin/python -m pip install [options] 
>  ...
> no such option: --install-option
> at 
> org.apache.flink.python.util.PythonEnvironmentManagerUtils.pipInstallRequirements(PythonEnvironmentManagerUtils.java:144)
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.installRequirements(AbstractPythonEnvironmentManager.java:215)
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.lambda$open$0(AbstractPythonEnvironmentManager.java:126)
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager$PythonEnvResources.createResource(AbstractPythonEnvironmentManager.java:435)
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager$PythonEnvResources.getOrAllocateSharedResource(AbstractPythonEnvironmentManager.java:402)
> at 
> org.apache.flink.python.env.AbstractPythonEnvironmentManager.open(AbstractPythonEnvironmentManager.java:114)
> at 
> org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:238)
> at 
> org.apache.flink.streaming.api.operators.python.process.AbstractExternalPythonFunctionOperator.open(AbstractExternalPythonFunctionOperator.java:57)
> at 
> org.apache.flink.table.runtime.operators.python.AbstractStatelessFunctionOperator.open(AbstractStatelessFunctionOperator.java:92)
> at 
> 

[jira] [Closed] (FLINK-33002) Bump snappy-java from 1.1.4 to 1.1.10.1

2023-08-30 Thread Martijn Visser (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn Visser closed FLINK-33002.
--
Fix Version/s: statefun-3.3.0
   Resolution: Fixed

Fixed in:

apache/flink-statefun 4b1e0ff0f21d3299b0e23c2dcbf782bfb959d942

> Bump snappy-java from 1.1.4 to 1.1.10.1
> ---
>
> Key: FLINK-33002
> URL: https://issues.apache.org/jira/browse/FLINK-33002
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Stateful Functions
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
> Fix For: statefun-3.3.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33002) Bump snappy-java from 1.1.4 to 1.1.10.1

2023-08-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33002:
---
Labels: pull-request-available  (was: )

> Bump snappy-java from 1.1.4 to 1.1.10.1
> ---
>
> Key: FLINK-33002
> URL: https://issues.apache.org/jira/browse/FLINK-33002
> Project: Flink
>  Issue Type: Technical Debt
>  Components: Stateful Functions
>Reporter: Martijn Visser
>Assignee: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink-statefun] MartijnVisser merged pull request #330: [FLINK-33002] Bump snappy-java from 1.1.4 to 1.1.10.1 in /statefun-flink

2023-08-30 Thread via GitHub


MartijnVisser merged PR #330:
URL: https://github.com/apache/flink-statefun/pull/330


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #23337: [FLINK-32962][python] Remove pip version check on installing dependencies.

2023-08-30 Thread via GitHub


flinkbot commented on PR #23337:
URL: https://github.com/apache/flink/pull/23337#issuecomment-1699727538

   
   ## CI report:
   
   * c6a8c746e52ec4ca81671f32b19bebccb23a1600 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-statefun] dependabot[bot] commented on pull request #327: Bump jackson-databind from 2.13.2.2 to 2.13.4.2

2023-08-30 Thread via GitHub


dependabot[bot] commented on PR #327:
URL: https://github.com/apache/flink-statefun/pull/327#issuecomment-1699759947

   Looks like this PR is already up-to-date with master! If you'd still like to 
recreate it from scratch, overwriting any edits, you can request `@dependabot 
recreate`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (FLINK-33002) Bump snappy-java from 1.1.4 to 1.1.10.1

2023-08-30 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-33002:
--

 Summary: Bump snappy-java from 1.1.4 to 1.1.10.1
 Key: FLINK-33002
 URL: https://issues.apache.org/jira/browse/FLINK-33002
 Project: Flink
  Issue Type: Technical Debt
  Components: Stateful Functions
Reporter: Martijn Visser
Assignee: Martijn Visser






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33001) KafkaSource in batch mode failing with exception if topic partition is empty

2023-08-30 Thread Martijn Visser (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760601#comment-17760601
 ] 

Martijn Visser commented on FLINK-33001:


[~abdul] Can you please verify this with the externalized version of the Kafka 
connector, given that it also has bug fixes for some other things? See 
https://flink.apache.org/downloads/#apache-flink-kafka-connector-300

> KafkaSource in batch mode failing with exception if topic partition is empty
> 
>
> Key: FLINK-33001
> URL: https://issues.apache.org/jira/browse/FLINK-33001
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.7, 1.14.6, 1.17.1
>Reporter: Abdul
>Priority: Major
>
> If the Kafka topic is empty in Batch mode, there is an exception while 
> processing it. This bug was supposedly fixed but unfortunately, the exception 
> still occurs. The original bug was reported as 
> https://issues.apache.org/jira/browse/FLINK-27041
> We tried to backport the fix, but it still doesn't work. 
>  * The problem will occur in case of the DEBUG level of logger for class 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
>  * The same problems will occur in other versions of Flink, at least in the 
> 1.15 release branch and tag release-1.15.4
>  * The same problem also occurs in Flink 1.17.1 and 1.14
>  
> The minimal code to produce this is 
>  
> {code:java}
>   final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   env.setRuntimeMode(RuntimeExecutionMode.BATCH);
>   KafkaSource kafkaSource = KafkaSource
>   .builder()
>   .setBootstrapServers("localhost:9092")
>   .setTopics("test_topic")
>   .setValueOnlyDeserializer(new 
> SimpleStringSchema())
>   .setBounded(OffsetsInitializer.latest())
>   .build();
>   DataStream stream = env.fromSource(
>   kafkaSource,
>   WatermarkStrategy.noWatermarks(),
>   "Kafka Source"
>   );
>   stream.print();
>   env.execute("Flink KafkaSource test job"); {code}
> This produces exception: 
> {code:java}
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception    at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
>     at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
>     at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
>     at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records 
>    at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     ... 1 more
> Caused by: java.lang.IllegalStateException: You can only check the position 
> for partitions assigned to this consumer.
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1737)
>     at 
> 

[GitHub] [flink-kubernetes-operator] mananmangal commented on a diff in pull request #648: [FLINK-32700] Support job drain for Savepoint upgrade mode

2023-08-30 Thread via GitHub


mananmangal commented on code in PR #648:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/648#discussion_r1310744500


##
flink-kubernetes-operator/src/main/java/org/apache/flink/kubernetes/operator/config/KubernetesOperatorConfigOptions.java:
##
@@ -566,4 +566,12 @@ public static String operatorConfigKey(String key) {
 .defaultValue(false)
 .withDescription(
 "Indicate whether a savepoint must be taken when 
deleting a FlinkDeployment or FlinkSessionJob.");
+
+@Documentation.Section(SECTION_DYNAMIC)
+public static final ConfigOption DRAIN_ON_SAVEPOINT_DELETION =
+operatorConfig("job.drain-on-savepoint-deletion")
+.booleanType()
+.defaultValue(false)
+.withDescription(
+"Indicate whether a job should be drained before 
deleting a FlinkDeployment or FlinkSessionJob, only if savepoint on deletion is 
enabled.");

Review Comment:
   updated the description.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] gyfora commented on pull request #648: [FLINK-32700] Support job drain for Savepoint upgrade mode

2023-08-30 Thread via GitHub


gyfora commented on PR #648:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/648#issuecomment-1699740847

   Looks good, please rebase on main and I will merge this tomorrow 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] z3d1k opened a new pull request, #23337: [FLINK-32962][python] Remove pip version check on installing dependencies.

2023-08-30 Thread via GitHub


z3d1k opened a new pull request, #23337:
URL: https://github.com/apache/flink/pull/23337

   ## What is the purpose of the change
   
   Removing `pip` version check when installing job python dependencies from 
requirements file.
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
*flink-python/pyflink/table/tests/test_dependency.py*.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] ferenc-csaky opened a new pull request, #23336: [FLINK-32987][tests] Fix exception handling for BlobClientTest.testSocketTimeout

2023-08-30 Thread via GitHub


ferenc-csaky opened a new pull request, #23336:
URL: https://github.com/apache/flink/pull/23336

   ## What is the purpose of the change
   
   Fix the too-strict exception evaluation in `BlobClientTest.testSocketTimeout` 
that was introduced unintentionally during the JUnit4 -> JUnit5 migration.
   
   ## Brief change log
   
   Reverted the logic to use `ExceptionUtils` as it was before.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-32987) BlobClientSslTest>BlobClientTest.testSocketTimeout expected SocketTimeoutException but identified SSLException

2023-08-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-32987:
---
Labels: pull-request-available test-stability  (was: test-stability)

> BlobClientSslTest>BlobClientTest.testSocketTimeout expected 
> SocketTimeoutException but identified SSLException
> --
>
> Key: FLINK-32987
> URL: https://issues.apache.org/jira/browse/FLINK-32987
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=8692
> {code}
> Aug 29 03:28:11 03:28:11.280 [ERROR]   
> BlobClientSslTest>BlobClientTest.testSocketTimeout:512 
> Aug 29 03:28:11 Expecting a throwable with cause being an instance of:
> Aug 29 03:28:11   java.net.SocketTimeoutException
> Aug 29 03:28:11 but was an instance of:
> Aug 29 03:28:11   javax.net.ssl.SSLException
> Aug 29 03:28:11 Throwable that failed the check:
> Aug 29 03:28:11 
> Aug 29 03:28:11 java.io.IOException: GET operation failed: Read timed out
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClient.getInternal(BlobClient.java:231)
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClientTest.lambda$testSocketTimeout$2(BlobClientTest.java:510)
> Aug 29 03:28:11   at 
> org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
> Aug 29 03:28:11   at 
> org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
> Aug 29 03:28:11   at 
> org.assertj.core.api.Assertions.catchThrowable(Assertions.java:1366)
> Aug 29 03:28:11   at 
> org.assertj.core.api.Assertions.assertThatThrownBy(Assertions.java:1210)
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClientTest.testSocketTimeout(BlobClientTest.java:508)
> Aug 29 03:28:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 29 03:28:11   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 29 03:28:11   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 29 03:28:11   at java.lang.reflect.Method.invoke(Method.java:498)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
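
The failure above is a read timeout that surfaces as an `SSLException` on a TLS socket instead of the `SocketTimeoutException` the test expected. A minimal sketch of a tolerant check that walks the cause chain and accepts either type (this is illustrative only, not the actual change from the linked pull request; `CauseChainCheck` and `hasCause` are made-up names):

```java
import java.io.IOException;
import java.net.SocketTimeoutException;
import javax.net.ssl.SSLException;

public class CauseChainCheck {

    /** Returns true if any throwable in the cause chain is an instance of one of the given types. */
    static boolean hasCause(Throwable t, Class<?>... types) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            for (Class<?> type : types) {
                if (type.isInstance(cur)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulate the reported failure: an IOException whose cause is an SSLException
        // rather than the SocketTimeoutException the test originally expected.
        Throwable reported =
                new IOException("GET operation failed: Read timed out",
                        new SSLException("Read timed out"));
        // A tolerant assertion accepts either cause type, since a read timeout may
        // surface as SocketTimeoutException (plain socket) or SSLException (TLS socket).
        System.out.println(
                hasCause(reported, SocketTimeoutException.class, SSLException.class)); // prints "true"
    }
}
```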


[GitHub] [flink] flinkbot commented on pull request #23336: [FLINK-32987][tests] Fix exception handling for BlobClientTest.testSocketTimeout

2023-08-30 Thread via GitHub


flinkbot commented on PR #23336:
URL: https://github.com/apache/flink/pull/23336#issuecomment-1699646820

   
   ## CI report:
   
   * ad170cdaeb797cf40a8e1f490810b573199ab108 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Commented] (FLINK-32987) BlobClientSslTest>BlobClientTest.testSocketTimeout expected SocketTimeoutException but identified SSLException

2023-08-30 Thread Ferenc Csaky (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760563#comment-17760563
 ] 

Ferenc Csaky commented on FLINK-32987:
--

Definitely, pls. assign it to me, I'll take care of it. Thanks for the detailed 
description!

> BlobClientSslTest>BlobClientTest.testSocketTimeout expected 
> SocketTimeoutException but identified SSLException
> --
>
> Key: FLINK-32987
> URL: https://issues.apache.org/jira/browse/FLINK-32987
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=8692
> {code}
> Aug 29 03:28:11 03:28:11.280 [ERROR]   
> BlobClientSslTest>BlobClientTest.testSocketTimeout:512 
> Aug 29 03:28:11 Expecting a throwable with cause being an instance of:
> Aug 29 03:28:11   java.net.SocketTimeoutException
> Aug 29 03:28:11 but was an instance of:
> Aug 29 03:28:11   javax.net.ssl.SSLException
> Aug 29 03:28:11 Throwable that failed the check:
> Aug 29 03:28:11 
> Aug 29 03:28:11 java.io.IOException: GET operation failed: Read timed out
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClient.getInternal(BlobClient.java:231)
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClientTest.lambda$testSocketTimeout$2(BlobClientTest.java:510)
> Aug 29 03:28:11   at 
> org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
> Aug 29 03:28:11   at 
> org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
> Aug 29 03:28:11   at 
> org.assertj.core.api.Assertions.catchThrowable(Assertions.java:1366)
> Aug 29 03:28:11   at 
> org.assertj.core.api.Assertions.assertThatThrownBy(Assertions.java:1210)
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClientTest.testSocketTimeout(BlobClientTest.java:508)
> Aug 29 03:28:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 29 03:28:11   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 29 03:28:11   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 29 03:28:11   at java.lang.reflect.Method.invoke(Method.java:498)
> [...]
> {code}





[GitHub] [flink-kubernetes-operator] mxm commented on a diff in pull request #660: [FLINK-32991] Ensure registration of all scaling metrics

2023-08-30 Thread via GitHub


mxm commented on code in PR #660:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/660#discussion_r1310569105


##
flink-kubernetes-operator-autoscaler/src/test/java/org/apache/flink/kubernetes/operator/autoscaler/AutoScalerFlinkMetricsTest.java:
##
@@ -66,8 +67,10 @@ public void testMetricsRegistration() {
 initRecommendedParallelism(evaluatedMetrics);
 lastEvaluatedMetrics.put(resourceID, evaluatedMetrics);
 
-metrics.registerScalingMetrics(() -> 
lastEvaluatedMetrics.get(resourceID));
-metrics.registerScalingMetrics(() -> 
lastEvaluatedMetrics.get(resourceID));
+metrics.registerScalingMetrics(
+() -> List.of(jobVertexID), () -> 
lastEvaluatedMetrics.get(resourceID));
+metrics.registerScalingMetrics(
+() -> List.of(jobVertexID), () -> 
lastEvaluatedMetrics.get(resourceID));

Review Comment:
   I added this test in 
https://github.com/apache/flink-kubernetes-operator/pull/660/commits/7cc8bbc51cbf3aeba9d035f50bf0d1b76e8b5e8f






[jira] [Updated] (FLINK-33001) KafkaSource in batch mode failing with exception if topic partition is empty

2023-08-30 Thread Jun Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Qin updated FLINK-33001:

Description: 
If the Kafka topic is empty in Batch mode, there is an exception while 
processing it. This bug was supposedly fixed but unfortunately, the exception 
still occurs. The original bug was reported as this 
https://issues.apache.org/jira/browse/FLINK-27041

We tried to backport it but it still doesn't work. 
 * The problem will occur in case of the DEBUG level of logger for class 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 * The same problems will occur in other versions of Flink, at least in the 
1.15 release branch and tag release-1.15.4
 * The same problem also occurs in Flink 1.17.1 and 1.14

 

The minimal code to produce this is 

 
{code:java}
final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH);

KafkaSource kafkaSource = KafkaSource
.builder()
.setBootstrapServers("localhost:9092")
.setTopics("test_topic")
.setValueOnlyDeserializer(new 
SimpleStringSchema())
.setBounded(OffsetsInitializer.latest())
.build();

DataStream stream = env.fromSource(
kafkaSource,
WatermarkStrategy.noWatermarks(),
"Kafka Source"
);
stream.print();

env.execute("Flink KafkaSource test job"); {code}
This produces exception: 
{code:java}
Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
exception    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
    at 
org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
    at 
org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
    at 
org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
    at 
org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
    at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
    at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583) 
   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)    
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)    at 
java.lang.Thread.run(Thread.java:748)Caused by: java.lang.RuntimeException: 
SplitFetcher thread 0 received unexpected exception while polling the records   
 at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)    
at java.util.concurrent.FutureTask.run(FutureTask.java:266)    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
   ... 1 moreCaused by: java.lang.IllegalStateException: You can only check 
the position for partitions assigned to this consumer.    at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1737)
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1704)
    at 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.maybeLogSplitChangesHandlingResult(KafkaPartitionSplitReader.java:375)
    at 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.handleSplitsChanges(KafkaPartitionSplitReader.java:232)
    at 
org.apache.flink.connector.base.source.reader.fetcher.AddSplitsTask.run(AddSplitsTask.java:49)
    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:138)
    ... 6 more {code}
 

The only *workaround* that works fine right now is to change the DEBUG level to 
INFO for logging. 

 
{code:java}
logger.KafkaPartitionSplitReader.name = 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader 

logger.KafkaPartitionSplitReader.level = INFO{code}

[jira] [Updated] (FLINK-33001) KafkaSource in batch mode failing with exception if topic partition is empty

2023-08-30 Thread Abdul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdul updated FLINK-33001:
--
Environment: (was: The only workaround that works fine right now is to 
change the DEBUG level to INFO for logging. 

 
{code:java}
logger.KafkaPartitionSplitReader.name = 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader 

logger.KafkaPartitionSplitReader.level = INFO{code}
It is strange that changing this doesn't cause the above exception. )

> KafkaSource in batch mode failing with exception if topic partition is empty
> 
>
> Key: FLINK-33001
> URL: https://issues.apache.org/jira/browse/FLINK-33001
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.7, 1.14.6, 1.17.1
>Reporter: Abdul
>Priority: Major
>
> If the Kafka topic is empty in Batch mode, there is an exception while 
> processing it. This bug was supposedly fixed but unfortunately, the exception 
> still occurs. The original bug was reported as this 
> https://issues.apache.org/jira/browse/FLINK-27041
> We tried to backport it but it still doesn't work. 
>  * The problem will occur in case of the DEBUG level of logger for class 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
>  * The same problems will occur in other versions of Flink, at least in the 
> 1.15 release branch and tag release-1.15.4
>  * The same problem also occurs in Flink 1.17.1 and 1.14
>  
> The minimal code to produce this is 
>  
> {code:java}
>   final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   env.setRuntimeMode(RuntimeExecutionMode.BATCH);
>   KafkaSource kafkaSource = KafkaSource
>   .builder()
>   .setBootstrapServers("localhost:9092")
>   .setTopics("test_topic")
>   .setValueOnlyDeserializer(new 
> SimpleStringSchema())
>   .setBounded(OffsetsInitializer.latest())
>   .build();
>   DataStream stream = env.fromSource(
>   kafkaSource,
>   WatermarkStrategy.noWatermarks(),
>   "Kafka Source"
>   );
>   stream.print();
>   env.execute("Flink KafkaSource test job"); {code}
> This produces exception: 
> {code:java}
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception    at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
>     at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
>     at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
>     at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
>     at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
>     at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
>     at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
>     at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583)
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758) 
>    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)    at 
> java.lang.Thread.run(Thread.java:748)Caused by: java.lang.RuntimeException: 
> SplitFetcher thread 0 received unexpected exception while polling the records 
>    at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
>     at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
>     at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)   
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     ... 1 moreCaused by: java.lang.IllegalStateException: You can only 
> check the position for partitions assigned to this consumer.    at 
> 
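
The `IllegalStateException` above comes from a DEBUG-only call to `KafkaConsumer.position()` for a partition that is no longer assigned to the consumer (empty partitions are unassigned before the logging runs). A self-contained sketch of the guard pattern that avoids querying removed partitions; plain collections stand in for the consumer, and all names here are illustrative, not Flink's actual fix:

```java
import java.util.Map;
import java.util.Set;

public class GuardedDebugLog {

    /** Hypothetical stand-in for a consumer that only knows positions of assigned partitions. */
    static long position(Map<String, Long> positions, String partition) {
        Long pos = positions.get(partition);
        if (pos == null) {
            // Mirrors KafkaConsumer.position(): querying an unassigned partition throws.
            throw new IllegalStateException(
                    "You can only check the position for partitions assigned to this consumer.");
        }
        return pos;
    }

    /** Builds the debug line only for partitions that are still assigned. */
    static String describeAssigned(Map<String, Long> positions, Set<String> requested) {
        StringBuilder sb = new StringBuilder();
        for (String p : requested) {
            if (positions.containsKey(p)) { // guard: skip removed (empty) partitions
                sb.append(p).append('=').append(position(positions, p)).append(' ');
            }
        }
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        Map<String, Long> assigned = Map.of("topic-0", 42L);
        // "topic-1" was removed because it was empty; the guard keeps the debug path safe.
        System.out.println(describeAssigned(assigned, Set.of("topic-0", "topic-1"))); // prints "topic-0=42"
    }
}
```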

[jira] [Updated] (FLINK-33001) KafkaSource in batch mode failing with exception if topic partition is empty

2023-08-30 Thread Abdul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdul updated FLINK-33001:
--
Description: 
If the Kafka topic is empty in Batch mode, there is an exception while 
processing it. This bug was supposedly fixed but unfortunately, the exception 
still occurs. The original bug was reported as this 
https://issues.apache.org/jira/browse/FLINK-27041

We tried to backport it but it still doesn't work. 
 * The problem will occur in case of the DEBUG level of logger for class 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 * The same problems will occur in other versions of Flink, at least in the 
1.15 release branch and tag release-1.15.4
 * The same problem also occurs in Flink 1.17.1 and 1.14

 

The minimal code to produce this is 

 
{code:java}
final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH);

KafkaSource kafkaSource = KafkaSource
.builder()
.setBootstrapServers("localhost:9092")
.setTopics("test_topic")
.setValueOnlyDeserializer(new 
SimpleStringSchema())
.setBounded(OffsetsInitializer.latest())
.build();

DataStream stream = env.fromSource(
kafkaSource,
WatermarkStrategy.noWatermarks(),
"Kafka Source"
);
stream.print();

env.execute("Flink KafkaSource test job"); {code}
This produces exception: 
{code:java}
Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
exception    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
    at 
org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
    at 
org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
    at 
org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
    at 
org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
    at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
    at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583) 
   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)    
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)    at 
java.lang.Thread.run(Thread.java:748)Caused by: java.lang.RuntimeException: 
SplitFetcher thread 0 received unexpected exception while polling the records   
 at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)    
at java.util.concurrent.FutureTask.run(FutureTask.java:266)    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
   ... 1 moreCaused by: java.lang.IllegalStateException: You can only check 
the position for partitions assigned to this consumer.    at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1737)
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1704)
    at 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.maybeLogSplitChangesHandlingResult(KafkaPartitionSplitReader.java:375)
    at 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.handleSplitsChanges(KafkaPartitionSplitReader.java:232)
    at 
org.apache.flink.connector.base.source.reader.fetcher.AddSplitsTask.run(AddSplitsTask.java:49)
    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:138)
    ... 6 more {code}
 

The only *workaround* that works fine right now is to change the DEBUG level to 
INFO for logging. 

 
{code:java}
logger.KafkaPartitionSplitReader.name = 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader 

logger.KafkaPartitionSplitReader.level = INFO{code}

[GitHub] [flink-kubernetes-operator] 1996fanrui commented on a diff in pull request #660: [FLINK-32991] Ensure registration of all scaling metrics

2023-08-30 Thread via GitHub


1996fanrui commented on code in PR #660:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/660#discussion_r1310383497


##
flink-kubernetes-operator-autoscaler/src/test/java/org/apache/flink/kubernetes/operator/autoscaler/AutoScalerFlinkMetricsTest.java:
##
@@ -66,8 +67,10 @@ public void testMetricsRegistration() {
 initRecommendedParallelism(evaluatedMetrics);
 lastEvaluatedMetrics.put(resourceID, evaluatedMetrics);
 
-metrics.registerScalingMetrics(() -> 
lastEvaluatedMetrics.get(resourceID));
-metrics.registerScalingMetrics(() -> 
lastEvaluatedMetrics.get(resourceID));
+metrics.registerScalingMetrics(
+() -> List.of(jobVertexID), () -> 
lastEvaluatedMetrics.get(resourceID));
+metrics.registerScalingMetrics(
+() -> List.of(jobVertexID), () -> 
lastEvaluatedMetrics.get(resourceID));

Review Comment:
   Sorry, Max. I didn't express it clearly.
   
   This PR fixes a bug where some metrics aren't ready when 
`registerScalingMetrics` is called, so those metrics could not be registered 
even if they became ready later.
   
   Could we add a test for this case? It should fail without your PR and 
succeed with it. Does that make sense?
   
   Please let me know if it's not clear, thanks~



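
The review thread above concerns registering scaling metrics through suppliers, so that values which are not yet available at registration time are still reported once they arrive. A toy sketch of that supplier-based (lazy) gauge pattern; `LazyGauge` and its registry are hypothetical names, not the operator's actual API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class LazyGauge {

    /** Hypothetical gauge registry: stores suppliers, reads values only when polled. */
    static final Map<String, Supplier<Double>> gauges = new HashMap<>();

    static void register(String name, Supplier<Double> value) {
        // Re-registration is a no-op, mirroring idempotent metric registration.
        gauges.putIfAbsent(name, value);
    }

    public static void main(String[] args) {
        Map<String, Double> lastEvaluated = new HashMap<>();
        // Register before any metric has been evaluated; the supplier defers the lookup.
        register("parallelism", () -> lastEvaluated.getOrDefault("parallelism", Double.NaN));
        lastEvaluated.put("parallelism", 4.0); // value becomes available later
        System.out.println(gauges.get("parallelism").get()); // prints "4.0": polling reads the fresh value
    }
}
```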



[jira] [Updated] (FLINK-33001) KafkaSource in batch mode failing with exception if topic partition is empty

2023-08-30 Thread Abdul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdul updated FLINK-33001:
--
Description: 
If the Kafka topic is empty in Batch mode, there is an exception while 
processing it. This bug was supposedly fixed but unfortunately, the exception 
still occurs. The original bug was reported as this 
https://issues.apache.org/jira/browse/FLINK-27041

We tried to backport it but it still doesn't work. 
 * The problem will occur in case of the DEBUG level of logger for class 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 * The same problems will occur in other versions of Flink, at least in the 
1.15 release branch and tag release-1.15.4
 * The same problem also occurs in Flink 1.17.1 and 1.14

 

The minimal code to produce this is 

 
{code:java}
final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH);

KafkaSource kafkaSource = KafkaSource
.builder()
.setBootstrapServers("localhost:9092")
.setTopics("test_topic")
.setValueOnlyDeserializer(new 
SimpleStringSchema())
.setBounded(OffsetsInitializer.latest())
.build();

DataStream stream = env.fromSource(
kafkaSource,
WatermarkStrategy.noWatermarks(),
"Kafka Source"
);
stream.print();

env.execute("Flink KafkaSource test job"); {code}
This produces exception: 
{code:java}
Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
exception    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
    at 
org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
    at 
org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
    at 
org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
    at 
org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
    at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
    at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583) 
   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)    
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)    at 
java.lang.Thread.run(Thread.java:748)Caused by: java.lang.RuntimeException: 
SplitFetcher thread 0 received unexpected exception while polling the records   
 at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)    
at java.util.concurrent.FutureTask.run(FutureTask.java:266)    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
   ... 1 moreCaused by: java.lang.IllegalStateException: You can only check 
the position for partitions assigned to this consumer.    at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1737)
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1704)
    at 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.maybeLogSplitChangesHandlingResult(KafkaPartitionSplitReader.java:375)
    at 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.handleSplitsChanges(KafkaPartitionSplitReader.java:232)
    at 
org.apache.flink.connector.base.source.reader.fetcher.AddSplitsTask.run(AddSplitsTask.java:49)
    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:138)
    ... 6 more {code}

  was:
If the Kafka topic is empty in Batch mode, there is an exception while 
processing it. This bug was supposedly fixed but unfortunately, the exception 
still occurs. The original bug was reported as this 

[jira] [Updated] (FLINK-33001) KafkaSource in batch mode failing with exception if topic partition is empty

2023-08-30 Thread Abdul (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abdul updated FLINK-33001:
--
Description: 
If the Kafka topic is empty in Batch mode, there is an exception while 
processing it. This bug was supposedly fixed but unfortunately, the exception 
still occurs. The original bug was reported as this 
https://issues.apache.org/jira/browse/FLINK-27041

We tried to backport it but it still doesn't work. 
 * The problem will occur in case of the DEBUG level of logger for class 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 * The same problems will occur in other versions of Flink, at least in the 
1.15 release branch and tag release-1.15.4
 * The same problem also occurs in Flink 1.17.1 and 1.14

 

The minimal code to produce this is 

 
{code:java}
final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH);
KafkaSource kafkaSource = KafkaSource
.builder()
.setBootstrapServers("localhost:9092")
.setTopics("test_topic")
.setValueOnlyDeserializer(new SimpleStringSchema())
.setBounded(OffsetsInitializer.latest())
.build();
DataStream stream = env.fromSource(
kafkaSource,
WatermarkStrategy.noWatermarks(), "Kafka Source" );
stream.print();
env.execute("Flink KafkaSource test job");
{code}

This produces exception: 
{code:java}
Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
exception    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
    at 
org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
    at 
org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
    at 
org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
    at 
org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
    at 
org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
    at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
    at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583) 
   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)    
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)    at 
java.lang.Thread.run(Thread.java:748)Caused by: java.lang.RuntimeException: 
SplitFetcher thread 0 received unexpected exception while polling the records   
 at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)    
at java.util.concurrent.FutureTask.run(FutureTask.java:266)    at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
   ... 1 moreCaused by: java.lang.IllegalStateException: You can only check 
the position for partitions assigned to this consumer.    at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1737)
    at 
org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1704)
    at 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.maybeLogSplitChangesHandlingResult(KafkaPartitionSplitReader.java:375)
    at 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.handleSplitsChanges(KafkaPartitionSplitReader.java:232)
    at 
org.apache.flink.connector.base.source.reader.fetcher.AddSplitsTask.run(AddSplitsTask.java:49)
    at 
org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:138)
    ... 6 more {code}

  was:
If the Kafka topic is empty in Batch mode, there is an exception while 
processing it. This bug was supposedly fixed but unfortunately, the exception 
still occurs. The original bug was reported as this 
https://issues.apache.org/jira/browse/FLINK-27041


We tried to backport it but it still doesn't work. 
 * The problem will occur in case of DEBUG level of logger for class 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
 * The same problems will occur in other versions of Flink, at least in the 
1.15 release branch and tag release-1.15.4
 * Same problem also occur in Flink 1.7.1 and 1.14

 

 

[jira] [Created] (FLINK-33001) KafkaSource in batch mode failing with exception if topic partition is empty

2023-08-30 Thread Abdul (Jira)
Abdul created FLINK-33001:
-

 Summary: KafkaSource in batch mode failing with exception if topic 
partition is empty
 Key: FLINK-33001
 URL: https://issues.apache.org/jira/browse/FLINK-33001
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.17.1, 1.14.6, 1.12.7
 Environment: The only workaround that works fine right now is to 
change the DEBUG level to INFO for logging. 

 
{code:java}
logger.KafkaPartitionSplitReader.name = 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader 

logger.KafkaPartitionSplitReader.level = INFO{code}
It is strange that changing this doesn't cause the above exception. 
Reporter: Abdul


If the Kafka topic is empty in batch mode, there is an exception while 
processing it. This bug was supposedly fixed but unfortunately, the exception 
still occurs. The original bug was reported as 
https://issues.apache.org/jira/browse/FLINK-27041


We tried to backport the fix but it still doesn't work. 
 * The problem occurs when the logger for class 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader is set to DEBUG level
 * The same problem occurs in other versions of Flink, at least in the 
1.15 release branch and tag release-1.15.4
 * The same problem also occurs in Flink 1.17.1 and 1.14

 

 

The minimal code to reproduce this is:

{code:java}
final StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();
env.setRuntimeMode(RuntimeExecutionMode.BATCH);

KafkaSource<String> kafkaSource = KafkaSource
        .<String>builder()
        .setBootstrapServers("localhost:9092")
        .setTopics("test_topic")
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .setBounded(OffsetsInitializer.latest())
        .build();

DataStream<String> stream = env.fromSource(
        kafkaSource,
        WatermarkStrategy.noWatermarks(),
        "Kafka Source");
stream.print();

env.execute("Flink KafkaSource test job");
{code}
This produces the following exception: 
{code:java}
Caused by: java.lang.RuntimeException: One or more fetchers have encountered exception
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
    at org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
    at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
    at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
    at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583)
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received unexpected exception while polling the records
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
    at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
Caused by: java.lang.IllegalStateException: You can only check the position for partitions assigned to this consumer.
    at org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1737)
    at org.apache.kafka.clients.consumer.KafkaConsumer.position(KafkaConsumer.java:1704)
    at org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.maybeLogSplitChangesHandlingResult(KafkaPartitionSplitReader.java:375)
    at 

[jira] [Resolved] (FLINK-20681) Support specifying the hdfs path when ship archives or files

2023-08-30 Thread Rui Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Fan resolved FLINK-20681.
-
Fix Version/s: 1.19.0
   Resolution: Fixed

> Support specifying the hdfs path  when ship archives or files
> -
>
> Key: FLINK-20681
> URL: https://issues.apache.org/jira/browse/FLINK-20681
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Ruguo Yu
>Assignee: junzhong qin
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, pull-requests-available, stale-assigned
> Fix For: 1.19.0
>
> Attachments: image-2020-12-23-20-58-41-234.png, 
> image-2020-12-24-01-01-10-021.png
>
>
> Currently, our team tries to submit Flink jobs that depend on extra resources with 
> the yarn-application target, using two options: "yarn.ship-archives" and 
> "yarn.ship-files".
> These options only support specifying local resources and shipping them to 
> HDFS. If they also supported remote resources on a distributed filesystem 
> (such as HDFS), we would get the following benefits:
>  * the client can skip uploading local resources, accelerating the job 
> submission process
>  * YARN can cache them on the nodes so that they don't need to be 
> downloaded for every application
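With the change merged, shipping resources that already live on HDFS might look like the following; the paths and file names are illustrative, not taken from the ticket:

```shell
# Hypothetical invocation: yarn.ship-files / yarn.ship-archives point at
# remote HDFS paths, so the client can skip the local upload step and YARN
# can cache the resources on the nodes.
flink run-application -t yarn-application \
  -Dyarn.ship-files="hdfs:///shared/libs/my-udfs.jar" \
  -Dyarn.ship-archives="hdfs:///shared/envs/my-env.tar.gz" \
  local:///opt/flink/usrlib/my-job.jar
```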



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-20681) Support specifying the hdfs path when ship archives or files

2023-08-30 Thread Rui Fan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760480#comment-17760480
 ] 

Rui Fan commented on FLINK-20681:
-

Merged  2c50b4e956305426f478b726d4de4a640a16b810

> Support specifying the hdfs path  when ship archives or files
> -
>
> Key: FLINK-20681
> URL: https://issues.apache.org/jira/browse/FLINK-20681
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: Ruguo Yu
>Assignee: junzhong qin
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available, pull-requests-available, stale-assigned
> Attachments: image-2020-12-23-20-58-41-234.png, 
> image-2020-12-24-01-01-10-021.png
>
>
> Currently, our team tries to submit Flink jobs that depend on extra resources with 
> the yarn-application target, using two options: "yarn.ship-archives" and 
> "yarn.ship-files".
> These options only support specifying local resources and shipping them to 
> HDFS. If they also supported remote resources on a distributed filesystem 
> (such as HDFS), we would get the following benefits:
>  * the client can skip uploading local resources, accelerating the job 
> submission process
>  * YARN can cache them on the nodes so that they don't need to be 
> downloaded for every application



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] 1996fanrui merged pull request #23219: [FLINK-20681][yarn] Support specifying the hdfs path when ship archives or files

2023-08-30 Thread via GitHub


1996fanrui merged PR #23219:
URL: https://github.com/apache/flink/pull/23219


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32987) BlobClientSslTest>BlobClientTest.testSocketTimeout expected SocketTimeoutException but identified SSLException

2023-08-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760478#comment-17760478
 ] 

Matthias Pohl commented on FLINK-32987:
---

The issue is caused by FLINK-32835 where we migrated from JUnit4 to 5. It looks 
like we did a migration of the code that checks for 
{{SocketTimeoutException}}, but the assert became more strict (the exception now 
has to be the direct cause, whereas the old code just checked for some cause in 
the chain being the {{SocketTimeoutException}}). I removed the 1.18.0 affected 
version from the ticket.

[~ferenc-csaky] can you pick this up?
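The behavioral difference described above can be reproduced without any test framework: checking the direct cause is stricter than searching the whole cause chain. A minimal sketch, with exception nesting modeled on (but not copied from) the failure in this ticket:

```java
// Sketch of the assertion difference: strict direct-cause check vs. lenient
// cause-chain search. The helper names are hypothetical, not AssertJ APIs.
class CauseChainSketch {

    // Strict: the expected type must be the immediate cause.
    static boolean directCauseIs(Throwable t, Class<? extends Throwable> expected) {
        return t.getCause() != null && expected.isInstance(t.getCause());
    }

    // Lenient: any cause anywhere in the chain matches.
    static boolean anyCauseIs(Throwable t, Class<? extends Throwable> expected) {
        for (Throwable c = t.getCause(); c != null; c = c.getCause()) {
            if (expected.isInstance(c)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // IOException -> SSLException -> SocketTimeoutException
        Throwable chain =
                new java.io.IOException(
                        "GET operation failed: Read timed out",
                        new javax.net.ssl.SSLException(
                                "Read timed out",
                                new java.net.SocketTimeoutException("Read timed out")));

        System.out.println(directCauseIs(chain, java.net.SocketTimeoutException.class)); // false
        System.out.println(anyCauseIs(chain, java.net.SocketTimeoutException.class));    // true
    }
}
```

The strict variant fails as soon as an intermediate wrapper (here the SSLException) sits between the top-level exception and the timeout, which matches the reported assertion failure.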

> BlobClientSslTest>BlobClientTest.testSocketTimeout expected 
> SocketTimeoutException but identified SSLException
> --
>
> Key: FLINK-32987
> URL: https://issues.apache.org/jira/browse/FLINK-32987
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=8692
> {code}
> Aug 29 03:28:11 03:28:11.280 [ERROR]   
> BlobClientSslTest>BlobClientTest.testSocketTimeout:512 
> Aug 29 03:28:11 Expecting a throwable with cause being an instance of:
> Aug 29 03:28:11   java.net.SocketTimeoutException
> Aug 29 03:28:11 but was an instance of:
> Aug 29 03:28:11   javax.net.ssl.SSLException
> Aug 29 03:28:11 Throwable that failed the check:
> Aug 29 03:28:11 
> Aug 29 03:28:11 java.io.IOException: GET operation failed: Read timed out
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClient.getInternal(BlobClient.java:231)
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClientTest.lambda$testSocketTimeout$2(BlobClientTest.java:510)
> Aug 29 03:28:11   at 
> org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
> Aug 29 03:28:11   at 
> org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
> Aug 29 03:28:11   at 
> org.assertj.core.api.Assertions.catchThrowable(Assertions.java:1366)
> Aug 29 03:28:11   at 
> org.assertj.core.api.Assertions.assertThatThrownBy(Assertions.java:1210)
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClientTest.testSocketTimeout(BlobClientTest.java:508)
> Aug 29 03:28:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 29 03:28:11   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 29 03:28:11   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 29 03:28:11   at java.lang.reflect.Method.invoke(Method.java:498)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32987) BlobClientSslTest>BlobClientTest.testSocketTimeout expected SocketTimeoutException but identified SSLException

2023-08-30 Thread Matthias Pohl (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias Pohl updated FLINK-32987:
--
Affects Version/s: (was: 1.18.0)

> BlobClientSslTest>BlobClientTest.testSocketTimeout expected 
> SocketTimeoutException but identified SSLException
> --
>
> Key: FLINK-32987
> URL: https://issues.apache.org/jira/browse/FLINK-32987
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.19.0
>Reporter: Matthias Pohl
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52740=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=125e07e7-8de0-5c6c-a541-a567415af3ef=8692
> {code}
> Aug 29 03:28:11 03:28:11.280 [ERROR]   
> BlobClientSslTest>BlobClientTest.testSocketTimeout:512 
> Aug 29 03:28:11 Expecting a throwable with cause being an instance of:
> Aug 29 03:28:11   java.net.SocketTimeoutException
> Aug 29 03:28:11 but was an instance of:
> Aug 29 03:28:11   javax.net.ssl.SSLException
> Aug 29 03:28:11 Throwable that failed the check:
> Aug 29 03:28:11 
> Aug 29 03:28:11 java.io.IOException: GET operation failed: Read timed out
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClient.getInternal(BlobClient.java:231)
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClientTest.lambda$testSocketTimeout$2(BlobClientTest.java:510)
> Aug 29 03:28:11   at 
> org.assertj.core.api.ThrowableAssert.catchThrowable(ThrowableAssert.java:63)
> Aug 29 03:28:11   at 
> org.assertj.core.api.AssertionsForClassTypes.catchThrowable(AssertionsForClassTypes.java:892)
> Aug 29 03:28:11   at 
> org.assertj.core.api.Assertions.catchThrowable(Assertions.java:1366)
> Aug 29 03:28:11   at 
> org.assertj.core.api.Assertions.assertThatThrownBy(Assertions.java:1210)
> Aug 29 03:28:11   at 
> org.apache.flink.runtime.blob.BlobClientTest.testSocketTimeout(BlobClientTest.java:508)
> Aug 29 03:28:11   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Aug 29 03:28:11   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Aug 29 03:28:11   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Aug 29 03:28:11   at java.lang.reflect.Method.invoke(Method.java:498)
> [...]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-32952) Scan reuse with readable metadata and watermark push down will get wrong watermark

2023-08-30 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760471#comment-17760471
 ] 

Jark Wu commented on FLINK-32952:
-

Fixed in 
- master: 103de5bf136816ce1e520f372e17b162e4aa2ba7
- release-1.18: TODO

> Scan reuse with readable metadata and watermark push down will get wrong 
> watermark 
> ---
>
> Key: FLINK-32952
> URL: https://issues.apache.org/jira/browse/FLINK-32952
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.19.0
>
>
> Scan reuse with readable metadata and watermark push down will produce a wrong 
> result. In class ScanReuser, we re-build the watermark spec after projection 
> push down. However, we get a wrong index when trying to find the index in the 
> new source type.
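The index mismatch can be illustrated independently of the planner: after projection push down, a watermark column's position must be looked up in the projected row type, not the original one. A sketch with hypothetical field names (this is not the actual ScanReuser code):

```java
import java.util.Arrays;
import java.util.List;

// Illustrates the index bookkeeping only; field names are made up.
class ProjectionIndexSketch {

    // Index of a field in the given row type, or -1 if absent.
    static int indexOf(List<String> rowType, String field) {
        return rowType.indexOf(field);
    }

    public static void main(String[] args) {
        List<String> sourceType = Arrays.asList("id", "name", "metadata_ts", "rowtime");
        // Projection push down keeps only the fields the query needs, reordering them.
        List<String> projectedType = Arrays.asList("rowtime", "id");

        // Re-building the watermark spec with the index from the original type (3)
        // points at the wrong column once the projection is applied:
        System.out.println(indexOf(sourceType, "rowtime"));    // 3
        // The spec has to be re-indexed against the projected row type instead:
        System.out.println(indexOf(projectedType, "rowtime")); // 0
    }
}
```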



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-32952) Scan reuse with readable metadata and watermark push down will get wrong watermark

2023-08-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-32952:

Fix Version/s: 1.19.0

> Scan reuse with readable metadata and watermark push down will get wrong 
> watermark 
> ---
>
> Key: FLINK-32952
> URL: https://issues.apache.org/jira/browse/FLINK-32952
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.19.0
>
>
> Scan reuse with readable metadata and watermark push down will produce a wrong 
> result. In class ScanReuser, we re-build the watermark spec after projection 
> push down. However, we get a wrong index when trying to find the index in the 
> new source type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-32952) Scan reuse with readable metadata and watermark push down will get wrong watermark

2023-08-30 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-32952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-32952:
---

Assignee: Yunhong Zheng

> Scan reuse with readable metadata and watermark push down will get wrong 
> watermark 
> ---
>
> Key: FLINK-32952
> URL: https://issues.apache.org/jira/browse/FLINK-32952
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.18.0
>Reporter: Yunhong Zheng
>Assignee: Yunhong Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.18.0, 1.19.0
>
>
> Scan reuse with readable metadata and watermark push down will produce a wrong 
> result. In class ScanReuser, we re-build the watermark spec after projection 
> push down. However, we get a wrong index when trying to find the index in the 
> new source type.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] wuchong commented on pull request #23285: [FLINK-32952][table-planner] Fix scan reuse with readable metadata and watermark push down get wrong watermark error

2023-08-30 Thread via GitHub


wuchong commented on PR #23285:
URL: https://github.com/apache/flink/pull/23285#issuecomment-1699314661

   @swuferhong could you open a cherry-pick pull request for release-1.18 
branch?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] wuchong merged pull request #23285: [FLINK-32952][table-planner] Fix scan reuse with readable metadata and watermark push down get wrong watermark error

2023-08-30 Thread via GitHub


wuchong merged PR #23285:
URL: https://github.com/apache/flink/pull/23285


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink-kubernetes-operator] 1996fanrui commented on a diff in pull request #660: [FLINK-32991] Ensure registration of all scaling metrics

2023-08-30 Thread via GitHub


1996fanrui commented on code in PR #660:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/660#discussion_r1310383497


##
flink-kubernetes-operator-autoscaler/src/test/java/org/apache/flink/kubernetes/operator/autoscaler/AutoScalerFlinkMetricsTest.java:
##
@@ -66,8 +67,10 @@ public void testMetricsRegistration() {
 initRecommendedParallelism(evaluatedMetrics);
 lastEvaluatedMetrics.put(resourceID, evaluatedMetrics);
 
-metrics.registerScalingMetrics(() -> 
lastEvaluatedMetrics.get(resourceID));
-metrics.registerScalingMetrics(() -> 
lastEvaluatedMetrics.get(resourceID));
+metrics.registerScalingMetrics(
+() -> List.of(jobVertexID), () -> 
lastEvaluatedMetrics.get(resourceID));
+metrics.registerScalingMetrics(
+() -> List.of(jobVertexID), () -> 
lastEvaluatedMetrics.get(resourceID));

Review Comment:
   Sorry, Max. I didn't express it clearly.
   
   In this PR you fix a bug where some metrics aren't ready when 
`registerScalingMetrics` is called, so these metrics cannot be registered even 
if they become ready in the future.
   
   Could we add a test for this case? The test should fail without your PR 
and succeed with it.
   
   Please let me know if it's not clear, thanks~



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32972) TaskTest.testInterruptibleSharedLockInInvokeAndCancel causes a JVM shutdown with exit code 239

2023-08-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760439#comment-17760439
 ] 

Matthias Pohl commented on FLINK-32972:
---

[~akalashnikov] Can you have a look into it? We're running into 
{{FatalExitExceptionHandler}} again, similar to FLINK-30844. We might want to 
change the test so that we don't run into this issue anymore. The 1s timeout 
which you set in FLINK-30844 might not be the right approach?

> TaskTest.testInterruptibleSharedLockInInvokeAndCancel causes a JVM shutdown 
> with exit code 239
> --
>
> Key: FLINK-32972
> URL: https://issues.apache.org/jira/browse/FLINK-32972
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.17.2
>Reporter: Sergey Nuyanzin
>Priority: Major
>  Labels: test-stability
>
> Within this build 
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52668=logs=b0a398c0-685b-599c-eb57-c8c2a771138e=747432ad-a576-5911-1e2a-68c6bedc248a=8677]
> it looks like task 
> {{1ec32305eb0f926acae926007429c142__0_0}} was 
> canceled
> {noformat}
> 
> Test 
> testInterruptibleSharedLockInInvokeAndCancel(org.apache.flink.runtime.taskmanager.TaskTest)
>  is running.
> 
> 01:30:05,140 [main] INFO  
> org.apache.flink.runtime.io.network.NettyShuffleServiceFactory [] - Created a 
> new FileChannelManager for storing result partitions of BLOCKING shuffles. 
> Used directories:
>   /tmp/flink-netty-shuffle-82415974-782a-46db-afbc-8f18f30a4ec5
> 01:30:05,177 [main] INFO  
> org.apache.flink.runtime.io.network.buffer.NetworkBufferPool [] - Allocated 
> 32 MB for network buffer pool (number of memory segments: 1024, bytes per 
> segment: 32768).
> 01:30:05,181 [   Test Task (1/1)#0] INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Test Task 
> (1/1)#0 
> (1ec32305eb0f926acae926007429c142__0_0) 
> switched from CREATED to DEPLOYING.
> 01:30:05,190 [   Test Task (1/1)#0] INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Loading JAR 
> files for task Test Task (1/1)#0 
> (1ec32305eb0f926acae926007429c142__0_0) 
> [DEPLOYING].
> 01:30:05,192 [   Test Task (1/1)#0] INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Test Task 
> (1/1)#0 
> (1ec32305eb0f926acae926007429c142__0_0) 
> switched from DEPLOYING to INITIALIZING.
> 01:30:05,192 [   Test Task (1/1)#0] INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Test Task 
> (1/1)#0 
> (1ec32305eb0f926acae926007429c142__0_0) 
> switched from INITIALIZING to RUNNING.
> 01:30:05,195 [main] INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Attempting 
> to cancel task Test Task (1/1)#0 
> (1ec32305eb0f926acae926007429c142__0_0).
> 01:30:05,196 [main] INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Test Task 
> (1/1)#0 
> (1ec32305eb0f926acae926007429c142__0_0) 
> switched from RUNNING to CANCELING.
> 01:30:05,196 [main] INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Triggering 
> cancellation of task code Test Task (1/1)#0 
> (1ec32305eb0f926acae926007429c142__0_0).
> 01:30:05,197 [   Test Task (1/1)#0] INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Test Task 
> (1/1)#0 
> (1ec32305eb0f926acae926007429c142__0_0) 
> switched from CANCELING to CANCELED.
> 01:30:05,198 [   Test Task (1/1)#0] INFO  
> org.apache.flink.runtime.taskmanager.Task[] - Freeing 
> task resources for Test Task (1/1)#0 
> (1ec32305eb0f926acae926007429c142__0_0).
> {noformat}
> and after that there are records in the logs complaining that the task did not react
> {noformat}
> 01:30:05,337 [Canceler/Interrupts for Test Task (1/1)#0 
> (1ec32305eb0f926acae926007429c142__0_0).] 
> WARN  org.apache.flink.runtime.taskmanager.Task[] - Task 
> 'Test Task (1/1)#0' did not react to cancelling signal - interrupting; it is 
> stuck for 0 seconds in method:
>  
> app//org.apache.flink.runtime.metrics.groups.AbstractMetricGroup.close(AbstractMetricGroup.java:322)
> 

[jira] [Commented] (FLINK-32992) Recommended parallelism metric is a duplicate of Parallelism metric

2023-08-30 Thread Gyula Fora (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760431#comment-17760431
 ] 

Gyula Fora commented on FLINK-32992:


I think they are not the same when scaling is turned off (when the autoscaler 
only suggests parallelisms). In that case parallelism will stay the same while 
recommended parallelism changes. That was the original intention here as well.

> Recommended parallelism metric is a duplicate of Parallelism metric
> ---
>
> Key: FLINK-32992
> URL: https://issues.apache.org/jira/browse/FLINK-32992
> Project: Flink
>  Issue Type: Bug
>  Components: Autoscaler, Deployment / Kubernetes
>Affects Versions: kubernetes-operator-1.6.0
>Reporter: Maximilian Michels
>Assignee: Maximilian Michels
>Priority: Minor
> Fix For: kubernetes-operator-1.7.0
>
>
> The two metrics are the same. Recommended parallelism seems to have been 
> added as a way to report real-time parallelism updates before we changed all 
> metrics to be reported in real time.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] 1996fanrui commented on pull request #23295: [FLINK-26341][zookeeper] Upgrade all tests related to ZooKeeperTestEnvironment to junit5 and removing the ZooKeeperTestEnvironment

2023-08-30 Thread via GitHub


1996fanrui commented on PR #23295:
URL: https://github.com/apache/flink/pull/23295#issuecomment-1699278365

   Thanks @XComp for the review and merge!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #23335: [BP-1.18][FLINK-32994][runtime] Adds proper toString() implementations to the LeaderElectionDriver implementations to have human-readable ve

2023-08-30 Thread via GitHub


flinkbot commented on PR #23335:
URL: https://github.com/apache/flink/pull/23335#issuecomment-1699278251

   
   ## CI report:
   
   * a758dbf01861674bdeddf93d1fd320624ca11eec UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] XComp opened a new pull request, #23335: [FLINK-32994][runtime] Adds proper toString() implementations to the LeaderElectionDriver implementations to have human-readable versions of t

2023-08-30 Thread via GitHub


XComp opened a new pull request, #23335:
URL: https://github.com/apache/flink/pull/23335

   1.18 backport for parent PR #23327


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32731) SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException

2023-08-30 Thread Matthias Pohl (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17760421#comment-17760421
 ] 

Matthias Pohl commented on FLINK-32731:
---

Thanks for looking into it, [~fsk119]. How do you find out whether "it works"? 
Should we provide backports for this test instability as well? It affects 1.18 
but the change you documented was only merged to {{master}}.

> SqlGatewayE2ECase.testHiveServer2ExecuteStatement failed due to MetaException
> -
>
> Key: FLINK-32731
> URL: https://issues.apache.org/jira/browse/FLINK-32731
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Gateway
>Affects Versions: 1.18.0
>Reporter: Matthias Pohl
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=51891=logs=fb37c667-81b7-5c22-dd91-846535e99a97=011e961e-597c-5c96-04fe-7941c8b83f23=10987
> {code}
> Aug 02 02:14:04 02:14:04.957 [ERROR] Tests run: 3, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 198.658 s <<< FAILURE! - in 
> org.apache.flink.table.gateway.SqlGatewayE2ECase
> Aug 02 02:14:04 02:14:04.966 [ERROR] 
> org.apache.flink.table.gateway.SqlGatewayE2ECase.testHiveServer2ExecuteStatement
>   Time elapsed: 31.437 s  <<< ERROR!
> Aug 02 02:14:04 java.util.concurrent.ExecutionException: 
> Aug 02 02:14:04 java.sql.SQLException: 
> org.apache.flink.table.gateway.service.utils.SqlExecutionException: Failed to 
> execute the operation d440e6e7-0fed-49c9-933e-c7be5bbae50d.
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.processThrowable(OperationManager.java:414)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:267)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> Aug 02 02:14:04   at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> Aug 02 02:14:04   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> Aug 02 02:14:04   at java.lang.Thread.run(Thread.java:750)
> Aug 02 02:14:04 Caused by: org.apache.flink.table.api.TableException: Could 
> not execute CreateTable in path `hive`.`default`.`CsvTable`
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1289)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.createTable(CatalogManager.java:939)
> Aug 02 02:14:04   at 
> org.apache.flink.table.operations.ddl.CreateTableOperation.execute(CreateTableOperation.java:84)
> Aug 02 02:14:04   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:1080)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.callOperation(OperationExecutor.java:570)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeOperation(OperationExecutor.java:458)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationExecutor.executeStatement(OperationExecutor.java:210)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.SqlGatewayServiceImpl.lambda$executeStatement$1(SqlGatewayServiceImpl.java:212)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager.lambda$submitOperation$1(OperationManager.java:119)
> Aug 02 02:14:04   at 
> org.apache.flink.table.gateway.service.operation.OperationManager$Operation.lambda$run$0(OperationManager.java:258)
> Aug 02 02:14:04   ... 7 more
> Aug 02 02:14:04 Caused by: 
> org.apache.flink.table.catalog.exceptions.CatalogException: Failed to create 
> table default.CsvTable
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.hive.HiveCatalog.createTable(HiveCatalog.java:547)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.lambda$createTable$16(CatalogManager.java:950)
> Aug 02 02:14:04   at 
> org.apache.flink.table.catalog.CatalogManager.execute(CatalogManager.java:1283)
> Aug 02 02:14:04   ... 16 more
> Aug 02 02:14:04 Caused by: MetaException(message:Got exception: 
> java.net.ConnectException Call From 70d5c7217fe8/172.17.0.2 to 
> hadoop-master:9000 failed on connection exception: 
