[jira] [Commented] (FLINK-29288) Can't start a job with a jar in the system classpath

2023-11-11 Thread Trystan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17785217#comment-17785217
 ] 

Trystan commented on FLINK-29288:
-

[~sap1ens] are you running native k8s mode or standalone mode? If the mailing 
list is a better forum for this discussion I can move it there, too.

> Can't start a job with a jar in the system classpath
> 
>
> Key: FLINK-29288
> URL: https://issues.apache.org/jira/browse/FLINK-29288
> Project: Flink
>  Issue Type: Bug
>  Components: Kubernetes Operator
>Affects Versions: kubernetes-operator-1.1.0
>Reporter: Yaroslav Tkachenko
>Priority: Major
>  Labels: pull-request-available
> Fix For: kubernetes-operator-1.2.0
>
>
> I'm using the latest (unreleased) version of the Kubernetes operator.
> It looks like it's currently impossible to use it with a job jar that lives on the 
> system classpath (/opt/flink/lib). *jarURI* is required and is always 
> passed as a *pipeline.jars* parameter to the Flink process. In practice, this 
> means that the same class is loaded twice: once by the system classloader and 
> once by the user classloader. This leads to exceptions like this:
> {quote}java.lang.LinkageError: loader constraint violation: when resolving 
> method 'XXX' the class loader org.apache.flink.util.ChildFirstClassLoader 
> @47a5b70d of the current class, YYY, and the class loader 'app' for the 
> method's defining class, ZZZ, have different Class objects for the type AAA 
> used in the signature
> {quote}
> In my opinion, jarURI must be made optional even for the application mode. In 
> this case, it's assumed that it's already available in the system classpath.
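
To make the failure mode above concrete, here is a minimal, self-contained sketch (the jar path and class name are made up for illustration): loading the same class through two classloaders that do not delegate to each other yields two distinct Class objects, which is essentially what the ChildFirstClassLoader vs. 'app' loader conflict in the LinkageError boils down to.

{code:java}
import java.net.URL;
import java.net.URLClassLoader;

public class DuplicateClassLoadingSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical job jar that is present both in /opt/flink/lib (system
        // classpath) and in pipeline.jars (user classpath).
        URL jobJar = new URL("file:///opt/flink/lib/my-job.jar");

        try (URLClassLoader loaderA = new URLClassLoader(new URL[] {jobJar}, null);
             URLClassLoader loaderB = new URLClassLoader(new URL[] {jobJar}, null)) {

            // Same fully-qualified name, loaded by two independent loaders.
            Class<?> fromA = loaderA.loadClass("com.example.MyRecord");
            Class<?> fromB = loaderB.loadClass("com.example.MyRecord");

            // Two distinct Class objects: the JVM treats them as unrelated types,
            // which is what triggers the LinkageError once both appear in the
            // same method signature.
            System.out.println(fromA == fromB); // prints false
        }
    }
}
{code}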



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33522] Use created savepoint even if job cancellation fails [flink-kubernetes-operator]

2023-11-11 Thread via GitHub


gyfora commented on code in PR #706:
URL: 
https://github.com/apache/flink-kubernetes-operator/pull/706#discussion_r1390245416


##########
flink-kubernetes-operator/src/test/java/org/apache/flink/kubernetes/operator/service/AbstractFlinkServiceTest.java:
##########
@@ -673,9 +693,11 @@ public void nativeSavepointFormatTest() throws Exception {
                         .set(OPERATOR_SAVEPOINT_FORMAT_TYPE, SavepointFormatType.NATIVE),
                 false);
         assertTrue(stopWithSavepointFuture.isDone());
-        assertEquals(jobID, stopWithSavepointFuture.get().f0);
-        assertEquals(SavepointFormatType.NATIVE, stopWithSavepointFuture.get().f1);
-        assertEquals(savepointPath, stopWithSavepointFuture.get().f2);
+        if (!failAfterSavepointCompletes) {
+            assertEquals(jobID, stopWithSavepointFuture.get().f0);
+            assertEquals(SavepointFormatType.NATIVE, stopWithSavepointFuture.get().f1);
+            assertEquals(savepointPath, stopWithSavepointFuture.get().f2);
+        }

Review Comment:
   Am I missing something here, or why are we not validating the savepoint path 
in both the failure and the non-failure case?
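   
   For example, something along these lines would cover both branches (just a sketch; it assumes the savepoint path is still reported when the cancellation fails):
   
   ```java
   assertTrue(stopWithSavepointFuture.isDone());
   // Validate the savepoint path in the failure and the non-failure case alike.
   assertEquals(savepointPath, stopWithSavepointFuture.get().f2);
   if (!failAfterSavepointCompletes) {
       assertEquals(jobID, stopWithSavepointFuture.get().f0);
       assertEquals(SavepointFormatType.NATIVE, stopWithSavepointFuture.get().f1);
   }
   ```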



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (FLINK-32723) FLIP-334 : Decoupling autoscaler and kubernetes and support the Standalone Autoscaler

2023-11-11 Thread Maximilian Michels (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-32723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17785183#comment-17785183
 ] 

Maximilian Michels commented on FLINK-32723:


Thanks for following through with this [~fanrui]! Great work.

> FLIP-334 : Decoupling autoscaler and kubernetes and support the Standalone 
> Autoscaler
> -
>
> Key: FLINK-32723
> URL: https://issues.apache.org/jira/browse/FLINK-32723
> Project: Flink
>  Issue Type: New Feature
>  Components: Autoscaler, Kubernetes Operator
>Reporter: Rui Fan
>Assignee: Rui Fan
>Priority: Major
> Fix For: kubernetes-operator-1.7.0
>
>
> This is an umbrella jira for decoupling autoscaler and kubernetes.
> https://cwiki.apache.org/confluence/x/x4qzDw



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-33361) Add Java 17 compatibility to Flink Kafka connector

2023-11-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-33361:
---
Labels: pull-request-available  (was: )

> Add Java 17 compatibility to Flink Kafka connector
> --
>
> Key: FLINK-33361
> URL: https://issues.apache.org/jira/browse/FLINK-33361
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: kafka-3.0.1, kafka-3.1.0
>Reporter: Martijn Visser
>Priority: Major
>  Labels: pull-request-available
>
> Currently, running {{mvn clean install -Dflink.version=1.18.0 
> -Dscala-2.12 -Prun-end-to-end-tests 
> -DdistDir=/Users/mvisser/Developer/flink-1.18.0 
> -Dflink.convergence.phase=install 
> -Dlog4j.configurationFile=tools/ci/log4j.properties}} fails with errors 
> like:
> {code:java}
> [INFO] 
> [INFO] Results:
> [INFO] 
> [ERROR] Errors: 
> [ERROR] FlinkKafkaConsumerBaseMigrationTest.testRestore
> [ERROR]   Run 1: Exception while creating StreamOperatorStateContext.
> [ERROR]   Run 2: Exception while creating StreamOperatorStateContext.
> [ERROR]   Run 3: Exception while creating StreamOperatorStateContext.
> [ERROR]   Run 4: Exception while creating StreamOperatorStateContext.
> [ERROR]   Run 5: Exception while creating StreamOperatorStateContext.
> [ERROR]   Run 6: Exception while creating StreamOperatorStateContext.
> [ERROR]   Run 7: Exception while creating StreamOperatorStateContext.
> [ERROR]   Run 8: Exception while creating StreamOperatorStateContext.
> [ERROR]   Run 9: Exception while creating StreamOperatorStateContext.
> [INFO] 
> [ERROR]   
> FlinkKafkaConsumerBaseTest.testExplicitStateSerializerCompatibility:721 » 
> Runtime
> [ERROR]   FlinkKafkaConsumerBaseTest.testScaleDown:742->testRescaling:817 » 
> Checkpoint C...
> [ERROR]   FlinkKafkaConsumerBaseTest.testScaleUp:737->testRescaling:817 » 
> Checkpoint Cou...
> [ERROR]   UpsertKafkaDynamicTableFactoryTest.testBufferedTableSink:243 » 
> UncheckedIO jav...
> {code}
> Example stacktrace:
> {code:java}
> Test 
> testBufferedTableSink(org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactoryTest)
>  failed with:
> java.io.UncheckedIOException: java.io.IOException: Serializing the source 
> elements failed: java.lang.reflect.InaccessibleObjectException: Unable to 
> make field private final java.lang.Object[] java.util.Arrays$ArrayList.a 
> accessible: module java.base does not "opens java.util" to unnamed module 
> @45b4c3a9
>   at 
> org.apache.flink.streaming.api.functions.source.FromElementsFunction.setOutputType(FromElementsFunction.java:162)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySetOutputType(StreamingFunctionUtils.java:84)
>   at 
> org.apache.flink.streaming.util.functions.StreamingFunctionUtils.setOutputType(StreamingFunctionUtils.java:60)
>   at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.setOutputType(AbstractUdfStreamOperator.java:146)
>   at 
> org.apache.flink.streaming.api.operators.SimpleOperatorFactory.setOutputType(SimpleOperatorFactory.java:118)
>   at 
> org.apache.flink.streaming.api.graph.StreamGraph.addOperator(StreamGraph.java:434)
>   at 
> org.apache.flink.streaming.api.graph.StreamGraph.addOperator(StreamGraph.java:402)
>   at 
> org.apache.flink.streaming.api.graph.StreamGraph.addLegacySource(StreamGraph.java:356)
>   at 
> org.apache.flink.streaming.runtime.translators.LegacySourceTransformationTranslator.translateInternal(LegacySourceTransformationTranslator.java:66)
>   at 
> org.apache.flink.streaming.runtime.translators.LegacySourceTransformationTranslator.translateForStreamingInternal(LegacySourceTransformationTranslator.java:53)
>   at 
> org.apache.flink.streaming.runtime.translators.LegacySourceTransformationTranslator.translateForStreamingInternal(LegacySourceTransformationTranslator.java:40)
>   at 
> org.apache.flink.streaming.api.graph.SimpleTransformationTranslator.translateForStreaming(SimpleTransformationTranslator.java:62)
>   at 
> org.apache.flink.streaming.api.graph.StreamGraphGenerator.translate(StreamGraphGenerator.java:860)
>   at 
> org.apache.flink.streaming.api.graph.StreamGraphGenerator.transform(StreamGraphGenerator.java:590)
>   at 
> org.apache.flink.streaming.api.graph.StreamGraphGenerator.getParentInputIds(StreamGraphGenerator.java:881)
>   at 
> org.apache.flink.streaming.api.graph.StreamGraphGenerator.translate(StreamGraphGenerator.java:839)
>   at 
> org.apache.flink.streaming.api.graph.StreamGraphGenerator.transform(StreamGraphGenerator.java:590)
>   at 
> org.apache.flink.streaming.api.graph.StreamGraphGenerator.generate(StreamGraphGenerator.java:328)
>   at 
> 
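
The InaccessibleObjectException in the stack trace above can be reproduced outside of Flink with a few lines of plain Java; the snippet below is only an illustration of the failing reflective access, not connector code. On Java 17 it fails unless the JVM is started with --add-opens java.base/java.util=ALL-UNNAMED, which is the usual remedy for this class of error (the ticket itself does not say how it will be fixed).

{code:java}
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.List;

public class AddOpensSketch {
    public static void main(String[] args) throws Exception {
        List<Integer> list = Arrays.asList(1, 2, 3);

        // The serializer path in the stack trace ends up performing a reflective
        // access like this one on java.util.Arrays$ArrayList.a. Without
        // --add-opens java.base/java.util=ALL-UNNAMED, Java 17 throws
        // InaccessibleObjectException at setAccessible.
        Field backingArray = list.getClass().getDeclaredField("a");
        backingArray.setAccessible(true);

        System.out.println(((Object[]) backingArray.get(list)).length); // 3
    }
}
{code}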

Re: [PR] [FLINK-33361] Add Java 17 compatibility to Flink Kafka connector [flink-connector-kafka]

2023-11-11 Thread via GitHub


boring-cyborg[bot] commented on PR #68:
URL: 
https://github.com/apache/flink-connector-kafka/pull/68#issuecomment-1806840882

   Thanks for opening this pull request! Please check out our contributing 
guidelines. (https://flink.apache.org/contributing/how-to-contribute.html)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [FLINK-33361] Add Java 17 compatibility to Flink Kafka connector [flink-connector-kafka]

2023-11-11 Thread via GitHub


snuyanzin opened a new pull request, #68:
URL: https://github.com/apache/flink-connector-kafka/pull/68

   The PR adds compatibility with Java 17.
   
   It also adds `ci.yml`, which allows running the tests with Java 17.
   Once this is enabled in the shared CI 
(https://github.com/apache/flink-connector-shared-utils/pull/24), this `ci.yml` 
can be removed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-11-11 Thread via GitHub


RocMarshal commented on PR #23635:
URL: https://github.com/apache/flink/pull/23635#issuecomment-1806832620

   Thank you @KarmaGYZ @1996fanrui very much for your comments. I have updated 
the PR based on them.
   Have a great weekend~ :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-33523) DataType ARRAY fails to cast into Object[]

2023-11-11 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated FLINK-33523:
--
Component/s: Table SQL / API

> DataType ARRAY fails to cast into Object[]
> 
>
> Key: FLINK-33523
> URL: https://issues.apache.org/jira/browse/FLINK-33523
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.18.0
>Reporter: Prabhu Joseph
>Priority: Major
>
> When upgrading Iceberg's Flink version to 1.18, we found a Flink-related 
> unit test case broken due to this issue. The code below used to work fine in 
> Flink 1.17 but fails after upgrading to 1.18: DataType ARRAY<INT NOT NULL> 
> fails to cast into Object[].
> *Error:*
> {code}
> Exception in thread "main" java.lang.ClassCastException: [I cannot be cast to 
> [Ljava.lang.Object;
> at FlinkArrayIntNotNullTest.main(FlinkArrayIntNotNullTest.java:18)
> {code}
> *Repro:*
> {code}
>   import org.apache.flink.table.data.ArrayData;
>   import org.apache.flink.table.data.GenericArrayData;
>   import org.apache.flink.table.api.EnvironmentSettings;
>   import org.apache.flink.table.api.TableEnvironment;
>   import org.apache.flink.table.api.TableResult;
>   public class FlinkArrayIntNotNullTest {
> public static void main(String[] args) throws Exception {
>   EnvironmentSettings settings = 
> EnvironmentSettings.newInstance().inBatchMode().build();
>   TableEnvironment env = TableEnvironment.create(settings);
>   env.executeSql("CREATE TABLE filesystemtable2 (id INT, data ARRAY<INT NOT 
> NULL>) WITH ('connector' = 'filesystem', 'path' = 
> '/tmp/FLINK/filesystemtable2', 'format'='json')");
>   env.executeSql("INSERT INTO filesystemtable2 VALUES (4,ARRAY [1,2,3])");
>   TableResult tableResult = env.executeSql("SELECT * from 
> filesystemtable2");
>   ArrayData actualArrayData = new GenericArrayData((Object[]) 
> tableResult.collect().next().getField(1));
> }
>   }
> {code}
> *Analysis:*
> 1. The code works fine with the ARRAY<INT> datatype. The issue happens when 
> using ARRAY<INT NOT NULL>.
> 2. The code works fine when casting into int[] instead of Object[].
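
A possible workaround sketch (my assumption, not something stated in the issue, but consistent with point 2 above): since the ClassCastException shows the field is a primitive int[], cast to int[] and use the matching GenericArrayData constructor.

{code:java}
// Instead of the Object[] cast from the repro above:
int[] values = (int[]) tableResult.collect().next().getField(1);
ArrayData actualArrayData = new GenericArrayData(values);
{code}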



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-11-11 Thread via GitHub


1996fanrui commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r139022


##########
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DefaultResourceAllocationStrategy.java:
##########
@@ -95,7 +96,7 @@ public DefaultResourceAllocationStrategy(
                 SlotManagerUtils.generateDefaultSlotResourceProfile(
                         totalResourceProfile, numSlotsPerWorker);
         this.availableResourceMatchingStrategy =
-                evenlySpreadOutSlots
+                taskManagerLoadBalanceMode == TaskManagerLoadBalanceMode.SLOTS

Review Comment:
   It's fine for me.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [FLINK-33386][runtime] Support tasks balancing at slot level for Default Scheduler [flink]

2023-11-11 Thread via GitHub


RocMarshal commented on code in PR #23635:
URL: https://github.com/apache/flink/pull/23635#discussion_r1390231615


##########
flink-runtime/src/main/java/org/apache/flink/runtime/resourcemanager/slotmanager/DefaultResourceAllocationStrategy.java:
##########
@@ -95,7 +96,7 @@ public DefaultResourceAllocationStrategy(
                 SlotManagerUtils.generateDefaultSlotResourceProfile(
                         totalResourceProfile, numSlotsPerWorker);
         this.availableResourceMatchingStrategy =
-                evenlySpreadOutSlots
+                taskManagerLoadBalanceMode == TaskManagerLoadBalanceMode.SLOTS

Review Comment:
   yes, that's right.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (FLINK-31854) FLIP-307:[Phase 1] [Sink Support] Flink Connector Redshift

2023-11-11 Thread Samrat Deb (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-31854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samrat Deb updated FLINK-31854:
---
Labels: pull-request-available  (was: )

> FLIP-307:[Phase 1] [Sink Support] Flink Connector Redshift 
> ---
>
> Key: FLINK-31854
> URL: https://issues.apache.org/jira/browse/FLINK-31854
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / AWS
>Reporter: Samrat Deb
>Assignee: Samrat Deb
>Priority: Major
>  Labels: pull-request-available
>
> This is an umbrella Jira for 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-307%3A++Flink+Connector+Redshift]
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33525) Migrate LoadSimulationPipeline in Flink K8S Operator to the new Source API

2023-11-11 Thread Alexander Fedulov (Jira)
Alexander Fedulov created FLINK-33525:
-

 Summary: Migrate LoadSimulationPipeline in Flink K8S Operator to 
the new Source API
 Key: FLINK-33525
 URL: https://issues.apache.org/jira/browse/FLINK-33525
 Project: Flink
  Issue Type: Sub-task
Reporter: Alexander Fedulov


https://github.com/apache/flink-kubernetes-operator/blob/main/examples/autoscaling/src/main/java/autoscaling/LoadSimulationPipeline.java#L100C51-L100C65
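
For context, a minimal sketch of what a load generator based on the new Source API could look like (a generic illustration only, not the actual migration; DataGeneratorSource, the record type, and the names are assumptions):

{code:java}
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.connector.datagen.source.DataGeneratorSource;
import org.apache.flink.connector.datagen.source.GeneratorFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SourceApiSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Unbounded generator built on the new Source API (no SourceFunction);
        // it simply emits random doubles as the simulated load.
        GeneratorFunction<Long, Double> generator = index -> Math.random();
        DataGeneratorSource<Double> source =
                new DataGeneratorSource<>(generator, Long.MAX_VALUE, Types.DOUBLE);

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "load-generator")
                .print();

        env.execute("load-simulation-sketch");
    }
}
{code}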



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33525) Migrate LoadSimulationPipeline in Flink K8S Operator to the new Source API

2023-11-11 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov reassigned FLINK-33525:
-

Assignee: (was: Alexander Fedulov)

> Migrate LoadSimulationPipeline in Flink K8S Operator to the new Source API
> --
>
> Key: FLINK-33525
> URL: https://issues.apache.org/jira/browse/FLINK-33525
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Alexander Fedulov
>Priority: Major
>
> https://github.com/apache/flink-kubernetes-operator/blob/main/examples/autoscaling/src/main/java/autoscaling/LoadSimulationPipeline.java#L100C51-L100C65



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (FLINK-33525) Migrate LoadSimulationPipeline in Flink K8S Operator to the new Source API

2023-11-11 Thread Alexander Fedulov (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander Fedulov reassigned FLINK-33525:
-

Assignee: Alexander Fedulov

> Migrate LoadSimulationPipeline in Flink K8S Operator to the new Source API
> --
>
> Key: FLINK-33525
> URL: https://issues.apache.org/jira/browse/FLINK-33525
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Alexander Fedulov
>Assignee: Alexander Fedulov
>Priority: Major
>
> https://github.com/apache/flink-kubernetes-operator/blob/main/examples/autoscaling/src/main/java/autoscaling/LoadSimulationPipeline.java#L100C51-L100C65



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (FLINK-33251) SQL Client query execution aborts after a few seconds: ConnectTimeoutException

2023-11-11 Thread Jorick Caberio (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-33251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17785160#comment-17785160
 ] 

Jorick Caberio commented on FLINK-33251:


FWIW I tried replicating this in 1.16 and 1.18.

1.16 works as expected while 1.18 has the same issue.

> SQL Client query execution aborts after a few seconds: ConnectTimeoutException
> --
>
> Key: FLINK-33251
> URL: https://issues.apache.org/jira/browse/FLINK-33251
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.18.0, 1.17.1
> Environment: Macbook Pro 
> Apple M1 Max
>  
> {code:java}
> $ uname -a
> Darwin asgard08 23.0.0 Darwin Kernel Version 23.0.0: Fri Sep 15 14:41:43 PDT 
> 2023; root:xnu-10002.1.13~1/RELEASE_ARM64_T6000 arm64
> {code}
> {code:bash}
> $ java --version
> openjdk 11.0.20.1 2023-08-24
> OpenJDK Runtime Environment Homebrew (build 11.0.20.1+0)
> OpenJDK 64-Bit Server VM Homebrew (build 11.0.20.1+0, mixed mode)
> $ mvn --version
> Apache Maven 3.9.5 (57804ffe001d7215b5e7bcb531cf83df38f93546)
> Maven home: /opt/homebrew/Cellar/maven/3.9.5/libexec
> Java version: 11.0.20.1, vendor: Homebrew, runtime: 
> /opt/homebrew/Cellar/openjdk@11/11.0.20.1/libexec/openjdk.jdk/Contents/Home
> Default locale: en_GB, platform encoding: UTF-8
> OS name: "mac os x", version: "14.0", arch: "aarch64", family: "mac"
> {code}
>Reporter: Robin Moffatt
>Priority: Major
> Attachments: log.zip
>
>
> If I run a streaming query from an unbounded connector from the SQL Client, 
> it bombs out after ~15 seconds.
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: 
> connection timed out: localhost/127.0.0.1:52596
> {code}
> This *doesn't* happen on 1.16.2. It *does* happen on *1.17.1* and *1.18* that 
> I have just built locally (git repo hash `9b837727b6d`). 
> The corresponding task's status in the Web UI shows as `CANCELED`. 
> ---
> h2. To reproduce
> Launch local cluster and SQL client
> {code}
> ➜  flink-1.18-SNAPSHOT ./bin/start-cluster.sh 
> Starting cluster.
> Starting standalonesession daemon on host asgard08.
> Starting taskexecutor daemon on host asgard08.
> ➜  flink-1.18-SNAPSHOT ./bin/sql-client.sh
> […]
> Flink SQL>
> {code}
> Set streaming mode and result mode
> {code:sql}
> Flink SQL> SET 'execution.runtime-mode' = 'STREAMING';
> [INFO] Execute statement succeed.
> Flink SQL> SET 'sql-client.execution.result-mode' = 'changelog';
> [INFO] Execute statement succeed.
> {code}
> Define a table to read data from CSV files in a folder
> {code:sql}
> CREATE TABLE firewall (
>   event_time STRING,
>   source_ip  STRING,
>   dest_ipSTRING,
>   source_prt INT,
>   dest_prt   INT
> ) WITH (
>   'connector' = 'filesystem',
>   'path' = 'file:///tmp/firewall/',
>   'format' = 'csv',
>   'source.monitor-interval' = '1' -- unclear from the docs what the unit is 
> here
> );
> {code}
> Create a CSV file to read in
> {code:bash}
> $ mkdir /tmp/firewall
> $ cat > /tmp/firewall/data.csv <<EOF
> 2018-05-11 00:19:34,151.35.34.162,125.26.20.222,2014,68
> 2018-05-11 22:20:43,114.24.126.190,21.68.21.69,379,1619
> EOF
> {code}
> Run a streaming query 
> {code}
> SELECT * FROM firewall;
> {code}
> You will get results showing (and if you add another data file it will show 
> up) - but after ~30 seconds the query aborts and throws an error back to the 
> user at the SQL Client prompt
> {code}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: 
> connection timed out: localhost/127.0.0.1:58470
> Flink SQL>
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)