```
tement": [
    {
      "Action": [
        "s3:ListBucket",
        "s3:Get*",
        "s3:Put*",
        "s3:Delete*"
      ],
      "Resource": [
        "arn:aws:s3:::-flink-dev",
```
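For reference, a minimal S3 policy along those lines usually grants `s3:ListBucket` on the bucket ARN itself and the object-level actions on the objects under it (`/*`). The sketch below is illustrative only: the bucket name and the exact action list are assumptions, not taken from the thread.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-flink-dev"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::my-flink-dev/*"]
    }
  ]
}
```

A common pitfall is attaching `ListBucket` to the `/*` resource instead of the bucket ARN, which makes listing fail even though reads and writes work.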
a promising option to look into.
Svend
[1] https://github.com/spotify/flink-on-k8s-operator/issues/82
On Mon, 9 Aug 2021, at 12:36 PM, Niklas Wilcke wrote:
> Hi Yuval,
>
> thank you for sharing all the information. I forgot to mention the Lyft
> operator. Thanks for "addin
maybe KafkaSource should refuse to even start if
its configured parallelism is higher than the number of Kafka partitions? Otherwise,
this error condition is rather difficult to interpret IMHO. I'm happy to open a
Jira and work on that if that's desired?
Best regards,
Svend
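The fail-fast check suggested above could be sketched in plain Java along these lines. This is not an existing KafkaSource API; the class name, method name, and error message are all illustrative.

```java
// Illustrative sketch of a fail-fast guard: reject a configuration whose
// source parallelism exceeds the Kafka partition count, since the extra
// subtasks would sit idle forever (and hold back watermarks).
public class ParallelismGuard {

    /**
     * Throws if the configured source parallelism exceeds the number of
     * Kafka partitions for the subscribed topic(s).
     */
    public static void checkSourceParallelism(int parallelism, int partitionCount) {
        if (parallelism > partitionCount) {
            throw new IllegalStateException(
                "Source parallelism " + parallelism
                    + " exceeds Kafka partition count " + partitionCount
                    + "; some source subtasks would be permanently idle.");
        }
    }

    public static void main(String[] args) {
        // Fine: fewer subtasks than partitions.
        checkSourceParallelism(4, 8);

        // Misconfigured: more subtasks than partitions -> explicit failure.
        try {
            checkSourceParallelism(8, 4);
            throw new AssertionError("expected the guard to reject this");
        } catch (IllegalStateException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

In a real implementation the partition count would come from the Kafka admin/metadata API at source startup; the point here is only that failing early with an explicit message is much easier to interpret than idle subtasks.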
On Wed, 4 Aug 2021, at
a supplementary ProcessFunction in the pipeline just to have
one more stateful thing and hope to trigger the checkpoint, though without
success.
On Tue, 3 Aug 2021, at 1:33 PM, Robert Metzger wrote:
> Hi Svend,
> I'm a bit confused by this statement:
>
>> * In streaming mod
ose are streaming concepts.
Hopefully I'm doing something wrong?
[1] http://mail-archives.apache.org/mod_mbox/flink-user/202106.mbox/browser
Thanks a lot in advance,
Svend
```
// I'm testing this by launching the app from an IDE
StreamExecutionEnvironment env =
StreamExecutionEnvironme
```
ook, the max width seems to be defined in [1], and used in
various places like [2] and [3]. Should I open a Jira to discuss this and cc
you in it?
Cheers,
Svend
[1]
https://github.com/apache/flink/blob/6d8c02f90a5a3054015f2f1ee83be821d925ccd1/flink-table/flink-table-common/src/main/java/org/apa
flink-docs-release-1.13/docs/dev/table/sqlclient/#sql-client-startup-options
Thanks a lot in advance!
Svend
You can debug the various operations that are attempted on S3 by setting this
logger to DEBUG level: org.apache.hadoop.fs.s3a
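With the Log4j 2 properties format used by the Flink distribution's `conf/log4j.properties`, that could look like the fragment below (the logger key `s3a` is an arbitrary local name):

```properties
logger.s3a.name = org.apache.hadoop.fs.s3a
logger.s3a.level = DEBUG
```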
Good luck :)
Svend
[1]
https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html
[2]
https://hadoop.apache.org/docs/current/hadoop-aws/tools/had
not aware of the 2nd issue you refer to, related to in-progress jobs. In case
that helps, we access the Flink UI by simply opening a port-forward on port
8081 on the job manager, which among other things shows the currently running
jobs.
Svend
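Concretely, that port-forward could be opened like this (the pod name is illustrative; substitute your actual job manager pod):

```
kubectl port-forward flink-jobmanager-pod 8081:8081
# then browse to http://localhost:8081
```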
On Mon, 31 May 2021, at 12:00 PM, Ilya Karpov wrote:
Awesome, thanks a lot for the clarifications Jing Zhang, it's very useful.
Best,
Svend
On Sun, 30 May 2021, at 6:27 AM, JING ZHANG wrote:
> Hi Svend,
> Your solution could work well in Flink 1.13.0 and later because those
> versions provide many related improvements.
>
&
an
important attention point.
Hope this helps,
Svend
On Fri, 28 May 2021, at 9:09 AM, Ilya Karpov wrote:
> Hi there,
>
> I’m doing a little research into the easiest way to deploy a Flink job to a k8s
> cluster and manage its lifecycle with a *k8s operator*. The list of solutions
If that is correct, I guess I can simply use the DataStream connector for that
specific topic and then convert it to a Table.
Thanks a lot!
Svend
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/kafka/#source-per-partition-watermarks
[2]
https://ci
Hi everyone,
What is the typical architectural approach with Flink SQL for processing recent
events from Kafka and older events from some separate cheaper storage?
I currently have the following situation in mind:
* events are appearing in Kafka and retained there for, say, 1 month
* events
Thanks for the feedback.
The CSV is a good idea and will make my tests more readable, I'll use that.
Looking forward to Flink 1.13 !
Svend
On Fri, 30 Apr 2021, at 9:09 AM, Timo Walther wrote:
> Hi,
>
> there are multiple ways to create a table for testing:
>
> -
estDataTable = tableEnv.fromDataStream(testStream, $("created"),
    $("event_time").rowtime());
tableEnv.createTemporaryView("post_events_kafka", testDataTable);
On Thu, 29 Apr 2021, at 7:04 PM, Svend wrote:
> I'm trying to write java unit test for a Flink SQL application us
I'm trying to write a Java unit test for a Flink SQL application using the Flink
mini cluster, but I cannot manage to create an input table with nested fields and
time characteristics.
I had a look at the documentation and examples below, although I'm still
struggling:
$("user_id"), $("handle"), $("name"));
table.execute().print();
}
}
"""
You can also dig here, you'll probably find better examples
https://github.com/apache/flink/tree/master/flink-examples/flink-examples-table
Cheers,
Svend
O
Hi all,
I'm failing to set up an example of wire serialization with Protobuf, could
you help me figure out what I'm doing wrong?
I'm using a simple protobuf schema:
```
syntax = "proto3";
import "google/protobuf/wrappers.proto";
option java_multiple_files = true;
message DemoUserEvent {
```