Hi Flink Users,
We need to expose some additional options for the s3 hadoop filesystem:
Specifically, we want to set object tagging and lifecycle. This would be a
fairly easy change, and we initially thought to create a new Filesystem
with very minor changes to allow this.
However then I wondered,
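For context, the existing flink-s3-fs-hadoop filesystem already mirrors
flink-conf.yaml keys with an s3. prefix into Hadoop's s3a configuration,
so options s3a already understands can be set without code changes;
object tagging and lifecycle are not standard s3a settings, which is
presumably why a filesystem change would be needed. A minimal
illustration of that existing passthrough (key names as in the Flink S3
docs):

s3.endpoint: https://s3.eu-central-1.amazonaws.com
s3.path.style.access: true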
Tried to attach a tar file but it got blocked. Resending with the files
attached individually.
Ok, I have a minimal reproducible example. Attaching a tar file of the job that
crashed.
The crash has nothing to do with the number of state variables. But it does
seem to be caused by using a type for
>
> Could I use your command with no docker?
Hypothetically, yes, but it's somewhat impractical. The ClickCountJob
needs Flink and Kafka, and there is another Java application (the
clickevent-generator) that writes the data being processed into Kafka.
On Sat, Oct 10, 2020 at 5:32
No, thanks! I used JobClient to call getJobStatus and sleep if the status
was not terminal. I'll switch to this.
On Sat, Oct 10, 2020 at 12:50 AM Aljoscha Krettek
wrote:
> Hi Dan,
>
> did you try using the JobClient you can get from the TableResult to wait
> for job completion? You can get a CompletableFuture for the JobResult
> which should help you.
Hi mates!
I'm at the beginning of building a recommendation pipeline on top of
Flink. I'm going to register a list of Python UDF functions at job
startup, where each UDF is an ML model.
Over time, new model versions appear in the ML registry, and I would
like to update my UDF functions
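Not an answer to the update-over-time part, but a minimal sketch of
registering models as UDFs at startup. It uses the Java Table API (the
PyFlink UDF API follows the same register/drop pattern), and
RecommenderV1 with its scoring logic is a hypothetical placeholder for a
real model loaded from the registry:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class RegisterModelUdfs {

    /** Hypothetical stand-in for one ML model version exposed as a UDF. */
    public static class RecommenderV1 extends ScalarFunction {
        public Double eval(Double feature) {
            return feature * 0.5; // placeholder scoring logic
        }
    }

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Register each model as a named function at job startup.
        tEnv.createTemporarySystemFunction("recommend", RecommenderV1.class);

        // A newer version can later be registered under the same name by
        // dropping and re-creating the temporary function.
        tEnv.dropTemporarySystemFunction("recommend");
        tEnv.createTemporarySystemFunction("recommend", RecommenderV1.class);
    }
}

Note that re-registering a function only affects statements planned
afterwards; a job that is already running keeps the UDF it was compiled
with, so picking up new model versions typically means resubmitting the
job or having the UDF reload the model itself.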
Could I use your command with no docker?
------ Original Message ------
From: "David Anderson"
The ClickCountJob used in the operations playground accepts application
parameters, like this:
flink run -d /opt/ClickCountJob.jar --bootstrap.servers kafka:9092
--checkpointing --event-time --backpressure
To try this, you would modify the docker-compose.yaml file in [1]. If you
want to see how
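A hedged sketch of the kind of docker-compose.yaml edit meant here; the
image name and depends_on entries are placeholders following the
operations playground layout, and only the command line carrying the job
parameters is the point:

client:
  image: apache/flink-ops-playground   # placeholder, keep the playground's image
  command: "flink run -d /opt/ClickCountJob.jar --bootstrap.servers kafka:9092 --checkpointing --event-time --backpressure"
  depends_on:
    - jobmanager
    - kafka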
Hi Dan,
did you try using the JobClient you can get from the TableResult to wait
for job completion? You can get a CompletableFuture for the JobResult
which should help you.
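A minimal sketch of that approach, assuming a Table API program that
submits an INSERT INTO statement via executeSql (the table names and
environment settings are placeholders):

import org.apache.flink.core.execution.JobClient;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.TableResult;

public class WaitForTableJob {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // "sink_table" and "source_table" stand in for tables defined elsewhere.
        TableResult result = tEnv.executeSql(
                "INSERT INTO sink_table SELECT * FROM source_table");

        JobClient client = result.getJobClient()
                .orElseThrow(() -> new IllegalStateException("no job was submitted"));

        // Blocks until the job reaches a terminal state, instead of polling
        // getJobStatus() and sleeping. (Some Flink versions expect a user
        // ClassLoader argument on getJobExecutionResult().)
        client.getJobExecutionResult().get();
    }
}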
Best,
Aljoscha
On 08.10.20 23:55, Dan Hill wrote:
I figured out the issue. The join caused part of the job's executi
Hi Song,
Flink 1.4.2 is a bit too old, and I think this error is caused by FLINK-8876
[1][2], which should be fixed after Flink 1.5. Please consider upgrading your
Flink version.
[1] https://issues.apache.org/jira/browse/FLINK-8876
[2] https://issues.apache.org/jira/browse/FLINK-8836
Best
Yun Tang