Hi Bastien,
Flink supports registering state via a state descriptor when calling
runtimeContext.getState(). However, once the state is registered, it cannot be
removed anymore. And when you restore from a savepoint, the previous state is
registered again [1]. Flink does not support dropping state directly a
Hi,
Currently, flink-connector-jdbc doesn't support the MS SQL Server dialect. Only
MySQL and Postgres are supported.
Best,
Jark
On Tue, 8 Dec 2020 at 01:20, aj wrote:
> Hello ,
>
> I am trying to create a table with microsoft sql server using flink sql
>
> CREATE TABLE sampleSQLSink (
> id INTEG
Hi, Jakub
I am not familiar with `sbt pack`. But I assume you are doing the following
(correct me if I misunderstand you):
1. The UDF and Job jar are in the same "fatjar"
2. You "new" a UDF object in the job().
3. You submit the "fatjar" to the local Flink environment.
In theory there should not b
Hi,
I followed [1] to define my own metric as:
val dropwizardHistogram =
  new com.codahale.metrics.Histogram(new SlidingWindowReservoir(500))
histogram = getRuntimeContext
  .getMetricGroup
  .histogram("feature_latency", new DropwizardHistogramWrapper(dropwizardHistogram))
and it is su
Hi Deep,
Could you show your current code snippet? I have tried the CSV file data on my
local machine and it works fine, so I guess something might be wrong elsewhere.
Best,
Wei
> On Dec 8, 2020, at 03:20, DEEP NARAYAN Singh wrote:
>
> Hi Wei and Till,
> Thanks for the quick reply.
>
> @Wei, I tried with
>
> For *log4j1*, I am afraid you need to set the java dynamic option[1] to
> get a similar effect.
Sorry for the inconvenience. Actually, I meant log4j1 in the
above sentence. IIRC, log4j1 does not
support using system environment variables in the log4j configuration.
However, it seems that you are running a ses
As a follow up – I’m trying to follow the approach I outlined below, and I’m
having trouble figuring out how to perform the step of doing the delete/insert
after the job is complete.
I’ve tried adding a job listener, like so, but that doesn’t seem to ever get
fired off:
val statementSet =
I forgot to mention: this is Flink 1.10.
-K
On Mon, Dec 7, 2020 at 5:08 PM Kye Bae wrote:
> Hello!
>
> We have a real-time streaming workflow that has been running for about 2.5
> weeks.
>
> Then, we began to get the exception below from taskmanagers (random) since
> yesterday, and the job bega
Hello!
We have a real-time streaming workflow that has been running for about 2.5
weeks.
Then, we began to get the exception below from taskmanagers (random) since
yesterday, and the job began to fail/restart every hour or so.
The job does recover after each restart, but sometimes it takes more
Version 1.11.2
On Sun, Dec 6, 2020 at 10:20 PM Yun Gao wrote:
> Hi Rex,
>
> I tried a similar example[1] but did not reproduce the issue. Which
> version of Flink are you using now?
>
> Best,
> Yun
>
>
>
>
> [1] The example code:
>
> StreamExecutionEnvironment bsEnv =
> StreamExecutionEnvi
Thanks for all this feedback, this is really helpful!
On Sun, Dec 6, 2020 at 7:23 PM Xintong Song wrote:
> FYI, I've opened FLINK-20503 for this.
> https://issues.apache.org/jira/browse/FLINK-20503
>
> Thank you~
>
> Xintong Song
>
>
>
> On Mon, Dec 7, 2020 at 11:10 AM Xintong Song
> wrote:
>
>
Hi Guowei,
It turned out that for my application I unfortunately can't have the UDF in the
"job's" classpath. As I am using a local Flink environment and `sbt pack`
(similar to a fatjar) to create launch scripts, to my understanding,
I can't access the classpath (when the project is packe
Hello ,
I am trying to create a table with microsoft sql server using flink sql
CREATE TABLE sampleSQLSink (
  id INTEGER,
  message STRING,
  ts TIMESTAMP(3),
  proctime AS PROCTIME()
) WITH (
  'connector' = 'jdbc',
  'driver' = 'com.microsoft.sqlserver.jdbc.SQLServerDriver',
  'u
I'm using a dockerized HA cluster that I submit pipelines to through the CLI.
So where exactly can I configure the PIPELINE env variable? Seems like it needs
to be set per container. But many different pipelines run on the same
TaskManager (so also the same container).
And your example mentions
Hello,
We have experienced some weird issues with POJO MapState in a streaming job
upon checkpointing when removing state, then modifying the state POJO and
restoring the job:
Caused by: java.lang.NullPointerException
at
org.apache.flink.api.java.typeutils.runtime.PojoSerializer.<init>(PojoSerializer.java:12
How does Flink cache values that do not exist in the database?
I would like to cache hits forever, but re-check items that do not
exist in the database only every 15 minutes. Is there a way to set that up in
the SQL / Table API? Also, is there a way to set that up in Keyed Sta
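One way to approximate the behaviour asked about above, outside the Table API, is a small hand-rolled cache that keeps hits without expiry but remembers misses only for a limited period; a rough sketch in plain Java (all names are illustrative, this is not a Flink API):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: cache database hits forever, but re-check keys
// that were absent once their miss entry expires (e.g. after 15 minutes).
public class TwoTtlCache<K, V> {
    private final Map<K, V> hits = new HashMap<>();
    private final Map<K, Long> missSeenAt = new HashMap<>();
    private final long missTtlMillis;

    public TwoTtlCache(long missTtlMillis) {
        this.missTtlMillis = missTtlMillis;
    }

    /** True if the database should be queried for this key now. */
    public boolean needsLookup(K key, long nowMillis) {
        if (hits.containsKey(key)) {
            return false; // hits are cached forever
        }
        Long seen = missSeenAt.get(key);
        return seen == null || nowMillis - seen >= missTtlMillis;
    }

    public V get(K key) {
        return hits.get(key);
    }

    public void recordHit(K key, V value) {
        hits.put(key, value);
        missSeenAt.remove(key);
    }

    public void recordMiss(K key, long nowMillis) {
        missSeenAt.put(key, nowMillis);
    }
}
```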
I am setting up a Flink job that will reload a table in a postgres database
using the Flink SQL functionality. I just wanted to make sure that given the
current feature set I am going about this the correct way. I am currently using
version 1.11.2, but plan on upgrading to 1.12 soon whenever it
Hi there,
We're using Flink to read from a Kinesis stream. The stream contains
messages that themselves contain lists of events and we want our Flink jobs
(using the event time characteristic) to process those events individually.
We have this working using flatMap in the DataStream but we're havi
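The flatMap step described above can be sketched independently of Kinesis and Flink; `Message` and its event list below are illustrative stand-ins for the stream records:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FlatMapSketch {
    // Illustrative stand-in for a record that wraps a list of events.
    static class Message {
        final List<String> events;
        Message(List<String> events) { this.events = events; }
    }

    // Mirrors what a DataStream flatMap does: one input message,
    // zero or more output events emitted individually.
    static List<String> flattenAll(List<Message> messages) {
        return messages.stream()
                .flatMap(m -> m.events.stream())
                .collect(Collectors.toList());
    }
}
```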
Hi Deep,
Could you use the TextInputFormat which reads a file line by line? That way
you can do the JSON parsing as part of a mapper which consumes the file
lines.
Cheers,
Till
On Mon, Dec 7, 2020 at 1:05 PM Wei Zhong wrote:
> Hi Deep,
>
> (redirecting this to user mailing list as this is not
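Till's suggestion (read the file line by line, then parse each line in a mapper) can be sketched without Flink; `parseLine` below is a trivial stand-in for the real JSON parsing:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LineParseSketch {
    // Stand-in for the JSON parsing the mapper would do per line; kept
    // trivial here so the sketch stays dependency-free.
    static String parseLine(String line) {
        return line.trim();
    }

    // In a Flink job this role is played by TextInputFormat plus a mapper;
    // here a plain Stream of lines (e.g. from Files.lines(path)) suffices.
    static List<String> parseAll(Stream<String> lines) {
        return lines.map(LineParseSketch::parseLine).collect(Collectors.toList());
    }
}
```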
Hi Deep,
(redirecting this to user mailing list as this is not a dev question)
You can try to set the line delimiter and field delimiter of the
RowCsvInputFormat to a non-printing character (assuming there are no
non-printing characters in the CSV files). It will read all the content of a CSV file
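The non-printing-delimiter idea can be illustrated without Flink: pick a character such as \u0001 that cannot occur in the data, so a split on it recovers the fields cleanly (the delimiter choice and sample row are assumed examples):

```java
public class DelimiterSketch {
    // Split a row on a non-printing delimiter character, keeping trailing
    // empty fields (the -1 limit), as a CSV-style reader would need to.
    static String[] splitFields(String row, char delimiter) {
        return row.split(String.valueOf(delimiter), -1);
    }
}
```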
Hi Guowei,
Great thanks for your help. Your suggestion indeed solved the issue. I moved
`myFunction` to the class path where execution starts.
Kind regards,
Jakub
Von: Guowei Ma
Gesendet: Montag, 7. Dezember 2020 12:16
An: Jakub N
Cc: user@flink.apache.org
B
Hi Jakub,
I think the problem is that the `cls`, which you load at runtime, is not
visible to the thread context classloader.
Flink deserializes the `myFunction` object with the context classloader,
so maybe you could put `myFunction` on the job's classpath.
Best,
Guowei
On Mon, Dec 7, 2020 at 5:54 PM
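The classloader point above can be illustrated in plain Java: deserialization resolves classes by name through the current thread's context classloader, so a class loaded by some other loader may simply not be found there (the class names in the test are just examples):

```java
public class ContextClassLoaderSketch {
    // Resolve a class the way name-based deserialization does: through the
    // current thread's context classloader. Returns null when the class is
    // not visible to that loader, which is the failure mode described above.
    static Class<?> resolveViaContextClassLoader(String className) {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        try {
            return Class.forName(className, false, cl);
        } catch (ClassNotFoundException e) {
            return null;
        }
    }
}
```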
Hi Guowei,
Thanks for your help,
here is the relevant code (QueryCommand class):
val fsSettings: EnvironmentSettings = EnvironmentSettings
  .newInstance()
  .useBlinkPlanner()
  .inStreamingMode()
  .build()
val fsEnv: StreamExecutionEnvironment =
  StreamExecutionEnvironment.getExecutionEnviro