Hi John,
can you please submit this as an issue in JIRA? If you suspect it is
related to other issues, just make a note of that in the issue as well.
Thanks!
Ingo
On 23.01.22 18:05, John Smith wrote:
Just in case, but in 1.14.x regardless of whether the job manager is leader or
not. Before submitt
Hi,
We are on Flink 1.13.3 and trying to upgrade the cluster to version 1.14.3. I
see that the job (on 1.13.3) is unable to start up because it says it couldn't
find a metrics group (inside the FlinkKafkaConsumer class).
- can I deploy a 1.13.3 job on a 1.14.3 cluster?
- can I deploy a 1.14.3 job on a 1.13.3 cluste
Thanks Yun for your response, but I am not sure why the doc states that in many
cases it is non-parallel.
On Monday, January 24, 2022, 01:30:27 AM EST, Yun Tang
wrote:
Hi Singh,
All the output operators transformed by AllWindowedStream
Hi Caizhi:
Thanks for your reply.
I need to aggregate streams based on dynamic groupings. The groupings (keyBy)
are not all known upfront and can be added or removed after the streaming
application has started, and I don't want to restart the application or change
the code. So, I wanted to find ou
Cool, thanks! I'll speak to the shared cluster support team to see if they
can install our CA cert on every box. So we've got that side of
authentication sorted - flink can trust that the service is who it says it
is.
How about the other way around? Any thoughts on how I could provide a
*key*store
Hi team,
We have a streaming job running with 1 JM + 4 TM in our k8s cluster.
Recently one of the TMs encountered some failure, and the job can't be
recovered from the latest state from the checkpoint. From the log we found
something suspicious -
2022-01-21T13:38:41.296Z | FlinkStreamJob | SPNJP
Hi, I'm using Flink 1.14.3 with Gradle. I explicitly added the flink client
dependency and the job starts but it quits with...
In Flink 1.10 the job worked as is. How do I set the number of slots, and are
there any other settings for the IDE?
16:29:50,633 INFO
org.apache.flink.runtime.resourcemanager.s
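On the slots question: for a standalone cluster the slot count is set in
flink-conf.yaml; the local MiniCluster used inside the IDE does not read that
file, but the same option can be passed programmatically (e.g. via a
Configuration handed to StreamExecutionEnvironment.createLocalEnvironment). A
minimal flink-conf.yaml fragment, with 4 as an example value:

```yaml
taskmanager.numberOfTaskSlots: 4
```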
Hello Filip,
As far as I know SslContextBuilder.forClient() should use the default trust
store, so if you install your self-signed certificate in the community
supported container image it should be picked up [1].
The following user has reported something similar, and it seems that
they've g
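For the trust side, the JVM's default trust manager consults the standard
javax.net.ssl system properties, so besides baking the CA into the image's
cacerts you could pass it on the JVM command line. A sketch (the path and
password are placeholders):

```
-Djavax.net.ssl.trustStore=/path/to/truststore.jks
-Djavax.net.ssl.trustStorePassword=changeit
```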
Hi All!
I was wondering if there's a way to secure a remote function by requiring
the client (flink) to use a client cert. Preferably a base64 encoded string
from the env properties, but that might be asking for a lot :)
I had a look at the code, and NettySharedResources seems to use
SslContextBu
I had a doubt regarding watermarking w.r.t. streaming with FileSource. I would
really appreciate it if somebody could explain this behavior.
Consider a filesystem with a root folder containing date wise sub folders
such as *D1* , *D2*, … and so on. Each of these date folders further has
24 sub-folders
Hi Team,
We are currently running our streaming application based on Flink (DataStream
API) on a non-prod cluster, and planning to move it to a production cluster
soon. We are keeping certain keyed state backed by RocksDB in the Flink
application. We need a mechanism to query these keyed state values
Hi,
I would like to bump this up a little bit.
The isCaseSensitive option is rather clear: if this is false, then columns read
from the Parquet file are matched case-insensitively.
batchSize: how many records we read from the Parquet file before passing them
to the upper classes, right?
Could someone describe what timest
Hi Sebastian,
If you are a Flink runtime developer, Flink has already made the runtime code
Scala-free [1] for maintenance reasons. If you are just a Flink user, I think
both languages are fine.
[1] https://issues.apache.org/jira/browse/FLINK-14105
[FLINK-14105] Make flink-runtime scala-free - ASF
Hi there,
I am getting started with Apache Flink. I am curious whether there is a clear
winner between developing in either Scala or Java.
It sounds like Flink is typically slower to support new versions of Scala and
that Java development might have fewer quirks.
What do you think? I have expe
Hi all,
I have one flink job which reads data from one kafka topic and sinks to two
kafka topics using Flink SQL.
The code is something like this:
tableEnv.executeSql(
  """
  create table sink_table1 (
    xxx
    xxx
  ) with (
    'connector' = 'kafka',
    'topic' = 'topic1'
  )
  """
)
tableEnv.execute
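If the intent is one job writing to both sinks, note that consecutive
executeSql(INSERT ...) calls each launch a separate job; Flink's statement sets
bundle the INSERTs into one job instead. In the Table API that is
TableEnvironment.createStatementSet(); in the SQL Client (1.13+) the syntax is
roughly as follows, with the table names and queries as placeholders:

```sql
BEGIN STATEMENT SET;
INSERT INTO sink_table1 SELECT ... FROM source_table;
INSERT INTO sink_table2 SELECT ... FROM source_table;
END;
```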
Hi all,
I agree with Xintong's comment: Reducing the number of deployment modes
would help users. There is a clearer distinction between session mode and
the two other deployment modes (i.e. application and job mode). The
difference between application and job mode is not that easy to grasp for
new
Certain (expected and completely fine) lifecycle events, like the one
you mentioned, do log a stacktrace on debug level I believe. This one is
not a cause for concern.
On 24/01/2022 11:02, Caizhi Weng wrote:
Hi!
The exception stack you provided is not complete. Could you please
provide the w
Hi!
The exception stack you provided is not complete. Could you please provide
the whole exception stack (including all "Caused by")? Also could you
please provide your user code so that others can look into this problem?
Shawn Du wrote on Mon, Jan 24, 2022 at 17:22:
> Hi experts,
> I am new to flink, just r
Hi!
Sorry for the late reply. Which Flink version are you using? For current
Flink master there is no JdbcTableSource.
Qihua Yang wrote on Wed, Jan 19, 2022 at 16:00:
> Should I change the query? something like below to add a limit? If no
> limit, does that mean flink will read whole huge table to memory and
Hi
After 1.14.0 I think Flink should work well even at the 1000*1000 scale with a
10 s akka.ask.timeout in the deploy stage.
So thank you for any further feedback after you investigate.
BTW: I think you might look at
https://issues.apache.org/jira/browse/FLINK-24295, which might cause the
problem.
Best,
Gu
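For reference, the timeout mentioned above lives in flink-conf.yaml (config key
as in Flink 1.14; the value is only an example):

```yaml
akka.ask.timeout: 10 s
```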
Hi!
As far as I know there is currently no way to do this. However if you'd
like to, you can implement this with a custom source. Before you stop the
job you need to send a signal to this custom source (for example through a
common file on HDFS or just through socket) and if the custom source
dete
Hi!
Adding/removing keyed streams will change the topology graph of the job.
Currently it is not possible to do so without restarting the job and as far
as I know there is no existing framework/pattern to achieve this.
By the way, why do you need this functionality? Could you elaborate more on
yo
Hi experts,
I am new to Flink; I just ran a simple job in the IDE, but many exceptions were
thrown when the job finished (see below).
The job source is bounded, reading from a local file and running in streaming
mode. There is also a custom sink that simply writes to a local file.
It seems that each time I run, I got d
Hi!
All properties you set by calling KafkaSource.builder().setProperty() will
also be given to KafkaConsumer (see [1]). However these two properties are
specific to Flink and Kafka does not know them, so Kafka will produce a
warning message. These messages are harmless as long as the properties y
Sorry for joining the discussion late.
I'm leaning towards deprecating the per-job mode soonish, and eventually
dropping it in the long-term.
- One less deployment mode makes it easier for users (especially newcomers)
to understand. Deprecating the per-job mode sends the signal that it is
legacy,
Hi all,
Does anyone know of a tool that can automatically convert a DB table schema
(like a MySQL CREATE TABLE) into a Flink schema
(org.apache.flink.table.api.Schema)? I want to implement dynamic generation of
Flink table schemas.
---
Best,
WuKong
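Flink's JDBC catalog can derive table schemas for some databases, but a
hand-rolled converter is also feasible. A minimal Python sketch of the
type-mapping core, with a deliberately tiny and hypothetical mapping table
(real MySQL DDL needs a proper parser, not a regex):

```python
import re

# Hypothetical, minimal type mapping; extend as needed.
MYSQL_TO_FLINK = {
    "int": "INT",
    "bigint": "BIGINT",
    "varchar": "STRING",
    "text": "STRING",
    "datetime": "TIMESTAMP(3)",
    "double": "DOUBLE",
}

def mysql_ddl_to_flink_columns(ddl):
    """Extract (name, flink_type) pairs from a simple CREATE TABLE body."""
    # Take everything between the outermost parentheses as the column list.
    body = ddl[ddl.index("(") + 1 : ddl.rindex(")")]
    cols = []
    for part in body.split(","):
        m = re.match(r"\s*`?(\w+)`?\s+(\w+)", part)
        if not m:
            continue
        name, mysql_type = m.group(1), m.group(2).lower()
        if mysql_type in MYSQL_TO_FLINK:  # skips PRIMARY KEY etc.
            cols.append((name, MYSQL_TO_FLINK[mysql_type]))
    return cols
```

For example, `mysql_ddl_to_flink_columns("CREATE TABLE t (id BIGINT, name
VARCHAR(255))")` yields `[("id", "BIGINT"), ("name", "STRING")]`, which could
then feed Schema.newBuilder() on the Flink side.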
Hi Guowei,
Thanks a lot for your reply.
I’m using 1.14.0. The timeout happens at job deployment time. A subtask would
run for a short period of `akka.ask.timeout` before failing due to the timeout.
I noticed that the jobmanager has a very high CPU usage at that moment, like
2000%. I’m reasoning abo
Hi Zhilong,
Thanks a lot for your very detailed answer!
My setup: Flink 1.14.0 on YARN, jdk1.8_u202
The timeout happens at the job deployment stage. I checked GC logs, both JM and
TM look good, but the CPU usage of JM could go up to 2000% for a short time
(cgroups are not turned on).
I’ve se
>
> I checked the image prior to cluster creation; all log files are there.
> Once the cluster is deployed, they are missing. (bug?)
I do not think it is a bug since we have already shipped all the config
files (log4j properties, flink-conf.yaml) via the ConfigMap.
Then it is directly mounted to an
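A heavily abbreviated sketch of the mount pattern being described, with
hypothetical names (the real pod spec has more fields):

```yaml
# Hypothetical fragment of a TaskManager pod spec
volumes:
  - name: flink-config-volume
    configMap:
      name: flink-config
containers:
  - name: taskmanager
    volumeMounts:
      - name: flink-config-volume
        mountPath: /opt/flink/conf
```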