Hi everybody,
I am using Flink 1.4.2 and periodically my job goes down with the following
exception in the logs. Relaunching the job does not help; only restarting the
whole cluster does.
Is there a JIRA issue for that? Will upgrading to 1.5 help?
java.io.FileNotFoundException:
Hi all,
I am getting a similar exception while upgrading from Flink 1.4 to 1.6:
```
06 Feb 2019 14:37:34,080 ERROR
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Fatal error
occurred in the cluster entrypoint.
java.lang.RuntimeException: org.apache.flink.util.FlinkException: Could not
```
My guess is that the tmp directory got cleaned on your host and Flink
couldn't restore state from it upon startup.
Take a look at
https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#configuring-temporary-io-directories
I think it is relevant.
On Thu, Nov 1, 2018 at
that's what you are looking for:
> https://issues.apache.org/jira/browse/KAFKA-6221
>
> This issue is connected with Kafka itself rather than Flink.
>
> Best Regards,
> Dom.
>
> On Tue, 23 Oct 2018 at 15:04, Alexander Smirnov
> wrote:
>
>> Hi,
>>
>>
Hi,
I stumbled upon an exception in the "Exceptions" tab which I could not
explain. Do you know what could cause it? Unfortunately I don't know how to
reproduce it. Do you know if there is a respective JIRA issue for it?
Here's the exception's stack trace:
I think that's because you declared it as a transient field.
Move the initialization into the "open" method to resolve that.
On Mon, Oct 22, 2018 at 3:48 PM Ahmad Hassan wrote:
> 2018-10-22 13:46:31,944 INFO org.apache.flink.runtime.taskmanager.Task
> -
You need to use an
> older point to restart.
>
> Best,
> Stefan
>
>
> On 25.09.2018 at 16:53, Alexander Smirnov <
> alexander.smirn...@gmail.com>:
>
> Thanks Stefan.
>
> is it only the Flink runtime that should be updated, or should the job be
> recompiled
ra/browse/FLINK-8836 which
> would also match your Flink version. I suggest updating to 1.4.3 or
> higher to avoid the issue in the future.
>
> Best,
> Stefan
>
>
> On 25.09.2018 at 16:37, Alexander Smirnov <
> alexander.smirn...@gmail.com>:
>
I'm getting an exception when starting a job from a savepoint. Why could that
happen?
Flink 1.4.2
java.lang.IllegalStateException: Could not initialize operator state
backend.
at
Thanks Hequn!
On Thu, 30 Aug 2018 at 04:49, Hequn Cheng wrote:
> Hi Alex,
>
> It seems a bug. There is a discussion here
> <http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Kryo-Exception-td20324.html>
> .
> Best, Hequn
>
> On Wed, Aug 29, 2018
Hi,
A job fell into a restart loop with the following exception. Is this
something known?
What could cause it?
Flink 1.4.2
16 Aug 2018 13:43:00,835 INFO org.apache.flink.runtime.taskmanager.Task -
Source: Custom Source -> (Filter -> Timestamps/Watermarks -> Map, Filter ->
Timestamps/Watermarks ->
Hello,
I have a cluster with multiple jobs running on it. One of the jobs has its
checkpoints constantly failing.
How do I investigate it?
Thank you,
Alex
Hello,
I noticed CPU utilization went high and took a thread dump on the task
manager node. Why would the RocksDBMapState.entries() / seek0 call consume CPU?
It is Flink 1.4.2
"Co-Flat Map (3/4)" #16129 prio=5 os_prio=0 tid=0x7fefac029000
nid=0x338f runnable [0x7feed2002000]
different. Usually, the .log file stores the log
> information output by the log framework. Flink uses slf4j as the log
> interface and supports log4j and logback configurations. The .out file
> stores the STDOUT information. This information is usually output by you
> calling s
Hi,
could you please explain the difference between *.log and *.out files in
Flink?
What information is supposed to be in each of them?
Is "log" a subset of "out"?
How do I set up rotation with gzipping?
Thank you,
Alex
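On the rotation question: the .log files go through the logging framework, but the .out files are raw stdout, so nothing rotates or compresses them by default. One common approach is OS-level logrotate. A sketch, assuming a standalone install logging under /opt/flink/log (the path is an assumption; adjust to your setup):

```
# /etc/logrotate.d/flink (sketch)
/opt/flink/log/*.out /opt/flink/log/*.log {
    daily
    rotate 7
    compress          # gzip rotated files
    copytruncate      # Flink keeps the file handle open, so truncate in place
    missingok
    notifempty
}
```

`copytruncate` matters here: without it, the JVM keeps writing to the renamed file and the fresh one stays empty.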
to denote development and stable releases?
Hi Alexey,
I know that Kibana (https://en.wikipedia.org/wiki/Kibana) can show logs from
different servers on one screen. Maybe this is what you are looking for.
Alex
On Mon, May 14, 2018 at 5:17 PM NEKRASSOV, ALEXEI wrote:
> Is there a way to see logs from multiple Task
the bug you have hit was fixed in 0.11.0.2.
>
> As a side note, as far as we know our FlinkKafkaProducer011 works fine
> with Kafka 1.0.x.
>
> Piotrek
>
> On 7 May 2018, at 12:12, Alexander Smirnov <alexander.smirn...@gmail.com>
> wrote:
>
> Hi Piotr, using 0.11.
sion are you using?
>
> Piotrek
>
>
> On 4 May 2018, at 17:55, Alexander Smirnov <alexander.smirn...@gmail.com>
> wrote:
>
> Thanks for quick turnaround Stefan, Piotr
>
> This is a rare reproducible issue and I will keep an eye on it
>
> searching on the
t; has an idea?
>
> Best,
> Stefan
>
> On 04.05.2018 at 15:45, Alexander Smirnov <
> alexander.smirn...@gmail.com>:
>
> Hi,
>
> what could cause the following exception?
>
> org.apache.flink.streaming.connectors.kafka.F
Hi,
what could cause the following exception?
org.apache.flink.streaming.connectors.kafka.FlinkKafka011Exception: Failed
to send data to Kafka: This server is not the leader for that
topic-partition.
at
Hi,
I'm creating a Kafka producer with timestamps enabled, following the
instructions at
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/connectors/kafka.html#kafka-producer
Optional customPartitioner = Optional.empty();
FlinkKafkaProducer011
OK, I got it. Barrier-n is an indicator of the n-th checkpoint.
My first impression was that barriers carry offset information, but
that was wrong.
Thanks for unblocking ;-)
Alex
Hi,
I'm reading documentation about checkpointing:
https://ci.apache.org/projects/flink/flink-docs-master/internals/stream_checkpointing.html
It describes a case when an operator receives data from all its incoming
streams along with barriers. There's also an illustration on that page for
the
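The alignment that page describes can be sketched in plain Java. This is a deliberately simplified toy model (assumptions: two in-order channels, a single checkpoint, and the "fast" channel is drained first, whereas the real operator interleaves reads from both channels):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class BarrierAlignment {
    static List<String> align(List<String> fast, List<String> slow) {
        List<String> processed = new ArrayList<>();
        Deque<String> buffered = new ArrayDeque<>();
        boolean fastBlocked = false;
        // Once the barrier shows up on the fast channel, stop processing that
        // channel and buffer its records instead of emitting them.
        for (String e : fast) {
            if (e.equals("BARRIER")) { fastBlocked = true; continue; }
            if (fastBlocked) buffered.add(e); else processed.add(e);
        }
        // Keep processing the slow channel until its barrier arrives; at that
        // point the operator would snapshot its state.
        int i = 0;
        for (; i < slow.size(); i++) {
            if (slow.get(i).equals("BARRIER")) { i++; break; }  // alignment done
            processed.add(slow.get(i));
        }
        processed.addAll(buffered);                // release the buffered records
        processed.addAll(slow.subList(i, slow.size()));
        return processed;
    }

    public static void main(String[] args) {
        System.out.println(align(Arrays.asList("a", "BARRIER", "b"),
                                 Arrays.asList("x", "BARRIER", "y")));
    }
}
```

The key point the toy captures: records arriving after a barrier on one input are held back until the barrier has arrived on every input, which keeps the snapshot consistent.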
Hi Fabian,
please share the workarounds; they may be helpful for my case as well.
Thank you,
Alex
On Mon, Apr 23, 2018 at 2:14 PM Fabian Hueske wrote:
> Hi Miki,
>
> Sorry for the late response.
> There are basically two ways to implement an enrichment join as in your
> use
That's absolutely no problem Tzu-Li. Either of them would work. Thank you!
On Thu, Apr 19, 2018 at 4:56 PM Tzu-Li (Gordon) Tai
wrote:
> @Alexander
> Sorry about that, that would be my mistake. I’ll close FLINK-9204 as a
> duplicate and leave my thoughts on FLINK-9155.
Hi,
I have a co-flatmap function which reads data from an external DB on specific
events.
The API for the DB layer is homegrown and it uses multiple threads to speed
up reading.
Can it cause any problems if I use the multithreaded API in the flatMap1
function? Is it allowed in Flink?
Or, maybe I
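For what it's worth, the usual pattern for this situation can be shown in plain Java. This is not the Flink API: the names `open`/`flatMap1`/`close` merely mirror the `RichCoFlatMapFunction` lifecycle, and the submitted task stands in for a DB read. The pool is created in `open()`, results are joined on the operator thread before emitting (never emit from the worker threads), and the pool is shut down in `close()`:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadedLookup {
    private ExecutorService pool;   // would be transient in a real rich function

    void open() { pool = Executors.newFixedThreadPool(4); }

    List<String> flatMap1(List<String> keys) throws Exception {
        List<Future<String>> futures = new ArrayList<>();
        for (String k : keys)
            futures.add(pool.submit(() -> "row-for-" + k));  // stands in for a DB read
        List<String> out = new ArrayList<>();
        for (Future<String> f : futures)
            out.add(f.get());                                // join before emitting
        return out;
    }

    void close() { pool.shutdown(); }

    public static void main(String[] args) throws Exception {
        ThreadedLookup lookup = new ThreadedLookup();
        lookup.open();
        System.out.println(lookup.flatMap1(Arrays.asList("a", "b")));
        lookup.close();
    }
}
```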
This feature has been implemented in 1.4.0; take a look at
https://issues.apache.org/jira/browse/FLINK-4022
https://ci.apache.org/projects/flink/flink-docs-release-1.4/dev/connectors/kafka.html#kafka-consumers-topic-and-partition-discovery
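For reference, enabling the discovery described in those links comes down to a single consumer property (sketch; the key name and the interval value of 60 seconds are taken as examples from the linked 1.4 docs):

```properties
# added to the Properties passed to the FlinkKafkaConsumer
flink.partition-discovery.interval-millis=60000
```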
On Wed, Apr 11, 2018 at 3:33 PM chandresh pancholi <
I've seen a similar problem, but it was not heap size, it was Metaspace.
It was caused by a job restarting in a loop. It looks like, for each restart,
Flink loads new instances of the classes, and very soon it runs out of metaspace.
I've created a JIRA issue for this problem, but got no response from the
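A way to make such a leak fail fast (and leave evidence) instead of silently eating memory is to cap metaspace via the JVM options. A sketch, assuming `env.java.opts` as the flink-conf.yaml key for extra JVM flags and 256m as an example size:

```yaml
# flink-conf.yaml
env.java.opts: -XX:MaxMetaspaceSize=256m -XX:+HeapDumpOnOutOfMemoryError
```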
I have the same question. In the case of a Kafka source, it would be good to know
the topic name and offset of the corrupted message for further investigation.
Looks like the only option is to write messages into a log file
On Fri, Apr 6, 2018 at 9:12 PM Elias Levy
wrote:
> I
art-strategy value is being completely ignored (regardless of its
> value) when user enables checkpointing:
>
> env.enableCheckpointing(5000, CheckpointingMode.EXACTLY_ONCE);
>
> I suspect this is a bug, but I have to confirm it.
>
> Thanks, Piotrek
>
> On 5 Apr 2018,
=2147483647,
delayBetweenRestartAttempts=1) for 43ecfe9cb258b7f624aad9868d306edb.*
2018-04-05 22:38:29,656 INFO
org.apache.flink.runtime.executiongraph.ExecutionGraph - Job
recovers via failover strategy: full graph restart
On Thu, Apr 5, 2018 at 10:35 PM Alexander Smirnov <
alexander.sm
h the newest Flink
> version. Otherwise ClassNotFoundException usually indicates that
> something is wrong with your dependencies. Maybe you can share your
> pom.xml with us.
>
> Regards,
> Timo
>
> On 02.04.18 at 13:32, Alexander Smirnov wrote:
> > I see a lot of me
Hello,
I've defined the restart strategy in flink-conf.yaml as none. The WebUI / Job
Manager section confirms that.
But it looks like this setting is disregarded.
When I go into job's configuration in the WebUI, in the Execution
Configuration section I can see:
Max. number of execution retries
I see a lot of messages in the Flink log like the ones below. What's the cause?
02 Apr 2018 04:09:13,554 ERROR
org.apache.kafka.clients.producer.internals.Sender - Uncaught error in
kafka producer I/O thread:
org.apache.kafka.common.KafkaException: Error registering mbean
>
> Nico
>
> On 28/03/18 12:27, Alexander Smirnov wrote:
> > Hi,
> >
> > are the files needed only at the cluster startup stage?
> > are they only used by the bash scripts?
> >
> > Alex
>
> --
> Nico Kruber | Software Engineer
> data Artisans
>
Hi,
are the files needed only at the cluster startup stage?
are they only used by the bash scripts?
Alex
>
> Does the issue really happen after 48 hours?
> Is there some indication of a failure in TaskManager log?
>
> If you will be still unable to solve the problem, please provide full
> TaskManager and JobManager logs.
>
> Piotrek
>
> On 21 Mar 2018, at 16:00, Alexande
Hi,
For standalone cluster configuration, is it possible to use vanilla Apache
Zookeeper?
I saw there's a wrapper around it in Flink - FlinkZooKeeperQuorumPeer. Is
it mandatory to use it?
Thank you,
Alex
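As far as I understand, a standalone cluster only needs the quorum address, so pointing it at an external vanilla ZooKeeper is just configuration; FlinkZooKeeperQuorumPeer is a convenience used by the bundled start scripts, not a requirement. A flink-conf.yaml sketch (hostnames and the storage path are placeholders):

```yaml
high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.storageDir: hdfs:///flink/ha/
```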
Thank you so much, it helped!
From: Piotr Nowojski <pi...@data-artisans.com<mailto:pi...@data-artisans.com>>
Date: Thursday, October 12, 2017 at 6:00 PM
To: Alexander Smirnov <asmir...@five9.com<mailto:asmir...@five9.com>>
Cc: "user@flink.apache.org<mailt
Hello All,
I got the following error while attempting to execute a job via the command line:
[root@flink01 bin]# ./flink run -c com.five9.stream.PrecomputeJob
/vagrant/flink-precompute-1.0-SNAPSHOT.jar -Xmx2048m -Xms2048m
Cluster configuration: Standalone cluster with JobManager at
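One thing worth checking in that command line: arguments after the job jar are passed to the program's `main()` rather than to the JVM, so `-Xmx2048m -Xms2048m` likely never take effect as heap settings. For a standalone cluster, heap is configured in flink-conf.yaml instead (sketch; 1.x-era keys, 2048 is just the value from the command above):

```yaml
# flink-conf.yaml
jobmanager.heap.mb: 2048
taskmanager.heap.mb: 2048
```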
Hi Biplob,
Yes, unix timestamp is what I'm using now.
But the problem is that a time window like '1 day' is defined using
different start-end timestamps for users in different time zones
Let me try to draw it
|1--2-3-4---|
1 and 3 - time frames for European users
2 and 4 -
Hello everybody,
I’m exploring Flink options to build a statistics engine for a call center solution.
One thing I’m not sure how to implement.
Currently I have the following jobs in the architecture.
Job #1 is for matching start and end events and calculating durations. Like
having call started and
Hi,
The source data, read from MQ, contains a tenant ID.
Is there a way to route messages from a particular tenant to a particular Flink
node? Is that something that can be configured?
Thank you,
Alex
-tuple-with-json4s
>
> I didn't try the approach myself, though.
>
> On Mon, Apr 25, 2016 at 6:50 PM, Alexander Smirnov <
> alexander.smirn...@gmail.com> wrote:
>
>> Hello everybody!
>>
>> my RMQSource function receives string with JSONs in it.
>> Because ma
Hello everybody!
my RMQSource function receives strings with JSON in them.
Because many operations in Flink rely on Tuples, I think it is a
good idea to convert the JSON to Tuples.
I believe this task has been solved already :)
what's the common approach for this conversion?
Thank you,
Alex
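In real jobs you would use a proper JSON library (Jackson, json4s, as suggested elsewhere in this thread). Purely to illustrate the target shape of the conversion, here is a dependency-free toy that maps a flat `{"k":"v",...}` object to key/value pairs standing in for `Tuple2<String, String>` (the regex only handles flat objects with string values; everything here is a hypothetical sketch, not a recommended parser):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class JsonToTuples {
    private static final Pattern FIELD =
        Pattern.compile("\"([^\"]+)\"\\s*:\\s*\"([^\"]*)\"");

    // Each String[2] plays the role of a Tuple2<String, String>.
    static List<String[]> toTuples(String json) {
        List<String[]> tuples = new ArrayList<>();
        Matcher m = FIELD.matcher(json);
        while (m.find())
            tuples.add(new String[] { m.group(1), m.group(2) });
        return tuples;
    }

    public static void main(String[] args) {
        for (String[] t : toTuples("{\"user\":\"alex\",\"queue\":\"support\"}"))
            System.out.println(t[0] + " -> " + t[1]);
    }
}
```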
Is, some specialized for certain use-cases.
>>
>> Specifying Flink programs by config files (or graphically) would require
>> a data model, a DataStream/DataSet program generator and probably a code
>> generation component.
>>
>> Best, Fabian
>>
>> 2016-04-22
Hi guys!
I’m new to Flink, and actually to this mailing list as well :) This is my first
message.
I’m still reading the documentation and I would say Flink is an amazing
system!! Thanks to everybody who participated in the development!
The information I didn’t find in the documentation - if it is