Hi Stefan,
Your reply really helped me a lot. Thank you.
2018-01-08 19:38 GMT+08:00 Stefan Richter:
> Hi,
>
> 1. If `snapshotState` failed at the first checkpoint, does it mean there
> is no state and no transaction can be aborted by default?
>
>
> This is a general problem and not only limited
*Background:* We have a setup of Flink 1.3.2 along with a secure MapR
(v5.2.1) cluster (Flink is running on MapR client nodes). We run this Flink
cluster via the `flink-jobmanager.sh foreground` and `flink-taskmanager.sh
foreground` commands under Marathon. We want to upgrade to Flink 1.4.0.
Since we requir
When I change the path from an S3 path to a local path, I get the following
error:
Cluster configuration: Standalone cluster with JobManager at localhost/127.0.0.1:6123
Using address localhost:6123 to connect to JobManager.
JobManager web interface address http://localhost:8082
Starting execution
Here is the task manager log:
2018-01-08 16:16:13,406 INFO
org.apache.flink.runtime.taskmanager.TaskManager - Received task Source:
Kafka -> Sink: S3 (1/1)
2018-01-08 16:16:13,407 INFO org.apache.flink.runtime.taskmanager.Task -
Source: Kafka -> Sink: S3 (1/1) (bc932736c6526eb1bd41f6aaa73b2997) sw
Thank you Ufuk & Shannon. Since my Kafka consumer is the
UnboundedKafkaSource from Beam, I'm not sure if the records-lag-max metric is
exposed. Let me research further.
Thanks,
Jins George
On 01/08/2018 10:11 AM, Shannon Carey wrote:
Right, backpressure only measures backpressure on the inside of the Fl
+Aljoscha Krettek I set up my project using the
template you suggested and I'm able to bucket and write files locally. I
also want to test writing to S3, but I don't know how to configure the `sbt
run` command to tell the FlinkMiniCluster to use the
flink-s3-fs-hadoop-1.4.0.jar and a flink-conf.yaml
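For the local-run case, one possible approach (a sketch with placeholder paths, not a verified recipe) is to put the S3 filesystem jar on the classpath and point the process at a Flink conf directory. In a Flink 1.4 distribution the S3 filesystems ship in `opt/` and are activated by copying them into `lib/`; whether an sbt-launched mini cluster picks up `FLINK_CONF_DIR` automatically depends on how the environment is created, so you may need to load the configuration explicitly:

```shell
# Placeholder path -- adjust to where your Flink 1.4.0 distribution lives.
FLINK_HOME=/opt/flink-1.4.0

# The S3 filesystem jar ships in opt/; copying it to lib/ puts it on the
# classpath of anything started from this distribution:
cp "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.4.0.jar" "$FLINK_HOME/lib/"

# Flink resolves flink-conf.yaml via the FLINK_CONF_DIR environment variable,
# which can also be exported for the sbt-launched process:
FLINK_CONF_DIR="$FLINK_HOME/conf" sbt run
```

If the mini cluster ignores the environment variable, an alternative is passing a `Configuration` object to the local environment directly in your test code.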
Right, backpressure only measures backpressure on the inside of the Flink job.
I.e., between Flink tasks.
Therefore, it’s up to you to monitor whether your Flink job is “keeping up”
with the source stream. If you’re using Kafka, there’s a metric that the
consumer library makes available. For exam
Hi,
Thanks Gordon, should have read the announcement :)
This might indeed be the case here, I will just use the workaround. At
least this is a known issue, almost got a heart attack today :D
Cheers,
Gyula
Tzu-Li (Gordon) Tai wrote (on Mon, Jan 8, 2018, 17:56):
> Hi Gyula,
>
> Is your
Hi Gyula,
Is your 1.3 savepoint from Flink 1.3.1 or 1.3.0?
In those versions, we had a critical bug that caused duplicate partition
assignments in corner cases, so the assignment logic was altered from 1.3.1
to 1.3.2 (and therefore also 1.4.0).
If you were indeed using 1.3.1 or 1.3.0, and you are
Migrating the jobs by setting the sources to parallelism = 1 and then scaling
back up after migration seems to be a good workaround, but I am wondering
whether something I did made this happen or whether this is a bug.
Gyula Fóra wrote (on Mon, Jan 8, 2018, 14:46):
> Hi,
>
> Is it possible that the Kaf
Hi,
I want to find the number of events that happened in the last 5 minutes and
expose that as queryable state. Is that possible? It would be of great help if
someone could provide some sample code for this.
Thanks,
Velu.
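In Flink this is typically built as a sliding count window (or a keyed process function) whose result is kept in keyed state and exposed through the queryable state API. That wiring is Flink-specific, but the underlying bookkeeping is just a timestamp buffer pruned on each query; here is a minimal plain-Java sketch of that core logic (class and method names are mine for illustration, not a Flink API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of an "events in the last N milliseconds" counter.
// In Flink this state would live in keyed state (e.g. a ListState of
// timestamps) and be made queryable; here it is plain Java for illustration.
public class SlidingCounter {
    private final long windowMillis;
    private final Deque<Long> timestamps = new ArrayDeque<>();

    public SlidingCounter(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Record one event at the given timestamp (milliseconds).
    public void record(long ts) {
        timestamps.addLast(ts);
    }

    // Count events whose timestamp falls in the window (now - W, now],
    // pruning expired entries as a side effect.
    public int count(long now) {
        while (!timestamps.isEmpty() && timestamps.peekFirst() <= now - windowMillis) {
            timestamps.removeFirst();
        }
        return timestamps.size();
    }

    public static void main(String[] args) {
        SlidingCounter c = new SlidingCounter(5 * 60 * 1000); // 5 minutes
        c.record(0);
        c.record(100_000);
        c.record(290_000);
        System.out.println(c.count(300_000)); // → 2 (the event at t=0 expired)
    }
}
```

On the Flink side, the result of such a window could be written into a state descriptor marked queryable (e.g. via `setQueryable(...)` on the descriptor) and then read with the queryable state client; see the Queryable State section of the docs for your version.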
Hi,
Is it possible that the Kafka partition assignment logic has changed
between Flink 1.3 and 1.4? I am trying to migrate some jobs using Kafka
0.8 sources and about half my jobs lost offset state for some partitions
(but not all partitions). Jobs with parallelism 1 don't seem to be
affected...
Hey all, I have an iterative algorithm implemented in Gelly. Since I
upgraded everything from Flink 1.1.2 to 1.3.1, the runtime has increased
and in some cases task managers are killed. The error message is
"akka.ask.timeout". I increased akka.ask.timeout, but the problem still exists.
Chee
Hi,
> 1. If `snapshotState` failed at the first checkpoint, does it mean there is
> no state and no transaction can be aborted by default?
This is a general problem and not only limited to the first checkpoint.
Whenever you open a transaction, there is no guaranteed way to store it in
persist
I think I got it
Glad you solved this tricky issue and thanks for sharing your solution :-)
Best, Fabian
2018-01-06 14:33 GMT+01:00 Vishal Santoshi:
> Yep, this though is suboptimal as you imagined. Two things
>
> * has a internally has a that is a ultra lite version of IN,
> only required
Yes, the logs are the only way to find out which port the reporter is
bound to.
We may be able to display this information in the web-UI, but it isn't
very high on my list and will probably require
modifications to the reporter interface.
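Until something like that exists in the web UI, one way to avoid hunting through the logs is to pin the reporter to a known port range. A flink-conf.yaml sketch for the JMX reporter (verify the key names against the metrics documentation for your version):

```yaml
metrics.reporters: jmx
metrics.reporter.jmx.class: org.apache.flink.metrics.jmx.JMXReporter
# Restrict the reporter to a fixed range instead of an ephemeral port:
metrics.reporter.jmx.port: 9020-9040
```

The actual port chosen from the range is still logged at startup, but at least the search space is known in advance.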
On 06.01.2018 04:24, Kien Truong wrote:
Hi,
We are u
You can't run the Python script directly; instead, it must be submitted
to a Flink cluster using the pyflink.sh script, as
described in the documentation, which will in turn call the script with
the appropriate parameters.
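For reference, a hypothetical invocation (the script path is a placeholder; check `bin/` in your distribution for the exact wrapper name and arguments for your Flink version):

```shell
# Submit the Python program through Flink's wrapper script instead of running
# `python my_job.py` directly; the wrapper sets up the classpath and the
# connection to the cluster before invoking the script.
./bin/pyflink.sh /path/to/my_job.py
```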
On 04.01.2018 11:08, Yassine MARZOUGUI wrote:
Hi all,
Any ideas on this
Hi Stefan,
Since TwoPhaseCommitSinkFunction is new to me, I would like to know more
details.
There are two more questions:
1. If `snapshotState` failed at the first checkpoint, does it mean there
is no state and no transaction can be aborted by default?
2. I saw FlinkKafkaProducer011 has a trans
Yes, the logs are the only way to find out which port the reporter is
bound to.
We may be able to display this information in the web-UI, but it isn't
very high on my list and will probably require
modifications to the reporter interface.
On 21.12.2017 09:41, Piotr Nowojski wrote:
I am not su