Hi,
Thanks a lot for the reply. I configured a restart strategy as suggested and
now the TM failure scenario is working as expected. Once a TM is killed,
another active TM automatically recovers the job.
Hi
I think there is a good way in FlinkKafkaProducerBase.java to deal with
this situation. There is a KeyedSerializationSchema the user has to implement.
KeyedSerializationSchema is used to serialize the data, so that the
SinkFunction just needs to understand the type after serialization.
In your case,
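For reference, a minimal sketch of what implementing it looks like. The interface below is a simplified stand-in (the real one ships with the flink-connector-kafka module), and the element type is hypothetical:

```java
import java.nio.charset.StandardCharsets;

// Simplified stand-in for Flink's KeyedSerializationSchema<T>
// (the real interface lives in the flink-connector-kafka module).
interface KeyedSerializationSchema<T> {
    byte[] serializeKey(T element);
    byte[] serializeValue(T element);
    String getTargetTopic(T element); // null means: use the default topic
}

// Hypothetical element type: a two-element String[] of {key, value}.
class MyKeyedSchema implements KeyedSerializationSchema<String[]> {
    @Override
    public byte[] serializeKey(String[] element) {
        return element[0].getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public byte[] serializeValue(String[] element) {
        return element[1].getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public String getTargetTopic(String[] element) {
        return null; // fall back to the producer's default topic
    }
}
```

The producer then only ever sees the serialized bytes, not your record type.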
Hi,
I updated Flink from 1.1.3 to 1.2,
but it fails.
This is the JobManager error log:
Failed toString() invocation on an object of type
[org.apache.hadoop.yarn.api.records.impl.pb.LocalResourcePBImpl]
java.lang.NoSuchMethodError:
Thanks, Stephan! I will try it!
2017-02-21 21:42 GMT+08:00 Stephan Ewen :
> Hi!
>
> Flink 1.1.4 and Flink 1.2 fixed a bunch of issues with HA, can you try
> those versions?
>
> If these also have issues, could you share the logs of the JobManager?
>
> Thanks!
>
> On Tue, Feb
What's the best way to retrieve both the values in Tuple2 inside a custom
sink given that the type is not known inside the sink function?
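If the sink is declared over Tuple2, the two values are simply the public fields f0 and f1, so the sink never needs the concrete types. A minimal sketch, using a stand-in for org.apache.flink.api.java.tuple.Tuple2 and a hypothetical sink class:

```java
// Stand-in for org.apache.flink.api.java.tuple.Tuple2: the two values
// are exposed as the public fields f0 and f1.
class Tuple2<A, B> {
    public A f0;
    public B f1;
    Tuple2(A f0, B f1) { this.f0 = f0; this.f1 = f1; }
}

// Hypothetical sink body: invoke() reads both values via f0/f1 without
// knowing the concrete element types.
class MySink<A, B> {
    public String invoke(Tuple2<A, B> value) {
        return value.f0 + " -> " + value.f1;
    }
}
```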
Thanks Stefan and Stephan for your comments. I changed the type of the field
and now the job seems to be running again.
And thanks Robert for filing the Jira!
Cheers,
Steffen
Am 21. Februar 2017 18:36:41 MEZ schrieb Robert Metzger :
>I've filed a JIRA for the problem:
Stephan:
The links were in the other email from Vinay.
> On Feb 21, 2017, at 10:46 AM, Stephan Ewen wrote:
>
> Hi!
>
> I cannot find the screenshots you attached.
> The Apache Mailing lists sometimes don't support attachments, can you link to
> the screenshots some way
I've filed a JIRA for this issue:
https://issues.apache.org/jira/browse/FLINK-5874
On Wed, Jul 20, 2016 at 4:32 PM, Stephan Ewen wrote:
> I think we can simply add this behavior when we use the TypeComparator in
> the keyBy() function. It can implement the hashCode() as a
I've filed a JIRA for the problem:
https://issues.apache.org/jira/browse/FLINK-5874
On Tue, Feb 21, 2017 at 4:09 PM, Stephan Ewen wrote:
> @Steffen
>
> Yes, you currently cannot use arrays as keys. There is a check missing
> that gives you a proper error message for that.
>
>
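Until that check exists, one workaround is to key by a wrapper object that gives the array value-based equals()/hashCode(). A sketch (ArrayKey is a hypothetical name, not a Flink class):

```java
import java.util.Arrays;

// Hypothetical wrapper that makes a double[] safe to use as a key by
// giving it value-based equals()/hashCode(); plain arrays only have
// identity semantics, which breaks keyBy().
final class ArrayKey {
    private final double[] values;

    ArrayKey(double[] values) {
        this.values = values.clone(); // defensive copy: keys must stay immutable
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof ArrayKey && Arrays.equals(values, ((ArrayKey) o).values);
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(values);
    }
}
```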
Thank you for your reply.
As I understand it, Map/Filter functions operate with at-least-once semantics when
a failure occurs, so I need to write the code so that records are saved (overwritten)
in Elasticsearch under the same ID even if duplicate data arrives. Is that correct?
(sorry, I cannot understand
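Concretely, what I have in mind is deriving a deterministic document ID from the record itself, so a replayed record overwrites the earlier write instead of creating a duplicate (the event type below is hypothetical):

```java
// Hypothetical event type: the document ID is derived deterministically
// from the record's own fields, so an at-least-once replay of the same
// event targets the same Elasticsearch document and simply overwrites it.
class Event {
    final String userId;
    final long timestamp;

    Event(String userId, long timestamp) {
        this.userId = userId;
        this.timestamp = timestamp;
    }

    String documentId() {
        return userId + "-" + timestamp; // same event always maps to the same ID
    }
}
```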
Hi Shai,
I checked online that Azure DS5_v2 has SSDs for storage, so why don't you try
the FLASH_SSD_OPTIMIZED option?
In my case as well the stream was getting stuck for a few minutes; my
checkpoint duration is 6 seconds and the minimum pause between checkpoints is
5 seconds.
Hey Shai!
Thanks for reporting this.
It's hard to tell what causes this from your email, but could you
check the checkpoint interface
(https://ci.apache.org/projects/flink/flink-docs-release-1.3/monitoring/checkpoint_monitoring.html)
and report how much progress the checkpoints make before
On Tue, Feb 21, 2017 at 2:35 PM, Vadim Vararu wrote:
> Basically, I have a big dictionary of reference data that has to be
> accessible from all the nodes (in order to do some joins of log line with
> reference line).
If the dictionary is small you can make it part of
Hey! Did you configure a restart strategy?
https://ci.apache.org/projects/flink/flink-docs-release-1.3/dev/restart_strategies.html
Keep in mind that in standalone mode a TM process that has exited
won't be automatically restarted, though.
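For example, a fixed-delay strategy can be set in flink-conf.yaml (the attempt count and delay below are just illustrative values):

```yaml
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```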
On Tue, Feb 21, 2017 at 10:00 AM, F.Amara
Hi Vinay.
I couldn't understand from the thread which configuration solved your problem.
I'm using the default predefined option. Perhaps it's not the best
configuration for my setting (I'm using Azure DS5_v2 machines), I honestly
haven't given much thought to that particular detail, but I
Hi,
if your key is a double[], even if the field is a final double[], it is mutable
because the array entries can be mutated; maybe that is what happened? You
can check whether the following two points are in sync, hash-wise:
KeyGroupStreamPartitioner::selectChannels and
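To see why a final field is not enough, note that the array's content hash changes under mutation even though the reference never does:

```java
import java.util.Arrays;

public class MutableKeyHash {
    public static void main(String[] args) {
        // "final" only pins the reference; the entries can still change,
        // and with them the hash used to pick the key group / channel.
        final double[] key = {1.0, 2.0};
        int before = Arrays.hashCode(key);

        key[1] = 3.0; // mutate an entry after the key has been hashed once
        int after = Arrays.hashCode(key);

        System.out.println(before != after); // prints true
    }
}
```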
Hi Shai,
I was facing a similar issue; however, now the stream is no longer getting stuck.
You can refer to this thread for the configurations I have done:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Re-Checkpointing-with-RocksDB-as-statebackend-td11752.html
What is the
Thanks for these pointers, Stefan.
I've started a fresh job and didn't migrate any state from previous
execution. Moreover, all the fields of all the events I'm using are
declared final.
I've set a breakpoint to figure out what event is causing the problem,
and it turns out that Flink
Hi.
I'm running a Flink 1.2 job with a 10 second checkpoint interval. After some
running time (minutes to hours) Flink fails to save checkpoints and stops
processing records (I'm not sure if the checkpointing failure is the cause of
the problem or just a symptom).
After several checkpoints that
Hi!
Flink 1.1.4 and Flink 1.2 fixed a bunch of issues with HA, can you try
those versions?
If these also have issues, could you share the logs of the JobManager?
Thanks!
On Tue, Feb 21, 2017 at 11:41 AM, lining jing wrote:
> flink version: 1.1.3
>
> kill jobmanager,
Hi all,
I would like to do something similar to Spark's broadcast mechanism.
Basically, I have a big dictionary of reference data that has to be
accessible from all the nodes (in order to join log lines with
reference lines).
I did not find yet a way to do it.
Any ideas?
Hi Rami,
could you maybe provide your code? You could also send it to me directly if
you don't want to share with the community.
It might be that there is something in the way the pipeline is setup that
causes the (generated) operator UIDs to not be deterministic.
Best,
Aljoscha
On Sat, 7 Jan
Hi Seth,
sorry for taking so long to get back to you on this. I think the watermark
suggestion might have been misleading on my part; I don't even know anymore
what I was thinking back then.
Were you able to confirm that the results were in fact correct for the runs
with the different parallelism? I know
I think you basically need something like this:
DataStream input = ...;
DataStream withErrors = input.filter(new MyErrorFilter());
DataStream withoutErrors = input.filter(new MyWithoutErrorFilter());
withErrors.addSink(...);
withoutErrors.addSink(...);
Does that help?
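The two filters would then be ordinary FilterFunctions over your event type; a sketch, using a simplified stand-in for Flink's FilterFunction interface and a hypothetical Event type with an error flag:

```java
// Simplified stand-in for Flink's FilterFunction<T>.
interface FilterFunction<T> {
    boolean filter(T value);
}

// Hypothetical event type carrying an error flag.
class Event {
    final boolean isError;
    Event(boolean isError) { this.isError = isError; }
}

// Keeps only error records.
class MyErrorFilter implements FilterFunction<Event> {
    @Override
    public boolean filter(Event value) {
        return value.isError;
    }
}

// Keeps only non-error records.
class MyWithoutErrorFilter implements FilterFunction<Event> {
    @Override
    public boolean filter(Event value) {
        return !value.isError;
    }
}
```

Each filter sees the full stream, so the two branches are independent and each can go to its own sink.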
On Mon, 20 Feb 2017 at 13:44
flink version: 1.1.3
When I kill the JobManager, the job fails. The HA config did not work.
Hi,
thanks for the reply.
Isn't there another way to do that?
Using REST you can send json like this :
curl -XPOST 'localhost:9200/customer/external?pretty' -H
'Content-Type: application/json' -d'
{
"name": "Jane Doe"
}
'
In my case I have json like this:
{
"filters" : {
Hi,
I'm working with Apache Flink 1.1.2 and testing High Availability mode.
In the case of Task Manager failures, a standby TM is supposed to recover the
work of the failed TM. In my case, I have 4 TMs running in parallel, and
when a TM is killed the state goes to Cancelling and then to Failed
Hi and thank you for your response,
could you give me a simple example? How can I put the variable into
state and then access that state in the next apply function?
I am new to flink.
Thank you.