Hi Stephan,
I faced a similar issue; the way I implemented this (though a workaround) is
by making the input to the fold function, i.e. the initial value of the fold,
symmetric to what goes into the window function.
I made the initial value of the fold function a tuple with all non
required/available index valu
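The idea behind the workaround can be illustrated with plain Scala collections (this is only a sketch of the idea, not the Flink WindowedStream fold API; the Event class and accumulator shape are made up for illustration): seed the fold with a tuple whose shape matches what the window function expects, using placeholder values for the fields that are not yet available.

```scala
// Plain-Scala sketch of the workaround: seed the fold with a tuple that is
// "symmetric" to the window function's input, using placeholders for fields
// that are unknown before the first element arrives.
case class Event(key: String, value: Long, timestamp: Long)

val events = Seq(Event("a", 3L, 10L), Event("a", 4L, 12L))

// Accumulator: (key, runningSum, maxTimestamp); "" / 0L / Long.MinValue are
// the placeholder values for the not-yet-available fields.
val folded = events.foldLeft(("", 0L, Long.MinValue)) { (acc, e) =>
  (e.key, acc._2 + e.value, math.max(acc._3, e.timestamp))
}
// folded == ("a", 7L, 12L)
```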
Hi Manish,
I appreciate the way you presented Apache Flink. While it's like an 'Intro' for
beginners, I would really encourage you to highlight/present some of the
groundbreaking features that Flink offers for stream processing, like -
-> Explicit handling of time with its notion of 'Event time
Hi All,
I'm running my Flink application on YARN. It's frequently getting suspended,
though gracefully. Below is a snippet of the error; I'm attaching the full
JobManager log to help debug. Please help me identify the cause and resolve
the issue.
Thank you
Regards,
Anchit
Error snippet:
2016-11-09 0
Hi Maximilian,
Thanks for your response. I'm running the application on a YARN cluster in
'yarn-cluster' mode, i.e. using the 'flink run -m yarn-cluster ..' command.
Is there anything more that I need to configure apart from setting the
'yarn.application-attempts: 10' property inside conf/flink-c
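For reference, that property goes into conf/flink-conf.yaml; a minimal sketch is below (the surrounding HA-related keys and their exact names vary between Flink versions, so check the docs for your version):

```yaml
# Number of times YARN may restart the ApplicationMaster/JobManager container.
yarn.application-attempts: 10
```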
Hi All,
I started my Flink application on YARN using flink run -m yarn-cluster;
after running smoothly for 20 hrs it failed. Ideally the application should
have recovered on losing the Job Manager (which runs in the same container as
the application master), pertaining to the fault-tolerant nature of
Hi Jamie,
Thanks for sharing your thoughts. I'll try and integrate with Graphite to
see if this gets resolved.
Regards,
Anchit
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Metrics-InfluxDB-Grafana-Help-with-query-influxDB-query-for
I've set the metric reporting frequency to InfluxDB as 10s. In the
screenshot, I'm using a Grafana query interval of 1s. I've tried 10s and more
too; the graph shape changes a bit, but the incorrect negative values are
still plotted (it makes no difference).
Something to add: If the subtasks are less tha
Yes, thanks Stephan.
Regards,
Anchit
--
View this message in context:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-on-YARN-Fault-Tolerance-use-case-supported-or-not-tp9776p9817.html
Sent from the Apache Flink User Mailing List archive at Nabble.com.
Hi Jamie,
Thank you so much for your response.
The below query:
SELECT derivative(sum("count"), 1s) FROM "numRecordsIn" WHERE "task_name" =
'Sink: Unnamed' AND $timeFilter GROUP BY time(1s)
behaves the same as with the use of the templating variable in the 'All'
case, i.e. it shows a plot of junk
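One thing that may be worth trying for the negative spikes (an assumption on my side, not something confirmed in this thread): derivative() goes negative whenever a counter value decreases, e.g. after a task restart, while InfluxQL's non_negative_derivative() drops those samples. A variant of the query above, grouped at the 10s reporting interval so buckets are never partially filled, would be:

```sql
SELECT non_negative_derivative(sum("count"), 10s) FROM "numRecordsIn"
WHERE "task_name" = 'Sink: Unnamed' AND $timeFilter
GROUP BY time(10s)
```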
Hi All,
I tried testing the fault tolerance of my running Flink application in a
different way (not sure if it is an appropriate way). I ran the Flink
application on YARN and, after completing a few checkpoints, killed the YARN
application using:
yarn application -kill application_1476277440022_
Furthe
Hi All,
I'm trying to plot the Flink application metrics using Grafana backed by
InfluxDB. I need to plot/monitor the 'numRecordsIn' & 'numRecordsOut' for
each operator/operation. I'm finding it hard to generate the InfluxDB query
in Grafana which can help me make this plot.
I am able to plot the
Hi Aljoscha,
I am using a custom trigger with the GlobalWindows window assigner. Do I still
need to override the clear() method and delete the ProcessingTimeTimer using
triggerContext.deleteProcessingTimeTimer(prevTime)?
Regards,
Anchit
--
View this message in context:
http://apache-flink-user-maili
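On the clear() question above, here is a plain-Scala sketch of the contract, using stub types rather than the real org.apache.flink.streaming.api.windowing.triggers API (so treat the names as illustrative assumptions): a trigger that registers processing-time timers keeps track of the last timer it registered so that clear() can delete it when the window is purged; otherwise the timer can still fire afterwards.

```scala
import scala.collection.mutable

// Stub standing in for Flink's Trigger.TriggerContext (illustration only).
class StubTriggerContext {
  val timers = mutable.Set.empty[Long]
  def registerProcessingTimeTimer(t: Long): Unit = timers += t
  def deleteProcessingTimeTimer(t: Long): Unit = timers -= t
}

// A trigger-like class that remembers the last timer it registered so that
// clear() can remove it when the window is discarded.
class TimeoutTrigger {
  private var prevTime: Option[Long] = None

  def onElement(now: Long, interval: Long, ctx: StubTriggerContext): Unit = {
    val t = now + interval
    ctx.registerProcessingTimeTimer(t)
    prevTime = Some(t)
  }

  // Called when the window is purged: delete the still-pending timer.
  def clear(ctx: StubTriggerContext): Unit = {
    prevTime.foreach(ctx.deleteProcessingTimeTimer)
    prevTime = None
  }
}

val ctx = new StubTriggerContext
val trigger = new TimeoutTrigger
trigger.onElement(100L, 50L, ctx)   // registers a timer at t = 150
trigger.clear(ctx)                  // removes it again
```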
Hi Bart,
Thank you so much for sharing the approach. Looks like this solved my
problem. Here's what I have as an implementation for my use-case:
package org.apache.flink.quickstart
import org.apache.flink.api.common.state.{ ReducingState,
ReducingStateDescriptor, ValueState, ValueStateDescriptor
Hi All,
I have a use case wherein I'm supposed to work with Session Windows to
maintain some values for some sessionIDs/keys.
The use case is as follows:
I need to maintain a session window for the incoming data and discard the
window after some set gap/period of inactivity, but what I want is t
Hi Janardhan/Stephan,
I just figured out what the issue is (talking about Flink KafkaConnector08;
I don't know about Flink KafkaConnector09).
The reason why bin/kafka-consumer-groups.sh --zookeeper
--describe --group is not showing any result
is because of the absence of the
/consumers//owners/
Hi Robert,
Thanks for your response. I just figured out what the issue is.
The reason why bin/kafka-consumer-groups.sh --zookeeper
--describe --group is not showing any result
is because of the absence of the
/consumers//owners/ in ZooKeeper.
The Flink application is creating and upda
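For context, the standard Kafka 0.8 ZooKeeper layout that kafka-consumer-groups.sh --zookeeper inspects looks roughly like this (to the best of my knowledge; placeholder names in angle brackets). Flink's KafkaConnector08 commits offsets directly under offsets/ without registering in ids/ or owners/, which is why the describe command reports no group:

```
/consumers/<group>/
  ids/                          <- consumer registrations (not written by Flink)
  owners/<topic>/<partition>    <- partition ownership    (not written by Flink)
  offsets/<topic>/<partition>   <- committed offsets      (written by Flink)
```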
Hi All,
I'm using the Flink Kafka connector08. I need to check/monitor the offsets of
my Flink application's Kafka consumer.
When running this:
bin/kafka-consumer-groups.sh --zookeeper --describe
--group
I get the message: No topic available for consumer group provided. Why is
the consumer no
Hi All,
I'm building a recommendation-system streaming application for which I need
to broadcast a very large model object (used in iterative scoring) among
all the task managers performing the operation in parallel for the operator.
I'm doing this operation in map1 of a CoMapFunction. Please sugge
sues.apache.org/jira/browse/FLINK-3239
>
> 2016-09-29 9:12 GMT+02:00 Anchit Jatana :
>
>> Hi All,
>>
>> I'm trying to link my flink application with HBase for simple read/write
>> operations. I need to implement Flink to HBase the connectivity through
>> Ke
ond input are the lookup location changes. You
> can then use this input to update the location.
>
> Search for CoMap/CoFlatMap here:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/datastream_api.
> html#datastream-transformations
>
> Best,
>
> Ufuk
>
Hi All,
I'm trying to link my Flink application with HBase for simple read/write
operations. I need to implement the Flink-to-HBase connectivity through
Kerberos using the keytab.
Can somebody share (or link me to some resource) a sample
code/implementation on how to achieve Flink-to-HBase connect
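Not a full answer, but as a starting point: in more recent Flink versions the keytab-based Kerberos setup is driven by flink-conf.yaml keys like the ones below (key names follow the Flink security docs for 1.2+; older versions differ, and the paths/principal here are placeholders, so verify against your version). HBase itself additionally needs its usual Kerberos settings in hbase-site.xml on the classpath.

```yaml
# Keytab-based Kerberos login (Flink 1.2+ style keys; verify for your version)
security.kerberos.login.keytab: /path/to/flink.keytab
security.kerberos.login.principal: flink-user@EXAMPLE.COM
```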
Hi All,
I have a use case wherein I need to create multiple source streams from
multiple files and monitor the files for any changes using
"FileProcessingMode.PROCESS_CONTINUOUSLY".
The intention is to achieve something like this (have a monitored stream for
each of the multiple files), something l
Hi All,
*Brief:* I have a use case where I need to interact with a running Flink
application.
*Detail:*
My Flink application has a *Kafka source*, *an operator processing on the
content received* from the Kafka stream (this operator is using a lookup
from an external source file to accomplish the