Hello,
I have a custom app which restarts when an exception occurs.
I want to cancel all registered Flink timers in the initializeState method.
Based on the documentation, it seems all timer state is saved in the
state, so if the app restarts the timers are still active.
Is there
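For reference, Flink's public API has no bulk "cancel all timers" call; deleting a timer requires its exact timestamp, and the keyed TimerService is only reachable from processElement/onTimer, not from initializeState. A common workaround (a sketch only, with illustrative class, type, and state names, not the poster's code) is to record each registered timestamp in keyed state and delete the old timer on the next element:

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Tracks the timer registered per key so it can be deleted later,
// e.g. a timer restored from state after a restart.
public class TimerTrackingFunction
        extends KeyedProcessFunction<String, String, String> {

    private transient ValueState<Long> timerTs;

    @Override
    public void open(Configuration parameters) {
        timerTs = getRuntimeContext().getState(
                new ValueStateDescriptor<>("timer-ts", Long.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        Long previous = timerTs.value();
        if (previous != null) {
            // delete the previously registered (possibly restored) timer
            ctx.timerService().deleteProcessingTimeTimer(previous);
        }
        long ts = ctx.timerService().currentProcessingTime() + 60_000L;
        ctx.timerService().registerProcessingTimeTimer(ts);
        timerTs.update(ts);
        out.collect(value);
    }
}
```

The tracked timestamps are checkpointed together with the timers themselves, so they are available after a restore.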
So what you want is the count of every key?
Why didn't you use a count aggregation?
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi Ken,
Thanks for your inputs again.
I will wait for the Flink team to get back to me with a suggestion for
implementing 100K unique counters.
For the time being, I will make the number of counter metrics a configurable
parameter in my application, so the user will know what they are doing.
Hi Gaurav,
I’ve used a few hundred counters before without problems. My concern about >
100K unique counters is that you wind up generating load (and maybe memory
issues) for the JobManager.
E.g., with Hadoop’s metric system, trying to go much beyond 1000 counters could
cause significant
Hi Andrey,
Yes. Setting setFailOnCheckpointingErrors(false) solved the problem.
But in the meantime I am getting this error:
2019-01-16 21:07:26,979 ERROR
org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerDetailsHandler
- Implementation error: Unhandled exception.
Hi Andrey,
Yes. CustomBucketingSink is a custom class copied from BucketingSink itself.
A few changes were added:
1. Added timestamps in part files
2. A few logging statements
Note: it looks like I copied it from version 1.4 (I don't know if that could
be the reason for the failure)
Did it override
Does that mean that the full set of fs.s3a.<…> configs in core-site.xml will be
picked up from flink-conf.yaml by flink? Or only certain configs involved with
authentication?
From: Till Rohrmann [mailto:trohrm...@apache.org]
Sent: Wednesday, January 16, 2019 3:43 AM
To: Vinay Patil
Cc: Kostas
Hi Guys,
We ran some load tests and got the exception below. I saw that the
JobManager was restarted. If I understood correctly, it will get a new job id
and the state will be lost, is that correct? How is state handled when HA is
set up as described here
Hi,
I have a Flink job where I receive a stream of AggregationKeys, stored in
BroadcastState which I join in a Tuple2 with a stream of RiskMeasureMessages,
which I then wish to aggregate in a Window.
The RiskMeasureMessages are bounded by CalcStart and CalcEnd messages which
come on separate
Hi Sohi,
Something was originally interrupted in DFSOutputStream$DataStreamer.run.
It was thrown in the timer callback which processed files in
CustomBucketingSink.
The task reported the failure to the JM and the JM then triggered job cancellation.
I do not see this CustomBucketingSink in Flink code. Is it
Hi Alexandru,
you can call `StreamExecutionEnvironment#createLocalEnvironment`, to which
you can pass a Flink configuration object.
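A minimal sketch of this (the JMX keys are the ones from the question; the parallelism value is arbitrary):

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

Configuration conf = new Configuration();
// the same keys you would put in flink-conf.yaml
conf.setString("metrics.reporter.jmx.class",
        "org.apache.flink.metrics.jmx.JMXReporter");
conf.setString("metrics.reporter.jmx.port", "8789");

StreamExecutionEnvironment env =
        StreamExecutionEnvironment.createLocalEnvironment(1, conf);
```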
Cheers,
Till
On Wed, Jan 16, 2019 at 1:05 PM Alexandru Gutan
wrote:
> Hi everyone!
>
> Is there a way to set flink-conf.yaml params but when running from the
> IDE?
Hi Ramya,
I think the problem is that you access the serializationSchema from the
closure of the ElasticsearchSinkFunction. Try creating an
ElasticsearchSinkFunction that receives the serializationSchema in its constructor.
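A sketch of that constructor-injection pattern (class name, element type, and index name are illustrative, and the schema must itself be Serializable):

```java
import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.elasticsearch.client.Requests;
import org.elasticsearch.common.xcontent.XContentType;

public class MyEsSinkFunction implements ElasticsearchSinkFunction<String> {

    // held as a field, so it is serialized with the sink function itself
    // instead of being captured from an enclosing class's closure
    private final SerializationSchema<String> serializationSchema;

    public MyEsSinkFunction(SerializationSchema<String> serializationSchema) {
        this.serializationSchema = serializationSchema;
    }

    @Override
    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
        indexer.add(Requests.indexRequest()
                .index("my-index") // illustrative index name
                .source(serializationSchema.serialize(element), XContentType.JSON));
    }
}
```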
If this is not the problem could you share the full stack of the error?
Best,
Dawid
On
Hi Andrey ,
Please find the logs. Attaching Dropbox links as the logs are large.
Job Manager: https://www.dropbox.com/s/q0rd60coydupl6w/full.log.gz?dl=0
Application :
https://www.dropbox.com/s/cn3yrd273wd99f2/jm-sohan.log.gz?dl=0
Thanks
Sohi
Hi everyone!
Is there a way to set flink-conf.yaml params but when running from the IDE?
What I'm trying to do is to setup JMX metrics:
metrics.reporter.jmx.class: org.apache.flink.metrics.jmx.JMXReporter
metrics.reporter.jmx.port: 8789
Thanks!
Hi Sohi,
This still looks like Task Manager logs, could you post Job Master logs,
please?
Best,
Andrey
On Tue, Jan 15, 2019 at 7:49 AM sohimankotia wrote:
> Hi ,
>
> Any Update/help please ?
>
>
>
>
I haven't configured this myself but I would guess that you need to set the
parameters defined here under S3A Authentication methods [1]. If the
environment variables don't work, then I would try to set the
authentication properties.
[1]
Hi Till,
Can you please let us know the configuration that we need to set for the
profile-based credentials provider in flink-conf.yaml?
Exporting the AWS_PROFILE property on EMR did not work.
Regards,
Vinay Patil
On Wed, Jan 16, 2019 at 3:05 PM Till Rohrmann wrote:
> The old BucketingSink was using
The old BucketingSink was using Hadoop's S3 filesystem directly whereas the
new StreamingFileSink uses Flink's own FileSystem, which needs to be
configured via the flink-conf.yaml.
Cheers,
Till
On Wed, Jan 16, 2019 at 10:31 AM Vinay Patil
wrote:
> Hi Till,
>
> We are not providing
Hi Sohi,
Could it be that you configured your job tasks to fail if a checkpoint fails
(streamExecutionEnvironment.getCheckpointConfig().setFailOnCheckpointingErrors(true))?
Could you send the complete job master log?
If checkpoint 470 has been subsumed by 471, it could be that its directory
is
The prometheus reporter ignores scope formats. In fact all reporters
that work with tags (i.e., key-value-pairs) ignore them, the idea being
that you would search specific metrics based on their tags.
I'm not aware of any intermediate workarounds.
There is a JIRA for this issue:
Hi Till,
We are not providing `fs.s3a.access.key: access_key`, `fs.s3a.secret.key:
secret_key` in flink-conf.yaml as we are using a profile-based credentials
provider. The older BucketingSink code is able to get the credentials and
write to S3. We are facing this issue only with StreamingFileSink.
Hi Sohimankotia,
you can control Flink's failure behaviour in case of a checkpoint failure
via `ExecutionConfig#setFailTaskOnCheckpointError(boolean)`. By default it is
set to true, which means that a Flink task will fail if a
checkpoint error occurs. If you set it to false, then the job
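For the CheckpointConfig variant mentioned earlier in the thread, that would look like this (a sketch):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env =
        StreamExecutionEnvironment.getExecutionEnvironment();
// keep the job running if a checkpoint fails
// (the default, true, fails the task instead)
env.getCheckpointConfig().setFailOnCheckpointingErrors(false);
```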
Actually Till is right.
Sorry, my fault, I did not read your second email where Vinay mentions the
core-site.xml.
Cheers,
Kostas
On Wed, Jan 16, 2019 at 10:25 AM Till Rohrmann wrote:
> Hi Vinay,
>
> Flink's file systems are self contained and won't respect the
> core-site.xml if I'm not
Hi Vinay,
Flink's file systems are self contained and won't respect the core-site.xml
if I'm not mistaken. Instead you have to set the credentials in the flink
configuration flink-conf.yaml via `fs.s3a.access.key: access_key`,
`fs.s3a.secret.key: secret_key` and so on [1]. Have you tried this
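The corresponding flink-conf.yaml entries would look like this (a sketch; the values are placeholders):

```
fs.s3a.access.key: access_key
fs.s3a.secret.key: secret_key
```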
Hi,
Need help with the scenario below.
I have a CustomInputFormat which loads records using a bean:
public class CustomInputFormat extends GenericInputFormat {
private Iterator> recordsIterator;
@Override
public void open(GenericInputSplit split) throws IOException {
Hi Taher,
So you are using the same configuration files and everything and the only
thing you change is the "s3://" to "s3a://" and the sink cannot find the
credentials?
Could you please provide the logs of the Task Managers?
Cheers,
Kostas
On Wed, Jan 16, 2019 at 9:13 AM Dawid Wysakowicz
Forgot to cc ;)
On 16/01/2019 08:51, Vinay Patil wrote:
> Hi,
>
> Can someone please help on this issue. We have even tried to set
> fs.s3a.impl in core-site.xml, but it's still not working.
>
> Regards,
> Vinay Patil
>
>
> On Fri, Jan 11, 2019 at 5:03 PM Taher Koitawala [via Apache Flink User
>
Hi,
I cc Kostas who should be able to help you.
Best,
Dawid
On 16/01/2019 08:51, Vinay Patil wrote:
> Hi,
>
> Can someone please help on this issue. We have even tried to set
> fs.s3a.impl in core-site.xml, but it's still not working.
>
> Regards,
> Vinay Patil
>
>
> On Fri, Jan 11, 2019 at 5:03 PM