Hi,
You probably need to set core-site.xml and set the Hadoop conf path in
flink-conf.yaml
core-site.xml:

<property>
  <name>fs.s3.impl</name>
  <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
</property>
<property>
  <name>fs.s3.buffer.dir</name>
  <value>/tmp</value>
</property>
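The matching flink-conf.yaml piece would point Flink at the directory holding that core-site.xml; the path below is a placeholder, not something from this thread:

```yaml
# Directory containing the Hadoop configuration (core-site.xml);
# /etc/hadoop/conf is an assumed example path.
fs.hdfs.hadoopconf: /etc/hadoop/conf
```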
I ran into a similar issue when I tried to upgrade to Flink 1.4.2.
On Thu, Mar 15, 2018 at 9:39 AM Aljoscha
ategy at your job and if
> not it will set the fixed delay restart strategy. This will effectively
> overwrite the default restart strategy which you define in the
> flink-conf.yaml file.
>
> Cheers,
> Till
>
> On Thu, Sep 22, 2016 at 10:01 PM, Deepak Jha wrote:
>
restart-strategy.failure-rate.max-failures-per-interval: 300
It works when I set it explicitly in the topology using
*env.setRestartStrategy*.
PFA a snapshot of the JobManager log.
Thanks,
Deepak Jha
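For reference, a full failure-rate restart strategy in flink-conf.yaml uses the key shown above plus an interval and delay; the interval and delay values below are illustrative examples, not taken from this thread:

```yaml
# Failure-rate restart strategy; interval and delay values are examples only.
restart-strategy: failure-rate
restart-strategy.failure-rate.max-failures-per-interval: 300
restart-strategy.failure-rate.failure-rate-interval: 5 min
restart-strategy.failure-rate.delay: 10 s
```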
lps you to work around the problem for the moment until we've
> added the automatic shut down and restart.
>
> Cheers,
> Till
>
> On Mon, Sep 12, 2016 at 5:55 AM, Deepak Jha wrote:
>
> > Hi Till,
> > One more thing I noticed after looking into followi
me know if my understanding is wrong.
On Fri, Sep 9, 2016 at 8:01 AM, Deepak Jha wrote:
> Hi Till,
> I'm getting following message in Jobmanager log
>
> 2016-09-09 07:46:55,093 PDT [WARN] ip-10-8-11-249
> [flink-akka.actor.default-dispatcher-985] akka.remote.RemoteWa
nected to anymore? The logs should at least contain a
> hint why the TaskManager lost the connection initially.
>
> Cheers,
> Till
>
> On Thu, Sep 8, 2016 at 7:08 PM, Deepak Jha wrote:
>
> > Hi,
> > I've set up Flink HA on AWS (3 Taskmanagers and 2 Jobmanagers, each a
f the Taskmanager. Is there any
setting that I need to do?
--
Thanks,
Deepak Jha
also experimented with
using the aws-java-sdk in the fat jar as well, but it did not work. I looked into
aws-java-sdk-1.7.4.jar and saw that com/amazonaws/services/dynamodbv2
exists.
Please let me know what I am doing wrong. Any help will be appreciated.
--
Thanks,
Deepak Jha
of my app, or to build a big fat jar? How do the devs here do
> it?
>
> Thanks.
>
--
Thanks,
Deepak Jha
> On Fri, Apr 8, 2016 at 11:46 PM, Deepak Jha wrote:
>
> > Hi,
> > I have a use case where I need to get the unique ID of an operator inside a
> > stream. DataStream's getId() returns the ID of the stream, but I have the
> > operator partitioned (say, partitionByHash)
Hi,
I have a use case where I need to get the unique ID of an operator inside a
stream. DataStream's getId() returns the ID of the stream, but I have the
operator partitioned (say, partitionByHash) inside the DataStream. So I
would like to get a unique ID for each operator working in parallel. Is there
a way in
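For the parallel-instance question above, one common approach (a sketch, not necessarily what was recommended in the replies missing from this archive) is to read the subtask index from the runtime context of a rich function (getRuntimeContext.getIndexOfThisSubtask inside a RichMapFunction) and combine it with an operator name of your choosing. The helper below shows only the ID construction; the "my-operator" name is an assumption:

```scala
// Builds a per-parallel-instance ID from an operator name and the subtask
// index. In Flink, the index would come from
// getRuntimeContext.getIndexOfThisSubtask inside a Rich*Function's open().
object OperatorIds {
  def uniqueOperatorId(operatorName: String, subtaskIndex: Int): String =
    s"$operatorName-$subtaskIndex"
}
```

Inside a RichMapFunction's open(), you could then call uniqueOperatorId("my-operator", getRuntimeContext.getIndexOfThisSubtask) so each parallel instance gets a distinct, stable ID.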
// return both stages
> }
>
> val (stage3, stage4) = run(src)
> stage3.addSink(Write_To_Kafka_Topic_Y)
> stage4.addSink(Write_To_Kafka_Topic_X)
>
>
> On Wed, 30 Mar 2016 at 20:19 Deepak Jha wrote:
>
> > Hi,
> > I'm building a pipeline using Flink usin
dSink(Write_To_Kafka_Topic_X)
Ideally I would prefer not to call the addSink method inside run (as mentioned
in the bold lines above).
--
Thanks,
Deepak Jha
> https://issues.apache.org/jira/browse/FLINK-2821
>
> Best,
> Max
>
> On Mon, Mar 14, 2016 at 4:49 PM, Deepak Jha wrote:
> > Hi Maximilian,
> > Thanks for your response. I will wait for the update.
> >
> > On Monday, March 14, 2016, Maximilian Michels wrote
rSystem network interface is bound to.
> >
>
> It looks like we have to expose this configuration to users who have a
> special network setup.
>
> Best,
> Max
>
> On Mon, Mar 14, 2016 at 5:42 AM, Deepak Jha wrote:
>
> > Hi Stephan & Ufuk,
> > Thanks for
y solve the issue, but isn't it possible to
> > handle this outside of Flink? I've found this stack overflow question,
> > which should be related:
> >
> >
> http://stackoverflow.com/questions/26539727/giving-a-docker-container-a-routable-ip-address
> >
> > What's your opinion?
> >
>
--
Thanks,
Deepak Jha
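For a Docker/NAT-style setup like the one discussed above, the usual knobs are the address options in flink-conf.yaml. The keys below are real Flink options, but the values are placeholders, and whether they resolve this particular setup is an assumption:

```yaml
# Address the TaskManagers use to reach the JobManager (placeholder value).
jobmanager.rpc.address: 10.8.11.249
# Hostname/IP this TaskManager binds to and advertises (placeholder value).
taskmanager.hostname: 10.8.11.250
```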
n ports?
> Flink will try to open some ports and needs the OS or container to permit
> that.
>
> Greetings,
> Stephan
>
>
> On Thu, Mar 10, 2016 at 6:27 PM, Deepak Jha wrote:
>
> > Hi Stephan,
> > I tried 0.10.2 as well still running into the same issue.
>
Hi Stephan,
I tried 0.10.2 as well still running into the same issue.
On Thursday, March 10, 2016, Deepak Jha wrote:
> Yes. Flink 1.0.0
>
> On Thursday, March 10, 2016, Stephan Ewen wrote:
>
>> Hi!
>>
>> Is this Flink 1.0.0 ?
>>
>> Stephan
>>
Yes. Flink 1.0.0
On Thursday, March 10, 2016, Stephan Ewen wrote:
> Hi!
>
> Is this Flink 1.0.0 ?
>
> Stephan
>
>
> On Thu, Mar 10, 2016 at 6:02 AM, Deepak Jha wrote:
>
> > Hi All,
> >
> > I'm trying to setup Flink 1.0.0 cluster on
e jobmanager log file for details also the jobmanager config file...
--
Thanks,
Deepak Jha
2016-03-09 18:04:11,887 PST [INFO] ec2-52-3-248-202.compute-1.ama [main]
o.a.f.runtime.jobmanager.JobManager -
2016-03-09 18:04:
> >
> >
> > To look into 1.2, can you check the TaskManager log at the beginning,
> > where it says what interface/hostname the TaskManager selected to use?
> >
> > Thanks,
> > Stephan
> >
> >
> >
> >
> >
> >
> > On
on to 192.168.99.104 6123 port [tcp/*] succeeded!
masters file on TM contains
192.168.99.104:8080
Did anyone face this issue with a remote JM/TM combination?
--
Thanks,
Deepak Jha
s.
>
> I'm curious to know if everything works as expected. If you encounter
> something that seems wrong, let us know.
>
> – Ufuk
>
>
> On Fri, Feb 19, 2016 at 9:02 PM, Deepak Jha wrote:
> > Hi Ufuk,
> > I'm planning to build a Flink HA cluster and I ma
the jobmanager address in the
> config).
>
> In theory, you can also skip the "slaves" file if you ssh manually
> into the machines and start the task managers via the taskmanager.sh
> script, but I don't think that this is what you are looking for. Or
> are y
Hi Max and Stephan,
Does this mean that I can start a Flink HA cluster without keeping any entries
in the "slaves" file? I'm asking because then I would not have to worry about
copying public keys for password-less SSH in the Flink HA cluster.
On Wed, Feb 17, 2016 at 12:38 PM, Deepak Jha
Sorry for the typo Stephan
On Wednesday, February 17, 2016, Deepak Jha wrote:
> Thanks Max and Steven for the response.
>
> On Wednesday, February 17, 2016, Stephan Ewen wrote:
>
>> Hi Deepak!
>>
>> The "slaves" file is only used by the SSH script
> > simply register at the job manager using the provided configuration.
> > In HA mode, they will lookup the currently leading job manager first
> > and then connect to it. The job manager can then assign work.
> >
> > Cheers,
> > Max
> >
> > On Tue, Feb 16
Hi All,
I have a question on scaling up/down a Flink cluster.
As per the documentation, in order to scale up the cluster, I can add a new
taskmanager on the fly and the jobmanager can assign work to it. Assuming I
have Flink HA, then in the event of a master JobManager failure, how is this
taskmana
>
>
>
>
> On Fri, Jan 22, 2016 at 8:13 PM, Deepak Jha wrote:
>
> > Hi Devs,
> > I just started using Flink and would like to add Kafka as a sink. I went
> > through the documentation but so far I've not succeeded in writing to
> Kafka
new FlinkKafkaProducer[Demo]("127.0.0.1:9092",
"test_topic", new SimpleStringSchema()))
Can anyone explain what I am doing wrong in adding Kafka as a sink?
--
Thanks,
Deepak Jha
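For context, a complete version of the snippet quoted above would look roughly like this (a sketch against the Flink 0.10/1.0-era FlinkKafkaProducer API; the broker address and topic come from the fragment, while the stream name and its String element type are assumptions):

```scala
// Sketch only: attaching a Kafka sink to a DataStream[String] using the
// old FlinkKafkaProducer(brokerList, topic, schema) constructor.
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
import org.apache.flink.streaming.util.serialization.SimpleStringSchema

object KafkaSinkSketch {
  def attachSink(stream: DataStream[String]): Unit =
    stream.addSink(new FlinkKafkaProducer[String](
      "127.0.0.1:9092",         // Kafka broker list
      "test_topic",             // target topic
      new SimpleStringSchema()  // serializes each String record
    ))
}
```

One thing worth checking: SimpleStringSchema serializes String, so a producer typed FlinkKafkaProducer[Demo] as in the quoted fragment would need a serialization schema for Demo instead; that type mismatch is a guess at the problem, not something confirmed by the thread.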