>>> starts but no job is submitted, and after 1 min or so the session
>>> crashes. I attached the jobmanager log.
>>>
>>> In Zookeeper the root-directory is created and ch
>> On 23.11.2015 17:00, Gwenhael Pasquiers wrote:
>>> We are not yet using HA in our cluster instances.
>>
>> Everything runs fine, with the default recovery.zookeeper.root.path.
>>
>> Does anyone have an idea, what is going on?
>>
>> Cheers,
>>
>> Konstantin
> We are not yet using HA in our cluster instances.
>
> But yes, we will have to change the zookeeper.path.root :)
>
> We package our jobs with their own config folder (we don’t rely on
> flink’s config folder); we can put the maven project name into this
> property then they will have different values :)
>
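A minimal sketch of what Gwenhaël describes, assuming Maven resource filtering is enabled on the job's own config folder (the placeholder mechanism is illustrative, not taken from the thread):

```yaml
# conf/flink-conf.yaml shipped inside the job's own config folder.
# With Maven resource filtering enabled for this directory, the
# ${project.artifactId} placeholder is substituted at build time,
# so every packaged job ends up with a distinct ZooKeeper root.
recovery.zookeeper.path.root: /flink/${project.artifactId}
```

Filtering is switched on in the POM by marking the resource directory containing this file with `<filtering>true</filtering>`.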
From: Till Rohrmann [mailto:trohrm...@apache.org]
Sent: lundi 23 novembre 2015 14:51
To: user@flink.apache.org
Subject: Re: YARN High Availability
The problem is the execution graph handle which is stored in ZooKeeper. You can
manually remove it via the ZooKeeper shell by simply deleting everything
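Till's manual cleanup can be done with the CLI that ships with ZooKeeper; the host and path below are examples, the real path is whatever root the cluster was started with:

```shell
# Open a shell against the quorum (host/port are placeholders)
bin/zkCli.sh -server zkhost1:2181

# Inside the ZooKeeper shell: inspect the Flink root, then delete it
# recursively ('rmr' on older ZooKeeper CLIs, 'deleteall' on newer ones)
ls /flink
rmr /flink
```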
> ’t they ?
>
> B.R.
>
> -----Original Message-----
> From: Ufuk Celebi [mailto:u...@apache.org]
> Sent: lundi 23 novembre 2015 12:12
> To: user@flink.apache.org
> Subject: Re: YARN High Availability
From: Ufuk Celebi [mailto:u...@apache.org]
Sent: lundi 23 novembre 2015 12:12
To: user@flink.apache.org
Subject: Re: YARN High Availability
Hey Gwenhaël,
the restarting jobs are most likely old job submissions. They are not cleaned
up when you shut down the cluster, but only when they finish (either regular
finish or after cancelling).
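Since leftover submissions are only cleaned up once a job finishes or is cancelled, one way to get rid of them is the Flink CLI (the job ID is a placeholder):

```shell
# List jobs the JobManager knows about
bin/flink list

# Cancel a stale job by the ID shown in the listing
bin/flink cancel <jobID>
```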
> From: Maximilian Michels [mailto:m...@apache.org]
> Sent: jeudi 19 novembre 2015 13:36
> To: user@flink.apache.org
> Subject: Re: YARN High Availability
>
> The docs have been updated.
>
> On Thu, Nov 19, 2015 at 12:36 PM, Ufuk Celebi wrote:
>> I’ve added a note about this to the doc
Nevermind,

Looking at the logs I saw that it was having issues trying to connect to ZK.

To make it short, it had the wrong port.

It is now starting.

Tomorrow I’ll try to kill some JobManagers *evil*.

Another question : if I have multiple HA flink jobs, are there some points to
check in order to be sure that they won’t collide on hdfs or ZK ?

B.R.

Gwenhaël PASQUIERS
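On the collision question, one conservative setup is to give every HA job/cluster its own ZooKeeper root and its own HDFS recovery directory; the key names below follow the Flink 0.10-era configuration docs and the values are examples only:

```yaml
# flink-conf.yaml for job A (job B would use .../job-b everywhere)
recovery.zookeeper.path.root: /flink/job-a
recovery.zookeeper.storageDir: hdfs:///flink/recovery/job-a
```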
From: Till Rohrmann [mailto:till.rohrm...@gmail.com]
Sent: mercredi 18 novembre 2015 18:01
To: user@flink.apache.org
Subject: Re: YARN High Availability
Hi Gwenhaël,
do you have access to the yarn logs?
Cheers,
Till
On Wed, Nov 18, 2015 at 5:55 PM, Gwenhael Pasquiers <
gwenhael.pasqui...@ericsson.com> wrote:
> Hello,
>
>
>
> We’re trying to set up high availability using an existing zookeeper
> quorum already running in our Cloudera cluster.
>
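The setup being attempted here roughly corresponds to the following flink-conf.yaml fragment (key names as in the Flink 0.10-era high-availability docs; hosts and paths are placeholders):

```yaml
recovery.mode: zookeeper
recovery.zookeeper.quorum: zkhost1:2181,zkhost2:2181,zkhost3:2181
recovery.zookeeper.path.root: /flink
# HDFS directory holding JobManager metadata referenced from ZooKeeper
recovery.zookeeper.storageDir: hdfs:///flink/recovery
```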