Here is the meaning of 2 (see PlacementPolicy):

  /**
   * No data locality; do not bother trying to ask for any location
   */
  public static final int NO_DATA_LOCALITY = 2;
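
For context, here is a minimal sketch (not Slider's actual request-building
code; the class name, resource sizes and priority are just illustrative) of
what that policy means at the YARN level: with NO_DATA_LOCALITY the AM simply
asks for a container with no preferred nodes or racks, so a flaky node does
not keep getting requested:

  import org.apache.hadoop.yarn.api.records.Priority;
  import org.apache.hadoop.yarn.api.records.Resource;
  import org.apache.hadoop.yarn.client.api.AMRMClient;

  public class NoLocalityRequestSketch {
    // Same value as PlacementPolicy.NO_DATA_LOCALITY above
    public static final int NO_DATA_LOCALITY = 2;

    public static AMRMClient.ContainerRequest buildRequest(int placementPolicy) {
      Resource capability = Resource.newInstance(1500, 1); // illustrative memory/vcores
      Priority priority = Priority.newInstance(1);         // illustrative priority
      if (placementPolicy == NO_DATA_LOCALITY) {
        // No data locality: don't ask for any particular nodes or racks;
        // let the scheduler place the container anywhere in the cluster.
        return new AMRMClient.ContainerRequest(capability, null, null, priority);
      }
      // Otherwise a real AM would pass its preferred node/rack lists here,
      // with relaxLocality=true so YARN can still fall back if needed.
      return new AMRMClient.ContainerRequest(capability, null, null, priority, true);
    }
  }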

On Tue, Jan 6, 2015 at 4:15 PM, Gour Saha <gs...@hortonworks.com> wrote:

> Try setting property *yarn.component.placement.policy* to 2 for the
> component, something like this -
>
>     "HBASE_MASTER": {
>       "yarn.role.priority": "1",
>       "yarn.component.instances": "1",
>       "yarn.memory": "1500",
>       "yarn.component.placement.policy": "2"
>     },
>
> -Gour
>
> On Tue, Jan 6, 2015 at 3:33 PM, Nitin Aggarwal <nitin3588.aggar...@gmail.com> wrote:
>
> > Hi,
> >
> > We keep running into a scenario where one of the nodes in the cluster goes
> > bad (either due to its clock being out of sync, no disk space, etc.). As a
> > result the container fails to start, and due to locality the container is
> > assigned to the same machine again and again, and it fails again and again.
> > After a few failures, when the failure threshold is reached (which is
> > currently also not reset correctly, see SLIDER-629), it triggers instance
> > shut-down.
> >
> > Is there a way to give up locality, in case of multiple failures, to avoid
> > this scenario?
> >
> > Thanks
> > Nitin Aggarwal
> >
>