Hello,
adding to this: the hbase regionserver does not survive that situation either! When putting a node into decommissioning, if a
regionserver has a file open on that node, it dies:
2015-01-28 10:11:18,178 FATAL [regionserver60020.logRoller]
regionserver.HRegionServer:
Hello,
I ran into a weird problem creating files, and for the moment I only have a
shaky conclusion:
Logged in as a vanilla user on a datanode, the simple command hdfs dfs -put
/etc/motd motd reproducibly bails out with
WARN hdfs.DFSClient: DataStreamer Exception
On 12 Dec 2014 at 03:13, Vinod Kumar Vavilapalli vino...@hortonworks.com
wrote:
Auth to local mappings
- nn/nn-h...@cluster.com -> hdfs
- dn/.*@cluster.com -> hdfs
The combination of the above lets you block any user other than hdfs
from masquerading as a datanode.
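Concretely, such mappings could be expressed in core-site.xml roughly like this; the realm name and the exact rule regexes below are assumptions for illustration, not taken from the thread:

```xml
<!-- core-site.xml sketch: map the nn/* and dn/* daemon principals to the
     local user hdfs, and fall through to DEFAULT for everything else.
     Realm CLUSTER.COM is assumed. -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](nn@CLUSTER.COM)s/.*/hdfs/
    RULE:[2:$1@$0](dn@CLUSTER.COM)s/.*/hdfs/
    DEFAULT
  </value>
</property>
```

The `[2:$1@$0]` part rewrites a two-component principal such as dn/host@CLUSTER.COM into dn@CLUSTER.COM before the regex match, which is why one rule can cover every datanode host.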
Purposes
-
On 10 Dec 2014 at 20:08, Vinod Kumar Vavilapalli vino...@hortonworks.com
wrote:
You don't need patterns for host-names; did you see the support for _HOST in
the principal names? You can specify the datanode principal to be, say,
datanodeUser/_HOST@realm, and the Hadoop libraries interpret and
Hello,
how would you guys go about adding additional nodes to a Hadoop cluster running
with Kerberos, preferably without restarting the
namenode/resourcemanager/hbase-master etc?
I am aware that one can add names to dfs.hosts and run dfsadmin -refreshNodes,
but with Kerberos I have the
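For the non-Kerberos part of that procedure, the usual sequence is roughly the following; the hostname and include-file path are assumptions, and the commands should be run as the HDFS/YARN superuser:

```shell
# Add the new datanode's hostname to the include file referenced by dfs.hosts
echo "new-dn.cluster.com" >> /etc/hadoop/conf/dfs.hosts

# Ask the namenode to re-read its include/exclude files without a restart
hdfs dfsadmin -refreshNodes

# Same idea on the resourcemanager side, if node membership is managed there too
yarn rmadmin -refreshNodes
```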
Hello,
How do you add a new datanode to a secure cluster, without restarting the
namenode?
In order to prevent identity theft of the mapred or hdfs users, a secure cluster needs to
carefully maintain auth_to_local in core-site.xml, as far as I understand, typically with lines
such as
Well, that does not seem to be the issue. The Kerberos ticket gets refreshed
automatically, but the delegation token doesn't.
On 3 Dec 2013 at 20:24, Raviteja Chirala wrote:
Alternatively you can schedule a cron job to do kinit every 20 hours or so.
Just to renew token before it expires.
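A crontab entry along those lines could look like the following; the keytab path and principal are assumptions for illustration:

```shell
# Re-obtain a Kerberos TGT from a keytab at 00:00 and 20:00 every day,
# i.e. well before the (approx.) 24h lifetime runs out
0 0,20 * * *  kinit -kt /etc/security/keytabs/appuser.keytab appuser@CLUSTER.COM
```

Note that, as observed later in this thread, renewing the Kerberos ticket alone does not renew Hadoop delegation tokens, so this may not be sufficient for long-running jobs.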
Hello,
I am trying to understand why my long-running mapreduce jobs stop after 24
hours (approx) on a secure cluster.
This is on Cloudera CDH 4.3.0, hence Hadoop 2.0.0, using MRv1 (not YARN), with
authentication set to kerberos. Trying with a short-lived Kerberos
ticket (1h) I see that it