Hello
I have a question about the native libraries distributed with the 2.6.4
release. The documentation from
http://hadoop.apache.org/docs/r2.6.4/hadoop-project-dist/hadoop-common/NativeLibraries.html
says
"The pre-built 32-bit i386-Linux native hadoop library is available as part
of the
Thanks Vinaykumar and Gurmukh,
I have got it working successfully through the auth_to_local configs, but I
faced quite a few issues.
I have a two-node cluster: one node runs both the namenode and a datanode,
and the second is a datanode only. I ran into some keytab-related
authentication issues, for example:
I
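For reference, the auth_to_local mapping mentioned above can be sketched in core-site.xml along the following lines. The realm EXAMPLE.COM and the target local user `hdfs` are placeholders, not taken from this thread:

```xml
<!-- core-site.xml sketch (realm and local user are assumptions) -->
<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](nn@.*EXAMPLE\.COM)s/.*/hdfs/
    RULE:[2:$1@$0](dn@.*EXAMPLE\.COM)s/.*/hdfs/
    DEFAULT
  </value>
</property>
```

Each RULE maps a service principal such as nn/host@EXAMPLE.COM to the local user hdfs; DEFAULT keeps the standard mapping for everything else.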
Please note, there are two different configs:
“dfs.datanode.kerberos.principal” and “dfs.namenode.kerberos.principal”
The following configs can be set, as required:
dfs.datanode.kerberos.principal --> dn/_HOST
dfs.namenode.kerberos.principal --> nn/_HOST
The “nn/_HOST” principal will be used only on the namenode.
What you are doing is correct, but the datanodes, in addition to the
dn/_HOST principal, also need the nn/_HOST principal.
Follow my GitHub for configs from a working cluster:
https://github.com/netxillon/hadoop/tree/master/kerberos
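As a sketch, the two principal settings (plus their keytab counterparts) might look like this in hdfs-site.xml. The realm EXAMPLE.COM and the keytab paths are placeholders for illustration:

```xml
<!-- hdfs-site.xml sketch (realm and keytab paths are assumptions) -->
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>nn/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/etc/security/keytabs/nn.service.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>dn/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/etc/security/keytabs/dn.service.keytab</value>
</property>
```

Hadoop substitutes _HOST with each daemon's own fully qualified hostname at startup, which is why the same config can be shared by both nodes.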
On 30/06/16 5:54 PM, Aneela Saleem wrote:
Thanks Vinayakumar
Yes
Thanks for your response Chris. So I understand that there is no standard
implementation of cp as a REST API?
You mention that cp is a combination of "open, create and rename", and all of
these methods are available through webhdfs. Do you think that we can
reproduce it remotely by executing several
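A minimal sketch of chaining those WebHDFS operations into a client-side cp might look like the following. The hostnames, port, paths, and user name are placeholders; WebHDFS has no single "copy" operation, so the client streams the bytes itself, and CREATE is a two-step call (the namenode answers with a 307 redirect to a datanode, which urllib does not follow for PUT):

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def webhdfs_url(host, port, path, op, **params):
    """Build a WebHDFS v1 REST URL for a single operation."""
    return (f"http://{host}:{port}/webhdfs/v1{path}?"
            + urlencode({"op": op, **params}))

def webhdfs_cp(host, port, src, dst, user):
    """Sketch: copy src to dst by chaining OPEN and CREATE over REST."""
    # OPEN: the namenode redirects to a datanode; urlopen follows the
    # redirect for GET and returns the file contents.
    with urlopen(webhdfs_url(host, port, src, "OPEN",
                             **{"user.name": user})) as resp:
        data = resp.read()
    # CREATE step 1: PUT to the namenode with no body; it replies
    # 307 Temporary Redirect with the datanode URL in Location.
    create = Request(webhdfs_url(host, port, dst, "CREATE",
                                 overwrite="true", **{"user.name": user}),
                     method="PUT")
    try:
        urlopen(create)
    except HTTPError as e:
        if e.code != 307:
            raise
        # CREATE step 2: PUT the actual bytes to the datanode URL.
        urlopen(Request(e.headers["Location"], data=data, method="PUT"))
```

An atomic rename (for a remote mv, or to commit the copy under its final name) would be one more call to the RENAME operation in the same style. Note this sketch ignores Kerberos/SPNEGO authentication, which a secure cluster would require.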