Namenode crashes in 2.7.2

2019-07-11 Thread kumar r
Hi,

In Hadoop-2.7.2, I am getting the same error reported here:
https://issues.apache.org/jira/browse/HDFS-12985

Is there a patch available for the hadoop-2.7.2 version? How can I restart the
namenode without the NullPointerException?

Thanks,
Kumar


Re: Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled

2016-10-24 Thread kumar r
Hi,

If I install the JCE policy files, then it shows:

GSSException: Failure unspecified at GSS-API level (Mechanism level:
Specified version of key is not available (44))

But without installing the policy files at all, it works fine with the local
Windows Active Directory.
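
In case it helps anyone else looking at this: error 44 usually means the key
version (kvno) in the keytab no longer matches the key version the KDC has for
the account, and I have been checking that roughly as below. This is only a
sketch; it assumes the MIT Kerberos (or JDK) klist/kinit tools are on the PATH,
and the keytab path and principal name are just examples:

# Show the key version numbers (KVNO) and encryption types stored in the keytab
klist -k -t -e C:\hadoop\etc\hadoop\hdfs.keytab

# Try to get a ticket directly from the keytab; if the account password was
# reset after the keytab was exported (so AD now holds a newer kvno), this
# fails with the same "Specified version of key is not available" error
kinit -k -t C:\hadoop\etc\hadoop\hdfs.keytab HTTP/node1@EXAMPLE.COM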

Thanks,



On Mon, Oct 24, 2016 at 12:28 PM, <wget.n...@gmail.com> wrote:

> Looks like the strong encryption policy file for Java (Oracle) isn’t
> installed. Or you don’t have a valid Kerberos ticket in your cache (klist).
>
>
>
> --
> B: mapredit.blogspot.com
>
>
>
> *From: *kumar r <kumarc...@gmail.com>
> *Sent: *Monday, October 24, 2016 8:49 AM
> *To: *user@hadoop.apache.org
> *Subject: *Encryption type AES256 CTS mode with HMAC SHA1-96 is
> not supported/enabled
>
> [...]


Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled

2016-10-24 Thread kumar r
Hi,

I am trying to configure a Hadoop pseudo-node secure cluster (to verify that
it works properly) in Azure using Azure Domain Service.

OS - Windows Server 2012 R2 Datacenter
Hadoop Version - 2.7.2

I am able to run:

hadoop fs -ls /

An example MapReduce job also works fine:

yarn jar %HADOOP_HOME%\share\hadoop\mapreduce\hadoop-mapreduce-examples-*.jar pi 16 1

But when I run:

hdfs fsck /

it gives:

Connecting to namenode via https://node1:50470/fsck?ugi=Kumar&path=%2F
Exception in thread "main" java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 403, message: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:335)
at org.apache.hadoop.hdfs.tools.DFSck.access$000(DFSck.java:73)
at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:152)
at org.apache.hadoop.hdfs.tools.DFSck$1.run(DFSck.java:149)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.hdfs.tools.DFSck.run(DFSck.java:148)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.hdfs.tools.DFSck.main(DFSck.java:377)
Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 403, message: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.extractToken(AuthenticatedURL.java:274)
at org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:77)
at org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:214)
at org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:215)
at org.apache.hadoop.hdfs.web.URLConnectionFactory.openConnection(URLConnectionFactory.java:161)
at org.apache.hadoop.hdfs.tools.DFSck.doWork(DFSck.java:333)
... 10 more


When I access the NameNode web UI, it shows:

GSSException: Failure unspecified at GSS-API level (Mechanism level:
Encryption type AES256 CTS mode with HMAC SHA1-96 is not supported/enabled)

[image: Inline image 1]

Can someone help me resolve this error and get it working?
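
From what I have read so far, two things seem worth checking for the "AES256
... not supported/enabled" part; I have not confirmed either of them yet, and
the names below are only placeholders. First, the JDK can only use AES-256
Kerberos keys once the unlimited-strength JCE policy files (local_policy.jar
and US_export_policy.jar) are copied into %JAVA_HOME%\jre\lib\security.
Second, the SPNEGO keytab has to actually contain an AES-256 key, which with
Active Directory would mean creating it along these lines:

rem Run on the domain side; the service account must also have
rem "This account supports Kerberos AES 256 bit encryption" enabled in AD.
ktpass /princ HTTP/node1.example.com@EXAMPLE.COM ^
       /mapuser hadoopsvc@EXAMPLE.COM ^
       /crypto AES256-SHA1 ^
       /ptype KRB5_NT_PRINCIPAL ^
       /pass * ^
       /out http-node1.keytab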


Oozie distcp failed in secure cluster

2016-07-29 Thread kumar r
Hi,

I have configured hadoop-2.7.2 and oozie-4.2.0 with Kerberos security
enabled.

A DistCp Oozie action is submitted as a workflow job. When the Oozie launcher
runs, I am getting the following exception:


2016-07-29 12:39:04,394 ERROR [uber-SubtaskRunner] org.apache.hadoop.tools.DistCp: Exception encountered
org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:6635)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:563)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:987)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy14.getDelegationToken(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getDelegationToken(ClientNamenodeProtocolTranslatorPB.java:933)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy15.getDelegationToken(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getDelegationToken(DFSClient.java:1029)
at org.apache.hadoop.hdfs.DistributedFileSystem.getDelegationToken(DistributedFileSystem.java:1542)
at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:530)
at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:508)
at org.apache.hadoop.hdfs.DistributedFileSystem.addDelegationTokens(DistributedFileSystem.java:2228)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at org.apache.hadoop.tools.mapred.CopyOutputFormat.checkOutputSpecs(CopyOutputFormat.java:121)
at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.tools.DistCp.createAndSubmitJob(DistCp.java:183)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:153)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:126)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.oozie.action.hadoop.DistcpMain.run(DistcpMain.java:64)
at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:47)
at org.apache.oozie.action.hadoop.DistcpMain.main(DistcpMain.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:236)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
at org.apache.hadoop.mapred.LocalContainerLauncher$EventHandler.runSubtask(LocalContainerLauncher.java:380)
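
For context, the action definition looks roughly like the sketch below (host
names and paths are placeholders, not my real values). I am also wondering
whether the oozie.launcher.mapreduce.job.hdfs-servers property that is
sometimes recommended for DistCp on secure clusters is needed here, so that
delegation tokens for every NameNode are obtained by the launcher up front; I
have not confirmed that this is the fix:

<action name="distcp-copy">
    <distcp xmlns="uri:oozie:distcp-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <!-- assumption: list every NameNode the copy touches so tokens are pre-fetched -->
            <property>
                <name>oozie.launcher.mapreduce.job.hdfs-servers</name>
                <value>hdfs://namenode1:8020,hdfs://namenode2:8020</value>
            </property>
        </configuration>
        <arg>hdfs://namenode1:8020/source/path</arg>
        <arg>hdfs://namenode2:8020/target/path</arg>
    </distcp>
    <ok to="end"/>
    <error to="fail"/>
</action>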

Re: Mask value not shown in GETFACL using webhdfs

2016-05-27 Thread kumar r
Thank you for the detailed explanation.
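
For anyone who finds this thread later: the point below about getfacl applying
the mask interpretation on the client side is easy to confirm, because getfacl
still shows the mask even when it is pointed at WebHDFS. Something like this
(same NameNode address and path as in my original mail):

# still prints the mask::rwx entry, derived from the group bits of permission 775
hadoop fs -getfacl webhdfs://localhost:50070/Kumar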

On Tue, May 24, 2016 at 10:48 PM, Chris Nauroth <cnaur...@hortonworks.com>
wrote:

> Hello Kumar,
>
> I answered at the Stack Overflow link.  I'll repeat the same information
> here for everyone's benefit.
>
> HDFS implements the POSIX ACL model [1].  The linked documentation
> explains that the mask entry is persisted into the group permission bits of
> the classic POSIX permission model.  This is done to support the
> requirements of POSIX ACLs and also support backwards-compatibility with
> existing tools like chmod, which are unaware of the extended ACL entries.
> Quoting that document:
>
> > In minimal ACLs, the group class permissions are identical to the
> > owning group permissions. In extended ACLs, the group class may
> > contain entries for additional users or groups. This results in a
> > problem: some of these additional entries may contain permissions that
> > are not contained in the owning group entry, so the owning group entry
> > permissions may differ from the group class permissions.
> >
> > This problem is solved by the virtue of the mask entry. With minimal
> > ACLs, the group class permissions map to the owning group entry
> > permissions. With extended ACLs, the group class permissions map to
> > the mask entry permissions, whereas the owning group entry still
> > defines the owning group permissions.
> >
> > ...
> >
> > When an application changes any of the owner, group, or other class
> > permissions (e.g., via the chmod command), the corresponding ACL entry
> > changes as well. Likewise, when an application changes the permissions
> > of an ACL entry that maps to one of the user classes, the permissions
> > of the class change.
>
> This is relevant to your question, because it means the mask is not in
> fact persisted as an extended ACL entry.  Instead, it's in the permission
> bits.  When querying WebHDFS, you've made a "raw" API call to retrieve
> information about the ACL.  When running getfacl, you've run an application
> that layers additional display logic on top of that API call.  getfacl is
> aware that for a file with an ACL, the group permission bits are
> interpreted as the mask, and so it displays accordingly.
>
> This is not specific to WebHDFS.  If an application were to call
> getAclStatus through the NameNode's RPC protocol, then it would see the
> equivalent of the WebHDFS response.  Also, if you were to use the getfacl
> command on a webhdfs:// URI, then the command would still display the mask,
> because the application knows to apply that logic regardless of the
> FileSystem implementation.
>
> [1] http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html
>
> --Chris Nauroth
>
> From: kumar r <kumarc...@gmail.com>
> Date: Monday, May 23, 2016 at 10:20 PM
> To: "user@hadoop.apache.org" <user@hadoop.apache.org>
> Subject: Mask value not shown in GETFACL using webhdfs
>
> [...]
>


Mask value not shown in GETFACL using webhdfs

2016-05-23 Thread kumar r
Hi,

In Hadoop, I have enabled authorization and set a few ACLs on a directory.

When I execute the getfacl command from the Hadoop CLI, I can see the mask value:

hadoop fs -getfacl /Kumar

# file: /Kumar
# owner: Kumar
# group: Hadoop
user::rwx
user:Babu:rwx
group::r-x
mask::rwx
other::r-x

If I run the same query using WebHDFS, the mask value is not shown:

http://localhost:50070/webhdfs/v1/Kumar?op=GETACLSTATUS

{
  "AclStatus": {
    "entries": [
      "user:Babu:rwx",
      "group::r-x"
    ],
    "group": "Hadoop",
    "owner": "Kumar",
    "permission": "775",
    "stickyBit": false
  }
}

What is the reason that the mask value is not shown in the WebHDFS response
for the GETACLSTATUS operation?


The Stack Overflow question is here:

http://stackoverflow.com/questions/37404899/mask-value-not-shown-in-getfacl-using-webhdfs


Thanks,


Re: True multi machine cluster

2016-05-10 Thread kumar r
Hi,

Yes, Hadoop works fine on Windows in all modes, including fully distributed
mode. The same applies to Spark and YARN.


On Mon, May 9, 2016 at 11:27 PM, Abi  wrote:

> Does Hadoop work on multiple Windows machines out of the box?
>
> 1. The keyword is "out of the box".
>
> 2. Same question for YARN and Spark?
>
>
> Please don't ask why we are using Windows or other such questions. We just are.
> We would appreciate replies to these questions.


Transparent Encryption in HDFS REST api

2016-05-10 Thread kumar r
Hi,

Is there HDFS REST API support for the transparent encryption ("crypto")
operations in HDFS?

https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#crypto_command-line_interface
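
For reference, the command-line operations I am hoping to find REST
equivalents for are things like the following (the key name and path are only
examples):

# create a key in the KMS, then turn an empty directory into an encryption zone
hadoop key create mykey
hadoop fs -mkdir /secure/zone
hdfs crypto -createZone -keyName mykey -path /secure/zone
hdfs crypto -listZones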

Thanks,
Kumar


Hadoop-2.7.2 MR job stuck in accepted state

2016-02-21 Thread kumar r
Hi,

I have configured a hadoop-2.7.2 pseudo-node cluster on Windows. When I
submit an MR job, it works fine. But if I submit multiple MR jobs, only one
job runs at a time.

The first job is in the RUNNING state and all the other jobs stay in the
ACCEPTED state, even though YARN has enough free memory (3 GB free out of 6 GB).
The same setup works fine in hadoop-2.5.2.


yarn-site.xml:

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>6144</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2250</value>
</property>

Is there a new property to be set to allow multiple jobs to run at a time?
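
One guess I have not verified: the Capacity Scheduler only lets a fraction of
a queue (10% by default) be used by ApplicationMasters, which can keep
additional jobs in ACCEPTED even when memory is free. If that is the cause
here, raising it in capacity-scheduler.xml would look roughly like this (0.5
is just an example value):

<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <!-- default is 0.1; allows more concurrent ApplicationMasters per queue -->
  <value>0.5</value>
</property>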

Stack Overflow Question :
http://stackoverflow.com/questions/35545986/hadoop-2-7-2-mr-job-in-accepted-state

Thanks,

Kumar


HDFS ACL recursive not working for WEBHDFS REST API

2015-07-15 Thread kumar r
I am working on Hadoop-2.6.0 with HDFS ACLs enabled. When trying through the
command line, the recursive flag (-R) works correctly, but when using the
REST API it does not work.

hadoop fs -setfacl -x -R default:group:HadoopUsers /test1

The above command works correctly, but when trying the same via the REST API,
the removal is not recursive. The ACL is removed only for the specified
directory /test1, not for its sub-directories.

curl -i -X PUT "http://HOST:PORT/webhdfs/v1/test1?op=REMOVEACLENTRIES&aclspec=default:group:HadoopUsers:&recursive=true"

Did I miss anything? How can I apply the change recursively through the REST API?
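
In case it is useful to anyone, this is the client-side workaround I have been
experimenting with: since I could not find a recursive option for the ACL
operations in WebHDFS, the sketch below walks the directory tree itself and
applies REMOVEACLENTRIES to each directory. It is only a sketch - it assumes
jq is installed, leaves out authentication parameters, and the host and port
are placeholders:

#!/bin/bash
NN="http://HOST:PORT"
ACLSPEC="default:group:HadoopUsers:"

remove_acl_recursive() {
  local path="$1"
  # remove the ACL entries on this directory
  curl -s -X PUT "$NN/webhdfs/v1$path?op=REMOVEACLENTRIES&aclspec=$ACLSPEC" > /dev/null
  # list the children and recurse into sub-directories only
  curl -s "$NN/webhdfs/v1$path?op=LISTSTATUS" |
    jq -r '.FileStatuses.FileStatus[] | select(.type == "DIRECTORY") | .pathSuffix' |
    while read -r child; do
      remove_acl_recursive "$path/$child"
    done
}

remove_acl_recursive /test1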


hadoop setfacl --set not working

2015-07-15 Thread kumar r
I am a Windows user and have configured Hadoop-2.6.0 secured with Kerberos.
I am trying to set the ACL for a directory using the command below:

hadoop fs -setfacl --set user::rwx,user:user1:---,group::rwx,other::rwx /test1

It gives:


-setfacl: Too many arguments
Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]
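
One thing I still want to try: cmd.exe treats commas as argument separators,
so the acl_spec may be getting split into several arguments before it ever
reaches the setfacl parser, which would explain "Too many arguments". Quoting
the spec might avoid that (not yet confirmed on my side):

hadoop fs -setfacl --set "user::rwx,user:user1:---,group::rwx,other::rwx" /test1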


I have posted the question on Stack Overflow:

http://stackoverflow.com/questions/31422810/hadoop-setfacl-set-not-working


Acl property not working - reg

2015-02-04 Thread kumar r
Hi,

I am using hadoop-2.6.0 with Kerberos and LDAP (Active Directory) enabled on
Windows. I have tested some ACL properties, and the following are not working
for me:

security.job.client.protocol.acl
security.admin.operations.protocol.acl

I have set the following properties to true, but it is still not working:

mapreduce.cluster.acls.enabled
yarn.acl.enable

Do I need to set any other property?
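
For reference, the two protocol ACLs above are the hadoop-policy.xml
service-level authorization keys, so my assumption (not yet confirmed) is that
they also need hadoop.security.authorization turned on in core-site.xml and a
service-ACL refresh before they take effect. Roughly, with example values:

core-site.xml:
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

hadoop-policy.xml:
<property>
  <name>security.job.client.protocol.acl</name>
  <!-- comma-separated users, then a space, then comma-separated groups -->
  <value>kumar hadoopadmins</value>
</property>

Then refresh without restarting the daemons:
hdfs dfsadmin -refreshServiceAcl
yarn rmadmin -refreshServiceAcl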

Thanks,
R.Kumar