Hi Kumar,

This is the correct syntax for the HDFS setfacl command.  If you're running from 
Windows cmd.exe, you may need to wrap command-line parameters in quotes if 
they contain any of the cmd.exe parameter delimiters.  In cmd.exe, the 
parameter delimiters are space, comma, semicolon, and equals sign.  The syntax 
for an ACL spec contains commas, so we need to wrap it in quotes.  Otherwise, 
cmd.exe splits it into multiple arguments before invoking the Hadoop code, 
which is why you see the "too many arguments" error.  When I ran this on 
Windows, it worked:

hadoop fs -setfacl --set "user::rwx,user:user1:---,group::rwx,other::rwx" /test1
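The splitting behavior can be sketched like this (illustrative only, not Hadoop code; it just simulates cmd.exe tokenizing on its four delimiter characters):

```python
# Sketch: cmd.exe treats space, comma, semicolon, and equals sign as
# parameter delimiters, so an unquoted ACL spec is split into several
# arguments before the Hadoop code ever sees it.
import re

acl_spec = "user::rwx,user:user1:---,group::rwx,other::rwx"

# Simulate cmd.exe tokenization of the unquoted spec
unquoted_args = re.split(r"[ ,;=]+", acl_spec)
print(unquoted_args)
# ['user::rwx', 'user:user1:---', 'group::rwx', 'other::rwx']
# Four arguments where -setfacl --set expects one -> "Too many arguments"

# Quoting the spec keeps it as a single argument
quoted_args = [acl_spec]
print(len(quoted_args))  # 1
```

With the quotes, cmd.exe passes the whole spec through as one argument and the command parses as intended.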

--Chris Nauroth

From: kumar r <kumarlear...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Wednesday, July 15, 2015 at 12:29 AM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: hadoop setfacl --set not working

I am a Windows user and have configured Hadoop 2.6.0 secured with Kerberos. I am 
trying to set an ACL for a directory using the command below:


hadoop fs -setfacl --set user::rwx,user:user1:---,group::rwx,other::rwx /test1


It gives:

-setfacl: Too many arguments
Usage: hadoop fs [generic options] -setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} 
<path>]|[--set <acl_spec> <path>]

I have posted the question on Stack Overflow; the link is

http://stackoverflow.com/questions/31422810/hadoop-setfacl-set-not-working
