Hi,

I have a question regarding file permissions.
I have a workflow where I submit a job from my laptop to a remote Hadoop cluster.
After the job finishes, I do some file operations on the generated output.
The "cluster user" is different from the "laptop user". As output I specify a directory inside the user's home. This output directory, created by the map-reduce job, has "cluster user" ownership, so I cannot move or delete the output folder with my "laptop user".

So it looks as follows:
/user/jz/          rwxrwxrwx    jz        supergroup
/user/jz/output    rwxr-xr-x    hadoop    supergroup
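
For context, this is roughly how the job gets submitted from the laptop. This is only a minimal sketch assuming the old org.apache.hadoop.mapred API; the class name and input path are placeholders, the output path is the one shown above.

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SubmitFromLaptop {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(SubmitFromLaptop.class);
    conf.setJobName("example-job");
    // Input path is a placeholder; the output directory is the one listed above.
    FileInputFormat.setInputPaths(conf, new Path("/user/jz/input"));
    FileOutputFormat.setOutputPath(conf, new Path("/user/jz/output"));
    // Submitted from the laptop, but the output directory ends up
    // owned by "hadoop", the user the cluster daemons run as.
    JobClient.runJob(conf);
  }
}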

I tried different things to achieve what I want (moving/deleting the output folder); a rough sketch of these attempts follows the list:
- jobConf.setUser("hadoop") on the client side
- System.setProperty("user.name","hadoop") before jobConf instantiation on the client side
- add a user.name property in hadoop-site.xml on the client side
- setPermission(777) on the home folder on the client side (does not work recursively)
- setPermission(777) on the output folder on the client side (permission denied)
- create the output folder before running the job ("output directory already exists" exception)
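
For clarity, this is roughly what the client-side attempts look like in code. Again just a sketch assuming the old org.apache.hadoop.mapred API; the actual code may differ slightly.

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.mapred.JobConf;

public class Attempts {
  public static void main(String[] args) throws Exception {
    // 1) Pretend to be the cluster user before the JobConf is created.
    System.setProperty("user.name", "hadoop");
    JobConf conf = new JobConf(Attempts.class);
    conf.setUser("hadoop");

    FileSystem fs = FileSystem.get(conf);

    // 2) Open up the home directory (not recursive, so it does not
    //    help with the output directory created later by the job).
    fs.setPermission(new Path("/user/jz"), new FsPermission((short) 0777));

    // 3) The same call on the output directory fails with
    //    "permission denied" because it is owned by "hadoop".
    fs.setPermission(new Path("/user/jz/output"), new FsPermission((short) 0777));
  }
}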

None of the things I tried worked. Is there a way to achieve what I want?
Any ideas appreciated!

cheers
Johannes


--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 101tec GmbH

Halle (Saale), Saxony-Anhalt, Germany
http://www.101tec.com
