Re: [some bugs] Re: file permission problem

2008-03-17 Thread s29752-hadoopuser
Hi Stefan,

 any magic we can do with hadoop.dfs.umask?

dfs.umask is similar to Unix umask.

 Or is there any other off switch for the file security?

If dfs.permissions is set to false, then permission checking is turned off entirely.

For the two questions above, see 
http://hadoop.apache.org/core/docs/r0.16.1/hdfs_permissions_guide.html for more 
details.
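
To make that concrete, both switches are ordinary hadoop-site.xml properties.  A 
minimal sketch, assuming the 0.16-era property names from the guide above (the 
values shown are illustrative, not recommendations):

  <property>
    <name>dfs.permissions</name>
    <value>false</value>  <!-- turns permission checking off entirely -->
  </property>
  <property>
    <name>dfs.umask</name>
    <value>022</value>    <!-- masked out of the mode of new files, like a Unix umask -->
  </property>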

 I definitely can reproduce the problem Johannes describes ...

I guess you are using one of the nightly builds that have the bug.  Please try 
the 0.16.1 release or the current trunk.

 Besides that, I made some interesting observations.
 If I have permission to write to a folder A, I can delete folder A and 
 a file B inside folder A, even if I have no permissions on B.

This is also true on POSIX/Unix systems, on which Hadoop's permission model is based.
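
A quick plain-Unix illustration (hypothetical paths; HDFS applies the same rule): 
deleting B is a write to directory A, so only A's permissions matter, not B's:

  $ mkdir A && touch A/B
  $ chmod 000 A/B    # no permissions at all on B itself
  $ rm -f A/B        # still succeeds, because A is writable by us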

 Also I noticed the following in my dfs:
 [EMAIL PROTECTED] hadoop]$ bin/hadoop fs -ls /user/joa23/myApp-1205474968598
 Found 1 items
 /user/joa23/myApp-1205474968598/VOICE_CALL  <dir>  2008-03-13 16:00  rwxr-xr-x  hadoop  supergroup
 [EMAIL PROTECTED] hadoop]$ bin/hadoop fs -ls /user/joa23/myApp-1205474968598/VOICE_CALL
 Found 1 items
 /user/joa23/myApp-1205474968598/VOICE_CALL/part-0  <r 3>  27311  2008-03-13 16:00  rw-r--r--  joa23  supergroup

 Am I missing something, or was I able to write as user joa23 into a 
 folder owned by hadoop where I should have no permissions? :-O
 Should I open some jira issues?

Suppose joa23 is not a superuser.  Then, no.

The output above only shows that a file owned by joa23 exists in a directory owned 
by hadoop.  This state can definitely be reached by a sequence of commands using chmod/chown.
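
For example, here is one hypothetical sequence that ends in exactly the listing 
above without violating any permission check (joa23 creates everything, then the 
superuser reassigns the directory):

  $ bin/hadoop fs -mkdir /user/joa23/myApp-1205474968598/VOICE_CALL          # as joa23
  $ bin/hadoop fs -put part-0 /user/joa23/myApp-1205474968598/VOICE_CALL/   # as joa23
  $ bin/hadoop fs -chown hadoop /user/joa23/myApp-1205474968598/VOICE_CALL  # as the superuser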

Suppose joa23 is not a superuser.  If joa23 could create a file, say by hadoop 
fs -put ..., under hadoop's rwxr-xr-x directory, then it would be a bug.  But I 
don't think we can do this.
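
That is easy to check directly.  A hypothetical test (run as joa23, not as the 
superuser; the local file and target name are made up), which a correct build 
should reject with a permission error:

  $ bin/hadoop fs -put somelocalfile /user/joa23/myApp-1205474968598/VOICE_CALL/x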

Hope this helps.

Nicholas






Re: [some bugs] Re: file permission problem

2008-03-15 Thread Stefan Groschupf

Great - it is even already fixed in 0.16.1!
Thanks for the hint!
Stefan

On Mar 14, 2008, at 2:49 PM, Andy Li wrote:


I think this is the same problem discussed in this mail thread:

http://www.mail-archive.com/[EMAIL PROTECTED]/msg02759.html

A JIRA has been filed, please see HADOOP-2915.


[some bugs] Re: file permission problem

2008-03-14 Thread Stefan Groschupf

Hi Nicholas, Hi All,

I can definitely reproduce the problem Johannes describes.
Also, from debugging through the code, it is clearly a bug from my 
point of view.

So this is the call stack:

  SequenceFile.createWriter
  FileSystem.create
  DFSClient.create
  namenode.create

In NameNode I found this:

  namesystem.startFile(src,
      new PermissionStatus(Server.getUserInfo().getUserName(), null, masked),
      clientName, clientMachine, overwrite, replication, blockSize);

In getUserInfo is this comment:

  // This is to support local calls (as opposed to rpc ones) to the name-node.
  // Currently it is name-node specific and should be placed somewhere else.
  try {
    return UnixUserGroupInformation.login();

The login javadoc says:

  /**
   * Get current user's name and the names of all its groups from Unix.
   * It's assumed that there is only one UGI per user. If this user already
   * has a UGI in the ugi map, return the ugi in the map.
   * Otherwise get the current user's information from Unix, store it
   * in the map, and return it.
   */
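
To make the failure mode concrete: login() reports the Unix account of whatever 
JVM executes it, so a server-side fallback to it attributes the operation to the 
name-node's own user instead of the remote caller.  A minimal standalone sketch 
(not Hadoop source, just an illustration against the 0.16-era API):

  import org.apache.hadoop.security.UnixUserGroupInformation;

  public class WhoAmI {
    public static void main(String[] args) throws Exception {
      // Prints the Unix user of the JVM this runs in: the client's user in a
      // client JVM, but the name-node's user inside the name-node process --
      // which is how file ownership can get mis-attributed.
      System.out.println(UnixUserGroupInformation.login().getUserName());
    }
  }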

Besides that, I made some interesting observations.
If I have permission to write to a folder A, I can delete folder A and 
a file B inside folder A, even if I have no permissions on B.


Also I noticed the following in my dfs:

  [EMAIL PROTECTED] hadoop]$ bin/hadoop fs -ls /user/joa23/myApp-1205474968598
  Found 1 items
  /user/joa23/myApp-1205474968598/VOICE_CALL  <dir>  2008-03-13 16:00  rwxr-xr-x  hadoop  supergroup
  [EMAIL PROTECTED] hadoop]$ bin/hadoop fs -ls /user/joa23/myApp-1205474968598/VOICE_CALL
  Found 1 items
  /user/joa23/myApp-1205474968598/VOICE_CALL/part-0  <r 3>  27311  2008-03-13 16:00  rw-r--r--  joa23  supergroup


Am I missing something, or was I able to write as user joa23 into a folder 
owned by hadoop where I should have no permissions? :-O

Should I open some jira issues?

Stefan





Re: [some bugs] Re: file permission problem

2008-03-14 Thread Stefan Groschupf

Hi,
any magic we can do with hadoop.dfs.umask? Or is there any other off  
switch for the file security?

Thanks.
Stefan




Re: [some bugs] Re: file permission problem

2008-03-14 Thread Andy Li
I think this is the same problem discussed in this mail thread:

http://www.mail-archive.com/[EMAIL PROTECTED]/msg02759.html

A JIRA has been filed, please see HADOOP-2915.






Re: file permission problem

2008-03-13 Thread s29752-hadoopuser
Hi Johannes,

 I'm using the 0.16.0 distribution.
I assume you mean the 0.16.0 release 
(http://hadoop.apache.org/core/releases.html) without any additional patch.

I have just tried it but cannot reproduce the problem you described.  I did the 
following:
1) start a cluster with tsz
2) run a job with nicholas

The output directory and files are owned by nicholas.  Am I doing the same 
thing you did?  Could you try again?

Nicholas



Re: file permission problem

2008-03-12 Thread s29752-hadoopuser
Hi Johannes,

Which version of hadoop are you using?  There is a known bug in some nightly 
builds.

Nicholas


- Original Message 
From: Johannes Zillmann [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Wednesday, March 12, 2008 5:47:27 PM
Subject: file permission problem

Hi,

I have a question regarding file permissions.
I have a kind of workflow where I submit a job from my laptop to a 
remote hadoop cluster.
After the job finishes I do some file operations on the generated output.
The cluster user is different from the laptop user. As output I 
specify a directory inside the user's home. This output directory, 
created by the map-reduce job, is owned by the cluster user, so 
I cannot move or delete the output folder as my laptop user.

So it looks as follows:

  /user/jz/         rwxrwxrwx  jz      supergroup
  /user/jz/output   rwxr-xr-x  hadoop  supergroup

I tried different things to achieve what I want (moving/deleting the 
output folder):
- jobConf.setUser("hadoop") on the client side
- System.setProperty("user.name", "hadoop") before JobConf instantiation 
on the client side
- adding a user.name property to hadoop-site.xml on the client side
- setPermission(777) on the home folder on the client side (does not work 
recursively)
- setPermission(777) on the output folder on the client side (permission 
denied)
- creating the output folder before running the job ("Output directory 
already exists" exception)

None of the things I tried worked. Is there a way to achieve what I want?
Any ideas appreciated!

cheers
Johannes








Re: file permission problem

2008-03-12 Thread Johannes Zillmann

Hi Nicholas,

I'm using the 0.16.0 distribution.

Johannes


