I think your configuration is set incorrectly and your job ran locally. Also,
have you called jobconf.setNumReduceTasks(0)? Try running some of the example
jobs to verify your setup.
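For example, a map-only job setup would look roughly like this (just a sketch;
MyJob is a placeholder for your own job class, and input/output paths still
need to be set as usual):

  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  JobConf job = new JobConf(MyJob.class);
  job.setNumReduceTasks(0);  // map-only: map output is written directly to the output directory
  JobClient.runJob(job);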
Nicholas Sze
- Original Message
> From: Erik Holstad <[EMAIL PROTECTED]>
> To: core-user@hadoop.apache.org
> Sen
Hi Joman,
The temp directory we are talking about here is the temp directory in the local
file system (i.e. Unix in your case). There is a config property hadoop.tmp.dir
(see hadoop-default.xml), which specifies the path of temp directory. Before
you start the cluster, you should set this property and
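To check which value actually takes effect, something like this prints it (a
sketch, assuming the config files are on the classpath):

  import org.apache.hadoop.conf.Configuration;

  Configuration conf = new Configuration();        // loads hadoop-default.xml, then hadoop-site.xml
  System.out.println(conf.get("hadoop.tmp.dir"));  // default is /tmp/hadoop-${user.name}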
Hi Christophe,
This exception happens when you access the FileSystem after calling
FileSystem.close(). From the error message below, a FileSystem input stream
was accessed after FileSystem.close(). I guess the FileSystem was closed
manually (and too early). In most cases, you don't have to c
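The pattern that triggers it looks roughly like this (a sketch; the path is
made up):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  FileSystem fs = FileSystem.get(new Configuration());
  FSDataInputStream in = fs.open(new Path("/some/file"));
  fs.close();  // closes the FileSystem underneath the stream
  in.read();   // fails: the stream is accessed after FileSystem.close()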
This information can be found in
http://hadoop.apache.org/core/docs/current/hdfs_permissions_guide.html
Nicholas
- Original Message
> From: Chris Collins <[EMAIL PROTECTED]>
> To: core-user@hadoop.apache.org
> Sent: Wednesday, June 11, 2008 9:31:18 PM
> Subject: Re: client connect as di
The best way is to use the sudo command to execute the hadoop client. Does that
work for you?
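For example (assuming the other account is named "hadoop"):

  sudo -u hadoop bin/hadoop fs -ls /user/hadoop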
Nicholas
- Original Message
> From: Bob Remeika <[EMAIL PROTECTED]>
> To: core-user@hadoop.apache.org
> Sent: Wednesday, June 11, 2008 12:56:14 PM
> Subject: client connect as different username?
>
>
The following works for me:
set JAVA_HOME=/cygdrive/c/Progra~1/Java/jdk1.5.0_14
Nicholas
- Original Message
From: vatsan <[EMAIL PROTECTED]>
To: core-user@hadoop.apache.org
Sent: Friday, May 23, 2008 5:41:05 PM
Subject: JAVA_HOME Cygwin problem (solution doesn't work)
I have installed
Hi Senthil,
drwxrwxrwx 5 hadoop hadoop 4096 May 7 18:02 datastore
This one is your local directory. I think you might have mixed up the local
and hdfs directories.
Nicholas
- Original Message
From: "Natarajan, Senthil" <[EMAIL PROTECTED]>
To: "core-user@hadoop.apache.or
Hi Senthil,
Let me explain the error message " Permission denied: user=test, access=WRITE,
inode="datastore":hadoop:supergroup:rwxr-xr-x". It says that the current user
"test" is trying to WRITE to the inode "datastore" with owner hadoop:supergroup
and permission 755. So the problem is in the
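If you just want user "test" to be able to write there, one way to open the
directory up is a sketch like this (run as the owner "hadoop"; the path is
taken from your message, and 775 plus group membership would be a safer
alternative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.fs.permission.FsPermission;

  FileSystem fs = FileSystem.get(new Configuration());
  fs.setPermission(new Path("datastore"), new FsPermission((short)0777));  // rwxrwxrwx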
Hi Senthil,
I cannot see why it does not work. Could you try again and do a fs -ls right
after you see the error message?
Nicholas
- Original Message
From: "Natarajan, Senthil" <[EMAIL PROTECTED]>
To: "core-user@hadoop.apache.org"
Sent: Friday, May 9, 2008 11:49:49 AM
Subject: RE: Had
Hi Rick,
> the hbase master must be run on the same machine as the hadoop hdfs (what
> part of it?) if one wants to use the hdfs permissions system or that right
> now we must run without permissions?
Hdfs and hbase (and all clients) should run under the same administrative
domain, but not the
Hi Senthil,
drwxrwxrwx 4 hadoop hadoop 4096 May 8 16:31 hadoop-hadoop
drwxrwxrwx 2 test test 4096 May 9 09:29 hadoop-test
From the output format, the directories above do not seem to be HDFS
directories. Are you running map/red jobs over the local file system (e.g.
Linux)?
Nicholas
Hi Stack,
> One question this raises is if the "hbase:hbase" user and group are being
> derived from the Linux file system user and group, or if they are the hdfs
> user and group?
HDFS currently does not manage user and group information. User and group in
HDFS are being derived from the unde
Hi Senthil,
In the error message, it says that the permission for "datastore" is 755. Are
you sure that you have changed it to 777?
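You can double-check what HDFS itself reports with something like this (a
sketch):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  FileSystem fs = FileSystem.get(new Configuration());
  FileStatus st = fs.getFileStatus(new Path("datastore"));
  System.out.println(st.getPermission());  // 755 prints as rwxr-xr-x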
Nicholas
- Original Message
From: "Natarajan, Senthil" <[EMAIL PROTECTED]>
To: "core-user@hadoop.apache.org"
Sent: Thursday, May 8, 2008 11:57:46 AM
S
Hi Senthil,
Since the path "myapps" is relative, copyFromLocal will copy the file to the
home directory, i.e. /user/Test/myapps in your case. If /user/Test doesn't not
exist, it will first try to create it. You got AccessControlException because
the permission of /user is 755.
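You can see how a relative path is resolved with something like this (a
sketch):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  FileSystem fs = FileSystem.get(new Configuration());
  System.out.println(fs.getHomeDirectory());                 // e.g. hdfs://namenode:9000/user/Test
  System.out.println(fs.makeQualified(new Path("myapps")));  // resolved against the home directory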
Hope this helps.
Your distcp command looks correct. distcp may have created some log files
(e.g. inside /_distcp_logs_5vzva5 from your previous email). Could you check
the logs and see whether there are error messages?
If you could send me the distcp output and the logs, I may be able to find out
the problem. (
>To check that the file actually exists on S3, I tried the following commands:
>
>bin/hadoop fs -fs s3://id:[EMAIL PROTECTED] -ls
>bin/hadoop fs -fs s3://id:[EMAIL PROTECTED] -ls
>
>The first returned nothing, while the second returned the following:
>
>Found 1 items
>/_distcp_logs_5vzva5
distcp supports multiple sources (like Unix cp) and, if the specified source is
a directory, it copies the entire directory. So, you could either do
distcp src1 src2 ... src100 dst
or
first copy all srcs to srcdir, and then
distcp srcdir dstdir
I have no experience with S3 and EC2. Not sure
It might be a bug. Could you try the following?
bin/hadoop fs -ls s3://ID:[EMAIL PROTECTED]/InputFileFormat.xml
Nicholas
- Original Message
From: Prasan Ary <[EMAIL PROTECTED]>
To: core-user@hadoop.apache.org
Sent: Wednesday, April 2, 2008 7:41:50 AM
Subject: Re: distcp fails :Input so
> That was a typo in my email. I do have s3:// in my command when it fails.
Not sure what's wrong. Your command looks right to me. Would you mind showing
me the exact error message you see?
Nicholas
> bin/hadoop distcp s3//:@/fileone.txt
> /somefolder_on_hdfs/fileone.txt : Fails - Input source doesn't exist.
Should "s3//..." be "s3://..."?
Nicholas
Hi Stefan,
> any magic we can do with hadoop.dfs.umask?
>
dfs.umask is similar to Unix umask.
> Or is there any other off switch for the file security?
>
If dfs.permissions is set to false, then permission checking will be turned off.
For the two questions above, see
http://hadoop.apache.org/core/do
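The umask arithmetic itself works as in Unix; a quick illustration:

  // requested mode AND NOT umask; 0777 with umask 022 gives 755
  int mode = 0777, umask = 022;
  System.out.printf("%o%n", mode & ~umask);  // prints 755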
Hi,
Let me clarify which versions have this problem.
- 0.16.0 release, 0.16.1 release, current trunk: no problem
- Nightly builds between 0.16.0 and 0.16.1 before HADOOP-2391 or after HADOOP-2915: no problem
- Nightly builds between 0.16.0 and 0.16.1 after HADOOP-2391 and before HADOOP-2915: bug
Hi Johannes,
> i'm using the 0.16.0 distribution.
I assume you mean the 0.16.0 release
(http://hadoop.apache.org/core/releases.html) without any additional patch.
I have just tried it but cannot reproduce the problem you described. I did the
following:
1) start a cluster with "tsz"
2) run a jo
Hi Johannes,
Which version of hadoop are you using? There is a known bug in some nightly
builds.
Nicholas
- Original Message
From: Johannes Zillmann <[EMAIL PROTECTED]>
To: core-user@hadoop.apache.org
Sent: Wednesday, March 12, 2008 5:47:27 PM
Subject: file permission problem
Hi,
i