Re: Hadoop Certification Programme

2010-12-08 Thread Jeff Hammerbacher
Hey Matthew, In particular, see http://www.cloudera.com/hadoop-training/ for details on Cloudera's training and certifications. Regards, Jeff On Wed, Dec 8, 2010 at 7:44 PM, Esteban Gutierrez Moguel <esteban...@gmail.com> wrote: > Matthew, > > Cloudera has rolled out a certification program for de

Re: Running not as "hadoop" user

2010-12-08 Thread Adarsh Sharma
Todd Lipcon wrote: The user who started the NN has superuser privileges on HDFS. You can also configure a supergroup by setting dfs.permissions.supergroup (default "supergroup") -Todd On Wed, Dec 8, 2010 at 9:34 PM, Mark Kerzner wrote: Hi, "hadoop" user has some advantages for running Ha

Re: Running not as "hadoop" user

2010-12-08 Thread Todd Lipcon
The user who started the NN has superuser privileges on HDFS. You can also configure a supergroup by setting dfs.permissions.supergroup (default "supergroup") -Todd On Wed, Dec 8, 2010 at 9:34 PM, Mark Kerzner wrote: > Hi, > > "hadoop" user has some advantages for running Hadoop. For example, i
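
For illustration, a minimal hdfs-site.xml sketch of the property Todd names; the group "hdfsadmins" is only an example, the shipped default is "supergroup":

    <property>
      <name>dfs.permissions.supergroup</name>
      <value>hdfsadmins</value>
    </property>

Any Unix user who belongs to that group is then treated as an HDFS superuser, in addition to the user that started the NameNode.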

Running not as "hadoop" user

2010-12-08 Thread Mark Kerzner
Hi, "hadoop" user has some advantages for running Hadoop. For example, if HDFS is mounted as a local file system, then only user "hadoop" has write/delete permissions. Can this privilege be given to another user? In other words, is this "hadoop" user hard-coded, or can another be used in its stea

Re: Reduce Error

2010-12-08 Thread Ted Yu
From Raj earlier: I have seen this error from time to time, and it has been due either to space, missing directories, or disk errors. The space issue was caused by the fact that I had mounted /dev/sdc on /hadoop-dsk and the mount had failed. And in another case I had accidentally deleted hadoop
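
A rough sketch of the kind of checks Raj describes, run on the node that hosted the failing attempt; the mount point /hadoop-dsk comes from his example, and the mapred/local subdirectory is just a guess at where mapred.local.dir points:

    # is the data disk really mounted, and does it have free space?
    df -h /hadoop-dsk
    mount | grep /hadoop-dsk

    # do the TaskTracker's local directories still exist with the right owner?
    ls -ld /hadoop-dsk/mapred/local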

Re: Reduce Error

2010-12-08 Thread Raj V
Go through the jobtracker, find the relevant node that handled attempt_201012061426_0001_m_000292_0, and figure out if there are FS or permission problems. Raj From: Adarsh Sharma To: common-user@hadoop.apache.org Sent: Wed, December 8, 2010 7:48:47 PM Subjec

Re: Reduce Error

2010-12-08 Thread Adarsh Sharma
Ted Yu wrote: Any chance mapred.local.dir is under /tmp and part of it got cleaned up? On Wed, Dec 8, 2010 at 4:17 AM, Adarsh Sharma wrote: Dear all, Did anyone encounter the error below while running a job in Hadoop? It occurs in the reduce phase of the job. attempt_201012061426_0001_m_00

Re: Hadoop Certification Programme

2010-12-08 Thread Esteban Gutierrez Moguel
Matthew, Cloudera has rolled out a certification program for developers and admins. Take a look at their website. Cheers, Esteban. On Dec 8, 2010 9:41 PM, "Matthew John" wrote: > Hi all, > > Is there any valid Hadoop certification available? Something which adds > credibility to your Hadoop expe

Hadoop Certification Programme

2010-12-08 Thread Matthew John
Hi all, Is there any valid Hadoop certification available? Something which adds credibility to your Hadoop expertise. Matthew

Re: urgent, error: java.io.IOException: Cannot create directory

2010-12-08 Thread Richard Zhang
oh, sorry. I corrected that typo: hadoop$ ls tmp/dir/hadoop-hadoop/dfs/name/current -l total 0 hadoop$ ls tmp/dir/hadoop-hadoop/dfs/name -l total 4 drwxr-xr-x 2 hadoop hadoop 4096 2010-12-08 22:17 current Even if I remove the tmp directory I manually created and set the whole Hadoop package to 777. Then I run

Re: urgent, error: java.io.IOException: Cannot create directory

2010-12-08 Thread Konstantin Boudnik
Yeah, I figured that much. What I was referring to is the ending of the paths: .../hadoop-hadoop/dfs/name/current .../hadoop-hadoop/dfs/hadoop They are different. -- Take care, Konstantin (Cos) Boudnik On Wed, Dec 8, 2010 at 15:55, Richard Zhang wrote: > Hi: > "/your/path/to/hadoop" represen

Re: urgent, error: java.io.IOException: Cannot create directory

2010-12-08 Thread Richard Zhang
Hi: "/your/path/to/hadoop" represents the location where hadoop is installed. BTW, I believe this is a file writing permission problem. Because I use the same *-site.xml setting to install with root and it works. But when I use the dedicated user hadoop, it always introduces this problem. But I d

Re: urgent, error: java.io.IOException: Cannot create directory

2010-12-08 Thread Konstantin Boudnik
it seems that you are looking at 2 different directories: first post: /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name/current second: ls -l tmp/dir/hadoop-hadoop/dfs/hadoop --   Take care, Konstantin (Cos) Boudnik On Wed, Dec 8, 2010 at 14:19, Richard Zhang wro

Re: urgent, error: java.io.IOException: Cannot create directory

2010-12-08 Thread Richard Zhang
Would the reason be that port 54310 is not open? I just used iptables -A INPUT -p tcp --dport 54310 -j ACCEPT to open the port, but it seems the same error persists. Richard On Wed, Dec 8, 2010 at 4:56 PM, Richard Zhang wrote: > Hi James: > I verified that I have the following permission se
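
Independent of iptables, it is worth checking whether anything is listening on the NameNode port at all; a quick sketch, assuming a Linux host with the usual net-tools:

    # is a process bound to the NameNode RPC port?
    netstat -tlnp | grep 54310

    # can the port be reached locally?
    telnet localhost 54310

If nothing is listening, the problem is the NameNode failing to start (e.g. the directory error above) rather than the firewall.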

Re: urgent, error: java.io.IOException: Cannot create directory

2010-12-08 Thread Richard Zhang
Hi James: I verified that I have the following permission set for the path: ls -l tmp/dir/hadoop-hadoop/dfs/hadoop total 4 drwxr-xr-x 2 hadoop hadoop 4096 2010-12-08 15:56 current Thanks. Richard On Wed, Dec 8, 2010 at 4:50 PM, james warren wrote: > Hi Richard - > > First thing that comes to m

Re: urgent, error: java.io.IOException: Cannot create directory

2010-12-08 Thread james warren
Hi Richard - The first thing that comes to mind is a permissions issue. Can you verify that the directories along the desired namenode path are writable by the appropriate user(s)? HTH, -James On Wed, Dec 8, 2010 at 1:37 PM, Richard Zhang wrote: > Hi Guys: > I am just installing Hadoop 0.21
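
A minimal sketch of James's suggestion, using the directory from the error message and assuming the daemons run as the user "hadoop" (adjust the path and user to your setup):

    # create the name directory and give the hadoop user ownership of the whole path
    sudo mkdir -p /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name
    sudo chown -R hadoop:hadoop /your/path/to/hadoop/tmp/dir
    sudo chmod -R 755 /your/path/to/hadoop/tmp/dir

    # then re-run the format as that user
    sudo -u hadoop bin/hadoop namenode -format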

urgent, error: java.io.IOException: Cannot create directory

2010-12-08 Thread Richard Zhang
Hi Guys: I am just installing Hadoop 0.21.0 on a single-node cluster. I encounter the following error when I run bin/hadoop namenode -format: 10/12/08 16:27:22 ERROR namenode.NameNode: java.io.IOException: Cannot create directory /your/path/to/hadoop/tmp/dir/hadoop-hadoop/dfs/name/current

Making input in Map iterable

2010-12-08 Thread Alex Baranau
Hello, I have data processing logic implemented so that it receives an Iterable as input, i.e. pretty much the same as the reducer's API. But I need to use this code in Map, where each element arrives as a map() method invocation. To solve the problem (at least for now), I'm doing the following: * r
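
Alex's own workaround is truncated above, but one common pattern for reusing Iterable-based code inside a mapper is to buffer the incoming records and hand them over in cleanup(). The sketch below only illustrates that idea; the ProcessingLogic interface and the key/value types are assumptions, not something from the thread:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class BufferingMapper extends Mapper<LongWritable, Text, Text, NullWritable> {

      // hypothetical shared logic that expects an Iterable, just like a reducer gets
      public interface ProcessingLogic {
        Iterable<String> process(Iterable<String> input);
      }

      private final List<String> buffer = new ArrayList<String>();
      private ProcessingLogic logic;

      @Override
      protected void setup(Context context) {
        // placeholder: in real use this would build the shared Iterable-based logic
        logic = new ProcessingLogic() {
          public Iterable<String> process(Iterable<String> input) {
            return input; // identity pass-through, for illustration only
          }
        };
      }

      @Override
      protected void map(LongWritable key, Text value, Context context) {
        // collect each record; note this keeps the whole input split in memory
        buffer.add(value.toString());
      }

      @Override
      protected void cleanup(Context context) throws IOException, InterruptedException {
        // feed everything buffered to the Iterable-based code in one call
        for (String out : logic.process(buffer)) {
          context.write(new Text(out), NullWritable.get());
        }
      }
    }

The obvious drawback is memory: the whole split is buffered, so for large inputs the next step is usually a streaming adapter (a custom Iterable backed by a queue fed from map()).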

Re: Help with

2010-12-08 Thread Aman
bin/start-dfs.sh sources bin/hadoop-config.sh, hence please put the following command in the file bin/hadoop-config.sh: export HADOOP_HOME=/user/local/hadoop On Nov 30, 2010, at 12:00 PM, "Greg Troyan" wrote: > I am building a cluster using Michael G. Noll's instructions found here: > > http://ww

Re: Configure Secondary Namenode

2010-12-08 Thread Aman
>> Date: Wed, 18 Aug 2010 13:08:03 +0530 >> From: adarsh.sha...@orkash.com >> To: core-u...@hadoop.apache.org >> Subject: Configure Secondary Namenode >> >> I am not able to find any command or parameter in core-default.xml to >> configure secondary namenode on separate machine. >> I have a 4-node
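
The question being quoted is truncated, but for the 0.20-era layout discussed here the usual recipe is roughly the following; the hostnames are placeholders and exact property names differ between releases, so treat this only as a sketch:

    # conf/masters on the node you run bin/start-dfs.sh from lists the
    # host(s) that should run the SecondaryNameNode daemon:
    snn-host.example.com

    # hdfs-site.xml on that secondary host, pointing at the primary's HTTP
    # address so it can pull the image and edits for checkpointing:
    <property>
      <name>dfs.http.address</name>
      <value>namenode-host.example.com:50070</value>
    </property>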

Re: Help: 1) Hadoop processes still are running after we stopped hadoop.2) How to exclude a dead node?

2010-12-08 Thread Sudhir Vallamkondu
Yes. Reference: I couldn't find an Apache Hadoop page describing this, but see the link below: http://serverfault.com/questions/115148/hadoop-slaves-file-necessary On 12/7/10 11:59 PM, "common-user-digest-h...@hadoop.apache.org" wrote: > From: li ping > Date: Wed, 8 Dec 2010 14:17:40 +0800 > To: >
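
On the second half of the question (excluding a dead node), the standard mechanism is the NameNode's exclude file plus a refresh; a short sketch, with the file location as a placeholder:

    # hdfs-site.xml: point the NameNode at an exclude file
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/path/to/conf/excludes</value>
    </property>

    # /path/to/conf/excludes: one hostname per line
    dead-node.example.com

    # tell the NameNode to re-read the file without a restart
    bin/hadoop dfsadmin -refreshNodes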

Re: Reduce Error

2010-12-08 Thread Ted Yu
Any chance mapred.local.dir is under /tmp and part of it got cleaned up? On Wed, Dec 8, 2010 at 4:17 AM, Adarsh Sharma wrote: > Dear all, > > Did anyone encounter the error below while running a job in Hadoop? It occurs > in the reduce phase of the job. > > attempt_201012061426_0001_m_000292_0: >
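
If that does turn out to be the cause, the usual fix is to point mapred.local.dir somewhere that is not cleaned automatically; a minimal mapred-site.xml sketch with placeholder paths:

    <property>
      <name>mapred.local.dir</name>
      <!-- comma-separated list of local directories, kept out of /tmp -->
      <value>/data/1/mapred/local,/data/2/mapred/local</value>
    </property>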

Reduce Error

2010-12-08 Thread Adarsh Sharma
Dear all, Did anyone encounter the error below while running a job in Hadoop? It occurs in the reduce phase of the job. attempt_201012061426_0001_m_000292_0: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for taskTracker/jobcache/job_2010120614

Re: how to run jobs every 30 minutes?

2010-12-08 Thread Alejandro Abdelnur
Or, if you want to do it in a reliable way, you could use an Oozie coordinator job. On Wed, Dec 8, 2010 at 1:53 PM, edward choi wrote: > My mistake. Come to think of it, you are right, I can just make an > infinite loop inside the Hadoop application. > Thanks for the reply. > > 2010/12/7 Harsh
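
A bare-bones sketch of what such a coordinator looks like; the name, dates, schema version and workflow path are illustrative only, so check the Oozie documentation for your release:

    <coordinator-app name="every-30-min" frequency="${coord:minutes(30)}"
                     start="2010-12-09T00:00Z" end="2011-12-09T00:00Z"
                     timezone="UTC" xmlns="uri:oozie:coordinator:0.1">
      <action>
        <workflow>
          <!-- HDFS path of the workflow that actually submits the MapReduce job -->
          <app-path>hdfs://namenode:54310/user/hadoop/apps/my-wf</app-path>
        </workflow>
      </action>
    </coordinator-app>

The coordinator materializes an action every 30 minutes, and Oozie tracks and retries each run, which is what makes it more reliable than an in-process loop.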