about replication

2013-08-04 Thread Irfan Sayed
Hi, I have set up a two-node Apache Hadoop cluster on a Windows environment: one node is the namenode and the other is a datanode, and everything is working fine. One thing I need to know is how replication starts: if I create a.txt on the namenode, how will it appear on the datanodes? Please suggest. Regards
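For reference, a sketch not taken from the original message, assuming a running cluster with the hadoop command on the PATH (the file name a.txt comes from the question, everything else is a placeholder): when a file is written through the HDFS client, the namenode assigns datanodes for each block and the client pipelines the data to them; the replication factor comes from dfs.replication. From the shell you can inspect and change it:

  hadoop fs -put a.txt /a.txt                      # write the file into HDFS via the namenode
  hadoop fsck /a.txt -files -blocks -locations     # show which datanodes hold each block replica
  hadoop fs -setrep -w 2 /a.txt                    # change the replication factor of this file to 2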

Re: about monitor metrics in hadoop

2013-08-04 Thread 闫昆
I use CDH 4.3; my hadoop-env.sh file is in the following directory: $HADOOP_HOME/etc/hadoop/ 2013/8/5 ch huang > hi, all: > I installed YARN with no MapReduce v1 installed, and no > /etc/hadoop/conf/hadoop-env.sh file exists. I use openTSDB to monitor my > cluster and need some options set in hadoop-e

about monitor metrics in hadoop

2013-08-04 Thread ch huang
Hi, all: I installed YARN with no MapReduce v1 installed, and no /etc/hadoop/conf/hadoop-env.sh file exists. I use openTSDB to monitor my cluster and need some options set in hadoop-env.sh. What can I do now? Create a new hadoop-env.sh file?
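For reference, a sketch of what such a file might contain (not from the original thread; the JMX port 8004 and the paths are only illustrative, and the exact settings depend on what your openTSDB collector expects). hadoop-env.sh is just a shell script sourced by the daemon start scripts, so creating a new one under $HADOOP_HOME/etc/hadoop/ (or /etc/hadoop/conf/ if that is your config directory) is fine:

  # hadoop-env.sh -- sourced by the Hadoop daemon start scripts
  export JAVA_HOME=/usr/java/default            # adjust to your JDK location
  # example: expose JMX so an external metrics collector can poll the namenode
  export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=8004 \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.ssl=false $HADOOP_NAMENODE_OPTS"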

metrics v1 in hadoop-2.0.5

2013-08-04 Thread lei liu
Can I use metrics v1 in hadoop-2.0.5? Thanks, LiuLei

Re: access Permission issue

2013-08-04 Thread 武泽胜
If you use ShellBasedUnixGroupsMapping, you should add the user on your server machine. From: 闫昆 (yankunhad...@gmail.com) Reply-To: "user@hadoop.apache.org" Date: Monday, August 5, 2013 8:56 AM To: "user@hadoop.apache.org
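For reference, a sketch of what "add the user on your server machine" could look like (not from the original message; the user dyf and group hadoop are taken from elsewhere in this thread), run on the server where groups are resolved:

  sudo groupadd -f hadoop        # create the group if it does not exist yet
  sudo useradd -m -G hadoop dyf  # create the user and put it in the hadoop group
  id dyf                         # ShellBasedUnixGroupsMapping resolves groups via local commands like this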

YARN with local filesystem

2013-08-04 Thread Rod Paulk
I am having an issue running 2.0.5-alpha (BigTop-0.6.0) YARN-MapReduce on the local filesystem instead of HDFS. The appTokens file that the error states is missing does exist after the job fails. I saw other 'similar' issues noted in YARN-917, YARN-513, YARN-993. When I switch to HDFS, the jo
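For reference (not from the original message), running against the local filesystem instead of HDFS is normally controlled by fs.defaultFS in core-site.xml; a minimal sketch, with hdfs://namenode:8020 only as an illustrative HDFS URI:

  <!-- core-site.xml: use the local filesystem instead of HDFS -->
  <property>
    <name>fs.defaultFS</name>
    <value>file:///</value>  <!-- switch back to e.g. hdfs://namenode:8020 to use HDFS -->
  </property>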

Re: access Permission issue

2013-08-04 Thread 闫昆
Hi 武泽胜, when I use "usermod -a -G hadoop dyf" I get "usermod: user 'dyf' does not exist". I think the problem is that my client is Windows, so the user does not exist on Linux. 2013/8/5 闫昆 > Thanks for your help, Harsh J and 武泽胜. > Can this way submit a job to the Hadoop cluster? > Now I can access HDFS > > > 2013/8/4 Harsh J

Re: access Permission issue

2013-08-04 Thread 闫昆
Thanks for your help, Harsh J and 武泽胜. Can this way submit a job to the Hadoop cluster? Now I can access HDFS. 2013/8/4 Harsh J > You will need to create a home directory for your user(s). > > sudo -u hadoop hadoop fs -mkdir -p /user/dyf > sudo -u hadoop hadoop fs -chown dyf:dyf /user/dyf > > On Sun, A

Re: incrCounter doesn't return a value

2013-08-04 Thread Manish Verma
Jens/Ashish, Thanks for your replies. I missed the point that these counters are maintained by their respective tasks and periodically sent to the tasktracker and then to the jobtracker for global aggregation. Thanks Manish On Sun, Aug 4, 2013 at 10:43 AM, Ashish Umrani wrote: > Manish, > > I a
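For reference, a short driver-side sketch of the distinction described above (not from the thread; it uses the old org.apache.hadoop.mapred API that incrCounter belongs to, and MyCounters plus the job setup are hypothetical placeholders): inside a task you only increment, and the globally aggregated value is read back from the driver after the job finishes.

  import org.apache.hadoop.mapred.Counters;
  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;
  import org.apache.hadoop.mapred.RunningJob;

  public class CounterReadBack {
    enum MyCounters { RECORDS }                       // hypothetical counter

    public static void main(String[] args) throws Exception {
      JobConf conf = new JobConf(CounterReadBack.class);
      // ... set mapper/reducer classes and input/output paths here ...
      RunningJob job = JobClient.runJob(conf);        // blocks until the job finishes
      Counters counters = job.getCounters();          // values aggregated by the jobtracker
      long total = counters.getCounter(MyCounters.RECORDS);
      System.out.println("total records = " + total);
    }
  }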

Re: incrCounter doesn't return a value

2013-08-04 Thread Ashish Umrani
Manish, I am not sure the counter would provide a globally unique id. To the best of my knowledge, counters are mapper-specific. So even if one could see the application working with one mapper, in the end, when deployed in production, duplicate ids would cause a problem. So, unless you are lo

Re: incrCounter doesn't return a value

2013-08-04 Thread Manish Verma
Ashish, I agree that the incrCounter API works as designed. I looked at other ways to generate auto-increment ids and found that ZooKeeper has been used by some to do this. Unique numbers would not satisfy my requirement as I am moving an app from a traditional data warehouse to Hadoop. The incrCounter
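For reference, a sketch of the ZooKeeper approach mentioned above (not from the thread; the connect string and the /ids parent node, which must already exist, are placeholders): a PERSISTENT_SEQUENTIAL znode gets a monotonically increasing suffix assigned by ZooKeeper, which can serve as a cluster-wide auto-increment id.

  import org.apache.zookeeper.CreateMode;
  import org.apache.zookeeper.WatchedEvent;
  import org.apache.zookeeper.Watcher;
  import org.apache.zookeeper.ZooDefs;
  import org.apache.zookeeper.ZooKeeper;

  public class ZkIdSketch {
    public static void main(String[] args) throws Exception {
      ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, new Watcher() {
        public void process(WatchedEvent event) { /* no-op watcher */ }
      });
      // each create returns a path like /ids/id-0000000042; the numeric suffix is the id
      String path = zk.create("/ids/id-", new byte[0],
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
      long id = Long.parseLong(path.substring(path.lastIndexOf('-') + 1));
      System.out.println("assigned id = " + id);
      zk.close();
    }
  }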

incrCounter doesn't return a value

2013-08-04 Thread Jens Scheidtmann
Dear Manish, Use some combination of Job Id and/or IP address and other attributes to make your own unique ids. incrCounter would have to synchronize globally across your cluster if it were to provide incremental IDs. Best regards, Jens On Thursday, 1 August 2013, Manish Verma wrote: > Hi, > >
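For reference, a sketch of this suggestion in the new mapreduce API (not from the thread; the mapper types and output are placeholders): combine the task attempt id, which is already unique per task across the job, with a counter incremented only locally inside that task, so no global synchronization is needed.

  import java.io.IOException;
  import org.apache.hadoop.io.LongWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Mapper;

  public class UniqueIdMapper extends Mapper<LongWritable, Text, Text, Text> {
    private long localSeq = 0;                        // local to this task attempt only

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // the task attempt id (e.g. attempt_..._m_000003_0) is unique per task, so
      // taskAttemptId + localSeq is unique across the whole job without coordination
      String uniqueId = context.getTaskAttemptID().toString() + "_" + (localSeq++);
      context.write(new Text(uniqueId), value);
    }
  }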

Re: incrCounter doesn't return a value

2013-08-04 Thread Manish Verma
Rishi, I want to call this API from the mappers. Thanks, Manish On Sat, Aug 3, 2013 at 4:39 PM, Rishi Yadav wrote: > Hi Manish, > > Can you give some more details? Are you accessing counter values from the > driver? > > > > > On Wed, Jul 31, 2013 at 5:30 PM, Manish Verma > wrote: >> Hi, >> >> I

Re: access Permission issue

2013-08-04 Thread Harsh J
You will need to create a home directory for your user(s). sudo -u hadoop hadoop fs -mkdir -p /user/dyf sudo -u hadoop hadoop fs -chown dyf:dyf /user/dyf On Sun, Aug 4, 2013 at 7:21 PM, 武泽胜 wrote: > Sorry, I've made a mistake: the hadoop user group also does not have write > permission :( > > Upda

Re: access Permission issue

2013-08-04 Thread 武泽胜
Sorry, I've made a mistake: the hadoop user group also does not have write permission :( Updated ways to fix your problem: 1. Add write permission for the other group. 2. Add user dyf to the hadoop user group, and add write permission for the hadoop user group. Or if you do not want to enable the autho
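For reference, the two options as shell commands (a sketch, not from the original message; it assumes the path in question is / and the user/group names used earlier in this thread, and that hadoop fs -chmod on your version accepts symbolic modes):

  # option 1: give the "other" class write permission on /
  sudo -u hadoop hadoop fs -chmod o+w /
  # option 2: put dyf into the hadoop group and give the group write permission
  sudo usermod -a -G hadoop dyf
  sudo -u hadoop hadoop fs -chmod g+w /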

Re: access Permission issue

2013-08-04 Thread 武泽胜
Fix a typo. From: wu zesheng (wuzesh...@xiaomi.com) Reply-To: "user@hadoop.apache.org" Date: Sunday, August 4, 2013 9:29 PM To: "user@hadoop.apache.org" Subject: R

Re: access Permission issue

2013-08-04 Thread 武泽胜
The permission of / is 'hadoop:hadoop:drwxr-xr-x'. Your user name is dyf, so you're not the owner, and I guess you also don't belong to the user group 'hadoop', so you do not have write permission. Two ways to fix this: 1. Add dyf to user group hadoop. 2. Change the permission of /, add writ

hadoop task child failed

2013-08-04 Thread claytonly
Hello all, I ran into a problem when running the wordcount example from Hadoop. I run Hadoop in Xen-based VMs and want to increase the VMs' CPU cores, but when I increase a VM's vCPU count, a reduce task fails. The log details show the following: 13/08/01 16:19:27 INFO mapred.JobClient: Task Id : attempt_201