Re: Why can I only run 2 map/reduce tasks at a time?

2009-12-21 Thread Starry SHI
at http://jobtracker-address:50030/. 2. Your data is not large enough to create more than 2 map tasks. But in that case the number of reducers should still equal mapred.reduce.tasks. On Mon, Dec 21, 2009 at 9:39 AM, Starry SHI starr...@gmail.com wrote: Hi, I am currently using hadoop 0.19.2

Why can I only run 2 map/reduce tasks at a time?

2009-12-20 Thread Starry SHI
Hi, I am currently using hadoop 0.19.2 to run large data processing jobs. But I noticed that when a job is launched, only two map/reduce tasks are running at the very beginning. After one heartbeat (5 sec), another two map/reduce tasks are started. I want to ask how I can increase the map/reduce
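
The two-at-a-time pattern matches the stock TaskTracker limits: Hadoop 0.19 defaults to two map slots and two reduce slots per node. A minimal hadoop-site.xml sketch raising them (values are illustrative; the TaskTrackers must be restarted afterwards):

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>    <!-- default is 2 -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>4</value>    <!-- default is 2 -->
  </property>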

How to assign each Hadoop user to a different group?

2009-12-17 Thread Starry SHI
Hi. My Hadoop cluster (0.20.1) has multiple users. When I use different users' accounts to create files in HDFS, I find that no matter what group the user belongs to in Linux, the files in HDFS indicate that they belong to userX, supergroup. I wonder why all users belong to group

permission control in HDFS

2009-12-17 Thread Starry SHI
Hi. I wonder how permission control can be used in HDFS. I am using hadoop 0.20.1, and I have 3 users accessing HDFS. I use user1's account to create a file and chmod 600 this file in HDFS. However, when I tried to use user2's and user3's accounts to access the file belonging to user1, they can
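
A quick reproduction of the test being described (paths and file names are illustrative):

  # as user1: create a file readable by the owner only
  hadoop fs -put localfile /user/user1/secret.txt
  hadoop fs -chmod 600 /user/user1/secret.txt
  # as user2 or user3: this read should fail with an AccessControlException
  # whenever dfs.permissions is true on the cluster
  hadoop fs -cat /user/user1/secret.txt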

Re: Why DrWho

2009-12-17 Thread Starry SHI
Previously I have met the same problem. Hadoop uses the Linux command whoami to retrieve the current user, which is not supported in Solaris. There are some other errors caused by Solaris when running Hadoop. If you run Hadoop on Linux, these problems will disappear. Starry /* Tomorrow is another day. So
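
For reference, a sketch of what Hadoop's user lookup amounts to at the shell level (on Solaris, whoami traditionally lives in /usr/ucb rather than on the default PATH, which is one plausible reason the lookup fails):

  # Hadoop identifies the caller by shelling out, roughly:
  whoami    # fails on Solaris unless /usr/ucb is on the PATH
  groups    # group membership, resolved the same way
  # when the lookup fails, a placeholder user (the DrWho above) shows up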

Re: How to assign each Hadoop user to a different group?

2009-12-17 Thread Starry SHI
...@linkedin.com wrote: Group permissions come from id/whoami/etc. So define them that way in UNIX and they should get carried over to Hadoop. That said, it is probably the wrong behavior for the default, when group resolution fails, to be supergroup. On 12/17/09 4:11 AM, Starry SHI starr...@gmail.com
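
A minimal sketch of that suggestion, assuming standard Linux user tools (group and user names are illustrative):

  # define group membership on the UNIX side
  sudo groupadd hadoopers
  sudo usermod -a -G hadoopers user1
  # log in again, then check what the shell (and hence Hadoop) reports
  id -Gn user1
  hadoop fs -ls /user/user1    # inspect the owner/group HDFS records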

Re: permission control in HDFS

2009-12-17 Thread Starry SHI
is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories. </description> </property> Hope this helps. -Ravi On 12/17/09 4:22 AM, Starry SHI starr...@gmail.com wrote: Hi. I wonder how permission control can be used
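
The property being quoted is dfs.permissions from hadoop-default.xml; a sketch of setting it explicitly (hdfs-site.xml in 0.20.1; the NameNode must be restarted for the change to take effect):

  <property>
    <name>dfs.permissions</name>
    <!-- true enables checking; false turns it off without
         changing the stored mode, owner or group -->
    <value>true</value>
  </property>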

Re: Hadoop NameNode not starting up

2009-11-13 Thread Starry SHI
Actually you can point hadoop.tmp.dir to another place, e.g. /opt/hadoop_tmp or /var/hadoop_tmp. First create the folder there and assign the correct mode to the hadoop_tmp folder (chmod 777 so all users can use Hadoop). Then change the conf xml file accordingly, and run hadoop namenode
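
A sketch of the steps being described (directory path is the example from the mail; note that formatting wipes HDFS metadata, so only do it on a fresh setup):

  # create the new location and open it up
  sudo mkdir -p /opt/hadoop_tmp
  sudo chmod 777 /opt/hadoop_tmp
  # point hadoop.tmp.dir at it in the conf xml:
  #   <property>
  #     <name>hadoop.tmp.dir</name>
  #     <value>/opt/hadoop_tmp</value>
  #   </property>
  # then re-initialize and start the namenode
  hadoop namenode -format
  start-dfs.sh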

Re: Exceptions when starting hadoop

2009-09-27 Thread Starry SHI
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=cyd,cyd,adm,dialout,cdrom,plugdev,lpadmin,sambashare,admin ip=/192.168.33.7 cmd=listStatus src=/ dst=null perm=null 2009/9/26 Starry SHI starr...@gmail.com Is that the first time you start your cluster? My experience is that, when you start

Re: Where are temp files stored?

2009-09-27 Thread Starry SHI
process its own part of data. Do you have some ideas on this point? Best regards, Starry /* Tomorrow is another day. So is today. */ On Sat, Sep 26, 2009 at 15:07, dave bayer da...@cloudfactory.org wrote: On Sep 25, 2009, at 11:34 PM, Starry SHI wrote: Hi. I am wondering where the temp files
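
For the original question, these are the two properties that control where this era of Hadoop keeps temporary data, with their stock defaults from hadoop-default.xml:

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop-${user.name}</value>  <!-- base for other temp dirs -->
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>${hadoop.tmp.dir}/mapred/local</value>  <!-- map output spills -->
  </property>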

Re: Exceptions when starting hadoop

2009-09-26 Thread Starry SHI
Is that the first time you start your cluster? My experience is that, when you start the cluster once, then change the conf (say, add another slave) and restart your cluster, it sometimes generates some IPC issues (like the timeout in the namenode log). This change will cause the filesystem to go into
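
If a restart like this leaves HDFS unusable, one quick thing to check is safe mode (commands available in 0.19/0.20; use leave only once the underlying cause is understood):

  hadoop dfsadmin -safemode get     # is the namenode stuck in safe mode?
  hadoop dfsadmin -safemode leave   # manually exit it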

Re: multi-core nodes

2009-09-03 Thread Starry SHI
I also would like to know whether it is possible to configure this. Hope somebody can provide a solution. Starry On Fri, Sep 4, 2009 at 04:20, ll_oz_ll himanshu_cool...@yahoo.com wrote: Hi, Is Hadoop able to take multi-core nodes into account, so that nodes which have multiple cores run
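
Per-node task concurrency in this version is set per TaskTracker, so multi-core nodes can be given more slots; an illustrative sketch for an 8-core node (the split between map and reduce slots is a judgment call):

  <!-- hadoop-site.xml on the multi-core TaskTracker -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>6</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>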

Re: hadoop mailing list administrator is an idiot

2009-07-02 Thread Starry SHI
I don't know how this admin offended you. But I think if you meet with any problems, you can post them here for help, rather than complaining without providing reasons. Starry /* Tomorrow is another day. So is today. */ On Thu, Jul 2, 2009 at 10:00, Bogdan M. Maryniuk