Re: Welcoming Harsh J as a Hadoop committer

2011-09-18 Thread Chandraprakash Bhagtani
Congratulations Harsh!!!

On Fri, Sep 16, 2011 at 7:34 PM, Abhishek Mehta wrote:
> the best from tresata on the election Harsh!
>
> Cheers
>
> Abhishek Mehta ('Abhi')
> (e) abhis...@tresata.com
> (v) 980.355.9855
>
> On Sep 16, 2011, at 2:33 AM, Todd Lipcon wrote:
>
> > On behalf of the PMC

Re: Deploying my job jar on hadoop cluster

2010-08-29 Thread Chandraprakash Bhagtani
ks,
> Deepika
>
> -----Original Message-----
> From: Chandraprakash Bhagtani [mailto:cpbhagt...@gmail.com]
> Sent: Saturday, August 28, 2010 10:40 PM
> To: general@hadoop.apache.org
> Subject: Re: Deploying my job jar on hadoop cluster
>
> Deepika,
>
> You just have

Re: Deploying my job jar on hadoop cluster

2010-08-28 Thread Chandraprakash Bhagtani
Deepika,

You just have to run the following command on any of the cluster nodes:

HADOOP_HOME/bin/hadoop jar

This command will automatically copy the jar to all the tasktrackers.

On Sun, Aug 29, 2010 at 6:07 AM, Deepika Khera wrote:
> Hi,
>
> I want to deploy my map reduce job jar on the Hadoo
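For illustration, the full form of that command takes the jar, the driver class, and the job arguments; the jar path, class name, and HDFS paths below are hypothetical placeholders, not taken from this thread:

    HADOOP_HOME/bin/hadoop jar /path/to/myjob.jar com.example.MyJobDriver \
        /user/deepika/input /user/deepika/output

Job submission stages the jar in HDFS and the tasktrackers pull it from there, which is why no manual copying to each node is needed.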

Re: Performance tools for Hadoop

2010-03-03 Thread Chandraprakash Bhagtani
Hi Devender,

Currently there are two ways to analyze the performance of a Hadoop cluster and its jobs:

1. Hadoop Vaidya: a performance diagnostic tool for Hadoop jobs that executes a set of rules against the job counters and produces a report of areas where performance can be improved. But Hadoop Vaidya i
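For reference, a rough sketch of how Vaidya was invoked in the contrib packaging of that era; the script location and flag names may vary across Hadoop versions, and the paths are hypothetical, so verify against your distribution's docs:

    $HADOOP_HOME/contrib/vaidya/bin/vaidya.sh \
        -jobconf file:///path/to/job_conf.xml \
        -joblog file:///path/to/job_history_log

It reads the job's configuration and history log offline, so it can run after the job completes without touching the live cluster.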

Re: How do I trigger multiple Mapper tasks?

2010-01-17 Thread Chandraprakash Bhagtani
You can set the *mapred.max.split.size* property in mapred-site.xml to create more splits, and hence more map tasks.

On Mon, Jan 18, 2010 at 12:51 PM, Something Something <
mailinglist...@gmail.com> wrote:
> Hello,
>
> I read the documentation about running multiple Mapper tasks, but I can't
> get multiple Mappe
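As an illustration, a mapred-site.xml entry capping the split size at 32 MB; the value is hypothetical, and anything below the HDFS block size forces more splits than the default:

    <property>
      <name>mapred.max.split.size</name>
      <value>33554432</value> <!-- 32 MB in bytes; smaller values mean more splits -->
    </property>

Each input split gets its own map task, so roughly halving the maximum split size doubles the number of mappers for large inputs.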

Re: nodes lying idle

2009-09-12 Thread Chandraprakash Bhagtani
fate, Neo?
> Neo: No.
> Morpheus: Why Not?
> Neo: Because I don't like the idea that I'm not in control of my life.
>
> ----- Original Message -----
> From: Chandraprakash Bhagtani
> To: general@hadoop.apache.org
> Sent: Saturday, September 12, 2009 1

Re: nodes lying idle

2009-09-11 Thread Chandraprakash Bhagtani
You need to check your cluster's Map/Reduce task capacity, i.e. how many Map/Reduce tasks can run on the cluster at once. You can check it at http://JobtrackerServerIP:50030. You should also check the total number of map tasks in your job; it should be greater than the map task capacity of the cluster. Initial
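As a worked example with hypothetical numbers: a cluster of 10 tasktrackers, each configured with 2 map slots, has a map task capacity of 10 × 2 = 20 concurrent map tasks; a job that produces fewer than 20 splits can never occupy every slot, so some nodes will sit idle.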

Re: multicore node clusters

2009-09-10 Thread Chandraprakash Bhagtani
Hi,

You should definitely change mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum. If your tasks are more CPU-bound, then you should run a number of tasks equal to the number of CPU cores; otherwise you can run more tasks than cores. You can determine CPU and memory usage by running the "top" command on the datanodes. Y
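A minimal mapred-site.xml sketch along those lines, assuming 8-core worker nodes; the slot counts are illustrative, not a recommendation from this thread:

    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>8</value> <!-- one map slot per core for CPU-bound tasks -->
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>4</value> <!-- fewer reduce slots; reduces are often I/O-bound -->
    </property>

These are per-tasktracker limits, so cluster-wide capacity is the slot count multiplied by the number of nodes.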

Problem Configuring Hadoop Eclipse Plugin

2009-05-09 Thread Chandraprakash Bhagtani
Hi,

I am trying to configure the Hadoop Eclipse plugin 0.19.1 but am getting the following exception:

An internal error occurred during: "Connecting to DFS Test".
java.lang.IllegalStateException

Please help.

--
Thanks & Regards,
Chandra Prakash Bhagtani,