How to configure SWIM

2012-03-01 Thread Arvind
Hi all, can anybody help me configure SWIM (Statistical Workload Injector for MapReduce) on my Hadoop cluster?

Re: Why hadoop is written in java?

2010-10-09 Thread Arvind Kalyan
ning Hadoop: http://hadoop.apache.org/common/docs/current/single_node_setup.html#PreReqs Imagine how it would be if it were written in C/C++. While C/C++ might give you a performance improvement at run-time, it can be a total nightmare to develop and maintain. Especially if the network gets to be heterogeneo

Re: Datanode abort

2010-03-26 Thread Arvind Sharma
Is the server down for any reason? Maybe a system panic and it didn't reboot itself? What OS is the datanode running? The datanode is unreachable on the network... From: "y_823...@tsmc.com" To: common-user@hadoop.apache.org Sent: Fri, March 26, 2010 2:11:50 AM Subje

Re: WritableName can't load class in hive

2010-03-17 Thread Arvind Prabhakar
your file. -Arvind On Tue, Mar 16, 2010 at 2:50 PM, Oded Rotem wrote: > Actually, now I moved to this error: > > java.lang.RuntimeException: org.apache.hadoop.hive.serde2.SerDeException: > class org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe: expects either > BytesWritable

Re: Unexpected termination of a job

2010-03-04 Thread Arvind Sharma
Have you tried increasing the heap memory for your process? Arvind From: Rakhi Khatwani To: common-user@hadoop.apache.org Sent: Wed, March 3, 2010 10:38:43 PM Subject: Re: Unexpected termination of a job Hi, I tried running it on eclipse, the job

Re: DFSClient write error when DN down

2009-12-04 Thread Arvind Sharma
Thanks, Todd! Just wanted another confirmation, I guess :-) Arvind From: Todd Lipcon To: common-user@hadoop.apache.org Sent: Fri, December 4, 2009 8:35:56 AM Subject: Re: DFSClient write error when DN down Hi Arvind, Looks to me like you've identifie

Re: DFSClient write error when DN down

2009-12-04 Thread Arvind Sharma
Any suggestions would be welcome :-) Arvind From: Arvind Sharma To: common-user@hadoop.apache.org Sent: Wed, December 2, 2009 8:02:39 AM Subject: DFSClient write error when DN down I have seen similar error logs in the Hadoop Jira (Hadoop-2691, HDFS-795

DFSClient write error when DN down

2009-12-02 Thread Arvind Sharma
this problem on the client side? As I understood, the DFSClient APIs will take care of situations like this, and clients don't need to worry if some of the DNs go down. Also, the replication factor is 3 in my setup and there are 10 DNs (out of which TWO went down) Thanks! Arvind

Re: measuring memory usage

2009-09-09 Thread Arvind Sharma
esday, September 9, 2009 9:05:57 AM Subject: Re: measuring memory usage Linux vh20.dev.com 2.6.18-53.el5 #1 SMP Mon Nov 12 02:22:48 EST 2007 i686 i686 i386 GNU/Linux /usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0 On 9/8/09, Arvind Sharma wrote: > Which OS you are running the command ? Linux/MacOS/Windows ? &

Re: measuring memory usage

2009-09-08 Thread Arvind Sharma
Which OS are you running the command on? Linux/MacOS/Windows? Which JDK version? Arvind From: Ted Yu To: common-user@hadoop.apache.org Sent: Friday, September 4, 2009 11:57:44 AM Subject: measuring memory usage Hi, I am using Hadoop 0.20 How can I get

Re: Copying directories out of HDFS

2009-09-05 Thread Arvind Sharma
They do work on directories as well... Arvind From: Kris Jirapinyo To: common-user@hadoop.apache.org Sent: Friday, September 4, 2009 11:41:22 PM Subject: Re: Copying directories out of HDFS I thought -get and -copyToLocal don't work on directories, on

Re: Copying directories out of HDFS

2009-09-04 Thread Arvind Sharma
You mean programmatically or command line? Command line: bin/hadoop fs -get /path/to/dfs/dir /path/to/local/dir Arvind From: Kris Jirapinyo To: common-user Sent: Friday, September 4, 2009 5:15:00 PM Subject: Copying directories out of HDFS Hi all
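The programmatic route mentioned above can be sketched with Hadoop's `FileSystem` API. This is a minimal, hedged example (the paths are placeholders, and it assumes a reachable HDFS configured in the classpath's config files); `copyToLocalFile` recurses into directories, mirroring the `fs -get` behavior discussed in this thread.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyOutOfHdfs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Picks up fs.default.name from the Hadoop config on the classpath.
        FileSystem fs = FileSystem.get(conf);
        // Copies the whole directory tree to the local filesystem,
        // equivalent to: bin/hadoop fs -get /path/to/dfs/dir /path/to/local/dir
        fs.copyToLocalFile(new Path("/path/to/dfs/dir"),
                           new Path("/path/to/local/dir"));
        fs.close();
    }
}
```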

Re: Where does System.out.println() go?

2009-08-24 Thread Arvind Sharma
Most of the user-level log files go under $HADOOP_HOME/logs/userlogs... try there Arvind From: Mark Kerzner To: core-u...@hadoop.apache.org Sent: Monday, August 24, 2009 6:22:50 PM Subject: Where does System.out.println() go? Hi, when I run Hadoop in

Re: Getting free space percentage on DFS

2009-08-23 Thread Arvind Sharma
The APIs work as the user that started Hadoop. Moreover, I don't think user-level authentication is there yet in Hadoop for these APIs (not sure here, though)... From: Stas Oskin To: common-user@hadoop.apache.org Sent: Sunday, August 23, 2009 1:33:

Re: Getting free space percentage on DFS

2009-08-23 Thread Arvind Sharma
= ds.getDfsUsed(); long remaining = ds.getRemaining(); long presentCapacity = used + remaining; hdfsPercentDiskUsed = Math.round(((1.0 * used) / presentCapacity) * 100); } Arvind From: Stas Oskin To: core-u
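The arithmetic in the snippet above (with its parentheses restored) can be isolated as a plain Java helper. The class and method names here are illustrative; in Hadoop 0.19/0.20 the `used` and `remaining` values would come from `DistributedFileSystem`'s disk status (`ds.getDfsUsed()` / `ds.getRemaining()`), as in the quoted code.

```java
public class DfsUsage {
    // Percent of present capacity in use, rounded to the nearest integer.
    // "Present capacity" is used + remaining, matching the snippet above.
    static long percentUsed(long used, long remaining) {
        long presentCapacity = used + remaining;
        return Math.round(((1.0 * used) / presentCapacity) * 100);
    }

    public static void main(String[] args) {
        System.out.println(percentUsed(75L, 25L)); // 75
    }
}
```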

Re: Cluster Disk Usage

2009-08-20 Thread Arvind Sharma
Sorry, I also sent a direct e-mail in reply to one response there. I asked one question: what is the cost of these APIs? Are they expensive calls? Does the API only go to the NN, which stores this data? Thanks! Arvind From: Arvind Sharma To: common

Re: Cluster Disk Usage

2009-08-20 Thread Arvind Sharma
Using hadoop-0.19.2 From: Arvind Sharma To: common-user@hadoop.apache.org Sent: Thursday, August 20, 2009 3:56:53 PM Subject: Cluster Disk Usage Is there a way to find out how much disk space - overall or per Datanode basis - is available before creating a

Cluster Disk Usage

2009-08-20 Thread Arvind Sharma
left on the grid before trying to create the file. Arvind

Re: Hadoop - flush() files

2009-08-18 Thread Arvind Sharma
some known issues with that. Has anyone experienced any problem while using the sync() method? Arvind Hi, I was wondering if anyone here has started using (or has been using) the newer Hadoop versions (0.20.1 ???) - which provide an API for flushing ou

Hadoop - flush() files

2009-08-17 Thread Arvind Sharma
Hi, I was wondering if anyone here has started using (or has been using) the newer Hadoop versions (0.20.1 ???) - which provide an API for flushing out any open files on HDFS? Are there any known issues I should be aware of? Thanks! Arvind
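The API being asked about can be sketched as follows. In the Hadoop 0.20 line, `FSDataOutputStream` implements the `Syncable` interface, so `sync()` pushes buffered data out to the datanodes (later releases renamed this `hflush()`). The file path below is a placeholder, and this assumes a running HDFS reachable from the classpath configuration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SyncExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/sync-demo.txt"));
        out.writeBytes("partial record\n");
        // Flush buffered bytes to the datanodes without closing the file,
        // so other readers can see the data written so far.
        out.sync();
        out.writeBytes("rest of record\n");
        out.close();
        fs.close();
    }
}
```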

Error in starting Pseudo-Distributed mode hadoop-0.19.2

2009-08-14 Thread arvind subramanian
STARTUP_MSG: host = arvind-laptop/127.0.1.1 STARTUP_MSG: args = [-format] STARTUP_MSG: version = 0.19.2 STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.19 -r 789657; compiled by 'root' on Tue Jun 30 12:40:5

Re: How to re-read the config files

2009-08-13 Thread Arvind Sharma
? Without restarting the cluster (which is in production, and the customer wouldn't like that either :-) ) Thanks! Arvind From: Jakob Homan To: common-user@hadoop.apache.org Sent: Thursday, August 13, 2009 2:04:43 PM Subject: Re: How to re-read the config files

How to re-read the config files

2009-08-13 Thread Arvind Sharma
Hi, I was wondering if there is a way to let Hadoop re-read the config file (hadoop-site.xml) after making some changes to it. I don't want to restart the whole cluster for that. I am using Hadoop 0.19.2 Thanks! Arvind