Hi all,
Can anybody help me configure SWIM (Statistical Workload Injector for MapReduce) on my Hadoop cluster?
Here are the prerequisites for setting up and running Hadoop:
http://hadoop.apache.org/common/docs/current/single_node_setup.html#PreReqs
Imagine how it would be if it were written in C/C++.
While C/C++ might give you a performance improvement at run-time, it can be
a total nightmare to develop and maintain, especially if the network gets to
be heterogeneous.
Is the server down for any reason? Maybe a system panic and it didn't reboot
itself? What OS is the datanode running?
The datanode is unreachable on the network...
From: "y_823...@tsmc.com"
To: common-user@hadoop.apache.org
Sent: Fri, March 26, 2010 2:11:50 AM
Subject:
your file.
-Arvind
On Tue, Mar 16, 2010 at 2:50 PM, Oded Rotem wrote:
> Actually, now I moved to this error:
>
> java.lang.RuntimeException: org.apache.hadoop.hive.serde2.SerDeException:
> class org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe: expects either
> BytesWritable
Have you tried increasing the heap memory for your process?
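If it's the task JVMs that are running short, the usual knob on 0.19/0.20 is
mapred.child.java.opts in hadoop-site.xml - a sketch, with an illustrative 1 GB value:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>

The daemons themselves take their heap from HADOOP_HEAPSIZE in hadoop-env.sh.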
Arvind
From: Rakhi Khatwani
To: common-user@hadoop.apache.org
Sent: Wed, March 3, 2010 10:38:43 PM
Subject: Re: Unexpected termination of a job
Hi,
I tried running it on Eclipse; the job
Thanks, Todd!
Just wanted another confirmation, I guess :-)
Arvind
From: Todd Lipcon
To: common-user@hadoop.apache.org
Sent: Fri, December 4, 2009 8:35:56 AM
Subject: Re: DFSClient write error when DN down
Hi Arvind,
Looks to me like you've identified
Any suggestions would be welcome :-)
Arvind
From: Arvind Sharma
To: common-user@hadoop.apache.org
Sent: Wed, December 2, 2009 8:02:39 AM
Subject: DFSClient write error when DN down
I have seen similar error logs in the Hadoop Jira (HADOOP-2691, HDFS-795). Is
there a way to handle this problem on the client side? As I
understood, the DFSClient APIs will take care of situations like this, and
clients don't need to worry if some of the DNs go down.
Also, the replication factor is 3 in my setup and there are 10 DNs (out of which
TWO went down).
Thanks!
Arvind
Sent: Wednesday, September 9, 2009 9:05:57 AM
Subject: Re: measuring memory usage
Linux vh20.dev.com 2.6.18-53.el5 #1 SMP Mon Nov 12 02:22:48 EST 2007
i686 i686 i386 GNU/Linux
/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0
On 9/8/09, Arvind Sharma wrote:
> Which OS are you running the command on? Linux/MacOS/Windows?
> Which JDK version?
Which OS are you running the command on? Linux/MacOS/Windows?
Which JDK version?
Arvind
From: Ted Yu
To: common-user@hadoop.apache.org
Sent: Friday, September 4, 2009 11:57:44 AM
Subject: measuring memory usage
Hi,
I am using Hadoop 0.20
How can I get
They do work on directories as well...
Arvind
From: Kris Jirapinyo
To: common-user@hadoop.apache.org
Sent: Friday, September 4, 2009 11:41:22 PM
Subject: Re: Copying directories out of HDFS
I thought -get and -copyToLocal don't work on directories, only on files.
You mean programmatically or command line?
Command line:
bin/hadoop fs -get /path/to/dfs/dir /path/to/local/dir
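Programmatically, something like FileSystem.copyToLocalFile() should do it - a
minimal sketch (paths are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyOut {
  public static void main(String[] args) throws Exception {
    // Picks up hadoop-site.xml from the classpath.
    FileSystem fs = FileSystem.get(new Configuration());
    // Copies the whole directory tree from HDFS to the local filesystem.
    fs.copyToLocalFile(new Path("/path/to/dfs/dir"),
                       new Path("/path/to/local/dir"));
    fs.close();
  }
}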
Arvind
From: Kris Jirapinyo
To: common-user
Sent: Friday, September 4, 2009 5:15:00 PM
Subject: Copying directories out of HDFS
Hi all
Most of the user-level log files go under $HADOOP_HOME/logs/userlogs - try
there.
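Each task attempt gets its own subdirectory there, holding the stdout/stderr of
your code - e.g. (attempt ID illustrative):

$HADOOP_HOME/logs/userlogs/attempt_200908240001_0001_m_000000_0/stdout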
Arvind
From: Mark Kerzner
To: core-u...@hadoop.apache.org
Sent: Monday, August 24, 2009 6:22:50 PM
Subject: Where does System.out.println() go?
Hi,
when I run Hadoop in
The APIs work for the user with which Hadoop was started. Moreover, I don't
think user-level authentication is there yet in Hadoop for the APIs (not sure
here, though)...
From: Stas Oskin
To: common-user@hadoop.apache.org
Sent: Sunday, August 23, 2009 1:33:
long used = ds.getDfsUsed();
long remaining = ds.getRemaining();
long presentCapacity = used + remaining;
hdfsPercentDiskUsed = Math.round(((1.0 * used) / presentCapacity) * 100);
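For context, on the 0.20-era API the ds above would be the DiskStatus returned
by the DistributedFileSystem - a self-contained sketch under that assumption:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem.DiskStatus;

public class DfsUsage {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    DiskStatus ds = dfs.getDiskStatus();  // cluster-wide numbers from the NN
    long used = ds.getDfsUsed();
    long remaining = ds.getRemaining();
    long presentCapacity = used + remaining;
    long hdfsPercentDiskUsed = Math.round(((1.0 * used) / presentCapacity) * 100);
    System.out.println("DFS used: " + hdfsPercentDiskUsed + "%");
    fs.close();
  }
}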
Arvind
From: Stas Oskin
To: core-u
Sorry, I also sent a direct e-mail in response to one reply.
There I asked one question - what is the cost of these APIs? Are they
expensive calls? Does the API only go to the NN, which stores this data?
Thanks!
Arvind
From: Arvind Sharma
To: common
Using hadoop-0.19.2
From: Arvind Sharma
To: common-user@hadoop.apache.org
Sent: Thursday, August 20, 2009 3:56:53 PM
Subject: Cluster Disk Usage
Is there a way to find out how much disk space - overall or on a per-Datanode
basis - is available before creating a file? I'd like to know how much space is
left on the grid before trying to
create the file.
Arvind
I remember there were some known issues with that.
Has anyone experienced any problem while using the sync() method ?
Arvind
Hi,
I was wondering if anyone here has started using (or has been using) the newer
Hadoop versions (0.20.1?) - which provide an API for flushing out any open
files on HDFS.
Are there any known issues I should be aware of ?
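For reference, the call I mean is FSDataOutputStream.sync() - a minimal sketch
of the usage (path is a placeholder):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SyncDemo {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FSDataOutputStream out = fs.create(new Path("/tmp/open-file.log"));
    out.writeBytes("a record that should survive a client crash\n");
    out.sync();  // flush buffered data to the datanodes; file stays open
    out.close();
    fs.close();
  }
}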
Thanks!
Arvind
STARTUP_MSG: host = arvind-laptop/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.19.2
STARTUP_MSG: build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.19 -r
789657; compiled by 'root' on Tue Jun 30 12:40:5
Is there a way to do that without re-starting the
cluster (which is in production, and the customer wouldn't like that either :-) )?
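For the client side I know a fresh Configuration re-reads the files - a sketch
(it doesn't touch the running daemons, which is the part I need):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ReloadConf {
  public static void main(String[] args) {
    // The constructor loads hadoop-default.xml and hadoop-site.xml from the
    // classpath again; previously created objects keep their old values.
    Configuration conf = new Configuration();
    // Or point explicitly at a freshly edited copy:
    conf.addResource(new Path("/path/to/hadoop-site.xml"));
    System.out.println(conf.get("fs.default.name"));
  }
}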
Thanks!
Arvind
From: Jakob Homan
To: common-user@hadoop.apache.org
Sent: Thursday, August 13, 2009 2:04:43 PM
Subject: Re: How to re-read the config files
Hi,
I was wondering if there is a way to let Hadoop re-read the config file
(hadoop-site.xml) after making some changes to it.
I don't want to restart the whole cluster for that.
I am using Hadoop 0.19.2
Thanks!
Arvind