Tarandeep Singh wrote:
Hi,
I am running an MR job that requires some java.awt.* classes which can't
be used in headless mode.
Right now, I am running Hadoop in a single-node cluster (my laptop) which
has an X11 server running. I have set up my SSH server and client to do X11
forwarding.
Hi,
Do your steps qualify as separate MR jobs? Then the JobClient APIs should
be more than sufficient for such dependencies.
You can add the whole output directory of one job as the input of the next to
read all its files, and provide a PathFilter to ignore any files you don't
want processed, like side files. A sketch of both ideas follows.
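For example, a minimal sketch using the old org.apache.hadoop.mapred API; the
paths, job names, and driver class here are placeholders, not from this
thread:

  // imports: org.apache.hadoop.fs.{Path, PathFilter},
  //          org.apache.hadoop.mapred.*
  JobConf job1 = new JobConf(MyDriver.class);
  job1.setJobName("step-1");
  FileInputFormat.setInputPaths(job1, new Path("/input"));
  FileOutputFormat.setOutputPath(job1, new Path("/step1-out"));
  JobClient.runJob(job1);                 // blocks until step 1 completes

  JobConf job2 = new JobConf(MyDriver.class);
  job2.setJobName("step-2");
  // The whole output dir of step 1 becomes the input of step 2 ...
  FileInputFormat.setInputPaths(job2, new Path("/step1-out"));
  // ... but a PathFilter skips any files we don't want processed.
  FileInputFormat.setInputPathFilter(job2, SkipSideFiles.class);
  FileOutputFormat.setOutputPath(job2, new Path("/step2-out"));
  JobClient.runJob(job2);

  // The filter itself:
  public static class SkipSideFiles implements PathFilter {
    public boolean accept(Path p) {
      return !p.getName().startsWith("_"); // ignore _logs and friends
    }
  }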
From memory, some parts of AWT won't run in headless mode. I used to run an
X virtual framebuffer (Xvfb) on servers that created graphics. It's a standard
package on most Linux distros. I forget whether anything special was needed to
set it up, but it might be worth looking into.
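Roughly like this, assuming Xvfb is installed (the display number, screen
geometry, and job jar/class are arbitrary placeholders):

  Xvfb :1 -screen 0 1024x768x24 &
  export DISPLAY=:1
  hadoop jar myjob.jar com.example.MyDriver /input /output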
I use it all the time. See http://wiki.apache.org/hadoop/EclipsePlugIn
Kind regards
Steve Watt
From: aa...@buffalo.edu
To: core-u...@hadoop.apache.org
Date: 01/17/2010 01:52 AM
Subject: Eclipse Plugin for Hadoop
Hi all,
I was just looking around and I stumbled across the Eclipse plugin for
Hadoop.
On Mon, Jan 18, 2010 at 2:52 AM, Steve Loughran ste...@apache.org wrote:
Tarandeep Singh wrote:
[quoted text snipped]
hadoop fs -rmr /op
That command always fails. I am trying to run sequential Hadoop jobs. After
the first run, all subsequent runs fail while cleaning up (i.e., removing the
Hadoop dir created by the previous run). What can I do to avoid this?
Here is my Hadoop version:
# hadoop version
Hadoop
Can you try with dfs / without quotes? If using Pig to run the jobs, you can
use rmf within your script (again without quotes) to force the remove and
avoid an error if the file/dir is not present. Or, if doing this inside a
Hadoop job, you can use the FileSystem/FileStatus APIs to delete directories
(a sketch follows). HTH.
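For the FileSystem route, a minimal sketch (the /op path comes from the
command above; error handling omitted):

  // imports: org.apache.hadoop.conf.Configuration,
  //          org.apache.hadoop.fs.{FileSystem, Path}
  Configuration conf = new Configuration();
  FileSystem fs = FileSystem.get(conf);
  Path out = new Path("/op");
  if (fs.exists(out)) {
    fs.delete(out, true);  // true = recursive, the equivalent of -rmr
  }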
Cheers,
/R
Hmmm. I am actually running it from a batch file. Is hadoop fs -rmr less
stable than Pig's rm or Hadoop's FileSystem API?
Let me try your suggestion by writing a cleanup script in Pig.
-Thanks,
Prasen
On Tue, Jan 19, 2010 at 10:25 AM, Rekha Joshi rekha...@yahoo-inc.com wrote:
[quoted text snipped]
Hi,
When the NN is in safe mode, you get a read-only view of the Hadoop file
system (since the NN is reconstructing its image of the FS).
Use hadoop dfsadmin -safemode get to check whether it is in safe mode,
hadoop dfsadmin -safemode leave to leave safe mode forcefully, or hadoop
dfsadmin -safemode wait to block until the NN leaves safe mode on its own.
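So a batch script can make the cleanup safe by waiting first. A sketch, using
the /op path from earlier in the thread:

  hadoop dfsadmin -safemode wait
  hadoop fs -rmr /op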
They are only alternatives; hadoop fs -rmr works well for me. I do not
exactly know what error it gives you or how the call is invoked. On batch,
let's say in Perl, the below should work fine:
$cmd = "hadoop fs -rmr /op";
system($cmd);
Cheers,
/R
On 1/19/10 10:31 AM, prasenjit mukherjee wrote:
That was exactly the reason. Thanks a bunch.
On Tue, Jan 19, 2010 at 12:24 PM, Mafish Liu maf...@gmail.com wrote:
2010/1/19 prasenjit mukherjee pmukher...@quattrowireless.com:
I run hadoop fs -rmr .. immediately after start-all.sh. Does the namenode
always start in safe mode, and after