Hi:
I wish to use Hadoop streaming to run a program which requires specific
PATH and CLASSPATH variables. I have set these two variables in both
/etc/profile and ~/.bashrc on all slaves (and restarted the slaves).
However, when I run the Hadoop streaming job, the program generates an error
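A likely cause, assuming the tasks are spawned by the TaskTracker JVM: task processes do not run a login shell, so they never source /etc/profile or ~/.bashrc, and those variables are simply not visible to the job. Hadoop streaming's -cmdenv option passes environment variables to the tasks explicitly. A minimal sketch (the jar path, variable values, paths, and mapper/reducer names below are placeholders, not taken from the original message):

```shell
# Sketch: pass PATH and CLASSPATH to the streaming tasks via -cmdenv,
# instead of relying on /etc/profile or ~/.bashrc (which the non-login
# task JVMs never source). All paths and names below are placeholders.
hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming.jar \
    -cmdenv PATH=/usr/local/myprog/bin:/usr/bin:/bin \
    -cmdenv CLASSPATH=/usr/local/myprog/lib/myprog.jar \
    -input myInputDir \
    -output myOutputDir \
    -mapper mymapper.sh \
    -reducer myreducer.sh
```

Repeat -cmdenv once per variable; each task then sees the variable in its own environment.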
Hi All:
I'm new to the Hadoop platform, and I was trying to establish a network of 7
computers to form a cluster.
When I accessed the namenode web GUI at
http://ss1:50070 (http://mediaminer:50070/dfsnodelist.jsp?whatNodes=LIVE),
I found that the displayed configured capacity is much larger than the
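One thing worth checking: the "Configured Capacity" shown in the web GUI is the total size of the partitions holding each datanode's dfs.data.dir, not the space actually free for HDFS, so it can look much larger than what you expect. A sketch of how to compare the two from the shell (the data-directory path is a placeholder for your own dfs.data.dir setting):

```shell
# Sketch: compare HDFS's view of capacity with what the OS reports.
# "Configured Capacity" sums the partitions backing each dfs.data.dir;
# "DFS Remaining" is what is actually usable.
hadoop dfsadmin -report        # cluster-wide and per-datanode capacity
df -h /path/to/dfs/data        # placeholder: your dfs.data.dir partition
```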
On Mon, Jun 20, 2011 at 1:50 PM, Andy XUE andyxuey...@gmail.com wrote:
and the log file with the error
message (hadoop-rui-jobtracker-ss2.log, http://db.tt/PPGhEaa) are
linked.
This is a case of the FAQ entry "What does 'file could only be replicated
to 0 nodes, instead of 1' mean?":
http://wiki.apache.org/hadoop/FAQ#What_does_.22file_could_only_be_replicated_to_0_nodes.2C_instead_of_1.22_mean.3F
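Per that FAQ entry, the error usually means the namenode sees no usable datanode: the datanodes are down, out of disk space, or the namenode is still in safe mode. A quick diagnostic sketch, assuming shell access to the namenode and slave hosts:

```shell
# Sketch: check why no datanode can accept the block.
hadoop dfsadmin -report | grep -i "datanodes available"   # live vs dead nodes
hadoop dfsadmin -safemode get                             # must be OFF to write
# On each slave, confirm the datanode process is actually running:
jps | grep DataNode
```

If jps shows no DataNode on a slave, its log (under $HADOOP_HOME/logs) usually says why it failed to register with the namenode.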
Hi there:
I'm a new user of Hadoop and Nutch, and I am trying to run the crawler
*Nutch* on a distributed system powered by *Hadoop*. However, as it turns out,
the distributed system does not recognise any slave nodes in the cluster.
I've been stuck at this point for months and am desperate to look
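Two common reasons a cluster recognises no slaves are an empty or wrong conf/slaves file and broken passwordless SSH from the master. A sketch of the usual first checks (the hostname slave1 is a placeholder for one of your machines):

```shell
# Sketch: verify the master actually knows about, and can reach, the slaves.
cat $HADOOP_HOME/conf/slaves   # should list one slave hostname per line
ssh slave1 'jps'               # passwordless SSH must work; the output
                               # should include DataNode / TaskTracker
hadoop dfsadmin -report        # live datanodes as seen by the namenode
```

If the slaves file is correct but jps shows no daemons on the slave, the per-slave logs in $HADOOP_HOME/logs are the next place to look.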