So I'm new to Hadoop and I have been trying unsuccessfully to work
through the Quickstart tutorial to get a single node working in
pseudo-distributed mode. I can't seem to put data into HDFS using
release 0.18.2 under Java 1.6.0_04-b12:
$ bin/hadoop fs -put conf input
08/11/05 18:32:23 INFO dfs.DF
shahab mehmandoust wrote:
I'm trying to write a daemon that periodically wakes up and runs map/reduce
jobs, but I've had little luck. I've tried different approaches (including
Cascading) and I keep arriving at the exception below:
java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.mapred.MapTask$MapOutput
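An OutOfMemoryError surfacing in MapTask's output code is often tied to the task JVM heap and the in-memory sort buffer. A sketch of the relevant hadoop-site.xml knobs (the values below are illustrative assumptions, not recommendations for this job):

```xml
<property>
  <!-- JVM options passed to each spawned map/reduce task -->
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
<property>
  <!-- size in MB of the in-memory map-output sort buffer;
       must fit comfortably inside the child heap above -->
  <name>io.sort.mb</name>
  <value>100</value>
</property>
```

If io.sort.mb is raised without also raising the child heap, the map task can run out of memory while buffering output.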
On Wednesday 05 November 2008 15:27:34 Karl Anderson wrote:
> I am running into a similar issue. It seems to be affected by the
> number of simultaneous tasks.
For me, while I generally allow up to 4 mappers per node, in this particular
instance I had only one mapper reading from a single gzippe
Hey, anyone early for hadoop bootcamp as well? How about meeting for
drinks tonight? Send me a mail off-list...
Stefan
we just went from 8k to 64k after some problems,
Karl Anderson wrote:
On 4-Nov-08, at 3:45 PM, Yuri Pradkin wrote:
Hi,
I'm running the current snapshot (-r709609), doing a simple word count
using Python over streaming. I have a relatively modest setup of 17 nodes.
I'm getting this exception:
java.io.FileNotFoundException: /usr/local/hadoop/hadoop-hadoop/
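For reference, a word count over streaming usually amounts to the two small scripts sketched below. This is a generic sketch, not Yuri's actual code; the sample input is made up:

```python
#!/usr/bin/env python
# Generic streaming word count: the mapper emits "word<TAB>1",
# the reducer sums counts. Streaming sorts mapper output by key,
# so equal words arrive adjacent at the reducer.

def map_lines(lines):
    for line in lines:
        for word in line.split():
            yield "%s\t1" % word

def reduce_lines(lines):
    current, count = None, 0
    for line in lines:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                yield "%s\t%d" % (current, count)
            current, count = word, 0
        count += int(n)
    if current is not None:
        yield "%s\t%d" % (current, count)

if __name__ == "__main__":
    # in a real job these are two separate scripts reading
    # sys.stdin; combined here only for illustration
    sample = ["hello world", "hello hadoop"]
    for out in reduce_lines(sorted(map_lines(sample))):
        print(out)
```

In a real run each function would be its own script reading sys.stdin, submitted with the streaming jar's -mapper and -reducer options.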
On Nov 5, 2008, at 2:21 PM, Tarandeep Singh wrote:
I want to know whether the key/value pairs received by a particular
reducer at a
node are stored locally on that node or are stored on DFS (and hence
replicated over the cluster according to the replication factor set by the user)
Map outputs (and reduce inp
Hi,
I want to know whether the key/value pairs received by a particular reducer at a
node are stored locally on that node or are stored on DFS (and hence
replicated over the cluster according to the replication factor set by the user).
One more question: how does the framework replicate the data? Say Node A
writes a
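The answer the snippet above begins to give: map outputs and the shuffled reduce inputs live on local disk, not in DFS; only the job's final reduce output is written to HDFS, where the user-set replication factor applies. That factor is set in hadoop-site.xml (3 is the stock default, shown here only as an example):

```xml
<property>
  <!-- copies kept of each HDFS block; applies to job output
       files, not to intermediate map output on local disk -->
  <name>dfs.replication</name>
  <value>3</value>
</property>
```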
Is it possible there is a firewall blocking port 9000 on one or more of
the machines?
We had that happen to us with some machines that were kickstarted by our
IT; the firewall was configured to allow only ssh.
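A quick way to check whether the namenode port is reachable from another node is a plain TCP connect. A small sketch, not from the original thread; host and port are whatever fs.default.name points at:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except (socket.error, OSError):
        return False

# e.g. port_open("master", 9000) should be True from every slave;
# False on some nodes but True on others suggests a firewall rule
```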
[EMAIL PROTECTED] wrote:
Hi,
I am trying to use hadoop 0.18.1. After I start t
[EMAIL PROTECTED] wrote:
Hi Alex,
ping works on both of the machines, and in fact I can ssh onto
both of them. I stopped the service and reformatted the
namenode, but the problem persists. I use the same configuration file,
hadoop-site.xml, on both of the machines. The content of the
configuration