sk.java:219)
at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2124)
On Tue, Sep 9, 2008 at 1:47 PM, Michael Di Domenico
<[EMAIL PROTECTED]> wrote:
> Apparently, the fix to my original error is that hadoop is set up for a
> single local machine out of the box and I had
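The out-of-the-box single-machine default mentioned above usually comes down to the cluster addresses in hadoop-site.xml. A minimal sketch of the kind of change implied, where the hostname and ports are placeholders (not from this thread) and must match the actual cluster:

```xml
<!-- hadoop-site.xml: hypothetical values; host and ports must match your cluster -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>namenode.example.com:9001</value>
  </property>
</configuration>
```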
CopyFiles.java:743)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.util.CopyFiles.main(CopyFiles.java:763)
On Tue, Sep 9, 2008 at 1:14 PM, Michael Di Domenico
<[EMAIL PROTECTED]> wrote:
13:12:41 INFO dfs.DFSClient: Abandoning block
blk_9189111926428577428
On Tue, Sep 9, 2008 at 1:03 PM, Michael Di Domenico
<[EMAIL PROTECTED]> wrote:
> A little more digging, and it appears I cannot run distcp as someone other
> than hadoop on the namenode
> /tmp/hadoop-hadoop/mapred/system/job_2008090912
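One workaround consistent with the observation above, sketched under the assumption that the mapred system directory is the /tmp path shown and that HDFS permissions are in play (the source and destination URIs are placeholders): either submit the distcp as the hadoop user, or open up the job system directory to other users.

```shell
# Sketch only: run distcp as the hadoop user (URIs are placeholders)
sudo -u hadoop bin/hadoop distcp hdfs://nn:9000/src hdfs://nn:9000/dst

# ...or loosen permissions on the mapred system dir so other users can submit jobs
bin/hadoop dfs -chmod -R 777 /tmp/hadoop-hadoop/mapred/system
```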
e a "local" directory
On Tue, Sep 9, 2008 at 12:41 PM, Michael Di Domenico <[EMAIL PROTECTED]
> wrote:
> I'm not sure that's the issue; I basically tarred up the hadoop directory
> from the cluster and copied it over to the non-datanode
> but i do agree i've likel
tch the
> settings on the cluster itself.
>
> - Aaron
>
> On Sun, Sep 7, 2008 at 8:58 PM, Michael Di Domenico
> <[EMAIL PROTECTED]> wrote:
>
> > I'm attempting to load data into hadoop (version 0.17.1), from a
> > non-datanode machine in the cluster. I can
I'm attempting to load data into hadoop (version 0.17.1) from a
non-datanode machine in the cluster. I can run jobs, and copyFromLocal works
fine, but when I try to use distcp I get the error below. I don't understand
what the error means, can anyone help?
Thanks
blue:hadoop-0.17.1 mdidomenico$ time bin/h
Oops, missed the part where you already tried that.
On Mon, Jun 2, 2008 at 3:23 PM, Michael Di Domenico <[EMAIL PROTECTED]>
wrote:
> Depending on your Windows version, there is a DOS command called "subst"
> which you could use to virtualize a drive letter on your third machine
Depending on your Windows version, there is a DOS command called "subst"
which you could use to virtualize a drive letter on your third machine
On Fri, May 30, 2008 at 4:35 AM, Sridhar Raman <[EMAIL PROTECTED]>
wrote:
> Should the installation paths be the same in all the nodes? Most
> documenta
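For reference, subst maps a drive letter to a local directory path; a minimal sketch, where the drive letter and path are made up purely for illustration:

```
REM Map drive H: to a local directory (letter and path are hypothetical)
subst H: C:\hadoop\share
REM Remove the mapping when done
subst H: /D
```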
I second that request...
I use DRBD for another project where I work and definitely see its
benefits, but I haven't tried it with hadoop yet.
Thanks
On Tue, May 13, 2008 at 11:17 AM, Otis Gospodnetic <
[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'd love to see the DRBD+Hadoop write up! Not only woul
I'm trying to run the rand-sort benchmark on my cluster, but I seem to
be running out of heap space.
I changed the heap parameter in hadoop-env.sh to HADOOP_HEAPSIZE=3000,
did I not change the right parameter?
[EMAIL PROTECTED] hadoop]$ bin/hadoop jar hadoop-*-examples.jar sort rand
rand-sort
Ru
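For what it's worth, HADOOP_HEAPSIZE in hadoop-env.sh sets the heap for the Hadoop daemons (namenode, jobtracker, etc.); the map and reduce task JVMs take their heap from mapred.child.java.opts instead, which defaulted to a small -Xmx in this era. A sketch of the likely missing setting, with the -Xmx value chosen only for illustration:

```xml
<!-- hadoop-site.xml: heap for child task JVMs; the -Xmx value is illustrative -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m</value>
</property>
```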