Hi,
My source folder has a single folder and a single file inside that.
/user/user/distcpsrc/1/2 <r 3> 4 2008-07-22 04:22
In the destination, it is creating the folder '1' but not the file '2'.
The counters show 1 file has been skipped.
08/07/22 04:22:36 INFO mapred.JobClient:
Sure, I'm interested. Copenhagen is fine for me.
Cheers,
Christian
Mads Toftum wrote:
On Mon, Jul 21, 2008 at 03:52:01PM +0200, tim robertson wrote:
Is there a user base in Scandinavia that would be interested in meeting to
exchange feedback / ideas?
(in English...)
Yeah, I'd be
Hi,
KFS and HDFS sound like similar file systems. Could anyone outline the major
differences? Pros and cons of using each?
Thanks, Naama
Hi Ion,
I have the same problem with 0.16.3! I asked before but got no answer as
to what causes this. Anyone got any news on this?
Lars
---
Lars George, CTO
WorldLingo
Ion Badita wrote:
Hi,
I have a problem with counters being updated after I upgraded my
Hadoop from
I had a test with a log file analysis that was written in Java and ran on
Hadoop.
I ran my log file analysis on an Intel Quad Core processor with 2 GB of
memory.
I set the map tasks to 40 and the reduce tasks to 8.
The sizes of the log files I tested are 1 GB to 4 GB, because I ran out of
storage
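As an aside on the map-count setting above: in Hadoop of that era, the requested number of map tasks was only a hint, and the actual count was driven by the number of input splits (roughly input size divided by block size). A quick back-of-the-envelope check, assuming the default 64 MB HDFS block size:

```java
public class SplitCount {
    public static void main(String[] args) {
        long inputBytes = 4L * 1024 * 1024 * 1024; // 4 GB of logs, the upper end tested
        long blockSize  = 64L * 1024 * 1024;       // assumed default HDFS block size
        long splits = (inputBytes + blockSize - 1) / blockSize; // ceiling division
        System.out.println(splits); // prints 64
    }
}
```

So at the 4 GB end the job would run roughly 64 maps regardless of the 40 requested.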
hey all,
Let us say that I have 3 boxes: A, B, and C. Initially, map tasks are
running on all 3. After most of the mapping is done, C is 32% done
with reduce (so still copying stuff to its local disk) and A is stuck
on a particularly long map task (it got an ill-behaved record from the
There were many fixes and improvements to distcp in 0.16, but most of
the critical fixes made it into 0.15.2 and 0.15.3. Is the destination
empty? Anything already existing at the destination is skipped. -C
On Jul 22, 2008, at 4:39 AM, Murali Krishna wrote:
Hi,
My source folder has a
I'm trying to install Hadoop on our Linux machine, but after
start-all.sh none of the slaves can connect:
2008-07-22 16:35:27,534 INFO org.apache.hadoop.dfs.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host =
In the first instance make sure that all the relevant ports are actually
open. I would also check that your conf files are ok. Looking at the
example below, it seems that /work has a permissions problem.
(Note that telnet has nothing to do with Hadoop as far as I'm aware; a
better test would
If you have a static address for the machine, make sure that your
hosts file is pointing to the static address for the namenode host
name as opposed to the 127.0.0.1 address. It should look something
like this with the values replaced with your values.
127.0.0.1
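A minimal sketch of such a hosts file, with a hypothetical hostname and static address standing in for real values:

```
127.0.0.1      localhost
# hypothetical static address for the namenode host
192.168.1.10   namenode.example.com   namenode
```

The point is that the namenode's hostname resolves to the static address, not to 127.0.0.1, so slaves connecting by name reach the right interface.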
That's interesting. Why let the reducer fetch local data through HTTP rather than SSH?
- Original Message
From: Arun C Murthy [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Tuesday, July 22, 2008 2:19:36 PM
Subject: Re: question on HDFS
Mori,
On Jul 22, 2008, at 12:22 PM, Mori
Lincoln,
Take a look at the MultipleOutputFormat class or MultipleOutputs (in SVN tip)
A
On Wed, Jul 23, 2008 at 5:34 AM, Lincoln Ritter
[EMAIL PROTECTED] wrote:
Greetings,
I have what I think is a pretty straightforward newbie question. I
would like to write one file per key in the
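For the one-file-per-key case, one sketch with the old mapred API is to subclass MultipleTextOutputFormat (a concrete subclass of the MultipleOutputFormat mentioned above). Text keys and values are assumed here, and the job wiring around it is not shown:

```java
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

// Routes each record to an output file named after its key.
public class KeyFileOutputFormat extends MultipleTextOutputFormat<Text, Text> {
    @Override
    protected String generateFileNameForKeyValue(Text key, Text value, String name) {
        // "name" is the default part-NNNNN leaf name; prefixing it with the key
        // sends records for different keys to different files.
        return key.toString() + "/" + name;
    }
}
```

Set it with conf.setOutputFormat(KeyFileOutputFormat.class); note that many distinct keys mean many open output files.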