Hi everyone!
I tried to set up a Hadoop cluster on 4 PCs and ran into a
problem with hadoop-common. When I ran the command 'bin/hadoop jar
hadoop-*-examples.jar wordcount input output', the map tasks
completed quickly, but the reduce phase took very long to complete. I
think it's caused by the config,
-----Original Message-----
From: ext David B. Ritch [mailto:david.ri...@gmail.com]
Sent: Friday, September 11, 2009 11:07
To: common-user@hadoop.apache.org
Subject: Re: Decommissioning Individual Disks
Thank you both. That's what we did today. It seems fairly reasonable
when a node has a
--- On Friday, September 11, 2009, qiu tian tianqiu_...@yahoo.com.cn wrote:
From: qiu tian tianqiu_...@yahoo.com.cn
Subject: a problem in hadoop cluster: reduce task couldn't find map tasks' output.
To: common-user@hadoop.apache.org, common-user-h...@hadoop.apache.org
Date: Friday, September 11, 2009, 1:57 PM
Hi everyone!
I tried hadoop
For the Thrift server bug, the best way to get it fixed is to file a
bug report at http://issues.apache.org/jira
HBase 0.20 is out, download here:
http://hadoop.apache.org/hbase/releases.html
There is an HBase mailing list, hbase-u...@hadoop.apache.org.
And yes, I believe you do still need to
Hi all,
I'd like to remind everyone that RSVP is open for the next monthly Bay Area
Hadoop user group organized by Yahoo!.
Agenda and registration are available here:
http://www.meetup.com/hadoop/calendar/11166700/
Looking forward to seeing you on September 23rd.
Dekel
On Fri, Sep 11, 2009 at 12:23 PM, Allen Wittenauer
awittena...@linkedin.com wrote:
On 9/10/09 8:06 PM, David B. Ritch david.ri...@gmail.com wrote:
Thank you both. That's what we did today. It seems fairly reasonable
when a node has a few disks, say 3-5. However, at some sites, with
larger
Is there any sense in setting mapred.child.java.opts to a high value
if all we're using is Hadoop streaming? We've set it to 512MB, but
I don't know if it even matters.
thanks
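For reference, a minimal sketch of setting it per job with streaming
(assuming a 0.20-era streaming jar that accepts the generic -D option;
older releases used -jobconf, and the jar path, mapper, and reducer here
are illustrative):

  bin/hadoop jar contrib/streaming/hadoop-*-streaming.jar \
    -D mapred.child.java.opts=-Xmx512m \
    -input input -output output \
    -mapper /bin/cat -reducer /usr/bin/wc

Note that with streaming, -Xmx bounds only the heap of the child JVM that
hosts the streaming framework; your mapper and reducer run as separate OS
processes, so their memory is not limited by this setting.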
Dear All,
I have input directories of depth 3; the actual files are at the deepest
level (something like /data/user/dir_0/file0, /data/user/dir_1/file0,
/data/user/dir_2/file0). I want to write a mapreduce job to process these
files at the deepest level.
One way of doing so is
Dear all,
I have an input file hierarchy of depth 3, something like
/data/user/dir_0/file0, /data/user/dir_1/file0, /data/user/dir_2/file0. I
want to run a mapreduce job to process all the files at the deepest level.
One way of doing so is to specify the input path like /data/user/dir_0,
You can give something like /path/to/directories/*/*/*
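For example, using the wordcount job mentioned earlier in this digest (a
sketch: FileInputFormat expands the glob on the HDFS side, and the quotes
keep the local shell from expanding it first):

  bin/hadoop jar hadoop-*-examples.jar wordcount '/data/user/*/*' output

With files at /data/user/dir_N/file0, a two-level glob below the fixed
prefix already reaches them; in general, use one * per directory level.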
On Fri, Sep 11, 2009 at 2:10 PM, Boyu Zhang bzh...@cs.utsa.edu wrote:
Dear All,
I have input directories of depth 3; the actual files are at the deepest
level (something like /data/user/dir_0/file0, /data/user/dir_1/file0,
I used the following command line to build fuse-dfs:
ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1
My ant version is 1.7.1
I got the following error:
[exec] if gcc -DPACKAGE_NAME=\"fuse_dfs\"
-DPACKAGE_TARNAME=\"fuse_dfs\" -DPACKAGE_VERSION=\"0.1.0\"
-DPACKAGE_STRING=\"fuse_dfs 0.1.0\"
fuse.h should come with the FUSE software, not Hadoop. It should be
somewhere like /usr/include/fuse.h on a Linux machine, or possibly
/usr/local/include/fuse.h.
Did you install FUSE from source? If not, you probably need something
like Debian's libfuse-dev package installed via your operating system's
package manager.
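For instance, on a Debian or Ubuntu box (an illustrative command; the
package name differs on other distributions):

  sudo apt-get install libfuse-dev

That should put fuse.h under /usr/include/fuse.h, after which the ant
build above should be able to find it.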