So I let it go and figured time could
be spent better elsewhere; e.g. helping test the set of patches that
could get us a sync/flush/append on a patched hadoop 0.20 (hdfs-200,
etc.).
Sorry, I should have added a note to the cited thread that I'd wandered...
St.Ack
--
Todd Lipcon
Software
On Wed, Feb 24, 2010 at 10:39 PM, Susheel Varma susheel.va...@gmail.com wrote:
Hi,
We are trying to evaluate a small set of distributed data management
solutions (iRODS, HDFS, Lustre) for our project. We don't really have a
need for scalable computation, but rather our focus is more on
Tested download with md5: 8f40198ed18bef28aeea1401ec536cb9
Tried to verify the GPG signature, but Chris is not in
http://download.nextag.com/apache/hadoop/core/KEYS - he should be
added there if he is going to sign releases.
I ran unit tests on my machine at home - TestStreamingExitStatus
failed
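For anyone else checking the release artifacts, a small sketch of the md5 step in Python (the tarball name and expected digest below are just the ones quoted in this thread; substitute whichever file you downloaded):

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the hex md5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected):
    """Return True if the file's md5 matches the published digest."""
    return md5_of(path) == expected.lower()

# e.g. verify("hadoop-0.20.2.tar.gz", "8f40198ed18bef28aeea1401ec536cb9")
```

For the GPG step, `gpg --import KEYS` followed by `gpg --verify hadoop-0.20.2.tar.gz.asc` is the usual routine, which is why the signer needs to be in the KEYS file.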
On Thu, Feb 18, 2010 at 5:19 PM, Jeff Hammerbacher ham...@cloudera.com wrote:
Thanks Owen, that's useful information. It sounds like the API
incompatibility vote can be a separate issue.
Do we have consensus around rebasing on 0.21? Anyone already testing on 0.21
who would be upset if the
On Thu, Feb 18, 2010 at 6:08 PM, Konstantin Shvachko s...@yahoo-inc.com wrote:
On 2/18/2010 5:19 PM, Jeff Hammerbacher wrote:
Do we have consensus around rebasing on 0.21? Anyone already testing on
0.21
who would be upset if the current branch were to be retired?
Rebasing 0.21 will further
Hey Owen,
Thanks for rolling the second rc!
Looks like that file needs a chmod, getting 404 Forbidden:
[t...@minotaur:/home/omalley/public_html/0.20.2]$ ls -l
total 40664
-rw-r- 1 omalley omalley 41662994 Feb 18 01:21 hadoop-0.20.2.tar.gz
-rw-r--r-- 1 omalley omalley 195 Feb 18
On Thu, Feb 11, 2010 at 9:00 AM, Owen O'Malley omal...@apache.org wrote:
On Feb 10, 2010, at 10:51 PM, Todd Lipcon wrote:
I applied HADOOP-5612 to fix this, though I think creating a tarball
after chmod 755ing the configure scripts would also be correct.
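A sketch of that chmod-before-tarring idea, as a small Python walk over the source tree (the real fix went in as HADOOP-5612; this is just an illustration of the manual workaround):

```python
import os

def make_configure_scripts_executable(root):
    """Walk a source tree and set mode 755 on every file named 'configure'.

    Returns the list of paths that were fixed, so the caller can log them.
    """
    fixed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name == "configure":
                path = os.path.join(dirpath, name)
                os.chmod(path, 0o755)
                fixed.append(path)
    return fixed
```

Running this over the unpacked source before creating the tarball would preserve the executable bits for everyone who extracts it later.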
*Sigh* You can't blame a release
-0 on this particular tarball (md5sum
9759e01d7426c9bbe14758bf9ab69012). Trying to compile from the tarball
with ant -Dcompile.c++=true -Dcompile.native=true -Dlibhdfs=true
bin-package I ran into these issues on my karmic box at home:
1) The configure scripts are not all executable.
BUILD FAILED
If you can require a recent kernel, you could use cgroups:
http://broadcast.oreilly.com/2009/06/manage-your-performance-with-cgroups-and-projects.html
No one has integrated this with hadoop yet as it's still pretty new, and
Hadoop clusters are meant to be run on unshared hardware.
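A rough sketch of the cgroup approach from that article, assuming a cgroup v1 cpu controller mounted at /sys/fs/cgroup/cpu (the mount point, controller availability, and file names all vary by distro and kernel, and the writes need root, so this degrades to a no-op elsewhere):

```python
import os

CPU_ROOT = "/sys/fs/cgroup/cpu"  # assumed v1 mount point; varies by distro

def limit_cpu_share(group, pid, shares=512):
    """Place pid into a cpu cgroup with a reduced weight.

    The default cpu.shares weight is 1024, so 512 gives the group roughly
    half the CPU of an unconstrained group under contention.
    """
    if not os.path.isdir(CPU_ROOT) or os.geteuid() != 0:
        return "cgroup cpu controller unavailable or not root; skipping"
    cg = os.path.join(CPU_ROOT, group)
    try:
        os.makedirs(cg, exist_ok=True)
        with open(os.path.join(cg, "cpu.shares"), "w") as f:
            f.write(str(shares))
        with open(os.path.join(cg, "tasks"), "w") as f:
            f.write(str(pid))
    except OSError as e:
        return "cgroup setup failed: %s" % e
    return "pid %d limited to cpu.shares=%d" % (pid, shares)
```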
-Todd
On
Hi Naveen,
On Thu, Jan 21, 2010 at 7:54 PM, Naveen Kumar Prasad
naveenkum...@huawei.com wrote:
Hi All,
I am new to hadoop/Mapreduce usage.
Can anyone tell me how to write a simple MapReduce implementation to just
read some files from the input directory
and write to the output directory.
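For a first job that just copies input records to output, Hadoop Streaming with an identity mapper and no reducer is about the smallest thing that works. A sketch (the jar path and HDFS directories in the comment are assumptions for your cluster):

```python
#!/usr/bin/env python
# identity_mapper.py -- emits every input line unchanged.
#
# Run it under Hadoop Streaming with something like (paths are assumptions):
#   hadoop jar hadoop-streaming.jar \
#     -input /user/you/input -output /user/you/output \
#     -mapper identity_mapper.py -reducer NONE
import sys

def identity_map(lines):
    """Yield each input record unchanged -- the whole 'job' for a copy."""
    for line in lines:
        yield line

if __name__ == "__main__" and not sys.stdin.isatty():
    for line in identity_map(sys.stdin):
        sys.stdout.write(line)
```

With `-reducer NONE` the map output is written straight to the output directory, which is exactly the read-then-write behavior you described.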
Hi all,
Last week we had a vote regarding the compatibility problem introduced in
branch-0.20 by the backport of HDFS-793, necessary for HDFS-101, which fixes
a large bug in the write pipeline recovery code. The majority of people
seemed to indicate that this incompatibility was unacceptable, and
close to your second and third recommendations? Or what APIs I
should start with for my testing?
Thanks.
Xueling
On Sat, Dec 12, 2009 at 1:01 PM, Todd Lipcon t...@cloudera.com wrote:
Hi Xueling,
In that case, I would recommend the following:
1) Put all of your data on HDFS
Hey all,
In a recent discussion, we noticed that the 0.20.2 HDFS client will not be
wire-compatible with 0.20.0 or 0.20.1 due to the inclusion of HDFS-793
(required for HDFS-101). This raises a few questions:
1) Although we certainly do not guarantee wire compatibility between minor
versions (0.20
Hi Xueling,
One important question that can really change the answer:
How often does the dataset change? Can the changes be merged in
bulk every once in a while, or do you need to actually update them
randomly very often?
Also, how fast is quick? Do you mean 1 minute, 10 seconds, 1 second,
On Thu, Dec 3, 2009 at 9:39 AM, huan@accenture.com wrote:
-Original Message-
From: Todd Lipcon [mailto:t...@cloudera.com]
Sent: Monday, November 30, 2009 8:15 AM
To: general@hadoop.apache.org
Subject: Re: what is the major difference between Hadoop and
cloudMapReduce
Hi Sergey,
I replied to your post in common-user - please don't double-post in the
future; it just makes the threads harder to follow for others who might
have the same problem.
Thanks
-Todd
2009/12/1 Чуканов Сергей schuka...@rbc.ru
Hi, suddenly I’ve got a problem starting the Namenode:
.
Thanks.
Hi Huan,
I guess I misremembered or misread the paper.
Given this technique, doesn't it mean that reducers can only work when
the reduce operation is commutative and associative?
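To make that concrete, here's a toy illustration (mine, not from the paper) of why incremental or partial reduces need a commutative, associative operation: partial sums recombine correctly, but naively recombining partial means does not, unless each partition also carries its count:

```python
data = [1, 2, 3, 4, 5, 100]
part_a, part_b = data[:2], data[2:]  # deliberately uneven partitions

# Sum is commutative and associative: reducing each partition and then
# reducing the partial results matches a single global reduce.
assert sum([sum(part_a), sum(part_b)]) == sum(data)

def mean(xs):
    return sum(xs) / len(xs)

# Mean is not associative: the mean of partial means is wrong when the
# partitions have different sizes.
naive = mean([mean(part_a), mean(part_b)])
assert naive != mean(data)

# Carrying (sum, count) pairs restores a correct combine step.
pairs = [(sum(part_a), len(part_a)), (sum(part_b), len(part_b))]
total, count = (sum(col) for col in zip(*pairs))
assert total / count == mean(data)
```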
-Todd
-Original Message-
From: Todd Lipcon [mailto:t...@cloudera.com]
Sent: Sunday, November 29, 2009 10:15 AM