There is somewhat less need for this at this point since files appear
atomically and cannot be modified (even for append).
There are rumors of impending appendability.
On 10/9/07 7:35 PM, "谢纲" <[EMAIL PROTECTED]> wrote:
> Hi,
> Is there any lock service in Hadoop to sync access to files (such
> as Chubby in GFS)?
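The atomic, non-modifiable file creation described in the reply above can itself serve as a crude mutual-exclusion primitive: whichever client creates an agreed-upon lock file first holds the lock. A minimal sketch of the idea using the local filesystem as a stand-in for the DFS (the lock path is illustrative):

```python
import errno
import os
import tempfile

# Illustrative lock-file path; on HDFS this would be an agreed-upon DFS path.
LOCK_PATH = os.path.join(tempfile.mkdtemp(), "demo.lock")

def try_lock(path):
    """Atomically create the lock file; True iff we won the race."""
    try:
        os.close(os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
        return True
    except OSError as e:
        if e.errno == errno.EEXIST:
            return False
        raise

def unlock(path):
    os.remove(path)

assert try_lock(LOCK_PATH)      # first creator wins
assert not try_lock(LOCK_PATH)  # everyone else loses until...
unlock(LOCK_PATH)               # ...the holder releases
```

This gives mutual exclusion but, unlike Chubby, no leases: a crashed holder leaves the lock file behind forever.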
Hi,
I have a Hadoop cluster of three machines. When a wordcount example was
submitted, it worked fine.
Nowadays when I submit a job I get a socket error, and the job does not run.
Any idea why this is happening?
Thanks in Advance
Preethi.
Hi,
Is there any lock service in Hadoop to sync access to files (such
as Chubby in GFS)?
Thanks
--
Xie Gang
This is totally rad. Note that they are using Tivoli instead of Hadoop On
Demand. Any comment from the HOD camp?
On 10/9/07, Chris Dyer <[EMAIL PROTECTED]> wrote:
>
> I'm one of the guinea pigs in this project, and I can definitely
> confirm that it is Hadoop/HDFS. :)
>
> > Plus, there's no other clone of the published pieces of Google
> > infrastructure that's open source and this far along, so what else
> > could they be talking about? ;-)
I ran into the same issue as Jason (i.e. the Hadoop daemon runs as one
user, but the job is submitted by another user; please see the
details posted by Jason below). I saw Jason's email in the archive, but
nobody replied. Has this problem already been posted and answered,
and I didn't
?? wrote:
Hi,
I am trying to debug Hadoop DFS with Eclipse. When I launch
NameNode as the main class, it fails in createSocketAddr. I find
that there is no host property in the configuration; it seems that
hadoop-default.xml and hadoop-site.xml are not loaded.
So, how do I configure the
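For what it's worth, Hadoop's Configuration loads hadoop-default.xml and hadoop-site.xml from the classpath, so when launching the NameNode from Eclipse the conf/ directory must be on the launch configuration's classpath or no host property will be found. A minimal hadoop-site.xml of that era looks roughly like this (the host:port value is illustrative):

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- illustrative NameNode host:port -->
    <value>localhost:9000</value>
  </property>
</configuration>
```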
I'm one of the guinea pigs in this project, and I can definitely
confirm that it is Hadoop/HDFS. :)
> > Plus, there's no other clone of the published pieces of Google
> > infrastructure that's open source and this far along, so what else
> > could they be talking about? ;-)
> >
> If IBM is involved
Hi Toby,
Toby DiPasquale wrote
> Plus, there's no other clone of the published pieces of Google
> infrastructure that's open source and this far along, so what else
> could they be talking about? ;-)
>
If IBM is involved then, logically, as you point out, the code may be
Hadoop, HDFS/HBase - but if
On Tue, Oct 09, 2007 at 08:05:35AM -0400, Toby DiPasquale wrote:
> On 10/9/07, Jonathan Hendler <[EMAIL PROTECTED]> wrote:
> > Hey, where's Hadoop? I've never seen an open-source version of Bigtable.
>
> It's called HBase:
>
> http://wiki.apache.org/lucene-hadoop/Hbase
>
> Link is right on the front page of the wiki. AFAIK it's not prime-time
> yet, but it's being actively worked
On 10/9/07, Jonathan Hendler <[EMAIL PROTECTED]> wrote:
> Hey, where's Hadoop? I've never seen an open-source version of Bigtable.
It's called HBase:
http://wiki.apache.org/lucene-hadoop/Hbase
Link is right on the front page of the wiki. AFAIK it's not prime-time
yet, but it's being actively worked
Hey, where's Hadoop? I've never seen an open-source version of Bigtable.
... "The centers will run an open-source version of Google’s data center
software, and I.B.M. is contributing open-source tools to help students
write Internet programs and data center management software."
via slashdot:
htt
Hello Dennis, Ted and Christophe.
I had, as a precaution, built my jar so that the lib/ directory
contained both the actual jars AND the jars unzipped, i.e.:
lib/:
foo.jar
foo2.jar
foo3.jar
foo1/api/bar/gazonk/foo.class
foo2/
Just to cover as much ground as possible.
I do include a class
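A jar is just a zip archive, so the belt-and-braces layout described above (dependency jars under lib/, plus the same code unpacked as class files) can be sketched with the Python stdlib; the foo* names mirror the illustrative names in the mail:

```python
import os
import tempfile
import zipfile

# Sketch: build a job jar whose lib/ directory carries both the dependency
# jars as-is and, as a fallback, their unpacked class files.
JAR_PATH = os.path.join(tempfile.mkdtemp(), "job.jar")
with zipfile.ZipFile(JAR_PATH, "w") as jar:
    jar.writestr("lib/foo.jar", b"")                        # dependency jar, as-is
    jar.writestr("lib/foo2.jar", b"")
    jar.writestr("lib/foo1/api/bar/gazonk/foo.class", b"")  # same code, unpacked

with zipfile.ZipFile(JAR_PATH) as jar:
    assert "lib/foo.jar" in jar.namelist()
```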
Hi,
I'm totally new to hadoop, and I was reading the docs on HDFS. It mentions
something called "streaming data access". Does this simply mean that HDFS
has been optimized for throughput rather than latency? I guess the part I'm
having trouble understanding is the "streaming" aspect of it. Except
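"Streaming access" does broadly mean throughput over latency: the reader consumes a file front-to-back in large sequential chunks rather than seeking around in it, which is the pattern HDFS's large blocks favor. A minimal sketch of that access pattern (plain local I/O standing in for an HDFS stream; the chunk size is illustrative):

```python
import io

CHUNK = 64 * 1024  # illustrative; HDFS reads are even larger sequential runs

def stream_checksum(f, chunk=CHUNK):
    """Read the file strictly front-to-back in large chunks; no seeks."""
    total = 0
    while True:
        buf = f.read(chunk)
        if not buf:
            return total
        total += sum(buf)  # stand-in for per-record processing

data = bytes(range(256)) * 10
assert stream_checksum(io.BytesIO(data)) == sum(data)
```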
- Original Message -
From: "qi wu" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, October 09, 2007 4:18 PM
Subject: Hadoop/Lucene/Nutch user in Beijing Get Together?
> Hi,
> I see that some people from China are also on Lucene-related mailing
> lists. Does anyone have interest to me
Andrzej gave me a lot of help when he pointed me toward the kill
-SIGQUIT [pid] command. This will write a Java thread
dump to stdout (which is caught in
logs/userlogs/[task]/stdout/part-##). This is a lifesaver if
you're getting stuck anywhere and not sure why.
--Ned
On 10/8
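The JVM dumps the stack of every live thread when it receives SIGQUIT; the same kind of all-threads dump can be sketched with the Python stdlib (sys._current_frames plus traceback; the signal wiring itself is left out):

```python
import sys
import threading
import traceback

def dump_all_threads():
    """Return a text dump of every live thread's stack,
    roughly like a JVM SIGQUIT thread dump."""
    names = {t.ident: t.name for t in threading.enumerate()}
    lines = []
    for ident, frame in sys._current_frames().items():
        lines.append('"%s" (id %d)' % (names.get(ident, "?"), ident))
        lines.extend(l.rstrip() for l in traceback.format_stack(frame))
    return "\n".join(lines)

print(dump_all_threads())
```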
On Tue, 2007-10-09 at 09:57 +0530, Preethi Chockalingam wrote:
> Hi,
>
> What is meant by an External HDFS or External Mapred?
In the Hadoop-On-Demand world, this is a term used to refer to an HDFS
or a Map/Reduce cluster that's been started outside the HOD provisioning
system. These are also re