read the wiki page, I figured that the mention of AFS was
essentially a typo, since everyone's been steaming ahead with FileSystem.
Standardizing FileSystem makes total sense to me, I just wanted to confirm
that plan.
Best,
Andrew
On Fri, Jun 14, 2013 at 9:38 AM, Stephen Watt wrote:
> This is a good point Andrew. The hangout was actually the first time I'd
> heard about the Abstr
testing some
parts of FileSystem.
Are we going to end up with two different sets of validation tests? Or just
choose one API over the other? FileSystem is supposed to eventually be
deprecated in favor of FileContext (HADOOP-6446, filed in 2009), but actual
uptake in practice has been slow.
Best,
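One answer to the two-test-suites question above can be sketched as a single shared contract suite. The snippet below is an illustrative Python sketch with invented names (FsAdapter, DictFs), not Hadoop's actual test code: the contract assertions are written once against a tiny adapter interface, and each API (FileSystem, FileContext) would supply its own adapter.

```python
class FsAdapter:
    """Minimal surface the contract tests exercise (invented for illustration)."""
    def write(self, path, data): raise NotImplementedError
    def read(self, path): raise NotImplementedError
    def exists(self, path): raise NotImplementedError

class DictFs(FsAdapter):
    """Toy in-memory stand-in for one API's adapter."""
    def __init__(self):
        self._files = {}
    def write(self, path, data):
        self._files[path] = data
    def read(self, path):
        return self._files[path]
    def exists(self, path):
        return path in self._files

def contract_round_trip(fs):
    # the contract: what was written must be readable back unchanged
    fs.write("/a", b"hello")
    assert fs.exists("/a")
    assert fs.read("/a") == b"hello"

# each API gets one adapter; the assertions themselves are shared
for fs in [DictFs()]:
    contract_round_trip(fs)
```

With this shape, choosing one API over the other only retires an adapter, not the validation suite.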
For those interested - I posted a recap of this morning's Google Hangout on the
Wiki Page at https://wiki.apache.org/hadoop/HCFS/Progress
On Jun 5, 2013, at 8:14 PM, Stephen Watt wrote:
> Hi Folks
>
> Per Roman's recommendation I've created a Wiki Page for organizing the w
respond back to me if you're
interested or would like to propose a different time. I'll update our Wiki page
with the logistics.
Regards
Steve Watt
----- Original Message -----
From: "Roman Shaposhnik"
To: "Stephen Watt"
Cc: common-dev@hadoop.apache.org, m
.org
Sent: Friday, May 24, 2013 3:47:04 PM
Subject: Re: [DISCUSS] Ensuring Consistent Behavior for Alternative Hadoop
FileSystems + Workshop
On 24 May 2013 00:52, Stephen Watt wrote:
> Hi Folks
>
> Hadoop's pluggable filesystem architecture supports the ability to enable
> an alte
Hi Folks
Hadoop's pluggable filesystem architecture supports the ability to enable an
alternate filesystem for use with Hadoop by writing a plugin for it. We now
have several alternate filesystems that have Hadoop FileSystem plugins and
because this isn't a very well understood topic, I've been
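The plugin model described above can be sketched roughly as follows. This is a hedged Python illustration, not Hadoop's real mechanism (which is Java, where configuration maps fs.<scheme>.impl to a class extending org.apache.hadoop.fs.FileSystem); the scheme "myfs" and all class names here are invented.

```python
from urllib.parse import urlparse

class FileSystem:
    """Interface an alternate-filesystem plugin implements (illustrative)."""
    def open(self, path):
        raise NotImplementedError

class InMemoryFileSystem(FileSystem):
    """Toy plugin standing in for, e.g., a glusterfs-backed implementation."""
    def __init__(self):
        self._files = {"/data": b"payload"}
    def open(self, path):
        return self._files[path]

# configuration: URI scheme -> implementation class
conf = {"fs.myfs.impl": InMemoryFileSystem}

def get_filesystem(uri, conf):
    # resolve the scheme of the path to the registered plugin
    scheme = urlparse(uri).scheme
    impl = conf["fs.%s.impl" % scheme]  # KeyError if no plugin is registered
    return impl()

fs = get_filesystem("myfs://host/data", conf)
```

The point of the sketch: the framework only ever talks to the FileSystem interface, so enabling an alternate filesystem is a matter of registering an implementation under its scheme.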
both bug fixes and features - also given
that you can CI it against Apache Hadoop trunk at the same time?
On Thu, May 23, 2013 at 11:47 PM, Stephen Watt wrote:
> (Resending - I think the first time I sent this out it got lost within all
> the ByLaws voting)
>
> Hi Folks
>
>
(Resending - I think the first time I sent this out it got lost within all the
ByLaws voting)
Hi Folks
My name is Steve Watt and I am presently working on enabling glusterfs to be
used as a Hadoop FileSystem. Most of the work thus far has involved developing
a Hadoop FileSystem plugin for glusterfs. I'm getting to the point where the
plugin is becoming stable and I've been trying to und
URL: https://issues.apache.org/jira/browse/HADOOP-6941
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 0.21.0
Environment: SLES 11, Apache Harmony 6 and SLES 11, IBM Java 6
Reporter: Stephen Watt
Fix For: 0.21.1
https://issues.apache.org/jira/browse/HADOOP-6924
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 0.20.2, 0.20.1, 0.20.0
Environment: SLES 10, IBM Java 6
Reporter: Stephen Watt
Fix For: 0.20.3
The src/native/configure script used to build the native
https://issues.apache.org/jira/browse/HADOOP-6923
Project: Hadoop Common
Issue Type: Bug
Components: native
Affects Versions: 0.20.2, 0.20.1, 0.20.0
Environment: SLES 10, IBM Java 6, Apache Hadoop 0.20.x
Reporter: Stephen Watt
Fix For
It seems quite self-explanatory: "Annotation processing got disabled,
since it requires a 1.6 compliant JVM".
Hadoop 0.20.x and above needs Java 1.6. You're likely running with Java
1.5.
Regards
Steve Watt
From:
Saikat Kanjilal
To:
"common-dev@hadoop.apache.org"
Date:
08/03/2010 07:59 AM
Su
https://issues.apache.org/jira/browse/HADOOP-6895
Project: Hadoop Common
Issue Type: Bug
Components: native
Environment: SLES 10, IBM Java 6, Hadoop 0.21.0-rc0
Reporter: Stephen Watt
Priority: Minor
Fix For: 0.21.0, 0.22.0
On Mon, Jul 12, 2010 at 2:42 PM, Segel, Mike wrote:
> How can you say zip files are 'best codecs' to use?
>
> Call me silly but I seem to recall that if you're using a zip'd file for
> input you can't really use a file splitter?
> (Going from memo
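The splittability point raised above can be demonstrated with a small stand-in experiment (plain Python, not Hadoop code; gzip is used here as a proxy for zip's single compressed stream): a plain text file can be split at an arbitrary byte offset because a reader can resync on the next newline, whereas a mid-file slice of a compressed stream cannot be decoded at all.

```python
import gzip
import zlib

# 100 fixed-size newline-terminated records
data = b"".join(b"record-%03d\n" % i for i in range(100))
gz = gzip.compress(data)

# Plain text: a worker handed the second half of the file skips ahead to
# the next newline boundary and then reads whole records from there.
mid = len(data) // 2
resync = data.find(b"\n", mid) + 1
second_half = data[resync:].splitlines()

# Compressed stream: decompression must begin at the start of the stream;
# a slice taken from the middle of the file is not decodable.
try:
    gzip.decompress(gz[len(gz) // 2 :])
    mid_stream_readable = True
except (OSError, EOFError, zlib.error):
    mid_stream_readable = False
```

This is why whole-file zip/gzip input effectively forces one map task per file, while line-oriented text (or a block-splittable codec) can be divided among many tasks.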
This is likely a result of how things are now being built post
project-split, but previously, for the hadoop-0.20.x releases there was a
top level build.xml file which would orchestrate building the sub-projects
which were split underneath the src directory, resulting in a final
hadoop-20.x-cor
Please let me know if any of these assertions are incorrect. I'm going to be
adding any feedback to the Hadoop Wiki. It seems well documented that the
LZO Codec is the most performant codec (
http://blog.oskarsson.nu/2009/03/hadoop-feat-lzo-save-disk-space-and.html)
but it is GPL infected and thus it
Hi Tom,
I'm trying to build Hadoop 0.21.0 locally, but it's failing because the
hadoop root dir is missing the build.xml in the tar.gz. Is there a new
build process? I'm currently using the ant clean tar test-core
directives.
Regards
Steve Watt
From:
Tom White
To:
common-dev
Date:
07/08/2
Sorry, left out the link to the patch -
https://issues.apache.org/jira/browse/MAPREDUCE-1262
Steve
From:
Stephen Watt/Austin/IBM
To:
common-dev@hadoop.apache.org
Date:
01/15/2010 01:26 PM
Subject:
Re: Eclipse plugin
Hi Leen/Philip
There is an open issue with Hudson using an old version of Eclipse to test
patches. It's presently preventing the patch I've contributed from being
committed. In the interim, I've attached the plugin jar to the patch
which you can just download and use as is.
Kind regards
Steve Wat
Components: documentation
Affects Versions: 0.20.1
Reporter: Stephen Watt
Priority: Minor
Fix For: 0.20.2, 0.21.0, 0.22.0
The documentation on the Hadoop Site, Wiki and Download pages specifies using
the Sun distribution of Java. While many people use and test
https://issues.apache.org/jira/browse/HADOOP-6416
Project: Hadoop Common
Issue Type: Bug
Components: contrib/eclipse-plugin
Affects Versions: 0.20.1
Reporter: Stephen Watt
Fix For: 0.20.2, 0.21.0, 0.22.0
contrib/eclipse-plugin is still listed under
tree.
Cheers,
Tom
On Wed, Dec 2, 2009 at 10:20 AM, Stephen Watt wrote:
> Hi Folks
>
> I am trying to apply the patch I have attached at
> https://issues.apache.org/jira/browse/HADOOP-6360 to check-in the fix for
> the eclipse-plugin. I have followed all the steps on
> http
Hi Folks
I am trying to apply the patch I have attached at
https://issues.apache.org/jira/browse/HADOOP-6360 to check-in the fix for
the eclipse-plugin. I have followed all the steps on
http://wiki.apache.org/hadoop/HowToContribute and have run apply-patch
against trunk locally and therefore a
Hi Zijian
Are you asking if your algorithms are applicable to Hadoop? i.e., can they
be decomposed into a Map/Reduce equivalent?
If this is the case, I'd recommend you provide the specifics of each
algorithm so that you can get a more specific response.
Kind regards
Steve Watt
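The "can it be decomposed into Map/Reduce?" test in the reply above can be sketched concretely. This is a hedged plain-Python illustration (not Hadoop's API): an algorithm fits the model if it can be phrased as a per-record map step plus a per-key reduce step, with the framework grouping map output by key in between. Word count is the usual example.

```python
from collections import defaultdict

def map_word_count(record):
    # map: one input record -> zero or more (key, value) pairs
    for word in record.split():
        yield word, 1

def reduce_word_count(key, values):
    # reduce: one key and all of its values -> one aggregate result
    return key, sum(values)

def run_mapreduce(records, mapper, reducer):
    groups = defaultdict(list)
    for record in records:                 # map phase
        for key, value in mapper(record):
            groups[key].append(value)      # shuffle: group values by key
    return dict(reducer(k, v) for k, v in groups.items())  # reduce phase

counts = run_mapreduce(["a b a", "b a"], map_word_count, reduce_word_count)
# counts maps each word to its total occurrences
```

If an algorithm's core aggregation cannot be written in this key-grouped form (for example, if every step depends on global state), it won't decompose cleanly.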
From:
Hi Dan
I might be stating the obvious here, but have you looked at Nutch ? Nutch
uses Hadoop and is able to crawl, index and search (using Lucene). We've
been using it for a while and it works well.
Kind regards
Steve Watt
From:
Dan Segel
To:
common-dev@hadoop.apache.org
Date:
11/11/2009 07:
/eclipse-plugin
Affects Versions: 0.20.1
Environment: SLES 10, Mac OS/X 10.5.8
Reporter: Stephen Watt
Attachments: hadoop-0.20.1-eclipse-plugin.jar
When trying to run the build script for the Eclipse Plugin in
src/contrib/eclipse-plugin there are several errors a user
I'm sending this to the common list because license issues can affect
the entire release.
I've noticed (in Hadoop 0.20.0) that the src/c++/libhdfs/aclocal.m4 has a
GPL license but no waiver in it. The other aclocal.m4 files in the
src/native and src/c++ all contain the waiver below:
# As