I agree with Owen. If we move code out of the contrib project, then it is
more likely to create confusion among users, especially when multiple
versions of the code base float around.
But I agree that we should purge contrib code that is not being used or not
being actively developed.
thanks,
Hello ...
I need to append to files in HDFS.
I have seen many forums on the internet talking about problems with
appending files in HDFS.
Is that correct?
Is file append working on Hadoop 0.20?
Thanks
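For reference, a minimal sketch of what an append looks like through the HDFS client API (the file path here is hypothetical, and this assumes a build where append is enabled, e.g. dfs.support.append set to true — it needs a running cluster, so treat it as an illustration rather than a tested program):

```java
// Hypothetical sketch: append a line to an existing HDFS file.
// Assumes an append-enabled build and a reachable cluster; the path is illustrative.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/example.log"); // hypothetical path

        // append() throws IOException if the filesystem does not support appends
        FSDataOutputStream out = fs.append(file);
        try {
            out.writeBytes("another line\n");
        } finally {
            out.close();
        }
    }
}
```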
On 31/01/11 05:24, Konstantin Boudnik wrote:
Shall we avoid dictating a location for contrib projects once they are
moved out of Hadoop? If people feel they would be better served by GitHub,
perhaps they should have the option to be hosted there?
-I see discussions about Git at the ASF infra mailing
On 31/01/11 03:42, Nigel Daley wrote:
Folks,
Now that http://apache-extras.org is launched
(https://blogs.apache.org/foundation/entry/the_apache_software_foundation_launches)
I'd like to start a discussion on moving contrib components out of common,
mapreduce, and hdfs.
These contrib
Hi Alessandro.
There is a debate among the experts about whether the 0.20 version of append
is stable, and whether it is technically the best way of doing it.
That said, there are lots of people and companies using it in their
production clusters with no reported hassles.
I suggest you test it out in your
Steve-
It's hard to answer without more concrete criteria. Is this a
trademark question affecting the marketing of a product? A
cross-compatibility taxonomy for users? The minimum criteria to
publish a paper/release a product without eye-rolling? The particular
compatibility claims made by a
On Jan 31, 2011, at 8:18 AM, Steve Loughran wrote:
what does it mean to be compatible with Hadoop? And how do products that
consider themselves compatible with Hadoop say it?
I would like to define it in terms of APIs and core functionality.
A product (say hive or pig) will run against a
On Sun, Jan 30, 2011 at 23:19, Owen O'Malley omal...@apache.org wrote:
On Jan 30, 2011, at 7:42 PM, Nigel Daley wrote:
Now that http://apache-extras.org is launched
(https://blogs.apache.org/foundation/entry/the_apache_software_foundation_launches)
I'd like to start a discussion on moving
Hey Konstantin,
The only build breakage I saw from HADOOP-6904 is MAPREDUCE-2290,
which was fixed. Trees from trunk are compiling against each other
for me (e.g. each installed to a local Maven repo); perhaps the upstream
Maven repo hasn't been updated with the latest bits yet.
Thanks,
Eli
On
There has been a problem with more than one build failing (Mahout is the one
I saw first) due to a change in the Maven version, which meant that the
Clover license isn't being found properly. At least, that is the tale I
heard from infra.
On Mon, Jan 31, 2011 at 1:31 PM, Eli Collins
The current trunks for HDFS and MapReduce are not compiling at the moment. Try to
build trunk.
This is because the changes to the common API introduced by HADOOP-6904
have not been promoted to the HDFS and MR trunks.
HDFS-1335 and MAPREDUCE-2263 depend on these changes.
Common is not promoted to HDFS and MR
On Mon, Jan 31, 2011 at 1:57 PM, Konstantin Shvachko
shv.had...@gmail.comwrote:
Could anybody with an active gcc setup please verify whether the problem is
caused by HADOOP-6864?
I can build common trunk just fine on CentOS 5.5 including native.
I think the issue is somehow isolated to the build
By manually installing a new core jar into the cache, I can compile
trunk. Looks like we just need to kick a new Core into maven. Are
there instructions somewhere for committers to do this? I know Nigel
and Owen know how, but I don't know if the knowledge is diffused past
them.
-Jakob
On Mon,
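A sketch of the kind of manual step being described, using Maven's install:install-file goal to push a freshly built jar into the local cache (the file path, version, and artifact coordinates below are illustrative, not the actual ones used at the time):

```shell
# Hypothetical sketch: install a locally built common jar into ~/.m2
# so that dependent trunks resolve it; coordinates are illustrative.
mvn install:install-file \
  -Dfile=build/hadoop-common-0.23.0-SNAPSHOT.jar \
  -DgroupId=org.apache.hadoop \
  -DartifactId=hadoop-common \
  -Dversion=0.23.0-SNAPSHOT \
  -Dpackaging=jar
```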
Owen,
I am surprised to not see jute (aka hadoop recordio) on this list.
- milind
On Jan 30, 2011, at 11:19 PM, Owen O'Malley wrote:
On Jan 30, 2011, at 7:42 PM, Nigel Daley wrote:
Now that http://apache-extras.org is launched
On Sun, Jan 30, 2011 at 11:19 PM, Owen O'Malley omal...@apache.org wrote:
Also note that pushing code out of Hadoop has a high cost. There are at
least 3 forks of the hadoop-gpl-compression code. That creates a lot of
confusion for the users. A lot of users never go to the work to figure out
ant mvn-deploy will publish the snapshot artifacts to the Apache Maven repository
as long as you have the right credentials in ~/.m2/settings.xml.
For a settings.xml template, please look at http://wiki.apache.org/hadoop/HowToRelease
I'm pushing the latest common artifacts now.
-Giri
On Jan 31, 2011, at
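For illustration, the credentials portion of ~/.m2/settings.xml typically looks like the fragment below (the server id and values are placeholders; the HowToRelease wiki page has the authoritative template):

```xml
<!-- Hypothetical fragment of ~/.m2/settings.xml: the <id> must match the
     repository id used by the build, and the credentials are placeholders. -->
<settings>
  <servers>
    <server>
      <id>apache.snapshots.https</id>
      <username>your-apache-id</username>
      <password>your-password</password>
    </server>
  </servers>
</settings>
```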
Giri
looks like the last run you started failed the same way as previous ones.
Any thoughts on what's going on?
Thanks,
--Konstantin
On Mon, Jan 31, 2011 at 3:33 PM, Giridharan Kesavan
gkesa...@yahoo-inc.comwrote:
ant mvn-deploy would publish snapshot artifact to the apache maven
repository as
Konstantin,
I think I need to restart the slave which is running the commit build. For now
I have published the common artifact manually from the command line.
Thanks,
Giri
On Jan 31, 2011, at 4:27 PM, Konstantin Shvachko wrote:
Giri
looks like the last run you started failed the same way as
Thanks, Giri.
--Konst
On Mon, Jan 31, 2011 at 4:40 PM, Giridharan Kesavan
gkesa...@yahoo-inc.comwrote:
Konstantin,
I think I need to restart the slave which is running the commit build. For
now I have published the common artifact manually from commandline.
Thanks,
Giri
On Jan 31, 2011,
Hi Folks,
I'm pleased to announce that after some reflection, Yahoo! has decided to
discontinue The Yahoo! Distribution of Hadoop and focus on Apache Hadoop.
We plan to remove all references to a Yahoo distribution from our website
(developer.yahoo.com/hadoop), close our github repo
Excellent news! Will you also make Howl, Oozie, and Yarn Apache projects as
well?
On Mon, Jan 31, 2011 at 7:27 PM, Eric Baldeschwieler
eri...@yahoo-inc.comwrote:
Hi Folks,
I'm pleased to announce that after some reflection, Yahoo! has decided to
discontinue The Yahoo! Distribution of