+1 for option 4.
Let the user start the required services from it.
Regards,
Uma
- Original Message -
From: giridharan kesavan
Date: Wednesday, October 12, 2011 11:24 pm
Subject: Re: 0.23 & trunk tars, will we be publishing 1 tar per component or a
single tar? What about source tar?
To: hdfs-
I feel #4 as a better option.
Regards,
Ravi Teja
-Original Message-
From: Alejandro Abdelnur [mailto:t...@cloudera.com]
Sent: Wednesday, October 12, 2011 9:38 PM
To: common-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org;
hdfs-...@hadoop.apache.org
Subject: 0.23 & trunk tars, we'l
security audit logger is not on by default, fix the log4j properties to enable
the logger
-
Key: HADOOP-7740
URL: https://issues.apache.org/jira/browse/HADOOP-7740
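As a rough illustration of the kind of change HADOOP-7740 describes, enabling a security audit category in log4j.properties might look like the sketch below. The property and appender names follow Hadoop's conventional log4j layout but should be treated as illustrative, not the exact patch:

```properties
# Hypothetical sketch: route the security audit category to a real appender
# instead of leaving it off/NullAppender by default.
hadoop.security.logger=INFO,RFAS
log4j.category.SecurityLogger=${hadoop.security.logger}
log4j.additivity.SecurityLogger=false

# Rolling file appender for the security audit log (names are illustrative).
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hadoop.log.dir}/security-audit.log
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```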
Reconcile FileUtil and SecureIOUtils APIs between 20x and trunk
---
Key: HADOOP-7739
URL: https://issues.apache.org/jira/browse/HADOOP-7739
Project: Hadoop Common
Issue Type: Bug
Document incompatible API changes between 0.20.20x and 0.23.0 release
-
Key: HADOOP-7738
URL: https://issues.apache.org/jira/browse/HADOOP-7738
Project: Hadoop Common
Issue
normalize hadoop-mapreduce & hadoop-dist dist/tar build with common/hdfs
Key: HADOOP-7737
URL: https://issues.apache.org/jira/browse/HADOOP-7737
Project: Hadoop Common
+1 for option 4
On 10/12/11 9:50 AM, Eric Yang wrote:
Option #4 is the most practical use case for making a release. Bleeding
edge developers would prefer to mix and match different versions of hdfs
and mapreduce. Hence, it may be good to release the single tarball for
release, bu
I deployed on 5 node cluster with security turned on and ran mapreduce jobs.
Verified md5 signature of the rpm.
+1.
On Fri, Oct 7, 2011 at 5:10 PM, Matt Foley wrote:
> Many thanks to the community members who tried out RC1, and found several
> critical or blocker bugs.
> These have been resolve
Deployed to a cluster of 20 servers, with security and append features enabled.
Passed our internal test suite. Looks good!
+1 (non binding)
--
Arpit
ar...@hortonworks.com
On Oct 7, 2011, at 5:10 PM, Matt Foley wrote:
> Many thanks to the community members who tried out RC1, and found severa
I think it's simplest to publish a single Hadoop tarball and users
start the services they want. This is the model we have always
followed up to now.
Cheers,
Tom
On Wed, Oct 12, 2011 at 9:07 AM, Alejandro Abdelnur wrote:
> Currently common, hdfs and mapred create partial tars which are not usabl
Currently common, hdfs and mapred create partial tars which are not usable
unless they are stitched together into a single tar.
With HADOOP-7642 the stitching happens as part of the build.
The build currently produces the following tars:
1* common TAR
2* hdfs (partial) TAR
3* mapreduce (partial)
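The "stitching" described above (per HADOOP-7642, partial component tars combined into one usable distribution tar) can be sketched roughly as follows. The component names and paths here are illustrative placeholders, not the actual build layout:

```shell
# Hypothetical sketch of stitching partial component tars into one dist tar.
set -e
WORK=$(mktemp -d)

# Stand-ins for the per-component (partial) trees the module builds produce.
mkdir -p "$WORK/common/bin" "$WORK/hdfs/bin"
echo common > "$WORK/common/bin/hadoop"
echo hdfs   > "$WORK/hdfs/bin/hdfs"

# Pack each component as its own partial tar, as the per-module builds do.
tar -C "$WORK" -cf "$WORK/common.tar" common
tar -C "$WORK" -cf "$WORK/hdfs.tar" hdfs

# Stitch: unpack every partial tar into a single dist tree, then re-tar it.
mkdir -p "$WORK/dist"
for t in "$WORK"/common.tar "$WORK"/hdfs.tar; do
  tar -C "$WORK/dist" -xf "$t"
done
tar -C "$WORK" -cf "$WORK/hadoop-dist.tar" dist

# List the stitched archive's contents.
tar -tf "$WORK/hadoop-dist.tar"
```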