The original release script and instructions broke the build up into
three or so steps. When I rewrote it, I kept that same model. It’s probably
time to re-think that. In particular, it should probably be one big step that
even does the maven deploy. There’s really no harm in doing
> >> If we're struggling with being able to deliver new features in a safe
> and timely fashion, let's try to address that...
>
> This is interesting. Are you aware of any means to do that? Thanks!
>
> I've mentioned this a few times on the lists before, but our biggest gap
in keeping branches
Thanks Andrew for the comments.
Yes, if we're "strictly" following the "maintenance release" practice, that'd
be great, and it was never my intent to overload it and cause a mess.
>> If we're struggling with being able to deliver new features in a safe and
>> timely fashion, let's try to address
I'm against including new features in maintenance releases, since they're
meant to be bug-fix only.
If we're struggling with being able to deliver new features in a safe and
timely fashion, let's try to address that, not overload the meaning of
"maintenance release".
Best,
Andrew
On Mon, Nov 20, 2017 at 9:59 PM, Sangjin Lee wrote:
>
> On Mon, Nov 20, 2017 at 9:46 PM, Andrew Wang
> wrote:
>
>> Thanks for the spot Sangjin. I think this bug was introduced in
>> create-release by HADOOP-14835. The multi-pass maven build generates
Thanks for the thorough review Vinod, some inline responses:
*Issues found during testing*
>
> Major
> - The previously supported way of being able to use different tar-balls
> for different sub-modules is completely broken - common and HDFS tar.gz are
> completely empty.
>
Is this something
On Mon, Nov 20, 2017 at 9:46 PM, Andrew Wang
wrote:
> Thanks for the spot Sangjin. I think this bug was introduced in create-release
> by HADOOP-14835. The multi-pass maven build generates these dummy client
> jars during the site build since skipShade is specified.
>
>
Thanks for the spot Sangjin. I think this bug was introduced in create-release
by HADOOP-14835. The multi-pass maven build generates these dummy client
jars during the site build since skipShade is specified.
This might be enough to cancel the RC. Thoughts?
Best,
Andrew
On Mon, Nov 20, 2017 at 7:51
> - When did we stop putting CHANGES files into the source artifacts?
CHANGES files were removed by https://issues.apache.org/jira/browse/HADOOP-11792
> - Even after "mvn install"ing once, shading is repeated again and again for
every new 'mvn install' even though there are no source
I checked the client jars that are supposed to contain shaded dependencies,
and they don't look quite right:
$ tar -tzvf hadoop-3.0.0.tar.gz | grep hadoop-client-api-3.0.0.jar
-rw-r--r-- 0 andrew andrew 44531 Nov 14 11:53
hadoop-3.0.0/share/hadoop/client/hadoop-client-api-3.0.0.jar
$ tar
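One quick way to tell a skipShade stub from a real shaded jar is size: the listing above shows a ~44 KB hadoop-client-api jar, while a genuinely shaded one is tens of megabytes. A runnable sketch of that size check, using a stand-in file (the path and the 1 MB threshold are assumptions, not official numbers):

```shell
# Simulate the stub detection: create a stand-in "jar" of the size seen
# in the listing above, then flag it as implausibly small for a shaded
# client jar.
mkdir -p hadoop-3.0.0/share/hadoop/client
head -c 44531 /dev/zero > hadoop-3.0.0/share/hadoop/client/hadoop-client-api-3.0.0.jar
SIZE=$(wc -c < hadoop-3.0.0/share/hadoop/client/hadoop-client-api-3.0.0.jar)
if [ "$SIZE" -lt 1000000 ]; then
  echo "hadoop-client-api-3.0.0.jar is only $SIZE bytes: likely a skipShade stub"
fi
```

On a real RC you would point `wc -c` (or `ls -l`) at the jar inside the extracted tarball instead of creating a stand-in.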
+1 (binding)
1) Compiled with native from source code
2) Set up a one-node cluster, enabling short-circuit read
3) Tried some basic HDFS commands: ls, put, etc.
4) Ran MapReduce workloads: TestDFSIO, TeraGen and TeraSort
5) Tested storage policy and mover successfully
Sorry for the late response.
On Mon, Nov 20, 2017 at 5:26 PM, Vinod Kumar Vavilapalli wrote:
> Thanks for all the push, Andrew!
>
> Looking at the RC. Went through my usual check-list. Here's my summary.
> Will cast my final vote after comparing and validating my findings with
> others.
>
> Verification
Thanks for all the push, Andrew!
Looking at the RC. Went through my usual check-list. Here's my summary. Will
cast my final vote after comparing and validating my findings with others.
Verification
- [Check] Successful recompilation from source tar-ball
- [Check] Signature verification
-
Hi Junping,
Thank you for making 2.8.2 happen and now planning the 2.8.3 release.
I have an ask: would it be convenient to include the backport work for the OSS
connector module? We have some Hadoop users who wish to have it by default for
convenience, though in the past they used it by backporting
Lei (Eddy) Xu created HDFS-12840:
Summary: Creating a replicated file in an EC zone is not
correctly serialized in EditLogs
Key: HDFS-12840
URL: https://issues.apache.org/jira/browse/HDFS-12840
Compilation passed for me. Using jdk1.8.0_40.jdk.
+Vinod
> On Nov 20, 2017, at 4:16 PM, Wei-Chiu Chuang wrote:
>
> @vinod
> I followed your command but I could not reproduce your problem.
>
> [weichiu@storage-1 hadoop-3.0.0-src]$ ls -al hadoop-common-project/hadoop-c
>
@vinod
I followed your command but I could not reproduce your problem.
[weichiu@storage-1 hadoop-3.0.0-src]$ ls -al hadoop-common-project/hadoop-common/target/hadoop-common-3.0.0.tar.gz
-rw-rw-r-- 1 weichiu weichiu 37052439 Nov 20 21:59
+1 (binding)
Thanks Andrew!
- Verified md5 and built from source
- Started a pseudo distributed cluster with KMS,
- Performed basic hdfs operations plus encryption related operations
- Verified logs and webui
- Confidence from CDH testing (will let Andrew answer officially, but
Thanks, Andrew!
+1 (non-binding)
- Verified checksums and signatures
- Deployed a single node cluster on CentOS 7.4 using the binary and source
release
- Ran hdfs commands
- Ran pi and distributed shell using the default and docker runtimes
- Verified the UIs
- Verified the change log
-Shane
Tsz Wo Nicholas Sze created HDFS-12839:
--
Summary: Refactor ratis-server tests to reduce the use of
DEFAULT_CALLID
Key: HDFS-12839
URL: https://issues.apache.org/jira/browse/HDFS-12839
Project:
+1 binding
Run the following steps:
* Check md5 of sources and package.
* Run a YARN + HDFS pseudo cluster.
* Run terasuite on YARN.
* Run HDFS CLIs (ls, rm, etc.)
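The checksum step above can be exercised end-to-end on a stand-in artifact (with a real RC you would download hadoop-3.0.0-src.tar.gz and its published digest file instead of creating one):

```shell
# Sketch of checksum verification using a stand-in payload; the filename
# matches the RC artifact but the contents here are fabricated.
printf 'release payload' > hadoop-3.0.0-src.tar.gz
md5sum hadoop-3.0.0-src.tar.gz > hadoop-3.0.0-src.tar.gz.md5
md5sum -c hadoop-3.0.0-src.tar.gz.md5
```

`md5sum -c` reports `hadoop-3.0.0-src.tar.gz: OK` on a match and exits non-zero on a mismatch, which makes it easy to script into a larger verification pass.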
On Mon, Nov 20, 2017 at 12:58 PM, Vinod Kumar Vavilapalli
wrote:
> Quick question.
>
> I used to be able (in
Quick question.
I used to be able (in 2.x line) to create dist tarballs (mvn clean install
-Pdist -Dtar -DskipTests -Dmaven.javadoc.skip=true) from the source being voted
on (hadoop-3.0.0-src.tar.gz).
The idea is to install HDFS, YARN, MR separately in separate root-directories
from the
Thanks for that proposal Andrew, and for not wrapping up the vote yesterday.
> In terms of downstream testing, we've done extensive
> integration testing with downstreams via the alphas
> and betas, and we have continuous integration running
> at Cloudera against branch-3.0.
Could you please
I'd definitely extend it for a few more days. I only see 3 binding +1s so far -
not a great number to brag about on our first major release in years.
Also going to nudge folks into voting.
+Vinod
> On Nov 17, 2017, at 3:26 PM, Andrew Wang wrote:
>
> Hi Arpit,
>
> I
Thanks Andrew for getting this out !
+1 (non-binding)
* Built from source on CentOS 7.3.1611, jdk1.8.0_111
* Deployed a non-HA cluster and tested a few EC file operations.
* Ran basic shell commands (ls, mkdir, put, get, ec, dfsadmin).
* Ran some sample jobs.
* HDFS Namenode UI looks good.
Thanks,
+1 (non-binding).
Built from source and deployed it on pseudo cluster.
Ran sample fs shell commands.
Ran a couple of jobs: sleep and pi.
Tested on java: 1.8.0_131 version.
Thanks Andrew for all your efforts in getting us here. :)
On Tue, Nov 14, 2017 at 3:34 PM, Andrew Wang
[
https://issues.apache.org/jira/browse/HDFS-12820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Gang Xie reopened HDFS-12820:
-
After carefully checking the issue reported in HDFS-9279, I found this issue is
not a dup of it.
This case is
For more details, see
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/46/
No changes
-1 overall
The following subsystems voted -1:
asflicense unit xml
The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint