Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
+1 (non-binding)

Thanks for preparing the 2.7.4-RC0 release.
- Built from source on Mac OS X 10.12.6 with Java 1.8.0_111
- Deployed to a pseudo cluster
- Passed the following sanity checks:
  - Basic dfs operations
  - Wordcount
  - DFSIO read/write

Thanks, Ajay

On 7/31/17, 6:57 PM, "Konstantin Shvachko" wrote: Uploaded new binaries hadoop-2.7.4-RC0.tar.gz, which adds lib/native/. Same place: http://home.apache.org/~shv/hadoop-2.7.4-RC0/ Thanks, --Konstantin On Mon, Jul 31, 2017 at 3:56 PM, Chris Douglas wrote: > On Mon, Jul 31, 2017 at 3:02 PM, Konstantin Shvachko > wrote: > > For the packaging, here is the exact phrasing from the cited > release-policy > > document relevant to binaries: > > "As a convenience to users that might not have the appropriate tools to > > build a compiled version of the source, binary/bytecode packages MAY be > > distributed alongside official Apache releases. In all such cases, the > > binary/bytecode package MUST have the same version number as the source > > release and MUST only add binary/bytecode files that are the result of > > compiling that version of the source code release and its dependencies." > > I don't think my binary package violates any of these. > > +1 The PMC VOTE applies to source code, only. If someone wants to > rebuild the binary tarball with native libs and replace this one, > that's fine. > > My reading of the above is that source code must be distributed with > binaries, not that we omit the source code from binary releases... -C > > > But I'll upload an additional tar.gz with native bits and no src, as you > > guys requested. > > Will keep it as RC0 as there is no source code change and it comes from > the > > same build. > > Hope this is satisfactory. > > > > Thanks, > > --Konstantin > > > > On Mon, Jul 31, 2017 at 1:53 PM, Andrew Wang > > wrote: > > > >> I agree with Brahma on the two issues flagged (having src in the binary > >> tarball, missing native libs). These are regressions from prior > releases.
> >> > >> As an aside, "we release binaries as a convenience" doesn't relax the > >> quality bar. The binaries are linked on our website and distributed > through > >> official Apache channels. They have to adhere to Apache release > >> requirements. And, most users consume our work via Maven dependencies, > >> which are binary artifacts. > >> > >> http://www.apache.org/legal/release-policy.html goes into this in more > >> detail. A release must minimally include source packages, and can also > >> include binary artifacts. > >> > >> Best, > >> Andrew > >> > >> On Mon, Jul 31, 2017 at 12:30 PM, Konstantin Shvachko < > >> shv.had...@gmail.com> wrote: > >> > >>> To avoid any confusion in this regard. I built RC0 manually in > compliance > >>> with Apache release policy > >>> http://www.apache.org/legal/release-policy.html > >>> I edited the HowToReleasePreDSBCR page to make sure people don't use > >>> Jenkins option for building. > >>> > >>> A side note. This particular build is broken anyways, so no worries > there. > >>> I think though it would be useful to have it working for testing and > as a > >>> packaging standard. > >>> > >>> Thanks, > >>> --Konstantin > >>> > >>> On Mon, Jul 31, 2017 at 11:40 AM, Allen Wittenauer < > >>> a...@effectivemachines.com > >>> > wrote: > >>> > >>> > > >>> > > On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko < > >>> shv.had...@gmail.com> > >>> > wrote: > >>> > > > >>> > > https://wiki.apache.org/hadoop/HowToReleasePreDSBCR > >>> > > >>> > FYI: > >>> > > >>> > If you are using ASF Jenkins to create an ASF release > >>> > artifact, it's pretty much an automatic vote failure as any such > >>> release is > >>> > in violation of ASF policy. > >>> > > >>> > > >>> > >> > >> >
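The wordcount sanity check in the vote above runs against a pseudo-cluster; the sketch below is a minimal local stand-in that mimics what the MapReduce wordcount example computes, so the expected counts can be eyeballed without a cluster. The file name is illustrative, and the hadoop commands in the comment assume the standard 2.7.4 binary tarball layout:

```shell
# On a real pseudo-cluster the check would be roughly:
#   hdfs dfs -mkdir -p /input && hdfs dfs -put input.txt /input
#   hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
#       wordcount /input /output
# Locally, the same map/shuffle/reduce logic with coreutils:
printf 'hadoop hdfs hadoop\n' > input.txt
# map: one word per line; shuffle: sort; reduce: count per key
tr ' ' '\n' < input.txt | sort | uniq -c | sort -rn
```

On the sample input this prints a count of 2 for "hadoop" and 1 for "hdfs", which is the shape of output the cluster-side wordcount job should reproduce in /output.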
Re: Are binary artifacts part of a release?
It does not. Just adding historical references, as Andrew raised the question. On Mon, Jul 31, 2017 at 7:38 PM, Allen Wittenauerwrote: > > ... that doesn't contradict anything I said. > > > On Jul 31, 2017, at 7:23 PM, Konstantin Shvachko > wrote: > > > > The issue was discussed on several occasions in the past. > > Took me a while to dig this out as an example: > > http://mail-archives.apache.org/mod_mbox/hadoop-general/ > 20.mbox/%3C4EB0827C.6040204%40apache.org%3E > > > > Doug Cutting: > > "Folks should not primarily evaluate binaries when voting. The ASF > primarily produces and publishes source-code > > so voting artifacts should be optimized for evaluation of that." > > > > Thanks, > > --Konst > > > > On Mon, Jul 31, 2017 at 4:51 PM, Allen Wittenauer < > a...@effectivemachines.com> wrote: > > > > > On Jul 31, 2017, at 4:18 PM, Andrew Wang > wrote: > > > > > > Forking this off to not distract from release activities. > > > > > > I filed https://issues.apache.org/jira/browse/LEGAL-323 to get > clarity on the matter. I read the entire webpage, and it could be improved > one way or the other. > > > > > > IANAL, my read has always lead me to believe: > > > > * An artifact is anything that is uploaded to dist.a.o > and repository.a.o > > * A release consists of one or more artifacts ("Releases > are, by definition, anything that is published beyond the group that owns > it. In our case, that means any publication outside the group of people on > the product dev list.") > > * One of those artifacts MUST be source > > * (insert voting rules here) > > * They must be built on a machine in control of the RM > > * There are no exceptions for alpha, nightly, etc > > * (various other requirements) > > > > i.e., release != artifact it's more like release = > artifact * n . > > > > Do you have to have binaries? No (e.g., Apache SpamAssassin has > no binaries to create). 
But if you place binaries in dist.a.o or > repository.a.o, they are effectively part of your release and must follow > the same rules. (Votes, etc.) > > > > > >
Re: Are binary artifacts part of a release?
... that doesn't contradict anything I said. > On Jul 31, 2017, at 7:23 PM, Konstantin Shvachkowrote: > > The issue was discussed on several occasions in the past. > Took me a while to dig this out as an example: > http://mail-archives.apache.org/mod_mbox/hadoop-general/20.mbox/%3C4EB0827C.6040204%40apache.org%3E > > Doug Cutting: > "Folks should not primarily evaluate binaries when voting. The ASF primarily > produces and publishes source-code > so voting artifacts should be optimized for evaluation of that." > > Thanks, > --Konst > > On Mon, Jul 31, 2017 at 4:51 PM, Allen Wittenauer > wrote: > > > On Jul 31, 2017, at 4:18 PM, Andrew Wang wrote: > > > > Forking this off to not distract from release activities. > > > > I filed https://issues.apache.org/jira/browse/LEGAL-323 to get clarity on > > the matter. I read the entire webpage, and it could be improved one way or > > the other. > > > IANAL, my read has always lead me to believe: > > * An artifact is anything that is uploaded to dist.a.o and > repository.a.o > * A release consists of one or more artifacts ("Releases are, > by definition, anything that is published beyond the group that owns it. In > our case, that means any publication outside the group of people on the > product dev list.") > * One of those artifacts MUST be source > * (insert voting rules here) > * They must be built on a machine in control of the RM > * There are no exceptions for alpha, nightly, etc > * (various other requirements) > > i.e., release != artifact it's more like release = > artifact * n . > > Do you have to have binaries? No (e.g., Apache SpamAssassin has no > binaries to create). But if you place binaries in dist.a.o or > repository.a.o, they are effectively part of your release and must follow the > same rules. (Votes, etc.) > > - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Re: Are binary artifacts part of a release?
The issue was discussed on several occasions in the past. Took me a while to dig this out as an example: http://mail-archives.apache.org/mod_mbox/hadoop-general/20.mbox/%3C4EB0827C.6040204%40apache.org%3E Doug Cutting: "Folks should not primarily evaluate binaries when voting. The ASF primarily produces and publishes source-code so voting artifacts should be optimized for evaluation of that." Thanks, --Konst On Mon, Jul 31, 2017 at 4:51 PM, Allen Wittenauerwrote: > > > On Jul 31, 2017, at 4:18 PM, Andrew Wang > wrote: > > > > Forking this off to not distract from release activities. > > > > I filed https://issues.apache.org/jira/browse/LEGAL-323 to get clarity > on the matter. I read the entire webpage, and it could be improved one way > or the other. > > > IANAL, my read has always lead me to believe: > > * An artifact is anything that is uploaded to dist.a.o and > repository.a.o > * A release consists of one or more artifacts ("Releases > are, by definition, anything that is published beyond the group that owns > it. In our case, that means any publication outside the group of people on > the product dev list.") > * One of those artifacts MUST be source > * (insert voting rules here) > * They must be built on a machine in control of the RM > * There are no exceptions for alpha, nightly, etc > * (various other requirements) > > i.e., release != artifact it's more like release = > artifact * n . > > Do you have to have binaries? No (e.g., Apache SpamAssassin has > no binaries to create). But if you place binaries in dist.a.o or > repository.a.o, they are effectively part of your release and must follow > the same rules. (Votes, etc.) > >
Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
Uploaded new binaries hadoop-2.7.4-RC0.tar.gz, which adds lib/native/. Same place: http://home.apache.org/~shv/hadoop-2.7.4-RC0/ Thanks, --Konstantin On Mon, Jul 31, 2017 at 3:56 PM, Chris Douglaswrote: > On Mon, Jul 31, 2017 at 3:02 PM, Konstantin Shvachko > wrote: > > For the packaging, here is the exact phrasing from the sited > release-policy > > document relevant to binaries: > > "As a convenience to users that might not have the appropriate tools to > > build a compiled version of the source, binary/bytecode packages MAY be > > distributed alongside official Apache releases. In all such cases, the > > binary/bytecode package MUST have the same version number as the source > > release and MUST only add binary/bytecode files that are the result of > > compiling that version of the source code release and its dependencies." > > I don't think my binary package violates any of these. > > +1 The PMC VOTE applies to source code, only. If someone wants to > rebuild the binary tarball with native libs and replace this one, > that's fine. > > My reading of the above is that source code must be distributed with > binaries, not that we omit the source code from binary releases... -C > > > But I'll upload an additional tar.gz with native bits and no src, as you > > guys requested. > > Will keep it as RC0 as there is no source code change and it comes from > the > > same build. > > Hope this is satisfactory. > > > > Thanks, > > --Konstantin > > > > On Mon, Jul 31, 2017 at 1:53 PM, Andrew Wang > > wrote: > > > >> I agree with Brahma on the two issues flagged (having src in the binary > >> tarball, missing native libs). These are regressions from prior > releases. > >> > >> As an aside, "we release binaries as a convenience" doesn't relax the > >> quality bar. The binaries are linked on our website and distributed > through > >> official Apache channels. They have to adhere to Apache release > >> requirements. 
And, most users consume our work via Maven dependencies, > >> which are binary artifacts. > >> > >> http://www.apache.org/legal/release-policy.html goes into this in more > >> detail. A release must minimally include source packages, and can also > >> include binary artifacts. > >> > >> Best, > >> Andrew > >> > >> On Mon, Jul 31, 2017 at 12:30 PM, Konstantin Shvachko < > >> shv.had...@gmail.com> wrote: > >> > >>> To avoid any confusion in this regard. I built RC0 manually in > compliance > >>> with Apache release policy > >>> http://www.apache.org/legal/release-policy.html > >>> I edited the HowToReleasePreDSBCR page to make sure people don't use > >>> Jenkins option for building. > >>> > >>> A side note. This particular build is broken anyways, so no worries > there. > >>> I think though it would be useful to have it working for testing and > as a > >>> packaging standard. > >>> > >>> Thanks, > >>> --Konstantin > >>> > >>> On Mon, Jul 31, 2017 at 11:40 AM, Allen Wittenauer < > >>> a...@effectivemachines.com > >>> > wrote: > >>> > >>> > > >>> > > On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko < > >>> shv.had...@gmail.com> > >>> > wrote: > >>> > > > >>> > > https://wiki.apache.org/hadoop/HowToReleasePreDSBCR > >>> > > >>> > FYI: > >>> > > >>> > If you are using ASF Jenkins to create an ASF release > >>> > artifact, it's pretty much an automatic vote failure as any such > >>> release is > >>> > in violation of ASF policy. > >>> > > >>> > > >>> > >> > >> >
[jira] [Created] (HDFS-12234) [SPS] Allow setting Xattr without SPS running.
Lei (Eddy) Xu created HDFS-12234:

Summary: [SPS] Allow setting Xattr without SPS running.
Key: HDFS-12234
URL: https://issues.apache.org/jira/browse/HDFS-12234
Project: Hadoop HDFS
Issue Type: Sub-task
Affects Versions: HDFS-10285
Reporter: Lei (Eddy) Xu

As discussed in HDFS-10285, if this API is widely used by downstream projects (e.g., HBase), the client should be able to call it without first querying the running status of the SPS service; requiring that check would be a great burden on users of the API. Given the constraints the SPS service has (i.e., it cannot run together with Mover, and it might be disabled by default), the API call should succeed as long as the related xattr is persisted. SPS can run later to catch up.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HDFS-12233) Add API to unset SPS on a path
Lei (Eddy) Xu created HDFS-12233:

Summary: Add API to unset SPS on a path
Key: HDFS-12233
URL: https://issues.apache.org/jira/browse/HDFS-12233
Project: Hadoop HDFS
Issue Type: Sub-task
Components: datanode, namenode
Affects Versions: HDFS-10285
Reporter: Lei (Eddy) Xu

As discussed in HDFS-10285, we should allow unsetting SPS on a path. For example, a user might mistakenly set SPS on "/" and trigger a significant amount of data movement. Unsetting SPS would let the user fix such a mistake.
Re: Are binary artifacts part of a release?
> On Jul 31, 2017, at 4:18 PM, Andrew Wang wrote:
>
> Forking this off to not distract from release activities.
>
> I filed https://issues.apache.org/jira/browse/LEGAL-323 to get clarity on the matter. I read the entire webpage, and it could be improved one way or the other.

IANAL; my read has always led me to believe:

* An artifact is anything that is uploaded to dist.a.o and repository.a.o
* A release consists of one or more artifacts ("Releases are, by definition, anything that is published beyond the group that owns it. In our case, that means any publication outside the group of people on the product dev list.")
* One of those artifacts MUST be source
* (insert voting rules here)
* They must be built on a machine in control of the RM
* There are no exceptions for alpha, nightly, etc
* (various other requirements)

i.e., release != artifact; it's more like release = artifact * n.

Do you have to have binaries? No (e.g., Apache SpamAssassin has no binaries to create). But if you place binaries in dist.a.o or repository.a.o, they are effectively part of your release and must follow the same rules. (Votes, etc.)
Are binary artifacts part of a release?
Forking this off to not distract from release activities. I filed https://issues.apache.org/jira/browse/LEGAL-323 to get clarity on the matter. I read the entire webpage, and it could be improved one way or the other. Best, Andrew On Mon, Jul 31, 2017 at 3:56 PM, Chris Douglaswrote: > On Mon, Jul 31, 2017 at 3:02 PM, Konstantin Shvachko > wrote: > > For the packaging, here is the exact phrasing from the sited > release-policy > > document relevant to binaries: > > "As a convenience to users that might not have the appropriate tools to > > build a compiled version of the source, binary/bytecode packages MAY be > > distributed alongside official Apache releases. In all such cases, the > > binary/bytecode package MUST have the same version number as the source > > release and MUST only add binary/bytecode files that are the result of > > compiling that version of the source code release and its dependencies." > > I don't think my binary package violates any of these. > > +1 The PMC VOTE applies to source code, only. If someone wants to > rebuild the binary tarball with native libs and replace this one, > that's fine. > > My reading of the above is that source code must be distributed with > binaries, not that we omit the source code from binary releases... -C > > > But I'll upload an additional tar.gz with native bits and no src, as you > > guys requested. > > Will keep it as RC0 as there is no source code change and it comes from > the > > same build. > > Hope this is satisfactory. > > > > Thanks, > > --Konstantin > > > > On Mon, Jul 31, 2017 at 1:53 PM, Andrew Wang > > wrote: > > > >> I agree with Brahma on the two issues flagged (having src in the binary > >> tarball, missing native libs). These are regressions from prior > releases. > >> > >> As an aside, "we release binaries as a convenience" doesn't relax the > >> quality bar. The binaries are linked on our website and distributed > through > >> official Apache channels. 
They have to adhere to Apache release > >> requirements. And, most users consume our work via Maven dependencies, > >> which are binary artifacts. > >> > >> http://www.apache.org/legal/release-policy.html goes into this in more > >> detail. A release must minimally include source packages, and can also > >> include binary artifacts. > >> > >> Best, > >> Andrew > >> > >> On Mon, Jul 31, 2017 at 12:30 PM, Konstantin Shvachko < > >> shv.had...@gmail.com> wrote: > >> > >>> To avoid any confusion in this regard. I built RC0 manually in > compliance > >>> with Apache release policy > >>> http://www.apache.org/legal/release-policy.html > >>> I edited the HowToReleasePreDSBCR page to make sure people don't use > >>> Jenkins option for building. > >>> > >>> A side note. This particular build is broken anyways, so no worries > there. > >>> I think though it would be useful to have it working for testing and > as a > >>> packaging standard. > >>> > >>> Thanks, > >>> --Konstantin > >>> > >>> On Mon, Jul 31, 2017 at 11:40 AM, Allen Wittenauer < > >>> a...@effectivemachines.com > >>> > wrote: > >>> > >>> > > >>> > > On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko < > >>> shv.had...@gmail.com> > >>> > wrote: > >>> > > > >>> > > https://wiki.apache.org/hadoop/HowToReleasePreDSBCR > >>> > > >>> > FYI: > >>> > > >>> > If you are using ASF Jenkins to create an ASF release > >>> > artifact, it's pretty much an automatic vote failure as any such > >>> release is > >>> > in violation of ASF policy. > >>> > > >>> > > >>> > >> > >> >
Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
On Mon, Jul 31, 2017 at 3:02 PM, Konstantin Shvachkowrote: > For the packaging, here is the exact phrasing from the sited release-policy > document relevant to binaries: > "As a convenience to users that might not have the appropriate tools to > build a compiled version of the source, binary/bytecode packages MAY be > distributed alongside official Apache releases. In all such cases, the > binary/bytecode package MUST have the same version number as the source > release and MUST only add binary/bytecode files that are the result of > compiling that version of the source code release and its dependencies." > I don't think my binary package violates any of these. +1 The PMC VOTE applies to source code, only. If someone wants to rebuild the binary tarball with native libs and replace this one, that's fine. My reading of the above is that source code must be distributed with binaries, not that we omit the source code from binary releases... -C > But I'll upload an additional tar.gz with native bits and no src, as you > guys requested. > Will keep it as RC0 as there is no source code change and it comes from the > same build. > Hope this is satisfactory. > > Thanks, > --Konstantin > > On Mon, Jul 31, 2017 at 1:53 PM, Andrew Wang > wrote: > >> I agree with Brahma on the two issues flagged (having src in the binary >> tarball, missing native libs). These are regressions from prior releases. >> >> As an aside, "we release binaries as a convenience" doesn't relax the >> quality bar. The binaries are linked on our website and distributed through >> official Apache channels. They have to adhere to Apache release >> requirements. And, most users consume our work via Maven dependencies, >> which are binary artifacts. >> >> http://www.apache.org/legal/release-policy.html goes into this in more >> detail. A release must minimally include source packages, and can also >> include binary artifacts. 
>> >> Best, >> Andrew >> >> On Mon, Jul 31, 2017 at 12:30 PM, Konstantin Shvachko < >> shv.had...@gmail.com> wrote: >> >>> To avoid any confusion in this regard. I built RC0 manually in compliance >>> with Apache release policy >>> http://www.apache.org/legal/release-policy.html >>> I edited the HowToReleasePreDSBCR page to make sure people don't use >>> Jenkins option for building. >>> >>> A side note. This particular build is broken anyways, so no worries there. >>> I think though it would be useful to have it working for testing and as a >>> packaging standard. >>> >>> Thanks, >>> --Konstantin >>> >>> On Mon, Jul 31, 2017 at 11:40 AM, Allen Wittenauer < >>> a...@effectivemachines.com >>> > wrote: >>> >>> > >>> > > On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko < >>> shv.had...@gmail.com> >>> > wrote: >>> > > >>> > > https://wiki.apache.org/hadoop/HowToReleasePreDSBCR >>> > >>> > FYI: >>> > >>> > If you are using ASF Jenkins to create an ASF release >>> > artifact, it's pretty much an automatic vote failure as any such >>> release is >>> > in violation of ASF policy. >>> > >>> > >>> >> >> - To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-11283) why should we not introduce distributed database to storage hdfs's metadata?
[ https://issues.apache.org/jira/browse/HDFS-11283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal resolved HDFS-11283.
--
Resolution: Fixed

Hi [~chenrongwei], the hdfs-dev mailing list is the right place for these questions. Resolving this.

> Why should we not introduce a distributed database to store HDFS's metadata?
> -
>
> Key: HDFS-11283
> URL: https://issues.apache.org/jira/browse/HDFS-11283
> Project: Hadoop HDFS
> Issue Type: Wish
> Reporter: chenrongwei
>
> Why should we not introduce a distributed database to store HDFS's metadata? In my opinion it may lose some performance, but it brings the following improvements:
> 1. It enhances the NN's ability to scale, so the NN can support many more files and blocks. The problem of massive small files always gives me a headache.
> 2. Most MR clusters don't care about the performance loss, but care more about the cluster's scale.
> 3. The NN's HA implementation might be simpler and more reasonable.
> So I think maybe we should add a new work mode for the NN built on a distributed database.
Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
Maven dependencies should be fine.

For the packaging, here is the exact phrasing from the cited release-policy document relevant to binaries: "As a convenience to users that might not have the appropriate tools to build a compiled version of the source, binary/bytecode packages MAY be distributed alongside official Apache releases. In all such cases, the binary/bytecode package MUST have the same version number as the source release and MUST only add binary/bytecode files that are the result of compiling that version of the source code release and its dependencies." I don't think my binary package violates any of these.

But I'll upload an additional tar.gz with native bits and no src, as you guys requested. Will keep it as RC0, as there is no source code change and it comes from the same build. Hope this is satisfactory.

Thanks, --Konstantin

On Mon, Jul 31, 2017 at 1:53 PM, Andrew Wang wrote: > I agree with Brahma on the two issues flagged (having src in the binary > tarball, missing native libs). These are regressions from prior releases. > > As an aside, "we release binaries as a convenience" doesn't relax the > quality bar. The binaries are linked on our website and distributed through > official Apache channels. They have to adhere to Apache release > requirements. And, most users consume our work via Maven dependencies, > which are binary artifacts. > > http://www.apache.org/legal/release-policy.html goes into this in more > detail. A release must minimally include source packages, and can also > include binary artifacts. > > Best, > Andrew > > On Mon, Jul 31, 2017 at 12:30 PM, Konstantin Shvachko < > shv.had...@gmail.com> wrote: > >> To avoid any confusion in this regard. I built RC0 manually in compliance >> with Apache release policy >> http://www.apache.org/legal/release-policy.html >> I edited the HowToReleasePreDSBCR page to make sure people don't use >> Jenkins option for building. >> >> A side note.
This particular build is broken anyways, so no worries there. >> I think though it would be useful to have it working for testing and as a >> packaging standard. >> >> Thanks, >> --Konstantin >> >> On Mon, Jul 31, 2017 at 11:40 AM, Allen Wittenauer < >> a...@effectivemachines.com >> > wrote: >> >> > >> > > On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko < >> shv.had...@gmail.com> >> > wrote: >> > > >> > > https://wiki.apache.org/hadoop/HowToReleasePreDSBCR >> > >> > FYI: >> > >> > If you are using ASF Jenkins to create an ASF release >> > artifact, it's pretty much an automatic vote failure as any such >> release is >> > in violation of ASF policy. >> > >> > >> > >
Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
Hi Konstantin,

Thanks for putting up the 2.7.4-RC0 release!

+1 (non-binding), with the following checks:
- built the source on MacOS 10.12.5 with Java version 1.8.0_91
- deployed on a single-node pseudo-cluster
- tested HDFS basics: write/read/append/acl
- mapreduce jobs: wordcount/bbp/grep

Thanks! Chen

From: Konstantin Shvachko
Sent: Saturday, July 29, 2017 4:29 PM
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

Hi everybody,

Here is the next release of the Apache Hadoop 2.7 line. The previous stable release, 2.7.3, has been available since 25 August 2016. Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are critical bug fixes and major optimizations. See more details in the Release Note: http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html

The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/

Please give it a try and vote on this thread. The vote will run for 5 days, ending 08/04/2017.

Please note that my up-to-date public key is available from: https://dist.apache.org/repos/dist/release/hadoop/common/KEYS Please don't forget to refresh the page if you've been there recently. There are other places on Apache sites which may contain my outdated key.

Thanks, --Konstantin
Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
Just confirmed that HADOOP-13707 does fix the NN servlet issuet in branch-2.7. On Mon, Jul 31, 2017 at 11:38 AM, Konstantin Shvachkowrote: > Hi John, > > Thank you for checking and voting. > As far as I know test failures on 2.7.4 are intermittent. We have a jira > for that > https://issues.apache.org/jira/browse/HDFS-11985 > but decided it should not block the release. > The "dr,who" thing is a configuration issue. This page may be helpful: > http://hadoop.apache.org/docs/stable/hadoop-hdfs-httpfs/ServerSetup.html > > Thanks, > --Konstantin > > On Sun, Jul 30, 2017 at 11:24 PM, John Zhuge wrote: > >> Hi Konstantin, >> >> Thanks a lot for the effort to prepare the 2.7.4-RC0 release! >> >> +1 (non-binding) >> >>- Verified checksums and signatures of all tarballs >>- Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6 >>- Verified cloud connectors: >> - All S3A integration tests >>- Deployed both binary and built source to a pseudo cluster, passed >>the following sanity tests in insecure, SSL, and SSL+Kerberos mode: >> - HDFS basic and ACL >> - DistCp basic >> - MapReduce wordcount (only failed in SSL+Kerberos mode for binary >> tarball, probably unrelated) >> - KMS and HttpFS basic >> - Balancer start/stop >> >> Shall we worry this test failures? Likely fixed by >> https://issues.apache.org/jira/browse/HADOOP-13707. >> >>- Got “curl: (22) The requested URL returned error: 403 User dr.who >>is unauthorized to access this page.” when accessing NameNode web servlet >>/jmx, /conf, /logLevel, and /stacks. It passed in branch-2.8. >> >> >> On Sat, Jul 29, 2017 at 4:29 PM, Konstantin Shvachko < >> shv.had...@gmail.com> wrote: >> >>> Hi everybody, >>> >>> Here is the next release of Apache Hadoop 2.7 line. The previous stable >>> release 2.7.3 was available since 25 August, 2016. >>> Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are >>> critical bug fixes and major optimizations. 
See more details in Release >>> Note: >>> http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html >>> >>> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/ >>> >>> Please give it a try and vote on this thread. The vote will run for 5 >>> days >>> ending 08/04/2017. >>> >>> Please note that my up to date public key are available from: >>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS >>> Please don't forget to refresh the page if you've been there recently. >>> There are other place on Apache sites, which may contain my outdated key. >>> >>> Thanks, >>> --Konstantin >>> >> >> >> >> -- >> John >> > > -- John
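The first item in John's checklist, verifying checksums and signatures of the tarballs, can be exercised end-to-end without the release files. The sketch below uses a dummy artifact and sha256sum; in a real verification, the tarball, its .asc signature, and the KEYS file from the links in this thread replace the dummy names, and gpg covers the signature half (file names here are illustrative):

```shell
# Publisher side: compute and publish the digest (dummy artifact here).
printf 'release payload\n' > hadoop-2.7.4-RC0.dummy.tar.gz
sha256sum hadoop-2.7.4-RC0.dummy.tar.gz > hadoop-2.7.4-RC0.dummy.tar.gz.sha256

# Verifier side: recompute and compare; prints "<name>: OK" on a match.
sha256sum -c hadoop-2.7.4-RC0.dummy.tar.gz.sha256

# Signature half (requires the real files and the RM's key from KEYS):
#   gpg --import KEYS
#   gpg --verify hadoop-2.7.4.tar.gz.asc hadoop-2.7.4.tar.gz
```

A digest match only proves the download wasn't corrupted; the gpg signature check is what ties the artifact to the release manager's key.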
Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
> On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko wrote:
>
> https://wiki.apache.org/hadoop/HowToReleasePreDSBCR

FYI:

If you are using ASF Jenkins to create an ASF release artifact, it's pretty much an automatic vote failure, as any such release is in violation of ASF policy.
Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
Hi John,

Thank you for checking and voting. As far as I know, the test failures on 2.7.4 are intermittent. We have a jira for that, https://issues.apache.org/jira/browse/HDFS-11985, but decided it should not block the release.
The "dr.who" thing is a configuration issue. This page may be helpful:
http://hadoop.apache.org/docs/stable/hadoop-hdfs-httpfs/ServerSetup.html

Thanks,
--Konstantin

On Sun, Jul 30, 2017 at 11:24 PM, John Zhuge wrote:
> Hi Konstantin,
>
> Thanks a lot for the effort to prepare the 2.7.4-RC0 release!
>
> +1 (non-binding)
>
> - Verified checksums and signatures of all tarballs
> - Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
> - Verified cloud connectors:
>   - All S3A integration tests
> - Deployed both binary and built source to a pseudo cluster; passed
>   the following sanity tests in insecure, SSL, and SSL+Kerberos mode:
>   - HDFS basic and ACL
>   - DistCp basic
>   - MapReduce wordcount (only failed in SSL+Kerberos mode for the binary
>     tarball, probably unrelated)
>   - KMS and HttpFS basic
>   - Balancer start/stop
>
> Shall we worry about these test failures? Likely fixed by
> https://issues.apache.org/jira/browse/HADOOP-13707.
>
> - Got "curl: (22) The requested URL returned error: 403 User dr.who is
>   unauthorized to access this page." when accessing the NameNode web
>   servlets /jmx, /conf, /logLevel, and /stacks. It passed in branch-2.8.
>
> On Sat, Jul 29, 2017 at 4:29 PM, Konstantin Shvachko wrote:
>> Hi everybody,
>>
>> Here is the next release of the Apache Hadoop 2.7 line. The previous stable
>> release, 2.7.3, has been available since 25 August 2016.
>> Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
>> critical bug fixes and major optimizations. See more details in the Release
>> Note:
>> http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html
>>
>> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
>>
>> Please give it a try and vote on this thread. The vote will run for 5 days,
>> ending 08/04/2017.
>>
>> Please note that my up-to-date public key is available from:
>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> Please don't forget to refresh the page if you've been there recently.
>> There are other places on Apache sites which may contain my outdated key.
>>
>> Thanks,
>> --Konstantin
>
> --
> John
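For context on the 403 mentioned above: the /jmx, /conf, /logLevel, and /stacks servlets apply an admin check when instrumentation access is marked admin-only, and an unauthenticated browser request is mapped to the configured static web user (dr.who by default). A hedged core-site.xml sketch of the two knobs involved (the values shown are the defaults and are illustrative, not a recommendation for any particular deployment):

```xml
<!-- Identity assigned to HTTP requests that carry no authentication
     information (default: dr.who). -->
<property>
  <name>hadoop.http.staticuser.user</name>
  <value>dr.who</value>
</property>

<!-- When true, the instrumentation servlets (/jmx, /conf, /stacks,
     /logLevel) require the caller to be an admin user; a non-admin
     request then gets a 403 like the one reported above. -->
<property>
  <name>hadoop.security.instrumentation.requires.admin</name>
  <value>false</value>
</property>
```

Whether this exact setting explains the branch-2.7 vs branch-2.8 difference John saw would need checking against the configs used in each test run.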
Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
Hi Brahma Reddy Battula,

Formally, Apache releases sources; we provide binaries as a reference, for convenience. The release instructions for the Hadoop 2.7 line at
https://wiki.apache.org/hadoop/HowToReleasePreDSBCR
don't give much guidance on how to actually build and package the binary tarball. I intentionally included sources as the main content of the release, and did not include native binaries, which depend very much on the build environment. Let me know if binary packaging is a blocker for you.

Thanks,
--Konstantin

On Sun, Jul 30, 2017 at 7:45 PM, Brahma Reddy Battula <brahmareddy.batt...@huawei.com> wrote:
> Thanks Konstantin.
>
> Native files (lib/native) are missing,
> and the src folder exists in the tarball.
>
> Please check the same.
>
> Host1:/home/Brahma/Release/hadoop-2.7.4 # ll
> total 144
> -rw-r--r--  1 20415 messagebus 86424 Jul 30 02:54 LICENSE.txt
> -rw-r--r--  1 20415 messagebus 14978 Jul 30 02:54 NOTICE.txt
> -rw-r--r--  1 20415 messagebus  1366 Jul 30 02:54 README.txt
> drwxr-xr-x  2 20415 messagebus  4096 Jul 30 02:54 bin
> drwxr-xr-x  3 20415 messagebus  4096 Jul 30 02:54 etc
> -rw-r--r--  1 20415 messagebus  1683 Jul 30 03:15 hadoop-client.list
> drwxr-xr-x  2 20415 messagebus  4096 Jul 30 02:54 include
> drwxr-xr-x  2 20415 messagebus  4096 Jul 30 02:54 libexec
> drwxr-xr-x  2 20415 messagebus  4096 Jul 30 02:54 sbin
> drwxr-xr-x  4 20415 messagebus  4096 Jul 30 02:54 share
> drwxr-xr-x 19 20415 messagebus  4096 Jul 30 03:01 src
>
> --Brahma Reddy Battula
>
> -----Original Message-----
> From: Konstantin Shvachko [mailto:shv.had...@gmail.com]
> Sent: 30 July 2017 07:29
> To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
> mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
> Subject: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
>
> Hi everybody,
>
> Here is the next release of the Apache Hadoop 2.7 line. The previous stable
> release, 2.7.3, has been available since 25 August 2016.
> Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
> critical bug fixes and major optimizations. See more details in the Release
> Note:
> http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html
>
> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
>
> Please give it a try and vote on this thread. The vote will run for 5 days,
> ending 08/04/2017.
>
> Please note that my up-to-date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> Please don't forget to refresh the page if you've been there recently.
> There are other places on Apache sites, which may contain my outdated key.
>
> Thanks,
> --Konstantin
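For reference on the packaging question discussed above: a binary tarball with lib/native/ and without src/ is normally produced by the dist build profile. A hedged sketch, following the conventions in Hadoop's BUILDING.txt (exact native prerequisites, e.g. cmake, protobuf 2.5.0, and zlib/openssl headers, vary by platform):

```shell
# Build the binary distribution with native libraries from the 2.7.4
# source release; tests are skipped for packaging purposes.
mvn clean package -Pdist,native -DskipTests -Dtar

# The resulting tarball lands under hadoop-dist/target/ and, unlike the
# RC0 binary tarball Brahma inspected, should contain lib/native/ and
# no src/ directory.
```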
Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
Thanks for putting this up, Konstantin. I'm +1 (non-binding) on the source tarball.

* verified signature and mds,
* built from source on CentOS 7 and OpenJDK 8 with the native profile,
* built RPMs with Bigtop, deployed a 5-node cluster with the Docker
  provisioner, and ran smoke tests of HDFS, YARN, and MapReduce,
* built the site documentation and skimmed the contents.

The binary tarball seems to be inconsistent with previous 2.7.x releases, as Brahma pointed out.

Bests,
Masatake Iwasaki

On 7/30/17 08:29, Konstantin Shvachko wrote:
> Hi everybody,
>
> Here is the next release of the Apache Hadoop 2.7 line. The previous stable
> release, 2.7.3, has been available since 25 August 2016.
> Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
> critical bug fixes and major optimizations. See more details in the Release
> Note:
> http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html
>
> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
>
> Please give it a try and vote on this thread. The vote will run for 5 days,
> ending 08/04/2017.
>
> Please note that my up-to-date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> Please don't forget to refresh the page if you've been there recently.
> There are other places on Apache sites, which may contain my outdated key.
>
> Thanks,
> --Konstantin

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/480/

[Jul 31, 2017 2:09:13 AM] (aajisaka) YARN-5728. TestMiniYarnClusterNodeUtilization.testUpdateNodeUtilization
[Jul 31, 2017 5:08:30 AM] (aajisaka) HADOOP-14690. RetryInvocationHandler should override toString().
[Jul 31, 2017 5:15:48 AM] (junping_du) HADOOP-14672. Shaded Hadoop-client-minicluster include unshaded classes,

-1 overall

The following subsystems voted -1:
    findbugs unit

The following subsystems voted -1 but were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running (runtime bigger than 1h 0m 0s):
    unit

Specific tests:

FindBugs : module:hadoop-hdfs-project/hadoop-hdfs-client

    Possible exposure of partially initialized object in
    org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int)
    At DFSClient.java:[line 2906]

    org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object)
    makes inefficient use of keySet iterator instead of entrySet iterator
    At SlowDiskReports.java:[line 105]

FindBugs : module:hadoop-hdfs-project/hadoop-hdfs

    Possible null pointer dereference in
    org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus()
    due to return value of called method
    At JournalNode.java:[line 302]

    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String)
    unconditionally sets the field clusterId
    At HdfsServerConstants.java:[line 193]

    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int)
    unconditionally sets the field force
    At HdfsServerConstants.java:[line 217]

    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean)
    unconditionally sets the field isForceFormat
    At HdfsServerConstants.java:[line 229]

    org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean)
    unconditionally sets the field isInteractiveFormat
    At HdfsServerConstants.java:[line 237]

    Possible null pointer dereference in
    org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, int, HardLink, boolean, File, List)
    due to return value of called method
    At DataStorage.java:[line 1339]

    Possible null pointer dereference in
    org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String, long)
    due to return value of called method
    At NNStorageRetentionManager.java:[line 258]

    Possible null pointer dereference in
    org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, BasicFileAttributes)
    due to return value of called method
    At NNUpgradeUtil.java:[line 133]

    Useless condition: argv.length >= 1 at this point
    At DFSAdmin.java:[line 2100]

    Useless condition: numBlocks == -1 at this point
    At ImageLoaderCurrent.java:[line 727]

FindBugs : module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager

    Useless object stored in variable removedNullContainers of method
    org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
    At NodeStatusUpdaterImpl.java:[line 642]

    org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
    makes inefficient use of keySet iterator instead of entrySet iterator
    At NodeStatusUpdaterImpl.java:[line 719]

    Hard coded reference to an absolute pathname in
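For readers unfamiliar with the keySet-iterator findings repeated in the report above (FindBugs pattern WMI_WRONG_MAP_ITERATOR): iterating `keySet()` and calling `get(key)` inside the loop performs a second map lookup per entry, while `entrySet()` yields key and value in a single pass. A minimal illustration (the class and method names here are made up for the example, not from the flagged Hadoop code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EntrySetDemo {

    // Inefficient form flagged by FindBugs: one extra map lookup per key.
    static int sumViaKeySet(Map<String, Integer> m) {
        int sum = 0;
        for (String k : m.keySet()) {
            sum += m.get(k);
        }
        return sum;
    }

    // Preferred form: entrySet avoids the per-key lookup entirely.
    static int sumViaEntrySet(Map<String, Integer> m) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new LinkedHashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        // Both forms compute the same result; only the cost differs.
        System.out.println(sumViaKeySet(m) + " " + sumViaEntrySet(m));
    }
}
```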
RE: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
+1 (non-binding)

- Installed an HA cluster.
- Verified HDFS basics:
  - Read/Write/Append
  - Snapshot
  - ACL
  - xAttr
  - Truncate
  - Quota
  - Admin operations

-Surendra Singh Lilhore

-----Original Message-----
From: Konstantin Shvachko [mailto:shv.had...@gmail.com]
Sent: 30 July 2017 04:59
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

Hi everybody,

Here is the next release of the Apache Hadoop 2.7 line. The previous stable release, 2.7.3, has been available since 25 August 2016.
Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are critical bug fixes and major optimizations. See more details in the Release Note:
http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html

The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/

Please give it a try and vote on this thread. The vote will run for 5 days, ending 08/04/2017.

Please note that my up-to-date public key is available from:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
Please don't forget to refresh the page if you've been there recently. There are other places on Apache sites, which may contain my outdated key.

Thanks,
--Konstantin
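Checklists like the one above are straightforward to script with the standard `hdfs` CLI. A hedged sketch against a running 2.7.x cluster (the paths, ACL user, xattr name, and quota value are illustrative assumptions; snapshot and quota commands need superuser privileges):

```shell
# Basic read/write/append
hdfs dfs -mkdir -p /tmp/smoke
hdfs dfs -put README.txt /tmp/smoke/
hdfs dfs -appendToFile NOTICE.txt /tmp/smoke/README.txt
hdfs dfs -cat /tmp/smoke/README.txt > /dev/null

# ACL and xAttr
hdfs dfs -setfacl -m user:alice:r-x /tmp/smoke
hdfs dfs -setfattr -n user.tag -v smoke /tmp/smoke/README.txt

# Truncate
hdfs dfs -truncate -w 1024 /tmp/smoke/README.txt

# Snapshot (admin enables it first)
hdfs dfsadmin -allowSnapshot /tmp/smoke
hdfs dfs -createSnapshot /tmp/smoke s1

# Quota
hdfs dfsadmin -setSpaceQuota 1g /tmp/smoke
hdfs dfs -count -q /tmp/smoke
```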
About HDFS consistency model
Hi Dev,

In short: is there a full introduction to the HDFS consistency model? I have already read some miscellaneous material, e.g.:

- some simple scenarios:
  - read + read: OK
  - write + write: forbidden (guaranteed by the lease)
- the design docs of append and truncate
- hflush/hsync
- the top pages of a Google search for "hdfs consistency model"

But I didn't find a detailed doc/link that clarifies this topic, especially how concurrent read + write is handled, e.g.:

- Can a later reader read the contents that a writer (which hasn't closed the file yet) has appended?
- Are hflush/hsync the only ways to ensure consistency?
- etc.

I know the GFS paper has a specific section clarifying this topic, so I think HDFS may have a similar one.

Many thanks for any helpful links.

--
Regards,
Hongxu
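On the hflush/hsync part of the question: roughly speaking, in HDFS, hflush() pushes the client's buffered bytes to the DataNode pipeline so that readers opening the file afterwards can see them, while hsync() additionally forces the data to disk on the DataNodes; bytes written after the last hflush/hsync are not guaranteed visible to readers while the file is still open. As a local-filesystem analogy only (plain java.io on one machine, NOT the HDFS client API, and the single-machine behavior is stronger than what HDFS guarantees):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FlushDemo {

    // Append data, flush it, and read the file back while the writer is
    // still open -- mirroring the "can a reader see un-closed appends?"
    // question. flush() plays the role of hflush() (visibility) and
    // FileDescriptor.sync() plays the role of hsync() (durability).
    static String writeFlushRead(Path p, String data) throws IOException {
        try (FileOutputStream out = new FileOutputStream(p.toFile(), true)) {
            out.write(data.getBytes());
            out.flush();            // ~ hflush(): make bytes visible to readers
            String seen = new String(Files.readAllBytes(p));
            out.getFD().sync();     // ~ hsync(): force bytes to storage
            return seen;
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("flushdemo", ".txt");
        try {
            System.out.println(writeFlushRead(p, "hello"));
        } finally {
            Files.delete(p);
        }
    }
}
```

In real HDFS the equivalent calls live on org.apache.hadoop.fs.FSDataOutputStream, and visibility applies to readers who open (or re-open) the file after the hflush, not necessarily to readers already streaming it.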
Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)
Hi Konstantin,

Thanks a lot for the effort to prepare the 2.7.4-RC0 release!

+1 (non-binding)

- Verified checksums and signatures of all tarballs
- Built source with native, Java 1.8.0_131-b11 on Mac OS X 10.12.6
- Verified cloud connectors:
  - All S3A integration tests
- Deployed both binary and built source to a pseudo cluster; passed the
  following sanity tests in insecure, SSL, and SSL+Kerberos mode:
  - HDFS basic and ACL
  - DistCp basic
  - MapReduce wordcount (only failed in SSL+Kerberos mode for the binary
    tarball, probably unrelated)
  - KMS and HttpFS basic
  - Balancer start/stop

Shall we worry about these test failures? Likely fixed by
https://issues.apache.org/jira/browse/HADOOP-13707.

- Got "curl: (22) The requested URL returned error: 403 User dr.who is
  unauthorized to access this page." when accessing the NameNode web
  servlets /jmx, /conf, /logLevel, and /stacks. It passed in branch-2.8.

On Sat, Jul 29, 2017 at 4:29 PM, Konstantin Shvachko wrote:
> Hi everybody,
>
> Here is the next release of the Apache Hadoop 2.7 line. The previous stable
> release, 2.7.3, has been available since 25 August 2016.
> Release 2.7.4 includes 264 issues fixed after release 2.7.3, which are
> critical bug fixes and major optimizations. See more details in the Release
> Note:
> http://home.apache.org/~shv/hadoop-2.7.4-RC0/releasenotes.html
>
> The RC0 is available at: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
>
> Please give it a try and vote on this thread. The vote will run for 5 days,
> ending 08/04/2017.
>
> Please note that my up-to-date public key is available from:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> Please don't forget to refresh the page if you've been there recently.
> There are other places on Apache sites, which may contain my outdated key.
>
> Thanks,
> --Konstantin

--
John