Re: About 2.7.4 Release

2017-05-08 Thread Konstantin Shvachko
Hi Brahma Reddy Battula,

Actually the original link works fine: https://s.apache.org/Dzg4
Your link excludes closed and resolved issues, which need backporting and
which we cannot reopen, as discussed earlier in this thread.
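For illustration, filters like these short links resolve to JQL queries against JIRA's REST search API. The actual JQL behind the two links is not visible from the thread, so the queries below are an assumed sketch of the two filters (one including Resolved/Closed issues, one excluding them), not the real ones:

```python
from urllib.parse import urlencode

# Assumed JQL for the two filters under discussion -- the short links hide
# the real queries, so these strings are illustrative only.
JQL_ALL = ('project in (HADOOP, HDFS, YARN, MAPREDUCE)'
           ' AND "Target Version/s" = 2.7.4 AND labels = release-blocker')
JQL_OPEN_ONLY = JQL_ALL + ' AND status not in (Resolved, Closed)'

def search_url(jql, base='https://issues.apache.org/jira/rest/api/2/search'):
    """Build a JIRA REST search URL for the given JQL query."""
    return base + '?' + urlencode({'jql': jql, 'fields': 'key,status'})

print(search_url(JQL_ALL))        # the inclusive filter
print(search_url(JQL_OPEN_ONLY))  # the filter that drops Resolved/Closed
```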

Looked through the issues you proposed:

HDFS-9311 
Seems like a new feature. It helps fail over to the standby node when the
primary is under heavy load, but it introduces new APIs, addresses, and
config parameters, and needs at least one follow-up jira.
Looks like a backward-compatible change, though.
Did you have a chance to run it in production?

+1 on
HDFS-10987 
HDFS-9902 
HDFS-8312 
HADOOP-14100 

Added them to 2.7.4 release. You should see them via the above link now.
It would be great if you could attach backport patches for some of them.
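Since some of these need hand backports, here is a minimal, self-contained sketch of the usual cherry-pick-and-attach flow, run in a throwaway repo; the branch name mirrors branch-2.7, and the HDFS-XXXX id and file names are placeholders, not from this thread:

```shell
#!/bin/sh
# Illustrative backport flow in a throwaway repo; HDFS-XXXX is a placeholder.
set -e
cd "$(mktemp -d)"
git init -q backport-demo && cd backport-demo
git config user.email demo@example.com
git config user.name demo
echo base > file && git add file && git commit -qm "base"
git branch branch-2.7                       # release branch to backport to
echo fix >> file && git commit -qam "HDFS-XXXX. Some fix"  # lands on trunk
FIX=$(git rev-parse HEAD)
git checkout -q branch-2.7
git cherry-pick -x "$FIX" >/dev/null        # -x records the original commit id
# Patch file to attach to the original jira:
git format-patch -1 --stdout > HDFS-XXXX-branch-2.7.patch
grep "cherry picked from commit" HDFS-XXXX-branch-2.7.patch
```

A clean cherry-pick like this keeps the history traceable from the release branch back to trunk, which matches the preference later in the thread for attaching the final patch to the original jira.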

Appreciate your help,
--Konstantin

On Mon, May 8, 2017 at 8:39 AM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

>
> Looks like the following link is not correct:
>
> https://s.apache.org/Dzg4
>
> Should it be the following instead?
>
> https://s.apache.org/wi3U
>
>
> Apart from what Konstantin mentioned, are the following also good to go?
> Let me know your thoughts on this.
>
> For Large Cluster:
> =
>
> https://issues.apache.org/jira/browse/HDFS-9311===Lifeline Protocol
> https://issues.apache.org/jira/browse/HDFS-10987===Decommission is
> expensive when lots of blocks are present
>
> https://issues.apache.org/jira/browse/HDFS-9902===
> "dfs.datanode.du.reserved"  per Storage Type
>
> For Security:
> =
> https://issues.apache.org/jira/browse/HDFS-8312===Trash does not descend
> into child directories to check for permissions
> https://issues.apache.org/jira/browse/HADOOP-14100===Upgrade Jsch jar to
> latest version to fix vulnerability in old versions
>
>
>
> Regards
> Brahma Reddy Battula
>
> -Original Message-
> From: Erik Krogen [mailto:ekro...@linkedin.com.INVALID]
> Sent: 06 May 2017 02:40
> To: Konstantin Shvachko
> Cc: Zhe Zhang; Hadoop Common; Hdfs-dev; mapreduce-dev@hadoop.apache.org;
> yarn-...@hadoop.apache.org
> Subject: Re: About 2.7.4 Release
>
> List LGTM Konstantin!
>
> Let's say that we will only create a new tracking JIRA for patches which
> do not backport cleanly, to avoid having too many lying around; otherwise
> we can attach directly to the old ticket. If a clean backport does happen
> to break a test, the nightly build will help us catch it.
>
> Erik
>
> On Thu, May 4, 2017 at 7:21 PM, Konstantin Shvachko 
> wrote:
>
> > Great Zhe. Let's monitor the build.
> >
> > I marked all jiras I knew of for inclusion into 2.7.4 as I described
> > before.
> > Target Version/s: 2.7.4
> > Label: release-blocker
> >
> > Here is the link to the list: https://s.apache.org/Dzg4 Please let me
> > know if I missed anything.
> > And feel free to pick up any. Most of the backports are pretty
> > straightforward, but not all.
> >
> > We can create tracking jiras for backporting if you need to run
> > Jenkins on the patch (and since Allen does not allow reopening them).
> > But I think the final patch should be attached to the original jira.
> > Otherwise history will be hard to follow.
> >
> > Thanks,
> > --Konstantin
> >
> > On Wed, May 3, 2017 at 4:53 PM, Zhe Zhang  wrote:
> >
> > > Thanks for volunteering as RM Konstantin! The plan LGTM.
> > >
> > > I've created a nightly Jenkins job for branch-2.7 (unit tests):
> > > https://builds.apache.org/job/Hadoop-branch2.7-nightly/
> > >
> > > On Wed, May 3, 2017 at 12:42 AM Konstantin Shvachko <
> > shv.had...@gmail.com>
> > > wrote:
> > >
> > >> Hey guys,
> > >>
> > >> A few of my colleagues and I would like to help here and move the
> > >> 2.7.4 release forward. A few points in this regard.
> > >>
> > >> 1. Reading through this thread since March 1, I see that Vinod
> > >> hinted at managing the release. Vinod, if you still want the job /
> > >> have bandwidth, I will be happy to work with you.
> > >> Otherwise I am glad to volunteer as the release manager.
> > >>
> > >> 2. In addition to current blockers and criticals, I would like to
> > propose
> > >> a
> > >> few issues to be included in the release, see the list below. Those
> > >> are mostly bug fixes and optimizations, which we already have in
> > >> our
> > internal
> > >> branch and run in production. Plus one minor feature, "node
> > >> labeling", which we found very handy when you have heterogeneous
> > >> environments and mixed workloads like MR and Spark.
> > >>
> > >> 3. For marking issues for the release I propose to
> > >>  - set the target version to 2.7.4, and
> > >>  - add a new label "release-blocker"
> > >> That way we will know issues targeted for the release without
> > >> reopening them for backports.
> > >>
> > >> 4. I see quite a few people 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-05-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/

[May 7, 2017 8:59:15 PM] (wang) HADOOP-14298. TestHadoopArchiveLogsRunner 
fails. Contributed by Akira
[May 7, 2017 9:45:26 PM] (wang) HDFS-9342. Erasure coding: client should update 
and commit block based




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.server.namenode.TestProcessCorruptBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.TestDFSUpgrade 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives 
   hadoop.hdfs.TestLocalDFS 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.TestFileAppend 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapred.TestShuffleHandler 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 

Timed out junit tests :

   org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStorePerf 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-mvninstall-root.txt
  [492K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [1.5M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/308/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   

RE: About 2.7.4 Release

2017-05-08 Thread Brahma Reddy Battula

Looks like the following link is not correct:

https://s.apache.org/Dzg4

Should it be the following instead?

https://s.apache.org/wi3U


Apart from what Konstantin mentioned, are the following also good to go? Let me 
know your thoughts on this.

For Large Cluster:
=

https://issues.apache.org/jira/browse/HDFS-9311===Lifeline Protocol
https://issues.apache.org/jira/browse/HDFS-10987===Decommission is expensive 
when lots of blocks are present

https://issues.apache.org/jira/browse/HDFS-9902=== "dfs.datanode.du.reserved"  
per Storage Type
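As a sketch of what HDFS-9902 enables: per-storage-type reserved space is configured by suffixing the base key with a storage type. The exact key suffixes and values below are assumptions for illustration and should be checked against the jira and hdfs-default.xml:

```xml
<!-- Sketch only: per-storage-type reserved-space keys per HDFS-9902;
     verify exact key names against the jira before relying on them. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>10737418240</value> <!-- default reserve per volume: 10 GB -->
</property>
<property>
  <name>dfs.datanode.du.reserved.ssd</name>
  <value>21474836480</value> <!-- assumed SSD-specific reserve: 20 GB -->
</property>
```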

For Security:
=
https://issues.apache.org/jira/browse/HDFS-8312===Trash does not descend into 
child directories to check for permissions
https://issues.apache.org/jira/browse/HADOOP-14100===Upgrade Jsch jar to latest 
version to fix vulnerability in old versions



Regards
Brahma Reddy Battula

-Original Message-
From: Erik Krogen [mailto:ekro...@linkedin.com.INVALID] 
Sent: 06 May 2017 02:40
To: Konstantin Shvachko
Cc: Zhe Zhang; Hadoop Common; Hdfs-dev; mapreduce-dev@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Subject: Re: About 2.7.4 Release

List LGTM Konstantin!

Let's say that we will only create a new tracking JIRA for patches which do not 
backport cleanly, to avoid having too many lying around; otherwise we can 
attach directly to the old ticket. If a clean backport does happen to break a 
test, the nightly build will help us catch it.

Erik

On Thu, May 4, 2017 at 7:21 PM, Konstantin Shvachko 
wrote:

> Great Zhe. Let's monitor the build.
>
> I marked all jiras I knew of for inclusion into 2.7.4 as I described 
> before.
> Target Version/s: 2.7.4
> Label: release-blocker
>
> Here is the link to the list: https://s.apache.org/Dzg4 Please let me 
> know if I missed anything.
> And feel free to pick up any. Most of the backports are pretty 
> straightforward, but not all.
>
> We can create tracking jiras for backporting if you need to run 
> Jenkins on the patch (and since Allen does not allow reopening them).
> But I think the final patch should be attached to the original jira.
> Otherwise history will be hard to follow.
>
> Thanks,
> --Konstantin
>
> On Wed, May 3, 2017 at 4:53 PM, Zhe Zhang  wrote:
>
> > Thanks for volunteering as RM Konstantin! The plan LGTM.
> >
> > I've created a nightly Jenkins job for branch-2.7 (unit tests):
> > https://builds.apache.org/job/Hadoop-branch2.7-nightly/
> >
> > On Wed, May 3, 2017 at 12:42 AM Konstantin Shvachko <
> shv.had...@gmail.com>
> > wrote:
> >
> >> Hey guys,
> >>
> >> A few of my colleagues and I would like to help here and move the 2.7.4 
> >> release forward. A few points in this regard.
> >>
> >> 1. Reading through this thread since March 1, I see that Vinod 
> >> hinted at managing the release. Vinod, if you still want the job / 
> >> have bandwidth, I will be happy to work with you.
> >> Otherwise I am glad to volunteer as the release manager.
> >>
> >> 2. In addition to current blockers and criticals, I would like to
> propose
> >> a
> >> few issues to be included in the release, see the list below. Those 
> >> are mostly bug fixes and optimizations, which we already have in 
> >> our
> internal
> >> branch and run in production. Plus one minor feature "node 
> >> labeling", which we found very handy, when you have heterogeneous 
> >> environments and mixed workloads, like MR and Spark.
> >>
> >> 3. For marking issues for the release I propose to
> >>  - set the target version to 2.7.4, and
> >>  - add a new label "release-blocker"
> >> That way we will know issues targeted for the release without 
> >> reopening them for backports.
> >>
> >> 4. I see quite a few people are interested in the release. With all 
> >> the help I think we can target to release by the end of May.
> >>
> >> Other things include fixing CHANGES.txt and fixing Jenkins build 
> >> for
> 2.7.4
> >> branch.
> >>
> >> Thanks,
> >> --Konstantin
> >>
> >> ==  List of issue for 2.7.4  ===
> >> -- Backports
> >> HADOOP-12975 . 
> >> Add
> du
> >> jitters
> >> HDFS-9710 . IBR
> batching
> >> HDFS-10715 . NPE 
> >> when applying AvailableSpaceBlockPlacementPolicy
> >> HDFS-2538 . fsck
> removal
> >> of dot printing
> >> HDFS-8131 .
> >> space-balanced
> >> policy for balancer
> >> HDFS-8549 . abort 
> >> balancer if upgrade in progress
> >> HDFS-9412 . skip 
> >> small blocks in getBlocks
> >>
> >> YARN-1471 . SLS 
> >> simulator
> >> YARN-4302 . SLS
> >> YARN-4367 . SLS
> >> YARN-4612 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-05-08 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/397/

[May 7, 2017 8:59:15 PM] (wang) HADOOP-14298. TestHadoopArchiveLogsRunner 
fails. Contributed by Akira
[May 7, 2017 9:45:26 PM] (wang) HDFS-9342. Erasure coding: client should update 
and commit block based




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 350] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet 
iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl 
At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode At 
DefaultMetricsSystem.java:miniClusterMode At DefaultMetricsSystem.java:[line 
100] 
   Useless object stored in variable seqOs of method