[jira] [Created] (HADOOP-14392) Memory leak issue in Zlib compressor

2017-05-05 Thread Jimmy Ouyang (JIRA)
Jimmy Ouyang created HADOOP-14392:
-

 Summary: Memory leak issue in Zlib compressor
 Key: HADOOP-14392
 URL: https://issues.apache.org/jira/browse/HADOOP-14392
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.0.0-alpha2, 3.0.0-alpha1, 2.6.0
Reporter: Jimmy Ouyang
Priority: Critical


While using Hadoop-2.6.0 and Hadoop-3.0, we noticed that in 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressorStream.java
 there is a memory leak caused by a missing call to compressor.end() in 
close(). compressor.end() calls the zlib native function deflateEnd(), which 
frees zlib's internal buffers.
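The same lifecycle exists in the JDK's own zlib binding: java.util.zip.Deflater holds native zlib state that is only released by end(), which invokes deflateEnd(). A minimal self-contained sketch of the correct close pattern (an illustration of the leak mechanism, not the CompressorStream code itself):

```java
import java.util.zip.Deflater;

public class DeflaterEndDemo {
    // Compress a buffer and release the native zlib state. Without the
    // end() call in the finally block, the deflateEnd() that frees zlib's
    // internal buffers is deferred to finalization (or never runs) --
    // the leak pattern described above.
    static int compress(byte[] input, byte[] output) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            return deflater.deflate(output);
        } finally {
            deflater.end(); // calls native deflateEnd(), freeing zlib buffers
        }
    }

    public static void main(String[] args) {
        byte[] out = new byte[256];
        int n = compress("hello hello hello".getBytes(), out);
        System.out.println("compressed to " + n + " bytes");
    }
}
```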



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-05-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/305/

[May 4, 2017 6:39:14 PM] (wang) HDFS-11643. Add shouldReplicate option to 
create builder. Contributed by
[May 4, 2017 7:06:50 PM] (lei) HDFS-11687. Add new public encryption APIs 
required by Hive. (lei)
[May 4, 2017 8:17:46 PM] (jlowe) HADOOP-14380. Make the Guava version Hadoop 
which builds with
[May 4, 2017 10:25:56 PM] (vrushali) YARN-6375 App level aggregation should not 
consider metric values
[May 4, 2017 10:57:44 PM] (arp) HDFS-11448. JN log segment syncing should 
support HA upgrade.
[May 5, 2017 12:21:46 AM] (rkanter) YARN-6522. Make SLS JSON input file format 
simple and scalable (yufeigu
[May 5, 2017 3:54:50 AM] (yqlin) HDFS-11530. Use HDFS specific network topology 
to choose datanode in
[May 5, 2017 12:03:09 PM] (stevel) HADOOP-14382 Remove usages of 
MoreObjects.toStringHelper. Contributed by




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.ha.TestZKFailoverControllerStress 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190 
   hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.server.namenode.TestCheckpoint 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 
   hadoop.hdfs.TestDFSRSDefault10x4StripedOutputStreamWithFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.TestFileAppend 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapred.TestShuffleHandler 
   hadoop.tools.TestHadoopArchiveLogsRunner 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 

Timed out junit tests :

   org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults 
   org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   
org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStorePerf 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/305/artifact/out/patch-mvninstall-root.txt
  [496K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/305/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/305/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   

[jira] [Created] (HADOOP-14391) s3a: auto-detect region for bucket and use right endpoint

2017-05-05 Thread Aaron Fabbri (JIRA)
Aaron Fabbri created HADOOP-14391:
-

 Summary: s3a: auto-detect region for bucket and use right endpoint
 Key: HADOOP-14391
 URL: https://issues.apache.org/jira/browse/HADOOP-14391
 Project: Hadoop Common
  Issue Type: Improvement
  Components: s3
Affects Versions: 3.0.0-alpha2
Reporter: Aaron Fabbri


Specifying the S3A endpoint ({{fs.s3a.endpoint}}) is

- *required* for regions that only support v4 authentication;
- a good practice for all regions.

The user experience of having to configure endpoints is not great. It is often 
neglected, which leads to additional cost, reduced performance, or failures in 
v4 auth regions.

I want to explore an option which, when enabled, auto-detects the region for an 
S3 bucket and uses the proper endpoint. I'm not sure if this is possible or 
whether anyone has looked into it yet.
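Until such auto-detection exists, the endpoint has to be configured by hand. As a hypothetical example, for a bucket in eu-central-1 (a v4-only region), core-site.xml would carry something like:

```xml
<!-- Example only: point s3a at the eu-central-1 (v4-auth-only) endpoint. -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
```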






Re: About 2.7.4 Release

2017-05-05 Thread Erik Krogen
List LGTM Konstantin!

Let's say that we will only create a new tracking JIRA for patches which do
not backport cleanly, to avoid having too many lying around. Otherwise we
can attach directly to the old ticket. If a clean backport does happen to break
a test, the nightly build will help us catch it.

Erik

On Thu, May 4, 2017 at 7:21 PM, Konstantin Shvachko 
wrote:

> Great Zhe. Let's monitor the build.
>
> I marked all jiras I knew of for inclusion into 2.7.4 as I described
> before.
> Target Version/s: 2.7.4
> Label: release-blocker
>
> Here is the link to the list: https://s.apache.org/Dzg4
> Please let me know if I missed anything.
> And feel free to pick up any. Most of the backports are pretty straightforward,
> but not all.
>
> We can create tracking jiras for backporting if you need to run Jenkins on
> the patch (and since Allen does not allow reopening them).
> But I think the final patch should be attached to the original jira.
> Otherwise history will be hard to follow.
>
> Thanks,
> --Konstantin
>
> On Wed, May 3, 2017 at 4:53 PM, Zhe Zhang  wrote:
>
> > Thanks for volunteering as RM Konstantin! The plan LGTM.
> >
> > I've created a nightly Jenkins job for branch-2.7 (unit tests):
> > https://builds.apache.org/job/Hadoop-branch2.7-nightly/
> >
> > On Wed, May 3, 2017 at 12:42 AM Konstantin Shvachko <
> shv.had...@gmail.com>
> > wrote:
> >
> >> Hey guys,
> >>
> >> A few of my colleagues and I would like to help here and move the 2.7.4
> >> release
> >> forward. A few points in this regard.
> >>
> >> 1. Reading through this thread since March 1, I see that Vinod hinted at
> >> managing the release. Vinod, if you still want the job / have the bandwidth,
> >> I will be happy to work with you.
> >> Otherwise I am glad to volunteer as the release manager.
> >>
> >> 2. In addition to current blockers and criticals, I would like to
> propose
> >> a
> >> few issues to be included in the release, see the list below. Those are
> >> mostly bug fixes and optimizations, which we already have in our
> internal
> >> branch and run in production. Plus one minor feature "node labeling",
> >> which
> >> we found very handy, when you have heterogeneous environments and mixed
> >> workloads, like MR and Spark.
> >>
> >> 3. For marking issues for the release I propose to
> >>  - set the target version to 2.7.4, and
> >>  - add a new label "release-blocker"
> >> That way we will know issues targeted for the release without reopening
> >> them for backports.
> >>
> >> 4. I see quite a few people are interested in the release. With all the
> >> help I think we can target to release by the end of May.
> >>
> >> Other things include fixing CHANGES.txt and fixing Jenkins build for
> 2.7.4
> >> branch.
> >>
> >> Thanks,
> >> --Konstantin
> >>
> >> ===  List of issues for 2.7.4  ===
> >> -- Backports
> >> HADOOP-12975 . Add
> du
> >> jitters
> >> HDFS-9710 . IBR
> batching
> >> HDFS-10715 . NPE when
> >> applying AvailableSpaceBlockPlacementPolicy
> >> HDFS-2538 . fsck
> removal
> >> of dot printing
> >> HDFS-8131 .
> >> space-balanced
> >> policy for balancer
> >> HDFS-8549 . abort
> >> balancer
> >> if upgrade in progress
> >> HDFS-9412 . skip small
> >> blocks in getBlocks
> >>
> >> YARN-1471 . SLS
> >> simulator
> >> YARN-4302 . SLS
> >> YARN-4367 . SLS
> >> YARN-4612 . SLS
> >>
> >> - Node labeling
> >> MAPREDUCE-6304 
> >> YARN-2943 
> >> YARN-4109 
> >> YARN-4140 
> >> YARN-4250 
> >> YARN-4925 
> >>
> > --
> > Zhe Zhang
> > Apache Hadoop Committer
> > http://zhe-thoughts.github.io/about/ | @oldcap
> >
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-05-05 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/394/

[May 4, 2017 6:39:14 PM] (wang) HDFS-11643. Add shouldReplicate option to 
create builder. Contributed by
[May 4, 2017 7:06:50 PM] (lei) HDFS-11687. Add new public encryption APIs 
required by Hive. (lei)
[May 4, 2017 8:17:46 PM] (jlowe) HADOOP-14380. Make the Guava version Hadoop 
which builds with
[May 4, 2017 10:25:56 PM] (vrushali) YARN-6375 App level aggregation should not 
consider metric values
[May 4, 2017 10:57:44 PM] (arp) HDFS-11448. JN log segment syncing should 
support HA upgrade.
[May 5, 2017 12:21:46 AM] (rkanter) YARN-6522. Make SLS JSON input file format 
simple and scalable (yufeigu
[May 5, 2017 3:54:50 AM] (yqlin) HDFS-11530. Use HDFS specific network topology 
to choose datanode in




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 350] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet 
iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   

[jira] [Created] (HADOOP-14389) Exception handling is incorrect in KerberosName.java

2017-05-05 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-14389:
-

 Summary: Exception handling is incorrect in KerberosName.java
 Key: HADOOP-14389
 URL: https://issues.apache.org/jira/browse/HADOOP-14389
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Andras Bokor
Assignee: Andras Bokor


I found multiple inconsistencies:

Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}}
Principal: {{nn/host.dom...@realm.tld}}
Expected exception: {{BadStringFormat: ...3 is out of range...}}
Actual exception: {{ArrayIndexOutOfBoundsException: 3}}

Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (Missing num of components)
Expected: {{IllegalArgumentException}}
Actual: {{java.lang.NumberFormatException: For input string: ""}}

Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}}
Expected: {{BadStringFormat: -1 is outside of valid range...}}
Actual: {{java.lang.NumberFormatException: For input string: ""}}

Rule: {{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}}
Expected: {{java.lang.NumberFormatException: For input string: "one"}}
Actual: {{java.lang.NumberFormatException: For input string: ""}}


In addition:
{code}[^\\]]{code}
does not really make sense in {{ruleParser}}. Most probably it was needed 
because we parse the whole rule string and remove each parsed rule from the 
beginning of the string ({{KerberosName#parseRules}}); without it, the regex 
engine parsed incorrectly.

In addition:
In tests some corner cases are not covered.
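To illustrate the missing-component-count case, here is a stripped-down sketch with a hypothetical header regex (far simpler than the real {{KerberosName#ruleParser}}): because the count is captured with {{\d*}}, a missing count matches as the empty string, and {{Integer.parseInt("")}} throws the confusing {{NumberFormatException: For input string: ""}} instead of an {{IllegalArgumentException}}:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RuleParseDemo {
    // Hypothetical, simplified rule-header pattern: \d* also matches "".
    private static final Pattern RULE_HEAD = Pattern.compile("RULE:\\[(\\d*):");

    // Returns the captured component count, or null if the header doesn't match.
    static String componentCount(String rule) {
        Matcher m = RULE_HEAD.matcher(rule);
        return m.lookingAt() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(componentCount("RULE:[2:$1/$2@$3](.*)s/.*/hdfs/")); // "2"
        try {
            // Missing count: parseInt sees "" and fails with the unhelpful message.
            Integer.parseInt(componentCount("RULE:[:$1/$2@$0](.*)s/.*/hdfs/"));
        } catch (NumberFormatException e) {
            System.out.println(e.getMessage()); // For input string: ""
        }
    }
}
```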








[jira] [Created] (HADOOP-14388) Don't set the key password if there is a problem reading SSL configuration

2017-05-05 Thread Colm O hEigeartaigh (JIRA)
Colm O hEigeartaigh created HADOOP-14388:


 Summary: Don't set the key password if there is a problem reading 
SSL configuration
 Key: HADOOP-14388
 URL: https://issues.apache.org/jira/browse/HADOOP-14388
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.3, 2.8.0
Reporter: Colm O hEigeartaigh
Assignee: Colm O hEigeartaigh
Priority: Minor
 Fix For: 2.7.4, 2.8.1


When launching MiniDFSCluster in a test, with 
"dfs.data.transfer.protection=integrity" and without specifying an 
ssl-server.xml, the code hangs on "builder.build()".

This is because HttpServer2 sets a null value on the 
SslSocketConnector:

c.setKeyPassword(keyPassword);

Instead, this call should be inside the "if (keystore != null) {" block. Once 
this is done, the code exits as expected with an error.
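A self-contained sketch of the shape of that guard. SslConfig below is a hypothetical stand-in, not the real Jetty SslSocketConnector API; it only demonstrates moving the key-password assignment inside the keystore null check:

```java
public class SslGuardDemo {
    // Hypothetical stand-in for the SSL connector configured by HttpServer2.
    static final class SslConfig {
        String keystore;
        String keyPassword;
    }

    static SslConfig configure(String keystore, String keyPassword) {
        SslConfig c = new SslConfig();
        // Proposed guard: only touch key settings once the keystore
        // (i.e. ssl-server.xml) was actually read, so a broken SSL
        // configuration fails fast instead of hanging on a null password.
        if (keystore != null) {
            c.keystore = keystore;
            c.keyPassword = keyPassword;
        }
        return c;
    }

    public static void main(String[] args) {
        SslConfig ok = configure("server.jks", "secret");
        System.out.println("keystore=" + ok.keystore);
        SslConfig bad = configure(null, "secret");
        System.out.println("keystore=" + bad.keystore); // null: caller should report an error
    }
}
```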






[jira] [Created] (HADOOP-14387) new Configuration() fails if core-site.xml isn't on the classpath

2017-05-05 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14387:
---

 Summary: new Configuration() fails if core-site.xml isn't on the 
classpath
 Key: HADOOP-14387
 URL: https://issues.apache.org/jira/browse/HADOOP-14387
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0-alpha3
 Environment: test run in downstream project with no core-site in 
test/resources
Reporter: Steve Loughran
Priority: Blocker


If you try to create a config via {{new Configuration()}} and there isn't a 
{{core-site.xml}} on the classpath, you get a stack trace. Previously the 
failure to load was silently skipped.

This is a regression that breaks downstream apps which don't need a core-site 
to run, but do want to load core-default.


