[RESULT] [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-22 Thread Junping Du
Thanks again to all who verified and voted!


I give my binding +1 to conclude the vote for 2.8.0 RC3, based on:
- Built from source and verified signatures
- Deployed a pseudo-distributed cluster with the capacity scheduler
- Verified the daemon UIs (NameNode, ResourceManager, NodeManager, etc.)
- Ran some example MR jobs (pi, sleep, etc.)

Now, we have:

7 binding +1s, from:
 Wangda Tan, Jason Lowe, Akira Ajisaka, Ravi Prakash,
 Karthik Kambatla, Jian He, Junping Du

18 non-binding +1s, from:
Miklos Szegedi, Eric Payne, Daniel Templeton, Mingliang Liu,
Sunil Govind, Marton Elek, Brahma Reddy Battula, Masatake Iwasaki,
Gergo Pasztor, Haibo Chen, Zhihai Xu, John Zhuge, Eric Badger,
Kuhu Shukla, Larry Mccay, Rakesh Radhakrishnan, Naganarasimha Garla,
Varun Saxena

1 binding +0, from:
Steve Loughran

and no -1s.

So I am glad to announce that the vote for 2.8.0 RC3 passes.

Thanks to everyone listed above who tried the release candidate and voted.
Also, kudos to everyone who helped with the 2.8.0 release effort in all kinds of ways.
Without the community working together, we could not have had so many issues found
at the RC stage (for a minor release carrying roughly two years' worth of commits)
and most of them fixed so quickly.

I'll push the release bits and send out an announcement for 2.8.0 soon.


Thanks,

Junping



From: Karthik Kambatla 
Sent: Wednesday, March 22, 2017 2:10 PM
To: varunsax...@apache.org
Cc: Junping Du; common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

+1 (binding)

* Built from source
* Started a pseudo-distributed cluster with fairscheduler.
* Ran sample jobs
* Verified WebUI

On Wed, Mar 22, 2017 at 11:56 AM, varunsax...@apache.org
<varun.saxena.apa...@gmail.com> wrote:
Thanks Junping for creating the release.

+1 (non-binding)

* Verified signatures.
* Built from source.
* Set up a pseudo-distributed cluster.
* Successfully ran pi and wordcount jobs.
* Navigated the YARN RM and NM UI.

Regards,
Varun Saxena.

On Wed, Mar 22, 2017 at 12:13 AM, Haibo Chen <haiboc...@cloudera.com> wrote:

> Thanks Junping for working on the new release!
>
> +1 non-binding
>
> 1) Downloaded the source, verified the checksum
> 2) Built natively from source, and deployed it to a pseudo-distributed
> cluster
> 3) Ran sleep and teragen job and checked both YARN and JHS web UI
> 4) Played with yarn + mapreduce command lines
>
> Best,
> Haibo Chen
>
> On Mon, Mar 20, 2017 at 11:18 AM, Junping Du <j...@hortonworks.com> wrote:
>
> > Thanks for the update, John. Then we should be OK with fixing this issue in
> > 2.8.1.
> >
> > Marked the target version of HADOOP-14205 as 2.8.1 instead of 2.8.0 and
> > bumped it up to blocker so we don't miss it when releasing 2.8.1. :)
> >
> >
> > Thanks,
> >
> >
> > Junping
> >
> > 
> > From: John Zhuge <jzh...@cloudera.com>
> > Sent: Monday, March 20, 2017 10:31 AM
> > To: Junping Du
> > Cc: common-dev@hadoop.apache.org; 
> > hdfs-...@hadoop.apache.org;
> > yarn-...@hadoop.apache.org; 
> > mapreduce-...@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
> >
> > Yes, it only affects ADL. There is a workaround of adding these 2
> > properties to core-site.xml:
> >
> >   <property>
> >     <name>fs.adl.impl</name>
> >     <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
> >   </property>
> >
> >   <property>
> >     <name>fs.AbstractFileSystem.adl.impl</name>
> >     <value>org.apache.hadoop.fs.adl.Adl</value>
> >   </property>
> >
> > I have the initial patch ready but hitting these live unit test failures:
> >
> > Failed tests:
> >   TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257
> >     expected:<1> but was:<10>
> >
> > Tests in error:
> >   TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254 » AccessControl
> >   TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190 » AccessControl
> >
> >
> > Stay tuned...
> >
> > John Zhuge
> > Software Engineer, Cloudera
> >
> > On Mon, Mar 20, 2017 at 10:02 AM, Junping Du <j...@hortonworks.com> wrote:
> >
> > Thank you for reporting the issue, John! Does this issue only affect ADL
> > (Azure Data Lake), which is a new feature in 2.8, rather than other existing
> > filesystems? If so, I think we can leave the fix to 2.8.1, given that this is
> > not a regression but a newly added feature that got broken.
> >
> >
> > Thanks,
> >
> >
> > Junping
> >
> > 
> > From: John Zhuge <jzh...@cloudera.com>

[jira] [Created] (HADOOP-14217) Object Storage: support colon in object path

2017-03-22 Thread Genmao Yu (JIRA)
Genmao Yu created HADOOP-14217:
--

 Summary: Object Storage: support colon in object path
 Key: HADOOP-14217
 URL: https://issues.apache.org/jira/browse/HADOOP-14217
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Genmao Yu









[jira] [Created] (HADOOP-14216) Improve Configuration XML Parsing Performance

2017-03-22 Thread Jonathan Eagles (JIRA)
Jonathan Eagles created HADOOP-14216:


 Summary: Improve Configuration XML Parsing Performance
 Key: HADOOP-14216
 URL: https://issues.apache.org/jira/browse/HADOOP-14216
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jonathan Eagles









[jira] [Created] (HADOOP-14215) DynamoDB client should waitForActive on existing tables

2017-03-22 Thread Sean Mackrory (JIRA)
Sean Mackrory created HADOOP-14215:
--

 Summary: DynamoDB client should waitForActive on existing tables
 Key: HADOOP-14215
 URL: https://issues.apache.org/jira/browse/HADOOP-14215
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Sean Mackrory
Assignee: Sean Mackrory


I saw a case where 2 separate applications tried to use the same
non-pre-existing table with table.create = true at about the same time. One
failed with a ResourceInUse exception. If a table does not exist, we attempt to
create it and then wait for it to enter the active state. If another client
jumps in in the middle of that, it may find that the table already exists, bypass
the call to waitForActive(), and then try to use the table immediately.

While we're at it, let's also make sure that the race condition where a table 
might get created between checking if it exists and attempting to create it is 
handled gracefully.
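
For illustration, here is a minimal sketch of the desired ordering (not the actual S3Guard code; the helper class below is hypothetical), using the AWS SDK v1 DynamoDB document API: create the table if needed, tolerate a concurrent creator, and always wait for ACTIVE before use.

{code:java}
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.ResourceInUseException;

final class TableProvisioner {
  /**
   * Hypothetical sketch: create the table if it is missing, tolerate a
   * concurrent creator, and always wait for ACTIVE before returning.
   */
  static Table provision(DynamoDB dynamoDB, CreateTableRequest request)
      throws InterruptedException {
    Table table;
    try {
      table = dynamoDB.createTable(request);
    } catch (ResourceInUseException e) {
      // Another client created the table between our existence check and
      // createTable(); fall back to the table that now exists.
      table = dynamoDB.getTable(request.getTableName());
    }
    // Wait for ACTIVE whether we created the table or merely found it, so
    // callers never touch a table that is still in the CREATING state.
    table.waitForActive();
    return table;
  }
}
{code}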






Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-22 Thread Karthik Kambatla
+1 (binding)

* Built from source
* Started a pseudo-distributed cluster with fairscheduler.
* Ran sample jobs
* Verified WebUI

On Wed, Mar 22, 2017 at 11:56 AM, varunsax...@apache.org <
varun.saxena.apa...@gmail.com> wrote:

> Thanks Junping for creating the release.
>
> +1 (non-binding)
>
> * Verified signatures.
> * Built from source.
> * Set up a pseudo-distributed cluster.
> * Successfully ran pi and wordcount jobs.
> * Navigated the YARN RM and NM UI.
>
> Regards,
> Varun Saxena.
>
> On Wed, Mar 22, 2017 at 12:13 AM, Haibo Chen 
> wrote:
>
> > Thanks Junping for working on the new release!
> >
> > +1 non-binding
> >
> > 1) Downloaded the source, verified the checksum
> > 2) Built natively from source, and deployed it to a pseudo-distributed
> > cluster
> > 3) Ran sleep and teragen job and checked both YARN and JHS web UI
> > 4) Played with yarn + mapreduce command lines
> >
> > Best,
> > Haibo Chen
> >
> > On Mon, Mar 20, 2017 at 11:18 AM, Junping Du 
> wrote:
> >
> > > Thanks for the update, John. Then we should be OK with fixing this issue
> > > in 2.8.1.
> > >
> > > Marked the target version of HADOOP-14205 as 2.8.1 instead of 2.8.0 and
> > > bumped it up to blocker so we don't miss it when releasing 2.8.1. :)
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Junping
> > >
> > > 
> > > From: John Zhuge 
> > > Sent: Monday, March 20, 2017 10:31 AM
> > > To: Junping Du
> > > Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> > > yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> > > Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
> > >
> > > Yes, it only affects ADL. There is a workaround of adding these 2
> > > properties to core-site.xml:
> > >
> > >   <property>
> > >     <name>fs.adl.impl</name>
> > >     <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
> > >   </property>
> > >
> > >   <property>
> > >     <name>fs.AbstractFileSystem.adl.impl</name>
> > >     <value>org.apache.hadoop.fs.adl.Adl</value>
> > >   </property>
> > >
> > > I have the initial patch ready but hitting these live unit test
> failures:
> > >
> > > Failed tests:
> > >   TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257
> > >     expected:<1> but was:<10>
> > >
> > > Tests in error:
> > >   TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254 » AccessControl
> > >   TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190 » AccessControl
> > >
> > >
> > > Stay tuned...
> > >
> > > John Zhuge
> > > Software Engineer, Cloudera
> > >
> > > On Mon, Mar 20, 2017 at 10:02 AM, Junping Du <j...@hortonworks.com> wrote:
> > >
> > > Thank you for reporting the issue, John! Does this issue only affect ADL
> > > (Azure Data Lake), which is a new feature in 2.8, rather than other
> > > existing filesystems? If so, I think we can leave the fix to 2.8.1, given
> > > that this is not a regression but a newly added feature that got broken.
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Junping
> > >
> > > 
> > > From: John Zhuge <jzh...@cloudera.com>
> > > Sent: Monday, March 20, 2017 9:07 AM
> > > To: Junping Du
> > > Cc: common-dev@hadoop.apache.org;
> > > hdfs-...@hadoop.apache.org;
> > > yarn-...@hadoop.apache.org;
> > > mapreduce-...@hadoop.apache.org
> > > Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
> > >
> > > Discovered https://issues.apache.org/jira/browse/HADOOP-14205 "No
> > > FileSystem for scheme: adl".
> > >
> > > The issue was caused by backporting HADOOP-13037 to branch-2 and earlier.
> > > HADOOP-12666 should not be backported, but some of its changes are needed:
> > > the fs.adl.impl property in core-default.xml and hadoop-tools-dist/pom.xml.
> > >
> > > I am working on a patch.
> > >
> > >
> > > John Zhuge
> > > Software Engineer, Cloudera
> > >
> > > On Fri, Mar 17, 2017 at 2:18 AM, Junping Du <jdu...@hortonworks.com> wrote:
> > > Hi all,
> > >  With the fix for HDFS-11431 in, I've created a new release candidate
> > > (RC3) for Apache Hadoop 2.8.0.
> > >
> > >  This is the next minor release following 2.7.0, which was released more
> > > than a year ago. It comprises 2,900+ fixes, improvements, and new features.
> > > Most of these commits are being released for the first time from branch-2.
> > >
> > >   More information about the 2.8.0 release plan can be found here:
> > > https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
> > >
> > >   The new RC is available at: http://home.apache.org/~junping_du/hadoop-2.8.0-RC3
> > >
> > >   The RC tag in git is: release-2.8.0-RC3, and the latest commit id
> > > is: 91f2b7a13d1e97be65db92ddabc627cc29ac0009
> > >
> > >   The maven artifacts are available via repository.apache.o

[jira] [Created] (HADOOP-14214) DomainSocketWatcher::add()/delete() should not self interrupt while looping await()

2017-03-22 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14214:
--

 Summary: DomainSocketWatcher::add()/delete() should not self 
interrupt while looping await()
 Key: HADOOP-14214
 URL: https://issues.apache.org/jira/browse/HADOOP-14214
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mingliang Liu
Assignee: Mingliang Liu





Our Hive team found a TPCDS job whose queries, running on LLAP, seem to be
getting stuck. Dozens of threads were waiting for the
{{DfsClientShmManager::lock}}, as shown in the following jstack:
{code}
Thread 251 (IO-Elevator-Thread-5):
  State: WAITING
  Blocked count: 3871
  Wtaited count: 4565
  Waiting on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@16ead198
  Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)

java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitUninterruptibly(AbstractQueuedSynchronizer.java:1976)

org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager$EndpointShmManager.allocSlot(DfsClientShmManager.java:255)

org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager.allocSlot(DfsClientShmManager.java:434)

org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.allocShmSlot(ShortCircuitCache.java:1017)

org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:476)

org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:784)

org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:718)

org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:422)
org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:333)

org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1181)

org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1118)
org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1478)
org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1441)
org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
org.apache.hadoop.fs.FSDataInputStream.readFully(FSDataInputStream.java:111)

org.apache.orc.impl.RecordReaderUtils$DefaultDataReader.readStripeFooter(RecordReaderUtils.java:166)

org.apache.hadoop.hive.llap.io.metadata.OrcStripeMetadata.(OrcStripeMetadata.java:64)

org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.readStripesMetadata(OrcEncodedDataReader.java:622)
{code}

The thread that is expected to signal those threads is calling the
{{DomainSocketWatcher::add()}} method, but it gets stuck there dealing with
InterruptedException indefinitely. The jstack looks like:
{code}
Thread 44417 (TezTR-257387_2840_12_10_52_0):
  State: RUNNABLE
  Blocked count: 3
  Wtaited count: 5
  Stack:
java.lang.Throwable.fillInStackTrace(Native Method)
java.lang.Throwable.fillInStackTrace(Throwable.java:783)
java.lang.Throwable.(Throwable.java:250)
java.lang.Exception.(Exception.java:54)
java.lang.InterruptedException.(InterruptedException.java:57)

java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2034)

org.apache.hadoop.net.unix.DomainSocketWatcher.add(DomainSocketWatcher.java:325)

org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager$EndpointShmManager.allocSlot(DfsClientShmManager.java:266)

org.apache.hadoop.hdfs.shortcircuit.DfsClientShmManager.allocSlot(DfsClientShmManager.java:434)

org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.allocShmSlot(ShortCircuitCache.java:1017)

org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:476)

org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:784)

org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:718)

org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:422)
org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:333)

org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1181)

org.apache.hadoop.hdfs.DFSInputStream.fetchBlockByteRange(DFSInputStream.java:1118)
org.apache.hadoop.hdfs.DFSInputStream.pread(DFSInputStream.java:1478)
org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:1441)
org.apache.hadoop.fs.FSInputStream.readFully(FSInputStream.java:121)
{code}
The whole job makes no progress because of this.

The thread in {{DomainSocketWatcher::add()}} is expected to eventually break out
of the while loop where it waits for the newly added entry to be deleted by
another thread. However, if this thread is ever interrupted, chances are that
it will hold the lock forever so {{if(!toAdd.contains(e
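
For illustration only, here is a minimal sketch of the waiting pattern this points at (not the actual DomainSocketWatcher code; the class below is hypothetical): remember the interrupt instead of re-asserting it inside the loop, and restore the interrupt status once after the wait completes.

{code:java}
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

final class EntryWaiter {
  private final Lock lock = new ReentrantLock();
  private final Condition processed = lock.newCondition();
  private boolean done; // set by the watcher thread once the entry is handled

  /** Called by the watcher thread after it handles the entry. */
  void markProcessed() {
    lock.lock();
    try {
      done = true;
      processed.signalAll();
    } finally {
      lock.unlock();
    }
  }

  /** Wait for the entry to be handled without self-interrupting mid-loop. */
  void awaitProcessed() {
    lock.lock();
    boolean interrupted = false;
    try {
      while (!done) {
        try {
          processed.await();
        } catch (InterruptedException e) {
          // Re-interrupting here would make the next await() throw
          // immediately, so this thread would spin while holding the lock
          // and the watcher thread could never make progress.
          interrupted = true;
        }
      }
    } finally {
      lock.unlock();
      if (interrupted) {
        // Restore the interrupt status exactly once, after the wait is over.
        Thread.currentThread().interrupt();
      }
    }
  }
}
{code}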

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-03-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/

[Mar 21, 2017 8:16:31 AM] (sunilg) YARN-6362. Use frontend-maven-plugin 0.0.22 
version for new yarn ui.
[Mar 21, 2017 9:44:17 AM] (yqlin) HDFS-11358. DiskBalancer: Report command 
supports reading nodes from
[Mar 21, 2017 1:15:15 PM] (stevel) HADOOP-14204 S3A multipart commit failing,
[Mar 21, 2017 5:46:59 PM] (aajisaka) HADOOP-14187. Update ZooKeeper dependency 
to 3.4.9 and Curator
[Mar 21, 2017 5:53:27 PM] (junping_du) YARN-6367. YARN logs CLI needs alway 
check
[Mar 21, 2017 9:15:40 PM] (templedf) YARN-6284. hasAlreadyRun should be final in
[Mar 21, 2017 10:21:11 PM] (rkanter) YARN-6326. Shouldn't use AppAttemptIds to 
fetch applications while AM
[Mar 21, 2017 10:41:53 PM] (varunsaxena) YARN-5934. Fix 
TestTimelineWebServices.testPrimaryFilterNumericString




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestDiskFailures 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.hdfs.TestNNBench 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/diff-compile-javac-root.txt
  [184K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [556K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [88K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/353/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-22 Thread varunsax...@apache.org
Thanks Junping for creating the release.

+1 (non-binding)

* Verified signatures.
* Built from source.
* Set up a pseudo-distributed cluster.
* Successfully ran pi and wordcount jobs.
* Navigated the YARN RM and NM UI.

Regards,
Varun Saxena.

On Wed, Mar 22, 2017 at 12:13 AM, Haibo Chen  wrote:

> Thanks Junping for working on the new release!
>
> +1 non-binding
>
> 1) Downloaded the source, verified the checksum
> 2) Built natively from source, and deployed it to a pseudo-distributed
> cluster
> 3) Ran sleep and teragen job and checked both YARN and JHS web UI
> 4) Played with yarn + mapreduce command lines
>
> Best,
> Haibo Chen
>
> On Mon, Mar 20, 2017 at 11:18 AM, Junping Du  wrote:
>
> > Thanks for the update, John. Then we should be OK with fixing this issue in
> > 2.8.1.
> >
> > Marked the target version of HADOOP-14205 as 2.8.1 instead of 2.8.0 and
> > bumped it up to blocker so we don't miss it when releasing 2.8.1. :)
> >
> >
> > Thanks,
> >
> >
> > Junping
> >
> > 
> > From: John Zhuge 
> > Sent: Monday, March 20, 2017 10:31 AM
> > To: Junping Du
> > Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> > yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
> >
> > Yes, it only affects ADL. There is a workaround of adding these 2
> > properties to core-site.xml:
> >
> >   <property>
> >     <name>fs.adl.impl</name>
> >     <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
> >   </property>
> >
> >   <property>
> >     <name>fs.AbstractFileSystem.adl.impl</name>
> >     <value>org.apache.hadoop.fs.adl.Adl</value>
> >   </property>
> >
> > I have the initial patch ready but hitting these live unit test failures:
> >
> > Failed tests:
> >   TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257
> >     expected:<1> but was:<10>
> >
> > Tests in error:
> >   TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254 » AccessControl
> >   TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190 » AccessControl
> >
> >
> > Stay tuned...
> >
> > John Zhuge
> > Software Engineer, Cloudera
> >
> > On Mon, Mar 20, 2017 at 10:02 AM, Junping Du <j...@hortonworks.com> wrote:
> >
> > Thank you for reporting the issue, John! Does this issue only affect ADL
> > (Azure Data Lake), which is a new feature in 2.8, rather than other existing
> > filesystems? If so, I think we can leave the fix to 2.8.1, given that this is
> > not a regression but a newly added feature that got broken.
> >
> >
> > Thanks,
> >
> >
> > Junping
> >
> > 
> > From: John Zhuge <jzh...@cloudera.com>
> > Sent: Monday, March 20, 2017 9:07 AM
> > To: Junping Du
> > Cc: common-dev@hadoop.apache.org;
> > hdfs-...@hadoop.apache.org;
> > yarn-...@hadoop.apache.org;
> > mapreduce-...@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
> >
> > Discovered https://issues.apache.org/jira/browse/HADOOP-14205 "No
> > FileSystem for scheme: adl".
> >
> > The issue was caused by backporting HADOOP-13037 to branch-2 and earlier.
> > HADOOP-12666 should not be backported, but some of its changes are needed:
> > the fs.adl.impl property in core-default.xml and hadoop-tools-dist/pom.xml.
> >
> > I am working on a patch.
> >
> >
> > John Zhuge
> > Software Engineer, Cloudera
> >
> > On Fri, Mar 17, 2017 at 2:18 AM, Junping Du <jdu...@hortonworks.com> wrote:
> > Hi all,
> >  With the fix for HDFS-11431 in, I've created a new release candidate
> > (RC3) for Apache Hadoop 2.8.0.
> >
> >  This is the next minor release following 2.7.0, which was released more
> > than a year ago. It comprises 2,900+ fixes, improvements, and new features.
> > Most of these commits are being released for the first time from branch-2.
> >
> >   More information about the 2.8.0 release plan can be found here:
> > https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
> >
> >   The new RC is available at: http://home.apache.org/~junping_du/hadoop-2.8.0-RC3
> >
> >   The RC tag in git is: release-2.8.0-RC3, and the latest commit id
> > is: 91f2b7a13d1e97be65db92ddabc627cc29ac0009
> >
> >   The Maven artifacts are available via repository.apache.org at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1057
> >
> >   Please try the release and vote; the vote will run for the usual 5
> > days, ending on 03/22/2017 PDT time.
> >
> > Thanks,
> >
> > Junping
> >
> >
> >
>


Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-22 Thread Naganarasimha Garla
Thanks Junping for putting this up.

+1 (non-binding)

* Downloaded and verified signature and MD5.
* Built from source & deployed a pseudo cluster with labels enabled.
* Ran some sample jobs with partitions specified.
* Navigated through the YARN & MR UIs.
* Executed basic dfs commands.

Regards,
+ Naga



On Wed, Mar 22, 2017 at 11:23 PM, Rakesh Radhakrishnan 
wrote:

> Thanks Junping for getting this out.
>
> +1 (non-binding)
>
> * downloaded and built from source with jdk1.8.0_45
> * deployed HDFS-HA cluster
> * ran some sample jobs
> * run balancer
> * executed basic dfs cmds
>
>
> Rakesh
>
> On Wed, Mar 22, 2017 at 8:30 PM, Jian He  wrote:
>
> > +1 (binding)
> >
> > - built from source
> > - deployed a pseudo cluster
> > - ran basic example tests.
> > - Navigate the UI a bit, looks good.
> >
> > Jian
> >
> > > On Mar 22, 2017, at 9:03 PM, larry mccay 
> wrote:
> > >
> > > +1 (non-binding)
> > >
> > > - verified signatures
> > > - built from source and ran tests
> > > - deployed pseudo cluster
> > > - ran basic tests for hdfs, wordcount, credential provider API and
> > related
> > > commands
> > > - tested webhdfs with knox
> > >
> > >
> > > On Wed, Mar 22, 2017 at 7:21 AM, Ravi Prakash 
> > wrote:
> > >
> > >> Thanks for all the effort Junping!
> > >>
> > >> +1 (binding)
> > >> + Verified signature and MD5, SHA1, SHA256 checksum of tarball
> > >> + Verified SHA ID in git corresponds to RC3 tag
> > >> + Verified wordcount for one small text file produces same output as
> > >> hadoop-2.7.3.
> > >> + HDFS Namenode UI looks good.
> > >>
> > >> I agree none of the issues reported so far are blockers. Looking
> > forward to
> > >> another great release.
> > >>
> > >> Thanks
> > >> Ravi
> > >>
> > >> On Tue, Mar 21, 2017 at 8:10 PM, Junping Du 
> > wrote:
> > >>
> > >>> Thanks all for response with verification work and vote!
> > >>>
> > >>>
> > >>> Sounds like we are hitting several issues here, although none seems to
> > >>> be a blocker so far. Given the large commit set - 2000+ commits landing
> > >>> in a branch-2 release for the first time - we should probably follow the
> > >>> 2.7.0 practice and state that this release is not for production
> > >>> clusters, as Vinod suggested in a previous email. We should quickly come
> > >>> up with a 2.8.1 release in the next 1 or 2 months for production
> > >>> deployment.
> > >>>
> > >>>
> > >>> We will close the vote in the next 24 hours. For people who haven't
> > >>> voted, please keep up the verification work and report any issues you
> > >>> find - I will check whether another round of RC is needed based on your
> > >>> findings. Thanks!
> > >>>
> > >>>
> > >>> Thanks,
> > >>>
> > >>>
> > >>> Junping
> > >>>
> > >>>
> > >>> 
> > >>> From: Kuhu Shukla 
> > >>> Sent: Tuesday, March 21, 2017 3:17 PM
> > >>> Cc: Junping Du; common-dev@hadoop.apache.org;
> > hdfs-...@hadoop.apache.org
> > >> ;
> > >>> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> > >>> Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
> > >>>
> > >>>
> > >>> +1 (non-binding)
> > >>>
> > >>> - Verified signatures.
> > >>> - Downloaded and built from source tar.gz.
> > >>> - Deployed a pseudo-distributed cluster on Mac Sierra.
> > >>> - Ran example Sleep job successfully.
> > >>> - Deployed latest Apache Tez 0.9 and ran sample Tez orderedwordcount
> > >>> successfully.
> > >>>
> > >>> Thank you Junping and everyone else who worked on getting this
> release
> > >> out.
> > >>>
> > >>> Warm Regards,
> > >>> Kuhu
> > >>> On Tuesday, March 21, 2017, 3:42:46 PM CDT, Eric Badger
> > >>>  wrote:
> > >>> +1 (non-binding)
> > >>>
> > >>> - Verified checksums and signatures of all files
> > >>> - Built from source on MacOS Sierra via JDK 1.8.0 u65
> > >>> - Deployed single-node cluster
> > >>> - Successfully ran a few sample jobs
> > >>>
> > >>> Thanks,
> > >>>
> > >>> Eric
> > >>>
> > >>> On Tuesday, March 21, 2017 2:56 PM, John Zhuge
> > >>> wrote:
> > >>>
> > >>>
> > >>>
> > >>> +1. Thanks for the great effort, Junping!
> > >>>
> > >>>
> > >>>  - Verified checksums and signatures of the tarballs
> > >>>  - Built source code with Java 1.8.0_66-b17 on Mac OS X 10.12.3
> > >>>  - Built source and native code with Java 1.8.0_111 on Centos
> 7.2.1511
> > >>>  - Cloud connectors:
> > >>>  - s3a: integration tests, basic fs commands
> > >>>  - adl: live unit tests, basic fs commands. See notes below.
> > >>>  - Deployed a pseudo cluster, passed the following sanity tests in
> > >>>  both insecure and SSL mode:
> > >>>  - HDFS: basic dfs, distcp, ACL commands
> > >>>  - KMS and HttpFS: basic tests
> > >>>  - MapReduce wordcount
> > >>>  - balancer start/stop
> > >>>
> > >>>
> > >>> Needs the following JIRAs to pass all ADL tests:
> > >>>
> > >>>  - HADOOP-14205. No FileSystem for scheme: adl. Contributed by John
> > >> Zhuge.
> > >>>  - HDFS-11132. Allow AccessControlException in contract tests when
> > >>>  getFileStatus on subdirectory of existing files. Cont

Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-03-22 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/

[Mar 21, 2017 1:15:15 PM] (stevel) HADOOP-14204 S3A multipart commit failing,
[Mar 21, 2017 5:46:59 PM] (aajisaka) HADOOP-14187. Update ZooKeeper dependency 
to 3.4.9 and Curator
[Mar 21, 2017 5:53:27 PM] (junping_du) YARN-6367. YARN logs CLI needs alway 
check
[Mar 21, 2017 9:15:40 PM] (templedf) YARN-6284. hasAlreadyRun should be final in
[Mar 21, 2017 10:21:11 PM] (rkanter) YARN-6326. Shouldn't use AppAttemptIds to 
fetch applications while AM
[Mar 21, 2017 10:41:53 PM] (varunsaxena) YARN-5934. Fix 
TestTimelineWebServices.testPrimaryFilterNumericString




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.security.TestRaceWhenRelogin 
   hadoop.hdfs.TestWriteReadStripedFile 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapreduce.TestMRJobClient 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-compile-root.txt
  [140K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-compile-root.txt
  [140K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-compile-root.txt
  [140K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [140K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [344K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [324K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/265/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt
  [16K]
   
htt

Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-22 Thread Rakesh Radhakrishnan
Thanks Junping for getting this out.

+1 (non-binding)

* downloaded and built from source with jdk1.8.0_45
* deployed HDFS-HA cluster
* ran some sample jobs
* run balancer
* executed basic dfs cmds


Rakesh

On Wed, Mar 22, 2017 at 8:30 PM, Jian He  wrote:

> +1 (binding)
>
> - built from source
> - deployed a pseudo cluster
> - ran basic example tests.
> - Navigate the UI a bit, looks good.
>
> Jian
>
> > On Mar 22, 2017, at 9:03 PM, larry mccay  wrote:
> >
> > +1 (non-binding)
> >
> > - verified signatures
> > - built from source and ran tests
> > - deployed pseudo cluster
> > - ran basic tests for hdfs, wordcount, credential provider API and
> related
> > commands
> > - tested webhdfs with knox
> >
> >
> > On Wed, Mar 22, 2017 at 7:21 AM, Ravi Prakash 
> wrote:
> >
> >> Thanks for all the effort Junping!
> >>
> >> +1 (binding)
> >> + Verified signature and MD5, SHA1, SHA256 checksum of tarball
> >> + Verified SHA ID in git corresponds to RC3 tag
> >> + Verified wordcount for one small text file produces same output as
> >> hadoop-2.7.3.
> >> + HDFS Namenode UI looks good.
> >>
> >> I agree none of the issues reported so far are blockers. Looking
> forward to
> >> another great release.
> >>
> >> Thanks
> >> Ravi
> >>
> >> On Tue, Mar 21, 2017 at 8:10 PM, Junping Du 
> wrote:
> >>
> >>> Thanks all for response with verification work and vote!
> >>>
> >>>
> >>> Sounds like we are hitting several issues here, although none seems to be
> >>> a blocker so far. Given the large commit set - 2000+ commits landing in a
> >>> branch-2 release for the first time - we should probably follow the 2.7.0
> >>> practice and state that this release is not for production clusters, as
> >>> Vinod suggested in a previous email. We should quickly come up with a
> >>> 2.8.1 release in the next 1 or 2 months for production deployment.
> >>>
> >>>
> >>> We will close the vote in the next 24 hours. For people who haven't voted,
> >>> please keep up the verification work and report any issues you find - I
> >>> will check whether another round of RC is needed based on your findings.
> >>> Thanks!
> >>>
> >>>
> >>> Thanks,
> >>>
> >>>
> >>> Junping
> >>>
> >>>
> >>> 
> >>> From: Kuhu Shukla 
> >>> Sent: Tuesday, March 21, 2017 3:17 PM
> >>> Cc: Junping Du; common-dev@hadoop.apache.org;
> hdfs-...@hadoop.apache.org
> >> ;
> >>> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> >>> Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
> >>>
> >>>
> >>> +1 (non-binding)
> >>>
> >>> - Verified signatures.
> >>> - Downloaded and built from source tar.gz.
> >>> - Deployed a pseudo-distributed cluster on Mac Sierra.
> >>> - Ran example Sleep job successfully.
> >>> - Deployed latest Apache Tez 0.9 and ran sample Tez orderedwordcount
> >>> successfully.
> >>>
> >>> Thank you Junping and everyone else who worked on getting this release
> >> out.
> >>>
> >>> Warm Regards,
> >>> Kuhu
> >>> On Tuesday, March 21, 2017, 3:42:46 PM CDT, Eric Badger
> >>>  wrote:
> >>> +1 (non-binding)
> >>>
> >>> - Verified checksums and signatures of all files
> >>> - Built from source on MacOS Sierra via JDK 1.8.0 u65
> >>> - Deployed single-node cluster
> >>> - Successfully ran a few sample jobs
> >>>
> >>> Thanks,
> >>>
> >>> Eric
> >>>
> >>> On Tuesday, March 21, 2017 2:56 PM, John Zhuge 
> >>> wrote:
> >>>
> >>>
> >>>
> >>> +1. Thanks for the great effort, Junping!
> >>>
> >>>
> >>>  - Verified checksums and signatures of the tarballs
> >>>  - Built source code with Java 1.8.0_66-b17 on Mac OS X 10.12.3
> >>>  - Built source and native code with Java 1.8.0_111 on Centos 7.2.1511
> >>>  - Cloud connectors:
> >>>  - s3a: integration tests, basic fs commands
> >>>  - adl: live unit tests, basic fs commands. See notes below.
> >>>  - Deployed a pseudo cluster, passed the following sanity tests in
> >>>  both insecure and SSL mode:
> >>>  - HDFS: basic dfs, distcp, ACL commands
> >>>  - KMS and HttpFS: basic tests
> >>>  - MapReduce wordcount
> >>>  - balancer start/stop
> >>>
> >>>
> >>> Needs the following JIRAs to pass all ADL tests:
> >>>
> >>>  - HADOOP-14205. No FileSystem for scheme: adl. Contributed by John
> >> Zhuge.
> >>>  - HDFS-11132. Allow AccessControlException in contract tests when
> >>>  getFileStatus on subdirectory of existing files. Contributed by
> >>> Vishwajeet
> >>>  Dusane
> >>>  - HADOOP-13928. TestAdlFileContextMainOperationsLive.testGetFileContext1
> >>>    runtime error. (John Zhuge via lei)
> >>>
> >>>
> >>> On Mon, Mar 20, 2017 at 10:31 AM, John Zhuge 
> >> wrote:
> >>>
>  Yes, it only affects ADL. There is a workaround of adding these 2
>  properties to core-site.xml:
> 
>  <property>
>    <name>fs.adl.impl</name>
>    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
>  </property>
>
>  <property>
>    <name>fs.AbstractFileSystem.adl.impl</name>
>    <value>org.apache.hadoop.fs.adl.Adl</value>
>  </property>
> 
>  I have the initial patch ready but hitting these live unit test
> >> failures:
> 
>  Failed test

[jira] [Created] (HADOOP-14213) Move Configuration runtime check for hadoop-site.xml to initialization

2017-03-22 Thread Jonathan Eagles (JIRA)
Jonathan Eagles created HADOOP-14213:


 Summary: Move Configuration runtime check for hadoop-site.xml to 
initialization
 Key: HADOOP-14213
 URL: https://issues.apache.org/jira/browse/HADOOP-14213
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles


Each Configuration object that loads defaults checks for hadoop-site.xml. That
file has long been deprecated and is absent from most, if not nearly all,
installations. The getResource check for hadoop-site.xml has to scan the entire
classpath precisely because the file is not found. This jira proposes to either
1) remove hadoop-site.xml as a default resource, or 2) move the check to static
initialization of the class so the performance hit is only taken once.
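
As a rough, hypothetical sketch of option 2 (the class and method names below are illustrative, not the actual Configuration code), the classpath probe could run once per JVM in a static initializer:

{code:java}
final class DeprecatedSiteResource {
  private static final boolean HADOOP_SITE_ON_CLASSPATH;

  static {
    ClassLoader cl = Thread.currentThread().getContextClassLoader();
    if (cl == null) {
      cl = DeprecatedSiteResource.class.getClassLoader();
    }
    // getResource() walks the whole classpath when the file is absent, which
    // is exactly the cost this JIRA wants to stop paying per instance.
    HADOOP_SITE_ON_CLASSPATH = cl.getResource("hadoop-site.xml") != null;
  }

  /** Cheap per-instance check; the expensive classpath scan ran only once. */
  static boolean hasDeprecatedHadoopSite() {
    return HADOOP_SITE_ON_CLASSPATH;
  }

  private DeprecatedSiteResource() {
  }
}
{code}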






Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-22 Thread Jian He
+1 (binding)

- built from source
- deployed a pseudo cluster
- ran basic example tests.
- Navigate the UI a bit, looks good.

Jian

> On Mar 22, 2017, at 9:03 PM, larry mccay  wrote:
> 
> +1 (non-binding)
> 
> - verified signatures
> - built from source and ran tests
> - deployed pseudo cluster
> - ran basic tests for hdfs, wordcount, credential provider API and related
> commands
> - tested webhdfs with knox
> 
> 
> On Wed, Mar 22, 2017 at 7:21 AM, Ravi Prakash  wrote:
> 
>> Thanks for all the effort Junping!
>> 
>> +1 (binding)
>> + Verified signature and MD5, SHA1, SHA256 checksum of tarball
>> + Verified SHA ID in git corresponds to RC3 tag
>> + Verified wordcount for one small text file produces same output as
>> hadoop-2.7.3.
>> + HDFS Namenode UI looks good.
>> 
>> I agree none of the issues reported so far are blockers. Looking forward to
>> another great release.
>> 
>> Thanks
>> Ravi
>> 
>> On Tue, Mar 21, 2017 at 8:10 PM, Junping Du  wrote:
>> 
>>> Thanks all for response with verification work and vote!
>>> 
>>> 
>>> Sounds like we are hitting several issues here, although none seems to be a
>>> blocker so far. Given the large commit set - 2000+ commits landing in a
>>> branch-2 release for the first time - we should probably follow the 2.7.0
>>> practice and state that this release is not for production clusters, as
>>> Vinod suggested in a previous email. We should quickly come up with a 2.8.1
>>> release in the next 1 or 2 months for production deployment.
>>> 
>>> 
>>> We will close the vote in the next 24 hours. For people who haven't voted,
>>> please keep up the verification work and report any issues you find - I will
>>> check whether another round of RC is needed based on your findings. Thanks!
>>> 
>>> 
>>> Thanks,
>>> 
>>> 
>>> Junping
>>> 
>>> 
>>> 
>>> From: Kuhu Shukla 
>>> Sent: Tuesday, March 21, 2017 3:17 PM
>>> Cc: Junping Du; common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org
>> ;
>>> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
>>> Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
>>> 
>>> 
>>> +1 (non-binding)
>>> 
>>> - Verified signatures.
>>> - Downloaded and built from source tar.gz.
>>> - Deployed a pseudo-distributed cluster on Mac Sierra.
>>> - Ran example Sleep job successfully.
>>> - Deployed latest Apache Tez 0.9 and ran sample Tez orderedwordcount
>>> successfully.
>>> 
>>> Thank you Junping and everyone else who worked on getting this release
>> out.
>>> 
>>> Warm Regards,
>>> Kuhu
>>> On Tuesday, March 21, 2017, 3:42:46 PM CDT, Eric Badger
>>>  wrote:
>>> +1 (non-binding)
>>> 
>>> - Verified checksums and signatures of all files
>>> - Built from source on MacOS Sierra via JDK 1.8.0 u65
>>> - Deployed single-node cluster
>>> - Successfully ran a few sample jobs
>>> 
>>> Thanks,
>>> 
>>> Eric
>>> 
>>> On Tuesday, March 21, 2017 2:56 PM, John Zhuge 
>>> wrote:
>>> 
>>> 
>>> 
>>> +1. Thanks for the great effort, Junping!
>>> 
>>> 
>>>  - Verified checksums and signatures of the tarballs
>>>  - Built source code with Java 1.8.0_66-b17 on Mac OS X 10.12.3
>>>  - Built source and native code with Java 1.8.0_111 on Centos 7.2.1511
>>>  - Cloud connectors:
>>>  - s3a: integration tests, basic fs commands
>>>  - adl: live unit tests, basic fs commands. See notes below.
>>>  - Deployed a pseudo cluster, passed the following sanity tests in
>>>  both insecure and SSL mode:
>>>  - HDFS: basic dfs, distcp, ACL commands
>>>  - KMS and HttpFS: basic tests
>>>  - MapReduce wordcount
>>>  - balancer start/stop
>>> 
>>> 
>>> Needs the following JIRAs to pass all ADL tests:
>>> 
>>>  - HADOOP-14205. No FileSystem for scheme: adl. Contributed by John
>> Zhuge.
>>>  - HDFS-11132. Allow AccessControlException in contract tests when
>>>  getFileStatus on subdirectory of existing files. Contributed by
>>> Vishwajeet
>>>  Dusane
>>>  - HADOOP-13928. TestAdlFileContextMainOperationsLive.testGetFileContext1
>>>    runtime error. (John Zhuge via lei)
>>> 
>>> 
>>> On Mon, Mar 20, 2017 at 10:31 AM, John Zhuge 
>> wrote:
>>> 
 Yes, it only affects ADL. There is a workaround of adding these 2
 properties to core-site.xml:
 
 
 <property>
   <name>fs.adl.impl</name>
   <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
 </property>
 
 <property>
   <name>fs.AbstractFileSystem.adl.impl</name>
   <value>org.apache.hadoop.fs.adl.Adl</value>
 </property>
 
 I have the initial patch ready but hitting these live unit test
>> failures:
 
 Failed tests:
 
 TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257
   expected:<1> but was:<10>
 
 Tests in error:
 
 TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254 » AccessControl
 
 TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190 » AccessControl
 
 
 Stay tuned...
 
 John Zhuge
>>>

Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-22 Thread larry mccay
+1 (non-binding)

- verified signatures
- built from source and ran tests
- deployed pseudo cluster
- ran basic tests for hdfs, wordcount, credential provider API and related
commands
- tested webhdfs with knox


On Wed, Mar 22, 2017 at 7:21 AM, Ravi Prakash  wrote:

> Thanks for all the effort Junping!
>
> +1 (binding)
> + Verified signature and MD5, SHA1, SHA256 checksum of tarball
> + Verified SHA ID in git corresponds to RC3 tag
> + Verified wordcount for one small text file produces same output as
> hadoop-2.7.3.
> + HDFS Namenode UI looks good.
>
> I agree none of the issues reported so far are blockers. Looking forward to
> another great release.
>
> Thanks
> Ravi
>
> On Tue, Mar 21, 2017 at 8:10 PM, Junping Du  wrote:
>
> > Thanks all for response with verification work and vote!
> >
> >
> > Sounds like we are hitting several issues here, although none seems to be a
> > blocker so far. Given the large commit set - 2000+ commits landing in a
> > branch-2 release for the first time - we should probably follow the 2.7.0
> > practice and state that this release is not for production clusters, as
> > Vinod suggested in a previous email. We should quickly come up with a 2.8.1
> > release in the next 1 or 2 months for production deployment.
> >
> >
> > We will close the vote in the next 24 hours. For people who haven't voted,
> > please keep up the verification work and report any issues you find - I will
> > check whether another round of RC is needed based on your findings. Thanks!
> >
> >
> > Thanks,
> >
> >
> > Junping
> >
> >
> > 
> > From: Kuhu Shukla 
> > Sent: Tuesday, March 21, 2017 3:17 PM
> > Cc: Junping Du; common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org
> ;
> > yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
> >
> >
> > +1 (non-binding)
> >
> > - Verified signatures.
> > - Downloaded and built from source tar.gz.
> > - Deployed a pseudo-distributed cluster on Mac Sierra.
> > - Ran example Sleep job successfully.
> > - Deployed latest Apache Tez 0.9 and ran sample Tez orderedwordcount
> > successfully.
> >
> > Thank you Junping and everyone else who worked on getting this release
> out.
> >
> > Warm Regards,
> > Kuhu
> > On Tuesday, March 21, 2017, 3:42:46 PM CDT, Eric Badger
> >  wrote:
> > +1 (non-binding)
> >
> > - Verified checksums and signatures of all files
> > - Built from source on MacOS Sierra via JDK 1.8.0 u65
> > - Deployed single-node cluster
> > - Successfully ran a few sample jobs
> >
> > Thanks,
> >
> > Eric
> >
> > On Tuesday, March 21, 2017 2:56 PM, John Zhuge 
> > wrote:
> >
> >
> >
> > +1. Thanks for the great effort, Junping!
> >
> >
> >   - Verified checksums and signatures of the tarballs
> >   - Built source code with Java 1.8.0_66-b17 on Mac OS X 10.12.3
> >   - Built source and native code with Java 1.8.0_111 on Centos 7.2.1511
> >   - Cloud connectors:
> >   - s3a: integration tests, basic fs commands
> >   - adl: live unit tests, basic fs commands. See notes below.
> >   - Deployed a pseudo cluster, passed the following sanity tests in
> >   both insecure and SSL mode:
> >   - HDFS: basic dfs, distcp, ACL commands
> >   - KMS and HttpFS: basic tests
> >   - MapReduce wordcount
> >   - balancer start/stop
> >
> >
> > Needs the following JIRAs to pass all ADL tests:
> >
> >   - HADOOP-14205. No FileSystem for scheme: adl. Contributed by John
> Zhuge.
> >   - HDFS-11132. Allow AccessControlException in contract tests when
> >   getFileStatus on subdirectory of existing files. Contributed by
> > Vishwajeet
> >   Dusane
> >   - HADOOP-13928. TestAdlFileContextMainOperationsLive.testGetFileContext1
> >     runtime error. (John Zhuge via lei)
> >
> >
> > On Mon, Mar 20, 2017 at 10:31 AM, John Zhuge 
> wrote:
> >
> > > Yes, it only affects ADL. There is a workaround of adding these 2
> > > properties to core-site.xml:
> > >
> > >  <property>
> > >    <name>fs.adl.impl</name>
> > >    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
> > >  </property>
> > >
> > >  <property>
> > >    <name>fs.AbstractFileSystem.adl.impl</name>
> > >    <value>org.apache.hadoop.fs.adl.Adl</value>
> > >  </property>
> > >
> > > I have the initial patch ready but hitting these live unit test
> failures:
> > >
> > > Failed tests:
> > >
> > > TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257
> > >   expected:<1> but was:<10>
> > >
> > > Tests in error:
> > >
> > > TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254 » AccessControl
> > >
> > > TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190 » AccessControl
> > >
> > >
> > > Stay tuned...
> > >
> > > John Zhuge
> > > Software Engineer, Cloudera
> > >
> > > On Mon, Mar 20, 2017 at 10:02 AM, Junping Du 
> > wrote:
> > >
> > > > Thank you for reporting the issue, John! Does this issue only affect
> > ADL
> > > > (Azure Data Lake) which 

Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-22 Thread Ravi Prakash
Thanks for all the effort Junping!

+1 (binding)
+ Verified signature and MD5, SHA1, SHA256 checksum of tarball
+ Verified SHA ID in git corresponds to RC3 tag
+ Verified wordcount for one small text file produces same output as
hadoop-2.7.3.
+ HDFS Namenode UI looks good.

I agree none of the issues reported so far are blockers. Looking forward to
another great release.

Thanks
Ravi

On Tue, Mar 21, 2017 at 8:10 PM, Junping Du  wrote:

> Thanks all for response with verification work and vote!
>
>
> Sounds like we are hitting several issues here, although none seems to be a
> blocker so far. Given the large commit set - 2000+ commits landing in a
> branch-2 release for the first time - we should probably follow the 2.7.0
> practice and state that this release is not for production clusters, as Vinod
> suggested in a previous email. We should quickly come up with a 2.8.1 release
> in the next 1 or 2 months for production deployment.
>
>
> We will close the vote in the next 24 hours. For people who haven't voted,
> please keep up the verification work and report any issues you find - I will
> check whether another round of RC is needed based on your findings. Thanks!
>
>
> Thanks,
>
>
> Junping
>
>
> 
> From: Kuhu Shukla 
> Sent: Tuesday, March 21, 2017 3:17 PM
> Cc: Junping Du; common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
>
>
> +1 (non-binding)
>
> - Verified signatures.
> - Downloaded and built from source tar.gz.
> - Deployed a pseudo-distributed cluster on Mac Sierra.
> - Ran example Sleep job successfully.
> - Deployed latest Apache Tez 0.9 and ran sample Tez orderedwordcount
> successfully.
>
> Thank you Junping and everyone else who worked on getting this release out.
>
> Warm Regards,
> Kuhu
> On Tuesday, March 21, 2017, 3:42:46 PM CDT, Eric Badger
>  wrote:
> +1 (non-binding)
>
> - Verified checksums and signatures of all files
> - Built from source on MacOS Sierra via JDK 1.8.0 u65
> - Deployed single-node cluster
> - Successfully ran a few sample jobs
>
> Thanks,
>
> Eric
>
> On Tuesday, March 21, 2017 2:56 PM, John Zhuge 
> wrote:
>
>
>
> +1. Thanks for the great effort, Junping!
>
>
>   - Verified checksums and signatures of the tarballs
>   - Built source code with Java 1.8.0_66-b17 on Mac OS X 10.12.3
>   - Built source and native code with Java 1.8.0_111 on Centos 7.2.1511
>   - Cloud connectors:
>   - s3a: integration tests, basic fs commands
>   - adl: live unit tests, basic fs commands. See notes below.
>   - Deployed a pseudo cluster, passed the following sanity tests in
>   both insecure and SSL mode:
>   - HDFS: basic dfs, distcp, ACL commands
>   - KMS and HttpFS: basic tests
>   - MapReduce wordcount
>   - balancer start/stop
>
>
> Needs the following JIRAs to pass all ADL tests:
>
>   - HADOOP-14205. No FileSystem for scheme: adl. Contributed by John Zhuge.
>   - HDFS-11132. Allow AccessControlException in contract tests when
>   getFileStatus on subdirectory of existing files. Contributed by
> Vishwajeet
>   Dusane
>   - HADOOP-13928. TestAdlFileContextMainOperationsLive.testGetFileContext1
>   runtime error. (John Zhuge via lei)
>
>
> On Mon, Mar 20, 2017 at 10:31 AM, John Zhuge  wrote:
>
> > Yes, it only affects ADL. There is a workaround of adding these 2
> > properties to core-site.xml:
> >
> >  <property>
> >    <name>fs.adl.impl</name>
> >    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
> >  </property>
> >
> >  <property>
> >    <name>fs.AbstractFileSystem.adl.impl</name>
> >    <value>org.apache.hadoop.fs.adl.Adl</value>
> >  </property>
> >
> > I have the initial patch ready but hitting these live unit test failures:
> >
> > Failed tests:
> >
> > TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257
> >   expected:<1> but was:<10>
> >
> > Tests in error:
> >
> > TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254 » AccessControl
> >
> > TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190 » AccessControl
> >
> >
> > Stay tuned...
> >
> > John Zhuge
> > Software Engineer, Cloudera
> >
> > On Mon, Mar 20, 2017 at 10:02 AM, Junping Du 
> wrote:
> >
> > > Thank you for reporting the issue, John! Does this issue only affect ADL
> > > (Azure Data Lake), which is a new feature in 2.8, rather than other
> > > existing filesystems? If so, I think we can leave the fix to 2.8.1, given
> > > that this is not a regression but a newly added feature that got broken.
> > >
> > >
> > > Thanks,
> > >
> > >
> > > Junping
> > > --
> > > *From:* John Zhuge 
> > > *Sent:* Monday, March 20, 2017 9:07 AM
> > > *To:* Junping Du
> > > *Cc:* common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> > > yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> > > *Subject:* Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
> > >
> > > 

[jira] [Created] (HADOOP-14212) Expose SecurityEnabled boolean field in JMX for other services besides NameNode

2017-03-22 Thread Ray Burgemeestre (JIRA)
Ray Burgemeestre created HADOOP-14212:
-

 Summary: Expose SecurityEnabled boolean field in JMX for other 
services besides NameNode
 Key: HADOOP-14212
 URL: https://issues.apache.org/jira/browse/HADOOP-14212
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ray Burgemeestre
Priority: Minor


The following commit,
https://github.com/apache/hadoop/commit/dc17bda4b677e30c02c2a9a053895a43e41f7a12,
introduced a "SecurityEnabled" field in the JMX output for the NameNode. I
believe it would be nice to add the same change to the JMX output of other
services: Secondary NameNode, ResourceManager, NodeManagers, DataNodes, etc., so
that whether security is enabled can be queried from every service's JMX
resources.

The reason I am suggesting this feature/improvement is that I think it would
provide a clean way to check whether your cluster is completely Kerberized or
not. I don't think there is an easy or clean way to do this now, other than
checking the logs, checking ports, etc.

The file where the change was made,
hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java,
now has the following function:

{code:java}
@Override // NameNodeStatusMXBean
public boolean isSecurityEnabled() {
  return UserGroupInformation.isSecurityEnabled();
}
{code}

I would be happy to develop a patch if others think it would be useful as well.

This is a snippet from the NameNode's JMX output when security is not
enabled:

{code}
  {
"name" : "Hadoop:service=NameNode,name=NameNodeStatus",
"modelerType" : "org.apache.hadoop.hdfs.server.namenode.NameNode",
"NNRole" : "NameNode",
"HostAndPort" : "node001.cm.cluster:8020",
"SecurityEnabled" : false,
"LastHATransitionTime" : 0,
"State" : "standby"
  }
{code}
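
As a purely illustrative sketch of what the same field could look like on another daemon (the MXBean interface and class names below are hypothetical, not existing Hadoop classes), the implementation would simply delegate to UGI the way the NameNode does:

{code:java}
import org.apache.hadoop.security.UserGroupInformation;

/** Hypothetical status MXBean for another Hadoop daemon (illustrative names). */
interface DaemonStatusMXBean {
  boolean isSecurityEnabled();
}

class DaemonStatusBean implements DaemonStatusMXBean {
  @Override
  public boolean isSecurityEnabled() {
    // True when hadoop.security.authentication resolves to kerberos.
    return UserGroupInformation.isSecurityEnabled();
  }
}
{code}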


