[jira] [Created] (HADOOP-14206) TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature

2017-03-20 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14206:
---

 Summary: TestSFTPFileSystem#testFileExists failure: Invalid encoding for signature
 Key: HADOOP-14206
 URL: https://issues.apache.org/jira/browse/HADOOP-14206
 Project: Hadoop Common
  Issue Type: Test
  Components: fs, test
Affects Versions: 2.9.0
Reporter: John Zhuge


https://builds.apache.org/job/PreCommit-HADOOP-Build/11862/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_121.txt:
{noformat}
Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.454 sec <<< FAILURE! - in org.apache.hadoop.fs.sftp.TestSFTPFileSystem
testFileExists(org.apache.hadoop.fs.sftp.TestSFTPFileSystem)  Time elapsed: 0.19 sec  <<< ERROR!
java.io.IOException: com.jcraft.jsch.JSchException: Session.connect: java.security.SignatureException: Invalid encoding for signature
    at com.jcraft.jsch.Session.connect(Session.java:565)
    at com.jcraft.jsch.Session.connect(Session.java:183)
    at org.apache.hadoop.fs.sftp.SFTPConnectionPool.connect(SFTPConnectionPool.java:168)
    at org.apache.hadoop.fs.sftp.SFTPFileSystem.connect(SFTPFileSystem.java:149)
    at org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626)
    at org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

    at org.apache.hadoop.fs.sftp.SFTPConnectionPool.connect(SFTPConnectionPool.java:180)
    at org.apache.hadoop.fs.sftp.SFTPFileSystem.connect(SFTPFileSystem.java:149)
    at org.apache.hadoop.fs.sftp.SFTPFileSystem.getFileStatus(SFTPFileSystem.java:663)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1626)
    at org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testFileExists(TestSFTPFileSystem.java:190)
{noformat}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-03-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/

[Mar 19, 2017 6:34:23 PM] (naganarasimha_gr) MAPREDUCE-6865. Fix typo in 
javadoc for DistributedCache. Contributed by




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.tracing.TestTracing 
   hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServices 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
  

   compile:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-compile-root.txt  [136K]

   cc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-compile-root.txt  [136K]

   javac:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-compile-root.txt  [136K]

   unit:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [352K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [16K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt  [52K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt  [72K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt  [324K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt  [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt  [28K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt  [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt  [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt  [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt  [16K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/263/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt  [44K]

Powered by Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org




Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)

2017-03-20 Thread Andrew Wang
On Mon, Mar 20, 2017 at 5:30 AM, Steve Loughran 
wrote:

>
> On 15 Mar 2017, at 21:06, Eric Badger  wrote:
>
> Verified signatures
>  - Minor note: Junping, I had a hard time finding your key. I grabbed the
> keys for hadoop from
> http://home.apache.org/keys/group/hadoop.asc and you had a key there, but
> it wasn't the one that you signed this commit with. Then with some help
> from Jason I found the correct key at
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS. So it
> would be nice if those were in sync.
> Compiled from source
> Deployed pseudo-distributed cluster
> Ran some sample MR jobs
>
>
>
> we need to do more key signing; the stuff in the various KEYS files has
> aged
>
> All ASF Committers can publish their ASF keys:
>
> https://people.apache.org/keys/committer/
>
> which you can retrieve on a committer-by-committer basis:
>
> junping https://people.apache.org/keys/committer/junping_du.asc
> me: https://people.apache.org/keys/committer/stevel.asc
>
> Committers should log in to https://id.apache.org/ and set them.
>
> Maybe that committer page should just be declared as the reference place
> to find keys; It bootstraps off the ASF HTTPS certificate for trusted D/L,
> and relies on login credentials being kept secure. But if not, well, people
> can publish code under your login, so signing is the least concern.
>
>
Hi Steve,

I said this in a previous email in this thread, but per INFRA we're not to
rely on the keys set on id.apache.org for release verification. Keys need
to be added to the dist KEYS file.

Best,
Andrew


Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-20 Thread Junping Du
Thanks for the update, John. Then we should be OK with fixing this issue in 2.8.1.

Marked the target version of HADOOP-14205 as 2.8.1 instead of 2.8.0 and bumped it up to blocker so we don't miss it when releasing 2.8.1. :)


Thanks,


Junping


From: John Zhuge 
Sent: Monday, March 20, 2017 10:31 AM
To: Junping Du
Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

Yes, it only affects ADL. There is a workaround: add these two properties to 
core-site.xml:

  <property>
    <name>fs.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
  </property>

  <property>
    <name>fs.AbstractFileSystem.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.Adl</value>
  </property>

I have the initial patch ready but am hitting these live unit test failures:

Failed tests:
  TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testListStatus:257 expected:<1> but was:<10>

Tests in error:
  TestAdlFileContextMainOperationsLive>FileContextMainOperationsBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:254 » AccessControl
  TestAdlFileSystemContractLive.runTest:60->FileSystemContractBaseTest.testMkdirsFailsForSubdirectoryOfExistingFile:190 » AccessControl


Stay tuned...

John Zhuge
Software Engineer, Cloudera

On Mon, Mar 20, 2017 at 10:02 AM, Junping Du wrote:

Thank you for reporting the issue, John! Does this issue only affect ADL (Azure 
Data Lake), which is a new feature in 2.8, rather than other existing filesystems? 
If so, I think we can leave the fix to 2.8.1, given this is not a regression and 
only a new feature is broken.


Thanks,


Junping


From: John Zhuge
Sent: Monday, March 20, 2017 9:07 AM
To: Junping Du
Cc: common-dev@hadoop.apache.org; 
hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

Discovered https://issues.apache.org/jira/browse/HADOOP-14205 "No FileSystem 
for scheme: adl".

The issue was caused by backporting HADOOP-13037 to branch-2 and earlier. 
HADOOP-12666 should not be backported, but some of its changes are needed: the 
fs.adl.impl property in core-default.xml and hadoop-tools-dist/pom.xml.

I am working on a patch.


John Zhuge
Software Engineer, Cloudera

On Fri, Mar 17, 2017 at 2:18 AM, Junping Du wrote:
Hi all,
 With the fix for HDFS-11431 in, I've created a new release candidate (RC3) 
for Apache Hadoop 2.8.0.

 This is the next minor release following 2.7.0, which was released more than 
a year ago. It comprises 2,900+ fixes, improvements, and new features. Most of 
these commits are released for the first time in branch-2.

  More information about the 2.8.0 release plan can be found here: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release

  New RC is available at: 
http://home.apache.org/~junping_du/hadoop-2.8.0-RC3

  The RC tag in git is: release-2.8.0-RC3, and the latest commit id is: 
91f2b7a13d1e97be65db92ddabc627cc29ac0009

  The maven artifacts are available via 
repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1057

  Please try the release and vote; the vote will run for the usual 5 days, 
ending on 03/22/2017 PDT time.

Thanks,

Junping




Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-20 Thread Wangda Tan
Thanks Junping for doing this.

*+1 (Binding),*

Built from source code and deployed a single-node cluster. Enabled node labels
and tried to run sample jobs. Haven't seen any issues so far.

Thanks,
Wangda


On Mon, Mar 20, 2017 at 10:02 AM, Junping Du  wrote:

> Thank you for reporting the issue, John! Does this issue only affect ADL
> (Azure Data Lake), which is a new feature in 2.8, rather than other existing
> filesystems? If so, I think we can leave the fix to 2.8.1, given this is not
> a regression and only a new feature is broken.
>
>
> Thanks,
>
>
> Junping
>
> 
> From: John Zhuge 
> Sent: Monday, March 20, 2017 9:07 AM
> To: Junping Du
> Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org;
> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)
>
> Discovered https://issues.apache.org/jira/browse/HADOOP-14205 "No
> FileSystem for scheme: adl".
>
> The issue was caused by backporting HADOOP-13037 to branch-2 and earlier.
> HADOOP-12666 should not be backported, but some of its changes are needed:
> the fs.adl.impl property in core-default.xml and hadoop-tools-dist/pom.xml.
>
> I am working on a patch.
>
>
> John Zhuge
> Software Engineer, Cloudera
>
On Fri, Mar 17, 2017 at 2:18 AM, Junping Du wrote:
> Hi all,
>  With the fix for HDFS-11431 in, I've created a new release candidate
> (RC3) for Apache Hadoop 2.8.0.
>
>  This is the next minor release following 2.7.0, which was released more
> than a year ago. It comprises 2,900+ fixes, improvements, and new features.
> Most of these commits are released for the first time in branch-2.
>
>   More information about the 2.8.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
>
>   New RC is available at:
> http://home.apache.org/~junping_du/hadoop-2.8.0-RC3
>
>   The RC tag in git is: release-2.8.0-RC3, and the latest commit id
> is: 91f2b7a13d1e97be65db92ddabc627cc29ac0009
>
>   The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1057
>
>   Please try the release and vote; the vote will run for the usual 5
> days, ending on 03/22/2017 PDT time.
>
> Thanks,
>
> Junping
>
>


Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-20 Thread Junping Du
Thank you for reporting the issue, John! Does this issue only affect ADL (Azure 
Data Lake), which is a new feature in 2.8, rather than other existing filesystems? 
If so, I think we can leave the fix to 2.8.1, given this is not a regression and 
only a new feature is broken.


Thanks,


Junping


From: John Zhuge 
Sent: Monday, March 20, 2017 9:07 AM
To: Junping Du
Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

Discovered https://issues.apache.org/jira/browse/HADOOP-14205 "No FileSystem 
for scheme: adl".

The issue was caused by backporting HADOOP-13037 to branch-2 and earlier. 
HADOOP-12666 should not be backported, but some of its changes are needed: the 
fs.adl.impl property in core-default.xml and hadoop-tools-dist/pom.xml.

I am working on a patch.


John Zhuge
Software Engineer, Cloudera

On Fri, Mar 17, 2017 at 2:18 AM, Junping Du wrote:
Hi all,
 With the fix for HDFS-11431 in, I've created a new release candidate (RC3) 
for Apache Hadoop 2.8.0.

 This is the next minor release following 2.7.0, which was released more than 
a year ago. It comprises 2,900+ fixes, improvements, and new features. Most of 
these commits are released for the first time in branch-2.

  More information about the 2.8.0 release plan can be found here: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release

  New RC is available at: 
http://home.apache.org/~junping_du/hadoop-2.8.0-RC3

  The RC tag in git is: release-2.8.0-RC3, and the latest commit id is: 
91f2b7a13d1e97be65db92ddabc627cc29ac0009

  The maven artifacts are available via 
repository.apache.org at: 
https://repository.apache.org/content/repositories/orgapachehadoop-1057

  Please try the release and vote; the vote will run for the usual 5 days, 
ending on 03/22/2017 PDT time.

Thanks,

Junping



Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-20 Thread John Zhuge
Discovered https://issues.apache.org/jira/browse/HADOOP-14205 "No
FileSystem for scheme: adl".

The issue was caused by backporting HADOOP-13037 to branch-2 and earlier.
HADOOP-12666 should not be backported, but some of its changes are needed:
the fs.adl.impl property in core-default.xml and hadoop-tools-dist/pom.xml.

I am working on a patch.


John Zhuge
Software Engineer, Cloudera

On Fri, Mar 17, 2017 at 2:18 AM, Junping Du  wrote:

> Hi all,
>  With the fix for HDFS-11431 in, I've created a new release candidate
> (RC3) for Apache Hadoop 2.8.0.
>
>  This is the next minor release following 2.7.0, which was released more
> than a year ago. It comprises 2,900+ fixes, improvements, and new features.
> Most of these commits are released for the first time in branch-2.
>
>   More information about the 2.8.0 release plan can be found here:
> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
>
>   New RC is available at:
> http://home.apache.org/~junping_du/hadoop-2.8.0-RC3
>
>   The RC tag in git is: release-2.8.0-RC3, and the latest commit id
> is: 91f2b7a13d1e97be65db92ddabc627cc29ac0009
>
>   The maven artifacts are available via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1057
>
>   Please try the release and vote; the vote will run for the usual 5
> days, ending on 03/22/2017 PDT time.
>
> Thanks,
>
> Junping
>


[jira] [Created] (HADOOP-14205) No FileSystem for scheme: adl

2017-03-20 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14205:
---

 Summary: No FileSystem for scheme: adl
 Key: HADOOP-14205
 URL: https://issues.apache.org/jira/browse/HADOOP-14205
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/adl
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge


{noformat}
$ bin/hadoop fs -ls /
ls: No FileSystem for scheme: adl
{noformat}

The problem is that {{core-default.xml}} is missing the properties {{fs.adl.impl}} 
and {{fs.AbstractFileSystem.adl.impl}}.
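
For illustration, a minimal sketch of the same wiring done programmatically; the 
account URI is a placeholder, and real use also needs the ADLS credentials 
configured:
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class AdlSchemeWiring {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // The two keys this issue adds to core-default.xml; without them (and without
    // a ServiceLoader binding) FileSystem cannot map the "adl" scheme to a class.
    conf.set("fs.adl.impl", "org.apache.hadoop.fs.adl.AdlFileSystem");
    conf.set("fs.AbstractFileSystem.adl.impl", "org.apache.hadoop.fs.adl.Adl");

    // Placeholder account URI -- substitute a real ADLS account to test.
    FileSystem fs = FileSystem.get(
        URI.create("adl://youraccount.azuredatalakestore.net/"), conf);
    System.out.println(fs.getUri());
  }
}
{code}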

After adding these two properties to {{etc/hadoop/core-site.xml}}, I got this 
error:
{noformat}
$ bin/hadoop fs -ls /
-ls: Fatal internal error
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.adl.AdlFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2231)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3207)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3239)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:121)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3290)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3258)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:223)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:454)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)
    at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:245)
    at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:228)
    at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:175)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:378)
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.adl.AdlFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2137)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2229)
    ... 18 more
{noformat}

The problem here is that the ADLS jars are not copied to {{share/hadoop/tools/lib}}.
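
To make the second failure concrete, a rough sketch (not the FileSystem internals 
verbatim) of the class lookup that breaks when the hadoop-azure-datalake jar is 
absent from the classpath:
{code}
import org.apache.hadoop.conf.Configuration;

public class AdlClasspathCheck {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    try {
      // Succeeds only if the ADLS jar is on the classpath, e.g. copied under
      // share/hadoop/tools/lib and picked up by the hadoop shell scripts.
      conf.getClassByName("org.apache.hadoop.fs.adl.AdlFileSystem");
      System.out.println("AdlFileSystem is on the classpath");
    } catch (ClassNotFoundException e) {
      // Matches the "Class org.apache.hadoop.fs.adl.AdlFileSystem not found" above.
      System.out.println("AdlFileSystem missing: " + e.getMessage());
    }
  }
}
{code}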






[jira] [Created] (HADOOP-14204) S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"

2017-03-20 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14204:
---

 Summary: S3A multipart commit failing, "UnsupportedOperationException at java.util.Collections$UnmodifiableList.sort"
 Key: HADOOP-14204
 URL: https://issues.apache.org/jira/browse/HADOOP-14204
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Critical


Stack trace seen while trying to commit a multipart upload: the EMR code (which 
takes a {{List}} of etags) tries to sort that list directly, which it can't do 
if the list doesn't support in-place sorting.

Later versions of the SDK clone the list before sorting.

We need to make sure that the list passed in can be sorted.
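
For illustration, a minimal standalone sketch (with plain strings standing in 
for the SDK's part ETags) of the failure and of the defensive copy that avoids it:
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class UnmodifiableSortDemo {
  public static void main(String[] args) {
    // An unmodifiable list, as the caller may hand to the SDK.
    List<String> etags =
        Collections.unmodifiableList(Arrays.asList("etag-2", "etag-1"));

    try {
      etags.sort(null); // natural ordering; throws because the wrapper rejects mutation
    } catch (UnsupportedOperationException e) {
      System.out.println("in-place sort rejected: " + e);
    }

    // Defensive copy first, then sort -- what later SDK versions do, and what the
    // caller can do before handing the list over.
    List<String> sortable = new ArrayList<>(etags);
    Collections.sort(sortable);
    System.out.println(sortable);
  }
}
{code}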






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-03-20 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/351/

[Mar 19, 2017 6:34:23 PM] (naganarasimha_gr) MAPREDUCE-6865. Fix typo in 
javadoc for DistributedCache. Contributed by


[Error replacing 'FILE' - Workspace is not accessible]


Re: [VOTE] Release Apache Hadoop 2.8.0 (RC2)

2017-03-20 Thread Steve Loughran

> On 15 Mar 2017, at 21:06, Eric Badger  wrote:
> 
> Verified signatures
>  - Minor note: Junping, I had a hard time finding your key. I grabbed the
> keys for hadoop from http://home.apache.org/keys/group/hadoop.asc and you
> had a key there, but it wasn't the one that you signed this commit with.
> Then with some help from Jason I found the correct key at
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS. So it would
> be nice if those were in sync.
> Compiled from source
> Deployed pseudo-distributed cluster
> Ran some sample MR jobs


we need to do more key signing; the stuff in the various KEYS files has aged

All ASF Committers can publish their ASF keys:

https://people.apache.org/keys/committer/

which you can retrieve on a committer-by-committer basis:

junping https://people.apache.org/keys/committer/junping_du.asc
me: https://people.apache.org/keys/committer/stevel.asc

Committers should log in to https://id.apache.org/ and set them.

Maybe that committer page should just be declared as the reference place to 
find keys; It bootstraps off the ASF HTTPS certificate for trusted D/L, and 
relies on login credentials being kept secure. But if not, well, people can 
publish code under your login, so signing is the least concern.

-Steve




Re: [VOTE] Release Apache Hadoop 2.8.0 (RC3)

2017-03-20 Thread Marton Elek
+1 (non-binding)

Tested from the released binary package:
* 5-node cluster running in dockerized containers (every namenode/datanode/nodemanager, etc. runs in a separate container)
* Bitcoin blockchain data (~100 GB) parsed and imported into HBase (1.2.4)
* Spark (2.1.0 with included Hadoop) job (executing on YARN) to query the data from HBase and write the results to HDFS

Looks good.

Marton


> On Mar 19, 2017, at 6:01 PM, Sunil Govind  wrote:
> 
> +1 (non-binding). Thanks Junping for the effort.
> 
> I have used release package and verified below cases
> - Ran MR sleep job and wordcount successfully where nodes are configured
> with labels.
> - Verified the application priority feature and could see high-priority apps
> getting resources over lower-priority apps when configured
> - Verified RM web UI pages and they look fine (priority could be seen)
> - Intra-queue preemption related to app priority also seems fine
> 
> Thanks
> Sunil
> 
> 
> On Fri, Mar 17, 2017 at 2:48 PM Junping Du  wrote:
> 
>> Hi all,
>> With the fix for HDFS-11431 in, I've created a new release candidate
>> (RC3) for Apache Hadoop 2.8.0.
>> 
>> This is the next minor release following 2.7.0, which was released more
>> than a year ago. It comprises 2,900+ fixes, improvements, and new features.
>> Most of these commits are released for the first time in branch-2.
>> 
>>  More information about the 2.8.0 release plan can be found here:
>> https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
>> 
>>  New RC is available at:
>> http://home.apache.org/~junping_du/hadoop-2.8.0-RC3
>> 
>>  The RC tag in git is: release-2.8.0-RC3, and the latest commit id
>> is: 91f2b7a13d1e97be65db92ddabc627cc29ac0009
>> 
>>  The maven artifacts are available via repository.apache.org at:
>> https://repository.apache.org/content/repositories/orgapachehadoop-1057
>> 
>>  Please try the release and vote; the vote will run for the usual 5
>> days, ending on 03/22/2017 PDT time.
>> 
>> Thanks,
>> 
>> Junping
>> 





[jira] [Created] (HADOOP-14203) performAuthCheck fails with wasbs scheme

2017-03-20 Thread Varada Hemeswari (JIRA)
Varada Hemeswari created HADOOP-14203:
-

 Summary: performAuthCheck fails with wasbs scheme
 Key: HADOOP-14203
 URL: https://issues.apache.org/jira/browse/HADOOP-14203
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.5
Reporter: Varada Hemeswari
Assignee: Sivaguru Sankaridurg
Priority: Critical


Accessing the Azure file system with the 'wasbs' scheme fails when wasb 
authorization is enabled.

Stack trace :
{code}
adminuser1@hn0-f6adaa:/etc/hadoop/conf$ yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar wordcount "/examplefile" "/output"
17/03/20 07:58:48 INFO client.AHSProxy: Connecting to Application History server at hn0-f6adaa.team2testdomain.onmicrosoft.com/10.45.0.190:10200
17/03/20 07:58:48 INFO security.TokenCache: Got dt for wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net; Kind: WASB delegation, Service: 10.45.0.190:50911, Ident: (owner=adminuser1, renewer=yarn, realUser=, issueDate=1489996728687, maxDate=1490601528687, sequenceNumber=15, masterKeyId=11)
org.apache.hadoop.fs.azure.WasbAuthorizationException: getFileStatus operation for Path : wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net/output not allowed
    at org.apache.hadoop.fs.azure.NativeAzureFileSystem.performAuthCheck(NativeAzureFileSystem.java:1425)
    at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:2058)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1447)
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
{code}

In the above, fs.defaultFS is set to 
"wasbs://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net".

If fs.defaultFS is changed to 
"wasb://vahemesw-2v6-201703200...@storagewuteam02.blob.core.windows.net", the 
job runs fine.
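
For reference, a rough reproduction sketch, assuming wasb authorization is 
already enabled and the storage credentials configured in core-site.xml as 
above; the container and account names are placeholders:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WasbsAuthRepro {
  public static void main(String[] args) throws Exception {
    // Only fs.defaultFS differs between the two runs: wasbs:// fails the
    // authorization check, wasb:// passes.
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "wasbs://mycontainer@myaccount.blob.core.windows.net");

    FileSystem fs = FileSystem.get(conf);
    // Mirrors the FileOutputFormat.checkOutputSpecs() call in the trace above;
    // with the wasbs:// defaultFS it throws WasbAuthorizationException.
    System.out.println(fs.exists(new Path("/output")));
  }
}
{code}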


