[jira] [Created] (HADOOP-13129) fix typo in dynamic subcommand docs

2016-05-11 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-13129:


 Summary: fix typo in dynamic subcommand docs
 Key: HADOOP-13129
 URL: https://issues.apache.org/jira/browse/HADOOP-13129
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: scripts
Affects Versions: 3.0.0
Reporter: Sean Busbey
Priority: Trivial


hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md line 
128: "funciton" should be "function"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-Common-trunk #2745

2016-05-11 Thread Apache Jenkins Server
See 

Changes:

[Arun Suresh] YARN-5073. Refactor startContainerInternal() in ContainerManager 
to

--
[...truncated 5157 lines...]
Running org.apache.hadoop.security.TestKDiagNoKDC
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.991 sec - in 
org.apache.hadoop.security.TestKDiagNoKDC
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.952 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.6 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.946 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.083 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.498 sec - in 
org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Running org.apache.hadoop.security.http.TestCrossOriginFilter
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.537 sec - in 
org.apache.hadoop.security.http.TestCrossOriginFilter
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.557 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.766 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.075 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.003 sec - in 
org.apache.hadoop.security.TestUserGroupInformation
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.523 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.07 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.421 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.489 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.165 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.667 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.token.TestDtUtilShell
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.399 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.token.TestToken
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.565 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.533 sec - in 
org.apache.hadoop.security.token.TestToken
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.3 sec - in 
org.apache.hadoop.security.token.TestDtUtilShell
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.925 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.524 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running 
org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
Tests run: 12, 

Re: [VOTE] Merge feature branch HADOOP-12930

2016-05-11 Thread Sean Busbey
+1 (non-binding)

reviewed everything, filed an additional subtask for a very trivial
typo in the docs. should be fine to make a full issue after close and
then fix.

tried merging locally, tried running through new shell tests (both
with and without bats installed), tried making an example custom
command (valid and malformed). everything looks great.

On Mon, May 9, 2016 at 1:26 PM, Allen Wittenauer  wrote:
>
> Hey gang!
>
> I’d like to call a vote to run for 7 days (ending May 16 at 13:30 PT) 
> to merge the HADOOP-12930 feature branch into trunk. This branch was 
> developed exclusively by me as per the discussion two months ago as a way to 
> make what would be a rather large patch hopefully easier to review.  The vast 
> majority of the branch is code movement in the same file, additional license 
> headers, maven assembly hooks for distribution, and variable renames. Not a 
> whole lot of new code, but a big diff file nonetheless.
>
> This branch modifies the ‘hadoop’, ‘hdfs’, ‘mapred’, and ‘yarn’ 
> commands to allow for subcommands to be added or modified at runtime.  This 
> allows for individual users or entire sites to tweak the execution 
> environment to suit their local needs.  For example, it has been a practice 
> for some locations to change the distcp jar out for a custom one.  Using this 
> functionality, it is possible that the ‘hadoop distcp’ command could run the 
> local version without overwriting the bundled jar and for existing 
> documentation (read: results from Internet searches) to work as written 
> without modification. This has the potential to be a huge win, especially for:
>
> * advanced end users looking to supplement the Apache Hadoop 
> experience
> * operations teams that may be able to leverage existing 
> documentation without having to maintain local “exception” docs
> * development groups wanting an easy way to trial 
> experimental features
>
> Additionally, this branch includes the following, related changes:
>
> * Adds the first unit tests for the ‘hadoop’ command
> * Adds the infrastructure for hdfs script testing and the 
> first unit test for the ‘hdfs’ command
> * Modifies the hadoop-tools components to be dynamic rather 
> than hard coded
> * Renames the shell profiles for hdfs, mapred, and yarn to be 
> consistent with other bundled profiles, including the ones introduced in this 
> branch
>
> Documentation, including a ‘hello world’-style example, is in the 
> UnixShellGuide markdown file.  (Of course!)
>
>  I am at ApacheCon this week if anyone wants to discuss in-depth.
>
> Thanks!
>
> P.S.,
>
> There are still two open sub-tasks.  These are blocked by other 
> issues so that we may add unit testing to the shell code in those respective 
> areas.  I’ll convert to full issues after HADOOP-12930 is closed.
>
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>



-- 
busbey

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13130) s3a failures can surface as RTEs, not IOEs

2016-05-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13130:
---

 Summary: s3a failures can surface as RTEs, not IOEs
 Key: HADOOP-13130
 URL: https://issues.apache.org/jira/browse/HADOOP-13130
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.2
Reporter: Steve Loughran


S3A failures happening in the AWS library surface as {{AmazonClientException}} 
derivatives rather than IOEs. As the Amazon exceptions are runtime exceptions, 
any code which catches IOEs for error handling breaks.

The fix will be to catch and wrap. The hard part will be wrapping them in 
meaningful exceptions rather than a generic IOE. Furthermore, anyone who has 
been catching the AWS exceptions directly is going to be disappointed. That 
means that fixing this situation could be considered "incompatible", but only 
for code which makes assumptions about the underlying FS and the exceptions it 
raises.
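A minimal sketch of the catch-and-wrap idea. The helper name 
{{translateException}} and the single 404 mapping shown are illustrative 
assumptions, not the actual patch:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;

/** Illustrative sketch: translate AWS SDK runtime exceptions into IOEs. */
public final class S3AExceptionTranslation {
  private S3AExceptionTranslation() {}

  public static IOException translateException(String operation, String path,
      AmazonClientException e) {
    if (e instanceof AmazonServiceException) {
      // Map well-known HTTP status codes to meaningful IOE subclasses.
      AmazonServiceException ase = (AmazonServiceException) e;
      if (ase.getStatusCode() == 404) {
        return (IOException) new FileNotFoundException(
            operation + " on " + path + ": " + ase).initCause(ase);
      }
    }
    // Fall back to a generic wrapper so callers catching IOException work.
    return new IOException(operation + " on " + path + ": " + e, e);
  }
}
{code}

Each S3A operation would then catch {{AmazonClientException}} and rethrow the 
translated IOE.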



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13131) add a test to verify that s3a supports SSE-S3 encryption

2016-05-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13131:
---

 Summary: add a test to verify that s3a supports SSE-S3 encryption
 Key: HADOOP-13131
 URL: https://issues.apache.org/jira/browse/HADOOP-13131
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.7.2
Reporter: Steve Loughran
Priority: Minor


Although S3A claims to support server-side S3 encryption (and does, if you set 
the option), we don't have any test to verify this. Of course, as the 
encryption is transparent, it's hard to test.

Here's what I propose:
# a test which sets encryption = AES256 and expects things to work as normal.
# a test which sets encryption = DES and expects any operation creating a file 
or directory to fail with a 400 "bad request" error.
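A sketch of the first test, assuming the 
{{fs.s3a.server-side-encryption-algorithm}} option and a test bucket configured 
as the default FS; the class and path names are invented for illustration:

{code}
import static org.junit.Assert.assertArrayEquals;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

public class TestS3AEncryptionSSES3 {

  @Test
  public void testEncryptedRoundTrip() throws Exception {
    Configuration conf = new Configuration();
    // Enable SSE-S3; if the option is honoured, IO behaves as normal.
    conf.set("fs.s3a.server-side-encryption-algorithm", "AES256");
    FileSystem fs = FileSystem.get(conf);  // assumes an s3a:// default FS
    Path path = new Path("/tests3a/encrypted-file");
    byte[] data = {1, 2, 3, 4};
    try (FSDataOutputStream out = fs.create(path, true)) {
      out.write(data);
    }
    byte[] read = new byte[data.length];
    try (FSDataInputStream in = fs.open(path)) {
      in.readFully(read);
    }
    assertArrayEquals(data, read);
  }
}
{code}

The second test would set the option to DES and assert that {{create()}} or 
{{mkdirs()}} fails with the 400 "bad request" error.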





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13133) [JDK8] Upgrade asm to 5.1

2016-05-11 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-13133:
--

 Summary: [JDK8] Upgrade asm to 5.1
 Key: HADOOP-13133
 URL: https://issues.apache.org/jira/browse/HADOOP-13133
 Project: Hadoop Common
  Issue Type: Task
Reporter: Akira AJISAKA


We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-13133) [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later

2016-05-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa resolved HADOOP-13133.
-
Resolution: Duplicate

> [JDK8] Upgrade asm to 5.0.3 or later and upgrade cglib to 3.2.0 or later
> 
>
> Key: HADOOP-13133
> URL: https://issues.apache.org/jira/browse/HADOOP-13133
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira AJISAKA
>Assignee: Tsuyoshi Ozawa
>
> We should upgrade asm to a version that supports JDK8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13134) WASB's file delete still throwing Blob not found exception

2016-05-11 Thread Lin Chan (JIRA)
Lin Chan created HADOOP-13134:
-

 Summary: WASB's file delete still throwing Blob not found exception
 Key: HADOOP-13134
 URL: https://issues.apache.org/jira/browse/HADOOP-13134
 Project: Hadoop Common
  Issue Type: Bug
  Components: azure
Affects Versions: 2.7.1
Reporter: Lin Chan
Assignee: Dushyanth


WASB is still throwing a blob-not-found exception, as shown in the following 
stack. We need to catch that and convert it to a boolean return code in the 
WASB delete (see the sketch after the stack trace).

16/05/07 01:24:57 ERROR InsertIntoHadoopFsRelation: Aborting job.
org.apache.hadoop.fs.azure.AzureException: 
com.microsoft.azure.storage.StorageException: The specified blob does not exist.
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2682)
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2693)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.updateParentFolderLastModifiedTime(NativeAzureFileSystem.java:2495)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1860)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1836)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.delete(NativeAzureFileSystem.java:1603)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.cleanupJob(FileOutputCommitter.java:510)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:403)
at 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:364)
at 
org.apache.parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:46)
at 
org.apache.spark.sql.execution.datasources.BaseWriterContainer.commitJob(WriterContainer.scala:230)
at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:151)
at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
at 
org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
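A sketch of the suggested conversion, assuming the {{StorageException}} 
surfaces as the cause of the IOE thrown by the delete call; {{deleteInternal}} 
is a hypothetical stand-in for the existing delete logic:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.Path;

import com.microsoft.azure.storage.StorageException;

abstract class BlobNotFoundTolerantDelete {

  /** Hypothetical stand-in for the existing NativeAzureFileSystem delete body. */
  abstract boolean deleteInternal(Path f, boolean recursive) throws IOException;

  public boolean delete(Path f, boolean recursive) throws IOException {
    try {
      return deleteInternal(f, recursive);
    } catch (IOException e) {
      Throwable cause = e.getCause();
      if (cause instanceof StorageException
          && "BlobNotFound".equals(((StorageException) cause).getErrorCode())) {
        // The blob vanished underneath us (e.g. a concurrent delete). Per the
        // FileSystem contract, deleting a nonexistent path returns false.
        return false;
      }
      throw e;
    }
  }
}
{code}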
 




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-common-trunk-Java8 #1458

2016-05-11 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-13125 FS Contract tests don't report FS initialization errors

--
[...truncated 5601 lines...]
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.159 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.617 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.098 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.123 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.28 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.603 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.807 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.402 sec - in 
org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.266 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.471 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.137 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.17 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.685 sec - in 
org.apache.hadoop.util.TestIndexedSort
Running org.apache.hadoop.util.TestMachineList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.343 sec - in 
org.apache.hadoop.util.TestLightWeightCache
Running org.apache.hadoop.util.TestWinUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 0.235 sec - 
in org.apache.hadoop.util.TestWinUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.hash.TestHash
Running org.apache.hadoop.util.TestSignalLogger
Tests run: 1, Failures: 0, Errors: 0, Skippe

Build failed in Jenkins: Hadoop-Common-trunk #2746

2016-05-11 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-13125 FS Contract tests don't report FS initialization errors

--
[...truncated 3846 lines...]
[...repeated javadoc "Generating" lines; output paths truncated...]
Building index for all the packages and classes...
Building index for all classes...
[INFO] Building jar: 

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop MiniKDC 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-minikdc ---
[INFO] Deleting 

[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-minikdc ---
[INFO] There are 14 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-minikdc ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-minikdc 
---
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-minikdc ---
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


---
 T E S T S

Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

2016-05-11 Thread Steve Loughran
> 
> On 10 May 2016, at 16:32, Akira AJISAKA  wrote:
> 
> Hi developers,
> 
> Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.

+1

Given Robert Kanter first filed the patch to do this, why not give him the 
honour? In fact, why not have a webex screen share of the occasion so we can 
all celebrate?



> Given this is a critical change, I'm thinking we should get the consensus 
> first.
> 
> One concern I think is, when the minimum version is set to JDK8, we need to 
> configure Jenkins to disable multi JDK test only in trunk.
> 

LGTM

We'll need to be strict after the switch: all patches to go into branch-2 will 
have to go through yetus as branch-2 patches, then be cherry-picked to trunk, or 
a separate patch done on the same JIRA for branch-2. Assuming yetus still 
tests the branch-2 stuff on JDK7, that will check version compatibility 
pre-commit.
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

2016-05-11 Thread Junping Du
bq. We'll need to be strict after the switch: all patches to go into branch 2 
will have to go through yetus as branch-2 patches, then cherry picked to trunk, 
or a separate patch on the same JIRA done for the branch-2. Assuming yetus 
still tests the branch-2 stuff on JDK7, that will check version compatibility 
pre-commit.
+1. That's something I wanted to say as well. We should have a separate process 
for backport efforts between trunk and branch-2 then. Otherwise, we could 
introduce Java 7-related bugs into branch-2 after the backport.

Thanks,

Junping

From: Steve Loughran 
Sent: Wednesday, May 11, 2016 5:20 PM
To: Akira AJISAKA
Cc: common-dev@hadoop.apache.org; hdfs-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

>
> On 10 May 2016, at 16:32, Akira AJISAKA  wrote:
>
> Hi developers,
>
> Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.

+1

Given Robert Kanter first filed the patch to do this —why not give him the 
honour. In fact, why not have a webex screen share of the occasion so we can 
all celebrate?



> Given this is a critical change, I'm thinking we should get the consensus 
> first.
>
> One concern I think is, when the minimum version is set to JDK8, we need to 
> configure Jenkins to disable multi JDK test only in trunk.
>

LGTM

We'll need to be strict after the switch: all patches to go into branch 2 will 
have to go through yetus as branch-2 patches, then cherry picked to trunk, or a 
separate patch on the same JIRA done for the branch-2. Assuming yetus still 
tests the branch-2 stuff on JDK7, that will check version compatibility 
pre-commit
-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Jenkins build is back to normal : Hadoop-common-trunk-Java8 #1459

2016-05-11 Thread Apache Jenkins Server
See 


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-Common-trunk #2747

2016-05-11 Thread Apache Jenkins Server
See 

Changes:

[junping_du] YARN-5029. RM needs to send update event with YarnApplicationState 
as

[cnauroth] HADOOP-13008. Add XFS Filter for UIs to Hadoop Common. Contributed by

[lmccay] HADOOP-12942. hadoop credential commands non-obviously use password of

--
[...truncated 3846 lines...]
[...repeated javadoc "Generating" lines; output paths truncated...]
Building index for all the packages and classes...
Building index for all classes...
[INFO] Building jar: 

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop MiniKDC 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-minikdc ---
[INFO] Deleting 

[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-minikdc ---
[INFO] There are 14 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-minikdc ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-minikdc 
---
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-minikdc ---
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-minikdc ---
[INFO] Surefire report directory: 


Re: [DISCUSS] Treating LimitedPrivate({"MapReduce"}) as Public APIs for YARN applications

2016-05-11 Thread Karthik Kambatla
I wonder if we should add another annotation between @Private and @Public.
Can that be @LimitedPrivate itself?

There are some APIs we shouldn't expect end-users to recompile even across
major versions (e.g. FileSystem, JobClient). On the other hand, requiring a
YARN application to recompile seems reasonable.

As Hitesh suggested, would it make sense to mark this @LimitedPrivate and
not just @LimitedPrivate{MR}? And update our guidelines to say users should
expect to recompile code for major releases and that there could be semantic
incompatibilities?
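For reference, a sketch of how the audience annotations under discussion look 
in code (the annotation types are real Hadoop classes; the annotated classes 
are invented):

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Today: nominally visible only to the named project(s), yet YARN
// applications depend on APIs marked like this in practice.
@InterfaceAudience.LimitedPrivate({"MapReduce"})
@InterfaceStability.Evolving
class SomeCredentialsRelatedApi {
}

// The proposed end state for classes such as UserGroupInformation.
@InterfaceAudience.Public
@InterfaceStability.Evolving
class SomePublicApi {
}
{code}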

On Tue, May 10, 2016 at 4:19 PM, Colin McCabe  wrote:

> Thanks for explaining, Chris.  I generally agree that
> UserGroupInformation should be annotated as Public rather than
> LimitedPrivate, although you guys have more context than I do.
>
> However, I do think it's important that we clarify that we can break
> public APIs across a major version transition such as 2.x -> 3.x.  It
> would be particularly nice to remove a lot of the static or global state
> in UGI, although I don't know if we'll get to that before 3.0 is
> released.
>
> best,
> Colin
>
> On Tue, May 10, 2016, at 14:46, Chris Nauroth wrote:
> > Yes, I agree with you Andrew.
> >
> > Sorry, I should clarify my prior response.  I didn't mean to imply a
> > blind s/LimitedPrivate/Public/g across the whole codebase.  Instead, I'm
> > +1 for the intent of HADOOP-10776: a transition to Public for
> > UserGroupInformation, and by extension the related parts of its API like
> > Credentials.
> >
> > I'm in the camp that generally questions the usefulness of
> > LimitedPrivate, but I agree that transitions to Public need case-by-case
> > consideration.
> >
> > --Chris Nauroth
> >
> > From: Andrew Wang <andrew.w...@cloudera.com>
> > Date: Tuesday, May 10, 2016 at 2:40 PM
> > To: Chris Nauroth <cnaur...@hortonworks.com>
> > Cc: Hitesh Shah <hit...@apache.org>, "yarn-...@hadoop.apache.org",
> > "mapreduce-...@hadoop.apache.org", "common-dev@hadoop.apache.org"
> > Subject: Re: [DISCUSS] Treating LimitedPrivate({"MapReduce"}) as Public
> > APIs for YARN applications
> >
> > Why don't we address these on a case-by-case basis, changing the
> > annotations on these key classes to Public? LimitedPrivate{"YARN
> > applications"} is the same thing as Public.
> >
> > This way we don't need to add special exceptions to our compatibility
> > policy. Keeps it simple.
> >
> > Best,
> > Andrew
> >
> > On Tue, May 10, 2016 at 2:26 PM, Chris Nauroth <cnaur...@hortonworks.com>
> > wrote:
> > +1 for transitioning from LimitedPrivate to Public.
> >
> > I view this as an extension of the need for UserGroupInformation and
> > related APIs to be Public.  Regardless of the original intent behind
> > LimitedPrivate, these are de facto public now, because there is no viable
> > alternative for applications that want to integrate with a secured Hadoop
> > cluster.
> >
> > There is prior discussion of this topic on HADOOP-10776 and HADOOP-12913.
> > HADOOP-10776 is a blocker for 2.8.0 to make the transition to Public.
> >
> > --Chris Nauroth
> >
> >
> >
> >
> > On 5/10/16, 11:34 AM, "Hitesh Shah" <hit...@apache.org> wrote:
> >
> > >There seems to be some incorrect assumptions on why the application had
> > >an issue. For rolling upgrade deployments, the application bundles the
> > >client-side jars that it was compiled against and uses them in its
> > >classpath and expects to be able to communicate with upgraded servers.
> > >Given that hadoop-common is a monolithic jar, it ends up being used on
> > >both client-side and server-side. The problem in this case was caused by
> > >the fact that the ResourceManager was generating the credentials file
> > >with a format understood only by hadoop-common from 3.x. For an
> > >application compiled against 2.x and has *only* hadoop-common from 2.x on
> > >its classpath, trying to read this file fails.
> > >
> > >This is not about whether internal implementations can change for
> > >non-public APIs. The file format for the Credential file in this scenario
> > >is *not* internal implementation especially when you can have different
> > >versions of the library trying to read the file. If an older client is
> > >talking to a newer versioned server, the general backward compat
> > >assumption is that the client should receive a response that it can parse
> > >and understand. In this scenario, the credentials file provided to the
> > >YARN app by the RM should have been written out with the older version or
> > >at the very least been readable by the older hadoop-common.jar.
> > >
> > >In any case, does anyone have any specific concerns with changing
> > >LimitedPrivate({"MapReduce"}) to Public?
> > >
> >

[jira] [Reopened] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-05-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe reopened HADOOP-12749:
---

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>
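The pattern named in the summary, sketched from the standard 
{{ThreadPoolExecutor.afterExecute}} extension point; this is the textbook form, 
not the attached patches:

{code}
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

/** Pool that logs task exceptions which would otherwise vanish silently. */
public class LoggingThreadPoolExecutor extends ThreadPoolExecutor {

  public LoggingThreadPoolExecutor(int poolSize) {
    super(poolSize, poolSize, 0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<Runnable>());
  }

  @Override
  protected void afterExecute(Runnable r, Throwable t) {
    super.afterExecute(r, t);
    if (t == null && r instanceof Future<?>) {
      // submit() wraps tasks in a FutureTask, which swallows exceptions
      // until get() is called; unwrap them here.
      try {
        ((Future<?>) r).get();
      } catch (CancellationException ce) {
        t = ce;
      } catch (ExecutionException ee) {
        t = ee.getCause();
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
      }
    }
    if (t != null) {
      // A real implementation would use the project's logger.
      System.err.println("Uncaught exception in pool thread: " + t);
    }
  }
}
{code}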




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-05-11 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-12749.
---
  Resolution: Fixed
   Fix Version/s: (was: 2.9.0)
  2.8.0
Target Version/s: 2.8.0

Backported to 2.8

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.8.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13135) Encounter response code 500 when accessing /metrics endpoint

2016-05-11 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-13135:
---

 Summary: Encounter response code 500 when accessing /metrics 
endpoint
 Key: HADOOP-13135
 URL: https://issues.apache.org/jira/browse/HADOOP-13135
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.1
Reporter: Ted Yu


When accessing /metrics endpoint on hbase master through hadoop 2.7.1, I got:
{code}
HTTP ERROR 500

Problem accessing /metrics. Reason:

INTERNAL_SERVER_ERROR
Caused by:

java.lang.NullPointerException
at 
org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029)
at 
org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
{code}
[~ajisakaa] suggested that code 500 should be 404 (NOT FOUND).
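A sketch of the suggested behaviour: if the Hadoop server context attribute is 
missing (which is what triggers the NPE above when the servlet runs inside a 
non-Hadoop HTTP server), answer 404 instead of letting the NPE become a 500. 
The guard and attribute lookup shown are illustrative:

{code}
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GuardedMetricsServlet extends HttpServlet {
  @Override
  public void doGet(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    // HttpServer2 normally stores its Configuration under "hadoop.conf";
    // inside a foreign server (e.g. HBase's) the attribute is null.
    Object conf = getServletContext().getAttribute("hadoop.conf");
    if (conf == null) {
      response.sendError(HttpServletResponse.SC_NOT_FOUND,
          "/metrics is not supported on this server");
      return;
    }
    // ... existing instrumentation-access check and metrics output ...
  }
}
{code}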



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-common-trunk-Java8 #1460

2016-05-11 Thread Apache Jenkins Server
See 

Changes:

[cmccabe] HADOOP-13065. Add a new interface for retrieving FS and FC Statistics

--
[...truncated 5584 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.426 sec - in 
org.apache.hadoop.io.TestBytesWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.rawcoder.TestDummyRawCoder
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.419 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestDummyRawCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoderLegacy
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.765 sec - 
in org.apache.hadoop.crypto.TestCryptoStreamsWithJceAesCtrCryptoCodec
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.729 sec - in 
org.apache.hadoop.io.TestBloomMapFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.052 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoderLegacy
Running org.apache.hadoop.io.erasurecode.rawcoder.TestXORRawCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoder
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.611 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestXORRawCoder
Running org.apache.hadoop.io.erasurecode.TestECSchema
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.164 sec - in 
org.apache.hadoop.io.erasurecode.TestECSchema
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.164 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoder
Running org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.coder.TestHHXORErasureCoder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.014 sec - in 
org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.968 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
Running org.apache.hadoop.io.TestWritableUtils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec - in 
org.apache.hadoop.io.TestWritableUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.532 sec - in 
org.apache.hadoop.io.TestBooleanWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.68 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestHHXORErasureCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.174 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDataByteBuffers
Running org.apache.hadoop.io.TestVersionedWritable
Running org.apache.hadoop.io.TestEnumSetWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.442 sec - in 
org.apache.hadoop.io.TestVersionedWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.782 sec - in 
org.apache.hadoop.io.TestDataByteBuffers
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestArrayPrimitiveWritable
Running org.apache.hadoop.io.TestMapWritable
Tests run: 6, Failures: 0, Errors: 0, Skippe

Re: [Release thread] 2.8.0 release activities

2016-05-11 Thread Sangjin Lee
Where do we stand in terms of closing out blocker/critical issues for
2.8.0? I still see 50 open JIRAs in Vinod's list:
https://issues.apache.org/jira/issues/?filter=12334985

But I see a lot of JIRAs with no patches or very stale patches. It would be
a good exercise to come up with the list of JIRAs that we need to block
2.8.0 for and focus our attention on closing them out. Thoughts?

Thanks,
Sangjin

On Sat, Apr 23, 2016 at 5:05 AM, Steve Loughran 
wrote:

>
> > On 23 Apr 2016, at 01:24, Vinod Kumar Vavilapalli 
> wrote:
> >
> > We are not converging - there’s still 58 more. I need help from the
> community in addressing / review 2.8.0 blockers. If folks can start with
> reviewing Patch available tickets, that’ll be great.
> >
> >
>
>
> I'm still doing the s3a stuff, other people testing and reviewing this
> stuff welcome.
>
> in particular, I could do with others playing with this patch of mine,
> which adds counters and things into S3a, based on the azure instrumentation
>
> https://issues.apache.org/jira/browse/HADOOP-13028
>
>
>


Build failed in Jenkins: Hadoop-Common-trunk #2748

2016-05-11 Thread Apache Jenkins Server
See 

Changes:

[cmccabe] HADOOP-13065. Add a new interface for retrieving FS and FC Statistics

--
[...truncated 5162 lines...]
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.882 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.613 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.548 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.903 sec - in 
org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.093 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.457 sec - in 
org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Running org.apache.hadoop.security.http.TestXFrameOptionsFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.55 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 sec - in 
org.apache.hadoop.security.http.TestXFrameOptionsFilter
Running org.apache.hadoop.security.http.TestCrossOriginFilter
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.484 sec - in 
org.apache.hadoop.security.http.TestCrossOriginFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.565 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.48 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.055 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.305 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.94 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.334 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.368 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.18 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.581 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.token.TestDtUtilShell
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.555 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.247 sec - in 
org.apache.hadoop.security.token.TestDtUtilShell
Running org.apache.hadoop.security.token.TestToken
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.548 sec - in 
org.apache.hadoop.security.token.TestToken
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.564 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.864 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.752 sec - 
in org.apache.hadoop.security.token.delegation.web.

Re: [Release thread] 2.8.0 release activities

2016-05-11 Thread Wangda Tan
+1, we should close such stale JIRAs to avoid doing unnecessary checks for
every release.

I'm working on reviewing YARN/MR critical/blocker patches currently; it'd be
very helpful if someone else could help with reviewing Common/HDFS
JIRAs.

Thanks,
Wangda


On Wed, May 11, 2016 at 4:20 PM, Sangjin Lee  wrote:

> Where do we stand in terms of closing out blocker/critical issues for
> 2.8.0? I still see 50 open JIRAs in Vinod's list:
> https://issues.apache.org/jira/issues/?filter=12334985
>
> But I see a lot of JIRAs with no patches or very stale patches. It would be
> a good exercise to come up with the list of JIRAs that we need to block
> 2.8.0 for and focus our attention on closing them out. Thoughts?
>
> Thanks,
> Sangjin
>
> On Sat, Apr 23, 2016 at 5:05 AM, Steve Loughran 
> wrote:
>
> >
> > > On 23 Apr 2016, at 01:24, Vinod Kumar Vavilapalli 
> > wrote:
> > >
> > > We are not converging - there’s still 58 more. I need help from the
> > community in addressing / review 2.8.0 blockers. If folks can start with
> > reviewing Patch available tickets, that’ll be great.
> > >
> > >
> >
> >
> > I'm still doing the s3a stuff, other people testing and reviewing this
> > stuff welcome.
> >
> > in particular, I could do with others playing with this patch of mine,
> > which adds counters and things into S3a, based on the azure
> instrumentation
> >
> > https://issues.apache.org/jira/browse/HADOOP-13028
> >
> >
> >
>


Re: [Release thread] 2.8.0 release activities

2016-05-11 Thread Sangjin Lee
How about this? I'll review the HADOOP/HDFS bugs in that list to come up
with true blockers for 2.8.0 or JIRAs that are close to being ready. I'll
report the list here. Then folks can chime in if you agree

Perhaps Wangda, you can go over the YARN/MR bugs. Sound like a plan?

Thanks,
Sangjin

On Wed, May 11, 2016 at 4:26 PM, Wangda Tan  wrote:

> +1, we should close such staled JIRAs to avoid doing unnecessary checks for
> every releases.
>
> I'm working on reviewing YARN/MR critical/blocker patches currently, it
> gonna very helpful if someone else can help with reviewing Common/HDFS
> JIRAs.
>
> Thanks,
> Wangda
>
>
> On Wed, May 11, 2016 at 4:20 PM, Sangjin Lee  wrote:
>
> > Where do we stand in terms of closing out blocker/critical issues for
> > 2.8.0? I still see 50 open JIRAs in Vinod's list:
> > https://issues.apache.org/jira/issues/?filter=12334985
> >
> > But I see a lot of JIRAs with no patches or very stale patches. It would
> be
> > a good exercise to come up with the list of JIRAs that we need to block
> > 2.8.0 for and focus our attention on closing them out. Thoughts?
> >
> > Thanks,
> > Sangjin
> >
> > On Sat, Apr 23, 2016 at 5:05 AM, Steve Loughran 
> > wrote:
> >
> > >
> > > > On 23 Apr 2016, at 01:24, Vinod Kumar Vavilapalli <vino...@apache.org>
> > > wrote:
> > > >
> > > > We are not converging - there’s still 58 more. I need help from the
> > > community in addressing / review 2.8.0 blockers. If folks can start
> with
> > > reviewing Patch available tickets, that’ll be great.
> > > >
> > > >
> > >
> > >
> > > I'm still doing the s3a stuff, other people testing and reviewing this
> > > stuff welcome.
> > >
> > > in particular, I could do with others playing with this patch of mine,
> > > which adds counters and things into S3a, based on the azure
> > instrumentation
> > >
> > > https://issues.apache.org/jira/browse/HADOOP-13028
> > >
> > >
> > >
> >
>


Re: [Release thread] 2.8.0 release activities

2016-05-11 Thread Wangda Tan
Sounds good to me :).

Jian and I have looked at all existing 2.8.0 blockers and criticals today.
To me, more than half of the MR/YARN blockers/criticals of 2.8 should be moved
out. I left comments on these JIRAs asking the original owners, and plan to
update the target versions of these JIRAs early next week.

Will keep this thread updated.

Thanks,
Wangda


On Wed, May 11, 2016 at 5:06 PM, Sangjin Lee  wrote:

> How about this? I'll review the HADOOP/HDFS bugs in that list to come up
> with true blockers for 2.8.0 or JIRAs that are close to being ready. I'll
> report the list here. Then folks can chime in if you agree
>
> Perhaps Wangda, you can go over the YARN/MR bugs. Sound like a plan?
>
> Thanks,
> Sangjin
>
> On Wed, May 11, 2016 at 4:26 PM, Wangda Tan  wrote:
>
>> +1, we should close such staled JIRAs to avoid doing unnecessary checks
>> for
>> every releases.
>>
>> I'm working on reviewing YARN/MR critical/blocker patches currently, it
>> gonna very helpful if someone else can help with reviewing Common/HDFS
>> JIRAs.
>>
>> Thanks,
>> Wangda
>>
>>
>> On Wed, May 11, 2016 at 4:20 PM, Sangjin Lee  wrote:
>>
>> > Where do we stand in terms of closing out blocker/critical issues for
>> > 2.8.0? I still see 50 open JIRAs in Vinod's list:
>> > https://issues.apache.org/jira/issues/?filter=12334985
>> >
>> > But I see a lot of JIRAs with no patches or very stale patches. It
>> would be
>> > a good exercise to come up with the list of JIRAs that we need to block
>> > 2.8.0 for and focus our attention on closing them out. Thoughts?
>> >
>> > Thanks,
>> > Sangjin
>> >
>> > On Sat, Apr 23, 2016 at 5:05 AM, Steve Loughran <ste...@hortonworks.com>
>> > wrote:
>> >
>> > >
>> > > > On 23 Apr 2016, at 01:24, Vinod Kumar Vavilapalli <vino...@apache.org>
>> > > wrote:
>> > > >
>> > > > We are not converging - there’s still 58 more. I need help from the
>> > > community in addressing / review 2.8.0 blockers. If folks can start
>> with
>> > > reviewing Patch available tickets, that’ll be great.
>> > > >
>> > > >
>> > >
>> > >
>> > > I'm still doing the s3a stuff, other people testing and reviewing this
>> > > stuff welcome.
>> > >
>> > > in particular, I could do with others playing with this patch of mine,
>> > > which adds counters and things into S3a, based on the azure
>> > instrumentation
>> > >
>> > > https://issues.apache.org/jira/browse/HADOOP-13028
>> > >
>> > >
>> > >
>> >
>>
>
>


Re: [Release thread] 2.8.0 release activities

2016-05-11 Thread Sangjin Lee
I see the following list of JIRAs from HADOOP and HDFS as blockers for
2.8.0. Other JIRAs are either old issues with little movement or new issues
that don't appear to be as critical/ready.

- HADOOP-12893
- HADOOP-12892
- HADOOP-10940 (patch ready?)
- HADOOP-12971 (can be done relatively quickly?)
- HDFS-7959
- HDFS-7597 (needs review/some more work?)

I would propose moving the rest out of scope for 2.8.0 (23 JIRAs). Let me
know what you think.


On Wed, May 11, 2016 at 5:37 PM, Wangda Tan  wrote:

> Sounds good to me :).
>
> Jian and I have looked at all existing 2.8.0 blockers and criticals today.
> To me more than half of MR/YARN blockers/criticals of 2.8 should be moved
> out. Left comments on these JIRAs asked original owners, plan to update
> target version of these JIRAs early next week.
>
> Will keep this thread updated.
>
> Thanks,
> Wangda
>
>
> On Wed, May 11, 2016 at 5:06 PM, Sangjin Lee  wrote:
>
>> How about this? I'll review the HADOOP/HDFS bugs in that list to come up
>> with true blockers for 2.8.0 or JIRAs that are close to being ready. I'll
>> report the list here. Then folks can chime in if you agree
>>
>> Perhaps Wangda, you can go over the YARN/MR bugs. Sound like a plan?
>>
>> Thanks,
>> Sangjin
>>
>> On Wed, May 11, 2016 at 4:26 PM, Wangda Tan  wrote:
>>
>>> +1, we should close such staled JIRAs to avoid doing unnecessary checks
>>> for
>>> every releases.
>>>
>>> I'm working on reviewing YARN/MR critical/blocker patches currently, it
>>> gonna very helpful if someone else can help with reviewing Common/HDFS
>>> JIRAs.
>>>
>>> Thanks,
>>> Wangda
>>>
>>>
>>> On Wed, May 11, 2016 at 4:20 PM, Sangjin Lee  wrote:
>>>
>>> > Where do we stand in terms of closing out blocker/critical issues for
>>> > 2.8.0? I still see 50 open JIRAs in Vinod's list:
>>> > https://issues.apache.org/jira/issues/?filter=12334985
>>> >
>>> > But I see a lot of JIRAs with no patches or very stale patches. It
>>> would be
>>> > a good exercise to come up with the list of JIRAs that we need to block
>>> > 2.8.0 for and focus our attention on closing them out. Thoughts?
>>> >
>>> > Thanks,
>>> > Sangjin
>>> >
>>> > On Sat, Apr 23, 2016 at 5:05 AM, Steve Loughran <ste...@hortonworks.com>
>>> > wrote:
>>> >
>>> > >
>>> > > > On 23 Apr 2016, at 01:24, Vinod Kumar Vavilapalli <vino...@apache.org>
>>> > > wrote:
>>> > > >
>>> > > > We are not converging - there’s still 58 more. I need help from the
>>> > > community in addressing / reviewing 2.8.0 blockers. If folks can start
>>> with
>>> > > reviewing Patch available tickets, that’ll be great.
>>> > > >
>>> > > >
>>> > >
>>> > >
>>> > > I'm still doing the s3a stuff; other people testing and reviewing this
>>> > > stuff are welcome.
>>> > >
>>> > > In particular, I could do with others playing with this patch of mine,
>>> > > which adds counters and things into S3a, based on the Azure
>>> > instrumentation
>>> > >
>>> > > https://issues.apache.org/jira/browse/HADOOP-13028
>>> > >
>>> > >
>>> > >
>>> >
>>>
>>
>>
>


Re: [Release thread] 2.8.0 release activities

2016-05-11 Thread Jian He
For MapReduce/YARN, I closed a few stale ones. Only 4 JIRAs need attention
for 2.8:

MAPREDUCE-6288
YARN-1815
YARN-4685
YARN-4844

The rest are either improvements or long-standing issues and do not qualify as
release blockers, IMO.
I think we’ll try to get these 4 JIRAs in ASAP. The rest will be on a
best-effort basis: resolve as many as possible and move them out if not
resolved in time.

Jian

On May 11, 2016, at 5:37 PM, Wangda Tan <wheele...@gmail.com> wrote:

Sounds good to me :).

Jian and I have looked at all existing 2.8.0 blockers and criticals today.
To me, more than half of the MR/YARN blockers/criticals for 2.8 should be
moved out. I left comments on these JIRAs asking the original owners, and I
plan to update the target version of these JIRAs early next week.

Will keep this thread updated.

Thanks,
Wangda


On Wed, May 11, 2016 at 5:06 PM, Sangjin Lee <sj...@apache.org> wrote:

How about this? I'll review the HADOOP/HDFS bugs in that list to come up
with true blockers for 2.8.0 or JIRAs that are close to being ready. I'll
report the list here. Then folks can chime in if you agree.

Perhaps Wangda, you can go over the YARN/MR bugs. Sound like a plan?

Thanks,
Sangjin

On Wed, May 11, 2016 at 4:26 PM, Wangda Tan <wheele...@gmail.com> wrote:

+1, we should close such stale JIRAs to avoid doing unnecessary checks
for every release.

I'm working on reviewing YARN/MR critical/blocker patches currently; it
would be very helpful if someone else could help with reviewing Common/HDFS
JIRAs.

Thanks,
Wangda


On Wed, May 11, 2016 at 4:20 PM, Sangjin Lee <sj...@apache.org> wrote:

Where do we stand in terms of closing out blocker/critical issues for
2.8.0? I still see 50 open JIRAs in Vinod's list:
https://issues.apache.org/jira/issues/?filter=12334985

But I see a lot of JIRAs with no patches or very stale patches. It
would be
a good exercise to come up with the list of JIRAs that we need to block
2.8.0 for and focus our attention on closing them out. Thoughts?

Thanks,
Sangjin

On Sat, Apr 23, 2016 at 5:05 AM, Steve Loughran <ste...@hortonworks.com>
wrote:


On 23 Apr 2016, at 01:24, Vinod Kumar Vavilapalli <vino...@apache.org> wrote:

We are not converging - there’s still 58 more. I need help from the
community in addressing / reviewing 2.8.0 blockers. If folks can start
with
reviewing Patch available tickets, that’ll be great.




I'm still doing the s3a stuff; other people testing and reviewing this
stuff are welcome.

In particular, I could do with others playing with this patch of mine,
which adds counters and things into S3a, based on the Azure
instrumentation

https://issues.apache.org/jira/browse/HADOOP-13028
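
For anyone picking up the HADOOP-13028 testing request above, here is a
minimal sketch of a probe that drives some S3A read traffic and then dumps
the long-standing FileSystem.Statistics counters. The bucket and object
names are placeholders, and the sketch deliberately assumes only the generic
Statistics API, not the new counters the patch itself adds.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3AStatsProbe {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder bucket; credentials come from the usual fs.s3a.* settings.
    FileSystem fs = FileSystem.get(URI.create("s3a://my-test-bucket/"), conf);
    try (FSDataInputStream in = fs.open(new Path("/test/data.bin"))) {
      byte[] buf = new byte[8192];
      while (in.read(buf) > 0) {
        // Drain the stream just to generate measurable read traffic.
      }
    }
    // Aggregate counters for every FileSystem instance using the s3a scheme.
    for (FileSystem.Statistics stats : FileSystem.getAllStatistics()) {
      if ("s3a".equals(stats.getScheme())) {
        System.out.println("bytes read:    " + stats.getBytesRead());
        System.out.println("bytes written: " + stats.getBytesWritten());
        System.out.println("read ops:      " + stats.getReadOps());
      }
    }
  }
}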










Build failed in Jenkins: Hadoop-common-trunk-Java8 #1461

2016-05-11 Thread Apache Jenkins Server
See 

Changes:

[kasha] YARN-4995. FairScheduler: Display per-queue demand on the scheduler

[szetszwo] HDFS-10346. Implement asynchronous setPermission/setOwner for

[Arun Suresh] YARN-5049. Extend NMStateStore to save queued container 
information.

--
[...truncated 5584 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.627 sec - in 
org.apache.hadoop.io.file.tfile.TestTFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestCompression
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.638 sec - in 
org.apache.hadoop.io.file.tfile.TestCompression
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.814 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsJClassComparatorByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Running org.apache.hadoop.io.file.tfile.TestTFileSplit
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.971 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileJClassComparatorByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileComparator2
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.87 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.019 sec - in 
org.apache.hadoop.io.TestSequenceFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.516 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileComparator2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestTFileByteArrays
Running org.apache.hadoop.io.file.tfile.TestTFileSeqFileComparison
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.755 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileNoneCodecsByteArrays
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.file.tfile.TestVLong
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.539 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileSplit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 25, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.945 sec - in 
org.apache.hadoop.io.file.tfile.TestTFileByteArrays
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.208 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.133 sec - in 
org.apache.hadoop.io.file.tfile.TestVLong
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.rawcoder.TestXORRawCoder
Running org.apache.hadoop.io.TestArrayWritable
Running org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoderLegacy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.301 sec - in 
org.apache.hadoop.io.TestArrayWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.394 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestXORRawCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.rawcoder.TestDummyRawCoder
Running org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoder
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.666 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoderLegacy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.25 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestDummyRawCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring

Build failed in Jenkins: Hadoop-Common-trunk #2749

2016-05-11 Thread Apache Jenkins Server
See 

Changes:

[kasha] YARN-4995. FairScheduler: Display per-queue demand on the scheduler

[szetszwo] HDFS-10346. Implement asynchronous setPermission/setOwner for

[Arun Suresh] YARN-5049. Extend NMStateStore to save queued container 
information.

--
[...truncated 5174 lines...]
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.993 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.787 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Running org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.1 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.587 sec - in 
org.apache.hadoop.security.http.TestRestCsrfPreventionFilter
Running org.apache.hadoop.security.http.TestXFrameOptionsFilter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.454 sec - in 
org.apache.hadoop.security.http.TestXFrameOptionsFilter
Running org.apache.hadoop.security.http.TestCrossOriginFilter
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.527 sec - in 
org.apache.hadoop.security.http.TestCrossOriginFilter
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.228 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.72 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.38 sec - in 
org.apache.hadoop.security.TestUserGroupInformation
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.603 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.287 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.047 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.485 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.46 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.827 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.674 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.token.TestDtUtilShell
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.595 sec - in 
org.apache.hadoop.security.token.TestToken
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.728 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.45 sec - in 
org.apache.hadoop.security.token.TestDtUtilShell
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.056 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.691 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
Running 
org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.813 sec - 
in or

Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

2016-05-11 Thread Gangumalla, Uma
+1

Regards,
Uma

On 5/10/16, 2:24 PM, "Andrew Wang"  wrote:

>+1
>
>On Tue, May 10, 2016 at 12:36 PM, Ravi Prakash 
>wrote:
>
>> +1. Thanks for driving this, Akira.
>>
>> On Tue, May 10, 2016 at 10:25 AM, Tsuyoshi Ozawa 
>>wrote:
>>
>> > > Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in
>>trunk.
>> >
>> > Sounds good. To do so, we need to check the blockers of the 3.0.0-alpha
>> > RC, especially upgrading all dependencies which use reflection first.
>> >
>> > Thanks,
>> > - Tsuyoshi
>> >
>> > On Tue, May 10, 2016 at 8:32 AM, Akira AJISAKA
>> >  wrote:
>> > > Hi developers,
>> > >
>> > > Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in
>>trunk.
>> > > Given this is a critical change, I'm thinking we should get the
>> consensus
>> > > first.
>> > >
>> > > One concern I think is, when the minimum version is set to JDK8, we
>> > > need to configure Jenkins to disable the multi-JDK test only in trunk.
>> > >
>> > > Any thoughts?
>> > >
>> > > Thanks,
>> > > Akira
>> > >
>> > > -
>> > > To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
>> > > For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>> > >
>> >
>> > -
>> > To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
>> > For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>> >
>> >
>>


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Hadoop CI with alternate architectures.

2016-05-11 Thread MrAsanjar .
I am writing this email to reduce mishaps similar to the issue reported in
JIRA https://issues.apache.org/jira/browse/HADOOP-11505. In a nutshell, an
x86-specific performance enhancement broke the Hadoop build on the Power and
SPARC architectures.
To avoid similar issues in the future, I would like to offer my help here as
an OpenPOWER Foundation member. For example, we could contribute Power-based
Jenkins slave(s) to the Apache Hadoop CI, as we have successfully done for
the Apache Bigtop CI in the past:
https://ci.bigtop.apache.org/computer/docker-slave-ppc-1/.
That way, we could catch such regressions earlier in the Hadoop development
cycle. I'd appreciate the community's guidance on this.
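
To make the concern concrete: the usual defence in Hadoop is to keep
architecture-specific acceleration behind a runtime probe, so platforms
without the native library still work. Below is a minimal sketch of that
guard pattern in Java; it assumes only the public NativeCodeLoader API, and
the class and method names are illustrative rather than the HADOOP-11505 fix
itself.

import java.util.zip.CRC32;

import org.apache.hadoop.util.NativeCodeLoader;

public class NativeGuardExample {
  /**
   * Computes a CRC32 checksum, reporting whether an architecture-specific
   * native fast path would have been available. (Hypothetical example.)
   */
  public static long checksumPortably(byte[] data) {
    if (NativeCodeLoader.isNativeCodeLoaded()) {
      // A real caller could dispatch to a JNI-backed implementation here;
      // such code should only be compiled on architectures that provide it.
      System.out.println("libhadoop loaded; native acceleration available");
    } else {
      System.out.println("libhadoop missing; using pure-Java code paths");
    }
    // Portable fallback that behaves the same on x86, Power, and SPARC.
    CRC32 crc = new CRC32();
    crc.update(data, 0, data.length);
    return crc.getValue();
  }
}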