[jira] [Created] (HDFS-7451) Namenode HA failover happens very frequently from active to standby

2014-11-26 Thread LAXMAN KUMAR SAHOO (JIRA)
LAXMAN KUMAR SAHOO created HDFS-7451:


 Summary: Namenode HA failover happens very frequently from active 
to standby
 Key: HDFS-7451
 URL: https://issues.apache.org/jira/browse/HDFS-7451
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: LAXMAN KUMAR SAHOO


We have two namenodes with HA enabled. For the last couple of days we have 
been observing that failover from active to standby happens very frequently. 
Below are the log details of the active namenode while the failover happens. 
Is there a fix for this issue?

Namenode logs:

{code}
2014-11-25 22:24:02,020 WARN org.apache.hadoop.ipc.Server: IPC Server 
Responder, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing 
from 10.2.16.214:40751: output error
2014-11-25 22:24:02,020 INFO org.apache.hadoop.ipc.Server: IPC Server handler 
23 on 8020 caught an exception
java.nio.channels.ClosedChannelException
at 
sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:265)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:474)
at org.apache.hadoop.ipc.Server.channelWrite(Server.java:2195)
at org.apache.hadoop.ipc.Server.access$2000(Server.java:110)
at 
org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:979)
at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1045)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1798)

2014-11-25 22:24:10,631 INFO 
org.apache.hadoop.hdfs.server.namenode.FileJournalManager: Finalizing edits 
file /sda/dfs/namenode/current/edits_inprogress_01643676954 -> 
/sda/dfs/namenode/current/edits_01643676954-01643677390
2014-11-25 22:24:10,631 INFO 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Closing
java.lang.Exception
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.close(IPCLoggerChannel.java:182)
at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.close(AsyncLoggerSet.java:102)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.close(QuorumJournalManager.java:446)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.close(JournalSet.java:107)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet$4.apply(JournalSet.java:222)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:347)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.close(JournalSet.java:219)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:308)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:939)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1365)
at 
org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
at 
org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:61)
at 
org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.setState(ActiveState.java:52)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToStandby(NameNode.java:1278)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToStandby(NameNodeRpcServer.java:1046)
at 
org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToStandby(HAServiceProtocolServerSideTranslatorPB.java:119)
at 
org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:3635)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1752)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1748)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)


2014-11-25 22:24:10,632 INFO 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required 
for standby state
2014-11-25 22:24:10,633 INFO 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Will roll logs on 
active node at dc1-had03-m002.dc01.revsci.net/10.2.16.92:8020 every 120 seconds.
2014-11-25 22:24:10,634 INFO 
org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer: Starting standby 
checkpoint thread...
Checkpointing active NN at dc1-had03-m002.dc01.revsci.net:50070
Serving checkpoints at dc1-had03-m001.dc01.revsci.net/10.2.16.91:50070
{code}

zkfc logs:
{code}
2014-11-25 22:24:12,192 INFO org.apache.zookeeper.ClientCnxn: Unable to read 
additional data from server sessionid 0x449b8ce9a110255, likely server has 
closed socket, closing socket connection and attem
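{code}

A common cause of this failover pattern is the ZKFC losing its ZooKeeper session while the active NameNode is stalled (for example by a long GC pause); the other node then wins the election and the stalled node is transitioned to standby. If long pauses are confirmed, one mitigation is raising the ZKFC session timeout in core-site.xml. The property name below is the standard Hadoop key; the 30-second value is only an illustrative starting point, not a recommendation:

{code}
<!-- core-site.xml: ZKFC ZooKeeper session timeout in milliseconds
     (default 5000). 30000 here is an illustrative value; tune it to
     exceed the observed NameNode pause times. -->
<property>
  <name>ha.zookeeper.session-timeout.ms</name>
  <value>30000</value>
</property>
{code}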

[jira] [Resolved] (HDFS-5098) Enhance FileSystem.Statistics to have locality information

2014-11-26 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha resolved HDFS-5098.
--
Resolution: Duplicate

> Enhance FileSystem.Statistics to have locality information
> --
>
> Key: HDFS-5098
> URL: https://issues.apache.org/jira/browse/HDFS-5098
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bikas Saha
>Assignee: Suresh Srinivas
> Fix For: 2.6.0
>
>
> Currently in MR/Tez we don't have a good and accurate means to detect how much 
> of the IO was actually done locally. Getting this information from the 
> source of truth would be much better.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7450) Consolidate GetFileInfo, GetListings and GetContentSummary into a single class

2014-11-26 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-7450:


 Summary: Consolidate GetFileInfo, GetListings and 
GetContentSummary into a single class
 Key: HDFS-7450
 URL: https://issues.apache.org/jira/browse/HDFS-7450
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai


This jira proposes to consolidate the implementation of {{GetFileInfo}}, 
{{GetListings}} and {{GetContentSummary}} into a single class.





[jira] [Created] (HDFS-7449) Add metrics to NFS gateway

2014-11-26 Thread Brandon Li (JIRA)
Brandon Li created HDFS-7449:


 Summary: Add metrics to NFS gateway
 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li








[jira] [Reopened] (HDFS-7437) Storing block ids instead of BlockInfo object in INodeFile

2014-11-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai reopened HDFS-7437:
--

Reopening the issue; the wrong JIRA was resolved.

> Storing block ids instead of BlockInfo object in INodeFile
> --
>
> Key: HDFS-7437
> URL: https://issues.apache.org/jira/browse/HDFS-7437
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: 2.7.0
>
> Attachments: HDFS-7437.000.patch, HDFS-7437.001.patch
>
>
> Currently {{INodeFile}} stores the list of blocks as references to 
> {{BlockInfo}} objects instead of block ids. This creates an implicit 
> dependency between the namespace and the block manager.
> The dependency blocks several recent efforts, such as separating the block 
> manager out as a standalone service, moving block information off heap, and 
> optimizing the memory usage of the block manager.
> This jira proposes to decouple the dependency by storing block ids instead of 
> object references in {{INodeFile}} objects.
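The proposed decoupling can be sketched roughly as follows: the inode keeps only primitive block ids and resolves block metadata through the block manager's id-to-info map. Class and field names here are illustrative stand-ins, not the actual HDFS implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the indirection HDFS-7437 proposes: INodeFile
// holds long ids rather than BlockInfo references, so the namespace has
// no compile-time dependency on block-manager types.
public class BlockIdIndirection {
    static class BlockInfo {
        final long id;
        BlockInfo(long id) { this.id = id; }
    }

    // Stand-in for the block manager's blocksMap (id -> BlockInfo).
    static final Map<Long, BlockInfo> blocksMap = new HashMap<>();

    static class INodeFile {
        // Before: BlockInfo[] blocks (direct object references).
        // After: long[] blockIds (ids only; no block-manager types).
        long[] blockIds;

        BlockInfo getBlock(int i) {
            // Lookup through the block manager instead of dereferencing.
            return blocksMap.get(blockIds[i]);
        }
    }

    public static void main(String[] args) {
        blocksMap.put(1001L, new BlockInfo(1001L));
        INodeFile f = new INodeFile();
        f.blockIds = new long[] { 1001L };
        System.out.println(f.getBlock(0).id); // prints 1001
    }
}
```

The trade-off is an extra map lookup per block access in exchange for letting the block manager move off heap or out of process without touching the namespace.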





[jira] [Created] (HDFS-7448) TestBookKeeperHACheckpoints fails in trunk build

2014-11-26 Thread Ted Yu (JIRA)
Ted Yu created HDFS-7448:


 Summary: TestBookKeeperHACheckpoints fails in trunk build
 Key: HDFS-7448
 URL: https://issues.apache.org/jira/browse/HDFS-7448
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


The test failed against both Java 7 and Java 8.
From https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/17/console :
{code}
testStandbyExceptionThrownDuringCheckpoint(org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints)
  Time elapsed: 6.822 sec  <<< ERROR!
org.apache.hadoop.ipc.RemoteException: File /testFile could only be replicated 
to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and 
no node(s) are excluded in this operation.
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1558)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3024)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:699)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:966)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2125)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2121)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1683)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2119)

at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:101)
at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1544)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:600)
{code}





Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #17

2014-11-26 Thread Apache Jenkins Server
See 

Changes:

[jianhe] YARN-2404. Removed ApplicationAttemptState and ApplicationState class 
in RMStateStore. Contributed by Tsuyoshi OZAWA

[jianhe] MAPREDUCE-5568. Fixed CompletedJob in JHS to show progress percentage 
correctly in case the number of mappers or reducers is zero. Contributed by 
MinJi Kim

[wang] HADOOP-11300. KMS startup scripts must not display the keystore / 
truststore passwords. Contributed by Arun Suresh.

[wang] HDFS-7097. Allow block reports to be processed during checkpointing on 
standby name node. (kihwal via wang)

[wang] HADOOP-11173. Improve error messages for some KeyShell commands.

[jianhe] YARN-2906. CapacitySchedulerPage shows HTML tags for a queue's Active 
Users. Contributed by Jason Lowe

[kasha] YARN-2188. [YARN-1492] Client service for cache manager. (Chris Trezzo 
and Sangjin Lee via kasha)

[kasha] Revert "MAPREDUCE-5785. Derive heap size or mapreduce.*.memory.mb 
automatically. (Gera Shegalov and Karthik Kambatla via kasha)"

[wheat9] HDFS-7440. Consolidate snapshot related operations in a single class. 
Contributed by Haohui Mai.

--
[...truncated 11268 lines...]
Tests in error: 
  TestBookKeeperHACheckpoints.testStandbyExceptionThrownDuringCheckpoint » 
Remote

Tests run: 35, Failures: 0, Errors: 1, Skipped: 0

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, 
no dependency information available
[WARNING] Failed to retrieve plugin descriptor for 
org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin 
org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be 
resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in 
http://repo.maven.apache.org/maven2 was cached in the local repository, 
resolution will not be reattempted until the update interval of central has 
elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 15 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 12 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hadoop-hdfs-nfs ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.hdfs.nfs.nfs3.TestClientAccessPrivilege
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.107 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestClientAccessPrivilege
Running org.apache.hadoop.hdfs.nfs.nfs3.TestNfs3Utils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.248 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestNfs3Utils
Running org.apache.hadoop.hdfs.nfs.nfs3.TestWrites
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.451 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestWrites
Running org.apache.hadoop.hdfs.nfs.nfs3.TestExportsTable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.782 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestExportsTable
Running org.apache.hadoop.hdfs.nfs.nfs3.TestReaddir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.354 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestReaddir
Running org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgr

Hadoop-Hdfs-trunk-Java8 - Build # 17 - Failure

2014-11-26 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/17/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11461 lines...]
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [  02:48 h]
[INFO] Apache Hadoop HttpFS .. SUCCESS [03:35 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal . FAILURE [01:54 min]
[INFO] Apache Hadoop HDFS-NFS  SUCCESS [01:31 min]
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.039 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:55 h
[INFO] Finished at: 2014-11-26T14:29:36+00:00
[INFO] Final Memory: 102M/1495M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #16
Archived 2 artifacts
Archive block size is 32768
Received 99 blocks and 125673432 bytes
Compression is 2.5%
Took 59 sec
Recording test results
Updating MAPREDUCE-5568
Updating YARN-1492
Updating YARN-2188
Updating HADOOP-11173
Updating YARN-2906
Updating YARN-2404
Updating MAPREDUCE-5785
Updating HDFS-7097
Updating HADOOP-11300
Updating HDFS-7440
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

Build failed in Jenkins: Hadoop-Hdfs-trunk #1945

2014-11-26 Thread Apache Jenkins Server
See 

Changes:

[jianhe] YARN-2404. Removed ApplicationAttemptState and ApplicationState class 
in RMStateStore. Contributed by Tsuyoshi OZAWA

[jianhe] MAPREDUCE-5568. Fixed CompletedJob in JHS to show progress percentage 
correctly in case the number of mappers or reducers is zero. Contributed by 
MinJi Kim

[wang] HADOOP-11300. KMS startup scripts must not display the keystore / 
truststore passwords. Contributed by Arun Suresh.

[wang] HDFS-7097. Allow block reports to be processed during checkpointing on 
standby name node. (kihwal via wang)

[wang] HADOOP-11173. Improve error messages for some KeyShell commands.

[jianhe] YARN-2906. CapacitySchedulerPage shows HTML tags for a queue's Active 
Users. Contributed by Jason Lowe

[kasha] YARN-2188. [YARN-1492] Client service for cache manager. (Chris Trezzo 
and Sangjin Lee via kasha)

[kasha] Revert "MAPREDUCE-5785. Derive heap size or mapreduce.*.memory.mb 
automatically. (Gera Shegalov and Karthik Kambatla via kasha)"

[wheat9] HDFS-7440. Consolidate snapshot related operations in a single class. 
Contributed by Haohui Mai.

--
[...truncated 11264 lines...]
Tests in error: 
  TestBookKeeperHACheckpoints.testStandbyExceptionThrownDuringCheckpoint » 
Remote

Tests run: 35, Failures: 0, Errors: 1, Skipped: 0

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[WARNING] The POM for org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 is missing, 
no dependency information available
[WARNING] Failed to retrieve plugin descriptor for 
org.eclipse.m2e:lifecycle-mapping:1.0.0: Plugin 
org.eclipse.m2e:lifecycle-mapping:1.0.0 or one of its dependencies could not be 
resolved: Failure to find org.eclipse.m2e:lifecycle-mapping:jar:1.0.0 in 
http://repo.maven.apache.org/maven2 was cached in the local repository, 
resolution will not be reattempted until the update interval of central has 
elapsed or updates are forced
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 15 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 12 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.16:test (default-test) @ hadoop-hdfs-nfs ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.hdfs.nfs.nfs3.TestReaddir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.369 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestReaddir
Running org.apache.hadoop.hdfs.nfs.nfs3.TestExportsTable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.781 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestExportsTable
Running org.apache.hadoop.hdfs.nfs.nfs3.TestWrites
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.429 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestWrites
Running org.apache.hadoop.hdfs.nfs.nfs3.TestClientAccessPrivilege
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.138 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestClientAccessPrivilege
Running org.apache.hadoop.hdfs.nfs.nfs3.TestNfs3Utils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.25 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestNfs3Utils
Running org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Tests run: 22, Failures: 0, 

Hadoop-Hdfs-trunk - Build # 1945 - Failure

2014-11-26 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1945/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 11457 lines...]
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS [  02:47 h]
[INFO] Apache Hadoop HttpFS .. SUCCESS [03:33 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal . FAILURE [02:15 min]
[INFO] Apache Hadoop HDFS-NFS  SUCCESS [01:31 min]
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.048 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:54 h
[INFO] Finished at: 2014-11-26T14:28:43+00:00
[INFO] Final Memory: 97M/1273M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #1944
Archived 2 artifacts
Archive block size is 32768
Received 103 blocks and 125549067 bytes
Compression is 2.6%
Took 55 sec
Recording test results
Updating MAPREDUCE-5568
Updating YARN-1492
Updating YARN-2188
Updating HADOOP-11173
Updating YARN-2906
Updating YARN-2404
Updating MAPREDUCE-5785
Updating HDFS-7097
Updating HADOOP-11300
Updating HDFS-7440
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All tests passed

[jira] [Resolved] (HDFS-6803) Documenting DFSClient#DFSInputStream expectations reading and preading in concurrent context

2014-11-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-6803.
--
   Resolution: Fixed
Fix Version/s: 2.7.0

Committed patch, thanks!

> Documenting DFSClient#DFSInputStream expectations reading and preading in 
> concurrent context
> 
>
> Key: HDFS-6803
> URL: https://issues.apache.org/jira/browse/HDFS-6803
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs-client
>Affects Versions: 2.4.1
>Reporter: stack
>Assignee: stack
> Fix For: 2.7.0
>
> Attachments: 9117.md.txt, DocumentingDFSClientDFSInputStream (1).pdf, 
> DocumentingDFSClientDFSInputStream.v2.pdf, HDFS-6803v2.txt, HDFS-6803v3.txt, 
> fsdatainputstream.md.v3.html
>
>
> Reviews of the patch posted on the parent task suggest that we be more 
> explicit about how DFSIS is expected to behave when being read by contending 
> threads. It is also suggested that presumptions made internally be made 
> explicit by documenting expectations.
> Before we put up a patch we've made a document of assertions we'd like to 
> make into tenets of DFSInputStream. If there is agreement, we'll attach to 
> this issue a patch that weaves the assumptions into DFSIS as javadoc and 
> class comments. 
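The read-vs-pread distinction being documented above can be sketched in plain Java (this is not the HDFS client, just an illustration of the two contracts): a stateful read() advances a shared position, while a positioned read ("pread") takes an absolute offset and leaves the stream position untouched, which is why concurrent preads are safe while concurrent stateful reads race on the position.

```java
// Illustrative model of the two read contracts on an in-memory buffer.
public class ReadVsPread {
    private final byte[] data;
    private long pos; // shared seek position, mutated by stateful read()

    ReadVsPread(byte[] data) { this.data = data; }

    // Stateful read: copies from the current position and advances it.
    synchronized int read(byte[] buf, int off, int len) {
        int n = Math.min(len, data.length - (int) pos);
        if (n <= 0) return -1;
        System.arraycopy(data, (int) pos, buf, off, n);
        pos += n; // side effect visible to every other caller
        return n;
    }

    // Positioned read ("pread"): absolute offset, shared state untouched.
    int read(long position, byte[] buf, int off, int len) {
        int n = Math.min(len, data.length - (int) position);
        if (n <= 0) return -1;
        System.arraycopy(data, (int) position, buf, off, n);
        return n;
    }

    public static void main(String[] args) {
        ReadVsPread s = new ReadVsPread("abcdef".getBytes());
        byte[] b = new byte[3];
        s.read(b, 0, 3);       // stateful: reads "abc", pos now 3
        s.read(0L, b, 0, 3);   // pread at offset 0: pos still 3
        s.read(b, 0, 3);       // stateful resumes at 3
        System.out.println(new String(b)); // prints "def"
    }
}
```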





[jira] [Created] (HDFS-7447) Number of maximum Acl entries on a File/Folder should be made user configurable than hardcoding .

2014-11-26 Thread J.Andreina (JIRA)
J.Andreina created HDFS-7447:


 Summary: Number of maximum Acl entries on a File/Folder should be 
made user configurable than hardcoding .
 Key: HDFS-7447
 URL: https://issues.apache.org/jira/browse/HDFS-7447
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: security
Reporter: J.Andreina



By default, a newly created folder1 has 6 ACL entries. Once the ACL entries 
assigned to folder1 exceed 32, it is no longer possible to assign ACLs for a 
group/user to folder1.
{noformat}
2014-11-20 18:55:06,553 ERROR [qtp1279235236-17 - /rolexml/role/modrole] Error 
occured while setting permissions for Resource:[ hdfs://hacluster/folder1 ] and 
Error message is : Invalid ACL: ACL has 33 entries, which exceeds maximum of 32.
at 
org.apache.hadoop.hdfs.server.namenode.AclTransformation.buildAndValidateAcl(AclTransformation.java:274)
at 
org.apache.hadoop.hdfs.server.namenode.AclTransformation.mergeAclEntries(AclTransformation.java:181)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedModifyAclEntries(FSDirectory.java:2771)
at 
org.apache.hadoop.hdfs.server.namenode.FSDirectory.modifyAclEntries(FSDirectory.java:2757)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.modifyAclEntries(FSNamesystem.java:7734)
{noformat}

Here the value 32 is hardcoded; it could be made user-configurable. 

{noformat}
private static List<AclEntry> buildAndValidateAcl(ArrayList<AclEntry> aclBuilder)
    throws AclException {
  if (aclBuilder.size() > 32) {
    throw new AclException("Invalid ACL: ACL has " + aclBuilder.size()
        + " entries, which exceeds maximum of 32.");
  }
  // ...
}
{noformat}
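The change being proposed can be sketched as below: the limit check takes a configurable maximum instead of the literal 32. This is a simplified stand-alone sketch, not the actual AclTransformation code; in HDFS the limit would be read from a configuration property (a key name such as {{dfs.namenode.acl.max-entries}} is an assumption, not an existing setting).

```java
import java.util.Collections;
import java.util.List;

// Sketch of a configurable ACL-entry limit. The maxEntries argument would
// come from NameNode configuration (hypothetical key
// "dfs.namenode.acl.max-entries", defaulting to the current 32).
public class AclLimitCheck {
    static void validateAclSize(List<?> aclEntries, int maxEntries) {
        if (aclEntries.size() > maxEntries) {
            // Same message shape as the hardcoded check, with the
            // configured limit substituted for the literal 32.
            throw new IllegalArgumentException("Invalid ACL: ACL has "
                + aclEntries.size() + " entries, which exceeds maximum of "
                + maxEntries + ".");
        }
    }

    public static void main(String[] args) {
        List<String> entries = Collections.nCopies(33, "entry");
        try {
            validateAclSize(entries, 32); // default limit: rejects 33 entries
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
        validateAclSize(entries, 64);     // raised limit: 33 entries accepted
    }
}
```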


