Status / RE: Apache JIRA DOWN

2016-03-11 Thread Brahma Reddy Battula
FYI.

The following status was posted at http://status.apache.org/:


Fri, 11 Mar 2016 19:56:31 GMT - Sat, 12 Mar 2016 19:56:31 GMT:
As part of the earlier emergency maintenance, the db is going to be slow until 
the underlying disk operations complete. This will impact services like JIRA 
for the time being.




-----Original Message-----
From: Brahma Reddy Battula [mailto:brahmareddy.batt...@huawei.com] 
Sent: 12 March 2016 12:45
To: common-dev@hadoop.apache.org; yarn-...@hadoop.apache.org; 
hdfs-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Apache JIRA DOWN

Hi All

Apache JIRA is down; any update on this?


--Brahma Reddy Battula



[jira] [Resolved] (HADOOP-12919) MiniDFSCluster uses wrong IP address

2016-03-11 Thread Christopher Tubbs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christopher Tubbs resolved HADOOP-12919.

Resolution: Duplicate

Sorry about the duplicate. JIRA is so slow right now, I didn't realize the 
previous submission had made it through.

> MiniDFSCluster uses wrong IP address
> 
>
> Key: HADOOP-12919
> URL: https://issues.apache.org/jira/browse/HADOOP-12919
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.2.0, 2.6.1, 2.6.3
>Reporter: Christopher Tubbs
>
> MiniDFSCluster seems to be registering the DataNode using the machine's 
> internal IP address, rather than "localhost/127.0.0.1". It looks like the 
> problem isn't MiniDFSCluster-specific, but that's what's biting me right now 
> and I can't figure out a workaround.
> MiniDFSCluster logs show roughly the following (jetty services ignored):
> NameNode starts org.apache.hadoop.ipc.Server listening on 
> localhost/127.0.0.1:43023
> DataNode reports "Configured hostname is 127.0.0.1"
> DataNode reports "Opened streaming server at /127.0.0.1:57310"
> DataNode starts org.apache.hadoop.ipc.Server listening on 
> localhost/127.0.0.1:53015
> DataNode registers with NN using storage id 
> DS-X-172.31.3.214-57310-X with ipcPort=53015
> NameNode reports "Adding a new node: /default-rack/172.31.3.214:57310"
> The storage id should have been derived from 127.0.0.1, and so should all 
> the other registered information.
> I've verified with netstat that all services were listening only on 127.0.0.1.
> This resulted in the client being unable to write blocks to the datanode, 
> because it was not listening on the address given to it by the namenode (the 
> address it was registered under).
> The actual client error message is:
> {code:java}
> [IPC Server handler 0 on 43023] INFO  org.apache.hadoop.hdfs.StateChange  - 
> BLOCK* allocateBlock: /test-dir/HelloWorld.jar. 
> BP-460569874-172.31.3.214-1457727894640 
> blk_1073741825_1001{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[172.31.3.214:57310|RBW]]}
> [Thread-61] INFO  org.apache.hadoop.hdfs.DFSClient  - Exception in 
> createBlockOutputStream
> java.net.ConnectException: Connection refused
>   at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>   at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1305)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1128)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
> [Thread-61] INFO  org.apache.hadoop.hdfs.DFSClient  - Abandoning 
> BP-460569874-172.31.3.214-1457727894640:blk_1073741825_1001
> [Thread-61] INFO  org.apache.hadoop.hdfs.DFSClient  - Excluding datanode 
> 172.31.3.214:57310
> [IPC Server handler 2 on 43023] WARN  
> org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy  - Not 
> able to place enough replicas, still in need of 1 to reach 1
> For more information, please enable DEBUG log level on 
> org.apache.commons.logging.impl.Log4JLogger
> [IPC Server handler 2 on 43023] ERROR 
> org.apache.hadoop.security.UserGroupInformation  - PriviledgedActionException 
> as:christopher (auth:SIMPLE) cause:java.io.IOException: File 
> /test-dir/HelloWorld.jar could only be replicated to 0 nodes instead of 
> minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are 
> excluded in this operation.
> [IPC Server handler 2 on 43023] INFO  org.apache.hadoop.ipc.Server  - IPC 
> Server handler 2 on 43023, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 
> 172.31.3.214:57395 Call#12 Retry#0: error: java.io.IOException: File 
> /test-dir/HelloWorld.jar could only be replicated to 0 nodes instead of 
> minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are 
> excluded in this operation.
> java.io.IOException: File /test-dir/HelloWorld.jar could only be replicated 
> to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running 
> and 1 node(s) are excluded in this operation.
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
>   at 
> 

Build failed in Jenkins: Hadoop-common-trunk-Java8 #1205

2016-03-11 Thread Apache Jenkins Server
See 

Changes:

[cmccabe] HDFS-9942. Add an HTrace span when refreshing the groups for a 
username

[cmccabe] HADOOP-11996. Improve and restructure native ISAL support (Kai Zheng 
via

--
[...truncated 5499 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericsUtil
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.487 sec - in 
org.apache.hadoop.util.TestGenericsUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestChunkedArrayList
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.243 sec - in 
org.apache.hadoop.util.TestChunkedArrayList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.393 sec - in 
org.apache.hadoop.util.TestVersionUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.405 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.239 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.663 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.217 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.403 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.139 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.494 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.506 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.312 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.232 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.006 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.286 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit 

[jira] [Created] (HADOOP-12920) The static Block#toString method should not include information from derived classes

2016-03-11 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12920:
-

 Summary: The static Block#toString method should not include 
information from derived classes
 Key: HADOOP-12920
 URL: https://issues.apache.org/jira/browse/HADOOP-12920
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor


The static {{Block#toString}} method should not include information from 
derived classes.  This was a regression introduced by HDFS-9350.  Thanks to 
[~cnauroth] for finding this issue.
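
For illustration, a hypothetical sketch of the pattern (not the actual HDFS 
source): a static formatting helper that dispatches through an overridable 
instance method picks up the runtime class's state, so its output stops 
matching the plain base format.

{code:java}
// Hypothetical sketch only; names and formats do not match HDFS exactly.
class Block {
  long id;

  // Callers expect the plain "blk_<id>" form here, but virtual dispatch
  // returns whatever the runtime class's toString() produces.
  static String toString(Block b) {
    return b.toString();
  }

  @Override
  public String toString() {
    return "blk_" + id;
  }
}

class BlockWithExtras extends Block {
  String location;

  @Override
  public String toString() {
    // Derived-class info leaks into the static Block.toString(b) output:
    // Block.toString(new BlockWithExtras()) yields "blk_0 @null".
    return super.toString() + " @" + location;
  }
}
{code}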





[jira] [Created] (HADOOP-12919) MiniDFSCluster uses wrong IP address

2016-03-11 Thread Christopher Tubbs (JIRA)
Christopher Tubbs created HADOOP-12919:
--

 Summary: MiniDFSCluster uses wrong IP address
 Key: HADOOP-12919
 URL: https://issues.apache.org/jira/browse/HADOOP-12919
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.6.3, 2.6.1, 2.2.0
Reporter: Christopher Tubbs


MiniDFSCluster seems to be registering the DataNode using the machine's 
internal IP address, rather than "localhost/127.0.0.1". It looks like the 
problem isn't MiniDFSCluster-specific, but that's what's biting me right now 
and I can't figure out a workaround.

MiniDFSCluster logs show roughly the following (jetty services ignored):

NameNode starts org.apache.hadoop.ipc.Server listening on 
localhost/127.0.0.1:43023
DataNode reports "Configured hostname is 127.0.0.1"
DataNode reports "Opened streaming server at /127.0.0.1:57310"
DataNode starts org.apache.hadoop.ipc.Server listening on 
localhost/127.0.0.1:53015
DataNode registers with NN using storage id 
DS-X-172.31.3.214-57310-X with ipcPort=53015
NameNode reports "Adding a new node: /default-rack/172.31.3.214:57310"

The storage id should have been derived from 127.0.0.1, and so should all 
the other registered information.

I've verified with netstat that all services were listening only on 127.0.0.1.
This resulted in the client being unable to write blocks to the datanode, 
because it was not listening on the address given to it by the namenode (the 
address it was registered under).

The actual client error message is:

{code:java}
[IPC Server handler 0 on 43023] INFO  org.apache.hadoop.hdfs.StateChange  - 
BLOCK* allocateBlock: /test-dir/HelloWorld.jar. 
BP-460569874-172.31.3.214-1457727894640 
blk_1073741825_1001{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.31.3.214:57310|RBW]]}
[Thread-61] INFO  org.apache.hadoop.hdfs.DFSClient  - Exception in 
createBlockOutputStream
java.net.ConnectException: Connection refused
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
  at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
  at 
org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1305)
  at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1128)
  at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
  at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
[Thread-61] INFO  org.apache.hadoop.hdfs.DFSClient  - Abandoning 
BP-460569874-172.31.3.214-1457727894640:blk_1073741825_1001
[Thread-61] INFO  org.apache.hadoop.hdfs.DFSClient  - Excluding datanode 
172.31.3.214:57310
[IPC Server handler 2 on 43023] WARN  
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy  - Not able 
to place enough replicas, still in need of 1 to reach 1
For more information, please enable DEBUG log level on 
org.apache.commons.logging.impl.Log4JLogger
[IPC Server handler 2 on 43023] ERROR 
org.apache.hadoop.security.UserGroupInformation  - PriviledgedActionException 
as:christopher (auth:SIMPLE) cause:java.io.IOException: File 
/test-dir/HelloWorld.jar could only be replicated to 0 nodes instead of 
minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are 
excluded in this operation.
[IPC Server handler 2 on 43023] INFO  org.apache.hadoop.ipc.Server  - IPC 
Server handler 2 on 43023, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 172.31.3.214:57395 
Call#12 Retry#0: error: java.io.IOException: File /test-dir/HelloWorld.jar 
could only be replicated to 0 nodes instead of minReplication (=1).  There are 
1 datanode(s) running and 1 node(s) are excluded in this operation.
java.io.IOException: File /test-dir/HelloWorld.jar could only be replicated to 
0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 
node(s) are excluded in this operation.
  at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
  at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
  at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
  at 
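
A minimal, untested workaround sketch follows. It assumes the hostname-based 
addressing keys (dfs.datanode.hostname, dfs.client.use.datanode.hostname, 
dfs.datanode.use.datanode.hostname) behave as documented; whether it actually 
avoids the misregistration above is unverified.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class MiniDfsLocalhostWorkaround {
  public static void main(String[] args) throws Exception {
    // Untested sketch: pin the DataNode's advertised hostname and make
    // both client and DataNode address peers by hostname, so the client
    // dials localhost instead of the internal IP that was registered.
    Configuration conf = new HdfsConfiguration();
    conf.set("dfs.datanode.hostname", "localhost");
    conf.setBoolean("dfs.client.use.datanode.hostname", true);
    conf.setBoolean("dfs.datanode.use.datanode.hostname", true);

    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .build();
    try {
      cluster.waitActive();
      FileSystem fs = cluster.getFileSystem();
      // Block writes through fs should now target localhost:<port>,
      // sidestepping the refused connection in the log above.
    } finally {
      cluster.shutdown();
    }
  }
}
{code}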

[jira] [Created] (HADOOP-12918) MiniDFSCluster uses wrong IP address

2016-03-11 Thread Christopher Tubbs (JIRA)
Christopher Tubbs created HADOOP-12918:
--

 Summary: MiniDFSCluster uses wrong IP address
 Key: HADOOP-12918
 URL: https://issues.apache.org/jira/browse/HADOOP-12918
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.6.3, 2.6.1, 2.2.0
Reporter: Christopher Tubbs


MiniDFSCluster seems to be registering the DataNode using the machine's 
internal IP address, rather than "localhost/127.0.0.1". It looks like the 
problem isn't MiniDFSCluster-specific, but that's what's biting me right now 
and I can't figure out a workaround.

MiniDFSCluster logs show roughly the following (jetty services ignored):

NameNode starts org.apache.hadoop.ipc.Server listening on 
localhost/127.0.0.1:43023
DataNode reports "Configured hostname is 127.0.0.1"
DataNode reports "Opened streaming server at /127.0.0.1:57310"
DataNode starts org.apache.hadoop.ipc.Server listening on 
localhost/127.0.0.1:53015
DataNode registers with NN using storage id 
DS-X-172.31.3.214-57310-X with ipcPort=53015
NameNode reports "Adding a new node: /default-rack/172.31.3.214:57310"

The storage id should have been derived from 127.0.0.1, and so should all 
the other registered information.

I've verified with netstat that all services were listening only on 127.0.0.1.
This resulted in the client being unable to write blocks to the datanode, 
because it was not listening on the address given to it by the namenode (the 
address it was registered under).

The actual client error message is:

{code:java}
[IPC Server handler 0 on 43023] INFO  org.apache.hadoop.hdfs.StateChange  - 
BLOCK* allocateBlock: /test-dir/HelloWorld.jar. 
BP-460569874-172.31.3.214-1457727894640 
blk_1073741825_1001{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, 
replicas=[ReplicaUnderConstruction[172.31.3.214:57310|RBW]]}
[Thread-61] INFO  org.apache.hadoop.hdfs.DFSClient  - Exception in 
createBlockOutputStream
java.net.ConnectException: Connection refused
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
  at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
  at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
  at 
org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1305)
  at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1128)
  at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1088)
  at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
[Thread-61] INFO  org.apache.hadoop.hdfs.DFSClient  - Abandoning 
BP-460569874-172.31.3.214-1457727894640:blk_1073741825_1001
[Thread-61] INFO  org.apache.hadoop.hdfs.DFSClient  - Excluding datanode 
172.31.3.214:57310
[IPC Server handler 2 on 43023] WARN  
org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy  - Not able 
to place enough replicas, still in need of 1 to reach 1
For more information, please enable DEBUG log level on 
org.apache.commons.logging.impl.Log4JLogger
[IPC Server handler 2 on 43023] ERROR 
org.apache.hadoop.security.UserGroupInformation  - PriviledgedActionException 
as:christopher (auth:SIMPLE) cause:java.io.IOException: File 
/test-dir/HelloWorld.jar could only be replicated to 0 nodes instead of 
minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are 
excluded in this operation.
[IPC Server handler 2 on 43023] INFO  org.apache.hadoop.ipc.Server  - IPC 
Server handler 2 on 43023, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 172.31.3.214:57395 
Call#12 Retry#0: error: java.io.IOException: File /test-dir/HelloWorld.jar 
could only be replicated to 0 nodes instead of minReplication (=1).  There are 
1 datanode(s) running and 1 node(s) are excluded in this operation.
java.io.IOException: File /test-dir/HelloWorld.jar could only be replicated to 
0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 
node(s) are excluded in this operation.
  at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
  at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
  at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
  at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
  at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
  at 

Jenkins build is back to normal : Hadoop-Common-trunk #2498

2016-03-11 Thread Apache Jenkins Server
See 



[jira] [Reopened] (HADOOP-12672) RPC timeout should not override IPC ping interval

2016-03-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-12672:
-

This just broke everything. Rolling back across the board.

> RPC timeout should not override IPC ping interval
> -
>
> Key: HADOOP-12672
> URL: https://issues.apache.org/jira/browse/HADOOP-12672
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.8.0, 2.7.3, 2.6.4
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: HADOOP-12672.001.patch, HADOOP-12672.002.patch, 
> HADOOP-12672.003.patch, HADOOP-12672.004.patch
>
>
> Currently, if the value of ipc.client.rpc-timeout.ms is greater than 0, the 
> timeout overrides ipc.ping.interval, and the client throws an exception 
> instead of sending a ping once the interval has passed. The RPC timeout 
> should work without effectively disabling the IPC ping.
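
A short sketch of the settings involved (the key names are the standard ipc.* 
configuration keys; the comments restate the behavior described above):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class IpcPingVsRpcTimeout {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean("ipc.client.ping", true);          // enable client pings
    conf.setInt("ipc.ping.interval", 60000);           // ping the server every 60s
    conf.setInt("ipc.client.rpc-timeout.ms", 120000);  // overall RPC timeout

    // Behavior this issue addresses: a positive rpc-timeout.ms replaced
    // the 60s ping interval, so an idle client threw a timeout exception
    // rather than sending pings, effectively disabling IPC ping.
    System.out.println(conf.getInt("ipc.client.rpc-timeout.ms", 0));
  }
}
{code}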





[jira] [Created] (HADOOP-12917) MiniDFS clusters failing to come up, "timeout can't be negative"

2016-03-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12917:
---

 Summary: MiniDFS clusters failing to come up, "timeout can't be 
negative"
 Key: HADOOP-12917
 URL: https://issues.apache.org/jira/browse/HADOOP-12917
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Critical


All attempts to bring up MiniDFS clusters are failing with java.io.IOException: 
Failed on local exception: java.io.IOException: Couldn't set up IO streams: 
java.lang.IllegalArgumentException: timeout can't be negative;

HADOOP-12672 just altered this code; assuming it's the cause and rolling back.
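
For what it's worth, that message matches the JDK's exact wording for a 
negative socket timeout, so a negative value is presumably reaching the 
socket layer; a minimal demo (assuming it goes through 
java.net.Socket#setSoTimeout):

{code:java}
import java.net.Socket;

public class NegativeTimeoutDemo {
  public static void main(String[] args) throws Exception {
    try (Socket s = new Socket()) {
      // Throws java.lang.IllegalArgumentException: timeout can't be negative
      s.setSoTimeout(-1);
    }
  }
}
{code}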





Build failed in Jenkins: Hadoop-Common-trunk #2497

2016-03-11 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] MAPREDUCE-6520. Migrate MR Client test cases part 1.

[aajisaka] Revert "MAPREDUCE-6520. Migrate MR Client test cases part 1."

[aajisaka] MAPREDUCE-6520. Migrate MR Client test cases part 1. Contributed by

--
[...truncated 5119 lines...]
Running org.apache.hadoop.ipc.TestRetryCacheMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.463 sec - in 
org.apache.hadoop.ipc.TestRetryCacheMetrics
Running org.apache.hadoop.ipc.TestMiniRPCBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.1 sec - in 
org.apache.hadoop.ipc.TestMiniRPCBenchmark
Running org.apache.hadoop.ipc.TestIPC
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.165 sec - 
in org.apache.hadoop.ipc.TestIPC
Running org.apache.hadoop.ipc.TestDecayRpcScheduler
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.103 sec - in 
org.apache.hadoop.ipc.TestDecayRpcScheduler
Running org.apache.hadoop.ipc.TestFairCallQueue
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.322 sec - in 
org.apache.hadoop.ipc.TestFairCallQueue
Running org.apache.hadoop.ipc.TestServer
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.968 sec - in 
org.apache.hadoop.ipc.TestServer
Running org.apache.hadoop.ipc.TestRPC
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.583 sec - 
in org.apache.hadoop.ipc.TestRPC
Running org.apache.hadoop.ipc.TestProtoBufRPCCompatibility
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.988 sec - in 
org.apache.hadoop.ipc.TestProtoBufRPCCompatibility
Running org.apache.hadoop.ipc.TestSaslRPC
Tests run: 90, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.181 sec - 
in org.apache.hadoop.ipc.TestSaslRPC
Running org.apache.hadoop.ipc.TestRetryCache
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.487 sec - in 
org.apache.hadoop.ipc.TestRetryCache
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.535 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.94 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Running org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.215 sec - 
in org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.602 sec - 
in org.apache.hadoop.security.token.delegation.TestDelegationToken
Running 
org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.77 sec - in 
org.apache.hadoop.security.token.delegation.TestZKDelegationTokenSecretManager
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.512 sec - in 
org.apache.hadoop.security.token.TestToken
Running org.apache.hadoop.security.TestWhitelistBasedResolver
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.528 sec - in 
org.apache.hadoop.security.TestWhitelistBasedResolver
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.655 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.TestGroupFallback
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.698 sec - in 
org.apache.hadoop.security.TestGroupFallback
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.649 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.438 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.644 sec - in 
org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Running org.apache.hadoop.security.TestKDiag
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.959 sec - in 
org.apache.hadoop.security.TestKDiag
Running org.apache.hadoop.security.TestShellBasedIdMapping
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.631 sec - in 
org.apache.hadoop.security.TestShellBasedIdMapping
Running org.apache.hadoop.security.TestNullGroupsMapping
Tests run: 1, 

[jira] [Resolved] (HADOOP-12913) Drop the @LimitedPrivate marker off UGI, as it's clearly untrue

2016-03-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-12913.
-
Resolution: Duplicate

> Drop the @LimitedPrivate marker off UGI, as it's clearly untrue
> -
>
> Key: HADOOP-12913
> URL: https://issues.apache.org/jira/browse/HADOOP-12913
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> UGI declares itself as
> {code}
> @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce", "HBase", "Hive", 
> "Oozie"})
> {code}
> Really it's "any application that interacts with services in a secure 
> cluster". 
> I propose: replace with {{@Public, @Evolving}}
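
A sketch of the proposed form (the annotation names are the real 
org.apache.hadoop.classification types; this shows the proposal, not a 
committed change):

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

@InterfaceAudience.Public
@InterfaceStability.Evolving
public class UserGroupInformation {
  // ...
}
{code}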





Jenkins build is back to normal : Hadoop-common-trunk-Java8 #1201

2016-03-11 Thread Apache Jenkins Server
See