Re: Do we still have nightly (or even weekly) unit test run for Hadoop projects?

2017-10-18 Thread Akira Ajisaka

Yes, qbt runs nightly and it sends e-mail to dev lists.
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/

Regards,
Akira

On 2017/10/19 7:54, Wangda Tan wrote:

Hi,

Do we still have a nightly (or even weekly) unit test run for the Hadoop
projects? I couldn't find it on the Jenkins dashboard and I haven't seen
reports sent to the dev lists for a while.

Thanks,
Wangda



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[DISCUSSION] Merging HDFS-7240 Object Store (Ozone) to trunk

2017-10-18 Thread Yang Weiwei
Hello everyone,


I would like to start this thread to discuss merging Ozone (HDFS-7240) to 
trunk. This feature implements an object store which can co-exist with HDFS. 
Ozone is disabled by default. We have tested Ozone with cluster sizes varying 
from 1 to 100 data nodes.



The merge payload includes the following:

  1.  All services, management scripts
  2.  Object store APIs, exposed via both REST and RPC
  3.  Master service UIs, command line interfaces
  4.  Pluggable pipeline integration
  5.  Ozone File System (Hadoop compatible file system implementation, passes 
all FileSystem contract tests)
  6.  Corona - a load generator for Ozone.
  7.  Essential documentation added to Hadoop site.
  8.  Version specific Ozone Documentation, accessible via service UI.
  9.  Docker support for ozone, which enables faster development cycles.


To build Ozone and run it using Docker, please follow the instructions on this 
wiki page: 
https://cwiki.apache.org/confluence/display/HADOOP/Dev+cluster+with+docker.


We have built a passionate and diverse community to drive the development of 
this feature. As a team, we have made significant progress in the 3 years since 
the first JIRA for HDFS-7240 was opened in October 2014. So far, almost 400 
JIRAs have been resolved by 20+ contributors/committers from different 
countries and affiliations. We also want to thank the large number of community 
members who supported our efforts, contributed ideas, and participated in the 
design of Ozone.


Please share your thoughts, thanks!


-- Weiwei Yang


Do we still have nightly (or even weekly) unit test run for Hadoop projects?

2017-10-18 Thread Wangda Tan
Hi,

Do we still have a nightly (or even weekly) unit test run for the Hadoop
projects? I couldn't find it on the Jenkins dashboard and I haven't seen
reports sent to the dev lists for a while.

Thanks,
Wangda


[jira] [Created] (HDFS-12683) DFSZKFailOverController re-order logic for logging Exception

2017-10-18 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12683:
-

 Summary: DFSZKFailOverController re-order logic for logging 
Exception
 Key: HDFS-12683
 URL: https://issues.apache.org/jira/browse/HDFS-12683
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham


Log the exception before closing the connections and terminating the server.
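The intended ordering can be sketched as follows (a hypothetical illustration only; the class and method names are invented and are not the actual DFSZKFailoverController code):

```java
import java.util.ArrayList;
import java.util.List;

public class ShutdownOrderSketch {
    // Record the order of shutdown steps: log the exception first, so it
    // is not lost once connections are closed and the process terminates.
    public static List<String> handleFatal(Exception e) {
        List<String> steps = new ArrayList<>();
        steps.add("log:" + e.getMessage()); // log the exception FIRST
        steps.add("closeConnections");      // then release connections
        steps.add("terminate");             // finally terminate the server
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(handleFatal(new Exception("zk session lost")));
    }
}
```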



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[jira] [Created] (HDFS-12682) ECAdmin -listPolicies will always show policy state as DISABLED

2017-10-18 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-12682:


 Summary: ECAdmin -listPolicies will always show policy state as 
DISABLED
 Key: HDFS-12682
 URL: https://issues.apache.org/jira/browse/HDFS-12682
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Reporter: Xiao Chen
Assignee: Xiao Chen


On a real cluster, {{hdfs ec -listPolicies}} will always show the policy state 
as DISABLED.

{noformat}
[hdfs@nightly6x-1 root]$ hdfs ec -listPolicies
Erasure Coding Policies:
ErasureCodingPolicy=[Name=RS-10-4-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=10, numParityUnits=4]], CellSize=1048576, Id=5, State=DISABLED]
ErasureCodingPolicy=[Name=RS-3-2-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=3, numParityUnits=2]], CellSize=1048576, Id=2, State=DISABLED]
ErasureCodingPolicy=[Name=RS-6-3-1024k, Schema=[ECSchema=[Codec=rs, 
numDataUnits=6, numParityUnits=3]], CellSize=1048576, Id=1, State=DISABLED]
ErasureCodingPolicy=[Name=RS-LEGACY-6-3-1024k, 
Schema=[ECSchema=[Codec=rs-legacy, numDataUnits=6, numParityUnits=3]], 
CellSize=1048576, Id=3, State=DISABLED]
ErasureCodingPolicy=[Name=XOR-2-1-1024k, Schema=[ECSchema=[Codec=xor, 
numDataUnits=2, numParityUnits=1]], CellSize=1048576, Id=4, State=DISABLED]
[hdfs@nightly6x-1 root]$ hdfs ec -getPolicy -path /ecec
XOR-2-1-1024k
{noformat}

This is because when [deserializing 
protobuf|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelperClient.java#L2942],
 the static instance of the [SystemErasureCodingPolicies 
class|https://github.com/apache/hadoop/blob/branch-3.0.0-beta1/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/SystemErasureCodingPolicies.java#L101]
 is checked first, and it always returns the cached policy objects, which are 
created by default with state=DISABLED.
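The bug pattern described above can be sketched as follows (hypothetical, heavily simplified code; the names {{SYSTEM_POLICIES}}, {{convertBuggy}}, and {{convertFixed}} are invented for illustration and are not the actual PBHelperClient API):

```java
import java.util.HashMap;
import java.util.Map;

public class CachedPolicySketch {
    enum State { DISABLED, ENABLED }

    static class Policy {
        final int id; final State state;
        Policy(int id, State state) { this.id = id; this.state = state; }
    }

    // Static cache of built-in policies, constructed with state=DISABLED.
    static final Map<Integer, Policy> SYSTEM_POLICIES = new HashMap<>();
    static { SYSTEM_POLICIES.put(1, new Policy(1, State.DISABLED)); }

    // Buggy path: the static cache is consulted first, so the state carried
    // in the wire message is silently dropped.
    static Policy convertBuggy(int id, State wireState) {
        Policy cached = SYSTEM_POLICIES.get(id);
        return cached != null ? cached : new Policy(id, wireState);
    }

    // Fixed path: always honor the state that came over the wire.
    static Policy convertFixed(int id, State wireState) {
        return new Policy(id, wireState);
    }

    public static void main(String[] args) {
        // The NameNode reports policy 1 as ENABLED, but the buggy path
        // still shows the cached DISABLED state.
        System.out.println(convertBuggy(1, State.ENABLED).state); // DISABLED
        System.out.println(convertFixed(1, State.ENABLED).state); // ENABLED
    }
}
```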






[jira] [Created] (HDFS-12681) Fold HdfsLocatedFileStatus into HdfsFileStatus

2017-10-18 Thread Chris Douglas (JIRA)
Chris Douglas created HDFS-12681:


 Summary: Fold HdfsLocatedFileStatus into HdfsFileStatus
 Key: HDFS-12681
 URL: https://issues.apache.org/jira/browse/HDFS-12681
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chris Douglas
Priority: Minor


{{HdfsLocatedFileStatus}} is a subtype of {{HdfsFileStatus}}, but not of 
{{LocatedFileStatus}}. Conversion requires copying common fields and shedding 
unknown data. It would be cleaner and sufficient for {{HdfsFileStatus}} to 
extend {{LocatedFileStatus}}.
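The proposed type relationship can be sketched as follows (hypothetical, heavily simplified stand-ins; the real classes carry many more fields and methods):

```java
public class StatusHierarchySketch {
    // Simplified stand-ins for the real Hadoop classes.
    static class FileStatus { long length; }
    static class LocatedFileStatus extends FileStatus { long[] blockIds; }

    // Proposed shape: HdfsFileStatus extends LocatedFileStatus, so a status
    // carrying block locations IS-A LocatedFileStatus and no field-copying
    // conversion is needed.
    static class HdfsFileStatus extends LocatedFileStatus { byte[] symlink; }

    public static void main(String[] args) {
        HdfsFileStatus s = new HdfsFileStatus();
        System.out.println(s instanceof LocatedFileStatus); // true
    }
}
```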






[jira] [Created] (HDFS-12680) Ozone: SCM: Lease support for container creation

2017-10-18 Thread Nanda kumar (JIRA)
Nanda kumar created HDFS-12680:
--

 Summary: Ozone: SCM: Lease support for container creation
 Key: HDFS-12680
 URL: https://issues.apache.org/jira/browse/HDFS-12680
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nanda kumar
Assignee: Nanda kumar


This brings in lease support for container creation.
A lease should be given to a container that is moved to the {{CREATING}} state 
when the {{BEGIN_CREATE}} event happens; a {{LeaseException}} should be thrown 
if the container already holds a lease. The lease must be released during the 
{{COMPLETE_CREATE}} event. If the lease times out, the container should be 
moved to the {{DELETING}} state, and an exception should be thrown if a 
{{COMPLETE_CREATE}} event is received for that container.
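The lease rules described above can be sketched as a small state machine (a hypothetical illustration, not the actual SCM implementation; all names are invented):

```java
import java.util.HashMap;
import java.util.Map;

public class ContainerLeaseSketch {
    enum State { CREATING, OPEN, DELETING }

    // Unchecked here to keep the sketch short.
    static class LeaseException extends RuntimeException {
        LeaseException(String m) { super(m); }
    }

    private final Map<String, State> states = new HashMap<>();
    private final Map<String, Long> leaseExpiry = new HashMap<>();

    // BEGIN_CREATE: move the container to CREATING and grant a lease;
    // throw if it already holds one.
    void beginCreate(String name, long now, long ttlMs) {
        if (leaseExpiry.containsKey(name)) {
            throw new LeaseException("already holds a lease: " + name);
        }
        states.put(name, State.CREATING);
        leaseExpiry.put(name, now + ttlMs);
    }

    // COMPLETE_CREATE: release the lease; if it has timed out, move the
    // container to DELETING and throw.
    void completeCreate(String name, long now) {
        Long expiry = leaseExpiry.remove(name);
        if (expiry == null || now > expiry) {
            states.put(name, State.DELETING);
            throw new LeaseException("lease expired for: " + name);
        }
        states.put(name, State.OPEN);
    }

    State stateOf(String name) { return states.get(name); }

    public static void main(String[] args) {
        ContainerLeaseSketch scm = new ContainerLeaseSketch();
        scm.beginCreate("c1", 0L, 1000L);
        scm.completeCreate("c1", 500L);   // within the lease: succeeds
        System.out.println(scm.stateOf("c1")); // OPEN
    }
}
```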






[jira] [Resolved] (HDFS-12676) when blocks has corrupted replicas,throws Exception

2017-10-18 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-12676.

Resolution: Duplicate

Resolving it as a duplicate. Thanks @lynn for reporting it.

> when blocks has corrupted replicas,throws Exception
> ---
>
> Key: HDFS-12676
> URL: https://issues.apache.org/jira/browse/HDFS-12676
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Reporter: lynn
>
> When a block has corrupt replicas, exceptions are thrown as follows:
> Exception 1:
> 2017-10-18 15:24:55,858 WARN  blockmanagement.BlockManager 
> (BlockManager.java:createLocatedBlock(938)) - Inconsistent number of corrupt 
> replicas for blk_1073750384_504374 blockMap has 0 but corrupt replicas map 
> has 1
> 2017-10-18 15:24:55,859 WARN  ipc.Server (Server.java:logException(2433)) - 
> IPC Server handler 116 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 
> 10.43.160.18:56313 Call#2 Retry#-1
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:972)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:911)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlockList(BlockManager.java:884)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1011)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2010)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1873)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
>   at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1865)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)
> Exception 2:
> 2017-10-12 16:59:36,591 INFO  blockmanagement.BlockManager 
> (BlockManager.java:computeReplicationWorkForBlocks(1649)) - Blocks chosen but 
> could not be replicated = 4; of which 0 have no target, 4 have no source, 0 
> are UC, 0 are abandoned, 0 already have enough replicas.
> 2017-10-12 16:59:36,809 WARN  blockmanagement.BlockManager 
> (BlockManager.java:createLocatedBlock(938)) - Inconsistent number of corrupt 
> replicas for blk_1073789106_2278702 blockMap has 0 but corrupt replicas map 
> has 2
> 2017-10-12 16:59:36,810 WARN  ipc.Server (Server.java:logException(2433)) - 
> IPC Server handler 123 on 8020, call 
> org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 
> 10.46.230.12:47974 Call#2 Retry#-1
> java.lang.NegativeArraySizeException
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:946)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:911)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlockList(BlockManager.java:884)
>   at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:997)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2010)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1960)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1873)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
>   at 
> 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-10-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/

[Oct 17, 2017 5:15:53 PM] (haibochen) YARN-7341. 
TestRouterWebServiceUtil#testMergeMetrics is flakey. (Robert
[Oct 17, 2017 7:38:06 PM] (subu) YARN-7311. Fix TestRMWebServicesReservation 
parametrization for fair
[Oct 17, 2017 10:52:09 PM] (lei) HDFS-12612. DFSStripedOutputStream.close will 
throw if called a second
[Oct 17, 2017 11:04:19 PM] (haibochen) YARN-6546. SLS is slow while loading 10k 
queues. (Yufei Gu via Haibo
[Oct 18, 2017 2:06:45 AM] (xiao) HADOOP-14944. Add JvmMetrics to KMS.
[Oct 18, 2017 2:18:39 AM] (aajisaka) MAPREDUCE-6972. Enable try-with-resources 
for RecordReader. Contributed




-1 overall


The following subsystems voted -1:
unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.TestFsDatasetCache 
   hadoop.hdfs.tools.TestDFSZKFailoverController 
   hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler 
   hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation 

Timed out junit tests :

   
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/diff-compile-javac-root.txt
  [284K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/diff-javadoc-javadoc-root.txt
  [1.9M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [300K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/561/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [64K]

Powered by Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org


[jira] [Created] (HDFS-12679) Namenode stale memory Datanode gets Wipes out

2017-10-18 Thread Vivek Ghatala (JIRA)
Vivek Ghatala created HDFS-12679:


 Summary: Namenode stale memory Datanode gets Wipes out
 Key: HDFS-12679
 URL: https://issues.apache.org/jira/browse/HDFS-12679
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.8.1
Reporter: Vivek Ghatala


Stale mapping information about datanodes causes the Namenode to command a datanode's storage to be wiped out.

LOG LINES:

2017-10-11 23:01:45,969 DEBUG 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: remove datanode 
XXX.XX.XX.1:Y

2017-10-11 23:01:45,969 DEBUG 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: 
DatanodeManager.wipeDatanode(XXX.XX.XX.1:Y): storage ##STORAGE_ID1## is 
removed from datanodeMap.

Scenario: Our environment uses shared storage. Whenever a datanode restarts, 
some other node comes up with that storage under a different IP address. It is 
possible that multiple datanodes restart at the same time.

Case: node 1, which was serving storage X, is now serving storage Y. Node 2 
comes up and is now serving storage X. The Namenode then commands storage Y to 
be wiped out.
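The case above can be sketched as follows (hypothetical code, illustrative only; this is not the actual DatanodeManager logic):

```java
import java.util.HashMap;
import java.util.Map;

public class SharedStorageSketch {
    // storageId -> datanode address, as the Namenode remembers it.
    private final Map<String, String> storageToAddr = new HashMap<>();

    // A datanode (re)registers, claiming a storage at an address.
    // Returns the storage previously mapped to that address, if any:
    // a stale entry that a naive manager might wipe, even though another
    // node may legitimately be serving that storage elsewhere.
    String register(String storageId, String addr) {
        String stale = null;
        for (Map.Entry<String, String> e : storageToAddr.entrySet()) {
            if (e.getValue().equals(addr) && !e.getKey().equals(storageId)) {
                stale = e.getKey();
            }
        }
        storageToAddr.put(storageId, addr);
        return stale;
    }

    public static void main(String[] args) {
        SharedStorageSketch nn = new SharedStorageSketch();
        nn.register("X", "node1");                 // node1 serves storage X
        String stale = nn.register("Y", "node1");  // node1 restarts with Y
        System.out.println(stale); // X looks stale at node1, yet node2 may
                                   // now be serving X under another address
    }
}
```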






[jira] [Created] (HDFS-12678) Ozone: Corona: Add statistical information to json output

2017-10-18 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDFS-12678:
--

 Summary: Ozone: Corona: Add statistical information to json output
 Key: HDFS-12678
 URL: https://issues.apache.org/jira/browse/HDFS-12678
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Lokesh Jain









[jira] [Created] (HDFS-12677) Extend TestReconstructStripedFile with a random EC policy

2017-10-18 Thread Takanobu Asanuma (JIRA)
Takanobu Asanuma created HDFS-12677:
---

 Summary: Extend TestReconstructStripedFile with a random EC policy
 Key: HDFS-12677
 URL: https://issues.apache.org/jira/browse/HDFS-12677
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding, test
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma









[jira] [Created] (HDFS-12676) when blocks has corrupted replicas,throws Exception

2017-10-18 Thread lynn (JIRA)
lynn created HDFS-12676:
---

 Summary: when blocks has corrupted replicas,throws Exception
 Key: HDFS-12676
 URL: https://issues.apache.org/jira/browse/HDFS-12676
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: lynn


When a block has corrupt replicas, exceptions are thrown as follows:

Exception 1:
2017-10-18 15:24:55,858 WARN  blockmanagement.BlockManager 
(BlockManager.java:createLocatedBlock(938)) - Inconsistent number of corrupt 
replicas for blk_1073750384_504374 blockMap has 0 but corrupt replicas map has 1
2017-10-18 15:24:55,859 WARN  ipc.Server (Server.java:logException(2433)) - IPC 
Server handler 116 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 
10.43.160.18:56313 Call#2 Retry#-1
java.lang.ArrayIndexOutOfBoundsException: 1
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:972)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:911)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlockList(BlockManager.java:884)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:1011)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2010)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1960)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1873)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2347)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1865)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2345)


Exception 2:

2017-10-12 16:59:36,591 INFO  blockmanagement.BlockManager 
(BlockManager.java:computeReplicationWorkForBlocks(1649)) - Blocks chosen but 
could not be replicated = 4; of which 0 have no target, 4 have no source, 0 are 
UC, 0 are abandoned, 0 already have enough replicas.
2017-10-12 16:59:36,809 WARN  blockmanagement.BlockManager 
(BlockManager.java:createLocatedBlock(938)) - Inconsistent number of corrupt 
replicas for blk_1073789106_2278702 blockMap has 0 but corrupt replicas map has 
2
2017-10-12 16:59:36,810 WARN  ipc.Server (Server.java:logException(2433)) - IPC 
Server handler 123 on 8020, call 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getBlockLocations from 
10.46.230.12:47974 Call#2 Retry#-1
java.lang.NegativeArraySizeException
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:946)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlock(BlockManager.java:911)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlockList(BlockManager.java:884)
at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.createLocatedBlocks(BlockManager.java:997)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:2010)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1960)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1873)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:693)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:373)
at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2351)
at 

[jira] [Created] (HDFS-12675) Ozone: TestLeaseManager#testLeaseCallbackWithMultipleLeases fails

2017-10-18 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12675:


 Summary: Ozone: 
TestLeaseManager#testLeaseCallbackWithMultipleLeases fails 
 Key: HDFS-12675
 URL: https://issues.apache.org/jira/browse/HDFS-12675
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Yiqun Lin
Assignee: Yiqun Lin


Caught one UT failure {{TestLeaseManager#testLeaseCallbackWithMultipleLeases}}:
{noformat}
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.ozone.lease.TestLeaseManager.testLeaseCallbackWithMultipleLeases(TestLeaseManager.java:293)
{noformat}
The reason for this error is that {{leaseFive}} didn't expire in the test.



