[ https://issues.apache.org/jira/browse/HDFS-6102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13935034#comment-13935034 ]

Hudson commented on HDFS-6102:
------------------------------

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1701 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1701/])
HDFS-6102. Lower the default maximum items per directory to fix PB fsimage loading. Contributed by Andrew Wang. (wang: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1577426)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/fsimage.proto
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFsLimits.java
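
Given the files touched (DFSConfigKeys.java, FSDirectory.java, hdfs-default.xml), the change lowers the default for the per-directory item limit that FSDirectory enforces. Below is a minimal sketch of how a deployment or test could read that limit; the constant names and the idea that they back "dfs.namenode.fs-limits.max-directory-items" are assumptions based on DFSConfigKeys conventions, not quoted from the patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class MaxDirItemsCheck {
  public static void main(String[] args) {
    // Loads hdfs-default.xml / hdfs-site.xml from the classpath.
    Configuration conf = new HdfsConfiguration();

    // "dfs.namenode.fs-limits.max-directory-items" caps how many children a
    // single directory may have; HDFS-6102 lowers its default so that one
    // directory's entry in the PB fsimage cannot grow into an oversized
    // protobuf message. Constant names here are assumed from DFSConfigKeys.
    int maxDirItems = conf.getInt(
        DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY,
        DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_DEFAULT);

    System.out.println("max items per directory = " + maxDirItems);
  }
}
{code}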


> Lower the default maximum items per directory to fix PB fsimage loading
> -----------------------------------------------------------------------
>
>                 Key: HDFS-6102
>                 URL: https://issues.apache.org/jira/browse/HDFS-6102
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.4.0
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>            Priority: Blocker
>             Fix For: 2.4.0
>
>         Attachments: hdfs-6102-1.patch, hdfs-6102-2.patch
>
>
> Found by [~schu] during testing. We were creating a large number of directories in a single directory to blow up the fsimage size, and it turns out we hit this error when trying to load the resulting very large fsimage:
> {noformat}
> 2014-03-13 13:57:03,901 INFO org.apache.hadoop.hdfs.server.namenode.FSImageFormatPBINode: Loading 24523605 INodes.
> 2014-03-13 13:57:59,038 ERROR org.apache.hadoop.hdfs.server.namenode.FSImage: Failed to load image from FSImageFile(file=/dfs/nn/current/fsimage_0000000000024532742, cpktTxId=0000000000024532742)
> com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
>         at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
>         at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
>         at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
>         at com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
>         at com.google.protobuf.CodedInputStream.readUInt64(CodedInputStream.java:188)
>         at org.apache.hadoop.hdfs.server.namenode.FsImageProto$INodeDirectorySection$DirEntry.<init>(FsImageProto.java:9839)
>         at org.apache.hadoop.hdfs.server.namenode.FsImageProto$INodeDirectorySection$DirEntry.<init>(FsImageProto.java:9770)
>         at org.apache.hadoop.hdfs.server.namenode.FsImageProto$INodeDirectorySection$DirEntry$1.parsePartialFrom(FsImageProto.java:9901)
>         at org.apache.hadoop.hdfs.server.namenode.FsImageProto$INodeDirectorySection$DirEntry$1.parsePartialFrom(FsImageProto.java:9896)
>         at 52)
> ...
> {noformat}
> Some further research reveals that protobuf enforces a default 64 MB size limit per message (the CodedInputStream size limit), which seems to be what we're hitting here.
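
The 64 MB figure is protobuf's default CodedInputStream size limit rather than a hard protocol maximum, and the exception message itself points at CodedInputStream.setSizeLimit() as the escape hatch. A minimal sketch of that workaround follows, mainly to show why raising the limit only moves the problem (the oversized message still has to be buffered), which is why the fix instead lowers the per-directory item limit; the demo class and the 256 MB figure are illustrative, only the protobuf API calls are real:

{code:java}
import com.google.protobuf.CodedInputStream;

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

public class SizeLimitDemo {
  public static void main(String[] args) throws IOException {
    try (InputStream in = new FileInputStream(args[0])) {
      CodedInputStream cis = CodedInputStream.newInstance(in);

      // The default limit is 64 MB; a message larger than that fails with
      // "Protocol message was too large. May be malicious." as in the trace
      // above. Raising it (hypothetical 256 MB here) lets the parse proceed,
      // but the whole message is still read into memory, so huge single
      // directories remain expensive to load.
      cis.setSizeLimit(256 * 1024 * 1024);

      // A real loader would now hand 'cis' to a generated parser, e.g.
      // FsImageProto.INodeDirectorySection.DirEntry.parseFrom(cis).
      int firstTag = cis.readTag();
      System.out.println("first tag = " + firstTag);
    }
  }
}
{code}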



--
This message was sent by Atlassian JIRA
(v6.2#6252)
