[ https://issues.apache.org/jira/browse/HDFS-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14934412#comment-14934412 ]
Jing Zhao commented on HDFS-9053:
---------------------------------

Some further comments:
# In {{INodeDirectory#replaceChild}}, can we directly call {{addOrReplace}} instead of calling {{get}} first?
{code}
final INode existing = children.get(newChild.getLocalNameBytes());
......
oldChild = existing;
......
children.addOrReplace(newChild);
{code}
# Do you think we can avoid the cast in the following code? Maybe we can add the EK type at the ReadOnlyCollection/ReadOnlyList level?
{code}
public <EK> Iterator<INode> iterator(EK k) {
  final byte[] name = (byte[]) k;
{code}
# {{DirectoryWithSnapshotFeature#getChildrenList#iterator(EK)}} forgets to increment {{pos}}, so the iterator never advances? Maybe also add a new test for this (e.g., set a small ls limit and list a snapshot of a directory)?
{code}
public INode next() {
  if (pos >= childrenSize) {
    throw new NoSuchElementException();
  }
  return children.get(pos);
}
{code}
# In {{getListing}}, instead of continuing the iteration, can we just call {{size()}} to calculate the number of remaining items?
{code}
while (i.hasNext()) {
  INode cur = i.next();
  if (!(locationBudget > 0 && listingCnt < fsd.getLsLimit())) {
    remaining++;
    continue;
  }
{code}

> Support large directories efficiently using B-Tree
> --------------------------------------------------
>
>                 Key: HDFS-9053
>                 URL: https://issues.apache.org/jira/browse/HDFS-9053
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>            Priority: Critical
>         Attachments: HDFS-9053 (BTree with simple benchmark).patch, HDFS-9053 (BTree).patch, HDFS-9053.001.patch, HDFS-9053.002.patch
>
>
> This is a long-standing issue that we have tried to improve in the past.
> Currently we use an ArrayList for the children under a directory, and the
> children are kept sorted in the list. Lookup via binary search is O(log n),
> but insertion/deletion causes re-allocations and copies of big arrays, so
> those operations are costly.
> For example, if the children grow to 1M in size, the ArrayList will resize
> to > 1M capacity, so it needs > 1M * 4 bytes = 4 MB of contiguous heap
> memory; this easily causes full GC in an HDFS cluster where the NameNode
> heap is already heavily used. To recap, the 3 main issues are:
> # Insertion/deletion operations in large directories are expensive because
> of re-allocations and copies of big arrays.
> # Dynamically allocating several MB of contiguous, long-lived heap memory
> can easily cause full GC problems.
> # Even if most children are removed later, the directory INode still
> occupies the same amount of heap memory, since the ArrayList never shrinks.
> This JIRA is similar to HDFS-7174 created by [~kihwal], but uses a B-Tree
> to solve the problem, as suggested by [~shv].
> So the target of this JIRA is to implement a low-memory-footprint B-Tree
> and use it to replace the ArrayList.
> If the number of elements is small (less than the maximum degree of a
> B-Tree node), the B-Tree has only one root node, which contains an array
> for the elements. If the size grows large enough, nodes split
> automatically, and if elements are removed, B-Tree nodes can merge
> automatically (see more: https://en.wikipedia.org/wiki/B-tree). This
> solves the above 3 issues.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
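The "cheap search, costly insert" pattern described above can be sketched in isolation. This is a minimal, hypothetical illustration of keeping a sorted child list in an ArrayList, not the actual HDFS code; the class and method names are invented for the example:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SortedChildList {
    // Mimics a directory keeping children sorted: binary search finds the
    // slot in O(log n), but ArrayList.add(index, e) shifts the whole tail
    // of the backing array, which is O(n) and forces re-allocations as the
    // array grows.
    private final List<String> children = new ArrayList<>();

    void addChild(String name) {
        int i = Collections.binarySearch(children, name);
        if (i < 0) {
            // binarySearch returns -(insertionPoint) - 1 when absent
            children.add(-i - 1, name);
        }
    }

    boolean contains(String name) {
        return Collections.binarySearch(children, name) >= 0;
    }

    int size() {
        return children.size();
    }

    public static void main(String[] args) {
        SortedChildList dir = new SortedChildList();
        dir.addChild("banana");
        dir.addChild("apple");
        dir.addChild("cherry");
        System.out.println(dir.contains("apple")); // prints "true"
        System.out.println(dir.size());            // prints "3"
    }
}
```

A B-Tree avoids the O(n) shift and the single large backing array by spreading elements across small fixed-degree nodes, which is exactly the trade-off the JIRA targets.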
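For point 3 of the comment above (the iterator that never increments {{pos}}), a self-contained sketch of what a corrected iterator might look like. The `children`/`childrenSize`/`pos` names follow the quoted snippet, but this is an illustration over a plain list, not the actual HDFS patch:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

class ChildListIterator implements Iterator<String> {
    private final List<String> children;
    private final int childrenSize;
    private int pos; // must advance in next(), or next() returns the same element forever

    ChildListIterator(List<String> children, int startPos) {
        this.children = children;
        this.childrenSize = children.size();
        this.pos = startPos;
    }

    @Override
    public boolean hasNext() {
        return pos < childrenSize;
    }

    @Override
    public String next() {
        if (pos >= childrenSize) {
            throw new NoSuchElementException();
        }
        // The quoted snippet did children.get(pos) without incrementing pos;
        // post-incrementing here is the fix being suggested.
        return children.get(pos++);
    }

    public static void main(String[] args) {
        Iterator<String> it = new ChildListIterator(Arrays.asList("a", "b", "c"), 1);
        StringBuilder sb = new StringBuilder();
        while (it.hasNext()) {
            sb.append(it.next());
        }
        System.out.println(sb); // prints "bc": starts at pos 1 and terminates
    }
}
```

The suggested test (small ls limit, then listing a snapshot directory) would catch the original bug because the un-incremented iterator would never make progress past the first returned child.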