[ https://issues.apache.org/jira/browse/HDFS-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13852787#comment-13852787 ]
Edward Bortnikov commented on HDFS-5453:
----------------------------------------

Suresh - we have done this evaluation. We switched off all traces and the edit
log, and ran a workload of 3 reads : 1 write on an 8-core CPU. With the current
(global lock) synchronization in place, throughput scales to about 2.5x of
single-core throughput. Fine-grained locking does not change the picture much;
in CPU-bound workloads most of the time is still spent on concurrency control.
Without any synchronization, the code scales above 7.5x (as expected). This
underscores the potential of re-architecting Hadoop's NameNode into a
lock-free, asynchronous server with a lean custom scheduler that takes care of
conflicts. That discussion can be started in a separate JIRA.

> Support fine grain locking in FSNamesystem
> ------------------------------------------
>
>                 Key: HDFS-5453
>                 URL: https://issues.apache.org/jira/browse/HDFS-5453
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: namenode
>    Affects Versions: 2.0.0-alpha, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>
> The namesystem currently uses a coarse-grained lock to control access. This
> prevents concurrent writers in different branches of the tree, and prevents
> readers from accessing branches that writers aren't using.
> Features that introduce latency to namesystem operations, such as cold
> storage of inodes, will need fine-grained locking to avoid degrading the
> entire namesystem's throughput.

--
This message was sent by Atlassian JIRA
(v6.1.4#6159)
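For illustration only, here is a minimal, self-contained Java sketch of the
contrast discussed in the comment: a 3 reads : 1 write mix driven against
either a single global ReentrantReadWriteLock (analogous to today's
FSNamesystem lock) or hypothetical per-subtree locks. This is not HDFS code;
the class name, constants, and workload shape below are made-up stand-ins.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.LongAdder;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Toy sketch only; names and sizes are illustrative, not taken from HDFS. */
public class LockScalingSketch {
  static final int SUBTREES = 64;        // hypothetical top-level branches of the namespace
  static final int THREADS = 8;          // mirrors the 8-core setup described in the comment
  static final int OPS_PER_THREAD = 200_000;

  // Coarse grained: one lock for the whole namespace.
  static final ReentrantReadWriteLock globalLock = new ReentrantReadWriteLock();
  // Fine grained (hypothetical): one lock per subtree.
  static final ReentrantReadWriteLock[] subtreeLocks = new ReentrantReadWriteLock[SUBTREES];
  static { for (int i = 0; i < SUBTREES; i++) subtreeLocks[i] = new ReentrantReadWriteLock(); }

  // Stand-in for the protected namespace state.
  static final long[] state = new long[SUBTREES];

  public static void main(String[] args) throws Exception {
    System.out.printf("global lock:    %,d ops/s%n", run(true));
    System.out.printf("subtree locks:  %,d ops/s%n", run(false));
  }

  /** Runs a 3 reads : 1 write mix on THREADS threads and returns throughput. */
  static long run(boolean coarse) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(THREADS);
    CountDownLatch done = new CountDownLatch(THREADS);
    LongAdder ops = new LongAdder();
    long start = System.nanoTime();
    for (int t = 0; t < THREADS; t++) {
      pool.execute(() -> {
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        for (int i = 0; i < OPS_PER_THREAD; i++) {
          int subtree = rnd.nextInt(SUBTREES);
          ReentrantReadWriteLock lock = coarse ? globalLock : subtreeLocks[subtree];
          if (i % 4 == 3) {              // the 1-in-4 write
            lock.writeLock().lock();
            try { state[subtree]++; } finally { lock.writeLock().unlock(); }
          } else {                       // the 3-in-4 reads
            lock.readLock().lock();
            try { if (state[subtree] < 0) throw new AssertionError(); }
            finally { lock.readLock().unlock(); }
          }
          ops.increment();
        }
        done.countDown();
      });
    }
    done.await();
    pool.shutdown();
    double seconds = (System.nanoTime() - start) / 1e9;
    return (long) (ops.sum() / seconds);
  }
}

With critical sections this small, the numbers mostly reflect lock acquisition
overhead rather than real namespace work, which is in line with the comment's
observation that in CPU-bound workloads concurrency control itself dominates.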