[ https://issues.apache.org/jira/browse/HDFS-2984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravi Prakash updated HDFS-2984:
-------------------------------

    Attachment: slive.tar.gz

Ok! I've been slacking on this bug for way too long. But here are my 
experiments and the data.

WHAT ARE THE FILES IN THIS TARBALL?
====================================
The file "patch" is the diff of two minor optimizations I made in hadoop-23.

I then ran Slive on clean HDFS installations of 0.23 and 0.204. These are the
commands I ran. First, create 200000 files (hopefully that's what it does...
though it's not important if it doesn't):

bin/hadoop org.apache.hadoop.fs.slive.SliveTest -duration 50 -dirSize 1225
-files 200000 -maps 4 -readSize 104850,104850 -writeSize 104850,104850
-appendSize 104850,104850 -replication 1,1 -reduces 1 -blockSize 1024,1024
-mkdir 0,uniform -rename 0,uniform -append 0,uniform -delete 0,uniform -ls
0,uniform -read 0,uniform -create 100,uniform

Then delete 50000 files (again, hopefully that's what it does):

bin/hadoop org.apache.hadoop.fs.slive.SliveTest -duration 50 -dirSize 1225
-files 50000 -maps 4 -readSize 104850,104850 -writeSize 104850,104850
-appendSize 104850,104850 -replication 1,1 -reduces 1 -blockSize 1024,1024
-mkdir 0,uniform -rename 0,uniform -append 0,uniform -delete 100,uniform -ls
0,uniform -read 0,uniform -create 0,uniform

I do this 3 times, hence the 6 files:
<branch>.C200    <- create 200k files
<branch>.C200D50 <- delete 50k files

In the last run, I delete 50000 files and use jvisualvm to take snapshots
while profiling. The two snapshot*.nps files can be loaded into jvisualvm.



OBSERVATIONS
=============

Create seems to be twice as fast in 0.23, so I'm not too worried about that.

Delete, on the other hand, is a lot slower. I've tried optimizing, but I don't
know if there's much else that can be done. A big reason is probably this:
http://blog.rapleaf.com/dev/2011/06/16/java-performance-synchronized-vs-lock/
In 0.20 we were using the synchronized keyword, which, although 2-7.5x faster
(as reported in that blog), is unfair. In 0.23 we are using a fair
ReentrantReadWriteLock. That is obviously going to be slower, and since
writeLock() is what's taking the most time (see the jvisualvm profile), I am
led to believe that we must incur the performance hit in order to be fair.
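
To make the comparison concrete, here is a minimal Java sketch of the two
locking styles. This is illustrative only; the class and method names are
made up, not the actual FSNamesystem code:

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch: contrasts the 0.20-style synchronized guard
// with a 0.23-style fair ReentrantReadWriteLock around a mutation.
public class LockStyles {

  private final Object monitor = new Object();

  // 0.20 style: intrinsic lock. Faster under contention, but unfair;
  // an arriving thread can barge in ahead of longer-waiting threads.
  void deleteWithSynchronized() {
    synchronized (monitor) {
      // mutate namespace state here
    }
  }

  // 0.23 style: fair=true grants the lock to the longest-waiting
  // thread, preventing starvation at a cost in throughput.
  private final ReentrantReadWriteLock fsLock =
      new ReentrantReadWriteLock(true); // fair

  void deleteWithFairLock() {
    fsLock.writeLock().lock();
    try {
      // mutate namespace state here
    } finally {
      fsLock.writeLock().unlock();
    }
  }
}

The fair=true constructor argument is the key difference: the lock goes to
the longest-waiting thread rather than whichever thread barges in first,
which prevents starvation but adds queue-management overhead to every
acquire.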

Comments are welcome; please let me know your thoughts.


@Todd: These numbers are from the latest branch-23,
commit 74fd5cb929adc926a13eb062df7869894c0cc013.
> S-live: Rate operation count for delete is worse than 0.20.204 by 28.8%
> -----------------------------------------------------------------------
>
>                 Key: HDFS-2984
>                 URL: https://issues.apache.org/jira/browse/HDFS-2984
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: benchmarks
>    Affects Versions: 0.23.1
>            Reporter: Vinay Kumar Thota
>            Assignee: Ravi Prakash
>            Priority: Critical
>         Attachments: slive.tar.gz
>
>
> Rate operation count for delete is worse than 0.20.204.xx by 28.8%

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
