On 08/14/2014 11:34 PM, Dave Chinner wrote:
> <create sparse vm image file of 500TB on ssd with XFS on it>
>
> xfs_io -f -c "truncate 500t" -c "extsize 1m" /path/to/vm/image/file
>
> <start 16p/16GB RAM vm with image file configured as:
>  -drive file=/path/to/vm/image/file,if=virtio,cache=none>
>
> In vm:
>
> download and build fsmark from here:
>
>   git://oss.sgi.com/dgc/fs_mark
>
> download and install xfsprogs v3.2.1 from here:
>
>   git://oss.sgi.com/xfs/cmds/xfsprogs.git tags/v3.2.1
>
> Set up the target filesystem:
>
> # mkfs.xfs -f -m "crc=1,finobt=1" /dev/vda
> # mount -o logbsize=262144,nobarrier /dev/vda /mnt/scratch
>
> Run:
>
> # fs_mark -D 10000 -S0 -n 50000 -s 0 -L 32 \
>         -d /mnt/scratch/0 -d /mnt/scratch/1 \
>         -d /mnt/scratch/2 -d /mnt/scratch/3 \
>         -d /mnt/scratch/4 -d /mnt/scratch/5 \
>         -d /mnt/scratch/6 -d /mnt/scratch/7 \
>         -d /mnt/scratch/8 -d /mnt/scratch/9 \
>         -d /mnt/scratch/10 -d /mnt/scratch/11 \
>         -d /mnt/scratch/12 -d /mnt/scratch/13 \
>         -d /mnt/scratch/14 -d /mnt/scratch/15
>
> If you've got everything set up right, that should run at around
> 200-250,000 file creates/s. When finished, unmount and run:
>
> # xfs_repair -o bhash=500000 /dev/vda
>
> And that should spend quite a long while pounding on the mmap_sem
> until the userspace buffer cache stops growing.
>
> I just ran the above on 3.16, saw this from perf:
>
>   37.30%  [kernel]  [k] _raw_spin_unlock_irqrestore
>    - _raw_spin_unlock_irqrestore
>       - 62.00% rwsem_wake
>          - call_rwsem_wake
>             + 83.52% sys_mprotect
>             + 16.23% __do_page_fault
>       + 35.15% try_to_wake_up
>       + 0.96% update_blocked_averages
>       + 0.61% pagevec_lru_move_fn
>   - 23.35%  [kernel]  [k] _raw_spin_unlock_irq
>      - _raw_spin_unlock_irq
>         + 51.37% finish_task_switch
>         + 39.37% rwsem_down_write_failed
>         + 8.49% rwsem_down_read_failed
>           0.62% run_timer_softirq
>   + 5.22%  [kernel]  [k] native_read_tsc
>   + 3.89%  [kernel]  [k] rwsem_down_write_failed
>   .....
>
> Cheers,
>
> Dave.
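For anyone trying to reproduce this, the two angle-bracketed host-side steps above might amount to something like the following. The qemu binary name, -enable-kvm/-smp/-m values, and the trailing "..." are my own guesses for illustration; only the xfs_io command and the -drive options come from the recipe itself.

Create the sparse 500TB image file on an XFS-formatted SSD (being sparse, it only consumes space as blocks are actually written):

 # xfs_io -f -c "truncate 500t" -c "extsize 1m" /path/to/vm/image/file

Start the 16p/16GB guest with the image attached as a virtio disk:

 # qemu-system-x86_64 -enable-kvm -smp 16 -m 16384 \
       -drive file=/path/to/vm/image/file,if=virtio,cache=none \
       ...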
Thanks for the testing recipe. I'm afraid I can't find a 500TB SSD for testing purposes. Do you think the test will still be valid for exercising rwsem if I use a smaller SSD, or maybe a mechanical hard disk?
-Longman