On Fri, Apr 27, 2012 at 01:02:08PM +0200, Christian Brunner wrote:
> On 24 April 2012 at 18:26, Sage Weil <s...@newdream.net> wrote:
> > On Tue, 24 Apr 2012, Josef Bacik wrote:
> >> On Fri, Apr 20, 2012 at 05:09:34PM +0200, Christian Brunner wrote:
> >> > After running ceph on XFS for some time, I decided to try btrfs again.
> >> > Performance with the current "for-linux-min" branch and big metadata
> >> > is much better. The only problem (?) I'm still seeing is a warning
> >> > that seems to occur from time to time:
> >
> > Actually, before you do that... we have a new tool,
> > test_filestore_workloadgen, that generates a ceph-osd-like workload on the
> > local file system.  It's a subset of what a full OSD might do, but if
> > we're lucky it will be sufficient to reproduce this issue.  Something like
> >
> >  test_filestore_workloadgen --osd-data /foo --osd-journal /bar
> >
> > will hopefully do the trick.
> >
> > Christian, maybe you can see if that is able to trigger this warning?
> > You'll need to pull it from the current master branch; it wasn't in the
> > last release.
> 
> Trying to reproduce with test_filestore_workloadgen didn't work for
> me. So here are some instructions on how to reproduce with a minimal
> ceph setup.
> 
> You will need a single system with two disks and a bit of memory.
> 
> - Compile and install ceph (detailed instructions:
> http://ceph.newdream.net/docs/master/ops/install/mkcephfs/)
> 
> - For the test setup I've used two tmpfs files as journal devices. To
> create these, do the following:
> 
> # mkdir -p /ceph/temp
> # mount -t tmpfs tmpfs /ceph/temp
> # dd if=/dev/zero of=/ceph/temp/journal0 count=500 bs=1024k
> # dd if=/dev/zero of=/ceph/temp/journal1 count=500 bs=1024k
> 
> - Now you should create and mount btrfs. Here is what I did:
> 
> # mkfs.btrfs -l 64k -n 64k /dev/sda
> # mkfs.btrfs -l 64k -n 64k /dev/sdb
> # mkdir /ceph/osd.000
> # mkdir /ceph/osd.001
> # mount -o noatime,space_cache,inode_cache,autodefrag /dev/sda /ceph/osd.000
> # mount -o noatime,space_cache,inode_cache,autodefrag /dev/sdb /ceph/osd.001
> 
> - Create /etc/ceph/ceph.conf similar to the attached ceph.conf. You
> will probably have to change the btrfs devices and the hostname
> (os39).
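[Editor's note: the ceph.conf attachment is not reproduced in this thread. The fragment below is a hypothetical sketch of what a minimal 2012-era mkcephfs-style config for this setup might look like, inferred from the commands above; the section names, option names, and the monitor address are my assumptions, not the actual attachment.]

```ini
; hypothetical sketch -- adjust hostname, devices, and mon addr to your system
[global]
        auth supported = none

[mon]
        mon data = /ceph/mon

[mon.0]
        host = os39
        mon addr = 192.168.0.1:6789   ; placeholder address

[osd.0]
        host = os39
        osd data = /ceph/osd.000
        osd journal = /ceph/temp/journal0
        btrfs devs = /dev/sda

[osd.1]
        host = os39
        osd data = /ceph/osd.001
        osd journal = /ceph/temp/journal1
        btrfs devs = /dev/sdb
```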
> 
> - Create the ceph filesystems:
> 
> # mkdir /ceph/mon
> # mkcephfs -a -c /etc/ceph/ceph.conf
> 
> - Start ceph (e.g. "service ceph start")
> 
> - Now you should be able to use ceph - "ceph -s" will tell you about
> the state of the ceph cluster.
> 
> - "rbd create -size 100 testimg" will create an rbd image on the ceph cluster.
> 
> - Compile my test with "gcc -o rbdtest rbdtest.c -lrbd" and run it
> with "./rbdtest testimg".
> 
> I can see the first btrfs_orphan_commit_root warning after an hour or
> so... I hope I've described all the necessary steps. If you run into
> a problem, just send me a note.
> 

Well, it's only taken me 2 weeks, but I've finally got it all up and
running; hopefully I'll be able to reproduce it.  Thanks,

Josef
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
