Re: [PATCH] docs: Add CloudStack documentation

2012-09-05 Thread Calvin Morrow
I saw the limitations section references only being able to configure a single monitor. Some followup questions for someone interested in using RBD with Cloudstack 4: Is it that you can only specify a single monitor to connect to within Cloudstack 4 (but can still have a 3 monitor configuration)

Re: TIER: combine SSDs and HDDs into a single block device

2012-08-02 Thread Calvin Morrow
I played with TIER about a week ago. Definitely a decent implementation of HSM and seems to work well from my testing. Assuming a three tier setup (we'll say SSD, 15K SAS, 7K SATA), the code drops sequential I/O on tier 2 (the 15K) first, moving it down as tier 2 fills up and/or blocks are unused.

Re: Ceph doesn't update the block device size while a rbd image is mounted

2012-07-19 Thread Calvin Morrow
I haven't tried resizing an rbd yet, but I was changing partitions on a non-ceph two-node cluster with shared storage yesterday while certain partitions were in use (partitions 1,2,5 were mounted, deleting partition ids 6+, adding new ones) and fdisk wasn't re-reading disk changes. Partprobe follo

Re: Poor read performance in KVM

2012-07-19 Thread Calvin Morrow
On Thu, Jul 19, 2012 at 9:52 AM, Tommi Virtanen wrote: > > On Thu, Jul 19, 2012 at 5:19 AM, Vladimir Bashkirtsev > wrote: > > Look like that osd.0 performs with low latency but osd.1 latency is way > > too > > high and on average it appears as 200ms. osd is backed by btrfs over > > LVM2. > > May

Re: Ceph doesn't update the block device size while a rbd image is mounted

2012-07-19 Thread Calvin Morrow
I've had a little more luck using cfdisk than vanilla fdisk when it comes to detecting changes. You might try running partprobe and then cfdisk and seeing if you get anything different. Calvin On Thu, Jul 19, 2012 at 9:50 AM, Sébastien Han wrote: > Hum ok, I see. Thanks! > > But if you have any

Re: How will Ceph cope with a failed Journal device?

2012-05-18 Thread Calvin Morrow
I posted the same question to the list last week and never got a reply. In addition, I'd also like to know if there's a difference in failure behavior between XFS backed Ceph (writeahead journaling) and BTRFS backed Ceph (parallel journaling). Calvin On Fri, May 18, 2012 at 12:30 PM, Guido Winke

OSD Journal Failure Behavior

2012-05-11 Thread Calvin Morrow
The Ceph Wiki (http://ceph.com/wiki/OSD_journal) does a pretty good job explaining the purpose of the journal and various modes available. What isn't clear is what happens during the failure of a journal. With the use of btrfs enabling parallel journaling, it sounds like failure of a journal devi

Re: slow performance even when using SSDs

2012-05-10 Thread Calvin Morrow
I was getting roughly the same results of your tmpfs test using spinning disks for OSDs with a 160GB Intel 320 SSD being used for the journal. Theoretically the 520 SSD should give better performance than my 320s. Keep in mind that even with balance-alb, multiple GigE connections will only be use

Re: NFS over Ceph

2012-04-23 Thread Calvin Morrow
On Mon, Apr 23, 2012 at 9:01 PM, Sage Weil wrote: > On Mon, 23 Apr 2012, Calvin Morrow wrote: >> I've been testing a couple different use scenarios with Ceph 0.45 >> (two-node cluster, single mon, active/standby mds).  I have a pair of >> KVM virtual machines acting as

NFS over Ceph

2012-04-23 Thread Calvin Morrow
I've been testing a couple different use scenarios with Ceph 0.45 (two-node cluster, single mon, active/standby mds). I have a pair of KVM virtual machines acting as ceph clients to re-export iSCSI over RBD block devices, and also NFS over a Ceph mount (mount -t ceph). The iSCSI re-export is goin

Re: Log files with 0.45

2012-04-20 Thread Calvin Morrow
I'm seeing the same. < 12 hours with 6 OSDs resulted in ~18 GB of logs. I had to change my log rotate config to compress based on size instead of once a day or I ended up with a full root partition. I would love to know if there's a better way to handle it. Calvin On Fri, Apr 20, 2012 at 5:05