Hi Ilya,

I would like to document or blog about a great many things.
Unfortunately my time for that is limited right now.

As for what tests were run: I was basically just doing a standard
'dd' test, which isn't a great indicator of real-world usage, but
works well enough for comparison testing.

We were running write and read tests with 'bs=1M count=1024' and
'oflag/iflag=direct' in Linux.  The tests are all relative, though, so
any options should work.
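
For reference, the write and read runs looked roughly like this (GNU
dd; '/dev/vdb' is just a placeholder for the test device, and note
that writing to the raw device destroys any data on it):

    # Write test: 1GiB in 1MiB blocks, bypassing the guest page cache
    dd if=/dev/zero of=/dev/vdb bs=1M count=1024 oflag=direct

    # Read test: same geometry, direct reads
    dd if=/dev/vdb of=/dev/null bs=1M count=1024 iflag=direct

The MB/s figure dd reports at the end is what's quoted below.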

On Linux backed by RBD we were seeing roughly 150MB/s writes and
175MB/s+ reads.  On FreeBSD on RBD (with VirtIO) we were seeing
roughly 30MB/s writes and 100MB/s reads.

For FreeBSD we tested the following configurations (write speeds
indicated; reads were always reasonable at ~100-200MB/s). We ran each
test a few times and averaged the results (how the cache modes are
set is sketched after the list):
- RBD - Qemu 'cache=none': 30MB/s
- RBD - Qemu 'cache=writeback': 50MB/s
- Local SSD - Qemu 'cache=none': 50MB/s
- Local SSD - Qemu 'cache=writeback': ~125MB/s
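
For anyone reproducing this: the cache mode is just the 'cache='
option on the Qemu drive. A minimal sketch for the RBD case (the
pool/image name 'rbd/vm-disk-1' is a placeholder):

    qemu-system-x86_64 ... \
        -drive file=rbd:rbd/vm-disk-1,format=raw,if=virtio,cache=writeback

Under libvirt the same setting is the cache attribute on the disk's
<driver> element, and rbd_cache itself is toggled with 'rbd cache =
true' under [client] in ceph.conf.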

As I mentioned before, the 'Local SSD - Qemu cache=writeback'
configuration was the only one that got us even close to Linux speeds.

I got a response on the Ceph list that I've yet to look over fully,
but it seems to indicate that there may be an issue with the I/O
sizes FreeBSD is issuing.  I'm also wondering whether it's simply a
problem with the FreeBSD VirtIO drivers, as they are still fairly new.
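
If the I/O-size theory holds, it should show up in a simple
block-size sweep from inside the guest (GNU dd syntax again;
FreeBSD's base dd may need different flags, and its VirtIO disk is
/dev/vtbd0 rather than /dev/vdb):

    # Same total data (256MiB) at each block size, direct writes
    dd if=/dev/zero of=/dev/vdb bs=4k  count=65536 oflag=direct
    dd if=/dev/zero of=/dev/vdb bs=64k count=4096  oflag=direct
    dd if=/dev/zero of=/dev/vdb bs=1M  count=256   oflag=direct

If small blocks crater on RBD but not on local SSD, that would point
at per-request overhead rather than the VirtIO driver itself.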

We'd like to be able to offer FreeBSD services on RBD, but right now
it's looking like we'd have to go with local storage to make it
viable.


Thank You,

Logan Barfield
Tranquil Hosting


On Thu, Feb 19, 2015 at 8:23 AM, Andrei Mikhailovsky <and...@arhont.com> wrote:
> Ilya, I have followed the instructions on the Ceph website and it worked
> perfectly well. The only addition is to enable rbd caching in ceph.conf.
>
> Andrei
>
> ----- Original Message -----
>
>> From: "ilya musayev" <ilya.mailing.li...@gmail.com>
>> To: dev@cloudstack.apache.org
>> Sent: Thursday, 19 February, 2015 10:21:22 AM
>> Subject: Re: Libvirt & RBD caching
>
>> Logan
>
>> Side note: it would help greatly if you could post your notes/guide on
>> setting up Ceph as primary storage with CloudStack.
>> There aren't any docs out there.
>
>> Thanks
>> ilya
>> On 2/18/15 12:00 PM, Logan Barfield wrote:
>> > Our current deployment is KVM with Ceph RBD primary storage. We have
>> > rbd_cache enabled, and use "cache=none" in Qemu by default.
>> >
>> > I've been running some tests to try to figure out why our write speeds
>> > with FreeBSD are significantly lower than Linux. I was testing both
>> > RBD and local SSD storage, with various cache configurations. Out of
>> > all of them the only one that performed close to our standard Linux
>> > images was local SSD, Qemu cache=writeback, FreeBSD gpt journal
>> > enabled.
>> >
>> > I've been reading on various lists the reasons and risks for
>> > cache=none vs cache=writeback:
>> > - cache=none: Safer for live migration
>> > - cache=writeback: Ceph RBD docs claim that this is required for
>> >   data integrity when using rbd_cache
>> >
>> > From what I can tell performance is generally the same with both,
>> > except in the case of FreeBSD.
>> >
>> > What is the current line of thinking on this? Should we be using
>> > 'none' or 'writeback' with RBD by default? Is 'writeback' considered
>> > safe for live migration?
