Re: [ceph-users] Anybody doing Ceph for OpenStack with OSDs across compute/hypervisor nodes?

2013-12-10 Thread Chris Hoy Poy

Ceph can be quite hard on CPU at times, so I would avoid co-locating OSDs with 
your hypervisors unless you have lots of CPU cycles to spare as well. 
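For anyone going ahead with co-location anyway, one way to keep OSD daemons from starving guest vCPUs is to pin them to a dedicated subset of cores. This is just a sketch (not something from this thread), assuming a systemd-managed host with a `ceph-osd@.service` unit; the core numbers are illustrative:

```ini
# /etc/systemd/system/ceph-osd@.service.d/cpu-pin.conf
# Hypothetical drop-in: confine every OSD daemon to cores 0-3,
# leaving the remaining cores free for guest vCPUs.
[Service]
CPUAffinity=0 1 2 3
```

After adding the drop-in, `systemctl daemon-reload` and a restart of the OSD units would apply the affinity mask.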

\C 
- Original Message -

From: Blair Bethwaite <blair.bethwa...@gmail.com> 
To: ceph-users@lists.ceph.com 
Sent: Tuesday, 10 December, 2013 10:04:01 AM 
Subject: [ceph-users] Anybody doing Ceph for OpenStack with OSDs across 
compute/hypervisor nodes? 

We're running OpenStack (KVM) with local disk for ephemeral storage. Currently 
we use local RAID10 arrays of 10k SAS drives, so we're quite rich for IOPS and 
have 20GE across the board. Some recent patches in OpenStack Havana make it 
possible to use Ceph RBD as the source of ephemeral VM storage, so I'm 
interested in the potential for clustered storage across our hypervisors for 
this purpose. Any experience out there? 

-- 
Cheers, 
~Blairo 

___ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 



[ceph-users] Anybody doing Ceph for OpenStack with OSDs across compute/hypervisor nodes?

2013-12-09 Thread Blair Bethwaite
We're running OpenStack (KVM) with local disk for ephemeral storage.
Currently we use local RAID10 arrays of 10k SAS drives, so we're quite rich
for IOPS and have 20GE across the board. Some recent patches in OpenStack
Havana make it possible to use Ceph RBD as the source of ephemeral VM
storage, so I'm interested in the potential for clustered storage across
our hypervisors for this purpose. Any experience out there?
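For reference, the Havana-era RBD ephemeral backend is enabled per compute node in nova.conf. The exact option names shifted between releases (Havana used `libvirt_`-prefixed options in `[DEFAULT]`; later releases moved them into `[libvirt]`), so treat this as a sketch rather than a verified config, and the pool/user names are assumptions:

```ini
# nova.conf (compute node) -- sketch of RBD-backed ephemeral storage.
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret uuid>
```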

-- 
Cheers,
~Blairo


Re: [ceph-users] Anybody doing Ceph for OpenStack with OSDs across compute/hypervisor nodes?

2013-12-09 Thread Kyle Bader
> We're running OpenStack (KVM) with local disk for ephemeral storage.
> Currently we use local RAID10 arrays of 10k SAS drives, so we're quite rich
> for IOPS and have 20GE across the board. Some recent patches in OpenStack
> Havana make it possible to use Ceph RBD as the source of ephemeral VM
> storage, so I'm interested in the potential for clustered storage across our
> hypervisors for this purpose. Any experience out there?

I believe Piston converges their storage/compute, they refer to it as
a null-tier architecture.

http://www.pistoncloud.com/openstack-cloud-software/technology/#storage
-- 

Kyle