Re: [Openstack] Ceph performance as volume image store?

2012-07-24 Thread Josh Durgin

On 07/23/2012 08:24 PM, Jonathan Proulx wrote:

Hi All,

I've been looking at Ceph as a storage back end.  I'm running a
research cluster, and while people need to use it and want it 24x7, I
don't need as many nines as a commercial customer-facing service
does, so I think I'm OK with the current maturity level as far as
that goes; I have less of a sense of how far along performance is.

My OpenStack deployment is 768 cores across 64 physical hosts, which
I'd like to double in the next 12 months.  What it's used for varies
widely and is hard to classify: some uses are hundreds of tiny nodes,
others are looking to monopolize the biggest physical system they can
get.  I think most really heavy IO currently goes to our NAS servers
rather than through nova-volumes, but that could change.

Is anyone using Ceph at that scale (or preferably larger)?  Does it
keep up if you keep throwing hardware at it?  My proof-of-concept
Ceph cluster on crappy salvaged hardware has proved the concept to
me, but it has (unsurprisingly) crappy salvaged performance.  I'm
trying to get a sense of what performance expectations I should have
given decent hardware before I decide whether to buy decent hardware
for it...

Thanks,
-Jon


Hi Jon,

You might be interested in Jim Schutt's numbers on better hardware:

http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/7487

You'll probably get more responses on the Ceph mailing list, though.

Josh
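
A quick way to get a rough first-pass number from your own hardware
before taking the question to the Ceph list is the rados bench tool
that ships with Ceph; the sketch below shows the same idea through
the python-rados bindings. Treat it as a minimal illustration rather
than a real benchmark: it writes serially from a single client, so it
will understate what parallel clients can reach, and the pool name,
object size, and object count are assumptions for the example.

# Rough single-client write probe via python-rados (illustrative only).
# Assumes a readable /etc/ceph/ceph.conf, a usable client keyring, and
# an existing test pool named "bench" -- all of these are assumptions.
import time
import rados

POOL = "bench"
OBJ_SIZE = 4 * 1024 * 1024   # 4 MiB per object
NUM_OBJS = 256               # ~1 GiB written in total

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        payload = b"\0" * OBJ_SIZE
        start = time.time()
        for i in range(NUM_OBJS):
            ioctx.write_full("bench_obj_%d" % i, payload)
        elapsed = time.time() - start
        total_mib = OBJ_SIZE * NUM_OBJS / (1024.0 * 1024.0)
        print("wrote %.0f MiB in %.1f s (%.1f MiB/s)"
              % (total_mib, elapsed, total_mib / elapsed))
        for i in range(NUM_OBJS):
            ioctx.remove_object("bench_obj_%d" % i)   # clean up test objects
    finally:
        ioctx.close()
finally:
    cluster.shutdown()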



Re: [Openstack] Ceph performance as volume image store?

2012-07-24 Thread Anne Gentle
I don't know whether it will confirm or correlate with your findings,
but do take a look at this blog post, which has benchmarks in one of
its last sections:

http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/

I'm trying to determine which parts should go into the OpenStack
documentation, so please let me know whether the post is useful to
you in your setting and which sections are most valuable.
Thanks,
Anne
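
For readers following the blog post above: the heart of the
nova-volume/RBD integration it walks through is that each volume is
simply an RBD image in a Ceph pool. The sketch below shows that
underlying operation directly through the python-rbd bindings. It is
an illustration of the mechanism, not the driver's actual code, and
the pool name "volumes", the image name, and the size are assumptions
for the example.

# Create, list, and remove an RBD image -- the kind of object an RBD
# volume backend manages for each volume (illustrative only).
# Assumes python-rados/python-rbd are installed, /etc/ceph/ceph.conf is
# readable, and a pool named "volumes" exists -- all assumptions here.
import rados
import rbd

POOL = "volumes"
IMAGE = "volume-demo"
SIZE = 10 * 1024 ** 3   # 10 GiB, in bytes

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        rbd_inst = rbd.RBD()
        rbd_inst.create(ioctx, IMAGE, SIZE)            # create the image
        print("images in %s: %s" % (POOL, rbd_inst.list(ioctx)))
        rbd_inst.remove(ioctx, IMAGE)                  # remove the demo image
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

In the kind of setup the post describes, QEMU/KVM typically attaches
such an image to the guest through its built-in RBD support rather
than a kernel mapping on the compute host.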




Re: [Openstack] Ceph performance as volume image store?

2012-07-24 Thread Leandro Reox
We're pretty interested in large-scale performance benchmarks too. Anyone?

Regards


[Openstack] Ceph performance as volume image store?

2012-07-23 Thread Jonathan Proulx
Hi All,

I've been looking at Ceph as a storage back end.  I'm running a
research cluster, and while people need to use it and want it 24x7, I
don't need as many nines as a commercial customer-facing service
does, so I think I'm OK with the current maturity level as far as
that goes; I have less of a sense of how far along performance is.

My OpenStack deployment is 768 cores across 64 physical hosts, which
I'd like to double in the next 12 months.  What it's used for varies
widely and is hard to classify: some uses are hundreds of tiny nodes,
others are looking to monopolize the biggest physical system they can
get.  I think most really heavy IO currently goes to our NAS servers
rather than through nova-volumes, but that could change.

Is anyone using Ceph at that scale (or preferably larger)?  Does it
keep up if you keep throwing hardware at it?  My proof-of-concept
Ceph cluster on crappy salvaged hardware has proved the concept to
me, but it has (unsurprisingly) crappy salvaged performance.  I'm
trying to get a sense of what performance expectations I should have
given decent hardware before I decide whether to buy decent hardware
for it...

Thanks,
-Jon

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp