Re: OSD hardware suggestion

2012-11-13 Thread Josh Durgin
On 11/09/2012 05:11 AM, Gandalf Corvotempesta wrote: Based on your experience, will the following configuration be OK for a production cluster? - R515 with 12 disks + 2 internal disks. I'll start with 2x or 4x 2TB SATA disks in each R515, one OSD per disk, 3 servers. Each R515 will have one
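For reference, a one-OSD-per-disk layout like the one described above would look roughly like this in a ceph.conf of that era (hostnames and device paths below are illustrative, not taken from the thread):

    [osd.0]
        host = r515-a
        devs = /dev/sdb
    [osd.1]
        host = r515-a
        devs = /dev/sdc
    ; ...one [osd.N] section per data disk, repeated across the three R515 hosts

One [osd.N] section per disk gives each disk its own OSD daemon, which is the layout being suggested.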

Re: improve speed with auth supported=none

2012-11-13 Thread Josh Durgin
On 11/12/2012 11:52 PM, Stefan Priebe wrote: On 13.11.2012 08:42, Josh Durgin wrote: On 11/12/2012 01:57 PM, Stefan Priebe wrote: Thanks, this gives another burst of iops. I'm now at 23.000 iops ;-) So for random 4k iops, ceph auth and especially the logging are a lot of overhead. How much
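For context, a minimal sketch of the settings the thread is discussing: disabling cephx and turning down logging (option names as used in the 2012-era releases; newer releases split "auth supported" into separate auth cluster/service/client required options):

    [global]
        auth supported = none
        ; logging is what the thread identifies as a big contributor to 4k-iops overhead
        debug ms = 0
        debug osd = 0
        debug filestore = 0
        debug journal = 0

Disabling authentication is only reasonable on a fully trusted network.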

Re: improve speed with auth supported=none

2012-11-13 Thread Stefan Priebe
On 13.11.2012 09:04, Josh Durgin wrote: On 11/12/2012 11:52 PM, Stefan Priebe wrote: On 13.11.2012 08:42, Josh Durgin wrote: On 11/12/2012 01:57 PM, Stefan Priebe wrote: Thanks, this gives another burst of iops. I'm now at 23.000 iops ;-) So for random 4k iops, ceph auth and especially

Re: optimize librbd for iops

2012-11-13 Thread Josh Durgin
On 11/12/2012 11:55 PM, Stefan Priebe wrote: On 13.11.2012 08:51, Josh Durgin wrote: On 11/12/2012 05:50 AM, Stefan Priebe - Profihost AG wrote: Hello list, are there any plans to optimize librbd for iops? Right now I'm able to get 50.000 iop/s via iSCSI and 100.000 iop/s using multipathing

Re: optimize librbd for iops

2012-11-13 Thread Stefan Priebe - Profihost AG
On 13.11.2012 09:20, Josh Durgin wrote: On 11/12/2012 11:55 PM, Stefan Priebe wrote: rados bench uses librados aio, keeping several operations in flight. IO size is the same as object size for it. You can do a 4k write benchmark that doesn't delete the objects it writes, with 32 IOs in flight
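A hedged sketch of the kind of rados bench invocation Josh describes: a 4k write test with 32 concurrent operations that leaves its objects in place so a read pass can follow (pool name is illustrative, and the --no-cleanup flag may not exist in every build of that era):

    rados -p rbd bench 60 write -b 4096 -t 32 --no-cleanup
    rados -p rbd bench 60 seq -t 32      # read back the objects left by the write pass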

Re: Removed directory is back in the Ceph FS

2012-11-13 Thread Franck Marchand
Hi, I have a weird problem. I removed a folder through a mounted Ceph FS partition, and it worked fine. I checked later to see if I had all my folders in the Ceph FS ...: the folder I removed was back, and I can't remove it! Here is the error message I got: rm -rf 2012-11-10/ rm: cannot remove

Fwd: rbd kernel module fail

2012-11-13 Thread ruslan usifov
Hello, I am testing a Ceph cluster on VMware machines (3 nodes in the cluster) to build a scalable rbd block device, and I have trouble when trying to map an rbd image to a device; I got the following message in kernel.log: Nov 13 15:32:47 ceph-precie-64-02 kernel: [ 188.319319] [ cut here ] Nov 13
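For context, the failing step is the image-mapping sequence, roughly as below (image name and size are illustrative):

    modprobe rbd
    rbd create test --size 1024     # 1 GB image in the default 'rbd' pool
    rbd map test                    # this is the step that triggers the kernel WARN above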

Performance problems or expected behavior?

2012-11-13 Thread Sébastien Han
Hi all, I ran some benchmarks with fio using a 4K block size. I think I'm hitting some performance problems; I can hardly imagine that the IOPS should be so low... My setup: - 4 HP DL 360 G7 servers: - E5606 / 2.13GHz 4C - 6GB RAM - root fs: HP 72GB 15K SAS RAID 1 -
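A hedged sketch of the kind of fio job being described: 4K random writes at queue depth 32 against a mapped rbd device (device path, runtime, queue depth, and read/write mix are assumptions, not taken from the original mail):

    [global]
    ioengine=libaio
    direct=1
    rw=randwrite
    bs=4k
    iodepth=32
    runtime=60
    time_based

    [rbd-4k]
    filename=/dev/rbd0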

Re: Fwd: rbd kernel module fail

2012-11-13 Thread Alex Elder
On 11/13/2012 05:54 AM, ruslan usifov wrote: Hello, I am testing a Ceph cluster on VMware machines (3 nodes in the cluster) to build a scalable rbd block device, and I have trouble when trying to map an rbd image to a device; I got the following message in kernel.log. I haven't really looked into this yet, but this is a

Re: Fwd: rbd kernel module fail

2012-11-13 Thread ruslan usifov
How can I compile the current version of the rbd module? Right now I use the rbd module that ships with the standard Linux kernel in Ubuntu 12.04. 2012/11/13 Alex Elder el...@inktank.com: On 11/13/2012 05:54 AM, ruslan usifov wrote: Hello, I am testing a Ceph cluster on VMware machines (3 nodes in the cluster) to build a scalable rbd

Re: Fwd: rbd kernel module fail

2012-11-13 Thread Alex Elder
On 11/13/2012 07:48 AM, ruslan usifov wrote: How can I compile the current version of the rbd module? Right now I use the rbd module that ships with the standard Linux kernel in Ubuntu 12.04. The Ubuntu kernel team builds mainline kernel packages. Perhaps you could try that.
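A rough sketch of what "try a mainline kernel package" looks like on Ubuntu 12.04 (the version strings are placeholders; pick an actual build from the mainline archive):

    # builds are published at http://kernel.ubuntu.com/~kernel-ppa/mainline/
    wget http://kernel.ubuntu.com/~kernel-ppa/mainline/v<version>/linux-image-<version>-generic_<version>_amd64.deb
    sudo dpkg -i linux-image-<version>*.deb
    sudo reboot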

a Hibernate OGM DataStore Provider for RADOS object stores?

2012-11-13 Thread Steffen Yount
Hi all, I'm not sure this is the place to post something like this, but here goes... I've mainly been introduced to Ceph in the context of OpenStack integration. But then, the other day, I read an article about Red Hat's new Hibernate OGM project: a Java Persistence API (JPA) implementation

osd recovery extremely slow with current master

2012-11-13 Thread Stefan Priebe
Hi list, osd recovery seems to be really slow with current master. I see only 1-8 active+recovering PGs out of 1200, even though there's no load on the Ceph cluster. Greets, Stefan
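For anyone chasing the same symptom, a few commands to see what recovery is actually doing; the injectargs line is a hedged example of per-OSD tuning using the syntax and option names of that era, not a recommendation from the thread:

    ceph -s                               # overall recovery progress
    ceph pg dump | grep recovering        # which PGs are in the recovering state
    ceph osd tell 0 injectargs '--osd-recovery-max-active 15'   # raise per-OSD recovery concurrency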

Re: [Help] Use Ceph RBD as primary storage in CloudStack 4.0

2012-11-13 Thread Dan Mick
Hi Alex: did you install the Ceph packages before trying to build qemu? It sounds like qemu is looking for the Ceph libraries and not finding them. On 11/12/2012 09:38 PM, Alex Jiang wrote: Hi all, has anybody used Ceph RBD in CloudStack as primary storage? I see that in the new features
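For reference, the usual fix is to install the RBD development packages before running qemu's configure, roughly (Debian/Ubuntu package names):

    sudo apt-get install librbd-dev librados-dev
    ./configure --enable-rbd
    make

If configure still reports rbd as missing, its config.log shows which header or library the test program failed to find.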

Re: Braindump: multiple clusters on the same hardware

2012-11-13 Thread Jimmy Tang
On 18 Oct 2012, at 10:47, Tommi Virtanen wrote: That's the async replication for disaster recovery feature that has been mentioned every now and then. You could build it yourself as read-from-one-cluster, write-to-another; the client libraries are perfectly able to talk to two clusters
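As a hedged illustration of "the client libraries can talk to two clusters": the same idea works from the command line by pointing two invocations at different conffiles (paths, pool, and object name are made up for the example):

    rados -c /etc/ceph/clusterA.conf -p mypool get myobject /tmp/myobject
    rados -c /etc/ceph/clusterB.conf -p mypool put myobject /tmp/myobject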

v0.54 released

2012-11-13 Thread Sage Weil
The v0.54 development release is ready! This will be the last development release before v0.55 bobtail, our next long-term stable release, is ready. Notable changes this time around include: * osd: use entire device if journal is a block device * osd: new caps structure (see below) * osd:
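As an illustration of the journal change noted above: with v0.54, pointing an OSD's journal at a block device uses the whole device, so an explicit journal size is no longer needed in that case (section name and device path are illustrative):

    [osd.0]
        host = myhost
        ; a raw partition as journal; the entire partition is used in v0.54
        osd journal = /dev/sdg1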

keystone and libnss

2012-11-13 Thread Yehuda Sadeh
One issue to keep in mind with Keystone is that it requires nss to work efficiently. With nss both presigned (pki) tokens and token revocation work. Without it, the following happens: - we go to the keystone server for every non-cached token, even if it's presigned - when we fail to decode the
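A hedged sketch of the radosgw side of this: the Keystone options plus the NSS certificate database path that enables local PKI-token verification (URL, token, and paths are placeholders):

    [client.radosgw.gateway]
        rgw keystone url = http://keystone.example.com:35357
        rgw keystone admin token = <admin-token>
        nss db path = /var/lib/ceph/nss

The NSS database has to contain Keystone's signing certificates for radosgw to decode PKI tokens locally instead of asking the Keystone server each time.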