[ceph-users] [SOLVED] RE: failing on 0.67.1 radosgw install

2013-08-22 Thread Fuchs, Andreas (SwissTXT)
My radosgw is up now. There were two problems in my config: 1) I forgot to copy the "FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock" entry from the instructions into my apache config; 2) I made a mistake in the ceph conf, I entered: [client.radosgw.gateway] h
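
For anyone hitting the same two problems, a minimal sketch of the two fragments involved follows; the socket path, fcgi path and keyring location are common defaults from the radosgw documentation, not taken from Andreas' actual config:

  # Apache (mod_fastcgi): declare the fcgi script as served by an external daemon
  FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock

  # ceph.conf section for the gateway daemon
  [client.radosgw.gateway]
      host = gateway-host
      keyring = /etc/ceph/keyring.radosgw.gateway
      rgw socket path = /tmp/radosgw.sock
      log file = /var/log/ceph/radosgw.log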

[ceph-users] (no subject)

2013-08-22 Thread Rong Zhang
___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Destroyed Ceph Cluster

2013-08-22 Thread Georg Höllrigl
Thank you - It works now as expected. I've removed the MDS. As soon as the 2nd osd machine came up, it fixed the other errors!? On 19.08.2013 18:28, Gregory Farnum wrote: Have you ever used the FS? It's missing an object which we're intermittently seeing failures to create (on initial setup) w

Re: [ceph-users] Network failure scenarios

2013-08-22 Thread Sage Weil
On Fri, 23 Aug 2013, Keith Phua wrote: > Hi, > > It was mentioned in the devel mailing list that for a 2-network setup, if > the cluster network fails, the cluster behaves pretty badly. Ref: > http://article.gmane.org/gmane.comp.file-systems.ceph.devel/12285/match=cluster+network+fail > > May I

[ceph-users] Network failure scenarios

2013-08-22 Thread Keith Phua
Hi, It was mentioned in the devel mailing list that for a 2-network setup, if the cluster network fails, the cluster behaves pretty badly. Ref: http://article.gmane.org/gmane.comp.file-systems.ceph.devel/12285/match=cluster+network+fail May I know if this problem still exists in cuttlefish or dum

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Greg Poirier
On Thu, Aug 22, 2013 at 2:34 PM, Gregory Farnum wrote: > You don't appear to have accounted for the 2x replication (where all > writes go to two OSDs) in these calculations. I assume your pools have > Ah. Right. So I should then be looking at: # OSDs * Throughput per disk / 2 / repl factor ?
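
As a rough worked example of the formula being discussed (the figures are illustrative, not from Greg's cluster): with 24 OSDs each sustaining about 100 MB/s, a replication factor of 2, and journals co-located on the data disks (so every object write hits each disk twice), the expected aggregate client write throughput is roughly 24 * 100 / 2 / 2 = 600 MB/s.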

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Greg Poirier
I should have also said that we experienced similar performance on Cuttlefish. I have run identical benchmarks on both. On Thu, Aug 22, 2013 at 2:23 PM, Oliver Daudey wrote: > Hey Greg, > > I encountered a similar problem and we're just in the process of > tracking it down here on the list. Tr

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Oliver Daudey
Hey Greg, I didn't know that option, but I'm always careful to downgrade and upgrade the OSDs one by one and wait for the cluster to report healthy again before proceeding to the next, so, as you said, chances of losing data should have been minimal. Will flush the journals too next time. Thanks!

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Gregory Farnum
On Thu, Aug 22, 2013 at 2:47 PM, Oliver Daudey wrote: > Hey Greg, > > Thanks for the tip! I was assuming a clean shutdown of the OSD should > flush the journal for you and have the OSD try to exit with its > data-store in a clean state? Otherwise, I would first have to stop > updates to that par

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Oliver Daudey
Hey Greg, Thanks for the tip! I was assuming a clean shutdown of the OSD should flush the journal for you and have the OSD try to exit with its data-store in a clean state? Otherwise, I would first have to stop updates to that particular OSD, then flush the journal, then stop it? Regards,
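
For reference, a stop-then-flush sequence on a single OSD might look like the sketch below; osd.0 and the sysvinit-style service command are placeholders, and the exact invocation depends on the distribution and init system:

  # stop the daemon first, then flush its journal into the object store
  service ceph stop osd.0
  ceph-osd -i 0 --flush-journal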

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Gregory Farnum
On Thu, Aug 22, 2013 at 2:23 PM, Oliver Daudey wrote: > Hey Greg, > > I encountered a similar problem and we're just in the process of > tracking it down here on the list. Try downgrading your OSD-binaries to > 0.61.8 Cuttlefish and re-test. If it's significantly faster on RBD, > you're probably

Re: [ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Oliver Daudey
Hey Greg, I encountered a similar problem and we're just in the process of tracking it down here on the list. Try downgrading your OSD-binaries to 0.61.8 Cuttlefish and re-test. If it's significantly faster on RBD, you're probably experiencing the same problem I have with Dumpling. PS: Only dow

[ceph-users] Snapshot a KVM VM with RBD backend and libvirt

2013-08-22 Thread Tobias Brunner
Hi, I'm trying to create a snapshot from a KVM VM: # virsh snapshot-create one-5 error: unsupported configuration: internal checkpoints require at least one disk to be selected for snapshot RBD should support such snapshot, according to the wiki: http://ceph.com/w/index.php?title=QEMU-RBD#Sn
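
When libvirt refuses to take an internal checkpoint, one fallback is to snapshot at the RBD layer instead of through virsh; a sketch is below, with pool, image and snapshot names as placeholders, and ideally with the guest filesystems quiesced or the VM paused first:

  rbd snap create rbd/one-5-disk-0@before-upgrade
  rbd snap ls rbd/one-5-disk-0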

[ceph-users] Unexpectedly slow write performance (RBD cinder volumes)

2013-08-22 Thread Greg Poirier
I have been benchmarking our Ceph installation for the last week or so, and I've come across an issue that I'm having some difficulty with. Ceph bench reports reasonable write throughput at the OSD level: ceph tell osd.0 bench { "bytes_written": 1073741824, "blocksize": 4194304, "bytes_per_se
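
For comparison, the two benchmarks usually quoted in these threads can be run as follows; the pool name, duration and concurrency are arbitrary choices, not from Greg's setup:

  # per-OSD backend write test
  ceph tell osd.0 bench
  # cluster-wide object write test: 30 seconds, 16 concurrent ops
  rados bench -p rbd 30 write -t 16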

Re: [ceph-users] Failed to create a single mon using "ceph-deploy mon create **"

2013-08-22 Thread Nico Massenberg
Same problem here. Adding the public network parameter to all the ceph.conf files got me one step further. However, ceph-deploy tells me the mons are created, but they won't show up in ceph -w output. On 22.08.2013 at 18:43, Alfredo Deza wrote: > On Wed, Aug 21, 2013 at 10:05 PM, SOLO wro
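
A minimal example of the setting Nico mentions, assuming a hypothetical 10.0.1.0/24 public network and placeholder monitor hostnames; it needs to be in the ceph.conf on the deploy host before running mon create:

  [global]
      public network = 10.0.1.0/24

  ceph-deploy mon create mon1 mon2 mon3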

Re: [ceph-users] Failed to create a single mon using "ceph-deploy mon create **"

2013-08-22 Thread Alfredo Deza
On Wed, Aug 21, 2013 at 10:05 PM, SOLO wrote: > Hi! > > I am trying ceph on RHEL 6.4 > My ceph version is cuttlefish > I followed the intro and ceph-deploy new .. ceph-deploy instal .. > --stable cuttlefish > It didn't appear an error until here. > And then I typed ceph-deploy mon create

Re: [ceph-users] RBD hole punching

2013-08-22 Thread Michael Lowe
I use the virtio-scsi driver. On Aug 22, 2013, at 12:05 PM, David Blundell wrote: >> I see yet another caveat: According to that documentation, it only works with >> the IDE driver, not with virtio. >> >>Guido > > I've just been looking into this but have not yet tested. It looks like >

Re: [ceph-users] RBD hole punching

2013-08-22 Thread David Blundell
> I see yet another caveat: According to that documentation, it only works with > the IDE driver, not with virtio. > > Guido I've just been looking into this but have not yet tested. It looks like discard is supported in the newer virtio-scsi devices but not virtio-blk. This Sheepdog pag
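
As a sketch of what David describes (untested here as well, and dependent on a reasonably recent QEMU/libvirt; domain and device names are placeholders), the guest disk can be moved to a virtio-scsi bus with discard enabled, after which fstrim in the guest should punch holes in the RBD image:

  <controller type='scsi' model='virtio-scsi'/>
  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' discard='unmap'/>
    <!-- keep the existing rbd <source> element as-is -->
    <target dev='sda' bus='scsi'/>
  </disk>

  # inside the guest:
  fstrim -v /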

Re: [ceph-users] NFS vs. CephFS for /var/lib/nova/instances

2013-08-22 Thread Gregory Farnum
On Thursday, August 22, 2013, Amit Vijairania wrote: > Hello! > > We, in our environment, need a shared file system for > /var/lib/nova/instances and Glance image cache (_base).. > > Is anyone using CephFS for this purpose? > When folks say CephFS is not production ready, is the primary concern >

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-22 Thread Mike Dawson
Jumping in pretty late on this thread, but I can confirm much higher CPU load on ceph-osd using 0.67.1 compared to 0.61.7 under a write-heavy RBD workload. Under my workload, it seems like it might be 2x-5x higher CPU load per process. Thanks, Mike Dawson On 8/22/2013 4:41 AM, Oliver Daudey

Re: [ceph-users] RBD hole punching

2013-08-22 Thread Guido Winkelmann
On Thursday, 22 August 2013, 10:32:30, Mike Lowe wrote: > There is TRIM/discard support and I use it with some success. There are some > details here http://ceph.com/docs/master/rbd/qemu-rbd/ The one caveat I > have is that I've sometimes been able to crash an osd by doing fstrim > inside a gu

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-22 Thread Mark Nelson
For what it's worth, I was still seeing some small sequential write degradation with kernel RBD with dumpling, though random writes were not consistently slower in the testing I did. There was also some variation in performance between 0.61.2 and 0.61.7 likely due to the workaround we had to i

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-22 Thread Sage Weil
We should perhaps hack the old (cuttlefish and earlier) flushing behavior into the new code so that we can confirm that it is really the writeback that is causing the problem and not something else... sage On Thu, 22 Aug 2013, Oliver Daudey wrote: > Hey Samuel, > > On wo, 2013-08-21 at 20:27

[ceph-users] rbd in centos6.4

2013-08-22 Thread raj kumar
ceph cluster is running fine in centos6.4. Now I would like to export the block device to a client using rbd. My questions are: 1. I tried to modprobe rbd on one of the monitor hosts, but I got an error: FATAL: Module rbd not found. I could not find the rbd module. How can I do this? 2. Once the rbd is
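
For context, the stock RHEL/CentOS 6.4 kernel (2.6.32) does not ship the rbd module, so modprobe will keep failing until a newer kernel that includes it (for example an ELRepo mainline kernel) is installed. Once such a kernel is running, mapping a device looks roughly like this; pool and image names are placeholders:

  modprobe rbd
  rbd create rbd/testimg --size 10240
  rbd map rbd/testimg        # shows up as /dev/rbd0
  rbd showmapped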

[ceph-users] Failed to create a single mon using "ceph-deploy mon create **"

2013-08-22 Thread SOLO
Hi! I am trying ceph on RHEL 6.4. My ceph version is cuttlefish. I followed the intro and ceph-deploy new .. ceph-deploy instal .. --stable cuttlefish. No error appeared up to that point. And then I typed ceph-deploy mon create .. Here comes the error as below . . . [ceph@cephadmi

Re: [ceph-users] Failed to create a single mon using "ceph-deploy mon create **"

2013-08-22 Thread SOLO
And here is my ceph.log . . . [ceph@cephadmin my-clusters]$ less ceph.log 2013-08-22 09:01:27,375 ceph_deploy.new DEBUG Creating new cluster named ceph 2013-08-22 09:01:27,375 ceph_deploy.new DEBUG Resolving host cephs1 2013-08-22 09:01:27,382 ceph_deploy.new DEBUG Monitor cephs1 at 10.2.9.223 2013

[ceph-users] bucket count limit

2013-08-22 Thread Mostowiec Dominik
Hi, I am thinking about sharding s3 buckets in a CEPH cluster: create bucket-per-XX (256 buckets) or even bucket-per-XXX (4096 buckets), where XX/XXX are hex characters taken from the md5 of the object URL. Could this be a problem? (performance, or some limits) -- Regards Dominik ___ ceph-
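
As an illustration of the naming scheme Dominik describes, a small shell sketch is below; the "mydata-" bucket prefix is hypothetical, and a 2-character prefix gives the 256-bucket variant while 3 characters gives 4096:

  # pick the target bucket from the first hex chars of the object key's md5
  key="some/object/name"
  prefix=$(printf '%s' "$key" | md5sum | cut -c1-2)   # use 1-3 for 4096 buckets
  bucket="mydata-$prefix"
  echo "$bucket"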

Re: [ceph-users] One rados account, more S3 API keyes

2013-08-22 Thread Sage Weil
On Thu, 22 Aug 2013, Mihály Árva-Tóth wrote: > Hello, > > Is there any way for one radosgw user to have more than one access/secret key? Yes, you can have multiple keys for each user: radosgw-admin key create ... sage ___ ceph-users mailing list ceph-u
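
A hedged example of adding a second S3 key pair to an existing user (the uid is a placeholder):

  radosgw-admin key create --uid=johndoe --key-type=s3 --gen-access-key --gen-secret
  radosgw-admin user info --uid=johndoe   # lists all keys attached to the user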

Re: [ceph-users] failing on 0.67.1 radosgw install

2013-08-22 Thread Yehuda Sadeh
On Thu, Aug 22, 2013 at 12:36 AM, Fuchs, Andreas (SwissTXT) wrote: > My apache conf is as follows > > cat /etc/apache2/httpd.conf > ServerName radosgw01.swisstxt.ch > > cat /etc/apache2/sites-enabled/000_radosgw > > > ServerName *.radosgw01.swisstxt.ch > # ServerAdmin {email.addre

Re: [ceph-users] bucket count limit

2013-08-22 Thread Dominik Mostowiec
Thanks for your answer. -- Regards Dominik 2013/8/22 Yehuda Sadeh : > On Thu, Aug 22, 2013 at 7:11 AM, Dominik Mostowiec > wrote: >> Hi, >> I think about sharding s3 buckets in CEPH cluster, create >> bucket-per-XX (256 buckets) or even bucket-per-XXX (4096 buckets) >> where XXX is sign from ob

Re: [ceph-users] RBD hole punching

2013-08-22 Thread Mike Lowe
There is TRIM/discard support and I use it with some success. There are some details here http://ceph.com/docs/master/rbd/qemu-rbd/ The one caveat I have is that I've sometimes been able to crash an osd by doing fstrim inside a guest. On Aug 22, 2013, at 10:24 AM, Guido Winkelmann wrote: > H

Re: [ceph-users] radosgw crash

2013-08-22 Thread Yehuda Sadeh
On Thu, Aug 22, 2013 at 5:18 AM, Pawel Stefanski wrote: > hello! > > Today our radosgw crashed while running multiple deletions via s3 api. > > Is this known bug ? > > POST > WSTtobXBlBrm2r78B67LtQ== > > Thu, 22 Aug 2013 11:38:34 GMT > /inna-a/?delete >-11> 2013-08-22 13:39:26.650499 7f36347d8

Re: [ceph-users] bucket count limit

2013-08-22 Thread Yehuda Sadeh
On Thu, Aug 22, 2013 at 7:11 AM, Dominik Mostowiec wrote: > Hi, > I think about sharding s3 buckets in CEPH cluster, create > bucket-per-XX (256 buckets) or even bucket-per-XXX (4096 buckets) > where XXX is sign from object md5 url. > Could this be the problem? (performance, or some limits) > The

[ceph-users] RBD hole punching

2013-08-22 Thread Guido Winkelmann
Hi, RBD has had support for sparse allocation for some time now. However, when using an RBD volume as a virtual disk for a virtual machine, the RBD volume will inevitably grow until it reaches its actual nominal size, even if the filesystem in the guest machine never reaches full utilization.

Re: [ceph-users] bucket count limit

2013-08-22 Thread Dominik Mostowiec
I'm sorry for the spam :-( -- Dominik 2013/8/22 Dominik Mostowiec : > Hi, > I think about sharding s3 buckets in CEPH cluster, create > bucket-per-XX (256 buckets) or even bucket-per-XXX (4096 buckets) > where XXX is sign from object md5 url. > Could this be the problem? (performance, or some lim

[ceph-users] NFS vs. CephFS for /var/lib/nova/instances

2013-08-22 Thread Amit Vijairania
Hello! We, in our environment, need a shared file system for /var/lib/nova/instances and Glance image cache (_base).. Is anyone using CephFS for this purpose? When folks say CephFS is not production ready, is the primary concern stability/data-integrity or performance? Is NFS (with NFS-Ganesha) i

Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-22 Thread Alfredo Deza
On Thu, Aug 22, 2013 at 4:36 AM, Pavel Timoschenkov wrote: > Hi. > With this patch - is all ok. > Thanks for help! > Thanks for confirming this, I have opened a ticket (http://tracker.ceph.com/issues/6085 ) and will work on this patch to get it merged. > -Original Message- > From: Alfred
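
For readers finding this thread later, the separate-journal syntax being exercised here takes the form HOST:DATADISK:JOURNALDISK; a sketch with placeholder host and device names:

  ceph-deploy osd create node1:sdb:sdc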

[ceph-users] bucket count limit

2013-08-22 Thread Dominik Mostowiec
Hi, I am thinking about sharding s3 buckets in a CEPH cluster: create bucket-per-XX (256 buckets) or even bucket-per-XXX (4096 buckets), where XX/XXX are hex characters taken from the md5 of the object URL. Could this be a problem? (performance, or some limits) -- Regards Dominik ___ ceph-us

[ceph-users] radosgw crash

2013-08-22 Thread Pawel Stefanski
hello! Today our radosgw crashed while running multiple deletions via s3 api. Is this known bug ? POST WSTtobXBlBrm2r78B67LtQ== Thu, 22 Aug 2013 11:38:34 GMT /inna-a/?delete -11> 2013-08-22 13:39:26.650499 7f36347d8700 2 req 95:0.000555:s3:POST /inna-a/:multi_object_delete:reading permissio

[ceph-users] RE : OpenStack Cinder + Ceph, unable to remove unattached volumes, still watchers

2013-08-22 Thread HURTEVENT VINCENT
Hi Josh, thank you for your answer, but I was on Bobtail, so no listwatchers command :) I planned a reboot of the affected compute nodes and everything went fine afterwards. I updated Ceph to the latest stable release though. From: Josh Durgin [josh.dur...@inktank.com] Sent

Re: [ceph-users] Significant slowdown of osds since v0.67 Dumpling

2013-08-22 Thread Oliver Daudey
Hey Samuel, On wo, 2013-08-21 at 20:27 -0700, Samuel Just wrote: > I think the rbd cache one you'd need to run for a few minutes to get > meaningful results. It should stabilize somewhere around the actual > throughput of your hardware. Ok, I now also ran this test on Cuttlefish as well as Dumpl

Re: [ceph-users] ceph-deploy and journal on separate disk

2013-08-22 Thread Pavel Timoschenkov
Hi. With this patch everything is OK. Thanks for the help! -Original Message- From: Alfredo Deza [mailto:alfredo.d...@inktank.com] Sent: Wednesday, August 21, 2013 7:16 PM To: Pavel Timoschenkov Cc: ceph-us...@ceph.com Subject: Re: [ceph-users] ceph-deploy and journal on separate disk On Wed, Aug

Re: [ceph-users] failing on 0.67.1 radosgw install

2013-08-22 Thread Fuchs, Andreas (SwissTXT)
My apache conf is as follows cat /etc/apache2/httpd.conf ServerName radosgw01.swisstxt.ch cat /etc/apache2/sites-enabled/000_radosgw ServerName *.radosgw01.swisstxt.ch # ServerAdmin {email.address} ServerAdmin serviced...@swisstxt.ch DocumentRoot /var/www