Re: [ceph-users] Copying RBD images between clusters?

2014-04-25 Thread Vladislav Gorbunov
rbd -m mon-cluster1 export rbd/one-1 - | rbd -m mon-cluster2 import - rbd/one-1 On Friday, April 25, 2014, Brian Rak wrote: > Is there a recommended way to copy an RBD image between two different > clusters? > > My initial thought was 'rbd export - | ssh "rbd import -"', but I'm not …
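A minimal sketch of that pipeline when the target cluster is only reachable over SSH, assuming default admin keyrings on both ends; the monitor and host names below are hypothetical:

    # Stream the image between clusters without writing an intermediate file;
    # -m points each rbd command at the monitor of the respective cluster.
    rbd -m mon-cluster1 export rbd/one-1 - | \
        ssh user@gateway-cluster2 'rbd -m mon-cluster2 import - rbd/one-1'

When one host can reach the monitors of both clusters directly, the simpler single-host pipe quoted above works without the ssh hop.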

Re: [ceph-users] Ceph mds laggy and failed assert in function replay mds/journal.cc

2014-04-25 Thread Luke Jing Yuan
Hi Greg, Actually the cluster that my colleague and I are working on is rather new and still has plenty of space left (less than 7% used). What we noticed just before the MDS gave us this problem was a temporary network issue in the data center, so we are not sure whether that could have been the root cause …

Re: [ceph-users] bandwidth with Ceph - v0.59 (Bobtail)

2014-04-25 Thread Mark Nelson
I don't have any recent results published, but you can see some of the older results from bobtail here: http://ceph.com/performance-2/argonaut-vs-bobtail-performance-preview/ Specifically, look at the 256 concurrent 4MB rados bench tests. In a 6 disk, 2 SSD configuration we could push about 8…
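For reference, a sketch of the kind of rados bench run described above (256 concurrent 4MB writes); the pool name is a placeholder:

    # 60-second write benchmark: 4MB objects (-b takes bytes), 256 concurrent ops.
    rados bench -p testpool 60 write -b 4194304 -t 256 --no-cleanup
    # Sequential-read pass over the objects just written:
    rados bench -p testpool 60 seq -t 256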

Re: [ceph-users] bandwidth with Ceph - v0.59 (Bobtail)

2014-04-25 Thread Mark Nelson
For what it's worth, I've been able to achieve up to around 120MB/s with btrfs before things fragment. Mark On 04/25/2014 03:59 PM, Xing wrote: Hi Gregory, Thanks very much for your quick reply. When I started to look into Ceph, Bobtail was the latest stable release and that was why I picked…

Re: [ceph-users] packages for Trusty

2014-04-25 Thread Craig Lewis
Using the Emperor builds for Precise seems to work on Trusty. I just put a hold on all of the ceph, rados, and apache packages before the release upgrade. It makes me nervous though. I haven't stressed it much, and I don't really want to roll it out to production. I would like to see Emperor…
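A sketch of that hold-then-upgrade sequence, assuming a typical Emperor package set (check dpkg -l | grep -E 'ceph|rados' for the names actually installed on your node):

    # Pin the Ceph and Apache packages so the release upgrade leaves them alone.
    sudo apt-mark hold ceph ceph-common librados2 librbd1 radosgw \
        apache2 libapache2-mod-fastcgi
    sudo do-release-upgrade
    # Afterwards, confirm the held Precise builds survived:
    dpkg -l | grep -E 'ceph|rados|rbd'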

Re: [ceph-users] packages for Trusty

2014-04-25 Thread Travis Rhoden
Thanks guys. I don't know why I didn't try that. I guess it's just too much of a habit to set up the additional repo. =) On Fri, Apr 25, 2014 at 4:09 PM, Cédric Lemarchand wrote: > Yes, just apt-get install ceph ;-) > > Cheers > > -- > Cédric Lemarchand > > On 25 Apr 2014, at 21:07, Drew Weaver wrote: …

Re: [ceph-users] bandwidth with Ceph - v0.59 (Bobtail)

2014-04-25 Thread Gregory Farnum
Bobtail is really too old to draw any meaningful conclusions from; why did you choose it? That's not to say that performance on current code will be better (though it very much might be), but the internal architecture has changed in some ways that will be particularly important for the futex profi…

Re: [ceph-users] packages for Trusty

2014-04-25 Thread Cédric Lemarchand
Yes, just apt-get install ceph ;-) Cheers -- Cédric Lemarchand > On 25 Apr 2014, at 21:07, Drew Weaver wrote: > > You can actually just install it using the Ubuntu packages. I did it > yesterday on Trusty. > > Thanks, > -Drew > > > From: ceph-users-boun...@lists.ceph.com > [mailto:c…

[ceph-users] packages for Trusty

2014-04-25 Thread Sebastien
Well, as far as I know trusty has 0.79 and will get firefly as soon as it's ready, so I'm not sure if it's that urgent. The Precise repo should work fine. My 2 cents. Sébastien Han Cloud Engineer “Always give 100%. Unless you're giving blood.” Phone: +33 (0)1 49 70 99 72 Mail: sebastien@enovance…

Re: [ceph-users] packages for Trusty

2014-04-25 Thread Drew Weaver
You can actually just install it using the Ubuntu packages. I did it yesterday on Trusty. Thanks, -Drew From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Travis Rhoden Sent: Friday, April 25, 2014 3:06 PM To: ceph-users Subject: [ceph-users] packag…

[ceph-users] packages for Trusty

2014-04-25 Thread Travis Rhoden
Are there packages for Trusty being built yet? I don't see it listed at http://ceph.com/debian-emperor/dists/ Thanks, - Travis

Re: [ceph-users] mon osd min down reports

2014-04-25 Thread Gregory Farnum
The monitor requires at least [mon osd min down reports] reports, from a set of OSDs whose size is at least [mon osd min down reporters]. So with 9 reporters and 3 reports, it would wait until 9 OSDs had reported an OSD down (basically ignoring the reports setting, as it is smaller). -Greg On Friday, April 25, 2014, Craig Lewis wrote: > …
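A sketch of how the two settings might look in ceph.conf, using the illustrative values from the example above:

    [global]
        # An OSD is marked down only once BOTH thresholds are satisfied:
        mon osd min down reporters = 9   ; distinct OSDs that must report it down
        mon osd min down reports = 3     ; total failure reports required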

[ceph-users] Bit of trouble using the S3 api for adminops

2014-04-25 Thread Drew Weaver
Greetings, I got a ceph test cluster set up this week and I thought it would be neat if I could write a php script that let me start working with the adminops API. I did some research to figure out how to correctly 'authorize' in the AWS fashion and wrote this little script. http://host.com/…
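For comparison, a hedged sketch of the same S3-style signing done from the shell with curl and openssl; the access key, secret, host, and the /admin/usage resource are all placeholders, and which admin resources actually answer depends on the caps granted to the user:

    access_key="PLACEHOLDER_ACCESS_KEY"
    secret_key="PLACEHOLDER_SECRET_KEY"
    resource="/admin/usage"
    date="$(date -Ru)"   # RFC 2822 date; the same value goes in the Date header
    # S3-style string to sign: METHOD\nContent-MD5\nContent-Type\nDate\nResource
    string_to_sign="GET\n\n\n${date}\n${resource}"
    signature=$(printf "$string_to_sign" | \
        openssl dgst -sha1 -hmac "$secret_key" -binary | base64)
    curl -s -H "Date: ${date}" \
         -H "Authorization: AWS ${access_key}:${signature}" \
         "http://host.com${resource}?format=json"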

Re: [ceph-users] SSD journal overload?

2014-04-25 Thread Craig Lewis
I am not able to do a dd test on the SSDs since they're not mounted as a filesystem, but dd on the OSD (non-SSD) drives gives normal results. Since you have free space on the SSDs, you could add a 3rd 10G partition to one of the SSDs. Then you could put a filesystem on that partition, or just dd…
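A sketch of that test, assuming /dev/sdf is one of the journal SSDs with roughly 10G free after the two existing journal partitions; the device name and offsets are hypothetical, and the dd writes straight to the raw partition, so triple-check the target first:

    # Add a 3rd ~10G partition on the GPT-labelled SSD.
    parted -s /dev/sdf mkpart journal-test 20GB 30GB
    # Raw write test, bypassing the page cache, with journal-like sync behavior.
    dd if=/dev/zero of=/dev/sdf3 bs=4M count=1024 oflag=direct,dsync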

[ceph-users] mon osd min down reports

2014-04-25 Thread Craig Lewis
I was reading about mon osd min down reports at http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/, and I had a question. Are mon osd min down reporters and mon osd min down reports both required to mark an OSD down, or just one? For example, if I set [global] mon osd min d…

Re: [ceph-users] Ceph mds laggy and failed assert in function replay mds/journal.cc

2014-04-25 Thread Gregory Farnum
Hmm, it looks like your on-disk SessionMap is horrendously out of date. Did your cluster get full at some point? In any case, we're working on tools to repair this now, but they aren't ready for use yet. Probably the only thing you could do is create an empty sessionmap with a higher version than t…
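Before attempting anything like that, a hedged sketch of locating and backing up the on-disk SessionMap object; this assumes the default metadata pool name "metadata" and MDS rank 0, and the object name may differ on your cluster:

    # Find the session map object for rank 0 and take a backup of it first.
    rados -p metadata ls | grep sessionmap     # typically mds0_sessionmap
    rados -p metadata get mds0_sessionmap sessionmap.bak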

Re: [ceph-users] Access denied error

2014-04-25 Thread Yehuda Sadeh
On Fri, Apr 25, 2014 at 1:03 AM, Punit Dambiwal wrote: > Hi Yehuda, > > Thanks for your help... that missing date error is gone but still I am getting > the access denied error :- > > - > 2014-04-25 15:52:56.988025 7f00d37c6700 1 == starting new request > req=0x237a090…

Re: [ceph-users] [Bug]radosgw-agent can't sync files with Chinese filename

2014-04-25 Thread Yehuda Sadeh
On Thu, Apr 24, 2014 at 7:03 PM, wsnote wrote: > Hi, Yehuda. > It doesn't matter. We have fixed it. > The filename will be transcoded by url_encode and decoded by url_decode. > There is a bug when decoding the filename. > There is another bug: when radosgw-agent fails > d…

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-25 Thread Sebastien Han
I just tried, I have the same problem, it looks like a regression… It’s weird because the code didn’t change that much during the Icehouse cycle. I just reported the bug here: https://bugs.launchpad.net/cinder/+bug/1312819 Sébastien Han Cloud Engineer “Always give 100%. Unless you're giv…

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-25 Thread Maciej Gałkiewicz
On 25 April 2014 16:37, Sebastien Han wrote: > This is a COW clone, but the BP you pointed to doesn’t match the feature you > described. This might explain Greg’s answer. > The BP refers to the libvirt_image_type functionality for Nova. > > What do you get now when you try to create a volume from an…

[ceph-users] Ceph mds laggy and failed to assert session in function mds/journal.cc line 1303

2014-04-25 Thread Bazli Karim
Dear Ceph-devel, ceph-users, I am currently facing an issue with my ceph mds server. The ceph-mds daemon does not want to come back up. I tried running it manually with ceph-mds -i mon01 -d but it got aborted, and the log shows that it gets stuck at failed assert(session) at line 1303 in mds/journal.cc.

Re: [ceph-users] OSD distribution unequally -- osd crashes

2014-04-25 Thread Kenneth Waegeman
- Message from Craig Lewis - Date: Thu, 24 Apr 2014 11:20:08 -0700 From: Craig Lewis Subject: Re: [ceph-users] OSD distribution unequally -- osd crashes To: Kenneth Waegeman Cc: ceph-users@lists.ceph.com Your OSDs shouldn't be crashing during a remap. Although…

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-25 Thread Sebastien Han
This is a COW clone, but the BP you pointed to doesn’t match the feature you described. This might explain Greg’s answer. The BP refers to the libvirt_image_type functionality for Nova. What do you get now when you try to create a volume from an image? Sébastien Han Cloud Engineer “Always…

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-25 Thread Maciej Gałkiewicz
On 25 April 2014 16:00, Gregory Farnum wrote: > If you had it working in Havana I think you must have been using a > customized code base; you can still do the same for Icehouse. > -Greg > Software Engineer #42 @ http://inktank.com | http://ceph.com I was using a standard OpenStack version from…

Re: [ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-25 Thread Gregory Farnum
If you had it working in Havana I think you must have been using a customized code base; you can still do the same for Icehouse. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Fri, Apr 25, 2014 at 12:55 AM, Maciej Gałkiewicz wrote: > Hi > > After upgrading my OpenStack clu…

Re: [ceph-users] [Ceph-rgw] pool assignment

2014-04-25 Thread ghislain.chevalier
Before configuring region and zone, I would like to know which tags can be updated in the bucket.instance metadata. Are there restrictions according to the capabilities applied to radosgw-admin? -Original Message- From: ceph-users-boun...@lists.ceph.com [mailto:ceph-users-boun...@lists.ce…
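For inspection, a sketch using the radosgw-admin metadata commands; the bucket name and instance id below are hypothetical, so list first to see the real keys:

    # List bucket.instance metadata keys, then fetch one entry as JSON.
    radosgw-admin metadata list bucket.instance
    radosgw-admin metadata get bucket.instance:mybucket:default.12345.1 > bi.json
    # After editing bi.json, write it back:
    radosgw-admin metadata put bucket.instance:mybucket:default.12345.1 < bi.json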

Re: [ceph-users] UI of radosgw admin

2014-04-25 Thread alain.dechorgnat
For now, the inkscope project is independent of Ceph. The S3 user management could be finished before the end of next week. Alain -Original Message- From: Ray Lv [mailto:ra...@yahoo-inc.com] Sent: Friday, April 25, 2014 11:57 To: DECHORGNAT Alain IMT/OLPS Cc: ceph-users@lists.cep…

Re: [ceph-users] Pool with empty name recreated

2014-04-25 Thread mykr0t
I think yes; you can see the request headers in the attached radosgw.log. Can you try accessing your cluster with curl or wget using a non-existent bucket and file, and then show ceph osd dump? -- Regards, Mikhail On Fri, 25 Apr 2014 16:26:09 +0400 Irek Fasikhov wrote: > You correctly configured DNS re…
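A sketch of that reproduction, assuming subdomain-style bucket names and a hypothetical gateway at rgw.example.com:

    # Request a bucket and object that do not exist.
    curl -v http://no-such-bucket.rgw.example.com/no-such-file
    # Then check whether a pool with an empty name has reappeared:
    ceph osd dump | grep pool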

Re: [ceph-users] Pool with empty name recreated

2014-04-25 Thread Irek Fasikhov
Have you configured the DNS records correctly? 2014-04-25 16:24 GMT+04:00 : > $ radosgw-admin bucket list > [ > "test"] > > > -- > Regards, > Mikhail > > > On Fri, 25 Apr 2014 15:48:23 +0400 > Irek Fasikhov wrote: > > > Hi. > > > > radosgw-admin bucket list > > > > > > > > 2014-04-25 15:32 GMT+04:00…

Re: [ceph-users] Pool with empty name recreated

2014-04-25 Thread mykr0t
$ radosgw-admin bucket list [ "test"] -- Regards, Mikhail On Fri, 25 Apr 2014 15:48:23 +0400 Irek Fasikhov wrote: > Hi. > > radosgw-admin bucket list > > > > 2014-04-25 15:32 GMT+04:00 : > > > Hi, All. > > Yesterday I managed to reproduce the bug on my test environment > > with a fr…

Re: [ceph-users] Pool with empty name recreated

2014-04-25 Thread Irek Fasikhov
Hi. radosgw-admin bucket list 2014-04-25 15:32 GMT+04:00 : > Hi, All. > Yesterday I managed to reproduce the bug on my test environment > with a fresh installation of the dumpling release. I've attached the > link to an archive with debug logs. http://lamcdn.net/pool_with_empty_name_bug_logs.tar.gz

Re: [ceph-users] Pool with empty name recreated

2014-04-25 Thread mykr0t
Hi, All. Yesterday I managed to reproduce the bug on my test environment with a fresh installation of the dumpling release. I've attached the link to an archive with debug logs: http://lamcdn.net/pool_with_empty_name_bug_logs.tar.gz The test cluster contains only one bucket with the name "test" and one file in th…

Re: [ceph-users] UI of radosgw admin

2014-04-25 Thread Ray Lv
Hi Alain, Thanks for your prompt answer. It looks cool. It seems to be a separate project rather than one under Ceph. Will it be incorporated into Ceph? And what's the schedule for the remaining features, such as user/bucket management? Thanks, Ray On 4/22/14, 10:21 PM, "alain.dechorg...@orange.com" wrote:

Re: [ceph-users] Access denied error

2014-04-25 Thread Punit Dambiwal
Hi Yehuda, Thanks for your help... that missing date error is gone but still I am getting the access denied error :- - 2014-04-25 15:52:56.988025 7f00d37c6700 1 == starting new request req=0x237a090 = 2014-04-25 15:52:56.988072 7f00d37c6700 2 req 24:0.46::GET…

[ceph-users] OpenStack Icehouse and ephemeral disks created from image

2014-04-25 Thread Maciej Gałkiewicz
Hi After upgrading my OpenStack cluster to Icehouse I came across a very surprising bug. It is no longer possible to create cinder volumes (rbd-backed) from images (rbd-backed) by copy-on-write cloning: https://blueprints.launchpad.net/nova/+spec/rbd-clone-image-handler Both rbd volumes would be st…
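A sketch of the operation in question and of verifying the COW relationship, assuming the usual rbd-backed layout with Glance images in the 'images' pool and Cinder volumes in the 'volumes' pool; the UUIDs are placeholders:

    # Create a 10G volume from an image; with COW cloning this is near-instant.
    cinder create --image-id <image-uuid> --display-name test-vol 10
    # A true clone reports the image as its parent rather than being a full copy:
    rbd info volumes/volume-<volume-uuid> | grep parent
    # expected: parent: images/<image-uuid>@snap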

[ceph-users] FW: Ceph mds laggy and failed assert in function replay mds/journal.cc

2014-04-25 Thread Mohd Bazli Ab Karim
Dear ceph-users, I am currently facing an issue with my ceph mds server. The ceph-mds daemon does not want to come back up. I tried running it manually with ceph-mds -i mon01 -d but it aborted, and the log shows that it gets stuck at failed assert(session) at line 1303 in mds/journal.cc. Can someone shed some light…

[ceph-users] SSD journal overload?

2014-04-25 Thread Indra Pramana
Hi, On one of our test clusters, I have a node with 4 OSDs with SAS / non-SSD drives (sdb, sdc, sdd, sde) and 2 SSD drives (sdf and sdg) for journals to serve the 4 OSDs (2 each). Model: ATA ST100FM0012 (scsi) Disk /dev/sdf: 100GB Sector size (logical/physical): 512B/4096B Partition Table: gpt N…