Re: [ceph-users] rbd + openstack nova instance snapshots?

2014-10-01 Thread Sebastien Han
Hi, Unfortunately this is expected. If you take a snapshot you should not expect a clone but an RBD snapshot. Please see this BP: https://blueprints.launchpad.net/nova/+spec/implement-rbd-snapshots-instead-of-qemu-snapshots A major part of the code is ready; however, we missed the nova-specs feature

Re: [ceph-users] SSD MTBF

2014-10-01 Thread Dan Van Der Ster
On 30 Sep 2014, at 16:38, Mark Nelson mark.nel...@inktank.com wrote: On 09/29/2014 03:58 AM, Dan Van Der Ster wrote: Hi Emmanuel, This is interesting, because we’ve had sales guys telling us that those Samsung drives are definitely the best for a Ceph journal O_o ! Our sales guys or

[ceph-users] Cloudstack operations and Ceph RBD in degraded state

2014-10-01 Thread Indra Pramana
Dear all, Is anyone using CloudStack with Ceph RBD as primary storage? I am using CloudStack 4.2.0 with KVM hypervisors and the latest stable version of Ceph dumpling. Based on what I see, when the Ceph cluster is in a degraded state (not active+clean), for example because one node is down and recovering

Re: [ceph-users] IO wait spike in VM

2014-10-01 Thread Bécholey Alexandre
Hey, Thanks to all for your replies. We finished the migration to XFS yesterday morning and we can see that the load average on our VMs is back to normal. Our cluster was just a test before scaling up with bigger nodes. We don't yet know how to use the SSDs between journals (as was recommended)

Re: [ceph-users] SSD MTBF

2014-10-01 Thread Kasper Dieter
On Tue, Sep 30, 2014 at 04:38:41PM +0200, Mark Nelson wrote: On 09/29/2014 03:58 AM, Dan Van Der Ster wrote: Hi Emmanuel, This is interesting, because we've had sales guys telling us that those Samsung drives are definitely the best for a Ceph journal O_o ! Our sales guys or Samsung

Re: [ceph-users] SSD MTBF

2014-10-01 Thread Christian Balzer
On Wed, 1 Oct 2014 09:28:12 +0200 Kasper Dieter wrote: On Tue, Sep 30, 2014 at 04:38:41PM +0200, Mark Nelson wrote: On 09/29/2014 03:58 AM, Dan Van Der Ster wrote: Hi Emmanuel, This is interesting, because we've had sales guys telling us that those Samsung drives are definitely the

[ceph-users] Mapping rbd with read permission

2014-10-01 Thread Ramakrishnan Periyasamy
Hi, I have a question about mapping rbd using a client keyring file. I created the keyring as below: sudo ceph-authtool -C -n client.foo --gen-key /etc/ceph/keyring sudo chmod +r /etc/ceph/keyring sudo ceph-authtool -n client.foo --cap mds 'allow' --cap osd 'allow rw pool=pool1' --cap mon 'allow r'
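A minimal sketch of the read-only variant of the above, assuming the client name, pool and image are placeholders and that the intent is a read-only mapping (the caps shown above grant rw on pool1; flags may vary slightly by release):

  # create the key, restrict it to read-only caps, register it, then map read-only
  sudo ceph-authtool -C -n client.foo --gen-key /etc/ceph/keyring
  sudo ceph-authtool -n client.foo /etc/ceph/keyring --cap mon 'allow r' --cap osd 'allow r pool=pool1'
  sudo ceph auth add client.foo -i /etc/ceph/keyring
  sudo rbd map pool1/image1 --id foo --keyring /etc/ceph/keyring --read-only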

Re: [ceph-users] SSD MTBF

2014-10-01 Thread Martin B Nielsen
Hi, We settled on Samsung pro 840 240GB drives 1½ years ago and we've been happy so far. We've over-provisioned them a lot (left 120GB unpartitioned). We have 16x 240GB and 32x 500GB - we've lost 1x 500GB so far. smartctl states something like Wear = 092%, Hours = 12883, Datawritten = 15321.83
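For reference, a sketch of how such wear figures are typically read off the drives (the device path is a placeholder; attribute names vary by vendor, on Samsung 840-series drives they are usually Wear_Leveling_Count, Power_On_Hours and Total_LBAs_Written):

  sudo smartctl -A /dev/sdX | egrep -i 'wear|power_on|lbas_written'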

Re: [ceph-users] SSD MTBF

2014-10-01 Thread Emmanuel Lacour
On Wed, Oct 01, 2014 at 01:31:38PM +0200, Martin B Nielsen wrote: Hi, We settled on Samsung pro 840 240GB drives 1½ years ago and we've been happy so far. We've over-provisioned them a lot (left 120GB unpartitioned). We have 16x 240GB and 32x 500GB - we've lost 1x 500GB so

Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-10-01 Thread Timur Nurlygayanov
Hello Christian, Thank you for your detailed answer! I have another pre-production environment with 4 Ceph servers and 4 SSD disks per Ceph server (each Ceph OSD on a separate SSD disk). Should I move the journals to other disks, or is that not required in my case? [root@ceph-node ~]# mount
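A minimal sketch of moving an OSD journal to a dedicated partition, assuming the partition label and OSD id below are placeholders and that the OSD is stopped first:

  # ceph.conf, per-OSD section: point the journal at the new partition
  [osd.0]
      osd journal = /dev/disk/by-partlabel/journal-osd0

  # with osd.0 stopped, flush the old journal and initialise the new one
  ceph-osd -i 0 --flush-journal
  ceph-osd -i 0 --mkjournal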

Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-10-01 Thread Andrei Mikhailovsky
Timur, As far as I know, the latest master has a number of improvements for ssd disks. If you check the mailing list discussion from a couple of weeks back, you can see that the latest stable firefly is not that well optimised for ssd drives and IO is limited. However changes are being made

Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-10-01 Thread Irek Fasikhov
Timur, read this thread: https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html Timur, read this thread. 2014-10-01 16:24 GMT+04:00 Andrei Mikhailovsky and...@arhont.com: Timur, As far as I know, the latest master has a number of improvements for ssd disks. If you check the

Re: [ceph-users] rbd + openstack nova instance snapshots?

2014-10-01 Thread Jonathan Proulx
On Wed, Oct 1, 2014 at 2:57 AM, Sebastien Han sebastien@enovance.com wrote: Hi, Unfortunately this is expected. If you take a snapshot you should not expect a clone but an RBD snapshot. Unfortunate that it doesn't work, but fortunate for me that I don't need to figure out what I'm doing wrong

Re: [ceph-users] rbd + openstack nova instance snapshots?

2014-10-01 Thread Sebastien Han
On 01 Oct 2014, at 15:26, Jonathan Proulx j...@jonproulx.com wrote: On Wed, Oct 1, 2014 at 2:57 AM, Sebastien Han sebastien@enovance.com wrote: Hi, Unfortunately this is expected. If you take a snapshot you should not expect a clone but an RBD snapshot. Unfortunate that it doesn't

Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-10-01 Thread Gregory Farnum
On Wed, Oct 1, 2014 at 5:24 AM, Andrei Mikhailovsky and...@arhont.com wrote: Timur, As far as I know, the latest master has a number of improvements for ssd disks. If you check the mailing list discussion from a couple of weeks back, you can see that the latest stable firefly is not that well

Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-10-01 Thread Andrei Mikhailovsky
Greg, are they going to be a part of the next stable release? Cheers - Original Message - From: Gregory Farnum g...@inktank.com To: Andrei Mikhailovsky and...@arhont.com Cc: Timur Nurlygayanov tnurlygaya...@mirantis.com, ceph-users ceph-us...@ceph.com Sent: Wednesday, 1 October,

Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-10-01 Thread Christian Balzer
Hello, On Wed, 1 Oct 2014 13:24:43 +0100 (BST) Andrei Mikhailovsky wrote: Timur, As far as I know, the latest master has a number of improvements for ssd disks. If you check the mailing list discussion from a couple of weeks back, you can see that the latest stable firefly is not that

Re: [ceph-users] SSD MTBF

2014-10-01 Thread Christian Balzer
Hello, On Wed, 1 Oct 2014 13:31:38 +0200 Martin B Nielsen wrote: Hi, We settled on Samsung pro 840 240GB drives 1½ years ago and we've been happy so far. We've over-provisioned them a lot (left 120GB unpartitioned). We have 16x 240GB and 32x 500GB - we've lost 1x 500GB so far.

[ceph-users] Ceph mds remove data pool

2014-10-01 Thread Thomas Lemarchand
Hello everyone, I plan to use CephFS in production with the Giant release, knowing it's not perfectly ready at the moment, and to keep a hot backup. That said, I'm currently testing CephFS on version 0.80.5. I have a 7-server cluster (3 mon, 3 osd, 1 mds) and 30 osd (disks). My mds has been working

Re: [ceph-users] Ceph mds remove data pool

2014-10-01 Thread John Spray
Thomas, Sounds like you're looking for ceph mds remove_data_pool. In general you would do that *before* removing the pool itself (in more recent versions we enforce that). John On Wed, Oct 1, 2014 at 4:58 PM, Thomas Lemarchand thomas.lemarch...@cloud-solutions.fr wrote: Hello everyone, I

Re: [ceph-users] Ceph mds remove data pool

2014-10-01 Thread Thomas Lemarchand
Thank you very much, it's what I needed. root@a-mon:~# ceph mds remove_data_pool 3 removed data pool 3 from mdsmap It worked, and the mds is OK. -- Thomas Lemarchand Cloud Solutions SAS - Head of Information Systems On Wed, 2014-10-01 at 17:02 +0100, John Spray wrote: Thomas,
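For anyone following along, a sketch of the full sequence, with the pool name and id as placeholders:

  ceph mds dump | grep data_pools      # confirm which pool ids the mdsmap references
  ceph mds remove_data_pool 3          # detach pool id 3 from the mdsmap
  ceph osd pool delete mypool mypool --yes-i-really-really-mean-it   # only then delete the pool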

Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-10-01 Thread Gregory Farnum
All the stuff I'm aware of is part of the testing we're doing for Giant. There is probably ongoing work in the pipeline, but the fast dispatch, sharded work queues, and sharded internal locking structures that Somnath has discussed all made it. -Greg Software Engineer #42 @ http://inktank.com |

Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-10-01 Thread Mark Nelson
On 10/01/2014 11:18 AM, Gregory Farnum wrote: All the stuff I'm aware of is part of the testing we're doing for Giant. There is probably ongoing work in the pipeline, but the fast dispatch, sharded work queues, and sharded internal locking structures that Somnath has discussed all made it. I

Re: [ceph-users] Why performance of benchmarks with small blocks is extremely small?

2014-10-01 Thread Gregory Farnum
On Wed, Oct 1, 2014 at 9:21 AM, Mark Nelson mark.nel...@inktank.com wrote: On 10/01/2014 11:18 AM, Gregory Farnum wrote: All the stuff I'm aware of is part of the testing we're doing for Giant. There is probably ongoing work in the pipeline, but the fast dispatch, sharded work queues, and

[ceph-users] OSD - choose the right controller card, HBA/IT mode explanation

2014-10-01 Thread Massimiliano Cuttini
Dear all, I need a few tips about the best disk controller solution for Ceph. I'm getting confused about IT mode, RAID and JBOD. I have read many posts saying not to go for RAID but to use a JBOD configuration instead. I have 2 storage alternatives in mind right now: SuperStorage Server

[ceph-users] ceph hadoop

2014-10-01 Thread Gurmeet Singh
Hi, I am trying to run Hadoop with Ceph as the backend. I installed libcephfs-jni and libcephfs-java to get libcephfs.jar and the related .so libraries. I also compiled cephfs-hadoop-1.0-SNAPSHOT.jar from https://github.com/GregBowyer/cephfs-hadoop since this was the only jar which
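As a hedged sketch (the monitor address is a placeholder and exact property names can differ between cephfs-hadoop versions), the bindings of that era were typically wired up through core-site.xml, with libcephfs.jar and the cephfs-hadoop jar added to the Hadoop classpath:

  <property>
    <name>fs.default.name</name>
    <value>ceph://mon-host:6789/</value>
  </property>
  <property>
    <name>fs.ceph.impl</name>
    <value>org.apache.hadoop.fs.ceph.CephFileSystem</value>
  </property>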

Re: [ceph-users] OSD - choose the right controller card, HBA/IT mode explanation

2014-10-01 Thread Christian Balzer
Hello, On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote: Dear all, I need a few tips about the best disk controller solution for Ceph. I'm getting confused about IT mode, RAID and JBOD. I have read many posts saying not to go for RAID but to use a JBOD configuration instead. I have 2

Re: [ceph-users] OSD - choose the right controller card, HBA/IT mode explanation

2014-10-01 Thread Massimiliano Cuttini
Hello Christian, On 01/10/2014 at 19:20, Christian Balzer wrote: Hello, On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote: Dear all, I need a few tips about the best disk controller solution for Ceph. I'm getting confused about IT mode, RAID and JBOD. I read many posts about

Re: [ceph-users] Ceph Developer Summit: Hammer

2014-10-01 Thread Sage Weil
[adding ceph-devel] On Tue, Sep 30, 2014 at 5:30 PM, Yann Dupont y...@objoo.org wrote: On 30/09/2014 at 22:55, Patrick McGarry wrote: Hey cephers, The schedule and call for blueprints is now up for our next CDS as we aim for the Hammer release:

Re: [ceph-users] Ceph Developer Summit: Hammer

2014-10-01 Thread Patrick McGarry
On Wed, Oct 1, 2014 at 4:32 PM, Sage Weil sw...@redhat.com wrote: [adding ceph-devel] Is there a way to know if those blueprints are implemented, or in active development? In the case of postponed blueprints, is there a way to promote them again, to get consideration for Hammer? Hmm,

Re: [ceph-users] Ceph Developer Summit: Hammer

2014-10-01 Thread Yann Dupont
On 01/10/2014 at 22:32, Sage Weil wrote: https://wiki.ceph.com/Planning/Blueprints/Giant/librados%3A_support_parallel_reads (I made some comments yesterday) Not implemented OK, and even an older one (somewhat related):

[ceph-users] Bad cluster benchmark results

2014-10-01 Thread Jakes John
Hi Ceph users, I am puzzled by the benchmark results I obtained from my Ceph cluster. Ceph cluster: 1 mon node and 4 OSD nodes of 1 TB each. I have one journal for each OSD. All disks are identical and the nodes are connected by 10 G. Below are the dd results: dd if=/dev/zero
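A sketch of the usual baseline comparison for threads like this, with paths and pool name as placeholders: dd with direct I/O against a single OSD data disk, then rados bench against a test pool.

  # per-disk baseline; oflag=direct bypasses the page cache so the disk is measured, not RAM
  dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/ddtest bs=1M count=1024 oflag=direct
  # cluster-level baseline: 30 seconds of writes, then sequential reads of the same objects
  rados bench -p testpool 30 write --no-cleanup
  rados bench -p testpool 30 seq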

[ceph-users] endpoints used during synchronization

2014-10-01 Thread Lyn Mitchell
Hello all, For a federated configuration, does the radosgw-agent use any type of prioritization with regard to the way endpoints are used for the synchronization process (i.e. the order they are listed in the region map, perhaps the rgw dns name used, etc.)? We have a dedicated node in each zone to

[ceph-users] Add osd to a Ceph cluster : how to choose the osd id?

2014-10-01 Thread Francois Lafont
Hi, I use Ceph firefly (0.80.6) on Ubuntu Trusty (14.04). When I add a new osd to a Ceph cluster, I run these commands: uuid=$(uuidgen) osd_id=$(ceph --cluster my_cluster osd create $uuid) printf "The id of this osd will be $osd_id.\n" And the osd id is chosen automatically by the
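For context, a sketch of the rest of the manual OSD-add sequence those commands come from, with the uuid, id, weight, mount point and hostname as placeholders:

  uuid=$(uuidgen)
  osd_id=$(ceph osd create "$uuid")
  mkdir -p /var/lib/ceph/osd/ceph-"$osd_id"     # mount the OSD's data disk here first
  ceph-osd -i "$osd_id" --mkfs --mkkey --osd-uuid "$uuid"
  ceph auth add osd."$osd_id" osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-"$osd_id"/keyring
  ceph osd crush add osd."$osd_id" 1.0 host=srv1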

Re: [ceph-users] endpoints used during synchronization

2014-10-01 Thread Lyn Mitchell
Sorry all for the typo. The master in zone-1 is rgw01-zone1-r1.domain-name.com, not rgw01-zone1-d1.domain-name.com. The first paragraph should have read as follows: For a federated configuration, does the radosgw-agent use any type of prioritization with regard to the way endpoints are used for

Re: [ceph-users] Add osd to a Ceph cluster : how to choose the osd id?

2014-10-01 Thread Loic Dachary
Hi François, It's probably better to leave the choice of OSD id to the Ceph cluster. Why do you need it? Cheers On 02/10/2014 00:38, Francois Lafont wrote: Hi, I use Ceph firefly (0.80.6) on Ubuntu Trusty (14.04). When I add a new osd to a Ceph cluster, I run these commands:

Re: [ceph-users] Add osd to a Ceph cluster : how to choose the osd id?

2014-10-01 Thread Francois Lafont
On 02/10/2014 at 00:53, Loic Dachary wrote: Hi François, Hello, It's probably better to leave the choice of OSD id to the Ceph cluster. Ah, ok. Why do you need it? It's just to have: srv1 172.31.10.1 -- osd-1 srv2 172.31.10.2 -- osd-2 srv3 172.31.10.3 -- osd-3 It's more friendly than: srv1

Re: [ceph-users] Ceph Developer Summit: Hammer

2014-10-01 Thread Alexandre DERUMIER
Hi, any news about this blueprint? https://wiki.ceph.com/Planning/Blueprints/Giant/rbd%3A_journaling Regards, Alexandre - Original Message - From: Sage Weil sw...@redhat.com To: Patrick McGarry patr...@inktank.com Cc: Ceph-User ceph-us...@ceph.com, ceph-de...@vger.kernel.org Sent:

Re: [ceph-users] endpoints used during synchronization

2014-10-01 Thread Yehuda Sadeh
The agent itself only talks to the gateways it was configured to use. However, for a cross-zone copy of objects, the gateway will round-robin across the endpoints specified in its region map. Yehuda On Wed, Oct 1, 2014 at 3:46 PM, Lyn Mitchell mitc...@bellsouth.net wrote: Sorry all for the

Re: [ceph-users] v0.80.6 Firefly released

2014-10-01 Thread Yehuda Sadeh
On Wed, Oct 1, 2014 at 5:56 PM, Sage Weil s...@inktank.com wrote: This is a major bugfix release for firefly, fixing a range of issues in the OSD and monitor, particularly with cache tiering. There are also important fixes in librados, with the watch/notify mechanism used by librbd, and in

Re: [ceph-users] OSD - choose the right controller card, HBA/IT mode explanation

2014-10-01 Thread Christian Balzer
On Wed, 01 Oct 2014 20:12:03 +0200 Massimiliano Cuttini wrote: Hello Christian, On 01/10/2014 at 19:20, Christian Balzer wrote: Hello, On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote: Dear all, I need a few tips about the best disk controller solution for Ceph.