Re: [ceph-users] Mapping rbd with read permission

2014-10-02 Thread Ilya Dryomov
On Wed, Oct 1, 2014 at 2:56 PM, Ramakrishnan Periyasamy wrote: > Hi, > > I have a question about mapping an rbd using a client keyring file. I created the keyring as below: > > sudo ceph-authtool -C -n client.foo --gen-key /etc/ceph/keyring > > sudo chmod +r /etc/ceph/keyring > > sudo ceph-authtool -n cli
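
For reference, one way to create a read-only client key and map an image with it might look like the sketch below; the image and pool names are placeholders, not from the original mail, and a key with only read caps generally wants the mapping itself to be read-only:

    sudo ceph auth get-or-create client.foo mon 'allow r' osd 'allow r' \
        -o /etc/ceph/ceph.client.foo.keyring
    sudo rbd map myimage --pool rbd --id foo \
        --keyring /etc/ceph/ceph.client.foo.keyring --read-only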

Re: [ceph-users] Bad cluster benchmark results

2014-10-02 Thread Christian Balzer
Hello, On Wed, 1 Oct 2014 23:08:53 -0700 Jakes John wrote: > Thanks Christian, you saved me time! I mistakenly assumed the -b value to be > in KB. > > Now, when I run the same benchmarks, I am getting ~106 MB/s for writes and > ~1050 MB/s for reads with a replica count of 2. > > I am slightly confused about th
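
For reference, the -b flag to rados bench takes bytes, not KB; a minimal run with an explicit 4 MB block size might look like this sketch (pool name and durations are placeholders):

    rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup
    rados bench -p testpool 60 seq -t 16
    rados -p testpool cleanup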

Re: [ceph-users] OSD - choose the right controller card, HBA/IT mode explanation

2014-10-02 Thread Massimiliano Cuttini
On 02/10/2014 03:18, Christian Balzer wrote: On Wed, 01 Oct 2014 20:12:03 +0200 Massimiliano Cuttini wrote: Hello Christian, On 01/10/2014 19:20, Christian Balzer wrote: Hello, On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote: Dear all, I need a few tips about Ceph

Re: [ceph-users] ceph debian systemd

2014-10-02 Thread Carl-Johan Schenström
Robert LeBlanc wrote: > Systemd is supposed to still use the init.d scripts if they are > present, however I've run into problems with it on my CentOS 7 > boxes. The biggest issue is that systemd does not like having multiple > arguments to the scripts. There is a systemd directory in the Master
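
A rough illustration of the argument problem, assuming the sysvinit script is still installed at the usual path; behaviour varies by distro, so treat this as a sketch rather than a recipe:

    # systemd's SysV wrapper may swallow the extra daemon argument, so this
    # can end up acting on all daemons instead of just osd.0:
    sudo service ceph start osd.0
    # calling the init script directly bypasses systemd:
    sudo /etc/init.d/ceph start osd.0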

[ceph-users] Multi node dev environment

2014-10-02 Thread Johnu George (johnugeo)
Hi, I was trying to set up a multi-node dev environment. Until now, I was building Ceph by running ./configure and make, and then testing features with vstart on a single node. If I instead need to use a multi-node cluster for testing, what is the proper way to do it?
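
For context, the single-node workflow mentioned above is roughly the following (a sketch, run from the ceph source tree after make):

    cd src
    MON=1 OSD=3 MDS=0 ./vstart.sh -d -n -x   # -n new cluster, -d debug output, -x enable cephx
    ./ceph -c ceph.conf -s                   # query the local dev cluster
    ./stop.sh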

Re: [ceph-users] Multi node dev environment

2014-10-02 Thread Loic Dachary
Hi, I would use ceph-deploy http://ceph.com/docs/master/start/quick-start-preflight/#ceph-deploy-setup but ... I've only done tests a few times and other people may have a more elaborate answer to this question ;-) Cheers On 02/10/2014 15:44, Johnu George (johnugeo) wrote:> Hi, > I was trying
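
The quick-start flow linked above boils down to something like this sketch (hostnames and disks are placeholders; see the preflight page for repository and user setup):

    ceph-deploy new mon1
    ceph-deploy install mon1 osd1 osd2
    ceph-deploy mon create-initial
    ceph-deploy osd create osd1:sdb osd2:sdb
    ceph-deploy admin mon1 osd1 osd2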

Re: [ceph-users] SSD MTBF

2014-10-02 Thread Ron Allred
One thing being missed: the Samsung 850 Pro has only been available for about 1-2 months. The OP noted that drives are failing after approx. 1 year. This would probably mean the SSDs are actually Samsung 840 Pros. The write durabilities of the 850 and 840 are quite different. That being said, Sams

Re: [ceph-users] OSD - choose the right controller card, HBA/IT mode explanation

2014-10-02 Thread Christian Balzer
On Thu, 02 Oct 2014 12:20:06 +0200 Massimiliano Cuttini wrote: > > On 02/10/2014 03:18, Christian Balzer wrote: > > On Wed, 01 Oct 2014 20:12:03 +0200 Massimiliano Cuttini wrote: > > > >> Hello Christian, > >> > >> > >> On 01/10/2014 19:20, Christian Balzer wrote: > >>> Hello, > >>> > >

Re: [ceph-users] SSD MTBF

2014-10-02 Thread Emmanuel Lacour
On 02/10/2014 17:14, Ron Allred wrote: > One thing being missed: > > the Samsung 850 Pro has only been available for about 1-2 months. > > The OP noted that drives are failing after approx. 1 year. This would > probably mean the SSDs are actually Samsung 840 Pros. The > write durabilities of the 850

Re: [ceph-users] SSD MTBF

2014-10-02 Thread Massimiliano Cuttini
I don't think this is true. If you have an SSD of 60GB or 100GB, then your TBW/day is really limited (the disk is small, so it will always be writing to the same sectors). The bigger the SSD, the longer it will live: you have a limited number of writes per day, so if your disk is bigger you have more sectors to
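
The relationship being described can be put as a back-of-the-envelope formula, TBW ~= drive writes per day * capacity (TB) * warranty days, so a larger drive of the same family does absorb more total writes. A sketch with placeholder ratings, not datasheet values:

    dwpd=0.3; capacity_tb=0.8; years=5     # placeholder endurance rating and size
    echo "approx TBW: $(echo "$dwpd * $capacity_tb * 365 * $years" | bc)"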

Re: [ceph-users] SSD MTBF

2014-10-02 Thread Adam Boyhan
What about the Intel DC S3500 instead of the DC S3700? - Original Message - From: "Emmanuel Lacour" To: ceph-users@lists.ceph.com Sent: Thursday, October 2, 2014 11:48:26 AM Subject: Re: [ceph-users] SSD MTBF On 02/10/2014 17:14, Ron Allred wrote: > One thing being missed, >

Re: [ceph-users] SSD MTBF

2014-10-02 Thread Emmanuel Lacour
On 02/10/2014 17:58, Adam Boyhan wrote: > What about the Intel DC S3500 instead of the DC S3700? > There is roughly a factor of 10 in supported TBW between the two. See the small analysis I made earlier in this thread regarding the cost per TBW. -- Easter-eggs Spécialist
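
A sketch of the kind of cost-per-TBW comparison referred to here, with purely hypothetical prices and endurance figures (check current datasheets and street prices before drawing conclusions):

    price_a=500;  tbw_a=450    # hypothetical price (USD) and endurance (TBW), write-light drive
    price_b=1700; tbw_b=4500   # hypothetical figures for the write-heavy drive
    echo "drive A: $(echo "scale=2; $price_a / $tbw_a" | bc) USD per TB written"
    echo "drive B: $(echo "scale=2; $price_b / $tbw_b" | bc) USD per TB written"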

Re: [ceph-users] SSD MTBF

2014-10-02 Thread Emmanuel Lacour
On 02/10/2014 17:50, Massimiliano Cuttini wrote: > I don't think this is true. > > If you have an SSD of 60GB or 100GB then your TBW/day is really > limited (the disk is small, so it will always be writing to the same sectors). > The bigger the SSD, the longer it will live: you have a limited number of writes >

[ceph-users] Support the Ada Initiative: a challenge to the open storage community

2014-10-02 Thread Sage Weil
I'd like to take a moment away from your regularly scheduled storage revolution to talk about the Ada Initiative: who they are, what they do, and why it is important to open source storage communities. I'm also going to challenge you to raise $8192 for them, and I'll match that dollar for dolla

[ceph-users] Ceph SSD array with Intel DC S3500's

2014-10-02 Thread Adam Boyhan
Hey everyone, loving Ceph so far! We are looking to roll out a Ceph cluster with all SSDs. Our application is around 30% writes and 70% reads, random IO. The plan is to start with roughly 8 servers with 8 800GB Intel DC S3500s per server. I wanted to get some input on the use of the DC S3500.
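
One sanity check worth doing for an all-S3500 design is a rough endurance estimate; with filestore, every client write is multiplied by the replication factor and then doubled again when the journal sits on the same SSD. The figures below are assumptions for illustration only:

    client_tb_per_day=2; replicas=3; ssds=64   # assumed workload, replication and drive count
    echo "TB written per SSD per day: $(echo "scale=3; $client_tb_per_day * $replicas * 2 / $ssds" | bc)"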

Re: [ceph-users] Ceph SSD array with Intel DC S3500's

2014-10-02 Thread Mark Nelson
On 10/02/2014 12:48 PM, Adam Boyhan wrote: Hey everyone, loving Ceph so far! Hi! We are looking to roll out a Ceph cluster with all SSDs. Our application is around 30% writes and 70% reads, random IO. The plan is to start with roughly 8 servers with 8 800GB Intel DC S3500s per server. I

Re: [ceph-users] Multi node dev environment

2014-10-02 Thread Johnu George (johnugeo)
How do I use ceph-deploy in this case? How do I get ceph-deploy to use my privately built ceph package (with my changes) and install it on all ceph nodes? Johnu On 10/2/14, 7:22 AM, "Loic Dachary" wrote: >Hi, > >I would use ceph-deploy >http://ceph.com/docs/master/start/quick-start-prefligh

[ceph-users] ceph, ssds, hdds, journals and caching

2014-10-02 Thread Andrei Mikhailovsky
Hello Cephers, I am a bit lost on the best way of using SSDs and HDDs for a Ceph cluster which uses rbd + kvm for guest VMs. At the moment I've got 2 osd servers which currently have 8 hdd osds (max 16 bays) each and 4 ssd disks. Currently, I am using 2 ssds for osd journals and I've got 2x512
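
If the SSDs are meant to back a cache pool as well, the firefly-era cache-tier commands look roughly like the sketch below; the pool names are placeholders and the sizing/eviction targets need real tuning:

    ceph osd pool create cache-pool 512
    ceph osd tier add rbd-pool cache-pool
    ceph osd tier cache-mode cache-pool writeback
    ceph osd tier set-overlay rbd-pool cache-pool
    ceph osd pool set cache-pool hit_set_type bloom
    ceph osd pool set cache-pool target_max_bytes 500000000000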

Re: [ceph-users] Multi node dev environment

2014-10-02 Thread Somnath Roy
I think you should just skip the 'ceph-deploy install' command and install your version of the ceph package on all the nodes manually. Otherwise there is ceph-deploy install --dev that you can try out. Thanks & Regards Somnath -Original Message- From: ceph-users [mailto:ceph-users-boun...@lists.

Re: [ceph-users] Multi node dev environment

2014-10-02 Thread Alfredo Deza
On Thu, Oct 2, 2014 at 4:07 PM, Johnu George (johnugeo) wrote: > How do I use ceph-deploy in this case? How do I get ceph-deploy to use my > privately built ceph package (with my changes) and install it on all > ceph nodes? That would not be possible with ceph-deploy *unless* you have a reposi
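
Both suggestions translate to something like the sketch below; the branch name, repo URL and hostnames are placeholders, and the --dev form pulls from the upstream development builds, so a locally patched build needs to be published in its own package repository first:

    # install a development branch published upstream:
    ceph-deploy install --dev wip-my-branch node1 node2 node3
    # or point ceph-deploy at your own package repository:
    ceph-deploy install --repo-url http://repo.example.com/ceph \
        --gpg-url http://repo.example.com/release.asc node1 node2 node3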

Re: [ceph-users] Multi node dev environment

2014-10-02 Thread Johnu George (johnugeo)
Hi Somnath, I will try out the --dev option which you mentioned. Does it mean that I have to remove the OSDs and MONs each time and then do ceph-deploy install --dev, ceph mon create, ceph osd create? The problem with the first option is that I have to manually install on 5-6 nodes each time fo

Re: [ceph-users] Ceph Developer Summit: Hammer

2014-10-02 Thread Patrick McGarry
Hey Alexandre, Sounds like work has started on this (under the title "RBD Mirroring") and is continuing in Hammer. I have updated the title and moved it to Hammer so there will be a design discussion at CDS. Thanks! https://wiki.ceph.com/Planning/Blueprints/Hammer/RBD%3A_Mirroring Best Regard

Re: [ceph-users] Ceph SSD array with Intel DC S3500's

2014-10-02 Thread Christian Balzer
Hello, On Thu, 2 Oct 2014 13:48:27 -0400 (EDT) Adam Boyhan wrote: > Hey everyone, loving Ceph so far! > > We are looking to roll out a Ceph cluster with all SSDs. Our > application is around 30% writes and 70% reads, random IO. The plan is to > start with roughly 8 servers with 8 800GB Intel D

Re: [ceph-users] ceph, ssds, hdds, journals and caching

2014-10-02 Thread Christian Balzer
On Thu, 2 Oct 2014 21:54:54 +0100 (BST) Andrei Mikhailovsky wrote: > Hello Cephers, > > I am a bit lost on the best way of using SSDs and HDDs for a Ceph cluster > which uses rbd + kvm for guest VMs. > > At the moment I've got 2 osd servers which currently have 8 hdd osds > (max 16 bays) each an

Re: [ceph-users] How to avoid deep-scrubbing performance hit?

2014-10-02 Thread Mark Kirkwood
We are also becoming interested in understanding and taming the impact of deep scrubbing. We may start running something similar to the cron tasks mentioned. Looking at these fine examples of bash + awk, I wondered whether I could do the job using the python rados API. I have attached my initial (un
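
For comparison with the python rados approach, a bash + awk cron job along the lines discussed in this thread might look like the sketch below; the awk column numbers are an assumption, so check the header line of ceph pg dump for deep_scrub_stamp and adjust them first:

    # kick off deep scrubs on the 5 PGs with the oldest deep_scrub_stamp
    ceph pg dump 2>/dev/null \
        | awk '$1 ~ /^[0-9]+\.[0-9a-f]+$/ {print $20, $21, $1}' \
        | sort \
        | head -n 5 \
        | while read d t pg; do ceph pg deep-scrub "$pg"; done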