Re: [ceph-users] Regarding Primary affinity configuration

2014-10-10 Thread Johnu George (johnugeo)
complex. As in your example: if osd0, osd1, and osd2 have primary affinity values of [1, 0.5, 0.1] and there are 600 pgs, the final distribution comes out to 440:140:20, or 22:7:1, which is slightly skewed from what you would expect. Johnu On 10/9/14, 4:51 PM, "Gregory Farnum" wrote: >On Thu, Oct 9, 2014 at 4:24 PM,
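
For anyone wanting to reproduce the numbers: primary affinity is set per OSD from the CLI. A minimal sketch using the affinity values from the example above (on Firefly-era releases the monitors also need the option enabled first):

  ceph tell mon.\* injectargs '--mon_osd_allow_primary_affinity=1'
  # affinity is a weight in [0,1]; these values match the example
  ceph osd primary-affinity osd.0 1.0
  ceph osd primary-affinity osd.1 0.5
  ceph osd primary-affinity osd.2 0.1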

Re: [ceph-users] Regarding Primary affinity configuration

2014-10-09 Thread Johnu George (johnugeo)
Hi Greg, Thanks for your extremely informative post. My related questions are posted inline. On 10/9/14, 2:21 PM, "Gregory Farnum" wrote: >On Thu, Oct 9, 2014 at 10:55 AM, Johnu George (johnugeo) > wrote: >> Hi All, >> I have a few questions regardi

Re: [ceph-users] Monitor segfaults when updating the crush map

2014-10-09 Thread Johnu George (johnugeo)
rack1 and may become unbalanced. (Ensure enough storage in rack1.) Thanks, Johnu From: Stephen Jahl <stephenj...@gmail.com> Date: Thursday, October 9, 2014 at 11:11 AM To: Loic Dachary <l...@dachary.org> Cc: "ceph-users@lists.ceph.com"
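
For reference, the usual round trip for hand-editing a crush map looks like this (a sketch; the file names are arbitrary):

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt
  # edit crushmap.txt (e.g. the rack1 rule), then recompile and inject:
  crushtool -c crushmap.txt -o crushmap.new
  ceph osd setcrushmap -i crushmap.new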

[ceph-users] Regarding Primary affinity configuration

2014-10-09 Thread Johnu George (johnugeo)
What happens for a situation with [1,0.5,1]? Is osd.0 always returned? D) After calculating the primary based on the affinity values, I see a shift of osds so that the primary comes to the front. Why is this needed? I thought primary affinity values affect only reads and hence, osd ordering need not
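
Incidentally, you can see which OSD ended up primary for any pg; for a replicated pool the primary is the first entry in the acting set. A sketch with a hypothetical pgid:

  ceph pg map 0.1
  # -> osdmap eNNN pg 0.1 (0.1) -> up [2,0,1] acting [2,0,1]
  # the first osd in the acting set (osd.2 here) is the primary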

Re: [ceph-users] Multi node dev environment

2014-10-07 Thread Johnu George (johnugeo)
Thanks Alfredo. Is there any other possible way that will work for my situation? Anything would be helpful. Johnu On 10/7/14, 2:25 PM, "Alfredo Deza" wrote: >On Tue, Oct 7, 2014 at 5:05 PM, Johnu George (johnugeo) > wrote: >> Even when I try ceph-deploy install --dev, I &

Re: [ceph-users] Multi node dev environment

2014-10-07 Thread Johnu George (johnugeo)
Even when I try ceph-deploy install --dev, I am seeing that it is getting installed from the official ceph repo. How can I install ceph from my github repo or my local repo on all ceph nodes? (Or is there any other possibility?) Can someone help me in setting this up? Johnu On 10/2/14, 1:55 PM, "So
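
One approach that should work, assuming you build packages from your branch and serve them over HTTP (the URL below is hypothetical), is to point ceph-deploy at your own repo; recent ceph-deploy versions accept --repo-url and --gpg-url for exactly this:

  ceph-deploy install --repo-url http://myrepo.example.com/debian \
    --gpg-url http://myrepo.example.com/release.asc node1 node2 node3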

Re: [ceph-users] Multi node dev environment

2014-10-02 Thread Johnu George (johnugeo)
for every small change. Johnu On 10/2/14, 1:55 PM, "Somnath Roy" wrote: >I think you should just skip 'ceph-deploy install' command and install >your version of the ceph package in all the nodes manually. >Otherwise there is ceph-deploy install --dev you can try out.
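
The manual route Somnath describes would look roughly like this on each node (a sketch, assuming a Debian-based distro and the autotools-era source tree; the fork URL is a placeholder):

  git clone https://github.com/<you>/ceph.git && cd ceph
  ./autogen.sh && ./configure && make -j4
  sudo make install
  # or, to avoid rebuilding on every node, build .debs once and copy them around:
  dpkg-buildpackage -us -uc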

Re: [ceph-users] Multi node dev environment

2014-10-02 Thread Johnu George (johnugeo)
How do I use ceph-deploy in this case? How do I get ceph-deploy to use my privately built ceph package (with my changes) and install it on all ceph nodes? Johnu On 10/2/14, 7:22 AM, "Loic Dachary" wrote: >Hi, > >I would use ceph-deploy >http://ceph.com/docs/ma
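
For the record, the --dev form suggested elsewhere in the thread is used like this (as far as I know it pulls builds from gitbuilder.ceph.com, so the branch must exist in the upstream ceph.git — which is why it does not help with a private fork; the branch name is a placeholder):

  ceph-deploy install --dev=wip-my-branch node1 node2 node3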

[ceph-users] Multi node dev environment

2014-10-02 Thread Johnu George (johnugeo)
?. If I need to run benchmarks (using rados bench or other benchmarking tools) after any code change, what is the right practice for testing a change in a multi node dev setup? (A multi node setup is needed to get accurate performance results in benchmark tests.) Thanks, Johnu
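
For the benchmark itself, a minimal rados bench run looks like this (the pool name and pg count are arbitrary; --no-cleanup keeps the written objects so the read test has something to read):

  ceph osd pool create testpool 128
  rados bench -p testpool 60 write --no-cleanup
  rados bench -p testpool 60 seq
  rados -p testpool cleanup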

Re: [ceph-users] Crushmap ruleset for rack aware PG placement

2014-09-16 Thread Johnu George (johnugeo)
Hi Daniel, Can you provide your exact crush map and the exact crushtool command that results in the segfaults? Johnu On 9/16/14, 10:23 AM, "Daniel Swarbrick" wrote: >Replying to myself, and for the benefit of other caffeine-starved people: > >Setting the last rule to &
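
crushtool can also replay the mapping offline, which makes problems like this reproducible without touching the monitors. A sketch (the rule number and replica count are assumptions):

  crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-statistics
  crushtool -i crushmap.bin --test --rule 1 --num-rep 3 --show-bad-mappings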

Re: [ceph-users] How to set Object Size/Stripe Width/Stripe Count?

2013-08-08 Thread johnu
This can help you. http://www.sebastien-han.fr/blog/2013/02/11/mount-a-specific-pool-with-cephfs/ On Thu, Aug 8, 2013 at 7:48 AM, Da Chun wrote: > Hi list, > I saw the info about data striping in > http://ceph.com/docs/master/architecture/#data-striping . > But couldn't find the way to set thes
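
For CephFS specifically, newer kernels expose layouts as virtual xattrs per directory (a sketch; the mount point and values are examples, and older setups used the cephfs tool instead):

  getfattr -n ceph.dir.layout /mnt/cephfs/mydir
  setfattr -n ceph.dir.layout.stripe_unit -v 1048576 /mnt/cephfs/mydir
  setfattr -n ceph.dir.layout.stripe_count -v 4 /mnt/cephfs/mydir
  setfattr -n ceph.dir.layout.object_size -v 4194304 /mnt/cephfs/mydir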

[ceph-users] rbd read write very slow for heavy I/O operations

2013-07-30 Thread johnu
Hi, I have an openstack cluster which runs on ceph. I tried running hadoop inside VMs and noticed that map tasks take longer and longer to complete over time and finally fail. RBD reads/writes are getting slower with time. Is it because of too many objects in ceph per volume? I have 8 node clu
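
Two things worth checking while the slowdown is happening (the image name below is hypothetical): rbd info gives the object size (2^order) and image size, from which the object count per volume follows, and ceph -s shows whether the cluster itself is falling behind:

  ceph -s
  rbd info volumes/volume-0001
  # objects per volume ~= image size / 2^order; order 22 means 4 MB objects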

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread johnu
, Gregory Farnum wrote: > On Fri, Jul 26, 2013 at 10:11 AM, johnu wrote: > > Greg, > > Yes, the outputs match > > Nope, they don't. :) You need the secret_uuid to be the same on each > node, because OpenStack is generating configuration snippets on one > node (which
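
The usual way to get the same UUID on every compute node is to generate the secret definition once and define it everywhere with the identical UUID. A sketch following the rbd-openstack doc (the UUID is the example value from that doc): create a file secret.xml containing

  <secret ephemeral='no' private='no'>
    <uuid>457eb676-33da-42ec-9194-0f29973e6ca9</uuid>
    <usage type='ceph'>
      <name>client.volumes secret</name>
    </usage>
  </secret>

then on each compute node run

  virsh secret-define --file secret.xml
  virsh secret-set-value --secret 457eb676-33da-42ec-9194-0f29973e6ca9 --base64 $(ceph auth get-key client.volumes)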

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread johnu
I am surprised that no one else has seen or reported this issue. Any idea? On Fri, Jul 26, 2013 at 9:45 AM, Gregory Farnum wrote: > On Fri, Jul 26, 2013 at 9:35 AM, johnu wrote: > > Greg, > > I verified in all cluster nodes that rbd_secret_uuid is same as > > virsh secr

Re: [ceph-users] Cinder volume creation issues

2013-07-26 Thread johnu
n Fri, Jul 26, 2013 at 9:17 AM, johnu wrote: > > Hi all, > > I need to know whether someone else also faced the same issue. > > > > > > I tried openstack + ceph integration. I have seen that I could create > > volumes from horizon and it is created in r

[ceph-users] Cinder volume creation issues

2013-07-26 Thread johnu
Hi all, I need to know whether someone else has also faced the same issue. I tried openstack + ceph integration. I have seen that I can create volumes from horizon and they are created in rados. When I check the created volumes in the admin panel, all volumes are shown to be created on the same h
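
To see where cinder thinks a volume lives, the admin-only host attribute is visible from the CLI (the id is a placeholder):

  cinder show <volume-id> | grep os-vol-host-attr:host

With an rbd backend this is usually just the node running cinder-volume, not where the data lives — the data is in the pool — so all volumes reporting the same host may be expected.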

Re: [ceph-users] Error when volume is attached in openstack

2013-07-24 Thread johnu
you have for your client.volumes? > > On Jul 24, 2013, at 12:11 PM, johnu wrote: > > Abel, > What did you change in nova.conf? I have added rbd_username and > rbd_secret_uuid in cinder.conf. I verified that rbd_secret_uuid is the same as in > virsh secret-list. > > > On W
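
For comparison, the cinder.conf lines from the rbd-openstack doc of that era look like this (option names moved between releases — the driver path later became cinder.volume.drivers.rbd.RBDDriver — so treat this as a sketch):

  volume_driver=cinder.volume.driver.RBDDriver
  rbd_pool=volumes
  rbd_user=volumes
  rbd_secret_uuid=457eb676-33da-42ec-9194-0f29973e6ca9
  glance_api_version=2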

Re: [ceph-users] Error when volume is attached in openstack

2013-07-24 Thread johnu
entation, > I created the secret once on one compute node, then I reused the UUID when > creating it on the rest of the compute nodes. > I then was able to use this value in cinder.conf AND nova.conf. > > On Jul 24, 2013, at 11:39 AM, johnu wrote: >

Re: [ceph-users] Error when volume is attached in openstack

2013-07-24 Thread johnu
ode, and you can verify with > virsh secret-list > > On Jul 24, 2013, at 11:20 AM, johnu wrote: > > > I was trying openstack on ceph. I could create volumes but I am not able > to attach the volume to any running instance. If I attach a volume to an > instance and reboot

Re: [ceph-users] Error when volume is attached in openstack

2013-07-24 Thread johnu
ecret using > virsh. > http://ceph.com/docs/next/rbd/rbd-openstack/ > > > On Jul 24, 2013, at 11:20 AM, johnu wrote: > > > I was trying openstack on ceph. I could create volumes but I am not able > to attach the volume to any running instance. If I attach a volume

[ceph-users] Error when volume is attached in openstack

2013-07-24 Thread johnu
I was trying openstack on ceph. I could create volumes but I am not able to attach the volume to any running instance. If I attach a volume to an instance and reboot it, it goes to an error state. Compute error logs are given below. 15:32.666 ERROR nova.compute.manager [req-464776fd-283
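
The attach can also be driven and debugged from the CLI, which sometimes gives a clearer error than horizon (the ids and device path below are placeholders):

  nova volume-attach <instance-id> <volume-id> /dev/vdb
  nova show <instance-id>
  tail -f /var/log/nova/nova-compute.log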

Re: [ceph-users] Openstack on ceph rbd installation failure

2013-07-23 Thread johnu

Re: [ceph-users] Openstack on ceph rbd installation failure

2013-07-23 Thread johnu
On Jul 23, 2013, at 5:39 AM, johnu wrote: > Hi, > I have a three node ceph cluster. ceph -w says health ok. I have > openstac

[ceph-users] Openstack on ceph rbd installation failure

2013-07-22 Thread johnu
Hi, I have a three node ceph cluster. ceph -w says health ok. I have openstack in the same cluster and am trying to map cinder and glance onto rbd. I have followed the steps given in http://ceph.com/docs/next/rbd/rbd-openstack/ New settings that were added in cinder.conf for three files: volu
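
For comparison, the pool and auth setup from that doc looks like this (a sketch of the instructions from that era; the pg counts are arbitrary):

  ceph osd pool create volumes 128
  ceph osd pool create images 128
  ceph auth get-or-create client.volumes mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
  ceph auth get-or-create client.images mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'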