complex. As in your example: if osd.0, osd.1, osd.2 have primary affinity values
of [1, 0.5, 0.1] and there are 600 PGs, the final distribution comes out to
440:140:20, i.e. 22:7:1, which is slightly skewed from the expected.
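For reference, if "expected" here means a split proportional to the affinity
values (my assumption), the proportional split of 600 PGs would be:

echo "1 0.5 0.1" | awk '{ s = $1 + $2 + $3;
  printf "%.1f : %.1f : %.1f\n", 600*$1/s, 600*$2/s, 600*$3/s }'
# -> 375.0 : 187.5 : 37.5, versus the observed 440:140:20
# (as I understand it, the selection walks the acting set in order, which
# would favour OSDs appearing early in the ordering and could explain the skew)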
Johnu
On 10/9/14, 4:51 PM, "Gregory Farnum" wrote:
>On Thu, Oct 9, 2014 at 4:24 PM,
Hi Greg,
Thanks for your extremely informative post. My related questions
are posted inline
On 10/9/14, 2:21 PM, "Gregory Farnum" wrote:
>On Thu, Oct 9, 2014 at 10:55 AM, Johnu George (johnugeo)
> wrote:
>> Hi All,
>> I have a few questions regarding
rack1 and may become
unbalanced. (Ensure there is enough storage in rack1.)
Thanks,
Johnu
From: Stephen Jahl <stephenj...@gmail.com>
Date: Thursday, October 9, 2014 at 11:11 AM
To: Loic Dachary <l...@dachary.org>
Cc: "ceph-users@lists.ceph.com" <ceph-users@lists.ceph.com>
What happens in a situation with [1, 0.5, 1]? Is osd.0
always returned?
D) After calculating the primary based on the affinity values, I see a shift of
OSDs so that the primary comes to the front. Why is this needed? I thought the
primary affinity value affects only reads and hence the OSD ordering need not
change.
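(One way to check this empirically; a rough sketch, and the pg dump column
layout varies by release, so the awk field may need adjusting:)

ceph osd primary-affinity osd.0 1.0
ceph osd primary-affinity osd.1 0.5
ceph osd primary-affinity osd.2 1.0
# count how many PGs each OSD ends up as acting primary for
ceph pg dump pgs_brief 2>/dev/null | awk 'NR > 1 { print $NF }' | sort | uniq -c
# (firefly-era clusters may also need 'mon osd allow primary affinity = true')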
Thanks Alfredo. Is there any other possible way that will work for my
situation? Anything would be helpful
Johnu
On 10/7/14, 2:25 PM, "Alfredo Deza" wrote:
>On Tue, Oct 7, 2014 at 5:05 PM, Johnu George (johnugeo)
> wrote:
>> Even when I try ceph-deploy install --dev , I
Even when I try ceph-deploy install --dev , I
am seeing that it is getting installed from the official ceph repo. How can I
install ceph from my github repo or my local repo on all ceph nodes? (Or
is there any other possibility?) Can someone help me with setting this up?
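For concreteness, the two routes I mean look roughly like this (hostnames,
branch name and repo URL are placeholders, and flag availability depends on
the ceph-deploy version):

# install from a development branch
ceph-deploy install --dev=wip-mybranch node1 node2 node3
# or point ceph-deploy at a custom/local package repository
ceph-deploy install --repo-url=http://myrepo.example.com/debian-testing \
    --gpg-url=http://myrepo.example.com/release.asc node1 node2 node3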
Johnu
On 10/2/14, 1:55 PM, "Somnath Roy" wrote:
for every small change.
Johnu
On 10/2/14, 1:55 PM, "Somnath Roy" wrote:
>I think you should just skip the 'ceph-deploy install' command and install
>your version of the ceph package on all the nodes manually.
>Otherwise there is 'ceph-deploy install --dev' you can try out.
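(A rough sketch of that manual route, assuming locally built Debian packages;
file names below are placeholders, and RH-based nodes would use yum/rpm
instead:)

# copy your built packages to each node, then on each node:
sudo dpkg -i ceph_*.deb ceph-common_*.deb librados2_*.deb librbd1_*.deb
sudo apt-get -f install   # pull in any missing dependencies
# RH-based equivalent:
# sudo yum localinstall ceph-*.rpm librados2-*.rpm librbd1-*.rpm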
How do I use ceph-deploy in this case? How do I get ceph-deploy to use my
privately built ceph package (with my changes) and install it on all
ceph nodes?
Johnu
On 10/2/14, 7:22 AM, "Loic Dachary" wrote:
>Hi,
>
>I would use ceph-deploy
>http://ceph.com/docs/ma
? If I need to run
benchmarks (using rados bench or other benchmarking tools) after any code
change, what is the right practice for testing a change in a multi-node dev
setup? (A multi-node setup is needed to get meaningful performance
results from the benchmark tests.)
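For what it's worth, the kind of run I have in mind looks like this (the pool
name, duration and thread count are placeholders):

ceph osd pool create bench 128                  # throwaway pool for benchmarking
rados bench -p bench 60 write -t 16 --no-cleanup
rados bench -p bench 60 seq -t 16               # read back the objects written above
rados -p bench cleanup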
Thanks,
Johnu
Hi Daniel,
Can you provide your exact crush map and exact crushtool command
that results in segfaults?
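(For reference, the kind of invocation I mean is something like the following;
file names and rule number are placeholders:)

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt           # decompile to text
crushtool -c crushmap.txt -o crushmap.new           # recompile after editing
crushtool -i crushmap.new --test --rule 0 --num-rep 3 --show-mappings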
Johnu
On 9/16/14, 10:23 AM, "Daniel Swarbrick"
wrote:
>Replying to myself, and for the benefit of other caffeine-starved people:
>
>Setting the last rule to
This can help you.
http://www.sebastien-han.fr/blog/2013/02/11/mount-a-specific-pool-with-cephfs/
On Thu, Aug 8, 2013 at 7:48 AM, Da Chun wrote:
> Hi list,
> I saw the info about data striping in
> http://ceph.com/docs/master/architecture/#data-striping .
> But couldn't find the way to set these parameters.
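For what it's worth, striping is generally chosen at creation time rather than
changed afterwards; a rough sketch (pool, image and path names are
placeholders, and option names vary by release):

# RBD: striping options are given when the image is created (format 2 images)
rbd create mypool/myimage --size 10240 --image-format 2 \
    --stripe-unit 65536 --stripe-count 8
# CephFS: newer releases expose file/dir layouts as virtual xattrs
setfattr -n ceph.dir.layout.stripe_count -v 8 /mnt/cephfs/mydir
getfattr -n ceph.dir.layout /mnt/cephfs/mydir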
Hi,
I have an openstack cluster which runs on ceph. I tried running
hadoop inside VMs and I noticed that map tasks take longer and longer to
complete and eventually fail. RBD reads/writes are getting slower over
time. Is it because of too many objects in ceph per volume?
I have an 8 node cluster
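For reference, a few basic checks that can narrow this down (the 'volumes'
pool and image name below are placeholders):

rados df                          # per-pool object counts and I/O totals
rbd -p volumes ls                 # list the volume images
rbd info volumes/volume-0001      # object size / object count of one volume
ceph -s                           # overall cluster health during the run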
, Gregory Farnum wrote:
> On Fri, Jul 26, 2013 at 10:11 AM, johnu wrote:
> > Greg,
> > Yes, the outputs match
>
> Nope, they don't. :) You need the secret_uuid to be the same on each
> node, because OpenStack is generating configuration snippets on one
> node (which
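(For anyone following along, a sketch of what "the same secret on every
compute node" means in practice; the UUID is a placeholder to generate once
and reuse verbatim on each node, and client.volumes is the cephx user from
this thread:)

UUID=457eb676-33da-42ec-9a8c-9293d545c337      # generate once, reuse everywhere
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.volumes secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml
virsh secret-set-value --secret $UUID --base64 "$(ceph auth get-key client.volumes)"
virsh secret-list                              # the same UUID must show on every node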
I am surprised that no
one else has seen or reported this issue. Any idea?
On Fri, Jul 26, 2013 at 9:45 AM, Gregory Farnum wrote:
> On Fri, Jul 26, 2013 at 9:35 AM, johnu wrote:
> > Greg,
> > I verified on all cluster nodes that rbd_secret_uuid is the same as in
> > virsh secret-list
On Fri, Jul 26, 2013 at 9:17 AM, johnu wrote:
> > Hi all,
> > I need to know whether someone else also faced the same issue.
> >
> >
> > I tried openstack + ceph integration. I have seen that I could create
> > volumes from horizon and it is created in r
Hi all,
I need to know whether someone else also faced the same issue.
I tried openstack + ceph integration. I have seen that I could create
volumes from horizon and they are created in rados.
When I check the created volumes in the admin panel, all volumes are shown to
be created on the same host
you have for your client.volumes?
>
> On Jul 24, 2013, at 12:11 PM, johnu wrote:
>
> Abel,
> What did you change in nova.conf? I have added rbd_username and
> rbd_secret_uuid in cinder.conf. I verified that rbd_secret_uuid is the same
> as the one shown by virsh secret-list.
>
>
> On W
entation,
> I created the secret once on one compute node, then I reused the UUID when
> creating it on the rest of the compute nodes.
> I then was able to use this value in cinder.conf AND nova.conf.
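(For concreteness, the sort of entries that means, written as shell appends
for brevity; the driver path, pool, user and UUID are placeholders and the
exact option names/sections vary by OpenStack release:)

cat >> /etc/cinder/cinder.conf <<EOF
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_user=volumes
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
EOF
cat >> /etc/nova/nova.conf <<EOF
rbd_user=volumes
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
EOF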
>
> On Jul 24, 2013, at 11:39 AM, johnu wrote:
>
>
node, and you can verify with
> virsh secret-list
>
> On Jul 24, 2013, at 11:20 AM, johnu wrote:
>
>
> I was trying openstack on ceph. I could create volumes but I am not able
> to attach the volume to any running instance. If I attach a volume to an
> instance and reboot
secret using
> virsh.
> http://ceph.com/docs/next/rbd/rbd-openstack/
>
>
> On Jul 24, 2013, at 11:20 AM, johnu wrote:
>
>
> I was trying openstack on ceph. I could create volumes but I am not able
> to attach the volume to any running instance. If I attach a volume
I was trying openstack on ceph. I could create volumes but I am not able to
attach the volume to any running instance. If I attach a volume to an
instance and reboot it, it goes to error state.
Compute error logs are given below.
15:32.666 ERROR nova.compute.manager
[req-464776fd-283
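(A few hedged checks that usually narrow down attach failures like this; the
pool/image name is a placeholder:)

grep -iE 'rbd|secret' /var/log/nova/nova-compute.log | tail -n 20
qemu-img --help | grep -c rbd                       # non-zero if qemu has rbd support
qemu-img info "rbd:volumes/volume-0001:id=volumes"  # can qemu open the image directly?
virsh secret-list                                   # UUID here must match rbd_secret_uuid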
>
> On Jul 23, 2013, at 5:39 AM, johnu wrote:
>
> Hi,
> I have a three node ceph cluster. ceph -w says health ok. I have
> openstac
Hi,
I have a three node ceph cluster. ceph -w says health ok. I have
openstack in the same cluster and am trying to map cinder and glance onto rbd.
I have followed the steps given in
http://ceph.com/docs/next/rbd/rbd-openstack/
New settings that were added in cinder.conf for three files
volu