Re: [rdo-users] Scaling integrated Ceph cluster with post deployment customization

2018-10-23 - Cody
Thank you, John. I really appreciate your help!

Best regards,
Cody
On Tue, Oct 23, 2018 at 12:39 PM John Fulton wrote:
>
> On Tue, Oct 23, 2018 at 12:22 PM Cody wrote:
> >
> > Hi John,
> >
> > Thank you so much for the explanation.
> >
> > Now I am clear that I need to include the same previously used
> > environment file(s) for every subsequent scaling or upgrade in order
> > to keep the settings. But what about changes made via ceph commands
> > during daily operation since the last deployment?
>
> Ideally, all the changes would be made with TripleO, which would then
> trigger ceph-ansible, but it can depend on the command. E.g., if you
> created a pool manually and it isn't in the pool list, ceph-ansible
> isn't going to remove it; however, you should include that pool in the
> pool list should you wish to recreate the environment.
>
> > Do I also
> > need to capture those changes in an environment file to reflect the
> > latest state before adding new OSD nodes? I guess the answer would
> > be yes, but I just want to be sure.
>
> If they can be expressed that way, yes. If you've made changes
> outside of your configuration management system (e.g., TripleO
> driving ceph-ansible), then you should fold them back into its inputs
> so that the configuration management system alone is sufficient and
> the manual changes aren't necessary to achieve the desired
> configuration.
>
>   John
>
> >
> > Thank you very much.
> >
> > Best regards,
> > Cody
> > On Tue, Oct 23, 2018 at 8:20 AM John Fulton wrote:
> > >
> > > On Mon, Oct 22, 2018 at 8:51 PM Cody wrote:
> > > >
> > > > Thank you, John, for the reply.
> > > >
> > > > I am unsure how much I should include in an environment file
> > > > when it comes to scaling a Ceph cluster. Should I include every
> > > > customization made to the cluster since the previous deployment?
> > > > In my case, I have altered the CRUSH hierarchy, changed failure
> > > > domains, and created an EC pool with a custom EC rule. Do I need
> > > > to include all of those?
> > >
> > > In short: yes.
> > >
> > > In long: in general with TripleO, if you deploy with N environment
> > > files (each included via -e) and later re-run 'openstack overcloud
> > > deploy ...', you must include the same N files; otherwise you'd be
> > > asking TripleO to change something about your deployment. The
> > > ceph-ansible integration assumes the same: ceph-ansible will re-run
> > > the site.yml playbook, and idempotence will keep things the same
> > > unless you change the input variables. So if you defined the CRUSH
> > > hierarchy in an environment file, then please include the same
> > > environment file. Similarly, if you defined a pool with the
> > > CephPools parameter, then please keep that list of pools unchanged.
> > > How exactly things will behave if you don't could be undefined,
> > > depending on implementation details of the tasks. E.g., ceph-ansible
> > > isn't going to remove a pool just because it's not in the pools
> > > list, but you'll be on the safe side if you reassert the full
> > > configuration consistently with each update, as this is how both
> > > tools are tested.
> > >
> > >   John
> > >
> > > >
> > > > Thank you very much.
> > > >
> > > > Best regards,
> > > > Cody
> > > >
> > > > On Mon, Oct 22, 2018 at 7:03 AM John Fulton wrote:
> > > > >
> > > > > No, I don't see why it would hurt the existing settings, provided you 
> > > > > continue to pass the CRUSH data environment files.
> > > > >
> > > > >   John
> > > > >
> > > > > On Sun, Oct 21, 2018, 10:08 PM Cody wrote:
> > > > >>
> > > > >> Hello folks,
> > > > >>
> > > > >> I have made some changes to a Ceph cluster initially deployed with
> > > > >> OpenStack using TripleO. Specifically, I have changed the CRUSH map
> > > > >> and failure domain for the pools used by the overcloud. Now, if I
> > > > >> attempt to add new storage nodes (with identical specs) to the cluster
> > > > >> simply by increasing the CephStorageCount, would that mess up the
> > > > >> existing settings?
> > > > >>
> > > > >> Thank you very much.
> > > > >>
> > > > >> Best regards,
> > > > >> Cody


Re: [rdo-users] Scaling integrated Ceph cluster with post deployment customization

2018-10-23 - John Fulton
On Tue, Oct 23, 2018 at 12:22 PM Cody wrote:
>
> Hi John,
>
> Thank you so much for the explanation.
>
> Now I am clear that I need to include the same previously used
> environment file(s) for every subsequent scaling or upgrade in order
> to keep the settings. But what about changes made via ceph commands
> during daily operation since the last deployment?

Ideally, all the changes would be made with TripleO, which would then
trigger ceph-ansible, but it can depend on the command. E.g., if you
created a pool manually and it isn't in the pool list, ceph-ansible
isn't going to remove it; however, you should include that pool in the
pool list should you wish to recreate the environment.

> Do I also
> need to capture those changes in an environment file to reflect the
> latest state before adding new OSD nodes? I guess the answer would be
> yes, but I just want to be sure.

If they can be expressed that way, yes. If you've made changes outside
of your configuration management system (e.g., TripleO driving
ceph-ansible), then you should fold them back into its inputs so that
the configuration management system alone is sufficient and the manual
changes aren't necessary to achieve the desired configuration.
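
For ceph.conf-level settings changed by hand, one way to express them
declaratively is the CephConfigOverrides parameter, assuming your
TripleO release exposes it; a sketch with a made-up value:

    parameter_defaults:
      CephConfigOverrides:
        # Hypothetical: a value previously set manually, captured here
        # so that redeploys render it into ceph.conf.
        osd_recovery_max_active: 1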

  John

>
> Thank you very much.
>
> Best regards,
> Cody
> On Tue, Oct 23, 2018 at 8:20 AM John Fulton wrote:
> >
> > On Mon, Oct 22, 2018 at 8:51 PM Cody wrote:
> > >
> > > Thank you, John, for the reply.
> > >
> > > I am unsure how much I should include in an environment file when
> > > it comes to scaling a Ceph cluster. Should I include every
> > > customization made to the cluster since the previous deployment?
> > > In my case, I have altered the CRUSH hierarchy, changed failure
> > > domains, and created an EC pool with a custom EC rule. Do I need
> > > to include all of those?
> >
> > In short: yes.
> >
> > In long: in general with TripleO, if you deploy with N environment
> > files (each included via -e) and later re-run 'openstack overcloud
> > deploy ...', you must include the same N files; otherwise you'd be
> > asking TripleO to change something about your deployment. The
> > ceph-ansible integration assumes the same: ceph-ansible will re-run
> > the site.yml playbook, and idempotence will keep things the same
> > unless you change the input variables. So if you defined the CRUSH
> > hierarchy in an environment file, then please include the same
> > environment file. Similarly, if you defined a pool with the
> > CephPools parameter, then please keep that list of pools unchanged.
> > How exactly things will behave if you don't could be undefined,
> > depending on implementation details of the tasks. E.g., ceph-ansible
> > isn't going to remove a pool just because it's not in the pools
> > list, but you'll be on the safe side if you reassert the full
> > configuration consistently with each update, as this is how both
> > tools are tested.
> >
> >   John
> >
> > >
> > > Thank you very much.
> > >
> > > Best regards,
> > > Cody
> > >
> > > On Mon, Oct 22, 2018 at 7:03 AM John Fulton wrote:
> > > >
> > > > No, I don't see why it would hurt the existing settings, provided you 
> > > > continue to pass the CRUSH data environment files.
> > > >
> > > >   John
> > > >
> > > > On Sun, Oct 21, 2018, 10:08 PM Cody wrote:
> > > >>
> > > >> Hello folks,
> > > >>
> > > >> I have made some changes to a Ceph cluster initially deployed with
> > > >> OpenStack using TripleO. Specifically, I have changed the CRUSH map
> > > >> and failure domain for the pools used by the overcloud. Now, if I
> > > >> attempt to add new storage nodes (with identical specs) to the cluster
> > > >> simply by increasing the CephStorageCount, would that mess up the
> > > >> existing settings?
> > > >>
> > > >> Thank you very much.
> > > >>
> > > >> Best regards,
> > > >> Cody


Re: [rdo-users] Scaling integrated Ceph cluster with post deployment customization

2018-10-23 - Cody
Hi John,

Thank you so much for the explanation.

Now I am clear that I need to include the same previously used
environment file(s) for every subsequent scaling or upgrade in order
to keep the settings. But what about changes made via ceph commands
during daily operation since the last deployment? Do I also need to
capture those changes in an environment file to reflect the latest
state before adding new OSD nodes? I guess the answer would be yes,
but I just want to be sure.

Thank you very much.

Best regards,
Cody
On Tue, Oct 23, 2018 at 8:20 AM John Fulton wrote:
>
> On Mon, Oct 22, 2018 at 8:51 PM Cody wrote:
> >
> > Thank you, John, for the reply.
> >
> > I am unsure how much I should include in an environment file when
> > it comes to scaling a Ceph cluster. Should I include every
> > customization made to the cluster since the previous deployment? In
> > my case, I have altered the CRUSH hierarchy, changed failure
> > domains, and created an EC pool with a custom EC rule. Do I need to
> > include all of those?
>
> In short: yes.
>
> In long: in general with TripleO, if you deploy with N environment
> files (each included via -e) and later re-run 'openstack overcloud
> deploy ...', you must include the same N files; otherwise you'd be
> asking TripleO to change something about your deployment. The
> ceph-ansible integration assumes the same: ceph-ansible will re-run
> the site.yml playbook, and idempotence will keep things the same
> unless you change the input variables. So if you defined the CRUSH
> hierarchy in an environment file, then please include the same
> environment file. Similarly, if you defined a pool with the
> CephPools parameter, then please keep that list of pools unchanged.
> How exactly things will behave if you don't could be undefined,
> depending on implementation details of the tasks. E.g., ceph-ansible
> isn't going to remove a pool just because it's not in the pools
> list, but you'll be on the safe side if you reassert the full
> configuration consistently with each update, as this is how both
> tools are tested.
>
>   John
>
> >
> > Thank you very much.
> >
> > Best regards,
> > Cody
> >
> > On Mon, Oct 22, 2018 at 7:03 AM John Fulton wrote:
> > >
> > > No, I don't see why it would hurt the existing settings, provided you 
> > > continue to pass the CRUSH data environment files.
> > >
> > >   John
> > >
> > > On Sun, Oct 21, 2018, 10:08 PM Cody wrote:
> > >>
> > >> Hello folks,
> > >>
> > >> I have made some changes to a Ceph cluster initially deployed with
> > >> OpenStack using TripleO. Specifically, I have changed the CRUSH map
> > >> and failure domain for the pools used by the overcloud. Now, if I
> > >> attempt to add new storage nodes (with identical specs) to the cluster
> > >> simply by increasing the CephStorageCount, would that mess up the
> > >> existing settings?
> > >>
> > >> Thank you very much.
> > >>
> > >> Best regards,
> > >> Cody


Re: [rdo-users] Scaling integrated Ceph cluster with post deployment customization

2018-10-23 - John Fulton
On Mon, Oct 22, 2018 at 8:51 PM Cody wrote:
>
> Thank you, John, for the reply.
>
> I am unsure how much I should include in an environment file when it
> comes to scaling a Ceph cluster. Should I include every customization
> made to the cluster since the previous deployment? In my case, I have
> altered the CRUSH hierarchy, changed failure domains, and created an
> EC pool with a custom EC rule. Do I need to include all of those?

In short: yes.

In long: in general with TripleO, if you deploy with N environment
files (each included via -e) and later re-run 'openstack overcloud
deploy ...', you must include the same N files; otherwise you'd be
asking TripleO to change something about your deployment. The
ceph-ansible integration assumes the same: ceph-ansible will re-run
the site.yml playbook, and idempotence will keep things the same
unless you change the input variables. So if you defined the CRUSH
hierarchy in an environment file, then please include the same
environment file. Similarly, if you defined a pool with the CephPools
parameter, then please keep that list of pools unchanged. How exactly
things will behave if you don't could be undefined, depending on
implementation details of the tasks. E.g., ceph-ansible isn't going
to remove a pool just because it's not in the pools list, but you'll
be on the safe side if you reassert the full configuration
consistently with each update, as this is how both tools are tested.
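
To make that concrete, here is a minimal sketch of such an environment
file. All names and values are hypothetical, and the exact keys
accepted for pools and CRUSH rules depend on your TripleO and
ceph-ansible versions:

    parameter_defaults:
      # Keep this pool list identical across every re-deploy; an EC
      # pool would also be declared here, using whatever keys your
      # ceph-ansible version accepts.
      CephPools:
        - name: vms
          pg_num: 128
          rule_name: replicated_rack
      # Arbitrary extra ceph-ansible variables, e.g. a CRUSH rule.
      CephAnsibleExtraConfig:
        crush_rule_config: true
        crush_rules:
          - name: replicated_rack
            root: default
            type: rack
            default: true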

  John

>
> Thank you very much.
>
> Best regards,
> Cody
>
> On Mon, Oct 22, 2018 at 7:03 AM John Fulton wrote:
> >
> > No, I don't see why it would hurt the existing settings, provided you 
> > continue to pass the CRUSH data environment files.
> >
> >   John
> >
> > On Sun, Oct 21, 2018, 10:08 PM Cody wrote:
> >>
> >> Hello folks,
> >>
> >> I have made some changes to a Ceph cluster initially deployed with
> >> OpenStack using TripleO. Specifically, I have changed the CRUSH map
> >> and failure domain for the pools used by the overcloud. Now, if I
> >> attempt to add new storage nodes (with identical specs) to the cluster
> >> simply by increasing the CephStorageCount, would that mess up the
> >> existing settings?
> >>
> >> Thank you very much.
> >>
> >> Best regards,
> >> Cody


Re: [rdo-users] Scaling integrated Ceph cluster with post deployment customization

2018-10-22 - Cody
Thank you, John, for the reply.

I am unsure how much I should include in an environment file when it
comes to scaling a Ceph cluster. Should I include every customization
made to the cluster since the previous deployment? In my case, I have
altered the CRUSH hierarchy, changed failure domains, and created an
EC pool with a custom EC rule. Do I need to include all of those?

Thank you very much.

Best regards,
Cody

On Mon, Oct 22, 2018 at 7:03 AM John Fulton wrote:
>
> No, I don't see why it would hurt the existing settings, provided you 
> continue to pass the CRUSH data environment files.
>
>   John
>
> On Sun, Oct 21, 2018, 10:08 PM Cody wrote:
>>
>> Hello folks,
>>
>> I have made some changes to a Ceph cluster initially deployed with
>> OpenStack using TripleO. Specifically, I have changed the CRUSH map
>> and failure domain for the pools used by the overcloud. Now, if I
>> attempt to add new storage nodes (with identical specs) to the cluster
>> simply by increasing the CephStorageCount, would that mess up the
>> existing settings?
>>
>> Thank you very much.
>>
>> Best regards,
>> Cody


Re: [rdo-users] Scaling integrated Ceph cluster with post deployment customization

2018-10-22 - John Fulton
No, I don't see why it would hurt the existing settings, provided you
continue to pass the CRUSH data environment files.
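
In practice that just means re-running the deploy with the unchanged
set of -e files; a sketch with hypothetical filenames:

    # Same environment files as the initial deploy; the only edited
    # value is the increased node count inside node-counts.yaml.
    openstack overcloud deploy --templates \
      -e ceph-crush-customizations.yaml \
      -e node-counts.yaml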

  John

On Sun, Oct 21, 2018, 10:08 PM Cody wrote:

> Hello folks,
>
> I have made some changes to a Ceph cluster initially deployed with
> OpenStack using TripleO. Specifically, I have changed the CRUSH map
> and failure domain for the pools used by the overcloud. Now, if I
> attempt to add new storage nodes (with identical specs) to the cluster
> simply by increasing the CephStorageCount, would that mess up the
> existing settings?
>
> Thank you very much.
>
> Best regards,
> Cody


[rdo-users] Scaling integrated Ceph cluster with post deployment customization

2018-10-21 - Cody
Hello folks,

I have made some changes to a Ceph cluster initially deployed with
OpenStack using TripleO. Specifically, I have changed the CRUSH map
and failure domain for the pools used by the overcloud. Now, if I
attempt to add new storage nodes (with identical specs) to the cluster
simply by increasing the CephStorageCount, would that mess up the
existing settings?
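
For clarity, the only change I intend to make is bumping the node
count in an environment file; the counts below are hypothetical:

    parameter_defaults:
      CephStorageCount: 5   # previously 3; adds two new Ceph storage nodes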

Thank you very much.

Best regards,
Cody
___
users mailing list
users@lists.rdoproject.org
http://lists.rdoproject.org/mailman/listinfo/users

To unsubscribe: users-unsubscr...@lists.rdoproject.org