Re: [openstack-dev] [cinder] Object backporting and the associated service

2016-01-18 Thread Duncan Thomas
On 5 January 2016 at 18:55, Ryan Rossiter wrote:

> This is definitely good to know. Are you planning on setting up something
> off to the side of o.vo that holds a dictionary of all values for a
> release? Something like:
>
> {‘liberty’: {‘volume’: ‘1.3’, …},
>  ‘mitaka’: {‘volume’: ‘1.8’, …}, }
>
> With the possibility of replacing the release name with the RPC version or
> some other version placeholder. Playing devil’s advocate, how does this
> work out if I want to be continuously deploying Cinder from HEAD?


As far as I know (the design has iterated a bit, but I think I'm still
right), there is no need for such a table - before you start a rolling
upgrade, you call the 'pin now' API, and all of the services write their
max supported version to the DB. Once the DB has been written to by all
services, the running services can read that table and cache the max
value. Any new services brought up will also build a max version cache
on startup. Once everything is upgraded, you can call 'pin now' again and
the services can figure out a new (hopefully higher) version limit.
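The two-phase flow above can be sketched roughly as follows. This is an
illustrative toy, not Cinder's actual code: the function and service names
are invented, and a plain dict stands in for the DB table the services
would really write to.

```python
# Hypothetical sketch of the 'pin now' flow; all names are invented
# for illustration and are not the real Cinder API.

# Stand-in for the DB table where each service reports its max
# supported RPC version.
service_versions = {}

def version_key(version):
    # Compare versions numerically ('1.10' > '1.9'), not as strings.
    return tuple(int(part) for part in version.split('.'))

def pin_now(service_name, max_supported):
    """Called on each service when the operator triggers 'pin now'."""
    service_versions[service_name] = max_supported

def compute_pin():
    """Once every service has reported, cap RPC at the lowest max."""
    return min(service_versions.values(), key=version_key)

# Before the rolling upgrade: every service reports its max version.
pin_now('c-api', '1.8')
pin_now('c-sch', '1.8')
pin_now('c-vol', '1.3')   # one node still on the older release

# The fleet pins to the lowest reported version until everything is
# upgraded and 'pin now' is called again.
print(compute_pin())
```

After the upgrade finishes, a second round of `pin_now()` calls would raise
every entry, and `compute_pin()` would return the new, higher limit.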
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Object backporting and the associated service

2016-01-18 Thread Michał Dulko
On 01/18/2016 03:31 PM, Duncan Thomas wrote:
> On 5 January 2016 at 18:55, Ryan Rossiter wrote:
>
> This is definitely good to know. Are you planning on setting up
> something off to the side of o.vo that holds a dictionary
> of all values for a release? Something like:
>
> {‘liberty’: {‘volume’: ‘1.3’, …},
>  ‘mitaka’: {‘volume’: ‘1.8’, …}, }
>
> With the possibility of replacing the release name with the RPC
> version or some other version placeholder. Playing devil’s
> advocate, how does this work out if I want to be continuously
> deploying Cinder from HEAD?
>
>
> As far as I know (the design has iterated a bit, but I think I'm still
> right), there is no need for such a table - before you start a rolling
> upgrade, you call the 'pin now' API, and all of the services write
> their max supported version to the DB. Once the DB has been written to
> by all services, the running services can read that table and cache
> the max value. Any new services brought up will also build a max
> version cache on startup. Once everything is upgraded, you can call
> 'pin now' again and the services can figure out a new (hopefully
> higher) version limit.
>

You're right, that was the initial design we agreed on in Liberty.
Personally I'm now more in favor of how it's implemented in Nova [1].
Basically, on service startup the RPC API is pinned to the lowest version
among all the managers running in the environment. I've prepared PoC
patches [2][3] and successfully executed multiple runs of Tempest on a
deployment with Mitaka's c-api and mixed Liberty and Mitaka c-sch, c-vol
and c-bak (two of each service).
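The difference from the 'pin now' design is that no operator call is
needed: each manager records its version in the service table when it
starts, and every service pins itself to the lowest version it finds
there. A rough sketch, with invented names and version numbers that are
not Nova's or Cinder's real code:

```python
# Rough sketch of Nova-style automatic pinning at startup; names and
# versions are illustrative only.

def version_key(version):
    return tuple(int(part) for part in version.split('.'))

def pin_on_startup(db_service_rows):
    """On startup, read the versions every manager has recorded in the
    service table and pin the RPC API to the lowest one found, with no
    operator-driven 'pin now' step required."""
    return min((row['rpc_version'] for row in db_service_rows),
               key=version_key)

rows = [
    {'host': 'c-sch-1', 'rpc_version': '2.0'},   # Mitaka scheduler
    {'host': 'c-vol-1', 'rpc_version': '1.40'},  # Liberty volume manager
    {'host': 'c-vol-2', 'rpc_version': '2.0'},   # Mitaka volume manager
]
print(pin_on_startup(rows))  # pins to '1.40'
```

Once the last Liberty manager is upgraded and re-registers, newly started
services would naturally compute the higher pin on their own.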

I think we should discuss this in detail at the mid-cycle meetup next week.

[1] https://blueprints.launchpad.net/nova/+spec/service-version-behavior
[2] https://review.openstack.org/#/c/268025/
[3] https://review.openstack.org/#/c/268026/



Re: [openstack-dev] [cinder] Object backporting and the associated service

2016-01-05 Thread Ryan Rossiter

> On Jan 5, 2016, at 7:13 AM, Michał Dulko  wrote:
> 
> On 01/04/2016 11:41 PM, Ryan Rossiter wrote:
>> My first question is: what will be handling the object backports that the 
>> different cinder services need? In Nova, we have the conductor service, 
>> which handles all of the messy RPC and DB work. When anyone needs something 
>> backported, they ask conductor, and it handles everything. That also gives 
>> us a starting point for the rolling upgrades: start with conductor, and now 
>> he has the new master list of objects, and can handle the backporting of 
>> objects when giving them to the older services. From what I see, the main 
>> services in cinder are API, scheduler, and volume. Does there need to be 
>> another service added to handle RPC stuff?
> What Duncan is describing is correct - we intend to backport objects on
> the sender's side, in a similar manner to RPC method backporting (version
> pinning). This model was discussed a few times and seems to be fine, but
> if you think otherwise - please let us know.
This is definitely good to know. Are you planning on setting up something off 
to the side of o.vo that holds a dictionary of all values for a release? 
Something like:

{‘liberty’: {‘volume’: ‘1.3’, …},
 ‘mitaka’: {‘volume’: ‘1.8’, …}, }

With the possibility of replacing the release name with the RPC version or some 
other version placeholder. Playing devil’s advocate, how does this work out if 
I want to be continuously deploying Cinder from HEAD? I will be pinned to the 
previous release’s version until the new release comes out, right? I don’t think 
that’s a bad thing, just something to think about. Nova’s ability to be 
continuously deployable off of HEAD is still a big magical black box to me, so 
to be fair I have no idea how a rolling upgrade works when doing CD off of HEAD.

>> The next question is: are there plans to do manifest backports? That is a 
>> very o.vo-jargoned question, but basically from what I can see, Cinder’s 
>> obj_to_primitive() calls do not use o.vo’s newer method of backporting, 
>> which uses a big dictionary of known versions (a manifest) to do one big 
>> backport instead of clogging up RPC with multiple backport requests every 
>> time a subobject needs to be backported after a parent has been backported 
>> (see [1] if you’re interested). I think this is a pretty simple change that 
>> I can help out with if need be (/me knocks on wood).
> We want to backport on the sender's side, so no RPC calls should be needed.
> This is also connected with the fact that in Cinder we have all the
> services accessing the DB directly (and currently no plans to change
> it). This means that o.vo is of no use for us to support schema
> upgrades in an upgradeable way (as described in [1]). We intend to use
> o.vo just to version the payloads sent through RPC method arguments.
Is this documented in specs/bps somewhere? This is a pretty big detail that I 
didn’t know about. The only thing I could find was [1] from kilo (which I 
totally understand if it hasn’t been updated since merging, I don’t think *any* 
project that I’ve seen keeps the merged specs up to date).

> 
> This however raises a question that came to my mind a few times - why do
> we even mark any of our o.vo methods as remotable?
Well, is there hope of changing over to doing o.vo more like Nova in the future? 
If so, then there’s basically no cost to keeping @base.remotable now so you can 
rely on it later. But that’s not for me to decide :)

> 
> I really want to thank you for giving all this stuff in Cinder a good
> double check. It's very helpful to have the insight of someone more
> experienced with o.vo stuff. :)
I try to make Dan Smith proud ;). I can’t hold a candle to Dan’s knowledge of 
this stuff, but I definitely have more free time than he does.
> 
> I think we have enough bricks and blocks in place to show a complete
> rolling upgrade case that will include DB schema upgrade, o.vo
> backporting and RPC API version pinning. I'll be working on putting this
> all together before the mid cycle meetup.
Record it, document it, post it somewhere when you get it done! I’ve never 
actually done a rolling upgrade on my own (thank goodness for grenade) and I 
would love to see it.
> 
> [1]
> http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/
> 

This is definitely a huge undertaking that takes multiple releases to get done. 
I think you are doing a good job of taking this in smaller parts, and it looks 
like doing so is allowing you to start doing rolling upgrades very quickly. 
Well done!

[1] 

Re: [openstack-dev] [cinder] Object backporting and the associated service

2016-01-05 Thread Michał Dulko
On 01/04/2016 11:41 PM, Ryan Rossiter wrote:
> My first question is: what will be handling the object backports that the 
> different cinder services need? In Nova, we have the conductor service, which 
> handles all of the messy RPC and DB work. When anyone needs something 
> backported, they ask conductor, and it handles everything. That also gives us 
> a starting point for the rolling upgrades: start with conductor, and now he 
> has the new master list of objects, and can handle the backporting of objects 
> when giving them to the older services. From what I see, the main services in 
> cinder are API, scheduler, and volume. Does there need to be another service 
> added to handle RPC stuff?
What Duncan is describing is correct - we intend to backport objects on
the sender's side, in a similar manner to RPC method backporting (version
pinning). This model was discussed a few times and seems to be fine, but
if you think otherwise - please let us know.
> The next question is: are there plans to do manifest backports? That is a 
> very o.vo-jargoned question, but basically from what I can see, Cinder’s 
> obj_to_primitive() calls do not use o.vo’s newer method of backporting, which 
> uses a big dictionary of known versions (a manifest) to do one big backport 
> instead of clogging up RPC with multiple backport requests every time a 
> subobject needs to be backported after a parent has been backported (see [1] 
> if you’re interested). I think this is a pretty simple change that I can help 
> out with if need be (/me knocks on wood).
We want to backport on the sender's side, so no RPC calls should be needed.
This is also connected with the fact that in Cinder we have all the
services accessing the DB directly (and currently no plans to change
it). This means that o.vo is of no use for us to support schema
upgrades in an upgradeable way (as described in [1]). We intend to use
o.vo just to version the payloads sent through RPC method arguments.
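To make the sender-side idea concrete: the sender already knows the pinned
version, so it downgrades the payload before it ever hits the wire, and
the receiver never has to ask another service to backport for it. A toy
sketch, with an invented Volume field history that is not Cinder's real
object schema:

```python
# Illustrative sketch of sender-side backporting. The fields and the
# version history below are made up for the example; the real mechanism
# is o.vo's obj_to_primitive() with a target version.

VOLUME_FIELDS_BY_VERSION = {
    '1.0': {'id', 'size', 'status'},
    '1.1': {'id', 'size', 'status', 'replication_status'},  # added in 1.1
}

def to_primitive(volume, target_version):
    """Serialize, dropping any fields the target version doesn't know."""
    allowed = VOLUME_FIELDS_BY_VERSION[target_version]
    return {key: value for key, value in volume.items() if key in allowed}

new_volume = {'id': 'vol-1', 'size': 10, 'status': 'available',
              'replication_status': 'enabled'}

# The RPC layer is pinned to 1.0, so the sender backports before sending;
# 'replication_status' is dropped because 1.0 doesn't know about it.
payload = to_primitive(new_volume, '1.0')
print(payload)
```

The receiver just deserializes whatever arrives; no round-trip to a
conductor-like service is needed, which is why no extra Cinder service has
to exist for this.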

This however raises a question that came to my mind a few times - why do
we even mark any of our o.vo methods as remotable?

I really want to thank you for giving all this stuff in Cinder a good
double check. It's very helpful to have the insight of someone more
experienced with o.vo stuff. :)

I think we have enough bricks and blocks in place to show a complete
rolling upgrade case that will include DB schema upgrade, o.vo
backporting and RPC API version pinning. I'll be working on putting this
all together before the mid cycle meetup.

[1]
http://www.danplanet.com/blog/2015/10/07/upgrades-in-nova-database-migrations/



[openstack-dev] [cinder] Object backporting and the associated service

2016-01-04 Thread Ryan Rossiter
Hey everybody, your favorite versioned objects guy is back!

So as I’m helping out more and more with the objects stuff around Cinder, I’m 
starting to notice something that may be a problem with rolling upgrades/object 
backporting. Feel free to say “you’re wrong” at any point during this email, I 
very well may have missed something.

My first question is: what will be handling the object backports that the 
different cinder services need? In Nova, we have the conductor service, which 
handles all of the messy RPC and DB work. When anyone needs something 
backported, they ask conductor, and it handles everything. That also gives us a 
starting point for the rolling upgrades: start with conductor, and now he has 
the new master list of objects, and can handle the backporting of objects when 
giving them to the older services. From what I see, the main services in cinder 
are API, scheduler, and volume. Does there need to be another service added to 
handle RPC stuff?

The next question is: are there plans to do manifest backports? That is a very 
o.vo-jargoned question, but basically from what I can see, Cinder’s 
obj_to_primitive() calls do not use o.vo’s newer method of backporting, which 
uses a big dictionary of known versions (a manifest) to do one big backport 
instead of clogging up RPC with multiple backport requests every time a 
subobject needs to be backported after a parent has been backported (see [1] if 
you’re interested). I think this is a pretty simple change that I can help out 
with if need be (/me knocks on wood).
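The manifest idea can be illustrated with a toy example: instead of one
backport request per (sub)object, the caller passes a single dict of the
receiver's known versions, and every nested object is downgraded locally
in one pass. Object names and versions here are invented; the real
mechanism is o.vo's obj_to_primitive() with a version manifest:

```python
# Toy illustration of a manifest backport; for simplicity this only
# relabels versions during the traversal, whereas real o.vo backports
# also translate fields between versions.

def backport(obj, manifest):
    """Recursively downgrade obj (and its subobjects) to the versions the
    receiver declared in a single manifest dict."""
    name = obj['objname']
    prim = {'objname': name, 'version': manifest[name], 'data': {}}
    for key, value in obj['data'].items():
        if isinstance(value, dict) and 'objname' in value:
            # Subobject: backported locally in the same pass - no extra
            # RPC round-trip to ask another service for help.
            prim['data'][key] = backport(value, manifest)
        else:
            prim['data'][key] = value
    return prim

snapshot = {'objname': 'Snapshot', 'version': '1.2',
            'data': {'id': 'snap-1'}}
volume = {'objname': 'Volume', 'version': '1.8',
          'data': {'id': 'vol-1', 'snapshot': snapshot}}

# One manifest covers the parent and all subobjects.
manifest = {'Volume': '1.3', 'Snapshot': '1.1'}
old = backport(volume, manifest)
print(old['version'], old['data']['snapshot']['version'])  # 1.3 1.1
```

Without the manifest, the Snapshot subobject would have triggered a second
backport request after the Volume had already been backported once, which
is exactly the RPC chatter the manifest approach avoids.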

I don’t mean to pile more work onto this, I understand that this is a big task 
to take on, and so far it’s progressing very well. Michał’s been a really 
helpful liaison :).

[1] 
https://github.com/openstack/oslo.versionedobjects/blob/master/oslo_versionedobjects/base.py#L522

-
Thanks,

Ryan Rossiter (rlrossit)

