Re: [openstack-dev] manila community meeting this week is *on*

2018-11-14 Thread Erik McCormick
Are you gathering somewhere in the building for in-person discussion?

On Wed, Nov 14, 2018, 10:11 PM Tom Barron wrote:
> As we discussed last week, we *will* have our normal weekly manila
> community meeting this week, at the regular time and place
>
> Thursday, 15 November, 1500 UTC, #openstack-meetings-alt on
> freenode
>
> Some of us are at Summit but we need to continue to discuss/review
> outstanding specs, links for which can be found in our next meeting
> agenda [1].
>
> It was great meeting new folks at the project onboarding and project
> update sessions at Summit this week -- please feel free to join the
> IRC meeting tomorrow!
>
> Cheers,
>
> -- Tom Barron (tbarron)
>
> [1]
> https://wiki.openstack.org/w/index.php?title=Manila/Meetings&action=edit&section=2
>
>
>


Re: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface

2018-10-23 Thread Erik McCormick
On Tue, Oct 23, 2018 at 10:20 AM Tobias Urdin  wrote:
>
> Hello Erik,
>
> Could you specify the DNs you used for all certificates just so that I
> can rule it out on my side.
> You can redact anything sensitive with some to just get the feel on how
> it's configured.
>
> Best regards
> Tobias
>
I'm not actually using anything special or custom. Right now I just
let it use the default www.example.com stuff. These are the settings
in the playbook, which I distilled from OSA:

octavia_cert_key_length_server: '4096' # key length
octavia_cert_cipher_server: 'aes256'
octavia_cert_cipher_client: 'aes256'
octavia_cert_key_length_client: '4096' # key length
octavia_cert_server_ca_subject: '/C=US/ST=Denial/L=Nowhere/O=Dis/CN=www.example.com' # change this to something more real
octavia_cert_client_ca_subject: '/C=US/ST=Denial/L=Nowhere/O=Dis/CN=www.example.com' # change this to something more real
octavia_cert_client_req_common_name: 'www.example.com' # change this to something more real
octavia_cert_client_req_country_name: 'US'
octavia_cert_client_req_state_or_province_name: 'Denial'
octavia_cert_client_req_locality_name: 'Nowhere'
octavia_cert_client_req_organization_name: 'Dis'
octavia_cert_validity_days: 1825 # 5 years
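
For what it's worth, here's roughly what those settings boil down to in
terms of openssl calls. This is a sketch from memory rather than the actual
OSA role tasks; the output filenames mirror what I listed in my other mail
(ca_01.pem, ca_01.key, ca_server_01.pem, cakey.pem, client.pem), the
intermediate names are made up, and the real role also encrypts the CA keys
with aes256 and a passphrase, which I've skipped here for brevity:

# Server CA (self-signed; signs the per-amphora server certificates)
openssl req -x509 -newkey rsa:4096 -sha256 -days 1825 -nodes \
  -subj '/C=US/ST=Denial/L=Nowhere/O=Dis/CN=www.example.com' \
  -keyout cakey.pem -out ca_server_01.pem

# Client CA (self-signed; signs the controller's client certificate)
openssl req -x509 -newkey rsa:4096 -sha256 -days 1825 -nodes \
  -subj '/C=US/ST=Denial/L=Nowhere/O=Dis/CN=www.example.com' \
  -keyout ca_01.key -out ca_01.pem

# Client key + CSR, signed by the client CA, then key and cert concatenated
openssl req -newkey rsa:4096 -sha256 -nodes \
  -subj '/CN=www.example.com' -keyout client.key -out client.csr
openssl x509 -req -days 1825 -in client.csr -CA ca_01.pem \
  -CAkey ca_01.key -CAcreateserial -out client_cert.pem
cat client.key client_cert.pem > client.pem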

-Erik

> On 10/22/2018 04:47 PM, Erik McCormick wrote:
> > On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin  wrote:
> >> Hello,
> >>
> >> I've been having a lot of issues with SSL certificates myself, on my
> >> second trip now trying to get it working.
> >>
> >> Before I spent a lot of time walking through every line in the DevStack
> >> plugin and fixing my config options, used the generate
> >> script [1] and still it didn't work.
> >>
> >> When I got the "invalid padding" issue it was because of the DN I used
> >> for the CA and the certificate IIRC.
> >>
> >>   > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING
> >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect
> >> to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa
> >> routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'),
> >> ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'),
> >> ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",)
> >>   > 19:47 < tobias-urdin> after a quick google "The problem was that my
> >> CA DN was the same as the certificate DN."
> >>
> >> IIRC I think that solved it, but then again I wouldn't remember fully
> >> since I've been at so many different angles by now.
> >>
> >> Here is my IRC logs history from the #openstack-lbaas channel, perhaps
> >> it can help you out
> >> http://paste.openstack.org/show/732575/
> >>
> > Tobias, I owe you a beer. This was precisely the issue. I'm deploying
> > Octavia with kolla-ansible. It only deploys a single CA. After hacking
> > the templates and playbook to incorporate a separate server CA, the
> > amphorae now load and provision the required namespace. I'm adding a
> > kolla tag to the subject of this in hopes that someone might want to
> > take on changing this behavior in the project. Hopefully after I get
> > through Upstream Institute in Berlin I'll be able to do it myself if
> > nobody else wants to do it.
> >
> > For certificate generation, I extracted the contents of
> > octavia_certs_install.yml (which sets up the directory structure,
> > openssl.cnf, and the client CA), and octavia_certs.yml (which creates
> > the server CA and the client certificate) and mashed them into a
> > separate playbook just for this purpose. At the end I get:
> >
> > ca_01.pem - Client CA Certificate
> > ca_01.key - Client CA Key
> > ca_server_01.pem - Server CA Certificate
> > cakey.pem - Server CA Key
> > client.pem - Concatenated Client Key and Certificate
> >
> > If it would help to have the playbook, I can stick it up on github
> > with a huge "This is a hack" disclaimer on it.
> >
> >> -
> >>
> >> Sorry for hijacking the thread but I'm stuck as well.
> >>
> >> I've in the past tried to generate the certificates with [1] but now
> >> moved on to using the openstack-ansible way of generating them [2]
> >> with some modifications.
> >>
> >> Right now I'm just gett

Re: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface

2018-10-22 Thread Erik McCormick
Oops, dropped Operators. Can't wait until it's all one list...
On Mon, Oct 22, 2018 at 10:44 AM Erik McCormick
 wrote:
>
> On Mon, Oct 22, 2018 at 4:23 AM Tobias Urdin  wrote:
> >
> > Hello,
> >
> > I've been having a lot of issues with SSL certificates myself, on my
> > second trip now trying to get it working.
> >
> > Before I spent a lot of time walking through every line in the DevStack
> > plugin and fixing my config options, used the generate
> > script [1] and still it didn't work.
> >
> > When I got the "invalid padding" issue it was because of the DN I used
> > for the CA and the certificate IIRC.
> >
> >  > 19:34 < tobias-urdin> 2018-09-10 19:43:15.312 15032 WARNING
> > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect
> > to instance. Retrying.: SSLError: ("bad handshake: Error([('rsa
> > routines', 'RSA_padding_check_PKCS1_type_1', 'block type is not 01'),
> > ('rsa routines', 'RSA_EAY_PUBLIC_DECRYPT', 'padding check failed'),
> > ('SSL routines', 'ssl3_get_key_exchange', 'bad signature')],)",)
> >  > 19:47 < tobias-urdin> after a quick google "The problem was that my
> > CA DN was the same as the certificate DN."
> >
> > IIRC I think that solved it, but then again I wouldn't remember fully
> > since I've been at so many different angles by now.
> >
> > Here is my IRC logs history from the #openstack-lbaas channel, perhaps
> > it can help you out
> > http://paste.openstack.org/show/732575/
> >
>
> Tobias, I owe you a beer. This was precisely the issue. I'm deploying
> Octavia with kolla-ansible. It only deploys a single CA. After hacking
> the templates and playbook to incorporate a separate server CA, the
> amphorae now load and provision the required namespace. I'm adding a
> kolla tag to the subject of this in hopes that someone might want to
> take on changing this behavior in the project. Hopefully after I get
> through Upstream Institute in Berlin I'll be able to do it myself if
> nobody else wants to do it.
>
> For certificate generation, I extracted the contents of
> octavia_certs_install.yml (which sets up the directory structure,
> openssl.cnf, and the client CA), and octavia_certs.yml (which creates
> the server CA and the client certificate) and mashed them into a
> separate playbook just for this purpose. At the end I get:
>
> ca_01.pem - Client CA Certificate
> ca_01.key - Client CA Key
> ca_server_01.pem - Server CA Certificate
> cakey.pem - Server CA Key
> client.pem - Concatenated Client Key and Certificate
>
> If it would help to have the playbook, I can stick it up on github
> with a huge "This is a hack" disclaimer on it.
>
> > -
> >
> > Sorry for hijacking the thread but I'm stuck as well.
> >
> > I've in the past tried to generate the certificates with [1] but now
> > moved on to using the openstack-ansible way of generating them [2]
> > with some modifications.
> >
> > Right now I'm just getting: Could not connect to instance. Retrying.:
> > SSLError: [SSL: BAD_SIGNATURE] bad signature (_ssl.c:579)
> > from the amphoras. I haven't got any further, but I've eliminated a lot of
> > stuff in the middle.
> >
> > Tried deploying Octavia on Ubuntu with python3 to just make sure there
> > wasn't an issue with CentOS and OpenSSL versions since it tends to lag
> > behind.
> > Checking the amphora with openssl s_client [3] gives the same error,
> > but the verification is successful; I just don't understand what the
> > bad signature part is about. From browsing some OpenSSL code it seems
> > to be related to RSA signatures somehow.
> >
> > 140038729774992:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad
> > signature:s3_clnt.c:2032:
> >
> > So I've basically ruled out Ubuntu (openssl-1.1.0g) and CentOS
> > (openssl-1.0.2k) being the problem, ruled out signing_digest, so I'm
> > back to something related
> > to the certificates or the communication between the endpoints, or what
> > actually responds inside the amphora (gunicorn IIUC?). Based on the
> > "verify" functions actually causing that bad signature error I would
> > assume it's the generated certificate that the amphora presents that is
> > causing it.
> >
> > I'll have to continue the troubleshooting to the inside of the amphora,
> > I've used the test-on

Re: [openstack-dev] [Octavia] [Kolla] SSL errors polling amphorae and missing tenant network interface

2018-10-22 Thread Erik McCormick
 http://paste.openstack.org/show/732486/
> [4] http://paste.openstack.org/show/732487/
>
> On 10/20/2018 01:53 AM, Michael Johnson wrote:
> > Hi Erik,
> >
> > Sorry to hear you are still having certificate issues.
> >
> > Issue #2 is probably caused by issue #1. Since we hot-plug the tenant
> > network for the VIP, one of the first steps after the worker connects
> > to the amphora agent is finishing the required configuration of the
> > VIP interface inside the network namespace on the amphora.
> >
Thanks for the hint on the workflow of this. I hadn't gotten deep
enough into the code to find that yet, but I suspected it was blocking
since the namespace never got created either. Thanks
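
(For anyone else chasing this: the quick way to see whether the plug ever
happened is to look for the namespace from inside the amphora. A rough
sketch only -- the namespace and service names here are from memory, so
treat them as assumptions:

# run on the amphora itself
sudo ip netns list                        # should list amphora-haproxy once the VIP is plugged
sudo ip netns exec amphora-haproxy ip a   # the VIP interface lives in here, not the default ns
sudo journalctl -u amphora-agent          # agent log shows the incoming plug/vip request failing

In my case the namespace simply never existed because the controller's
plug/vip request died on the TLS handshake.)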

> > If I remember correctly, you are attempting to configure Octavia with
> > the dual CA option (which is good for non-development use).
> >
> > This is what I have for notes:
> >
> > [certificates] gets the following:
> > cert_generator = local_cert_generator
> > ca_certificate = server CA's "server.pem" file
> > ca_private_key = server CA's "server.key" file
> > ca_private_key_passphrase = pass phrase for ca_private_key
> >   [controller_worker]
> >   client_ca = Client CA's ca_cert file
> >   [haproxy_amphora]
> > client_cert = Client CA's client.pem file (I think with its key
> > concatenated is what rm_work said the other day)
> > server_ca = Server CA's ca_cert file
> >

This is all very helpful. It's a bit difficult to know what goes where
given the way the documentation is currently written. For something that's
going to be the de facto standard for load balancing, we as a community
need to do a better job of documenting how to set up, configure, and
manage this in production. I'm trying to capture my lessons learned
and processes as I go to help with that if I can.
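
To save the next person some squinting, here's how I read Michael's notes
above as an octavia.conf snippet. The option names come straight from his
list; the paths and passphrase are just examples from my own layout, so
adjust to taste:

[certificates]
cert_generator = local_cert_generator
# server CA cert and key (this CA signs the per-amphora server certs)
ca_certificate = /etc/octavia/certs/ca_server_01.pem
ca_private_key = /etc/octavia/certs/cakey.pem
ca_private_key_passphrase = not-secure-passphrase

[controller_worker]
# client CA cert, baked into the amphora so it can verify the controller
client_ca = /etc/octavia/certs/ca_01.pem

[haproxy_amphora]
# client key + cert concatenated; presented by the controller to the amphora
client_cert = /etc/octavia/certs/client.pem
# server CA cert used to verify the certificate the amphora presents
server_ca = /etc/octavia/certs/ca_server_01.pem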

-Erik

> > That said, I can probably run through this and write something up next
> > week that is more step-by-step/detailed.
> >
> > Michael
> >
> > On Fri, Oct 19, 2018 at 2:31 PM Erik McCormick
> >  wrote:
> >> Apologies for cross-posting, but in the event that these might be
> >> worth filing as bugs, I wanted the Octavia devs to see it as well...
> >>
> >> I've been wrestling with getting Octavia up and running and have
> >> become stuck on two issues. I'm hoping someone has run into these
> >> before. My google foo has come up empty.
> >>
> >> Issue 1:
> >> When the Octavia controller tries to poll the amphora instance, it
> >> tries repeatedly and eventually fails. The error on the controller
> >> side is:
> >>
> >> 2018-10-19 14:17:39.181 26 ERROR
> >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
> >> retries (currently set to 300) exhausted.  The amphora is unavailable.
> >> Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
> >> exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
> >> SSLError(SSLError("bad handshake: Error([('rsa routines',
> >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> >> 'tls_process_server_certificate', 'certificate verify
> >> failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
> >> port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
> >> (Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
> >> 'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
> >> 'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
> >> routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
> >> 'tls_process_server_certificate', 'certificate verify
> >> failed')],)",),))
> >>
> >> On the amphora side I see:
> >> [2018-10-19 17:52:54 +] [1331] [DEBUG] Error processing SSL request.
> >> [2018-10-19 17:52:54 +] [1331] [DEBUG] Invalid request from
> >> ip=:::10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
> >> failure (_ssl.c:1754)
> >>
> >> I've generated certificates both with the script in the Octavia git
> >> repo, and with the O

[openstack-dev] [Octavia] SSL errors polling amphorae and missing tenant network interface

2018-10-19 Thread Erik McCormick
Apologies for cross-posting, but in the event that these might be
worth filing as bugs, I wanted the Octavia devs to see it as well...

I've been wrestling with getting Octavia up and running and have
become stuck on two issues. I'm hoping someone has run into these
before. My google foo has come up empty.

Issue 1:
When the Octavia controller tries to poll the amphora instance, it
tries repeatedly and eventually fails. The error on the controller
side is:

2018-10-19 14:17:39.181 26 ERROR
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection
retries (currently set to 300) exhausted.  The amphora is unavailable.
Reason: HTTPSConnectionPool(host='10.7.0.112', port=9443): Max retries
exceeded with url: /0.5/plug/vip/10.250.20.15 (Caused by
SSLError(SSLError("bad handshake: Error([('rsa routines',
'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
'tls_process_server_certificate', 'certificate verify
failed')],)",),)): SSLError: HTTPSConnectionPool(host='10.7.0.112',
port=9443): Max retries exceeded with url: /0.5/plug/vip/10.250.20.15
(Caused by SSLError(SSLError("bad handshake: Error([('rsa routines',
'RSA_padding_check_PKCS1_type_1', 'invalid padding'), ('rsa routines',
'rsa_ossl_public_decrypt', 'padding check failed'), ('asn1 encoding
routines', 'ASN1_item_verify', 'EVP lib'), ('SSL routines',
'tls_process_server_certificate', 'certificate verify
failed')],)",),))

On the amphora side I see:
[2018-10-19 17:52:54 +] [1331] [DEBUG] Error processing SSL request.
[2018-10-19 17:52:54 +] [1331] [DEBUG] Invalid request from
ip=:::10.7.0.40: [SSL: SSL_HANDSHAKE_FAILURE] ssl handshake
failure (_ssl.c:1754)

I've generated certificates both with the script in the Octavia git
repo, and with the Openstack Ansible playbook. I can see that they are
present in /etc/octavia/certs.

I'm using the Kolla (Queens) containers for the control plane so I'm
sure I've satisfied all the python library constraints.

Issue 2:
I"m not sure how it gets configured, but the tenant network interface
(ens6) never comes up. I can spawn other instances on that network
with no issue, and I can see that Neutron has the port attached to the
instance. However, in the instance this is all I get:

ubuntu@amphora-33e0aab3-8bc4-4fcb-bc42-b9b36afb16d4:~$ ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: ens3:  mtu 9000 qdisc pfifo_fast
state UP group default qlen 1000
link/ether fa:16:3e:30:c4:60 brd ff:ff:ff:ff:ff:ff
inet 10.7.0.112/16 brd 10.7.255.255 scope global ens3
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe30:c460/64 scope link
   valid_lft forever preferred_lft forever
3: ens6:  mtu 1500 qdisc noop state DOWN group
default qlen 1000
link/ether fa:16:3e:89:a2:7f brd ff:ff:ff:ff:ff:ff

There's no evidence of the interface anywhere else, including udev rules.

Any help with either or both issues would be greatly appreciated.

Cheers,
Erik



[openstack-dev] Ops Meetups - Call for Hosts

2018-10-16 Thread Erik McCormick
Hello all,

The Ops Meetup team has embarked on a mission to revive the
traditional Operators Meetups that have historically been held between
Summits. With the upcoming merger of the PTG into the Summit week, and
the merger of most Ops discussion sessions at Summits into the Forum,
we felt that we needed to get back to our original format.

With that in mind, we are beginning the process of selecting venues
for both 2019 Meetups. Some guidelines for what is needed to host can
be found here:
https://wiki.openstack.org/wiki/Operations/Meetups#Venue_Selection

Each of the etherpads below contains a template to collect information
about the potential host and venue. If you are interested in hosting a
meetup, simply copy and paste the template into a blank etherpad, fill
it out, and place a link above the template on the original etherpad.

Ops Meetup 2019 #1 - Late February / Early March - Somewhere in Europe
https://etherpad.openstack.org/p/ops-meetup-venue-discuss-1st-2019

Ops Meetup 2019 #2 - Late July / Early August - Somewhere in North America
https://etherpad.openstack.org/p/ops-meetup-venue-discuss-2nd-2019

Reply back to this thread with any questions or comments. If you are
coming to the Berlin Summit, we will be having an Ops Meetup Team
catch-up Forum session. We encourage all of you to join in making
these events a success.

Cheers,
Erik



Re: [openstack-dev] Ops Forum Session Brainstorming

2018-09-18 Thread Erik McCormick
This is a friendly reminder for anyone wishing to see Ops-focused sessions
in Berlin to get your submissions in soon. We have a couple of things there
that came out of the PTG, but that's it so far. See below for details.

Cheers,
Erik



On Wed, Sep 12, 2018, 5:07 PM Erik McCormick 
wrote:

> Hello everyone,
>
> I have set up an etherpad to collect Ops related session ideas for the
> Forum at the Berlin Summit. Please suggest any topics that you would
> like to see covered, and +1 existing topics you like.
>
> https://etherpad.openstack.org/p/ops-forum-stein
>
> Cheers,
> Erik
>


[openstack-dev] Ops Forum Session Brainstorming

2018-09-12 Thread Erik McCormick
Hello everyone,

I have set up an etherpad to collect Ops related session ideas for the
Forum at the Berlin Summit. Please suggest any topics that you would
like to see covered, and +1 existing topics you like.

https://etherpad.openstack.org/p/ops-forum-stein

Cheers,
Erik



[openstack-dev] Fwd: [Openstack-operators] revamped ops meetup day 2

2018-09-10 Thread Erik McCormick
-- Forwarded message -
From: Chris Morgan 
Date: Mon, Sep 10, 2018, 5:55 PM
Subject: [Openstack-operators] revamped ops meetup day 2
To: OpenStack Operators , <
openstaack-...@lists.openstack.org>


Hi All,
  We (ops meetups team) got several additional suggestions for ops meetup
sessions, so we've attempted to revamp day 2 to fit them in; please see

https://docs.google.com/spreadsheets/d/1EUSYMs3GfglnD8yfFaAXWhLe0F5y9hCUKqCYe0Vp1oA/edit#gid=981527336

Given the timing, we'll attempt to confirm the rest of the day starting at
9am over coffee. If you're moderating something tomorrow please check out
the adjusted times. If something doesn't work for you we'll try and swap
sessions to make it work.

Cheers
Chris, Erik, Sean

-- 
Chris Morgan 


Re: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-specific news on extraction

2018-09-06 Thread Erik McCormick
On Thu, Sep 6, 2018, 8:40 PM Rochelle Grober 
wrote:

> Sounds like an important discussion to have with the operators in Denver.
> Should put this on the schedule for the Ops meetup.
>
> --Rocky
>

We are planning to attend the upgrade sessions on Monday as a group. How
about we put it there?
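
(For the operators skimming this: my reading of the eventual cut-over Matt
describes below is roughly the following. Sketch only -- the real steps will
presumably ship as a migration script in the placement repo, and the config
section name is my assumption:

# create an empty placement DB and copy the nova_api schema/data into it
mysql -e "CREATE DATABASE placement;"
mysqldump nova_api | mysql placement
# drop the sqlalchemy-migrate version table so placement schema versions can reset to 1
mysql placement -e "DROP TABLE migrate_version;"
# then point placement.conf's [placement_database]/connection at the new DB

The nova-only tables that come along for the ride could be cleaned up later.)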

-Erik


>
> > -Original Message-
> > From: Matt Riedemann [mailto:mriede...@gmail.com]
> > Sent: Thursday, September 06, 2018 1:59 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > ; openstack-
> > operat...@lists.openstack.org
> > Subject: [openstack-dev] [nova][placement][upgrade][qa] Some upgrade-
> > specific news on extraction
> >
> > I wanted to recap some upgrade-specific stuff from today outside of the
> > other [1] technical extraction thread.
> >
> > Chris has a change up for review [2] which prompted the discussion.
> >
> > That change makes placement only work with placement.conf, not
> > nova.conf, but does get a passing tempest run in the devstack patch [3].
> >
> > The main issue here is upgrades. If you think of this like deprecating
> config
> > options, the old config options continue to work for a release and then
> are
> > dropped after a full release (or 3 months across boundaries for CDers)
> [4].
> > Given that, Chris's patch would break the standard deprecation policy.
> Clearly
> > one simple way outside of code to make that work is just copy and rename
> > nova.conf to placement.conf and voila. But that depends on *all*
> > deployment/config tooling to get that right out of the gate.
> >
> > The other obvious thing is the database. The placement repo code as-is
> > today still has the check for whether or not it should use the placement
> > database but falls back to using the nova_api database [5]. So
> technically you
> > could point the extracted placement at the same nova_api database and it
> > should work. However, at some point deployers will clearly need to copy
> the
> > placement-related tables out of the nova_api DB to a new placement DB and
> > make sure the 'migrate_version' table is dropped so that placement DB
> > schema versions can reset to 1.
> >
> > With respect to grenade and making this work in our own upgrade CI
> testing,
> > we have I think two options (which might not be mutually
> > exclusive):
> >
> > 1. Make placement support using nova.conf if placement.conf isn't found
> for
> > Stein with lots of big warnings that it's going away in T. Then Rocky
> nova.conf
> > with the nova_api database configuration just continues to work for
> > placement in Stein. I don't think we then have any grenade changes to
> make,
> > at least in Stein for upgrading *from* Rocky. Assuming fresh devstack
> installs
> > in Stein use placement.conf and a placement-specific database, then
> > upgrades from Stein to T should also be OK with respect to grenade, but
> > likely punts the cut-over issue for all other deployment projects
> (because we
> > don't CI with grenade doing
> > Rocky->Stein->T, or FFU in other words).
> >
> > 2. If placement doesn't support nova.conf in Stein, then grenade will
> require
> > an (exceptional) [6] from-rocky upgrade script which will (a) write out
> > placement.conf fresh and (b) run a DB migration script, likely housed in
> the
> > placement repo, to create the placement database and copy the placement-
> > specific tables out of the nova_api database. Any script like this is
> likely
> > needed regardless of what we do in grenade because deployers will need to
> > eventually do this once placement would drop support for using nova.conf
> (if
> > we went with option 1).
> >
> > That's my attempt at a summary. It's going to be very important that
> > operators and deployment project contributors weigh in here if they have
> > strong preferences either way, and note that we can likely do both
> options
> > above - grenade could do the fresh cutover from rocky to stein but we
> allow
> > running with nova.conf and nova_api DB in placement in stein with plans
> to
> > drop that support in T.
> >
> > [1]
> > http://lists.openstack.org/pipermail/openstack-dev/2018-
> > September/subject.html#134184
> > [2] https://review.openstack.org/#/c/600157/
> > [3] https://review.openstack.org/#/c/600162/
> > [4]
> > https://governance.openstack.org/tc/reference/tags/assert_follows-
> > standard-deprecation.html#requirements
> > [5]
> > https://github.com/openstack/placement/blob/fb7c1909/placement/db_api
> > .py#L27
> > [6] https://docs.openstack.org/grenade/latest/readme.html#theory-of-
> > upgrade
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >

Re: [openstack-dev] [kayobe] Kayobe update

2018-08-29 Thread Erik McCormick
Hey Mark,

Here's the link to the Ops etherpad

https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018

I added a listing for you, and we'll have a schedule out shortly, but
I noted that it should be Tuesday. See you in Denver!

Cheers,
Erik

On Wed, Aug 22, 2018 at 5:01 PM Mark Goddard  wrote:
>
>
>
> On Wed, 22 Aug 2018, 19:08 Erik McCormick,  wrote:
>>
>>
>>
>> On Wed, Aug 22, 2018, 1:52 PM Mark Goddard  wrote:
>>>
>>> Hello Kayobians,
>>>
>>> I thought it is about time to do another update.
>>
>>
>> 
>>
>>>
>>> # PTG
>>>
>>> There won't be an official Kayobe session at the PTG in Denver, although I 
>>> and a few others from the team will be present. If anyone would like to 
>>> meet to discuss Kayobe then don't be shy. Please get in touch either via 
>>> email or IRC (mgoddard).
>>
>>
>> Would you have any interest in doing an overview / Q&A session with 
>> Operators Monday before lunch or sometime Tuesday? It doesn't need to be 
>> anything fancy or formal as these are all fishbowl sessions. It might be a 
>> good way to get some traction and feedback.
>
>
> Absolutely, that's a great idea. I was hoping to attend the Scientific SIG 
> session on Monday, but any time on Tuesday would work.
>
>>
>>>
>>>
>>> Cheers,
>>> Mark
>>
>>
>> -Erik
>>
>>>


Re: [openstack-dev] [kayobe] Kayobe update

2018-08-22 Thread Erik McCormick
On Wed, Aug 22, 2018, 1:52 PM Mark Goddard  wrote:

> Hello Kayobians,
>
> I thought it is about time to do another update.
>




> # PTG
>
> There won't be an official Kayobe session at the PTG in Denver, although I
> and a few others from the team will be present. If anyone would like to
> meet to discuss Kayobe then don't be shy. Please get in touch either via
> email or IRC (mgoddard).
>

Would you have any interest in doing an overview / Q&A session with
Operators Monday before lunch or sometime Tuesday? It doesn't need to be
anything fancy or formal as these are all fishbowl sessions. It might be a
good way to get some traction and feedback.


>
> Cheers,
> Mark
>

-Erik


>


Re: [openstack-dev] Ceph multiattach support

2018-05-30 Thread Erik McCormick
The lack of multiattach support for Ceph is a Ceph problem rather than a
Cinder problem. There are issues with replication and multi-attached RBD
volumes as I understand it. The Ceph folks are aware but have other
priorities at present. I encourage making your interest known to them.

In the meantime, check out Manila with CephFS if you are running modern
versions of both Ceph and OpenStack.

-Erik



On Wed, May 30, 2018, 10:02 PM fengyd  wrote:

> Hi,
>
> I'm using Ceph for cinder backend.
> Do you have any plan to support multiattach for Ceph backend?
>
> Thanks
>
> Yafeng


Re: [openstack-dev] Fast Forward Upgrades (FFU) Forum Sessions

2018-05-18 Thread Erik McCormick
On Fri, May 18, 2018 at 3:59 PM, Thierry Carrez  wrote:
> Erik McCormick wrote:
>> There are two forum sessions in Vancouver covering Fast Forward Upgrades.
>>
>> Session 1 (Current State): Wednesday May 23rd, 09:00 - 09:40, Room 220
>> Session 2 (Future Work): Wednesday May 23rd, 09:50 - 10:30, Room 220
>>
>> The combined etherpad for both sessions can be found at:
>> https://etherpad.openstack.org/p/YVR-forum-fast-forward-upgrades
>
> You should add it to the list of all etherpads at:
> https://wiki.openstack.org/wiki/Forum/Vancouver2018
>
Done

> --
> Thierry Carrez (ttx)
>

-Erik



[openstack-dev] Fast Forward Upgrades (FFU) Forum Sessions

2018-05-18 Thread Erik McCormick
Hello all,

There are two forum sessions in Vancouver covering Fast Forward Upgrades.

Session 1 (Current State): Wednesday May 23rd, 09:00 - 09:40, Room 220
Session 2 (Future Work): Wednesday May 23rd, 09:50 - 10:30, Room 220

The combined etherpad for both sessions can be found at:
https://etherpad.openstack.org/p/YVR-forum-fast-forward-upgrades

Please take some time to add in topics you would like to see discussed
or add any other pertinent information. There are several reference
links at the top which are worth reviewing prior to the sessions if
you have the time.

See you all in Vancouver!

Cheers,
Erik



Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-14 Thread Erik McCormick
On Tue, Nov 14, 2017 at 4:10 PM, Rochelle Grober
 wrote:
> Folks,
>
> This discussion and the people interested in it seem like a perfect 
> application of the SIG process.  By turning LTS into a SIG, everyone can 
> discuss the issues on the SIG mailing list and the discussion shouldn't end 
> up split.  If it turns into a project, great.  If a solution is found that 
> doesn't need a new project, great.  Even once  there is a decision on how to 
> move forward, there will still be implementation issues and enhancements, so 
> the SIG could very well be long-lived.  But the important aspect of this is:  
> keeping the discussion in a place where both devs and ops can follow the 
> whole thing and act on recommendations.
>
> Food for thought.
>
> --Rocky
>
Just to add more legs to the spider that is this thread: I think the
SIG idea is a good one. It may evolve into a project team some day,
but for now it's a free-for-all polluting 2 mailing lists, and
multiple etherpads. How do we go about creating one?

-Erik

>> -Original Message-
>> From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com]
>> Sent: Tuesday, November 14, 2017 8:31 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> ; openstack-oper. > operat...@lists.openstack.org>
>> Subject: Re: [openstack-dev] Upstream LTS Releases
>>
>> Hi all - please note this conversation has been split variously across -dev 
>> and -
>> operators.
>>
>> One small observation from the discussion so far is that it seems as though
>> there are two issues being discussed under the one banner:
>> 1) maintain old releases for longer
>> 2) do stable releases less frequently
>>
>> It would be interesting to understand if the people who want longer
>> maintenance windows would be helped by #2.
>>
>> On 14 November 2017 at 09:25, Doug Hellmann 
>> wrote:
>> > Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>> >> >> The concept, in general, is to create a new set of cores from
>> >> >> these groups, and use 3rd party CI to validate patches. There are
>> >> >> lots of details to be worked out yet, but our amazing UC (User
>> >> >> Committee) will be begin working out the details.
>> >> >
>> >> > What is the most worrying is the exact "take over" process. Does it
>> >> > mean that the teams will give away the +2 power to a different
>> >> > team? Or will our (small) stable teams still be responsible for
>> >> > landing changes? If so, will they have to learn how to debug 3rd party 
>> >> > CI
>> jobs?
>> >> >
>> >> > Generally, I'm scared of both overloading the teams and losing the
>> >> > control over quality at the same time :) Probably the final proposal 
>> >> > will
>> clarify it..
>> >>
>> >> The quality of backported fixes is expected to be a direct (and
>> >> only?) interest of those new teams of new cores, coming from users
>> >> and operators and vendors. The more parties to establish their 3rd
>> >> party
>> >
>> > We have an unhealthy focus on "3rd party" jobs in this discussion. We
>> > should not assume that they are needed or will be present. They may
>> > be, but we shouldn't build policy around the assumption that they
>> > will. Why would we have third-party jobs on an old branch that we
>> > don't have on master, for instance?
>> >
>> >> checking jobs, the better proposed changes communicated, which
>> >> directly affects the quality in the end. I also suppose, contributors
>> >> from ops world will likely be only struggling to see things getting
>> >> fixed, and not new features adopted by legacy deployments they're used
>> to maintain.
>> >> So in theory, this works and as a mainstream developer and
>> >> maintainer, you need no to fear of losing control over LTS code :)
>> >>
>> >> Another question is how to not block all on each over, and not push
>> >> contributors away when things are getting awry, jobs failing and
>> >> merging is blocked for a long time, or there is no consensus reached
>> >> in a code review. I propose the LTS policy to enforce CI jobs be
>> >> non-voting, as a first step on that way, and giving every LTS team
>> >> member a core rights maybe? Not sure if that works though.
>> >
>> > I'm not sure what change you're proposing for CI jobs and their voting
>> > status. Do you mean we should make the jobs non-voting as soon as the
>> > branch passes out of the stable support period?
>> >
>> > Regarding the review team, anyone on the review team for a branch that
>> > goes out of stable support will need to have +2 rights in that branch.
>> > Otherwise there's no point in saying that they're maintaining the
>> > branch.
>> >
>> > Doug
>> >
>> >
>>
>>
>>
>> --
>> Cheers,
>> ~Blairo
>>
>

Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-14 Thread Erik McCormick
On Tue, Nov 14, 2017 at 6:44 PM, John Dickinson  wrote:
>
>
> On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:
>
>> On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M  wrote:
>>> The pressure for #2 comes from the inability to skip upgrades and the fact 
>>> that upgrades are hugely time consuming still.
>>>
>>> If you want to reduce the push for number #2 and help developers get their 
>>> wish of getting features into users hands sooner, the path to upgrade 
>>> really needs to be much less painful.
>>>
>>
>> +1000
>>
>> We are upgrading from Kilo to Mitaka. It took 1 year to plan and
>> execute the upgrade. (and we skipped a version)
>> Scheduling all the relevant internal teams is a monumental task
>> because we don't have dedicated teams for those projects and they have
>> other priorities.
>> Upgrading affects a LOT of our systems, some we don't fully have
>> control over. And it can takes months to get new deployment on those
>> systems. (and after, we have to test compatibility, of course)
>>
>> So I guess you can understand my frustration when I'm told to upgrade
>> more often and that skipping versions is discouraged/unsupported.
>> At the current pace, I'm just falling behind. I *need* to skip
>> versions to keep up.
>>
>> So for our next upgrades, we plan on skipping even more versions if
>> the database migration allows it. (except for Nova which is a huge
>> PITA to be honest due to CellsV1)
>> I just don't see any other ways to keep up otherwise.
>
> ?!?!
>
> What does it take for this to never happen again? No operator should need to 
> plan and execute an upgrade for a whole year to upgrade one year's worth of 
> code development.
>
> We don't need new policies, new teams, more releases, fewer releases, or 
> anything like that. The goal is NOT "let's have an LTS release". The goal 
> should be "How do we make sure Mattieu and everyone else in the world can 
> actually deploy and use the software we are writing?"
>
> Can we drop the entire LTS discussion for now and focus on "make upgrades 
> take less than a year" instead? After we solve that, let's come back around 
> to LTS versions, if needed. I know there's already some work around that. 
> Let's focus there and not be distracted about the best bureaucracy for not 
> deleting two-year-old branches.
>
>
> --John
>
>
>
> /me puts on asbestos pants
>

OK, let's tone down the flamethrower there a bit, Mr. Asbestos Pants
;). The LTS push is not in lieu of the quest for simpler upgrades. There
is also an effort to enable fast-forward upgrades going on. However,
this is a non-trivial task that will take many cycles to get to a
point where it's truly what you're looking for. The long term desire
of having LTS releases encompasses being able to hop from one LTS to
the next without stopping over. We just aren't there yet.

However, what we *can* do is make it so when mgagne finally gets to
Newton (or Ocata or wherever) on his next run, the code isn't
completely EOL and it can still receive some important patches. This
can be accomplished in the very near term, and that is what a certain
subset of us are focused on.

We still desire to skip versions. We still desire to have upgrades be
non-disruptive and non-destructive. This is just one step on the way
to that. This discussion has been going on for cycle after cycle with
little more than angst between ops and devs to show for it. This is
the first time we've had progress on this ball of goo that really
matters. Let's all be proactive contributors to the solution.

Those interested in having a say in the policy, put your $0.02 here:
https://etherpad.openstack.org/p/LTS-proposal

Peace, Love, and International Grooviness,
Erik

>>
>> --
>> Mathieu
>>


Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-14 Thread Erik McCormick
On Tue, Nov 14, 2017 at 11:30 AM, Blair Bethwaite
 wrote:
> Hi all - please note this conversation has been split variously across
> -dev and -operators.
>
> One small observation from the discussion so far is that it seems as
> though there are two issues being discussed under the one banner:
> 1) maintain old releases for longer
> 2) do stable releases less frequently
>
> It would be interesting to understand if the people who want longer
> maintenance windows would be helped by #2.
>

I would like to hear from people who do *not* want #2 and why not.
What are the benefits of 6 months vs. 1 year? I have heard objections
in the hallway track, but I have struggled to retain the rationale for
more than 10 seconds. I think this may be more of a religious
discussion that could take a while though.

#1 is something we can act on right now, with the eventual goal of
being able to skip releases entirely. We are addressing the maintenance
of old releases right now. As we get farther down the road of
fast-forward upgrade tooling, we will be able to please both those
wishing for a slower upgrade cadence and those who want to stay on the
bleeding edge.

-Erik

> On 14 November 2017 at 09:25, Doug Hellmann  wrote:
>> Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>>> >> The concept, in general, is to create a new set of cores from these
>>> >> groups, and use 3rd party CI to validate patches. There are lots of
>>> >> details to be worked out yet, but our amazing UC (User Committee) will
>>> >> be begin working out the details.
>>> >
>>> > What is the most worrying is the exact "take over" process. Does it mean 
>>> > that
>>> > the teams will give away the +2 power to a different team? Or will our 
>>> > (small)
>>> > stable teams still be responsible for landing changes? If so, will they 
>>> > have to
>>> > learn how to debug 3rd party CI jobs?
>>> >
>>> > Generally, I'm scared of both overloading the teams and losing the 
>>> > control over
>>> > quality at the same time :) Probably the final proposal will clarify it..
>>>
>>> The quality of backported fixes is expected to be a direct (and only?)
>>> interest of those new teams of new cores, coming from users and
>>> operators and vendors. The more parties to establish their 3rd party
>>
>> We have an unhealthy focus on "3rd party" jobs in this discussion. We
>> should not assume that they are needed or will be present. They may be,
>> but we shouldn't build policy around the assumption that they will. Why
>> would we have third-party jobs on an old branch that we don't have on
>> master, for instance?
>>
>>> checking jobs, the better proposed changes communicated, which directly
>>> affects the quality in the end. I also suppose, contributors from ops
>>> world will likely be only struggling to see things getting fixed, and
>>> not new features adopted by legacy deployments they're used to maintain.
>>> So in theory, this works and as a mainstream developer and maintainer,
>>> you need no to fear of losing control over LTS code :)
>>>
>>> Another question is how to not block all on each over, and not push
>>> contributors away when things are getting awry, jobs failing and merging
>>> is blocked for a long time, or there is no consensus reached in a code
>>> review. I propose the LTS policy to enforce CI jobs be non-voting, as a
>>> first step on that way, and giving every LTS team member a core rights
>>> maybe? Not sure if that works though.
>>
>> I'm not sure what change you're proposing for CI jobs and their voting
>> status. Do you mean we should make the jobs non-voting as soon as the
>> branch passes out of the stable support period?
>>
>> Regarding the review team, anyone on the review team for a branch
>> that goes out of stable support will need to have +2 rights in that
>> branch. Otherwise there's no point in saying that they're maintaining
>> the branch.
>>
>> Doug
>>
>
>
>
> --
> Cheers,
> ~Blairo
>



Re: [openstack-dev] Upstream LTS Releases

2017-11-07 Thread Erik McCormick
On Nov 8, 2017 1:52 PM, "James E. Blair"  wrote:

Erik McCormick  writes:

> On Tue, Nov 7, 2017 at 6:45 PM, James E. Blair 
wrote:
>> Erik McCormick  writes:
>>
>>> The concept, in general, is to create a new set of cores from these
>>> groups, and use 3rd party CI to validate patches. There are lots of
>>> details to be worked out yet, but our amazing UC (User Committee) will
>>> be begin working out the details.
>>
>> I regret that due to a conflict I was unable to attend this session.
>> Can you elaborate on why third-party CI would be necessary for this,
>> considering that upstream CI already exists on all active branches?
>
> Lack of infra resources, the fact that people are already maintaining
> their own testing for old releases, and the distribution of work across
> organizations were, I think, the chief reasons. Someone else, feel free
> to chime in and expand on it.

Which resources are lacking?  I wasn't made aware of a shortage of
upstream CI resources affecting stable branch work, but if there is, I'm
sure we can address it -- this is a very important effort.




It's not a matter of things lacking for today's release cadence and
deprecation policy. That is working fine. The problems would come if you
had to, say, continue to run it for Mitaka until Queens is released.

The upstream CI system is also a collaboratively maintained system with
folks from many organizations participating in it.  Indeed we're now
distributing its maintenance and operation into projects themselves.
It seems like an ideal place for folks from different organizations to
collaborate.


Monty, as well as the Stable Branch cores, were in the room, so perhaps
they can elaborate on this for us.  I'm no expert on what can and cannot be
done.

-Jim


Re: [openstack-dev] Upstream LTS Releases

2017-11-07 Thread Erik McCormick
On Tue, Nov 7, 2017 at 6:45 PM, James E. Blair  wrote:
> Erik McCormick  writes:
>
>> The concept, in general, is to create a new set of cores from these
>> groups, and use 3rd party CI to validate patches. There are lots of
>> details to be worked out yet, but our amazing UC (User Committee) will
>> be begin working out the details.
>
> I regret that due to a conflict I was unable to attend this session.
> Can you elaborate on why third-party CI would be necessary for this,
> considering that upstream CI already exists on all active branches?
>
> Thanks,
>
> Jim

Lack of infra resources, the fact that people are already maintaining
their own testing for old releases, and the distribution of work across
organizations were, I think, the chief reasons. Someone else, feel free
to chime in and expand on it.

-Erik



[openstack-dev] Upstream LTS Releases

2017-11-07 Thread Erik McCormick
Hello Ops folks,

This morning at the Sydney Summit we had a very well attended and very
productive session about how to go about keeping a selection of past
releases available and maintained for a longer period of time (LTS).

There was agreement in the room that this could be accomplished by
moving the responsibility for those releases from the Stable Branch
team down to those who are already creating and testing patches for
old releases: The distros, deployers, and operators.

The concept, in general, is to create a new set of cores from these
groups, and use 3rd party CI to validate patches. There are lots of
details to be worked out yet, but our amazing UC (User Committee) will
begin working out the details.

Please take a look at the Etherpad from the session if you'd like to
see the details. More importantly, if you would like to contribute to
this effort, please add your name to the list starting on line 133.

https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases

Thanks to everyone who participated!

Cheers,
Erik



Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-10-31 Thread Erik McCormick
The etherpad for the Fast-Forward Upgrades session at the Sydney forum is here:

https://etherpad.openstack.org/p/SYD-forum-fast-forward-upgrades

Please help us flesh it out and frame the discussion to make the best
use of our time. I have included reference materials from previous
sessions to use as a starting point. Thanks to everyone for
participating!

Cheers,
Erik

On Mon, Oct 30, 2017 at 11:25 PM,   wrote:
> See you there Eric.
>
>
>
> From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
> Sent: Monday, October 30, 2017 10:58 AM
> To: Matt Riedemann 
> Cc: OpenStack Development Mailing List ;
> openstack-operators 
> Subject: Re: [openstack-dev] [Openstack-operators]
> [skip-level-upgrades][fast-forward-upgrades] PTG summary
>
>
>
>
>
>
>
> On Oct 30, 2017 11:53 AM, "Matt Riedemann"  wrote:
>
> On 9/20/2017 9:42 AM, arkady.kanev...@dell.com wrote:
>
> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady
>
>
>
> Arkady,
>
> Are you actually moderating the forum session in Sydney because the session
> says Eric McCormick is the session moderator:
>
>
>
> I submitted it so it gets my name on it. I think Arkady and I are going to
> do it together.
>
>
>
> https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20451/fast-forward-upgrades
>
> People are asking in the nova IRC channel about this session and were told
> to ask Jay Pipes about it, but Jay isn't going to be in Sydney and isn't
> involved in fast-forward upgrades, as far as I know anyway.
>
> So whoever is moderating this session, can you please create an etherpad and
> get it linked to the wiki?
>
> https://wiki.openstack.org/wiki/Forum/Sydney2017
>
>
>
> I'll have the etherpad up today and pass it around here and on the wiki.
>
>
>
>
>
> --
>
> Thanks,
>
> Matt
>
>
>


Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-10-30 Thread Erik McCormick
On Oct 30, 2017 11:53 AM, "Matt Riedemann"  wrote:

On 9/20/2017 9:42 AM, arkady.kanev...@dell.com wrote:

> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady
>

Arkady,

Are you actually moderating the forum session in Sydney because the session
says Eric McCormick is the session moderator:


I submitted it so it gets my name on it. I think Arkady and I are going to
do it together.

https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20451/fast-forward-upgrades

People are asking in the nova IRC channel about this session and were told
to ask Jay Pipes about it, but Jay isn't going to be in Sydney and isn't
involved in fast-forward upgrades, as far as I know anyway.

So whoever is moderating this session, can you please create an etherpad
and get it linked to the wiki?

https://wiki.openstack.org/wiki/Forum/Sydney2017


I'll have the etherpad up today and pass it around here and on the wiki.



-- 

Thanks,

Matt




Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-28 Thread Erik McCormick
On Sep 28, 2017 4:31 AM, "Lee Yarwood"  wrote:

On 20-09-17 14:56:20, arkady.kanev...@dell.com wrote:
> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady

Thanks Arkady!

FYI I see that emccormickva has created the following Forum session to
discuss FF upgrades:

http://forumtopics.openstack.org/cfp/details/19

You might want to reach out to him to help craft the agenda for the
session based on our discussions in Denver.

I just didn't want to risk it not getting in, and it was on our etherpad as
well. I'm happy to help, but would love for you guys to lead.

Thanks,
Erik


Thanks again,

Lee
--
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672
2D76



Re: [openstack-dev] [Openstack-operators] [tc][nova][ironic][mogan] Evaluate Mogan project

2017-09-26 Thread Erik McCormick
My main question here would be this: If you feel there are deficiencies in
Ironic, why not contribute to improving Ironic rather than spawning a whole
new project?

I am happy to take a look at it, and I'm by no means trying to contradict
your assumptions here. I just get concerned with the overhead and confusion
that comes with competing projects.

Also, if you'd like to discuss this in detail with a room full of bodies, I
suggest proposing a session for the Forum in Sydney. If some of the
contributors will be there, it would be a good opportunity for you to get
feedback.

Cheers,
Erik


On Sep 26, 2017 8:41 PM, "Matt Riedemann"  wrote:

> On 9/25/2017 6:27 AM, Zhenguo Niu wrote:
>
>> Hi folks,
>>
>> First of all, thanks to everyone who attended the Mogan project update
>> in the TC room during the Denver PTG. Here we would like to gather more
>> suggestions before we apply for inclusion.
>>
>> Speaking only for myself, I find the current direction of one
>> API+scheduler for vm/baremetal/container unfortunate. After container
>> management moved out into the separate Zun project, bare metal with Nova
>> and Ironic continues to be a pain point.
>>
>> #. API
>> Only part of the Nova APIs and parameters apply to bare metal instances;
>> meanwhile, to stay interoperable with the virtual drivers,
>> bare-metal-specific APIs such as deploy-time RAID or advanced
>> partitioning cannot be included. It's true that we can support various
>> compute drivers, but the reality is that the support for each hypervisor
>> is not equal, especially for bare metal in a virtualization-centric
>> world. But I understand the problems with that, as Nova was designed to
>> provide compute resources (virtual machines) rather than bare metal.
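To make the deploy-time RAID point concrete, here is a minimal sketch (not
Mogan code) of the kind of RAID layout Ironic itself can be asked to apply
to a node before deployment, driven through python-ironicclient. The auth
details, node name and disk layout are invented for illustration; the point
is simply that nothing in Nova's server-create request can carry this.

    # A sketch only -- assumes a python-ironicclient recent enough to
    # accept a keystoneauth1 session and a reachable Ironic endpoint.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from ironicclient import client as ironic_client

    auth = v3.Password(auth_url='https://keystone.example.com/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    ironic = ironic_client.get_client(1, session=session.Session(auth=auth))

    # Two logical disks: a mirrored root volume and a RAID-5 data volume.
    target_raid_config = {
        'logical_disks': [
            {'size_gb': 100, 'raid_level': '1', 'is_root_volume': True},
            {'size_gb': 'MAX', 'raid_level': '5'},
        ],
    }
    ironic.node.set_target_raid_config('bm-node-0', target_raid_config)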
>>
>> #. Scheduler
>> Bare metal doesn't fit into the model of 1:1 nova-compute to resource,
>> as nova-compute processes can't run on the inventory nodes themselves.
>> That is to say, host aggregates, availability zones and other constructs
>> based on the compute service (host) can't be applied to bare metal
>> resources. And for grouping such as anti-affinity, the granularity is
>> not the same as for virtual machines: bare metal users may want their HA
>> instances spread across failure domains, not just across nodes. In
>> short, we can only get rigid, resource-class-only scheduling for bare
>> metal.
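For reference, the resource-class-only scheduling mentioned above looks
roughly like this on the Nova side today: the flavor zeroes out the standard
VCPU/RAM/disk resources and asks Placement for exactly one unit of a custom
class reported by the Ironic node. A minimal sketch; the class name, sizes
and credentials are illustrative.

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client as nova_client

    auth = v3.Password(auth_url='https://keystone.example.com/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    nova = nova_client.Client('2.1', session=session.Session(auth=auth))

    # The flavor still advertises nominal RAM/CPU/disk figures, but
    # scheduling is driven entirely by the single custom resource class.
    flavor = nova.flavors.create(name='bm.gold', ram=131072, vcpus=16,
                                 disk=931)
    flavor.set_keys({
        'resources:CUSTOM_BAREMETAL_GOLD': '1',  # illustrative class name
        'resources:VCPU': '0',
        'resources:MEMORY_MB': '0',
        'resources:DISK_GB': '0',
    })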
>>
>>
>> And most of the cloud providers in the market offer virtual machines and
>> bare metal as separate resources, but unfortunately it's hard to achieve
>> this with one compute service. I have heard of people deploying a
>> separate, single-driver Nova just for bare metal alongside the one for
>> virtual machines, with many downstream hacks, but as the changes to Nova
>> would be massive and potentially invasive to virtual machines, it seems
>> impractical to land them upstream.
>>
>> So we created Mogan [1] about one year ago, which aims to offer bare
>> metal as a first-class resource to users, with a set of bare metal
>> specific APIs and a baremetal-centric scheduler (built on the Placement
>> service). It started as something of an experimental project, but the
>> outcome makes us believe it's the right way. Mogan will fully embrace
>> Ironic for bare metal provisioning, and with RSD servers [2] introduced
>> to OpenStack it will be a new world for bare metal, as we will be able
>> to compose hardware resources on the fly.
>>
>> Also, I would like to clarify the overlap between Mogan and Nova. There
>> are surely users who want one API for managing compute resources because
>> they don't care whether they get a virtual machine or a bare metal
>> server; the baremetal driver with Nova is still the right choice for
>> such users to get raw-performance compute resources. By contrast, Mogan
>> is for dedicated bare metal users and for cloud providers who want to
>> offer bare metal as a separate resource.
>>
>> Thank you for your time!
>>
>>
>> [1] https://wiki.openstack.org/wiki/Mogan
>> [2] https://www.intel.com/content/www/us/en/architecture-and-tec
>> hnology/rack-scale-design-overview.html
>>
>> --
>> Best Regards,
>> Zhenguo Niu
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> Cross-posting to the operators list since they are the community that
> you'll likely need to convince the most about Mogan and whether or not they
> want to start experimenting with it.
>
> --
>
> Thanks,
>
> Matt
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-06 Thread Erik McCormick
On Tue, Jun 6, 2017 at 4:44 PM, Lance Bragstad  wrote:
>
>
> On Tue, Jun 6, 2017 at 3:06 PM, Marc Heckmann 
> wrote:
>>
>> Hi,
>>
>> On Tue, 2017-06-06 at 10:09 -0500, Lance Bragstad wrote:
>>
>> Also, with all the people involved with this thread, I'm curious what the
>> best way is to get consensus. If I've tallied the responses properly, we
>> have 5 in favor of option #2 and 1 in favor of option #3. This week is spec
>> freeze for keystone, so I see a slim chance of this getting committed to
>> Pike [0]. If we do have spare cycles across the team we could start working
>> on an early version and get eyes on it. If we straighten out everyone's
>> concerns early we could land option #2 early in Queens.
>>
>>
>> I was the only one in favour of option 3 only because I've spent a bunch
>> of time playing with option #1 in the past. As I mentioned previously in the
>> thread, if #2 is more in line with where the project is going, then I'm all
>> for it. At this point, the admin scope issue has been around long enough
>> that Queens doesn't seem that far off.
>
>
> From an administrative point-of-view, would you consider option #1 or option
> #2 to be better long term?
>

Count me as another +1 for option 2. It's the right way to go long
term, and we've lived with the status quo long enough that I'm OK
waiting a release or even two more for it. I think option 3 would just
muddy the waters.

-Erik

>>
>>
>> -m
>>
>>
>> I guess it comes down to how fast folks want it.
>>
>> [0] https://review.openstack.org/#/c/464763/
>>
>> On Tue, Jun 6, 2017 at 10:01 AM, Lance Bragstad 
>> wrote:
>>
>> I replied to John, but only to him directly. I'm re-sending the responses I
>> sent to him, this time with the intended audience on the thread. Sorry for
>> not catching that earlier.
>>
>>
>> On Fri, May 26, 2017 at 2:44 AM, John Garbutt 
>> wrote:
>>
>> +1 on not forcing Operators to transition to something new twice, even if
>> we did go for option 3.
>>
>>
>> The more I think about this, the more it worries me from a developer
>> perspective. If we ended up going with option 3, then we'd be supporting
>> both methods of elevating privileges. That means two paths for doing the
>> same thing in keystone. It also means oslo.context, keystonemiddleware, or
>> any other library consuming tokens that needs to understand elevated
>> privileges needs to understand both approaches.
>>
>>
>>
>> Do we have an agreed non-disruptive upgrade path mapped out yet? (For any
>> of the options) We spoke about fallback rules you pass but with a warning to
>> give us a smoother transition. I think that's my main objection with the
>> existing patches, having to tell all admins to get their token for a
>> different project, and give them roles in that project, all before being
>> able to upgrade.
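What "get their token for a different project" boils down to in practice is
re-authenticating scoped to the designated admin project. A minimal sketch
with keystoneauth1; the user, password, project and URL are illustrative.

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    # The same user simply scopes a fresh token to the special project
    # (here assumed to be called 'admin') before doing cloud-wide work.
    auth = v3.Password(auth_url='https://keystone.example.com/v3',
                       username='alice', password='secret',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    token = sess.get_token()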
>>
>>
>> Thanks for bringing up the upgrade case! You've kinda described an upgrade
>> for option 1. This is what I was thinking for option 2:
>>
>> - deployment upgrades to a release that supports global role assignments
>> - operator creates a set of global roles (i.e. global_admin)
>> - operator grants global roles to various people that need it (i.e. all
>> admins)
>> - operator informs admins to create globally scoped tokens
>> - operator rolls out necessary policy changes
>>
>> If I'm thinking about this properly, nothing would change at the
>> project-scope level for existing users (who don't need a global role
>> assignment). I'm hoping someone can help firm ^ that up or improve it if
>> needed.
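To ground the "necessary policy changes" step above, here is a minimal
sketch of what the check strings could look like once a globally scoped
token carries an attribute marking the assignment as global. The attribute
spelling below ("system_scope:all") follows what keystone's later
system-scope work adopted; treat it as an assumption, since the exact token
attribute was still being designed at the time of this thread.

    from oslo_config import cfg
    from oslo_policy import policy

    # Today: "admin-ness" is just the admin role on some project, so the
    # elevated rule is effectively project-scoped even though the intent
    # is cloud-wide.
    legacy_admin = policy.RuleDefault('context_is_admin', 'role:admin')

    # Option 2 (sketch): key the elevated rule off the global/system scope
    # of the token instead of off whichever project the role happens to be
    # assigned on.
    global_admin = policy.RuleDefault('context_is_cloud_admin',
                                      'role:admin and system_scope:all')

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_defaults([legacy_admin, global_admin])

    # Quick self-check with a hand-rolled creds dict standing in for the
    # values a globally scoped token would feed into policy enforcement.
    creds = {'roles': ['admin'], 'system_scope': 'all'}
    assert enforcer.enforce('context_is_cloud_admin', {}, creds)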
>>
>>
>>
>> Thanks,
>> johnthetubaguy
>>
>> On Fri, 26 May 2017 at 08:09, Belmiro Moreira
>>  wrote:
>>
>> Hi,
>> thanks for bringing this into discussion in the Operators list.
>>
>> Options 1 and 2 are not complementary but completely different.
>> So, considering "Option 2" and the goal to target it for Queens I would
>> prefer not going into a migration path in
>> Pike and then again in Queens.
>>
>> Belmiro
>>
>> On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:
>>
>> I think option 2 is better.
>>
>> Best Regards
>> Chaoyi Huang (joehuang)
>> 
>> From: Lance Bragstad [lbrags...@gmail.com]
>> Sent: 25 May 2017 3:47
>> To: OpenStack Development Mailing List (not for usage questions);
>> openstack-operat...@lists.openstack.org
>> Subject: Re: [openstack-dev]
>> [keystone][nova][cinder][glance][neutron][horizon][policy] defining
>> admin-ness
>>
>> I'd like to fill in a little more context here. I see three options with
>> the current two proposals.
>>
>> Option 1
>>
>> Use a special admin project to denote elevated privileges. For those
>> unfamiliar with the approach, it would rely on every deployment having an
>> "admin" project defined in configuration [0].
>>
>> How it works:
>>
>> Role assignments on this project represent global scope which is denoted
>> by a boolean attribute in the token response. A user with an 'admin' role
>> assignment on this project is equivalent to the global or cloud
>> administrator. Ideally, if a u
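For contrast with the globally scoped sketch earlier in the thread, the
admin-project approach surfaces in service policy as a flag check: keystone
marks tokens scoped to the one configured admin project with a boolean, and
rules test that boolean. A minimal sketch, assuming the flag reaches policy
as the is_admin_project value that oslo.context exposes.

    from oslo_policy import policy

    # Option 1 (sketch): elevated privilege is tied to the one special
    # project named in keystone's configuration, and shows up to services
    # as a boolean attribute on the token.
    cloud_admin = policy.RuleDefault('context_is_cloud_admin',
                                     'role:admin and is_admin_project:True')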

Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Erik McCormick
On Fri, Nov 6, 2015 at 12:28 PM, Mark Baker  wrote:
> Worth mentioning that OpenStack releases that come out at the same time as
> Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
> supported for 5 years by Canonical so are already kind of an LTS. Support in
> this context means patches, updates and commercial support (for a fee).
> For paying customers 3 years of patches, updates and commercial support for
> April releases (Kilo, O, Q, etc.) is also available.
>

Does that mean that you are actually backporting and gate testing
patches downstream that aren't being done upstream? I somehow doubt
it, but if so, then it would be great if you could lead some sort of
initiative to push those patches back upstream.


-Erik

>
>
> Best Regards
>
>
> Mark Baker
>
> On Fri, Nov 6, 2015 at 5:03 PM, James King  wrote:
>>
>> +1 for some sort of LTS release system.
>>
>> Telcos and risk-averse organizations working with sensitive data might not
>> be able to upgrade nearly as fast as the releases keep coming out. From the
>> summit in Japan it sounds like companies running some fairly critical public
>> infrastructure on Openstack aren’t going to be upgrading to Kilo any time
>> soon.
>>
>> Public clouds might even benefit from this. I know we (Dreamcompute) are
>> working towards tracking the upstream releases closer… but it’s not feasible
>> for everyone.
>>
>> I’m not sure whether the resources exist to do this but it’d be a nice to
>> have, imho.
>>
>> > On Nov 6, 2015, at 11:47 AM, Donald Talton 
>> > wrote:
>> >
>> > I like the idea of LTS releases.
>> >
>> > Speaking to my own deployments, there are many new features we are not
>> > interested in, and wouldn't be, until we can get organizational (cultural)
>> > change in place, or see stability and scalability.
>> >
>> > We can't rely on, or expect, that orgs will move to the CI/CD model for
>> > infra, when they aren't even ready to do that for their own apps. It's
>> > still a new "paradigm" for many of us. CI/CD requires a considerable
>> > engineering effort, and given that the decision to "switch" to OpenStack
>> > is often driven by cost-savings over enterprise virtualization, adding
>> > those costs back in via engineering salaries doesn't make fiscal sense.
>> >
>> > My big argument is that if Icehouse/Juno works and is stable, and I
>> > don't need newer features from subsequent releases, why would I expend
>> > the effort until such a time that I do want those features? Thankfully
>> > there are vendors that understand this. Keeping up with the release
>> > cycle just for the sake of keeping up with the release cycle is
>> > exhausting.
>> >
>> > -Original Message-
>> > From: Tony Breeds [mailto:t...@bakeyournoodle.com]
>> > Sent: Thursday, November 05, 2015 11:15 PM
>> > To: OpenStack Development Mailing List
>> > Cc: openstack-operat...@lists.openstack.org
>> > Subject: [Openstack-operators] [stable][all] Keeping Juno "alive" for
>> > longer.
>> >
>> > Hello all,
>> >
>> > I'll start by acknowledging that this is a big and complex issue and I
>> > do not claim to be across all the view points, nor do I claim to be
>> > particularly persuasive ;P
>> >
>> > Having stated that, I'd like to seek constructive feedback on the idea
>> > of keeping Juno around for a little longer.  During the summit I spoke to a
>> > number of operators, vendors and developers on this topic.  There was some
>> > support and some "That's crazy pants!" responses.  I clearly didn't make it
>> > around to everyone, hence this email.
>> >
>> > Acknowledging my affiliation/bias:  I work for Rackspace in the private
>> > cloud team.  We support a number of customers currently running Juno that
>> > are, for a variety of reasons, challenged by the Kilo upgrade.
>> >
>> > Here is a summary of the main points that have come up in my
>> > conversations, both for and against.
>> >
>> > Keep Juno:
>> > * According to the current user survey[1] Icehouse still has the
>> >   biggest install base in production clouds.  Juno is second, which
>> >   makes sense. If we EOL Juno this month that means ~75% of production
>> >   clouds will be running an EOL'd release.  Clearly many of these
>> >   operators have support contracts from their vendor, so those
>> >   operators won't be left completely adrift, but I believe it's the
>> >   vendors that benefit from keeping Juno around. By working together
>> >   *in the community* we'll see the best results.
>> >
>> > * We only recently EOL'd Icehouse[2].  Sure it was well communicated,
>> >   but we still have a huge Icehouse/Juno install base.
>> >
>> > For me this is pretty compelling but for balance 
>> >
>> > Keep the current plan and EOL Juno Real Soon Now:
>> > * There is also no ignoring the elephant in the room that with HP
>> >   stepping back from public cloud there are questions about our CI
>> >   capacity, and keeping Juno will have an impact on that crit