[Openstack-operators] Developer Mailing List Digest September 23-29

2017-09-29 Thread Mike Perez
HTML version: 
https://www.openstack.org/blog/2017/09/developer-mailing-list-digest-september-23-29-2017/

# Summaries
* [TC Report 
39](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122679.html)
* [Release countdown for week R-21, September 29 - October 
6](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122819.html)
* [Technical committee status update, September 
29](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122879.html)
* [Placement/resource providers update 
36](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122883.html)
* [POST 
/api-sig/news](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122805.html)

## Sydney Forum

### General Links
* [What the heck is the Forum?](https://wiki.openstack.org/wiki/Forum)
* When: November 6-8, 2017
* Where: OpenStack Summit in Sydney, Australia
* Register for [The OpenStack Sydney 
Summit](https://www.openstack.org/summit/sydney-2017/) and show up!
* The deadline for topic sessions was September 29th UTC, via this [submission 
form](http://forumtopics.openstack.org/).
* [All Sydney Forum etherpads](https://wiki.openstack.org/wiki/Forum/Sydney2017)

### Etherpads (copied from Sydney Forum wiki)

#### Catch-alls
If you want to post an idea, but aren’t working with a specific team or working 
group, you can use these:

* [Technical Committee 
Catch-all](https://etherpad.openstack.org/p/SYD-TC-brainstorming) 
* [User Committee 
Catch-all](https://etherpad.openstack.org/p/SYD-UC-brainstorming) 

#### Etherpads from Teams and Working Groups
* [Nova](https://etherpad.openstack.org/p/SYD-nova-brainstorming) 
* [Cinder](https://etherpad.openstack.org/p/cinder-sydney-forum-topics) 
* [Ops Meetups 
Team](https://etherpad.openstack.org/p/SYD-ops-session-ideas) 
* [OpenStack 
Ansible](https://etherpad.openstack.org/p/osa-sydney-summit-planning) 
* [Self-healing 
SIG](https://etherpad.openstack.org/p/self-healing-rocky-forum) 
* [Neutron Quality-Of-Service 
Discussion](https://etherpad.openstack.org/p/qos-talk-sydney) 
* [QA Team](https://etherpad.openstack.org/p/qa-sydney-forum-topics) 
* [Watcher](https://etherpad.openstack.org/p/watcher-Sydney-meetings) 
* [SIG K8s](https://etherpad.openstack.org/p/sig-k8s-sydney-forum-topics) 
* [Kolla](https://etherpad.openstack.org/p/kolla-sydney-forum-topics) 


## Garbage Patches for Simple Typo Fixes
* There is some agreement that we as a community have to do something beyond 
mentoring new developers.
* Others have mentioned that some companies are doing this to game the 
system in other communities besides OpenStack.
* Gain: showing a high contribution level with "low quality" patches.
* Some people in the community want to figuratively put up a stop sign, 
arguing that otherwise things will never improve. If we don't do something now 
we are hurting everyone, including those developers who could have made more 
meaningful contributions.
* Others would like us, before we create harsh processes, to collect data 
showing that earlier attempts to provide guidance have not worked.
* We have a lot of anecdotal information right now that we need to collect 
and summarize.
* If the results show that there are clear abuses, rather than 
misunderstandings, then we can use that data to design effective blocks without 
hurting other contributors or creating a reputation that our community is not 
welcoming.
* Some are unclear why there is so much outrage about these patches to begin 
with. They are fixing real things.
* Maybe there is a CI cost, but the faster such patches are merged, the less 
likely someone is to propose the same fix again, which keeps the CI cost down.
* If people are deeply concerned about CI resources, step one is to give us 
better accounting of the existing system so we can see where resources are 
currently spent.
* [Thread](http://lists.openstack.org/pipermail/openstack-dev/2017-September/thread.html#122472)


## Status of the Stewardship Working Group
* The Stewardship Working Group was created after the first session of 
leadership training that the Technical Committee, User Committee, Board, and 
other community members were invited to participate in during 2016.
* Follow-up on what we learned at ZingTrain and push adoption of the tools we 
discovered there.
* While we did (and continue to) apply some of what we learned, the activity 
of the workgroup mostly died when we decided to experiment with getting rid of 
weekly meetings for greater inclusion.
* It also lost its original leadership.
* The workgroup is dormant until someone steps up and leads it again.
* Join the #openstack-swg channel on Freenode IRC if interested.
* [Message](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122868.html)


## Improving the Process for Release Marketing
* Release marketing is a critical part of sharing what's new in each release.
* Let's work together on reworking how the marketing…

Re: [Openstack-operators] [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-29 Thread Arkady.Kanevsky
There are some loose ends that Saverio is correctly bringing up.
These are perfect points to discuss at the Forum.
I suggest we start an etherpad to collect an agenda for it.

-Original Message-
From: Lee Yarwood [mailto:lyarw...@redhat.com] 
Sent: Friday, September 29, 2017 7:04 AM
To: Saverio Proto 
Cc: OpenStack Development Mailing List (not for usage questions) 
; openstack-operators@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] 
[skip-level-upgrades][fast-forward-upgrades] PTG summary

On 29-09-17 11:40:21, Saverio Proto wrote:
> Hello,
> 
> sorry I could not make it to the PTG.
> 
> I have an idea that I want to share with the community. I hope this is 
> a good place to start the discussion.
> 
> After years of Openstack operations, upgrading releases from Icehouse 
> to Newton, the feeling is that the control plane upgrade is doable.
> 
> But it is also a lot of pain to upgrade all the compute nodes. This 
> really causes downtime to the VMs that are running.
> I can't always make live migrations, sometimes the VMs are just too 
> big or too busy.
> 
> It would be nice to guarantee the ability to run an updated control 
> plane with compute nodes up to N-3 Release.
> 
> This way even if we have to upgrade the control plane every 6 months, 
> we can keep a longer lifetime for compute nodes. Basically we can 
> never upgrade them until we decommission the hardware.
> 
> If there are new features that require updated compute nodes, we can 
> always organize our datacenter in availability zones, not scheduling 
> new VMs to those compute nodes.
> 
> To my understanding this means having compatibility at least for the 
> nova-compute agent and the neutron-agents running on the compute node.
> 
> Is it a very bad idea ?
> 
> Do other people feel like me that upgrading all the compute nodes is 
> also a big part of the burden regarding the upgrade ?

Yeah, I don't think the Nova community would ever be able or willing to verify 
and maintain that level of backward compatibility. Ultimately there's nothing 
stopping you from upgrading Nova on the computes while also keeping instances 
running.

You only run into issues with kernel, OVS and QEMU (for n-cpu with
libvirt) etc upgrades that require reboots or instances to be restarted (either 
hard or via live-migration). If you're unable or just unwilling to take 
downtime for instances that can't be moved when these components require an 
update then you have bigger problems IMHO.

Regards,

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] Did you know archive_deleted_rows isn't super terrible anymore?

2017-09-29 Thread melanie witt

On Fri, 29 Sep 2017 13:49:55 -0500, Matt Riedemann wrote:

For a while now, actually.

Someone was asking about when archive_deleted_rows would actually work, 
and the answer is, it should since at least mitaka:


https://review.openstack.org/#/q/I77255c77780f0c2b99d59a9c20adecc85335bb18

And starting in Ocata there is the --until-complete option which lets 
you run it continuously until it's done, rather than the weird manual 
batching from before:


https://review.openstack.org/#/c/378718/

So this shouldn't be news, but it might be. So FYI.


True that. However, I want to give people a heads up about something I 
learned recently (today actually). I think problems with archive can 
arise if you've restarted your database after archiving, and attempt to 
do a future archive. The InnoDB engine in MySQL keeps the AUTO_INCREMENT 
counter only in memory, so after a restart it selects the maximum value 
and adds 1 to use as the next value [1].


So if you had soft-deleted rows with primary keys 1 through 10 in the 
main table and ran archive_deleted_rows, those rows would get inserted 
into the shadow table and be hard-deleted from the main table. Then, if 
you restarted the database, the primary key AUTO_INCREMENT counter would 
be initialized to 1 again and the primary keys you had archived would be 
reused. If those new rows with primary keys 1 through 10 were eventually 
soft-deleted and then you ran archive_deleted_rows, the archive would 
fail with something like, "DBDuplicateEntry: 
(pymysql.err.IntegrityError) (1062, u"Duplicate entry '1' for key 
'PRIMARY'")". The workaround would be to delete or otherwise move the 
archived rows containing duplicate keys out of the shadow table.
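
To make the failure mode concrete, here is a minimal sketch against a
throwaway MySQL 5.7 database (the table and database names below are made up
for illustration; Nova's real tables follow a shadow_<table> naming scheme):

mysql -u root demo_db <<'SQL'
-- Toy stand-ins for a Nova table and its shadow table.
CREATE TABLE main_t   (id INT AUTO_INCREMENT PRIMARY KEY, deleted INT NOT NULL DEFAULT 0);
CREATE TABLE shadow_t (id INT PRIMARY KEY, deleted INT NOT NULL DEFAULT 0);
INSERT INTO main_t (deleted) VALUES (0);                      -- row gets id 1
UPDATE main_t SET deleted = 1 WHERE id = 1;                   -- soft delete
INSERT INTO shadow_t SELECT * FROM main_t WHERE deleted = 1;  -- "archive"
DELETE FROM main_t WHERE deleted = 1;                         -- hard delete
SQL

# Restart mysqld here. InnoDB in 5.7 re-derives the AUTO_INCREMENT counter as
# MAX(id) + 1, and main_t is now empty, so the next insert reuses id 1.
mysql -u root demo_db <<'SQL'
INSERT INTO main_t (deleted) VALUES (0);                      -- id 1 again
UPDATE main_t SET deleted = 1 WHERE id = 1;
INSERT INTO shadow_t SELECT * FROM main_t WHERE deleted = 1;
-- ERROR 1062 (23000): Duplicate entry '1' for key 'PRIMARY'
SQL

MySQL 8.0 persists the AUTO_INCREMENT counter across restarts, which avoids
this particular trap.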


-melanie

[1] 
https://dev.mysql.com/doc/refman/5.7/en/innodb-auto-increment-handling.html#innodb-auto-increment-initialization



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [nova] Did you know archive_deleted_rows isn't super terrible anymore?

2017-09-29 Thread Matt Riedemann

For a while now, actually.

Someone was asking about when archive_deleted_rows would actually work, 
and the answer is, it should since at least mitaka:


https://review.openstack.org/#/q/I77255c77780f0c2b99d59a9c20adecc85335bb18

And starting in Ocata there is the --until-complete option which lets 
you run it continuously until it's done, rather than the weird manual 
batching from before:


https://review.openstack.org/#/c/378718/

So this shouldn't be news, but it might be. So FYI.
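
For anyone who wants to try it, a minimal sketch of the Ocata-and-later
invocation (run it wherever nova-manage can read your nova.conf; the exact set
of supporting flags varies by release):

nova-manage db archive_deleted_rows --until-complete

Pre-Ocata you instead had to call it repeatedly in batches until it reported
nothing left to archive.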

--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ceph Session at the Forum

2017-09-29 Thread Martial Michel
>
> Looks like Blair Bethwaite already has one in there called CEPH in
> OpenStack as a BOF.
> http://forumtopics.openstack.org/cfp/details/46


Hello,

Yes, during our last IRC meeting the Scientific Working Group discussed
our contributions to the Forum sessions, and a Ceph session came out
of that discussion.

For now, we had the following moderators in mind:
Blair Bethwaite
Mike May

I am sure as things get firmed up, we can and will add moderators.

Thank you,

-- Martial

PS: we also have an "HPC using OpenStack" session,
http://forumtopics.openstack.org/cfp/details/49
where Tim Randles and I were going to help moderate.
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ceph Session at the Forum

2017-09-29 Thread David Medberry
Looks like Blair Bethwaite already has one in there called CEPH in
OpenStack as a BOF.

http://forumtopics.openstack.org/cfp/details/46


On Fri, Sep 29, 2017 at 10:44 AM, Edgar Magana 
wrote:

> Hello,
>
> I know Patrick Donnelly (@pjdbatrick) was very interested in this session.
> Unfortunately, I do not have his email. Should I go ahead and add the
> session on his behalf?
>
> Edgar
>
> On 9/28/17, 10:32 AM, "Erik McCormick"  wrote:
>
> Hey Ops folks,
>
> A Ceph session was put on the discussion Etherpad for the forum, and I
> know a lot of folks have expressed interest in doing one, especially
> since there's no Ceph Day going on this time around.
>
> I need a volunteer to run the session and set up an agenda. If you're
> willing and able to do it, you can either submit the session yourself
> at http://forumtopics.openstack.org/ or let me know and I'll be happy
> to add it.
>
> Cheers,
> Erik
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Ceph Session at the Forum

2017-09-29 Thread Edgar Magana
Hello,

I know Patrick Donnelly (@pjdbatrick) was very interested in this session. 
Unfortunately, I do not have his email. Should I go ahead and add the session 
on his behalf?

Edgar

On 9/28/17, 10:32 AM, "Erik McCormick"  wrote:

Hey Ops folks,

A Ceph session was put on the discussion Etherpad for the forum, and I
know a lot of folks have expressed interest in doing one, especially
since there's no Ceph Day going on this time around.

I need a volunteer to run the session and set up an agenda. If you're
willing and able to do it, you can either submit the session yourself
at http://forumtopics.openstack.org/ or let me know and I'll be happy
to add it.

Cheers,
Erik

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [magnum] issue using magnum on Pike

2017-09-29 Thread Andy Wojnarek
Interesting.

The SuSE repos have 5.0.2 in the Pike repo:

gvicopnstk01:~ # zypper se -s magnum
zypper se -s magnum
Loading repository data...
Reading installed packages...

S  | Name                                     | Type       | Version        | Arch   | Repository
---+------------------------------------------+------------+----------------+--------+----------------------
   | openstack-horizon-plugin-magnum-ui       | package    | 3.0.1~dev3-1.1 | noarch | Pike
   | openstack-horizon-plugin-magnum-ui       | srcpackage | 3.0.1~dev3-1.1 | noarch | Pike
   | openstack-horizon-plugin-magnum-ui-test  | package    | 3.0.1~dev3-1.1 | noarch | Pike
i  | openstack-magnum                         | package    | 5.0.2~dev8-1.2 | noarch | (System Packages)
v  | openstack-magnum                         | package    | 5.0.2~dev8-2.1 | noarch | Pike
   | openstack-magnum                         | srcpackage | 5.0.2~dev8-2.1 | noarch | Pike
i+ | openstack-magnum-api                     | package    | 5.0.2~dev8-1.2 | noarch | (System Packages)
v  | openstack-magnum-api                     | package    | 5.0.2~dev8-2.1 | noarch | Pike
i+ | openstack-magnum-conductor               | package    | 5.0.2~dev8-1.2 | noarch | (System Packages)
v  | openstack-magnum-conductor               | package    | 5.0.2~dev8-2.1 | noarch | Pike
   | openstack-magnum-doc                     | package    | 5.0.2~dev8-2.1 | noarch | Pike
   | openstack-magnum-doc                     | srcpackage | 5.0.2~dev8-2.1 | noarch | Pike
   | openstack-magnum-test                    | package    | 5.0.2~dev8-2.1 | noarch | Pike
   | python-horizon-plugin-magnum-ui          | package    | 3.0.1~dev3-1.1 | noarch | Pike
i  | python-magnum                            | package    | 5.0.2~dev8-1.2 | noarch | (System Packages)
v  | python-magnum                            | package    | 5.0.2~dev8-2.1 | noarch | Pike
i+ | python-magnumclient                      | package    | 2.6.0-1.11     | noarch | Pike
v  | python-magnumclient                      | package    | 2.3.0-3.3      | noarch | openSUSE-Leap-42.3-0
   | python-magnumclient                      | srcpackage | 2.6.0-1.11     | noarch | Pike
   | python-magnumclient-doc                  | package    | 2.6.0-1.11     | noarch | Pike
   | python-magnumclient-doc                  | package    | 2.3.0-3.3      | noarch | openSUSE-Leap-42.3-0



Thanks,
Andrew Wojnarek |  Sr. Systems Engineer| ATS Group, LLC
mobile 717.856.6901 | 
andy.wojna...@theatsgroup.com
Galileo Performance Explorer Blog Offers Deep 
Insights for Server/Storage Systems

From: Erik McCormick 
Date: Friday, September 29, 2017 at 10:01 AM
To: Andrew Wojnarek 
Cc: openstack-operators 
Subject: Re: [Openstack-operators] [magnum] issue using magnum on Pike

The current release of Magnum is 5.0.1. You seem to be running a later dev 
release. Perhaps some regression got introduced in that build?

-Erik


On Sep 29, 2017 8:59 AM, "Andy Wojnarek" 
mailto:andy.wojna...@theatsgroup.com>> wrote:
So I started a fresh install of Pike on OpenSuSE in my test lab at work, and 
I’m having a hard time getting Magnum to work. I’m getting this error on 
Cluster Create:

http://paste.openstack.org/show/622304/

(AttributeError: 'module' object has no attribute 'APIClient')


I’m running OpenSuSE 42.2, here are my magnum packages:
gvicopnstk01:~ # rpm -qa | grep -i magnum
openstack-magnum-api-5.0.2~dev8-1.2.noarch
python-magnum-5.0.2~dev8-1.2.noarch
openstack-magnum-5.0.2~dev8-1.2.noarch
openstack-magnum-conductor-5.0.2~dev8-1.2.noarch
python-magnumclient-2.6.0-1.11.noarch


Command I’m running to create the cluster:
gvicopnstk01:~ # magnum cluster-create --name k8s-cluster  --cluster-template 
k8s-cluster-template   --master-count 1   --node-count 1


The Template I’m using:
gvicopnstk01:~ # magnum cluster-template-show 
6fa514c1-f598-46b1-8bba-6c7c728094bc
+---+--+
| Property  | Value|
+---+--+
| insecure_registry | -|
| labels| {}   |
| updated_at| -|
| floating_ip_enabled   | True |
| fixed_subnet  | -|
| master_flavor_id  | m1.small |
| uuid  | 6fa514c1-f598-46b1-8bba-6c7c728094bc |
| no_proxy  | -|
| https_proxy   | -|
| tls_disabled  | False|
| keypair_id| AW   |
| public| False|
| http_proxy| -   

Re: [Openstack-operators] [nova] Forum topics brainstorming

2017-09-29 Thread Matt Riedemann

On 9/28/2017 4:45 PM, Matt Riedemann wrote:

2. Placement update and direction

Same as the Cells v2 discussion - a Pike update and the focus items for 
Queens. This would also be a place we can mention the Ironic flavor 
migration to custom resource classes that happens in Pike.


Someone else proposed something very similar sounding to this:

http://forumtopics.openstack.org/cfp/details/50

But it's unclear what overlap there is, or if ^ is proposing just 
talking about future use cases for nested resource providers and custom 
resource classes for all the external-to-nova thingies people would like 
to do.


--

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [magnum] issue using magnum on Pike

2017-09-29 Thread Erik McCormick
The current release of Magnum is 5.0.1. You seem to be running a later dev
release. Perhaps some regression got introduced in that build?

-Erik


On Sep 29, 2017 8:59 AM, "Andy Wojnarek" 
wrote:

So I started a fresh install of Pike on OpenSuSE in my test lab at work,
and I’m having a hard time getting Magnum to work. I’m getting this error
on Cluster Create:



http://paste.openstack.org/show/622304/



(AttributeError: 'module' object has no attribute 'APIClient')





*I’m running OpenSuSE 42.2, here are my magnum packages:*

gvicopnstk01:~ # rpm -qa | grep -i magnum

openstack-magnum-api-5.0.2~dev8-1.2.noarch

python-magnum-5.0.2~dev8-1.2.noarch

openstack-magnum-5.0.2~dev8-1.2.noarch

openstack-magnum-conductor-5.0.2~dev8-1.2.noarch

python-magnumclient-2.6.0-1.11.noarch





*Command I’m running to create the cluster:*

gvicopnstk01:~ # magnum cluster-create --name k8s-cluster
 --cluster-template k8s-cluster-template   --master-count 1   --node-count 1





*The Template I’m using:*

gvicopnstk01:~ # magnum cluster-template-show 6fa514c1-f598-46b1-8bba-6c7c728094bc
+---+--+
| Property  | Value|
+---+--+
| insecure_registry | -|
| labels| {}   |
| updated_at| -|
| floating_ip_enabled   | True |
| fixed_subnet  | -|
| master_flavor_id  | m1.small |
| uuid  | 6fa514c1-f598-46b1-8bba-6c7c728094bc |
| no_proxy  | -|
| https_proxy   | -|
| tls_disabled  | False|
| keypair_id| AW   |
| public| False|
| http_proxy| -|
| docker_volume_size| -|
| server_type   | vm   |
| external_network_id   | provider |
| cluster_distro| fedora-atomic|
| image_id  | fedora-atomic-ocata  |
| volume_driver | -|
| registry_enabled  | False|
| docker_storage_driver | devicemapper |
| apiserver_port| -|
| name  | k8s-cluster-template |
| created_at| 2017-09-28T19:25:58+00:00|
| network_driver| flannel  |
| fixed_network | -|
| coe   | kubernetes   |
| flavor_id | m1.small |
| master_lb_enabled | False|
| dns_nameserver| 192.168.240.150  |

(The image name is Ocata because I downloaded the Ocata image, I figured it
was fine)



I cannot find anything about the error I'm getting on Google. Has anyone got
any ideas about the right direction I should go?



Thanks,

Andrew Wojnarek |  Sr. Systems Engineer| ATS Group, LLC

mobile 717.856.6901 <(717)%20856-6901> | andy.wojna...@theatsgroup.com

*Galileo Performance Explorer Blog* * Offers
Deep Insights for Server/Storage Systems*

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [magnum] issue using magnum on Pike

2017-09-29 Thread Andy Wojnarek
So I started a fresh install of Pike on OpenSuSE in my test lab at work, and 
I’m having a hard time getting Magnum to work. I’m getting this error on 
Cluster Create:

http://paste.openstack.org/show/622304/

(AttributeError: 'module' object has no attribute 'APIClient')


I’m running OpenSuSE 42.2, here are my magnum packages:
gvicopnstk01:~ # rpm -qa | grep -i magnum
openstack-magnum-api-5.0.2~dev8-1.2.noarch
python-magnum-5.0.2~dev8-1.2.noarch
openstack-magnum-5.0.2~dev8-1.2.noarch
openstack-magnum-conductor-5.0.2~dev8-1.2.noarch
python-magnumclient-2.6.0-1.11.noarch


Command I’m running to create the cluster:
gvicopnstk01:~ # magnum cluster-create --name k8s-cluster  --cluster-template 
k8s-cluster-template   --master-count 1   --node-count 1


The Template I’m using:
gvicopnstk01:~ # magnum cluster-template-show 
6fa514c1-f598-46b1-8bba-6c7c728094bc
+---+--+
| Property  | Value|
+---+--+
| insecure_registry | -|
| labels| {}   |
| updated_at| -|
| floating_ip_enabled   | True |
| fixed_subnet  | -|
| master_flavor_id  | m1.small |
| uuid  | 6fa514c1-f598-46b1-8bba-6c7c728094bc |
| no_proxy  | -|
| https_proxy   | -|
| tls_disabled  | False|
| keypair_id| AW   |
| public| False|
| http_proxy| -|
| docker_volume_size| -|
| server_type   | vm   |
| external_network_id   | provider |
| cluster_distro| fedora-atomic|
| image_id  | fedora-atomic-ocata  |
| volume_driver | -|
| registry_enabled  | False|
| docker_storage_driver | devicemapper |
| apiserver_port| -|
| name  | k8s-cluster-template |
| created_at| 2017-09-28T19:25:58+00:00|
| network_driver| flannel  |
| fixed_network | -|
| coe   | kubernetes   |
| flavor_id | m1.small |
| master_lb_enabled | False|
| dns_nameserver| 192.168.240.150  |





(The image name is Ocata because I downloaded the Ocata image, I figured it was 
fine)

I cannot find anything about the error I'm getting on Google. Has anyone got 
any ideas about the right direction I should go?

Thanks,
Andrew Wojnarek |  Sr. Systems Engineer| ATS Group, LLC
mobile 717.856.6901 | 
andy.wojna...@theatsgroup.com
Galileo Performance Explorer Blog Offers Deep 
Insights for Server/Storage Systems
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-29 Thread Lee Yarwood
On 29-09-17 11:40:21, Saverio Proto wrote:
> Hello,
> 
> sorry I could not make it to the PTG.
> 
> I have an idea that I want to share with the community. I hope this is a
> good place to start the discussion.
> 
> After years of Openstack operations, upgrading releases from Icehouse to
> Newton, the feeling is that the control plane upgrade is doable.
> 
> But it is also a lot of pain to upgrade all the compute nodes. This
> really causes downtime to the VMs that are running.
> I can't always make live migrations, sometimes the VMs are just too big
> or too busy.
> 
> It would be nice to guarantee the ability to run an updated control
> plane with compute nodes up to N-3 Release.
> 
> This way even if we have to upgrade the control plane every 6 months, we
> can keep a longer lifetime for compute nodes. Basically we can never
> upgrade them until we decommission the hardware.
> 
> If there are new features that require updated compute nodes, we can
> always organize our datacenter in availability zones, not scheduling new
> VMs to those compute nodes.
> 
> To my understanding this means having compatibility at least for the
> nova-compute agent and the neutron-agents running on the compute node.
> 
> Is it a very bad idea ?
> 
> Do other people feel like me that upgrading all the compute nodes is
> also a big part of the burden regarding the upgrade ?

Yeah, I don't think the Nova community would ever be able or willing to
verify and maintain that level of backward compatibility. Ultimately
there's nothing stopping you from upgrading Nova on the computes while
also keeping instances running.

You only run into issues with kernel, OVS and QEMU (for n-cpu with
libvirt) etc upgrades that require reboots or instances to be restarted
(either hard or via live-migration). If you're unable or just unwilling
to take downtime for instances that can't be moved when these components
require an update then you have bigger problems IMHO.

Regards,

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators