[openstack-dev] [Dragonflow]Weekly meeting is canceled today

2017-10-30 Thread Omer Anson
Hi,

Sorry for the very late notice. The weekly meeting scheduled for today is canceled.

Next one will be next week, as planned.

Thanks, and sorry again,
Omer.


Re: [openstack-dev] [infra][all][stable] Zuul v3 changes and stable branches

2017-10-30 Thread Boden Russell
On 10/27/17 6:35 PM, James E. Blair wrote:
> 
> We're rolling out a new version of Zuul that corrects the issues, and
> the migration doc has been updated.  The main things to know are:
> 
> * If your project has stable branches, we recommend backporting the Zuul
>   config along with all the playbooks and roles that are in your repo to
>   the stable branches.

Does this apply to projects that don't have an in-repo config in master
and only use shared artifacts?

For example, our project's (master) pipeline is in project-config's
projects.yaml and only uses shared templates/jobs/playbooks. Is the
expectation that we copy this pipeline to an in-repo zuul.yaml for each
stable branch as well as the "shared" playbooks?



[openstack-dev] [neutron] q-agt failed to start with ovs firewall driver

2017-10-30 Thread Lajos Katona

Hi,

Perhaps this is my fault but this morning I wanted to start devstack 
(master) with q-trunk enabled, and got this error:
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]:   File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 65, in _launch
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]:     raise e
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ImportError: Class not found.
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: CRITICAL neutron [-] Unhandled error: ImportError: Class not found.
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron Traceback (most recent call last):
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron   File "/usr/local/bin/neutron-openvswitch-agent", line 10, in <module>
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron     sys.exit(main())
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron   File "/opt/stack/neutron/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py", line 20, in main
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron     agent_main.main()
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/main.py", line 49, in main
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron     mod.main()
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron   File "/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py", line 35, in main
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron     'neutron.plugins.ml2.drivers.openvswitch.agent.'
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron   File "/usr/local/lib/python2.7/dist-packages/ryu/base/app_manager.py", line 375, in run_apps
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron     hub.joinall(services)
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron   File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 103, in joinall
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron     t.wait()
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 175, in wait
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron     return self._exit_event.wait()
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 125, in wait
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron     current.throw(*self._exc)
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron     result = function(*args, **kwargs)
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron   File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 65, in _launch
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron     raise e
Oct 30 08:59:22 horizon neutron-openvswitch-agent[24556]: ERROR neutron ImportError: Class not found.


After hitting the problem, I started successfully with the noop firewall driver.
I ran devstack with *PIP_UPGRADE=True* and *RECLONE=yes*.

Could you help me figure out whether this is some misconfiguration on my side,
or perhaps a requirements issue?
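
(A quick check, not from the thread, that may surface the real import failure
hidden behind ryu's generic "Class not found." message: resolve the firewall
driver the same way the agent does. The entry-point namespace and the
'openvswitch' alias below are assumptions based on neutron's setup.cfg; adjust
them to your firewall_driver setting.)

    # Run inside the devstack Python environment.
    from stevedore import driver

    # DriverManager resolves the alias against the entry-point namespace;
    # if the underlying import is broken, this raises the real exception
    # instead of ryu's wrapped one.
    mgr = driver.DriverManager('neutron.agent.firewall_drivers', 'openvswitch')
    print(mgr.driver)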


Regards
Lajos



[openstack-dev] [nova] Notification update week 44

2017-10-30 Thread Balazs Gibizer

Hi,

Here is the status update / focus settings mail for w44.

Bugs
----

[Undecided] https://bugs.launchpad.net/nova/+bug/1535254 illustration 
of 'notify_on_state_change' are different from implementation
As the behavior has been unchanged for the last 5 years, a patch is proposed
to update the documentation to reflect this long-standing behavior.

https://review.openstack.org/516264


Versioned notification transformation
-------------------------------------
There are 3 patches that only need a second +2:
* https://review.openstack.org/#/c/467514 Transform keypair.import 
notification
* https://review.openstack.org/#/c/396225 Transform 
instance.trigger_crash_dump notification
* https://review.openstack.org/#/c/443764 use context mgr in 
instance.delete



Service create and destroy notifications
----------------------------------------

https://blueprints.launchpad.net/nova/+spec/service-create-destroy-notification
https://specs.openstack.org/openstack/nova-specs/specs/queens/approved/service-create-destroy-notification.html

Waiting for the implementation to be proposed.


Small improvements
------------------

* https://review.openstack.org/#/q/topic:refactor-notification-samples
Factor out duplicated notification sample data
Finally I had time to introduce the possibility to override fields
coming from a common sample. This way the samples in the documentation
can be kept realistic even if we deduplicate most of the sample data.
The series is up to date and shows how to drastically decrease the
amount of json sample data stored in the nova tree.
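
(An illustrative sketch only -- the helper below is hypothetical, not nova's
actual test code; it just shows the idea of keeping one common sample and
overriding the few fields that differ per notification:)

    import json

    def build_sample(common_path, overrides):
        # overrides maps dotted paths in the sample to replacement values.
        with open(common_path) as f:
            sample = json.load(f)
        for dotted_key, value in overrides.items():
            keys = dotted_key.split('.')
            node = sample
            for key in keys[:-1]:
                node = node[key]
            node[keys[-1]] = value
        return sample

    # e.g. reuse a common instance sample, changing a single payload field:
    # build_sample('common_instance.json',
    #              {'payload.nova_object.data.task_state': 'rebuilding'})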


Weekly meeting
--------------
Next subteam meeting will be held on the 31st of October, Tuesday 17:00 UTC
on openstack-meeting-4. (Please note that the EU already went through the
daylight saving time switch last weekend but the USA has not done so yet.)

https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171031T17


Cheers,
gibi




Re: [openstack-dev] [security] Security SIG

2017-10-30 Thread Thierry Carrez
Luke Hinds wrote:
> On Fri, Oct 27, 2017 at 6:08 PM, Jeremy Stanley wrote:
> 
>> On 2017-10-27 15:30:34 +0200 (+0200), Thierry Carrez wrote:
>>> [...]
>>> I think the Security project team would benefit from becoming a
>>> proper SIG.
>>> [...]
>> I tend to agree, though it's worth also considering what the
>> implications are for vulnerability management under the new model.
>> The VMT tended to act as an independent task force in the
>> beforetime, until the big t^W^Wproject reform of 2014, and then
>> allied itself with the newly-formed Security Team while continuing
>> operation autonomously under a fairly independent mandate. Does this
>> still make sense in a Security SIG context, or should we be
>> considering alternative (perhaps more formal?) governance for the
>> VMT in that scenario? I don't have especially cogent thoughts around
>> this yet, so interested to hear what others in the community think. 

So the activity of the Security project team can be split into a number
of things:

- Security advisories for supported projects (ossa by the VMT subteam)
- General security notices / information (ossn)
- Promotion of secure coding practices (bandit, syntribos)
- Promotion of secure operations (security-doc, anchor)
- Audit activities (security-analysis)

The only thing here that is not performed by an open group is the VMT
stuff. It also happens to be the most "upstream" of all the team
activity: it's closely related to stable branch maintenance.

Personally I think the VMT would be better split off from a Security SIG
-- it's suboptimal to have part of a SIG be a restricted group. It
could be made its own team, or attached to an existing group (stable
branch maintenance), or converted to a TC-owned "workgroup" (a TC
delegation of power, like it has always been).

> We discussed the SIG proposal in the security meeting and I planned to
> invite you in for a session to discuss, Thierry (apologies for being late
> in getting this together).
> 
> Overall folks thought the idea worthwhile enough to explore further.
> 
> My own view is that if it leads to getting more eyes on security, then
> it's a good thing. With that in mind, I had the idea that we could run a
> "Security SIG" in parallel to the security project and see if it gains
> traction and security-minded people from the wider community actually
> come forward to get involved and prove the change worthwhile (and it's
> not just the Security Project rearranging the furniture). We could then
> review how it's gone at the end of the Queens cycle and, if a success (not
> sure how we would define that as yet), implement the change at the
> juncture of a new release.

Sure, we can definitely try it out and keep the project team around
while we try. The only issue I see with that approach is that it's a bit
confusing, and not as strong of a statement compared to saying "all the
security activity now happens there". But if you feel more comfortable
that way, we can definitely follow that road.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ironic] [nova] traits discussion call

2017-10-30 Thread Dmitry Tantsur
Aaaand sorry again, but due to sudden errands I won't be able to attend.
Please feel free to use my bluejeans room anyway. I think my position on
traits is more or less clear from previous discussions with John, Sam and
Eric.

2017-10-24 18:07 GMT+02:00 Dmitry Tantsur:

> Sigh, sorry. I forgot that we're moving back to winter time this weekend.
> I *think* the time is 3pm UTC then. It seems to be 11am eastern US:
> https://www.timeanddate.com/worldclock/converter.html?iso=20171030T15&p1=37&p2=tz_et
>
>
> On 10/24/2017 06:00 PM, Dmitry Tantsur wrote:
>
>> And the winner is Mon, 30 Oct, 2pm UTC!
>>
>> The bluejeans ID is https://bluejeans.com/757528759
>> (works without plugins in recent FF and Chrome; if it asks to install an
>> app, ignore it and look for a link saying "join with browser")
>>
>> On 10/23/2017 05:02 PM, Dmitry Tantsur wrote:
>>
>>> Hi all!
>>>
>>> I'd like to invite you to the discussion of the way to implement traits in
>>> ironic and the ironic virt driver. Please vote for the time at
>>> https://doodle.com/poll/ts43k98kkvniv8uz. Please vote by EOD tomorrow.
>>>
>>> Note that it's going to be a technical discussion - please make sure you
>>> understand what traits are and why ironic cares about them. See below for
>>> more context.
>>>
>>> We'll probably use my bluejeans account, as it works without plugins in
>>> modern browsers. I'll post a meeting ID when we pick the date.
>>>
>>>
>>> On 10/23/2017 04:09 PM, Eric Fried wrote:
>>>
 We discussed this a little bit further in IRC [1].  We're all in
 agreement, but it's worth being precise on a couple of points:

 * We're distinguishing between a "feature" and the "trait" that
   represents it in placement.  For the sake of this discussion, a
   "feature" can (maybe) be switched on or off, but a "trait" can either be
   present or absent on a RP.
 * It matters *who* can turn a feature on/off.
   * If it can be done by virt at spawn time, then it makes sense to have
     the trait on the RP, and you can switch the feature on/off via a
     separate extra_spec.
   * But if it's e.g. an admin action, and spawn has no control, then the
     trait needs to be *added* whenever the feature is *on*, and *removed*
     whenever the feature is *off*.

 [1]
 http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-10-23.log.html#t2017-10-23T13:12:13
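
(A minimal sketch of that second, admin-action case, using hypothetical helper
names rather than placement's actual client API -- in the real API a resource
provider's traits are replaced wholesale via a PUT that includes the provider
generation:)

    def sync_feature_trait(placement, rp_uuid, trait, feature_on):
        # Keep the trait set mirroring the feature's current state.
        traits = set(placement.get_traits(rp_uuid))
        if feature_on:
            traits.add(trait)       # trait present while the feature is on
        else:
            traits.discard(trait)   # trait absent while the feature is off
        placement.set_traits(rp_uuid, traits)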

 On 10/23/2017 08:15 AM, Sylvain Bauza wrote:

>
>
> On Mon, Oct 23, 2017 at 2:54 PM, Eric Fried wrote:
>
>   I agree with Sean.  In general terms:
>
>   * A resource provider should be marked with a trait if that feature
>     * Can be turned on or off (whether it's currently on or not); or
>     * Is always on and can't ever be turned off.
>
>
> No, traits are not boolean. If a resource provider stops providing a
> capability, then the existing related trait should just be removed,
> that's it.
> If you see a trait, that just means that the related capability for
> the Resource Provider is supported, that's it too.
>
> MHO.
>
> -Sylvain
>
>
>
>   * A consumer wanting that feature present (doesn't matter whether it's
>     on or off) should specify it as a required *trait*.
>   * A consumer wanting that feature present and turned on should
>     * Specify it as a required trait; AND
>     * Indicate that it be turned on via some other mechanism (e.g. a
>       separate extra_spec).
>
>   I believe this satisfies Dmitry's (Ironic's) needs, but also Jay's drive
>   for placement purity.
>
>   Please invite me to the hangout or whatever.
>
>   Thanks,
>   Eric
>
>   On 10/23/2017 07:22 AM, Mooney, Sean K wrote:
>   >
>   >
>   >
>   >
>   > *From:* Jay Pipes [mailto:jaypi...@gmail.com]
>   > *Sent:* Monday, October 23, 2017 12:20 PM
>   > *To:* OpenStack Development Mailing List
>   > *Subject:* Re: [openstack-dev] [ironic] ironic and traits
>   >
>   >
>   >
>   > Writing from my phone... May I ask that before you proceed with any plan
>   > that uses traits for state information that we have a hangout or
>   > videoconference to discuss this? Unfortunately today and tomorrow I'm
>   > not able to do a hangout but I can do one on Wednesday any time of the day.
>   >
>   > */[Mooney, Sean K] on the uefi boot topic I did bring up at the ptg that
>   > we wanted to standardize traits for “verified boot” /*

Re: [openstack-dev] [ironic] [nova] traits discussion call

2017-10-30 Thread Jay Pipes
I'd prefer to have you on the call, Dima. How about we push it back to 
tomorrow at the same time?


Can everyone make it then?

-jay

On 10/30/2017 10:11 AM, Dmitry Tantsur wrote:
Aaaand sorry again, but due to sudden errands I won't be able to attend. 
Please feel free to use my bluejeans room anyway. I think my position on 
traits is more or less clear from previous discussions with John, Sam 
and Eric.


[trim]

Re: [openstack-dev] [ironic] [nova] traits discussion call

2017-10-30 Thread Dmitry Tantsur
It's a holiday here tomorrow, but I don't have any specific plans, so I think 
I'll be able to make it.


On 10/30/2017 03:13 PM, Jay Pipes wrote:
I'd prefer to have you on the call, Dima. How about we push it back to tomorrow 
at the same time?


Can everyone make it then?

-jay

On 10/30/2017 10:11 AM, Dmitry Tantsur wrote:
Aaaand sorry again, but due to sudden errands I won't be able to attend. 
Please feel free to use my bluejeans room anyway. I think my position on 
traits is more or less clear from previous discussions with John, Sam and Eric.


[trim]

[openstack-dev] [ironic] [nova] traits discussion call - moved to Tue!!

2017-10-30 Thread Dmitry Tantsur
It seems that the new time works for most of the key people, so let's move it to
tomorrow (Tue), the same time, the same bluejeans.


Apologies to those who won't be able to attend, and sorry for the late notice.

On 10/30/2017 03:13 PM, Jay Pipes wrote:
I'd prefer to have you on the call, Dima. How about we push it back to tomorrow 
at the same time?


Can everyone make it then?

-jay

On 10/30/2017 10:11 AM, Dmitry Tantsur wrote:
Aaaand sorry again, but due to sudden errands I won't be able to attend. 
Please feel free to use my bluejeans room anyway. I think my position on 
traits is more or less clear from previous discussions with John, Sam and Eric.


[trim]

Re: [openstack-dev] [ironic] [nova] traits discussion call - moved to Tue!!

2017-10-30 Thread Matt Riedemann

On 10/30/2017 9:32 AM, Dmitry Tantsur wrote:

the same bluejeans.


Forever in bluejeans?

--

Thanks,

Matt



Re: [openstack-dev] [devstack][zuul] About devstack plugin orders and the log to contain the running local.conf

2017-10-30 Thread James E. Blair
"gong_ys2004"  writes:

> Hi, everyone
> I am trying to migrate tacker's functional CI job into the new zuul v3
> framework, but it seems:
> 1. the devstack plugin order is not the one I specified in the .zuul.yaml
> (https://review.openstack.org/#/c/516004/4/.zuul.yaml). I have:
>   devstack_plugins:
>     heat: https://git.openstack.org/openstack/heat
>     networking-sfc: https://git.openstack.org/openstack/networking-sfc
>     aodh: https://git.openstack.org/openstack/aodh
>     ceilometer: https://git.openstack.org/openstack/ceilometer
>     barbican: https://git.openstack.org/openstack/barbican
>     mistral: https://git.openstack.org/openstack/mistral
>     tacker: https://git.openstack.org/openstack/tacker
> but the running order seems to be (from
> http://logs.openstack.org/04/516004/4/check/tacker-functional-devstack/f365f21/job-output.txt.gz):
> local plugins=,ceilometer,aodh,mistral,networking-sfc,heat,tacker,barbican
> I need barbican to start before tacker.

[I changed the subject to replace the 'openstack' tag with 'devstack',
which is what I assume was intended.]


As Yatin Karel later notes, this is handled as a regular python
dictionary which means we process the keys in an indeterminate order.

I can think of a few ways we can address this:

1) Add dependency information to devstack plugins so that devstack
itself is able to work out the correct order.  This is perhaps the ideal
solution from a user experience perspective, but perhaps the most
difficult.

2) Add dependency information to the Ansible role so that it resolves
the order on its own.  This is attractive because it solves a problem
that is unique to this Ansible role entirely within the role.  However,
it means that new plugins would need to also update this role which is
in devstack itself, which partially defeats the purpose of plugins.

3) Add dependency information to devstack plugins, but rather than
having devstack resolve it, have the Ansible role which writes out the
local.conf read that information and resolve the order.  This lets us
keep the actual information in plugins so we don't have to continually
update the role, but it lets us perform the processing in the role
(which is in Python) when writing the config file.

4) Alter Zuul's handling of this to an ordered dictionary.  Then when
you specify a series of plugins, they would be processed in that order.
However, I'm not sure this works very well with Zuul job inheritance.
Imagine that a parent job enabled the barbican plugin, and a child job
that enabled ceilometer needed ceilometer to start before barbican.  There
would be no way to express that.

5) Change the definition of the dictionary to encode ordering
information.  Currently the dictionary schema is simply the name of the
plugin as the key, and either the contents of the "enable_plugin" line,
or "null" if the plugin should be disabled.  We could alter it to be:

  devstack_plugins:
barbican:
  enabled: true
  url: https://git.openstack.org/openstack/barbican
  branch: testing
tacker:
  enabled: true
  url: https://git.openstack.org/openstack/tacker
  requires:
barbican: true

This option is very flexible, but makes using the jobs somewhat more
difficult because of the complexity of the data structure.

After considering all of those, I think I favor option 3, because we
should be able to implement it without too much difficulty, it will
improve things by providing a known and documented location for plugins
to specify dependencies, and once it is in place, we can still implement
option 1 later if we want, using the same declaration.
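
(To illustrate option 3, a minimal sketch of the ordering step the role could
perform once plugins declare their dependencies; the data shapes here are
hypothetical, not an existing devstack interface:)

    def resolve_plugin_order(plugins, requires):
        # plugins: iterable of plugin names, in any order.
        # requires: dict mapping a plugin name to the names it depends on.
        ordered, seen = [], set()

        def visit(name, stack=()):
            if name in seen:
                return
            if name in stack:
                raise ValueError('dependency cycle involving %s' % name)
            for dep in requires.get(name, ()):
                visit(dep, stack + (name,))
            seen.add(name)
            ordered.append(name)

        for name in plugins:
            visit(name)
        return ordered

    # tacker declares it requires barbican, so barbican is enabled first:
    # resolve_plugin_order(['tacker', 'barbican'], {'tacker': ['barbican']})
    # -> ['barbican', 'tacker']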

-Jim



Re: [openstack-dev] [infra][all][stable] Zuul v3 changes and stable branches

2017-10-30 Thread James E. Blair
Boden Russell writes:

> On 10/27/17 6:35 PM, James E. Blair wrote:
>> 
>> We're rolling out a new version of Zuul that corrects the issues, and
>> the migration doc has been updated.  The main things to know are:
>> 
>> * If your project has stable branches, we recommend backporting the Zuul
>>   config along with all the playbooks and roles that are in your repo to
>>   the stable branches.
>
> Does this apply to projects that don't have an in-repo config in master
> and only use shared artifacts?
>
> For example, our project's (master) pipeline is in project-config's
> projects.yaml and only uses shared templates/jobs/playbooks. Is the
> expectation that we copy this pipeline to an in-repo zuul.yaml for each
> stable branch as well as the "shared" playbooks?

No, it doesn't apply -- if your project's Zuul config is entirely in
project-config, then this doesn't apply to you.

-Jim



Re: [openstack-dev] [ironic] Kernel parameters needed to boot from iscsi

2017-10-30 Thread Julia Kreger
Sorry! I'm a little late to the discussion given how busy I was last week.
Replies/thoughts in-line with trimmed text.


>> When I tried it I got this
>> [  370.704896] dracut-initqueue[387]: Warning: iscsistart: Could not
>> get list of targets from firmware.
>>
>> perhaps we could alter iscsistart to not complain if there are no
>> targets attached and just continue, then simply always have
>> rd.iscsi.firmware=1 in the kernel param regardless of storage type
>

For those that haven't been following IRC discussion, Derek was kind
enough to submit a pull request to address this in dracut.


> I think we can fix ironic (the PXE boot interface) to pass this flag when
> using boot-from-volume, what do you think?

Not exactly. We perform iPXE sanhook attachments, which cause iPXE to
speak iSCSI once we trigger boot. We have no means to pass kernel
arguments. We could rewrite the way the interface works to work more
like traditional linux netbooting where the linux kernel/ramdisk are
loaded up and arguments get passed on the kernel command line, but
then we are really Linux specific instead of booting whatever is on
the filesystem.

The other case to consider is outside our specific boot-from-volume
scenario. What if an operator chose to use a SAN system outside of
OpenStack's knowledge or control for root filesystems, and a parameter
is needed by the booting OS to see the storage hardware? If we don't
provide a mechanism, then the operator has no choice but to drive the
usage of highly specific "known-good" images for their baremetal cloud
tenants.

[trim]

 So can we reconsider the proposal to add kernel parameters there? It
 could
 be a settable argument (driver_info/kernel_args), and then the IPA could
 set
 the parameters properly on the image. Or any other option is welcome.
 What are your thoughts there?
>>>
>>>
>>>
>>> Well, we could probably do that *for IPA only*. Something like
>>> driver_info/deploy_image_append_params. This is less controversial than
>>> doing that for user instances, as we fully control the IPA boot. If you
>>> want
>>> to work on it, let's start with a detailed RFE please.
>>>

I believe the reason we avoided providing the ability to pass
parameters to the deployed image when a partition image is used was
that we wanted whatever was written to be pristine and unmodified
until it first booted. But I don't think the argument holds with the
way we presently operate, mounting [1] and placing a grub config
file [2]. Regardless of what we do on the filesystem, we still end up
changing filesystem metadata in this process, because the root
partition that has been written out is mounted read/write.

Personally, it _feels_ like it wouldn't add much complexity to add a
file to /etc/grub.d or content to /etc/default/grub to allow an
operator to pass standardized kernel parameters when needed for their
environment. Such a capability would realistically help ease use of
TripleO Overcloud deployments as well as bare metal instance users
when partition images are used. Of course, there is no real option for
a whole-disk image to support doing so.
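
(A minimal sketch of that idea, assuming IPA still has the root partition
mounted read/write and the image regenerates its grub config afterwards; this
is not existing IPA code:)

    import os

    def append_kernel_params(root_mount, params):
        # params: e.g. 'rd.iscsi.firmware=1 console=ttyS0,115200'.
        # /etc/default/grub is sourced as shell, so appending a line that
        # re-assigns GRUB_CMDLINE_LINUX extends the existing value.
        path = os.path.join(root_mount, 'etc/default/grub')
        with open(path, 'a') as f:
            f.write('\nGRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX %s"\n' % params)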

-Julia

[1]: 
http://git.openstack.org/cgit/openstack/ironic-python-agent/tree/ironic_python_agent/extensions/image.py#n94
[2]: 
http://git.openstack.org/cgit/openstack/ironic-python-agent/tree/ironic_python_agent/extensions/image.py#n136



Re: [openstack-dev] [ironic] [Openstack-operators] replace node "tags" with node "traits"

2017-10-30 Thread Ruby Loo
Hi,

Thanks for your 3 votes. Every vote counts; you've convinced me of the
usefulness of having both tags and traits as separate features. I shall
advocate for you all :)

--ruby


On Fri, Oct 27, 2017 at 5:41 AM, Vladyslav Drok wrote:

>
>
> On Fri, Oct 27, 2017 at 12:19 AM, Jay Pipes wrote:
>
>> On 10/25/2017 12:55 PM, Mathieu Gagné wrote:
>>
>>> Hi,
>>>
>>> On Wed, Oct 25, 2017 at 10:17 AM, Loo, Ruby wrote:
>>>
 Hello ironic'ers,

 A while ago, we approved a spec to add node tag support to ironic [1].
 The
 feature itself did not land yet (although some of the code has). Now
 that
 the (nova) community has come up with traits, ironic wants to support
 node
 traits, and there is a spec proposing that [2]. At the ironic node
 level,
 this is VERY similar to the node tag support, so the thought is to drop
 (not
 implement) the node tagging feature, since the node traits feature
 could be
 used instead. There are a few differences between the tags and traits.
 "Traits" means something in OpenStack, and there are some restrictions
 about
 it:

 - max 50 per node

 - names must be one of those in os-traits library OR prefixed with
 'CUSTOM_'

 For folks that wanted the node tagging feature, will this new node
 traits
 feature work for your use case? Should we support both tags and traits?
 I
 was wondering about e.g. using ironic standalone.

 Please feel free to comment in [2].

 Thanks in advance,

 --ruby

 [1]
 http://specs.openstack.org/openstack/ironic-specs/specs/approved/nodes-tagging.html

 [2] https://review.openstack.org/#/c/504531/


>>> Are tags and traits serving different purposes? One serves the purpose
>>> of helping scheduling/placement while the other more or less aims
>>> at grouping for the "end users"?
>>> I understand that the code will be *very* similar but who/what will be
>>> the consumers/users?
>>> I feel they won't be the same, and it could artificially limit use due
>>> to technical/design "limitations" (must be in os-traits or be
>>> prefixed by CUSTOM).
>>>
>>> For example, things I personally foresee:
>>> * I might want to populate the Ironic inventory from an external system
>>> which would also inject the appropriate traits.
>>> * I might also want some technical people to use/query Ironic and
>>> allow them to tag nodes based on their own needs while not messing
>>> with the traits part (as it's managed by an external system and will
>>> influence the scheduling later).
>>>
>>> Let's not assume traits/tags have the same purpose and same user.
>>>
>>
>> I agree with Mathieu 100% here.
>>
>> Traits are structured, formalized, and set by the system or the operator
>> against resource providers.
>>
>> Tags are for end-users to, well, tag their instances with whatever
>> strings they want.
>>
>> Best,
>> -jay
>
>
> I'd also vote for having them separate. We can refactor the common bits of
> code instead.
>
> -Vlad
>
>


Re: [openstack-dev] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-10-30 Thread Matt Riedemann

On 9/20/2017 9:42 AM, arkady.kanev...@dell.com wrote:

Lee,
I can chair meeting in Sydney.
Thanks,
Arkady


Arkady,

Are you actually moderating the forum session in Sydney? The
session says Erik McCormick is the session moderator:


https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20451/fast-forward-upgrades

People are asking in the nova IRC channel about this session and were 
told to ask Jay Pipes about it, but Jay isn't going to be in Sydney and 
isn't involved in fast-forward upgrades, as far as I know anyway.


So whoever is moderating this session, can you please create an etherpad 
and get it linked to the wiki?


https://wiki.openstack.org/wiki/Forum/Sydney2017

--

Thanks,

Matt



Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-10-30 Thread Erik McCormick
On Oct 30, 2017 11:53 AM, "Matt Riedemann" wrote:

On 9/20/2017 9:42 AM, arkady.kanev...@dell.com wrote:

> Lee,
> I can chair meeting in Sydney.
> Thanks,
> Arkady
>

Arkady,

Are you actually moderating the forum session in Sydney? The session
says Erik McCormick is the session moderator:


I submitted it so it gets my name on it. I think Arkady and I are going to
do it together.

https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20451/fast-forward-upgrades

People are asking in the nova IRC channel about this session and were told
to ask Jay Pipes about it, but Jay isn't going to be in Sydney and isn't
involved in fast-forward upgrades, as far as I know anyway.

So whoever is moderating this session, can you please create an etherpad
and get it linked to the wiki?

https://wiki.openstack.org/wiki/Forum/Sydney2017


I'll have the etherpad up today and pass it along here and on the wiki.



-- 

Thanks,

Matt




Re: [openstack-dev] [EXTERNAL] Re: [TripleO] roles_data.yaml equivalent in containers

2017-10-30 Thread Abhishek Kane
Hi Steven,

I was out of town and hence couldn’t reply to the email.
I will take a look at the examples you have shared and get back with the 
results tomorrow.

Thanks,
Abhishek

On 10/25/17, 2:21 PM, "Steven Hardy" wrote:

On Wed, Oct 25, 2017 at 6:41 AM, Abhishek Kane wrote:
>
> Hi,
>
> In THT I have an environment file and a corresponding puppet service for
> Veritas HyperScale:
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/environments/veritas-hyperscale/veritas-hyperscale-config.yaml
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/veritas-hyperscale-controller.yaml
>
> This service needs a rabbitmq user; the hook for it is
> "veritas_hyperscale::hs_rabbitmq":
>
> https://github.com/openstack/puppet-tripleo/blob/master/manifests/profile/base/rabbitmq.pp#L172
>
> In order to configure Veritas HyperScale, I add
> "OS::TripleO::Services::VRTSHyperScale" to the roles_data.yaml file and
> use the following command:
>
> # openstack overcloud deploy --templates -r /home/stack/roles_data.yaml
> -e /usr/share/openstack-tripleo-heat-templates/environments/veritas-hyperscale/veritas-hyperscale-config.yaml
> -e /usr/share/openstack-tripleo-heat-templates/environments/veritas-hyperscale/cinder-veritas-hyperscale-config.yaml
>
> This command sets "veritas_hyperscale_controller_enabled" to true in
> hieradata and all the hooks get called.
>
> I am trying to containerize the Veritas HyperScale services. I used the
> following config file in quickstart:
>
> http://paste.openstack.org/show/624438/
>
> It has the environment files:
>
>   -e {{overcloud_templates_path}}/environments/veritas-hyperscale/cinder-veritas-hyperscale-config.yaml
>   -e {{overcloud_templates_path}}/environments/veritas-hyperscale/veritas-hyperscale-config.yaml
>
> But this by itself doesn't set "veritas_hyperscale_controller_enabled" to
> true in hieradata, and veritas_hyperscale::hs_rabbitmq doesn't get called.
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/roles_data.yaml#L56
>
> How do I add OS::TripleO::Services::VRTSHyperScale in case of containers?

The roles_data.yaml approach you used previously should still work in
the case of containers, but the service template referenced will be
different (the files linked above still refer to the puppet service
template).

E.g.


https://github.com/openstack/tripleo-heat-templates/blob/master/environments/veritas-hyperscale/veritas-hyperscale-config.yaml#L18

defines:

  OS::TripleO::Services::VRTSHyperScale:
    ../../puppet/services/veritas-hyperscale-controller.yaml

Which overrides this default mapping to OS::Heat::None:


https://github.com/openstack/tripleo-heat-templates/blob/master/overcloud-resource-registry-puppet.j2.yaml#L297

For containerized services, there are different resource_registry
mappings that refer to the templates in
tripleo-heat-templates/docker/services. e.g like this:


https://github.com/openstack/tripleo-heat-templates/blob/master/environments/services-docker/sahara.yaml

I think you'll need to create similar new service templates under
docker/services, then create some new environment files which map to
the new implementation that defines the data needed to start the
containers.

You can get help with this in #tripleo on Freenode, and there are some
docs here:


https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/README.rst

https://docs.openstack.org/tripleo-docs/latest/install/containers_deployment/index.html

There was also a deep-dive recorded which is linked from here:

https://etherpad.openstack.org/p/tripleo-deep-dive-topics

Hope that helps somewhat?

Thanks,

Steve



[openstack-dev] [keystone] tomorrow's meeting and queens-1 retrospective

2017-10-30 Thread Lance Bragstad
Hey all,

Just a reminder that tomorrow's team meeting will be dedicated to our
queens-1 retrospective. Harry and I cleared the board from the Pike
retrospective [0] and it should be ready to go. Given the last
retrospective took longer than an hour, we want to try and jump start
the process.

_Before tomorrow, please try and take some time to fill out the columns
with your feedback_ [0]. This will let us jump right into the
retrospective with voting, which we will dedicate 5 minutes to.
Depending on the number of cards, we're only going to be able to discuss
each for 3 - 4 minutes before moving on (instead of the usual 5
minutes). This means about 10 - 12 cards total and Harry stepped up to
moderate.

We do have office hours scheduled after the team meeting, so we can
spill over into that if we need to. Let me know if you have any
questions. Thanks!

Lance


[0] https://postfacto.io/retros/openstack-keystone





[openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-10-30 Thread Dmitry Tantsur

Hi all,

So far driver requirements [1] have been managed outside of global-requirements. 
This was mostly necessary because some dependencies were not on PyPI. This is no 
longer the case, and I'd like to consider managing them just like any other 
dependencies. Pros:

1. making these dependencies (and their versions) more visible for packagers
2. following the same policies for regular and driver dependencies
3. ensuring co-installability of these dependencies with each other and with the
rest of OpenStack
4. potentially using upper-constraints in 3rd party CI to test what packagers 
will probably package
5. we'll be able to finally create a tox job running unit tests with all these
dependencies installed (FYI these often break in RDO CI)


Cons:
1. more work for both the requirements team and the vendor teams
2. inability to use ironic release notes to explain driver requirements changes
3. any objections from the requirements team?

If we make this change, we'll drop driver-requirements.txt, and will use
setuptools extras to list them in setup.cfg (this way is supported by g-r),
similar to what we do in ironicclient [2].


We either will have one list:

[extras]
drivers =
  sushy>=a.b
  python-dracclient>=x.y
  python-prolianutils>=v.w
  ...

or (and I like this more) we'll have a list per hardware type:

[extras]
redfish =
  sushy>=a.b
idrac =
  python-dracclient>=x.y
ilo =
  ...
...
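
(If we go this route, a quick sanity check against a built package -- the extra
names follow the second sketch above and don't exist in ironic's setup.cfg yet:)

    import pkg_resources

    # Lists the extras a built ironic distribution exposes,
    # e.g. ['redfish', 'idrac', 'ilo', ...].
    print(pkg_resources.get_distribution('ironic').extras)

    # Installing one driver's dependency set would then be:
    #   pip install "ironic[redfish]"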

WDYT?

[1] https://github.com/openstack/ironic/blob/master/driver-requirements.txt
[2] https://github.com/openstack/python-ironicclient/blob/master/setup.cfg#L115



[openstack-dev] [neutron] reusable code moving to neutron-lib

2017-10-30 Thread Boden Russell
Just a quick update on the neutron-lib workstream.

Although there haven't been many "neutron-lib impact" emails lately, the
effort is still active. The reason for decreased email volume is that
rather than just updating stadium consumers during consumption [1], all
(stadium/non-stadium) consumers who use neutron/neutron-lib stable
branches are updated (for ~free).

To summarize what this means:
- If your project uses neutron/neutron-lib stable branches, you should
see consumption patches as we consume [1] in neutron. Just help the
review along please.
- If your project "pins" older versions of neutron/neutron-lib, your
project will need to address the consumption as you move the "pin" up.
- To stay up to date on the latest neutron-lib consumption patches,
please consider attending the weekly neutron meeting and/or checking the
open patches tagged with NeutronLibImpact [2].

Finally; a heads up that the neutron.plugins.ml2.driver_api module is in
lib and will be removed in neutron as part of consumption [3]. For those
using stable branches, a consumption patch should already be queued up
in your gate [4].

Thanks

[1]
https://docs.openstack.org/neutron-lib/latest/contributor/contributing.html#phase-4-consume
[2]
https://wiki.openstack.org/wiki/Network/Meetings#Neutron-lib.2C_planned_refactoring_and_other_impacts
[3] https://review.openstack.org/#/c/488173/
[4] https://review.openstack.org/#/q/topic:use-lib-ml2-driverapi



[openstack-dev] [ironic] Scheduling error with RamFilter ... on integrating ironic into our OpenStack Distribution

2017-10-30 Thread Waines, Greg
Hey,

We are in the process of integrating OpenStack Ironic into our own OpenStack 
Distribution.
Still pulling all the pieces together ... we have not yet had a successful
‘nova boot’, so the issues below could be configuration or setup issues.

We have an ironic node enrolled ... and the corresponding nova hypervisor has
been created for it ... ALTHOUGH it does not seem to be populated correctly (see below).
AND then the ‘nova boot’ fails with the error:

 "No valid host was found. There are not enough hosts available. 
66aaf6fa-3cbe-4744-8d55-c90eeae4800a: (RamFilter) Insufficient total RAM: 
req:20480, avail:0 MB,

NOTE: the nova.conf that we are using for the nova-compute service used for
ironic servers is attached.

Any ideas what could be wrong?
Greg.


[wrsroot@controller-1 ~(keystone_admin)]$ ironic node-show metallica
+------------------------+------------------------------------------------------------------+
| Property               | Value                                                            |
+------------------------+------------------------------------------------------------------+
| chassis_uuid           |                                                                  |
| clean_step             | {}                                                               |
| console_enabled        | False                                                            |
| created_at             | 2017-10-27T20:37:12.241352+00:00                                 |
| driver                 | pxe_ipmitool                                                     |
| driver_info            | {u'ipmi_password': u'**', u'ipmi_address': u'128.224.64.212',    |
|                        | u'ipmi_username': u'root',                                       |
|                        | u'deploy_kernel': u'2939e2d4-da3f-4917-b99a-01030fd30345',       |
|                        | u'deploy_ramdisk': u'73ad43c4-4300-45a5-87ec-f28646518430'}      |
| driver_internal_info   | {}                                                               |
| extra                  | {}                                                               |
| inspection_finished_at | None                                                             |
| inspection_started_at  | None                                                             |
| instance_info          | {}                                                               |
| instance_uuid          | None                                                             |
| last_error             | None                                                             |
| maintenance            | False                                                            |
| maintenance_reason     | None                                                             |
| name                   | metallica                                                        |
| network_interface      |                                                                  |
| power_state            | power off                                                        |
| properties             | {u'memory_mb': 20480, u'cpu_arch': u'x86_64', u'local_gb': 100,  |
|                        | u'cpus': 20, u'capabilities': u'boot_option:local'}              |
| provision_state        | manageable                                                       |
| provision_updated_at   | 2017-10-30T15:47:33.397317+00:00                                 |
| raid_config            |                                                                  |
| reservation            | None                                                             |
| resource_class         |                                                                  |
| target_power_state     | None                                                             |
| target_provision_state | None                                                             |
| target_raid_config     |                                                                  |
| updated_at             | 2017-10-30T15:47:51.396471+00:00                                 |
| uuid                   | 66aaf6fa-3cbe-4744-8d55-c90eeae4800a                             |
+------------------------+------------------------------------------------------------------+

[wrsroot@controller-1 ~(keystone_admin)]$ nova hypervisor-show 66aaf6fa-3cbe-4744-8d55-c90eeae4800a
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| cpu_info                | {}                                   |
| current_workload        | 0                                    |
| disk_available_least    | 0                                    |
| free_disk_gb            | 0                                    |
| free_ram_mb             | 0                                    |
| host_ip                 | 127.0.0.1                            |
| hypervisor_hostname     | 66aaf6fa-3cbe-4744-8d55-c90eeae4800a |
| hypervisor_type         | ironic                               |
| hypervisor_version      | 1                                    |
| id                      | 5                                    |
| local_gb                | 0                                    |
| local_gb_used           | 0                                    |
| memory_mb               | 0                                    |
| memory_mb_node          | None                                 |
| memory_mb_used          | 0                                    |
| memory_mb_used_node     | None                                 |
| running_vms             | 0                                    |
| service_disabled_reason | None                                 |
| service_host            | controller-1                         |
| service_id              | 28                                   |
| state                   | up                                   |
| status                  | enabled                              |
| vcpus                   | 0                                    |
| vcpus_node              | None                                 |
| vcpus_used              | 0.0                                  |
| vcpus_used_node         | None                                 |
+-------------------------+--------------------------------------+

[wrsroot@controller-1 ~(keystone_admin)]$

Re: [openstack-dev] [tc] [all] TC Report 43

2017-10-30 Thread Mike Perez
On 11:17 Oct 25, Flavio Percoco wrote:
> On 24/10/17 19:26 +0100, Chris Dent wrote:
> >It's clear that anyone and everyone _could_ write their own blogs and
> >syndicate to the [OpenStack planet](http://planet.openstack.org/) but
> >this doesn't have the same panache and potential cadence as an
> >official thing _might_. It comes down to people having the time. Eking
> >out the time for this blog, for example, can be challenging.
> >
> >Since this is the second [week in a
> >row](https://anticdent.org/tc-report-42.html) that Josh showed up with
> >an idea, I wonder what next week will bring?
> 
> It might not be exactly the same, but I think the superuser's blog could be a
> good place to do some of this writing. There are posts of various kinds in 
> that
> blog: technical, community, news, etc. I wonder how many folks from the
> community are aware of it and how many would be willing to contribute to it 
> too.
> Contributing to the superuser's blog is quite simple, really.

Anne used to do TC updates and they were posted to the OpenStack Blog:

https://www.openstack.org/blog/category/technical-committee-updates/

-- 
Mike Perez




Re: [openstack-dev] [ironic] Scheduling error with RamFilter ... on integrating ironic into our OpenStack Distribution

2017-10-30 Thread Jay Pipes
You need to set the node's resource_class attribute to the custom 
resource class you will use for that chassis/hardware type.


Then you need to add a specific extra_specs key/value to a flavor to 
indicate that that flavor is requesting that specific hardware type:


openstack flavor set $flavorname --property resources:$RESOURCE_CLASS=1

for instance, let's say you set your node's resource class to 
CUSTOM_METALLICA, you would do this to the flavor you are using to grab 
one of those Ironic resources:


openstack flavor set $flavorname --property resources:CUSTOM_METALLICA=1

Then nova boot with that flavor and you should be good to go.

-jay

On 10/30/2017 01:05 PM, Waines, Greg wrote:

Hey,

We are in the process of integrating OpenStack Ironic into our own 
OpenStack Distribution.


Still pulling all the pieces together ... have not yet got a successful 
‘nova boot’ yet, so issues below could be configuration or setup issues.


We have ironic node enrolled ... and corresponding nova hypervisor has 
been created for it ... ALTHOUGH does not seem to be populated correctly 
(see below).


AND then the ‘nova boot’ fails with the error:

"No valid host was found. There are not enough hosts available. 
66aaf6fa-3cbe-4744-8d55-c90eeae4800a: (RamFilter) Insufficient total 
RAM: req:20480, avail:0 MB,


NOTE: the nova.conf that we are using for the nova.compute being used 
for ironic servers is attached.


Any Ideas what could be wrong ?

Greg.

[wrsroot@controller-1 ~(keystone_admin)]$ ironic node-show metallica
++--+
| Property | Value|
++--+
| chassis_uuid ||
| clean_step | {} |
| console_enabled| False|
| created_at | 2017-10-27T20:37:12.241352+00:00 |
| driver | pxe_ipmitool |
| driver_info| {u'ipmi_password': u'**', u'ipmi_address': u'128.224.64.212',|
|| u'ipmi_username': u'root', u'deploy_kernel': u'2939e2d4-da3f-4917-b99a-|
|| 01030fd30345', u'deploy_ramdisk':|
|| u'73ad43c4-4300-45a5-87ec-f28646518430'} |
| driver_internal_info | {} |
| extra| {} |
| inspection_finished_at | None |
| inspection_started_at| None |
| instance_info| {} |
| instance_uuid| None |
| last_error | None |
| maintenance| False|
| maintenance_reason | None |
| name | metallica|
| network_interface||
| power_state| power off|
| properties | {u'memory_mb': 20480, u'cpu_arch': u'x86_64', u'local_gb': 100, u'cpus': |
|| 20, u'capabilities': u'boot_option:local'} |
| provision_state| manageable |
| provision_updated_at | 2017-10-30T15:47:33.397317+00:00 |
| raid_config||
| reservation| None |
| resource_class ||
| target_power_state | None |
| target_provision_state | None |
| target_raid_config ||
| updated_at | 2017-10-30T15:47:51.396471+00:00 |
| uuid | 66aaf6fa-3cbe-4744-8d55-c90eeae4800a |
++--+

[wrsroot@controller-1 ~(keystone_admin)]$ nova hypervisor-show 66aaf6fa-3cbe-4744-8d55-c90eeae4800a
+-+--+
| Property| Value|
+-+--+
| cpu_info| {} |
| current_workload| 0|
| disk_available_least| 0|
| free_disk_gb| 0|
| free_ram_mb | 0|
| host_ip | 127.0.0.1|
| hypervisor_hostname | 66aaf6fa-3cbe-4744-8d55-c90eeae4800a |
| hypervisor_type | ironic |
| hypervisor_version| 1|
| id| 5|
| local_gb| 0|
| local_gb_used | 0|
| memory_mb | 0|
| memory_mb_node| None |
| memory_mb_used| 0|
| memory_mb_used_node | None |
| running_vms | 0|
| service_disabled_reason | None |
| service_host| controller-1 |
| service_id| 28 |
| state | up |
| status| enabled|
| vcpus | 0|
| vcpus_node| None |
| vcpus_used| 0.0|
| vcpus_used_node | None |
+-+--+
[wrsroot@controller-1 ~(keystone_admin)]$



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Scheduling error with RamFilter ... on integrating ironic into our OpenStack Distribution

2017-10-30 Thread Waines, Greg
Thanks Jay ... I’ll try this out and let you know.

BTW ... I should have mentioned that I am currently @Newton ... and will 
eventually move to @Pike.
   Does that change anything you suggested below?

Greg.



From: Jay Pipes 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Monday, October 30, 2017 at 1:23 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [ironic] Scheduling error with RamFilter ... on 
integrating ironic into our OpenStack Distribution

You need to set the node's resource_class attribute to the custom
resource class you will use for that chassis/hardware type.

Then you need to add a specific extra_specs key/value to a flavor to
indicate that that flavor is requesting that specific hardware type:

openstack flavor set $flavorname --property resources:$RESOURCE_CLASS=1

for instance, let's say you set your node's resource class to
CUSTOM_METALLICA, you would do this to the flavor you are using to grab
one of those Ironic resources:

openstack flavor set $flavorname --property resources:CUSTOM_METALLICA=1

Then nova boot with that flavor and you should be good to go.

-jay

On 10/30/2017 01:05 PM, Waines, Greg wrote:
Hey,
We are in the process of integrating OpenStack Ironic into our own
OpenStack Distribution.
Still pulling all the pieces together ... have not yet got a successful
‘nova boot’ yet, so issues below could be configuration or setup issues.
We have ironic node enrolled ... and corresponding nova hypervisor has
been created for it ... ALTHOUGH does not seem to be populated correctly
(see below).
AND then the ‘nova boot’ fails with the error:
"No valid host was found. There are not enough hosts available.
66aaf6fa-3cbe-4744-8d55-c90eeae4800a: (RamFilter) Insufficient total
RAM: req:20480, avail:0 MB,
NOTE: the nova.conf that we are using for the nova.compute being used
for ironic servers is attached.
Any Ideas what could be wrong ?
Greg.
[wrsroot@controller-1 ~(keystone_admin)]$ ironic node-show metallica
++--+
| Property | Value|
++--+
| chassis_uuid ||
| clean_step | {} |
| console_enabled| False|
| created_at | 2017-10-27T20:37:12.241352+00:00 |
| driver | pxe_ipmitool |
| driver_info| {u'ipmi_password': u'**', u'ipmi_address':
u'128.224.64.212',|
|| u'ipmi_username': u'root', u'deploy_kernel': u'2939e2d4-da3f-4917-b99a-|
|| 01030fd30345', u'deploy_ramdisk':|
|| u'73ad43c4-4300-45a5-87ec-f28646518430'} |
| driver_internal_info | {} |
| extra| {} |
| inspection_finished_at | None |
| inspection_started_at| None |
| instance_info| {} |
| instance_uuid| None |
| last_error | None |
| maintenance| False|
| maintenance_reason | None |
| name | metallica|
| network_interface||
| power_state| power off|
| properties | {u'memory_mb': 20480, u'cpu_arch': u'x86_64',
u'local_gb': 100, u'cpus': |
|| 20, u'capabilities': u'boot_option:local'} |
| provision_state| manageable |
| provision_updated_at | 2017-10-30T15:47:33.397317+00:00 |
| raid_config||
| reservation| None |
| resource_class ||
| target_power_state | None |
| target_provision_state | None |
| target_raid_config ||
| updated_at | 2017-10-30T15:47:51.396471+00:00 |
| uuid | 66aaf6fa-3cbe-4744-8d55-c90eeae4800a |
++--+
[wrsroot@controller-1 ~(keystone_admin)]$ nova hypervisor-show
66aaf6fa-3cbe-4744-8d55-c90eeae4800a
+-+--+
| Property| Value|
+-+--+
| cpu_info| {} |
| current_workload| 0|
| disk_available_least| 0|
| free_disk_gb| 0|
| free_ram_mb | 0|
| host_ip | 127.0.0.1|
| hypervisor_hostname | 66aaf6fa-3cbe-4744-8d55-c90eeae4800a |
| hypervisor_type | ironic |
| hypervisor_version| 1|
| id| 5|
| local_gb| 0|
| local_gb_used | 0|
| memory_mb | 0|
| memory_mb_node| None |
| memory_mb_used| 0|
| memory_mb_used_node | None |
| running_vms | 0|
| service_disabled_reason | None |
| service_host| controller-1 |
| service_id| 28 |
| state | up |
| status| enabled|
| vcpus | 0|
| vcpus_node| None |
| vcpus_used| 0.0|
| vcpus_used_node | None |
+-+--+
[wrsroot@controller-1 ~(keystone_admin)]$
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.open

Re: [openstack-dev] [ironic] Scheduling error with RamFilter ... on integrating ironic into our OpenStack Distribution

2017-10-30 Thread Jay Pipes

On 10/30/2017 01:37 PM, Waines, Greg wrote:

Thanks Jay ... i’ll try this out and let you know.

BTW ... i should have mentioned that i am currently @Newton ... and will 
eventually move to @PIKE 

Does that change anything you suggested below ?


Hmm, yes, it does.

In Pike, we began requiring the custom resource class thing with Ironic. 
In Newton, I don't believe we had yet changed the scheduler to look at 
the resource class "overrides".


Looking at your output, I see that the Ironic node's power_state is set 
to "power off". I'm not sure if that's as it should be. Perhaps some 
Ironic devs can help with the answer to that.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-10-30 Thread James Penick
Big +1 to re-evaluating this. In my environment we have many users
deploying and managing a number of different apps in different tenants.
Some of our users, such as Yahoo Mail service engineers could be in up to
40 different tenants. Those service engineers may change products as their
careers develop. Having to re-deploy part of an application stack because
Sally SE changed products would be unnecessarily disruptive.

 I regret that I missed the bus on this back in June. But at Oath we've
built a system (called Copper Argos) on top of Athenz (it's open source:
www.athenz.io) to provide instance identity in a way that is both unique
but doesn't have all of the problems of a static persistent identity.

 The really really really* high level overview is:
1. Users pass application identity data to Nova as metadata during the boot
process.
2. Our vendor-data driver works with a service called HostSignd to validate
that data and create a one time use attestation document which is injected
into the instance's config drive.
3. On boot an agent within the instance will use that time-limited host
attestation document to identify itself to the Athenz identity service,
which will then exchange the document for a unique certificate containing
the application data passed in the boot call.
4. From then on the instance identity (TLS certificate) is periodically
exchanged by the agent for a new certificate.
5. The host attestation document and the instance TLS certificate can each
only be used a single time to exchange for another certificate. The
attestation document has a very short ttl, and the instance identity is set
to live slightly longer than the planned rotation frequency. So if you
rotate your certificates once an hour, the ttl on the cert should be 2
hours. This gives some wiggle room in the event the identity service is
down for any reason.

The agent is also capable of supporting SSH CA by passing the SSH host key
up to be re-signed whenever it exchanges the TLS certificate. All instances
leveraging Athenz identity can communicate with one another using TLS mutual
auth.
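
To make the cadence concrete, the agent's main loop is conceptually just 
this (purely illustrative pseudo-shell; the helper name is made up and is 
not the actual Athenz agent CLI):

# rotate hourly; each cert lives ~2h, so one missed rotation is survivable
while true; do
    # one-time-use exchange: the old cert goes in, a fresh 2h cert comes out
    exchange-identity --cert /var/lib/sia/cert.pem --ttl 7200
    sleep 3600
done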

If there's any interest i'd be happy to go into more detail here on the ML
and/or at the summit in Sydney

-James
* With several more zoolander-style Really's thrown in for good measure.


On Tue, Oct 10, 2017 at 12:34 PM, Fox, Kevin M  wrote:

> Big +1 for reevaluating the bigger picture. We have a pile of api's that
> together don't always form the most useful of api's due to lack of big
> picture analysis.
>
> +1 to thinking through the dev's/devops use case.
>
> Another one to really think over is the single user that != application
> developer. I.e., a pure user-type person deploying a cloud app in their tenant,
> written by a dev not employed by the user's company. User shouldn't have to go
> to Operator to provision service accounts and other things. App dev should
> be able to give everything needed to let OpenStack launch say a heat
> template that provisions the service accounts for the User, not making the
> user twiddle the api themselves. It should be a "here, launch this" kind of
> thing, and they fill out the heat form, and out pops a working app. If they
> have to go prevision a bunch of stuff themselves before passing stuff to
> the form, game over. Likewise, if they have to look at yaml, game over. How
> do app credentials fit into this?
>
> Thanks,
> Kevin
>
> 
> From: Zane Bitter [zbit...@redhat.com]
> Sent: Monday, October 09, 2017 9:39 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [keystone][nova] Persistent application
> credentials
>
> On 12/09/17 18:58, Colleen Murphy wrote:
> > While it's fresh in our minds, I wanted to write up a short recap of
> > where we landed in the Application Credentials discussion in the BM/VM
> > room today. For convenience the (as of yet unrevised) spec is here:
>
> Thanks so much for staying on this Colleen, it's tremendously helpful to
> have someone from the core team keeping an eye on it :)
>
> > http://specs.openstack.org/openstack/keystone-specs/
> specs/keystone/backlog/application-credentials.html
> >
> > Attached are images of the whiteboarded notes.
> >
> > On the contentious question of the lifecycle of an application
> > credential, we re-landed in the same place we found ourselves in when
> > the spec originally landed, which is that the credential becomes invalid
> > when its creating user is disabled or deleted. The risk involved in
> > allowing a credential to continue to be valid after its creating user
> > has been disabled is not really surmountable, and we are basically
> > giving up on this feature. The benefits we still get from not having to
> > embed user passwords in config files, especially for LDAP or federated
> > users, is still a vast improvement over the situation today, as is the
> > ability to rotate credentials.
>
> OK, there were lots of smart people in the room so I trust that y'all
> made the right deci

Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-10-30 Thread Doug Hellmann
Excerpts from Dmitry Tantsur's message of 2017-10-30 17:51:49 +0100:
> Hi all,
> 
> So far driver requirements [1] have been managed outside of 
> global-requirements. 
> This was mostly necessary because some dependencies were not on PyPI. This is 
> no 
> longer the case, and I'd like to consider managing them just like any other 
> dependencies. Pros:
> 1. making these dependencies (and their versions) more visible for packagers
> 2. following the same policies for regular and driver dependencies
> 3. ensuring co-installability of these dependencies with each other and with 
> the 
> remaining openstack
> 4. potentially using upper-constraints in 3rd party CI to test what packagers 
> will probably package
> 5. we'll be able to finally create a tox job running unit tests with all 
> these 
> dependencies installed (FYI these often break in RDO CI)
> 
> Cons:
> 1. more work for both the requirements team and the vendor teams
> 2. inability to use ironic release notes to explain driver requirements 
> changes
> 3. any objections from the requirements team?
> 
> If we make this change, we'll drop driver-requirements.txt, and will use 
> setuptools extras to list them in setup.cfg (this way is supported by g-r) 
> similar to what we do in ironicclient [2].
> 
> We either will have one list:
> 
> [extras]
> drivers =
>sushy>=a.b
>python-dracclient>=x.y
>python-prolianutils>=v.w
>...
> 
> or (and I like this more) we'll have a list per hardware type:
> 
> [extras]
> redfish =
>sushy>=a.b
> idrac =
>python-dracclient>=x.y
> ilo =
>...
> ...
> 
> WDYT?

The second option is what I would expect.
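
Either way, consumers would then pull the optional dependencies in with the
standard pip extras syntax, e.g. (extra names taken from the sketch above):

  pip install 'ironic[redfish,idrac]'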

Doug

> 
> [1] https://github.com/openstack/ironic/blob/master/driver-requirements.txt
> [2] 
> https://github.com/openstack/python-ironicclient/blob/master/setup.cfg#L115
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-10-30 Thread Arkady.Kanevsky
The second seems better suited for per-driver requirement handling, per HW type 
and per function.
Which option is easier to handle for a container per dependency in the future?


Thanks,
Arkady

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com] 
Sent: Monday, October 30, 2017 2:47 PM
To: openstack-dev 
Subject: Re: [openstack-dev] [ironic] [requirements] moving driver dependencies 
to global-requirements?

Excerpts from Dmitry Tantsur's message of 2017-10-30 17:51:49 +0100:
> Hi all,
> 
> So far driver requirements [1] have been managed outside of 
> global-requirements. 
> This was mostly necessary because some dependencies were not on PyPI. 
> This is no longer the case, and I'd like to consider managing them 
> just like any other dependencies. Pros:
> 1. making these dependencies (and their versions) more visible for 
> packagers 2. following the same policies for regular and driver 
> dependencies 3. ensuring co-installability of these dependencies with 
> each other and with the remaining openstack 4. potentially using 
> upper-constraints in 3rd party CI to test what packagers will probably 
> package 5. we'll be able to finally create a tox job running unit 
> tests with all these dependencies installed (FYI these often breaks in 
> RDO CI)
> 
> Cons:
> 1. more work for both the requirements team and the vendor teams 2. 
> inability to use ironic release notes to explain driver requirements 
> changes 3. any objections from the requirements team?
> 
> If we make this change, we'll drop driver-requirements.txt, and will 
> use setuptools extras to list then in setup.cfg (this way is supported 
> by g-r) similar to what we do in ironicclient [2].
> 
> We either will have one list:
> 
> [extras]
> drivers =
>sushy>=a.b
>python-dracclient>=x.y
>python-prolianutils>=v.w
>...
> 
> or (and I like this more) we'll have a list per hardware type:
> 
> [extras]
> redfish =
>sushy>=a.b
> idrac =
>python-dracclient>=x.y
> ilo =
>...
> ...
> 
> WDYT?

The second option is what I would expect.

Doug

> 
> [1] 
> https://github.com/openstack/ironic/blob/master/driver-requirements.tx
> t [2] 
> https://github.com/openstack/python-ironicclient/blob/master/setup.cfg
> #L115
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-10-30 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. CI migration to Zuul v3: take legacy jobs in tree: 
https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking
1.1. repair stable branches by backporting the jobs to them
2. Move the "ironic" CLI to "latest" version: 
https://review.openstack.org/515064
3. BIOS interface spec: https://review.openstack.org/#/c/496481/

Vendor priorities
-
cisco-ucs:
    Patches in the works for SDK update, but not posted yet; currently rebuilding 
third-party CI infra after a disaster...
idrac:

ilo:
https://review.openstack.org/207337 - Out-of-band Boot from UEFI iSCSI 
volume for HPE Proliant server
irmc:
SPEC to add a new hardware type for another FUJITSU server: PRIMEQUEST MMB:
  https://review.openstack.org/#/c/515717/

oneview:
Migrate python-oneviewclient validations to Ironic OneView Drivers - 
https://review.openstack.org/#/c/468428/

Subproject priorities
-
bifrost:
ironic-inspector (or its client):
- dnsmasq-based inspector PXE filter driver: 
https://review.openstack.org/#/c/466448/ TL;DR: replaces iptables with a 
dynamic configuration of dnsmasq (pretty cool thing too ;)
- folks might consider trying the test patch to experiment manually with 
this https://review.openstack.org/#/c/468712/54
networking-baremetal:
neutron baremetal agent https://review.openstack.org/#/c/456235/
sushy and the redfish driver:
(dtantsur) implement redfish sessions: 
https://review.openstack.org/#/c/471942/

Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 23 Oct 2017 and 30 Oct 2017)
- Ironic: 251 bugs (-2) + 252 wishlist items (-6). 16 new (-3), 197 in progress 
(+2), 0 critical, 32 high and 35 incomplete (+1)
- Inspector: 16 bugs (-1) + 31 wishlist items (+2). 2 new, 16 in progress (+1), 
0 critical, 4 high (-1) and 3 incomplete
- Nova bugs with Ironic tag: 12. 0 new, 0 critical, 1 high
- HIGH bugs with patches to review:
- Clean steps are not tested in gate 
https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic 
standalone test https://review.openstack.org/#/c/429770/15
- prepare_instance() is not called for whole disk images with 'agent' deploy 
interface https://bugs.launchpad.net/ironic/+bug/1713916:
- Fix to return 'root_uuid' as part of command status 
https://review.openstack.org/#/c/500719/4
- Fix ``agent`` deploy interface to call ``boot.prepare_instance`` 
https://review.openstack.org/#/c/499050/
- If provisioning network is changed, Ironic conductor does not behave 
correctly https://bugs.launchpad.net/ironic/+bug/1679260: Ironic conductor 
works correctly on changes of networks: https://review.openstack.org/#/c/462931/
- (rloo) needs some direction

CI refactoring and missing test coverage

- Zuul v3 jobs in-tree migration tracking 
https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking
- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
- next patch to be reviewed, needed for 3rd party CI: 
https://review.openstack.org/#/c/429770/
- localboot with partitioned image patches:
- IPA - build tinycore based partitioned image with grub 
https://review.openstack.org/#/c/504888/
- Ironic - add localboot partitioned image test: 
https://review.openstack.org/#/c/502886/
- when previous are merged TODO (vsaienko)
- Upload tinycore partitioned image to tarbals.openstack.org
- Switch ironic to use tinyipa partitioned image by default
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests
- root device hints: TODO
- node take over
- resource classes integration tests: 
https://review.openstack.org/#/c/443628/

Essential Priorities


Ironic client API version negotiation (TheJulia, dtantsur)
--
- RFE https://bugs.launchpad.net/python-ironicclient/+bug/1671145
- gerrit topic: https://review.openstack.org/#/q/topic:bug/1671145
- status as of 30 Oct 2017:
- patches on review:
- correct "latest" logic https://review.openstack.org/#/c/512986/ MERGED
- make the switch https://review.openstack.org/#/c/512989/ MERGED
- missing: make --os-baremetal-api-version=1 equal to 
--os-baremetal-api-version=latest
- switch the "ironic" CLI as well: https://review.openstack.org/515064 
needs update

Exter

[openstack-dev] [puppet][qa][ubuntu][neutron] Xenial Neutron Timeouts

2017-10-30 Thread Mohammed Naser
Hi everyone,

I'm looking for some help regarding an issue that we're having with
the Puppet OpenStack modules; we've had very inconsistent failures on
Xenial with the following error:


http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/

http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/testr_results.html.gz
Details: {u'message': u'Unable to associate floating IP
172.24.5.17 to fixed IP 10.100.0.8 for instance
d265626a-77c1-4d2f-8260-46abe548293e. Error: Request to
https://127.0.0.1:9696/v2.0/floatingips/2e3fa334-d6ac-443c-b5ba-eeb521d6324c
timed out', u'code': 400}

At this point, we're at a bit of a loss.  I've tried my best in order
to find the root cause however we have not been able to do this.  It
was persistent enough that we elected to go non-voting for our Xenial
gates, however, with no fix ahead of us, I feel like this is a waste
of resources and we need to either fix this or drop CI for Ubuntu.  We
don't deploy on Ubuntu and most of the developers working on the
project don't either at this point, so we need a bit of resources.

If you're a user of Puppet on Xenial, we need your help!  Without any
resources going to fix this, we'd unfortunately have to drop support
for Ubuntu because of the lack of resources to maintain it (or
assistance).  We (Puppet OpenStack team) would be more than happy to
work together to fix this so pop-in at #puppet-openstack or reply to
this email and let's get this issue fixed.

Thanks,
Mohammed

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][qa][ubuntu][neutron] Xenial Neutron Timeouts

2017-10-30 Thread Matthew Treinish
From a quick glance at the logs my guess is that the issue is related to this 
stack trace in the l3 agent logs:

http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/neutron/neutron-l3-agent.txt.gz?level=TRACE#_2017-10-29_23_11_15_146

I'm not sure what's causing it to complain there. But, I'm on a plane right now 
(which is why this is a top post, sorry) so I can't really dig much more than 
that. I'll try to take a deeper look at things later when I'm on solid ground. 
(hopefully someone will beat me to it by then though) 

-Matt Treinish

On October 31, 2017 1:25:55 AM GMT+04:00, Mohammed Naser  
wrote:
>Hi everyone,
>
>I'm looking for some help regarding an issue that we're having with
>the Puppet OpenStack modules, we've had very inconsistent failures in
>the Xenial with the following error:
>
>http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/
>http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/testr_results.html.gz
>Details: {u'message': u'Unable to associate floating IP
>172.24.5.17 to fixed IP 10.100.0.8 for instance
>d265626a-77c1-4d2f-8260-46abe548293e. Error: Request to
>https://127.0.0.1:9696/v2.0/floatingips/2e3fa334-d6ac-443c-b5ba-eeb521d6324c
>timed out', u'code': 400}
>
>At this point, we're at a bit of a loss.  I've tried my best in order
>to find the root cause however we have not been able to do this.  It
>was persistent enough that we elected to go non-voting for our Xenial
>gates, however, with no fix ahead of us, I feel like this is a waste
>of resources and we need to either fix this or drop CI for Ubuntu.  We
>don't deploy on Ubuntu and most of the developers working on the
>project don't either at this point, so we need a bit of resources.
>
>If you're a user of Puppet on Xenial, we need your help!  Without any
>resources going to fix this, we'd unfortunately have to drop support
>for Ubuntu because of the lack of resources to maintain it (or
>assistance).  We (Puppet OpenStack team) would be more than happy to
>work together to fix this so pop-in at #puppet-openstack or reply to
>this email and let's get this issue fixed.
>
>Thanks,
>Mohammed
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe:
>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-10-30 Thread Matthew Thode
On 17-10-30 20:48:37, arkady.kanev...@dell.com wrote:
> The second seem to be better suited for per driver requirement handling and 
> per HW type per function.
> Which option is easier to handle for container per dependency for the future?
> 
> 
> Thanks,
> Arkady
> 
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com] 
> Sent: Monday, October 30, 2017 2:47 PM
> To: openstack-dev 
> Subject: Re: [openstack-dev] [ironic] [requirements] moving driver 
> dependencies to global-requirements?
> 
> Excerpts from Dmitry Tantsur's message of 2017-10-30 17:51:49 +0100:
> > Hi all,
> > 
> > So far driver requirements [1] have been managed outside of 
> > global-requirements. 
> > This was mostly necessary because some dependencies were not on PyPI. 
> > This is no longer the case, and I'd like to consider managing them 
> > just like any other dependencies. Pros:
> > 1. making these dependencies (and their versions) more visible for 
> > packagers 2. following the same policies for regular and driver 
> > dependencies 3. ensuring co-installability of these dependencies with 
> > each other and with the remaining openstack 4. potentially using 
> > upper-constraints in 3rd party CI to test what packagers will probably 
> > package 5. we'll be able to finally create a tox job running unit 
> > tests with all these dependencies installed (FYI these often breaks in 
> > RDO CI)
> > 
> > Cons:
> > 1. more work for both the requirements team and the vendor teams 2. 
> > inability to use ironic release notes to explain driver requirements 
> > changes 3. any objections from the requirements team?
> > 
> > If we make this change, we'll drop driver-requirements.txt, and will 
> > use setuptools extras to list then in setup.cfg (this way is supported 
> > by g-r) similar to what we do in ironicclient [2].
> > 
> > We either will have one list:
> > 
> > [extras]
> > drivers =
> >sushy>=a.b
> >python-dracclient>=x.y
> >python-prolianutils>=v.w
> >...
> > 
> > or (and I like this more) we'll have a list per hardware type:
> > 
> > [extras]
> > redfish =
> >sushy>=a.b
> > idrac =
> >python-dracclient>=x.y
> > ilo =
> >...
> > ...
> > 
> > WDYT?
> 
> The second option is what I would expect.
> 
> Doug
> 
> > 
> > [1] 
> > https://github.com/openstack/ironic/blob/master/driver-requirements.tx
> > t [2] 
> > https://github.com/openstack/python-ironicclient/blob/master/setup.cfg
> > #L115
> > 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

The first question I have is if ALL the drivers are supposed to be co-installable
with each other.  If so, adding them to requirements sounds fine, as long as each
one follows https://github.com/openstack/requirements/#for-new-requirements .

As far as the format, I prefer option 2 (the breakout option).  I'm not sure if
the bot will need an update, but I suspect not as it tries to keep ordering 
iirc.

-- 
Matthew Thode


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Developer Mailing List Digest September 30 – October 6

2017-10-30 Thread Mike Perez
Thanks to Thierry Carrez and Jeremy Stanley for summarizing this issue of the
Dev Digest!

Contribute to the Dev Digest by summarizing OpenStack Dev List thread:

* https://etherpad.openstack.org/p/devdigest
* http://lists.openstack.org/pipermail/openstack-dev/

HTML Version: 
https://www.openstack.org/blog/2017/10/developer-mailing-list-digest-october-21-27-2017/



News

* TC election results [0]
* Next PTG will be in Dublin, the week of February 26, 2018. More details will
  be posted on openstack.org/ptg as soon as we have them. [1]

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-October/123845.html
[1] http://lists.openstack.org/pipermail/openstack-dev/2017-October/124021.html

 
SuccessBot Says
===
* gothamr_ [0]: changes to the manila driverfixes branches can finally be
  merged xD Thanks infra folks for ZuulV3!
* andreaf [1]: Tempest test base class is now a stable API for plugins
* More [2]

[0] - 
http://eavesdrop.openstack.org/irclogs/%23openstack-manila/%23openstack-manila.2017-10-17.log.html
[1] - 
http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2017-10-24.log.html
[2] - https://wiki.openstack.org/wiki/Successes

 
Community Summaries
===
* TC Report 43 by Chris Dent [0]
* Nova Notification Update Week 43 by Balazs Gibizer [1]
* POST /api-sig/news by Chris Dent [2]
* Technical Committee Status Update by Thierry Carrez [3]
* Nova Placements Resource Provider Update [4]

[0] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-October/123944.html
[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-October/123990.html
[2] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-October/124023.html
[3] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-October/123818.html
[4] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-October/124052.html

 
Time to Remove the Ceilometer API?
==
Summarized by Jeremy Stanley
 
The Ceilometer REST API was deprecated in Ocata, a year ago, and the User
Survey indicates more than half its users have switched to the non-OpenStack
Gnocchi service's API instead (using Ceilometer as a backend). The Ceilometer
install guide has also been recommending Gnocchi at least as long ago as
Newton. The old API has become an attractive nuisance from the Telemetry team's
perspective, and they'd like to go ahead and drop it altogether in Queens.
 
http://lists.openstack.org/pipermail/openstack-dev/2017-October/123593.html
 
 
Keystone v2.0 API Removal
=
Summarized by Thierry Carrez
 
Keystone Queens PTL Lance Bragstad gives notice that the Queens release will
not include the v2.0 API, except the ec2-api. This is being done after a
lengthy deprecation period.
 
http://lists.openstack.org/pipermail/openstack-dev/2017-October/123783.html


pgpB2eZcqVoRU.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][qa][ubuntu][neutron] Xenial Neutron Timeouts

2017-10-30 Thread Brian Haley

On 10/30/2017 05:46 PM, Matthew Treinish wrote:
 From a quick glance at the logs my guess is that the issue is related 
to this stack trace in the l3 agent logs:


http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/neutron/neutron-l3-agent.txt.gz?level=TRACE#_2017-10-29_23_11_15_146

I'm not sure what's causing it to complain there. But, I'm on a plane 
right now (which is why this is a top post, sorry) so I can't really dig 
much more than that. I'll try to take a deeper look at things later when 
I'm on solid ground. (hopefully someone will beat me to it by then though)


I don't think that l3-agent trace is it, as the failure is coming from 
the API.  It's actually a trace that's happening due to the async nature 
of how the agent runs arping; the fix is 
https://review.openstack.org/#/c/507914/ but it only removes the log noise.


http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/neutron/neutron-server.txt.gz 
has some tracebacks that look config related, possible missing DB table? 
 But I haven't looked very closely.


-Brian


On October 31, 2017 1:25:55 AM GMT+04:00, Mohammed Naser 
 wrote:


Hi everyone,

I'm looking for some help regarding an issue that we're having with
the Puppet OpenStack modules, we've had very inconsistent failures in
the Xenial with the following error:

 
http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/
 
http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/testr_results.html.gz
 Details: {u'message': u'Unable to associate floating IP
172.24.5.17 to fixed IP 10.100.0.8 for instance
d265626a-77c1-4d2f-8260-46abe548293e. Error: Request to
https://127.0.0.1:9696/v2.0/floatingips/2e3fa334-d6ac-443c-b5ba-eeb521d6324c
timed out', u'code': 400}

At this point, we're at a bit of a loss.  I've tried my best in order
to find the root cause however we have not been able to do this.  It
was persistent enough that we elected to go non-voting for our Xenial
gates, however, with no fix ahead of us, I feel like this is a waste
of resources and we need to either fix this or drop CI for Ubuntu.  We
don't deploy on Ubuntu and most of the developers working on the
project don't either at this point, so we need a bit of resources.

If you're a user of Puppet on Xenial, we need your help!  Without any
resources going to fix this, we'd unfortunately have to drop support
for Ubuntu because of the lack of resources to maintain it (or
assistance).  We (Puppet OpenStack team) would be more than happy to
work together to fix this so pop-in at #puppet-openstack or reply to
this email and let's get this issue fixed.

Thanks,
Mohammed



OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] rh1 outage today

2017-10-30 Thread Ben Nemec
It turns out this wasn't _quite_ resolved yet.  I was still seeing some 
excessively long stack creation times today and it turns out one of our 
compute nodes had virtualization turned off.  This caused all of its 
instances to fail and need a retry.  Once I disabled the compute service 
on it stacks seemed to be creating in a normal amount of time again.


This happened because the node had some hardware issues, and apparently 
the fix was to replace the system board so we got it back with 
everything set to default.  I fixed this and re-enabled the node and all 
seems well again.


On 10/28/2017 02:07 AM, Juan Antonio Osorio wrote:

Thanks for the postmortem; it's always a good read to learn stuff :)

On 28 Oct 2017 00:11, "Ben Nemec" wrote:


Hi all,

As you may or may not have noticed all ovb jobs on rh1 started
failing sometime last night.  After some investigation today I found
a few issues.

First, our nova db archiving wasn't working.  This was due to the
auto-increment counter issue described by melwitt in

http://lists.openstack.org/pipermail/openstack-dev/2017-September/122903.html
 
Deleting the problematic rows from the shadow table got us past that.


On another db-related note, we seem to have turned ceilometer back
on at some point in rh1.  I think that was intentional to avoid
notification queues backing up, but it led to a different problem. 
We had approximately 400 GB of mongodb data from ceilometer that we

don't actually care about.  I cleaned that up and set a TTL in
ceilometer so hopefully this won't happen again.

Is there an alarm or something we could set to get notified about this 
kind of stuff? Or better yet, something we could automate to avoid this? 
What's using mongodb nowadays?


Setting a TTL should avoid this in the future.  Note that I don't think 
mongo is still used by default, but in our old Mitaka version it was.
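
For reference, the TTL knob is the standard one in ceilometer.conf (option
name as of our Mitaka-era install; the value here is just illustrative):

  [database]
  metering_time_to_live = 604800  # expire samples after 7 days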


For the nova archiving thing I think we'd have to set up email 
notifications for failed cron jobs.  That would be a good RFE.
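
Plain cron can get us most of the way there if the job stays quiet on
success; a sketch, with a hypothetical wrapper script name:

  MAILTO=ops@example.com
  0 3 * * * /usr/local/bin/nova-db-archive.sh  # cron mails any output, i.e. on failure

The RFE would mostly be about wiring that up consistently.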





Unfortunately neither of these things completely resolved the
extreme slowness in the cloud that was causing every testenv to
fail.  After trying a number of things that made no difference, the
culprit seems to have been rabbitmq.  There was nothing obviously
wrong with it according to the web interface, the queues were all
short and messages seemed to be getting delivered.  However, when I
ran rabbitmqctl status at the CLI it reported that the node was
down.  Since something was clearly wrong I went ahead and restarted
it.  After that everything seems to be back to normal.

Same question as above, could we set an alarm or automate the node 
recovery?


On this one I have no idea.  As I noted, when I looked at the rabbit web 
ui everything looked fine.  This isn't like the notification queue 
problem where one look at the queue lengths made it obvious something 
was wrong.  Messages were being delivered successfully, just very, very 
slowly.  Maybe looking at messages per second would help, but that would 
be hard to automate.  You'd have to know if there were few messages 
going through because of performance issues or if the cloud is just 
under light load.
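
For the record, the kind of snapshot we'd be polling is just stock
rabbitmqctl; picking sane alert thresholds is the hard part:

  rabbitmqctl status
  rabbitmqctl list_queues name messages messages_ready messages_unacknowledged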


I guess it's also worth noting that at some point this cloud is going 
away in favor of RDO cloud.  Of course we said that back in December 
when we discussed the OVS port exhaustion issue and now 11 months later 
it still hasn't happened.  That's why I haven't been too inclined to 
pursue extensive monitoring for the existing cloud though.





I'm not sure exactly what the cause of all this was.  We did get
kind of inundated with jobs yesterday after a zuul restart which I
think is what probably pushed us over the edge, but that has
happened before without bringing the cloud down.  It was probably a
combination of some previously unnoticed issues stacking up over
time and the large number of testenvs requested all at once.

In any case, testenvs are creating successfully again and the jobs
in the queue look good so far.  If you notice any problems please
let me know though.  I'm hoping this will help with the job
timeouts, but that remains to be seen.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for us

Re: [openstack-dev] [tripleo] rh1 outage today

2017-10-30 Thread Ben Nemec



On 10/30/2017 05:14 PM, Ben Nemec wrote:
It turns out this wasn't _quite_ resolved yet.  I was still seeing some 
excessively long stack creation times today and it turns out one of our 
compute nodes had virtualization turned off.  This caused all of its 
instances to fail and need a retry.  Once I disabled the compute service 
on it stacks seemed to be creating in a normal amount of time again.


This happened because the node had some hardware issues, and apparently 
the fix was to replace the system board so we got it back with 
everything set to default.  I fixed this and re-enabled the node and all 
seems well again.


On 10/28/2017 02:07 AM, Juan Antonio Osorio wrote:

Thanks for the postmortem; it's always a good read to learn stuff :)

On 28 Oct 2017 00:11, "Ben Nemec" wrote:


Hi all,

As you may or may not have noticed all ovb jobs on rh1 started
failing sometime last night.  After some investigation today I found
a few issues.

First, our nova db archiving wasn't working.  This was due to the
auto-increment counter issue described by melwitt in

http://lists.openstack.org/pipermail/openstack-dev/2017-September/122903.html 


 
Deleting the problematic rows from the shadow table got us past that.


On another db-related note, we seem to have turned ceilometer back
on at some point in rh1.  I think that was intentional to avoid
notification queues backing up, but it led to a different problem. 
We had approximately 400 GB of mongodb data from ceilometer that we

don't actually care about.  I cleaned that up and set a TTL in
ceilometer so hopefully this won't happen again.

Is there an alarm or something we could set to get notified about this 
kind of stuff? Or better yet, something we could automate to avoid 
this? What's using mongodb nowadays?


Setting a TTL should avoid this in the future.  Note that I don't think 
mongo is still used by default, but in our old Mitaka version it was.


For the nova archiving thing I think we'd have to set up email 
notifications for failed cron jobs.  That would be a good RFE.


And done: https://bugs.launchpad.net/tripleo/+bug/1728737






Unfortunately neither of these things completely resolved the
extreme slowness in the cloud that was causing every testenv to
fail.  After trying a number of things that made no difference, the
culprit seems to have been rabbitmq.  There was nothing obviously
wrong with it according to the web interface, the queues were all
short and messages seemed to be getting delivered.  However, when I
ran rabbitmqctl status at the CLI it reported that the node was
down.  Since something was clearly wrong I went ahead and restarted
it.  After that everything seems to be back to normal.

Same question as above, could we set an alarm or automate the node 
recovery?


On this one I have no idea.  As I noted, when I looked at the rabbit web 
ui everything looked fine.  This isn't like the notification queue 
problem where one look at the queue lengths made it obvious something 
was wrong.  Messages were being delivered successfully, just very, very 
slowly.  Maybe looking at messages per second would help, but that would 
be hard to automate.  You'd have to know if there were few messages 
going through because of performance issues or if the cloud is just 
under light load.


I guess it's also worth noting that at some point this cloud is going 
away in favor of RDO cloud.  Of course we said that back in December 
when we discussed the OVS port exhaustion issue and now 11 months later 
it still hasn't happened.  That's why I haven't been too inclined to 
pursue extensive monitoring for the existing cloud though.





I'm not sure exactly what the cause of all this was.  We did get
kind of inundated with jobs yesterday after a zuul restart which I
think is what probably pushed us over the edge, but that has
happened before without bringing the cloud down.  It was probably a
combination of some previously unnoticed issues stacking up over
time and the large number of testenvs requested all at once.

In any case, testenvs are creating successfully again and the jobs
in the queue look good so far.  If you notice any problems please
let me know though.  I'm hoping this will help with the job
timeouts, but that remains to be seen.

-Ben


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-10-30 Thread Matthew Thode
On 17-10-30 20:48:37, arkady.kanev...@dell.com wrote:
> The second seem to be better suited for per driver requirement handling and 
> per HW type per function.
> Which option is easier to handle for container per dependency for the future?
> 
> 
> Thanks,
> Arkady
> 
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com] 
> Sent: Monday, October 30, 2017 2:47 PM
> To: openstack-dev 
> Subject: Re: [openstack-dev] [ironic] [requirements] moving driver 
> dependencies to global-requirements?
> 
> Excerpts from Dmitry Tantsur's message of 2017-10-30 17:51:49 +0100:
> > Hi all,
> > 
> > So far driver requirements [1] have been managed outside of 
> > global-requirements. 
> > This was mostly necessary because some dependencies were not on PyPI. 
> > This is no longer the case, and I'd like to consider managing them 
> > just like any other dependencies. Pros:
> > 1. making these dependencies (and their versions) more visible for 
> > packagers 2. following the same policies for regular and driver 
> > dependencies 3. ensuring co-installability of these dependencies with 
> > each other and with the remaining openstack 4. potentially using 
> > upper-constraints in 3rd party CI to test what packagers will probably 
> > package 5. we'll be able to finally create a tox job running unit 
> > tests with all these dependencies installed (FYI these often breaks in 
> > RDO CI)
> > 
> > Cons:
> > 1. more work for both the requirements team and the vendor teams 2. 
> > inability to use ironic release notes to explain driver requirements 
> > changes 3. any objections from the requirements team?
> > 
> > If we make this change, we'll drop driver-requirements.txt, and will 
> > use setuptools extras to list then in setup.cfg (this way is supported 
> > by g-r) similar to what we do in ironicclient [2].
> > 
> > We either will have one list:
> > 
> > [extras]
> > drivers =
> >sushy>=a.b
> >python-dracclient>=x.y
> >python-prolianutils>=v.w
> >...
> > 
> > or (and I like this more) we'll have a list per hardware type:
> > 
> > [extras]
> > redfish =
> >sushy>=a.b
> > idrac =
> >python-dracclient>=x.y
> > ilo =
> >...
> > ...
> > 
> > WDYT?
> 
> The second option is what I would expect.
> 
> Doug
> 
> > 
> > [1] 
> > https://github.com/openstack/ironic/blob/master/driver-requirements.tx
> > t [2] 
> > https://github.com/openstack/python-ironicclient/blob/master/setup.cfg
> > #L115
> > 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Meant to reply from this address, but below is my original response.

The first question I have is if ALL the drivers are supposed to be co-installable
with each other.  If so, adding them to requirements sounds fine, as long as each
one follows https://github.com/openstack/requirements/#for-new-requirements .

As far as the format, I prefer option 2 (the breakout option).  I'm not sure if
the bot will need an update, but I suspect not as it tries to keep ordering 
iirc.

-- 
Matthew Thode (prometheanfire)


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-10-30 Thread Richard.Pioso
> From: Dmitry Tantsur [mailto:dtant...@redhat.com]

> Cons:
> 1. more work for both the requirements team and the vendor teams

Please elaborate on the additional work you envision for the vendor teams.

> 2. inability to use ironic release notes to explain driver requirements 
> changes

Where could that information move to?

> We either will have one list:
> 
> [extras]
> drivers =
>sushy>=a.b
>python-dracclient>=x.y
>python-prolianutils>=v.w
>...
> 
> or (and I like this more) we'll have a list per hardware type:
> 
> [extras]
> redfish =
>sushy>=a.b
> idrac =
>python-dracclient>=x.y
> ilo =
>...
> ...
> 
> WDYT?
> 

Overall, a big +1. I prefer the second approach.

A couple of questions ...

1. If two (2) hardware types have the same requirement, would they both
enter it in their lists?
2. And would that be correctly handled?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][qa][ubuntu][neutron] Xenial Neutron Timeouts

2017-10-30 Thread Mohammed Naser
On Mon, Oct 30, 2017 at 6:07 PM, Brian Haley  wrote:
> On 10/30/2017 05:46 PM, Matthew Treinish wrote:
>>
>>  From a quick glance at the logs my guess is that the issue is related to
>> this stack trace in the l3 agent logs:
>>
>>
>> http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/neutron/neutron-l3-agent.txt.gz?level=TRACE#_2017-10-29_23_11_15_146
>>
>> I'm not sure what's causing it to complain there. But, I'm on a plane
>> right now (which is why this is a top post, sorry) so I can't really dig
>> much more than that. I'll try to take a deeper look at things later when I'm
>> on solid ground. (hopefully someone will beat me to it by then though)
>
>
> I don't think that l3-agent trace is it, as the failure is coming from the
> API.  It's actually a trace that's happening due to the async nature of how
> the agent runs arping, fix is https://review.openstack.org/#/c/507914/ but
> it only removes the log noise.

Indeed, I've reached out to the Neutron team on IRC and Brian informed me
that this was just log noise.

> http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/neutron/neutron-server.txt.gz
> has some tracebacks that look config related, possible missing DB table?
> But I haven't looked very closely.

The tracebacks are because the Neutron server is started before the
MySQL database is sync'd (afaik, Ubuntu behaviour is to start services
on install, so we haven't had a chance to sync the db).  You can see
the service later restart with none of these database issues.  The
other reason to eliminate config issues is the fact that this happens
intermittently (though, often enough that we had to switch it to
non-voting).  If it was a config issue, it would constantly and always
fail.
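
(For anyone trying to reproduce locally: the sync that eventually runs is
the stock one, i.e. something like

  neutron-db-manage --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade heads

where the config paths assume a typical Ubuntu layout.)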

Thank you Brian & Matthew for your help so far.

> -Brian
>
>
>> On October 31, 2017 1:25:55 AM GMT+04:00, Mohammed Naser
>>  wrote:
>>
>> Hi everyone,
>>
>> I'm looking for some help regarding an issue that we're having with
>> the Puppet OpenStack modules, we've had very inconsistent failures in
>> the Xenial with the following error:
>>
>>
>> http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/
>>
>> http://logs.openstack.org/47/514347/1/check/puppet-openstack-integration-4-scenario001-tempest-ubuntu-xenial/ed5a657/logs/testr_results.html.gz
>>  Details: {u'message': u'Unable to associate floating IP
>> 172.24.5.17 to fixed IP 10.100.0.8 for instance
>> d265626a-77c1-4d2f-8260-46abe548293e. Error: Request to
>>
>> https://127.0.0.1:9696/v2.0/floatingips/2e3fa334-d6ac-443c-b5ba-eeb521d6324c
>> timed out', u'code': 400}
>>
>> At this point, we're at a bit of a loss.  I've tried my best in order
>> to find the root cause however we have not been able to do this.  It
>> was persistent enough that we elected to go non-voting for our Xenial
>> gates, however, with no fix ahead of us, I feel like this is a waste
>> of resources and we need to either fix this or drop CI for Ubuntu.  We
>> don't deploy on Ubuntu and most of the developers working on the
>> project don't either at this point, so we need a bit of resources.
>>
>> If you're a user of Puppet on Xenial, we need your help!  Without any
>> resources going to fix this, we'd unfortunately have to drop support
>> for Ubuntu because of the lack of resources to maintain it (or
>> assistance).  We (Puppet OpenStack team) would be more than happy to
>> work together to fix this so pop-in at #puppet-openstack or reply to
>> this email and let's get this issue fixed.
>>
>> Thanks,
>> Mohammed
>>
>>
>> 
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/ope

Re: [openstack-dev] [ironic] [requirements] moving driver dependencies to global-requirements?

2017-10-30 Thread Doug Hellmann
Excerpts from Richard.Pioso's message of 2017-10-30 23:11:31 +:
> > From: Dmitry Tantsur [mailto:dtant...@redhat.com]
> 
> > Cons:
> > 1. more work for both the requirements team and the vendor teams
> 
> Please elaborate on the additional work you envision for the vendor teams.
> 
> > 2. inability to use ironic release notes to explain driver requirements 
> > changes
> 
> Where could that information move to?
> 
> > We either will have one list:
> > 
> > [extras]
> > drivers =
> >sushy>=a.b
> >python-dracclient>=x.y
> >python-prolianutils>=v.w
> >...
> > 
> > or (and I like this more) we'll have a list per hardware type:
> > 
> > [extras]
> > redfish =
> >sushy>=a.b
> > idrac =
> >python-dracclient>=x.y
> > ilo =
> >...
> > ...
> > 
> > WDYT?
> > 
> 
> Overall, a big +1. I prefer the second approach.
> 
> A couple of questions ...
> 
> 1. If two (2) hardware types have the same requirement, would they both
> enter it in their lists?

Yes.
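
Illustratively, with the hypothetical version pins from Dmitry's sketch,
setup.cfg would simply repeat the dependency under each extra:

  [extras]
  redfish =
     sushy>=a.b
  some-other-redfish-hw =
     sushy>=a.b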

> 2. And would that be correctly handled?

Good question. We should test the requirements update script to see.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-10-30 Thread Arkady.Kanevsky
See you there, Erik.

From: Erik McCormick [mailto:emccorm...@cirrusseven.com]
Sent: Monday, October 30, 2017 10:58 AM
To: Matt Riedemann 
Cc: OpenStack Development Mailing List ; 
openstack-operators 
Subject: Re: [openstack-dev] [Openstack-operators] 
[skip-level-upgrades][fast-forward-upgrades] PTG summary



On Oct 30, 2017 11:53 AM, "Matt Riedemann" <mriede...@gmail.com> wrote:
On 9/20/2017 9:42 AM, arkady.kanev...@dell.com 
wrote:
Lee,
I can chair meeting in Sydney.
Thanks,
Arkady

Arkady,

Are you actually moderating the forum session in Sydney because the session 
says Eric McCormick is the session moderator:

I submitted it so it gets my name on it. I think Arkady and I are going to do 
it together.

https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20451/fast-forward-upgrades

People are asking in the nova IRC channel about this session and were told to 
ask Jay Pipes about it, but Jay isn't going to be in Sydney and isn't involved 
in fast-forward upgrades, as far as I know anyway.

So whoever is moderating this session, can you please create an etherpad and 
get it linked to the wiki?

https://wiki.openstack.org/wiki/Forum/Sydney2017

I'll have the etherpad up today and pass it along here and on the wiki.



--

Thanks,

Matt


___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [masakari] I submitted a patch that fixes py27 unit tests of Masakari.

2017-10-30 Thread Rikimaru Honjo

Hello,

I submitted a patch that fixes py27 unit tests of Masakari.

https://review.openstack.org/#/c/516517/

This is the 2nd solution, which we discussed in today's IRC meeting. [1]

http://eavesdrop.openstack.org/meetings/masakari/2017/masakari.2017-10-31-04.00.log.html#l-54

Please check it.

[1]
1st solution is this:
https://review.openstack.org/#/c/513520/

Best Regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev