Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-18 Thread Ihar Hrachyshka

Sławek Kapłoński  wrote:


Hello,

What MTU have you got configured on the VMs? I had performance issues on a
vxlan network with the standard MTU (1500), but when I configured Jumbo
frames on the VMs and on the hosts it was much better.


Right. Note that custom MTU works out of the box only starting from Mitaka.
You can find details on how to configure Neutron for Jumbo frames in the
official docs:


http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova][horizon] Serial console support for ironic instances

2016-04-18 Thread Zhenguo Niu
Thanks Yuiko for doing this. Unfortunately I can't go to Austin, so I
would like to add more details about my proposal here; I hope someone can
bring it to the session.

Add a custom HTTP proxy for web based consoles to Nova
https://review.openstack.org/#/c/300582/

* Pros:
- for Ironic
  - No changes to the Ironic API are needed
  - We can continue to use the web interface from Ironic, without relying
on Nova's websocketproxy to provide a ws/wss URL
- for Nova and Horizon
  - Supports one more console type for hypervisors that provide web-based
consoles, not only for Ironic; here's another one that also needs it:
https://blueprints.launchpad.net/nova/+spec/spice-http-proxy

* Cons:
- Doesn't output a log file
  (but I think session logging would be a great extension for
shellinabox; I will explore this further)


Also, Ironic will support different console drivers, so I don't think only
one proposal will be accepted here.

On Thu, Apr 14, 2016 at 10:11 PM, Jim Rollenhagen 
wrote:

> On Wed, Apr 13, 2016 at 05:47:15PM +0900, Yuiko Takada wrote:
> > Hi,
> >
> > I also want to discuss about it at summit session.
> >
> > 2016-04-13 0:41 GMT+09:00 Ruby Loo :
> >
> > > Yes, I think it would be good to have a summit session on that.
> > > However, before the session, it would really be helpful if the folks
> > > with proposals got together and/or reviewed each other's proposals,
> > > and summarized their findings.
> > >
> >
> > I've summarized all of the related proposals.
> >
> > (1)Add driver using Socat
> > https://review.openstack.org/#/c/293827/
> >
> > * Pros:
> > - There is no influence on other components
> > - No need to change any other Ironic drivers (like
> > IPMIShellinaboxConsole)
> > - No need to bump the API microversion/RPC
> >
> > * Cons:
> > - Doesn't output a log file
> >
> > (2)Add driver starting ironic-console-server
> > https://review.openstack.org/#/c/302291/
> > (There is no spec, yet)
> >
> > * Pros:
> > - There is no influence on other components
> > - Outputs a log file
> > - No need to change any other Ironic drivers (like
> > IPMIShellinaboxConsole)
> > - No new Ironic services required; it only adds tools
> >
> > * Cons:
> > - Need to bump API microversion/RPC
> >
> > (3)Add a custom HTTP proxy to Nova
> > https://review.openstack.org/#/c/300582/
> >
> > * Pros:
> > - Don't need any change to Ironic API
> >
> > * Cons:
> > - Needs Nova API changes (bump microversion)
> > - Needs Horizon changes
> > - Doesn't output a log file
> >
> > (4)Add Ironic-ipmiproxy server
> > https://review.openstack.org/#/c/296869/
> >
> > * Pros:
> > - There is no influence on other components
> > - Outputs a log file
> > - IPMIShellinaboxConsole will also be available via Horizon
> >
> > * Cons:
> > - Need IPMIShellinaboxConsole changes?
> > - Need to bump API microversion/RPC
> >
> > If there are any mistakes, please let me know.
>
> Thanks for doing this Yuiko, this will be helpful for everyone preparing
> for this session. Looking forward to chatting about it. :)
>
> // jim
>
> >
> >
> > Best Regards,
> > Yuiko Takada
> >
> > 2016-04-13 0:41 GMT+09:00 Ruby Loo :
> >
> > > Yes, I think it would be good to have a summit session on that.
> > > However, before the session, it would really be helpful if the folks
> > > with proposals got together and/or reviewed each other's proposals,
> > > and summarized their findings. You may find after reviewing the
> > > proposals that, e.g., only 2 are really different. Or that several
> > > have merit because they are addressing slightly different issues.
> > > That would make it easier to present/discuss/decide at the session.
> > >
> > > --ruby
> > >
> > >
> > > On 12 April 2016 at 09:17, Jim Rollenhagen 
> wrote:
> > >
> > >> On Tue, Apr 12, 2016 at 02:02:44AM +0800, Zhenguo Niu wrote:
> > >> > Maybe we can continue the discussion here, as there's not enough
> > >> > time in the IRC meeting :)
> > >>
> > >> Someone mentioned this would make a good summit session, as there
> > >> are a few competing proposals that are all good options. I do welcome
> > >> discussion here until then, but I'm going to put it on the schedule. :)
> > >>
> > >> // jim
> > >>
> > >> >
> > >> > On Fri, Apr 8, 2016 at 1:06 AM, Zhenguo Niu 
> > >> wrote:
> > >> >
> > >> > >
> > >> > > Ironic is currently using shellinabox to provide a serial
> > >> > > console, but it's not compatible with Nova, so I would like to
> > >> > > propose a new console type and a custom HTTP proxy [1] which
> > >> > > validates the token and connects to the Ironic console from Nova.
> > >> > >
> > >> > > On the Horizon side, we should add support for the new console
> > >> > > type [2] as well; here are some screenshots from my local
> > >> > > environment.
> > >> > >
> > >> > > Additionally, shellinabox console port management should be
> > >> > > improved in Ironic: instead of being manually specified, ports
> > >> > > should be dynamically allocated
Re: [openstack-dev] [Cinder] API features discoverability

2016-04-18 Thread Ramakrishna, Deepti
Hi Michal,

This seemed like a good idea when I first read it. What's more, the server code 
for extension listing [1] does not do any authorization, so it can be used by 
any logged-in user.

However, I don't know if requiring the admin to manually disable an extension 
is practical. First, admins can always forget to do that. Second, even if they 
wanted to, it is not clear how they could disable specific extensions. I assume 
they would need to edit the cinder.conf file. This file currently lists the set 
of extensions to load as cinder.api.contrib.standard_extensions. The server 
code [2] implements this by walking the cinder/api/contrib directory and 
loading all discovered extensions. How is it possible to subtract just one 
extension from the "standard extensions"? Also, system capabilities and 
extensions may not have a 1:1 relationship in general.
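As an illustration of this kind of feature discovery from the client side, one could check the extension list for a backup extension. This is only a sketch: the payload shape is modeled on a typical Cinder /v2/{project_id}/extensions response, and the "backups" alias used here is an assumption, not a verified Cinder alias.

```python
# Sketch: client-side feature detection via the API extension list.
# The response shape mirrors a typical /v2/{project_id}/extensions payload;
# the 'backups' alias is illustrative, not verified against Cinder.
sample_response = {
    "extensions": [
        {"alias": "os-extended-snapshot-attributes",
         "name": "ExtendedSnapshotAttributes"},
        {"alias": "backups", "name": "Backups"},
    ]
}


def supports(extensions_doc, alias):
    # True when an extension with the given alias is advertised.
    return any(ext.get("alias") == alias
               for ext in extensions_doc["extensions"])


print(supports(sample_response, "backups"))  # True
```

Of course, as argued above, an advertised extension is not the same thing as a running cinder-backup service, which is exactly why a dynamic capability check is attractive.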

Having a new extension API (as proposed by me in [3]) for returning the 
available services/functionality does not have the above problems. It 
dynamically checks for the existence of the cinder-backup service, so it does 
not need manual action from the admin. I have published a BP [4] related to 
this. Can you please comment on it?

Thanks,
Deepti

[1] 
https://github.com/openstack/cinder/blob/2596004a542053bc19bb56b9a99f022368816871/cinder/api/extensions.py#L152
[2] 
https://github.com/openstack/cinder/blob/2596004a542053bc19bb56b9a99f022368816871/cinder/api/extensions.py#L312
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077209.html
[4] https://review.openstack.org/#/c/306930/

-Original Message-
From: Michał Dulko [mailto:michal.du...@intel.com] 
Sent: Thursday, April 14, 2016 7:06 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [Cinder] API features discoverability

Hi,

When looking at bug [1] I thought that we could simply use 
/v2//extensions to signal features available in the deployment - in 
this case backups, as these are implemented as an API extension too. A cloud 
admin can disable an extension if his cloud doesn't support a particular 
feature, and this is easily discoverable using the aforementioned call. It 
looks like that solution wasn't proposed when the bug was initially raised.

Now the problem is that we're actually planning to move all API extensions to 
the core API. Do we plan to keep this API for feature discovery? How do we 
approach API compatibility in this case if we want to change it? Do we have a 
plan for that?

We could keep this extensions API controlled from cinder.conf, regardless of 
the fact that we've moved everything to the core, but that doesn't seem right 
(the API will still be functional even if the administrator disables it in the 
configuration, am I right?).

Anyone have thoughts on that?

Thanks,
Michal

[1] https://bugs.launchpad.net/cinder/+bug/1334856



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Thierry Carrez

Monty Taylor wrote:

On 04/17/2016 10:13 AM, Doug Hellmann wrote:

I am organizing a summit session for the cross-project track to
(re)consider how we manage our list of global dependencies [1].
Some of the changes I propose would have a big impact, and so I
want to ensure everyone doing packaging work for distros is available
for the discussion. Please review the etherpad [2] and pass the
information along to colleagues who might be interested.

Doug

[1]
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9473
[2] https://etherpad.openstack.org/p/newton-global-requirements


Sadly the session conflicts with a different one that I'm leading, so I
cannot be there. That, of course, makes me sad, because I think it's an
important conversation to have, and I have some strong opinions on the
topic in both directions.


We might be able to adapt the schedule to accommodate your presence...
if we make the change ASAP and communicate it widely.


We could for example swap the "Co-installability Requirements" 
discussion with the "Stable Branch End of Life Policy" discussion.


Such a swap could also help with the conflict someone reported between the 
"Identity v3 API only devstack" and "Stable Branch" discussions 
(I can't remember who/where, though).


--
Thierry Carrez (ttx)



Re: [openstack-dev] [all] Openstack rpm build

2016-04-18 Thread Qiming Teng
On Mon, Apr 18, 2016 at 06:05:41AM +0200, Andreas Jaeger wrote:
> On 04/18/2016 05:30 AM, Kenny Ji-work wrote:
> > Hi all,
> > 
> > In our development environment, we want to create OpenStack RPMs
> > ourselves. When typing 'python setup.py bdist_rpm', some files are not
> > packaged in. Are there tools or methods for packaging the OpenStack
> > modules into RPMs? Thank you for answering!
> 
> 
> There's the RPM packaging team that creates spec files for all projects,
> check out their repo,
> 

Where is the repo? Thanks.

Regards,
  Qiming 




Re: [openstack-dev] [all] Removal of in progress console log access

2016-04-18 Thread Sean Dague
On 04/15/2016 07:23 PM, Monty Taylor wrote:
> tl;dr Effective immediately we've put firewalls in front of the Jenkins
> servers removing in-progress console log streaming access
> 
> Longer version
> 
> 
> Recently some potential security issues have come to our attention with
> Jenkins [1] and the way we run it that are non-trivial to fix. As a
> precaution, we have put firewalls in front of the Jenkins web interfaces
> to give us time to react in a reasoned manner. Zuul will still operate
> as usual, and we'll still get log information as usual when the jobs are
> done. However, it does mean that in-progress console log streaming will
> go away for the time being.
> 
> We have some plans as to how to address the situation, but they will
> take a few weeks to finalize and implement. Although we regret the
> inconvenience and temporary loss of functionality, it seems the most
> prudent step to take at the moment. As soon as we have an ETA on
> resumption of console log streaming, we'll be sure to let everyone know.
> 
> Thanks,
> OpenStack Infra team
> 
> [1]
> https://groups.google.com/forum/#!msg/jenkinsci-advisories/lJfvDs5s6bk/4dRqSc4pHgAJ

Bummer. This gets used a lot to figure out the state of things, given that
zuul links to the console even after the job is complete. Changing that
to the log server link would mitigate the blind spot.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [TripleO] Backport exception request // discussion: setting the default root device

2016-04-18 Thread Steven Hardy
On Wed, Apr 06, 2016 at 10:28:44AM +0200, Dmitry Tantsur wrote:
> Hi OOO'ers!
> 
> I'd like to get your permission to backport
> https://review.openstack.org/#/c/288417/ to stable/{liberty,mitaka} or seek
> alternative suggestions on how to make life easier for folks upgrading from
> Kilo.
> 
> The context of the problem is the following. In the Liberty release we (with
> the whole Ironic world) have switched from the old bash-based Ironic deploy
> ramdisk to IPA. I can't talk enough about benefits that it brought us, but
> today I want to talk about one drawback.
> 
> IPA has a different logic for choosing the root device for deployment, when
> several root devices are present. The old ramdisk always tried to find a
> disk by name present in the Ironic disk_devices configuration option,
> defaulting to something like "sda,hda,vda". IPA takes the smallest device
> which is greater than 4 GiB. Obviously, it's not guaranteed to be the same.
> 
> What it means is that when people upgrade their undercloud from Kilo and
> Liberty and beyond, and rebuild an overcloud node, this node may end up with
> a different root device picked by default. In the absence of cleaning, that
> will probably result in deployment failure (e.g. due to duplicated config
> drive).
> 
> A side note: the same was possible and actually happened back in Kilo,
> because device names are not reliable, and can change between reboots.
> 
> Now, the Ironic team has always recommended using root device hints for
> several root devices. However, there are valid complaints from users that
> running node-update on every node is not really convenient. And he is the
> patch in question: it adds a new flag to 'baremetal configure boot' to
> bulk-set root device hints based on a strategy or list of device names. The
> root device information is fetched from the introspection data. This allows
> people upgrading from Kilo to just do:
> 
>  openstack baremetal configure boot --root-device=sda,hda,vda
> 
> to create root device hints matching the previous behavior. I suggest we
> backport this patch to simplify life for people.

+1 - I think we can consider this a bug: from a user perspective,
choosing the wrong/different device after an upgrade is a bug, and this
provides a way around it.

So I'd suggest you raise a bug (if one doesn't already exist) and propose
the backports.

Also it would appear from a quick look at the patch that it's low-risk from
a backport perspective, as it adds new options which a user may ignore if
they desire the default (current) behavior instead.

Thanks!

Steve
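For readers less familiar with the IPA behavior Dmitry describes, the default selection rule ("the smallest device which is greater than 4 GiB") can be sketched as follows; device names and sizes are purely illustrative:

```python
# Sketch of the IPA default root device selection described above:
# pick the smallest disk larger than 4 GiB. Sizes are in GiB;
# device names and values are illustrative.
def pick_root_device(disks):
    # Keep only disks above the 4 GiB threshold, then take the smallest.
    candidates = [(name, size_gib) for name, size_gib in disks if size_gib > 4]
    name, _size = min(candidates, key=lambda d: d[1])
    return name


disks = [('sda', 500), ('sdb', 8), ('sdc', 2)]
print(pick_root_device(disks))  # sdb
```

This makes the upgrade hazard concrete: the old ramdisk would have picked sda (first name match), while this rule picks sdb, so without root device hints the root disk can silently change.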



Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-18 Thread Akihiro Motoki
2016-04-18 15:58 GMT+09:00 Ihar Hrachyshka :
> Sławek Kapłoński  wrote:
>
>> Hello,
>>
>> What MTU have you got configured on the VMs? I had performance issues on a
>> vxlan network with the standard MTU (1500), but when I configured Jumbo
>> frames on the VMs and on the hosts it was much better.
>
>
> Right. Note that custom MTU works out of the box only starting from Mitaka.
> You can find details on how to configure Neutron for Jumbo frames in the
> official docs:
>
> http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html

If you want to advertise the MTU using DHCP in releases before Mitaka,
you can prepare a custom dnsmasq config file like the one below and
set it in the dhcp-agent's dnsmasq_config_file config option.
You also need to set the network_device_mtu config parameter appropriately.

sample dnsmasq config file:
--
dhcp-option-force=26,8950
--
DHCP option 26 specifies the MTU.
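To make the wiring explicit, the pieces might fit together like this; the file paths are illustrative, and the options shown (dnsmasq_config_file, network_device_mtu) are the ones named above:

```ini
# dhcp_agent.ini - point the DHCP agent at the custom dnsmasq config
# (file path illustrative)
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# neutron.conf - MTU for network devices managed by the agents
[DEFAULT]
network_device_mtu = 8950
```

The dnsmasq file itself then carries only the dhcp-option-force=26,8950 line shown above.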

Akihiro


>
>
> Ihar
>



Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-04-18 Thread Vladimir Kozhukalov
Colleagues,

Whether we are going to continue using Shotgun or
substitute it with something else, we still need to
decouple it from Fuel, because Shotgun is a generic
tool. Please review [1] and [2].

[1] https://review.openstack.org/#/c/298603
[2] https://review.openstack.org/#/c/298615


Btw, one of the ideas was to use Fuel's task capabilities
to gather the diagnostic snapshot.

Vladimir Kozhukalov
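On the parallelism concern raised in the thread below, a simple worker pool already goes a long way. A minimal sketch, where fetch_logs() is a local stand-in for the real per-node SSH/scp fetch (which in practice might shell out to ssh or use a library such as paramiko):

```python
# Minimal sketch of parallel per-node log collection. fetch_logs() is a
# local stand-in for a real SSH fetch; node names are illustrative.
import concurrent.futures


def fetch_logs(node):
    # A real implementation would SSH to the node and copy logs back here.
    return (node, 'ok')


nodes = ['node-%d' % i for i in range(5)]
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(fetch_logs, nodes))

# With per-node workers, a lagging node delays only its own fetch,
# not the collection from the rest of the environment.
print(all(status == 'ok' for status in results.values()))  # True
```

This also illustrates the difference from Ansible's default execution model discussed below, where a single task runs across nodes in lockstep.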

On Thu, Mar 31, 2016 at 1:32 PM, Evgeniy L  wrote:

> Hi,
>
> Problems which I see with the current Shotgun are:
> 1. Lack of parallelism, so it's not going to fetch data fast enough from
> medium/big clouds.
> 2. There should be an easy way to run it manually (it's possible, but
> there is no ready-to-use config); that would be really helpful in case
> Nailgun/Astute/MCollective are down.
>
> As far as I know the 1st is partly covered by Ansible, but the problem is
> that it executes a single task in parallel, so there is a probability that
> a lagging node will slow down fetching from the entire environment.
> Also we would have to build a tool around Ansible to generate playbooks.
>
> Thanks,
>
> On Wed, Mar 30, 2016 at 5:18 PM, Tomasz 'Zen' Napierala <
> tnapier...@mirantis.com> wrote:
>
>> Hi,
>>
>> Do we have any requirements for the new tool? Do we know what we don’t
>> like about the current implementation, what should be avoided, etc.?
>> Before that we can only speculate.
>> From my ops experience, shotgun-like tools will not work conveniently on
>> medium to big environments. Even on a medium env the amount of logs is
>> just too huge to handle with such a simple tool. In such environments a
>> better pattern is to use a dedicated log collection / analysis tool, just
>> like StackLight.
>> On the other hand I’m not sure if Ansible is the right tool for that. It
>> has some features (like the ‘fetch’ command) but in general it’s a
>> configuration management tool, and I’m not sure how it would act under
>> such heavy load.
>>
>> Regards,
>>
>> > On 30 Mar 2016, at 15:20, Vladimir Kozhukalov 
>> wrote:
>> >
>> > ​Igor,
>> >
>> > I can not agree more. Wherever possible we should
>> > use existent mature solutions. Ansible is really
>> > convenient and well known solution, let's try to
>> > use it.
>> >
>> > Yet another thing should be taken into account.
>> > One of Shotgun features is diagnostic report
>> > that could then be attached to bugs to identify
>> > the content of env. This report could also be
>> > used to reproduce env and then fight a bug.
>> > I'd like we to have this kind of report.
>> > Is it possible to implement such a feature
>> > using Ansible? If yes, then let's switch to Ansible
>> > as soon as possible.
>> >
>> > ​
>> >
>> > Vladimir Kozhukalov
>> >
>> > On Wed, Mar 30, 2016 at 3:31 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com> wrote:
>> > Neil Jerram wrote:
>> > > But isn't Ansible also over-complicated for just running commands
>> over SSH?
>> >
>> > It may not be so "simple" to ignore that. Ansible has a lot of modules
>> > which might be very helpful. For instance, Shotgun makes a database
>> > dump and there're Ansible modules with the same functionality [1].
>> >
>> > I'm not advocating Ansible as a replacement. My point is: let's
>> > think about reusing ready solutions. :)
>> >
>> > - igor
>> >
>> >
>> > [1]: http://docs.ansible.com/ansible/list_of_database_modules.html
>> >
>> > On Wed, Mar 30, 2016 at 1:14 PM, Neil Jerram <
>> neil.jer...@metaswitch.com> wrote:
>> > >
>> > > FWIW, as a naive bystander:
>> > >
>> > > On 30/03/16 11:06, Igor Kalnitsky wrote:
>> > >> Hey Fuelers,
>> > >>
>> > >> I know that you probably wouldn't like to hear that, but in my
>> opinion
>> > >> Fuel has to stop using Shotgun. It's nothing more than a command
>> > >> runner over SSH. Besides, it has well-known issues such as
>> > >> retrieving remote directories with broken symlinks inside.
>> > >
>> > > It makes sense to me that a command runner over SSH might not need to
>> be
>> > > a whole Fuel-specific component.
>> > >
>> > >> So I propose to find a modern alternative and reuse it. If we stop
>> > >> supporting Shotgun, we can spend extra time to focus on more
>> important
>> > >> things.
>> > >>
>> > >> As an example, we can consider to use Ansible. It should not be
>> tricky
>> > >> to generate Ansible playbook instead of generating Shotgun one.
>> > >> Ansible is a well-known tool for devops and cloud operators, and
>> > >> we will only benefit if we provide the possibility to extend
>> > >> diagnostic recipes in the usual (for them) way. What do you think?
>> > >
>> > > But isn't Ansible also over-complicated for just running commands
>> over SSH?
>> > >
>> > > Neil
>> > >
>> > >
>> > >

Re: [openstack-dev] [neutron][kilo] - vxlan's max bandwidth

2016-04-18 Thread Ihar Hrachyshka

Akihiro Motoki  wrote:


2016-04-18 15:58 GMT+09:00 Ihar Hrachyshka :

Sławek Kapłoński  wrote:


Hello,

What MTU have you got configured on the VMs? I had performance issues on a
vxlan network with the standard MTU (1500), but when I configured Jumbo
frames on the VMs and on the hosts it was much better.



Right. Note that custom MTU works out of the box only starting from Mitaka.

You can find details on how to configure Neutron for Jumbo frames in the
official docs:

http://docs.openstack.org/mitaka/networking-guide/adv-config-mtu.html


If you want to advertise the MTU using DHCP in releases before Mitaka,
you can prepare a custom dnsmasq config file like the one below and
set it in the dhcp-agent's dnsmasq_config_file config option.
You also need to set the network_device_mtu config parameter appropriately.

sample dnsmasq config file:
--
dhcp-option-force=26,8950
--
DHCP option 26 specifies the MTU.


Several notes:

- In Liberty, the above can be achieved by setting advertise_mtu in
neutron.conf on nodes hosting DHCP agents.
- You should set [ml2] segment_mtu on controller nodes to the MTU value of
the underlying physical networks. After that, DHCP agents will advertise
the correct MTU for all networks created after the configuration is applied.
- It won't work in an OVS hybrid setup, where intermediate devices (qbr)
will still have mtu = 1500, which will result in Jumbo frames being
dropped. We have backports to fix this in Liberty at:
https://review.openstack.org/305782 and
https://review.openstack.org/#/c/285710/
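Pulling the Liberty notes above together, the settings might look like this; file locations are illustrative and the MTU value is just an example for an underlying jumbo-frame physical network:

```ini
# neutron.conf on nodes hosting DHCP agents (Liberty; path illustrative)
[DEFAULT]
advertise_mtu = True

# ml2_conf.ini on controller nodes - MTU of the underlying physical network
[ml2]
segment_mtu = 9000
```

As noted above, this only takes effect for networks created after the configuration is applied, and the OVS hybrid caveat still requires the linked backports.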


Ihar



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Sean Dague
On 04/17/2016 11:34 AM, Monty Taylor wrote:
> On 04/17/2016 10:13 AM, Doug Hellmann wrote:
>> I am organizing a summit session for the cross-project track to
>> (re)consider how we manage our list of global dependencies [1].
>> Some of the changes I propose would have a big impact, and so I
>> want to ensure everyone doing packaging work for distros is available
>> for the discussion. Please review the etherpad [2] and pass the
>> information along to colleagues who might be interested.
>>
>> Doug
>>
>> [1]
>> https://www.openstack.org/summit/austin-2016/summit-schedule/events/9473
>> [2] https://etherpad.openstack.org/p/newton-global-requirements
> 
> Sadly the session conflicts with a different one that I'm leading, so I
> cannot be there. That, of course, makes me sad, because I think it's an
> important conversation to have, and I have some strong opinions on the
> topic in both directions.

Whether or not this session gets moved around to accommodate conflicts,
it represents potentially the most disruptive change up for
consideration this cycle, which means it is going to need a
community conversation beyond just the design summit session.

So if you have strong feelings and ideas, why not get them out in email
now? That will help in the framing of the conversation.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Chris Dent

On Mon, 18 Apr 2016, Sean Dague wrote:


So if you have strong feelings and ideas, why not get them out in email
now? That will help in the framing of the conversation.


I won't be at summit and I feel pretty strongly about this topic, so
I'll throw out my comments:

I agree with the basic premise: In the big tent universe co-
installability is holding us back and is a huge cost in terms of spent
energy. In a world where service isolation is desirable and common
(whether by virtualenv, containers, different hosts, etc) targeting an
all-in-one install seems only to serve the purposes of all-in-one rpm-
or deb-based installations.

Many (most?) people won't be doing those kinds of installations. If
all-in-one installations are important to the rpm- and deb-based distributions
then _they_ should be resolving the dependency issues local to their own
infrastructure (or realizing that it is too painful and start
containerizing or otherwise as well).

I think making these changes will help to improve and strengthen the
boundaries and contracts between services. If not technically then
at least socially, in the sense that the negotiations that people
make to get things to work are about what actually matters in their
services, not unwinding python dependencies and the like.

A lot of the basics of getting this to work are already in place
in devstack. One challenge I've run into in the past is when devstack
plugin A has made an assumption about having access to a python
script provided by devstack plugin B, but it's not on $PATH or its
dependencies are not in the site-packages visible to the current
context. The solution here is to use full paths _into_ virtualenvs.

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [Tricircle] Error runnig py27

2016-04-18 Thread Khayam Gondal
Sorry, the email went to spam; I just saw it. Here is the link:
http://paste.openstack.org/show/494403/

On Mon, Apr 18, 2016 at 8:41 AM, joehuang  wrote:

> Hi, Khayam,
>
>
>
> Could you paste your whole test-case file and the source code to be
> tested to http://paste.openstack.org/ separately.
>
>
>
> Please share the links, so that to find out what happened.
>
>
>
> Best Regards
>
> Chaoyi Huang ( Joe Huang )
>
>
>
> *From:* Khayam Gondal [mailto:khayam.gon...@gmail.com]
> *Sent:* Thursday, April 14, 2016 5:30 PM
> *To:* joehuang
> *Cc:* OpenStack Development Mailing List (not for usage questions);
> Zhiyuan Cai
> *Subject:* Re: [Tricircle] Error runnig py27
>
>
>
> Hi joehuang,
>
> By removing self, it shows the following error.
>
>
>
>   File
> "/home/khayam/tricircle/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
> line 1206, in _importer
>
> thing = __import__(import_path)
>
> ImportError: No module named app
>
> Ran 135 tests in 2.994s (+ 0.940s)
>
>
>
>
>
> On Thu, Apr 14, 2016 at 5:53 AM, joehuang  wrote:
>
> Hi, Khayam,
>
>
>
> @mock.patch('self.app.post_json')
>
>
>
> No “self.” needed.
>
>
>
> Best Regards
>
> Chaoyi Huang ( Joe Huang )
>
>
>
> *From:* Khayam Gondal [mailto:khayam.gon...@gmail.com]
> *Sent:* Wednesday, April 13, 2016 2:50 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* joehuang; Zhiyuan Cai
> *Subject:* [Tricircle] Error runnig py27
>
>
>
> Hi, I am writing a test for an exception. The following is my test function.
>
> @mock.patch('self.app.post_json')
> def test_post_exp(self, mock_get, mock_http_error_handler):
>     mock_response = mock.Mock()
>     mock_response.raise_for_status.side_effect = db_exc.DBDuplicateEntry
>     mock_get.return_value = mock_response
>     mock_http_error_handler.side_effect = db_exc.DBDuplicateEntry
>     with self.assertRaise(db_exc.DBDuplicateEntry):
>         self.app.post_json(
>             '/v1.0/pods',
>             dict(pod=None),
>             expect_errors=True)
>
> But when I run tox -epy27 it shows:
>
>   File
> "/home/khayam/tricircle/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
> line 1206, in _importer
>
>     thing = __import__(import_path)
>
> ImportError: No module named self
>
> Can someone guide me on what's wrong here? I have already installed the
> latest versions of mock and python-dev.
>
>
>
>
>
>
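For what it's worth, the failure mode in this thread can be reproduced and avoided in isolation: mock.patch() resolves its target string as an import path, which is why 'self.app.post_json' produces "ImportError: No module named self". Patching the attribute on the live object with mock.patch.object() avoids the import lookup entirely. A self-contained sketch, where the App class and RuntimeError stand in for the real test app and oslo.db's DBDuplicateEntry:

```python
# Sketch: mock.patch() takes an *import path* string, so patching
# 'self.app.post_json' fails ("No module named self"). For an attribute
# on an object you already hold, use mock.patch.object() instead.
# App and RuntimeError are stand-ins for the real app and exception.
from unittest import mock


class App(object):
    def post_json(self, url, body, expect_errors=False):
        return 'real response'


app = App()

with mock.patch.object(app, 'post_json') as mock_post:
    # Configure the patched method to raise, like the original test
    # intended with DBDuplicateEntry.
    mock_post.side_effect = RuntimeError('DBDuplicateEntry stand-in')
    try:
        app.post_json('/v1.0/pods', dict(pod=None), expect_errors=True)
        raised = False
    except RuntimeError:
        raised = True

print(raised)  # True
```

Inside a unittest.TestCase this would typically be written with self.assertRaises() (note the trailing s) around the patched call.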
>


Re: [openstack-dev] [TripleO][CI] Tempest sources for testing tripleo in CI environment

2016-04-18 Thread Sagi Shnaidman
To make all the advantages and disadvantages clear, I've created a doc:
https://docs.google.com/document/d/1HmY-I8OzoJt0SzLzs79hCa1smKGltb-byrJOkKKGXII/edit?usp=sharing

Please comment.

On Sun, Apr 17, 2016 at 12:14 PM, Sagi Shnaidman 
wrote:

>
> Hi,
>
> John raised the issue of where we should take the tempest sources from.
> I'm not sure, so I'm bringing it up for wider discussion.
>
> Right now I use tempest from the delorean packages. In comparison with
> the original tempest I don't see any difference in the tests, only
> additional configuration scripts:
> https://github.com/openstack/tempest/compare/master...redhat-openstack:master
> It's worth mentioning that in the case of the delorean tempest the
> configuration scripts fit the tempest test configuration, whereas in the
> case of the original tempest repo we would have to change and maintain
> them to match a very dynamic configuration.
>
> So, do we need to use pure upstream tempest from the current source and
> maintain the configuration scripts, or can we use the delorean packages
> and not duplicate the effort of the test teams?
>
> Thanks
> --
> Best regards
> Sagi Shnaidman
>



-- 
Best regards
Sagi Shnaidman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Sean Dague
On 04/18/2016 08:22 AM, Chris Dent wrote:
> On Mon, 18 Apr 2016, Sean Dague wrote:
> 
>> So if you have strong feelings and ideas, why not get them out in email
>> now? That will help in the framing of the conversation.
> 
> I won't be at summit and I feel pretty strongly about this topic, so
> I'll throw out my comments:
> 
> I agree with the basic premise: In the big tent universe co-
> installability is holding us back and is a huge cost in terms of spent
> energy. In a world where service isolation is desirable and common
> (whether by virtualenv, containers, different hosts, etc) targeting an
> all-in-one install seems only to serve the purposes of all-in-one rpm-
> or deb-based installations.
> 
> Many (most?) people won't be doing those kinds of installations. If all-in-
> one installations are important to the rpm- and deb- based distributions
> then _they_ should be resolving the dependency issues local to their own
> infrastructure (or realizing that it is too painful and start
> containerizing or otherwise as well).
> 
> I think making these changes will help to improve and strengthen the
> boundaries and contracts between services. If not technically then
> at least socially, in the sense that the negotiations that people
> make to get things to work are about what actually matters in their
> services, not unwinding python dependencies and the like.
> 
> A lot of the basics of getting this to work are already in place in
> devstack. One challenge I've run into in the past is when devstack
> plugin A has made an assumption about having access to a python
> script provided by devstack plugin B, but it's not on $PATH or its
> dependencies are not in the site-packages visible to the current
> context. The solution here is to use full paths _into_ virtualenvs.

As Chris said, doing virtualenvs on the Devstack side for services is
pretty much there. The team looked at doing this last year, then stopped
due to operator feedback.

One of the things that gets a little weird (when using devstack for
development) is if you actually want to see the impact of library
changes on the environment. As you'll need to make sure you loop and
install those libraries into every venv where they are used. This
forward reference doesn't really exist. So some tooling there will be
needed.

Middleware that's pushed from one project into another (like Ceilometer
-> Swift) is also a funny edge case that I think get funnier here.

Those are mostly implementation details, that probably have work
arounds, but would need people on them.


From a strategic perspective this would basically make traditional Linux
Packaging of OpenStack a lot harder. That might be the right call,
because traditional Linux Packaging definitely suffers from the fact
that everything on a host needs to be upgraded at the same time. For
large installs of OpenStack (especially public cloud cases) traditional
packages are definitely less used.

However Linux Packaging is how a lot of people get exposed to software.
The power of onboarding with apt-get / yum install is a big one.

I've been through the ups and downs of both approaches so many times now
in my own head, I no longer have a strong preference beyond the fact
that we do one approach today, and doing a different one is effort to
make the transition.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Tempest sources for testing tripleo in CI environment

2016-04-18 Thread Wesley Hayutin
On Mon, Apr 18, 2016 at 8:36 AM, Sagi Shnaidman  wrote:

> To make all the advantages and disadvantages clear, I've created a doc:
>
> https://docs.google.com/document/d/1HmY-I8OzoJt0SzLzs79hCa1smKGltb-byrJOkKKGXII/edit?usp=sharing
>
> Please comment.
>
> On Sun, Apr 17, 2016 at 12:14 PM, Sagi Shnaidman 
> wrote:
>
>>
>> Hi,
>>
>> John raised the issue of where we should take tempest sources from.
>> I'm not sure, so I'm bringing it to a wider discussion.
>>
>> Right now I use tempest from delorean packages. Compared with the
>> original tempest I don't see any difference in the tests, only additional
>> configuration scripts:
>> https://github.com/openstack/tempest/compare/master...redhat-openstack:master
>> It's worth mentioning that with the delorean tempest the configuration
>> scripts match the tempest test configuration, whereas with the original
>> tempest repo they would have to be changed and maintained to track a very
>> dynamic configuration.
>>
>> So, do we need to use pure upstream tempest from the current source and
>> maintain the configuration scripts ourselves, or can we use the packaged
>> one from delorean and avoid duplicating the test teams' effort?
>>
>> Thanks
>> --
>> Best regards
>> Sagi Shnaidman
>>
>
>
>
> --
> Best regards
> Sagi Shnaidman
>


Hrm..  can't we use upstream tempest and the midstream configure script
that will be checked into tripleo upstream repos?
I don't think we need to make the choice you are proposing.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #79

2016-04-18 Thread Emilien Macchi
Hi,

We'll have our weekly meeting tomorrow at 3pm UTC on
#openstack-meeting4.

https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack

As usual, feel free to bring topics in this etherpad:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160419

We'll also have open discussion for bugs & reviews, so anyone is welcome
to join.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Openstack rpm build

2016-04-18 Thread Andreas Jaeger
On 2016-04-18 12:01, Qiming Teng wrote:
> On Mon, Apr 18, 2016 at 06:05:41AM +0200, Andreas Jaeger wrote:
>> On 04/18/2016 05:30 AM, Kenny Ji-work wrote:
>>> Hi all,
>>>
>>> In our development environment, we want to build OpenStack rpms
>>> ourselves. When running 'python setup.py bdist_rpm', some files are not
>>> packaged in. Are there tools or methods for packaging OpenStack modules
>>> into rpms? Thank you for answering!
>>
>>
>> There's the RPM packaging team that creates spec files for all projects,
>> check out their repo,
>>
> 
> Where is the repo? Thanks.

Landing page of the team:

https://wiki.openstack.org/wiki/Rpm-packaging

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-18 Thread Daniel P. Berrange
There have been threads in the past about the slowness of the "openstack"
client tool such as this one by Sean last year:

  http://lists.openstack.org/pipermail/openstack-dev/2015-April/061317.html

Sean mentioned a 1.5s fixed overhead on openstack client, and mentions it
is significantly slower than the equivalent nova command. In my testing
I don't see any real speed difference between openstack & nova client
programs, so maybe that differential has been addressed since Sean's
original thread, or maybe nova has got slower.

Overall though, I find it is way too sluggish considering it is running
on a local machine with 12 cpus and 30 GB of RAM.

I had a quick go at trying to profile the tools with cprofile and analyse
with KCacheGrind as per this blog:

  
https://julien.danjou.info/blog/2015/guide-to-python-profiling-cprofile-concrete-case-carbonara

And notice that in profiling 'nova help' for example, the big sink appears
to come from the 'pkg_resources' module and its use of pyparsing. I didn't
spend any real time digging into this in detail, because it got me wondering
whether we can easily avoid the big startup penalty by not having to
start up a new Python interpreter for each command we run.

I traced devstack and saw it run 'openstack' and 'neutron' commands approx
140 times in my particular configuration. If each one of those has a 1.5s
overhead, we could potentially save 3 & 1/2 minutes off devstack execution
time.

So as a proof of concept I have created an 'openstack-server' command
which listens on a unix socket for requests and then invokes the
OpenStackShell.run / OpenStackComputeShell.main / NeutronShell.run
methods as appropriate.
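For reference, the receiving side of this length-prefixed JSON framing could look roughly like the sketch below. This is an illustrative reconstruction, not the actual attachment (Python 3 here for brevity); the dispatch table standing in for the OpenStackShell.run / OpenStackComputeShell.main / NeutronShell.run entry points is an assumption:

```python
import json
import os
import socket


def send_msg(conn, doc):
    # Frame: "<length>\n" header followed by that many bytes of JSON,
    # mirroring the send()/recv() pair in the attached client script.
    payload = json.dumps(doc).encode("utf-8")
    conn.sendall(("%d\n" % len(payload)).encode("ascii"))
    conn.sendall(payload)


def recv_msg(conn):
    # Read the header one byte at a time until the newline terminator.
    header = b""
    while not header.endswith(b"\n"):
        ch = conn.recv(1)
        if not ch:
            raise EOFError("peer closed connection")
        header += ch
    remaining = int(header.decode("ascii").strip())
    chunks = []
    while remaining > 0:
        chunk = conn.recv(remaining)
        if not chunk:
            raise EOFError("peer closed connection")
        chunks.append(chunk)
        remaining -= len(chunk)
    return json.loads(b"".join(chunks).decode("utf-8"))


def serve(dispatch, path="/tmp/openstack.sock"):
    # dispatch maps an app name ("openstack", "nova", "neutron") to a
    # callable taking (argv, env) and returning an exit status.
    if os.path.exists(path):
        os.unlink(path)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(path)
    server.listen(5)
    while True:
        conn, _ = server.accept()
        try:
            cmd = recv_msg(conn)
            handler = dispatch[cmd["app"]]
            status = handler(cmd.get("argv", []), cmd.get("env", {}))
            send_msg(conn, {"status": status})
        finally:
            conn.close()
```

The point of the pattern is that pkg_resources scanning and all the client imports happen once, when the server starts, instead of once per command.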

I then replaced the 'openstack', 'nova' and 'neutron' commands with
versions that simply call to the 'openstack-server' service over the
UNIX socket. Since devstack will always recreate these commands in
/usr/bin, I simply put my replacements in $HOME/bin and then made
sure $HOME/bin was first in the $PATH

You might call this 'command line as a service' :-)

Anyhow, with my devstack setup a traditional install takes

  real  21m34.050s
  user  7m8.649s
  sys   1m57.865s

And when using openstack-server it only takes

  real  17m47.059s
  user  3m51.087s
  sys   1m42.428s

So that has cut 18% off the total running time for devstack, which
is quite considerable really.

I'm attaching the openstack-server & replacement openstack commands
so you can see what I did. You have to manually run the openstack-server
command ahead of time and it'll print out details of every command run
on stdout.

Anyway, I'm not personally planning to take this experiment any further.
I'll probably keep using this wrapper in my own local dev env since it
does cut down on devstack time significantly. This mail is just to see
if it'll stimulate any interesting discussion or motivate someone to
explore things further.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
#!/usr/bin/python

import socket
import sys
import os
import os.path
import json

server_address = "/tmp/openstack.sock"

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)

try:
sock.connect(server_address)
except socket.error, msg:
print >>sys.stderr, msg
sys.exit(1)


def send(sock, doc):
jdoc = json.dumps(doc)
sock.send('%d\n' % len(jdoc))
sock.sendall(jdoc)

def recv(sock):
length_str = ''

char = sock.recv(1)
if len(char) == 0:
print >>sys.stderr, "Unexpected end of file"
sys.exit(1)

while char != '\n':
length_str += char
char = sock.recv(1)
if len(char) == 0:
print >>sys.stderr, "Unexpected end of file"
sys.exit(1)

total = int(length_str)

# use a memoryview to receive the data chunk by chunk efficiently
jdoc = memoryview(bytearray(total))
next_offset = 0
while total - next_offset > 0:
recv_size = sock.recv_into(jdoc[next_offset:], total - next_offset)
next_offset += recv_size
try:
doc = json.loads(jdoc.tobytes())
except (TypeError, ValueError), e:
raise Exception('Data received was not in JSON format')
return doc

try:
env = {}
passenv = ["CINDER_VERSION",
   "OS_AUTH_URL",
   "OS_IDENTITY_API_VERSION",
   "OS_NO_CACHE",
   "OS_PASSWORD",
   "OS_PROJECT_NAME",
   "OS_REGION_NAME",
   "OS_TENANT_NAME",
   "OS_USERNAME",
   "OS_VOLUME_API_VERSION"]
for name in passenv:
if name in os.environ:
env[name] = os.environ[name]

cmd = {
"app": os.path.basename(sys.argv[0]),
"env": env,
"argv": sys.arg

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Hayes, Graham
On 18/04/2016 13:51, Sean Dague wrote:
> On 04/18/2016 08:22 AM, Chris Dent wrote:
>> On Mon, 18 Apr 2016, Sean Dague wrote:
>>
>>> So if you have strong feelings and ideas, why not get them out in email
>>> now? That will help in the framing of the conversation.
>>
>> I won't be at summit and I feel pretty strongly about this topic, so
>> I'll throw out my comments:
>>
>> I agree with the basic premise: In the big tent universe co-
>> installability is holding us back and is a huge cost in terms of spent
>> energy. In a world where service isolation is desirable and common
>> (whether by virtualenv, containers, different hosts, etc) targeting an
>> all-in-one install seems only to serve the purposes of all-in-one rpm-
>> or deb-based installations.
>>
>> Many (most?) people won't be doing those kinds of installations. If all-in-
>> one installations are important to the rpm- and deb- based distributions
>> then _they_ should be resolving the dependency issues local to their own
>> infrastructure (or realizing that it is too painful and start
>> containerizing or otherwise as well).
>>
>> I think making these changes will help to improve and strengthen the
>> boundaries and contracts between services. If not technically then
>> at least socially, in the sense that the negotiations that people
>> make to get things to work are about what actually matters in their
>> services, not unwinding python dependencies and the like.
>>
>> A lot of the basics of getting this to work are already in place in
>> devstack. One challenge I've run into in the past is when devstack
>> plugin A has made an assumption about having access to a python
>> script provided by devstack plugin B, but it's not on $PATH or its
>> dependencies are not in the site-packages visible to the current
>> context. The solution here is to use full paths _into_ virtualenvs.
>
> As Chris said, doing virtualenvs on the Devstack side for services is
> pretty much there. The team looked at doing this last year, then stopped
> due to operator feedback.
>
> One of the things that gets a little weird (when using devstack for
> development) is if you actually want to see the impact of library
> changes on the environment. As you'll need to make sure you loop and
> install those libraries into every venv where they are used. This
> forward reference doesn't really exist. So some tooling there will be
> needed.
>
> Middleware that's pushed from one project into another (like Ceilometer
> -> Swift) is also a funny edge case that I think get funnier here.
>
> Those are mostly implementation details, that probably have work
> arounds, but would need people on them.
>
>
>  From a strategic perspective this would basically make traditional Linux
> Packaging of OpenStack a lot harder. That might be the right call,
> because traditional Linux Packaging definitely suffers from the fact
> that everything on a host needs to be upgraded at the same time. For
> large installs of OpenStack (especially public cloud cases) traditional
> packages are definitely less used.
>
> However Linux Packaging is how a lot of people get exposed to software.
> The power of onboarding with apt-get / yum install is a big one.
>
> I've been through the ups and downs of both approaches so many times now
> in my own head, I no longer have a strong preference beyond the fact
> that we do one approach today, and doing a different one is effort to
> make the transition.
>
>   -Sean
>

It is also worth noting that according to the OpenStack User Survey [0]
56% of deployments use "Unmodifed packages from the operating system".

Granted it was a small sample size (302 responses to that question)
but it is worth keeping this in mind as we talk about moving the burden
to packagers.

0 - 
https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf (page 
36)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-04-18 Thread Evgeniy L
>> Btw, one of the ideas was to use Fuel task capabilities to gather
diagnostic snapshot.

I think such tools should rely on as little of the existing infrastructure
as possible, because if something goes wrong you should still be able to
easily get diagnostic information, even with RabbitMQ, Astute and
MCollective broken.

Thanks,


On Mon, Apr 18, 2016 at 2:26 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Colleagues,
>
> Whether we are going to continue using Shotgun or
> substitute it with something else, we still need to
> decouple it from Fuel because Shotgun is a generic
> tool. Please review these [1], [2].
>
> [1] https://review.openstack.org/#/c/298603
> [2] https://review.openstack.org/#/c/298615
>
>
> Btw, one of the ideas was to use Fuel task capabilities
> to gather diagnostic snapshot.
>
> Vladimir Kozhukalov
>
> On Thu, Mar 31, 2016 at 1:32 PM, Evgeniy L  wrote:
>
>> Hi,
>>
>> Problems which I see with current Shotgun are:
>> 1. Lack of parallelism, so it's not going to fetch data fast enough from
>> medium/big clouds.
>> 2. There should be an easy way to run it manually (it's possible, but
>> there is no ready-to-use config), it would be really helpful in case if
>> Nailgun/Astute/MCollective are down.
>>
>> As far as I know the 1st is partly covered by Ansible, but the problem is it
>> executes a single task in parallel, so there is a probability that a lagging
>> node will slow down fetching from the entire environment.
>> Also we will have to build a tool around Ansible to generate playbooks.
>>
>> Thanks,
>>
>> On Wed, Mar 30, 2016 at 5:18 PM, Tomasz 'Zen' Napierala <
>> tnapier...@mirantis.com> wrote:
>>
>>> Hi,
>>>
>>> Do we have any requirements for the new tool? Do we know what we don’t
>>> like about current implementation, what should be avoided, etc.? Before
>>> that we can only speculate.
>>> From my ops experience, shotgun-like tools will not work conveniently on
>>> medium to big environments. Even on a medium env the amount of logs is
>>> just too huge for such a simple tool to handle. In such environments a
>>> better pattern is to use a dedicated log collection / analysis tool, just
>>> like StackLight.
>>> On the other hand I’m not sure if ansible is the right tool for that. It
>>> has some features (like ‘fetch’ command) but in general it’s a
>>> configuration management tool, and I’m not sure how it would act under such
>>> heavy load.
>>>
>>> Regards,
>>>
>>> > On 30 Mar 2016, at 15:20, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>> >
>>> > ​Igor,
>>> >
>>> > I can not agree more. Wherever possible we should
>>> > use existent mature solutions. Ansible is really
>>> > convenient and well known solution, let's try to
>>> > use it.
>>> >
>>> > Yet another thing should be taken into account.
>>> > One of Shotgun features is diagnostic report
>>> > that could then be attached to bugs to identify
>>> > the content of env. This report could also be
>>> > used to reproduce env and then fight a bug.
>>> > I'd like we to have this kind of report.
>>> > Is it possible to implement such a feature
>>> > using Ansible? If yes, then let's switch to Ansible
>>> > as soon as possible.
>>> >
>>> > ​
>>> >
>>> > Vladimir Kozhukalov
>>> >
>>> > On Wed, Mar 30, 2016 at 3:31 PM, Igor Kalnitsky <
>>> ikalnit...@mirantis.com> wrote:
>>> > Neil Jerram wrote:
>>> > > But isn't Ansible also over-complicated for just running commands
>>> over SSH?
>>> >
>>> > It may not be so "simple" to ignore that. Ansible has a lot of modules
>>> > which might be very helpful. For instance, Shotgun makes a database
>>> > dump and there're Ansible modules with the same functionality [1].
>>> >
>>> > Don't think I advocate Ansible as a replacement. My point is, let's
>>> > think about reusing ready solutions. :)
>>> >
>>> > - igor
>>> >
>>> >
>>> > [1]: http://docs.ansible.com/ansible/list_of_database_modules.html
>>> >
>>> > On Wed, Mar 30, 2016 at 1:14 PM, Neil Jerram <
>>> neil.jer...@metaswitch.com> wrote:
>>> > >
>>> > > FWIW, as a naive bystander:
>>> > >
>>> > > On 30/03/16 11:06, Igor Kalnitsky wrote:
>>> > >> Hey Fuelers,
>>> > >>
>>> > >> I know that you probably wouldn't like to hear that, but in my
>>> opinion
>>> > >> Fuel has to stop using Shotgun. It's nothing more but a command
>>> runner
>>> > >> over SSH. Besides, it has well known issues such as retrieving
>>> remote
>>> > >> directories with broken symlinks inside.
>>> > >
>>> > > It makes sense to me that a command runner over SSH might not need
>>> to be
>>> > > a whole Fuel-specific component.
>>> > >
>>> > >> So I propose to find a modern alternative and reuse it. If we stop
>>> > >> supporting Shotgun, we can spend extra time to focus on more
>>> important
>>> > >> things.
>>> > >>
>>> > >> As an example, we can consider to use Ansible. It should not be
>>> tricky
>>> > >> to generate Ansible playbook instead of generating Shotgun one.
>>> > >> Ansible is a  well known tool for devops and cloud operators, and
>>> they
>>> > >> we will o

Re: [openstack-dev] [Fuel] [Shotgun] Decoupling Shotgun from Fuel

2016-04-18 Thread Igor Kalnitsky
Evgeniy L. wrote:
> I think such tools should rely on as little of the existing infrastructure
> as possible, because if something goes wrong you should still be able to
> easily get diagnostic information, even with RabbitMQ, Astute and
> MCollective broken.

It's a good point indeed! Moreover, troubleshooting scenarios may vary
from case to case, so the tool should be easily extendable and changeable,
so that users can use various (possibly downloaded) scenarios to gather
diagnostic info.

That's why I think Ansible could really be helpful here. Such
scenarios may be distributed as Ansible playbooks.

On Mon, Apr 18, 2016 at 4:25 PM, Evgeniy L  wrote:
>>> Btw, one of the ideas was to use Fuel task capabilities to gather
>>> diagnostic snapshot.
>
> I think such tools should rely on as little of the existing infrastructure
> as possible, because if something goes wrong you should still be able to
> easily get diagnostic information, even with RabbitMQ, Astute and
> MCollective broken.
>
> Thanks,
>
>
> On Mon, Apr 18, 2016 at 2:26 PM, Vladimir Kozhukalov
>  wrote:
>>
>> Colleagues,
>>
>> Whether we are going to continue using Shotgun or
>> substitute it with something else, we still need to
>> decouple it from Fuel because Shotgun is a generic
>> tool. Please review these [1], [2].
>>
>> [1] https://review.openstack.org/#/c/298603
>> [2] https://review.openstack.org/#/c/298615
>>
>>
>> Btw, one of the ideas was to use Fuel task capabilities
>> to gather diagnostic snapshot.
>>
>> Vladimir Kozhukalov
>>
>> On Thu, Mar 31, 2016 at 1:32 PM, Evgeniy L  wrote:
>>>
>>> Hi,
>>>
>>> Problems which I see with current Shotgun are:
>>> 1. Lack of parallelism, so it's not going to fetch data fast enough from
>>> medium/big clouds.
>>> 2. There should be an easy way to run it manually (it's possible, but
>>> there is no ready-to-use config), it would be really helpful in case if
>>> Nailgun/Astute/MCollective are down.
>>>
>>> As far as I know the 1st is partly covered by Ansible, but the problem is it
>>> executes a single task in parallel, so there is a probability that a lagging
>>> node will slow down fetching from the entire environment.
>>> Also we will have to build a tool around Ansible to generate playbooks.
>>>
>>> Thanks,
>>>
>>> On Wed, Mar 30, 2016 at 5:18 PM, Tomasz 'Zen' Napierala
>>>  wrote:

 Hi,

 Do we have any requirements for the new tool? Do we know what we don’t
 like about current implementation, what should be avoided, etc.? Before 
 that
 we can only speculate.
 From my ops experience, shotgun like tools will not work conveniently on
 medium to big environments. Even on medium env amount of logs is just too
 huge to handle by such simple tool. In such environments better pattern is
 to use dedicated log collection / analysis tool, just like StackLight.
 At the other hand I’m not sure if ansible is the right tool for that. It
 has some features (like ‘fetch’ command) but in general it’s a 
 configuration
 management tool, and I’m not sure how it would act under such heavy load.

 Regards,

 > On 30 Mar 2016, at 15:20, Vladimir Kozhukalov
 >  wrote:
 >
 > Igor,
 >
 > I can not agree more. Wherever possible we should
 > use existent mature solutions. Ansible is really
 > convenient and well known solution, let's try to
 > use it.
 >
 > Yet another thing should be taken into account.
 > One of Shotgun features is diagnostic report
 > that could then be attached to bugs to identify
 > the content of env. This report could also be
 > used to reproduce env and then fight a bug.
 > I'd like we to have this kind of report.
 > Is it possible to implement such a feature
 > using Ansible? If yes, then let's switch to Ansible
 > as soon as possible.
 >
 >
 >
 > Vladimir Kozhukalov
 >
 > On Wed, Mar 30, 2016 at 3:31 PM, Igor Kalnitsky
 >  wrote:
 > Neil Jerram wrote:
 > > But isn't Ansible also over-complicated for just running commands
 > > over SSH?
 >
 > It may be not so "simple" to ignore that. Ansible has a lot of modules
 > which might be very helpful. For instance, Shotgun makes a database
 > dump and there're Ansible modules with the same functionality [1].
 >
 > Don't think I advocate Ansible as a replacement. My point is, let's
 > think about reusing ready solutions. :)
 >
 > - igor
 >
 >
 > [1]: http://docs.ansible.com/ansible/list_of_database_modules.html
 >
 > On Wed, Mar 30, 2016 at 1:14 PM, Neil Jerram
 >  wrote:
 > >
 > > FWIW, as a naive bystander:
 > >
 > > On 30/03/16 11:06, Igor Kalnitsky wrote:
 > >> Hey Fuelers,
 > >>
 > >> I know that you probably wouldn't like to hear that, but in my
 > >> opinion
 > >> Fuel has to stop using Shotgun. It's nothing more but a command
 > >> runner
 > >> over SSH. Besides, it has well known 

[openstack-dev] [all] removal of "Using keystoneauth correctly in projects" from cross project schedule

2016-04-18 Thread Sean Dague
After chatting with Monty and Thierry this morning, and trying to figure
out the right way to ensure that enough voices are in the requirements
x-p session, we've decided to do the following:

* remove "Using keystoneauth correctly in projects" from cross project
schedule
* Do a special edition OpenStack Bootstrapping Hour on the topic on Wed
May 11th (details / url to be posted post summit).

That will give us the same content / or better content as was to be on
the schedule, but also record it for future consumption.

Sorry for any inconvenience these last-minute changes may cause. Thanks folks.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] removal of "Using keystoneauth correctly in projects" from cross project schedule

2016-04-18 Thread Morgan Fainberg
On Mon, Apr 18, 2016 at 6:50 AM, Sean Dague  wrote:

> After chatting with Monty and Thierry this morning, and trying to figure
> out the right way to ensure that enough voices are in the requirements
> x-p session, we've decided to do the following:
>
> * remove "Using keystoneauth correctly in projects" from cross project
> schedule
> * Do a special edition OpenStack Bootstrapping Hour on the topic on Wed
> May 11th (details / url to be posted post summit).
>
> That will give us the same content / or better content as was to be on
> the schedule, but also record it for future consumption.
>
> Sorry for any inconvenience these last minute changes provide. Thanks
> folks.
>
> -Sean
>
>
>
Thanks for the heads up Sean! I think this sounds like a good alternative
and will help to ensure a bulk of voices that are needed for the
requirements discussion(s) will be available.

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [watcher] Api and Decision Engine integration - design question

2016-04-18 Thread Vincent FRANÇOISE
Hi Tomasz,

Overall, I don't really have any strong opinion on this. I just think
that if we do it with option 1, it may become quite hard to make Watcher
more scalable in the long run if we need to. That's why I would tend to
choose option 2. Also, it's not so easy for me to evaluate how much work
the DB sync would require compared to what I did for the strategies and
goals (see
https://review.openstack.org/#/c/305965/2/watcher/decision_engine/sync.py),
so take my answer with a grain of salt :)


On 15/04/2016 10:10, Kaczynski, Tomasz wrote:
> Hi guys,
> 
> I’m implementing the Watcher Scoring Module. As part of that, I need to
> expose the information about Scoring Engines through the API/Python CLI.
> 
>  
> 
> The scoring engine list might be quite dynamic. Although the scoring
> engines will be pluggable through the stevedore plug-in model, a single
> plug-in might contain one or more scoring engines. In some scenarios
> this list will be static – a plug-in developer will just expose few
> algorithms and that’s it. But in some other scenarios, the scoring
> engines might be implemented as external web services for example and
> there might be an on-going development process on data models, which
> will result in multiple scoring engines in multiple versions, which
> might change quite frequently (e.g. few times a day).
> 
>  
> 
> Of course, the responsibility for handling all of that is entirely on
> the scoring engine plug-in developer. But it would be good to keep the
> scoring engine abstraction layer clean and simple, hiding all of these
> details.
> 
>  
> 
> And here comes the problem:
> 
> Somehow the dynamic list of scoring engines has to be passed from
> Decision Engine (where the Scoring Engine abstraction layer will be
> sitting) to the Api / CLI. There are currently 2 options on the table
> how this could be done:
> 
>  
> 
> Option 1:
> 
> Allow Api to call Decision Engine directly through existing RPC Api
> (currently using messaging transport).
> 
>  
> 
> Option 2:
> 
> Let Decision Engine keep Scoring Engine information synced in the DB so
> that Watcher Api can simply query for this information as required.
> 
>  
> 
> Pros and cons of each option:
> 
> Option 1:
> 
> - Good: Simpler implementation and no need for keeping DB in sync.
> 
> - Good: No risk of data inconsistency. Nothing is being cached,
> data is always accurate. Decision Engine is a single source of truth.
> 
> - Good: Scoring Engine Plug-in creates a simple stevedore
> plug-in, implements scoring engine classes, implements a factory class
> returning scoring engines and that’s all.
> 
> - Good: Also supports more complicated scenarios with a dynamic
> scoring engine list – encapsulated in the factory class.
> 
> - Bad: Dependency on Decision Engine – it needs to be up and
> running. Can be mitigated by caching the last response from Decision
> Engine – if the DE RPC Api is not responding, the last known data could be
> returned.
> 
> - Bad: Not sure how reliable/performant RPC over messaging
> transport is. Need to test.
> 
> - Bad: Might have scalability issues (I believe there is only
> one Decision Engine instance, please confirm!). But this might be at
> least partially mitigated by caching on the Watcher Api level (e.g. if
> the last data was retrieved less than X minutes ago, no need to query
> Decision Engine). In the context that this information is only used by
> Strategy developers to actually implement strategies using some Scoring
> Engines, it might be perfectly fine to cache data for longer periods of
> time (1 hour or more).
I confirm there is only one DE process whereas the API can have as many
workers as you want (configurable).
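The mitigation sketched above (serve data cached less than X minutes ago, and fall back to the last known response when the Decision Engine RPC is unavailable) could look roughly like this; the names are illustrative, not Watcher's actual API:

```python
import time


class CachedEngineList:
    """Serve a cached scoring engine list, refreshed at most every `ttl` seconds."""

    def __init__(self, fetch, ttl=3600, clock=time.monotonic):
        self._fetch = fetch          # stands in for the RPC call to the DE
        self._ttl = ttl
        self._clock = clock
        self._cached = None
        self._fetched_at = None

    def get(self):
        now = self._clock()
        if self._cached is None or now - self._fetched_at > self._ttl:
            try:
                self._cached = self._fetch()
                self._fetched_at = now
            except Exception:
                if self._cached is None:
                    raise            # nothing stale to fall back on
                # DE down or busy: return the last known data
        return self._cached


calls = []

def fake_rpc():
    calls.append(1)
    return ["engine_a", "engine_b"]

cache = CachedEngineList(fake_rpc, ttl=60)
first = cache.get()
second = cache.get()  # within the TTL: served from cache, no second RPC call
```

In the fallback branch, real code would also log the failure; the injectable clock is there mainly to make the TTL behaviour easy to unit-test.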
> 
> Option 2:
> 
> - Good: Watcher Api decoupled from Decision Engine. Can work
> even if DE is not working or busy.
> 
> - Good: In the case of Watcher this option should scale better.
> Decision Engine typically has only one instance and is not subject to
> horizontal scalability (please confirm my understanding!).
> 
> - Bad: More complicated implementation. For dynamic scenarios
> (adding scoring engines on the fly) it requires some sort of notification
> mechanism, so that the DB will stay in sync. This can be done by exposing
> event handling in the scoring engine abstraction layer, but it’s an
> unnecessary complication for simple cases with static data. It can be
> mitigated by using helper classes that enforce DB sync without actually
> exposing any events in the abstract classes (so if a plug-in needs to sync
> the DB, it calls some helper method, and all others just do nothing).
> 
I agree on the difficulty here, but we can implement this
incrementally so we wouldn't have to handle all these cases straight away.
> - Bad: Potential issues with data consistency. If there is a
> problem or a bug in the sync code, it might be hard to recover from the
> problem without Watcher redeployment.
> 

Re: [openstack-dev] [nova] Launchpad bug spring cleaning day Monday 4/18

2016-04-18 Thread Markus Zoeller
In case the dashboard is not loading, you can use 
* query_inconsistent.py
* query_stale_incomplete.py
from 
https://github.com/markuszoeller/openstack/tree/master/scripts/launchpad

Regards, Markus Zoeller (markus_z)

> From: Matt Riedemann 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 04/05/2016 08:45 PM
> Subject: [openstack-dev] [nova] Launchpad bug spring cleaning day Monday 4/18
> 
> We're going to have a day of just cleaning out the launchpad bugs for 
> Nova on Monday 4/18.
> 
> This isn't a bug squashing day where people are proposing patches and 
> the core team is reviewing them.
> 
> This is purely about cleaning the garbage out of launchpad.
> 
> Markus Zoeller has a nice dashboard we can use. I'd like to specifically
> focus on trimming these two tabs:
> 
> 1. Inconsistent: 
> http://45.55.105.55:8082/bugs-dashboard.html#tabInconsistent (142 bugs 
> today)
> 
> 2. Stale Incomplete: 
> http://45.55.105.55:8082/bugs-dashboard.html#tabIncompleteStale (59 bugs
> today)
> 
> A lot of these are probably duplicates by now, or fixed, or just invalid
> and we should close them out. That's what we'll focus on.
> 
> I'd really like to see solid participation from the core team given the 
> core team should know a lot of what's already fixed or invalid, and 
> being part of the core team is more than just reviewing code, it's also 
> making sure our bug backlog is reasonably sane.
> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] L3 HA testing on scale

2016-04-18 Thread Anna Kamyshnikova
Hi guys!

As a developer I use Devstack or multinode OpenStack installation (4-5
nodes) for work, but these are "abstract" environments, where you are not
able to perform some scenarios as your machine is not powerful enough. But
it is really important to understand the issues that real deployments have.

Recently I've performed testing of L3 HA on a scale environment of 49 nodes
(3 controllers, 46 computes) on Fuel 8.0. On this environment I ran shaker and
rally tests and also performed some manual destructive scenarios. I think
that it is very important to share these results. Ideally, I think that
we should collect statistics for different configurations each release to
compare and check them, to make sure that we are heading the right way.

The results of the shaker and rally tests are at [1]. I put a detailed report
in a google doc [2]. I would appreciate all comments on these results.

[1] - http://akamyshnikova.github.io/neutron-benchmark-results/
[2] -
https://docs.google.com/a/mirantis.com/document/d/1TFEUzRRlRIt2HpsOzFh-RqWwgTzJPBefePPA0f0x9uw/edit?usp=sharing

Regards,
Ann Kamyshnikova
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Brant Knudson
On Fri, Apr 15, 2016 at 9:04 PM, Adam Young  wrote:

> We all want Fernet to be a reality.  We ain't there yet (Except for mfish
> who has no patience) but we are getting closer.  The goal is to get Fernet
> as the default token provider as soon as possible. The review to do this
> has uncovered a few details that need to be fixed before we can do this.
>
> Trusts for V2 tokens were not working correctly.  Relatively easy fix.
> https://review.openstack.org/#/c/278693/ Patch is still failing on Python
> 3.  The tests are kindof racy due to the revocation event 1 second
> granularity.  Some of the tests here have a sleep(1) in them still, but
> all should be using the time control aspect of the unit test fixtures.
>
> Some of the tests also use the same user to validate a token as that have,
> for example, a role unassigned.  These expose a problem that the revocation
> events are catching too many tokens, some of which should not be treated as
> revoked.
>
> Also, some of the logic for revocation checking has to change. Before, if
> a user had two roles, and had one removed, the token would be revoked.
> Now, however, the token will validate successfully, but the response will
> only have the single assigned role in it.
>
>
> Python 3 tests are failing because the Fernet formatter is insisting that
> all project-ids be valid UUIDs, but some of the old tests have "FOO" and
> "BAR" as ids.  These either need to be converted to UUIDS, or the formatter
> needs to be more forgiving.
>
> Caching of token validations was messing with revocation checking. Tokens
> that were valid once were being reported as always valid. Thus, the current
> review  removes all caching on token validations, a change we cannot
> maintain.  Once all the test are successfully passing, we will re-introduce
> the cache, and be far more aggressive about cache invalidation.
>
> Tempest tests are currently failing due to Devstack not properly
> identifying Fernet as the default token provider, and creating the Fernet
> key repository.  I'm tempted to just force devstack to always create the
> directory, as a user would need it if they ever switched the token provider
> post launch anyway.
>
>
There's a review to change devstack to default to fernet:
https://review.openstack.org/#/c/195780/ . This was mostly to show that
tempest still passes with fernet configured. It uncovered a couple of test
issues (similar in nature to the revocation checking issues mentioned in
the original note) that have since been fixed.

We'd prefer to not have devstack overriding config options and instead use
keystone's defaults. The problem is if fernet is the default in keystone
then it won't work out of the box since the key database won't exist. One
option that I think we should investigate is to have keystone create the
key database on startup if it doesn't exist.

- Brant
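A much-simplified sketch of the "create the key database on startup if it doesn't exist" idea follows; the real `keystone-manage fernet_setup` also handles ownership, multiple key indexes and rotation, so treat the function below as an illustration only:

```python
import base64
import os


def ensure_key_repository(repo_dir):
    """Create a Fernet key repository with an initial primary key if missing."""
    if os.path.isdir(repo_dir) and os.listdir(repo_dir):
        return False                     # already initialized, nothing to do
    os.makedirs(repo_dir, mode=0o700, exist_ok=True)
    # A Fernet key is 32 random bytes, urlsafe-base64 encoded.
    key = base64.urlsafe_b64encode(os.urandom(32))
    path = os.path.join(repo_dir, "0")   # key files are named by index
    with open(path, "wb") as f:
        f.write(key)
    os.chmod(path, 0o600)
    return True
```

Called unconditionally at startup this is idempotent, which is also why always creating the directory in devstack, as suggested above, would be harmless.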
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Adding release notes to changes

2016-04-18 Thread Peter Stachowski
Right, we are still waiting on the final +2/+1 for that  ;)

See https://review.openstack.org/#/c/306018/

Thanks!
Peter

-Original Message-
From: Andreas Jaeger [mailto:a...@suse.com] 
Sent: April-18-16 2:32 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Adding release notes to changes

On 2016-04-17 19:04, Amrith Kumar wrote:
>> -Original Message-
>> > From: Andreas Jaeger [mailto:a...@suse.com]
>> > Sent: Sunday, April 17, 2016 4:31 AM
>> > To: OpenStack Development Mailing List (not for usage questions) 
>> > 
>> > Subject: Re: [openstack-dev] [trove] Adding release notes to 
>> > changes
>> > 
>> > On 04/16/2016 05:07 PM, Amrith Kumar wrote:
>>> > > Folks,
>>> > >
>>> > > We are now using reno[1] for release notes in trove, 
>>> > > trove-dashboard, and python-troveclient.
>> > 
>> > Note that the trove-dashboard changes are not published and tested 
>> > at all, you do not have set it up in project-config yet,
> [amrith] Oink? I thought this was dealt with in 
> https://review.openstack.org/#/c/306012/
> 
> Yes, there are currently no release notes for trove-dashboard, and I know
> that the release notes jobs need to be added in zuul etc. Am I missing
> something else?
> 

That's all I wanted to point out - add it to project-config repo.
Without that, you have no job to test and publish the release notes,

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] SR-IOV IRC meeting and sub-team - passing the torch

2016-04-18 Thread Nikola Đipanov
Hi team,

As I'll be focusing on different things going forward, I was wondering
if someone from the group of people who were normally working in this
area would want to step up and take over the sub-team IRC meeting.

It is normally not a massive overhead, mostly tracking ongoing efforts
and patches and making sure there's a list of things that need reviews,
so don't be shy :)

Cheers,
Nikola

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Removal of in progress console log access

2016-04-18 Thread James E. Blair
Sean Dague  writes:

> Bummer. This gets used a lot to figure out the state of things given that
> zuul links to the console even after the job is complete. Changing that
> to the log server link would mitigate the blind spot.

Yeah, we know it's important, which is why we're working on getting it
back, but will take a little bit of time.  In the interim, rather than
linking to a dead URL, I removed the links from the status page
altogether.  However, if it would be better overall to link to the log
server (which will result in 404s until the logs are actually uploaded
at the end of the job), we could probably do that instead.  I'm sure
we'll get questions, but we could probably put a banner at the top of
the page and we may get slightly fewer of them.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Matt Fischer
On Mon, Apr 18, 2016 at 8:29 AM, Brant Knudson  wrote:

>
>
> On Fri, Apr 15, 2016 at 9:04 PM, Adam Young  wrote:
>
>> We all want Fernet to be a reality.  We ain't there yet (Except for mfish
>> who has no patience) but we are getting closer.  The goal is to get Fernet
>> as the default token provider as soon as possible. The review to do this
>> has uncovered a few details that need to be fixed before we can do this.
>>
>> Trusts for V2 tokens were not working correctly.  Relatively easy fix.
>> https://review.openstack.org/#/c/278693/ Patch is still failing on
>> Python 3.  The tests are kindof racy due to the revocation event 1 second
>> granularity.  Some of the tests here have A sleep (1) in them still, but
>> all should be using the time control aspect of the unit test fixtures.
>>
>> Some of the tests also use the same user to validate a token as that
>> have, for example, a role unassigned.  These expose a problem that the
>> revocation events are catching too many tokens, some of which should not be
>> treated as revoked.
>>
>> Also, some of the logic for revocation checking has to change. Before, if
>> a user had two roles, and had one removed, the token would be revoked.
>> Now, however, the token will validate successful, but the response will
>> only have the single assigned role in it.
>>
>>
>> Python 3 tests are failing because the Fernet formatter is insisting that
>> all project-ids be valid UUIDs, but some of the old tests have "FOO" and
>> "BAR" as ids.  These either need to be converted to UUIDS, or the formatter
>> needs to be more forgiving.
>>
>> Caching of token validations was messing with revocation checking. Tokens
>> that were valid once were being reported as always valid. Thus, the current
>> review  removes all caching on token validations, a change we cannot
>> maintain.  Once all the test are successfully passing, we will re-introduce
>> the cache, and be far more aggressive about cache invalidation.
>>
>> Tempest tests are currently failing due to Devstack not properly
>> identifying Fernet as the default token provider, and creating the Fernet
>> key repository.  I'm tempted to just force devstack to always create the
>> directory, as a user would need it if they ever switched the token provider
>> post launch anyway.
>>
>>
> There's a review to change devstack to default to fernet:
> https://review.openstack.org/#/c/195780/ . This was mostly to show that
> tempest still passes with fernet configured. It uncovered a couple of test
> issues (similar in nature to the revocation checking issues mentioned in
> the original note) that have since been fixed.
>
> We'd prefer to not have devstack overriding config options and instead use
> keystone's defaults. The problem is if fernet is the default in keystone
> then it won't work out of the box since the key database won't exist. One
> option that I think we should investigate is to have keystone create the
> key database on startup if it doesn't exist.
>
> - Brant
>
>

I'm not a devstack user, but as I mentioned before, I assume devstack
called keystone-manage db_sync? Why couldn't it also call keystone-manage
fernet_setup?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Removal of in progress console log access

2016-04-18 Thread Sean Dague
On 04/18/2016 11:22 AM, James E. Blair wrote:
> Sean Dague  writes:
> 
>> Bummer. This gets used a lot to figure out the state of things given that
>> zuul links to the console even after the job is complete. Changing that
>> to the log server link would mitigate the blind spot.
> 
> Yeah, we know it's important, which is why we're working on getting it
> back, but will take a little bit of time.  In the interim, rather than
> linking to a dead URL, I removed the links from the status page
> altogether.  However, if it would be better overall to link to the log
> server (which will result in 404s until the logs are actually uploaded
> at the end of the job), we could probably do that instead.  I'm sure
> we'll get questions, but we could probably put a banner at the top of
> the page and we may get slightly fewer of them.

The links could be added only after the individual test run completes.
That would mean no 404s, right? But it would allow link access once there
are results to be seen.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Matthew Thode
On 04/18/2016 08:24 AM, Hayes, Graham wrote:
> On 18/04/2016 13:51, Sean Dague wrote:
>> On 04/18/2016 08:22 AM, Chris Dent wrote:
>>> On Mon, 18 Apr 2016, Sean Dague wrote:
>>>
 So if you have strong feelings and ideas, why not get them out in email
 now? That will help in the framing of the conversation.
>>>
>>> I won't be at summit and I feel pretty strongly about this topic, so
>>> I'll throw out my comments:
>>>
>>> I agree with the basic premise: In the big tent universe co-
>>> installability is holding us back and is a huge cost in terms of spent
>>> energy. In a world where service isolation is desirable and common
>>> (whether by virtualenv, containers, different hosts, etc) targeting an
>>> all-in-one install seems only to serve the purposes of all-in-one rpm-
>>> or deb-based installations.
>>>
>>> Many (most?) people won't be doing those kinds of installations. If
>>> all-in-one installations are important to the rpm- and deb-based distributions
>>> then _they_ should be resolving the dependency issues local to their own
>>> infrastructure (or realizing that it is too painful and start
>>> containerizing or otherwise as well).
>>>
>>> I think making these changes will help to improve and strengthen the
>>> boundaries and contracts between services. If not technically then
>>> at least socially, in the sense that the negotiations that people
>>> make to get things to work are about what actually matters in their
>>> services, not unwinding python dependencies and the like.
>>>
>>> A lot of the basics of getting this to work are already in place in
>>> devstack. One challenge I've run into the past is when devstack
>>> plugin A has made an assumption about having access to a python
>>> script provided by devstack plugin B, but it's not on $PATH or its
>>> dependencies are not in the site-packages visible to the current
>>> context. The solution here is to use full paths _into_ virtenvs.
>>
>> As Chris said, doing virtualenvs on the Devstack side for services is
>> pretty much there. The team looked at doing this last year, then stopped
>> due to operator feedback.
>>
>> One of the things that gets a little weird (when using devstack for
>> development) is if you actually want to see the impact of library
>> changes on the environment. As you'll need to make sure you loop and
>> install those libraries into every venv where they are used. This
>> forward reference doesn't really exist. So some tooling there will be
>> needed.
>>
>> Middleware that's pushed from one project into another (like Ceilometer
>> -> Swift) is also a funny edge case that I think gets funnier here.
>>
>> Those are mostly implementation details, that probably have work
>> arounds, but would need people on them.
>>
>>
>>  From a strategic perspective this would basically make traditional Linux
>> Packaging of OpenStack a lot harder. That might be the right call,
>> because traditional Linux Packaging definitely suffers from the fact
>> that everything on a host needs to be upgraded at the same time. For
>> large installs of OpenStack (especially public cloud cases) traditional
>> packages are definitely less used.
>>
>> However Linux Packaging is how a lot of people get exposed to software.
>> The power of onboarding with apt-get / yum install is a big one.
>>
>> I've been through the ups and downs of both approaches so many times now
>> in my own head, I no longer have a strong preference beyond the fact
>> that we do one approach today, and doing a different one is effort to
>> make the transition.
>>
>>  -Sean
>>
> 
> It is also worth noting that according to the OpenStack User Survey [0]
> 56% of deployments use "Unmodifed packages from the operating system".
> 
> Granted it was a small sample size (302 responses to that question)
> but it is worth keeping this in mind as we talk about moving the burden
> to packagers.
> 
> 0 - 
> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf
> (page 36)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
To add to this, I'd also note that I as a packager would likely stop
packaging OpenStack at whatever release this goes into.  While the
option to package and ship a virtualenv installed to /usr/local or /opt
exists, bundling is not something that should be supported given the
issues it can have (update cadence and security issues, mainly).

-- 
-- Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Lance Bragstad
It looks like it does [0].


[0]
https://github.com/openstack-dev/devstack/blob/4e7804431ada7e2cc0db63bd4c52b17782d33b5b/lib/keystone#L494-L497

On Mon, Apr 18, 2016 at 10:20 AM, Matt Fischer  wrote:

> On Mon, Apr 18, 2016 at 8:29 AM, Brant Knudson  wrote:
>
>>
>>
>> On Fri, Apr 15, 2016 at 9:04 PM, Adam Young  wrote:
>>
>>> We all want Fernet to be a reality.  We ain't there yet (Except for
>>> mfish who has no patience) but we are getting closer.  The goal is to get
>>> Fernet as the default token provider as soon as possible. The review to do
>>> this has uncovered a few details that need to be fixed before we can do
>>> this.
>>>
>>> Trusts for V2 tokens were not working correctly.  Relatively easy fix.
>>> https://review.openstack.org/#/c/278693/ Patch is still failing on
>>> Python 3.  The tests are kindof racy due to the revocation event 1 second
>>> granularity.  Some of the tests here have A sleep (1) in them still, but
>>> all should be using the time control aspect of the unit test fixtures.
>>>
>>> Some of the tests also use the same user to validate a token as that
>>> have, for example, a role unassigned.  These expose a problem that the
>>> revocation events are catching too many tokens, some of which should not be
>>> treated as revoked.
>>>
>>> Also, some of the logic for revocation checking has to change. Before,
>>> if a user had two roles, and had one removed, the token would be revoked.
>>> Now, however, the token will validate successful, but the response will
>>> only have the single assigned role in it.
>>>
>>>
>>> Python 3 tests are failing because the Fernet formatter is insisting
>>> that all project-ids be valid UUIDs, but some of the old tests have "FOO"
>>> and "BAR" as ids.  These either need to be converted to UUIDS, or the
>>> formatter needs to be more forgiving.
>>>
>>> Caching of token validations was messing with revocation checking.
>>> Tokens that were valid once were being reported as always valid. Thus, the
>>> current review  removes all caching on token validations, a change we
>>> cannot maintain.  Once all the test are successfully passing, we will
>>> re-introduce the cache, and be far more aggressive about cache invalidation.
>>>
>>> Tempest tests are currently failing due to Devstack not properly
>>> identifying Fernet as the default token provider, and creating the Fernet
>>> key repository.  I'm tempted to just force devstack to always create the
>>> directory, as a user would need it if they ever switched the token provider
>>> post launch anyway.
>>>
>>>
>> There's a review to change devstack to default to fernet:
>> https://review.openstack.org/#/c/195780/ . This was mostly to show that
>> tempest still passes with fernet configured. It uncovered a couple of test
>> issues (similar in nature to the revocation checking issues mentioned in
>> the original note) that have since been fixed.
>>
>> We'd prefer to not have devstack overriding config options and instead
>> use keystone's defaults. The problem is if fernet is the default in
>> keystone then it won't work out of the box since the key database won't
>> exist. One option that I think we should investigate is to have keystone
>> create the key database on startup if it doesn't exist.
>>
>> - Brant
>>
>>
>
> I'm not a devstack user, but as I mentioned before, I assume devstack
> called keystone-manage db_sync? Why couldn't it also call keystone-manage
> fernet_setup?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][manila] manila/glusterfs gateway jobs are UNSTABLE due to perm isssue

2016-04-18 Thread Csaba Henk
Hi,

The

gate-manila-tempest-dsvm-glusterfs and
gate-manila-tempest-dsvm-glusterfs-native

gateway jobs end up UNSTABLE since a while,
as one can observe at any recent change in

https://review.openstack.org/#/q/project:openstack/manila

eg.

https://review.openstack.org/302801 .

The issue is that in the end some log files can't be scp'd off
the test node, failing with EPERM. It must have been this way before
as well; the latest infra update just introduced a new Jenkins
version which is less tolerant of the situation.

In change https://review.openstack.org/302477/ of
devstack-plugin-glusterfs I try to amend the post_test_hook
of said jobs to chmod the affected log files into sanity. However,
that does not work: as the output of some diagnostic ls/find commands
I also added to post_test_hook shows, the files don't
exist at the affected location yet:

http://logs.openstack.org/77/302477/3/check/gate-manila-tempest-dsvm-glusterfs/025343a/console.html.gz#_2016-04-07_17_46_13_066
http://logs.openstack.org/77/302477/3/check/gate-manila-tempest-dsvm-glusterfs/025343a/console.html.gz#_2016-04-07_17_46_13_082

Nb. there are some other possible chmods that one
could think of as a fix, but that's already in place:
https://github.com/openstack/devstack-plugin-glusterfs/blob/658d3cc/devstack/plugin.sh#L46

Do you have any idea where / how to fix it?

Thanks
Csaba
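Since the diagnostics show the files do not exist yet when post_test_hook runs, one direction to try (a sketch only; the paths below are made up and this is not a verified fix) is to normalize permissions as late as possible, right before log collection. The snippet emulates the situation in a scratch directory:

```shell
# Emulate the failing situation in a scratch directory, then apply the
# late permission fix that a pre-collection cleanup step would perform.
LOG_DIR="$(mktemp -d)"
mkdir -p "$LOG_DIR/glusterfs"
touch "$LOG_DIR/glusterfs/glusterd.log"
chmod 0600 "$LOG_DIR/glusterfs/glusterd.log"  # the kind of mode that makes scp hit EPERM

# Run after the daemons have written everything, right before scp:
find "$LOG_DIR" -type d -exec chmod 0755 {} +
find "$LOG_DIR" -type f -exec chmod 0644 {} +
```

In the real job the find commands would need sudo and the actual gluster log paths; the point is only the ordering: fix the modes after the files exist, not before.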

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Stepping down from puppet-openstack-core

2016-04-18 Thread Sebastien Badia
Hello here,

I would like to ask to be removed from the core reviewers team on the
Puppet for OpenStack project.

I lack dedicated time to contribute on my spare time to the project. And I
don't work anymore on OpenStack deployments.

In the past months, I stopped reviewing and submitting changes on our project,
which is why I have gradually slid down the contribution stats of the group² :-)
The community CoC¹ suggests I step down considerately.

I've never been very talkative, but retrospectively it was a great adventure, I
learned a lot at your side. I'm very proud to see where the project is now.

So Long, and Thanks for All the Fish
I wish you the best ♥

Seb

¹http://www.openstack.org/legal/community-code-of-conduct/
²http://stackalytics.com/report/contribution/puppetopenstack-group/90
-- 
Sebastien Badia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Brant Knudson
On Mon, Apr 18, 2016 at 10:20 AM, Matt Fischer  wrote:

> On Mon, Apr 18, 2016 at 8:29 AM, Brant Knudson  wrote:
>
>>
>>
>> On Fri, Apr 15, 2016 at 9:04 PM, Adam Young  wrote:
>>
>>> We all want Fernet to be a reality.  We ain't there yet (Except for
>>> mfish who has no patience) but we are getting closer.  The goal is to get
>>> Fernet as the default token provider as soon as possible. The review to do
>>> this has uncovered a few details that need to be fixed before we can do
>>> this.
>>>
>>> Trusts for V2 tokens were not working correctly.  Relatively easy fix.
>>> https://review.openstack.org/#/c/278693/ Patch is still failing on
>>> Python 3.  The tests are kindof racy due to the revocation event 1 second
>>> granularity.  Some of the tests here have A sleep (1) in them still, but
>>> all should be using the time control aspect of the unit test fixtures.
>>>
>>> Some of the tests also use the same user to validate a token as that
>>> have, for example, a role unassigned.  These expose a problem that the
>>> revocation events are catching too many tokens, some of which should not be
>>> treated as revoked.
>>>
>>> Also, some of the logic for revocation checking has to change. Before,
>>> if a user had two roles, and had one removed, the token would be revoked.
>>> Now, however, the token will validate successful, but the response will
>>> only have the single assigned role in it.
>>>
>>>
>>> Python 3 tests are failing because the Fernet formatter is insisting
>>> that all project-ids be valid UUIDs, but some of the old tests have "FOO"
>>> and "BAR" as ids.  These either need to be converted to UUIDS, or the
>>> formatter needs to be more forgiving.
>>>
>>> Caching of token validations was messing with revocation checking.
>>> Tokens that were valid once were being reported as always valid. Thus, the
>>> current review  removes all caching on token validations, a change we
>>> cannot maintain.  Once all the test are successfully passing, we will
>>> re-introduce the cache, and be far more aggressive about cache invalidation.
>>>
>>> Tempest tests are currently failing due to Devstack not properly
>>> identifying Fernet as the default token provider, and creating the Fernet
>>> key repository.  I'm tempted to just force devstack to always create the
>>> directory, as a user would need it if they ever switched the token provider
>>> post launch anyway.
>>>
>>>
>> There's a review to change devstack to default to fernet:
>> https://review.openstack.org/#/c/195780/ . This was mostly to show that
>> tempest still passes with fernet configured. It uncovered a couple of test
>> issues (similar in nature to the revocation checking issues mentioned in
>> the original note) that have since been fixed.
>>
>> We'd prefer to not have devstack overriding config options and instead
>> use keystone's defaults. The problem is if fernet is the default in
>> keystone then it won't work out of the box since the key database won't
>> exist. One option that I think we should investigate is to have keystone
>> create the key database on startup if it doesn't exist.
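Brant's "create the key database on startup if it doesn't exist" idea could be sketched like this. The `keystone-manage fernet_setup` command and its flags are real, but the wrapper function is hypothetical — keystone itself does not (as of this thread) run it automatically:

```python
import os
import subprocess


def ensure_fernet_key_repository(key_repo="/etc/keystone/fernet-keys"):
    """Create the Fernet key repository at startup if it is missing.

    Hypothetical sketch of the idea discussed above; deployers normally
    run `keystone-manage fernet_setup` themselves. Returns True if the
    repository had to be created.
    """
    if os.path.isdir(key_repo) and os.listdir(key_repo):
        return False  # keys already present, nothing to do
    os.makedirs(key_repo, mode=0o700, exist_ok=True)
    subprocess.check_call(
        ["keystone-manage", "fernet_setup",
         "--keystone-user", "keystone",
         "--keystone-group", "keystone"])
    return True
```

Running something like this at service start would make fernet "just work" as a default, at the cost of the service needing write access to the key directory.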
>>
>> - Brant
>>
>>
>
> I'm not a devstack user, but as I mentioned before, I assume devstack
> called keystone-manage db_sync? Why couldn't it also call keystone-manage
> fernet_setup?
>
>
When you tell devstack that it's using fernet then it does keystone-manage
fernet_setup. When you tell devstack to use the default, it doesn't
fernet_setup because for now it thinks the default is UUID and doesn't
require keys. One way to have devstack work when fernet is the default is
to have devstack always do keystone-manage fernet_setup.

Really what we want to do is have devstack work like other deployment
methods. We can reasonably expect featureful deployers like puppet to
keystone-manage fernet_setup in the course of setting up keystone. There's
more basic deployers like RPMs or debs that in the past have said they like
the defaults to "just work" (like UUID tokens) and not require extra
commands.

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][infra] Gate bused until Sunday April 17th

2016-04-18 Thread Jeremy Stanley
On 2016-04-17 03:38:28 + (+), Steven Dake (stdake) wrote:
> Hey folks.  A shade + nodepool change has broken our gating
> functionality on the OVH cloud provider and probably some others.
> The net result is we can't actually gate our changes.
[...]

A change[*] to record the public address as the private address when
the provider lacks a separate private network has merged, and
nodepool was subsequently restarted within the past hour. However,
due to unrelated issues in OVH we've taken their regions completely
offline in nodepool today anyway, so this issue hopefully shouldn't
persist for you.

[*] https://review.openstack.org/306835
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Stepping down from puppet-openstack-core

2016-04-18 Thread Emilien Macchi
On Mon, Apr 18, 2016 at 11:37 AM, Sebastien Badia  wrote:
> Hello here,
>
> I would like to ask to be removed from the core reviewers team on the
> Puppet for OpenStack project.
>
> I lack dedicated time to contribute on my spare time to the project. And I
> don't work anymore on OpenStack deployments.
>
> In the past months, I stopped reviewing and submitting changes on our project,
> that's why I slid gradually down into the abyss of the group stats² :-)
> The community CoC¹ suggests I step down considerately.
>
> I've never been very talkative, but retrospectively it was a great adventure, 
> I
> learned a lot at your side. I'm very proud to see where the project is now.
>
> So Long, and Thanks for All the Fish
> I wish you the best ♥

Thanks a lot Seb, you brought a lot in the project.
Your experience in Puppet & Ruby was very appreciated. Also your
careful reviews helped to improve our code quality.

I hope you'll have fun in your next projects!

Cheers,

> Seb
>
> ¹http://www.openstack.org/legal/community-code-of-conduct/
> ²http://stackalytics.com/report/contribution/puppetopenstack-group/90
> --
> Sebastien Badia
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 HA testing on scale

2016-04-18 Thread John Schwarz
This is some awesome work, Ann. It's very neat to see that all the work
on the races we've struggled with w.r.t. the L3 scheduler has paid off. I
would definitely like to see how these results are affected by
https://review.openstack.org/#/c/305774/ but understandably 49
physical nodes are hard to come by.

Also, we should see how best to handle the issue Ann found (which is
tracked at https://review.openstack.org/#/c/305774/). Specifically,
reproducing this should be our goal.

John.

On Mon, Apr 18, 2016 at 5:15 PM, Anna Kamyshnikova
 wrote:
> Hi guys!
>
> As a developer I use Devstack or multinode OpenStack installation (4-5
> nodes) for work, but these are "abstract" environments, where you are not
> able to perform some scenarios as your machine is not powerful enough. But
> it is really important to understand the issues that real deployments have.
>
> Recently I've performed testing of L3 HA on the scale environment 49 nodes
> (3 controllers, 46 computes) Fuel 8.0. On this environment I ran shaker and
> rally tests and also performed some manual destructive scenarios. I think
> it is very important to share these results. Ideally, I think that we
> should collect statistics for different configurations each release to
> compare and check it to make sure that we are heading the right way.
>
> The results of shaker and rally tests [1]. I put detailed report in google
> doc [2]. I would appreciate all comments on these results.
>
> [1] - http://akamyshnikova.github.io/neutron-benchmark-results/
> [2] -
> https://docs.google.com/a/mirantis.com/document/d/1TFEUzRRlRIt2HpsOzFh-RqWwgTzJPBefePPA0f0x9uw/edit?usp=sharing
>
> Regards,
> Ann Kamyshnikova
> Mirantis, Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
John Schwarz,
Red Hat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [kolla][security][infra][ops] Kolla Design Summit Agenda Published

2016-04-18 Thread Jimmy McArthur

Hi All -

Just a quick note to let you know that you do not need to re-download 
the app in order to get Schedule updates. We do have a cached API, so it 
might take a few minutes for it to update. We also limit the # of 
updates to 30 at a time so your app doesn't crash in the event of a slow 
network / bad connection. Once the core set of data is uploaded, future 
updates are pretty quick.


Please feel free to send an email to summit...@openstack.org if you 
still have trouble, but this particular issue should not be occurring.


Thanks!
Jimmy McArthur

Jonathan Bryce wrote:




Begin forwarded message:

*From: *"Steven Dake (stdake)"
*Subject: **[openstack-dev] [kolla][security][infra][ops] Kolla 
Design Summit Agenda Published*

*Date: *April 17, 2016 at 8:14:00 PM CDT
*To: *"OpenStack Development Mailing List (not for usage questions)"
*Reply-To: *"OpenStack Development Mailing List \(not for usage 
questions\)"


OpenStackers,

The full Kolla summit agenda is here:
https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Kolla%3A

We would super appreciate any operator presence in our fishbowl or 
design sessions so we do right by the Operators that use Kolla 
community generated code including documentation.


If folks have conflicts, please email me off list and I'll make an 
effort to rearrange the agenda if feasible.  I am especially willing 
to move around design sessions tagged with security or infrastructure 
if the security or infrastructure team has conflicts.  I am not 
certain when my ability to re-arrange schedules will end, so please 
mail as soon as possible.


NB for the security team and the kolla coresec team.:

I have scheduled 3 hours for TA.  We begin this process Thursday @ 
11:50 AM here:

https://www.openstack.org/summit/austin-2016/summit-schedule/events/9307?goback=1

And finish Friday morning here:
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9400?goback=1

I have heard you may need to re-download the phone application if you 
have already downloaded it, because of how it caches, but I don't 
have data to back up this claim.  If the schedule looks out of date, 
try that workaround.  If that doesn't work, contact me (irc nick 
sdake) on #openstack-kolla on IRC and we can work together with the 
#openstack-infra team to get you setup.


Regards,
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org 
?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] review dashboard, review priorities for Newton

2016-04-18 Thread Victoria Martínez de la Cruz
Thanks Amrith, this looks great!

Just added the new dashboard to my Gerrit dashboard.

Cannot think of anything else we could need.

2016-04-16 10:45 GMT-03:00 Amrith Kumar :

> At the Trove meeting last week, I agreed to send out a simple way in
> which we can all view the patches that are in need of review.
>
> In the past, as we got closer to release milestones, we have used the
> starredby: method. I first remember using this when Nikhil
> proposed it in the Juno/Liberty timeframe, and it worked out well for us
> towards the end of the Mitaka cycle as well. I'd earlier asked on the ML
> for simple ways to tag and prioritize reviews [1]; the sense that I have
> is that while there may be better tools in the future, our best bet for
> now is to use the same method we've been using so far.
>
> Also, thanks to Flavio who brought my attention to
> gerrit-dash-creator[2] and the Trove dashboard registered there, I have
> built and been using a dashboard for some time and I have found that it
> reflects some of the reviewing challenges that we've been facing as a
> project.
>
> To that end, I have proposed [3], a change that updates the existing
> Trove dashboard and provides a mechanism for all members actively
> working on Trove to prioritize reviews. The rationale for the change(s)
> and my motivations are part of the commit message in [3].
>
> To get the benefit of this dashboard, you will have to do the following.
>
> 1. login to review.openstack.org
> 2. on the right-hand top corner, click on your name and then click on
> the "settings" link.
> 3. on the left-hand pane, click on Preferences
> 4. on the right-hand pane you should now see a section entitled "My
> Menu" and a box that says "Name" and one next to it that says "URL".
> 5. in the name, enter Trove-Dashboard and in the URL, you will have to
> enter the URL for the dashboard. Since this is very long (about 2000
> characters) I've posted the text in a gist[4]. The URL will start
> with ... "#/dashboard/?foreach=" and end with "%3D%2D2+NOT+is%
> 3Amergeable"
> 6. Once you enter the name and the URL, press the "+" to the left of the
> name.
>
> You will now notice that your top menu in review.openstack.org will have
> at the very right hand end, a link called "Trove-Dashboard". If you
> click on it, you will see the dashboard I've created.
>
> Changes in this dashboard are shown in the following sections:
>
> My Patches Requiring Attention
>
> These are patches that you submitted, and that either have been
> given negative reviews, have failed check or gate, or are
> currently not in a state where they can merge
>
> Patches waiting longer than 14 days
>
> These are patches that have been waiting over 14 days for a
> review
>
> Patches waiting longer than 7 days
>
> These are patches that have been waiting over 7 days for a
> review
>
> Specs requiring review
>
> These are specs that are in need of review, and that have passed
> check, and have not been blocked (-2'ed)
>
> Trove: Priority code reviews
>
> These are changes in trove or trove-integration in need of
> review, and that have passed check, are mergeable, and have not
> been blocked (-2'ed), and have been starred by me
>
> Trove Client and Dashboard: Priority code reviews
>
> These are changes in python-troveclient or trove-dashboard in
> need of review, and that have passed check, are mergeable, and
> have not been blocked (-2'ed), and have been starred by me
>
> Changes needing Final Approval
>
> These are changes that have one +2, that have passed check, are
> mergeable and need an approval
>
> Changes on branches other than master
>
> These are changes on branches other than master that are
> currently open
>
> Needing attention
>
> These are patches that are in need of attention; either check or
> gate failures, and that are not currently mergeable. This
> however does not show patches that are marked as WF-1 or have
> been blocked (-2'ed).
>
> Please post comments and feedback on this dashboard at [3]. If you wish
> to change the dashboard or make your own, you can do that very easily.
>
> $ git clone http://git.openstack.org/openstack/gerrit-dash-creator
> [...]
> $ cd gerrit-dash-creator/
>
> [If you want to change my proposed dashboard, get it ...]
> $ git review -d 306716
> Creating a git remote called "gerrit" that maps to:
>
> ssh://amr...@review.openstack.org:29418/openstack/gerrit-dash-creator.git
> Downloading refs/changes/16/306716/1 from gerrit
> Switched to branch "review/amrith/trove-dashboard"
>
> $ emacs dashboards/trove.dash
>
> $ ./gerrit-dash-creator dashboards/trove.dash
> https://review.openstack.org/#/dashboard/?foreach=status%3Aopen++%
> [... long URL deleted...]
>
> You can either navigate directly to that URL, or you can take the
> portion after the '#' and put it into 

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Michał Jastrzębski
So I also want to stress that shared libraries are a huge pain
during upgrades. While I'm not in favor of packages with embedded
virtualenvs (as Matt pointed out, this has a lot of issues), having a
shared dependency pool pretty much means that you need to upgrade
*everything* that is OpenStack in a single run, and that is error-prone,
volatile, and nearly impossible to roll back if something goes
wrong. One way to address this issue is putting services in
containers, but that is not a solution to the problem at hand (56% use
apt-get install, as Graham says). Packagers have a hard time keeping up
already; if we add fairly complex logic to this (virtualenvs) we will
probably end up in a cross-compatibility hell of people not keeping up
with changes.

That being said, in my opinion, this percentage is this high because
that's exactly what we suggest in the install docs; once we come up with
a solution we should fix it there as well.


On 18 April 2016 at 10:23, Matthew Thode  wrote:
> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
>> On 18/04/2016 13:51, Sean Dague wrote:
>>> On 04/18/2016 08:22 AM, Chris Dent wrote:
 On Mon, 18 Apr 2016, Sean Dague wrote:

> So if you have strong feelings and ideas, why not get them out in email
> now? That will help in the framing of the conversation.

 I won't be at summit and I feel pretty strongly about this topic, so
 I'll throw out my comments:

 I agree with the basic premise: In the big tent universe co-
 installability is holding us back and is a huge cost in terms of spent
 energy. In a world where service isolation is desirable and common
 (whether by virtualenv, containers, different hosts, etc) targeting an
 all-in-one install seems only to serve the purposes of all-in-one rpm-
 or deb-based installations.

 Many (most?) people won't be doing those kinds of installations. If all-in-
 one installations are important to the rpm- and deb- based distributions
 then _they_ should be resolving the dependency issues local to their own
 infrastructure (or realizing that it is too painful and start
 containerizing or otherwise as well).

 I think making these changes will help to improve and strengthen the
 boundaries and contracts between services. If not technically then
 at least socially, in the sense that the negotiations that people
 make to get things to work are about what actually matters in their
 services, not unwinding python dependencies and the like.

 A lot of the basics of getting this to work are already in place in
 devstack. One challenge I've run into the past is when devstack
 plugin A has made an assumption about having access to a python
 script provided by devstack plugin B, but it's not on $PATH or its
 dependencies are not in the site-packages visible to the current
 context. The solution here is to use full paths _into_ virtualenvs.
>>>
>>> As Chris said, doing virtualenvs on the Devstack side for services is
>>> pretty much there. The team looked at doing this last year, then stopped
>>> due to operator feedback.
>>>
>>> One of the things that gets a little weird (when using devstack for
>>> development) is if you actually want to see the impact of library
>>> changes on the environment. As you'll need to make sure you loop and
>>> install those libraries into every venv where they are used. This
>>> forward reference doesn't really exist. So some tooling there will be
>>> needed.
>>>
>>> Middleware that's pushed from one project into another (like Ceilometer
>>> -> Swift) is also a funny edge case that I think get funnier here.
>>>
>>> Those are mostly implementation details, that probably have work
>>> arounds, but would need people on them.
>>>
>>>
>>>  From a strategic perspective this would basically make traditional Linux
>>> Packaging of OpenStack a lot harder. That might be the right call,
>>> because traditional Linux Packaging definitely suffers from the fact
>>> that everything on a host needs to be upgraded at the same time. For
>>> large installs of OpenStack (especially public cloud cases) traditional
>>> packages are definitely less used.
>>>
>>> However Linux Packaging is how a lot of people get exposed to software.
>>> The power of onboarding with apt-get / yum install is a big one.
>>>
>>> I've been through the ups and downs of both approaches so many times now
>>> in my own head, I no longer have a strong preference beyond the fact
>>> that we do one approach today, and doing a different one is effort to
>>> make the transition.
>>>
>>>  -Sean
>>>
>>
>> It is also worth noting that according to the OpenStack User Survey [0]
>> 56% of deployments use "Unmodifed packages from the operating system".
>>
>> Granted it was a small sample size (302 responses to that question)
>> but it is worth keeping this in mind as we talk about moving the burden
>> to packagers.
>>
>> 0 -
>> https://www.openstack.org/assets

Re: [openstack-dev] [oslo][keystone][documentation][gate] Babel dependency for oslo.log

2016-04-18 Thread Jeremy Stanley
On 2016-04-17 15:15:49 -0400 (-0400), Davanum Srinivas wrote:
[...]
> Is there anyone working to fix all tox CI jobs to honor upper
> constraints?
[...]

The present implementation relies on zuul-cloner, which doesn't
(yet) do what we'd need in the post and release pipelines. Sachi is
working on post currently: https://review.openstack.org/293194
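In the meantime, individual projects have typically honored upper constraints by wiring them into `tox.ini` themselves. A rough sketch of that pattern (treat the exact URL and variable name as the conventional ones, not something mandated by this thread):

```ini
[testenv]
# Install everything through pip's -c (constraints) flag so transitive
# dependencies are capped at the versions in upper-constraints.txt.
# UPPER_CONSTRAINTS_FILE lets CI point at a locally checked-out copy.
install_command =
    pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
```

The zuul-cloner work discussed above is what would let gate jobs substitute a speculative, in-flight copy of the constraints file instead of the published one.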
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Removal of in progress console log access

2016-04-18 Thread James E. Blair
Sean Dague  writes:

> On 04/18/2016 11:22 AM, James E. Blair wrote:
>> Sean Dague  writes:
>> 
>>> Bummer. This gets used a lot to figure out the state of things given that
>>> zuul links to the console even after the job is complete. Changing that
>>> to the log server link would mitigate the blind spot.
>> 
>> Yeah, we know it's important, which is why we're working on getting it
>> back, but will take a little bit of time.  In the interim, rather than
>> linking to a dead URL, I removed the links from the status page
>> altogether.  However, if it would be better overall to link to the log
>> server (which will result in 404s until the logs are actually uploaded
>> at the end of the job), we could probably do that instead.  I'm sure
>> we'll get questions, but we could probably put a banner at the top of
>> the page and we may get slightly fewer of them.
>
> The links could be added only after the individual test run completes.
> That would mean no 404s right? But allow link access once there are
> results to be seen.

Yes we could do that -- though for the final job, you may need to watch
closely to grab the link before it disappears.  However, I guess in that
case, you can just grab them from the change, eh?

So -- the best plan is: no links on job names to start, then as each
individual job completes, switch the name to a link to the log URL for
that job.  Yeah?

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Matthew Thode
On 04/18/2016 10:57 AM, Michał Jastrzębski wrote:
> So I also want to stress out that shared libraries are huge pain
> during upgrades. While I'm not in favor of packages with embedded
> virtualenvs (as Matt pointed out, this has a lot of issues), having
> shared dependency pool pretty much means that you need to upgrade
> *everything* that is openstack at single run, and that is prone to
> errors, volatile and nearly impossible to rollback if something goes
> wrong. One way to address this issue is putting services in
> containers, but that is not an solution to problem at hand (56% use
> apt-get install as Graham says). Packagers have hard time keeping up
> already, if we add fairly complex logic to this (virtualenvs) we will
> probably end up with cross-compatibility hell of people not keeping up
> with changes.
> 
> That being said, in my opinion, this percentage is this high because
> that's exactly what we suggest in install docs, once we came out with
> a solution we should fix it there as well.
> 
> 
> On 18 April 2016 at 10:23, Matthew Thode  wrote:
>> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
>>> On 18/04/2016 13:51, Sean Dague wrote:
 On 04/18/2016 08:22 AM, Chris Dent wrote:
> On Mon, 18 Apr 2016, Sean Dague wrote:
>
>> So if you have strong feelings and ideas, why not get them out in email
>> now? That will help in the framing of the conversation.
>
> I won't be at summit and I feel pretty strongly about this topic, so
> I'll throw out my comments:
>
> I agree with the basic premise: In the big tent universe co-
> installability is holding us back and is a huge cost in terms of spent
> energy. In a world where service isolation is desirable and common
> (whether by virtualenv, containers, different hosts, etc) targeting an
> all-in-one install seems only to serve the purposes of all-in-one rpm-
> or deb-based installations.
>
> Many (most?) people won't be doing those kinds of installations. If 
> all-in-
> one installations are important to the rpm- and deb- based distributions
> then _they_ should be resolving the dependency issues local to their own
> infrastructure (or realizing that it is too painful and start
> containerizing or otherwise as well).
>
> I think making these changes will help to improve and strengthen the
> boundaries and contracts between services. If not technically then
> at least socially, in the sense that the negotiations that people
> make to get things to work are about what actually matters in their
> services, not unwinding python dependencies and the like.
>
> A lot of the basics of getting this to work are already in place in
> devstack. One challenge I've run into the past is when devstack
> plugin A has made an assumption about having access to a python
> script provided by devstack plugin B, but it's not on $PATH or its
> dependencies are not in the site-packages visible to the current
> context. The solution here is to use full paths _into_ virtenvs.

 As Chris said, doing virtualenvs on the Devstack side for services is
 pretty much there. The team looked at doing this last year, then stopped
 due to operator feedback.

 One of the things that gets a little weird (when using devstack for
 development) is if you actually want to see the impact of library
 changes on the environment. As you'll need to make sure you loop and
 install those libraries into every venv where they are used. This
 forward reference doesn't really exist. So some tooling there will be
 needed.

 Middleware that's pushed from one project into another (like Ceilometer
 -> Swift) is also a funny edge case that I think get funnier here.

 Those are mostly implementation details, that probably have work
 arounds, but would need people on them.


  From a strategic perspective this would basically make traditional Linux
 Packaging of OpenStack a lot harder. That might be the right call,
 because traditional Linux Packaging definitely suffers from the fact
 that everything on a host needs to be upgraded at the same time. For
 large installs of OpenStack (especially public cloud cases) traditional
 packages are definitely less used.

 However Linux Packaging is how a lot of people get exposed to software.
 The power of onboarding with apt-get / yum install is a big one.

 I've been through the ups and downs of both approaches so many times now
 in my own head, I no longer have a strong preference beyond the fact
 that we do one approach today, and doing a different one is effort to
 make the transition.

  -Sean

>>>
>>> It is also worth noting that according to the OpenStack User Survey [0]
>>> 56% of deployments use "Unmodifed packages from the operating system".
>>>
>>> Granted it was a small sample siz

[openstack-dev] [horizon][keystone] Getting Auth Token from Horizon when using Federation

2016-04-18 Thread Martin Millnert
Hi,

we're deploying Liberty (soon Mitaka) with heavy reliance on the SAML2
Federation system by Keystone where we're a Service Provider (SP).

The problem in this situation is getting a token for direct API
access (*).

There are conceptually two methods to use the CLI:
 1) Modify one's IdP (for each customer -- in our case O(100)) to add
support for a feature called ECP (**), and then use keystoneauth with the
SAML2 plugin,
 2) Go to (for example) "Access & Security / API Access / View
Credentials" in Horizon, and check out a token from there.

2) isn't implemented. 1) is a complete blocker for many customers.

Are there any principled or fundamental reasons why 2) is not doable?
What I imagine needs to happen:
  A) User is authenticated (see *) in Horizon,
  B) User uses said authentication (token) to request another token from
Keystone, which is displayed under the "API Access" tab on "Access &
Security".

From a general perspective, I can't see why this shouldn't work.

Whatever scoping the user currently has should be sufficient to check
out a similarly-or-lesser scoped token.
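Step B) maps directly onto the Identity v3 API's "token" authentication method: you authenticate with the token you already hold and ask for a new, similarly-or-lesser scoped one. A minimal sketch (the Keystone host is a placeholder, and this is neither Horizon nor Keystone code):

```python
import json
import urllib.request

KEYSTONE = "http://keystone.example.com:5000"  # placeholder endpoint


def build_token_request(existing_token, project_id):
    """Identity v3 payload: re-authenticate with an existing token
    (auth method "token"), scoped to the given project."""
    return {
        "auth": {
            "identity": {
                "methods": ["token"],
                "token": {"id": existing_token},
            },
            "scope": {"project": {"id": project_id}},
        }
    }


def reissue_token(existing_token, project_id):
    """POST /v3/auth/tokens; Keystone returns the new token in the
    X-Subject-Token response header, not in the body."""
    req = urllib.request.Request(
        KEYSTONE + "/v3/auth/tokens",
        data=json.dumps(build_token_request(existing_token, project_id)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["X-Subject-Token"]
```

Since Horizon already holds a scoped token for the session, a "View Credentials" panel would only need this exchange plus a place to display the result.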

Anyway, we will, if this is at all doable, bolt this onto our local
deployment. I do, A) believe we're not alone with this use case (***),
B) look for input on doability.

We'll be around in Austin for discussion with Horizon/Keystone regarding
this if necessary.

Regards,
Martin Millnert

(* The reason this is a problem: With Federation, there are no local
users and passwords in the Keystone database. When authenticating to
Horizon in this setup, Keystone (I think) redirects the user to an HTTP
page on the home site's Identity Provider (IdP), which performs the
authentication. The IdP then signs a set of entitlements about this
identity, and sends these back to Keystone. Passwords stay at home. Epic
Win.)

(** ECP is a new feature, not supported by all IdP's, that at (second)
best requires reconfiguration of core authentication services at each
customer, and at worst requires customers to change IdP software
completely. This is a varying degree of showstopper for various
customers.)

(*** 
https://stackoverflow.com/questions/20034143/getting-auth-token-from-keystone-in-horizon
https://ask.openstack.org/en/question/51072/get-keystone-auth-token-via-horizon-url/
 
)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] Getting the ball rolling on glance v2 in nova in newton cycle

2016-04-18 Thread Flavio Percoco

On 15/04/16 11:42 -0400, Nikhil Komawar wrote:

comment inline

On 4/15/16 11:08 AM, Sean Dague wrote:

On 04/15/2016 10:42 AM, Jay Pipes wrote:

On 04/01/2016 06:45 AM, Sean Dague wrote:

#2 - move discover major version back to glanceclient -
https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L108


I don't understand why this was ever in nova. This really should be

glanceclient.discover... something. It uses internal methods from
glanceclient and internal structures of the content returned.

Catching, if desired, should also be on the glanceclient side.
glanceclient.reset_version() could exist to clear any caching.

This is exactly what I said in the original review in Mitaka :(

https://review.openstack.org/#/c/222150/

Note that on PS10 I wrote:

This code belongs in glanceclient, not Nova, IMHO...

Line 169: Most of the above code should actually be in the
python-glanceclient package, not here. The Nova code should be able to
call glanceclient with a URI and get the latest supported Glance API
version.

and then later I said:

To be a little clearer... the Nova code should be able to do something
like this:

 glance_uris = CONF.glance_api_servers
 glance_uri_version_map = {uri: glanceclient.get_latest_version(uri)
                           for uri in glance_uris}

So... pretty much in line with what you say above.
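A `glanceclient.get_latest_version()` along those lines would do little more than parse the unversioned `/versions` document. An illustrative sketch — the eventual glanceclient helper may well look different; only the response shape is taken from Glance's unversioned endpoint:

```python
def latest_supported_version(versions_doc):
    """Pick the newest non-deprecated major version from a Glance
    /versions response, e.g. {"versions": [{"id": "v2.3",
    "status": "CURRENT"}, ...]}. Illustrative sketch only.
    """
    candidates = [
        v["id"].lstrip("v")
        for v in versions_doc["versions"]
        if v.get("status") in ("CURRENT", "SUPPORTED")
    ]
    # Compare numerically: "2.3" -> (2, 3), so "2.3" beats "2.2".
    newest = max(candidates, key=lambda s: tuple(int(p) for p in s.split(".")))
    return int(newest.split(".")[0])  # major version only


doc = {"versions": [
    {"id": "v2.3", "status": "CURRENT"},
    {"id": "v2.2", "status": "SUPPORTED"},
    {"id": "v1.1", "status": "SUPPORTED"},
    {"id": "v1.0", "status": "DEPRECATED"},
]}
print(latest_supported_version(doc))  # -> 2
```

Keeping this in the client library (rather than Nova) means the caching and version-pinning policy lives next to the code that actually speaks each API version.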

Flavio then responded to me:

"While I agree with you, I believe we should let this in "for now"
whilst it's added to glanceclient. One of the things that blocked the
previous work during the Kilo cycle were things being added to
glanceclient and the fact that they weren't available right away.

Can we agree on removing this code as soon as there's a glanceclient
release with it? Happy to have a bug filed against glanceclient. The
glance team will take care of this."

and I wrote:

"OK, if you promise to remove some of this code when glanceclient gets
this functionality, then I suppose I'm good with this going in Nova for
now."

So, there's your answer to "why this was ever in Nova".

My expectation is that the turnaround time on something like this would
have been weeks, not months. The delays getting things into
glanceclient make me a pretty firm -2 on "let's just hack it into Nova
for now", because for now seems to equal forever. :(


Sure, let's avoid more delay.


Honestly, staring at all this again this morning I think that version
discovery in Nova is probably just complexity we don't need. Especially
because it tends to lead to code that goes and leans on version
discovery at weird deep layers, and it turns out you are using v1 for a
piece of a flow and v2 for a different piece.
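
For reference, the discovery helper being argued for would boil down to
something like the sketch below. This is illustration only: the helper name
comes from the review discussion, it is not a shipped glanceclient API, and
the shape of the version document is assumed from Glance's GET / endpoint.

```python
# Sketch of the version-discovery logic this thread wants moved into
# glanceclient. The helper name and the exact shape of the version
# document are assumptions for illustration, not real glanceclient API.

def get_latest_version(versions_doc):
    """Pick the highest usable API version from a Glance version document.

    `versions_doc` is assumed to be the JSON body returned by GET / on a
    Glance server, e.g.
    {"versions": [{"id": "v2.3", "status": "CURRENT"}, ...]}.
    """
    usable = [v["id"] for v in versions_doc["versions"]
              if v["status"] in ("CURRENT", "SUPPORTED")]
    # "v2.3" -> (2, 3) so versions sort numerically rather than lexically
    return max(usable, key=lambda vid: tuple(int(p) for p in vid[1:].split(".")))

doc = {"versions": [
    {"id": "v1.1", "status": "SUPPORTED"},
    {"id": "v2.2", "status": "SUPPORTED"},
    {"id": "v2.3", "status": "CURRENT"},
]}
print(get_latest_version(doc))  # -> v2.3
```

Nova would then only ever ask glanceclient for the negotiated version
instead of re-implementing the parsing itself.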


I can figure out a way to get this done in glanceclient soon-ish.


FWIW, we had figured out a way to do *ALL* this in glanceclient during Mitaka,
but then the Nova patches were blocked and the plans we had agreed on were
changed. Therefore, the Glance team decided to dedicate resources to other
priorities that were actually going to make it.

So, forgive me if I jump in with a defensive tone but I disagree with the
feeling this would've taken forever. The Glance team was already acting towards
this but then *Nova* happened.


I've proposed new addendum to the spec to just make this a config, with
a flow of how we'd get rid of it - https://review.openstack.org/#/c/306447/


Having said all of the above, I'm good with your proposal so as to keep
the current traction on the work. (Guessing that was the reason to delay
glanceclient work before)



I'm happy to see this moving forward. TBH, anything that would let glance (and
Nova) move on from Glance's v1 is fine.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] removal of "Using keystoneauth correctly in projects" from cross project schedule

2016-04-18 Thread Henrique Truta
Hi Sean/Morgan,

I think there is room for the proper use of keystoneauth sessions in the
Keystone v3 only cross-project session too:
https://etherpad.openstack.org/p/newton-keystone-v3-devstack

Feel free to add anything related to ksa sessions.

Henrique

On Mon, 18 Apr 2016 at 11:06, Morgan Fainberg <
morgan.fainb...@gmail.com> wrote:

> On Mon, Apr 18, 2016 at 6:50 AM, Sean Dague  wrote:
>
>> After chatting with Monty and Thierry this morning, and trying to figure
>> out the right way to ensure that enough voices are in the requirements
>> x-p session, we've decided to do the following:
>>
>> * remove "Using keystoneauth correctly in projects" from cross project
>> schedule
>> * Do a special edition OpenStack Bootstrapping Hour on the topic on Wed
>> May 11th (details / url to be posted post summit).
>>
>> That will give us the same content / or better content as was to be on
>> the schedule, but also record it for future consumption.
>>
>> Sorry for any inconvenience these last minute changes provide. Thanks
>> folks.
>>
>> -Sean
>>
>>
>>
> Thanks for the heads up Sean! I think this sounds like a good alternative
> and will help to ensure a bulk of voices that are needed for the
> requirements discussion(s) will be available.
>
> --Morgan
>
-- 
Henrique


Re: [openstack-dev] [kolla][ssecurity] Threat Analysis Design Session

2016-04-18 Thread michael mccune

On 04/16/2016 02:19 PM, Steven Dake (stdake) wrote:

If the security team has a conflict  with this slot that I didn't see or
am unaware of, please speak up so I can have it corrected in the main
schedule.  Our schedule is here:


sadly, i will miss the beginning of this as i have a conflicting 
presentation.


as much as i'd like to participate, i don't think my absence should 
necessitate a reschedule. just wanted to give a heads up.


regards,
mike




Re: [openstack-dev] [nova] [glance] Getting the ball rolling on glance v2 in nova in newton cycle

2016-04-18 Thread Nikhil Komawar


On 4/16/16 1:33 PM, Flavio Percoco wrote:
> On 15/04/16 11:42 -0400, Nikhil Komawar wrote:
>> comment inline
>>
>> On 4/15/16 11:08 AM, Sean Dague wrote:
>>> On 04/15/2016 10:42 AM, Jay Pipes wrote:
 On 04/01/2016 06:45 AM, Sean Dague wrote:
> #2 - move discover major version back to glanceclient -
> https://github.com/openstack/nova/blob/3cdaa30566c17a2add5d9163a0693c97dc1d065b/nova/image/glance.py#L108
>
>
>
> I don't understand why this was ever in nova. This really should be
>
> glanceclient.discover... something. It uses internal methods from
> glanceclient and internal structures of the content returned.
>
> Catching, if desired, should also be on the glanceclient side.
> glanceclient.reset_version() could exist to clear any caching.
 This is exactly what I said in the original review in Mitaka :(

 https://review.openstack.org/#/c/222150/

 Note that on PS10 I wrote:

 This code belongs in glanceclient, not Nova, IMHO...

 Line 169: Most of the above code should actually be in the
 python-glanceclient package, not here. The Nova code should be able to
 call glanceclient with a URI and get the latest supported Glance API
 version.

 and then later I said:

 To be a little clearer... the Nova code should be able to do something
 like this:

 glance_uris = CONF.glance_api_servers
 glance_uri_version_map = {uri: glanceclient.get_latest_version(uri)
for uri in glance_uris}

 So... pretty much in line with what you say above.

 Flavio then responded to me:

 "While I agree with you, I believe we should let this in "for now"
 whilst it's added to glanceclient. One of the things that blocked the
 previous work during the Kilo cycle were things being added to
 glanceclient and the fact that they weren't available right away.

 Can we agree on removing this code as soon as there's a glanceclient
 release with it? Happy to have a bug filed against glanceclient. The
 glance team will take care of this."

 and I wrote:

 "OK, if you promise to remove some of this code when glanceclient gets
 this functionality, then I suppose I'm good with this going in Nova
 for
 now."

 So, there's your answer to "why this was ever in Nova".
>>> My expectation is that the turn around time on something like this
>>> would
>>> have been weeks, not months. I feel like the delays getting things into
>>> glanceclient makes me a pretty firm -2 on "let's just hack it into Nova
>>> for now". Because for now seems to equal for ever. :(
>>
>> Sure, let's avoid more delay.
>>
>>> Honestly, staring at all this again this morning I think that version
>>> discovery in Nova is probably just complexity we don't need. Especially
>>> because it tends to lead to code that goes and leans on version
>>> discovery at weird deep layers, and it turns out you are using v1 for a
>>> piece of a flow and v2 for a different piece.
>>
>> I can figure out a way to get this done in glanceclient soon-ish.
>
> FWIW, we had figured out a way to do *ALL* this in glanceclient during
> Mitaka, but then the Nova patches were blocked and the plans we had
> agreed on were changed. Therefore, the Glance team decided to dedicate
> resources to other priorities that were actually going to make it.
>
> So, forgive me if I jump in with a defensive tone but I disagree with the
> feeling this would've taken forever. The Glance team was already
> acting towards
> this but then *Nova* happened.
>

Thanks for the clarification, Flavio. I guess now we can move on with the
alternate plan proposed and keep the momentum on deprecating Glance v1
intact. I'm hoping that the cross-track (nova-glance) summit session
(thanks, Matt) will help resolve some of these issues and the perspective
gaps that different people have.

>>> I've proposed new addendum to the spec to just make this a config, with
>>> a flow of how we'd get rid of it -
>>> https://review.openstack.org/#/c/306447/
>>
>> Having said all of the above, I'm good with your proposal so as to keep
>> the current traction on the work. (Guessing that was the reason to delay
>> glanceclient work before)
>>
>
> I'm happy to see this moving forward. TBH, anything that would let
> glance (and
> Nova) move on from Glance's v1 is fine.
>
> Flavio
>
>
>

-- 

Thanks,
Nikhil


Re: [openstack-dev] [all] Removal of in progress console log access

2016-04-18 Thread Sean Dague
On 04/18/2016 12:18 PM, James E. Blair wrote:
> Sean Dague  writes:
> 
>> On 04/18/2016 11:22 AM, James E. Blair wrote:
>>> Sean Dague  writes:
>>>
 Bummer. This gets used a to figure out the state of things given that
 zuul links to the console even after the job is complete. Changing that
 to the log server link would mitigate the blind spot.
>>>
>>> Yeah, we know it's important, which is why we're working on getting it
>>> back, but will take a little bit of time.  In the interim, rather than
>>> linking to a dead URL, I removed the links from the status page
>>> altogether.  However, if it would be better overall to link to the log
>>> server (which will result in 404s until the logs are actually uploaded
>>> at the end of the job), we could probably do that instead.  I'm sure
>>> we'll get questions, but we could probably put a banner at the top of
>>> the page and we may get slightly fewer of them.
>>
>> The links could be added only after the individual test run completes.
>> That would mean no 404s, right? But allow link access once there are
>> results to be seen.
> 
> Yes we could do that -- though for the final job, you may need to watch
> closely to grab the link before it disappears.  However, I guess in that
> case, you can just grab them from the change, eh?
> 
> So -- the best plan is: no links on jobs names to start, then as each
> individual job completes, switch the name to a link to the log url for
> that job.  Yeah?

Yes, that would let you see the results of an individual experimental
run that is complete before they all return and post to the change. Once
they are all done, they are listed on the change, so that's good enough.

Thanks much!

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Michał Jastrzębski
What I meant is: if you have Liberty Nova and Liberty Cinder, and you
want to upgrade Nova to Mitaka, you also upgrade Oslo to Mitaka; Cinder,
which is still Liberty, then either needs to be upgraded or is broken.
So during the upgrade you need to do Cinder and Nova at the same time.
The DB can be snapshotted for rollbacks.
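
One way to avoid that lock-step upgrade, discussed elsewhere in this
thread, is to give each service its own virtualenv. A rough sketch (the
paths, release names and constraint files below are illustrative
assumptions, not a standard layout):

```shell
# Sketch: isolate each service's dependencies in its own virtualenv so
# Nova can move to Mitaka-era libraries while Cinder stays on Liberty's.
# All paths and file names here are assumptions for illustration.
VENV_ROOT="${VENV_ROOT:-./openstack-venvs}"
python3 -m venv "$VENV_ROOT/nova"
python3 -m venv "$VENV_ROOT/cinder"
# Each service would then pin its own dependency set independently, e.g.:
#   "$VENV_ROOT/nova/bin/pip"   install -c mitaka-upper-constraints.txt nova
#   "$VENV_ROOT/cinder/bin/pip" install -c liberty-upper-constraints.txt cinder
```

The cost, as noted later in the thread, is that a library CVE now has to
be fixed in every virtualenv instead of once per host.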

On 18 April 2016 at 11:15, Matthew Thode  wrote:
> On 04/18/2016 10:57 AM, Michał Jastrzębski wrote:
>> So I also want to stress out that shared libraries are huge pain
>> during upgrades. While I'm not in favor of packages with embedded
>> virtualenvs (as Matt pointed out, this has a lot of issues), having
>> shared dependency pool pretty much means that you need to upgrade
>> *everything* that is openstack at single run, and that is prone to
>> errors, volatile and nearly impossible to rollback if something goes
>> wrong. One way to address this issue is putting services in
>> containers, but that is not a solution to the problem at hand (56% use
>> apt-get install, as Graham says). Packagers have a hard time keeping up
>> already; if we add fairly complex logic to this (virtualenvs) we will
>> probably end up in a cross-compatibility hell of people not keeping up
>> with changes.
>>
>> That being said, in my opinion, this percentage is this high because
>> that's exactly what we suggest in install docs, once we came out with
>> a solution we should fix it there as well.
>>
>>
>> On 18 April 2016 at 10:23, Matthew Thode  wrote:
>>> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
 On 18/04/2016 13:51, Sean Dague wrote:
> On 04/18/2016 08:22 AM, Chris Dent wrote:
>> On Mon, 18 Apr 2016, Sean Dague wrote:
>>
>>> So if you have strong feelings and ideas, why not get them out in email
>>> now? That will help in the framing of the conversation.
>>
>> I won't be at summit and I feel pretty strongly about this topic, so
>> I'll throw out my comments:
>>
>> I agree with the basic premise: In the big tent universe co-
>> installability is holding us back and is a huge cost in terms of spent
>> energy. In a world where service isolation is desirable and common
>> (whether by virtualenv, containers, different hosts, etc) targeting an
>> all-in-one install seems only to serve the purposes of all-in-one rpm-
>> or deb-based installations.
>>
>> Many (most?) people won't be doing those kinds of installations. If 
>> all-in-
>> one installations are important to the rpm- and deb- based distributions
>> then _they_ should be resolving the dependency issues local to their own
>> infrastructure (or realizing that it is too painful and start
>> containerizing or otherwise as well).
>>
>> I think making these changes will help to improve and strengthen the
>> boundaries and contracts between services. If not technically then
>> at least socially, in the sense that the negotiations that people
>> make to get things to work are about what actually matters in their
>> services, not unwinding python dependencies and the like.
>>
>> A lot of the basics of getting this to work are already in place in
>> devstack. One challenge I've run into the past is when devstack
>> plugin A has made an assumption about having access to a python
>> script provided by devstack plugin B, but it's not on $PATH or its
>> dependencies are not in the site-packages visible to the current
>> context. The solution here is to use full paths _into_ virtenvs.
>
> As Chris said, doing virtualenvs on the Devstack side for services is
> pretty much there. The team looked at doing this last year, then stopped
> due to operator feedback.
>
> One of the things that gets a little weird (when using devstack for
> development) is if you actually want to see the impact of library
> changes on the environment. As you'll need to make sure you loop and
> install those libraries into every venv where they are used. This
> forward reference doesn't really exist. So some tooling there will be
> needed.
>
> Middleware that's pushed from one project into another (like Ceilometer
> -> Swift) is also a funny edge case that I think get funnier here.
>
> Those are mostly implementation details, that probably have work
> arounds, but would need people on them.
>
>
>  From a strategic perspective this would basically make traditional Linux
> Packaging of OpenStack a lot harder. That might be the right call,
> because traditional Linux Packaging definitely suffers from the fact
> that everything on a host needs to be upgraded at the same time. For
> large installs of OpenStack (especially public cloud cases) traditional
> packages are definitely less used.
>
> However Linux Packaging is how a lot of people get exposed to software.
> The power of onboarding with apt-get / yum install is a big one.
>
> I've been through th

[openstack-dev] [searchlight] [ceilometer] [fuel] [freezer] [monasca] Elasticsearch 2.x gate support

2016-04-18 Thread McLellan, Steven
Hi,

I'm looking into supporting and testing Elasticsearch 2.x in Searchlight's test 
jobs. Currently I don't know of any other projects that run tests against 
Elasticsearch in the gate (since we had to add it to Jenkins [1]). Several 
projects install the python elasticsearch client in requirements.txt, and it is 
currently capped to <2.0 in global-requirements [2], and others consume it 
directly through HTTP requests. Searchlight needs to move to support 
Elasticsearch 2.x in Newton but we are aware that doing so will affect other 
projects.

Elasticsearch 2.x is backwards incompatible [3] with 1.x in some ways. The 
python client library is similarly backwards-incompatible; it is strongly 
recommended the client major version matches the server major version. In 
testing Searchlight we found only a couple of fairly minor changes were needed 
(and the 1.x client library seems to continue to work against a 2.x server),
but YMMV. Devstack's default ES version is 1.4.2 [4] (which should be changed
to 1.7 in any case) and we obviously cannot change that until all projects 
support 2.x.
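
The "client major should match server major" recommendation above reduces
to a tiny startup guard. A sketch (the version strings are illustrative;
with elasticsearch-py the server's version would normally come from the
root endpoint, e.g. Elasticsearch().info()["version"]["number"]):

```python
# Sketch: check that the elasticsearch-py major version matches the
# Elasticsearch server's major version before talking to the cluster.
# The literal version strings below are assumptions for illustration.

def majors_match(client_version, server_version):
    """True when two dotted version strings share a major version."""
    return client_version.split(".")[0] == server_version.split(".")[0]

print(majors_match("1.9.0", "1.7.5"))  # -> True  (1.x client, 1.x server)
print(majors_match("1.9.0", "2.3.1"))  # -> False (mixed majors, unsupported)
```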

A wholesale change to move to Elasticsearch 2.x would require changing 
global-requirements, but this may obviously break projects not ready for the 
change. My questions for the projects affected are:

* Have you tested with ES 2.x at all?
* Do you have plans to move to ES 2.x?

Our likely fallback is testing with the 1.x client until we can move devstack 
and global-requirements to 2.x; if we discover issues in the meantime we will 
include a deployer note that the python library needs to be updated if 
Elasticsearch 2.x is in use.

Thanks,

Steve

[1] 
https://github.com/openstack-infra/project-config/commit/1ac8e52e2be6ff9a8d72e842929ea00e55f6b075
[2] 
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L42
[3] 
https://www.elastic.co/guide/en/elasticsearch/reference/current/breaking-changes-2.0.html
[4] 
https://github.com/openstack-dev/devstack/blob/f0f371951f0df7b797556fd6c5f3ceb0fcc9d76c/pkg/elasticsearch.sh#L13




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
> > On 18/04/2016 13:51, Sean Dague wrote:
> >> On 04/18/2016 08:22 AM, Chris Dent wrote:
> >>> On Mon, 18 Apr 2016, Sean Dague wrote:
> >>>
>  So if you have strong feelings and ideas, why not get them out in email
>  now? That will help in the framing of the conversation.
> >>>
> >>> I won't be at summit and I feel pretty strongly about this topic, so
> >>> I'll throw out my comments:
> >>>
> >>> I agree with the basic premise: In the big tent universe co-
> >>> installability is holding us back and is a huge cost in terms of spent
> >>> energy. In a world where service isolation is desirable and common
> >>> (whether by virtualenv, containers, different hosts, etc) targeting an
> >>> all-in-one install seems only to serve the purposes of all-in-one rpm-
> >>> or deb-based installations.
> >>>
> >>> Many (most?) people won't be doing those kinds of installations. If 
> >>> all-in-
> >>> one installations are important to the rpm- and deb- based distributions
> >>> then _they_ should be resolving the dependency issues local to their own
> >>> infrastructure (or realizing that it is too painful and start
> >>> containerizing or otherwise as well).
> >>>
> >>> I think making these changes will help to improve and strengthen the
> >>> boundaries and contracts between services. If not technically then
> >>> at least socially, in the sense that the negotiations that people
> >>> make to get things to work are about what actually matters in their
> >>> services, not unwinding python dependencies and the like.
> >>>
> >>> A lot of the basics of getting this to work are already in place in
> >>> devstack. One challenge I've run into the past is when devstack
> >>> plugin A has made an assumption about having access to a python
> >>> script provided by devstack plugin B, but it's not on $PATH or its
> >>> dependencies are not in the site-packages visible to the current
> >>> context. The solution here is to use full paths _into_ virtenvs.
> >>
> >> As Chris said, doing virtualenvs on the Devstack side for services is
> >> pretty much there. The team looked at doing this last year, then stopped
> >> due to operator feedback.
> >>
> >> One of the things that gets a little weird (when using devstack for
> >> development) is if you actually want to see the impact of library
> >> changes on the environment. As you'll need to make sure you loop and
> >> install those libraries into every venv where they are used. This
> >> forward reference doesn't really exist. So some tooling there will be
> >> needed.
> >>
> >> Middleware that's pushed from one project into another (like Ceilometer
> >> -> Swift) is also a funny edge case that I think get funnier here.
> >>
> >> Those are mostly implementation details, that probably have work
> >> arounds, but would need people on them.
> >>
> >>
> >>  From a strategic perspective this would basically make traditional Linux
> >> Packaging of OpenStack a lot harder. That might be the right call,
> >> because traditional Linux Packaging definitely suffers from the fact
> >> that everything on a host needs to be upgraded at the same time. For
> >> large installs of OpenStack (especially public cloud cases) traditional
> >> packages are definitely less used.
> >>
> >> However Linux Packaging is how a lot of people get exposed to software.
> >> The power of onboarding with apt-get / yum install is a big one.
> >>
> >> I've been through the ups and downs of both approaches so many times now
> >> in my own head, I no longer have a strong preference beyond the fact
> >> that we do one approach today, and doing a different one is effort to
> >> make the transition.
> >>
> >> -Sean
> >>
> > 
> > It is also worth noting that according to the OpenStack User Survey [0]
> > 56% of deployments use "Unmodifed packages from the operating system".
> > 
> > Granted it was a small sample size (302 responses to that question)
> > but it is worth keeping this in mind as we talk about moving the burden
> > to packagers.
> > 
> > 0 - 
> > https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
> > (page 
> > 36)
> > 
> > 
> To add to this, I'd also note that I as a packager would likely stop
> packaging OpenStack at whatever release this goes into.  While the
> option to package and ship a virtualenv installed to /usr/local or /opt
> exists, bundling is not something that should be supported given the
> issues it can have (update cadence and security issues mainly).

That's a useful data point, but it comes across as a threat and I'm
having trouble taking it as a constructive c

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Doug Hellmann
Excerpts from Hayes, Graham's message of 2016-04-18 13:24:40 +:
> On 18/04/2016 13:51, Sean Dague wrote:
> > On 04/18/2016 08:22 AM, Chris Dent wrote:
> >> On Mon, 18 Apr 2016, Sean Dague wrote:
> >>
> >>> So if you have strong feelings and ideas, why not get them out in email
> >>> now? That will help in the framing of the conversation.
> >>
> >> I won't be at summit and I feel pretty strongly about this topic, so
> >> I'll throw out my comments:
> >>
> >> I agree with the basic premise: In the big tent universe co-
> >> installability is holding us back and is a huge cost in terms of spent
> >> energy. In a world where service isolation is desirable and common
> >> (whether by virtualenv, containers, different hosts, etc) targeting an
> >> all-in-one install seems only to serve the purposes of all-in-one rpm-
> >> or deb-based installations.
> >>
> >> Many (most?) people won't be doing those kinds of installations. If all-in-
> >> one installations are important to the rpm- and deb- based distributions
> >> then _they_ should be resolving the dependency issues local to their own
> >> infrastructure (or realizing that it is too painful and start
> >> containerizing or otherwise as well).
> >>
> >> I think making these changes will help to improve and strengthen the
> >> boundaries and contracts between services. If not technically then
> >> at least socially, in the sense that the negotiations that people
> >> make to get things to work are about what actually matters in their
> >> services, not unwinding python dependencies and the like.
> >>
> >> A lot of the basics of getting this to work are already in place in
> >> devstack. One challenge I've run into the past is when devstack
> >> plugin A has made an assumption about having access to a python
> >> script provided by devstack plugin B, but it's not on $PATH or its
> >> dependencies are not in the site-packages visible to the current
> >> context. The solution here is to use full paths _into_ virtenvs.
> >
> > As Chris said, doing virtualenvs on the Devstack side for services is
> > pretty much there. The team looked at doing this last year, then stopped
> > due to operator feedback.
> >
> > One of the things that gets a little weird (when using devstack for
> > development) is if you actually want to see the impact of library
> > changes on the environment. As you'll need to make sure you loop and
> > install those libraries into every venv where they are used. This
> > forward reference doesn't really exist. So some tooling there will be
> > needed.
> >
> > Middleware that's pushed from one project into another (like Ceilometer
> > -> Swift) is also a funny edge case that I think get funnier here.
> >
> > Those are mostly implementation details, that probably have work
> > arounds, but would need people on them.
> >
> >
> >  From a strategic perspective this would basically make traditional Linux
> > Packaging of OpenStack a lot harder. That might be the right call,
> > because traditional Linux Packaging definitely suffers from the fact
> > that everything on a host needs to be upgraded at the same time. For
> > large installs of OpenStack (especially public cloud cases) traditional
> > packages are definitely less used.
> >
> > However Linux Packaging is how a lot of people get exposed to software.
> > The power of onboarding with apt-get / yum install is a big one.
> >
> > I've been through the ups and downs of both approaches so many times now
> > in my own head, I no longer have a strong preference beyond the fact
> > that we do one approach today, and doing a different one is effort to
> > make the transition.
> >
> > -Sean
> >
> 
> It is also worth noting that according to the OpenStack User Survey [0]
> 56% of deployments use "Unmodifed packages from the operating system".
> 
> Granted it was a small sample size (302 responses to that question)
> but it is worth keeping this in mind as we talk about moving the burden
> to packagers.

To be clear, "Moving the burden to packagers" is not the only option
available to us. I've proposed one option for eliminating the issue,
which has some benefits for us upstream but obviously introduces
some other issues we would need to resolve. Another option is for
more people to get involved in managing the dependency list. Some
(most? all?) of those new people may come from distros, and sharing
the effort among them would make it easier than each of them doing
all of the work individually. Sort of like an open source project.

Doug

> 
> 0 - 
> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
> (page 
> 36)
> 



Re: [openstack-dev] [oslo][keystone][documentation][gate] Babel dependency for oslo.log

2016-04-18 Thread Joshua Harlow

Andreas Jaeger wrote:

On 04/17/2016 09:15 PM, Davanum Srinivas wrote:

Hi Oslo folks, Andreas and others,

Over the weekend oslo.log 3.4.0 was released. This broke keystone CI
jobs [2] even though 3.4.0 was not specified in upper-constraints,
because the keystone jobs were not honoring upper-constraints.txt; we
fixed that in [3].

So the first big problem after [3] was that several tox targets do not
inject u-c and hence fail, so in [3] we also added install_commands
for testenv:releasenotes and testenv:cover, based on the pattern set
in Nova's tox.ini [4]. That was still not enough and we had to add an
entry in keystone's requirements.txt for Babel even though it was not
there before (and hence pulling in latest Babel from somewhere).
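
For reference, the install_command override borrowed from Nova's tox.ini
looks roughly like the fragment below. This is a sketch only; the exact
constraints URL and the environment variable name are assumptions.

```ini
# Sketch, modeled on Nova's tox.ini: make tox environments that bypass the
# default testenv also install with upper-constraints applied.
[testenv:releasenotes]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}

[testenv:cover]
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
```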

So here are the questions:
1) Is there anyone working to fix all tox CI jobs to honor upper constraints?
2) Why do we need Babel in oslo.log's requirements.txt?
3) Can we remove Babel from all requirements.txt and
test-requirements.txt and leave them in just tox.ini when needed?

Note that there was nothing wrong either in oslo.log itself it
published a release with what was in global-requirements.txt, nor in
keystone, which has traditionally not run with constraints on. Just
the combination of situations with Babel going bad broke at least
keystone.

Did anyone else see other jobs break? Please respond!

Thanks,
Dims


[1] http://markmail.org/message/ygyxpjpbhlbz3q5d
[2] 
http://logs.openstack.org/86/249486/32/check/gate-keystone-python34-db/29ace4f/console.html#_2016-04-17_04_31_51_138
[3] https://review.openstack.org/#/c/306846/
[4] http://git.openstack.org/cgit/openstack/nova/tree/tox.ini


I think what happened is:
1) oslo.log indirectly requires Babel
2) requirements blacklists Babel 2.3.x
3) keystone has new requirements included and thus fails

The problem here is that oslo.log requires oslo.i18n, which requires
Babel. If oslo.i18n had had a release with Babel 2.3.x blacklisted,
this wouldn't have happened. So, I propose releasing oslo.i18n.

Babel 2.3.4, which fixes the known problems, might be out soon as well;
if it does not introduce regressions, this will self-heal.
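
In requirements.txt terms, the blacklisting mentioned above is an
exclusion-style version specifier along these lines (the exact set of
excluded releases is an assumption based on the broken 2.3.x series):

```
# Hypothetical global-requirements entry skipping the broken Babel releases
Babel>=1.3,!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3
```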


Ok, so which option should we go with here?

I'm ok with releasing oslo.i18n or Babel 2.3.4 (when is this release
happening, soon? like soon soon?)




Andreas




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Sean Dague
On 04/18/2016 01:33 PM, Doug Hellmann wrote:
> Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:

>> To add to this, I'd also note that I as a packager would likely stop
>> packaging OpenStack at whatever release this goes into.  While the
>> option to package and ship a virtualenv installed to /usr/local or /opt
>> exists, bundling is not something that should be supported given the
>> issues it can have (update cadence and security issues mainly).
> 
> That's a useful data point, but it comes across as a threat and I'm
> having trouble taking it as a constructive comment.
> 
> Can you truly not imagine any other useful way to package OpenStack
> other than individual packages with shared dependencies that would
> be acceptable?

I think it's important to realize that if we go down this route, I'd
expect a lot of community distros to take that standpoint. Only
distros with a product will be able to take on the work.

We often get annoyed with projects in our own space being "special
snowflakes" and doing things differently. OpenStack demanding that every
component has a copy of its own dependencies is definitely being a
special snowflake to the distros. And for those not building a product,
it's probably just going to be too much work. I'd rather be thankful for
Matthew's honesty about that up front instead of not saying anything,
and it getting quietly dropped, and people being mad later.

A lot of distros specifically have policies against this kind of
bundling as well, because of security issues like this (which was so
very bad) - http://www.zlib.net/advisory-2002-03-11.txt

How to mitigate that kind of issue and "fleet deploy" CVEed libraries in
these environments is definitely an open question, especially as it
doesn't fit into the security stream and tools that distros have built
over the last couple of decades.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Hayes, Graham
 On Mon, 18 Apr 2016, Sean Dague wrote:



 Many (most?) people won't be doing those kinds of installations. If all-in-
 one installations are important to the rpm- and deb-based distributions
 then _they_ should be resolving the dependency issues local to their own
 infrastructure (or realizing that it is too painful and start
 containerizing or otherwise as well).


Sorry - I was responding to the point above - I should have made that
clearer.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Cinder-Nova API meeting

2016-04-18 Thread Ildikó Váncsa
Hi All,

We are having the last Cinder-Nova API interactions meeting before the Summit 
this Wednesday __20th April 2100UTC__, on the #openstack-meeting-cp channel.

You can find the information about the recent discussions here: 
https://etherpad.openstack.org/p/cinder-nova-api-changes

This week we will mainly focus on the scenarios beyond attach/detach that we 
need to address during the design sessions, mostly from a multi-attach perspective.

Best Regards,
/Ildikó

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Question on removal of 'arbitrary' pluggable interfaces

2016-04-18 Thread Joshua Harlow

Hi nova folks,

I was reading over the following:

http://lists.openstack.org/pipermail/openstack-operators/2016-April/010186.html

And I am wondering if there is a list of all the plugin points and their 
schedule for being deprecated and then removed (or am I misreading that 
mail/thread?).


I would assume this includes things like:

- baremetal_scheduler_default_filters
- cells/driver
- cells/scheduler
- cells/scheduler_filter_classes
- compute_manager
- compute_stats_class
- conductor/manager
- console_driver
- console_manager
- consoleauth_manager
- db_driver
- firewall_driver
- floating_ip_dns_manager
- instance_dns_manager
- keymgr/api_class
- l3_lib
- linuxnet_interface_driver
- metadata_manager
- network_api_class
- network_driver
- network_manager
- osapi_compute_extension
- quota_driver
- scheduler_available_filters
- scheduler_default_filters
- scheduler_driver
- scheduler_host_manager
- scheduler_weight_classes
- servicegroup_driver
- vendordata_driver
- volume_api_class
- xenserver/vif_driver

(and/or any I missed that end with '_driver' or '_manager').

Also, are there any docs on the reasons for removal? (I think I get why; 
I just wanted to be able to reference something for others.) I can 
imagine such a wiki/doc will be needed, or people will 
start flipping a lot of tables.


Any thoughts on the above (or a table showing timelines and reasons) 
would be great,


Much appreciated!

-Josh


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Matt Fischer
Thanks Brant,

I was missing that distinction.

On Mon, Apr 18, 2016 at 9:43 AM, Brant Knudson  wrote:

>
>
> On Mon, Apr 18, 2016 at 10:20 AM, Matt Fischer 
> wrote:
>
>> On Mon, Apr 18, 2016 at 8:29 AM, Brant Knudson  wrote:
>>
>>>
>>>
>>> On Fri, Apr 15, 2016 at 9:04 PM, Adam Young  wrote:
>>>
 We all want Fernet to be a reality.  We ain't there yet (Except for
 mfish who has no patience) but we are getting closer.  The goal is to get
 Fernet as the default token provider as soon as possible. The review to do
 this has uncovered a few details that need to be fixed before we can do
 this.

 Trusts for V2 tokens were not working correctly.  Relatively easy fix.
 https://review.openstack.org/#/c/278693/ Patch is still failing on
 Python 3.  The tests are kind of racy due to the revocation event's 1-second
 granularity.  Some of the tests here have a sleep(1) in them still, but
 all should be using the time control aspect of the unit test fixtures.

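The "time control" approach mentioned above can replace the sleep(1) calls; here is a minimal stdlib-only sketch (FakeClock and token_is_revoked are illustrative stand-ins, not keystone code):

```python
import time
from unittest import mock

class FakeClock:
    """Controllable replacement for time.time(), so tests can advance
    the clock instantly instead of sleeping for real seconds."""

    def __init__(self, start=1_000_000.0):
        self.now = start

    def __call__(self):
        return self.now

    def advance(self, seconds):
        self.now += seconds

def token_is_revoked(token_issued_at, revocation_event_at):
    # Illustrative stand-in for keystone's check: revocation events have
    # one-second granularity, so compare timestamps truncated to seconds.
    return int(token_issued_at) <= int(revocation_event_at)

clock = FakeClock()
with mock.patch("time.time", clock):
    issued = time.time()
    clock.advance(1)          # replaces sleep(1) in the racy tests
    revoked = token_is_revoked(issued, revocation_event_at=time.time())

print(revoked)  # -> True
```

Advancing the fake clock is deterministic, so the one-second granularity stops being a source of races.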
 Some of the tests also use the same user to validate a token as the one
 that has, for example, a role unassigned.  These expose a problem that the
 revocation events are catching too many tokens, some of which should not be
 treated as revoked.

 Also, some of the logic for revocation checking has to change. Before,
 if a user had two roles, and had one removed, the token would be revoked.
 Now, however, the token will validate successfully, but the response will
 only have the single assigned role in it.


 Python 3 tests are failing because the Fernet formatter is insisting
 that all project-ids be valid UUIDs, but some of the old tests have "FOO"
 and "BAR" as ids.  These either need to be converted to UUIDS, or the
 formatter needs to be more forgiving.

 Caching of token validations was messing with revocation checking.
 Tokens that were valid once were being reported as always valid. Thus, the
 current review  removes all caching on token validations, a change we
 cannot maintain.  Once all the test are successfully passing, we will
 re-introduce the cache, and be far more aggressive about cache 
 invalidation.

 Tempest tests are currently failing due to Devstack not properly
 identifying Fernet as the default token provider, and creating the Fernet
 key repository.  I'm tempted to just force devstack to always create the
 directory, as a user would need it if they ever switched the token provider
 post launch anyway.


>>> There's a review to change devstack to default to fernet:
>>> https://review.openstack.org/#/c/195780/ . This was mostly to show that
>>> tempest still passes with fernet configured. It uncovered a couple of test
>>> issues (similar in nature to the revocation checking issues mentioned in
>>> the original note) that have since been fixed.
>>>
>>> We'd prefer to not have devstack overriding config options and instead
>>> use keystone's defaults. The problem is if fernet is the default in
>>> keystone then it won't work out of the box since the key database won't
>>> exist. One option that I think we should investigate is to have keystone
>>> create the key database on startup if it doesn't exist.
>>>
>>> - Brant
>>>
>>>
>>
>> I'm not a devstack user, but as I mentioned before, I assume devstack
>> called keystone-manage db_sync? Why couldn't it also call keystone-manage
>> fernet_setup?
>>
>>
> When you tell devstack that it's using fernet then it does keystone-manage
> fernet_setup. When you tell devstack to use the default, it doesn't
> fernet_setup because for now it thinks the default is UUID and doesn't
> require keys. One way to have devstack work when fernet is the default is
> to have devstack always do keystone-manage fernet_setup.
>
> Really what we want to do is have devstack work like other deployment
> methods. We can reasonably expect featureful deployers like puppet to
> keystone-manage fernet_setup in the course of setting up keystone. There's
> more basic deployers like RPMs or debs that in the past have said they like
> the defaults to "just work" (like UUID tokens) and not require extra
> commands.
>
> - Brant
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
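Brant's suggestion above (have keystone create the key database on startup if it doesn't exist) might look roughly like this sketch; the directory layout and initial key file name "0" imitate what keystone-manage fernet_setup produces, but this is a simplified illustration, not keystone code:

```python
import base64
import os
import stat

def ensure_key_repository(key_repo="/etc/keystone/fernet-keys"):
    """Create a Fernet key repository with an initial key if missing.

    Returns True if a new repository was initialized, False if one
    already existed.  The real keystone-manage fernet_setup also handles
    ownership and rotation; this only covers first-boot creation.
    """
    if os.path.isdir(key_repo) and os.listdir(key_repo):
        return False
    os.makedirs(key_repo, mode=0o700, exist_ok=True)
    # A Fernet key is 32 random bytes, url-safe base64 encoded; in a
    # fresh repository the single key is written as file "0".
    key = base64.urlsafe_b64encode(os.urandom(32))
    key_path = os.path.join(key_repo, "0")
    with open(key_path, "wb") as f:
        f.write(key)
    os.chmod(key_path, stat.S_IRUSR | stat.S_IWUSR)
    return True
```

Called from keystone's startup path, this would make a fernet default work out of the box, at the cost of the service generating security-sensitive key material implicitly.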


Re: [openstack-dev] [nova] Question on removal of 'arbitrary' pluggable interfaces

2016-04-18 Thread Jay Pipes
Each configuration option's deprecation is indicated in the Nova source 
code in the configuration option's declaration. For instance:


https://github.com/openstack/nova/blob/master/nova/conf/compute.py#L86-L91

grep for deprecated_for_removal=True

Best,
-jay

On 04/18/2016 02:08 PM, Joshua Harlow wrote:

Hi nova folks,

I was reading over the following:

http://lists.openstack.org/pipermail/openstack-operators/2016-April/010186.html


And I am wondering if there is a list of all the plugin points and their
schedule for being deprecated and then removed (or am I misreading that
mail/thread?).

I would assume this includes things like:

- baremetal_scheduler_default_filters
- cells/driver
- cells/scheduler
- cells/scheduler_filter_classes
- compute_manager
- compute_stats_class
- conductor/manager
- console_driver
- console_manager
- consoleauth_manager
- db_driver
- firewall_driver
- floating_ip_dns_manager
- instance_dns_manager
- keymgr/api_class
- l3_lib
- linuxnet_interface_driver
- metadata_manager
- network_api_class
- network_driver
- network_manager
- osapi_compute_extension
- quota_driver
- scheduler_available_filters
- scheduler_default_filters
- scheduler_driver
- scheduler_host_manager
- scheduler_weight_classes
- servicegroup_driver
- vendordata_driver
- volume_api_class
- xenserver/vif_driver

(and/or any I missed that end with '_driver' or '_manager').

Also, are there any docs on the reasons for removal? (I think I get why;
I just wanted to be able to reference something for others.) I can
imagine such a wiki/doc will be needed, or people will
start flipping a lot of tables.

Any thoughts on the above (or a table showing timelines and reasons)
would be great,

Much appreciated!

-Josh


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
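Jay's grep tip can also be mechanized; a small sketch that pulls the option name out of each declaration carrying deprecated_for_removal=True (the sample text imitates the nova/conf declaration style and is not actual nova source):

```python
import re

# Matches cfg.XxxOpt('name', ... deprecated_for_removal=True ...)
OPT_RE = re.compile(
    r"cfg\.\w+Opt\(\s*['\"](?P<name>\w+)['\"][^)]*"
    r"deprecated_for_removal\s*=\s*True",
    re.S,
)

def deprecated_options(source):
    """Return option names declared with deprecated_for_removal=True."""
    return [m.group("name") for m in OPT_RE.finditer(source)]

# Illustrative snippet in the style of nova/conf modules (not real source).
SAMPLE = """
compute_opts = [
    cfg.StrOpt('compute_manager',
               default='nova.compute.manager.ComputeManager',
               deprecated_for_removal=True,
               help='Full class name for the Manager for compute'),
    cfg.IntOpt('reserved_host_disk_mb',
               default=0,
               help='Amount of disk to reserve for the host'),
]
"""

print(deprecated_options(SAMPLE))  # -> ['compute_manager']
```

Run over nova/conf/ this would answer Josh's question mechanically rather than from a maintained list.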


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Matthew Thode
On 04/18/2016 12:33 PM, Doug Hellmann wrote:
> Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
>> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
>>> On 18/04/2016 13:51, Sean Dague wrote:
 On 04/18/2016 08:22 AM, Chris Dent wrote:
> On Mon, 18 Apr 2016, Sean Dague wrote:
>
>> So if you have strong feelings and ideas, why not get them out in email
>> now? That will help in the framing of the conversation.
>
> I won't be at summit and I feel pretty strongly about this topic, so
> I'll throw out my comments:
>
> I agree with the basic premise: In the big tent universe co-
> installability is holding us back and is a huge cost in terms of spent
> energy. In a world where service isolation is desirable and common
> (whether by virtualenv, containers, different hosts, etc) targeting an
> all-in-one install seems only to serve the purposes of all-in-one rpm-
> or deb-based installations.
>
> Many (most?) people won't be doing those kinds of installations. If
> all-in-one installations are important to the rpm- and deb-based distributions
> then _they_ should be resolving the dependency issues local to their own
> infrastructure (or realizing that it is too painful and start
> containerizing or otherwise as well).
>
> I think making these changes will help to improve and strengthen the
> boundaries and contracts between services. If not technically then
> at least socially, in the sense that the negotiations that people
> make to get things to work are about what actually matters in their
> services, not unwinding python dependencies and the like.
>
> A lot of the basics of getting this to work are already in place in
> devstack. One challenge I've run into the past is when devstack
> plugin A has made an assumption about having access to a python
> script provided by devstack plugin B, but it's not on $PATH or its
> dependencies are not in the site-packages visible to the current
> context. The solution here is to use full paths _into_ virtenvs.

 As Chris said, doing virtualenvs on the Devstack side for services is
 pretty much there. The team looked at doing this last year, then stopped
 due to operator feedback.

 One of the things that gets a little weird (when using devstack for
 development) is if you actually want to see the impact of library
 changes on the environment. As you'll need to make sure you loop and
 install those libraries into every venv where they are used. This
 forward reference doesn't really exist. So some tooling there will be
 needed.

 Middleware that's pushed from one project into another (like Ceilometer
 -> Swift) is also a funny edge case that I think get funnier here.

 Those are mostly implementation details, that probably have work
 arounds, but would need people on them.


  From a strategic perspective this would basically make traditional Linux
 Packaging of OpenStack a lot harder. That might be the right call,
 because traditional Linux Packaging definitely suffers from the fact
 that everything on a host needs to be upgraded at the same time. For
 large installs of OpenStack (especially public cloud cases) traditional
 packages are definitely less used.

 However Linux Packaging is how a lot of people get exposed to software.
 The power of onboarding with apt-get / yum install is a big one.

 I've been through the ups and downs of both approaches so many times now
 in my own head, I no longer have a strong preference beyond the fact
 that we do one approach today, and doing a different one is effort to
 make the transition.

 -Sean

>>>
>>> It is also worth noting that according to the OpenStack User Survey [0]
>>> 56% of deployments use "Unmodifed packages from the operating system".
>>>
>>> Granted it was a small sample size (302 responses to that question)
>>> but it is worth keeping this in mind as we talk about moving the burden
>>> to packagers.
>>>
>>> 0 - 
>>> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
>>> (page 
>>> 36)
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> To add to this, I'd also note that I as a packager would likely stop
>> packaging OpenStack at whatever release this goes into.  While the
>> option to package and ship a virtualenv installed to /usr/local or /opt
>> exists, bundling is not something that should be supported given the
>> issues it can have (update cadence and security issues mainly).
> 
> That's a useful data point, but it comes across as a threat 

Re: [openstack-dev] [oslo][keystone][documentation][gate] Babel dependency for oslo.log

2016-04-18 Thread Davanum Srinivas
Josh,

So Andreas and I talked a bit; it seems like NONE of the oslo.* libs
except oslo.i18n needs a direct dependency on Babel. So we should yank
them all out and bump major versions
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/latest.log.html#t2016-04-18T11:58:10

Thanks,
Dims

On Mon, Apr 18, 2016 at 1:42 PM, Joshua Harlow  wrote:
> Andreas Jaeger wrote:
>>
>> On 04/17/2016 09:15 PM, Davanum Srinivas wrote:
>>>
>>> Hi Oslo folks, Andreas and others,
>>>
>>> Over the weekend oslo.log 3.4.0 was released. This broke keystone CI
>>> jobs [2], even though the 3.4.0 was not specified in upper-constraints
>>> as keystone jobs were not honoring the upper-constraints.txt, so we
>>> fixed it in [3].
>>>
>>> So the first big problem after [3] was that several tox targets do not
>>> inject u-c and hence fail, so in [3] we also added install_commands
>>> for testenv:releasenotes and testenv:cover, based on the pattern set
>>> in Nova's tox.ini [4]. That was still not enough and we had to add an
>>> entry in keystone's requirements.txt for Babel even though it was not
>>> there before (and hence pulling in latest Babel from somewhere).
>>>
>>> So Here are the questions:
>>> 1) Is there anyone working to fix all tox CI jobs to honor upper
>>> constraints?
>>> 2) Why do we need Babel in oslo.log's requirements.txt?
>>> 3) Can we remove Babel from all requirements.txt and
>>> test-requirements.txt and leave them in just tox.ini when needed?
>>>
>>> Note that there was nothing wrong either in oslo.log itself it
>>> published a release with what was in global-requirements.txt, nor in
>>> keystone, which has traditionally not run with constraints on. Just
>>> the combination of situations with Babel going bad broke at least
>>> keystone.
>>>
>>> Did anyone else see other jobs break? Please respond!
>>>
>>> Thanks,
>>> Dims
>>>
>>>
>>> [1] http://markmail.org/message/ygyxpjpbhlbz3q5d
>>> [2]
>>> http://logs.openstack.org/86/249486/32/check/gate-keystone-python34-db/29ace4f/console.html#_2016-04-17_04_31_51_138
>>> [3] https://review.openstack.org/#/c/306846/
>>> [4] http://git.openstack.org/cgit/openstack/nova/tree/tox.ini
>>
>>
>> I think what happened is:
>> 1) oslo.log indirectly requires Babel
>> 2) requirements blacklists Babel 2.3.x
>> 3) keystone has new requirements included and thus fails
>>
>> The problem here is that oslo.log requires oslo.i18n which requires
>> Babel. And if oslo.i18n would have had a release with the blacklisting
>> of Babel 2.3.x, this wouldn't have happened. So, I propose to release
>> oslo.i18n.
>>
>> Babel 2.3.4 which fixes the known problems might be out soon as well -
>> and if that does not introduce regressions, this will self-heal,
>
>
> Ok, so which option should we go with here?
>
> I'm ok with releasing oslo.i18n or Babel 2.3.4 (when is this release
> happening, soon? like soon soon?)
>
>>
>> Andreas
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
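Checking which projects still declare Babel directly is a quick scan over their requirements files; a sketch (the sample contents are illustrative, not the projects' real requirements.txt):

```python
import re

def declares_babel(requirements_text):
    """True if a requirements.txt declares Babel as a direct dependency."""
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop comments
        if re.match(r"(?i)^babel\b", line):
            return True
    return False

# Illustrative contents; the real files live in each project's checkout.
SAMPLES = {
    "oslo.i18n": "Babel>=1.3\npbr>=1.6\n",
    "oslo.log":  "pbr>=1.6\noslo.i18n>=2.1.0\n# Babel comes via oslo.i18n\n",
}

direct = [name for name, text in SAMPLES.items() if declares_babel(text)]
print(direct)  # -> ['oslo.i18n']
```

Looping this over all oslo.* checkouts would confirm dims' claim that only oslo.i18n needs the direct dependency.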


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-04-18 13:49:31 -0400:
> On 04/18/2016 01:33 PM, Doug Hellmann wrote:
> > Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
> 
> >> To add to this, I'd also note that I as a packager would likely stop
> >> packaging OpenStack at whatever release this goes into.  While the
> >> option to package and ship a virtualenv installed to /usr/local or /opt
> >> exists, bundling is not something that should be supported given the
> >> issues it can have (update cadence and security issues mainly).
> > 
> > That's a useful data point, but it comes across as a threat and I'm
> > having trouble taking it as a constructive comment.
> > 
> > Can you truly not imagine any other useful way to package OpenStack
> > other than individual packages with shared dependencies that would
> > be acceptable?
> 
> I think it's important to realize that if we go down this route, I'd
> expect a lot of community  distros to take that stand point. Only
> distros with a product will be able to take on the work.
> 
> We often get annoyed with projects in our own space being "special
> snowflakes" and doing things differently. OpenStack demanding that every
> component has a copy of its own dependencies is definitely being a
> special snowflake to the distros. And for those not building a product,
> it's probably just going to be too much work. I'd rather have Matthew's
> honesty about that up front than have him say nothing, the packaging get
> quietly dropped, and people end up mad later.

That's fair. It's still bothersome that the answer is "we'd walk away
from you" rather than "we understand the pressure our requirement places
on you and would like to work on a solution with you."

> 
> A lot of distros specifically have policies against this kind of
> bundling as well, because of security issues like this (which was so
> very bad) - http://www.zlib.net/advisory-2002-03-11.txt
> 
> How to mitigate that kind of issue and "fleet deploy" CVEed libraries in
> these environments is definitely an open question, especially as it
> doesn't fit into the security stream and tools that distros have built
> over the last couple of decades.

Yep. That's why I'm not trying to prescribe a solution. Our upstream
solution can be pretty light-weight, and that leaves room for
downstream folks to make different choices.

Doug

> 
> -Sean
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
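The open question Sean raises, how to find and fix CVEed copies of a library across many virtualenvs, at least has a straightforward detection half; a sketch that walks venv roots for *.dist-info directories (the paths and the "fixed" version below are illustrative):

```python
import os
import re

def installed_versions(root, package):
    """Yield (dist_info_path, version) for every copy of *package* under root.

    Walks the tree looking for <package>-<version>.dist-info directories,
    which is how pip records installs inside each virtualenv.
    """
    pat = re.compile(
        r"^%s-(?P<ver>[^-]+)\.dist-info$" % re.escape(package), re.I
    )
    for dirpath, dirnames, _files in os.walk(root):
        for d in dirnames:
            m = pat.match(d)
            if m:
                yield os.path.join(dirpath, d), m.group("ver")

def vulnerable_copies(root, package="Babel", fixed=(2, 3, 4)):
    """Flag any bundled copy older than the (hypothetical) fixed version."""
    hits = []
    for path, ver in installed_versions(root, package):
        parts = tuple(int(x) for x in ver.split(".") if x.isdigit())
        if parts < fixed:
            hits.append((path, ver))
    return hits
```

Remediation would still mean re-installing inside each flagged venv, which is exactly the tooling gap the distros' security streams currently fill.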


Re: [openstack-dev] [puppet] Stepping down from puppet-openstack-core

2016-04-18 Thread Matt Fischer
On Mon, Apr 18, 2016 at 9:37 AM, Sebastien Badia  wrote:

> Hello here,
>
> I would like to ask to be removed from the core reviewers team on the
> Puppet for OpenStack project.
>
> I lack dedicated time to contribute to the project in my spare time, and I
> no longer work on OpenStack deployments.
>
> In the past months, I stopped reviewing and submitting changes on our
> project, which is why I gradually slipped down into the abyss of the
> group's stats² :-)
> The community CoC¹ suggests I step down considerately.
>
> I've never been very talkative, but retrospectively it was a great
> adventure, I
> learned a lot at your side. I'm very proud to see where the project is now.
>
> So Long, and Thanks for All the Fish
> I wish you the best ♥
>
> Seb
>
> ¹http://www.openstack.org/legal/community-code-of-conduct/
> ²http://stackalytics.com/report/contribution/puppetopenstack-group/90
> --
> Sebastien Badia
>
>
Thanks Sebastien for all your work!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Doug Hellmann
Excerpts from Matthew Thode's message of 2016-04-18 13:22:38 -0500:
> On 04/18/2016 12:33 PM, Doug Hellmann wrote:
> > Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
> >> On 04/18/2016 08:24 AM, Hayes, Graham wrote:
> >>> On 18/04/2016 13:51, Sean Dague wrote:
>  On 04/18/2016 08:22 AM, Chris Dent wrote:
> > On Mon, 18 Apr 2016, Sean Dague wrote:
> >
> >> So if you have strong feelings and ideas, why not get them out in email
> >> now? That will help in the framing of the conversation.
> >
> > I won't be at summit and I feel pretty strongly about this topic, so
> > I'll throw out my comments:
> >
> > I agree with the basic premise: In the big tent universe co-
> > installability is holding us back and is a huge cost in terms of spent
> > energy. In a world where service isolation is desirable and common
> > (whether by virtualenv, containers, different hosts, etc) targeting an
> > all-in-one install seems only to serve the purposes of all-in-one rpm-
> > or deb-based installations.
> >
> > Many (most?) people won't be doing those kinds of installations. If
> > all-in-one installations are important to the rpm- and deb-based distributions
> > then _they_ should be resolving the dependency issues local to their own
> > infrastructure (or realizing that it is too painful and start
> > containerizing or otherwise as well).
> >
> > I think making these changes will help to improve and strengthen the
> > boundaries and contracts between services. If not technically then
> > at least socially, in the sense that the negotiations that people
> > make to get things to work are about what actually matters in their
> > services, not unwinding python dependencies and the like.
> >
> > A lot of the basics of getting this to work are already in place in
> > devstack. One challenge I've run into the past is when devstack
> > plugin A has made an assumption about having access to a python
> > script provided by devstack plugin B, but it's not on $PATH or its
> > dependencies are not in the site-packages visible to the current
> > context. The solution here is to use full paths _into_ virtenvs.
> 
>  As Chris said, doing virtualenvs on the Devstack side for services is
>  pretty much there. The team looked at doing this last year, then stopped
>  due to operator feedback.
> 
>  One of the things that gets a little weird (when using devstack for
>  development) is if you actually want to see the impact of library
>  changes on the environment. As you'll need to make sure you loop and
>  install those libraries into every venv where they are used. This
>  forward reference doesn't really exist. So some tooling there will be
>  needed.
> 
>  Middleware that's pushed from one project into another (like Ceilometer
>  -> Swift) is also a funny edge case that I think get funnier here.
> 
>  Those are mostly implementation details, that probably have work
>  arounds, but would need people on them.
> 
> 
>   From a strategic perspective this would basically make traditional Linux
>  Packaging of OpenStack a lot harder. That might be the right call,
>  because traditional Linux Packaging definitely suffers from the fact
>  that everything on a host needs to be upgraded at the same time. For
>  large installs of OpenStack (especially public cloud cases) traditional
>  packages are definitely less used.
> 
>  However Linux Packaging is how a lot of people get exposed to software.
>  The power of onboarding with apt-get / yum install is a big one.
> 
>  I've been through the ups and downs of both approaches so many times now
>  in my own head, I no longer have a strong preference beyond the fact
>  that we do one approach today, and doing a different one is effort to
>  make the transition.
> 
>  -Sean
> 
> >>>
> >>> It is also worth noting that according to the OpenStack User Survey [0]
> >>> 56% of deployments use "Unmodifed packages from the operating system".
> >>>
> >>> Granted it was a small sample size (302 responses to that question)
> >>> but it is worth keeping this in mind as we talk about moving the burden
> >>> to packagers.
> >>>
> >>> 0 - 
> >>> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
> >>> (page 
> >>> 36)
> >>>
> >>> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >> To add to this, I'd also note that I as a packager would likely stop
> >> packaging Openstack at whatever release this goes into.  While the
> >> option to packa

Re: [openstack-dev] [oslo][keystone][documentation][gate] Babel dependency for oslo.log

2016-04-18 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2016-04-18 14:27:48 -0400:
> Josh,
> 
> So Andreas and I talked a bit; it seems like NONE of the oslo.* libs
> except oslo.i18n needs a direct dependency on Babel. So we should yank
> them all out and bump major versions
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/latest.log.html#t2016-04-18T11:58:10

I don't think we need to raise major versions to drop a dependency. We
only need to do that for backwards-incompatible changes, and this
doesn't seem to be one.

Doug

> 
> Thanks,
> Dims
> 
> On Mon, Apr 18, 2016 at 1:42 PM, Joshua Harlow  wrote:
> > Andreas Jaeger wrote:
> >>
> >> On 04/17/2016 09:15 PM, Davanum Srinivas wrote:
> >>>
> >>> Hi Oslo folks, Andreas and others,
> >>>
> >>> Over the weekend oslo.log 3.4.0 was released. This broke keystone CI
> >>> jobs [2], even though the 3.4.0 was not specified in upper-constraints
> >>> as keystone jobs were not honoring the upper-constraints.txt, so we
> >>> fixed it in [3].
> >>>
> >>> So the first big problem after [3] was that several tox targets do not
> >>> inject u-c and hence fail, so in [3] we also added install_commands
> >>> for testenv:releasenotes and testenv:cover, based on the pattern set
> >>> in Nova's tox.ini [4]. That was still not enough and we had to add an
> >>> entry in keystone's requirements.txt for Babel even though it was not
> >>> there before (and hence pulling in latest Babel from somewhere).
> >>>
> >>> So Here are the questions:
> >>> 1) Is there anyone working to fix all tox CI jobs to honor upper
> >>> constraints?
> >>> 2) Why do we need Babel in oslo.log's requirements.txt?
> >>> 3) Can we remove Babel from all requirements.txt and
> >>> test-requirements.txt and leave them in just tox.ini when needed?
> >>>
> >>> Note that there was nothing wrong either in oslo.log itself it
> >>> published a release with what was in global-requirements.txt, nor in
> >>> keystone, which has traditionally not run with constraints on. Just
> >>> the combination of situations with Babel going bad broke at least
> >>> keystone.
> >>>
> >>> Did anyone else see other jobs break? Please respond!
> >>>
> >>> Thanks,
> >>> Dims
> >>>
> >>>
> >>> [1] http://markmail.org/message/ygyxpjpbhlbz3q5d
> >>> [2]
> >>> http://logs.openstack.org/86/249486/32/check/gate-keystone-python34-db/29ace4f/console.html#_2016-04-17_04_31_51_138
> >>> [3] https://review.openstack.org/#/c/306846/
> >>> [4] http://git.openstack.org/cgit/openstack/nova/tree/tox.ini
> >>
> >>
> >> I think what happened is:
> >> 1) oslo.log indirectly requires Babel
> >> 2) requirements blacklists Babel 2.3.x
> >> 3) keystone has new requirements included and thus fails
> >>
> >> The problem here is that oslo.log requires oslo.i18n which requires
> >> Babel. And if oslo.i18n would have had a release with the blacklisting
> >> of Babel 2.3.x, this wouldn't have happened. So, I propose to release
> >> oslo.i18n.
> >>
> >> Babel 2.3.4 which fixes the known problems might be out soon as well -
> >> and if that does not introduce regressions, this will self-heal,
> >
> >
> > Ok, so which option should we go with here?
> >
> > I'm ok with releasing oslo.i18n or Babel 2.3.4 (when is this release
> > happening, soon? like soon soon?)
> >
> >>
> >> Andreas
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Morgan Fainberg
On Mon, Apr 18, 2016 at 7:29 AM, Brant Knudson  wrote:

>
>
> On Fri, Apr 15, 2016 at 9:04 PM, Adam Young  wrote:
>
>> We all want Fernet to be a reality.  We ain't there yet (Except for mfish
>> who has no patience) but we are getting closer.  The goal is to get Fernet
>> as the default token provider as soon as possible. The review to do this
>> has uncovered a few details that need to be fixed before we can do this.
>>
>> Trusts for V2 tokens were not working correctly.  Relatively easy fix.
>> https://review.openstack.org/#/c/278693/ Patch is still failing on
>> Python 3.  The tests are kind of racy due to the revocation event's 1-second
>> granularity.  Some of the tests here still have a sleep(1) in them, but
>> all should be using the time-control aspect of the unit test fixtures.
>>
>> Some of the tests also use the same user both to validate a token and to
>> have, for example, a role unassigned.  These expose a problem that the
>> revocation events are catching too many tokens, some of which should not be
>> treated as revoked.
>>
>> Also, some of the logic for revocation checking has to change. Before, if
>> a user had two roles, and had one removed, the token would be revoked.
>> Now, however, the token will validate successful, but the response will
>> only have the single assigned role in it.
>>
>>
>> Python 3 tests are failing because the Fernet formatter is insisting that
>> all project-ids be valid UUIDs, but some of the old tests have "FOO" and
>> "BAR" as ids.  These either need to be converted to UUIDS, or the formatter
>> needs to be more forgiving.
>>
>> Caching of token validations was messing with revocation checking. Tokens
>> that were valid once were being reported as always valid. Thus, the current
>> review  removes all caching on token validations, a change we cannot
>> maintain.  Once all the test are successfully passing, we will re-introduce
>> the cache, and be far more aggressive about cache invalidation.
>>
>> Tempest tests are currently failing due to Devstack not properly
>> identifying Fernet as the default token provider, and creating the Fernet
>> key repository.  I'm tempted to just force devstack to always create the
>> directory, as a user would need it if they ever switched the token provider
>> post launch anyway.
>>
>>
> There's a review to change devstack to default to fernet:
> https://review.openstack.org/#/c/195780/ . This was mostly to show that
> tempest still passes with fernet configured. It uncovered a couple of test
> issues (similar in nature to the revocation checking issues mentioned in
> the original note) that have since been fixed.
>
> We'd prefer to not have devstack overriding config options and instead use
> keystone's defaults. The problem is if fernet is the default in keystone
> then it won't work out of the box since the key database won't exist. One
> option that I think we should investigate is to have keystone create the
> key database on startup if it doesn't exist.
>
>
I am unsure if this is the right path, unless we consider possibly moving
the key-DB for fernet into the SQL backend (possible?) notably so we can
control a cluster of keystones.

If we aren't making the data shared by default, I would rather have
devstack override the keystone default as UUID still seems like the sanest
default due to other config overhead (with filesystem-based fernet keys).

--Morgan
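As an aside on the bootstrap question above, creating the key repository is cheap. The following is a rough sketch of what `keystone-manage fernet_setup` effectively does; the path is illustrative (real deployments use /etc/keystone/fernet-keys), and a Fernet key is simply 32 random bytes, urlsafe-base64 encoded:

```python
import base64
import os
import tempfile

# Illustrative key repository location; production would use
# /etc/keystone/fernet-keys owned by the keystone user.
repo = os.path.join(tempfile.mkdtemp(), "fernet-keys")
os.makedirs(repo, mode=0o700)  # repository must not be world-readable

# A Fernet key: 32 random bytes, urlsafe-base64 encoded.
key = base64.urlsafe_b64encode(os.urandom(32))
key_path = os.path.join(repo, "0")  # key "0" is the initial/staged key
with open(key_path, "wb") as f:
    f.write(key)
os.chmod(key_path, 0o600)

print(len(key))  # 44 (urlsafe-base64 of 32 bytes)
```

Whether keystone should do this itself on startup, or leave it to deployment tooling, is exactly the open question above.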
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] More on the topic of DELIMITER, the Quota Management Library proposal

2016-04-18 Thread Jay Pipes

On 04/16/2016 05:51 PM, Amrith Kumar wrote:

If we therefore assume that this will be a Quota Management Library, it
is safe to assume  that quotas are going to be managed on a per-project
basis, where participating projects will use this library. I believe
that it stands to reason that any data persistence will have to be in a
location decided by the individual project.


Depends on what you mean by "any data persistence". If you are referring 
to the storage of quota values (per user, per tenant, global, etc) I 
think that should be done by the Keystone service. This data is 
essentially an attribute of the user or the tenant or the service 
endpoint itself (i.e. global defaults). This data also rarely changes 
and logically belongs to the service that manages users, tenants, and 
service endpoints: Keystone.


If you are referring to the storage of resource usage records, yes, each 
service project should own that data (and frankly, I don't see a need to 
persist any quota usage data at all, as I mentioned in a previous reply 
to Attila).



That may not be a very interesting statement but the corollary is, I
think, a very significant statement; it cannot be assumed that the
quota management information for all participating projects is in the
same database.


It cannot be assumed that this information is even in a database at all...


A hypothetical service consuming the Delimiter library provides
requesters with some widgets, and wishes to track the widgets that it
has provisioned, both on a per-user basis and on the whole. It should
therefore be multi-tenant and able to track the widgets on a per-tenant
basis and, if required, impose limits on the number of widgets that a
tenant may consume at a time, during the course of a period of time, and
so on.


No, this last part is absolutely not what I think quota management 
should be about.


Rate limiting -- i.e. how many requests a particular user can make of an 
API in a given period of time -- should *not* be handled by OpenStack 
API services, IMHO. It is the responsibility of the deployer to handle 
this using off-the-shelf rate-limiting solutions (open source or 
proprietary).


Quotas should only be about the hard limit of different types of 
resources that a user or group of users can consume at a given time.



Such a hypothetical service may also consume resources from other
services that it wishes to track, and impose limits on.


Yes, absolutely agreed.


It is also understood, as Jay Pipes points out in [4], that the actual
process of provisioning widgets could be time consuming and it is
ill-advised to hold a database transaction of any kind open for that
duration of time. Ensuring that a user does not exceed some limit on the
number of concurrent widgets that he or she may create therefore
requires some mechanism to track in-flight requests for widgets. I view
these as “intent” but not yet materialized.


It has nothing to do with the amount of concurrent widgets that a user 
can create. It's just about the total number of some resource that may 
be consumed by that user.


As for an "intent", I don't believe tracking intent is the right way to 
go at all. As I've mentioned before, the major problem in Nova's quota 
system is that there are two tables storing resource usage records: the 
*actual* resource usage tables (the allocations table in the new 
resource-providers modeling and the instance_extra, pci_devices and 
instances table in the legacy modeling) and the *quota usage* tables 
(quota_usages and reservations tables). The quota_usages table does not 
need to exist at all, and neither does the reservations table. Don't do 
intent-based consumption. Instead, just consume (claim) by writing a 
record for the resource class consumed on a provider into the actual 
resource usages table and then "check quotas" by querying the *actual* 
resource usages and comparing the SUM(used) values, grouped by resource 
class, against the appropriate quota limits for the user. The 
introduction of the quota_usages and reservations tables to cache usage 
records is the primary reason for the race problems in the Nova (and 
other) quota system because every time you introduce a caching system 
for highly-volatile data (like usage records) you introduce complexity 
into the write path and the need to track the same thing across multiple 
writes to different tables needlessly.
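The claim-then-check approach described above can be sketched in a few lines. This is a minimal illustration with an invented schema (table and column names are ours, not Nova's): the usage record is written first, then SUM(used) is compared against the limit inside the same transaction, and the transaction is rolled back if the limit is exceeded.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE allocations (user TEXT, resource TEXT, used INT)")
LIMITS = {"cores": 4}  # per-user quota limit (assumed value)

def claim(user, resource, amount):
    # One transaction: insert the usage record, then check the aggregate.
    # Raising inside the `with` block rolls the insert back.
    with conn:
        conn.execute("INSERT INTO allocations VALUES (?, ?, ?)",
                     (user, resource, amount))
        total = conn.execute(
            "SELECT SUM(used) FROM allocations WHERE user=? AND resource=?",
            (user, resource)).fetchone()[0]
        if total > LIMITS[resource]:
            raise ValueError("quota exceeded")

claim("alice", "cores", 3)      # ok: 3 <= 4
try:
    claim("alice", "cores", 2)  # would make 5 > 4: rejected, rolled back
except ValueError:
    pass

total = conn.execute("SELECT SUM(used) FROM allocations").fetchone()[0]
print(total)  # 3
```

Note there is no separate quota_usages or reservations table anywhere in this sketch; the actual usage records are the only source of truth, which is the point being argued.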



Looking up at this whole infrastructure from the perspective of the
database, I think we should require that the database must not be
required to operate in any isolation mode higher than READ-COMMITTED;
more about that later (i.e. requiring the database to run either serializable
or repeatable-read isolation would be a show stopper).


This is an implementation detail that is not relevant to the discussion about
what the interface of a quota library would look like.



In general therefore, I believe that the hypothetical service processing
requests for widgets would have to handle three kinds of operations,

Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Matthew Thode
On 04/18/2016 01:40 PM, Doug Hellmann wrote:
> Excerpts from Matthew Thode's message of 2016-04-18 13:22:38 -0500:
>> On 04/18/2016 12:33 PM, Doug Hellmann wrote:
>>> Excerpts from Matthew Thode's message of 2016-04-18 10:23:37 -0500:
 On 04/18/2016 08:24 AM, Hayes, Graham wrote:
> On 18/04/2016 13:51, Sean Dague wrote:
>> On 04/18/2016 08:22 AM, Chris Dent wrote:
>>> On Mon, 18 Apr 2016, Sean Dague wrote:
>>>
 So if you have strong feelings and ideas, why not get them out in email
 now? That will help in the framing of the conversation.
>>>
>>> I won't be at summit and I feel pretty strongly about this topic, so
>>> I'll throw out my comments:
>>>
> >>> I agree with the basic premise: In the big tent universe
> >>> co-installability is holding us back and is a huge cost in terms of spent
>>> energy. In a world where service isolation is desirable and common
>>> (whether by virtualenv, containers, different hosts, etc) targeting an
>>> all-in-one install seems only to serve the purposes of all-in-one rpm-
>>> or deb-based installations.
>>>
> >>> Many (most?) people won't be doing those kinds of installations. If
> >>> all-in-one installations are important to the rpm- and deb-based
> >>> distributions
>>> then _they_ should be resolving the dependency issues local to their own
>>> infrastructure (or realizing that it is too painful and start
>>> containerizing or otherwise as well).
>>>
>>> I think making these changes will help to improve and strengthen the
>>> boundaries and contracts between services. If not technically then
>>> at least socially, in the sense that the negotiations that people
>>> make to get things to work are about what actually matters in their
>>> services, not unwinding python dependencies and the like.
>>>
>>> A lot of the basics of getting this to work are already in place in
>>> devstack. One challenge I've run into the past is when devstack
>>> plugin A has made an assumption about having access to a python
>>> script provided by devstack plugin B, but it's not on $PATH or its
>>> dependencies are not in the site-packages visible to the current
> >>> context. The solution here is to use full paths _into_ virtualenvs.
>>
>> As Chris said, doing virtualenvs on the Devstack side for services is
>> pretty much there. The team looked at doing this last year, then stopped
>> due to operator feedback.
>>
>> One of the things that gets a little weird (when using devstack for
>> development) is if you actually want to see the impact of library
>> changes on the environment. As you'll need to make sure you loop and
>> install those libraries into every venv where they are used. This
>> forward reference doesn't really exist. So some tooling there will be
>> needed.
>>
>> Middleware that's pushed from one project into another (like Ceilometer
>> -> Swift) is also a funny edge case that I think get funnier here.
>>
>> Those are mostly implementation details, that probably have work
>> arounds, but would need people on them.
>>
>>
>> From a strategic perspective this would basically make traditional Linux
>> Packaging of OpenStack a lot harder. That might be the right call,
>> because traditional Linux Packaging definitely suffers from the fact
>> that everything on a host needs to be upgraded at the same time. For
>> large installs of OpenStack (especially public cloud cases) traditional
>> packages are definitely less used.
>>
>> However Linux Packaging is how a lot of people get exposed to software.
>> The power of onboarding with apt-get / yum install is a big one.
>>
>> I've been through the ups and downs of both approaches so many times now
>> in my own head, I no longer have a strong preference beyond the fact
>> that we do one approach today, and doing a different one is effort to
>> make the transition.
>>
>> -Sean
>>
>
> It is also worth noting that according to the OpenStack User Survey [0]
> 56% of deployments use "Unmodified packages from the operating system".
>
> Granted it was a small sample size (302 responses to that question)
> but it is worth keeping this in mind as we talk about moving the burden
> to packagers.
>
> 0 - 
> https://www.openstack.org/assets/survey/April-2016-User-Survey-Report.pdf 
> (page 
> 36)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
 To add to this, I'd also note that I as a packager would likely stop
 packaging Openstack at whatever release

[openstack-dev] [ironic] Meetings cancelled April 25 and May 2

2016-04-18 Thread Jim Rollenhagen
Hi friends,

We won't have our weekly meeting on April 25 (because most of us will be
at the summit) or May 2 (because people will still be recovering from
summit travel).

See you all there on May 9.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-18 Thread Sean M. Collins
Markus Zoeller wrote:
> I guess having an IF-ELSE block in a "local.conf" is 
> crazy talk?

Yes, I think it is. local.conf is already a pretty big complex thing for
someone starting out, as it is. 

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] L3 HA testing on scale

2016-04-18 Thread Kosnik, Lubosz
Great work Ann.
Testing at scale is not so problematic thanks to the Cloud For All
project.
Here [1] you can request a multi-node cluster which you can use to
perform tests. Exact requirements are specified on that website.

[1] http://osic.org

Regards,
Lubosz “diltram” Kosnik

On Apr 18, 2016, at 10:42 AM, John Schwarz <jschw...@redhat.com> wrote:

This is some awesome work, Ann. It's very neat to see that all the
struggling we've done with races w.r.t. the L3 scheduler has paid off. I
would definitely like to see how these results are affected by
https://review.openstack.org/#/c/305774/ but understandably 49
physical nodes are hard to come by.

Also, we should see how best to handle the issue Ann found (and is
tracked at https://review.openstack.org/#/c/305774/). Specifically,
reproducing this should be our goal.

John.

On Mon, Apr 18, 2016 at 5:15 PM, Anna Kamyshnikova
<akamyshnik...@mirantis.com> wrote:
Hi guys!

As a developer I use Devstack or multinode OpenStack installation (4-5
nodes) for work, but these are "abstract" environments, where you are not
able to perform some scenarios as your machine is not powerful enough. But
it is really important to understand the issues that real deployments have.

Recently I've performed testing of L3 HA on a scale environment of 49 nodes
(3 controllers, 46 computes) running Fuel 8.0. On this environment I ran shaker and
rally tests and also performed some manual destructive scenarios. I think
that it is very important to share these results. Ideally, I think that we
should collect statistics for different configurations each release to
compare and check it to make sure that we are heading the right way.

The results of the shaker and rally tests are in [1]. I put a detailed
report in a google doc [2]. I would appreciate any comments on these results.

[1] - http://akamyshnikova.github.io/neutron-benchmark-results/
[2] -
https://docs.google.com/a/mirantis.com/document/d/1TFEUzRRlRIt2HpsOzFh-RqWwgTzJPBefePPA0f0x9uw/edit?usp=sharing

Regards,
Ann Kamyshnikova
Mirantis, Inc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
John Schwarz,
Red Hat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Jeremy Stanley
On 2016-04-18 13:58:03 -0500 (-0500), Matthew Thode wrote:
> Ya, I'd be happy to work more with upstream.  I already review the
> stable-reqs updates and watch them for the stable branches I package
> for.  Not sure what else is needed.

Reviewing the master branch openstack/requirements repository
changes (to make sure deps being added are going to be sane things
for someone in your distro to maintain packages of in the long term)
would also make sense.

https://review.openstack.org/#/q/project:openstack/requirements+status:open
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [devstack] Adding example "local.conf" files for testing?

2016-04-18 Thread John Griffith
On Thu, Apr 14, 2016 at 1:31 AM, Markus Zoeller  wrote:

> Sometimes (especially when I try to reproduce bugs) I have the need
> to set up a local environment with devstack. Everytime I have to look
> at my notes to check which option in the "local.conf" have to be set
> for my needs. I'd like to add a folder in devstacks tree which hosts
> multiple example local.conf files for different, often used setups.
> Something like this:
>
> example-confs
> --- newton
> --- --- x86-ubuntu-1404
> --- --- --- minimum-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- serial-console-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- live-migration-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf.controller
> --- --- --- --- local.conf.compute1
> --- --- --- --- local.conf.compute2
> --- --- --- minimal-neutron-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- s390x-1.1.1-vulcan
> --- --- --- minimum-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf
> --- --- --- live-migration-setup
> --- --- --- --- README.rst
> --- --- --- --- local.conf.controller
> --- --- --- --- local.conf.compute1
> --- --- --- --- local.conf.compute2
> --- mitaka
> --- --- # same structure as master branch. omitted for brevity
> --- liberty
> --- --- # same structure as master branch. omitted for brevity
>
> Thoughts?
>
> Regards, Markus Zoeller (markus_z)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Love the idea personally.  Maybe we could start with a working Neutron
multi node deployment!!!
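To make the proposal concrete, a hypothetical minimum-setup local.conf might look like this (the `[[local|localrc]]` / `[[post-config|...]]` meta-sections are devstack's own convention; all values below are placeholders):

```ini
# Hypothetical example-confs/newton/x86-ubuntu-1404/minimum-setup/local.conf
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
HOST_IP=10.0.0.10

[[post-config|$NOVA_CONF]]
[DEFAULT]
debug = True
```

The per-setup README.rst would then explain which values a user actually has to change.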
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] weekly subteam status report

2016-04-18 Thread Ruby Loo
Hi,

We are overjoyed to present this week's subteam report for Ironic. As
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff with 11.04.2016):
- Ironic: 203 bugs (+3) + 166 wishlist items (+3). 30 new (+5), 134 in
progress (-2), 1 critical (+1), 24 high (-2) and 18 incomplete
- Inspector: 13 bugs + 16 wishlist items (+1). 1 new, 6 in progress, 0
critical, 4 high and 0 incomplete
- Nova bugs with Ironic tag: 15 (-1). 0 new, 0 critical, 0 high

Network isolation (Neutron/Ironic work) (jroll)
===
- Still needs review
- cross-project session tuesday on the future of bare metal networking:
https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
https://etherpad.openstack.org/p/newton-baremetal-networking

Live upgrades (lucasagomes, lintan)
===
- A PoC patch is ready: https://review.openstack.org/306357
- An implementation reference to refactor configdrive is also ready:
https://review.openstack.org/#/c/306358/

Node filter API and claims endpoint (jroll, devananda, lucasagomes)
===
- jroll working on re-writing specs

Nova Liaisons (jlvillal & mrda)
===
- mrda & jlvillal did a clean-up of the bug wiki
- https://wiki.openstack.org/wiki/Nova-Ironic-Bugs

Testing/Quality (jlvillal/krtaylor)
===
- Grenade: Interrupted by downstream work. Did have some progress but no
breakthroughs. Debugging continues.

Inspector (dtantsur)
===
- rerunning introspection on stored data was merged (2 weeks ago), client
part ready for reviews
- HA spec getting updated

Drivers:

CIMC and UCSM (sambetts)

- CIMC and UCS CIs were stable as of 10:00am UTC 18th April

.

Until after the summit,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2016-04-18 19:10:52 +:
> On 2016-04-18 13:58:03 -0500 (-0500), Matthew Thode wrote:
> > Ya, I'd be happy to work more with upstream.  I already review the
> > stable-reqs updates and watch them for the stable branches I package
> > for.  Not sure what else is needed.
> 
> Reviewing the master branch openstack/requirements repository
> changes (to make sure deps being added are going to be sane things
> for someone in your distro to maintain packages of in the long term)
> would also make sense.
> 
> https://review.openstack.org/#/q/project:openstack/requirements+status:open

Right, we see far far more changes on master than on the stable
branches.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Matthew Thode
On 04/18/2016 02:10 PM, Jeremy Stanley wrote:
> On 2016-04-18 13:58:03 -0500 (-0500), Matthew Thode wrote:
>> Ya, I'd be happy to work more with upstream.  I already review the
>> stable-reqs updates and watch them for the stable branches I package
>> for.  Not sure what else is needed.
> 
> Reviewing the master branch openstack/requirements repository
> changes (to make sure deps being added are going to be sane things
> for someone in your distro to maintain packages of in the long term)
> would also make sense.
> 
> https://review.openstack.org/#/q/project:openstack/requirements+status:open
> 
We can (and do) maintain multiple versions of packages available to be
installed.  The problem is that dependencies might conflict.  That's
what I'd like to avoid.

-- 
-- Matthew Thode (prometheanfire)





[openstack-dev] [keystone][barbican][designate][murano][fuel][ironic][cue][ceilometer][astara][gce-api][kiloeyes] keystoneclient 3.0.0 release - no more CLI!

2016-04-18 Thread Steve Martinelli


Everyone,

I sent out a note about this on Friday [1], but I'll repeat it here and tag
individual projects. The keystone team *will* be releasing a new version of
keystoneclient on *Thursday* that will not include a CLI.

A quick codesearch showed that a few projects are still using the old
`keystone` CLI in either their docs, scripts that create sample data or in
devstack plugins; the latter being the more immediate issue here. These
fixes should be very quick; use the `openstack` CLI (from openstackclient)
instead. I've gone ahead and listed some files that include keystone CLI
commands (keystone user-list, keystone tenant-list, keystone user-create,
keystone role-list, etc.)
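For anyone doing the conversion, the usual replacements look like this. The mapping table itself is just illustrative; the `openstack` commands are the standard openstackclient equivalents:

```python
# Old `keystone` CLI commands and their openstackclient replacements.
# Note that "tenant" becomes "project" in the new CLI.
OSC_EQUIV = {
    "keystone user-list": "openstack user list",
    "keystone tenant-list": "openstack project list",
    "keystone user-create": "openstack user create",
    "keystone tenant-create": "openstack project create",
    "keystone role-list": "openstack role list",
}

for old, new in sorted(OSC_EQUIV.items()):
    print("%-24s -> %s" % (old, new))
```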

Barbican:

http://git.openstack.org/cgit/openstack/barbican/tree/bin/keystone_data.sh

Designate:

http://git.openstack.org/cgit/openstack/designate/tree/tools/designate-keystone-setup
 (already being addressed by: https://review.openstack.org/307433 )

Murano:

http://git.openstack.org/cgit/openstack/murano-deployment/tree/murano-ci/config/devstack/local.sh


Fuel:

http://git.openstack.org/cgit/openstack/fuel-plugin-plumgrid/tree/deployment_scripts/cleanup_os.sh


http://git.openstack.org/cgit/openstack/fuel-octane/tree/octane/tests/create_vms.sh


http://git.openstack.org/cgit/openstack/fuel-plugin-plumgrid/tree/deployment_scripts/cleanup_os.sh


Ironic:

http://git.openstack.org/cgit/openstack/ironic-inspector/tree/devstack/exercise.sh#n49


Cue:
http://git.openstack.org/cgit/openstack/cue/tree/devstack/plugin.sh

Ceilometer:

http://git.openstack.org/cgit/openstack/ceilometer/tree/tools/make_test_data.sh


Astara:

http://git.openstack.org/cgit/openstack/astara/tree/tools/run_functional.sh

GCE-API:
http://git.openstack.org/cgit/openstack/gce-api/tree/install.sh

Kiloeyes:
http://git.openstack.org/cgit/openstack/kiloeyes/tree/setup_horizon.sh

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-April/092471.html

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][keystone][documentation][gate] Babel dependency for oslo.log

2016-04-18 Thread Joshua Harlow

Okie, the following reviews are up:

https://review.openstack.org/307461 (oslo.concurrency)
https://review.openstack.org/307463 (oslo.cache)
https://review.openstack.org/307464 (oslo.privsep)
https://review.openstack.org/307466 (oslo.middleware)
https://review.openstack.org/307467 (oslo.log)
https://review.openstack.org/307468 (oslo.db)
https://review.openstack.org/307469 (oslo.versionedobjects)
https://review.openstack.org/307470 (oslo.service)
https://review.openstack.org/307471 (oslo.reports)

Do note that the following have a dependency on Babel but do not depend
on oslo.i18n:


tooz
oslo.context
oslo.serialization
debtcollector

Should we do anything about the above four?

-Josh
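For reference, the constraints-aware install_command the thread keeps referring to looks roughly like this; a sketch adapted from the pattern in nova's tox.ini, using the URL and environment-variable conventions of the time:

```ini
# Apply upper constraints to every testenv, including targets like
# cover and releasenotes that inherit from [testenv].
[testenv]
install_command =
    pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
deps = -r{toxinidir}/test-requirements.txt
```

Any testenv that defines its own install_command has to repeat the `-c` flag, which is exactly how the cover and releasenotes targets got missed.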

Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2016-04-18 14:27:48 -0400:

Josh,

So Andreas and i talked a bit, it seems like NONE of the oslo.* libs
except oslo.i18n needs a direct dependency on Babel. So we should yank
them all out and bump major versions
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/latest.log.html#t2016-04-18T11:58:10


I don't think we need to raise major versions to drop a dependency. We
only need to do that for backwards-incompatible changes, and this
doesn't seem to be one.

Doug


Thanks,
Dims

On Mon, Apr 18, 2016 at 1:42 PM, Joshua Harlow  wrote:

Andreas Jaeger wrote:

On 04/17/2016 09:15 PM, Davanum Srinivas wrote:

Hi Oslo folks, Andreas and others,

Over the weekend oslo.log 3.4.0 was released. This broke keystone CI
jobs [2], even though the 3.4.0 was not specified in upper-constraints
as keystone jobs were not honoring the upper-constraints.txt, so we
fixed it in [3].

So the first big problem after [3] was that several tox targets do not
inject u-c and hence fail, so in [3] we also added install_commands
for testenv:releasenotes and testenv:cover, based on the pattern set
in Nova's tox.ini [4]. That was still not enough and we had to add an
entry in keystone's requirements.txt for Babel even though it was not
there before (and hence pulling in latest Babel from somewhere).

So Here are the questions:
1) Is there anyone working to fix all tox CI jobs to honor upper
constraints?
2) Why do we need Babel in oslo.log's requirements.txt?
3) Can we remove Babel from all requirements.txt and
test-requirements.txt and leave them in just tox.ini when needed?
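(For context on question 1: the constraints-aware install_command pattern borrowed from Nova's tox.ini looks roughly like the sketch below. The URL and environment variable name follow the usual requirements-project convention and are shown for illustration only.)

```ini
[testenv]
# Install with the requirements project's upper-constraints file so that
# blacklisted releases (e.g. the broken Babel 2.3.x series) are never pulled in.
install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
```

Tox targets that define their own install_command (or none at all) bypass this, which is why testenv:releasenotes and testenv:cover needed the same treatment.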

Note that there was nothing wrong in oslo.log itself, which published
a release with what was in global-requirements.txt, nor in keystone,
which has traditionally not run with constraints on. It was just the
combination of circumstances, with Babel going bad, that broke at
least keystone.

Did anyone else see other jobs break? Please respond!

Thanks,
Dims


[1] http://markmail.org/message/ygyxpjpbhlbz3q5d
[2]
http://logs.openstack.org/86/249486/32/check/gate-keystone-python34-db/29ace4f/console.html#_2016-04-17_04_31_51_138
[3] https://review.openstack.org/#/c/306846/
[4] http://git.openstack.org/cgit/openstack/nova/tree/tox.ini


I think what happened is:
1) oslo.log indirectly requires Babel
2) requirements blacklists Babel 2.3.x
3) keystone has new requirements included and thus fails

The problem here is that oslo.log requires oslo.i18n which requires
Babel. And if oslo.i18n would have had a release with the blacklisting
of Babel 2.3.x, this wouldn't have happened. So, I propose to release
oslo.i18n.
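(As a sketch, the blacklisting in the requirements repository takes the form of version exclusions on the Babel line of global-requirements.txt; the exact pins below are illustrative of the pattern, not a verbatim copy:)

```
# Exclude the broken 2.3.x releases while allowing older and future versions
Babel>=1.3,!=2.3.0,!=2.3.1,!=2.3.2,!=2.3.3
```

A library only picks this exclusion up once it cuts a release with the updated requirement, which is why releasing oslo.i18n matters here.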

Babel 2.3.4 which fixes the known problems might be out soon as well -
and if that does not introduce regressions, this will self-heal,


Ok, so which option should we go with here?

I'm ok with releasing oslo.i18n or Babel 2.3.4 (when is this release
happening, soon? like soon soon?)


Andreas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [oslo][keystone][documentation][gate] Babel dependency for oslo.log

2016-04-18 Thread Davanum Srinivas
On Mon, Apr 18, 2016 at 4:28 PM, Joshua Harlow  wrote:
> Okie, the following reviews are up:
>
> https://review.openstack.org/307461 (oslo.concurrency)
> https://review.openstack.org/307463 (oslo.cache)
> https://review.openstack.org/307464 (oslo.privsep)
> https://review.openstack.org/307466 (oslo.middleware)
> https://review.openstack.org/307467 (oslo.log)
> https://review.openstack.org/307468 (oslo.db)
> https://review.openstack.org/307469 (oslo.versionedobjects)
> https://review.openstack.org/307470 (oslo.service)
> https://review.openstack.org/307471 (oslo.reports)
>
> Do note that the following have a dependency on Babel but do not depend on
> oslo.i18n:
>
> tooz
> oslo.context
> oslo.serialization
> debtcollector
>
> Should we do anything about the above four?

Josh,

Babel is mainly for translations:
https://wiki.openstack.org/wiki/Translations

So we can remove them

-- Dims

>
> -Josh
>
> Doug Hellmann wrote:
>>
>> Excerpts from Davanum Srinivas (dims)'s message of 2016-04-18 14:27:48
>> -0400:
>>>
>>> Josh,
>>>
>>> So Andreas and i talked a bit, it seems like NONE of the oslo.* libs
>>> except oslo.i18n needs a direct dependency on Babel. So we should yank
>>> them all out and bump major versions
>>>
>>> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/latest.log.html#t2016-04-18T11:58:10
>>
>>
>> I don't think we need to raise major versions to drop a dependency. We
>> only need to do that for backwards-incompatible changes, and this
>> doesn't seem to be one.
>>
>> Doug
>>
>>> Thanks,
>>> Dims
>>>
>>> On Mon, Apr 18, 2016 at 1:42 PM, Joshua Harlow
>>> wrote:

 Andreas Jaeger wrote:
>
> On 04/17/2016 09:15 PM, Davanum Srinivas wrote:
>>
>> Hi Oslo folks, Andreas and others,
>>
>> Over the weekend oslo.log 3.4.0 was released. This broke keystone CI
>> jobs [2], even though the 3.4.0 was not specified in upper-constraints
>> as keystone jobs were not honoring the upper-constraints.txt, so we
>> fixed it in [3].
>>
>> So the first big problem after [3] was that several tox targets do not
>> inject u-c and hence fail, so in [3] we also added install_commands
>> for testenv:releasenotes and testenv:cover, based on the pattern set
>> in Nova's tox.ini [4]. That was still not enough and we had to add an
>> entry in keystone's requirements.txt for Babel even though it was not
>> there before (and hence pulling in latest Babel from somewhere).
>>
>> So Here are the questions:
>> 1) Is there anyone working to fix all tox CI jobs to honor upper
>> constraints?
>> 2) Why do we need Babel in oslo.log's requirements.txt?
>> 3) Can we remove Babel from all requirements.txt and
>> test-requirements.txt and leave them in just tox.ini when needed?
>>
>> Note that there was nothing wrong in oslo.log itself, which published
>> a release with what was in global-requirements.txt, nor in keystone,
>> which has traditionally not run with constraints on. It was just the
>> combination of circumstances, with Babel going bad, that broke at
>> least keystone.
>>
>> Did anyone else see other jobs break? Please respond!
>>
>> Thanks,
>> Dims
>>
>>
>> [1] http://markmail.org/message/ygyxpjpbhlbz3q5d
>> [2]
>>
>> http://logs.openstack.org/86/249486/32/check/gate-keystone-python34-db/29ace4f/console.html#_2016-04-17_04_31_51_138
>> [3] https://review.openstack.org/#/c/306846/
>> [4] http://git.openstack.org/cgit/openstack/nova/tree/tox.ini
>
>
> I think what happened is:
> 1) oslo.log indirectly requires Babel
> 2) requirements blacklists Babel 2.3.x
> 3) keystone has new requirements included and thus fails
>
> The problem here is that oslo.log requires oslo.i18n which requires
> Babel. And if oslo.i18n would have had a release with the blacklisting
> of Babel 2.3.x, this wouldn't have happened. So, I propose to release
> oslo.i18n.
>
> Babel 2.3.4 which fixes the known problems might be out soon as well -
> and if that does not introduce regressions, this will self-heal,


 Ok, so which option should we go with here?

 I'm ok with releasing oslo.i18n or Babel 2.3.4 (when is this release
 happening, soon? like soon soon?)

> Andreas



>>
>>
>
>

[openstack-dev] [nova] [neutron] the nova network facade that isn't

2016-04-18 Thread Sean Dague
When doing bug triage this morning a few bugs popped up:

- https://bugs.launchpad.net/nova/+bug/1456899 - nova absolute-limits
Security groups count incorrect when using Neutron
- https://bugs.launchpad.net/nova/+bug/1376316 - nova absolute-limits
floating ip count is incorrect in a neutron based deployment
- https://bugs.launchpad.net/nova/+bug/1456897 - nova absolute-limits
Floating ip

The crux of this is the Nova limits API basically returns junk about
resources it doesn't own. It's been this way forever.

Last year there was a spec to add proxying to Neutron to the Nova API -
https://review.openstack.org/#/c/206735/ - which died on the vine.


I think we've moved to a point in time where we need to stop thinking
about nova-net / neutron parity in our API. Neutron is the predominant
stack out there. Where things don't work correctly with Neutron from our
proxy API we should start saying "yes, that's not supported, please go
talk to Neutron" one way or another.

I feel like in this case it would be dropping the keys which we know are
lies (terrible lies). If you're using the OpenStack client, it can smooth
this over. In general, people should assume they should be talking to neutron
when getting this kind of data. I feel like in other cases where we
don't return good neutron data today, we should accept that as status
quo, and not fix it.

I'd like to propose an alternative spec taking this kind of approach:
by policy, not enhancing any of the proxies, and instead focusing on ways
in which we can aggressively deprecate them. But I figured it was worth
discussion first. Flame away!

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [oslo][keystone][documentation][gate] Babel dependency for oslo.log

2016-04-18 Thread Joshua Harlow

Davanum Srinivas wrote:

On Mon, Apr 18, 2016 at 4:28 PM, Joshua Harlow  wrote:

Okie, the following reviews are up:

https://review.openstack.org/307461 (oslo.concurrency)
https://review.openstack.org/307463 (oslo.cache)
https://review.openstack.org/307464 (oslo.privsep)
https://review.openstack.org/307466 (oslo.middleware)
https://review.openstack.org/307467 (oslo.log)
https://review.openstack.org/307468 (oslo.db)
https://review.openstack.org/307469 (oslo.versionedobjects)
https://review.openstack.org/307470 (oslo.service)
https://review.openstack.org/307471 (oslo.reports)

Do note that the following have a dependency on Babel but do not depend on
oslo.i18n:

tooz
oslo.context
oslo.serialization
debtcollector

Should we do anything about the above four?


Josh,

Babel is mainly for translations:
https://wiki.openstack.org/wiki/Translations

So we can remove them

-- Dims


Okie, sounds fine with me,

I hope there aren't any translations in those four that people want/are 
using, because it's my understanding that they will no longer exist if I 
remove that dependency ;)





-Josh

Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2016-04-18 14:27:48
-0400:

Josh,

So Andreas and i talked a bit, it seems like NONE of the oslo.* libs
except oslo.i18n needs a direct dependency on Babel. So we should yank
them all out and bump major versions

http://eavesdrop.openstack.org/irclogs/%23openstack-infra/latest.log.html#t2016-04-18T11:58:10


I don't think we need to raise major versions to drop a dependency. We
only need to do that for backwards-incompatible changes, and this
doesn't seem to be one.

Doug


Thanks,
Dims

On Mon, Apr 18, 2016 at 1:42 PM, Joshua Harlow
wrote:

Andreas Jaeger wrote:

On 04/17/2016 09:15 PM, Davanum Srinivas wrote:

Hi Oslo folks, Andreas and others,

Over the weekend oslo.log 3.4.0 was released. This broke keystone CI
jobs [2], even though the 3.4.0 was not specified in upper-constraints
as keystone jobs were not honoring the upper-constraints.txt, so we
fixed it in [3].

So the first big problem after [3] was that several tox targets do not
inject u-c and hence fail, so in [3] we also added install_commands
for testenv:releasenotes and testenv:cover, based on the pattern set
in Nova's tox.ini [4]. That was still not enough and we had to add an
entry in keystone's requirements.txt for Babel even though it was not
there before (and hence pulling in latest Babel from somewhere).

So Here are the questions:
1) Is there anyone working to fix all tox CI jobs to honor upper
constraints?
2) Why do we need Babel in oslo.log's requirements.txt?
3) Can we remove Babel from all requirements.txt and
test-requirements.txt and leave them in just tox.ini when needed?

Note that there was nothing wrong in oslo.log itself, which published
a release with what was in global-requirements.txt, nor in keystone,
which has traditionally not run with constraints on. It was just the
combination of circumstances, with Babel going bad, that broke at
least keystone.

Did anyone else see other jobs break? Please respond!

Thanks,
Dims


[1] http://markmail.org/message/ygyxpjpbhlbz3q5d
[2]

http://logs.openstack.org/86/249486/32/check/gate-keystone-python34-db/29ace4f/console.html#_2016-04-17_04_31_51_138
[3] https://review.openstack.org/#/c/306846/
[4] http://git.openstack.org/cgit/openstack/nova/tree/tox.ini


I think what happened is:
1) oslo.log indirectly requires Babel
2) requirements blacklists Babel 2.3.x
3) keystone has new requirements included and thus fails

The problem here is that oslo.log requires oslo.i18n which requires
Babel. And if oslo.i18n would have had a release with the blacklisting
of Babel 2.3.x, this wouldn't have happened. So, I propose to release
oslo.i18n.

Babel 2.3.4 which fixes the known problems might be out soon as well -
and if that does not introduce regressions, this will self-heal,


Ok, so which option should we go with here?

I'm ok with releasing oslo.i18n or Babel 2.3.4 (when is this release
happening, soon? like soon soon?)


Andreas














Re: [openstack-dev] [nova] [neutron] the nova network facade that isn't

2016-04-18 Thread Matt Riedemann



On 4/18/2016 3:33 PM, Sean Dague wrote:

When doing bug triage this morning a few bugs popped up:

- https://bugs.launchpad.net/nova/+bug/1456899 - nova absolute-limits
Security groups count incorrect when using Neutron
- https://bugs.launchpad.net/nova/+bug/1376316 - nova absolute-limits
floating ip count is incorrect in a neutron based deployment
- https://bugs.launchpad.net/nova/+bug/1456897 - nova absolute-limits
Floating ip

The crux of this is the Nova limits API basically returns junk about
resources it doesn't own. It's been this way forever.

Last year there was a spec to add proxying to Neutron to the Nova API -
https://review.openstack.org/#/c/206735/ - which died on the vine.


I think we've moved to a point in time where we need to stop thinking
about nova-net / neutron parity in our API. Neutron is the predominant
stack out there. Where things don't work correctly with Neutron from our
proxy API we should start saying "yes, that's not supported, please go
talk to Neutron" one way or another.

I feel like in this case it would be dropping the keys which we know are
lies (terrible lies). If you're using the OpenStack client, it can smooth
this over. In general, people should assume they should be talking to neutron
when getting this kind of data. I feel like in other cases where we
don't return good neutron data today, we should accept that as status
quo, and not fix it.

I'd like to propose an alternative spec taking this kind of approach:
by policy, not enhancing any of the proxies, and instead focusing on ways
in which we can aggressively deprecate them. But I figured it was worth
discussion first. Flame away!

-Sean



I guess at a high level my thinking was always, if nova-network isn't 
deprecated, and these APIs are broken when using Neutron, it's (mostly) 
trivial to add a proxy to fill those gaps (like my spec for 
os-virtual-interfaces). So then when people move from deprecated 
nova-network to neutron, all of their tooling doesn't start breaking.


In thinking about it another way, if we just say nova-network is 
deprecated again and therefore we have no incentive to make these APIs 
work in the Neutron case, and want to force people off them, then I can 
see that point.


It was different back in Havana when I was originally looking at this 
because Neutron adoption was very different. With the recent survey, 
however, it looks like nova-network is 7% of deployments now, and that's 
including non-production. So I concede that it's making less sense to 
put effort into making the APIs work with a proxy.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [oslo][keystone][documentation][gate] Babel dependency for oslo.log

2016-04-18 Thread Andreas Jaeger
Please check whether translation of these is set up in project-config.

Andreas 
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu,
   Graham Norton, HRB 21284 (AG Nürnberg) 
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126



On April 18, 2016 22:49:26 Joshua Harlow  wrote:

> Davanum Srinivas wrote:
>> On Mon, Apr 18, 2016 at 4:28 PM, Joshua Harlow  wrote:
>>> Okie, the following reviews are up:
>>>
>>> https://review.openstack.org/307461 (oslo.concurrency)
>>> https://review.openstack.org/307463 (oslo.cache)
>>> https://review.openstack.org/307464 (oslo.privsep)
>>> https://review.openstack.org/307466 (oslo.middleware)
>>> https://review.openstack.org/307467 (oslo.log)
>>> https://review.openstack.org/307468 (oslo.db)
>>> https://review.openstack.org/307469 (oslo.versionedobjects)
>>> https://review.openstack.org/307470 (oslo.service)
>>> https://review.openstack.org/307471 (oslo.reports)
>>>
>>> Do note that the following have a dependency on Babel but do not depend on
>>> oslo.i18n:
>>>
>>> tooz
>>> oslo.context
>>> oslo.serialization
>>> debtcollector
>>>
>>> Should we do anything about the above four?
>>
>> Josh,
>>
>> Babel is mainly for translations:
>> https://wiki.openstack.org/wiki/Translations
>>
>> So we can remove them
>>
>> -- Dims
>
> Okie, sounds fine with me,
>
> I hope there aren't any translations in those four that people want/are 
> using, because it's my understanding that they will no longer exist if I 
> remove that dependency ;)
>
>>
>>> -Josh
>>>
>>> Doug Hellmann wrote:
 Excerpts from Davanum Srinivas (dims)'s message of 2016-04-18 14:27:48
 -0400:
> Josh,
>
> So Andreas and i talked a bit, it seems like NONE of the oslo.* libs
> except oslo.i18n needs a direct dependency on Babel. So we should yank
> them all out and bump major versions
>
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/latest.log.html#t2016-04-18T11:58:10

 I don't think we need to raise major versions to drop a dependency. We
 only need to do that for backwards-incompatible changes, and this
 doesn't seem to be one.

 Doug

> Thanks,
> Dims
>
> On Mon, Apr 18, 2016 at 1:42 PM, Joshua Harlow
> wrote:
>> Andreas Jaeger wrote:
>>> On 04/17/2016 09:15 PM, Davanum Srinivas wrote:
 Hi Oslo folks, Andreas and others,

 Over the weekend oslo.log 3.4.0 was released. This broke keystone CI
 jobs [2], even though the 3.4.0 was not specified in upper-constraints
 as keystone jobs were not honoring the upper-constraints.txt, so we
 fixed it in [3].

 So the first big problem after [3] was that several tox targets do not
 inject u-c and hence fail, so in [3] we also added install_commands
 for testenv:releasenotes and testenv:cover, based on the pattern set
 in Nova's tox.ini [4]. That was still not enough and we had to add an
 entry in keystone's requirements.txt for Babel even though it was not
 there before (and hence pulling in latest Babel from somewhere).

 So Here are the questions:
 1) Is there anyone working to fix all tox CI jobs to honor upper
 constraints?
 2) Why do we need Babel in oslo.log's requirements.txt?
 3) Can we remove Babel from all requirements.txt and
 test-requirements.txt and leave them in just tox.ini when needed?

 Note that there was nothing wrong in oslo.log itself, which published
 a release with what was in global-requirements.txt, nor in keystone,
 which has traditionally not run with constraints on. It was just the
 combination of circumstances, with Babel going bad, that broke at
 least keystone.

 Did anyone else see other jobs break? Please respond!

 Thanks,
 Dims


 [1] http://markmail.org/message/ygyxpjpbhlbz3q5d
 [2]

 http://logs.openstack.org/86/249486/32/check/gate-keystone-python34-db/29ace4f/console.html#_2016-04-17_04_31_51_138
 [3] https://review.openstack.org/#/c/306846/
 [4] http://git.openstack.org/cgit/openstack/nova/tree/tox.ini
>>>
>>> I think what happened is:
>>> 1) oslo.log indirectly requires Babel
>>> 2) requirements blacklists Babel 2.3.x
>>> 3) keystone has new requirements included and thus fails
>>>
>>> The problem here is that oslo.log requires oslo.i18n which requires
>>> Babel. And if oslo.i18n would have had a release with the blacklisting
>>> of Babel 2.3.x, this wouldn't have happened. So, I propose to release
>>> oslo.i18n.
>>>
>>> Babel 2.3.4 which fix

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-18 Thread Hongbin Lu
Hi all,

Magnum will have a fishbowl session to discuss if it makes sense to build a 
common abstraction layer for all COEs (kubernetes, docker swarm and mesos):

https://www.openstack.org/summit/austin-2016/summit-schedule/events/9102

Frankly, this is a controversial topic since I heard agreements and 
disagreements from different people. It would be great if all of you could join 
the session and share your opinions and use cases. I hope we will have a 
productive discussion.

Best regards,
Hongbin

> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: April-12-16 8:40 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: foundat...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
> One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
> 
> On 11/04/16 16:53 +, Adrian Otto wrote:
> >Amrith,
> >
> >I respect your point of view, and agree that the idea of a common
> >compute API is attractive… until you think a bit deeper about what
> that
> >would mean. We seriously considered a “global” compute API at the time
> >we were first contemplating Magnum. However, what we came to learn
> >through the journey of understanding the details of how such a thing
> >would be implemented, that such an API would either be (1) the lowest
> >common denominator (LCD) of all compute types, or (2) an exceedingly
> complex interface.
> >
> >You expressed a sentiment below that trying to offer choices for VM,
> >Bare Metal (BM), and Containers for Trove instances “adds considerable
> complexity”.
> >Roughly the same complexity would accompany the use of a comprehensive
> >compute API. I suppose you were imagining an LCD approach. If that’s
> >what you want, just use the existing Nova API, and load different
> >compute drivers on different host aggregates. A single Nova client can
>produce VM, BM (Ironic), and Container (libvirt-lxc) instances all with
> >a common API (Nova) if it’s configured in this way. That’s what we do.
> >Flavors determine which compute type you get.
> >
> >If what you meant is that you could tap into the power of all the
> >unique characteristics of each of the various compute types (through
> >some modular extensibility framework) you’ll likely end up with
> >complexity in Trove that is comparable to integrating with the native
> >upstream APIs, along with the disadvantage of waiting for OpenStack to
> >continually catch up to the pace of change of the various upstream
> >systems on which it depends. This is a recipe for disappointment.
> >
> >We concluded that wrapping native APIs is a mistake, particularly when
> >they are sufficiently different than what the Nova API already offers.
> >Containers APIs have limited similarities, so when you try to make a
> >universal interface to all of them, you end up with a really
> >complicated mess. It would be even worse if we tried to accommodate
> all
> >the unique aspects of BM and VM as well. Magnum’s approach is to offer
> >the upstream native API’s for the different container orchestration
> >engines (COE), and compose Bays for them to run on that are built from
> >the compute types that OpenStack supports. We do this by using
> >different Heat orchestration templates (and conditional templates) to
> >arrange a COE on the compute type of your choice. With that said,
> there
> >are still gaps where not all storage or network drivers work with
> >Ironic, and there are non-trivial security hurdles to clear to safely
> use Bays composed of libvirt-lxc instances in a multi-tenant
> environment.
> >
> >My suggestion to get what you want for Trove is to see if the cloud
> has
> >Magnum, and if it does, create a bay with the flavor type specified
> for
> >whatever compute type you want, and then use the native API for the
> COE
> >you selected for that bay. Start your instance on the COE, just like
> >you use Nova today. This way, you have low complexity in Trove, and
> you
> >can scale both the number of instances of your data nodes (containers),
> >and the infrastructure on which they run (Nova instances).
> 
> 
> I've been researching on this area and I've reached pretty much the
> same conclusion. I've had moments of wondering whether creating bays is
> something Trove should do but I now think it should.
> 
> The need of handling the native API is the part I find a bit painful as
> that means more code needs to happen in Trove for us to provide this
> provisioning facilities. I wonder if a common *library* would help here,
> at least to handle those "simple" cases. Anyway, I look forward to
> chatting with you all about this.
> 
> It'd be great if you (and other magnum folks) could join this session:
> 
> https://etherpad.openstack.org/p/trove-newton-summit-container
> 
> Thanks for chiming in, Adrian.
> Flavio
> 
> >Regards,
> >
> >Adrian
> >
> >
> >
> >
> >On Apr 11, 2016, at 8:47 AM, Amrith Kumar 
> wrote:
> >
> >Monty, Dims,
> >
> 

Re: [openstack-dev] [magnum][kuryr] Shared session in design summit

2016-04-18 Thread Hongbin Lu
Hi all,

The joint Magnum-Kuryr session was scheduled for Thursday 11:50 – 12:30: 
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9099 . I am 
looking forward to seeing you all there.

In addition, Magnum will have another session for container storage: 
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9098 . I 
saw Kuryr recently expanded its scope to storage so it would be great if the 
relevant Kuryr contributors can join the storage session as well.

Best regards,
Hongbin

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: March-30-16 10:36 AM
To: OpenStack Development Mailing List (not for usage questions); Antoni Segura 
Puimedon; Fawad Khaliq; Mohammad Banikazemi; Taku Fukushima; Irena Berezovsky; 
Mike Spreitzer
Subject: Re: [openstack-dev] [magnum][kuryr] Shared session in design summit

All these slots are fine with me, added Kuryr team as CC to make sure most can 
attend any of these times.



On Wed, Mar 30, 2016 at 5:12 PM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
Gal,

Thursday 4:10 – 4:50 conflicts with a Magnum workroom session, but we can 
choose from:

• 11:00 – 11:40

• 11:50 – 12:30

• 3:10 – 3:50

Please let us know if some of the slots don’t work well with your schedule.

Best regards,
Hongbin

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: March-30-16 2:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][kuryr] Shared session in design summit

Anything you pick is fine with me, Kuryr fishbowl session is on Thursday 4:10 - 
4:50, i personally
think the Magnum integration is important enough and i dont mind using this 
time for the session as well.

Either way i am also ok with the 11-11:40 and the 11:50-12:30 sessions or the 
3:10-3:50

On Tue, Mar 29, 2016 at 11:32 PM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
Hi all,

As discussed before, our team members want to establish a shared session 
between Magnum and Kuryr. We expected a lot of attendees in the session so we 
need a large room (fishbowl). Currently, Kuryr has only 1 fishbowl session, and 
they possibly need it for other purposes. A solution is to promote one of the 
Magnum fishbowl session to be the shared session, or leverage one of the free 
fishbowl slot. The schedule is as below.

Please vote your favorite time slot: http://doodle.com/poll/zuwercgnw2uecs5y .

Magnum fishbowl session:

• 11:00 - 11:40 (Thursday)

• 11:50 - 12:30

• 1:30 - 2:10

• 2:20 - 3:00

• 3:10 - 3:50

Free fishbowl slots:

• 9:00 – 9:40 (Thursday)

• 9:50 – 10:30

• 3:10 – 3:50 (conflict with Magnum session)

• 4:10 – 4:50 (conflict with Magnum session)

• 5:00 – 5:40 (conflict with Magnum session)

Best regards,
Hongbin




--
Best Regards ,

The G.




--
Best Regards ,

The G.


Re: [openstack-dev] [ironic] weekly subteam status report

2016-04-18 Thread Ruby Loo
On Mon, Apr 18, 2016 at 3:10 PM, Ruby Loo  wrote:

> ...
> Network isolation (Neutron/Ironic work) (jroll)
> ===
> - Still needs review
> - cross-project session tuesday on the future of bare metal networking:
> https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
>

Oops, that should be
https://www.openstack.org/summit/austin-2016/summit-schedule/events/9491


>
> https://etherpad.openstack.org/p/newton-baremetal-networking
> ...
>


Yes, I sometimes read these things ;)
--ruby


Re: [openstack-dev] [nova] Question on removal of 'arbitrary' pluggable interfaces

2016-04-18 Thread Ed Leafe
On 04/18/2016 01:14 PM, Jay Pipes wrote:

> Each configuration option's deprecation is indicated in the Nova source
> code in the configuration option's declaration. For instance:
> 
> https://github.com/openstack/nova/blob/master/nova/conf/compute.py#L86-L91
> 
> grep for deprecated_for_removal=True

Also note that while the option isn't deprecated, some allowable formats
are being deprecated; those also have a note to that effect, as in:

https://github.com/openstack/nova/blob/master/nova/conf/scheduler.py#L258-L260

-- 

-- Ed Leafe





Re: [openstack-dev] [nova] Launchpad bug spring cleaning day Monday 4/18

2016-04-18 Thread Matt Riedemann



On 4/18/2016 9:00 AM, Markus Zoeller wrote:

In case the dashboard is not loading, you can use
* query_inconsistent.py
* query_stale_incomplete.py
from
https://github.com/markuszoeller/openstack/tree/master/scripts/launchpad

Regards, Markus Zoeller (markus_z)


From: Matt Riedemann 
To: "OpenStack Development Mailing List (not for usage questions)"

Date: 04/05/2016 08:45 PM
Subject: [openstack-dev] [nova] Launchpad bug spring cleaning day Monday

4/18


We're going to have a day of just cleaning out the launchpad bugs for
Nova on Monday 4/18.

This isn't a bug squashing day where people are proposing patches and
the core team is reviewing them.

This is purely about cleaning the garbage out of launchpad.

Markus Zoeller has a nice dashboard we can use. I'd like to specifically 
focus on trimming these two tabs:

1. Inconsistent:
http://45.55.105.55:8082/bugs-dashboard.html#tabInconsistent (142 bugs
today)

2. Stale Incomplete:
http://45.55.105.55:8082/bugs-dashboard.html#tabIncompleteStale (59 bugs 
today)

A lot of these are probably duplicates by now, or fixed, or just invalid 
and we should close them out. That's what we'll focus on.

I'd really like to see solid participation from the core team given the
core team should know a lot of what's already fixed or invalid, and
being part of the core team is more than just reviewing code, it's also
making sure our bug backlog is reasonably sane.

--

Thanks,

Matt Riedemann












Thanks to everyone that participated in the LP bug spring cleaning 
today. Markus took the numbers at the start of his day:


Inconsistent: 145
Stale Incomplete:  56

It's about the end of the working day for me and this is what we got done:

Inconsistent:  73
Stale Incomplete:  32

That's a pretty good dent. We also found a few nasty unconfirmed bugs 
along the way and marked those as confirmed/triaged and in some cases 
fixes have been pushed up today also.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-18 Thread Fox, Kevin M
I'd love to attend, but this is right on top of the app catalog meeting. I 
think the app catalog might be one of the primary users of a cross-COE API.

At minimum we'd like to be able to store URLs for Kubernetes/Swarm/Mesos 
templates and have an API to kick off a workflow in Horizon to have Magnum 
start up a new instance of the template the user selected.

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Monday, April 18, 2016 2:09 PM
To: Flavio Percoco; OpenStack Development Mailing List (not for usage questions)
Cc: foundat...@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Hi all,

Magnum will have a fishbowl session to discuss if it makes sense to build a 
common abstraction layer for all COEs (kubernetes, docker swarm and mesos):

https://www.openstack.org/summit/austin-2016/summit-schedule/events/9102

Frankly, this is a controversial topic since I heard agreements and 
disagreements from different people. It would be great if all of you can join 
the session and share your opinions and use cases. I hope we will have a 
productive discussion.

Best regards,
Hongbin

> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: April-12-16 8:40 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: foundat...@lists.openstack.org
> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all]
> One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
> On 11/04/16 16:53 +, Adrian Otto wrote:
> >Amrith,
> >
> >I respect your point of view, and agree that the idea of a common
> >compute API is attractive… until you think a bit deeper about what
> that
> >would mean. We seriously considered a “global” compute API at the time
> >we were first contemplating Magnum. However, what we came to learn
> >through the journey of understanding the details of how such a thing
> >would be implemented, that such an API would either be (1) the lowest
> >common denominator (LCD) of all compute types, or (2) an exceedingly
> complex interface.
> >
> >You expressed a sentiment below that trying to offer choices for VM,
> >Bare Metal (BM), and Containers for Trove instances “adds considerable
> complexity”.
> >Roughly the same complexity would accompany the use of a comprehensive
> >compute API. I suppose you were imagining an LCD approach. If that’s
> >what you want, just use the existing Nova API, and load different
> >compute drivers on different host aggregates. A single Nova client can
> >produce VM, BM (Ironic), and Container (libvirt-lxc) instances all with
> >a common API (Nova) if it’s configured in this way. That’s what we do.
> >Flavors determine which compute type you get.
> >
> >If what you meant is that you could tap into the power of all the
> >unique characteristics of each of the various compute types (through
> >some modular extensibility framework) you’ll likely end up with
> >complexity in Trove that is comparable to integrating with the native
> >upstream APIs, along with the disadvantage of waiting for OpenStack to
> >continually catch up to the pace of change of the various upstream
> >systems on which it depends. This is a recipe for disappointment.
> >
> >We concluded that wrapping native APIs is a mistake, particularly when
> >they are sufficiently different than what the Nova API already offers.
> >Containers APIs have limited similarities, so when you try to make a
> >universal interface to all of them, you end up with a really
> >complicated mess. It would be even worse if we tried to accommodate
> all
> >the unique aspects of BM and VM as well. Magnum’s approach is to offer
> >the upstream native API’s for the different container orchestration
> >engines (COE), and compose Bays for them to run on that are built from
> >the compute types that OpenStack supports. We do this by using
> >different Heat orchestration templates (and conditional templates) to
> >arrange a COE on the compute type of your choice. With that said,
> there
> >are still gaps where not all storage or network drivers work with
> >Ironic, and there are non-trivial security hurdles to clear to safely
> use Bays composed of libvirt-lxc instances in a multi-tenant
> environment.
> >
> >My suggestion to get what you want for Trove is to see if the cloud
> has
> >Magnum, and if it does, create a bay with the flavor type specified
> for
> >whatever compute type you want, and then use the native API for the
> COE
> >you selected for that bay. Start your instance on the COE, just like
> >you use Nova today. This way, you have low complexity in Trove, and
> you
> >can scale both the number of instances of your data nodes (containers),
> >and the infrastructure on which they run (Nova instances).
>
>
> I've been researching this area and I've reached pretty much the
> same conclusion

Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Adam Young

On 04/18/2016 02:10 PM, Matt Fischer wrote:

Thanks Brant,

I was missing that distinction.

On Mon, Apr 18, 2016 at 9:43 AM, Brant Knudson wrote:




On Mon, Apr 18, 2016 at 10:20 AM, Matt Fischer wrote:

On Mon, Apr 18, 2016 at 8:29 AM, Brant Knudson wrote:



On Fri, Apr 15, 2016 at 9:04 PM, Adam Young wrote:

We all want Fernet to be a reality.  We ain't there
yet (Except for mfish who has no patience) but we are
getting closer.  The goal is to get Fernet as the
default token provider as soon as possible. The review
to do this has uncovered a few details that need to be
fixed before we can do this.

Trusts for V2 tokens were not working correctly.
Relatively easy fix.
https://review.openstack.org/#/c/278693/ Patch is
still failing on Python 3.  The tests are kindof racy
due to the revocation event 1 second granularity. 
Some of the tests here have A sleep (1) in them still,

but all should be using the time control aspect of the
unit test fixtures.

Some of the tests also use the same user to validate a
token as that have, for example, a role unassigned. 
These expose a problem that the revocation events are

catching too many tokens, some of which should not be
treated as revoked.

Also, some of the logic for revocation checking has to
change. Before, if a user had two roles, and had one
removed, the token would be revoked.  Now, however,
the token will validate successful, but the response
will only have the single assigned role in it.


Python 3 tests are failing because the Fernet
formatter is insisting that all project-ids be valid
UUIDs, but some of the old tests have "FOO" and "BAR"
as ids.  These either need to be converted to UUIDS,
or the formatter needs to be more forgiving.

Caching of token validations was messing with
revocation checking. Tokens that were valid once were
being reported as always valid. Thus, the current
review  removes all caching on token validations, a
change we cannot maintain.  Once all the test are
successfully passing, we will re-introduce the cache,
and be far more aggressive about cache invalidation.

Tempest tests are currently failing due to Devstack
not properly identifying Fernet as the default token
provider, and creating the Fernet key repository.  I'm
tempted to just force devstack to always create the
directory, as a user would need it if they ever
switched the token provider post launch anyway.


There's a review to change devstack to default to fernet:
https://review.openstack.org/#/c/195780/ . This was mostly
to show that tempest still passes with fernet configured.
It uncovered a couple of test issues (similar in nature to
the revocation checking issues mentioned in the original
note) that have since been fixed.

We'd prefer to not have devstack overriding config options
and instead use keystone's defaults. The problem is if
fernet is the default in keystone then it won't work out
of the box since the key database won't exist. One option
that I think we should investigate is to have keystone
create the key database on startup if it doesn't exist.

- Brant



I'm not a devstack user, but as I mentioned before, I assume
devstack called keystone-manage db_sync? Why couldn't it also
call keystone-manage fernet_setup?


When you tell devstack that it's using fernet then it does
keystone-manage fernet_setup. When you tell devstack to use the
default, it doesn't fernet_setup because for now it thinks the
default is UUID and doesn't require keys. One way to have devstack
work when fernet is the default is to have devstack always do
keystone-manage fernet_setup.
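For readers unfamiliar with the key repository being discussed, here is a rough sketch of its layout and rotation scheme, with placeholder strings standing in for the real key material that keystone-manage fernet_setup / fernet_rotate would generate (file "0" is the staged key, the highest-numbered file is the primary, and older files remain as secondaries so outstanding tokens still validate):

```python
# A hedged sketch of the fernet key-repository layout and rotation:
# "0" is staged, the highest index is primary, older keys are
# secondaries. Placeholder contents only -- not real key material.
import os
import tempfile

def fernet_setup(repo):
    os.makedirs(repo, mode=0o700, exist_ok=True)
    for name in ("0", "1"):          # "0" staged, "1" primary
        with open(os.path.join(repo, name), "w") as f:
            f.write("placeholder-key-%s" % name)

def fernet_rotate(repo):
    new_primary = max(int(k) for k in os.listdir(repo)) + 1
    # The staged key is promoted to be the new primary...
    os.rename(os.path.join(repo, "0"), os.path.join(repo, str(new_primary)))
    # ...and a fresh staged key takes its place.
    with open(os.path.join(repo, "0"), "w") as f:
        f.write("placeholder-key-%d" % new_primary)

repo = os.path.join(tempfile.mkdtemp(), "fernet-keys")
fernet_setup(repo)
fernet_rotate(repo)
print(sorted(os.listdir(repo)))  # -> ['0', '1', '2']
```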

My thought was to have this as a temporary fix as the default changes.  
Once we settle into Fernet, we can swap to "only Fernet if Fernet".


There is no reason Devstack can't read the config option from Keystone, 
but that is a larger change than I want to make for this.





Really what we want to do is have devstack work like other
deployment methods. We can reasonably expect featureful dep

Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Adam Young

On 04/18/2016 10:29 AM, Brant Knudson wrote:



On Fri, Apr 15, 2016 at 9:04 PM, Adam Young wrote:


We all want Fernet to be a reality.  We ain't there yet (Except
for mfish who has no patience) but we are getting closer.  The
goal is to get Fernet as the default token provider as soon as
possible. The review to do this has uncovered a few details that
need to be fixed before we can do this.

Trusts for V2 tokens were not working correctly. Relatively easy
fix. https://review.openstack.org/#/c/278693/ Patch is still
failing on Python 3.  The tests are kindof racy due to the
revocation event 1 second granularity. Some of the tests here have
A sleep (1) in them still, but all should be using the time
control aspect of the unit test fixtures.

Some of the tests also use the same user to validate a token as
that have, for example, a role unassigned.  These expose a problem
that the revocation events are catching too many tokens, some of
which should not be treated as revoked.

Also, some of the logic for revocation checking has to change.
Before, if a user had two roles, and had one removed, the token
would be revoked.  Now, however, the token will validate
successful, but the response will only have the single assigned
role in it.


Python 3 tests are failing because the Fernet formatter is
insisting that all project-ids be valid UUIDs, but some of the old
tests have "FOO" and "BAR" as ids.  These either need to be
converted to UUIDS, or the formatter needs to be more forgiving.

Caching of token validations was messing with revocation checking.
Tokens that were valid once were being reported as always valid.
Thus, the current review  removes all caching on token
validations, a change we cannot maintain.  Once all the test are
successfully passing, we will re-introduce the cache, and be far
more aggressive about cache invalidation.

Tempest tests are currently failing due to Devstack not properly
identifying Fernet as the default token provider, and creating the
Fernet key repository.  I'm tempted to just force devstack to
always create the directory, as a user would need it if they ever
switched the token provider post launch anyway.


There's a review to change devstack to default to fernet: 
https://review.openstack.org/#/c/195780/ . This was mostly to show 
that tempest still passes with fernet configured. It uncovered a 
couple of test issues (similar in nature to the revocation checking 
issues mentioned in the original note) that have since been fixed.


We'd prefer to not have devstack overriding config options and instead 
use keystone's defaults. The problem is if fernet is the default in 
keystone then it won't work out of the box since the key database 
won't exist. One option that I think we should investigate is to have 
keystone create the key database on startup if it doesn't exist.


In some deployments, the keys should be owned by different users.  In 
general, a system/daemon user should not be writing to /etc.  Key 
rotation etc. is likely to be handled by an external content management 
system, so it might not be the right default.





- Brant







Re: [openstack-dev] [release][requirements][packaging][summit] input needed on summit discussion about global requirements

2016-04-18 Thread Thomas Goirand
Hi Doug,

I very much welcome opening such a thread before the discussion at the
summit, as often, sessions are too short. Taking the time to write
things down first also helps having a more constructive discussion.

Before I reply to each individual message below, let me attempt to reply
to the big picture seen in your etherpad. I was tempted to insert
comments on each lines of it, but I'm not sure how this would be
received, and probably it's best to attempt to reply more globally.

From what I understand, the biggest problem you're trying to solve is
that managing the global-reqs is really time consuming from the release
team's point of view, especially its propagation to individual
projects. There are IMO many things that we could do to improve the
situation which would be acceptable from the package maintainers' point
of view.

First of all, from what I could see in the etherpad, I see a lot of
release work which I consider not useful for anyone: not for downstream
distros, not for upstream projects. Mostly, the propagation of the
global-requirements.txt to each and every individual Python library or
service *for OpenStack maintained libs* could be reviewed, because 1/
distros will always package the highest version available in
upper-constraints.txt, and 2/ it doesn't really reflect a reality. As
you pointed out, project A may need a new feature from lib X, but
project B won't care. I strongly believe that we should leave lower
boundaries as a responsibility of individual projects. What's important,
though, is to make sure that the highest version released does work,
because that's what we will effectively package.
What we can then later on do, at the distribution level, is artificially
set the lower bounds of versions to whatever we've just uploaded for a
given release of OpenStack. In fact, I've been doing this a lot already.
For example, I uploaded Eventlet 0.17.4, and then 0.18.4. There was
never anything in the between. Therefore, doing a dependency like:

Depends: python-eventlet (>= 0.18.3)

makes no sense, and I always pushed:

Depends: python-eventlet (>= 0.18.4)

as this reflects the reality of distros.

If we generalize this concept, then I could push the minimum version of
all oslo libs into every single package for a given version of OpenStack.
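The approach described above can be sketched mechanically: generate the Depends lower bounds from the versions actually uploaded for a release, rather than from the global-requirements minimums. (The eventlet version is from the example above; the oslo.utils entry and its version are purely illustrative.)

```python
# A hedged sketch of "artificially set the lower bounds to whatever was
# just uploaded": derive Debian Depends entries from the versions
# actually packaged for this OpenStack release.
packaged = {
    "python-eventlet": "0.18.4",   # the version uploaded, per the example
    "python-oslo.utils": "3.8.0",  # illustrative only
}

depends = ", ".join(
    "%s (>= %s)" % (pkg, ver) for pkg, ver in sorted(packaged.items())
)
print("Depends: " + depends)
# -> Depends: python-eventlet (>= 0.18.4), python-oslo.utils (>= 3.8.0)
```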

What is a lot more annoying though, is for packages which I do not
control directly, and which are used by many other non-OpenStack
packages inside the distribution. For example, Django, SQLAlchemy or
jQuery, to only name a few.

I have absolutely no problem upping the lower bounds for all of
OpenStack components aggressively. We don't have gate jobs for the lower
bounds of our requirements. If we decide that it becomes the norm, I can
generalize and push for doing this even more. For example, after pushing
the update of an oslo lib B version X, I could push such requirements
everywhere, which in fact, would be a good thing (as this would trigger
rebuilds and recheck of all unit tests). Though, all of this would
benefit from a lot of automation and checks.

On your etherpad, you wrote:

"During the lead-up to preparing the final releases, one of the tracking
tasks we have is to ensure all projects have synced their global
requirements updates. This is another area where we could reduce the
burden on the release team."

Well, don't bother, this doesn't reflect a reality anyway (ie: maybe
service X can use an older version of oslo.utils), so that's not really
helpful in any way.

You also wrote:

"Current ranges in global-requirements are large but most projects do
not actively test the oldest supported version (or other versions in
between) meaning that the requirement might result in broken packages."

Yeah, that's true, I've seen this and reported a few bugs (the last I
have in memory is Neutron requiring SQLA >= 1.0.12). Though those are
still very useful hints for package maintainers *for 3rd party libs* (as
I wrote, it's less important for OpenStack components). We have a few
breakages here and there, but they are hopefully fixed.

Though having a single version that projects are allowed to test with is
super important, so we can check that everything works together. IMO,
that's the part which should absolutely not change. Dropping that is
like opening a Pandora's box. Relying on containers and/or venvs will,
unfortunately, not work, from my standpoint.

The general rule for a distribution is that the highest version always
wins; otherwise, it's never maintainable (for security and bug fixes).
It should be the case for *any program*, not just OpenStack components.
There's never a case where it's OK to use something older just because
it feels like less work to do. This type of "laziness" always leads to
very dangerous outcomes.

Though I don't see any issue with a project willing to keep backward
compatibility with a lower version than what other projects do. In fact,
it's highly desirable to always try to remain compatible with lower
versions

Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Fox, Kevin M
It could also be more difficult for ops to debug if keystone autocreates 
the keys on a cluster when config management is broken and you don't 
realize it: you see keys that were created, but they are totally wrong for 
the cluster.

Though maybe there should be a generic ha=True option in keystone to override 
that behavior and never try creating keys. Then default ha=False.

Thanks,
Kevin

From: Adam Young [ayo...@redhat.com]
Sent: Monday, April 18, 2016 3:14 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Keystone] State of Fernet Token deployment

On 04/18/2016 10:29 AM, Brant Knudson wrote:


On Fri, Apr 15, 2016 at 9:04 PM, Adam Young wrote:
We all want Fernet to be a reality.  We ain't there yet (Except for mfish who 
has no patience) but we are getting closer.  The goal is to get Fernet as the 
default token provider as soon as possible. The review to do this has uncovered 
a few details that need to be fixed before we can do this.

Trusts for V2 tokens were not working correctly.  Relatively easy fix. 
https://review.openstack.org/#/c/278693/ Patch is still failing on Python 3.  
The tests are kindof racy due to the revocation event 1 second granularity.  
Some of the tests here have A sleep (1) in them still, but all should be using 
the time control aspect of the unit test fixtures.

Some of the tests also use the same user to validate a token as that have, for 
example, a role unassigned.  These expose a problem that the revocation events 
are catching too many tokens, some of which should not be treated as revoked.

Also, some of the logic for revocation checking has to change. Before, if a 
user had two roles, and had one removed, the token would be revoked.  Now, 
however, the token will validate successful, but the response will only have 
the single assigned role in it.


Python 3 tests are failing because the Fernet formatter is insisting that all 
project-ids be valid UUIDs, but some of the old tests have "FOO" and "BAR" as 
ids.  These either need to be converted to UUIDS, or the formatter needs to be 
more forgiving.

Caching of token validations was messing with revocation checking. Tokens that 
were valid once were being reported as always valid. Thus, the current review  
removes all caching on token validations, a change we cannot maintain.  Once 
all the test are successfully passing, we will re-introduce the cache, and be 
far more aggressive about cache invalidation.

Tempest tests are currently failing due to Devstack not properly identifying 
Fernet as the default token provider, and creating the Fernet key repository.  
I'm tempted to just force devstack to always create the directory, as a user 
would need it if they ever switched the token provider post launch anyway.


There's a review to change devstack to default to fernet: 
https://review.openstack.org/#/c/195780/ . This was mostly to show that tempest 
still passes with fernet configured. It uncovered a couple of test issues 
(similar in nature to the revocation checking issues mentioned in the original 
note) that have since been fixed.

We'd prefer to not have devstack overriding config options and instead use 
keystone's defaults. The problem is if fernet is the default in keystone then 
it won't work out of the box since the key database won't exist. One option 
that I think we should investigate is to have keystone create the key database 
on startup if it doesn't exist.

In some deployment, they should be owned by different users.  In general, a 
system/daemon user should not be writing to /etc.  Key rotation/etc is likely 
to be handled by an external Content management system, so it might not be the 
right default.



- Brant








Re: [openstack-dev] [Keystone] State of Fernet Token deployment

2016-04-18 Thread Dolph Mathews
On Mon, Apr 18, 2016 at 5:14 PM, Adam Young  wrote:

> On 04/18/2016 10:29 AM, Brant Knudson wrote:
>
>
>
> On Fri, Apr 15, 2016 at 9:04 PM, Adam Young  wrote:
>
>> We all want Fernet to be a reality.  We ain't there yet (Except for mfish
>> who has no patience) but we are getting closer.  The goal is to get Fernet
>> as the default token provider as soon as possible. The review to do this
>> has uncovered a few details that need to be fixed before we can do this.
>>
>> Trusts for V2 tokens were not working correctly.  Relatively easy fix.
>> https://review.openstack.org/#/c/278693/ Patch is still failing on
>> Python 3.  The tests are kindof racy due to the revocation event 1 second
>> granularity.  Some of the tests here have A sleep (1) in them still, but
>> all should be using the time control aspect of the unit test fixtures.
>>
>> Some of the tests also use the same user to validate a token as that
>> have, for example, a role unassigned.  These expose a problem that the
>> revocation events are catching too many tokens, some of which should not be
>> treated as revoked.
>>
>> Also, some of the logic for revocation checking has to change. Before, if
>> a user had two roles, and had one removed, the token would be revoked.
>> Now, however, the token will validate successful, but the response will
>> only have the single assigned role in it.
>>
>>
>> Python 3 tests are failing because the Fernet formatter is insisting that
>> all project-ids be valid UUIDs, but some of the old tests have "FOO" and
>> "BAR" as ids.  These either need to be converted to UUIDS, or the formatter
>> needs to be more forgiving.
>>
>> Caching of token validations was messing with revocation checking. Tokens
>> that were valid once were being reported as always valid. Thus, the current
>> review  removes all caching on token validations, a change we cannot
>> maintain.  Once all the test are successfully passing, we will re-introduce
>> the cache, and be far more aggressive about cache invalidation.
>>
>> Tempest tests are currently failing due to Devstack not properly
>> identifying Fernet as the default token provider, and creating the Fernet
>> key repository.  I'm tempted to just force devstack to always create the
>> directory, as a user would need it if they ever switched the token provider
>> post launch anyway.
>>
>>
> There's a review to change devstack to default to fernet:
> https://review.openstack.org/#/c/195780/ . This was mostly to show that
> tempest still passes with fernet configured. It uncovered a couple of test
> issues (similar in nature to the revocation checking issues mentioned in
> the original note) that have since been fixed.
>
> We'd prefer to not have devstack overriding config options and instead use
> keystone's defaults. The problem is if fernet is the default in keystone
> then it won't work out of the box since the key database won't exist. One
> option that I think we should investigate is to have keystone create the
> key database on startup if it doesn't exist.
>
>
> In some deployment, they should be owned by different users.  In general,
> a system/daemon user should not be writing to /etc.  Key rotation/etc is
> likely to be handled by an external Content management system, so it might
> not be the right default.
>

+1 Besides race conditions, this is why keystone doesn't try to
"automatically" populate /etc/keystone/fernet-keys/ on startup. Fernet keys
only need to be readable by the user running keystone.
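The permission model Dolph describes can be sketched like this (a toy layout under a temp directory, standing in for /etc/keystone/fernet-keys, with placeholder contents instead of real key material):

```python
# A hedged sketch of the expected key-repository permissions: a 0700
# directory owned by the keystone user, with 0600 key files, and no
# need for keystone itself to write to /etc at runtime.
import os
import stat
import tempfile

repo = tempfile.mkdtemp()            # stand-in for /etc/keystone/fernet-keys
os.chmod(repo, 0o700)                # only the owning user may enter
for name in ("0", "1"):
    path = os.path.join(repo, name)
    with open(path, "w") as f:
        f.write("placeholder")       # stand-in for real key material
    os.chmod(path, 0o600)            # owner read/write only

mode = stat.S_IMODE(os.stat(os.path.join(repo, "0")).st_mode)
print(oct(mode))  # -> 0o600
```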


>
>
>
> - Brant
>
>
>
>
>
>
>
>

