[openstack-dev] [tricircle]agenda of weekly meeting Sept.21

2016-09-21 Thread joehuang
Agenda of Sept.21 weekly meeting, let's continue the topics:


# freeze date for Newton release

# patch planned to be merged before freeze date

# open discussion


How to join:

#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting, every
Wednesday starting at 13:00 UTC.


If you have other topics to be discussed in the weekly meeting, please reply to
this mail.


Best Regards
Chaoyi Huang (joehuang)


[openstack-dev] [tricircle]PTL nomination end, and no voting needed

2016-09-21 Thread joehuang
Hello, 

During the last weekly meeting we agreed that the deadline for PTL candidacy
would be this Tuesday at noon; since we are not an OpenStack big-tent project
yet, the deadline does not need to be aligned with the other official projects.

According to the OpenStack community election process, voting is held only if
there is more than one nomination. As there is only one nomination so far, no
voting is required.

As a result of the nomination, I'll continue to serve as PTL for the Ocata
release. Thank you all for your great contributions; let's do even better in
the coming Ocata release.

Best Regards
Chaoyi Huang (joehuang)


From: Shinobu Kinjo [shinobu...@gmail.com]
Sent: 18 September 2016 19:48
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle]PTL candidacy

No objection.
I really appreciate your great leadership.

Cheers,
Shinobu

On Sun, Sep 18, 2016 at 6:26 PM, joehuang  wrote:
> Hello,
>
> This is Chaoyi Huang(nick: joehuang), I would like to nominate myself as
> Tricircle PTL in Ocata release.
>
> Before looking forward to what to do in the Ocata release, I want to take a
> short look back at what Tricircle has done in Newton. The objectives and our
> progress in the Newton release are:
>
> * Cross-OpenStack L2 networking: "local network" and "shared VLAN" are
> supported in Tricircle for L2/L3 networking functionality; this is a great
> foundation for our Newton release.
> * Dynamic pod binding: the framework is in review.
> * Tempest and basic VM/volume operations: Tempest-based check and gate tests
> have been integrated into the process, covering the basic features.
> * Keep Tricircle following the OpenStack open-source development guidelines,
> to make the Tricircle project as open as any other OpenStack project, so that
> more talent will join and contribute to Tricircle. The target is to become an
> OpenStack big-tent project, be a member of the OpenStack ecosystem, and help
> the ecosystem address the multi-OpenStack-cloud problem domain, whether in
> one site or multiple sites. Tricircle is now applying for big-tent status,
> the splitting is ongoing, and we are glad to see more and more contributors
> join the Tricircle community.
>
>
> This is great progress for Tricircle, based on every contributor's great
> effort.
>
> As Tricircle will be split out and dedicated to cross-Neutron networking
> automation, my vision for Tricircle in Ocata is based on what we built in
> Newton and what we have discussed in recent weeks:
>
> * Ensure the Tricircle split goes smoothly and with quality.
> * Continue cross-OpenStack L2/L3 networking, and work with upstream projects
> like Neutron and L2GW.
> * Be a better citizen in OpenStack, continue the big-tent application, and
> collaborate with other projects.
> * Enhance the installation method and make it easier to try out Tricircle,
> so that more contributors will want to join.
> * Start advanced networking service support in Ocata if possible.
>
> Since the Tricircle project tries to address fundamental cross-Neutron
> networking automation challenges, it's exciting to work on and contribute to
> such a project; let's enjoy the journey.
>
> Best Regards
> Chaoyi Huang(joehuang)
>
>



--
Email:
shin...@linux.com
shin...@redhat.com



[openstack-dev] [heat] [horizon] why is heat service-list limited to 'admin' project?

2016-09-21 Thread Akihiro Motoki
Hi,

The default policy.json provided by heat limits the 'service-list' API to the
'admin' project, as shown below.
Is there any reason a user with the 'admin' role in a non-'admin' project
cannot see service-list?

   "service:index": "rule:context_is_admin",
"context_is_admin": "role:admin and is_admin_project:True",

I noticed this while investigating a horizon bug:
https://bugs.launchpad.net/horizon/+bug/1624834.
Horizon currently has a slightly different policy engine and it does not
support is_admin_project:True.
We would like to know the background of this default configuration.
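
As a workaround we could presumably override the rule in a local policy.json
so that any user with the 'admin' role can list services, e.g.:

    "service:index": "role:admin",

but we would prefer to understand the reasoning behind the default first.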

Thanks,
Akihiro



Re: [openstack-dev] [cinder][sahara] LVM vs BDD drivers performance tests results

2016-09-21 Thread Michał Dulko
On 09/20/2016 05:48 PM, John Griffith wrote:
> On Tue, Sep 20, 2016 at 9:06 AM, Duncan Thomas
> <duncan.tho...@gmail.com> wrote:
>
> On 20 September 2016 at 16:24, Nikita Konovalov
> <nkonova...@mirantis.com> wrote:
>
> Hi,
>
> From the Sahara (and Hadoop workloads in general) use case, the
> reason we used BDD was the complete absence of any overhead on
> compute resource utilization.
>
> The results show that the LVM + local target performs pretty
> close to BDD in synthetic tests. It's a good sign for LVM. It
> actually shows that most of the storage virtualization
> overhead is not caused by the LVM partitions and drivers
> themselves but rather by the iSCSI daemons.
>
> So I would still like to have the ability to attach partitions
> locally bypassing the iSCSI to guarantee 2 things:
> * Make sure that lio processes do not compete for CPU and RAM
> with VMs running on the same host.
> * Make sure that CPU intensive VMs (or whatever else is
> running nearby) are not blocking the storage.
>
>
> So these are, unless we see the effects via benchmarks, completely
> meaningless requirements. Ivan's initial benchmarks suggest
> that LVM+LIO is pretty much close enough to BDD even with iSCSI
> involved. If you're aware of a case where it isn't, the first
> thing to do is to provide proof via a reproducible benchmark.
> Otherwise we are likely to proceed, as John suggests, with the
> assumption that local target does not provide much benefit. 
>
> I've a few benchmarks myself that I suspect will find areas where
> getting rid of iSCSI is benefit, however if you have any then you
> really need to step up and provide the evidence. Relying on vague
> claims of overhead is now proven to not be a good idea. 
>
>
> Honestly we can have both, I'll work up a bp to resurrect the idea of
> a "smart" scheduling feature that lets you request the volume be on
> the same node as the compute node and use it directly, and then if
> it's NOT it will attach a target and use it that way (in other words
> you run a stripped down c-vol service on each compute node).

Don't we have at least the scheduling problem solved [1] already?

[1]
https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/instance_locality_filter.py
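
For reference, a rough sketch of how that filter is used today (assuming the
hint keyword is still 'local_to_instance'; the option values below are only
illustrative):

    # cinder.conf on the scheduler node
    [DEFAULT]
    scheduler_default_filters = AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter,InstanceLocalityFilter

    # then, at volume-create time
    cinder create --hint local_to_instance=<instance-uuid> 10

That only solves placement though - the volume is still consumed through a
target rather than attached directly on the host.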

>
> Sahara keeps insisting on being a snow-flake with Cinder volumes and
> the block driver, it's really not necessary.  I think we can
> compromise just a little both ways, give you standard Cinder semantics
> for volumes, but allow you direct access to them if/when requested,
> but have those be flexible enough that targets *can* be attached so
> they meet all of the required functionality and API implementations. 
> This also means that we don't have to continue having a *special*
> driver in Cinder that frankly only works for one specific use case and
> deployment.
>
> I've pointed to this a number of times but it never seems to
> resonate... but I never learn so I'll try it once again [1].  Note
> that was before the name "brick" was hijacked and now means something
> completely different.
>
> [1]: https://wiki.openstack.org/wiki/CinderBrick
>
> Thanks,
> John




[openstack-dev] [manila] Enable IPv6 in Manila Ocata

2016-09-21 Thread jun zhong
Hi,

As agreed by the manila community in the IRC meeting, we are trying to enable
IPv6 in Ocata. Please check the brief spec [1] and code [2].

The areas affected most are the API (access rules) and the drivers (access
rules & export locations). This change intends to add IPv6 format validation
for the 'ip' access rule type in the allow_access API, allowing manila to
support IPv6 ACLs.
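
For illustration, once the change lands, granting access to an IPv6 client
network should presumably look just like the IPv4 case (the values are only
examples):

    manila access-allow <share> ip 2001:db8::/64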

To all driver maintainers: could you please test the IPv6 feature code [2] to
make sure your driver can fully support IPv6? If anything else is still not
IPv6-ready, please let me know. Thanks.
[1] https://review.openstack.org/#/c/362786/
[2] https://review.openstack.org/#/c/312321/


Thanks,
Jun


[openstack-dev] [vitrage] Barcelona design sessions

2016-09-21 Thread Afek, Ifat (Nokia - IL)
Hi,

As discussed in our IRC meeting today, you are welcome to suggest topics for 
vitrage design sessions in Barcelona:
https://etherpad.openstack.org/p/vitrage-barcelona-design-sessions

Thanks,
Ifat.



[openstack-dev] [Kuryr] Kuryr IPVlan Code PoC

2016-09-21 Thread Daly, Louise M
Hi everyone,

As promised here is a link to the code PoC for the Kuryr-IPVlan proposal.
https://github.com/lmdaly/kuryr-libnetwork

Link to specific commit
https://github.com/lmdaly/kuryr-libnetwork/commit/1dc895a6d8bfaa03c0dd5cfb2d3e23e2e948a67c

From here you can clone the repo and install Kuryr as you normally would, with
a few additional steps:

1. The IPVlan driver must be installed on the VM/machine that the PoC will be
run on. Fedora Server (not the cloud image) includes the driver by default,
but images such as the cloud image must be modified to include it.
2. You must install Docker experimental.
3. You must use the Kuryr IPAM driver for address management.
4. To enable IPVlan mode you must change the ipvlan option in the kuryr.conf
file from false to true.
5. You must also change the ifname option to match the interface of the
private network you wish to run the containers on (the default is ens3). A
sketch of both options is shown after this list.
6. As listed in the limitations in kuryr's README.rst, "To create Docker
networks with subnets having same/overlapping cidr, it is expected to pass
unique pool name for each such network creation Docker command." You will need
to do this if you are creating a Docker network with the same private network
on another VM.
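
For steps 4 and 5, the relevant part of kuryr.conf would look roughly like the
following (the section shown and the interface name are only illustrative;
adjust them to your environment):

    [DEFAULT]
    # Enable the IPVlan binding mode added by this PoC
    ipvlan = True
    # Interface of the private network the containers should use
    ifname = ens3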

The IPVlan proposal was sent out to the mailing list - link for those who 
missed it.
http://osdir.com/ml/openstack-dev/2016-09/msg00816.html

Please send any feedback, issues, comments, bugs.

Thanks,
Louise


--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.


Re: [openstack-dev] [Neutron] Adding ihrachys to the neutron-drivers team

2016-09-21 Thread Rossella Sblendido

Congratulations Ihar! You really deserved this, I am sure you'll do great.

Rossella

On 09/20/2016 10:57 AM, Miguel Angel Ajo Pelayo wrote:

Congratulations Ihar!, well deserved through hard work! :)

On Mon, Sep 19, 2016 at 8:03 PM, Brian Haley  wrote:

Congrats Ihar!

-Brian


On 09/17/2016 12:40 PM, Armando M. wrote:


Hi folks,

I would like to propose Ihar to become a member of the Neutron drivers
team [1].

Ihar's wide knowledge of the Neutron codebase, and his longstanding duties as
stable core, downstream package whisperer, and release and oslo liaison (I am
sure I am forgetting some other capacity he serves in), are going to make him
very comfortable in the newly appointed role, and help him grow and become
even wiser.

Even though we have not been meeting regularly lately we will resume our
Thursday meetings soon [2], and having Ihar onboard by then will be highly
beneficial.

Please join me in welcoming Ihar to the team.

Cheers,
Armando

[1]
http://docs.openstack.org/developer/neutron/policies/neutron-teams.html#drivers-team


[2] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers





Re: [openstack-dev] [tripleo] Setting kernel args to overcloud nodes

2016-09-21 Thread Saravanan KR
I have been working on the (first-boot) user-data scripts for updating the
kernel args on the overcloud node [1]. The precondition is that the kernel
args have to be applied and the node has to be rebooted before os-net-config
runs.
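
For reference, the kernel args in question are the usual DPDK/SR-IOV ones in
/etc/default/grub, along these lines (the values are only illustrative and
depend on the hardware):

    GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt default_hugepagesz=1G hugepagesz=1G hugepages=16 isolcpus=2-19"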

I ran into a problem where the provisioning network does not get an IP after
the reboot issued by the user-data script. While investigating, I figured out
that network.service brings up the NICs in alphanumeric order, and the first
NIC is not the one used for provisioning. network.service initiates a DHCP
DISCOVER on it; when that times out, network.service goes to a failed state
and all the other interfaces are left DOWN. If I manually bring the interface
up (via the IPMI console), then everything proceeds fine without any issue.

To overcome this issue, I have written a small script that finds the
provisioning network via the metadata service (the metadata contains the MAC
address of the provisioning NIC) and sets BOOTPROTO=none in the ifcfg files of
all interfaces except the provisioning one. There is still an issue of the IP
not being ready at the time the metadata is queried; I temporarily added a
sleep, which works around it. The user-data script [1] has all these fixes and
has been tested on a baremetal overcloud node.
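
The core of the idea is roughly the following Python sketch of what the shell
script in [1] does (the metadata endpoint, the retry loop and the sysconfig
paths here are simplifications/assumptions; the gist remains the reference):

#!/usr/bin/env python
# Simplified sketch: disable DHCP on every NIC except the provisioning one,
# whose MAC we learn from the metadata service.
import json
import os
import re
import time
import urllib2

NET_DATA = 'http://169.254.169.254/openstack/latest/network_data.json'


def metadata_macs():
    # The metadata IP may not be reachable right after boot, so retry
    # briefly instead of relying on a single fixed sleep.
    for _ in range(30):
        try:
            data = json.load(urllib2.urlopen(NET_DATA, timeout=5))
            return set(link['ethernet_mac_address'].lower()
                       for link in data.get('links', []))
        except Exception:
            time.sleep(2)
    raise RuntimeError('metadata service not reachable')


def main():
    provisioning = metadata_macs()
    for dev in os.listdir('/sys/class/net'):
        if dev == 'lo':
            continue
        mac = open('/sys/class/net/%s/address' % dev).read().strip().lower()
        if mac in provisioning:
            continue  # leave the provisioning NIC on DHCP
        cfg = '/etc/sysconfig/network-scripts/ifcfg-%s' % dev
        if not os.path.exists(cfg):
            continue
        text = open(cfg).read()
        # Stop network.service from waiting for DHCP on this interface.
        text = re.sub(r'(?m)^BOOTPROTO=.*$', 'BOOTPROTO=none', text)
        open(cfg, 'w').write(text)


if __name__ == '__main__':
    main()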

If anyone has a better way of doing it, you are more than welcome to suggest it.

Regards,
Saravanan KR

[1] https://gist.github.com/krsacme/1234bf024ac917c74913827298840c1c

On Wed, Jul 27, 2016 at 6:52 PM, Saravanan KR  wrote:
> Hello,
>
> We are working on SR-IOV & DPDK tripleo integration. In which, setting
> the kernel args for huge pages, iommu and cpu isolation is required.
> Earlier we were working on setting of kernel args via IPA [1], reasons
> being:
> 1. IPA is installing the boot loader on the overcloud node
> 2. Ironic knows the hardware spec, using which, we can target specific
> args to nodes via introspection rules
>
> As the proposal is to change the image-owned file '/etc/default/grub',
> the ironic team has suggested using the instance user data to set the
> kernel args [2][3], instead of IPA. In the suggested approach, we are
> planning to update /etc/default/grub, update /etc/grub2.cfg and then
> issue a reboot. The reboot is mandatory because os-net-config will
> configure the DPDK bridges and ports by binding the DPDK driver, which
> requires the kernel args for IOMMU and huge pages to be set.
>
> As discussed in the tripleo IRC meeting, we need to ensure that the
> user data that updates the kernel args does not overlap with any other
> puppet configuration. Please let us know if you have any comments on
> this approach.
>
> Regards,
> Saravanan KR
>
> [1] https://review.openstack.org/#/c/331564/
> [2] 
> http://docs.openstack.org/developer/ironic/deploy/install-guide.html#appending-kernel-parameters-to-boot-instances
> [3] 
> http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html#firstboot-extra-configuration



Re: [openstack-dev] [tripleo] [puppet] Preparing TripleO agenda for Barcelona - action needed

2016-09-21 Thread Giulio Fidente

On 09/19/2016 10:49 PM, Emilien Macchi wrote:

(adding puppet tag for cross project session).

Let's continue to prepare TripleO sessions.

https://etherpad.openstack.org/p/ocata-tripleo

For reminder, we have 2 fishbowls and 4 working rooms.
I looked at the topic proposals and I started to organize some sessions.

Some actions from you are required:
- review the session proposal
- if you want to drive a session, please put your name in "Chair".
- for each session we need to choose if we want it to be a work room
or a fishbowl session.
- 4 topics are still there, please propose a session (concatenate them
if possible)
- if you missed this etherpad until now, feel free to propose a
session with your topic (ex: TripleO UI - roadmap, etc).

Last but not least, I would propose a cross-project session with the
Puppet OpenStack group (using a slot from their schedule), so we might
have a 7th session.


the cross project session with the puppet group is a nice idea indeed, 
thanks Emilien


in that context it would be nice to gather some ideas/feedback on the 
status of openstack integration scenarios vs tripleo scenarios and see 
if we can optimize resources and/or coverage

--
Giulio Fidente
GPG KEY: 08D733BA | IRC: gfidente



[openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Thierry Carrez
Hi everyone,

As announced previously[1][2], there were no PTL candidates within the
election deadline for a number of official OpenStack project teams:
Astara, UX, OpenStackSalt and Security.

In the Astara case, the current team working on it would like to abandon
the project (and let it be available for any new team who wishes to take
it away). A change should be proposed really soon now to go in that
direction.

In the UX case, the current PTL (Piet Kruithof) very quickly reacted,
explained his error and asked to be considered for the position for
Ocata. The TC will officialize his nomination at the next meeting,
together with the newly elected PTLs.

That leaves us with OpenStackSalt and Security, where nobody reacted to
the announcement that we are missing PTL candidates. That points to a
real disconnect between those teams and the rest of the community. Even
if you didn't have the election schedule in mind, it was pretty hard to
miss all the PTL nominations in the email last week.

The majority of TC members present at the meeting yesterday suggested
that those project teams should be removed from the Big Tent, with their
design summit space allocation slightly reduced to match that (and make
room for other not-yet-official teams).

In the case of OpenStackSalt, it's a relatively new addition, and if
they get their act together they could probably be re-proposed in the
future. In the case of Security, it points to a more significant
disconnect (since it's not the first time the PTL misses the nomination
call). We definitely still need to care about Security (and we also need
a home for the Vulnerability Management team), but I think the "Security
team" acts more like a workgroup than as an official project team, as
evidenced by the fact that nobody in that team reacted to the lack of
PTL nomination, or the announcement that the team missed the bus.

The suggested way forward there would be to remove the "Security project
team", have the Vulnerability Management Team file to be its own
official project team (in the same vein as the stable maintenance team),
and have Security be just a workgroup rather than a project team.

Thoughts, comments ?

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103904.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103939.html

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tc]a chance to meet all TCs for Tricircle big-tent application in Barcelona summit?

2016-09-21 Thread Mike Perez
On 00:48 Sep 21, joehuang wrote:
> Thank you for the message. Will the weekly IRC meeting in that week also be 
> cancelled during
> the summit period according to your experience?

Likely yes.

-- 
Mike Perez



Re: [openstack-dev] [stackalytics] [deb] [packaging] OpenStack contribution stats skewed by deb-* projects

2016-09-21 Thread Thierry Carrez
Ilya Shakhat wrote:
> Hi,
> 
> tldr; Commits stats are significantly skewed by deb-* projects
> (http://stackalytics.com/?metric=commits&module=packaging-deb-group)
> 
> By default Stackalytics processes commits from project's master branch.
> For some "old core" projects there is configuration to process stable
> branches as well. If some commit is cherry-picked from master to stable
> it is counted twice in both branches / releases. The configuration for
> stable branch is simple - branch starting with branching point (e.g.
> stable/newton that starts with rc1)
> 
> In deb-* projects the master branch corresponds to the upstream Debian
> community, and all OpenStack-related contributions go into the debian/
> branch. Unlike in the rest of OpenStack, the git workflow differs and
> the branch contains merge commits from master. This makes filtering
> "pure" branch commits from those that came from master quite tricky (it
> is not possible to specify the branch point), and supporting it will
> require changes in the Stackalytics code.
> 
> Since we are currently at a time when people may get nervous about
> numbers, I'd suggest temporarily hiding all commits from deb-* projects
> and revisiting the stats processing in a month.

Sounds good. Are you working on it ?


-- 
Thierry Carrez (ttx)



[openstack-dev] [ptl] code churn and questionable changes

2016-09-21 Thread Amrith Kumar
Of late I've been seeing a lot of rather questionable changes that appear to be 
getting blasted out across multiple projects; changes that cause considerable 
code churn, and don't (IMHO) materially improve the quality of OpenStack.

I'd love to provide a list of the changes that triggered this email but I know 
that this will result in a rat hole where we end up discussing the merits of 
the individual items on the list and lose sight of the bigger picture. That 
won't help address the question I have below in any way, so I'm at a 
disadvantage of having to describe my issue in abstract terms.



Here's how I characterize these changes (changes that meet one or more of these 
criteria):



-Contains little or no information in the commit message (often just a
single line)

-Makes some generic statement like "Do X not Y", "Don't use Z", "Make ABC 
better" with no further supporting information

-Fails (literally) every single CI job, clearly never tested by the developer

-Gets blasted across many projects, literally tens with often the same kind 
of questionable (often wrong) change

-Makes a stylistic python improvement that is not enforced by any check 
(causes a cottage industry of changes making the same correction every couple 
of months)

-Reverses some previous python stylistic improvement with no clear reason 
(another cottage industry)



I've tried to explain it to myself as enthusiasm, and a desire to contribute 
aggressively; I've lapsed into cynicism at times and tried to explain it as 
gaming the numbers system, but all that is merely rationalization and doesn't 
help.



Over time, the result generally is that these developers' changes get ignored. 
And that's not a good thing for the community as a whole. We want to be a 
welcoming community and one which values all contributions so I'm looking for 
some suggestions and guidance on how one can work with contributors to try and 
improve the quality of these changes, and help the contributor feel that their 
changes are valued by the project? Other more experienced PTL's, ex-PTL's, long 
time open-source-community folks, I'm seriously looking for suggestions and 
ideas.



Any and all input is welcome, do other projects see this, how do you handle it, 
is this normal, ...



Thanks!



-amrith










Re: [openstack-dev] [puppet] Core nominations

2016-09-21 Thread Dmitry Tantsur

Thanks for your trust, much appreciated!

On 09/16/2016 10:44 PM, Emilien Macchi wrote:

Cool, sounds like great feedback here!

so I created the new gerrit groups and assign new members into it.
Congrats folks!

On Thu, Sep 15, 2016 at 12:52 PM, Ivan Berezovskiy
 wrote:

+1, great job, guys! Keep rocking!

2016-09-15 18:03 GMT+03:00 Denis Egorenko :


+1, good job

2016-09-15 17:44 GMT+03:00 Matt Fischer :


+1 to all. Thanks for your work guys!

On Thu, Sep 15, 2016 at 6:59 AM, Emilien Macchi 
wrote:


While our group keeps moving, it's time to propose new people to be
part of the core team again.

Dmitry Tantsur / puppet-ironic
Dmitry is the guardian of puppet-ironic. He's driving most of the
recent features in this module and he now fully deserves being core on
it.

Pradeep Kilambi / puppet-aodh,ceilometer,gnocchi,panko
Prad is our Telemetry guru and he never stops bringing attention to
these modules! Keep going Prad, we appreciate your help here.

Iury Gregory / all modules
Iury is our padawan. Still learning, but learning fast, he has been a
continuous contributor over the last months. He's always here on IRC
and during meetings to help.
He always volunteers to help, and not just with the most fun tasks (he drove
the authtoken work during Newton). I would like to reward his work and
show that we trust him to be a good core reviewer.
Iury, keep going in your efforts!


If your name is not here yet, please keep doing consistent work: helping
with bug triage, maintaining stable CI, doing good reviews, improving our
documentation, etc.

As usual, Puppet OpenStack core team is free to -1 / +1 the proposal.

Thanks,
--
Emilien Macchi







--
Best Regards,
Egorenko Denis,
Senior Deployment Engineer
Mirantis






--
Thanks, Ivan Berezovskiy
MOS Puppet Team Lead
at Mirantis

slack: iberezovskiy
skype: bouhforever
phone: + 7-960-343-42-46












Re: [openstack-dev] [ptl] code churn and questionable changes

2016-09-21 Thread gordon chung
i feel like this gets brought up every year. we block these patches in 
Telemetry projects unless they can be justified beyond the copy/paste 
description.

in addition to this, please, PLEASE stop creating 'all project bugs'. i 
don't want to get emails on updates to projects unrelated to the ones i 
care about. also, it makes updating the bug impossible because it times 
out. i'm too lazy to search the ML but this has been raised before, please stop.

let's all unite together and block these patches to bring an end to it. :)

On 21/09/16 07:56 AM, Amrith Kumar wrote:
> Of late I've been seeing a lot of rather questionable changes that
> appear to be getting blasted out across multiple projects; changes that
> cause considerable code churn, and don't (IMHO) materially improve the
> quality of OpenStack.
>
> I’d love to provide a list of the changes that triggered this email but
> I know that this will result in a rat hole where we end up discussing
> the merits of the individual items on the list and lose sight of the
> bigger picture. That won’t help address the question I have below in any
> way, so I’m at a disadvantage of having to describe my issue in abstract
> terms.
>
>
>
> Here’s how I characterize these changes (changes that meet one or more
> of these criteria):
>
>
>
> -Contains little of no information in the commit message (often just
> a single line)
>
> -Makes some generic statement like “Do X not Y”, “Don’t use Z”,
> “Make ABC better” with no further supporting information
>
> -Fail (literally) every single CI job, clearly never tested by the
> developer
>
> -Gets blasted across many projects, literally tens with often the
> same kind of questionable (often wrong) change
>
> -Makes a stylistic python improvement that is not enforced by any
> check (causes a cottage industry of changes making the same correction
> every couple of months)
>
> -Reverses some previous python stylistic improvement with no clear
> reason (another cottage industry)
>
>
>
> I’ve tried to explain it to myself as enthusiasm, and a desire to
> contribute aggressively; I’ve lapsed into cynicism at times and tried to
> explain it as gaming the numbers system, but all that is merely
> rationalization and doesn’t help.
>
>
>
> Over time, the result generally is that these developers’ changes get
> ignored. And that’s not a good thing for the community as a whole. We
> want to be a welcoming community and one which values all contributions
> so I’m looking for some suggestions and guidance on how one can work
> with contributors to try and improve the quality of these changes, and
> help the contributor feel that their changes are valued by the project?
> Other more experienced PTL’s, ex-PTL’s, long time open-source-community
> folks, I’m seriously looking for suggestions and ideas.
>
>
>
> Any and all input is welcome, do other projects see this, how do you
> handle it, is this normal, …
>
>
>
> Thanks!
>
>
>
> -amrith
>

cheers,
-- 
gord



Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Rob C
For my part, I missed the elections, that's my bad. I normally put a
calendar item in for that issue. I don't think that my missing the election
date should result in the group being treated in this way. Members of the
TC have contacted me about unrelated things recently, I have always been
available however my schedule has made it hard for me to sift through -dev
recently and I missed the volley of nomination emails. This is certainly a
failing on my part.

It's certainly true that the security team, and our cores tend not to pay
as much attention to the -dev mailing list as we should. The list is pretty
noisy and  traditionally we always had a separate list that we used for
security and since moving away from that we tend to focus on IRC or direct
emails. Though as can be seen with our core announcements etc, we do try to
do things the "openstack way"

However, to say we're not active is, I think, a bit unfair. Thierry and
others regularly mail me directly about things like rooms for the summit
and I typically respond in good time, I think what's happened here is more
an identification of the fact that we need to focus more on doing things
"the openstack way" rather than being kicked out of the big tent.

We regularly work with the VMT on security issues, and we issue large amounts
of guidance on our own. We have been working hard on an asset-based threat
analysis process for OpenStack teams who are looking to be security
managed, and we've reviewed external TA documentation. Recently, at our
midcycle (yes, we're dedicated enough to fly to Texas and meet up to work
on such issues), we created the first real set of security documents for an
OpenStack project: we worked with Barbican to apply the asset-based threat
analysis that we'd like to engage other teams in [1], [2].

Here are a couple of the things we've been doing in this cycle:
* Issuing Security Notes for Glance, Nova, Horizon, Bandit, Neutron and
Barbican[3]
* Updating the security guide (the book we wrote on securing OpenStack)[4]
* Hosting a midcycle and inducting new members
* Supporting the VMT with several embargoed and complex vulnerabilities
* Building up a security blog[5]
* Making OpenStack the biggest open source project to ever receive the Core
Infrastructure Initiative Best Practices Badge [6][7]
* Working on the OpenStack Security Whitepaper [8]
* Developing CI security tooling such as Bandit [9]

We are a very active team, working extremely hard on trying to make
OpenStack secure. This is often a thankless task; we provide a lot of what
customers are asking for from OpenStack but as we don't drive individual
flagship features our contributions are often overlooked. However, above is
just a selection of what we've been doing throughout the last cycle.

If it's too late for these comments to have an influence then so be it, but
this is more a failure of appropriate levels of email filtering, and perhaps
a highlight of how we need to alter our culture somewhat to participate more
in -dev in general, than it is any indication of a lack of dedication, time,
effort or contribution on the part of the Security Project. We have
dedicated huge amounts of effort to OpenStack, and to relegate us to a
working group would be massively detrimental for one reason above all
others: we get corporate participation, time and effort in terms of
employee hours and contributions because we're an official part of
OpenStack, and we've had to build this up over time. If you remove the
Security Project from the big tent I believe that participation in security
for OpenStack will drop off significantly.

We are active, we are helping to make OpenStack secure, and we (I) suck at
keeping on top of email. Don't kick us out for that. If need be we can find
another PTL, or otherwise take special steps to ensure that missing an
election doesn't happen again.

Apart from missing elections, I think we do a huge amount for the community
and removing us from OpenStack would in no way be beneficial to either the
Security Project or OpenStack as a whole.

-Rob

[1] https://review.openstack.org/#/c/357978/5
[2] https://etherpad.openstack.org/p/barbican-threat-analysis
[3] https://wiki.openstack.org/wiki/Security_Notes
[4] http://docs.openstack.org/sec/
[5] https://openstack-security.github.io/
[6] https://bestpractices.coreinfrastructure.org/
[7]
http://www.businesswire.com/news/home/20160725005133/en/OpenStack-Earns-Core-Infrastructure-Initiative-Practices-Badge
[8] https://www.openstack.org/software/security/
[9] https://wiki.openstack.org/wiki/Security/Projects/Bandit




On Wed, Sep 21, 2016 at 12:23 PM, Thierry Carrez 
wrote:

> Hi everyone,
>
> As announced previously[1][2], there were no PTL candidates within the
> election deadline for a number of official OpenStack project teams:
> Astara, UX, OpenStackSalt and Security.
>
> In the Astara case, the current team working on it would like to abandon
> the project (and let it be available for any new team who wishes to take
> it away). A change sho

Re: [openstack-dev] [ptl] code churn and questionable changes

2016-09-21 Thread Sean Dague
If this is the bug that triggered this discussion, yes, please never do
anything like that -
https://bugs.launchpad.net/python-openstacksdk/+bug/1475722

The bug now has too many projects to take any actions on it.

I think a basic rule of thumb is that before creating a bug that has more
than 4 projects on it, you should make sure that you've socialized the idea
on the mailing list. Work that actually cuts across that many projects is
really going to need a community discussion to figure out where it phases
things in.

On 09/21/2016 08:15 AM, gordon chung wrote:
> i feel like this gets brought up every year. we block these patches in 
> Telemetry projects unless they can be justified beyond the copy/paste 
> description.
> 
> in addition to this, please, PLEASE stop creating 'all project bugs'. i 
> don't want to get emails on updates to projects unrelated to the ones i 
> care about. also, it makes updating the bug impossible because it times 
> out. i'm too lazy to search ML but this has been raise before, please stop.
> 
> let's all unite together and block these patches to bring an end to it. :)
> 
> On 21/09/16 07:56 AM, Amrith Kumar wrote:
>> Of late I've been seeing a lot of rather questionable changes that
>> appear to be getting blasted out across multiple projects; changes that
>> cause considerable code churn, and don't (IMHO) materially improve the
>> quality of OpenStack.
>>
>> I’d love to provide a list of the changes that triggered this email but
>> I know that this will result in a rat hole where we end up discussing
>> the merits of the individual items on the list and lose sight of the
>> bigger picture. That won’t help address the question I have below in any
>> way, so I’m at a disadvantage of having to describe my issue in abstract
>> terms.
>>
>>
>>
>> Here’s how I characterize these changes (changes that meet one or more
>> of these criteria):
>>
>>
>>
>> -Contains little of no information in the commit message (often just
>> a single line)
>>
>> -Makes some generic statement like “Do X not Y”, “Don’t use Z”,
>> “Make ABC better” with no further supporting information
>>
>> -Fail (literally) every single CI job, clearly never tested by the
>> developer
>>
>> -Gets blasted across many projects, literally tens with often the
>> same kind of questionable (often wrong) change
>>
>> -Makes a stylistic python improvement that is not enforced by any
>> check (causes a cottage industry of changes making the same correction
>> every couple of months)
>>
>> -Reverses some previous python stylistic improvement with no clear
>> reason (another cottage industry)
>>
>>
>>
>> I’ve tried to explain it to myself as enthusiasm, and a desire to
>> contribute aggressively; I’ve lapsed into cynicism at times and tried to
>> explain it as gaming the numbers system, but all that is merely
>> rationalization and doesn’t help.
>>
>>
>>
>> Over time, the result generally is that these developers’ changes get
>> ignored. And that’s not a good thing for the community as a whole. We
>> want to be a welcoming community and one which values all contributions
>> so I’m looking for some suggestions and guidance on how one can work
>> with contributors to try and improve the quality of these changes, and
>> help the contributor feel that their changes are valued by the project?
>> Other more experienced PTL’s, ex-PTL’s, long time open-source-community
>> folks, I’m seriously looking for suggestions and ideas.
>>
>>
>>
>> Any and all input is welcome, do other projects see this, how do you
>> handle it, is this normal, …
>>
>>
>>
>> Thanks!
>>
>>
>>
>> -amrith
>>
> 
> cheers,
> 


-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [ptl] code churn and questionable changes

2016-09-21 Thread Dolph Mathews
This is a topic that appears periodically; I think it's important that we
consider the patches objectively, just like any other patch.

If these patches result in substantial and unproductive load on infra that
can be deemed abusive, then that's another matter.

And as a general rule, there is zero benefit to filing bugs in Launchpad if
there is no end-user impact (especially against 20+ projects). Close the
bug as Opinion (if Launchpad hasn't already broken) and focus on the
patches. The stakeholders for these types of change are developers reading
and writing code, not end users, so the bug reports are superfluous.

On Wed, Sep 21, 2016 at 7:18 AM gordon chung  wrote:

> i feel like this gets brought up every year. we block these patches in
> Telemetry projects unless they can be justified beyond the copy/paste
> description.
>
> in addition to this, please, PLEASE stop creating 'all project bugs'. i
> don't want to get emails on updates to projects unrelated to the ones i
> care about. also, it makes updating the bug impossible because it times
> out. i'm too lazy to search ML but this has been raise before, please stop.
>
> let's all unite together and block these patches to bring an end to it. :)
>
> On 21/09/16 07:56 AM, Amrith Kumar wrote:
> > Of late I've been seeing a lot of rather questionable changes that
> > appear to be getting blasted out across multiple projects; changes that
> > cause considerable code churn, and don't (IMHO) materially improve the
> > quality of OpenStack.
> >
> > I’d love to provide a list of the changes that triggered this email but
> > I know that this will result in a rat hole where we end up discussing
> > the merits of the individual items on the list and lose sight of the
> > bigger picture. That won’t help address the question I have below in any
> > way, so I’m at a disadvantage of having to describe my issue in abstract
> > terms.
> >
> >
> >
> > Here’s how I characterize these changes (changes that meet one or more
> > of these criteria):
> >
> >
> >
> > -Contains little of no information in the commit message (often just
> > a single line)
> >
> > -Makes some generic statement like “Do X not Y”, “Don’t use Z”,
> > “Make ABC better” with no further supporting information
> >
> > -Fail (literally) every single CI job, clearly never tested by the
> > developer
> >
> > -Gets blasted across many projects, literally tens with often the
> > same kind of questionable (often wrong) change
> >
> > -Makes a stylistic python improvement that is not enforced by any
> > check (causes a cottage industry of changes making the same correction
> > every couple of months)
> >
> > -Reverses some previous python stylistic improvement with no clear
> > reason (another cottage industry)
> >
> >
> >
> > I’ve tried to explain it to myself as enthusiasm, and a desire to
> > contribute aggressively; I’ve lapsed into cynicism at times and tried to
> > explain it as gaming the numbers system, but all that is merely
> > rationalization and doesn’t help.
> >
> >
> >
> > Over time, the result generally is that these developers’ changes get
> > ignored. And that’s not a good thing for the community as a whole. We
> > want to be a welcoming community and one which values all contributions
> > so I’m looking for some suggestions and guidance on how one can work
> > with contributors to try and improve the quality of these changes, and
> > help the contributor feel that their changes are valued by the project?
> > Other more experienced PTL’s, ex-PTL’s, long time open-source-community
> > folks, I’m seriously looking for suggestions and ideas.
> >
> >
> >
> > Any and all input is welcome, do other projects see this, how do you
> > handle it, is this normal, …
> >
> >
> >
> > Thanks!
> >
> >
> >
> > -amrith
> >
>
> cheers,
> --
> gord
>
-- 
-Dolph


Re: [openstack-dev] [ptl] code churn and questionable changes

2016-09-21 Thread Brian Curtin
On Wed, Sep 21, 2016 at 8:15 AM, gordon chung  wrote:
> i feel like this gets brought up every year. we block these patches in
> Telemetry projects unless they can be justified beyond the copy/paste
> description.
>
> in addition to this, please, PLEASE stop creating 'all project bugs'. i
> don't want to get emails on updates to projects unrelated to the ones i
> care about. also, it makes updating the bug impossible because it times
> out. i'm too lazy to search ML but this has been raise before, please stop.
>
> let's all unite together and block these patches to bring an end to it. :)

I know Launchpad only has about three features, but is there a way to
block this there? I created that MagicMock issue that somehow got
spammed to everyone, but it was never intended to be used that way. It
was to solve a legitimate problem we inflicted on ourselves long ago; we
corrected it pretty quickly and easily, and then moved on with life.
It probably doesn't affect any of the 50 projects it got added to in
the same way, or at all even.



Re: [openstack-dev] [osc][keystone] User Project List

2016-09-21 Thread Dolph Mathews
On Wed, Sep 21, 2016 at 12:31 AM Adrian Turjak 
wrote:

> The default keystone policy up until Newton doesn't let a user get their
> own user
>

This seems to be the crux of your issue - can you provide an example of
this specific failure and the corresponding policy? As far as I'm aware,
the default upstream policy files have allowed for this since about Grizzly
or Havana, unless that's quietly broken somehow.


>
>
>
-- 
-Dolph


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Filip Pytloun
Hello,

it's definitely our bad that we missed the elections in the OpenStackSalt
project. The reason is similar to Rob's - we are active on different
channels (mostly IRC, as we hold regular meetings) and are not used to
reading mailing lists with lots of generic topics (it would be good to
have a separate mailing list for such calls and critical topics, or
individual mails to a project's core members).

Our project is very active [1] and trying to do things the OpenStack way,
and I think it would be a pity to remove it from the Big Tent just because
we missed a mail and therefore our first PTL election.

Of course I don't want to excuse our fault. In case it's not too late,
we will try to be more active in mailing lists like openstack-dev and
not miss such important events next time.

[1] http://stackalytics.com/?module=openstacksalt-group

-Filip

On Wed, Sep 21, 2016 at 12:23 PM, Thierry Carrez 
wrote:

> Hi everyone,
>
> As announced previously[1][2], there were no PTL candidates within the
> election deadline for a number of official OpenStack project teams:
> Astara, UX, OpenStackSalt and Security.
>
> In the Astara case, the current team working on it would like to abandon
> the project (and let it be available for any new team who wishes to take
> it away). A change should be proposed really soon now to go in that
> direction.
>
> In the UX case, the current PTL (Piet Kruithof) very quickly reacted,
> explained his error and asked to be considered for the position for
> Ocata. The TC will officialize his nomination at the next meeting,
> together with the newly elected PTLs.
>
> That leaves us with OpenStackSalt and Security, where nobody reacted to
> the announcement that we are missing PTL candidates. That points to a
> real disconnect between those teams and the rest of the community. Even
> if you didn't have the election schedule in mind, it was pretty hard to
> miss all the PTL nominations in the email last week.
>
> The majority of TC members present at the meeting yesterday suggested
> that those project teams should be removed from the Big Tent, with their
> design summit space allocation slightly reduced to match that (and make
> room for other not-yet-official teams).
>
> In the case of OpenStackSalt, it's a relatively new addition, and if
> they get their act together they could probably be re-proposed in the
> future. In the case of Security, it points to a more significant
> disconnect (since it's not the first time the PTL misses the nomination
> call). We definitely still need to care about Security (and we also need
> a home for the Vulnerability Management team), but I think the "Security
> team" acts more like a workgroup than as an official project team, as
> evidenced by the fact that nobody in that team reacted to the lack of
> PTL nomination, or the announcement that the team missed the bus.
>
> The suggested way forward there would be to remove the "Security project
> team", have the Vulnerability Management Team file to be its own
> official project team (in the same vein as the stable maintenance team),
> and have Security be just a workgroup rather than a project team.
>
> Thoughts, comments ?
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-
> September/103904.html
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2016-
> September/103939.html
>
> --
> Thierry Carrez (ttx)
>


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Ian Cordasco
 

-----Original Message-----
From: Rob C 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: September 21, 2016 at 07:19:40
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [security] [salt] Removal of Security and 
OpenStackSalt project teams from the Big Tent

> For my part, I missed the elections, that's my bad. I normally put a
> calendar item in for that issue. I don't think that my missing the election
> date should result in the group being treated in this way. Members of the
> TC have contacted me about unrelated things recently, I have always been
> available however my schedule has made it hard for me to sift through -dev
> recently and I missed the volley of nomination emails. This is certainly a
> failing on my part.
>  
> It's certainly true that the security team, and our cores tend not to pay
> as much attention to the -dev mailing list as we should. The list is pretty
> noisy and traditionally we always had a separate list that we used for
> security and since moving away from that we tend to focus on IRC or direct
> emails. Though as can be seen with our core announcements etc, we do try to
> do things the "openstack way"
>  
> However, to say we're not active I think is a bit unfair. Theirry and
> others regularly mail me directly about things like rooms for the summit
> and I typically respond in good time, I think what's happened here is more
> an identification of the fact that we need to focus more on doing things
> "the openstack way" rather than being kicked out of the big tent.
>  
> We regularly work with the VMT on security issues, we issue large amounts
> of guidance on our own, we have been working hard on an asset based threat
> analysis process for OpenStack teams who are looking to be security
> managed, we've reviewed external TA documentation and recently in our
> midcycle (yes, we're dedicated enough to fly to Texas and meet up to work
> on such issues) we created the first real set of security documents for an
> OpenStack project, we worked with Barbican to apply the asset based threat
> analysis that we'd like to engage other teams in [1], [2]
>  
> Here's a couple of the things that we've been doing in this cycle:
> * Issuing Security Notes for Glance, Nova, Horizon, Bandit, Neutron and
> Barbican[3]
> * Updating the security guide (the book we wrote on securing OpenStack)[4]
> * Hosting a midcycle and inducting new members
> * Supporting the VMT with several embargoed and complex vulnerabilities
> * Building up a security blog[5]
> * Making OpenStack the biggest open source project to ever receive the Core
> Infrastructure Initative Best Practices Badge[6][7]
> * Working on the OpenStack Security Whitepaper [8]
> * Developing CI security tooling such as Bandit [9]
>  
> We are a very active team, working extremely hard on trying to make one
> OpenStack secure. This is often a thankless task, we provide a lot of what
> customers are asking for from OpenStack but as we don't drive individual
> flagship features our contributions are often overlooked. However, above is
> just a selection of what we've been doing throughout the last cycle.
>  
> If it's too late for these comments to have an influence then so be it, but
> this is a failure of appropriate levels of email filtering, and perhaps a
> highlight of the fact that we need to alter our culture somewhat to
> participate more in -dev in general, rather than any indication of a lack of
> dedication, time, effort or contribution on the part of the Security Project.
> We have dedicated huge amounts of effort to OpenStack, and to relegate us to a
> working group would be massively detrimental for one reason above all
> others. We get corporate participation, time and effort in terms of
> employee hours and contributions because we're an official part of
> OpenStack, we've had to build this up over time. If you remove the Security
> Project from the big tent I believe that participation in Security for
> OpenStack will drop off significantly.
>  
> We are active, we are helping to make OpenStack secure, and we (I) suck at
> keeping on top of email. Don't kick us out for that. If need be, we can find
> another PTL or otherwise take special steps to ensure that missing
> elections doesn't happen again.
>  
> Apart from missing elections, I think we do a huge amount for the community
> and removing us from OpenStack would in no way be beneficial to either the
> Security Project or OpenStack as a whole.
>  
> -Rob
>  
> [1] https://review.openstack.org/#/c/357978/5
> [2] https://etherpad.openstack.org/p/barbican-threat-analysis
> [3] https://wiki.openstack.org/wiki/Security_Notes
> [4] http://docs.openstack.org/sec/
> [5] https://openstack-security.github.io/
> [6] https://bestpractices.coreinfrastructure.org/
> [7]
> http://www.businesswire.com/news/home/20160725005133/en/OpenStack-Earns-Core-Infrastructure-Initiative-Practices-Badge
>   
> [8] https://www.openstack.org/software/

[openstack-dev] tempest tests in Horizon GUI

2016-09-21 Thread Barber, Ofer
I have a basic question about tempest.

When I run a tempest test/scenario test, should I see the components (network,
subnet, router etc.) in the Horizon GUI?

If yes, under which username or project are those created?

Thank you,
Ofer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Luke Hinds
Hi,

So I am a recently elected core in the security group, so while obviously
pro OSSG-Sec, I also have a fairly fresh perspective on the group.

I would first of all not agree that there is disengagement with the community.
Well, at least not from my perspective.

Since I joined I have found the group welcoming to new members, with
well-run meetings that never start late or fail to follow up on
previous actions. While I may be a new core, I am not new to open
source, so there is no way I would have joined if I felt the group was
waning in enthusiasm, disconnected or not moving forward.

The team is actively working on several projects which have found
vulnerabilities in OpenStack, namely Bandit and syntribos, as well as on
threat analysis, and I was inspired to start my own new proposal project
after seeing the enthusiasm in the group. There is also lots of
engagement between other cores and the security group through OSSNs
(security notes). I recently took over covering these, and have
immensely enjoyed working with cores in keystone, trove, nova,
neutron, horizon etc. I did not see any disconnect there myself.

On the matter of elections, I understand people are upset that the PTL
nomination period was missed, but I understand there was a genuine
reason for this which I will leave for the PTL to cover. For me Robert
did a really great job of welcoming and mentoring me into the security
group, so I personally have nothing but respect there.

So if the decision is made to demote(?) the group, I guess so be it,
but it will be a big downer and a disappointment for me as someone who
is proud and enthusiastic to be a new OSSG-Sec core member.

Regards,

Luke



From: Thierry Carrez 
Date: Wed, Sep 21, 2016 at 12:23 PM
Subject: [openstack-dev] [security] [salt] Removal of Security and
OpenStackSalt project teams from the Big Tent
To: OpenStack Development Mailing List 


Hi everyone,

As announced previously[1][2], there were no PTL candidates within the
election deadline for a number of official OpenStack project teams:
Astara, UX, OpenStackSalt and Security.

In the Astara case, the current team working on it would like to abandon
the project (and let it be available for any new team who wishes to take
it away). A change should be proposed really soon now to go in that
direction.

In the UX case, the current PTL (Piet Kruithof) very quickly reacted,
explained his error and asked to be considered for the position for
Ocata. The TC will officialize his nomination at the next meeting,
together with the newly elected PTLs.

That leaves us with OpenStackSalt and Security, where nobody reacted to
the announcement that we are missing PTL candidates. That points to a
real disconnect between those teams and the rest of the community. Even
if you didn't have the election schedule in mind, it was pretty hard to
miss all the PTL nominations in the email last week.

The majority of TC members present at the meeting yesterday suggested
that those project teams should be removed from the Big Tent, with their
design summit space allocation slightly reduced to match that (and make
room for other not-yet-official teams).

In the case of OpenStackSalt, it's a relatively new addition, and if
they get their act together they could probably be re-proposed in the
future. In the case of Security, it points to a more significant
disconnect (since it's not the first time the PTL misses the nomination
call). We definitely still need to care about Security (and we also need
a home for the Vulnerability Management team), but I think the "Security
team" acts more like a workgroup than an official project team, as
evidenced by the fact that nobody in that team reacted to the lack of
PTL nomination, or the announcement that the team missed the bus.

The suggested way forward there would be to remove the "Security project
team", have the Vulnerability Management Team file to be its own
official project team (in the same vein as the stable maintenance team),
and have Security be just a workgroup rather than a project team.

Thoughts, comments ?

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103904.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103939.html

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Luke Hinds | NFV Partner Engineering | Office of Technology | Red Hat
e: lhi...@redhat.com | irc: lhinds @freenode | m: +44 77 45 63 98 84 |
t: +44 12 52 36 2483

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.or

Re: [openstack-dev] [cinder] running afoul of Service.__repr__() during client.services.list()

2016-09-21 Thread Michał Dulko
On 09/21/2016 02:32 AM, Konstanski, Carlos P wrote:
> On Tuesday, 20.09.2016 at 15:31 -0600, Konstanski, Carlos P wrote:
>> I am currently using python-cinderclient version 1.5.0, though the code in
>> question is still in master.
>>
>> When calling client.services.list() I get this result: "AttributeError:
>> service"
>>
>> The execution path of client.services.list() eventually leads to this method
>> in
>> cinderclient/v2/services.py:24:
>>
>> def __repr__(self):
>>     return "<Service: %s>" % self.service
>>
>> which in turn triggers a call to Resource.__getattr__() in
>> cinderclient/openstack/common/apiclient/base.py:456.
>>
>> This custom getter will never find an attribute called service because a
>> Service
>> instance looks something like the following:
>>
>> {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone': u'nova',
>> u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00', u'state':
>> u'up', u'disabled_reason': None}
>>
>> So it returns the string "AttributeError: service".
>>
>> One way or another a fix is warranted, and I am ready, willing and able to
>> provide the fix. But first I want to find out more about the bigger picture.
>> Could it be that this __repr__() method is actually correct, but the code that
>> populates my service instance is faulty? This could easily be the case if the
>> dict that feeds the Service class were to look like the following (for
>> example):
>>
>> {u'service': {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone':
>> u'nova', u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00',
>> u'state': u'up', u'disabled_reason': None}}
>>
>> Somehow I doubt it; why hide all the useful attributes in a dict under a
>> single
>> parent attribute? But I'm new to cinder and I don't know the rules. I'm not
>> here
>> to question your methods.
>>
>> Or am I just using it wrong? This code has survived for a long time, and
>> certainly someone would have noticed a problem by now. But it seems pretty
>> straightforward. How many ways are there to prepare a call to
>> client.services.list()? I get a Client instance, call authenticate() for fun,
>> and then call client.services.list(). Not a lot going on here.
>>
>> I'll get to work on a patch when I figure out what it is supposed to do, if 
>> it
>> is not already doing it.
>>
>> Sincerely,
>> Carlos Konstanski
> I guess the question I should be asking is this: Manager._list() (in
> cinderclient/base.py) returns a list of printable representations of objects,
> not a list of the objects themselves. Hopefully there's a more useful method
> that returns a list of actual objects, or at least a JSON representation. If I
> can't find such a method then I'll be back, or I'll put up a review to add 
> one.
>
> Carlos

Is the bug being addressed in review [1] somehow related? If so, there's
some discussion on solutions going on there.

[1] https://review.openstack.org/#/c/308475

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-21 Thread Zane Bitter

On 14/09/16 11:44, Mike Bayer wrote:



On 09/14/2016 11:08 AM, Mike Bayer wrote:



On 09/14/2016 09:15 AM, Sean Dague wrote:

I noticed the following issues happening quite often now in the
opportunistic db tests for nova -
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22




It looks like some race has been introduced where the various db
connections are not fully isolated from each other like they used to be.
The testing magic for this is buried pretty deep in oslo.db.


that error message occurs when a connection that is intended against a
SELECT statement fails to provide a cursor.description attribute.  It is
typically a driver-level bug in the MySQL world and corresponds to
mis-handled failure modes from the MySQL connection.

By "various DB connections are not fully isolated from each other" are
you suggesting that a single in-Python connection object itself is being
shared among multiple greenlets?   I'm not aware of a change in oslo.db
that would be a relationship to such an effect.


So, I think by "fully isolated from each other" what you really mean is
"operations upon a connection are not fully isolated from the subsequent
use of that connection", since that's what I see in the logs.  A
connection is attempting to be used during teardown to drop tables,
however it's in this essentially broken state from a PyMySQL
perspective, which would indicate something has gone wrong with this
(pooled) connection in the preceding test that could not be detected or
reverted once the connection was returned to the pool.

From Roman's observation, it looks like a likely source of this
corruption is a timeout that is interrupting the state of the PyMySQL
connection.   In the preceding stack trace, PyMySQL is encountering a
raise as it attempts to call "self._sock.recv_into(b)", and it seems
like some combination of eventlet's response to signals and the
fixtures.Timeout() fixture is the cause of this interruption.   As an
additional wart, something else is getting involved and turning it into
an IndexError, I'm not sure what that part is yet though I can imagine
that might be SQLAlchemy mis-interpreting what it expects to be a
PyMySQL exception class, since we normally look inside of
exception.args[0] to get the MySQL error code.   With a blank exception
like fixtures.TimeoutException, .args is the empty tuple.

The PyMySQL connection is now in an invalid state and unable to perform
a SELECT statement correctly, but the connection is not invalidated and
is instead returned to the connection pool in a broken state.  So the
subsequent teardown, if it uses this same connection (which is likely),
fails because the connection has been interrupted in the middle of its
work and not given the chance to clean up.

Seems like the use of fixtures.Timeout() fixture here is not organized
to work with a database operation in progress, especially an
eventlet-monkeypatched PyMySQL.   Ideally, if something like a timeout
due to a signal handler occurs, the entire connection pool should be
disposed (quickest way, engine.dispose()), or at the very least (and
much more targeted), the connection that's involved should be
invalidated from the pool, e.g. connection.invalidate().
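
To make that concrete, a minimal sketch (not the actual oslo.db or test-suite
fix; the helper name and connection URL are placeholders) of invalidating a
connection that was interrupted mid-query:

    import fixtures
    import sqlalchemy as sa

    # Placeholder URL; any eventlet-monkeypatched PyMySQL engine applies.
    engine = sa.create_engine("mysql+pymysql://user:secret@localhost/test")

    def run_query_with_timeout(statement, seconds=180):
        conn = engine.connect()
        try:
            # fixtures.Timeout(gentle=True) raises TimeoutException from a
            # SIGALRM handler, which can land in the middle of PyMySQL's
            # socket read and leave the protocol state undefined.
            with fixtures.Timeout(seconds, gentle=True):
                return conn.execute(statement).fetchall()
        except fixtures.TimeoutException:
            # Don't hand the half-broken connection back to the pool:
            # invalidate just this connection (or call engine.dispose()
            # to drop the whole pool), then re-raise.
            conn.invalidate()
            raise
        finally:
            conn.close()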

The change to the environment here would be that this timeout is
happening at all - the reason for that is not yet known.   If oslo.db's
version were involved in this error, I would guess that it would be
related to this timeout condition being caused, and not anything to do
with the connection provisioning.



Oslo.db 4.13.3 did hit the scene about the time this showed up. So I
think we need to strongly consider blocking it and revisiting these
issues post newton.


We've been seeing similar errors in Heat since at least Liberty 
(https://bugs.launchpad.net/heat/+bug/1499669). Mike and I did some 
poking around yesterday and basically confirmed his theory above. If you 
look at the PyMySQL code, it believes that only an IOError can occur 
while writing to a socket, so it has no handling for any other type of 
exception, thus it can't deal with signal handlers raising exceptions or 
other exceptions being thrown into the greenthread by eventlet. It 
sounds like sqlalchemy also fails to catch at least some of these 
exceptions and invalidate the connection.


tl;dr this appears to have been around forever (at least since we 
switched to using a pure-Python MySQL client) and is almost certainly 
completely unrelated to any particular release of oslo.db.


cheers,
Zane.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stackalytics] [deb] [packaging] OpenStack contribution stats skewed by deb-* projects

2016-09-21 Thread Ilya Shakhat
2016-09-21 14:37 GMT+03:00 Thierry Carrez :

> Ilya Shakhat wrote:
> > Hi,
> >
> > tldr; Commits stats are significantly skewed by deb-* projects
> > (http://stackalytics.com/?metric=commits&module=packaging-deb-group)
> >
> > By default Stackalytics processes commits from project's master branch.
> > For some "old core" projects there is configuration to process stable
> > branches as well. If some commit is cherry-picked from master to stable
> > it is counted twice in both branches / releases. The configuration for
> > stable branch is simple - branch starting with branching point (e.g.
> > stable/newton that starts with rc1)
> >
> > In deb-* projects master branch corresponds to upstream Debian
> > community. All OpenStack-related contribution goes into debian/
> > branch. But unlike in the rest of OpenStack, git workflow differs and
> > the branch contains merge commits from master. This makes filtering
> > "pure" branch commits from those that came from master quite tricky (not
> > possible to specify the branch-point). And support of this will require
> > changes in Stackalytics code.
> >
> > Since currently we are at the time when people may get nervous about
> > numbers, I'd suggest to temporary hide all commits from deb-* projects
> > and revise stats processing in a month.
>
> Sounds good. Are you working on it ?


Yep. I'm working on this, will update on the results.
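
(As a rough illustration of the filtering problem described in the quoted
mail above - a sketch only, using an example branch name; the real handling
has to live in the Stackalytics processing code:)

    import subprocess

    def branch_only_commit_count(repo_path, branch="debian/newton"):
        # Commits reachable from the packaging branch but not from master,
        # with the merge commits that pull master in filtered out.
        # "debian/newton" is just an example branch name.
        out = subprocess.check_output(
            ["git", "-C", repo_path, "rev-list", "--no-merges", "--count",
             "master.." + branch],
            universal_newlines=True)
        return int(out.strip())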

--Ilya Shakhat
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] running afoul of Service.__repr__() during client.services.list()

2016-09-21 Thread Konstanski, Carlos P
On Wednesday, 21.09.2016 at 15:07 +0200, Michał Dulko wrote:
> On 09/21/2016 02:32 AM, Konstanski, Carlos P wrote:
> > 
> > On Tuesday, 20.09.2016 at 15:31 -0600, Konstanski, Carlos P wrote:
> > > 
> > > I am currently using python-cinderclient version 1.5.0, though the code in
> > > question is still in master.
> > > 
> > > When calling client.services.list() I get this result: "AttributeError:
> > > service"
> > > 
> > > The execution path of client.services.list() eventually leads to this
> > > method
> > > in
> > > cinderclient/v2/services.py:24:
> > > 
> > > def __repr__(self):
> > >     return "<Service: %s>" % self.service
> > > 
> > > which in turn triggers a call to Resource.__getattr__() in
> > > cinderclient/openstack/common/apiclient/base.py:456.
> > > 
> > > This custom getter will never find an attribute called service because a
> > > Service
> > > instance looks something like the following:
> > > 
> > > {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone': u'nova',
> > > u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00', u'state':
> > > u'up', u'disabled_reason': None}
> > > 
> > > So it returns the string "AttributeError: service".
> > > 
> > > One way or another a fix is warranted, and I am ready, willing and able to
> > > provide the fix. But first I want to find out more about the bigger
> > > picture.
> > > could  it be that this __repr__() method actually correct, but the code
> > > that
> > > populates my service instance is faulty? This could easily be the case if
> > > the
> > > dict that feeds the Service class were to look like the following (for
> > > example):
> > > 
> > > {u'service': {u'status': u'enabled', u'binary': u'cinder-scheduler',
> > > u'zone':
> > > u'nova', u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00',
> > > u'state': u'up', u'disabled_reason': None}}
> > > 
> > > Somehow I doubt it; why hide all the useful attributes in a dict under a
> > > single
> > > parent attribute? But I'm new to cinder and I don't know the rules. I'm
> > > not
> > > here
> > > to question your methods.
> > > 
> > > Or am I just using it wrong? This code has survived for a long time, and
> > > certainly someone would have noticed a problem by now. But it seems pretty
> > > straightforward. How many ways are there to prepare a call to
> > > client.services.list()? I get a Client instance, call authenticate() for
> > > fun,
> > > and then call client.services.list(). Not a lot going on here.
> > > 
> > > I'll get to work on a patch when I figure out what it is supposed to do,
> > > if it
> > > is not already doing it.
> > > 
> > > Sincerely,
> > > Carlos Konstanski
> > I guess the question I should be asking is this: Manager._list() (in
> > cinderclient/base.py) returns a list of printable representations of
> > objects,
> > not a list of the objects themselves. Hopefully there's a more useful method
> > that returns a list of actual objects, or at least a JSON representation. If
> > I
> > can't find such a method then I'll be back, or I'll put up a review to add
> > one.
> > 
> > Carlos
> Is bug being addressed in review [1] somehow related? If so, there's
> some discussion on solutions going.
> 
> [1] https://review.openstack.org/#/c/308475

This neophyte needs a bit of education. What is review [1]?

In the meantime I have a potential fix. I'll see if some of my coworkers who
have put up patches in the past can help me figure out how it's done the
OpenStack Way.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stackalytics] [deb] [packaging] OpenStack contribution stats skewed by deb-* projects

2016-09-21 Thread Thomas Goirand
On 09/20/2016 10:30 PM, Ilya Shakhat wrote:
> Hi,
> 
> tldr; Commits stats are significantly skewed by deb-* projects
> (http://stackalytics.com/?metric=commits&module=packaging-deb-group)
> 
> By default Stackalytics processes commits from project's master branch.
> For some "old core" projects there is configuration to process stable
> branches as well. If some commit is cherry-picked from master to stable
> it is counted twice in both branches / releases. The configuration for
> stable branch is simple - branch starting with branching point (e.g.
> stable/newton that starts with rc1)
> 
> In deb-* projects master branch corresponds to upstream Debian
> community. All OpenStack-related contribution goes into debian/
> branch. But unlike in the rest of OpenStack, git workflow differs and
> the branch contains merge commits from master. This makes filtering
> "pure" branch commits from those that came from master quite tricky (not
> possible to specify the branch-point). And support of this will require
> changes in Stackalytics code.
> 
> Since currently we are at the time when people may get nervous about
> numbers, I'd suggest to temporary hide all commits from deb-* projects
> and revise stats processing in a month.
> 
> Thanks,
> Ilya

Replying again here (I'm subscribed, so it will go through this time).

Ilya,

I don't understand why Stackalytics has it wrong, when the electorate
script for the PTL election is correct. Here's the script for getting
commits:
https://github.com/openstack-infra/system-config/blob/master/tools/owners.py

What part of Stackalytics is gathering the commits?

Waiting for a full month to solve this issue properly isn't nice at all
for those working on packaging_deb. Could it be solved properly earlier
than this?

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] OpenFlow version to use in the OVS agent

2016-09-21 Thread Bernard Cafarelli
Thanks, the comment and function name led me to think it was supposed
to return only the matching group.
So I will keep the current 1.3 version in the L2 agent extension then!
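
(For reference, a rough sketch of that kind of follow-on filtering - the
helper name is made up and this is not the actual networking-sfc code; it
just greps the OF1.3 dump shown in the quoted output below:)

    import subprocess

    def dump_group(br_name, group_id):
        # OF1.3 can only dump every group, so take the full dump and keep
        # the line(s) for the group we care about.
        out = subprocess.check_output(
            ["ovs-ofctl", "-O", "openflow13", "dump-groups", br_name],
            universal_newlines=True)
        wanted = "group_id=%d," % group_id
        return [line.strip() for line in out.splitlines()
                if line.strip().startswith(wanted)]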

On 20 September 2016 at 20:48, Cathy Zhang  wrote:
> Hi Bernard,
>
> Networking-sfc currently uses OF1.3. Although OF1.3 dumps all groups, 
> networking-sfc has follow-on filter code to select the info associated with 
> the specific group ID from the dump. So we are fine and let's keep it as 
> OF1.3.
>
> We can upgrade to OF1.5 when Neutron uses OF1.5.
>
> Thanks,
> Cathy
>
>
> -Original Message-
> From: Bernard Cafarelli [mailto:bcafa...@redhat.com]
> Sent: Tuesday, September 20, 2016 7:16 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [networking-sfc] OpenFlow version to use in the OVS 
> agent
>
> In the OVSSfcAgent migration to a L2 agent extension review[1], Igor Duarte 
> Cardoso noticed a difference in the OpenFlow versions between a comment and
> the actual code.
> In current code [2], we have:
> # We need to dump-groups according to group Id,
> # which is a feature of OpenFlow1.5
> full_args = ["ovs-ofctl", "-O openflow13", cmd, self.br_name
>
> Indeed, only OpenFlow 1.5 and later support dumping a specific group [3]. 
> Earlier versions of OpenFlow always dump all groups.
> So current code will return all groups:
> $ sudo ovs-ofctl -O OpenFlow13 dump-groups br-int 1
> OFPST_GROUP_DESC reply (OF1.3) (xid=0x2):
>  group_id=1,type=select,bucket=actions=set_field:fa:16:3e:05:46:69->eth_dst,resubmit(,5),bucket=actions=set_field:fa:16:3e:cd:b7:7e->eth_dst,resubmit(,5)
>  group_id=2,type=select,bucket=actions=set_field:fa:16:3e:2d:f3:28->eth_dst,resubmit(,5)
> $ sudo ovs-ofctl -O OpenFlow15 dump-groups br-int 1
> OFPST_GROUP_DESC reply (OF1.5) (xid=0x2):
>  group_id=1,type=select,bucket=bucket_id:0,actions=set_field:fa:16:3e:05:46:69->eth_dst,resubmit(,5),bucket=bucket_id:1,actions=set_field:fa:16:3e:cd:b7:7e->eth_dst,resubmit(,5)
>
> This code behavior will not change in my extension rewrite, so this will
> still have to be fixed, though I am not sure of the solution:
> * We can use OpenFlow 1.5, but its support looks experimental? And Neutron
> apparently only uses up to 1.4 (for the OVS firewall extension)
> * The method to dump a group can "grep" the group ID in the complete dump.
> Not as efficient, but it works with OpenFlow 1.1+
> * Use another system to load balance across the port pairs?
>
> Thoughts?
> In Gerrit, I kept it set to 1.5 (no impact for now as this is still marked as
> WIP)
>
> [1]: https://review.openstack.org/#/c/351789
> [2]: 
> https://github.com/openstack/networking-sfc/blob/master/networking_sfc/services/sfc/common/ovs_ext_lib.py
> [3]: http://openvswitch.org/support/dist-docs/ovs-ofctl.8.txt
>
> --
> Bernard Cafarelli
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Bernard Cafarelli

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Design session ideas for Barcelona

2016-09-21 Thread Steve Martinelli
Keystoners and Keystone enthusiasts,

We're tracking ideas for design sessions on an etherpad [1] -- so please
help populate the etherpad!

The ideas will then be prioritized and grouped together into fishbowl
sessions (tokens, authorization, operators, authentication, etc) at a later
date.

[1] https://etherpad.openstack.org/p/keystone-ocata-summit-brainstorm

Thanks,
Steve Martinelli
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stackalytics] [deb] [packaging] OpenStack contribution stats skewed by deb-* projects

2016-09-21 Thread Ian Cordasco
 

-Original Message-
From: Thomas Goirand 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: September 21, 2016 at 08:40:07
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [stackalytics] [deb] [packaging] OpenStack 
contribution stats skewed by deb-* projects

> On 09/20/2016 10:30 PM, Ilya Shakhat wrote:
> > Hi,
> >
> > tldr; Commits stats are significantly skewed by deb-* projects
> > (http://stackalytics.com/?metric=commits&module=packaging-deb-group)
> >
> > By default Stackalytics processes commits from project's master branch.
> > For some "old core" projects there is configuration to process stable
> > branches as well. If some commit is cherry-picked from master to stable
> > it is counted twice in both branches / releases. The configuration for
> > stable branch is simple - branch starting with branching point (e.g.
> > stable/newton that starts with rc1)
> >
> > In deb-* projects master branch corresponds to upstream Debian
> > community. All OpenStack-related contribution goes into debian/
> > branch. But unlike in the rest of OpenStack, git workflow differs and
> > the branch contains merge commits from master. This makes filtering
> > "pure" branch commits from those that came from master quite tricky (not
> > possible to specify the branch-point). And support of this will require
> > changes in Stackalytics code.
> >
> > Since currently we are at the time when people may get nervous about
> > numbers, I'd suggest to temporary hide all commits from deb-* projects
> > and revise stats processing in a month.
> >
> > Thanks,
> > Ilya
>  
> Replying again here (I'm subscribed, so it will go through this time).
>  
> Ilya,
>  
> I don't understand why Stackalytics has it wrong, when the electorate
> script for the PTL election is correct. Here's the script for getting
> commits:
> https://github.com/openstack-infra/system-config/blob/master/tools/owners.py  
>  
> What part of Stackalytics is gathering the commits?
>  
> Waiting for a full month to solve this issue properly isn't nice at all
> for those working on packaging_deb. Could it be solved properly earlier
> than this?
>  
> Cheers,
>  
> Thomas Goirand (zigo)

Thomas,

As you already pointed out, where it matters, the analysis of commits is 
correct. I'm sure the Stackalytics team has prioritized this as they see 
appropriate. How does the current prioritization harm the Debian packaging 
team? Are employers of team members using stackalytics to judge activity? I'd 
encourage you and the team members to point them to better tooling for that.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [horizon] why is heat service-list limited to 'admin project?

2016-09-21 Thread Zane Bitter

On 21/09/16 03:30, Akihiro Motoki wrote:

Hi,

The default policy.json provided by heat limits the 'service-list' API to
the 'admin' project, as shown below.
Is there any reason an 'admin'-role user in a non-'admin' project cannot
see service-list?


https://bugs.launchpad.net/keystone/+bug/968696


   "service:index": "rule:context_is_admin",
"context_is_admin": "role:admin and is_admin_project:True",

I noticed this when investigating a horizon bug
https://bugs.launchpad.net/horizon/+bug/1624834.
Horizon currently has a somewhat different policy engine and it does not
support is_admin_project:True.
We would like to know the background of this default configuration.

Thanks,
Akihiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] running afoul of Service.__repr__() during client.services.list()

2016-09-21 Thread Michał Dulko


On 09/21/2016 03:32 PM, Konstanski, Carlos P wrote:
> On Wednesday, 21.09.2016 at 15:07 +0200, Michał Dulko wrote:
>> On 09/21/2016 02:32 AM, Konstanski, Carlos P wrote:
>>> On Tuesday, 20.09.2016 at 15:31 -0600, Konstanski, Carlos P wrote:
 I am currently using python-cinderclient version 1.5.0, though the code in
 question is still in master.

 When calling client.services.list() I get this result: "AttributeError:
 service"

 The execution path of client.services.list() eventually leads to this
 method
 in
 cinderclient/v2/services.py:24:

 def __repr__(self):
     return "<Service: %s>" % self.service

 which in turn triggers a call to Resource.__getattr__() in
 cinderclient/openstack/common/apiclient/base.py:456.

 This custom getter will never find an attribute called service because a
 Service
 instance looks something like the following:

 {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone': u'nova',
 u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00', u'state':
 u'up', u'disabled_reason': None}

 So it returns the string "AttributeError: service".

 One way or another a fix is warranted, and I am ready, willing and able to
 provide the fix. But first I want to find out more about the bigger
 picture.
 could  it be that this __repr__() method actually correct, but the code
 that
 populates my service instance is faulty? This could easily be the case if
 the
 dict that feeds the Service class were to look like the following (for
 example):

 {u'service': {u'status': u'enabled', u'binary': u'cinder-scheduler',
 u'zone':
 u'nova', u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00',
 u'state': u'up', u'disabled_reason': None}}

 Somehow I doubt it; why hide all the useful attributes in a dict under a
 single
 parent attribute? But I'm new to cinder and I don't know the rules. I'm
 not
 here
 to question your methods.

 Or am I just using it wrong? This code has survived for a long time, and
 certainly someone would have noticed a problem by now. But it seems pretty
 straightforward. How many ways are there to prepare a call to
 client.services.list()? I get a Client instance, call authenticate() for
 fun,
 and then call client.services.list(). Not a lot going on here.

 I'll get to work on a patch when I figure out what it is supposed to do,
 if it
 is not already doing it.

 Sincerely,
 Carlos Konstanski
>>> I guess the question I should be asking is this: Manager._list() (in
>>> cinderclient/base.py) returns a list of printable representations of
>>> objects,
>>> not a list of the objects themselves. Hopefully there's a more useful method
>>> that returns a list of actual objects, or at least a JSON representation. If
>>> I
>>> can't find such a method then I'll be back, or I'll put up a review to add
>>> one.
>>>
>>> Carlos
>> Is bug being addressed in review [1] somehow related? If so, there's
>> some discussion on solutions going.
>>
>> [1] https://review.openstack.org/#/c/308475
> This neophyte needs a bit of education. What is review [1] ?

I meant the Gerrit review page linked above under [1]:
https://review.openstack.org/#/c/308475

> In the meantime I have a potential fix. I'll see if some of my coworkers who
> have put up patches in the past can help me figure out how it's done the
> Openstack Way.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] running afoul of Service.__repr__() during client.services.list()

2016-09-21 Thread Konstanski, Carlos P
On Wednesday, 21.09.2016 at 15:49 +0200, Michał Dulko wrote:
> 
> On 09/21/2016 03:32 PM, Konstanski, Carlos P wrote:
> > 
> > On Wednesday, 21.09.2016 at 15:07 +0200, Michał Dulko wrote:
> > > 
> > > On 09/21/2016 02:32 AM, Konstanski, Carlos P wrote:
> > > > 
> > > > On Tuesday, 20.09.2016 at 15:31 -0600, Konstanski, Carlos P wrote:
> > > > > 
> > > > > I am currently using python-cinderclient version 1.5.0, though the
> > > > > code in
> > > > > question is still in master.
> > > > > 
> > > > > When calling client.services.list() I get this result:
> > > > > "AttributeError:
> > > > > service"
> > > > > 
> > > > > The execution path of client.services.list() eventually leads to this
> > > > > method
> > > > > in
> > > > > cinderclient/v2/services.py:24:
> > > > > 
> > > > > def __repr__(self):
> > > > >     return "<Service: %s>" % self.service
> > > > > 
> > > > > which in turn triggers a call to Resource.__getattr__() in
> > > > > cinderclient/openstack/common/apiclient/base.py:456.
> > > > > 
> > > > > This custom getter will never find an attribute called service because
> > > > > a
> > > > > Service
> > > > > instance looks something like the following:
> > > > > 
> > > > > {u'status': u'enabled', u'binary': u'cinder-scheduler', u'zone':
> > > > > u'nova',
> > > > > u'host': u'dev01', u'updated_at': u'2016-09-20T21:16:00.00',
> > > > > u'state':
> > > > > u'up', u'disabled_reason': None}
> > > > > 
> > > > > So it returns the string "AttributeError: service".
> > > > > 
> > > > > One way or another a fix is warranted, and I am ready, willing and
> > > > > able to
> > > > > provide the fix. But first I want to find out more about the bigger
> > > > > picture.
> > > > > could  it be that this __repr__() method actually correct, but the
> > > > > code
> > > > > that
> > > > > populates my service instance is faulty? This could easily be the case
> > > > > if
> > > > > the
> > > > > dict that feeds the Service class were to look like the following (for
> > > > > example):
> > > > > 
> > > > > {u'service': {u'status': u'enabled', u'binary': u'cinder-scheduler',
> > > > > u'zone':
> > > > > u'nova', u'host': u'dev01', u'updated_at': u'2016-09-
> > > > > 20T21:16:00.00',
> > > > > u'state': u'up', u'disabled_reason': None}}
> > > > > 
> > > > > Somehow I doubt it; why hide all the useful attributes in a dict under
> > > > > a
> > > > > single
> > > > > parent attribute? But I'm new to cinder and I don't know the rules.
> > > > > I'm
> > > > > not
> > > > > here
> > > > > to question your methods.
> > > > > 
> > > > > Or am I just using it wrong? This code has survived for a long time,
> > > > > and
> > > > > certainly someone would have noticed a problem by now. But it seems
> > > > > pretty
> > > > > straightforward. How many ways are there to prepare a call to
> > > > > client.services.list()? I get a Client instance, call authenticate()
> > > > > for
> > > > > fun,
> > > > > and then call client.services.list(). Not a lot going on here.
> > > > > 
> > > > > I'll get to work on a patch when I figure out what it is supposed to
> > > > > do,
> > > > > if it
> > > > > is not already doing it.
> > > > > 
> > > > > Sincerely,
> > > > > Carlos Konstanski
> > > > I guess the question I should be asking is this: Manager._list() (in
> > > > cinderclient/base.py) returns a list of printable representations of
> > > > objects,
> > > > not a list of the objects themselves. Hopefully there's a more useful
> > > > method
> > > > that returns a list of actual objects, or at least a JSON
> > > > representation. If
> > > > I
> > > > can't find such a method then I'll be back, or I'll put up a  review to
> > > > add
> > > > one.
> > > > 
> > > > Carlos
> > > Is bug being addressed in review [1] somehow related? If so, there's
> > > some discussion on solutions going.
> > > 
> > > [1] https://review.openstack.org/#/c/308475
> > This neophyte needs a bit of education. What is review [1] ?
> I've meant Gerrit review page linked above under [1]:
> > https://review.openstack.org/#/c/308
Ah, there it is. (My email client makes links invisible, sorry.) No, unrelated.
Similar, but I'm dealing with the Service class, not the Volume class. And fixing
the __repr__ method isn't going to help in my case. I need the actual data, not
a barely-unique summary of the data. Review coming as soon as I figure out the
mechanics.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc][keystone] User Project List

2016-09-21 Thread Adrian Turjak
Nope, the default keystone policy did not allow you to get your own user until this patch was merged:
https://github.com/openstack/keystone/commit/c990ec5c144d9b1408d47cb83cb0b3d6aeed0d57
Sad but true it seems. :(
On 22/09/2016 12:58 AM, Dolph Mathews  wrote:
>
>
>
> On Wed, Sep 21, 2016 at 12:31 AM Adrian Turjak  wrote:
>>
>> The default keystone policy up until Newton doesn't let a user get their
>> own user
>
>
> This seems to be the crux of your issue - can you provide an example of this specific failure and the corresponding policy? As far as I'm aware, the default upstream policy files have allowed for this since about Grizzly or Havana, unless that's quietly broken somehow.
>  
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -- 
> -Dolph
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] gate-keystoneclient-dsvm-functional-ubuntu-xenial is broken

2016-09-21 Thread Rodrigo Duarte
Forgot to add the commit reference :)

[1] https://review.openstack.org/#/c/368244/

On Wed, Sep 21, 2016 at 10:59 AM, Rodrigo Duarte 
wrote:

> After some investigation I've found the possible issue: the functional
> tests run in parallel, some of them create and delete roles and others use
> tokens to perform the creation/update/delete of other types of fixtures.
> The problem is that when we delete a role, we also revoke *all* tokens
> from a user that has any assignment containing that role - so? Race
> condition: if we are executing a not related operation and another test
> deletes a role, the user tokens will be revoked resulting in a request
> error.
>
> The strange part is that reverting this commit [1], the tests seem to work
> fine most of the times - what makes think that commit actually *fixes* a
> big issue in our revoke events (since before it, we would not revoke such
> types of tokens).
>
> I can see a couple of options:
> - Create brand new users and role_assignments to be responsible to handle
> operations in the fixtures for each test
> - Change the "framework" of the tests and rely on tempest plugins
>
> What to think? Makes sense?
>
> On Tue, Sep 20, 2016 at 11:03 AM, Steve Martinelli  > wrote:
>
>> Since September 14th the keystoneclient functional test job has been
>> broken. Let's be mindful of infra resources and stop rechecking the patches
>> there. Anyone have time to investigate this?
>>
>> See patches https://review.openstack.org/#/c/369469/ or
>> https://review.openstack.org/#/c/371324/
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com
>



-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] gate-keystoneclient-dsvm-functional-ubuntu-xenial is broken

2016-09-21 Thread Brant Knudson
On Wed, Sep 21, 2016 at 8:59 AM, Rodrigo Duarte 
wrote:

> After some investigation I've found the possible issue: the functional
> tests run in parallel, some of them create and delete roles and others use
> tokens to perform the creation/update/delete of other types of fixtures.
> The problem is that when we delete a role, we also revoke *all* tokens
> from a user that has any assignment containing that role - so? Race
> condition: if we are executing a not related operation and another test
> deletes a role, the user tokens will be revoked resulting in a request
> error.
>
> The strange part is that reverting this commit [1], the tests seem to work
> fine most of the times - what makes think that commit actually *fixes* a
> big issue in our revoke events (since before it, we would not revoke such
> types of tokens).
>
> I can see a couple of options:
> - Create brand new users and role_assignments to be responsible to handle
> operations in the fixtures for each test
>

This makes sense to me. There was a bug in the tests and this corrects it.


> - Change the "framework" of the tests and rely on tempest plugins
>
>
I don't know why this was suggested.

- Brant


> What to think? Makes sense?
>
> On Tue, Sep 20, 2016 at 11:03 AM, Steve Martinelli  > wrote:
>
>> Since September 14th the keystoneclient functional test job has been
>> broken. Let's be mindful of infra resources and stop rechecking the patches
>> there. Anyone have time to investigate this?
>>
>> See patches https://review.openstack.org/#/c/369469/ or
>> https://review.openstack.org/#/c/371324/
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com
>
> _
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl] code churn and questionable changes

2016-09-21 Thread Julien Danjou
On Wed, Sep 21 2016, Amrith Kumar wrote:

> Over time, the result generally is that these developers' changes get ignored.
> And that's not a good thing for the community as a whole. We want to be a
> welcoming community and one which values all contributions so I'm looking for
> some suggestions and guidance on how one can work with contributors to try and
> improve the quality of these changes, and help the contributor feel that their
> changes are valued by the project? Other more experienced PTL's, ex-PTL's, 
> long
> time open-source-community folks, I'm seriously looking for suggestions and
> ideas.

FWIW, I tried to reach privately some of those folks spamming the
Telemetry projects with poor patches.

Turns out that some of them were just trying to "contribute to
OpenStack" for the sake of it for an internship or the like. I tried to
explain that we were happy having volunteers and that they should ask
for meaningful tasks rather than spamming us, but they only seemed
interested into having things merged quickly and easily.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] gate-keystoneclient-dsvm-functional-ubuntu-xenial is broken

2016-09-21 Thread Rodrigo Duarte
After some investigation I've found the likely issue: the functional
tests run in parallel; some of them create and delete roles, and others use
tokens to perform the creation/update/deletion of other types of fixtures.
The problem is that when we delete a role, we also revoke *all* tokens from
any user that has an assignment containing that role - hence the race
condition: if we are executing an unrelated operation and another test
deletes a role, the user's tokens will be revoked, resulting in a request error.

The strange part is that after reverting this commit [1], the tests seem to
work fine most of the time - which makes me think that commit actually *fixes*
a big issue in our revoke events (since before it, we would not revoke such
tokens).

I can see a couple of options:
- Create brand new users and role assignments for each test, responsible for
handling operations on that test's fixtures (see the rough sketch below)
- Change the "framework" of the tests and rely on tempest plugins

What do you think? Does it make sense?
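
A very rough sketch of what that first option could look like (the helper
shape and names are invented here, assuming a keystoneclient v3 admin client;
not the actual change):

    import uuid


    class IsolatedAccount(object):
        """Give each test its own user and role, so revoking tokens as a
        side effect of deleting a role cannot break an unrelated test."""

        def __init__(self, admin_client, project_id):
            # admin_client is a keystoneclient.v3.client.Client instance.
            self.keystone = admin_client
            self.project_id = project_id

        def create(self):
            suffix = uuid.uuid4().hex
            self.user = self.keystone.users.create(
                name='func-test-user-%s' % suffix, password=suffix)
            self.role = self.keystone.roles.create(
                name='func-test-role-%s' % suffix)
            self.keystone.roles.grant(self.role, user=self.user,
                                      project=self.project_id)
            return self.user

        def destroy(self):
            # Deleting this role only revokes this test's user tokens.
            self.keystone.roles.delete(self.role)
            self.keystone.users.delete(self.user)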

On Tue, Sep 20, 2016 at 11:03 AM, Steve Martinelli 
wrote:

> Since September 14th the keystoneclient functional test job has been
> broken. Let's be mindful of infra resources and stop rechecking the patches
> there. Anyone have time to investigate this?
>
> See patches https://review.openstack.org/#/c/369469/ or
> https://review.openstack.org/#/c/371324/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] deprecation for fedora distro support

2016-09-21 Thread Steven Dake (stdake)



On 9/20/16, 11:18 AM, "Haïkel"  wrote:

2016-09-19 19:40 GMT+02:00 Jeffrey Zhang :
> Kolla core reviewer team,
>
> Kolla supports multiple Linux distros now, including
>
> * Ubuntu
> * CentOS
> * RHEL
> * Fedora
> * Debian
> * OracleLinux
>
> But only Ubuntu, CentOS, and OracleLinux are widely used, and we have
> robust gates to ensure their quality.
>
> For Fedora, Kolla doesn't have any tests and nobody reports any bugs
> about it (i.e. nobody uses Fedora as a base distro image). We (the kolla
> team) also do not have enough resources to support so many Linux
> distros. I prefer to deprecate Fedora support now.  This was discussed in
> the past but was inconclusive[0].
>
> Please vote:
>
> 1. Kolla needs to support Fedora (if so, we need some people to set up the
> gate and fix all the issues ASAP in the O cycle)
> 2. Kolla should deprecate fedora support
>
> [0] 
http://lists.openstack.org/pipermail/openstack-dev/2016-June/098526.html
>


/me has no voting rights

As an RDO maintainer and Fedora developer, I support option 2, as it'd be
very time-consuming to maintain Fedora support.


>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>

Haikel,

Quick Q - are you saying that maintaining Fedora in Kolla is time-consuming, or that
maintaining RDO for Fedora is time-consuming (and something that is being
dropped)?

Thanks for improving clarity on this situation.

Regards
-steve

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl] code churn and questionable changes

2016-09-21 Thread Boris Bobrov

Hello,


in addition to this, please, PLEASE stop creating 'all project bugs'. i
don't want to get emails on updates to projects unrelated to the ones i
care about. also, it makes updating the bug impossible because it times
out. i'm too lazy to search the ML but this has been raised before, please stop.

let's all unite together and block these patches to bring an end to it. :)


People who have contributed to OpenStack for long enough already know this.
Usually it is new contributors who do it, and we cannot reach out to them
via this mailing list. There should be a way to limit this somewhere
in Launchpad.


On 21/09/16 07:56 AM, Amrith Kumar wrote:

Of late I've been seeing a lot of rather questionable changes that
appear to be getting blasted out across multiple projects; changes that
cause considerable code churn, and don't (IMHO) materially improve the
quality of OpenStack.

I’d love to provide a list of the changes that triggered this email but
I know that this will result in a rat hole where we end up discussing
the merits of the individual items on the list and lose sight of the
bigger picture. That won’t help address the question I have below in any
way, so I’m at a disadvantage of having to describe my issue in abstract
terms.



Here’s how I characterize these changes (changes that meet one or more
of these criteria):



- Contains little or no information in the commit message (often just
a single line)

- Makes some generic statement like “Do X not Y”, “Don’t use Z”,
“Make ABC better” with no further supporting information

- Fails (literally) every single CI job, clearly never tested by the
developer

- Gets blasted across many projects, literally tens of them, often with the
same kind of questionable (often wrong) change

- Makes a stylistic Python improvement that is not enforced by any
check (causing a cottage industry of changes making the same correction
every couple of months)

- Reverses some previous Python stylistic improvement with no clear
reason (another cottage industry)



I’ve tried to explain it to myself as enthusiasm, and a desire to
contribute aggressively; I’ve lapsed into cynicism at times and tried to
explain it as gaming the numbers system, but all that is merely
rationalization and doesn’t help.



Over time, the result generally is that these developers’ changes get
ignored. And that’s not a good thing for the community as a whole. We
want to be a welcoming community and one which values all contributions
so I’m looking for some suggestions and guidance on how one can work
with contributors to try and improve the quality of these changes, and
help the contributor feel that their changes are valued by the project?
Other more experienced PTL’s, ex-PTL’s, long time open-source-community
folks, I’m seriously looking for suggestions and ideas.



Any and all input is welcome, do other projects see this, how do you
handle it, is this normal, …



Thanks!



-amrith



cheers,



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [UX] Results Presentation: Managing OpenStack Quotas within Production Environments

2016-09-21 Thread Danielle Mundle
The OpenStack UX team will be giving a results presentation from a series
of interviews intended to understand how operators manage quotas at scale
as well as the pain points associated with that process.  The study was
conducted by Danielle (IRC: uxdanielle) and included operators from CERN,
Pacific Northwest National Laboratory, Workday, Intel and Universidade
Federal de Campina Grande.

The presentation begins in ~20 minutes. WebEx information to join the
session can be found at the top of the UX wiki page:
https://wiki.openstack.org/wiki/UX#Results_Presentation:_Managing_OpenStack_Quotas_within_Production_Environments

Thanks for supporting UX research in the community!
--Danielle
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest]Tempest test concurrency

2016-09-21 Thread Bob Hansen


I have been looking at some of the stackviz output as I'm trying to improve
the run time of my third-party CI. As an example:

http://logs.openstack.org/36/371836/1/check/gate-tempest-dsvm-full-ubuntu-xenial/087db0f/logs/stackviz/#/stdin/timeline

What jumps out is the amount of time that each worker is not running any
tests. I would have expected quite a bit more concurrency between the two
workers in the chart, i.e. more overlap. I've noticed a similar thing with
my test runs using 4 workers.

Can anyone explain why this is, and where I can find more information
about the scheduler and what information it uses to decide when to
dispatch tests? I'm already feeding my system a prior subunit stream to
help influence the scheduler, as my test run times are different due to the
way our OpenStack implementation is architected. A simple round-robin
approach is not the most efficient in my case.

(maybe openstack-infra is a better place to ask?)

Thanks!

Bob Hansen
z/VM OpenStack Enablement
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-21 Thread Ihar Hrachyshka

I just hit that TimeoutException error in neutron functional tests:

http://logs.openstack.org/68/373868/4/check/gate-neutron-dsvm-functional-ubuntu-trusty/4de275e/testr_results.html.gz

It’s a bit weird that we hit that 180 sec timeout because in good runs, the  
test takes ~5 secs.


Do we have a remedy against that kind of failure? I saw nova bumped the  
timeout length for the tests. Is it the approach we should apply across the  
board for other projects?


Ihar

Zane Bitter  wrote:


On 14/09/16 11:44, Mike Bayer wrote:

On 09/14/2016 11:08 AM, Mike Bayer wrote:

On 09/14/2016 09:15 AM, Sean Dague wrote:

I noticed the following issues happening quite often now in the
opportunistic db tests for nova -
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22




It looks like some race has been introduced where the various db
connections are not fully isolated from each other like they used to be.
The testing magic for this is buried pretty deep in oslo.db.


that error message occurs when a connection that is intended against a
SELECT statement fails to provide a cursor.description attribute.  It is
typically a driver-level bug in the MySQL world and corresponds to
mis-handled failure modes from the MySQL connection.

By "various DB connections are not fully isolated from each other" are
you suggesting that a single in-Python connection object itself is being
shared among multiple greenlets?   I'm not aware of a change in oslo.db
that would be a relationship to such an effect.


So, I think by "fully isolated from each other" what you really mean is
"operations upon a connection are not fully isolated from the subsequent
use of that connection", since that's what I see in the logs.  A
connection is attempting to be used during teardown to drop tables,
however it's in this essentially broken state from a PyMySQL
perspective, which would indicate something has gone wrong with this
(pooled) connection in the preceding test that could not be detected or
reverted once the connection was returned to the pool.

From Roman's observation, it looks like a likely source of this
corruption is a timeout that is interrupting the state of the PyMySQL
connection.   In the preceding stack trace, PyMySQL is encountering a
raise as it attempts to call "self._sock.recv_into(b)", and it seems
like some combination of eventlet's response to signals and the
fixtures.Timeout() fixture is the cause of this interruption.   As an
additional wart, something else is getting involved and turning it into
an IndexError, I'm not sure what that part is yet though I can imagine
that might be SQLAlchemy mis-interpreting what it expects to be a
PyMySQL exception class, since we normally look inside of
exception.args[0] to get the MySQL error code.   With a blank exception
like fixtures.TimeoutException, .args is the empty tuple.

The PyMySQL connection is now in an invalid state and unable to perform
a SELECT statement correctly, but the connection is not invalidated and
is instead returned to the connection pool in a broken state.  So the
subsequent teardown, if it uses this same connection (which is likely),
fails because the connection has been interrupted in the middle of its
work and not given the chance to clean up.

Seems like the use of fixtures.Timeout() fixture here is not organized
to work with a database operation in progress, especially an
eventlet-monkeypatched PyMySQL.   Ideally, if something like a timeout
due to a signal handler occurs, the entire connection pool should be
disposed (quickest way, engine.dispose()), or at the very least (and
much more targeted), the connection that's involved should be
invalidated from the pool, e.g. connection.invalidate().
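
For illustration, a minimal sketch of those two remedies against a plain SQLAlchemy engine (the connection URL and the shape of the exception handling are assumptions, not oslo.db's actual fixture code):

    import fixtures
    from sqlalchemy import create_engine, text

    engine = create_engine("mysql+pymysql://user:pass@localhost/test")  # assumed URL

    def run_query(conn):
        return conn.execute(text("SELECT 1")).scalar()

    conn = engine.connect()
    try:
        run_query(conn)
    except fixtures.TimeoutException:
        # The greenlet was interrupted mid-recv; this DBAPI connection can no
        # longer be trusted, so don't let it go back to the pool as-is.
        conn.invalidate()       # targeted: throw away just this connection
        # engine.dispose()      # heavier hammer: discard the whole pool
        raise
    finally:
        conn.close()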

The change to the environment here would be that this timeout is
happening at all - the reason for that is not yet known.   If oslo.db's
version were involved in this error, I would guess that it would be
related to this timeout condition being caused, and not anything to do
with the connection provisioning.


Olso.db 4.13.3 did hit the scene about the time this showed up. So I
think we need to strongly consider blocking it and revisiting these
issues post newton.


We've been seeing similar errors in Heat since at least Liberty  
(https://bugs.launchpad.net/heat/+bug/1499669). Mike and I did some  
poking around yesterday and basically confirmed his theory above. If you  
look at the PyMySQL code, it believes that only an IOError can occur  
while writing to a socket, so it has no handling for any other type of  
exception, thus it can't deal with signal handlers raising exceptions or  
other exceptions being thrown into the greenthread by eventlet. It sounds  
like sqlalchemy also fails to catch at least some of these exceptions and  
invalidate the connection.


tl;dr this appears to have been around forever (at least since we  
switched to using a pure-Python MySQL client) and is almost certainly completely unrelated to any particular release of oslo.db.

[openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

2016-09-21 Thread Matt Riedemann
Nova has policy defaults in code now and we can generate the sample 
using oslopolicy-sample-generator but we'd like to get the default 
policy sample in the Nova developer documentation also, like we have for 
nova.conf.sample.


I see we use the sphinxconfiggen extension for building the 
nova.conf.sample in our docs, but I don't see anything like that for 
generating docs for a sample policy file.


Has anyone already started working on that, or is interested in working 
on that? I've never written a sphinx extension before but I'm guessing 
it could be borrowed a bit from how sphinxconfiggen was written in 
oslo.config.
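
For reference, the existing config-sample hookup in conf.py looks roughly like this (a sketch based on oslo.config's sphinxconfiggen extension; the nova file paths are assumptions), and a policy equivalent would presumably be wired up the same way against the oslopolicy-sample-generator configuration:

    # In doc/source/conf.py (paths are assumptions for illustration):
    extensions = [
        'oslo_config.sphinxconfiggen',
        # 'oslo_policy.sphinxpolicygen',  # hypothetical extension to be written
    ]

    # Input for oslo-config-generator and where the generated sample lands.
    config_generator_config_file = '../../etc/nova/nova-config-generator.conf'
    sample_config_basename = '_static/nova'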


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Doug Hellmann
Excerpts from Rob C's message of 2016-09-21 13:17:07 +0100:
> For my part, I missed the elections, that's my bad. I normally put a
> calendar item in for that issue. I don't think that my missing the election
> date should result in the group being treated in this way. Members of the
> TC have contacted me about unrelated things recently, I have always been
> available however my schedule has made it hard for me to sift through -dev
> recently and I missed the volley of nomination emails. This is certainly a
> failing on my part.
> 
> It's certainly true that the security team, and our cores tend not to pay
> as much attention to the -dev mailing list as we should. The list is pretty
> noisy and  traditionally we always had a separate list that we used for
> security and since moving away from that we tend to focus on IRC or direct
> emails. Though as can be seen with our core announcements etc, we do try to
> do things the "openstack way"
> 
> However, to say we're not active I think is a bit unfair. Thierry and
> others regularly mail me directly about things like rooms for the summit
> and I typically respond in good time. I think what's happened here is more
> an identification of the fact that we need to focus more on doing things
> "the openstack way" rather than being kicked out of the big tent.
> 
> We regularly work with the VMT on security issues, we issue large amounts
> of guidance on our own, we have been working hard on an asset based threat
> analysis process for OpenStack teams who are looking to be security
> managed, we've reviewed external TA documentation and recently in our
> midcycle (yes, we're dedicated enough to fly to Texas and meet up to work
> on such issues) we created the first real set of security documents for an
> OpenStack project,  we worked with Barbican to apply the asset based threat
> analysis that we'd like to engage other teams in [1], [2]
> 
> Here's a couple of the things that we've been doing in this cycle:
> * Issuing Security Notes for Glance, Nova, Horizon, Bandit, Neutron and
> Barbican[3]
> * Updating the security guide (the book we wrote on securing OpenStack)[4]
> * Hosting a midcycle and inducting new members
> * Supporting the VMT with several embargoed and complex vulnerabilities
> * Building up a security blog[5]
> * Making OpenStack the biggest open source project to ever receive the Core
> Infrastructure Initiative Best Practices Badge[6][7]
> * Working on the OpenStack Security Whitepaper [8]
> * Developing CI security tooling such as Bandit [9]
> 
> We are a very active team, working extremely hard on trying to make one
> OpenStack secure. This is often a thankless task, we provide a lot of what
> customers are asking for from OpenStack but as we don't drive individual
> flagship features our contributions are often overlooked. However, above is
> just a selection of what we've been doing throughout the last cycle.
> 
> If it's too late for these comments to have an influence then so be it, but
> this is a failure of appropriate levels of email filtering and perhaps a
> highlight of how we need to alter our culture somewhat to participate more in
> -dev in general, rather than any indication of a lack of dedication, time,
> effort or contribution on the part of the Security Project.  We have
> dedicated huge amounts of effort to OpenStack, and relegating us to a
> working group would be massively detrimental for one reason above all
> others. We get corporate participation, time and effort in terms of
> employee hours and contributions because we're an official part of
> OpenStack, we've had to build this up over time. If you remove the Security
> Project from the big tent I believe that participation in Security for
> OpenStack will drop off significantly.
> 
> We are active, we are helping to make OpenStack secure, and we (I) suck at
> keeping on top of email. Don't kick us out for that. If need be we can find
> another PTL or otherwise take special steps to ensure that missing
> elections doesn't happen again.

While it's admirable of you to take responsibility, there's no
reason to think this is an individual team member's fault.  The
team is responsible as a group for ensuring that it is meeting its
responsibilities to the rest of the community. In this case, the
election officials and TC had no reason to assume that you would
or would not run again. Any contributor could have entered the race.
When no one at all did, that lack of engagement reflected on the
entire team, not only you.

> Apart from missing elections, I think we do a huge amount for the community
> and removing us from OpenStack would in no way be beneficial to either the
> Security Project or OpenStack as a whole.

Based on the list above, the team is doing far more than I was aware
of.  I'm glad to hear that, as it looks like there is a considerable
amount of work going into those contributions. I hope we can find
a way to increase the team's participation in community operations
outside of c

Re: [openstack-dev] [tripleo] let's talk (development) environment deployment tooling and workflows

2016-09-21 Thread John Trowbridge


On 09/19/2016 01:21 PM, Steven Hardy wrote:
> Hi Alex,
> 
> Firstly, thanks for this detailed feedback - it's very helpful to have
> someone with a fresh perspective look at the day-1 experience for TripleO,
> and while some of what follows are "known issues", it's great to get some
> perspective on them, as well as ideas re how we might improve things.
> 
> On Thu, Sep 15, 2016 at 09:09:24AM -0600, Alex Schultz wrote:
>> Hi all,
>>
>> I've recently started looking at the various methods for deploying and
>> developing tripleo.  What I would like to bring up is the current
>> combination of the tooling for managing the VM instances and the
>> actual deployment method to launch the undercloud/overcloud
>> installation.  While running through the various methods and reading
>> up on the documentation, I'm concerned that they are not currently
>> flexible enough for a developer (or operator for that matter) to be
>> able to setup the various environment configurations for testing
>> deployments and doing development.  Additionally I ran into issues
>> just trying get them working at all so this probably doesn't help when
>> trying to attract new contributors as well.  The focus of this email
>> and of my experience seems to relate with workflow-simplification
>> spec[0].  I would like to share my experiences with the various
>> tooling available and raise some ideas.
>>
>> Example Situation:
>>
>> For example, I have a laptop with 16G of RAM and an SSD and I'd like
>> to get started with tripleo.  How can I deploy tripleo?
> 
> So, this is probably problem #1, because while I have managed to deploy a
> minimal TripleO environment on a laptop with 16G of RAM, I think it's
> pretty widely known that it's not really enough (certainly with our default
> configuration, which has unfortunately grown over time as more and more
> things got integrated).
> 
> I see two options here:
> 
> 1. Document the reality (which is really you need a physical machine with
> at least 32G RAM unless you're prepared to deal with swapping).
> 
> 2. Look at providing a "TripleO lite" install option, which disables some
> services (both on the undercloud and default overcloud install).
> 
> Either of these are definitely possible, but (2) seems like the best
> long-term solution (although it probably means another CI job).
> 
>> Tools:
>>
>> instack:
>>
>> I started with the tripleo docs[1] that reference using the instack
>> tools for virtual environment creation while deploying tripleo.   The
>> docs say you need at least 12G of RAM[2].  The docs lie (step 7[3]).
>> So after basically shutting everything down and letting it deploy with
>> all my RAM, the deployment fails because the undercloud runs out of
>> RAM and OOM killer kills off heat.  This was not because I had reduced
>> the amount of ram for the undercloud node or anything.  It was because
>> by default, 6GB of RAM with no swap is configured for the undercloud
>> (not sure if this is a bug?).  So I added a swap file to the
>> undercloud and continued. My next adventure was having the overcloud
>> deployment fail because lack of memory as puppet fails trying to spawn
>> a process and gets denied.  The instack method does not configure swap
>> for the VMs that are deployed and the deployment did not work with 5GB
>> RAM for each node.  So for a full 16GB I was unable to follow the
>> documentation and use instack to successfully deploy.  At this point I
>> switched over to trying to use tripleo-quickstart.  Eventually I was
>> able to figure out a configuration with instack to get it to deploy
>> when I figured out how to enable swap for the overcloud deployment.
> 
> Yeah, so this definitely exposes that we need to update the docs, and also
> provide an easy install-time option to enable swap on all-the-things for
> memory-constrained environments.
> 
>> tripleo-quickstart:
>>
>> The next thing I attempted to use was the tripleo-quickstart[4].
>> Following the directions I attempted to deploy against my localhost.
>> It turns out that doesn't work as expected since ansible likes to do
>> magic when dealing with localhost[5].  Ultimately I was unable to get
>> it working against my laptop locally because I ran into some libvirt
>> issues.  But I was able to get it to work when I pointed it at a
>> separate machine.  It should be noted that tripleo-quickstart creates
>> an undercloud with swap which was nice because then it actually works,
>> but is an inconsistent experience depending on which tool you used for
>> your deployment.
> 
> Yeah, so while a lot of folks have good luck with tripleo-quickstart, it
> has the disadvantage of not currently being the tool used in upstream
> TripleO CI (which folks have looked at fixing, but it's not yet happened).
> 
> The original plan was for tripleo-quickstart to completely replace the
> instack-virt-setup workflow:
> 
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart
> 
> But for a variety of reasons, we never quite

Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Doug Hellmann

> On Sep 21, 2016, at 8:58 AM, Filip Pytloun  wrote:
> 
> Hello,
> 
> it's definitely our bad that we missed elections in OpenStackSalt
> project. Reason is similar to Rob's - we are active on different
> channels (mostly IRC as we keep regular meetings) and aren't used to
> reading mailing lists with lots of generic topics (it would be good to
> have separate mailing list for such calls and critical topics or
> individual mails to project's core members).

With 59 separate teams, even emailing the PTLs directly is becoming 
impractical. I can’t imagine trying to email all of the core members directly.

A separate mailing list just for “important announcements” would need someone 
to decide what is “important”. It would also need everyone to be subscribed, or 
we would have to cross-post to the existing list. That’s why we use topic tags 
on the mailing list, so that it is possible to filter messages based on what is 
important to the reader, rather than the sender.

> Our project is very active [1], trying to do things the OpenStack way
> and I think it would be a pity to remove it from the Big Tent just because
> we missed the mail and therefore our first PTL election.

I don’t see any releases listed on 
https://releases.openstack.org/independent.html either. Are you tagging 
releases, yet?

I see no emails tagged with [salt] on the mailing list since March of this 
year, aside from this thread. Are you using a different communication channel 
for team coordination? You mention IRC, but how are new contributors expected 
to find you?

> 
> Of course I don't want to excuse our fault. In case it's not too late,
> we will try to be more active in mailing lists like openstack-dev and
> not miss such important events next time.
> 
> [1] http://stackalytics.com/?module=openstacksalt-group
> 
> -Filip
> 
> On Wed, Sep 21, 2016 at 12:23 PM, Thierry Carrez 
> wrote:
> 
>> Hi everyone,
>> 
>> As announced previously[1][2], there were no PTL candidates within the
>> election deadline for a number of official OpenStack project teams:
>> Astara, UX, OpenStackSalt and Security.
>> 
>> In the Astara case, the current team working on it would like to abandon
>> the project (and let it be available for any new team who wishes to take
>> it away). A change should be proposed really soon now to go in that
>> direction.
>> 
>> In the UX case, the current PTL (Piet Kruithof) very quickly reacted,
>> explained his error and asked to be considered for the position for
>> Ocata. The TC will officialize his nomination at the next meeting,
>> together with the newly elected PTLs.
>> 
>> That leaves us with OpenStackSalt and Security, where nobody reacted to
>> the announcement that we are missing PTL candidates. That points to a
>> real disconnect between those teams and the rest of the community. Even
>> if you didn't have the election schedule in mind, it was pretty hard to
>> miss all the PTL nominations in the email last week.
>> 
>> The majority of TC members present at the meeting yesterday suggested
>> that those project teams should be removed from the Big Tent, with their
>> design summit space allocation slightly reduced to match that (and make
>> room for other not-yet-official teams).
>> 
>> In the case of OpenStackSalt, it's a relatively new addition, and if
>> they get their act together they could probably be re-proposed in the
>> future. In the case of Security, it points to a more significant
>> disconnect (since it's not the first time the PTL misses the nomination
>> call). We definitely still need to care about Security (and we also need
>> a home for the Vulnerability Management team), but I think the "Security
>> team" acts more like a workgroup than as an official project team, as
>> evidenced by the fact that nobody in that team reacted to the lack of
>> PTL nomination, or the announcement that the team missed the bus.
>> 
>> The suggested way forward there would be to remove the "Security project
>> team", have the Vulnerability Management Team file to be its own
>> official project team (in the same vein as the stable maintenance team),
>> and have Security be just a workgroup rather than a project team.
>> 
>> Thoughts, comments ?
>> 
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-
>> September/103904.html
>> [2]
>> http://lists.openstack.org/pipermail/openstack-dev/2016-
>> September/103939.html
>> 
>> --
>> Thierry Carrez (ttx)
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

2016-09-21 Thread Alexander Makarov

What if policy were manageable using a RESTful API?
I'd like to validate the idea of handling policies in keystone or an
affiliated service: https://review.openstack.org/#/c/325326/


On 21.09.2016 17:49, Matt Riedemann wrote:
Nova has policy defaults in code now and we can generate the sample 
using oslopolicy-sample-generator but we'd like to get the default 
policy sample in the Nova developer documentation also, like we have 
for nova.conf.sample.


I see we use the sphinxconfiggen extension for building the 
nova.conf.sample in our docs, but I don't see anything like that for 
generating docs for a sample policy file.


Has anyone already started working on that, or is interested in 
working on that? I've never written a sphinx extension before but I'm 
guessing it could be borrowed a bit from how sphinxconfiggen was 
written in oslo.config.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest]Tempest test concurrency

2016-09-21 Thread Matthew Treinish
On Wed, Sep 21, 2016 at 10:44:51AM -0400, Bob Hansen wrote:
> 
> 
> I have been looking at some of the stackviz output as I'm trying to improve
> the run time of my third-party CI. As an example:
> 
> http://logs.openstack.org/36/371836/1/check/gate-tempest-dsvm-full-ubuntu-xenial/087db0f/logs/stackviz/#/stdin/timeline
> 
> What jumps out is the amount of time that each worker is not running any
> tests. I would have expected quite a bit more concurrency between the two
> workers in the chart, e.g. more overlap. I've noticed a similar thing with
> my test runs using 4 workers.

So the gaps between tests aren't actually wait time; the workers are saturated
doing stuff during a run. Those gaps are missing data in the subunit streams
that are used as the source of the data for rendering those timelines. The gaps
are where things like setUp, setUpClass, tearDown, tearDownClass, and
addCleanups run, which are not added to the subunit stream. It's just an artifact of
the incomplete data, not bad scheduling. This also means that testr does not
take into account any of the missing timing when it makes decisions based on
previous runs.

> 
> Can anyone explain why this is and where can I find out more information
> about the scheduler and what information it is using to decide when to
> dispatch tests? I'm already feeding my system a prior subunit stream to
> help influence the scheduler as my test run times are different due to the
> way our OpenStack implementation is architected. A simple round-robin
> approach is not the most efficient in my case.

If you're curious about how testr does scheduling, most of that happens here:

https://github.com/testing-cabal/testrepository/blob/master/testrepository/testcommand.py

One thing to remember is that testr isn't actually a test runner, it's a test
runner runner. It partitions the tests based on time information and passes
those to (multiple) test runner workers. The actual order of execution inside
those partitions is handled by the test runner itself. (in our case subunit.run)
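
As a rough mental model only (this is not testrepository's actual code), partitioning by previously recorded durations looks something like:

    def partition(timed, untimed, workers=2):
        # Illustration only, not testrepository's implementation.
        buckets = [{'total': 0.0, 'tests': []} for _ in range(workers)]
        # Longest-running tests first, each onto the currently lightest worker.
        for test_id, duration in sorted(timed.items(), key=lambda kv: -kv[1]):
            target = min(buckets, key=lambda b: b['total'])
            target['tests'].append(test_id)
            target['total'] += duration
        # Tests with no timing data just get dealt out round-robin.
        for i, test_id in enumerate(untimed):
            buckets[i % workers]['tests'].append(test_id)
        return [b['tests'] for b in buckets]

    print(partition({'t_slow': 30.0, 't_mid': 12.0, 't_quick': 2.0},
                    ['t_new_one', 't_new_two']))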

-Matt Treinish


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

2016-09-21 Thread Matt Riedemann

On 9/21/2016 10:05 AM, Alexander Makarov wrote:

What if policy will be manageable using RESTful API?
I'd like to validate the idea to handle policies in keystone or
affiliated service: https://review.openstack.org/#/c/325326/

On 21.09.2016 17:49, Matt Riedemann wrote:

Nova has policy defaults in code now and we can generate the sample
using oslopolicy-sample-generator but we'd like to get the default
policy sample in the Nova developer documentation also, like we have
for nova.conf.sample.

I see we use the sphinxconfiggen extension for building the
nova.conf.sample in our docs, but I don't see anything like that for
generating docs for a sample policy file.

Has anyone already started working on that, or is interested in
working on that? I've never written a sphinx extension before but I'm
guessing it could be borrowed a bit from how sphinxconfiggen was
written in oslo.config.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I'm not sure how that's related to what I'm asking for here. We have 
policy defaults in code, and we want to generate those defaults into a 
sample policy file and have that in the docs. Sure the policy can be 
changed and customized later, but this isn't about that (or how that is 
done), it's just about documenting the default policy since the 
policy.json that ships in the nova tree now is empty. So we want to 
document the defaults, same as nova.conf.sample.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

2016-09-21 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2016-09-21 09:49:29 -0500:
> Nova has policy defaults in code now and we can generate the sample 
> using oslopolicy-sample-generator but we'd like to get the default 
> policy sample in the Nova developer documentation also, like we have for 
> nova.conf.sample.
> 
> I see we use the sphinxconfiggen extension for building the 
> nova.conf.sample in our docs, but I don't see anything like that for 
> generating docs for a sample policy file.
> 
> Has anyone already started working on that, or is interested in working 
> on that? I've never written a sphinx extension before but I'm guessing 
> it could be borrowed a bit from how sphinxconfiggen was written in 
> oslo.config.
> 

I don't have time to do it myself, but I can help get someone else
started and work with them on code reviews in oslo.policy.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][Ceilometer] Ceilometer Newton RC2 available

2016-09-21 Thread Davanum Srinivas
Hello everyone,

The release candidate for Ceilometer for the end of the Newton cycle
is available! You can find the RC2 source code tarball at:

https://tarballs.openstack.org/ceilometer/ceilometer-7.0.0.0rc2.tar.gz

Unless release-critical issues are found that warrant a release
candidate respin, this RC2 will be formally released as the final
Newton release on 6 October. You are therefore strongly encouraged to
test and validate this tarball!

Alternatively, you can directly test the stable/newton release branch at:

http://git.openstack.org/cgit/openstack/ceilometer/log/?h=stable/newton

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/ceilometer/+filebug

and tag it *newton-rc-potential* to bring it to the Ceilometer release
crew's attention.

Thanks,
Dims (On behalf of Release Team)

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Major Hayden
On 09/21/2016 05:17 AM, Rob C wrote:
> Apart from missing elections, I think we do a huge amount for the community 
> and removing us from OpenStack would in no way be beneficial to either the 
> Security Project or OpenStack as a whole.

I definitely agree with Rob here and I support keeping the Security team in the 
big tent.

Although I'm not an active contributor there (but I want to be), I've joined 
some of their meetings and they've provided guidance on some of the work I've 
done with OpenStack-Ansible's (OSA) security hardening role.  The OSSN's they 
produce are helpful and the information contained within them is used when we 
improve OSA.  The Security Guide is also extremely useful for deployers who 
need advice on configuring OpenStack in a secure way.

--
Major Hayden



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

2016-09-21 Thread Andrew Laski


On Wed, Sep 21, 2016, at 11:05 AM, Alexander Makarov wrote:
> What if policy will be manageable using RESTful API?
> I'd like to validate the idea to handle policies in keystone or 
> affiliated service: https://review.openstack.org/#/c/325326/

As Matt said, that's unrelated to what he's asking about.

However, I have asked twice now on the review what the benefit of doing
this is and haven't received a response so I'll ask here. The proposal
would add additional latency to nearly every API operation in a service
and in return what do they get? Now that it's possible to register sane
policy defaults within a project most operators do not even need to
think about policy for projects that do that. And any policy changes
that are necessary are easily handled by a config management system.

I would expect to see a pretty significant benefit in exchange for
moving policy control out of Nova, and so far it's not clear to me what
that would be.
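
For context, registering policy defaults in code with oslo.policy looks roughly like this (the rule name and check string here are illustrative examples, not Nova's actual definitions):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_defaults([
        policy.RuleDefault(
            name='os_compute_api:servers:index',       # example rule name
            check_str='role:admin or project_id:%(project_id)s',
            description='List servers in the project.'),
    ])
    # With defaults registered in code, an empty policy file means "use the
    # defaults"; operators only write out the rules they want to override.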


> 
> On 21.09.2016 17:49, Matt Riedemann wrote:
> > Nova has policy defaults in code now and we can generate the sample 
> > using oslopolicy-sample-generator but we'd like to get the default 
> > policy sample in the Nova developer documentation also, like we have 
> > for nova.conf.sample.
> >
> > I see we use the sphinxconfiggen extension for building the 
> > nova.conf.sample in our docs, but I don't see anything like that for 
> > generating docs for a sample policy file.
> >
> > Has anyone already started working on that, or is interested in 
> > working on that? I've never written a sphinx extension before but I'm 
> > guessing it could be borrowed a bit from how sphinxconfiggen was 
> > written in oslo.config.
> >
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-21 Thread Joshua Harlow

Zane Bitter wrote:

On 14/09/16 11:44, Mike Bayer wrote:



On 09/14/2016 11:08 AM, Mike Bayer wrote:



On 09/14/2016 09:15 AM, Sean Dague wrote:

I noticed the following issues happening quite often now in the
opportunistic db tests for nova -
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22





It looks like some race has been introduced where the various db
connections are not fully isolated from each other like they used to
be.
The testing magic for this is buried pretty deep in oslo.db.


that error message occurs when a connection that is intended against a
SELECT statement fails to provide a cursor.description attribute. It is
typically a driver-level bug in the MySQL world and corresponds to
mis-handled failure modes from the MySQL connection.

By "various DB connections are not fully isolated from each other" are
you suggesting that a single in-Python connection object itself is being
shared among multiple greenlets? I'm not aware of a change in oslo.db
that would be a relationship to such an effect.


So, I think by "fully isolated from each other" what you really mean is
"operations upon a connection are not fully isolated from the subsequent
use of that connection", since that's what I see in the logs. A
connection is attempting to be used during teardown to drop tables,
however it's in this essentially broken state from a PyMySQL
perspective, which would indicate something has gone wrong with this
(pooled) connection in the preceding test that could not be detected or
reverted once the connection was returned to the pool.

From Roman's observation, it looks like a likely source of this
corruption is a timeout that is interrupting the state of the PyMySQL
connection. In the preceding stack trace, PyMySQL is encountering a
raise as it attempts to call "self._sock.recv_into(b)", and it seems
like some combination of eventlet's response to signals and the
fixtures.Timeout() fixture is the cause of this interruption. As an
additional wart, something else is getting involved and turning it into
an IndexError, I'm not sure what that part is yet though I can imagine
that might be SQLAlchemy mis-interpreting what it expects to be a
PyMySQL exception class, since we normally look inside of
exception.args[0] to get the MySQL error code. With a blank exception
like fixtures.TimeoutException, .args is the empty tuple.

The PyMySQL connection is now in an invalid state and unable to perform
a SELECT statement correctly, but the connection is not invalidated and
is instead returned to the connection pool in a broken state. So the
subsequent teardown, if it uses this same connection (which is likely),
fails because the connection has been interrupted in the middle of its
work and not given the chance to clean up.

Seems like the use of fixtures.Timeout() fixture here is not organized
to work with a database operation in progress, especially an
eventlet-monkeypatched PyMySQL. Ideally, if something like a timeout
due to a signal handler occurs, the entire connection pool should be
disposed (quickest way, engine.dispose()), or at the very least (and
much more targeted), the connection that's involved should be
invalidated from the pool, e.g. connection.invalidate().

The change to the environment here would be that this timeout is
happening at all - the reason for that is not yet known. If oslo.db's
version were involved in this error, I would guess that it would be
related to this timeout condition being caused, and not anything to do
with the connection provisioning.



Olso.db 4.13.3 did hit the scene about the time this showed up. So I
think we need to strongly consider blocking it and revisiting these
issues post newton.


We've been seeing similar errors in Heat since at least Liberty
(https://bugs.launchpad.net/heat/+bug/1499669). Mike and I did some
poking around yesterday and basically confirmed his theory above. If you
look at the PyMySQL code, it believes that only an IOError can occur
while writing to a socket, so it has no handling for any other type of
exception, thus it can't deal with signal handlers raising exceptions or
other exceptions being thrown into the greenthread by eventlet. It
sounds like sqlalchemy also fails to catch at least some of these
exceptions and invalidate the connection.

tl;dr this appears to have been around forever (at least since we
switched to using a pure-Python MySQL client) and is almost certainly
completely unrelated to any particular release of oslo.db.


I've seen something similar at https://review.openstack.org/#/c/316935/

Maybe it's time we asked again why we are still using eventlet and whether we
need to anymore. What functionality of it are people actually taking
advantage of? If it's supporting libraries like oslo.service then it'd
probably be useful to talk to the ceilometer folks who replaced
oslo.service with something else (another oslo library for periodics and
https://github.com/sileht/cotyledon for service oriented tasks).

Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Clint Byrum
Excerpts from Filip Pytloun's message of 2016-09-21 14:58:52 +0200:
> Hello,
> 
> it's definitely our bad that we missed elections in OpenStackSalt
> project. Reason is similar to Rob's - we are active on different
> channels (mostly IRC as we keep regular meetings) and aren't used to
> reading mailing lists with lots of generic topics (it would be good to
> have separate mailing list for such calls and critical topics or
> individual mails to project's core members).
> 
> Our project is very active [1], trying to do things the OpenStack way
> and I think it would be a pity to remove it from the Big Tent just because
> we missed the mail and therefore our first PTL election.
> 
> Of course I don't want to excuse our fault. In case it's not too late,
> we will try to be more active in mailing lists like openstack-dev and
> not miss such important events next time.
> 
> [1] http://stackalytics.com/?module=openstacksalt-group
> 

Seems like we need a bit added to this process which makes sure big tent
projects have their primary IRC channel identified, and a list of core
reviewer and meeting chair IRC nicks to ping when something urgent comes
up. This isn't just useful for elections, but is probably something the
VMT would appreciate as well, and likely anyone else who has an urgent
need to make contact with a team.

I think it might also be useful if we could make the meeting bot remind
teams of any pending actions they need to take such as elections upon
#startmeeting.

Seems like all of that could be automated.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] [horizon] why is heat service-list limited to 'admin project?

2016-09-21 Thread Akihiro Motoki
Thanks. After I sent the mail, we had a good conversation with Rabi and
understood the whole background.
Horizon will try to support better keystone v3 support in Ocata cycle.

2016-09-21 22:47 GMT+09:00 Zane Bitter :

> On 21/09/16 03:30, Akihiro Motoki wrote:
>
>> Hi,
>>
>> The default policy.json provided by heat limits 'service-list' API to
>> 'admin' project like below.
>> Is there any reason 'admin' role user in non-'admin' project cannot
>> see service-list?
>>
>
> https://bugs.launchpad.net/keystone/+bug/968696
>
>"service:index": "rule:context_is_admin",
>> "context_is_admin": "role:admin and is_admin_project:True",
>>
>> I noticed this when investigating a horizon bug
>> https://bugs.launchpad.net/horizon/+bug/1624834.
>> horizon currently has a bit different policy engine and it does not
>> support is_admin_project:True.
>> We would like to know the background of this default configuration.
>>
>> Thanks,
>> Akihiro
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Filip Pytloun
Hello,

> With 59 separate teams, even emailing the PTLs directly is becoming 
> impractical. I can’t imagine trying to email all of the core members directly.
> 
> A separate mailing list just for “important announcements” would need someone 
> to decide what is “important”. It would also need everyone to be subscribed, 
> or we would have to cross-post to the existing list. That’s why we use topic 
> tags on the mailing list, so that it is possible to filter messages based on 
> what is important to the reader, rather than the sender.

So maybe call it openstack-organization or openstack-teams or something
to focus on organizational topics.
Using tags and filters is also a way but may not be suitable for
everyone.

> I don’t see any releases listed on 
> https://releases.openstack.org/independent.html either. Are you tagging 
> releases, yet?

Yes, we've done a few releases, see eg. openstack/salt-formula-nova
releases here: https://github.com/openstack/salt-formula-nova/releases

I don't know why it's not listed on releases.openstack.org page.

> I see no emails tagged with [salt] on the mailing list since March of this 
> year, aside from this thread. Are you using a different communication channel 
> for team coordination? You mention IRC, but how are new contributors expected 
> to find you?

Yes, we are using the openstack-salt channel and openstack meetings over
IRC. This channel is mentioned e.g. in the readme here [1] and the community
meetings page [2]; meetings are on a weekly basis (logs [3]).

We also had a couple of people coming to the team IRC channel to talk to us
about the project, so I believe they can find a way to contact us even without
heavy activity on openstack-dev (which should be better, as I admitted).

[1] https://github.com/openstack/openstack-salt
[2] https://wiki.openstack.org/wiki/Meetings/openstack-salt
[3] http://eavesdrop.openstack.org/meetings/openstack_salt/2016/

> > 
> > Of course I don't want to excuse our fault. In case it's not too late,
> > we will try to be more active in mailing lists like openstack-dev and
> > not miss such important events next time.
> > 
> > [1] http://stackalytics.com/?module=openstacksalt-group
> > 
> > -Filip
> > 
> > On Wed, Sep 21, 2016 at 12:23 PM, Thierry Carrez 
> > wrote:
> > 
> >> Hi everyone,
> >> 
> >> As announced previously[1][2], there were no PTL candidates within the
> >> election deadline for a number of official OpenStack project teams:
> >> Astara, UX, OpenStackSalt and Security.
> >> 
> >> In the Astara case, the current team working on it would like to abandon
> >> the project (and let it be available for any new team who wishes to take
> >> it away). A change should be proposed really soon now to go in that
> >> direction.
> >> 
> >> In the UX case, the current PTL (Piet Kruithof) very quickly reacted,
> >> explained his error and asked to be considered for the position for
> >> Ocata. The TC will officialize his nomination at the next meeting,
> >> together with the newly elected PTLs.
> >> 
> >> That leaves us with OpenStackSalt and Security, where nobody reacted to
> >> the announcement that we are missing PTL candidates. That points to a
> >> real disconnect between those teams and the rest of the community. Even
> >> if you didn't have the election schedule in mind, it was pretty hard to
> >> miss all the PTL nominations in the email last week.
> >> 
> >> The majority of TC members present at the meeting yesterday suggested
> >> that those project teams should be removed from the Big Tent, with their
> >> design summit space allocation slightly reduced to match that (and make
> >> room for other not-yet-official teams).
> >> 
> >> In the case of OpenStackSalt, it's a relatively new addition, and if
> >> they get their act together they could probably be re-proposed in the
> >> future. In the case of Security, it points to a more significant
> >> disconnect (since it's not the first time the PTL misses the nomination
> >> call). We definitely still need to care about Security (and we also need
> >> a home for the Vulnerability Management team), but I think the "Security
> >> team" acts more like a workgroup than as an official project team, as
> >> evidenced by the fact that nobody in that team reacted to the lack of
> >> PTL nomination, or the announcement that the team missed the bus.
> >> 
> >> The suggested way forward there would be to remove the "Security project
> >> team", have the Vulnerability Management Team file to be its own
> >> official project team (in the same vein as the stable maintenance team),
> >> and have Security be just a workgroup rather than a project team.
> >> 
> >> Thoughts, comments ?
> >> 
> >> [1]
> >> http://lists.openstack.org/pipermail/openstack-dev/2016-
> >> September/103904.html
> >> [2]
> >> http://lists.openstack.org/pipermail/openstack-dev/2016-
> >> September/103939.html
> >> 
> >> --
> >> Thierry Carrez (ttx)
> >> 
> >> __
> >> O

Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-21 Thread Mike Bayer



On 09/21/2016 11:41 AM, Joshua Harlow wrote:


I've seen something similar at https://review.openstack.org/#/c/316935/

Maybe it's time we asked again why are we still using eventlet and do we
need to anymore. What functionality of it are people actually taking
advantage of? If it's supporting libraries like oslo.service then it'd
probably be useful to talk to the ceilometer folks who replaced
oslo.service with something else (another oslo library for periodics and
https://github.com/sileht/cotyledon for service oriented tasks).


Plus Keystone has gotten off of it.

I actually like eventlet and gevent quite a lot.   I am using it in a 
new middleware component that will be involved with database connection 
pooling.  However, I *don't* use the global monkeypatching aspect. 
That's where this all goes very wrong.   Things that are designed for 
synchronous operations, like database-oriented business methods as well 
as the work of the database driver itself, should run within threads. 
You can in fact use eventlet/gevent's APIs explicitly and you can even 
combine it with traditional threading explicitly.   I'm actually using a 
stdlib Queue (carefully) to send data between greenlets and threads. 
Madness!
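
For illustration, a toy sketch (not the actual middleware described above; the engine URL is made up) of keeping DBAPI work in real threads while greenlets do the coordination, with no monkeypatching anywhere:

    import eventlet
    from eventlet import tpool
    from sqlalchemy import create_engine, text

    engine = create_engine("mysql+pymysql://user:pass@localhost/test")  # made up

    def blocking_query(n):
        # Runs in a real OS thread; the driver stays fully synchronous.
        with engine.connect() as conn:
            return conn.execute(text("SELECT :n"), {"n": n}).scalar()

    def handle(n):
        # Called from a greenlet; tpool.execute yields to the hub while the
        # query runs in eventlet's thread pool, then hands back the result.
        return tpool.execute(blocking_query, n)

    pool = eventlet.GreenPool(size=10)
    print(list(pool.imap(handle, range(5))))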








-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

2016-09-21 Thread Joshua Harlow

Andrew Laski wrote:

However, I have asked twice now on the review what the benefit of doing
this is and haven't received a response so I'll ask here. The proposal
would add additional latency to nearly every API operation in a service
and in return what do they get? Now that it's possible to register sane
policy defaults within a project most operators do not even need to
think about policy for projects that do that. And any policy changes
that are necessary are easily handled by a config management system.

I would expect to see a pretty significant benefit in exchange for
moving policy control out of Nova, and so far it's not clear to me what
that would be.


One way to do this is to set up something like etcd or zookeeper and 
have policy files be placed into certain 'keys' in there by keystone; 
then consuming projects would 'watch' those keys for changes (and 
get notified when they are changed); the project would then reload its 
policy when the other service (keystone) writes a new key/policy.


https://coreos.com/etcd/docs/latest/api.html#waiting-for-a-change

or 
https://zookeeper.apache.org/doc/r3.4.5/zookeeperProgrammers.html#ch_zkWatches


or (pretty sure consul has something similar),

This is pretty standard stuff folks :-/ and it's how afaik things like 
https://github.com/skynetservices/skydns work (and more), and it would 
avoid that 'additional latency' (unless the other service is adjusting 
the policy key every millisecond, which seems sorta unreasonable).
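
As a rough sketch of that watch-and-reload pattern using ZooKeeper via kazoo (the znode path, host and reload hook are made up for illustration):

    import json
    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')   # assumed endpoint
    zk.start()

    def reload_policy(rules):
        # Placeholder: hand the new rules to the service's policy enforcer.
        print('reloading %d policy rules' % len(rules))

    @zk.DataWatch('/openstack/policy/nova')    # made-up znode path
    def on_policy_change(data, stat):
        # Fires once at registration and again whenever the znode changes,
        # so there is no per-request lookup (and no added API latency).
        if data:
            reload_policy(json.loads(data.decode('utf-8')))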


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-21 Thread Joshua Harlow

Mike Bayer wrote:



On 09/21/2016 11:41 AM, Joshua Harlow wrote:


I've seen something similar at https://review.openstack.org/#/c/316935/

Maybe it's time we asked again why are we still using eventlet and do we
need to anymore. What functionality of it are people actually taking
advantage of? If it's supporting libraries like oslo.service then it'd
probably be useful to talk to the ceilometer folks who replaced
oslo.service with something else (another oslo library for periodics and
https://github.com/sileht/cotyledon for service oriented tasks).


Plus Keystone has gotten off of it.

I actually like eventlet and gevent quite a lot. I am using it in a new
middleware component that will be involved with database connection
pooling. However, I *don't* use the global monkeypatching aspect. That's
where this all goes very wrong. Things that are designed for synchronous
operations, like database-oriented business methods as well as the work
of the database driver itself, should run within threads. You can in
fact use eventlet/gevent's APIs explicitly and you can even combine it
with traditional threading explicitly. I'm actually using a stdlib Queue
(carefully) to send data between greenlets and threads. Madness!


Agreed (thanks for making that clear), it's really the monkeying that 
kills things, not eventlet/gevent directly, fair point.


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] [release] opportunistic tests breaking randomly

2016-09-21 Thread Roman Podoliaka
FWIW, there have been no new failures in Nova jobs since then.

I'm confused as well why these tests would sporadically take much
longer time to execute. Perhaps we could install something like atop
on our nodes to answer that question.

On Wed, Sep 21, 2016 at 5:46 PM, Ihar Hrachyshka  wrote:
> I just hit that TimeoutException error in neutron functional tests:
>
> http://logs.openstack.org/68/373868/4/check/gate-neutron-dsvm-functional-ubuntu-trusty/4de275e/testr_results.html.gz
>
> It’s a bit weird that we hit that 180 sec timeout because in good runs, the
> test takes ~5 secs.
>
> Do we have a remedy against that kind of failure? I saw nova bumped the
> timeout length for the tests. Is it the approach we should apply across the
> board for other projects?
>
> Ihar
>
>
> Zane Bitter  wrote:
>
>> On 14/09/16 11:44, Mike Bayer wrote:
>>>
>>> On 09/14/2016 11:08 AM, Mike Bayer wrote:

 On 09/14/2016 09:15 AM, Sean Dague wrote:
>
> I noticed the following issues happening quite often now in the
> opportunistic db tests for nova -
>
> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22sqlalchemy.exc.ResourceClosedError%5C%22
>
>
>
>
> It looks like some race has been introduced where the various db
> connections are not fully isolated from each other like they used to
> be.
> The testing magic for this is buried pretty deep in oslo.db.


 that error message occurs when a connection that is intended against a
 SELECT statement fails to provide a cursor.description attribute.  It is
 typically a driver-level bug in the MySQL world and corresponds to
 mis-handled failure modes from the MySQL connection.

 By "various DB connections are not fully isolated from each other" are
 you suggesting that a single in-Python connection object itself is being
 shared among multiple greenlets?   I'm not aware of a change in oslo.db
 that would be a relationship to such an effect.
>>>
>>>
>>> So, I think by "fully isolated from each other" what you really mean is
>>> "operations upon a connection are not fully isolated from the subsequent
>>> use of that connection", since that's what I see in the logs.  A
>>> connection is attempting to be used during teardown to drop tables,
>>> however it's in this essentially broken state from a PyMySQL
>>> perspective, which would indicate something has gone wrong with this
>>> (pooled) connection in the preceding test that could not be detected or
>>> reverted once the connection was returned to the pool.
>>>
>>> From Roman's observation, it looks like a likely source of this
>>> corruption is a timeout that is interrupting the state of the PyMySQL
>>> connection.   In the preceding stack trace, PyMySQL is encountering a
>>> raise as it attempts to call "self._sock.recv_into(b)", and it seems
>>> like some combination of eventlet's response to signals and the
>>> fixtures.Timeout() fixture is the cause of this interruption.   As an
>>> additional wart, something else is getting involved and turning it into
>>> an IndexError, I'm not sure what that part is yet though I can imagine
>>> that might be SQLAlchemy mis-interpreting what it expects to be a
>>> PyMySQL exception class, since we normally look inside of
>>> exception.args[0] to get the MySQL error code.   With a blank exception
>>> like fixtures.TimeoutException, .args is the empty tuple.
>>>
>>> The PyMySQL connection is now in an invalid state and unable to perform
>>> a SELECT statement correctly, but the connection is not invalidated and
>>> is instead returned to the connection pool in a broken state.  So the
>>> subsequent teardown, if it uses this same connection (which is likely),
>>> fails because the connection has been interrupted in the middle of its
>>> work and not given the chance to clean up.
>>>
>>> Seems like the use of fixtures.Timeout() fixture here is not organized
>>> to work with a database operation in progress, especially an
>>> eventlet-monkeypatched PyMySQL.   Ideally, if something like a timeout
>>> due to a signal handler occurs, the entire connection pool should be
>>> disposed (quickest way, engine.dispose()), or at the very least (and
>>> much more targeted), the connection that's involved should be
>>> invalidated from the pool, e.g. connection.invalidate().
>>>
>>> The change to the environment here would be that this timeout is
>>> happening at all - the reason for that is not yet known.   If oslo.db's
>>> version were involved in this error, I would guess that it would be
>>> related to this timeout condition being caused, and not anything to do
>>> with the connection provisioning.
>>>
> Olso.db 4.13.3 did hit the scene about the time this showed up. So I
> think we need to strongly consider blocking it and revisiting these
> issues post newton.
>>
>>
>> We've been seeing similar errors in Heat since at least Liberty
>> (https://bugs.launchpad.net/heat/+bug/1499669).

Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Travis Mcpeak
is to provide proof via a reproducible benchmark.
> Otherwise we are likely to proceed, as John suggests, with the
> assumption that local target does not provide much benefit. 
>
> I've a few benchmarks myself that I suspect will find areas where
> getting rid of iSCSI is benefit, however if you have any then you
> really need to step up and provide the evidence. Relying on vague
> claims of overhead is now proven to not be a good idea. 
>
> 
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> <
http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
>
> Honestly we can have both, I'll work up a bp to resurrect the idea of
> a "smart" scheduling feature that lets you request the volume be on
> the same node as the compute node and use it directly, and then if
> it's NOT it will attach a target and use it that way (in other words
> you run a stripped down c-vol service on each compute node).

Don't we have at least scheduling problem solved [1] already?

[1]
https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/instance_locality_filter.py
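
For what it's worth, that filter is driven by a scheduler hint; a rough
sketch with python-cinderclient, assuming the filter is enabled in the
Cinder scheduler and keys off a local_to_instance hint, might look like:

    # Sketch only: 'keystone_session' and 'instance_uuid' are assumed to
    # exist, and InstanceLocalityFilter must be enabled in cinder.conf.
    from cinderclient import client

    cinder = client.Client('2', session=keystone_session)
    volume = cinder.volumes.create(
        size=10,
        name='volume-near-my-instance',
        scheduler_hints={'local_to_instance': instance_uuid},
    )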


>
> Sahara keeps insisting on being a snow-flake with Cinder volumes and
> the block driver, it's really not necessary.  I think we can
> compromise just a little both ways, give you standard Cinder semantics
> for volumes, but allow you direct access to them if/when requested,
> but have those be flexible enough that targets *can* be attached so
> they meet all of the required functionality and API implementations. 
> This also means that we don't have to continue having a *special*
> driver in Cinder that frankly only works for one specific use case and
> deployment.
>
> I've pointed to this a number of times but it never seems to
> resonate... but I never learn so I'll try it once again [1].  Note
> that was before the name "brick" was hijacked and now means something
> completely different.
>
> [1]: https://wiki.openstack.org/wiki/CinderBrick
>
> Thanks,
> John




--

Message: 2
Date: Wed, 21 Sep 2016 16:05:08 +0800
From: jun zhong 
To: openstack-dev 
Subject: [openstack-dev]  [manila] Enable IPv6 in Manila Ocata
Message-ID:
 
Content-Type: text/plain; charset="utf-8"

Hi,

As agreed by the manila community in IRC meeting,
we are trying to enable IPv6 in Ocata. Please check the brief spec[1] and
code[2].

The areas affected most are the API (access rules) and the drivers (access
rules & export locations). This change adds IPv6 format validation for the
ip access rule type in the allow_access API, allowing Manila to support
IPv6 ACLs.
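
For illustration, granting access to an IPv6 subnet through
python-manilaclient might look like the sketch below (the session setup and
share are assumed; the only new part is accepting an IPv6 CIDR):

    # Sketch only: 'keystone_session' and 'share_id' are assumed to exist.
    from manilaclient import client

    manila = client.Client('2', session=keystone_session)
    share = manila.shares.get(share_id)
    # With the proposed validation in place, an IPv6 CIDR is accepted as an
    # 'ip' access rule just like an IPv4 one.
    manila.shares.allow(share, 'ip', '2001:db8::/64', 'rw')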

To all of the driver maintainers: could you test the IPv6 feature code[2]
to confirm whether your driver fully supports IPv6?
If anything else turns out not to be IPv6-ready, please let me
know. Thanks
[1] https://review.openstack.org/#/c/362786/
[2] https://review.openstack.org/#/c/312321/


Thanks,
Jun
-- next part --
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160921/880e28e8/attachment-0001.html
>

--

Message: 3
Date: Wed, 21 Sep 2016 08:38:53 +
From: "Afek, Ifat (Nokia - IL)" 
To: "OpenStack Development Mailing List (not for usage questions)"
 
Subject: [openstack-dev] [vitrage] Barcelona design sessions
Message-ID: 
Content-Type: text/plain; charset="utf-8"

Hi,

As discussed in our IRC meeting today, you are welcome to suggest topics 
for vitrage design sessions in Barcelona:
https://etherpad.openstack.org/p/vitrage-barcelona-design-sessions

Thanks,
Ifat.

-- next part --
An HTML attachment was scrubbed...
URL: <
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160921/3b999def/attachment-0001.html
>

--

Message: 4
Date: Wed, 21 Sep 2016 09:53:06 +
From: "Daly, Louise M" 
To: "openstack-dev@lists.openstack.org"
 
Subject: [openstack-dev]  [Kuryr] Kuryr IPVlan Code PoC
Message-ID:
 
Content-Type: text/plain; charset="us-ascii"

Hi everyone,

As promised here is a link to the code PoC for the Kuryr-IPVlan proposal.
https://github.com/lmdaly/kuryr-libnetwork

Link to specific commit
https://github.com/lmdaly/kuryr-libnetwork/commit/1dc895a6d8bfaa03c0dd5cfb2d3e23e2e948a67c


From here you can clone the rep

Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Adam Lawson
But something else struck me, the velocity and sheer NUMBER of emails that
must be filtered to find and extract these key announcements is tricky so I
don't fault anyone for missing the needle in the haystack. Important needle
no doubt but I wonder if there are more efficient ways to ensure important
info is highlighted.

My knee jerk idea is a way for individuals to subscribe to certain topics
that come into their inbox. I don't have a good way within Gmail to
sub-filter these which has been a historical problem for me in terms of
awareness of following hot topics.

//adam


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Wed, Sep 21, 2016 at 9:28 AM, Adam Lawson  wrote:

> You know something that struck me, I noticed there were several teams last
> cycle that did not elect a PTL so this round I was watching to see if any
> teams did not have a PTL elected and presumed it was because of many of the
> reasons surfaced in previous emails in this thread including being heads
> down, watching other channels and potentially insufficient numbers of
> individuals interested in the PTL role.
>
> So I waited and noticed Astara, Security and a handful of other projects
> did not have a PTL elected so I picked Astara because I am an OpenStack
> architect who specializes in SDN, security and distributed storage and
> applied. Of course I missed the deadline by about 2 hours but Security was
> another project I was interested in.
>
> So all this said, there are individuals interested in the PTL role to
> ensure project teams have someone handling the logistics and coordination.
> My issue however was that I was not yet eligible to be a candidate which
> I'll remedy moving forward.
>
> I'm still interested in serving as a PTL for a project that needs one. I
> personally believe that in the case of Security, there needs to be a
> dedicated team due to the nature and impact of security breaches that
> directly influence the perception of OpenStack as a viable cloud solution
> for enterprises looking (or re-looking) at it for the first time.
>
> I'm not a full-time developer but an architect so I am planning to open a
> new discussion about how PTL candidates are currently being qualified.
> Again, different thread.
>
> For this thread, if there is a concern about PTL interest - it's there and
> I would be open to helping the team in this regard if it helps keep the
> team activity in the OpenStack marquee.
>
> //adam
>
>
> *Adam Lawson*
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
>
> On Wed, Sep 21, 2016 at 8:56 AM, Clint Byrum  wrote:
>
>> Excerpts from Filip Pytloun's message of 2016-09-21 14:58:52 +0200:
>> > Hello,
>> >
>> > it's definitely our bad that we missed elections in OpenStackSalt
>> > project. Reason is similar to Rob's - we are active on different
>> > channels (mostly IRC as we keep regular meetings) and aren't used to
>> > reading mailing lists with lots of generic topics (it would be good to
>> > have separate mailing list for such calls and critical topics or
>> > individual mails to project's core members).
>> >
>> > Our project is very active [1], trying to do things the OpenStack way
>> > and I think it would be a pity to remove it from Big Tent just because
>> > we missed mail and therefore our first PTL election.
>> >
>> > Of course I don't want to excuse our fault. In case it's not too late,
>> > we will try to be more active in mailing lists like openstack-dev and
>> > not miss such important events next time.
>> >
>> > [1] http://stackalytics.com/?module=openstacksalt-group
>> >
>>
>> Seems like we need a bit added to this process which makes sure big tent
>> projects have their primary IRC channel identified, and a list of core
>> reviewer and meeting chair IRC nicks to ping when something urgent comes
>> up. This isn't just useful for elections, but is probably something the
>> VMT would appreciate as well, and likely anyone else who has an urgent
>> need to make contact with a team.
>>
>> I think it might also be useful if we could make the meeting bot remind
>> teams of any pending actions they need to take such as elections upon
>> #startmeeting.
>>
>> Seems like all of that could be automated.
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
htt

Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Adam Lawson
You know something that struck me, I noticed there were several teams last
cycle that did not elect a PTL so this round I was watching to see if any
teams did not have a PTL elected and presumed it was because of many of the
reasons surfaced in previous emails in this thread including being heads
down, watching other channels and potentially insufficient numbers of
individuals interested in the PTL role.

So I waited and noticed Astara, Security and a handful of other projects
did not have a PTL elected so I picked Astara because I am an OpenStack
architect who specializes in SDN, security and distributed storage and
applied. Of course I missed the deadline by about 2 hours but Security was
another project I was interested in.

So all this said, there are individuals interested in the PTL role to
ensure project teams have someone handling the logistics and coordination.
My issue however was that I was not yet eligible to be a candidate which
I'll remedy moving forward.

I'm still interested in serving as a PTL for a project that needs one. I
personally believe that in the case of Security, there needs to be a
dedicated team due to the nature and impact of security breaches that
directly influence the perception of OpenStack as a viable cloud solution
for enterprises looking (or re-looking) at it for the first time.

I'm not a full-time developer but an architect so I am planning to open a
new discussion about how PTL candidates are currently being qualified.
Again, different thread.

For this thread, if there is a concern about PTL interest - it's there and
I would be open to helping the team in this regard if it helps keep the
team activity in the OpenStack marquee.

//adam


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Wed, Sep 21, 2016 at 8:56 AM, Clint Byrum  wrote:

> Excerpts from Filip Pytloun's message of 2016-09-21 14:58:52 +0200:
> > Hello,
> >
> > it's definitely our bad that we missed elections in OpenStackSalt
> > project. Reason is similar to Rob's - we are active on different
> > channels (mostly IRC as we keep regular meetings) and aren't used to
> > reading mailing lists with lots of generic topics (it would be good to
> > have separate mailing list for such calls and critical topics or
> > individual mails to project's core members).
> >
> > Our project is very active [1], trying to do things the OpenStack way
> > and I think it would be a pity to remove it from Big Tent just because
> > we missed mail and therefore our first PTL election.
> >
> > Of course I don't want to excuse our fault. In case it's not too late,
> > we will try to be more active in mailing lists like openstack-dev and
> > not miss such important events next time.
> >
> > [1] http://stackalytics.com/?module=openstacksalt-group
> >
>
> Seems like we need a bit added to this process which makes sure big tent
> projects have their primary IRC channel identified, and a list of core
> reviewer and meeting chair IRC nicks to ping when something urgent comes
> up. This isn't just useful for elections, but is probably something the
> VMT would appreciate as well, and likely anyone else who has an urgent
> need to make contact with a team.
>
> I think it might also be useful if we could make the meeting bot remind
> teams of any pending actions they need to take such as elections upon
> #startmeeting.
>
> Seems like all of that could be automated.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] let's talk (development) environment deployment tooling and workflows

2016-09-21 Thread Alex Schultz
On Wed, Sep 21, 2016 at 9:00 AM, John Trowbridge  wrote:
>
>
>
> On 09/19/2016 01:21 PM, Steven Hardy wrote:
> > Hi Alex,
> >
> > Firstly, thanks for this detailed feedback - it's very helpful to have
> > someone with a fresh perspective look at the day-1 experience for TripleO,
> > and while some of what follows are "known issues", it's great to get some
> > perspective on them, as well as ideas re how we might improve things.
> >
> > On Thu, Sep 15, 2016 at 09:09:24AM -0600, Alex Schultz wrote:
> >> Hi all,
> >>
> >> I've recently started looking at the various methods for deploying and
> >> developing tripleo.  What I would like to bring up is the current
> >> combination of the tooling for managing the VM instances and the
> >> actual deployment method to launch the undercloud/overcloud
> >> installation.  While running through the various methods and reading
> >> up on the documentation, I'm concerned that they are not currently
> >> flexible enough for a developer (or operator for that matter) to be
> >> able to setup the various environment configurations for testing
> >> deployments and doing development.  Additionally I ran into issues
> >> just trying to get them working at all, so this probably doesn't help when
> >> trying to attract new contributors as well.  The focus of this email
> >> and of my experience seems to relate with workflow-simplification
> >> spec[0].  I would like to share my experiences with the various
> >> tooling available and raise some ideas.
> >>
> >> Example Situation:
> >>
> >> For example, I have a laptop with 16G of RAM and an SSD and I'd like
> >> to get started with tripleo.  How can I deploy tripleo?
> >
> > So, this is probably problem #1, because while I have managed to deploy a
> > minimal TripleO environment on a laptop with 16G of RAM, I think it's
> > pretty widely known that it's not really enough (certainly with our default
> > configuration, which has unfortunately grown over time as more and more
> > things got integrated).
> >
> > I see two options here:
> >
> > 1. Document the reality (which is really you need a physical machine with
> > at least 32G RAM unless you're prepared to deal with swapping).
> >
> > 2. Look at providing a "TripleO lite" install option, which disables some
> > services (both on the undercloud and default overcloud install).
> >
> > Either of these is definitely possible, but (2) seems like the best
> > long-term solution (although it probably means another CI job).
> >
> >> Tools:
> >>
> >> instack:
> >>
> >> I started with the tripleo docs[1] that reference using the instack
> >> tools for virtual environment creation while deploying tripleo.   The
> >> docs say you need at least 12G of RAM[2].  The docs lie (step 7[3]).
> >> So after basically shutting everything down and letting it deploy with
> >> all my RAM, the deployment fails because the undercloud runs out of
> >> RAM and OOM killer kills off heat.  This was not because I had reduced
> >> the amount of ram for the undercloud node or anything.  It was because
> >> by default, 6GB of RAM with no swap is configured for the undercloud
> >> (not sure if this is a bug?).  So I added a swap file to the
> >> undercloud and continued. My next adventure was having the overcloud
> >> deployment fail because of a lack of memory, as puppet fails trying to spawn
> >> a process and gets denied.  The instack method does not configure swap
> >> for the VMs that are deployed and the deployment did not work with 5GB
> >> RAM for each node.  So for a full 16GB I was unable to follow the
> >> documentation and use instack to successfully deploy.  At this point I
> >> switched over to trying to use tripleo-quickstart.  Eventually I was
> >> able to figure out a configuration with instack to get it to deploy
> >> when I figured out how to enable swap for the overcloud deployment.
> >
> > Yeah, so this definitely exposes that we need to update the docs, and also
> > provide an easy install-time option to enable swap on all-the-things for
> > memory-constrained environments.
> >
> >> tripleo-quickstart:
> >>
> >> The next thing I attempted to use was the tripleo-quickstart[4].
> >> Following the directions I attempted to deploy against my localhost.
> >> It turns out that doesn't work as expected since ansible likes to do
> >> magic when dealing with localhost[5].  Ultimately I was unable to get
> >> it working against my laptop locally because I ran into some libvirt
> >> issues.  But I was able to get it to work when I pointed it at a
> >> separate machine.  It should be noted that tripleo-quickstart creates
> >> an undercloud with swap which was nice because then it actually works,
> >> but is an inconsistent experience depending on which tool you used for
> >> your deployment.
> >
> > Yeah, so while a lot of folks have good luck with tripleo-quickstart, it
> > has the disadvantage of not currently being the tool used in upstream
> > TripleO CI (which folks have looked at fixing, but it's not yet happen

Re: [openstack-dev] [tempest]Tempest test concurrency

2016-09-21 Thread Bob Hansen
Matthew, this helps tremendously. As you can tell the conclusion I was
heading towards was not accurate.

Now to look a bit deeper.

Thanks,

Bob Hansen
z/VM OpenStack Enablement

Matthew Treinish  wrote on 09/21/2016 11:07:04 AM:

> From: Matthew Treinish 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 09/21/2016 11:09 AM
> Subject: Re: [openstack-dev] [tempest]Tempest test concurrency
>
> On Wed, Sep 21, 2016 at 10:44:51AM -0400, Bob Hansen wrote:
> >
> >
> > I have been looking at some of the stackviz output as I'm trying to improve
> > the run time of my third-party CI. As an example:
> >
> > http://logs.openstack.org/36/371836/1/check/gate-tempest-dsvm-
> full-ubuntu-xenial/087db0f/logs/stackviz/#/stdin/timeline
> >
> > What jumps out is the amount of time that each worker is not running any
> > tests. I would have expected quite a bit more concurrency between the two
> > workers in the chart, e.g. more overlap. I've noticed a similar thing with
> > my test runs using 4 workers.
>
> So the gaps between tests aren't actually wait time; the workers are
> saturated doing stuff during a run. Those gaps are missing data in the
> subunit streams that are used as the source of the data for rendering
> those timelines. The gaps are where things like setUp, setUpClass,
> tearDown, tearDownClass, and addCleanups run, which are not added to the
> subunit stream. It's just an artifact of the incomplete data, not bad
> scheduling. This also means that testr does not take into account any of
> the missing timing when it makes decisions based on previous runs.
>
> >
> > Can anyone explain why this is, and where can I find out more information
> > about the scheduler and what information it is using to decide when to
> > dispatch tests? I'm already feeding my system a prior subunit stream to
> > help influence the scheduler, as my test run times are different due to the
> > way our OpenStack implementation is architected. A simple round-robin
> > approach is not the most efficient in my case.
>
> If you're curious about how testr does scheduling, most of that happens here:
>
> https://github.com/testing-cabal/testrepository/blob/master/testrepository/testcommand.py
>
> One thing to remember is that testr isn't actually a test runner, it's a
> test runner runner. It partitions the tests based on time information and
> passes those to (multiple) test runner workers. The actual order of
> execution inside those partitions is handled by the test runner itself
> (in our case subunit.run).
>
> -Matt Treinish
> [attachment "signature.asc" deleted by Bob Hansen/Endicott/IBM]
>
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc][keystone] User Project List

2016-09-21 Thread Dolph Mathews
On Wed, Sep 21, 2016 at 9:03 AM Adrian Turjak 
wrote:

> Nope, default keystone policy has not allowed you to get your own user
> until this patch was merged:
>
> https://github.com/openstack/keystone/commit/c990ec5c144d9b1408d47cb83cb0b3d6aeed0d57
>
> Sad but true it seems. :(
>
Wow, you're right! That's certainly true for both liberty and mitaka in
both of the policy files:

*
https://github.com/openstack/keystone/blob/stable/liberty/etc/policy.json#L44
*
https://github.com/openstack/keystone/blob/stable/liberty/etc/policy.v3cloudsample.json#L49
*
https://github.com/openstack/keystone/blob/stable/mitaka/etc/policy.json#L44
*
https://github.com/openstack/keystone/blob/stable/mitaka/etc/policy.v3cloudsample.json#L48
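
For operators stuck on those releases, the usual workaround is a local
policy override along these lines (illustrative only; the exact rule
composition should match whatever your deployment's policy file already
defines):

    {
        "identity:get_user": "rule:admin_required or user_id:%(target.user.id)s"
    }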

I should also express a +1 for something along the lines of your original
proposal. I'd go so far as to suggest that `openstack show user` (without a
user ID or name as an argument) should return "me" (the authenticated
user), as I think that'd be a better user experience.

> On 22/09/2016 12:58 AM, Dolph Mathews  wrote:
> >
> >
> >
> > On Wed, Sep 21, 2016 at 12:31 AM Adrian Turjak 
> wrote:
> >>
> >> The default keystone policy up until Newton doesn't let a user get their
> >> own user
> >
> >
> > This seems to be the crux of your issue - can you provide an example
> of this specific failure and the corresponding policy? As far as I'm aware,
> the default upstream policy files have allowed for this since about Grizzly
> or Havana, unless that's quietly broken somehow.
> >
> >>
> >>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > --
> > -Dolph
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
-Dolph
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2016-09-21 08:56:24 -0700:
> Excerpts from Filip Pytloun's message of 2016-09-21 14:58:52 +0200:
> > Hello,
> > 
> > it's definitely our bad that we missed elections in OpenStackSalt
> > project. Reason is similar to Rob's - we are active on different
> > channels (mostly IRC as we keep regular meetings) and aren't used to
> > reading mailing lists with lots of generic topics (it would be good to
> > have separate mailing list for such calls and critical topics or
> > individual mails to project's core members).
> > 
> > Our project is very active [1], trying to do things the OpenStack way
> > and I think it would be a pity to remove it from Big Tent just because
> > we missed mail and therefore our first PTL election.
> > 
> > Of course I don't want to excuse our fault. In case it's not too late,
> > we will try to be more active in mailing lists like openstack-dev and
> > not miss such important events next time.
> > 
> > [1] http://stackalytics.com/?module=openstacksalt-group
> > 
> 
> Seems like we need a bit added to this process which makes sure big tent
> projects have their primary IRC channel identified, and a list of core
> reviewer and meeting chair IRC nicks to ping when something urgent comes
> up. This isn't just useful for elections, but is probably something the
> VMT would appreciate as well, and likely anyone else who has an urgent
> need to make contact with a team.

IRC channels are listed on team pages on governance.o.o. For example:
http://governance.openstack.org/reference/projects/openstacksalt.html

Core reviewers are accessible through gerrit. For example,
https://review.openstack.org/#/admin/projects/openstack/openstack-salt,access
leads to https://review.openstack.org/#/admin/groups/1268,members

Meeting chair nicks are available on eavesdrop.o.o. For example,
http://eavesdrop.openstack.org/#OpenStack_Salt_Team_Meeting

It might make sense to automate pulling that information together into a
single page somewhere, maybe the team page on governance.o.o.
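
A rough sketch of that kind of automation, pulling team IRC channels out of
the governance repository (the URL and the irc-channel field name are
assumptions based on reference/projects.yaml):

    # Sketch only: fetch the governance projects.yaml and print each team's
    # IRC channel; URL and field names are assumptions.
    import requests
    import yaml

    URL = ('https://git.openstack.org/cgit/openstack/governance/'
           'plain/reference/projects.yaml')

    teams = yaml.safe_load(requests.get(URL).text)
    for name, team in sorted(teams.items()):
        print('%s: #%s' % (name, team.get('irc-channel', 'unknown')))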

The larger point is that the community expects teams to be paying
attention to the cycle schedule and taking care of the actions expected
without being individually asked to do so.

> I think it might also be useful if we could make the meeting bot remind
> teams of any pending actions they need to take such as elections upon
> #startmeeting.

I could see that being useful, yes.

> Seems like all of that could be automated.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Charles Neill
nd then if
> it's NOT it will attach a target and use it that way (in other words
> you run a stripped down c-vol service on each compute node).

Don't we have at least scheduling problem solved [1] already?

[1]
https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/instance_locality_filter.py

>
> Sahara keeps insisting on being a snow-flake with Cinder volumes and
> the block driver, it's really not necessary.  I think we can
> compromise just a little both ways, give you standard Cinder semantics
> for volumes, but allow you direct access to them if/when requested,
> but have those be flexible enough that targets *can* be attached so
> they meet all of the required functionality and API implementations.
> This also means that we don't have to continue having a *special*
> driver in Cinder that frankly only works for one specific use case and
> deployment.
>
> I've pointed to this a number of times but it never seems to
> resonate... but I never learn so I'll try it once again [1].  Note
> that was before the name "brick" was hijacked and now means something
> completely different.
>
> [1]: https://wiki.openstack.org/wiki/CinderBrick
>
> Thanks,
> John




--

Message: 2
Date: Wed, 21 Sep 2016 16:05:08 +0800
From: jun zhong <jun.zhongj...@gmail.com>
To: openstack-dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev]  [manila] Enable IPv6 in Manila Ocata
Message-ID:
 <caaz2tn-hrs_3d0hvavvvu2ephs4cch1pko88fx1egguh8h9...@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi,

As agreed by the manila community in IRC meeting,
we are trying to enable IPv6 in Ocata. Please check the brief spec[1] and code[2].

The areas affected most are the API (access rules) and the drivers (access
rules & export locations). This change adds IPv6 format validation for the
ip access rule type in the allow_access API, allowing Manila to support
IPv6 ACLs.

To all of the driver maintainers: could you test the IPv6 feature code[2]
to confirm whether your driver fully supports IPv6?
If anything else turns out not to be IPv6-ready, please let me
know. Thanks
[1] https://review.openstack.org/#/c/362786/
[2] https://review.openstack.org/#/c/312321/


Thanks,
Jun
-- next part --
An HTML attachment was scrubbed...
URL: 
<http://lists.openstack.org/pipermail/openstack-dev/attachments/20160921/880e28e8/attachment-0001.html>

--

Message: 3
Date: Wed, 21 Sep 2016 08:38:53 +
From: "Afek, Ifat (Nokia - IL)" 
mailto:ifat.a...@nokia.com>>
To: "OpenStack Development Mailing List (not for usage questions)"

mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [vitrage] Barcelona design sessions
Message-ID: 
mailto:cad9e9dc-e55d-4bcc-bc8e-1cacdffa7...@alcatel-lucent.com>>
Content-Type: text/plain; charset="utf-8"

Hi,

As discussed in our IRC meeting today, you are welcome to suggest topics for 
vitrage design sessions in Barcelona:
https://etherpad.openstack.org/p/vitrage-barcelona-design-sessions

Thanks,
Ifat.

-- next part --
An HTML attachment was scrubbed...
URL: 
<http://lists.openstack.org/pipermail/openstack-dev/attachments/20160921/3b999def/attachment-0001.html>

--

Message: 4
Date: Wed, 21 Sep 2016 09:53:06 +
From: "Daly, Louise M" mailto:louise.m.d...@intel.com>>
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>"

mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev]  [Kuryr] Kuryr IPVlan Code PoC
Message-ID:

mailto:d2c06722ff4fe54c88729fc07a5dcd91b43...@irsmsx101.ger.corp.intel.com>>
Content-Type: text/plain; charset="us-ascii"

Hi everyone,

As promised here is a link to the code PoC for the Kuryr-IPVlan proposal.
https://github.com/lmdaly/kuryr-libnetwork

Link to specific commit
https://github.com/lmdaly/kuryr-libnetwork/commit/1dc895a6d8bfaa03c0dd5cfb2d3e23e2e948a67c

From here you can clone the repo and install Kuryr as you normally would with
a few additional steps:

1. The IPVlan driver must be installed on the VM/Machine that the PoC will be 
run on. Fedora-Server(not the cloud image) includes the driver by default but 
the likes of the cloud image must be modified to include the driver.
2. You must install Docker experimental.
3. You must use the Kuryr IPAM driver for address management.
4. In order to enable the IPVlan mode you must change the ipvlan option in the 
kuryr.conf file from false to true.
5. You must also change the ifname option to match the interface of the private 
network you wish to run the container

[openstack-dev] [Kolla] Ocata summit session poll

2016-09-21 Thread Michał Jastrzębski
Hello,

Now that we have the full list of sessions, let's prioritize them
according to our preferences. Based on this we'll allocate our
summit space.

http://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_8368e1e74f8a0049&akey=91bbcf4baeff0a2f

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

2016-09-21 Thread Andrew Laski


On Wed, Sep 21, 2016, at 12:02 PM, Joshua Harlow wrote:
> Andrew Laski wrote:
> > However, I have asked twice now on the review what the benefit of doing
> > this is and haven't received a response so I'll ask here. The proposal
> > would add additional latency to nearly every API operation in a service
> > and in return what do they get? Now that it's possible to register sane
> > policy defaults within a project most operators do not even need to
> > think about policy for projects that do that. And any policy changes
> > that are necessary are easily handled by a config management system.
> >
> > I would expect to see a pretty significant benefit in exchange for
> > moving policy control out of Nova, and so far it's not clear to me what
> > that would be.
> 
> One way to do this is to set up something like etcd or zookeeper and
> have policy files be placed into certain 'keys' in there by keystone,
> then consuming projects would 'watch' those keys for changes (and
> get notified when they are changed); the project would then reload its
> policy when the other service (keystone) writes a new key/policy.
> 
> https://coreos.com/etcd/docs/latest/api.html#waiting-for-a-change
> 
> or 
> https://zookeeper.apache.org/doc/r3.4.5/zookeeperProgrammers.html#ch_zkWatches
> 
> or (pretty sure consul has something similar),
> 
> This is pretty standard stuff folks :-/ and it's how afaik things like 
> https://github.com/skynetservices/skydns work (and more), and it would 
> avoid that 'additional latency' (unless the other service is adjusting 
> the policy key every millisecond, which seems sorta unreasonable).
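
A rough sketch of that watch-and-reload pattern with the python-etcd3
client (the key name, file path, and the idea that keystone is the writer
are all assumptions):

    # Sketch only: whichever service owns policy writes the key; consumers
    # watch it and rewrite their local policy file, which oslo.policy can
    # then pick up on its next enforcement call.
    import etcd3

    etcd = etcd3.client(host='127.0.0.1', port=2379)

    def publish_policy(policy_json):
        etcd.put('/policies/nova', policy_json)

    def watch_and_reload(policy_file='/etc/nova/policy.json'):
        events, cancel = etcd.watch('/policies/nova')
        for event in events:
            with open(policy_file, 'wb') as f:
                f.write(event.value)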

Sure. Or have Keystone be a frontend for ansible/puppet/chef/... What's
not clear to me in any of this is what's the benefit to having Keystone
as a fronted to policy configuration/changes, or be involved in any real
way with authorization decisions? What issue is being solved by getting
Keystone involved?


> 
> -Josh
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Morgan Fainberg
On Sep 21, 2016 09:37, "Adam Lawson"  wrote:
>
> But something else struck me, the velocity and sheer NUMBER of emails
that must be filtered to find and extract these key announcements is tricky
so I don't fault anyone for missing the needle in the haystack. Important
needle no doubt but I wonder if there are more efficient ways to ensure
important info is highlighted.
>
> My knee jerk idea is a way for individuals to subscribe to certain topics
that come into their inbox. I don't have a good way within Gmail to
sub-filter these which has been a historical problem for me in terms of
awareness of following hot topics.
>
> //adam
>
>
> Adam Lawson
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
>
> On Wed, Sep 21, 2016 at 9:28 AM, Adam Lawson  wrote:
>>
>> You know something that struck me, I noticed there were several teams
last cycle that did not elect a PTL so this round I was watching to see if
any teams did not have a PTL elected and presumed it was because of many of
the reasons surfaced in previous emails in this thread including being
heads down, watching other channels and potentially insufficient numbers of
individuals interested in the PTL role.
>>
>> So I waited and noticed Astara, Security and a handful of other projects
did not have a PTL elected so I picked Astara because I am an OpenStack
architect who specializes in SDN, security and distributed storage and
applied. Of course I missed the deadline by about 2 hours but Security was
another project I was interested in.
>>
>> So all this said, there are individuals interested in the PTL role to
ensure project teams have someone handling the logistics and coordination.
My issue however was that I was not yet eligible to be a candidate which
I'll remedy moving forward.
>>
>> I'm still interested in serving as a PTL for a project that needs one. I
personally believe that in the case of Security, there needs to be a
dedicated team due to the nature and impact of security breaches that
directly influence the perception of OpenStack as a viable cloud solution
for enterprises looking (or re-looking) at it for the first time.
>>
>> I'm not a full-time developer but an architect so I am planning to open
a new discussion about how PTL candidates are currently being qualified.
Again, different thread.
>>
>> For this thread, if there is a concern about PTL interest - it's there
and I would be open to helping the team in this regard if it helps keep the
team activity in the OpenStack marquee.
>>
>> //adam
>>
>>
>> Adam Lawson
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
>> On Wed, Sep 21, 2016 at 8:56 AM, Clint Byrum  wrote:
>>>
>>> Excerpts from Filip Pytloun's message of 2016-09-21 14:58:52 +0200:
>>> > Hello,
>>> >
>>> > it's definitely our bad that we missed elections in OpenStackSalt
>>> > project. Reason is similar to Rob's - we are active on different
>>> > channels (mostly IRC as we keep regular meetings) and aren't used to
>>> > reading mailing lists with lots of generic topics (it would be good to
>>> > have separate mailing list for such calls and critical topics or
>>> > individual mails to project's core members).
>>> >
>>> > Our project is very active [1], trying to do things the OpenStack way
>>> > and I think it would be a pity to remove it from Big Tent just
because
>>> > we missed mail and therefore our first PTL election.
>>> >
>>> > Of course I don't want to excuse our fault. In case it's not too late,
>>> > we will try to be more active in mailing lists like openstack-dev and
>>> > not miss such important events next time.
>>> >
>>> > [1] http://stackalytics.com/?module=openstacksalt-group
>>> >
>>>
>>> Seems like we need a bit added to this process which makes sure big tent
>>> projects have their primary IRC channel identified, and a list of core
>>> reviewer and meeting chair IRC nicks to ping when something urgent comes
>>> up. This isn't just useful for elections, but is probably something the
>>> VMT would appreciate as well, and likely anyone else who has an urgent
>>> need to make contact with a team.
>>>
>>> I think it might also be useful if we could make the meeting bot remind
>>> teams of any pending actions they need to take such as elections upon
>>> #startmeeting.
>>>
>>> Seems like all of that could be automated.
>>>
>>>
__
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> __
> OpenStack Development Mailing List (not

Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Doug Hellmann
Excerpts from Filip Pytloun's message of 2016-09-21 17:43:46 +0200:
> Hello,
> 
> > With 59 separate teams, even emailing the PTLs directly is becoming 
> > impractical. I can’t imagine trying to email all of the core members 
> > directly.
> > 
> > A separate mailing list just for “important announcements” would need 
> > someone to decide what is “important”. It would also need everyone to be 
> > subscribed, or we would have to cross-post to the existing list. That’s why 
> > we use topic tags on the mailing list, so that it is possible to filter 
> > messages based on what is important to the reader, rather than the sender.
> 
> So maybe call it openstack-organization or openstack-teams or something
> to focus on organizational topics.
> Using tags and filters is also a way but may not be suitable for
> everyone.

The idea of splitting the contributor list comes up pretty regularly
and we rehash the same suggestions each time.  Given that what we
have now worked fine for 57 of the 59 official teams (the Astara
team knew in advance it would not have a PTL running, and Piet had
some sort of technical issue submitting his candidacy for the UX
team), I'm not yet convinced that we need to make large-scale changes
to our community communication standard practices in support of the
2 remaining teams.

That's not to say that the system we have now is perfect, but we
can't realistically support multiple systems at the same time.  We
need everyone to use the same system, otherwise we have (even more)
fragmented communication. So, we either need everyone to agree to
some new system and then have people step forward to implement it,
or we need to all agree to do our best to use the system we have
in place now.

> 
> > I don’t see any releases listed on 
> > https://releases.openstack.org/independent.html either. Are you tagging 
> > releases, yet?
> 
> Yes, we've done a few releases, see eg. openstack/salt-formula-nova
> releases here: https://github.com/openstack/salt-formula-nova/releases
> 
> I don't know why it's not listed on releases.openstack.org page.

Did your release liaison follow the instructions to make that happen?
http://git.openstack.org/cgit/openstack/releases/tree/README.rst

> 
> > I see no emails tagged with [salt] on the mailing list since March of this 
> > year, aside from this thread. Are you using a different communication 
> > channel for team coordination? You mention IRC, but how are new 
> > contributors expected to find you?
> 
> Yes, we are using openstack-salt channel and openstack meetings over
> IRC. This channel is mentioned eg. in readme here [1] and community
> meetings page [2] which are on weekly basis (logs [3]).
> 
> We also had a couple of people coming to the team IRC channel to talk to us about the project,
> so I believe they can find the way to contact us even without our heavy
> activity on openstack-dev (which should be better, as I admitted).

That works great for folks in your timezones. It's less useful for
anyone who isn't around at the same time as you, which is one reason
our community emphasizes using email communications. Email gives
you asynchronous discussions for timezone coverage, allows folks
who are traveling or off work for a period to catch up on and
participate in discussions later, etc.

> 
> [1] https://github.com/openstack/openstack-salt
> [2] https://wiki.openstack.org/wiki/Meetings/openstack-salt
> [3] http://eavesdrop.openstack.org/meetings/openstack_salt/2016/
> 
> > > 
> > > Of course I don't want to excuse our fault. In case it's not too late,
> > > we will try to be more active in mailing lists like openstack-dev and
> > > not miss such important events next time.
> > > 
> > > [1] http://stackalytics.com/?module=openstacksalt-group
> > > 
> > > -Filip
> > > 
> > > On Wed, Sep 21, 2016 at 12:23 PM, Thierry Carrez 
> > > wrote:
> > > 
> > >> Hi everyone,
> > >> 
> > >> As announced previously[1][2], there were no PTL candidates within the
> > >> election deadline for a number of official OpenStack project teams:
> > >> Astara, UX, OpenStackSalt and Security.
> > >> 
> > >> In the Astara case, the current team working on it would like to abandon
> > >> the project (and let it be available for any new team who wishes to take
> > >> it away). A change should be proposed really soon now to go in that
> > >> direction.
> > >> 
> > >> In the UX case, the current PTL (Piet Kruithof) very quickly reacted,
> > >> explained his error and asked to be considered for the position for
> > >> Ocata. The TC will officialize his nomination at the next meeting,
> > >> together with the newly elected PTLs.
> > >> 
> > >> That leaves us with OpenStackSalt and Security, where nobody reacted to
> > >> the announcement that we are missing PTL candidates. That points to a
> > >> real disconnect between those teams and the rest of the community. Even
> > >> if you didn't have the election schedule in mind, it was pretty hard to
> > >> miss all the PTL nominations in the email last wee

Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Doug Hellmann
> > So I would still like to have the ability to attach partitions
> > locally bypassing the iSCSI to guarantee 2 things:
> > * Make sure that lio processes do not compete for CPU and RAM
> > with VMs running on the same host.
> > * Make sure that CPU intensive VMs (or whatever else is
> > running nearby) are not blocking the storage.
> >
> >
> > So these are, unless we see the effects via benchmarks, completely
> > meaningless requirements. Ivan's initial benchmarks suggest
> > that LVM+LIO is pretty much close enough to BDD even with iSCSI
> > involved. If you're aware of a case where it isn't, the first
> > thing to do is to provide proof via a reproducible benchmark.
> > Otherwise we are likely to proceed, as John suggests, with the
> > assumption that local target does not provide much benefit. 
> >
> > I've a few benchmarks myself that I suspect will find areas where
> > getting rid of iSCSI is benefit, however if you have any then you
> > really need to step up and provide the evidence. Relying on vague
> > claims of overhead is now proven to not be a good idea. 
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > <
> http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> >
> > Honestly we can have both, I'll work up a bp to resurrect the idea of
> > a "smart" scheduling feature that lets you request the volume be on
> > the same node as the compute node and use it directly, and then if
> > it's NOT it will attach a target and use it that way (in other words
> > you run a stripped down c-vol service on each compute node).
> 
> Don't we have at least scheduling problem solved [1] already?
> 
> [1]
> https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/instance_locality_filter.py
> 
> >
> > Sahara keeps insisting on being a snow-flake with Cinder volumes and
> > the block driver, it's really not necessary.  I think we can
> > compromise just a little both ways, give you standard Cinder semantics
> > for volumes, but allow you direct access to them if/when requested,
> > but have those be flexible enough that targets *can* be attached so
> > they meet all of the required functionality and API implementations. 
> > This also means that we don't have to continue having a *special*
> > driver in Cinder that frankly only works for one specific use case and
> > deployment.
> >
> > I've pointed to this a number of times but it never seems to
> > resonate... but I never learn so I'll try it once again [1].  Note
> > that was before the name "brick" was hijacked and now means something
> > completely different.
> >
> > [1]: https://wiki.openstack.org/wiki/CinderBrick
> >
> > Thanks,
> > John
> 
> 
> 
> 
> --
> 
> Message: 2
> Date: Wed, 21 Sep 2016 16:05:08 +0800
> From: jun zhong 
> To: openstack-dev 
> Subject: [openstack-dev]  [manila] Enable IPv6 in Manila Ocata
> Message-ID:
>  
> Content-Type: text/plain; charset="utf-8"
> 
> Hi,
> 
> As agreed by the manila community in IRC meeting,
> we try to enable IPv6 in Ocata. Please check the brief spec[1] and 
> code[2]).
> 
> The areas affected most are API (access rules) and in the drivers (access
> rules
> & export locations). This change intends to add the IPv6 format validation
> for
> ip access rule type in allow_access API, allowing manila to support IPv6
> ACL.
> 
> Hi all of the driver maintainers, could you test the IPv6 feature code[2]
> to make sure whether your driver can completely support IPv6.
> If there still have something else might not be IPv6-ready, please let me
> known. Thanks
> [1] https://review.openstack.org/#/c/362786/
> [2] https://review.openstack.org/#/c/312321/
> 
> 
> Thanks,
> Jun
> -- next part --
> An HTML attachment was scrubbed...
> URL: <
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20160921/880e28e8/attachment-0001.html
> >
> 
> --
> 
> Message: 3
> Date: Wed, 21

Re: [openstack-dev] [Kolla] Ocata summit session poll

2016-09-21 Thread Steven Dake (stdake)
One note in this poll.  Repo-split has already reached a consensus decision via 
ml vote, and the activity around that will happen prior to summit, so it is 
probably worth ignoring entirely.

Regards
-steve


On 9/21/16, 10:14 AM, "Michał Jastrzębski"  wrote:

Hello,

Now that we have the full list of sessions, let's prioritize them
according to our preferences. Based on this we'll allocate our
summit space.


http://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_8368e1e74f8a0049&akey=91bbcf4baeff0a2f

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kolla] Ocata summit session poll

2016-09-21 Thread Swapnil Kulkarni
On Wed, Sep 21, 2016 at 11:16 PM, Steven Dake (stdake)  wrote:
> One note in this poll.  Repo-split has already reached a consensus decision 
> via ml vote, and the activity around that will happen prior to summit, so it 
> is probably worth ignoring entirely.
>
> Regards
> -steve
>
>
> On 9/21/16, 10:14 AM, "Michał Jastrzębski"  wrote:
>
> Hello,
>
> Now that we have the full list of sessions, let's prioritize them
> according to our preferences. Based on this we'll allocate our
> summit space.
>
> 
> http://civs.cs.cornell.edu/cgi-bin/vote.pl?id=E_8368e1e74f8a0049&akey=91bbcf4baeff0a2f
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


I agree. We already devoted a couple of sessions (or more) at the Austin
design summit and a couple of polls on the ML thread.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-21 Thread Jeremy Stanley
On 2016-09-21 15:41:11 +1000 (+1000), Tony Breeds wrote:
> On Tue, Sep 20, 2016 at 11:57:26AM +0100, Daniel P. Berrange wrote:
[...]
> >   (3) Do nothing, leave the bug unfixed in stable/liberty
> > 
> > While this is a security bug, it is one that has existed in every single
> > openstack release ever, and it is not a particularly severe bug. Even if
> > we fixed it in liberty, it would still remain unfixed in every release before
> > liberty. We're on the verge of releasing Newton, at which point liberty
> > becomes less relevant. So I question whether it is worth spending more
> > effort on dealing with this in liberty upstream.  Downstream vendors
> > still have the option to do either (1) or (2) in their own private
> > branches if they so desire, regardless of whether we fix it upstream.
> 
> I think 3 is the least worst option.
[...]

At least from my perspective with my VMT hat on, declaring that we
have a security vulnerability severe enough to fix in stable/mitaka
but unfixable in stable/liberty calls into question whether the
latter is actually maintainable by our general definition as a
project or is ready for EOL.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-21 Thread Sean Dague
On 09/21/2016 02:03 PM, Jeremy Stanley wrote:
> On 2016-09-21 15:41:11 +1000 (+1000), Tony Breeds wrote:
>> On Tue, Sep 20, 2016 at 11:57:26AM +0100, Daniel P. Berrange wrote:
> [...]
>>>   (3) Do nothing, leave the bug unfixed in stable/liberty
>>>
>>> While this is a security bug, it is one that has existed in every single
>>> openstack release ever, and it is not a particularly severe bug. Even if
>>> we fixed it in liberty, it would still remain unfixed in every release before
>>> liberty. We're on the verge of releasing Newton, at which point liberty
>>> becomes less relevant. So I question whether it is worth spending more
>>> effort on dealing with this in liberty upstream.  Downstream vendors
>>> still have the option to do either (1) or (2) in their own private
>>> branches if they so desire, regardless of whether we fix it upstream.
>>
>> I think 3 is the least worst option.
> [...]
> 
> At least from my perspective with my VMT hat on, declaring that we
> have a security vulnerability severe enough to fix in stable/mitaka
> but unfixable in stable/liberty calls into question whether the
> latter is actually maintainable by our general definition as a
> project or is ready for EOL.

Well, the risk profile of what has to be changed for stable/liberty is
the problem, given that all the actual code is buried in libraries which
have tons of other changes. Special cherry-picked library versions would be
needed to fix this without opening up a ton of risk for breaking
stable/liberty badly.

That is the bit of work that no one seems to really have picked up.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][tc] etherpad for collecting goal info

2016-09-21 Thread Doug Hellmann
This week at the TC meeting someone (Anne?) pointed out that the
name of the etherpad with the list of community-wide goals wasn't
ideal ("ocata-tc-goals" includes the cycle name and the "tc" component
gives the impression that these are goals of the "TC" rather than
that the pad was used by the TC for notes about goals).

I've created a new etherpad at
https://etherpad.openstack.org/p/community-goals to replace the old one,
and copied the old content over.

The goals there are listed in the order that someone added them to
the original list, and the order should not be taken to imply any
sort of prioritization or preference.

At this point the set of goals for Ocata is closed, but the other items
may come up for discussion at the summit to be considered for Pike.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Useful tool for easier viewing of IRC logs online

2016-09-21 Thread Boden Russell

> Source code is here: https://github.com/abashmak/chrome-irc-filter
> 
> Comments, suggestions are welcome.

Nice thanks!

I've always wanted a tool that could alert me to "missed mentions" when
I'm offline from IRC, rather than having to manually parse the IRC logs for
those times I'm offline. However, I'm guessing that falls outside the
scope of this tool, or could be done with some other tool (I haven't
investigated yet)?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Chivers, Doug
> > I think we can compromise just a little both ways, give you standard Cinder semantics
> > for volumes, but allow you direct acccess to them if/when requested,
> > but have those be flexible enough that targets *can* be attached so
> > they meet all of the required functionality and API implementations. 
> > This also means that we don't have to continue having a *special*
> > driver in Cinder that frankly only works for one specific use case and
> > deployment.
> >
> > I've pointed to this a number of times but it never seems to
> > resonate... but I never learn so I'll try it once again [1].  Note
> > that was before the name "brick" was hijacked and now means something
> > completely different.
> >
> > [1]: https://wiki.openstack.org/wiki/CinderBrick
> >
> > Thanks,
> > John?
> 
> 
> 
> 
> --
> 
> Message: 2
> Date: Wed, 21 Sep 2016 16:05:08 +0800
> From: jun zhong 
> To: openstack-dev 
> Subject: [openstack-dev]  [manila] Enable IPv6 in Manila Ocata
> Message-ID:
>  
> Content-Type: text/plain; charset="utf-8"
> 
> Hi,
> 
> As agreed by the manila community in the IRC meeting, we are trying to
> enable IPv6 in Ocata. Please check the brief spec[1] and code[2].
> 
> The areas affected most are the API (access rules) and the drivers
> (access rules & export locations). This change adds IPv6 format
> validation for the ip access rule type in the allow_access API, allowing
> manila to support IPv6 ACLs.
> 
> To all of the driver maintainers: could you test the IPv6 feature code[2]
> to make sure your driver fully supports IPv6? If something else might not
> be IPv6-ready, please let me know. Thanks
> [1] https://review.openstack.org/#/c/362786/
> [2] https://review.openstack.org/#/c/312321/
    > 
> 
> Thanks,
> Jun
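
A minimal sketch of the kind of validation Jun describes above, using
Python 3's stdlib ipaddress module (the helper name is made up for
illustration; the actual manila change is in review [2]):

    # Sketch only: accept IPv4 or IPv6, optionally with a prefix length,
    # for "ip"-type access rules; reject anything else.
    import ipaddress

    def is_valid_ip_access(access_to):
        try:
            # strict=False tolerates host addresses such as 2001:db8::10/64
            ipaddress.ip_network(access_to, strict=False)
            return True
        except ValueError:
            return False

    assert is_valid_ip_access('10.0.0.0/24')
    assert is_valid_ip_access('2001:db8::10/64')
    assert not is_valid_ip_access('not-an-address')
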
> -- next part --
> An HTML attachment was scrubbed...
> URL: <
> 
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160921/880e28e8/attachment-0001.html
> >
> 
> --
> 
> Message: 3
> Date: Wed, 21 Sep 2016 08:38:53 +
> From: "Afek, Ifat (Nokia - IL)" 
> To: "OpenStack Development Mailing List (not for usage questions)"
>  
> Subject: [openstack-dev] [vitrage] Barcelona design sessions
> Message-ID: 
> Content-Type: text/plain; charset="utf-8"
> 
> Hi,
> 
> As discussed in our IRC meeting today, you are welcome to suggest topics 
> for vitrage design sessions in Barcelona:
> https://etherpad.openstack.org/p/vitrage-barcelona-design-sessions
> 
> Thanks,
> Ifat.
> 
> -- next part --
> An HTML attachment was scrubbed...
> URL: <
> 
http://lists.openstack.org/pipermail/openstack-dev/attachments/20160921/3b999def/attachment-0001.html
> >
> 
> --
> 
> Message: 4
> Date: Wed, 21 Sep 2016 09:53:06 +
> From: "Daly, Louise M" 
> To: "openstack-dev@lists.openstack.org"
>  
> Subject: [openstack-dev]  [Kuryr] Kuryr IPVlan Code PoC
> Message-ID:
>  
> Content-Type: text/plain; charset="us-ascii"
> 
> Hi everyone,
> 
> As promised here is a link to the code PoC for the Kuryr-IPVlan proposal.
> https://github.com/lmdaly/kuryr-libnetwork
> 
> Link to specific commit
> 
https://github.com/lmdaly/kuryr-libnetwork/commit/1dc895a6d8bfaa03c0dd5cfb2d3e23e2e948a67c
> 
> From here you can clone the repo and install Kuryr as you normally would
> with a few additional steps:
> 
> 1. The IPVlan driver must be installed on the VM/Machine that the PoC will
> be run on. Fedora-Server (not the cloud image) includes the driver by
> default but the likes of the cloud image must be modified to include the
> driver.
> 2. You must install Docker experimental.
> 3. You must use the Kuryr IPAM driver for address management.
> 4. In order to enable the IPVlan mode you must change the ipvlan option in
> the kuryr.conf file from false to true.
> 5. You must also change the ifname option to match the interface of the
> private network you wish to run the containers on. (Default 

Re: [openstack-dev] [osc][keystone] User Project List

2016-09-21 Thread Steve Martinelli
On Wed, Sep 21, 2016 at 1:04 PM, Dolph Mathews 
wrote:

>
> I should also express a +1 for something along the lines of your original
> proposal. I'd go so far as to suggest that `openstack show user` (without a
> user ID or name as an argument) should return "me" (the authenticated
> user), as I think that'd be a better user experience.
>

That should be fixed in openstackclient 3.0.0 --
https://github.com/openstack/python-openstackclient/commit/337d013c94378a4b3f0e8f90e4f5bd745448658f
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Filip Pytloun
On 2016/09/21 13:23, Doug Hellmann wrote:
> The idea of splitting the contributor list comes up pretty regularly
> and we rehash the same suggestions each time.  Given that what we
> have now worked fine for 57 of the 59 offical teams (the Astara
> team knew in advance it would not have a PTL running, and Piet had
> some sort of technical issue submitting his candidacy for the UX
> team), I'm not yet convinced that we need to make large-scale changes
> to our community communication standard practices in support of the
> 2 remaining teams.
> 
> That's not to say that the system we have now is perfect, but we
> can't realistically support multiple systems at the same time.  We
> need everyone to use the same system, otherwise we have (even more)
> fragmented communication. So, we either need everyone to agree to
> some new system and then have people step forward to implement it,
> or we need to all agree to do our best to use the system we have
> in place now.

I think it may work as is (with proper mail filters), but as someone already
mentioned in this thread, it would be better to have someone more experienced
in the OpenStack community as a core team member or PTL to catch these
things; otherwise an inexperienced PTL/team may just miss something, as
happened now.

Still, I don't think this is such a big issue that a project should just be
fired from the Big Tent - who will benefit from that? Someone already
mentioned what it will mean for such a team (loss of potential developers,
etc.), especially for teams who are actively working on their project, as
both OpenStackSalt and Security seem to be.

And I thought that real work on a project is our primary goal... this
situation is like losing your job because you left a dirty coffee cup at
your workspace.

> Did your release liaison follow the instructions to make that happen?
> http://git.openstack.org/cgit/openstack/releases/tree/README.rst

That seems to be the reason. There was a new release planned, with support
for containerized deployment, which would follow that guide (the first
releases were done during/shortly after openstack-salt's move to the Big
Tent). As mentioned above, a more experienced PTL would be helpful here and
we are currently talking with people who could fit that position.

> 
> > 
> > > I see no emails tagged with [salt] on the mailing list since March of 
> > > this year, aside from this thread. Are you using a different 
> > > communication channel for team coordination? You mention IRC, but how are 
> > > new contributors expected to find you?
> > 
> > Yes, we are using openstack-salt channel and openstack meetings over
> > IRC. This channel is mentioned eg. in readme here [1] and community
> > meetings page [2] which are on weekly basis (logs [3]).
> > 
> > We also had a couple of people comming to team IRC talking to us about 
> > project
> > so I believe they can find the way to contact us even without our heavy
> > activity at openstack-dev (which should be better as I admitted).
> 
> That works great for folks in your timezones. It's less useful for
> anyone who isn't around at the same time as you, which is one reason
> our community emphasizes using email communications. Email gives
> you asynchronous discussions for timezone coverage, allows folks
> who are traveling or off work for a period to catch up on and
> participate in discussions later, etc.
> 
> > 
> > [1] https://github.com/openstack/openstack-salt
> > [2] https://wiki.openstack.org/wiki/Meetings/openstack-salt
> > [3] http://eavesdrop.openstack.org/meetings/openstack_salt/2016/
> > 
> > > > 
> > > > Of course I don't want to excuse our fault. In case it's not too late,
> > > > we will try to be more active in mailing lists like openstack-dev and
> > > > not miss such important events next time.
> > > > 
> > > > [1] http://stackalytics.com/?module=openstacksalt-group
> > > > 
> > > > -Filip
> > > > 
> > > > On Wed, Sep 21, 2016 at 12:23 PM, Thierry Carrez 
> > > > wrote:
> > > > 
> > > >> Hi everyone,
> > > >> 
> > > >> As announced previously[1][2], there were no PTL candidates within the
> > > >> election deadline for a number of official OpenStack project teams:
> > > >> Astara, UX, OpenStackSalt and Security.
> > > >> 
> > > >> In the Astara case, the current team working on it would like to 
> > > >> abandon
> > > >> the project (and let it be available for any new team who wishes to 
> > > >> take
> > > >> it away). A change should be proposed really soon now to go in that
> > > >> direction.
> > > >> 
> > > >> In the UX case, the current PTL (Piet Kruithof) very quickly reacted,
> > > >> explained his error and asked to be considered for the position for
> > > >> Ocata. The TC will officialize his nomination at the next meeting,
> > > >> together with the newly elected PTLs.
> > > >> 
> > > >> That leaves us with OpenStackSalt and Security, where nobody reacted to
> > > >> the announcement that we are missing PTL candidates. That points to a
> > > >> real disconnect between those 

Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Dave Walker
On 21 September 2016 at 19:20, Chivers, Doug  wrote:

> My concern is with the original wording “The suggested way forward there
> would be to remove the "Security project team"”.
>
> This seems like a move to instantly reduce investment in OpenStack
> security, because the majority of members of the Security Project are
> corporately funded, which will be significantly impacted by the removal of
> the security project. I have no knowledge over the difference between a
> working group and a project, like everyone else on the project we are
> simply here to contribute to OpenStack security, drive innovation in
> security, deliver documentation like OSSNs, etc, rather than get involved
> in the politics of OpenStack.
>
> In response to the various questions of why no-one from our project
> noticed that we didn’t have a nomination for the PTL, we assumed that was
> taken care of. Realistically maybe two or three people on the security
> project have the availability to be PTL, one being our current PTL; for all
> the rest of us it's simply not a concern until we need to vote.
>
> On a personal note, reading –dev is unfortunately a lower priority than
> designing architectures, responding to customers and sales teams, closing
> tickets, writing decks and on the afternoon or so I can spend each week,
> working on my upstream projects (this week it was:
> https://review.openstack.org/#/c/357978/5 - thanks to the Barbican team
> for all their work). Possibly this is wrong, but I didn’t sign up as a
> contributor to spend all my spare time reading mailing lists.
>
>


Honestly, I can only echo this.  I've been around the OSSP(G) since 2013,
but only really been active in the last 18 months or so.  It's been pretty
clear that when Security moved from a Group to a Project, investment
towards security grew dramatically.

The meetings are well run, with real objectives achieved and members
focused on constant outreach to other projects.  For reference, the email
that started this thread was picked up and discussed by some members of the
OSSP within *minutes* of it being sent... and those people were pretty
outraged.

I'm sure it wasn't intended, but the original email could be read as quite
insulting.. "That points to a real disconnect between those teams and the
rest of the community".  I think this is an unfair statement based on
minimal observation of a point of order.

The OSSP spends a significant amount of its time on outreach, which is the
whole underlying principle of the project.  This can be seen with efforts
such as bandit gate coverage, Threat Analysis and OSSN's.

Further, reducing the summit timetable for Security and having "Security be
just a workgroup" really sends the wrong message about Security being a
first-class citizen in OpenStack.

The OSSP ticks all of the 4 opens, including "The leadership is chosen by the
contributors to the project". It is convention that a nomination email is
sent to -dev, but a missed email shouldn't be taken to mean that the
contributors have not considered their leader.

I think people working on the OSSP assumed it would be Rob again, and were
happy with this.  It isn't because of lack of community engagement or
interest IMO.

So, other than someone failing to nominate for PTL in the time-frame, what
else justifies the statement that this "points to a real disconnect between
those teams and the rest of the community", or shows that the OSSG no longer
meets the 4 opens?

--
Kind Regards,
Dave Walker
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Doug Hellmann
uspect will find areas where
> > > getting rid of iSCSI is benefit, however if you have any then you
> > > really need to step up and provide the evidence. Relying on vague
> > > claims of overhead is now proven to not be a good idea. 
> > >
> > > 
> > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > <
> > http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > 
> <http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>
> > >
> > > Honestly we can have both, I'll work up a bp to resurrect the idea of
> > > a "smart" scheduling feature that lets you request the volume be on
> > > the same node as the compute node and use it directly, and then if
> > > it's NOT it will attach a target and use it that way (in other words
> > > you run a stripped down c-vol service on each compute node).
> > 
> > Don't we have at least scheduling problem solved [1] already?
> > 
> > [1]
> > 
> https://github.com/openstack/cinder/blob/master/cinder/scheduler/filters/instance_locality_filter.py
> > 
> > >
> > > Sahara keeps insisting on being a snow-flake with Cinder volumes and
> > > the block driver, it's really not necessary.  I think we can
> > > compromise just a little both ways, give you standard Cinder semantics
> > > for volumes, but allow you direct acccess to them if/when requested,
> > > but have those be flexible enough that targets *can* be attached so
> > > they meet all of the required functionality and API implementations. 
> > > This also means that we don't have to continue having a *special*
> > > driver in Cinder that frankly only works for one specific use case and
> > > deployment.
> > >
> > > I've pointed to this a number of times but it never seems to
> > > resonate... but I never learn so I'll try it once again [1].  Note
> > > that was before the name "brick" was hijacked and now means something
> > > completely different.
> > >
> > > [1]: https://wiki.openstack.org/wiki/CinderBrick
> > >
> > > Thanks,
> > > John?
> > 
> > 
> > 
> > 
> > --
> > 
> > Message: 2
> > Date: Wed, 21 Sep 2016 16:05:08 +0800
> > From: jun zhong 
> > To: openstack-dev 
> > Subject: [openstack-dev]  [manila] Enable IPv6 in Manila Ocata
> > Message-ID:
> >  
> > Content-Type: text/plain; charset="utf-8"
> > 
> > Hi,
> > 
> > As agreed by the manila community in IRC meeting,
> > we try to enable IPv6 in Ocata. Please check the brief spec[1] and 
> > code[2]).
> > 
> > The areas affected most are API (access rules) and in the drivers 
> (access
> > rules
> > & export locations). This change intends to add the IPv6 format 
> validation
> > for
> > ip access rule type in allow_access API, allowing manila to support IPv6
> > ACL.
> > 
> > Hi all of the driver maintainers, could you test the IPv6 feature 
> code[2]
> > to make sure whether your driver can completely support IPv6.
> > If there still have something else might not be IPv6-ready, please let 
> me
> > known. Thanks
> > [1] https://review.openstack.org/#/c/362786/
> > [2] https://review.openstack.org/#/c/312321/
> > 
> > 
> > Thanks,
> > Jun
> > -- next part --
> > An HTML attachment was scrubbed...
> > URL: <
> > 
> http://lists.openstack.org/pipermail/openstack-dev/attachments/20160921/880e28e8/attachment-0001.html
> > >
> > 
> > --
> > 
> > Message: 3
> > Date: Wed, 21 Sep 2016 08:38:53 +
> > From: "Afek, Ifat (Nokia - IL)" 
> > To: "OpenStack Development Mailing List (not for usage questions)"
>

Re: [openstack-dev] [UX] Results Presentation: Managing OpenStack Quotas within Production Environments

2016-09-21 Thread Barrett, Carol L
Danielle – I think this is good, but if you are not getting the level of 
participation you want…or commitment to follow-on actions, I would suggest you 
adopt a “go to them” strategy.

Thanks
Carol

From: Danielle Mundle [mailto:danielle.m.mun...@gmail.com]
Sent: Wednesday, September 21, 2016 7:42 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [UX] Results Presentation: Managing OpenStack Quotas 
within Production Environments

The OpenStack UX team will be giving a results presentation from a series of 
interviews intended to understand how operators manage quotas at scale as well 
as the pain points associated with that process.  The study was conducted by 
Danielle (IRC: uxdanielle) and included operators from CERN, Pacific Northwest 
National Laboratory, Workday, Intel and Universidade Federal de Campina Grande.

The presentation begins in ~20 minutes. WebEx information to join the session 
can be found at the top of the UX wiki page: 
https://wiki.openstack.org/wiki/UX#Results_Presentation:_Managing_OpenStack_Quotas_within_Production_Environments

Thanks for supporting UX research in the community!
--Danielle

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Travis McPeak
"So all this said, there are individuals interested in the PTL role to
ensure project teams have someone handling the logistics and coordination.
My issue however was that I was not yet eligible to be a candidate which
I'll remedy moving forward.

I'm still interested in serving as a PTL for a project that needs one. I
personally believe that in the case of Security, there needs to be a
dedicated team due to the nature and impact of security breaches that
directly influence the perception of OpenStack as a viable cloud solution
for enterprises looking (or re-looking) at it for the first time."

@Adam we'd certainly appreciate your help staying on top of required
activities, email, etc.  Surely a PTL should be somebody who has at least
been involved in the project?

-- 
-Travis
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Doug Hellmann
Excerpts from Filip Pytloun's message of 2016-09-21 20:36:42 +0200:
> On 2016/09/21 13:23, Doug Hellmann wrote:
> > The idea of splitting the contributor list comes up pretty regularly
> > and we rehash the same suggestions each time.  Given that what we
> > have now worked fine for 57 of the 59 offical teams (the Astara
> > team knew in advance it would not have a PTL running, and Piet had
> > some sort of technical issue submitting his candidacy for the UX
> > team), I'm not yet convinced that we need to make large-scale changes
> > to our community communication standard practices in support of the
> > 2 remaining teams.
> > 
> > That's not to say that the system we have now is perfect, but we
> > can't realistically support multiple systems at the same time.  We
> > need everyone to use the same system, otherwise we have (even more)
> > fragmented communication. So, we either need everyone to agree to
> > some new system and then have people step forward to implement it,
> > or we need to all agree to do our best to use the system we have
> > in place now.
> 
> I think it may work as is (with proper mail filters), but as someone already
> mentioned in this thread it would be better to have someone more experienced
> in Openstack community projects as a core team member or PTL to catch all
> these things otherwise it may happen that inexperienced PTL/team just miss
> something like now.

If the team needs help, please ask for it. We should be able to find
someone to do a little mentoring and provide some guidance.

> Still I don't think it's such a big issue to just fire project from Big Tent -
> who will benefit from that? Again someone already mentioned what will it mean
> for such team (loss of potencial developers, etc.).
> Moreover for teams who are actively working on project as it seems that both
> OpenStackSalt and Security teams do.

Signing up to be a part of the big tent is not free. Membership comes
with expectations and obligations. Failing to meet those may be an
indication that the team isn't ready, or that membership is not a good
fit.

> And I thought that real work on a project is our primary goal.. this situation
> is like loosing job when I left dirty coffee cup at my workspace.

I hope you consider team leadership and community participation to
be more important than your analogy implies.

Doug

> 
> > Did your release liaison follow the instructions to make that happen?
> > http://git.openstack.org/cgit/openstack/releases/tree/README.rst
> 
> That seems to be the reason. There was new release planned with support for
> containerized deployment which would follow that guide (as first releases were
> done during/shortly after openstack-salt move to Big Tent).
> As mentioned above - more experienced PTL would be helpful here and we are
> currently talking with people who could fit that position.
> 
> > 
> > > 
> > > > I see no emails tagged with [salt] on the mailing list since March of 
> > > > this year, aside from this thread. Are you using a different 
> > > > communication channel for team coordination? You mention IRC, but how 
> > > > are new contributors expected to find you?
> > > 
> > > Yes, we are using openstack-salt channel and openstack meetings over
> > > IRC. This channel is mentioned eg. in readme here [1] and community
> > > meetings page [2] which are on weekly basis (logs [3]).
> > > 
> > > We also had a couple of people comming to team IRC talking to us about 
> > > project
> > > so I believe they can find the way to contact us even without our heavy
> > > activity at openstack-dev (which should be better as I admitted).
> > 
> > That works great for folks in your timezones. It's less useful for
> > anyone who isn't around at the same time as you, which is one reason
> > our community emphasizes using email communications. Email gives
> > you asynchronous discussions for timezone coverage, allows folks
> > who are traveling or off work for a period to catch up on and
> > participate in discussions later, etc.
> > 
> > > 
> > > [1] https://github.com/openstack/openstack-salt
> > > [2] https://wiki.openstack.org/wiki/Meetings/openstack-salt
> > > [3] http://eavesdrop.openstack.org/meetings/openstack_salt/2016/
> > > 
> > > > > 
> > > > > Of course I don't want to excuse our fault. In case it's not too late,
> > > > > we will try to be more active in mailing lists like openstack-dev and
> > > > > not miss such important events next time.
> > > > > 
> > > > > [1] http://stackalytics.com/?module=openstacksalt-group
> > > > > 
> > > > > -Filip
> > > > > 
> > > > > On Wed, Sep 21, 2016 at 12:23 PM, Thierry Carrez 
> > > > > 
> > > > > wrote:
> > > > > 
> > > > >> Hi everyone,
> > > > >> 
> > > > >> As announced previously[1][2], there were no PTL candidates within 
> > > > >> the
> > > > >> election deadline for a number of official OpenStack project teams:
> > > > >> Astara, UX, OpenStackSalt and Security.
> > > > >> 
> > > > >> In the Astara case, the current team work

Re: [openstack-dev] [nova][stable/liberty] Backport impasse: "virt: set address space & CPU time limits when running qemu-img"

2016-09-21 Thread Jeremy Stanley
On 2016-09-21 14:05:51 -0400 (-0400), Sean Dague wrote:
[...]
> Well, the risk profile of what has to be changed for stable/liberty
> (given that all the actual code is buried in libraries which have tons
> of other changes). Special cherry-picked library versions would be
> needed to fix this without openning up a ton of risk for breaking
> stable/liberty badly.
> 
> That is the bit of work that no one seems to really have picked up.

Makes sense. It's also possible in that case that it's not a sign of
stable/liberty being unmaintainable, but rather implies that the
vulnerability as fixed in stable/mitaka falls below the effective
severity threshold to warrant a security advisory.

Put another way, I'd like to find some reasonable means to explain
the lack of a fix in a "supported" stable branch. If the VMT and
stable branch maintainers need to accept the possibility that something
can be treated as a vulnerability by the OpenStack community but
only fixed in some supported branches, that introduces a lot of
additional uncertainty for downstream consumers of our advisory
process and the associated patches tracked by it.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

2016-09-21 Thread Joshua Harlow

Andrew Laski wrote:


On Wed, Sep 21, 2016, at 12:02 PM, Joshua Harlow wrote:

Andrew Laski wrote:

However, I have asked twice now on the review what the benefit of doing
this is and haven't received a response so I'll ask here. The proposal
would add additional latency to nearly every API operation in a service
and in return what do they get? Now that it's possible to register sane
policy defaults within a project most operators do not even need to
think about policy for projects that do that. And any policy changes
that are necessary are easily handled by a config management system.

I would expect to see a pretty significant benefit in exchange for
moving policy control out of Nova, and so far it's not clear to me what
that would be.

One way to do this is to setup something like etc.d or zookeeper and
have policy files be placed into certain 'keys' in there by keystone,
then consuming projects would 'watch' those keys for being changed (and
get notified when they are changed); the project would then reload its
policy when the other service (keystone) write a new key/policy.

https://coreos.com/etcd/docs/latest/api.html#waiting-for-a-change

or
https://zookeeper.apache.org/doc/r3.4.5/zookeeperProgrammers.html#ch_zkWatches

or (pretty sure consul has something similar),

This is pretty standard stuff folks :-/ and it's how afaik things like
https://github.com/skynetservices/skydns work (and more), and it would
avoid that 'additional latency' (unless the other service is adjusting
the policy key every millisecond, which seems sorta unreasonable).


Sure. Or have Keystone be a frontend for ansible/puppet/chef/ What's
not clear to me in any of this is what's the benefit to having Keystone
as a fronted to policy configuration/changes, or be involved in any real
way with authorization decisions? What issue is being solved by getting
Keystone involved?



I don't understand the puppet/chef connection, can u clarify.

If I'm interpreting it right, I would assume it's the same reason that 
something like 'skydns' exists over etcd; to provide a useful API that 
focuses on the dns particulars that etcd will of course not have any 
idea about. So I guess the keystone API could(?)/would(?) then focus on 
policy particulars as its value-add.


Maybe now I understand what u mean by puppet/chef, in that you are 
asking why isn't skydns (for example) just letting/invoking 
puppet/chef/ansible to distribute/send-out dns (dnsmasq) files? Is that 
your equivalent question?


-Josh
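
For readers unfamiliar with the watch pattern being discussed, a minimal
sketch using the kazoo ZooKeeper client; the path, hosts and JSON payload
are assumptions for illustration, not a proposed keystone or oslo.policy
interface:

    import json

    from kazoo.client import KazooClient

    zk = KazooClient(hosts='127.0.0.1:2181')
    zk.start()

    @zk.DataWatch('/openstack/policy/nova')
    def reload_policy(data, stat):
        # Called once at registration and again whenever the key changes, so
        # the service re-reads policy only when something (keystone, a deploy
        # tool, ...) actually publishes a new document.
        if data is not None:
            rules = json.loads(data.decode('utf-8'))
            print('reloaded %d policy rules (znode version %s)'
                  % (len(rules), stat.version))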

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Useful tool for easier viewing of IRC logs online

2016-09-21 Thread Bashmakov, Alexander
That's a good idea, I was just thinking along the same lines today. It's 
definitely out of the scope of my tool, though. Some targeted filtering could 
be implemented, but it would still be in "offline" mode. If you want it live, 
then perhaps some IRC clients offer that functionality or maybe there is a ZNC 
module for that.
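
For the offline case, a rough sketch of scanning an eavesdrop-style log file
for mentions (the log path and nick are just arguments; nothing here is
specific to the Chrome extension above):

    import re
    import sys

    def missed_mentions(log_path, nick):
        # Return every log line that mentions the nick as a whole word.
        pattern = re.compile(r'\b%s\b' % re.escape(nick), re.IGNORECASE)
        with open(log_path) as log:
            return [line.rstrip() for line in log if pattern.search(line)]

    if __name__ == '__main__':
        for line in missed_mentions(sys.argv[1], sys.argv[2]):
            print(line)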

> -Original Message-
> From: Boden Russell [mailto:boden...@gmail.com]
> Sent: Wednesday, September 21, 2016 11:22 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all] Useful tool for easier viewing of IRC logs
> online
> 
> 
> > Source code is here: https://github.com/abashmak/chrome-irc-filter
> >
> > Comments, suggestions are welcome.
> 
> Nice thanks!
> 
> I've always wanted a tool that could alert me of "missed mentions" when I'm
> offline IRC rather than having to manually parse the IRC logs for those times
> I'm offline. However I'm guessing that falls outside the scope of this tool or
> could be done with some other tool (I haven't investigated yet)?
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Jakub Pavlik

Hello all,

it took us 2 years of hard work to get this official. OpenStack-Salt is now
used by around 40 production deployments, it is very focused on operations,
and its popularity is growing. You are removing the project a week after one
of the top contributors announced that they will use it as part of their
solution. We made mistakes, however I do not think that is a reason to
remove us. I do not think that the quality of a project is measured like
this. Our PTL got ill and did not do his job properly for the last 3 weeks,
but this can happen to anybody.

It is up to you. If you think that we are useless for the community, then
remove us and we will have to continue outside of it. However, the growing
successful use cases will then not be under the official OpenStack
community, which makes me feel bad.


Thanks,

Jakub

On 21.9.2016 21:03, Doug Hellmann wrote:

Excerpts from Filip Pytloun's message of 2016-09-21 20:36:42 +0200:

On 2016/09/21 13:23, Doug Hellmann wrote:

The idea of splitting the contributor list comes up pretty regularly
and we rehash the same suggestions each time.  Given that what we
have now worked fine for 57 of the 59 offical teams (the Astara
team knew in advance it would not have a PTL running, and Piet had
some sort of technical issue submitting his candidacy for the UX
team), I'm not yet convinced that we need to make large-scale changes
to our community communication standard practices in support of the
2 remaining teams.

That's not to say that the system we have now is perfect, but we
can't realistically support multiple systems at the same time.  We
need everyone to use the same system, otherwise we have (even more)
fragmented communication. So, we either need everyone to agree to
some new system and then have people step forward to implement it,
or we need to all agree to do our best to use the system we have
in place now.

I think it may work as is (with proper mail filters), but as someone already
mentioned in this thread it would be better to have someone more experienced
in Openstack community projects as a core team member or PTL to catch all
these things otherwise it may happen that inexperienced PTL/team just miss
something like now.

If the team needs help, please ask for it. We should be able to find
someone to do a little mentoring and provide some guidance.


Still I don't think it's such a big issue to just fire project from Big Tent -
who will benefit from that? Again someone already mentioned what will it mean
for such team (loss of potencial developers, etc.).
Moreover for teams who are actively working on project as it seems that both
OpenStackSalt and Security teams do.

Signing up to be a part of the big tent is not free. Membership comes
with expectations and obligations. Failing to meet those may be an
indication that the team isn't ready, or that membership is not a good
fit.


And I thought that real work on a project is our primary goal.. this situation
is like loosing job when I left dirty coffee cup at my workspace.

I hope you consider team leadership and community participation to
be more important than your analogy implies.

Doug


Did your release liaison follow the instructions to make that happen?
http://git.openstack.org/cgit/openstack/releases/tree/README.rst

That seems to be the reason. There was new release planned with support for
containerized deployment which would follow that guide (as first releases were
done during/shortly after openstack-salt move to Big Tent).
As mentioned above - more experienced PTL would be helpful here and we are
currently talking with people who could fit that position.


I see no emails tagged with [salt] on the mailing list since March of this 
year, aside from this thread. Are you using a different communication channel 
for team coordination? You mention IRC, but how are new contributors expected 
to find you?

Yes, we are using openstack-salt channel and openstack meetings over
IRC. This channel is mentioned eg. in readme here [1] and community
meetings page [2] which are on weekly basis (logs [3]).

We also had a couple of people comming to team IRC talking to us about project
so I believe they can find the way to contact us even without our heavy
activity at openstack-dev (which should be better as I admitted).

That works great for folks in your timezones. It's less useful for
anyone who isn't around at the same time as you, which is one reason
our community emphasizes using email communications. Email gives
you asynchronous discussions for timezone coverage, allows folks
who are traveling or off work for a period to catch up on and
participate in discussions later, etc.


[1] https://github.com/openstack/openstack-salt
[2] https://wiki.openstack.org/wiki/Meetings/openstack-salt
[3] http://eavesdrop.openstack.org/meetings/openstack_salt/2016/


Of course I don't want to excuse our fault. In case it's not too late,
we will try to be more active in mailing lists like openstack-dev and
not miss such important 

Re: [openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

2016-09-21 Thread Andrew Laski


On Wed, Sep 21, 2016, at 03:18 PM, Joshua Harlow wrote:
> Andrew Laski wrote:
> >
> > On Wed, Sep 21, 2016, at 12:02 PM, Joshua Harlow wrote:
> >> Andrew Laski wrote:
> >>> However, I have asked twice now on the review what the benefit of doing
> >>> this is and haven't received a response so I'll ask here. The proposal
> >>> would add additional latency to nearly every API operation in a service
> >>> and in return what do they get? Now that it's possible to register sane
> >>> policy defaults within a project most operators do not even need to
> >>> think about policy for projects that do that. And any policy changes
> >>> that are necessary are easily handled by a config management system.
> >>>
> >>> I would expect to see a pretty significant benefit in exchange for
> >>> moving policy control out of Nova, and so far it's not clear to me what
> >>> that would be.
> >> One way to do this is to setup something like etc.d or zookeeper and
> >> have policy files be placed into certain 'keys' in there by keystone,
> >> then consuming projects would 'watch' those keys for being changed (and
> >> get notified when they are changed); the project would then reload its
> >> policy when the other service (keystone) write a new key/policy.
> >>
> >> https://coreos.com/etcd/docs/latest/api.html#waiting-for-a-change
> >>
> >> or
> >> https://zookeeper.apache.org/doc/r3.4.5/zookeeperProgrammers.html#ch_zkWatches
> >>
> >> or (pretty sure consul has something similar),
> >>
> >> This is pretty standard stuff folks :-/ and it's how afaik things like
> >> https://github.com/skynetservices/skydns work (and more), and it would
> >> avoid that 'additional latency' (unless the other service is adjusting
> >> the policy key every millisecond, which seems sorta unreasonable).
> >
> > Sure. Or have Keystone be a frontend for ansible/puppet/chef/ What's
> > not clear to me in any of this is what's the benefit to having Keystone
> > as a fronted to policy configuration/changes, or be involved in any real
> > way with authorization decisions? What issue is being solved by getting
> > Keystone involved?
> >
> 
> I don't understand the puppet/chef connection, can u clarify.
> 
> If I'm interpreting it right, I would assume it's the same reason that 
> something like 'skydns' exists over etcd; to provide a useful API that 
> focuses on the dns particulars that etcd will of course not have any 
> idea about. So I guess the keystone API could(?)/would(?) then focus on 
> policy particulars as its value-add.
> 
> Maybe now I understand what u mean by puppet/chef, in that you are 
> asking why isn't skydns (for example) just letting/invoking 
> puppet/chef/ansible to distribute/send-out dns (dnsmasq) files? Is that 
> your equivalent question?

I'm focused on Nova/Keystone/OpenStack here, I'm sure skydns has good
reasons for their technical choices and I'm in no place to question
them.

I'm trying to understand the value-add that Keystone could provide here.
Policy configuration is fairly static so I'm not understanding the
desire to put an API on top of it. But perhaps I'm missing the use case
here which is why I've been asking.

My ansible/puppet/chef comparison was just that those are ways to
distribute static files and would work just as well as something built
on top of etcd/zookeeper. I'm not really concerned about how it's
implemented though. I'm just trying to understand if the desire is to
have Keystone handle this so that deployers don't need to work with
their configuration management system to configure policy files, or is
there something more here?


> 
> -Josh
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [security] [salt] Removal of Security and OpenStackSalt project teams from the Big Tent

2016-09-21 Thread Adam Lawson
Travis,

My answer would be -that- is the most ideal scenario. I care about
OpenStack and ensuring quality projects have adequate representation so I
checked to see which ones didn't have anyone defined for leadership and
picked one to step in and help, assuming no one was able to fill that role
for that specific cycle.

On Sep 21, 2016 12:06 PM, "Travis McPeak"  wrote:

> "So all this said, there are individuals interested in the PTL role to
> ensure project teams have someone handling the logistics and coordination.
> My issue however was that I was not yet eligible to be a candidate which
> I'll remedy moving forward.
>
> I'm still interested in serving as a PTL for a project that needs one. I
> personally believe that in the case of Security, there needs to be a
> dedicated team due to the nature and impact of security breaches that
> directly influence the perception of OpenStack as a viable cloud solution
> for enterprises looking (or re-looking) at it for the first time."
>
> @Adam we'd certainly appreciate your help staying on top of required
> activities, email, etc.  Surely a PTL should be somebody who has at least
> been involved in the project?
>
> --
> -Travis
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [AODH] event-alarm timeout discussion

2016-09-21 Thread gordon chung


On 21/09/16 01:43 AM, Zhai, Edwin wrote:
> All,
>
> I'd like make some clarification for the event-alarm timeout design as
> many of you have some misunderstanding here. Pls. correct me if any
> mistakes.
>
> I realized that there are 2 different things, but we mix them sometime:
> 1. event-timeout-alarm
> This is one new type of alarm that bracket *.start and *.end events and
> get alarmed when receive *.start but no *.end in timeout. This new alarm
> handles one type of events/actions, e.g. create one alarm for instance
> creation, then all instances created in future will be handled by this
> alarm. This is not for real time, so it's acceptable that user know one
> instance creation failure in 5 mins.
>
> This new type of alarm can be implemented by one worker to check the DB
> periodically to do the statistic work. That is, new evaluator works in
> 'polling' mode, something like threshold alarm evaluator.
>
> One BP is @
> https://review.openstack.org/#/c/199005/

we should probably disregard this bp since it was assumed you guys 
talked over it. i'm abandoning it as i think we just forgot about it.

>
> 2. event-alarm timeout
> This is one new feature for _existed_ event-alarm evaluator. One alarm
> becomes 'UNALARM' when not receive desire event in timeout. This feature
> just handles one specific event, e.g create one alarm for instance ABC's
> XYZ operation with 5s, then user is notified in 5s immediately if no
> XYZ.done event comes. If want check for another instance, we need create
> another alarm.
>
> This is used in telco scenario, where operator want know if operation
> failure in real time.
>
> My patch(https://review.openstack.org/#/c/272028/) is for this purpose
> only, but I feel many guys mistaken them(sometimes even me) as they
> looks similar. So my question is: Do you think this telco usage model of
> event-alarm timeout is valid? If not, we can avoid discussing its
> implementation and ignore following.
>
>
> === event-alarm timeout implementation =
> As it's for event-alarm, we need keep it as event-driven. Furthermore,
> for quick response, we need use event for timeout handling. Periodic
> worker can't meet real time requirement.
>
> Separated queue for 'alarm.timeout.end'(indicates timeout expire) leads
> tricky race condition.  e.g.  'XYZ.done' comes in queue1, and
> 'alarm.timeout.end' comes in queue2, so that they are handled in
> parallel way:
>
> 1. In queue1, 'XYZ.done' is checking against alarm(current UNKNOWN), and
> will be set ALARM in next step.
> 2. In queue2, 'alarm.timeout.end' is checking against same alarm(current
> UNKNOWN), and will be set to OK(UNALARM) in next step.
> 3. In qeueu1, alarm transition happen: UNKNOWN => ALARM
> 4. In queue2, another alarm transition happen: ALARM =>OK(UNALARM)
>


can you clarify how this works? after the user creates an event timeout alarm
definition through the API (i assume the alarm definition specifies we should
see event x within y seconds):
- how does the evaluator get this alarm definition? is there an
alarm.timeout.start message?
- what is this UNALARM state? to be honest, that isn't a real word so i
don't know what it's supposed to represent here.

the biggest problem for me is the only thing i know is there's an
alarm.timeout.end event that needs to be handled by the evaluator. i don't
know where it's coming from or what it's needed for.


> So this alarm has bogus transition: UNKNOWN=>ALARM=>UNALARM, and tells
> the user: required event came, then no required event came;
>
> If put all events in one queue, evaluator handles them one by one(low
> level oslo mesg should be multi-threaded) so that second event would see
> alarm state as not UNKNOWN, and give up its transition.  As Gordc said,
> it's slow. But only very small part of the event-alarm need timeout
> handling, as it's only for telco usage model.

so the multithreaded part is what i was talking about. it's not handling 
them one by one. it's handling 64 (or whatever the default is) at any 
given time. whether it's one queue or two, you have a race to handle.
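
To make the serialization point concrete, a bare sketch in plain Python (not
AODH code; the event names follow Edwin's example) of pushing the awaited
event and the timeout expiry through one queue with a single consumer, so
only the first transition for an alarm wins:

    import queue
    import threading

    events = queue.Queue()
    alarm_state = {}

    def arm_timeout(alarm_id, timeout):
        # On expiry, emit the timeout marker onto the same queue as real events.
        threading.Timer(
            timeout, events.put,
            args=({'type': 'alarm.timeout.end', 'alarm_id': alarm_id},)).start()

    def evaluator():
        while True:
            event = events.get()
            if event is None:
                break
            state = alarm_state.setdefault(event['alarm_id'], 'UNKNOWN')
            if state != 'UNKNOWN':
                continue  # first transition already happened; drop the late one
            if event['type'] == 'XYZ.done':
                alarm_state[event['alarm_id']] = 'ALARM'
            elif event['type'] == 'alarm.timeout.end':
                alarm_state[event['alarm_id']] = 'OK'

With 64 concurrent consumers on the same queue there is no such guarantee,
which is exactly the race above; serializing the transitions per alarm is
what removes it.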

>
> One possible improvement as JD pointed out is to avoid so many spawned
> thread. We can just create one thread inside evaluator, and ask this
> thread handle all timeout requests from evaluator. Is it acceptable for
> event-alarm timeout solution?
>
>
> Best Rgds,
> Edwin

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stackalytics] [deb] [packaging] OpenStack contribution stats skewed by deb-* projects

2016-09-21 Thread Thierry Carrez
Thomas Goirand wrote:
> I don't understand why Stackalytics has it wrong, when the electorate
> script for the PTL election is correct. Here's the script for getting
> commits:
> https://github.com/openstack-infra/system-config/blob/master/tools/owners.py

AFAIK that is because Stackalytics works from git history, while the
infra script works from Gerrit changes (which are more reliable).
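
As a rough illustration of the difference (not the actual Stackalytics or
owners.py logic; the Gerrit query just uses the public /changes/ REST
endpoint):

    import json
    import subprocess
    import urllib.request

    def commits_by_author(repo_path, email):
        # Git history credits whoever authored each commit, so a repo that
        # imports another project's full history (as the deb-* packaging
        # repos do) re-counts all of those commits.
        out = subprocess.check_output(
            ['git', '-C', repo_path, 'log', '--oneline', '--author=%s' % email])
        return len(out.splitlines())

    def merged_changes_by_owner(email):
        # Gerrit only records changes that actually went through review.
        url = ('https://review.openstack.org/changes/'
               '?q=owner:%s+status:merged' % email)
        body = urllib.request.urlopen(url).read().decode('utf-8')
        # Gerrit prefixes JSON responses with ")]}'" to prevent XSSI.
        return len(json.loads(body.split('\n', 1)[1]))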

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >