Re: [openstack-dev] oaktree - a friendly end-user oriented API layer - anybody want to help?

2016-11-16 Thread joehuang
From an encapsulation perspective, the multi-step approach is quite good for
staying consistent with the different versions of image upload.

Another thought about Oaktree as a service: would it be Oaktree's job to
find the proper OpenStack cloud?

Since the location may vary over time for the same API call, the end user may
or may not specify a region or an AZ. So if an end user does not specify a
region when creating an instance, should Oaktree schedule one?

If Oaktree only intends to support libraries in different languages through
gRPC, then would a local encapsulation around Shade be enough?

Best Regards
Chaoyi Huang (joehuang)


From: Monty Taylor [mord...@inaugust.com]
Sent: 16 November 2016 23:58
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] oaktree - a friendly end-user oriented API layer - 
anybody want to help?

On 11/16/2016 09:34 AM, Monty Taylor wrote:
> On 11/15/2016 11:26 PM, joehuang wrote:
>>> Glance Image Uploads and Swift Object Uploads (and downloads). Having
>>> those two data operations go through an API proxy seems inefficient.
>>> However, having them not in the API seems like a bad user experience.
>>> Perhaps if we take advantage of the gRPC streaming protocol support
>>> doing a direct streaming passthrough actually wouldn't be awful. Or
>>> maybe the better approach would be for the gRPC call to return a URL and
>>> token for a user to POST/PUT to directly. Literally no clue.
>>
>> From a bandwidth standpoint, the bandwidth for an API service like Oaktree
>> may not be as wide as that for a data storage service such as Swift. That
>> means that if Oaktree proxies the image upload, the bandwidth of the
>> Oaktree server may soon be exhausted, leaving it unable to serve other API
>> requests.
>
> Yes - this is exactly right and a big part of the problem.
>
>> It's good that in Glance v2 an image can be uploaded to a store and the
>> location then registered with a Glance image, rather than uploading the
>> bits directly to the Glance API.
>
> Unfortunately for us - we need to support glance v1 PUT, glance v2 PUT,
> glance v2 task import and the new and upcoming glance v2 multi-step
> image upload.
>
> I had an idea this morning though - tell me what you think.
>
> The API will be multi-step (similar to the new glance image upload
> process) but with explicit instructions for users. We'll suggest that
> client lib authors who are building friendly libs on top of the oaktree
> client encapsulate the multi-step logic in some manner, and we will
> provide explicit instructions on what the multi-steps are.
>
> API:
>
> rpc CreateImage (ImageSpec) returns (ImageUpload) {}
> rpc UploadImageContent (stream ImageContent) returns (ImageUploadStatus) {}
> rpc FinalizeImageUpload (ImageSpec) returns (Image) {}
>
> rpc GetToken (Location) returns (Token) {}
>
> message ImageSpec {
>   Location location = 1;
>   string name = 3;
>   uint32 min_ram = 4;
>   uint64 min_disk = 5;
>   // etc - more fields
>   repeated bytes image_content = 99;
> };
>
> message ImageUpload {
>   enum UploadScheme {
>     grpc_upload = 0;
>     rest_put = 1;
>     swift = 2;
>   };
>   UploadScheme scheme = 1;
>   string endpoint = 2;

Ooh! What if endpoint was actually a repeated field (array)? That way
for PUT operations it would just be a single entry - but for the swift
case, the SLO segment URLs could be pre-computed by oaktree.

It would make "size" a hard requirement from the API - but I'm fine with
that.
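
Sketched out (one reading of that idea, not merged code), the message above
would become:

message ImageUpload {
  enum UploadScheme {
    grpc_upload = 0;
    rest_put = 1;
    swift = 2;
  };
  UploadScheme scheme = 1;
  repeated string endpoint = 2;  // one URL for PUT, N segment URLs for swift
  map<string, string> headers = 3;
  uint32 segment_size = 4;
};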

Logic below ...

>   map<string, string> headers = 3;
>   uint32 segment_size = 4;
> };
>
> The logic is then:
>
> image_spec = ImageSpec(
>     name='my_image')
> upload = client.CreateImage(image_spec)
> if upload.scheme == ImageUpload.grpc_upload:
>     image = client.UploadImageContent(open('file', 'rb'))
> elif upload.scheme == ImageUpload.rest_put:
>     image = requests.put(
>         upload.endpoint, headers=upload.headers,
>         data=open('file', 'rb'))
> elif upload.scheme == ImageUpload.swift:
>     # upload to upload.endpoint, probably as a
>     # swift SLO splitting the content into
>     # segments of upload.segment_size
    count = 0
    content = open('file', 'rb')
    for endpoint in upload.endpoints:
        content.seek(count * upload.segment_size)
        requests.put(
            endpoint, headers=upload.headers,
            data=content.read(upload.segment_size))
        count += 1

Making that multi-threaded is an obvious improvement of course.

> image = client.FinalizeImageUpload(image_spec)

Then the creation of the manifest object in swift could be handled in
finalize by oaktree. In fact- that way we could collapse the put and
swift cases to just be a "REST" case - since all of the operations are
PUT to a URL provided by oaktree - and for glance PUT segment_size will
just be == size.
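
A minimal sketch of the multi-threaded variant mentioned above (reusing the
hypothetical upload object and its endpoints/segment_size fields from the
pseudocode; only requests and the standard library are assumed):

import concurrent.futures

import requests

def put_segment(path, endpoint, headers, index, segment_size):
    # each worker opens its own file handle so seeks don't interfere
    with open(path, 'rb') as content:
        content.seek(index * segment_size)
        return requests.put(endpoint, headers=headers,
                            data=content.read(segment_size))

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(put_segment, 'file', endpoint, upload.headers,
                           index, upload.segment_size)
               for index, endpoint in enumerate(upload.endpoints)]
    for future in futures:
        future.result()  # re-raise any segment upload failure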

> It's a three-pronged upload approach that a client author has to write -
> but the two different REST interactions should be ea

[openstack-dev] [neutron][networking-vpp] - networking-vpp doesn't support the latest version.

2016-11-16 Thread 阎松明
Hi,

I used devstack to install the latest version (master) to try
networking-vpp.

It failed. The log is here:


Traceback (most recent call last):

  File "/usr/bin/vpp-agent", line 6, in <module>

from networking_vpp.agent.server import main

  File "/home/stack/networking-vpp/networking_vpp/agent/server.py", line 55, in 
<module>

DEV_NAME_PREFIX = n_const.TAP_DEVICE_PREFIX

AttributeError: 'module' object has no attribute 'TAP_DEVICE_PREFIX'

This is because neutron moved TAP_DEVICE_PREFIX to neutron_lib.
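
The usual fix for this (a sketch, assuming networking-vpp is updated to
depend on neutron-lib) is to import the constant from neutron-lib instead of
neutron:

from neutron_lib import constants as n_const

DEV_NAME_PREFIX = n_const.TAP_DEVICE_PREFIX  # 'tap'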

I read the README and found that this is only tested with Mitaka. So does
this project have a plan to support Newton in the future?

Maybe it needs a "Mitaka" version to indicate a stable release for Mitaka,
with "master" for development.




Thanks,

Songming Yan 
















 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [documentation]guide on new project template

2016-11-16 Thread joehuang
Hello,

Tricircle is a new big-tent project; we would like to know which
documentation is necessary for the project and whether there are
documentation templates.

I just found the page for new projects, but have no clue how to organize a
new project's documentation.

Many thanks if someone can help.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla core election policy change - vote

2016-11-16 Thread Martin André
On Wed, Nov 16, 2016 at 7:23 PM, Michał Jastrzębski  wrote:
> Hello,
>
> In light of recent events (kolla-kubernetes becoming a thriving project,
> kolla-ansible being split) I feel we need to change the core reviewer
> election process.
>
> Until now Kolla had a single core team. That is no longer the case, as
> right now we have 3 distinct core teams (kolla, kolla-ansible and
> kolla-kubernetes), although for now with a big overlap in terms of the
> people in them.
>
> For future-proofing, as we already have differences between the teams, I
> propose the following change to the core election process:
>
>
> Core reviewer voting rights are reserved to the core team in question.
> This means that if someone proposes a core reviewer for kolla-kubernetes,
> only kolla-kubernetes cores have voting rights (not kolla or kolla-ansible
> cores).

Makes sense. +1

Martin

> Voting will be open until 30 Nov '16 end of day. Reviewers from all
> core teams in kolla have voting rights on this policy change.
>
> Regards
> Michal
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][trunk-port] OVS tbr bridge wasn't be created by OVS agent

2016-11-16 Thread zhi
Hi, Brian

Thanks for your reply. I tried to build a new environment with devstack and
the code from the master branch, but I met some OVS problems. Let me show
them.

I installed devstack (Ubuntu 14.04, code from the master branch)
successfully, but found that the OVS version was 2.0.2. So I removed all the
OVS packages and followed this document [1] to install OVS version 2.6.0. I
then met an exception when I restarted the OVS agent.

Exception details show that:

2016-11-17 14:57:15.831 WARNING neutron.agent.ovsdb.native.vlog [-] tcp:
127.0.0.1:6640: send error: Connection refused
2016-11-17 14:57:16.793 INFO oslo_rootwrap.client [-] Spawned new rootwrap
daemon process with pid=21465
2016-11-17 14:57:16.837 WARNING neutron.agent.ovsdb.native.vlog [-] tcp:
127.0.0.1:6640: send error: Connection refused
2016-11-17 14:57:16.859 WARNING neutron.agent.ovsdb.native.vlog [-] tcp:
127.0.0.1:6640: send error: Connection refused
2016-11-17 14:57:16.901 WARNING neutron.agent.ovsdb.native.vlog [-] tcp:
127.0.0.1:6640: send error: Connection refused
2016-11-17 14:57:16.983 WARNING neutron.agent.ovsdb.native.vlog [-] tcp:
127.0.0.1:6640: send error: Connection refused
2016-11-17 14:57:17.146 WARNING neutron.agent.ovsdb.native.vlog [-] tcp:
127.0.0.1:6640: send error: Connection refused
2016-11-17 14:57:17.469 WARNING neutron.agent.ovsdb.native.vlog [-] tcp:
127.0.0.1:6640: send error: Connection refused
2016-11-17 14:57:18.113 WARNING neutron.agent.ovsdb.native.vlog [-] tcp:
127.0.0.1:6640: send error: Connection refused
2016-11-17 14:57:18.116 ERROR ryu.lib.hub [-] hub: uncaught exception:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/ryu/lib/hub.py", line 54, in
_launch
return func(*args, **kwargs)
  File
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
line 37, in agent_main_wrapper
ovs_agent.main(bridge_classes)
  File
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
line 2172, in main
agent = OVSNeutronAgent(bridge_classes, cfg.CONF)
  File
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
line 140, in __init__
self.ovs = ovs_lib.BaseOVS()
  File "/opt/stack/neutron/neutron/agent/common/ovs_lib.py", line 107, in
__init__
self.ovsdb = ovsdb.API.get(self)
  File "/opt/stack/neutron/neutron/agent/ovsdb/api.py", line 89, in get
return iface(context)
  File "/opt/stack/neutron/neutron/agent/ovsdb/impl_idl.py", line 291, in
__init__
super(NeutronOvsdbIdl, self).__init__(context)
  File "/opt/stack/neutron/neutron/agent/ovsdb/impl_idl.py", line 199, in
__init__
OvsdbIdl.ovsdb_connection.start()
  File "/opt/stack/neutron/neutron/agent/ovsdb/native/connection.py", line
79, in start
helper = self.get_schema_helper()
  File "/opt/stack/neutron/neutron/agent/ovsdb/native/connection.py", line
105, in get_schema_helper
helper = do_get_schema_helper()
  File "/usr/local/lib/python2.7/dist-packages/tenacity/__init__.py", line
87, in wrapped_f
return r.call(f, *args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/tenacity/__init__.py", line
188, in call
raise RetryError(fut).reraise()
  File "/usr/local/lib/python2.7/dist-packages/tenacity/__init__.py", line
233, in reraise
raise self.last_attempt.result()
  File
"/usr/local/lib/python2.7/dist-packages/concurrent/futures/_base.py", line
398, in result
return self.__get_result()
  File "/usr/local/lib/python2.7/dist-packages/tenacity/__init__.py", line
159, in call
result = fn(*args, **kwargs)
  File "/opt/stack/neutron/neutron/agent/ovsdb/native/connection.py", line
104, in do_get_schema_helper
self.schema_name)
  File "/opt/stack/neutron/neutron/agent/ovsdb/native/idlutils.py", line
112, in get_schema_helper
'err': os.strerror(err)})
Exception: Could not retrieve schema from tcp:127.0.0.1:6640: Connection
refused

I tried "ovs-vsctl show" to check whether OVS is running okay. The result
shows the right info:

root@devstack:~# ovs-vsctl show
a4416a7b-3899-48bc-926f-b02e6554924d
Manager "ptcp:6640:127.0.0.1"
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}

... ...

Could you give me some advice on how to resolve the neutron OVS agent
exception I met? :)


Thanks
Zhi Chang


[1]:
https://github.com/mininet/mininet/wiki/Installing-new-version-of-Open-vSwitch
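
(For reference, the repeated "Connection refused" on tcp:127.0.0.1:6640 means
nothing was accepting connections on the ovsdb-server manager port at that
moment. Although "ovs-vsctl show" below lists Manager "ptcp:6640:127.0.0.1",
that target may need to be re-applied after reinstalling OVS, e.g. with
"sudo ovs-vsctl set-manager ptcp:6640:127.0.0.1" - an educated guess, not a
confirmed fix.)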

2016-11-15 21:30 GMT+08:00 Brian Haley :

> On 11/15/16 5:12 AM, zhi wrote:
>
>> Sorry, I forgot to say my local environment is Liberty. :)
>>
>
> According to the blueprint and reviews this didn't land until Newton,
> maybe some in Mitaka, so I wouldn't expect it to work in Liberty.
>
> -Brian
>
>
> 201

Re: [openstack-dev] [nova] Problem with Quota and servers spawned in groups

2016-11-16 Thread Chris Friesen

On 11/17/2016 12:27 AM, Chris Friesen wrote:

On 11/16/2016 03:55 PM, Sławek Kapłoński wrote:

As I said before, I was testing it and I didn't have instances in Error
state. Can you maybe check it once again on the current master branch?


I don't have a master devstack handy...will try and set one up.  I just tried on
a stable/mitaka devstack--I bumped up the quotas and ran:

nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec --min-count 1
--max-count 100 blah

All the instances went to the "scheduling" state, the first 21 instances
scheduled successfully then one failed the RamFilter.  I ended up with 100
instances all in the "error" state.


I located a running devstack based on master, the nova repo was using commit 
633c817d from Nov 12.


It behaved the same...I jacked up the quotas to give it space, then ran:

nova boot --flavor m1.xlarge --image cirros-0.3.4-x86_64-uec --min-count 1 
--max-count 20 blah



The first nine instances scheduled successfully, the next one failed the 
RamFilter filter, and all the instances went to the "error" state.


This is what we'd expect given that in ComputeTaskManager.build_instances() if 
the call to self._schedule_instances() raises an exception we'll hit the 
"except" clause and loop over all the instances, setting them to the error 
state.  And down in FilterScheduler.select_destinations() we will raise an 
exception if we couldn't schedule all the hosts:


    if len(selected_hosts) < num_instances:
        reason = _('There are not enough hosts available.')
        raise exception.NoValidHost(reason=reason)
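
For contrast, a min-count-aware variant of that check - a hedged sketch of
the behaviour being debated in this thread, not nova's actual code - would
only raise when even the minimum cannot be placed:

    # num_instances here is the requested max_count; min_count is the
    # user's stated minimum (both hypothetical names in this sketch)
    if len(selected_hosts) < min_count:
        reason = _('There are not enough hosts available.')
        raise exception.NoValidHost(reason=reason)
    # otherwise schedule just the len(selected_hosts) instances that fit,
    # and never create the remaining max_count - len(selected_hosts) ones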

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [neutron][lbaasv2][octavia] Not able to create loadbalancer

2016-11-16 Thread Ganpat Agarwal
Thanks a lot Michael.

Recreating the amphora image with Ubuntu Trusty solved the issue for me.
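
(Concretely - a rough recipe per option 1 in Michael's reply below - that
meant exporting DIB_RELEASE=trusty, re-running octavia's
diskimage-create/diskimage-create.sh, and replacing the old amphora image in
glance; Michael's gist automates those steps.)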

We are planning to add Octavia to our Ansible-managed cloud, but could not
find any concrete documentation. We will give it a try.

Regards,
Ganpat



On Wed, Nov 16, 2016 at 10:20 PM, Michael Johnson 
wrote:

> Hi Ganpat,
>
> FYI, we are on freenode IRC: #openstack-lbaas if you would like to
> chat interactively.
>
> So, I see the amp is expecting systemd, which probably means you are
> using a "master" version of diskimage-builder with a stable/newton
> version of Octavia.  On November 2nd, they switched diskimage-builder
> to use a xenial Ubuntu image by default.  This patch just merged on
> Octavia master to support that change:
> https://review.openstack.org/396438
>
> I think you have two options:
> 1. Set the environment variable DIB_RELEASE=trusty and recreate the
> amphora image[1].
> 2. Install the stable/newton version of diskimage-builder and recreate
> the amphora image.
>
> For option one I have pasted a script I use to rebuild the image with
> Ubuntu trusty.
> Note, this script will delete your current image in glance and expects
> the octavia repository to be located in /opt/stack/octavia, so please
> update it as needed.
>
> Michael
>
> [1] https://gist.github.com/michjohn/a7cd582fc19e0b4bc894eea6249829f9
>
> On Wed, Nov 16, 2016 at 8:25 AM, Ganpat Agarwal
>  wrote:
> > Here are the steps i followed
> >
> > 1. Created a LB
> >
> > stack@devstack-openstack:~/devstack$ neutron lbaas-loadbalancer-list
> > +--+--+-
> +-+--+
> > | id   | name | vip_address |
> > provisioning_status | provider |
> > +--+--+-
> +-+--+
> > | 1ffcfe97-99a3-47c1-9df1-63bac71d9e04 | lb1  | 10.0.0.10   |
> PENDING_CREATE
> > | octavia  |
> > +--+--+-
> +-+--+
> >
> > 2. List amphora instance
> > stack@devstack-openstack:~/devstack$ nova list
> > +--+
> --+++---
> --+-
> -+
> > | ID   | Name
> > | Status | Task State | Power State | Networks
> > |
> > +--+
> --+++---
> --+-
> -+
> > | 89dc06b7-00a9-456f-abc9-50f14e1bc78b |
> > amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd | ACTIVE | -  |
> Running
> > | lb-mgmt-net=192.168.0.6; private=10.0.0.11,
> > fdbc:aa5f:a6ae:0:f816:3eff:fe0b:86d7 |
> > +--+
> --+++---
> --+-
> -+
> >
> > 3. able to ssh on lb-mgmt-ip , 192.168.0.6
> >
> > Network config
> >
> > ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ ip a
> > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
> > default qlen 1
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > inet 127.0.0.1/8 scope host lo
> >valid_lft forever preferred_lft forever
> > inet6 ::1/128 scope host
> >valid_lft forever preferred_lft forever
> > 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state
> > UP group default qlen 1000
> > link/ether fa:16:3e:02:a7:50 brd ff:ff:ff:ff:ff:ff
> > inet 192.168.0.6/24 brd 192.168.0.255 scope global ens3
> >valid_lft forever preferred_lft forever
> > inet6 fe80::f816:3eff:fe02:a750/64 scope link
> >valid_lft forever preferred_lft forever
> > 3: ens6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group
> > default qlen 1000
> >
> >
> > 4. No amphora agent running
> >
> > ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ sudo service
> > amphora-agent status
> > ● amphora-agent.service
> >Loaded: not-found (Reason: No such file or directory)
> >Active: inactive (dead)
> >
> > ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ sudo service
> > amphora-agent start
> > Failed to start amphora-agent.service: Unit amphora-agent.service not
> found.
> >
> >
> > How to proceed from here?
> >
> >
> > On Wed, Nov 16, 2016 at 6:04 PM, 洪 赵  wrote:
> >>
> >> After the amphora vm was created, the Octavia worker tried to plug VIP
> to
> >> the amphora  vm, but failed. It could not connect to the amphora agent.
> You
> >> may ssh to the vm and check if the networks and ip addresses are
> correctly
> >> set.
> >>
> >>
> >>
> >> Good luck.
> >>
> >> -hzhao
> >>
> >>
> >>
> >> From: Ganpat Agarwal
> >> Sent: 16 November 2016 14:40
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: [

Re: [openstack-dev] [nova] Problem with Quota and servers spawned in groups

2016-11-16 Thread Chris Friesen

On 11/16/2016 03:55 PM, Sławek Kapłoński wrote:

As I said before, I was testing it and I didn't have instances in Error
state. Can you maybe check it once again on the current master branch?


I don't have a master devstack handy...will try and set one up.  I just tried on 
a stable/mitaka devstack--I bumped up the quotas and ran:


nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec --min-count 1 
--max-count 100 blah


All the instances went to the "scheduling" state, the first 21 instances 
scheduled successfully then one failed the RamFilter.  I ended up with 100 
instances all in the "error" state.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] Nominating Ya Zhou for daisycloud-core reviewer

2016-11-16 Thread hu . zhijiang
Daisycloud-core team,

I'd like to nominate Ya Zhou (IRC name 'zhouya') as a daisycloud-core core
reviewer.

Ya has worked on daisycloud-core since the beginning of the project and has
made significant contributions in the last three months, including adding
the Kolla backend and implementing the extensible framework. The most
important work is the extensible framework, which gives daisycloud-core a
more open design for adopting new technologies such as bifrost, HWM, and
other third-party plugins.

I have had a nice talk with him offline; he expressed a willingness to
contribute more to the daisycloud-core project, and I believe he will make a
great addition to the core review team.

So please put your +1 or -1 here, whether you are a core member or a
contributor. I will collect the results in seven days and use the
majority-voting policy to determine the final result.

Thank you so much!

B.R.,
Zhijiang


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday Nov 17th at 9:00 UTC

2016-11-16 Thread Ghanshyam Mann
Hello everyone,



This is a reminder that the weekly OpenStack QA team IRC meeting will be on
Thursday, Nov 17th at 9:00 UTC in the #openstack-meeting channel.



The agenda for the meeting can be found here:

https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_November_17th_2016_.280900_UTC.29

Anyone is welcome to add an item to the agenda.



To help people figure out what time 9:00 UTC is in other timezones the next 
meeting will be at:



04:00 EST

18:00 JST

18:30 ACST

10:00 CET

03:00 CST

01:00 PST


Thanks & Regards,
gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Craton] NFV planned host maintenance

2016-11-16 Thread Jim Baker
On Wed, Nov 16, 2016 at 10:54 AM, Sulochan Acharya 
wrote:

> Hi,
>
> On Wed, Nov 16, 2016 at 2:46 PM, Ian Cordasco 
> wrote:
>
>> -Original Message-
>> From: Juvonen, Tomi (Nokia - FI/Espoo) 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>> 
>> Date: November 11, 2016 at 02:27:19
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject:  [openstack-dev] [Craton] NFV planned host maintenance
>>
>> > I have been looking over the past two OpenStack summits at the changes
>> > needed to fulfill the OPNFV Doctor use case for planned host maintenance,
>> > and at the same time trying to find other Ops requirements to satisfy
>> > different needs. I was just about to start a new project (Fenix), but
>> > looking at Craton, it seems a good alternative and it was proposed to me
>> > at the Barcelona meetup. Here are some ideas; I would like a comment on
>> > whether Craton could be used here.
>>
>> Hi Tomi,
>>
>> Thanks for your interest in craton! I'm replying in-line, but please
>> come and join us in #craton on Freenode as well!
>>
>> > OPNFV Doctor / NFV requirements are described here:
>> > http://artifacts.opnfv.org/doctor/docs/requirements/02-use_
>> cases.html#nvfi-maintenance
>> > http://artifacts.opnfv.org/doctor/docs/requirements/03-archi
>> tecture.html#nfvi-maintenance
>> > http://artifacts.opnfv.org/doctor/docs/requirements/05-imple
>> mentation.html#nfvi-maintenance
>> >
>> > My rough thoughts about what would be initially needed (as short as I
>> can):
>> >
>> > - There should be a database of all hosts matching what is known by
>> > Nova.
>>
>> So I think this might be the first problem that you'll run into with
>> Craton.
>>
>> Craton is designed to specifically manage the physical devices in a
>> data centre. At the moment, it only considers the hosts that you'd run
>> Nova on, not the Virtual Machines that Nova is managing on the Compute
>> hosts.
>>
>
Craton's inventory supports the following modeling:

   1. devices, which may have a parent (so a strict tree); we map this
   against such entities as top-of-rack switches; hosts; and containers
   2. logical relationships for these devices, including project, region,
   cell (optional); and arbitrary labels (tags)
   3. key/value variables on most entities, including devices. Variables
   support *resolution* - an override mechanism where values are looked up
   against some chain (for device, that's the device tree, cell, region, in
   that order). Values are typed JSON in the underlying (and default)
   SQLAlchemy model we use.

Craton users synchronize the device inventory from other source of truth
systems, such as an asset database; or perhaps manually. Meanwhile,
variables can reflect desired state configuration (so like Ansible); as
well as captured information.
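
To make the resolution idea concrete, here is a minimal sketch (my own
illustration, with made-up variable names - not Craton code) using Python's
collections.ChainMap, which looks values up along an override chain exactly
like the device -> cell -> region order described above:

import collections

device_vars = {'nova_cpu_allocation_ratio': 4.0}
cell_vars = {'nova_cpu_allocation_ratio': 16.0, 'dns_server': '10.0.0.2'}
region_vars = {'dns_server': '10.0.0.1', 'ntp_server': 'ntp.example.com'}

# most specific scope first; the first mapping that has the key wins
resolved = collections.ChainMap(device_vars, cell_vars, region_vars)
print(resolved['nova_cpu_allocation_ratio'])  # 4.0 - device overrides cell
print(resolved['dns_server'])                 # 10.0.0.2 - cell beats region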


>> It's plausible that we could add the ability to track virtual
>> machines, but Craton is meant to primarily work underneath the cloud.
>> I think this might be changing since Craton is looking forward to
>> helping manage a multi-cloud environment, so it's possible this won't
>> be an issue for long.
>>
>
Craton's device-focused model, although oriented to hardware, is rather
arbitrary. Recently we have been also looking at what is needed to support
a multi-tenant, multi-cloud inventory, and it seems quite feasible to
manage in Craton's inventory a subset of the resources provided by AWS or
Azure.

Does this mean VMs and similar resources? Maybe. However, our thinking has
been that for relatively fast-changing and numerous resources we should link
to the source of truth, in this case Nova. In particular, we have a very
flexible model for variables that could be readily extended to support what
we call virtualized variables - dictionary mappings that are implemented by
looking up on a remote service. See
https://bugs.launchpad.net/craton/+bug/1606882 - so long as it implements
collections.abc.Mapping, we can plug into how variables are resolved.
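
As a rough illustration of that plug-in point (all names below are
hypothetical - the bug report has the real discussion), anything implementing
collections.abc.Mapping could back a variable scope with a remote lookup:

import collections.abc

class RemoteVars(collections.abc.Mapping):
    """Variables backed by a remote source of truth (sketch only)."""

    def __init__(self, client, device_id):
        self._client = client        # e.g. a hypothetical Nova API wrapper
        self._device_id = device_id

    def __getitem__(self, key):
        # look the value up remotely instead of in Craton's database
        return self._client.get_variable(self._device_id, key)

    def __iter__(self):
        return iter(self._client.list_variable_names(self._device_id))

    def __len__(self):
        return len(self._client.list_variable_names(self._device_id))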


>
> So I think there are 2 parts to this. 1. What Ian mentioned is that Craton
> currently does not keep an inventory of VMs running inside an openstack
> deployment. Nova (and Horizon for UI) is already doing this for the users.
> However, we do run jobs or workloads on the VMs - like live-migrating VMs
> from a host that might undergo maintenance, or a host that monitoring
> flagged as bad, etc. This is done using `plugins` that talk to nova etc. So
> I think some of what you are looking for falls into that, perhaps? This can
> be based on some notification the Craton engine receives from another
> application (monitoring, for example).
>

>
>>
>> > - There should be an API for the Cloud Admin to set a planned maintenance
>> > window for a host (maybe an aggregate or group of hosts), set when in
>> > maintenance and unset when finished. There might be some optional
>> > parameters, like a target host where to move things currently running on
>> > the affected host. It could also be used for retirement of a host.
>>
>> This sounds like 

[openstack-dev] [ironic][ironic-python-agent]code-review about this commit of numa node

2016-11-16 Thread zhou . ya
Hi ironic team:

I'm sending this email to ask about
https://review.openstack.org/#/c/369245/ .

This commit is about 
https://bugs.launchpad.net/ironic-python-agent/+bug/1622940.

Jenkins has +1'd this commit. I hope you could spare some time to give me
some advice on it.

And Karthik S also needs NUMA node information in this RFE:
https://bugs.launchpad.net/ironic-python-agent/+bug/1635253.

If this commit looks good to you, could you please approve the workflow?
Your review would be a great help.

Thank you very much.

Looking forward to your response.
Regards
zhouya

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla core election policy change - vote

2016-11-16 Thread Swapnil Kulkarni
On Wed, Nov 16, 2016 at 11:53 PM, Michał Jastrzębski  wrote:
> Hello,
>
> In light of recent events (kolla-kubernetes becoming a thriving project,
> kolla-ansible being split) I feel we need to change the core reviewer
> election process.
>
> Until now Kolla had a single core team. That is no longer the case, as
> right now we have 3 distinct core teams (kolla, kolla-ansible and
> kolla-kubernetes), although for now with a big overlap in terms of the
> people in them.
>
> For future-proofing, as we already have differences between the teams, I
> propose the following change to the core election process:
>
>
> Core reviewer voting rights are reserved to the core team in question.
> This means that if someone proposes a core reviewer for kolla-kubernetes,
> only kolla-kubernetes cores have voting rights (not kolla or kolla-ansible
> cores).
>
>
> Voting will be open until 30 Nov '16 end of day. Reviewers from all
> core teams in kolla have voting rights on this policy change.
>
> Regards
> Michal
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][osc] Openstack client, Openstack SDK and neutronclient consistency

2016-11-16 Thread Dean Troyer
> Excerpts from Sławek Kapłoński's message of 2016-11-16 22:36:41 +0100:
>> Hello,
>> So I want to ask all of you how in your opinion it should be solved.
>> Currently there is an inconsistency between the CLI clients and the
>> API/Horizon/OpenStack SDK (I checked that it is possible to create a
>> resource without a name via the SDK).

There are a number of intentional inconsistencies between OSC and the
various REST APIs, precisely because many of the the APIs themselves
are very inconsistent not only between projects but even within each
project.  When those APIs are directly reflected in the CLI, like
happens with many of the project-specific CLIs, the users suffer due
to the inconsistencies.


On Wed, Nov 16, 2016 at 4:03 PM, Doug Hellmann  wrote:
> The OpenStackClient team made the decision to always require names
> for newly created resources. Perhaps Dean or Steve can fill in more
> of the background, but my understanding is that this is a design
> decision for the user interface implemented by OSC, and is not
> considered a bug.

Doug is correct here, we (OpenStackClient) made a specific decision in
the command structure to always use the resource name as the
positional argument of a create command.  In this case I believe the
consistency is worth what pain it may cause to invent a name for a new
policy.

We have done UX surveys with OSC at the last two summits and the
number one favorable comment from the users (varying from cloud
consumers to operators to OpenStack developers) has been regarding how
much they appreciate the command consistency.  This is our biggest
feature.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Congress] Magnum_driver

2016-11-16 Thread Tim Hinrichs
Hi Ruben,

The fieldnames you care about are the fields as they show up in the JSON
that gets returned from the magnum-client methods inside the datasource
driver, e.g. from these methods...

self.magnum.cluster_template.list()
self.magnum.cluster.list()
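
For illustration, a translator entry keyed to those JSON fields typically
looks roughly like this in Congress datasource drivers (a sketch following
that convention, not Ruben's actual code):

value_trans = {'translation-type': 'VALUE'}

clusters_translator = {
    'translation-type': 'HDICT',
    'table-name': 'clusters',
    'selector-type': 'DOT_SELECTOR',
    'field-translators': (
        {'fieldname': 'uuid', 'translator': value_trans},
        {'fieldname': 'name', 'translator': value_trans},
        {'fieldname': 'status', 'translator': value_trans})}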

Several ways to check that you have the right fields..
1) Suppose your magnum driver is running inside of Congress, and you've
created an instance of it.
$ openstack congress datasource create magnum magnum ...
$ openstack congress datasource list

You know you have the right fields in the driver if you're getting data in
the magnum datasource instance.
$ openstack congress datasource row list magnum cluster
$ openstack congress datasource row list magnum cluster_template

2) If that doesn't work, check what JSON data magnum is returning on the
CLI by adding the --debug parameter to the magnum CLI commands.
[I haven't used magnum's client, but it looks to be using the standard
openstack client, so should support --debug.]

$ magnum cluster-template-list --debug

You'll need to wade through the output to find the JSON that gets returned
from the server.

3) If you think you have the right field names in the congress translators,
but you're still not seeing data in the magnum datasource instance, you'll
probably want to look at the congress logs. It looks like you have the
right log.DEBUG statements in there.  One trick is to disable all the
datasources in congress other than your own to minimize the logs you need
to look through.  Another is to change your log.DEBUG to log.WARNING
temporarily so your messages show up more clearly in the log.

You can look at logs from the command line with:
$ less -R /opt/stack/logs/congress.log

Or go into 'screen -x', go to the Congress window, and look at the output
in real time.  You'll want a tutorial on screen if you haven't used it.


Tim



On Wed, Nov 16, 2016 at 9:37 AM Ruben 
wrote:

> Hi everybody,
> first of all: Tim thanks for your help.
>
> I've read the code in
> python-magnumclient/magnumclient/v1/cluster_templates_shell.py and in
> python-magnumclient/magnumclient/v1/clusters_shell.py, so I've modified the
> translators in the magnum_driver according to the
> _show_cluster_template(cluster_template) and _show_cluster(cluster) methods
> shown in these files.
> Are these the methods to be taken into account for the translation?
> Anyway I don't know if I made mistakes in the translation..
>
> I have some doubts, because if I run the CLI command 'magnum
> cluster-template-list' I get:
>
> +--+--+
> | uuid | name |
> +--+--+
> | 8d25a1ed-faa6-4305-a6a1-6559708c805b | k8s-cluster-template |
> +--+--+
>
> and if I run the CLI command 'magnum cluster-template-show
> k8s-cluster-template' I get:
>
> +---+--+
> | Property  | Value|
> +---+--+
> | insecure_registry | -|
> | labels| {}   |
> | updated_at| -|
> | floating_ip_enabled   | True |
> | fixed_subnet  | -|
> | master_flavor_id  | my   |
> | uuid  | 8d25a1ed-faa6-4305-a6a1-6559708c805b |
> | no_proxy  | -|
> | https_proxy   | -|
> | tls_disabled  | False|
> | keypair_id| testkey  |
> | public| False|
> | http_proxy| -|
> | docker_volume_size| 7|
> | server_type   | vm   |
> | external_network_id   | public   |
> | cluster_distro| fedora-atomic|
> | image_id  | fedora-atomic-latest |
> | volume_driver | -|
> | registry_enabled  | False|
> | docker_storage_driver | devicemapper |
> | apiserver_port| -|
> | name  | k8s-cluster-template |
> | created_at| 2016-11-11T11:38:25+00:00|
> | network_driver| flannel  |
> | fixed_network | -|
> | coe   | kubernetes   |
> | flavor_id | m1.tiny   

Re: [openstack-dev] [charms] Deploy Keystone To Public Cloud

2016-11-16 Thread Marco Ceppi
This should work - a charm school at ODS did exactly this. I'll give it a
try tomorrow to see if I can help.

Marco

On Thu, Nov 17, 2016, 12:14 AM James Beedy  wrote:

> I'm having an issue getting a response back (mostly timeouts occur) when
> trying to talk to keystone deployed to AWS using private (on vpn) or public
> ip address. I've had luck with setting os-*-hostname configs, and ssh'ing
> in and running the keystone/openstack client locally from the keystone
> instance after adding the private ip <-> fqdn mapping in
> keystone:/etc/hosts, but can't seem to come up with any combination that
> lets me talk to the keystone api remotely. Just to be clear, I'm only
> deploying keystone and percona-cluster charms to AWS, not all of Openstack.
>
> If not possible using the ec2 provider, is this a possibility with any
> public providers?
>
> Thanks
> --
> Juju mailing list
> j...@lists.ubuntu.com
> Modify settings or unsubscribe at:
> https://lists.ubuntu.com/mailman/listinfo/juju
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [charms] Deploy Keystone To Public Cloud

2016-11-16 Thread James Beedy
I'm having an issue getting a response back (mostly timeouts occur) when
trying to talk to keystone deployed to AWS using private (on vpn) or public
ip address. I've had luck with setting os-*-hostname configs, and ssh'ing
in and running the keystone/openstack client locally from the keystone
instance after adding the private ip <-> fqdn mapping in
keystone:/etc/hosts, but can't seem to come up with any combination that
lets me talk to the keystone api remotely. Just to be clear, I'm only
deploying keystone and percona-cluster charms to AWS, not all of Openstack.
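
(One hedged guess for the timeouts: the AWS security group for the instance
may not expose keystone's ports - 5000 and 35357 - so they would need to be
opened, e.g. via "juju expose keystone", before the API is reachable
remotely.)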

If not possible using the ec2 provider, is this a possibility with any
public providers?

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][osc] Openstack client, Openstack SDK and neutronclient consistency

2016-11-16 Thread Monty Taylor
On 11/16/2016 04:03 PM, Doug Hellmann wrote:
> Excerpts from Sławek Kapłoński's message of 2016-11-16 22:36:41 +0100:
>> Hello,
>>
>> Few days ago someone reported bug [1] and I started checking it. I found
>> that when I'm trying to create a QoS policy with neutronclient or OSC, the
>> name parameter is necessary.
>> But this parameter is not necessary in the Neutron API - I can create a
>> policy without a name when calling the API directly (e.g. with curl).
>> For me it is a bug on the neutronclient and openstack client side, and it
>> should IMHO be fixed to allow creating a QoS policy without a name (so
>> with no parameter given at all: "neutron qos-policy-create").
>>
>> But today at the QoS IRC meeting we were talking about it, and reedip
>> pointed us to another bug [2] (and his patch [3]) which concerns the same
>> problem but with a different parameter.
>> And in that patchset amotoki said that it can't be fixed in a way that
>> allows creating a resource without any parameter given in the
>> openstack/neutron client.
>>
>> So I want to ask all of you how in your opinion it should be solved.
>> Currently there is an inconsistency between the CLI clients and the
>> API/Horizon/OpenStack SDK (I checked that it is possible to create a
>> resource without a name via the SDK).
>> I checked it for QoS policy (and network in the SDK) but I think it
>> might be a more generic issue.
>>
>> [1] https://bugs.launchpad.net/neutron/+bug/1640767
>> [2] https://launchpad.net/bugs/1520055
>> [3] https://review.openstack.org/#/c/250587/
>>
> 
> The OpenStackClient team made the decision to always require names
> for newly created resources. Perhaps Dean or Steve can fill in more
> of the background, but my understanding is that this is a design
> decision for the user interface implemented by OSC, and is not
> considered a bug.

I'll say that in shade we made the same decision, even though we don't
have QoS support. We also mostly enforce uniqueness just about
everywhere in names even if the REST API doesn't - for similar reasons.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][networking-sfc] We will have networking-sfc meeting at 1700 UTC on 11/17/2016

2016-11-16 Thread Cathy Zhang
Hi Everyone,

We will cover the following topics:

1.   Newton release which has been completed

2.   Big Stadium assessment walk through (I think we have got all items 
taken care of with a few pending for Neutron driver team approval)

3.   Ocata Release time line

4.   Ocata release preparation: New Features planned for Ocata release

5.   Review patches and bug scrub

Feel free to add the topic you would like to discuss. Please double check your 
local time for 1700 UTC. For pacific time zone, it is 9am instead of 10am now.

Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla core election policy change - vote

2016-11-16 Thread Mauricio Lima
+1

2016-11-16 16:55 GMT-03:00 Ryan Hallisey :

> +1
>
> -Ryan
>
> - Original Message -
> From: "Michał Jastrzębski" 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Sent: Wednesday, November 16, 2016 1:23:27 PM
> Subject: [openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla
> core election policy change - vote
>
> Hello,
>
> In light of recent events (kolla-kubernetes becoming a thriving project,
> kolla-ansible being split) I feel we need to change the core reviewer
> election process.
>
> Until now Kolla had a single core team. That is no longer the case, as
> right now we have 3 distinct core teams (kolla, kolla-ansible and
> kolla-kubernetes), although for now with a big overlap in terms of the
> people in them.
>
> For future-proofing, as we already have differences between the teams, I
> propose the following change to the core election process:
>
>
> Core reviewer voting rights are reserved to the core team in question.
> This means that if someone proposes a core reviewer for kolla-kubernetes,
> only kolla-kubernetes cores have voting rights (not kolla or kolla-ansible
> cores).
>
>
> Voting will be open until 30 Nov '16 end of day. Reviewers from all
> core teams in kolla have voting rights on this policy change.
>
> Regards
> Michal
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][osc] Openstack client, Openstack SDK and neutronclient consistency

2016-11-16 Thread Doug Hellmann
Excerpts from Sławek Kapłoński's message of 2016-11-16 22:36:41 +0100:
> Hello,
> 
> Few days ago someone reported bug [1] and I started checking it. I found
> that when I'm trying to create a QoS policy with neutronclient or OSC, the
> name parameter is necessary.
> But this parameter is not necessary in the Neutron API - I can create a
> policy without a name when calling the API directly (e.g. with curl).
> For me it is a bug on the neutronclient and openstack client side, and it
> should IMHO be fixed to allow creating a QoS policy without a name (so
> with no parameter given at all: "neutron qos-policy-create").
> 
> But today at the QoS IRC meeting we were talking about it, and reedip
> pointed us to another bug [2] (and his patch [3]) which concerns the same
> problem but with a different parameter.
> And in that patchset amotoki said that it can't be fixed in a way that
> allows creating a resource without any parameter given in the
> openstack/neutron client.
> 
> So I want to ask all of you how in your opinion it should be solved.
> Currently there is an inconsistency between the CLI clients and the
> API/Horizon/OpenStack SDK (I checked that it is possible to create a
> resource without a name via the SDK).
> I checked it for QoS policy (and network in the SDK) but I think it
> might be a more generic issue.
> 
> [1] https://bugs.launchpad.net/neutron/+bug/1640767
> [2] https://launchpad.net/bugs/1520055
> [3] https://review.openstack.org/#/c/250587/
> 

The OpenStackClient team made the decision to always require names
for newly created resources. Perhaps Dean or Steve can fill in more
of the background, but my understanding is that this is a design
decision for the user interface implemented by OSC, and is not
considered a bug.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][fwaas] neutron-fwaas meeting time change

2016-11-16 Thread Sridar Kandaswamy (skandasw)
Thanks Nate. Based on further conversations and with the time change I
think what we intended was:

14:00 UTC

Tokyo: 11:00pm
Bengaluru: 07:30pm
US (EST):  09:00am
US (PST):  06:00am

I am fine with Mon or Tue.

Thanks

Sridar

On 11/14/16, 12:07 PM, "Nate Johnston"  wrote:

>On Sat, Oct 29, 2016 at 08:41:42AM +, Nate Johnston wrote:
>> Hello neutron-fwaas team,
>> 
>> With the expiration of Daylight Savings Time imminent in the USA, I
>>would like
>> to propose that we change the meeting time for the neutron-fwaas IRC
>>meeting
>> from the current value of 0400 UTC.  Here are a few proposals:
>> 
>> First proposed time: 1300 UTC
>> Tokyo (UTC+0900): 10:00 PM
>> Bengaluru (UTC+0530): 6:30 PM
>> US Eastern (UTC-0400): 9:00 AM
>> US Pacific (UTC-0700): 6:00 AM
>
>This was the option agreed to in the team meeting[1], and in the team
>meeting a
>preference for Thursday was expressed.  That is however a popular
>timeslot, and
>so there are no available channels at that time on Wednesday or Thursday.
> 
>
>What would be the best day for a meeting: Monday or Tuesday?
>
>Let's keep to the old meeting time until the meeting time change[2]
>merges.
>
>Thanks!
>
>--N.
>
>[1] 
>http://eavesdrop.openstack.org/meetings/fwaas/2016/fwaas.2016-11-02-04.00.
>log.html#l-105
>[2] https://review.openstack.org/#/c/393793/
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] review priorities ocata summit session recap

2016-11-16 Thread Matt Riedemann
The final design summit session at the Ocata summit was, as usual, the 
review priorities session. The full etherpad is here:


https://etherpad.openstack.org/p/ocata-nova-summit-priorities

Given the short cycle and restricted core reviewer bandwidth we're 
really only making resource providers and cells v2 review priorities for 
the release, along with some other things those depend on.


Resource Providers
--

There are three efforts here which can be worked concurrently, but the 
priority order is:


1. Making nova-scheduler use the placement service. Sylvain Bauza is 
working on this.


2. Handling shared storage/IP pools. Chris Dent is working on the server 
side aggregates changes for this and Jay Pipes is working on the client 
side pieces in the resource tracker.


3. Custom resource classes. Jay Pipes is leading the development on this 
and already has most, if not all, of the code up for review, some if it 
already merged.


Cells v2


1. Scheduler interaction. Dan Smith is leading the development work on 
this, lots of code is up for review and some of the series is already 
merged.


2. Adding the multi-cell simple python merge sort / filtering for 
listing instances. Dan Smith signed up to work on this.


3. Quotas. Melanie Witt is owning this, and as pointed out in an earlier 
recap on this specific session we might change how we do quotas for 
cells v2 in the API.


4. CI testing. I'm signed up to work on this. Some of the work has 
already gone into switching Ocata CI jobs (except cellsv1) to Neutron by 
default, making nova-network fail to start unless you're in a cells v1 
environment (dansmith wrote that patch), and then enabling cells v2 in 
master branch jobs by default (there are grenade changes needed for 
this). Eventually I'll be working on multinode (multicell) support here too.




There are two other peripheral efforts that are a review priority 
because of cells v2 being dependent on them.


1. Restricting the server list filters/sort parameters. Kevin Zheng and 
Alex Xu are working on this.


2. Moving neutron port creation to conductor. This is a dependency for 
supporting routed networks which is a dependency for using neutron with 
multiple cells. John Garbutt already started working on this in Newton 
so this is just a continuation of that work in Ocata.




There are a few non-priority but still notable mentions of things we're 
going to keep working on throughout the release:


1. Getting an agreement on the Cinder API rework effort which is needed 
to eventually support volume multiattach. John Garbutt and myself are 
working that from the Nova side, and John Griffith and Ildiko Vancsa 
from the Cinder side.


2. Continuing to unwind the CI dependencies on nova-network. I'm working 
on those efforts.


3. Get agreement on the discoverable capabilities API WG spec so we can 
work on implementing that in Pike.


4. Get the gate-tempest-dsvm-security job created which tests with 
Barbican as the key manager and it would verify signed images. This is a 
foundation on which we need to later build in new security-related 
features to Nova.




Finally, because of the short schedule, we agreed that there is just a 
single feature freeze which is the same as the rest of OpenStack for the 
Ocata release, which is January 26th. So there is no non-priority 
feature freeze for Ocata.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Problem with Quota and servers spawned in groups

2016-11-16 Thread Sławek Kapłoński

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Wed, 16 Nov 2016, Chris Friesen wrote:

> On 11/15/2016 06:50 PM, melanie witt wrote:
> > On Tue, 15 Nov 2016 18:10:40 -0600, Chris Friesen wrote:
> > > I'm in favor of your change, since the existing behaviour doesn't make
> > > sense.
> > > 
> > > But at some point I guess consistency trumps correctness, and if a new
> > > microversion is necessary to mark the new behaviour then a spec is
> > > required, and at that point we might want to fix the other issues with
> > > multi-boot at the same time.  (Like
> > > https://bugs.launchpad.net/nova/+bug/1458122 )
> > 
> > I think what Sławek is saying is that the quota behavior for multi-create
> > already changed at some point in the past, without a spec. He did 
> > experiments
> > recently that show a multi-create request succeeds as long as the min_count 
> > is
> > satisfied when there isn't enough quota for max_count. This is different 
> > than
> > the behavior at the time you opened the bug. So it seems the horse has left 
> > the
> > barn on this one.
> 
> The bug I reported is not related to quota, but rather the ability to
> schedule the instances.
> 
> The issue in the bug report is that if I ask to boot a min of X and a max of
> Z instances, and only Y instances can be scheduled (where X <= Y < Z), the
> request will fail and all the instances will be put into an ERROR state.

As I said before, I was testing it and I didn't have instances in Error
state. Can you maybe check it once again on the current master branch?

> 
> Arguably what *should* happen is that Y instances get created.  Also I think
> it would make more sense if the remaining  Z-Y instances are just never
> created rather than being created in an ERROR state.
> 
> Chris
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] [glance] ESL question 'shared' vs 'shareable'

2016-11-16 Thread Ian Y. Choi

Hello devs,

As a Korean translator, I also quite agree with the idea.
Having images in the "shared" state that are actually not shared yet would
be awkward, and the word "shareable" would cover both cases: images that
can be shared but are not shared yet (the addition of "image members" is
still needed), and images that are already shared.


Hello translators,

I am copying openstack-i...@lists.openstack.org on this.
Would you please think about this and share your thoughts?
(Or please attend the next i18n IRC meeting in about 9-10 hours and
tell me.)



With many thanks,

/Ian


Sean McGinnis wrote on 11/17/2016 1:44 AM:

On Wed, Nov 16, 2016 at 04:04:52PM +, Brian Rosmaita wrote:

Hello Translators,

We're having a discussion about a new image "visibility" value for Glance,
and before we go too far, it would be helpful to know whether what we're
worried about is going to matter for ESL people.

Here's the situation: Since the Diablo release, Glance end users have had
the ability to share images with other cloud users by adding "members" to
the image.  We call those "shared images".  Previously, we haven't had a
special "visibility" keyword for these, but we are introducing one now
[0].  Here's the problem introduced by that change:

(1) Members can only be added to an image if its current visibility value
allows for it. We're going to make this an explicit visibility state that
we are proposing to call 'shared'.

(2) An image with visibility == 'shared', however, isn't actually
accessible to other users unless they are added as "image members".  So
it's going to be possible for a user to have some images with visibility
== 'shared', but they aren't *really* shared with anyone yet.

(3) For reasons outlined on [0], we're proposing to make this new
visibility the default value in Glance.  This will enable the current
sharing workflow to work in a backward-compatible way.  But some people
are worried that users will panic when they see that their new images have
visibility == 'shared' (even though no other users have access to such
images until "image members" are added).

(4) To address this, we're thinking that maybe the identifier for this
kind of image visibility should be 'shareable'.

Finally, here's my question.  For an ESL person looking at these two
identifiers (which, as identifiers, won't be translated):
* shared
* shareable

Are the above so similar that the nuances of the discussion above would be
lost anyway?  In other words, are we just bikeshedding here, or is there a
clear distinction?  What I mean is, is the panic described above likely or
unlikely to happen for an ESL person?

thanks,
brian

Good question. I think technically it would be shareable, which would
mean that it is then able to be shared.

Realistically though, in my opinion, calling it shared to denote that it
_can be_ shared is probably intuitive enough that there wouldn't be any
confusion about the naming.

My 2 cents.


[0] https://review.openstack.org/#/c/396919/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Openstack client, Openstack SDK and neutronclient consistency

2016-11-16 Thread Sławek Kapłoński
Hello,

A few days ago someone reported bug [1] and I started checking it. I found
that when I'm trying to create a QoS policy with neutronclient or OSC, the
name parameter is necessary.
But this parameter is not necessary in the Neutron API - I can create a
policy without a name when calling the API directly (e.g. with curl).
For me it is a bug on the neutronclient and openstack client side, and it
should IMHO be fixed to allow creating a QoS policy without a name (so
in fact without any parameter given: "neutron qos-policy-create").
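
For illustration, the direct call looks roughly like this and succeeds with
an empty body (a sketch only - the endpoint, port and token are
placeholders, and the exact body key should be checked against the QoS API
reference):

  curl -X POST http://controller:9696/v2.0/qos/policies \
       -H "X-Auth-Token: $TOKEN" \
       -H "Content-Type: application/json" \
       -d '{"policy": {}}'

while "neutron qos-policy-create" refuses to run without a name argument.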

But today at the QoS IRC meeting we were talking about it, and reedip pointed
us to another bug [2] (and his patch [3]) which concerns the same problem with a
different parameter.
And in this patchset amotoki said that it can't be fixed in a way that
allows creating a resource without any parameter given in the
openstack/neutron client.

So I want to ask all of you how, in your opinion, it should be solved.
Currently there is an inconsistency between the CLI clients and the
API/Horizon/OpenStack SDK (I checked that it is possible to create a
resource without a name via the SDK).
I checked it for QoS policy (and network in the SDK) but I think that it
might be a more generic issue.

[1] https://bugs.launchpad.net/neutron/+bug/1640767
[2] https://launchpad.net/bugs/1520055
[3] https://review.openstack.org/#/c/250587/

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] ocata summit libvirt imagebackend refactor session recap

2016-11-16 Thread Matt Riedemann
Sorry for getting behind on these, but we're nearing the finish line on 
summit session recaps. :)


We had a design summit session to reflect a bit on what went well and 
what didn't go so well with the libvirt imagebackend refactor work in 
the Newton release, and where to go from here. The full etherpad is here:


https://etherpad.openstack.org/p/ocata-nova-summit-libvirt-imagebackend

We started with the mini retrospective.

What went well:

- mdbooth gained knowledge on some really gross and tricky parts of the 
nova code base, like block devices and the libvirt image backend/cache 
code. The more people know about this code the better though.

- We identified some integration testing gaps.
- The smaller incremental changes were easier to review.

What didn't go so well:

- A lot of code was written but didn't land. This was due to a few
things, such as review fatigue on a seemingly endless series of changes
with no end goal in sight, and, in general, we had a lot of
priorities in the Newton release, so core reviewers were also spending
time reviewing other work.
- The EMC/ScaleIO team was told their new libvirt imagebackend code
needed to wait for the refactor work to complete, and they pushed up a
single large patch to refactor all of the code, which was -2ed as it
conflicted with what mdbooth was working on. There was an obvious
breakdown in communication here, but we want to stress that we are open
to new ideas on how to write all of this code; it was simply bad timing,
as it was a distraction during Newton. We'd probably also need
better test coverage before we could totally rewrite all of it.


What to change:

- Work in smaller chunks (10 changes in a series max) to avoid reviewer 
fatigue.
- Do the changes in a single topic branch - the changes in Newton moved 
around a bit and were hard for reviewers to follow.


After the retrospective we talked about the plans for this effort in Ocata.

The major change here is potentially not persisting the storage metadata 
in the nova database as originally laid out in the spec. There are 
thoughts now around storing the canonical domain xml for a libvirt guest 
and then using that for things like reboot. So Matt Booth isn't going to
focus on the storage persistence part of the spec for Ocata and will just
continue working on refactoring and cleaning up the libvirt image
backend and image cache code.


We also said that we wouldn't block the ScaleIO imagebackend on the 
refactor work now, same with Virtuozzo supporting ephemeral disks.


Finally, we talked a bit about reviving some CI testing around resizing 
with ephemeral disks here:


https://review.openstack.org/#/c/338411/

We want to try to move forward on that by modifying the test flavors
created in devstack to include some ephemeral disk, and then changing the
proposed test in Tempest to check whether the flavors that Tempest is
configured to use contain ephemeral disk: if so, resize the
instance and ssh into the guest to verify the ephemeral disks were
resized; otherwise just skip the test.
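
A minimal sketch of that conditional-skip logic (the names follow Tempest
conventions but are illustrative, not the actual proposed test):

  flavor = self.flavors_client.show_flavor(
      CONF.compute.flavor_ref)['flavor']
  if int(flavor.get('OS-FLV-EXT-DATA:ephemeral', 0)) == 0:
      raise self.skipException('configured flavor has no ephemeral disk')
  self.resize_server(server_id, CONF.compute.flavor_ref_alt)
  # then ssh into the guest and verify the ephemeral disk was resized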


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] neutron-lib impact

2016-11-16 Thread Armando M.
On 16 November 2016 at 10:11, Gary Kotton  wrote:

> Hi,
>
> I agree with you on a number of points that you have made below. But you
> are mixing things up. For one, you state that we should be moving faster, and it
> is patches like this that actually hinder us. We are not moving fast, as the core
> team is dwindling and people are leaving the project. We need to add
> new members to the core team and remove people who are not taking part. We
> are a community and not an autocracy. It is great that you are driving this
> but getting people on board would be helpful. I feel that the cores from
> the subprojects should chime in and review this – at the end of the day it
> affects them too. I have even asked others to base reviews on this code. I
> just think that you need to be aware that there are other people working on
> the project and that they too should be engaged.
>

What's this thread for then if not engaging/inviting people to review? Your
point seems moot.


> Thanks
>
> Gary
>
>
>
> *From: *"Armando M." 
> *Reply-To: *OpenStack List 
> *Date: *Wednesday, November 16, 2016 at 6:16 PM
> *To: *OpenStack List 
> *Subject: *Re: [openstack-dev] [neutron] neutron-lib impact
>
>
>
>
>
>
>
> On 16 November 2016 at 00:55, Gary Kotton  wrote:
>
> Hi,
>
> The directory integration will break all of the plugins and neutron
> projects. I do not think that this is something that we should do. It
> breaks the neutron API contract.
>
>
>
> The plugin directory is an internal implementation detail. Let's be very clear,
> in case you have not realized this already:
>
>
>
> *Neutron is not supposed to be imported directly by projects and we all
> knew it when we started off with the project decomposition.*
>
>
>
> neutron-lib is our response to driving adoption of stable interfaces
> across the neutron ecosystem of repositories. Forcing ourselves to
> introduce artificial deprecation cycles for internal details is not only
> slowing us down but it has proven ineffective so far. We should accelerate
> with the decoupling of projects so that we can all consider these types of
> breakages a thing of the past.
>
>
>
> I think that we should only unblock the patch
> https://review.openstack.org/#/c/386845. I think that due to the fact
> that this patch (very big) will break all plugins, we should only approve
> it once every sub project owner has chimed in.
>
> This will mean that she/he will need to understand that there may be some
> tweaks involved in getting unit tests to pass. CI may automagically work.
>
>
>
> This is impractical and defeats the point of allowing us to go faster. I
> have taken the proactive step of announcing this change publicly and with
> ample notice. I have addressed many subprojects myself and have already
> seen +2/+1 flocking in. I have moved forward without creating busy work for
> myself and the review team.
>
>
>
> I feel that as a core reviewer my responsibility is to make sure that we
> do not break things.
>
>
>
> We are not in a sane situation. It's been two years since we split the
> repo up and very little progress has been made to decouple the projects via
> stable interfaces. I am trying to identify ways to allow us to accelerate
> and you're stifling that effort with your abuse of core rights. I was not
> going to let the patch merge without a final announcement at the next team
> meeting.
>
>
>
> In addition to this we have a responsibility to ensure that things
> continue to work. Hopefully we can find a way to do this in a more friendly
> manner.
>
>
>
> I have taken such a responsibility with [1]. It takes us longer to discuss
> (on something that was already widely agreed on) than either fixing the
> breakage or providing a 'fake' backward compat layer, which will lead to the
> breakage as soon as we take it away [2].
>
>
>
> That said, I am happy to concede if other members of the core team agrees
> with you. As PTL, I have identified a gap that needs to be filled and I am
> proactively stepping up to address the gap. I can't obviously be right all
> the time, but I was under the impression I had the majority of the core
> team on my side.
>
>
>
> At this point, I'd invite other neutron core members to review and vote on
> the patch.
>
>
>
> A.
>
>
>
> [1] https://review.openstack.org/#/q/topic:plugin-directory
> [2]  https://bugs.launchpad.net/vmware-nsx/+bug/1640319
>
>
>
> Thanks
>
> Gary
>
>
>
> *From: *"Armando M." 
> *Reply-To: *OpenStack List 
> *Date: *Wednesday, November 16, 2016 at 6:51 AM
> *To: *OpenStack List 
> *Subject: *[openstack-dev] [neutron] neutron-lib impact
>
>
>
> Hi neutrinos,
>
>
>
> As mentioned during the last team meeting [1], there is a change [2] in
> the works aimed at adopting the neutron plugins directory as provided in
> neutron-lib 1.0.0 [3].
>
>
>
> As shown in [2], the switch to using the directory is relatively
> straightforward. I leave the rest of the affected repos as an exercise for
> the reader :)
>
>
>
> Cheers,
>
> Armando
>
>
>
> [1] http://eav

Re: [openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla core election policy change - vote

2016-11-16 Thread Ryan Hallisey
+1

-Ryan

- Original Message -
From: "Michał Jastrzębski" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, November 16, 2016 1:23:27 PM
Subject: [openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla core 
election policy change - vote

Hello,

In light of recent events (kolla-kubernetes becoming a thriving project,
kolla-ansible being split out) I feel we need to change the core reviewer
election process.

Until now Kolla had a single core team. That is no longer the case, as
right now we have 3 distinct core teams (kolla, kolla-ansible and
kolla-kubernetes), although for now with a big overlap in terms of
the people in them.

For future-proofing, as we already have differences between the teams, I
propose the following change to the core election process:


Core reviewer voting rights are reserved for the core team in question.
That means if someone proposes a core reviewer for kolla-kubernetes, only
kolla-kubernetes cores have a voting right (not kolla or kolla-ansible
cores).


Voting will be open until 30 Nov '16 end of day. Reviewers from all
core teams in kolla have voting rights on this policy change.

Regards
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] python-wsmanclient future

2016-11-16 Thread Arkady.Kanevsky
+1

-Original Message-
From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com] 
Sent: Tuesday, November 15, 2016 10:00 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] python-wsmanclient future

On Mon, Nov 7, 2016 at 8:51 AM, Dmitry Tantsur  wrote:
> Hi folks!
>
> In view of the Ironic governance discussion [1] I'd like to talk about
> the future of wsmanclient [2].
>
> This project was created to split away wsman code from 
> python-dracclient to be reused in other drivers (I can only think of 
> AMT right now). This was never finished: dracclient still uses its internal 
> wsman implementation.
>
> To make it worse, the guy behind this effort (ifarkas) has left our 
> team, python-dracclient is likely to leave Ironic governance per [1], 
> and the AMT driver is going to leave the Ironic tree.
>
> At least the majority of the folks currently behind dracclient (Miles,
> Lucas and myself) do not have the resources to continue this wsmanclient effort.
> Unless somebody is ready to take over both wsmanclient itself and the 
> effort to port dracclient, I suggest we abandon wsmanclient.
>
> Any thoughts?

+1. Sounds like nobody objects; I can add retiring this to my todo list.

// jim

>
> [1] https://review.openstack.org/#/c/392685/
> [2] https://github.com/openstack/python-wsmanclient
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Gluon] IRC Meeting canceled on Nov 23 for Thanksgiving Holiday Week

2016-11-16 Thread HU, BIN
Hello,

It will be Thanksgiving holiday week in the US next week. Thus it was agreed in our
IRC meeting today that our IRC meeting would be canceled next week (Nov 23).

Thanks
Bin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Gluon] IRC Meeting Time

2016-11-16 Thread HU, BIN
Hello folks,

Sorry to those who missed our IRC meeting today. I should have sent a
reminder: the US and Europe have changed to standard time, and since our
IRC meeting time is always 1800 UTC, the local meeting time in the US and
Europe has shifted accordingly.

Thanks
Bin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila][cinder] [api] API and entity naming consistency

2016-11-16 Thread Ben Swartzlander

On 11/16/2016 11:28 AM, Ravi, Goutham wrote:

+ [api] in the subject to attract API-WG attention.



We already have a guideline in the API-WG around resource names for “_”
vs “-“ -
https://specs.openstack.org/openstack/api-wg/guidelines/naming.html#rest-api-resource-names
. With some exceptions (like share_instances that you mention), I see
that we have implemented "-" across other resources.

For body elements, however, we prefer underscores, i.e., we do not have body
elements that follow CamelCase or mixedCase.



My personal preference would be to retain “share-” in the resource
names. As an application developer that has to integrate with block
storage and shared file systems APIs, I would like the distinction if
possible; because at the end of the day, the typical workflow for me
would be:

-  Get the endpoint from the catalog for the specific version of
the service API I want

-  Append resource to endpoint and make my REST calls.
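
In code, that workflow is roughly the following (a sketch using
keystoneauth1; the 'sharev2' service type and all credentials are
placeholders):

  from keystoneauth1 import session
  from keystoneauth1.identity import v3

  auth = v3.Password(auth_url='http://controller:5000/v3',
                     username='demo', password='secret',
                     project_name='demo', user_domain_id='default',
                     project_domain_id='default')
  sess = session.Session(auth=auth)
  # the catalog lookup happens under the hood; the caller just appends
  # the resource path to the discovered endpoint
  resp = sess.get('/share-networks',
                  endpoint_filter={'service_type': 'sharev2'})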



The distinction in the APIs would ensure my code is readable. It would
be interesting to see what the API working group prefers around this. We
have in the past realized that /capabilities could be uniform across
services because it is expected to spew a bunch of strings to the user
(warning: still under contention, see
https://review.openstack.org/#/c/386555/). However, there is a mountain
of difference between the underlying intent of /share-networks and
neutron's /networks resources.


So you'd be in favor of renaming cinder's /snapshots URL to 
/volume-snapshots and manila's /snapshots URL to /share-snapshots?


I agree the explicitness is appealing, but we have to recognize that the 
existing API has tons of implicitness in the names, and changing the 
existing API will cause pain no matter how well-intentioned the changes are.



However, whatever we decide there, let's not overload resources within
the project; an explicit API will be appreciated for application
development. share-types and group-types are not 'types' unless
everything about these resources (i.e., their database representation) is
the same and all the HTTP verbs you are planning to add correspond to both.



--

Goutham



*From: *Valeriy Ponomaryov 
*Reply-To: *"OpenStack Development Mailing List (not for usage
questions)" 
*Date: *Wednesday, November 16, 2016 at 4:22 PM
*To: *"OpenStack Development Mailing List (not for usage questions)"

*Subject: *[openstack-dev] [manila][cinder] API and entity naming
consistency



For the moment the Manila project, as well as Cinder, has an
inconsistency between entity and API naming, such as:

- "share type" ("volume type" in Cinder) entity has "/types/{id}" URL

- "share snapshot" ("volume snapshot" in Cinder) entity has
"/snapshots/{id}" URL



BUT, Manila has other Manila-specific APIs as following:



- "share network" entity and "/share-networks/{id}" API

- "share server" entity and "/share-servers/{id}" API



And with the implementation of new features [1] it becomes a problem,
because we start having

"types" and "snapshots" for different things (share and share groups,
share types and share group types).



So, here is first open question:



What is our convention in naming APIs according to entity names?



- Should APIs contain the full name, or may it be shortened?

- Should we restrict it to one of the variants (full or shortened), or
allow some APIs to follow one approach and some the other, and
consider it "don't care"? The "don't care" case is the current
approach, de facto.



Then, we have second question here:



- Should we use only "dash" ( - ) symbols in API names, or is "underscore" (
_ ) allowed?

- Should we allow both variants at once for each API?

- Should we allow APIs to use any of the variants and have a zoo of various
approaches?



In the Manila project, mostly "dash" is used, except for one API -
"share_instances".



[1] https://review.openstack.org/#/c/315730/



--

Kind Regards
Valeriy Ponomaryov
vponomar...@mirantis.com 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] - stable release for mitaka or newton

2016-11-16 Thread Farhad Sunavala
Do the Kuryr folks plan to release a stable version for mitaka and/or newton?

thanks,
Farhad.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Weekly Policy Meeting

2016-11-16 Thread Lance Bragstad
Good point, I'll add that as an action item to the etherpad. I think
someone did bring that up in the meeting, I think it was Adam [0].


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2016-11-16.log.html#t2016-11-16T17:04:54

On Wed, Nov 16, 2016 at 12:16 PM, Steve Martinelli 
wrote:

> May I suggest another action item? I think we need clear use cases.
> What policy and authorization capabilities are users expecting keystone to
> have? What are the short-comings of the implementation we have today?
>
> On Wed, Nov 16, 2016 at 1:06 PM, Lance Bragstad 
> wrote:
>
>> We had some issues using Hangouts because we hit the maximum limit of
>> attendees. To make it so that everyone could participate equally, we moved
>> the meeting to #openstack-keystone [0]. I have an action item to propose an
>> official meeting to the irc-meetings repository. Patch for the meeting is
>> currently up for review and it will be at the same time, but in
>> #openstack-meeting-cp instead of #openstack-keystone [1]. We do have a list
>> of action items we came up with during the meeting:
>>
>- ACTION ITEM: ayoung to repropose https://review.openstack.org/#/c/391624/
>>- ACTION ITEM: lbragstad to create an official meeting
>>   - Done: https://review.openstack.org/#/c/398500/
>>- ACTION ITEM: the entire group to get familiar with Apache Fortress
>>   - Docs: http://directory.apache.org/fortress/
>   - Example: http://xuctarine.blogspot.ru/2016/08/apache-fortress-easiest-way-to-get-full.html
>>- ACTION ITEM: the entire group to get familiar with OpenStack
>>congress
>>   - Docs: https://wiki.openstack.org/wiki/Congress
>>- ACTION ITEM: the entire group to get familiar with
>>https://github.com/admiyo/keystone/tree/url_patterns
>>
>>
>> The first item on the agenda will be following up on any action items
>> from previous meetings. I've already bootstrapped the next agenda in the
>> meeting etherpad [2]. Thanks again to everyone who attended and I look
>> forward to next week's meeting.
>>
>>
>> [0] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2016-11-16.log.html#t2016-11-16T16:01:43
>> [1] https://review.openstack.org/#/c/398500/
>> [2] https://etherpad.openstack.org/p/keystone-policy-meeting
>>
>> On Wed, Nov 16, 2016 at 8:32 AM, Lance Bragstad 
>> wrote:
>>
>>> Just sending out a reminder that we'll be having our first meeting in 90
>>> minutes. You can find all information about our agenda in the etherpad [0]
>>> as well as a link to the hangout [1].
>>>
>>> See you there!
>>>
>>> [0] https://etherpad.openstack.org/p/keystone-policy-meeting
>>> [1] https://hangouts.google.com/call/pd36j4qv5zfbldmhxeeatq6f7ae
>>>
>>>
>>> On Fri, Nov 11, 2016 at 8:33 AM, Lance Bragstad 
>>> wrote:
>>>
 I've added some initial content to the etherpad [0], to get things
 rolling. Since this is going to be a recurring thing, I'd like our first
 meeting to level set the playing field for everyone. Let's spend some time
 getting familiar with policy concepts, understand exactly how OpenStack
 policy works today, then we can start working on writing down what we like
 and don't like about the existing implementation. I'm sure most people
 interested in this work will already be familiar with the problem, but I
 want to make it easy for folks who aren't to ramp up quickly and get them
 into the discussion.

 Some have already started contributing to the etherpad! I've slowly
 started massaging that information into our first agenda. I'll continue to
 do so and send out another email on Tuesday as a reminder to familiarize
 yourselves with the etherpad before the meeting.


 Thanks!


 [0] https://etherpad.openstack.org/p/keystone-policy-meeting

 On Thu, Nov 10, 2016 at 2:36 PM, Steve Martinelli <
 s.martine...@gmail.com> wrote:

> Thanks for taking the initiative Lance! It'll be great to hear some
> ideas that are capable of making policy more fine grained, and keeping
> things backwards compatible.
>
> On Thu, Nov 10, 2016 at 3:30 PM, Lance Bragstad 
> wrote:
>
>> Hi folks,
>>
>> After hearing the recaps from the summit, it sounds like policy was a
>> hot topic (per usual). This is also reinforced by the fact every release 
>> we
>> have specifications proposed to re-do policy in some way.
>>
>> It's no doubt policy in OpenStack needs work. Let's dedicate an hour
>> a week to policy, analyze what we have currently, design an ideal 
>> solution,
>> and aim for that. We can bring our progress to the PTG in Atlanta.
>>
>> We'll hold the meeting openly using Google Hangouts and record o

Re: [openstack-dev] oaktree - a friendly end-user oriented API layer - anybody want to help?

2016-11-16 Thread Dean Troyer
On Wed, Nov 16, 2016 at 9:34 AM, Monty Taylor  wrote:
> (there are parts of this that are hand-wavey - but how does it sound in
> general?)

That sounds basically good because that noise I heard last night must
have been you sneaking in and stealing those steps from my white board
for what OSC will do with Glance.

OSC will have a one-step upload at the CLI, even if we also have to
expose some of the multi-steps separately in order to allow more
complex flows to happen.  The thing I like best is I can draw a line
through my 'poll for status' step when using gRPC!
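
For the curious: the no-polling property falls out of gRPC's client-side
streaming - the upload call itself blocks until the server returns the
final status. A hypothetical Python sketch (module and stub names are made
up per typical gRPC codegen, not oaktree's actual API):

  import grpc
  # oaktree_pb2/oaktree_pb2_grpc stand in for whatever protoc emits
  # from the oaktreemodel protobufs - illustrative names only
  from oaktreemodel import oaktree_pb2, oaktree_pb2_grpc

  def chunks(path, size=1 << 20):
      with open(path, 'rb') as f:
          while True:
              data = f.read(size)
              if not data:
                  return
              yield oaktree_pb2.ImageContent(image_content=data)

  channel = grpc.insecure_channel('oaktree.example.com:50051')
  stub = oaktree_pb2_grpc.OaktreeStub(channel)
  # streams all chunks, then returns the final upload status - no poll loop
  status = stub.UploadImageContent(chunks('image.qcow2'))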

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla core election policy change - vote

2016-11-16 Thread Eduardo Gonzalez
+1

On Wed, Nov 16, 2016, 6:46 PM Fox, Kevin M  wrote:

> +1
> 
> From: Michał Jastrzębski [inc...@gmail.com]
> Sent: Wednesday, November 16, 2016 10:23 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla
> core election policy change - vote
>
> Hello,
>
> In light of recent events (kolla-kubernetes becoming a thriving project,
> kolla-ansible being split out) I feel we need to change the core reviewer
> election process.
>
> Until now Kolla had a single core team. That is no longer the case, as
> right now we have 3 distinct core teams (kolla, kolla-ansible and
> kolla-kubernetes), although for now with a big overlap in terms of
> the people in them.
>
> For future-proofing, as we already have differences between the teams, I
> propose the following change to the core election process:
>
>
> Core reviewer voting rights are reserved for the core team in question.
> That means if someone proposes a core reviewer for kolla-kubernetes, only
> kolla-kubernetes cores have a voting right (not kolla or kolla-ansible
> cores).
>
>
> Voting will be open until 30 Nov '16 end of day. Reviewers from all
> core teams in kolla have voting rights on this policy change.
>
> Regards
> Michal
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Kolla-ansible is available

2016-11-16 Thread Michał Jastrzębski
So I can see the value of moving configs to kolla itself, but that would
require significant ansible-fu to get it properly separated; I'd
suggest separating this discussion from the general announcement. If
someone is willing to make the effort of cleanly separating configs from
ansible, let's discuss how to do it technically.

On 16 November 2016 at 12:39, Fox, Kevin M  wrote:
> I think some kolla-kubernetes folks will still want to use kolla genconfig.
>
> Not sure it really does need the ansible dependency though. if the dep is
> removed, it may be better to put it in the kolla repo then the kolla-ansible
> repo.
>
> Thanks,
> Kevin
> 
> From: Jeffrey Zhang [zhang.lei@gmail.com]
> Sent: Tuesday, November 15, 2016 11:55 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla] Kolla-ansible is available
>
>
> On Wed, Nov 16, 2016 at 9:10 AM, Fox, Kevin M  wrote:
>>
>> What's the plan for genconfig? It's based on ansible right now, but may fit
>> better as a non-ansible-specific tool?
>
>
> The core issue is: k8s depends on the ansible configuration file.
> Now Kolla is split - how will kolla-k8s generate the configuration file? If it
> still re-uses the ansible configuration file, we do not need any change.
>
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oaktree - a friendly end-user oriented API layer - anybody want to help?

2016-11-16 Thread Brad Topol

No Morgan.  You were supposed to stay quiet on this so we could spread vile
behind-the-scenes rumors about how Monty is trying to bring back CORBA!!! My
apologies to all the young folks not familiar with CORBA...

On a serious note this work has the potential to be extremely valuable and
I am looking forward to seeing how it matures.  Is there an easy way for busy
folks to stay up to date on how this progresses? Ideally the interop
challenge work (which is continuing forward) should hopefully be able to
take advantage of the innovations that this project will deliver.


Thanks,

Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Morgan Fainberg 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   11/15/2016 08:42 PM
Subject:Re: [openstack-dev] oaktree - a friendly end-user oriented API
layer - anybody want to help?





On Tue, Nov 15, 2016 at 5:16 PM, Jay Pipes  wrote:
  Awesome start, Monty :) Comments inline.

  On 11/15/2016 09:56 AM, Monty Taylor wrote:
   Hey everybody!

   At this past OpenStack Summit the results of the Interop Challenge were
   shown on stage. It was pretty awesome - 17 different people from 17
   different clouds ran the same workload. And it worked!

   However, one of the reasons it worked is because they all used the
   Ansible modules we wrote that are based on the shade library that
   contains the business logic needed to hide vendor differences in clouds.
   That means that there IS a fantastic OpenStack interoperability story -
   but only if you program in Python. That's less awesome.

   With that in mind - I'm pleased to announce a new project that aims to
   address that - oaktree.

   oaktree is a gRPC-based API porcelain service for OpenStack that is
   based on the shade library and I'd love some help in writing it.

   Basing oaktree on shade gets not only the business logic. Shade already
   understands a multi-cloud world. And because we use shade in Infra for
   nodepool, it already has caching, batching and thundering herd
   protection sorted to be able to hand very high loads efficiently. So
   while oaktree is new, the primary logic and fundamentals are all shade
   and are battle-tested.

  ++ muy bueno.

   The barrier to deployers adding it to their clouds needs to be as low as
   humanly possible. So as we work on it, ensuring that we keep it
   dead-simple to install, update and operate must be a primary concern.

   Where are we and what's next?

   oaktree doesn't do a whole lot that's terribly interesting at the
   moment. We have all of the development scaffolding and gate jobs set up
   and a few functions implemented.

   oaktree exists currently as two repos - oaktree and oaktreemodel:

     http://git.openstack.org/cgit/openstack/oaktree
     http://git.openstack.org/cgit/openstack/oaktreemodel

   oaktreemodel contains the Protobuf definitions and the build scripts to
   produce Python, C++ and Go code from them. The python code is published
   to PyPI as a normal pure-python library. The C++ code is published as a
   source tarball and the Go code is checked back in to the same repo so
   that go works properly.

  Very nice. I recently started playing around with gRPC myself for some
  ideas I had about replacing part of nova-compute with a Golang worker
  service that can tolerate lengthy disconnections from a centralized
  control plane (hello, v[E]CPE!).

  It's been (quite) a few years since I last used protobufs (hey, remember
  Drizzle?) but it's been a blast getting back into protobufs development.
  Now that I see you're using a similar approach for oaktree, I'm
  definitely interested in contributing.

   oaktree depends on the python oaktreemodel library, and also on shade.
   It implements the server portion of the gRPC service definition.

   Currently, oaktree can list and search for flavors, images and floating
   ips. Exciting right? Most of the work to expose the rest of the API that
   shade can provide at the moment is going to be fairly straightforward -
   although in each case figuring out the best mapping will take some care.

   We have a few major things that need some good community design. These
   are also listed in a todo.rst file in the oaktree repo which is part of
   the docs:

     http://oaktree.readthedocs.io/en/latest/

   The auth story. The native/default auth for gRPC is oauth. It has the
   ability for pluggable auth, but that would raise the barrier for new
   languages. I'd love it if we can come up with a story that involves
   making API users in keystone and authorizing them to use oaktree via an
   oauth transaction.

  ++

  > The keystone auth backends currently are all about
   integrating with other auth management systems, which is great for
   environments where you have a web browser, but not so much for ones
   where you need t

Re: [openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla core election policy change - vote

2016-11-16 Thread Fox, Kevin M
+1

From: Michał Jastrzębski [inc...@gmail.com]
Sent: Wednesday, November 16, 2016 10:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla core 
election policy change - vote

Hello,

In light of recent events (kolla-kubernetes becoming a thriving project,
kolla-ansible being split out) I feel we need to change the core reviewer
election process.

Until now Kolla had a single core team. That is no longer the case, as
right now we have 3 distinct core teams (kolla, kolla-ansible and
kolla-kubernetes), although for now with a big overlap in terms of
the people in them.

For future-proofing, as we already have differences between the teams, I
propose the following change to the core election process:


Core reviewer voting rights are reserved for the core team in question.
That means if someone proposes a core reviewer for kolla-kubernetes, only
kolla-kubernetes cores have a voting right (not kolla or kolla-ansible
cores).


Voting will be open until 30 Nov '16 end of day. Reviewers from all
core teams in kolla have voting rights on this policy change.

Regards
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Kolla-ansible is available

2016-11-16 Thread Fox, Kevin M
I think some kolla-kubernetes folks will still want to use kolla genconfig.

Not sure it really does need the ansible dependency though. If the dep is
removed, it may be better to put it in the kolla repo than in the kolla-ansible
repo.

Thanks,
Kevin

From: Jeffrey Zhang [zhang.lei@gmail.com]
Sent: Tuesday, November 15, 2016 11:55 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla] Kolla-ansible is available


On Wed, Nov 16, 2016 at 9:10 AM, Fox, Kevin M 
mailto:kevin@pnnl.gov>> wrote:
What's the plan for genconfig? It's based on ansible right now, but may fit
better as a non-ansible-specific tool?

The core issue is: k8s depends on the ansible configuration file.
Now Kolla is split - how will kolla-k8s generate the configuration file? If it
still re-uses the ansible configuration file, we do not need any change.



--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][kolla-ansible][kolla-kubernetes] Kolla core election policy change - vote

2016-11-16 Thread Michał Jastrzębski
Hello,

In light of recent events (kolla-kubernetes becoming a thriving project,
kolla-ansible being split out) I feel we need to change the core reviewer
election process.

Until now Kolla had a single core team. That is no longer the case, as
right now we have 3 distinct core teams (kolla, kolla-ansible and
kolla-kubernetes), although for now with a big overlap in terms of
the people in them.

For future-proofing, as we already have differences between the teams, I
propose the following change to the core election process:


Core reviewer voting rights are reserved for the core team in question.
That means if someone proposes a core reviewer for kolla-kubernetes, only
kolla-kubernetes cores have a voting right (not kolla or kolla-ansible
cores).


Voting will be open until 30 Nov '16 end of day. Reviewers from all
core teams in kolla have voting rights on this policy change.

Regards
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Weekly Policy Meeting

2016-11-16 Thread Steve Martinelli
May I suggest another action item? I think we need clear use cases.
What policy and authorization capabilities are users expecting keystone to
have? What are the short-comings of the implementation we have today?

On Wed, Nov 16, 2016 at 1:06 PM, Lance Bragstad  wrote:

> We had some issues using Hangouts because we hit the maximum limit of
> attendees. To make it so that everyone could participate equally, we moved
> the meeting to #openstack-keystone [0]. I have an action item to propose an
> official meeting to the irc-meetings repository. Patch for the meeting is
> currently up for review and it will be at the same time, but in
> #openstack-meeting-cp instead of #openstack-keystone [1]. We do have a list
> of action items we came up with during the meeting:
>
>- ACTION ITEM: ayoung to repropose https://review.openstack.org/#
>/c/391624/
>- ACTION ITEM: lbragstad to create an official meeting
>   - Done: https://review.openstack.org/#/c/398500/
>- ACTION ITEM: the entire group to get familiar with Apache Fortress
>   - Docs: http://directory.apache.org/fortress/
>   - Example: http://xuctarine.blogspot.ru/2016/08/apache-fortress-easiest-way-to-get-full.html
>- ACTION ITEM: the entire group to get familiar with OpenStack congress
>   - Docs: https://wiki.openstack.org/wiki/Congress
>- ACTION ITEM: the entire group to get familiar with
>https://github.com/admiyo/keystone/tree/url_patterns
>
>
> The first item on the agenda will be following up on any action items from
> previous meetings. I've already bootstrapped the next agenda in the meeting
> etherpad [2]. Thanks again to everyone who attended and I look forward to
> next week's meeting.
>
>
> [0] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2016-11-16.log.html#t2016-11-16T16:01:43
> [1] https://review.openstack.org/#/c/398500/
> [2] https://etherpad.openstack.org/p/keystone-policy-meeting
>
> On Wed, Nov 16, 2016 at 8:32 AM, Lance Bragstad 
> wrote:
>
>> Just sending out a reminder that we'll be having our first meeting in 90
>> minutes. You can find all information about our agenda in the etherpad [0]
>> as well as a link to the hangout [1].
>>
>> See you there!
>>
>> [0] https://etherpad.openstack.org/p/keystone-policy-meeting
>> [1] https://hangouts.google.com/call/pd36j4qv5zfbldmhxeeatq6f7ae
>>
>>
>> On Fri, Nov 11, 2016 at 8:33 AM, Lance Bragstad 
>> wrote:
>>
>>> I've added some initial content to the etherpad [0], to get things
>>> rolling. Since this is going to be a recurring thing, I'd like our first
>>> meeting to level set the playing field for everyone. Let's spend some time
>>> getting familiar with policy concepts, understand exactly how OpenStack
>>> policy works today, then we can start working on writing down what we like
>>> and don't like about the existing implementation. I'm sure most people
>>> interested in this work will already be familiar with the problem, but I
>>> want to make it easy for folks who aren't to ramp up quickly and get them
>>> into the discussion.
>>>
>>> Some have already started contributing to the etherpad! I've slowly
>>> started massaging that information into our first agenda. I'll continue to
>>> do so and send out another email on Tuesday as a reminder to familiarize
>>> yourselves with the etherpad before the meeting.
>>>
>>>
>>> Thanks!
>>>
>>>
>>> [0] https://etherpad.openstack.org/p/keystone-policy-meeting
>>>
>>> On Thu, Nov 10, 2016 at 2:36 PM, Steve Martinelli <
>>> s.martine...@gmail.com> wrote:
>>>
 Thanks for taking the initiative Lance! It'll be great to hear some
 ideas that are capable of making policy more fine grained, and keeping
 things backwards compatible.

 On Thu, Nov 10, 2016 at 3:30 PM, Lance Bragstad 
 wrote:

> Hi folks,
>
> After hearing the recaps from the summit, it sounds like policy was a
> hot topic (per usual). This is also reinforced by the fact every release 
> we
> have specifications proposed to re-do policy in some way.
>
> It's no doubt policy in OpenStack needs work. Let's dedicate an hour a
> week to policy, analyze what we have currently, design an ideal solution,
> and aim for that. We can bring our progress to the PTG in Atlanta.
>
> We'll hold the meeting openly using Google Hangouts and record our
> notes using etherpad.
>
> Our first meeting will be Wednesday, November 16th from 10:00 AM –
> 11:00 AM Central (16:00 - 17:00 UTC) and it will reoccur weekly.
>
> Hangout: https://hangouts.google.com/call/pd36j4qv5zfbldmhxeeatq6f7ae
> Etherpad: https://etherpad.openstack.org/p/keystone-policy-meeting
>
> Let me know if you have any other questions, comments or concerns. I
> look forward to the first meeting!
>
> Lance
>
> __

Re: [openstack-dev] [neutron] neutron-lib impact

2016-11-16 Thread Gary Kotton
Hi,
I agree with you on a number of points that you have made below. But you are
mixing things up. For one, you state that we should be moving faster and that it is
patches like this that actually hinder us. We are not moving fast, as the core team
is dwindling and people are leaving the project. We need to add new
members to the core team and remove people who are not taking part. We are a
community and not an autocracy. It is great that you are driving this but
getting people on board would be helpful. I feel that the cores from the
subprojects should chime in and review this – at the end of the day it affects
them too. I have even asked others to base reviews on this code. I just think
that you need to be aware that there are other people working on the project
and that they too should be engaged.
Thanks
Gary

From: "Armando M." 
Reply-To: OpenStack List 
Date: Wednesday, November 16, 2016 at 6:16 PM
To: OpenStack List 
Subject: Re: [openstack-dev] [neutron] neutron-lib impact



On 16 November 2016 at 00:55, Gary Kotton 
mailto:gkot...@vmware.com>> wrote:
Hi,
The directory integration will break all of the plugins and neutron projects. I 
do not think that this is something that we should do. It breaks the neutron 
API contract.

The plugin directory is an internal implementation detail. Let's be very clear, in
case you have not realized this already:

Neutron is not supposed to be imported directly by projects and we all knew it 
when we started off with the project decomposition.

neutron-lib is our response to driving adoption of stable interfaces across the 
neutron ecosystem of repositories. Forcing ourselves to introduce artificial 
deprecation cycles for internal details is not only slowing us down but it has 
proven ineffective so far. We should accelerate with the decoupling of projects 
so that we can all consider these types of breakages a thing of the past.

I think that we should only unblock the patch 
https://review.openstack.org/#/c/386845. I think that due to the fact that this 
patch (very big) will break all plugins, we should only approve it once every 
sub project owner has chimed in.
This will mean that she/he will need to understand that there may be some 
tweaks involved in getting unit tests to pass. CI may automagically work.

This is impractical and defeats the point of allowing us to go faster. I have 
taken the proactive step of announcing this change publicly and with ample 
notice. I have addressed many subprojects myself and have already seen +2/+1 
flocking in. I have moved forward without creating busy work for myself and the 
review team.

I feel that as a core reviewer my responsibility is to make sure that we do not 
break things.

We are not in a sane situation. It's been two years since we split the repo up 
and very little progress has been made to decouple the projects via stable 
interfaces. I am trying to identify ways to allow us to accelerate and you're 
stifling that effort with your abuse of core rights. I was not going to let the 
patch merge without a final announcement at the next team meeting.

In addition to this we have a responsibility to ensure that things continue to 
work. Hopefully we can find a way to do this in a more friendly manner.

I have taken such a responsibility with [1]. It takes us longer to discuss (on
something that was already widely agreed on) than either fixing the breakage or
providing a 'fake' backward compat layer, which will lead to the breakage as soon
as we take it away [2].

That said, I am happy to concede if other members of the core team agrees with 
you. As PTL, I have identified a gap that needs to be filled and I am 
proactively stepping up to address the gap. I can't obviously be right all the 
time, but I was under the impression I had the majority of the core team on my 
side.

At this point, I'd invite other neutron core members to review and vote on the 
patch.

A.

[1] https://review.openstack.org/#/q/topic:plugin-directory
[2]  https://bugs.launchpad.net/vmware-nsx/+bug/1640319

Thanks
Gary

From: "Armando M." mailto:arma...@gmail.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, November 16, 2016 at 6:51 AM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [neutron] neutron-lib impact

Hi neutrinos,

As mentioned during the last team meeting [1], there is a change [2] in the 
works aimed at adopting the neutron plugins directory as provided in 
neutron-lib 1.0.0 [3].

As shown in [2], the switch to using the directory is relatively 
straightforward. I leave the rest of the affected repos as an exercise for the 
reader :)
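
For subproject maintainers wondering what the switch looks like, it boils
down to roughly this (a sketch based on the neutron-lib plugins directory
interface; exact call sites vary per repo):

  # before: reaching into neutron internals
  from neutron import manager
  plugin = manager.NeutronManager.get_plugin()

  # after: the stable neutron-lib plugin directory
  from neutron_lib.plugins import directory
  plugin = directory.get_plugin()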

Cheers,
Armando

[1] 
http://eavesdrop.openstack.org/meetings/networking/2016/networking.2016-11-14-21.00.txt
[2] https://review.openstack.org/#/q/topic:plugin-directory
[3] http://docs.openstack.org/releasenotes/neutron-lib/unreleased.html#id3


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all] important changes to pep8 python jobs

2016-11-16 Thread Paul Belanger
On Thu, Nov 03, 2016 at 12:29:09PM -0400, Paul Belanger wrote:
> Greetings,
> 
> We (openstack-infra) are proposing a change to the current pep8[1] job for
> python jobs, and would like to bring your attention to it.
> 
> We'll be removing the extra-index-url field from pip.conf which forces the job
> to manually build any missing wheels as dependencies.  The reason for this is
> to force a way for jobs to ensure the proper OS dependencies are installed.
> 
> There is a chance your project pep8 job may break, which is why we are sending
> out this email.  We encourage each project to use bindep [2], a binary
> dependency management tool, to declare any OS packages that are needed. If your
> project needs a specific binary to be installed to compile your project,
> simply
> add it to the bindep.txt file in your project repo.
> 
> We'll be approving the change on Nov. 16, 2016 and send out another message as
> we move closer to the date.
> 
> If you have any questions, feel free to reply or use #openstack-infra on
> freenode.
> 
Greetings all,

We have just approved this change and it will be live shortly. Again, if you are
having problems, please join us in #openstack-infra.
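
For reference, bindep.txt is a plain list of distro packages with optional
profile/platform selectors, along these lines (package names illustrative):

  gcc [compile]
  libffi-dev [platform:dpkg]
  libffi-devel [platform:rpm]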

> ---
> Paul Belanger
> 
> [1] https://review.openstack.org/391875
> [2] http://docs.openstack.org/infra/bindep/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Weekly Policy Meeting

2016-11-16 Thread Lance Bragstad
We had some issues using Hangouts because we hit the maximum limit of
attendees. To make it so that everyone could participate equally, we moved
the meeting to #openstack-keystone [0]. I have an action item to propose an
official meeting to the irc-meetings repository. Patch for the meeting is
currently up for review and it will be at the same time, but in
#openstack-meeting-cp instead of #openstack-keystone [1]. We do have a list
of action items we came up with during the meeting:

   - ACTION ITEM: ayoung to repropose
   https://review.openstack.org/#/c/391624/
   - ACTION ITEM: lbragstad to create an official meeting
  - Done: https://review.openstack.org/#/c/398500/
   - ACTION ITEM: the entire group to get familiar with Apache Fortress
  - Docs: http://directory.apache.org/fortress/
  - Example:
  
http://xuctarine.blogspot.ru/2016/08/apache-fortress-easiest-way-to-get-full.html
   - ACTION ITEM: the entire group to get familiar with OpenStack congress
  - Docs: https://wiki.openstack.org/wiki/Congress
   - ACTION ITEM: the entire group to get familiar with
   https://github.com/admiyo/keystone/tree/url_patterns

The first item on the agenda will be following up on any action items from
previous meetings. I've already bootstrapped the next agenda in the meeting
etherpad [2]. Thanks again to everyone who attended and I look forward to
next week's meeting.


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2016-11-16.log.html#t2016-11-16T16:01:43
[1] https://review.openstack.org/#/c/398500/
[2] https://etherpad.openstack.org/p/keystone-policy-meeting

On Wed, Nov 16, 2016 at 8:32 AM, Lance Bragstad  wrote:

> Just sending out a reminder that we'll be having our first meeting in 90
> minutes. You can find all information about our agenda in the etherpad [0]
> as well as a link to the hangout [1].
>
> See you there!
>
> [0] https://etherpad.openstack.org/p/keystone-policy-meeting
> [1] https://hangouts.google.com/call/pd36j4qv5zfbldmhxeeatq6f7ae
>
>
> On Fri, Nov 11, 2016 at 8:33 AM, Lance Bragstad 
> wrote:
>
>> I've added some initial content to the etherpad [0], to get things
>> rolling. Since this is going to be a recurring thing, I'd like our first
>> meeting to level set the playing field for everyone. Let's spend some time
>> getting familiar with policy concepts, understand exactly how OpenStack
>> policy works today, then we can start working on writing down what we like
>> and don't like about the existing implementation. I'm sure most people
>> interested in this work will already be familiar with the problem, but I
>> want to make it easy for folks who aren't to ramp up quickly and get them
>> into the discussion.
>>
>> Some have already started contributing to the etherpad! I've slowly
>> started massaging that information into our first agenda. I'll continue to
>> do so and send out another email on Tuesday as a reminder to familiarize
>> yourselves with the etherpad before the meeting.
>>
>>
>> Thanks!
>>
>>
>> [0] https://etherpad.openstack.org/p/keystone-policy-meeting
>>
>> On Thu, Nov 10, 2016 at 2:36 PM, Steve Martinelli > > wrote:
>>
>>> Thanks for taking the initiative Lance! It'll be great to hear some
>>> ideas that are capable of making policy more fine grained, and keeping
>>> things backwards compatible.
>>>
>>> On Thu, Nov 10, 2016 at 3:30 PM, Lance Bragstad 
>>> wrote:
>>>
 Hi folks,

 After hearing the recaps from the summit, it sounds like policy was a
 hot topic (per usual). This is also reinforced by the fact every release we
 have specifications proposed to re-do policy in some way.

 It's no doubt policy in OpenStack needs work. Let's dedicate an hour a
 week to policy, analyze what we have currently, design an ideal solution,
 and aim for that. We can bring our progress to the PTG in Atlanta.

 We'll hold the meeting openly using Google Hangouts and record our
 notes using etherpad.

 Our first meeting will be Wednesday, November 16th from 10:00 AM –
 11:00 AM Central (16:00 - 17:00 UTC) and it will reoccur weekly.

 Hangout: https://hangouts.google.com/call/pd36j4qv5zfbldmhxeeatq6f7ae
 Etherpad: https://etherpad.openstack.org/p/keystone-policy-meeting

 Let me know if you have any other questions, comments or concerns. I
 look forward to the first meeting!

 Lance

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> htt

Re: [openstack-dev] [Craton] NFV planned host maintenance

2016-11-16 Thread Sulochan Acharya
Hi,

On Wed, Nov 16, 2016 at 2:46 PM, Ian Cordasco 
wrote:

> -Original Message-
> From: Juvonen, Tomi (Nokia - FI/Espoo) 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: November 11, 2016 at 02:27:19
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject:  [openstack-dev] [Craton] NFV planned host maintenance
>
> > I have been looking, over the past two OpenStack summits, at the changes
> > needed to fulfill the OPNFV Doctor use case for planned host maintenance,
> > and at the same time trying to find other Ops requirements to satisfy
> > different needs. I was just about to start a new project (Fenix), but
> > looking at Craton, it seems a good alternative and was proposed to me at
> > the Barcelona meetup. Here are some ideas; I would like a comment on
> > whether Craton could be used here.
>
> Hi Tomi,
>
> Thanks for your interest in craton! I'm replying in-line, but please
> come and join us in #craton on Freenode as well!
>
> > OPNFV Doctor / NFV requirements are described here:
> > http://artifacts.opnfv.org/doctor/docs/requirements/02-
> use_cases.html#nvfi-maintenance
> > http://artifacts.opnfv.org/doctor/docs/requirements/03-
> architecture.html#nfvi-maintenance
> > http://artifacts.opnfv.org/doctor/docs/requirements/05-
> implementation.html#nfvi-maintenance
> >
> > My rough thoughts about what would be initially needed (as short as I
> can):
> >
> > - There should be a database of all hosts matching to what is known by
> Nova.
>
> So I think this might be the first problem that you'll run into with
> Craton.
>
> Craton is designed to specifically manage the physical devices in a
> data centre. At the moment, it only considers the hosts that you'd run
> Nova on, not the Virtual Machines that Nova is managing on the Compute
> hosts.
>
> It's plausible that we could add the ability to track virtual
> machines, but Craton is meant to primarily work underneath the cloud.
> I think this might be changing since Craton is looking forward to
> helping manage a multi-cloud environment, so it's possible this won't
> be an issue for long.
>

So I think there are two parts to this. 1. As Ian mentioned, Craton
currently does not keep an inventory of the VMs running inside an OpenStack
deployment. Nova (and Horizon for UI) is already doing this for the users.
However, we do run jobs or workloads on the VMs -- like live-migrating VMs
from a host that is about to undergo maintenance, or a host that monitoring
flagged as bad, etc. This is done using `plugins` that talk to nova and so
on. So I think some of what you are looking for falls into that, perhaps?
This can be based on some notification the Craton engine receives from
another application (monitoring, for example).


>
> > - There should be an API for the Cloud Admin to set a planned
> > maintenance window for a host (maybe an aggregate or group of hosts),
> > set when it is in maintenance and unset when finished. There might be
> > some optional parameters like a target host where to move things
> > currently running on the affected host. It could also be used for
> > retirement of a host.
>
> This sounds like it's part of the next phase of Craton development -
> the remediation workflows. I think Jim and Sulo are more suited
> towards talking to that though.
>
>
So we will be able to trigger a job (maintenance) based on 1. a
user-defined schedule or 2. some notification that we receive from another
application. Both are user defined. Like Ian suggested, this is something
we plan to do in the next phase.


> > - There should be project(tenant) and host specific notifications that
> could:
>
> We are talking about an events/notifications system.
>
>
+1. We are working on providing notification messages for all actions
within the application.


> > - Trigger an alarm in Aodh so the application would be aware of
> > maintenance state changes affecting its servers, so zero downtime of
> > the application could be guaranteed.
>
> I'm not sure it should be Craton's responsibility to do this, but I
> expect the administrator could set alarm criteria based off of
> Craton's events stream.
>

+1 We need to make sure that we don't try to be a monitoring solution. But
like Ian said, we can always look at using the notification system to do
downstream processing.


>
> > - Notifications could be consumed by a workflow engine like Mistral,
> > where application-server-specific action flows and admin action flows
> > could be performed (to move servers away, disable the host, ...).
> > - Host monitoring like Vitrage could consume notifications to disable
> > alarms for a host while planned maintenance is ongoing, rather than
> > treating it as down by fault.
>

I think it's both ways:
some alarm triggered -> Craton -> disable the monitoring. But also,
Craton notification -> some application consumes it -> does something else.

So the way I think of this is: Admin sets/schedules some work on a host ->
Craton workflow disables your monitoring (given the monitoring solution
allows such an action) -> start the maintenance.
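
To make that concrete, here is a rough sketch of what such a plugin hook
could look like. This is purely illustrative - the plugin interface is
still being designed, so every name below is hypothetical:

def on_maintenance_scheduled(event, inventory, monitoring, compute):
    """React to a maintenance window being scheduled for a host."""
    host = inventory.get_host(event['host_id'])
    # Silence monitoring first, so the planned outage does not page anyone
    # (given the monitoring solution allows such an action).
    monitoring.disable_alarms(host)
    # Drain workloads, e.g. live-migrate VMs away via a nova plugin.
    compute.live_migrate_all(host, target=event.get('target_host'))
    # Flag the host so nothing new lands on it until the window closes.
    inventory.set_maintenance(host, until=event['window_end'])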

[openstack-dev] [Congress] Magnum_driver

2016-11-16 Thread Ruben
Hi everybody,
first of all: Tim thanks for your help.

I've read the code in
python-magnumclient/magnumclient/v1/cluster_templates_shell.py and in
python-magnumclient/magnumclient/v1/clusters_shell.py, so I've modified the
translators in the magnum_driver according to the
_show_cluster_template(cluster_template) and _show_cluster(cluster) methods
shown in these files.
Are these the methods to be taken into account for the translation?
In any case, I don't know if I made mistakes in the translation.
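
For reference, the pattern I'm trying to follow is the HDICT translator
style used by the other Congress datasource drivers - roughly the sketch
below, where the exact field set (taken from _show_cluster_template) is the
part I'm unsure about:

value_trans = {'translation-type': 'VALUE'}

cluster_templates_translator = {
    'translation-type': 'HDICT',
    'table-name': 'cluster_templates',
    'selector-type': 'DOT_SELECTOR',
    'field-translators': (
        {'fieldname': 'uuid', 'translator': value_trans},
        {'fieldname': 'name', 'translator': value_trans},
        {'fieldname': 'coe', 'translator': value_trans},
        {'fieldname': 'image_id', 'translator': value_trans},
        {'fieldname': 'keypair_id', 'translator': value_trans},
        # ... one entry per field the driver should expose
    )}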

I have some doubts, because if I run the CLI command 'magnum
cluster-template-list' I get:

+--------------------------------------+----------------------+
| uuid                                 | name                 |
+--------------------------------------+----------------------+
| 8d25a1ed-faa6-4305-a6a1-6559708c805b | k8s-cluster-template |
+--------------------------------------+----------------------+

and if I run the CLI command 'magnum cluster-template-show
k8s-cluster-template' I get:

+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| insecure_registry     | -                                    |
| labels                | {}                                   |
| updated_at            | -                                    |
| floating_ip_enabled   | True                                 |
| fixed_subnet          | -                                    |
| master_flavor_id      | my                                   |
| uuid                  | 8d25a1ed-faa6-4305-a6a1-6559708c805b |
| no_proxy              | -                                    |
| https_proxy           | -                                    |
| tls_disabled          | False                                |
| keypair_id            | testkey                              |
| public                | False                                |
| http_proxy            | -                                    |
| docker_volume_size    | 7                                    |
| server_type           | vm                                   |
| external_network_id   | public                               |
| cluster_distro        | fedora-atomic                        |
| image_id              | fedora-atomic-latest                 |
| volume_driver         | -                                    |
| registry_enabled      | False                                |
| docker_storage_driver | devicemapper                         |
| apiserver_port        | -                                    |
| name                  | k8s-cluster-template                 |
| created_at            | 2016-11-11T11:38:25+00:00            |
| network_driver        | flannel                              |
| fixed_network         | -                                    |
| coe                   | kubernetes                           |
| flavor_id             | m1.tiny                              |
| master_lb_enabled     | False                                |
| dns_nameserver        | 193.205.160.3                        |
+-----------------------+--------------------------------------+

So the fields that I see here are a little bit different from the fields in
the methods mentioned above.
Same thing for clusters.

'magnum cluster-list'

+--------------------------------------+-------------+------------+------------+--------------+---------------+
| uuid                                 | name        | keypair_id | node_count | master_count | status        |
+--------------------------------------+-------------+------------+------------+--------------+---------------+
| 1634beb9-25de-4cdd-bafa-67537069f0cc | k8s-cluster | testkey    | 1          | 1            | CREATE_FAILED |
+--------------------------------------+-------------+------------+------------+--------------+---------------+


'magnum cluster-show k8s-cluster'

+-++
| Property| Value   




 

Re: [openstack-dev] [neutron]Do we need to rfe to implement active-active router?

2016-11-16 Thread Assaf Muller
On Wed, Nov 16, 2016 at 10:42 AM, huangdenghui  wrote:
> hi
> Currently, neutron supports DVR routers and legacy routers. For high
> availability, there is an HA router in the reference implementation of
> both legacy mode and DVR mode. I am wondering whether an active-active
> router is needed in both modes?

Yes, an RFE would be required and likely a spec describing the high
level approach of the implementation.



Re: [openstack-dev] [neutron][metadata] Is there HTTP attack issue in metadata proxy functionality offered by reference implementation?

2016-11-16 Thread Mathieu Gagné
On Wed, Nov 16, 2016 at 11:52 AM, Clint Byrum  wrote:
>
> IMO the HTTP metadata service and the way it works is one of the worst
> ideas we borrowed from EC2. Config drive (which I didn't like when I
> first saw it, but now that I've operated clouds, I love) is a simpler
> system and does not present any real surface area to the users.
>

Cannot agree more with you on that one.

--
Mathieu



[openstack-dev] [app-catalog] App Catalog IRC meeting Thursday November 17th

2016-11-16 Thread Christopher Aedo
Join us tomorrow (Thursday) for our weekly meeting, scheduled for
November 17th at 17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to if you want to discuss
something with the Community App Catalog team:
https://wiki.openstack.org/wiki/Meetings/app-catalog

Now that the dust has settled from the summit we'll pick up where we
left off - that will be discussing the transition to using Glare for
the backend.  We have a test server launched by infra now and are at
the point where we are down to fine tuning.  If you can join the
meeting to discuss further, please do!



Re: [openstack-dev] 答复: [neutron][lbaasv2][octavia] Not able to create loadbalancer

2016-11-16 Thread Michael Johnson
Hi Ganpat,

FYI, we are on freenode IRC: #openstack-lbaas if you would like to
chat interactively.

So, I see the amp is expecting systemd, which probably means you are
using a "master" version of diskimage-builder with a stable/newton
version of Octavia.  On November 2nd, they switched diskimage-builder
to use a xenial Ubuntu image by default.  This patch just merged on
Octavia master to support that change:
https://review.openstack.org/396438

I think you have two options:
1. Set the environment variable DIB_RELEASE=trusty and recreate the
amphora image[1].
2. Install the stable/newton version of diskimage-builder and recreate
the amphora image.

For option one I have pasted a script I use to rebuild the image with
Ubuntu trusty.
Note, this script will delete your current image in glance and expects
the octavia repository to be located in /opt/stack/octavia, so please
update it as needed.
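
If you'd rather not use the gist, the essence of option one is just
exporting the variable and re-running the Octavia image build - roughly the
following sketch (the path and output name are from my own setup, adjust as
needed):

import os
import subprocess

# Equivalent to: export DIB_RELEASE=trusty; ./diskimage-create.sh ...
env = dict(os.environ, DIB_RELEASE='trusty')
subprocess.check_call(
    ['./diskimage-create.sh', '-o', 'amphora-x64-haproxy'],
    cwd='/opt/stack/octavia/diskimage-create', env=env)
# Then delete the old image in glance and upload the new one.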

Michael

[1] https://gist.github.com/michjohn/a7cd582fc19e0b4bc894eea6249829f9

On Wed, Nov 16, 2016 at 8:25 AM, Ganpat Agarwal wrote:
> Here are the steps I followed:
>
> 1. Created a LB
>
> stack@devstack-openstack:~/devstack$ neutron lbaas-loadbalancer-list
> +--------------------------------------+------+-------------+---------------------+----------+
> | id                                   | name | vip_address | provisioning_status | provider |
> +--------------------------------------+------+-------------+---------------------+----------+
> | 1ffcfe97-99a3-47c1-9df1-63bac71d9e04 | lb1  | 10.0.0.10   | PENDING_CREATE      | octavia  |
> +--------------------------------------+------+-------------+---------------------+----------+
>
> 2. List amphora instance
> stack@devstack-openstack:~/devstack$ nova list
> +--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------------------+
> | ID                                   | Name                                         | Status | Task State | Power State | Networks                                                                          |
> +--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------------------+
> | 89dc06b7-00a9-456f-abc9-50f14e1bc78b | amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.6; private=10.0.0.11, fdbc:aa5f:a6ae:0:f816:3eff:fe0b:86d7  |
> +--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------------------+
>
> 3. able to ssh on lb-mgmt-ip , 192.168.0.6
>
> Network config
>
> ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ ip a
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
> default qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: ens3:  mtu 1450 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether fa:16:3e:02:a7:50 brd ff:ff:ff:ff:ff:ff
> inet 192.168.0.6/24 brd 192.168.0.255 scope global ens3
>valid_lft forever preferred_lft forever
> inet6 fe80::f816:3eff:fe02:a750/64 scope link
>valid_lft forever preferred_lft forever
> 3: ens6:  mtu 1500 qdisc noop state DOWN group default
> qlen 1000
>
>
> 4. No amphora agent running
>
> ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ sudo service
> amphora-agent status
> ● amphora-agent.service
>Loaded: not-found (Reason: No such file or directory)
>Active: inactive (dead)
>
> ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ sudo service
> amphora-agent start
> Failed to start amphora-agent.service: Unit amphora-agent.service not found.
>
>
> How to proceed from here?
>
>
> On Wed, Nov 16, 2016 at 6:04 PM, 洪 赵  wrote:
>>
>> After the amphora vm was created, the Octavia worker tried to plug VIP to
>> the amphora  vm, but failed. It could not connect to the amphora agent. You
>> may ssh to the vm and check if the networks and ip addresses are correctly
>> set.
>>
>>
>>
>> Good luck.
>>
>> -hzhao
>>
>>
>>
>> From: Ganpat Agarwal
>> Sent: 16 November 2016 14:40
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [neutron][lbaasv2][octavia] Not able to create
>> loadbalancer
>>
>>
>>
>> Hi All,
>>
>> I am using devstack stable/newton branch and have deployed octavia for
>> neutron-lbaasv2.
>>
>> Here is my local.conf
>>
>> [[local|localrc]]
>> HOST_IP=10.0.2.15
>> DATABASE_PASSWORD=$ADMIN_PASSWORD
>> MYSQL_PASSWORD=$ADMIN_PASSWORD
>> RABBIT_PASSWORD=$ADMIN_PASSWORD
>> SERVICE_PASSWORD=$ADMIN_PASSWORD
>> SERVICE_TOKEN=tokentoken
>> DEST=/opt/stack
>>
>> # Disable Nova Network and enable Neutron
>> disable_service n-net
>> enable_service q-svc
>> enable_service q-agt

Re: [openstack-dev] [neutron][metadata] Is there HTTP attack issue in metadata proxy functionality offered by reference implementation?

2016-11-16 Thread Clint Byrum
Excerpts from huangdenghui's message of 2016-11-17 00:05:39 +0800:
> hi
> Currently, the nova metadata service is proxied by the metadata agent in
> the dhcp agent or the l3 router agent, depending on whether the network is
> attached to a router or not. In essence, the metadata agent implements
> HTTP proxy functionality using the compute node's host protocol stack. In
> other words, it exposes the host protocol stack to VMs. If a VM is an
> attacker, it can launch an HTTP GET flood attack, which may then affect
> the compute node. I would like to hear your opinions. Any comment is
> welcome. Thanks.

Yes, it's an attack vector and should be protected and monitored as
such.

IMO the HTTP metadata service and the way it works is one of the worst
ideas we borrowed from EC2. Config drive (which I didn't like when I
first saw it, but now that I've operated clouds, I love) is a simpler
system and does not present any real surface area to the users.
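
(For completeness: users can opt into config drive per server - a rough
sketch with python-novaclient is below, where `sess` is assumed to be an
existing keystoneauth session and the IDs are placeholders - and operators
can push everyone that way with nova's force_config_drive option.)

from novaclient import client as nova_client

nova = nova_client.Client('2', session=sess)
server = nova.servers.create(
    name='vm1', image=IMAGE_ID, flavor=FLAVOR_ID,
    config_drive=True)  # metadata is read from an attached drive, not HTTP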



Re: [openstack-dev] [i18n] [glance] ESL question 'shared' vs 'shareable'

2016-11-16 Thread Sean McGinnis
On Wed, Nov 16, 2016 at 04:04:52PM +, Brian Rosmaita wrote:
> Hello Translators,
> 
> We're having a discussion about a new image "visibility" value for Glance,
> and before we go too far, it would be helpful to know whether what we're
> worried about is going to matter for ESL people.
> 
> Here's the situation: Since the Diablo release, Glance end users have had
> the ability to share images with other cloud users by adding "members" to
> the image.  We call those "shared images".  Previously, we haven't had a
> special "visibility" keyword for these, but we are introducing one now
> [0].  Here's the problem introduced by that change:
> 
> (1) Members can only be added to an image if its current visibility value
> allows for it. We're going to make this an explicit visibility state that
> we ware proposing to call 'shared'.
> 
> (2) An image with visibility == 'shared', however, isn't actually
> accessible to other users unless they are added as "image members".  So
> it's going to be possible for a user to have some images with visibility
> == 'shared', but they aren't *really* shared with anyone yet.
> 
> (3) For reasons outlined on [0], we're proposing to make this new
> visibility the default value in Glance.  This will enable the current
> sharing workflow to work in a backward-compatible way.  But some people
> are worried that users will panic when they see that their new images have
> visibility == 'shared' (even though no other users have access to such
> images until "image members" are added).
> 
> (4) To address this, we're thinking that maybe the identifier for this
> kind of image visibility should be 'shareable'.
> 
> Finally, here's my question.  For an ESL person looking at these two
> identifiers (which, as identifiers, won't be translated):
> * shared
> * shareable
> 
> Are the above so similar that the nuances of the discussion above would be
> lost anyway?  In other words, are we just bikeshedding here, or is there a
> clear distinction?  What I mean is, is the panic described above likely or
> unlikely to happen for an ESL person?
> 
> thanks,
> brian

Good question. I think technically it would be shareable, which would
mean that it is then able to be shared.

Realistically though, in my opinion, calling it shared to denote that it
_can be_ shared is probably intuitive enough that there wouldn't be any
confusion about the naming.

My 2 cents.

> 
> [0] https://review.openstack.org/#/c/396919/


Re: [openstack-dev] [all][ptls][tc][goals] acknowledging community goals for Ocata

2016-11-16 Thread Anne Gentle
On Wed, Nov 16, 2016 at 10:21 AM, Anne Gentle  wrote:

>
>
>
> On Wed, Nov 16, 2016 at 9:35 AM, Doug Hellmann wrote:
>
>> We still have quite a few teams who have not acknowledged the goal for
>> Ocata. Remember, *all* teams are expected to respond, even if there is
>> no work to be done. The most important feature of this new process is
>> communication, which won't happen if teams don't participate.
>>
>> Please take a few minutes to review
>> http://governance.openstack.org/goals/index.html and
>> http://governance.openstack.org/goals/ocata/remove-incubated
>> -oslo-code.html
>> then submit a patch to add your planning artifacts to
>> openstack/governance/goals/ocata/remove-incubated-oslo-code.rst before
>> the deadline tomorrow.
>>
>
> Hi all,
>
> I wanted to follow up to let you know that Doug and I recorded a short
> video to talk about this new process and how we envision the community
> working together on this very first attempt at writing down expectations
> and setting up this new goal program.
>
> It's about 3 minutes and hopefully it'll help us all understand how to get
> the goals across the finish line. If you need more info, always feel free
> to reach out and ask. We'll iterate as we go.
>


And... now with an actual video link!

https://www.youtube.com/watch?v=tW0mJZe6Jiw


>
> Thanks,
> Anne
>
>
>>
>> Doug
>>
>
>
>
> --
> Anne Gentle
> www.justwriteclick.com
>



-- 
Anne Gentle
www.justwriteclick.com


Re: [openstack-dev] [manila][cinder] [api] API and entity naming consistency

2016-11-16 Thread Ravi, Goutham
+ [api] in the subject to attract API-WG attention.

We already have a guideline in the API-WG around resource names for "_" vs
"-":
https://specs.openstack.org/openstack/api-wg/guidelines/naming.html#rest-api-resource-names
With some exceptions (like share_instances that you mention), I see that we
have implemented "-" across other resources.
For body elements, however, we prefer underscores, i.e. we do not have body
elements that follow CamelCase or mixedCase.

My personal preference would be to retain "share-" in the resource names. As
an application developer who has to integrate with block storage and shared
file systems APIs, I would like the distinction if possible, because at the
end of the day the typical workflow for me would be:

- Get the endpoint from the catalog for the specific version of the
service API I want

- Append the resource to the endpoint and make my REST calls (see the
sketch below).
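
A minimal sketch of that workflow with keystoneauth1 (the service_type,
credentials and resource path here are my assumptions, not gospel):

from keystoneauth1 import session
from keystoneauth1.identity import v3

# Placeholder credentials - substitute real values.
auth = v3.Password(auth_url=AUTH_URL, username=USERNAME, password=PASSWORD,
                   project_name=PROJECT, user_domain_id='default',
                   project_domain_id='default')
sess = session.Session(auth=auth)

# Step 1: get the endpoint from the catalog for the service I want.
endpoint = sess.get_endpoint(service_type='sharev2', interface='public')
# Step 2: append the resource and make REST calls; an explicit, prefixed
# name like /share-networks keeps this line self-documenting.
resp = sess.get(endpoint + '/share-networks')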

The distinction in the APIs would ensure my code is readable. It would be
interesting to see what the API working group prefers around this. We have
in the past realized that /capabilities ought to be uniform across services
because it is expected to spew a bunch of strings to the user (warning:
still under contention, see https://review.openstack.org/#/c/386555/).
However, there is a mountain of difference between the underlying intent of
/share-networks and neutron's /networks resources.

However, whatever we decide there, let's not overload resources within the
project; an explicit API will be appreciated for application development.
share-types and group-types are not 'types' unless everything about these
resources (i.e., database representation) is the same and all the HTTP
verbs that you are planning to add correspond to both.

--
Goutham

From: Valeriy Ponomaryov 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, November 16, 2016 at 4:22 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [manila][cinder] API and entity naming consistency

For the moment the Manila project, as well as Cinder, has an inconsistency
between entity and API naming, such as:
- "share type" ("volume type" in Cinder) entity has "/types/{id}" URL
- "share snapshot" ("volume snapshot" in Cinder) entity has "/snapshots/{id}" 
URL

BUT, Manila has other Manila-specific APIs as follows:

- "share network" entity and "/share-networks/{id}" API
- "share server" entity and "/share-servers/{id}" API

And with the implementation of new features [1] it becomes a problem,
because we start having "types" and "snapshots" for different things
(shares and share groups, share types and share group types).

So, here is the first open question:

What is our convention for naming APIs according to entity names?

- Should APIs contain the full name, or may it be shortened?
- Should we restrict it to one of the variants (full or shortened), or
allow some APIs to follow one approach and some the other, i.e. treat it as
"don't care"? The "don't care" case is the current approach, de facto.

Then, we have a second question:

- Should we use only "dash" ( - ) symbols in API names, or is
"underscore" ( _ ) allowed?
- Should we allow both variants at once for each API?
- Should we allow APIs to use any of the variants and end up with a zoo of
approaches?

In the Manila project, mostly "dash" is used, except for one API -
"share_instances".

[1] https://review.openstack.org/#/c/315730/

--
Kind Regards
Valeriy Ponomaryov
vponomar...@mirantis.com


Re: [openstack-dev] [neutron][lbaasv2][octavia] Not able to create loadbalancer

2016-11-16 Thread Michael Johnson
Hi Ganpat,

Yes, as hzhao mentioned, this error means that the controller was
unable to connect to the amphora over the management network.

Please check that this section is properly setup:
http://docs.openstack.org/developer/octavia/guides/dev-quick-start.html#load-balancer-network-configuration

You can also use our devstack plugin.sh script as a reference to how
we set it up in devstack environments:
https://github.com/openstack/octavia/blob/master/devstack/plugin.sh

Michael

On Tue, Nov 15, 2016 at 10:36 PM, Ganpat Agarwal wrote:
> Hi All,
>
> I am using devstack stable/newton branch and have deployed octavia for
> neutron-lbaasv2.
>
> Here is my local.conf
>
> [[local|localrc]]
> HOST_IP=10.0.2.15
> DATABASE_PASSWORD=$ADMIN_PASSWORD
> MYSQL_PASSWORD=$ADMIN_PASSWORD
> RABBIT_PASSWORD=$ADMIN_PASSWORD
> SERVICE_PASSWORD=$ADMIN_PASSWORD
> SERVICE_TOKEN=tokentoken
> DEST=/opt/stack
>
> # Disable Nova Network and enable Neutron
> disable_service n-net
> enable_service q-svc
> enable_service q-agt
> enable_service q-dhcp
> enable_service q-l3
> enable_service q-meta
>
> # Enable LBaaS v2
> enable_plugin neutron-lbaas
> https://git.openstack.org/openstack/neutron-lbaas stable/newton
> enable_plugin octavia https://git.openstack.org/openstack/octavia
> stable/newton
> enable_service q-lbaasv2
> enable_service octavia
> enable_service o-cw
> enable_service o-hm
> enable_service o-hk
> enable_service o-api
>
> # Neutron options
> Q_USE_SECGROUP=True
> FLOATING_RANGE="172.18.161.0/24"
> FIXED_RANGE="10.0.0.0/24"
> Q_FLOATING_ALLOCATION_POOL=start=172.18.161.250,end=172.18.161.254
> PUBLIC_NETWORK_GATEWAY="172.18.161.1"
> PUBLIC_INTERFACE=eth0
>
> LOG=True
> VERBOSE=True
> LOGFILE=$DEST/logs/stack.sh.log
> LOGDAYS=1
> SCREEN_LOGDIR=$DEST/logs/screen
> SYSLOG=True
> SYSLOG_HOST=$HOST_IP
> SYSLOG_PORT=516
> RECLONE=yes
>
>
> While creating loadbalancer, i am getting error in octavia worker
>
> 2016-11-16 06:13:08.264 4115 INFO octavia.controller.queue.consumer [-]
> Starting consumer...
> 2016-11-16 06:14:58.507 4115 INFO octavia.controller.queue.endpoint [-]
> Creating load balancer '51082942-b348-4900-bde9-6d617dba8f99'...
> 2016-11-16 06:14:59.204 4115 INFO
> octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB
> with id 93e28edd-71ee-4448-bc70-b0424dbd64f5
> 2016-11-16 06:14:59.334 4115 INFO octavia.certificates.generator.local [-]
> Signing a certificate request using OpenSSL locally.
> 2016-11-16 06:14:59.336 4115 INFO octavia.certificates.generator.local [-]
> Using CA Certificate from config.
> 2016-11-16 06:14:59.336 4115 INFO octavia.certificates.generator.local [-]
> Using CA Private Key from config.
> 2016-11-16 06:14:59.337 4115 INFO octavia.certificates.generator.local [-]
> Using CA Private Key Passphrase from config.
> 2016-11-16 06:15:15.085 4115 INFO
> octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for
> amphora: 93e28edd-71ee-4448-bc70-b0424dbd64f5 with compute id
> f339b48d-1445-47e0-950b-ee69c2add81f for load balancer:
> 51082942-b348-4900-bde9-6d617dba8f99
> 2016-11-16 06:15:15.208 4115 INFO
> octavia.network.drivers.neutron.allowed_address_pairs [-] Port
> ac27cbb8-078d-47fd-824c-e95b0ebff392 already exists. Nothing to be done.
> 2016-11-16 06:15:39.708 4115 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to
> instance. Retrying.
> 2016-11-16 06:15:47.712 4115 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to
> instance. Retrying.
> 
> several 100 lines with same message
> 
> 2016-11-16 06:24:29.310 4115 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to
> instance. Retrying.
> ^[2016-11-16 06:24:34.316 4115 WARNING
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to
> instance. Retrying.
> 2016-11-16 06:24:39.317 4115 ERROR
> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries
> (currently set to 100) exhausted.  The amphora is unavailable.
> 2016-11-16 06:24:39.327 4115 WARNING
> octavia.controller.worker.controller_worker [-] Task
> 'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug'
> (19f29ca9-3e7f-4629-b976-b4d24539d8ed) transitioned into state 'FAILURE'
> from state 'RUNNING'
> 33 predecessors (most recent first):
>   Atom
> 'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs'
> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer':
> },
> 'provides': {u'93e28edd-71ee-4448-bc70-b0424dbd64f5':
>  0x7f8774263610>}}
>   |__Atom 'reload-lb-after-plug-vip' {'intention': 'EXECUTE', 'state':
> 'SUCCESS', 'requires': {'loadbalancer_id':
> u'51082942-b348-4900-bde9-6d617dba8f99'}, 'provides':
> }
>  |__Atom
> 'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData'
> {'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data':
> []},
> 'provides': None}
> |__Atom 'octavia.controller.worker.tasks.network

Re: [openstack-dev] 答复: [neutron][lbaasv2][octavia] Not able to create loadbalancer

2016-11-16 Thread Ganpat Agarwal
Here are the steps I followed:

1. Created a LB

stack@devstack-openstack:~/devstack$ neutron lbaas-loadbalancer-list
+--------------------------------------+------+-------------+---------------------+----------+
| id                                   | name | vip_address | provisioning_status | provider |
+--------------------------------------+------+-------------+---------------------+----------+
| 1ffcfe97-99a3-47c1-9df1-63bac71d9e04 | lb1  | 10.0.0.10   | PENDING_CREATE      | octavia  |
+--------------------------------------+------+-------------+---------------------+----------+

2. List amphora instance
stack@devstack-openstack:~/devstack$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks                                                                          |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------------------+
| 89dc06b7-00a9-456f-abc9-50f14e1bc78b | amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.6; private=10.0.0.11, fdbc:aa5f:a6ae:0:f816:3eff:fe0b:86d7  |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------------------------------------------+

3. able to ssh on lb-mgmt-ip , 192.168.0.6

Network config

ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group
default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: ens3:  mtu 1450 qdisc pfifo_fast state
UP group default qlen 1000
link/ether fa:16:3e:02:a7:50 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.6/24 brd 192.168.0.255 scope global ens3
   valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe02:a750/64 scope link
   valid_lft forever preferred_lft forever
3: ens6:  mtu 1500 qdisc noop state DOWN group default
qlen 1000


4. No amphora agent running

ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ sudo service
amphora-agent status
● amphora-agent.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)

ubuntu@amphora-41ac5ea5-c4f6-4b13-add5-edf4ac4ae0dd:~$ sudo service
amphora-agent start
Failed to start amphora-agent.service: Unit amphora-agent.service not found.


How to proceed from here?


On Wed, Nov 16, 2016 at 6:04 PM, 洪 赵  wrote:

> After the amphora vm was created, the Octavia worker tried to plug VIP to
> the amphora  vm, but failed. It could not connect to the amphora agent. You
> may ssh to the vm and check if the networks and ip addresses are correctly
> set.
>
>
>
> Good luck.
>
> -hzhao
>
>
>
> *From:* Ganpat Agarwal
> *Sent:* 16 November 2016 14:40
> *To:* OpenStack Development Mailing List (not for usage questions)
>
> *Subject:* [openstack-dev] [neutron][lbaasv2][octavia] Not able to create
> loadbalancer
>
>
> Hi All,
>
> I am using devstack stable/newton branch and have deployed octavia for
> neutron-lbaasv2.
>
> Here is my local.conf
>
> [[local|localrc]]
> HOST_IP=10.0.2.15
> DATABASE_PASSWORD=$ADMIN_PASSWORD
> MYSQL_PASSWORD=$ADMIN_PASSWORD
> RABBIT_PASSWORD=$ADMIN_PASSWORD
> SERVICE_PASSWORD=$ADMIN_PASSWORD
> SERVICE_TOKEN=tokentoken
> DEST=/opt/stack
>
> # Disable Nova Network and enable Neutron
> disable_service n-net
> enable_service q-svc
> enable_service q-agt
> enable_service q-dhcp
> enable_service q-l3
> enable_service q-meta
>
> # Enable LBaaS v2
> enable_plugin neutron-lbaas https://git.openstack.org/
> openstack/neutron-lbaas stable/newton
> enable_plugin octavia https://git.openstack.org/openstack/octavia
> stable/newton
> enable_service q-lbaasv2
> enable_service octavia
> enable_service o-cw
> enable_service o-hm
> enable_service o-hk
> enable_service o-api
>
> # Neutron options
> Q_USE_SECGROUP=True
> FLOATING_RANGE="172.18.161.0/24"
> FIXED_RANGE="10.0.0.0/24"
> Q_FLOATING_ALLOCATION_POOL=start=172.18.161.250,end=172.18.161.254
> PUBLIC_NETWORK_GATEWAY="172.18.161.1"
> PUBLIC_INTERFACE=eth0
>
> LOG=True
> VERBOSE=True
> LOGFILE=$DEST/logs/stack.sh.log
> LOGDAYS=1
> SCREEN_LOGDIR=$DEST/logs/screen
> SYSLOG=True
> SYSLOG_HOST=$HOST_IP
> SYSLOG_PORT=516
> RECLONE=yes
>
>
> While creating loadbalancer, i am getting error in octavia worker
>
> 2016-11-16 06:13:08.264 4115 INFO octavia.controller.queue.consumer [-]
> Starting consumer...
> 2016-11-16 06:14:58.507 4115 INFO octavia.controller.queue.endpoint [-]
> Creating load balancer '51082942-b348-490

Re: [openstack-dev] [all][ptls][tc][goals] acknowledging community goals for Ocata

2016-11-16 Thread Anne Gentle
On Wed, Nov 16, 2016 at 9:35 AM, Doug Hellmann wrote:

> We still have quite a few teams who have not acknowledged the goal for
> Ocata. Remember, *all* teams are expected to respond, even if there is
> no work to be done. The most important feature of this new process is
> communication, which won't happen if teams don't participate.
>
> Please take a few minutes to review
> http://governance.openstack.org/goals/index.html and
> http://governance.openstack.org/goals/ocata/remove-
> incubated-oslo-code.html
> then submit a patch to add your planning artifacts to
> openstack/governance/goals/ocata/remove-incubated-oslo-code.rst before
> the deadline tomorrow.
>

Hi all,

I wanted to follow up to let you know that Doug and I recorded a short
video to talk about this new process and how we envision the community
working together on this very first attempt at writing down expectations
and setting up this new goal program.

It's about 3 minutes and hopefully it'll help us all understand how to get
the goals across the finish line. If you need more info, always feel free
to reach out and ask. We'll iterate as we go.

Thanks,
Anne


>
> Doug
>



-- 
Anne Gentle
www.justwriteclick.com


Re: [openstack-dev] [neutron] neutron-lib impact

2016-11-16 Thread Armando M.
On 16 November 2016 at 00:55, Gary Kotton  wrote:

> Hi,
>
> The directory integration will break all of the plugins and neutron
> projects. I do not think that this is something that we should do. It
> breaks the neutron API contract.
>

The plugin directory is an implementation internal. Let's be very clear, in
case you have not realized this already:

*Neutron is not supposed to be imported directly by projects and we all
knew it when we started off with the project decomposition.*

neutron-lib is our response to driving adoption of stable interfaces across
the neutron ecosystem of repositories. Forcing ourselves to introduce
artificial deprecation cycles for internal details is not only slowing us
down but it has proven ineffective so far. We should accelerate with the
decoupling of projects so that we can all consider these types of breakages
a thing of the past.


> I think that we should only unblock the patch
> https://review.openstack.org/#/c/386845. I think that due to the fact
> that this patch (very big) will break all plugins, we should only approve
> it once every sub project owner has chimed in.
>
This will mean that she/he will need to understand that there may be some
> tweaks involved in getting unit tests to pass. CI may automagically work.
>

This is impractical and defeats the point of allowing us to go faster. I
have taken the proactive step of announcing this change publicly and with
ample notice. I have addressed many subprojects myself and have already
seen +2/+1 flocking in. I have moved forward without creating busy work for
myself and the review team.


> I feel that as a core reviewer my responsibility is to make sure that we
> do not break things.
>

We are not in a sane situation. It's been two years since we split the repo
up and very little progress has been made to decouple the projects via
stable interfaces. I am trying to identify ways to allow us to accelerate
and you're stifling that effort with your abuse of core rights. I was not
going to let the patch merge without a final announcement at the next team
meeting.


> In addition to this we have a responsibility to ensure that things
> continue to work. Hopefully we can find a way to do this in a more friendly
> manner.
>

I have taken such a responsibility with [1]. It takes us longer to discuss
(something that was already widely agreed on) than to either fix the
breakage or provide a 'fake' backward-compat layer which will lead to the
same breakage as soon as we take it away [2].

That said, I am happy to concede if other members of the core team agrees
with you. As PTL, I have identified a gap that needs to be filled and I am
proactively stepping up to address the gap. I can't obviously be right all
the time, but I was under the impression I had the majority of the core
team on my side.

At this point, I'd invite other neutron core members to review and vote on
the patch.

A.

[1] https://review.openstack.org/#/q/topic:plugin-directory
[2]  https://bugs.launchpad.net/vmware-nsx/+bug/1640319
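
For anyone following along, the mechanical change in each affected repo is
roughly:

# Before: reaching into neutron internals (no stability guarantee).
from neutron import manager
plugin = manager.NeutronManager.get_plugin()

# After: the stable interface shipped in neutron-lib 1.0.0.
from neutron_lib.plugins import directory
plugin = directory.get_plugin()
service_plugins = directory.get_plugins()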


> Thanks
>
> Gary
>
>
>
> *From: *"Armando M." 
> *Reply-To: *OpenStack List 
> *Date: *Wednesday, November 16, 2016 at 6:51 AM
> *To: *OpenStack List 
> *Subject: *[openstack-dev] [neutron] neutron-lib impact
>
>
>
> Hi neutrinos,
>
>
>
> As mentioned during the last team meeting [1], there is a change [2] in
> the works aimed at adopting the neutron plugins directory as provided in
> neutron-lib 1.0.0 [3].
>
>
>
> As shown in [2], the switch to using the directory is relatively
> straightforward. I leave the rest of the affected repos as an exercise for
> the reader :)
>
>
>
> Cheers,
>
> Armando
>
>
>
> [1] http://eavesdrop.openstack.org/meetings/networking/2016/
> networking.2016-11-14-21.00.txt
>
> [2] https://review.openstack.org/#/q/topic:plugin-directory
>
> [3] http://docs.openstack.org/releasenotes/neutron-lib/unreleased.html#id3
>
>
>


[openstack-dev] [i18n] [glance] ESL question 'shared' vs 'shareable'

2016-11-16 Thread Brian Rosmaita
Hello Translators,

We're having a discussion about a new image "visibility" value for Glance,
and before we go too far, it would be helpful to know whether what we're
worried about is going to matter for ESL people.

Here's the situation: Since the Diablo release, Glance end users have had
the ability to share images with other cloud users by adding "members" to
the image.  We call those "shared images".  Previously, we haven't had a
special "visibility" keyword for these, but we are introducing one now
[0].  Here's the problem introduced by that change:

(1) Members can only be added to an image if its current visibility value
allows for it. We're going to make this an explicit visibility state that
we are proposing to call 'shared'.

(2) An image with visibility == 'shared', however, isn't actually
accessible to other users unless they are added as "image members".  So
it's going to be possible for a user to have some images with visibility
== 'shared', but they aren't *really* shared with anyone yet.

(3) For reasons outlined on [0], we're proposing to make this new
visibility the default value in Glance.  This will enable the current
sharing workflow to work in a backward-compatible way.  But some people
are worried that users will panic when they see that their new images have
visibility == 'shared' (even though no other users have access to such
images until "image members" are added).

(4) To address this, we're thinking that maybe the identifier for this
kind of image visibility should be 'shareable'.
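
To make the workflow concrete, here is roughly what it looks like with
python-glanceclient (endpoint, token and project ID are placeholders; the
default visibility literal is exactly what's being debated):

from glanceclient import client as glance_client

glance = glance_client.Client('2', endpoint=GLANCE_ENDPOINT,
                              token=AUTH_TOKEN)

# The new image gets the new default visibility, but at this point no
# other user can see it yet.
image = glance.images.create(name='my-image')

# Only once a member is added (and accepts the membership) is the image
# actually shared with anyone.
glance.image_members.create(image['id'], CONSUMER_PROJECT_ID)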

Finally, here's my question.  For an ESL person looking at these two
identifiers (which, as identifiers, won't be translated):
* shared
* shareable

Are the above so similar that the nuances of the discussion above would be
lost anyway?  In other words, are we just bikeshedding here, or is there a
clear distinction?  What I mean is, is the panic described above likely or
unlikely to happen for an ESL person?

thanks,
brian

[0] https://review.openstack.org/#/c/396919/




[openstack-dev] [neutron][metadata] Is there HTTP attack issue in metadata proxy functionality offered by reference implementation?

2016-11-16 Thread huangdenghui
hi
Currently, the nova metadata service is proxied by the metadata agent in the
dhcp agent or the l3 router agent, depending on whether the network is
attached to a router or not. In essence, the metadata agent implements HTTP
proxy functionality using the compute node's host protocol stack. In other
words, it exposes the host protocol stack to VMs. If a VM is an attacker, it
can launch an HTTP GET flood attack, which may then affect the compute node.
I would like to hear your opinions. Any comment is welcome. Thanks.


Re: [openstack-dev] [glance] Flakey functional test (Was: [Openstack-stable-maint] Stable check of openstack/glance failed)

2016-11-16 Thread Ian Cordasco
-Original Message-
From: tomislav.suk...@telekom.de 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: November 16, 2016 at 09:48:40
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [glance] Flakey functional test (Was: 
[Openstack-stable-maint] Stable check of openstack/glance failed)

> > > Since there's nothing pointing to any problems here, I would just ask is
> > > it possible that log file is not created if there's nothing to log?
> >
> > I don't think that's possible. We start glance's services (when under test)
> > with debug=True and verbose=True which means the config options that are
> > collected and then parsed should be logged to the file, at least.
>  
> It's true, in my case the log file has around 75kB. At least wsgi start is
> captured properly as INFO. However, parsing of configuration options is not
> there.
>  
> > > (and only if that is the case - I would suggest adding some dummy request
> > > Which would result in log entry; if not - please ignore this idea)
> >
> > Thanks for the help though, Tomislav! Did you look for the hash seed that
> > tox was using and try running the tests that way? I've been swamped with
> > other responsibilities this week so I haven't had time to investigate this
> > myself.
>  
> I tried with random seed (without specification), I tried with that specific
> seed and I tried both running all tests (py27) and just a specific one in
> both cases, all that several times. It just doesn't fail. I used Newton
> primarily but also tried on upstream version - just cannot reproduce.
> Although, the only difference which I see is that my machine is a bit slower,
> plus it uses only 2 workers.

Okay, we'll start keeping track of where these tests fail. We might be able to 
identify a specific provider and region and narrow this down to donated 
resources. Thanks Tomislav, you've been very helpful.

--  
Ian Cordasco




[openstack-dev] [ironic] ocata summit summary

2016-11-16 Thread Jim Rollenhagen
Hi all,

Sorry for the late-ness of this email, but wanted to send a wrap-up of the
Ocata summit from ironic's perspective. It was another super productive summit
and I'm thankful for all of the people who showed up and made it so. :)

As always, we discussed priorities for the cycle and those have been documented
here: 
http://specs.openstack.org/openstack/ironic-specs/priorities/ocata-priorities.html
Notes from that session are here:
https://etherpad.openstack.org/p/ironic-ocata-summit-priorities

We discussed a few specs that are up for evolving our API to be more useful
for both humans and computers. Not much contention here, they're all pretty
obvious things to do. Those notes are here:
https://etherpad.openstack.org/p/ironic-ocata-summit-api-evolution

We talked about improvements for our QA code and CI jobs. Lots of different
topics there. Most notable was agreement to consolidate some of our CI jobs
so we aren't burning a nodepool node per feature (essentially). Another
good discussion was around lack of features in Cirros holding us back. A
couple of options forward here are convincing Cirros to enable these, or
building our own image with these features built in. Notes from this session:
https://etherpad.openstack.org/p/ironic-ocata-summit-qa

Another session covered using callbacks and event handlers to handle async
actions in Neutron better. This improves those interactions by waiting until
they are done, rather than hoping things complete in time. It involves adding
code to Neutron or a Neutron plugin. Which we add code to was debated, but
Deva and I chatted with Armando later and agreed it should be okay to add it
to Neutron's tree, assuming it shares much of the code with the existing code
to do the same thing with Nova. Notes are here:
https://etherpad.openstack.org/p/ironic-ocata-summit-neutron-events

We joined the Nova team to discuss how a user might be able to define RAID
configuration during the server-create request. This ended with two options,
which Jay Pipes and Dmitry are going to explore. The first is adding to the
BDM v2 API, and the second is adding to a device metadata tag API. Ironic
will also need to expose a trait that says "can do RAID" to the resource
tracker. Last, we agreed that a flavor may have a default RAID config to be
passed to ironic. And of course, a spec will be needed. However, this work
probably won't happen until Pike or later. Notes:
https://etherpad.openstack.org/p/ironic-ocata-summit-deploy-time-raid

The task framework session discussed how we might build an API to expose
status and progress for asynchronous operations triggered by the API, as well
as returning data for actions like "send this command to the BMC". We made
some progress, there are concerns about the database schema and how we purge
old data from the DB. Notes here:
https://etherpad.openstack.org/p/ironic-ocata-summit-deploy-time-raid

Graphical consoles were another session. Much of the session was trying to
figure out how we secure things, and what should be in and out of tree.
Notes here: https://etherpad.openstack.org/p/ironic-ocata-summit-vnc-console

We spent a session figuring out what blockers exist for priorities, and how
to move around those, how to order things to avoid large conflicts, etc.
The notes are basically a plan to get each feature done. This session was
super valuable to me (and I hope, to everyone else), just to get in sync with
where everything is at.
https://etherpad.openstack.org/p/ironic-ocata-summit-unblock-priorities

Contributor meetup had three major discussions. First, our project logo. Most
folks were not huge fans of it and agreed to leave feedback. Second, what
projects belong in ironic governance or not. I sent an email about that
already: 
http://lists.openstack.org/pipermail/openstack-dev/2016-November/106569.html
Third, we decided that the entire core team should be core on specs, to increase
velocity there. I also emailed about that already:
http://lists.openstack.org/pipermail/openstack-dev/2016-November/106463.html

Thanks again to everyone for making this summit awesome. :)

As always, questions/comments/concerns about this stuff are welcome.

// jim



Re: [openstack-dev] oaktree - a friendly end-user oriented API layer - anybody want to help?

2016-11-16 Thread Monty Taylor
On 11/16/2016 09:34 AM, Monty Taylor wrote:
> On 11/15/2016 11:26 PM, joehuang wrote:
>>> Glance Image Uploads and Swift Object Uploads (and downloads). Having
>>> those two data operations go through an API proxy seems inefficient.
>>> However, having them not in the API seems like a bad user experience.
>>> Perhaps if we take advantage of the gRPC streaming protocol support
>>> doing a direct streaming passthrough actually wouldn't be awful. Or
>>> maybe the better approach would be for the gRPC call to return a URL and
>>> token for a user to POST/PUT to directly. Literally no clue.
>>
>> From bandwidth consideration, the bandwidth for the API service like Oaktree 
>> may not as wide as that for data storage service, for example Swift. That 
>> means
>> if the Oaktree will proxy the image upload, then the bandwidth for the 
>> Oaktree
>> sever may be exhausted soon, and not  able to provide other API service.
> 
> Yes - this is exactly right and a big part of the problem.
> 
>> It's good in Glance V2 that image could be update to a store, then register 
>> the location
>> to a Glance image, but not directly upload bits to Glance API directly.
> 
> Unfortunately for us - we need to support glance v1 PUT, glance v2 PUT,
> glance v2 task import and the new and upcoming glance v2 multi-step
> image upload.
> 
> I had an idea this morning though - tell me what you think.
> 
> The API will be multi-step (similar to the new glance image upload
> process) but with explicit instructions for users. We'll suggest that
> client lib authors who are building friendly libs on top of the oaktree
> client encapsulate the multi-step logic in some manner, and we will
> provide explicit instructions on what the multi-steps are.
> 
> API:
> 
> rpc CreateImage (ImageSpec) returns (ImageUpload) {}
> rpc UploadImageContent (stream ImageContent) returns (ImageUploadStatus) {}
> rpc FinalizeImageUpload (ImageSpec) returns (Image) {}
> 
> rpc GetToken (Location) returns (Token) {}
> 
> message ImageSpec {
>   Location location = 1;
>   string name = 3;
>   uint32 min_ram = 4;
>   uint64 min_disk = 5;
>   // etc - more fields
>   repeated bytes image_content = 99;
> };
> 
> message ImageUpload {
>   enum UploadScheme {
> grpc_upload = 0;
> rest_put = 1;
> swift = 2;
>   };
>   UploadScheme scheme = 1;
>   string endpoint = 2;

Ooh! What if endpoint was actually a repeated field (array)? That way
for PUT operations it would just be a single entry - but for the swift
case, the SLO segment URLs could be pre-computed by oaktree.

It would make "size" a hard requirement from the API - but I'm fine with
that.

Logic below ...

>   map headers = 3;
>   uint32 segement_size = 4;
> };
> 
> The logic is then:
> 
> image_spec = ImageSpec(
>     name='my_image')
> upload = client.CreateImage(image_spec)
> if upload.scheme == ImageUpload.grpc_upload:
>     image = client.UploadImage(open('file', 'r'))
> elif upload.scheme == ImageUpload.rest_put:
>     image = requests.put(
>         upload.endpoint, headers=upload.headers,
>         data=open('file', 'r'))
> elif upload.scheme == ImageUpload.swift:
>     # upload to upload.endpoint, probably as a
>     # swift SLO splitting the content into
>     # segments of upload.segment_size
    count = 0
    content = open('file', 'r')
    for endpoint in upload.endpoints:
        content.seek(count * upload.segment_size)
        requests.put(
            endpoint, headers=upload.headers,
            data=content.read(upload.segment_size))
        count += 1

Making that multi-threaded is an obvious improvement of course.

> image = client.FinalizeImageUpload(image_spec)

Then the creation of the manifest object in swift could be handled in
finalize by oaktree. In fact- that way we could collapse the put and
swift cases to just be a "REST" case - since all of the operations are
PUT to a URL provided by oaktree - and for glance PUT segment_size will
just be == size.

> It's a three-pronged upload approach that a client author has to write -
> but the two different REST interactions should be easy - the grpc
> endpoint should be able to return endpoint/headers/token so that the end
> user doesn't have to interpret _what_ REST call to make - just needs to
> make the exact one that ImageUpload message describes. (the swift upload
> can be documented more precisely - but this email is already long)
> 
> For the swift case, it's possible that the token could expire before all
> of the PUTs are made for each of the image segments. That's why we add a
> GetToken api call - so that in a loop the client can just request
> another token from the gRPC api without having to know anything more
> about those mechanics. Obviously that can also be hidden by client libs
> too - but in a way that's easily replicable across languages - and if
> someone wants to do things by hand, there are very explicit instructions.
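
Concretely, the re-auth loop might look something like this (sketch only -
client, upload and image_spec are from the proto sketch above; nothing is
implemented yet):

for count, endpoint in enumerate(upload.endpoints):
    content.seek(count * upload.segment_size)
    data = content.read(upload.segment_size)
    resp = requests.put(endpoint, headers=upload.headers, data=data)
    if resp.status_code == 401:
        # token expired mid-upload - fetch a fresh one and retry
        token = client.GetToken(image_spec.location)
        upload.headers['X-Auth-Token'] = token.token
        requests.put(endpoint, headers=upload.headers, data=data)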
> 
> The finalize step is important because there are things that may need to
> be per

Re: [openstack-dev] oaktree - a friendly end-user oriented API layer - anybody want to help?

2016-11-16 Thread Monty Taylor
On 11/15/2016 07:16 PM, Jay Pipes wrote:
> Awesome start, Monty :) Comments inline.

Yay - thanks Jay!

> On 11/15/2016 09:56 AM, Monty Taylor wrote:
>> Hey everybody!
>>
>> At this past OpenStack Summit the results of the Interop Challenge were
>> shown on stage. It was pretty awesome - 17 different people from 17
>> different clouds ran the same workload. And it worked!
>>
>> However, one of the reasons it worked is because they all used the
>> Ansible modules we wrote that are based on the shade library that
>> contains the business logic needed to hide vendor differences in clouds.
>> That means that there IS a fantastic OpenStack interoperability story -
>> but only if you program in Python. That's less awesome.
>>
>> With that in mind - I'm pleased to announce a new project that aims to
>> address that - oaktree.
>>
>> oaktree is a gRPC-based API porcelain service for OpenStack that is
>> based on the shade library and I'd love some help in writing it.
>>
>> Basing oaktree on shade gets not only the business logic. Shade already
>> understands a multi-cloud world. And because we use shade in Infra for
>> nodepool, it already has caching, batching and thundering herd
>> protection sorted to be able to hand very high loads efficiently. So
>> while oaktree is new, the primary logic and fundamentals are all shade
>> and are battle-tested.
> 
> ++ muy bueno.
> 
>> The barrier to deployers adding it to their clouds needs to be as low as
>> humanly possible. So as we work on it, ensuring that we keep it
>> dead-simple to install, update and operate must be a primary concern.
>>
>> Where are we and what's next?
>>
>> oaktree doesn't do a whole lot that's terribly interesting at the
>> moment. We have all of the development scaffolding and gate jobs set up
>> and a few functions implemented.
>>
>> oaktree exists currently as two repos - oaktree and oaktreemodel:
>>
>>   http://git.openstack.org/cgit/openstack/oaktree
>>   http://git.openstack.org/cgit/openstack/oaktreemodel
>>
>> oaktreemodel contains the Protobuf definitions and the build scripts to
>> produce Python, C++ and Go code from them. The python code is published
>> to PyPI as a normal pure-python library. The C++ code is published as a
>> source tarball and the Go code is checked back in to the same repo so
>> that go works properly.
> 
> Very nice. I recently started playing around with gRPC myself for some
> ideas I had about replacing part of nova-compute with a Golang worker
> service that can tolerate lengthy disconnections with a centralized
> control plane (hello, v[E]CPE!).

Well, I've got the protoc -> golang generation working in the gate, so
one step down.

> It's been (quite) a few years since I last used protobufs (hey, remember
> Drizzle?) but it's been a blast getting back into protobufs development.
> Now that I see you're using a similar approach for oaktree, I'm
> definitely interested in contributing.

Yah - turns out they're pretty awesome. They are less flexible than REST in
many respects - but so far I'm finding the limitations are actually
quite nice.

Also - you'll note that oaktreemodel has inherited code from Drizzle's
build system. :)

>> oaktree depends on the python oaktreemodel library, and also on shade.
>> It implements the server portion of the gRPC service definition.
>>
>> Currently, oaktree can list and search for flavors, images and floating
>> ips. Exciting right? Most of the work to expose the rest of the API that
>> shade can provide at the moment is going to be fairly straightforward -
>> although in each case figuring out the best mapping will take some care.
>>
>> We have a few major things that need some good community design. These
>> are also listed in a todo.rst file in the oaktree repo which is part of
>> the docs:
>>
>>   http://oaktree.readthedocs.io/en/latest/
>>
>> The auth story. The native/default auth for gRPC is oauth. It has the
>> ability for pluggable auth, but that would raise the barrier for new
>> languages. I'd love it if we can come up with a story that involves
>> making API users in keystone and authorizing them to use oaktree via an
>> oauth transaction.
> 
> ++
> 
>> The keystone auth backends currently are all about
>> integrating with other auth management systems, which is great for
>> environments where you have a web browser, but not so much for ones
>> where you need to put your auth credentials into a file so that your
>> scripts can work. I'm waving my hands wildly here - because all I really
>> have are problems to solve and none of the solutions I have are great.
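For what it's worth, the client-side plumbing doesn't have to be exotic:
gRPC's stock per-call credentials can carry a bearer token. A minimal
sketch, assuming we simply hand it a keystone-issued token (the oaktree
endpoint details here are invented):

import grpc
from keystoneauth1 import identity
from keystoneauth1 import session


def make_channel(target, auth_url, username, password, project_name):
    # Get an ordinary keystone token first.
    auth = identity.Password(
        auth_url=auth_url, username=username, password=password,
        project_name=project_name, user_domain_id='default',
        project_domain_id='default')
    token = session.Session(auth=auth).get_token()

    # Attach it to every RPC as an OAuth2-style bearer credential.
    call_creds = grpc.access_token_call_credentials(token)
    channel_creds = grpc.composite_channel_credentials(
        grpc.ssl_channel_credentials(), call_creds)
    return grpc.secure_channel(target, channel_creds)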
>>
>> Glance Image Uploads and Swift Object Uploads (and downloads). Having
>> those two data operations go through an API proxy seems inefficient.
> 
> Uh, yeah :)
> 
>> However, having them not in the API seems like a bad user experience.
>> Perhaps if we take advantage of the gRPC streaming protocol support
>> doing a direct streaming passthrough actually wouldn't be awful. Or
>> maybe the better approach would be for the gRPC call to return a URL and
>> token for a user to POST/PUT to directly.

Re: [openstack-dev] [glance] Flakey functional test (Was: [Openstack-stable-maint] Stable check of openstack/glance failed)

2016-11-16 Thread Tomislav.Sukser
> > Since there's nothing pointing to any problems here, I would just ask: is
> > it possible that the log file is not created if there's nothing to log?
> 
> I don't think that's possible. We start glance's services (when under test)
> with debug=True and verbose=True which means the config options that are 
> collected and then parsed should be logged to the file, at least.

It's true, in my case the log file is around 75 kB. At least the WSGI start is
captured properly as INFO. However, the parsing of configuration options is not
there.

> > (and only if that is the case - I would suggest adding some dummy request
> > which would result in a log entry; if not - please ignore this idea)
> 
> Thanks for the help though, Tomislav! Did you look for the hash seed that 
> tox was using and try running the tests that way? I've been swamped with
> other responsibilities this week so I haven't had time to investigate this 
> myself.

I tried with a random seed (no specification), I tried with that specific
seed, and in both cases I tried running all tests (py27) as well as just the
specific one, all of that several times. It just doesn't fail. I used Newton
primarily but also tried the upstream version - I just cannot reproduce it.
The only difference I see is that my machine is a bit slower and uses only
2 workers.

Kind regards,
Tomislav

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron]Do we need to rfe to implement active-active router?

2016-11-16 Thread huangdenghui
Hi,
Currently, neutron supports DVR routers and legacy routers. For high
availability, there is an HA router in the reference implementation of both
legacy mode and DVR mode. I am wondering whether an active-active router is
needed in both modes.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oaktree - a friendly end-user oriented API layer - anybody want to help?

2016-11-16 Thread Monty Taylor
On 11/15/2016 11:26 PM, joehuang wrote:
>> Glance Image Uploads and Swift Object Uploads (and downloads). Having
>> those two data operations go through an API proxy seems inefficient.
>> However, having them not in the API seems like a bad user experience.
>> Perhaps if we take advantage of the gRPC streaming protocol support
>> doing a direct streaming passthrough actually wouldn't be awful. Or
>> maybe the better approach would be for the gRPC call to return a URL and
>> token for a user to POST/PUT to directly. Literally no clue.
> 
> From a bandwidth perspective, the bandwidth for an API service like Oaktree
> may not be as wide as that for a data storage service, for example Swift. That
> means
> that if Oaktree proxies the image upload, the bandwidth for the Oaktree
> server may be exhausted soon, leaving it unable to provide other API services.

Yes - this is exactly right and a big part of the problem.

> It's good that in Glance v2 an image can be uploaded to a store, and the
> location then registered
> with a Glance image, rather than uploading bits directly to the Glance API.

Unfortunately for us - we need to support glance v1 PUT, glance v2 PUT,
glance v2 task import and the new and upcoming glance v2 multi-step
image upload.

I had an idea this morning though - tell me what you think.

The API will be multi-step (similar to the new glance image upload
process) but with explicit instructions for users. We'll suggest that
client lib authors who are building friendly libs on top of the oaktree
client encapsulate the multi-step logic in some manner, and we will
provide explicit instructions on what the multi-steps are.

API:

rpc CreateImage (ImageSpec) returns (ImageUpload) {}
rpc UploadImageContent (stream ImageContent) returns (ImageUploadStatus) {}
rpc FinalizeImageUpload (ImageSpec) returns (Image) {}

rpc GetToken (Location) returns (Token) {}

message ImageSpec {
  Location location = 1;
  string name = 3;
  uint32 min_ram = 4;
  uint64 min_disk = 5;
  // etc - more fields
  repeated bytes image_content = 99;
};

message ImageUpload {
  enum UploadScheme {
    grpc_upload = 0;
    rest_put = 1;
    swift = 2;
  };
  UploadScheme scheme = 1;
  string endpoint = 2;
  map<string, string> headers = 3;
  uint32 segment_size = 4;
};

The logic is then:

image_spec = ImageSpec(
    name='my_image')
upload = client.CreateImage(image_spec)
if upload.scheme == ImageUpload.grpc_upload:
    client.UploadImageContent(open('file', 'r'))
elif upload.scheme == ImageUpload.rest_put:
    requests.put(
        upload.endpoint, headers=upload.headers,
        data=open('file', 'r'))
elif upload.scheme == ImageUpload.swift:
    # upload to upload.endpoint, probably as a
    # swift SLO splitting the content into
    # segments of upload.segment_size
    pass
image = client.FinalizeImageUpload(image_spec)

It's a three-pronged upload approach that a client author has to write -
but the two different REST interactions should be easy - the gRPC
endpoint should be able to return endpoint/headers/token so that the end
user doesn't have to figure out _what_ REST call to make - they just need
to make the exact one that the ImageUpload message describes. (the swift
upload can be documented more precisely - but this email is already long)

For the swift case, it's possible that the token could expire before all
of the PUTs are made for each of the image segments. That's why we add a
GetToken api call - so that in a loop the client can just request
another token from the gRPC api without having to know anything more
about those mechanics. Obviously that can also be hidden by client libs
too - but in a way that's easily replicable across languages - and if
someone wants to do things by hand, there are very explicit instructions.
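To illustrate, a rough sketch of that loop in a client lib. GetToken and
segment_size come from the messages above; split_into_segments() and the
Token.token field name are assumptions:

import requests


def upload_swift_segments(client, upload, location, data):
    # Sketch only - the SLO segment URL scheme is hand-waved here.
    token = client.GetToken(location).token
    for index, segment in enumerate(
            split_into_segments(data, upload.segment_size)):
        url = '%s/%06d' % (upload.endpoint, index)
        resp = requests.put(
            url, headers={'X-Auth-Token': token}, data=segment)
        if resp.status_code == 401:
            # Token expired mid-upload: grab a fresh one from the
            # gRPC api and retry just this segment.
            token = client.GetToken(location).token
            resp = requests.put(
                url, headers={'X-Auth-Token': token}, data=segment)
        resp.raise_for_status()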

The finalize step is important because there are things that may need to
be performed after the upload in all cases. For the swift case, the
import task has to be spawned and waited on. For the put and gRPC cases
there are metadata fields, like protected, that can only be set once the
other actions are complete.

(there are parts of this that are hand-wavey - but how does it sound in
general?)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila][cinder] API and entity naming consistency

2016-11-16 Thread Ben Swartzlander

On 11/16/2016 10:22 AM, Valeriy Ponomaryov wrote:

At the moment the Manila project, as well as Cinder, does have
inconsistency between entity and API naming, such as:
- "share type" ("volume type" in Cinder) entity has "/types/{id}" URL
- "share snapshot" ("volume snapshot" in Cinder) entity has
"/snapshots/{id}" URL

BUT, Manila has other Manila-specific APIs, as follows:

- "share network" entity and "/share-networks/{id}" API
- "share server" entity and "/share-servers/{id}" API

And with the implementation of new features [1] it becomes a problem,
because we start having
"types" and "snapshots" for different things (shares and share groups,
share types and share group types).

So, here is the first open question:

What is our convention in naming APIs according to entity names?

- Should APIs contain the full name, or may it be shortened?
- Should we restrict it to one of the variants (full or shortened), or
allow some APIs to follow one approach and some the other, and consider
it "don't care"? The "don't care" case is, de facto, the current
approach.


I think that consistency is important, but the question is consistency 
with what. Right now we have an inconsistent design and it will take 
effort to change it either way. If we're going to spend that effort 
there needs to be a good reason.


Initially I had been in favor of "share-groups" over just "groups", 
however if we go that direction it will make all of the places where we 
don't use the share- prefix that much more glaring. Consistency with 
the past and with cinder would suggest that we should avoid using share- 
prefixes wherever possible, and we should look into removing them from 
places where we added them somewhat gratuitously (share networks, share 
servers, share instances).



Then, we have a second question here:

- Should we use only "dash" ( - ) symbols in API names, or is "underscore"
( _ ) allowed?


Underscores should never be used. This seems like a mistake made when 
share instances were added.



- Should we allow both variants at once for each API?


Thanks to microversions, if we change any API we can support only the 
old name for the old microversion and only the new name for the new 
microversion. There is no reason to support both at the same time for 
any microversion.
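For illustration, the existing microversion decorators already support
exactly that pattern - something like this sketch, where the controller
name, the helper and the version numbers are all made up:

from manila.api.openstack import wsgi


class ShareInstancesController(wsgi.Controller):

    @wsgi.Controller.api_version('2.3', '2.34')
    def index(self, req):
        # Old microversions keep serving the old name/behaviour.
        return self._list_instances(req)  # _list_instances is made up

    @wsgi.Controller.api_version('2.35')  # noqa
    def index(self, req):
        # Newer microversions serve only the renamed API.
        return self._list_instances(req)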



- Should we allow APIs to use any of the variants and have a zoo of
various approaches?

In the Manila project, mostly "dash" is used, except for one API -
"share_instances".

[1] https://review.openstack.org/#/c/315730/

--
Kind Regards
Valeriy Ponomaryov
vponomar...@mirantis.com 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][ptls][tc][goals] acknowledging community goals for Ocata

2016-11-16 Thread Doug Hellmann
We still have quite a few teams who have not acknowledged the goal for
Ocata. Remember, *all* teams are expected to respond, even if there is
no work to be done. The most important feature of this new process is
communication, which won't happen if teams don't participate.

Please take a few minutes to review
http://governance.openstack.org/goals/index.html and
http://governance.openstack.org/goals/ocata/remove-incubated-oslo-code.html
then submit a patch to add your planning artifacts to
openstack/governance/goals/ocata/remove-incubated-oslo-code.rst before
the deadline tomorrow.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr][fip] router support two external network

2016-11-16 Thread huangdenghui
Hi
Thanks, this is the case I was looking for.




At 2016-11-13 19:38:57, "Irena Berezovsky"  wrote:

Hi,
The case you are describing may be related to the previously discussed RFE [1].
Having additional networks with FIP range attached via router interface should 
be allowed from the API point of view, but may need some adaptations to make it 
work properly. Please see the details in the discussion log [1].


[1]  https://bugs.launchpad.net/neutron/+bug/1566191


BR,
Irena






On Sun, Nov 13, 2016 at 12:35 PM, Gary Kotton  wrote:


Hi,

Today the mapping is 1:1. So if you want additional mappings to internal 
networks then you can define more than one interface on your instance. Then map 
each interface to the relevant network.

Thanks

Gary

 

From: huangdenghui 
Reply-To: OpenStack List 
Date: Saturday, November 12, 2016 at 10:26 AM
To: OpenStack List 
Subject: [openstack-dev] [neutron][dvr][fip] router support two external network

 

Hi all
Currently, the neutron model supports one router with one external network, 
which is used to connect the router to the outside world. A FIP can be 
allocated from the external network which is the gateway of a router. One 
private fixed IP of one port (usually a VM port) can only be associated with 
one floating IP. In some deployment scenarios all ports are served by one 
router; all ports need an IP address reachable from the intranet, and some 
ports also need an IP address reachable from the internet. I was wondering how 
neutron could resolve this kind of use case. One idea is one router supporting 
two external networks (one for intranet, the other for internet, with only one 
having the gateway); the other idea is one router still having only one 
external network, but that external network having two different types of 
subnets (one for internet, the other for intranet). Any comment is welcome. 
Thanks.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila][cinder] API and entity naming consistency

2016-11-16 Thread Valeriy Ponomaryov
For the moment Manila project, as well as Cinder, does have inconsistency
between entity and API naming, such as:
- "share type" ("volume type" in Cinder) entity has "/types/{id}" URL
- "share snapshot" ("volume snapshot" in Cinder) entity has
"/snapshots/{id}" URL

BUT, Manila has other Manila-specific APIs as following:

- "share network" entity and "/share-networks/{id}" API
- "share server" entity and "/share-servers/{id}" API

And with implementation of new features [1] it becomes a problem, because
we start having
"types" and "snapshots" for different things (share and share groups, share
types and share group types).

So, here is first open question:

What is our convention in naming APIs according to entity names?

- Should APIs contain full name or it may be shortened?
- Should we restrict it to some of the variants (full or shortened) or
allow some API follow one approach and some follow other approach, consider
it as "don't care"? Where "don't care" case is current approach, de facto.

Then, we have second question here:

- Should we use only "dash" ( - ) symbols in API names or "underscore" ( _
) is allowed?
- Should we allow both variants at once for each API?
- Should we allow APIs use any of variants and have zoo with various
approaches?

In Manila project, mostly "dash" is used, except one API -
"share_instances".

[1] https://review.openstack.org/#/c/315730/

-- 
Kind Regards
Valeriy Ponomaryov
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][hyperv] hyper-v CI issues

2016-11-16 Thread Matt Riedemann
cfriesen was asking in IRC today why this libvirt-only driver change in 
nova kept failing the hyper-v CI:


https://review.openstack.org/#/c/346263

http://64.119.130.115/nova/346263/9/

It looks like there are a few issues:

1. The hyper-v CI doesn't appear to be testing cfriesen's change, it's 
testing this:


https://review.openstack.org/#/c/273504

Because this is in the logs:

"HEAD is now at 431e019 Merge commit 'refs/changes/04/273504/19' of 
ssh://review.openstack.org:29418/openstack/nova into HEAD"


2. I also noticed it's cherry picking a Tempest change:

+ git fetch git://git.openstack.org/openstack/tempest 
refs/changes/49/383049/8

From git://git.openstack.org/openstack/tempest
 * branch            refs/changes/49/383049/8 -> FETCH_HEAD
+ git cherry-pick FETCH_HEAD
[master ec1bd9a] wait for port status to be ACTIVE

Which is: https://review.openstack.org/#/c/383049/

There is quite a bit of discussion in that patch, but essentially it's 
trying to workaround the fact that the hyper-v driver isn't waiting for 
vif plugged events when using neutron, so the server goes ACTIVE before 
the networking is completely setup.


I think that all virt drivers in nova that are using neutron, which 
needs to be all of them now as nova-network is going away, should be 
implementing the vif plugging wait/timeout code - it was added 
specifically for the CI issues we used to have in the gate before the 
neutron jobs were voting.
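For reference, here is roughly what that pattern looks like, modelled on
the libvirt driver; the names are approximate and the hyper-v wiring is
hypothetical:

import eventlet

from nova import exception
import nova.conf

CONF = nova.conf.CONF


class DriverSketch(object):

    def _spawn_waiting_for_vifs(self, instance, network_info, spawn_fn):
        # One expected event per port that neutron reports inactive.
        events = [('network-vif-plugged', vif['id'])
                  for vif in network_info
                  if not vif.get('active', True)]
        try:
            with self.virtapi.wait_for_instance_event(
                    instance, events,
                    deadline=CONF.vif_plugging_timeout,
                    error_callback=None):
                spawn_fn()  # create and power on the VM
        except eventlet.timeout.Timeout:
            # Neutron never sent network-vif-plugged.
            if CONF.vif_plugging_is_fatal:
                raise exception.VirtualInterfaceCreateException()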


So who is working on adding that support to the hyper-v driver? I think 
that basically trumps any feature work that the hyper-v team is trying 
to get done in Ocata.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api]

2016-11-16 Thread Ed Leafe
On Nov 16, 2016, at 7:42 AM, Ian Cordasco  wrote:

> If you're including me as a vote for nin, you should also consider me as a 
> vote for not_in. Otherwise, count me as half a vote for either. I still think 
> not_in is just ever-so slightly better than nin. 

It is on the agenda for the API working group’s meeting tomorrow (Thursday) at 
1600 UTC in #openstack-meeting-3
(http://www.timeanddate.com/worldclock/fixedtime.html?iso=20161117T16)

Please join the meeting tomorrow if you are able!

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Problem with Quota and servers spawned in groups

2016-11-16 Thread Chris Friesen

On 11/15/2016 06:50 PM, melanie witt wrote:

On Tue, 15 Nov 2016 18:10:40 -0600, Chris Friesen wrote:

I'm in favor of your change, since the existing behaviour doesn't make
sense.

But at some point I guess consistency trumps correctness, and if a new
microversion is necessary to mark the new behaviour then a spec is
required, and at that point we might want to fix the other issues with
multi-boot at the same time.  (Like
https://bugs.launchpad.net/nova/+bug/1458122 )


I think what Sławek is saying is that the quota behavior for multi-create
already changed at some point in the past, without a spec. He did experiments
recently that show a multi-create request succeeds as long as the min_count is
satisfied when there isn't enough quota for max_count. This is different than
the behavior at the time you opened the bug. So it seems the horse has left the
barn on this one.
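For readers who haven't used it: multi-create is the min_count/max_count
form of the boot request, e.g. with novaclient (all values illustrative):

from novaclient import client

nova = client.Client('2', 'user', 'password', 'project',
                     'http://keystone.example.com:5000/v2.0')
# Ask nova to boot at least 2 and at most 10 servers in one request.
server = nova.servers.create(
    name='batch', image='<image-uuid>', flavor='m1.small',
    min_count=2, max_count=10)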


The bug I reported is not related to quota, but rather the ability to schedule 
the instances.


The issue in the bug report is that if I ask to boot a min of X and a max of Z 
instances, and only Y instances can be scheduled (where X < Y < Z), then the 
boot will fail and all the instances will be put into an ERROR state.


Arguably what *should* happen is that Y instances get created.  Also I think it 
would make more sense if the remaining  Z-Y instances are just never created 
rather than being created in an ERROR state.


Chris



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] meeting format poll

2016-11-16 Thread Jeremy Stanley
On 2016-11-16 09:33:54 -0500 (-0500), Steve Martinelli wrote:
[...]
> As silly as it sounds, not having to log in has
> made a noticeable difference -- it's not just me (or another ptl) setting
> the agenda.

Not silly at all, and in fact very useful feedback! I still hold out
hope that once we migrate off Launchpad SSO we'll be able to begin
migrating toward more transparent authentication for the
applications we're hosting, so that users no longer have to click
through several pages to authenticate each Webapp, only to have it
expire again soon thereafter.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Craton] NFV planned host maintenance

2016-11-16 Thread Ian Cordasco
-Original Message-
From: Juvonen, Tomi (Nokia - FI/Espoo) 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: November 11, 2016 at 02:27:19
To: OpenStack Development Mailing List (not for usage questions)

Subject:  [openstack-dev] [Craton] NFV planned host maintenance

> I have been looking over the past two OpenStack summits at the changes needed
> to fulfill the OPNFV Doctor use case for planned host maintenance, and at the
> same time trying to find other Ops requirements to satisfy different needs. I
> was just about to start a new project (Fenix), but looking at Craton, it seems
> a good alternative and was proposed to me at the Barcelona meetup. Here are
> some ideas; I would like comments on whether Craton could be used here.

Hi Tomi,

Thanks for your interest in craton! I'm replying in-line, but please
come and join us in #craton on Freenode as well!

> OPNFV Doctor / NFV requirements are described here:
> http://artifacts.opnfv.org/doctor/docs/requirements/02-use_cases.html#nvfi-maintenance
> http://artifacts.opnfv.org/doctor/docs/requirements/03-architecture.html#nfvi-maintenance
> http://artifacts.opnfv.org/doctor/docs/requirements/05-implementation.html#nfvi-maintenance
>
> My rough thoughts about what would be initially needed (as short as I can):
>
> - There should be a database of all hosts matching what is known by Nova.

So I think this might be the first problem that you'll run into with Craton.

Craton is designed to specifically manage the physical devices in a
data centre. At the moment, it only considers the hosts that you'd run
Nova on, not the Virtual Machines that Nova is managing on the Compute
hosts.

It's plausible that we could add the ability to track virtual
machines, but Craton is meant to primarily work underneath the cloud.
I think this might be changing since Craton is looking forward to
helping manage a multi-cloud environment, so it's possible this won't
be an issue for long.

> - There should be an API for the Cloud Admin to set a planned maintenance
> window for a host (maybe an aggregate, or group of hosts), when in
> maintenance, and unset it when finished. There might be some optional
> parameters like a target host where to move things currently running on the
> affected host. It could also be used for retirement of a host.

This sounds like it's part of the next phase of Craton development -
the remediation workflows. I think Jim and Sulo are more suited
towards talking to that though.

> - There should be project (tenant)- and host-specific notifications that could:

We are talking about an events/notifications system.

> - Trigger an alarm in Aodh so the application would be aware of maintenance
> state changes affecting its servers, so zero downtime of the application
> could be guaranteed.

I'm not sure it should be Craton's responsibility to do this, but I
expect the administrator could set alarm criteria based off of
Craton's events stream.

> - Notifications could be consumed by a workflow engine like Mistral, where
> application-server-specific action flows and admin action flows could
> be performed (to move servers away, disable the host, ...).
> - Host monitoring like Vitrage could consume the notification to disable
> alarms for the host, as planned maintenance is ongoing and it is not down
> due to a fault.
> - There should be admin- and project-level APIs to query maintenance session
> status.
> - Workflow status should be queried, or read as notifications, to keep
> internal state and send further notifications.
> - Some more discussion also in "BCN-ops-informal-meetup" that goes beyond 
> this:
> https://etherpad.openstack.org/p/BCN-ops-informal-meetup

These are all interesting ideas. Thank you!

> What else, details, problems:
>
> There is a problem with flow engine actions. Depending on how long maintenance
> would take or what type of server is running, the application wants flows to
> behave
> differently. Application-specific flows could surely be done, but the problem
> is that they need to perform admin actions. It has to be solved how an
> application can decide on action flows while only an admin can run them.
> Should the admin create the flows and give the application the power to
> choose, via a hint in nova metadata or in the notification going to the flow
> engine?
>
> I started a discussion at the Austin summit about extending planned host
> maintenance in Nova, but it was agreed there could just be a link to an
> external tool. Now, if this tool existed in OpenStack, I would suggest linking
> it like this, though surely this is to be seen once the external tool
> implementation exists:
> - The Nova services API could have a way for an admin to set and unset a "base
> URL" pointing to an external tool about planned maintenance affecting a host.
> - An admin should see the link to the external tool when querying services via
> the services API. This might be formed like: {base URL}/{host_name}
> - A project should have a project-specific link to the external tool when
> querying via the Nova servers API. This might be: {base URL}/project/{hostId}.
> hostId is exposed 

Re: [openstack-dev] [Tacker] Unable to assign IP address to connection points.

2016-11-16 Thread HADDLETON, Robert W (Bob)

Hi Prasad:
The first two things to check are:

1 - Check the VM instance in Horizon to confirm that there are three IP 
addresses assigned to it.  If there is only one IP address assigned to 
the VM, check the subnet configuration for the vnf_private and private 
networks and make sure they have DHCP enabled.


2 - Verify that the image you are using is configured to enable DHCP on 
eth1 and eth2 or their equivalent network interfaces.


If there ARE three IP addresses assigned to the VM from step 1 then it 
is likely that the image is not configured to support DHCP on eth1 and eth2.
You would need to either modify the image to enable DHCP on eth1 and 
eth2 and then save the modified image, or find a new image that has DHCP 
enabled on those ports.


Hope this helps

Bob


On 11/16/2016 4:11 AM, prasad kokkula wrote:

Hi All,

[Tacker] I have tried to launch the VNF instance using Tacker. The VNF 
is launched successfully and I am able to SSH to it.


I have faced an issue: the connection points (CP2, CP3) are not 
getting IP addresses, except the management CP (CP1). Could you please 
let me know whether this is a Tacker issue or a configuration mismatch.


I have installed the OpenStack Newton release on CentOS 7. Please let me 
know if you need any other configuration.




=
Below are the net-list IPs:

[root@localhost (keystone_admin)]# neutron net-list
+--------------------------------------+-------------+-------------------------------------------------------+
| id                                   | name        | subnets                                               |
+--------------------------------------+-------------+-------------------------------------------------------+
| 55077c0e-8291-4730-99b4-f280967cb69e | public      | 39256aad-d075-4c38-bf2c-14613df2252e 172.24.4.224/28  |
| 73bbaf70-9bdd-4359-a3a2-09dbd5734341 | private     | 09b9018c-ca3b-46ee-9a4e-507e5124139f 10.0.0.0/24      |
| d0560ee9-9ab0-4df8-a0d2-14064950a17c | vnf_mgmt    | 01d2b67c-ee28-4875-92e0-a8e51fdf8401 192.168.200.0/24 |
| f98f38b8-8b6c-4adb-b0e9-a265ce969acf | vnf_private | 61d39f59-2ff7-4292-afd9-536f007fd30c 192.168.201.0/24 |
+--------------------------------------+-------------+-------------------------------------------------------+
[root@localhost (keystone_admin)]#

TOSCA file used for VNF creation.


[root@localhost (keystone_admin)]# cat sample-vnfd.yaml

tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Demo vCPE example

metadata:
  template_name: sample-tosca-vnfd

topology_template:
  node_templates:
VDU1:
  type: tosca.nodes.nfv.VDU.Tacker
  capabilities:
nfv_compute:
  properties:
num_cpus: 1
mem_size: 512 MB
disk_size: 1 GB
  properties:
image: cirros1
availability_zone: nova
mgmt_driver: noop
user_data_format: RAW
config: |
  param0: key1
  param1: key2

CP1:
  type: tosca.nodes.nfv.CP.Tacker
  properties:
management: true
  requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1

CP2:
  type: tosca.nodes.nfv.CP.Tacker
  properties:
anti_spoofing_protection: false
  requirements:
- virtualLink:
node: VL2
- virtualBinding:
node: VDU1

CP3:
  type: tosca.nodes.nfv.CP.Tacker
  properties:
anti_spoofing_protection: false
  requirements:
- virtualLink:
node: VL3
- virtualBinding:
node: VDU1

VL1:
  type: tosca.nodes.nfv.VL
  properties:
network_name: vnf_mgmt
vendor: Tacker

VL2:
  type: tosca.nodes.nfv.VL
  properties:
network_name: vnf_private
vendor: Tacker

VL3:
  type: tosca.nodes.nfv.VL
  properties:
network_name: private
vendor: Tacker

===


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


<>__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Weekly Policy Meeting

2016-11-16 Thread Lance Bragstad
Just sending out a reminder that we'll be having our first meeting in 90
minutes. You can find all information about our agenda in the etherpad [0]
as well as a link to the hangout [1].

See you there!

[0] https://etherpad.openstack.org/p/keystone-policy-meeting
[1] https://hangouts.google.com/call/pd36j4qv5zfbldmhxeeatq6f7ae


On Fri, Nov 11, 2016 at 8:33 AM, Lance Bragstad  wrote:

> I've added some initial content to the etherpad [0], to get things
> rolling. Since this is going to be a recurring thing, I'd like our first
> meeting to level set the playing field for everyone. Let's spend some time
> getting familiar with policy concepts, understand exactly how OpenStack
> policy works today, then we can start working on writing down what we like
> and don't like about the existing implementation. I'm sure most people
> interested in this work will already be familiar with the problem, but I
> want to make it easy for folks who aren't to ramp up quickly and get them
> into the discussion.
>
> Some have already started contributing to the etherpad! I've slowly
> started massaging that information into our first agenda. I'll continue to
> do so and send out another email on Tuesday as a reminder to familiarize
> yourselves with the etherpad before the meeting.
>
>
> Thanks!
>
>
> [0] https://etherpad.openstack.org/p/keystone-policy-meeting
>
> On Thu, Nov 10, 2016 at 2:36 PM, Steve Martinelli 
> wrote:
>
>> Thanks for taking the initiative Lance! It'll be great to hear some ideas
>> that are capable of making policy more fine grained, and keeping things
>> backwards compatible.
>>
>> On Thu, Nov 10, 2016 at 3:30 PM, Lance Bragstad 
>> wrote:
>>
>>> Hi folks,
>>>
>>> After hearing the recaps from the summit, it sounds like policy was a
>>> hot topic (per usual). This is also reinforced by the fact every release we
>>> have specifications proposed to re-do policy in some way.
>>>
>>> It's no doubt policy in OpenStack needs work. Let's dedicate an hour a
>>> week to policy, analyze what we have currently, design an ideal solution,
>>> and aim for that. We can bring our progress to the PTG in Atlanta.
>>>
>>> We'll hold the meeting openly using Google Hangouts and record our notes
>>> using etherpad.
>>>
>>> Our first meeting will be Wednesday, November 16th from 10:00 AM –
>>> 11:00 AM Central (16:00 - 17:00 UTC) and it will reoccur weekly.
>>>
>>> Hangout: https://hangouts.google.com/call/pd36j4qv5zfbldmhxeeatq6f7ae
>>> Etherpad: https://etherpad.openstack.org/p/keystone-policy-meeting
>>>
>>> Let me know if you have any other questions, comments or concerns. I
>>> look forward to the first meeting!
>>>
>>> Lance
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] meeting format poll

2016-11-16 Thread Steve Martinelli
On Wed, Nov 16, 2016 at 9:10 AM, Jeremy Stanley  wrote:

> On 2016-11-15 17:23:49 -0500 (-0500), Steve Martinelli wrote:
> > I don't bother with the time slider, the meeting agenda is never deleted
> > from the etherpad, we just keep tacking on
>
> Oh, I see, using it as an append-only log (and hope nobody erases
> anything or the pad doesn't spontaneously corrupt, which we have
> seen happen from time to time). Well, 1. you could also just do the
> same thing with a wiki page, but more importantly 2. etherpads start
> to perform really poorly when your content reaches a certain size so
> you may find you have to periodically rotate to a fresh pad when it
> begins to bog down (keep on the lookout for that).
>

Yeah, I figured this would eventually happen, at which point we'll either
prune or move to a new one. As silly as it sounds, not having to log in has
made a noticeable difference -- it's not just me (or another PTL) setting
the agenda.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Stable] Usefulness of Weekly Meeting

2016-11-16 Thread Matt Riedemann

On 11/16/2016 7:53 AM, Ian Cordasco wrote:



-Original Message-
From: Ian Cordasco 
Reply: Ian Cordasco 
Date: November 16, 2016 at 07:06:27
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [Stable] Usefulness of Weekly Meeting

-Original Message-
From: Tony Breeds
Reply: OpenStack Development Mailing List (not for usage questions) ,
OpenStack Development Mailing List (not for usage questions)
Date: November 15, 2016 at 16:55:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Stable] Usefulness of Weekly Meeting


On Tue, Nov 15, 2016 at 10:25:26AM -0800, Ian Cordasco wrote:

Hi all,

So the stable-maintenance team (and liaisons to it) have had a meeting
scheduled for a while now. Recently, however, we've had trouble
getting more than one person to attend meetings when we have them.


I need to apologise for letting this happen.


I'd like to emphasize that I don't see this as any one person's fault. I just 
was thinking
aloud about whether or not we find much use. We're not exactly a high activity 
team and
we do communicate very well via this very medium.


The current arrangement with 2
meetings US/EU and US/AU means that I can only attend the US/AU timeslot, and I
failed to set someone to run the US/EU meeting. I'd like to propose that we
switch the arrangement to:

US/AU: #openstack-meeting-4 2100 UTC Monday (same time as now different room)
AU/EU: #openstack-meeting-4 1000 UTC Monday


This sounds fine to me. It might even be less confusing than having two 
separate days and
channels too.


Alternate weeks, although we *could* run both meetings on the same day every
2nd week if people wanted.


I wonder if it would be more useful to less frequent meetings (perhaps
every other week) and if we need to reschedule them to better serve
those who plan to attend.


As always it's a question of less often is harder to form a habit. I'd like to
request we try the new schedule until the PTG and then re-evaluate.


That sounds fine with me. I just wanted to gather some other feedback. :)


I submitted https://review.openstack.org/398363 so that folks can see how this 
translates to their calendar using the generated ics files.

Cheers!
--
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



FWIW I'm fine with doing fewer meetings or just shifting the meetings to 
a time that is more accommodating for Tony. As noted, we do a pretty 
good job already of communicating via the mailing list when we need to. 
The stable team meetings have never been well attended even when I was 
running them regularly.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] meeting format poll

2016-11-16 Thread Jeremy Stanley
On 2016-11-15 17:23:49 -0500 (-0500), Steve Martinelli wrote:
> I don't bother with the time slider, the meeting agenda is never deleted
> from the etherpad, we just keep tacking on

Oh, I see, using it as an append-only log (and hope nobody erases
anything or the pad doesn't spontaneously corrupt, which we have
seen happen from time to time). Well, 1. you could also just do the
same thing with a wiki page, but more importantly 2. etherpads start
to perform really poorly when your content reaches a certain size so
you may find you have to periodically rotate to a fresh pad when it
begins to bog down (keep on the lookout for that).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Flakey functional test (Was: [Openstack-stable-maint] Stable check of openstack/glance failed)

2016-11-16 Thread Ian Cordasco
 

-Original Message-
From: tomislav.suk...@telekom.de 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: November 16, 2016 at 07:46:18
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [glance] Flakey functional test (Was: 
[Openstack-stable-maint] Stable check of openstack/glance failed)

> > Hi Glance team members,
> >
> > Over the weekend one of our stable periodic jobs failed. It failed on
> > a test (glance.tests.functional.test_reload.TestReload.test_reload)
> > that I've seen fail a couple times previously. I've created a bug for
> > this: https://bugs.launchpad.net/glance/+bug/1641670 And I'm hoping
> > someone will be able to reproduce it and fix it.
> >
> > I suspect, this is a matter of resource contention in the donated VMs
> > that our CI uses, but I can't be certain.
> >
> > The stable team would greatly appreciate help diagnosing and fixing this 
> > issue.
>  
> Hi,
>  
> Even though I was unable to reproduce the issue, I found something slightly
> odd. The reloaded log file is named etcnew.log and placed in the same
> directory as api.log, registry.log and others. It looks like this is due to
> the name construction (in glance/tests/functional/test_reload.py:245):
>  
> conf_dir = os.path.join(self.test_dir, 'etc')
> log_file = conf_dir + 'new.log'
>  
> Although this might be nice to fix, it doesn't look like a problem to me.
>  
> Since there's nothing pointing to any problems here, I would just ask: is
> it possible that the log file is not created if there's nothing to log?

I don't think that's possible. We start glance's services (when under test) 
with debug=True and verbose=True which means the config options that are 
collected and then parsed should be logged to the file, at least.

> (and only if that is the case - I would suggest adding some dummy request
> which would result in a log entry; if not - please ignore this idea)

Thanks for the help though, Tomislav! Did you look for the hash seed that tox 
was using and try running the tests that way? I've been swamped with other 
responsibilities this week so I haven't had time to investigate this myself.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] Proposing Jay Faulkner for ironic-stable-maint

2016-11-16 Thread jim

> On Nov 16, 2016, at 08:02, Dmitry Tantsur  wrote:
> 
> Hi!
> 
> I'm formally proposing that the ironic-stable-maint team [1] adds Jay 
> Faulkner. He's been consistently reviewing stable patches as shown by [2]. I 
> fully trust that his operator experience will help his judgment on not 
> landing dangerous things :)
> 
> So for those on the team already, please reply with a +1 or -1 vote.

+1 from me, let's have Jay help land code instead of telling the current cores 
things are ready to land :)

// jim 

> 
> [1] https://review.openstack.org/#/admin/groups/950,members
> [2] 
> https://review.openstack.org/#/q/(branch:stable/mitaka+OR+branch:stable/newton)+AND+reviewer:%22Jay+Faulkner+%253Cjay%2540jvf.cc%253E%22
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Stable] Usefulness of Weekly Meeting

2016-11-16 Thread Ian Cordasco
 

-Original Message-
From: Ian Cordasco 
Reply: Ian Cordasco 
Date: November 16, 2016 at 07:06:27
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [Stable] Usefulness of Weekly Meeting
> -Original Message-
> From: Tony Breeds  
> Reply: OpenStack Development Mailing List (not for usage questions) ,  
> OpenStack Development Mailing List (not for usage questions)  
> Date: November 15, 2016 at 16:55:36
> To: OpenStack Development Mailing List (not for usage questions)  
> Subject: Re: [openstack-dev] [Stable] Usefulness of Weekly Meeting
>  
> > On Tue, Nov 15, 2016 at 10:25:26AM -0800, Ian Cordasco wrote:
> > > Hi all,
> > >
> > > So the stable-maintenance team (and liaisons to it) have had a meeting
> > > scheduled for a while now. Recently, however, we've had trouble
> > > getting more than one person to attend meetings when we have them.
> >
> > I need to apologise for letting this happen.
>  
> I'd like to emphasize that I don't see this as any one person's fault. I just 
> was thinking  
> aloud about whether or not we find much use. We're not exactly a high 
> activity team and  
> we do communicate very well via this very medium.
>  
> > The current arrangement with 2
> > meetings US/EU and US/AU means that I can only attend the US/AU timeslot, 
> > and I
> > failed to set someone to run the US/EU meeting. I'd like to propose that we
> > switch the arrangement to:
> >
> > US/AU: #openstack-meeting-4 2100 UTC Monday (same time as now different 
> > room)
> > AU/EU: #openstack-meeting-4 1000 UTC Monday
>  
> This sounds fine to me. It might even be less confusing than having two 
> separate days and  
> channels too.
>  
> > Alternate weeks, although we *could* run both meetings on the same day every
> > 2nd week if people wanted.
> >
> > > I wonder if it would be more useful to less frequent meetings (perhaps
> > > every other week) and if we need to reschedule them to better serve
> > > those who plan to attend.
> >
> > As always it's a question of less often is harder to form a habit. I'd like 
> > to
> > request we try the new schedule until the PTG and then re-evaluate.
>  
> That sounds fine with me. I just wanted to gather some other feedback. :)

I submitted https://review.openstack.org/398363 so that folks can see how this 
translates to their calendar using the generated ics files.

Cheers!
--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [bifrost] Proposing Yolanda Robla-Mota and Chris Krelle to bifrost-core

2016-11-16 Thread Julia Kreger
Since its creation, bifrost has valued input from users to better
support their specific use cases in the community. As time has passed,
we've realized the need to grow the core team in order to improve
review velocity and further diversity.

To this end, I am proposing two individuals to the bifrost-core group.

- Yolanda Robla-Mota
- Chris Krelle

Yolanda has been a user of and contributor to bifrost for some time,
using it within infracloud. She is now continuing to use bifrost with
her current projects.  Her feedback has been extremely valuable, and I
believe it is long overdue to propose her to bifrost-core.

Chris has been a long time contributor and reviewer to bifrost. He
brings yet another point of view with his use of Ironic running in
stand-alone mode.  He looks at things from a practical, yet big
picture, point of view.  Chris often reviewed bifrost when he was a
member of ironic-core, and to be honest, we [bifrost cores] want him
back.

If there are no objections, I will add both to the bifrost-core group tomorrow.

Thanks!
-Julia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Flakey functional test (Was: [Openstack-stable-maint] Stable check of openstack/glance failed)

2016-11-16 Thread Tomislav.Sukser
> Hi Glance team members,
> 
> Over the weekend one of our stable periodic jobs failed. It failed on
> a test (glance.tests.functional.test_reload.TestReload.test_reload)
> that I've seen fail a couple times previously. I've created a bug for
> this: https://bugs.launchpad.net/glance/+bug/1641670 And I'm hoping
> someone will be able to reproduce it and fix it.
> 
> I suspect, this is a matter of resource contention in the donated VMs
> that our CI uses, but I can't be certain.
> 
> The stable team would greatly appreciate help diagnosing and fixing this 
> issue.

Hi,

Even though I was unable to reproduce the issue, I found something slightly
odd. The reloaded log file is named etcnew.log and placed in the same directory
as api.log, registry.log and others. It looks like this is due to the name
construction (in glance/tests/functional/test_reload.py:245):

conf_dir = os.path.join(self.test_dir, 'etc')
log_file = conf_dir + 'new.log'

Although this might be nice to fix, it doesn't look like a problem to me.
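For what it's worth, the presumably intended construction would join
instead of concatenating (a sketch of the apparent intent, not a tested
patch):

conf_dir = os.path.join(self.test_dir, 'etc')
log_file = os.path.join(conf_dir, 'new.log')  # etc/new.log, not etcnew.log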

Since there's nothing pointing to any problems here, I would just ask: is
it possible that the log file is not created if there's nothing to log?
(and only if that is the case - I would suggest adding some dummy request
which would result in a log entry; if not - please ignore this idea)


Kind regards,
Tomislav Sukser

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api]

2016-11-16 Thread Ian Cordasco
If you're including me as a vote for nin, you should also consider me as a vote 
for not_in. Otherwise, count me as half a vote for either. I still think not_in 
is just ever-so slightly better than nin. 

-Original Message-
From: milanisko k 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: November 16, 2016 at 02:47:12
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [api]

> Guys,
>  
> thanks for the responses, so far we've got (if I'm not mistaken):
>  
> ?state=nin: 3 (including me)
> ?state=not_in: 1
> ?state=out: 0
> ?not_state=in: 0
>  
> I'd like to finish this poll by EOW so that more folks have the opportunity
> to express their preference.
>  
> Cheers,
> milan
>  
>  
> 2016-11-15 13:50 GMT+01:00 Miles Gould :
>  
> > On 14/11/16 20:52, Ian Cordasco wrote:
> >
> >> not_in is nice and explicit while nin and out are a bit, more clever. I
> >> think we should avoid trying to be clever.
> >>
> >
> > Agreed - I think not_in is more intelligible and guessable than the other
> > suggestions.
> >
> > Miles
> >
> >
> > __  
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> __  
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][horizon] XStatic-JSEncrypt 2.3.1.0 release

2016-11-16 Thread no-reply
We are satisfied to announce the release of:

XStatic-JSEncrypt 2.3.1.0: JSEncrypt 2.3.1 (XStatic packaging
standard)

Download the package from:

https://pypi.python.org/pypi/XStatic-JSEncrypt

For more details, please see below.

Changes in XStatic-JSEncrypt 2.0.0.1..2.3.1.0
-

5526c15 Update XStatic-jsencrypt to 2.3.1
5a62fc0 Update JSEncrypt to v2.3.0
6330113 Deprecated tox -downloadcache option removed
1053545 Update .gitreview for new namespace
1ebb18f Add tox.ini to enable publish/tarball job
ebb78c2 Add .gitreview
4d5dbcf Package the generated library, not the source


Diffstat (except docs and test files)
-

.gitreview  |   4 +
MANIFEST.in |   4 +-
setup.cfg   |  20 ++
setup.py|  10 +-
tox.ini |   8 +
xstatic/pkg/jsencrypt/__init__.py   |  12 +-
xstatic/pkg/jsencrypt/data/jsencrypt.js | 474 +---
7 files changed, 292 insertions(+), 240 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Stable] Usefulness of Weekly Meeting

2016-11-16 Thread Ian Cordasco
 

-Original Message-
From: Tony Breeds 
Reply: OpenStack Development Mailing List (not for usage questions) 
, OpenStack Development Mailing List (not 
for usage questions) 
Date: November 15, 2016 at 16:55:36
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [Stable] Usefulness of Weekly Meeting

> On Tue, Nov 15, 2016 at 10:25:26AM -0800, Ian Cordasco wrote:
> > Hi all,
> >
> > So the stable-maintenance team (and liaisons to it) have had a meeting
> > scheduled for a while now. Recently, however, we've had trouble
> > getting more than one person to attend meetings when we have them.
>  
> I need to apologise for letting this happen.

I'd like to emphasize that I don't see this as any one person's fault. I just 
was thinking aloud about whether or not we find much use. We're not exactly a 
high activity team and we do communicate very well via this very medium.

> The current arrangement with 2
> meetings US/EU and US/AU means that I can only attend the US/AU timeslot, and 
> I
> failed to set someone to run the US/EU meeting. I'd like to propose that we
> switch the arrangement to:
>  
> US/AU: #openstack-meeting-4 2100 UTC Monday (same time as now different room)
> AU/EU: #openstack-meeting-4 1000 UTC Monday

This sounds fine to me. It might even be less confusing than having two 
separate days and channels too.

> Alternate weeks, although we *could* run both meetings on the same day every
> 2nd week if people wanted.
>  
> > I wonder if it would be more useful to less frequent meetings (perhaps
> > every other week) and if we need to reschedule them to better serve
> > those who plan to attend.
>  
> As always it's a question of less often is harder to form a habit. I'd like to
> request we try the new schedule until the PTG and then re-evaluate.

That sounds fine with me. I just wanted to gather some other feedback. :)

Cheers!
--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] [stable] Proposing Jay Faulkner for ironic-stable-maint

2016-11-16 Thread Dmitry Tantsur

Hi!

I'm formally proposing that the ironic-stable-maint team [1] adds Jay Faulkner. 
He's been consistently reviewing stable patches as shown by [2]. I fully trust 
that his operator experience will help his judgment on not landing dangerous 
things :)


So for those on the team already, please reply with a +1 or -1 vote.

[1] https://review.openstack.org/#/admin/groups/950,members
[2] 
https://review.openstack.org/#/q/(branch:stable/mitaka+OR+branch:stable/newton)+AND+reviewer:%22Jay+Faulkner+%253Cjay%2540jvf.cc%253E%22


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [neutron][lbaasv2][octavia] Not able to create loadbalancer

2016-11-16 Thread 洪 赵
After the amphora VM was created, the Octavia worker tried to plug the VIP into 
the amphora VM, but failed: it could not connect to the amphora agent. You may 
SSH to the VM and check whether the networks and IP addresses are correctly set.

Good luck.
-hzhao

From: Ganpat Agarwal
Sent: 16 November 2016 14:40
To: OpenStack Development Mailing List (not for usage 
questions)
Subject: [openstack-dev] [neutron][lbaasv2][octavia] Not able to create loadbalancer

Hi All,

I am using devstack stable/newton branch and have deployed octavia for 
neutron-lbaasv2.

Here is my local.conf

[[local|localrc]]
HOST_IP=10.0.2.15
DATABASE_PASSWORD=$ADMIN_PASSWORD
MYSQL_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=tokentoken
DEST=/opt/stack

# Disable Nova Network and enable Neutron
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta

# Enable LBaaS v2
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas 
stable/newton
enable_plugin octavia https://git.openstack.org/openstack/octavia stable/newton
enable_service q-lbaasv2
enable_service octavia
enable_service o-cw
enable_service o-hm
enable_service o-hk
enable_service o-api

# Neutron options
Q_USE_SECGROUP=True
FLOATING_RANGE="172.18.161.0/24"
FIXED_RANGE="10.0.0.0/24"
Q_FLOATING_ALLOCATION_POOL=start=172.18.161.250,end=172.18.161.254
PUBLIC_NETWORK_GATEWAY="172.18.161.1"
PUBLIC_INTERFACE=eth0

LOG=True
VERBOSE=True
LOGFILE=$DEST/logs/stack.sh.log
LOGDAYS=1
SCREEN_LOGDIR=$DEST/logs/screen
SYSLOG=True
SYSLOG_HOST=$HOST_IP
SYSLOG_PORT=516
RECLONE=yes


While creating the loadbalancer, I am getting an error in the octavia worker:

2016-11-16 06:13:08.264 4115 INFO octavia.controller.queue.consumer [-] 
Starting consumer...
2016-11-16 06:14:58.507 4115 INFO octavia.controller.queue.endpoint [-] 
Creating load balancer '51082942-b348-4900-bde9-6d617dba8f99'...
2016-11-16 06:14:59.204 4115 INFO 
octavia.controller.worker.tasks.database_tasks [-] Created Amphora in DB with 
id 93e28edd-71ee-4448-bc70-b0424dbd64f5
2016-11-16 06:14:59.334 4115 INFO octavia.certificates.generator.local [-] 
Signing a certificate request using OpenSSL locally.
2016-11-16 06:14:59.336 4115 INFO octavia.certificates.generator.local [-] 
Using CA Certificate from config.
2016-11-16 06:14:59.336 4115 INFO octavia.certificates.generator.local [-] 
Using CA Private Key from config.
2016-11-16 06:14:59.337 4115 INFO octavia.certificates.generator.local [-] 
Using CA Private Key Passphrase from config.
2016-11-16 06:15:15.085 4115 INFO 
octavia.controller.worker.tasks.database_tasks [-] Mark ALLOCATED in DB for 
amphora: 93e28edd-71ee-4448-bc70-b0424dbd64f5 with compute id 
f339b48d-1445-47e0-950b-ee69c2add81f for load balancer: 
51082942-b348-4900-bde9-6d617dba8f99
2016-11-16 06:15:15.208 4115 INFO 
octavia.network.drivers.neutron.allowed_address_pairs [-] Port 
ac27cbb8-078d-47fd-824c-e95b0ebff392 already exists. Nothing to be done.
2016-11-16 06:15:39.708 4115 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-11-16 06:15:47.712 4115 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.

(several hundred lines with the same message)

2016-11-16 06:24:29.310 4115 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-11-16 06:24:34.316 4115 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to 
instance. Retrying.
2016-11-16 06:24:39.317 4115 ERROR 
octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries 
(currently set to 100) exhausted.  The amphora is unavailable.
2016-11-16 06:24:39.327 4115 WARNING 
octavia.controller.worker.controller_worker [-] Task 
'octavia.controller.worker.tasks.amphora_driver_tasks.AmphoraePostVIPPlug' 
(19f29ca9-3e7f-4629-b976-b4d24539d8ed) transitioned into state 'FAILURE' from 
state 'RUNNING'
33 predecessors (most recent first):
  Atom 
'octavia.controller.worker.tasks.network_tasks.GetAmphoraeNetworkConfigs' 
{'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': 
}, 
'provides': {u'93e28edd-71ee-4448-bc70-b0424dbd64f5': 
}}
  |__Atom 'reload-lb-after-plug-vip' {'intention': 'EXECUTE', 'state': 
'SUCCESS', 'requires': {'loadbalancer_id': 
u'51082942-b348-4900-bde9-6d617dba8f99'}, 'provides': 
}
 |__Atom 
'octavia.controller.worker.tasks.database_tasks.UpdateAmphoraVIPData' 
{'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'amps_data': 
[]}, 'provides': 
None}
|__Atom 'octavia.controller.worker.tasks.network_tasks.PlugVIP' 
{'intention': 'EXECUTE', 'state': 'SUCCESS', 'requires': {'loadbalancer': 
}, 
'provides': []}
   |__Atom 
'octavia.controller.worker.tasks.database_tasks.Upda
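
The retry budget reported in the ERROR line above comes from the amphora driver
configuration. As a hedged sketch (option names as I understand the Newton-era
[haproxy_amphora] section; verify against your octavia.conf), widening the
retry window can help when the amphora boots slowly on a small devstack host:

[haproxy_amphora]
# Number of times the worker retries connecting to the amphora agent
# before giving up (the log above shows 100).
connection_max_retries = 300
# Seconds to wait between connection attempts.
connection_retry_interval = 5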

[openstack-dev] [Tacker] Unable to assign IP address to connection points.

2016-11-16 Thread prasad kokkula
Hi All,

[Tacker] I have tried to launch a VNF instance using Tacker. The VNF is
launched successfully and I am able to SSH into it.

I am facing an issue where the connection points (CP2, CP3) are not getting IP
addresses, except for the management CP (CP1). Could you please let me know
whether this is a Tacker issue or a configuration mismatch?

I have installed the OpenStack Newton release on CentOS 7. Please let me know
if you need any other configuration.




=
Below are the net-list IPs:

[root@localhost (keystone_admin)]# neutron net-list
+--------------------------------------+-------------+-------------------------------------------------------+
| id                                   | name        | subnets                                               |
+--------------------------------------+-------------+-------------------------------------------------------+
| 55077c0e-8291-4730-99b4-f280967cb69e | public      | 39256aad-d075-4c38-bf2c-14613df2252e 172.24.4.224/28  |
| 73bbaf70-9bdd-4359-a3a2-09dbd5734341 | private     | 09b9018c-ca3b-46ee-9a4e-507e5124139f 10.0.0.0/24      |
| d0560ee9-9ab0-4df8-a0d2-14064950a17c | vnf_mgmt    | 01d2b67c-ee28-4875-92e0-a8e51fdf8401 192.168.200.0/24 |
| f98f38b8-8b6c-4adb-b0e9-a265ce969acf | vnf_private | 61d39f59-2ff7-4292-afd9-536f007fd30c 192.168.201.0/24 |
+--------------------------------------+-------------+-------------------------------------------------------+
[root@localhost (keystone_admin)]#

TOSCA file used for VNF creation:


[root@localhost (keystone_admin)]# cat sample-vnfd.yaml

tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0

description: Demo vCPE example

metadata:
  template_name: sample-tosca-vnfd

topology_template:
  node_templates:
VDU1:
  type: tosca.nodes.nfv.VDU.Tacker
  capabilities:
nfv_compute:
  properties:
num_cpus: 1
mem_size: 512 MB
disk_size: 1 GB
  properties:
image: cirros1
availability_zone: nova
mgmt_driver: noop
user_data_format: RAW
config: |
  param0: key1
  param1: key2

CP1:
  type: tosca.nodes.nfv.CP.Tacker
  properties:
management: true
  requirements:
- virtualLink:
node: VL1
- virtualBinding:
node: VDU1

CP2:
  type: tosca.nodes.nfv.CP.Tacker
  properties:
anti_spoofing_protection: false
  requirements:
- virtualLink:
node: VL2
- virtualBinding:
node: VDU1

CP3:
  type: tosca.nodes.nfv.CP.Tacker
  properties:
anti_spoofing_protection: false
  requirements:
- virtualLink:
node: VL3
- virtualBinding:
node: VDU1

VL1:
  type: tosca.nodes.nfv.VL
  properties:
network_name: vnf_mgmt
vendor: Tacker

VL2:
  type: tosca.nodes.nfv.VL
  properties:
network_name: vnf_private
vendor: Tacker

VL3:
  type: tosca.nodes.nfv.VL
  properties:
network_name: private
vendor: Tacker


===
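
For anyone triaging a similar report: first confirm whether Neutron actually
allocated fixed IPs to the CP2/CP3 ports, or whether the guest simply never
configured them (a stock CirrOS image, for example, only runs DHCP on its
first NIC). A hedged check, with the VM's UUID as a placeholder:

[root@localhost (keystone_admin)]# neutron port-list --device-id <vnf-vm-uuid> -c id -c fixed_ips

If fixed_ips show up there but not inside the guest, bring the extra NICs up
in the VM (CirrOS example):

$ sudo ip link set eth1 up
$ sudo udhcpc -i eth1
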
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tricircle][neutron]Tricircle now is one of OpenStack Big-Tent project

2016-11-16 Thread Shinobu Kinjo
On Wed, Nov 16, 2016 at 6:46 PM, joehuang  wrote:

> Hi, Shinobu,
>
> Team work leads the project here :)
>

Indeed.
Let's move on.

 - Shinobu


>
> Gergely also provided use cases from OPNFV:
> https://lists.opnfv.org/pipermail/opnfv-tech-discuss/2016-November/013661.html
>
> Or you can directly find it here:
> http://artifacts.opnfv.org/netready/docs/requirements/index.html#georedundancy
>
>
> Best Regards
> Chaoyi Huang (joehuang)
> --
> From: Shinobu Kinjo [shinobu...@gmail.com]
> Sent: 16 November 2016 15:25
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all][tricircle][neutron]Tricircle now is
> one of OpenStack Big-Tent project
>
> Great!
>
>  - Shinobu
>
> On Wed, Nov 16, 2016 at 3:39 PM, joehuang  wrote:
>
>> Hi all,
>>
>> Tricircle was officially accepted yesterday as a big-tent project.
>>
>> The purpose of the Tricircle project is to provide networking automation
>> across Neutron in multi-region OpenStack cloud deployments.
>>
>> Use cases for the Tricircle are described in
>> https://wiki.openstack.org/wiki/Tricircle#Use_Cases.
>>
>> A brief introduction of Tricircle is provided here:
>>
>> Each OpenStack cloud includes its own Nova, Cinder and Neutron; the
>> Neutron servers in these OpenStack clouds are called local Neutron
>> servers, and all these local Neutron servers will be configured with the
>> Tricircle Local Neutron Plugin. A separate Neutron server will be
>> installed and run standalone as the coordinator of networking automation
>> across local Neutron servers; this Neutron server will be configured
>> with the Tricircle Central Neutron Plugin, and is called the central
>> Neutron server.
>>
>> Leveraging the Tricircle Central Neutron Plugin and the Tricircle Local
>> Neutron Plugin configured in these Neutron servers, Tricircle can
>> ensure that the IP address pool, IP/MAC address allocation and network
>> segment allocation are managed globally without conflict, and
>> Tricircle handles tenant-oriented data link layer (Layer 2) and network
>> layer (Layer 3) networking automation across local Neutron servers, so
>> resources like VMs, bare metal or containers of the tenant can
>> communicate with each other via Layer 2 or Layer 3, no matter in which
>> OpenStack cloud these resources are running.
>>
>> How to start in Tricircle:
>> 1. The best entry point for the Tricircle project is its wiki page:
>> https://wiki.openstack.org/wiki/Tricircle. Source code repository is
>> https://github.com/openstack/tricircle.
>>
>> 2. You can play it through devstack:
>> https://github.com/openstack/tricircle/blob/master/doc/source/multi-node-installation-devstack.rst
>>
>> 3. The design blueprint provides a general overview of the ongoing design
>> discussion:
>> https://docs.google.com/document/d/1zcxwl8xMEpxVCqLTce2-dUOtB-ObmzJTbV1uSQ6qTsY/edit#
>>
>> We are trying to tackle common use cases and challenges in the OpenStack
>> multi-region cloud area, and we welcome new contributors who wish to join
>> our effort.
>>
>> We are holding a weekly IRC meeting:
>> Weekly on Wednesdays at 1300 UTC, IRC channel: #openstack-meeting
>> Project IRC channel and other resources could be found here:
>> https://wiki.openstack.org/wiki/Tricircle#Resources.
>>
>> And everyone is welcome.
>>
>> (The Neutron subject is also included in the mail title; inter-communication
>> and collaboration between Neutron and Tricircle are greatly welcome.)
>>
>> Best Regards
>> Chaoyi Huang (joehuang)
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oaktree - a friendly end-user oriented API layer - anybody want to help?

2016-11-16 Thread joehuang
Considering that Shade/Oaktree will interact with multiple clouds, it is 
necessary to establish a check and gate test environment for multiple clouds. 
This is also a requirement from Tricircle.

Best Regards
Chaoyi Huang (joehuang)

From: Monty Taylor [mord...@inaugust.com]
Sent: 15 November 2016 22:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] oaktree - a friendly end-user oriented API layer - 
anybody want to help?

Hey everybody!

At this past OpenStack Summit the results of the Interop Challenge were
shown on stage. It was pretty awesome - 17 different people from 17
different clouds ran the same workload. And it worked!

However, one of the reasons it worked is because they all used the
Ansible modules we wrote that are based on the shade library that
contains the business logic needed to hide vendor differences in clouds.
That means that there IS a fantastic OpenStack interoperability story -
but only if you program in Python. That's less awesome.

With that in mind - I'm pleased to announce a new project that aims to
address that - oaktree.

oaktree is a gRPC-based API porcelain service for OpenStack that is
based on the shade library and I'd love some help in writing it.

Basing oaktree on shade gets us more than just the business logic. Shade
already understands a multi-cloud world. And because we use shade in Infra for
nodepool, it already has caching, batching and thundering herd protection
sorted, so it is able to handle very high loads efficiently. So while oaktree
is new, the primary logic and fundamentals are all shade and are battle-tested.

The barrier to deployers adding it to their clouds needs to be as low as
humanly possible. So as we work on it, ensuring that we keep it
dead-simple to install, update and operate must be a primary concern.

Where are we and what's next?

oaktree doesn't do a whole lot that's terribly interesting at the
moment. We have all of the development scaffolding and gate jobs set up
and a few functions implemented.

oaktree exists currently as two repos - oaktree and oaktreemodel:

  http://git.openstack.org/cgit/openstack/oaktree
  http://git.openstack.org/cgit/openstack/oaktreemodel

oaktreemodel contains the Protobuf definitions and the build scripts to
produce Python, C++ and Go code from them. The python code is published
to PyPI as a normal pure-python library. The C++ code is published as a
source tarball and the Go code is checked back in to the same repo so
that go works properly.

oaktree depends on the python oaktreemodel library, and also on shade.
It implements the server portion of the gRPC service definition.

Currently, oaktree can list and search for flavors, images and floating
ips. Exciting, right? Most of the work to expose the rest of the API that
shade can provide at the moment is going to be fairly straightforward -
although in each case figuring out the best mapping will take some care.
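
To give a feel for the client side, here is a hypothetical Python sketch. The
generated module layout and the OaktreeStub / FlavorsFilter / SearchFlavors
names are illustrative assumptions, not oaktree's settled API:

import grpc

# Hypothetical modules generated from oaktreemodel's protobuf definitions.
from oaktreemodel import oaktree_pb2, oaktree_pb2_grpc

def search_flavors(endpoint, min_ram):
    channel = grpc.insecure_channel(endpoint)
    stub = oaktree_pb2_grpc.OaktreeStub(channel)
    # FlavorsFilter is a placeholder message for the search criteria.
    request = oaktree_pb2.FlavorsFilter(min_ram=min_ram)
    # Assumed to be a server-streaming RPC returning Flavor messages.
    for flavor in stub.SearchFlavors(request):
        print(flavor.name, flavor.ram)

search_flavors("localhost:50051", min_ram=8192)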

We have a few major things that need some good community design. These
are also listed in a todo.rst file in the oaktree repo which is part of
the docs:

  http://oaktree.readthedocs.io/en/latest/

The auth story. The native/default auth for gRPC is oauth. It has the
ability for pluggable auth, but that would raise the barrier for new
languages. I'd love it if we can come up with a story that involves
making API users in keystone and authorizing them to use oaktree via an
oauth transaction. The keystone auth backends currently are all about
integrating with other auth management systems, which is great for
environments where you have a web browser, but not so much for ones
where you need to put your auth credentials into a file so that your
scripts can work. I'm waving my hands wildly here - because all I really
have are problems to solve and none of the solutions I have are great.
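
One possible shape for this, sketched under heavy assumptions - reusing a
keystone-issued token as a gRPC bearer credential is just one candidate
design, though the grpc calls themselves are standard grpcio APIs:

import grpc

def make_channel(endpoint, keystone_token, ca_cert_bytes):
    # TLS for the transport...
    ssl_creds = grpc.ssl_channel_credentials(root_certificates=ca_cert_bytes)
    # ...and the keystone token attached to every call as a bearer token.
    call_creds = grpc.access_token_call_credentials(keystone_token)
    creds = grpc.composite_channel_credentials(ssl_creds, call_creds)
    return grpc.secure_channel(endpoint, creds)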

Glance Image Uploads and Swift Object Uploads (and downloads). Having
those two data operations go through an API proxy seems inefficient.
However, having them not in the API seems like a bad user experience.
Perhaps if we take advantage of the gRPC streaming protocol support
doing a direct streaming passthrough actually wouldn't be awful. Or
maybe the better approach would be for the gRPC call to return a URL and
token for a user to POST/PUT to directly. Literally no clue.

In any case - I'd love help from anyone who thinks this sounds like a
good idea. In a perfect world we'll have something ready for 1.0 by Atlanta.

Join us in #openstack-shade if you want to hack.

Thanks!
Monty


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] How to verify that my strategy works well within watcher?

2016-11-16 Thread David TARDIVEL
I had a debate with my team about how I can be sure that the strategy I want
to use to optimize my cluster with Watcher works well, and how a developer can
guarantee that their strategy works well.

After a lot of discussion, we reached a consensus:

- For the unit tests, a developer should be able to demonstrate that their
  strategy algorithm is correct, by mocking all data required by this
  algorithm (cluster data model, metrics, scoring data, ...). Good test
  coverage is required to be confident about a given strategy. If you
  implement a deterministic algorithm, unit tests should be able to thoroughly
  check the algorithm outputs (see the sketch after this list).

- For the integration tests (Tempest), a developer/tester should be able to
  check that the strategy has been correctly implemented, according to the
  plugin implementation documentation available within the Watcher developer
  guide [0]. One should at least implement a Tempest scenario test that
  creates and executes a new audit with this strategy (and its related goal).
  Note that with the Tempest gate job there is not a single measurement being
  collected, so one should take such a case into account.
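
As a concrete illustration of the unit-test point above, a self-contained
sketch with fake objects standing in for the cluster data model; MyStrategy
is a toy stand-in, not Watcher's real strategy base class:

import unittest

class FakeNode(object):
    def __init__(self, cpu_util):
        self.cpu_util = cpu_util

class FakeClusterModel(object):
    """Stands in for the Watcher cluster data model in tests."""
    def __init__(self, nodes):
        self._nodes = nodes

    def get_all_compute_nodes(self):
        return self._nodes

class MyStrategy(object):
    """Toy deterministic strategy: migrate off any node above 90% CPU."""
    def __init__(self, compute_model):
        self.compute_model = compute_model

    def execute(self):
        nodes = sorted(self.compute_model.get_all_compute_nodes().items())
        return [{"action_type": "migrate", "source_node": name}
                for name, node in nodes if node.cpu_util > 0.90]

class TestMyStrategy(unittest.TestCase):
    def test_overloaded_node_triggers_migration(self):
        model = FakeClusterModel({"node-1": FakeNode(0.95),
                                  "node-2": FakeNode(0.10)})
        solution = MyStrategy(model).execute()
        # A deterministic input lets us assert the exact output.
        self.assertEqual(
            [{"action_type": "migrate", "source_node": "node-1"}], solution)

if __name__ == "__main__":
    unittest.main()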

Finally, we added a new section in the Watcher documentation to promote each
strategy. This piece of documentation is very important, so strategy
developers should make sure they describe both how the strategy works and the
goal it achieves. Complete documentation, with references, will make end users
more confident about using it.

Feel free to comment on my post :)

[0]: http://docs.openstack.org/developer/watcher/


BR,
David

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tricircle][neutron]Tricircle now is one of OpenStack Big-Tent project

2016-11-16 Thread joehuang
Hi, Shinobu,

Team work leads the project here :)

Gergely also provided use cases from OPNFV:
https://lists.opnfv.org/pipermail/opnfv-tech-discuss/2016-November/013661.html

Or you can directly find it here: 
http://artifacts.opnfv.org/netready/docs/requirements/index.html#georedundancy


Best Regards
Chaoyi Huang (joehuang)

From: Shinobu Kinjo [shinobu...@gmail.com]
Sent: 16 November 2016 15:25
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tricircle][neutron]Tricircle now is one of 
OpenStack Big-Tent project

Great!

 - Shinobu

On Wed, Nov 16, 2016 at 3:39 PM, joehuang <joehu...@huawei.com> wrote:
Hi all,

Tricircle was officially accepted yesterday as a big-tent project.

The purpose of the Tricircle project is to provide networking automation
across Neutron in multi-region OpenStack cloud deployments.

Use cases for the Tricircle are described in
https://wiki.openstack.org/wiki/Tricircle#Use_Cases.

A brief introduction of Tricircle is provided here:

Each OpenStack cloud includes its own Nova, Cinder and Neutron; the
Neutron servers in these OpenStack clouds are called local Neutron
servers, and all these local Neutron servers will be configured with the
Tricircle Local Neutron Plugin. A separate Neutron server will be
installed and run standalone as the coordinator of networking automation
across local Neutron servers; this Neutron server will be configured
with the Tricircle Central Neutron Plugin, and is called the central
Neutron server.

Leveraging the Tricircle Central Neutron Plugin and the Tricircle Local
Neutron Plugin configured in these Neutron servers, Tricircle can
ensure that the IP address pool, IP/MAC address allocation and network
segment allocation are managed globally without conflict, and
Tricircle handles tenant-oriented data link layer (Layer 2) and network
layer (Layer 3) networking automation across local Neutron servers, so
resources like VMs, bare metal or containers of the tenant can
communicate with each other via Layer 2 or Layer 3, no matter in which
OpenStack cloud these resources are running.
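
To make the plugin wiring concrete, here is a hedged neutron.conf sketch; the
plugin paths are my recollection of the Tricircle repo and should be
double-checked against the devstack guide below:

# On each local Neutron server:
[DEFAULT]
core_plugin = tricircle.network.local_plugin.TricirclePlugin

# On the standalone central Neutron server:
[DEFAULT]
core_plugin = tricircle.network.central_plugin.TricirclePlugin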

How to start in Tricircle:
1. The best entry point for the Tricircle project is its wiki page:
https://wiki.openstack.org/wiki/Tricircle. Source code repository is
https://github.com/openstack/tricircle.

2. You can play it through devstack:
https://github.com/openstack/tricircle/blob/master/doc/source/multi-node-installation-devstack.rst

3. The design blueprint provides a general overview of the ongoing design
discussion:
https://docs.google.com/document/d/1zcxwl8xMEpxVCqLTce2-dUOtB-ObmzJTbV1uSQ6qTsY/edit#

We are trying to tackle common use cases and challenges in the OpenStack
multi-region cloud area, and we welcome new contributors who wish to join our
effort.

We are holding a weekly IRC meeting:
Weekly on Wednesdays at 1300 UTC, IRC channel: #openstack-meeting
Project IRC channel and other resources could be found here:
https://wiki.openstack.org/wiki/Tricircle#Resources.

And everyone is welcome.

(The Neutron subject is also included in the mail title; inter-communication
and collaboration between Neutron and Tricircle are greatly welcome.)

Best Regards
Chaoyi Huang (joehuang)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]agenda of weekly meeting Nov.16

2016-11-16 Thread joehuang
Hello, team,

Tricircle is now an OpenStack big-tent project, so let's continue the weekly
meeting.

Agenda of Nov.16 weekly meeting:

  1.  Ocata feature development discussion
  2.  Documentation requirement: 
http://docs.openstack.org/contributor-guide/quickstart/new-projects.html
  3.  Open Discussion

How to join:
#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on 
every Wednesday, starting at 13:00 UTC.


If you have other topics to be discussed in the weekly meeting, please reply
to this mail.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] neutron-lib impact

2016-11-16 Thread Gary Kotton
Hi,
The directory integration will break all of the plugins and Neutron projects.
I do not think that this is something we should do: it breaks the Neutron API
contract.
I think that we should only unblock the patch
https://review.openstack.org/#/c/386845. Because this (very big) patch will
break all plugins, we should only approve it once every sub-project owner has
chimed in. This means that they will need to understand that there may be some
tweaks involved in getting unit tests to pass; CI may automagically work.
I feel that, as a core reviewer, my responsibility is to make sure that we do
not break things. In addition, we have a responsibility to ensure that things
continue to work. Hopefully we can find a way to do this in a more friendly
manner.
Thanks
Gary

From: "Armando M." 
Reply-To: OpenStack List 
Date: Wednesday, November 16, 2016 at 6:51 AM
To: OpenStack List 
Subject: [openstack-dev] [neutron] neutron-lib impact

Hi neutrinos,

As mentioned during the last team meeting [1], there is a change [2] in the 
works aimed at adopting the neutron plugins directory as provided in 
neutron-lib 1.0.0 [3].

As shown in [2], the switch to using the directory is relatively 
straightforward. I leave the rest of the affected repos as an exercise for the 
reader :)
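
For sub-project maintainers wondering what the switch looks like in code, a
minimal before/after sketch (the directory module is what neutron-lib 1.0.0
ships per [3]; the exact call sites in each repo will vary):

# Before: plugin lookups through NeutronManager.
from neutron import manager
core_plugin = manager.NeutronManager.get_plugin()

# After: the neutron-lib plugins directory.
from neutron_lib.plugins import directory
core_plugin = directory.get_plugin()
l3_plugin = directory.get_plugin("L3_ROUTER_NAT")  # service plugin by alias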

Cheers,
Armando

[1] 
http://eavesdrop.openstack.org/meetings/networking/2016/networking.2016-11-14-21.00.txt
[2] https://review.openstack.org/#/q/topic:plugin-directory
[3] http://docs.openstack.org/releasenotes/neutron-lib/unreleased.html#id3

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >