Re: [openstack-dev] [mistral][logo] Fwd: Mistral team draft logo

2016-12-07 Thread Renat Akhmerov
On 7 Dec 2016, at 20:37, Jay Pipes  wrote:
> 
> To me, it kind of looks like people jumping joyously off the top of a ferris 
> wheel.


Haha :)) I didn’t have THIS kind of association in my mind. That’s funny. 
Thanks, Jay!


Renat Akhmerov
@Nokia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging liberty as EOL

2016-12-07 Thread Tony Breeds
On Tue, Nov 22, 2016 at 01:35:48PM +1100, Tony Breeds wrote:

> I'll batch the removal of the stable/liberty branches between Nov 28th and Dec
> 3rd (UTC+1100).  Then during December I'll attempt to clean up
> zuul/layout.yaml to remove liberty exclusions and jobs.

This took longer than planned, as a few of the repos scheduled for EOL were a
little problematic during the kilo cycle.  I've updated the list at [1].

Can the infra team please run eol_branch.sh [2] over the repos listed at that
URL [1] and flagged with 'Please EOL'?  The others will need to be done later.

eol_branch.sh needs just the repo names, which can be generated with something
like:

URL=https://gist.githubusercontent.com/tbreeds/93cd346c37aa46269456f56649f0a4ac/raw/liberty_eol_data.txt
eol_branch.sh REPOS=$(curl -s $URL | awk '/Please EOL/ {print $1}')

The data format is a balance between human and machine readable.
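
Spelled out a little more (purely illustrative; check [2] for the exact
arguments eol_branch.sh expects before running it):

URL=https://gist.githubusercontent.com/tbreeds/93cd346c37aa46269456f56649f0a4ac/raw/liberty_eol_data.txt
# Lines flagged 'Please EOL' start with the repo name, so awk's $1 is the repo.
REPOS=$(curl -s $URL | awk '/Please EOL/ {print $1}')
echo $REPOS   # sanity-check the list before feeding it to eol_branch.sh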

Yours Tony.

[1] 
https://gist.github.com/tbreeds/93cd346c37aa46269456f56649f0a4ac#file-liberty_eol_data-txt
[2] 
http://git.openstack.org/cgit/openstack-infra/release-tools/tree/eol_branch.sh


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver based LBaaSv2

2016-12-07 Thread Brandon Logan
On Wed, 2016-12-07 at 06:50 -0800, Michael Johnson wrote:
> Lubosz,
> 
> I would word that very differently.  We are not dropping LBaaSv2
> support.  It is not going away.  I don't want there to be confusion
> on
> this point.
> 
> We are however, moving/merging the API from neutron into Octavia.
> So, during this work the code will be transitioning repositories and
> you will need to carefully synchronize and/or manage the changes in
> both places.
> Currently the API changes have patchsets up in the Octavia
> repository.
> However, the old namespace driver has not yet been migrated over.
I know I've talked about using the namespace driver as a guinea pig for
the nlbaas to octavia shim driver layer, but I didn't know it would be
fully supported in octavia.  This will require a bit more work because
of the callbacks the agent expects to be able to call.
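
(For context on the two extensions proposed further down in the quoted mail,
here is a rough sketch of the directives the rendered haproxy.cfg would need
to gain, taken from the cited HAProxy 1.6 docs; the values and file name are
illustrative only, not what the driver currently emits:)

cat >> haproxy.cfg <<'EOF'
global
    nbproc 2
    cpu-map 1 0
    cpu-map 2 1
defaults
    option http-keep-alive
EOF
haproxy -c -f haproxy.cfg   # syntax check only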

> 
> Michael
> 
> 
> > On Tue, Dec 6, 2016 at 8:46 AM, Kosnik, Lubosz wrote:
> > Hello Zhi,
> > So currently we’re working on dropping LBaaSv2 support.
> > Octavia is a big-tent project providing LBaaS in OpenStack, and after
> > merging the LBaaS v2 API into Octavia we will deprecate that project.
> > In the next 2 releases we’re planning to completely wipe out that code
> > repository. If you would like to help with LBaaS in OpenStack you’re
> > more than welcome to start working with us on Octavia.
> > 
> > Cheers,
> > Lubosz Kosnik
> > Cloud Software Engineer OSIC
> > 
> > On Dec 6, 2016, at 6:04 AM, Gary Kotton  wrote:
> > 
> > Hi,
> > I think that there is a move to Octavia. I suggest reaching out to that
> > community and seeing how these changes can be added. Sounds like a nice
> > addition.
> > Thanks
> > Gary
> > 
> > From: zhi 
> > Reply-To: OpenStack List 
> > Date: Tuesday, December 6, 2016 at 11:06 AM
> > To: OpenStack List 
> > Subject: [openstack-dev] [neutron][lbaas] New extensions for
> > HAProxy driver
> > based LBaaSv2
> > 
> > Hi, all
> > 
> > I am considering adding some new extensions for the HAProxy driver based
> > Neutron LBaaSv2.
> > 
> > Extension 1: multi-process support. By following this document[1], I
> > think we can let our HAProxy based LBaaSv2 support this feature. By
> > adding this feature, we can improve load balancer performance.
> > 
> > Extension 2: HTTP keep-alive support. By following this document[2], we
> > can make our load balancers more efficient.
> > 
> > 
> > Any comments are welcome!
> > 
> > Thanks
> > Zhi Chang
> > 
> > 
> > [1]: http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#cpu-map
> > [2]: http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#option%20http-keep-alive
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu
> > bscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > 
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu
> > bscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Kolla-Kubernetes tagging -> on the road to 1.0.0

2016-12-07 Thread Steven Dake (stdake)
Hey folks,

Jeffrey delegated to me the job of determining the tagging structure for
kolla-kubernetes.  I was under the mistaken impression we needed to tag 1.0.0
with the milestone tags (such as 1.0.0.0b2/1.0.0.0b3) for kolla-kubernetes.
That is not the case.  We will be tagging 0.4.0 next, then 0.5.0, then 0.6.0,
until we have something reliable that does the job, as suggested by Doug
Hellmann.  For full context read the logs here:

http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2016-12-07.log.html#t2016-12-07T20:25:00
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 4

2016-12-07 Thread Jay Pipes

On 12/07/2016 07:06 PM, Matt Riedemann wrote:

On 12/7/2016 2:40 AM, Sylvain Bauza wrote:


FWIW, I think POST is not that complex and allows us to have room for
further request information like traits, without defeating the purpose
to have something RESTful.

The proposal is up, comments welcome
https://review.openstack.org/#/c/392569/

-Sylvain



Just to update everyone else following along, we had a discussion in IRC
today (me, edleafe, bauzas, sdague, cdent and dansmith) about GET vs
POST and the majority of us sided with simple GETs for now, knowing we
have the option to do complex POST requests later with a microversion if
it turns out that we need it.

I was originally wanting to do the POST request but wasn't fully aware
of the future plans to POST to /allocations to make claims with a
request spec which can have a complicated request body.

We also aren't doing traits right now, so while I'm not crazy about the
namespaced query language that's going to get built into the GET query
parameters, right now it's not a monster we need to deal with.

I don't want to underestimate the complexity that might blow up the GET
query parameter schema, especially once we start having to deal with NFV
use cases, but we aren't there yet and I'd rather not boil the ocean
right now. Sean pointed out, as thankfully he usually does, that if we
over-complicate this for future requirements we'll lose time working on
what needs to get done for the majority of use cases that we want to
have working in Ocata, so let's move forward with the more normal GET
format for listing resource providers with filters knowing that we have
options in the future with POST and microversions if we need that escape
hatch.


Thanks for posting back on this. I just finished reading back through
the (long) conversation we had on IRC this afternoon. I appreciate everyone
lending their opinions, sticking to the discussion, and pushing through
to a decision/conclusion.


At the end of the day, nobody is ever completely happy with every 
solution that is proposed. That's just the way it is with things like 
this. I know Dan and Sylvain aren't pleased with the decision, but I 
appreciate that both of you stuck with it and kept the discussion civil 
and productive.


As others noted, I pushed up code that implements the GET 
/resource_providers?resources=XXX handling [1]. It is rebased off of 
Sylvain's patch that adds object-layer handling of resource filters [2]. 
Hope to see your reviews on that. Sylvain, not sure there is anything to 
merge/squash in the patch, but if there is, I'll chat with you about it 
tomorrow morning.
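
For readers skimming the thread, a rough sketch of what such a filtered
listing might look like once [1] lands; the endpoint, query syntax and any
required headers are assumptions pending review, and the values are
illustrative:

TOKEN=$(openstack token issue -f value -c id)
curl -s -H "X-Auth-Token: $TOKEN" \
  "http://placement.example.com/resource_providers?resources=VCPU:4,MEMORY_MB:2048"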


Best,
-jay


[1] https://review.openstack.org/#/c/408285/
[2] https://review.openstack.org/#/c/386242/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Questions about how to setup nova-lxd

2016-12-07 Thread Matt Riedemann

On 12/7/2016 6:14 PM, zhihao wang wrote:

Hi All


I have installed OpenStack (Newton) on Ubuntu 16.04 on 4 nodes, and I
want to install and configure Nova-LXD on my existing OpenStack, so that
I can create and run Linux containers using LXD.


I am wondering whether anyone knows how to set this up, and whether there
is a user guide for it?


Appreciate your help, thanks.


Thanks

Wally



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Google told me about these docs:

https://linuxcontainers.org/lxd/getting-started-openstack/

http://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html

As noted in IRC, zigo and maybe jamespage are probably your best 
starting contacts.
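
Very roughly, the integration boils down to installing the nova-lxd packages
on the compute node and pointing nova at the LXD driver in nova.conf. The
package and driver names below are from memory, so double-check them against
the docs linked above:

# On the compute node (illustrative; names per the nova-lxd docs)
sudo apt-get install nova-compute-lxd
# then in nova.conf on that node:
#   [DEFAULT]
#   compute_driver = lxd.LXDDriver
sudo systemctl restart nova-compute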


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]agenda of weekly meeting Dec.7

2016-12-07 Thread joehuang
Hello, Dims,

Thank you very much; it's interesting to learn about their work. I would like
to get in touch with those folks.

Best Regards
Chaoyi Huang (joehuang)


From: Davanum Srinivas [dava...@gmail.com]
Sent: 07 December 2016 19:40
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle]agenda of weekly meeting Dec.7

Chaoyi,

Is there any interest in this work?
http://cs.brown.edu/~rfonseca/pubs/yu16netex.pdf
https://goo.gl/photos/hwHfMNo4xDMfVK8j8

Please let me know and i'll get you in touch with those folks.

Thanks,
Dims


On Wed, Dec 7, 2016 at 3:00 AM, joehuang  wrote:
> Hello, team,
>
> The bug smash and meetup last week went very well; let's continue the weekly
> meeting.
>
> Agenda of Dec.7 weekly meeting:
>
> Bug smash and meetup summary
> Ocata feature development review
> legacy tables clean after splitting
> Open Discussion
>
>
> How to join:
>
> #  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on
> every Wednesday starting from UTC 13:00.
>
>
> If you have other topics to be discussed in the weekly meeting, please
> reply to this mail.
>
>
> Best Regards
> Chaoyi Huang (joehuang)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Questions about how to setup nova-lxd

2016-12-07 Thread zhihao wang
Hi All


I have installed OpenStack (Newton) on Ubuntu 16.04 on 4 nodes, and I want to
install and configure Nova-LXD on my existing OpenStack, so that I can create
and run Linux containers using LXD.


I am wondering whether anyone knows how to set this up, and whether there is a
user guide for it?


Appreciate your help, thanks.


Thanks

Wally
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 4

2016-12-07 Thread Matt Riedemann

On 12/7/2016 2:40 AM, Sylvain Bauza wrote:


FWIW, I think POST is not that complex and allows us to have room for
further request information like traits, without defeating the purpose
to have something RESTful.

The proposal is up, comments welcome
https://review.openstack.org/#/c/392569/

-Sylvain



Just to update everyone else following along, we had a discussion in IRC 
today (me, edleafe, bauzas, sdague, cdent and dansmith) about GET vs 
POST and the majority of us sided with simple GETs for now, knowing we 
have the option to do complex POST requests later with a microversion if 
it turns out that we need it.


I was originally wanting to do the POST request but wasn't fully aware 
of the future plans to POST to /allocations to make claims with a 
request spec which can have a complicated request body.


We also aren't doing traits right now, so while I'm not crazy about the 
namespaced query language that's going to get built into the GET query 
parameters, right now it's not a monster we need to deal with.


I don't want to underestimate the complexity that might blow up the GET 
query parameter schema, especially once we start having to deal with NFV 
use cases, but we aren't there yet and I'd rather not boil the ocean 
right now. Sean pointed out, as thankfully he usually does, that if we 
over-complicate this for future requirements we'll lose time working on 
what needs to get done for the majority of use cases that we want to 
have working in Ocata, so let's move forward with the more normal GET 
format for listing resource providers with filters knowing that we have 
options in the future with POST and microversions if we need that escape 
hatch.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [acceleration]Team Biweekly Meeting 2016.12.07 agenda

2016-12-07 Thread Zhipeng Huang
Hi Miro,

Yes it is, and please find all the necessary information on the wiki page :)

On Thu, Dec 8, 2016 at 1:27 AM, Miroslav Halas  wrote:

> Howard,
>
>
>
> Thank you for sharing. I wasn’t able to join because I was quite confused
> about the channel to join (tried openstack-acc and openstack-meetinc-cp).
>
>
>
> Is #openstack-cyborg the channel to use going forward?
>
>
>
> Thanks,
>
>
>
> Miro
>
>
>
> *From:* Zhipeng Huang [mailto:zhipengh...@gmail.com]
> *Sent:* Wednesday, December 07, 2016 11:35 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Cc:* Miroslav Halas; rodolfo.alonso.hernan...@intel.com; Michele
> Paolino; Scott Kelso; Roman Dobosz; Jim Golden;
> pradeep.jagade...@huawei.com; michael.ro...@nokia.com;
> jian-feng.d...@intel.com; martial.mic...@nist.gov; Moshe Levi; Edan
> David; Francois Ozog; Fei K Chen; jack...@huawei.com; Harm Sluiman;
> li.l...@huawei.com
> *Subject:* Re: [acceleration]Team Biweekly Meeting 2016.12.07 agenda
>
>
>
> Hi Team,
>
>
>
> Thanks for attending today's meeting and having a great discussion, please
> find the minutes at https://wiki.openstack.org/wiki/Cyborg/MeetingLogs
>
>
>
> On Wed, Dec 7, 2016 at 4:30 PM, Zhipeng Huang 
> wrote:
>
> Hi Team,
>
>
>
> Please find the initial agenda for today's meeting at
> https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Next_
> meeting_:_UTC_1500.2C_Dec_7th
>
>
>
> --
>
> Zhipeng (Howard) Huang
>
>
>
> Standard Engineer
>
> IT Standard & Patent/IT Product Line
> 
> Huawei Technologies Co., Ltd
>
> Email: huangzhip...@huawei.com
>
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
>
>
> (Previous)
>
> Research Assistant
>
> Mobile Ad-Hoc Network Lab, Calit2
>
> University of California, Irvine
>
> Email: zhipe...@uci.edu
>
> Office: Calit2 Building Room 2402
>
>
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
>
>
>
>
> --
>
> Zhipeng (Howard) Huang
>
>
>
> Standard Engineer
>
> IT Standard & Patent/IT Product Line
> 
> Huawei Technologies Co., Ltd
>
> Email: huangzhip...@huawei.com
>
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
>
>
> (Previous)
>
> Research Assistant
>
> Mobile Ad-Hoc Network Lab, Calit2
>
> University of California, Irvine
>
> Email: zhipe...@uci.edu
>
> Office: Calit2 Building Room 2402
>
>
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][monasca] Can we uncap python-kafka ?

2016-12-07 Thread Tony Breeds
On Mon, Dec 05, 2016 at 04:03:13AM +, Keen, Joe wrote:
> I wasn’t able to set a test up on Friday and with all the other work I
> have for the next few days I doubt I’ll be able to get to it much before
> Wednesday.

It's Wednesday so can we have an update?

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Kevin Benton
I think we will probably end up having to support the direct pass-through
anyway. The reason that landed in the spec is that someone claimed they
had switches that couldn't do translation very well. We will just need
volunteers to make the changes required on the Neutron side.

On Wed, Dec 7, 2016 at 10:19 AM, Vasyl Saienko 
wrote:

>
>
> On Wed, Dec 7, 2016 at 7:34 PM, Kevin Benton  wrote:
>
>> >It works only when the whole switch is dedicated to a single customer; it will not
>> work when several customers share the same switch.
>>
>> Do you know what vendors have this limitation? I know the broadcom
>> chipsets didn't prevent this (we allowed VLAN rewrites scoped to ports at
>> Big Switch). If it's common to Cisco/Juniper then I guess we are stuck
>> reflecting bad hardware in the API. :(
>>
>
> @Kevin
> It looks like I was wrong: in the example I provided I expected to
> configure VLAN mapping on the Gig0/1 uplink. It will not work in that case,
> but if the VLAN mapping is configured on the ports where the baremetal
> servers are plugged in (i.e. Fa0/1 - 0/5) it should work :)
> I definitely need more practice with VLAN mapping...
>
>
>
>>
>> On Wed, Dec 7, 2016 at 9:22 AM, Vasyl Saienko 
>> wrote:
>>
>>>
>>>
>>> On Wed, Dec 7, 2016 at 7:12 PM, Kevin Benton  wrote:
>>>


 On Wed, Dec 7, 2016 at 8:47 AM, Vasyl Saienko 
 wrote:

> @Armando: IMO the spec [0] is not about enablement of Trunks and
> baremetal. This spec is rather about trying to make user request with any
> network configuration (number of requested NICs) to be able successfully
> deployed on ANY ironic node (even when number of hardware interfaces is
> less than number of requested attached networks to instance) by implicitly
> creating neutron trunks on the fly.
>
> I have  a concerns about it and left a comment [1]. The guaranteed
> number of NICs on hardware server should be  available to user via nova
> flavor information. User should decide if he needs a trunk or not only by
> his own, as his image may even not support trunking. I suggest that
> creating trunks implicitly (w/o user knowledge) shouldn't happen.
>
> Current trunks implementation in Neutron will work just fine with
> baremetal case with one small addition:
>
> 1. segmentation_type and segmentation_id should not be API mandatory
> fields at least for the case when provider segmentation is VLAN.
>
> 2. User still should know what segmentation_id was picked to configure
> it on Instance side. (Not sure if it is done automatically via network
> metadata at the moment.). So it should be inherited from network
> provider:segmentation_id and visible to the user.
>
>
> @Kevin: Having VLAN mapping support on the switch will not solve
> problem described in scenario 3 when multiple users pick the same
> segmentation_id for different networks and their instances were spawned to
> baremetal nodes on the same switch.
>
> I don’t see other option than to control uniqueness of segmentation_id
> on Neutron side for baremetal case.
>

 Well unless there is a limitation in the switch hardware, VLAN mapping
 is scoped to each individual port so users can pick the same local
 segmentation_id. The point of the feature on switches is for when you have
 customers that specify their own VLANs and you need to map them to service
 provider VLANs (i.e. what is happening here).

>>>
>>> It works only when the whole switch is dedicated to a single customer; it will not
>>> work when several customers share the same switch.
>>>
>>>


>
> Reference:
>
> [0] https://review.openstack.org/#/c/277853/
> [1] https://review.openstack.org/#/c/277853/10/specs/approved/VL
> AN-aware-baremetal-instances.rst@35
>
> On Wed, Dec 7, 2016 at 6:14 PM, Kevin Benton  wrote:
>
>> Just to be clear, in this case the switches don't support VLAN
>> translation (e.g. [1])? Because that also solves the problem you are
>> running into. This is the preferable path for bare metal because it 
>> avoids
>> exposing provider details to users and doesn't tie you to VLANs on the
>> backend.
>>
>> 1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/
>>
>> On Wed, Dec 7, 2016 at 7:49 AM, Armando M.  wrote:
>>
>>>
>>>
>>> On 7 December 2016 at 04:02, Vasyl Saienko 
>>> wrote:
>>>
 Armando, Kevin,

 Thanks for your comments.

 To be more clear we are trying to use neutron trunks implementation
 with baremetal servers (Ironic). Baremetal servers are plugged to Tor 
 (Top
 of the Rack) switch. User images are spawned directly onto hardware.

>>> 

Re: [openstack-dev] [Horizon] Draft team mascot

2016-12-07 Thread Tripp, Travis S
I also like Radomir’s version.

From: "Ramirez, Eddie" 
Reply-To: OpenStack List 
Date: Wednesday, December 7, 2016 at 11:28 AM
To: OpenStack List 
Subject: Re: [openstack-dev] [Horizon] Draft team mascot

Very fixed, much color, so right. Wow.

+1 Radomir’s

From: Rob Cresswell (rcresswe) [mailto:rcres...@cisco.com]
Sent: Wednesday, December 7, 2016 4:50 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Horizon] Draft team mascot

Radomir’s version has my vote.

On 7 Dec 2016, at 10:11, Radomir Dopieralski 
> wrote:

Here, fixed.

On Wed, Dec 7, 2016 at 10:54 AM, Radomir Dopieralski 
>wrote:
That looks kinda like a white baboon. It definitely doesn't look like Doge -- 
wrong color, wrong head. I think the legs are too long too.

On Wed, Dec 7, 2016 at 10:31 AM, Timur Sufiev 
> wrote:
I still think this one 
https://wtf.jpg.wtf/0c/10/1479414543-0c1052f7c2f9990b6b0c472076594cb1.jpeg is 
the best :).

On Wed, Dec 7, 2016 at 1:07 AM Jason Rist 
> wrote:
On 12/06/2016 01:48 PM, Richard Jones wrote:
> >> On 6 Dec 2016, at 20:19, Richard Jones 
> >> > wrote:
> >> Please let me know what you think (by December 12) of this draft for
> >> our Horizon team mascot.
> >
> On 7 December 2016 at 07:38, Rob Cresswell (rcresswe)
> > wrote:
> > Are we missing an attachment / link ?
>
> Weird! Trying again:
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Much UI, such OpenStack, wow.

--
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
Freenode: jrist
github/twitter: knowncitizen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] trunk api performance and scale measurments

2016-12-07 Thread Brian Stajkowski
Yes, this actually has to do with the policy check on any list of objects,
including ports.  The patch has to do with bypassing this individual
object check for get and delete as the individual attribute checks are
bypassed. The mentality is, if you have a list of objects with a base
action of get_ports, you are checking each object against the base action,
so why check every object?  But I’m looking for someone to say, yes there
is a reason and this is why.  Other than that, this cuts the response time
in half, but I’m getting weird test failures with 3 tests as they are
related to a query count check, so there are some challenges.
--
Brian Stajkowski

Manager, Software Development - US
m: 702.575.7890
irc: ski
e: brian.stajkow...@rackspace.com




On 12/7/16, 3:38 AM, "John Davidge"  wrote:

>On 12/6/16, 6:06 PM, "Tidwell, Ryan"  wrote:
>
>>
>>I failed to make much mention of it in previous write-ups, but I also
>>encountered scale issues with listing ports after a certain threshold. I
>>haven’t gone back
>> to identify where the tipping point is, but I did notice that Horizon
>>began to really bog down as I added ports to the system. On the surface
>>it didn’t seem to matter whether these ports were used as subports or
>>not, the sheer volume of ports added to the
>> system seemed to cause both Horizon and more importantly GET on
>>v2.0/ports to really bog down.
>>
>>-Ryan
>
>Could this be related to https://bugs.launchpad.net/neutron/+bug/1611626 ?
>
>John
>
>
>
>Rackspace Limited is a company registered in England & Wales (company
>registered number 03897010) whose registered office is at 5 Millington
>Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy
>policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This
>e-mail message may contain confidential or privileged information
>intended for the recipient. Any dissemination, distribution or copying of
>the enclosed material is prohibited. If you receive this transmission in
>error, please notify us immediately by e-mail at ab...@rackspace.com and
>delete the original message. Your cooperation is appreciated.
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr][Magnum] How stable is kuryr-kubernetes?

2016-12-07 Thread Ton Ngo

Hi Mike,
On the Magnum side, Hongbin Lu and I have been tracking the Kuryr
driver for integration with Magnum.  We did some early integration with the
Mitaka version of the libnetwork driver, but we left those patches as
work-in-progress since there has been a major redesign, as you noted.  We have
been working with the Kuryr team on security issues, and as soon as the new
version is functional we will resume the work on Magnum.
We would be glad to work with you if you are interested.  You can ping
us on the Magnum IRC channel, #openstack-containers.
Ton Ngo,



From:   Antoni Segura Puimedon 
To: "OpenStack Development Mailing List (not for usage questions)"

Cc: i.maxim...@samsung.com, ash.bill...@samsung.com,
s.dya...@samsung.com, heetae82@samsung.com
Date:   12/06/2016 08:21 AM
Subject:Re: [openstack-dev] [Kuryr][Magnum] How stable is
kuryr-kubernetes?





On Tue, Dec 6, 2016 at 9:10 AM, Mikhail Fedosin  wrote:
  Hi folks!

Hi Mikhail!


  We at Samsung are trying to integrate containers with OpenStack and at this
  moment we are looking at Kubernetes deployed by Magnum, which works well
  enough for now.

  One challenge we have faced recently is making containers able to
  communicate with Nova VM instances (in other words, we want to integrate
  Neutron into Kubernetes), and Kuryr seems to be the right solution (based on
  its description). Unfortunately there is a lack of documentation, but
  from various presentations on YouTube I gathered that kuryr has been split
  into two projects (kuryr-libnetwork for Docker Swarm and kuryr-kubernetes for
  Kubernetes respectively, and they both share a common library called
  "kuryr").

That's exactly right!

  kuryr-libnetwork continues the previous work, which the community has been
  implementing for over a year. It looks stable; nevertheless it doesn't
  work with the latest Docker 1.12.

It works with 1.12, but not with 1.12's Swarm mode, since that is hardcoded
to use Docker's overlay driver; that is expected to change.

  kuryr-kubernetes is rather new, and I wonder if it can already be used
  (at least on devstack), or whether some further effort is required.

We have a previous python3 (and lbaasv1) only prototype that can be used to
test how it all works:

 https://github.com/midonet/kuryr/tree/k8s

With kuryr-kubernetes we are now reaching the stage of having services
supported again (they were supported in the above prototype). There is
devstack support for

https://github.com/openstack/kuryr-kubernetes

The current state is that CNI patch [1] is about to be merged and the
service watchers should come in soon.
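
(A minimal devstack local.conf sketch for trying that repo out; the plugin
name is assumed from the repo name, so treat it as illustrative rather than
authoritative:)

[[local|localrc]]
enable_plugin kuryr-kubernetes https://github.com/openstack/kuryr-kubernetes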



  Then please enlighten me about the current status of the Magnum-Kuryr
  integration. I saw that this was discussed in Barcelona and Austin, but
  in Magnum's master it's still unavailable. Also, it would be great if you
  could point me at the responsible person with whom I can discuss it in more
  detail and see how I can be involved in the development.

For Magnum integration we have to move kuryr-libnetwork's container-in-vm
support[2][3] (which is being merged this week) to kuryr-kubernetes (which
only supports bare-metal binding right now). Once that is done, work can
begin on Magnum using it in either macvlan, ipvlan, or vlan mode (there are
two vlan modes here: one container - one vlan, and one subnet - one vlan).

You can reach out to apuimedo (me), ivc_, irenab or vikasc about
kuryr-kubernetes and the same plus ltomasbo, lmdaly and mchiappero about
the container-in-vm.

Regards,

Toni


[1] https://review.openstack.org/#/c/404038/
[2] https://review.openstack.org/#/c/400365/
[3] https://review.openstack.org/#/c/402462/

  Thanks,
  Mike

  __

  OpenStack Development Mailing List (not for usage questions)
  Unsubscribe:
  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-07 Thread Jeremy Stanley
On 2016-12-07 12:14:06 -0600 (-0600), Ian Cordasco wrote:
[...]
> So I'm all for non-official projects using their own channels for
> meetings. My only wish (as someone working on a non-official
> project) would be that we could use meeting bot the same way we
> would in a meeting channel.

It's the same actual bot instance that's also logging your channel
conversations (if you have channel logging to
eavesdrop.openstack.org), the only difference is that in meeting
channels we grant it the mode necessary to be able to change channel
topics. It will even work without that, you just don't get your
channel topic updated automagically during meetings. We can also
fairly easily control that access on a per-channel basis (we have a
separate accessbot which sets channel permissions for us so it's
just a matter of making a very small change to a data file in a Git
repo).

> If it requires standing up additional instances of the meeting
> bot, I think it's fair for the companies sponsoring those projects
> to help openstack-infra with that, and I'd be willing to throw my
> own time in there for that too if necessary.
[...]

It likely will soon regardless because Freenode doesn't let any user
join >120 channels at a time so we'll almost certainly need to shard
channels across multiple meetbot instances soon anyway. There's a
thread starting on the infra ML about hacking on meetbot in concert
with devs from the Fedora community too, which you might want to
jump in on:

http://lists.openstack.org/pipermail/openstack-infra/2016-December/004951.html

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] unsupported drivers and their future

2016-12-07 Thread Pavlo Shchelokovskyy
HI all,

we (the ironic community) decided some time ago [0] to require third-party CI
for any driver that is present in the main ironic code tree. I'd like to
discuss the state of the currently unsupported drivers and how to proceed with
them.

Here is the current rundown; please correct me if I've got something wrong:

* AMT - already in ironic-staging-drivers repo, patch removing those from
ironic is on review [1]
* iBoot - already in ironic-staging-drivers repo, patch removing those from
ironic is on review [1]
* WakeOnLan - already in ironic-staging-drivers repo, patch removing those
from ironic is on review [1]
* IPMINative/Pyghmi - community driver, AFAIU the community still considers
these a viable alternative for the future and is constantly re-evaluating
the maturity of the pyghmi IPMI implementation, so they are to stay for now
* SSH - community driver, still used on several ironic gate jobs and in
jobs of other projects under Baremetal program (like bifrost). Besides
AFAIK quite a number of people use it for development. So it is to stay in
the tree for some more time too, at least until all upstream gate jobs are
moved to ipmitool-based drivers.
* SNMP - people are working to enable testing it in CI, patches are
landing, stays in tree
* VirtualBox - community driver, for testing only, VirtualBox can be used
via SSH driver and I am not aware of any plans for (third-party) CI for it
(although it would in principle be possible even in upstream). Is anyone
actually using this driver?
* MSFTOCS - vendor driver, I am not aware of any plans for third-party CI
* SeaMicro - vendor driver, I am not aware of any plans for third-party CI

Based on that I propose to remove VirtualBox, MSFTOCS and SeaMicro drivers
from ironic right away. If anybody is interested in supporting them they
would have to extract those drivers (together with unit tests and docs) to
separate repos or propose them to ironic-staging-drivers minding the
warning [2].

[0]
https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/third-party-ci.html
[1] https://review.openstack.org/#/c/397847
[2]
http://ironic-staging-drivers.readthedocs.io/en/latest/README.html#what-the-ironic-staging-drivers-is-not

Best regards,
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Draft team mascot

2016-12-07 Thread Ramirez, Eddie
Very fixed, much color, so right. Wow.

+1 Radomir’s

From: Rob Cresswell (rcresswe) [mailto:rcres...@cisco.com]
Sent: Wednesday, December 7, 2016 4:50 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Horizon] Draft team mascot

Radomir’s version has my vote.

On 7 Dec 2016, at 10:11, Radomir Dopieralski 
> wrote:

Here, fixed.

On Wed, Dec 7, 2016 at 10:54 AM, Radomir Dopieralski 
>wrote:
That looks kinda like a white baboon. It definitely doesn't look like Doge -- 
wrong color, wrong head. I think the legs are too long too.

On Wed, Dec 7, 2016 at 10:31 AM, Timur Sufiev 
> wrote:
I still think this one 
https://wtf.jpg.wtf/0c/10/1479414543-0c1052f7c2f9990b6b0c472076594cb1.jpeg is 
the best :).

On Wed, Dec 7, 2016 at 1:07 AM Jason Rist 
> wrote:
On 12/06/2016 01:48 PM, Richard Jones wrote:
> >> On 6 Dec 2016, at 20:19, Richard Jones 
> >> > wrote:
> >> Please let me know what you think (by December 12) of this draft for
> >> our Horizon team mascot.
> >
> On 7 December 2016 at 07:38, Rob Cresswell (rcresswe)
> > wrote:
> > Are we missing an attachment / link ?
>
> Weird! Trying again:
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Much UI, such OpenStack, wow.

--
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
Freenode: jrist
github/twitter: knowncitizen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Vasyl Saienko
On Wed, Dec 7, 2016 at 7:34 PM, Kevin Benton  wrote:

> >It works only when the whole switch is dedicated to a single customer; it will not
> work when several customers share the same switch.
>
> Do you know what vendors have this limitation? I know the broadcom
> chipsets didn't prevent this (we allowed VLAN rewrites scoped to ports at
> Big Switch). If it's common to Cisco/Juniper then I guess we are stuck
> reflecting bad hardware in the API. :(
>

@Kevin
It looks like I was wrong: in the example I provided I expected to
configure VLAN mapping on the Gig0/1 uplink. It will not work in that case,
but if the VLAN mapping is configured on the ports where the baremetal
servers are plugged in (i.e. Fa0/1 - 0/5) it should work :)
I definitely need more practice with VLAN mapping...



>
> On Wed, Dec 7, 2016 at 9:22 AM, Vasyl Saienko 
> wrote:
>
>>
>>
>> On Wed, Dec 7, 2016 at 7:12 PM, Kevin Benton  wrote:
>>
>>>
>>>
>>> On Wed, Dec 7, 2016 at 8:47 AM, Vasyl Saienko 
>>> wrote:
>>>
 @Armando: IMO the spec [0] is not about enablement of Trunks and
 baremetal. This spec is rather about trying to make user request with any
 network configuration (number of requested NICs) to be able successfully
 deployed on ANY ironic node (even when number of hardware interfaces is
 less than number of requested attached networks to instance) by implicitly
 creating neutron trunks on the fly.

 I have  a concerns about it and left a comment [1]. The guaranteed
 number of NICs on hardware server should be  available to user via nova
 flavor information. User should decide if he needs a trunk or not only by
 his own, as his image may even not support trunking. I suggest that
 creating trunks implicitly (w/o user knowledge) shouldn't happen.

 Current trunks implementation in Neutron will work just fine with
 baremetal case with one small addition:

 1. segmentation_type and segmentation_id should not be API mandatory
 fields at least for the case when provider segmentation is VLAN.

 2. User still should know what segmentation_id was picked to configure
 it on Instance side. (Not sure if it is done automatically via network
 metadata at the moment.). So it should be inherited from network
 provider:segmentation_id and visible to the user.


 @Kevin: Having VLAN mapping support on the switch will not solve
 problem described in scenario 3 when multiple users pick the same
 segmentation_id for different networks and their instances were spawned to
 baremetal nodes on the same switch.

 I don’t see other option than to control uniqueness of segmentation_id
 on Neutron side for baremetal case.

>>>
>>> Well unless there is a limitation in the switch hardware, VLAN mapping
>>> is scoped to each individual port so users can pick the same local
>>> segmentation_id. The point of the feature on switches is for when you have
>>> customers that specify their own VLANs and you need to map them to service
>>> provider VLANs (i.e. what is happening here).
>>>
>>
>> It works only when the whole switch is dedicated to a single customer; it will not
>> work when several customers share the same switch.
>>
>>
>>>
>>>

 Reference:

 [0] https://review.openstack.org/#/c/277853/
 [1] https://review.openstack.org/#/c/277853/10/specs/approved/VL
 AN-aware-baremetal-instances.rst@35

 On Wed, Dec 7, 2016 at 6:14 PM, Kevin Benton  wrote:

> Just to be clear, in this case the switches don't support VLAN
> translation (e.g. [1])? Because that also solves the problem you are
> running into. This is the preferable path for bare metal because it avoids
> exposing provider details to users and doesn't tie you to VLANs on the
> backend.
>
> 1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/
>
> On Wed, Dec 7, 2016 at 7:49 AM, Armando M.  wrote:
>
>>
>>
>> On 7 December 2016 at 04:02, Vasyl Saienko 
>> wrote:
>>
>>> Armando, Kevin,
>>>
>>> Thanks for your comments.
>>>
>>> To be more clear we are trying to use neutron trunks implementation
>>> with baremetal servers (Ironic). Baremetal servers are plugged to Tor 
>>> (Top
>>> of the Rack) switch. User images are spawned directly onto hardware.
>>>
>> Ironic uses Neutron ML2 drivers to plug baremetal servers to Neutron
>>> networks (it looks like changing vlan on the port to segmentation_id 
>>> from
>>> Neutron network, scenario 1 in the attachment). Ironic works with VLAN
>>> segmentation only for now, but some vendors ML2 like arista allows to 
>>> use
>>> VXLAN (in this case VXLAN to VLAN mapping is created on the switch.).
>>> Different users may have baremetal servers connected to the same ToR 
>>> 

Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-07 Thread Ian Cordasco
 

-Original Message-
From: Thierry Carrez 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: December 7, 2016 at 07:30:40
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [all] Creating a new IRC meeting room ?

> Dolph Mathews wrote:
> > [...]
> > I think it honestly reflects our current breakdown of contributors &
> > collaboration. The artificial scarcity model only helps a vocal minority
> > with cross-project focus, and just results in odd meeting times for the
> > majority of projects that don't hold primetime meeting slots.
> >
> > While I don't think we should do away with meetings rooms, if a project
> > wants to hold meetings at a convenient time in their normal channel, I
> > think that's fine. Meeting conflicts will always exist. Major conflicts
> > will be resolved without the additional pressure of artificial scarcity.
>  
> I tend to agree with that. Like I said in my intro, we may be past the
> point where the artificial scarcity model is hurting us more than it
> helps us.
>  
> So how about:
> - we enable an #openstack-meeting-5 to instantly relieve scheduling pressure
> - we allow teams to hold meetings in their project channel if they want
> to (and show them all on the meeting agenda through the irc-meetings
> repo) as long as the channel is logged
> - we still generally recommend to use meeting rooms whenever possible,
> so that you can benefit from outside presence and easy mentions/pings
> - we will proactively add additional meeting rooms when the resource
> becomes scarce again

So I'm all for non-official projects using their own channels for meetings. My 
only wish (as someone working on a non-official project) would be that we could 
use meeting bot the same way we would in a meeting channel. If it requires 
standing up additional instances of the meeting bot, I think it's fair for the 
companies sponsoring those projects to help openstack-infra with that, and I'd 
be willing to throw my own time in there for that too if necessary.

> Options:
> - Once the change is in place, we could also limit official meeting room
> usage to official projects (since non-official projects can hold a
> meeting in their own room and still have it mentioned on the agenda)
> - If we remove artificial scarcity, we could discontinue the
> #openstack-meeting-cp channel (which was created to facilitate the
> scheduling of cross-project temporary meetings) and just tell
> cross-project initiatives to use the regular channels

I think there's still value in #openstack-meeting-cp, but I don't feel strongly 
enough to argue against its removal.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Kevin Benton
>It works only when the whole switch is dedicated to a single customer; it will not
work when several customers share the same switch.

Do you know what vendors have this limitation? I know the broadcom chipsets
didn't prevent this (we allowed VLAN rewrites scoped to ports at Big
Switch). If it's common to Cisco/Juniper then I guess we are stuck
reflecting bad hardware in the API. :(
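
(For anyone mapping the scenario quoted below onto the API: the trunk it
describes, a parent port0 on net-1 plus a subport port1 on net-2 with
segmentation_id 300, would be created with something like the following OSC
commands; the names are illustrative and assume the trunk extension is
enabled:)

openstack port create --network net-1 port0
openstack port create --network net-2 port1
openstack network trunk create --parent-port port0 \
  --subport port=port1,segmentation-type=vlan,segmentation-id=300 trunk0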

On Wed, Dec 7, 2016 at 9:22 AM, Vasyl Saienko  wrote:

>
>
> On Wed, Dec 7, 2016 at 7:12 PM, Kevin Benton  wrote:
>
>>
>>
>> On Wed, Dec 7, 2016 at 8:47 AM, Vasyl Saienko 
>> wrote:
>>
>>> @Armando: IMO the spec [0] is not about enablement of Trunks and
>>> baremetal. This spec is rather about trying to make user request with any
>>> network configuration (number of requested NICs) to be able successfully
>>> deployed on ANY ironic node (even when number of hardware interfaces is
>>> less than number of requested attached networks to instance) by implicitly
>>> creating neutron trunks on the fly.
>>>
>>> I have  a concerns about it and left a comment [1]. The guaranteed
>>> number of NICs on hardware server should be  available to user via nova
>>> flavor information. User should decide if he needs a trunk or not only by
>>> his own, as his image may even not support trunking. I suggest that
>>> creating trunks implicitly (w/o user knowledge) shouldn't happen.
>>>
>>> Current trunks implementation in Neutron will work just fine with
>>> baremetal case with one small addition:
>>>
>>> 1. segmentation_type and segmentation_id should not be API mandatory
>>> fields at least for the case when provider segmentation is VLAN.
>>>
>>> 2. User still should know what segmentation_id was picked to configure
>>> it on Instance side. (Not sure if it is done automatically via network
>>> metadata at the moment.). So it should be inherited from network
>>> provider:segmentation_id and visible to the user.
>>>
>>>
>>> @Kevin: Having VLAN mapping support on the switch will not solve problem
>>> described in scenario 3 when multiple users pick the same segmentation_id
>>> for different networks and their instances were spawned to baremetal nodes
>>> on the same switch.
>>>
>>> I don’t see other option than to control uniqueness of segmentation_id
>>> on Neutron side for baremetal case.
>>>
>>
>> Well unless there is a limitation in the switch hardware, VLAN mapping is
>> scoped to each individual port so users can pick the same local
>> segmentation_id. The point of the feature on switches is for when you have
>> customers that specify their own VLANs and you need to map them to service
>> provider VLANs (i.e. what is happening here).
>>
>
> It works only when the whole switch is dedicated to a single customer; it will not
> work when several customers share the same switch.
>
>
>>
>>
>>>
>>> Reference:
>>>
>>> [0] https://review.openstack.org/#/c/277853/
>>> [1] https://review.openstack.org/#/c/277853/10/specs/approved/VL
>>> AN-aware-baremetal-instances.rst@35
>>>
>>> On Wed, Dec 7, 2016 at 6:14 PM, Kevin Benton  wrote:
>>>
 Just to be clear, in this case the switches don't support VLAN
 translation (e.g. [1])? Because that also solves the problem you are
 running into. This is the preferable path for bare metal because it avoids
 exposing provider details to users and doesn't tie you to VLANs on the
 backend.

 1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/

 On Wed, Dec 7, 2016 at 7:49 AM, Armando M.  wrote:

>
>
> On 7 December 2016 at 04:02, Vasyl Saienko 
> wrote:
>
>> Armando, Kevin,
>>
>> Thanks for your comments.
>>
>> To be more clear we are trying to use neutron trunks implementation
>> with baremetal servers (Ironic). Baremetal servers are plugged to Tor 
>> (Top
>> of the Rack) switch. User images are spawned directly onto hardware.
>>
> Ironic uses Neutron ML2 drivers to plug baremetal servers to Neutron
>> networks (it looks like changing vlan on the port to segmentation_id from
>> Neutron network, scenario 1 in the attachment). Ironic works with VLAN
>> segmentation only for now, but some vendors ML2 like arista allows to use
>> VXLAN (in this case VXLAN to VLAN mapping is created on the switch.).
>> Different users may have baremetal servers connected to the same ToR 
>> switch.
>>
>> By trying to apply current neutron trunking model leads to the
>> following errors:
>>
>> *Scenario 2: single user scenario, create VMs with trunk and
>> non-trunk ports.*
>>
>>- User create two networks:
>>net-1: (provider:segmentation_id: 100)
>>net-2: (provider:segmentation_id: 101)
>>
>>- User create 1 trunk port
>>port0 - parent port in net-1
>>port1 - subport in net-2 and define user segmentation_id: 300
>>

Re: [openstack-dev] [acceleration]Team Biweekly Meeting 2016.12.07 agenda

2016-12-07 Thread Miroslav Halas
Howard,

Thank you for sharing. I wasn’t able to join because I was quite confused about 
the channel to join (tried openstack-acc and openstack-meetinc-cp).

Is #openstack-cyborg the channel to use going forward?

Thanks,

Miro

From: Zhipeng Huang [mailto:zhipengh...@gmail.com]
Sent: Wednesday, December 07, 2016 11:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Miroslav Halas; rodolfo.alonso.hernan...@intel.com; Michele Paolino; Scott 
Kelso; Roman Dobosz; Jim Golden; pradeep.jagade...@huawei.com; 
michael.ro...@nokia.com; jian-feng.d...@intel.com; martial.mic...@nist.gov; 
Moshe Levi; Edan David; Francois Ozog; Fei K Chen; jack...@huawei.com; Harm 
Sluiman; li.l...@huawei.com
Subject: Re: [acceleration]Team Biweekly Meeting 2016.12.07 agenda

Hi Team,

Thanks for attending today's meeting and having a great discussion, please find 
the minutes at https://wiki.openstack.org/wiki/Cyborg/MeetingLogs

On Wed, Dec 7, 2016 at 4:30 PM, Zhipeng Huang 
> wrote:
Hi Team,

Please find the initial agenda for today's meeting at 
https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Next_meeting_:_UTC_1500.2C_Dec_7th

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado



--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Vasyl Saienko
On Wed, Dec 7, 2016 at 7:12 PM, Kevin Benton  wrote:

>
>
> On Wed, Dec 7, 2016 at 8:47 AM, Vasyl Saienko 
> wrote:
>
>> @Armando: IMO the spec [0] is not about enablement of Trunks and
>> baremetal. This spec is rather about trying to make user request with any
>> network configuration (number of requested NICs) to be able successfully
>> deployed on ANY ironic node (even when number of hardware interfaces is
>> less than number of requested attached networks to instance) by implicitly
>> creating neutron trunks on the fly.
>>
>> I have  a concerns about it and left a comment [1]. The guaranteed number
>> of NICs on hardware server should be  available to user via nova flavor
>> information. User should decide if he needs a trunk or not only by his own,
>> as his image may even not support trunking. I suggest that creating trunks
>> implicitly (w/o user knowledge) shouldn't happen.
>>
>> Current trunks implementation in Neutron will work just fine with
>> baremetal case with one small addition:
>>
>> 1. segmentation_type and segmentation_id should not be API mandatory
>> fields at least for the case when provider segmentation is VLAN.
>>
>> 2. User still should know what segmentation_id was picked to configure it
>> on Instance side. (Not sure if it is done automatically via network
>> metadata at the moment.). So it should be inherited from network
>> provider:segmentation_id and visible to the user.
>>
>>
>> @Kevin: Having VLAN mapping support on the switch will not solve problem
>> described in scenario 3 when multiple users pick the same segmentation_id
>> for different networks and their instances were spawned to baremetal nodes
>> on the same switch.
>>
>> I don’t see other option than to control uniqueness of segmentation_id on
>> Neutron side for baremetal case.
>>
>
> Well unless there is a limitation in the switch hardware, VLAN mapping is
> scoped to each individual port so users can pick the same local
> segmentation_id. The point of the feature on switches is for when you have
> customers that specify their own VLANs and you need to map them to service
> provider VLANs (i.e. what is happening here).
>

It works only when the whole switch is dedicated to a single customer; it will
not work when several customers share the same switch.


>
>
>>
>> Reference:
>>
>> [0] https://review.openstack.org/#/c/277853/
>> [1] https://review.openstack.org/#/c/277853/10/specs/approved/VL
>> AN-aware-baremetal-instances.rst@35
>>
>> On Wed, Dec 7, 2016 at 6:14 PM, Kevin Benton  wrote:
>>
>>> Just to be clear, in this case the switches don't support VLAN
>>> translation (e.g. [1])? Because that also solves the problem you are
>>> running into. This is the preferable path for bare metal because it avoids
>>> exposing provider details to users and doesn't tie you to VLANs on the
>>> backend.
>>>
>>> 1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/
>>>
>>> On Wed, Dec 7, 2016 at 7:49 AM, Armando M.  wrote:
>>>


 On 7 December 2016 at 04:02, Vasyl Saienko 
 wrote:

> Armando, Kevin,
>
> Thanks for your comments.
>
> To be more clear we are trying to use neutron trunks implementation
> with baremetal servers (Ironic). Baremetal servers are plugged to Tor (Top
> of the Rack) switch. User images are spawned directly onto hardware.
>
 Ironic uses Neutron ML2 drivers to plug baremetal servers to Neutron
> networks (it looks like changing vlan on the port to segmentation_id from
> Neutron network, scenario 1 in the attachment). Ironic works with VLAN
> segmentation only for now, but some vendors ML2 like arista allows to use
> VXLAN (in this case VXLAN to VLAN mapping is created on the switch.).
> Different users may have baremetal servers connected to the same ToR 
> switch.
>
> By trying to apply current neutron trunking model leads to the
> following errors:
>
> *Scenario 2: single user scenario, create VMs with trunk and non-trunk
> ports.*
>
>- User create two networks:
>net-1: (provider:segmentation_id: 100)
>net-2: (provider:segmentation_id: 101)
>
>- User create 1 trunk port
>port0 - parent port in net-1
>port1 - subport in net-2 and define user segmentation_id: 300
>
>- User boot VMs:
>BM1: with trunk (connected to ToR Fa0/1)
>BM4: in net-2 (connected to ToR Fa0/4)
>
>- VLAN on the switch are configured as follow:
>Fa0/1 - trunk, native 100, allowed vlan 300
>Fa0/4 - access vlan 101
>
> *Problem:* BM1 has no access BM4 on net-2
>
>
> *Scenario 3: multiple user scenario, create VMs with trunk.*
>
>- User1 create two networks:
>net-1: (provider:segmentation_id: 100)
>net-2: (provider:segmentation_id: 101)
>
>- User2 create two 

Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Kevin Benton
On Wed, Dec 7, 2016 at 8:47 AM, Vasyl Saienko  wrote:

> @Armando: IMO the spec [0] is not about enablement of Trunks and
> baremetal. This spec is rather about trying to make user request with any
> network configuration (number of requested NICs) to be able successfully
> deployed on ANY ironic node (even when number of hardware interfaces is
> less than number of requested attached networks to instance) by implicitly
> creating neutron trunks on the fly.
>
> I have concerns about it and left a comment [1]. The guaranteed number
> of NICs on a hardware server should be available to the user via nova flavor
> information. The user should decide whether he needs a trunk on his own,
> as his image may not even support trunking. I suggest that creating trunks
> implicitly (w/o user knowledge) shouldn't happen.
>
> Current trunks implementation in Neutron will work just fine with
> baremetal case with one small addition:
>
> 1. segmentation_type and segmentation_id should not be API mandatory
> fields at least for the case when provider segmentation is VLAN.
>
> 2. User still should know what segmentation_id was picked to configure it
> on Instance side. (Not sure if it is done automatically via network
> metadata at the moment.). So it should be inherited from network
> provider:segmentation_id and visible to the user.
>
>
> @Kevin: Having VLAN mapping support on the switch will not solve problem
> described in scenario 3 when multiple users pick the same segmentation_id
> for different networks and their instances were spawned to baremetal nodes
> on the same switch.
>
> I don’t see other option than to control uniqueness of segmentation_id on
> Neutron side for baremetal case.
>

Well unless there is a limitation in the switch hardware, VLAN mapping is
scoped to each individual port so users can pick the same local
segmentation_id. The point of the feature on switches is for when you have
customers that specify their own VLANs and you need to map them to service
provider VLANs (i.e. what is happening here).
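
To make the per-port scoping concrete, here is a small illustrative model in
Python (not any vendor's actual switch API): because the mapping key includes
the port, two tenants can both use local VLAN 300 without clashing on the
provider side.

    # Illustration only -- a toy model of per-port VLAN translation on a ToR
    # switch, not a real switch driver. The mapping key includes the port, so
    # the same customer-facing VLAN can map to different provider VLANs.
    vlan_map = {}

    def map_vlan(port, customer_vlan, provider_vlan):
        vlan_map[(port, customer_vlan)] = provider_vlan

    map_vlan('Fa0/1', 300, 101)   # User1 subport -> net-2 (provider vlan 101)
    map_vlan('Fa0/4', 300, 201)   # User2 subport -> net-4 (provider vlan 201)

    # Frames tagged 300 on Fa0/1 and on Fa0/4 end up in different provider
    # VLANs, which is how the scenario 3 conflict is avoided.
    assert vlan_map[('Fa0/1', 300)] != vlan_map[('Fa0/4', 300)]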


>
> Reference:
>
> [0] https://review.openstack.org/#/c/277853/
> [1] https://review.openstack.org/#/c/277853/10/specs/approved/
> VLAN-aware-baremetal-instances.rst@35
>
> On Wed, Dec 7, 2016 at 6:14 PM, Kevin Benton  wrote:
>
>> Just to be clear, in this case the switches don't support VLAN
>> translation (e.g. [1])? Because that also solves the problem you are
>> running into. This is the preferable path for bare metal because it avoids
>> exposing provider details to users and doesn't tie you to VLANs on the
>> backend.
>>
>> 1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/
>>
>> On Wed, Dec 7, 2016 at 7:49 AM, Armando M.  wrote:
>>
>>>
>>>
>>> On 7 December 2016 at 04:02, Vasyl Saienko 
>>> wrote:
>>>
 Armando, Kevin,

 Thanks for your comments.

 To be more clear we are trying to use neutron trunks implementation
 with baremetal servers (Ironic). Baremetal servers are plugged to Tor (Top
 of the Rack) switch. User images are spawned directly onto hardware.

>>> Ironic uses Neutron ML2 drivers to plug baremetal servers to Neutron
 networks (it looks like changing vlan on the port to segmentation_id from
 Neutron network, scenario 1 in the attachment). Ironic works with VLAN
 segmentation only for now, but some vendors ML2 like arista allows to use
 VXLAN (in this case VXLAN to VLAN mapping is created on the switch.).
 Different users may have baremetal servers connected to the same ToR 
 switch.

 By trying to apply current neutron trunking model leads to the
 following errors:

 *Scenario 2: single user scenario, create VMs with trunk and non-trunk
 ports.*

- User create two networks:
net-1: (provider:segmentation_id: 100)
net-2: (provider:segmentation_id: 101)

- User create 1 trunk port
port0 - parent port in net-1
port1 - subport in net-2 and define user segmentation_id: 300

- User boot VMs:
BM1: with trunk (connected to ToR Fa0/1)
BM4: in net-2 (connected to ToR Fa0/4)

- VLAN on the switch are configured as follow:
Fa0/1 - trunk, native 100, allowed vlan 300
Fa0/4 - access vlan 101

 *Problem:* BM1 has no access BM4 on net-2


 *Scenario 3: multiple user scenario, create VMs with trunk.*

- User1 create two networks:
net-1: (provider:segmentation_id: 100)
net-2: (provider:segmentation_id: 101)

- User2 create two networks:
net-3: (provider:segmentation_id: 200)
net-4: (provider:segmentation_id: 201)

- User1 create 1 trunk port
port0 - parent port in net-1
port1 - subport in net-2 and define user segmentation_id: 300

- User2 create 1 trunk port
port0 - parent port in net-3

Re: [openstack-dev] [Zun] About k8s integration

2016-12-07 Thread Denis Makogon
Hello Hongbin.

See inline comments.

Kind regards,
Denis Makogon

2016-12-07 2:56 GMT+02:00 Hongbin Lu :

> Hi all,
>
>
>
> This is a continued discussion of the k8s integration blueprint [1].
> Currently, Zun exposes a container-oriented APIs that provides service for
> end-users to operate on containers (i.e. CRUD). At the last team meeting,
> we discussed how to introduce k8s to Zun as an alternative to the Docker
> driver. There are two approaches that has been discussed:
>
>
>
> 1. Introduce the concept of Pod. If we go with this approach, an API
> endpoint (i.e. /pods) will be added to the Zun APIs. Both Docker driver and
> k8s driver need to implement this endpoint. In addition, all the future
> drivers need to implement this endpoint as well (or throw a NotImplemented
> exception). Some of our team members raised concerns about this approach.
> The main concern is that this approach will hide a lot of k8s-specific
> features (i.e. replication controller) or there will be a lot of work to
> bring all those features to Zun.
>

Exactly, I think the Pods concept shouldn't appear in Zun (it's all about
Magnum, isn't it?). The problem is that a k8s Pod is too different from a
Docker Swarm node, and different again from rkt. Zun is aimed to be an
abstraction on top of different container technologies, so all infrastructure
management should be delegated to Magnum.

I think it would make more sense to introduce an abstraction, let's say
"Datastore"; behind this abstraction we can hide the different types of
technologies (required connection attributes, etc.). If I needed to create a
container in Swarm I'd use "--datastore swarm.production.com"; if I needed to
attach a volume, I'd ask Magnum to do that, and likewise for whatever else I
would need in order to deploy the required Zun container.
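
To make the trade-off concrete, here is a minimal Python sketch of what
approach 1 implies on the driver side (class and method names are illustrative
only, not the actual Zun driver interface): every driver implements the
container CRUD, and pod operations either map onto the backend or raise
NotImplementedError.

    import abc

    class ContainerDriver(abc.ABC):
        """Illustrative driver contract; not the real Zun interface."""

        @abc.abstractmethod
        def create_container(self, context, name, image, **kwargs):
            """Create a single container on the backend."""

        def create_pod(self, context, name, containers, **kwargs):
            """Create a pod (a group of containers sharing a sandbox)."""
            raise NotImplementedError("this driver does not support pods")

    class DockerDriver(ContainerDriver):
        def create_container(self, context, name, image, **kwargs):
            # would call docker here; omitted in this sketch
            return {'driver': 'docker', 'name': name, 'image': image}

        def create_pod(self, context, name, containers, **kwargs):
            # emulate a pod as a sandbox plus a set of containers
            return {'driver': 'docker', 'sandbox': name,
                    'containers': [c['name'] for c in containers]}

    class K8sDriver(ContainerDriver):
        def create_container(self, context, name, image, **kwargs):
            # a single container becomes a one-container pod on k8s
            return self.create_pod(context, name,
                                   [{'name': name, 'image': image}])

        def create_pod(self, context, name, containers, **kwargs):
            # would call the k8s API here; omitted in this sketch
            return {'driver': 'k8s', 'pod': name,
                    'containers': [c['name'] for c in containers]}

The concern raised in the thread is visible right in the sketch: anything
k8s-specific beyond pods (e.g. replication controllers) either gets the same
treatment or stays hidden behind the unified API.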


>
>
>   $ zun pod-create … # this create a k8s pod (if k8s driver is used), or
> create a sandbox with a set of containers (if docker driver is used)
>
>   $ zun create … # this create a k8s pod with one container, or create a
> sandbox with one container
>
>
>
> 2. Introduce a dedicated k8s endpoint that acts as a proxy to k8s APIs.
> This will expose all the k8s features but users won’t have a unified APIs
> across drivers.
>
>
>

This is exactly an intersection with Magnum. Zun is meant to be
Containers-as-a-Service, but not Containers-infra-management-as-a-Service.
So, if I needed to deploy a container on a specific Pod, I would like to
have the capability to deploy it on that pod (no matter whether it was
deployed by Magnum or by 3rd-party tools outside of OpenStack); of course
there would be problems with Cinder volumes.

  $ zun k8s pod create … # this create a k8s pod
>
>   $ zun docker container create … # this create a docker container
>
>   $ zun create … # the behavior of this command is unclear
>
>
>
> So far, we haven’t decided which approach to use (or use a third
> approach), but we wanted to collect more feedback before making a decision.
> Thoughts?
>
>
>

So, overall, Zun should remain agnostic to container technologies like
Docker, k8s, rkt, CEO. All infrastructure management should be delegated
to Magnum, and Zun should consume the container technology's CRUD API and use
Magnum in order to modify the underlying Nova/Cinder resources.

Another question: why does Zun need a k8s pods CRUD API? Can't Zun talk to
Magnum to work with k8s?


> [1] https://blueprints.launchpad.net/zun/+spec/k8s-integration
>
>
>
> Best regards,
>
> Hongbin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Dev Digest November 26 to December 2

2016-12-07 Thread Kendall Nelson
Gave the wrong link for the HTML version before. Sorry!

Actual HTML version:
http://www.openstack.org/blog/2016/12/openstack-developer-mailing-list-digest-november-26-december-2/

On Mon, Dec 5, 2016 at 2:55 PM Jeremy Stanley  wrote:

> On 2016-12-05 20:45:08 + (+), Kendall Nelson wrote:
> [...]
> > Allowing Teams Based on Vendor-specific Drivers [10]
> >    - Option 1: https://review.openstack.org/403834 - Proprietary driver
> >      dev is unlevel
> >    - Option 2: https://review.openstack.org/403836 - Driver development
> >      can be level
> >    - Option 3: https://review.openstack.org/403839 - Level playing
> >      fields, except drivers
> >    - Option 4: https://review.openstack.org/403829 - establish a new
> >      "driver team" concept
> >        - Thierry prefers this option
> >    - Option 5: https://review.openstack.org/403830 - add resolution
> >      requiring teams to accept driver contributions
> >        - One of Flavio’s preferred options
> >    - Option 6: https://review.openstack.org/403826 - add a resolution
> >      allowing teams based on vendor-specific drivers
> >        - Flavio’s other preferred option
> [...]
>
> Worth noting, these map to options 1, 2, 4, 5, 6 and 7 from Doug's
> summary. His option #3 is missing above, which was:
>
> https://review.openstack.org/403838 - Stop requiring a level
> playing field
>
> That probably explains the numbering skew between the two summaries.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Vasyl Saienko
@Armando: IMO the spec [0] is not about enablement of trunks and baremetal.
This spec is rather about trying to make a user request with any network
configuration (number of requested NICs) deploy successfully on ANY ironic
node (even when the number of hardware interfaces is less than the number of
networks requested for the instance) by implicitly creating neutron trunks
on the fly.

I have concerns about it and left a comment [1]. The guaranteed number
of NICs on a hardware server should be available to the user via nova flavor
information. The user should decide whether he needs a trunk on his own,
as his image may not even support trunking. I suggest that creating trunks
implicitly (w/o user knowledge) shouldn't happen.

The current trunk implementation in Neutron will work just fine for the
baremetal case with one small addition:

1. segmentation_type and segmentation_id should not be mandatory API fields,
at least for the case when the provider segmentation is VLAN.

2. The user still needs to know which segmentation_id was picked in order to
configure it on the instance side (not sure if that is done automatically via
network metadata at the moment), so it should be inherited from the network's
provider:segmentation_id and visible to the user.


@Kevin: Having VLAN mapping support on the switch will not solve the problem
described in scenario 3, where multiple users pick the same segmentation_id
for different networks and their instances are spawned to baremetal nodes
on the same switch.

I don’t see any option other than controlling the uniqueness of
segmentation_id on the Neutron side for the baremetal case.
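
A rough sketch of the defaulting and uniqueness check being discussed
(illustration only, not Neutron code):

    def resolve_subport_segmentation(subport, network, used_vlans_on_switch):
        """Illustration only: default subport segmentation from its network.

        subport: dict that may or may not carry segmentation details
        network: dict with the provider attributes of the subport's network
        used_vlans_on_switch: set of segmentation IDs already bound on the ToR
        """
        seg_type = subport.get('segmentation_type',
                               network['provider:network_type'])
        seg_id = subport.get('segmentation_id',
                             network['provider:segmentation_id'])

        if seg_type != 'vlan':
            raise ValueError('only vlan segmentation is considered here')
        if seg_id in used_vlans_on_switch:
            raise ValueError('segmentation_id %s already used on this switch'
                             % seg_id)
        used_vlans_on_switch.add(seg_id)
        return seg_type, seg_id

    # The subport request omits segmentation, so net-2's provider
    # segmentation_id (101) is inherited and checked for uniqueness.
    net2 = {'provider:network_type': 'vlan', 'provider:segmentation_id': 101}
    print(resolve_subport_segmentation({}, net2, used_vlans_on_switch={100}))
    # -> ('vlan', 101)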

Reference:

[0] https://review.openstack.org/#/c/277853/
[1]
https://review.openstack.org/#/c/277853/10/specs/approved/VLAN-aware-baremetal-instances.rst@35

On Wed, Dec 7, 2016 at 6:14 PM, Kevin Benton  wrote:

> Just to be clear, in this case the switches don't support VLAN translation
> (e.g. [1])? Because that also solves the problem you are running into. This
> is the preferable path for bare metal because it avoids exposing provider
> details to users and doesn't tie you to VLANs on the backend.
>
> 1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/
>
> On Wed, Dec 7, 2016 at 7:49 AM, Armando M.  wrote:
>
>>
>>
>> On 7 December 2016 at 04:02, Vasyl Saienko  wrote:
>>
>>> Armando, Kevin,
>>>
>>> Thanks for your comments.
>>>
>>> To be more clear we are trying to use neutron trunks implementation with
>>> baremetal servers (Ironic). Baremetal servers are plugged to Tor (Top of
>>> the Rack) switch. User images are spawned directly onto hardware.
>>>
>> Ironic uses Neutron ML2 drivers to plug baremetal servers to Neutron
>>> networks (it looks like changing vlan on the port to segmentation_id from
>>> Neutron network, scenario 1 in the attachment). Ironic works with VLAN
>>> segmentation only for now, but some vendors ML2 like arista allows to use
>>> VXLAN (in this case VXLAN to VLAN mapping is created on the switch.).
>>> Different users may have baremetal servers connected to the same ToR switch.
>>>
>>> By trying to apply current neutron trunking model leads to the following
>>> errors:
>>>
>>> *Scenario 2: single user scenario, create VMs with trunk and non-trunk
>>> ports.*
>>>
>>>- User create two networks:
>>>net-1: (provider:segmentation_id: 100)
>>>net-2: (provider:segmentation_id: 101)
>>>
>>>- User create 1 trunk port
>>>port0 - parent port in net-1
>>>port1 - subport in net-2 and define user segmentation_id: 300
>>>
>>>- User boot VMs:
>>>BM1: with trunk (connected to ToR Fa0/1)
>>>BM4: in net-2 (connected to ToR Fa0/4)
>>>
>>>- VLAN on the switch are configured as follow:
>>>Fa0/1 - trunk, native 100, allowed vlan 300
>>>Fa0/4 - access vlan 101
>>>
>>> *Problem:* BM1 has no access BM4 on net-2
>>>
>>>
>>> *Scenario 3: multiple user scenario, create VMs with trunk.*
>>>
>>>- User1 create two networks:
>>>net-1: (provider:segmentation_id: 100)
>>>net-2: (provider:segmentation_id: 101)
>>>
>>>- User2 create two networks:
>>>net-3: (provider:segmentation_id: 200)
>>>net-4: (provider:segmentation_id: 201)
>>>
>>>- User1 create 1 trunk port
>>>port0 - parent port in net-1
>>>port1 - subport in net-2 and define user segmentation_id: 300
>>>
>>>- User2 create 1 trunk port
>>>port0 - parent port in net-3
>>>port1 - subport in net-4 and define user segmentation_id: 300
>>>
>>>- User1 boot VM:
>>>BM1: with trunk (connected to ToR Fa0/1)
>>>
>>>- User2 boot VM:
>>>BM4: with trunk (connected to ToR Fa0/4)
>>>
>>>- VLAN on the switch are configured as follow:
>>>Fa0/1 - trunk, native 100, allowed vlan 300
>>>Fa0/4 - trunk, native 200, allowed vlan 300
>>>
>>> *Problem:* User1 BM1 has access to User2 BM4 on net-2, Conflict in VLAN
>>> mapping provider vlan 101 should be mapped to user vlan 300, and provider
>>> vlan 201 should be also 

Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-07 Thread Sean McGinnis
On Wed, Dec 07, 2016 at 02:29:03PM +0100, Thierry Carrez wrote:
> 
> So how about:
> - we enable an #openstack-meeting-5 to instantly relieve scheduling pressure
> - we allow teams to hold meetings in their project channel if they want
> to (and show them all on the meeting agenda through the irc-meetings
> repo) as long as the channel is logged
> - we still generally recommend to use meeting rooms whenever possible,
> so that you can benefit from outside presence and easy mentions/pings
> - we will proactively add additional meeting rooms when the resource
> becomes scarce again

Sounds like a good plan to me.

> 
> Options:
> - Once the change is in place, we could also limit official meeting room
> usage to official projects (since non-official projects can hold a
> meeting in their own room and still have it mentioned on the agenda)

+1

> - If we remove artificial scarcity, we could discontinue the
> #openstack-meeting-cp channel (which was created to facilitate the
> scheduling of  cross-project temporary meetings) and just tell
> cross-project initiatives to use the regular channels

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [acceleration]Team Biweekly Meeting 2016.12.07 agenda

2016-12-07 Thread Zhipeng Huang
Hi Team,

Thanks for attending today's meeting and having a great discussion, please
find the minutes at https://wiki.openstack.org/wiki/Cyborg/MeetingLogs

On Wed, Dec 7, 2016 at 4:30 PM, Zhipeng Huang  wrote:

> Hi Team,
>
> Please find the initial agenda for today's meeting at
> https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Next_
> meeting_:_UTC_1500.2C_Dec_7th
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Kevin Benton
Just to be clear, in this case the switches don't support VLAN translation
(e.g. [1])? Because that also solves the problem you are running into. This
is the preferable path for bare metal because it avoids exposing provider
details to users and doesn't tie you to VLANs on the backend.

1. http://ipcisco.com/vlan-mapping-vlan-translation-%E2%80%93-part-2/

On Wed, Dec 7, 2016 at 7:49 AM, Armando M.  wrote:

>
>
> On 7 December 2016 at 04:02, Vasyl Saienko  wrote:
>
>> Armando, Kevin,
>>
>> Thanks for your comments.
>>
>> To be more clear we are trying to use neutron trunks implementation with
>> baremetal servers (Ironic). Baremetal servers are plugged to Tor (Top of
>> the Rack) switch. User images are spawned directly onto hardware.
>>
> Ironic uses Neutron ML2 drivers to plug baremetal servers to Neutron
>> networks (it looks like changing vlan on the port to segmentation_id from
>> Neutron network, scenario 1 in the attachment). Ironic works with VLAN
>> segmentation only for now, but some vendors ML2 like arista allows to use
>> VXLAN (in this case VXLAN to VLAN mapping is created on the switch.).
>> Different users may have baremetal servers connected to the same ToR switch.
>>
>> By trying to apply current neutron trunking model leads to the following
>> errors:
>>
>> *Scenario 2: single user scenario, create VMs with trunk and non-trunk
>> ports.*
>>
>>- User create two networks:
>>net-1: (provider:segmentation_id: 100)
>>net-2: (provider:segmentation_id: 101)
>>
>>- User create 1 trunk port
>>port0 - parent port in net-1
>>port1 - subport in net-2 and define user segmentation_id: 300
>>
>>- User boot VMs:
>>BM1: with trunk (connected to ToR Fa0/1)
>>BM4: in net-2 (connected to ToR Fa0/4)
>>
>>- VLAN on the switch are configured as follow:
>>Fa0/1 - trunk, native 100, allowed vlan 300
>>Fa0/4 - access vlan 101
>>
>> *Problem:* BM1 has no access BM4 on net-2
>>
>>
>> *Scenario 3: multiple user scenario, create VMs with trunk.*
>>
>>- User1 create two networks:
>>net-1: (provider:segmentation_id: 100)
>>net-2: (provider:segmentation_id: 101)
>>
>>- User2 create two networks:
>>net-3: (provider:segmentation_id: 200)
>>net-4: (provider:segmentation_id: 201)
>>
>>- User1 create 1 trunk port
>>port0 - parent port in net-1
>>port1 - subport in net-2 and define user segmentation_id: 300
>>
>>- User2 create 1 trunk port
>>port0 - parent port in net-3
>>port1 - subport in net-4 and define user segmentation_id: 300
>>
>>- User1 boot VM:
>>BM1: with trunk (connected to ToR Fa0/1)
>>
>>- User2 boot VM:
>>BM4: with trunk (connected to ToR Fa0/4)
>>
>>- VLAN on the switch are configured as follow:
>>Fa0/1 - trunk, native 100, allowed vlan 300
>>Fa0/4 - trunk, native 200, allowed vlan 300
>>
>> *Problem:* User1 BM1 has access to User2 BM4 on net-2, Conflict in VLAN
>> mapping provider vlan 101 should be mapped to user vlan 300, and provider
>> vlan 201 should be also mapped to vlan 300
>>
>>
>> Making segmentation_id on trunk subport optional and inheriting it from
>> port network segmentation_id solves such problems.
>> According to original spec both segmentation_type and segmentation_id are
>> optional [0].
>>
>> Does Neutron/Nova place information about user's VLAN onto instance via
>> network metadata?
>>
>> Reference:
>> [0] https://review.openstack.org/#/c/308521/1/specs/newton/v
>> lan-aware-vms.rst@118
>>
>
> Ah, I was actually going to add the following:
>
> Whether segmentation type and segmentation ID are mandatory or not depends
> on the driver in charge of the trunk. This is because for use cases like
> Ironic, as you wonder, these details may be inferred by the underlying
> network, as you point out.
>
> However, we have not tackled the Ironic use case just yet, for the main
> reason that ironic spec [1] is still WIP. So as far as newton is concerned,
> Ironic is not on the list of supported use cases for vlan-aware-vms, yet
> [2]. The reason why we have not tackled it yet is that there's the
> 'nuisance' in that a specific driver is known to the trunk plugin only at
> the time a parent port is bound and we hadn't come up with a clean and
> elegant way to developer a validator that took into account of it. I'll
> file a bug report to make sure this won't fall through the cracks. It'll be
> tagged with 'trunk'.
>
> [1] https://review.openstack.org/#/c/277853/
> [2] https://github.com/openstack/neutron/blob/master/
> neutron/services/trunk/rules.py#L215
>
> Cheers,
> Armando
>
>
>>
>> Thanks in advance,
>> Vasyl Saienko
>>
>> On Tue, Dec 6, 2016 at 7:08 PM, Armando M.  wrote:
>>
>>>
>>>
>>> On 6 December 2016 at 08:49, Vasyl Saienko 
>>> wrote:
>>>
 Hello Neutron Community,


 I've found that nice feature vlan-aware-vms was implemented in Newton
 [0].
 However the usage of 

Re: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver based LBaaSv2

2016-12-07 Thread Kosnik, Lubosz
Completely true Michael,

It’s like my mind is already in a state where this project will lose support in
the next few releases (2 being the standard for OpenStack). So personally I don’t
like to work on and contribute to something that will be gone in the future, and
not years from now but the near future from an OpenStack perspective.

Lubosz

On Dec 7, 2016, at 8:50 AM, Michael Johnson 
> wrote:

Lubosz,

I would word that very differently.  We are not dropping LBaaSv2
support.  It is not going away.  I don't want there to be confusion on
this point.

We are however, moving/merging the API from neutron into Octavia.
So, during this work the code will be transitioning repositories and
you will need to carefully synchronize and/or manage the changes in
both places.
Currently the API changes have patchsets up in the Octavia repository.
However, the old namespace driver has not yet been migrated over.

Michael


On Tue, Dec 6, 2016 at 8:46 AM, Kosnik, Lubosz 
> wrote:
Hello Zhi,
So currently we’re working on dropping LBaaSv2 support.
Octavia is a big-tent project providing LBaaS in OpenStack, and after merging
the LBaaS v2 API into Octavia we will deprecate that project and in the next 2
releases we’re planning to completely wipe out that code repository. If you
would like to help with LBaaS in OpenStack you’re more than welcome to start
working with us on Octavia.

Cheers,
Lubosz Kosnik
Cloud Software Engineer OSIC

On Dec 6, 2016, at 6:04 AM, Gary Kotton 
> wrote:

Hi,
I think that there is a move to Octavia. I suggest reaching out to that
community and see how these changes can be added. Sounds like a nice
addition
Thanks
Gary

From: zhi >
Reply-To: OpenStack List 
>
Date: Tuesday, December 6, 2016 at 11:06 AM
To: OpenStack List 
>
Subject: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver
based LBaaSv2

Hi, all

I am considering adding some new extensions for the HAProxy-driver-based
Neutron LBaaSv2.

Extension 1: multi-process support. By following this document [1], I
think we can let our HAProxy-based LBaaSv2 support this feature. By adding
this feature, we can improve load balancer performance.

Extension 2: HTTP keep-alive support. By following this document [2], we
can make our load balancers more efficient.


Any comments are welcome!

Thanks
Zhi Chang


[1]: http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#cpu-map
[2]:
http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#option%20http-keep-alive
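
For what it's worth, a rough sketch (not the actual namespace-driver template)
of the extra haproxy.cfg lines the two extensions boil down to, assuming
HAProxy >= 1.6 as in the referenced docs:

    def extra_haproxy_config(nb_processes, keep_alive=True):
        """Illustration only: render the additional haproxy.cfg directives.

        nbproc/cpu-map and 'option http-keep-alive' are the HAProxy 1.6
        directives referenced above; this is not the real LBaaS template.
        """
        global_section = ['global', '    nbproc %d' % nb_processes]
        # pin each worker process (1-based) to its own CPU core (0-based)
        global_section += ['    cpu-map %d %d' % (proc, proc - 1)
                           for proc in range(1, nb_processes + 1)]

        defaults_section = ['defaults']
        if keep_alive:
            defaults_section.append('    option http-keep-alive')

        return '\n'.join(global_section + defaults_section)

    print(extra_haproxy_config(4))
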
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Armando M.
On 7 December 2016 at 04:02, Vasyl Saienko  wrote:

> Armando, Kevin,
>
> Thanks for your comments.
>
> To be more clear we are trying to use neutron trunks implementation with
> baremetal servers (Ironic). Baremetal servers are plugged to Tor (Top of
> the Rack) switch. User images are spawned directly onto hardware.
>
Ironic uses Neutron ML2 drivers to plug baremetal servers to Neutron
> networks (it looks like changing vlan on the port to segmentation_id from
> Neutron network, scenario 1 in the attachment). Ironic works with VLAN
> segmentation only for now, but some vendors ML2 like arista allows to use
> VXLAN (in this case VXLAN to VLAN mapping is created on the switch.).
> Different users may have baremetal servers connected to the same ToR switch.
>
> By trying to apply current neutron trunking model leads to the following
> errors:
>
> *Scenario 2: single user scenario, create VMs with trunk and non-trunk
> ports.*
>
>- User create two networks:
>net-1: (provider:segmentation_id: 100)
>net-2: (provider:segmentation_id: 101)
>
>- User create 1 trunk port
>port0 - parent port in net-1
>port1 - subport in net-2 and define user segmentation_id: 300
>
>- User boot VMs:
>BM1: with trunk (connected to ToR Fa0/1)
>BM4: in net-2 (connected to ToR Fa0/4)
>
>- VLAN on the switch are configured as follow:
>Fa0/1 - trunk, native 100, allowed vlan 300
>Fa0/4 - access vlan 101
>
> *Problem:* BM1 has no access BM4 on net-2
>
>
> *Scenario 3: multiple user scenario, create VMs with trunk.*
>
>- User1 create two networks:
>net-1: (provider:segmentation_id: 100)
>net-2: (provider:segmentation_id: 101)
>
>- User2 create two networks:
>net-3: (provider:segmentation_id: 200)
>net-4: (provider:segmentation_id: 201)
>
>- User1 create 1 trunk port
>port0 - parent port in net-1
>port1 - subport in net-2 and define user segmentation_id: 300
>
>- User2 create 1 trunk port
>port0 - parent port in net-3
>port1 - subport in net-4 and define user segmentation_id: 300
>
>- User1 boot VM:
>BM1: with trunk (connected to ToR Fa0/1)
>
>- User2 boot VM:
>BM4: with trunk (connected to ToR Fa0/4)
>
>- VLAN on the switch are configured as follow:
>Fa0/1 - trunk, native 100, allowed vlan 300
>Fa0/4 - trunk, native 200, allowed vlan 300
>
> *Problem:* User1 BM1 has access to User2 BM4 on net-2, Conflict in VLAN
> mapping provider vlan 101 should be mapped to user vlan 300, and provider
> vlan 201 should be also mapped to vlan 300
>
>
> Making segmentation_id on trunk subport optional and inheriting it from
> port network segmentation_id solves such problems.
> According to original spec both segmentation_type and segmentation_id are
> optional [0].
>
> Does Neutron/Nova place information about user's VLAN onto instance via
> network metadata?
>
> Reference:
> [0] https://review.openstack.org/#/c/308521/1/specs/newton/
> vlan-aware-vms.rst@118
>

Ah, I was actually going to add the following:

Whether segmentation type and segmentation ID are mandatory or not depends
on the driver in charge of the trunk. This is because for use cases like
Ironic, as you wonder, these details may be inferred by the underlying
network, as you point out.

However, we have not tackled the Ironic use case just yet, for the main
reason that ironic spec [1] is still WIP. So as far as newton is concerned,
Ironic is not on the list of supported use cases for vlan-aware-vms, yet
[2]. The reason why we have not tackled it yet is that there's the
'nuisance' in that a specific driver is known to the trunk plugin only at
the time a parent port is bound and we hadn't come up with a clean and
elegant way to develop a validator that takes that into account. I'll
file a bug report to make sure this won't fall through the cracks. It'll be
tagged with 'trunk'.

[1] https://review.openstack.org/#/c/277853/
[2]
https://github.com/openstack/neutron/blob/master/neutron/services/trunk/rules.py#L215

Cheers,
Armando


>
> Thanks in advance,
> Vasyl Saienko
>
> On Tue, Dec 6, 2016 at 7:08 PM, Armando M.  wrote:
>
>>
>>
>> On 6 December 2016 at 08:49, Vasyl Saienko  wrote:
>>
>>> Hello Neutron Community,
>>>
>>>
>>> I've found that nice feature vlan-aware-vms was implemented in Newton
>>> [0].
>>> However the usage of this feature for regular users is impossible,
>>> unless I'm missing something.
>>>
>>> As I understood correctly it should work in the following way:
>>>
>>>1. It is possible to group neutron ports to trunks.
>>>2. When trunk is created parent port should be defined:
>>>Only one port can be parent.
>>>segmentation of parent port is set as native or untagged vlan on the
>>>trunk.
>>>3. Other ports may be added as subports to existing trunk.
>>>When subport is added to trunk *segmentation_type* and *segmentation_id
>>>*should be specified.
>>>

Re: [openstack-dev] [tripleo] Proposing Alex Schultz core on puppet-tripleo

2016-12-07 Thread Emilien Macchi
10 positive replies, I think it's a yes :-)

Thanks again Alex for your hard work, it's very appreciated.

On Fri, Dec 2, 2016 at 4:16 PM, Giulio Fidente  wrote:
> On 12/01/2016 11:26 PM, Emilien Macchi wrote:
>>
>> Team,
>>
>> Alex Schultz (mwhahaha on IRC) has been active on TripleO since a few
>> months now.  While he's very active in different areas of TripleO, his
>> reviews and contributions on puppet-tripleo have been very useful.
>> Alex is a Puppet guy and also the current PTL of Puppet OpenStack. I
>> think he perfectly understands how puppet-tripleo works. His
>> involvement in the project and contributions on puppet-tripleo deserve
>> that we allow him to +2 puppet-tripleo.
>>
>> Thanks Alex for your involvement and hard work in the project, this is
>> very appreciated!
>
>
> +1 !
>
>
> --
> Giulio Fidente
> GPG KEY: 08D733BA | IRC: gfidente
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver based LBaaSv2

2016-12-07 Thread Michael Johnson
Lubosz,

I would word that very differently.  We are not dropping LBaaSv2
support.  It is not going away.  I don't want there to be confusion on
this point.

We are however, moving/merging the API from neutron into Octavia.
So, during this work the code will be transitioning repositories and
you will need to carefully synchronize and/or manage the changes in
both places.
Currently the API changes have patchsets up in the Octavia repository.
However, the old namespace driver has not yet been migrated over.

Michael


On Tue, Dec 6, 2016 at 8:46 AM, Kosnik, Lubosz  wrote:
> Hello Zhi,
> So currently we’re working on dropping LBaaSv2 support.
> Octavia is a big-tent project providing LBaaS in OpenStack, and after merging
> the LBaaS v2 API into Octavia we will deprecate that project and in the next 2
> releases we’re planning to completely wipe out that code repository. If you
> would like to help with LBaaS in OpenStack you’re more than welcome to start
> working with us on Octavia.
>
> Cheers,
> Lubosz Kosnik
> Cloud Software Engineer OSIC
>
> On Dec 6, 2016, at 6:04 AM, Gary Kotton  wrote:
>
> Hi,
> I think that there is a move to Octavia. I suggest reaching out to that
> community and see how these changes can be added. Sounds like a nice
> addition
> Thanks
> Gary
>
> From: zhi 
> Reply-To: OpenStack List 
> Date: Tuesday, December 6, 2016 at 11:06 AM
> To: OpenStack List 
> Subject: [openstack-dev] [neutron][lbaas] New extensions for HAProxy driver
> based LBaaSv2
>
> Hi, all
>
> I am considering adding some new extensions for the HAProxy-driver-based
> Neutron LBaaSv2.
>
> Extension 1: multi-process support. By following this document [1], I
> think we can let our HAProxy-based LBaaSv2 support this feature. By adding
> this feature, we can improve load balancer performance.
>
> Extension 2: HTTP keep-alive support. By following this document [2], we
> can make our load balancers more efficient.
>
>
> Any comments are welcome!
>
> Thanks
> Zhi Chang
>
>
> [1]: http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#cpu-map
> [2]:
> http://cbonte.github.io/haproxy-dconv/1.6/configuration.html#option%20http-keep-alive
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Performance][shaker] Triangular topology

2016-12-07 Thread Matthieu Simonin


- Mail original -
> De: "Ilya Shakhat" 
> À: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Envoyé: Mardi 6 Décembre 2016 14:39:28
> Objet: Re: [openstack-dev] [Performance][shaker] Triangular topology
> 
> Hi Matt,
> 
> I would suggest to let users specify custom topology in Shaker scenario via
> graphs (e.g. directed triangle would look like: A -> B, B -> C, C -> A),
> where every pair of nodes is pair of VMs and every edge corresponds to the
> traffic flow. The above example will be deployed as 6 VMs, 2 per compute
> node (since we need to separate ingress and egress flows).

I totally agree as it could cover a lot of use cases.

> 
> I already have a patch that allows to deploy graph-based topology:
> https://review.openstack.org/#/c/407495/ but it does not configure
> concurrency properly yet (concurrency still increments by pairs, solution
> tbd)

I'm guessing that changing the semantic of concurrency with regard to the other
 scenarios is maybe not a good thing.

As far as I understand a concurrency of 3 with the following graph

- [A, B]
- [B, C]
- [C, A]

will lead to 3 flows (potentially bi-directional) being active.

So without changing the current semantics of concurrency
we could have all flows active, with a concurrency of 6 for the following:

graph:
- [A, B]
- [B, C]
- [C, A]
- [A, B]
- [B, C]
- [C, A]

In that case, what would a concurrency of 3 mean with the above graph?
In other words, can we make sure that [A,B], [B,C] and [C,A] are active?
More generally, for a custom graph, maybe we can find a way to specify in
the yaml which pairs should be active for a given concurrency level.
In the above case this could be (pseudo-yaml):
graph:
- [A, B],1
- [B, C],2
- [C, A],3
- [A, B],4
- [B, C],5
- [C, A],6

all pairs with a number less than or equal to the concurrency will be
considered active.
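
Something like this small sketch (plain Python, just to pin down the proposed
semantics; not Shaker code) would map a ranked graph plus a concurrency level
to the set of active pairs:

    def active_pairs(ranked_graph, concurrency):
        """Edges whose rank is <= concurrency are considered active.

        ranked_graph: list of (src, dst, rank) tuples, e.g. ('A', 'B', 1).
        Illustration only -- not Shaker code.
        """
        return [(src, dst) for src, dst, rank in ranked_graph
                if rank <= concurrency]

    graph = [('A', 'B', 1), ('B', 'C', 2), ('C', 'A', 3),
             ('A', 'B', 4), ('B', 'C', 5), ('C', 'A', 6)]

    print(active_pairs(graph, 3))       # [('A', 'B'), ('B', 'C'), ('C', 'A')]
    print(len(active_pairs(graph, 6)))  # 6 -> every flow active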

> 
> Please check whether my approach suits your use case, feedback appreciated
> :)

I like it !

> 
> Thanks,
> Ilya
> 
> 2016-11-24 19:57 GMT+04:00 Matthieu Simonin :
> 
> > Hi Ilya,
> >
> > Thanks for your answer, let me know your findings.
> > In any case I'll be glad to help if needed.
> >
> > Matt
> >
> > ps : I just realized that I missed a proper subjet to the thread :(.
> > If this thread continue it's maybe better to change that.
> >
> > - Mail original -
> > > De: "Ilya Shakhat" 
> > > À: "OpenStack Development Mailing List (not for usage questions)" <
> > openstack-dev@lists.openstack.org>
> > > Envoyé: Jeudi 24 Novembre 2016 13:03:33
> > > Objet: Re: [openstack-dev] [Performance][shaker]
> > >
> > > Hi Matt,
> > >
> > > Out of the box Shaker doesn't support such topology.
> > > It shouldn't be hard to implement though. Let me check what needs to be
> > > done.
> > >
> > > Thanks,
> > > Ilya
> > >
> > > 2016-11-24 13:49 GMT+03:00 Matthieu Simonin :
> > >
> > > > Hello,
> > > >
> > > > I'm looking to shaker capabilities and I'm wondering if this kind
> > > > of accomodation (see attachment also) can be achieved
> > > >
> > > > Ascii (flat) version :
> > > >
> > > > CN1 (2n VMs) <- n flows -> CN2 (2n VMs)
> > > > CN1 (2n VMs) <- n flows -> CN3 (2n VMs)
> > > > CN2 (2n VMs) <- n flows -> CN3 (2n VMs)
> > > >
> > > > In this situation concurrency could be mapped to the number of
> > > > simultaneous flows in use per link.
> > > >
> > > > Best,
> > > >
> > > > Matt
> > > >
> > > >
> > > > 
> > __
> > > > OpenStack Development Mailing List (not for usage questions)
> > > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > unsubscribe
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >
> > > >
> > >
> > > 
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> > unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [mistral][logo] Fwd: Mistral team draft logo

2016-12-07 Thread Jay Pipes
To me, it kind of looks like people jumping joyously off the top of a 
ferris wheel.


Best,
-jay

On 12/07/2016 02:21 AM, Renat Akhmerov wrote:

Please provide your feedback on the draft logo for Mistral.

My opinion: I think it looks pretty nice and it has valid associations
in my mind with what Mistral does. Looking forward to seeing it coloured.

Renat Akhmerov
@Nokia


Begin forwarded message:

*From: *Heidi Joy Tretheway >
*Subject: **Mistral team draft logo*
*Date: *2 December 2016 at 02:27:05 GMT+7
*To: *Renat Akhmerov >

Thanks to you and your team for your patience as our illustrators
created nearly 60 project mascot logos. As I alerted you earlier, we
weren’t happy with the initial draft, so we pushed our illustrators to
create a logo that we’re truly happy with before we shared it with
your team to review/react to it.

We now have a draft logo (without color) and we’d love for you to
share this form with your team for feedback: www.tinyurl.com/OSmascot
  - This really helps us stay
organized, evaluate conflicting feedback, and give a clear direction
to the illustrators!

Please feel free to share this with your team and we’d love to have
your feedback by Tuesday, Dec. 13. I will be out of the office Dec.
2-12 but I promise to respond to questions as swiftly as possible when
I return, and the feedback form includes an opportunity to flag me if
you’d like a personal reply to your comments on it.

We’re doing our best to get these ready for the PTG and I really
appreciate your team’s patience!


photo   
*Heidi Joy Tretheway*
Senior Marketing Manager, OpenStack Foundation
503 816 9769  | Skype: heidi.tretheway

  








__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Creating a new IRC meeting room ?

2016-12-07 Thread Thierry Carrez
Dolph Mathews wrote:
> [...]
> I think it honestly reflects our current breakdown of contributors &
> collaboration. The artificial scarcity model only helps a vocal minority
> with cross-project focus, and just results in odd meeting times for the
> majority of projects that don't hold primetime meeting slots.
> 
> While I don't think we should do away with meetings rooms, if a project
> wants to hold meetings at a convenient time in their normal channel, I
> think that's fine. Meeting conflicts will always exist. Major conflicts
> will be resolved without the additional pressure of artificial scarcity.

I tend to agree with that. Like I said in my intro, we may be past the
point where the artificial scarcity model is hurting us more than it
helps us.

So how about:
- we enable an #openstack-meeting-5 to instantly relieve scheduling pressure
- we allow teams to hold meetings in their project channel if they want
to (and show them all on the meeting agenda through the irc-meetings
repo) as long as the channel is logged
- we still generally recommend to use meeting rooms whenever possible,
so that you can benefit from outside presence and easy mentions/pings
- we will proactively add additional meeting rooms when the resource
becomes scarce again

Options:
- Once the change is in place, we could also limit official meeting room
usage to official projects (since non-official projects can hold a
meeting in their own room and still have it mentioned on the agenda)
- If we remove artificial scarcity, we could discontinue the
#openstack-meeting-cp channel (which was created to facilitate the
scheduling of cross-project temporary meetings) and just tell
cross-project initiatives to use the regular channels

Comments, thoughts ?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Vitrage] [Aodh] Vitrage – Aodh Integration

2016-12-07 Thread Weyl, Alexey (Nokia - IL)
Hi all,

My name is Alexey Weyl and I am a core contributor in the Vitrage team.

At the Austin OpenStack summit we discussed creating a new alarm type with the
Aodh team, and after explaining it we got approval [1].

Members of the Aodh team were in our design session at the OpenStack summit
in Barcelona, where we discussed this issue further.

Vitrage builds a graph from all the datasources it has (nova, heat, aodh,
zabbix, neutron, physical entities, and more), and when some states or alarms
in Vitrage change it may raise a new alarm or change other properties of an
entity in the graph.

Vitrage then pushes the raised alarms to Aodh so that all of the projects
will be aware of the alarm that has occurred.
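
Concretely, today this push amounts to something like the following call
against the Aodh v2 REST API (a rough sketch only; the endpoint, token
handling and the 'vitrage.*' event type are placeholders, and the name and
rule schema of the new alarm type are exactly what the blueprint still has to
define):

    import requests

    AODH_ENDPOINT = 'http://controller:8042'   # placeholder endpoint
    HEADERS = {'X-Auth-Token': '<token>'}      # placeholder token

    # An 'event' alarm as it exists in Aodh today, used here as a reference
    # point; a Vitrage-raised alarm would presumably use a new type (name TBD)
    # with its own *_rule object.
    alarm = {
        'name': 'vitrage-deduced-alarm-example',
        'type': 'event',
        'event_rule': {
            'event_type': 'vitrage.alarm.*',   # hypothetical event type
            'query': [],
        },
    }

    resp = requests.post(AODH_ENDPOINT + '/v2/alarms',
                         json=alarm, headers=HEADERS)
    resp.raise_for_status()
    print(resp.json()['alarm_id'])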

We are starting to work on the following BP proposal: 
https://review.openstack.org/#/c/408060/

In this regard, I have two questions:
1. What should this new alarm be called?
2. As I'm new in Aodh, do you have any pointers or suggestions about where to 
start?


Thanks in Advance,
Alexey

[1] https://etherpad.openstack.org/p/newton-telemetry-vitrage

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-odl] Neutron with OpenDaylight Boron controller

2016-12-07 Thread Elena Ezhova
Hi folks,

We are trying to deploy Neutron with OpenDaylight Boron-0.5.0 controller
using DevStack (master version) on Ubuntu 14.04, and there are a number of
issues we are facing.

First, we tried to deploy OpenDaylight with the odl-ovsdb-openstack feature
using the following local.conf [0]. In this case VMs successfully got IP
addresses and L2 connectivity was working, but there was no L3 connectivity,
meaning we couldn't ping the router IP from the qdhcp namespace and VMs in
different subnets couldn't reach each other.
We've also tried to deploy OpenDaylight odl-ovsdb-openstack with Neutron L3
agent by enabling q-l3 and setting ODL_L3=False in local.conf, but in this
case stack.sh failed due to br-int not having been created.

After that, we were advised on the opendaylight IRC channel to use the
odl-netvirt-openstack feature instead, as odl-ovsdb-openstack is kind of
deprecated.
When deploying Neutron+ODL with this feature we were referencing the
NetVirt demo from the latest ODL summit [1] as well as the networking-odl
DevStack guide [2], and our local.conf looked the following way [3].
As a result, though br-int is created and the corresponding tap interfaces
are added when a subnet or a VM is created, there is no L2 or L3 connectivity
and there are a lot of errors in the karaf logs:

   - Errors on start: http://paste.openstack.org/show/591432/
   - Logs during Neutron subnet create:
   http://paste.openstack.org/show/591437/
   - OVS ports and flows after a Neutron network with a subnet were
   created: http://paste.openstack.org/show/591642/ .
   Here it seems that not all flows had been created, for example there is
   a flow in table 51 that sends traffic to table 52 which is missing.
   - There are also tons of traces identical to the ones in bug [4], which
   should have been fixed in the Beryllium release.

Has anyone successfully deployed an operational Neutron+OpenDaylight with the
NetVirt or ovsdb feature recently? If so, we would really appreciate an
example of the local.conf that was used, as well as any clues as to
what we might be missing.

Thanks,
Elena

[0] http://paste.openstack.org/show/591645/
[1]
https://docs.google.com/presentation/d/1VLzRIOEptSOY1b0w4PezRIQ0gF5vx7GyLKECWXRV5mE/edit#slide=id.g17efbe8461_0_146
[2]
https://github.com/openstack/networking-odl/blob/master/devstack/README.rst
[3] http://paste.openstack.org/show/591641/
[4] https://bugs.opendaylight.org/show_bug.cgi?id=5275
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Vlan aware VMs or trunking

2016-12-07 Thread Vasyl Saienko
Armando, Kevin,

Thanks for your comments.

To be more clear, we are trying to use the neutron trunks implementation with
baremetal servers (Ironic). Baremetal servers are plugged into a ToR (Top of
the Rack) switch. User images are spawned directly onto hardware.
Ironic uses Neutron ML2 drivers to plug baremetal servers into Neutron
networks (it looks like changing the vlan on the port to the segmentation_id
from the Neutron network, scenario 1 in the attachment). Ironic works with
VLAN segmentation only for now, but some vendor ML2 drivers like arista allow
using VXLAN (in this case a VXLAN to VLAN mapping is created on the switch).
Different users may have baremetal servers connected to the same ToR switch.

Trying to apply the current neutron trunking model leads to the following
errors:

*Scenario 2: single user scenario, create VMs with trunk and non-trunk
ports.*

   - User create two networks:
   net-1: (provider:segmentation_id: 100)
   net-2: (provider:segmentation_id: 101)

   - User create 1 trunk port
   port0 - parent port in net-1
   port1 - subport in net-2 and define user segmentation_id: 300

   - User boot VMs:
   BM1: with trunk (connected to ToR Fa0/1)
   BM4: in net-2 (connected to ToR Fa0/4)

   - VLAN on the switch are configured as follow:
   Fa0/1 - trunk, native 100, allowed vlan 300
   Fa0/4 - access vlan 101

*Problem:* BM1 has no access BM4 on net-2


*Scenario 3: multiple user scenario, create VMs with trunk.*

   - User1 create two networks:
   net-1: (provider:segmentation_id: 100)
   net-2: (provider:segmentation_id: 101)

   - User2 create two networks:
   net-3: (provider:segmentation_id: 200)
   net-4: (provider:segmentation_id: 201)

   - User1 create 1 trunk port
   port0 - parent port in net-1
   port1 - subport in net-2 and define user segmentation_id: 300

   - User2 create 1 trunk port
   port0 - parent port in net-3
   port1 - subport in net-4 and define user segmentation_id: 300

   - User1 boot VM:
   BM1: with trunk (connected to ToR Fa0/1)

   - User2 boot VM:
   BM4: with trunk (connected to ToR Fa0/4)

   - VLAN on the switch are configured as follow:
   Fa0/1 - trunk, native 100, allowed vlan 300
   Fa0/4 - trunk, native 200, allowed vlan 300

*Problem:* User1 BM1 has access to User2 BM4 on net-2, Conflict in VLAN
mapping provider vlan 101 should be mapped to user vlan 300, and provider
vlan 201 should be also mapped to vlan 300


Making segmentation_id on the trunk subport optional and inheriting it from
the subport network's segmentation_id solves such problems.
According to the original spec both segmentation_type and segmentation_id are
optional [0].

Does Neutron/Nova place information about the user's VLAN onto the instance
via network metadata?
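
For reference, this is roughly what creating a trunk looks like against the
Newton trunk API today, with the two fields under discussion spelled out
(endpoint, token and UUIDs are placeholders):

    import requests

    NEUTRON = 'http://controller:9696'        # placeholder endpoint
    HEADERS = {'X-Auth-Token': '<token>'}     # placeholder token

    body = {
        'trunk': {
            'name': 'trunk0',
            'port_id': '<parent-port-uuid>',        # port0 in net-1
            'sub_ports': [{
                'port_id': '<subport-uuid>',        # port1 in net-2
                'segmentation_type': 'vlan',        # mandatory today
                'segmentation_id': 300,             # mandatory today
            }],
        },
    }

    resp = requests.post(NEUTRON + '/v2.0/trunks', json=body, headers=HEADERS)
    resp.raise_for_status()
    print(resp.json()['trunk']['id'])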

Reference:
[0]
https://review.openstack.org/#/c/308521/1/specs/newton/vlan-aware-vms.rst@118

Thanks in advance,
Vasyl Saienko

On Tue, Dec 6, 2016 at 7:08 PM, Armando M.  wrote:

>
>
> On 6 December 2016 at 08:49, Vasyl Saienko  wrote:
>
>> Hello Neutron Community,
>>
>>
>> I've found that nice feature vlan-aware-vms was implemented in Newton [0].
>> However the usage of this feature for regular users is impossible, unless
>> I'm missing something.
>>
>> As I understood correctly it should work in the following way:
>>
>>1. It is possible to group neutron ports to trunks.
>>2. When trunk is created parent port should be defined:
>>Only one port can be parent.
>>segmentation of parent port is set as native or untagged vlan on the
>>trunk.
>>3. Other ports may be added as subports to existing trunk.
>>When subport is added to trunk *segmentation_type* and *segmentation_id
>>*should be specified.
>>segmentation_id of subport is set as allowed vlan on the trunk
>>
>> Non-admin user do not know anything about *segmentation_type* and
>> *segmentation_id.*
>>
>
> Segmentation type and ID are used to multiplex/demultiplex traffic in/out
> of the guest associated to a particular trunk. Aside the fact that the only
> supported type is VLAN at the moment (if ever), the IDs are user provided
> to uniquely identify the traffic coming in/out of the trunked networks so
> that it can reach the appropriate vlan interface within the guest. The
> documentation [1] is still in flight, but it clarifies this point.
>
> HTH
> Armando
>
> [1] https://review.openstack.org/#/c/361776
>
>
>> So it is strange that those fields are mandatory when subport is added to
>> trunk. Furthermore they may conflict with port's network segmentation_id
>> and type. Why we can't inherit segmentation_type and segmentation_id from
>> network settings of subport?
>>
>> References:
>> [0] https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms
>> [1] https://review.openstack.org/#/c/361776/15/doc/networking-gu
>> ide/source/config-trunking.rst
>> [2] https://etherpad.openstack.org/p/trunk-api-dump-newton
>>
>> Thanks in advance,
>> Vasyl Saienko
>>
>> 
>> 

Re: [openstack-dev] [tricircle]agenda of weekly meeting Dec.7

2016-12-07 Thread Davanum Srinivas
Chaoyi,

Is there any interest in this work?
http://cs.brown.edu/~rfonseca/pubs/yu16netex.pdf
https://goo.gl/photos/hwHfMNo4xDMfVK8j8

Please let me know and i'll get you in touch with those folks.

Thanks,
Dims


On Wed, Dec 7, 2016 at 3:00 AM, joehuang  wrote:
> Hello, team,
>
> The bug smash and meetup last week went very well; let's continue the weekly
> meeting.
>
> Agenda of Dec.7 weekly meeting:
>
> Bug smash and meetup summary
> Ocata feature development review
> legacy tables clean after splitting
> Open Discussion
>
>
> How to join:
>
> #  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on
> every Wednesday starting from UTC 13:00.
>
>
> If you have other topics to be discussed in the weekly meeting, please
> reply to this mail.
>
>
> Best Regards
> Chaoyi Huang (joehuang)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] trunk api performance and scale measurments

2016-12-07 Thread John Davidge
On 12/6/16, 6:06 PM, "Tidwell, Ryan"  wrote:

>
>I failed to make much mention of it in previous write-ups, but I also
>encountered scale issues with listing ports after a certain threshold. I
>haven’t gone back
> to identify where the tipping point is, but I did notice that Horizon
>began to really bog down as I added ports to the system. On the surface
>it didn’t seem to matter whether these ports were used as subports or
>not, the sheer volume of ports added to the
> system seemed to cause both Horizon and more importantly GET on
>v2.0/ports to really bog down.
>
>-Ryan

Could this be related to https://bugs.launchpad.net/neutron/+bug/1611626 ?

John



Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Please avoid blueprint overload

2016-12-07 Thread Rob Cresswell
Hey all,

So this was discussed at the summit, but it's still being done; we keep hugely 
overloading blueprint scope. There are dozens of patches assigned to a vague 
"identity tables" blueprint, which was already marked Obsolete. Several of 
these were just small unrelated features that have nothing to do with Identity 
work, so those should just be wishlist bugs.

Please make new blueprints for each panel (Users, Domains, etc.) so that we can 
accurately track and prioritise work. I've -1'd a handful of the patches that 
were using the old blueprint, too.

I realise this seems like red tape, but it's really, really useful for me and 
Richard to track which patches are in flight for which efforts, so that we can 
make sure important features like the Users work get the appropriate attention.

Cheers,
Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Draft team mascot

2016-12-07 Thread Rob Cresswell (rcresswe)
Radomir’s version has my vote.

On 7 Dec 2016, at 10:11, Radomir Dopieralski 
> wrote:

Here, fixed.

On Wed, Dec 7, 2016 at 10:54 AM, Radomir Dopieralski 
>wrote:
That looks kinda like a white baboon. It definitely doesn't look like Doge -- 
wrong color, wrong head. I think the legs are too long too.

On Wed, Dec 7, 2016 at 10:31 AM, Timur Sufiev 
> wrote:
I still think this one 
https://wtf.jpg.wtf/0c/10/1479414543-0c1052f7c2f9990b6b0c472076594cb1.jpeg is 
the best :).

On Wed, Dec 7, 2016 at 1:07 AM Jason Rist 
> wrote:
On 12/06/2016 01:48 PM, Richard Jones wrote:
> >> On 6 Dec 2016, at 20:19, Richard Jones 
> >> > wrote:
> >> Please let me know what you think (by December 12) of this draft for
> >> our Horizon team mascot.
> >
> On 7 December 2016 at 07:38, Rob Cresswell (rcresswe)
> > wrote:
> > Are we missing an attachment / link ?
>
> Weird! Trying again:
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Much UI, such OpenStack, wow.

--
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
Freenode: jrist
github/twitter: knowncitizen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Draft team mascot

2016-12-07 Thread Radomir Dopieralski
Here, fixed.

On Wed, Dec 7, 2016 at 10:54 AM, Radomir Dopieralski  wrote:

> That looks kinda like a white baboon. It definitely doesn't look like Doge
> -- wrong color, wrong head. I think the legs are too long too.
>
> On Wed, Dec 7, 2016 at 10:31 AM, Timur Sufiev 
> wrote:
>
>> I still think this one https://wtf.jpg.wtf/0c/10/
>> 1479414543-0c1052f7c2f9990b6b0c472076594cb1.jpeg is the best :).
>>
>> On Wed, Dec 7, 2016 at 1:07 AM Jason Rist  wrote:
>>
>>> On 12/06/2016 01:48 PM, Richard Jones wrote:
>>> > >> On 6 Dec 2016, at 20:19, Richard Jones 
>>> wrote:
>>> > >> Please let me know what you think (by December 12) of this draft for
>>> > >> our Horizon team mascot.
>>> > >
>>> > On 7 December 2016 at 07:38, Rob Cresswell (rcresswe)
>>> >  wrote:
>>> > > Are we missing an attachment / link ?
>>> >
>>> > Weird! Trying again:
>>> >
>>> >
>>> >
>>> > 
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>> Much UI, such OpenStack, wow.
>>>
>>> --
>>> Jason E. Rist
>>> Senior Software Engineer
>>> OpenStack User Interfaces
>>> Red Hat, Inc.
>>> Freenode: jrist
>>> github/twitter: knowncitizen
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Draft team mascot

2016-12-07 Thread Radomir Dopieralski
That looks kinda like a white baboon. It definitely doesn't look like Doge
-- wrong color, wrong head. I think the legs are too long too.

On Wed, Dec 7, 2016 at 10:31 AM, Timur Sufiev  wrote:

> I still think this one https://wtf.jpg.wtf/0c/10/1479414543-
> 0c1052f7c2f9990b6b0c472076594cb1.jpeg is the best :).
>
> On Wed, Dec 7, 2016 at 1:07 AM Jason Rist  wrote:
>
>> On 12/06/2016 01:48 PM, Richard Jones wrote:
>> > >> On 6 Dec 2016, at 20:19, Richard Jones 
>> wrote:
>> > >> Please let me know what you think (by December 12) of this draft for
>> > >> our Horizon team mascot.
>> > >
>> > On 7 December 2016 at 07:38, Rob Cresswell (rcresswe)
>> >  wrote:
>> > > Are we missing an attachment / link ?
>> >
>> > Weird! Trying again:
>> >
>> >
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> Much UI, such OpenStack, wow.
>>
>> --
>> Jason E. Rist
>> Senior Software Engineer
>> OpenStack User Interfaces
>> Red Hat, Inc.
>> Freenode: jrist
>> github/twitter: knowncitizen
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Draft team mascot

2016-12-07 Thread Timur Sufiev
I still think this one
https://wtf.jpg.wtf/0c/10/1479414543-0c1052f7c2f9990b6b0c472076594cb1.jpeg is
the best :).

On Wed, Dec 7, 2016 at 1:07 AM Jason Rist  wrote:

> On 12/06/2016 01:48 PM, Richard Jones wrote:
> > >> On 6 Dec 2016, at 20:19, Richard Jones 
> wrote:
> > >> Please let me know what you think (by December 12) of this draft for
> > >> our Horizon team mascot.
> > >
> > On 7 December 2016 at 07:38, Rob Cresswell (rcresswe)
> >  wrote:
> > > Are we missing an attachment / link ?
> >
> > Weird! Trying again:
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> Much UI, such OpenStack, wow.
>
> --
> Jason E. Rist
> Senior Software Engineer
> OpenStack User Interfaces
> Red Hat, Inc.
> Freenode: jrist
> github/twitter: knowncitizen
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is this a correct use of scheduler hints and nova-scheduler

2016-12-07 Thread Zhenyu Zheng
Thanks a lot for the information, I will propose a blueprint for the next cycle
:)

On Wed, Dec 7, 2016 at 4:38 PM, Sylvain Bauza  wrote:

>
>
> Le 07/12/2016 04:21, Zhenyu Zheng a écrit :
> > Hi all,
> >
> > I want to ask a question about using scheduler-hints, could we add
> > custom scheduler keys to work with our custom filters? Is it designed to
> > allow vendors add own custom filters and keys?
> >
>
> I tend to disagree with that approach from an interoperability
> perspective as two clouds could behave very differently.
>
> That said, there is a long-standing problem with making scheduler hints
> extensible with regard to our API input validation [1], and we basically
> agreed to relax the constraints [2].
>
> Long story short, you *can* technically do that for a custom filter, but
> please take care with how you communicate that new hint to your customers
> and make it clear that the hint is not interoperable.
>
> Also, I beg you to make sure that the hint name is self-explanatory and
> sufficiently distinct from the other hints we already have so that
> confusion is kept to a minimum.
>
>
> > Another question is, as we have now persistent scheduler-hints in
> > request spec, is it possible to show the scheduler-hints either in
> > server-show or a new API? Because vendors may be interested to have an
> > idea on how this instance was built in the first place.
> >
>
> Well, I'd say it would be admin- or owner-only information, but yeah, that
> could be worth exposing.
> AFAIK, there is no current way to get that so a blueprint with a spec
> describing the problem and the proposal (including an API microversion)
> could be interesting to review.
>
> -Sylvain
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-
> June/067996.html
>
> [2]
> https://github.com/openstack/nova/blob/5cc5a841109b082395d9664edcfc11
> e31fb678fa/nova/api/openstack/compute/schemas/scheduler_hints.py#L67-L71
>
> > Thanks.
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 4

2016-12-07 Thread Sylvain Bauza


Le 07/12/2016 01:07, melanie witt a écrit :
> On Tue, 6 Dec 2016 16:04:14 -0500, Jay Pipes wrote:
>> We're discussing only doing:
>>
>>  GET /resource_providers?
>>
>> Once we start doing claims in the scheduler, we'll have the ability to
>> do:
>>
>>  POST /allocations
>>  {
>>  <... all kinds of stuff :)>
>>  }
> 
> Thanks. FWIW, I'm not against simple non-JSON query params.
> 
> The last time we discussed this, I was against the idea of JSON blobs in
> query params from a usability standpoint and it was noted that GET with
> a request body isn't guaranteed to be forwarded properly when going
> through proxies because it's not described in the HTTP specification. So
> with that information, I thought POST as a read to e.g.
> /resource_providers/list was the best compromise.
> 
> That all arose because complex JSON bodies were described as a
> possibility for RP list requests. If that's not the case, then I didn't
> think we need to consider POST.
> 

FWIW, I think POST is not that complex, and it leaves us room for further
request information like traits without defeating the purpose of having
something RESTful.
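
To make the comparison concrete, a purely illustrative sketch of the two
request shapes being debated is below; the parameter names, the
'required_traits' filter and the '/resource_providers/list' path are
assumptions for illustration, not the agreed API:

# Illustrative only -- the parameter names, the 'required_traits' filter and
# the '/resource_providers/list' path are assumptions, not the final API.
import requests

PLACEMENT = 'http://controller:8778'
HEADERS = {'X-Auth-Token': '<keystone-token>'}

# Shape 1: a GET with flat query parameters.
requests.get(PLACEMENT + '/resource_providers',
             params={'resources': 'VCPU:2,MEMORY_MB:2048,DISK_GB:20'},
             headers=HEADERS)

# Shape 2: a POST used as a read, whose JSON body leaves room for richer
# request information (e.g. required traits) later on.
requests.post(PLACEMENT + '/resource_providers/list',
              json={'resources': {'VCPU': 2,
                                  'MEMORY_MB': 2048,
                                  'DISK_GB': 20},
                    'required_traits': ['HW_CPU_X86_AVX2']},
              headers=HEADERS)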

The proposal is up, comments welcome
https://review.openstack.org/#/c/392569/

-Sylvain

> -melanie
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Is this a correct use of scheduler hints and nova-scheduler

2016-12-07 Thread Sylvain Bauza


Le 07/12/2016 04:21, Zhenyu Zheng a écrit :
> Hi all,
> 
> I want to ask a question about using scheduler-hints, could we add
> custom scheduler keys to work with our custom filters? Is it designed to
> allow vendors add own custom filters and keys?
> 

I tend to disagree with that approach from an interoperability
perspective as two clouds could behave very differently.

That said, there is a long-standing problem with making scheduler hints
extensible with regard to our API input validation [1], and we basically
agreed to relax the constraints [2].

Long story short, you *can* technically do that for a custom filter, but
please take care with how you communicate that new hint to your customers
and make it clear that the hint is not interoperable.

Also, I beg you to make sure that the hint name is self-explanatory and
sufficiently distinct from the other hints we already have so that
confusion is kept to a minimum.
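
For illustration, a minimal sketch of such a filter is below; the filter class
and the 'rack_affinity' hint name are made up, and it assumes the deployment
publishes a 'rack' value in its compute node stats -- only BaseHostFilter and
RequestSpec.get_scheduler_hint() are the real extension points here:

# Sketch only: 'rack_affinity' and the 'rack' stat are made-up examples.
from nova.scheduler import filters


class RackAffinityFilter(filters.BaseHostFilter):
    """Only pass hosts whose reported 'rack' matches the custom hint."""

    def host_passes(self, host_state, spec_obj):
        wanted_rack = spec_obj.get_scheduler_hint('rack_affinity')
        if wanted_rack is None:
            # No hint supplied: do not restrict scheduling at all.
            return True
        return host_state.stats.get('rack') == wanted_rack

# Client side, the relaxed schema [2] lets the extra key through, e.g. with
# python-novaclient:
#   nova.servers.create(name, image, flavor,
#                       scheduler_hints={'rack_affinity': 'rack-12'})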


> Another question is, as we have now persistent scheduler-hints in
> request spec, is it possible to show the scheduler-hints either in
> server-show or a new API? Because vendors may be interested to have an
> idea on how this instance was built in the first place.
>

Well, I'd say it would be admin- or owner-only information, but yeah, that
could be worth exposing.
AFAIK, there is no current way to get that so a blueprint with a spec
describing the problem and the proposal (including an API microversion)
could be interesting to review.

-Sylvain

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-June/067996.html

[2]
https://github.com/openstack/nova/blob/5cc5a841109b082395d9664edcfc11e31fb678fa/nova/api/openstack/compute/schemas/scheduler_hints.py#L67-L71

> Thanks.
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [acceleration]Team Biweekly Meeting 2016.12.07 agenda

2016-12-07 Thread Zhipeng Huang
Hi Team,

Please find the initial agenda for today's meeting at
https://wiki.openstack.org/wiki/Meetings/CyborgTeamMeeting#Next_meeting_:_UTC_1500.2C_Dec_7th


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]agenda of weekly meeting Dec.7

2016-12-07 Thread joehuang
Hello, team,

The bug smash and meetup last week went very well; let's continue the weekly 
meeting.

Agenda of Dec.7 weekly meeting:

  1.  Bug smash and meetup summary
  2.  Ocata feature development review
  3.  legacy tables clean after splitting
  4.  Open Discussion

How to join:
#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on 
every Wednesday starting from UTC 13:00.


If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev