[openstack-dev] [Bareon][Fuel] Stevedore extensions system for Nailgun

2016-01-18 Thread Sylwester Brzeczkowski
Hi,

Recently I was working on making Nailgun extensions more usable. The current
extensions system has great potential for extending Nailgun's features, e.g.
integration with other services. That potential is blocked by the fact that
all extensions must be placed in Nailgun's source code and be explicitly
imported and added to the global "extensions list" [0].

There is an idea to use stevedore to unblock the current extensions system.
Stevedore allows us to create an extension as a separate package and make it
available simply by installing it on the Fuel Master node. Nailgun will
automatically detect that the new extension is ready for use, and the user
will be able to turn the extension on via fuelclient [1] or directly via
Nailgun's REST API [2].
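As an illustration, declaring such an out-of-tree extension with a stevedore
entry point could look roughly like this (the `nailgun.extensions` namespace
and the package/class names are assumptions for the sake of the example, not
necessarily what the spec defines):

```ini
# setup.cfg of a hypothetical out-of-tree extension package.
# Installing this package on the Fuel Master node is all that is
# needed for stevedore-based discovery.
[metadata]
name = bareon-fuel-extension

[entry_points]
nailgun.extensions =
    bareon = bareon_fuel_extension.extension:BareonExtension
```

On the Nailgun side, a stevedore ExtensionManager pointed at that namespace
would then discover every installed package advertising it, with no changes
to Nailgun's source tree.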

I've prepared a demo [3] in which I show how to use extensions, using
integration with Bareon-API [4] as an example. The extension is placed in a
separate repo [5].

Please look at the spec for stevedore-based extensions for Nailgun [6] and
review it as soon as possible so we can move the initiative forward.
Any feedback is welcome!

Thank you!

[0]
https://github.com/openstack/fuel-web/blob/stable/8.0/nailgun/nailgun/extensions/base.py#L23-L36
[1] https://review.openstack.org/#/c/264699/
[2] https://review.openstack.org/#/c/264714/
[3] https://www.youtube.com/watch?v=8r0yaoPWtlY
[4] https://review.openstack.org/#/c/259072/
[5] https://github.com/gitfred/bareon-fuel-extension
[6] https://review.openstack.org/#/c/263738/

-- 
*Sylwester Brzeczkowski*
Junior Python Software Engineer
Product Development-Core : Product Engineering
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] api working group proposed actions guideline needs input

2016-01-18 Thread Michael Krotscheck
There's related work going on in Ironic -
https://review.openstack.org/#/c/224022/
- it proposes a way for a user to query the transitions (actions) of the
underlying FSM: "Get me all the valid actions/transitions". There are other
comments on the review.
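As a purely hypothetical illustration (the field names below are invented,
not taken from the Ironic spec), a GET on a transitions sub-resource of a
node could return something like:

```json
{
  "current_state": "available",
  "valid_transitions": [
    {"action": "active", "target_state": "deploying"},
    {"action": "manage", "target_state": "manageable"}
  ]
}
```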

Also: I like using the same word for things. Is there a meaningful enough
difference between "Actions", "Transitions", and "Tasks" that it requires
different words?

Michael

On Fri, Jan 15, 2016 at 9:17 AM Chris Dent  wrote:

>
> In this review:
>
>  https://review.openstack.org/#/c/234994/
>
> there's a proposal that provides guidance for how to represent certain
> types of actions against resources in an HTTP API. There's been a fair
> bit of back and forth between me and the original author without
> reaching a conclusion.
>
> It would be great to get additional eyes on this spec so that we
> could reach agreement. It is quite likely everyone involved is wrong
> in some fashion or there are misunderstandings happening. If you're
> working to implement actions in APIs, or just thinking about it,
> pile on.
>
> There's quite a lot of meat in the comments, discussing various
> alternatives.
>
> Thanks.
>
> --
> Chris Dent   (╯°□°)╯︵┻━┻ http://anticdent.org/
> freenode: cdent tw: @anticdent


[openstack-dev] [kolla] Heka POC

2016-01-18 Thread Eric LEMOINE
Hi Kolla devs

So I've identified the following tests/POCs:

* Verify Heka can read logs from containers that log to stdout
* Verify Heka can read logs from containers that log to Syslog (/dev/log)
* Verify Heka can write logs to local files (as currently done using Rsyslog)

If these three POCs succeed, it will mean that we may be able to
remove Rsyslog entirely, and I'll write the spec accordingly.

When I am done with these tests I'll report back to the mailing list,
and then continue working on the specs.

Do we agree with that?

Thanks.



[openstack-dev] Add new plugins to neutron by the way of sub-project

2016-01-18 Thread hao li
A very happy New Year 2016! First of all, I don't know whether Neutron's
contributors will receive this letter. If not, could you tell me how to
contact them? I'm sorry to bother you this way, but I can't find another
way to get a reply.

We are a Neutron team. We have added a plugin to ML2 to support our
company's controllers. In the spirit of the "four opens", we want to open
this code as a sub-project. Of course, our team is trying to make our
plugins conform to the specifications. Could you take some time to have a
look at our code and documents? Thanks a lot.

Hao Li


[openstack-dev] changing in neutron code

2016-01-18 Thread Atif Saeed





Hi All,

I am a newbie in the OpenStack developer community. I want to make some
modifications in the Devstack/Kilo Neutron code. Can anyone guide me on how
to test modifications in the code?

(1) Do I need to define my own module/class?
(2) Can I add some functions (defs) to the Neutron code?

And, importantly, how do I test whether my code is working or not?

Suggestions are highly appreciated and welcome. I really need help.

A.





Re: [openstack-dev] [cinder] Object backporting and the associated service

2016-01-18 Thread Duncan Thomas
On 5 January 2016 at 18:55, Ryan Rossiter wrote:

> This is definitely good to know. Are you planning on setting up something
> off to the side of o.vo within that holds a dictionary of all values for a
> release? Something like:
>
> {‘liberty’: {‘volume’: ‘1.3’, …},
>  ‘mitaka’: {‘volume’: ‘1.8’, …}, }
>
> With the possibility of replacing the release name with the RPC version or
> some other version placeholder. Playing devil’s advocate, how does this
> work out if I want to be continuously deploying Cinder from HEAD?


As far as I know (the design has iterated a bit, but I think I'm still
right), there is no need for such a table - before you start a rolling
upgrade, you call the 'pin now' API, and all of the services write their
max supported version to the DB. Once the DB has been written to by all
services, the running services can then read that table and cache the max
value. Any new services brought up will also build a max-version cache on
startup. Once everything is upgraded, you can call 'pin now' again and the
services can figure out a new (hopefully higher) version limit.
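The scheme Duncan describes (each service reports its maximum supported
version, and everyone pins to the minimum of those maxima) can be sketched
roughly like this; the function names and table layout are illustrative,
not Cinder's actual implementation:

```python
# Illustrative sketch of the "pin now" version-capping scheme.
# Each running service records the highest RPC/object version it
# understands; the effective pinned version is the minimum of those,
# so every service only sends what the oldest peer can receive.

def report_max_version(db, service_id, max_version):
    """Called by each service on 'pin now': write its max supported version."""
    db[service_id] = max_version

def compute_pinned_version(db):
    """Once all services have reported, cap everyone at the lowest max."""
    return min(db.values())

# Example: a mixed Liberty/Mitaka deployment.
reported = {}
report_max_version(reported, "c-vol-1", (1, 8))   # Mitaka-level service
report_max_version(reported, "c-vol-2", (1, 3))   # Liberty-level service
report_max_version(reported, "c-sch-1", (1, 8))

pinned = compute_pinned_version(reported)  # (1, 3): capped by c-vol-2

# After upgrading c-vol-2 and calling 'pin now' again, the pin can rise.
report_max_version(reported, "c-vol-2", (1, 8))
```

This also suggests an answer to the continuous-deployment question: the pin
is derived from what is actually running, not from a release-name table.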


[openstack-dev] [puppet] weekly meeting #67

2016-01-18 Thread Emilien Macchi
Hello Puppeteers,

Tomorrow we will have our weekly meeting at 1500 UTC.
Here is our agenda:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160119

Feel free to add more topics, reviews, bugs, as usual.

See you there,
-- 
Emilien Macchi





Re: [openstack-dev] [glance][artifacts] FFE for Glare specs

2016-01-18 Thread Flavio Percoco

On 14/01/16 14:59 +, Alexander Tivelkov wrote:

Hi,

Unfortunately I missed the "spec freeze exception week" due to a long
holiday season, so I'd like to ask for a freeze exception for the following
two specs now:

1. Add more implementation details to 'deprecate-v3-api' [1]
2. Glare Public API [2]
 


Hey Alex,

We're evaluating these specs and we should have feedback before the end of the 
day.

Thanks for sending the email out,
Flavio


Spec [1] is actually a patch adding more concrete details to the spec that
describes the removal of the Glance v3 API in favour of a standalone Glare
v0.1 API ([3]), which was accepted for Mitaka and merged. So it makes no
sense to me to accept [3] but postpone [1], which just adds more details to
the very same job.

The second spec ([2]) aims to stabilise the Glare API by addressing DefCore
and API-WG comments on the currently present API. Discussions of this API
tend to take a long time, but the actual implementation is really quick
(since these are just changes in API routers, with the same domain, DB and
code underneath), and I believe that we will still be able to do this work
in Mitaka even if the spec is approved much later in the cycle. Also, we've
agreed that our FastTrack approach should still apply to this type of work,
which means a much smaller review burden.

Thanks for considering this

[1] https://review.openstack.org/#/c/259427/
[2] https://review.openstack.org/#/c/254710/
[3] https://review.openstack.org/#/c/254163/
--
Regards,
Alexander Tivelkov






--
@flaper87
Flavio Percoco




Re: [openstack-dev] [all] [api] API Working Group Refresher

2016-01-18 Thread Everett Toews

On Jan 17, 2016, at 8:56 PM, Qiming Teng wrote:

On Fri, Jan 15, 2016 at 12:48:51PM +, Chris Dent wrote:

At yesterday's API Working Group meeting we decided it would be a
good idea to send out a refresher on the existence of the group,
its goals and activities. If you have interest in the improvement
and standardization of OpenStack APIs please take this as an
invitation to participate.

The group meets once a week in openstack-meeting-3 on Thursdays
alternating between 00:00 UTC and 16:00 UTC[0].

The meeting time is quite confusing based on info on the wiki. It says
that 'next' meeting would be 2016-01-28 at 00:00UTC, previous meeting
was 2016-01-14 at 16:00UTC. Do we have a meeting on 2016-01-21? What
timeslot should it be? 16:00UTC or 00:00UTC?

I updated the meeting page to have the correct next meeting at 2016-01-21 at 
00:00 UTC.

Apologies for the confusion.

Everett


Re: [openstack-dev] [Fuel] nova-network removal

2016-01-18 Thread Igor Kalnitsky
Roman, Sheena,

Did you mean removing nova-network completely, or only for new
environments? Should we still support nova-network for old (let's say, 7.0)
clusters?

Thanks,
Igor

On Fri, Jan 15, 2016 at 10:03 PM, Sheena Gregson  wrote:
> Adrian – can someone from the PI team please confirm what testing was
> performed?
>
>
>
> From: Roman Alekseenkov [mailto:ralekseen...@mirantis.com]
> Sent: Friday, January 15, 2016 11:30 AM
>
>
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Fuel] nova-network removal
>
>
>
> I agree with Sheena. Sounds like removing support for nova-network would be
> the best option, even though it's late.
>
>
>
> However, I'd like us to think about the impact on vCenter integration.
> vCenter+nova-network was fully supported before. Since we are now
> recommending DVS or NSX backends, I'd like the team to explicitly confirm
> that those configurations have been tested.
>
>
>
> Thanks,
>
> Roman
>
>
>
> On Fri, Jan 15, 2016 at 6:43 AM, Sheena Gregson wrote:
>
> Although we are very close to HCF, I see no option but to continue removing
> nova-network as I understand it is not currently functional or well-tested
> for the Mitaka release.  We must either remove it or test it, and we want to
> remove it anyway so that seems like the better path.
>
>
>
> Mike, what do you think?
>
>
>
> From: Roman Prykhodchenko [mailto:m...@romcheg.me]
> Sent: Friday, January 15, 2016 8:04 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [Fuel] nova-network removal
>
>
>
> I’d like to add that nova-network support was removed from python-fuelclient
> in 8.0.
>
>
>
> On 14 Jan 2016, at 17:54, Vitaly Kramskikh wrote:
>
>
>
> Folks,
>
> We have a request on review which prohibits creating new envs with
> nova-network: https://review.openstack.org/#/c/261229/ We're 3 weeks away
> from HCF, and I think this is too late for such a change. What do you think?
> Should we proceed and remove nova-network support in 8.0, which is
> deprecated since 7.0?
>
>
> --
>
> Vitaly Kramskikh,
> Fuel UI Tech Lead,
> Mirantis, Inc.
>


Re: [openstack-dev] [Fuel] How to auto allocate VIPs for roles in different network node groups?

2016-01-18 Thread Igor Kalnitsky
> Random choices aren't good IMHO, let's use defaults.

What if neither node is in the default group? Still use the default group,
and pray that some third-party plugin will handle this case properly?

AFAIU, the default nodegroup is a slightly artificial thing. There's no
such thing as a *default* nodegroup: nodes may be in either group A or
group B, with no defaults. "Default" is just a pre-created nodegroup and
that's it, so there's nothing special about it. That's why I think it's a
bad idea to remove the limitation just because someone somewhere, with
manual hacks and workarounds, *may* deploy controllers across different
multi-racks. We don't support load-balancing for nodes in different racks
out of the box. Let's proceed with it for 8.0, and make a proper fix in 9.0.
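Option 2 from the original mail boils down to logic roughly like the
following sketch (the function signature and field layout are invented for
illustration; this is not Nailgun's actual code):

```python
# Rough sketch of option 2: allocate the VIP from the shared nodegroup
# if all roles live in one nodegroup, otherwise fall back to the
# "default" nodegroup and warn the user that the config may not work.
import warnings

def pick_vip_nodegroup(role_nodes, default_group):
    """role_nodes: list of (node_name, nodegroup) pairs."""
    groups = {group for _, group in role_nodes}
    if len(groups) == 1:
        return groups.pop()
    warnings.warn(
        "Roles sharing a VIP span nodegroups %s; allocating the VIP "
        "from the default nodegroup - this configuration may not work."
        % sorted(groups))
    return default_group

nodes = [("controller-1", "rack-1"), ("controller-2", "rack-1")]
print(pick_vip_nodegroup(nodes, "default"))   # -> rack-1

mixed = [("controller-1", "rack-1"), ("controller-2", "rack-2")]
print(pick_vip_nodegroup(mixed, "default"))   # -> default, with a warning
```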

On Fri, Jan 15, 2016 at 11:50 AM, Bogdan Dobrelya wrote:
> On 15.01.2016 10:19, Aleksandr Didenko wrote:
>> Hi,
>>
>> We need to come up with some solution for a problem with VIP generation
>> (auto allocation), see the original bug [0].
>>
>> The main problem here is: how do we know what exactly IPs to auto
>> allocate for VIPs when needed roles are in different nodegroups (i.e. in
>> different IP networks)?
>> For example 'public_vip' for 'controller' roles.
>>
>> Currently we have two possible solutions.
>>
>> 1) Fail early in pre-deployment check (when user hit "Deploy changes")
>> with error about inability to auto allocate VIP for nodes in different
>> nodegroups (racks). So in order to run deploy user has to put all roles
>> with the same VIPs in the same nodegroups (for example: all controllers
>> in the same nodegroup).
>>
>> Pros:
>>
>>   * VIPs are always correct, they are from the same network as nodes
>> that are going to use them, thus user simply can't configure invalid
>> VIPs for cluster and break deployment
>>
>> Cons:
>>
>>   * hardcoded limitation that is impossible to bypass, does not allow to
>> spread roles with VIPs across multiple racks even if it's properly
>> handled by Fuel Plugin, i.e. made so by design
>
> That'd be no good at all.
>
>>
>>
>> 2) Allow to move roles that use VIPs into different nodegroups, auto
>> allocate VIPs from "default" nodegroup and send an alert/notification to
>> user that such configuration may not work and it's up to user how to
>> proceed (either fix config or deploy at his/her own risk).
>
> It seems we have not much choice then, but use the option 2
>
>>
>> Pros:
>>
>>   * relatively simple solution
>>
>>   * impossible to break VIP serialization because in the worst case we
>> allocate VIPs from default nodegroup
>>
>> Cons:
>>
>>   * user can deploy invalid environment that will fail during deployment
>> or will not operate properly (for example when public_vip is not
>> able to migrate to controller from different rack)
>>
>>   * which nodegroup to choose to allocate VIPs? default nodegroup?
>> random pick? in case of random pick troubleshooting may become
>> problematic
>
> Random choices aren't good IMHO, let's use defaults.
>
>>
>>   * waste of IPs - IP address from the network range will be implicitly
>> allocated and marked as used, even it's not used by deployment
>> (plugin uses own ones)
>>
>>
>> *Please also note that this solution is needed for 8.0 only.*In 9.0 we
>> have new feature for manual VIPs allocation [1]. So in 9.0, if we can't
>> auto allocate VIPs for some cluster configuration, we can simply ask
>> user to manually set those problem VIPs or move roles to the same
>> network node group (rack).
>>
>> So, guys, please feel free to share your thoughts on this matter. Any
>> input is greatly appreciated.
>>
>> Regards,
>> Alex
>>
>> [0] https://bugs.launchpad.net/fuel/+bug/1524320
>> [1] https://blueprints.launchpad.net/fuel/+spec/allow-any-vip
>>
>>
>>
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>



[openstack-dev] [Fuel][infra] Gerrit UX seems dropped, few patches had been merged without the Fuel CI verified mark

2016-01-18 Thread Bogdan Dobrelya
Here is the list of patches merged without the Verified mark from the Fuel
CI [0]. Note that some of them contain "Fuel-CI: disabled" and therefore
were good to go as-is, by design. But the rest were merged by mistake.
Sorry for that. Errare humanum est.

PS. It would be nice to improve the Gerrit UI somehow to make it less
error-prone: a green Verified +1 mark is easy to confuse with having all of
the other mandatory "green" marks.

[0]
https://review.openstack.org/#/q/project:openstack/fuel-library+AND+status:merged+AND+NOT+label:Verified%252B1%252Cuser%253Dfuel-ci,n,z

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando



[openstack-dev] Closing voting for midcycle agenda on January 25th

2016-01-18 Thread Steven Dake (stdake)
On January 25th, I will finalize the agenda for the Kolla midcycle based
upon voting in the midcycle agenda document. Feel free to vote on any
particular topic (either +1 or +0), even if you're not a core reviewer. A
+0 vote will be helpful when I craft the agenda, showing me that you did
review the agenda item and just decided that topic wasn't as high a
priority as the other topics.

The document where voting should happen can be found here:
https://etherpad.openstack.org/p/kolla-mitaka-midcycle

Thanks
-steve


Re: [openstack-dev] [Neutron] OVS flow modification performance

2016-01-18 Thread IWAMOTO Toshihiro
At Mon, 18 Jan 2016 00:42:32 -0500,
Kevin Benton wrote:
> 
> Thanks for doing this. A couple of questions:
> 
> What were your rootwrap settings when running these tests? Did you just
> have it calling sudo directly?

I used devstack's default, which runs root_helper_daemon.

> Also, you mention that this is only ~10% of the time spent during flow
> reconfiguration. What other areas are eating up so much time?


In another run,

$ for f in `cat tgidlist.n2`; do echo -n $f; opreport -n tgid:$f --merge tid|head -1|tr -d '\n'; (cd bg; opreport -n tgid:$f --merge tid|head -1); echo; done|sort -nr -k +2
10071  239058 100.000 python2.7  14922 100.000 python2.7
9995    92328 100.000 python2.7  11450 100.000 python2.7
7579    88202 100.000 python2.7  (18596)
11094   51560 100.000 python2.7  47964 100.000 python2.7
7035    49687 100.000 python2.7  40678 100.000 python2.7
11093   49380 100.000 python2.7  36004 100.000 python2.7
(legend: )

These processes are neutron-server, nova-api, neutron-openvswitch-agent,
nova-conductor, dstat and nova-conductor, in descending order.

So neutron-server uses about 3x the CPU time of the OVS agent, nova-api's
CPU usage is similar to the OVS agent's, and the others probably aren't
significant.

> Cheers,
> Kevin Benton
> 
On Sun, Jan 17, 2016 at 10:12 PM, IWAMOTO Toshihiro wrote:
> 
> > I'm sending out this mail to share the finding and discuss how to
> > improve with those interested in neutron ovs performance.
> >
> > TL;DR: The native of_interface code, which has been merged recently
> > and isn't default, seems to consume less CPU time but gives a mixed
> > result.  I'm looking into this for improvement.
> >
> > * Introduction
> >
> > With an ML2+ovs Neutron configuration, openflow rule modification
> > happens often and is somewhat a heavy operation as it involves
> > exec() of the ovs-ofctl command.
> >
> > The native of_interface driver doesn't use the ovs-ofctl command and
> > should have less performance impact on the system.  This document
> > tries to confirm this hypothesis.
> >
> >
> > * Method
> >
> > In order to focus on openflow rule operation time and avoid noise from
> > other operations (VM boot-up, etc.), neutron-openvswitch-agent was
> > restarted and the time it took to reconfigure the flows was measured.
> >
> > 1. Use devstack to start a test environment.  As debug logs generate a
> >considerable amount of load, ENABLE_DEBUG_LOG_LEVEL was set to false.
> > 2. Apply https://review.openstack.org/#/c/267905/ to enable
> >measurement of flow reconfiguration times.
> > 3. Boot 80 m1.nano instances.  In my setup, this generates 404 br-int
> >flows.  If you have >16G RAM, more could be booted.
> > 4. Stop neutron-openvswitch-agent and restart with --run-once arg.
> >Use time, oprofile, and python's cProfile (use --profile arg) to
> >collect data.
> >
> > * Results
> >
> > Execution time (averages of 3 runs):
> >
> > native 28.3s user 2.9s sys 0.4s
> > ovs-ofctl  25.7s user 2.2s sys 0.3s
> >
> > ovs-ofctl runs faster and seems to use less CPU, but the above doesn't
> > count in execution time of ovs-ofctl.
> >
> > Oprofile data collected by running "operf -s -t" contain the
> > information.
> >
> > With of_interface=native config, "opreport tgid:" shows:
> >
> >samples|  %|
> > --
> > 87408 100.000 python2.7
> > CPU_CLK_UNHALT...|
> >   samples|  %|
> > --
> > 69160 79.1232 python2.7
> >  8416  9.6284 vmlinux-3.13.0-24-generic
> >
> > and "opreport --merge tgid" doesn't show ovs-ofctl.
> >
> > With of_interface=ovs-ofctl, "opreport tgid:" shows:
> >
> >samples|  %|
> > --
> > 62771 100.000 python2.7
> > CPU_CLK_UNHALT...|
> >   samples|  %|
> > --
> > 49418 78.7274 python2.7
> >  6483 10.3280 vmlinux-3.13.0-24-generic
> >
> > and  "opreport --merge tgid" shows CPU consumption by ovs-ofctl
> >
> > 35774  3.5979 ovs-ofctl
> > CPU_CLK_UNHALT...|
> >   samples|  %|
> > --
> > 28219 78.8813 vmlinux-3.13.0-24-generic
> >  3487  9.7473 ld-2.19.so
> >  2301  6.4320 ovs-ofctl
> >
> > Comparing 87408 (native python) with 62771+35774, the native
> > of_interface uses 0.4s less CPU time overall.
> >
> > * Conclusion and future steps
> >
> > The native of_interface uses slightly less CPU time but takes longer
> > time to complete a flow reconfiguration after an agent restart.
> >
> > As an OVS agent accounts for only 1/10th of total CPU usage during a
> > flow reconfiguration (data not shown), there may be other areas for
> > improvement.
> >
> > The cProfile Python module gives more fine grained data, but no
> > apparent performance bottleneck was found.  The data show more
> > eventlet context 

Re: [openstack-dev] Shovel (RackHD/OpenStack)

2016-01-18 Thread Thierry Carrez

Heck, Joseph wrote:

Thanks for the links Jeremy! I'm still reading through what exactly
"big tent" means, and I'm not sure I grok the placement for
littler/ancillary things like this effort, but the links are hugely helpful!


Hey Joe, welcome back :P

Rather than defining OpenStack around the "Integrated Release", and having
integrated, incubated and stackforge things, we now define OpenStack based
on a mission and a way to do development, and have only one category of
official things.


If RackHD helps with the OpenStack Mission and is developed in the 
OpenStack Way, then it can definitely apply to become "an OpenStack 
project". It's still a bit far away from that state though... the first 
step if you want to go in that direction would be to host it under 
OpenStack dev infrastructure.


Cheers :)

--
Thierry Carrez (ttx)



[openstack-dev] [nova] Nova API sub-team meeting

2016-01-18 Thread Alex Xu
We have our weekly Nova API meeting tomorrow. The meeting is held on
Tuesdays at 1200 UTC.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks


Re: [openstack-dev] Exception request : [stable] Ironic doesn't use cacert while talking to Swift ( https://review.openstack.org/#/c/253460/)

2016-01-18 Thread Dmitry Tantsur

On 01/17/2016 09:25 AM, Nisha Agarwal wrote:

Hello Team,

This patch got approval long back (Jan 6), but due to a Jenkins failure in
the merge pipeline of the Kilo branch it was not merged.

Hence I request an exception for this patch, as it was not merged only due
to a Jenkins issue.


Hi.

Our kilo gate is still not feeling well, so I'm not sure there's any 
point in giving an exception for anything not deadly critical. Sorry.




Regards
Nisha

--
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If you do that, you are in control
of your life. If you don't, life controls you.








Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-18 Thread Eric LEMOINE
On Fri, Jan 15, 2016 at 2:40 PM, Michał Jastrzębski  wrote:
> Yeah, that's true. We did all of the OpenStack services, but we didn't
> implement the infra around them yet. I'd guess most services can log
> either to stdout or to a file, and both sources should be accessible by
> Heka. Also, I'd be surprised if Heka didn't have a syslog driver.

Heka's UdpInput plugin supports Unix datagram sockets [*], so this
plugin can possibly be used for reading from /dev/log.  This is
something I'll be testing soon.

[*] 
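For reference, reading from /dev/log with UdpInput might be configured
roughly like this in Heka's TOML format. The plugin and parameter names
here are from memory and should be checked against the Heka documentation;
this is a sketch, not a tested Kolla configuration:

```toml
# Hypothetical Heka config: read syslog datagrams from /dev/log
# and write them to a local file, taking over Rsyslog's role.
[syslog_input]
type = "UdpInput"
net = "unixgram"
address = "/dev/log"

[syslog_output]
type = "FileOutput"
message_matcher = "TRUE"
path = "/var/log/heka/syslog.log"
```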




Re: [openstack-dev] changing in neutron code

2016-01-18 Thread Rossella Sblendido

Hi Atif,

you can find all the answers in the Neutron developer guide [1].

cheers,

Rossella

[1] http://docs.openstack.org/developer/neutron/devref/index.html

On 01/17/2016 08:17 PM, Atif Saeed wrote:

Hi All,

I am a newbie in an Open Stack developers community. I want to do some
modification in Devstack/kilo neutron part. Can anyone guide me how to
test the modification in the code part.

(1) Should I need to define my own module/class.
(2) Can I add some def in neutron code.

and Importantly, How to test that my code is working or not?

Suggestions are highly appreciated and welcome. Really need to get help.

A.









Re: [openstack-dev] [glance] allow a ranking mechanism for glance-api to order image locations

2016-01-18 Thread Steve Lewis
On Wed, Jan 13, 2016 at 4:07 PM, Jake Yip  wrote:

>
> I am wondering anyone else have solved this before? I would like to hear
> your opinions on how we can achieve this, and whether ranking it by
> metadata is the way to go.
>

I spoke with an operator in Vancouver (Spring 2015 Summit) who wanted
similar functionality for his environment (relying on Ceph and Swift
storage in a multi-region cloud), and I believe this solution would
partially satisfy his use case and could help close the functional gap.

Thanks for bringing it up. I'm interested in helping see this delivered.
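Ranking locations by metadata could be as simple as the following sketch.
The `rank` metadata key and the location layout are invented for
illustration; Glance's actual location metadata schema may differ:

```python
# Sketch: order image locations by an operator-assigned rank stored in
# the location metadata, lowest rank first; unranked locations sort last.
def order_locations(locations):
    return sorted(
        locations,
        key=lambda loc: loc.get("metadata", {}).get("rank", float("inf")),
    )

locations = [
    {"url": "swift://...", "metadata": {"rank": 2}},
    {"url": "rbd://...", "metadata": {"rank": 1}},
    {"url": "http://..."},  # no rank: tried last
]
print([loc["url"] for loc in order_locations(locations)])
# -> ['rbd://...', 'swift://...', 'http://...']
```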

-- 
SteveL


Re: [openstack-dev] [cinder] Object backporting and the associated service

2016-01-18 Thread Michał Dulko
On 01/18/2016 03:31 PM, Duncan Thomas wrote:
> On 5 January 2016 at 18:55, Ryan Rossiter wrote:
>
> This is definitely good to know. Are you planning on setting up
> something off to the side of o.vo within that holds a dictionary
> of all values for a release? Something like:
>
> {‘liberty’: {‘volume’: ‘1.3’, …},
>  ‘mitaka’: {‘volume’: ‘1.8’, …}, }
>
> With the possibility of replacing the release name with the RPC
> version or some other version placeholder. Playing devil’s
> advocate, how does this work out if I want to be continuously
> deploying Cinder from HEAD?
>
>
> As far as I know (the design has iterated a bit, but I think I'm still
> right), there is no need for such a table - before you start a rolling
> upgrade, you call the 'pin now' api, and all of the services write
> their max supported version to the DB. Once the DB is written to by
> all services, the running services can then read that table and cache
> the max value. Any new services bought up will also build a max volume
> cache on startup. Once everything is upgraded, you can call 'pin now'
> again and the services can figure out a new (hopefully higher) version
> limit.
>

You're right, that was the initial design we've agreed on in Liberty.
Personally I'm now more in favor of how it's implemented in Nova [1].
Basically on service startup RPC API is pinned to the lowest version
among all the managers running in the environment. I've prepared PoC
patches and successfully executed multiple runs of Tempest on deployment
with Mitaka's c-api and mixed Liberty and Mitaka c-sch, c-vol, c-bak
(two of each service).

I think we should discuss this in details at the mid-cycle meetup next week.

[1] https://blueprints.launchpad.net/nova/+spec/service-version-behavior
[2] https://review.openstack.org/#/c/268025/
[3] https://review.openstack.org/#/c/268026/
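The minimum-version pinning scheme described above can be sketched roughly as follows (the service records and version numbers are invented for illustration, not Cinder's actual data model):

```python
# Sketch: on startup, pin the RPC API to the lowest maximum version
# reported by all running services, in the spirit of the Nova
# service-version behavior referenced above.

def _as_tuple(version):
    # Compare versions numerically so "1.10" sorts above "1.9".
    return tuple(int(part) for part in version.split("."))

def pinned_rpc_version(services):
    """Highest version every service can speak: the minimum of the
    per-service maximums."""
    if not services:
        raise ValueError("no services registered")
    return min((svc["rpc_current_version"] for svc in services), key=_as_tuple)

services = [
    {"host": "c-vol-1", "rpc_current_version": "1.8"},  # Mitaka-level
    {"host": "c-vol-2", "rpc_current_version": "1.3"},  # Liberty-level
]
pinned = pinned_rpc_version(services)  # "1.3"
```

This is why no per-release lookup table is needed: the pin is derived from what the running managers actually report.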



Re: [openstack-dev] [TripleO] Should we have a TripleO API, or simply use Mistral?

2016-01-18 Thread Tzu-Mainn Chen
- Original Message -
> On Thu, 2016-01-14 at 16:04 -0500, Tzu-Mainn Chen wrote:
> > 
> > - Original Message -
> > > On Wed, Jan 13, 2016 at 04:41:28AM -0500, Tzu-Mainn Chen wrote:
> > > > Hey all,
> > > > 
> > > > I realize now from the title of the other TripleO/Mistral thread
> > > > [1] that
> > > > the discussion there may have gotten confused.  I think using
> > > > Mistral for
> > > > TripleO processes that are obviously workflows - stack
> > > > deployment, node
> > > > registration - makes perfect sense.  That thread is exploring
> > > > practicalities
> > > > for doing that, and I think that's great work.
> > > > 
> > > > What I inappropriately started to address in that thread was a
> > > > somewhat
> > > > orthogonal point that Dan asked in his original email, namely:
> > > > 
> > > > "what it might look like if we were to use Mistral as a
> > > > replacement for the
> > > > TripleO API entirely"
> > > > 
> > > > I'd like to create this thread to talk about that; more of a
> > > > 'should we'
> > > > than 'can we'.  And to do that, I want to indulge in a thought
> > > > exercise
> > > > stemming from an IRC discussion with Dan and others.  All, please
> > > > correct
> > > > me
> > > > if I've misstated anything.
> > > > 
> > > > The IRC discussion revolved around one use case: deploying a Heat
> > > > stack
> > > > directly from a Swift container.  With an updated patch, the Heat
> > > > CLI can
> > > > support this functionality natively.  Then we don't need a
> > > > TripleO API; we
> > > > can use Mistral to access that functionality, and we're done,
> > > > with no need
> > > > for additional code within TripleO.  And, as I understand it,
> > > > that's the
> > > > true motivation for using Mistral instead of a TripleO API:
> > > > avoiding custom
> > > > code within TripleO.
> > > > 
> > > > That's definitely a worthy goal... except from my perspective,
> > > > the story
> > > > doesn't quite end there.  A GUI needs additional functionality,
> > > > which boils
> > > > down to: understanding the Heat deployment templates in order to
> > > > provide
> > > > options for a user; and persisting those options within a Heat
> > > > environment
> > > > file.
> > > > 
> > > > Right away I think we hit a problem.  Where does the code for
> > > > 'understanding
> > > > options' go?  Much of that understanding comes from the
> > > > capabilities map
> > > > in tripleo-heat-templates [2]; it would make sense to me that
> > > > responsibility
> > > > for that would fall to a TripleO library.
> > > > 
> > > > Still, perhaps we can limit the amount of TripleO code.  So to
> > > > give API
> > > > access to 'getDeploymentOptions', we can create a Mistral
> > > > workflow.
> > > > 
> > > >   Retrieve Heat templates from Swift -> Parse capabilities map
> > > > 
> > > > Which is fine-ish, except from an architectural perspective
> > > > 'getDeploymentOptions' violates the abstraction layer between
> > > > storage and
> > > > business logic, a problem that is compounded because
> > > > 'getDeploymentOptions'
> > > > is not the only functionality that accesses the Heat templates
> > > > and needs
> > > > exposure through an API.  And, as has been discussed on a
> > > > separate TripleO
> > > > thread, we're not even sure Swift is sufficient for our needs;
> > > > one possible
> > > > consideration right now is allowing deployment from templates
> > > > stored in
> > > > multiple places, such as the file system or git.
> > > 
> > > Actually, that whole capabilities map thing is a workaround for a
> > > missing
> > > feature in Heat, which I have proposed, but am having a hard time
> > > reaching
> > > consensus on within the Heat community:
> > > 
> > > https://review.openstack.org/#/c/196656/
> > > 
> > > Given that is a large part of what's anticipated to be provided by
> > > the
> > > proposed TripleO API, I'd welcome feedback and collaboration so we
> > > can move
> > > that forward, vs solving only for TripleO.
> > > 
> > > > Are we going to have duplicate 'getDeploymentOptions' workflows
> > > > for each
> > > > storage mechanism?  If we consolidate the storage code within a
> > > > TripleO
> > > > library, do we really need a *workflow* to call a single
> > > > function?  Is a
> > > > thin TripleO API that contains no additional business logic
> > > > really so bad
> > > > at that point?
> > > 
> > > Actually, this is an argument for making the validation part of the
> > > deployment a workflow - then the interface with the storage
> > > mechanism
> > > becomes more easily pluggable vs baked into an opaque-to-operators
> > > API.
> > > 
> > > E.g, in the long term, imagine the capabilities feature exists in
> > > Heat, you
> > > then have a pre-deployment workflow that looks something like:
> > > 
> > > 1. Retrieve golden templates from a template store
> > > 2. Pass templates to Heat, get capabilities map which defines
> > > features user
> > > must/may select.
> > > 3. Prompt user for 
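The 'getDeploymentOptions' workflow discussed above (retrieve templates, parse the capabilities map) can be sketched roughly as follows; the map layout here is a simplified stand-in for the real capabilities-map.yaml in tripleo-heat-templates, not its actual schema:

```python
# Sketch: flatten a capabilities map into user-selectable deployment
# options. The structure below is a simplified, invented stand-in.

CAPABILITIES_MAP = {
    "topics": [
        {"title": "Basic Configuration",
         "environment_groups": [
             {"title": "Network Isolation",
              "environments": [
                  {"file": "environments/net-iso.yaml",
                   "title": "Enable network isolation"}]}]},
    ],
}

def get_deployment_options(cap_map):
    """Return (group title, environment file) pairs a user may select."""
    options = []
    for topic in cap_map.get("topics", []):
        for group in topic.get("environment_groups", []):
            for env in group.get("environments", []):
                options.append((group["title"], env["file"]))
    return options
```

Whether this parsing lives in a Mistral task, a TripleO library, or Heat itself is exactly the architectural question of the thread.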

Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is broken due to lack of disk space

2016-01-18 Thread Maciej Kwiek
Igor: It seems that fqdn -> ipaddr will indeed be resolved. Please share
your feedback in review: https://review.openstack.org/#/c/266964/3

On Fri, Jan 15, 2016 at 4:25 PM, Igor Kalnitsky 
wrote:

> Sheena -
>
> What do you mean by *targeted*? Shotgun's designed to be a *targeted*
> solution. If someone wants more *precise* targets - it's easy to
> specify them in Nailgun's settings.yaml.
>
> - Igor
>
> On Fri, Jan 15, 2016 at 5:02 PM, Sheena Gregson 
> wrote:
> > I’ve also seen the request multiple times to be able to provide more
> > targeted snapshots which might also (partially) solve this problem as it
> > would require significantly less disk space to grab logs from a subset of
> > nodes for a specific window of time, instead of the more robust grab-all
> > solution we have now.
> >
> >
> >
> > From: Maciej Kwiek [mailto:mkw...@mirantis.com]
> > Sent: Thursday, January 14, 2016 5:59 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [Fuel] Diagnostic snapshot generation is
> broken
> > due to lack of disk space
> >
> >
> >
> > Igor,
> >
> >
> >
> > I will investigate this, thanks!
> >
> >
> >
> > Artem,
> >
> >
> >
> > I guess that if we have an untrusted user on master node, he could just
> put
> > something he wants to be in the snapshot in /var/log without having to
> time
> > the attack carefully with tar execution.
> >
> >
> >
> > I want to use links for directories, this saves me the trouble of
> creating
> > hardlinks for every single file in the directory. Although with how
> > exclusion is currently implemented it can cause deleting log files from
> > original directories, need to check this out.
> >
> >
> >
> > About your PS: whole /var/log on master node (not in container) is
> currently
> > downloaded, I think we shouldn't change this as we plan to drop
> containers
> > in 9.0.
> >
> >
> >
> > Cheers,
> >
> > Maciej
> >
> >
> >
> > On Thu, Jan 14, 2016 at 12:32 PM, Artem Panchenko <
> apanche...@mirantis.com>
> > wrote:
> >
> > Hi,
> >
> > using symlinks is a bit dangerous, here is a quote from the man you
> > mentioned [0]:
> >
> >> The `--dereference' option is unsafe if an untrusted user can modify
> >> directories while tar is running.
> >
> > Hard links usage is much safer, because you can't use them for
> directories.
> > But at the same time implementation in shotgun would be more complicated
> > than with symlinks.
> >
> > Anyway, in order to determine what linking to use we need to decide where
> > (/var/log or another partition) diagnostic snapshot will be stored.
> >
> > p.s.
> >
> >>This doesn't really give us much right now, because most of the logs are
> >> fetched from master node via ssh due to shotgun being run in mcollective
> >> container
> >
> >
> >
> > AFAIK '/var/log/docker-logs/' is available from mcollective container and
> > mounted to /var/log/:
> >
> > [root@fuel-lab-cz5557 ~]# dockerctl shell mcollective mount -l | grep
> > os-varlog
> > /dev/mapper/os-varlog on /var/log type ext4
> > (rw,relatime,stripe=128,data=ordered)
> >
> > From my experience '/var/log/docker-logs/remote' folder is most ' heavy'
> > thing in snapshot.
> >
> > [0] http://www.gnu.org/software/tar/manual/html_node/dereference.html
> >
> > Thanks!
> >
> >
> >
> > On 14.01.16 13:00, Igor Kalnitsky wrote:
> >
> > I took a glance on Maciej's patch and it adds a switch to tar command
> >
> > to make it follow symbolic links
> >
> > Yeah, that should work. Except one thing - we previously had fqdn ->
> >
> > ipaddr links in snapshots. So now they will be resolved into full
> >
> > copy?
> >
> >
> >
> > I meant that symlinks also give us the benefit of not using additional
> >
> > space (just as hardlinks do) while being able to link to files from
> >
> > different filesystems.
> >
> > I'm sorry, I got you wrong. :)
> >
> >
> >
> > - Igor
> >
> >
> >
> > On Thu, Jan 14, 2016 at 12:34 PM, Maciej Kwiek 
> wrote:
> >
> > Igor,
> >
> >
> >
> > I meant that symlinks also give us the benefit of not using additional
> space
> >
> > (just as hardlinks do) while being able to link to files from different
> >
> > filesystems.
> >
> >
> >
> > Also, as Barłomiej pointed out the `h` switch for tar should do the trick
> >
> > [1].
> >
> >
> >
> > Cheers,
> >
> > Maciej
> >
> >
> >
> > [1] http://www.gnu.org/software/tar/manual/html_node/dereference.html
> >
> >
> >
> > On Thu, Jan 14, 2016 at 11:22 AM, Bartlomiej Piotrowski
> >
> >  wrote:
> >
> > Igor,
> >
> >
> >
> > I took a glance on Maciej's patch and it adds a switch to tar command to
> >
> > make it follow symbolic links, so it looks good to me.
> >
> >
> >
> > Bartłomiej
> >
> >
> >
> > On Thu, Jan 14, 2016 at 10:39 AM, Igor Kalnitsky <
> ikalnit...@mirantis.com>
> >
> > wrote:
> >
> > Hey Maceij -
> >
> >
> >
> > About hardlinks - wouldn't it be better to use 
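The --dereference behavior being debated can be demonstrated with a short shell sketch (all paths below are throwaway examples):

```shell
# Illustration of tar -h/--dereference: with it, tar archives the files a
# symlink points to instead of the link entry itself.
set -e
workdir=$(mktemp -d)
cd "$workdir"
mkdir real-logs snapshot
echo "log line" > real-logs/messages
ln -s "$workdir/real-logs" snapshot/logs

tar -cf plain.tar snapshot    # stores snapshot/logs as a symlink entry
tar -chf deref.tar snapshot   # follows the link, stores the real files

tar -tf deref.tar             # includes snapshot/logs/messages
```

This is also the root of the safety caveat quoted earlier: with -h, whatever the link points to at archive time is what ends up in the snapshot.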

[openstack-dev] [release][cinder] os-brick release 0.8.0 (mitaka)

2016-01-18 Thread davanum
We are stoked to announce the release of:

os-brick 0.8.0: OpenStack Cinder brick library for managing local
volume attaches

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-brick

With package available at:

https://pypi.python.org/pypi/os-brick

Please report issues through launchpad:

http://bugs.launchpad.net/os-brick

For more details, please see below.


Changes in os-brick 0.7.0..0.8.0


a3261e3 Add connector for ITRI DISCO cinder driver
7e10ab7 os-brick add extend_volume API
d3b9696 os-brick add cinder local_dev lvm code
a0b5c73 Revert "Use assertTrue/False instead of assertEqual(T/F)"
00be786 Fix another unit test failure
7f00e71 Use assertTrue/False instead of assertEqual(T/F)
ccff862 Actually log the command used in _run_iscsiadm
2f2b19d Updated from global requirements
97c5768 remove python 2.6 trove classifier
7e767fd Updated from global requirements
e6d94fe ScaleIO could connect wrong volume to VM

Diffstat (except docs and test files)
-

etc/os-brick/rootwrap.d/os-brick.filters   |  36 ++
os_brick/exception.py  |  12 +
os_brick/initiator/connector.py| 266 +-
os_brick/initiator/linuxscsi.py|  88 
os_brick/local_dev/__init__.py |   0
os_brick/local_dev/lvm.py  | 782 +
requirements.txt   |   6 +-
setup.cfg  |   1 -
13 files changed, 1973 insertions(+), 9 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 15a780b..8abb660 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ oslo.concurrency>=2.3.0 # Apache-2.0
-oslo.log>=1.12.0 # Apache-2.0
+oslo.log>=1.14.0 # Apache-2.0
@@ -13,2 +13,2 @@ oslo.service>=1.0.0 # Apache-2.0
-oslo.utils>=2.8.0 # Apache-2.0
-requests>=2.8.1
+oslo.utils>=3.2.0 # Apache-2.0
+requests!=2.9.0,>=2.8.1





[openstack-dev] [mistral] No team meeting today - 01/18/2016

2016-01-18 Thread Renat Akhmerov
We won't have a team meeting today because a number of key members won't be 
able to attend.

Renat Akhmerov
@Mirantis Inc.


Re: [openstack-dev] [Fuel] nova-network removal

2016-01-18 Thread Sheena Gregson
We should continue to support nova-network for 7.0 and prior environments,
so it won't be removed completely.  We should prevent users from creating
new environments on Mitaka/8.0 versions and later which use nova-network.

On Mon, Jan 18, 2016 at 9:18 AM, Igor Kalnitsky 
wrote:

> Roman, Sheena,
>
> You meant to remove nova-network completely? Or only for new
> environments? Should we support nova-network for old (let's say, 7.0)
> clusters?
>
> Thanks,
> Igor
>
> On Fri, Jan 15, 2016 at 10:03 PM, Sheena Gregson 
> wrote:
> > Adrian – can someone from the PI team please confirm what testing was
> > performed?
> >
> >
> >
> > From: Roman Alekseenkov [mailto:ralekseen...@mirantis.com]
> > Sent: Friday, January 15, 2016 11:30 AM
> >
> >
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [Fuel] nova-network removal
> >
> >
> >
> > I agree with Sheena. Sounds like removing support for nova-network would
> be
> > the best option, even though it's late.
> >
> >
> >
> > However, I'd like us to think about the impact on vCenter integration.
> > vCenter+nova-network was fully supported before. Since we are now
> > recommending DVS or NSX backends, I'd like the team to explicitly confirm
> > that those configurations have been tested.
> >
> >
> >
> > Thanks,
> >
> > Roman
> >
> >
> >
> > On Fri, Jan 15, 2016 at 6:43 AM, Sheena Gregson 
> > wrote:
> >
> > Although we are very close to HCF, I see no option but to continue
> removing
> > nova-network as I understand it is not currently functional or
> well-tested
> > for the Mitaka release.  We must either remove it or test it, and we
> want to
> > remove it anyway so that seems like the better path.
> >
> >
> >
> > Mike, what do you think?
> >
> >
> >
> > From: Roman Prykhodchenko [mailto:m...@romcheg.me]
> > Sent: Friday, January 15, 2016 8:04 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Subject: Re: [openstack-dev] [Fuel] nova-network removal
> >
> >
> >
> > I’d like to add that nova-network support was removed from
> python-fuelclient
> > in 8.0.
> >
> >
> >
> > 14 січ. 2016 р. о 17:54 Vitaly Kramskikh 
> > написав(ла):
> >
> >
> >
> > Folks,
> >
> > We have a request on review which prohibits creating new envs with
> > nova-network: https://review.openstack.org/#/c/261229/ We're 3 weeks
> away
> > from HCF, and I think this is too late for such a change. What do you
> think?
> > Should we proceed and remove nova-network support in 8.0, which is
> > deprecated since 7.0?
> >
> >
> > --
> >
> > Vitaly Kramskikh,
> > Fuel UI Tech Lead,
> > Mirantis, Inc.
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [ironic] wsmanclient

2016-01-18 Thread Imre Farkas

Hi all,

We have had the openwsman bug [1] for a long time; it affects the AMT and 
DRAC drivers. A couple of months ago I added a simple protocol 
implementation [2] to python-dracclient using requests and lxml, and 
refactored the DRAC driver to use that instead of openwsman. It's far 
from a complete implementation of the protocol specification, just the 
subset needed by the driver. I have used it since then without issues, 
and since folks expressed interest in it for the AMT driver, I proposed 
changes to project-config [3] and governance [4] to register a new 
project, with the intent of moving the code there from the DRAC-specific 
python-dracclient.


Let me know what you think.

Thanks,
Imre


[1] https://bugs.launchpad.net/ironic/+bug/1454492
[2] 
https://github.com/openstack/python-dracclient/blob/master/dracclient/wsman.py

[3] https://review.openstack.org/#/c/269122/
[4] https://review.openstack.org/#/c/269137/
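For the curious, here is a minimal sketch of building a WS-Man request envelope in the spirit of the python-dracclient implementation linked above, using only the stdlib ElementTree rather than lxml; the endpoint and resource URI are placeholders, not real DRAC/AMT values:

```python
# Sketch: assemble a bare-bones WS-Man SOAP envelope. A real client (like
# python-dracclient) adds more headers and POSTs the payload with requests.
import uuid
import xml.etree.ElementTree as ET

NS_SOAP = "http://www.w3.org/2003/05/soap-envelope"
NS_WSA = "http://schemas.xmlsoap.org/ws/2004/08/addressing"
NS_WSMAN = "http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd"

def build_envelope(endpoint, resource_uri):
    env = ET.Element(f"{{{NS_SOAP}}}Envelope")
    header = ET.SubElement(env, f"{{{NS_SOAP}}}Header")
    ET.SubElement(header, f"{{{NS_WSA}}}To").text = endpoint
    ET.SubElement(header, f"{{{NS_WSA}}}MessageID").text = f"uuid:{uuid.uuid4()}"
    ET.SubElement(header, f"{{{NS_WSMAN}}}ResourceURI").text = resource_uri
    ET.SubElement(env, f"{{{NS_SOAP}}}Body")
    return ET.tostring(env, encoding="unicode")

payload = build_envelope(
    "https://bmc.example.com/wsman",
    "http://example.com/wbem/wscim/1/cim-schema/2/ExampleClass",
)
```

Keeping the protocol layer this small is what makes it practical to share between the DRAC and AMT drivers.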



Re: [openstack-dev] [api] api working group proposed actions guideline needs input

2016-01-18 Thread michael mccune

On 01/18/2016 10:05 AM, Michael Krotscheck wrote:

Also: I like using the same word for things. Is there a meaningful
enough difference between "Actions", "Transitions", and "Tasks", that
requires different words?


not that i have detected, although i would love to hear other opinions on this.

i have to admit that this is the first time i'm hearing "transitions", 
and i feel it could be looked at separately from "actions" or "tasks". 
but i think this is largely a semantic debate.


i do think it would be useful to coalesce on a standard term for what we 
are talking about.


regards,
mike




Re: [openstack-dev] [Nova] notification subteam meeting

2016-01-18 Thread Balázs Gibizer
Hi, 

The next meeting of the nova notification subteam will happen 2016-01-19 
Tuesday 20:00 UTC [1] on #openstack-meeting-alt on freenode 

Agenda:
- Status of the outstanding specs and code reviews
- Agree on the new (reduced) frequency of the meeting
- AOB

See you there.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160119T20  
[2] https://wiki.openstack.org/wiki/Meetings/NovaNotification




Re: [openstack-dev] changing in neutron code

2016-01-18 Thread Edgar Magana
Don't forget to help us improve the Neutron documentation in the networking 
guide:
http://docs.openstack.org/liberty/networking-guide/

Edgar

From: Atif Saeed
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Sunday, January 17, 2016 at 11:17 AM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] changing in neutron code

Hi All,

I am a newbie in the OpenStack developer community. I want to make some 
modifications to the Neutron part of DevStack (Kilo). Can anyone guide me on 
how to test modifications to that part of the code?

(1) Do I need to define my own module/class?
(2) Can I add some functions to the Neutron code?

And, importantly, how do I test whether my code is working or not?

Suggestions are highly appreciated and welcome. I really need some help here.

A.




[openstack-dev] [kolla] Please Vote - Closing voting January 25th for Kolla Midcycle Event

2016-01-18 Thread Steven Dake (stdake)
On January 25th, I will finalize the agenda for the Kolla midcycle based upon 
voting in the midcycle agenda etherpad.  Feel free to vote, even if you're not a 
core reviewer, on any particular topic (either vote +1 or +0).  A +0 vote will 
be helpful when I craft the agenda, showing me that you reviewed the agenda 
item and simply decided that topic wasn't as high a priority as the other topics.

The document where voting should happen can be found here:
https://etherpad.openstack.org/p/kolla-mitaka-midcycle

Thanks
-steve


Re: [openstack-dev] [Fuel] nova-network removal

2016-01-18 Thread Andrian Noga
Sheena,

I can confirm, that configurations with both DVS and NSX backends were
successfully tested for 7.0 by PI. This backend works also for 8.0

Regards,
Andrian

On Fri, Jan 15, 2016 at 10:03 PM, Sheena Gregson 
wrote:

> Adrian – can someone from the PI team please confirm what testing was
> performed?
>
>
>
> *From:* Roman Alekseenkov [mailto:ralekseen...@mirantis.com]
> *Sent:* Friday, January 15, 2016 11:30 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Fuel] nova-network removal
>
>
>
> I agree with Sheena. Sounds like removing support for nova-network would
> be the best option, even though it's late.
>
>
>
> However, I'd like us to think about the impact on vCenter integration.
> vCenter+nova-network was fully supported before. Since we are now
> recommending DVS or NSX backends, I'd like the team to explicitly confirm
> that those configurations have been tested.
>
>
>
> Thanks,
>
> Roman
>
>
>
> On Fri, Jan 15, 2016 at 6:43 AM, Sheena Gregson 
> wrote:
>
> Although we are very close to HCF, I see no option but to continue
> removing nova-network as I understand it is not currently functional or
> well-tested for the Mitaka release.  We must either remove it or test it,
> and we want to remove it anyway so that seems like the better path.
>
>
>
> *Mike*, what do you think?
>
>
>
> *From:* Roman Prykhodchenko [mailto:m...@romcheg.me]
> *Sent:* Friday, January 15, 2016 8:04 AM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Fuel] nova-network removal
>
>
>
> I’d like to add that nova-network support was removed from
> python-fuelclient in 8.0.
>
>
>
> 14 січ. 2016 р. о 17:54 Vitaly Kramskikh 
> написав(ла):
>
>
>
> Folks,
>
> We have a request on review which prohibits creating new envs with
> nova-network: https://review.openstack.org/#/c/261229/ We're 3 weeks away
> from HCF, and I think this is too late for such a change. What do you
> think? Should we proceed and remove nova-network support in 8.0, which is
> deprecated since 7.0?
>
>
> --
>
> Vitaly Kramskikh,
> Fuel UI Tech Lead,
> Mirantis, Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>


[openstack-dev] [TripleO] Mitaka Milestone 2

2016-01-18 Thread John Trowbridge
Hola Otripletas,

According to the official schedule[1], this week is scheduled for Mitaka
milestone 2. Do we intend to have something working for this milestone?

I have been tracking the issues when deploying with current from all
other projects[2] (as opposed to using an old pinned delorean for
non-tripleo projects). This has been a rapidly moving target (that we
have yet to hit for Mitaka), since we are not doing this in CI (which I
totally get the reasons for).

I think if we want to have TripleO working with the rest of OpenStack's
Mitaka milestone 2, we will need to prioritize resolving the outstanding
issues this week. I would love to see us not merge anything that is not
related to either adding some validation of the deployed overcloud or
fixing some issue related to deploying with delorean current for all
packages.

One huge benefit to TripleO of doing this prioritization would be the
free testing we could get in RDO next week. Otherwise, I will have to do
my best to hack around the known issues for the RDO test day, which will
not be a true test of TripleO.

Another benefit, would be that if we get RDO CI testing TripleO as part
of our delorean promotion process, TripleO will be able to use the
automated current-passed-ci link instead of the manual current-tripleo
link. It will then be much easier to trace issues close to when they are
introduced rather than having a huge number of commits to comb through,
with many issues happening concurrently.

- trown

[1] http://docs.openstack.org/releases/schedules/mitaka.html
[2] https://etherpad.openstack.org/p/delorean_master_current_issues



[openstack-dev] [Monasca] alarms based on events

2016-01-18 Thread Premysl Kouril
Hello,

we are just evaluating Monasca for our new cloud infrastructure, and I
would like to ask if there are any possibilities in current Monasca, or
any development plans, to address the following use case:

We have a box which we need to monitor. When something goes wrong
with the box, it sends out an SNMP trap indicating that it is in bad
condition, and when the box is fixed it sends out an SNMP trap indicating
that it is OK and operational again (in other words: the box indicates
health state transitions by sending events, in this case SNMP traps).

Is it possible in Monasca to define an alarm which would work on top
of such events? In other words: is it possible to have a Monasca
alarm which goes red on some external event and back to green on some
other external event? By alarm I really mean a stateful entity in the
Monasca database, not just a notification to an administrator.

Best regards.
Prema
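As a toy sketch of the stateful, event-driven alarm semantics described above (the event names and transitions are invented purely for illustration):

```python
# Sketch: an alarm entity that transitions on external events (e.g. SNMP
# traps) rather than on metric thresholds.

class EventAlarm:
    # (current state, event type) -> next state; unknown pairs keep state.
    TRANSITIONS = {
        ("OK", "box.failed"): "ALARM",
        ("ALARM", "box.recovered"): "OK",
    }

    def __init__(self):
        self.state = "OK"

    def on_event(self, event_type):
        self.state = self.TRANSITIONS.get((self.state, event_type), self.state)
        return self.state

alarm = EventAlarm()
alarm.on_event("box.failed")     # state becomes ALARM
alarm.on_event("box.recovered")  # state back to OK
```

The open question in the thread is whether Monasca can host such an entity natively, not how to implement the state machine itself.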



[openstack-dev] [mistral] how to split mistral log files

2016-01-18 Thread SHTILMAN, Tomer (Tomer)

Hi All
I have three Linux service files, for the Mistral API, engine and executor.
All of them run with a different value for the "server" parameter (api, engine or executor).
E.g. the API service is run as:
/usr/bin/mistral-server --config-file=/etc/mistral/mistral.conf --server=api

All of the logs go to /var/log/mistral/mistral-server.log.

I would like to split them into three different logs, without changing the 
service files themselves.
I thought of changing https://github.com/openstack/mistral/blob/master/setup.cfg
and creating three different console scripts:
console_scripts =
    mistral-engine = mistral.cmd.launch:main
    mistral-api = mistral.cmd.launch:main
    mistral-executor = mistral.cmd.launch:main
    mistral-db-manage = mistral.db.sqlalchemy.migration.cli:main

I will be happy to get your input/thoughts.

Thanks
Tomer
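One way three console scripts sharing a single entry point could pick per-service log files is to dispatch on the program name. A rough sketch (the mapping and file paths are assumptions, not Mistral's actual behavior):

```python
# Sketch: derive a per-service log file from the name the process was
# launched under, so one launch:main can serve three console scripts.
import os

LOG_FILES = {
    "mistral-api": "/var/log/mistral/mistral-api.log",
    "mistral-engine": "/var/log/mistral/mistral-engine.log",
    "mistral-executor": "/var/log/mistral/mistral-executor.log",
}

def log_file_for(argv0, default="/var/log/mistral/mistral-server.log"):
    """Map the invoked program name to its log file, falling back to the
    shared mistral-server.log."""
    return LOG_FILES.get(os.path.basename(argv0), default)
```

The derived path could then be handed to oslo.log configuration before the server starts, leaving the service files untouched.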



Re: [openstack-dev] [release] Release countdown for week R-11, Jan 18-22, Mitaka-2 milestone

2016-01-18 Thread Kyle Mestery
On Fri, Jan 15, 2016 at 7:55 AM, Doug Hellmann 
wrote:

> Excerpts from Doug Hellmann's message of 2016-01-14 16:20:54 -0500:
> > Focus
> > -
> >
> > Next week is the second milestone for the Mitaka cycle. Major feature
> > work should be making good progress or be re-evaluated to see whether
> > it will really land this cycle.
> >
> > Release Actions
> > ---
> >
> > Liaisons should submit tag requests to the openstack/releases
> > repository for all projects following the cycle-with-milestone
> > release model before the end of the day on Jan 21.
> >
>

One question I have is, what should the version for projects be? For
example, for Neutron, M1 was set to 8.0.0.0b1. Should the M2 Neutron
milestone be 8.0.0.0c1? Or 8.0.0.0b2?

Thanks!
Kyle


> > We're working on updating the documented responsibilities for release
> > liaisons. Please have a look at https://review.openstack.org/#/c/262003/
> > and leave comments if you have questions or concerns.
> >
> > Important Dates
> > ---
> >
> > Mitaka 2: Jan 19-21
> >
> > Deadline for Mitaka 2 tag: Jan 21
> >
> > Mitaka release schedule:
> > http://docs.openstack.org/releases/schedules/mitaka.html
>
> One important reminder I left out: As Thierry described on this
> list earlier [1], we will be freezing changes to the release model
> tags for projects after the Mitaka 2 tags are in place. If you've
> been considering submitting patches to change the release tags for
> your project, please do that between now and next week.
>
> Doug
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2016-January/083726.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-18 Thread Jain, Vivek
If the member port (IP address) is allocated by Neutron, then why do we need to 
specify it explicitly? It can be derived by the LBaaS driver implicitly.

Thanks,
Vivek






On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:

>Btw.
>
>I am still in favor on associating the subnets to the LB and then not specify 
>them per node at all.
>
>-Sam.
>
>
>-Original Message-
>From: Samuel Bercovici [mailto:samu...@radware.com] 
>Sent: Sunday, January 17, 2016 10:14 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be 
>optional on member create?
>
>+1
>Subnet should be mandatory
>
>The only thing is that this makes load balancing servers which are not 
>running in the cloud more challenging to support.
>But I do not see this as a huge user story (lb in cloud load balancing IPs 
>outside the cloud)
>
>-Sam.
>
>-Original Message-
>From: Brandon Logan [mailto:brandon.lo...@rackspace.com] 
>Sent: Saturday, January 16, 2016 6:56 AM
>To: openstack-dev@lists.openstack.org
>Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional 
>on member create?
>
>I filed a bug [1] a while ago that subnet_id should be an optional parameter 
>for member creation.  Currently it is required.  Review [2] makes it 
>optional.
>
>The original thinking was that if the load balancer is ever connected to that 
>same subnet, be it by another member on that subnet or the vip on that subnet, 
>then the user does not need to specify the subnet for new member if that new 
>member is on one of those subnets.
>
>At the midcycle we discussed it and we had an informal agreement that it 
>required too many assumptions on the part of the end user, neutron lbaas, and 
>driver.
>
>If anyone wants to voice their opinion on this matter, do so on the bug 
>report, review, or in response to this thread.  Otherwise, it'll probably be 
>abandoned at some point and not done.
>
>Thanks,
>Brandon
>
>[1] https://bugs.launchpad.net/neutron/+bug/1426248
>[2] https://review.openstack.org/#/c/267935/


[openstack-dev] [ironic] weekly status report

2016-01-18 Thread Ruby Loo
Hi,


We are cool to present this week's subteam report for Ironic. As usual,
this is pulled directly from the Ironic whiteboard[0] and formatted.


Bugs (dtantsur)

===

- Stats (diff with 11.01.2016):

- Ironic: 148 bugs (+1) + 162 wishlist items (+4). 15 new (+1), 102 in
progress (+3), 1 critical, 18 high (+1) and 10 incomplete (-1)

- Inspector: 18 bugs (+3) + 17 wishlist items (+1). 0 new, 10 in progress
(+3), 1 critical (+1), 8 high and 0 incomplete

- Nova bugs with Ironic tag: 24 (-2). 0 new, 0 critical, 0 high

- over 100 in-progress bugs, we/I need to start checking their status

- though never-working gate might be to blame


Network isolation (Neutron/Ironic work) (jroll)

===

- needs reviews


Manual cleaning (rloo)

==

- getting there; saw more activity this last week. one patch got merged:
247285, one patch got approved: 251995, and others are getting reviewed

- https://review.openstack.org/#/q/topic:bug/1526290


Parallel tasks with futurist (dtantsur)

===

- waiting for Futurist release (and for gate to recover) to continue

- WIP patch is still WIP: https://review.openstack.org/264720


Node filter API and claims endpoint (jroll, devananda, lucasagomes)

===

- (Not totally related but) https://review.openstack.org/#/c/267723 to
split capabilities out of properties


Oslo (lintan)

=

- A new library (OSprofiler) was created under the oslo umbrella for tracing
calls, and it can work across multiple projects too. It is suggested to send
notifications to Ceilometer using oslo.messaging for all HTTP/RPC/DB/Driver
calls. This is helpful for admins when debugging bottlenecks or errors. It is
already used by Cinder, Heat, Glance and Nova (in progress); do we have
interest in this for the M release?

- https://review.openstack.org/#/c/103825/

- https://github.com/openstack/osprofiler

- [deva] yes, interested. If the patch to Ironic can land in M, great -
but definitely for N release, we should add this.


Inspector (dtansur)

===

- Gate is down due to DIB regression

- the fix has landed, waiting for the DIB release to happen and to hit
the gate


Bifrost (TheJulia)

==

- Gate was fixed last week; special thanks goes out to Infra.

- No additional updates.


.


Until next week,

--ruby


[0] https://etherpad.openstack.org/p/IronicWhiteBoard


Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-18 Thread Egor Guz
Hongbin,

I did some digging and found that the docker storage driver wasn't configured 
correctly on the agent nodes.
Also it looks like the Atomic folks recommend using dedicated volumes for DeviceMapper 
(http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/).
So I added a Cinder volume for the master as well (I tried to create volumes on local 
storage, but there isn't even enough space for a 1G volume).

Please take a look at https://review.openstack.org/#/c/267996; I did around 12 
gate runs and got only 2 failures (tests cannot connect to the master, but all 
container logs look alright, e.g. 
http://logs.openstack.org/96/267996/3/check/gate-functional-dsvm-magnum-swarm/d8d855b/console.html#_2016-01-18_04_31_17_312).
We have similar error rates with Kube. So after merging this code we can try to 
enable voting for the Swarm tests; thoughts?

—
Egor

On Jan 8, 2016, at 12:01, Hongbin Lu wrote:

There are other symptoms as well, which I have no idea without a deep dip.

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-08-16 2:14 PM
To: openstack-dev@lists.openstack.org
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I believe most failures are related to the container tests. Maybe we should comment 
only those out and keep the Swarm cluster provisioning.
Thoughts?

—
Egor

On Jan 8, 2016, at 06:37, Hongbin Lu wrote:

Done: https://review.openstack.org/#/c/264998/

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: January-07-16 10:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I’m not aware of any viable options besides using a nonvoting gate job. Are 
there other alternatives to consider? If not, let’s proceed with that approach.

Adrian

On Jan 7, 2016, at 3:34 PM, Hongbin Lu wrote:

Clark,

That is true. The check pipeline must pass in order to enter the gate pipeline. 
Here is the problem we are facing. A patch that was able to pass the check 
pipeline is blocked in gate pipeline, due to the instability of the test. The 
removal of unstable test from gate pipeline aims to unblock the patches that 
already passed the check.

An alternative is to remove the unstable test from check pipeline as well or 
mark it as non-voting test. If that is what the team prefers, I will adjust the 
review accordingly.

Best regards,
Hongbin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org]
Sent: January-07-16 6:04 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
Hi folks,

It looks the swarm func test is currently unstable, which negatively impacts 
the patch submission workflow. I proposed to remove it from Jenkins gate (but 
keep it in Jenkins check), until it becomes stable.
Please find the details in the review
(https://review.openstack.org/#/c/264998/) and let me know if you have any 
concern.

Removing it from gate but not from check doesn't necessarily help much because 
you can only enter the gate pipeline once the change has a +1 from Jenkins. 
Jenkins applies the +1 after check tests pass.

Clark




[openstack-dev] [Nova] sponsor some LVM development

2016-01-18 Thread Premysl Kouril
Hello everybody,

we are a Europe based operator and we have a case for LVM based nova
instances in our new cloud infrastructure. We are currently
considering to contribute to OpenStack Nova to implement some features
which are currently not supported for LVM based instances (they are
only supported for raw/qcow2 file based instances). As an example of
such features - nova block live migration or thin provisioning - these
nowadays don't work with LVM based instances (they do work for file
based).

Before actually diving into development here internally - we wanted to
check on possibility to actually sponsor this development within
existing community. So if there is someone who would be interested in
this work please drop me an email.

Regards,
Prema



Re: [openstack-dev] [mistral] how to split mistral log files

2016-01-18 Thread Lingxian Kong
Hi, Tomer,

If you really want Mistral services to be running on different
processes or different servers, I recommend you use different config
file respectively, log file path can be configured differently, which
can achieve what you said.
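[For example (a sketch only: the log paths are illustrative, and --log-file is the standard oslo.log option, so it is worth confirming the Mistral version in use supports it), the three service files could stay identical except for the log destination:]

```shell
/usr/bin/mistral-server --config-file=/etc/mistral/mistral.conf \
    --server=api      --log-file=/var/log/mistral/mistral-api.log
/usr/bin/mistral-server --config-file=/etc/mistral/mistral.conf \
    --server=engine   --log-file=/var/log/mistral/mistral-engine.log
/usr/bin/mistral-server --config-file=/etc/mistral/mistral.conf \
    --server=executor --log-file=/var/log/mistral/mistral-executor.log
```

[Alternatively, three small config files differing only in the `log_file` option achieve the same split without touching the command line.]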

On Tue, Jan 19, 2016 at 5:19 AM, SHTILMAN, Tomer (Tomer)
 wrote:
>
>
> Hi All
>
> I have three Linux service files for the Mistral api, engine and executor
>
> All of them run with a different value of the “--server” param (api/engine/executor)
>
> E.g. the api one is
>
> /usr/bin/mistral-server --config-file=/etc/mistral/mistral.conf --server=api
>
>
>
> All of the logs goes to /var/log/mistral/mistral-server.log
>
>
>
> I would like to split them into three different logs, without changing the
> service files themselves
>
> I thought of changing
> https://github.com/openstack/mistral/blob/master/setup.cfg
>
> And creating three different console scripts
>
> console_scripts =
>
> mistral-engine = mistral.cmd.launch:main
>
> mistral-api = mistral.cmd.launch:main
>
> mistral-executor = mistral.cmd.launch:main
>
> mistral-db-manage = mistral.db.sqlalchemy.migration.cli:main
>
>
>
> will be happy to get your inputs/thoughts
>
> Thanks
>
> Tomer
>
>
>
>



-- 
Regards!
---
Lingxian Kong



[openstack-dev] Cross-Project Meeting SKIPPED, Tue Jan 19th, 21:00 UTC

2016-01-18 Thread Mike Perez
Hi all!

We are skipping the cross-project meeting since we are currently establishing
the Cross Project Spec Liaison team [1].

Please check and make sure your team has someone registered in the wiki table
[2] by this week, in order to make sure someone from your team is responsible
for participating in cross-project specs and keeping your project up-to-date
with decisions that affect your project.

We also have a new meeting channel which is #openstack-meeting-cp where the
cross-project meeting will now take place at its usual time, Tuesdays at 2100
UTC.

[1] - 
http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons
[2] - 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Liaisons

-- 
Mike Perez



Re: [openstack-dev] [cross-project] Cross-Project Specs and Your Project

2016-01-18 Thread Mike Perez
On 11:24 Jan 14, Mike Perez wrote:
> Hello all!
> 
> We've been discussing cross-project spec liaisons on the mailing list [1] and
> cross-project meeting [2][3] for a bit, and we're now stating the official
> responsibilities [4] of the group.

Hey all,

The Project Team Guide for Cross-Project Specification Liaisons has been
approved [1]. Thanks everyone for reviewing the initial doc!

Now please coordinate with your team and sign up [2]. Thanks!

[1] - 
http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons
[2] - 
https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Liaisons

-- 
Mike Perez



Re: [openstack-dev] [glance][all] glance_store drivers deprecation/stabilization: Volunteers needed

2016-01-18 Thread Flavio Percoco

On 11/01/16 15:52 -0500, Flavio Percoco wrote:

Greetings,

Gentle reminder that this is happening next week.

Cheers,
Flavio

- Original Message -

From: "Flavio Percoco" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, December 10, 2015 9:16:09 AM
Subject: [glance][all] glance_store drivers deprecation/stabilization: 
Volunteers needed

Greetings,

As some of you know, there's a proposal (still a rough draft) for
refactoring the glance_store API. This library is the home for the
store drivers used by Glance to save or access the image data.

As other drivers in OpenStack, this library is facing the issue of
having unmaintained, untested and incomplete implementations of stores
that are, hypothetically, being used in production environments.

In order to guarantee some level of stability and, more important,
maintenance, the Glance team is looking for volunteers to sign up as
maintainers/keepers of the existing drivers.

Unfortunately, given the fact that our team is not as big as we would
like and that we don't have the knowledge to provide support for every
single driver, the Glance team will have to deprecate, and later
remove, the drivers that will remain without a maintainer.

Each driver will have to have a voting CI job running (maintained by
the driver maintainer) that will have to run Glance's functional tests
to ensure the API features are also supported by the driver.

There are 2 drivers I believe shouldn't fall into this category and
that should be maintained by the Glance community itself. These
drivers are:

- Filesystem
- Http

Please, find the full list of drivers here[0] and feel free to sign up
as a volunteer to maintain as many drivers as your time permits.
Please, provide all the information required as the lack of it will
result in the candidacy not being valid. As some sharp eyes will
notice, the Swift driver is not in the list above. The reason for that
is that, although it's a key piece of OpenStack, not everyone in the
Glance community knows the code of that driver well enough and there
are enough folks that know it that could perhaps volunteer as
maintainers/reviewers for it. Furthermore, adding the swift driver
there would mean we should probably add the Cinder one as well as it's
part of OpenStack just like Swift. We can extend that list later. For
now, I'd like to focus on bringing some stability to the library.

The above information, as soon as it's complete or the due date is
reached, will be added to glance_store's docs so that folks know where
to find the drivers maintainers and who to talk to when things go
south.

Here's an attempt to schedule some of this work (please refer to
this tag[0.1] and this soon-to-be-approved review[0.2] to have more
info w.r.t the deprecation times and backwards compatibility
guarantees):

- By mitaka 2 (Jan 16-22), all drivers should have a maintainer.
  Drivers without one, will be marked as deprecated in Mitaka.


This has been done!

http://docs.openstack.org/developer/glance_store/drivers/index.html

Only 1 driver was left without a maintainer and, as established, I've marked it as
deprecated:

https://review.openstack.org/#/c/266077/

Thanks to everyone who volunteered!
Flavio


- By N-2 (schedule still not available), all drivers that were marked
  as deprecated in Mitaka will be removed.

- By N-1 (schedule still not available), all drivers should have
  support for the main storage capabilities[1], which are READ_ACCESS,
  WRITE_ACCESS, and DRIVER_REUSABLE. Drivers that won't have support
  for the main set of capabilities will be marked as deprecated and
  then removed in O-1 (except for the HTTP one, which the team has
  agreed on keeping as a read-only driver).

- By N-2 (schedule still not available), all drivers need to have a
  voting gate. Drivers that won't have voting gates will be marked as
  deprecated and then removed in O-1.
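[To illustrate the capability gating described above (the capability names are taken from this email; the flag mechanics below are a generic sketch, not the actual glance_store implementation):]

```python
# Illustration only: a generic bit-flag sketch of store capability checks,
# not the real glance_store code.
import enum

class Capabilities(enum.IntFlag):
    READ_ACCESS = 1
    WRITE_ACCESS = 2
    DRIVER_REUSABLE = 4

# Minimum set a driver must support to avoid deprecation...
REQUIRED = (Capabilities.READ_ACCESS
            | Capabilities.WRITE_ACCESS
            | Capabilities.DRIVER_REUSABLE)
# ...except the HTTP driver, which the team agreed to keep read-only.
HTTP_REQUIRED = Capabilities.READ_ACCESS

def is_supported(driver_caps, required=REQUIRED):
    # A driver passes if every required bit is present in its capabilities.
    return (driver_caps & required) == required

print(is_supported(Capabilities.READ_ACCESS, HTTP_REQUIRED))  # True
```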

Although glance_store has intermediate releases, the above is being
planned based on the integrated release to avoid sudden "surprises"
on already released OpenStack versions.

Note that the above plan requires that the ongoing effort for setting
up a gate based on functional tests for glance_store will be
completed. There's enough time to get all this done for every driver.

In addition to the above, I'd like to note that we need to do this
*before* the refactor[2] so that we can provide a minimum guarantee
that it won't break the existing contract. Furthermore, maintainers of
these drivers will be asked to help migrate their drivers to the new
API but that will follow a different schedule that needs to be
discussed in the spec itself.

This is, obviously, a multi-release effort that will require syncing
with future PTLs of the project.

One more thing. Note that the above work shouldn't distract the team
from the priorities we've scheduled for Mitaka. The requested
work/info should be simple enough to provide and work on without
distracting us. I'll take care of 

Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-18 Thread Matt Kassawara
On Mon, Jan 18, 2016 at 4:14 PM, Kevin Benton  wrote:

> Thanks for the awesome writeup.
>
> >5) A bridge or veth pair with an IP address can participate in path MTU
> discovery (PMTUD). However, these devices do not appear to understand
> namespaces and originate the ICMP message from the host instead of a
> namespace. Therefore, the message never reaches the destination...
> typically a host outside of the deployment.
>
> I suspect this is because we don't put the bridges into namespaces. Even
> if we did do this, we would need to allocate IP addresses for every compute
> node to use to chat on the network...
>

Yup. Moving the MTU disparity to the first layer-3 device a packet
traverses inbound to a VM saves us from burning IPs too.


>
>
>
> >At least for the Linux bridge agent, I think we can address ingress MTU
> disparity (to the VM) by moving it to the first device in the chain capable
> of layer-3 operations, particularly the neutron router namespace. We can
> address the egress MTU disparity (from the VM) by advertising the MTU of
> the overlay network to the VM via DHCP/RA or using manual interface
> configuration.
>
> So when setting up DHCP for the subnet, would telling the DHCP agent to
> use an MTU we calculate based on (global MTU value - network encap
> overhead) achieve what you are suggesting here?
>

Yup. We mostly attempt to do that now.
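[To make the arithmetic concrete, a toy sketch (not Neutron code; the overhead values are the commonly cited IPv4 figures per encapsulation type) of the calculation the DHCP side would do:]

```python
# Toy sketch, not Neutron code: derive the MTU to advertise to VMs by
# subtracting the tunnel encapsulation overhead from the physical MTU.
ENCAP_OVERHEAD = {
    'flat': 0,
    'vlan': 0,    # the 802.1Q tag is carried in the Ethernet header itself
    'gre': 42,    # outer Ethernet (14) + IPv4 (20) + GRE (8)
    'vxlan': 50,  # outer Ethernet (14) + IPv4 (20) + UDP (8) + VXLAN (8)
}

def advertised_mtu(physical_mtu, network_type):
    """MTU the guest should use, e.g. sent via DHCP option 26 (interface-mtu)."""
    return physical_mtu - ENCAP_OVERHEAD[network_type]

print(advertised_mtu(1500, 'vxlan'))  # 1450 on a standard 1500-byte fabric
print(advertised_mtu(9000, 'gre'))    # 8958 with jumbo frames
```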

On Fri, Jan 15, 2016 at 10:41 AM, Sean M. Collins wrote:
>>
>>> MTU has been an ongoing issue in Neutron for _years_.
>>>
>>> It's such a hassle, that most people just throw up their hands and set
>>> their physical infrastructure to jumbo frames. We even document it.
>>>
>>>
>>> http://docs.openstack.org/juno/install-guide/install/apt-debian/content/neutron-network-node.html
>>>
>>> > Ideally, you can prevent these problems by enabling jumbo frames on
>>> > the physical network that contains your tenant virtual networks. Jumbo
>>> > frames support MTUs up to approximately 9000 bytes which negates the
>>> > impact of GRE overhead on virtual networks.
>>>
>>> We've pushed this onto operators and deployers. There's a lot of
>>> code in provisioning projects to handle MTUs.
>>>
>>> http://codesearch.openstack.org/?q=MTU=nope==
>>>
>>> We have mentions to it in our architecture design guide
>>>
>>>
>>> http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/arch-design/source/network-focus-architecture.rst#n150
>>>
>>> I want to get Neutron to the point where it starts discovering this
>>> information and automatically configuring, in the optimistic cases. I
>>> understand that it can be complex and have corner cases, but the issue
>>> we have today is that it is broken in some multinode jobs, even when Neutron
>>> developers are configuring it correctly.
>>>
>>> I also had this discussion on the DevStack side in
>>> https://review.openstack.org/#/c/112523/
>>> where basically, sure we can fix it in DevStack and at the gate, but it
>>> doesn't fix the problem for anyone who isn't using DevStack to deploy
>>> their cloud.
>>>
>>> Today we have a ton of MTU configuration options sprinkled throughout the
>>> L3 agent, dhcp agent, l2 agents, and at least one API extension to the
>>> REST API for handling MTUs.
>>>
>>> So yeah, a lot of knobs and not a lot of documentation on how to make
>>> this thing work correctly. I'd like to try and simplify.
>>>
>>>
>>> Further reading:
>>>
>>>
>>> http://techbackground.blogspot.co.uk/2013/06/path-mtu-discovery-and-gre.html
>>>
>>> http://lists.openstack.org/pipermail/openstack/2013-October/001778.html
>>>
>>>
>>> https://ask.openstack.org/en/question/6140/quantum-neutron-gre-slow-performance/
>>>
>>>
>>> https://ask.openstack.org/en/question/12499/forcing-mtu-to-1400-via-etcneutrondnsmasq-neutronconf-per-daniels/
>>>
>>>
>>> http://blog.systemathic.ch/2015/03/05/openstack-mtu-pitfalls-with-tunnels/
>>>
>>> https://twitter.com/search?q=openstack%20neutron%20MTU
>>>
>>> --
>>> Sean M. Collins
>>>
>>>
>>
>
>
> --
> Kevin Benton
>
>

[openstack-dev] [all][clients] Enable hacking in python-*clients

2016-01-18 Thread Kekane, Abhishek
Hi Devs,

As of now, all OpenStack projects have hacking checks, which ensure that issues
against the OpenStack guidelines are caught while running PEP8 checks using tox.
There are no such checks in any of the python-*clients.

IMO it is worth enabling hacking checks in the python-*clients as well, which will
catch some guideline issues right in the local environment.

Please let me know your opinion on the same.
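[A sketch of the usual wiring (file names and the hacking version pin are illustrative; the pin should match the current global requirements):]

```ini
# test-requirements.txt gains a line like:
#   hacking<0.11,>=0.10.0
#
# tox.ini / flake8 configuration:
[testenv:pep8]
commands = flake8

[flake8]
# H-prefixed checks come from the hacking plugin; which ones to
# ignore is a per-project choice.
ignore = H404,H405
show-source = True
exclude = .venv,.git,.tox,dist,*egg,build
```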

Thanks & Regards,

Abhishek Kekane



Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-18 Thread Ton Ngo
Hi Egor,
Do we need to add a cinder volume to the master nodes for Kubernetes as
well?  We did not run Docker on the master node before so the volume was
not needed.
Ton Ngo,




From:   Hongbin Lu 
To: Egor Guz , OpenStack Development Mailing
List 
Date:   01/18/2016 12:29 PM
Subject:Re: [openstack-dev] [magnum] Temporarily remove swarm func test
from gate



Hi Egor,

Thanks for investigating on the issue. I will review the patch. Agreed. We
can definitely enable the swarm tests if everything works fine.

Best regards,
Hongbin

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-18-16 2:42 PM
To: OpenStack Development Mailing List
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test
from gate

Hongbin,

I did some digging and found that the docker storage driver wasn't configured
correctly on the agent nodes.
Also it looks like the Atomic folks recommend using dedicated volumes for
DeviceMapper (
http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/
).
So I added a Cinder volume for the master as well (I tried to create volumes on
local storage, but there isn't even enough space for a 1G volume).

Please take a look at https://review.openstack.org/#/c/267996; I did around
12 gate runs and got only 2 failures (tests cannot connect to the master, but
all container logs look alright, e.g.
http://logs.openstack.org/96/267996/3/check/gate-functional-dsvm-magnum-swarm/d8d855b/console.html#_2016-01-18_04_31_17_312
). We have similar error rates with Kube. So after merging this code we can
try to enable voting for the Swarm tests; thoughts?

—
Egor

On Jan 8, 2016, at 12:01, Hongbin Lu wrote:

There are other symptoms as well, which I have no idea without a deep dip.

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-08-16 2:14 PM
To: openstack-dev@lists.openstack.org
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test
from gate

Hongbin,

I believe most failures are related to the container tests. Maybe we should
comment only those out and keep the Swarm cluster provisioning.
Thoughts?

—
Egor

On Jan 8, 2016, at 06:37, Hongbin Lu wrote:

Done: https://review.openstack.org/#/c/264998/

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: January-07-16 10:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test
from gate

Hongbin,

I’m not aware of any viable options besides using a nonvoting gate job. Are
there other alternatives to consider? If not, let’s proceed with that
approach.

Adrian

On Jan 7, 2016, at 3:34 PM, Hongbin Lu wrote:

Clark,

That is true. The check pipeline must pass in order to enter the gate
pipeline. Here is the problem we are facing. A patch that was able to pass
the check pipeline is blocked in gate pipeline, due to the instability of
the test. The removal of unstable test from gate pipeline aims to unblock
the patches that already passed the check.

An alternative is to remove the unstable test from check pipeline as well
or mark it as non-voting test. If that is what the team prefers, I will
adjust the review accordingly.

Best regards,
Hongbin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org]
Sent: January-07-16 6:04 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test
from gate

On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
Hi folks,

It looks the swarm func test is currently unstable, which negatively
impacts the patch submission workflow. I proposed to remove it from Jenkins
gate (but keep it in Jenkins check), until it becomes stable.
Please find the details in the review
(https://review.openstack.org/#/c/264998/) and let me know if you have any
concern.

Removing it from gate but not from check doesn't necessarily help much
because you can only enter the gate pipeline once the change has a +1 from
Jenkins. Jenkins applies the +1 after check tests pass.

Clark



[openstack-dev] [heat] Convergence status

2016-01-18 Thread Anant Patil

Hi,

This is to discuss the status of convergence patches and plans for
making it default.

The convergence gate jobs have been running successfully since more than
a month now. There were three integration tests skipped:
1. StackValidationTest
2. UpdateStackTest.test_stack_update_alias_type
3. UpdateStackTest.test_stack_update_alias_changes

2 and 3 of above are addressed by patches and have been running
successfully:
https://review.openstack.org/#/c/248676/
https://review.openstack.org/#/c/259865/

The StackValidationTest fails because the test uses an image without
cfn-tools and convergence heat will wait for the signal to arrive.
https://bugs.launchpad.net/heat/+bug/1486281/comments/3 . This should be
fixed when we address https://bugs.launchpad.net/heat/+bug/1533176 . The
delete request can cancel the currently running check-resource requests
and it can then proceed without having to wait for resources to
complete. Also, if we use a proper image, this is not seen.

There are few important patches in review which should close most of the
bugs:
https://review.openstack.org/#/c/264748/
https://review.openstack.org/#/c/262374/
https://review.openstack.org/#/c/261208/
https://review.openstack.org/#/c/264675/
https://review.openstack.org/#/c/208790/

There are other bugs for which the patches should land soon. We should
plan to make convergence the default early in the m-3 phase so that we can
thoroughly test it before release. Let me know your opinion on this.

- Anant



[openstack-dev] [Neutron] [Docs] Definition of a provider Network

2016-01-18 Thread Andreas Scheuring
Hi everybody, 

I stumbled over a definition that explains the difference between a
provider network and a self-service network. [1]

To summarize it says:
- Provider network: primarily uses layer-2 services and VLAN segmentation
and cannot be used for advanced services (FWaaS, ...)
- Self-service network: Neutron is configured to use an overlay network
and supports advanced services (FWaaS, ...)


But my understanding is more like this:
- Provider network: The OpenStack user needs information about the
underlying network infrastructure to create a virtual network that
exactly matches this infrastructure.

- Self-service network: The OpenStack user can create virtual networks
without knowledge of the underlying infrastructure on the data
network. This can also include VLAN networks, if the L2 plugin/agent was
configured accordingly.
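
The distinction shows up directly in the API request: for a provider network the user supplies the segmentation details, while for a self-service network Neutron picks them. A sketch of the two request bodies (the provider:* attribute names are the standard provider extension ones; the concrete values are made up):

```python
# Hypothetical request bodies for Neutron's POST /v2.0/networks call.
# The provider:* attribute names come from the standard provider
# extension; the concrete values here are made up.
provider_net = {
    "network": {
        "name": "datacenter-vlan-101",
        "provider:network_type": "vlan",
        "provider:physical_network": "physnet1",
        "provider:segmentation_id": 101,
    }
}

# Self-service network: no provider attributes, so Neutron chooses the
# segmentation itself from its configured tenant network types.
self_service_net = {"network": {"name": "my-private-net"}}
```

Either way, this illustrates why the second definition above fits: the provider case requires knowledge of physnet names and segmentation IDs, the self-service case does not.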


Did the meaning of a provider network change in the meantime, or is my
understanding just wrong?

Thanks!




[1]
http://docs.openstack.org/liberty/install-guide-rdo/overview.html#id4


-- 
-
Andreas (IRC: scheuran)




[openstack-dev] [vitrage] Vitrage meeting tomorrow

2016-01-18 Thread AFEK, Ifat (Ifat)
Hi,

We will have Vitrage weekly meeting tomorrow, Wednesday at 9:00 UTC, on 
#openstack-meeting-3 channel.

Agenda:

* Current status and progress from last week
* Review action items
* Next steps 
* Open Discussion

You are welcome to join.

Thanks, 
Ifat.




Re: [openstack-dev] [all][clients] Enable hacking in python-*clients

2016-01-18 Thread Akihiro Motoki
Hi Abhishek,

In my understanding, hacking checks are enabled for most (or all) of the
python-*clients.
For example, flake8 is run for each neutronclient review [1].
test-requirements installs hacking, so I believe the hacking checks are enabled.
openstackclient and novaclient do the same [2] [3].
Am I missing something?

[1] 
http://git.openstack.org/cgit/openstack/python-neutronclient/tree/tox.ini#n26
[2] 
http://git.openstack.org/cgit/openstack/python-openstackclient/tree/tox.ini#n15
[3] http://git.openstack.org/cgit/openstack/python-novaclient/tree/tox.ini#n24
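
For reference, a hacking check is just a flake8-style plugin: a function that receives each logical line and yields (offset, message) pairs for violations. The check below is a hypothetical sketch (the H999 code and the rule itself are made up, not one of the real hacking rules):

```python
import re

# Hypothetical rule: flag the deprecated assertEquals alias. Real
# checks live in each project's hacking/checks module and are
# registered with flake8 via entry points; this is only a sketch.
ASSERT_EQUALS_RE = re.compile(r"\bassertEquals\(")


def check_no_assert_equals(logical_line):
    """H999: use assertEqual instead of assertEquals (hypothetical)."""
    match = ASSERT_EQUALS_RE.search(logical_line)
    if match:
        yield (match.start(), "H999: use assertEqual instead of assertEquals")
```

Running `tox -e pep8` simply invokes flake8 with the hacking package installed, so checks like this run alongside the stock PEP8 rules.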

2016-01-19 14:18 GMT+09:00 Kekane, Abhishek :
> Hi Devs,
>
>
>
> As of now, all OpenStack projects have hacking checks, which ensure that
> OpenStack guideline issues are caught while running PEP8 checks using tox.
>
> There are no such checks in any of the python-*client.
>
>
>
> IMO it is worth enabling hacking checks in the python-*clients as well, which
> will catch some guideline issues locally,
>
>
>
> Please let me know your opinion on the same.
>
>
>
> Thanks & Regards,
>
>
>
> Abhishek Kekane
>
>
> __
> Disclaimer: This email and any attachments are sent in strictest confidence
> for the sole use of the addressee and may contain legally privileged,
> confidential, and proprietary data. If you are not the intended recipient,
> please advise the sender by replying promptly to this email and then delete
> and destroy this email and any attachments without any further use, copying
> or forwarding.
>
>



Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-18 Thread Kevin Benton
>Yup. We mostly attempt to do that now.

Right, but not by default. Can you think of a scenario where advertising it
would be harmful?
On Jan 18, 2016 23:57, "Matt Kassawara"  wrote:

>
>
> On Mon, Jan 18, 2016 at 4:14 PM, Kevin Benton  wrote:
>
>> Thanks for the awesome writeup.
>>
>> >5) A bridge or veth pair with an IP address can participate in path MTU
>> discovery (PMTUD). However, these devices do not appear to understand
>> namespaces and originate the ICMP message from the host instead of a
>> namespace. Therefore, the message never reaches the destination...
>> typically a host outside of the deployment.
>>
>> I suspect this is because we don't put the bridges into namespaces. Even
>> if we did do this, we would need to allocate IP addresses for every compute
>> node to use to chat on the network...
>>
>
> Yup. Moving the MTU disparity to the first layer-3 device a packet
> traverses inbound to a VM saves us from burning IPs too.
>
>
>>
>>
>>
>> >At least for the Linux bridge agent, I think we can address ingress MTU
>> disparity (to the VM) by moving it to the first device in the chain capable
>> of layer-3 operations, particularly the neutron router namespace. We can
>> address the egress MTU disparity (from the VM) by advertising the MTU of
>> the overlay network to the VM via DHCP/RA or using manual interface
>> configuration.
>>
>> So when setting up DHCP for the subnet, would telling the DHCP agent to
>> use an MTU we calculate based on (global MTU value - network encap
>> overhead) achieve what you are suggesting here?
>>
>
> Yup. We mostly attempt to do that now.
>
> On Fri, Jan 15, 2016 at 10:41 AM, Sean M. Collins 
>>> wrote:
>>>
 MTU has been an ongoing issue in Neutron for _years_.

 It's such a hassle, that most people just throw up their hands and set
 their physical infrastructure to jumbo frames. We even document it.


 http://docs.openstack.org/juno/install-guide/install/apt-debian/content/neutron-network-node.html

 > Ideally, you can prevent these problems by enabling jumbo frames on
 > the physical network that contains your tenant virtual networks. Jumbo
 > frames support MTUs up to approximately 9000 bytes which negates the
 > impact of GRE overhead on virtual networks.

 We've pushed this onto operators and deployers. There's a lot of
 code in provisioning projects to handle MTUs.

 http://codesearch.openstack.org/?q=MTU=nope==

 We have mentions to it in our architecture design guide


 http://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/arch-design/source/network-focus-architecture.rst#n150

 I want to get Neutron to the point where it starts discovering this
 information and automatically configuring, in the optimistic cases. I
 understand that it can be complex and have corner cases, but the issue
 we have today is that it is broken in some multinode jobs, and even Neutron
 developers aren't configuring it correctly.

 I also had this discussion on the DevStack side in
 https://review.openstack.org/#/c/112523/
 where basically, sure we can fix it in DevStack and at the gate, but it
 doesn't fix the problem for anyone who isn't using DevStack to deploy
 their cloud.

 Today we have a ton of MTU configuration options sprinkled throughout the
 L3 agent, dhcp agent, l2 agents, and at least one API extension to the
 REST API for handling MTUs.

 So yeah, a lot of knobs and not a lot of documentation on how to make
 this thing work correctly. I'd like to try and simplify.


 Further reading:


 http://techbackground.blogspot.co.uk/2013/06/path-mtu-discovery-and-gre.html

 http://lists.openstack.org/pipermail/openstack/2013-October/001778.html


 https://ask.openstack.org/en/question/6140/quantum-neutron-gre-slow-performance/


 https://ask.openstack.org/en/question/12499/forcing-mtu-to-1400-via-etcneutrondnsmasq-neutronconf-per-daniels/


 http://blog.systemathic.ch/2015/03/05/openstack-mtu-pitfalls-with-tunnels/

 https://twitter.com/search?q=openstack%20neutron%20MTU

 --
 Sean M. Collins



>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Kevin Benton
>>
>> 

Re: [openstack-dev] [mistral] how to split mistral log files

2016-01-18 Thread SHTILMAN, Tomer (Tomer)
Thanks.
I'm not sure that duplicating mistral.conf three times is the best option in
this case; it will make things very hard to manage.


-Original Message-
From: EXT Lingxian Kong [mailto:anlin.k...@gmail.com] 
Sent: Tuesday, January 19, 2016 2:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [mistral] how to split mistral log files

Hi, Tomer,

If you really want the Mistral services to run in different processes or on
different servers, I recommend you use a separate config file for each; the log
file path can then be configured per service, which achieves what you describe.
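
A middle ground between duplicating mistral.conf and patching the service files could be to derive the log path from the --server argument at startup. This is only a hypothetical sketch (Mistral's launcher does not do this today); the helper name and paths are assumptions:

```python
import os


# Hypothetical helper -- Mistral's real launcher does not do this; the
# paths and fallback behaviour are assumptions for illustration.
def per_component_log_file(argv, log_dir="/var/log/mistral"):
    """Derive a log file name from the --server argument, falling back
    to the program name, e.g. '--server=api' -> mistral-api.log."""
    for arg in argv[1:]:
        if arg.startswith("--server="):
            component = arg.split("=", 1)[1]
            return os.path.join(log_dir, "mistral-%s.log" % component)
    return os.path.join(log_dir, "%s.log" % os.path.basename(argv[0]))
```

With something like this, the three service files could stay identical apart from the --server flag they already pass.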

On Tue, Jan 19, 2016 at 5:19 AM, SHTILMAN, Tomer (Tomer) 
 wrote:
>
>
> Hi All
>
> I have three Linux service files, for the Mistral api, engine and executor.
>
> All of them run with a different value for the --server parameter
> (api/engine/executor).
>
> E.g. the api service is
>
> /usr/bin/mistral-server --config-file=/etc/mistral/mistral.conf 
> --server=api
>
>
>
> All of the logs goes to /var/log/mistral/mistral-server.log
>
>
>
> I would like to split them into three different logs, without
> changing the service files themselves.
>
> I thought of changing
> https://github.com/openstack/mistral/blob/master/setup.cfg
>
> And creating three different console scripts
>
> console_scripts =
>
> mistral-engine = mistral.cmd.launch:main
>
> mistral-api = mistral.cmd.launch:main
>
> mistral-executor = mistral.cmd.launch:main
>
> mistral-db-manage = mistral.db.sqlalchemy.migration.cli:main
>
>
>
> will be happy to get your inputs/thoughts
>
> Thanks
>
> Tomer
>
>
>
>
>



--
Regards!
---
Lingxian Kong



Re: [openstack-dev] [heat] Convergence status

2016-01-18 Thread Sergey Kraynev
Anant, thank you for the summary.

I planned to announce it (enabling convergence by default) next week
(after m-2).
I think that we need to land the first two patches mentioned above before that.


On 19 January 2016 at 09:44, Anant Patil  wrote:

>
> Hi,
>
> This is to discuss the status of convergence patches and plans for
> making it default.
>
> The convergence gate jobs have been running successfully since more than
> a month now. There were three integration tests skipped:
> 1. StackValidationTest
> 2. UpdateStackTest.test_stack_update_alias_type
> 3. UpdateStackTest.test_stack_update_alias_changes
>
> 2 and 3 of above are addressed by patches and have been running
> successfully:
> https://review.openstack.org/#/c/248676/
> https://review.openstack.org/#/c/259865/
>
> The StackValidationTest fails because the test uses an image without
> cfn-tools and convergence heat will wait for the signal to arrive.
> https://bugs.launchpad.net/heat/+bug/1486281/comments/3 . This should be
> fixed when we address https://bugs.launchpad.net/heat/+bug/1533176 . The
> delete request can cancel the currently running check-resource requests
> and it can then proceed without having to wait for resources to
> complete. Also, if we use a proper image, this is is not seen.
>
> There are few important patches in review which should close most of the
> bugs:
> https://review.openstack.org/#/c/264748/
> https://review.openstack.org/#/c/262374/
> https://review.openstack.org/#/c/261208/
> https://review.openstack.org/#/c/264675/
> https://review.openstack.org/#/c/208790/
>
> There are other bugs for which the patches should land soon. We should
> plan to make convergence default earlier in m-3 phase so that we can
> thoroughly test it before release. Let me know your opinion on this.
>
> - Anant
>
>



-- 
Regards,
Sergey.


[openstack-dev] [infra] Blocked access to Gerrit account.

2016-01-18 Thread Lubosz Kosnik

Hello everyone,
Yesterday I merged my two accounts on Launchpad and since then I cannot
log in to my account using SSO on OpenStack Gerrit.
When I log in using my merged account I land in a new account,
without any reviews, commits or anything else.

Additionally, right now there are two available options in the owner filter.
When you specify lubosz.kosnik you will see the same option twice:
"Lubosz Kosnik 

Re: [openstack-dev] [magnum] Temporarily remove swarm func test from gate

2016-01-18 Thread Hongbin Lu
Hi Egor,

Thanks for investigating the issue. I will review the patch. Agreed: we can
definitely enable the swarm tests if everything works fine.

Best regards,
Hongbin

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com] 
Sent: January-18-16 2:42 PM
To: OpenStack Development Mailing List
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I did some digging and found that the Docker storage driver wasn't configured
correctly on the agent nodes.
Also, it looks like the Atomic folks recommend using dedicated volumes for
DeviceMapper
(http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/).
So I added a Cinder volume for the master as well (I tried creating volumes on
local storage, but there is not even enough space for a 1G volume).

Please take a look at https://review.openstack.org/#/c/267996. I did around ~12
gate runs and got only 2 failures (the tests cannot connect to the master, but
all container logs look alright, e.g.
http://logs.openstack.org/96/267996/3/check/gate-functional-dsvm-magnum-swarm/d8d855b/console.html#_2016-01-18_04_31_17_312);
we have similar error rates with Kubernetes. So after merging this code we can
try to enable voting for the Swarm tests. Thoughts?

—
Egor

On Jan 8, 2016, at 12:01, Hongbin Lu 
> wrote:

There are other symptoms as well, which I can't explain without a deeper dive.

-Original Message-
From: Egor Guz [mailto:e...@walmartlabs.com]
Sent: January-08-16 2:14 PM
To: openstack-dev@lists.openstack.org
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I believe most failures are related to the container tests. Maybe we should
comment out only those and keep the Swarm cluster provisioning.
Thoughts?

—
Egor

On Jan 8, 2016, at 06:37, Hongbin Lu 
>
 wrote:

Done: https://review.openstack.org/#/c/264998/

Best regards,
Hongbin

-Original Message-
From: Adrian Otto [mailto:adrian.o...@rackspace.com]
Sent: January-07-16 10:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

Hongbin,

I’m not aware of any viable options besides using a nonvoting gate job. Are 
there other alternatives to consider? If not, let’s proceed with that approach.

Adrian

On Jan 7, 2016, at 3:34 PM, Hongbin Lu 
>
 wrote:

Clark,

That is true. The check pipeline must pass in order to enter the gate pipeline. 
Here is the problem we are facing. A patch that was able to pass the check 
pipeline is blocked in the gate pipeline due to the instability of the test. The
removal of unstable test from gate pipeline aims to unblock the patches that 
already passed the check.

An alternative is to remove the unstable test from check pipeline as well or 
mark it as non-voting test. If that is what the team prefers, I will adjust the 
review accordingly.

Best regards,
Hongbin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org]
Sent: January-07-16 6:04 PM
To: 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] Temporarily remove swarm func test from 
gate

On Thu, Jan 7, 2016, at 02:59 PM, Hongbin Lu wrote:
Hi folks,

It looks like the swarm func test is currently unstable, which negatively
impacts the patch submission workflow. I propose to remove it from the Jenkins
gate (but keep it in the Jenkins check) until it becomes stable.
Please find the details in the review
(https://review.openstack.org/#/c/264998/) and let me know if you have any 
concern.

Removing it from gate but not from check doesn't necessarily help much because 
you can only enter the gate pipeline once the change has a +1 from Jenkins. 
Jenkins applies the +1 after check tests pass.

Clark


Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-18 Thread John Griffith
On Sun, Jan 17, 2016 at 8:30 PM, Matt Kassawara 
wrote:

> Prior attempts to solve the MTU problem in neutron simply band-aid it or
> become too complex from feature creep or edge cases that mask the primary
> goal of a simple implementation that works for most deployments. So, I ran
> some experiments to empirically determine the root cause of MTU problems in
> common neutron deployments using the Linux bridge agent. I plan to perform
> these experiments again using the Open vSwitch agent... after sufficient
> mental recovery.
>
> I highly recommend reading further, but here's the TL;DR:
>
> Observations...
>
> 1) During creation of a VXLAN interface, Linux automatically subtracts the
> VXLAN protocol overhead from the MTU of the parent interface.
> 2) A veth pair or tap with a different MTU on each end drops packets
> larger than the smaller MTU.
> 3) Linux automatically adjusts the MTU of a bridge to the lowest MTU of
> all the ports. Therefore, Linux reduces the typical bridge MTU from 1500 to
> 1450 when neutron adds a VXLAN interface to it.
> 4) A bridge with different MTUs on each port drops packets larger than the
> MTU of the bridge.
> 5) A bridge or veth pair with an IP address can participate in path MTU
> discovery (PMTUD). However, these devices do not appear to understand
> namespaces and originate the ICMP message from the host instead of a
> namespace. Therefore, the message never reaches the destination...
> typically a host outside of the deployment.
>
> Conclusion...
>
> The MTU disparity between native and overlay networks must reside in a
> device capable of layer-3 operations that can participate in PMTUD, such as
> the neutron router between a private/project overlay network and a
> public/external native network.
>
> Some background...
>
> In a typical datacenter network, MTU must remain consistent within a
> layer-2 network because fragmentation and the mechanism indicating the need
> for it occurs at layer-3. In other words, all host interfaces and switch
> ports on the same layer-2 network must use the same MTU. If the layer-2
> network connects to a router, the router port must also use the same MTU. A
> router can contain ports on multiple layer-2 networks with different MTUs
> because it operates on those networks at layer-3. If the MTU changes
> between ports on a router and devices on those layer-2 networks attempt to
> communicate at layer-3, the router can perform a couple of actions. For
> IPv4, the router can fragment the packet. However, if the packet contains
> the "don't fragment" (DF) flag, the router can either silently drop the
> packet or return an ICMP "fragmentation needed" message to the sender. This
> ICMP message contains the MTU of the next layer-2 network in the route
> between the sender and receiver. Each router in the path can return these
> ICMP messages to the sender until it learns the maximum MTU for the entire
> path, also known as path MTU discovery (PMTUD). IPv6 does not support
> fragmentation.
>
> The cloud provides a virtual extension of a physical network. In the
> simplest sense, patch cables become veth pairs, switches become bridges,
> and routers become namespaces. Therefore, MTU implementation for virtual
> networks should mimic physical networks where MTU changes must occur within
> a router at layer-3.
>
> For these experiments, my deployment contains one controller and one
> compute node. Neutron uses the ML2 plug-in and Linux bridge agent. The
> configuration does not contain any MTU options (e.g., path_mtu). One VM with
> a floating IP address attaches to a VXLAN private network that routes to a
> flat public network. The DHCP agent does not advertise MTU to the VM. My
> lab resides on public cloud infrastructure with networks that filter
> unknown MAC addresses such as those that neutron generates for virtual
> network components. Let's talk about the implications and workarounds.
>
> The VXLAN protocol contains 50 bytes of overhead. Linux automatically
> calculates the MTU of VXLAN devices by subtracting 50 bytes from the parent
> device, in this case a standard Ethernet interface with a 1500 MTU.
> However, due to the limitations of public cloud networks, I must create a
> VXLAN tunnel between the controller node and a host outside of the
> deployment to simulate traffic from a datacenter network. This tunnel
> effectively reduces the "native" MTU from 1500 to 1450. Therefore, I need
> to subtract an additional 50 bytes from neutron VXLAN network components,
> essentially emulating the 50-byte difference between conventional neutron
> VXLAN networks and native networks. The host outside of the deployment
> assumes it can send packets using a 1450 MTU. The VM also assumes it can
> send packets using a 1450 MTU because the DHCP agent does not advertise a
> 1400 MTU to it.
>
> Let's get to it!
>
> Note: The commands in these experiments often generate lengthy output, so
> please refer to the gists when necessary.
>
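
The PMTUD behaviour Matt describes can be illustrated with a toy model (not from the thread): a sender with DF set keeps shrinking its packet to the MTU reported in each ICMP "fragmentation needed" message until the packet fits every link on the path.

```python
def discover_path_mtu(packet_size, path_mtus):
    """Toy model of IPv4 PMTUD with the DF flag set.

    Each router whose next link is too small returns an ICMP
    'fragmentation needed' message carrying that link's MTU; the
    sender retries with that size until delivery succeeds.
    """
    size = packet_size
    while True:
        for link_mtu in path_mtus:
            if size > link_mtu:
                size = link_mtu  # ICMP reported the smaller MTU; retry
                break
        else:
            return size  # packet fits every link: path MTU discovered
```

This is also why observation 5 above matters: if the ICMP message originates in the wrong namespace and never reaches the sender, the retry loop never happens and large packets are silently dropped.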

Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-18 Thread Kevin Benton
Thanks for the awesome writeup.

>5) A bridge or veth pair with an IP address can participate in path MTU
discovery (PMTUD). However, these devices do not appear to understand
namespaces and originate the ICMP message from the host instead of a
namespace. Therefore, the message never reaches the destination...
typically a host outside of the deployment.

I suspect this is because we don't put the bridges into namespaces. Even if
we did do this, we would need to allocate IP addresses for every compute
node to use to chat on the network...


>At least for the Linux bridge agent, I think we can address ingress MTU
disparity (to the VM) by moving it to the first device in the chain capable
of layer-3 operations, particularly the neutron router namespace. We can
address the egress MTU disparity (from the VM) by advertising the MTU of
the overlay network to the VM via DHCP/RA or using manual interface
configuration.

So when setting up DHCP for the subnet, would telling the DHCP agent to use
an MTU we calculate based on (global MTU value - network encap overhead)
achieve what you are suggesting here?
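
The calculation proposed here is just a subtraction per encapsulation type. A hedged sketch: only the 50-byte VXLAN figure comes from the observations above; the GRE value is the commonly cited one and is included for illustration only.

```python
# Per-network-type encapsulation overhead in bytes. Only the VXLAN
# figure is taken from this thread; the others are illustrative.
ENCAP_OVERHEAD = {"flat": 0, "vlan": 0, "vxlan": 50, "gre": 42}


def advertised_mtu(physical_mtu, network_type):
    """MTU the DHCP agent would advertise to instances."""
    return physical_mtu - ENCAP_OVERHEAD[network_type]
```

With a standard 1500-byte physical network, a VXLAN tenant network would advertise 1450 to its instances.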

On Sun, Jan 17, 2016 at 10:30 PM, Matt Kassawara 
wrote:

> Prior attempts to solve the MTU problem in neutron simply band-aid it or
> become too complex from feature creep or edge cases that mask the primary
> goal of a simple implementation that works for most deployments. So, I ran
> some experiments to empirically determine the root cause of MTU problems in
> common neutron deployments using the Linux bridge agent. I plan to perform
> these experiments again using the Open vSwitch agent... after sufficient
> mental recovery.
>
> I highly recommend reading further, but here's the TL;DR:
>
> Observations...
>
> 1) During creation of a VXLAN interface, Linux automatically subtracts the
> VXLAN protocol overhead from the MTU of the parent interface.
> 2) A veth pair or tap with a different MTU on each end drops packets
> larger than the smaller MTU.
> 3) Linux automatically adjusts the MTU of a bridge to the lowest MTU of
> all the ports. Therefore, Linux reduces the typical bridge MTU from 1500 to
> 1450 when neutron adds a VXLAN interface to it.
> 4) A bridge with different MTUs on each port drops packets larger than the
> MTU of the bridge.
> 5) A bridge or veth pair with an IP address can participate in path MTU
> discovery (PMTUD). However, these devices do not appear to understand
> namespaces and originate the ICMP message from the host instead of a
> namespace. Therefore, the message never reaches the destination...
> typically a host outside of the deployment.
>
> Conclusion...
>
> The MTU disparity between native and overlay networks must reside in a
> device capable of layer-3 operations that can participate in PMTUD, such as
> the neutron router between a private/project overlay network and a
> public/external native network.
>
> Some background...
>
> In a typical datacenter network, MTU must remain consistent within a
> layer-2 network because fragmentation and the mechanism indicating the need
> for it occurs at layer-3. In other words, all host interfaces and switch
> ports on the same layer-2 network must use the same MTU. If the layer-2
> network connects to a router, the router port must also use the same MTU. A
> router can contain ports on multiple layer-2 networks with different MTUs
> because it operates on those networks at layer-3. If the MTU changes
> between ports on a router and devices on those layer-2 networks attempt to
> communicate at layer-3, the router can perform a couple of actions. For
> IPv4, the router can fragment the packet. However, if the packet contains
> the "don't fragment" (DF) flag, the router can either silently drop the
> packet or return an ICMP "fragmentation needed" message to the sender. This
> ICMP message contains the MTU of the next layer-2 network in the route
> between the sender and receiver. Each router in the path can return these
> ICMP messages to the sender until it learns the maximum MTU for the entire
> path, also known as path MTU discovery (PMTUD). IPv6 does not support
> fragmentation.
>
> The cloud provides a virtual extension of a physical network. In the
> simplest sense, patch cables become veth pairs, switches become bridges,
> and routers become namespaces. Therefore, MTU implementation for virtual
> networks should mimic physical networks where MTU changes must occur within
> a router at layer-3.
>
> For these experiments, my deployment contains one controller and one
> compute node. Neutron uses the ML2 plug-in and Linux bridge agent. The
> configuration does not contain any MTU options (e.g., path_mtu). One VM with
> a floating IP address attaches to a VXLAN private network that routes to a
> flat public network. The DHCP agent does not advertise MTU to the VM. My
> lab resides on public cloud infrastructure with networks that filter
> unknown MAC addresses such as those that neutron generates for virtual
> network 

Re: [openstack-dev] [Nova] sponsor some LVM development

2016-01-18 Thread Anita Kuno
On 01/18/2016 02:49 PM, Premysl Kouril wrote:
> Hello everybody,
> 
> we are a Europe-based operator and we have a case for LVM-based Nova
> instances in our new cloud infrastructure. We are currently
> considering to contribute to OpenStack Nova to implement some features
> which are currently not supported for LVM based instances (they are
> only supported for raw/qcow2 file based instances). As an example of
> such features - nova block live migration or thin provisioning - these
> nowadays don't work with LVM based instances (they do work for file
> based).
> 
> Before actually diving into development here internally, we wanted to
> check on the possibility of sponsoring this development within the
> existing community. So if there is someone who would be interested in
> this work please drop me an email.

I'm not a Nova developer. I am interested in clarifying what you are
asking.

Are you asking for current Nova developers to work on this feature? Or
is your company interested in having your developers interact with Nova
developers?

Thank you,
Anita.

> 
> Regards,
> Prema
> 
> 




[openstack-dev] [stable] No stable team meeting on Tuesday 1/26

2016-01-18 Thread Matt Riedemann
There are three meetups happening next week and I'll be at one of them, 
so we're going to cancel next week's meeting. So the next meeting will 
be on Monday 2/1 at 2100 UTC.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova][stable] Proposal to add Tony Breeds to nova-stable-maint

2016-01-18 Thread Michael Davies
Congrats Tony!
-- 
Michael Davies   mich...@the-davies.net
Rackspace Cloud Builders Australia


Re: [openstack-dev] [OpenStack-Infra] Mitaka Infra Sprint

2016-01-18 Thread Colleen Murphy
On Wed, Dec 9, 2015 at 9:17 PM, Joshua Hesketh 
wrote:

> Hi all,
> As discussed during the infra-meeting on Tuesday[0], the infra team will
> be holding a mid-cycle sprint to focus on infra-cloud[1].
> The sprint is an opportunity to get in a room and really work through as
> much code and reviews as we can related to infra-cloud while having each
> other nearby to discuss blockers, technical challenges and enjoy company.
> Information + RSVP:
> https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint
> Dates:Mon. February 22nd at 9:00am to Thursday. February 25th
> Location:HPE Fort Collins Colorado Office
> Who:Anybody is welcome. Please put your name on the wiki page if you are
> interested in attending.
> If you have any questions please don't hesitate to ask.
> Cheers,Josh + Infra team
> [0]
> http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-08-19.00.html[1]
> https://specs.openstack.org/openstack-infra/infra-specs/specs/infra-cloud.html
>
Since I didn't see one, I started an etherpad for the sprint, and added it
to the wiki page:

https://etherpad.openstack.org/p/mitaka-infra-midcycle

Colleen


Re: [openstack-dev] [ironic] virtual midcycle dates

2016-01-18 Thread Jim Rollenhagen
On Mon, Jan 11, 2016 at 05:54:30PM -0800, Jim Rollenhagen wrote:
> Hi all,
> 
> Here's a list of potential dates for our midcycle; please note which you
> would be able to attend.
> 
> http://doodle.com/poll/2gnvq6eee3a6dfbk

The results appear to indicate February 16-18 works best, so we'll be
going with that. Thanks for the input everyone.

// jim

> 
> Note that since we're including people from lots of time zones, this
> will be basically running for much of the day. We'll have to coordinate
> sessions with who is interested, and maybe do dual sessions for big
> items, and consolidate thoughts into something coherent.
> 
> People may need to get up really early or stay online really late to
> attend some of these sessions. Sorry for that; that's the nature of a
> virtual thing like this. Note that you aren't required to do so (since
> I'm not your boss), but the more the merrier. :)
> 
> // jim
> 


Re: [openstack-dev] [kolla] Heka POC

2016-01-18 Thread Steven Dake (stdake)
Eric,

One request.  Please convert the Google doc at your earliest convenience
into a spec, because we require majority approval of all specs, and it
takes a while to get specs merged.  If you can get the spec in, the review
process can begin earlier rather than later :)

Thanks!
-steve
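
As a side note on the Syslog POC quoted below: a container that logs to
/dev/log simply writes datagrams to a Unix socket, which any collector
(Heka included) can bind and read. Here is a minimal sketch of that
mechanism -- illustrative only, since Heka's actual input plugins are
written in Go and Lua, and the socket path and message below are made up:

```python
# Illustrative only: a minimal /dev/log-style reader, showing what a log
# collector must do to receive syslog datagrams from containers.
# This is NOT Heka code; Heka's real inputs are Go/Lua plugins.
import os
import socket
import tempfile

path = os.path.join(tempfile.mkdtemp(), "log")  # stand-in for /dev/log
server = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
server.bind(path)

# A "container" writes a syslog-formatted datagram to the socket:
client = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
client.sendto(b"<14>myservice: hello from a container", path)

data = server.recv(4096)
print(data.decode())  # <14>myservice: hello from a container
```

If this is roughly what Heka does internally, then POC 2 mostly amounts to
pointing Heka's syslog input at the socket each container mounts.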

On 1/18/16, 8:05 AM, "Michał Jastrzębski"  wrote:

>Cool! Looks good to me.
>
>Please put it to gerrit as well once you're done (don't worry if it's
>not merge-quality, just put -workflow), I'd love to see code as well.
>
>Cheers,
>Michal
>
>On 18 January 2016 at 08:19, Eric LEMOINE  wrote:
>> Hi Kolla devs
>>
>> So I've identified the following tests/POCs:
>>
>> * Verify Heka can read logs from containers that log to stdout
>> * Verify Heka can read logs from containers that log to Syslog
>>(/dev/log)
>> * Verify Heka can write logs to local files (as currently done using
>>Rsyslog)
>>
>> If these three POCs "succeed" then it'll mean that we may be able to
>> remove Rsyslog entirely, and I'll write the specs in that sense.
>>
>> When I am done with these tests I'll report back to the mailing list,
>> and then continue working on the specs.
>>
>> Do we agree with that?
>>
>> Thanks.
>>




Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be optional on member create?

2016-01-18 Thread Stephen Balukoff
Vivek--

"Member" in this case refers to an IP address that (probably) lives on a
tenant back-end network. We can't specify just the IP address when talking
to such an IP since tenant subnets may use overlapping IP ranges (i.e., in
this case, subnet is required). In the case of the namespace driver and
Octavia, we use the subnet parameter for all members to determine which
back-end networks the load balancing software needs a port on.

I think the original use case for making subnet optional was the idea that
sometimes a tenant would like to add a "member" IP that is not part of
their tenant networks at all -- this is more than likely an IP address that
lives outside the local cloud. The assumption, then, would be that this IP
address should be reachable through standard routing from wherever the load
balancer happens to live on the network. That is to say, the load balancer
will try to get to such an IP address via its default gateway, unless it
has a more specific route.

As far as I'm aware, this use case is still valid and being asked for by
tenants. Therefore, I'm in favor of making member subnet optional.
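
To illustrate the two cases, here is a hedged sketch of building a member-create
request body with and without a subnet. The field names mirror the LBaaS v2 API
under discussion, but the helper itself and all example values are hypothetical:

```python
# Illustrative sketch only: builds an LBaaS v2 member-create request body.
# Field names mirror the API discussed in this thread; the helper and the
# example addresses/IDs are hypothetical, not a client implementation.
def member_body(address, protocol_port, subnet_id=None):
    member = {"address": address, "protocol_port": protocol_port}
    if subnet_id is not None:
        # Tenant-network member: the subnet disambiguates overlapping
        # tenant IP ranges and tells the LB which network to plug into.
        member["subnet_id"] = subnet_id
    # Without a subnet, the LB must reach the address via routing
    # (its default gateway, unless it has a more specific route).
    return {"member": member}

# Member on a tenant subnet (subnet required to disambiguate):
print(member_body("10.0.1.5", 80, subnet_id="tenant-subnet-id"))
# External member reachable through the LB's default route (subnet optional):
print(member_body("203.0.113.10", 443))
```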

Stephen

On Mon, Jan 18, 2016 at 11:14 AM, Jain, Vivek  wrote:

> If member port (IP address) is allocated by neutron, then why do we need
> to specify it explicitly? It can be derived by the LBaaS driver implicitly.
>
> Thanks,
> Vivek
>
>
>
>
>
>
> On 1/17/16, 1:05 AM, "Samuel Bercovici"  wrote:
>
> >Btw.
> >
> >I am still in favor on associating the subnets to the LB and then not
> specify them per node at all.
> >
> >-Sam.
> >
> >
> >-Original Message-
> >From: Samuel Bercovici [mailto:samu...@radware.com]
> >Sent: Sunday, January 17, 2016 10:14 AM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
> optional on member create?
> >
> >+1
> >Subnet should be mandatory
> >
> >The only downside is that it makes load balancing servers which are not
> running in the cloud more challenging to support.
> >But I do not see this as a huge user story (an LB in the cloud load
> balancing IPs outside the cloud)
> >
> >-Sam.
> >
> >-Original Message-
> >From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
> >Sent: Saturday, January 16, 2016 6:56 AM
> >To: openstack-dev@lists.openstack.org
> >Subject: [openstack-dev] [Neutron][LBaaS][Octavia] Should subnet be
> optional on member create?
> >
> >I filed a bug [1] a while ago that subnet_id should be an optional
> parameter for member creation.  Currently it is required.  Review [2]
> makes it optional.
> >
> >The original thinking was that if the load balancer is ever connected to
> that same subnet, be it by another member on that subnet or the vip on that
> subnet, then the user does not need to specify the subnet for new member if
> that new member is on one of those subnets.
> >
> >At the midcycle we discussed it and we had an informal agreement that it
> required too many assumptions on the part of the end user, neutron lbaas,
> and driver.
> >
> >If anyone wants to voice their opinion on this matter, do so on the bug
> report, review, or in response to this thread.  Otherwise, it'll probably
> be abandoned and not done at some point.
> >
> >Thanks,
> >Brandon
> >
> >[1] https://bugs.launchpad.net/neutron/+bug/1426248
> >[2] https://review.openstack.org/#/c/267935/
>



-- 
Stephen Balukoff
Principal Technologist
Blue Box, An IBM Company
www.blueboxcloud.com
sbaluk...@blueboxcloud.com
206-607-0660 x807

Re: [openstack-dev] [nova][stable] Proposal to add Tony Breeds to nova-stable-maint

2016-01-18 Thread Matt Riedemann



On 1/15/2016 5:44 AM, John Garbutt wrote:

On 14 January 2016 at 10:45, Michael Still  wrote:

I think Tony would be a valuable addition to the team.


+1


+1


+1

John


On 14 Jan 2016 7:59 AM, "Matt Riedemann"  wrote:


I'm formally proposing that the nova-stable-maint team [1] adds Tony
Breeds to the core team.

I don't have a way to track review status on stable branches, but there
are review numbers from gerrit for stable/liberty [2] and stable/kilo [3].

I know that Tony does a lot of stable branch reviews and knows the
backport policy well, and he's also helped out numerous times over the last
year or so with fixing stable branch QA / CI issues (think gate wedge
failures in stable/juno over the last 6 months). So I think Tony would be a
great addition to the team.

So for those on the team already, please reply with a +1 or -1 vote.

[1] https://review.openstack.org/#/admin/groups/540,members
[2]
https://review.openstack.org/#/q/reviewer:%22Tony+Breeds%22+branch:stable/liberty+project:openstack/nova
[3]
https://review.openstack.org/#/q/reviewer:%22Tony+Breeds%22+branch:stable/kilo+project:openstack/nova

--

Thanks,

Matt Riedemann







Tony is now part of the nova-stable-maint core team. Congrats Tony!

--

Thanks,

Matt Riedemann




[openstack-dev] [astara] Outstanding mitaka-2 bugs

2016-01-18 Thread Adam Gandelman
Hi All-

I mentioned this in todays meeting but wanted to drop a note here since
most of us are out on a US holiday...

This week is Mitaka-2 [1] and we still have quite a few outstanding bugs
that need resolution. The good news is that (I think) all of them
have had patches up for review for some time, and should all be in pretty
good shape for merging. Some are less trivial than others, the most
complex being the remaining pieces of the dynamic management addresses
work (bug #1524068); see gerrit topic [2].

I'd like to ask core reviewers to make this backlog a priority for the
first part of the week, so we can get our M2 tags pushed to all our repos
before the end of the week.  There are also a number of non-critical
patches up that we should try to clear out as well. It'd be great to get
a good burn-down of reviews/bugs going this week and dedicate the
remainder of this cycle to finishing up the feature blueprints we've been
planning for Mitaka.

Thanks!
Adam

[1] https://launchpad.net/astara/+milestone/mitaka-2
[2] https://review.openstack.org/#/q/topic:bug/1524068+and+branch:master


Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-18 Thread Kevin Benton
The MTU setting is an issue because it involves knowledge of the network
outside of OpenStack. That's why it was just a config value that was
expected to be set by an operator. This thread is working to see if we can
figure that out, or maybe at least come up with a different sub-optimal
default.

For the floating IP thing, do you need floating IPs? If not, using the
'provider networking' workflow is much simpler if you don't want tenant
virtual routers and whatnot:
http://docs.openstack.org/liberty/networking-guide/scenario_provider_lb.html
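
To make the numbers in this thread concrete, the 50-byte VXLAN overhead and
the PMTUD exchange described below can be sketched in a few lines. This is a
toy model for illustration only, not Neutron code or real ICMP handling:

```python
# Illustrative sketch: the VXLAN overhead arithmetic and a toy model of
# path MTU discovery (PMTUD) as discussed in this thread.

# Outer Ethernet + IPv4 + UDP + VXLAN headers = 50 bytes; Linux subtracts
# this from the parent interface MTU when creating a VXLAN device.
ETH, IPV4, UDP, VXLAN = 14, 20, 8, 8
VXLAN_OVERHEAD = ETH + IPV4 + UDP + VXLAN  # 50

def vxlan_mtu(parent_mtu):
    """MTU Linux assigns to a VXLAN child of an interface with parent_mtu."""
    return parent_mtu - VXLAN_OVERHEAD

def discover_path_mtu(link_mtus, initial_mtu=1500):
    """Toy PMTUD: each hop that cannot forward a DF packet 'returns' the
    next link's MTU (the ICMP fragmentation-needed message); the sender
    retries with that size until the packet fits every link."""
    mtu = initial_mtu
    while True:
        for link in link_mtus:
            if mtu > link:
                mtu = link  # sender learns this link's MTU and retries
                break
        else:
            return mtu

print(vxlan_mtu(1500))                        # 1450
print(discover_path_mtu([1500, 1450, 1500]))  # 1450
```

With a 1500-byte physical network, any path traversing the VXLAN overlay tops
out at 1450, which is exactly the disparity that the layer-3 device (the
neutron router) would need to report to senders via PMTUD.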

On Mon, Jan 18, 2016 at 4:06 PM, John Griffith 
wrote:

>
>
> On Sun, Jan 17, 2016 at 8:30 PM, Matt Kassawara 
> wrote:
>
>> Prior attempts to solve the MTU problem in neutron simply band-aid it or
>> become too complex from feature creep or edge cases that mask the primary
>> goal of a simple implementation that works for most deployments. So, I ran
>> some experiments to empirically determine the root cause of MTU problems in
>> common neutron deployments using the Linux bridge agent. I plan to perform
>> these experiments again using the Open vSwitch agent... after sufficient
>> mental recovery.
>>
>> I highly recommend reading further, but here's the TL;DR:
>>
>> Observations...
>>
>> 1) During creation of a VXLAN interface, Linux automatically subtracts
>> the VXLAN protocol overhead from the MTU of the parent interface.
>> 2) A veth pair or tap with a different MTU on each end drops packets
>> larger than the smaller MTU.
>> 3) Linux automatically adjusts the MTU of a bridge to the lowest MTU of
>> all the ports. Therefore, Linux reduces the typical bridge MTU from 1500 to
>> 1450 when neutron adds a VXLAN interface to it.
>> 4) A bridge with different MTUs on each port drops packets larger than
>> the MTU of the bridge.
>> 5) A bridge or veth pair with an IP address can participate in path MTU
>> discovery (PMTUD). However, these devices do not appear to understand
>> namespaces and originate the ICMP message from the host instead of a
>> namespace. Therefore, the message never reaches the destination...
>> typically a host outside of the deployment.
>>
>> Conclusion...
>>
>> The MTU disparity between native and overlay networks must reside in a
>> device capable of layer-3 operations that can participate in PMTUD, such as
>> the neutron router between a private/project overlay network and a
>> public/external native network.
>>
>> Some background...
>>
>> In a typical datacenter network, MTU must remain consistent within a
>> layer-2 network because fragmentation and the mechanism indicating the need
>> for it occurs at layer-3. In other words, all host interfaces and switch
>> ports on the same layer-2 network must use the same MTU. If the layer-2
>> network connects to a router, the router port must also use the same MTU. A
>> router can contain ports on multiple layer-2 networks with different MTUs
>> because it operates on those networks at layer-3. If the MTU changes
>> between ports on a router and devices on those layer-2 networks attempt to
>> communicate at layer-3, the router can perform a couple of actions. For
>> IPv4, the router can fragment the packet. However, if the packet contains
>> the "don't fragment" (DF) flag, the router can either silently drop the
>> packet or return an ICMP "fragmentation needed" message to the sender. This
>> ICMP message contains the MTU of the next layer-2 network in the route
>> between the sender and receiver. Each router in the path can return these
>> ICMP messages to the sender until it learns the maximum MTU for the entire
>> path, also known as path MTU discovery (PMTUD). IPv6 does not support
>> fragmentation in routers; only the sending host may fragment.
>>
>> The cloud provides a virtual extension of a physical network. In the
>> simplest sense, patch cables become veth pairs, switches become bridges,
>> and routers become namespaces. Therefore, MTU implementation for virtual
>> networks should mimic physical networks where MTU changes must occur within
>> a router at layer-3.
>>
>> For these experiments, my deployment contains one controller and one
>> compute node. Neutron uses the ML2 plug-in and Linux bridge agent. The
>> configuration does not contain any MTU options (e.g, path_mtu). One VM with
>> a floating IP address attaches to a VXLAN private network that routes to a
>> flat public network. The DHCP agent does not advertise MTU to the VM. My
>> lab resides on public cloud infrastructure with networks that filter
>> unknown MAC addresses such as those that neutron generates for virtual
>> network components. Let's talk about the implications and workarounds.
>>
>> The VXLAN protocol contains 50 bytes of overhead. Linux automatically
>> calculates the MTU of VXLAN devices by subtracting 50 bytes from the parent
>> device, in this case a standard Ethernet interface with a 1500 MTU.
>> However, due to the limitations of public cloud networks, I must create a
>> VXLAN tunnel between the controller