Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2015-09-09 Thread Ian Wells
Neutron already offers a DNS server (within the DHCP namespace, I think).
It forwards non-local queries to an external DNS server, but it
already serves local names for instances; we'd simply have to set one
aside, or perhaps use one in a 'root' but non-local domain
(e.g. metadata.openstack).  In fact, this improves things slightly over the
IPv4 metadata server: IPv4 metadata is usually reached via the router,
whereas with IPv6, if we have a choice of addresses, we can use a
link-local address (and any link-local address will do; it's not an
address that is 'magic' in some way, thanks to the wonder of service
advertisement).

And per previous comments about 'Amazon owns this' - the current metadata
service is a de facto standard, which Amazon initiated but is not owned by
anybody, and it's not the only standard.  If you'd like proof of the
former, I believe our metadata service offers /openstack/ URLs, unlike
Amazon (mirroring the /openstack/ files on the config drive); and on the
latter, config-drive and Amazon-style metadata are only two of quite an
assortment of data providers that cloud-init will query.  If it makes you
think of it differently, think of this as the *OpenStack* IPv6 metadata
service, and not the 'will-be-Amazon-one-day-maybe' service.


On 8 September 2015 at 17:03, Clint Byrum  wrote:

> Neutron would add a soft router that only knows the route to the metadata
> service (and any other services you want your neutron private network vms
> to be able to reach). This is not unique to the metadata service. Heat,
> Trove, etc, all want this as a feature so that one can poke holes out of
> these private networks only to the places where the cloud operator has
> services running.
>
> Excerpts from Fox, Kevin M's message of 2015-09-08 14:44:35 -0700:
> > How does that work with neutron private networks?
> >
> > Thanks,
> > Kevin
> > 
> > From: Clint Byrum [cl...@fewbar.com]
> > Sent: Tuesday, September 08, 2015 1:35 PM
> > To: openstack-dev
> > Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support
> >
> > Excerpts from Nir Yechiel's message of 2014-07-07 09:15:09 -0700:
> > > AFAIK, the cloud-init metadata service can currently be accessed only
> by sending a request to http://169.254.169.254, and no IPv6 equivalent is
> currently implemented. Is anyone working on this, or has anyone tried to
> address this before?
> > >
> >
> > I'm not sure we'd want to carry the way metadata works forward now that
> > we have had some time to think about this.
> >
> > We already have DHCPv6 and NDP. Just use one of those, and set the host's
> > name to a nonce that it can use to look up the endpoint for instance
> > differentiation via DNS SRV records. So if you were told you are
> >
> > d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com
> >
> > Then you look that up as a SRV record on your configured DNS resolver,
> > and connect to the host name returned and do something like  GET
> > /d02a684d-56ea-44bc-9eba-18d997b1d32d
> >
> > And voilà, metadata returns without any special link-local thing, and
> > it works like any other dual-stack application on the planet.
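A minimal sketch of the lookup flow described here, in Python; the nonce hostname, the SRV answers, the port, and the URL layout are all invented for illustration, not an implemented OpenStack interface:

```python
# Hypothetical client-side sketch of the DNS-SRV metadata scheme sketched
# above. Nothing here is an implemented OpenStack interface: the nonce
# hostname, the SRV answers, and the URL layout are all invented.

def pick_srv(answers):
    """Pick one SRV answer per RFC 2782: the lowest priority value wins.

    Each answer is a (priority, weight, port, target) tuple, shaped like
    what a real resolver would return for '<nonce>.region.cloud.com'.
    """
    return min(answers, key=lambda a: a[0])

def metadata_url(nonce, answers, scheme="http"):
    """Build the metadata URL from the chosen SRV answer."""
    _prio, _weight, port, target = pick_srv(answers)
    host = target.rstrip(".")  # SRV targets are absolute DNS names
    return "%s://%s:%d/%s" % (scheme, host, port, nonce)

if __name__ == "__main__":
    nonce = "d02a684d-56ea-44bc-9eba-18d997b1d32d"
    answers = [(20, 5, 8775, "md2.region.cloud.com."),
               (10, 5, 8775, "md1.region.cloud.com.")]
    print(metadata_url(nonce, answers))
    # http://md1.region.cloud.com:8775/d02a684d-56ea-44bc-9eba-18d997b1d32d
```

A real instance would obtain the answers from its configured resolver (e.g. with dnspython) instead of the hard-coded list above.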
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Port forwarding

2015-09-09 Thread Germy Lure
Hi Gal,

Good - it seems we finally understand each other.

Yes, in bulk. But I don't think that's just an enhancement to the API. The
bulk operation is the more common scenario; it is more useful and also covers
the single port-mapping scenario.

By the way, a bulk operation may apply to a subnet, a range (IP1 to IP100), or
even all the VMs behind a router. Perhaps we need to make a choice between
them; I prefer "range" because it's more flexible and easier to use.

Many thanks.
Germy

On Wed, Sep 9, 2015 at 3:30 AM, Carl Baldwin  wrote:

> On Tue, Sep 1, 2015 at 11:59 PM, Gal Sagie  wrote:
> > Hello All,
> >
> > I have searched and found many past efforts to implement port forwarding
> in
> > Neutron.
>
> I have heard people express a desire for this use case a few times in
> the past without it gaining much traction.  Your summary here seems to
> show that this continues to come up.  I would be interested in seeing
> this move forward.
>
> > I have found two incomplete blueprints [1], [2] and an abandoned patch
> [3].
> >
> > There is even a project in Stackforge [4], [5] that claims
> > to implement this, but the L3 parts in it seem older than the current
> master.
>
> I looked at this Stackforge project.  It looks like files copied out
> of neutron and modified, as an alternative to proposing a patch set to
> neutron.
>
> > I have recently come across this requirement for various use cases, one
> of
> > them is
> > providing feature compliance with Docker port-mapping feature (for
> Kuryr),
> > and saving floating
> > IP's space.
>
> I think both of these could be compelling use cases.
>
> > There have been many discussions in the past that requested this feature,
> so I
> > assume
> > there is a demand to make this formal; just a few examples: [6], [7],
> [8],
> > [9]
> >
> > The idea in a nutshell is to support port forwarding (TCP/UDP ports) on
> the
> > external router
> > leg from the public network to internal ports, so a user can use one
> Floating
> > IP (the external
> > gateway router interface IP) and reach different internal ports
> depending on
> > the port numbers.
> > This should happen on the network node (and can also be leveraged for
> > security reasons).
>
> I'm sure someone will ask how this works with DVR.  It should be
> implemented so that it works with a DVR router but it will be
> implemented in the central part of the router.  Ideally, DVR and
> legacy routers work the same in this regard and a single bit of code
> will implement it for both.  If this isn't the case, I think that is a
> problem with our current code structure.
>
> > I think that the POC implementation in the Stackforge project shows that
> > this needs to be
> > implemented inside the L3 parts of the current reference implementation,
> it
> > will be hard
> > to maintain something like that in an external repository.
> > (I also think that the API/DB extensions should be close to the current
> L3
> > reference
> > implementation)
>
> Agreed.
>
> > I would like to renew the efforts on this feature and propose a RFE and a
> > spec for this to the
> > next release, any comments/ideas/thoughts are welcome.
> > And of course if any of the people interested or any of the people that
> > worked on this before
> > want to join the effort, you are more than welcome to join and comment.
>
> I have added this to the agenda for the Neutron drivers meeting.  When
> the team starts to turn its eye toward Mitaka, we'll discuss it.
> Hopefully that will be soon, as I'm starting to think about it already.
>
> I'd like to see how the API for this will look.  I don't think we'll
> need more detail than that for now.
>
> Carl
>
> > [1]
> https://blueprints.launchpad.net/neutron/+spec/router-port-forwarding
> > [2] https://blueprints.launchpad.net/neutron/+spec/fip-portforwarding
> > [3] https://review.openstack.org/#/c/60512/
> > [4] https://github.com/stackforge/networking-portforwarding
> > [5] https://review.openstack.org/#/q/port+forwarding,n,z
> >
> > [6]
> >
> https://ask.openstack.org/en/question/75190/neutron-port-forwarding-qrouter-vms/
> > [7] http://www.gossamer-threads.com/lists/openstack/dev/34307
> > [8]
> >
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-for-router-td46639.html
> > [9]
> >
> http://openstack.10931.n7.nabble.com/Neutron-port-forwarding-from-gateway-to-internal-hosts-td32410.html
> >
> >
>
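To make the proposal concrete, here is a hedged sketch of what one port-forwarding entry and its NAT effect could look like; the request-body shape, field names, and rule form are illustrative assumptions, not a settled Neutron API:

```python
# Illustrative sketch only: neither the request-body shape nor the iptables
# rule below is a settled Neutron API or implementation.

def port_forwarding_body(fip_id, proto, ext_port, int_ip, int_port):
    """A possible request body for creating one port-forwarding entry."""
    return {"portforwarding": {
        "floatingip_id": fip_id,
        "protocol": proto,
        "external_port": ext_port,
        "internal_ip_address": int_ip,
        "internal_port": int_port,
    }}

def dnat_rule(fip_addr, proto, ext_port, int_ip, int_port):
    """The iptables DNAT rule such an entry would roughly translate to on
    the router's central (network-node) side."""
    return ("-A PREROUTING -d %s/32 -p %s -m %s --dport %d "
            "-j DNAT --to-destination %s:%d"
            % (fip_addr, proto, proto, ext_port, int_ip, int_port))

if __name__ == "__main__":
    # Expose port 22 of an internal VM as port 2222 on the gateway IP.
    print(dnat_rule("203.0.113.10", "tcp", 2222, "10.0.0.5", 22))
```

Many such entries can share one external address, which is exactly the floating-IP-space saving described above.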
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] Devstack broken - third party CI broken

2015-09-09 Thread Chris Dent

On Wed, 9 Sep 2015, Chris Dent wrote:


On Wed, 9 Sep 2015, Chris Dent wrote:


I'll push up a couple of reviews to fix this, either on the
ceilometer or devstack side and we can choose which one we prefer.


Here's the devstack fix: https://review.openstack.org/#/c/221634/


This is breaking ceilometer in the gate too, not just third party
CI.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] Devstack broken - third party CI broken

2015-09-09 Thread Chris Dent

On Wed, 9 Sep 2015, Eduard Matei wrote:


2015-09-08 21:56:33.585 | Error: Service ceilometer-acentral is not running
2015-09-08 21:56:33.585 | + for service in '$failures'
2015-09-08 21:56:33.586 | ++ basename
/opt/stack/status/stack/ceilometer-acompute.failure
2015-09-08 21:56:33.587 | + service=ceilometer-acompute.failure
2015-09-08 21:56:33.587 | + service=ceilometer-acompute
2015-09-08 21:56:33.587 | + echo 'Error: Service ceilometer-acompute is not
running'
2015-09-08 21:56:33.587 | Error: Service ceilometer-acompute is not running
2015-09-08 21:56:33.587 | + '[' -n
'/opt/stack/status/stack/ceilometer-acentral.failure
2015-09-08 21:56:33.587 |
/opt/stack/status/stack/ceilometer-acompute.failure' ']'
2015-09-08 21:56:33.587 | + die 1467 'More details about the above errors
can be found with screen, with ./rejoin-stack.sh'
2015-09-08 21:56:33.587 | + local exitcode=0


This is because of a recent commit[1] on ceilometer that changed the
names of some of the agents. While the devstack plugin in ceilometer
is updated to reflect these changes, the ceilometer code that is
still in devstack itself is not. The removal of ceilometer from
devstack itself[2] is pending some infra updates[3].

I'll push up a couple of reviews to fix this, either on the
ceilometer or devstack side and we can choose which one we prefer.

[1] https://review.openstack.org/#/c/212498/
[2] https://review.openstack.org/#/c/196383/
[3] https://review.openstack.org/#/q/topic:bug/1489436,n,z

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] Devstack broken - third party CI broken

2015-09-09 Thread Chris Dent


You can work around the problem by replacing lines like:

enable_service ceilometer-acompute ceilometer-acentral 
ceilometer-anotification ceilometer-collector ceilometer-api

with:

enable_plugin ceilometer git://git.openstack.org/openstack/ceilometer

in your local.conf

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] What is the no_device flag for in block device mapping?

2015-09-09 Thread Daniel P. Berrange
On Tue, Sep 08, 2015 at 05:32:29PM +, Murray, Paul (HP Cloud) wrote:
> Hi All,
> 
> I'm wondering what the "no_device" flag is used for in the block device
> mappings. I had a dig around in the code but couldn't figure out why it
> is there. The name suggests an obvious meaning, but I've learnt not to
> guess too much from names.
> 
> Any pointers welcome.

I was going to suggest reading the docs

  http://docs.openstack.org/developer/nova/block_device_mapping.html

but they don't mention 'no_device' at all :-(

When we find out what it actually means we should document it there :-)

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-dev] Devstack broken - third party CI broken

2015-09-09 Thread Eduard Matei
Hi,

Our Jenkins CI has been failing consistently since last night during the
devstack install:


2015-09-08 21:56:33.585 | Error: Service ceilometer-acentral is not running
2015-09-08 21:56:33.585 | + for service in '$failures'
2015-09-08 21:56:33.586 | ++ basename
/opt/stack/status/stack/ceilometer-acompute.failure
2015-09-08 21:56:33.587 | + service=ceilometer-acompute.failure
2015-09-08 21:56:33.587 | + service=ceilometer-acompute
2015-09-08 21:56:33.587 | + echo 'Error: Service ceilometer-acompute is not
running'
2015-09-08 21:56:33.587 | Error: Service ceilometer-acompute is not running
2015-09-08 21:56:33.587 | + '[' -n
'/opt/stack/status/stack/ceilometer-acentral.failure
2015-09-08 21:56:33.587 |
/opt/stack/status/stack/ceilometer-acompute.failure' ']'
2015-09-08 21:56:33.587 | + die 1467 'More details about the above errors
can be found with screen, with ./rejoin-stack.sh'
2015-09-08 21:56:33.587 | + local exitcode=0


Screen logs for ceilometer show:

stack@d-p-c-local-01-2592:~/devstack$ /usr/local/bin/ceilometer-agent-central
--config-file /etc/ceilometer/ceilometer.conf & echo $!
>/opt/stack/status/stack/ceilometer-acentral.pid; fg || echo
"ceilometer-acentral failed to start" | tee
"/opt/stack/status/stack/ceilometer-acentral.failure"
[1] 3837
/usr/local/bin/ceilometer-agent-central --config-file
/etc/ceilometer/ceilometer.conf
bash: /usr/local/bin/ceilometer-agent-central: No such file or directory
ceilometer-acentral failed to start
stack@d-p-c-local-01-2592:~/devstack$

Anyone any idea how to fix this?

Thanks,
-- 

*Eduard Biceri Matei*
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Fail to get ipv4 address from dhcp

2015-09-09 Thread Huan Xie
Hi Zhi,

Thanks very much for your help☺
Even turning off "ARP spoofing" did not help.
But now I have found the cause:
the OVS agent plugin loops to check OVS status and port status,
but in my case, during the loop it fails to detect that a new port was added,
so it fails to add a tag for this port, and thus all packets are dropped.
Therefore, the DHCP request cannot reach the DHCP agent, and the VM never
gets an IP.

My current workaround is to set a configuration item in the compute node's
ml2_conf.ini:
[agent]
minimize_polling = False


Although this works, I'm still wondering why the newly added port cannot be
detected when "minimize_polling = True".
It seems the ovsdb monitor cannot detect it.
Do you have any suggestions for this part?

BR//Huan

From: zhi [mailto:changzhi1...@gmail.com]
Sent: Monday, September 07, 2015 2:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Fail to get ipv4 address from dhcp

Hi, if you turn off the "ARP spoofing" flag and restart the q-agt service,
can the VM get an IP successfully?

2015-09-06 17:03 GMT+08:00 Huan Xie:

Hi all,

I'm trying to deploy an OpenStack environment using DevStack with the latest
master code.
I use XenServer + Neutron, with the ML2 plugin and VLAN type networks.

The problem I met is that the instances cannot actually get an IP address (I
use DHCP), although we can see the VM's IP in Horizon.
I ran tcpdump on both the VM side and the DHCP server side; I can see the
DHCP request packet on the VM side but no request packet on the DHCP server
side.
But after I reboot the q-agt, the VM can get an IP successfully.
Checking the difference before and after the q-agt restart, all I could see
were the flow rules for ARP spoofing.

This is the q-agt's br-int port; these are dom0's flow rules, and the
unindented rules below appear to be the newly added ones:

NXST_FLOW reply (xid=0x4):
   cookie=0x824d13a352a4e216, duration=163244.088s, table=0, 
n_packets=93, n_bytes=18140, idle_age=4998, hard_age=65534, priority=0 
actions=NORMAL
cookie=0x824d13a352a4e216, duration=163215.062s, table=0, n_packets=7, 
n_bytes=294, idle_age=33540, hard_age=65534, priority=10,arp,in_port=5 
actions=resubmit(,24)
   cookie=0x824d13a352a4e216, duration=163230.050s, table=0, 
n_packets=25179, n_bytes=2839586, idle_age=5, hard_age=65534, 
priority=3,in_port=2,dl_vlan=1023 actions=mod_vlan_vid:1,NORMAL
   cookie=0x824d13a352a4e216, duration=163236.775s, table=0, 
n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=2 
actions=drop
   cookie=0x824d13a352a4e216, duration=163243.516s, table=23, 
n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
   cookie=0x824d13a352a4e216, duration=163242.953s, table=24, 
n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x824d13a352a4e216, duration=163215.636s, table=24, n_packets=7, 
n_bytes=294, idle_age=33540, hard_age=65534, 
priority=2,arp,in_port=5,arp_spa=10.0.0.6 actions=NORMAL

I cannot see any other changes after rebooting the q-agt, and these rules
seem to be only for ARP spoofing; yet after the restart the instance can get
an IP from DHCP.
I also googled for this problem but failed to find a solution.
Has anyone met this problem before, or does anyone have suggestions on how
to debug it?

Thanks a lot

BR//Huan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] FFE request for nfs-as-a-data-source

2015-09-09 Thread Chen, Weiting
Hi, all.

I would like to request an FFE for nfs-as-a-data-source for Sahara.
Originally this bp should also have included a dashboard change for creating
nfs as a data source; I will register that as a separate bp and implement it
in the next release.
However, the patches below already put the nfs driver into
sahara-image-elements and enable it in the cluster.
This way, users can use the nfs protocol via the command line in the Liberty
release.

Blueprint:
https://blueprints.launchpad.net/sahara/+spec/nfs-as-a-data-source

Spec:
https://review.openstack.org/#/c/210839/

Patch:
https://review.openstack.org/#/c/218637/
https://review.openstack.org/#/c/218638/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]Weekly Team Meeting 2015.09.09

2015-09-09 Thread Zhipeng Huang
Hi Team,

Let's resume our weekly meeting today. As Eran suggested before, we will
mainly discuss the work we have now, and leave the design session for
another time slot :) See you at 1300 UTC today.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] Devstack broken - third party CI broken

2015-09-09 Thread Jordan Pittier
Also, since I believe your CI is for Cinder, I recommend that you disable all
unneeded services (look at how DEVSTACK_LOCAL_CONFIG is used in
devstack-gate to add the proper disable_service lines).
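For illustration, a Cinder-focused CI could drop the ceilometer services with a local.conf fragment like the one below; the exact list of services to disable depends on what your job actually enables:

```shell
# local.conf fragment (illustrative): disable services a Cinder CI
# does not need, so a ceilometer breakage cannot fail the job.
disable_service ceilometer-acompute ceilometer-acentral
disable_service ceilometer-anotification ceilometer-collector ceilometer-api
disable_service horizon
```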

On Wed, Sep 9, 2015 at 11:00 AM, Chris Dent  wrote:

> On Wed, 9 Sep 2015, Chris Dent wrote:
>
> On Wed, 9 Sep 2015, Chris Dent wrote:
>>
>> I'll push up a couple of reviews to fix this, either on the
>>> ceilometer or devstack side and we can choose which one we prefer.
>>>
>>
>> Here's the devstack fix: https://review.openstack.org/#/c/221634/
>>
>
> This is breaking ceilometer in the gate too, not just third party
> CI.
>
>
> --
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2] ML2 late/early-cycle sprint announcement

2015-09-09 Thread Sukhdev Kapur
Folks,

We are planning an ML2 coding sprint on October 6 through 8, 2015.
Some are calling it the Liberty late-cycle sprint; others are calling it the
Mitaka early-cycle sprint.

The ML2 team has been discussing the issues related to synchronization of the
Neutron DB resources with the back-end drivers. Several issues have been
reported when multiple ML2 drivers are deployed in scaled HA deployments.
The issues surface when either side (Neutron or the back-end HW/drivers)
restarts and the resource views get out of sync. There is no mechanism in
Neutron or the ML2 plugin that ensures synchronization of state
between the front end and the back end. The drivers either end up implementing
their own solutions or they dump the issue on the operators to intervene
and correct it manually.

We plan on utilizing TaskFlow to implement a framework in the ML2 plugin
that ML2 drivers can leverage to achieve synchronization in a
simplified manner.
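The core of such a framework is computing the difference between the two resource views and replaying it; a minimal, driver-agnostic sketch (names and shapes are assumptions, not ML2 code):

```python
# Minimal sketch of the resynchronization idea described above: compare the
# resource view in the Neutron DB with what a back-end driver reports, and
# derive the operations needed to converge them. In the planned framework
# each step would be a retryable TaskFlow task; this plain function is only
# an illustration, not ML2 code.

def resync_plan(neutron_ports, backend_ports):
    """Return (to_create, to_delete) so the back end matches Neutron."""
    wanted = set(neutron_ports)
    actual = set(backend_ports)
    return sorted(wanted - actual), sorted(actual - wanted)

if __name__ == "__main__":
    # After a restart the back end lost port "p2" and kept a stale "p9".
    create, delete = resync_plan({"p1", "p2", "p3"}, {"p1", "p3", "p9"})
    print(create, delete)  # ['p2'] ['p9']
```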

There are a couple of additional items on the sprint agenda, listed
on the etherpad [1]. The details of venue and schedule are listed on the
etherpad as well. The sprint is hosted by Yahoo Inc.
Anyone interested in the topics listed on the etherpad is welcome to
sign up for the sprint and join us in making this a reality.

Additionally, we will utilize this sprint to formalize the design
proposal(s) for the fish bowl session at Tokyo summit [2]

Any questions/clarifications, please join us in our weekly ML2 meeting on
Wednesday at 1600 UTC (9AM pacific time) at #openstack-meeting-alt

Thanks
-Sukhdev

[1] - https://etherpad.openstack.org/p/Neutron_ML2_Mid-Cycle_Sprint
[2] - https://etherpad.openstack.org/p/neutron-mitaka-designsummit
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] Devstack broken - third party CI broken

2015-09-09 Thread Chris Dent

On Wed, 9 Sep 2015, Chris Dent wrote:


I'll push up a couple of reviews to fix this, either on the
ceilometer or devstack side and we can choose which one we prefer.


Here's the devstack fix: https://review.openstack.org/#/c/221634/

In discussion with other ceilometer cores we decided this was more
effective than reverting the ceilometer change.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API v2.1 reference documentation

2015-09-09 Thread John Garbutt
On 9 September 2015 at 03:43, Anne Gentle  wrote:
>
>
> On Tue, Sep 8, 2015 at 8:41 PM, Ken'ichi Ohmichi 
> wrote:
>>
>> Hi Melanie,
>>
>> 2015-09-09 8:00 GMT+09:00 melanie witt :
>> > Hi All,
>> >
>> > With usage of v2.1 picking up (devstack) I find myself going to the API
>> > ref documentation [1] often and find it lacking compared with the similar 
>> > v2
>> > doc [2]. I refer to this doc whenever I see a novaclient bug where 
>> > something
>> > broke with v2.1 and I'm trying to find out what the valid request 
>> > parameters
>> > are, etc.
>> >
>> > The main thing I notice is in the v2.1 docs, there isn't any request
>> > parameter list with descriptions like there is in v2. And I notice "create
>> > server" documentation doesn't seem to exist -- there is "Create multiple
>> > servers" but it doesn't provide much insight about what the many request
>> > parameters are.
>> >
>> > I assume the docs are generated from the code somehow, so I'm wondering
>> > how we can get this doc improved? Any pointers would be appreciated.
>
>
> They are manual, and Alex made a list of how far behind the v2.1 docs are
> in a doc bug here:
>
> https://bugs.launchpad.net/openstack-api-site/+bug/1488144
>
> It's great to see Atsushi Sakai working hard on those, join him in the
> patching.
>
> We're still patching WADL for this release with the hope of adding Swagger
> for many services by October 15th -- however the WADL to Swagger tool we
> have now migrates WADL.

Mel, thanks for raising this one, it's super important.

As I understand it from the API meeting I have dropped in on, some of
this work is being tracked in here:
https://etherpad.openstack.org/p/nova-v2.1-api-doc

At the summit we agreed the focus was docs and getting v2.1 on by default.

That hasn't quite happened, but now is a good time to try and catch up
on the API docs. All help really appreciated there.

The recent issues with Horizon not working after upgrading
python-novaclient seem to be highlighting the need for docs on how to
use our client with the microversions too.

Thanks,
John

>
>> >
>> > Thanks,
>> > -melanie (irc: melwitt)
>> >
>> >
>> > [1] http://developer.openstack.org/api-ref-compute-v2.1.html
>> > [2] http://developer.openstack.org/api-ref-compute-v2.html
>>
>> Nice point.
>> The "create server" API is the most important one and needs to be
>> described in the document anyway.
>>
>> In the short term, we need to describe it by hand from the code, and we
>> can find the available parameters in the JSON-Schema code.
>> The base parameters can be gotten from
>>
>> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/servers.py#L18
>> In addition, there are extensions which add more parameters, and we can
>> learn about those from
>>
>> https://github.com/openstack/nova/tree/master/nova/api/openstack/compute/schemas
>> If module files contain the dict *server_create*, they are also API
>> parameters.
>> For example, the keypairs extension adds the "key_name" parameter, and we
>> can see that in
>> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/schemas/keypairs.py
>>
>> In the long term, it would be great to generate this API parameter
>> documentation from JSON-Schema directly.
>> JSON-Schema supports a "description" keyword, and we can describe the
>> meaning of each parameter there.
>> But that is the long-term approach; we need to write them by hand for now.
>>
>> Thanks
>> Ken Ohmichi
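As a small illustration of the "description" keyword Ken'ichi mentions, and of how parameter docs could be pulled out of a schema mechanically; the schema fragment below is invented for illustration and is not Nova's actual servers.py content:

```python
# Invented example (not Nova's actual schema code): JSON-Schema's
# "description" keyword lets each parameter carry the prose that the API
# reference could one day be generated from.

server_create = {
    "type": "object",
    "properties": {
        "server": {
            "type": "object",
            "properties": {
                "name": {
                    "type": "string",
                    "description": "Display name of the server.",
                },
                "key_name": {
                    "type": "string",
                    "description": "Keypair to inject (added by the "
                                   "keypairs extension).",
                },
            },
            "required": ["name"],
        },
    },
    "required": ["server"],
}

def parameter_docs(schema, prefix=""):
    """Walk a schema and yield (parameter path, description) pairs."""
    for name, sub in schema.get("properties", {}).items():
        if "description" in sub:
            yield prefix + name, sub["description"]
        yield from parameter_docs(sub, prefix + name + ".")

if __name__ == "__main__":
    for path, doc in parameter_docs(server_create):
        print(path, "-", doc)
```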

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Concern about XStatic-bootswatch imports from fonts.googleapis.com

2015-09-09 Thread Matthias Runge
On 03/09/15 21:02, Matthias Runge wrote:
> On 03/09/15 13:24, Thomas Goirand wrote:
>> Hi,
>>
>> When doing:
>> grep -r fonts.googleapis.com *
>>
>> there's 56 lines of this kind of result:
>> xstatic/pkg/bootswatch/data/cyborg/bootstrap.css:@import
>> url("https://fonts.googleapis.com/css?family=Roboto:400,700;);
>>
>> This is wrong because:
I'd like to raise an issue with the Roboto fontface.


The xstatic package points to
https://github.com/choffmeister/roboto-fontface-bower/tree/develop/fonts

It's unclear where those files come from. The Roboto font apparently
comes from Google:
https://www.google.com/fonts/specimen/Roboto

Unfortunately it's not clear where the .eot, .woff, and .svg files come
from, or how to recreate them from Google's published .ttf files.

On the other hand, Google's repository has no tags or releases at all,
which makes it hard to detect a newer release (there is no release).




Why do we care (about this)?

Packaging a software package can be compared to building a car in the
middle of nowhere, where you have only a plan, some steel, and a few
basic tools, but no access to a postal service, no flying in special
tools, etc.
According to the plan, you build the car. If you need a tool, you
build that tool, too.

Matthias

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Releasing tripleo-common on PyPI

2015-09-09 Thread Dougal Matthews
Hi,

The tripleo-common library appears to be registered on PyPI but hasn't yet
had a release [1]. I am not familiar with the release process - what do we
need to do to make sure it is regularly released along with the other
TripleO packages?

We will also want to do something similar for the new python-tripleoclient,
which doesn't seem to be registered on PyPI at all yet.

Thanks,
Dougal

[1]: https://pypi.python.org/pypi/tripleo-common

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Feature Freeze Exception: shelving commands

2015-09-09 Thread ZZelle
Hi,

I would like to propose the horizon-shelving-command [1][2] feature for a
feature freeze exception.

This is a small feature based on existing pause/suspend command
implementations.


[1] https://blueprints.launchpad.net/horizon/+spec/horizon-shelving-command
[2] https://review.openstack.org/220838

Cedric/ZZelle@IRC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] attention requirements-cores, please look out for constraints updates

2015-09-09 Thread Alan Pevec
> I'd like to add in a lower-constraints.txt set of pins and actually
> start reporting on whether our lower bounds *work*.

Do you have a spec in progress for lower-constraints.txt?
It should help catch issues like https://review.openstack.org/221267
There are also many entries in global-requirements without a minimum
version set, although they should have one:
http://git.openstack.org/cgit/openstack/requirements/tree/README.rst#n226

Cheers,
Alan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-09 Thread Andrey Danin
I disagree from the development point of view. Now I just change manifests
on the Fuel node and redeploy the cluster to apply those changes. With your
proposal I'd need to build a new package and add it to a repo every time I
change something.

On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> Currently, we install fuel-libraryX.Y package(s) on the master node and
> then right before starting actual deployment we rsync [1] puppet modules
> (one of installed versions) from the master node to slave nodes. Such a
> flow makes things much more complicated than they could be if we installed
> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
> parameterized by repo urls (upstream + mos) and this pre-deployment task
> could be nothing more than just installing fuel-library package from mos
> repo defined for a cluster. We would not have several versions of
> fuel-library on the master node, we would not need that complicated upgrade
> stuff like we currently have for puppet modules.
>
> Please give your opinions on this.
>
>
> [1]
> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>
> Vladimir Kozhukalov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
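As a rough illustration of Vladimir's proposal, the pre-deployment step could shrink from an rsync of puppet modules to serializing a single package-install task. The sketch below is hypothetical — it is not actual Nailgun serializer code, and the task format is simplified:

```python
def make_predeploy_task(node_ids, package='fuel-library'):
    """Serialize a pre-deployment task that installs the puppet modules
    as a distro package from the repos already configured for the
    cluster, instead of rsyncing /etc/puppet from the master node."""
    return {
        'type': 'shell',
        'uids': list(node_ids),
        'parameters': {
            'cmd': 'apt-get update && apt-get install -y {}'.format(package),
            'timeout': 300,
        },
    }

task = make_predeploy_task(['1', '2'])
print(task['parameters']['cmd'])
# apt-get update && apt-get install -y fuel-library
```

Because the package version comes from the repo defined for the cluster, the master node no longer needs to keep multiple fuel-library versions around, which is the upgrade simplification Vladimir describes.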


Re: [openstack-dev] Should v2 compatibility mode (v2.0 on v2.1) fixes be applicable for v2.1 too?

2015-09-09 Thread Sean Dague
On 09/08/2015 08:15 PM, Ken'ichi Ohmichi wrote:
> 2015-09-08 19:45 GMT+09:00 Sean Dague :
>> On 09/06/2015 11:15 PM, GHANSHYAM MANN wrote:
>>> Hi All,
>>> 
>>> As we all know, the api-paste.ini default setting for /v2 was
>>> changed to run those requests on v2.1 (v2.0 on v2.1), which is a really
>>> great thing for easy code maintenance in future (removal of the v2 code).
>>> 
>>> To keep "v2.0 on v2.1" fully compatible with "v2.0 on v2.0", some
>>> bugs were found[1] and fixed. But I think we should fix those
>>> only for v2 compatible mode not for v2.1.
>>> 
>>> For example bug#1491325, 'device' on volume attachment Request
>>> is optional param[2] (which does not mean 'null-able' is allowed)
>>> and v2.1 used to detect and error on usage of 'device' as "None".
>>> But as it was used as 'None' by many /v2 users and not to break
>>> those, we should allow 'None' on v2 compatible mode also. But we
>>> should not allow the same for v2.1.
>>> 
>>> IMO v2.1 strong input validation feature (which helps to make
>>> API usage in correct manner) should not be changed, and for v2
>>> compatible mode we should have another solution without affecting
>>> v2.1 behavior may be having different schema for v2 compatible
>>> mode and do the necessary fixes there.
>>> 
>>> Trying to know other's opinion on this or something I missed
>>> during any discussion.
>>> 
>>> [1]: https://bugs.launchpad.net/python-novaclient/+bug/1491325 
>>> https://bugs.launchpad.net/nova/+bug/1491511
>>> 
>>> [2]:
>>> http://developer.openstack.org/api-ref-compute-v2.1.html#attachVolume
>>
>>
>>
>> A lot of these issues need to be a case-by-case determination.
>>
>> In this particular case, we had the documentation, the nova code,
>> the clients, and the future.
>> 
>> The documentation: device is optional. That means it should be a
>> string or not there at all. The schema was set to enforce this on
>> v2.1.
>> 
>> The nova code: device = None was accepted previously, because
>> device is a mandatory parameter all the way down the call stack. 2
>> layers in we default it to None if it wasn't specified.
>> 
>> The clients: both python-novaclient and ruby fog sent device=None
>> in the common case. While only 2 data points, this does demonstrate
>> this is more wide spread than just our buggy code.
>> 
>> The future: it turns out we really can't honor this parameter in
>> most cases anyway, and passing it just means causing bugs. This is
>> an artifact of the EC2 API that only works on specific (and
>> possibly forked) versions of Xen that Amazon runs. Most hypervisor
>> / guest relationships don't allow this to be set. The long term
>> direction is going to be removing it from our API.
>> 
>> Given that it seemed fine to relax this across all API. We screwed
>> up and didn't test this case correctly, and long term we're going
>> to dump it. So we don't want to honor 3 different versions of this
>> API, especially as no one seems written to work against the
>> documentation, but were written against the code in question. If
>> they write to the docs, they'll be fine. But the clients that are
>> out in the wild will be fine as well.
> 
> I think the case by case determination is fine, but current change 
> progress of relaxing validation seems wrong. In Kilo, we required
> nova-specs for relaxing v2.1 API validation like 
> https://review.openstack.org/#/c/126696/ and we had more than enough
> discussion and we built a consensus about that. But we merged the
> above patch in just 2 working days without any nova-spec even if we
> didn't have a consensus about that v2.1 validation change requires
> microversion bump or not.
> 
> If we really need to relax validation thing for v2.0 compatible API, 
> please consider separating v2.0 API schema from v2.1 API schema. I
> have one idea about that like
> https://review.openstack.org/#/c/221129/
> 
> We worked for strict and consistent validation on the v2.1 API for over
> 2 years, and I don't want to loosen it without enough thought.

There also was no field data about what it broke. The strict schemas
were based on an assumed understanding of how people were interacting
with the API. But that wasn't tested until we merged a change to make
everyone in OpenStack use it.

And we found a couple of bugs in our assumptions. Those issues were
blocking other parts of the OpenStack ecosystem from merging any code.
Which is a pretty big deal. We did a lot of thinking about this one. It
also went to a Nova meeting and got discussed there.

I'm also in favor of the v2.0 schema patch, I just +2ed it. That doesn't
mean that we don't address real issues that will inhibit adoption. The
promise of v2.1 is that it was going to be the same surface as v2.0
except that stuff no one ever should have sent, or would send, would be
rejected on the surface. Which is a win from code complexity and
security. But in this particular case the schema made the wrong call
about how people were actually using this API. So we fixed that.
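For readers following along, the relaxation being discussed amounts to widening the schema type for 'device' from string-only to string-or-null. The sketch below is a minimal, dependency-free illustration of that difference — not Nova's actual jsonschema-based validation code:

```python
# Simplified stand-ins for the request body schemas (illustrative only).
STRICT = {'device': {'type': ['string']}}           # original v2.1 schema
RELAXED = {'device': {'type': ['string', 'null']}}  # after the relaxation

TYPE_MAP = {'string': str, 'null': type(None)}

def validate(body, schema):
    """Minimal type check in the spirit of JSON Schema validation."""
    for key, rule in schema.items():
        if key in body:
            allowed = tuple(TYPE_MAP[t] for t in rule['type'])
            if not isinstance(body[key], allowed):
                return False
    return True

request = {'device': None}  # what python-novaclient and ruby fog actually sent
print(validate(request, STRICT))   # False: rejected by the strict schema
print(validate(request, RELAXED))  # True: accepted after the relaxation
```

Omitting 'device' entirely passes both schemas, which is why clients written against the documentation were never affected.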

Being 

Re: [openstack-dev] [Fuel][Plugins] request for update of fuel-plugin-builder on pypi

2015-09-09 Thread Sergii Golovatiuk
+1 to Simon

Also, the structure of fuel-plugin-builder should be refactored to follow
community standards: everything in the 'fuel_plugin_builder' directory should
be moved to the top of the repository.

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Wed, Sep 9, 2015 at 12:25 PM, Simon Pasquier 
wrote:

> Hi,
> It would be cool if fuel-plugin-builder (fpb) v3.0.0 could be released on
> pypi. We've moved some of the LMA plugins to use the v3 format.
> Right now we have to install fpb from source which is hard to automate in
> our tests unfortunately (as already noted by Sergii [1]).
> BR,
> Simon
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070781.html
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [murano][merlin] murano APIv2 and murano future ui

2015-09-09 Thread Ekaterina Chernova
Hi Nikolay!

Thanks for starting this activity! This is a really hot topic.
We also used to have a plan to migrate our API to Pecan [1].
This can also be discussed.

Do we have a blueprint for that? Could you please file one and attach
the etherpads to the new blueprint?

[1] -
https://blueprints.launchpad.net/murano/+spec/murano-api-server-pecan-wsme

Thanks,
Kate.

On Wed, Sep 9, 2015 at 2:07 PM, Nikolay Starodubtsev <
nstarodubt...@mirantis.com> wrote:

> Hi all,
> Yesterday on IRC weekly meeting murano team decided to start collecting
> ideas about murano APIv2 and murano future ui. We have two etherpads for
> this purpose:
> 1) https://etherpad.openstack.org/p/murano-APIv2 - for murano API v2 ideas
> 2) https://etherpad.openstack.org/p/murano-future-ui-(Merlin) - for
> future ui ideas
>
> Feel free to write your ideas. If you have any questions you can reach me
> in IRC.
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [murano][merlin] murano APIv2 and murano future ui

2015-09-09 Thread Stéphane Bisinger
Please note that some projects already using pecan+WSME are actually
thinking about finding something else, since WSME doesn't have much
activity and has its fair share of issues. If you missed it, check out this
conversation thread:
http://lists.openstack.org/pipermail/openstack-dev/2015-August/073156.html

On Wed, Sep 9, 2015 at 1:50 PM, Nikolay Starodubtsev <
nstarodubt...@mirantis.com> wrote:

> Kate,
> This bp is pretty old, but I think it suits our needs [1]. Yeah, I'll
> attach etherpads to it.
> The idea about pecan/wsme is really useful.
> [1]: https://blueprints.launchpad.net/murano/+spec/api-vnext
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> 2015-09-09 14:42 GMT+03:00 Ekaterina Chernova :
>
>> Hi Nikolay!
>>
>> Thanks for starting this activity! This is a really hot topic.
>> We also used to have plan to migrate our API to pecan. [1]
>> This also can be discussed.
>>
>> Do we have a blueprint for that? Could you please file it and attach
>> etherpad to a new blueprint.
>>
>> [1] -
>> https://blueprints.launchpad.net/murano/+spec/murano-api-server-pecan-wsme
>>
>> Thanks,
>> Kate.
>>
>> On Wed, Sep 9, 2015 at 2:07 PM, Nikolay Starodubtsev <
>> nstarodubt...@mirantis.com> wrote:
>>
>>> Hi all,
>>> Yesterday on IRC weekly meeting murano team decided to start collecting
>>> ideas about murano APIv2 and murano future ui. We have two etherpads for
>>> this purpose:
>>> 1) https://etherpad.openstack.org/p/murano-APIv2 - for murano API v2
>>> ideas
>>> 2) https://etherpad.openstack.org/p/murano-future-ui-(Merlin) - for
>>> future ui ideas
>>>
>>> Feel free to write your ideas. If you have any questions you can reach
>>> me in IRC.
>>>
>>>
>>>
>>> Nikolay Starodubtsev
>>>
>>> Software Engineer
>>>
>>> Mirantis Inc.
>>>
>>>
>>> Skype: dark_harlequine1
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Stéphane


[openstack-dev] [murano][merlin] murano APIv2 and murano future ui

2015-09-09 Thread Nikolay Starodubtsev
Hi all,
Yesterday on IRC weekly meeting murano team decided to start collecting
ideas about murano APIv2 and murano future ui. We have two etherpads for
this purpose:
1) https://etherpad.openstack.org/p/murano-APIv2 - for murano API v2 ideas
2) https://etherpad.openstack.org/p/murano-future-ui-(Merlin) - for future
ui ideas

Feel free to write your ideas. If you have any questions you can reach me
in IRC.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1


Re: [openstack-dev] [murano] Proposing Nikolai Starodubtsev for core

2015-09-09 Thread Filip Blaha

+1

On 09/08/2015 02:28 PM, Stan Lagun wrote:

+1

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis


On Tue, Sep 1, 2015 at 3:03 PM, Alexander Tivelkov wrote:

+1. Well deserved.

--
Regards,
Alexander Tivelkov

On Tue, Sep 1, 2015 at 2:47 PM, Victor Ryzhenkin wrote:

+1 from me ;)

-- 
Victor Ryzhenkin
Junior QA Engineer
freerunner on #freenode

On 1 September 2015 at 12:18:19, Ekaterina Chernova
(efedor...@mirantis.com) wrote:

+1

On Tue, Sep 1, 2015 at 10:03 AM, Dmitro Dovbii wrote:

+1

2015-09-01 2:24 GMT+03:00 Serg Melikyan:

+1

On Mon, Aug 31, 2015 at 3:45 PM, Kirill Zaitsev wrote:

I’m pleased to nominate Nikolai for Murano core.

He’s been actively participating in the development of Murano during
Liberty and is among the top 5 contributors during the last 90 days.
He’s also leading the CloudFoundry integration initiative.

Here are some useful links:

Overall contribution:
http://stackalytics.com/?user_id=starodubcevna
List of reviews:
https://review.openstack.org/#/q/reviewer:%22Nikolay+Starodubtsev%22,n,z
Murano contribution during the latest 90 days:
http://stackalytics.com/report/contribution/murano/90

Please vote with +1/-1 for approval/objections.

-- 
Kirill Zaitsev

Murano team
Software Engineer
Mirantis, Inc


--
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com
+7 (495) 640-4904, 0261
+7 (903) 156-0836

[openstack-dev] [nova] [all] Updated String Freeze Guidelines

2015-09-09 Thread John Garbutt
Hi,

I have had quite a few comments from folks about the string freeze
being too strict.

It was noted that:
* users will prefer an untranslated log, over a silent failure
* translators don't want existing strings changing while they are
translating them
* translators' tooling can cope OK with moved strings and new strings

After yesterday's cross project meeting, and hanging out in
#openstack-i18n I have come up with these updates to the String Freeze
Guidelines:
https://wiki.openstack.org/wiki/StringFreeze

Basically, we have a Soft String Freeze from Feature Freeze until RC1:
* Translators work through all existing strings during this time
* So avoid changing existing translatable strings
* Additional strings are generally OK

Then post RC1, we have a Hard String Freeze:
* No new strings, and no string changes
* Exceptions need discussion

Then at least 10 working days after RC1:
* we need a new RC candidate to include any updated strings
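As background on why changing an existing string is costly for translators: gettext-style catalogs key translations on the exact source string, so an edited string silently falls back to the untranslated original until translators catch up. A minimal sketch with an in-memory catalog (the strings here are made up for illustration):

```python
# A translation catalog maps the *exact* source string (msgid) to its
# translation; editing the string in the code invalidates the entry.
catalog = {'Instance %s not found.': 'Instanz %s nicht gefunden.'}

def translate(msgid):
    """Return the translation, falling back to the source string."""
    return catalog.get(msgid, msgid)

print(translate('Instance %s not found.'))      # existing string: translated
print(translate('Instance %s was not found.'))  # edited string: English fallback
```

This is also why an untranslated log line is preferable to a silent failure: the fallback still carries the information, it just isn't localized yet.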

Is everyone happy with these changes?

Thanks,
John



Re: [openstack-dev] [aodh][ceilometer] (re)introducing Aodh - OpenStack Alarming

2015-09-09 Thread Qiming Teng
Hi, Gord,

Good to know there will be a team dedicated to this alarming service.
After reading your email, I still feel a need for some clarifications.

- According to [1], Aodh will be released as a standalone service,
  am I understanding this correctly?

- What is the official name for this new service when it stands on its
  own feet: "Telemetry Alarming" or just "Alarming" or something else?
  We will need a name for this to support in OpenStack SDK.

- Will there be a need to create endpoints for Aodh in keystone? Or is
  it just a 'library' for ceilometer, sharing the same API endpoint and
  the same client as ceilometer?

- The original email mentioned that "the client can still be used and
  redirect to Aodh". Could you please clarify where the redirection will
  happen? Is it a client-side redirection or a ceilometer-server-side
  redirection? I'm asking this because we sometimes program against the REST
  APIs directly.

[1]
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n64
 


Regards,
  Qiming




Re: [openstack-dev] [TripleO] Releasing tripleo-common on PyPI

2015-09-09 Thread Dmitry Tantsur

On 09/09/2015 12:15 PM, Dougal Matthews wrote:

Hi,

The tripleo-common library appears to be registered on PyPI but hasn't yet had
a release[1]. I am not familiar with the release process - what do we need to
do to make sure it is regularly released with the other TripleO packages?


I think this is a good start: 
https://github.com/openstack/releases/blob/master/README.rst




We will also want to do something similar with the new python-tripleoclient
which doesn't seem to be registered on PyPI yet at all.


And instack-undercloud.



Thanks,
Dougal

[1]: https://pypi.python.org/pypi/tripleo-common







Re: [openstack-dev] [Openstack-dev] Devstack broken - third party CI broken

2015-09-09 Thread Eduard Matei
The line
export DEVSTACK_LOCAL_CONFIG="disable_service ceilometer-acompute
ceilometer-acentral ceilometer-collector ceilometer-api"
in the job config did the trick.

Thanks,
Eduard


Re: [openstack-dev] [Fuel][Plugins] request for update of fuel-plugin-builder on pypi

2015-09-09 Thread Swann Croiset
+2 to Sergii

And, btw, let's create dedicated repos for the plugin examples.

On Wed, Sep 9, 2015 at 12:34 PM, Sergii Golovatiuk  wrote:

> +1 to Simon
>
> Also the structure of fuel-plugin-builder should be refactored to
> community standards. everything in 'fuel_plugin_builder' directory should
> moved to top of repository.
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Wed, Sep 9, 2015 at 12:25 PM, Simon Pasquier 
> wrote:
>
>> Hi,
>> It would be cool if fuel-plugin-builder (fpb) v3.0.0 could be released on
>> pypi. We've moved some of the LMA plugins to use the v3 format.
>> Right now we have to install fpb from source which is hard to automate in
>> our tests unfortunately (as already noted by Sergii [1]).
>> BR,
>> Simon
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070781.html
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [mistral] Mistral Liberty-3 milestone has been released

2015-09-09 Thread Renat Akhmerov
Hi,

Mistral Liberty 3 milestone has been released! We’ve also released Mistral 
Client 1.0.2 that has some adjustments needed to use the new Mistral server.

Below are corresponding release pages where you can find downloadable artefacts 
and more detailed information about what has changed:
https://launchpad.net/mistral/liberty/liberty-3 

https://launchpad.net/python-mistralclient/liberty/1.0.2 


From now on till the official Liberty release, let's focus on massive bugfixing, 
writing documentation and whatever is left on the UI. The next release (RC1) is 
scheduled for 25 Sep, so there is not much time to relax.

Many thanks to the team for your hard work!

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-09 Thread Dmitry Pyzhov
Vladimir,

thanks for bringing this up. It greatly correlates with the idea of
modularity. Everything related to an openstack release should be put in one
place and should be managed as a solid bundle on the master node. Package
repository is the first solution that comes to mind, and it looks pretty
good. Puppet modules, openstack.yaml and maybe even serialisers should be
stored in packages in the openstack release repository. And eventually
every other piece of our software should get rid of release-specific logic.

On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> Currently, we install fuel-libraryX.Y package(s) on the master node and
> then right before starting actual deployment we rsync [1] puppet modules
> (one of installed versions) from the master node to slave nodes. Such a
> flow makes things much more complicated than they could be if we installed
> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
> parameterized by repo urls (upstream + mos) and this pre-deployment task
> could be nothing more than just installing fuel-library package from mos
> repo defined for a cluster. We would not have several versions of
> fuel-library on the master node, we would not need that complicated upgrade
> stuff like we currently have for puppet modules.
>
> Please give your opinions on this.
>
>
> [1]
> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>
> Vladimir Kozhukalov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [Fuel][Plugins] request for update of fuel-plugin-builder on pypi

2015-09-09 Thread Simon Pasquier
Hi,
It would be cool if fuel-plugin-builder (fpb) v3.0.0 could be released on
pypi. We've moved some of the LMA plugins to use the v3 format.
Right now we have to install fpb from source which is hard to automate in
our tests unfortunately (as already noted by Sergii [1]).
BR,
Simon
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070781.html


Re: [openstack-dev] [murano][merlin] murano APIv2 and murano future ui

2015-09-09 Thread Nikolay Starodubtsev
Kate,
This bp is pretty old, but I think it suits our needs [1]. Yeah, I'll
attach the etherpads to it.
The idea about Pecan/WSME is really useful.
[1]: https://blueprints.launchpad.net/murano/+spec/api-vnext



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-09 14:42 GMT+03:00 Ekaterina Chernova :

> Hi Nikolay!
>
> Thanks for starting this activity! This is a really hot topic.
> We also used to have plan to migrate our API to pecan. [1]
> This also can be discussed.
>
> Do we have a blueprint for that? Could you please file it and attach
> etherpad to a new blueprint.
>
> [1] -
> https://blueprints.launchpad.net/murano/+spec/murano-api-server-pecan-wsme
>
> Thanks,
> Kate.
>
> On Wed, Sep 9, 2015 at 2:07 PM, Nikolay Starodubtsev <
> nstarodubt...@mirantis.com> wrote:
>
>> Hi all,
>> Yesterday on IRC weekly meeting murano team decided to start collecting
>> ideas about murano APIv2 and murano future ui. We have two etherpads for
>> this purpose:
>> 1) https://etherpad.openstack.org/p/murano-APIv2 - for murano API v2
>> ideas
>> 2) https://etherpad.openstack.org/p/murano-future-ui-(Merlin) - for
>> future ui ideas
>>
>> Feel free to write your ideas. If you have any questions you can reach me
>> in IRC.
>>
>>
>>
>> Nikolay Starodubtsev
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>>
>> Skype: dark_harlequine1
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Neutron][ML2] ML2 late/early-cycle sprint announcement

2015-09-09 Thread Gal Sagie
Hi Sukhdev,

The common sync framework is something I have also been thinking about for
some time now.
I think it's a very good idea and I would love to participate in the
talks (and hopefully the implementation as well).

Thanks
Gal.

On Wed, Sep 9, 2015 at 9:46 AM, Sukhdev Kapur 
wrote:

> Folks,
>
> We are planning on having ML2 coding sprint on October 6 through 8, 2015.
> Some are calling it Liberty late-cycle sprint, others are calling it Mitaka
> early-cycle sprint.
>
> ML2 team has been discussing the issues related to synchronization of the
> Neutron DB resources with the back-end drivers. Several issues have been
> reported when multiple ML2 drivers are deployed in scaled HA deployments.
> The issues surface when either side (Neutron or back-end HW/drivers)
> restarts and the resource view gets out of sync. There is no mechanism in
> Neutron or ML2 plugin which ensures the synchronization of the state
> between the front-end and back-end. The drivers either end up implementing
> their own solutions or they dump the issue on the operators to intervene
> and correct it manually.
>
> We plan on utilizing Task Flow to implement the framework in ML2 plugin
> which can be leveraged by ML2 drivers to achieve synchronization in a
> simplified manner.
>
> There are couple of additional items on the Sprint agenda, which are
> listed on the etherpad [1]. The details of venue and schedule are listed on
> the etherpad as well. The sprint is hosted by Yahoo Inc.
> Whoever is interested in the topics listed on the etherpad is welcome to
> sign up for the sprint and join us in making this reality.
>
> Additionally, we will utilize this sprint to formalize the design
> proposal(s) for the fish bowl session at Tokyo summit [2]
>
> Any questions/clarifications, please join us in our weekly ML2 meeting on
> Wednesday at 1600 UTC (9AM pacific time) at #openstack-meeting-alt
>
> Thanks
> -Sukhdev
>
> [1] - https://etherpad.openstack.org/p/Neutron_ML2_Mid-Cycle_Sprint
> [2] - https://etherpad.openstack.org/p/neutron-mitaka-designsummit
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
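For illustration, the reconciliation such a sync framework would have to perform after either side restarts boils down to a set difference between the Neutron DB view and the back-end driver's view. The sketch below is hypothetical — it is not the proposed TaskFlow-based implementation, just the diff it would need to compute:

```python
def plan_sync(neutron_view, backend_view):
    """Compare resource IDs known to Neutron's DB with what the
    back-end driver reports, and return the actions needed to
    bring the back end in line with Neutron."""
    neutron_ids = set(neutron_view)
    backend_ids = set(backend_view)
    return {
        'create': sorted(neutron_ids - backend_ids),  # missing on back end
        'delete': sorted(backend_ids - neutron_ids),  # stale on back end
    }

plan = plan_sync(['net-a', 'net-b'], ['net-b', 'net-c'])
print(plan)  # {'create': ['net-a'], 'delete': ['net-c']}
```

A shared framework computing this once in the ML2 plugin would spare each driver from reimplementing its own resync logic or pushing the problem onto operators.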


[openstack-dev] [Fuel] 7.0 Release - Hard Code Freeze in action

2015-09-09 Thread Eugene Bogdanov

Hello everyone,

Please be informed that Hard Code Freeze for Fuel 7.0 Release is 
officially in action and the following changes have been applied:


1. Stable/7.0 branch was created for the following repos:

fuel-main
fuel-library
fuel-web
fuel-ostf
fuel-astute
fuel-qa
python-fuelclient
fuel-agent
fuel-nailgun-agent
fuel-mirror

2. Development Focus in LP is now changed to 8.0.
3. 7.0 builds are now switched to stable/7.0 branch and new Jenkins jobs 
are created to make builds from master (8.0) [1]. Note that 8.0 builds 
are based on Liberty release and therefore are highly unstable because 
Liberty packaging is currently in progress.


Bug reporters, please ensure you target both the master and 7.0 (stable/7.0) 
milestones from now on when reporting bugs. Also, please remember that all 
fixes for stable/7.0 branch should first be applied to master (8.0) and 
then cherry-picked to stable/7.0. As always, please ensure that you do 
NOT merge changes to stable branch first. It always has to be a backport 
with the same Change-ID. Please see more on this at [2].


--
EugeneB

[1] https://ci.fuel-infra.org/view/ISO/
[2] 
https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Backport_bugfixes_to_stable_release_series


Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-09 Thread Alex Schultz
I agree that we shouldn't need to sync as we should be able to just update
the fuel-library package. That being said, I think there might be a few
issues with this method. The first issue is with plugins and how to
properly handle the distribution of the plugins as they may also include
puppet code that needs to be installed on the other nodes for a deployment.
Currently I do not believe we install the plugin packages anywhere except
the master, and when they do get installed there may be some post-install
actions that are only valid for the master.  Another issue is being
flexible enough to allow for deployment engineers to make custom changes
for a given environment.  Unless we can provide an improved process to
allow for people to provide in place modifications for an environment, we
can't do away with the rsync.

If we want to go completely down the package route (and we probably
should), we need to make sure that all of the other pieces that currently
go together to make a complete fuel deployment can be updated in the same
way.
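
As a rough sketch of what "going down the package route" could mean for the pre-deployment step (the names here are illustrative, not the real Nailgun serializer API):

```python
# Hypothetical sketch (names are illustrative, not the real Nailgun serializer
# API): the pre-deployment task either rsyncs puppet modules from the master
# node, or simply installs the fuel-library package from the cluster's repos.
def pre_deployment_task(use_packages, repo_urls):
    if use_packages:
        return {
            "type": "shell",
            "cmd": "yum install -y fuel-library",
            "repos": list(repo_urls),  # upstream + mos repos defined for the cluster
        }
    # Legacy behaviour: sync puppet modules from the master node.
    return {
        "type": "sync",
        "src": "rsync://{MASTER_IP}:/puppet/modules/",
        "dst": "/etc/puppet/modules",
    }
```

The point of the thread is that the first branch is all that would remain, with plugins and custom changes needing an equivalent packaged path.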

-Alex

On Wed, Sep 9, 2015 at 8:15 AM, Andrey Danin  wrote:

> I don't think juggling with repos and pull requests is easier than directly
> editing files on the Fuel node. Do we have Perestroika installed on the Fuel
> node in 7.0?
>
> On Wed, Sep 9, 2015 at 3:47 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Andrey,
>>
>> This change is going to make things even easier. Currently you don't need
>> to build the fuel-library package manually; Perestroika does it for you.
>> It builds the necessary packages within minutes for every review request,
>> and the packaging CI even tests them for you. You just need to make the
>> necessary changes not on the master node but on your MACBOOK using your
>> favorite editor, then commit the change and send the patch for review. If you
>> want to test the patch manually, you just need to append its CR repo
>> (example here [1]) to the list of repos you define for your cluster and
>> start the deployment. Anyway, you still have rsync, mcollective and other
>> old plain tools to run a deployment manually.
>>
>> [1] http://perestroika-repo-tst.infra.mirantis.net/review/CR-221719/
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 2:48 PM, Dmitry Pyzhov 
>> wrote:
>>
>>> Vladimir,
>>>
>>> thanks for bringing this up. It greatly correlates with the idea of
>>> modularity. Everything related to an openstack release should be put in one
>>> place and should be managed as a solid bundle on the master node. Package
>>> repository is the first solution that comes to mind, and it looks pretty
>>> good. Puppet modules, openstack.yaml and maybe even serialisers should be
>>> stored in packages in the openstack release repository. And eventually
>>> every other piece of our software should get rid of release-specific logic.
>>>
>>> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Dear colleagues,

 Currently, we install fuel-libraryX.Y package(s) on the master node and
 then right before starting actual deployment we rsync [1] puppet modules
 (one of installed versions) from the master node to slave nodes. Such a
 flow makes things much more complicated than they could be if we installed
 puppet modules on slave nodes as rpm/deb packages. Deployment itself is
 parameterized by repo urls (upstream + mos) and this pre-deployment task
 could be nothing more than just installing fuel-library package from mos
 repo defined for a cluster. We would not have several versions of
 fuel-library on the master node, we would not need that complicated upgrade
 stuff like we currently have for puppet modules.

 Please give your opinions on this.


 [1]
 https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218

 Vladimir Kozhukalov


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Andrey Danin
> ada...@mirantis.com
> skype: gcon.monolake
>
> 

Re: [openstack-dev] [Heat] Multi Node Stack - keystone federation

2015-09-09 Thread Zane Bitter

On 09/09/15 04:10, SHTILMAN, Tomer (Tomer) wrote:




On 07/09/15 05:27, SHTILMAN, Tomer (Tomer) wrote:
Hi

Currently in heat we have the ability to deploy a remote stack on a
different region using OS::Heat::Stack and region_name in the context

My question is regarding multi node , separate keystones, with
keystone federation.

Is there an option in a HOT template to send a stack to a different
node, using the keystone federation feature?

For example ,If I have two Nodes (N1 and N2) with separate keystones
(and keystone federation), I would like to deploy a stack on N1 with a
nested stack that will deploy on N2, similar to what we have now for
regions



Zane wrote:
Short answer: no.



Long answer: this is something we've wanted to do for a while, and a lot of 
folks have asked for it. We've been calling it multi-cloud (i.e.
multiple keystones, as opposed to multi-region which is multiple regions with 
one keystone). In principle it's a small extension to the multi-region stacks 
(just add a way to specify the auth_url as well as the region), but the tricky 
part is how to authenticate to the other clouds. We don't want to encourage 
people to put their login credentials into a template. I'm not sure to what 
extent keystone federation could solve that - I suspect that it does not allow 
you to use a single token on multiple clouds, just that it allows you to obtain 
a token on multiple clouds using the same credentials? So basically this idea 
is on hold until someone comes up with a safe way to authenticate to the other 
clouds. Ideas/specs welcome.



cheers,
Zane.


Thanks Zane for your reply
My understanding was that with keystone federation, once you have a token issued 
by one keystone the other one respects it, and there is no need to 
re-authenticate with the second keystone.


OK, that sounds close to what Kevin said as well, which was that you use 
your token from the local keystone to obtain a token from the remote 
keystone that will allow you to access the remote Heat. If that's the 
case we'll need to write some code to grab that other token, but either 
way it all sounds relatively straightforward without any security headaches.


I know there are people who want to do this with clouds that are not 
federated (and even people with custom resources for non-OpenStack 
clouds who want to use this) so we may still need to find a solution for 
the credential thing in the long term, but I see no reason not to start 
now by implementing the federation case - that will solve a big subset 
of the problem and doesn't foreclose any future development paths.



My thinking was more of changing the remote stack resource to have in the 
context the heat_url of the other node; I am not sure whether credentials are 
needed here.


Not the heat_url, but the auth_url - we'll obtain the Heat endpoint from 
the remote keystone catalog, just like we do locally. But other than 
that, exactly - it's just another optional sub-property of the context 
on the remote stack resource.
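
To make the idea concrete, here is a hypothetical sketch (in Python; this is not implemented in Heat, and the auth_url property name is an assumption from the discussion) of a remote stack resource whose context gains an optional auth_url alongside region_name:

```python
# Hypothetical only: Heat does not implement this today. Sketch of a remote
# stack resource whose context could carry an optional auth_url in addition
# to region_name; Heat would then look up the remote orchestration endpoint
# in the remote keystone catalog.
def remote_stack(template_file, region_name=None, auth_url=None):
    context = {}
    if region_name:
        context["region_name"] = region_name
    if auth_url:
        context["auth_url"] = auth_url  # assumed property name, not real
    resource = {
        "type": "OS::Heat::Stack",
        "properties": {"template": {"get_file": template_file}},
    }
    if context:
        resource["properties"]["context"] = context
    return resource
```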



We are currently building a multi-cloud setup with keystone federation in our 
lab, and I will check whether my understanding is correct. I am planning to 
propose a BP for this once things are clear.


+1


Thanks again
Tomer

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [murano][merlin] murano APIv2 and murano future ui

2015-09-09 Thread Nikolay Starodubtsev
Thanks, Stéphane. That's a good point.



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2015-09-09 15:10 GMT+03:00 Stéphane Bisinger :

> Please note that some projects already using pecan+WSME are actually
> thinking about finding something else, since WSME doesn't have much
> activity and has its fair share of issues. If you missed it, check out this
> conversation thread:
> http://lists.openstack.org/pipermail/openstack-dev/2015-August/073156.html
>
> On Wed, Sep 9, 2015 at 1:50 PM, Nikolay Starodubtsev <
> nstarodubt...@mirantis.com> wrote:
>
>> Kate,
>> This bp is pretty old, but I think it suits our needs [1]. Yeah, I'll
>> attach etherpads to it.
>> The idea about pecan/wsme is really useful.
>> [1]: https://blueprints.launchpad.net/murano/+spec/api-vnext
>>
>>
>>
>> Nikolay Starodubtsev
>>
>> Software Engineer
>>
>> Mirantis Inc.
>>
>>
>> Skype: dark_harlequine1
>>
>> 2015-09-09 14:42 GMT+03:00 Ekaterina Chernova :
>>
>>> Hi Nikolay!
>>>
>>> Thanks for starting this activity! This is a really hot topic.
>>> We also used to have plan to migrate our API to pecan. [1]
>>> This also can be discussed.
>>>
>>> Do we have a blueprint for that? Could you please file it and attach
>>> etherpad to a new blueprint.
>>>
>>> [1] -
>>> https://blueprints.launchpad.net/murano/+spec/murano-api-server-pecan-wsme
>>>
>>> Thanks,
>>> Kate.
>>>
>>> On Wed, Sep 9, 2015 at 2:07 PM, Nikolay Starodubtsev <
>>> nstarodubt...@mirantis.com> wrote:
>>>
 Hi all,
 Yesterday at the IRC weekly meeting the murano team decided to start collecting
 ideas about murano APIv2 and murano's future ui. We have two etherpads for
 this purpose:
 1) https://etherpad.openstack.org/p/murano-APIv2 - for murano API v2
 ideas
 2) https://etherpad.openstack.org/p/murano-future-ui-(Merlin) - for
 future ui ideas

 Feel free to write your ideas. If you have any questions you can reach
 me in IRC.



 Nikolay Starodubtsev

 Software Engineer

 Mirantis Inc.


 Skype: dark_harlequine1


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Stéphane
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Should v2 compatibility mode (v2.0 on v2.1) fixes be applicable for v2.1 too?

2015-09-09 Thread Alex Xu

> 在 2015年9月8日,下午6:45,Sean Dague  写道:
> 
> On 09/06/2015 11:15 PM, GHANSHYAM MANN wrote:
>> Hi All,
>> 
>> As we all know, the api-paste.ini default setting for /v2 was changed to
>> run those on v2.1 (v2.0 on v2.1), which is a really great thing for easy
>> code maintenance in the future (removal of v2 code).
>> 
>> To keep "v2.0 on v2.1" fully compatible with "v2.0 on v2.0", some bugs
>> were found[1] and fixed. But I think we should fix those only for v2
>> compatible mode not for v2.1.
>> 
>> For example bug#1491325, 'device' on volume attachment Request is
>> optional param[2] (which does not mean 'null-able' is allowed) and
>> v2.1 used to detect and error on usage of 'device' as "None". But as
>> it was used as 'None' by many /v2 users and not to break those, we
>> should allow 'None' on v2 compatible mode also. But we should not
>> allow the same for v2.1.
>> 
>> IMO v2.1 strong input validation feature (which helps to make API
>> usage in correct manner) should not be changed, and for v2 compatible
>> mode we should have another solution without affecting v2.1 behavior
>> may be having different schema for v2 compatible mode and do the
>> necessary fixes there.
>> 
>> Trying to know other's opinion on this or something I missed during
>> any discussion.
>> 
>> [1]: https://bugs.launchpad.net/python-novaclient/+bug/1491325
>>  https://bugs.launchpad.net/nova/+bug/1491511
>> 
>> [2]: http://developer.openstack.org/api-ref-compute-v2.1.html#attachVolume
> 
> A lot of these issue need to be a case by case determination.
> 

+1 for case by case. At the beginning of this release I really hoped we could get a 
guideline with a few rules to explain everything, so that I could use that guideline 
to make everyone stop arguing :) In the end I found I was wrong. Thanks to Sean for 
telling me I should think about the client (even though I knew that, I still 
need some time to learn to think about that.)
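
The validation difference under discussion can be sketched with a hand-rolled check (this is not Nova's actual JSON-Schema): strict v2.1 rejects device=None, while a hypothetical v2.0-compatibility mode would additionally tolerate None:

```python
# Hand-rolled sketch, not Nova's actual JSON-Schema: 'device' is an optional
# string, so strict v2.1 accepts a string or an omitted key, while a
# hypothetical v2.0-compatibility schema would additionally tolerate None.
def validate_device(value, compat_mode=False):
    if isinstance(value, str):
        return True
    if value is None:
        return compat_mode  # allowed only in v2.0-compatibility mode
    return False
```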

> In this particular case, we had the Documetation, the nova code, the
> clients, and the future.
> 
> The documentation: device is optional. That means it should be a string
> or not there at all. The schema was set to enforce this on v2.1
> 
> The nova code: device = None was accepted previously, because device is
> a mandatory parameter all the way down the call stack. 2 layers in we
> default it to None if it wasn't specified.
> 
> The clients: both python-novaclient and ruby fog sent device=None in the
> common case. While only 2 data points, this does demonstrate this is
> more wide spread than just our buggy code.
> 
> The future: it turns out we really can't honor this parameter in most
> cases anyway, and passing it just means causing bugs. This is an
> artifact of the EC2 API that only works on specific (and possibly
> forked) versions of Xen that Amazon runs. Most hypervisor / guest
> relationships don't allow this to be set. The long term direction is
> going to be removing it from our API.
> 
> Given that, it seemed fine to relax this across all APIs. We screwed up
> and didn't test this case correctly, and long term we're going to dump
> it. So we don't want to honor 3 different versions of this API,
> especially as no one seems written to work against the documentation,
> but were written against the code in question. If they write to the
> docs, they'll be fine. But the clients that are out in the wild will be
> fine as well.

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]Weekly Team Meeting 2015.09.09

2015-09-09 Thread Zhipeng Huang
Hi, please find the meetbot log at
http://eavesdrop.openstack.org/meetings/tricircle/2015/tricircle.2015-09-09-13.01.html

Noise-cancelled minutes are also attached.

On Wed, Sep 9, 2015 at 4:22 PM, Zhipeng Huang  wrote:

> Hi Team,
>
> Let's resume our weekly meeting today. As Eran suggested before, we will
> mainly discuss the work we have now, and leave the design session for
> another time slot :) See you at UTC 1300 today.
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co,. Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


tricircle meeting minutes 2015.09.09.docx
Description: MS-Word 2007 document
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [aodh][ceilometer] (re)introducing Aodh - OpenStack Alarming

2015-09-09 Thread Julien Danjou
On Wed, Sep 09 2015, Qiming Teng wrote:

> - According to [1], Aodh will be released as a standalone service,
>   am I understanding this correctly?

Yes.

> - What is the official name for this new serivce when it stands on its
>   own feet: "Telemetry Alarming" or just "Alarming" or something else?
>   We will need a name for this to support in OpenStack SDK.

I think it would be "alarming". Does that sound good enough to
everyone?
I'm not a native speaker, and I'm never sure whether "alarming" is a good term
here.

> - There will be a need to create endpoints for Aodh in keystone? Or it
>   is just a 'library' for ceilometer, sharing the same API endpoint and
>   the same client with ceilometer?

Yes, you need to create an endpoint in Keystone.

> - The original email mentioned that "the client can still be used and
>   redirect to Aodh". Could you please clarify where the redirection will
>   happen? It is a client side redirection or a ceilometer-server side
>   redirection? I'm asking this because we sometimes program the REST
>   APIs directly.

It's actually both: we do the redirect on the client side, but if people
use the REST API directly, a 301 code is also returned.
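
A minimal sketch of that client-side behaviour, with a table of canned responses standing in for real HTTP (the URLs here are made up for illustration):

```python
# Simulated sketch of the redirect behaviour: the table of canned responses
# stands in for real HTTP, and all URLs are made up for illustration.
def get_following_redirects(url, responses, max_hops=3):
    """responses maps url -> (status_code, payload_or_location)."""
    for _ in range(max_hops):
        status, body = responses[url]
        if status == 301:
            url = body  # body plays the role of the Location header
            continue
        return status, body
    raise RuntimeError("too many redirects")
```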

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] trello

2015-09-09 Thread Derek Higgins



On 08/09/15 16:36, Derek Higgins wrote:

Hi All,

Some of ye may remember some time ago we used to organize TripleO
based jobs/tasks on a trello board[1], at some stage this board fell out
of use (the exact reason I can't put my finger on). This morning I was
putting a list of things together that need to be done in the area of CI
and needed somewhere to keep track of it.

I propose we get back to using this trello board and each of us add
cards at the very least for the things we are working on.

This should give each of us a lot more visibility into what is going on
in the tripleo project currently. Unless I hear any objections,
tomorrow I'll start archiving all cards on the boards and removing
people no longer involved in tripleo. We can then start adding items and
anybody who wants in can be added again.


This is now done, see
https://trello.com/tripleo

Please ping me on irc if you want to be added.



thanks,
Derek.

[1] - https://trello.com/tripleo



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Plugin integration and environment file naming

2015-09-09 Thread Rob Pothier (rpothier)

Looks good to me.

Rob


On 9/8/15, 4:40 AM, "Steven Hardy"  wrote:

>Hi all,
>
>So, lately we're seeing an increasing number of patches adding integration
>for various third-party plugins, such as different neutron and cinder
>backends.
>
>This is great to see, but it also poses the question of how we organize
>the
>user-visible interfaces to these things long term.
>
>Originally, I was hoping to land some Heat composability improvements[1]
>which would allow for tagging templates as providing a particular
>capability (such as "provides neutron ML2 plugin"), but this has stalled
>on
>some negative review feedback and isn't going to be implemented for
>Liberty.
>
>However, today looking at [2] and [3], (which both add t-h-t integration
>to
>enable neutron ML2 plugins), a simpler interim solution occurred to me,
>which is just to make use of a suggested/mandatory naming convention.
>
>For example:
>
>environments/neutron-ml2-bigswitch.yaml
>environments/neutron-ml2-cisco-nexus-ucsm.yaml
>
>Or via directory structure:
>
>environments/neutron-ml2/bigswitch.yaml
>environments/neutron-ml2/cisco-nexus-ucsm.yaml
>
>This would require enforcement via code-review, but could potentially
>provide a much more intuitive interface for users when they go to create
>their cloud, and particularly it would make life much easier for any Ux to
>ask "choose which neutron-ml2 plugin you want", because the available
>options can simply be listed by looking at the available environment
>files?
>
>What do folks think of this, is now a good time to start enforcing such a
>convention?
>
>Steve
>
>[1] https://review.openstack.org/#/c/196656/
>[2] https://review.openstack.org/#/c/213142/
>[3] https://review.openstack.org/#/c/198754/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
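
To illustrate the payoff of the proposed convention: a UX (or a gate-side check) could list the available ML2 plugins purely from the environment file names. This is a hypothetical sketch, using the example names from the proposal above:

```python
# Hypothetical gate-check / UX helper exploiting the proposed convention:
# the available ML2 plugin choices fall straight out of the file names.
def ml2_plugin_choices(environment_files):
    prefix, suffix = "environments/neutron-ml2-", ".yaml"
    return sorted(f[len(prefix):-len(suffix)]
                  for f in environment_files
                  if f.startswith(prefix) and f.endswith(suffix))
```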


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-09 Thread Vladimir Kozhukalov
Andrey,

This change is going to make things even easier. Currently you don't need
to build the fuel-library package manually; Perestroika does it for you.
It builds the necessary packages within minutes for every review request,
and the packaging CI even tests them for you. You just need to make the
necessary changes not on the master node but on your MACBOOK using your
favorite editor, then commit the change and send the patch for review. If you
want to test the patch manually, you just need to append its CR repo
(example here [1]) to the list of repos you define for your cluster and
start the deployment. Anyway, you still have rsync, mcollective and other
old plain tools to run a deployment manually.

[1] http://perestroika-repo-tst.infra.mirantis.net/review/CR-221719/



Vladimir Kozhukalov

On Wed, Sep 9, 2015 at 2:48 PM, Dmitry Pyzhov  wrote:

> Vladimir,
>
> thanks for bringing this up. It greatly correlates with the idea of
> modularity. Everything related to an openstack release should be put in one
> place and should be managed as a solid bundle on the master node. Package
> repository is the first solution that comes to mind, and it looks pretty
> good. Puppet modules, openstack.yaml and maybe even serialisers should be
> stored in packages in the openstack release repository. And eventually
> every other piece of our software should get rid of release-specific logic.
>
> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> Currently, we install fuel-libraryX.Y package(s) on the master node and
>> then right before starting actual deployment we rsync [1] puppet modules
>> (one of installed versions) from the master node to slave nodes. Such a
>> flow makes things much more complicated than they could be if we installed
>> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
>> parameterized by repo urls (upstream + mos) and this pre-deployment task
>> could be nothing more than just installing fuel-library package from mos
>> repo defined for a cluster. We would not have several versions of
>> fuel-library on the master node, we would not need that complicated upgrade
>> stuff like we currently have for puppet modules.
>>
>> Please give your opinions on this.
>>
>>
>> [1]
>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-09 Thread Andrey Danin
I don't think juggling with repos and pull requests is easier than directly
editing files on the Fuel node. Do we have Perestroika installed on the Fuel
node in 7.0?

On Wed, Sep 9, 2015 at 3:47 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Andrey,
>
> This change is going to make things even easier. Currently you don't need
> to build the fuel-library package manually; Perestroika does it for you.
> It builds the necessary packages within minutes for every review request,
> and the packaging CI even tests them for you. You just need to make the
> necessary changes not on the master node but on your MACBOOK using your
> favorite editor, then commit the change and send the patch for review. If you
> want to test the patch manually, you just need to append its CR repo
> (example here [1]) to the list of repos you define for your cluster and
> start the deployment. Anyway, you still have rsync, mcollective and other
> old plain tools to run a deployment manually.
>
> [1] http://perestroika-repo-tst.infra.mirantis.net/review/CR-221719/
>
>
>
> Vladimir Kozhukalov
>
> On Wed, Sep 9, 2015 at 2:48 PM, Dmitry Pyzhov 
> wrote:
>
>> Vladimir,
>>
>> thanks for bringing this up. It greatly correlates with the idea of
>> modularity. Everything related to an openstack release should be put in one
>> place and should be managed as a solid bundle on the master node. Package
>> repository is the first solution that comes to mind, and it looks pretty
>> good. Puppet modules, openstack.yaml and maybe even serialisers should be
>> stored in packages in the openstack release repository. And eventually
>> every other piece of our software should get rid of release-specific logic.
>>
>> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> Currently, we install fuel-libraryX.Y package(s) on the master node and
>>> then right before starting actual deployment we rsync [1] puppet modules
>>> (one of installed versions) from the master node to slave nodes. Such a
>>> flow makes things much more complicated than they could be if we installed
>>> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
>>> parameterized by repo urls (upstream + mos) and this pre-deployment task
>>> could be nothing more than just installing fuel-library package from mos
>>> repo defined for a cluster. We would not have several versions of
>>> fuel-library on the master node, we would not need that complicated upgrade
>>> stuff like we currently have for puppet modules.
>>>
>>> Please give your opinions on this.
>>>
>>>
>>> [1]
>>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>>>
>>> Vladimir Kozhukalov
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican core

2015-09-09 Thread Nathan Reller
+1

Dave is a great member of the team, and I think he has earned it.

-Nate

On Tue, Sep 8, 2015 at 12:13 PM, Douglas Mendizábal <
douglas.mendiza...@rackspace.com> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> +1
>
> Dave has been a great asset to the team, and I think he would make an
> excellent core reviewer.
>
> - - Douglas Mendizábal
>
> On 9/8/15 11:05 AM, Juan Antonio Osorio wrote:
> > I'd like to nominate Dave Mccowan for the Barbican core review
> > team.
> >
> > He has been an active contributor both in doing relevant code
> > pieces and making useful and thorough reviews; And so I think he
> > would make a great addition to the team.
> >
> > Please bring the +1's :D
> >
> > Cheers!
> >
> > -- Juan Antonio Osorio R. e-mail: jaosor...@gmail.com
> > 
> >
> >
> >
> > __
> 
> >
> >
> >
> OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> -BEGIN PGP SIGNATURE-
> Comment: GPGTools - https://gpgtools.org
>
> iQIcBAEBCgAGBQJV7wkfAAoJEB7Z2EQgmLX7X+IP/AtYTxcx0u+O6MMLDU1VcGZg
> 5ksCdn1bosfuqJ/X/QWplHBSG8BzllwciHm7YJxIY94MaAlThk3Zw6UDKKkBMqIt
> Qag09Z868LPl9/pll0whR5fVa052zSMq/QYWTnpgwpAgQduKNe4KaR1ZKhtBBbAJ
> BvjyKEa2dJLA6LIMXxcxpoCAKSeORM5lce19kHHhWyqq9v5A89U6GHMgwRAa2fGN
> 7RyYmlOrmxh6TyJQX9Xl+w9y5WPAbxaUqC0MYEkLMpa7VnGf2pEangkN0LUAJO2x
> NxwHa73b2LA8K1+4hwTvZO28sRnyMHwjSpqvpGt60FXkgi4dLyyy8gR6gsO49EDB
> QOSwpwyFHzA//iuMl72pAD6uMzK0SCECtEu2000l0p3WEXS1i0z7p9VTfw4FySqb
> V0S/IeSFfkt09TK2DoOSzXAvBZjsLz9gjRbRIv2dx0QTTmN5JpihOeoUojn24aDV
> 86AshlhoImJGOX16MwRL+T6LCindkczGe4Faz7WzmBomEJ7SOY6pzDbyEBLYcqzu
> crvrLt2D1HmaygFGS37lVCqxlIegwsnZHGIe+Jtr8pDIDSW37ig4LZIDVra2/lj9
> E7/fWYCDqbSIUWYG2jMr0/3eQQwZCj4kNvtWaTlNFmTPJZAEYpSN3rBhkfWBgsLv
> mqBOM4IeR4EqaqaC2og7
> =jL8d
> -END PGP SIGNATURE-
>


Re: [openstack-dev] [nova] CI for reliable live-migration

2015-09-09 Thread Timofei Durakov
Hello,
Update for gate-tempest-dsvm-multinode-full job.
Here are the top 12 failing tests over the weekly period:
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_resize_server_from_manual_to_auto: 14
tempest.api.compute.servers.test_disk_config.ServerDiskConfigTestJSON.test_resize_server_from_auto_to_manual: 14
tempest.scenario.test_server_advanced_ops.TestServerAdvancedOps.test_resize_server_confirm: 12
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert: 12
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm: 12
tempest.api.compute.admin.test_live_migration.LiveBlockMigrationTestJSON.test_live_block_migration_paused: 12
tempest.api.compute.servers.test_delete_server.DeleteServersTestJSON.test_delete_server_while_in_verify_resize_state: 12
tempest.api.compute.admin.test_migrations.MigrationsAdminTest.test_list_migrations_in_flavor_resize_situation: 12
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_confirm_from_stopped: 12
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPatternV2.test_volume_boot_pattern: 10
tempest.scenario.test_volume_boot_pattern.TestVolumeBootPattern.test_volume_boot_pattern: 10
tempest.api.compute.admin.test_live_migration.LiveBlockMigrationTestJSON.test_live_block_migration: 10


Full list of failing tests: http://xsnippet.org/360947/
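For what it's worth, a weekly tally like the one above is straightforward to produce from scraped gate results. The sketch below is purely illustrative: the input data is made up, and the real numbers would come from the CI log server, not this snippet.

```python
from collections import Counter

# Made-up sample of (test_id, result) pairs; real data would be scraped
# from the gate-tempest-dsvm-multinode-full job logs.
results = [
    ("tempest.api.compute.servers.test_disk_config."
     "ServerDiskConfigTestJSON.test_resize_server_from_manual_to_auto",
     "FAILURE"),
    ("tempest.scenario.test_volume_boot_pattern."
     "TestVolumeBootPattern.test_volume_boot_pattern", "FAILURE"),
    ("tempest.scenario.test_volume_boot_pattern."
     "TestVolumeBootPattern.test_volume_boot_pattern", "FAILURE"),
    ("tempest.scenario.test_volume_boot_pattern."
     "TestVolumeBootPattern.test_volume_boot_pattern", "SUCCESS"),
]

# Count only the failures, then print them most-frequent first,
# mirroring the "test: count" layout of the weekly report.
failures = Counter(test for test, result in results if result == "FAILURE")
for test, count in failures.most_common(12):
    print(f"{test}: {count}")
```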


On Fri, Aug 28, 2015 at 12:14 AM, Kraminsky, Arkadiy <
arkadiy.kramin...@hp.com> wrote:

> Hello,
>
> I'm a new developer on the OpenStack project and am in the process of
> creating live-migration CI for HP's 3PAR and LeftHand backends. I noticed
> you guys are looking for someone to pick up Joe Gordon's change for volume
> backed live migration tests and we can sure use something like this. I can
> take a look into the change, and see what I can do. :)
>
> Thanks,
>
> Arkadiy Kraminsky
> 
> From: Joe Gordon [joe.gord...@gmail.com]
> Sent: Wednesday, August 26, 2015 9:26 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] CI for reliable live-migration
>
>
>
> On Wed, Aug 26, 2015 at 8:18 AM, Matt Riedemann <
> mrie...@linux.vnet.ibm.com> wrote:
>
>
> On 8/26/2015 3:21 AM, Timofei Durakov wrote:
> Hello,
>
> Here is the situation: nova has a live-migration feature but doesn't have
> a CI job to cover it with functional tests, only
> gate-tempest-dsvm-multinode-full (non-voting, btw), which covers
> block-migration only.
> The problem here is that live-migration can behave differently depending
> on how the instance was booted (volume-backed/ephemeral) and how the
> environment is configured (is there a shared instance directory (NFS, for
> example), or is RBD used to store the ephemeral disk) - or, for example,
> the user may have none of that and will use the --block-migrate flag. To
> claim that we have reliable live-migration in nova, we should check it at
> least on envs with rbd or nfs, as these are more popular than envs without
> shared storage at all.
> Here are the steps for that:
>
>  1. make  gate-tempest-dsvm-multinode-full voting, as it looks OK for
> block-migration testing purposes;
>
> When we are ready to make multinode voting we should remove the equivalent
> single node job.
>
>
> If it's been stable for awhile then I'd be OK with making it voting on
> nova changes, I agree it's important to have at least *something* that
> gates on multi-node testing for nova since we seem to break this a few
> times per release.
>
> Last I checked it isn't as stable as single node yet:
> http://jogo.github.io/gate/multinode [0].  The data going into graphite
> is a bit noisy so this may be a red herring, but at the very least it needs
> to be investigated. When I was last looking into this there were at least
> two known bugs:
>
> https://bugs.launchpad.net/nova/+bug/1445569
> 
> https://bugs.launchpad.net/nova/+bug/1462305
>
>
> [0]
> http://graphite.openstack.org/graph/?from=-36hours=500=now=800=ff=00=100=0=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-full.{SUCCESS,FAILURE})),%275hours%27),%20%27gate-tempest-dsvm-full%27),%27orange%27)=color(alias(movingAverage(asPercent(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.FAILURE,sum(stats.zuul.pipeline.check.job.gate-tempest-dsvm-multinode-full.{SUCCESS,FAILURE})),%275hours%27),%20%27gate-tempest-dsvm-multinode-full%27),%27brown%27)=Check%20Failure%20Rates%20(36%20hours)&_t=0.48646087432280183

Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-09 Thread Vladimir Kozhukalov
No, Perestroika is not available on the Fuel master node, and it is not
going to be available in the future. But Perestroika is going to be
re-worked so as to make it possible to use it separately from CI. It is
going to be a Python application that makes package building as easy for a
developer/user as possible. Anyway, I think the "it is easier to develop"
argument is not the kind of argument that can prevail when discussing a
production-ready delivery approach.

Vladimir Kozhukalov

On Wed, Sep 9, 2015 at 4:15 PM, Andrey Danin  wrote:

> I don't think juggling with repos and pull requests is easier than directly
> editing files on the Fuel node. Do we have Perestroika installed on the Fuel
> node in 7.0?
>
> On Wed, Sep 9, 2015 at 3:47 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Andrey,
>>
>> This change is going to make things even easier. Even now you don't need
>> to build the fuel-library package manually; Perestroika is going to do it
>> for you. It builds the necessary packages within minutes for every review
>> request, and the packaging CI even tests them for you. You just need to
>> make the necessary changes not on the master node but on your MacBook,
>> using your favorite editor. Then you commit the change and send the patch
>> for review. If you want to test the patch manually, you just append its CR
>> repo (example here [1]) to the list of repos you define for your cluster
>> and start the deployment. Anyway, you still have rsync, mcollective and
>> other plain old tools for running the deployment manually.
>>
>> [1] http://perestroika-repo-tst.infra.mirantis.net/review/CR-221719/
>>
>>
>>
>> Vladimir Kozhukalov
>>
>> On Wed, Sep 9, 2015 at 2:48 PM, Dmitry Pyzhov 
>> wrote:
>>
>>> Vladimir,
>>>
>>> thanks for bringing this up. It greatly correlates with the idea of
>>> modularity. Everything related to an openstack release should be put in one
>>> place and should be managed as a solid bundle on the master node. Package
>>> repository is the first solution that comes to the mind and it looks pretty
>>> good. Puppet modules, openstack.yaml and maybe even serialisers should be
>>> stored in packages in the openstack release repository. And eventually
>>> every other piece of our software should get rid of release-specific logic.
>>>
>>> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Dear colleagues,

 Currently, we install fuel-libraryX.Y package(s) on the master node and
 then right before starting actual deployment we rsync [1] puppet modules
 (one of installed versions) from the master node to slave nodes. Such a
 flow makes things much more complicated than they could be if we installed
 puppet modules on slave nodes as rpm/deb packages. Deployment itself is
 parameterized by repo urls (upstream + mos) and this pre-deployment task
 could be nothing more than just installing fuel-library package from mos
 repo defined for a cluster. We would not have several versions of
 fuel-library on the master node, we would not need that complicated upgrade
 stuff like we currently have for puppet modules.

 Please give your opinions on this.


 [1]
 https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218

 Vladimir Kozhukalov




>>>
>>>
>>>
>>>
>>
>>
>>
>
>
> --
> Andrey Danin
> ada...@mirantis.com
> skype: gcon.monolake
>
>
>


Re: [openstack-dev] [Fuel][Plugins] request for update of fuel-plugin-builder on pypi

2015-09-09 Thread Igor Kalnitsky
Hi guys,

I'm going to wait for the patch [1] and then make an FPB release.

Regarding repo restructuring: we do have an issue for it, and IIRC it's
targeted to 8.0.

[1]: https://review.openstack.org/#/c/221434/

Thanks,
Igor

On Wed, Sep 9, 2015 at 2:23 PM, Swann Croiset  wrote:
> +2 to sergii
>
> and btw create dedicated repos for plugin examples
>
>
> On Wed, Sep 9, 2015 at 12:34 PM, Sergii Golovatiuk
>  wrote:
>>
>> +1 to Simon
>>
>> Also, the structure of fuel-plugin-builder should be refactored to
>> community standards: everything in the 'fuel_plugin_builder' directory
>> should be moved to the top of the repository.
>>
>> --
>> Best regards,
>> Sergii Golovatiuk,
>> Skype #golserge
>> IRC #holser
>>
>> On Wed, Sep 9, 2015 at 12:25 PM, Simon Pasquier 
>> wrote:
>>>
>>> Hi,
>>> It would be cool if fuel-plugin-builder (fpb) v3.0.0 could be released on
>>> pypi. We've moved some of the LMA plugins to use the v3 format.
>>> Right now we have to install fpb from source which is hard to automate in
>>> our tests unfortunately (as already noted by Sergii [1]).
>>> BR,
>>> Simon
>>> [1]
>>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070781.html
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>



Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-09 Thread Vladimir Kozhukalov
Alex,

Regarding plugins: plugins are welcome to install specific additional
DEB/RPM repos on the master node, or just configure the cluster to use
additional online repos where all necessary packages (including
plugin-specific puppet manifests) are available. The current granular
deployment approach makes it easy to append specific pre-deployment tasks
(master/slave does not matter). Correct me if I am wrong.
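To make the proposal concrete, here is a minimal sketch of what the pre-deployment step could reduce to once puppet modules ship as ordinary packages. The function name, package name, and distro handling below are all hypothetical; this is not the actual Nailgun serializer code, only an illustration of replacing the rsync task with a package install.

```python
def predeploy_install_command(os_family, package="fuel-library8.0"):
    """Build the package-install command that would replace the current
    rsync of puppet modules from the master node to slave nodes.

    Both the helper and the package name are hypothetical; they only
    illustrate how small the pre-deployment task becomes when modules
    are delivered from the cluster's repos as ordinary packages.
    """
    if os_family == "ubuntu":
        return ["apt-get", "install", "-y", package]
    if os_family == "centos":
        return ["yum", "install", "-y", package]
    raise ValueError("unsupported OS family: %s" % os_family)

print(predeploy_install_command("centos"))
# -> ['yum', 'install', '-y', 'fuel-library8.0']
```

Compare this with the rsync-based task in tasks_serializer.py [1], which copies one of several installed fuel-library versions from the master node.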

Regarding flexibility: having several versioned directories with puppet
modules on the master node, and several fuel-libraryX.Y packages installed
on the master node, makes things "exquisitely convoluted" rather than
flexible. Like I said, it is flexible enough to use mcollective, plain
rsync, etc. if you really need to do things manually. But we have a
convenient service (Perestroika) which builds packages in minutes if you
need one. Moreover, in the near future (by 8.0) Perestroika will be
available as an application independent from CI. So, what is wrong with
building a fuel-library package? What if you want to troubleshoot nova (we
install it using packages)? Should we also use rsync for everything else,
like nova, mysql, etc.?

Vladimir Kozhukalov

On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz  wrote:

> I agree that we shouldn't need to sync as we should be able to just update
> the fuel-library package. That being said, I think there might be a few
> issues with this method. The first issue is with plugins and how to
> properly handle the distribution of the plugins as they may also include
> puppet code that needs to be installed on the other nodes for a deployment.
> Currently I do not believe we install the plugin packages anywhere except
> the master and when they do get installed there may be some post-install
> actions that are only valid for the master.  Another issue is being
> flexible enough to allow for deployment engineers to make custom changes
> for a given environment.  Unless we can provide an improved process to
> allow for people to provide in place modifications for an environment, we
> can't do away with the rsync.
>
> If we want to go completely down the package route (and we probably
> should), we need to make sure that all of the other pieces that currently
> go together to make a complete fuel deployment can be updated in the same
> way.
>
> -Alex
>
> On Wed, Sep 9, 2015 at 8:15 AM, Andrey Danin  wrote:
>
>> I don't think juggling with repos and pull requests is easier than direct
>> editing of files on Fuel node. Do we have Perestorika installed on Fuel
>> node in 7.0?
>>
>> On Wed, Sep 9, 2015 at 3:47 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Andrey,
>>>
>>> This change is going to make things even easier. Currently you don't
>>> need to build fuel-library package manually, Perestroika is going to do it
>>> for you. It builds necessary packages during minutes for every review
>>> request and packaging ci even tests it for you. You just need to make
>>> necessary changes not on master node but on your MACBOOK using your
>>> favorite editor. Then you need to commit this change and send this patch on
>>> review. If you want to test this patch manually, you just need to append
>>> this CR repo (example is here [1]) to the list of repos you define for your
>>> cluster and start deployment. Anyway, you still have rsync, mcollective and
>>> other old plain tools to run deployment manually.
>>>
>>> [1] http://perestroika-repo-tst.infra.mirantis.net/review/CR-221719/
>>>
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Wed, Sep 9, 2015 at 2:48 PM, Dmitry Pyzhov 
>>> wrote:
>>>
 Vladimir,

 thanks for bringing this up. It greatly correlates with the idea of
 modularity. Everything related to an openstack release should be put in one
 place and should be managed as a solid bundle on the master node. Package
 repository is the first solution that comes to the mind and it looks pretty
 good. Puppet modules, openstack.yaml and maybe even serialisers should be
 stored in packages in the openstack release repository. And eventually
 every other piece of our software should get rid of release-specific logic.

 On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> Currently, we install fuel-libraryX.Y package(s) on the master node
> and then right before starting actual deployment we rsync [1] puppet
> modules (one of installed versions) from the master node to slave nodes.
> Such a flow makes things much more complicated than they could be if we
> installed puppet modules on slave nodes as rpm/deb packages. Deployment
> itself is parameterized by repo urls (upstream + mos) and this
> pre-deployment task could be nothing more than just installing 
> fuel-library
> package from mos repo defined for a cluster. We would not have several
> versions of 

Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-09 Thread Dmitry Pyzhov
Andrey, you have highlighted an important case. I hope you agree that this
case is not a blocker for the proposal. From the developer's point of view
packages are awful and we should use raw git repos on every node; it would
make a developer's life way easier. But from an architecture perspective it
would be a disaster.

Rsync is just another legacy part of our architecture. We had puppet master
before. We have rsync now. Let's see what we should use in future and how
we can make it convenient for developers.

On Wed, Sep 9, 2015 at 2:47 PM, Andrey Danin  wrote:

> I disagree from the development point of view. Now I just change manifests
> on the Fuel node and redeploy the cluster to apply those changes. With your
> proposal I'll need to build a new package and add it to a repo every time I
> change something.
>
> On Tue, Sep 8, 2015 at 11:41 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> Currently, we install fuel-libraryX.Y package(s) on the master node and
>> then right before starting actual deployment we rsync [1] puppet modules
>> (one of installed versions) from the master node to slave nodes. Such a
>> flow makes things much more complicated than they could be if we installed
>> puppet modules on slave nodes as rpm/deb packages. Deployment itself is
>> parameterized by repo urls (upstream + mos) and this pre-deployment task
>> could be nothing more than just installing fuel-library package from mos
>> repo defined for a cluster. We would not have several versions of
>> fuel-library on the master node, we would not need that complicated upgrade
>> stuff like we currently have for puppet modules.
>>
>> Please give your opinions on this.
>>
>>
>> [1]
>> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/orchestrator/tasks_serializer.py#L205-L218
>>
>> Vladimir Kozhukalov
>>
>>
>>
>
>
> --
> Andrey Danin
> ada...@mirantis.com
> skype: gcon.monolake
>
>
>


[openstack-dev] [neutron][nova] QoS Neutron-Nova integration

2015-09-09 Thread Miguel Angel Ajo


   Hi,

Looking forward to the M cycle, I was wondering if we could loop you into
next week's Neutron/QoS meeting on #openstack-meeting-3, around 16:00 CEST,
Sept 16th.

We're thinking about several ways we should integrate QoS between nova and
the new extendable QoS service in neutron, especially regarding flavor
integration and guaranteed limits (to avoid compute-node and in-node
physical interface overcommit).

Some details from this last meeting:

http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-09-09-14.07.html

Best regards,
Miguel Ángel.





Re: [openstack-dev] [Fuel] Install fuel-libraryX.Y as a package on slave nodes

2015-09-09 Thread Alex Schultz
Hey Vladimir,



> Regarding plugins: plugins are welcome to install specific additional
> DEB/RPM repos on the master node, or just configure cluster to use
> additional online repos, where all necessary packages (including plugin
> specific puppet manifests) are to be available. Current granular deployment
> approach makes it easy to append specific pre-deployment tasks
> (master/slave does not matter). Correct me if I am wrong.
>
>
Don't get me wrong, I think it would be good to move to a fuel-library
distributed via package only. I'm bringing these points up to indicate
that many other things live in the fuel-library puppet path besides the
fuel-library package itself. The plugin example is just one place where
we will need to invest in further design and work to move to the
package-only distribution. What I don't want is some partially executed
work that only works for one type of deployment and creates headaches for
the people actually having to use fuel. The deployment engineers and
customers who actually perform these actions should be asked about
packaging and their comfort level with this type of requirement. I don't
have a complete understanding of all the things supported today by the
fuel plugin system, so it would be nice to get someone who is more familiar
with it to weigh in on this idea. Currently plugins are only rpms (no
debs), and I don't think we are building fuel-library debs at this time
either. So without some work on both sides, we cannot move to just packages.


> Regarding flexibility: having several versioned directories with puppet
> modules on the master node, having several fuel-libraryX.Y packages
> installed on the master node makes things "exquisitely convoluted" rather
> than flexible. Like I said, it is flexible enough to use mcollective, plain
> rsync, etc. if you really need to do things manually. But we have
> convenient service (Perestroika) which builds packages in minutes if you
> need. Moreover, In the nearest future (by 8.0) Perestroika will be
> available as an application independent from CI. So, what is wrong with
> building fuel-library package? What if you want to troubleshoot nova (we
> install it using packages)? Should we also use rsync for everything else
> like nova, mysql, etc.?
>
>
Yes, we do have a service like Perestroika to build packages for us.  That
doesn't mean everyone else does or has access to do that today.  Setting up
a build system is a major undertaking and making that a hard requirement to
interact with our product may be a bit much for some customers.  In
speaking with some support folks, there are times when files have to be
munged to get around issues because there is no package or things are on
fire so they can't wait for a package to become available for a fix.  We
need to be careful not to impose limits without proper justification and
due diligence.  We already build the fuel-library package, so there's no
reason you couldn't try switching the rsync to install the package if it's
available on a mirror.  I just think you're going to run into the issues I
mentioned which need to be solved before we could just mark it done.

-Alex



> Vladimir Kozhukalov
>
> On Wed, Sep 9, 2015 at 4:39 PM, Alex Schultz 
> wrote:
>
>> I agree that we shouldn't need to sync as we should be able to just
>> update the fuel-library package. That being said, I think there might be a
>> few issues with this method. The first issue is with plugins and how to
>> properly handle the distribution of the plugins as they may also include
>> puppet code that needs to be installed on the other nodes for a deployment.
>> Currently I do not believe we install the plugin packages anywhere except
>> the master and when they do get installed there may be some post-install
>> actions that are only valid for the master.  Another issue is being
>> flexible enough to allow for deployment engineers to make custom changes
>> for a given environment.  Unless we can provide an improved process to
>> allow for people to provide in place modifications for an environment, we
>> can't do away with the rsync.
>>
>> If we want to go completely down the package route (and we probably
>> should), we need to make sure that all of the other pieces that currently
>> go together to make a complete fuel deployment can be updated in the same
>> way.
>>
>> -Alex
>>
>>


Re: [openstack-dev] [neutron][nova] QoS Neutron-Nova integration

2015-09-09 Thread Vikram Choudhary
Hi Ajo,

I am in. Thanks for the information.

Thanks
Vikram

On Wed, Sep 9, 2015 at 8:05 PM, Miguel Angel Ajo 
wrote:

>
>Hi,
>
>  Looking forward to the M cycle,
>
>  I was wondering if we could loop you in our next week Neutron/QoS
> meeting
> on #openstack-meeting-3 around 16:00 CEST Sept 16th.
>
>  We're thinking about several ways we should integrate QoS between
> nova and the new
> extendable QoS service on neutron, specially regarding flavor integration,
> and guaranteed
> limits (to avoid compute node and in-node physical interface overcommit).
>
> Some details from this last meeting:
>
> http://eavesdrop.openstack.org/meetings/neutron_qos/2015/neutron_qos.2015-09-09-14.07.html
>
> Best regards,
> Miguel Ángel.
>
>
>


[openstack-dev] [glance] [nova] Verification of glance images before boot

2015-09-09 Thread Maish Saidel-Keesing
How can I know that the image that a new instance is spawned from is 
actually the image that was originally registered in glance, and has 
not been maliciously tampered with in some way?


Is there some kind of verification that is performed against the md5sum 
of the registered image in glance before a new instance is spawned?


Is that done by Nova?
Glance?
Both? Neither?
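For context, the check being asked about boils down to re-hashing the downloaded image bytes and comparing them against the checksum Glance recorded at registration time (the image's 'checksum' field, historically an md5 of the image data). Here is a minimal, hypothetical sketch of that comparison; it is not the actual Nova/Glance code path, and the sample data is a stand-in.

```python
import hashlib

def image_checksum_matches(image_data, expected_md5):
    """Return True when the downloaded image bytes hash to the checksum
    Glance stored at registration time.

    In a real flow, 'image_data' would be streamed from the Glance v2
    image-data endpoint and 'expected_md5' read from the image's
    'checksum' metadata field; here both are stand-ins.
    """
    return hashlib.md5(image_data).hexdigest() == expected_md5

data = b"fake-image-bytes"
good = hashlib.md5(data).hexdigest()

print(image_checksum_matches(data, good))                # True
print(image_checksum_matches(b"tampered" + data, good))  # False
```

Note that an md5 match only detects corruption or naive tampering; defending against a capable attacker is what the signing-and-verification blueprint mentioned below is for.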

The reason I ask is that some 'paranoid' security people (that is their 
job, I suppose) have raised these questions.


I know there is a glance BP already merged for L [1] - but I would like 
to understand the actual flow in a bit more detail.


Thanks.

[1] 
https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support


--
Best Regards,
Maish Saidel-Keesing



Re: [openstack-dev] [tripleo] Upgrades, Releases & Branches

2015-09-09 Thread Steven Hardy
On Tue, Aug 18, 2015 at 02:28:39PM -0400, James Slagle wrote:
> On Tue, Aug 18, 2015 at 2:10 PM, Steven Hardy  wrote:
> > On Mon, Aug 17, 2015 at 03:29:07PM -0400, James Slagle wrote:
> >> On Mon, Aug 17, 2015 at 9:28 AM, Steven Hardy  wrote:
> >> > Hi all,
> >> >
> >> > Recently I had some discussion with folks around the future strategy for
> >> > TripleO wrt upgrades, releases and branches, specifically:
> >> >
> >> > - How do we support a "stable" TripleO release/branch that enables folks 
> >> > to
> >> >   easily deploy the current stable release of OpenStack
> >> > - Related to the above, how do we allow development of TripleO components
> >> >   (and in particular t-h-t) to proceed without imposing undue constraints
> >> >   on what new features may be used (e.g new-for-liberty Heat features 
> >> > which
> >> >   aren't present in the current released OpenStack version)
> >> > - We're aiming to provide upgrade support, thus from and to which 
> >> > versions?
> >> >
> >> > I know historically TripleO has taken something of a developer and/or
> >> > continuous deployment model for granted, but I'd like to propose that we
> >> > revisit that discusion, such that we move towards something that's more
> >> > consumable by users/operators that are consuming the OpenStack 
> >> > coordinated
> >> > releases.
> >> >
> >> > The obvious answer is a stable branch for certain TripleO components, and
> >> > in particular for t-h-t, but this has disadvantages if we take the
> >> > OpenStack wide "no feature backports" approach - for example "upgrade
> >> > support to liberty" could be considered a feature, and many other TripleO
> >> > "features" are really more about making features of the deployed 
> >> > OpenStack
> >> > services consumable.
> >> >
> >> > I'd like propose we take a somewhat modified "release branch" approach,
> >> > which combines many of the advantages of the stable-branch model, but
> >> > allows for a somewhat more liberal approach to backports, where most 
> >> > things
> >> > are considered valid backports provided they work with the currently
> >> > released OpenStack services (e.g right now, a t-h-t release/kilo branch
> >> > would have to maintain compatibility with a kilo Heat in the undercloud)
> >>
> >> I like the idea, it seems reasonable to me.
> >>
> >> I do think we should clarify if the rule is:
> >>
> >> We *can* backport anything to release/kilo that doesn't break
> >> compatibility with kilo Heat.
> >>
> >> Or:
> >>
> >> We *must* backport anything to release/kilo that doesn't break
> >> compatibility with kilo Heat.
> >
> > I think I was envisaging something closer to the "must", but as Zane said,
> > more a "should", which if automated would become an opt-out thing, e.g
> > through a commit tag "nobackport" or whatever.
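As an aside, the opt-out automation floated in the quoted paragraph could be as simple as scanning commit messages for the tag. This helper is purely hypothetical (the "nobackport" tag and any such tooling are only an idea in this thread):

```python
def should_backport(commit_message):
    """Decide whether a commit is a candidate for automatic backport to
    the release branch, with opt-out via a 'nobackport' tag anywhere in
    the commit message. Hypothetical sketch only - no such tooling
    exists; the thread merely floats the idea.
    """
    tags = {line.strip().lower() for line in commit_message.splitlines()}
    return "nobackport" not in tags

msg = "Add new-for-liberty feature\n\nnobackport"
print(should_backport(msg))                                # False
print(should_backport("Fix typo in template description"))  # True
```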
> >
> > Ideally, for the upstream branch we should probably be backporting most
> > things which don't break compatibility with the currently released
> > OpenStack services, and don't introduce gratuitous interface changes or
> > other backwards incompatibilities.
> >
> > I know our template "interfaces" are fuzzily defined but here are some
> > ideas of things we might not backport in addition to the "must work with
> > kilo" rule:
> >
> > - Removing parameters or resource types used to hook in external/optional
> >   code (e.g *ExtraConfig etc) - we should advertise these as deprecated via
> >   the descriptions, docs and release notes, then have them removed only
> >   when moving between TripleO releases (same as deprecation policy for most
> >   other projects)
> >
> > - Adding support for new services which either don't exist or weren't
> >   considered stable in the current released version
> >
> >> If it's the former, I think we'd get into a lot of subjective
> >> discussions around if we want certain things backported or not.
> >> Essentially it's the same discussion that happens for stable/*, except
> >> we consider features as well. This could become quite difficult to
> >> manage, and lead to a lot of reviewer opinionated inconsistency into
> >> what actually ends up getting backported.
> >
> > True, but this decision making ends up happening at some point regardless,
> > e.g. what patches do you carry in a downstream package etc.?  But you're
> > right: defining the process early should help with consistency.
> >
> >>
> >> For instance, there could be a very large and disruptive feature that
> >> doesn't break compatibility at all, but some users may not want to see
> >> it in release/kilo. Or, something like the recent proposed patch to
> >> rename a bunch of templates by dropping the "-puppet". That doesn't
> >> break compatibility with a kilo Heat at all, however it could break
> >> compatibility with someone's scripts or external tooling, and might be
> >> a considered an "API" incompatible change. The consuming downstreams
> >> (RDO) may not want to consume such a change. I know we don't have any
> >> 

Re: [openstack-dev] [tripleo] Upgrade plans for RDO Manager - Brainstorming

2015-09-09 Thread Zane Bitter

On 24/08/15 15:12, Emilien Macchi wrote:

Hi,

So I've been working on OpenStack deployments for 4 years now, and so far
RDO Manager is the second installer - after SpinalStack [1] - I've worked on.

SpinalStack already had interesting features [2] that allowed us to
upgrade our customer platforms almost every month, with full testing
and automation.

Now that we have RDO Manager, I would be happy to share my experience
on the topic and help to make it possible in the next cycle.

For that, I created an etherpad [3], which is not too long and focused
on basic topics for now. This is technical and focused on Infrastructure
upgrade automation.

Feel free to continue discussion on this thread or directly in the etherpad.

[1] http://spinalstack.enovance.com
[2] http://spinalstack.enovance.com/en/latest/dev/upgrade.html
[3] https://etherpad.openstack.org/p/rdo-manager-upgrades


I added some notes on the etherpad, but I think this discussion poses a 
larger question: what is TripleO? Why are we using Heat? Because to me 
the major benefit of Heat is that it maintains a record of the current 
state of the system that can be used to manage upgrades. And if we're 
not going to make use of that - if we're going to determine the state of 
the system by introspecting nodes and update it by using Ansible scripts 
without Heat's knowledge, then we probably shouldn't be using Heat at all.


I'm not saying that to close off the option - I think if Heat is not the 
best tool for the job then we should definitely consider other options. 
And right now it really is not the best tool for the job. Adopting 
Puppet (which was a necessary choice IMO) has meant that the 
responsibility for what I call "software orchestration"[1] is split 
awkwardly between Puppet and Heat. For example, the Puppet manifests are 
baked into images on the servers, so Heat doesn't know when they've 
changed and can't retrigger Puppet to update the configuration when they 
do. We're left trying to reverse-engineer what is supposed to be a 
declarative model from the workflow that we want for things like 
updates/upgrades.


That said, I think there's still some cause for optimism: in a world 
where every service is deployed in a container and every container has 
its own Heat SoftwareDeployment, the boundary between Heat's 
responsibilities and Puppet's would be much clearer. The deployment 
could conceivably fit a declarative model much better, and even offer a 
lot of flexibility in which services run on which nodes. We won't really 
know until we try, but it seems distinctly possible to aspire toward 
Heat actually making things easier rather than just not making them too 
much harder. And there is stuff on the long-term roadmap that could be 
really great if only we had time to devote to it - for example, as I 
mentioned in the etherpad, I'd love to get Heat's user hooks integrated 
with Mistral so that we could have fully-automated, highly-available (in 
a hypothetical future HA undercloud) live migration of workloads off 
compute nodes during updates.


In the meantime, however, I do think that we have all the tools in Heat 
that we need to cobble together what we need to do. In Liberty, Heat 
supports batched rolling updates of ResourceGroups, so we won't need to 
use user hooks to cobble together poor-man's batched update support any 
more. We can use the user hooks for their intended purpose of notifying 
the client when to live-migrate compute workloads off a server that is 
about to be upgraded. The Heat templates should already tell us exactly 
which services are running on which nodes. We can trigger particular 
software deployments on a stack update with a parameter value change (as 
we already do with the yum update deployment). For operations that 
happen in isolation on a single server, we can model them as 
SoftwareDeployment resources within the individual server templates. For 
operations that are synchronised across a group of servers (e.g. 
disabling services on the controller nodes in preparation for a DB 
migration) we can model them as a SoftwareDeploymentGroup resource in 
the parent template. And for chaining multiple sequential operations 
(e.g. disable services, migrate database, enable services), we can chain 
outputs to inputs to handle both ordering and triggering. I'm sure there 
will be many subtleties, but I don't think we *need* Ansible in the mix.
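
As a rough illustration of that outputs-to-inputs chaining, here is a minimal Python model (not actual Heat template code - the step names are hypothetical, and Heat's real mechanism is declarative rather than imperative):

```python
def run_chain(steps, initial=None):
    """Run steps strictly in order, feeding each step's output into the
    next step's input -- the data dependency enforces both the ordering
    and the triggering, as chaining outputs to inputs does in Heat."""
    value = initial
    completed = []
    for name, action in steps:
        value = action(value)
        completed.append(name)
    return completed, value

# Hypothetical upgrade workflow: disable services, migrate DB, re-enable.
steps = [
    ("disable_services", lambda _: "services stopped"),
    ("migrate_database", lambda prev: prev + "; db migrated"),
    ("enable_services", lambda prev: prev + "; services started"),
]
order, result = run_chain(steps)
```

Because each step consumes the previous step's output, no step can run before its predecessor completes - the same property that makes output/input chaining usable for sequencing deployments.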


So it's really up to the wider TripleO project team to decide which path 
to go down. I am genuinely not bothered whether we choose Heat or 
Ansible. There may even be ways they can work together without 
compromising either model. But I would be pretty uncomfortable with a 
mix where we use Heat for deployment and Ansible for doing upgrades 
behind Heat's back.


cheers,
Zane.


[1] 
http://www.zerobanana.com/archive/2014/05/08#heat-configuration-management


__
OpenStack Development Mailing List 

Re: [openstack-dev] OpenStack support for Amazon Concepts - was Re: cloud-init IPv6 support

2015-09-09 Thread Joshua Harlow

And here is the code that does this (for cloudinit 0.7.x):

https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/helpers/openstack.py

This same code is used by the config drive datasource (the one that 
makes a disk/iso) in cloudinit and the http endpoint based datasource in 
cloudinit (the one that exports /openstack/ on the metadata server).


https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/helpers/openstack.py#L320 
(for the config drive subclass) and 
https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/helpers/openstack.py#L419 
(for the http endpoint based subclass).


Note that in the following: 
https://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/cloudinit/sources/DataSourceOpenStack.py#L77 
you can already provide a different metadata url (say, one that uses 
ipv6) that will override the default "http://169.254.169.254" defined 
in there. This can be done in a few different ways, but feel free to jump 
on the #cloud-init IRC channel and ask if interested, or find me on IRC 
somewhere (since I'm the main one who worked on all the above code).
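
The overall "try candidate metadata URLs in order" behaviour can be sketched like this (a simplified illustration, not cloud-init's actual code - the IPv6 URL is a made-up example of an operator override, and the fetcher is injected so the sketch runs without a live endpoint):

```python
def pick_metadata_url(candidates, fetch):
    """Return the first candidate base URL that answers, or None.
    `fetch` is any callable that raises IOError on failure."""
    for url in candidates:
        try:
            fetch(url)
            return url
        except IOError:
            continue
    return None

# Hypothetical candidate list: the de facto IPv4 default plus an
# operator-supplied IPv6 override, as discussed above.
candidates = [
    "http://169.254.169.254",
    "http://[fe80::a9fe:a9fe]",
]

def fake_fetch(url):
    # Pretend only the IPv6 endpoint is reachable.
    if "fe80" not in url:
        raise IOError("unreachable")

chosen = pick_metadata_url(candidates, fake_fetch)  # the IPv6 URL wins
```

Injecting the fetcher also mirrors why the override works at all: the datasource only needs a list of base URLs, not any knowledge of which address family they use.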


-Josh

Fox, Kevin M wrote:

No, we already extend the metadata server with our own stuff. See
/openstack/ on the metadata server. Cloud-init even supports the
extensions. Supporting IPv6 as well as v4 is the same. Why does it
matter if AWS doesn't currently support it? They can support it if they
want in the future and reuse code, or do their own thing and have to
convince cloud-init to support their way too. But why should that hold
back the OpenStack metadata server now? Let's lead rather than follow.

Thanks,
Kevin

*From:* Sean M. Collins
*Sent:* Saturday, September 05, 2015 3:19:48 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Cc:* Fox, Kevin M; PAUL CARVER
*Subject:* OpenStack support for Amazon Concepts - was Re:
[openstack-dev] cloud-init IPv6 support

On Fri, Sep 04, 2015 at 04:20:23PM EDT, Kevin Benton wrote:

 Right, it depends on your perspective of who 'owns' the API. Is it
 cloud-init or EC2?

 At this point I would argue that cloud-init is in control because it would
 be a large undertaking to switch all of the AMI's on Amazon to something
 else. However, I know Sean disagrees with me on this point so I'll let him
 reply here.



Here's my take:

Cloud-Init is a *client* of the Metadata API. The OpenStack Metadata API
in both the Neutron and Nova projects should honor all the details of the
Metadata API that is documented at:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

This means that this is a compatibility layer that OpenStack has
implemented so that users can use appliances, applications, and
operating system images in both Amazon EC2 and an OpenStack environment.

Yes, we can make changes to cloud-init. However, there is no guarantee
that all users of the Metadata API are exclusively using cloud-init as
their client. It is highly unlikely that people are rolling their own
Metadata API clients, but it's a contract we've made with users. This
includes transport level details like the IP address that the service
listens on.

The Metadata API is an established API that Amazon introduced years ago,
and we shouldn't be "improving" APIs that we don't control. If Amazon
were to introduce IPv6 support to the Metadata API tomorrow, we would
naturally implement it exactly the way they implemented it in EC2. We'd
honor the contract that Amazon made with its users, in our Metadata API,
since it is a compatibility layer.

However, since they haven't defined transport level details of the
Metadata API, regarding IPv6 - we can't take it upon ourselves to pick a
solution. It is not our API.

The nice thing about config-drive is that we've created a new mechanism
for bootstrapping instances - by replacing the transport level details
of the API. Rather than being a link-local address that instances access
over HTTP, it's a device that guests can mount and read. The actual
contents of the drive may have a similar schema as the Metadata API, but
I think at this point we've made enough of a differentiation between the
EC2 Metadata API and config-drive that I believe the contents of the
actual drive that the instance mounts can be changed without breaking
user expectations - since config-drive was developed by the OpenStack
community. The point being that we call it "config-drive" in
conversation and our docs. Users understand that config-drive is a
different feature.

I've had this same conversation about the Security Group API that we
have. We've named it the same thing as the Amazon API, but then went and
made all the fields different, inexplicably. Thankfully, it's just the
names of the fields, rather than being huge conceptual changes.


Re: [openstack-dev] [Fuel] Nominate Andrey Sledzinskiy for fuel-ostf core

2015-09-09 Thread Dmitry Tyzhnenko
+1
On Sep 8, 2015 at 13:07, "Alexander Kostrikov" <
akostri...@mirantis.com> wrote:

> +1
>
> On Tue, Sep 8, 2015 at 9:07 AM, Dmitriy Shulyak 
> wrote:
>
>> +1
>>
>> On Tue, Sep 8, 2015 at 9:02 AM, Anastasia Urlapova <
>> aurlap...@mirantis.com> wrote:
>>
>>> +1
>>>
>>> On Mon, Sep 7, 2015 at 6:30 PM, Tatyana Leontovich <
>>> tleontov...@mirantis.com> wrote:
>>>
 Fuelers,

 I'd like to nominate Andrey Sledzinskiy for the fuel-ostf core team.
 He's been doing a great job writing patches (e.g. support for detached
 services).
 Also his review comments always have a lot of detailed information for
 further improvements


 http://stackalytics.com/?user_id=asledzinskiy=all_type=all=fuel-ostf

 Please vote with +1/-1 for approval/objection.

 Core reviewer approval process definition:
 https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess

 --
 Best regards,
 Tatyana


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Kind Regards,
>
> Alexandr Kostrikov,
>
> Mirantis, Inc.
>
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (925) 716-64-52
>
> Skype: akostrikov_mirantis
>
> E-mail: akostri...@mirantis.com 
>
> *www.mirantis.com *
> *www.mirantis.ru *
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] cloud-init IPv6 support

2015-09-09 Thread Fox, Kevin M
I think the DNS idea is going to run into problems for tenants that want to run 
their own, or have existing, DNS servers. It may not play nicely with Designate 
either.

Kevin


From: Ian Wells [ijw.ubu...@cack.org.uk]
Sent: Wednesday, September 09, 2015 12:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support

Neutron already offers a DNS server (within the DHCP namespace, I think).  It 
does forward on non-local queries to an external DNS server, but it already 
serves local names for instances; we'd simply have to set one aside, or perhaps 
use one in a 'root' but nonlocal domain (metadata.openstack e.g.).  In fact, 
this improves things slightly over the IPv4 metadata server: IPv4 metadata is 
usually reached via the router, whereas in IPv6, if we have a choice over 
addresses, we can use a link-local address (and any link-local address will 
do; it's not an address that is 'magic' in some way, thanks to the wonder of 
service advertisement).

And per previous comments about 'Amazon owns this' - the current metadata 
service is a de facto standard, which Amazon initiated but is not owned by 
anybody, and it's not the only standard.  If you'd like proof of the former, I 
believe our metadata service offers /openstack/ URLs, unlike Amazon (mirroring 
the /openstack/ files on the config drive); and on the latter, config-drive and 
Amazon-style metadata are only two of quite an assortment of data providers 
that cloud-init will query.  If it makes you think of it differently, think of 
this as the *OpenStack* IPv6 metadata service, and not the 
'will-be-Amazon-one-day-maybe' service.


On 8 September 2015 at 17:03, Clint Byrum 
> wrote:
Neutron would add a soft router that only knows the route to the metadata
service (and any other services you want your neutron private network vms
to be able to reach). This is not unique to the metadata service. Heat,
Trove, etc, all want this as a feature so that one can poke holes out of
these private networks only to the places where the cloud operator has
services running.

Excerpts from Fox, Kevin M's message of 2015-09-08 14:44:35 -0700:
> How does that work with neutron private networks?
>
> Thanks,
> Kevin
> 
> From: Clint Byrum [cl...@fewbar.com]
> Sent: Tuesday, September 08, 2015 1:35 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [Neutron] cloud-init IPv6 support
>
> Excerpts from Nir Yechiel's message of 2014-07-07 09:15:09 -0700:
> > AFAIK, the cloud-init metadata service can currently be accessed only by 
> > sending a request to http://169.254.169.254, and no IPv6 equivalent is 
> > currently implemented. Is anyone working on this, or has anyone tried to 
> > address this before?
> >
>
> I'm not sure we'd want to carry the way metadata works forward now that
> we have had some time to think about this.
>
> We already have DHCP6 and NDP. Just use one of those, and set the host's
> name to a nonce that it can use to look up the endpoint for instance
> differentiation via DNS SRV records. So if you were told you are
>
> d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com
>
> Then you look that up as a SRV record on your configured DNS resolver,
> and connect to the host name returned and do something like  GET
> /d02a684d-56ea-44bc-9eba-18d997b1d32d
>
> And voilà, metadata returns without any special link local thing, and
> it works like any other dual stack application on the planet.
>
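
The client side of the SRV-record scheme sketched above could look roughly like this (illustrative Python; the hostnames are invented, 8775 is the conventional nova metadata port, and the SRV answers are passed in as pre-fetched tuples rather than coming from a real resolver):

```python
def metadata_url_from_srv(nonce_name, srv_records):
    """Build the metadata URL from the host's nonce name and its SRV
    answers, given as (priority, weight, port, target) tuples."""
    # Lowest priority wins; a real client would weight ties randomly
    # per the SRV selection rules.
    _prio, _weight, port, target = min(srv_records)
    instance_id = nonce_name.split(".")[0]
    return "http://%s:%d/%s" % (target.rstrip("."), port, instance_id)

records = [
    (20, 0, 8775, "backup.region.cloud.com."),
    (10, 0, 8775, "metadata.region.cloud.com."),
]
url = metadata_url_from_srv(
    "d02a684d-56ea-44bc-9eba-18d997b1d32d.region.cloud.com", records)
# -> http://metadata.region.cloud.com:8775/d02a684d-56ea-44bc-9eba-18d997b1d32d
```

The nonce hostname does double duty here: it tells the instance where to look (via SRV) and what to ask for (the instance-id path component), with no link-local magic involved.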

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-09 Thread Alex Schultz
Hey Vladimir,


>
> The idea is to remove MOS DEB repo from the Fuel master node by default
> and use online MOS repo instead. Pros of such an approach are:
>
> 0) Reduced requirement for the master node minimal disk space
>

Is this a problem? How much disk space is saved if I have to go create a
local mirror via fuel-createmirror?


> 1) There won't be such things as [1] and [2], thus a less complicated
> flow, fewer errors, easier to maintain, easier to understand, easier to
> troubleshoot
> 2) If one wants to have a local mirror, the flow is the same as in the case
> of upstream repos (fuel-createmirror), which is clear for a user to
> understand.
>

From the issues I've seen, fuel-createmirror isn't very straightforward
and has some issues, making for a bad UX.


>
> Many people still associate ISO with MOS, but it is not true when using
> package based delivery approach.
>
> It is easy to define necessary repos during deployment and thus it is easy
> to control what exactly is going to be installed on slave nodes.
>
> What do you guys think of it?
>
>
>
Reliance on internet connectivity has been an issue since 6.1. For many
large users, complete access to the internet is not available or not
desired.  If we want to continue down this path, we need to improve the
tools to setup the local mirror and properly document what urls/ports/etc
need to be available for the installation of openstack and any mirror
creation process.  The ideal thing is to have an all-in-one CD similar to a
live cd that allows a user to completely try out fuel wherever they want
with out further requirements of internet access.  If we don't want to
continue with that, we need to do a better job around providing the tools
for a user to get up and running in a timely fashion.  Perhaps providing an
net-only iso and an all-included iso would be a better solution so people
will have their expectations properly set up front?

-Alex


>
> Vladimir Kozhukalov
>
> On Tue, Sep 8, 2015 at 4:53 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> The idea is to remove MOS DEB repo from the Fuel master node by default
>> and use online MOS repo instead. Pros of such an approach are:
>>
>> 0) Reduced requirement for the master node minimal disk space
>> 1) There won't be such things as [1] and [2], thus a less complicated
>> flow, fewer errors, easier to maintain, easier to understand, easier to
>> troubleshoot
>> 2) If one wants to have a local mirror, the flow is the same as in the case
>> of upstream repos (fuel-createmirror), which is clear for a user to
>> understand.
>>
>> Many people still associate ISO with MOS
>>
>>
>>
>>
>>
>> [1]
>> https://github.com/stackforge/fuel-main/blob/master/iso/ks.template#L416-L419
>> [2]
>> https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/host_system.py#L109-L115
>>
>>
>> Vladimir Kozhukalov
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Something about being a PTL

2015-09-09 Thread Michael Krotscheck
Beautiful summary, Flavio, especially the points about creating new PTLs.
It's the bus-number argument: How many people have to get hit by a bus for
the project to falter? It's best to have a backup.

Also: Being a PTL is a full-time job.

From working with current and former PTLs, I've noticed that it's almost
impossible to split your time between being a PTL and, say, being a member
of the TC, or working on an employer's private
feature/cloud/deployment/etc. For a far more eloquent explanation of why
this is, I defer to Devananda's wonderful non-candidacy email last spring.

http://lists.openstack.org/pipermail/openstack-dev/2015-April/062364.html

Not many people have the privilege of working for a company that supports
that level of upstream commitment. If your employer doesn't, send me your
Resumé ;).

Michael

On Wed, Sep 9, 2015 at 8:15 AM Flavio Percoco  wrote:

> Greetings,
>
> Next week many folks will be running for PTL positions and I thought
> about taking the time to dump[0] some thoughts about what being a PTL
> means - at least for me - and what one should consider before running.
>
> Since the audience I want to reach is mostly in this mailing list, I
> thought about sending it here as well.
>
> [0] http://blog.flaper87.com/post/something-about-being-a-ptl/
> Flavio
>
>
> It's that time of the cycle, in OpenStack, when projects need to elect
> who's going to be the PTL for the next 6 months. People look at the,
> hopefully many, candidacies and vote based on the proposals that are
> more sound to them. I believe, for the PTL elections, the voting
> process has worked decently, which is why this post is not meant for
> voters but for the, hopefully many, PTL candidates.
>
> First and foremost, thank you. Thanks for raising your hand and being
> willing to take on this role. It's an honor to have you in the
> community and I wish you the best of luck in this round. Below are a
> few things that I hope will help you in the preparation of your
> candidacy and that I also hope will help make you a better PTL and
> community member.
>
>
> Why do you want to be a PTL?
> ============================
>
> Before you even start writing your candidacy, please ask yourself why you
> want to be a PTL. What is it that you want to bring to the project
> that is good for both the project and the community? You don't really
> need to get stuck on this question forever, you don't really need to
> bring something new to the project.
>
> In my opinion, a very good answer for the above could be: "I believe
> I'll provide the right guidance to the community and the project."
>
> Seriously, one mistake that new PTLs often make is to believe they are
> on their own. It turns out that PTLs aren't. The whole point of being a
> PTL is to help the community and to improve it. You're not going to do
> that if you think you're the one pulling the community. PTLs ought to
> work *with* the community, not *for* the community.
>
> This leads me to my next point
>
> Be part of the community
> ========================
>
> Being a PTL is more than just going through launchpad and keeping an
> eye on the milestones. That's a lot of work, true. But here's a
> secret, it takes more time to be involved with the community of the
> project you're serving than going through launchpad.
>
> As a PTL, you have to be around. You have to keep an eye on the
> mailing list on a daily basis. You have to talk to the members of the
> community you're serving because you have to be up-to-date about the
> things that are happening in the project and the community. There may
> be conflicts in reviews and bugs, and you have to be there to help solve
> those.
>
> Among all the things you'll have to do, the community should be in the
> top 2 of your priorities. I'm not talking just about the community of
> the project you're working on. I'm talking about OpenStack. Does your
> project have an impact on other projects? Is your project part of
> DefCore? Is your project widely deployed? What are the deprecation
> guarantees provided? Does your project consume common libraries? What
> can your project contribute back to the rest of the community?
>
> There are *many* things related to the project's community and its
> interaction with the rest of the OpenStack community that are
> important and that should be taken care of. However, you're not alone,
> you have a community. Remember, you'll be serving the community, it's
> not the other way around. Working with the community is the best thing
> you can do.
>
> As you can imagine, the above is exhausting and it takes time. It
> takes a lot of time, which leads me to my next point.
>
> Make sure you'll have time
> ==========================
>
> There are a few things impossible in this world, predicting time
> availability is one of them. Nonetheless, we can get really close
> estimates and you should strive, *before* sending your candidacy, to
> get the closest estimate of your upstream 

Re: [openstack-dev] [Barbican] Nominating Dave Mccowan for Barbican core

2015-09-09 Thread michael mccune
i'm not a core, but +1 from me. Dave has made solid contributions and 
would be a great addition to the core team.


mike

On 09/08/2015 12:05 PM, Juan Antonio Osorio wrote:

I'd like to nominate Dave Mccowan for the Barbican core review team.

He has been an active contributor both in doing relevant code pieces and
making useful and thorough reviews; And so I think he would make a great
addition to the team.

Please bring the +1's :D

Cheers!

--
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [nova] Verification of glance images before boot

2015-09-09 Thread Nikhil Komawar
That's correct.

The size and the checksum are to be verified outside of Glance, in this
case by Nova. However, note that not all Nova virt drivers necessarily
use py-glanceclient, so you would want to check the download-specific
code in the virt driver your Nova deployment is using.

Having said that, essentially the flow seems appropriate. An error must be
raised on mismatch.

The signing BP was to help prevent a compromised Glance from changing
the checksum and image blob at the same time. Using a digital signature,
you can prevent download of compromised data. However, the feature has
only just been implemented in Glance; users may take time to adopt it.
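
The download-side check discussed in this thread boils down to something like the following (a simplified sketch; a real client computes the digest incrementally while streaming rather than on the whole buffer):

```python
import hashlib

def verify_download(data, expected_md5):
    """Recompute the md5 of the downloaded image bytes and compare it
    against the checksum Glance reported; raise on any mismatch."""
    actual = hashlib.md5(data).hexdigest()
    if actual != expected_md5:
        raise IOError("checksum mismatch: expected %s, got %s"
                      % (expected_md5, actual))
    return actual

image_bytes = b"example image payload"
expected = hashlib.md5(image_bytes).hexdigest()
verify_download(image_bytes, expected)  # silent on success
```

As noted above, this only guards against corruption in transit: if an attacker can rewrite both the blob and its stored checksum, the check passes, which is exactly the gap the signing blueprint addresses.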



On 9/9/15 11:15 AM, stuart.mcla...@hp.com wrote:
>
> The glance client (running 'inside' the Nova server) will re-calculate
> the checksum as it downloads the image and then compare it against the
> expected value. If they don't match an error will be raised.
>
>> How can I know that the image that a new instance is spawned from - is
>> actually the image that was originally registered in glance - and has
>> not been maliciously tampered with in some way?
>>
>> Is there some kind of verification that is performed against the md5sum
>> of the registered image in glance before a new instance is spawned?
>>
>> Is that done by Nova?
>> Glance?
>> Both? Neither?
>>
>> The reason I ask is some 'paranoid' security (that is their job I
>> suppose) people have raised these questions.
>>
>> I know there is a glance BP already merged for L [1] - but I would like
>> to understand the actual flow in a bit more detail.
>>
>> Thanks.
>>
>> [1]
>> https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support
>>
>>
>> -- 
>> Best Regards,
>> Maish Saidel-Keesing
>>
>>
>>
>> --
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> End of OpenStack-dev Digest, Vol 41, Issue 22
>> *
>>
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][qa] Microsoft Hyper-V CI removed from nova-ci group until fixed

2015-09-09 Thread Matt Riedemann
I noticed hyper-v CI reporting a +1 on a change [1] that actually failed 
with a bad looking merge conflict, so I've removed the hyper-v CI 
account from the nova-ci group in Gerrit [2].  From talking with 
ociuhandu it sounds like zuul issues and they are working on it.


Ping me or John when things are fixed and we can add that account back 
to the nova-ci group in Gerrit.


[1] https://review.openstack.org/#/c/214493/
[2] https://review.openstack.org/#/admin/groups/511,members

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-09 Thread Adam Lawson
We thought about doing this as well and opted for a local repo, at least
for now. If you want to offer an online repo, I think it could be useful to
allow either scenario.

Just a thought from your friendly neighbors here. ; )

/adam
On Sep 8, 2015 7:03 AM, "Vladimir Kozhukalov" 
wrote:

> Sorry, fat fingers => early sending.
>
> =
> Dear colleagues,
>
> The idea is to remove MOS DEB repo from the Fuel master node by default
> and use online MOS repo instead. Pros of such an approach are:
>
> 0) Reduced requirement for the master node minimal disk space
> 1) There won't be such things as [1] and [2], thus a less complicated
> flow, fewer errors, easier to maintain, easier to understand, easier to
> troubleshoot
> 2) If one wants to have a local mirror, the flow is the same as in the case
> of upstream repos (fuel-createmirror), which is clear for a user to
> understand.
>
> Many people still associate ISO with MOS, but it is not true when using
> package based delivery approach.
>
> It is easy to define necessary repos during deployment and thus it is easy
> to control what exactly is going to be installed on slave nodes.
>
> What do you guys think of it?
>
>
>
> Vladimir Kozhukalov
>
> On Tue, Sep 8, 2015 at 4:53 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> The idea is to remove MOS DEB repo from the Fuel master node by default
>> and use online MOS repo instead. Pros of such an approach are:
>>
>> 0) Reduced requirement for the master node minimal disk space
>> 1) There won't be such things as [1] and [2], thus a less complicated
>> flow, fewer errors, easier to maintain, easier to understand, easier to
>> troubleshoot
>> 2) If one wants to have a local mirror, the flow is the same as in the case
>> of upstream repos (fuel-createmirror), which is clear for a user to
>> understand.
>>
>> Many people still associate ISO with MOS
>>
>>
>>
>>
>>
>> [1]
>> https://github.com/stackforge/fuel-main/blob/master/iso/ks.template#L416-L419
>> [2]
>> https://github.com/stackforge/fuel-web/blob/master/fuel_upgrade_system/fuel_upgrade/fuel_upgrade/engines/host_system.py#L109-L115
>>
>>
>> Vladimir Kozhukalov
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Something about being a PTL

2015-09-09 Thread Fox, Kevin M
Very well said. Thank you for this.

Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Wednesday, September 09, 2015 8:10 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [all] Something about being a PTL

Greetings,

Next week many folks will be running for PTL positions and I thought
about taking the time to dump[0] some thoughts about what being a PTL
means - at least for me - and what one should consider before running.

Since the audience I want to reach is mostly in this mailing list, I
thought about sending it here as well.

[0] http://blog.flaper87.com/post/something-about-being-a-ptl/
Flavio


It's that time of the cycle, in OpenStack, when projects need to elect
who's going to be the PTL for the next 6 months. People look at the,
hopefully many, candidacies and vote based on the proposals that sound
best to them. I believe, for the PTL elections, the voting process has
worked decently, which is why this post is not meant for voters but for
the, hopefully many, PTL candidates.

First and foremost, thank you. Thanks for raising your hand and being
willing to take on this role. It's an honor to have you in the
community and I wish you the best of luck in this round. Below are a
few things that I hope will help you in the preparation of your
candidacy and that I also hope will help make you a better PTL and
community member.


Why do you want to be a PTL?


Before you even start writing your candidacy, please ask yourself why
you want to be a PTL. What is it that you want to bring to the project
that is good for both the project and the community? You don't need to
get stuck on this question forever, and you don't need to bring
something new to the project.

In my opinion, a very good answer for the above could be: "I believe
I'll provide the right guidance to the community and the project."

Seriously, one mistake that new PTLs often make is to believe they are
on their own. It turns out that PTLs aren't. The whole point of being a
PTL is to help the community and to improve it. You're not going to do
that if you think you're the one pulling the community. PTLs ought to
work *with* the community, not *for* the community.

This leads me to my next point.

Be part of the community


Being a PTL is more than just going through Launchpad and keeping an
eye on the milestones. That's a lot of work, true. But here's a
secret: it takes more time to be involved with the community of the
project you're serving than to go through Launchpad.

As a PTL, you have to be around. You have to keep an eye on the
mailing list on a daily basis. You have to talk to the members of the
community you're serving because you have to be up-to-date about the
things that are happening in the project and the community. There may
be conflicts in reviews and bugs, and you have to be there to help
solve them.

Among all the things you'll have to do, the community should be in the
top 2 of your priorities. I'm not talking just about the community of
the project you're working on. I'm talking about OpenStack. Does your
project have an impact on other projects? Is your project part of
DefCore? Is your project widely deployed? What are the deprecation
guarantees provided? Does your project consume common libraries? What
can your project contribute back to the rest of the community?

There are *many* things related to the project's community and its
interaction with the rest of the OpenStack community that are
important and that should be taken care of. However, you're not alone,
you have a community. Remember, you'll be serving the community, it's
not the other way around. Working with the community is the best thing
you can do.

As you can imagine, the above is exhausting and it takes time. It
takes a lot of time, which leads me to my next point.

Make sure you'll have time
==========================

There are a few things that are impossible in this world; predicting
time availability is one of them. Nonetheless, we can get really close
estimates, and you should strive, *before* sending your candidacy, to
get the closest estimate of your upstream availability for the next 6
months.

Being a PTL is an upstream job; it has nothing - or at the very least
should have nothing - to do with your actual employer. Being a PTL is
an *upstream* job and you have to be *upstream* to do it correctly.

If you think you won't have time in a couple of months then, please,
don't run for PTL. If you think your manager will be asking you to
focus downstream then, please, don't run for PTL. If you think you'll
have other personal matters to take care of then, please, don't run
for PTL.

What I'm trying to say is that you should sit down and think about what
your next 6 months will look like time-wise. I believe it's safe
enough to say that you'll have to spend 60% to 70% of your time
upstream, assuming the project is a busy one.

The above, though, is not to 

[openstack-dev] [Sahara] FFE request for improved secret storage

2015-09-09 Thread michael mccune

hi all,

i am requesting an FFE for the improved secret storage feature.

this change will allow operators to utilize the key manager service for 
offloading the passwords stored by sahara. this change does not 
implement mandatory usage of barbican, and defaults to a backward 
compatible behavior that requires no change to a stack.


there is currently 1 review up which addresses the main thrust of this 
change; there will be 1 additional review which will migrate more 
passwords to use the mechanisms for offloading.


i expect this work to be complete by sept. 25.

review
https://review.openstack.org/#/c/220680/

blueprint
https://blueprints.launchpad.net/sahara/+spec/improved-secret-storage

spec
http://specs.openstack.org/openstack/sahara-specs/specs/liberty/improved-secret-storage.html

thanks,
mike



[openstack-dev] [all] Something about being a PTL

2015-09-09 Thread Flavio Percoco

Greetings,

Next week many folks will be running for PTL positions and I thought
about taking the time to dump[0] some thoughts about what being a PTL
means - at least for me - and what one should consider before running.

Since the audience I want to reach is mostly in this mailing list, I
thought about sending it here as well.

[0] http://blog.flaper87.com/post/something-about-being-a-ptl/
Flavio


It's that time of the cycle, in OpenStack, when projects need to elect
who's going to be the PTL for the next 6 months. People look at the,
hopefully many, candidacies and vote based on the proposals that sound
best to them. I believe, for the PTL elections, the voting process has
worked decently, which is why this post is not meant for voters but for
the, hopefully many, PTL candidates.

First and foremost, thank you. Thanks for raising your hand and being
willing to take on this role. It's an honor to have you in the
community and I wish you the best of luck in this round. Below are a
few things that I hope will help you in the preparation of your
candidacy and that I also hope will help make you a better PTL and
community member.


Why do you want to be a PTL?


Before you even start writing your candidacy, please ask yourself why
you want to be a PTL. What is it that you want to bring to the project
that is good for both the project and the community? You don't need to
get stuck on this question forever, and you don't need to bring
something new to the project.

In my opinion, a very good answer for the above could be: "I believe
I'll provide the right guidance to the community and the project."

Seriously, one mistake that new PTLs often make is to believe they are
on their own. It turns out that PTLs aren't. The whole point of being a
PTL is to help the community and to improve it. You're not going to do
that if you think you're the one pulling the community. PTLs ought to
work *with* the community, not *for* the community.

This leads me to my next point.

Be part of the community


Being a PTL is more than just going through Launchpad and keeping an
eye on the milestones. That's a lot of work, true. But here's a
secret: it takes more time to be involved with the community of the
project you're serving than to go through Launchpad.

As a PTL, you have to be around. You have to keep an eye on the
mailing list on a daily basis. You have to talk to the members of the
community you're serving because you have to be up-to-date about the
things that are happening in the project and the community. There may
be conflicts in reviews and bugs, and you have to be there to help
solve them.

Among all the things you'll have to do, the community should be in the
top 2 of your priorities. I'm not talking just about the community of
the project you're working on. I'm talking about OpenStack. Does your
project have an impact on other projects? Is your project part of
DefCore? Is your project widely deployed? What are the deprecation
guarantees provided? Does your project consume common libraries? What
can your project contribute back to the rest of the community?

There are *many* things related to the project's community and its
interaction with the rest of the OpenStack community that are
important and that should be taken care of. However, you're not alone,
you have a community. Remember, you'll be serving the community, it's
not the other way around. Working with the community is the best thing
you can do.

As you can imagine, the above is exhausting and it takes time. It
takes a lot of time, which leads me to my next point.

Make sure you'll have time
==========================

There are a few things that are impossible in this world; predicting
time availability is one of them. Nonetheless, we can get really close
estimates, and you should strive, *before* sending your candidacy, to
get the closest estimate of your upstream availability for the next 6
months.

Being a PTL is an upstream job; it has nothing - or at the very least
should have nothing - to do with your actual employer. Being a PTL is
an *upstream* job and you have to be *upstream* to do it correctly.


If you think you won't have time in a couple of months then, please,
don't run for PTL. If you think your manager will be asking you to
focus downstream then, please, don't run for PTL. If you think you'll
have other personal matters to take care of then, please, don't run
for PTL.

What I'm trying to say is that you should sit down and think of what
your next 6 months will look like time-wise. I believe it's safe
enough to say that you'll have to spend 60% to 70% of your time
upstream, assuming the porject is a busy one.

The above, though, is not to say that you shouldn't run when in doubt.
Actually, I'd rather have a great PTL for 3 months who then steps
down than have the community led by someone not motivated enough who
felt forced to run.

Create new PTLs
===============

Just like in every 

Re: [openstack-dev] [glance] [nova] Verification of glance images before boot

2015-09-09 Thread stuart . mclaren


The glance client (running 'inside' the Nova server) will re-calculate
the checksum as it downloads the image and then compare it against the
expected value. If they don't match an error will be raised.
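The download-time check described above boils down to something like the
following sketch (a simplified illustration, not the actual glanceclient
code; the function and variable names here are made up):

```python
import hashlib


def verify_image(chunks, expected_md5):
    """Re-calculate the md5 checksum while consuming image chunks,
    mirroring the kind of check the glance client performs on download."""
    md5 = hashlib.md5()
    data = b""
    for chunk in chunks:
        md5.update(chunk)
        data += chunk
    if md5.hexdigest() != expected_md5:
        # mismatch means corruption or tampering somewhere in transit
        raise ValueError("checksum mismatch: image may have been tampered with")
    return data


# simulate a two-chunk download whose expected checksum is known
payload = [b"hello ", b"world"]
expected = hashlib.md5(b"hello world").hexdigest()
image = verify_image(payload, expected)
```

Note that md5 here only protects against accidental corruption and naive
tampering; cryptographic signing (the blueprint below) is what addresses a
malicious image store.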


How can I know that the image that a new instance is spawned from - is
actually the image that was originally registered in glance - and has
not been maliciously tampered with in some way?

Is there some kind of verification that is performed against the md5sum
of the registered image in glance before a new instance is spawned?

Is that done by Nova?
Glance?
Both? Neither?

The reason I ask is some 'paranoid' security (that is their job I
suppose) people have raised these questions.

I know there is a glance BP already merged for L [1] - but I would like
to understand the actual flow in a bit more detail.

Thanks.

[1]
https://blueprints.launchpad.net/glance/+spec/image-signing-and-verification-support

--
Best Regards,
Maish Saidel-Keesing



--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


End of OpenStack-dev Digest, Vol 41, Issue 22
*





[openstack-dev] [glance] differences between def detail() and def index() in glance/registry/api/v1/images.py

2015-09-09 Thread Su Zhang
Hello,

I am hitting an error and its traceback passes through def index()
in glance/registry/api/v1/images.py.

I assume def index() is called by glance image-list. However, while testing
glance image-list I realized that def detail() is called
under glance/registry/api/v1/images.py instead of def index().

Could someone let me know what's the difference between the two functions?
How can I test out def index() under glance/registry/api/v1/images.py
through CLI or API?

Thanks,

-- 
Su Zhang


Re: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support multiple compute drivers?

2015-09-09 Thread Jeff Peeler
I'd greatly prefer using availability zones/host aggregates as I'm trying
to keep the footprint as small as possible. It does appear, from the
section "configure scheduler to support host aggregates" [1], that I can
configure filtering using just one scheduler (right?). However, perhaps
more importantly, I'm now unsure whether, given the network configuration
changes required for Ironic, deploying normal instances alongside
baremetal servers is possible.

[1]
http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html

On Wed, Sep 9, 2015 at 1:13 PM, Jim Rollenhagen 
wrote:

> On Wed, Sep 02, 2015 at 03:42:20PM -0400, Jeff Peeler wrote:
> > Hi folks,
> >
> > I'm currently looking at supporting Ironic in the Kolla project [1], but
> > was unsure if it would be possible to run separate instances of nova
> > compute and controller (and scheduler too?) to enable both baremetal and
> > libvirt type deployments. I found this mailing list post from two years
> ago
> > [2], asking the same question. The last response in the thread seemed to
> > indicate work was being done on the scheduler to support multiple
> > configurations, but the review [3] ended up abandoned.
> >
> > Are the current requirements the same? Perhaps using two availability
> zones
> > would work, but I'm not clear if that works on the same host.
>
> At Rackspace we run Ironic in its own cell, and use cells filters to
> direct builds to the right place.
>
> The other option that supposedly works is host aggregates. I'm not sure
> host aggregates supports running two scheduler instances (because you'll
> want different filters), but maybe it does?
>
> // jim
>
>
>


Re: [openstack-dev] [TripleO] Releasing tripleo-common on PyPI

2015-09-09 Thread Ben Nemec
On 09/09/2015 12:54 PM, Doug Hellmann wrote:
> Excerpts from Dmitry Tantsur's message of 2015-09-09 12:58:04 +0200:
>> On 09/09/2015 12:15 PM, Dougal Matthews wrote:
>>> Hi,
>>>
>>> The tripleo-common library appears to be registered or PyPI but hasn't yet 
>>> had
>>> a release[1]. I am not familiar with the release process - what do we need 
>>> to
>>> do to make sure it is regularly released with other TripleO packages?
>>
>> I think this is a good start: 
>> https://github.com/openstack/releases/blob/master/README.rst
> 
> That repo isn't managed by the release team, so you don't need to submit
> a release request as described there. You can, however, use the tools to
> tag a release yourself. Drop by #openstack-relmgr-office if you have
> questions about the tools or process, and I'll be happy to offer
> whatever guidance I can.

We have tripleo-specific docs for doing releases on the wiki:
https://wiki.openstack.org/wiki/TripleO/ReleaseManagement

Someday when I stop dropping the ball on getting the tripleo launchpad
permissions fixed, we'll be able to move to the same release model as
everyone else...

> 
> Doug
> 
>>
>>>
>>> We will also want to do something similar with the new python-tripleoclient
>>> which doesn't seem to be registered on PyPI yet at all.
>>
>> And instack-undercloud.
>>
>>>
>>> Thanks,
>>> Dougal
>>>
>>> [1]: https://pypi.python.org/pypi/tripleo-common
>>>
>>>
>>
> 
> 




Re: [openstack-dev] [rootwrap] rootwrap and libraries - RFC

2015-09-09 Thread Doug Hellmann
Excerpts from Matt Riedemann's message of 2015-09-09 13:45:29 -0500:
> 
> On 9/9/2015 1:04 PM, Doug Hellmann wrote:
> > Excerpts from Sean Dague's message of 2015-09-09 13:36:37 -0400:
> >> We've got a new pattern emerging where some of the key functionality in
> >> services is moving into libraries that can be called from different
> >> services. A good instance of this is os-brick, which has the setup /
> >> config functionality for devices that sometimes need to be called by
> >> cinder and sometimes need to be called by nova when setting up a guest.
> >> Many of these actions need root access, so require rootwrap filters.
> >>
> >> The point of putting this logic into a library is that it's self
> >> contained, and that it can be an upgrade unit that is distinct from the
> >> upgrade unit of either nova or cinder.
> >>
> >> The problem is rootwrap.conf. Projects ship an example rootwrap.conf
> >> which specifies filter files like so:
> >>
> >> [DEFAULT]
> >> # List of directories to load filter definitions from (separated by ',').
> >> # These directories MUST all be only writeable by root !
> >> filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
> >>
> >> however, we'd really like to be loading those filters from files the
> >> library controls, so they are in sync with the library functionality.
> >> Knowing where those files are going to be turns out to be a really
> >> interesting guessing game. And, for security reasons, having a super
> >> large set of paths rootwrap is guessing at seems really unwise.
> >>
> >> It seems like what we really want is something more like this:
> >>
> >> [filters]
> >> nova=compute,network
> >> os-brick=os-brick
> >>
> >> Which would translate into a symbolic look up via:
> >>
> >> filter_file = resource_filename(project, '%s.filters' % filter)
> >> ... read the filter file
> >
> > Right now rootwrap takes as input an oslo.config file, which it reads to
> > find the filter_path config value, which it interprets as a directory
> > containing a bunch of other INI files, which it then reads and merges
> > together into a single set of filters. I'm not sure the symbolic lookup
> > you're proposing is going to support that use of multiple files. Maybe
> > it shouldn't?
> >
> > What about allowing filter_path to contain more than one directory
> > to scan? That would let projects using os-brick pass their own path and
> > os-brick's path, if it's different.
> >
> > Doug
> >
> >>
> >> So that rootwrap would be referencing things symbolically instead of
> >> static / filebased which is going to be different depending on install
> >> method.
> >>
> >>
> >> For liberty we just hacked around it and put the os-brick rules into
> >> Nova and Cinder. It's late in the release, and a clear better path
> >> forward wasn't out there. It does mean the upgrade of the two components
> >> is a bit coupled in the fiber channel case. But it was the best we could 
> >> do.
> >>
> >> I'd like to get the discussion rolling about the proposed solution
> >> above. It emerged from #openstack-cinder this morning as we attempted to
> >> get some kind of workable solution and figure out what was next. We
> >> should definitely do a summit session on this one to nail down the
> >> details and the path forward.
> >>
> >>  -Sean
> >>
> >
> >
> 
> The problem with the static file paths in rootwrap.conf is that we don't 
> know where those other library filter files are going to end up on the 
> system when the library is installed.  We could hard-code nova's 
> rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d" but then 

I thought the configuration file passed to rootwrap was something the
deployer could change, which would let them fix the paths on their
system. Did I misunderstand what the argument was?

> that means the deploy/config management tooling that installing this 
> stuff needs to copy that directory structure from the os-brick install 
> location (which we're finding non-deterministic, at least when using 
> data_files with pbr) to the target location that rootwrap.conf cares about.
> 
> That's why we were proposing adding things to rootwrap.conf that 
> oslo.rootwrap can parse and process dynamically using the resource 
> access stuff in pkg_resources, so we just say 'I want you to load the 
> os-brick.filters file from the os-brick project, thanks.'.
> 

Doesn't that put the rootwrap config file for os-brick in a place the
deployer can't change it? Maybe they're not supposed to? If they're not,
then I agree that burying the actual file inside the library and using
something like pkgtools to get its contents makes more sense.

Doug
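The symbolic lookup proposed upthread (filter files resolved via package
metadata instead of hard-coded filesystem paths) could look roughly like
this sketch; the helper name and the mapping format are hypothetical, not
an existing oslo.rootwrap API:

```python
from pkg_resources import resource_filename


def filter_file_paths(filter_map, resolve=resource_filename):
    """Resolve '[filters]'-style entries, e.g.
    {'nova': ['compute', 'network'], 'os_brick': ['os-brick']},
    to concrete filter-file paths via each project's package metadata.
    'resolve' is injectable so the lookup can be exercised without the
    packages actually being installed."""
    paths = []
    for project, names in filter_map.items():
        for name in names:
            paths.append(resolve(project, '%s.filters' % name))
    return paths
```

This keeps the filters in sync with the installed library version, at the
cost of the deployer no longer being able to edit them in /etc - which is
exactly the trade-off Doug raises above.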


Re: [openstack-dev] [glance] differences between def detail() and def index() in glance/registry/api/v1/images.py

2015-09-09 Thread Fei Long Wang
I assume you're using the Glance client; if so, by default, when you issue 
the command 'glance image-list', it will call /v1/images/detail instead of 
/v1/images. You can use curl or any browser HTTP client to see the 
difference. Basically, just as the endpoint name suggests, /v1/images/detail 
will give you more details. See the difference in their responses below.


Response from /v1/images/detail
{
    "images": [
        {
            "status": "active",
            "deleted_at": null,
            "name": "fedora-21-atomic-3",
            "deleted": false,
            "container_format": "bare",
            "created_at": "2015-09-03T22:56:37.00",
            "disk_format": "qcow2",
            "updated_at": "2015-09-03T23:00:15.00",
            "min_disk": 0,
            "protected": false,
            "id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
            "min_ram": 0,
            "checksum": "d3b3da0e07743805dcc852785c7fc258",
            "owner": "5f290ac4b100440b8b4c83fce78c2db7",
            "is_public": true,
            "virtual_size": null,
            "properties": {
                "os_distro": "fedora-atomic"
            },
            "size": 770179072
        }
    ]
}

Response with /v1/images
{
    "images": [
        {
            "name": "fedora-21-atomic-3",
            "container_format": "bare",
            "disk_format": "qcow2",
            "checksum": "d3b3da0e07743805dcc852785c7fc258",
            "id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
            "size": 770179072
        }
    ]
}
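In other words, the brief /v1/images listing is essentially the detailed
record projected onto a handful of core fields. A rough illustration,
built from the sample responses above rather than from glance's own code:

```python
# Fields that /v1/images returns, per the sample responses above.
INDEX_FIELDS = ('name', 'container_format', 'disk_format',
                'checksum', 'id', 'size')


def to_index_record(detail_record):
    """Project a /v1/images/detail record onto the brief /v1/images form."""
    return {k: detail_record[k] for k in INDEX_FIELDS}


detail = {
    "status": "active",
    "name": "fedora-21-atomic-3",
    "container_format": "bare",
    "disk_format": "qcow2",
    "checksum": "d3b3da0e07743805dcc852785c7fc258",
    "id": "b940521b-97ff-48d9-a22e-ecc981ec0513",
    "is_public": True,
    "size": 770179072,
}
brief = to_index_record(detail)
```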


On 10/09/15 11:46, Su Zhang wrote:


Hello,

I am hitting an error and its traceback passes through def index() 
in glance/registry/api/v1/images.py.


I assume def index() is called by glance image-list. However, while 
testing glance image-list I realized that def detail() is called 
under glance/registry/api/v1/images.py instead of def index().


Could someone let me know what's the difference between the two 
functions? How can I test out def index() under 
glance/registry/api/v1/images.py through CLI or API?


Thanks,

--
Su Zhang





--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--



Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-09 Thread Robert Collins
On 10 September 2015 at 06:18, Doug Hellmann  wrote:
> Excerpts from Ben Swartzlander's message of 2015-09-08 22:22:38 -0400:

>> It would be a recognition that most customers don't want to upgrade
>> every 6 months -- they want to skip over 3 releases and upgrade every 2
>> years. I'm sure there are customers all over the spectrum, from those who
>> run master, to those who do want a new release every 6 months, to some that
>> want to install something and run it forever without upgrading*. My
>> intuition is that, for most customers, 2 years is a reasonable amount of
>> time to run a release before upgrading. I think major Linux distros
>> understand this, as is evidenced by their release and support patterns.
>
> As Jeremy pointed out, we have quite a bit of trouble now maintaining
> stable branches. given that, it just doesn't seem realistic in this
> community right now to expect support for a 2 year long LTS period.
> Given that, I see no real reason to base other policies around the
> idea that we might do it in the future. If we *do* end up finding
> more interest in stable/LTS support, then we can revisit the
> deprecation period at that point.

Also, there are a number of things that are bundled up into one thing today:
 - schema migrations
 - cross-version compatibility (or lack thereof) of deps [and
co-installability too]
 - RPC compatibility
 - config file compatibility

All of which impact the ability to do rolling upgrades. Non-rolling
upgrades without config changes are a bit of a special case, but also
important.

So any LTS discussion needs to cover:
 - resourcing the maintenance of the LTS branch
 - solving the technical problems in keeping the branch running as the
platform it was developed on ages underneath it
 - solving the technical problems in dealing with libraries that are
moving on while its staying static (and please no, do not suggest LTS
branches for oslo libraries - they are only a subset of the problem)
 - dealing with having compat code hang around in-tree for much longer

So the backwards compatibility aspect of this discussion is really
just the tip of the iceberg.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [requirements] attention requirements-cores, please look out for constraints updates

2015-09-09 Thread Robert Collins
On 9 September 2015 at 22:22, Alan Pevec  wrote:
>> I'd like to add in a lower-constraints.txt set of pins and actually
>> start reporting on whether our lower bounds *work*.
>
> Do you have a spec in progress for lower-constraints.txt?
> It should help catch issues like https://review.openstack.org/221267
> There are also lots of entries in global-requirements without minimum
> version set while they should:
> http://git.openstack.org/cgit/openstack/requirements/tree/README.rst#n226

Not yet. Got some other fish to fry first :) - but I'd certainly be
happy to review a spec if someone else wants to work on
lower-constraints testing.

So, on the bare library thing - I think that advice is overly
prescriptive. Certainly with something like testtools, a bare version
is bad - it's a mature library and older versions certainly aren't
relevant today. OTOH with something like os-brick, brand new, all
versions may well be ok.
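To make the distinction concrete, a global-requirements entry with a
minimum versus a bare one would look like this (the version number is
illustrative):

```
# mature library: pin a known-good minimum
testtools>=1.4.0
# brand-new library: a bare entry may be acceptable, since every
# released version is recent
os-brick
```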

Cheers,
Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [neutron] port delete allowed on VM

2015-09-09 Thread Kevin Benton
This is expected behavior.

On Tue, Sep 8, 2015 at 12:58 PM, Ajay Kalambur (akalambu) <
akala...@cisco.com> wrote:

> Hi
> Today when we create a VM on a port and delete that port, I don’t get a
> message saying Port in Use.
>
> Is there a plan to fix this, or is this expected behavior in neutron?
>
> Is there a plan to fix this, and if so, is there a bug tracking it?
>
> Ajay
>
>
>
>


-- 
Kevin Benton


[openstack-dev] [openstack-operators][chef] Pre-release of new kitchen-openstack driver, Windows and 1.4 test-kitchen support

2015-09-09 Thread JJ Asghar

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi Everyone!

I'd like to announce a pre-release[1] of 2.0.0.dev of the
kitchen-openstack[2] driver gem.  I've been able to test this with
Windows 2012R2 and all the flavors of Linux I have on my OpenStack
Cloud, but would love some more feedback.[3]

If you could give this pre-release a shot, and post any issues, negative
or positive; I'd like to get this pushed out to a real release late next
week. (Sept. 17 or 18)

If you don't know you can install the gem via: `gem install
kitchen-openstack --pre`

Please take note: with test-kitchen 1.4 there is now a separate transport
layer, so you'll most likely need to add a transport section to your
.kitchen.yml. I've made note of this here[4].

[1]: https://rubygems.org/gems/kitchen-openstack/versions/2.0.0.dev
[2]: https://github.com/test-kitchen/kitchen-openstack
[3]: https://github.com/test-kitchen/kitchen-openstack/issues
[4]: https://github.com/test-kitchen/kitchen-openstack#usage
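For reference, the transport section mentioned above might look like the
following .kitchen.yml fragment (illustrative only; key names and values
beyond `transport` itself may differ in your setup, see [4]):

```yaml
# .kitchen.yml (fragment) -- illustrative only
driver:
  name: openstack
transport:
  username: ubuntu          # login user on the provisioned instance
  ssh_key: ~/.ssh/id_rsa    # key used by the test-kitchen 1.4 transport
```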

- -- 
Best Regards,
JJ Asghar
c: 512.619.0722 t: @jjasghar irc: j^2

-BEGIN PGP SIGNATURE-
Version: GnuPG/MacGPG2 v2
Comment: GPGTools - https://gpgtools.org

iQIcBAEBCgAGBQJV8MaQAAoJEDZbxzMH0+jTCYUP/iUEqNG3WOZ6VJjuZCBO+4Wt
5h2DSvg7Tc7gB+IIUyvz8G++ymatvGyY9zRNhxCJcpQuwndxrfYJuVFB+2KiJ8hS
oVfHcggVms0/DlmGUn8Lr/8GdCCawU2qwjkYg1STJorXCwH6phh6dIhWcPSjus8r
f2/JKStmawFqQ7MW/hI5qvJ2o46AvfHEbzyPChD9YYNMffdZzrUfKVNL9JcCpl4+
N9VU+Y2e2oo1yjKro68tM7JR0qE5gF5k0BgRXcxWkSzPVLB+ilD+mAqCwoaaRmkr
yuxAgWV7kwFWXQnK8O/OJEEX4/EQx6QqC7oR36DrPLGafxW9Jk9Jyj6eh12mt7G9
/uEAuKcket5F6CNLYikGH3Lm7ZhaFD75Of/9ourVWZy4aTl3zX7PaC7SwSb6Yx9B
Flt4O4hf4Nl7PgPzf3kyuWaR+39HmEpF4WwCNQ+NdA92IKebDcsR6SgdcwxxkiOl
5wXhSs8vr+fgEGBYp4ZoEHmGUMWghd/fcoH5yDVt+neM1FB9wJwQAjMUV0z3kCJ1
AEyzwyNHTtflsnL3613//zwWTjKE9U3cHhY7KaiBrL2jO+rDsfi1cAD34usYI1G0
T76D9IAkxZz4TycGWgVzSVTY1ESqJFIxE2BLGeCDHZq/8fQsa7ZlrKif02V/eO+o
jMSXjkVcXHATaMqFIMNe
=f+U2
-END PGP SIGNATURE-




[openstack-dev] [Ironic] Summit session brainstorming

2015-09-09 Thread Jim Rollenhagen
Hi all,

I don't have an exact schedule or anything, but I wanted to start
brainstorming topics for the summit. I've started an etherpad:
https://etherpad.openstack.org/p/mitaka-ironic-design-summit-ideas

Keep in mind these should be topics that we need to sort things out on -
let's be sure not to rehash things where we already have consensus on
the path forward, etc.

Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-09 Thread Yaguang Tang
On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz  wrote:

>
> Hey Vladimir,
>
>
>>
>>
>>> 1) There won't be such things in like [1] and [2], thus less complicated
 flow, less errors, easier to maintain, easier to understand, easier to
 troubleshoot
 2) If one wants to have local mirror, the flow is the same as in case
 of upstream repos (fuel-createmirror), which is clear for a user to
 understand.

>>>
>>> From the issues I've seen, fuel-createmirror isn't very straightforward
>>> and has some issues, making it a bad UX.
>>>
>>
>> I'd say the whole approach of having such a tool as fuel-createmirror is
>> way too naive. A reliable internet connection is a matter of network
>> engineering rather than deployment. Even using a proxy is much better than
>> creating a local mirror. But this discussion is totally out of the scope of
>> this letter. Currently, we have fuel-createmirror and it is pretty
>> straightforward (installed as an rpm, with just a couple of command line
>> options). The quality of this script is also out of the scope of this
>> thread. BTW, we have plans to improve it.
>>
>
>
> Fair enough, I just wanted to raise the UX issues around these types of
> things as they should go into the decision making process.
>
>
>
>>
>>>
 Many people still associate ISO with MOS, but it is not true when using
 package based delivery approach.

 It is easy to define necessary repos during deployment and thus it is
 easy to control what exactly is going to be installed on slave nodes.

 What do you guys think of it?



>>> Reliance on internet connectivity has been an issue since 6.1. For many
>>> large users, complete access to the internet is not available or not
>>> desired.  If we want to continue down this path, we need to improve the
>>> tools to setup the local mirror and properly document what urls/ports/etc
>>> need to be available for the installation of openstack and any mirror
>>> creation process.  The ideal thing is to have an all-in-one CD similar to a
>>> live cd that allows a user to completely try out fuel wherever they want
>>> with out further requirements of internet access.  If we don't want to
>>> continue with that, we need to do a better job around providing the tools
>>> for a user to get up and running in a timely fashion.  Perhaps providing an
>>> net-only iso and an all-included iso would be a better solution so people
>>> will have their expectations properly set up front?
>>>
>>
>> Let me explain why I think having a local MOS mirror by default is bad:
>> 1) I don't see any reason why we should treat the MOS repo differently from
>> all other online repos. A user sees on the settings tab a list of repos,
>> one of which is local by default while the others are online. It can make a
>> user a little bit confused, can't it? A user can also be confused by the
>> fact that some of the repos can be cloned locally by fuel-createmirror
>> while others can't. That is not straightforward, and not good UX.
>>
>
>
> I agree. The process should be the same and it should be just another
> repo. It doesn't mean we can't include a version on an ISO as part of a
> release.  Would it be better to provide the mirror on the ISO but not have
> it enabled by default for a release so that we can gather user feedback on
> this? This would include improved documentation and possibly allowing a
> user to choose their preference so we can collect metrics?
>
>
> 2) Having local MOS mirror by default makes things much more convoluted.
>> We are forced to have several directories with predefined names and we are
>> forced to manage these directories in nailgun, in upgrade script, etc. Why?
>> 3) When putting MOS mirror on ISO, we make people think that ISO is equal
>> to MOS, which is not true. It is possible to implement really flexible
>> delivery scheme, but we need to think of these things as they are
>> independent.
>>
>
>
> I'm not sure what you mean by this. Including a point in time copy on an
> ISO as a release is a common method of distributing software. Is this a
> messaging thing that needs to be addressed? Perhaps I'm not familiar with
> people referring to the ISO as being MOS.
>
>
> For large users it is easy to build a custom ISO and put what they
>> need there, but first we need to have a simple working scheme that is clear
>> for everyone. I think dealing with all repos the same way is what is going
>> to make things simpler.
>>
>>
>
> Who is going to build a custom ISO? How does one request that? What
> resources are consumed by custom ISO creation process/request? Does this
> scale?
>
>
>
>> This thread is not about internet connectivity, it is about aligning
>> things.
>>
>>
> You are correct in that this thread is not explicitly about internet
> connectivity, but they are related. Any changes to remove a local
> repository and only provide an internet based solution makes internet
> connectivity something that needs to be 

Re: [openstack-dev] [magnum] keystone pluggable model

2015-09-09 Thread Jamie Lennox


- Original Message -
> From: "Murali Allada" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Thursday, 10 September, 2015 6:41:40 AM
> Subject: [openstack-dev] [magnum]  keystone pluggable model
> 
> Hi All,
> 
> In the IRC meeting yesterday, I brought up this new blueprint I opened.
> 
> https://blueprints.launchpad.net/magnum/+spec/pluggable-keystone-model
> 
> The goal of this blueprint is to allow magnum operators to integrate with
> their version of keystone easily with downstream patches.
> 
> The goal is NOT to implement support for keystone version 2 upstream, but to
> make it easy for operators to integrate with V2 if they need to.
> 
> Most of the work required for this is already done in this patch.
> 
> https://review.openstack.org/#/c/218699
> 
> However, we didn't want to address this change in the same review.
> 
> We just need to refactor the code a little further and isolate all version
> specific keystone code to one file.
> 
> See my comments in the following review for details on what this change
> entails.
> 
> https://review.openstack.org/#/c/218699/5/magnum/common/clients.py
> 
> https://review.openstack.org/#/c/218699/5/magnum/common/keystone.py
> 
> Thanks,
> Murali
Hi, 

My keystone filter picked this up from the title, so I don't really know
anything specifically about magnum here, but can you explain a little more
what you are looking for in terms of abstraction?

Looking at the review, the only thing that magnum is doing with the keystone
API (not auth) is trust creation - and this is a v3-only feature, so there's
not much value to a v2 client there. I know this is a problem that heat has
run into; it adopted a similar solution, with a contrib v2 module that
short-circuits some functions and leaves things somewhat broken. I don't
think they would recommend it.

The other thing is auth. A version-independent auth mechanism is something
that keystoneclient has supplied for a while now. Here are two blog posts that
show how to use sessions and auth plugins[1][2] from keystoneclient, such that
the type (service passwords must die) or version of authentication used is
entirely a deployment configuration choice. All the clients I know of, with
the exception of swift, support sessions and plugins, so this would seem like
an ideal time for magnum to adopt them rather than reinvent auth version
abstraction; you also get some wins, like not having to hack
already-authenticated tokens into each client.
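
For illustration, the session/plugin pattern from those posts might be
sketched like this. The endpoint, credentials, and default-domain IDs are
placeholder assumptions; option names follow python-keystoneclient of that
era:

```python
# Sketch: version-independent auth via keystoneclient sessions and
# auth plugins. Swapping v3.Password for another plugin (or loading one
# from config) changes the auth version without touching client code.
def make_session(auth_url, username, password, project_name):
    from keystoneclient import session
    from keystoneclient.auth.identity import v3

    auth = v3.Password(auth_url=auth_url,
                       username=username,
                       password=password,
                       project_name=project_name,
                       user_domain_id='default',    # placeholder assumption
                       project_domain_id='default')
    return session.Session(auth=auth)

# Any session-aware client can then be built from the one session,
# instead of hacking an already-authenticated token into each client, e.g.:
#   from novaclient import client as nova_client
#   nova = nova_client.Client('2', session=make_session(...))
```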

From the general design around client management it looks like you've taken
some inspiration from heat, so you might be interested in the recently merged
patches there that convert it to using auth plugins.

If you need any help with this please ping me on IRC. 


Jamie


[1] 
http://www.jamielennox.net/blog/2014/09/15/how-to-use-keystoneclient-sessions/
[2] http://www.jamielennox.net/blog/2015/02/17/loading-authentication-plugins/
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Election Season, PTL and TC September/October 2015

2015-09-09 Thread Tristan Cacqueray
PTL Election details:
  https://wiki.openstack.org/wiki/PTL_Elections_September_2015
TC Election details:
  https://wiki.openstack.org/wiki/TC_Elections_September/October_2015

Please read the stipulations and timelines for candidates and electorate
contained in these wikipages.

There will be an announcement email opening nominations as well as an
announcement email opening the polls.

Please note that the election workflow is now based on Gerrit, through the
new openstack/election repository. All candidacies must be submitted as
a text file to the openstack/election repository. Please check the
instructions in the wiki documentation.

Be aware that in the PTL elections, if a program has only one candidate,
that candidate is acclaimed and there will be no poll. A poll will be held
only if more than one candidate steps forward for a program's PTL position.

There will be further announcements posted to the mailing list as action
is required from the electorate or candidates. This email is for
information purposes only.

If you have any questions which you feel affect others, please reply to
this email thread. If you have any questions that you wish to discuss
in private, please email both myself, Tristan Cacqueray (tristanC), at
tdecacqu at redhat dot com, and Tony Breeds (tonyb), at tony at
bakeyournoodle dot com, so that we may address your concerns.

Thank you,
Tristan





Re: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support multiple compute drivers?

2015-09-09 Thread Steve Gordon
- Original Message -
> From: "Jeff Peeler" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> I'd greatly prefer using availability zones/host aggregates, as I'm trying
> to keep the footprint as small as possible. It does appear, from the
> section "configure scheduler to support host aggregates" [1], that I can
> configure filtering using just one scheduler (right?). However, perhaps
> more importantly, given the network configuration changes required for
> Ironic, I'm now unsure whether deploying normal instances alongside
> baremetal servers is possible.
> 
> [1]
> http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html

Hi Jeff,

I assume your need for a second scheduler is spurred by wanting to enable 
different filters for baremetal vs. virt, rather than influencing scheduling 
using the same filters via image properties, extra specs, and boot parameters 
(hints)?

I ask because, if not, you should be able to use the hypervisor_type image 
property to ensure that images intended for baremetal are directed there and 
those intended for kvm etc. are directed to those hypervisors. The 
documentation [1] doesn't list ironic as a valid value for this property, but 
I looked into the code for this a while ago and it seemed like it should 
work... Apologies if you had already considered this.
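
If the property route pans out, the tagging step might look like the sketch
below. The image names are placeholders, and `ironic` as a value is exactly
the unverified bit noted above:

```shell
# Sketch: steer scheduling via the hypervisor_type image property
glance image-update --property hypervisor_type=ironic my-baremetal-image
glance image-update --property hypervisor_type=kvm my-kvm-image
```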

Thanks,

Steve

[1] 
http://docs.openstack.org/cli-reference/content/chapter_cli-glance-property.html

> On Wed, Sep 9, 2015 at 1:13 PM, Jim Rollenhagen 
> wrote:
> 
> > On Wed, Sep 02, 2015 at 03:42:20PM -0400, Jeff Peeler wrote:
> > > Hi folks,
> > >
> > > I'm currently looking at supporting Ironic in the Kolla project [1], but
> > > was unsure if it would be possible to run separate instances of nova
> > > compute and controller (and scheduler too?) to enable both baremetal and
> > > libvirt type deployments. I found this mailing list post from two years
> > ago
> > > [2], asking the same question. The last response in the thread seemed to
> > > indicate work was being done on the scheduler to support multiple
> > > configurations, but the review [3] ended up abandoned.
> > >
> > > Are the current requirements the same? Perhaps using two availability
> > zones
> > > would work, but I'm not clear if that works on the same host.
> >
> > At Rackspace we run Ironic in its own cell, and use cells filters to
> > direct builds to the right place.
> >
> > The other option that supposedly works is host aggregates. I'm not sure
> > host aggregates supports running two scheduler instances (because you'll
> > want different filters), but maybe it does?
> >
> > // jim
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] Feature Freeze Exception proposal

2015-09-09 Thread Nikhil Komawar
FYI, this was granted an FFE.

On 9/8/15 11:02 AM, Nikhil Komawar wrote:
> Malini,
>
> Your note on the etherpad [1] went unnoticed, as we held that sync on
> Friday outside of our regular meeting, and the weekly meeting agenda
> etherpad was not fit for discussion purposes.
>
> It would be nice if you all could update and comment on the spec,
> referencing the note, or have someone send a related email here that
> explains how the issues raised on the spec and during the Friday sync [2]
> were addressed.
>
> [1] https://etherpad.openstack.org/p/glance-team-meeting-agenda
> [2]
> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47
>
> On 9/5/15 4:40 PM, Bhandaru, Malini K wrote:
>> Thank you Nikhil and Glance team on the FFE consideration.
>> We are committed to making the revisions per the suggestions and will
>> separately seek help from Flavio, Sabari, and Harsh.
>> Regards
>> Malini, Kent, and Jakub 
>>
>>
>> -Original Message-
>> From: Nikhil Komawar [mailto:nik.koma...@gmail.com] 
>> Sent: Friday, September 04, 2015 9:44 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception proposal
>>
>> Hi Malini et al.,
>>
>> We had a sync up earlier today on this topic and a few items were discussed 
>> including new comments on the spec and existing code proposal.
>> You can find the logs of the conversation here [1].
>>
>> There are 3 main outcomes of the discussion:
>> 1. We hope to get a commitment on the feature (spec and the code) that the 
>> comments would be addressed and code would be ready by Sept 18th; after 
>> which the RC1 is planned to be cut [2]. Our hope is that the spec is merged 
>> way before and implementation to the very least is ready if not merged. The 
>> comments on the spec and merge proposal are currently implementation details 
>> specific so we were positive on this front.
>> 2. The decision to grant FFE will be on Tuesday Sept 8th after the spec has 
>> newer patch sets with major concerns addressed.
>> 3. We cannot commit to granting a backport of this feature, so we ask the 
>> implementors to consider using the pluggability and modularity of the 
>> taskflow library. You may consult developers who have already worked on 
>> adopting this library in Glance (Flavio, Sabari and Harsh). Deployers can 
>> then use those scripts and put them back in their Liberty deployments even 
>> if it's not in the standard tarball.
>>
>> Please let me know if you have more questions.
>>
>> [1]
>> http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-04.log.html#t2015-09-04T14:29:47
>> [2] https://wiki.openstack.org/wiki/Liberty_Release_Schedule
>>
>> On 9/3/15 1:13 PM, Bhandaru, Malini K wrote:
>>> Thank you Nikhil and Brian!
>>>
>>> -Original Message-
>>> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]
>>> Sent: Thursday, September 03, 2015 9:42 AM
>>> To: openstack-dev@lists.openstack.org
>>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>>> proposal
>>>
>>> We agreed to hold off on granting it a FFE until tomorrow.
>>>
>>> There's a sync up meeting on this topic tomorrow, Friday Sept 4th at
>>> 14:30 UTC ( #openstack-glance ). Please be there to voice your opinion and 
>>> cast your vote.
>>>
>>> On 9/3/15 9:15 AM, Brian Rosmaita wrote:
 I added an agenda item for this for today's Glance meeting:
https://etherpad.openstack.org/p/glance-team-meeting-agenda

 I'd prefer to hold my vote until after the meeting.

 cheers,
 brian


 On 9/3/15, 6:14 AM, "Kuvaja, Erno"  wrote:

> Malini, all,
>
> My current opinion is -1 for FFE based on the concerns in the spec 
> and implementation.
>
> I'm more than happy to realign my stand after we have updated spec 
> and a) it's agreed to be the approach as of now and b) we can 
> evaluate how much work the implementation needs to meet with the 
> revisited spec.
>
> If we end up to the unfortunate situation that this functionality 
> does not merge in time for Liberty, I'm confident that this is one 
> of the first things in Mitaka. I really don't think there is too 
> much to go, we just might run out of time.
>
> Thanks for your patience and endless effort to get this done.
>
> Best,
> Erno
>
>> -Original Message-
>> From: Bhandaru, Malini K [mailto:malini.k.bhand...@intel.com]
>> Sent: Thursday, September 03, 2015 10:10 AM
>> To: Flavio Percoco; OpenStack Development Mailing List (not for 
>> usage
>> questions)
>> Subject: Re: [openstack-dev] [Glance] Feature Freeze Exception 
>> proposal
>>
>> Flavio, first thing in the morning Kent will upload a new BP that 
>> addresses the comments. We would very much appreciate a +1 on the 
>> FFE.
>>
>> Regards
>> Malini
>>
>>
>>
>> 

[openstack-dev] SOS

2015-09-09 Thread 蒋吉
Hi:
I need a CoreOS image that supports Kubernetes; without it, my boss will
kick my arse.


[openstack-dev] [puppet]Fwd: Delorean with CBS Liberty deps repos

2015-09-09 Thread Emilien Macchi
I'll make the changes this week and test whether our CI passes.


 Forwarded Message 
Subject: Delorean with CBS Liberty deps repos
Date: Thu, 10 Sep 2015 02:22:24 +0200
From: Alan Pevec 
To: Emilien Macchi 
CC: Matthias Runge , Haikel Guemar
, Javier Pena 

Hi Emilien,

I've updated RDO status in
https://etherpad.openstack.org/p/puppet-liberty-blocker

Testing repos to be used with Delorean repo without RDO Kilo and EPEL7
enabled are:

http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/x86_64/os/
http://cbs.centos.org/repos/cloud7-openstack-common-testing/x86_64/os/
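
For example, enabling those repos on a CentOS 7 box might look like the
following sketch (the file name and friendly names are assumptions for
illustration):

```ini
# /etc/yum.repos.d/cloud7-liberty-testing.repo (sketch)
[cloud7-openstack-liberty-testing]
name=CentOS 7 OpenStack Liberty testing
baseurl=http://cbs.centos.org/repos/cloud7-openstack-liberty-testing/x86_64/os/
enabled=1
gpgcheck=0

[cloud7-openstack-common-testing]
name=CentOS 7 OpenStack common testing
baseurl=http://cbs.centos.org/repos/cloud7-openstack-common-testing/x86_64/os/
enabled=1
gpgcheck=0
```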

Haikel, Javier, please test those in case I missed something, and add
your notes in https://etherpad.openstack.org/p/RDO-Liberty
A known issue is the python-hacking deps, but those should only be needed
for unit testing; the priority is to clear all runtime deps first.

Cheers,
Alan







Re: [openstack-dev] SOS

2015-09-09 Thread Gongys
It should be easy: search for a CoreOS raw or qcow2 image, then import it
into OpenStack.

Sent from my iPhone

> On 10 September 2015, at 10:53, 蒋吉  wrote:
> 
> Hi:
> I need a coreos image that supports kubernetes , without it , my boss 
> will kick my arse.
> 
> 
>  
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] SOS

2015-09-09 Thread Jerry Zhao

I haven't tried it out, but I hope this blog can save your arse.

https://www.cloudssky.com/en/blog/Kubernetes-on-CoreOS-with-OpenStack/

http://stable.release.core-os.net/amd64-usr/367.1.0/coreos_production_openstack_image.img.bz2
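
Putting the two replies together, a download-and-import sketch might look
like this (the flags follow the glance CLI of that era; the image name is a
placeholder):

```shell
# Sketch: fetch the CoreOS OpenStack image and register it with glance
wget http://stable.release.core-os.net/amd64-usr/367.1.0/coreos_production_openstack_image.img.bz2
bunzip2 coreos_production_openstack_image.img.bz2
glance image-create --name coreos-367.1.0 \
  --disk-format qcow2 --container-format bare \
  --file coreos_production_openstack_image.img
```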



On 09/09/2015 07:53 PM, 蒋吉 wrote:

Hi:
I need a coreos image that supports kubernetes , without it , my 
boss will kick my arse.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]Weekly Team Meeting 2015.09.09

2015-09-09 Thread joehuang
Let's also start the PTL election according to the OpenStack guide:  
https://wiki.openstack.org/wiki/PTL_Elections_September_2015

Please submit your PTL candidacy according to : 
https://wiki.openstack.org/wiki/PTL_Elections_September_2015#How_to_submit_your_candidacy

Best Regards
Chaoyi Huang ( Joe Huang )

From: Zhipeng Huang [mailto:zhipengh...@gmail.com]
Sent: Wednesday, September 09, 2015 10:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: joehuang; caizhiyuan (A); Eran Gampel; Saggi Mizrahi; Irena Berezovsky
Subject: Re: [openstack-dev][tricircle]Weekly Team Meeting 2015.09.09

Hi, please find the meetbot log at 
http://eavesdrop.openstack.org/meetings/tricircle/2015/tricircle.2015-09-09-13.01.html.

A noise-cancelled set of minutes is also attached.

On Wed, Sep 9, 2015 at 4:22 PM, Zhipeng Huang 
> wrote:
Hi Team,

Let's resume our weekly meeting today. As Eran suggested before, we will
mainly discuss the work we have now and leave the design session for another
time slot :) See you at UTC 1300 today.

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado



--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] RFE process question

2015-09-09 Thread James Dempsey
Greetings Devs,

I'm very excited about the new RFE process and thought I'd test it by
requesting a feature that is very often requested by my users[1].

There are some great docs out there about how to submit an RFE, but I
don't know what should happen after the submission to launchpad. My RFE
bug seems to have been untouched for a month and I'm unsure if I've done
something wrong. So, here are a few questions that I have.


1. Should I be following up on the dev list to ask for someone to look
at my RFE bug?
2. How long should I expect it to take to have my RFE acknowledged?
3. As an operator, I'm a bit ignorant as to whether or not there are
times during the release cycle during which there simply won't be
bandwidth to consider RFE bugs.
4. Should I be doing anything else?

Would love some guidance.

Cheers,
James

[1] https://bugs.launchpad.net/neutron/+bug/1483480

-- 
James Dempsey
Senior Cloud Engineer
Catalyst IT Limited
+64 4 803 2264
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Remove MOS DEB repo from master node

2015-09-09 Thread Igor Kalnitsky
Hello,

I agree with Vladimir - the idea of online repos is the right way to go.
In 2015 I believe we can ignore the "poor Internet connection" argument
and simplify both Fuel and its UX. Moreover, take a look at Linux
distributions: most of them fetch the needed packages from the Internet
during installation, not from CD/DVD. Netboot installers are popular; I
can't even remember the last time I installed Debian from DVD-1 - I have
used the netboot installer for years.

Thanks,
Igor


On Thu, Sep 10, 2015 at 3:58 AM, Yaguang Tang  wrote:
>
>
> On Thu, Sep 10, 2015 at 3:29 AM, Alex Schultz  wrote:
>>
>>
>> Hey Vladimir,
>>
>>>
>>>
>
> 1) There won't be such things in like [1] and [2], thus less
> complicated flow, less errors, easier to maintain, easier to understand,
> easier to troubleshoot
> 2) If one wants to have local mirror, the flow is the same as in case
> of upstream repos (fuel-createmirror), which is clear for a user to
> understand.


 From the issues I've seen, fuel-createmirror isn't very straightforward
 and has some issues, making it a bad UX.
>>>
>>>
>>> I'd say the whole approach of having such a tool as fuel-createmirror is
>>> way too naive. A reliable internet connection is a matter of network
>>> engineering rather than deployment. Even using a proxy is much better than
>>> creating a local mirror. But this discussion is totally out of the scope of
>>> this letter. Currently, we have fuel-createmirror and it is pretty
>>> straightforward (installed as an rpm, with just a couple of command line
>>> options). The quality of this script is also out of the scope of this
>>> thread. BTW, we have plans to improve it.
>>
>>
>>
>> Fair enough, I just wanted to raise the UX issues around these types of
>> things as they should go into the decision making process.
>>
>>
>>>
>
>
> Many people still associate ISO with MOS, but it is not true when using
> package based delivery approach.
>
> It is easy to define necessary repos during deployment and thus it is
> easy to control what exactly is going to be installed on slave nodes.
>
> What do you guys think of it?
>
>

 Reliance on internet connectivity has been an issue since 6.1. For many
 large users, complete access to the internet is not available or not
 desired.  If we want to continue down this path, we need to improve the
 tools to setup the local mirror and properly document what urls/ports/etc
 need to be available for the installation of openstack and any mirror
 creation process.  The ideal thing is to have an all-in-one CD similar to a
 live cd that allows a user to completely try out fuel wherever they want
 with out further requirements of internet access.  If we don't want to
 continue with that, we need to do a better job around providing the tools
 for a user to get up and running in a timely fashion.  Perhaps providing an
 net-only iso and an all-included iso would be a better solution so people
 will have their expectations properly set up front?
>>>
>>>
>>> Let me explain why I think having a local MOS mirror by default is bad:
>>> 1) I don't see any reason why we should treat the MOS repo differently from
>>> all other online repos. A user sees on the settings tab a list of repos,
>>> one of which is local by default while the others are online. It can make a
>>> user a little bit confused, can't it? A user can also be confused by the
>>> fact that some of the repos can be cloned locally by fuel-createmirror
>>> while others can't. That is not straightforward, and not good UX.
>>
>>
>>
>> I agree. The process should be the same and it should be just another
>> repo. It doesn't mean we can't include a version on an ISO as part of a
>> release.  Would it be better to provide the mirror on the ISO but not have
>> it enabled by default for a release so that we can gather user feedback on
>> this? This would include improved documentation and possibly allowing a user
>> to choose their preference so we can collect metrics?
>>
>>
>>> 2) Having local MOS mirror by default makes things much more convoluted.
>>> We are forced to have several directories with predefined names and we are
>>> forced to manage these directories in nailgun, in upgrade script, etc. Why?
>>> 3) When putting MOS mirror on ISO, we make people think that ISO is equal
>>> to MOS, which is not true. It is possible to implement really flexible
>>> delivery scheme, but we need to think of these things as they are
>>> independent.
>>
>>
>>
>> I'm not sure what you mean by this. Including a point in time copy on an
>> ISO as a release is a common method of distributing software. Is this a
>> messaging thing that needs to be addressed? Perhaps I'm not familiar with
>> people referring to the ISO as being MOS.
>>
>>
>>> For large users it is easy to build custom ISO and put there what they
>>> 

Re: [openstack-dev] [rootwrap] rootwrap and libraries - RFC

2015-09-09 Thread Robert Collins
On 10 September 2015 at 06:45, Matt Riedemann
 wrote:
>

> The problem with the static file paths in rootwrap.conf is that we don't
> know where those other library filter files are going to end up on the
> system when the library is installed.  We could hard-code nova's
> rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d" but then
> that means the deploy/config management tooling that installing this stuff
> needs to copy that directory structure from the os-brick install location
> (which we're finding non-deterministic, at least when using data_files with
> pbr) to the target location that rootwrap.conf cares about.
>
> That's why we were proposing adding things to rootwrap.conf that
> oslo.rootwrap can parse and process dynamically using the resource access
> stuff in pkg_resources, so we just say 'I want you to load the
> os-brick.filters file from the os-brick project, thanks.'.
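
A minimal sketch of how such dynamic entries might be parsed and resolved.
Note that the `project:resource` entry syntax and the function names here are
invented for illustration; they are not the actual rootwrap.conf format:

```python
# Sketch: parse hypothetical "project:resource" filter entries from a
# rootwrap.conf value and resolve each to an installed package's data file.

def parse_filter_resources(value):
    """Split 'os-brick:rootwrap.d/os-brick.filters, ...' into pairs."""
    pairs = []
    for entry in value.split(','):
        entry = entry.strip()
        if not entry:
            continue
        project, _, resource = entry.partition(':')
        pairs.append((project, resource))
    return pairs


def resolve_filter_files(value):
    """Resolve each (project, resource) pair to an on-disk path via
    pkg_resources, skipping projects that are not installed. Assumes the
    importable package name is the project name with '-' replaced by '_'."""
    import pkg_resources
    paths = []
    for project, resource in parse_filter_resources(value):
        try:
            paths.append(pkg_resources.resource_filename(
                project.replace('-', '_'), resource))
        except Exception:
            continue
    return paths


print(parse_filter_resources('os-brick:rootwrap.d/os-brick.filters'))
# [('os-brick', 'rootwrap.d/os-brick.filters')]
```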

So, I realise that's a bit sucky. My suggestion would be to just take
the tactical approach of syncing things into each consuming tree - and
dogpile onto the privsep daemon asap.

privsep is the outcome of Gus' experiments with having a Python API to
talk a richer language than shell command lines to a privileged
daemon, with one (or more) dedicated daemon processes per server
process. It avoids all of the churn and difficulties in mapping
complex things through the command line (none of our rootwrap files
are actually secure). And it's massively lower latency and better
performing.
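The daemon idea can be sketched in plain Python. This is not oslo.privsep's actual API (privsep uses decorated entry points and capability dropping); the allow-list, message format, and `read_len` operation below are invented solely to show why a typed request/response channel beats marshalling complex things through shell command lines:

```python
import multiprocessing
import os
import tempfile

def privileged_daemon(conn):
    # In a real privsep deployment this process would retain elevated
    # capabilities while the caller drops them.  Only names in the
    # allow-list can ever run -- there is no shell string to quote,
    # escape, or pattern-match.
    allowed = {'read_len': lambda path: os.path.getsize(path)}
    while True:
        msg = conn.recv()
        if msg is None:          # shutdown sentinel
            break
        name, args = msg
        fn = allowed.get(name)
        conn.send({'result': fn(*args)} if fn else {'error': 'not permitted'})

ctx = multiprocessing.get_context('fork')    # fork: no module re-import
parent, child = ctx.Pipe()
proc = ctx.Process(target=privileged_daemon, args=(child,))
proc.start()

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b'hello')

parent.send(('read_len', [f.name]))   # rich, typed arguments
ok = parent.recv()                    # {'result': 5}
parent.send(('rm_rf', ['/']))         # not in the allow-list
denied = parent.recv()                # {'error': 'not permitted'}
parent.send(None)
proc.join()
os.unlink(f.name)
print(ok, denied)
```

The point is that the privileged side exposes a small Python API surface instead of approving arbitrary command lines after the fact.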

 https://review.openstack.org/#/c/204073/

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tacker][NFV] Proposal for experimental features

2015-09-09 Thread Sridhar Ramaswamy
Folks:

We are gathering momentum in our activities towards building a VNFM / NFVO
in the OpenStack Tacker project [1]. As discussed in last week's IRC
meeting [2], I'd like to propose a few "experimental features" within Tacker.

It is a well-known practice to use an experimental tag to introduce
bleeding-edge features in OpenStack. We realize some of these experimental
features have
various unknowns - sometimes in architectural clarity (e.g. specific roles
of NFVO subcomponents) and other times in specific downstream dependency
(like ODL-SFC support). The experimental feature tag will allow a safe
place to iterate in these areas, perhaps fail fast, to eventually reach our
goal. The experimental feature will be marked as such in the Tacker docs
(coming soon). Once an experimental feature's usage is vetted, the
"experimental" tag will be removed.

The following two features are identified as the initial candidates,

1) VNF Forwarding Graph using SFC API

SFC efforts introduced by Tim Rozet will form the basis of this track.
The wider Tacker team can pitch in to carry this forward into a functional
VNF Forwarding Graph feature.

2) Basic NSD support

 Basic Network Service Descriptor (NSD) support to instantiate a
sequence of VNFs (described in VNFDs). When (1) above becomes available, this
track can build on a combined NSD + VNFFG for a fully orchestrated network
service chain.

As always, comments and input are welcome.

- Sridhar

[1]  

https://wiki.openstack.org/wiki/Tacker
[2]
http://eavesdrop.openstack.org/meetings/tacker/2015/tacker.2015-09-03-16.03.log.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [kolla] Possible to support multiple compute drivers?

2015-09-09 Thread Jim Rollenhagen
On Wed, Sep 02, 2015 at 03:42:20PM -0400, Jeff Peeler wrote:
> Hi folks,
> 
> I'm currently looking at supporting Ironic in the Kolla project [1], but
> was unsure if it would be possible to run separate instances of nova
> compute and controller (and scheduler too?) to enable both baremetal and
> libvirt type deployments. I found this mailing list post from two years ago
> [2], asking the same question. The last response in the thread seemed to
> indicate work was being done on the scheduler to support multiple
> configurations, but the review [3] ended up abandoned.
> 
> Are the current requirements the same? Perhaps using two availability zones
> would work, but I'm not clear if that works on the same host.

At Rackspace we run Ironic in its own cell, and use cells filters to
direct builds to the right place.

The other option that supposedly works is host aggregates. I'm not sure
host aggregates support running two scheduler instances (because you'll
want different filters), but maybe it does?

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rootwrap] rootwrap and libraries - RFC

2015-09-09 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2015-09-09 13:36:37 -0400:
> We've got a new pattern emerging where some of the key functionality in
> services is moving into libraries that can be called from different
> services. A good instance of this is os-brick, which has the setup /
> config functionality for devices that sometimes need to be called by
> cinder and sometimes need to be called by nova when setting up a guest.
> Many of these actions need root access, so require rootwrap filters.
> 
> The point of putting this logic into a library is that it's self
> contained, and that it can be an upgrade unit that is distinct from the
> upgrade unit of either nova or cinder.
> 
> The problem is rootwrap.conf. Projects ship an example rootwrap.conf
> which specifies filter files like so:
> 
> [DEFAULT]
> # List of directories to load filter definitions from (separated by ',').
> # These directories MUST all be only writeable by root !
> filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
> 
> However, we'd really like to be loading those filters from files the
> library controls, so they are in sync with the library functionality.
> Knowing where those files are going to be turns out to be a really
> interesting guessing game. And, for security reasons, having a super
> large set of paths rootwrap is guessing at seems really unwise.
> 
> It seems like what we really want is something more like this:
> 
> [filters]
> nova=compute,network
> os-brick=os-brick
> 
> Which would translate into a symbolic look up via:
> 
> filter_file = resource_filename(project, '%s.filters' % filter)
> ... read the filter file

Right now rootwrap takes as input an oslo.config file, which it reads to
find the filter_path config value, which it interprets as a directory
containing a bunch of other INI files, which it then reads and merges
together into a single set of filters. I'm not sure the symbolic lookup
you're proposing is going to support that use of multiple files. Maybe
it shouldn't?

What about allowing filter_path to contain more than one directory
to scan? That would let projects using os-brick pass their own path and
os-brick's path, if it's different.
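That scan-and-merge behavior can be sketched quickly. The filter entries below are made up for illustration, and the real oslo.rootwrap parser builds CommandFilter objects rather than a flat dict, but the multi-directory merge is the same idea:

```python
import configparser
import glob
import os
import tempfile

def load_filters(filters_path):
    # Merge the [Filters] section of every *.filters file found in each
    # directory; entries from later directories win on name clashes.
    merged = {}
    for directory in filters_path:
        for path in sorted(glob.glob(os.path.join(directory, '*.filters'))):
            cp = configparser.ConfigParser()
            cp.read(path)
            if cp.has_section('Filters'):
                merged.update(cp['Filters'])
    return merged

# Two directories, as a nova + os-brick deployment might have:
with tempfile.TemporaryDirectory() as nova_d, \
     tempfile.TemporaryDirectory() as brick_d:
    with open(os.path.join(nova_d, 'compute.filters'), 'w') as f:
        f.write('[Filters]\nkpartx = CommandFilter, kpartx, root\n')
    with open(os.path.join(brick_d, 'os-brick.filters'), 'w') as f:
        f.write('[Filters]\nscsi_id = CommandFilter, /lib/udev/scsi_id, root\n')
    filters = load_filters([nova_d, brick_d])

print(sorted(filters))   # ['kpartx', 'scsi_id']
```

With filter_path accepting multiple directories, each consuming project only has to append the library's directory to the list.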

Doug

> 
> So that rootwrap would be referencing things symbolically instead of
> static / file-based paths, which will differ depending on install
> method.
> 
> 
> For liberty we just hacked around it and put the os-brick rules into
> Nova and Cinder. It's late in the release, and a clear better path
> forward wasn't out there. It does mean the upgrade of the two components
> is a bit coupled in the Fibre Channel case. But it was the best we could do.
> 
> I'd like to get the discussion rolling about the proposed solution
> above. It emerged from #openstack-cinder this morning as we attempted to
> get some kind of workable solution and figure out what was next. We
> should definitely do a summit session on this one to nail down the
> details and the path forward.
> 
> -Sean
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Base feature deprecation policy

2015-09-09 Thread Doug Hellmann
Excerpts from Ben Swartzlander's message of 2015-09-08 22:22:38 -0400:
> On 09/08/2015 01:58 PM, Doug Hellmann wrote:
> > Excerpts from Ben Swartzlander's message of 2015-09-08 13:32:58 -0400:
> >> On 09/03/2015 08:22 AM, Thierry Carrez wrote:
> >>> Hi everyone,
> >>>
> >>> A feature deprecation policy is a standard way to communicate and
> >>> perform the removal of user-visible behaviors and capabilities. It helps
> >>> setting user expectations on how much and how long they can rely on a
> >>> feature being present. It gives them reassurance over the timeframe they
> >>> have to adapt in such cases.
> >>>
> >>> In OpenStack we always had a feature deprecation policy that would apply
> >>> to "integrated projects", however it was never written down. It was
> >>> something like "to remove a feature, you mark it deprecated for n
> >>> releases, then you can remove it".
> >>>
> >>> We don't have an "integrated release" anymore, but having a base
> >>> deprecation policy, and knowing which projects are mature enough to
> >>> follow it, is a great piece of information to communicate to our users.
> >>>
> >>> That's why the next-tags workgroup at the Technical Committee has been
> >>> working to propose such a base policy as a 'tag' that project teams can
> >>> opt to apply to their projects when they agree to apply it to one of
> >>> their deliverables:
> >>>
> >>> https://review.openstack.org/#/c/207467/
> >>>
> >>> Before going through the last stage of this, we want to survey existing
> >>> projects to see which deprecation policy they currently follow, and
> >>> verify that our proposed base deprecation policy makes sense. The goal
> >>> is not to dictate something new from the top, it's to reflect what's
> >>> generally already applied on the field.
> >>>
> >>> In particular, the current proposal says:
> >>>
> >>> "At the very minimum the feature [...] should be marked deprecated (and
> >>> still be supported) in the next two coordinated end-of-cycle releases.
> >>> For example, a feature deprecated during the M development cycle should
> >>> still appear in the M and N releases and cannot be removed before the
> >>> beginning of the O development cycle."
> >>>
> >>> That would be a n+2 deprecation policy. Some suggested that this is too
> >>> far-reaching, and that a n+1 deprecation policy (feature deprecated
> >>> during the M development cycle can't be removed before the start of the
> >>> N cycle) would better reflect what's being currently done. Or that
> >>> config options (which are user-visible things) should have n+1 as long
> >>> as the underlying feature (or behavior) is not removed.
> >>>
> >>> Please let us know what makes the most sense. In particular between the
> >>> 3 options (but feel free to suggest something else):
> >>>
> >>> 1. n+2 overall
> >>> 2. n+2 for features and capabilities, n+1 for config options
> >>> 3. n+1 overall
> >> I think any discussion of a deprecation policy needs to be combined with
> >> a discussion about LTS (long term support) releases. Real customers (not
> >> devops users -- people who pay money for support) can't deal with
> >> upgrades every 6 months.
> >>
> >> Unavoidably, distros are going to want to support certain releases for
> >> longer than the normal upstream support window so they can satisfy the
> >> needs of the aforementioned customers. This will be true whether the
> >> deprecation policy is N+1, N+2, or N+3.
> >>
> >> It makes sense for the community to define LTS releases and coordinate
> >> making sure all the relevant projects are mutually compatible at that
> >> release point. Then the job of actually maintaining the LTS release can
> >> fall on people who care about such things. The major benefit to solving
> >> the LTS problem, though, is that deprecation will get a lot less painful
> >> because you could assume upgrades to be one release at a time or
> >> skipping directly from one LTS to the next, and you can reduce your
> >> upgrade test matrix accordingly.
> > How is this fundamentally different from what we do now with stable
> > releases, aside from involving a longer period of time?
> 
> It would be a recognition that most customers don't want to upgrade 
> every 6 months -- they want to skip over 3 releases and upgrade every 2 
> years. I'm sure there are customers all over the spectrum from those who 
> run master to those to do want a new release every 6 month, to some that 
> want to install something and run it forever without upgrading*. My 
> intuition is that, for most customers, 2 years is a reasonable amount of 
> time to run a release before upgrading. I think major Linux distros 
> understand this, as is evidenced by their release and support patterns.

As Jeremy pointed out, we have quite a bit of trouble now maintaining
stable branches. Given that, it just doesn't seem realistic in this
community right now to expect support for a 2-year-long LTS period, and
I see no real reason to base other policies around 

Re: [openstack-dev] [rootwrap] rootwrap and libraries - RFC

2015-09-09 Thread Matt Riedemann



On 9/9/2015 1:04 PM, Doug Hellmann wrote:

Excerpts from Sean Dague's message of 2015-09-09 13:36:37 -0400:

We've got a new pattern emerging where some of the key functionality in
services is moving into libraries that can be called from different
services. A good instance of this is os-brick, which has the setup /
config functionality for devices that sometimes need to be called by
cinder and sometimes need to be called by nova when setting up a guest.
Many of these actions need root access, so require rootwrap filters.

The point of putting this logic into a library is that it's self
contained, and that it can be an upgrade unit that is distinct from the
upgrade unit of either nova or cinder.

The problem is rootwrap.conf. Projects ship an example rootwrap.conf
which specifies filter files like so:

[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap

However, we'd really like to be loading those filters from files the
library controls, so they are in sync with the library functionality.
Knowing where those files are going to be turns out to be a really
interesting guessing game. And, for security reasons, having a super
large set of paths rootwrap is guessing at seems really unwise.

It seems like what we really want is something more like this:

[filters]
nova=compute,network
os-brick=os-brick

Which would translate into a symbolic look up via:

filter_file = resource_filename(project, '%s.filters' % filter)
... read the filter file


Right now rootwrap takes as input an oslo.config file, which it reads to
find the filter_path config value, which it interprets as a directory
containing a bunch of other INI files, which it then reads and merges
together into a single set of filters. I'm not sure the symbolic lookup
you're proposing is going to support that use of multiple files. Maybe
it shouldn't?

What about allowing filter_path to contain more than one directory
to scan? That would let projects using os-brick pass their own path and
os-brick's path, if it's different.

Doug



So that rootwrap would be referencing things symbolically instead of
static / file-based paths, which will differ depending on install
method.


For liberty we just hacked around it and put the os-brick rules into
Nova and Cinder. It's late in the release, and a clear better path
forward wasn't out there. It does mean the upgrade of the two components
is a bit coupled in the Fibre Channel case. But it was the best we could do.

I'd like to get the discussion rolling about the proposed solution
above. It emerged from #openstack-cinder this morning as we attempted to
get some kind of workable solution and figure out what was next. We
should definitely do a summit session on this one to nail down the
details and the path forward.

 -Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The problem with the static file paths in rootwrap.conf is that we don't 
know where those other library filter files are going to end up on the 
system when the library is installed.  We could hard-code nova's 
rootwrap.conf filter_path to include "/etc/os-brick/rootwrap.d" but then 
that means the deploy/config management tooling that is installing this 
stuff needs to copy that directory structure from the os-brick install 
location (which we're finding non-deterministic, at least when using 
data_files with pbr) to the target location that rootwrap.conf cares about.


That's why we were proposing adding things to rootwrap.conf that 
oslo.rootwrap can parse and process dynamically using the resource 
access stuff in pkg_resources, so we just say 'I want you to load the 
os-brick.filters file from the os-brick project, thanks.'.
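A sketch of that symbolic lookup, written against importlib.resources (the present-day stand-in for the pkg_resources.resource_filename() call proposed in the thread). The stdlib json package stands in for an installed project such as os-brick, which may not be present on a given system:

```python
import importlib.resources

def filter_file_location(project, name):
    # Resolve where '<name>.filters' would live inside an installed
    # package -- wherever pip/pbr put it, with no hard-coded /etc or
    # /usr/share guesses.
    try:
        pkg_root = importlib.resources.files(project)
    except ModuleNotFoundError:
        return None                  # project isn't installed
    return pkg_root / ('%s.filters' % name)

# 'json' stands in for an installed project like os-brick:
loc = filter_file_location('json', 'os-brick')
print(loc)                                           # .../json/os-brick.filters
print(filter_file_location('no_such_project', 'x'))  # None
```

Because the lookup goes through the import system, it lands on the right path regardless of whether the library was installed by pip, a distro package, or a development checkout.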


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] keystone pluggable model

2015-09-09 Thread Murali Allada
Hi All,


In the IRC meeting yesterday, I brought up this new blueprint I opened.


https://blueprints.launchpad.net/magnum/+spec/pluggable-keystone-model


The goal of this blueprint is to allow magnum operators to integrate with their 
version of keystone easily with downstream patches.


The goal is NOT to implement support for keystone version 2 upstream, but to 
make it easy for operators to integrate with V2 if they need to.


Most of the work required for this is already done in this patch.


https://review.openstack.org/#/c/218699


However, we didn't want to address this change in the same review.


We just need to refactor the code a little further and isolate all
version-specific keystone code in one file.


See my comments in the following review for details on what this change entails.


https://review.openstack.org/#/c/218699/5/magnum/common/clients.py


https://review.openstack.org/#/c/218699/5/magnum/common/keystone.py


Thanks,

Murali
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

