Re: [openstack-dev] [release][all] independent and unmanaged projects should record their releases, too

2016-01-23 Thread Julien Danjou
On Fri, Jan 22 2016, Doug Hellmann wrote:

> The openstack/releases repo is used to produce releases for managed
> projects, but all projects may submit information about their releases
> to have that data published for reference. If you are the release
> liaison for a project, and you are tagging releases directly, please
> also consider adding your release data to openstack/releases to make it
> easy for deployers to find.

What about doing that automatically instead?

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info




Re: [openstack-dev] [release][all] independent and unmanaged projects should record their releases, too

2016-01-23 Thread Doug Hellmann


> On Jan 23, 2016, at 5:51 AM, Julien Danjou wrote:
> 
> On Fri, Jan 22 2016, Doug Hellmann wrote:
>> 
>> The openstack/releases repo is used to produce releases for managed
>> projects, but all projects may submit information about their releases
>> to have that data published for reference. If you are the release
>> liaison for a project, and you are tagging releases directly, please
>> also consider adding your release data to openstack/releases to make it
>> easy for deployers to find.
> 
> What about doing that automatically instead?

We're still working on automating the tagging based on the deliverable info 
that goes into the repo. When that's done, the goal is to use that mechanism 
instead of direct tagging everywhere, so there won't be any need to automate 
patches coming into the repo when projects are tagged. 
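
A rough sketch of what that automation could look like: read a deliverable
file and emit the tags it implies. The YAML layout (releases -> version,
projects -> repo/hash) follows the deliverable schema in openstack/releases;
the tag command itself is illustrative only.

    import yaml  # PyYAML

    with open('deliverables/mitaka/oslo.db.yaml') as f:
        deliverable = yaml.safe_load(f)

    for release in deliverable['releases']:
        for project in release['projects']:
            print('git tag -s %s %s  # in %s' % (
                release['version'], project['hash'], project['repo']))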

Doug

> 
> -- 
> Julien Danjou
> // Free Software hacker
> // https://julien.danjou.info



[openstack-dev] [freezer][agent][bug] Trying Freezerc with no success

2016-01-23 Thread Fausto Marzi
Hi Deklan,
Following up on our conversation about restoring from Swift: yes, you are
right, there's a bug; I've reproduced it.

It can be reproduced when python-swiftclient 2.7.0 is installed on the
system.
In Liberty and Mitaka the requirement is python-swiftclient>=2.2.0.

The issue should be solved by the following change in requirements.txt:
-python-swiftclient>=2.2.0
+python-swiftclient>=2.2.0,<=2.6.0
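
A quick way to check whether an installed environment is affected (a sketch
using pkg_resources, which ships with setuptools):

    import pkg_resources

    swiftclient = pkg_resources.get_distribution('python-swiftclient')
    if swiftclient.parsed_version >= pkg_resources.parse_version('2.7.0'):
        print('Affected by bug 1537364: python-swiftclient %s' %
              swiftclient.version)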

I've opened the following freezer bug:

- https://bugs.launchpad.net/freezer/+bug/1537364

The patches that should solve the issue are:

Mitaka:
- https://review.openstack.org/#/c/271701

Liberty (cherry-pick):
- https://review.openstack.org/#/c/271702/

Many thanks for your effort; you are most welcome in the Freezer Team.
Fausto


Re: [openstack-dev] [Fuel] Stop deployment can break production cluster. How we should avoid it?

2016-01-23 Thread Kyrylo Galanov
Hello,

Why don't we introduce an additional state for nodes, like 're-deploying'?
If deployment is stopped, we don't erase nodes in this state, but change
their status to 'error' or 'ready', for example.
Or we can add a warning message that the 'stop' button would destroy each
and every node.
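
A rough sketch of the first option (the state names and the node API here
are hypothetical):

    def on_stop_deployment(nodes):
        for node in nodes:
            if node.status == 're-deploying':
                # The node was already deployed once: keep its data and
                # just mark the interrupted run.
                node.status = 'error'  # or 'ready'
            elif node.status != 'ready':
                # Current behaviour for nodes that never finished a
                # deployment: erase and return to bootstrap.
                node.erase()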

On Fri, Jan 22, 2016 at 8:15 PM, Vladimir Sharshov wrote:

> Hi!
>
> I also vote for the solution "mark a cluster 'operational' after
> successful deployment". It is simple and guarantees that we do not erase
> main components.
> Also it will free resources to support a stop/rerun (resume) feature on
> task-based deployment, which will work much better (without node
> destruction as a side effect).
>
> On Fri, Jan 22, 2016 at 8:09 PM, Igor Kalnitsky wrote:
>
>> Dmitry,
>>
>> > We can mark a cluster 'operational' after successful deployment. And we
>> > can disable 'stop' button on this kind of clusters.
>>
>> I think this is the best solution so far. Moreover, I don't know how to
>> fix it properly, since there could be a lot of questions about how this
>> button should behave at all.
>>
>> Taking all this into account, I propose to either solve this issue as a
>> blueprint (so we can think through and cover all edge cases in the spec)
>> or drop the stop button functionality altogether.
>>
>> The latter, perhaps, may be a good solution. I don't know how often
>> anyone uses stop deployment.
>>
>>
>> Bogdan,
>>
>> > This is a critical issue: the *worst* of possible situations for
>> > cluster operations. I believe this should be covered by a dedicated
>> > bulletin, the stop action should be disabled in all releases as an
>> > emergency fix, and the issue fixed in the next maintenance updates.
>>
>> It wasn't always the case. Some time ago we didn't execute any tasks
>> on controllers when adding new nodes. It became the case, I assume,
>> in Fuel 8.0, when we started executing netconfig and other puppet
>> tasks on each deployment run.
>>
>> So we need to investigate in which release we introduced the
>> re-execution of some tasks on controllers, and only then think about
>> bulletins.
>>
>>
>> Thanks,
>> Igor
>>
>> On Fri, Jan 22, 2016 at 1:06 PM, Bogdan Dobrelya wrote:
>> > On 22.01.2016 11:45, Dmitry Pyzhov wrote:
>> >> Guys,
>> >>
>> >> There is a tricky bug with our 'stop deployment'
>> >> feature: https://bugs.launchpad.net/fuel/+bug/1529691
>> >>
>> >> It cannot be fixed easily because it is a design flaw. By design we
>> >> cannot leave a node in an unpredictable state, so we move all nodes
>> >> that are not in the ready state back to bootstrap.
>> >>
>> >> But when a user adds a node and deploys the cluster, the system
>> >> reruns puppet on the controllers. If the user presses the 'stop'
>> >> button, the controllers will be erased and the cluster destroyed.
>> >> This is definitely not expected behaviour.
>> >
>> > This is a critical issue: the *worst* of possible situations for
>> > cluster operations. I believe this should be covered by a dedicated
>> > bulletin, the stop action should be disabled in all releases as an
>> > emergency fix, and the issue fixed in the next maintenance updates.
>> >
>> >>
>> >> Taking into account that we are going to rewrite this feature in 9.0
>> >> and we are close to HCF, there is no value in major changes for this
>> >> feature in 8.0. Let's do a simple workaround.
>> >>
>> >> We can mark a cluster 'operational' after successful deployment. And we
>> >> can disable 'stop' button on this kind of clusters.
>> >>
>> >> Any concerns or other proposals?
>> >
>> >
>> > --
>> > Best regards,
>> > Bogdan Dobrelya,
>> > Irc #bogdando

[openstack-dev] [glance][ironic][cinder][nova] 'tar' as an image disk_format

2016-01-23 Thread Brian Rosmaita
Please provide feedback about a proposal to add 'tar' as a new Glance 
disk_format.[0]

The Ironic team is adding support for "OS tarball images" in Mitaka.  This is a 
compressed tar archive of a / (root filesystem). These tarballs are created by 
first installing the OS packages in a chroot and then compressing the chroot as 
tar.*.  The proposal is to store such images as disk_format == tar and 
container_format == bare.
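
For illustration, a rough sketch of producing such a tarball from an
existing chroot (Python's tarfile module; the paths are examples):

    import tarfile

    # Compress the contents of the chroot as the archive root ('/').
    with tarfile.open('rootfs.tar.gz', 'w:gz') as tar:
        tar.add('/srv/chroot', arcname='.')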

Intuitively, 'tar' seems more like a container_format.  The Glance developer 
documentation, however, says that "The container format refers to whether the 
virtual machine image is in a file format that also contains metadata about the 
actual virtual machine."[1]  Under this proposal, there is no such metadata 
included.

The Glance docs say this about disk_format: "The disk format of a virtual 
machine image is the format of the underlying disk image. Virtual appliance 
vendors have different formats for laying out the information contained in a 
virtual machine disk image."[1]  Under this definition, 'tar' as used in this 
proposal [0] does in fact seem to be a disk_format.

There is not currently a 'tar' container format defined for Glance.  The 
closest we have now is 'ova' (an OVA tar archive file) and 'docker' (a Docker 
tar archive of the container filesystem).  And, in fact, 'tar' as a container 
format wouldn't be very helpful, as it doesn't indicate where in the tarball 
the metadata should be found.

The goal here is to come up with an identifier for an "OS tarball image" that's 
acceptable across projects and isn't confusing for people who are creating 
images.
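
If the proposal were accepted, uploading such an image might look like the
following sketch with python-glanceclient ('tar' is not a valid disk_format
today, and the endpoint and token values are placeholders):

    from glanceclient import Client

    glance = Client('2', endpoint='http://controller:9292',
                    token='<auth-token>')
    image = glance.images.create(name='os-tarball-image',
                                 disk_format='tar',       # proposed value
                                 container_format='bare')
    with open('rootfs.tar.gz', 'rb') as f:
        glance.images.upload(image.id, f)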

Thanks in advance for your feedback,
brian

[0] https://bugs.launchpad.net/glance/+bug/1535900
[1] https://github.com/openstack/glance/blob/master/doc/source/formats.rst



[openstack-dev] [release][oslo] oslo.db 4.3.1 release (mitaka)

2016-01-23 Thread davanum
We are delighted to announce the release of:

oslo.db 4.3.1: Oslo Database library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.db

With package available at:

https://pypi.python.org/pypi/oslo.db

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

For more details, please see below.


Changes in oslo.db 4.3.0..4.3.1
-------------------------------

3f22d45 Imported Translations from Zanata
c8c4543 Updated from global requirements
543d577 Fix tests to work under both pymsysql 0.6.2 and 0.7.x
4f5384d Don't log non-db error in retry wrapper
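
For reference, the retry wrapper touched by the last commit above is
oslo_db.api.wrap_db_retry; a minimal usage sketch (the session handling is
illustrative):

    from oslo_db import api as oslo_db_api

    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def create_record(session, values):
        # With the fix above, a non-database error raised here is no
        # longer logged by the wrapper; it simply propagates.
        session.add(values)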

Diffstat (except docs and test files)
-------------------------------------

.../locale/en_GB/LC_MESSAGES/oslo.db-log-error.po  | 14 +--
.../locale/en_GB/LC_MESSAGES/oslo.db-log-info.po   | 14 +--
.../en_GB/LC_MESSAGES/oslo.db-log-warning.po   | 14 +--
oslo.db/locale/en_GB/LC_MESSAGES/oslo.db.po| 19 ---
oslo.db/locale/es/LC_MESSAGES/oslo.db-log-error.po | 14 +--
oslo.db/locale/es/LC_MESSAGES/oslo.db-log-info.po  | 14 +--
.../locale/es/LC_MESSAGES/oslo.db-log-warning.po   | 14 +--
oslo.db/locale/es/LC_MESSAGES/oslo.db.po   | 17 ++---
oslo.db/locale/fr/LC_MESSAGES/oslo.db-log-error.po | 14 +--
oslo.db/locale/fr/LC_MESSAGES/oslo.db-log-info.po  | 14 +--
.../locale/fr/LC_MESSAGES/oslo.db-log-warning.po   | 14 +--
oslo.db/locale/fr/LC_MESSAGES/oslo.db.po   | 17 ++---
oslo.db/locale/oslo.db-log-error.pot   | 18 +++---
oslo_db/api.py |  5 
requirements.txt   | 12 +-
setup.cfg  | 28 +++---
18 files changed, 140 insertions(+), 125 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 5ab0422..855e21e 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,3 +5,3 @@
-pbr>=1.6
-alembic>=0.8.0
-Babel>=1.3
+pbr>=1.6 # Apache-2.0
+alembic>=0.8.0 # MIT
+Babel>=1.3 # BSD
@@ -12,2 +12,2 @@ oslo.utils>=3.2.0 # Apache-2.0
-SQLAlchemy<1.1.0,>=1.0.10
-sqlalchemy-migrate>=0.9.6
+SQLAlchemy<1.1.0,>=1.0.10 # MIT
+sqlalchemy-migrate>=0.9.6 # Apache-2.0
@@ -15 +15 @@ stevedore>=1.5.0 # Apache-2.0
-six>=1.9.0
+six>=1.9.0 # MIT





[openstack-dev] [release][oslo] oslo.versionedobjects 1.4.0 release (mitaka)

2016-01-23 Thread davanum
We are thrilled to announce the release of:

oslo.versionedobjects 1.4.0: Oslo Versioned Objects library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.versionedobjects

With package available at:

https://pypi.python.org/pypi/oslo.versionedobjects

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.versionedobjects

For more details, please see below.


Changes in oslo.versionedobjects 1.3.0..1.4.0
---------------------------------------------

6e801d1 Updated from global requirements
c448d0b Imported Translations from Zanata
b58f864 Updated from global requirements
bca1f60 Move compare_obj to the fixture module for external consumption
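
For reference, a minimal sketch of the newly exported helper (commit
bca1f60); the object definition here is a made-up example:

    import testtools

    from oslo_versionedobjects import base, fields, fixture

    class Dummy(base.VersionedObject):
        fields = {'foo': fields.IntegerField()}

    class DummyTest(testtools.TestCase):
        def test_matches_db_row(self):
            # compare_obj asserts that each field set on the object
            # equals the corresponding key in the dict (e.g. a DB row).
            fixture.compare_obj(self, Dummy(foo=1), {'foo': 1})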

Diffstat (except docs and test files)
-------------------------------------

.../LC_MESSAGES/oslo.versionedobjects-log-error.po | 14 ++--
.../en_GB/LC_MESSAGES/oslo.versionedobjects.po | 55 +++---
.../LC_MESSAGES/oslo.versionedobjects-log-error.po | 14 ++--
.../locale/oslo.versionedobjects-log-error.pot | 19 +++--
.../locale/oslo.versionedobjects.pot   | 48 ++--
oslo_versionedobjects/fixture.py   | 50 +
requirements.txt   | 12 +--
setup.cfg  |  4 +-
test-requirements.txt  |  6 +-
11 files changed, 208 insertions(+), 144 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 671dce3..564d093 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4,2 +4,2 @@
-six>=1.9.0
-Babel>=1.3
+six>=1.9.0 # MIT
+Babel>=1.3 # BSD
@@ -11,2 +11,2 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.2.0 # Apache-2.0
-iso8601>=0.1.9
+oslo.utils>=3.4.0 # Apache-2.0
+iso8601>=0.1.9 # MIT
@@ -15,2 +15,2 @@ oslo.i18n>=1.5.0 # Apache-2.0
-WebOb>=1.2.3
-netaddr!=0.7.16,>=0.7.12
+WebOb>=1.2.3 # MIT
+netaddr!=0.7.16,>=0.7.12 # BSD
diff --git a/test-requirements.txt b/test-requirements.txt
index 03e272e..64d87df 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -6 +6 @@ oslotest>=1.10.0 # Apache-2.0
-testtools>=1.4.0
+testtools>=1.4.0 # MIT
@@ -9 +9 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
@@ -11 +11 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
-coverage>=3.6
+coverage>=3.6 # Apache-2.0





Re: [openstack-dev] [glance][ironic][cinder][nova] 'tar' as an image disk_format

2016-01-23 Thread Clint Byrum
Excerpts from Brian Rosmaita's message of 2016-01-23 06:54:26 -0800:
> Please provide feedback about a proposal to add 'tar' as a new Glance 
> disk_format.[0]
> 
> The Ironic team is adding support for "OS tarball images" in Mitaka.  This is 
> a compressed tar archive of a / (root filesystem). These tarballs are created 
> by first installing the OS packages in a chroot and then compressing the 
> chroot as tar.*.  The proposal is to store such images as disk_format == tar 
> and container_format == bare.
> 
> Intuitively, 'tar' seems more like a container_format.  The Glance developer 
> documentation, however, says that "The container format refers to whether the 
> virtual machine image is in a file format that also contains metadata about 
> the actual virtual machine."[1]  Under this proposal, there is no such 
> metadata included.
> 
> The Glance docs say this about disk_format: "The disk format of a virtual 
> machine image is the format of the underlying disk image. Virtual appliance 
> vendors have different formats for laying out the information contained in a 
> virtual machine disk image."[1]  Under this definition, 'tar' as used in this 
> proposal [0] does in fact seem to be a disk_format.
> 
> There is not currently a 'tar' container format defined for Glance.  The 
> closest we have now is 'ova' (an OVA tar archive file) and 'docker' (a Docker 
> tar archive of the container filesystem).  And, in fact, 'tar' as a container 
> format wouldn't be very helpful, as it doesn't indicate where in the tarball 
> the metadata should be found.
> 
> The goal here is to come up with an identifier for an "OS tarball image" 
> that's acceptable across projects and isn't confusing for people who are 
> creating images.
> 

Seems fine to just have tar be both the container and image format, even
if it means the metadata portion just turns out to be null. Right? The
key is to be able to feed it to hypervisors that can deal with it, and
Ironic is presumably able to deal with it since they're asking for it.



Re: [openstack-dev] [Monasca] alarms based on events

2016-01-23 Thread Premysl Kouril
Hi Roland,

> I don't think it would be difficult to add support for non-periodic 
> metrics/alarms. There are a couple of approaches we could take, so a design 
> discussion would be good to have if you are interested in implementing this. 
> This is a feature that we are not working on right now, but it is on our list 
> to implement in the near future.

Definitely interested, so if there is a discussion I am happy to join
and outline our use cases.

Regards,
Prema



Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-23 Thread Adam Lawson
At the risk of over-simplification, is there ever a reason to NOT enable
jumbo frames in a cloud/SDN context where most of the traffic is between
virtual elements that all support them? I understand that some switches do
not support them and traffic from the web doesn't either, but besides
that, it seems like a default "jumboframes = 1" concept would work just
fine to me.

Then again I'm all about making OpenStack easier to consume so my ideas
tend to gloss over special use cases with special requirements.


*Adam Lawson*

AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Fri, Jan 22, 2016 at 7:13 PM, Matt Kassawara wrote:

> The fun continues, now using an OpenStack deployment on physical hardware
> that supports jumbo frames with 9000 MTU and IPv4/IPv6. This experiment
> still uses Linux bridge for consistency. I'm planning to run similar
> experiments with Open vSwitch and Open Virtual Network (OVN) in the next
> week.
>
> I highly recommend reading further, but here's the TL;DR: Using physical
> network interfaces with MTUs larger than 1500 reveals an additional problem
> with veth pair for the neutron router interface on the public network.
> Additionally, IP protocol version does not impact MTU calculation for
> Linux bridge.
>
> First, review the OpenStack bits and resulting network components in the
> environment [1]. In the first experiment, public cloud network limitations
> prevented truly seeing how Linux bridge (actually the kernel) handles
> physical network interfaces with MTUs larger than 1500. In this experiment,
> we see that it automatically calculates the proper MTU for bridges and
> VXLAN interfaces using the MTU of parent devices. Also, see that a regular
> 'ping' works between the host outside of the deployment and the VM [2].
>
> [1] https://gist.github.com/ionosphere80/a3725066386d8ca4c6d7
> [2] https://gist.github.com/ionosphere80/a8d601a356ac6c6274cb
>
> Note: The tcpdump output in each case references up to six points: neutron
> router gateway on the public network (qg), namespace end of the veth pair
> for the neutron router interface on the private network (qr), bridge end of
> the veth pair for router interface on the private network (tap), controller
> node end of the VXLAN network (underlying interface), compute node end of
> the VXLAN network (underlying interface), and the bridge end of the tap for
> the VM (tap).
>
> In the first experiment, SSH "stuck" because of a MTU mismatch on the veth
> pair between the router namespace and private network bridge. In this
> experiment, SSH works because the VM network interface uses a 1500 MTU and
> all devices along the path between the host and VM use a 1500 or larger
> MTU. So, let's configure the VM network interface to use the proper MTU of
> 9000 minus the VXLAN protocol overhead of 50 bytes... 8950... and try SSH
> again.
>
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc pfifo_fast
> qlen 1000
> link/ether fa:16:3e:46:ac:d3 brd ff:ff:ff:ff:ff:ff
> inet 172.16.1.3/24 brd 172.16.1.255 scope global eth0
> inet6 fd00:100:52:1:f816:3eff:fe46:acd3/64 scope global dynamic
>valid_lft 86395sec preferred_lft 14395sec
> inet6 fe80::f816:3eff:fe46:acd3/64 scope link
>valid_lft forever preferred_lft forever
>
> SSH doesn't work with IPv4 or IPv6. Adding a slight twist to the first
> experiment, I don't even see the large packet traversing the neutron
> router gateway on the public network. So, I began a tcpdump closer to the
> source on the bridge end of the veth pair for the neutron router
> interface on the public network.
>
> Looking at [3], the veth pair between the router namespace and private
> network bridge drops the packet. The MTU changes over a layer-2 connection
> without a router, similar to connecting two switches with different MTUs.
> Even if it could participate in PMTUD, the veth pair lacks an IP address
> and therefore cannot originate ICMP messages.
>
> [3] https://gist.github.com/ionosphere80/ec83d0955c79b05ea381
>
> Using observations from the first experiment, let's configure the MTU of
> the interfaces in the qrouter namespace to match the other end of their
> respective veth pairs. The public network (gateway) interface MTU becomes
> 9000 and the private network router interfaces (IPv4 and IPv6) become 8950.
>
> 2: qr-49b27408-04: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc
> pfifo_fast state UP mode DEFAULT group default qlen 1000
> link/ether fa:16:3e:e5:43:1c brd ff:ff:ff:ff:ff:ff
> 3: qr-b7e0ef22-32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc
> pfifo_fast state UP mode DEFAULT group default qlen 1000
> link/ether fa:16:3e:16:01:92 brd ff:ff:ff:ff:ff:ff
> 4: qg-7bbe8e38-cc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc
> pfifo_fast state UP mode DEFAULT group default qlen 1000
> link/ether fa:16:3e:2b:c1:fd brd ff:ff:ff:ff:ff:ff
>
> Let's ping with a payload size of 8922 for IPv4 and 8902 for IPv6, the
> maximum for a VXLAN segment with 8950

Re: [openstack-dev] [glance][ironic][cinder][nova] 'tar' as an image disk_format

2016-01-23 Thread Duncan Thomas
I guess my question would be 'why'? What does this enable you to do that
you couldn't do with similar ease with the formats we have, and are people
trying to do that frequently?

We've seen in cinder that image formats have a definite security surface
to them, and with glance adding arbitrary conversion pipelines, that
surface is going to increase with every format we add. This means we
should tend towards being increasingly conservative, I think.

We've heard a possible feature, but zero use cases that I can see. Why is
this better than converting your data to a supported format?
On 23 Jan 2016 16:57, "Brian Rosmaita" wrote:

> Please provide feedback about a proposal to add 'tar' as a new Glance
> disk_format.[0]
>
> The Ironic team is adding support for "OS tarball images" in Mitaka.  This
> is a compressed tar archive of a / (root filesystem). These tarballs are
> created by first installing the OS packages in a chroot and then
> compressing the chroot as tar.*.  The proposal is to store such images as
> disk_format == tar and container_format == bare.
>
> Intuitively, 'tar' seems more like a container_format.  The Glance
> developer documentation, however, says that "The container format refers to
> whether the virtual machine image is in a file format that also contains
> metadata about the actual virtual machine."[1]  Under this proposal, there
> is no such metadata included.
>
> The Glance docs say this about disk_format: "The disk format of a virtual
> machine image is the format of the underlying disk image. Virtual appliance
> vendors have different formats for laying out the information contained in
> a virtual machine disk image."[1]  Under this definition, 'tar' as used in
> this proposal [0] does in fact seem to be a disk_format.
>
> There is not currently a 'tar' container format defined for Glance.  The
> closest we have now is 'ova' (an OVA tar archive file) and 'docker' (a
> Docker tar archive of the container filesystem).  And, in fact, 'tar' as a
> container format wouldn't be very helpful, as it doesn't indicate where in
> the tarball the metadata should be found.
>
> The goal here is to come up with an identifier for an "OS tarball image"
> that's acceptable across projects and isn't confusing for people who are
> creating images.
>
> Thanks in advance for your feedback,
> brian
>
> [0] https://bugs.launchpad.net/glance/+bug/1535900
> [1] https://github.com/openstack/glance/blob/master/doc/source/formats.rst


Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-23 Thread Mike Spreitzer
Adam Lawson wrote on 01/23/2016 02:27:46 PM:

> At the risk of over-simplification, is there ever a reason to NOT
> enable jumbo frames in a cloud/SDN context where most of the traffic
> is between virtual elements that all support them? I understand that
> some switches do not support them and traffic from the web doesn't
> either, but besides that, it seems like a default
> "jumboframes = 1" concept would work just fine to me.
> 
> Then again I'm all about making OpenStack easier to consume so my 
> ideas tend to gloss over special use cases with special requirements.

Regardless of the default, there needs to be clear documentation on what 
to do for those of us who cannot use jumbo frames, and it needs to work. 
That goes for production deployers and also for developers using DevStack.

Thanks,
Mike





[openstack-dev] [fuel] facter update has broken fuel-library ci for 3.8

2016-01-23 Thread Alex Schultz
Hey Folks,

So on Friday our CI unit tests for puppet version 3.8 started
failing[0] due to an update to facter, which seems to have issues with
one of our ceph facts[1].  This has blocked up the pipeline, so in
order to unstick it we are looking at updating the ceph
osd_devices_list fact[2] to address the CI failures.  Currently this
issue is preventing the merge of changes to address Bug 1533082[3] and
Bug 1536608[4], which block BVT.

Thanks,
-Alex

[0] https://bugs.launchpad.net/fuel/+bug/1537102
[1] https://tickets.puppetlabs.com/browse/FACT-1318
[2] https://review.openstack.org/#/c/271521/
[3] https://bugs.launchpad.net/fuel/+bug/1533082
[4] https://bugs.launchpad.net/fuel/+bug/1536608



Re: [openstack-dev] [Neutron] MTU configuration pain

2016-01-23 Thread Matt Kassawara
Adam,

Any modern datacenter network, especially those with 10 Gbps or faster
connectivity, should support jumbo frames for performance reasons. However,
depending on the network infrastructure, jumbo frames do not always mean
a 9000 MTU, so neutron should support a configurable value rather than a
boolean. I envision one configuration option containing the physical
network MTU that neutron uses to calculate the MTU of all virtual network
components. Mike... this mechanism should work for any physical network
MTU, large or small.
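
To make the calculation concrete, a minimal sketch with assumed
per-protocol overheads (50 bytes for VXLAN, 42 for GRE, none for
flat/VLAN):

    OVERHEAD = {'flat': 0, 'vlan': 0, 'gre': 42, 'vxlan': 50}

    def network_mtu(physical_mtu, network_type):
        # MTU of a virtual network, derived from the physical network MTU.
        return physical_mtu - OVERHEAD[network_type]

    print(network_mtu(9000, 'vxlan'))  # 8950, as in the experiments above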

Matt

On Sat, Jan 23, 2016 at 3:28 PM, Mike Spreitzer wrote:

> Adam Lawson  wrote on 01/23/2016 02:27:46 PM:
>
> > At the risk of over-simplification, is there ever a reason to NOT
> > enable jumbo frames in a cloud/SDN context where most of the traffic
> > is between virtual elements that all support them? I understand that
> > some switches do not support them and traffic from the web doesn't
> > either, but besides that, it seems like a default
> > "jumboframes = 1" concept would work just fine to me.
> >
> > Then again I'm all about making OpenStack easier to consume so my
> > ideas tend to gloss over special use cases with special requirements.
>
> Regardless of the default, there needs to be clear documentation on what
> to do for those of us who cannot use jumbo frames, and it needs to work.
> That goes for production deployers and also for developers using DevStack.
>
> Thanks,
> Mike
>
>
>


Re: [openstack-dev] [puppet] [oslo] Proposal of adding puppet-oslo to OpenStack

2016-01-23 Thread Xingchao Yu
Hi, all:

I spent some time collecting the oslo.* versions of the OpenStack
projects (those which have a related puppet module); please check them
in the following table:

https://github.com/openstack/puppet-oslo#module-description

From the table, we can see that most oslo.* library versions are the same
across the OpenStack projects (except aodh and gnocchi).

So we could gradually use puppet-oslo to replace the oslo.* configuration
in the related modules.
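
A rough sketch of pulling the gate-tested oslo.* versions from
upper-constraints.txt, as Doug suggests below (the URL and the
'name===version' line format are assumptions based on the
openstack/requirements repo):

    import re

    import requests

    URL = ('http://git.openstack.org/cgit/openstack/requirements'
           '/plain/upper-constraints.txt')

    for line in requests.get(URL).text.splitlines():
        match = re.match(r'(oslo\.[\w.-]+)===(\S+)', line)
        if match:
            print('%s -> %s' % match.groups())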

Thanks & Regards.


2016-01-21 23:58 GMT+08:00 Emilien Macchi:

>
>
> On 01/21/2016 08:15 AM, Doug Hellmann wrote:
> > Excerpts from Cody Herriges's message of 2016-01-19 15:50:05 -0800:
> >> Colleen Murphy wrote:
> >>> On Tue, Jan 19, 2016 at 9:57 AM, Xingchao Yu wrote:
> >>>
> >>> Hi, Emilien:
> >>>
> >>>  Thanks for your efforts on this topic; I didn't attend the V
> >>> release summit and missed the related discussion about puppet-oslo.
> >>>
> >>>  The reason for not using a unified way to manage oslo_*
> >>> parameters is that different oslo_* versions may exist between
> >>> OpenStack projects.
> >>>
> >>>  I have an idea to solve this potential problem: we can maintain
> >>> several versions of puppet-oslo, and each module can map to a
> >>> different version of puppet-oslo.
> >>>
> >>> It would be something as follows (the mapping info is not real,
> >>> just an example):
> >>>
> >>> In Mitaka release
> >>> puppet-nova maps to puppet-oslo with 8.0.0
> >>> puppet-designate maps to puppet-oslo with 7.0.0
> >>> puppet-murano maps to puppet-oslo with 6.0.0
> >>>
> >>> In Newton release
> >>> puppet-nova maps to puppet-oslo with 9.0.0
> >>> puppet-designate maps to puppet-oslo with 9.0.0
> >>> puppet-murano maps to puppet-oslo with 7.0.0
> >>>
> >>> For the simplest case of puppet infrastructure configuration, which
> >>> is a single puppetmaster with one environment, you cannot have
> >>> multiple versions of a single puppet module installed. This means you
> >>> absolutely cannot have an openstack infrastructure depend on having
> >>> different versions of a single module installed. In your example, a
> >>> user would not be able to use both puppet-nova and puppet-designate
> >>> since they are using different versions of the puppet-oslo module.
> >>>
> >>> When we put out puppet modules, we guarantee that version X.x.x of
> >>> a given module works with the same version of every other module,
> >>> and this proposal would totally break that guarantee.
> >>>
> >>
> >> How does OpenStack solve this issue?
> >>
> >> * Do they literally install several different versions of the same
> >> python library?
> >> * Does every project vendor oslo?
> >> * Is the oslo library itself API compatible with older versions?
> >
> > Each Oslo library has its own version. Only one version of each
> > library is installed at a time. We use the global requirements list
> > to sync compatible requirements specifications across all OpenStack
> > projects to make them co-installable. And we try hard to maintain
> > API compatibility, using SemVer versioning to indicate when that
> > was not possible.
> >
> > If you want to have a single puppet module install all of the Oslo
> > libraries, you could pull the right versions from the
> > upper-constraints.txt file in the openstack/requirements repository.
> > That file lists the versions that were actually tested in the gate.
>
> Thanks for this feedback Doug!
> So I propose we create the module in openstack namespace, please vote for:
> https://review.openstack.org/#/c/270872/
>
> I talked with xingchao on IRC (#puppet-openstack) and he's doing the
> project-config patch today.
> Maybe we could start with Nova, Neutron, Cinder, Glance, and Keystone,
> see how it works, and iterate later with other modules.
>
> Thoughts are welcome,
> --
> Emilien Macchi
>
>


-- 
Xingchao Yu


[openstack-dev] [Neutron][Dragonflow] - Status Update and Meetups in China

2016-01-23 Thread Gal Sagie
Hello Everyone,

We are just back from a week in China where we met with various local
community members to discuss the Dragonflow roadmap and design.
We will have a nice number of people from different companies joining us
in the development of Dragonflow, and some are planning to deploy it.

We discussed the following areas which we are going to tackle by April:

1. *Neutron-DF DB Consistency* - This is a problem that most plugins and
other controllers have today: how to make sure that the Neutron DB is
fully synced with the plugin's/solution's DB view of things.
This problem has many parts to it, and I think we came up with a
pretty good plan for it; I will write about it once the design is in
review.

2. *Selective Proactive* - Dragonflow has a local controller on each
compute node, but these controllers don't need to know the entire virtual
topology. The idea is to sync only the relevant information, based on the
local ports and actual topology; we are going to have this by the end of
April.

3. *Pub/Sub mechanism* - This design is in the review process [1]; we
hope to get your comments.

4. *Scale Testing* - We received a lab and HW equipment to perform real
scale testing; we will publish the results once we have them. We are
going to focus on data plane performance, control plane, and DB.

5. *Distributed DNAT* - We already have a spec for this merged [2]; it is
going to be implemented as part of Mitaka using OVS flows only.

6. *Broadcast/Multicast traffic* - This is a painful problem, as also
discussed on this mailing list; we have a nice idea of how to solve it in
Dragonflow and plan to publish a spec for this soon.

Feel free to join our IRC channel #openstack-dragonflow or our weekly IRC
meeting [3], or send back any questions you might have.

Thanks
Gal.

[1] https://review.openstack.org/#/c/263733/
[2]
http://docs.openstack.org/developer/dragonflow/specs/distributed_dnat.html
[3] https://wiki.openstack.org/wiki/Meetings/Dragonflow