Re: [openstack-dev] [trove] Upcoming specs and blueprints for Trove/Mitaka

2015-12-10 Thread Flavio Percoco

On 10/12/15 18:44 +, Vyvial, Craig wrote:

Amrith/Victoria,

Thanks for the heads up about these blueprints for the Mitaka cycle. This 
looks like a lot of work, but there shouldn’t be a reason to hold back new 
blueprints this early in the cycle if they are planned to be completed in Mitaka. 
Can we get these blueprints written up and submitted so that we can get them 
approved by Jan 8th? Due to the holidays, I think this makes sense.

These blueprints should all be complete and merged by M-3 cut date (Feb 29th) 
for the feature freeze.

Let me know if there are concerns around this.



Sorry for jumping in out of the blue, especially as I haven't been
part of the process, but wouldn't it be better for Trove to just skip
having a hard spec freeze in Mitaka and just plan it for N (as Amrith
proposed)?

Having a deadline and then allowing new specs to be proposed (or just a
bunch of freeze exceptions) is not very effective. Deadlines need to
be well planned ahead and thoroughly communicated.

If it was done, I'm sorry. As I mentioned, I wasn't part of the
process and I just happened to have read Amrith's email.

Hope the above makes sense,
Flavio


Thanks,
-Craig

On Dec 10, 2015, at 12:11 PM, Victoria Martínez de la Cruz wrote:

2015-12-10 13:10 GMT-03:00 Amrith Kumar:
Members of the Trove community,

Over the past couple of weeks we have discussed the possibility of an early 
deadline for submission of trove specifications for projects that are to be 
included in the Mitaka release. I understand why we're doing it, and agree with 
the concept. Unfortunately though, there are a number of projects for which 
specifications won't be ready in time for the proposed deadline of Friday 12/11 
(aka tomorrow).

I'd like to note that the following projects are in the works and specifications 
will be submitted as soon as possible. Now that we know of the new process, we 
will all be able to make sure that we are better prepared in time for the N 
release.

Blueprints have been registered for these projects.

The projects in question are:

Cassandra:
   - enable/disable/show root 
(https://blueprints.launchpad.net/trove/+spec/cassandra-database-user-functions)
   - Clustering 
(https://blueprints.launchpad.net/trove/+spec/cassandra-cluster)

MariaDB:
   - Clustering 
(https://blueprints.launchpad.net/trove/+spec/mariadb-clustering)
   - GTID replication 
(https://blueprints.launchpad.net/trove/+spec/mariadb-gtid-replication)

Vertica:
   - Add/Apply license 
(https://blueprints.launchpad.net/trove/+spec/vertica-licensing)
   - User triggered data upload from Swift 
(https://blueprints.launchpad.net/trove/+spec/vertica-bulk-data-load)
   - Cluster grow/shrink 
(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-grow-shrink)
   - Configuration Groups 
(https://blueprints.launchpad.net/trove/+spec/vertica-configuration-groups)
   - Cluster Anti-affinity 
(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-anti-affinity)

Hbase and Hadoop based databases:
   - Extend Trove to Hadoop based databases, starting with HBase 
(https://blueprints.launchpad.net/trove/+spec/hbase-support)

Specifications in the trove-specs repository will be submitted for review as 
soon as they are available.

Thanks,

-amrith



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi all,

I'd like to add the feature "Add backup strategy for Ceph backends" [0] to this 
list.

Thanks,

Victoria

[0] https://blueprints.launchpad.net/trove/+spec/implement-ceph-as-backup-option
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Cleaning Up Deprecation Warnings in the Gate

2015-12-10 Thread Matthew Treinish
Hi Everyone,

So we were having a discussion in #openstack-qa about the amount of logs we have
to process in our elasticsearch cluster, and the fact that we frequently have
issues with the amount of RAM available in the cluster.

In general, it would be good for projects to take a look at how verbose
their logging is, especially on successful tempest runs, to see if that's 
really necessary and whether they can make things a bit more concise and 
targeted (this will help operators too). However, one thing that stood out to me
is that we're getting a lot of deprecation warnings from all our gate jobs.

Using this query to see how many deprecation warnings we emit on jobs running on
master in the past 7 days:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22deprecated%5C%22%20AND%20loglevel:%5C%22WARNING%5C%22%20AND%20build_branch:%5C%22master%5C%22

it found 17576707 hits. I don't really see any reason for us to be running dsvm
jobs on master using deprecated options.

We're looking for some people to volunteer to help with this; it isn't something
that's very difficult to fix. It'll likely just require changing things in
devstack to avoid configuring services with deprecated options. But, a big part
is also going to be going through the logs to find out where we're using
deprecated options.
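
To make this concrete, here is roughly how these warnings get emitted (a
sketch only; the option and file names below are made up, not taken from any
real service):

# Sketch of how oslo.config produces the WARNING lines matched by the
# logstash query above; option and file names are illustrative only.
from oslo_config import cfg

OPTS = [
    cfg.StrOpt('auth_strategy',
               deprecated_name='auth_mode',  # hypothetical old name
               help='Authentication strategy to use.'),
]

conf = cfg.ConfigOpts()
conf.register_opts(OPTS)

# If a devstack-generated config file still sets the old name
# ("auth_mode = keystone"), oslo.config resolves it to auth_strategy but
# logs a deprecation WARNING on every service start -- multiplied across
# services and dsvm jobs, that adds up to millions of hits.
conf(['--config-file', 'service.conf'])
print(conf.auth_strategy)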

Thanks,

Matthew Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Upcoming specs and blueprints for Trove/Mitaka

2015-12-10 Thread Victoria Martínez de la Cruz
2015-12-10 13:10 GMT-03:00 Amrith Kumar :

> Members of the Trove community,
>
> Over the past couple of weeks we have discussed the possibility of an
> early deadline for submission of trove specifications for projects that are
> to be included in the Mitaka release. I understand why we're doing it, and
> agree with the concept. Unfortunately though, there are a number of
> projects for which specifications won't be ready in time for the proposed
> deadline of Friday 12/11 (aka tomorrow).
>
> I'd like to note that the following projects are in the works and
> specifications will be submitted as soon as possible. Now that we know of
> the new process, we will all be able to make sure that we are better
> prepared in time for the N release.
>
> Blueprints have been registered for these projects.
>
> The projects in question are:
>
> Cassandra:
> - enable/disable/show root (
> https://blueprints.launchpad.net/trove/+spec/cassandra-database-user-functions
> )
> - Clustering (
> https://blueprints.launchpad.net/trove/+spec/cassandra-cluster)
>
> MariaDB:
> - Clustering (
> https://blueprints.launchpad.net/trove/+spec/mariadb-clustering)
> - GTID replication (
> https://blueprints.launchpad.net/trove/+spec/mariadb-gtid-replication)
>
> Vertica:
> - Add/Apply license (
> https://blueprints.launchpad.net/trove/+spec/vertica-licensing)
> - User triggered data upload from Swift (
> https://blueprints.launchpad.net/trove/+spec/vertica-bulk-data-load)
> - Cluster grow/shrink (
> https://blueprints.launchpad.net/trove/+spec/vertica-cluster-grow-shrink)
> - Configuration Groups (
> https://blueprints.launchpad.net/trove/+spec/vertica-configuration-groups)
> - Cluster Anti-affinity (
> https://blueprints.launchpad.net/trove/+spec/vertica-cluster-anti-affinity
> )
>
> Hbase and Hadoop based databases:
> - Extend Trove to Hadoop based databases, starting with HBase (
> https://blueprints.launchpad.net/trove/+spec/hbase-support)
>
> Specifications in the trove-specs repository will be submitted for review
> as soon as they are available.
>
> Thanks,
>
> -amrith
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hi all,

I'd like to add the feature "Add backup strategy for Ceph backends" [0] to
this list.

Thanks,

Victoria

[0]
https://blueprints.launchpad.net/trove/+spec/implement-ceph-as-backup-option
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Reg: Blueprint -- add-compute-node-on-the-go

2015-12-10 Thread Clint Byrum
Excerpts from Atul Ag's message of 2015-12-10 06:50:05 -0800:
>  Hi, 
> 
> I have added the blueprint 
> https://blueprints.launchpad.net/nova/+spec/add-compute-node-on-the-go.
> Can you please let me know the feasibility, and accept the blueprint.
> 

Hi Atul, welcome, and thanks for your interest and contribution to the
community!

Nova has a specs+blueprints process. You should definitely read this:

https://wiki.openstack.org/wiki/Blueprints#Blueprints_and_Specs

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-12-10 Thread Joshua Harlow

Unsure what you mean here,

Tooz is already in oslo.

Were you thinking of something else?

D'Angelo, Scott wrote:

Could the work for the tooz variant be leveraged to add a truly distributed 
solution (with the proper tooz distributed backend)? If so, then +1 to this 
idea. Cinder will be implementing a version of tooz-based distributed locks, so 
having it in Oslo someday is a goal, I'd think.


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Wednesday, December 09, 2015 6:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo][all] The lock files saga (and where we can 
go from here)

So,

To try to reach some kind of conclusion here I am wondering if it would
be acceptable to folks (would people even adopt such a change?) if we
(oslo folks/others) provided a new function in say lockutils.py (in
oslo.concurrency) that would let users of oslo.concurrency pick which
kind of lock they would want to use...

The two types would be:

1. A pid based lock, which would *not* be resistant to crashing
processes, it would perhaps use
https://github.com/openstack/pylockfile/blob/master/lockfile/pidlockfile.py
internally. It would be more easily breakable and more easily
introspect-able (by either deleting the file or `cat` the file to see
the pid inside of it).
2. The existing lock that is resistant to crashing processes (it
automatically releases on owner process crash) but is not easily
introspect-able (to know who is using the lock) and is not easily
breakable (aka to forcefully break the lock and release waiters and the
current lock holder).

Would people use these two variants if (oslo) provided them, or would
the status quo exist and nothing much would change?
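
To make the two variants concrete, a rough sketch (lock paths illustrative,
using the pylockfile and fasteners libraries referenced in this thread):

# Variant 1: pid-based. Introspectable (`cat` the file to see the owner's
# pid) and breakable (delete the file), but a crashed owner leaves a
# stale lock behind.
from lockfile.pidlockfile import PIDLockFile

pid_lock = PIDLockFile('/var/lock/myservice.pid')
pid_lock.acquire(timeout=10)
try:
    pass  # critical section
finally:
    pid_lock.release()

# Variant 2: the existing offset/fcntl style lock. Released automatically
# when the owning process dies, but nothing tells you who holds it.
import fasteners

with fasteners.InterProcessLock('/var/lock/myservice.lock'):
    pass  # critical section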

A third possibility is to spend energy using/integrating tooz
distributed locks and treating different processes on the same system as
distributed instances [even though they really are not distributed in
the classical sense]). These locks that tooz supports are already
introspect-able (via various means) and can be broken if needed (work is
in progress to make this breaking process more useable via API).

Thoughts?

-Josh

Clint Byrum wrote:

Excerpts from Joshua Harlow's message of 2015-12-01 09:28:18 -0800:

Sean Dague wrote:

On 12/01/2015 08:08 AM, Duncan Thomas wrote:

On 1 December 2015 at 13:40, Sean Dague wrote:


   The current approach means locks block on their own, are processed in
   the order they come in, but deletes aren't possible. The busy lock would
   mean deletes were normal. Some extra CPU spent on waiting, and lock
   order processing would be non-deterministic. It's trade-offs, but I
   don't know anywhere that we are using locks as queues, so order
   shouldn't matter. The cpu cost on the busy wait versus the lock file
   cleanliness might be worth making. It would also let you actually see
   what's locked from the outside pretty easily.


The cinder locks are very much used as queues in places, e.g. making
delete wait until after an image operation finishes. Given that cinder
can already bring a node into resource issues while doing lots of image
operations concurrently (such as creating lots of bootable volumes at
once) I'd be resistant to anything that makes it worse to solve a
cosmetic issue.

Is that really a queue? Don't do X while Y is a lock. Do X, Y, Z, in
order after W is done is a queue. And what you've explains above about
Don't DELETE while DOING OTHER ACTION, is really just the queue model.

What I mean by treating locks as queues was depending on X, Y, Z
happening in that order after W. With a busy wait approach they might
happen as Y, Z, X or X, Z, B, Y. They will all happen after W is done.
But relative to each other, or to new ops coming in, no real order is
enforced.


So ummm, just so people know the fasteners lock code (and the stuff that
has existed for file locks in oslo.concurrency and prior to that
oslo-incubator...) has never guaranteed the above sequencing.

How it works (and has always worked) is the following:

1. A lock object is created
(https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L85)
2. That lock object acquire is performed
(https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L125)
3. At that point do_open is called to ensure the file exists (if it
exists already it is opened in append mode, so no overwrite happens) and
the lock object has a reference to the file descriptor of that file
(https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L112)
4. A retry loop starts that repeats until either a provided timeout has
elapsed or the lock is acquired; the retry logic you can skip over, but the
code that the retry loop calls is
https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L92

The retry loop (really this loop @

Re: [openstack-dev] Cleaning Up Deprecation Warnings in the Gate

2015-12-10 Thread gord chung



On 10/12/15 12:04 PM, Matthew Treinish wrote:

Using this query to see how many deprecation warnings we emit on jobs running on
master in the past 7 days:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22deprecated%5C%22%20AND%20loglevel:%5C%22WARNING%5C%22%20AND%20build_branch:%5C%22master%5C%22

it found 17576707 hits. I don't really see any reason for us to be running dsvm
jobs on master using deprecated options.

We're looking for some people to volunteer to help with this; it isn't something
that's very difficult to fix.
we're discussing it over at #openstack-keystone. this should stop some 
of them: https://review.openstack.org/#/c/256078/



i think most of them are coming from apache/keystone.txt

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Multiple repos UX

2015-12-10 Thread Vladimir Kozhukalov
Dear colleagues,

At the moment we have several places where we configure multiple rpm/deb
repositories. Those are:

   1. Web UI (cluster settings tab) where we define repos for cluster
   deployment
   2. Fuel-menu (bootstrap section) where we define repos for building
   ubuntu bootstrap image
   3. Fuel-mirror where we define repos that are to be cloned (full or
   partial mirrors)

I'd prefer all these places to provide the same UX. By that I mean that
these components should use the same input data structure like this [0],
i.e. a flat list of fully independent repositories (see an example below).
The first repo in the list is supposed to be the base OS repo (i.e. it contains
base packages like libc).

[
  {
    type: deb,
    url: some-url,
    section: some-section,
    suite: some-suite,
    priority: some-priority
  },
  {
    type: deb,
    url: another-url,
    section: another-section,
    suite: another-suite,
    priority: another-priority
  },
  ...
]

I'd like to focus on the fact that these repositories should be defined
independently (no base url, no base suite, etc.). It makes little sense to
speculate about the consistency of a particular repository; we should only
talk about the consistency of the whole list of repositories together.

I'll try to explain. In the real world we usually deal with sets of
repositories which look like this:

http://archive.ubuntu.com/ubuntu/dists/trusty/
http://archive.ubuntu.com/ubuntu/dists/trusty-updates/
http://archive.ubuntu.com/ubuntu/dists/trusty-security/
http://mirror.fuel-infra.org/mos-repos/ubuntu/8.0/dists/mos8.0/
http://mirror.fuel-infra.org/mos-repos/ubuntu/8.0/dists/mos8.0-updates/
http://mirror.fuel-infra.org/mos-repos/ubuntu/8.0/dists/mos8.0-security/

As you can see, these repositories have common hosts and base suites, and this
tempts us to think that repositories should not be defined separately, which is
wrong. This special case does not break the whole approach; it is just a
special case. Repositories are nothing more than sets of packages that can
depend on each other, forming a tree when taken together. The package
relations are what matter, not the repository location or the suite name. By
parsing the package tree for a set of repositories we can easily figure out
whether the set is consistent or not (e.g. python-packetary allows doing
this).
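
To illustrate the idea (a toy sketch; the data structures are assumed and
this is not the python-packetary API):

# A list of repositories is consistent iff every dependency of every
# package is satisfied by some package in the combined set.
def is_consistent(repos):
    available = {pkg['name']
                 for repo in repos
                 for pkg in repo['packages']}
    return all(dep in available
               for repo in repos
               for pkg in repo['packages']
               for dep in pkg.get('requires', []))

repos = [{'packages': [{'name': 'libc', 'requires': []},
                       {'name': 'nginx', 'requires': ['libc']}]}]
print(is_consistent(repos))  # True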

Taking the above into account, I'd say the UI should allow a user to define
repositories independently, not forcing them to use special patterns like
suite + suite-updates + suite-security, and not forcing repositories to be
located on the same host. That means we should modify the fuel-menu bootstrap
section, which currently allows a user to define a base url that is then used
to form a group of repos (base, base-updates, base-security). Besides, the
current behavior contradicts our use case where we put mosX.Y locally on the
master node while mosX.Y-updates and mosX.Y-security are supposed to be
available online.

What do you guys think of that?


[0]
https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L2006-L2053


Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Upcoming specs and blueprints for Trove/Mitaka

2015-12-10 Thread Vyvial, Craig
Flavio,

So we had quite a few specs in the last cycle that were rushed to be approved, 
and then code rushed as well. This deadline was meant to make sure that would 
not happen again, because it ended up hurting us and we had to get many 
features through with exceptions. This is by no means a hard spec freeze, but 
something to help mitigate the issues we had in the past and put some urgency 
into which specs we would accomplish in the cycle.

Thanks,
Craig

On Dec 10, 2015, at 12:53 PM, Flavio Percoco wrote:

On 10/12/15 18:44 +, Vyvial, Craig wrote:
Amrith/Victoria,

Thanks for the heads up about these blueprints for the Mitaka cycle. This 
looks like a lot of work, but there shouldn’t be a reason to hold back new 
blueprints this early in the cycle if they are planned to be completed in Mitaka. 
Can we get these blueprints written up and submitted so that we can get them 
approved by Jan 8th? Due to the holidays, I think this makes sense.

These blueprints should all be complete and merged by M-3 cut date (Feb 29th) 
for the feature freeze.

Let me know if there are concerns around this.


Sorry for jumping in out of the blue, especially as I haven't been
part of the process, but wouldn't it be better for Trove to just skip
having a hard spec freeze in Mitaka and just plan it for N (as Amrith
proposed)?

Having a deadline and then allowing new specs to be proposed (or just a
bunch of freeze exceptions) is not very effective. Deadlines need to
be well planned ahead and thoroughly communicated.

If it was done, I'm sorry. As I mentioned, I wasn't part of the
process and I just happened to have read Amrith's email.

Hope the above makes sense,
Flavio

Thanks,
-Craig

On Dec 10, 2015, at 12:11 PM, Victoria Martínez de la Cruz wrote:

2015-12-10 13:10 GMT-03:00 Amrith Kumar:
Members of the Trove community,

Over the past couple of weeks we have discussed the possibility of an early 
deadline for submission of trove specifications for projects that are to be 
included in the Mitaka release. I understand why we're doing it, and agree with 
the concept. Unfortunately though, there are a number of projects for which 
specifications won't be ready in time for the proposed deadline of Friday 12/11 
(aka tomorrow).

I'd like to note that the following projects are in the works and specifications 
will be submitted as soon as possible. Now that we know of the new process, we 
will all be able to make sure that we are better prepared in time for the N 
release.

Blueprints have been registered for these projects.

The projects in question are:

Cassandra:
  - enable/disable/show root 
(https://blueprints.launchpad.net/trove/+spec/cassandra-database-user-functions)
  - Clustering 
(https://blueprints.launchpad.net/trove/+spec/cassandra-cluster)

MariaDB:
  - Clustering 
(https://blueprints.launchpad.net/trove/+spec/mariadb-clustering)
  - GTID replication 
(https://blueprints.launchpad.net/trove/+spec/mariadb-gtid-replication)

Vertica:
  - Add/Apply license 
(https://blueprints.launchpad.net/trove/+spec/vertica-licensing)
  - User triggered data upload from Swift 
(https://blueprints.launchpad.net/trove/+spec/vertica-bulk-data-load)
  - Cluster grow/shrink 
(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-grow-shrink)
  - Configuration Groups 
(https://blueprints.launchpad.net/trove/+spec/vertica-configuration-groups)
  - Cluster Anti-affinity 
(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-anti-affinity)

Hbase and Hadoop based databases:
  - Extend Trove to Hadoop based databases, starting with HBase 
(https://blueprints.launchpad.net/trove/+spec/hbase-support)

Specifications in the trove-specs repository will be submitted for review as 
soon as they are available.

Thanks,

-amrith



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi all,

I'd like to add the feature "Add backup strategy for Ceph backends" [0] to this 
list.

Thanks,

Victoria

[0] https://blueprints.launchpad.net/trove/+spec/implement-ceph-as-backup-option
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Upcoming specs and blueprints for Trove/Mitaka

2015-12-10 Thread Amrith Kumar
Flavio,

The issue we had in the last cycle was that a lot of specs and code arrived for 
review late in the process and this posed a challenge. The intent this time 
around was to ensure that there wasn't such a back-end loaded process and that 
people had a good idea of what is coming down the pike. It is more of a traffic 
management solution, and one that we are trying for the first time in this 
cycle. We will get better in the next cycle.

That is my interpretation of the process and the context for my broadcast 
message. Note that this is not a hard spec freeze plus a request for exceptions 
(which seems to be your interpretation). On the contrary, it is a heads-up to 
the rest of the Trove team about what is coming down the pike.
 
Thanks,

-amrith

> -Original Message-
> From: Flavio Percoco [mailto:fla...@redhat.com]
> Sent: Thursday, December 10, 2015 1:53 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [trove] Upcoming specs and blueprints for
> Trove/Mitaka
> 
> On 10/12/15 18:44 +, Vyvial, Craig wrote:
> >Amrith/Victoria,
> >
> >Thanks for the heads up about these blueprints for the Mitaka cycle.
> This looks like a lot of work, but there shouldn’t be a reason to hold back new
> blueprints this early in the cycle if they are planned to be completed in Mitaka.
> Can we get these blueprints written up and submitted so that we can get
> them approved by Jan 8th? Due to the holidays, I think this makes sense.
> >
> >These blueprints should all be complete and merged by M-3 cut date (Feb
> 29th) for the feature freeze.
> >
> >Let me know if there are concerns around this.
> >
> 
> Sorry for jumping in out of the blue, especially as I haven't been part of the
> process, but wouldn't it be better for Trove to just skip having a hard spec
> freeze in Mitaka and just plan it for N (as Amrith
> proposed)?
> 
> Having a deadline and then allowing new specs to be proposed (or just a
> bunch of freeze exceptions) is not very effective. Deadlines need to be well
> planned ahead and thoroughly communicated.
> 
> If it was done, I'm sorry. As I mentioned, I wasn't part of the process and I
> just happened to have read Amrith's email.
> 
> Hope the above makes sense,
> Flavio
> 
> >Thanks,
> >-Craig
> >
> >On Dec 10, 2015, at 12:11 PM, Victoria Martínez de la Cruz
> 
> > wrote:
> >
> >2015-12-10 13:10 GMT-03:00 Amrith Kumar
> >:
> >Members of the Trove community,
> >
> >Over the past couple of weeks we have discussed the possibility of an early
> deadline for submission of trove specifications for projects that are to be
> included in the Mitaka release. I understand why we're doing it, and agree
> with the concept. Unfortunately though, there are a number of projects for
> which specifications won't be ready in time for the proposed deadline of
> Friday 12/11 (aka tomorrow).
> >
> >I'd like to note that the following projects are in the works and specifications 
> >will be submitted as soon as possible. Now that we know of the new process, we
> will all be able to make sure that we are better prepared in time for the N
> release.
> >
> >Blueprints have been registered for these projects.
> >
> >The projects in question are:
> >
> >Cassandra:
> >- enable/disable/show root
> (https://blueprints.launchpad.net/trove/+spec/cassandra-database-user-
> functions)
> >- Clustering
> >(https://blueprints.launchpad.net/trove/+spec/cassandra-cluster)
> >
> >MariaDB:
> >- Clustering (https://blueprints.launchpad.net/trove/+spec/mariadb-
> clustering)
> >- GTID replication
> >(https://blueprints.launchpad.net/trove/+spec/mariadb-gtid-replication)
> >
> >Vertica:
> >- Add/Apply license
> (https://blueprints.launchpad.net/trove/+spec/vertica-licensing)
> >- User triggered data upload from Swift
> (https://blueprints.launchpad.net/trove/+spec/vertica-bulk-data-load)
> >- Cluster grow/shrink
> (https://blueprints.launchpad.net/trove/+spec/vertica-cluster-grow-shrink)
> >- Configuration Groups
> (https://blueprints.launchpad.net/trove/+spec/vertica-configuration-
> groups)
> >- Cluster Anti-affinity
> >(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-anti-affi
> >nity)
> >
> >Hbase and Hadoop based databases:
> >- Extend Trove to Hadoop based databases, starting with HBase
> >(https://blueprints.launchpad.net/trove/+spec/hbase-support)
> >
> >Specifications in the trove-specs repository will be submitted for review as
> soon as they are available.
> >
> >Thanks,
> >
> >-amrith
> >
> >
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: OpenStack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [infra][release][all] Automatic .ics generation for OpenStack's and project's deadlines

2015-12-10 Thread Flavio Percoco

Greetings,

I'd like to explore the possibility of having .ics generated - pretty
much the same way we generate it for irc-meetings - for the OpenStack
release schedule and project's deadlines. I believe just 1 calendar
would be enough but I'd be ok w/ a per-project .ics too.


With the new home for the release schedule, and it being a good place
for projects to add their own deadlines as well, I believe it would be
good for people who use calendars to have these .ics files generated
and linked there as well.

Has this been attempted? Any objections? Is there something I'm not
considering?

Cheers,
Flavio

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][release][all] Automatic .ics generation for OpenStack's and project's deadlines

2015-12-10 Thread Jeremy Stanley
On 2015-12-10 18:20:44 + (+), Flavio Percoco wrote:
> I'd like to explore the possibility of having .ics generated - pretty
> much the same way we generate it for irc-meetings - for the OpenStack
> release schedule and project's deadlines. I believe just 1 calendar
> would be enough but I'd be ok w/  a per-project .ics too.

Sounds great to me!

> With the new home for the release schedule, and it being a good place
> for projects to add their own deadlines as well, I believe it would be
> good for people that use calendars to have these .ics being generated
> and linked there as well.

Yep, shouldn't be too hard to add, I expect.

> Has this been attempted? Any objections? Is there something I'm not
> considering?

I'm not aware of any work so far to that end, but have a feeling you
could reuse http://git.openstack.org/cgit/openstack-infra/yaml2ical
and make sure the schedule is maintained in a compatible YAML
layout. You might need to write/tweak a Sphinx extension to
transform it into RST so it can be embedded into the rendered
version of the schedule, but if so you can get inspiration from
https://git.openstack.org/cgit/openstack/ossa/tree/doc/source/_exts/vmt.py
which does very similar things.
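
For instance, the rendering step could be as small as something like this
(the YAML layout here is invented for illustration, not an existing
openstack-infra format):

# Render a hypothetical release-schedule YAML into minimal iCalendar text.
import yaml

SCHEDULE = """
cycle: mitaka
deadlines:
  - name: mitaka-2 milestone
    date: 2016-01-21
  - name: final release
    date: 2016-04-07
"""

data = yaml.safe_load(SCHEDULE)  # YAML parses the dates as date objects
lines = ['BEGIN:VCALENDAR', 'VERSION:2.0']
for d in data['deadlines']:
    lines += ['BEGIN:VEVENT',
              'SUMMARY:%s (%s)' % (d['name'], data['cycle']),
              'DTSTART;VALUE=DATE:%s' % d['date'].strftime('%Y%m%d'),
              'END:VEVENT']
lines.append('END:VCALENDAR')
print('\n'.join(lines))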
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Testing concerns around boot from UEFI spec

2015-12-10 Thread Ren, Qiaowei

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Friday, December 4, 2015 9:47 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Testing concerns around boot from UEFI
> spec
> 
> On 12/04/2015 08:34 AM, Daniel P. Berrange wrote:
> > On Fri, Dec 04, 2015 at 07:43:41AM -0500, Sean Dague wrote:
> >> Can someone explain the licensing issue here? The Fedora comments
> >> make this sound like this is something that's not likely to end up in 
> >> distros.
> >
> > The EDK codebase contains a FAT driver which has a license that
> > forbids reusing the code outside of the EDK project.
> >
> > [quote]
> > Additional terms: In addition to the forgoing, redistribution and use
> > of the code is conditioned upon the FAT 32 File System Driver and all
> > derivative works thereof being used for and designed only to read
> > and/or write to a file system that is directly managed by Intel's
> > Extensible Firmware Initiative (EFI) Specification v. 1.0 and later
> > and/or the Unified Extensible Firmware Interface (UEFI) Forum's UEFI
> > Specifications v.2.0 and later (together the "UEFI Specifications");
> > only as necessary to emulate an implementation of the UEFI
> > Specifications; and to create firmware, applications, utilities and/or 
> > drivers.
> > [/quote]
> >
> > So while the code is open source, it is under a non-free license,
> > hence Fedora will not ship it. For RHEL we're reluctantly choosing to
> > ship it as an exception to our normal policy, since it's the only
> > immediate way to make UEFI support available on x86 & aarch64
> >
> > So I don't think the license is a reason to refuse to allow the UEFI
> > feature into Nova though, nor should it prevent us using the current
> > EDK bios in CI for testing purposes. It is really just an issue for
> > distros which only want 100% free software.
> 
> For upstream CI that's also a bar that's set. So for 3rd party, it would 
> probably be
> fine, but upstream won't happen.
> 

Sorry, is there any decision about this? If third-party CI needs to be added, we 
could also work on it. BTW, if so, the patches cannot be merged until the 
third-party CI is working, right?

Thanks,
Qiaowei

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Improving Mistral pep8 rules files to match Mistral guidelines

2015-12-10 Thread ELISHA, Moshe (Moshe)
Thanks, Anastasia!

Who can start documenting the rules? I remember only a few rules and I 
don’t know all the nuances.
For example, if the return statement is the only statement of a function – do 
you still need a blank line before it?

Once the rules doc is available I can work on adding these rules to our 
pep8.
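
For example, the blank-line-before-return rule could become a local hacking
check along these lines (a sketch modeled on the Rally hacking layout linked
below; the M301 code is invented):

# Local flake8/hacking check; pep8 passes logical_line and blank_before
# to any check function that asks for them.
def check_blank_line_before_return(logical_line, blank_before):
    """M301 - require a blank line before a return statement."""
    if logical_line.startswith('return') and blank_before == 0:
        yield 0, 'M301 missing blank line before return'


def factory(register):
    # Wired up via a [hacking] local-check-factory entry in tox.ini.
    register(check_blank_line_before_return)

Note that as written this would also flag a return that is the only statement
in a function, which is exactly the kind of nuance the rules doc should pin
down first.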


From: Anastasia Kuznetsova [mailto:akuznets...@mirantis.com]
Sent: Wednesday, December 09, 2015 1:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [mistral] Improving Mistral pep8 rules files to 
match Mistral guidelines

Hi Moshe,

Great idea!

It is possible to prepare some additional code checks; for example, you can take 
a look at how it was done in the Rally project [1].
Before starting such work in Mistral, I guess that we can describe our additional 
code style rules in our official docs (somewhere in the "Developer Guide" section 
[2]).

[1] https://github.com/openstack/rally/tree/master/tests/hacking
[2] http://docs.openstack.org/developer/mistral/#developer-guide

On Wed, Dec 9, 2015 at 11:21 AM, ELISHA, Moshe (Moshe) 
> wrote:
Hi all,

Is it possible to add all / some of the special guidelines of Mistral (like 
blank line before return, period at end of comment, …) to our pep8 rules file?

This can save a lot of time for both committers and reviewers.

Thanks!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Best regards,
Anastasia Kuznetsova
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Dependencies of snapshots on volumes

2015-12-10 Thread Duncan Thomas
On 10 December 2015 at 04:48, Li, Xiaoyan  wrote:

>
> This leads to a problem like extending a volume. Extending a volume that has
> an incremental snapshot fails in vendor storage. And then the cinder volume
> goes into error_extending status. In my opinion this is not good.
>

In any case where you see this, please file a bug. If you have time, please
propose a tempest test to check that this functionality works on all
drivers - all drivers *must* support the common Cinder API, the core
behaviours are not optional (though they are poorly documented and will
take time to fix).

I'm just starting a week's vacation, but I'll push up a tempest test for
this case when I get back if nobody else has, then we can see which CIs
fail and start raising bugs.
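
A sketch of the kind of test I have in mind (helper names follow tempest's
volume test style; this is not a merged test):

# Extending a volume must keep working once a snapshot exists, on every
# driver, since extend is part of the core Cinder API.
from tempest.api.volume import base


class VolumesExtendWithSnapshotTest(base.BaseVolumeTest):

    def test_extend_volume_with_snapshot(self):
        volume = self.create_volume(size=1)
        self.create_snapshot(volume_id=volume['id'])
        self.volumes_client.extend_volume(volume['id'], new_size=2)
        self.volumes_client.wait_for_volume_status(volume['id'], 'available')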


-- 
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] neutron metadata-agent HA

2015-12-10 Thread Alvise Dorigo

Hi,
I've installed the Kilo release of OpenStack. An interesting thing, for us, 
is the new highly available Neutron L3 agent, which - as far as I 
understand - can be set in active/active mode.
But I verified what is reported in the HA Guide 
(http://docs.openstack.org/ha-guide/networking-ha-metadata.html): 
metadata-agent cannot be configured in high availability active/active mode.


Below that sentence, I read:

"[TODO: Update this information. Can this service now be made HA in 
active/active mode or do we need to pull in the instructions to run this 
service in active/passive mode?]"


So my question is: is there any progress on this topic? Is there a way 
(something like a cronjob script) to make the metadata-agent redundant 
without involving the clustering software Pacemaker/Corosync?


Thanks,

Alvise

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-infra] Using Neutron client in the gate

2015-12-10 Thread Gal Sagie
Hello All,

I would like to run some "fullstack" integration tests for Kuryr and run
them in the gate.
For the tests I would like to use the Neutron client for communicating with
the working devstack Neutron service.

What is the best way to instantiate the client in terms of auth_url and
credentials?
(I saw 'secretadmin' is used as the admin password, but wondered if using
hard-coded args is the best approach)
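
One option I'm considering is building the client from the OS_* environment
variables that devstack already exports, rather than hard-coding anything (a
sketch, assuming devstack's default domains):

import os

from keystoneauth1 import identity
from keystoneauth1 import session
from neutronclient.v2_0 import client

# Credentials come from the environment devstack sets up (openrc).
auth = identity.Password(
    auth_url=os.environ['OS_AUTH_URL'],
    username=os.environ['OS_USERNAME'],
    password=os.environ['OS_PASSWORD'],
    project_name=os.environ['OS_PROJECT_NAME'],
    user_domain_id='default',
    project_domain_id='default')
neutron = client.Client(session=session.Session(auth=auth))
print(neutron.list_networks())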

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Feature Freeze Exceptions

2015-12-10 Thread Mike Scherbakov
Update after today's IRC meeting [1]:


[1]
http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-12-10-16.00.html


   1. [MERGED] CentOS 7. ETA: Monday 7th. Blueprint:
   https://blueprints.launchpad.net/fuel/+spec/master-on-centos7.
   2. [MOVED to 9.0] Disable queue mirroring for RPC queues in RabbitMQ.
   
https://blueprints.launchpad.net/fuel/+spec/rabbitmq-disable-mirroring-for-rpc
   3. [In progress] Task based deployment with Astute. ETA: Friday, 11th.
   https://blueprints.launchpad.net/fuel/+spec/task-based-deployment-astute.
   Decision is that new code must be disabled by default.
   4. [MERGED] Component Registry. ETA: Wednesday, 10th. Only
   https://review.openstack.org/#/c/246889/ is given with an exception. BP:
   https://blueprints.launchpad.net/fuel/+spec/component-registry
   5. [MERGED] Add vmware cluster after deployment. ETA: Tuesday, 8th. Only
   this patch is given with an exception:
   https://review.openstack.org/#/c/251278/. BP:
   https://blueprints.launchpad.net/fuel/+spec/add-vmware-clusters
   6. [MERGED] Support murano service broker. ETA: Tuesday, 8th. Only this
   patch is given an exception: https://review.openstack.org/#/c/252356.
   BP:
   
https://blueprints.launchpad.net/fuel/+spec/implement-support-for-murano-service-broker
   7. [MERGED] Ubuntu boostrap. Two patches requested for FFE:
   https://review.openstack.org/#/c/250504/,
   https://review.openstack.org/#/c/251873/. Both are merged. So I consider
   that this is actually done.

It's unfortunate that we were not able to complete "disable rabbitmq
mirroring". Let's make sure that we land it as soon as master opens for new
release. I'm especially excited to see that we were able to get CentOS 7
support, as the most challenging thing!


On Wed, Dec 2, 2015 at 10:47 AM Mike Scherbakov 
wrote:

> Hi all,
> we ran a meeting and made a decision on feature freeze exceptions. Full
> log is here:
> https://etherpad.openstack.org/p/fuel-8.0-FF-meeting
>
> The following features were granted with feature freeze exception:
>
>1. CentOS 7. ETA: Monday 7th. Blueprint:
>https://blueprints.launchpad.net/fuel/+spec/master-on-centos7. We have
>rather complicated plan on merges here:
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081026.html
>2. Disable queue mirroring for RPC queues in RabbitMQ. ETA: didn't
>define. As fairly small patch is related, I propose Monday, 7th to be the
>last day.
>
> https://blueprints.launchpad.net/fuel/+spec/rabbitmq-disable-mirroring-for-rpc
>3. Task based deployment with Astute. ETA: Friday, 11th.
>https://blueprints.launchpad.net/fuel/+spec/task-based-deployment-astute.
>Decision is that new code must be disabled by default.
>4. Component Registry. ETA: Wednesday, 10th. Only
>https://review.openstack.org/#/c/246889/ is given with an exception.
>BP: https://blueprints.launchpad.net/fuel/+spec/component-registry
>5. Add vmware cluster after deployment. ETA: Tuesday, 8th. Only this
>patch is given with an exception:
>https://review.openstack.org/#/c/251278/. BP:
>https://blueprints.launchpad.net/fuel/+spec/add-vmware-clusters
>6. Support murano service broker. ETA: Tuesday, 8th. Only this patch
>is given an exception: https://review.openstack.org/#/c/252356. BP:
>
> https://blueprints.launchpad.net/fuel/+spec/implement-support-for-murano-service-broker
>7. Ubuntu boostrap. Two patches requested for FFE:
>https://review.openstack.org/#/c/250504/,
>https://review.openstack.org/#/c/251873/. Both are merged. So I
>consider that this is actually done.
>
> I'm calling everyone to update blueprints status. I'm volunteering to go
> over open blueprints targeting 8.0 tomorrow, and move all which are not in
> "Implemented" status unless those are exceptions or test/docs related
> things.
>
> Thanks all for keeping a focused efforts on getting code into master. I
> strongly suggest that we don't push any exception further down, and if
> something is not done by second deadline - it has to be disabled / reverted
> in 8.0.
> --
> Mike Scherbakov
> #mihgen
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Mid Cycle Sprint

2015-12-10 Thread Amy Marrich
I'd be game to join in if it's in San Antonio. While I'd love to go to London, I 
don't think I'd make it.

Like Major I'd like to see some doc work.

Amy Marrich

From: Jesse Pretorius 
Sent: Wednesday, December 9, 2015 6:45:56 AM
To: openstack-dev@lists.openstack.org; openstack-operat...@lists.openstack.org
Subject: [openstack-dev] [openstack-ansible] Mid Cycle Sprint

Hi everyone,

At the Mitaka design summit in Tokyo we had some corridor discussions about 
doing a mid-cycle meetup for the purpose of continuing some design discussions 
and doing some specific sprint work.

***
I'd like indications of who would like to attend and what 
locations/dates/topics/sprints would be of interest to you.
***

For guidance/background I've put some notes together below:

Location

We have contributors, deployers and downstream consumers across the globe so 
picking a venue is difficult. Rackspace have facilities in the UK (Hayes, West 
London) and in the US (San Antonio) and are happy for us to make use of them.

Dates
-
Most of the mid-cycles for upstream OpenStack projects are being held in 
January. The Operators mid-cycle is on February 15-16.

As I feel that it's important that we're all as involved as possible in these 
events, I would suggest that we schedule ours after the Operators mid-cycle.

It strikes me that it may be useful to do our mid-cycle immediately after the 
Ops mid-cycle, and do it in the UK. This may help to optimise travel for many 
of us.

Format
--
The format of the summit is really for us to choose, but typically they're 
formatted along the lines of something like this:

Day 1: Big group discussions similar in format to sessions at the design summit.

Day 2: Collaborative code reviews, usually performed on a projector, where the 
goal is to merge things that day (if a review needs more than a single 
iteration, we skip it. If a review needs small revisions, we do them on the 
spot).

Day 3: Small group / pair programming.

Topics
--
Some topics/sprints that come to mind that we could explore/do are:
 - Install Guide Documentation Improvement [1]
 - Development Documentation Improvement (best practises, testing, how to 
develop a new role, etc)
 - Upgrade Framework [2]
 - Multi-OS Support [3]

[1] https://etherpad.openstack.org/p/oa-install-docs
[2] https://etherpad.openstack.org/p/openstack-ansible-upgrade-framework
[3] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support

--
Jesse Pretorius
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Containerize Flannel/Etcd

2015-12-10 Thread Daneyon Hansen (danehans)
All,

As a follow-on from today's networking subteam meeting, I received additional 
feedback from the Kubernetes community about running Etcd and Flannel in a 
container. Etcd/Flannel were containerized to simply the N different ways that 
these services can be deployed. It simplifies Kubernetes documentation and 
reduces support requirements. Since our current approach of pkg+template has 
worked well, I suggest we do not containerize Etcd and Flannel [1]. IMO the 
benefits of containerizing these services does not outweigh the additional 
complexity of running a 2nd "bootstrap" Docker daemon.

Since our current Flannel version contains a bug [2] that breaks VXLAN, I 
suggest the Flannel package be upgraded from version 0.5.0 to 0.5.3 in all 
images running Flannel.

[1] https://review.openstack.org/#/c/249503/
[2] https://bugs.launchpad.net/magnum/+bug/1518602

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mitaka Infra Sprint

2015-12-10 Thread Elizabeth K. Joseph
On Thu, Dec 10, 2015 at 3:24 AM, Joshua Hesketh wrote:
> Location:
> HPE Fort Collins Colorado Office

Thanks Josh, I've confirmed the address and building number:

 Hewlett-Packard Enterprise - Building 6
 3404 E Harmony Road
 Fort Collins, CO, 80528

Wiki has also been updated.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] bugfix for "Fix concurrency issues by using READ_COMMITTED" unveils / creates a different bug

2015-12-10 Thread Clint Byrum
Excerpts from ELISHA, Moshe (Moshe)'s message of 2015-12-07 08:29:44 -0800:
> Hi all,
> 
> The current bugfix I am working on [1] has unveiled / created a bug.
> Test "WorkflowResumeTest.test_resume_different_task_states" sometimes fails 
> because "task4" is executed twice instead of once (See unit test output and 
> workflow below).
> 
> This happens because task2 on-complete is running task4 as expected but also 
> task3 executes task4 by mistake.
> 
> It is not consistent but it happens quite often - This happens if the unit 
> test resumes the WF and updates action execution of task2 and finishes task2 
> before task3 is finished.
> Scenario:
> 
> 
> 1.   Task2 in method on_action_complete - changes task2 state to RUNNING.
> 
> 2.   Task3 in method on_action_complete - changes task2 state to RUNNING 
> (before task2 calls _on_task_state_change).
> 
> 3.   Task3 in "_on_task_state_change" > "continue_workflow" > 
> "DirectWorkflowController ._find_next_commands" - it finds task2 because 
> task2 is in SUCCESS and processed = False and 
> "_find_next_commands_for_task(task2)" returns task4.
> 
> 4.   Task3 executes command to RunTask task4.
> 
> 5.   Task2 in "_on_task_state_change" > "continue_workflow" > 
> "DirectWorkflowController ._find_next_commands" - it finds task2 because 
> task2 is in SUCCESS and processed = False and 
> "_find_next_commands_for_task(task2)" returns task4.
> 
> 6.   Task2 executes command to RunTask task4.
> 
> 
> [1] - https://review.openstack.org/#/c/253819/
> 
> 
> If I am not mistaken - this issue also exists in the current code and my 
> bugfix only made it happen much more often. Can you confirm?
> I don't have enough knowledge on how to fix this issue...
> For now - I have modified the test_resume_different_task_states unit test to 
> wait for task3 to be processed before updating the action execution of task2.
> If you agree this bug exist today as well - we can proceed with my bugfix and 
> open a different bug for that issue.
> 

I'd agree that this is likely happening more reliably because READ
COMMITTED will just give you the state that causes the bug more often
than REPEATABLE READ, because now if you happen to have threads running
at the same time when the new vulnerable state is reached, they can both
see the new state and react to it.  Before they only had that problem if
they both started after the enabling state was in the db, thus sharing
the same db snapshot.

What you actually need is to atomically claim the rows. You have to do it
like this:

UPDATE whatever_table SET executor = 'me' WHERE executor IS NULL;

And if you get 0 rows updated, that means somebody else claimed it and
you do nothing. Note that you also need some liveness testing in this
system, since if your executor dies, that row will be lost forever. In
Heat they solved it by having a queue for each executor and pinging on
oslo.messaging. Please don't do that.
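
In SQLAlchemy terms the claim looks something like this (table and column
names are assumed for illustration):

# Atomic claim: only one executor's UPDATE matches the NULL row, and
# rowcount tells you whether it was yours.
from sqlalchemy import create_engine, text

engine = create_engine('mysql+pymysql://user:pw@db/mistral')
task_id = 42  # illustrative

with engine.begin() as conn:
    result = conn.execute(
        text('UPDATE task_executions SET executor = :me '
             'WHERE id = :id AND executor IS NULL'),
        {'me': 'executor-1', 'id': task_id})
claimed = result.rowcount == 1  # 0 rows updated => someone else won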

I suggest instead switching to tooz, and joining the distributed lock
revolution, where zookeeper will give you a nice atomic distributed lock
for this, and detect when to break it because of a dead executor. (or
consul or etcd once our fine community finishes landing those)
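
A minimal sketch of the tooz version (backend URL and names invented):

# With the zookeeper backend the lock is tied to the coordinator's
# session, so a dead executor releases it automatically.
from tooz import coordination

coordinator = coordination.get_coordinator(
    'zookeeper://127.0.0.1:2181', b'executor-1')
coordinator.start()

lock = coordinator.get_lock(b'task-42')
if lock.acquire(blocking=False):
    try:
        pass  # run the task
    finally:
        lock.release()
coordinator.stop()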

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cleaning Up Deprecation Warnings in the Gate

2015-12-10 Thread Matthew Treinish
On Thu, Dec 10, 2015 at 03:03:01PM -0500, gord chung wrote:
> 
> 
> On 10/12/15 12:04 PM, Matthew Treinish wrote:
> >Using this query to see how many deprecation warnings we emit on jobs running 
> >on
> >master in the past 7 days:
> >
> >http://logstash.openstack.org/#dashboard/file/logstash.json?query=message:%5C%22deprecated%5C%22%20AND%20loglevel:%5C%22WARNING%5C%22%20AND%20build_branch:%5C%22master%5C%22
> >
> >it found 17576707 hits. I don't really see any reason for us to be running 
> >dsvm
> >jobs on master using deprecated options.
> >
> >We're looking for some people to volunteer to help with this; it isn't 
> >something
> >that's very difficult to fix.
> we're discussing it over at #openstack-keystone. this should stop some of
> them: https://review.openstack.org/#/c/256078/
> 
> 
> i think most of them are coming from apache/keystone.txt
> 

Cool, thanks for getting the ball rolling with this. A good chunk of them were
definitely coming from keystone. Although in the case of keystone eventlet we're
still running the postgres-full jobs with keystone in eventlet, so we'll still
have deprecation warnings there. (maybe we should stop using eventlet there)
But, this will definitely remove them from the other jobs.

I created a gerrit topic for all these patches:

https://review.openstack.org/#/q/status:open+project:openstack-dev/devstack+branch:master+topic:stop-using-deprecated-options,n,z

if people are going to push up other changes to do this feel free to add
them there so we can review them from one place.

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mesos Conductor using container-create operations

2015-12-10 Thread Ton Ngo
I think extending the container object to Mesos via a command like
container-create is a fine idea.  Going into details, however, we run into
some complications.
1. The user would still have to choose a DSL to express the container.
This would have to be a kube and/or swarm DSL since we don't want to invent
a new one.
2. For the Mesos bay in particular, kube or swarm may be running on top of
Mesos alongside Marathon, so somewhere along the line, Magnum has to
be able to make the distinction and handle things appropriately.

We should think through the scenarios carefully to come to agreement on how
this would work.

Ton Ngo,




From:   Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   12/09/2015 03:09 PM
Subject:Re: [openstack-dev] Mesos Conductor using container-create
operations



As Bharath mentioned, I am +1 to extend the “container” object to Mesos
bay. In addition, I propose to extend “container” to k8s as well (the
details are described in this BP [1]). The goal is to promote this API
resource to be technology-agnostic and make it portable across all COEs. I
am going to justify this proposal with a use case.

Use case:
I have an app. I used to deploy my app to a VM in OpenStack. Right now, I
want to deploy my app to a container. I have basic knowledge of containers
but am not familiar with specific container tech. I want a simple and
intuitive API to operate a container (i.e. CRUD), like how I operated a VM
before. I find it hard to learn the DSL introduced by a specific COE
(k8s/marathon). Most importantly, I want my deployment to be portable
regardless of the choice of cluster management system and/or container
runtime. I want OpenStack to be the only integration point, because I don’t
want to be locked-in to specific container tech. I want to avoid the risk
that a specific container tech being replaced by another in the future.
Optimally, I want Keystone to be the only authentication system that I need
to deal with. I don't want the extra complexity of dealing with an additional
authentication system introduced by a specific COE.

Solution:
Implement "container" object for k8s and mesos bay (and all the COEs
introduced in the future).

That's it. I would appreciate if you can share your thoughts on this
proposal.

[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers

Best regards,
Hongbin

From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
Sent: December-08-15 11:40 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Mesos Conductor using container-create operations

Hi,

As we discussed in the last meeting, we cannot continue with changes in
container-create [1] unless we have a suitable use case. But I honestly
feel we should have some kind of support for mesos + marathon apps, because magnum
supports COE related functionalities for docker swarm (container-create)
and k8s (pod-create, rc-create..) but not for mesos bays.

As hongbin suggested, we can use the existing functionality of container-create
and support it in mesos-conductor. Currently we have container-create only for
the docker swarm bay. Let's have support for the same command for the mesos bay
without any changes on the client side.

Let me know your suggestions.

Regards
Bharath T
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Testing, Rally and Wiki

2015-12-10 Thread Boris Pavlovic
Hi Gal,


> We are also working on combining Rally testing with Kuryr and for that we
> are going to
> introduce Docker context plugin and client and other parts that are
> probably needed by other projects (like Magnum)
> I think it would be great if we can combine forces on this.


What is this context going to do?


Best regards,
Boris Pavlovic

On Thu, Dec 10, 2015 at 6:11 AM, Gal Sagie  wrote:

> Hello everyone,
>
> As some of you have already noticed one of the top priorities for Kuryr
> this cycle is to get
> our CI and gate testing done.
>
> I have been working on creating the base for adding integration tests that
> will run
> in the gate in addition to our unit tests and functional testing.
>
> If you would like to join and help this effort, please stop by
> #openstack-kuryr or email
> me back.
>
> We are also working on combining Rally testing with Kuryr and for that we
> are going to
> introduce a Docker context plugin and client and other parts that are
> probably needed by other projects (like Magnum).
> I think it would be great if we can combine forces on this.
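>
> For reference, a Rally context plugin generally has this shape (a bare
> sketch of the rally.task.context plugin interface; the Docker specifics
> are still to be designed):
>
>     from rally.task import context
>
>     @context.configure(name="docker_images", order=1000)
>     class DockerImageContext(context.Context):
>         """Prepare Docker resources before scenarios run."""
>
>         def setup(self):
>             # e.g. pull images / create a Docker client for scenarios
>             pass
>
>         def cleanup(self):
>             # remove whatever setup() created
>             pass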
>
> I have also created Kuryr Wiki:
> https://wiki.openstack.org/wiki/Kuryr
>
> Feel free to edit and add needed information.
>
>
> Thanks all
> Gal.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] [Nova] continuing the "multiple compute host" discussion

2015-12-10 Thread Devananda van der Veen
All,

I'm going to attempt to summarize a discussion that's been going on for
over a year now, and still remains unresolved.

TLDR;


The main touch-point between Nova and Ironic continues to be a pain point,
and despite many discussions between the teams over the last year resulting
in a solid proposal, we have not been able to get consensus on a solution
that meets everyone's needs.

Some folks are asking us to implement a non-virtualization-centric
scheduler / resource tracker in Nova, or advocating that we wait for the
Nova scheduler to be split-out into a separate project. I do not believe
the Nova team is interested in the former, I do not want to wait for the
latter, and I do not believe that either one will be an adequate solution
-- there are other clients (besides Nova) that need to schedule workloads
on Ironic.

We need to decide on a path of least pain and then proceed. I really want
to get this done in Mitaka.


Long version:
-

During Liberty, Jim and I worked with Jay Pipes and others on the Nova team
to come up with a plan. That plan was proposed in a Nova spec [1] and
approved in October, shortly before the Mitaka summit. It got significant
reviews from the Ironic team, since it is predicated on work being done in
Ironic to expose a new "reservations" API endpoint. The details of that
Ironic change were proposed separately [2] but have deadlocked. Discussions
with some operators at and after the Mitaka summit have highlighted a
problem with this plan.

Actually, more than one, so to better understand the divergent viewpoints
that result in the current deadlock, I drew a diagram [3]. If you haven't
read both the Nova and Ironic specs already, this diagram probably won't
make sense to you. I'll attempt to explain it a bit with more words.


[A]
The Nova team wants to remove the (Host, Node) tuple from all the places
that this exists, and return to scheduling only based on Compute Host. They
also don't want to change any existing scheduler filters (especially not
compute_capabilities_filter) or the filter scheduler class or plugin
mechanisms. And, as far as I understand it, they're not interested in
accepting a filter plugin that calls out to external APIs (eg, Ironic) to
identify a Node and pass that Node's UUID to the Compute Host.  [[ nova
team: please correct me on any point here where I'm wrong, or your
collective views have changed over the last year. ]]

[B]
OpenStack deployers who are using Nova + Ironic rely on a few things:
- compute_capabilities_filter to match node.properties['capabilities']
against flavor extra_specs.
- other downstream nova scheduler filters that do other sorts of hardware
matching
These deployers clearly and rightly do not want us to take away either of
these capabilities, so anything we do needs to be backwards compatible with
any current Nova scheduler plugins -- even downstream ones.
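
For illustration, the matching that these deployers rely on boils down to
something like this (a simplified sketch, not the actual
ComputeCapabilitiesFilter code):

    def host_passes(node_capabilities, flavor_extra_specs):
        # flavor extra_specs such as {'capabilities:hw_model': 'xyz'}
        # must match the ironic node.properties['capabilities'] entries
        for key, required in flavor_extra_specs.items():
            if not key.startswith('capabilities:'):
                continue
            cap = key.split(':', 1)[1]
            if node_capabilities.get(cap) != required:
                return False
        return True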

[C] To meet the compatibility requirements of [B] without requiring the
nova-scheduler team to do the work, we would need to forklift some parts of
the nova-scheduler code into Ironic. But I think that's terrible, and I
don't think any OpenStack developers will like it. Furthermore, operators
have already expressed their distaste for this because they want to use the
same filters for virtual and baremetal instances but do not want to
duplicate the code (because we all know that's a recipe for drift).

[D]
Whatever solution we devise for scheduling bare metal resources in Ironic
needs to perform well at the scale Ironic deployments are aiming for (eg,
thousands of Nodes) without the use of Cells. It also must be integrable
with other software (eg, it should be exposed in our REST API). And it must
allow us to run more than one (active-active) nova-compute process, which
we can't today.


OK. That's a lot of words... bear with me, though, as I'm not done yet...

This drawing [3] is a Venn diagram, but not everything overlaps. The Nova
and Ironic specs [1],[2] meet the needs of the Nova team and the Ironic
team, and will provide a more performant, highly-available solution, that
is easier to use with other schedulers or datacenter-management tools.
However, this solution does not meet the needs of some current OpenStack
Operators because it will not support Nova Scheduler filter plugins. Thus,
in the diagram, [A] and [D] overlap but neither one intersects with [B].


Summary
--

We have proposed a solution that fits ironic's HA model into nova-compute's
failure domain model, but that's only half of the picture -- in so doing,
we assumed that scheduling of bare metal resources was simplistic when, in
fact, it needs to be just as rich as the scheduling of virtual resources.

So, at this point, I think we need to accept that the scheduling of
virtualized and bare metal workloads are two different problem domains that
are equally complex.

Either, we:
* build a separate scheduler process in Ironic, forking the Nova scheduler
as a starting point so 

Re: [openstack-dev] [QA][Tempest] Drop javelin off tempest

2015-12-10 Thread Chris Dent

On Thu, 10 Dec 2015, Daniel Mellado wrote:


Before doing so, we'd like to get some feedback about our planned move,
so if you have any questions, comments or feedback, please reply to this
thread.


+1

I think this is the right way to go. I put a lot of work into
javelin related stuff a bit more than a year ago and it seemed
pretty useful. It didn't, however, catch on. Probably in part
because it didn't provide sufficient flexibility.

The new resources.sh functionality provided in the grenade plugin
interface is more likely to catch on because as long as you follow the
gross outlines of the interface you can do whatever you like in the
guts.

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [openstack-infra] Using Neutron client in the gate

2015-12-10 Thread Akihiro Motoki
Hi Gal,

One simple way is to get credentials from clouds.yaml (through
os-client-config).
devstack prepares clouds.yaml (~/.config/openstack/clouds.yaml) which contains
both the devstack-admin (admin) and devstack (demo) accounts.

The neutronclient functional tests are a good example:
http://git.openstack.org/cgit/openstack/python-neutronclient/tree/neutronclient/tests/functional/base.py#n19
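
Concretely, it boils down to something like this (a minimal sketch,
untested):

    import os_client_config
    from neutronclient.v2_0 import client as neutron_client

    # 'devstack-admin' / 'devstack' are the cloud entries devstack
    # writes to ~/.config/openstack/clouds.yaml
    cloud = os_client_config.OpenStackConfig().get_one_cloud(
        cloud='devstack-admin')
    session = cloud.get_session()  # keystoneauth session from clouds.yaml
    neutron = neutron_client.Client(session=session)
    print(neutron.list_networks())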

I hope it helps you.

Akihiro


2015-12-10 18:50 GMT+09:00 Gal Sagie :
> Hello All,
>
> I would like to run some "fullstack" integration tests for Kuryr and run
> them in
> the gate.
> For the tests I would like to use the Neutron client for communicating with
> the
> working devstack Neutron service.
>
> What is the best way to instantiate the client in terms of auth_url and
> credentials ?
> (I saw 'secretadmin' is used as the admin password, but wondered if
> using hard-coded args is the best approach)
>
> Thanks
> Gal.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] configurable ack-then-process (at least/most once) behavior

2015-12-10 Thread Renat Akhmerov
Hi, I also left my comment in the patch which explains what we need from
the Mistral perspective. Please take a look.

Renat Akhmerov
@ Mirantis Inc.



> On 02 Dec 2015, at 17:01, Bogdan Dobrelya  wrote:
> 
>> Bogdan,
>> 
>> Which service would use this flag to start with? and how would the
>> code change to provide "app side is fully responsible for duplicates
>> handling"?
> 
> (fixed topic tags to match oslo.messaging)
> 
> AFAIK, this mode is required by Mistral HA. Other projects may want
> the at-least-once rpc delivery model as well.
> 
> I see that the patch scope is not enough. Although it would be nice to
> have it demonstrated by a simple example... Anyway, we should address
> all of the concerns raised here in the spec.
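>
> As a tiny illustration of what "app side is fully responsible for
> duplicates handling" means under at-least-once delivery (a sketch only;
> do_work and the ids are stand-ins, not an oslo.messaging API):
>
>     _processed = set()
>
>     def on_message(msg_id, payload):
>         if msg_id in _processed:
>             return  # redelivered after a crash/requeue; skip it
>         do_work(payload)
>         _processed.add(msg_id)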
> 
>> 
>> Thanks,
>> Dims
>> 
>> On Tue, Dec 1, 2015 at 4:27 AM, Bogdan Dobrelya  
>> wrote:
>>> On 30.11.2015 14:28, Bogdan Dobrelya wrote:
 Hello.
 Please let's make this change [0] happen to the Oslo messaging.
 This is a reasonable, straightforward and backwards-compatible change. And
 it is required for OpenStack applications - see [1] - to implement a
 sane HA. The only thing left is to cover this change by unit tests.
 
 [0] https://review.openstack.org/229186
 [1]
 http://lists.openstack.org/pipermail/openstack-dev/2015-October/076217.html
 
>>> 
>>> Here is related bp [0]. I will submit the spec as well and put there all
>>> of the concerns Mehdi Abaakouk provided in the aforementioned patch
>>> review process. I believe the ack-then-process pattern *has* use cases,
>>> that is why this topic will be raised again and again unless addressed.
>>> 
>>> [0]
>>> https://blueprints.launchpad.net/oslo.messaging/+spec/at-least-once-guarantee
>>> 
>>> 
>>> --
>>> Best regards,
>>> Bogdan Dobrelya,
>>> Irc #bogdando
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at 
>>> lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> -- 
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db][sqlalchemy][mistral] Configuring default transaction isolation level

2015-12-10 Thread Renat Akhmerov

> On 08 Dec 2015, at 21:10, Mike Bayer  wrote:
> 
> 
> 
> On 12/08/2015 07:28 AM, Renat Akhmerov wrote:
>> Hi,
>> 
>> Moshe, thanks a lot for bringing this up. I remember I tried to find a
>> way to change isolation level per connection but also was unable to do that.
> 
> Current SQLAlchemy has a lot of isolation level options.There is a
> complete guide to doing this from an ORM perspective at
> http://docs.sqlalchemy.org/en/rel_1_0/orm/session_transaction.html#setting-transaction-isolation-levels.
> 
> 
> On a per-connection basis, you would use the isolation_level execution
> option.  An example of how to integrate this into Session usage is in
> this section at
> http://docs.sqlalchemy.org/en/rel_1_0/orm/session_transaction.html#setting-isolation-for-individual-sessions.
> 
> In oslo.db, there's no engine-wide isolation level setting available,
> however when the engine is returned from the oslo.db version of
> create_engine, you can immediately get a copy of that engine with the
> isolation_level option set by calling:
> 
> engine = oslo_db.create_engine(...)
> engine = engine.execution_options(isolation_level='READ_COMMITTED')
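>
> To scope it to a single session rather than engine-wide, the same
> execution option can go on the session's bind, e.g. (a short sketch
> following the doc section above):
>
>     from sqlalchemy.orm import sessionmaker
>
>     Session = sessionmaker(bind=engine)
>     session = Session(bind=engine.execution_options(
>         isolation_level='READ_COMMITTED'))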

This is very helpful, thanks Mike. We'll try this approach.

> The future of oslo.db is oriented around the upgraded "enginefacade"
> API.  Adding isolation hooks to this new API is a TODO, but is
> straightforward; if you're using the new enginefacade API, I can
> expedite having the appropriate hooks added in.

Yes, I think this is too important a property to just ignore.

Renat Akhmerov
@ Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-novaclient] history of virtual-interface commands

2015-12-10 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Hi,

I am also hitting this, as I am making a new microversion and need support
in the CLI. The CLI seems to work just by bumping API_MAX_VERSION (as I am
only adding one new attribute to an existing API). However, I cannot do this
because microversion 2.12 is not implemented yet (actually, API_MAX_VERSION
is currently 2.9, but 2.7-2.11 should be under way).
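
For reference, new CLI behaviour is normally gated on the microversion
with the api_versions decorator, roughly like this (a sketch; the 2.12
command shown here is made up):

    from novaclient import api_versions

    @api_versions.wraps("2.12")
    def do_my_new_command(cs, args):
        # only available once the negotiated version is >= 2.12
        pass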

Br,
Tomi

From: EXT Andrey Kurilin [mailto:akuri...@mirantis.com]
Sent: Friday, December 04, 2015 3:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [python-novaclient] history of virtual-interface 
commands

Hi stackers!

I have found code in novaclient related to the virtual-interfaces extension
[1], but there are no CLI commands for it. Since the Rackspace docs include
a reference to a `virtual-interface-list` command [2], I wonder: is there a
reason the commands related to virtual-interfaces are missing from upstream
master? Does anyone know the history of the virtual-interfaces extension and
the CLI entry point for it?

[1] - 
https://github.com/openstack/python-novaclient/blob/2.35.0/novaclient/v2/virtual_interfaces.py
[2] - 
http://docs.rackspace.com/servers/api/v2/cs-gettingstarted/content/nova_list_virt_interfaces_for_server.html

--
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Tempest] Drop javelin off tempest

2015-12-10 Thread Daniel Mellado
Hi All,

In today's QA meeting we discussed dropping Javelin from tempest, since
it's not being used in grenade anymore, as sdague pointed out. We were
thinking about this as a part of the work for [1], where we hit an issue
with Javelin script testing: the gate did not detect the service client
changes in this script.

Our intention is to drop the following files from tempest:

  * tempest/cmd/javelin.py

  * tempest/cmd/resources.yaml

  * tempest/tests/cmd/test_javelin.py



Before doing so, we'd like to get some feedback about our planned move,
so if you have any questions, comments or feedback, please reply to this
thread.

Thanks!

Daniel Mellado

---
[1]
https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/consistent-service-method-names,n,z



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2015-12-10 Thread Dmitry Tantsur

On 12/09/2015 10:58 PM, Jim Rollenhagen wrote:

On Fri, Dec 04, 2015 at 05:38:43PM +0100, Dmitry Tantsur wrote:

Hi!

As you all probably know, we've switched to reno for managing release notes.
What it also means is that the release team has stopped managing milestones
for us. We have to manually open/close milestones in launchpad, if we feel
like. I'm a bit tired of doing it for inspector, so I'd prefer we stop it.
If we need to track release-critical patches, we usually do it in etherpad
anyway. We also have importance fields for bugs, which can be applied to
both important bugs and important features.

During a quick discussion on IRC Sam mentioned that neutron also dropped
using blueprints for tracking features. They only use bugs with RFE tag and
specs. It makes a lot of sense to me to do the same, if we stop tracking
milestones.

For both ironic and ironic-inspector I'd like to get your opinion on the
following suggestions:
1. Stop tracking milestones in launchpad
2. Drop existing milestones to avoid confusion
3. Stop using blueprints and move all active blueprints to bugs with RFE
tags; request a bug URL instead of a blueprint URL in specs.

So in the end we'll end up with bugs for tracking user requests, specs for
complex features and reno for tracking what went into a particular release.

Important note: if you vote for keeping things for ironic-inspector, I may
ask you to volunteer in helping with them ;)


We decided we're going to try this in Monday's meeting, following
roughly the same process as Neutron:
http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements

Note that as the goal here is to stop managing blueprints and milestones
in launchpad, a couple of things will differ from the neutron process:

1) A matching blueprint will not be created; the tracking will only be
done in the bug.

2) A milestone will not be immediately chosen for the feature
enhancement, as we won't track milestones on launchpad.

Now, some requests for volunteers. We need:

1) Someone to document this process in our developer docs.

2) Someone to update the spec template to request a bug link, instead of
a blueprint link.

3) Someone to help move existing blueprints into RFEs.

4) Someone to point specs for incomplete work at the new RFE bugs,
instead of the existing blueprints.

I can help with some or all of these, but hope to not do all the work
myself. :)


I'll help you with as many things as my time allows. Documentation is my 
weak point, so I'll start with #2.




Thanks for proposing this, Dmitry!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mitaka Infra Sprint

2015-12-10 Thread Joshua Hesketh
Hey,

Sorry about the formatting on my last email. Clearly my copy+paste skills
are lacking. Let me try again in case anybody was confused:


Hi all,

As discussed during the infra-meeting on Tuesday[0], the infra team will be
holding a mid-cycle sprint to focus on infra-cloud[1].

The sprint is an opportunity to get in a room and really work through as
much code and reviews as we can related to infra-cloud while having each
other near by to discuss blockers, technical challenges and enjoy company.

Information + RSVP:
https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint

Dates:
Mon. February 22nd at 9:00am to Thursday. February 25th

Location:
HPE Fort Collins Colorado Office

Who:
Anybody is welcome. Please put your name on the wiki page if you are
interested in attending.

If you have any questions please don't hesitate to ask.

Cheers,
Josh + Infra team

[0]
http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-08-19.00.html
[1]
https://specs.openstack.org/openstack-infra/infra-specs/specs/infra-cloud.html




On Thu, Dec 10, 2015 at 6:30 PM, Spencer Krum  wrote:

> Thanks josh
>
> --
>   Spencer Krum
>   n...@spencerkrum.com
>
>
>
> On Wed, Dec 9, 2015, at 09:17 PM, Joshua Hesketh wrote:
>
> Hi all,
> As discussed during the infra-meeting on Tuesday[0], the infra team will
> be holding a mid-cycle sprint to focus on infra-cloud[1].
> The sprint is an opportunity to get in a room and really work through as
> much code and reviews as we can related to infra-cloud while having each
> other near by to discuss blockers, technical challenges and enjoy company.
> Information + RSVP:
> https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint
> Dates:Mon. February 22nd at 9:00am to Thursday. February 25th
> Location:HPE Fort Collins Colorado Office
> Who:Anybody is welcome. Please put your name on the wiki page if you are
> interested in attending.
> If you have any questions please don't hesitate to ask.
> Cheers,Josh + Infra team
> [0]
> http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-08-19.00.html[1]
> https://specs.openstack.org/openstack-infra/infra-specs/specs/infra-cloud.html
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] RFC: stop using launchpad milestones and blueprints

2015-12-10 Thread Pavlo Shchelokovskyy
Hi all,
fix for tests in #2 is on review, https://review.openstack.org/#/c/255811/

cheers,

On Thu, Dec 10, 2015 at 1:15 PM Dmitry Tantsur  wrote:

> On 12/09/2015 10:58 PM, Jim Rollenhagen wrote:
> > On Fri, Dec 04, 2015 at 05:38:43PM +0100, Dmitry Tantsur wrote:
> >> Hi!
> >>
> >> As you all probably know, we've switched to reno for managing release
> notes.
> >> What it also means is that the release team has stopped managing
> milestones
> >> for us. We have to manually open/close milestones in launchpad, if we
> feel
> >> like. I'm a bit tired of doing it for inspector, so I'd prefer we stop
> it.
> >> If we need to track release-critical patches, we usually do it in
> etherpad
> >> anyway. We also have importance fields for bugs, which can be applied to
> >> both important bugs and important features.
> >>
> >> During a quick discussion on IRC Sam mentioned that neutron also dropped
> >> using blueprints for tracking features. They only use bugs with RFE tag
> and
> >> specs. It makes a lot of sense to me to do the same, if we stop tracking
> >> milestones.
> >>
> >> For both ironic and ironic-inspector I'd like to get your opinion on the
> >> following suggestions:
> >> 1. Stop tracking milestones in launchpad
> >> 2. Drop existing milestones to avoid confusion
> >> 3. Stop using blueprints and move all active blueprints to bugs with RFE
> >> tags; request a bug URL instead of a blueprint URL in specs.
> >>
> >> So in the end we'll end up with bugs for tracking user requests, specs
> for
> >> complex features and reno for tracking what went into a particular
> release.
> >>
> >> Important note: if you vote for keeping things for ironic-inspector, I
> may
> >> ask you to volunteer in helping with them ;)
> >
> > We decided we're going to try this in Monday's meeting, following
> > roughly the same process as Neutron:
> >
> http://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements
> >
> > Note that as the goal here is to stop managing blueprints and milestones
> > in launchpad, a couple of things will differ from the neutron process:
> >
> > 1) A matching blueprint will not be created; the tracking will only be
> > done in the bug.
> >
> > 2) A milestone will not be immediately chosen for the feature
> > enhancement, as we won't track milestones on launchpad.
> >
> > Now, some requests for volunteers. We need:
> >
> > 1) Someone to document this process in our developer docs.
> >
> > 2) Someone to update the spec template to request a bug link, instead of
> > a blueprint link.
> >
> > 3) Someone to help move existing blueprints into RFEs.
> >
> > 4) Someone to point specs for incomplete work at the new RFE bugs,
> > instead of the existing blueprints.
> >
> > I can help with some or all of these, but hope to not do all the work
> > myself. :)
>
> I'll help you with as many things as my time allows. Documentation is my
> weak point, so I'll start with #2.
>
> >
> > Thanks for proposing this, Dmitry!
> >
> > // jim
> >
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-ansible] Mid Cycle Sprint

2015-12-10 Thread Javeria Khan
Great initiative, I'd definitely like to be a part of the mid-cycle. I
can't guarantee attendance, but London would also be easier for me.
For topics, I'd like to see more done on:

-  streamlining new role creation, or generally making it easier for newer
contributors
-  +1 to Major, enhancing our documentation


--
Javeria

On Thu, Dec 10, 2015 at 8:16 PM, Kevin Carter 
wrote:

> Count me in as wanting to be part of the mid-cycle. I live in San
> Antonio but I think we should strongly consider having the meetup in the
> UK. It seems most of our deployers live in the UK and it'd be nice to
> get people involved who may not have been able to attend the summit.
> While I'll need to get travel approval if we decide to hold the event in
> the UK, during the mid-cycle I'd like to focus on working on the
> "Upgrade Framework" and "multi-OS". Additionally, if we have time, I'd
> like to see if people are interested in bringing new services online and
> work with folks regarding the implementation details and how to compose
> new roles.
>
> Cheers!
>
> --
>
> Kevin Carter
> IRC: Cloudnull
>
> On 12/09/2015 08:44 AM, Curtis wrote:
> > On Wed, Dec 9, 2015 at 5:45 AM, Jesse Pretorius
> >  wrote:
> >> Hi everyone,
> >>
> >> At the Mitaka design summit in Tokyo we had some corridor discussions
> about
> >> doing a mid-cycle meetup for the purpose of continuing some design
> >> discussions and doing some specific sprint work.
> >>
> >> ***
> >> I'd like indications of who would like to attend and what
> >> locations/dates/topics/sprints would be of interest to you.
> >> ***
> >>
> >
> > I'd like to get more involved in openstack-ansible. I'll be going to
> > the operators mid-cycle in Feb, so could stay later and attend in West
> > London. However, I could likely make it to San Antonio as well. Not
> > sure if that helps but I will definitely try to attend where ever it
> > occurs.
> >
> > Thanks.
> >
> >> For guidance/background I've put some notes together below:
> >>
> >> Location
> >> 
> >> We have contributors, deployers and downstream consumers across the
> globe so
> >> picking a venue is difficult. Rackspace have facilities in the UK
> (Hayes,
> >> West London) and in the US (San Antonio) and are happy for us to make
> use of
> >> them.
> >>
> >> Dates
> >> -
> >> Most of the mid-cycles for upstream OpenStack projects are being held in
> >> January. The Operators mid-cycle is on February 15-16.
> >>
> >> As I feel that it's important that we're all as involved as possible in
> >> these events, I would suggest that we schedule ours after the Operators
> >> mid-cycle.
> >>
> >> It strikes me that it may be useful to do our mid-cycle immediately
> after
> >> the Ops mid-cycle, and do it in the UK. This may help to optimise
> travel for
> >> many of us.
> >>
> >> Format
> >> --
> >> The format of the summit is really for us to choose, but typically
> they're
> >> formatted along the lines of something like this:
> >>
> >> Day 1: Big group discussions similar in format to sessions at the design
> >> summit.
> >>
> >> Day 2: Collaborative code reviews, usually performed on a projector,
> where
> >> the goal is to merge things that day (if a review needs more than a
> single
> >> iteration, we skip it. If a review needs small revisions, we do them on
> the
> >> spot).
> >>
> >> Day 3: Small group / pair programming.
> >>
> >> Topics
> >> --
> >> Some topics/sprints that come to mind that we could explore/do are:
> >>   - Install Guide Documentation Improvement [1]
> >>   - Development Documentation Improvement (best practises, testing, how
> to
> >> develop a new role, etc)
> >>   - Upgrade Framework [2]
> >>   - Multi-OS Support [3]
> >>
> >> [1] https://etherpad.openstack.org/p/oa-install-docs
> >> [2]
> https://etherpad.openstack.org/p/openstack-ansible-upgrade-framework
> >> [3] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support
> >>
> >> --
> >> Jesse Pretorius
> >> IRC: odyssey4me
> >>
> >
> >
> >
>
> --
>
> --
>
> Kevin Carter
> IRC: cloudnull
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-10 Thread Brant Knudson
On Thu, Dec 10, 2015 at 11:10 AM, Flavio Percoco  wrote:

> On 10/12/15 08:21 -0600, Brant Knudson wrote:
>
>>
>>
>> On Thu, Dec 10, 2015 at 7:26 AM, Sean Dague  wrote:
>>
>>On 12/10/2015 01:56 AM, Joshua Harlow wrote:
>>> Shouldn't be too hard (although it's probably not on each oslo
>> project,
>>> but on the consumers projects).
>>>
>>> The warnings module can turn warnings into raised exceptions with a
>>> simple command line switch btw...
>>>
>>> For example:
>>>
>>> $ python -Wonce
>>> Python 2.7.6 (default, Jun 22 2015, 17:58:13)
>>> [GCC 4.8.2] on linux2
>>> Type "help", "copyright", "credits" or "license" for more
>> information.
>>> >>> import warnings
>>> >>> warnings.warn("I am not supposed to be used", DeprecationWarning)
>>> __main__:1: DeprecationWarning: I am not supposed to be used
>>>
>>> $ python -Werror
>>> Python 2.7.6 (default, Jun 22 2015, 17:58:13)
>>> [GCC 4.8.2] on linux2
>>> Type "help", "copyright", "credits" or "license" for more
>> information.
>>> >>> import warnings
>>> >>> warnings.warn("I am not supposed to be used", DeprecationWarning)
>>> Traceback (most recent call last):
>>>   File "", line 1, in 
>>> DeprecationWarning: I am not supposed to be used
>>>
>>> https://docs.python.org/2/library/warnings.html#the-warnings-filter
>>>
>>> Turn that CLI switch from off to on and I'm pretty sure usage of
>>> deprecated things will become pretty evident real quick ;)
>>
>>It needs to be more targetted than that. There is a long standing
>>warning between paste and pkg_resources that would hard stop everyone.
>>
>>But, yes, the idea of being able to run unit tests with fatal
>>deprecations of oslo easily is what I think would be useful.
>>  -Sean
>>
>>
>>
>> In keystone we set a warnings filter for the unit tests so that if
>> keystone
>> calls any deprecated function it'll raise[1]. So when the oslo timeutils
>> functions were deprecated it broke keystone gate and we fixed it. It
>> would be
>> nicer to have a non-voting gate job to serve as a warning instead, but
>> it's
>> only happened a couple of times where this caused keystone to be blocked
>> for
>> the day that it took to get the fix in. Anyways, it would be easy enough
>> for us
>> to have this enabled/disabled via an environment variable and create a
>> tox job.
>>
>> If we had a non-voting warning job it could also run oslo libs from master
>> rather than released.
>>
>> [1]
>> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/
>> core.py?id=4f8c4a7a10d85080d6db9b30ae1759d45a38a32c#n460
>>
>
> I like this!
>
> Will look into what needs to be done to make it happen in Glance and
> get feedback from the rest of the folks. As you said, it's really few
> times when this would totally break a project's gate and I think
> that's manageable.
> A non-voting gate doesn't have the same effect, FWIW, but I also see
> reasons for having one instead of stopping a project.
>
> Flavio
>
>
If other projects are interested (and since we already copy-pasted it to
other keystone projects), I've got a pull request in to fixtures to add a
fixture that does the same thing, see
https://github.com/testing-cabal/fixtures/pull/19 .
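
The fixture amounts to only a few lines, roughly this (a sketch; see the
pull request for the real version):

    import warnings

    import fixtures

    class WarningsFixture(fixtures.Fixture):
        """Make deprecation warnings from our own code fatal in tests."""

        def _setUp(self):
            self._original_filters = warnings.filters[:]
            self.addCleanup(self._reset)
            # scope the filter to the project's own modules so that
            # third-party noise (e.g. paste/pkg_resources) does not
            # hard-stop everything
            warnings.filterwarnings('error',
                                    category=DeprecationWarning,
                                    module='^keystone\\.')

        def _reset(self):
            warnings.filters[:] = self._original_filters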

 - Brant


>
>> - Brant
>>
>>
>
>
> --
> @flaper87
> Flavio Percoco
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Contribution to improve Nova's config option space

2015-12-10 Thread Markus Zoeller
In case you are one of the people who are already working on patches
for this, you might have observed a merge conflict with your patches
over the last few days. Ed Leafe and I are working on an approach to reduce
that. It's basically a list of placeholders in the file 
"nova/conf/__init__.py" in the form of comments like this:
# import nova.conf.configdrive
# import nova.conf.rdp
# import nova.conf.spice
# import nova.conf.keymgr
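
Each of those modules then follows the usual oslo.config pattern, along
these lines (a sketch; the real option definitions live in the reviews
linked in my earlier mail below):

    # nova/conf/rdp.py
    from oslo_config import cfg

    rdp_group = cfg.OptGroup('rdp', title='RDP console options')

    ALL_OPTS = [
        cfg.BoolOpt('enabled', default=False,
                    help='Enable RDP related features.'),
    ]

    def register_opts(conf):
        conf.register_group(rdp_group)
        conf.register_opts(ALL_OPTS, group=rdp_group)

    def list_opts():
        return {rdp_group: ALL_OPTS}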
When you rebase your changes on that patch and remove the appropriate
comment sign, you should be safer from merge conflicts. We are still
working on that patch and aim to have it up in the next few days. Ping me
in IRC if you have questions (I'm not around on Dec. 11th (FRI) + 15th
(TUE)).

Regards and thanks for your efforts, Markus Zoeller (markus_z)

Markus Zoeller/Germany/IBM@IBMDE wrote on 12/03/2015 05:10:48 PM:

> From: Markus Zoeller/Germany/IBM@IBMDE
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 12/03/2015 05:12 PM
> Subject: [openstack-dev] [nova] Contribution to improve Nova's config 
> option space
> 
> Who
> ===
> If you are a new contributor and are still searching for a way to
> contribute to Nova, this mail is for you. If you are not a newbie
> but have a bit of bandwidth available, you're welcome too :)
> 
> Why
> ===
> Why should you bother?
> * It's an easy way to start contributing
> * I can offer to help you with:
> * pushing patches, 
> * debugging the gate,
> * dealing with reviews
> * and learning the general workflow
> * you will learn about different functional areas and can give back
>   to the community
> 
> What
> 
> There is ongoing effort to improve the way Nova offers its configuration
> options to the operators [1]. In short, you have to read and understand
> code and describe the impact of config options as a black box so that the
> operators don't have to read code to understand what they are configuring.
> At the end it will look like these two patches:
> * https://review.openstack.org/#/c/244177/
> * https://review.openstack.org/#/c/246465/
> 
> How
> ===
> The organization is done with an etherpad [2] which contains:
> * what needs to be done
> * how it has to be done
> 
> Just ping me (markus_z) in the #openstack-nova channel (I'm in the
> UTC+1 timezone) or grab something out of the etherpad [2] and give
> it a try.
> 
> References
> ==
> [1] blueprint "centralize config options":
> 
> http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/
> centralize-config-options.html
> [2] https://etherpad.openstack.org/p/config-options
> 
> Regards, Markus Zoeller (markus_z)
> 
> 
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom alarms in AODH

2015-12-10 Thread AFEK, Ifat (Ifat)
Hi Ryota,

> -Original Message-
> From: Ryota Mibu [mailto:r-m...@cq.jp.nec.com]
> Sent: Tuesday, December 08, 2015 11:17 AM
>
> In short, an 'event' is generated in OpenStack, while an 'alarm' is
> defined by a user. An 'event' is a container of data passed from other
> OpenStack services through the OpenStack notification bus. The 'event'
> and its contained data will be stored in the ceilometer DB and exposed
> via the event API [1]. An 'alarm' is a pre-configured alerting rule
> defined by a user via the alarm API [2]. An alarm also has a state,
> like 'ok' and 'alarm', and a history as well.
> 
> [1]
> http://docs.openstack.org/developer/ceilometer/webapi/v2.html#events-
> and-traits
> [2] http://docs.openstack.org/developer/aodh/webapi/v2.html#alarms
> 
> 
> The point is whether we should use 'event' or 'alarm' for all failure
> representations. Maybe we can use 'event' for all raw error/fault
> notifications, and use 'alarm' for exposing deduced/wrapped failures.
> This is my view, so it might be wrong.
> 

I believe Vitrage should define alarms, as we want the alarm to have
a state and a history (that can be queried in the Horizon UI). Moreover,
in the future I can imagine that some other OpenStack services might
want to add their alarm actions to the alarms that Vitrage generated.
I think this applies both to Vitrage's deduced alarms and to alarms
that Vitrage generates as a result of Nagios test failures, for example.
Does that make sense?

Best Regards,
Ifat.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] The lock files saga (and where we can go from here)

2015-12-10 Thread D'Angelo, Scott
Could the work for the tooz variant be leveraged to add a truly distributed
solution (with the proper tooz distributed backend)? If so, then +1 to this
idea. Cinder will be implementing a version of tooz-based distributed locks,
so having it in Oslo someday is a goal, I'd think.


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Wednesday, December 09, 2015 6:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [oslo][all] The lock files saga (and where we can 
go from here)

So,

To try to reach some kind of conclusion here I am wondering if it would
be acceptable to folks (would people even adopt such a change?) if we
(oslo folks/others) provided a new function in say lockutils.py (in
oslo.concurrency) that would let users of oslo.concurrency pick which
kind of lock they would want to use...

The two types would be:

1. A pid-based lock, which would *not* be resistant to crashing
processes; it would perhaps use
https://github.com/openstack/pylockfile/blob/master/lockfile/pidlockfile.py
internally. It would be more easily breakable and more easily
introspect-able (by either deleting the file or `cat`-ing the file to see
the pid inside of it).
2. The existing lock that is resistant to crashing processes (it
automatically releases on owner process crash) but is not easily
introspect-able (to know who is using the lock) and is not easily
breakable (aka to forcefully break the lock and release waiters and the
current lock holder).
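
In rough terms, the choice would look like this (a sketch of the idea,
not a proposed oslo.concurrency API):

    from fasteners import process_lock
    from lockfile import pidlockfile

    def make_lock(path, kind='fcntl'):
        if kind == 'pid':
            # variant 1: pid file; `cat` it to find the owner, delete
            # it to break the lock, but stale files survive a crash
            return pidlockfile.PIDLockFile(path)
        # variant 2: fcntl-style lock; the kernel releases it when the
        # owner dies, but the file tells you nothing about the holder
        return process_lock.InterProcessLock(path)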

Would people use these two variants if (oslo) provided them, or would
the status quo exist and nothing much would change?

A third possibility is to spend energy using/integrating tooz
distributed locks and treating different processes on the same system as
distributed instances [even though they really are not distributed in
the classical sense]. These locks that tooz supports are already
introspect-able (via various means) and can be broken if needed (work is
in progress to make this breaking process more usable via an API).

Thoughts?

-Josh

Clint Byrum wrote:
> Excerpts from Joshua Harlow's message of 2015-12-01 09:28:18 -0800:
>> Sean Dague wrote:
>>> On 12/01/2015 08:08 AM, Duncan Thomas wrote:
 On 1 December 2015 at 13:40, Sean Dague>   wrote:


   The current approach means locks block on their own, are processed in
   the order they come in, but deletes aren't possible. The busy lock 
 would
   mean deletes were normal. Some extra cpu spent on waiting, and lock
   order processing would be non deterministic. It's trade offs, but I
   don't know anywhere that we are using locks as queues, so order
   shouldn't matter. The cpu cost on the busy wait versus the lock file
   cleanliness might be worth making. It would also let you actually see
   what's locked from the outside pretty easily.


 The cinder locks are very much used as queues in places, e.g. making
 delete wait until after an image operation finishes. Given that cinder
 can already bring a node into resource issues while doing lots of image
 operations concurrently (such as creating lots of bootable volumes at
 once) I'd be resistant to anything that makes it worse to solve a
 cosmetic issue.
>>> Is that really a queue? Don't do X while Y is a lock. Do X, Y, Z, in
>>> order after W is done is a queue. And what you've explains above about
>>> Don't DELETE while DOING OTHER ACTION, is really just the queue model.
>>>
>>> What I mean by treating locks as queues was depending on X, Y, Z
>>> happening in that order after W. With a busy wait approach they might
>>> happen as Y, Z, X or X, Z, B, Y. They will all happen after W is done.
>>> But relative to each other, or to new ops coming in, no real order is
>>> enforced.
>>>
>> So ummm, just so people know the fasteners lock code (and the stuff that
>> has existed for file locks in oslo.concurrency and prior to that
>> oslo-incubator...) has never guaranteed the above sequencing.
>>
>> How it works (and has always worked) is the following:
>>
>> 1. A lock object is created
>> (https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L85)
>> 2. That lock object acquire is performed
>> (https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L125)
>> 3. At that point do_open is called to ensure the file exists (if it
>> exists already it is opened in append mode, so no overwrite happen) and
>> the lock object has a reference to the file descriptor of that file
>> (https://github.com/harlowja/fasteners/blob/master/fasteners/process_lock.py#L112)
>> 4. A retry loop starts, that repeats until either a provided timeout is
>> elapsed or the lock is acquired, the retry logic u can skip over but the
>> code that the retry loop calls is
>> 

Re: [openstack-dev] [trove] Upcoming specs and blueprints for Trove/Mitaka

2015-12-10 Thread Vyvial, Craig
Amrith/Victoria,

Thanks for the heads up about these blueprints for the Mitaka cycle. This
looks like a lot of work, but there shouldn't be a reason to hold back new
blueprints this early in the cycle if they plan on being completed in Mitaka.
Can we get these blueprints written up and submitted so that we can get them
approved by Jan 8th? Due to the holidays I think this makes sense.

These blueprints should all be complete and merged by M-3 cut date (Feb 29th) 
for the feature freeze.

Let me know if there are concerns around this.

Thanks,
-Craig

On Dec 10, 2015, at 12:11 PM, Victoria Martínez de la Cruz 
> wrote:

2015-12-10 13:10 GMT-03:00 Amrith Kumar 
>:
Members of the Trove community,

Over the past couple of weeks we have discussed the possibility of an early 
deadline for submission of trove specifications for projects that are to be 
included in the Mitaka release. I understand why we're doing it, and agree with 
the concept. Unfortunately though, there are a number of projects for which 
specifications won't be ready in time for the proposed deadline of Friday 12/11 
(aka tomorrow).

I'd like to note that the following projects are in the works and that
specifications will be submitted as soon as possible. Now that we know of the
new process, we will all be able to make sure that we plan better in time for
the N release.

Blueprints have been registered for these projects.

The projects in question are:

Cassandra:
- enable/disable/show root 
(https://blueprints.launchpad.net/trove/+spec/cassandra-database-user-functions)
- Clustering 
(https://blueprints.launchpad.net/trove/+spec/cassandra-cluster)

MariaDB:
- Clustering 
(https://blueprints.launchpad.net/trove/+spec/mariadb-clustering)
- GTID replication 
(https://blueprints.launchpad.net/trove/+spec/mariadb-gtid-replication)

Vertica:
- Add/Apply license 
(https://blueprints.launchpad.net/trove/+spec/vertica-licensing)
- User triggered data upload from Swift 
(https://blueprints.launchpad.net/trove/+spec/vertica-bulk-data-load)
- Cluster grow/shrink 
(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-grow-shrink)
- Configuration Groups 
(https://blueprints.launchpad.net/trove/+spec/vertica-configuration-groups)
- Cluster Anti-affinity 
(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-anti-affinity)

Hbase and Hadoop based databases:
- Extend Trove to Hadoop based databases, starting with HBase 
(https://blueprints.launchpad.net/trove/+spec/hbase-support)

Specifications in the trove-specs repository will be submitted for review as 
soon as they are available.

Thanks,

-amrith



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi all,

I'd like to add the feature "Add backup strategy for Ceph backends" [0] to this 
list.

Thanks,

Victoria

[0] https://blueprints.launchpad.net/trove/+spec/implement-ceph-as-backup-option
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Puppet OpenStack Mid-Cycle (Mitaka)

2015-12-10 Thread Emilien Macchi


On 11/24/2015 05:41 AM, Emilien Macchi wrote:
> Hello,
> 
> If you're involved in Puppet OpenStack or if you want to be involved,
> please look at this poll: http://goo.gl/forms/lsBf55Ru8L
> 
> Thanks a lot for your time,

So we got 18 people interested in participating in a Puppet midcycle.
The output was:
* People are interested in a midcycle
* People don't want to do a second physical midcycle (after OPS)
* Some people can't go to the OPS midcycle and can only attend a virtual
midcycle.

I think we should give everyone a chance to participate, and I don't
believe a physical + virtual midcycle would work.
So I propose to organize a virtual midcycle in January.

Please look at the etherpad [1], write your name if you want to
participate and your schedule constraints so we can find the best slot
for everyone.

Also feel free to propose topics.

Looking forward to seeing you there,

[1] https://etherpad.openstack.org/p/puppet-happy-new-year-2016
-- 
Emilien Macchi





Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-10 Thread Flavio Percoco

On 10/12/15 08:21 -0600, Brant Knudson wrote:



On Thu, Dec 10, 2015 at 7:26 AM, Sean Dague  wrote:

   On 12/10/2015 01:56 AM, Joshua Harlow wrote:
   > Shouldn't be too hard (although it's probably not on each oslo project,
   > but on the consumers projects).
   >
   > The warnings module can turn warnings into raised exceptions with a
   > simple command line switch btw...
   >
   > For example:
   >
   > $ python -Wonce
   > Python 2.7.6 (default, Jun 22 2015, 17:58:13)
   > [GCC 4.8.2] on linux2
   > Type "help", "copyright", "credits" or "license" for more information.
   > >>> import warnings
   > >>> warnings.warn("I am not supposed to be used", DeprecationWarning)
   > __main__:1: DeprecationWarning: I am not supposed to be used
   >
   > $ python -Werror
   > Python 2.7.6 (default, Jun 22 2015, 17:58:13)
   > [GCC 4.8.2] on linux2
   > Type "help", "copyright", "credits" or "license" for more information.
   > >>> import warnings
   > >>> warnings.warn("I am not supposed to be used", DeprecationWarning)
   > Traceback (most recent call last):
   >   File "", line 1, in 
   > DeprecationWarning: I am not supposed to be used
   >
   > https://docs.python.org/2/library/warnings.html#the-warnings-filter
   >
   > Turn that CLI switch from off to on and I'm pretty sure usage of
   > deprecated things will become pretty evident real quick ;)

   It needs to be more targetted than that. There is a long standing
   warning between paste and pkg_resources that would hard stop everyone.

   But, yes, the idea of being able to run unit tests with fatal
   deprecations of oslo easily is what I think would be useful.
  
           -Sean




In keystone we set a warnings filter for the unit tests so that if keystone
calls any deprecated function it'll raise[1]. So when the oslo timeutils
functions were deprecated it broke keystone gate and we fixed it. It would be
nicer to have a non-voting gate job to serve as a warning instead, but it's
only happened a couple of times where this caused keystone to be blocked for
the day that it took to get the fix in. Anyways, it would be easy enough for us
to have this enabled/disabled via an environment variable and create a tox job.

If we had a non-voting warning job it could also run oslo libs from master
rather than released.

[1] http://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/
core.py?id=4f8c4a7a10d85080d6db9b30ae1759d45a38a32c#n460
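
For reference, a minimal sketch of that kind of filter (illustrative only;
the exact regex is an assumption, see [1] for the real code):

import warnings

# Make DeprecationWarning fatal, but only when the warning is attributed
# to keystone's own code (deprecations raised with a stacklevel point at
# the calling module, so this catches keystone calling deprecated APIs).
warnings.filterwarnings('error', category=DeprecationWarning,
                        module='^keystone\\.')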


I like this!

Will look into what needs to be done to make it happen in Glance and
get feedback from the rest of the folks. As you said, there are really
only a few cases where this would totally break a project's gate, and I
think that's manageable.


A non-voting gate doesn't have the same effect, FWIW, but I also see
reasons for having one instead of stopping a project.

Flavio



- Brant




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Stable team PTL nominations are open

2015-12-10 Thread Matt Riedemann



On 12/9/2015 4:22 AM, Kuvaja, Erno wrote:

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org]
Sent: 09 December 2015 08:57
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are open

Thierry Carrez wrote:

Thierry Carrez wrote:

The nomination deadline is passed, we have two candidates!

I'll be setting up the election shortly (with Jeremy's help to
generate election rolls).


OK, the election just started. Recent contributors to a stable branch
(over the past year) should have received an email with a link to vote.
If you haven't and think you should have, please contact me privately.

The poll closes on Tuesday, December 8th at 23:59 UTC.
Happy voting!


The election is over[1]; let me congratulate Matt Riedemann on his election!
Thanks to everyone who participated in the vote.

Now I'll submit the request for spinning off as a separate project team to the
governance ASAP, and we should be up and running very soon.

Cheers,

[1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a

--
Thierry Carrez (ttx)



Congratulations Matt,

Almost 200 voters; that sounds like a great start for the new team.

- Erno

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Thanks all. I plan to start looking for meeting times next week so we 
can start talking about the transition from the release team and work items.


This week has been kind of nuts with gate failures and stable 
regressions, so my plan is to start pursuing more of this in the weeks to 
come (barring vacation time off toward the end of the year).


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sender Auth Failure] [infra][release][all] Automatic .ics generation for OpenStack's and project's deadlines

2015-12-10 Thread Johnston, Nate
I think this is a great idea.  +1

—N.

> On Dec 10, 2015, at 1:20 PM, Flavio Percoco  wrote:
> 
> Greetings,
> 
> I'd like to explore the possibility of having .ics generated - pretty
> much the same way we generate it for irc-meetings - for the OpenStack
> release schedule and projects' deadlines. I believe just one calendar
> would be enough but I'd be ok with a per-project .ics too.
> With the new home for the release schedule, and it being a good place
> for projects to add their own deadlines as well, I believe it would be
> good for people that use calendars to have these .ics files generated
> and linked there as well.
> 
> Has this been attempted? Any objections? Is there something I'm not
> considering?
> 
> Cheers,
> Flavio
> 
> -- 
> @flaper87
> Flavio Percoco
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Drop javelin off tempest

2015-12-10 Thread Daniel Mellado
Hi All,

comments inline

El 10/12/15 a las 17:17, Matthew Treinish escribió:
> On Thu, Dec 10, 2015 at 11:15:06AM +0100, Daniel Mellado wrote:
>> Hi All,
>>
>> In today's QA meeting we discussed dropping Javelin from
>> tempest, since it's not being used in grenade anymore, as sdague pointed
>> out. We were thinking about this as part of the work for [1], where we
>> hit an issue with the Javelin script testing: the gate did not detect the
>> service client changes in this script.
> So the reason we didn't remove this from tempest when we stopped using it as
> part of grenade is at the time there were external users. They still wanted to
> keep the tooling around. This is why the unit tests were grown in an effort to
> maintain some semblance of testing after the grenade removal. (for a long time
> it was mostly self testing through the grenade job)
About the unit tests: the option then would be for them to use real
clients instead of mocked ones. Would that work for you?
>
>> Our intention is to drop the following files from tempest:
>>
>>   * tempest/cmd/javelin.py
>> 
>>   * tempest/cmd/resources.yaml
>> 
>>   * tempest/tests/cmd/test_javelin.py
>> 
>> 
>>
>> Before doing so, we'd like to get some feedback about our planned move,
>> so if you have any questions, comments or feedback, please reply to this
>> thread.
> You should not just delete these files; there were real users of them in the
> past and there might still be. If you're saying that javelin isn't something we can
> realistically maintain anymore (which I'm not sure I buy, but whatever) we
> should first mark it for deprecation and have a warning printed saying it will
> be removed in the future. This gives people a chance to stop using it and migrate
> to something else. (using ansible would be a good alternative)
Then should we just mark it as deprecated, and if so, how? I'll push a new
commit marking it as deprecated, with a different commit message.
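
Something along these lines (illustrative only, not the actual patch)
near the top of javelin's entry point should do:

import warnings

warnings.warn("javelin.py is deprecated and will be removed in a "
              "future release; consider migrating to another tool "
              "(e.g. ansible)", DeprecationWarning)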

Thanks
>
>
> -Matt Treinish
>
>> ---
>> [1]
>> https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/consistent-service-method-names,n,z
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? Holiday Edition

2015-12-10 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi everyone,

This is my final newsletter for the year, as I'm taking summer vacation over 
the next few weeks (yay for Southern Hemisphere weather!). Your regularly 
scheduled programming will resume in mid-January.

I would like to take a moment to acknowledge the incredible work of the entire 
documentation team, especially the core team who have worked all year to keep 
on top of reviews, the speciality team leads who motivate and inspire their 
teams each and every day, and every single documentation contributor and 
reviewer: without you all doing your little bit, we wouldn't have had the 
success we have had. Thank you also to our crew of support people: from infra 
and release management, to our subject matter experts and the wonderful people 
on the TC and at Foundation. Thanks for helping us be the best we can be. I 
would also like to make special mention of Andreas Jaeger and Anne Gentle, who 
both continue to hold my hand daily and make sure I don't make too much of a 
fool of myself in public. Your support and encouragement have been invaluable.

While we're on the goodwill wagon: if there's someone (or something!) you would 
like to call out for a special mention, you can do so using #success in our IRC 
channel. All successes get logged here: 
https://wiki.openstack.org/wiki/Successes and it's a great way to show your 
appreciation for your fellow community members.

== Progress towards Mitaka ==

117 days to go!

150 bugs closed so far for this release.

RST Conversions
* Arch Guide
** RST conversion is complete! Well done Shilla and team for getting this done 
so incredibly quickly :)
* Config Ref
** Is underway: contact the Config Ref Speciality team: 
https://wiki.openstack.org/wiki/Documentation/ConfigRef
* Virtual Machine Image Guide
** Is complete! Well done Tomoyuki-san :)

Reorganisations
* Arch Guide
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-reorg
** Doc sprint planned for December 21-22. Contact the Ops Guide speciality team 
for more info or to get involved.
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* User Guides
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised
** Contact the User Guide Speciality team: 
https://wiki.openstack.org/wiki/User_Guides

DocImpact
* Waiting to merge the patch for the new Jenkins job: 
https://review.openstack.org/#/c/251301/ which for now will be against Nova 
only so we can make sure it's working correctly before we roll it out across 
the board.

Document openstack-doc-tools and reorganise index page
* Thanks to Brian and Christian for volunteering to take this on!

Horizon UX work
* Thanks to the people who offered to help out on this! If you're interested 
but haven't gotten in touch yet, contact Piet Kruithof of the UX team. There is 
now a blueprint + spec in draft: 
https://blueprints.launchpad.net/openstack-manuals/+spec/ui-content-guidelines 
and https://review.openstack.org/#/c/252668/

== Speciality Teams ==

'''HA Guide - Bogdan Dobrelya'''
The change to shift the IRC meeting was not accepted. Going to create a new poll, 
this time with the correct options suggested by Tony Breeds. No more updates.

'''Installation Guide - Christian Berendt'''
We require Debian Install Guide testers. Please contact Christian if you're 
interested.

'''Networking Guide - Edgar Magana'''
The networking guide had its first official IRC meeting: 
https://wiki.openstack.org/wiki/Documentation/NetworkingGuideMeetings We are 
restructuring the wikis a little bit for better engagement with new 
contributors; The versioning spec for the networking guide has been merged: 
https://review.openstack.org/#/c/253283/ Working on the remaining action items 
from the meeting: 
http://eavesdrop.openstack.org/meetings/networking_guide/2015/networking_guide.2015-12-03-16.00.txt
 We have proposed a time and date for an IRC meeting covering APAC time zone: 
https://review.openstack.org/#/c/254999/

'''Security Guide - Nathaniel Dillon'''
Planning for a bug sprint and general triage/cleanup.

'''User Guides - Joseph Robinson'''
The User Guide Reorganization Spec merged this week, and the final User Guide 
meeting for 2015 has been rescheduled to next Thursday December 16 at 23:30 
UTC; the plan is to discuss work items. A patch for the User Guide 
Dashboard has begun, using the blueprint implementation as a start to the reorg.

'''Ops and Arch Guides - Shilla Saebi'''
Regarding this spec - https://review.openstack.org/#/c/227660/, we decided we 
are going to hold off until the N-release; Architecture Guide Swarm - Asking 
for everyone who wants to help to please use the 
https://etherpad.openstack.org/p/arch-guide-reorg to document potential 
changes; The arch guide RST conversion is complete; For the swarm, everyone 
works in their timezone. They will then submit patches and allocate team 
members in the other 

[openstack-dev] [Cinder] Is there anyone truly working on this issue https://bugs.launchpad.net/cinder/+bug/1520102?

2015-12-10 Thread Sheng Bo Hou
Hi Mitsuhiro, Thang

The patch https://review.openstack.org/#/c/228916 is merged, but sadly it 
does not cover the issue https://bugs.launchpad.net/cinder/+bug/1520102. 
This bug is still valid.
As far as you know, is there someone working on this issue? If not, I am 
gonna fix it.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Drop javelin off tempest

2015-12-10 Thread GHANSHYAM MANN
Hi,

Yes, that's a very valid point: there might be some real users now or in the future.

So instead of deleting it, how about maintaining it? The only issue here
was that the gate did not
catch the issues when they were introduced in this tool.
But we can cover that using unit tests, and if really necessary we can
add an experimental job for that.

Two things we need:
- Modify the current unit tests to mock client methods at a deeper level
instead of mocking the complete service client class (see the sketch below).
- If really needed, add an experimental job for testing on the gate.
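
For illustration, a minimal sketch of mocking at the method level (the
client class here is a stand-in, not actual tempest code); with
autospec=True the mock enforces the real method's signature, so client
changes surface in the unit tests:

import mock  # stdlib 'unittest.mock' on Python 3


class ServersClient(object):  # stand-in for a real service client
    def create_server(self, name, image_ref, flavor_ref):
        raise NotImplementedError


# Patch one method instead of replacing the whole class; renamed methods
# or changed signatures now break the unit tests loudly.
with mock.patch.object(ServersClient, 'create_server', autospec=True) as m:
    m.return_value = {'server': {'id': 'fake-id'}}
    ServersClient().create_server('vm1', 'img-1', 'flv-1')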

We have the same issue for the cleanup tool as well; I need to check where we
can cover its testing (unit tests or a gate job, etc.)

I vote to keep it (it can be useful for some users, maybe
future ones, who want to quickly test their cloud's resource
creation/deletion, etc.)

On Fri, Dec 11, 2015 at 1:17 AM, Matthew Treinish  wrote:
> On Thu, Dec 10, 2015 at 11:15:06AM +0100, Daniel Mellado wrote:
>> Hi All,
>>
>> In today's QA meeting we discussed dropping Javelin from
>> tempest, since it's not being used in grenade anymore, as sdague pointed
>> out. We were thinking about this as part of the work for [1], where we
>> hit an issue with the Javelin script testing: the gate did not detect the
>> service client changes in this script.
>
> So the reason we didn't remove this from tempest when we stopped using it as
> part of grenade is at the time there were external users. They still wanted to
> keep the tooling around. This is why the unit tests were grown in an effort to
> maintain some semblance of testing after the grenade removal. (for a long time
> it was mostly self testing through the grenade job)
>
>>
>> Our intention is to drop the following files from tempest:
>>
>>   * tempest/cmd/javelin.py
>> 
>>   * tempest/cmd/resources.yaml
>> 
>>   * tempest/tests/cmd/test_javelin.py
>> 
>> 
>>
>> Before doing so, we'd like to get some feedback about our planned move,
>> so if you have any questions, comments or feedback, please reply to this
>> thread.
>
> You should not just delete these files; there were real users of them in the
> past and there might still be. If you're saying that javelin isn't something we can
> realistically maintain anymore (which I'm not sure I buy, but whatever) we
> should first mark it for deprecation and have a warning printed saying it will
> be removed in the future. This gives people a chance to stop using it and migrate
> to something else. (using ansible would be a good alternative)
>
>
> -Matt Treinish
>
>>
>> ---
>> [1]
>> https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/consistent-service-method-names,n,z
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards
Ghanshyam Mann
+81-8084200646

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tacker] Next IRC weekly meeting in APAC timezone

2015-12-10 Thread Sridhar Ramaswamy
Tackers,

We now have contributors from the APAC region, particularly the ones
contributing to the Enhanced VNF Placement area. As discussed in the last
meeting, as a one-time change, next week's IRC meeting will be held in
an APAC-friendly time slot.

When: 0100 UTC Wed Dec 16th (Tue 5PM PST, Wed 9AM Beijing)

http://www.timeanddate.com/worldclock/meetingdetails.html?year=2015=12=16=1=0=0=283=64=176=33

Where: #openstack-meeting-4

Agenda:
https://wiki.openstack.org/wiki/Meetings/Tacker#Meeting_Dec_16.2C_2015_-_APAC_Timezone

Based on the participation level we can consider switching to alternate
slots on an ongoing basis.

- Sridhar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mesos Conductor using container-create operations

2015-12-10 Thread Hongbin Lu
Hi Ton,

Thanks for the feedback. Here is a clarification. The proposal is neither for 
using an existing DSL to express a container, nor for inventing a new DSL. 
Instead, I proposed to hide the complexity of existing DSLs and expose a simple 
API to users. For example, if users want to create a container, they could type 
something like:

magnum container-create --name XXX --image XXX --command XXX

Magnum will process the request and translate it to COE-specific API calls. For 
k8s, we could dynamically generate a pod with a single container and fill the 
pod with the inputted values (image, command, etc.). Similarly, in marathon, we 
could generate an app based on the inputs. A key advantage of this approach is 
that it is simple and doesn't require COE-specific knowledge.
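
To illustrate (a rough sketch only; the function names are hypothetical,
not actual Magnum code), the translation could look like:

def container_to_k8s_pod(name, image, command):
    # Wrap the generic container arguments in a single-container pod.
    return {
        'apiVersion': 'v1',
        'kind': 'Pod',
        'metadata': {'name': name},
        'spec': {'containers': [
            {'name': name, 'image': image, 'command': command.split()},
        ]},
    }

def container_to_marathon_app(name, image, command):
    # The same arguments expressed as a minimal Marathon app.
    return {
        'id': name,
        'cmd': command,
        'container': {'type': 'DOCKER', 'docker': {'image': image}},
    }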

Best regards,
Hongbin

From: Ton Ngo [mailto:t...@us.ibm.com]
Sent: December-10-15 8:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Mesos Conductor using container-create operations


I think extending the container object to Mesos via a command like 
container-create is a fine idea. Going into the details, however, we run into some 
complications.
1. The user would still have to choose a DSL to express the container. This 
would have to be a kube and/or swarm DSL since we don't want to invent a new 
one.
2. For the Mesos bay in particular, kube or swarm may be running on top of Mesos 
alongside Marathon, so somewhere along the line, Magnum has to be able to 
make the distinction and handle things appropriately.

We should think through the scenarios carefully to come to agreement on how 
this would work.

Ton Ngo,



From: Hongbin Lu >
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: 12/09/2015 03:09 PM
Subject: Re: [openstack-dev] Mesos Conductor using container-create operations





As Bharath mentioned, I am +1 to extend the “container” object to the Mesos bay. In 
addition, I propose to extend “container” to k8s as well (the details are 
described in this BP [1]). The goal is to promote this API resource to be 
technology-agnostic and make it portable across all COEs. I am going to justify 
this proposal with a use case.

Use case:
I have an app. I used to deploy my app to a VM in OpenStack. Right now, I want 
to deploy my app to a container. I have basic knowledge of containers but am not 
familiar with any specific container tech. I want a simple and intuitive API to 
operate a container (i.e. CRUD), like how I operated a VM before. I find it 
hard to learn the DSL introduced by a specific COE (k8s/marathon). Most 
importantly, I want my deployment to be portable regardless of the choice of 
cluster management system and/or container runtime. I want OpenStack to be the 
only integration point, because I don't want to be locked in to specific 
container tech. I want to avoid the risk of a specific container tech being 
replaced by another in the future. Ideally, I want Keystone to be the only 
authentication system that I need to deal with. I don't want the extra 
complexity of dealing with an additional authentication system introduced by a 
specific COE.

Solution:
Implement "container" object for k8s and mesos bay (and all the COEs introduced 
in the future).

That's it. I would appreciate if you can share your thoughts on this proposal.

[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers

Best regards,
Hongbin

From: bharath thiruveedula [mailto:bharath_...@hotmail.com]
Sent: December-08-15 11:40 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Mesos Conductor using container-create operations

Hi,

As we discussed in the last meeting, we cannot continue with the changes in 
container-create[1] unless we have a suitable use case. But I honestly feel we 
should have some kind of support for mesos + marathon apps, because magnum supports 
COE-related functionalities for docker swarm (container-create) and k8s 
(pod-create, rc-create..) but not for mesos bays.

As hongbin suggested, we can use the existing container-create functionality and 
support it in the mesos-conductor. Currently we have container-create only for the 
docker swarm bay. Let's support the same command for the mesos bay without any 
changes on the client side.

Let me know your suggestions.

Regards
Bharath T
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [Kuryr] Testing, Rally and Wiki

2015-12-10 Thread Baohua Yang
Great!
Will try to offer a hand with the wiki and testing.

On Thu, Dec 10, 2015 at 10:11 PM, Gal Sagie  wrote:

> Hello everyone,
>
> As some of you have already noticed one of the top priorities for Kuryr
> this cycle is to get
> our CI and gate testing done.
>
> I have been working on creating the base for adding integration tests that
> will run
> in the gate in addition to our unit tests and functional testing.
>
> If you would like to join and help this effort, please stop by
> #openstack-kuryr or email
> me back.
>
> We are also working on combining Rally testing with Kuryr and for that we
> are going to
> introduce Docker context plugin and client and other parts that are
> probably needed by other projects (like Magnum)
> I think it would be great if we can combine forces on this.
>
> I have also created Kuryr Wiki:
> https://wiki.openstack.org/wiki/Kuryr
>
> Feel free to edit and add needed information.
>
>
> Thanks all
> Gal.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best wishes!
Baohua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic]Unable to locate configuration file in second reboot

2015-12-10 Thread Zhi Chang
Hi, all
Something goes wrong when the VM reboots again. The console outputs:
"Unable to locate configuration file. Boot failed: press a key to retry, or 
wait for reset..."

And my tftp.conf looks like this: http://paste.openstack.org/show/481472/

I uploaded a video at https://youtu.be/jktPIjEmMV8; at 04:30 the error 
happens.

Could someone give me some ideas?

Thx
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] Testing, Rally and Wiki

2015-12-10 Thread Gal Sagie
Hello everyone,

As some of you have already noticed, one of the top priorities for Kuryr
this cycle is to get
our CI and gate testing done.

I have been working on creating the base for adding integration tests that
will run
in the gate in addition to our unit tests and functional testing.

If you would like to join and help this effort, please stop by
#openstack-kuryr or email
me back.

We are also working on combining Rally testing with Kuryr, and for that we
are going to
introduce a Docker context plugin and client and other parts that are
probably needed by other projects (like Magnum).
I think it would be great if we could combine forces on this.

I have also created Kuryr Wiki:
https://wiki.openstack.org/wiki/Kuryr

Feel free to edit and add needed information.


Thanks all
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-12-10 Thread Marian Horban
Hi guys,

Is there any progress with reloading configuration?
Could we restore the oslo.config review
https://review.openstack.org/#/c/213062/ ?

Marian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Improving Mistral pep8 rules files to match Mistral guidelines

2015-12-10 Thread Anastasia Kuznetsova
Moshe,

I will create a blueprint for that and attach a link to an etherpad, so we
can form the list of rules all together.
After that it will be possible to publish all our 'rules' to the docs and start
their implementation.
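
As a starting point, a hacking-style check for the blank-line-before-return
rule could look roughly like this (illustrative only; the check code "M001"
is made up, and the single-statement-function nuance Moshe mentions would
still need extra handling):

def check_blank_line_before_return(logical_line, blank_before, indent_level):
    """M001 - require a blank line before a return statement."""
    # pycodestyle passes the de-indented logical line, the number of
    # preceding blank lines, and the indentation level by name.
    if (indent_level and logical_line.startswith('return')
            and blank_before == 0):
        yield 0, "M001: missing blank line before return"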

On Thu, Dec 10, 2015 at 11:23 AM, ELISHA, Moshe (Moshe) <
moshe.eli...@alcatel-lucent.com> wrote:

> Thanks, Anastasia!
>
>
>
> Who can start documenting the rules? I remember only a few rules and
> I don't know all the nuances.
>
> For example, if the return statement is the only statement of a function,
> do you still need a blank line before it?
>
>
>
> Once the rules doc is available, I can work on adding these rules to
> our pep8 checks.
>
>
>
>
>
> *From:* Anastasia Kuznetsova [mailto:akuznets...@mirantis.com]
> *Sent:* Wednesday, December 09, 2015 1:13 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [mistral] Improving Mistral pep8 rules
> files to match Mistral guidelines
>
>
>
> Hi Moshe,
>
>
>
> Great idea!
>
>
>
> It is possible to prepare some additional code checks; for example, you can
> take a look at how it was done in the Rally project [1].
> Before starting such work in Mistral, I guess that we can describe our
> additional code style rules in our official docs (somewhere in the "Developer
> Guide" section [2]).
>
>
>
> [1] https://github.com/openstack/rally/tree/master/tests/hacking
>
> [2] http://docs.openstack.org/developer/mistral/#developer-guide
>
>
>
> On Wed, Dec 9, 2015 at 11:21 AM, ELISHA, Moshe (Moshe) <
> moshe.eli...@alcatel-lucent.com> wrote:
>
> Hi all,
>
>
>
> Is it possible to add all / some of the special guidelines of Mistral
> (like blank line before return, period at end of comment, …) to our pep8
> rules file?
>
>
>
> This can save a lot of time for both committers and reviewers.
>
>
>
> Thanks!
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Best regards,
>
> Anastasia Kuznetsova
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Anastasia Kuznetsova
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-10 Thread Sean Dague
On 12/10/2015 01:56 AM, Joshua Harlow wrote:
> Shouldn't be too hard (although it's probably not on each oslo project,
> but on the consumer projects).
> 
> The warnings module can turn warnings into raised exceptions with a
> simple command line switch btw...
> 
> For example:
> 
> $ python -Wonce
> Python 2.7.6 (default, Jun 22 2015, 17:58:13)
> [GCC 4.8.2] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import warnings
> >>> warnings.warn("I am not supposed to be used", DeprecationWarning)
> __main__:1: DeprecationWarning: I am not supposed to be used
> 
> $ python -Werror
> Python 2.7.6 (default, Jun 22 2015, 17:58:13)
> [GCC 4.8.2] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import warnings
> >>> warnings.warn("I am not supposed to be used", DeprecationWarning)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> DeprecationWarning: I am not supposed to be used
> 
> https://docs.python.org/2/library/warnings.html#the-warnings-filter
> 
> Turn that CLI switch from off to on and I'm pretty sure usage of
> deprecated things will become pretty evident real quick ;)

It needs to be more targeted than that. There is a long-standing
warning between paste and pkg_resources that would hard stop everyone.

But, yes, the idea of being able to run unit tests with fatal
deprecations of oslo easily is what I think would be useful.
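
For illustration, a minimal sketch of such a targeted setup for a test
run (the module names below are examples, not a vetted recipe):

import warnings

# Make deprecation warnings fatal for the test run...
warnings.simplefilter('error', DeprecationWarning)
# ...but keep the long-standing paste/pkg_resources warning non-fatal.
# This filter is inserted in front, so it is matched first.
warnings.filterwarnings('ignore', category=DeprecationWarning,
                        module='pkg_resources')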

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Christmas Core Cleanup

2015-12-10 Thread Igor Kalnitsky
Hey folks,

In an effort to do some housekeeping, I have cleaned up the list of core
reviewers in Fuel.

According to Stackalytics the following cores show a low contribution rate:

# fuel-web [1]

* Dmitry Shulyak
* Evgeniy L

# python-fuelclient [2]

* Dmitry Pyzhov
* Evgeniy L

# shotgun [3]

* Dmitry Shulyak
* Evgeniy L

# fuel-upgrade [4]

* Aleksey Kasatkin
* Vladimir Kozhukalov

# fuel-main [5]

* Dmitry Pyzhov
* Roman Vyalov

# fuel-agent [6]

* Aleksey Kasatkin
* Evgeniy L
* Igor Kalnitsky

Also, I've removed Sebastian Kalinowski as he said he has no time to
work on Fuel anymore.

Once former cores show a high level of contribution again, I'll gladly
add them back.

- Igor

[1] http://stackalytics.com/report/contribution/fuel-web/90
[2] http://stackalytics.com/report/contribution/python-fuelclient/90
[3] http://stackalytics.com/report/contribution/shotgun/90
[4] http://stackalytics.com/report/contribution/fuel-upgrade/90
[5] http://stackalytics.com/report/contribution/fuel-main/90
[6] http://stackalytics.com/report/contribution/fuel-agent/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][all] glance_store drivers deprecation/stabilization: Volunteers needed

2015-12-10 Thread Flavio Percoco

Greetings,

As some of you know, there's a proposal (still a rough draft) for
refactoring the glance_store API. This library is the home for the
store drivers used by Glance to save or access the image data.

As with other drivers in OpenStack, this library is facing the issue of
having unmaintained, untested and incomplete implementations of stores
that are, hypothetically, being used in production environments.

In order to guarantee some level of stability and, more important,
maintenance, the Glance team is looking for volunteers to sign up as
maintainers/keepers of the existing drivers.

Unfortunately, given the fact that our team is not as big as we would
like and that we don't have the knowledge to provide support for every
single driver, the Glance team will have to deprecate, and later
remove, the drivers that will remain without a maintainer.

Each driver will have to have a voting CI job running (maintained by
the driver maintainer) that will have to run Glance's functional tests
to ensure the API features are also supported by the driver.

There are 2 drivers I believe shouldn't fall into this category and
that should be maintained by the Glance community itself. These
drivers are:

- Filesystem
- Http

Please, find the full list of drivers here[0] and feel free to sign up
as a volunteer for as many drivers as your time permits to maintain.
Please, provide all the information required as the lack of it will
result in the candidacy not being valid. As some sharp eyes will
notice, the Swift driver is not in the list above. The reason for that
is that, although it's a key piece of OpenStack, not everyone in the
Glance community knows the code of that driver well enough and there
are enough folks that know it that could perhaps volunteer as
maintainers/reviewers for it. Furthermore, adding the swift driver
there would mean we should probably add the Cinder one as well as it's
part of OpenStack just like Swift. We can extend that list later. For
now, I'd like to focus on bringing some stability to the library.

The above information, as soon as it's complete or the due date is
reached, will be added to glance_store's docs so that folks know where
to find the drivers maintainers and who to talk to when things go
south.

Here's an attempt to schedule some of this work (please refer to
this tag[0.1] and this soon-to-be-approved review[0.2] to have more
info w.r.t the deprecation times and backwards compatibility
guarantees):

- By mitaka 2 (Jan 16-22), all drivers should have a maintainer.
 Drivers without one will be marked as deprecated in Mitaka.

- By N-2 (schedule still not available), all drivers that were marked
 as deprecated in Mitaka will be removed.

- By N-1 (schedule still not available), all drivers should have
 support for the main storage capabilities[1], which are READ_ACCESS,
 WRITE_ACCESS, and DRIVER_REUSABLE (see the sketch after this list).
 Drivers that won't have support for the main set of capabilities will
 be marked as deprecated and then removed in O-1 (except for the HTTP
 one, which the team has agreed to keep as a read-only driver).

- By N-2 (schedule still not available), all drivers need to have a
 voting gate. Drivers that won't have voting gates will be marked as
 deprecated and then removed in O-1.
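
For reference, a driver would declare that capability set roughly like
this (a sketch based on [1]; treat the exact attribute and class names
as assumptions):

from glance_store import capabilities
from glance_store import driver


class MyStore(driver.Store):

    # The main capability set discussed above, combined as bit masks.
    _CAPABILITIES = (capabilities.BitMasks.READ_ACCESS |
                     capabilities.BitMasks.WRITE_ACCESS |
                     capabilities.BitMasks.DRIVER_REUSABLE)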

Although glance_store has intermediate releases, the above is being
planned based on the integrated release to avoid sudden "surprises"
on already released OpenStack versions.

Note that the above plan requires that the ongoing effort for setting
up a gate based on functional tests for glance_store will be
completed. There's enough time to get all this done for every driver.

In addition to the above, I'd like to note that we need to do this
*before* the refactor[2] so that we can provide a minimum guarantee
that it won't break the existing contract. Furthermore, maintainers of
these drivers will be asked to help migrate their drivers to the new
API, but that will follow a different schedule that needs to be
discussed in the spec itself.

This is, obviously, a multi-release effort that will require syncing
with future PTLs of the project.

One more thing. Note that the above work shouldn't distract the team
from the priorities we've scheduled for Mitaka. The requested
work/info should be simple enough to provide and work on without
distracting us. I'll take care of following up and pinging some folks
as needed.

Please, provide your feedback and/or concerns on the above plan,
Flavio

[0] https://etherpad.openstack.org/p/glance-store-drivers-status
[0.1] 
http://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
[0.2] https://review.openstack.org/#/c/226157/
[1] 
https://github.com/openstack/glance_store/blob/master/glance_store/capabilities.py#L35
[2] https://review.openstack.org/#/c/188050/

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [ironic]Unable to locate configuration file in second reboot

2015-12-10 Thread Arun SAG
Hi,

On Thu, Dec 10, 2015 at 4:34 AM, Zhi Chang  wrote:
>
> And my tftp.conf like this http://paste.openstack.org/show/481472/
>
> I upload a movie in https://youtu.be/jktPIjEmMV8, at 04:30 there is a
> error happens.
>
> Could someone give me some idea?

There is not enough information here.

Did the installation finish? What do your 'ironic node-list' and
'ironic node-show ' outputs look like after the machine reboots? It looks like
the tftp config is missing. What does your ironic tftp config template look
like? Once the installation is done, the ironic conductor is supposed to
switch the tftp config so that the machine can boot from the hard disk;
it looks like that is not happening here.

-- 
Arun S A G
http://zer0c00l.in/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker][Tricircle]Multi-VIM collaboration

2015-12-10 Thread joehuang
Hi, Sridhar,

Great to know that Tacker is also beginning to address the multi-VIM (multi-OpenStack) 
requirement in OPNFV clouds, and thanks for your interest in OPNFV Multisite and the 
OpenStack Tricircle project.

I just gave some comments on the spec. And I think the most important one for 
Tacker integration with Tricircle is the VNF placement policy; it would be 
great to use region_id (in OpenStack, a region is identified by its region_name) + 
availability zone for VNF placement.
Such a placement policy will work in every scenario: Tacker + multi-region 
OpenStack, Tacker + Amazon, Tacker + Tricircle (+ multi OpenStack), Tacker + 
multi-region OpenStack + Tricircle (+ multi OpenStack). Because Tricircle 
integrates multiple OpenStack instances into one region, and each bottom OpenStack 
works like one availability zone, Tacker can treat Tricircle as one OpenStack region.

It's quite important for VNFs (the telecom applications) to be deployed across 
multiple availability zones to achieve five-nines carrier-grade reliability on 
top of the four-nines reliability of OpenStack.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Sridhar Ramaswamy [mailto:sric...@gmail.com]
Sent: Thursday, December 10, 2015 10:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: caizhiyuan (A)
Subject: Re: [openstack-dev] [Tacker][Tricircle]Multi-VIM collaboration

Sure.

As mentioned in the BP, we stumbled onto the Tricircle project while researching 
this feature (and hence it got mentioned in the BP). It sure looks promising. The 
immediate asks from our user community are quite modest though, so we are trying 
to keep the scope small. However, the integration point you mention makes sense, 
so Tacker + Tricircle could be one of the deployment options. Let's continue 
the discussion in gerrit as we put all the other suggestions coming in (like 
heat multi-cloud / multi-region) in perspective. It will be great to see the 
Tacker multi-site API work across different multi-site deployment patterns 
underneath.

- Sridhar

On Tue, Dec 8, 2015 at 10:37 PM, Zhipeng Huang 
> wrote:
Hi Tacker team,

As I commented in the BP[1], our team is interested in a collaboration in this 
area. I think one of the collaboration point would be to define a mapping 
between tacker multi-vim api with Tricircle resource routing api table [2].

[1]https://review.openstack.org/#/c/249085/
[2]https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit#heading=h.5t71ara040n5

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Prooduct Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic]Unable to locate configuration file insecond reboot

2015-12-10 Thread Zhi Chang
Hi, thanks for your reply.

Another DHCP server replied to this machine. Now I have already stopped that DHCP 
server.
But there is another problem; let me describe it.

When I use the "nova boot" command to boot a machine, nova-conductor always says 
"error".
The detailed info is "NoValidHost: No valid host was found. There are not enough 
hosts available".

I did some troubleshooting following 
http://docs.openstack.org/developer/ironic/deploy/troubleshooting.html#nova-returns-no-valid-host-was-found-error

But it did not solve my problem. Nova flavor info and Ironic node info are at: 
http://paste.openstack.org/show/481583/

Could you give me some suggestions?


Thx
Zhi Chang
 
-- Original --
From:  "Arun SAG";
Date:  Fri, Dec 11, 2015 02:58 PM
To:  "OpenStack Development Mailing List (not for usage 
questions)"; 

Subject:  Re: [openstack-dev] [ironic]Unable to locate configuration file 
insecond reboot

 
Hi,

On Thu, Dec 10, 2015 at 4:34 AM, Zhi Chang  wrote:
>
> And my tftp.conf like this http://paste.openstack.org/show/481472/
>
> I upload a movie in https://youtu.be/jktPIjEmMV8, at 04:30 there is a
> error happens.
>
> Could someone give me some idea?

There is not enough information here.

Did the installation finish? What do your 'ironic node-list' and
'ironic node-show ' outputs look like after the machine reboots? It looks like
the tftp config is missing. What does your ironic tftp config template look
like? Once the installation is done, the ironic conductor is supposed to
switch the tftp config so that the machine can boot from the hard disk;
it looks like that is not happening here.

-- 
Arun S A G
http://zer0c00l.in/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] removing keystoneclient.middleware (in favor of keystonemiddleware)

2015-12-10 Thread Steve Martinelli

keystonemiddleware has been around since July 2014 [0], it was forked from
keystoneclient.middleware which we deprecated around the same time [1]
(Juno time frame).

A few cycles later... is anyone still clinging onto
keystoneclient.middleware? Or can we remove it and cut a new major version
of keystoneclient (v3.0.0)?

Patch to remove the code: https://review.openstack.org/#/c/250669/2

[0] https://pypi.python.org/pypi/keystonemiddleware/1.0.0
[1]
https://github.com/openstack/python-keystoneclient/commit/5c9b13d4c7222b71084a0b4fd836e1fdda0edaf7


Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [midonet] Request For Enhancement(RFE) process

2015-12-10 Thread Ryu Ishimoto
Hi All,

For feature proposals in MidoNet, I would like us to follow the RFE
process[1] currently practiced in Neutron.  It is designed to be
lightweight, which makes it easy for everyone, including
non-developers, to initiate the feature development process.  I
believe this process is appropriate for MidoNet.

Please let me know if anyone feels differently!

Thanks,
Ryu

[1] http://blog.siliconloons.com/posts/2015-06-01-new-neutron-rfe-process/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Christmas Core Cleanup

2015-12-10 Thread Evgeniy L
Hi,

Thank you Igor for cleaning core related groups.

Also I would like to add that many of the removed cores are still SMEs (subject
matter experts)
in specific areas, so they will continue reviewing related patches.

Thanks,

On Thu, Dec 10, 2015 at 2:42 PM, Igor Kalnitsky 
wrote:

> Hey folks,
>
> In an effort to do some housekeeping, I clean up the list of core
> reviewers in Fuel.
>
> According to Stackalytics the following cores show a low contribution rate:
>
> # fuel-web [1]
>
> * Dmitry Shulyak
> * Evgeniy L
>
> # python-fuelclient [2]
>
> * Dmitry Pyzhov
> * Evgeniy L
>
> # shotgun [3]
>
> * Dmitry Shulyak
> * Evgeniy L
>
> # fuel-upgrade [4]
>
> * Aleksey Kasatkin
> * Vladimir Kozhukalov
>
> # fuel-main [5]
>
> * Dmitry Pyzhov
> * Roman Vyalov
>
> # fuel-agent [6]
>
> * Aleksey Kasatkin
> * Evgeniy L
> * Igor Kalnitsky
>
> Also, I've removed Sebastian Kalinowski as he said he has no time to
> work on Fuel anymore.
>
> Once former cores show high level of contribution again, I'll gladly
> add them back.
>
> - Igor
>
> [1] http://stackalytics.com/report/contribution/fuel-web/90
> [2] http://stackalytics.com/report/contribution/python-fuelclient/90
> [3] http://stackalytics.com/report/contribution/shotgun/90
> [4] http://stackalytics.com/report/contribution/fuel-upgrade/90
> [5] http://stackalytics.com/report/contribution/fuel-main/90
> [6] http://stackalytics.com/report/contribution/fuel-agent/90
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Testing, Rally and Wiki

2015-12-10 Thread Vikas Choudhary
Hi Gal,

Would be happy to help. Please count me IN.


-Vikas Choudhary


On Thu, Dec 10, 2015 at 7:41 PM, Gal Sagie  wrote:

> Hello everyone,
>
> As some of you have already noticed one of the top priorities for Kuryr
> this cycle is to get
> our CI and gate testing done.
>
> I have been working on creating the base for adding integration tests that
> will run
> in the gate in addition to our unit tests and functional testing.
>
> If you would like to join and help this effort, please stop by
> #openstack-kuryr or email
> me back.
>
> We are also working on combining Rally testing with Kuryr and for that we
> are going to
> introduce Docker context plugin and client and other parts that are
> probably needed by other projects (like Magnum)
> I think it would be great if we can combine forces on this.
>
> I have also created Kuryr Wiki:
> https://wiki.openstack.org/wiki/Kuryr
>
> Feel free to edit and add needed information.
>
>
> Thanks all
> Gal.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tap-as-a-service] Tap-as-a-service API

2015-12-10 Thread Fawad Khaliq
Excellent. I will take a look. Thanks for taking care of this!

Fawad Khaliq


On Fri, Dec 11, 2015 at 10:18 AM, Takashi Yamamoto 
wrote:

> On Fri, Dec 4, 2015 at 2:38 AM, Fawad Khaliq  wrote:
> > Thanks Yamamoto. Yes, agree it makes sense to keep it in the TaaS tree.
> > Would be great though to have broader audience come and review it.
> >
> > Seems like there is not much activity. I can go ahead and take a stab at
> > reviving it [1] and then we can iterate over and improve.
>
> https://review.openstack.org/#/c/256210/
>
> >
> > [1] https://review.openstack.org/#/c/96149/
> >
> > Thanks,
> > Fawad Khaliq
> >
> >
> > On Mon, Nov 30, 2015 at 9:54 AM, Takashi Yamamoto  >
> > wrote:
> >>
> >> hi,
> >>
> >> On Fri, Nov 27, 2015 at 2:02 AM, Fawad Khaliq 
> wrote:
> >> > Folks,
> >> >
> >> > Any plan to revive this [1] so we can discuss and finalize the use
> cases
> >> > and
> >> > APIs.
> >> >
> >> > [1] https://review.openstack.org/#/c/96149/
> >>
> >> I think Anil will explain the history of the project.
> >>
> >> I suspect neutron-specs isn't the appropriate place for us anymore.
> >> We can have a subproject spec as other projects have.  [2]
> >>
> >> [2] eg.
> https://github.com/openstack/networking-midonet/tree/master/specs
> >>
> >> >
> >> > Thanks,
> >> > Fawad Khaliq
> >> >
> >> >
> >> >
> >> >
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Moved blueprints out of 8.0

2015-12-10 Thread Mike Scherbakov
Hi all,
I've moved the following blueprints:
https://etherpad.openstack.org/p/fuel-moved-bps-from-8.0

I called for a blueprint status update at [1], [2], [3], [4], and suggested
moving those which are not "Implemented". Now I finally did, except for
test/doc-related ones (which can be done after FF).

I think I moved a few which are already implemented, as far as I'm aware.
For instance:
https://blueprints.launchpad.net/fuel/+spec/master-on-centos7
https://blueprints.launchpad.net/fuel/+spec/dynamically-build-bootstrap
https://blueprints.launchpad.net/fuel/+spec/package-for-js-modules
https://blueprints.launchpad.net/fuel/+spec/component-registry

If those are in fact done, please move them back and set the proper status.
There is uncertainty about what to do with parent blueprints, like the
ubuntu bootstrap one, which have incomplete test- and docs-related items. My
suggestion would be to set the status to "Deployment" and move them back to 8.0
if all coding is done. Once the dependent tests/docs are done, the parent blueprint
should be updated and become "Implemented".

Thank you,

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081047.html
[2] https://etherpad.openstack.org/p/fuel-8.0-FF-meeting, line 428
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-December/081131.html
[4]
http://eavesdrop.openstack.org/meetings/fuel/2015/fuel.2015-12-10-16.00.log.html,
16:32

-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Is there anyone truly working on this issue https://bugs.launchpad.net/cinder/+bug/1520102?

2015-12-10 Thread Thang Pham
I will have to try it again myself.  What errors are you seeing?  Are they
the same?  Feel free to post a patch if you already have one that would
solve it.

Regards,
Thang

On Thu, Dec 10, 2015 at 10:51 PM, Sheng Bo Hou  wrote:

> Hi Mitsuhiro, Thang
>
> The patch https://review.openstack.org/#/c/228916 is merged, but sadly it
> does not cover the issue https://bugs.launchpad.net/cinder/+bug/1520102.
> This bug is still valid.
> As far as you know, is there someone working on this issue? If not, I am
> gonna fix it.
>
> Best wishes,
> Vincent Hou (侯胜博)
>
> Staff Software Engineer, Open Standards and Open Source Team, Emerging
> Technology Institute, IBM China Software Development Lab
>
> Tel: 86-10-82450778 Fax: 86-10-82453660
> Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com
> Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang
> West Road, Haidian District, Beijing, P.R.C.100193
> 地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][tap-as-a-service] Tap-as-a-service API

2015-12-10 Thread Takashi Yamamoto
On Fri, Dec 4, 2015 at 2:38 AM, Fawad Khaliq  wrote:
> Thanks Yamamoto. Yes, agree it makes sense to keep it in the TaaS tree.
> Would be great though to have broader audience come and review it.
>
> Seems like there is not much activity. I can go ahead and take a stab at
> reviving it [1] and then we can iterate over and improve.

https://review.openstack.org/#/c/256210/

>
> [1] https://review.openstack.org/#/c/96149/
>
> Thanks,
> Fawad Khaliq
>
>
> On Mon, Nov 30, 2015 at 9:54 AM, Takashi Yamamoto 
> wrote:
>>
>> hi,
>>
>> On Fri, Nov 27, 2015 at 2:02 AM, Fawad Khaliq  wrote:
>> > Folks,
>> >
>> > Any plan to revive this [1] so we can discuss and finalize the use cases
>> > and
>> > APIs.
>> >
>> > [1] https://review.openstack.org/#/c/96149/
>>
>> I think Anil will explain the history of the project.
>>
>> I suspect neutron-specs isn't the appropriate place for us anymore.
>> We can have a subproject spec as other projects have.  [2]
>>
>> [2] eg. https://github.com/openstack/networking-midonet/tree/master/specs
>>
>> >
>> > Thanks,
>> > Fawad Khaliq
>> >
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Testing, Rally and Wiki

2015-12-10 Thread Gal Sagie
Hi Boris,

The way I envision it is for this to first implement Docker resources
(networks, containers) which are deployed
in mixed OpenStack environments and use Kuryr to plug their networking.
Then we can create some scenarios to benchmark and test these mixed
environments with Kuryr, or with Magnum and Kuryr.
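
For illustration, a rough sketch of what such a Docker context plugin
could look like (hypothetical, not actual Kuryr/Rally code; treat the
module path and decorator as assumptions):

from rally.task import context


@context.configure(name="docker_network", order=1000)
class DockerNetworkContext(context.Context):
    """Creates Docker networks through Kuryr before the scenario runs."""

    def setup(self):
        # Create the Docker networks used by the scenario here,
        # e.g. via the Docker client with the Kuryr network driver.
        pass

    def cleanup(self):
        # Delete whatever setup() created.
        pass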

Open for any other suggestions/ideas.

Gal.

On Fri, Dec 11, 2015 at 1:22 AM, Boris Pavlovic  wrote:

> Hi Gal,
>
>
>> We are also working on combining Rally testing with Kuryr, and for that we
>> are going to introduce a Docker context plugin and client and other parts
>> that are probably needed by other projects (like Magnum).
>> I think it would be great if we can combine forces on this.
>
>
> What is this context going to do?
>
>
> Best regards,
> Boris Pavlovic
>
> On Thu, Dec 10, 2015 at 6:11 AM, Gal Sagie  wrote:
>
>> Hello everyone,
>>
>> As some of you have already noticed, one of the top priorities for Kuryr
>> this cycle is to get our CI and gate testing done.
>>
>> I have been working on creating the base for adding integration tests
>> that will run
>> in the gate in addition to our unit tests and functional testing.
>>
>> If you would like to join and help this effort, please stop by
>> #openstack-kuryr or email
>> me back.
>>
>> We are also working on combining Rally testing with Kuryr, and for that we
>> are going to introduce a Docker context plugin and client and other parts
>> that are probably needed by other projects (like Magnum).
>> I think it would be great if we can combine forces on this.
>>
>> I have also created Kuryr Wiki:
>> https://wiki.openstack.org/wiki/Kuryr
>>
>> Feel free to edit and add needed information.
>>
>>
>> Thanks all
>> Gal.
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Christmas Core Cleanup

2015-12-10 Thread Mike Scherbakov
Igor, thank you for driving this. I think that this is a great practice: we
need to keep the list of cores up to date.

What about other repos not mentioned here? Are those fine as is?
What about fuel-docs, for instance?

I see that Irina was able to provide just 5 reviews over a 3-month
period [1]. So I suspect that she can't pay that much attention
to docs now...
Vitaly Kramskikh had 3 reviews, but I don't think he is core in that
particular repo (he is core in the fuel-web repo). I'm not sure how
stackalytics tracks that.

[1] http://stackalytics.com/report/contribution/fuel-docs/90

Thanks,

On Thu, Dec 10, 2015 at 8:12 AM Igor Kalnitsky 
wrote:

> Hi Evgeniy,
>
> Yes, you are absolutely right! As far as possible, I will ask them to
> review certain patches (if they have no time to watch all patches).
> Moreover, I'm going to add them to the MAINTAINERS file.
>
> Thanks,
> Igor
>
> P.S: I hope you and others will manage to spend more time on Fuel;
> your feedback is really appreciated, guys, since you're proven
> authorities here. ;)
>
>
> On Thu, Dec 10, 2015 at 5:59 PM, Evgeniy L  wrote:
> > Hi,
> >
> > Thank you Igor for cleaning core related groups.
> >
> > Also I would like to add that many of the removed cores are still SMEs
> > (subject matter experts) in specific areas, so they will continue
> > reviewing related patches.
> >
> > Thanks,
> >
> > On Thu, Dec 10, 2015 at 2:42 PM, Igor Kalnitsky  >
> > wrote:
> >>
> >> Hey folks,
> >>
> >> In an effort to do some housekeeping, I have cleaned up the list of core
> >> reviewers in Fuel.
> >>
> >> According to Stackalytics the following cores show a low contribution
> >> rate:
> >>
> >> # fuel-web [1]
> >>
> >> * Dmitry Shulyak
> >> * Evgeniy L
> >>
> >> # python-fuelclient [2]
> >>
> >> * Dmitry Pyzhov
> >> * Evgeniy L
> >>
> >> # shotgun [3]
> >>
> >> * Dmitry Shulyak
> >> * Evgeniy L
> >>
> >> # fuel-upgrade [4]
> >>
> >> * Aleksey Kasatkin
> >> * Vladimir Kozhukalov
> >>
> >> # fuel-main [5]
> >>
> >> * Dmitry Pyzhov
> >> * Roman Vyalov
> >>
> >> # fuel-agent [6]
> >>
> >> * Aleksey Kasatkin
> >> * Evgeniy L
> >> * Igor Kalnitsky
> >>
> >> Also, I've removed Sebastian Kalinowski as he said he has no time to
> >> work on Fuel anymore.
> >>
> >> Once former cores show a high level of contribution again, I'll gladly
> >> add them back.
> >>
> >> - Igor
> >>
> >> [1] http://stackalytics.com/report/contribution/fuel-web/90
> >> [2] http://stackalytics.com/report/contribution/python-fuelclient/90
> >> [3] http://stackalytics.com/report/contribution/shotgun/90
> >> [4] http://stackalytics.com/report/contribution/fuel-upgrade/90
> >> [5] http://stackalytics.com/report/contribution/fuel-main/90
> >> [6] http://stackalytics.com/report/contribution/fuel-agent/90
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][serial-console-proxy]

2015-12-10 Thread Prathyusha Guduri
Hi All,

I have set up OpenStack on an ARM64 machine and all the OpenStack-related
services are running fine. I am also able to launch an instance successfully.
Now I need to get a console for my instance. The noVNC console is not
supported on the machine I am using, so I have to use a serial proxy console
or a SPICE proxy console.

After rejoining the stack, I have stopped the noVNC service and started the
serial proxy service in  /usr/local/bin  as

ubuntu@ubuntu:~/devstack$ /usr/local/bin/nova-serialproxy --config-file
/etc/nova/nova.conf
2015-12-10 19:07:13.786 21979 INFO nova.console.websocketproxy [-]
WebSocket server settings:
2015-12-10 19:07:13.786 21979 INFO nova.console.websocketproxy [-]   -
Listen on 0.0.0.0:6083
2015-12-10 19:07:13.787 21979 INFO nova.console.websocketproxy [-]   -
Flash security policy server
2015-12-10 19:07:13.787 21979 INFO nova.console.websocketproxy [-]   - No
SSL/TLS support (no cert file)
2015-12-10 19:07:13.790 21979 INFO nova.console.websocketproxy [-]   -
proxying from 0.0.0.0:6083 to None:None

But
ubuntu@ubuntu:~/devstack$ nova get-serial-console vm20
ERROR (ClientException): The server has either erred or is incapable of
performing the requested operation. (HTTP 500) (Request-ID:
req-cfe7d69d-3653-4d62-ad0b-50c68f1ebd5e)


The problem seems to be that nova-compute is not able to communicate
with nova-serialproxy. The IP and port for the serial proxy that I have given
in nova.conf are correct.

I really don't understand where I am going wrong. Any help would be
greatly appreciated.


My nova.conf -


[DEFAULT]
vif_plugging_timeout = 300
vif_plugging_is_fatal = True
linuxnet_interface_driver =
security_group_api = neutron
network_api_class = nova.network.neutronv2.api.API
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
default_ephemeral_format = ext4
metadata_workers = 24
ec2_workers = 24
osapi_compute_workers = 24
rpc_backend = rabbit
keystone_ec2_url = http://10.167.103.101:5000/v2.0/ec2tokens
ec2_dmz_host = 10.167.103.101
vncserver_proxyclient_address = 127.0.0.1
vncserver_listen = 127.0.0.1


vnc_enabled = false
xvpvncproxy_base_url = http://10.167.103.101:6081/console
novncproxy_base_url = http://10.167.103.101:6080/vnc_auto.html
logging_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s
%(name)s [%(request_id)s %(user_name)s %(project_name)s]
%(instance)s%(message)s
force_config_drive = True
instances_path = /opt/stack/data/nova/instances
state_path = /opt/stack/data/nova
enabled_apis = ec2,osapi_compute,metadata
instance_name_template = instance-%08x
my_ip = 10.167.103.101
s3_port = 
s3_host = 10.167.103.101
default_floating_pool = public
force_dhcp_release = True
dhcpbridge_flagfile = /etc/nova/nova.conf
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
rootwrap_config = /etc/nova/rootwrap.conf
api_paste_config = /etc/nova/api-paste.ini
allow_migrate_to_same_host = True
allow_resize_to_same_host = True
debug = True
verbose = True

[database]
connection = mysql://root:open@127.0.0.1/nova?charset=utf8

[osapi_v3]
enabled = True

[keystone_authtoken]
signing_dir = /var/cache/nova
cafile = /opt/stack/data/ca-bundle.pem
auth_uri = http://10.167.103.101:5000
project_domain_id = default
project_name = service
user_domain_id = default
password = open
username = nova
auth_url = http://10.167.103.101:35357
auth_plugin = password

[oslo_concurrency]
lock_path = /opt/stack/data/nova






[spice]
#agent_enabled = True
enabled = false
html5proxy_base_url = http://10.167.103.101:6082/spice_auto.html
#server_listen = 127.0.0.1
#server_proxyclient_address = 127.0.0.1

[oslo_messaging_rabbit]
rabbit_userid = stackrabbit
rabbit_password = open
rabbit_hosts = 10.167.103.101

[glance]
api_servers = http://10.167.103.101:9292

[cinder]
os_region_name = RegionOne

[libvirt]
vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
inject_partition = -2
live_migration_uri = qemu+ssh://ubuntu@%s/system
use_usb_tablet = False
cpu_mode = host-model
virt_type = kvm

[neutron]
service_metadata_proxy = True
url = http://10.167.103.101:9696
region_name = RegionOne
admin_tenant_name = service
auth_strategy = keystone
admin_auth_url = http://10.167.103.101:35357/v2.0
admin_password = open
admin_username = neutron

[keymgr]
fixed_key = c5861a510cda58d367a44fc0aee6405e8e03a70f58c03fdc263af8405cf9a0c6













[serial_console]
enabled = true
# Location of serial console proxy. (string value)
base_url = ws://127.0.0.1:6083/
# IP address on which instance serial console should listen (string value)
listen = 127.0.0.1
# The address to which proxy clients (like nova-serialproxy) should
# connect (string value)
proxyclient_address = 127.0.0.1
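
For what it's worth, a quick way to verify that the proxy's WebSocket
endpoint is reachable at all is a check along these lines (a minimal sketch
assuming the third-party websocket-client package; the token query parameter
is a placeholder):

# Minimal sanity check (sketch): can we open a WebSocket connection to
# nova-serialproxy at all? Requires: pip install websocket-client.
import websocket

url = "ws://127.0.0.1:6083/?token=dummy"  # token value is a placeholder
try:
    ws = websocket.create_connection(url, timeout=5)
    print("nova-serialproxy is reachable")
    ws.close()
except Exception as exc:
    print("cannot reach nova-serialproxy: %s" % exc)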


Thanks,
Prathyusha
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-10 Thread Davanum Srinivas
Brant,

That's a great pattern for everyone to follow.

Thanks!
Dims

On Thu, Dec 10, 2015 at 5:21 PM, Brant Knudson  wrote:
>
>
> On Thu, Dec 10, 2015 at 7:26 AM, Sean Dague  wrote:
>>
>> On 12/10/2015 01:56 AM, Joshua Harlow wrote:
>> > Shouldn't be too hard (although it's probably not on each oslo project,
>> > but on the consumer projects).
>> >
>> > The warnings module can turn warnings into raised exceptions with a
>> > simple command line switch btw...
>> >
>> > For example:
>> >
>> > $ python -Wonce
>> > Python 2.7.6 (default, Jun 22 2015, 17:58:13)
>> > [GCC 4.8.2] on linux2
>> > Type "help", "copyright", "credits" or "license" for more information.
>> > >>> import warnings
>> > >>> warnings.warn("I am not supposed to be used", DeprecationWarning)
>> > __main__:1: DeprecationWarning: I am not supposed to be used
>> >
>> > $ python -Werror
>> > Python 2.7.6 (default, Jun 22 2015, 17:58:13)
>> > [GCC 4.8.2] on linux2
>> > Type "help", "copyright", "credits" or "license" for more information.
>> > >>> import warnings
>> > >>> warnings.warn("I am not supposed to be used", DeprecationWarning)
>> > Traceback (most recent call last):
>> >   File "<stdin>", line 1, in <module>
>> > DeprecationWarning: I am not supposed to be used
>> >
>> > https://docs.python.org/2/library/warnings.html#the-warnings-filter
>> >
>> > Turn that CLI switch from off to on and I'm pretty sure usage of
>> > deprecated things will become pretty evident real quick ;)
>>
>> It needs to be more targeted than that. There is a long-standing
>> warning between paste and pkg_resources that would hard-stop everyone.
>>
>> But, yes, the idea of being able to run unit tests with fatal
>> deprecations of oslo easily is what I think would be useful.
>>
>> -Sean
>>
>
> In keystone we set a warnings filter for the unit tests so that if keystone
> calls any deprecated function it'll raise[1]. So when the oslo timeutils
> functions were deprecated it broke keystone gate and we fixed it. It would
> be nicer to have a non-voting gate job to serve as a warning instead, but
> it's only happened a couple of times where this caused keystone to be
> blocked for the day that it took to get the fix in. Anyways, it would be
> easy enough for us to have this enabled/disabled via an environment variable
> and create a tox job.
>
> If we had a non-voting warning job it could also run oslo libs from master
> rather than released.
>
> [1]
> http://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/core.py?id=4f8c4a7a10d85080d6db9b30ae1759d45a38a32c#n460
>
> - Brant
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Testing concerns around boot from UEFI spec

2015-12-10 Thread Matt Riedemann



On 12/10/2015 2:21 AM, Ren, Qiaowei wrote:



-Original Message-
From: Sean Dague [mailto:s...@dague.net]
Sent: Friday, December 4, 2015 9:47 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Testing concerns around boot from UEFI
spec

On 12/04/2015 08:34 AM, Daniel P. Berrange wrote:

On Fri, Dec 04, 2015 at 07:43:41AM -0500, Sean Dague wrote:

Can someone explain the licensing issue here? The Fedora comments
make this sound like this is something that's not likely to end up in distros.


The EDK codebase contains a FAT driver which has a license that
forbids reusing the code outside of the EDK project.

[quote]
Additional terms: In addition to the forgoing, redistribution and use
of the code is conditioned upon the FAT 32 File System Driver and all
derivative works thereof being used for and designed only to read
and/or write to a file system that is directly managed by Intel's
Extensible Firmware Initiative (EFI) Specification v. 1.0 and later
and/or the Unified Extensible Firmware Interface (UEFI) Forum's UEFI
Specifications v.2.0 and later (together the "UEFI Specifications");
only as necessary to emulate an implementation of the UEFI
Specifications; and to create firmware, applications, utilities and/or drivers.
[/quote]

So while the code is open source, it is under a non-free license,
hence Fedora will not ship it. For RHEL we're reluctantly choosing to
ship it as an exception to our normal policy, since its the only
immediate way to make UEFI support available on x86 & aarch64

So I don't think the license is a reason to refuse to allow the UEFI
feature into Nova though, nor should it prevent us using the current
EDK bios in CI for testing purposes. It is really just an issue for
distros which only want 100% free software.


For upstream CI that's also a bar that's set. So for 3rd party it would
probably be fine, but upstream won't happen.



Sorry, is there any decision about this? If third-party CI needs to be added, we 
could also work on it. BTW, if so, the patches cannot be merged until the 
third-party CI is working, right?

Thanks,
Qiaowei

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We talked about this in the nova meeting today and agreed that as long 
as there is a warning emitted when this is used, saying it's untested and 
therefore considered experimental, we'd be OK with letting this into 
Mitaka. It's in Intel's best interest to provide functional testing for 
it, but it wouldn't be required in this case.
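
As a rough illustration of the kind of warning agreed on, following nova's
oslo.log conventions (the function name, placement, and wording here are
assumptions, not the actual patch):

# Sketch only: log an experimental-feature warning when an instance is
# booted with UEFI firmware. Names and wording are illustrative.
from oslo_log import log as logging

LOG = logging.getLogger(__name__)


def warn_uefi_experimental(instance_uuid):
    LOG.warning("Booting instance %(uuid)s with UEFI firmware. UEFI boot "
                "is untested in the gate and considered experimental.",
                {'uuid': instance_uuid})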


I'd like the spec amended for that and then I'm +2.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] what are the key errors with volume detach

2015-12-10 Thread Sean Dague
On 12/02/2015 12:37 PM, Rosa, Andrea (HP Cloud Services) wrote:
> Hi
> 
> thanks Sean for bringing up this point. I have been working on the change and
> on the (abandoned) spec.
> I'll try here to summarize all the discussions we had and what we decided.
> 
>> From: Sean Dague [mailto:s...@dague.net]
>> Sent: 02 December 2015 13:31
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [nova] what are the key errors with volume detach
>>
>> This patch to add a bunch of logic to nova-manage for forcing volume detach
>> raised a bunch of questions
>> https://review.openstack.org/#/c/184537/24/nova/cmd/manage.py,cm
> 
> On this specific review there are some valid concerns that I am happy to 
> address, but first we need to understand if we want this change.
> FWIW I think it is still a valid change, please see below.
> 
>> In thinking about this for the last day, I think the real concern is that we 
>> have
>> so many safety checks on volume delete, that if we failed with a partially
>> setup volume, we have too many safety latches to tear it down again.
>>
>> Do we have some detailed bugs about how that happens? Is it possible to
>> just fix DELETE to work correctly even when we're in these odd states?
> 
> In a simplified view of a volume detach, we can say that the nova code does:
> 1. detach the volume from the instance
> 2. inform cinder about the detach and call terminate_connection on the
> cinder API
> 3. delete the BDM record in the nova DB
> 
> If 2 fails, the volume gets stuck in a "detaching" status and any further
> attempt to delete or detach the volume will fail:
> "Delete for volume  failed: Volume  is still attached, 
> detach volume first. (HTTP 400)"

So why isn't this handled in a "finally" pattern?

Ensure that you always do 2 (a) & (b) and 3, collect errors that happen
during 2 (a) & (b), and report them back to the user.

What state does that leave things in? Both for the server and the volume.
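
For concreteness, a minimal sketch of that error-collecting teardown; the
three callables are placeholders for the real terminate_connection/detach/
BDM-cleanup calls, not Nova's actual code:

# Sketch only: run every teardown step even if an earlier one fails,
# collect the failures, and report them together at the end.
def teardown_volume(terminate_connection, detach, destroy_bdm):
    errors = []
    for name, step in [("terminate_connection", terminate_connection),
                       ("detach", detach),
                       ("destroy_bdm", destroy_bdm)]:
        try:
            step()
        except Exception as exc:  # collect instead of aborting
            errors.append((name, exc))
    if errors:
        raise RuntimeError("volume teardown finished with errors: %r"
                           % errors)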

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reg: Blueprint -- add-compute-node-on-the-go

2015-12-10 Thread Atul Ag
 Hi, 

I have added the blueprint 
https://blueprints.launchpad.net/nova/+spec/add-compute-node-on-the-go.
Can you please let me know about its feasibility, and accept the blueprint?

Thanks & Regards,  
Atul Agarwal
 Tata Consultancy Services
 Mailto: atul...@tcs.com
 Website: http://www.tcs.com
 
 Experience certainty.  IT Services
Business Solutions
Consulting
 
 
=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain 
confidential or privileged information. If you are 
not the intended recipient, any dissemination, use, 
review, distribution, printing or copying of the 
information contained in this e-mail message 
and/or attachments to it are strictly prohibited. If 
you have received this communication in error, 
please notify us by reply e-mail or telephone and 
immediately and permanently delete the message 
and any attachments. Thank you


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-10 Thread Brant Knudson
On Thu, Dec 10, 2015 at 7:26 AM, Sean Dague  wrote:

> On 12/10/2015 01:56 AM, Joshua Harlow wrote:
> > Shouldn't be too hard (although it's probably not on each oslo project,
> > but on the consumer projects).
> >
> > The warnings module can turn warnings into raised exceptions with a
> > simple command line switch btw...
> >
> > For example:
> >
> > $ python -Wonce
> > Python 2.7.6 (default, Jun 22 2015, 17:58:13)
> > [GCC 4.8.2] on linux2
> > Type "help", "copyright", "credits" or "license" for more information.
> > >>> import warnings
> > >>> warnings.warn("I am not supposed to be used", DeprecationWarning)
> > __main__:1: DeprecationWarning: I am not supposed to be used
> >
> > $ python -Werror
> > Python 2.7.6 (default, Jun 22 2015, 17:58:13)
> > [GCC 4.8.2] on linux2
> > Type "help", "copyright", "credits" or "license" for more information.
> > >>> import warnings
> > >>> warnings.warn("I am not supposed to be used", DeprecationWarning)
> > Traceback (most recent call last):
> >   File "<stdin>", line 1, in <module>
> > DeprecationWarning: I am not supposed to be used
> >
> > https://docs.python.org/2/library/warnings.html#the-warnings-filter
> >
> > Turn that CLI switch from off to on and I'm pretty sure usage of
> > deprecated things will become pretty evident real quick ;)
>
> It needs to be more targeted than that. There is a long-standing
> warning between paste and pkg_resources that would hard-stop everyone.
>
> But, yes, the idea of being able to run unit tests with fatal
> deprecations of oslo easily is what I think would be useful.
>
> -Sean
>
>
In keystone we set a warnings filter for the unit tests so that if keystone
calls any deprecated function it'll raise[1]. So when the oslo timeutils
functions were deprecated it broke keystone gate and we fixed it. It would
be nicer to have a non-voting gate job to serve as a warning instead, but
it's only happened a couple of times where this caused keystone to be
blocked for the day that it took to get the fix in. Anyways, it would be
easy enough for us to have this enabled/disabled via an environment
variable and create a tox job.

If we had a non-voting warning job it could also run oslo libs from master
rather than released.

[1]
http://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/core.py?id=4f8c4a7a10d85080d6db9b30ae1759d45a38a32c#n460
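
For illustration, a minimal version of such a filter might look like this
(the module regex is an assumption about scoping the filter to keystone's
own code, not a verbatim copy of the linked core.py):

# Sketch: turn any DeprecationWarning triggered from keystone's own
# modules into an error, so deprecated oslo calls fail unit tests fast.
import warnings

warnings.filterwarnings("error", category=DeprecationWarning,
                        module=r"^keystone\.")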

- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] what are the key errors with volume detach

2015-12-10 Thread Matt Riedemann



On 12/2/2015 11:37 AM, Rosa, Andrea (HP Cloud Services) wrote:

Hi

thanks Sean for bringing up this point. I have been working on the change and 
on the (abandoned) spec.
I'll try here to summarize all the discussions we had and what we decided.


From: Sean Dague [mailto:s...@dague.net]
Sent: 02 December 2015 13:31
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] what are the key errors with volume detach

This patch to add a bunch of logic to nova-manage for forcing volume detach
raised a bunch of questions
https://review.openstack.org/#/c/184537/24/nova/cmd/manage.py,cm


On this specific review there are some valid concerns that I am happy to 
address, but first we need to understand if we want this change.
FWIW I think it is still a valid change, please see below.


In thinking about this for the last day, I think the real concern is that we 
have
so many safety checks on volume delete, that if we failed with a partially
setup volume, we have too many safety latches to tear it down again.

Do we have some detailed bugs about how that happens? Is it possible to
just fix DELETE to work correctly even when we're in these odd states?


In a simplified view of a volume detach, we can say that the nova code does:
1. detach the volume from the instance
2. inform cinder about the detach and call terminate_connection on the
cinder API
3. delete the BDM record in the nova DB


We actually:

1. terminate the connection in cinder:

https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L2312

2. detach the volume

https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L2315

3. delete the volume (if marked for delete_on_termination):

https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L2348

4. delete the bdm in the nova db:

https://github.com/openstack/nova/blob/c4ca1abb4a49bf0bce765acd3ce906bd117ce9b7/nova/compute/manager.py#L908

So if terminate_connection fails, we shouldn't get to detach. And if 
detach fails, we shouldn't get to delete.




If 2 fails, the volume gets stuck in a "detaching" status and any further attempt 
to delete or detach the volume will fail:
"Delete for volume  failed: Volume  is still attached, detach 
volume first. (HTTP 400)"

And if you try to detach:
"EROR (BadRequest): Invalid input received: Invalid volume: Unable to detach volume. 
Volume status must be 'in-use' and attach_status must be 'attached' to detach. Currently: 
status: 'detaching', attach_status: 'attached.' (HTTP 400)"

At the moment the only way to clean up the situation is to hack the nova DB, 
deleting the BDM record, and do some hacking on the cinder side as well.
We wanted a way to clean up the situation that avoids manual hacks to the nova 
DB.


Can't cinder roll back state somehow if it's bogus or an operation 
failed? For example, if detach failed, shouldn't we not be in 
'detaching' state? This is like auto-reverting task_state on server 
instances when an operation fails so that we can reset or delete those 
servers if needed.




Solution proposed #1
Move the deletion of the BDM record so that it happens before calling cinder. I 
thought that was OK, as from the nova side we are done (no leaking BDM) and the 
problem was just on the cinder side, but I was wrong.
We have to call terminate_connection, otherwise the device may show back up on 
the nova host; for example, that is true for iSCSI volumes:
  "if an iSCSI session from the compute host to the storage backend still exists 
(because other volumes are connected), then the volume you just removed will show back up 
on the next scsi bus rescan."
The key point here is that Nova must call terminate_connection because only Nova 
has the "connector info" needed for the terminate connection call, so Cinder 
can't fix it on its own.

Solution proposed #2
Then I thought, OK, so let's expose a new nova API called "force delete volume" 
which skips all the checks and allows detaching a volume in "detaching" status. 
I thought it was OK, but I was wrong (again).
The main concern here is that we do not want to have the concept of "force 
delete": the user already asked for detaching the volume and the call should be 
idempotent and just work.
So adding a new API was just adding technical debt to the REST API for a 
buggy/weak interaction between the Cinder API and Nova, or in other words we 
would be adding a Nova API to fix a bug in Cinder, which is very odd.

Solution proposed #3
OK, so the solution is to fix the Cinder API and make the interaction between 
the Nova volume manager and that API robust.
This time I was right (YAY), but as you can imagine this fix is not going to be 
an easy one, and after talking with the Cinder guys they clearly told me that it 
is going to be a massive change in the Cinder API and it is unlikely to land in 
the N(utella) or O(melette) release.

Solution 

Re: [openstack-dev] [Openstack-operators] [openstack-ansible] Mid Cycle Sprint

2015-12-10 Thread Kevin Carter
Count me in as wanting to be part of the mid-cycle. I live in San 
Antonio, but I think we should strongly consider having the meetup in the 
UK. It seems most of our deployers live in the UK, and it'd be nice to 
get people involved who may not have been able to attend the summit. 
While I'll need to get travel approval if we decide to hold the event in 
the UK, during the mid-cycle I'd like to focus on working on the 
"Upgrade Framework" and "multi-OS" efforts. Additionally, if we have time, I'd 
like to see if people are interested in bringing new services online and 
work with folks on the implementation details and how to compose 
new roles.

Cheers!

--

Kevin Carter
IRC: Cloudnull

On 12/09/2015 08:44 AM, Curtis wrote:
> On Wed, Dec 9, 2015 at 5:45 AM, Jesse Pretorius
>  wrote:
>> Hi everyone,
>>
>> At the Mitaka design summit in Tokyo we had some corridor discussions about
>> doing a mid-cycle meetup for the purpose of continuing some design
>> discussions and doing some specific sprint work.
>>
>> ***
>> I'd like indications of who would like to attend and what
>> locations/dates/topics/sprints would be of interest to you.
>> ***
>>
>
> I'd like to get more involved in openstack-ansible. I'll be going to
> the operators mid-cycle in Feb, so could stay later and attend in West
> London. However, I could likely make it to San Antonio as well. Not
> sure if that helps but I will definitely try to attend where ever it
> occurs.
>
> Thanks.
>
>> For guidance/background I've put some notes together below:
>>
>> Location
>> 
>> We have contributors, deployers and downstream consumers across the globe so
>> picking a venue is difficult. Rackspace have facilities in the UK (Hayes,
>> West London) and in the US (San Antonio) and are happy for us to make use of
>> them.
>>
>> Dates
>> -
>> Most of the mid-cycles for upstream OpenStack projects are being held in
>> January. The Operators mid-cycle is on February 15-16.
>>
>> As I feel that it's important that we're all as involved as possible in
>> these events, I would suggest that we schedule ours after the Operators
>> mid-cycle.
>>
>> It strikes me that it may be useful to do our mid-cycle immediately after
>> the Ops mid-cycle, and do it in the UK. This may help to optimise travel for
>> many of us.
>>
>> Format
>> --
>> The format of the summit is really for us to choose, but typically they're
>> formatted along the lines of something like this:
>>
>> Day 1: Big group discussions similar in format to sessions at the design
>> summit.
>>
>> Day 2: Collaborative code reviews, usually performed on a projector, where
>> the goal is to merge things that day (if a review needs more than a single
>> iteration, we skip it. If a review needs small revisions, we do them on the
>> spot).
>>
>> Day 3: Small group / pair programming.
>>
>> Topics
>> --
>> Some topics/sprints that come to mind that we could explore/do are:
>>   - Install Guide Documentation Improvement [1]
>>   - Development Documentation Improvement (best practises, testing, how to
>> develop a new role, etc)
>>   - Upgrade Framework [2]
>>   - Multi-OS Support [3]
>>
>> [1] https://etherpad.openstack.org/p/oa-install-docs
>> [2] https://etherpad.openstack.org/p/openstack-ansible-upgrade-framework
>> [3] https://etherpad.openstack.org/p/openstack-ansible-multi-os-support
>>
>> --
>> Jesse Pretorius
>> IRC: odyssey4me
>>
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
>

-- 

Kevin Carter
IRC: cloudnull

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] liberty doesn't have caps on deps

2015-12-10 Thread Matt Riedemann



On 10/16/2015 7:16 AM, Sean Dague wrote:

On 10/16/2015 04:23 AM, Thierry Carrez wrote:

Robert Collins wrote:

[...]
BUT: we haven't (ever!) tested that the lowest versions we specify
work. When folk know they are adding a hard dependency on a version we
do raise the lower versions, but thats adhoc and best effort today.
I'd like to see a lower-constraints.txt reflecting the oldest version
that works across all of OpenStack (as a good boundary case to test) -
but we need to fix pip first to teach it to choose lower versions over
higher versions (https://github.com/pypa/pip/issues/3188 - I thought
I'd filed it previously but couldn't find it...)

More generally, we don't [yet] have the testing setup to test multiple
versions on an ongoing basis, so we can't actually make any statement
other than 'upper-constraints.txt is known to work'. Note: before
constraints we couldn't even make *that* statement. The statement we
could make then was 'if you look up the change in gerrit, and from that
the CI dsvm test run which got through the gate, then you can
figure out *a* version of the dependencies that worked'.


And that is the critical bit. The system we had in kilo and before may
appear to be more practical to interpret downstream, but the assertions
it was making were mostly untested. So the capping was a convenient
illusion: things beyond the cap may be working, and things below the cap
could actually be broken. At least the upper-constraints expresses
clearly the combination that works and was tested. Combined with the
uncapped requirements (which express what *should* be working, to the
best of our knowledge), they make a more accurate, albeit admittedly
more complex, set of information for downstream packagers.


And equally important is that pip only really reacts well to version
capping / pinning if you do it all at once across all your things.

When we had a cap, and raised it, we had to:

A) raise it in low level oslo libs (like oslo.config)
B) release those libraries with new caps
C) raise the cap in all the things that used that library
D) release those libraries with new caps
E) ... repeat
...
M) raise the cap in all top level openstack server projects

So a dozen library releases could easily be triggered by fixing one cap
or pin in one low level library, that were no functional changes, they
were just requirements changes. The only reason M could use the old
version of A is because pip wouldn't let you install the 2 together. Not
for any functional reasons.

-Sean



This thread is now directly related to the upgrade impact being 
discussed in liberty here:


https://review.openstack.org/#/c/255245/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Last vitrage meeting

2015-12-10 Thread AFEK, Ifat (Ifat)
Hi,

Our last Vitrage meeting was not recorded properly due to an update of the 
OpenStack bot that was performed in the middle of our meeting. Anyway, you can 
still view the meeting log at 
http://eavesdrop.openstack.org/meetings/vitrage/2015/vitrage.2015-12-09-09.00.log.txt
 

Ifat.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Upcoming specs and blueprints for Trove/Mitaka

2015-12-10 Thread Amrith Kumar
Members of the Trove community,

Over the past couple of weeks we have discussed the possibility of an early 
deadline for submission of trove specifications for projects that are to be 
included in the Mitaka release. I understand why we're doing it, and agree with 
the concept. Unfortunately though, there are a number of projects for which 
specifications won't be ready in time for the proposed deadline of Friday 12/11 
(aka tomorrow).

I'd like to note that the following projects are in the works and specifications 
will be submitted as soon as possible. Now that we know of the new process, we 
will all be able to make sure that we are better prepared in time for the N 
release. 

Blueprints have been registered for these projects.

The projects in question are:

Cassandra:
- enable/disable/show root 
(https://blueprints.launchpad.net/trove/+spec/cassandra-database-user-functions)
- Clustering 
(https://blueprints.launchpad.net/trove/+spec/cassandra-cluster)

MariaDB:
- Clustering 
(https://blueprints.launchpad.net/trove/+spec/mariadb-clustering)
- GTID replication 
(https://blueprints.launchpad.net/trove/+spec/mariadb-gtid-replication)

Vertica:
- Add/Apply license 
(https://blueprints.launchpad.net/trove/+spec/vertica-licensing)
- User triggered data upload from Swift 
(https://blueprints.launchpad.net/trove/+spec/vertica-bulk-data-load)
- Cluster grow/shrink 
(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-grow-shrink)
- Configuration Groups 
(https://blueprints.launchpad.net/trove/+spec/vertica-configuration-groups)
- Cluster Anti-affinity 
(https://blueprints.launchpad.net/trove/+spec/vertica-cluster-anti-affinity)

Hbase and Hadoop based databases:
- Extend Trove to Hadoop based databases, starting with HBase 
(https://blueprints.launchpad.net/trove/+spec/hbase-support)

Specifications in the trove-specs repository will be submitted for review as 
soon as they are available.

Thanks,

-amrith



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Christmas Core Cleanup

2015-12-10 Thread Igor Kalnitsky
Hi Evgeniy,

Yes, you are absolutely right! As far as possible, I will ask them to
review certain patches (if they have no time to watch all patches).
Moreover, I'm going to add them to the MAINTAINERS file.

Thanks,
Igor

P.S: I hope you and others will manage to spend more time on Fuel;
your feedback is really appreciated, guys, since you're proven
authorities here. ;)


On Thu, Dec 10, 2015 at 5:59 PM, Evgeniy L  wrote:
> Hi,
>
> Thank you Igor for cleaning core related groups.
>
> Also I would like to add that many of the removed cores are still SMEs (subject
> matter experts) in specific areas, so they will continue reviewing related
> patches.
>
> Thanks,
>
> On Thu, Dec 10, 2015 at 2:42 PM, Igor Kalnitsky 
> wrote:
>>
>> Hey folks,
>>
>> In an effort to do some housekeeping, I have cleaned up the list of core
>> reviewers in Fuel.
>>
>> According to Stackalytics the following cores show a low contribution
>> rate:
>>
>> # fuel-web [1]
>>
>> * Dmitry Shulyak
>> * Evgeniy L
>>
>> # python-fuelclient [2]
>>
>> * Dmitry Pyzhov
>> * Evgeniy L
>>
>> # shotgun [3]
>>
>> * Dmitry Shulyak
>> * Evgeniy L
>>
>> # fuel-upgrade [4]
>>
>> * Aleksey Kasatkin
>> * Vladimir Kozhukalov
>>
>> # fuel-main [5]
>>
>> * Dmitry Pyzhov
>> * Roman Vyalov
>>
>> # fuel-agent [6]
>>
>> * Aleksey Kasatkin
>> * Evgeniy L
>> * Igor Kalnitsky
>>
>> Also, I've removed Sebastian Kalinowski as he said he has no time to
>> work on Fuel anymore.
>>
>> Once former cores show a high level of contribution again, I'll gladly
>> add them back.
>>
>> - Igor
>>
>> [1] http://stackalytics.com/report/contribution/fuel-web/90
>> [2] http://stackalytics.com/report/contribution/python-fuelclient/90
>> [3] http://stackalytics.com/report/contribution/shotgun/90
>> [4] http://stackalytics.com/report/contribution/fuel-upgrade/90
>> [5] http://stackalytics.com/report/contribution/fuel-main/90
>> [6] http://stackalytics.com/report/contribution/fuel-agent/90
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Tempest] Drop javelin off tempest

2015-12-10 Thread Matthew Treinish
On Thu, Dec 10, 2015 at 11:15:06AM +0100, Daniel Mellado wrote:
> Hi All,
> 
> In today's QA meeting we were discussing dropping Javelin from
> tempest if it's not being used anymore in grenade, as sdague pointed
> out. We were thinking about this as part of the work for [1], where we
> hit an issue with Javelin script testing: the gate did not detect the
> service client changes in this script.

So the reason we didn't remove this from tempest when we stopped using it as
part of grenade is at the time there were external users. They still wanted to
keep the tooling around. This is why the unit tests were grown in an effort to
maintain some semblance of testing after the grenade removal. (for a long time
it was mostly self testing through the grenade job)

> 
> Our intention is to drop the following files from tempest:
> 
>   * tempest/cmd/javelin.py
> 
>   * tempest/cmd/resources.yaml
> 
>   * tempest/tests/cmd/test_javelin.py
> 
> 
> 
> Before doing so, we'd like to get some feedback about our planned move,
> so if you have any questions, comments or feedback, please reply to this
> thread.

You should not just delete these files; there were real users of them in the past
and there might still be. If you're saying that javelin isn't something we can
realistically maintain anymore (which I'm not sure I buy, but whatever), we 
should first mark it for deprecation and have a warning printed saying it will be
removed in the future. This gives people a chance to stop using it and migrate
to something else. (Using ansible would be a good alternative.)
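
A minimal sketch of what such a notice could look like from javelin's entry
point (the wording and placement are illustrative only, not an agreed patch):

# Sketch only: emit a deprecation notice when the tool starts, so any
# remaining users get advance warning before the code is removed.
import warnings


def main():
    warnings.warn("javelin is deprecated and will be removed from tempest "
                  "in a future release; please migrate to another tool",
                  DeprecationWarning)
    # ... existing javelin logic would follow here ...


if __name__ == "__main__":
    main()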


-Matt Treinish

> 
> ---
> [1]
> https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:bp/consistent-service-method-names,n,z
> 


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Configuration management for Fuel 7.0

2015-12-10 Thread Roman Sokolkov
Hi there,

One small step in this direction: I've checked how idempotent the
controller's tasks are. As a result, the bugs below were reported:

https://bugs.launchpad.net/fuel/+bug/1524759 NEW
https://bugs.launchpad.net/fuel/+bug/1524747 NEW
https://bugs.launchpad.net/fuel/+bug/1524731 NEW
https://bugs.launchpad.net/fuel/+bug/1524727 NEW
https://bugs.launchpad.net/fuel/+bug/1524724 NEW
https://bugs.launchpad.net/fuel/+bug/1524719 NEW
https://bugs.launchpad.net/fuel/+bug/1524713 NEW
https://bugs.launchpad.net/fuel/+bug/1524687 NEW
https://bugs.launchpad.net/fuel/+bug/1524630 NEW
https://bugs.launchpad.net/fuel/+bug/1524327 CONFIRMED
https://bugs.launchpad.net/fuel/+bug/1522857 IN PROGRESS

If it's of interest, I can go through the other roles and tasks. Please let me
know.
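
For context, the kind of idempotency check involved can be sketched roughly
as below, assuming tasks are applied with puppet apply; the helper and
manifest path are illustrative, not Fuel's actual tooling:

# Sketch: apply the same manifest twice. With --detailed-exitcodes,
# puppet exits 2 (or 6) when resources changed, so a non-idempotent
# task still reports changes on the second run.
import subprocess


def is_idempotent(manifest):
    cmd = ["puppet", "apply", "--detailed-exitcodes", manifest]
    subprocess.call(cmd)       # first run: converge the node
    rc = subprocess.call(cmd)  # second run: should be a no-op
    return rc not in (2, 6)    # 2/6 indicate changes were applied


if __name__ == "__main__":
    # hypothetical manifest path, for illustration only
    print(is_idempotent("/path/to/controller_task.pp"))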

Thanks



On Thu, Dec 3, 2015 at 10:33 PM, Yuriy Taraday  wrote:

> Hi, Roman.
>
> On Thu, Dec 3, 2015 at 5:36 PM Roman Sokolkov 
> wrote:
>
>> I've selected 13 real-world tasks from a customer (e.g. update flag X in
>> nova.conf):
>> - 6/13 require fuel-library patching (or #2 is unusable)
>> - 3/13 are OK and can be done with #2
>> - 4/13 can be done with some limitations.
>>
>> If needed I'll provide details.
>>
>> The rough statistic is that *only ~20-25% of use cases can be done with #2*.
>>
>> Let me give a very popular use case that will fail with #2. Assume we're
>> executing the whole task graph every two hours.
>> We want to change nova.conf "DEFAULT/amqp_durable_queues" from False to
>> True.
>>
>> There is no parameter in hiera for "amqp_durable_queues". We have two
>> solutions here (both are bad):
>> 1) Redefine "DEFAULT/amqp_durable_queues" = True in a plugin task. What
>> will happen on the node: amqp_durable_queues will keep flipping
>> between True and False on every execution. We shouldn't do it this way.
>> 2) Patch fuel-library so the value for amqp_durable_queues is taken from
>> hiera. This is also a one-way ticket.
>>
>
> You are describing one of the use cases we want to cover in the future with
> the Config Service. If we store all configuration variables consumed by all
> deployment tasks in the service, one will be able to change (override) the
> value in the same service and let deployment tasks apply config changes on
> nodes.
>
> This would require support from the deployment side (the source of all config
> values becomes a service, not a static file) and from Nailgun (all data
> should be stored in the service). In the future this approach will allow us
> to clarify which value goes where and to define new values and override old
> ones in a clearly manageable fashion.
>
> Config Service would also allow us to use data defined outside of Nailgun
> to feed values into deployment tasks, such as external CM services (e.g.
> Puppet Master).
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Roman Sokolkov,
Deployment Engineer,
Mirantis, Inc.
Skype rsokolkov,
rsokol...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev