Re: [openstack-dev] [Manila] Expected Manila behavior for creation of share from snapshot

2015-06-03 Thread Deepak Shetty
On Tue, Jun 2, 2015 at 4:42 PM, Valeriy Ponomaryov  wrote:

> Deepak,
>
> "transfer-*" is not suitable in this particular case. Usage of share
> networks causes creation of resources, when "transfer" does not. Also in
> this topic we have "creation" of new share based on some snapshot.
>

In the original mail it was said:
"
>From user point of view, he may want to copy share and use its copy in
different network and it is valid case.
"
So create the share from the snapshot, then transfer that share to a
different tenant; doesn't that work?
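For reference, the Cinder flow being referred to looks roughly like the
sketch below (a minimal illustration using python-cinderclient; credentials,
auth URL and volume ID are placeholders, and Manila would need to grow an
equivalent API):

# Minimal sketch of the Cinder volume-transfer flow mentioned above.
# Credentials, auth URL and volume ID are placeholders.
from cinderclient.v2 import client

# The owning tenant creates a transfer for an existing (cloned) volume.
owner = client.Client('demo', 'secret', 'demo-project',
                      'http://controller:5000/v2.0')
transfer = owner.transfers.create('11111111-2222-3333-4444-555555555555')

# The transfer id and auth key are handed to the receiving tenant, which
# accepts the transfer and thereby takes ownership of the volume.
receiver = client.Client('alt_demo', 'secret', 'alt-project',
                         'http://controller:5000/v2.0')
receiver.transfers.accept(transfer.id, transfer.auth_key)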


> Valeriy
>
> On Sun, May 31, 2015 at 4:23 PM, Deepak Shetty 
> wrote:
>
>>
>> On Thu, May 28, 2015 at 4:54 PM, Duncan Thomas 
>> wrote:
>>
>>> On 28 May 2015 at 13:03, Deepak Shetty  wrote:
>>>
 Isn't this similar to what cinder transfer-* cmds are for ? Ability to
 transfer cinder volume across tenants
 So Manila should be implementing the transfer-* cmds, after which
 admin/user can create a clone
 then initiate a transfer to a diff tenant  ?


>>> Cinder doesn't seem to have any concept analogous to a share network
>>> from what I can see; the cinder transfer commands are for moving a volume
>>> between tenants, which is a different thing, I think.
>>>
>>
>> Yes, Cinder doesn't have any equivalent of a share network. But my comment
>> was from the functionality perspective. In Cinder, the transfer-* commands
>> are used to transfer ownership of volumes across tenants. IIUC, the ability
>> in Manila to create a share from a snapshot and have that share in a
>> different share network is equivalent to creating a share from a snapshot
>> for a different tenant, no? Share networks are typically 1-1 with tenant
>> networks AFAIK; correct me if I am wrong.
>>
>
>>
>>>
>>> --
>>> Duncan Thomas
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread Thierry Carrez
John Garbutt wrote:
>> I support moving nova to intermediate release, but not this cycle.
> 
> +1
> 
> My main motivation here is actually making it clear how useful a
> milestone release can be to get access to a feature you really, really
> need much more quickly.
> 
> It's a shame it's called a "beta", because I think the milestones are
> more useful (in production) than that. On the other hand, they are also
> worse than many "betas" you get (in terms of completed translations
> and docs, etc.). So maybe that's still the best label for the
> milestones.

It's actually not called a "beta", it's called a "milestone", which is
slightly less pejorative. The fact that the tag uses "b1" is more an
artifact of PEP 440's limitations...
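For anyone unsure how those tags sort, a quick illustration of the PEP 440
ordering using the packaging library (assuming it is installed; pip applies
the same rules):

# PEP 440 ordering of the milestone tags discussed above.
from packaging.version import Version

assert Version("12.0.0b1") < Version("12.0.0b2") < Version("12.0.0b3")
assert Version("12.0.0b3") < Version("12.0.0rc1") < Version("12.0.0")
# A dev tag between milestones sorts before the milestone it leads up to.
assert Version("12.0.0.dev1") < Version("12.0.0b1")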

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Paul Belanger

On 06/03/2015 11:23 AM, Thomas Goirand wrote:

On 06/03/2015 12:41 AM, James E. Blair wrote:

Hi,

This came up at the TC meeting today, and I volunteered to provide an
update from the discussion.


I've just read the IRC logs. And there's one thing I would like to make
super clear.

We, i.e. the Debian & Ubuntu folks, are very clear on what we want to
achieve. The project has been maturing in our heads for more than two
years. We would like that, ultimately, only a single set of packaging Git
repositories exists. We already worked on *some* convergence during the
last years, but now we want a *full* alignment.

We're not 100% sure what the implementation details will look like for
the core packages (for example, whether to use the Debconf interface for
configuring packages), but it will eventually happen. For all the rest
(i.e. Python module packaging), which represents the biggest part of the
work, we're already converging and there is zero controversy.

Now, the Fedora/RDO/Suse people jumped on the idea of pushing packaging
onto the upstream infra. Great. That's socially tempting. But technically,
I don't really see the point, apart from some of the infra tooling (super
cool if what Paul Belanger does works for both Deb+RPM). Finally,
indeed, this is not totally baked. But let's please not delay the
Debian+Ubuntu upstream Gerrit collaboration part because of it. We would
like to get started, and for the moment, nobody is approving the new
/stackforge/deb-openstack-pkg-tools repository [1] because we're
waiting on the TC decision.

I would agree with not gating on stuff that -infra is working on, too.
If getting Gerrit collaboration is useful to Debian / Ubuntu, simple
gate-noop testing seems like an easy solution.


I agree, anything we in -infra can do to help establish some base
tooling (chroots, for example) would be super awesome.  However, I also
expect packagers to use their own toolchains if that is more convenient.



Cheers,

Thomas Goirand (zigo)

[1] https://review.openstack.org/#/c/185164/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Haïkel
2015-06-03 17:23 GMT+02:00 Thomas Goirand :
> On 06/03/2015 12:41 AM, James E. Blair wrote:
>> Hi,
>>
>> This came up at the TC meeting today, and I volunteered to provide an
>> update from the discussion.
>
> I've just read the IRC logs. And there's one thing I would like to make
> super clear.
>

I still haven't read the logs as we had our post-mortem meeting today,
but I'll try to address your points.

> We, i.e. the Debian & Ubuntu folks, are very clear on what we want to
> achieve. The project has been maturing in our heads for more than two
> years. We would like that, ultimately, only a single set of packaging Git
> repositories exists. We already worked on *some* convergence during the
> last years, but now we want a *full* alignment.
>
> We're not 100% sure what the implementation details will look like for
> the core packages (for example, whether to use the Debconf interface for
> configuring packages), but it will eventually happen. For all the rest
> (i.e. Python module packaging), which represents the biggest part of the
> work, we're already converging and there is zero controversy.
>
> Now, the Fedora/RDO/Suse people jumped on the idea of pushing packaging
> onto the upstream infra. Great. That's socially tempting. But technically,
> I don't really see the point, apart from some of the infra tooling (super
> cool if what Paul Belanger does works for both Deb+RPM). Finally,
> indeed, this is not totally baked. But let's please not delay the
> Debian+Ubuntu upstream Gerrit collaboration part because of it. We would
> like to get started, and for the moment, nobody is approving the new
> /stackforge/deb-openstack-pkg-tools repository [1] because we're
> waiting on the TC decision.
>

First, we all agree that we should move packaging recipes (to use a
neutral term)
and reviewing to upstream gerrit. That should *NOT* be delayed.
We (RDO) are even willing to transfer full control of the openstack-packages
namespace on github. If you want to use another namespace, it's also
fine with us.

Then, about the infra/tooling things, it looks like a misunderstanding.
If we don't find an agreement on these topics, that's perfectly fine and
should not prevent moving to the upstream Gerrit.

So let's break the discussion into two parts:

1. share the upstream Gerrit with everyone and get this started ASAP;
2. continue the discussion about infra/tooling within the new project,
without presuming the outcome.

Does it look like a good compromise to you?

Regards,
H.


> Cheers,
>
> Thomas Goirand (zigo)
>
> [1] https://review.openstack.org/#/c/185164/
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Kilo v3 identity problems

2015-06-03 Thread Amy Zhang
Hi guys,

I have installed Kilo and am trying to use Identity v3. I am using the v3
policy file. I changed the domain_id for cloud admin to "default". As cloud
admin, I tried "openstack domain list" and got an error message saying that
I was not authorized.

The part I changed in policy.json:

"cloud_admin": "rule:admin_required and domain_id:default",


The error I got from "openstack domain list":

ERROR: openstack You are not authorized to perform the requested action:
identity:create_domain (Disable debug mode to suppress these details.)
(HTTP 403) (Request-ID: req-2f42b1da-9933-4494-9b39-c1664d154377)

Has anyone tried identity v3 in Kilo? Did you have this problem? Any
suggestions?
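For what it's worth, the domain_id:default part of that rule is checked
against the scope of the token, so a project-scoped admin token will fail it.
A minimal sketch (placeholder credentials, and assuming the user has the
admin role on the default domain) of requesting a domain-scoped token with
keystoneauth1 and python-keystoneclient:

# Sketch: obtain a domain-scoped token so the token carries a domain_id,
# which is what "domain_id:default" in policy.json compares against.
# Credentials and URLs are placeholders.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(auth_url='http://controller:5000/v3',
                   username='admin',
                   password='secret',
                   user_domain_name='Default',
                   domain_name='Default')   # domain scope, not project scope
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)

# With a domain-scoped token for the default domain, this call should pass
# the cloud_admin rule quoted above.
print([d.name for d in keystone.domains.list()])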

Thanks
Amy
-- 
Best regards,
Amy (Yun Zhang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-03 Thread Adam Young

On 06/03/2015 06:47 AM, Sean Dague wrote:

Where I get fuzzy on what I've read / discussed on Dynamic Policy right
now is the fact that every API call is going to need another round trip
to Keystone for a policy check (which would be db calls in keystone?)
Which, maybe is fine, but it seems like there are some challenges and
details around how this consolidated view of the world gets back to the
servers. It *almost* feels like that /policy API could be used to signal
cache flush as well on changes in Keystone (though we'd need to handle
the HA proxy case). I don't know, this seems a place where the devil is in
the details, and lots of people probably need to weigh in on options.

Don't worry, I am not proposing this. I am proposing extending the
existing mechanism to fetch and cache the policy.json file.  I'm
currently thinking a default of 1-5 minutes... feedback?
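As a rough illustration of the kind of caching being proposed (not the
actual oslo.policy code; the path and TTL below are placeholders):

# Rough sketch of time-based caching of a fetched policy file.
# Not the real implementation; path and TTL are illustrative.
import json
import time

_CACHE = {'data': None, 'fetched_at': 0.0}
POLICY_TTL = 300  # seconds, i.e. the 5-minute end of the range discussed

def get_policy(path='/etc/nova/policy.json'):
    """Return the parsed policy, re-reading the file at most once per TTL."""
    now = time.time()
    if _CACHE['data'] is None or now - _CACHE['fetched_at'] > POLICY_TTL:
        with open(path) as f:
            _CACHE['data'] = json.load(f)
        _CACHE['fetched_at'] = now
    return _CACHE['data']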
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Getting `ValueError: Field `volume_id' cannot be None`

2015-06-03 Thread Deepak Shetty
Hi All,
  I am hitting a strange issue when running Cinder unit tests against my
patch @
https://review.openstack.org/#/c/172808/5

I have spent a day and haven't been successful at figuring out how/why my
patch is causing it!

All the failing tests are part of the VolumeTestCase suite, and from the
error (see below) it seems the Snapshot object is complaining that the
'volume_id' field is null (while it shouldn't be).

An example error from the associated Jenkins run can be seen @
http://logs.openstack.org/08/172808/5/check/gate-cinder-python27/0abd15e/console.html.gz#_2015-05-22_13_28_47_140

I am seeing a total of 21 such errors.

It's strange because, when I try to reproduce it locally in my devstack env,
I see the below:

1) When I just run: ./run_tests.sh -N
cinder.tests.unit.test_volume.VolumeTestCase
all test cases pass

2) When I run one individual test case: ./run_tests.sh -N
cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
that passes too

3) When I run: ./run_tests.sh -N
I see 21 tests failing, and all are failing with an error similar to the below

{0} cinder.tests.unit.test_volume.VolumeTestCase.test_delete_busy_snapshot
[0.537366s] ... FAILED

Captured traceback:
~~~
Traceback (most recent call last):
  File "cinder/tests/unit/test_volume.py", line 3219, in test_delete_busy_snapshot
    snapshot_obj = objects.Snapshot.get_by_id(self.context, snapshot_id)
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 163, in wrapper
    result = fn(cls, context, *args, **kwargs)
  File "cinder/objects/snapshot.py", line 130, in get_by_id
    expected_attrs=['metadata'])
  File "cinder/objects/snapshot.py", line 112, in _from_db_object
    snapshot[name] = value
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 675, in __setitem__
    setattr(self, name, value)
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/base.py", line 70, in setter
    field_value = field.coerce(self, name, value)
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line 182, in coerce
    return self._null(obj, attr)
  File "/usr/lib/python2.7/site-packages/oslo_versionedobjects/fields.py", line 160, in _null
    raise ValueError(_("Field `%s' cannot be None") % attr)
ValueError: Field `volume_id' cannot be None
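For reference, the last two frames are just oslo.versionedobjects refusing
None for a non-nullable field; a standalone sketch (independent of the patch,
with the field name chosen to match) that triggers the same error:

# Standalone reproduction of the error path in the traceback above:
# assigning None to a non-nullable field raises the same ValueError.
from oslo_versionedobjects import base, fields

@base.VersionedObjectRegistry.register
class FakeSnapshot(base.VersionedObject):
    fields = {'volume_id': fields.UUIDField(nullable=False)}

snap = FakeSnapshot()
snap.volume_id = None   # ValueError: Field `volume_id' cannot be None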

Any suggestions / thoughts on why this could be happening?

thanx,
deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-03 Thread Tim Hinrichs
I definitely buy the idea of layering policies on top of each other.  But
I'd worry about the long-term feasibility of putting default policies into
code mainly because it ensures we'll never be able to provide any tools
that help users (or other services like Horizon) know what the effective
policy actually is.  In contrast, if the code is just an implementation of
the API, and there are one or more declarative descriptions
of which of those APIs are permitted to be executed by whom, we can build
tools to analyze those policies.  Two thoughts.

1) If the goal is to provide warnings to the user about questionable API
policy choices, I'd suggest adding policy-analysis functionality to, say,
oslo_policy.  The policy-analysis code would take two inputs: (i) the policy
and (ii) a list of policy properties, and would generate a warning if any
of the properties are true for the given policy.  Then each project could
provide a file that describes which policy properties are questionable, and
anyone wanting to see the warnings runs the functionality on that project's
policy and the project's policy property file.

It would definitely help me if we saw a handful of examples of the warnings
we'd want to generate.

2) If the goal is to provide sensible defaults so the system functions if
there's no policy.json (or a dynamic policy cached from Keystone), why not
create a default_policy.json file and use that whenever policy.json doesn't
exist (or more precisely to use policy.json to override default_policy.json
in some reasonable way).
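A rough sketch of that layering (illustrative only; the file name, rule
names and the warning heuristic are placeholders, not any project's actual
policy):

# Illustrative sketch: in-code defaults, overridden by policy.json, with a
# warning whenever an operator override differs from the default.
import json
import warnings

DEFAULT_RULES = {
    'compute:create': 'role:member',
    'compute:delete': 'role:member',
}

def load_effective_policy(path='/etc/nova/policy.json'):
    effective = dict(DEFAULT_RULES)
    try:
        with open(path) as f:
            overrides = json.load(f)
    except IOError:
        return effective            # no policy.json: defaults apply as-is
    for rule, check in overrides.items():
        if rule in effective and check != effective[rule]:
            warnings.warn('policy rule %r overridden: %r -> %r'
                          % (rule, effective[rule], check))
        effective[rule] = check
    return effective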

Tim




On Wed, Jun 3, 2015 at 3:47 AM, Sean Dague  wrote:

> On 06/02/2015 06:27 PM, Morgan Fainberg wrote:
> >
> >
> > On Tue, Jun 2, 2015 at 12:09 PM, Adam Young  > > wrote:
> >
> > Since this a cross project concern, sending it out to the wider
> > mailing list:
> >
> > We have a sub-effort in Keystone to do better access control policy
> > (not the  Neutron or  Congress based policy efforts).
> >
> > I presented on this at the summit, and the effort is under full
> > swing.  We are going to set up a subteam meeting for this, but would
> > like to get some input from outside the Keystone developers working
> > on it.  In particular, we'd like input from the Nova team that was
> > thinking about hard-coding policy decisions in Python, and ask you,
> > instead, to work with us to come up with a solution that works for
> > all the service.
> >
> >
> > I want to be sure we look at what Nova is presenting here. While
> > building policy into python may not (on the surface) look like an
> > approach that is wanted due to it restricting the flexibility that we've
> > had with policy.json, I don't want to exclude the concept without
> > examination. If there is a series of base level functionality that is
> > expected to work with Nova in all cases - is that something that should
> > be codified in the policy rules? This doesn't preclude having a mix
> > between the two approaches (allowing custom roles, etc, but having a
> > baseline for a project that is a known quantity that could be
> overridden).
> >
> > Is there real value (from a UX and interoperability standpoint) to have
> > everything 100% flexible in all the ways? If we are working to redesign
> > how policy works, we should be very careful of excluding the (more)
> > radical ideas without consideration. I'd argue that dynamic policy does
> > fall on the opposite side of the spectrum from the Nova proposal. In
> > truth I'm going to guess we end up somewhere in the middle.
>
> I also don't think it's removing any flexibility at all. Moving the
> default policy into code is about having sane defaults encoded somewhere
> that we can analyze what people did with the policy, and WARN them when
> they did something odd. That odd might be an interop thing, it might
> also be 'you realize you disabled server creation, right, probably want
> to go look at that'.
>
> Our intent is this applies in layers.
>
> You start with policy in code, that's a set of defaults, which can be
> annotated with ("WARN if policy is restricted further than these
> defaults") for specific rules.
>
> Then you apply policy.json as a set of overrides. Compute and emit any
> warnings.
>
> Where this comes into dynamic policy I think is interesting, because
> dynamic policy seems to require a few things.
>
> Where is the start of day origin seed for policy?
>
> There are a few options here. But if we think about a world where
> components are releasing on different schedules, and being upgraded at
> different times, it seems like the Nova installation has to be that
> source of original truth.
>
> So having a GET /policy API call that would provide the composite policy
> that Nova knows about (code + json patch) would make a lot of sense. It
> would make that discoverable to all kinds of folks on the network, not
> just Keystone. Win.
>
> This also seems like the only sane thing in a

Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Allison Randal
On 06/03/2015 07:22 AM, Thomas Goirand wrote:
> 
> However, talking with James Page (from Canonical, head of their server
> team which does the OpenStack packaging), we believe it's best if we had
> 2 different distinct teams: one for Fedora/SuSe/everything-rpm, and one
> for Debian based distribution.
> 
> We could try to work as a single entity (RPM + deb teams), but rpm+yum
> and dpkg+apt are 2 distinct worlds which have very few common
> attributes. So even if it may socially be nice, it's not the right
> technical decision.

Taking a step back, even though the tooling and packaging formats are
different, it is a massive benefit to OpenStack and to operators if the
end result of installing OpenStack packages on any distro is as similar
as possible. To that end, this should be one unified packaging team
focused on delivering a usable OpenStack through the distros.

Allison

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][third-party] CI FC passthrough scripts now available on stackforge

2015-06-03 Thread Sean McGinnis
Ramy and Patrick - thank you for your work on this. This piece is 
definitely a challenge for any FC vendors setting up third party CI.


On 06/03/2015 09:59 AM, Asselin, Ramy wrote:


For anyone working on 3rd party CI FC drivers:

Patrick East and I have been working on making “FC pass-through” scripts.

The main use case of these scripts is to present the FC HBAs directly 
inside a VM in order to test your FC cinder driver.


Now available in stackforge [1]

Link available in cinder FAQ [2]

It’s a working solution (3 drivers using these scripts, more using its 
precursors), but not the ‘best’ solution.  It’s open source:  so any 
improvements/bug fixes are welcome ☺


Ramy

[1] 
https://git.openstack.org/cgit/stackforge/third-party-ci-tools/tree/provisioning_scripts/fibre_channel


[2] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#FAQ




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread John Garbutt
On 3 June 2015 at 15:22, Doug Hellmann  wrote:
> Excerpts from John Garbutt's message of 2015-06-03 14:24:40 +0100:
>> On 3 June 2015 at 14:09, Thierry Carrez  wrote:
>> > John Garbutt wrote:
>> >> Given we are thinking Liberty is moving to semantic versioning, maybe
>> >> it could look like this:
>> >> * 12.0.1 (liberty-1) will have some features (hopefully), and will be a 
>> >> tag
>> >> * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
>> >> * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
>> >> * 12.0.2 (liberty-2) will also contain features
>> >> * 12.0.3 (liberty-3) is focused on priority features (given the current 
>> >> plan)
>> >> * 12.1 is Liberty release is just bug fixes on 12.0.3
>> >> * 13.0.0.dev1 would be the first commit to open M
>> >
>> > The current thinking on the release management team would be to do
>> > something like this for projects that are still doing milestone-based
>> > development:
>> >
>> > * 12.0.0b1 (liberty-1)
>> > * 12.0.0b2 (liberty-2)
>> > * 12.0.0b3 (liberty-3)
>> > * 12.0.0rc1 (RC1)
>> > * 12.0.0 is Liberty release
>> >
>> > I think assuming people can tell 12.0.1 is an alpha and 12.1 is a
>> > release that is just bug fixes over 12.0.3 is a bit crazy...
>>
>> We go to great lengths to ensure folks can upgrade from b1 -> b3 and
>> b2 -> release. I am really looking for a way to advertise that, in case
>> it's useful.
>>
>> ... But it could/will be missing aligned docs and translations. So
>> maybe it's not different enough from a beta... needs more thought.
>>
>> > The alternative would be to go full intermediary releases and do:
>> >
>> > * 11.1.0
>> > * 11.2.0
>> > * 11.2.1
>> > * 11.3.0
>> > * 11.3.1 (oh! that happens to also be the "liberty" release!)
>> > * 11.4.0
>> >
>> > I don't think we can maintain an middle ground.
>>
>> I think that could still work.
>>
>> But I was attempting to skip the exception of creating 11.2.1 just
>> because 11.2.0.dev42 fixes a critical bug present in 11.2.0. You would
>> have to wait for the next (time-bound) release to get the extra bug
>> fixes and features.
>
> If we don't assume stable branches for every tag, tags are pretty
> cheap in terms of maintenance. That's the appeal of using intermediate
> semver-based releases -- fixes get rolled out in the next release,
> whenever we want.

True, and I like that.

I think not having a stable branch has upgrade implications. People on
one stable branch probably expect a smooth upgrade to the next one. So
the stable branches define the N and N+1 in our upgrade story, I
think.

> I support moving nova to intermediate release, but not this cycle.

+1

My main motivation here is actually making it clear how useful a
milestone release can be to get access to a feature you really, really
need much more quickly.

It's a shame it's called a "beta", because I think the milestones are
more useful (in production) than that. On the other hand, they are also
worse than many "betas" you get (in terms of completed translations
and docs, etc.). So maybe that's still the best label for the
milestones.

> We have a couple of smaller projects experimenting with it this
> cycle, and I think it would be a good idea for the nova team to
> wait until M to start that transition.  That will give us time to
> figure out how to make that work well for applications (we're already
> doing it for libraries, and Swift, so I don't expect a *lot* of
> trouble, but still).

Agreed. I am looking closely at how it works for ironic.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread John Garbutt
On 3 June 2015 at 15:37, Daniel P. Berrange  wrote:
> On Wed, Jun 03, 2015 at 10:26:03AM -0400, Doug Hellmann wrote:
>> Excerpts from Daniel P. Berrange's message of 2015-06-03 14:28:01 +0100:
>> > On Wed, Jun 03, 2015 at 03:09:28PM +0200, Thierry Carrez wrote:
>> > > John Garbutt wrote:
>> > > > Given we are thinking Liberty is moving to semantic versioning, maybe
>> > > > it could look like this:
>> > > > * 12.0.1 (liberty-1) will have some features (hopefully), and will be 
>> > > > a tag
>> > > > * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
>> > > > * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
>> > > > * 12.0.2 (liberty-2) will also contain features
>> > > > * 12.0.3 (liberty-3) is focused on priority features (given the 
>> > > > current plan)
>> > > > * 12.1 is Liberty release is just bug fixes on 12.0.3
>> > > > * 13.0.0.dev1 would be the first commit to open M
>> > >
>> > > The current thinking on the release management team would be to do
>> > > something like this for projects that are still doing milestone-based
>> > > development:
>> > >
>> > > * 12.0.0b1 (liberty-1)
>> > > * 12.0.0b2 (liberty-2)
>> > > * 12.0.0b3 (liberty-3)
>> > > * 12.0.0rc1 (RC1)
>> > > * 12.0.0 is Liberty release
>> >
>> > This kind of numbering is something I'd really like us to get away from
>> > in Nova, as by including beta/alpha nomenclature, it is really telling
>> > users that these releases are not to be deployed outside the lab.
>> >
>> > > I think assuming people can tell 12.0.1 is an alpha and 12.1 is a
>> > > release that is just bug fixes over 12.0.3 is a bit crazy...
>> > >
>> > > The alternative would be to go full intermediary releases and do:
>> > >
>> > > * 11.1.0
>> > > * 11.2.0
>> > > * 11.2.1
>> > > * 11.3.0
>> > > * 11.3.1 (oh! that happens to also be the "liberty" release!)
>> > > * 11.4.0
>> > >
>> > > I don't think we can maintain an middle ground.
>> >
>> > What I think we're saying for Nova is that we're not going to change
>> > the cadence of what we're releasing, i.e. we're still following the
>> > milestone-based development timeline. Instead we're trying to get
>> > across that the milestone releases are nonetheless formal releases
>> > you can deploy and/or base downstream products on.
>> >
>> > Personally I like the idea of every release we do being fully equal
>> > in status, but at least in the short term we'll have limitations
>> > that some of the releases will not be synced with docs & translations
>> > teams, so will not quite be at the same level.
>> >
>> > On IRC John also mentioned that the point at which we bump the
>> > second digit in the semantic version is also the marker buoy at
>> > which we remove deprecated config parameters, and/or merge /
>> > drop database migrations.
>>
>> That's not how I interpret the semver rules. If we consider removing
>> a configuration option a backwards-incompatible change, that means
>> incrementing the major version number (rule 8 from [1]).  The second
>> digit would be incremented when the deprecation is *started* (rule
>> 7).
>
If this doesn't match semver, then don't call it semver versioning.
We should do what's right for the Nova project, rather than try to
fit with an arbitrary set of versioning rules defined elsewhere.

So, I was trying to see how far I could bend SemVer before it broke.
There appears to be consensus that I bent it too far, so that's cool.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread James Bottomley
On Wed, 2015-06-03 at 17:45 +0300, Boris Pavlovic wrote:
> James B.
> 
> One more time.
> Everybody makes mistakes and it's perfectly OK.
> I don't want to punish anybody and my goal is to make system
> that catch most of them (human mistakes) no matter how it is complicated.

I'm not saying never build systems to catch human mistakes; I'm saying it's
a tradeoff: you have to assess what the consequence of the caught
mistake is versus how much bother it is to implement and maintain the system
that would have caught the mistake (and how much annoyance it
causes).  Complexity kills, whether in code or in systems, so I don't
think it's right to say we build the system "no matter how complicated".

In this case, the benefit looks to be small, because the system we have
today already copes with mistakes by cores and the implementation and
maintenance cost in both gerrit code and maintaining the maps looks to
be high.  So, in my book, it's a bad tradeoff.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Thomas Goirand
On 06/03/2015 12:41 AM, James E. Blair wrote:
> Hi,
> 
> This came up at the TC meeting today, and I volunteered to provide an
> update from the discussion.

I've just read the IRC logs. And there's one thing I would like to make
super clear.

We, i.e. the Debian & Ubuntu folks, are very clear on what we want to
achieve. The project has been maturing in our heads for more than two
years. We would like that, ultimately, only a single set of packaging Git
repositories exists. We already worked on *some* convergence during the
last years, but now we want a *full* alignment.

We're not 100% sure what the implementation details will look like for
the core packages (for example, whether to use the Debconf interface for
configuring packages), but it will eventually happen. For all the rest
(i.e. Python module packaging), which represents the biggest part of the
work, we're already converging and there is zero controversy.

Now, the Fedora/RDO/Suse people jumped on the idea of pushing packaging
onto the upstream infra. Great. That's socially tempting. But technically,
I don't really see the point, apart from some of the infra tooling (super
cool if what Paul Belanger does works for both Deb+RPM). Finally,
indeed, this is not totally baked. But let's please not delay the
Debian+Ubuntu upstream Gerrit collaboration part because of it. We would
like to get started, and for the moment, nobody is approving the new
/stackforge/deb-openstack-pkg-tools repository [1] because we're
waiting on the TC decision.

Cheers,

Thomas Goirand (zigo)

[1] https://review.openstack.org/#/c/185164/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Nikola Đipanov
On 06/03/2015 02:43 PM, Boris Pavlovic wrote:
> 
> I don't believe even myself, because I am human and I make mistakes.
> My goal in the PTL position is to make a process that stops "human"
> mistakes before they land in master. In other words, everything should be
> automated and pre- rather than post-checked.
> 

I used to believe exactly this some time ago - but I don't anymore. Lack
of bugs is not what makes good software (tho you don't want too many of
them :) ).

Focusing on bugs and automation to avoid them is misguided, and so is
the idea that code review is there to spot bugs before they land in
tree. Code reviewers should make sure that the abstractions are solid,
the code is modular, readable and maintainable - exactly the stuff
machines (still?) can't do (*).

This was one of the arguments against doing exactly what you propose in
Nova - we want the same (high?) level of reviews in all parts of the
code, and strong familiarity with the whole.

But I think it's failing - Nova is just too big - and there are not
enough skilled people to do the work without a massive scope reduction.

I am not sure how to fix it TBH (tho my gut feeling says we should
loosen not tighten the constraints).

N.

(*) Machines can run automated tests to find bugs, but tests are also
software that needs reviewing, maintaining and testing... so you want to
make sure you spend your finite resources catching the right kind of bugs.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][third-party] CI FC passthrough scripts now available on stackforge

2015-06-03 Thread Asselin, Ramy
For anyone working on 3rd party CI FC drivers:

Patrick East and I have been working on making “FC pass-through” scripts.
The main use case of these scripts is to present the FC HBAs directly inside a 
VM in order to test your FC cinder driver.

Now available in stackforge [1]
Link available in cinder FAQ [2]

It’s a working solution (3 drivers using these scripts, more using its 
precursors), but not the ‘best’ solution.  It’s open source:  so any 
improvements/bug fixes are welcome ☺

Ramy

[1] 
https://git.openstack.org/cgit/stackforge/third-party-ci-tools/tree/provisioning_scripts/fibre_channel

[2] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#FAQ

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Thomas Goirand
On 06/03/2015 04:15 PM, Derek Higgins wrote:
> o Tools to build packages in CI jobs should provide a consistent
> interface regardless of packaging being built

Sure, we can have *some* of the tooling converging. But I don't see
Debian/Ubuntu using anything other than git-buildpackage and sbuild (as
this is what is used everywhere in Debian and Ubuntu), and I don't see
how the RPM folks could be using that either.

>> 3) What are the plans for repositories and their contents?
>>
>> What repos will be created, and what will be in them.  When will new
>> ones be created, and is there any process around that.
> 
> Assuming you mean git repositories ? I think anything under the
> openstack (or stackforge) umbrella

Just to make things more clear...

Here, we're talking about the /openstack namespace, which is why the TC
is involved. Otherwise, pushing to /stackforge wouldn't require the
blessing of the TC.

> If you meant package repositories I think none is a fine answer for the
> moment but if there is an appetite for them then I think what would
> eventually make most sense are repositories for master branches along
> with supported stable branches. This may differ between packaging
> formats and what their teams are prepared to support.

As I wrote earlier, we can't technically avoid having packages stored
in the upstream infra, because of build-dependency chains.

Publishing the resulting packages publicly is another story, which we
may decide later on. To me, this really is a tiny implementation
detail, as what counts, anyway, is having stable packages available in
distribution repositories, which means publishing stable package
repositories publicly makes very little sense.

I agree that publishing master/trunk could be a lot of fun though!

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Jeremy Stanley
On 2015-06-03 17:15:43 +0300 (+0300), Boris Pavlovic wrote:
> I can't talk for other projects, so let's talk about Rally specific.
> 
> We have single .git in root for whole project.
> 
> We have 4 subdir that can have own maintainers:
> - rally/deploy
> - rally/verify
> - rally/benchmark
> - rally/plugins
> 
> First 3 subdir are quite different and usually isolated communities.
> Plugins are not so hard to review and mostly developed part.
> 
> If I were able to have cores for specific areas, that would scale up the
> code review process a lot, without any trust, process, social, arch, or
> whatever other changes in the project.

I get that, but if you already have rally/deploy, rally/verify and
rally/benchmark as separate directory trees then why not just git
filter-branch those into new repos and add dedicated review teams to
them? That doesn't involve any "rearchitecting" since they're
already effectively split up and just happen to coexist in one repo
today.
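For illustration, the kind of split being described looks roughly like this
(a sketch only; the clone URL is a placeholder and the subdirectories are the
Rally examples from above; run it against throwaway clones, since
filter-branch rewrites history):

# Sketch: split existing subdirectories into standalone repositories with
# git filter-branch. Clone URL is a placeholder; history gets rewritten,
# so only run this on disposable clones.
import subprocess

SUBDIRS = ['rally/deploy', 'rally/verify', 'rally/benchmark', 'rally/plugins']

for subdir in SUBDIRS:
    clone = subdir.replace('/', '-')
    subprocess.check_call(
        ['git', 'clone', 'https://example.org/rally.git', clone])
    # Keep only the history touching this subdirectory and make it the new
    # repository root.
    subprocess.check_call(
        ['git', 'filter-branch', '--prune-empty',
         '--subdirectory-filter', subdir, '--', '--all'],
        cwd=clone)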
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa] Empty "Build succeeded" when filtering jobs

2015-06-03 Thread James E. Blair
Evgeny Antyshev  writes:

> Some CIs like to narrow their scope to a certain set of files.
> For that, they specify a file mask on a per-job basis. So annoying
> comments appear with only "Build succeeded".
> (an example complaint:
> http://lists.openstack.org/pipermail/openstack-dev/2015-June/065367.html)
>
> Moreover, most of the CIs which don't bother filtering make lots of
> comments on doc/unittest changes, which is also wrong.
> (see https://review.openstack.org/#/c/152006, and most CIs don't run
> unittests)
> What if Zuul would not comment when no real jobs run?
> The only meaningful task that is done is merging the patch,
> but in case of a merge failure there should anyway be a "Merge failed" comment.
>
> In case of no objections, I'll make the corresponding change in Zuul.

Sounds good to me.  In fact, if you specify no jobs for a
project-pipeline in Zuul, it does nothing (which is why we have the noop
jobs).  Arguably, the fact that the change is still enqueued when the job
set reduces to nothing due to filtering is a bug.

I will note that this may complicate efforts to track the performance of
third-party CI systems, especially determining whether they are
reporting on all changes.  I still think you should make the change; the
reporting systems may just need to be a little more sophisticated
(perhaps they should only look at changes where OpenStack's CI system
ran certain jobs).

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread James Bottomley
On Wed, 2015-06-03 at 09:29 +0300, Boris Pavlovic wrote:
> *- Why not just trust people*
> 
> People get tired and make mistakes (very often).
> That's why we have blocking CI system that checks patches,
> That's why we have rule 2 cores / review (sometimes even 3,4,5...)...
> 
> In ideal work Lieutenants model will work out of the box. In real life all
> checks like:
> person X today has permission to do Y operation should be checked
> automatically.
> 
> This is exactly what I am proposing.

This is completely antithetical to the open source model.  You have to
trust people, that's why the project has hierarchies filled with more
trusted people.  Do we trust people never to make mistakes?  Of course
not; everyone's human, that's why there are cross checks.  It's simply
not possible to design a system where all the possible human mistakes
are eliminated by rules (well, it's not possible to imagine: Brave New
World and 1984 try looking at something like this, but it's impossible
to build currently in practice).

So, before we build complex checking systems, the correct question to
ask is: what's the worst that could happen if we didn't?  In this case,
two or more of your lieutenants accidentally approve a patch not in
their area and no-one spots it before it gets into the build.
Presumably, even though it's not supposed to be their areas, they
reviewed the patch and found it OK.  Assuming the build isn't broken,
everything proceeds as normal.  Even if there was some subtle bug in the
code that perhaps some more experienced person would spot, eventually it
gets found and fixed.

You see the point?  This is roughly equivalent to what would happen
today if a core made a mistake in a review ... it's a normal consequence
we expect to handle.  If it happened deliberately then the bad
Lieutenant eventually gets found and ejected (in the same way a bad core
would).  The bottom line is there's no point building a complex
permission system when it wouldn't really improve anything and it would
get in the way of flexibility.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
James B.

One more time.
Everybody makes mistakes and it's perfectly OK.
I don't want to punish anybody, and my goal is to make a system
that catches most of them (human mistakes), no matter how complicated it is.

Best regards,
Boris Pavlovic


On Wed, Jun 3, 2015 at 5:33 PM, James Bottomley <
james.bottom...@hansenpartnership.com> wrote:

> On Wed, 2015-06-03 at 09:29 +0300, Boris Pavlovic wrote:
> > *- Why not just trust people*
> >
> > People get tired and make mistakes (very often).
> > That's why we have blocking CI system that checks patches,
> > That's why we have rule 2 cores / review (sometimes even 3,4,5...)...
> >
> > In ideal work Lieutenants model will work out of the box. In real life
> all
> > checks like:
> > person X today has permission to do Y operation should be checked
> > automatically.
> >
> > This is exactly what I am proposing.
>
> This is completely antithetical to the open source model.  You have to
> trust people, that's why the project has hierarchies filled with more
> trusted people.  Do we trust people never to make mistakes?  Of course
> not; everyone's human, that's why there are cross checks.  It's simply
> not possible to design a system where all the possible human mistakes
> are eliminated by rules (well, it's not possible to imagine: brave new
> world and 1984 try looking at something like this, but it's impossible
> to build currently in practise).
>
> So, before we build complex checking systems, the correct question to
> ask is: what's the worst that could happen if we didn't?  In this case,
> two or more of your lieutenants accidentally approve a patch not in
> their area and no-one spots it before it gets into the build.
> Presumably, even though it's not supposed to be their areas, they
> reviewed the patch and found it OK.  Assuming the build isn't broken,
> everything proceeds as normal.  Even if there was some subtle bug in the
> code that perhaps some more experienced person would spot, eventually it
> gets found and fixed.
>
> You see the point?  This is roughly equivalent to what would happen
> today if a core made a mistake in a review ... it's a normal consequence
> we expect to handle.  If it happened deliberately then the bad
> Lieutenant eventually gets found and ejected (in the same way a bad core
> would).  The bottom line is there's no point building a complex
> permission system when it wouldn't really improve anything and it would
> get in the way of flexibility.
>
> James
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Kyle Mestery
On Wed, Jun 3, 2015 at 9:33 AM, James Bottomley <
james.bottom...@hansenpartnership.com> wrote:

> On Wed, 2015-06-03 at 09:29 +0300, Boris Pavlovic wrote:
> > *- Why not just trust people*
> >
> > People get tired and make mistakes (very often).
> > That's why we have blocking CI system that checks patches,
> > That's why we have rule 2 cores / review (sometimes even 3,4,5...)...
> >
> > In ideal work Lieutenants model will work out of the box. In real life
> all
> > checks like:
> > person X today has permission to do Y operation should be checked
> > automatically.
> >
> > This is exactly what I am proposing.
>
> This is completely antithetical to the open source model.  You have to
> trust people, that's why the project has hierarchies filled with more
> trusted people.  Do we trust people never to make mistakes?  Of course
> not; everyone's human, that's why there are cross checks.  It's simply
> not possible to design a system where all the possible human mistakes
> are eliminated by rules (well, it's not possible to imagine: brave new
> world and 1984 try looking at something like this, but it's impossible
> to build currently in practise).
>
> So, before we build complex checking systems, the correct question to
> ask is: what's the worst that could happen if we didn't?  In this case,
> two or more of your lieutenants accidentally approve a patch not in
> their area and no-one spots it before it gets into the build.
> Presumably, even though it's not supposed to be their areas, they
> reviewed the patch and found it OK.  Assuming the build isn't broken,
> everything proceeds as normal.  Even if there was some subtle bug in the
> code that perhaps some more experienced person would spot, eventually it
> gets found and fixed.
>
> You see the point?  This is roughly equivalent to what would happen
> today if a core made a mistake in a review ... it's a normal consequence
> we expect to handle.  If it happened deliberately then the bad
> Lieutenant eventually gets found and ejected (in the same way a bad core
> would).  The bottom line is there's no point building a complex
> permission system when it wouldn't really improve anything and it would
> get in the way of flexibility.
>
> James
>

I agree with what James is saying here. The entire system is built on trust
and accountability. If either of those things are breached, the system will
need some help to self correct. Building things into the process which slow
it down shouldn't be the goal. Building the trust and accountability at a
human level should be the goal.

Thanks,
Kyle


>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread Daniel P. Berrange
On Wed, Jun 03, 2015 at 10:26:03AM -0400, Doug Hellmann wrote:
> Excerpts from Daniel P. Berrange's message of 2015-06-03 14:28:01 +0100:
> > On Wed, Jun 03, 2015 at 03:09:28PM +0200, Thierry Carrez wrote:
> > > John Garbutt wrote:
> > > > Given we are thinking Liberty is moving to semantic versioning, maybe
> > > > it could look like this:
> > > > * 12.0.1 (liberty-1) will have some features (hopefully), and will be a 
> > > > tag
> > > > * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
> > > > * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
> > > > * 12.0.2 (liberty-2) will also contain features
> > > > * 12.0.3 (liberty-3) is focused on priority features (given the current 
> > > > plan)
> > > > * 12.1 is Liberty release is just bug fixes on 12.0.3
> > > > * 13.0.0.dev1 would be the first commit to open M
> > > 
> > > The current thinking on the release management team would be to do
> > > something like this for projects that are still doing milestone-based
> > > development:
> > > 
> > > * 12.0.0b1 (liberty-1)
> > > * 12.0.0b2 (liberty-2)
> > > * 12.0.0b3 (liberty-3)
> > > * 12.0.0rc1 (RC1)
> > > * 12.0.0 is Liberty release
> > 
> > This kind of numbering is something I'd really like us to get away from
> > in Nova, as by including beta/alpha nomenclature, it is really telling
> > users that these releases are not to be deployed outside the lab.
> > 
> > > I think assuming people can tell 12.0.1 is an alpha and 12.1 is a
> > > release that is just bug fixes over 12.0.3 is a bit crazy...
> > > 
> > > The alternative would be to go full intermediary releases and do:
> > > 
> > > * 11.1.0
> > > * 11.2.0
> > > * 11.2.1
> > > * 11.3.0
> > > * 11.3.1 (oh! that happens to also be the "liberty" release!)
> > > * 11.4.0
> > > 
> > > I don't think we can maintain an middle ground.
> > 
> > What I think we're saying for Nova is that we're not going to change
> > the cadence of what we're releasing, i.e. we're still following the
> > milestone-based development timeline. Instead we're trying to get
> > across that the milestone releases are nonetheless formal releases
> > you can deploy and/or base downstream products on.
> > 
> > Personally I like the idea of every release we do being fully equal
> > in status, but at least in the short term we'll have limitations
> > that some of the releases will not be synced with docs & translations
> > teams, so will not quite be at the same level.
> > 
> > On IRC John also mentioned that the point at which we bump the
> > second digit in the semantic version is also the marker buoy at
> > which we remove deprecated config parameters, and/or merge /
> > drop database migrations.
> 
> That's not how I interpret the semver rules. If we consider removing
> a configuration option a backwards-incompatible change, that means
> incrementing the major version number (rule 8 from [1]).  The second
> digit would be incremented when the deprecation is *started* (rule
> 7).

If this doesn't match semver, then don't call it semver versioning.
We should do what's right for the Nova project, rather than try to
fit with an arbitrary set of versioning rules defined elsewhere.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RequestSpec object and Instance model

2015-06-03 Thread Sylvain Bauza



On 03/06/2015 15:15, Nikola Đipanov wrote:

On 06/02/2015 03:14 PM, Sylvain Bauza wrote:

Hi,

Currently working on implementing the RequestSpec object BP [1], I had
some cool comments on my change here:
https://review.openstack.org/#/c/145528/12/nova/objects/request_spec.py,cm

Since we didn't discuss how to persist that RequestSpec object, I
think the comment is valuable.

For the moment, the only agreed spec for persisting the object that we
have is [2], but there is also a corollary here that means that we would
have to persist more than the current fields:
https://review.openstack.org/#/c/169901/3/specs/liberty/approved/add-buildrequest-obj.rst,cm


So, there are 2 possibilities:
  #1, we only persist the RequestSpec for the sole use of the Scheduler, and
in that case we can leave it as it is - only a few fields from Instance are
stored;
  #2, we consider that RequestSpec can be used for more than just the
Scheduler, and then we need to make sure that we have all the instance
fields.


So these are 2 possibilities if we agree that we need to make progress
on the spec as defined and merged now. What I was complaining about
yesterday is that we don't seem to have done enough high-level
investigation into this stuff before embarking on writing a set of specs
that then, due to their format, obscure the problems we are actually
trying to solve.


Since Nova is big, it's pretty hard to take in the big picture of all the
problems that we have and provide a spec which is fine-grained
enough to make sure it takes the overall problem into account.

At least, I'm seriously considering the RequestSpec object and how it is
persisted as a first attempt to version a Scheduler API and provide an
upgrade path for changing it.




Work around the scheduler touches on a lot of issues that have only
recently been noticed. While I am all for the incremental approach, it
seems silly to completely disregard the issues we already know about. We
should have a high level overview of the problems we know we want to
solve, and then come up with an incremental way of solving them, but not
without keeping an eye on the big picture at all times.


True story. That's why the biggest question I have is "do I actually add
more technical debt to the project by writing RequestSpec, or is it
something that helps reduce the debt?"

So, that's why I'm very open to any comments explaining what kind of
problem the current proposal could raise, and how we could prevent that.
IIUC, your concern is about duplicating some information within the
RequestSpec object which would be persisted. My point is to consider
that if the usage is well defined (i.e. only a contract between conductor
and scheduler, and not reused anywhere else) then that's a reasonable
trade-off.



An ad-hoc list of individual issues that we know about and should be
trying to solve (in no particular order) that all seem related to the
data model design problem we are trying to take a stab at here:
1/ RequestSpec is an unversioned dict even though it's the central piece
of a placement request for the scheduler
Targeted by 
http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/request-spec-object.html

2/ There are scheduler_hints that are needed throughout the lifecycle of
an instance but are never persisted so are lost after boot
Targeted by 
http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/persist-request-spec.html

3/ We have the Migration objects that are used both for resource
tracking for instances being migrated, and as an indication of an
instance being moved, but are not used in all the places we need this
kind of book keeping (live migration, rebuild)
http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/robustify_evacuate.html
is aiming to provide that for evacuations; should we also propose to track
rebuilds and live migrations as well?

4/ Evacuate (an orchestrated rebuild) is especially problematic because
it usually involves failure modes, which are difficult to identify and
handle properly without a consistently used data model.
Could you please describe the problem some more? Is it due to the fact that
we're not persisting the request spec, so it's basically a wet-finger guess
as to whether that's good or not?

5/ Some of the recently added constraints that influence resource
tracking (NUMA, CPU pinning) cannot simply be calculated from the flavor
on the fly when tracking resources, but need to be persisted after a
successful claim as they are dependent on the state of the host at that
very moment (see [1])


I just wonder if a scheduler API called check_destination(RequestSpec, 
destination) could help? I mean, when live-migrating we try to check what 
we can, but since the spec is not persisted and the migration does not 
verify that the destination host can fulfill the request, there are many 
bugs in that area.
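
Just to illustrate the idea (everything below is an assumption, none of these
calls exist today), the conductor-side flow could look roughly like:

def _live_migrate(self, context, instance, destination):
    # Reload the persisted spec instead of rebuilding it by hand.
    request_spec = objects.RequestSpec.get_by_instance_uuid(
        context, instance.uuid)
    # Hypothetical scheduler API: verify the operator-chosen host against
    # the persisted spec; it would raise NoValidHost if the host can't
    # fulfill the request.
    self.scheduler_client.check_destination(context, request_spec, destination)
    self.compute_rpcapi.live_migration(context, instance, destination)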


FYI, I tried to propose that https://review.openstack.org/

Re: [openstack-dev] [packaging] Source RPMs for RDO Kilo?

2015-06-03 Thread Haïkel
2015-06-03 12:59 GMT+02:00 Neil Jerram :
> Many thanks, Haïkel, that looks like the information that my team needed.
>
> Neil
>

Feel free to ask or join us on our downstream irc channel (#rdo @ freenode) if
you have further questions.
We also hold weekly public irc meetings about downstream packaging.

H.

>
>
> On 03/06/15 11:18, Haïkel wrote:
>>
>> Hi Neil,
>>
>> We're already having this discussion on the downstream list.
>> RDO is currently moving package publication for RHEL/CentOS over to the
>> CentOS mirrors. It's just a matter of time to finish the tooling
>> automating the publication process for source packages.
>>
>> In the meantime, you can find the sources in the following places:
>> * our packaging sources live in Fedora dist-git:
>> ie: packaging sources for all services
>> http://pkgs.fedoraproject.org/cgit/openstack
>> * source packages are in Fedora and CBS (RHEL/CentOS) build systems.
>> http://koji.fedoraproject.org/
>> http://cbs.centos.org/koji/
>>
>> Regards,
>> H.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Matthew Thode
On 06/03/2015 06:47 AM, Sean Dague wrote:
> On 06/02/2015 10:40 PM, Matthew Thode wrote:
>> On 06/02/2015 05:41 PM, James E. Blair wrote:
>>> Hi,
>>>
>>> This came up at the TC meeting today, and I volunteered to provide an
>>> update from the discussion.
>>>
>>> In general, I think there is a lot of support for a packaging effort in
>>> OpenStack.  The discussion here has been great; we need to answer a few
>>> questions, get some decisions written down, and make sure we have
>>> agreement.
>>>
>>> Here's what we need to know:
>>>
>>> 1) Is this one or more than one horizontal effort?
>>>
>>> In other words, do we think the idea of having a single packaging
>>> project/team with collaboration among distros is going to work?  Or
>>> should we look at it more like the deployment projects where we have
>>> puppet and chef as top level OpenStack projects?
>>>
>>> Either way is fine, and regardless, we need to answer the next
>>> questions:
>>>
>>> 2) What's the collaboration plan?
>>>
>>> How will different distros collaborate with each other, if at all?  What
>>> things are important to standardize on, what aren't and how do we
>>> support them all.
>>>
>>> 3) What are the plans for repositories and their contents?
>>>
>>> What repos will be created, and what will be in them.  When will new
>>> ones be created, and is there any process around that.
>>>
>>> 4) Who is on the team(s)?
>>>
>>> Who is interested in the overall effort?  Who is signing up for
>>> distro-specific work?  Who will be the initial PTL?
>>>
>>> I think if the discussion here can answer those questions, you should
>>> update the governance repo change with that information, we can get all
>>> the participants to ack that, and the TC will be able to act.
>>>
>>> Thanks again for driving this.
>>>
>>> -Jim
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> Gentoo packages from source client-side; I don't think this affects us.
> 
> Possibly, and that's definitely a legit answer. I think in the deb
> packaging effort the primary desire is that package build files would be
> in Gerrit to encourage collaboration in the wider community.
> 
> So an openstack/ebuild-packaging that was the git tree with the ebuilds
> could be a thing if it was a thing you wanted.
> 
>   -Sean
> 
Yeah, that might be able to work, specifically with the package mapping
proposal (we call oslo.messaging dev-python/oslo-messaging).  As a distro we
are already close to switching to git for our package repos; we are
using CVS now.  Maintenance of the ebuilds primarily consists of
updating the dependencies between versions and maybe updating some tests
(we allow end users to test before installing a package).  We could even
generate a deb or rpm from the ebuild too :P

-- 
-- Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread Doug Hellmann
Excerpts from Daniel P. Berrange's message of 2015-06-03 14:28:01 +0100:
> On Wed, Jun 03, 2015 at 03:09:28PM +0200, Thierry Carrez wrote:
> > John Garbutt wrote:
> > > Given we are thinking Liberty is moving to semantic versioning, maybe
> > > it could look like this:
> > > * 12.0.1 (liberty-1) will have some features (hopefully), and will be a 
> > > tag
> > > * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
> > > * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
> > > * 12.0.2 (liberty-2) will also contain features
> > > * 12.0.3 (liberty-3) is focused on priority features (given the current 
> > > plan)
> > > * 12.1 is Liberty release is just bug fixes on 12.0.3
> > > * 13.0.0.dev1 would be the first commit to open M
> > 
> > The current thinking on the release management team would be to do
> > something like this for projects that are still doing milestone-based
> > development:
> > 
> > * 12.0.0b1 (liberty-1)
> > * 12.0.0b2 (liberty-2)
> > * 12.0.0b3 (liberty-3)
> > * 12.0.0rc1 (RC1)
> > * 12.0.0 is Liberty release
> 
> This kind of numbering is something I'd really like us to get away from
> in Nova, as by including beta/alpha nomenclature, it is really telling
> users that these releases are not to be deployed outside the lab.
> 
> > I think assuming people can tell 12.0.1 is an alpha and 12.1 is a
> > release that is just bug fixes over 12.0.3 is a bit crazy...
> > 
> > The alternative would be to go full intermediary releases and do:
> > 
> > * 11.1.0
> > * 11.2.0
> > * 11.2.1
> > * 11.3.0
> > * 11.3.1 (oh! that happens to also be the "liberty" release!)
> > * 11.4.0
> > 
> > I don't think we can maintain a middle ground.
> 
> What I think we're saying for Nova is that we're not going to change
> the cadence of what we're releasing, i.e. we're still following the
> milestone-based development timeline. Instead we're trying to get
> across that the milestone releases are nonetheless formal releases
> you can deploy and/or base downstream products on.
> 
> Personally I like the idea of every release we do being fully equal
> in status, but at least in the short term we'll have limitations
> that some of the releases will not be synced with docs & translations
> teams, so will not quite be at the same level.
> 
> On IRC John also mentioned that the point at which we bump the
> second digit in the semantic version is also the marker buoy at
> which we remove deprecated config parameters, and/or merge /
> drop database migrations.

That's not how I interpret the semver rules. If we consider removing
a configuration option a backwards-incompatible change, that means
incrementing the major version number (rule 8 from [1]).  The second
digit would be incremented when the deprecation is *started* (rule
7).

Doug

[1] http://docs.openstack.org/developer/pbr/semver.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Thomas Goirand

Hi James B.,

Thanks for this reply.

As you asked for an ACK from all parties, my words will be very much like the
ones of James P. (I've just read his message, and I'm jealous of his
nice native-English wording...:)).

On 06/03/2015 12:41 AM, James E. Blair wrote:
> Hi,
> 
> This came up at the TC meeting today, and I volunteered to provide an
> update from the discussion.

Thanks.

> In general, I think there is a lot of support for a packaging effort in
> OpenStack.  The discussion here has been great; we need to answer a few
> questions, get some decisions written down, and make sure we have
> agreement.
> 
> Here's what we need to know:
> 
> 1) Is this one or more than one horizontal effort?
> 
> In other words, do we think the idea of having a single packaging
> project/team with collaboration among distros is going to work?
>
> Or should we look at it more like the deployment projects where we have
> puppet and chef as top level OpenStack projects?

I don't really know about the puppet project, so I don't know what you
refer to here.

However, talking with James Page (from Canonical, head of their server
team which does the OpenStack packaging), we believe it's best if we had
2 different distinct teams: one for Fedora/SuSe/everything-rpm, and one
for Debian-based distributions.

We could try to work as a single entity (RPM + deb teams), but rpm+yum
and dpkg+apt are 2 distinct worlds which have very few common
attributes. So even if it may socially be nice, it's not the right
technical decision.

At least, during the summit, what we agreed on is that we don't want
Debian/Ubuntu guys having the rights to core-review RPM packaging and
vice-versa. So the core reviewer lists will have to be separated. The
list of repositories will also have to be split, because repositories
must match package names, and we have different naming (and even naming
policies). Plus the ACLs will be better managed on a per-repo basis than
on a per-branch one.

It would also maybe make sense to have separate PTLs too (though we're
open to other views, if it's easier to have a single one for the rest of
the community).

> Either way is fine, and regardless, we need to answer the next
> questions:
> 
> 2) What's the collaboration plan?
> 
> How will different distros collaborate with each other, if at all?  What
> things are important to standardize on, what aren't and how do we
> support them all.

I can answer only for the Debian/Ubuntu part, and I'll let the RPM world
guys reply for themselves.

First, we already worked together between Debian and Ubuntu. All of Juno
in Ubuntu was using Python packages I worked on (in fact, all of
OpenStack but the core packages). We have the intention to at least
merge all of our efforts on everything but the core packages first, then
see on a case-by-case basis how we can merge all packaging for the more
complex packages. Nova and Neutron packages have unfortunately diverged
over the years, so we have to be extra careful on how this is
technically going to happen, without breaking any distro. But I'm
confident we'll succeed.

What sparked this is that, during the summit, Mark Shuttleworth told me
he was supportive of more collaboration between Debian & Ubuntu. Just 5
minutes after his words, I (very fortunately) bumped into James Page,
and we decided to push everything to /stackforge, then try to merge all
of our source packages.

Then, when discussing the matter with others, I heard sentences like "you
should push this into the /openstack namespace to make it more
big-tent-ish", with which I agree.

So there is at least a strong will to maintain OpenStack packages for
Debian and Ubuntu collectively. This means it would be both James Page's
team (for Canonical) and myself (working in Debian). I hope to push
everyone else in Mirantis who works on MOS to also do packaging on
upstream Gerrit, at least for the Debian/Ubuntu part. So that is 3
OpenStack distributions that will work as one.

Also, I have to say that the merging effort between Debian and Ubuntu
has already started.

> 3) What are the plans for repositories and their contents?
> 
> What repos will be created, and what will be in them.  When will new
> ones be created, and is there any process around that.

Currently, OpenStack in Debian represents 237 packages, which are all
listed there:
https://qa.debian.org/developer.php?login=openstack-de...@lists.alioth.debian.org

Some of the more generic ones will be moved to do package maintenance
under the DPMT (Debian Python Module Team). I'm thinking for example
about python-nose-parametrized, python-nose-timer, python-termcolor,
python-termstyle, python-rednose, python-jingo, python-couleur,
python-croniter, python-nosehtmloutput, python-nose-exclude,
python-mockito... This kind of packages.

That's maybe 20 to 30 packages to move there. However, I am waiting for
the DPMT to finish its migration from SVN to Git, as I don't really want
to jump 20 years

Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread Doug Hellmann
Excerpts from John Garbutt's message of 2015-06-03 14:24:40 +0100:
> On 3 June 2015 at 14:09, Thierry Carrez  wrote:
> > John Garbutt wrote:
> >> Given we are thinking Liberty is moving to semantic versioning, maybe
> >> it could look like this:
> >> * 12.0.1 (liberty-1) will have some features (hopefully), and will be a tag
> >> * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
> >> * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
> >> * 12.0.2 (liberty-2) will also contain features
> >> * 12.0.3 (liberty-3) is focused on priority features (given the current 
> >> plan)
> >> * 12.1 is Liberty release is just bug fixes on 12.0.3
> >> * 13.0.0.dev1 would be the first commit to open M
> >
> > The current thinking on the release management team would be to do
> > something like this for projects that are still doing milestone-based
> > development:
> >
> > * 12.0.0b1 (liberty-1)
> > * 12.0.0b2 (liberty-2)
> > * 12.0.0b3 (liberty-3)
> > * 12.0.0rc1 (RC1)
> > * 12.0.0 is Liberty release
> >
> > I think assuming people can tell 12.0.1 is an alpha and 12.1 is a
> > release that is just bug fixes over 12.0.3 is a bit crazy...
> 
> We go to great lengths to ensure folks can upgrade from b1 -> b3 and
> b2 -> release. I am really looking for a way to advertise that, in case
> it's useful.
> 
> ... But it could/will be missing aligned docs and translations. So
> maybe it's not different enough from beta... needs more thought.
> 
> > The alternative would be to go full intermediary releases and do:
> >
> > * 11.1.0
> > * 11.2.0
> > * 11.2.1
> > * 11.3.0
> > * 11.3.1 (oh! that happens to also be the "liberty" release!)
> > * 11.4.0
> >
> > I don't think we can maintain a middle ground.
> 
> I think that could still work.
> 
> But I was attempting to skip the exception of creating 11.2.1 just
> because 11.2.0.dev42 fixes a critical bug present in 11.2.1. You would
> have to wait for the next (time bound) release to get the extra bug
> fixes and features.

If we don't assume stable branches for every tag, tags are pretty
cheap in terms of maintenance. That's the appeal of using intermediate
semver-based releases -- fixes get rolled out in the next release,
whenever we want.

I support moving nova to intermediate release, but not this cycle.
We have a couple of smaller projects experimenting with it this
cycle, and I think it would be a good idea for the nova team to
wait until M to start that transition.  That will give us time to
figure out how to make that work well for applications (we're already
doing it for libraries, and Swift, so I don't expect a *lot* of
trouble, but still).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Updating Our Concept of Resources

2015-06-03 Thread Sylvain Bauza



On 03/06/2015 16:02, Nikola Đipanov wrote:

On 06/03/2015 02:13 PM, John Garbutt wrote:

On 3 June 2015 at 13:53, Ed Leafe  wrote:

On Jun 2, 2015, at 5:58 AM, Alexis Lee  wrote:


If you allocate all the memory of a box to high-mem instances, you may
not be billing for all the CPU and disk which are now unusable. That's
why flavors were introduced, afaik, and it's still a valid need.

So we had a very good discussion at the weekly IRC meeting for the Scheduler, and we 
agreed to follow that up here on the ML. One thing that came up, noted in the quote 
above, is that I gave the impression in my first email that I thought flavors were 
useless. I think I did a better job in the original blog post of explaining that flavors 
are a great way to handle the sane division of a resource like a compute node. The issue 
I have with flavors is that we seem to be locked into the "everything that can be 
requested has to fit into the flavor", and that really doesn't make sense.

Another concern was from the cloud provider's POV, which makes a flavor a convenient way of 
packaging cloud resources for sale. The customer can simply say "give me one of these" to 
specify a complex combination of virtualized resources. That's great, but it means that there has 
to be a flavor for every possible permutation of resources. If you restricted flavors to only 
represent the sane ways of dividing up compute nodes, any other features could be add-ons to the 
request. Something like ordering a pizza: offer the customer a fixed choice of sizes, but then let 
them specify any toppings in whatever combination they want. That's certainly more sane than 
presenting them with a menu with hundreds of pizza "flavors", each representing a 
different size/topping combination.

I feel there is a lot to be said for treating "consumable" resources
very separately to "free" options.

For example grouping the vCPUs into sockets can be "free" in terms of
capacity planning, so is a valid optional add on (assuming you are not
doing some level of pinning to match that).

For things where you are trying to find a specific compute node, that
kind of attribute has clear capacity planning concerns, and is likely
to have a specific "cost" associated with it. So we need to make sure
its clear how that cost concept can be layered on top of the Nova API.
For example "os_type" often changes the cost, and is implemented on
top of flavors using a combination of protected image properties on
glance and the way snapshots inherit image properties.


I totally agree the scheduler doesn't have to know anything about
flavors though. We should push them out to request validation in the
Nova API. This can be considered part of cleaning up the scheduler API.

This idea was also discussed and seemed to get a lot of support. Basically, it means that by the time the request hits 
the scheduler, there is no "flavor" anymore; instead, the scheduler gets a request for so much RAM, so much 
disk, etc., and these amounts have already been validated at the API layer. So a customer requests a flavor just like 
they do now, and the API has the responsibility to verify that the flavor is valid, but then "unpacks" the 
flavor into its components and passes that on to compute. The end result is the same, but there would be no more need 
to store "flavors" anywhere but the front end. This has the added benefit of eliminating the problem with new 
flavors being propagated down to cells, since they would no longer need to have to translate what "flavor X" 
means. Don Dugger volunteered to write up a spec for removing flavors from the scheduler.


+1 for Nova translating the incoming request to a "resource request"
the scheduler understands, given the resources it knows about.

I would look at scoping that to "compute" resources, so its easier to
add "volume" and "network" into that request at a later date.


I also agree with this pretty much completely. I feel that the single
thing that made some of the scheduler discussions drag on for months is
our lack of willingness to bite off the big chunk that is coming up with
a solid API to the scheduler.

Starting from nouns and verbs - it definitely seems like a good idea to
pass in the _requested_ resources to a scheduler that knows about
_available_ resources. [1] seems like an excellent start.

I seem to remember Jay discussing at one point that not all of the
things we want the scheduler to know about make sense to be modelled as
resources (running instances for example) and it made a lot of sense to
me, but it seems like it's the kind of thing that would be the easiest
to figure out once you see the code (I also don't see it mentioned in
[1] but I assume Jay dropped it to keep the scope of that BP manageable).

N.


+1 to that. We have now spent 2 cycles trying to make clean 
interfaces for the scheduler. We identified the relationship between the 
ResourceTracker and the Scheduler as one to be cleaned, and [1] is 
targeting th

Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
Jeremy,


Except that reorganizing files in a repo so that you can have sane
> pattern matches across them for different review subteams is
> _exactly_ this. The question is really one of "do you have a
> separate .git in each of the directory trees for your subteams or
> only one .git in the parent directory?"


I can't speak for other projects, so let's talk specifically about Rally.

We have a single .git in the root for the whole project.

We have 4 subdirs that can have their own maintainers:
- rally/deploy
- rally/verify
- rally/benchmark
- rally/plugins

The first 3 subdirs are quite different and usually isolated communities.
Plugins are not so hard to review and are the most actively developed part.

If I were able to have cores for specific areas, that would scale up the
code review process a lot
without any trust, process, social, arch, or whatever other changes in the project.


Best regards,
Boris Pavlovic


On Wed, Jun 3, 2015 at 5:00 PM, Julien Danjou  wrote:

> On Wed, Jun 03 2015, Boris Pavlovic wrote:
>
> > And I don't understand what serious problem we have.
> > We were not able to do reverts, so we built a CI that doesn't allow us to
> > break master,
> > so we don't need to do reverts. I really don't see any big problems here.
>
> Doing revert does not mean breaking nor unbreaking master. It's just
> about canceling changes. You're not able to break master if you have a
> good test coverage – and I'm sure Rally has.
>
> > I was talking about reverting patches. And I believe the process is
> broken
> > if you need to revert patches. It means that core team is not enough team
> > or CI is not enough good.
>
> Sure, reverting a patch means that a mistake has been made somewhere,
> > *but* the point is that having a few mistakes made and reverted is far
> > less of a problem than freezing an entire project because everyone fears a
> > mistake might be made. Just learn to make mistakes, fix/revert them, and
> > change fast. Don't freeze everyone in terror of something being done. :)
>
> --
> Julien Danjou
> /* Free Software hacker
>http://julien.danjou.info */
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread Ed Leafe
On Jun 3, 2015, at 9:10 AM, Doug Hellmann  wrote:

> These numbers don't match the meaning of semver, though. Semver
> describes clearly why you increment each part of the version number
> [1]. We can't call it semver and then make up our own completely
> different rules.

Heh, I was just about to write something similar, but you beat me to it.


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Targeting icehouse-eol?

2015-06-03 Thread Matt Riedemann
Following on from the thread about no longer doing stable point releases [1], 
at the summit we talked about doing icehouse-eol pretty soon [2].


I scrubbed the open stable/icehouse patches last week and we're down to 
at least one screen of changes now [3].


My thinking was once we've processed that list, i.e. either approved 
what we're going to approve or -2 what we aren't, then we should proceed 
with doing the icehouse-eol tag and deleting the branch.


Is everyone generally in agreement with doing this soon?  If so, I'm 
thinking we target a week from today: the stable maint core team 
scrubs the list of open reviews over the next week, and then we get the 
infra team to tag the branch and close it out.


The only open question I have is whether we need to do an Icehouse point 
release prior to tagging and dropping the branch, but I don't think 
that's happened in the past with branch end of life - the eol tag 
basically serves as the placeholder for the last 'release'.


[1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/065144.html
[2] https://etherpad.openstack.org/p/YVR-relmgt-stable-branch
[3] https://review.openstack.org/#/q/status:open+branch:stable/icehouse,n,z

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2015-06-03 15:09:28 +0200:
> John Garbutt wrote:
> > Given we are thinking Liberty is moving to semantic versioning, maybe
> > it could look like this:
> > * 12.0.1 (liberty-1) will have some features (hopefully), and will be a tag
> > * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
> > * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
> > * 12.0.2 (liberty-2) will also contain features
> > * 12.0.3 (liberty-3) is focused on priority features (given the current 
> > plan)
> > * 12.1 is Liberty release is just bug fixes on 12.0.3
> > * 13.0.0.dev1 would be the first commit to open M
> 
> The current thinking on the release management team would be to do
> something like this for projects that are still doing milestone-based
> development:
> 
> * 12.0.0b1 (liberty-1)
> * 12.0.0b2 (liberty-2)
> * 12.0.0b3 (liberty-3)
> * 12.0.0rc1 (RC1)
> * 12.0.0 is Liberty release
> 
> I think assuming people can tell 12.0.1 is an alpha and 12.1 is a
> release that is just bug fixes over 12.0.3 is a bit crazy...
> 
> The alternative would be to go full intermediary releases and do:
> 
> * 11.1.0
> * 11.2.0
> * 11.2.1
> * 11.3.0
> * 11.3.1 (oh! that happens to also be the "liberty" release!)
> * 11.4.0
> 
> I don't think we can maintain a middle ground.

In the other thread on versioning we talked about using major version
bumps at our release boundaries as a way to signal major intermediate
upgrade points (to go from 12 to 14 you have to first go from 12
to 13 and then 13 to 14).  It's not minimal semver, but it may make
sense to do that anyway if we have those sorts of upgrade requirements.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TC] [All] [searchlight] Proposal for Project Searchlight

2015-06-03 Thread Tripp, Travis S
Hello TC members and fellow stackers!

We have just submitted a review for project Searchlight to the OpenStack
governance projects list [1]. Searchlight is a new project being split out
of Glance based on the Glance Catalog Index Service, which was developed
and released in Kilo [2]. We received community and operator feedback that
this would be very useful for more than just Glance. At the Liberty Summit
it was decided to broaden the scope and make it its own project with a
mission to provide advanced and scalable search across multi-tenant cloud
resources. This was presented and discussed at both Glance and Horizon
fishbowl sessions dedicated to this topic where it was enthusiastically
received.

A narrated screencast of the demo shown at the summit is available at the
project wiki [3].

Thank you,
Travis Tripp
Nikhil Komawar

[1] https://review.openstack.org/188014
[2] 
http://specs.openstack.org/openstack/glance-specs/specs/kilo/catalog-index-
service.html
[3] https://wiki.openstack.org/wiki/Searchlight







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Derek Higgins



On 02/06/15 23:41, James E. Blair wrote:

Hi,

This came up at the TC meeting today, and I volunteered to provide an
update from the discussion.

In general, I think there is a lot of support for a packaging effort in
OpenStack.  The discussion here has been great; we need to answer a few
questions, get some decisions written down, and make sure we have
agreement.

Here's what we need to know:

1) Is this one or more than one horizontal effort?

In other words, do we think the idea of having a single packaging
project/team with collaboration among distros is going to work?  Or
should we look at it more like the deployment projects where we have
puppet and chef as top level OpenStack projects?


As far as packaging goes, I'd imagine the teams will be split into groups 
of people who are interested in specific packaging formats (or perhaps 
distros); these people would be responsible for package updates, reviews, 
etc...


On the specifics of the packaging details, collaboration between these 
groups should be encouraged  but not enforced. I would hope that this 
means we would find the places where packaging details can converge 
while staying within the constraints of distro recommendations.




Either way is fine, and regardless, we need to answer the next
questions:

2) What's the collaboration plan?

How will different distros collaborate with each other, if at all?  What
things are important to standardize on, what aren't and how do we
support them all.


Collaboration between these groups is important in order to keep a few 
things consistent


o package repository naming: we should all agree on a naming scheme for 
the packaging repositories to avoid situations where we have rpm-nova 
and deb-compute


o Tools to build packages in CI jobs should provide a consistent 
interface regardless of packaging being built




3) What are the plans for repositories and their contents?

What repos will be created, and what will be in them.  When will new
ones be created, and is there any process around that.


Assuming you mean git repositories? I think anything under the 
openstack (or stackforge) umbrella is fair game, along with anything in 
the global-requirements file.


If you meant package repositories, I think none is a fine answer for the 
moment, but if there is an appetite for them then I think what would 
eventually make the most sense are repositories for master branches along 
with supported stable branches. This may differ between packaging 
formats and what their teams are prepared to support.




4) Who is on the team(s)?

Who is interested in the overall effort?  Who is signing up for
distro-specific work?  Who will be the initial PTL?


From the RDO point of view, we are already doing the trunk-chasing work 
downstream. If we were to shift this packaging upstream of RDO I would 
imagine we would just switch the gerrit we are submitting to. I don't 
speak for RDO, but of the people I spoke to I didn't hear any resistance 
to this idea.




I think if the discussion here can answer those questions, you should
update the governance repo change with that information, we can get all
the participants to ack that, and the TC will be able to act.


Great and thanks,
Derek.



Thanks again for driving this.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Discuss simulated-execution-mode-murano-engine blueprint

2015-06-03 Thread Ekaterina Chernova
Hi all!

I'd like to discuss initial implementation thoughts about this [1] blueprint,
which we want to implement in Liberty.
This feature is supposed to increase the speed of application development.

Currently, the engine interacts with the API to get the input task and packages.

The items planned for implementation first would enable loading a local task
and a new package, without the API and Rabbit involved.

After that, simple testing machinery will be added to MuranoPL: mock support
and a simple test runner.

So a user can test application methods however they want by creating simple tests.
Deployment parameters, such as heat stack and murano execution plan outputs,
may be set as return values in tests.

Finally, tests may be placed into a murano package for easier package
verification and later modification.

I'm going to write a specification soon. But before that, we need to prepare
a list of the functions that are needed to
implement simple mocking machinery in MuranoPL.

Please, leave your thoughts here or directly in the blueprint.

Regards, Kate.


[1] -
https://blueprints.launchpad.net/murano/+spec/simulated-execution-mode-murano-engine
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread Doug Hellmann
Excerpts from John Garbutt's message of 2015-06-03 14:01:06 +0100:
> Hi,
> 
> (To be clear, this is a proposal to be discussed and not a decision.)
> 
> The version number can help us communicate that:
> * you can consume a milestone release
> ** ... but the docs and translations may not be totally up to date
> * you can consume any commit
> ** ... but there is no formal tracking of bugs and features in that commit
> ** ... but can still live upgrade from the previous release to any
> commit in the current release
> * if you need completed docs and translations, wait for the final
> liberty release
> * we only support upgrade between .x and .x
> ** to ensure we can do live upgrades, but with minimal technical debt over 
> time
> ** 
> http://docs.openstack.org/developer/nova/devref/project_scope.html#upgrade-expectations
> 
> The idea is to keep what we do today, but try and communicate what
> that is a little better. Making it clear you can consume the milestone
> releases, and indeed the master branch, if that's what you want to,
> while being clear about what you lose and/or gain over waiting for
> the final release and/or stable branch.
> 
> Given we are thinking Liberty is moving to semantic versioning, maybe
> it could look like this:
> * 12.0.1 (liberty-1) will have some features (hopefully), and will be a tag
> * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
> * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
> * 12.0.2 (liberty-2) will also contain features
> * 12.0.3 (liberty-3) is focused on priority features (given the current plan)
> * 12.1 is Liberty release is just bug fixes on 12.0.3
> * 13.0.0.dev1 would be the first commit to open M

These numbers don't match the meaning of semver, though. Semver
describes clearly why you increment each part of the version number
[1]. We can't call it semver and then make up our own completely
different rules.

Doug

[1] http://docs.openstack.org/developer/pbr/semver.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [nova] [oslo] [cross-project] Dynamic Policy

2015-06-03 Thread Adam Young

I gave a presentation on Dynamic Policy for Access Control at the Summit.

https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/dynamic-policy-for-access-control

My slides are here:
http://adam.younglogic.com/presentations/dynamic_policy.pp.pdf


My original blog post attempted to lay out the direction:

http://adam.younglogic.com/2014/11/dynamic-policy-in-keystone/

And the Overview spec is here:
https://review.openstack.org/#/c/147651/


This references multiple smaller specs:

A unified policy file:
https://review.openstack.org/134656

Hierarchical Roles:
https://review.openstack.org/125704

Managing the Rules from a database as opposed to flat files:
https://review.openstack.org/184926


Fetching the policy file from the server
https://review.openstack.org/134655

Enforcing the policy via common logic in keystonemiddleware.
https://review.openstack.org/133480


I've been pleased to get such a positive response;  I think most people 
agree that we need to improve the policy management in OpenStack.  This 
is not, by any means, set in stone, and all of this is still subject to 
the same review process that covers all of OpenStack.  The more I 
discuss and design, the more I've learned.


One recent discussion has driven home the fact that our policy can be 
fragile.  We want to make it easy for people to customize policy, but 
only in certain ways.  There are parts that should be managed as part of 
the code review/engineering process, such as determining where the 
project_id exists for matching the scope of a resource. Contrast this 
with a deployer tweaking the role assignment required for a user 
to call that API.
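
To make that split concrete, here is a rough illustration using default-style
rules as oslo.policy reads them (the rule names are just the usual defaults,
not a proposal):

# The "role:admin" piece is the kind of thing a deployer should be free to
# tweak; the "project_id:%(project_id)s" scope check is the fragile piece
# that belongs with code review, since it depends on where the project_id
# actually lives for that resource.
policy = {
    "admin_or_owner": "role:admin or project_id:%(project_id)s",
    "compute:get": "rule:admin_or_owner",
}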


Neutron uses Policy in innovative ways, and I would not want to remove 
that power.


Let's figure out what the real requirements are here, beyond what policy 
does today.  Policy is something about halfway between config and code, 
and figuring out how to manage it properly is the next step.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Updating Our Concept of Resources

2015-06-03 Thread Nikola Đipanov
On 06/03/2015 02:13 PM, John Garbutt wrote:
> On 3 June 2015 at 13:53, Ed Leafe  wrote:
>> On Jun 2, 2015, at 5:58 AM, Alexis Lee  wrote:
>>
>>> If you allocate all the memory of a box to high-mem instances, you may
>>> not be billing for all the CPU and disk which are now unusable. That's
>>> why flavors were introduced, afaik, and it's still a valid need.
>>
>> So we had a very good discussion at the weekly IRC meeting for the 
>> Scheduler, and we agreed to follow that up here on the ML. One thing that 
>> came up, noted in the quote above, is that I gave the impression in my first 
>> email that I thought flavors were useless. I think I did a better job in the 
>> original blog post of explaining that flavors are a great way to handle the 
>> sane division of a resource like a compute node. The issue I have with 
>> flavors is that we seem to be locked into the "everything that can be 
>> requested has to fit into the flavor", and that really doesn't make sense.
>>
>> Another concern was from the cloud provider's POV, which makes a flavor a 
>> convenient way of packaging cloud resources for sale. The customer can 
>> simply say "give me one of these" to specify a complex combination of 
>> virtualized resources. That's great, but it means that there has to be a 
>> flavor for every possible permutation of resources. If you restricted 
>> flavors to only represent the sane ways of dividing up compute nodes, any 
>> other features could be add-ons to the request. Something like ordering a 
>> pizza: offer the customer a fixed choice of sizes, but then let them specify 
>> any toppings in whatever combination they want. That's certainly more sane 
>> than presenting them with a menu with hundreds of pizza "flavors", each 
>> representing a different size/topping combination.
> 
> I feel there is a lot to be said for treating "consumable" resources
> very separately to "free" options.
> 
> For example grouping the vCPUs into sockets can be "free" in terms of
> capacity planning, so is a valid optional add on (assuming you are not
> doing some level of pinning to match that).
> 
> For things where you are trying to find a specific compute node, that
> kind of attribute has clear capacity planning concerns, and is likely
> to have a specific "cost" associated with it. So we need to make sure
> its clear how that cost concept can be layered on top of the Nova API.
> For example "os_type" often changes the cost, and is implemented on
> top of flavors using a combination of protected image properties on
> glance and the way snapshots inherit image properties.
> 
>>> I totally agree the scheduler doesn't have to know anything about
>>> flavors though. We should push them out to request validation in the
>>> Nova API. This can be considered part of cleaning up the scheduler API.
>>
>> This idea was also discussed and seemed to get a lot of support. Basically, 
>> it means that by the time the request hits the scheduler, there is no 
>> "flavor" anymore; instead, the scheduler gets a request for so much RAM, so 
>> much disk, etc., and these amounts have already been validated at the API 
>> layer. So a customer requests a flavor just like they do now, and the API 
>> has the responsibility to verify that the flavor is valid, but then 
>> "unpacks" the flavor into its components and passes that on to compute. The 
>> end result is the same, but there would be no more need to store "flavors" 
>> anywhere but the front end. This has the added benefit of eliminating the 
>> problem with new flavors being propagated down to cells, since they would no 
>> longer need to have to translate what "flavor X" means. Don Dugger 
>> volunteered to write up a spec for removing flavors from the scheduler.
>>
> 
> +1 for Nova translating the incoming request to a "resource request"
> the scheduler understands, given the resources it knows about.
> 
> I would look at scoping that to "compute" resources, so its easier to
> add "volume" and "network" into that request at a later date.
> 

I also agree with this pretty much completely. I feel that the single
thing that made some of the scheduler discussions drag on for months is
our lack of willingness to bite off the big chunk that is coming up with
a solid API to the scheduler.

Starting from nouns and verbs - it definitely seems like a good idea to
pass in the _requested_ resources to a scheduler that knows about
_available_ resources. [1] seems like an excellent start.
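
Purely as an illustration of that translation (the names are invented, this is
not what [1] proposes), the API-layer "unpacking" of a flavor could be as
small as:

def flavor_to_resource_request(flavor, extra_specs=None):
    """Unpack a flavor into the plain resource amounts the scheduler needs."""
    request = {
        'memory_mb': flavor.memory_mb,
        'vcpus': flavor.vcpus,
        'disk_gb': flavor.root_gb + flavor.ephemeral_gb,
    }
    # Optional "toppings" ride along without needing a dedicated flavor
    # for every permutation.
    request.update(extra_specs or {})
    return request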

I seem to remember Jay discussing at one point that not all of the
things we want the scheduler to know about make sense to be modelled as
resources (running instances for example) and it made a lot of sense to
me, but it seems like it's the kind of thing that would be the easiest
to figure out once you see the code (I also don't see it mentioned in
[1] but I assume Jay dropped it to keep the scope of that BP manageable).

N.

[1]
https://review.openstack.org/#/c/184534/1/specs/liberty/approved/resource-objects.rst

Re: [openstack-dev] [openstackclient] Image create-or-update

2015-06-03 Thread Marek Aufart

Hi Steve,

Yes, it makes sense, thanks for the clarification.

An --or-update flag for the image create command looks like a good solution.
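
Roughly what I have in mind (helper names assumed, not the actual
python-openstackclient code):

def create_or_update_image(image_client, name, or_update=False, **kwargs):
    """Create an image; only update an existing one of the same name if asked."""
    if or_update:
        existing = next(
            iter(image_client.images.list(filters={'name': name})), None)
        if existing is not None:
            # keep today's v1 behaviour, but only when explicitly requested
            return image_client.images.update(existing.id, **kwargs)
    return image_client.images.create(name=name, **kwargs)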

Marek

On 2.6.2015 23:34 Steve Martinelli wrote:

I'm thinking that the current approach is probably how we want to keep
things. I can't imagine many other projects being okay with multiple
create calls with the same name.
Though if you're really adamant about including that support, we could
include a new flag (--or-update) that performs the update if it's found,
otherwise it continues with a new create.

Does that make sense?

Thanks,

Steve Martinelli
OpenStack Keystone Core

Marek Aufart  wrote on 06/02/2015 10:55:20 AM:

 > From: Marek Aufart 
 > To: openstack-dev@lists.openstack.org
 > Date: 06/02/2015 10:55 AM
 > Subject: [openstack-dev] [openstackclient] Image create-or-update
 >
 > Hi,
 >
 > I have a question related to openstack image create command v1 from
 > python-openstackclient.
 >
 > It behaves like create-or-update (if an image with the *name* specified for
 > create already exists, it is updated). Actually it looks like it is in
 > conflict with glance, which allows creating multiple images with the same
 > name instead of updating one.
 >
 > Is the create-or-update approach still wanted?
 >
 > Related code:
 > https://github.com/openstack/python-openstackclient/blob/master/
 > openstackclient/image/v1/image.py#L247-L269
 >
 > Thanks.
 >
 > --
 > Marek Aufart
 >
 > Email: mauf...@redhat.com
 >
 > IRC: maufart / aufi on freenode
 >
 >
__
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Marek Aufart

Engineer, OpenStack Management UI team, Red Hat

Email: mauf...@redhat.com
Cell: +420 737 366 697
IRC: maufart / aufi on freenode

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Julien Danjou
On Wed, Jun 03 2015, Boris Pavlovic wrote:

> And I don't understand what serious problem we have.
> We were not able to do reverts, so we built a CI that doesn't allow us to
> break master,
> so we don't need to do reverts. I really don't see any big problems here.

Doing revert does not mean breaking nor unbreaking master. It's just
about canceling changes. You're not able to break master if you have a
good test coverage – and I'm sure Rally has.

> I was talking about reverting patches. And I believe the process is broken
> if you need to revert patches. It means that core team is not enough team
> or CI is not enough good.

Sure, reverting a patch means that a mistake has been made somewhere,
*but* the point is that having a few mistakes made and reverted is far
less of a problem than freezing an entire project because everyone fears a
mistake might be made. Just learn to make mistakes, fix/revert them, and
change fast. Don't freeze everyone in terror of something being done. :)

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread James Page

Hi James

On 02/06/15 23:41, James E. Blair wrote:
> This came up at the TC meeting today, and I volunteered to provide
> an update from the discussion.

Thankyou - much appreciated.

> In general, I think there is a lot of support for a packaging
> effort in OpenStack.  The discussion here has been great; we need
> to answer a few questions, get some decisions written down, and
> make sure we have agreement.
> 
> Here's what we need to know:
> 
> 1) Is this one or more than one horizontal effort?
> 
> In other words, do we think the idea of having a single packaging 
> project/team with collaboration among distros is going to work?
> Or should we look at it more like the deployment projects where we
> have puppet and chef as top level OpenStack projects?

After some discussion with Thomas on IRC, I think this is more than
one effort; The skills and motivation for developers reviewing
proposed packaging changes needs to be aligned IMO - so I think it
makes sense to split the packaging teams between:

 Debian/Ubuntu + derivatives
 CentOS/Fedora/RHEL + derivatives

> Either way is fine, and regardless, we need to answer the next 
> questions:
> 
> 2) What's the collaboration plan?
> 
> How will different distros collaborate with each other, if at all?
> What things are important to standardize on, what aren't and how do
> we support them all.

For Debian/Ubuntu, Thomas and I are already working on re-aligning
packaging as much as possible for the Liberty cycle; to start off with
this will be the dependency chain, but we've also agreed to look at
the core openstack packages as well, which is where we have the
greatest delta today.

We will have to come up with some sort of smart way to continue to
manage that delta going forward - we've had quite black and white
opinions in the past as to what should be in the core packages and
what should not.  That said, we want to enable wider collaboration as
well.

By aligning branches for packaging against OpenStack releases, rather
than Debian or Ubuntu releases, I think we can gain the maximum
collaboration between Debian and Ubuntu, irrespective of which
OpenStack version is being shipped by each distro.

> 3) What are the plans for repositories and their contents?
> 
> What repos will be created, and what will be in them.  When will
> new ones be created, and is there any process around that.

I think Thomas has the definitive list of existing git repositories in
Debian that we can use for the majority of repos - but I'd like to
ensure we get the branches setup right on the openstack projects
themselves to represent the simpler Ubuntu base packaging and the more
complex Debian packaging.

Thomas and I can work on the details of that.

> 4) Who is on the team(s)?
> 
> Who is interested in the overall effort?  Who is signing up for 
> distro-specific work?  Who will be the initial PTL?

I'd like the members of my team who work on OpenStack packaging at
Canonical to be part of the Debian/Ubuntu development team; that would
include myself, Chuck Short and Corey Bryant.

As Thomas is already taking a lead and has control of the majority of
repository sources, I'd be happy for him to be the initial PTL;
however I would like to see the PTL role switch each cycle so as to
not overburden one individual, and to make sure that the team enjoys
the diversity of technical leadership and objectives of each distro.

> I think if the discussion here can answer those questions, you
> should update the governance repo change with that information, we
> can get all the participants to ack that, and the TC will be
> able to act.

Regards

James

-- 
James Page
Ubuntu and Debian Developer
james.p...@ubuntu.com
jamesp...@debian.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][reseller] New way to get a project scoped token by name

2015-06-03 Thread Henrique Truta
Hi David,

You mean creating some kind of "delimiter" attribute in the domain entity?
That seems like a good idea, although it does not solve the problem
Morgan mentioned, which is the global hierarchy delimiter.

Henrique

On Wed, Jun 3, 2015 at 04:21, David Chadwick wrote:

>
>
> On 02/06/2015 23:34, Morgan Fainberg wrote:
> > Hi Henrique,
> >
> > I don't think we need to specifically call out that we want a domain, we
> > should always reference the namespace as we do today. Basically, if we
> > ask for a project name we need to also provide it's namespace (your
> > option #1). This clearly lines up with how we handle projects in domains
> > today.
> >
> > I would, however, focus on how to represent the namespace in a single
> > (usable) string. We've been delaying the work on this for a while since
> > we have historically not provided a clear way to delimit the hierarchy.
> > If we solve the issue with "what is the delimiter" between domain,
> > project, and subdomain/subproject, we end up solving the usability
>
> why not allow the top level domain/project to define the delimiter for
> its tree, and to carry the delimiter in the JSON as a new parameter.
> That provides full flexibility for all languages and locales
>
> David
>
> > issues with proposal #1, and not breaking the current behavior you'd
> > expect with implementing option #2 (which at face value feels to be API
> > incompatible/break of current behavior).
> >
> > Cheers,
> > --Morgan
> >
> > On Tue, Jun 2, 2015 at 7:43 AM, Henrique Truta wrote:
> >
> > Hi folks,
> >
> >
> > In Reseller[1], we’ll have the domains concept merged into projects,
> > which means that we will have projects that will behave as domains.
> > Therefore, it will be possible to have two projects with the same
> > name in a hierarchy, one being a domain and another being a regular
> > project. For instance, the following hierarchy will be valid:
> >
> > A - is_domain project, with domain A
> >
> > |
> >
> > B - project
> >
> > |
> >
> > A - project with domain A
> >
> >
> > That hierarchy faces a problem when a user requests a project scoped
> > token by name, since she’ll pass “domain = ‘A’” and project.name = “A”.
> > Currently, we have no way to
> > distinguish which project we are referring to. We have two proposals
> > for this.
> >
> >
> >  1.
> >
> > Specify the whole hierarchy in the token request body, which
> > means that when requesting a token for the child project for
> > that hierarchy, we’ll have in the scope field something like:
> >
> > "project": {
> >"domain": {
> >"name": "A"
> >},
> >"name": [“A”', “B”, “A”]
> >}
> >
> >
> > If the project name is unique inside the domain (project “B”, for
> > example), the hierarchy is optional.
> >
> >
> >  2.
> >
> > When a conflict happens, always provide a token to the child
> > project. That means that, in case we have a name clashing as
> > described, it will only be possible to get a project scoped
> > token to the is_domain project through its id.
> >
> >
> >
> > The former will give us more clarity and won’t create any more
> > restrictions than we already have. As a con, we currently are not
> > able to get the names of projects in the hierarchy above a given
> > project. Although the latter seems to hurt fewer people, it has the
> > disadvantage of creating another set of constraints that might
> > complicate the UX in the future.
> >
> >
> > What do you think about that? We want to hear your opinion, so we
> > can discuss it at today’s Keystone Meeting.
> >
> >
> > [1]
> >
> https://github.com/openstack/keystone-specs/blob/master/specs/liberty/reseller.rst
> >

Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Jeremy Stanley
On 2015-06-03 09:29:38 +0300 (+0300), Boris Pavlovic wrote:
> I will try to summarize all questions and reply on them:
> 
> *- Why not splitting repo/plugins?*
> 
>   I don't want to make "architectural" decisions based on "social" or
>   "not enough good tool for review" issues.
[...]

Except that reorganizing files in a repo so that you can have sane
pattern matches across them for different review subteams is
_exactly_ this. The question is really one of "do you have a
separate .git in each of the directory trees for your subteams or
only one .git in the parent directory?"
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Functional tests coverage

2015-06-03 Thread ZZelle
Hi Serge,

... tox -e cover is not really efficient for functional tests ...

You can start with dhcp, as there is already a base (abandoned change[1]
from Marios).


Regards,

Cedric/ZZelle


[1] https://review.openstack.org/136834

On Wed, Jun 3, 2015 at 3:21 PM, Andreas Jaeger  wrote:

> On 06/03/2015 03:13 PM, Sergey Belous wrote:
>
>> Hi All,
>>
>> I want to write the functional tests for Neutron. But the first I want
>> to know the current coverage. How to measure test coverage of code?
>> Where to look and what to start?
>>
>
> "tox -e cover" should run the coverage tests of neutron,
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
Julien,

If I were in your shoes I would pick words more carefully.

When you are saying:

> Reverting patches is unacceptable for Rally project.
> Then you have a more serious problem than the rest of OpenStack.


"you" means Rally community which is quite large.
http://stackalytics.com/?release=liberty&metric=commits&project_type=openstack&module=rally


And I don't understand "what" so serious problem we have.
We were not able to do reverts so  we build CI that doesn't allow us to
break master
 so we don't need to do reverts. I really don't see here any big problems.

> This means that we merged bug and this is epic fail of PTL of project.
> Your code is already full of bugs and misfeatures, like the rest of the
> software. That's life.


I was talking about reverting patches. And I believe the process is broken
if you need to revert patches. It means that the core team is not strong
enough or the CI is not good enough.


If you're having trust issues, good luck maintaining any large-scale
> successful (open source) project. This is terrible management and leads
> to micro-managing tasks and people, which has never build something
> awesome.


I don't even trust myself, because I am human and I make mistakes.
My goal in the PTL position is to build a process that stops "human"
mistakes before they land in master. In other words, everything should be
automated and checked before, not after, the merge.

Best regards,
Boris Pavlovic

On Wed, Jun 3, 2015 at 4:00 PM, Thierry Carrez 
wrote:

> So yeah, that's precisely what we discussed at the cross-project
> workshop about In-team scaling in Vancouver (led by Kyle and myself).
> For those not present, I invite you to read the notes:
>
> https://etherpad.openstack.org/p/liberty-cross-project-in-team-scaling
>
> The conclusion was to explore splitting review areas and building trust
> relationships. Those could happen:
>
> - along architectural lines (repo splits)
> - along areas of expertise with implicit trust to not review anything else
>
> ... which is precisely what you seem to oppose.
>
> Boris Pavlovic wrote:
> > *- Why not splitting repo/plugins?*
> >
> >   I don't want to make "architectural" decisions based on "social" or
> >   "not enough good tool for review" issues.
> >
> >   If we take a look at OpenStack that was splited many times: Glance,
> > Cinder, ...
> >   we will see that there is a log of code duplication that can't be
> > removed even after
> >   two or even more years of oslo effort. As well it produce such issues
> > like syncing
> >   requirements, requirements in such large bash script like devstack,
> >   there is not std installator, it's quite hard to manage and test it
> > and so on..
> >
> >   That's why I don't think that splitting repo is good "architecture"
> > decision - it makes
> >simple things complicated...
>
> I know we disagree on that one, but I don't think monolithic means
> "simpler". Having smaller parts that have a simpler role and explicit
> contracts to communicate with other pieces is IMHO better and easier to
> maintain.
>
> We shouldn't split repositories when it only results in code
> duplication. But whenever we can isolate something that could have a
> more dedicated maintenance team, I think that's worth exploring as a
> solution to the review scaling issue.
>
> > *- Why not just trust people*
> >
> > People get tired and make mistakes (very often).
> > That's why we have blocking CI system that checks patches,
> > That's why we have rule 2 cores / review (sometimes even 3,4,5...)...
>
> It's not because "we don't trust people" that we have the 2-core rule.
> Core reviewers check the desirability and quality of implementation. By
> default we consider that if 2 of those agree that a change is sane, it
> probably is. The CI system checks something else, and that is that you
> don't break everyone or introduce a regression. So you shouldn't be able
> to "introduce a bug" that would be so serious that a simple revert would
> still be embarrassing. If you can, then you should work on your tests.
>
> I think it's totally fine to give people the ability to +2/approve
> generally, together with limits on where they are supposed to use that
> power. They will be more careful as to what they approve this way. For
> corner cases you can revert.
>
> As an example, Ubuntu development has worked on that trust model for
> ages. Once you are a developer, you may commit changes to any package in
> the distro. But you're supposed to use that power wisely. You end up
> staying away from risky packages and everything you don't feel safe to
> approve.
>
> If you can't trust your core reviewers to not approve things that are
> outside their comfort zone, I'd argue they should not be core reviewers
> in the first place.
>
> --
> Thierry Carrez (ttx)
>

[openstack-dev] [Manila] Changing DB regarding IDs for future migration/replication/AZ support

2015-06-03 Thread Rodrigo Barbieri
Hello guys,

I would like to bring everyone up to speed on this topic, since we have a
weekly meeting tomorrow and I would like to discuss this further, either
here or at the meeting, because it is a prerequisite for future features
planned for Liberty.

We had a discussion on IRC last week about possible improvements to Share
Migration concerning the IDs and additional temporary DB row. So far, our
conclusion has been that we prefer to have the additional DB row, but we
must deal with the fact that the current architecture does not expect a Share
to have two separate IDs, the "API ID" and the "Driver ID". We have come up
with several ways to improve this, and we would like to continue the
discussion and decide how we can better improve it thinking about the
future features such as Replication and AZ.

Current scenario (as of prototype):
- Migration creates a totally new share in the destination backend, copies the
data, copies the new DB values (such as the destination export location) to the
original DB entry, and then deletes the new DB entry and the source physical share.
The result is the original DB entry with the new DB values (such as
destination export location). In this prototype, the export location is
being used as "Driver ID", because it is derived from the "API ID". After
migration, the migrated Share has "API ID" X and export location Y, because
Y was derived from the temporary DB row created for the destination share.

Proposal 1: Use Private Driver Storage to store "Driver ID". This will
require all drivers to follow the guideline as implemented in the generic
driver, which manages the volume ID ("Driver ID" for this driver) separate
from the "API ID".

Proposal 2: Use additional DB column so we have separate IDs in each
column. This will require less effort from drivers, because this column
value can be transferred from the temporary DB row to the original DB
entry, similar to what is done with the export location column in the
prototype. Drivers can manage the value in this column if they want, but if
they do not, we can derive from the API ID if we continue to use the
approach currently implemented for Share Migration, and keep in mind that
for replication or other features, we need to fill this field with a value
as if we are creating a new share. This approach also has the disadvantage
of being confusing for debugging and requiring more changes in Manila Core
code, but at least this is handled by Manila Core code instead of Driver
code.

Additionally, proposal 1 can be mixed with proposal 2 if the Manila Core
code attempts to store the "Driver ID" value in Private Share Data instead
of a column; however, we argued that Manila Core should not touch Private
Share Data, and we have not come to a conclusion on this.

Proposal 3: Create new table "instances" that will be linked to the "API
ID", so a share can have several instances, which have their own ID, and
only one is considered "Active". This approach sounds very interesting for
future features, the admin can find the ID for which instances are in the
backend through a "manila share-instances-show " command. There
has been a lot of discussion regarding how we use the Instance ID: whether
we provide it directly to drivers as if it were the API ID, or include it in
a field in the Share object so the driver can continue to use the API ID and
read the Instance ID if it wants (which makes it similar to proposal 1).
It was stated that for replication, drivers need to see the instance IDs,
so providing the Instance ID as if it was the API ID would not make much
sense here. This approach will also require a lot of changes on Manila Core
code, and depending on what we decide to do with the Instance ID regarding
drivers, may require no changes or minimal changes to drivers.
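
As a purely illustrative sketch of proposal 3 (the table and column names here
are assumptions, not an agreed schema), the extra table could look roughly like
this:

    # sketch only: the share keeps its user-facing "API ID", while every
    # backend copy gets its own instance row with its own ID
    from sqlalchemy import Boolean, Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class ShareInstance(Base):
        __tablename__ = 'share_instances'

        id = Column(String(36), primary_key=True)               # per-instance "Driver ID"
        share_id = Column(String(36), ForeignKey('shares.id'))  # stable "API ID"
        host = Column(String(255))
        export_location = Column(String(255))
        is_active = Column(Boolean, default=False)              # only one active instance

Migration could then just add a second row for the destination, copy the data,
and flip which row is active.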

Proposal 4: Allow drivers to change the "API ID". The
advantages/disadvantages of this proposal are not very clear to me. It would
fix part of the Share Migration problem (not sure if replication would need
to do the same), but I see it as breaking the concept that we are migrating
a share: it becomes cloning a share and erasing the original. We do not know
how it would impact users, and it would certainly be much less transparent.

I think that from here we can proceed to express our concerns and the
advantages or disadvantages of each approach for other features as well
(unfortunately I am familiar with migration only), and develop each proposal
further with diagrams if needed, so we can decide which one is best for our
future features.


-- 
Rodrigo Barbieri
Computer Scientist
Federal University of São Carlos
(11) 96889 3412
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread Daniel P. Berrange
On Wed, Jun 03, 2015 at 03:09:28PM +0200, Thierry Carrez wrote:
> John Garbutt wrote:
> > Given we are thinking Liberty is moving to semantic versioning, maybe
> > it could look like this:
> > * 12.0.1 (liberty-1) will have some features (hopefully), and will be a tag
> > * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
> > * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
> > * 12.0.2 (liberty-2) will also contain features
> > * 12.0.3 (liberty-3) is focused on priority features (given the current 
> > plan)
> > * 12.1 is Liberty release is just bug fixes on 12.0.3
> > * 13.0.0.dev1 would be the first commit to open M
> 
> The current thinking on the release management team would be to do
> something like this for projects that are still doing milestone-based
> development:
> 
> * 12.0.0b1 (liberty-1)
> * 12.0.0b2 (liberty-2)
> * 12.0.0b3 (liberty-3)
> * 12.0.0rc1 (RC1)
> * 12.0.0 is Liberty release

This kind of numbering is something I'd really like us to get away from
in Nova, as by including beta/alpha nomenclature, it is really telling
users that these releases are not to be deployed outside the lab.

> I think assuming people can tell 12.0.1 is an alpha and 12.1 is a
> release that is just bug fixes over 12.0.3 is a bit crazy...
> 
> The alternative would be to go full intermediary releases and do:
> 
> * 11.1.0
> * 11.2.0
> * 11.2.1
> * 11.3.0
> * 11.3.1 (oh! that happens to also be the "liberty" release!)
> * 11.4.0
> 
> I don't think we can maintain an middle ground.

What I think we're saying for Nova is that we're not going to change
the cadence of what we're releasing, i.e. we're still following the
milestone-based development timeline. Instead we're trying to get
across that the milestone releases are nonetheless formal releases
you can deploy and/or base downstream products on.

Personally I like the idea of every release we do being fully equal
in status, but at least in the short term we'll have limitations
that some of the releases will not be synced with docs & translations
teams, so will not quite be at the same level.

On IRC John also mentioned that the point at which we bump the
second digit in the semantic version is also the marker buoy at
which we remove deprecated config parameters, and/or merge /
drop database migrations.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread Daniel P. Berrange
On Wed, Jun 03, 2015 at 02:01:06PM +0100, John Garbutt wrote:
> Hi,
> 
> (To be clear, this is a proposal to be discussed and not a decision.)
> 
> The version number can help us communicate that:
> * you can consume a milestone release
> ** ... but the docs and translations may not be totally up to date
> * you can consume any commit
> ** ... but there is no formal tracking of bugs and features in that commit
> ** ... but can still live upgrade from the previous release to any
> commit in the current release
> * if you need completed docs and translations, wait for the final
> liberty release
> * we only support upgrade between .x and .x
> ** to ensure we can do live upgrades, but with minimal technical debt over 
> time
> ** 
> http://docs.openstack.org/developer/nova/devref/project_scope.html#upgrade-expectations
> 
> The idea is to keep what we do today, but try and communicate what
> that is a little better. Making it clear you can consume the milestone
> releases, and indeed the master branch, if thats what you want to,
> while being clear about what you loose and/or gain over waiting for
> the final release and/or stable branch.
> 
> Given we are thinking Liberty is moving to semantic versioning, maybe
> it could look like this:
> * 12.0.1 (liberty-1) will have some features (hopefully), and will be a tag
> * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
> * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
> * 12.0.2 (liberty-2) will also contain features
> * 12.0.3 (liberty-3) is focused on priority features (given the current plan)
> * 12.1 is Liberty release is just bug fixes on 12.0.3
> * 13.0.0.dev1 would be the first commit to open M

FYI for reference, I have previously suggested that we make
intermediate releases on a 2 monthly cadence which, with all
releases being treated in the same way as production ready,
deployable releases

  http://lists.openstack.org/pipermail/openstack-dev/2015-February/057614.html

What John is suggesting doesn't go as far as my proposal, since it
is still describing that the milestone releases have a specific
focus (features vs priority features vs bugs).

None the less, I think this suggestion on versioning would be a
step forward as it does improve the messaging to encourage the
idea that the milestone releases are not just thrown over the
wall for adhoc testing, but are in fact formal releases in their
own right that can be used for production if desired.

Perhaps in the future we'll move further towards the model I
had outlined, but this is a good start in that direction at
least.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread John Garbutt
On 3 June 2015 at 14:09, Thierry Carrez  wrote:
> John Garbutt wrote:
>> Given we are thinking Liberty is moving to semantic versioning, maybe
>> it could look like this:
>> * 12.0.1 (liberty-1) will have some features (hopefully), and will be a tag
>> * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
>> * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
>> * 12.0.2 (liberty-2) will also contain features
>> * 12.0.3 (liberty-3) is focused on priority features (given the current plan)
>> * 12.1 is Liberty release is just bug fixes on 12.0.3
>> * 13.0.0.dev1 would be the first commit to open M
>
> The current thinking on the release management team would be to do
> something like this for projects that are still doing milestone-based
> development:
>
> * 12.0.0b1 (liberty-1)
> * 12.0.0b2 (liberty-2)
> * 12.0.0b3 (liberty-3)
> * 12.0.0rc1 (RC1)
> * 12.0.0 is Liberty release
>
> I think assuming people can tell 12.0.1 is an alpha and 12.1 is a
> release that is just bug fixes over 12.0.3 is a bit crazy...

We go to great lengths to ensure folks can upgrade from b1 -> b3 and
b2 -> release. I am really looking for a way to advertise that, in case
it's useful.

... But it could/will be missing aligned docs and translations. So
maybe it's not different enough from a beta... needs more thought.

> The alternative would be to go full intermediary releases and do:
>
> * 11.1.0
> * 11.2.0
> * 11.2.1
> * 11.3.0
> * 11.3.1 (oh! that happens to also be the "liberty" release!)
> * 11.4.0
>
> I don't think we can maintain an middle ground.

I think that could still work.

But I was attempting to skip the exception of creating 11.2.1 just
because 11.2.0.dev42 fixes a critical bug present in 11.2.1. You would
have to wait for the next (time bound) release to get the extra bug
fixes and features.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Functional tests coverage

2015-06-03 Thread Andreas Jaeger

On 06/03/2015 03:13 PM, Sergey Belous wrote:

Hi All,

I want to write the functional tests for Neutron. But the first I want
to know the current coverage. How to measure test coverage of code?
Where to look and what to start?


"tox -e cover" should run the coverage tests of neutron,

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][qa] Empty "Build succeeded" when filtering jobs

2015-06-03 Thread Evgeny Antyshev

Some CIs like to narrow their scope to a certain set of files.
For that, they specify a file mask on a per-job basis. When a change touches
none of those files, annoying comments appear that contain only "Build succeeded".
(an example complaint:
http://lists.openstack.org/pipermail/openstack-dev/2015-June/065367.html)

Moreover, most CIs which don't bother filtering make lots of
comments on doc/unittest changes, which is also wrong.
(see https://review.openstack.org/#/c/152006, and most CIs don't run
unittests)
What if Zuul did not comment when no real jobs run?
The only meaningful task that is done is merging the patch,
and in case of a merge failure there would still be a "Merge failed" comment.
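In pseudo-code, the proposed behaviour is simply the following (a sketch of
the intent only, not Zuul's actual reporting code):

    # sketch of the intent only, not Zuul's actual reporting code
    def should_report(jobs_run, merge_failed):
        if merge_failed:
            return True            # a "Merge failed" comment is still useful
        return bool(jobs_run)      # stay silent when no real jobs were run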

If there are no objections, I'll make the corresponding change in Zuul.

--
Best regards,
Evgeny Antyshev,
Parallels PCS6 CI


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] RequestSpec object and Instance model

2015-06-03 Thread Nikola Đipanov
On 06/02/2015 03:14 PM, Sylvain Bauza wrote:
> Hi,
> 
> Currently working on implementing the RequestSpec object BP [1], I had
> some cool comments on my change here :
> https://review.openstack.org/#/c/145528/12/nova/objects/request_spec.py,cm
> 
> Since we didn't discussed on how to persist that RequestSpec object, I
> think the comment is valuable.
> 
> For the moment, the only agreed spec for persisting the object that we
> have is [2] but there is also a corollar here that means that we would
> have to persist more than the current fields
> https://review.openstack.org/#/c/169901/3/specs/liberty/approved/add-buildrequest-obj.rst,cm
> 
> 
> So, there are 2 possibilities :
>  #1, we only persist the RequestSpec for the sole Scheduler and in that
> case, we can leave as it is - only a few fields from Instance are stored
>  #2, we consider that RequestSpec can be used for more than just the
> Scheduler, and then we need to make sure that we will have all the
> instance fields then.
> 

So these are 2 possibilities if we agree that we need to make progress
on the spec as it is defined and merged now. What I was complaining about
yesterday is that we don't seem to have done enough high-level
investigation into this stuff before embarking on writing a set of specs
that then, due to their format, obscure the problems we are actually
trying to solve.

Work around the scheduler touches on a lot of issues that have only
recently been noticed. While I am all for the incremental approach, it
seems silly to completely disregard the issues we already know about. We
should have a high level overview of the problems we know we want to
solve, and then come up with an incremental way of solving them, but not
without keeping an eye on the big picture at all times.

An ad-hoc list of individual issues that we know about and should be
trying to solve (in no particular order) that all seem related to the
data model design problem we are trying to take a stab at here:

1/ RequestSpec is an unversioned dict even though it's the central piece
of a placement request for the scheduler
2/ There are scheduler_hints that are needed throughout the lifecycle of
an instance but are never persisted so are lost after boot
3/ We have the Migration objects that are used both for resource
tracking for instances being migrated, and as an indication of an
instance being moved, but are not used in all the places we need this
kind of book keeping (live migration, rebuild)
4/ Evacuate (an orchestrated rebuild) is especially problematic because
it usually involves failure modes, which are difficult to identify and
handle properly without a consistently used data model.
5/ Some of the recently added constraints that influence resource
tracking (NUMA, CPU pinning) cannot simply be calculated from the flavor
on the fly when tracking resources, but need to be persisted after a
successful claim as they are dependent on the state of the host at that
very moment (see [1])
6/ Related to the previous one - there is data related to the instance
in addition to the flavor that need to follow the '_old' and '_new'
pattern (needs the values related to both source and destination host
persisted during a migration/resize/live migration/)
7/ The issues cells v2 folks are hitting (mentioned above) where they
don't want to have any Instances in the top level cell but still need to
persist stuff.
8/ Issues with having no access to individual instance UUIDs in the
scheduler, but a lot of data access for more complex filtering revolves
around it being present.

Most of the above have individual bugs that I can try to find and link
here too.

[1] https://bugs.launchpad.net/nova/+bug/1417667

The overall theme of all the above is (to paraphrase alaski from IRC)
how to organize the big blob of data that is an instance in all of it's
possible states, in such a way that it makes sense, nothing is missing,
there is as little duplication as possible, and access patterns of
different services that require different bits can work without massive
overhead.

> 
> I'm not strongly opiniated on that, I maybe consider that #2 is probably
> the best option but there is a tie in my mind. Help me figuring out
> what's the best option.
> 

If we want to keep things moving forward on this particular BP - I'd go
with adding the RequestSpec object and make sure the code that uses it
is migrated. I believe that spike alone will leave us with much better
idea about the problem.
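
For anyone reading along who is less familiar with the pattern being
discussed, here is a very rough sketch of a versioned RequestSpec object
(the field list, version and defaults are assumptions for illustration;
the approved spec is the authoritative definition):

    # rough sketch only -- the fields shown here are illustrative assumptions
    from nova.objects import base
    from nova.objects import fields

    class RequestSpec(base.NovaObject):
        VERSION = '1.0'

        fields = {
            'instance_uuid': fields.UUIDField(),
            'num_instances': fields.IntegerField(default=1),
            'project_id': fields.StringField(nullable=True),
        }

The point being that, unlike the current unversioned dict, any change to the
set of fields has to bump VERSION and stay backwards compatible.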

In addition - writing a high level spec/wiki that we can refer back to
in individual BPs and see how they solve it would be massively helpful too.

N.

> -Sylvain
> 
> [1] :
> http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/request-spec-object.html
> 
> [2] :
> http://specs.openstack.org/openstack/nova-specs/specs/liberty/approved/persist-request-spec.html
> 
> 
> 

Re: [openstack-dev] [nova][scheduler] Updating Our Concept of Resources

2015-06-03 Thread John Garbutt
On 3 June 2015 at 13:53, Ed Leafe  wrote:
> On Jun 2, 2015, at 5:58 AM, Alexis Lee  wrote:
>
>> If you allocate all the memory of a box to high-mem instances, you may
>> not be billing for all the CPU and disk which are now unusable. That's
>> why flavors were introduced, afaik, and it's still a valid need.
>
> So we had a very good discussion at the weekly IRC meeting for the Scheduler, 
> and we agreed to follow that up here on the ML. One thing that came up, noted 
> in the quote above, is that I gave the impression in my first email that I 
> thought flavors were useless. I think I did a better job in the original blog 
> post of explaining that flavors are a great way to handle the sane division 
> of a resource like a compute node. The issue I have with flavors is that we 
> seem to be locked into the "everything that can be requested has to fit into 
> the flavor", and that really doesn't make sense.
>
> Another concern was from the cloud provider's POV, which makes a flavor a 
> convenient way of packaging cloud resources for sale. The customer can simply 
> say "give me one of these" to specify a complex combination of virtualized 
> resources. That's great, but it means that there has to be a flavor for every 
> possible permutation of resources. If you restricted flavors to only 
> represent the sane ways of dividing up compute nodes, any other features 
> could be add-ons to the request. Something like ordering a pizza: offer the 
> customer a fixed choice of sizes, but then let them specify any toppings in 
> whatever combination they want. That's certainly more sane than presenting 
> them with a menu with hundreds of pizza "flavors", each representing a 
> different size/topping combination.

I feel there is a lot to be said for treating "consumable" resources
very separately to "free" options.

For example grouping the vCPUs into sockets can be "free" in terms of
capacity planning, so is a valid optional add-on (assuming you are not
doing some level of pinning to match that).

For things where you are trying to find a specific compute node, that
kind of attribute has clear capacity planning concerns, and is likely
to have a specific "cost" associated with it. So we need to make sure
its clear how that cost concept can be layered on top of the Nova API.
For example "os_type" often changes the cost, and is implemented on
top of flavors using a combination of protected image properties on
glance and the way snapshots inherit image properties.

>> I totally agree the scheduler doesn't have to know anything about
>> flavors though. We should push them out to request validation in the
>> Nova API. This can be considered part of cleaning up the scheduler API.
>
> This idea was also discussed and seemed to get a lot of support. Basically, 
> it means that by the time the request hits the scheduler, there is no 
> "flavor" anymore; instead, the scheduler gets a request for so much RAM, so 
> much disk, etc., and these amounts have already been validated at the API 
> layer. So a customer requests a flavor just like they do now, and the API has 
> the responsibility to verify that the flavor is valid, but then "unpacks" the 
> flavor into its components and passes that on to compute. The end result is 
> the same, but there would be no more need to store "flavors" anywhere but the 
> front end. This has the added benefit of eliminating the problem with new 
> flavors being propagated down to cells, since they would no longer need to 
> have to translate what "flavor X" means. Don Dugger volunteered to write up a 
> spec for removing flavors from the scheduler.
>

+1 for Nova translating the incoming request to a "resource request"
the scheduler understands, given the resources it knows about.

I would look at scoping that to "compute" resources, so it's easier to
add "volume" and "network" into that request at a later date.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Functional tests coverage

2015-06-03 Thread Sergey Belous
Hi All,

I want to write functional tests for Neutron. But first I want to
know the current coverage. How do I measure the test coverage of the code?
Where should I look and where should I start?

-- 
Best Regards,
Sergey Belous
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread Thierry Carrez
John Garbutt wrote:
> Given we are thinking Liberty is moving to semantic versioning, maybe
> it could look like this:
> * 12.0.1 (liberty-1) will have some features (hopefully), and will be a tag
> * 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
> * 12.0.2.dev1234 would be the 1234th commit after 12.0.1
> * 12.0.2 (liberty-2) will also contain features
> * 12.0.3 (liberty-3) is focused on priority features (given the current plan)
> * 12.1 is Liberty release is just bug fixes on 12.0.3
> * 13.0.0.dev1 would be the first commit to open M

The current thinking on the release management team would be to do
something like this for projects that are still doing milestone-based
development:

* 12.0.0b1 (liberty-1)
* 12.0.0b2 (liberty-2)
* 12.0.0b3 (liberty-3)
* 12.0.0rc1 (RC1)
* 12.0.0 is Liberty release

I think assuming people can tell 12.0.1 is an alpha and 12.1 is a
release that is just bug fixes over 12.0.3 is a bit crazy...

The alternative would be to go full intermediary releases and do:

* 11.1.0
* 11.2.0
* 11.2.1
* 11.3.0
* 11.3.1 (oh! that happens to also be the "liberty" release!)
* 11.4.0

I don't think we can maintain a middle ground.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][security] Enable user password complexity verification

2015-06-03 Thread Lingxian Kong
On Wed, Jun 3, 2015 at 7:49 PM, David Stanek  wrote:
>
> On Wed, Jun 3, 2015 at 6:04 AM liusheng  wrote:
>>
>> Thanks for this topic, also, I think it is similar situation when talking
>> about keystone users, not only the instances's password.
>>
>
> In the past we've talked about having more advanced password management
> features in Keystone (complexity checks, rotation, etc). The end result is
> that we are not adding them because we would like to get away from managing
> users in Keystone that way. Instead we are pushing for users to integrate
> Keystone with more fully featured identity products.
>

Hi, David,

Thanks for the info you provided. Would you please give me some
links (emails or etherpad) about the discussion before?

IMHO, as an identity management project with user-facing APIs, Keystone
should provide such security feature options to users to make it
self-contained; we cannot always rely on another component or
product (Horizon, a 3rd party component, etc.) to do something really
important for Keystone itself.
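
For what it's worth, the kind of check being discussed does not have to be
big; a trivial sketch (the policy values below are assumptions and would of
course have to be configurable):

    import re

    # trivial sketch: minimum length plus lower-case, upper-case and digit
    # character classes; real policy values would come from configuration
    def is_complex_enough(password, min_length=8):
        if len(password) < min_length:
            return False
        return all(re.search(pattern, password)
                   for pattern in (r'[a-z]', r'[A-Z]', r'[0-9]'))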

-- 
Regards!
---
Lingxian Kong

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Release versioning proposal for Liberty

2015-06-03 Thread John Garbutt
Hi,

(To be clear, this is a proposal to be discussed and not a decision.)

The version number can help us communicate that:
* you can consume a milestone release
** ... but the docs and translations may not be totally up to date
* you can consume any commit
** ... but there is no formal tracking of bugs and features in that commit
** ... but can still live upgrade from the previous release to any
commit in the current release
* if you need completed docs and translations, wait for the final
liberty release
* we only support upgrades between adjacent releases (N.x to N+1.x)
** to ensure we can do live upgrades, but with minimal technical debt over time
** 
http://docs.openstack.org/developer/nova/devref/project_scope.html#upgrade-expectations

The idea is to keep what we do today, but try and communicate what
that is a little better. Making it clear you can consume the milestone
releases, and indeed the master branch, if that's what you want to,
while being clear about what you lose and/or gain over waiting for
the final release and/or stable branch.

Given we are thinking Liberty is moving to semantic versioning, maybe
it could look like this:
* 12.0.1 (liberty-1) will have some features (hopefully), and will be a tag
* 12.0.2.dev1 is the first commit after 12.0.1 and does not get a tag
* 12.0.2.dev1234 would be the 1234th commit after 12.0.1
* 12.0.2 (liberty-2) will also contain features
* 12.0.3 (liberty-3) is focused on priority features (given the current plan)
* 12.1 is Liberty release is just bug fixes on 12.0.3
* 13.0.0.dev1 would be the first commit to open M
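
For what it's worth, those strings already order the way you would expect
under PEP 440; a quick illustration (using the packaging library purely as
an example):

    >>> from packaging.version import Version
    >>> sorted(['12.1', '12.0.2', '12.0.1', '12.0.2.dev1234', '13.0.0.dev1'],
    ...        key=Version)
    ['12.0.1', '12.0.2.dev1234', '12.0.2', '12.1', '13.0.0.dev1']

so a dev build sorts after the tag it follows and before the next one.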

Thanks,
John

PS
This is part of my general push to make sure we are better at
communicating what we are up to, and more importantly, WHY are we
doing what we are doing.

I see lots of frustration in new (and existing) contributors, where
you get stuck between three layers of process and are left very
frustrated and confused. I see Liberty as being a time where we can
get more explicit at explaining what's going on and why, so we can have
a well-informed debate on how to move forward. Making it clear why we
are doing what we are doing feels to me like the logical first step.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Thierry Carrez
So yeah, that's precisely what we discussed at the cross-project
workshop about In-team scaling in Vancouver (led by Kyle and myself).
For those not present, I invite you to read the notes:

https://etherpad.openstack.org/p/liberty-cross-project-in-team-scaling

The conclusion was to explore splitting review areas and building trust
relationships. Those could happen:

- along architectural lines (repo splits)
- along areas of expertise with implicit trust to not review anything else

... which is precisely what you seem to oppose.

Boris Pavlovic wrote:
> *- Why not splitting repo/plugins?*
> 
>   I don't want to make "architectural" decisions based on "social" or 
>   "not enough good tool for review" issues. 
> 
>   If we take a look at OpenStack that was splited many times: Glance,
> Cinder, ...
>   we will see that there is a log of code duplication that can't be
> removed even after 
>   two or even more years of oslo effort. As well it produce such issues
> like syncing 
>   requirements, requirements in such large bash script like devstack, 
>   there is not std installator, it's quite hard to manage and test it
> and so on.. 
> 
>   That's why I don't think that splitting repo is good "architecture"
> decision - it makes 
>simple things complicated...

I know we disagree on that one, but I don't think monolithic means
"simpler". Having smaller parts that have a simpler role and explicit
contracts to communicate with other pieces is IMHO better and easier to
maintain.

We shouldn't split repositories when it only results in code
duplication. But whenever we can isolate something that could have a
more dedicated maintenance team, I think that's worth exploring as a
solution to the review scaling issue.

> *- Why not just trust people*
> 
> People get tired and make mistakes (very often). 
> That's why we have blocking CI system that checks patches, 
> That's why we have rule 2 cores / review (sometimes even 3,4,5...)... 

It's not because "we don't trust people" that we have the 2-core rule.
Core reviewers check the desirability and quality of implementation. By
default we consider that if 2 of those agree that a change is sane, it
probably is. The CI system checks something else, and that is that you
don't break everyone or introduce a regression. So you shouldn't be able
to "introduce a bug" that would be so serious that a simple revert would
still be embarrassing. If you can, then you should work on your tests.

I think it's totally fine to give people the ability to +2/approve
generally, together with limits on where they are supposed to use that
power. They will be more careful as to what they approve this way. For
corner cases you can revert.

As an example, Ubuntu development has worked on that trust model for
ages. Once you are a developer, you may commit changes to any package in
the distro. But you're supposed to use that power wisely. You end up
staying away from risky packages and everything you don't feel safe to
approve.

If you can't trust your core reviewers to not approve things that are
outside their comfort zone, I'd argue they should not be core reviewers
in the first place.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all]Big Tent Mode within respective projects

2015-06-03 Thread Zhipeng Huang
THX Jay :)
On Jun 3, 2015 8:41 PM, "Jay Pipes"  wrote:

> On 06/03/2015 08:25 AM, Zhipeng Huang wrote:
>
>> Hi All,
>>
>> As I understand, Neutron by far has the clearest big tent mode via its
>> in-tree/out-of-tree decomposition, thanks to Kyle and other Neutron team
>> members effort.
>>
>> So my question is, is it the same for the other projects? For example,
>> does Nova also have the project-level Big Tent Mode Neutron has?
>>
>
> Hi Zhipeng,
>
> At this time, Neutron is the only project that has done any splitting out
> of driver and advanced services repos. Other projects have discussed doing
> this, but, at least in Nova, that discussion was put on hold for the time
> being. Last I remember, we agreed that we would clean up, stabilize and
> document the virt driver API in Nova before any splitting of driver repos
> would be feasible.
>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] Updating Our Concept of Resources

2015-06-03 Thread Ed Leafe
On Jun 2, 2015, at 5:58 AM, Alexis Lee  wrote:

> If you allocate all the memory of a box to high-mem instances, you may
> not be billing for all the CPU and disk which are now unusable. That's
> why flavors were introduced, afaik, and it's still a valid need.

So we had a very good discussion at the weekly IRC meeting for the Scheduler, 
and we agreed to follow that up here on the ML. One thing that came up, noted 
in the quote above, is that I gave the impression in my first email that I 
thought flavors were useless. I think I did a better job in the original blog 
post of explaining that flavors are a great way to handle the sane division of 
a resource like a compute node. The issue I have with flavors is that we seem 
to be locked into the "everything that can be requested has to fit into the 
flavor", and that really doesn't make sense.

Another concern was from the cloud provider's POV, which makes a flavor a 
convenient way of packaging cloud resources for sale. The customer can simply 
say "give me one of these" to specify a complex combination of virtualized 
resources. That's great, but it means that there has to be a flavor for every 
possible permutation of resources. If you restricted flavors to only represent 
the sane ways of dividing up compute nodes, any other features could be add-ons 
to the request. Something like ordering a pizza: offer the customer a fixed 
choice of sizes, but then let them specify any toppings in whatever combination 
they want. That's certainly more sane than presenting them with a menu with 
hundreds of pizza "flavors", each representing a different size/topping 
combination.

> I totally agree the scheduler doesn't have to know anything about
> flavors though. We should push them out to request validation in the
> Nova API. This can be considered part of cleaning up the scheduler API.

This idea was also discussed and seemed to get a lot of support. Basically, it 
means that by the time the request hits the scheduler, there is no "flavor" 
anymore; instead, the scheduler gets a request for so much RAM, so much disk, 
etc., and these amounts have already been validated at the API layer. So a 
customer requests a flavor just like they do now, and the API has the 
responsibility to verify that the flavor is valid, but then "unpacks" the 
flavor into its components and passes that on to compute. The end result is the 
same, but there would be no more need to store "flavors" anywhere but the front 
end. This has the added benefit of eliminating the problem with new flavors 
being propagated down to cells, since they would no longer need to have to 
translate what "flavor X" means. Don Dugger volunteered to write up a spec for 
removing flavors from the scheduler.
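
To make the "unpacking" a bit more concrete, here is a rough sketch of the
sort of request the API layer could hand to the scheduler instead of a flavor
name (the field names are only illustrative):

    # rough sketch: the API validates the flavor, then forwards only the
    # amounts the scheduler actually needs (field names are illustrative)
    def unpack_flavor(flavor):
        return {
            'memory_mb': flavor['memory_mb'],
            'vcpus': flavor['vcpus'],
            'root_gb': flavor['root_gb'],
            'ephemeral_gb': flavor['ephemeral_gb'],
            'extra_specs': dict(flavor.get('extra_specs', {})),
        }

Any add-ons (the "toppings") would then be merged into that request rather
than minted as new flavors.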

So did I miss anything? :)


-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all]Big Tent Mode within respective projects

2015-06-03 Thread Jay Pipes

On 06/03/2015 08:25 AM, Zhipeng Huang wrote:

Hi All,

As I understand, Neutron by far has the clearest big tent mode via its
in-tree/out-of-tree decomposition, thanks to Kyle and other Neutron team
members effort.

So my question is, is it the same for the other projects? For example,
does Nova also have the project-level Big Tent Mode Neutron has?


Hi Zhipeng,

At this time, Neutron is the only project that has done any splitting 
out of driver and advanced services repos. Other projects have discussed 
doing this, but, at least in Nova, that discussion was put on hold for 
the time being. Last I remember, we agreed that we would clean up, 
stabilize and document the virt driver API in Nova before any splitting 
of driver repos would be feasible.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] openstacklib::db::sync proposal

2015-06-03 Thread Martin Mágr


On 06/02/2015 07:05 PM, Mathieu Gagné wrote:

On 2015-06-02 12:41 PM, Yanis Guenane wrote:

The openstacklib::db::sync[2] is currently only a wrapper around an exec
that does the actual db sync, this allow to make any modification to the
exec into a single place. The main advantage IMO is that a contributor
is provided with the same experience as it is not the case today across
all modules.


The amount of possible change to an exec resource is very limited. [1] I
don't see a value in this change which outweighs the code churn and
review load needed to put it in place. Unless we have real use cases or
outrageously genius feature to add to it, I'm not in favor of this change.

Furthermore, any change to the public interface of
openstacklib::db::sync would require changes across all our modules
anyway to benefit from this latest hypothetical feature. I think we are
starting to nitpick over as little "generic" code we could possibly find
to put in openstacklib.

[1] https://docs.puppetlabs.com/references/latest/type.html#exec



Wearing my consistency hat I must say I like this change. On the other
hand I agree with Mathieu that delegating a single resource from several
modules to a single module is necessary in this case.


Regards,
Martin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Julien Danjou
On Wed, Jun 03 2015, Boris Pavlovic wrote:

> Reverting patches is unacceptable for Rally project.

Then you have a more serious problem than the rest of OpenStack.

> This means that we merged bug and this is epic fail of PTL of project.

Your code is already full of bugs and misfeatures, like the rest of the
software. That's life.

> Let's take a look from other side, Ihar would you share with me
> your  password of your email?
> You can believe me I won't do anything wrong with it.

This is a completely wrong analogy. Email is personal, open source
software is not.

> And "yes" I don't want to trust anybody this is huge amount of work to PTL.
>
> PTL in such case is bottleneck because he need to check that all 100500+
> subcores are reviewing pieces that they can review and passing +2 only on
> patches that they can actually merge.
>
>
> Let's just automate this stuff.
> Like we have automated CI for testing.

If you're having trust issues, good luck maintaining any large-scale
successful (open source) project. This is terrible management and leads
to micro-managing tasks and people, which has never built anything
awesome.

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] New micro-version needed for api bug fix or not?

2015-06-03 Thread John Garbutt
On 3 June 2015 at 12:52, Jay Pipes  wrote:
> On 06/03/2015 02:34 AM, Chris Friesen wrote:
>>
>> On 06/03/2015 12:16 AM, Jens Rosenboom wrote:
>>
>>> I'm wondering though whether the current API behaviour here should be
>>> changed more generally. Is there a plausible reason to silently
>>> discard options that are not allowed for non-admins? For me it would
>>> make more sense to return an error in that case.
>>
>>
>> If we're bumping the microversion anyways, I'd be in favor of having
>> that throw an error rather than silently ignore options.
>>
>> You could maybe even have a helpful "those options require admin
>> privileges" error message that gets displayed to the user.
>
> ++

+1

We must keep adding this sort of validation as we evolve v2.1

This is one of the big changes in the "default behaviour" since
v2.0: validate input and make things discoverable, rather than
silently failing.
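
i.e. something along these lines at the API layer (purely a sketch with
made-up option names, not the actual patch under review):

    import webob.exc

    # purely a sketch: reject admin-only options for non-admin callers
    # instead of silently dropping them (the option names are made up)
    ADMIN_ONLY_OPTIONS = {'host', 'force_nodes'}

    def check_admin_only_options(context, body):
        requested = ADMIN_ONLY_OPTIONS & set(body)
        if requested and not context.is_admin:
            msg = ("Options %s require admin privileges"
                   % ', '.join(sorted(requested)))
            raise webob.exc.HTTPForbidden(explanation=msg)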

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all]Big Tent Mode within respective projects

2015-06-03 Thread Zhipeng Huang
Hi All,

As I understand, Neutron by far has the clearest big tent mode via its
in-tree/out-of-tree decomposition, thanks to Kyle and other Neutron team
members' effort.

So my question is, is it the same for the other projects? For example, does
Nova also have the project-level Big Tent Mode Neutron has?

Many thanks.

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Prooduct Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Ihar Hrachyshka

On 06/03/2015 01:56 PM, Boris Pavlovic wrote:
> Ihar,
> 
> Reverting patches is unacceptable for Rally project. This means
> that we merged bug and this is epic fail of PTL of project.
> 

That's a bar set too high. Though I don't believe the Rally team never
merges any buggy code, ever. Nor do I believe in elves.

> 
> Let's take a look from other side, Ihar would you share with me 
> your  password of your email? You can believe me I won't do
> anything wrong with it.

No, I cannot. There is no trust between us. (Though I *would* share my
password with my wife, if the situation required it.)

> 
> And "yes" I don't want to trust anybody this is huge amount of work
> to PTL.
> 

Well, if you don't trust *anybody*, then it will be hard indeed to
find any contributors for your project. But here, I think it's not the ACLs
that need fixing, but the PTL's attitude.

> PTL in such case is bottleneck because he need to check that all
> 100500+ subcores are reviewing pieces that they can review and
> passing +2 only on patches that they can actually merge.
> 

He wouldn't need to check all those tiny details if he trusted his
own team just a little.

There are plenty of ways to break a project, no matter how limited
your ACLs are. The good news is that people care about their
reputation and others' trust.

There are also plenty of ways to slow down the velocity of your project,
and excessive ACL fences are one of them.

> 
> Let's just automate this stuff. Like we have automated CI for
> testing.
> 
> Best regards, Boris Pavlovic
> 
> On Wed, Jun 3, 2015 at 2:28 PM, Ihar Hrachyshka
> mailto:ihrac...@redhat.com>> wrote:
> 
> On 06/03/2015 08:29 AM, Boris Pavlovic wrote:
>> Guys,
> 
>> I will try to summarize all questions and reply on them:
> 
>> *- Why not splitting repo/plugins?*
> 
>> I don't want to make "architectural" decisions based on "social"
>> or "not enough good tool for review" issues.
> 
>> If we take a look at OpenStack that was splited many times: 
>> Glance, Cinder, ... we will see that there is a log of code 
>> duplication that can't be removed even after two or even more
>> years of oslo effort. As well it produce such issues like
>> syncing requirements, requirements in such large bash script like
>> devstack, there is not std installator, it's quite hard to manage
>> and test it and so on..
> 
>> That's why I don't think that splitting repo is good 
>> "architecture" decision - it makes simple things complicated...
> 
> 
>> *- Why not just trust people*
> 
>> People get tired and make mistakes (very often).
> 
> I wouldn't say they make mistakes *too* often. And if there is a 
> mistake, we always have an option to git-revert and talk to the
> guy about it. I believe no one in the neutron team merges random
> crap, and I would expect the same from other openstack teams.
> 
> It's also quite natural that people who do more reviews extend
> their field of expertise. Do we really want to chase PTLs to
> introduce a change into turing-complete-acl-description each time
> we feel someone is now ready to start reviewing code from yet
> another submodule?
> 
> Or consider a case when a patch touches most, if not all
> submodules, but applies some very trivial changes, like a new
> graduated oslo library being consumed, or python3 adoption changes.
> Do you want to wait for a super-core with enough ACL permissions
> for all those submodules touched to approve it? I would go the
> opposite direction, allowing a single core to merge such a trivial
> patch, without waiting for the second one to waste his time
> reviewing it.
> 
> Core reviewers are not those who are able to put +2 on any patch,
> but those who are able to understand where *not* to put it. I would
> better allow people themselves to decide where they are capable and
> where their expertise ends, and free PTLs from micro-managing the
> cats.
> 
> So in essence: mistakes are cheap; reputation works; people are 
> responsible enough; and more ACL fences are evil.
> 
>> That's why we have blocking CI system that checks patches,
> 
> Those checks are easy to automate. Trust is not easily formalized
> though .
> 
> Ihar
> 
> __

>
> 
OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
>
> 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __

>
> 
OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
Ihar,

Reverting patches is unacceptable for the Rally project.
It means that we merged a bug, and that is an epic failure for the project's PTL.


Let's look at it from the other side: Ihar, would you share the
password of your email with me?
You can believe me, I won't do anything wrong with it.

And "yes", I don't want to just trust anybody; this is a huge amount of work for the PTL.

The PTL in such a case is a bottleneck, because he needs to check that all 100500+
subcores are reviewing pieces that they may review and passing +2 only on
patches that they can actually merge.


Let's just automate this stuff.
Like we have automated CI for testing.

Best regards,
Boris Pavlovic

On Wed, Jun 3, 2015 at 2:28 PM, Ihar Hrachyshka  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On 06/03/2015 08:29 AM, Boris Pavlovic wrote:
> > Guys,
> >
> > I will try to summarize all questions and reply on them:
> >
> > *- Why not splitting repo/plugins?*
> >
> > I don't want to make "architectural" decisions based on "social" or
> >  "not enough good tool for review" issues.
> >
> > If we take a look at OpenStack, which was split many times:
> > Glance, Cinder, ... we will see that there is a lot of code
> > duplication that can't be removed even after two or even more years
> > of oslo effort. It also produces issues like syncing
> > requirements, requirements handled in a large bash script like devstack,
> > no standard installer, and it's quite hard to manage and test
> > it, and so on..
> >
> > That's why I don't think that splitting repo is good
> > "architecture" decision - it makes simple things complicated...
> >
> >
> > *- Why not just trust people*
> >
> > People get tired and make mistakes (very often).
>
> I wouldn't say they make mistakes *too* often. And if there is a
> mistake, we always have an option to git-revert and talk to the guy
> about it. I believe no one in the neutron team merges random crap, and
> I would expect the same from other openstack teams.
>
> It's also quite natural that people who do more reviews extend their
> field of expertise. Do we really want to chase PTLs to introduce a
> change into turing-complete-acl-description each time we feel someone
> is now ready to start reviewing code from yet another submodule?
>
> Or consider a case when a patch touches most, if not all submodules,
> but applies some very trivial changes, like a new graduated oslo
> library being consumed, or python3 adoption changes. Do you want to
> wait for a super-core with enough ACL permissions for all those
> submodules touched to approve it? I would go the opposite direction,
> allowing a single core to merge such a trivial patch, without waiting
> for the second one to waste his time reviewing it.
>
> Core reviewers are not those who are able to put +2 on any patch, but
> those who are able to understand where *not* to put it. I would better
> allow people themselves to decide where they are capable and where
> their expertise ends, and free PTLs from micro-managing the cats.
>
> So in essence: mistakes are cheap; reputation works; people are
> responsible enough; and more ACL fences are evil.
>
> > That's why we have blocking CI system that checks patches,
>
> Those checks are easy to automate. Trust is not easily formalized though
> .
>
> Ihar
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBCAAGBQJVbuS9AAoJEC5aWaUY1u57v2wH/iDLvCrebTtTpocZ8a0BFJ7T
> ssgjM+1F2JiEuieNg7qRqkdW8fZuMuODc7EnWihjDjfP4OMQkelO2711KSPTCSmT
> 76RLMQrSHhyB2FO29qu+4bE5uwUV4uutaDyK8IRZpra+nrSoU8dtL6NuTa/csEeU
> QbmJBB2UMSXdrQmA6HfzoQV9Dmqk5ePbjzg1HXTFy/AtxCb2DLf2IUmeHqwtqg1o
> WoC5ISqoUkRzWx5h1IbV26hhJuGrW6pWjrX50UEFmR/VZwz9T13s7BVE4ReE7mnA
> 2cIGdFnhaJY/VzD4WEzXRfNXV0qetTJG6w30wktKq6y1mG6q8nm+N6KQ4Onq0FQ=
> =DZSF
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] New micro-version needed for api bug fix or not?

2015-06-03 Thread Jay Pipes

On 06/03/2015 02:34 AM, Chris Friesen wrote:

On 06/03/2015 12:16 AM, Jens Rosenboom wrote:


I'm wondering though whether the current API behaviour here should be
changed more generally. Is there a plausible reason to silently
discard options that are not allowed for non-admins? For me it would
make more sense to return an error in that case.


If we're bumping the microversion anyways, I'd be in favor of having
that throw an error rather than silently ignore options.

You could maybe even have a helpful "those options require admin
privileges" error message that gets displayed to the user.


++

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][security] Enable user password complexity verification

2015-06-03 Thread David Stanek
On Wed, Jun 3, 2015 at 6:04 AM liusheng  wrote:

>  Thanks for this topic, also, I think it is similar situation when talking
> about keystone users, not only the instances's password.
>
>
In the past we've talked about having more advanced password management
features in Keystone (complexity checks, rotation, etc). The end result is
that we are not adding them because we would like to get away from managing
users in Keystone that way. Instead we are pushing for users to integrate
Keystone with more fully featured identity products.


>
> 在 2015/6/3 17:48, 郑振宇 写道:
>
> Hi All,
>
>  The current OpenStack does not provide user password complexity
> verification option.
>
>
>   When performing actions such as create instances, evacuate instances,
> rebuild instances, rescue instances and update instances' admin password.
> The complexity of user provided admin password has not been verified. This
> can cause security problems.
>
>  One solution will be adding a configuration option:
> using_complex_admin_password = True, if this option is set in configure
> file by administrator, then Nova will perform password complexity checks,
> the check standards can be set to following the IT industry general
> standard, if the provided admin password is not complex enough, an
> exception will be thrown. If this option is not set in the configuration file, then
> the complexity check will be skipped.
>
>  When the user does not provide an admin password, generate_password() in
> utils.py is used to generate an admin password. Generate_password() now
> uses two password symbol groups: default and easier, the default symbol
> group contains numbers, upper case letters and small case letters. the
> easier symbol group contains only numbers and upper case letters. The
> generated password is not complex enough and can also cause security
> problems.
>
>  One possible solution is to add a new symbol group:
> STRONGER_PASSWORD_SYMBOLS which contains numbers, upper case letters, lower
> case letters and also special characters such as `~!@#$%^&*()-_=+ and
> space. Then adding a new option in configuration file:
> generate_strong_password = True, when this option is set, nova will
> generate password using STRONGER_PASSWORD_SYMBOLS symbol group and with
> longer password length. If this option is not set, the password will be
> generated using the default symbol group and default length.
>
>  AWS allows the selection of password policy to configure which kind of
> password complexity is used in the cloud. Please see:
>
> http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html
>
>  And about the standard of complexity, Microsoft also have an advise
> about it, please see:
> https://technet.microsoft.com/en-us/library/hh994562%28v=ws.10%29.aspx
>
>  Thanks,
> BR,
> Zhenyu Zheng
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>  __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Adding packaging as an OpenStack project

2015-06-03 Thread Sean Dague
On 06/02/2015 10:40 PM, Matthew Thode wrote:
> On 06/02/2015 05:41 PM, James E. Blair wrote:
>> Hi,
>>
>> This came up at the TC meeting today, and I volunteered to provide an
>> update from the discussion.
>>
>> In general, I think there is a lot of support for a packaging effort in
>> OpenStack.  The discussion here has been great; we need to answer a few
>> questions, get some decisions written down, and make sure we have
>> agreement.
>>
>> Here's what we need to know:
>>
>> 1) Is this one or more than one horizontal effort?
>>
>> In other words, do we think the idea of having a single packaging
>> project/team with collaboration among distros is going to work?  Or
>> should we look at it more like the deployment projects where we have
>> puppet and chef as top level OpenStack projects?
>>
>> Either way is fine, and regardless, we need to answer the next
>> questions:
>>
>> 2) What's the collaboration plan?
>>
>> How will different distros collaborate with each other, if at all?  What
>> things are important to standardize on, what aren't and how do we
>> support them all.
>>
>> 3) What are the plans for repositories and their contents?
>>
>> What repos will be created, and what will be in them.  When will new
>> ones be created, and is there any process around that.
>>
>> 4) Who is on the team(s)?
>>
>> Who is interested in the overall effort?  Who is signing up for
>> distro-specific work?  Who will be the initial PTL?
>>
>> I think if the discussion here can answer those questions, you should
>> update the governance repo change with that information, we can get all
>> the participants to ack that, and the TC will be able to act.
>>
>> Thanks again for driving this.
>>
>> -Jim
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> Gentoo packages from source client-side; I don't think this affects us.

Possibly, and that's definitely a legit answer. I think in the deb
packaging effort the primary desire is that package build files would be
in Gerrit to encourage collaboration in the wider community.

So an openstack/ebuild-packaging that was the git tree with the ebuilds
could be a thing if it was a thing you wanted.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] stevedore release 1.5.0 (liberty)

2015-06-03 Thread doug
We are excited to announce the release of:

stevedore 1.5.0: Manage dynamic plugins for Python applications

This release is part of the liberty release series.

With source available at:

http://git.openstack.org/cgit/openstack/stevedore

For more details, please see the git log history below and:

https://launchpad.net/python-stevedore/+milestone/1.5.0

Please report issues through launchpad:

https://bugs.launchpad.net/python-stevedore

Changes in stevedore 1.4.0..1.5.0
-

6ddec6e Removed non-free color profile from .jpg
7295f78 Add sphinx integration
fd46261 Updated from global requirements
2ef21b3 Fix Python versions supported
db174c5 Remove run_cross_tests.sh
9f1d7b6 fix author contact details
f20b94c re-raise exception with full traceback
502e74a Add pypi download + version badges

Diffstat (except docs and test files)
-

README.rst  |   8 +++
requirements.txt|   2 +-
setup.cfg   |   8 +--
stevedore/driver.py |   2 +-
stevedore/example/setup.py  |   4 +-
stevedore/sphinxext.py  | 108 
test-requirements.txt   |   2 +-
tox.ini |   2 +-
21 files changed, 322 insertions(+), 81 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index f7f4cc9..408d229 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@
-pbr>=0.6,!=0.7,<1.0
+pbr>=0.11,<2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 9d75458..b3680da 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5 +5 @@
-Pillow==2.4.0 # MIT
+Pillow>=2.4.0 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Global Requirements] Adding apscheduler to global requirements

2015-06-03 Thread BORTMAN, Limor (Limor)
From: Renat Akhmerov 
Subject: Re: [openstack-dev] [Global Requirements] Adding apscheduler to global 
requirements
Date: 3 Jun 2015 16:48:08 GMT+6
To: "OpenStack Development Mailing List (not for usage questions)" 


Thanks Doug, got it.

>Limor, can you please explain why exactly you need this library?
This is the only "live" library I found that enables creating scheduled jobs with 
seconds granularity.
I am planning on using BackgroundScheduler 
(apscheduler.schedulers.background).
Doug, does Oslo have this ability?
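For reference, a minimal example of the seconds-granularity scheduling
BackgroundScheduler provides (based on the APScheduler 3.x API; the job
function and interval below are made up for illustration):

from apscheduler.schedulers.background import BackgroundScheduler


def poll_cron_triggers():
    # Placeholder for the Mistral work that needs sub-minute granularity.
    print("checking cron triggers")


scheduler = BackgroundScheduler()
# Run the job every 5 seconds; the 'interval' trigger accepts seconds directly.
scheduler.add_job(poll_cron_triggers, 'interval', seconds=5)
# start() runs the scheduler in a background thread; a real service would
# keep the process alive after this call.
scheduler.start()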

Renat Akhmerov
@ Mirantis Inc.



On 02 Jun 2015, at 18:45, Doug Hellmann  wrote:

Excerpts from Renat Akhmerov's message of 2015-06-02 18:26:40 +0600:

Any comments from TC on that? What is the typical procedure of accepting new 
libs into global requirements?

There is a requirements management team, and usually we would want
a patch to the list in openstack/requirements, with some description
of the need for the package, especially which projects are going
to use it and why no existing package with similar functionality
is suitable. There are more details about the evaluation criteria
in http://git.openstack.org/cgit/openstack/requirements/tree/README.rst

Doug



Thanks

Renat Akhmerov
@ Mirantis Inc.


On 02 Jun 2015, at 18:11, BORTMAN, Limor (Limor) 
 wrote:

Hi all,
As part as a BP in mistral (Add seconds granularity in cron-trigger execute[1])
I would like to add apscheduler (Advanced Python Scheduler[2]) to the openstack 
Global Requirements.

Any objections?

[1] 
https://blueprints.launchpad.net/mistral/+spec/cron-trigger-seconds-granularity
[2] https://apscheduler.readthedocs.org/en/latest/


Thanks Stotland Limor 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Change abandonment policy

2015-06-03 Thread Martin Mágr


On 06/02/2015 08:39 PM, Colleen Murphy wrote:

4) Auto-abandon after N months/weeks if patch has a -1 or -2

```
If a change is given a -2 and the author has been unresponsive for at 
least 3 months, a script will automatically abandon the change, 
leaving a message about how the author can restore the change and 
attempt to resolve the -2 with the reviewer who left it.

```

We would use a tool like this one[1] to automatically abandon changes 
meeting a certain criteria. We would have to decide whether we want to 
only auto-abandon changes with -2's or go as far as to auto-abandon 
those with -1's. The policy proposal above assumes -2. The tool would 
leave a canned message about how to restore the change.
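For illustration, such an auto-abandon pass could look roughly like the
following against the Gerrit REST API (the query, project, credentials and
message below are assumptions, not the actual tool referenced in [1]):

import json
import requests

GERRIT = 'https://review.openstack.org'
AUTH = ('myuser', 'my-http-password')  # Gerrit HTTP credentials (assumed)

# Open changes in a project that carry a -2 and have been idle ~3 months.
# The exact search operators are an assumption; check the Gerrit docs.
QUERY = ('project:openstack/puppet-openstacklib status:open '
         'label:Code-Review=-2 age:3month')

resp = requests.get(GERRIT + '/a/changes/', params={'q': QUERY}, auth=AUTH)
# Gerrit prefixes JSON responses with ")]}'" to prevent XSSI; strip it first.
changes = json.loads(resp.text.split('\n', 1)[1])

MESSAGE = ('Abandoning after 3 months of inactivity with a -2. '
           'You can restore the change and address the -2 with the reviewer '
           'who left it.')

for change in changes:
    requests.post(GERRIT + '/a/changes/%s/abandon' % change['id'],
                  json={'message': MESSAGE}, auth=AUTH)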
+1 for auto-abandoning. 3-4 weeks of inactivity with a -1/-2 seems reasonable 
proof that the committer gave up on the patch.


Martin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On 06/03/2015 08:29 AM, Boris Pavlovic wrote:
> Guys,
> 
> I will try to summarize all questions and reply on them:
> 
> *- Why not splitting repo/plugins?*
> 
> I don't want to make "architectural" decisions based on "social" or
>  "not enough good tool for review" issues.
> 
> If we take a look at OpenStack, which was split many times:
> Glance, Cinder, ... we will see that there is a lot of code
> duplication that can't be removed even after two or even more years
> of oslo effort. It also produces issues like syncing 
> requirements, requirements handled in a large bash script like devstack,
> no standard installer, and it's quite hard to manage and test
> it, and so on..
> 
> That's why I don't think that splitting repo is good
> "architecture" decision - it makes simple things complicated...
> 
> 
> *- Why not just trust people*
> 
> People get tired and make mistakes (very often).

I wouldn't say they make mistakes *too* often. And if there is a
mistake, we always have an option to git-revert and talk to the guy
about it. I believe no one in the neutron team merges random crap, and
I would expect the same from other openstack teams.

It's also quite natural that people who do more reviews extend their
field of expertise. Do we really want to chase PTLs to introduce a
change into turing-complete-acl-description each time we feel someone
is now ready to start reviewing code from yet another submodule?

Or consider a case when a patch touches most, if not all submodules,
but applies some very trivial changes, like a new graduated oslo
library being consumed, or python3 adoption changes. Do you want to
wait for a super-core with enough ACL permissions for all those
submodules touched to approve it? I would go the opposite direction,
allowing a single core to merge such a trivial patch, without waiting
for the second one to waste his time reviewing it.

Core reviewers are not those who are able to put +2 on any patch, but
those who are able to understand where *not* to put it. I would better
allow people themselves to decide where they are capable and where
their expertise ends, and free PTLs from micro-managing the cats.

So in essence: mistakes are cheap; reputation works; people are
responsible enough; and more ACL fences are evil.

> That's why we have blocking CI system that checks patches,

Those checks are easy to automate. Trust is not easily formalized though
.

Ihar
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJVbuS9AAoJEC5aWaUY1u57v2wH/iDLvCrebTtTpocZ8a0BFJ7T
ssgjM+1F2JiEuieNg7qRqkdW8fZuMuODc7EnWihjDjfP4OMQkelO2711KSPTCSmT
76RLMQrSHhyB2FO29qu+4bE5uwUV4uutaDyK8IRZpra+nrSoU8dtL6NuTa/csEeU
QbmJBB2UMSXdrQmA6HfzoQV9Dmqk5ePbjzg1HXTFy/AtxCb2DLf2IUmeHqwtqg1o
WoC5ISqoUkRzWx5h1IbV26hhJuGrW6pWjrX50UEFmR/VZwz9T13s7BVE4ReE7mnA
2cIGdFnhaJY/VzD4WEzXRfNXV0qetTJG6w30wktKq6y1mG6q8nm+N6KQ4Onq0FQ=
=DZSF
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Andreas Jaeger

On 06/03/2015 12:32 PM, Boris Pavlovic wrote:

Guys,

One more time: it's NOT about reputation and it's NOT about believing
somebody.

It's about human nature. We all make mistakes.


And if we do, we can always revert a patch.


A system that checks whether a core reviewer may merge a given patch is 
just an extra check to avoid unintentional mistakes by core reviewers and 
to keep things self-organized.


I suggest you start with trusting these lieutenants and go forward with a 
social contract - and if that really does not work at all, let's discuss 
other options.


For the security guide, we have a social contract: It needs two +2s as 
usual - but one +2 from a security team member, one from a Documentation 
team member. This social contract works for our small group just fine 
and I encourage you to try it out,


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Source RPMs for RDO Kilo?

2015-06-03 Thread Neil Jerram

Many thanks, Haïkel, that looks like the information that my team needed.

Neil


On 03/06/15 11:18, Haïkel wrote:

Hi Neil,

We're already having this discussion on the downstream list.
RDO is currently moving package publication for RHEL/CentOS over to CentOS
mirrors. It's just a matter of time and of finishing the tooling
that automates the publication
process for source packages.

In the mean time, you can find sources in the following places
* our packaging sources live in Fedora dist-git:
ie: packaging sources for all services
http://pkgs.fedoraproject.org/cgit/openstack
* source packages are in Fedora and CBS (RHEL/CentOS) build systems.
http://koji.fedoraproject.org/
http://cbs.centos.org/koji/

Regards,
H.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] [HA] How long we need to wait for cloud recovery after some destructive scenarios?

2015-06-03 Thread Anastasia Urlapova
Timur,
you can find some numbers and developer recommendations at link [0]; it is our HA
Guide, feel free to contribute.

Nastya.

[0]
https://wiki.openstack.org/wiki/HAGuideImprovements/TOC#HA_Intro_and_Concepts

On Wed, Jun 3, 2015 at 1:06 PM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Looks like I forgot to add the link to [1] in the first email:
>
> [1] https://github.com/stackforge/haos
>
> On Wed, Jun 3, 2015 at 12:50 PM, Timur Nurlygayanov <
> tnurlygaya...@mirantis.com> wrote:
>
>> Hi team,
>>
>> I'm working on HA / destructive / recovery automated tests [1] for
>> OpenStack clouds and I want to get some expectations from users, operators
>> and developers for the speed of OpenStack recovery after some destructive
>> actions.
>> For example, how long should the cluster be unavailable if one of three
>> controllers is destroyed? I think that the right answer is '0 seconds,
>> no downtime' - users shouldn't see anything strange when we lose one
>> controller in our cloud (if it is a 'true' HA configuration).
>> In the real world I can see that such destructive scenarios require some
>> time to recover the cloud (1-15 minutes in different cases) - and I just
>> want to get your expectations or the requirements.
>>
>> How fast we can / should fully recover the cloud in the following cases:
>> 1. Restart RabbitMQ services
>> 2. Restart MySQL / Galera services
>> 3. Restart Neutron services (like L3 agents)
>> 4. Hard shutdown of any OpenStack controllers
>> 5. Shutdown of the ethernet interfaces of management / data networks
>>
>> Of course, it depends on the configuration, but we can describe some
>> common, 'expected', acceptance values (SLA) for downtime in different
>> destructive cases and use them to verify the clouds today and in the future.
>> We will use these values in the HAOS project [1], which will allow us to
>> validate any clouds with the same scenarios and with the same SLA for
>> recovery time.
>>
>> Any comments are welcome :)
>> Thank you!
>>
>> --
>>
>> Timur,
>> Senior QA Engineer
>> OpenStack Projects
>> Mirantis Inc
>>
>
>
>
> --
>
> Timur,
> Senior QA Engineer
> OpenStack Projects
> Mirantis Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Global Requirements] Adding apscheduler to global requirements

2015-06-03 Thread Renat Akhmerov
Thanks Doug, got it.

Limor, can you please explain why exactly you need this library?

Renat Akhmerov
@ Mirantis Inc.



> On 02 Jun 2015, at 18:45, Doug Hellmann  wrote:
> 
> Excerpts from Renat Akhmerov's message of 2015-06-02 18:26:40 +0600:
>> Any comments from TC on that? What is the typical procedure of accepting new 
>> libs into global requirements?
> 
> There is a requirements management team, and usually we would want
> a patch to the list in openstack/requirements, with some description
> of the need for the package, especially which projects are going
> to use it and why no existing package with similar functionality
> is suitable. There are more details about the evaluation criteria
> in http://git.openstack.org/cgit/openstack/requirements/tree/README.rst 
> 
> 
> Doug
> 
>> 
>> Thanks
>> 
>> Renat Akhmerov
>> @ Mirantis Inc.
>> 
>>> On 02 Jun 2015, at 18:11, BORTMAN, Limor (Limor) 
>>>  wrote:
>>> 
>>> Hi all,
>>> As part as a BP in mistral (Add seconds granularity in cron-trigger 
>>> execute[1])
>>> I would like to add apscheduler (Advanced Python Scheduler[2]) to the 
>>> openstack Global Requirements.
>>> 
>>> Any objections?
>>> 
>>> [1] 
>>> https://blueprints.launchpad.net/mistral/+spec/cron-trigger-seconds-granularity
>>> [2] https://apscheduler.readthedocs.org/en/latest/
>>> 
>>> 
>>> Thanks Stotland Limor 
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org 
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Dynamic Policy for Access Control Subteam Meeting

2015-06-03 Thread Sean Dague
On 06/02/2015 06:27 PM, Morgan Fainberg wrote:
> 
> 
> On Tue, Jun 2, 2015 at 12:09 PM, Adam Young  > wrote:
> 
> Since this a cross project concern, sending it out to the wider
> mailing list:
> 
> We have a sub-effort in Keystone to do better access control policy
> (not the  Neutron or  Congress based policy efforts).
> 
> I presented on this at the summit, and the effort is under full
> swing.  We are going to set up a subteam meeting for this, but would
> like to get some input from outside the Keystone developers working
> on it.  In particular, we'd like input from the Nova team that was
> thinking about hard-coding policy decisions in Python, and ask you,
> instead, to work with us to come up with a solution that works for
> all the services.
> 
> 
> I want to be sure we look at what Nova is presenting here. While
> building policy into python may not (on the surface) look like an
> approach that is wanted due to it restricting the flexibility that we've
> had with policy.json, I don't want to exclude the concept without
> examination. If there is a series of base level functionality that is
> expected to work with Nova in all cases - is that something that should
> be codified in the policy rules? This doesn't preclude having a mix
> between the two approaches (allowing custom roles, etc, but having a
> baseline for a project that is a known quantity that could be overridden).
> 
> Is there real value (from a UX and interoperability standpoint) to have
> everything 100% flexible in all the ways? If we are working to redesign
> how policy works, we should be very careful of excluding the (more)
> radical ideas without consideration. I'd argue that dynamic policy does
> fall on the opposite side of the spectrum from the Nova proposal. In
> truth I'm going to guess we end up somewhere in the middle.

I also don't think it's removing any flexibility at all. Moving the
default policy into code is about having sane defaults encoded somewhere
that we can analyze what people did with the policy, and WARN them when
they did something odd. That odd might be an interop thing, it might
also be 'you realize you disabled server creation, right, probably want
to go look at that'.

Our intent is this applies in layers.

You start with policy in code, that's a set of defaults, which can be
annotated with ("WARN if policy is restricted further than these
defaults") for specific rules.

Then you apply policy.json as a set of overrides. Compute and emit any
warnings.
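A minimal sketch of how that layering might be computed (rule names and the
warning heuristic below are illustrative, not Nova's actual implementation):

# Hypothetical defaults that would live in code.
DEFAULT_RULES = {
    "compute:create": "",                 # everyone
    "compute:delete": "rule:admin_or_owner",
}

# Rules annotated as "warn if restricted further than the default".
WARN_IF_RESTRICTED = {"compute:create"}


def effective_policy(defaults, overrides):
    """Overlay policy.json overrides on the in-code defaults."""
    policy = dict(defaults)
    for rule, check in overrides.items():
        if rule in WARN_IF_RESTRICTED and check != defaults.get(rule, ""):
            # A real implementation would need a smarter "more restrictive"
            # comparison than plain inequality; this is only a placeholder.
            print("WARNING: %s restricted beyond the default" % rule)
        policy[rule] = check
    return policy


overrides = {"compute:create": "role:admin"}   # e.g. loaded from policy.json
print(effective_policy(DEFAULT_RULES, overrides))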

Where this comes into dynamic policy I think is interesting, because
dynamic policy seems to require a few things.

Where is the start of day origin seed for policy?

There are a few options here. But if we think about a world where
components are releasing on different schedules, and being upgraded at
different times, it seems like the Nova installation has to be that
source of original truth.

So having a GET /policy API call that would provide the composite policy
that Nova knows about (code + json patch) would make a lot of sense. It
would make that discoverable to all kinds of folks on the network, not
just Keystone. Win.

This also seems like the only sane thing in a big tent world where
Keystone might have a *ton* of projects in its catalog. When something is
registered in the catalog, Keystone would reach back into that endpoint
and look for /policy and populate its base source of truth for that
service from there.

Dynamic Policy overrides in Keystone would just be another set of
patches (conceptually). These are stored in a database instead. That's fine.

Where I get fuzzy on what I've read / discussed on Dynamic Policy right
now is the fact that every API call is going to need another round trip
to Keystone for a policy check (which would be db calls in keystone?).
Which maybe is fine, but it seems like there are some challenges and
details around how this consolidated view of the world gets back to the
servers. It *almost* feels like that /policy API could be used to signal
cache flushes as well on changes in Keystone (though we'd need to handle
the HA proxy case). I don't know, this seems a place where the devil is in
the details, and lots of people probably need to weigh in on options.


But, the tl;dr is that Nova wanting to put defaults in code doesn't hide
anything away, and doesn't break the Dynamic policy model. It just adds
another layer that needs to be computed, and makes it so that you'd get
the policy from Nova via that API instead of rooting around in the
filesystem (which is a far more useful way for most people to get it).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] when trove-mgmt-client can be ready?

2015-06-03 Thread Li Tianqing
When will trove-mgmt-client be ready?




--

Best
Li Tianqing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Boris Pavlovic
Guys,

One more time: it's NOT about reputation and it's NOT about believing
somebody.

It's about human nature. We all make mistakes.

A system that checks whether a core reviewer may merge a given patch
is just an extra check to avoid unintentional mistakes by core reviewers
and to keep things self-organized.
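To make the idea concrete, here is a rough sketch of the kind of automated
gate being proposed (the subdirectory map and data shapes are invented for
illustration; a real gate would read them from Gerrit and project config):

# Hypothetical map of subdirectories to the cores trusted for them.
SUBDIR_CORES = {
    'rally/plugins/openstack': {'alice', 'bob'},
    'rally/benchmark': {'carol'},
}


def approval_allowed(changed_files, approver):
    """Return True if the approver is a core for every touched subdirectory."""
    for path in changed_files:
        for subdir, cores in SUBDIR_CORES.items():
            if path.startswith(subdir + '/') and approver not in cores:
                return False
    # Paths outside the map fall back to the normal core rules here.
    return True


# Example: a patch touching only the OpenStack plugins can be approved by bob.
print(approval_allowed(['rally/plugins/openstack/scenario.py'], 'bob'))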


Best regards,
Boris Pavlovic

On Wed, Jun 3, 2015 at 12:55 PM, Alexis Lee  wrote:

> Robert Collins said on Wed, Jun 03, 2015 at 11:12:35AM +1200:
> > So I'd like us to really get our heads around the idea that folk are
> > able to make promises ('I will only commit changes relevant to the DB
> > abstraction/transaction management') and honour them. And if they
> > don't - well, remove their access. *even with* CD in the picture,
> > thats a wholly acceptable risk IMO.
>
> +1, optimism about promises is the solution. The reputational cost of
> violating such a promise is high, given what a small world open source
> can turn out to be.
>
>
> Alexis
> --
> Nova Engineer, HP Cloud.  AKA lealexis, lxsli.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [oslo] oslo.policy requests from the Nova team

2015-06-03 Thread Sean Dague
On 06/02/2015 06:16 PM, David Lyle wrote:
> The Horizon project also uses the nova policy.json file to do role based
> access control (RBAC) on the actions a user can perform. If the defaults
> are hidden in the code, that makes those checks a lot more difficult to
> perform. Horizon will then get to duplicate all the hard coded defaults
> in our code base. Fully understanding that UI is not everyone's primary
> concern, I will just point out that it's a terrible user experience to
> have 10 actions listed on an instance that will only fail when actually
> attempted by making the API call.
> 
> To accomplish this level of RBAC, Horizon has to maintain a sync'd copy
> of the nova policy file. The move to centralized policy is something I
> am very excited about. But this seems to be a move in the opposite
> direction.
> 
> I think simply documenting the default values in the policy.json file
> would be a simpler and more straight-forward approach. I think the
> defcore resolution is also a documentation issue.

I think we should separate what you want to do:

* Determine Policy User Can Do

With how you are currently doing it:

* Reading the nova policy.json file

Because I definitely think that determining the policy the user can do
is something we all want, but reading the policy.json file
only works if Horizon and the API servers are on the same nodes.
Otherwise you are copying a bunch of stuff around. It also requires
(today) a ton of the url logic in Nova to be duplicated into Horizon
because our policy names are weird.

Documentation isn't really good enough, we'd actually like to WARN the
operator if they did a silly thing with their policy to get preventive
about configuration mistakes and errors. For that, we need a baseline.

How I'd imagine this going is the following:

Default Policy in Code => Patch Policy in policy.json on the API server
=> Dynamic Policy from Keystone

(things to the right override things to the left)

We could provide a /policy API resource for users to GET a policy
definition that's relevant to them (and something more global with admin
credentials). I would imagine this would only provide Default + Patch,
if you wanted Dynamic you'd ask Keystone for that (once it exists).
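As a rough illustration of that /policy resource (a toy WSGI-style handler,
not Nova's actual API code; rule names and overrides are made up):

import json

# In-code defaults plus operator overrides loaded from policy.json.
DEFAULTS = {"compute:create": "", "compute:delete": "rule:admin_or_owner"}
OVERRIDES = {"compute:delete": "role:admin"}     # e.g. parsed from policy.json


def get_policy_app(environ, start_response):
    """Toy WSGI app serving GET /policy: defaults patched with overrides."""
    composite = dict(DEFAULTS)
    composite.update(OVERRIDES)
    body = json.dumps(composite).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [body]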

That would be a better near-term fetching point for Horizon instead of
syncing a file. In the Dynamic Policy world that would be the origin
source of truth for Keystone to get started, and it could be dynamically
modified after that point.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging] Source RPMs for RDO Kilo?

2015-06-03 Thread Haïkel
Hi Neil,

We're already having this discussion on the downstream list.
RDO is currently moving package publication for RHEL/CentOS over to CentOS
mirrors. It's just a matter of time and of finishing the tooling
that automates the publication
process for source packages.

In the mean time, you can find sources in the following places
* our packaging sources live in Fedora dist-git:
ie: packaging sources for all services
http://pkgs.fedoraproject.org/cgit/openstack
* source packages are in Fedora and CBS (RHEL/CentOS) build systems.
http://koji.fedoraproject.org/
http://cbs.centos.org/koji/

Regards,
H.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Sahara] Can't upload jar file to Job Binaries from Horizon

2015-06-03 Thread Nikita Konovalov
Hi,

This issue was introduced when the python-sahara client moved to using 
keystone-client sessions. The keystone client has debug logging enabled and tries 
to log all requests and responses, so when uploading a job binary the request 
body contains binary content which is not printable from Python's perspective. 
I've submitted a change to keystone-client 
https://review.openstack.org/#/c/183514/ 
and it's currently in review.

If you need a fast workaround you can disable debug logging of the keystone client, 
or you can use Sahara's feature of storing job binaries in Swift containers.
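The general shape of the fix is to guard the debug logging against non-text
bodies; a simplified illustration (not the actual keystoneclient patch) might
look like:

import logging

LOG = logging.getLogger(__name__)


def safe_log_body(body):
    """Log a request body only if it is printable text; otherwise omit it."""
    if isinstance(body, bytes):
        try:
            body = body.decode('utf-8')
        except UnicodeDecodeError:
            LOG.debug('REQ BODY: <binary data omitted, %d bytes>', len(body))
            return
    LOG.debug('REQ BODY: %s', body)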

Best Regards,
Nikita Konovalov
Mirantis, Inc

> On May 7, 2015, at 12:29 , Li, Chen  wrote:
> 
> Hi sahara,
>  
> I have a fresh installed devstack environment.
>  
> I try to upload sahara/etc/edp-examples/edp-pig/trim-spaces/udf.jar to Job 
> binaries (store in internal database) but failed.
> I get error in horizon_error.log, which complains “UnicodeDecodeError: 
> 'ascii' codec can't decode byte 0xe6 in position 14: ordinal not in 
> range(128)”. (https://bugs.launchpad.net/sahara/+bug/1452116)
>  
> I checked everywhere I know, but can't find any clue why this happens, because 
> this used to work.
>  
> There is message in locale/sahara.pot:
> msgid "Job binary internal data must be a string of length greater 
> than zero"
> Does this mean I can't upload a "jar" file to a Job binary because Job binary 
> internal data must be a string ???
>  
> Anything I have missed ???
>  
> Looking forward to your reply!
>  
> Thanks.
> -chen
>  
>  
>  
>  
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-06-03 Thread Miguel Ángel Ajo
Doesn't this overlap with the work done for the OSProfiler?


More comments inline.  

Miguel Ángel Ajo


On Wednesday, 3 de June de 2015 at 11:43, Kekane, Abhishek wrote:

> Hi Devs,
>  
> So far I have got the following responses on the proposed solutions:
>  
> Solution 1: Return tuple containing headers and body from - 3 +1
> Solution 2: Use thread local storage to store 'x-openstack-request-id' 
> returned from headers - 0 +1
> Solution 3: Unique request-id across OpenStack Services - 1 +1
>  
>  


I'd vote for Solution 3, without involving Keystone: the first caller with no 
req-id generates one randomly,
and the req-id contains a call/hop count, which is incremented on every new call... 
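A tiny sketch of what that could look like (the id format and header name are
illustrative only, not an agreed cross-project convention):

import uuid

HEADER = 'x-openstack-request-id'   # header name assumed for illustration


def ensure_request_id(headers):
    """Generate a request id on first entry, bump the hop count on each call."""
    current = headers.get(HEADER)
    if current is None:
        # First caller in the chain: random id, hop count 0.
        return 'req-%s-0' % uuid.uuid4()
    base, hops = current.rsplit('-', 1)
    return '%s-%d' % (base, int(hops) + 1)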
 
>   
>  
>  

>  
> Requesting community people, cross-project members and PTLs to go through 
> this mailing thread [1] and give your suggestions/opinions about the 
> solutions proposed so that it will be easy to finalize the solution.
>  
> [1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064842.html
>  
> Thanks & Regards,
>  
> Abhishek Kekane
>  
> -Original Message-
> From: Nikhil Komawar [mailto:nik.koma...@gmail.com]  
> Sent: 28 May 2015 12:34
> To: openstack-dev@lists.openstack.org 
> (mailto:openstack-dev@lists.openstack.org)
> Subject: Re: [openstack-dev] [all] cross project communication: Return 
> request-id to caller
>  
> Did you get to talk with anyone in the LogWG ( 
> https://wiki.openstack.org/wiki/LogWorkingGroup )? In wonder what kind of 
> recommendations, standards we can come up with while adopting a cross project 
> solution. If our logs follow certain prefix and or suffix style across 
> projects, that would help a long way.
>  
> Personally: +1 on Solution 1
>  
> On 5/28/15 2:14 AM, Kekane, Abhishek wrote:
> >  
> > Hi Devs,
> >  
> >  
> > Thank you for your opinions/thoughts.
> >  
> > However I would like to suggest that please give +1 against the  
> > solution which you will like to propose so that at the end it will be  
> > helpful for us to consolidate the voting against each solution and  
> > make some decision.
> >  
> >  
> > Thanks in advance.
> >  
> >  
> > Abhishek Kekane
> >  
> >  
> >  
> > *From:*Joe Gordon [mailto:joe.gord...@gmail.com]
> > *Sent:* 28 May 2015 00:31
> > *To:* OpenStack Development Mailing List (not for usage questions)
> > *Subject:* Re: [openstack-dev] [all] cross project communication:
> > Return request-id to caller
> >  
> >  
> >  
> >  
> > On Wed, May 27, 2015 at 12:06 AM, Kekane, Abhishek  
> > mailto:abhishek.kek...@nttdata.com>> wrote:
> >  
> > Hi Devs,
> >  
> >  
> > Each OpenStack service sends a request ID header with HTTP responses.
> > This request ID can be useful for tracking down problems in the logs.
> > However, when operation crosses service boundaries, this tracking can  
> > become difficult, as each service has its own request ID. Request ID  
> > is not returned to the caller, so it is not easy to track the request.
> > This becomes especially problematic when requests are coming in  
> > parallel. For example, glance will call cinder for creating image, but  
> > that cinder instance may be handling several other requests at the  
> > same time. By using same request ID in the log, user can easily find  
> > the cinder request ID that is same as glance request ID in the g-api  
> > log. It will help operators/developers to analyse logs effectively.
> >  
> >  
> > Thank you for writing this up.
> >  
> >  
> >  
> > To address this issue we have come up with following solutions:
> >  
> >  
> > Solution 1: Return tuple containing headers and body from
> > respective clients (also favoured by Joe Gordon)
> >  
> > Reference:
> >  
> > https://review.openstack.org/#/c/156508/6/specs/log-request-id-mapping
> > s.rst
> >  
> >  
> > Pros:
> >  
> > 1. Maintains backward compatibility
> >  
> > 2. Effective debugging/analysing of the problem as both calling
> > service request-id and called service request-id are logged in
> > same log message
> >  
> > 3. Build a full call graph
> >  
> > 4. End user will able to know the request-id of the request and
> > can approach service provider to know the cause of failure of
> > particular request.
> >  
> >  
> > Cons:
> >  
> > 1. The changes need to be done first in cross-projects before
> > making changes in clients
> >  
> > 2. Applications which are using python-*clients needs to do
> > required changes (check return type of response)
> >  
> >  
> > Additional cons:
> >  
> >  
> > 3. Cannot simply search all logs (ala logstash) using the request-id  
> > returned to the user without any post processing of the logs.
>  

> >  
> >  
> >  
> >  
> >  
> > Solution 2: Use thread local storage to store
> > 'x-openstack-request-id' returned from headers (suggested by Doug
> > Hellmann)
> >  
> > Reference:
> >  
> > https://review.openstack.org/#/c/156508/9/specs/log-request-id-mapping
> > s.rst
> >  
> >  
> > Add new method 'get_openstack_request_id' to return 

Re: [openstack-dev] [fuel] [HA] How long we need to wait for cloud recovery after some destructive scenarios?

2015-06-03 Thread Timur Nurlygayanov
Looks like I forgot to add the link to [1] in the first email:

[1] https://github.com/stackforge/haos

On Wed, Jun 3, 2015 at 12:50 PM, Timur Nurlygayanov <
tnurlygaya...@mirantis.com> wrote:

> Hi team,
>
> I'm working on HA / destructive / recovery automated tests [1] for
> OpenStack clouds and I want to get some expectations from users, operators
> and developers for the speed of OpenStack recovery after some destructive
> actions.
> For example, how long should the cluster be unavailable if one of three
> controllers is destroyed? I think that the right answer is '0 seconds,
> no downtime' - users shouldn't see anything strange when we lose one
> controller in our cloud (if it is a 'true' HA configuration).
> In the real world I can see that such destructive scenarios require some
> time to recover the cloud (1-15 minutes in different cases) - and I just
> want to get your expectations or the requirements.
>
> How fast we can / should fully recover the cloud in the following cases:
> 1. Restart RabbitMQ services
> 2. Restart MySQL / Galera services
> 3. Restart Neutron services (like L3 agents)
> 4. Hard shutdown of any OpenStack controllers
> 5. Shutdown of the ethernet interfaces of management / data networks
>
> Of course, it depends on the configuration, but we can describe some
> common, 'expected', acceptance values (SLA) for downtime in different
> destructive cases and use them to verify the clouds today and in the future.
> We will use these values in the HAOS project [1], which will allow us to validate
> any clouds with the same scenarios and with the same SLA for recovery time.
>
> Any comments are welcome :)
> Thank you!
>
> --
>
> Timur,
> Senior QA Engineer
> OpenStack Projects
> Mirantis Inc
>



-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][security] Enable user password complexity verification

2015-06-03 Thread liusheng
Thanks for this topic. Also, I think it is a similar situation when 
talking about Keystone users, not only the instances' passwords.


在 2015/6/3 17:48, 郑振宇 写道:

Hi All,

The current OpenStack does not provide a user password complexity 
verification option.



When performing actions such as creating instances, evacuating instances, 
rebuilding instances, rescuing instances and updating instances' admin 
passwords, the complexity of the user-provided admin password is not 
verified. This can cause security problems.


One solution would be adding a configuration option: 
using_complex_admin_password = True. If this option is set in the 
configuration file by the administrator, then Nova will perform password 
complexity checks; the check standards can be set to follow general IT 
industry standards. If the provided admin password is not 
complex enough, an exception will be thrown. If this option is not set 
in the configuration file, then the complexity check will be skipped.


When the user does not provide an admin password, generate_password() in 
utils.py is used to generate an admin password. generate_password() 
now uses two password symbol groups: default and easier. The default 
symbol group contains numbers, upper case letters and lower case 
letters; the easier symbol group contains only numbers and upper case 
letters. The generated password is not complex enough and can also 
cause security problems.


One possible solution is to add a new symbol group: 
STRONGER_PASSWORD_SYMBOLS, which contains numbers, upper case letters, 
lower case letters and also special characters such as 
`~!@#$%^&*()-_=+ and space, and then add a new option in the configuration 
file: generate_strong_password = True. When this option is set, Nova 
will generate the password using the STRONGER_PASSWORD_SYMBOLS symbol group 
and with a longer password length. If this option is not set, the 
password will be generated using the default symbol group and default 
length.
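For illustration, a minimal sketch of what the proposed check and generator
could look like (the option names come from the proposal above; the symbol
set, length and character-class rules are assumptions, not an actual Nova
implementation):

import random
import re
import string

# Symbol group proposed above; exact contents are an assumption.
STRONGER_PASSWORD_SYMBOLS = (string.digits + string.ascii_uppercase +
                             string.ascii_lowercase + '`~!@#$%^&*()-_=+ ')


def check_password_complexity(password, min_length=8):
    """Reject passwords that lack length or character-class variety."""
    if len(password) < min_length:
        raise ValueError('admin password is too short')
    classes = [r'[a-z]', r'[A-Z]', r'[0-9]', r'[^a-zA-Z0-9]']
    if sum(bool(re.search(c, password)) for c in classes) < 3:
        raise ValueError('admin password is not complex enough')


def generate_strong_password(length=16):
    """Generate a password from the stronger symbol group."""
    rng = random.SystemRandom()
    return ''.join(rng.choice(STRONGER_PASSWORD_SYMBOLS) for _ in range(length))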


AWS allows the selection of password policy to configure which kind of 
password complexity is used in the cloud. Please see:

http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html

And regarding the standard of complexity, Microsoft also has advice 
about it, please see:

https://technet.microsoft.com/en-us/library/hh994562%28v=ws.10%29.aspx

Thanks,
BR,
Zhenyu Zheng


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Alexis Lee
Robert Collins said on Wed, Jun 03, 2015 at 11:12:35AM +1200:
> So I'd like us to really get our heads around the idea that folk are
> able to make promises ('I will only commit changes relevant to the DB
> abstraction/transaction management') and honour them. And if they
> don't - well, remove their access. *even with* CD in the picture,
> thats a wholly acceptable risk IMO.

+1, optimism about promises is the solution. The reputational cost of
violating such a promise is high, given what a small world open source
can turn out to be.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [oslo] oslo.policy requests from the Nova team

2015-06-03 Thread Bhandaru, Malini K
Hello Sean!

+1 on defaults, resource-url style entries, hierarchy

But, in the interest of staying "declarative", I am not comfortable with having 
default policies in code.
I would rather have a default nova policy.json file in the nova code base and 
if no policy.json is supplied, have the nova code
copy over this default to the /etc location, and log the same.

Admin-related access changes are easier to determine in the custom policy.json, 
but with the introduction of roles, which could act as aliases,
policy.json can easily be morphed to become more promiscuous or ultra 
stringent. That is harder to detect and alert on.

Also, in the context of dynamic policies and being able, via the API, to 
introduce policy changes that take newly introduced roles into consideration,
I can see policy changes being saved in the database and changes being 
logged, but also, for ease of use/review, it would be nice to write them out to a policy.json
file, one per project.

Thanks
Malini

-Original Message-
From: John Garbutt [mailto:j...@johngarbutt.com] 
Sent: Wednesday, June 03, 2015 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone] [nova] [oslo] oslo.policy requests from 
the Nova team

On 2 June 2015 at 17:22, Sean Dague  wrote:
> Nova has a very large API, and during the last release cycle a lot of 
> work was done to move all the API checking properly into policy, and 
> not do admin context checks at the database level. The result is a 
> very large policy file - 
> https://github.com/openstack/nova/blob/master/etc/nova/policy.json

In summary, we need to make it easier for the deployer configuring the policy 
to "do the right thing".

The plan to remove the ability to turn off API "extensions", so we get the Nova 
API back to a single official (microversioned) API, will make it more important 
that it's easy to "maintain" policy tweaks.

> This provides a couple of challenges. One of which is in recent 
> defcore discussions some deployers have been arguing that the 
> existence of policy files means that anything you can do with 
> policy.json is valid and shouldn't impact trademark usage, because the 
> knobs were given. Nova specifically states this is not ok - 
> https://github.com/openstack/nova/blob/master/doc/source/devref/policy
> _enforcement.rst#existed-nova-api-being-restricted
> however, we'd like to go a step further here.
>
> What we'd really like is sane defaults for policy that come from code, 
> not from etc files. So that a Nova deploy with an empty policy.json is 
> completely valid, and does a reasonable thing.
>
> Policy.json would then be just a set of overrides for existing policy.
> That would make it a lot more clear what was changed from the existing 
> policy.
>
> We'd also really like the policy system to be able to WARN when the 
> server starts if the policy was changed in some way that could 
> negatively impact compatibility of the system, i.e. if functions that 
> we felt were essential were turned off. Because the default policy is 
> in code, we could have a view of the old and new world and actually 
> warn the Operator that they did a weird thing.
>
> Lastly, we'd actually really like to redo our policy to look more like 
> resource urls instead of extension names, as this should be a lot more 
> sensible to the administrators, and hopefully make it easier to think 
> about policy. Which I think means an aliasing facility in oslo.policy 
> to allow a graceful transition for users. (This may exist, I don't know).

+1 to all that.

One more thing to help those maintaining a policy that has several levels of 
"admin" (frankly the most acceptable use of policy tweaks, and something we 
might want to encode into our defaults at some point if clear patterns emerge).

I think we need more hierarchy in the policy. For example, if you want to 
disable all floating ip actions, it would be nice if that was a single policy 
change. Basically having all floating ip actions inherit from the top level 
policy (i.e. the actions default to the top level policy, and have overrides 
when required). As we add extra API actions, or extra, more granular policy 
items, they should default in a way that's easy to understand across an upgrade.

> I'm happy to write specs here, but mostly wanted to have the 
> discussion on the list first to ensure we're all generally good with this 
> direction.

Thanks for the awesome summary here.

I have added this to the list of post-summit actions I am (still!) compiling, 
in the section where we need folks to step up and own stuff:
https://etherpad.openstack.org/p/YVR-nova-liberty-summit-action-items

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] [HA] How long we need to wait for cloud recovery after some destructive scenarios?

2015-06-03 Thread Timur Nurlygayanov
Hi team,

I'm working on HA / destructive / recovery automated tests [1] for
OpenStack clouds and I want to get some expectations from users, operators
and developers for the speed of OpenStack recovery after some destructive
actions.
For example, how long should the cluster be unavailable if one of three
controllers is destroyed? I think the right answer is '0 seconds, no
downtime' - users shouldn't see anything strange when we lose one
controller in our cloud (if it is a 'true' HA configuration).
In the real world I can see that such destructive scenarios need some
time to recover the cloud (1-15 minutes in different cases) - and I just
want to get your expectations or requirements.

How fast we can / should fully recover the cloud in the following cases:
1. Restart RabbitMQ services
2. Restart MySQL / Galera services
3. Restart Neutron services (like L3 agents)
4. Hard shutdown of any OpenStack controllers
5. Shutdown of the ethernet interfaces of management / data networks

Of course, it depends on the configuration, but we can describe some common,
'expected' acceptance values (SLA) for downtime in different destructive cases
and use them to verify clouds today and in the future.
We will use these values in the HAOS project [1], which will allow any cloud to
be validated with the same scenarios and the same SLA for recovery time.
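
For example, HAOS could encode these expectations as per-scenario SLA values,
roughly like this (the numbers are placeholders only; the real values are
exactly what I am asking the community to agree on):

# Placeholder SLA values in seconds of allowed downtime per scenario.
RECOVERY_SLA = {
    "restart_rabbitmq": 60,
    "restart_mysql_galera": 120,
    "restart_neutron_l3_agents": 60,
    "hard_shutdown_controller": 300,
    "shutdown_mgmt_or_data_interfaces": 300,
}

def check_recovery_sla(scenario, measured_downtime_seconds):
    """Fail the test if the measured downtime exceeds the agreed SLA."""
    allowed = RECOVERY_SLA[scenario]
    if measured_downtime_seconds > allowed:
        raise AssertionError("%s: recovered in %s s, but the SLA is %s s"
                             % (scenario, measured_downtime_seconds, allowed))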

Any comments are welcome :)
Thank you!

-- 

Timur,
Senior QA Engineer
OpenStack Projects
Mirantis Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][security] Enable user password complexity verification

2015-06-03 Thread 郑振宇
Hi All,
The current OpenStack does not provide a user password complexity verification 
option.
When performing actions such as creating instances, evacuating instances, 
rebuilding instances, rescuing instances and updating an instance's admin 
password, the complexity of the user-provided admin password is not verified. 
This can cause security problems.
One solution would be to add a configuration option, 
using_complex_admin_password = True. If this option is set in the configuration 
file by the administrator, Nova will perform password complexity checks; the 
check standard can follow the general IT industry standard. If the provided 
admin password is not complex enough, an exception will be thrown. If this 
option is not set in the configuration file, the complexity check will be 
skipped.
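
A minimal sketch of the kind of check this option could enable (the rules below
are only one example of a 'general IT industry' standard, not a proposal for
the exact rules):

import re

def validate_password_complexity(password, min_length=8):
    """Example rules: minimum length plus upper, lower, digit and special char."""
    rules = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password),
        re.search(r"[a-z]", password),
        re.search(r"[0-9]", password),
        re.search(r"[^A-Za-z0-9]", password),
    ]
    if not all(rules):
        # A real Nova change would raise one of Nova's Invalid* exceptions here.
        raise ValueError("admin password does not meet the complexity requirements")
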
When the user does not provide an admin password, generate_password() in 
utils.py is used to generate one. generate_password() currently uses two 
password symbol groups, default and easier: the default symbol group contains 
numbers, upper case letters and lower case letters, while the easier symbol 
group contains only numbers and upper case letters. The generated password is 
not complex enough and can also cause security problems.
One possible solution is to add a new symbol group, STRONGER_PASSWORD_SYMBOLS, 
which contains numbers, upper case letters, lower case letters and also special 
characters such as `~!@#$%^&*()-_=+ and space, plus a new configuration option, 
generate_strong_password = True. When this option is set, Nova will generate 
passwords using the STRONGER_PASSWORD_SYMBOLS symbol group and with a longer 
password length. If this option is not set, passwords will be generated using 
the default symbol group and default length.
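
The stronger generation path could look roughly like this (the symbol group
contents and the default length are illustrative only):

import random

# Illustrative "stronger" symbol groups: digits, upper case, lower case, specials.
STRONGER_PASSWORD_SYMBOLS = (
    "0123456789",
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
    "abcdefghijklmnopqrstuvwxyz",
    "`~!@#$%^&*()-_=+ ",
)

def generate_strong_password(length=24):
    """Take at least one symbol from every group, then fill up and shuffle."""
    r = random.SystemRandom()
    password = [r.choice(group) for group in STRONGER_PASSWORD_SYMBOLS]
    all_symbols = "".join(STRONGER_PASSWORD_SYMBOLS)
    password += [r.choice(all_symbols) for _ in range(length - len(password))]
    r.shuffle(password)
    return "".join(password)
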
AWS allows the selection of a password policy to configure which kind of 
password complexity is used in the cloud, see:
http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html
Microsoft also has advice about the standard of complexity, see:
https://technet.microsoft.com/en-us/library/hh994562%28v=ws.10%29.aspx

Thanks,
BR,
Zhenyu Zheng
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] cross project communication: Return request-id to caller

2015-06-03 Thread Kekane, Abhishek
Hi Devs,

So far I have got the following responses on the proposed solutions:

Solution 1: Return tuple containing headers and body from respective clients - 3 +1
Solution 2: Use thread local storage to store 'x-openstack-request-id' returned 
from headers - 0 +1
Solution 3: Unique request-id across OpenStack Services - 1 +1
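
To make the comparison more concrete, here is a rough sketch of what Solution 1
would look like from a calling service's point of view (the client method and
the returned tuple are illustrative, not a final client API):

# Hypothetical calling-service code, assuming the client starts returning a
# (headers, body) tuple; none of these names are the final client API.
def create_volume_for_image(cinder_client, size_gb, glance_request_id):
    headers, volume = cinder_client.volumes.create(size=size_gb)
    cinder_request_id = headers.get("x-openstack-request-id")
    # Both request ids end up in the same log line, so the g-api log can be
    # correlated with the cinder log without any post-processing.
    print("glance req %s -> cinder req %s created volume %s"
          % (glance_request_id, cinder_request_id, volume["id"]))
    return volume, cinder_request_id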

Requesting community people, cross-project members and PTLs to go through this 
mailing thread [1] and give your suggestions/opinions about the solutions 
proposed so that it will be easy to finalize the solution.

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-May/064842.html

Thanks & Regards,

Abhishek Kekane

-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com] 
Sent: 28 May 2015 12:34
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] cross project communication: Return 
request-id to caller

Did you get to talk with anyone in the LogWG 
( https://wiki.openstack.org/wiki/LogWorkingGroup )? I wonder what kind of 
recommendations and standards we can come up with while adopting a cross-project 
solution. If our logs follow a certain prefix and/or suffix style across 
projects, that would go a long way.

Personally: +1 on Solution 1

On 5/28/15 2:14 AM, Kekane, Abhishek wrote:
>
> Hi Devs,
>
>  
>
> Thank you for your opinions/thoughts.
>
> However I would like to suggest that please give +1 against the 
> solution which you will like to propose so that at the end it will be 
> helpful for us to consolidate the voting against each solution and 
> make some decision.
>
>  
>
> Thanks in advance.
>
>  
>
> Abhishek Kekane
>
>  
>
>  
>
> *From:*Joe Gordon [mailto:joe.gord...@gmail.com]
> *Sent:* 28 May 2015 00:31
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [all] cross project communication:
> Return request-id to caller
>
>  
>
>  
>
>  
>
> On Wed, May 27, 2015 at 12:06 AM, Kekane, Abhishek 
> mailto:abhishek.kek...@nttdata.com>> wrote:
>
> Hi Devs,
>
>  
>
> Each OpenStack service sends a request ID header with HTTP responses.
> This request ID can be useful for tracking down problems in the logs.
> However, when an operation crosses service boundaries, this tracking can 
> become difficult, as each service has its own request ID. The request ID 
> is not returned to the caller, so it is not easy to track the request.
> This becomes especially problematic when requests are coming in 
> parallel. For example, glance will call cinder for creating an image, but 
> that cinder instance may be handling several other requests at the 
> same time. By using the same request ID in the log, a user can easily find 
> the cinder request ID that is the same as the glance request ID in the g-api 
> log. It will help operators/developers to analyse logs effectively.
>
>  
>
> Thank you for writing this up.
>
>  
>
>  
>
> To address this issue we have come up with following solutions:
>
>  
>
> Solution 1: Return tuple containing headers and body from
> respective clients (also favoured by Joe Gordon)
>
> Reference:
> 
> https://review.openstack.org/#/c/156508/6/specs/log-request-id-mapping
> s.rst
>
>  
>
> Pros:
>
> 1. Maintains backward compatibility
>
> 2. Effective debugging/analysing of the problem as both calling
> service request-id and called service request-id are logged in
> same log message
>
> 3. Build a full call graph
>
>     4. End user will be able to know the request-id of the request and
> can approach service provider to know the cause of failure of
> particular request.
>
>  
>
> Cons:
>
> 1. The changes need to be done first in cross-projects before
> making changes in clients
>
> 2. Applications which are using python-*clients needs to do
> required changes (check return type of  response)
>
>  
>
> Additional cons:
>
>  
>
> 3. Cannot simply search all logs (ala logstash) using the request-id 
> returned to the user without any post processing of the logs.
>
>  
>
>  
>
>  
>
> Solution 2:  Use thread local storage to store
> 'x-openstack-request-id' returned from headers (suggested by Doug
> Hellmann)
>
> Reference:
> 
> https://review.openstack.org/#/c/156508/9/specs/log-request-id-mapping
> s.rst
>
>  
>
> Add new method 'get_openstack_request_id' to return this
> request-id to the caller.
>
>  
>
> Pros:
>
> 1. Doesn't break compatibility
>
> 2. Minimal changes are required in client
>
> 3. Build a full call graph
>
>  
>
> Cons:
>
> 1. Malicious user can send long request-id to fill up the
> disk-space, resulting in potential DoS
>
> 2. Changes need to be done in all python-*clients
>
> 3. Last request id should be flushed out in a subsequent call
> otherwise it will return wrong request id to the caller
>
>  
>
>  
>
> Solution 3: Unique request-id across OpenStack Services (suggested
> by Jamie Lennox)
>
> 

Re: [openstack-dev] [keystone] [nova] [oslo] oslo.policy requests from the Nova team

2015-06-03 Thread John Garbutt
On 2 June 2015 at 17:22, Sean Dague  wrote:
> Nova has a very large API, and during the last release cycle a lot of
> work was done to move all the API checking properly into policy, and not
> do admin context checks at the database level. The result is a very
> large policy file -
> https://github.com/openstack/nova/blob/master/etc/nova/policy.json

In summary, we need to make it easier for the deployer configuring the
policy to "to the right thing".

The plan to remove the ability to turn off API "extensions", so we get
the Nova API back to a single official (microversioned) API, will make
it more important that it's easy to "maintain" policy tweaks.

> This provides a couple of challenges. One of which is in recent defcore
> discussions some deployers have been arguing that the existence of
> policy files means that anything you can do with policy.json is valid
> and shouldn't impact trademark usage, because the knobs were given. Nova
> specifically states this is not ok -
> https://github.com/openstack/nova/blob/master/doc/source/devref/policy_enforcement.rst#existed-nova-api-being-restricted
> however, we'd like to go a step further here.
>
> What we'd really like is sane defaults for policy that come from code,
> not from etc files. So that a Nova deploy with an empty policy.json is
> completely valid, and does a reasonable thing.
>
> Policy.json would then be just a set of overrides for existing policy.
> That would make it a lot more clear what was changed from the existing
> policy.
>
> We'd also really like the policy system to be able to WARN when the
> server starts if the policy was changed in some way that could
> negatively impact compatibility of the system, i.e. if functions that we
> felt were essential were turned off. Because the default policy is in
> code, we could have a view of the old and new world and actually warn
> the Operator that they did a weird thing.
>
> Lastly, we'd actually really like to redo our policy to look more like
> resource urls instead of extension names, as this should be a lot more
> sensible to the administrators, and hopefully make it easier to think
> about policy. Which I think means an aliasing facility in oslo.policy to
> allow a graceful transition for users. (This may exist, I don't know).

+1 to all that.

One more thing to help those maintaining a policy that has several
levels of "admin" (frankly the most acceptable use of policy tweaks,
and something we might want to encode into our defaults at some point
if clear patterns emerge).

I think we need more hierarchy in the policy. For example, if you want
to disable all floating ip actions, it would be nice if that was a
single policy change. Basically having all floating ip actions inherit
from the top level policy (i.e. the actions default to the top level
policy, and have overrides when required). As we add extra API
actions, or extra, more granular policy items, they should default in a
way that's easy to understand across an upgrade.
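
As a sketch of what I mean (the rule names are made up for the example, and
this assumes the defaults live in code as discussed above):

# Hypothetical in-code defaults: every floating ip action defaults to the
# top level "compute:floating_ips" rule, so a deployer can disable the whole
# group with a single override in policy.json (e.g. setting that one rule to
# "!"), while still being able to override an individual action if needed.
DEFAULT_RULES = {
    "compute:floating_ips": "rule:admin_or_owner",
    "compute:floating_ips:add": "rule:compute:floating_ips",
    "compute:floating_ips:remove": "rule:compute:floating_ips",
    "compute:floating_ips:list": "rule:compute:floating_ips",
}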

> I'm happy to write specs here, but mostly wanted to have the discussion
> on the list first to ensure we're all generally good with this direction.

Thanks for the awesome summary here.

I have added this to the list of post-summit actions I am (still!)
compiling, in the section where we need folks to step up and own stuff:
https://etherpad.openstack.org/p/YVR-nova-liberty-summit-action-items

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Nominating Filip Blaha for murano-core

2015-06-03 Thread Filip Blaha
Many thanks for your votes, and for your trust in me for this challenging role. 
I don't know what more to say :-)


Regards
Filip

On 06/02/2015 05:17 PM, Serg Melikyan wrote:

Filip, my congratulations! Welcome!

On Tue, Jun 2, 2015 at 5:34 PM, Stan Lagun > wrote:


+1

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis


On Tue, Jun 2, 2015 at 9:25 AM, Serg Melikyan
mailto:smelik...@mirantis.com>> wrote:

Folks, I'd like to propose Filip Blaha to core members of
Murano team.

Filip is active member of our community and he maintains a
good score
as contributor:
http://stackalytics.com/report/users/filip-blaha

Existing Murano cores, please vote +1/-1 for the addition of
Filip to
the murano-core.
--
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com  | smelik...@mirantis.com 



+7 (495) 640-4904, 0261
+7 (903) 156-0836


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [nova] [oslo] oslo.policy requests from the Nova team

2015-06-03 Thread John Garbutt
On 2 June 2015 at 23:48, Kevin L. Mitchell  wrote:
> On Tue, 2015-06-02 at 16:16 -0600, David Lyle wrote:
>> The Horizon project also uses the nova policy.json file to do role
>> based access control (RBAC) on the actions a user can perform. If the
>> defaults are hidden in the code, that makes those checks a lot more
>> difficult to perform. Horizon will then get to duplicate all the hard
>> coded defaults in our code base.

Yeah, that's totally nuts.
Policy discovery is the fix to this tight coupling I guess.

>> Fully understanding UI is not
>> everyone's primary concern, I will just point out that it's a terrible
>> user experience to have 10 actions listed on an instance that will
>> only fail when actually attempted by making the API call.

We are super worried about this, at least I am.
It's a bad API user experience.

However, we are still getting the plumbing sorted to let us fix that.
And no one has stepped up to own writing up the proposed solutions (yet...?)

> For the record, the discussion at the summit also touched on the
> discoverability of the policy affecting a given user/API.  I don't
> believe we considered the ordering between that and the defaults feature
> we suggested, but I believe we can code a defaults mechanism to
> dynamically generate an output file in the interim (as is done for
> configuration now), which may improve the situation from Horizon's
> standpoint, until the discoverability piece is in place.

We were planning on having all the default lines commented out, but we
can sure skip that if it helps horizon until the discoverable policy
is complete. There should be something that works out there.

Honestly, it probably has to be more than policy; the capabilities of
the system as it's configured are also an important input into this.
It seems harsh to assume the deployer has to set up their policy to
accurately reflect what the system is capable of. I hope that gets
unified, possibly via dynamic policy "defaults", or ideally something
less evil.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra][tc][ptl] Scaling up code review process (subdir cores)

2015-06-03 Thread Sahid Orentino Ferdjaoui
On Wed, Jun 03, 2015 at 10:22:59AM +0200, Julien Danjou wrote:
> On Wed, Jun 03 2015, Robert Collins wrote:
> 
> > We *really* don't need a technical solution to a social problem.
> 
> I totally agree. The trust issues is not going to be solve with a tool.

+1. I cannot believe people would commit something in an area they do
not understand.

> -- 
> Julien Danjou
> ;; Free Software hacker
> ;; http://julien.danjou.info



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [packaging] Source RPMs for RDO Kilo?

2015-06-03 Thread Neil Jerram
Where are the source RPMs that correspond to the binary RPMs at 
https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7/ ?


I guess that 
https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing/source/ 
might be quite close - but the 'testing' in this URL suggests that 
https://repos.fedorapeople.org/repos/openstack/openstack-kilo/testing is 
more bleeding edge than 
https://repos.fedorapeople.org/repos/openstack/openstack-kilo/el7.


Thanks, 
Neil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Should we add instance action event to live migration?

2015-06-03 Thread Rui Chen
Hi all:

We have the instance action and action event for most of the instance
operations,

exclude: live-migration. In the current master code, when we do
live-migration, the

instance action is recorded, but the action event for live-migration is
lost. I'm not sure that

it's a bug or design behavior, so I want to get more feedback in mail list.

I found the patch https://review.openstack.org/#/c/95440/

It adds the live-migration action, but no event. It looks weird.

I think there are two improvements we can make:

[1]: add the live-migration event, to keep it consistent with other instance
operations (see the sketch below).

[2]: remove the live-migration action in order to make the operation
transparent to end-users, as Andrew says in the patch comments.
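
For option [1], the shape of the change is roughly an event wrapper around the
live-migration call, something like this (the helpers are illustrative
stand-ins for the existing start/finish DB calls, not the exact Nova internals):

import contextlib
import logging

LOG = logging.getLogger(__name__)

def _event_start(context, instance_uuid, event_name):
    # Stand-in for the DB API call that records the event start row.
    LOG.info("action event %s started for instance %s", event_name, instance_uuid)

def _event_finish(context, instance_uuid, event_name, success):
    # Stand-in for the DB API call that records the event result.
    LOG.info("action event %s for instance %s finished (success=%s)",
             event_name, instance_uuid, success)

@contextlib.contextmanager
def record_action_event(context, instance_uuid, event_name):
    """Wrap an operation so its action event is recorded like other operations."""
    _event_start(context, instance_uuid, event_name)
    try:
        yield
    except Exception:
        _event_finish(context, instance_uuid, event_name, success=False)
        raise
    _event_finish(context, instance_uuid, event_name, success=True)

# Usage in the live-migration path would then look roughly like:
# with record_action_event(context, instance.uuid, "live_migration"):
#     ... do the live migration ...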

Which way do you prefer? Please let me know, thanks.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

