Re: [openstack-dev] [Neutron] [IPv6] Ubuntu PPA with IPv6 enabled, need help to achieve it

2014-04-04 Thread Thomas Goirand
On 04/02/2014 02:33 AM, Martinx - ジェームズ wrote:
> Guys!
> 
> I would like to do this:
> 
> 
> 1- Create and maintain a Ubuntu PPA Archive to host Neutron with IPv6
> patches (from Nephos6 / Shixiong?).
> 
> 
> Why?
> 
> 
> Well, I'm feeling that Neutron with native and complete IPv6 support
> will only be available in October (or maybe later, am I right?) but I
> really need this (Neutron IPv6) ASAP, so I'm volunteering myself to
> create / maintain this PPA for Neutron with IPv6, until it reaches mainline.
> 
> To be able to achieve it, I just need to know which files I need to
> patch (the diff), then repackage the Neutron deb packages. But I'll need
> help here, because I don't know where those "Neutron IPv6 patches"
> are (links?)...
> 
> Let me know if there is interest in this...
> 
> Thanks!
> Thiago

Hi Martinx,

If you would like to take care of maintaining the IPv6 patches for the
life of Icehouse, then I'll happily use them in the Debian packages
(note: I also produce Ubuntu packages, and maintain 10 repository mirrors).

Also, if you would like to join the OpenStack packaging team in
alioth.debian.org, and contribute to it at least for this IPv6 support,
that'd be just great! I'm available if you need my help.

Could you please point me to the list of needed patches? I would need
to keep them separated, in debian/patches, rather than pulling from a
different git repository.

Cheers,

Thomas Goirand (zigo)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-04 Thread Clint Byrum
Excerpts from Adam Young's message of 2014-04-04 18:48:40 -0700:
> On 04/04/2014 02:46 PM, Clint Byrum wrote:
> > Excerpts from Michael Elder's message of 2014-04-04 07:16:55 -0700:
> >> Opened in Launchpad: https://bugs.launchpad.net/heat/+bug/1302624
> >>
> >> I still have concerns though about the design approach of creating a new
> >> project for every stack and new users for every resource.
> >>
> >> If I provision 1000 patterns a day with an average of 10 resources per
> >> pattern, you're looking at 10,000 users per day. How can that scale?
> >>
> > If that can't scale, then keystone is not viable at all. I like to think
> > we can scale keystone to the many millions of users level.
> >
> >> How can we ensure that all stale projects and users are cleaned up as
> >> instances are destroyed? When users choose to go through horizon or nova to
> >> tear down instances, what cleans up the project & users associated with
> >> that heat stack?
> >>
> > So, they created these things via Heat, but have now left the dangling
> > references in Heat, and expect things to work properly?
> >
> > If they create it via Heat, they need to delete it via Heat.
> >
> >> Keystone defines the notion of tokens to support authentication, why
> >> doesn't the design provision and store a token for the stack and its
> >> equivalent management?
> >>
> > Tokens are _authentication_, not _authorization_.
> 
> Tokens are authorization, not authentication.  For Authentication you 
> should be using a real cryptographically secure authentication 
> mechanism:  either Kerberos or X509.
> 

Indeed, I may have used the terms incorrectly.

Unless I'm mistaken, a token is valid wherever it is presented. It is
simply proving that you authenticated yourself to keystone and that you
have xyz roles.

Perhaps the roles are "authorization". But those roles aren't scoped to
a token, they're scoped to a user, so it still remains that it serves
as authentication for what you have and what you're authorized to do as
a whole user.

That is why I suggest OAUTH, because that is a scheme which offers
tokens with limited scope. We kind of have the same thing with trusts,
but that also doesn't really offer the kind of isolation that we want,
nor does it really offer advantages over user-per-deployment.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Neutron] API inconsistencies with security groups

2014-04-04 Thread Joshua Hesketh

Howdy,

I'm moving a conversation that has begun on a review to this mailing list as it
is perhaps symptomatic of a larger issue regarding API compatibility
(specifically between neutron and nova-networking). Unfortunately these are
areas I don't have much experience with so I'm hoping to gain some clarity
here.

There is a bug in nova where launching an instance with a given security group
is case-insensitive for nova-networks but case-sensitive for neutron. This
highlights inconsistencies but I also think this is a legitimate bug[0].
Specifically, the 'nova boot' command accepts the incorrectly cased security
group name, but the instance then enters an error state because it cannot be
booted with it. There is an inherent mistake here: the initial check approves
the security-group name, but when it comes time to assign the security group
(at the scheduler level) it fails.

I think this should be fixed but then the nova CLI behaves differently with
different tasks. For example, `nova secgroup-add-rule` is case sensitive. So in
reality it is unclear if security groups should, or should not, be case
sensitive. The API implies that they should not. The CLI has methods where some
are and some are not.

I've addressed the initial bug as a patch to the neutron driver[1] and also
amended the case-sensitive lookup in the python-novaclient[2] but both reviews
are being held up by this issue.
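For anyone who wants a concrete picture, here is a minimal, hypothetical
sketch of the kind of case-insensitive name lookup the novaclient amendment
implies; the function and variable names are illustrative only and are not
the actual client code:

def find_security_group(groups, requested_name):
    """Match a security group by name, ignoring case.

    `groups` is assumed to be a list of objects with a `name` attribute,
    as returned by a hypothetical list() call; this is an illustration of
    the lookup, not the real python-novaclient implementation.
    """
    wanted = requested_name.lower()
    matches = [g for g in groups if g.name.lower() == wanted]
    if not matches:
        raise LookupError("No security group named %r" % requested_name)
    if len(matches) > 1:
        # Case-insensitive matching can be ambiguous if both 'Default'
        # and 'default' exist, so refuse to guess.
        raise LookupError("Ambiguous security group name %r" % requested_name)
    return matches[0]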

I guess the questions are:
 - are people aware of this inconsistency?
 - is there some documentation on the inconsistencies?
 - is a fix of this nature considered an API compatibility break?
 - and what are the expectations (in terms of case-sensitivity)?

Cheers,
Josh

[0] https://launchpad.net/bugs/1286463
[1] https://review.openstack.org/#/c/77347/
[2] https://review.openstack.org/#/c/81688/

--
Rackspace Australia


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack] [nova] admin user create instance for another user/tenant

2014-04-04 Thread Xu (Simon) Chen
I wonder if there is a way to do the following. I have a user A with admin
role in tenant A, and I want to create a VM in/for tenant B as user A.
Obviously, I can use A's admin privilege to add itself to tenant B, but I
want to avoid that.

Based on the policy.json file, it seems doable:
https://github.com/openstack/nova/blob/master/etc/nova/policy.json#L8

I read this as: as long as a user is an admin, it can create an instance,
just like an admin user can remove an instance from another tenant.

But in here, it looks like as long as the context project ID and target
project ID don't match, an action would be rejected:
https://github.com/openstack/nova/blob/master/nova/api/openstack/wsgi.py#L968

Indeed, when I try to use user A's token to create a VM (POST to
v2//servers), I got the exception from the above link.

On the other hand, according to here, VM's project_id only comes from the
context:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L767

I wonder if it makes sense to allow admin users to specify a "project_id"
field (which overrides context.project_id) when creating a VM. This
probably requires non-trivial code change.
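To make the proposal concrete, here is a rough, hypothetical sketch of the
admin-only override I have in mind; the helper name and context attributes
are assumptions for illustration, not Nova's actual API:

def resolve_target_project(context, requested_project_id=None):
    """Pick the project a new VM should belong to.

    Non-admins always get their own project; an admin may optionally
    override it. This mirrors the behaviour proposed above and is not the
    real nova.compute.api code.
    """
    if requested_project_id and requested_project_id != context.project_id:
        if not context.is_admin:
            raise PermissionError("Only admins may create VMs for another "
                                  "project")
        return requested_project_id
    return context.project_id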

Or maybe there is another way of doing what I want?

Thanks.
-Simon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSG][OSSN] Heat templates with invalid references allow unintended network access

2014-04-04 Thread Nathan Kinder

Heat templates with invalid references allow unintended network access
------------------------------------------------------------------------

### Summary ###
Orchestration templates can create security groups to define network
access rules.  When creating these rules, it is possible to have a rule
grant incoming network access to instances belonging to another security
group.  If a rule references a non-existent security group, it can
result in allowing incoming access to all hosts for that rule.

### Affected Services / Software ###
Heat, nova-network, Havana

### Discussion ###
When defining security groups of the "AWS::EC2::SecurityGroup" type in a
CloudFormation-compatible format (CFN) orchestration template, it is
possible to use references to other security groups as the source for
ingress rules.  When these rules are evaluated by Heat in the OpenStack
Havana release, a reference to a non-existent security group will be
silently ignored.  This results in the rule using a "CidrIp" property of
"0.0.0.0/0".  This will allow incoming access to any host for the
affected rule.  This has the effect of allowing unintended network
access to instances.

This issue only occurs when Nova is used for networking (nova-network).
The Neutron networking service is not affected by this issue.

The OpenStack Icehouse release is not affected by this issue.  In the
Icehouse release, Heat will check if a non-existent security group is
referenced in a template and return an error, causing the creation of
the security group to fail.

### Recommended Actions ###
If you are using Heat in the OpenStack Havana release with Nova for
networking (nova-network), you should review your orchestration
templates to ensure that all references to security groups in ingress
rules are valid.  Specifically, you should look at the use of the
"SourceSecurityGroupName" property in your templates to ensure that
all referenced security groups exist.

One particular improper usage of security group references that you
should look for is the case where you define multiple security groups
in one template and use references between them.  In this case, you
need to make sure that you are using the "Ref" intrinsic function to
indicate that you are referencing a security group that is defined in
the same template.  Here is an example of a template with a valid
security group reference:

---- begin example correct template snippet ----
"WikiDatabaseSecurityGroup" : {
  "Type" : "AWS::EC2::SecurityGroup",
  "Properties" : {
"GroupDescription" : "Enable HTTP access plus SSH access",
"SecurityGroupIngress" : [
  {
"IpProtocol" : "icmp",
"FromPort" : "-1",
"ToPort" : "-1",
"CidrIp" : "10.1.1.0/24"
  },
  {
"IpProtocol" : "tcp",
"FromPort" : "80",
"ToPort" : "80",
"CidrIp" : "10.1.1.0/24"
  },
  {
"IpProtocol" : "tcp",
"FromPort" : "22",
"ToPort" : "22",
"CidrIp" : "10.1.1.0/24"
  },
  {
"IpProtocol" : "tcp",
"FromPort" : "3306",
"ToPort" : "3306",
"SourceSecurityGroupName" : {
  "Ref": "WebServerSecurityGroup"
}
  }
]
  }
},

"WebServerSecurityGroup" : {
  "Type" : "AWS::EC2::SecurityGroup",
  "Properties" : {
"GroupDescription" : "Enable HTTP access plus SSH access",
"SecurityGroupIngress" : [
  {
"IpProtocol" : "icmp",
"FromPort" : "-1",
"ToPort" : "-1",
"CidrIp" : "10.1.1.0/24"
  },
  {
"IpProtocol" : "tcp",
"FromPort" : "80",
"ToPort" : "80",
"CidrIp" : "10.1.1.0/24"
  },
  {
"IpProtocol" : "tcp",
"FromPort" : "22",
"ToPort" : "22",
"CidrIp" : "10.1.1.0/24"
  }
]
  }
},
---- end example correct template snippet ----

Here is an example of an incorrect reference to a security group defined
in the same template:

---- begin example INVALID template snippet ----
  {
"IpProtocol" : "tcp",
"FromPort" : "3306",
"ToPort" : "3306",
"SourceSecurityGroupName" : "WebServerSecurityGroup" #INCORRECT!
  }
---- end example INVALID template snippet ----

The above invalid reference will result in allowing incoming networking
on port 3306 from all hosts:

  +-------------+-----------+---------+-------------+--------------+
  | IP Protocol | From Port | To Port | IP Range    | Source Group |
  +-------------+-----------+---------+-------------+--------------+
  | icmp        | -1        | -1      | 10.1.1.0/24 |              |
  | tcp         | 80        | 80      | 10.1.1.0/24 |              |
  | tcp         | 22        | 22      | 10.1.1.0/24 |              |
  | tcp         | 3306      | 3306    | 0.0.0.0/0   |              |
  +-------------+-----------+---------+-------------+--------------+

It is also recommended that you test your templates if you are using
security group references to ensure that the resulting network rules
are as intended.

### Contacts / References ###
This OSSN : 

Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-04 Thread Anne Gentle



> On Apr 3, 2014, at 8:40 PM, Steve Baker  wrote:
> 
>> On 04/04/14 14:05, Michael Elder wrote:
>> Hello, 
>> 
>> I'm looking for insights about the interaction between keystone and the 
>> software configuration work that's gone into Icehouse in the last month or 
>> so. 
>> 
>> I've found that when using software configuration, the KeystoneV2 is broken 
>> because the server.py#_create_transport_credentials() explicitly depends on 
>> KeystoneV3 methods. 
>> 
>> Here's what I've come across: 
>> 
>> In the following commit, the introduction of 
>> _create_transport_credentials() on server.py begins to create a user for 
>> each OS::Nova::Server resource in the template: 
>> 
>> commit b776949ae94649b4a1eebd72fabeaac61b404e0f 
>> Author: Steve Baker  
>> Date:   Mon Mar 3 16:39:57 2014 +1300 
>> Change: https://review.openstack.org/#/c/77798/   
>> 
>> server.py lines 470-471: 
>> 
>> if self.user_data_software_config():
>>     self._create_transport_credentials()
>> 
>> With the introduction of this change, each server resource which is 
>> provisioned results in the creation of a new user ID. The call delegates 
>> through to stack_user.py lines 40-54: 
>> 
>> 
>> def _create_user(self):
>>     # Check for stack user project, create if not yet set
>>     if not self.stack.stack_user_project_id:
>>         project_id = self.keystone().create_stack_domain_project(
>>             self.stack.id)
>>         self.stack.set_stack_user_project_id(project_id)
>> 
>>     # Create a keystone user in the stack domain project
>>     user_id = self.keystone().create_stack_domain_user(
>>         username=self.physical_resource_name(),  ## HERE THE USERNAME IS SET TO THE RESOURCE NAME
>>         password=self.password,
>>         project_id=self.stack.stack_user_project_id)
>> 
>>     # Store the ID in resource data, for compatibility with SignalResponder
>>     db_api.resource_data_set(self, 'user_id', user_id)
>> 
>> My concerns with this approach: 
>> 
>> - Each resource is going to result in the creation of a unique user
>> in Keystone. That design point seems hardly tenable if you're
>> provisioning a large number of templates by an organization every day.
> Compared to the resources consumed by creating a new nova server (or a 
> keystone token!), I don't think creating new users will present a significant 
> overhead.
> 
> As for creating users bound to resources, this is something heat has done 
> previously but we're doing it with more resources now. With havana heat (or 
> KeystoneV2) those users will be created in the same project as the stack 
> launching user, and the stack launching user needs admin permissions to 
> create these users.
>> - If you attempt to set your resource names to some human-readable string 
>> (like "web_server"), you get one shot to provision the template, wherein 
>> future attempts to provision it will result in exceptions due to duplicate 
>> user ids.
> This needs a bug raised. This isn't an issue on KeystoneV3 since the users 
> are created in a project which is specific to the stack. Also for v3 
> operations the username is ignored as the user_id is used exclusively.
>> 
>> - The change prevents compatibility between Heat on Icehouse and KeystoneV2.
> Please continue to test this with KeystoneV2. However any typical icehouse 
> OpenStack should really have the keystone v3 API enabled.

I don't believe this statement reflects a grasp of our current reality.

There is no such thing as a typical Icehouse installation yet -- it is not even 
released. And when we went to document v3 Keystone API for ops we couldn't find 
enough info for deployments. 

- client support is not documented with OpenStack client examples
- users and ops find your explanatory concept docs for roles and domains lacking
- best practices and service catalog explanations are not useful or not yet 
written to my knowledge 

Joe Topjian can explain more about operators' needs here; hopefully he'll have 
more details to add. 

> Can you explain the reasons why yours isn't?
> 

For all the reasons above and more.

Anne 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-04 Thread Adam Young

On 04/04/2014 02:46 PM, Clint Byrum wrote:

Excerpts from Michael Elder's message of 2014-04-04 07:16:55 -0700:

Opened in Launchpad: https://bugs.launchpad.net/heat/+bug/1302624

I still have concerns though about the design approach of creating a new
project for every stack and new users for every resource.

If I provision 1000 patterns a day with an average of 10 resources per
pattern, you're looking at 10,000 users per day. How can that scale?


If that can't scale, then keystone is not viable at all. I like to think
we can scale keystone to the many millions of users level.


How can we ensure that all stale projects and users are cleaned up as
instances are destroyed? When users choose to go through horizon or nova to
tear down instances, what cleans up the project & users associated with
that heat stack?


So, they created these things via Heat, but have now left the dangling
references in Heat, and expect things to work properly?

If they create it via Heat, they need to delete it via Heat.


Keystone defines the notion of tokens to support authentication, why
doesn't the design provision and store a token for the stack and its
equivalent management?


Tokens are _authentication_, not _authorization_.


Tokens are authorization, not authentication.  For Authentication you 
should be using a real cryptographically secure authentication 
mechanism:  either Kerberos or X509.




For the latter, we
need to have a way to lock down access to an individual resource in
Heat. This allows putting secrets in deployments and knowing that only
the instance which has been deployed to will have access to the secrets.
I do see an optimization possible, which is to just create a user for the
box that is given access to any deployments on the box. That would make
sense if users are going to create many many deployments per server. But
even at 10 per server, having 10 users is simpler than trying to manage
shared users and edit their authorization rules.

Now, I actually think that OAUTH tokens _are_ intended to be authorization
as well as authentication, so that is probably where the focus should
be long term. But really, you're talking about the same thing: a single
key lookup in keystone.




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Trove] Managed Instances Feature

2014-04-04 Thread Hopper, Justin
Greetings,

I am trying to address an issue from certain perspectives and I think some
support from Nova may be needed.

Problem
Services like Trove run in Nova Compute Instances.  These services try
to provide an integrated and stable platform on which the "service" can run
in a predictable manner.  Such elements include configuration of the
service, networking, installed packages, etc.  In today's world, when Trove
spins up an Instance to deploy a database on, it creates that Instance with
the User's Credentials.  Thus, to Nova, the User has full access to that
Instance through Nova's API.  This access can be used in ways which
unintentionally compromise the service.

Solution
A proposal is being formed that would put such Instances in a read-only or
invisible mode from the perspective of Nova.  That is, the Instance can only
be managed from the Service from which it was created.  At this point, we do
not need any granular controls.  A simple lock-down of the Nova API for
these Instances would suffice.  However, Trove would still need to interact
with this Instance via Nova API.

The basic requirements for Nova would be…

> A way to identify a request originating from a Service vs coming directly from
> an end-user
> A way to Identify which instances are being managed by a Service
> A way to prevent some or all access to the Instance unless the Service ID in
> the request matches that attached to the Instance
> 
Any feedback on this would be appreciated.
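To illustrate the third requirement only, here is some hypothetical
pseudologic (not Nova's policy engine; the 'managed_by_service' field and
service IDs are assumptions for the example):

class InstanceLockedError(Exception):
    """Raised when a caller tries to touch a service-managed instance."""


def check_managed_access(requesting_service_id, instance):
    # Instances not managed by any service behave exactly as today.
    managing_service = instance.get('managed_by_service')
    if managing_service and managing_service != requesting_service_id:
        raise InstanceLockedError(
            'Instance %s is managed by service %s' %
            (instance['id'], managing_service))


# A request carrying Trove's service ID passes; a plain end-user request
# (no service ID) against the same instance is refused.
instance = {'id': 'abc123', 'managed_by_service': 'trove'}
check_managed_access('trove', instance)
try:
    check_managed_access(None, instance)
except InstanceLockedError:
    pass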

Thanks, 

Justin Hopper
Software Engineer - DBaaS
irc: juice | gpg: EA238CF3 | twt: @justinhopper




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-04-04 Thread Zane Bitter
I was just going to let this thread die, because it's clear that we're 
just approaching this from different philosophical viewpoints, and I 
think that we need _both_ viewpoints expressed in the community. Trying 
to change each other's mind would be as pointless as it is futile ;)


That said, it turns out there is still one point that I need to make...

On 26/03/14 05:54, Stan Lagun wrote:

And let users build environments of any complexity from small components
while providing reasonable defaults so that one-click deployment would
also be possible. And for such a system to be useful, the design process
needs to be guided and driven by UI. The system must know what
combinations of components are possible and what are not, and not let the
user create Microsoft IIS hosted on Fedora.


If I may go all editorial on you again, this sounds like the same
thing we've been hearing since the 1970s: "When everything is object
oriented, non-technical users will be able to program just by
plugging together existing chunks of code." Except it hasn't ever
worked. 35+ years. No results.


Agree. I know it sounds like marketing bullshit. I never believed myself
this would work for programming. It never worked because OOP approach
doesn't save you from writing code.


This is unfair to marketing.

It never worked because writing the perfect object that could be used in 
every conceivable situation is considerably more expensive and requires 
*more* understanding of how it works than writing/adapting the one you 
need for each given situation.


I submit that the exact same situation is the case here.

What is really missing from this conversation is a detailed analysis of 
who exactly is going to develop and use these applications, and their 
economic incentives for doing so. (If this has happened, I didn't see it 
in this thread.) Or, in other words, marketing.


Basically you're saying that the developer is providing a pre-packaged 
application that has to work in any conceivable environment, where its 
actual components are not known to the developer. The testing burden of 
that is enormous - O(2^n) in the number of options - while the benefit 
over bundling the dependencies is at best incremental, even ignoring the 
downside that the application will probably be broken for most users. If 
I were a developer, I just don't understand why I would sign up for this.



I shall now return to making calculators for people who are currently 
counting on their fingers :)


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] setting up cross-project unit test jobs for oslo libs

2014-04-04 Thread Doug Hellmann
I have submitted a patch to add jobs to run the unit tests of projects
using oslo libraries with the unreleased master HEAD version of those
libraries, to gate both the project and the library. If you are a PTL
or Oslo liaison, and are interested in following its progress, please
subscribe: https://review.openstack.org/#/c/85487/3

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Tempest without branches

2014-04-04 Thread Sean Dague
On 04/04/2014 04:31 PM, Rochelle.RochelleGrober wrote:
> (easier to insert my questions at top of discussion as they are more
> general)
> 
>  
> 
>  
> 
> How would test deprecations work in a branchless Tempest?  Right now,
> there is the discussion on removing the XML tests from Tempest, yet they
> are still valid for Havana and Icehouse.  If they get “removed”, will
> they still be accessible and runnable for Havana version tests?  I can
> see running from a tagged version for Havana, but if you are **not**
> running from the tag, then the files would be “gone”.  So, I’m wondering
> how this would work for Refstack, testing backported bugfixes, etc.

If a feature is deprecated in OpenStack, eventually that feature will be
deleted. The appropriate tests will be put behind a "feature flag" and
not be run if that feature isn't turned on in the Tempest config.
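Purely as an illustrative sketch (the option name and config object below
are assumptions, not the real tempest.conf schema), a feature-flagged test
in a branchless world looks roughly like this:

import unittest


class FakeComputeFeatureFlags(object):
    # Stand-in for values that would be parsed from tempest.conf;
    # "xml_api_enabled" is an assumed option name.
    xml_api_enabled = False  # operators flip this per deployment


CONF = FakeComputeFeatureFlags()


@unittest.skipUnless(CONF.xml_api_enabled,
                     "Nova v2 XML API not enabled in this deployment")
class ServersXmlTest(unittest.TestCase):
    def test_list_servers_xml(self):
        # The real test would exercise the XML API here; it is simply
        # skipped on clouds where the feature flag is off.
        self.assertTrue(CONF.xml_api_enabled)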

Once that feature no longer exists in a supported OpenStack release,
then it would be deleted from Tempest. This does mean it takes longer to
get code out of Tempest.

As a hypothetical, if Nova v2 XML is removed in Juno, we'll ensure we
have a toggle for Nova v2 XML in the tempest config, and we'll stop
testing it on Juno and going forward.

Once Icehouse goes eol (a year from now), we'd delete those tests
entirely, as there would be no version of supported OpenStack that has
that feature.

I would imagine in the mean time we'd unwind some of the inheritance of
the XML/JSON tests to make it simpler to enhance the JSON tests without
impacting the existing XML tests. But that will be a case by case basis.


> Another related question arises from the discussion of Nova API
> versions.  Tempest tests are being enhanced to do validation, and the
> newer API versions  (2.1,  3.n, etc. when the approach is decided) will
> do validation, etc.  How will these “backward incompatible” tests be
> handled if the test that works for Havana gets modified to work for Juno
> and starts failing Havana code base?

Enhancing validation shouldn't be backwards incompatible. If it is, it's
a bug, and probably something we need to address on all active branches
at the same time.

Also, realize the co-gate means that you won't be able to land a Tempest
master test if it can't simultaneously pass master and stable/icehouse.
So it's self testing.

> With the discussion of project functional tests that could be maintained
> in one place, but run in two (maintenance location undecided, run locale
> local and Tempest/Integrated), how would this “cross project” effort be
> affected by a branchless Tempest?

Honestly, I think that's out of scope. I think there is actually some
confusion about run in devstack, and run with Tempest. And I think that
any code that's not in the Tempest tree probably shouldn't be run in
tempest.

It can run on a real devstack though. We have the swift functional tests
today like that.

> Maybe we need some use cases to ferret out the corner cases of a
> branchless Tempest implementation?  I think we need to get more into
> some of the details to understand what would be needed to be
> added/modified/ removed to make this design proposal work.
> 
>  
> 
> --Rocky
> 
>  
> 
>  
> 
>  
> 
> *From:*David Kranz [mailto:dkr...@redhat.com]
> *Sent:* Friday, April 04, 2014 6:10 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [RFC] Tempest without branches
> 
>  
> 
> On 04/04/2014 07:37 AM, Sean Dague wrote:
> 
> An interesting conversation has cropped up over the last few days in -qa
> 
> and -infra which I want to bring to the wider OpenStack community. When
> 
> discussing the use of Tempest as part of the Defcore validation we came
> 
> to an interesting question:
> 
>  
> 
> Why does Tempest have stable/* branches? Does it need them?
> 
>  
> 
> Historically the Tempest project has created a stable/foo tag the week
> 
> of release to lock the version of Tempest that will be tested against
> 
> stable branches. The reason we did that is until this cycle we had
> 
> really limited knobs in tempest to control which features were tested.
> 
> stable/havana means - test everything we know how to test in havana. So
> 
> when, for instance, a new API extension landed upstream in icehouse,
> 
> we'd just add the tests to Tempest. It wouldn't impact stable/havana,
> 
> because we wouldn't backport changes.
> 
>  
> 
> But is this really required?
> 
>  
> 
> For instance, we don't branch openstack clients. They are supposed to
> 
> work against multiple server versions. Tempest, at some level, is
> 
> another client. So there is some sense there.
> 
>  
> 
> Tempest now also have flags on features, and tests are skippable if
> 
> services, or even extensions aren't enabled (all explicitly setable in
> 
> the tempest.conf). This is a much better control mechanism than the
> 
> coarse grained selection of stable/foo.

[openstack-dev] [TripleO] stable/icehouse branches cut

2014-04-04 Thread James Slagle
The stable/icehouse branches for:

tripleo-image-elements
tripleo-heat-templates
tuskar

Have been created from the latest tags, which I just tagged and
released yesterday.

The stable/icehouse branch for tripleo-incubator was cut from the
latest sha as of this afternoon (since we don't tag and release this
repo).

For now, we need some ACL overrides in gerrit to allow folks to review
appropriately. I've submitted that change to openstack-infra/config:
https://review.openstack.org/85485

For questions about committing/proposing/backporting changes to the
stable branches, this link[1] has a lot of good info. It talks about
milestone-proposed branches, but the process would be the same for our
stable/icehouse branches.

[1] 
https://wiki.openstack.org/wiki/GerritJenkinsGithub#Authoring_Changes_for_milestone-proposed

-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-04 Thread Zane Bitter

On 04/04/14 13:58, Clint Byrum wrote:

>We could keep roughly the same structure: a separate template for each
>OpenStack service (compute, block storage, object storage, ironic, nova
>baremetal). We would then use Heat environments to treat each of these
>templates as a custom resource (e.g. OS::TripleO::Nova,
>OS::TripleO::Swift, etc.).
>

I've never fully embraced providers for composition. Perhaps I've missed
that as a key feature. An example of this would be helpful. I think if
we deprecated all of merge.py except the "merge unique params and
resources into one template" part, we could probably just start using
nested stacks for that and drop merge.py. However, I'm not a huge fan of
nested stacks as they are a bit clunky. Maybe providers would make that
better?

Anyway, I think I need to see how this would actually work before I can
really grasp it.


AIUI this use case is pretty much a canonical example of where you'd 
want to use providers. You have a server that you would like to treat as 
just a server, but can't because it comes with a WaitCondition and a 
random string generator (or whatever) into the bargain. So you group 
those resources together into a provider template behind a server-like 
facade, and just treat them like the single server you'd prefer them to be.


This could actually be a big win where you're creating multiple ones 
with a similar configuration, because you can parametrise it and move it 
inside the template and then you only need to specify the custom parts 
rather than repeat the whole declaration when you add more resources in 
the same "group".


From there moving into scaling groups when the time comes should be 
trivial. I'm actually pushing for the autoscaling code to literally use 
the providers mechanism to implement scaling of stacks, but the 
ResourceGroup does something basically equivalent too - splitting your 
scaling unit into a separate template is in all cases the first step.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Tempest without branches

2014-04-04 Thread Rochelle.RochelleGrober
(easier to insert my questions at top of discussion as they are more general)


How would test deprecations work in a branchless Tempest?  Right now, there is 
the discussion on removing the XML tests from Tempest, yet they are still valid 
for Havana and Icehouse.  If they get "removed", will they still be accessible 
and runnable for Havana version tests?  I can see running from a tagged version 
for Havana, but if you are *not* running from the tag, then the files would be 
"gone".  So, I'm wondering how this would work for Refstack, testing backported 
bugfixes, etc.

Another related question arises from the discussion of Nova API versions.  
Tempest tests are being enhanced to do validation, and the newer API versions  
(2.1,  3.n, etc. when the approach is decided) will do validation, etc.  How 
will these "backward incompatible" tests be handled if the test that works for 
Havana gets modified to work for Juno and starts failing Havana code base?

With the discussion of project functional tests that could be maintained in one 
place, but run in two (maintenance location undecided, run locale local and 
Tempest/Integrated), how would this "cross project" effort be affected by a 
branchless Tempest?

Maybe we need some use cases to ferret out the corner cases of a branchless 
Tempest implementation?  I think we need to get more into some of the details 
to understand what would be needed to be added/modified/ removed to make this 
design proposal work.

--Rocky



From: David Kranz [mailto:dkr...@redhat.com]
Sent: Friday, April 04, 2014 6:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [RFC] Tempest without branches

On 04/04/2014 07:37 AM, Sean Dague wrote:

An interesting conversation has cropped up over the last few days in -qa

and -infra which I want to bring to the wider OpenStack community. When

discussing the use of Tempest as part of the Defcore validation we came

to an interesting question:



Why does Tempest have stable/* branches? Does it need them?



Historically the Tempest project has created a stable/foo tag the week

of release to lock the version of Tempest that will be tested against

stable branches. The reason we did that is until this cycle we had

really limited knobs in tempest to control which features were tested.

stable/havana means - test everything we know how to test in havana. So

when, for instance, a new API extension landed upstream in icehouse,

we'd just add the tests to Tempest. It wouldn't impact stable/havana,

because we wouldn't backport changes.



But is this really required?



For instance, we don't branch openstack clients. They are supposed to

work against multiple server versions. Tempest, at some level, is

another client. So there is some sense there.



Tempest now also have flags on features, and tests are skippable if

services, or even extensions aren't enabled (all explicitly setable in

the tempest.conf). This is a much better control mechanism than the

coarse grained selection of stable/foo.





If we decided not to set a stable/icehouse branch in 2 weeks, the gate

would change as follows:



Project masters: no change

Project stable/icehouse: would be gated against Tempest master

Tempest master: would double the gate jobs, gate on project master and

project stable/icehouse on every commit.



(That last one needs infra changes to work right, those are all in

flight right now to assess doability.)



Some interesting effects this would have:



 * Tempest test enhancements would immediately apply on stable/icehouse *



... giving us more confidence. A large amount of tests added to master

in every release are enhanced checking for existing function.



 * Tempest test changes would need server changes in master and

stable/icehouse *



In trying tempest master against stable/havana we found a number of

behavior changes in projects that there had been a 2 step change in the

Tempest tests to support. But this actually means that stable/havana and

stable/icehouse for the same API version are different. Going forward

this would require master + stable changes on the projects + Tempest

changes. Which would provide much more friction in changing these sorts

of things by accident.



 * Much more stable testing *



If every Tempest change is gating on stable/icehouse, the week long

stable/havana can't pass tests won't happen. There will be much more

urgency to keep stable branches functioning.





If we got rid of branches in Tempest the path would be:

 * infrastructure to support this in infra - in process, probably

landing today

 * don't set stable/icehouse - decision needed by Apr 17th

 * changes to d-g/devstack to be extra explicit about what features

stable/icehouse should support in tempest.conf

 * see if we can make master work with stable/havana to remove the

stable/havana Tempest branch (if this is doable in a month, great, if

not just wait for havana to age out)

Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Joshua Harlow
I found https://github.com/kennethreitz/requests/issues/713

"""
Lukasa commented a month ago:

There's been no progress on this, and it's not high on the list of priorities 
for any of the core development team. This is only likely to happen any time 
soon if someone else develops it. =)

"""

So maybe someone from openstack (or other) just needs to finish the above up?

From: Chuck Thier <cth...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Friday, April 4, 2014 at 11:50 AM
To: OpenStack Development Mailing List
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] Issues with Python Requests

I think I have worked out the performance issues with eventlet and Requests 
with most of it being that swiftclient needs to make use of requests.session to 
re-use connections, and there are likely other areas there that we can make 
improvements.

Now on to expect: 100-continue support, has anyone else looked into that?

--
Chuck


On Fri, Apr 4, 2014 at 9:41 AM, Chuck Thier <cth...@gmail.com> wrote:
Howdy,

Now that swift has aligned with the other projects to use requests in 
python-swiftclient, we have lost a couple of features.

1.  Requests doesn't support expect: 100-continue.  This is very useful for 
services like swift or glance where you want to make sure a request can 
continue before you start uploading GBs of data (for example find out that you 
need to auth).

2.  Requests doesn't play nicely with eventlet or other async frameworks [1].  
I noticed this when suddenly swift-bench (which uses swiftclient) wasn't 
performing as well as before.  This also means that, for example, if you are 
using keystone with swift, the auth requests to keystone will block the proxy 
server until they complete, which is also not desirable.

Does anyone know if these issues are being addressed, or begun working on them?

Thanks,

--
Chuck

[1] 
http://docs.python-requests.org/en/latest/user/advanced/#blocking-or-non-blocking

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Better handling of lists in Heat - a proposal to add a map function

2014-04-04 Thread Zane Bitter

On 19/02/14 02:48, Clint Byrum wrote:

Since picking up Heat and trying to think about how to express clusters
of things, I've been troubled by how poorly the CFN language supports
using lists. There has always been the Fn::Select function for
dereferencing arrays and maps, and recently we added a nice enhancement
to HOT to allow referencing these directly in get_attr and get_param.

However, this does not help us when we want to do something with all of
the members of a list.

In many applications I suspect the template authors will want to do what
we want to do now in TripleO. We have a list of identical servers and
we'd like to fetch the same attribute from them all, join it with other
attributes, and return that as a string.

The specific case is that we need to have all of the hosts in a cluster
of machines addressable in /etc/hosts (please, Designate, save us,
eventually. ;). The way to do this if we had just explicit resources
named NovaCompute0, NovaCompute1, would be:

   str_join:
     - "\n"
     - - str_join:
           - ' '
           - get_attr:
               - NovaCompute0
               - networks.ctlplane.0
           - get_attr:
               - NovaCompute0
               - name
       - str_join:
           - ' '
           - get_attr:
               - NovaCompute1
               - networks.ctlplane.0
           - get_attr:
               - NovaCompute1
               - name

Now, what I'd really like to do is this:

map:
  - str_join:
      - "\n"
      - - str_join:
            - ' '
            - get_attr:
                - "$1"
                - networks.ctlplane.0
            - get_attr:
                - "$1"
                - name
  - - NovaCompute0
    - NovaCompute1

This would be helpful for the instances of resource groups too, as we
can make sure they return a list. The above then becomes:


map:
  - str_join:
      - "\n"
      - - str_join:
            - ' '
            - get_attr:
                - "$1"
                - networks.ctlplane.0
            - get_attr:
                - "$1"
                - name
  - get_attr:
      - NovaComputeGroup
      - member_resources

Thoughts on this idea? I will throw together an implementation soon but
wanted to get this idea out there into the hive mind ASAP.


Apparently I read this at the time, but completely forgot about it. 
Sorry about that! Since it has come up again in the context of the 
"TripleO Heat templates and merge.py" thread, allow me to contribute my 2c.


Without expressing an opinion on this proposal specifically, consensus 
within the Heat core team has been heavily -1 on any sort of for-each 
functionality. I'm happy to have the debate again (and TBH I don't 
really know what the right answer is), but I wouldn't consider the lack 
of comment on this as a reliable indicator of lazy consensus in favour; 
equivalent proposals have been considered and rejected on multiple 
occasions.


Since it looks like TripleO will soon be able to move over to using 
AutoscalingGroups (or ResourceGroups, or something) for groups of 
similar servers, maybe we could consider baking this functionality into 
Autoscaling groups instead of as an intrinsic function.


For example, when you do get_attr on an autoscaling resource it could 
fetch the corresponding attribute from each member of the group and 
return them as a list. (It might be wise to prepend "Output." or 
something similar - maybe "Members." - to the attribute names, as 
AWS::CloudFormation::Stack does, so that attributes of the autoscaling 
group itself can remain in a separate namespace.)


Since members of your NovaComputeGroup will be nested stacks anyway 
(using ResourceGroup or some equivalent feature - preferably autoscaling 
with rolling updates), in the case above you'd define in the scaled 
template:


  outputs:
    hosts_entry:
      description: An /etc/hosts entry for the NovaComputeServer
      value:
        - str_join:
            - ' '
            - - get_attr:
                  - NovaComputeServer
                  - networks
                  - ctlplane
                  - 0
              - get_attr:
                  - NovaComputeServer
                  - name

And then in the main template (containing the autoscaling group):

str_join:
  - "\n"
  - get_attr:
      - NovaComputeGroup
      - Members.hosts_entry

would give the same output as your example would.

IMHO we should do something like this regardless of whether it solves 
your use case, because it's fairly easy, requires no changes to the 
template format, and users have been asking for ways to access e.g. a 
list of IP addresses from a scaling group. That said, it seems very 
likely that making the other changes required for TripleO to get rid of 
merge.py (i.e. switching to scaling groups of templates instead of by 
multiplying resources in templates) will make this a viable solution for 
TripleO's use case as well.


cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Chuck Thier
I think I have worked out the performance issues with eventlet and Requests
with most of it being that swiftclient needs to make use of
requests.session to re-use connections, and there are likely other areas
there that we can make improvements.
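For anyone curious, a minimal sketch of that connection re-use, assuming a
placeholder endpoint and token (this is the plain requests.Session pattern,
not swiftclient's actual code):

import requests

# One session shared across calls so repeated requests to the same host go
# through a pooled keep-alive connection instead of a new socket each time.
session = requests.Session()
session.headers.update({'X-Auth-Token': 'REPLACE_WITH_TOKEN'})


def head_object(base_url, container, obj):
    # base_url is a placeholder such as http://swift.example.com/v1/AUTH_x
    resp = session.head('%s/%s/%s' % (base_url, container, obj))
    resp.raise_for_status()
    return resp.headers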

Now on to expect: 100-continue support, has anyone else looked into that?

--
Chuck


On Fri, Apr 4, 2014 at 9:41 AM, Chuck Thier  wrote:

> Howdy,
>
> Now that swift has aligned with the other projects to use requests in
> python-swiftclient, we have lost a couple of features.
>
> 1.  Requests doesn't support expect: 100-continue.  This is very useful
> for services like swift or glance where you want to make sure a request can
> continue before you start uploading GBs of data (for example find out that
> you need to auth).
>
> 2.  Requests doesn't play nicely with eventlet or other async frameworks
> [1].  I noticed this when suddenly swift-bench (which uses swiftclient)
> wasn't performing as well as before.  This also means that, for example, if
> you are using keystone with swift, the auth requests to keystone will block
> the proxy server until they complete, which is also not desirable.
>
> Does anyone know if these issues are being addressed, or begun working on
> them?
>
> Thanks,
>
> --
> Chuck
>
> [1]
> http://docs.python-requests.org/en/latest/user/advanced/#blocking-or-non-blocking
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-04 Thread Clint Byrum
Excerpts from Michael Elder's message of 2014-04-04 07:16:55 -0700:
> Opened in Launchpad: https://bugs.launchpad.net/heat/+bug/1302624
> 
> I still have concerns though about the design approach of creating a new 
> project for every stack and new users for every resource. 
> 
> If I provision 1000 patterns a day with an average of 10 resources per 
> pattern, you're looking at 10,000 users per day. How can that scale? 
> 

If that can't scale, then keystone is not viable at all. I like to think
we can scale keystone to the many millions of users level.

> How can we ensure that all stale projects and users are cleaned up as 
> instances are destroyed? When users choose to go through horizon or nova to 
> tear down instances, what cleans up the project & users associated with 
> that heat stack? 
> 

So, they created these things via Heat, but have now left the dangling
references in Heat, and expect things to work properly?

If they create it via Heat, they need to delete it via Heat.

> Keystone defines the notion of tokens to support authentication, why 
> doesn't the design provision and store a token for the stack and its 
> equivalent management? 
> 

Tokens are _authentication_, not _authorization_. For the latter, we
need to have a way to lock down access to an individual resource in
Heat. This allows putting secrets in deployments and knowing that only
the instance which has been deployed to will have access to the secrets.
I do see an optimization possible, which is to just create a user for the
box that is given access to any deployments on the box. That would make
sense if users are going to create many many deployments per server. But
even at 10 per server, having 10 users is simpler than trying to manage
shared users and edit their authorization rules.

Now, I actually think that OAUTH tokens _are_ intended to be authorization
as well as authentication, so that is probably where the focus should
be long term. But really, you're talking about the same thing: a single
key lookup in keystone.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Jan Provazník

On 04/03/2014 01:02 PM, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jamomir Coufal for removal from -core


+1 to all

Jan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Clint Byrum
Excerpts from Robert Collins's message of 2014-04-03 04:02:20 -0700:
> Getting back in the swing of things...
> 
> Hi,
> like most OpenStack projects we need to keep the core team up to
> date: folk who are not regularly reviewing will lose context over
> time, and new folk who have been reviewing regularly should be trusted
> with -core responsibilities.
> 
> In this months review:
>  - Dan Prince for -core
>  - Jordan O'Mara for removal from -core
>  - Jiri Tomasek for removal from -core
>  - Jamomir Coufal for removal from -core
> 


+1 for all changes.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-04 Thread Meghal Gosalia
I am fine with taking the approach of the user passing multiple availability
zones AZ1,AZ2 if he wants the VM to be in the intersection of AZ1 and AZ2.
It will be cleaner.

But a similar approach should also be used while setting the
default_scheduling_zone.

Since we will not be able to add a host to multiple zones, the only way to
guarantee even distribution across zones when the user does not pass any
zone is to allow multiple zones in the default_scheduling_zone param.

Thanks,
Meghal

On Apr 4, 2014, at 2:38 AM, Sylvain Bauza <sylvain.ba...@gmail.com> wrote:




2014-04-04 10:30 GMT+02:00 Sylvain Bauza <sylvain.ba...@gmail.com>:
Hi all,



2014-04-03 18:47 GMT+02:00 Meghal Gosalia <meg...@yahoo-inc.com>:

Hello folks,

Here is the bug [1] which is currently not allowing a host to be part of two 
availability zones.
This bug was targeted for havana.

The fix in the bug was made because it was assumed
that openstack does not support adding hosts to two zones by design.

The assumption was based on the fact that ---
if hostX is added to zoneA as well as zoneB,
and if you boot a vm vmY passing zoneB in boot params,
nova show vmY still returns zoneA.

In my opinion, we should fix the case of nova show
rather than changing aggregate api to not allow addition of hosts to multiple 
zones.

I have added my comments in comments #7 and #9 on that bug.

Thanks,
Meghal

[1] Bug - https://bugs.launchpad.net/nova/+bug/1196893





Thanks for the pointer, now I see why the API prevents a host from being added
to a 2nd aggregate if there is a different AZ. Unfortunately, this patch missed
the fact that aggregate metadata can be modified once the aggregate is created,
so we should add a check when updating metadata in order to cover all corner
cases.

So, IMHO, it's worth providing a patch for API consistency so that we enforce
the fact that a host should be in only one AZ (but possibly 2 or more
aggregates) and see how we can offer users the ability to provide 2 distinct
AZs when booting.

Does everyone agree ?




Well, I'm replying to myself. The corner case is even trickier. I missed this 
patch [1] which already checks that when updating an aggregate to set an AZ, 
its hosts are not already part of another AZ. So, indeed, the coverage is 
already there... except for one thing :

If an operator creates an aggregate with an AZ set to the default AZ defined in
nova.conf and adds a host to this aggregate, nova availability-zone-list does
show the host as being part of this default AZ (normal behaviour). If we then
create an aggregate 'foo' without an AZ, add the same host to that aggregate,
and then update the metadata of the aggregate to set an AZ 'foo', the AZ check
won't notice that the host is already part of an AZ and will allow the host to
be part of two distinct AZs.

Proof here : http://paste.openstack.org/show/75066/
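For clarity, here is a rough sketch of the kind of extra check I'm describing
(illustrative dict-based attribute names, not the actual Nova aggregate API):

def validate_az_metadata_update(aggregate, new_az, all_aggregates):
    """Reject setting an AZ on an aggregate if any of its hosts already
    belongs to a different AZ through another aggregate."""
    for host in aggregate['hosts']:
        for other in all_aggregates:
            if other['id'] == aggregate['id'] or host not in other['hosts']:
                continue
            other_az = other['metadata'].get('availability_zone')
            if other_az and other_az != new_az:
                raise ValueError('Host %s is already in availability zone %s'
                                 % (host, other_az))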

I'm on that bug.
-Sylvain

[1] : https://review.openstack.org/#/c/36786
-Sylvain

On Apr 3, 2014, at 9:05 AM, Steve Gordon <sgor...@redhat.com> wrote:

- Original Message -

Currently host aggregates are quite general, but the only ways for an
end-user to make use of them are:

1) By making the host aggregate an availability zones (where each host
is only supposed to be in one availability zone) and selecting it at
instance creation time.

2) By booting the instance using a flavor with appropriate metadata
(which can only be set up by admin).


I would like to see more flexibility available to the end-user, so I
think we should either:

A) Allow hosts to be part of more than one availability zone (and allow
selection of multiple availability zones when booting an instance), or

While changing to allow hosts to be in multiple AZs changes the concept from an 
operator/user point of view I do think the idea of being able to specify 
multiple AZs when booting an instance makes sense and would be a nice 
enhancement for users working with multi-AZ environments - "I'm OK with this 
instance running in AZ1 and AZ2, but not AZ*".

-Steve


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][glance][heat][neutron][nova] Quick fix for "upstream-translation-update Jenkins job failing" (bug 1299349)

2014-04-04 Thread Andreas Jaeger
On 04/04/2014 05:57 PM, Dolph Mathews wrote:
> tl;dr:
> 
>   $ python clean_po.py PROJECT/locale/
>   $ git commit
> 
> The comments on bug 1299349 are already quite long, so apparently this
> got lost. To save everyone some time, the fix is as easy as above. So
> what's clean_po.py?
> 
> Devananda van der Veen (devananda) posted a handy script to fix the
> issue in a related bug [1], to which I added a bit more automation [2]
> because I'm lazy and stuck it into a gist [3]. Grab a copy, and run it
> on milestone-proposed (or wherever necessary). It'll delete a bunch of
> lines from *.po files, and you can commit the result as Closes-Bug.
> 
> Thanks, Devananda!
> 
> [1] https://bugs.launchpad.net/ironic/+bug/1298645/comments/2
> [2] https://bugs.launchpad.net/keystone/+bug/1299349/comments/25
> [3] https://gist.github.com/dolph/9915293

thanks, Dolph.

I've been working on a check for this (with Clark Boylan's help) so that
this kind of duplicate cannot get merged in again. During the review, I
came upon quite a few situations that I did not anticipate before.

The IRC discussion we had let me create a patch that enhances the pep8
pipeline - instead of creating a new separate job - since pep8 already
contains some smaller patches and we don't need to start another VM.

Looking at the reviews, I've improved the patch so that it should work
now on Mac OS X (David Stanek updated 84211 so that it works there).

The patch now runs msgfmt and we thus need to require it. It's available
in our CI infrastructure but not available on some users' machines. In
https://review.openstack.org/#/c/85123/3/tox.ini it was suggested to
document the requirement on msgfmt in nova's doc/source/devref.

The patch adds the following command to tox.ini (plus some tox sugar):
bash -c "find nova -type f -regex '.*\.pot?' -print0 | \
 xargs -0 -n 1 msgfmt --check-format -o /dev/null"

This really needs to use "bash -c", otherwise tox will not execute it.
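
For those curious, a rough Python equivalent of what that one-liner does
(illustration only, not part of the proposed patch; it assumes GNU gettext's
msgfmt is installed):

import os
import subprocess
import sys

def check_po_files(root='nova'):
    rc = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(('.po', '.pot')):
                path = os.path.join(dirpath, name)
                # --check-format validates printf-style placeholders in
                # the translations against the original strings.
                if subprocess.call(['msgfmt', '--check-format',
                                    '-o', os.devnull, path]) != 0:
                    rc = 1
    return rc

if __name__ == '__main__':
    sys.exit(check_po_files())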

There was a concern by Sean Dague about using a pipeline (see
https://review.openstack.org/#/c/83961/3/tox.ini). Sean, do you have a
pointer on how you like to see this done?

Could I get some advice on how to move this forward in the same way for
all projects, please? Also, testing on OS X is appreciated.

Basically my questions are:
* Should we run this as part of pep8 or is there a better place?
* Is there a better way to implement the above command?

I'll try to implement a solution if nobody beats me to it ;)

thanks,
Andreas


Patches:
https://review.openstack.org/#/c/85123/
https://review.openstack.org/#/c/84239/
https://review.openstack.org/#/c/84236/
https://review.openstack.org/#/c/84211/
https://review.openstack.org/#/c/84207/
https://review.openstack.org/#/c/85135/
https://review.openstack.org/#/c/84233/
https://review.openstack.org/#/c/83954/
https://review.openstack.org/#/c/84226/


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Jiří Stránský

On 3.4.2014 13:02, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this months review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.


+1 to all.

Jirka


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-04 Thread Stan Lagun
On Fri, Apr 4, 2014 at 9:05 PM, Clint Byrum  wrote:

> IMO that is not really true and trying to stick all these databases into
> one "SQL database" interface is not a use case I'm interested in
> pursuing.
>

Indeed. "Any SQL database" is a useless interface. What I was trying to say
is that some apps may work just on any MySQL while others require some
specific version or variation or even impose some constraints on license or
underlying operating system. One possible solution was to have some sort of
interface hierarchy for that. Even better solution would be that all such
properties be declared somewhere in HOT so that consumer could say not just
"I require MySQL-compatible template" but "I require MySQL-compatible
template with version >= 5.0 and clustered = True". Probably you can come
with better example for this. Though interface alone is a good starting
point.
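
To make this a bit more concrete, here is a purely hypothetical sketch of
that kind of constraint matching. None of the names or property keys below
come from an existing Murano or Heat API; they only illustrate the idea:

def matches(requirement, candidate):
    if candidate['interface'] != requirement['interface']:
        return False
    props = candidate.get('properties', {})
    return all(key in props and check(props[key])
               for key, check in requirement['constraints'].items())

requirement = {
    'interface': 'My::Something::MySQL',
    'constraints': {
        'version': lambda v: v >= (5, 0),
        'clustered': lambda c: c is True,
    },
}

candidates = [
    {'name': 'TroveMySQL', 'interface': 'My::Something::MySQL',
     'properties': {'version': (5, 6), 'clustered': False}},
    {'name': 'GaleraMySQL', 'interface': 'My::Something::MySQL',
     'properties': {'version': (5, 5), 'clustered': True}},
    {'name': 'MongoDB', 'interface': 'My::Something::MongoDB',
     'properties': {'version': (2, 4)}},
]

print([c['name'] for c in candidates if matches(requirement, c)])
# -> ['GaleraMySQL']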

So for instance there is the non-Neutron LBaaS and the Neutron LBaaS, and
> both have their
> merits for operators, but are basically identical from an application
> standpoint.


While conforming to the same interface from the consumer's point of view,
different load balancers have many configuration options (template
parameters), and many of them are specific to a particular implementation
(otherwise all of them would be functionally equal). This needs to be
addressed somehow.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-04 Thread Clint Byrum
Excerpts from Tomas Sedovic's message of 2014-04-04 08:47:46 -0700:
> Hi All,
> 
> I was wondering if the time has come to document what exactly are we
> doing with tripleo-heat-templates and merge.py[1], figure out what needs
> to happen to move away and raise the necessary blueprints on Heat and
> TripleO side.
> 

Yes indeed, it is time.

> (merge.py is a script we use to build the final TripleO Heat templates
> from smaller chunks)
> 
> There probably isn't an immediate need for us to drop merge.py, but its
> existence either indicates deficiencies within Heat or our unfamiliarity
> with some of Heat's features (possibly both).
> 
> I worry that the longer we stay with merge.py the harder it will be to
> move forward. We're still adding new features and fixing bugs in it (at
> a slow pace but still).
>

Merge.py is where we've amassed our debt. We'll pay it back by moving
features into Heat. A huge debt payment is coming in the form of
software config migration, which you mention at the bottom of this
message.

> Below is my understanding of the main merge.py functionality and a rough
> plan of what I think might be a good direction to move to. It is almost
> certainly incomplete -- please do poke holes in this. I'm hoping we'll
> get to a point where everyone's clear on what exactly merge.py does and
> why. We can then document that and raise the appropriate blueprints.
> 
> 
> ## merge.py features ##
> 
> 
> 1. Merging parameters and resources
> 
> Any uniquely-named parameters and resources from multiple templates are
> put together into the final template.
> 
> If a resource of the same name is in multiple templates, an error is
> raised. Unless it's of a whitelisted type (nova server, launch
> configuration, etc.) in which case they're all merged into a single
> resource.
> 
> For example: merge.py overcloud-source.yaml swift-source.yaml
> 
> The final template has all the parameters from both. Moreover, these two
> resources will be joined together:
> 
>  overcloud-source.yaml 
> 
>   notCompute0Config:
> Type: AWS::AutoScaling::LaunchConfiguration
> Properties:
>   ImageId: '0'
>   InstanceType: '0'
> Metadata:
>   admin-password: {Ref: AdminPassword}
>   admin-token: {Ref: AdminToken}
>   bootstack:
> public_interface_ip:
>   Ref: NeutronPublicInterfaceIP
> 
> 
>  swift-source.yaml 
> 
>   notCompute0Config:
> Type: AWS::AutoScaling::LaunchConfiguration
> Metadata:
>   swift:
> devices:
>   ...
> hash: {Ref: SwiftHashSuffix}
> service-password: {Ref: SwiftPassword}
> 
> 
> The final template will contain:
> 
>   notCompute0Config:
> Type: AWS::AutoScaling::LaunchConfiguration
> Properties:
>   ImageId: '0'
>   InstanceType: '0'
> Metadata:
>   admin-password: {Ref: AdminPassword}
>   admin-token: {Ref: AdminToken}
>   bootstack:
> public_interface_ip:
>   Ref: NeutronPublicInterfaceIP
>   swift:
> devices:
>   ...
> hash: {Ref: SwiftHashSuffix}
> service-password: {Ref: SwiftPassword}
> 
> 
> We use this to keep the templates more manageable (instead of having one
> huge file) and also to be able to pick the components we want: instead
> of `undercloud-bm-source.yaml` we can pick `undercloud-vm-source` (which
> uses the VirtualPowerManager driver) or `ironic-vm-source`.
> 

The merging of white-listed types is superseded entirely by
OS::Heat::StructuredConfig and OS::Heat::StructuredDeployment. I would
move that we replace all uses of it with those, and deprecate the
feature.

> 
> 
> 2. FileInclude
> 
> If you have a pseudo resource with the type of `FileInclude`, we will
> look at the specified Path and SubKey and put the resulting dictionary in:
> 
>  overcloud-source.yaml 
> 
>   NovaCompute0Config:
> Type: FileInclude
> Path: nova-compute-instance.yaml
> SubKey: Resources.NovaCompute0Config
> Parameters:
>   NeutronNetworkType: "gre"
>   NeutronEnableTunnelling: "True"
> 
> 
>  nova-compute-instance.yaml 
> 
>   NovaCompute0Config:
> Type: AWS::AutoScaling::LaunchConfiguration
> Properties:
>   InstanceType: '0'
>   ImageId: '0'
> Metadata:
>   keystone:
> host: {Ref: KeystoneHost}
>   neutron:
> host: {Ref: NeutronHost}
>   tenant_network_type: {Ref: NeutronNetworkType}
>   network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
>   bridge_mappings: {Ref: NeutronBridgeMappings}
>   enable_tunneling: {Ref: NeutronEnableTunnelling}
>   physical_bridge: {Ref: NeutronPhysicalBridge}
>   public_interface: {Ref: NeutronPublicInterface}
> service-password:
>   Ref: NeutronPassword
>   admin-password: {Ref: AdminPassword}
> 
> The result:
> 
>   NovaCompute0Config:
> Type: AWS::AutoScaling::LaunchConfiguration
> Properties:
>   InstanceType: '0'
>   

Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Ezra Silvera
> Ironic's responsibility ends where the host OS begins. Ironic is a bare 
metal provisioning service, not a configuration management service.

I agree with the above, but just to clarify, I would say that Ironic 
shouldn't *interact* with the host OS once it has booted. Obviously it can 
still perform BM tasks underneath the OS (while it's up and running) if 
needed (e.g., force shutdown through IPMI, etc.).





Ezra


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Ling Gao
>seems that this discussion is splitted in 2 threads
Lucas,
 That's because I added a subject when I responded. :-)

Ling Gao



From:   Lucas Alvares Gomes 
To: "OpenStack Development Mailing List (not for usage questions)" 
, 
Date:   04/04/2014 01:16 PM
Subject:Re: [openstack-dev] [Ironic][Agent]




There are lots of configuration management agents already out there (chef? 
puppet? salt? ansible? ... the list is pretty long these days...) which 
you can bake into the images that you deploy with Ironic, but I'd like to 
be clear that, in my opinion, Ironic's responsibility ends where the host 
OS begins. Ironic is a bare metal provisioning service, not a 
configuration management service.

What you're suggesting is similar to saying, "we want to run an agent in 
every KVM VM in our cloud," except most customers would clearly object to 
this. The only difference here is that you (and tripleo) are the deployer 
*and* the user of Ironic; that's a special case, but not the only use case 
which Ironic is servicing.


+1 (already agreed with something similar in another thread[1], seems that 
this discussion is split into 2 threads)

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2014-April/031896.html 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Clint Byrum
Excerpts from Vladimir Kozhukalov's message of 2014-04-04 05:19:41 -0700:
> Hello, everyone,
> 
> I'd like to involve more people to express their opinions about the way how
> we are going to run Ironic-python-agent. I mean should we run it with root
> privileges or not.
> 
> From the very beginning agent is supposed to run under ramdisk OS and it is
> intended to make disk partitioning, RAID configuring, firmware updates and
> other stuff according to installing OS. Looks like we always will run agent
> with root privileges. Right? There are no reasons to limit agent
> permissions.
> 
> On the other hand, it is easy to imagine a situation when you want to run
> agent on every node of your cluster after installing OS. It could be useful
> to keep hardware info consistent (for example, many hardware configurations
> allow one to add hard drives in run time). It also could be useful for "on
> the fly" firmware updates. It could be useful for "on the fly"
> manipulations with lvm groups/volumes and so on.
> 
> Frankly, I am not even sure that we need to run agent with root privileges
> even in ramdisk OS, because, for example, there are some system default
> limitations such as number of connections, number of open files, etc. which
> are different for root and ordinary user and potentially can influence
> agent behaviour. Besides, it is possible that some vulnerabilities will be
> found in the future and they potentially could be used to compromise agent
> and damage hardware configuration.
> 
> Consequently, it is better to run agent under ordinary user even under
> ramdisk OS and use rootwrap if agent needs to run commands with root
> privileges. I know that rootwrap has some performance issues
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html
> but it is still pretty suitable for the ironic agent use case.
> 

My opinion: If you are going to listen for connections, do so on a low
port as root, but then drop privs immediately thereafter. Run things
with sudo, not rootwrap, as the flexibility will just become a burden
if you ever do need to squeeze more performance out, and it won't be
much of a boon to what are likely to be very straightforward commands.
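
Something along these lines, for example (a rough sketch only; the
unprivileged user/group name and the port are placeholders, not anything the
agent actually defines today):

import grp
import os
import pwd
import socket

def bind_then_drop_privs(port=443, user='ironic', group='ironic'):
    # Bind while still root: ports below 1024 need elevated privileges.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('0.0.0.0', port))
    sock.listen(128)

    # Drop supplementary groups, then the gid, then the uid -- setuid()
    # must come last or we lose the right to change the gid afterwards.
    os.setgroups([])
    os.setgid(grp.getgrnam(group).gr_gid)
    os.setuid(pwd.getpwnam(user).pw_uid)
    return sock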

Finally, as others have said, this is for the deploy ramdisk only. For
the case where you want to do an on-the-fly firmware update, there are
a bazillion options to do remote execution. Ironic is for the case where
you don't have on-the-fly capabilities.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Lucas Alvares Gomes
> There are lots of configuration management agents already out there (chef?
> puppet? salt? ansible? ... the list is pretty long these days...) which you
> can bake into the images that you deploy with Ironic, but I'd like to be
> clear that, in my opinion, Ironic's responsibility ends where the host OS
> begins. Ironic is a bare metal provisioning service, not a configuration
> management service.
>
> What you're suggesting is similar to saying, "we want to run an agent in
> every KVM VM in our cloud," except most customers would clearly object to
> this. The only difference here is that you (and tripleo) are the deployer
> *and* the user of Ironic; that's a special case, but not the only use case
> which Ironic is servicing.
>
>
+1 (already agreed with something similar in another thread[1], seems that
this discussion is split into 2 threads)

[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-April/031896.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Jim Rollenhagen
On April 4, 2014 at 9:12:56 AM, Devananda van der Veen 
(devananda@gmail.com) wrote:
Ironic's responsibility ends where the host OS begins. Ironic is a bare metal 
provisioning service, not a configuration management service.
+1

// jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-04 Thread Clint Byrum
Excerpts from Stan Lagun's message of 2014-04-04 02:54:05 -0700:
> Hi Steve, Thomas
> 
> I'm glad the discussion is so constructive!
> 
> If we add type interfaces to HOT this may do the job.
> Applications in AppCatalog need to be portable across OpenStack clouds.
> Thus if we use some globally-unique type naming system applications could
> identify their dependencies in unambiguous way.
> 
> We also would need to establish relations between between interfaces.
> Suppose there is My::Something::Database interface and 7 compatible
> materializations:
> My::Something::TroveMySQL
> My::Something::GaleraMySQL
> My::Something::PostgreSQL
> My::Something::OracleDB
> My::Something::MariaDB
> My::Something::MongoDB
> My::Something::HBase
> 
> There are apps (say, SQLAlchemy-based apps) that are fine with any
> relational DB. In that case all templates except for MongoDB and HBase
> should be matched. There are apps that are designed to work with MySQL only.
> In that case only TroveMySQL, GaleraMySQL and MariaDB should be matched.
> There are applications that use PL/SQL and thus require OracleDB (there can
> be several Oracle implementations as well). There are also applications
> (Marconi and Ceilometer are good examples) that can use both SQL and
> NoSQL databases. So conformance to the Database interface is not enough and
> some sort of interface hierarchy is required.

IMO that is not really true and trying to stick all these databases into
one "SQL database" interface is not a use case I'm interested in
pursuing.

Far more interesting is having each one be its own interface which apps
can assert that they support or not.

> 
> Another thing that we need to consider is that even compatible
> implementations may have different sets of parameters. For example, a
> clustered-HA PostgreSQL implementation may require additional parameters
> besides those needed for the plain single-instance variant. A template that
> consumes *any* PostgreSQL will not be aware of those additional parameters.
> Thus they need to be dynamically added to the environment's input parameters,
> and the resource consumer needs to be patched to pass those parameters to the
> actual implementation.
> 

I think this is a middleware pipe-dream and the devil is in the details.

Just give users the ability to be specific, and then generic patterns
will arise from those later on.

I'd rather see a focus on namespacing and relative composition, so that
providers of the same type that actually do use the same interface but
are alternate implementations will be able to be consumed. So for instance
there is the non-Neutron LBaaS and the Neutron LBaaS, and both have their
merits for operators, but are basically identical from an application
standpoint. That seems a better guiding use case than different databases.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Apr 4 2014

2014-04-04 Thread Anne Gentle
Please take this survey to understand the obstacles to doc contributions.
We are very interested in increasing the doc contributions and making our
doc processes work well for OpenStack as it scales. I believe we can take
actions based on the questions here; please send the link to OpenStack
contributors as widely as you can!

https://docs.google.com/forms/d/136-BssH-OxjVo8vNoOD-gW4x8fDFpvixbgCfeV1w_do/viewform

1. In review and merged this past week:
We've reviewed and merged nearly 60 patches this week. I would like to see
this pace increase however as we need to keep working on the backlog of
over 350 doc bugs in openstack-manuals alone. We are definitely patching
doc bugs with the items that landed recently, let's keep it up!

2. High priority doc work:

Install Guide, install guide, install guide. We are testing and updating
and Matt Kassawara has been leading the way to get Networking scenarios
tested and documented. Matt's asking an important question on the -docs
list: Shall the docs support both OVS and ML2 in the installation guide
(with removal of OVS in Juno instead of Icehouse?) Please join in at
http://lists.openstack.org/pipermail/openstack-docs/2014-April/004204.html

Doc bugs generated from DocImpact are also a high priority. Please refer to
https://launchpad.net/openstack-manuals/+milestone/icehouse. As Tom noted,
we need to fix six bugs a day to make our docs bug list goals. There are
still 118 bugs awaiting your input. Even if you can only triage or comment
on a bug, every effort helps.

If you prefer to work on bugs against API docs, refer to
https://launchpad.net/openstack-api-site/+milestone/icehouse.

Release notes are a high priority. It's release time, can you tell?

3. Doc work going on that I know of:

ML2 driver documentation for compute, controller, and network nodes in
Install Guides
ISO Image documentation
Trove install documentation
Scheduler filter documentation

4. New incoming doc requests:

Lots of users want Trove documentation.

5. Doc tools updates:

We will freeze on 0.10.0 of openstack-doc-tools and 1.15.0 of
clouddocs-maven-plugin for the Icehouse doc builds.

6. Other doc news:

I've proposed a cross-project session for the Summit at
http://summit.openstack.org/cfp/details/204. I'd love to hear your thoughts
on these ideas -- at the session or on the list.
We have different docs for different audiences:
cross-project docs for deploy/install/config: openstack-manuals
API docs references, standards: api-site and others

These are written with the git/gerrit method. I want to talk about standing
up a new docs site that serves these types of people with these
requirements:

Experience:
Solution must be completely open source
Content must be available online
Content must be indexable by search engines
Content must be searchable
Content should be easily cross-linked by topic and type (priority:low)
Enable comments, ratings, and analytics (or ask.openstack.org integration)
(priority:low)

Distribution:
Readers must get versions of technical content specific to version of
product
Modular authoring of content
Graphic and text content should be stored as files, not in a database
Consumers must get technical content in PDF, html, video, audio
Workflow for review and approval prior to publishing content

Authoring:
Content must be re-usable across authors and personas (Single source)
Must support many content authors with multiple authoring tools
Existing content must migrate smoothly
All content versions need to be comparable (diff) across versions
Content must be organizationally segregated based on user personas
Draft content must be reviewable in HTML
Link maintenance - Links must update with little manual maintenance to
avoid broken links and link validation
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Chuck Thier
On Fri, Apr 4, 2014 at 11:18 AM, Donald Stufft  wrote:

>
> On Apr 4, 2014, at 10:56 AM, Chuck Thier  wrote:
>
> On Fri, Apr 4, 2014 at 9:44 AM, Donald Stufft  wrote:
>
>> requests should work fine if you use the eventlet monkey patch on the
>> socket module prior to importing requests.
>>
>
> That's what I had hoped as well (and is what swift-bench did already), but
> it performs the same if I monkey patch or not.
>
> --
> Chuck
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> Is it running inside of a eventlet.spawn thread?
>

It looks like I missed something the first time, as I tried again and got
slightly different behavior. Monkey patching the socket helps, but it is
still far slower than it was before.

Currently, swift-bench running with requests does about 25 requests/second
for PUTs and 50 requests/second for GETs. The same test without requests
does 50 requests/second for PUTs and 200 requests/second for GETs.

I'll try to keep digging to figure out why there is such a performance
difference, but if anyone else has had experience tuning performance with
requests, I would appreciate any input.
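
For reference, the pattern being exercised is roughly the following (a
simplified sketch, not the actual swift-bench code; the URLs and pool size
here are made up):

import eventlet
eventlet.monkey_patch()  # must run before requests/urllib3 get imported

import requests

def fetch(url):
    return requests.get(url).status_code

pool = eventlet.GreenPool(10)
urls = ['http://127.0.0.1:8080/v1/AUTH_test/cont/obj%d' % i
        for i in range(100)]
for status in pool.imap(fetch, urls):
    print(status)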

--
Chuck
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Tenant quotas can now be updated during a benchmark

2014-04-04 Thread Boris Pavlovic
Bruno,

Btw great idea add benchmark scenarios for quotas as well!

Best regards,
Boris Pavlovic


On Fri, Apr 4, 2014 at 7:28 PM, Bruno Semperlotti <
bruno.semperlo...@gmail.com> wrote:

> Hi Joshua,
>
> Quotas will not be expanded during the scenario; they will be updated
> *prior to* the scenario with the requested values as the context of this
> scenario. If the values are too low, the scenario will continue to fail.
> This change does not allow benchmarking of quota update time.
>
> Regards,
>
>
> --
> Bruno Semperlotti
>
>
> 2014-04-04 0:45 GMT+02:00 Joshua Harlow :
>
>  Cool, so would that mean that once a quota is reached (for whatever
>> reason) and the scenario wants to continue running (instead of failing due
>> to quota issues) that it can expand that quota automatically (for cases
>> where this is needed/necessary). Or is this also useful for benchmarking
>> how fast quotas can be  changed, or is it maybe a combination of both?
>>
>>   From: Boris Pavlovic 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Thursday, April 3, 2014 at 1:43 PM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [rally] Tenant quotas can now be updated
>> during a benchmark
>>
>>   Bruno,
>>
>>  Well done. Finally we have this feature in Rally!
>>
>>
>>  Best regards,
>> Boris Pavlovic
>>
>>
>> On Thu, Apr 3, 2014 at 11:37 PM, Bruno Semperlotti <
>> bruno.semperlo...@gmail.com> wrote:
>>
>>> Hi Rally users,
>>>
>>>  I would like to inform you that the feature allowing to update
>>> tenant's quotas during a benchmark is available with the implementation of
>>> this blueprint:
>>> https://blueprints.launchpad.net/rally/+spec/benchmark-context-tenant-quotas
>>>
>>>  Currently, only Nova and Cinder quotas are supported (Neutron coming
>>> soon).
>>>
>>>  Here a small sample of how to do it:
>>>
>>>  In the json file describing the benchmark scenario, use the "context"
>>> section to indicate quotas for each service. Quotas will be applied for
>>> each generated tenants.
>>>
>>>   {
>>>  "NovaServers.boot_server": [
>>>  {
>>>  "args": {
>>>  "flavor_id": "1",
>>>  "image_id": "6e25e859-2015-4c6b-9940-aa21b2ab8ab2"
>>>  },
>>>  "runner": {
>>>  "type": "continuous",
>>>  "times":100,
>>>  "active_users": 10
>>>  },
>>>  "context": {
>>>  "users": {
>>>  "tenants": 1,
>>>  "users_per_tenant": 1
>>>  },
>>>  *"quotas": {*
>>>  *"nova": {*
>>>  *"instances": 150,*
>>>  *"cores": 150,*
>>>  *"ram": -1*
>>>  *}*
>>>  *}*
>>>  }
>>>  }
>>>  ]
>>>  }
>>>
>>>
>>>  Following, the list of supported quotas:
>>> *nova:*
>>>  instances, cores, ram, floating-ips, fixed-ips, metadata-items,
>>> injected-files, injected-file-content-bytes, injected-file-path-bytes,
>>> key-pairs, security-groups, security-group-rules
>>>
>>>  *cinder:*
>>>  gigabytes, snapshots, volumes
>>>
>>>  *neutron (coming soon):*
>>>  network, subnet, port, router, floatingip, security-group,
>>> security-group-rule
>>>
>>>
>>>  Regards,
>>>
>>> --
>>> Bruno Semperlotti
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] Using oslo.cache in keystoneclient.middleware.auth_token

2014-04-04 Thread Doug Hellmann
On Fri, Apr 4, 2014 at 12:22 PM, Dean Troyer  wrote:
> On Fri, Apr 4, 2014 at 10:51 AM, Kurt Griffiths
>  wrote:
>>
>> > It appears the current version of oslo.cache is going to bring in quite
>> >a few oslo libraries that we would not want keystone client to depend on
>> >[1]. Moving the middleware to a separate library would solve that.
>
>
> +++
>
>>
>> I think it makes a lot of sense to separate out the middleware. Would this
>> be a new project under Identity or would it go to Oslo since it would be a
>> shared library among the other programs?
>
>
> I think it really just needs to be a separate repo, similar to how
> keystoneclient is a separate repo but still part of the Keystone project.
> The primary problem being addressed is dependencies and packaging, not
> governance.

Right, that's what I meant.

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] Using oslo.cache in keystoneclient.middleware.auth_token

2014-04-04 Thread Dean Troyer
On Fri, Apr 4, 2014 at 10:51 AM, Kurt Griffiths <
kurt.griffi...@rackspace.com> wrote:

> > It appears the current version of oslo.cache is going to bring in quite
> >a few oslo libraries that we would not want keystone client to depend on
> >[1]. Moving the middleware to a separate library would solve that.
>

+++


> I think it makes a lot of sense to separate out the middleware. Would this
> be a new project under Identity or would it go to Oslo since it would be a
> shared library among the other programs?
>

I think it really just needs to be a separate repo, similar to how
keystoneclient is a separate repo but still part of the Keystone project.
 The primary problem being addressed is dependencies and packaging, not
governance.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] incubator open for juno development

2014-04-04 Thread Doug Hellmann
The oslo incubator is open for changes scheduled for Juno. Please make
sure you have your bugs and blueprints targeted correctly before
submitting patches.

The libraries are still frozen until we have cross-project unit test
jobs working, so please only approve changes related to making those
test jobs work for now.

Doug

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Donald Stufft

On Apr 4, 2014, at 10:56 AM, Chuck Thier  wrote:

> On Fri, Apr 4, 2014 at 9:44 AM, Donald Stufft  wrote:
> requests should work fine if you use the eventlet monkey patch on the socket 
> module prior to importing requests.
> 
> That's what I had hoped as well (and is what swift-bench did already), but it 
> performs the same if I monkey patch or not.
> 
> --
> Chuck
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Is it running inside of a eventlet.spawn thread?

-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-04 Thread Jay Dobies

One thing I'm not seeing handled is cross-app knowledge.

From an earlier post:

> 1) User selects an app which requires a DB
> 2) Murano sees this requirement for DB and do a search in the app
> catalog to find all apps which expose this functionality. Murano uses
> app package definitions for that.
> 3) User select in UI specific DB implementation he wants to use.

For example, take TripleO deploying OpenStack and just look at Nova and 
Glance. Presumably they are two different apps, allowing the user fine 
grained control over how many of each type to deploy and what not.


Both apps would indicate they need a message broker. But they need the 
same message broker, so it's invalid to use QPID for one app and RabbitMQ 
for the other.


Perhaps I'm viewing it wrong, and instead of having different apps 
there's just one big "OpenStack" app, but that still has the same issue.


On 04/04/2014 05:54 AM, Stan Lagun wrote:

Hi Steve, Thomas

I'm glad the discussion is so constructive!

If we add type interfaces to HOT this may do the job.
Applications in AppCatalog need to be portable across OpenStack clouds.
Thus if we use some globally-unique type naming system applications
could identify their dependencies in unambiguous way.

We also would need to establish relations between between interfaces.
Suppose there is My::Something::Database interface and 7 compatible
materializations:
My::Something::TroveMySQL
My::Something::GaleraMySQL
My::Something::PostgreSQL
My::Something::OracleDB
My::Something::MariaDB
My::Something::MongoDB
My::Something::HBase

There are apps (say, SQLAlchemy-based apps) that are fine with any
relational DB. In that case all templates except for MongoDB and HBase
should be matched. There are apps that are designed to work with MySQL only.
In that case only TroveMySQL, GaleraMySQL and MariaDB should be matched.
There are applications that use PL/SQL and thus require OracleDB (there
can be several Oracle implementations as well). There are also
applications (Marconi and Ceilometer are good examples) that can use both
SQL and NoSQL databases. So conformance to the Database interface is
not enough and some sort of interface hierarchy is required.


Along those lines, a variation is if a particular version is needed. 
It's an odd parallel, but think of the differences in running a Python 
2.4 app v. 2.6 or higher. There's a pretty clear line there of things 
that won't run on 2.4 and require higher.


Assuming a parallel in these resources, would we end up with multiple 
MongoDB templates with slightly different names? I think the interface 
hierarchy concept comes into play here because we may need to express 
something like "matches at least".



Another thing that we need to consider is that even compatible
implementations may have different sets of parameters. For example, a
clustered-HA PostgreSQL implementation may require additional parameters
besides those needed for the plain single-instance variant. A template
that consumes *any* PostgreSQL will not be aware of those additional
parameters. Thus they need to be dynamically added to the environment's
input parameters, and the resource consumer needs to be patched to pass
those parameters to the actual implementation.







On Fri, Apr 4, 2014 at 9:53 AM, Thomas Spatzier
mailto:thomas.spatz...@de.ibm.com>> wrote:

Hi Steve,

your indexing idea sounds interesting, but I am not sure it would work
reliably. The kind of matching based on names of parameters and outputs and
internal get_attr uses makes very strong assumptions, and I think there is a
not-so-low risk of false positives. What if the template includes some
internal details that would not affect the matching but still change the
behavior in a way that would break the composition? Or what if a user by
chance built a template that by pure coincidence uses the same parameter
and output names as one of those abstract types that were mentioned, but the
template is simply not built for composition?

I think it would be much cleaner to have an explicit attribute in the
template that says "this template can be used as a realization of type
My::SomeType" and use that for presenting the user choice and building the
environment.

Regards,
Thomas

Steve Baker mailto:sba...@redhat.com>> wrote on
04/04/2014 06:12:38:
 > From: Steve Baker mailto:sba...@redhat.com>>
 > To: openstack-dev@lists.openstack.org

 > Date: 04/04/2014 06:14
 > Subject: Re: [openstack-dev] [Heat] [Murano] [Solum] applications
inthe
cloud
 >
 > On 03/04/14 13:04, Georgy Okrokvertskhov wrote:
 > Hi Steve,
 >
 > I think this is exactly the place where we have a boundary between
 > Murano catalog and HOT.
 >
 > In your example one can use abstract resource type and specify a
 > correct implementation via environment file. This is how it will be
 > done on the final stage in Murano 

Re: [openstack-dev] [Ironic][Agent]

2014-04-04 Thread Devananda van der Veen
On Fri, Apr 4, 2014 at 5:19 AM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> On the other hand, it is easy to imagine a situation when you want to run
> agent on every node of your cluster after installing OS. It could be useful
> to keep hardware info consistent (for example, many hardware configurations
> allow one to add hard drives in run time). It also could be useful for "on
> the fly" firmware updates. It could be useful for "on the fly"
> manipulations with lvm groups/volumes and so on.
>
>
There are lots of configuration management agents already out there (chef?
puppet? salt? ansible? ... the list is pretty long these days...) which you
can bake into the images that you deploy with Ironic, but I'd like to be
clear that, in my opinion, Ironic's responsibility ends where the host OS
begins. Ironic is a bare metal provisioning service, not a configuration
management service.

What you're suggesting is similar to saying, "we want to run an agent in
every KVM VM in our cloud," except most customers would clearly object to
this. The only difference here is that you (and tripleo) are the deployer
*and* the user of Ironic; that's a special case, but not the only use case
which Ironic is servicing.

Regards,
Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] PTL Voting is now open

2014-04-04 Thread Anita Kuno
Elections are underway and will remain open for you to cast your vote
until at least 1300 utc April 11, 2014.

We are having elections for Nova, Neutron, Cinder, Ceilometer,  Heat and
TripleO.

If you are a Foundation individual member and had a commit in one of the
program's projects[0] over the Havana-Icehouse timeframe (April 4, 2013
06:00 UTC to April 4, 2014 05:59 UTC) then you are eligible to vote. You
should find your email with a link to the Condorcet page to cast your
vote in the inbox of your gerrit preferred email[1].

What to do if you don't see the email and have a commit in at least one
of the programs having an election:
 * check the trash of your gerrit Preferred Email address, in case
it went into trash or spam
 * wait a bit and check again, in case your email server is a bit slow
 * find the sha of at least one commit from the program project
repos[0] and email me and Tristan[2] at the above email address. If we
can confirm that you are entitled to vote, we will add you to the voters
list for the appropriate election.

Our democratic process is important to the health of OpenStack, please
exercise your right to vote.

Candidate statements/platforms can be found linked to Candidate names on
this page:
https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014#Candidates

Happy voting,
Anita. (anteaya)

[0] The list of the program projects eligible for electoral status:
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml
[1] Sign into review.openstack.org: Go to Settings > Contact
Information. Look at the email listed as your Preferred Email. That is
where the ballot has been sent.
[2] Anita's email: anteaya at anteaya dot info Tristan's email: tristan
dot cacqueray at enovance dot com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][glance][heat][neutron][nova] Quick fix for "upstream-translation-update Jenkins job failing" (bug 1299349)

2014-04-04 Thread Dolph Mathews
tl;dr:

  $ python clean_po.py PROJECT/locale/
  $ git commit

The comments on bug 1299349 are already quite long, so apparently this got
lost. To save everyone some time, the fix is as easy as above. So what's
clean_po.py?

Devananda van der Veen (devananda) posted a handy script to fix the issue
in a related bug [1], to which I added a bit more automation [2] because
I'm lazy and stuck it into a gist [3]. Grab a copy, and run it on
milestone-proposed (or wherever necessary). It'll delete a bunch of lines
from *.po files, and you can commit the result as Closes-Bug.

Thanks, Devananda!

[1] https://bugs.launchpad.net/ironic/+bug/1298645/comments/2
[2] https://bugs.launchpad.net/keystone/+bug/1299349/comments/25
[3] https://gist.github.com/dolph/9915293
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent

2014-04-04 Thread Lucas Alvares Gomes
On Fri, Apr 4, 2014 at 3:10 PM, Ling Gao  wrote:

> Hello Vladimir,
>  I would prefer an agent-less node, meaning the agent is only used
> under the ramdisk OS to collect hw info, to do firmware updates and to
> install nodes etc. In this sense, the agent running as root is fine. Once
> the node is installed, the agent should be out of the picture. I have been
> working with HPC customers, in that environment they prefer as less memory
> prints as possible. Even as a ordinary tenant, I do not feel secure to have
> some agents running on my node. For the firmware update on the fly, I do
> not know how many customers will trust us doing it while their critical
> application is running. Even they do and ready to do it, Ironic can then
> send an agent to the node through scp/wget as admin/root and quickly do it
> and then kill the agent on the node.   Just my 2 cents.
>
>
+1



>
> From:Vladimir Kozhukalov 
> To:"OpenStack Development Mailing List (not for usage questions)"
> ,
> Date:04/04/2014 08:24 AM
> Subject:[openstack-dev] [Ironic][Agent]
> --
>
>
>
> Hello, everyone,
>
> I'd like to involve more people to express their opinions about the way
> how we are going to run Ironic-python-agent. I mean should we run it with
> root privileges or not.
>
> From the very beginning agent is supposed to run under ramdisk OS and it
> is intended to make disk partitioning, RAID configuring, firmware updates
> and other stuff according to installing OS. Looks like we always will run
> agent with root privileges. Right? There are no reasons to limit agent
> permissions.
>
> On the other hand, it is easy to imagine a situation when you want to run
> agent on every node of your cluster after installing OS. It could be useful
> to keep hardware info consistent (for example, many hardware configurations
> allow one to add hard drives in run time). It also could be useful for "on
> the fly" firmware updates. It could be useful for "on the fly"
> manipulations with lvm groups/volumes and so on.
>
> Frankly, I am not even sure that we need to run agent with root privileges
> even in ramdisk OS, because, for example, there are some system default
> limitations such as number of connections, number of open files, etc. which
> are different for root and ordinary user and potentially can influence
> agent behaviour. Besides, it is possible that some vulnerabilities will be
> found in the future and they potentially could be used to compromise agent
> and damage hardware configuration.
>
> Consequently, it is better to run agent under ordinary user even under
> ramdisk OS and use rootwrap if agent needs to run commands with root
> privileges. I know that rootwrap has some performance issues
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html
> but it is still pretty suitable for the ironic agent use case.
>
> It would be great to hear as many opinions as possible according to this
> case.
>
>
> Vladimir Kozhukalov___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] Using oslo.cache in keystoneclient.middleware.auth_token

2014-04-04 Thread Kurt Griffiths
> It appears the current version of oslo.cache is going to bring in quite
>a few oslo libraries that we would not want keystone client to depend on
>[1]. Moving the middleware to a separate library would solve that.

I think it makes a lot of sense to separate out the middleware. Would this
be a new project under Identity or would it go to Oslo since it would be a
shared library among the other programs?

Kurt G. | @kgriffs

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Meeting minutes

2014-04-04 Thread Dina Belova
Hello stackers!

Here are our meeting minutes:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-04-15.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-04-15.01.txt
Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-04-15.01.log.html

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] [Heat] TripleO Heat Templates and merge.py

2014-04-04 Thread Tomas Sedovic
Hi All,

I was wondering if the time has come to document what exactly are we
doing with tripleo-heat-templates and merge.py[1], figure out what needs
to happen to move away and raise the necessary blueprints on Heat and
TripleO side.

(merge.py is a script we use to build the final TripleO Heat templates
from smaller chunks)

There probably isn't an immediate need for us to drop merge.py, but its
existence either indicates deficiencies within Heat or our unfamiliarity
with some of Heat's features (possibly both).

I worry that the longer we stay with merge.py the harder it will be to
move forward. We're still adding new features and fixing bugs in it (at
a slow pace but still).

Below is my understanding of the main merge.py functionality and a rough
plan of what I think might be a good direction to move to. It is almost
certainly incomplete -- please do poke holes in this. I'm hoping we'll
get to a point where everyone's clear on what exactly merge.py does and
why. We can then document that and raise the appropriate blueprints.


## merge.py features ##


1. Merging parameters and resources

Any uniquely-named parameters and resources from multiple templates are
put together into the final template.

If a resource of the same name is in multiple templates, an error is
raised. Unless it's of a whitelisted type (nova server, launch
configuration, etc.) in which case they're all merged into a single
resource.

For example: merge.py overcloud-source.yaml swift-source.yaml

The final template has all the parameters from both. Moreover, these two
resources will be joined together:

 overcloud-source.yaml 

  notCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  ImageId: '0'
  InstanceType: '0'
Metadata:
  admin-password: {Ref: AdminPassword}
  admin-token: {Ref: AdminToken}
  bootstack:
public_interface_ip:
  Ref: NeutronPublicInterfaceIP


 swift-source.yaml 

  notCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Metadata:
  swift:
devices:
  ...
hash: {Ref: SwiftHashSuffix}
service-password: {Ref: SwiftPassword}


The final template will contain:

  notCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  ImageId: '0'
  InstanceType: '0'
Metadata:
  admin-password: {Ref: AdminPassword}
  admin-token: {Ref: AdminToken}
  bootstack:
public_interface_ip:
  Ref: NeutronPublicInterfaceIP
  swift:
devices:
  ...
hash: {Ref: SwiftHashSuffix}
service-password: {Ref: SwiftPassword}


We use this to keep the templates more manageable (instead of having one
huge file) and also to be able to pick the components we want: instead
of `undercloud-bm-source.yaml` we can pick `undercloud-vm-source` (which
uses the VirtualPowerManager driver) or `ironic-vm-source`.
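
In pseudo-code, the merge step behaves roughly like this (a simplified
sketch, not the actual merge.py implementation; the whitelist of mergeable
types below is only illustrative):

MERGEABLE_TYPES = ('AWS::AutoScaling::LaunchConfiguration',
                   'AWS::EC2::Instance')

def deep_merge(target, source):
    """Recursively merge `source` into `target`."""
    for key, value in source.items():
        if (key in target and isinstance(target[key], dict)
                and isinstance(value, dict)):
            deep_merge(target[key], value)
        else:
            target[key] = value
    return target

def merge_templates(templates):
    final = {'Parameters': {}, 'Resources': {}}
    for template in templates:
        final['Parameters'].update(template.get('Parameters', {}))
        for name, resource in template.get('Resources', {}).items():
            existing = final['Resources'].get(name)
            if existing is None:
                final['Resources'][name] = resource
            elif existing['Type'] in MERGEABLE_TYPES:
                deep_merge(existing, resource)
            else:
                raise ValueError('Duplicate resource name: %s' % name)
    return final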



2. FileInclude

If you have a pseudo resource with the type of `FileInclude`, we will
look at the specified Path and SubKey and put the resulting dictionary in:

 overcloud-source.yaml 

  NovaCompute0Config:
Type: FileInclude
Path: nova-compute-instance.yaml
SubKey: Resources.NovaCompute0Config
Parameters:
  NeutronNetworkType: "gre"
  NeutronEnableTunnelling: "True"


 nova-compute-instance.yaml 

  NovaCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  InstanceType: '0'
  ImageId: '0'
Metadata:
  keystone:
host: {Ref: KeystoneHost}
  neutron:
host: {Ref: NeutronHost}
  tenant_network_type: {Ref: NeutronNetworkType}
  network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
  bridge_mappings: {Ref: NeutronBridgeMappings}
  enable_tunneling: {Ref: NeutronEnableTunnelling}
  physical_bridge: {Ref: NeutronPhysicalBridge}
  public_interface: {Ref: NeutronPublicInterface}
service-password:
  Ref: NeutronPassword
  admin-password: {Ref: AdminPassword}

The result:

  NovaCompute0Config:
Type: AWS::AutoScaling::LaunchConfiguration
Properties:
  InstanceType: '0'
  ImageId: '0'
Metadata:
  keystone:
host: {Ref: KeystoneHost}
  neutron:
host: {Ref: NeutronHost}
  tenant_network_type: "gre"
  network_vlan_ranges: {Ref: NeutronNetworkVLANRanges}
  bridge_mappings: {Ref: NeutronBridgeMappings}
  enable_tunneling: "True"
  physical_bridge: {Ref: NeutronPhysicalBridge}
  public_interface: {Ref: NeutronPublicInterface}
service-password:
  Ref: NeutronPassword
  admin-password: {Ref: AdminPassword}

Note the `NeutronNetworkType` and `NeutronEnableTunnelling` parameter
substitution.
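
Roughly, the include resolution works like this (again a simplified sketch,
not the real implementation):

import yaml

def resolve_file_include(include):
    """Resolve a FileInclude pseudo resource into a plain dictionary."""
    with open(include['Path']) as f:
        document = yaml.safe_load(f)

    # Walk the SubKey path, e.g. "Resources.NovaCompute0Config".
    node = document
    for key in include['SubKey'].split('.'):
        node = node[key]

    # Replace {Ref: <name>} with the literal value when <name> is listed
    # in the include's Parameters map; other Refs are left untouched.
    params = include.get('Parameters', {})

    def substitute(obj):
        if isinstance(obj, dict):
            if list(obj) == ['Ref'] and obj['Ref'] in params:
                return params[obj['Ref']]
            return dict((k, substitute(v)) for k, v in obj.items())
        if isinstance(obj, list):
            return [substitute(item) for item in obj]
        return obj

    return substitute(node)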

This is useful when you want to pick only bits and pieces of an existing
template. In the example above, `nova-compute-instance.yaml` is a
standalone template you can launch on it

Re: [openstack-dev] [tempest]:Please updated etherpad before adding tempest tests

2014-04-04 Thread Mike Spreitzer
"Kekane, Abhishek"  wrote on 04/04/2014 
06:26:58 AM:

> This is regarding implementation of blueprint https://
> blueprints.launchpad.net/tempest/+spec/testcases-expansion-icehouse.
> 
> As per mentioned in etherpads for this blueprint, please add your 
> name if you are working on any of the items mentioned in the list.
> Otherwise efforts will get duplicated.

Why are only four projects listed?

I see that those etherpads have Icehouse in their names.  What happens as 
we work on Juno?

Thanks,
Mike___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Release Notes for Icehouse

2014-04-04 Thread Mark McClain

On Apr 4, 2014, at 11:03 AM, Yaguang Tang 
mailto:yaguang.t...@canonical.com>> wrote:

I think it's important for our developers to publish official Release Notes 
as other core OpenStack projects do at the end of the Icehouse development 
cycle; they contain the new features added and the upgrade issues users need 
to be aware of. Would anyone like to volunteer to help accomplish this?
https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse

I’ve typically waited until after we publish the second RC to add release notes 
to the Wiki. I’ll update the release notes for Neutron when we close out RC2.

mark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-04 Thread Armando M.
Hi Simon,

You are absolutely right in your train of thoughts: unless the
third-party CI monitors and vets all the potential changes it cares
about there's always a chance something might break. This is why I
think it's important that each Neutron third party CI should not only
test Neutron changes, but also Nova's, DevStack's and Tempest's.
Filters may be added to test only the relevant subtrees.

For instance, the VMware CI runs the full suite of tempest smoke
tests, as they come from upstream and it vets all the changes that go
in Tempest made to API and scenario tests as well as configuration
changes. As for Nova, we test changes to the vif parts, and for
DevStack, we validate changes made to lib/neutron*.

Vetting all the changes coming in vs. only the ones that can
potentially break third-party support is a balancing act when you
don't have infinite resources at your disposal, or you're just ramping
up the CI infrastructure.

Cheers,
Armando

On 4 April 2014 02:00, Simon Pasquier  wrote:
> Hi Salvatore,
>
> On 03/04/2014 14:56, Salvatore Orlando wrote:
>> Hi Simon,
>>
> 
>>
>> I hope stricter criteria will be enforced for Juno; I personally think
>> every CI should run at least the smoketest suite for L2/L3 services (eg:
>> load balancer scenario will stay optional).
>
> I gave this a little thought and I feel like it might not have
> _immediately_ caught the issue Kyle talked about [1].
>
> Let's rewind the time line:
> 1/ Change to *Nova* adding external events API is merged
> https://review.openstack.org/#/c/76388/
> 2/ Change to *Neutron* notifying Nova when ports are ready is merged
> https://review.openstack.org/#/c/75253/
> 3/ Change to *Nova* making libvirt wait for Neutron notifications is merged
> https://review.openstack.org/#/c/74832/
>
> At this point and assuming that the external ODL CI system were running
> the L2/L3 smoke tests, change #3 could have passed since external
> Neutron CI aren't voting for Nova. Instead it would have voted against
> any subsequent change to Neutron.
>
> Simon
>
> [1] https://bugs.launchpad.net/neutron/+bug/1301449
>
>>
>> Salvatore
>>
>> [1] https://review.openstack.org/#/c/75304/
>>
>>
>>
>> On 3 April 2014 12:28, Simon Pasquier > > wrote:
>>
>> Hi,
>>
>> I'm looking at [1] but I see no requirement of which Tempest tests
>> should be executed.
>>
>> In particular, I'm a bit puzzled that it is not mandatory to boot an
>> instance and check that it gets connected to the network. To me, this is
>> the very minimum for asserting that your plugin or driver is working
>> with Neutron *and* Nova (I'm not even talking about security groups). I
>> had a quick look at the existing 3rd party CI systems and I found none
>> running this kind of check (correct me if I'm wrong).
>>
>> Thoughts?
>>
>> [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>> --
>> Simon Pasquier
>> Software Engineer (OpenStack Expertise Center)
>> Bull, Architect of an Open World
>> Phone: + 33 4 76 29 71 49 
>> http://www.bull.com
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ironic] BaremetalHostManager unused?

2014-04-04 Thread Devananda van der Veen
Hi Matt!

I've looked into this a bit now, too, and don't have a conclusive answer as
to how (or even whether) it's working today.

Instead, I'd like to point out that Ironic is aiming to deprecate
nova.virt.baremetal and nova.scheduler.baremetal_host_manager, in favor of
nova.virt.ironic and nova.scheduler.ironic_host_manager, both of which
currently live in Ironic's git tree. These were initially based on the
baremetal code that you're looking at, and so probably inherited this
problem (if it is a problem). Let's fix it in Ironic.

Thanks!
-Devananda



On Fri, Apr 4, 2014 at 7:46 AM, Matthew Booth  wrote:

> Whilst looking at something unrelated in HostManager, I noticed that
> HostManager.service_states appears to be unused, and decided to remove
> it. This seems to have a number of implications:
>
> 1. capabilities in HostManager.get_all_host_states will always be None.
> 2. capabilities passed to host_state_cls() will always be None
> (host_state_cls doesn't appear to be used anywhere else)
> 3. baremetal_host_manager.new_host_state() capabilities will always be
> None.
> 4. cap will always be {}, so will never contain 'baremetal_driver'
> 5. BaremetalNodeState will never be instantiated
> 6. BaremetalHostManager is a no-op
>
> possibly resulting in
>
> 7. The filter scheduler could try to put multiple instances on a single
> bare metal host
>
> This was going to be a 3 line cleanup, but it looks like a can of worms
> so I'm going to drop it. It's entirely possible that I've missed another
> entry point in to this code, but it might be worth a quick look.
> Incidentally, the tests seem to populate service_states in fake, so the
> behaviour of the automated tests probably isn't reliable.
>
> Matt
> --
> Matthew Booth, RHCA, RHCSS
> Red Hat Engineering, Virtualisation Team
>
> GPG ID:  D33C3490
> GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Tenant quotas can now be updated during a benchmark

2014-04-04 Thread Bruno Semperlotti
Hi Joshua,

Quotas will not be expanded during the scenario; they will be updated
*prior to* the scenario with the requested values, as the context of the
scenario. If the values are too low, the scenario will still fail.
This change does not allow benchmarking of quota update time.

Regards,


--
Bruno Semperlotti


2014-04-04 0:45 GMT+02:00 Joshua Harlow :

>  Cool, so would that mean that once a quota is reached (for whatever
> reason) and the scenario wants to continue running (instead of failing due
> to quota issues) that it can expand that quota automatically (for cases
> where this is needed/necessary). Or is this also useful for benchmarking
> how fast quotas can be  changed, or is it maybe a combination of both?
>
>   From: Boris Pavlovic 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, April 3, 2014 at 1:43 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [rally] Tenant quotas can now be updated
> during a benchmark
>
>   Bruno,
>
>  Well done. Finally we have this feature in Rally!
>
>
>  Best regards,
> Boris Pavlovic
>
>
> On Thu, Apr 3, 2014 at 11:37 PM, Bruno Semperlotti <
> bruno.semperlo...@gmail.com> wrote:
>
>> Hi Rally users,
>>
>>  I would like to inform you that the feature allowing tenants' quotas to be
>> updated during a benchmark is available with the implementation of this
>> blueprint:
>> https://blueprints.launchpad.net/rally/+spec/benchmark-context-tenant-quotas
>>
>>  Currently, only Nova and Cinder quotas are supported (Neutron coming
>> soon).
>>
>>  Here a small sample of how to do it:
>>
>>  In the json file describing the benchmark scenario, use the "context"
>> section to indicate quotas for each service. Quotas will be applied to
>> each generated tenant.
>>
>>  {
>>      "NovaServers.boot_server": [
>>          {
>>              "args": {
>>                  "flavor_id": "1",
>>                  "image_id": "6e25e859-2015-4c6b-9940-aa21b2ab8ab2"
>>              },
>>              "runner": {
>>                  "type": "continuous",
>>                  "times": 100,
>>                  "active_users": 10
>>              },
>>              "context": {
>>                  "users": {
>>                      "tenants": 1,
>>                      "users_per_tenant": 1
>>                  },
>>                  "quotas": {
>>                      "nova": {
>>                          "instances": 150,
>>                          "cores": 150,
>>                          "ram": -1
>>                      }
>>                  }
>>              }
>>          }
>>      ]
>>  }
>>
>>
>>  Following is the list of supported quotas:
>>
>>  nova:
>>  instances, cores, ram, floating-ips, fixed-ips, metadata-items,
>>  injected-files, injected-file-content-bytes, injected-file-path-bytes,
>>  key-pairs, security-groups, security-group-rules
>>
>>  cinder:
>>  gigabytes, snapshots, volumes
>>
>>  neutron (coming soon):
>>  network, subnet, port, router, floatingip, security-group,
>>  security-group-rule
>>
>>
>>  Regards,
>>
>> --
>> Bruno Semperlotti
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent

2014-04-04 Thread Jay Faulkner

+1

   The agent is a tool Ironic is using to take the place of a
   hypervisor to discover and prepare nodes to receive workloads. For
   hardware, this includes more work -- such as firmware flashing, BIOS
   configuration, and disk imaging -- all of which must be done in an
   OOB manner. (This is also why deploy drivers that interact directly
   with the hardware, where supported -- such as SeaMicro or the
   proposed HP iLO driver -- are good alternative approaches.)


-Jay Faulkner

On 4/4/2014 7:10 AM, Ling Gao wrote:

Hello Vladimir,
 I would prefer an agent-less node, meaning the agent is only used 
under the ramdisk OS to collect hw info, to do firmware updates and to 
install nodes etc. In this sense, the agent running as root is fine. 
Once the node is installed, the agent should be out of the picture. I 
have been working with HPC customers, in that environment they prefer 
as less memory prints as possible. Even as a ordinary tenant, I do not 
feel secure to have some agents running on my node. For the firmware 
update on the fly, I do not know how many customers will trust us 
doing it while their critical application is running. Even they do and 
ready to do it, Ironic can then send an agent to the node through 
scp/wget as admin/root and quickly do it and then kill the agent on 
the node. Just my 2 cents.


Ling Gao




From: Vladimir Kozhukalov
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 04/04/2014 08:24 AM
Subject: [openstack-dev] [Ironic][Agent]




Hello, everyone,

I'd like to involve more people to express their opinions about the 
way how we are going to run Ironic-python-agent. I mean should we run 
it with root privileges or not.


From the very beginning agent is supposed to run under ramdisk OS and 
it is intended to make disk partitioning, RAID configuring, firmware 
updates and other stuff according to installing OS. Looks like we 
always will run agent with root privileges. Right? There are no 
reasons to limit agent permissions.


On the other hand, it is easy to imagine a situation when you want to 
run agent on every node of your cluster after installing OS. It could 
be useful to keep hardware info consistent (for example, many hardware 
configurations allow one to add hard drives in run time). It also 
could be useful for "on the fly" firmware updates. It could be useful 
for "on the fly" manipulations with lvm groups/volumes and so on.


Frankly, I am not even sure that we need to run agent with root 
privileges even in ramdisk OS, because, for example, there are some 
system default limitations such as number of connections, number of 
open files, etc. which are different for root and ordinary user and 
potentially can influence agent behaviour. Besides, it is possible 
that some vulnerabilities will be found in the future and they 
potentially could be used to compromise agent and damage hardware 
configuration.


Consequently, it is better to run agent under ordinary user even under 
ramdisk OS and use rootwrap if agent needs to run commands with root 
privileges. I know that rootwrap has some performance issues 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html but 
it is still pretty suitable for ironic agent use case.


It would be great to hear as many opinions as possible according to 
this case.



Vladimir Kozhukalov


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help in re-running openstack

2014-04-04 Thread Dean Troyer
On Fri, Apr 4, 2014 at 3:43 AM, Deepak Shetty  wrote:

> Shiva,
>   Can u tell what exactly u r trying to change in /opt/stack/ ?
> My guess is that u might be running into stack.sh re-pulling the sources
> hence overriding ur changes ? Try with OFFLINE=True in localrc (create a
> localrc file in /opt/stack/ and put OFFLINE=True) and redo stack.sh
>

FWIW, RECLONE controls the 'pull sources every time' behaviour without
cutting off the rest of your net access.  OFFLINE short-circuits functions
that attempt network access to avoid waiting on the timeouts when you know
they will fail.
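
Something along these lines in localrc (in the devstack tree) covers the common
case of keeping local edits in /opt/stack/* while still having network access
(values shown are illustrative):

    # localrc
    RECLONE=no       # don't re-pull/reset the git trees on each stack.sh run
    OFFLINE=True     # optional: skip network access entirely once everything is cached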

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent

2014-04-04 Thread Dickson, Mike (HP Servers)
+1

From: Ling Gao [mailto:ling...@us.ibm.com]
Sent: Friday, April 04, 2014 10:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent

Hello Vladimir,
 I would prefer an agent-less node, meaning the agent is only used under 
the ramdisk OS to collect hw info, to do firmware updates and to install nodes 
etc. In this sense, the agent running as root is fine. Once the node is 
installed, the agent should be out of the picture. I have been working with HPC 
customers, in that environment they prefer as less memory prints as possible. 
Even as a ordinary tenant, I do not feel secure to have some agents running on 
my node. For the firmware update on the fly, I do not know how many customers 
will trust us doing it while their critical application is running. Even they 
do and ready to do it, Ironic can then send an agent to the node through 
scp/wget as admin/root and quickly do it and then kill the agent on the node.   
Just my 2 cents.

Ling Gao




From: Vladimir Kozhukalov <vkozhuka...@mirantis.com>
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 04/04/2014 08:24 AM
Subject: [openstack-dev] [Ironic][Agent]




Hello, everyone,

I'd like to involve more people to express their opinions about the way how we 
are going to run Ironic-python-agent. I mean should we run it with root 
privileges or not.

From the very beginning agent is supposed to run under ramdisk OS and it is 
intended to make disk partitioning, RAID configuring, firmware updates and 
other stuff according to installing OS. Looks like we always will run agent 
with root privileges. Right? There are no reasons to limit agent permissions.

On the other hand, it is easy to imagine a situation when you want to run agent 
on every node of your cluster after installing OS. It could be useful to keep 
hardware info consistent (for example, many hardware configurations allow one 
to add hard drives in run time). It also could be useful for "on the fly" 
firmware updates. It could be useful for "on the fly" manipulations with lvm 
groups/volumes and so on.

Frankly, I am not even sure that we need to run agent with root privileges even 
in ramdisk OS, because, for example, there are some system default limitations 
such as number of connections, number of open files, etc. which are different 
for root and ordinary user and potentially can influence agent behaviour. 
Besides, it is possible that some vulnerabilities will be found in the future 
and they potentially could be used to compromise agent and damage hardware 
configuration.

Consequently, it is better to run agent under ordinary user even under ramdisk 
OS and use rootwrap if agent needs to run commands with root privileges. I know 
that rootwrap has some performance issues 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html but 
it is still pretty suitable for ironic agent use case.

It would be great to hear as many opinions as possible according to this case.


Vladimir Kozhukalov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Release Notes for Icehouse

2014-04-04 Thread Yaguang Tang
Hi all,

I think it's important for our developers to publish official Release Notes,
as other core OpenStack projects do, at the end of the Icehouse development
cycle. They contain the new features added and upgrade issues that users
should be aware of. Would anyone like to volunteer to help accomplish this?
https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse

-- 
Tang Yaguang

Canonical Ltd. | www.ubuntu.com | www.canonical.com
Mobile:  +86 152 1094 6968
gpg key: 0x187F664F
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Chuck Thier
On Fri, Apr 4, 2014 at 9:44 AM, Donald Stufft  wrote:

> requests should work fine if you use eventlet to monkey patch the
> socket module prior to importing requests.
>

That's what I had hoped as well (and is what swift-bench did already), but
it performs the same if I monkey patch or not.

--
Chuck
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ironic] BaremetalHostManager unused?

2014-04-04 Thread Matthew Booth
Whilst looking at something unrelated in HostManager, I noticed that
HostManager.service_states appears to be unused, and decided to remove
it. This seems to have a number of implications:

1. capabilities in HostManager.get_all_host_states will always be None.
2. capabilities passed to host_state_cls() will always be None
(host_state_cls doesn't appear to be used anywhere else)
3. baremetal_host_manager.new_host_state() capabilities will always be None.
4. cap will always be {}, so will never contain 'baremetal_driver'
5. BaremetalNodeState will never be instantiated
6. BaremetalHostManager is a no-op

possibly resulting in

7. The filter scheduler could try to put multiple instances on a single
bare metal host

This was going to be a 3 line cleanup, but it looks like a can of worms
so I'm going to drop it. It's entirely possible that I've missed another
entry point in to this code, but it might be worth a quick look.
Incidentally, the tests seem to populate service_states in fake, so the
behaviour of the automated tests probably isn't reliable.

Matt
-- 
Matthew Booth, RHCA, RHCSS
Red Hat Engineering, Virtualisation Team

GPG ID:  D33C3490
GPG FPR: 3733 612D 2D05 5458 8A8A 1600 3441 EA19 D33C 3490

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Issues with Python Requests

2014-04-04 Thread Donald Stufft

On Apr 4, 2014, at 10:41 AM, Chuck Thier  wrote:

> Howdy,
> 
> Now that swift has aligned with the other projects to use requests in 
> python-swiftclient, we have lost a couple of features.
> 
> 1.  Requests doesn't support expect: 100-continue.  This is very useful for 
> services like swift or glance where you want to make sure a request can 
> continue before you start uploading GBs of data (for example find out that 
> you need to auth).
> 
> 2.  Requests doesn't play nicely with eventlet or other async frameworks [1]. 
>  I noticed this when suddenly swift-bench (which uses swiftclient) wasn't 
> performing as well as before.  This also means that, for example, if you are 
> using keystone with swift, the auth requests to keystone will block the proxy 
> server until they complete, which is also not desirable.

requests should work fine if you use eventlet to monkey patch the socket 
module prior to importing requests.
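
Concretely, the ordering looks like this (a minimal sketch; the URL and pool
size are only illustrative):

    import eventlet
    eventlet.monkey_patch()   # must run before requests (and socket) are imported

    import requests


    def fetch(url):
        # While this green thread waits on the network it yields to the hub,
        # so other green threads keep running.
        return requests.get(url).status_code

    pool = eventlet.GreenPool(10)
    for status in pool.imap(fetch, ['http://localhost:8080/healthcheck'] * 10):
        print(status)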

> 
> Does anyone know if these issues are being addressed, or begun working on 
> them?
> 
> Thanks,
> 
> --
> Chuck
> 
> [1] 
> http://docs.python-requests.org/en/latest/user/advanced/#blocking-or-non-blocking


-
Donald Stufft
PGP: 0x6E3CBCE93372DCFA // 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][neutron]Requesting consideration of httmock package for test-requirements in Juno

2014-04-04 Thread Paul Michali (pcm)
I’d like to get this added to the test-requirements for Neutron. It is a very 
flexible HTTP mock module that works with the Requests package. It is a 
decorator that wraps the Request’s send() method and allows easy mocking of 
responses, etc (w/o using a web server).
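
For reviewers unfamiliar with it, here is a minimal example of the style of
test this enables (decorator and context manager names per the httmock docs;
the URL is only illustrative):

    import requests
    from httmock import HTTMock, all_requests


    @all_requests
    def csr_mock(url, request):
        # Every request issued inside the HTTMock context gets this canned reply.
        return {'status_code': 200, 'content': '{"status": "ok"}'}


    with HTTMock(csr_mock):
        resp = requests.post('https://192.168.0.1/api/v1/auth/token-services')
        assert resp.status_code == 200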

The bug is: https://bugs.launchpad.net/neutron/+bug/1282855

Initially I had requested both httmock and a newer requests, but was asked to 
separate them, so this request targets only httmock, as it is the more 
important one (to me :) to get approved.


The review request is: https://review.openstack.org/#/c/75296/

An example of code that would use this:

https://github.com/openstack/neutron/blob/master/neutron/tests/unit/services/vpn/device_drivers/notest_cisco_csr_rest.py
https://github.com/openstack/neutron/blob/master/neutron/tests/unit/services/vpn/device_drivers/cisco_csr_mock.py

Looking forward to hearing whether or not we can include this package into Juno.

Thanks in advance!


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread James Slagle
On Thu, Apr 3, 2014 at 7:02 AM, Robert Collins
 wrote:
> Getting back in the swing of things...
>
> Hi,
> like most OpenStack projects we need to keep the core team up to
> date: folk who are not regularly reviewing will lose context over
> time, and new folk who have been reviewing regularly should be trusted
> with -core responsibilities.
>
> In this months review:
>  - Dan Prince for -core
>  - Jordan O'Mara for removal from -core
>  - Jiri Tomasek for removal from -core
>  - Jaromir Coufal for removal from -core

+1 to all.


-- 
-- James Slagle
--

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Issues with Python Requests

2014-04-04 Thread Chuck Thier
Howdy,

Now that swift has aligned with the other projects to use requests in
python-swiftclient, we have lost a couple of features.

1.  Requests doesn't support expect: 100-continue.  This is very useful for
services like swift or glance where you want to make sure a request can
continue before you start uploading GBs of data (for example find out that
you need to auth).

2.  Requests doesn't play nicely with eventlet or other async frameworks
[1].  I noticed this when suddenly swift-bench (which uses swiftclient)
wasn't performing as well as before.  This also means that, for example, if
you are using keystone with swift, the auth requests to keystone will block
the proxy server until they complete, which is also not desirable.

Does anyone know if these issues are being addressed, or begun working on
them?

Thanks,

--
Chuck

[1]
http://docs.python-requests.org/en/latest/user/advanced/#blocking-or-non-blocking
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] Serveral questions about alarm function of ceilometer

2014-04-04 Thread Swann Croiset
Hi Yuanjing,

some pointers inline.


2014-04-03 4:33 GMT+02:00 Yuanjing (D) :

>  Hi
>
>  I have a requirement for monitoring VMs: if a VM's meter like cpu_util
> becomes too high, then the system generates an alarm for this VM with meter
> information.
>
>  I have tested alarm function of ceilometer, below are commands I used to
> create alarm object with meter and resource id or not:
> ceilometer alarm-threshold-create  --name alarm1 --meter-name cpu_util
> --period 60 --evaluation-periods 1 --statistic avg --comparison-operator gt
> --threshold 1 -q resource_id=757dadaa-0707-4fad-808d-81edc11438aa
>  ceilometer alarm-threshold-create  --name alarm1 --meter-name cpu_util
> --period 60 --evaluation-periods 1 --statistic avg --comparison-operator gt
> --threshold 1
>
>  I have the following question:
> Do I have to define an alarm object for every VM and every meter?
>
From my understanding, yes, that is the intended (and only) way.
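
Concretely, that per-VM fan-out can be scripted around the same CLI call you
quoted, for example (a rough sketch; credentials, threshold and alarm names
are only placeholders):

    import subprocess

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin', 'http://keystone:5000/v2.0')
    for server in nova.servers.list():
        # One cpu_util alarm per VM, scoped to that VM via resource_id.
        subprocess.check_call([
            'ceilometer', 'alarm-threshold-create',
            '--name', 'cpu_high_%s' % server.id,
            '--meter-name', 'cpu_util',
            '--period', '60',
            '--evaluation-periods', '1',
            '--statistic', 'avg',
            '--comparison-operator', 'gt',
            '--threshold', '70',
            '-q', 'resource_id=%s' % server.id,
        ])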


> Take 100 VMs and 2 meters (cpu_util, memory_util) as an example: I will have
> to define 100*2 alarm objects for them.
> I think if I just define an alarm object with the meter but not the VM (resource_id),
> then the alarm evaluator will count all VMs' meters.
>
You're right. Here your alarm will be triggered on the average of all
samples in the period across all VMs, which is not what you want, I'm sure.


>  Another question arising from the one above: I know that the alarm evaluator
> will process alarm objects one by one, so too many alarm objects may result
> in performance problems too.
>
On which component have you observed a performance issue ?

This should be better/mitigated if you correctly deploy multiple instances of
the services involved in alarm evaluation: at least the evaluator and the API.
About the backend, I read there are some performance issues with SQL [1], but
they seem to be about notification handling rather than alarming.
The preferred backends for performance should be MongoDB or HBase (on multiple
nodes).


>  I am not a ceilometer programmer and I apologize if I am missing
> something very obvious.
> Can you give me some help to make me clear about them and how to implement
> my requirement?
>

>  Thanks
>
>
[1] http://lists.openstack.org/pipermail/openstack-dev/2014-March/030288.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Problems with Heat software configurations and KeystoneV2

2014-04-04 Thread Michael Elder
Opened in Launchpad: https://bugs.launchpad.net/heat/+bug/1302624

I still have concerns though about the design approach of creating a new 
project for every stack and new users for every resource. 

If I provision 1000 patterns a day with an average of 10 resources per 
pattern, you're looking at 10,000 users per day. How can that scale? 

How can we ensure that all stale projects and users are cleaned up as
instances are destroyed? When users choose to go through horizon or nova to
tear down instances, what cleans up the project & users associated with
that heat stack?

Keystone defines the notion of tokens to support authentication; why
doesn't the design provision and store a token for the stack and its
equivalent management?

-M


Kind Regards,

Michael D. Elder

STSM | Master Inventor
mdel...@us.ibm.com  | linkedin.com/in/mdelder

"Success is not delivering a feature; success is learning how to solve the 
customer’s problem.” -Mark Cook



From:   Steve Baker 
To: openstack-dev@lists.openstack.org
Date:   04/03/2014 10:13 PM
Subject:Re: [openstack-dev] [heat] Problems with Heat software 
configurations and KeystoneV2



On 04/04/14 14:05, Michael Elder wrote:
Hello, 

I'm looking for insights about the interaction between keystone and the 
software configuration work that's gone into Icehouse in the last month or 
so. 

I've found that when using software configuration, the KeystoneV2 is 
broken because the server.py#_create_transport_credentials() explicitly 
depends on KeystoneV3 methods. 

Here's what I've come across: 

In the following commit, the introduction of 
_create_transport_credentials() on server.py begins to create a user for 
each OS::Nova::Server resource in the template: 

commit b776949ae94649b4a1eebd72fabeaac61b404e0f 
Author: Steve Baker  
Date:   Mon Mar 3 16:39:57 2014 +1300 
Change: https://review.openstack.org/#/c/77798/ 

server.py lines 470-471: 

if self.user_data_software_config(): 
    self._create_transport_credentials() 

With the introduction of this change, each server resource which is 
provisioned results in the creation of a new user ID. The call delegates 
through to stack_user.py lines 40-54: 


def _create_user(self):
    # Check for stack user project, create if not yet set
    if not self.stack.stack_user_project_id:
        project_id = self.keystone().create_stack_domain_project(
            self.stack.id)
        self.stack.set_stack_user_project_id(project_id)

    # Create a keystone user in the stack domain project
    user_id = self.keystone().create_stack_domain_user(
        username=self.physical_resource_name(),  ## HERE THE USERNAME IS SET TO THE RESOURCE NAME
        password=self.password,
        project_id=self.stack.stack_user_project_id)

    # Store the ID in resource data, for compatibility with SignalResponder
    db_api.resource_data_set(self, 'user_id', user_id)

My concerns with this approach: 

- Each resource is going to result in the creation of a unique user in 
Keystone. That design point seems hardly teneble if you're provisioning a 
large number of templates by an organization every day. 
Compared to the resources consumed by creating a new nova server (or a 
keystone token!), I don't think creating new users will present a 
significant overhead.

As for creating users bound to resources, this is something heat has done 
previously but we're doing it with more resources now. With havana heat 
(or KeystoneV2) those users will be created in the same project as the 
stack launching user, and the stack launching user needs admin permissions 
to create these users.
- If you attempt to set your resource names to some human-readable string 
(like "web_server"), you get one shot to provision the template, wherein 
future attempts to provision it will result in exceptions due to duplicate 
user ids. 
This needs a bug raised. This isn't an issue on KeystoneV3 since the users 
are created in a project which is specific to the stack. Also for v3 
operations the username is ignored as the user_id is used exclusively.

- The change prevents compatibility between Heat on Icehouse and 
KeystoneV2. 
Please continue to test this with KeystoneV2. However any typical icehouse 
OpenStack should really have the keystone v3 API enabled. Can you explain 
the reasons why yours isn't?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Agent] Ironic-python-agent

2014-04-04 Thread Ling Gao
Hello Vladimir,
 I would prefer an agent-less node, meaning the agent is only used under 
the ramdisk OS to collect hardware info, do firmware updates, install nodes, 
etc. In this sense, the agent running as root is fine. Once the node is 
installed, the agent should be out of the picture. I have been working with 
HPC customers; in that environment they prefer as small a memory footprint 
as possible. Even as an ordinary tenant, I would not feel secure having 
agents running on my node. As for firmware updates on the fly, I do not know 
how many customers will trust us to do them while their critical application 
is running. Even if they do and are ready for it, Ironic can then send an 
agent to the node through scp/wget as admin/root, quickly do the update, and 
then kill the agent on the node. Just my 2 cents.

Ling Gao




From: Vladimir Kozhukalov
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 04/04/2014 08:24 AM
Subject: [openstack-dev] [Ironic][Agent]



Hello, everyone,

I'd like to involve more people to express their opinions about the way 
how we are going to run Ironic-python-agent. I mean should we run it with 
root privileges or not.

From the very beginning agent is supposed to run under ramdisk OS and it 
is intended to make disk partitioning, RAID configuring, firmware updates 
and other stuff according to installing OS. Looks like we always will run 
agent with root privileges. Right? There are no reasons to limit agent 
permissions.
permissions.

On the other hand, it is easy to imagine a situation when you want to run 
agent on every node of your cluster after installing OS. It could be 
useful to keep hardware info consistent (for example, many hardware 
configurations allow one to add hard drives in run time). It also could be 
useful for "on the fly" firmware updates. It could be useful for "on the 
fly" manipulations with lvm groups/volumes and so on. 

Frankly, I am not even sure that we need to run agent with root privileges 
even in ramdisk OS, because, for example, there are some system default 
limitations such as number of connections, number of open files, etc. 
which are different for root and ordinary user and potentially can 
influence agent behaviour. Besides, it is possible that some 
vulnerabilities will be found in the future and they potentially could be 
used to compromise agent and damage hardware configuration.   

Consequently, it is better to run agent under ordinary user even under 
ramdisk OS and use rootwrap if agent needs to run commands with root 
privileges. I know that rootwrap has some performance issues 
http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html 
but it is still pretty suitable for ironic agent use case.

It would be great to hear as many opinions as possible according to this 
case.


Vladimir Kozhukalov

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][Nova][Heat] Sample config generator issue

2014-04-04 Thread Doug Hellmann
On Thu, Apr 3, 2014 at 5:42 PM, Zane Bitter  wrote:
> On 03/04/14 08:48, Doug Hellmann wrote:
>>
>> On Wed, Apr 2, 2014 at 9:55 PM, Zane Bitter  wrote:
>>>
>>> We have an issue in Heat where the sample config generator from Oslo is
>>> currently broken (see bug #1288586). Unfortunately it turns out that
>>> there
>>> is no fix to the generator script itself that can do the Right Thing for
>>> both Heat and Nova.
>>>
>>> A brief recap on how the sample config generator works: it goes through
>>> all
>>> of the files specified and finds all the ConfigOpt objects at the top
>>> level.
>>> It then searches for them in the registered options, and returns the name
>>> of
>>> the group in which they are registered. Previously it looked for the
>>> identical object being registered, but now it just looks for any
>>> equivalent
>>> ones. When you register two or more equivalent options, the second and
>>> subsequent ones are just ignored by oslo.config.
>>>
>>> The situation in Heat is that we have a bunch of equivalent options
>>> registered in multiple groups. This is because we have a set of options
>>> for
>>> each client library (i.e. python-novaclient, python-cinderclient, &c.),
>>> with
>>> each set containing equivalent options (e.g. every client has an
>>> "endpoint_type" option for looking up the keystone catalog). This used to
>>> work, but now that equivalent options (and not just identical options)
>>> match
>>> when searching for them in a group, we just end up with multiple copies
>>> of
>>> each option in the first group to be searched, and none in any of the
>>> other
>>> groups, in the generated sample config.
>>>
>>> Nova, on the other hand, has the opposite problem (see bug #1262148).
>>> Nova
>>> adds the auth middleware from python-keystoneclient to its list of files
>>> to
>>> search for options. That middleware imports a file from oslo-incubator
>>> that
>>> registers the option in the default group - a registration that is *not*
>>> wanted by the keystone middleware, because it registers an equivalent
>>> option
>>> in a different group instead (or, as it turns out, as well). Just to make
>>> it
>>> interesting, Nova uses the same oslo-incubator module and relies on the
>>> option being registered in the default group. Of course, oslo-incubator
>>> is
>>> not a real library, so it gets registered a second time but ignored
>>> (since
>>> an equivalent one is already present). Crucially, the oslo-incubator file
>>> from python-keystoneclient is not on the list of extra modules to search
>>> in
>>> Nova, so when the generator script was looking for options identical to
>>> the
>>> ones it found in modules, it didn't see this option at all. Hence the
>>> change
>>> to looking for equivalent options, which broke Heat.
>>>
>>> Neither comparing for equivalence nor for identity in the generator
>>> script
>>> can solve both use cases. It's hard to see what Heat could or should be
>>> doing differently. I think it follows that the fix needs to be in either
>>> Nova or python-keystoneclient in the first instance.
>>>
>>> One option I suggested was for the auth middleware to immediately
>>> deregister
>>> the extra option that had accidentally been registered upon importing a
>>> module from oslo-incubator. I put up patches to do this, but it seemed to
>>> be
>>> generally agreed by Oslo folks that this was a Bad Idea.
>>>
>>> Another option would be to specifically include the relevant module from
>>> keystoneclient.openstack.common when generating the sample config. This
>>> seems quite brittle to me.
>>>
>>> We could fix it by splitting the oslo-incubator module into one that
>>> provides the code needed by the auth middleware and one that does the
>>> registration of options, but this will likely result in cascading changes
>>> to
>>> a whole bunch of projects.
>>>
>>> Does anybody have any thoughts on what the right fix looks like here?
>>> Currently, verification of the sample config is disabled in the Heat gate
>>> because of this issue, so it would be good to get it resolved.
>>>
>>> cheers,
>>> Zane.
>>
>>
>> We've seen some similar issues in other projects where the "guessing"
>> done by the generator is not matching the newer ways we use
>> configuration options. In those cases, I suggested that projects use
>> the new entry-point feature that allows them to explicitly list
>> options within groups, instead of scanning a set of files. This
>> feature was originally added so apps can include the options from
>> libraries that use oslo.config (such as oslo.messaging), but it can be
>> used for options defined by the applications as well.
>>
>> To define an option discovery entry point, create a function that
>> returns a sequence of (group name, option list) pairs. For an example,
>> see list_opts() in oslo.messaging [1]. Then define the entry point in
>> your setup.cfg under the "oslo.config.opts" namespace [2]. If you need
>> more than one function, register them separately.
>>
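A minimal sketch of such a discovery hook (module, group and option names here
are hypothetical; only the oslo.config.opts namespace comes from the
description above):

    # heat/opts.py (hypothetical module)
    from oslo.config import cfg

    client_opts = [
        cfg.StrOpt('endpoint_type',
                   default='publicURL',
                   help='Type of endpoint to look up in the keystone catalog.'),
    ]


    def list_opts():
        # One (group name, option list) pair per group; the generator emits
        # each group exactly once, so equivalent options registered in several
        # groups no longer collapse into the first group found.
        return [
            ('clients_nova', client_opts),
            ('clients_cinder', client_opts),
        ]

    # setup.cfg then advertises the hook:
    #
    #   [entry_points]
    #   oslo.config.opts =
    #       heat.clients = heat.opts:list_opts
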
>> Then chan

Re: [openstack-dev] [heat] Problems with software config and Heat standalone configurations

2014-04-04 Thread Michael Elder
No problem. 

Filed here: https://bugs.launchpad.net/heat/+bug/1302578 for continued 
discussion. 

-M


Kind Regards,

Michael D. Elder

STSM | Master Inventor
mdel...@us.ibm.com  | linkedin.com/in/mdelder

"Success is not delivering a feature; success is learning how to solve the 
customer’s problem.” -Mark Cook



From:   Steve Baker 
To: openstack-dev@lists.openstack.org
Date:   04/03/2014 10:13 PM
Subject:Re: [openstack-dev] [heat] Problems with software config 
and Heat standalone configurations



On 04/04/14 14:26, Michael Elder wrote:
Hello, 

While adopting the latest from the software configurations in Icehouse, we 
discovered an issue with the new software configuration type and its 
assumptions about using the heat client to perform behavior. 

The change was introduced in: 

commit 21f60b155e4b65396ebf77e05a0ef300e7c3c1cf 
Author: Steve Baker  
Change: https://review.openstack.org/#/c/67621/ 

The net is that the software config type in software_config.py lines 
147-152 relies on the heat client to create/clone software configuration 
resources in the heat database: 

def handle_create(self):
    props = dict(self.properties)
    props[self.NAME] = self.physical_resource_name()

    ## HERE THE HEAT CLIENT IS CREATING A NEW SOFTWARE_CONFIG TO MAKE EACH ONE IMMUTABLE
    sc = self.heat().software_configs.create(**props)
    self.resource_id_set(sc.id)

My concerns with this approach: 

When used in standalone mode, the Heat engine receives headers which are 
used to drive authentication (X-Auth-Url, X-Auth-User, X-Auth-Key, ..): 

curl -i -X POST -H 'X-Auth-Key: password' -H 'Accept: application/json' -H 
'Content-Type: application/json' -H 'X-Auth-Url: http://[host]:5000/v2.0' 
-H 'X-Auth-User: admin' -H 'User-Agent: python-heatclient' -d '{...}' 
http://10.0.2.15:8004/v1/{tenant_id} 

In this mode, the heat config file indicates standalone mode and can also 
indicate multicloud support: 

# /etc/heat/heat.conf 
[paste_deploy] 
flavor = standalone 

[auth_password] 
allowed_auth_uris = http://[host1]:5000/v2.0,http://[host2]:5000/v2.0 
multi_cloud = true 

Any keystone URL which is referenced is unaware of the orchestration 
engine which is interacting with it. Herein lies the design flaw. 
It's not so much a design flaw; it's a bug where a new piece of code 
interacts poorly with a mode that currently has few users and no 
integration test coverage.


When software_config calls self.heat(), it resolves clients.py's heat 
client: 

def heat(self):
    if self._heat:
        return self._heat

    con = self.context
    if self.auth_token is None:
        logger.error(_("Heat connection failed, no auth_token!"))
        return None
    # try the token
    args = {
        'auth_url': con.auth_url,
        'token': self.auth_token,
        'username': None,
        'password': None,
        'ca_file': self._get_client_option('heat', 'ca_file'),
        'cert_file': self._get_client_option('heat', 'cert_file'),
        'key_file': self._get_client_option('heat', 'key_file'),
        'insecure': self._get_client_option('heat', 'insecure')
    }

    endpoint_type = self._get_client_option('heat', 'endpoint_type')
    endpoint = self._get_heat_url()
    if not endpoint:
        endpoint = self.url_for(service_type='orchestration',
                                endpoint_type=endpoint_type)
    self._heat = heatclient.Client('1', endpoint, **args)

    return self._heat

Here, an attempt to look up the orchestration URL (which is already 
executing in the context of the heat engine) comes up wrong because 
Keystone doesn't know about this remote standalone Heat engine. 

If you look at self._get_heat_url() you'll see that the heat.conf 
[clients_heat] url will be used for the heat endpoint if it is set. I 
would recommend setting that for standalone mode. A devstack change for 
HEAT_STANDALONE would be helpful here.
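
For example, something like this in the standalone engine's config (host and
port are illustrative; the option is the one referred to above):

    # /etc/heat/heat.conf
    [clients_heat]
    url = http://192.0.2.10:8004/v1/%(tenant_id)s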

Further, at this point, the username and password are null, and when the 
auth_password stanza is applied in the config file, Heat will deny any 
attempts at authorization which only provide a token. As I understand it 
today, that's because it doesn't have individual keystone admin users for 
all remote keystone services in the list of allowed_auth_urls. Hence, if 
only provided with a token, I don't think the heat engine can validate the 
token against the remote keystone. 

One workaround that I've implemented locally is to change the logic to 
check for standalone mode and send the username and password. 

    flavor = 'default'
    try:
        logger.info("Configuration is %s" % str(cfg.CONF))
        flavor = cfg.CONF.paste_deploy.flavor
    except cfg.NoSuchOptError as nsoe:
        flavor = 'default'
        logger.info("Flavo

Re: [openstack-dev] Marconi PTL Candidacy

2014-04-04 Thread Flavio Percoco

On 03/04/14 17:53 +, Kurt Griffiths wrote:
[snip]


If elected, my priorities during Juno will include:

1. Operational Maturity: Marconi is already production-ready, but we still
have work to do to get to world-class reliability, monitoring, logging,
and efficiency.
2. Documentation: During Icehouse, Marconi made a good start on user and
operator manuals, and I would like to see those docs fleshed out, as well
as reworking the program wiki to make it much more informative and
engaging.
3. Security: During Juno I want to start doing per-milestone threat
modeling, and build out a suite of security tests.
4. Integration: I have heard from several other OpenStack programs who
would like to use Marconi, and so I look forward to working with them to
understand their needs and to assist them however we can.
5. Notifications: Beginning the work on the missing pieces needed to build
a notifications service on top of the Marconi messaging platform, that can
be used to surface events to end-users via SMS, email, web hooks, etc.
6. Graduation: Completing all remaining graduation requirements so that
Marconi can become integrated in the "K" cycle, which will allow other
programs to be more confident about taking dependencies on the service for
features they are planning.
7. Growth: I'd like to welcome several more contributors to the Marconi
core team, continue on-boarding new contributors and interns, and see
several more large deployments of Marconi in production.



All the above sounds amazing to me! You've done amazing work so far!

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Icehouse RC1 available

2014-04-04 Thread Sergey Lukjanov
Hello everyone,

Sahara published its first Icehouse release candidate today. The list of
bugs fixed since feature freeze and the RC1 tarball are available at:

https://launchpad.net/sahara/icehouse/icehouse-rc1

Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally released as the 2014.1 final
version on April 17. You are therefore strongly encouraged to test and
validate this tarball.

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/sahara/tree/milestone-proposed

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/sahara/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Note that the "master" branch of Sahara is now open for Juno
development, and feature freeze restrictions no longer apply there.

P.S. Thanks to Thierry for release management and this cool
announcement template.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [RFC] Tempest without branches

2014-04-04 Thread David Kranz

On 04/04/2014 07:37 AM, Sean Dague wrote:

An interesting conversation has cropped up over the last few days in -qa
and -infra which I want to bring to the wider OpenStack community. When
discussing the use of Tempest as part of the Defcore validation we came
to an interesting question:

Why does Tempest have stable/* branches? Does it need them?

Historically the Tempest project has created a stable/foo tag the week
of release to lock the version of Tempest that will be tested against
stable branches. The reason we did that is until this cycle we had
really limited knobs in tempest to control which features were tested.
stable/havana means - test everything we know how to test in havana. So
when, for instance, a new API extension landed upstream in icehouse,
we'd just add the tests to Tempest. It wouldn't impact stable/havana,
because we wouldn't backport changes.

But is this really required?

For instance, we don't branch openstack clients. They are supposed to
work against multiple server versions. Tempest, at some level, is
another client. So there is some sense there.

Tempest now also has flags on features, and tests are skippable if
services, or even extensions, aren't enabled (all explicitly settable in
tempest.conf). This is a much better control mechanism than the
coarse-grained selection of stable/foo.


If we decided not to set a stable/icehouse branch in 2 weeks, the gate
would change as follows:

Project masters: no change
Project stable/icehouse: would be gated against Tempest master
Tempest master: would double the gate jobs, gate on project master and
project stable/icehouse on every commit.

(That last one needs infra changes to work right, those are all in
flight right now to assess doability.)

Some interesting effects this would have:

  * Tempest test enhancements would immediately apply on stable/icehouse *

... giving us more confidence. A large amount of tests added to master
in every release are enhanced checking for existing function.

  * Tempest test changes would need server changes in master and
stable/icehouse *

In trying tempest master against stable/havana we found a number of
behavior changes in projects that there had been a 2 step change in the
Tempest tests to support. But this actually means that stable/havana and
stable/icehouse for the same API version are different. Going forward
this would require master + stable changes on the projects + Tempest
changes. Which would provide much more friction in changing these sorts
of things by accident.

  * Much more stable testing *

If every Tempest change is gating on stable/icehouse, the week long
stable/havana can't pass tests won't happen. There will be much more
urgency to keep stable branches functioning.


If we got rid of branches in Tempest the path would be:
  * infrastructure to support this in infra - in process, probably
landing today
  * don't set stable/icehouse - decision needed by Apr 17th
  * changes to d-g/devstack to be extra explicit about what features
stable/icehouse should support in tempest.conf
  * see if we can make master work with stable/havana to remove the
stable/havana Tempest branch (if this is doable in a month, great, if
not just wait for havana to age out).


I think we would still want to declare Tempest versions from time to
time. I'd honestly suggest a quarterly timebox. The events that are
actually important to Tempest are less the release itself, but the eol
of branches, as that would mean features which removed completely from
any supported tree could be removed.


My current leaning is that this approach would be a good thing, and
provide a better experience for both the community and the defcore
process. However it's a big enough change that we're still collecting
data, and it would be interesting to hear other thoughts from the
community at large on this approach.

-Sean


With regard to havana, the problems with DefCore using stable/havana are 
the same as many of us have felt with testing real deployments of havana.
Master (now icehouse) has many more tests, is more robust to individual 
test failures, and is more configurable. But the work to backport 
improvements is difficult or impossible due to many refactorings on 
master, api changes, and the tempest backport policy that we don't want 
to spend our review time looking backwards. The reality is that almost 
nothing has been backported to stable/havana tempest, and we don't want 
to start doing that now. As defcore/refstack becomes a reality, more 
bugs and desired features in tempest will be found and it would be good 
if issues could be addressed on master.


The approach advocated here would prevent this from happening again with 
icehouse and going forward. That still leaves havana as an important 
case for many folks. I did an initial run of master tempest against 
havana (using nova-network but no heat/ceilo/swift). 148 out of 2009 
tests failed. The failures seemed to be in these categories:


1. An api c

Re: [openstack-dev] Swift ring building..

2014-04-04 Thread Shyam Prasad N
Thanks Christian.
That reply covered everything I was seeking to know on this subject.
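
As a toy illustration of the failure mode Christian describes below (this is
not Swift's actual ring code, just the idea that placement is a pure function
of the ring contents):

    import hashlib


    def pick_nodes(ring, obj_name, replicas=2):
        # Hash the full object name, then walk the ring from that point.
        digest = int(hashlib.md5(obj_name.encode('utf-8')).hexdigest(), 16)
        start = digest % len(ring)
        return [ring[(start + i) % len(ring)] for i in range(replicas)]

    ring_on_proxy = ['A', 'B', 'C', 'D']
    ring_on_storage = ['C', 'D', 'A', 'B']   # same devices, different ring file

    print(pick_nodes(ring_on_proxy, 'AUTH_test/cont/obj'))    # where the proxy writes
    print(pick_nodes(ring_on_storage, 'AUTH_test/cont/obj'))  # where a storage node expects it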


On Fri, Apr 4, 2014 at 3:11 PM, Christian Schwede <
christian.schw...@enovance.com> wrote:

> Hi,
>
> Am 04.04.14 11:14, schrieb Shyam Prasad N:
> > I have a question regarding the ring building process in a swift cluster.
> > Many sources online suggest building the rings using ring-builder and
> > scp the generated ring files to all the nodes in the cluster.
> > What I'm trying to understand is if the scp step is just to simplify
> > things, or is it absolutely necessary that the ring files on all the
> > nodes are exactly the same?
> > Can I instead build the rings on each node individually?
>
> no, the ring files must be the same on all nodes.
>
> Ring files in combination with the full object name define which storage
> nodes are responsible for the object.
>
> A very simplified example with four storage servers A, B, C, D and only
> two replicas:
>
> 1. The proxy server wants to store an object and based on its ring file
> decides that storage server "A" and "B" should store it.
>
> 2. The storage nodes "A" and "B" use different ringfiles; their
> replicators now assume that the object is misplaced and will replicate
> the object to nodes "C" and "D".
>
> 3. Now the proxy wants to get the object sometime later, and because of
> the different ring expects the object on server "A" and "B". But the
> object is no longer stored on these servers and the request will fail.
>
> Christian
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
-Shyam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][Agent]

2014-04-04 Thread Vladimir Kozhukalov
Hello, everyone,

I'd like to involve more people in expressing their opinions about the way
we are going to run Ironic-python-agent. I mean, should we run it with root
privileges or not?

From the very beginning the agent is supposed to run under a ramdisk OS, and
it is intended to do disk partitioning, RAID configuration, firmware updates
and other work required to install an OS. It looks like we will always run
the agent with root privileges, right? There are no reasons to limit the
agent's permissions.

On the other hand, it is easy to imagine a situation where you want to run
the agent on every node of your cluster after installing the OS. It could be
useful to keep hardware info consistent (for example, many hardware
configurations allow one to add hard drives at run time). It could also be
useful for "on the fly" firmware updates, or for "on the fly" manipulations
with LVM groups/volumes, and so on.

Frankly, I am not even sure that we need to run the agent with root
privileges even in the ramdisk OS, because, for example, there are some
system default limitations, such as the number of connections, number of
open files, etc., which are different for root and an ordinary user and can
potentially influence agent behaviour. Besides, it is possible that some
vulnerabilities will be found in the future, and they could potentially be
used to compromise the agent and damage the hardware configuration.

Consequently, it is better to run the agent under an ordinary user even
under the ramdisk OS and use rootwrap if the agent needs to run commands
with root privileges. I know that rootwrap has some performance issues
(http://lists.openstack.org/pipermail/openstack-dev/2014-March/029017.html)
but it is still pretty suitable for the ironic agent use case.

It would be great to hear as many opinions as possible regarding this
question.


Vladimir Kozhukalov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [RFC] Tempest without branches

2014-04-04 Thread Sean Dague
An interesting conversation has cropped up over the last few days in -qa
and -infra which I want to bring to the wider OpenStack community. When
discussing the use of Tempest as part of the Defcore validation we came
to an interesting question:

Why does Tempest have stable/* branches? Does it need them?

Historically the Tempest project has created a stable/foo tag the week
of release to lock the version of Tempest that will be tested against
stable branches. The reason we did that is until this cycle we had
really limited knobs in tempest to control which features were tested.
stable/havana means - test everything we know how to test in havana. So
when, for instance, a new API extension landed upstream in icehouse,
we'd just add the tests to Tempest. It wouldn't impact stable/havana,
because we wouldn't backport changes.

But is this really required?

For instance, we don't branch openstack clients. They are supposed to
work against multiple server versions. Tempest, at some level, is
another client. So there is some sense there.

Tempest now also has flags on features, and tests are skippable if
services, or even extensions, aren't enabled (all explicitly settable in
tempest.conf). This is a much better control mechanism than the
coarse-grained selection of stable/foo.
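
For example, a deployment that does not run some of the services can say so
explicitly and the corresponding tests are skipped (an illustrative excerpt;
values are only examples):

    # tempest.conf (illustrative excerpt)
    [service_available]
    neutron = False     # skip tests that require Neutron
    heat = True
    swift = False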


If we decided not to set a stable/icehouse branch in 2 weeks, the gate
would change as follows:

Project masters: no change
Project stable/icehouse: would be gated against Tempest master
Tempest master: would double the gate jobs, gate on project master and
project stable/icehouse on every commit.

(That last one needs infra changes to work right, those are all in
flight right now to assess doability.)

Some interesting effects this would have:

 * Tempest test enhancements would immediately apply on stable/icehouse *

... giving us more confidence. A large amount of tests added to master
in every release are enhanced checking for existing function.

 * Tempest test changes would need server changes in master and
stable/icehouse *

In trying tempest master against stable/havana we found a number of
behavior changes in projects that there had been a 2 step change in the
Tempest tests to support. But this actually means that stable/havana and
stable/icehouse for the same API version are different. Going forward
this would require master + stable changes on the projects + Tempest
changes. Which would provide much more friction in changing these sorts
of things by accident.

 * Much more stable testing *

If every Tempest change is gating on stable/icehouse, week-long "stable/havana
can't pass tests" episodes won't happen. There will be much more
urgency to keep stable branches functioning.


If we got rid of branches in Tempest the path would be:
 * infrastructure to support this in infra - in process, probably
landing today
 * don't set stable/icehouse - decision needed by Apr 17th
 * changes to d-g/devstack to be extra explicit about what features
stable/icehouse should support in tempest.conf
 * see if we can make master work with stable/havana to remove the
stable/havana Tempest branch (if this is doable in a month, great, if
not just wait for havana to age out).


I think we would still want to declare Tempest versions from time to
time. I'd honestly suggest a quarterly timebox. The events that are
actually important to Tempest are less the release itself than the eol
of branches, as that would mean tests for features which have been removed
completely from any supported tree could themselves be removed.


My current leaning is that this approach would be a good thing, and
provide a better experience for both the community and the defcore
process. However it's a big enough change that we're still collecting
data, and it would be interesting to hear other thoughts from the
community at large on this approach.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-04 Thread Julie Pichon
On 03/04/14 23:20, Jay Pipes wrote:
> On Thu, 2014-04-03 at 14:41 -0500, Kevin L. Mitchell wrote:
>> On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
>>> Jay, thanks for taking ownership on this idea, we are really
>>> interested to contribute to this, so what do you think are the next
>>> steps to move on?
>>
>> Perhaps a summit session on quota management would be in order?
> 
> Done:
> 
> http://summit.openstack.org/cfp/details/221

Thank you for proposing the session, I'm hopeful having this in the new
cross-project track will have a positive impact on the discussion. I'm
under the impression that this comes back regularly as a session topic
but keeps hitting barriers when it comes to actual implementation
(perhaps because important stakeholders were missing from the session
before).

I'd like to bring up the cross-project discussion from last time this
was discussed in December [1] as a reference, since the same
questions/objections will likely come back again. One of the main issues
was that this shouldn't live in Keystone, which could be resolved by
using Boson, but the rest shows a reluctance from the projects to
delegate quota management, and uncertainty around the use cases. Oslo
was also mentioned as a possible place to help with improving the
consistency.

I'd love a more consistent way to handle and manage quotas across
multiple projects, as this would help Horizon too, for very similar
reasons to those mentioned here.

Thanks,

Julie

[1]
http://eavesdrop.openstack.org/meetings/project/2013/project.2013-12-10-21.02.log.html
from 21:10

> Best,
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest]:Please updated etherpad before adding tempest tests

2014-04-04 Thread Kekane, Abhishek
Hello everyone,

This is regarding implementation of blueprint 
https://blueprints.launchpad.net/tempest/+spec/testcases-expansion-icehouse.

As mentioned in the etherpads for this blueprint, please add your name if you 
are working on any of the items in the list.
Otherwise efforts will get duplicated.


Thanks & Regards,

Abhishek Kekane

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.  If you are not the intended recipient, please advise the 
sender by replying promptly to this email and then delete and destroy this 
email and any attachments without any further use, copying or forwarding
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 1302376 : bug or feature

2014-04-04 Thread Denis Makogon
Good day Shweta, it's definitely a bug; thanks for registering the
bug report.



Best regards,
Denis Makogon


On Fri, Apr 4, 2014 at 1:04 PM, Shweta shweta wrote:

> Hi all,
>
> I've logged a bug in trove. I'm a little unsure if this is a bug or
> feature. Please have a look at the bug @
> https://bugs.launchpad.net/trove/+bug/1302376 and suggest if it is valid.
>
>
> Thanks,
> Shweta | Consultant Engineering
> GlobalLogic
> www.globallogic.com
>  
> http://www.globallogic.com/email_disclaimer.txt
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] 1302376 : bug or feature

2014-04-04 Thread Shweta shweta
Hi all,

I've logged a bug in trove. I'm a little unsure if this is a bug or
feature. Please have a look at the bug @
https://bugs.launchpad.net/trove/+bug/1302376 and suggest if it is valid.


Thanks,
Shweta | Consultant Engineering
GlobalLogic
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-04 Thread Stan Lagun
Hi Steve, Thomas

I'm glad the discussion is so constructive!

If we add type interfaces to HOT this may do the job.
Applications in AppCatalog need to be portable across OpenStack clouds.
Thus if we use some globally-unique type naming system applications could
identify their dependencies in an unambiguous way.

We would also need to establish relations between interfaces.
Suppose there is a My::Something::Database interface and 7 compatible
materializations:
My::Something::TroveMySQL
My::Something::GaleraMySQL
My::Something::PostgreSQL
My::Something::OracleDB
My::Something::MariaDB
My::Something::MongoDB
My::Something::HBase

There are apps (say, SQLAlchemy-based apps) that are fine with any
relational DB. In that case all templates except for MongoDB and HBase
should be matched. There are apps that are designed to work with MySQL only.
In that case only TroveMySQL, GaleraMySQL and MariaDB should be matched. There
are applications that use PL/SQL and thus require OracleDB (there can be
several Oracle implementations as well). There are also applications
(Marconi and Ceilometer are good examples) that can use both some SQL and
NoSQL databases. So conformance to a Database interface is not enough and
some sort of interface hierarchy is required.

Another thing that we need to consider is that even compatible
implementations may have different sets of parameters. For example, a
clustered HA PostgreSQL implementation may require additional parameters
besides those needed for the plain single-instance variant. A template that
consumes *any* PostgreSQL will not be aware of those additional parameters.
Thus they need to be dynamically added to the environment's input parameters,
and the resource consumer needs to be patched to pass those parameters to the
actual implementation.
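
For illustration, a minimal sketch of the environment-file mapping being
discussed below (the type name follows the example above; file names and
parameters are made up for the example):

  # env.yaml -- maps the abstract type onto one concrete provider template
  resource_registry:
    My::Something::Database: templates/trove_mysql.yaml

  # app.yaml -- the consumer only refers to the abstract type
  heat_template_version: 2013-05-23
  parameters:
    db_username: {type: string}
    db_password: {type: string, hidden: true}
  resources:
    my_db:
      type: My::Something::Database
      properties:
        db_username: {get_param: db_username}
        db_password: {get_param: db_password}
  outputs:
    db_url:
      value: {get_attr: [my_db, db_url]}

The provider template (templates/trove_mysql.yaml here) would need matching
db_username/db_password parameters and a db_url output for the composition
to work, which is exactly the parameter-mismatch problem described above.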



On Fri, Apr 4, 2014 at 9:53 AM, Thomas Spatzier
wrote:

> Hi Steve,
>
> your indexing idea sounds interesting, but I am not sure it would work
> reliably. The kind of matching based on names of parameters and outputs and
> internal get_attr uses has very strong assumptions and I think there is a
> not so low risk of false positives. What if the templates includes some
> internal details that would not affect the matching but still change the
> behavior in a way that would break the composition. Or what if a user by
> chance built a template that by pure coincidence uses the same parameter
> and output names as one of those abstract types that was mention, but the
> template is simply not built for composition?
>
> I think it would be much cleaner to have an explicit attribute in the
> template that says "this template can be uses as realization of type
> My::SomeType" and use that for presenting the user choice and building the
> environment.
>
> Regards,
> Thomas
>
> Steve Baker  wrote on 04/04/2014 06:12:38:
> > From: Steve Baker 
> > To: openstack-dev@lists.openstack.org
> > Date: 04/04/2014 06:14
> > Subject: Re: [openstack-dev] [Heat] [Murano] [Solum] applications inthe
> cloud
> >
> > On 03/04/14 13:04, Georgy Okrokvertskhov wrote:
> > Hi Steve,
> >
> > I think this is exactly the place where we have a boundary between
> > Murano catalog and HOT.
> >
> > In your example one can use abstract resource type and specify a
> > correct implementation via environment file. This is how it will be
> > done on the final stage in Murano too.
> >
> > Murano will solve another issue. In your example user should know
> > what template to use as a provider template. In Murano this will be
> > done in the following way:
> > 1) User selects an app which requires a DB
> > 2) Murano sees this requirement for DB and do a search in the app
> > catalog to find all apps which expose this functionality. Murano
> > uses app package definitions for that.
> > 3) User select in UI specific DB implementation he wants to use.
> >
> > As you see, in Murano case user has no preliminary knowledge of
> > available apps\templates and it uses catalog to find it. A search
> > criteria can be quite complex with using different application
> > attribute. If we think about moving application definition to HOT
> > format it should provide all necessary information for catalog.
> >
> > In order to search apps in catalog which uses HOT format we need
> > something like that:
> >
> > One needs to define abstract resource like
> > OS:HOT:DataBase
> >
> > Than in each DB implementation of DB resource one has to somehow
> > refer this abstract resource as a parent like
> >
> > Resource OS:HOT:MySQLDB
> >   Parent: OS:HOT:DataBase
> >
> > Then catalog part can use this information and build a list of all
> > apps\HOTs with resources with parents OS:HOT:DataBase
> >
> > That is what we are looking for. As you see, in this example I am
> > not talking about version and other attributes which might be
> > required for catalog.
> >
> >
> > This sounds like a vision for Murano that I could get behind. It
> > would be a tool which allows fully running applications to be
> > assembled and launched from a catalog of Heat te

Re: [openstack-dev] Swift ring building..

2014-04-04 Thread Christian Schwede
Hi,

Am 04.04.14 11:14, schrieb Shyam Prasad N:
> I have a question regarding the ring building process in a swift cluster.
> Many sources online suggest building the rings using ring-builder and
> scp the generated ring files to all the nodes in the cluster.
> What I'm trying to understand is if the scp step is just to simplify
> things, or is it absolutely necessary that the ring files on all the
> nodes are exactly the same?
> Can I instead build the rings on each node individually?

no, the ring files must be the same on all nodes.

Ring files in combination with the full object name define which storage
nodes are responsible for the object.

A very simplified example with four storage servers A, B, C, D and only
two replicas:

1. The proxy server wants to store an object and based on its ring file
decides that storage server "A" and "B" should store it.

2. The storage nodes "A" and "B" use different ringfiles; their
replicators now assume that the object is misplaced and will replicate
the object to nodes "C" and "D".

3. Now the proxy wants to get the object sometime later, and because of
the different ring expects the object on server "A" and "B". But the
object is no longer stored on these servers and the request will fail.
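
A rough sketch of the usual workflow, assuming an object ring with 3
replicas (IPs, ports and device names below are placeholders): build the
rings once on a single machine, then copy the same files everywhere:

  # on one build host only
  swift-ring-builder object.builder create 10 3 1
  swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sdb1 100
  swift-ring-builder object.builder add r1z2-10.0.0.2:6000/sdb1 100
  swift-ring-builder object.builder rebalance

  # distribute the identical ring file to every proxy and storage node
  scp object.ring.gz node1:/etc/swift/
  scp object.ring.gz node2:/etc/swift/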

Christian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-04 Thread Sylvain Bauza
2014-04-04 10:30 GMT+02:00 Sylvain Bauza :

> Hi all,
>
>
>
> 2014-04-03 18:47 GMT+02:00 Meghal Gosalia :
>
>  Hello folks,
>>
>>  Here is the bug [1] which is currently not allowing a host to be part
>> of two availability zones.
>> This bug was targeted for havana.
>>
>>  The fix in the bug was made because it was assumed
>> that openstack does not support adding hosts to two zones by design.
>>
>>  The assumption was based on the fact that ---
>> if hostX is added to zoneA as well as zoneB,
>> and if you boot a vm vmY passing zoneB in boot params,
>> nova show vmY still returns zoneA.
>>
>>  In my opinion, we should fix the case of nova show
>> rather than changing aggregate api to not allow addition of hosts to
>> multiple zones.
>>
>>  I have added my comments in comments #7 and #9 on that bug.
>>
>>  Thanks,
>> Meghal
>>
>>
>  [1] Bug - https://bugs.launchpad.net/nova/+bug/1196893
>>
>>
>>
>
>
> Thanks for the pointer, now I see why the API is preventing a host from
> being added to a 2nd aggregate if there is a different AZ. Unfortunately,
> this patch missed the fact that aggregate metadata can be modified once the
> aggregate is created, so we should add a check when updating metadata in
> order to cover all corner cases.
>
> So, IMHO, it's worth providing a patch for API consistency so that we
> enforce the fact that a host should be in only one AZ (but possibly 2 or
> more aggregates) and see how we can propose to users the ability to provide
> 2 distinct AZs when booting.
>
> Does everyone agree ?
>
>


Well, I'm replying to myself. The corner case is even trickier. I missed
this patch [1] which already checks that when updating an aggregate to set
an AZ, its hosts are not already part of another AZ. So, indeed, the
coverage is already there... except for one thing :

If an operator creates an aggregate with an AZ set to the default AZ
defined in nova.conf and adds a host to this aggregate, nova
availability-zone-list does show the host being part of this default AZ
(normal behaviour). If we create an aggregate 'foo' without an AZ, then we
add the same host to that aggregate, and then we update the metadata of the
aggregate to set an AZ 'foo', then the AZ check won't notice that the host
is already part of an AZ and will allow the host to be part of two distinct
AZs.
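
In CLI terms, the sequence is roughly the following (a sketch only;
aggregate, host and AZ names are illustrative, and 'nova' is assumed to be
the default AZ from nova.conf):

  nova aggregate-create agg1 nova        # AZ explicitly set to the default AZ
  nova aggregate-add-host agg1 host1
  nova aggregate-create foo              # no AZ set yet
  nova aggregate-add-host foo host1
  nova aggregate-set-metadata foo availability_zone=foo
  nova availability-zone-list            # host1 now shows up under both AZs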

Proof here : http://paste.openstack.org/show/75066/

I'm on that bug.
-Sylvain

[1] : https://review.openstack.org/#/c/36786

> -Sylvain
>
>
>>On Apr 3, 2014, at 9:05 AM, Steve Gordon  wrote:
>>
>> - Original Message -
>>
>> Currently host aggregates are quite general, but the only ways for an
>> end-user to make use of them are:
>>
>> 1) By making the host aggregate an availability zones (where each host
>> is only supposed to be in one availability zone) and selecting it at
>> instance creation time.
>>
>> 2) By booting the instance using a flavor with appropriate metadata
>> (which can only be set up by admin).
>>
>>
>> I would like to see more flexibility available to the end-user, so I
>> think we should either:
>>
>> A) Allow hosts to be part of more than one availability zone (and allow
>> selection of multiple availability zones when booting an instance), or
>>
>>
>> While changing to allow hosts to be in multiple AZs changes the concept
>> from an operator/user point of view I do think the idea of being able to
>> specify multiple AZs when booting an instance makes sense and would be a
>> nice enhancement for users working with multi-AZ environments - "I'm OK
>> with this instance running in AZ1 and AZ2, but not AZ*".
>>
>> -Steve
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help in re-running openstack

2014-04-04 Thread abhishek jain
Hi shiva

You can reload the OpenStack services by following the steps below:

i)  Enter the devstack directory, i.e. cd devstack
ii) Execute rejoin-stack.sh, i.e. ./rejoin-stack.sh
iii) Press ctrl+a+shift+"
iv) Select the appropriate service to restart

Please let me know if further issues.

Thanks
Abhishek Jain


On Fri, Apr 4, 2014 at 2:13 PM, Deepak Shetty  wrote:

> Shiva,
>   Can u tell what exactly u r trying to change in /opt/stack/ ?
> My guess is that u might be running into stack.sh re-pulling the sources
> hence overriding ur changes ? Try with OFFLINE=True in localrc (create a
> localrc file in /opt/stack/ and put OFFLINE=True) and redo stack.sh
>
>
> On Thu, Apr 3, 2014 at 4:17 PM, shiva m  wrote:
>
>> Hi,
>>
>> I am trying to modify code in /opt/stack/* and did ./unstack.sh and
>> ./stack.sh. But after ./stack.sh it reverts to the previous values. Can
>> anyone please help with where to modify code and re-run? Say if I modify
>> some python file or some configuration file like /etc/nova/nova.conf, how
>> do I make these changes take effect. I have an ubuntu-havana devstack setup.
>>
>> I am new to openstack code, correct if I am wrong.
>>
>> Thanks,
>> Shiva
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Devstack] add support for ceph

2014-04-04 Thread Chmouel Boudjnah
Hello,

We had quite a lengthy discussion on this review :

https://review.openstack.org/#/c/65113/

about a patch that seb has sent to add ceph support to devstack.

The main issue seems to revolve around the fact that in devstack we
only support packages that are in the distros, without having to add
external apt sources for that.

In devstack we are also moving toward a nice and solid plugin system
where people can hook in externally without needing to submit a patch to
add a feature that changes the 'core' of devstack.

I think the best way to go forward with this would be to :

* Split the patch mentioned above to get the generic bits into their
own patch, i.e the storage file :

https://review.openstack.org/#/c/65113/19/lib/storage

and the create_disk (which would need to be used by lib/swift as well) :

https://review.openstack.org/#/c/65113/19/functions

* Get the existing drivers converted to that new storage format.

* Adding new hooks to the plugin system to be able to do what we want
for this:

https://review.openstack.org/#/c/65113/19/lib/cinder

and for injecting things in libvirt :

https://review.openstack.org/#/c/65113/19/lib/nova

Hopefully, for folks using devstack and ceph it would then just be :

$ git clone devstack 
$ curl -O lib/storages/ceph http:///ceph_devstack
(and maybe an another file for extras.d)

am I missing a step ?

Cheers,
Chmouel.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift ring building..

2014-04-04 Thread Shyam Prasad N
Hi,

I have a question regarding the ring building process in a swift cluster.
Many sources online suggest building the rings using ring-builder and scp
the generated ring files to all the nodes in the cluster.
What I'm trying to understand is if the scp step is just to simplify
things, or is it absolutely necessary that the ring files on all the nodes
are exactly the same?
Can I instead build the rings on each node individually?

-- 
-Shyam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] minimal scope covered by third-party testing

2014-04-04 Thread Simon Pasquier
Hi Salvatore,

On 03/04/2014 14:56, Salvatore Orlando wrote:
> Hi Simon,
> 

> 
> I hope stricter criteria will be enforced for Juno; I personally think
> every CI should run at least the smoketest suite for L2/L3 services (eg:
> load balancer scenario will stay optional).

I thought about this a little and I feel like it might not have
_immediately_ caught the issue Kyle talked about [1].

Let's rewind the time line:
1/ Change to *Nova* adding external events API is merged
https://review.openstack.org/#/c/76388/
2/ Change to *Neutron* notifying Nova when ports are ready is merged
https://review.openstack.org/#/c/75253/
3/ Change to *Nova* making libvirt wait for Neutron notifications is merged
https://review.openstack.org/#/c/74832/

At this point, and assuming that the external ODL CI system had been running
the L2/L3 smoke tests, change #3 could still have passed since external
Neutron CIs aren't voting for Nova. Instead it would have voted against
any subsequent change to Neutron.

Simon

[1] https://bugs.launchpad.net/neutron/+bug/1301449

> 
> Salvatore
> 
> [1] https://review.openstack.org/#/c/75304/
> 
> 
> 
> On 3 April 2014 12:28, Simon Pasquier  > wrote:
> 
> Hi,
> 
> I'm looking at [1] but I see no requirement of which Tempest tests
> should be executed.
> 
> In particular, I'm a bit puzzled that it is not mandatory to boot an
> instance and check that it gets connected to the network. To me, this is
> the very minimum for asserting that your plugin or driver is working
> with Neutron *and* Nova (I'm not even talking about security groups). I
> had a quick look at the existing 3rd party CI systems and I found none
> running this kind of check (correct me if I'm wrong).
> 
> Thoughts?
> 
> [1] https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
> --
> Simon Pasquier
> Software Engineer (OpenStack Expertise Center)
> Bull, Architect of an Open World
> Phone: + 33 4 76 29 71 49 
> http://www.bull.com
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K seg

2014-04-04 Thread Padmanabhan Krishnan
The blueprint is updated with more information on the requirements and 
interaction with VDP

https://blueprints.launchpad.net/neutron/+spec/netron-ml2-mechnism-driver-for-cisco-dfa-support


On Monday, March 31, 2014 12:50 PM, Padmanabhan Krishnan  
wrote:
 
Hi Mathieu,
Thanks for the link. Some similarities for sure. I see Nova libvirt being used. 
I had looked at libvirt earlier.

Firstly, the libvirt support that Nova uses to communicate with LLDPAD doesn't 
have support for the latest 2.2 standard. The support is also only for the VEPA 
mode and not for VEB mode. It's also not quite clear how the VLAN provided 
by VDP is used by libvirt and communicated back to OpenStack.
There's already an existing blueprint where i can add more details 
(https://blueprints.launchpad.net/neutron/+spec/netron-ml2-mechnism-driver-for-cisco-dfa-support)

Even for a single physical network, you need more parameters in the ini file. I 
was thinking of Host or Network Overlay with or w/o VDP for Tunnel mode. I will 
add more to the blueprint.
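
(For reference, a rough sketch of the kind of ini wiring Mathieu refers to
below; the physnet and interface names are only illustrative:)

  # ml2_conf.ini
  [ml2]
  type_drivers = flat,vlan

  [ml2_type_flat]
  flat_networks = physnet1

  # linuxbridge agent: map the physnet to a (e.g. VDP-created) interface
  [linux_bridge]
  physical_interface_mappings = physnet1:eth1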

Thanks,
Paddu

On Friday, March 28, 2014 8:42 AM, Mathieu Rohon  
wrote:
 
Hi,


the more I think about your use case, the more I think you should
create a BP to have tenant networks based on interfaces created with the
VDP protocol.
I'm not a VDP specialist, but if it creates some VLAN-backed interfaces,
you might match those physical interfaces with the
physical_interface_mappings parameter in your ml2_conf.ini. Then you
could create flat networks backed on those interfaces.
SR-IOv use cases also talk about using vif_type 802.1qbg :
https://wiki.openstack.org/wiki/Nova-neutron-sriov



Mathieu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] the ability about list the available volume back-ends and their capabilities

2014-04-04 Thread Zhangleiqiang (Trump)
Hi, Mike:

Thanks for your time and your advice. 

I will contact Avishay in #openstack-cinder tonight.


--
zhangleiqiang (Trump)

Best Regards


> -Original Message-
> From: Mike Perez [mailto:thin...@gmail.com]
> Sent: Friday, April 04, 2014 1:51 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [cinder] the ability about list the available 
> volume
> back-ends and their capabilities
> 
> On 06:11 Thu 03 Apr , Zhangleiqiang (Trump) wrote:
> > Hi stackers:
> >
> > I think the ability to list the available volume back-ends, along with
> > their capabilities, total capacity and available capacity is useful for
> > the admin. For example, this can help the admin select a destination for
> > volume migration.
> > But I can't find the cinder api about this ability.
> >
> > I find a BP about this ability:
> > https://blueprints.launchpad.net/cinder/+spec/list-backends-and-capabi
> > lities But the BP is not approved. Who can tell me the reason?
> 
> Hi Zhangleiqiang,
> 
> I think it's not approved because it has not been set to a series goal by the
> drafter. I don't have permission myself to change the series goal, but I would
> recommend going into the #openstack-cinder IRC channel and ask for the BP to
> be set for the Juno release assuming there is a good approach. We'd also need
> a contributor to take on this task.
> 
> I think it would be good to use the os-hosts extension which can be found in
> cinder.api.contrib.hosts and add the additional response information there. It
> already lists total volume/snapshot count and capacity used [1].
> 
> [1] - http://paste.openstack.org/show/74996
> 
> --
> Mike Perez
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Help in re-running openstack

2014-04-04 Thread Deepak Shetty
Shiva,
  Can u tell what exactly u r trying to change in /opt/stack/ ?
My guess is that u might be running into stack.sh re-pulling the sources
hence overriding ur changes ? Try with OFFLINE=True in localrc (create a
localrc file in /opt/stack/ and put OFFLINE=True) and redo stack.sh
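
(A minimal localrc sketch for that; the flag semantics as I understand them:)

  # localrc
  OFFLINE=True   # don't do any network operations, so sources are not re-pulled
  RECLONE=no     # don't re-clone/reset the existing git trees under /opt/stack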


On Thu, Apr 3, 2014 at 4:17 PM, shiva m  wrote:

> Hi,
>
> I am trying to modify code in /opt/stack/* and did ./unstack.sh and
> ./stack.sh. But after ./stack.sh it reverts to the previous values. Can
> anyone please help with where to modify code and re-run? Say if I modify
> some python file or some configuration file like /etc/nova/nova.conf, how
> do I make these changes take effect. I have an ubuntu-havana devstack setup.
>
> I am new to openstack code, correct if I am wrong.
>
> Thanks,
> Shiva
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Whats the way to do cleanup during service shutdown / restart ?

2014-04-04 Thread Deepak Shetty
Resending it with the correct cinder prefix in the subject.

thanx,
deepak


On Thu, Apr 3, 2014 at 7:44 PM, Deepak Shetty  wrote:

>
> Hi,
> I am looking to umount the glsuterfs shares that are mounted as part
> of gluster driver, when c-vol is being restarted or Ctrl-C'ed (as in
> devstack env) or when c-vol service is being shutdown.
>
> I tried to use __del__ in GlusterfsDriver(nfs.RemoteFsDriver) and it
> didn't work
>
>  def __del__(self):
>  LOG.info(_("DPKS: Inside __del__ Hurray!, shares=%s")%
> self._mounted_shares)
>  for share in self._mounted_shares:
>  mount_path = self._get_mount_point_for_share(share)
>  command = ['umount', mount_path]
>  self._do_umount(command, True, share)
>
> self._mounted_shares is defined in the base class (RemoteFsDriver)
>
>1. ^C2014-04-03 13:29:55.547 INFO cinder.openstack.common.service [-]
>Caught SIGINT, stopping children
>2. 2014-04-03 13:29:55.548 INFO cinder.openstack.common.service [-]
>Caught SIGTERM, exiting
>3. 2014-04-03 13:29:55.550 INFO cinder.openstack.common.service [-]
>Caught SIGTERM, exiting
>4. 2014-04-03 13:29:55.560 INFO cinder.openstack.common.service [-]
>Waiting on 2 children to exit
>5. 2014-04-03 13:29:55.561 INFO cinder.openstack.common.service [-]
>Child 30185 exited with status 1
>6. 2014-04-03 13:29:55.562 INFO cinder.volume.drivers.glusterfs [-]
>DPKS: Inside __del__ Hurray!, shares=[]
>7. 2014-04-03 13:29:55.563 INFO cinder.openstack.common.service [-]
>Child 30186 exited with status 1
>8. Exception TypeError: "'NoneType' object is not callable" in method GlusterfsDriver.__del__ of
>>
>ignored
>9. [stack@devstack-vm tempest]$
>
> So the _mounted_shares is empty ([]) which isn't true since I have 2
> glsuterfs shares mounted and when i print _mounted_shares in other parts of
> code, it does show me the right thing.. as below...
>
> From volume/drivers/glusterfs.py @ line 1062:
> LOG.debug(_('Available shares: %s') % self._mounted_shares)
>
> which dumps the debugprint  as below...
>
> 2014-04-03 13:29:45.414 DEBUG cinder.volume.drivers.glusterfs
> [req-2cf69316-cc42-403a-96f1-90e8e77375aa None None] Available shares:
> [u'devstack-vm.localdomain:/gvol1', u'devstack-vm.localdomain:/gvol1'] from 
> (pid=30185) _ensure_shares_mounted
> /opt/stack/cinder/cinder/volume/drivers/glusterfs.py:1061
>  This brings in few Qs ( I am usign devstack env) ...
>
> 1) Is __del__ the right way to do cleanup for a cinder driver ? I have 2
> gluster backends setup, hence 2 cinder-volume instances, but i see __del__
> being called once only (as per above debug prints)
> 2) I tried atexit and registering a function to do the cleanup. Ctrl-C'ing
> c-vol (from screen ) gives the same issue.. shares is empty ([]), but this
> time i see that my atexit handler called twice (once for each backend)
> 3) In general, whats the right way to do cleanup inside cinder volume
> driver when a service is going down or being restarted ?
> 4) The solution should work in both devstack (ctrl-c to shutdown c-vol
> service) and production (where we do service restart c-vol)
>
> Would appreciate a response
>
> thanx,
> deepak
>
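
As a generic, standalone illustration only (not cinder-specific, and not
something verified in this thread): the usual Python pattern is an explicit
cleanup method driven by a signal handler, rather than relying on __del__:

  import signal
  import sys

  class FakeDriver(object):
      # stand-in for a volume driver; only the cleanup call matters here
      def __init__(self):
          self._mounted_shares = ['devstack-vm.localdomain:/gvol1']

      def cleanup(self):
          for share in self._mounted_shares:
              # a real driver would umount the share here
              print('unmounting %s' % share)

  driver = FakeDriver()

  def _handle_exit(signum, frame):
      driver.cleanup()
      sys.exit(0)

  # register explicit handlers instead of relying on __del__, which may only
  # run after module globals have already been torn down
  signal.signal(signal.SIGTERM, _handle_exit)
  signal.signal(signal.SIGINT, _handle_exit)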
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Hosts within two Availability Zones : possible or not ?

2014-04-04 Thread Sylvain Bauza
Hi all,



2014-04-03 18:47 GMT+02:00 Meghal Gosalia :

>  Hello folks,
>
>  Here is the bug [1] which is currently not allowing a host to be part of
> two availability zones.
> This bug was targeted for havana.
>
>  The fix in the bug was made because it was assumed
> that openstack does not support adding hosts to two zones by design.
>
>  The assumption was based on the fact that ---
> if hostX is added to zoneA as well as zoneB,
> and if you boot a vm vmY passing zoneB in boot params,
> nova show vmY still returns zoneA.
>
>  In my opinion, we should fix the case of nova show
> rather than changing aggregate api to not allow addition of hosts to
> multiple zones.
>
>  I have added my comments in comments #7 and #9 on that bug.
>
>  Thanks,
> Meghal
>
>
 [1] Bug - https://bugs.launchpad.net/nova/+bug/1196893
>
>
>


Thanks for the pointer, now I see why the API is preventing a host from
being added to a 2nd aggregate if there is a different AZ. Unfortunately, this
patch missed the fact that aggregate metadata can be modified once the
aggregate is created, so we should add a check when updating metadata in
order to cover all corner cases.

So, IMHO, it's worth providing a patch for API consistency so that we enforce
the fact that a host should be in only one AZ (but possibly 2 or more
aggregates) and see how we can propose to users the ability to provide 2
distinct AZs when booting.

Does everyone agree ?

-Sylvain


>   On Apr 3, 2014, at 9:05 AM, Steve Gordon  wrote:
>
> - Original Message -
>
> Currently host aggregates are quite general, but the only ways for an
> end-user to make use of them are:
>
> 1) By making the host aggregate an availability zones (where each host
> is only supposed to be in one availability zone) and selecting it at
> instance creation time.
>
> 2) By booting the instance using a flavor with appropriate metadata
> (which can only be set up by admin).
>
>
> I would like to see more flexibility available to the end-user, so I
> think we should either:
>
> A) Allow hosts to be part of more than one availability zone (and allow
> selection of multiple availability zones when booting an instance), or
>
>
> While changing to allow hosts to be in multiple AZs changes the concept
> from an operator/user point of view I do think the idea of being able to
> specify multiple AZs when booting an instance makes sense and would be a
> nice enhancement for users working with multi-AZ environments - "I'm OK
> with this instance running in AZ1 and AZ2, but not AZ*".
>
> -Steve
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Icehouse RC1 available

2014-04-04 Thread Thierry Carrez
Hello everyone,

Last but not least, Swift just published its first Icehouse release
candidate. You can find the tarball for 1.13.1-rc1 at:

https://launchpad.net/swift/icehouse/1.13.1-rc1

Unless release-critical issues are found that warrant a release
candidate respin, this RC1 will be formally released together with all
other OpenStack integrated components as the Swift 1.13.1 final version
on April 17. You are therefore strongly encouraged to test and validate
this tarball.

Alternatively, you can directly test the milestone-proposed branch at:
https://github.com/openstack/swift/tree/milestone-proposed

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/swift/+filebug

and tag it *icehouse-rc-potential* to bring it to the release crew's
attention.

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Quota Management

2014-04-04 Thread Yingjun Li
Glad to see this, I will be glad to contribute to it if the project moves 
forward.

On Apr 4, 2014, at 10:01, Cazzolato, Sergio J  
wrote:

> 
> Glad to see that, for sure I'll participate of this session.
> 
> Thanks
> 
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com] 
> Sent: Thursday, April 03, 2014 7:21 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Quota Management
> 
> On Thu, 2014-04-03 at 14:41 -0500, Kevin L. Mitchell wrote:
>> On Thu, 2014-04-03 at 19:16 +, Cazzolato, Sergio J wrote:
>>> Jay, thanks for taking ownership on this idea, we are really 
>>> interested to contribute to this, so what do you think are the next 
>>> steps to move on?
>> 
>> Perhaps a summit session on quota management would be in order?
> 
> Done:
> 
> http://summit.openstack.org/cfp/details/221
> 
> Best,
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Imre Farkas

On 04/03/2014 01:02 PM, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.


ACK for all proposed changes.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread mar...@redhat.com
On 03/04/14 14:02, Robert Collins wrote:
> Getting back in the swing of things...
> 
> Hi,
> like most OpenStack projects we need to keep the core team up to
> date: folk who are not regularly reviewing will lose context over
> time, and new folk who have been reviewing regularly should be trusted
> with -core responsibilities.
> 
> In this month's review:
>  - Dan Prince for -core
>  - Jordan O'Mara for removal from -core
>  - Jiri Tomasek for removal from -core
>  - Jaromir Coufal for removal from -core

+1

> 
> Existing -core members are eligible to vote - please indicate your
> opinion on each of the three changes above in reply to this email.
> 

<---snip>


> 
> -core that are not keeping up recently... :
> 
> |     tomas-8c8 ** |      31    0    4    2   25    8    87.1%   |   1 (  3.2%)  |
> |        marios ** |      27    0    1   17    9    7    96.3%   |   3 ( 11.1%)  |

thanks for the heads up - after some time away, I've been keeping the '3
a day' for the last couple weeks so hopefully this will improve.
However, my reviews are mainly in tripleo-heat-templates and tuskar-ui;
I guess the latter no longer counts towards these statistics (under
horizon?) and I'm not sure how to reconcile this ...? Should I just drop
the tuskar-ui reviews altogether ( I am trying to become more active in
neutron too, so something has to give somewhere)...

thanks! marios


> |      tzumainn ** |      27    0    3   23    1    4    88.9%   |   0 (  0.0%)  |
> |        pblaho ** |      17    0    0    4   13    4   100.0%   |   1 (  5.9%)  |
> |        jomara ** |       0    0    0    0    0    1     0.0%   |   0 (  0.0%)  |
> 
> 
> Please remember - the stats are just an entry point to a more detailed
> discussion about each individual, and I know we all have a bunch of
> work stuff, on an ongoing basis :)
> 
> I'm using the fairly simple metric we agreed on - 'average at least
> three reviews a
> day' as a proxy for 'sees enough of the code and enough discussion of
> the code to be an effective reviewer'. The three review a day thing we
> derived based
> on the need for consistent volume of reviews to handle current
> contributors - we may
> lower that once we're ahead (which may happen quickly if we get more cores... 
> :)
> But even so:
>  - reading three patches a day is a pretty low commitment to ask for
>  - if you don't have time to do that, you will get stale quickly -
> you'll only see under
>33% of the code changes going on (we're doing about 10 commits
>a day - twice as many since december - and hopefully not slowing down!)
> 
> Cheers,
> Rob
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Murano] [Solum] applications in the cloud

2014-04-04 Thread Thomas Spatzier
Hi Steve,

your indexing idea sounds interesting, but I am not sure it would work
reliably. The kind of matching based on names of parameters and outputs and
internal get_attr uses has very strong assumptions and I think there is a
not so low risk of false positives. What if the templates includes some
internal details that would not affect the matching but still change the
behavior in a way that would break the composition. Or what if a user by
chance built a template that by pure coincidence uses the same parameter
and output names as one of those abstract types that was mention, but the
template is simply not built for composition?
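
(In toy form, the matching being discussed is something like the following;
purely illustrative, not any project's actual code:)

  def matches(template_meta, needed_params, needed_outputs):
      # a template "fits" if it exposes at least the required names
      return (set(needed_params) <= set(template_meta['parameters']) and
              set(needed_outputs) <= set(template_meta['outputs']))

  catalog = [
      {'name': 'trove_mysql.yaml',
       'parameters': ['db_username', 'db_password'],
       'outputs': ['db_url']},
      {'name': 'some_app.yaml',
       'parameters': ['flavor', 'image'],
       'outputs': ['website_url']},
  ]

  wanted = [t['name'] for t in catalog
            if matches(t, ['db_username', 'db_password'], ['db_url'])]
  print(wanted)   # -> ['trove_mysql.yaml']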

I think it would be much cleaner to have an explicit attribute in the
template that says "this template can be uses as realization of type
My::SomeType" and use that for presenting the user choice and building the
environment.

Regards,
Thomas

Steve Baker  wrote on 04/04/2014 06:12:38:
> From: Steve Baker 
> To: openstack-dev@lists.openstack.org
> Date: 04/04/2014 06:14
> Subject: Re: [openstack-dev] [Heat] [Murano] [Solum] applications inthe
cloud
>
> On 03/04/14 13:04, Georgy Okrokvertskhov wrote:
> Hi Steve,
>
> I think this is exactly the place where we have a boundary between
> Murano catalog and HOT.
>
> In your example one can use abstract resource type and specify a
> correct implementation via environment file. This is how it will be
> done on the final stage in Murano too.
>
> Murano will solve another issue. In your example user should know
> what template to use as a provider template. In Murano this will be
> done in the following way:
> 1) User selects an app which requires a DB
> 2) Murano sees this requirement for DB and do a search in the app
> catalog to find all apps which expose this functionality. Murano
> uses app package definitions for that.
> 3) User select in UI specific DB implementation he wants to use.
>
> As you see, in Murano case user has no preliminary knowledge of
> available apps\templates and it uses catalog to find it. A search
> criteria can be quite complex with using different application
> attribute. If we think about moving application definition to HOT
> format it should provide all necessary information for catalog.
>
> In order to search apps in catalog which uses HOT format we need
> something like that:
>
> One needs to define abstract resource like
> OS:HOT:DataBase
>
> Than in each DB implementation of DB resource one has to somehow
> refer this abstract resource as a parent like
>
> Resource OS:HOT:MySQLDB
>   Parent: OS:HOT:DataBase
>
> Then catalog part can use this information and build a list of all
> apps\HOTs with resources with parents OS:HOT:DataBase
>
> That is what we are looking for. As you see, in this example I am
> not talking about version and other attributes which might be
> required for catalog.
>
>
> This sounds like a vision for Murano that I could get behind. It
> would be a tool which allows fully running applications to be
> assembled and launched from a catalog of Heat templates (plus some
> app lifecycle workflow beyond the scope of this email).
>
> We could add type interfaces to HOT but I still think duck typing
> would be worth considering. To demonstrate, lets assume that when a
> template gets cataloged, metadata is also indexed about what
> parameters and outputs the template has. So for the case above:
> 1) User selects an app to launch from the catalog
> 2) Murano performs a heat resource-type-list and compares that with
> the types in the template. The resource-type list is missing
> My::App::Database for a resource named my_db
> 3) Murano analyses the template and finds that My::App::Database is
> assigned 2 properties (db_username, db_password) and elsewhere in
> the template is a {get_attr: [my_db, db_url]} attribute access.
> 4) Murano queries glance for templates, filtering by templates which
> have parameters [db_username, db_password] and outputs [db_url]
> (plus whatever appropriate metadata filters)
> 5) Glance returns 2 matches. Murano prompts the user for a choice
> 6) Murano constructs an environment based on the chosen template,
> mapping My::App::Database to the chosen template
> 7) Murano launches the stack
>
> Sure, there could be a type interface called My::App::Database which
> declares db_username, db_password and db_url, but since a heat
> template is in a readily parsable declarative format, all required
> information is available to analyze, both during glance indexing and
> app launching.
>

>

> On Wed, Apr 2, 2014 at 3:30 PM, Steve Baker  wrote:
> On 03/04/14 10:39, Ruslan Kamaldinov wrote:
> > This is a continuation of the "MuranoPL questions" thread.
> >
> > As a result of ongoing discussions, we figured out that definitionof
layers
> > which each project operates on and has responsibility for is not yet
agreed
> > and discussed between projects and teams (Heat, Murano, Solum (in
> > alphabetical order)).
> >
> > Our suggestion and expectation from this working gro

Re: [openstack-dev] [TripleO] reviewer update march

2014-04-04 Thread Ladislav Smola

+1
On 04/03/2014 01:02 PM, Robert Collins wrote:

Getting back in the swing of things...

Hi,
 like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

In this month's review:
  - Dan Prince for -core
  - Jordan O'Mara for removal from -core
  - Jiri Tomasek for removal from -core
  - Jaromir Coufal for removal from -core

Existing -core members are eligible to vote - please indicate your
opinion on each of the three changes above in reply to this email.

Ghe, please let me know if you're willing to be in tripleo-core. Jan,
Jordan, Martyn, Jiri & Jaromir, if you are planning on becoming
substantially more active in TripleO reviews in the short term, please
let us know.

My approach to this caused some confusion a while back, so I'm keeping
the boilerplate :) - I'm
going to talk about stats here, but they are only part of the picture
: folk that aren't really being /felt/ as effective reviewers won't be
asked to take on -core responsibility, and folk who are less active
than needed but still very connected to the project may still keep
them : it's not pure numbers.

Also, it's a vote: that is direct representation by the existing -core
reviewers as to whether they are ready to accept a new reviewer as
core or not. This mail from me merely kicks off the proposal for any
changes.

But, the metrics provide an easy fingerprint - they are a useful tool
to avoid bias (e.g. remembering folk who are just short-term active) -
human memory can be particularly treacherous - see 'Thinking, Fast and
Slow'.

With that prelude out of the way:

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up so they
aren't caught by surprise.

90 day active-enough stats:

+------------------+----------------------------------------------+----------------+
|     Reviewer     | Reviews   -2   -1   +1   +2   +A     +/- %   | Disagreements* |
+------------------+----------------------------------------------+----------------+
|        slagle ** |     655    0  145    7  503  154     77.9%   |  36 (  5.5%)   |
|  clint-fewbar ** |     549    4  120   11  414  115     77.4%   |  32 (  5.8%)   |
|      lifeless ** |     518   34  203    2  279  113     54.2%   |  21 (  4.1%)   |
|           rbrady |     453    0   14  439    0    0     96.9%   |  60 ( 13.2%)   |
|          cmsj ** |     322    0   24    1  297  136     92.5%   |  22 (  6.8%)   |
|        derekh ** |     261    0   50    1  210   90     80.8%   |  12 (  4.6%)   |
|       dan-prince |     257    0   67  157   33   16     73.9%   |  15 (  5.8%)   |
|      jprovazn ** |     190    0   21    2  167   43     88.9%   |  13 (  6.8%)   |
|       ifarkas ** |     186    0   28   18  140   82     84.9%   |   6 (  3.2%)   |
====================================================================================
|         jistr ** |     177    0   31   16  130   28     82.5%   |   4 (  2.3%)   |
|    ghe.rivero ** |     176    1   21   25  129   55     87.5%   |   7 (  4.0%)   |
|        lsmola ** |     172    2   12   55  103   63     91.9%   |  21 ( 12.2%)   |
|             jdob |     166    0   31  135    0    0     81.3%   |   9 (  5.4%)   |
|           bnemec |     138    0   38  100    0    0     72.5%   |  17 ( 12.3%)   |
|       greghaynes |     126    0   21  105    0    0     83.3%   |  22 ( 17.5%)   |
|           dougal |     125    0   26   99    0    0     79.2%   |  13 ( 10.4%)   |
|      tzumainn ** |     119    0   30   69   20   17     74.8%   |   2 (  1.7%)   |
|       rpodolyaka |     115    0   15  100    0    0     87.0%   |  15 ( 13.0%)   |
|          ftcjeff |     103    0    3  100    0    0     97.1%   |   9 (  8.7%)   |
|         thesheep |      93    0   26   31   36   21     72.0%   |   3 (  3.2%)   |
|        pblaho ** |      88    1    8   37   42   22     89.8%   |   3 (  3.4%)   |
| jonpaul-sullivan |      80    0   33   47    0    0     58.8%   |  17 ( 21.2%)   |
|     tomas-8c8 ** |      78    0   15    4   59   27     80.8%   |   4 (  5.1%)   |
|        marios ** |      75    0    7   53   15   10     90.7%   |  14 ( 18.7%)   |
|          stevenk |      75    0   15   60    0    0     80.0%   |   9 ( 12.0%)   |
|             rwsu |      74    0    3   71    0    0     95.9%   |  11 ( 14.9%)   |
|          mkerrin |      70    0   14   56    0    0     80.0%   |  14 ( 20.0%)   |

The === line is set at the just-voted-on minimum expected of core: 3
reviews per work day, 60 work days in a 90 day period (64 - fudge for
holidays), 180 reviews.
I cut the full report out at the point we had been previously - with
the commitment to 3 reviews per day, next month's report will have a
much higher minimum. In future reviews, we'll set the