[openstack-dev] [Swift] Havana Release Notes Known Issues is talking about Nova (Re: [Openstack] OpenStack 2013.2 ("Havana") is released !)

2013-10-18 Thread Akihiro Motoki
Hi Thierry, John,

In the Havana release notes, the Swift known issues section is talking about
a Nova Cells issue. Could you confirm?
https://wiki.openstack.org/wiki/ReleaseNotes/Havana#Known_Issues

Thanks,
Akihiro

On Thu, Oct 17, 2013 at 11:23 PM, Thierry Carrez  wrote:
> Hello everyone,
>
> It is my great pleasure to announce the final release of OpenStack
> 2013.2. It marks the end of the "Havana" 6-month-long development cycle,
> which saw the addition of two integrated components (Ceilometer and
> Heat), the completion of more than 400 feature blueprints and the fixing
> of more than 3000 reported bugs!
>
> You can find source tarballs for each integrated project, together with
> lists of features and bugfixes, at:
>
> OpenStack Compute:https://launchpad.net/nova/havana/2013.2
> OpenStack Object Storage: https://launchpad.net/swift/havana/1.10.0
> OpenStack Image Service:  https://launchpad.net/glance/havana/2013.2
> OpenStack Networking: https://launchpad.net/neutron/havana/2013.2
> OpenStack Block Storage:  https://launchpad.net/cinder/havana/2013.2
> OpenStack Identity:   https://launchpad.net/keystone/havana/2013.2
> OpenStack Dashboard:  https://launchpad.net/horizon/havana/2013.2
> OpenStack Metering:   https://launchpad.net/ceilometer/havana/2013.2
> OpenStack Orchestration:  https://launchpad.net/heat/havana/2013.2
>
> The Havana Release Notes contain an overview of the key features, as
> well as upgrade notes and current lists of known issues. You can access
> them at: https://wiki.openstack.org/wiki/ReleaseNotes/Havana
>
> In 19 days, our community will gather in Hong Kong for the OpenStack
> Summit: 4 days of conference to discuss all things OpenStack and a
> Design Summit to plan the next 6-month development cycle, codenamed
> "Icehouse". It's not too late to join us there, see
> http://www.openstack.org/summit/openstack-summit-hong-kong-2013/ for
> more details.
>
> Congratulations to everyone who contributed to this development cycle
> and participated in making this awesome release possible!
>
> --
> Thierry Carrez (ttx)
>
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Call for a clear COPYRIGHT-HOLDERS file in all OpenStack projects (and [trove] python-troveclient_0.1.4-1_amd64.changes REJECTED)

2013-10-18 Thread Thomas Goirand

Hi there,

TroveClient just got rejected by Debian FTP masters. Reply from Luke
Faraone is below.

In general, I would strongly advise that a clean COPYRIGHT-HOLDERS file
be created listing the copyright holders. Why? Because it is hard
to distinguish between authors and copyright holders, which are very
distinct things. Listing the authors in debian/copyright doesn't seem to
satisfy the FTP masters either... :(

FYI, my reply was that I knew some of the authors were working for
Rackspace, because I met them in Portland, and that I knew Rackspace was
one of the copyright holders. Though that's of course not enough for the
Debian FTP masters.

Your thoughts?

Cheers,

Thomas Goirand (zigo)

-------- Original Message --------
Subject: [Openstack-devel] python-troveclient_0.1.4-1_amd64.changes REJECTED
Date: Sat, 19 Oct 2013 04:00:19 +
From: Luke Faraone 
To: PKG OpenStack , Thomas
Goirand 


Dear maintainer,

debian/copyright is **not** an AUTHORS list. This package appears to be
Copyright (c) 2013 Hewlett-Packard Development Company, L.P., and some
other companies, not copyrighted by each individual employee at HP who
worked on it.

Your automated debian/copyright generation is most probably suboptimal for
most packages, and is most certainly not a substitute for manual review.
One missed copyright holder:

python-troveclient-0.1.4\troveclient\base.py:
Copyright 2010 Jacob Kaplan-Moss

Cheers,

Luke Faraone
FTP Team
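
For illustration only (not the actual package file, and the license name is
assumed), a machine-readable debian/copyright that records holders rather
than authors — including the missed holder above — might look like:

```
Files: *
Copyright: 2013 Hewlett-Packard Development Company, L.P.
License: Apache-2.0

Files: troveclient/base.py
Copyright: 2010 Jacob Kaplan-Moss
License: Apache-2.0
```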


===

Please feel free to respond to this email if you don't understand why
your files were rejected, or if you upload new files which address our
concerns.


___
Openstack-devel mailing list
openstack-de...@lists.alioth.debian.org
http://lists.alioth.debian.org/mailman/listinfo/openstack-devel






Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread Adam Young

On 10/18/2013 07:21 PM, Sean Dague wrote:

On 10/18/2013 05:09 PM, Dolph Mathews wrote:


On Fri, Oct 18, 2013 at 3:19 PM, David Stanek <dsta...@dstanek.com> wrote:


On Fri, Oct 18, 2013 at 1:48 PM, Sean Dague <s...@dague.net> wrote:

On 10/18/2013 12:04 PM, Brant Knudson wrote:


2) "git clone"ing the keystoneclient doesn't work well with
parallel
testing (we have a similar problem in our tests with our
"pristine"
database backup)


Can you go into the specifics of why?


We use unsafe paths for the test SQLite database and test config
files.  Instead of using something like tempfile we are using
hardcoded paths.  When the setUp method is run in parallel it will
stomp on other tests.  I believe the 'git clone' is the same way.
  The clone happens in the setUp so if you have 2 test methods in
that test class one of the cloning operations will break.

I have a bug filed for the DB/config file issue already. The
cloning issue may be solved by putting it into setUpClass instead
of setUp.  I'd have to try it.


test_keystoneclient is really an integration test between the client &
server, but expecting internet access to run the tests in keystone's own
repo has been a long-standing complaint (although this bug was only
recently filed):
https://bugs.launchpad.net/keystone/+bug/1191999


Ok, cool. It sounds like we actually should probably talk through 
what's all needed to do this right for keystone at summit, especially 
as we're talking about creating a new class of tests for it. Can 
someone propose a session for it in the QA track?


I think many of us were caught off guard by the patch as typically 
we're pretty deliberate about structure in the tree, and it hadn't yet 
come up at a QA meeting. So a summit session probably could get us all 
agreed on a plan moving forward and make sure we figure out all the 
needs of keystone here. I do think it might warrant uniqueness given 
that *all* the other services need it.


-Sean


Done
http://summit.openstack.org/cfp/details/313



Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread Sean Dague

On 10/18/2013 05:09 PM, Dolph Mathews wrote:


On Fri, Oct 18, 2013 at 3:19 PM, David Stanek <dsta...@dstanek.com> wrote:


On Fri, Oct 18, 2013 at 1:48 PM, Sean Dague <s...@dague.net> wrote:

On 10/18/2013 12:04 PM, Brant Knudson wrote:


2) "git clone"ing the keystoneclient doesn't work well with
parallel
testing (we have a similar problem in our tests with our
"pristine"
database backup)


Can you go into the specifics of why?


We use unsafe paths for the test SQLite database and test config
files.  Instead of using something like tempfile we are using
hardcoded paths.  When the setUp method is run in parallel it will
stomp on other tests.  I believe the 'git clone' is the same way.
  The clone happens in the setUp so if you have 2 test methods in
that test class one of the cloning operations will break.

I have a bug filed for the DB/config file issue already.  The
cloning issue may be solved by putting it into setUpClass instead
of setUp.  I'd have to try it.


test_keystoneclient is really an integration test between the client &
server, but expecting internet access to run the tests in keystone's own
repo has been a long-standing complaint (although this bug was only
recently filed):
https://bugs.launchpad.net/keystone/+bug/1191999


Ok, cool. It sounds like we actually should probably talk through what's 
all needed to do this right for keystone at summit, especially as we're 
talking about creating a new class of tests for it. Can someone propose 
a session for it in the QA track?


I think many of us were caught off guard by the patch as typically we're 
pretty deliberate about structure in the tree, and it hadn't yet come up 
at a QA meeting. So a summit session probably could get us all agreed on 
a plan moving forward and make sure we figure out all the needs of 
keystone here. I do think it might warrant uniqueness given that *all* 
the other services need it.


-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] Havana neutron security groups config issue

2013-10-18 Thread Aaron Rosen
Is anything showing up in the agent logs on the hypervisors? Also, can you
confirm you have this setting in your nova.conf:


libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
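
For reference, a minimal [DEFAULT] fragment of nova.conf with the neutron
options in place — values are illustrative (the URL comes from the logs
quoted later in this thread), and the option names should be checked
against your Havana packages:

```ini
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://172.16.124.16:9696
security_group_api = neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
```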



On Fri, Oct 18, 2013 at 1:14 PM, Leandro Reox wrote:

> Aaron, I fixed the config issues by moving the neutron opts up to the
> default section. But now I'm having this issue:
>
> I can launch instances normally, but it seems that the rules are not getting
> applied anywhere; I have full access to the docker containers. If I run
> iptables -t nat -L and iptables -L, no rules seem to be applied to any flow.
>
> I see the calls in the nova-api log normally..., but no rules applied
>
>
> 2013-10-18 16:10:09.873 31548 DEBUG neutronclient.client [-] RESP:{'date':
> 'Fri, 18 Oct 2013 20:10:07 GMT', 'status': '200', 'content-length': '2331',
> 'content-type': 'application/json; charset=UTF-8', 'content-location': '
> http://172.16.124.16:9696/v2.0/security-groups.json'} {"security_groups":
> [{"tenant_id": "df26f374a7a84eddb06881c669ffd62f", "name": "default",
> "description": "default", "security_group_rules": [{"remote_group_id":
> null, "direction": "egress", "remote_ip_prefix": null, "protocol": null,
> "ethertype": "IPv4", "tenant_id": "df26f374a7a84eddb06881c669ffd62f",
> "port_range_max": null, "port_range_min": null, "id":
> "131f26d3-6b7b-47ef-9abf-fd664e59a972", "security_group_id":
> "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}, {"remote_group_id": null,
> "direction": "egress", "remote_ip_prefix": null, "protocol": null,
> "ethertype": "IPv6", "tenant_id": "df26f374a7a84eddb06881c669ffd62f",
> "port_range_max": null, "port_range_min": null, "id":
> "93a8882b-adcd-489a-89e4-694f5955", "security_group_id":
> "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}, {"remote_group_id":
> "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction": "ingress",
> "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv4",
> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
> "port_range_min": null, "id": "fb15316c-efd0-4a70-ae98-23f260f0d76d",
> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
> {"remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction":
> "ingress", "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
> "port_range_min": null, "id": "fc524bb9-b015-42b0-bdab-cd64db2763a6",
> "security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}], "id":
> "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}, {"tenant_id":
> "df26f374a7a84eddb06881c669ffd62f", "name": "culo", "description": "",
> "security_group_rules": [{"remote_group_id": null, "direction": "egress",
> "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
> "tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
> "port_range_min": null, "id": "2c23f70a-691b-4601-87a0-2ec092488746",
> "security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"},
> {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
> "protocol": null, "ethertype": "IPv4", "tenant_id":
> "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
> "port_range_min": null, "id": "7a445e16-81c1-45c1-8efd-39ce3bcd9ca6",
> "security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"}], "id":
> "fe569b17-d6e0-4b1e-bae3-1132e748190c"}]}
>  http_log_resp
> /usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
> 2013-10-18 16:10:09.959 31548 INFO nova.osapi_compute.wsgi.server
> [req-87c41dc0-d90a-47b9-bfa8-bd7921a26609 223f36a9e1fc44659ac93479cb508902
> df26f374a7a84eddb06881c669ffd62f] 172.16.124.10 "GET
> /v2/df26f374a7a84eddb06881c669ffd62f/servers/detail HTTP/1.1" status: 200
> len: 1878 time: 0.6089120
>
>
>
>
> On Fri, Oct 18, 2013 at 5:07 PM, Aaron Rosen  wrote:
>
>> Do you have [DEFAULT] at the top of your nova.conf? Could you pastebin
>> your nova.conf for us to see?
>>  On Oct 18, 2013 12:31 PM, "Leandro Reox"  wrote:
>>
>>> Yes it is, but I found that it is not reading the parameter from
>>> nova.conf. I forced it in the code in /network/manager.py and it finally
>>> took the argument, but then it stack-traces because it says there is no
>>> neutron_url option, and if I fix that it breaks on the next neutron
>>> parameter, like the timeout:
>>>
>>> File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
>>> 1648, in __getattr__
>>> 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack raise
>>> NoSuchOptError(name)
>>> 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack NoSuchOptError:
>>> no such option: neutron_url
>>>
>>> and then
>>>
>>> File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
>>> 1648, in __getattr__
>>> 2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack raise
>>> NoSuchOptError(name)
>>> 2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack NoSuchOptError:
>>> no such option: neutron_url_timeout
>>>
>>> It's really weird, like it's not reading the nova.conf neutron parameters
>>> at all ...
>>>
>

Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-18 Thread Doug Hellmann
On Fri, Oct 18, 2013 at 2:21 PM, John Dennis  wrote:

> On 10/18/2013 12:57 PM, Doug Hellmann wrote:
> >
> >
> >
> > On Thu, Oct 17, 2013 at 2:24 PM, John Dennis  > > wrote:
> >
> > On 10/17/2013 12:22 PM,  Luis A. Garcia wrote:
> > > On 10/16/2013 1:11 PM, Doug Hellmann wrote:
> > >>
> > >> [snip]
> > >> Option 3 is closer to the new plan for Icehouse, which is to have
> _()
> > >> return a Message, allow Message to work in a few contexts like a
> > string
> > >> (so that, for example, log calls and exceptions can be left
> > alone, even
> > >> if they use % to combine a translated string with arguments), but
> > then
> > >> have the logging and API code explicitly handle the translation of
> > >> Message instances so we can always pass unicode objects outside of
> > >> OpenStack code (to logging or to web frameworks). Since the
> > logging code
> > >> is part of Oslo and the API code can be, this seemed to provide
> > >> isolation while removing most of the magic.
> > >>
> > >
> > > I think this is exactly what we have right now inherited form
> Havana.
> > > The _() returns a Message that is then translated on-demand by the
> API
> > > or in a special Translation log handler.
> > >
> > > We just did not make Message look and feel enough like a str() and
> > some
> > > outside components (jsonifier in Glance and log Formatter all
> > over) did
> > > not know how to handle non text types correctly when non-ascii
> > > characters were present.
> > >
> > > I think extending from unicode and removing all the
> implementations in
> > > place such that the unicode implementation kick in for all magic
> > methods
> > > will solve the problems we saw at the end of Havana.
> >
> > I'm relatively new to OpenStack so I can't comment on prior OpenStack
> > implementations but I'm a long standing veteran of Python i18n
> issues.
> >
> > What you're describing sounds a lot like problems that result from
> the
> > fact Python's default encoding is ASCII as opposed to the more
> sensible
> > UTF-8. I have a long write up on this issue from a few years ago but
> > I'll cut to the chase. Python will attempt to automatically encode
> > Unicode objects into ASCII during output which will fail if there are
> > non-ASCII code points in the Unicode. Python does this is in two
> > distinct contexts depending on whether destination of the output is a
> > file or terminal. If it's a terminal it attempts to use the encoding
> > associated with the TTY. Hence you can different results if you
> output
> > to a TTY or a file handle.
> >
> >
> > That was related to the problem we had with logging and Message
> instances.
> >
> >
> >
> > The simple solution to many of the encoding exceptions that Python
> will
> > throw is to override the default encoding and change it to UTF-8. But
> > the default encoding is locked by site.py due to internal Python
> string
> > optimizations which cache the default encoded version of the string
> so
> > the encoding happens only once. Changing the default encoding would
> > invalidate cached strings and there is no mechanism to deal with
> that,
> > that's why the default encoding is locked. But you can change the
> > default encoding using this trick if you do early enough during the
> > module loading process:
> >
> >
> > I don't think we want to have force the encoding at startup. Setting the
> > locale properly through the environment and then using unicode objects
> > also solves the issue without any startup timing issues, and allows
> > deployers to choose the encoding for output.
>
>
> Setting the locale only solves some of the problems, the locale is only
> respected some of the time. The discrepancies and inconsistencies in how
> Unicode conversion occurs in Python2 is maddening and one of the worst
> aspects of Python2, it was never carefully thought out, Unicode in
> Python2 is basically a bolted on hack that only works if every piece of
> code plays by the exact same rules which of course they don't and never
> will. I can almost guarantee unless you attack this problem at the core
> you'll continue to get bitten. Either code is encoding aware and
> explicitly forces a codec (presumably utf-8) or the code is encoding
> naive and allows the default encoding to be applied, except when the
> locale is respected which overrides the default encoding for the naive
> case.
>

The vast majority of our code should not care at all about encodings or
locales. If we're encoding and decoding strings all over the place, we're
doing it wrong. That's why I wanted Message.__str__() to raise an exception
-- to help us find the places where we are treating something that should
be a unicode string like it is a byte string.


>
> When Python3 was being worked on on

Re: [openstack-dev] [Nova] VMWare Mine Sweeper, Congrats!

2013-10-18 Thread Sreeram Yerrapragada
We had some infrastructure issues in the morning and went back to silent mode.
I just re-triggered the tempest run for your patchset. Also note that until we
stabilize our CI infrastructure you will only see postings from VMware
Minesweeper for passed builds. For failed builds we will manually update the
review.

Thanks 
Sreeram 

- Original Message -

From: "Yaguang Tang"  
To: "OpenStack Development Mailing List"  
Sent: Friday, October 18, 2013 8:59:19 AM 
Subject: Re: [openstack-dev] [Nova] VMWare Mine Sweeper, Congrats! 

How can I enable or trigger Mine Sweeper for VMware-related patches? I
updated a patch for the VMware driver today,
https://review.openstack.org/#/c/51793/ , but haven't seen any results
posted.


2013/10/18 Sean Dague <s...@dague.net>



On 10/17/2013 02:29 PM, Dan Smith wrote: 




This system is running tempest against a VMWare deployment and posting 
the results publicly. This is really great progress. It will go a long 
way in helping reviewers be more confident in changes to this driver. 



This is huge progress, congrats and thanks to the VMware team for making 
this happen! There is really no substitute for the value this will 
provide for overall quality. 



Agreed. Nice job guys! It's super cool to now see SmokeStack and Mine Sweeper 
posting back on patches. 

Tip of the hat to the VMWare team for pulling this together so quickly. 

-Sean 

-- 
Sean Dague 
http://dague.net 








-- 
Tang Yaguang 

Canonical Ltd. | www.ubuntu.com | www.canonical.com 
Mobile: +86 152 1094 6968 
gpg key: 0x187F664F 



Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread Dolph Mathews
On Fri, Oct 18, 2013 at 3:19 PM, David Stanek  wrote:

>
> On Fri, Oct 18, 2013 at 1:48 PM, Sean Dague  wrote:
>
>> On 10/18/2013 12:04 PM, Brant Knudson wrote:
>>
>>>
>>> 2) "git clone"ing the keystoneclient doesn't work well with parallel
>>> testing (we have a similar problem in our tests with our "pristine"
>>> database backup)
>>>
>>
>> Can you go into the specifics of why?
>>
>
> We use unsafe paths for the test SQLite database and test config files.
>  Instead of using something like tempfile we are using hardcoded paths.
>  When the setUp method is run in parallel it will stomp on other tests.  I
> believe the 'git clone' is the same way.  The clone happens in the setUp so
> if you have 2 test methods in that test class one of the cloning operations
> will break.
>
> I have a bug filed for the DB/config file issue already.  The cloning
> issue may be solved by putting it into setUpClass instead of setUp.  I'd
> have to try it.
>

test_keystoneclient is really an integration test between the client &
server, but expecting internet access to run the tests in keystone's own
repo has been a long-standing complaint (although this bug was only
recently filed):
https://bugs.launchpad.net/keystone/+bug/1191999


>
>
> --
> David
> blog: http://www.traceback.org
> twitter: http://twitter.com/dstanek
> www: http://dstanek.com
>
>
>


-- 

-Dolph


Re: [openstack-dev] Does openstack have a notification system that will let us know when a server changes state ?

2013-10-18 Thread Gabriel Hurley
The answer is "sort of". Most projects (including Nova) publish to an RPC 
"notifications" channel (e.g. in RabbitMQ, or whichever broker you use in your 
deployment). This is how Ceilometer gets some of its data.

There is common code for connecting to the notification queue in Oslo (the 
"rpc" and "notifier" modules, particularly), but the exercise of actually 
setting up your consumer is left up to you, and there are various gotchas that 
aren't well-documented. Ceilometer's code is a reasonable starting point for 
building your own.

As this is an area I've been experimenting with lately I'll say that once you 
get it all working it is certainly functional and will deliver exactly what 
you're asking for, but it can be a fair bit of engineering effort if you're not 
familiar with how these things work already.

This is an area I hope can be improved in OpenStack in future releases.
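
As a rough sketch of the consuming side — illustrative only; in a real
consumer this handler would be attached to an AMQP queue bound to the
notifications topic (e.g. via kombu or Oslo's rpc/notifier modules), and the
event and field names below follow the common compute.instance.* convention
but should be verified against your deployment:

```python
def handle_notification(body):
    """Return (instance_id, new_state) for instance-update events, else None."""
    if body.get("event_type") != "compute.instance.update":
        return None
    payload = body.get("payload", {})
    return payload.get("instance_id"), payload.get("state")


# Example message, trimmed to just the fields used above:
sample = {
    "event_type": "compute.instance.update",
    "payload": {"instance_id": "instance-1", "state": "active"},
}
print(handle_notification(sample))  # ('instance-1', 'active')
```

This is the push-based alternative to polling the API for BUILD → ACTIVE
transitions.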

Hope that helps,


-  Gabriel

From: openstack learner [mailto:openstacklea...@gmail.com]
Sent: Friday, October 18, 2013 11:57 AM
To: openst...@lists.openstack.org; openstack-dev@lists.openstack.org
Subject: [openstack-dev] Does openstack have a notification system that will 
let us know when a server changes state ?

Hi all,


I am using the OpenStack Python API. After I boot an instance, I keep 
polling the instance status to check if it changes from BUILD to ACTIVE.

My question is:

Does OpenStack have a notification system that will let us know when a VM 
changes state (e.g. goes into the ACTIVE state)? Then we won't have to keep 
polling when we need to know about a change in the machine state.

Thanks,
xin



Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-18 Thread Tim Simpson
Hi Josh,

>> Given that Trove currently only supports a single datastore deployment per 
>> control system, does the current work also allow for a default type/version 
>> to be defined so that operators of Trove can set this as a property to 
>> maintain the current API compatibility/behavior?

Yes, the current pull request to support this allows for a default type, which, 
if there is only a single version for that type in the Trove infrastructure 
database, means that the existing behavior would be preserved. However as soon 
as an operator adds more than one datastore version of the default type then 
API users would need to always include the version ID. This would be fixed by 
recommendation #4 in my original message.

Thanks,

Tim



From: Josh Odom [josh.o...@rackspace.com]
Sent: Friday, October 18, 2013 3:16 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] [Trove] How users should specify a datastore type 
when creating an instance

Hi Tim,
I do think your recommendation in 3 & 4 makes a lot of sense and improves the 
usability of the API.  Given that Trove currently only supports a single 
datastore deployment per control system, does the current work also allow for a 
default type/version to be defined so that operators of Trove can set this as a 
property to maintain the current API compatibility/behavior?

Josh


From: Tim Simpson <tim.simp...@rackspace.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Friday, October 18, 2013 2:30 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Trove] How users should specify a datastore type when 
creating an instance

Hello fellow Trovians,

There has been some good work recently to figure out a way to specify a 
specific datastore  when using Trove. This is essential to supporting multiple 
datastores from the same install of Trove.

I have an issue with some elements of the proposed solution though, so I 
decided I'd start a thread here so we could talk about it.

As a quick refresher, here is the blueprint for this work (there are some 
gists amended to the end, but I figured the mailing list would be an easier 
venue for discussion):
https://wiki.openstack.org/wiki/Trove/trove-versions-types

One issue I have is with the way the instance create call will change to 
support different data stores. For example, here is the post call:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore_type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
  "datastore_version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
  "volume" : { "size" : "1" }
}
}
"""

1. I think since we have two fields in the instance object we should make a new 
object for datastore and avoid the name prefixing, like this:

"""
{
 "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  },
  "volume" : { "size" : "1" }
}
}
"""

2. I also think a datastore_version alone should be sufficient since the 
associated datastore type will be implied:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  },
  "volume" : { "size" : "1" }
}
}
"""

3. Additionally, while a datastore_type should have an ID in the Trove 
infrastructure database, it should also be possible to pass just the name of the 
datastore type to the instance call, such as "mysql" or "mongo". Maybe we could 
allow this in addition to the ID? I think this form should actually use the 
argument "type", and the ID should then be passed as "type_id" instead.

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "mysql",
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  },
  "volume" : { "size" : "1" }
}
}

"""

4. Additionally, in the current pull request to implement this it is possible 
to avoid passing a version, but only if no more than one version of the 
datastore_type exists in the database.

I think instead the datastore_type row in the database should also have a 
"default_version_id" property, that an operator could update to the most recent 
version or whatever other criteria they wish to use, meaning the call could 
become this simple:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "mysql"
  },
  "volume" : { "size" : "1" }
}
}
"""

Thoughts?

Thanks,

Tim


Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread David Stanek
On Fri, Oct 18, 2013 at 1:48 PM, Sean Dague  wrote:

> On 10/18/2013 12:04 PM, Brant Knudson wrote:
>
>>
>> 2) "git clone"ing the keystoneclient doesn't work well with parallel
>> testing (we have a similar problem in our tests with our "pristine"
>> database backup)
>>
>
> Can you go into the specifics of why?


We use unsafe paths for the test SQLite database and test config files.
 Instead of using something like tempfile we are using hardcoded paths.
 When the setUp method is run in parallel it will stomp on other tests.  I
believe the 'git clone' is the same way.  The clone happens in the setUp so
if you have 2 test methods in that test class one of the cloning operations
will break.

I have a bug filed for the DB/config file issue already.  The cloning issue
may be solved by putting it into setUpClass instead of setUp.  I'd have to
try it.
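
A sketch of the tempfile-based fix (illustrative, not keystone's actual test
code): each test allocates its own scratch directory, so parallel setUp()
calls can never collide on a shared hardcoded path.

```python
import os
import shutil
import tempfile
import unittest


class ParallelSafeTest(unittest.TestCase):
    """Each test gets a private scratch directory via tempfile.mkdtemp(),
    so setUp() calls running in parallel never stomp on each other."""

    def setUp(self):
        self.tmpdir = tempfile.mkdtemp()
        self.db_path = os.path.join(self.tmpdir, "test.sqlite")

    def tearDown(self):
        shutil.rmtree(self.tmpdir)

    def test_db_path_is_private(self):
        self.assertTrue(self.db_path.startswith(self.tmpdir))


# Two "concurrent" tests never share a database path:
a = ParallelSafeTest("test_db_path_is_private")
b = ParallelSafeTest("test_db_path_is_private")
a.setUp(); b.setUp()
print(a.db_path != b.db_path)  # True
a.tearDown(); b.tearDown()
```

The same pattern applies to the git clone target directory.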

-- 
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek
www: http://dstanek.com


Re: [openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-18 Thread Josh Odom
Hi Tim,
I do think your recommendation in 3 & 4 makes a lot of sense and improves the 
usability of the API.  Given that Trove currently only supports a single 
datastore deployment per control system, does the current work also allow for a 
default type/version to be defined so that operators of Trove can set this as a 
property to maintain the current API compatibility/behavior?

Josh


From: Tim Simpson <tim.simp...@rackspace.com>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Friday, October 18, 2013 2:30 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Trove] How users should specify a datastore type when 
creating an instance

Hello fellow Trovians,

There has been some good work recently to figure out a way to specify a 
specific datastore  when using Trove. This is essential to supporting multiple 
datastores from the same install of Trove.

I have an issue with some elements of the proposed solution though, so I 
decided I'd start a thread here so we could talk about it.

As a quick refresher, here is the blueprint for this work (there are 
some gists appended to the end, but I figured the mailing list would be 
an easier venue for discussion):
https://wiki.openstack.org/wiki/Trove/trove-versions-types

One issue I have is with the way the instance create call will change to 
support different data stores. For example, here is the post call:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore_type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
  "datastore_version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
  "volume" : { "size" : "1" }
}
}
"""

1. I think since we have two fields in the instance object we should make a new 
object for datastore and avoid the name prefixing, like this:

"""
{
 "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  },
  "volume" : { "size" : "1" }
}
}
"""

2. I also think a datastore_version alone should be sufficient since the 
associated datastore type will be implied:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  },
  "volume" : { "size" : "1" }
}
}
"""

3. Additionally, while a datastore_type should have an ID in the Trove 
infrastructure database, it should also be possible to pass just the name of the 
datastore type to the instance call, such as "mysql" or "mongo". Maybe we could 
allow this in addition to the ID? I think this form should actually use the 
argument "type", and the id should then be passed as "type_id" instead.

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "mysql",
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  },
  "volume" : { "size" : "1" }
}
}

"""

4. Additionally, in the current pull request to implement this it is possible 
to avoid passing a version, but only if no more than one version of the 
datastore_type exists in the database.

I think instead the datastore_type row in the database should also have a 
"default_version_id" property, that an operator could update to the most recent 
version or whatever other criteria they wish to use, meaning the call could 
become this simple:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "mysql"
  },
  "volume" : { "size" : "1" }
}
}
"""

Thoughts?

Thanks,

Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Havana neutron security groups config issue

2013-10-18 Thread Leandro Reox
Aaron, I fixed the config issues by moving the neutron opts up to the
[DEFAULT] section. But now I'm having this issue:

I can launch instances normally, but it seems that the rules are not
getting applied anywhere; I have full access to the docker containers.
If I run iptables -t nat -L and iptables -L, no rules seem to be
applied to any flow.

I see the calls on the nova-api normally ..., but no rule is applied.


2013-10-18 16:10:09.873 31548 DEBUG neutronclient.client [-] RESP:{'date':
'Fri, 18 Oct 2013 20:10:07 GMT', 'status': '200', 'content-length': '2331',
'content-type': 'application/json; charset=UTF-8', 'content-location': '
http://172.16.124.16:9696/v2.0/security-groups.json'} {"security_groups":
[{"tenant_id": "df26f374a7a84eddb06881c669ffd62f", "name": "default",
"description": "default", "security_group_rules": [{"remote_group_id":
null, "direction": "egress", "remote_ip_prefix": null, "protocol": null,
"ethertype": "IPv4", "tenant_id": "df26f374a7a84eddb06881c669ffd62f",
"port_range_max": null, "port_range_min": null, "id":
"131f26d3-6b7b-47ef-9abf-fd664e59a972", "security_group_id":
"2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}, {"remote_group_id": null,
"direction": "egress", "remote_ip_prefix": null, "protocol": null,
"ethertype": "IPv6", "tenant_id": "df26f374a7a84eddb06881c669ffd62f",
"port_range_max": null, "port_range_min": null, "id":
"93a8882b-adcd-489a-89e4-694f5955", "security_group_id":
"2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}, {"remote_group_id":
"2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction": "ingress",
"remote_ip_prefix": null, "protocol": null, "ethertype": "IPv4",
"tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
"port_range_min": null, "id": "fb15316c-efd0-4a70-ae98-23f260f0d76d",
"security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"},
{"remote_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb", "direction":
"ingress", "remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
"tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
"port_range_min": null, "id": "fc524bb9-b015-42b0-bdab-cd64db2763a6",
"security_group_id": "2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}], "id":
"2391ac97-447e-45b7-97f2-cd8fbcafb0cb"}, {"tenant_id":
"df26f374a7a84eddb06881c669ffd62f", "name": "culo", "description": "",
"security_group_rules": [{"remote_group_id": null, "direction": "egress",
"remote_ip_prefix": null, "protocol": null, "ethertype": "IPv6",
"tenant_id": "df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
"port_range_min": null, "id": "2c23f70a-691b-4601-87a0-2ec092488746",
"security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"},
{"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null,
"protocol": null, "ethertype": "IPv4", "tenant_id":
"df26f374a7a84eddb06881c669ffd62f", "port_range_max": null,
"port_range_min": null, "id": "7a445e16-81c1-45c1-8efd-39ce3bcd9ca6",
"security_group_id": "fe569b17-d6e0-4b1e-bae3-1132e748190c"}], "id":
"fe569b17-d6e0-4b1e-bae3-1132e748190c"}]}
 http_log_resp
/usr/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
2013-10-18 16:10:09.959 31548 INFO nova.osapi_compute.wsgi.server
[req-87c41dc0-d90a-47b9-bfa8-bd7921a26609 223f36a9e1fc44659ac93479cb508902
df26f374a7a84eddb06881c669ffd62f] 172.16.124.10 "GET
/v2/df26f374a7a84eddb06881c669ffd62f/servers/detail HTTP/1.1" status: 200
len: 1878 time: 0.6089120




On Fri, Oct 18, 2013 at 5:07 PM, Aaron Rosen  wrote:

> Do you have [default] at the top of your nova.conf? Could you pastebin
> your nova.conf  for us to see.
> On Oct 18, 2013 12:31 PM, "Leandro Reox"  wrote:
>
>> Yes it is, but i found that is not reading the parameter from the
>> nova.conf , i forced on the code on /network/manager.py and took the
>> argument finally but stacks cause says that the neutron_url and if i fix it
>> it stacks on the next neutron parameter like timeout :
>>
>> File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
>> 1648, in __getattr__
>> 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack raise
>> NoSuchOptError(name)
>> 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack NoSuchOptError: no
>> such option: neutron_url
>>
>> and then
>>
>> File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
>> 1648, in __getattr__
>> 2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack raise
>> NoSuchOptError(name)
>> 2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack NoSuchOptError: no
>> such option: neutron_url_timeout
>>
>> Its really weird, like its not reading the nova.conf neutron parameter at
>> all ...
>>
>> If i hardcode all the settings on the neutronv2/init.py .. at least it
>> works, and bring all the secgroup details from netruon
>>
>>
>>
>> On Fri, Oct 18, 2013 at 3:48 PM, Aaron Rosen  wrote:
>>
>>> Hi Leandro,
>>>
>>>
>>> I don't believe the setting of:  security_group_api=neutron in
>>> nova.conf actually doesn't matter at all on the compute nodes (still good
>>> to set it though). But it m

Re: [openstack-dev] [horizon] dashboard not showing all hard drive

2013-10-18 Thread Qing He
Hi,
My system hard drive of 250G was divided into two volumes, one of 50G 
and the rest. But the dashboard only shows 50G; I'm wondering if anyone 
knows how to make it show the other 200G?

Thanks,

Qing
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Havana neutron security groups config issue

2013-10-18 Thread Leandro Reox
Now that I can launch instances normally, it seems that the rules are
not getting applied anywhere; I have full access to the docker
containers. If I run iptables -t nat -L and iptables -L, no rules seem
to be applied to any flow.


On Fri, Oct 18, 2013 at 4:28 PM, Leandro Reox wrote:

> Yes it is, but i found that is not reading the parameter from the
> nova.conf , i forced on the code on /network/manager.py and took the
> argument finally but stacks cause says that the neutron_url and if i fix it
> it stacks on the next neutron parameter like timeout :
>
> File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
> 1648, in __getattr__
> 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack raise
> NoSuchOptError(name)
> 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack NoSuchOptError: no
> such option: neutron_url
>
> and then
>
> File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
> 1648, in __getattr__
> 2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack raise
> NoSuchOptError(name)
> 2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack NoSuchOptError: no
> such option: neutron_url_timeout
>
> Its really weird, like its not reading the nova.conf neutron parameter at
> all ...
>
> If i hardcode all the settings on the neutronv2/init.py .. at least it
> works, and bring all the secgroup details from netruon
>
>
>
> On Fri, Oct 18, 2013 at 3:48 PM, Aaron Rosen  wrote:
>
>> Hi Leandro,
>>
>>
>> I don't believe the setting of:  security_group_api=neutron in nova.conf
>> actually doesn't matter at all on the compute nodes (still good to set it
>> though). But it matters on the nova-api node. can you confirm that your
>> nova-api node has: security_group_api=neutron in it's nova.conf?
>>
>> Thanks,
>>
>> Aaron
>>
>>
>> On Fri, Oct 18, 2013 at 10:32 AM, Leandro Reox wrote:
>>
>>> Dear all,
>>>
>>> Im struggling with centralized sec groups on nova, were using OVS, it
>>> seems like no matter what flag i change on nova conf, the node still
>>> searchs the segroups on nova region local db
>>>
>>> We added :
>>>
>>>
>>> [compute node]
>>>
>>> *nova.conf*
>>>
>>> firewall_driver=neutron.agent.firewall.NoopFirewallDriver
>>> security_group_api=neutron
>>>
>>>
>>> *ovs_neutron_plugin.ini*
>>>
>>> [securitygroup]
>>> firewall_driver =
>>> neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>>
>>>
>>> Restarted the agent, nova-compute services ... still the same, are we
>>> missing something ?
>>>
>>> NOTE: we're using dockerIO as virt system
>>>
>>> Best
>>> Leitan
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Havana neutron security groups config issue

2013-10-18 Thread Aaron Rosen
Do you have [DEFAULT] at the top of your nova.conf? Could you pastebin
your nova.conf for us to see?
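For reference -- this is an assumption about what the file should look like, not a copy of the poster's config -- oslo.config only parses options that sit under a section header, so in Havana the neutron_* options must live below [DEFAULT]. A minimal fragment (URLs and credentials are placeholders):

```ini
[DEFAULT]
# Without this section header, oslo.config never sees the options
# below and nova raises NoSuchOptError: no such option: neutron_url.
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://172.16.124.16:9696
neutron_url_timeout = 30
neutron_admin_username = neutron
neutron_admin_password = secret
neutron_admin_tenant_name = service
neutron_admin_auth_url = http://172.16.124.16:35357/v2.0
security_group_api = neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```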
On Oct 18, 2013 12:31 PM, "Leandro Reox"  wrote:

> Yes it is, but i found that is not reading the parameter from the
> nova.conf , i forced on the code on /network/manager.py and took the
> argument finally but stacks cause says that the neutron_url and if i fix it
> it stacks on the next neutron parameter like timeout :
>
> File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
> 1648, in __getattr__
> 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack raise
> NoSuchOptError(name)
> 2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack NoSuchOptError: no
> such option: neutron_url
>
> and then
>
> File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
> 1648, in __getattr__
> 2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack raise
> NoSuchOptError(name)
> 2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack NoSuchOptError: no
> such option: neutron_url_timeout
>
> Its really weird, like its not reading the nova.conf neutron parameter at
> all ...
>
> If i hardcode all the settings on the neutronv2/init.py .. at least it
> works, and bring all the secgroup details from netruon
>
>
>
> On Fri, Oct 18, 2013 at 3:48 PM, Aaron Rosen  wrote:
>
>> Hi Leandro,
>>
>>
>> I don't believe the setting of:  security_group_api=neutron in nova.conf
>> actually doesn't matter at all on the compute nodes (still good to set it
>> though). But it matters on the nova-api node. can you confirm that your
>> nova-api node has: security_group_api=neutron in it's nova.conf?
>>
>> Thanks,
>>
>> Aaron
>>
>>
>> On Fri, Oct 18, 2013 at 10:32 AM, Leandro Reox wrote:
>>
>>> Dear all,
>>>
>>> Im struggling with centralized sec groups on nova, were using OVS, it
>>> seems like no matter what flag i change on nova conf, the node still
>>> searchs the segroups on nova region local db
>>>
>>> We added :
>>>
>>>
>>> [compute node]
>>>
>>> *nova.conf*
>>>
>>> firewall_driver=neutron.agent.firewall.NoopFirewallDriver
>>> security_group_api=neutron
>>>
>>>
>>> *ovs_neutron_plugin.ini*
>>>
>>> [securitygroup]
>>> firewall_driver =
>>> neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>>
>>>
>>> Restarted the agent, nova-compute services ... still the same, are we
>>> missing something ?
>>>
>>> NOTE: we're using dockerIO as virt system
>>>
>>> Best
>>> Leitan
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Linux Bridge MTU bug when the VXLAN tunneling is used

2013-10-18 Thread Édouard Thuleau
Hi all,

I made some tests with the ML2 plugin and the Linux Bridge agent with VXLAN
tunneling.

By default, the physical interface (used for VXLAN tunneling) has an
MTU of 1500 octets, and when the LB agent creates a VXLAN interface,
its MTU is automatically 50 octets less than the physical interface's
(so 1450 octets) [1]. Therefore, the bridge used to plug the VM tap
devices, the veth interfaces from network namespaces (l3 or dhcp), and
the VXLAN interface has an MTU of 1450 octets (Linux bridges take the
minimum of all the underlying ports [2]).

So the bridge can only forward packets shorter than 1450 octets to the
VXLAN interface [3].

But the veth interfaces used to link network namespaces and bridges are
spawned by the l3 and dhcp agents (and perhaps other agents) with an
MTU of 1500 octets. So packets arriving from them are dropped if they
need to be forwarded to the VXLAN interface.

A simple workaround is to increase the MTU of the physical interface by
at least 50 octets to harmonize the MTUs between interfaces. But by
default (without MTU customization), the LB/VXLAN mode shows strange
behavior (cannot curl from a server behind a router, or run a command
with verbose output over SSH through a floating IP, although the SSH
connection itself works)...
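For completeness, one way to apply that workaround persistently on a Debian-style node (interface name, address, and MTU value are illustrative):

```
# /etc/network/interfaces fragment: raise the tunnel NIC's MTU by 50+
auto eth1
iface eth1 inet static
    address 10.0.0.11
    netmask 255.255.255.0
    mtu 1550    # the vxlan device is then created at 1550 - 50 = 1500

# or, non-persistently:
#   ip link set dev eth1 mtu 1550
```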

So my question is: do you think we need to open a bug and find a fix
for that? Or do we need to put a warning in the docs (and perhaps in
the logs)?

[1]
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/net/vxlan.c#n2437
[2]
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/bridge/br_if.c#n402
[3]
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/net/bridge/br_forward.c#n74

Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] How users should specify a datastore type when creating an instance

2013-10-18 Thread Tim Simpson
Hello fellow Trovians,

There has been some good work recently to figure out a way to specify a 
specific datastore  when using Trove. This is essential to supporting multiple 
datastores from the same install of Trove.

I have an issue with some elements of the proposed solution though, so I 
decided I'd start a thread here so we could talk about it.

As a quick refresher, here is the blueprint for this work (there are 
some gists appended to the end, but I figured the mailing list would be 
an easier venue for discussion):
https://wiki.openstack.org/wiki/Trove/trove-versions-types

One issue I have is with the way the instance create call will change to 
support different data stores. For example, here is the post call:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore_type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
  "datastore_version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b",
  "volume" : { "size" : "1" }
}
}
"""

1. I think since we have two fields in the instance object we should make a new 
object for datastore and avoid the name prefixing, like this:

"""
{
 "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "e60153d4-8ac4-414a-ad58-fe2e0035704a",
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  },
  "volume" : { "size" : "1" }
}
}
"""

2. I also think a datastore_version alone should be sufficient since the 
associated datastore type will be implied:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  },
  "volume" : { "size" : "1" }
}
}
"""

3. Additionally, while a datastore_type should have an ID in the Trove 
infrastructure database, it should also be possible to pass just the name of the 
datastore type to the instance call, such as "mysql" or "mongo". Maybe we could 
allow this in addition to the ID? I think this form should actually use the 
argument "type", and the id should then be passed as "type_id" instead.

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "mysql",
"version" : "94ed1f9f-6c1a-4d6e-87e9-04ecff37b64b"
  },
  "volume" : { "size" : "1" }
}
}

"""

4. Additionally, in the current pull request to implement this it is possible 
to avoid passing a version, but only if no more than one version of the 
datastore_type exists in the database.

I think instead the datastore_type row in the database should also have a 
"default_version_id" property, that an operator could update to the most recent 
version or whatever other criteria they wish to use, meaning the call could 
become this simple:

"""
{
  "instance" : {
  "flavorRef" : "2",
  "name" : "as",
  "datastore": {
"type" : "mysql"
  },
  "volume" : { "size" : "1" }
}
}
"""

Thoughts?

Thanks,

Tim
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Havana neutron security groups config issue

2013-10-18 Thread Leandro Reox
Yes it is, but I found that it is not reading the parameter from
nova.conf. I forced it in the code in /network/manager.py and it
finally took the argument, but then it breaks because neutron_url is
missing, and if I fix that it breaks on the next neutron parameter,
like the timeout:

File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
1648, in __getattr__
2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack raise
NoSuchOptError(name)
2013-10-18 15:21:04.397 30931 TRACE nova.api.openstack NoSuchOptError: no
such option: neutron_url

and then

File "/usr/local/lib/python2.7/dist-packages/oslo/config/cfg.py", line
1648, in __getattr__
2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack raise
NoSuchOptError(name)
2013-10-18 15:25:20.811 31305 TRACE nova.api.openstack NoSuchOptError: no
such option: neutron_url_timeout

It's really weird, like it's not reading the nova.conf neutron
parameters at all ...

If I hardcode all the settings in neutronv2/init.py, at least it
works and brings all the secgroup details from neutron.



On Fri, Oct 18, 2013 at 3:48 PM, Aaron Rosen  wrote:

> Hi Leandro,
>
>
> I don't believe the setting of:  security_group_api=neutron in nova.conf
> actually doesn't matter at all on the compute nodes (still good to set it
> though). But it matters on the nova-api node. can you confirm that your
> nova-api node has: security_group_api=neutron in it's nova.conf?
>
> Thanks,
>
> Aaron
>
>
> On Fri, Oct 18, 2013 at 10:32 AM, Leandro Reox wrote:
>
>> Dear all,
>>
>> Im struggling with centralized sec groups on nova, were using OVS, it
>> seems like no matter what flag i change on nova conf, the node still
>> searchs the segroups on nova region local db
>>
>> We added :
>>
>>
>> [compute node]
>>
>> *nova.conf*
>>
>> firewall_driver=neutron.agent.firewall.NoopFirewallDriver
>> security_group_api=neutron
>>
>>
>> *ovs_neutron_plugin.ini*
>>
>> [securitygroup]
>> firewall_driver =
>> neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>>
>>
>> Restarted the agent, nova-compute services ... still the same, are we
>> missing something ?
>>
>> NOTE: we're using dockerIO as virt system
>>
>> Best
>> Leitan
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-18 Thread Lakshminaraya Renganarayana
Just wanted to add a couple of clarifications:

1. the cross-vm dependences are captured via the reads/writes of
attributes in resources and in software components (described in
metadata sections).

2. these dependences are then realized via blocking reads and writes to
zookeeper, which realize the cross-vm synchronization and communication
of values between the resources.

Thanks,
LN


Lakshminaraya Renganarayana/Watson/IBM@IBMUS wrote on 10/18/2013 02:45:01
PM:

> From: Lakshminaraya Renganarayana/Watson/IBM@IBMUS
> To: OpenStack Development Mailing List

> Date: 10/18/2013 02:48 PM
> Subject: [openstack-dev] [Heat] A prototype for cross-vm
> synchronization and communication
>
> Hi,
>
> In the last Openstack Heat meeting there was good interest in
> proposals for cross-vm synchronization and communication and I had
> mentioned the prototype I have built. I had also promised that I
> will post an outline of the prototype ... Here it is. I might have
> missed some details, please feel free to ask / comment and I would
> be happy to explain more.
> ---
> Goal of the prototype: Enable cross-vm synchronization and
> communication using high-level declarative description (no wait-
> conditions) Use chef as the CM tool.
>
> Design rationale / choices of the prototype (note that these were
> made just for the prototype and I am not proposing them to be the
> choices for Heat/HOT):
>
> D1: No new construct in Heat template
> => use metadata sections
> D2: No extensions to core Heat engine
> => use a pre-processor that will produce a Heat template that the
> standard Heat engine can consume
> D3: Do not require chef recipes to be modified
> => use a convention of accessing inputs/outputs from chef node[][]
> => use ruby meta-programming to intercept reads/writes to node[][]
> forward values
> D4: Use a standard distributed coordinator (don't reinvent)
> => use zookeeper as a coordinator and as a global data space for
communciation
>
> Overall, the flow is the following:
> 1. User specifies a Heat template with details about software config
> and dependences in the metadata section of resources (see step S1 below).
> 2. A pre-processor consumes this augmented heat template and
> produces another heat template with user-data sections with cloud-
> init scripts and also sets up a zookeeper instance with enough
> information to coordinate between the resources at runtime to
> realize the dependences and synchronization (see step S2)
> 3. The generated heat template is fed into standard heat engine to
> deploy. After the VMs are created the cloud-init script kicks in.
> The cloud init script installs chef solo and then starts the
> execution of the roles specified in the metadata section. During
> this execution of the recipes the coordination is realized (see
> steps S2 and S3 below).
>
> Implementation scheme:
> S1. Use metadata section of each resource to describe  (see attached
example)
> - a list of roles
> - inputs to and outputs from each role and their mapping to resource
> attrs (any attr)
> - convention: these inputs/outputs will be through chef node attrs node
[][]
>
> S2. Dependence analysis and cloud init script generation
>
> Dependence analysis:
> - resolve every reference that can be statically resolved using
> Heat's fucntions (this step just uses Heat's current dependence
> analysis -- Thanks to Zane Bitter for helping me understand this)
> - flag all unresolved references as values resolved at run-time at
> communicated via the coordinator
>
> Use cloud-init in user-data sections:
> - automatically generate a script that would bootstrap chef and will
> run the roles/recipes in the order specified in the metadata section
> - generate dependence info for zookeeper to coordinate at runtime
>
> S3. Coordinate synchronization and communication at run-time
> - intercept reads and writes to node[][]
> - if it is a remote read, get it from Zookeeper
> - execution will block till the value is available
> - if write is for a value required by a remote resource, write the
> value to Zookeeper
>
> The prototype is implemented in Python and Ruby is used for chef
> interception.
>
> There are alternatives for many of the choices I have made for the
prototype:
> - zookeeper can be replaced with any other service that provides a
> data space and distributed coordination
> - chef can be replaced by any other CM tool (a little bit of design
> / convention needed for other CM tools because of the interception
> used in the prototype to catch reads/writes to node[][])
> - the whole dependence analysis can be integrated into the Heat's
> dependence analyzer
> - the component construct proposed recently (by Steve Baker) for
> HOT/Heat can be used to specify much of what is specified using the
> metadata sections in this prototype.
>
> I am interested in using my experience with this prototype to
> contribute to HOT/Heat's cross-vm synchronization and communication
> design and code.  I look forward to your comments.
>
> 

[openstack-dev] Does openstack have a notification system that will let us know when a server changes state ?

2013-10-18 Thread openstack learner
Hi all,


I am using the openstack python api. After I boot an instance, I will keep
polling the instance status to check if its status changes from BUILD to
ACTIVE.

My question is:

does openstack have a notification system that will let us know when a
vm changes state (e.g. goes into ACTIVE state)? Then we won't have to
keep polling when we need to know about a change of machine state.
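Nova does publish notifications (e.g. compute.instance.update) on its AMQP bus, which services such as Ceilometer consume, but there is no end-user push API in Havana, so client scripts generally still poll. A generic polling helper of the kind many scripts use -- here get_status is assumed to wrap your novaclient status lookup:

```python
import time


def wait_for_status(get_status, wanted='ACTIVE', failed=('ERROR',),
                    timeout=600, interval=5, sleep=time.sleep):
    """Poll get_status() until it returns `wanted`.

    Raises RuntimeError on a failure state and TimeoutError once the
    deadline passes. `sleep` is injectable so tests don't really wait.
    """
    deadline = time.time() + timeout
    while True:
        status = get_status()
        if status == wanted:
            return status
        if status in failed:
            raise RuntimeError('instance went to %s' % status)
        if time.time() > deadline:
            raise TimeoutError('still %s after %ss' % (status, timeout))
        sleep(interval)
```

With python-novaclient this would typically be called as wait_for_status(lambda: nova.servers.get(server.id).status).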

Thanks
xin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Havana neutron security groups config issue

2013-10-18 Thread Aaron Rosen
Hi Leandro,


I don't believe the setting of:  security_group_api=neutron in nova.conf
actually doesn't matter at all on the compute nodes (still good to set it
though). But it matters on the nova-api node. can you confirm that your
nova-api node has: security_group_api=neutron in it's nova.conf?

Thanks,

Aaron


On Fri, Oct 18, 2013 at 10:32 AM, Leandro Reox wrote:

> Dear all,
>
> Im struggling with centralized sec groups on nova, were using OVS, it
> seems like no matter what flag i change on nova conf, the node still
> searchs the segroups on nova region local db
>
> We added :
>
>
> [compute node]
>
> *nova.conf*
>
> firewall_driver=neutron.agent.firewall.NoopFirewallDriver
> security_group_api=neutron
>
>
> *ovs_neutron_plugin.ini*
>
> [securitygroup]
> firewall_driver =
> neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
>
>
> Restarted the agent, nova-compute services ... still the same, are we
> missing something ?
>
> NOTE: we're using dockerIO as virt system
>
> Best
> Leitan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-18 Thread Lakshminaraya Renganarayana

Hi,

In the last Openstack Heat meeting there was good interest in proposals for
cross-vm synchronization and communication and I had mentioned the
prototype I have built. I had also promised that I will post an outline of
the prototype ... Here it is. I might have missed some details, please feel
free to ask / comment and I would be happy to explain more.
---
Goal of the prototype: Enable cross-vm synchronization and communication
using high-level declarative description (no wait-conditions) Use chef as
the CM tool.

Design rationale / choices of the prototype (note that these were made just
for the prototype and I am not proposing them to be the choices for
Heat/HOT):

D1: No new construct in Heat template
=> use metadata sections
D2: No extensions to core Heat engine
=> use a pre-processor that will produce a Heat template that the
standard Heat engine can consume
D3: Do not require chef recipes to be modified
=> use a convention of accessing inputs/outputs from chef node[][]
=> use ruby meta-programming to intercept reads/writes to node[][]
forward values
D4: Use a standard distributed coordinator (don't reinvent)
=> use zookeeper as a coordinator and as a global data space for
communication

Overall, the flow is the following:
1. User specifies a Heat template with details about software config and
dependences in the metadata section of resources (see step S1 below).
2. A pre-processor consumes this augmented heat template and produces
another heat template with user-data sections with cloud-init scripts and
also sets up a zookeeper instance with enough information to coordinate
between the resources at runtime to realize the dependences and
synchronization (see step S2)
3. The generated heat template is fed into standard heat engine to deploy.
After the VMs are created the cloud-init script kicks in. The cloud init
script installs chef solo and then starts the execution of the roles
specified in the metadata section. During this execution of the recipes the
coordination is realized (see steps S2 and S3 below).

Implementation scheme:
S1. Use metadata section of each resource to describe  (see attached
example)
- a list of roles
- inputs to and outputs from each role and their mapping to resource
attrs (any attr)
- convention: these inputs/outputs will be through chef node attrs
node[][]

S2. Dependence analysis and cloud init script generation

Dependence analysis:
- resolve every reference that can be statically resolved using
Heat's functions (this step just uses Heat's current dependence
analysis -- thanks to Zane Bitter for helping me understand this)
- flag all unresolved references as values resolved at run-time and
communicated via the coordinator

Use cloud-init in user-data sections:
- automatically generate a script that bootstraps chef and runs
the roles/recipes in the order specified in the metadata section
- generate dependence info for zookeeper to coordinate at runtime

S3. Coordinate synchronization and communication at run-time
- intercept reads and writes to node[][]
- if it is a remote read, get it from Zookeeper
- execution will block till the value is available
- if write is for a value required by a remote resource, write the
value to Zookeeper
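The blocking-read semantics of S3 can be sketched with a toy in-process coordinator standing in for ZooKeeper (threading.Event plays the role of a ZooKeeper watch; this is an illustration of the idea, not the prototype's code):

```python
import threading

class Coordinator:
    """Toy stand-in for the ZooKeeper data space: a remote read blocks
    until the producing resource has written the value."""
    def __init__(self):
        self._values = {}
        self._events = {}
        self._lock = threading.Lock()

    def _event(self, key):
        with self._lock:
            return self._events.setdefault(key, threading.Event())

    def write(self, key, value):
        # A write for a value required by a remote resource
        self._values[key] = value
        self._event(key).set()

    def read(self, key, timeout=5):
        # A remote read: execution blocks until the value is available
        if not self._event(key).wait(timeout):
            raise TimeoutError("no value published for %r" % (key,))
        return self._values[key]

coord = Coordinator()

# "Resource B" blocks reading node["db"]["port"] until "resource A" writes it.
result = []
reader = threading.Thread(target=lambda: result.append(coord.read(("db", "port"))))
reader.start()
coord.write(("db", "port"), 5432)
reader.join()
```

In the actual prototype the interception of node[][] on the Ruby/chef side would route through ZooKeeper rather than an in-process object, but the synchronization contract is the same.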

The prototype is implemented in Python, with Ruby used for the chef
interception.

There are alternatives for many of the choices I have made for the
prototype:
- zookeeper can be replaced with any other service that provides a
data space and distributed coordination
- chef can be replaced by any other CM tool (a little bit of design /
convention needed for other CM tools because of the interception used in
the prototype to catch reads/writes to node[][])
- the whole dependence analysis can be integrated into Heat's
dependence analyzer
- the component construct proposed recently (by Steve Baker) for
HOT/Heat can be used to specify much of what is specified using the
metadata sections in this prototype.

I am interested in using my experience with this prototype to contribute to
HOT/Heat's cross-vm synchronization and communication design and code.  I
look forward to your comments.

Thanks,
LN

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Building on Debian: Havana unit tests at build time report

2013-10-18 Thread Ben Nemec

On 2013-10-18 03:37, Thomas Goirand wrote:

On 10/18/2013 02:06 AM, Clint Byrum wrote:
A link to instructions on setting up a wheezy box for this testing would
be helpful.


Just install a minimal Wheezy machine, add my repositories (using the
Jenkins one at "*.pkgs.enovance.com"), then do "apt-get install
openstack-toaster". I'll be trying to provide a new preseed script so
that just running the script will be enough. I have that already, 
though
it needs to be updated for Havana. I'll post instructions when it is 
ready.



No unit test failure / error (in both Sid and Wheezy). However, Heat
doesn't include a -P option in ./run_tests.sh, and insists on running
the PEP8 checks, which fails because Sid has pyflakes 0.7.3, and Heat
wants 0.7.2:



run_tests.sh is there for _your_ convenience. And by you I mean
packagers and others who want to run the tests with system python.


And it's been very helpful, though there's room for improvement.


IMO you'd be better off adding a --no-venv option to tox to run the
prescribed commands with system python, or even just parsing tox.ini to
do that yourself. You'll find that the commands in tox.ini are better
maintained, since they are gated.
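Parsing tox.ini yourself is straightforward; a minimal sketch (Python 3's configparser is used here for brevity, and the [testenv] content is a made-up example, not Heat's actual tox.ini):

```python
import configparser
import shlex

# Made-up tox.ini fragment; a real run would read the project's file with
# cp.read("tox.ini") instead.
TOX_INI = """\
[testenv]
commands =
    python -m pytest tests/unit
    python -m flake8 heat
"""

cp = configparser.ConfigParser()
cp.read_string(TOX_INI)

# Each non-empty line of "commands" is one command; these could then be
# handed to subprocess.call() to run them with the system python.
commands = [shlex.split(line)
            for line in cp.get("testenv", "commands").splitlines()
            if line.strip()]
```

This gets packagers the gated command list without needing a virtualenv, at the cost of not honoring tox's substitution syntax ({posargs} etc.), which a real implementation would have to handle.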

Anyway, I'd gladly accept a patch to add -P.


I wonder: are the different run_tests.sh scripts maintained in Oslo? It
feels strange that we have different options for different OpenStack
projects (like missing -P and -N options sometimes).

On the nit-picking details side, an option to disable color output would
be great (it's ugly to read raw escape sequences like "ESC[32mOK  0.03 ...").
Different projects have different behaviors in this regard (for example,
Glance always displays colors, while Cinder doesn't).


I believe there was some work done around getting run_tests.sh into 
Oslo, but I don't know that it was ever finished, and if it was it 
obviously isn't being used in the other projects.  Glancing through 
oslo-incubator I don't see anything that looks related so I'm guessing 
it just didn't happen.


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software orchestration proposal for workflows

2013-10-18 Thread John Davidge -X (jodavidg - AAP3 INC at Cisco)
It looks like this discussion involves many of the issues faced when
developing the Curvature & Donabe frameworks, which were presented at the
Portland Summit - slides and video here:

http://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe

Much of the work on the Donabe side revolved around defining a simple
JSON-based API for describing the sorts of virtual application templates
being discussed. All of the code for both Curvature and Donabe has
recently been made open source and is available here:

http://ciscosystems.github.io/curvature/

http://ciscosystems.github.io/donabe/

It looks like some of the ground covered by these projects can be helpful
to this discussion.

John Davidge
jodav...@cisco.com



>-- Forwarded message --
>From: Thomas Spatzier 
>Date: Wed, Oct 9, 2013 at 12:40 AM
>Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
>proposal for workflows
>To: OpenStack Development Mailing List 
>
>
>Excerpts from Clint Byrum's message
>
>> From: Clint Byrum 
>> To: openstack-dev ,
>> Date: 09.10.2013 03:54
>> Subject: Re: [openstack-dev] [Heat] HOT Software orchestration
>> proposal for workflows
>>
>> Excerpts from Stan Lagun's message of 2013-10-08 13:53:45 -0700:
>> > Hello,
>> >
>> >
>> > That is why it is necessary to have some central coordination service
>which
>> > would handle deployment workflow and perform specific actions (create
>VMs
>> > and other OpenStack resources, do something on that VM) on each stage
>> > according to that workflow. We think that Heat is the best place for
>such
>> > service.
>> >
>>
>> I'm not so sure. Heat is part of the Orchestration program, not
>>workflow.
>>
>
>I agree. HOT so far was thought to be a format for describing templates in
>a structural, declarative way. Adding workflows would stretch it quite a
>bit. Maybe we should see what aspects make sense to be added to HOT, and
>then how to do workflow-like orchestration in a layer on top.
>
>> > Our idea is to extend HOT DSL by adding workflow definition
>capabilities
>> > as an explicit list of resources, components' states and actions.
>States
>> > may depend on each other so that you can reach state X only after
>you've
>> > reached states Y and Z that X depends on. The goal is from the initial
>> > state to reach some final state "Deployed".
>> >
>
>We also would like to add some mechanisms to HOT for declaratively doing
>software component orchestration in Heat, e.g. saying that one component
>depends on another one, or needs input from another one once it has been
>deployed etc. (I BTW started to write a wiki page, which is admittedly far
>from complete, but I would be happy to work on it with interested folks -
>https://wiki.openstack.org/wiki/Heat/Software-Configuration-Provider).
>However, we must be careful not to make such features too complicated so
>nobody will be able to use it any more. That said, I believe we could make
>HOT cover some levels of complexity, but not all. And then maybe workflow
>based orchestration on top is needed.
>
>>
>> Orchestration is not workflow, and HOT is an orchestration templating
>> language, not a workflow language. Extending it would just complect two
>> very different (though certainly related) tasks.
>>
>> I think the appropriate thing to do is actually to join up with the
>> TaskFlow project and consider building it into a workflow service or
>tools
>> (it is just a library right now).
>>
>> > There is such state graph for each of our deployment entities
>>(service,
>> > VMs, other things). There is also an action that must be performed on
>each
>> > state.
>>
>> Heat does its own translation of the orchestration template into a
>> workflow right now, but we have already discussed using TaskFlow to
>> break up the orchestration graph into distributable jobs. As we get more
>> sophisticated on updates (rolling/canary for instance) we'll need to
>> be able to reason about the process without having to glue all the
>> pieces together.
>>
>> > We propose to extend HOT DSL with workflow definition capabilities
>where
>> > you can describe step by step instruction to install service and
>properly
>> > handle errors on each step.
>> >
>> > We already have an experience in implementation of the DSL, workflow
>> > description and processing mechanism for complex deployments and
>believe
>> > we'll all benefit by re-using this experience and existing code,
>>having
>> > properly discussed and agreed on abstraction layers and distribution
>>of
>> > responsibilities between OS components. There is an idea of
>implementing
>> > part of workflow processing mechanism as a part of Convection
>>proposal,
>> > which would allow other OS projects to benefit by using this.
>> >
>> > We would like to discuss if such design could become a part of future
>Heat
>> > version as well as other possible contributions from Murano team.
>> >
>>
>> Thanks really for thinking 

Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-18 Thread John Dennis
On 10/18/2013 12:57 PM, Doug Hellmann wrote:
> 
> 
> 
> On Thu, Oct 17, 2013 at 2:24 PM, John Dennis  > wrote:
> 
> On 10/17/2013 12:22 PM,  Luis A. Garcia wrote:
> > On 10/16/2013 1:11 PM, Doug Hellmann wrote:
> >>
> >> [snip]
> >> Option 3 is closer to the new plan for Icehouse, which is to have _()
> >> return a Message, allow Message to work in a few contexts like a
> string
> >> (so that, for example, log calls and exceptions can be left
> alone, even
> >> if they use % to combine a translated string with arguments), but
> then
> >> have the logging and API code explicitly handle the translation of
> >> Message instances so we can always pass unicode objects outside of
> >> OpenStack code (to logging or to web frameworks). Since the
> logging code
> >> is part of Oslo and the API code can be, this seemed to provide
> >> isolation while removing most of the magic.
> >>
> >
> > I think this is exactly what we have right now inherited form Havana.
> > The _() returns a Message that is then translated on-demand by the API
> > or in a special Translation log handler.
> >
> > We just did not make Message look and feel enough like a str() and
> some
> > outside components (jsonifier in Glance and log Formatter all
> over) did
> > not know how to handle non text types correctly when non-ascii
> > characters were present.
> >
> > I think extending from unicode and removing all the implementations in
> > place such that the unicode implementation kick in for all magic
> methods
> > will solve the problems we saw at the end of Havana.
> 
> I'm relatively new to OpenStack so I can't comment on prior OpenStack
> implementations but I'm a long standing veteran of Python i18n issues.
> 
> What you're describing sounds a lot like problems that result from the
> fact Python's default encoding is ASCII as opposed to the more sensible
> UTF-8. I have a long write up on this issue from a few years ago but
> I'll cut to the chase. Python will attempt to automatically encode
> Unicode objects into ASCII during output which will fail if there are
> non-ASCII code points in the Unicode. Python does this in two
> distinct contexts depending on whether destination of the output is a
> file or terminal. If it's a terminal it attempts to use the encoding
> associated with the TTY. Hence you can get different results if you output
> to a TTY or a file handle.
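A minimal demonstration of the failure mode being described — the ASCII encode is explicit here, whereas Python 2 performed it implicitly on output:

```python
# A unicode string with non-ASCII code points (e.g. from a translated
# message catalog).
s = u"r\u00e9sum\u00e9"  # "résumé"

# This is what Python 2's implicit coercion attempted with its ASCII
# default encoding: it fails on any non-ASCII code point.
try:
    s.encode("ascii")
except UnicodeEncodeError as exc:
    error = exc

# An explicit codec always succeeds, which is why the thread argues for
# handling encoding deliberately at the boundaries.
ok = s.encode("utf-8")
```
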
> 
> 
> That was related to the problem we had with logging and Message instances.
>  
> 
> 
> The simple solution to many of the encoding exceptions that Python will
> throw is to override the default encoding and change it to UTF-8. But
> the default encoding is locked by site.py due to internal Python string
> optimizations which cache the default encoded version of the string so
> the encoding happens only once. Changing the default encoding would
> invalidate cached strings and there is no mechanism to deal with that,
> that's why the default encoding is locked. But you can change the
> default encoding using this trick if you do early enough during the
> module loading process:
> 
> 
> I don't think we want to force the encoding at startup. Setting the
> locale properly through the environment and then using unicode objects
> also solves the issue without any startup timing issues, and allows
> deployers to choose the encoding for output.


Setting the locale only solves some of the problems; the locale is only
respected some of the time. The discrepancies and inconsistencies in how
Unicode conversion occurs in Python2 are maddening and one of the worst
aspects of Python2. It was never carefully thought out; Unicode in
Python2 is basically a bolted-on hack that only works if every piece of
code plays by the exact same rules, which of course they don't and never
will. I can almost guarantee that unless you attack this problem at the
core you'll continue to get bitten. Either code is encoding-aware and
explicitly forces a codec (presumably utf-8), or the code is
encoding-naive and allows the default encoding to be applied, except
when the locale is respected, which overrides the default encoding for
the naive case.

When Python3 was being worked on one of the major objectives was to
clean up the horrible state of strings and unicode in Python2. Python3
to the best of my knowledge has gotten it right. What's the default
encoding in Python3? UTF-8. Can you change the default encoding in
Python3? No. It's hardwired to UTF-8, period. You can override the
encoding at obvious points (e.g. when opening IO streams) or allow
things like TextIOWrapper to default to what
locale.getpreferredencoding() returns, but the main point is it's
consistently applied, it's not the haphazard mess in Python2 where
you're never quite sure how a Unicode string

Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread Adam Young

On 10/18/2013 01:54 PM, Sean Dague wrote:

On 10/18/2013 11:59 AM, Dolph Mathews wrote:


On Fri, Oct 18, 2013 at 10:34 AM, Steven Hardy <sha...@redhat.com> wrote:

Hi all,

Starting a thread to discuss $subject, as requested in:

https://review.openstack.org/#/c/51558/

First a bit of background.  I wrote a keystoneclient patch, and ayoung
stated he'd like it tested via tempest before he'd ack it:

https://review.openstack.org/#/c/48462/

So I spoke to ayoung and dkranz on IRC, showing them my local tests
for the
patch.  dkranz suggested creating a "client_lib" directory, where we
could
build out a more comprehensive set of tests over time, adding to the
initial
tests related to keystone trusts client additions.

A couple of things to note:
- These are end-to-end tests, designed to test not only the client, but
   also the API and keystone backend functionality.  So arguably
this could
   just be a scenario test, e.g scenario/keystone/test_v3_auth.py


I'd love to be able to run these tests against a wider variety of
service configurations (e.g. LDAP!), which tempest is obviously more
suitable for.


Realize that today, all the gate is a very simplistic keystone setup. 
If there had been work to bring up different keystone backends with 
the tests we currently have, I think I'd have a different take on 
these tests.


My main focus is how we get the biggest bang for our buck, and up 
until this point we've left direct client testing largely off the 
table because we had API testing (so the API surface should be a known 
quantity) and cli testing, because the cli turned out to be massively 
full of exceptions. But client lib testing feels like it should be 
able to be accomplished without redoing this all over again by 
assuming the contract, mocking to it, and testing in unit tests.


Is there a reason we don't think that's viable?

Also, this is probably a good summit session, so if someone wants to 
submit it, I'll land it on the QA track. Realize that if we do expand 
tempest scope to include client libs, we really need to do so somewhat 
systematically to cover all the clients, so we don't end up with just 
a few keystone client tests and that's it.


Client work needs to be in a project external to both the client and the 
server.  Hence our push toward Tempest.  I think that all of the 
projects have this same limitation:  In order to get changes tested in 
their client, they need a live server, but the client tests don't run a 
server.  We can't put the tests in the server, because the corresponding 
code changes have not yet been checked in to the client.  Chicken and egg.


Client code is public code.  In many cases, it is the primary way that 
people integrate with Keystone, Nova, and other long established 
projects.  We cannot afford to break the contract in the Clients as that 
breaks things for lots of consumers.  Tempest is the system of record.  
Having Tempest devs review tests keeps Keystone (and other) devs 
honest.  No more sneaking through a unit test change with code changes 
that secretly break things.  Now, if you want to change a public API, 
you need to address that in a Tempest test before ever making a change 
to either Keystone or the Client. This is like double-entry accounting, 
and it is a good-thing (tm).


I don't know, however, if we need to push all of the projects to do 
this, or if, instead, we should just let Keystone show the way. Once 
the pattern is established, and we have worked out the kinks between 
Tempest and Keystone, the Keystone devs can act as ambassadors to other 
projects in order to pass on the accumulated wisdom.


There are a lot of short comings to the current testing.  Keystone runs 
SQL migration tests only against SQLite.  This is a waste of time.  In 
addition, the gate does not run them against MySQL or PostgreSQL, which 
means that bugs get through.  The API tests are not run against multiple 
Backends.  At a minimum, we need to ensure that both SQL and LDAP are 
tested for Identity, and SQL and Memcached are tested for Tokens.  
Again, ideally against a real database, not just SQLite.


While we are submitting these tests to Tempest, they don't need to be 
run for commits on other projects.  Only for commits to Keystone and 
Keystone Client.  So the impact on the Gate jobs should be minimal.  
Keystone patches are not so prolific that they alone are killing the 
gate, are they?


I think we can work toward a setup like this:

Devstack + mysql as the default.
Deploy a non-default domain via LDAP

Same thing for PostgreSQL
The non-default domain in LDAP will require us to sort out some issues 
we've identified, so I would not expect that to be ready until around 
the Icehouse 1 timeframe.






-Sean




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listin

Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread David Kranz

On 10/18/2013 01:54 PM, Sean Dague wrote:

On 10/18/2013 11:59 AM, Dolph Mathews wrote:


On Fri, Oct 18, 2013 at 10:34 AM, Steven Hardy <sha...@redhat.com> wrote:

Hi all,

Starting a thread to discuss $subject, as requested in:

https://review.openstack.org/#/c/51558/

First a bit of background.  I wrote a keystoneclient patch, and ayoung
stated he'd like it tested via tempest before he'd ack it:

https://review.openstack.org/#/c/48462/

So I spoke to ayoung and dkranz on IRC, showing them my local tests
for the
patch.  dkranz suggested creating a "client_lib" directory, where we
could
build out a more comprehensive set of tests over time, adding to the
initial
tests related to keystone trusts client additions.

A couple of things to note:
- These are end-to-end tests, designed to test not only the client, but
   also the API and keystone backend functionality.  So arguably
this could
   just be a scenario test, e.g scenario/keystone/test_v3_auth.py


I'd love to be able to run these tests against a wider variety of
service configurations (e.g. LDAP!), which tempest is obviously more
suitable for.


Realize that today, all the gate is a very simplistic keystone setup. 
If there had been work to bring up different keystone backends with 
the tests we currently have, I think I'd have a different take on 
these tests.


My main focus is how we get the biggest bang for our buck, and up 
until this point we've left direct client testing largely off the 
table because we had API testing (so the API surface should be a known 
quantity) and cli testing, because the cli turned out to be massively 
full of exceptions. But client lib testing feels like it should be 
able to be accomplished without redoing this all over again by 
assuming the contract, mocking to it, and testing in unit tests.
I really don't understand why cli (shell programming language) and the 
python clients should be treated differently. The exact same argument 
could be made for cli.


Is there a reason we don't think that's viable?

Also, this is probably a good summit session, so if someone wants to 
submit it, I'll land it on the QA track. Realize that if we do expand 
tempest scope to include client libs, we really need to do so somewhat 
systematically to cover all the clients, so we don't end up with just 
a few keystone client tests and that's it.
I put this in the etherpad a few weeks ago "Strategy for avoiding 
duplication with unit tests." We need a real strategy for where 
different kinds of tests should go. And all this makes it even more 
clear that we need a way to separate the functional description of a 
test from the environment in which it can run. The decision of whether a 
test should run in a real env, or mocked in various ways, should be more 
abstracted from the actual test code where possible IMO.


 -David



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-18 Thread Luis A. Garcia

On 10/17/2013 10:13 AM, John S Warren wrote:

[snip]
Instead of faking unicode behavior, I'm suggesting that we use its
functionality in Message as-is, only overriding the __mod__ method
in order to support translation in downstream code. In short, the
Message class would differ from unicode in only these ways:

1. A "translate" method is introduced.
2. Attributes necessary to perform the translation operation (e.g.
message ID) are added.
3. The __mod__ method is overridden to preserve the parameters so
they can be used in case a translation operation is needed.

So I guess where we differ is in that I don't see the need to have
Message objects that are distinct from unicode objects. It seems
to me that having _() return objects that can be used in the same
manner as the ones returned by the original gettext implementation
avoids a lot of complications.  Because Message is extending unicode,
and it is not overriding any of the relevant behaviors, for all intents
and purposes it's not pretending to be something it isn't when an
instance of it is being used as a unicode object would be used.


+1
Yeah, I think this is the way to go. It would solve all the problems we 
are seeing with "trying" to look and feel like unicode when the Message 
class is not, and it would also greatly reduce the "magic" and 
complexity previously in the Message class.


Basically the Message class would look like this (I changed the name too):

import gettext
import locale as locale_module
import os

import six


class TranslatableUnicode(unicode):

    def __new__(cls, *args, **kwargs):
        return super(TranslatableUnicode, cls).__new__(cls, args[0])

    def __init__(self, msgid, domain):
        self.domain = domain
        self.msgid = msgid
        self.params = None

    def __mod__(self, other):
        # We just save the params in case they are needed in a translation
        self.params = other
        return super(TranslatableUnicode, self).__mod__(other)

    def translate(self, desired_locale=None):
        localedir = os.environ.get(self.domain.upper() + '_LOCALEDIR')
        if not desired_locale:
            # getdefaultlocale() returns a (language, encoding) tuple;
            # note the argument must not shadow the locale module
            desired_locale = locale_module.getdefaultlocale()[0]
        lang = gettext.translation(self.domain,
                                   localedir=localedir,
                                   languages=[desired_locale],
                                   fallback=True)

        ugettext = lang.gettext if six.PY3 else lang.ugettext
        translated = ugettext(self.msgid)

        if self.params is not None:
            # Apply the saved parameters to the translated string
            translated = translated % self.params
        return translated


--
Luis A. García
Cloud Solutions & OpenStack Development
IBM Systems and Technology Group
Ph: (915) 307-6568 | T/L: 363-6276

"Everything should be made as simple as possible, but not simpler."
- Albert Einstein

"Simple can be harder than complex: You have to work hard to get
your thinking clean to make it simple."
– Steve Jobs


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread Sean Dague

On 10/18/2013 11:59 AM, Dolph Mathews wrote:


On Fri, Oct 18, 2013 at 10:34 AM, Steven Hardy <sha...@redhat.com> wrote:

Hi all,

Starting a thread to discuss $subject, as requested in:

https://review.openstack.org/#/c/51558/

First a bit of background.  I wrote a keystoneclient patch, and ayoung
stated he'd like it tested via tempest before he'd ack it:

https://review.openstack.org/#/c/48462/

So I spoke to ayoung and dkranz on IRC, showing them my local tests
for the
patch.  dkranz suggested creating a "client_lib" directory, where we
could
build out a more comprehensive set of tests over time, adding to the
initial
tests related to keystone trusts client additions.

A couple of things to note:
- These are end-to-end tests, designed to test not only the client, but
   also the API and keystone backend functionality.  So arguably
this could
   just be a scenario test, e.g scenario/keystone/test_v3_auth.py


I'd love to be able to run these tests against a wider variety of
service configurations (e.g. LDAP!), which tempest is obviously more
suitable for.


Realize that today, all the gate is a very simplistic keystone setup. If 
there had been work to bring up different keystone backends with the 
tests we currently have, I think I'd have a different take on these tests.


My main focus is how we get the biggest bang for our buck, and up until 
this point we've left direct client testing largely off the table 
because we had API testing (so the API surface should be a known 
quantity) and cli testing, because the cli turned out to be massively 
full of exceptions. But client lib testing feels like it should be able 
to be accomplished without redoing this all over again by assuming the 
contract, mocking to it, and testing in unit tests.
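A sketch of what such contract-based unit testing could look like; the Client class below is a stand-in for illustration, not python-keystoneclient's actual API:

```python
# Assumed: a client library wraps an HTTP session object. In a unit test
# we replace the session with a mock that encodes the API contract
# (paths, payload shapes), so no live keystone is needed.
from unittest import mock

class Client:
    """Hypothetical trimmed-down client, not the real keystoneclient."""
    def __init__(self, session):
        self.session = session

    def get_trust(self, trust_id):
        resp = self.session.get("/v3/OS-TRUST/trusts/%s" % trust_id)
        return resp.json()["trust"]

# Mock the contract: GET on the trusts path returns a {"trust": {...}} body.
session = mock.Mock()
session.get.return_value.json.return_value = {"trust": {"id": "t1"}}

client = Client(session)
trust = client.get_trust("t1")

# Verify the client spoke the assumed contract.
session.get.assert_called_once_with("/v3/OS-TRUST/trusts/t1")
```

The trade-off Sean describes is visible here: this catches client-side regressions cheaply, but only end-to-end tests catch the case where the mocked contract itself has drifted from the real API.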


Is there a reason we don't think that's viable?

Also, this is probably a good summit session, so if someone wants to 
submit it, I'll land it on the QA track. Realize that if we do expand 
tempest scope to include client libs, we really need to do so somewhat 
systematically to cover all the clients, so we don't end up with just a 
few keystone client tests and that's it.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread Adam Young

On 10/18/2013 12:04 PM, Brant Knudson wrote:
To provide a bit more background... Keystone has a bunch of 
keystoneclient tests for the v2 API. These tests actually "git clone" 
a version of keystoneclient (master, essex-3, and 0.1.1)[0], to use 
for testing.  Maybe at some point the tests were just for 
client-server compatibility, but now they're used for more than that; 
for example, they're used for tests that require going through the 
paste pipeline and v2 controller. It's just the quickest way to get 
some tests written. In addition, there's versions of the client tests 
for both the kvs and sql backends.


This causes several problems:
1) It looks like we're not keeping the versions to test up-to-date -- 
should we checkout supported releases instead?
2) "git clone"ing the keystoneclient doesn't work well with parallel 
testing (we have a similar problem in our tests with our "pristine" 
database backup)

3) These tests eat up lots of memory which we've gotten complaints about.

Getting v3 API keystoneclient/keystone testing in tempest is going to 
hopefully lead to getting the v2 tests out of Keystone. Thanks to 
Steve for taking this first step!
I was going to try an approach where we used tempest to just call the 
code in Keystone as a first step.  That was one of the reasons that I 
was in favor of moving the Keystone tests into the keystone namespace.


We need to skip the git clone step, obviously, and I am not certain 
about our use of fixtures:  we might need to redo these so the sample 
data doesn't conflict with what Tempest expects.






For the v3 API, the tests don't use the keystoneclient but instead use 
webtest [1] and the REST API.


[0] 
https://github.com/openstack/keystone/blob/master/keystone/tests/test_keystoneclient.py#L1070
[1] 
https://github.com/openstack/keystone/blob/master/keystone/tests/test_content_types.py#L69


We'll need V3 client support eventually, and we should use Tempest as 
the primary test environment for that.




- Brant



On Fri, Oct 18, 2013 at 10:59 AM, Dolph Mathews <dolph.math...@gmail.com> wrote:



On Fri, Oct 18, 2013 at 10:34 AM, Steven Hardy <sha...@redhat.com> wrote:

Hi all,

Starting a thread to discuss $subject, as requested in:

https://review.openstack.org/#/c/51558/

First a bit of background.  I wrote a keystoneclient patch,
and ayoung
stated he'd like it tested via tempest before he'd ack it:

https://review.openstack.org/#/c/48462/

So I spoke to ayoung and dkranz on IRC, showing them my local
tests for the
patch.  dkranz suggested creating a "client_lib" directory,
where we could
build out a more comprehensive set of tests over time, adding
to the initial
tests related to keystone trusts client additions.

A couple of things to note:
- These are end-to-end tests, designed to test not only the
client, but
  also the API and keystone backend functionality.  So
arguably this could
  just be a scenario test, e.g scenario/keystone/test_v3_auth.py


I'd love to be able to run these tests against a wider variety of
service configurations (e.g. LDAP!), which tempest is obviously
more suitable for.


- The intention is to exercise logic which is hard to fully
test with
  unit or integration tests, and to catch issues like
incompatibility
  between client and API - e.g keystoneclient tests may pass,
but we need
  to make sure the client actually works against the real
keystone API.


All of our tests under keystone.tests.test_keystoneclient fall
into this category as well:


https://github.com/openstack/keystone/blob/a0e26c1882d83989bee3726a5ae08cbe3f32a2b5/keystone/tests/test_keystoneclient.py


https://github.com/openstack/keystone/blob/a0e26c1882d83989bee3726a5ae08cbe3f32a2b5/keystone/tests/test_keystoneclient_sql.py


Working on Heat has given me a pretty good insight into the
python-*client
API's, as we use them to orchestrate actions with every
openstack service;
IMO anything we can do to make these interfaces more robust
(and catch
bugs, several of which I found already while writing these
tests) is a
good-thing (tm).


++


I'd welcome feedback on the patch above, and what will be the most
acceptable approach to the tempest team for adding these tests.

More links:

https://review.openstack.org/#/c/51559/
https://review.openstack.org/#/c/51560/
https://blueprints.launchpad.net/tempest/+spec/keystoneclient-api

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.op

Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread Sean Dague

On 10/18/2013 12:04 PM, Brant Knudson wrote:

To provide a bit more background... Keystone has a bunch of
keystoneclient tests for the v2 API. These tests actually "git clone" a
version of keystoneclient (master, essex-3, and 0.1.1)[0], to use for
testing.  Maybe at some point the tests were just for client-server
compatibility, but now they're used for more than that; for example,
they're used for tests that require going through the paste pipeline and
v2 controller. It's just the quickest way to get some tests written. In
addition, there are versions of the client tests for both the kvs and sql
backends.

This causes several problems:
1) It looks like we're not keeping the versions to test up-to-date --
should we checkout supported releases instead?
2) "git clone"ing the keystoneclient doesn't work well with parallel
testing (we have a similar problem in our tests with our "pristine"
database backup)


Can you go into the specifics of why?


3) These tests eat up lots of memory which we've gotten complaints about.


Again, can you go into the specifics of why?


Getting v3 API keystoneclient/keystone testing in tempest is going to
hopefully lead to getting the v2 tests out of Keystone. Thanks to Steve
for taking this first step!

For the v3 API, the tests don't use the keystoneclient but instead use
webtest [1] and the REST API.

[0]
https://github.com/openstack/keystone/blob/master/keystone/tests/test_keystoneclient.py#L1070
[1]
https://github.com/openstack/keystone/blob/master/keystone/tests/test_content_types.py#L69


So v3 keystone API is one thing, but I'm a little concerned with moving 
the client testing to Tempest haphazardly.  If we are testing the API 
surface on the servers, the clients should be able to correctly test all 
of this via a mock of those API returns, which would let us separate 
concerns here and keep the client tests close to their code as unit tests.
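A minimal sketch of that separation — the names (`TokenClient`, the payload shape) are invented, not real keystoneclient code, and `unittest.mock` is the modern stdlib spelling of the `mock` library in use at the time:

```python
import json
import unittest
from unittest import mock


class TokenClient(object):
    """Illustrative stand-in for a python-*client class (not real
    keystoneclient code): it POSTs credentials and parses the reply."""

    def __init__(self, session):
        self.session = session

    def authenticate(self, user, password):
        resp = self.session.post(
            '/v3/auth/tokens',
            data=json.dumps({'user': user, 'password': password}))
        return resp.json()['token']['id']


class TestTokenClient(unittest.TestCase):
    def test_authenticate_parses_token(self):
        # The "server" is a canned return value on a mock session, so
        # only client-side request building and parsing is exercised;
        # the API surface itself stays covered by tempest.
        session = mock.Mock()
        session.post.return_value.json.return_value = {
            'token': {'id': 'abc123'}}

        client = TokenClient(session)
        self.assertEqual('abc123', client.authenticate('demo', 's3cr3t'))
        session.post.assert_called_once_with(
            '/v3/auth/tokens',
            data=json.dumps({'user': 'demo', 'password': 's3cr3t'}))
```

Tests like this live next to the client code and run with no service deployed at all.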


We're actually actively trying to figure out what can migrate out of 
tempest back to the integrated projects, so that we get our biggest bang 
for our buck.


Also, realize in a tempest environment there is only going to be the 
latest version of the clients, so this is going to massively reduce your 
test environment from what you have today.


-Sean

--
Sean Dague
http://dague.net

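One common remedy for the parallel "git clone" clash Brant describes above is to give each test worker its own scratch checkout rather than a shared path; a stdlib sketch (the repo URL/ref handling is illustrative, not keystone's actual test code):

```python
import os
import subprocess
import tempfile


def worker_scratch_dir(prefix='ksc'):
    """A unique scratch directory per test process, so parallel
    workers never share (and clobber) one on-disk checkout."""
    return tempfile.mkdtemp(prefix='%s-%d-' % (prefix, os.getpid()))


def clone_for_worker(repo_url, ref):
    """Clone the client repo (hypothetical URL/ref) into this
    worker's private directory instead of a shared path."""
    workdir = worker_scratch_dir()
    subprocess.check_call(['git', 'clone', repo_url, workdir])
    subprocess.check_call(['git', 'checkout', ref], cwd=workdir)
    return workdir
```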


[openstack-dev] Havana neutron security groups config issue

2013-10-18 Thread Leandro Reox
Dear all,

I'm struggling with centralized security groups on Nova. We're using OVS, and
it seems like no matter what flag I change in nova.conf, the node still
searches for the security groups in the Nova region's local DB.

We added :


[compute node]

*nova.conf*

firewall_driver=neutron.agent.firewall.NoopFirewallDriver
security_group_api=neutron


*ovs_neutron_plugin.ini*

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


Restarted the agent and nova-compute services ... still the same. Are we
missing something?

NOTE: we're using dockerIO as virt system

Best
Leitan


Re: [openstack-dev] [Metrics] Improving the data about contributor/affiliation/time

2013-10-18 Thread Stefano Maffulli
On 10/18/2013 05:33 AM, Sean Dague wrote:
> I'm not sure it is well understood that all members have to join the
> foundation. We don't make that a requirement on someone slinging a
> patch. 

I believe we do make it a requirement: you can't sign the CLA if you are
not also a member of the Foundation and you can't land a patch for
review if you haven't signed the CLA. All of this is enforced by gerrit.

> The thing is, the Foundation data currently seems to be the least
> accurate of all the data sets. 

As of today it's the most complete one though, and given the assumption
that 1 ATC == 1 Member of the Foundation it's also the easiest one to
fix, compared to others.

> Also, both gitdm and stackalytics have active open developer communities
> (and they are open source all the way down, don't need non open
> components to run), so again, I'm not sure why defaulting to the least
> open platform makes any sense.

I'm not talking about the visualization here. Let's focus only on the
source of data for person/affiliation/time.

Thierry: "affiliation" in the Members db is indeed to be understood in the
sense mandated by the bylaws.

As Jesus was saying, we also want to track activities beyond the git
repos and Launchpad. I would like to have visibility over things done on
Ask OpenStack, translations, the upcoming groups.openstack.org and other
things we'll have in the future. That's why we're developing our own
OpenID provider.

> If the foundation member database was its own thing, had a REST API to
> bulk fetch, and supported temporal associations, and let others propose
> updates to people's affiliation, then it would be an option.

It seems we're on the same page, and so is Jesus too. Here are my
thoughts at the moment:

  - the OpenID provider the Foundation is building will provide the
basic bulk of data with an interface (REST or whatever, including a
regular csv dump): username, all known email addresses, current
affiliation, past affiliations
  - we build a system to sanitize the bulky dump, doing things like
cleaning the names of companies, and provide ways to enrich the data for
others
  - the result of such a process will be used by all reporting systems we
have, from Activity Board to gitdm to stackalytics.
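The sanitizing step in that pipeline might look something like this sketch — the CSV shape and the alias table are invented for illustration; a real table would be curated and reviewed by the community:

```python
import csv
import io

# Alias table mapping variant employer spellings to one canonical name.
# These entries are made up for illustration.
ALIASES = {
    'red hat inc': 'Red Hat',
    'red hat, inc': 'Red Hat',
    'rackspace hosting': 'Rackspace',
}


def canonical_company(name):
    # Normalize case and trailing punctuation before the lookup.
    key = name.strip().lower().rstrip('.,')
    return ALIASES.get(key, name.strip())


def sanitize_dump(csv_text):
    """Clean a hypothetical username,email,affiliation CSV dump into
    tuples that any reporting system can consume."""
    rows = csv.reader(io.StringIO(csv_text))
    return [(user, email.strip().lower(), canonical_company(company))
            for user, email, company in rows]
```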

How does that sound?

/stef
-- 
Ask and answer questions on https://ask.openstack.org



Re: [openstack-dev] Announce of Rally - benchmarking system for OpenStack

2013-10-18 Thread David Kranz

On 10/18/2013 12:17 PM, Boris Pavlovic wrote:

John,

Actually seems like a pretty good suggestion IMO, at least something 
worth some investigation and consideration before quickly discounting 
it.  Rather than "that's not what tempest is", maybe it's something 
tempest "could do".  Don't know, not saying one way or the other, just 
wondering if it's worth some investigation or thought.



I made these investigations before starting work on Rally, about
3 months ago.
It is not "quickly discounting"; I didn't have time yesterday to write a
long response, so I will write it today:


I really don't want to make copies of other projects, so I tried to
reuse all the projects & libs that we already have.


To explain why we shouldn't merge Rally & Tempest into one project (and
should keep both), we should analyze their use cases.



1. DevStack - one "click" and get your OpenStack cloud from sources

2. Tempest - one "click" and get your OpenStack Cloud verified

Both of these projects are great, because they are very useful and
solve complicated tasks without "pain" for the end user (and I like them).


3. Rally is also a one-"click" system that solves OpenStack benchmarking.

To clarify the situation, we should analyze what I mean by one-"click"
benchmarking and what the use cases are.


Use Cases:
1. Investigate how deployments influence OS performance (find the
set of good OpenStack deployment architectures)
2. Automatically get numbers & profiling info about how your changes
influence OS performance

3. Using the Rally profiler, detect scale & performance issues.
For example, when we try to delete 3 VMs in one request, they are
deleted one by one because of a DB lock on the quotas table:
http://37.58.79.43:8080/traces/0011f252c9d98e31

4. Determine the maximal load that a production cloud can handle

To cover these cases we should actually test OpenStack deployments by
making simultaneous OpenStack API calls.


So to get results we have to:
1. Deploy OpenStack cloud somewhere. (Or get existing cloud)
2. Verify It
3. Run Benchmarks
4. Collect all results & present it in human readable form.


Rally was designed to automate these steps:
1.a Use an existing cloud.
1.b.1 Automatically get (virtual) servers from SoftLayer, Amazon,
Rackspace, your private or public cloud, or an OpenStack cloud
1.b.2 Deploy OpenStack on these servers from source (using DevStack,
Anvil, Fuel or TripleO...).
1.b.3 Patch this OpenStack with tomograph to get profiling information
(I hope we will merge these patches upstream).
2. Verify this cloud using tempest (we are going to switch from
fuel-ostf-tests)
3. Run specified parametrized (to generate different loads)
benchmark scenarios
4. Collect all information about the execution & present it in
human-readable form (Tomograph, Zipkin, matplotlib...)



So I am not sure that we should put Rally inside Tempest, because
Rally uses tempest. It is something like putting Nova into Cinder =)
Putting Tempest into Rally is also not a good idea (same as putting
Cinder back into Nova).



Best regards,
Boris Pavlovic
---
Mirantis Inc.


On Thu, Oct 17, 2013 at 11:56 PM, John Griffith
<john.griff...@solidfire.com> wrote:





On Thu, Oct 17, 2013 at 1:44 PM, Jay Pipes <jaypi...@gmail.com> wrote:

On 10/17/2013 03:32 PM, Boris Pavlovic wrote:

Jay,


Or, alternately, just have Rally as part of Tempest.


Actually, tempest is used only to verify that cloud works
properly.
And verification is only small part of the Rally.

At this moment we are using fuel-ostf-tests, but we are
going to use
tempest to verify cloud.


OK, cool... was just a suggestion :) Tempest has a set of
stress tests [1] which are kind of related, which is the only
reason I brought it up.

Best,
-jay

[1]
https://github.com/openstack/tempest/tree/master/tempest/stress


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Actually seems like a pretty good suggestion IMO, at least
something worth some investigation and consideration before
quickly discounting it.  Rather than "that's not what tempest is",
maybe it's something tempest "could do".  Don't know, not saying
one way or the other, just wondering if it's worth some
investigation or thought.

By the way, VERY COOL!!


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [oslo] i18n Message improvements

2013-10-18 Thread Doug Hellmann
On Thu, Oct 17, 2013 at 2:24 PM, John Dennis  wrote:

> On 10/17/2013 12:22 PM,  Luis A. Garcia wrote:
> > On 10/16/2013 1:11 PM, Doug Hellmann wrote:
> >>
> >> [snip]
> >> Option 3 is closer to the new plan for Icehouse, which is to have _()
> >> return a Message, allow Message to work in a few contexts like a string
> >> (so that, for example, log calls and exceptions can be left alone, even
> >> if they use % to combine a translated string with arguments), but then
> >> have the logging and API code explicitly handle the translation of
> >> Message instances so we can always pass unicode objects outside of
> >> OpenStack code (to logging or to web frameworks). Since the logging code
> >> is part of Oslo and the API code can be, this seemed to provide
> >> isolation while removing most of the magic.
> >>
> >
> > I think this is exactly what we have right now inherited form Havana.
> > The _() returns a Message that is then translated on-demand by the API
> > or in a special Translation log handler.
> >
> > We just did not make Message look and feel enough like a str() and some
> > outside components (jsonifier in Glance and log Formatter all over) did
> > not know how to handle non text types correctly when non-ascii
> > characters were present.
> >
> > I think extending from unicode and removing all the implementations in
> > place such that the unicode implementation kick in for all magic methods
> > will solve the problems we saw at the end of Havana.
>
> I'm relatively new to OpenStack so I can't comment on prior OpenStack
> implementations but I'm a long standing veteran of Python i18n issues.
>
> What you're describing sounds a lot like problems that result from the
> fact Python's default encoding is ASCII as opposed to the more sensible
> UTF-8. I have a long write up on this issue from a few years ago but
> I'll cut to the chase. Python will attempt to automatically encode
> Unicode objects into ASCII during output which will fail if there are
> non-ASCII code points in the Unicode. Python does this is in two
> distinct contexts depending on whether destination of the output is a
> file or terminal. If it's a terminal it attempts to use the encoding
> associated with the TTY. Hence you can different results if you output
> to a TTY or a file handle.
>

That was related to the problem we had with logging and Message instances.


>
> The simple solution to many of the encoding exceptions that Python will
> throw is to override the default encoding and change it to UTF-8. But
> the default encoding is locked by site.py due to internal Python string
> optimizations which cache the default encoded version of the string so
> the encoding happens only once. Changing the default encoding would
> invalidate cached strings and there is no mechanism to deal with that,
> that's why the default encoding is locked. But you can change the
> default encoding using this trick if you do early enough during the
> module loading process:
>

I don't think we want to force the encoding at startup. Setting the
locale properly through the environment and then using unicode objects also
solves the issue without any startup timing issues, and allows deployers to
choose the encoding for output.
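A sketch of that approach — unicode everywhere inside the program, explicit encoding only at the output boundary, honoring the deployer's locale. This is the Python 3 spelling (`sys.stdout.buffer`), which postdates this thread:

```python
import locale
import sys

# Honour the deployer's environment (LANG/LC_ALL) rather than forcing
# a process-wide default encoding at import time.
locale.setlocale(locale.LC_ALL, '')
encoding = locale.getpreferredencoding(False)

message = u'caf\u00e9 \u2713'  # unicode everywhere inside the program

# Encode explicitly only at the output boundary; 'replace' means a
# terminal stuck on ASCII degrades the output instead of raising
# UnicodeEncodeError.
sys.stdout.buffer.write(message.encode(encoding, 'replace') + b'\n')
```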

Doug


>
> import sys
> reload(sys)
> sys.setdefaultencoding('utf-8')
>
> The reason this works is because site.py deletes the setdefaultencoding
> from the sys module, but after reloading sys it's available again. One
> can also use a tiny CPython module to set the default encoding without
> having to use the sys reload trick. The following illustrates the reload
> trick:
>
> $ python
> Python 2.7.3 (default, Aug  9 2012, 17:23:57)
> [GCC 4.7.1 20120720 (Red Hat 4.7.1-5)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import sys
> >>> sys.getdefaultencoding()
> 'ascii'
> >>> sys.setdefaultencoding('utf-8')
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> AttributeError: 'module' object has no attribute 'setdefaultencoding'
> >>> reload(sys)
> <module 'sys' (built-in)>
> >>> sys.setdefaultencoding('utf-8')
> >>> sys.getdefaultencoding()
> 'utf-8'
>
>
> Not fully understanding the role of Python's default encoding and how
> its application differs between terminal and non-terminal output can
> cause a lot of confusion and misunderstanding which can sometimes lead
> to false conclusions as to what is going wrong.
>
> If I get a chance I'll try to publicly post my write-up on Python i18n
> issues.


>
> --
> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] Announce of Rally - benchmarking system for OpenStack

2013-10-18 Thread John Griffith
On Fri, Oct 18, 2013 at 10:17 AM, Boris Pavlovic  wrote:

> John,
>
> Actually seems like a pretty good suggestion IMO, at least something worth
> some investigation and consideration before quickly discounting it.  Rather
> than "that's not what tempest is", maybe it's something tempest "could do".
>  Don't know, not saying one way or the other, just wondering if it's worth
> some investigation or thought.
>
>
> I made these investigations before starting work on Rally, about
> 3 months ago.
> It is not "quickly discounting"; I didn't have time yesterday to write a
> long response, so I will write it today:
>
> I really don't want to make copies of other projects, so I tried to
> reuse all the projects & libs that we already have.
>
> To explain why we shouldn't merge Rally & Tempest into one project (and
> should keep both), we should analyze their use cases.
>
>
> 1. DevStack - one "click" and get your OpenStack cloud from sources
>
> 2. Tempest - one "click" and get your OpenStack Cloud verified
>
> Both of these projects are great, because they are very useful and solve
> complicated tasks without "pain" for the end user (and I like them).
>
> 3. Rally is also a one-"click" system that solves OpenStack benchmarking.
>
> To clarify the situation, we should analyze what I mean by one-"click"
> benchmarking and what the use cases are.
>
> Use Cases:
> 1. Investigate how deployments influence OS performance (find the set
> of good OpenStack deployment architectures)
> 2. Automatically get numbers & profiling info about how your changes
> influence OS performance
> 3. Using the Rally profiler, detect scale & performance issues.
> For example, when we try to delete 3 VMs in one request, they are
> deleted one by one because of a DB lock on the quotas table:
> http://37.58.79.43:8080/traces/0011f252c9d98e31
> 4. Determine the maximal load that a production cloud can handle
>
> To cover these cases we should actually test OpenStack deployments by
> making simultaneous OpenStack API calls.
>
> So to get results we have to:
> 1. Deploy OpenStack cloud somewhere. (Or get existing cloud)
> 2. Verify It
> 3. Run Benchmarks
> 4. Collect all results & present it in human readable form.
>
>
> Rally was designed to automate these steps:
> 1.a Use an existing cloud.
> 1.b.1 Automatically get (virtual) servers from SoftLayer, Amazon,
> Rackspace, your private or public cloud, or an OpenStack cloud
> 1.b.2 Deploy OpenStack on these servers from source (using DevStack, Anvil,
> Fuel or TripleO...).
> 1.b.3 Patch this OpenStack with tomograph to get profiling information (I
> hope we will merge these patches upstream).
> 2. Verify this cloud using tempest (we are going to switch from
> fuel-ostf-tests)
> 3. Run specified parametrized (to generate different loads)
> benchmark scenarios
> 4. Collect all information about the execution & present it in
> human-readable form (Tomograph, Zipkin, matplotlib...)
>
>
> So I am not sure that we should put Rally inside Tempest, because Rally
> uses tempest. It is something like putting Nova into Cinder =)
> Putting Tempest into Rally is also not a good idea (same as putting
> Cinder back into Nova).
>
>
> Best regards,
> Boris Pavlovic
> ---
> Mirantis Inc.
>
>
> On Thu, Oct 17, 2013 at 11:56 PM, John Griffith <
> john.griff...@solidfire.com> wrote:
>
>>
>>
>>
>> On Thu, Oct 17, 2013 at 1:44 PM, Jay Pipes  wrote:
>>
>>> On 10/17/2013 03:32 PM, Boris Pavlovic wrote:
>>>
 Jay,


 Or, alternately, just have Rally as part of Tempest.


 Actually, tempest is used only to verify that cloud works properly.
 And verification is only small part of the Rally.

 At this moment we are using fuel-ostf-tests, but we are going to use
 tempest to verify cloud.

>>>
>>> OK, cool... was just a suggestion :) Tempest has a set of stress tests
>>> [1] which are kind of related, which is the only reason I brought it up.
>>>
>>> Best,
>>> -jay
>>>
>>> [1] 
>>> https://github.com/openstack/tempest/tree/master/tempest/stress
>>>
>>>
>>>
>>
>> Actually seems like a pretty good suggestion IMO, at least something
>> worth some investigation and consideration before quickly discounting it.
>>  Rather than "that's not what tempest is", maybe it's something tempest
>> "could do".  Don't know, not saying one way or the other, just wondering if
>> it's worth some investigation or thought.
>>
>> By the way, VERY COOL!!
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> 

Re: [openstack-dev] Announce of Rally - benchmarking system for OpenStack

2013-10-18 Thread Boris Pavlovic
John,

Actually seems like a pretty good suggestion IMO, at least something worth
some investigation and consideration before quickly discounting it.  Rather
than "that's not what tempest is", maybe it's something tempest "could do".
 Don't know, not saying one way or the other, just wondering if it's worth
some investigation or thought.


I made these investigations before starting work on Rally, about
3 months ago.
It is not "quickly discounting"; I didn't have time yesterday to write a
long response, so I will write it today:

I really don't want to make copies of other projects, so I tried to reuse
all the projects & libs that we already have.

To explain why we shouldn't merge Rally & Tempest into one project (and
should keep both), we should analyze their use cases.


1. DevStack - one "click" and get your OpenStack cloud from sources

2. Tempest - one "click" and get your OpenStack Cloud verified

Both of these projects are great, because they are very useful and solve
complicated tasks without "pain" for the end user (and I like them).

3. Rally is also a one-"click" system that solves OpenStack benchmarking.

To clarify the situation, we should analyze what I mean by one-"click"
benchmarking and what the use cases are.

Use Cases:
1. Investigate how deployments influence OS performance (find the set of
good OpenStack deployment architectures)
2. Automatically get numbers & profiling info about how your changes
influence OS performance
3. Using the Rally profiler, detect scale & performance issues.
For example, when we try to delete 3 VMs in one request, they are
deleted one by one because of a DB lock on the quotas table:
http://37.58.79.43:8080/traces/0011f252c9d98e31
4. Determine the maximal load that a production cloud can handle

To cover these cases we should actually test OpenStack deployments by
making simultaneous OpenStack API calls.

So to get results we have to:
1. Deploy OpenStack cloud somewhere. (Or get existing cloud)
2. Verify It
3. Run Benchmarks
4. Collect all results & present it in human readable form.
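Step 3 above, in miniature — a sketch of concurrent load generation; the API call is a stub, where Rally's real runners would drive python-*client calls against the cloud under test:

```python
import time
from concurrent import futures


def fake_boot_server(i):
    """Stand-in for one OpenStack API call (e.g. booting a server);
    a real benchmark runner would call the service client here."""
    time.sleep(0.01)
    return i


def run_benchmark(times=20, concurrency=5):
    """Fire `times` calls through `concurrency` workers (step 3) and
    return submit-to-completion durations as step 4's raw data."""
    with futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        submitted = {pool.submit(fake_boot_server, i): time.time()
                     for i in range(times)}
        # as_completed yields futures as they finish, so durations
        # include queueing time under the chosen concurrency.
        durations = [time.time() - submitted[f]
                     for f in futures.as_completed(submitted)]
    return durations
```

The resulting durations are what a step-4 reporting layer would aggregate and plot.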


Rally was designed to automate these steps:
1.a Use an existing cloud.
1.b.1 Automatically get (virtual) servers from SoftLayer, Amazon,
Rackspace, your private or public cloud, or an OpenStack cloud
1.b.2 Deploy OpenStack on these servers from source (using DevStack, Anvil,
Fuel or TripleO...).
1.b.3 Patch this OpenStack with tomograph to get profiling information (I
hope we will merge these patches upstream).
2. Verify this cloud using tempest (we are going to switch from
fuel-ostf-tests)
3. Run specified parametrized (to generate different loads)
benchmark scenarios
4. Collect all information about the execution & present it in human-readable
form (Tomograph, Zipkin, matplotlib...)


So I am not sure that we should put Rally inside Tempest, because Rally uses
tempest. It is something like putting Nova into Cinder =)
Putting Tempest into Rally is also not a good idea (same as putting Cinder
back into Nova).


Best regards,
Boris Pavlovic
---
Mirantis Inc.


On Thu, Oct 17, 2013 at 11:56 PM, John Griffith  wrote:

>
>
>
> On Thu, Oct 17, 2013 at 1:44 PM, Jay Pipes  wrote:
>
>> On 10/17/2013 03:32 PM, Boris Pavlovic wrote:
>>
>>> Jay,
>>>
>>>
>>> Or, alternately, just have Rally as part of Tempest.
>>>
>>>
>>> Actually, tempest is used only to verify that cloud works properly.
>>> And verification is only small part of the Rally.
>>>
>>> At this moment we are using fuel-ostf-tests, but we are going to use
>>> tempest to verify cloud.
>>>
>>
>> OK, cool... was just a suggestion :) Tempest has a set of stress tests
>> [1] which are kind of related, which is the only reason I brought it up.
>>
>> Best,
>> -jay
>>
>> [1] 
>> https://github.com/openstack/tempest/tree/master/tempest/stress
>>
>>
>>
>
> Actually seems like a pretty good suggestion IMO, at least something worth
> some investigation and consideration before quickly discounting it.  Rather
> than "that's not what tempest is", maybe it's something tempest "could do".
>  Don't know, not saying one way or the other, just wondering if it's worth
> some investigation or thought.
>
> By the way, VERY COOL!!
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


[openstack-dev] [marconi] Agenda for next team meeting on Monday at 1600 UTC

2013-10-18 Thread Kurt Griffiths
The Marconi project team holds a weekly meeting in #openstack-meeting-
alt on Mondays, 1600 UTC.

The next meeting is Monday, Oct 21. Everyone is welcome, but please
take a minute to review the wiki before attending for the first time:

  http://wiki.openstack.org/marconi

Proposed Agenda:

  * Review actions from last time
  * Updates on Sharding
  * The future of the proxy
  * API Spec server side
  * API versioning strategy?
  * Updates on bugs
  * Open discussion (time permitting)

If you have additions to the agenda, please add them to the wiki and
note your IRC name so we can call on you during the meeting:

  http://wiki.openstack.org/Meetings/Marconi

Cheers,

---
@kgriffs
Kurt Griffiths





Re: [openstack-dev] [Metrics] Improving the data about contributor/affiliation/time

2013-10-18 Thread Jesus M. Gonzalez-Barahona
On Fri, 2013-10-18 at 08:33 -0400, Sean Dague wrote:
> On 10/17/2013 05:34 PM, Stefano Maffulli wrote:
> > [...]
> > Four sources of data for this reporting is bad and not sustainable.
> >
> > Since it seems commonly accepted that all developers need to be members
> > of the Foundation, and that Foundation members need to state their
> > affiliation when they join and keep such data current when it changes, I
> > think the Foundation is in a good place to provide the authoritative
> > data for all projects to use.
> 
> I'm not sure it is well understood that all members have to join the
> foundation. We don't make that a requirement on someone slinging a 
> patch. It would be nice to know what percentage of ATCs actually are 
> foundation members at the moment (presumably that number is easy to 
> generate?)

My impression is that we need a data source that covers all contributors
as much as possible. As you said, even for developers it is not always
the case. If you are also interested in tracking bug reporters or
message posters, for example, that is even less the case. Linking
affiliation information to Foundation membership could be risky from
this point of view.

A different issue is that the Foundation maintains a system for claiming
or fixing affiliation information, so that all of us producing metrics
can use it. It could be based on the current datasets (the best of them,
or maybe a combination of some of them), and could provide some
interface for easy and public proposal of changes. It should also
provide some interface so that any metrics collecting system can use it.

For being useful, it should also include data for identification of
developers (usually, the email addresses they are using in the different
OpenStack repositories), since developers not only change organization,
they also tend to change identification from time to time.

> The thing is, the Foundation data currently seems to be the least 
> accurate of all the data sets. Also, the lack of affiliation over time 
> is really a problem for this project, especially if one of the driving 
> factors for so much interest in statistics comes from organizations 
> wanting to ensure contributions by their employees get counted. A 
> significant percentage of top contributors to OpenStack have not 
> remained at a single employer over their duration to contributing to 
> OpenStack, and I expect that to be the norm as the project ages.
> 
> Also, both gitdm and stackalytics have active open developer communities 
> (and they are open source all the way down, don't need non open 
> components to run), so again, I'm not sure why defaulting to the least 
> open platform makes any sense.

Just for the record, the MetricsGrimoire / vizGrimoire stack that is
producing the dashboards at http://activity.openstack.org/dash/ is also
complete open source, with an open developer community, see
http://metricsgrimoire.github.io and http://vizgrimoire.github.io

All the data is also available, in the form of JSON files and SQL
databases, see
http://activity.openstack.org/dash/newbrowser/browser/data/db/
(which includes affilation data)

This said, I'm not intending that our affiliation datasets are the best
ones. We'd be more than happy to collaborate with the rest to produce a
common dataset, or to revert to some other if it proves better
maintained. In fact, we have already incorporated affiliation data from
gitdm and (partially) from stackalytics.

> Member affiliation in the Foundation database can also only be fixed by 
> the individual. In the other tools people in the know can fix it. It 
> means we get a wikipedia effect in getting the data more accurate, as 
> you can fix any issue you see, not just your own.

This is something very important, from my point of view. The ability of
changing any data you may find inaccurate, along with the use of a
review system, just to ensure that we don't include malicious requests
for change, would be desired features for any system we use.

> If the foundation member database was its own thing, had a REST API to
> bulk fetch, and supported temporal associations, and let others propose 
> updates to people's affiliation, then it would be an option. But right 
> now it seems very far from being useful, and is probably the least, not 
> most, accurate version of the world.
[...]

From my point of view, having a REST API would be helpful, but not a
must. The usual way to include bulk data for us is to retrieve the
external bulk data, compare it with the current we have, and decide (in
part by hand) on the differences one by one, trying to incorporate the
most reliable option. If the external data were always more reliable, it
would be a matter of just comparing and using the external data when a
match is found, and could be done automatically. And no REST data is
really needed for this.

Support of temporal associations, proposal of updates by anyone, a review
system, and support for multiple identities, would be

Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread David Kranz
Thanks, Steve. I suggested a new directory because we really need to 
have more complete tests of the client libs since they are not tied to 
particular OpenStack releases and we claim the current libs should work 
with older releases. That said, I did not realize the intent was to do 
more than test the client libraries. If the intent of the follow-ons to 
this patch is to be more scenario-like rather than just increasing 
coverage for the keystoneclient api then it makes sense to go in 
scenario/keystone.


I still think we should have more client lib coverage than the haphazard 
usage we will get from scenario but a new directory could wait until 
such code starts to exist.


 -David

On 10/18/2013 11:34 AM, Steven Hardy wrote:

Hi all,

Starting a thread to discuss $subject, as requested in:

https://review.openstack.org/#/c/51558/

First a bit of background.  I wrote a keystoneclient patch, and ayoung
stated he'd like it tested via tempest before he'd ack it:

https://review.openstack.org/#/c/48462/

So I spoke to ayoung and dkranz on IRC, showing them my local tests for the
patch.  dkranz suggested creating a "client_lib" directory, where we could
build out a more comprehensive set of tests over time, adding to the initial
tests related to keystone trusts client additions.

A couple of things to note:
- These are end-to-end tests, designed to test not only the client, but
   also the API and keystone backend functionality.  So arguably this could
   just be a scenario test, e.g. scenario/keystone/test_v3_auth.py

- The intention is to exercise logic which is hard to fully test with
   unit or integration tests, and to catch issues like incompatibility
   between client and API - e.g. keystoneclient tests may pass, but we need
   to make sure the client actually works against the real keystone API.
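The shape of such an end-to-end check can be sketched as below. The client class and method names are hypothetical stand-ins, not the real python-keystoneclient API, and an in-memory stub replaces the live keystone endpoint so the round-trip structure of the test is visible; the real tempest test would drive the actual client against a deployed keystone instead.

```python
import unittest
import uuid


class StubTrust:
    """Stand-in for a trust record a real keystone backend would store."""
    def __init__(self, trustor, trustee, roles):
        self.id = uuid.uuid4().hex
        self.trustor = trustor
        self.trustee = trustee
        self.roles = list(roles)


class StubKeystoneClient:
    """Hypothetical client; a real test would use python-keystoneclient
    against a live keystone API instead of this in-memory stub."""
    def __init__(self):
        self._trusts = {}

    def create_trust(self, trustor, trustee, roles):
        trust = StubTrust(trustor, trustee, roles)
        self._trusts[trust.id] = trust
        return trust

    def get_trust(self, trust_id):
        return self._trusts[trust_id]  # raises KeyError when missing

    def delete_trust(self, trust_id):
        del self._trusts[trust_id]


class TrustRoundTripTest(unittest.TestCase):
    """End-to-end shape: create a trust via the client, read it back,
    delete it, and confirm it is gone -- exercising both the client
    calls and the backend behavior behind them."""
    def setUp(self):
        self.client = StubKeystoneClient()

    def test_trust_round_trip(self):
        trust = self.client.create_trust('alice', 'bob', ['member'])
        fetched = self.client.get_trust(trust.id)
        self.assertEqual('alice', fetched.trustor)
        self.assertEqual(['member'], fetched.roles)
        self.client.delete_trust(trust.id)
        self.assertRaises(KeyError, self.client.get_trust, trust.id)
```

The value of running this against a real endpoint, rather than the stub, is exactly the client/API incompatibility class of bug described above.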

Working on Heat has given me a pretty good insight into the python-*client
API's, as we use them to orchestrate actions with every openstack service;
IMO anything we can do to make these interfaces more robust (and catch
bugs, several of which I found already while writing these tests) is a
good-thing (tm).

I'd welcome feedback on the patch above, and what will be the most
acceptable approach to the tempest team for adding these tests.

More links:

https://review.openstack.org/#/c/51559/
https://review.openstack.org/#/c/51560/
https://blueprints.launchpad.net/tempest/+spec/keystoneclient-api

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread Brant Knudson
To provide a bit more background... Keystone has a bunch of keystoneclient
tests for the v2 API. These tests actually "git clone" a version of
keystoneclient (master, essex-3, and 0.1.1)[0], to use for testing.  Maybe
at some point the tests were just for client-server compatibility, but now
they're used for more than that; for example, they're used for tests that
require going through the paste pipeline and v2 controller. It's just the
quickest way to get some tests written. In addition, there are versions of
the client tests for both the kvs and sql backends.

This causes several problems:
1) It looks like we're not keeping the versions to test up-to-date --
should we checkout supported releases instead?
2) "git clone"ing the keystoneclient doesn't work well with parallel
testing (we have a similar problem in our tests with our "pristine"
database backup)
3) These tests eat up lots of memory, which we've gotten complaints about.

Getting v3 API keystoneclient/keystone testing into tempest will hopefully
lead to getting the v2 tests out of Keystone. Thanks to Steve for
taking this first step!

For the v3 API, the tests don't use the keystoneclient but instead use
webtest [1] and the REST API.

[0]
https://github.com/openstack/keystone/blob/master/keystone/tests/test_keystoneclient.py#L1070
[1]
https://github.com/openstack/keystone/blob/master/keystone/tests/test_content_types.py#L69

- Brant



On Fri, Oct 18, 2013 at 10:59 AM, Dolph Mathews wrote:

>
> On Fri, Oct 18, 2013 at 10:34 AM, Steven Hardy  wrote:
>
>> Hi all,
>>
>> Starting a thread to discuss $subject, as requested in:
>>
>> https://review.openstack.org/#/c/51558/
>>
>> First a bit of background.  I wrote a keystoneclient patch, and ayoung
>> stated he'd like it tested via tempest before he'd ack it:
>>
>> https://review.openstack.org/#/c/48462/
>>
>> So I spoke to ayoung and dkranz on IRC, showing them my local tests for
>> the
>> patch.  dkranz suggested creating a "client_lib" directory, where we could
>> build out a more comprehensive set of tests over time, adding to the
>> initial
>> tests related to keystone trusts client additions.
>>
>> A couple of things to note:
>> - These are end-to-end tests, designed to test not only the client, but
>>   also the API and keystone backend functionality.  So arguably this could
>>   just be a scenario test, e.g. scenario/keystone/test_v3_auth.py
>>
>
> I'd love to be able to run these tests against a wider variety of service
> configurations (e.g. LDAP!), which tempest is obviously more suitable for.
>
>
>>
>> - The intention is to exercise logic which is hard to fully test with
>>   unit or integration tests, and to catch issues like incompatibility
>>   between client and API - e.g. keystoneclient tests may pass, but we need
>>   to make sure the client actually works against the real keystone API.
>>
>
> All of our tests under keystone.tests.test_keystoneclient fall into this
> category as well:
>
>
> https://github.com/openstack/keystone/blob/a0e26c1882d83989bee3726a5ae08cbe3f32a2b5/keystone/tests/test_keystoneclient.py
>
>
> https://github.com/openstack/keystone/blob/a0e26c1882d83989bee3726a5ae08cbe3f32a2b5/keystone/tests/test_keystoneclient_sql.py
>
>
>>
>> Working on Heat has given me a pretty good insight into the python-*client
>> API's, as we use them to orchestrate actions with every openstack service;
>> IMO anything we can do to make these interfaces more robust (and catch
>> bugs, several of which I found already while writing these tests) is a
>> good-thing (tm).
>>
>
> ++
>
>
>>
>> I'd welcome feedback on the patch above, and what will be the most
>> acceptable approach to the tempest team for adding these tests.
>>
>> More links:
>>
>> https://review.openstack.org/#/c/51559/
>> https://review.openstack.org/#/c/51560/
>> https://blueprints.launchpad.net/tempest/+spec/keystoneclient-api
>>
>> Thanks!
>>
>> Steve
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
>
> -Dolph
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread Dolph Mathews
On Fri, Oct 18, 2013 at 10:34 AM, Steven Hardy  wrote:

> Hi all,
>
> Starting a thread to discuss $subject, as requested in:
>
> https://review.openstack.org/#/c/51558/
>
> First a bit of background.  I wrote a keystoneclient patch, and ayoung
> stated he'd like it tested via tempest before he'd ack it:
>
> https://review.openstack.org/#/c/48462/
>
> So I spoke to ayoung and dkranz on IRC, showing them my local tests for the
> patch.  dkranz suggested creating a "client_lib" directory, where we could
> build out a more comprehensive set of tests over time, adding to the initial
> tests related to keystone trusts client additions.
>
> A couple of things to note:
> - These are end-to-end tests, designed to test not only the client, but
>   also the API and keystone backend functionality.  So arguably this could
>   just be a scenario test, e.g. scenario/keystone/test_v3_auth.py
>

I'd love to be able to run these tests against a wider variety of service
configurations (e.g. LDAP!), which tempest is obviously more suitable for.


>
> - The intention is to exercise logic which is hard to fully test with
>   unit or integration tests, and to catch issues like incompatibility
>   between client and API - e.g. keystoneclient tests may pass, but we need
>   to make sure the client actually works against the real keystone API.
>

All of our tests under keystone.tests.test_keystoneclient fall into this
category as well:


https://github.com/openstack/keystone/blob/a0e26c1882d83989bee3726a5ae08cbe3f32a2b5/keystone/tests/test_keystoneclient.py


https://github.com/openstack/keystone/blob/a0e26c1882d83989bee3726a5ae08cbe3f32a2b5/keystone/tests/test_keystoneclient_sql.py


>
> Working on Heat has given me a pretty good insight into the python-*client
> API's, as we use them to orchestrate actions with every openstack service;
> IMO anything we can do to make these interfaces more robust (and catch
> bugs, several of which I found already while writing these tests) is a
> good-thing (tm).
>

++


>
> I'd welcome feedback on the patch above, and what will be the most
> acceptable approach to the tempest team for adding these tests.
>
> More links:
>
> https://review.openstack.org/#/c/51559/
> https://review.openstack.org/#/c/51560/
> https://blueprints.launchpad.net/tempest/+spec/keystoneclient-api
>
> Thanks!
>
> Steve
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

-Dolph
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] VMWare Mine Sweeper, Congrats!

2013-10-18 Thread Yaguang Tang
How can I enable or trigger Mine Sweeper for VMware-related patches?  I
updated a patch for the VMware driver today
(https://review.openstack.org/#/c/51793/) but haven't seen any results
posted.


2013/10/18 Sean Dague 

> On 10/17/2013 02:29 PM, Dan Smith wrote:
>
>> This system is running tempest against a VMWare deployment and posting
>>> the results publicly.  This is really great progress.  It will go a long
>>> way in helping reviewers be more confident in changes to this driver.
>>>
>>
>> This is huge progress, congrats and thanks to the VMware team for making
>> this happen! There is really no substitute for the value this will
>> provide for overall quality.
>>
>
> Agreed. Nice job guys! It's super cool to now see SmokeStack and Mine
> Sweeper posting back on patches.
>
> Tip of the hat to the VMWare team for pulling this together so quickly.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Tang Yaguang

Canonical Ltd. | www.ubuntu.com | www.canonical.com
Mobile:  +86 152 1094 6968
gpg key: 0x187F664F
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][keystone] Adding client library related tests to tempest

2013-10-18 Thread Steven Hardy
Hi all,

Starting a thread to discuss $subject, as requested in:

https://review.openstack.org/#/c/51558/

First a bit of background.  I wrote a keystoneclient patch, and ayoung
stated he'd like it tested via tempest before he'd ack it:

https://review.openstack.org/#/c/48462/

So I spoke to ayoung and dkranz on IRC, showing them my local tests for the
patch.  dkranz suggested creating a "client_lib" directory, where we could
build out a more comprehensive set of tests over time, adding to the initial
tests related to keystone trusts client additions.

A couple of things to note:
- These are end-to-end tests, designed to test not only the client, but
  also the API and keystone backend functionality.  So arguably this could
  just be a scenario test, e.g. scenario/keystone/test_v3_auth.py

- The intention is to exercise logic which is hard to fully test with
  unit or integration tests, and to catch issues like incompatibility
  between client and API - e.g. keystoneclient tests may pass, but we need
  to make sure the client actually works against the real keystone API.

Working on Heat has given me a pretty good insight into the python-*client
API's, as we use them to orchestrate actions with every openstack service;
IMO anything we can do to make these interfaces more robust (and catch
bugs, several of which I found already while writing these tests) is a
good-thing (tm).

I'd welcome feedback on the patch above, and what will be the most
acceptable approach to the tempest team for adding these tests.

More links:

https://review.openstack.org/#/c/51559/
https://review.openstack.org/#/c/51560/
https://blueprints.launchpad.net/tempest/+spec/keystoneclient-api

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] RFC: Freeze deps versions in requirements

2013-10-18 Thread Sean Dague

On 10/17/2013 08:12 AM, Petr Blaho wrote:

Hi all,

this is probably the 3rd or 4th time in a quite short period that we were
bitten by a new version of one of our dependencies.

The last one (https://review.openstack.org/#/c/52327/3) was about WSME
releasing version 0.5b6 and our tests failing.

I do not want to argue whether the problem is on the tuskar side (probably
tuskar is to blame, according to what jistr found out) or on the WSME side,
but I think we should discuss the possibility of freezing versions in our
dependency definitions, then checking for new versions of deps from time to
time and doing the update as a separate task.

The current state of having open version constraints like WSME<=0.5b5 leads
to occasional (or, as I see it, quite frequent) Jenkins job failures (Jenkins
uses a clean venv for installation, while devs can have an older one and so
miss failures with a new version of a dep), and these "sudden" failures force
us to investigate and divert from planned tasks, etc.

I think that having a regular "update deps" task can lead to better time
management, and having fewer "sudden" Jenkins failures will be beneficial
to developers' health :-)

Please tell me if I am missing some obvious problem with freezing dep
versions and/or a regular update task, and whether I am missing an upside of
these "sudden version strikes" too.


Most of this has already been addressed in the thread. There is a huge
trade-off here: freezing requirements makes everything work in the short
term, but in the long term it causes huge pain. Just look at the SQLA pin at
<= 0.7.99, which has gone far too long without being resolved, and has
all the distros carrying patches to work around it (now why none of them
have contributed those back is another question).


Getting to a single global requirements list this summer took 3 weeks of
cross-team coordination, because of pinning. Having to unwind
coordinated patches like that is lots of fun, let me tell you. :) And
while the result isn't perfect, it's so much better than the random gate
wedges we were regularly getting, which were actually unsolvable through
normal process fixes.


If you are a project with tempest integration, then you actually get a
voice in global requirements bumps, because a g-r change can't land
without passing tempest / devstack tests. So integrated projects take
note: there are lots of reasons you want to have good tempest tests, and
protecting yourself from requirements changes is one of them (it wouldn't
help in the tuskar case, as that's only got unit tests).


So, I think we largely need to just take our lumps, realizing that we
don't have tight control over the python world, and it's better to react
quickly than to hide behind a pin and not realize that we're massively
broken with the latest versions of libraries out there.
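The failure mode being debated can be made concrete with a toy specifier checker. This is not a substitute for pip's real requirement parsing (it handles only dotted numeric versions, not pre-releases like 0.5b6 or multi-clause specifiers); it just illustrates why an open-ended constraint lets any future release flip a previously green environment to red, while a hard pin trades that breakage for staleness.

```python
def parse_version(v):
    """Split a dotted numeric version like '0.7.99' into an int tuple."""
    return tuple(int(part) for part in v.split('.'))


def satisfies(installed, constraint):
    """Check 'installed' against a single specifier like '<=0.7.99'.

    Toy logic only -- real requirements handling (pre-releases such as
    '0.5b6', comma-separated clauses, etc.) is what pip/pkg_resources do.
    """
    for op in ('<=', '>=', '==', '<', '>'):  # two-char ops checked first
        if constraint.startswith(op):
            pinned = parse_version(constraint[len(op):])
            got = parse_version(installed)
            return {'<=': got <= pinned, '>=': got >= pinned,
                    '==': got == pinned, '<': got < pinned,
                    '>': got > pinned}[op]
    # No operator at all: fully open-ended, any release satisfies it.
    return True


# An open constraint accepts every future release, so a fresh venv
# (like Jenkins builds) silently picks up whatever was just published...
assert satisfies('0.6.0', '')
# ...while a hard pin rejects new releases, keeping CI green but stale.
assert satisfies('0.7.99', '<=0.7.99')
assert not satisfies('0.8.0', '<=0.7.99')
```

The SQLA `<= 0.7.99` pin discussed above is exactly the second branch held for too long: every newer release is rejected until someone does the unpinning work.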


That being said, because WSME is in stackforge, I think we could
actually do better than just take our lumps. But I think that's a topic for
the requirements session at summit.


-Sean

--
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Setting up compute node on XenServer.

2013-10-18 Thread Bob Ball
Hi Parikshit,

More details can be extracted by setting debug=true in /etc/nova/nova.conf or 
setting default_log_levels to include nova.virt.xenapi.driver=debug.

This is likely to be a misconfigured nova.conf - check
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/introduction-to-xen.html
- did you set the nova.conf values as described there?

Thanks,

Bob

From: Parikshit Manur [mailto:parikshit.ma...@citrix.com]
Sent: 18 October 2013 15:47
To: OpenStack Development Mailing List
Subject: [openstack-dev] Setting up compute node on XenServer.

Hi,


* I have the OpenStack setup working with KVM as one of the compute
nodes. A compute node of type XenServer was added to the setup. The XenServer
has Dom0 and DomU nodes, with the DomU node running the nova-compute service.
The XenServer is shown in the nova hypervisor-list command.



* The OpenStack setup is backed by VLAN and runs neutron for
networking.


* After issuing the nova boot command for the XenServer compute node, the
request fails, and the status of the VM shows as Error when the nova list
command is run.


* The following are the log details captured for the
instance where the error state is being set.

2013-10-18 19:43:31.704 21735 INFO nova.scheduler.filter_scheduler 
[req-9078a74a-5b32-4521-8398-fc582964cfed bc7a0e98500f4633a47546ba542585b2 
1d770e03eb134f58977313cac6edef92] Attempting to build 1 instance(s) uuids: 
[u'b77060d3-050a-46d1-893e-43c9d6c1d48d']
2013-10-18 19:43:31.736 21735 WARNING nova.scheduler.driver 
[req-9078a74a-5b32-4521-8398-fc582964cfed bc7a0e98500f4633a47546ba542585b2 
1d770e03eb134f58977313cac6edef92] [instance: 
b77060d3-050a-46d1-893e-43c9d6c1d48d] Setting instance to ERROR state.

Can you suggest any reason for the above error?

Is there any way to enable detailed logging, and to create a VM on the
XenServer compute node?


Thanks,
Parikshit Manur

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Setting up compute node on XenServer.

2013-10-18 Thread Parikshit Manur
Hi,


* I have the OpenStack setup working with KVM as one of the compute
nodes. A compute node of type XenServer was added to the setup. The XenServer
has Dom0 and DomU nodes, with the DomU node running the nova-compute service.
The XenServer is shown in the nova hypervisor-list command.



* The OpenStack setup is backed by VLAN and runs neutron for
networking.


* After issuing the nova boot command for the XenServer compute node, the
request fails, and the status of the VM shows as Error when the nova list
command is run.


* The following are the log details captured for the
instance where the error state is being set.

2013-10-18 19:43:31.704 21735 INFO nova.scheduler.filter_scheduler 
[req-9078a74a-5b32-4521-8398-fc582964cfed bc7a0e98500f4633a47546ba542585b2 
1d770e03eb134f58977313cac6edef92] Attempting to build 1 instance(s) uuids: 
[u'b77060d3-050a-46d1-893e-43c9d6c1d48d']
2013-10-18 19:43:31.736 21735 WARNING nova.scheduler.driver 
[req-9078a74a-5b32-4521-8398-fc582964cfed bc7a0e98500f4633a47546ba542585b2 
1d770e03eb134f58977313cac6edef92] [instance: 
b77060d3-050a-46d1-893e-43c9d6c1d48d] Setting instance to ERROR state.

Can you suggest any reason for the above error?

Is there any way to enable detailed logging, and to create a VM on the
XenServer compute node?


Thanks,
Parikshit Manur

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-18 Thread Matt Riedemann
And this guy: https://bugs.launchpad.net/nova/+bug/1241628 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Matt Riedemann/Rochester/IBM@IBMUS
To: OpenStack Development Mailing List 
, 
Date:   10/18/2013 09:25 AM
Subject:Re: [openstack-dev] [nova][powervm] my notes from the 
meeting on  powervm CI



I just opened this bug, it's going to be one of the blockers for us to get 
PowerVM CI going in Icehouse: 

https://bugs.launchpad.net/nova/+bug/1241619 



Thanks, 

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development 

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com 


3605 Hwy 52 N
Rochester, MN 55901-1407
United States





From:Matt Riedemann/Rochester/IBM@IBMUS 
To:OpenStack Development Mailing List 
, 
Date:10/11/2013 10:59 AM 
Subject:Re: [openstack-dev] [nova][powervm] my notes from the 
meeting onpowervm CI 







Matthew Treinish  wrote on 10/10/2013 10:31:29 PM:

> From: Matthew Treinish  
> To: OpenStack Development Mailing List 
, 
> Date: 10/10/2013 11:07 PM 
> Subject: Re: [openstack-dev] [nova][powervm] my notes from the 
> meeting on powervm CI 
> 
> On Thu, Oct 10, 2013 at 07:39:37PM -0700, Joe Gordon wrote:
> > On Thu, Oct 10, 2013 at 7:28 PM, Matt Riedemann  
wrote:
> > > >
> > > > > 4. What is the max amount of time for us to report test results? 
 Dan
> > > > > didn't seem to think 48 hours would fly. :)
> > > >
> > > > Honestly, I think that 12 hours during peak times is the upper 
limit of
> > > > what could be considered useful. If it's longer than that, many 
patches
> > > > could go into the tree without a vote, which defeats the point.
> > >
> > > Yeah, I was just joking about the 48 hour thing, 12 hours seems 
excessive
> > > but I guess that has happened when things are super backed up with 
gate
> > > issues and rechecks.
> > >
> > > Right now things take about 4 hours, with Tempest being around 1.5 
hours
> > > of that. The rest of the time is setup and install, which includes 
heat
> > > and ceilometer. So I guess that raises another question, if we're 
really
> > > setting this up right now because of nova, do we need to have heat 
and
> > > ceilometer installed and configured in the initial delivery of this 
if
> > > we're not going to run tempest tests against them (we don't right 
now)?
> > >
> > 
> > 
> > In general the faster the better, and if things get to slow enough 
that we
> > have to wait for powervm CI to report back, I
> > think its reasonable to go ahead and approve things without hearing 
back.
> >  In reality if you can report back in under 12 hours this will rarely
> > happen (I think).
> > 
> > 
> > >
> > > I think some aspect of the slow setup time is related to DB2 and how
> > > the migrations perform with some of that, but the overall time is 
not
> > > considerably different from when we were running this with MySQL so
> > > I'm reluctant to blame it all on DB2.  I think some of our topology
> > > could have something to do with it too since the IVM hypervisor is 
running
> > > on a separate system and we are gated on how it's performing at any
> > > given time.  I think that will be our biggest challenge for the 
scale
> > > issues with community CI.
> > >
> > > >
> > > > > 5. What are the minimum tests that need to run (excluding 
> APIs that the
> > > > > powervm driver doesn't currently support)?
> > > > > - smoke/gate/negative/whitebox/scenario/cli?  Right 
> now we have
> > > > > 1152 tempest tests running, those are only within 
api/scenario/cli and
> > > > > we don't run everything.
> 
> Well that's almost a full run right now, the full tempest jobs have 1290 
tests
> of which we skip 65 because of bugs or configuration. (don't run neutron 
api
> tests without neutron) That number is actually pretty high since you are
> running with neutron. Right now the neutron gating jobs only have 221 
jobs and
> skip 8 of those. Can you share the list of things you've got working 
with
> neutron so we can up the number of gating tests? 

Here is the nose.cfg we run with: 



Some of the tests are excluded because of performance issues that still 
need to 
be worked out (like test_list_image_filters - it works but it takes over 
20 
minutes sometimes). 

Some of the tests are excluded because of limitations with DB2, e.g. 
test_list_servers_filtered_by_name_wildcard 

Some of them are probably old excludes on bugs that are now fixed. We have 
to 
go back through what's excluded every once in awhile to figure out what's 
still broken and clean things up. 

Here is the tempest.cfg we use on ppc64: 



And here are the xunit results from our latest run: 



Note that we have known issues with some cinder and neutron failures 
in there. 

> 
> > > >
> > 

Re: [openstack-dev] [nova][powervm] my notes from the meeting on powervm CI

2013-10-18 Thread Matt Riedemann
I just opened this bug, it's going to be one of the blockers for us to get 
PowerVM CI going in Icehouse:

https://bugs.launchpad.net/nova/+bug/1241619 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Matt Riedemann/Rochester/IBM@IBMUS
To: OpenStack Development Mailing List 
, 
Date:   10/11/2013 10:59 AM
Subject:Re: [openstack-dev] [nova][powervm] my notes from the 
meeting on  powervm CI







Matthew Treinish  wrote on 10/10/2013 10:31:29 PM:

> From: Matthew Treinish  
> To: OpenStack Development Mailing List 
, 
> Date: 10/10/2013 11:07 PM 
> Subject: Re: [openstack-dev] [nova][powervm] my notes from the 
> meeting on powervm CI 
> 
> On Thu, Oct 10, 2013 at 07:39:37PM -0700, Joe Gordon wrote:
> > On Thu, Oct 10, 2013 at 7:28 PM, Matt Riedemann  
wrote:
> > > >
> > > > > 4. What is the max amount of time for us to report test results? 
 Dan
> > > > > didn't seem to think 48 hours would fly. :)
> > > >
> > > > Honestly, I think that 12 hours during peak times is the upper 
limit of
> > > > what could be considered useful. If it's longer than that, many 
patches
> > > > could go into the tree without a vote, which defeats the point.
> > >
> > > Yeah, I was just joking about the 48 hour thing, 12 hours seems 
excessive
> > > but I guess that has happened when things are super backed up with 
gate
> > > issues and rechecks.
> > >
> > > Right now things take about 4 hours, with Tempest being around 1.5 
hours
> > > of that. The rest of the time is setup and install, which includes 
heat
> > > and ceilometer. So I guess that raises another question, if we're 
really
> > > setting this up right now because of nova, do we need to have heat 
and
> > > ceilometer installed and configured in the initial delivery of this 
if
> > > we're not going to run tempest tests against them (we don't right 
now)?
> > >
> > 
> > 
> > In general the faster the better, and if things get to slow enough 
that we
> > have to wait for powervm CI to report back, I
> > think its reasonable to go ahead and approve things without hearing 
back.
> >  In reality if you can report back in under 12 hours this will rarely
> > happen (I think).
> > 
> > 
> > >
> > > I think some aspect of the slow setup time is related to DB2 and how
> > > the migrations perform with some of that, but the overall time is 
not
> > > considerably different from when we were running this with MySQL so
> > > I'm reluctant to blame it all on DB2.  I think some of our topology
> > > could have something to do with it too since the IVM hypervisor is 
running
> > > on a separate system and we are gated on how it's performing at any
> > > given time.  I think that will be our biggest challenge for the 
scale
> > > issues with community CI.
> > >
> > > >
> > > > > 5. What are the minimum tests that need to run (excluding 
> APIs that the
> > > > > powervm driver doesn't currently support)?
> > > > > - smoke/gate/negative/whitebox/scenario/cli?  Right 
> now we have
> > > > > 1152 tempest tests running, those are only within 
api/scenario/cli and
> > > > > we don't run everything.
> 
> Well that's almost a full run right now, the full tempest jobs have 1290 
tests
> of which we skip 65 because of bugs or configuration. (don't run neutron 
api
> tests without neutron) That number is actually pretty high since you are
> running with neutron. Right now the neutron gating jobs only have 221 
jobs and
> skip 8 of those. Can you share the list of things you've got working 
with
> neutron so we can up the number of gating tests? 

Here is the nose.cfg we run with: 



Some of the tests are excluded because of performance issues that still 
need to 
be worked out (like test_list_image_filters - it works but it takes over 
20 
minutes sometimes). 

Some of the tests are excluded because of limitations with DB2, e.g. 
test_list_servers_filtered_by_name_wildcard 

Some of them are probably old excludes on bugs that are now fixed. We have 
to 
go back through what's excluded every once in awhile to figure out what's 
still broken and clean things up. 

Here is the tempest.cfg we use on ppc64: 



And here are the xunit results from our latest run: 



Note that we have known issues with some cinder and neutron failures 
in there. 

> 
> > > >
> > > > I think that "a full run of tempest" should be required. That 
said, if
> > > > there are things that the driver legitimately doesn't support, it 
makes
> > > > sense to exclude those from the tempest run, otherwise it's not 
useful.
> > >
> > 
> > ++
> > 
> > 
> > 
> > >  >
> > > > I think you should publish the tempest config (or config script, 
or
> > > > patch, or whatever) that you're using so that we can see what it 
means
> > > > in terms of the coverage you're providing.
> > >
> > > Just to clarify, do you mean pub

Re: [openstack-dev] [Nova][Glance] Support of v1 and v2 glance APIs in Nova

2013-10-18 Thread stuart . mclaren


I'd just like to echo Tim Bell's comments.


From a rolling upgrade perspective, where you have Glance nodes behind
a load balancer, we'd probably need a way to manually 'hold back' on v1
until all nodes are upgraded to support v2 as well.
(Otherwise your auto-discovery may hit an upgraded node, but your
subsequent API request could hit a node which hasn't been upgraded yet.)


On 10/17/2013 03:12 PM, Eddie Sheffield wrote:

I don't oppose having the version autodiscovered. But I do feel the option
should be available to override if desired. I've had too many problems over
the years with autoconfiguring systems not doing what I need to be willing
to depend on them 100% without a manual override available if at all
possible. I'm thinking particularly of testing or upgrade evaluation
scenarios. For example, you want to turn on v2 in glance and evaluate it for
a bit before committing to using it for all of your nova nodes.


The current patch also had a couple of comments from Dan Prince and Chris
Behrens, early on when this was brought up, supporting the use of a
config value.


From the implementation side of things, doing this properly would require
some changes to the glanceclient to allow querying of available versions.
The current lack of that ability is one reason this is a config value.
Once the client supports this, the nova-side change would likely be quite
small.


Might I propose a compromise?

1) For the VERY short term, keep the config value and get the change otherwise 
reviewed and hopefully accepted.

2) Immediately file two blueprints:
   - python-glanceclient - expose a way to discover available versions
   - nova - depends on the glanceclient bp; allow autodiscovery of the glance
version, and make the config value optional (though not deprecated / removed)


Supporting both seems reasonable.  At least then *most* people don't
need to worry about it and it "just works", but the override is there if
necessary, since multiple people seem to be expressing a desire to have
it available.

Can we just do this all at once?  Adding this to glanceclient doesn't
seem like a huge task.
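The compromise being proposed can be sketched as a small selection routine. The function name and behavior here are illustrative assumptions, not the actual python-glanceclient or nova interface: honor an explicit operator-set override when present (covering the rolling-upgrade "hold back on v1" case raised earlier in the thread), otherwise autodiscover by picking the highest version both sides support.

```python
def choose_api_version(server_versions, client_versions, override=None):
    """Pick the glance API version to use.

    server_versions: versions the glance endpoint advertises, e.g. [1, 2]
    client_versions: versions this client implementation supports
    override: optional config value that wins outright, giving operators
              the manual hold-back requested in this thread.
    Illustrative only -- not the real glanceclient behavior.
    """
    if override is not None:
        if override not in client_versions:
            raise ValueError('configured version %r unsupported' % override)
        return override
    common = set(server_versions) & set(client_versions)
    if not common:
        raise ValueError('no mutually supported API version')
    return max(common)


# Autodiscovery picks the newest shared version...
assert choose_api_version([1, 2], [1, 2]) == 2
# ...but an operator pin holds a node back during a rolling upgrade.
assert choose_api_version([1, 2], [1, 2], override=1) == 1
```

Making the config value optional then means: `override=None` (autodiscover) by default, with the pin available for the evaluation and load-balancer scenarios described above.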

--
Russell Bryant


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Metrics] Improving the data about contributor/affiliation/time

2013-10-18 Thread Sean Dague

On 10/17/2013 05:34 PM, Stefano Maffulli wrote:

hello folks

first of all: congratulations to all developers, testers, users,
translators, tech writers for the new release: Havana is out of the gate
with impressive numbers.

Speaking of numbers, a lot of you have noticed mistakes in the reported
numbers, from misspelling of names to missing/wrong company
affiliations. With my apologies for the mistakes comes an explanation of
where I see things fail and a suggestion on how to fix this for the future.

Currently there are three places where statistics about the project are
released:

  - OpenStack Activity Board http://activity.openstack.org/
  - gitdm http://git.openstack.org/cgit/openstack-infra/gitdm/
  - Stackalytics http://git.openstack.org/cgit/stackforge/stackalytics/

Activity Board is actually made of two pieces: the Dash and Insights.
Insights pulls straight from the OpenStack Foundation Members db
http://www.openstack.org/community/members/, so what you see in personal
pages like

http://activity.openstack.org/data/plugins/zfacts/view.action?instance=Person,person3986c85a-b9af-4686-8c7b-45525f62e396

is exactly what is written on Robert's personal profile
http://www.openstack.org/community/members/profile/3619 (these
confluence pages are updated daily).

The data about companies on the Dash are the result of semi-automatic
processing and cleanup of the data from OpenStack Foundation Members db.
The cleanup is necessary because a) one can't always rely on people
spelling correctly the name of their company b) the Profile pages lack
the UI to properly track the history of affiliation [1]. Here is what
the Dash looks like for Canonical:

http://activity.openstack.org/dash/releases/company.html?company=Canonical

gitdm and Stackalytics take their developer/company/time tuples from
files maintained by developers themselves compensated by heuristics to
'guess' affiliations from things like email addresses in the commit logs.
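That heuristic layer can be pictured roughly like this (a simplified sketch;
the real gitdm/Stackalytics tooling uses maintained mapping files and more
signals than just the email domain, and every name below is made up):

```python
# Hypothetical excerpt of a developer-maintained mapping, in the spirit
# of the files gitdm/Stackalytics keep in their repos.
KNOWN_AFFILIATIONS = {
    'alice@example.com': 'Acme Cloud',
}

# Fallback heuristic: guess the company from the commit email domain.
DOMAIN_TO_COMPANY = {
    'redhat.com': 'Red Hat',
    'hp.com': 'HP',
}

def guess_affiliation(email):
    """Return a company name for a commit author email.

    Explicit, developer-maintained entries win; the domain heuristic is
    only a fallback, which is exactly why the data drifts when people
    change employers without updating the mapping.
    """
    if email in KNOWN_AFFILIATIONS:
        return KNOWN_AFFILIATIONS[email]
    domain = email.rsplit('@', 1)[-1]
    return DOMAIN_TO_COMPANY.get(domain, '*independent')
```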

Having four sources of data for this reporting is bad and not sustainable.

Since it seems commonly accepted that all developers need to be members
of the Foundation, and that Foundation members need to state their
affiliation when they join and keep such data current when it changes, I
think the Foundation is in a good place to provide the authoritative
data for all projects to use.


I'm not sure it is well understood that all developers have to join the 
foundation. We don't make that a requirement on someone slinging a 
patch. It would be nice to know what percentage of ATCs actually are 
foundation members at the moment (presumably that number is easy to 
generate?)


The thing is, the Foundation data currently seems to be the least 
accurate of all the data sets. Also, the lack of affiliation over time 
is really a problem for this project, especially if one of the driving 
factors for so much interest in statistics comes from organizations 
wanting to ensure contributions by their employees get counted. A 
significant percentage of top contributors to OpenStack have not 
remained at a single employer for the duration of their contributions to 
OpenStack, and I expect that to be the norm as the project ages.


Also, both gitdm and stackalytics have active open developer communities 
(and they are open source all the way down; they don't need non-open 
components to run), so again, I'm not sure why defaulting to the least 
open platform makes any sense.


Member affiliation in the Foundation database can also only be fixed by 
the individual. In the other tools, people in the know can fix it. That 
gives us a Wikipedia effect that makes the data more accurate, as 
you can fix any issue you see, not just your own.


If the foundation member database were its own thing, had a REST API for 
bulk fetches, supported temporal associations, and let others propose 
updates to people's affiliation, then it would be an option. But right 
now it seems very far from being useful, and is probably the least, not 
most, accurate version of the world.



We can make things easier by making the personal profile pages more
useful so people login more often and improve quality of data. Fixing
the known shortcomings mentioned above is one step. Furthermore, we're
working to develop an OpenID provider based on the Members DB that will
be used across all our web properties (from gerrit to the upcoming
groups.openstack.org, etc) so those profiles will be used for more than
just the initial signup to become a member [2].

Since nobody can rely on user input we will still have to 'cleanup' the
data as it comes in from the Members DB in order to create a 'Master
Data Record' that we can export for all to consume. Here things get a
bit fuzzy because currently the Members DB has an API that is not
designed to be securely consumed publicly[3].

What I think we can do is to have a periodic job pulling the full list
of members and their stated affiliation, and run on that an
automatic/manual cleanup/sanitizing job that c

Re: [openstack-dev] [Heat] Blueprint for retry function with idempotency in Heat

2013-10-18 Thread Mitsuru Kanabuchi

On Fri, 18 Oct 2013 10:34:11 +0100
Steven Hardy  wrote:
> IMO we don't want to go down the path of retry-loops in Heat, or scheduled
> self-healing. We should just allow the user to trigger a stack update from
> a failed state (CREATE_FAILED, or UPDATE_FAILED), and then they can define
> their own policy on when recovery happens by triggering a stack update.

I think "retry" has two different implications in this topic.
I'd like to clarify what "retry" means here.

=
1) Stack Creation retry

  proposed here:
https://blueprints.launchpad.net/heat/+spec/retry-failed-update

  - trigger: a stack update to a failed stack
  - function: replace the failed resource and proceed

2) API retry

  proposed here(Our blueprint):
https://blueprints.launchpad.net/heat/+spec/support-retry-with-idempotency

  - trigger: no API response, or an unexpected response code
  - function: retry the API request until it gets the expected response code
    or reaches a retry limit
=

Our proposal is 2).
After exceeding the retry limit, the stack would change to XXX_FAILED status.
I think this is the same as Heat's current behavior; we won't change the
mechanism of stack state transitions.

I understand proposal 1) aims to restart stack processing of a failed stack.
These are concerns at different layers, and both functionalities can exist
together.
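A rough sketch of the retry in 2), reusing one client token across attempts so
the server can deduplicate the request (the function shape and parameters are
illustrative, not the blueprint's API):

```python
import uuid

def call_with_retry(api_call, expected_codes=(200, 202), max_retries=3):
    """Retry an idempotent API call until an expected status is seen.

    `api_call` is assumed to accept a client_token and return an HTTP
    status code. The same token is reused on every attempt, so the
    server can recognise retries of one logical request (the
    ClientToken idea). Returns the final status; the caller moves the
    stack to XXX_FAILED once the retry limit is exceeded.
    """
    token = str(uuid.uuid4())
    status = None
    for _ in range(max_retries + 1):
        try:
            status = api_call(client_token=token)
        except IOError:          # no response at all -> retry
            continue
        if status in expected_codes:
            return status
    return status  # still unexpected after all retries
```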


On Fri, 18 Oct 2013 10:34:11 +0100
Steven Hardy  wrote:

> On Fri, Oct 18, 2013 at 12:13:45PM +1300, Steve Baker wrote:
> > On 10/18/2013 01:54 AM, Mitsuru Kanabuchi wrote:
> > > Hello Mr. Clint,
> > >
> > > Thank you for your comment and prioritization.
> > > I'm glad to discuss you who feel same issue.
> > >
> > >> I took the liberty of targeting your blueprint at icehouse. If you don't
> > >> think it is likely to get done in icehouse, please raise that with us at
> > >> the weekly meeting if you can and we can remove it from the list.
> > > Basically, this blueprint is targeted at the Icehouse release.
> > >
> > > However, the schedule depends on the following blueprint:
> > >   https://blueprints.launchpad.net/nova/+spec/idempotentcy-client-token
> > >
> > > We're going to start the Heat implementation after ClientToken is
> > > implemented. I think ClientToken is a necessary function for this
> > > blueprint, and an important function for other callers!
> > Can there not be a default retry implementation which deletes any
> > ERRORed resource and attempts the operation again? Then specific
> > resources can switch to ClientToken as they become available.
> 
> Yes, I think this is the way to go - have logic in every resource's
> handle_update (which would probably be common with check_create_complete)
> that checks the status of the underlying physical resource, and if it's
> not in the expected status, we replace it.
> 
> This probably needs to be a new flag or API operation, as it clearly has
> the possibility to be more destructive than a normal update (may delete
> resources which have not changed in the template, but are in a bad state)
> 
> > > On Wed, 16 Oct 2013 23:32:22 -0700
> > > Clint Byrum  wrote:
> > >
> > >> Excerpts from Mitsuru Kanabuchi's message of 2013-10-16 04:47:08 -0700:
> > >>> Hi all,
> > >>>
> > >>> We proposed a blueprint that supports an API retry function with
> > >>> idempotency for Heat.
> > >>> Please review the blueprint.
> > >>>
> > >>>   
> > >>> https://blueprints.launchpad.net/heat/+spec/support-retry-with-idempotency
> > >>>
> > >> This looks great. It addresses some of what I've struggled with while
> > >> thinking of how to handle the retry problem.
> > >>
> > >> I went ahead and linked bug #1160052 to the blueprint, as it is one that
> > >> I've been trying to get a solution for.
> > >>
> > >> I took the liberty of targeting your blueprint at icehouse. If you don't
> > >> think it is likely to get done in icehouse, please raise that with us at
> > >> the weekly meeting if you can and we can remove it from the list.
> > >>
> > >> Note that there is another related blueprint here:
> > >>
> > >> https://blueprints.launchpad.net/heat/+spec/retry-failed-update
> > >>
> > >>
> > 
> > Has any thought been given to where the policy should be specified for
> > how many retries to attempt?
> > 
> > Maybe sensible defaults should be defined in the python resources, and a
> > new resource attribute can allow an override in the template on a
> > per-resource basis (I'm referring to an attribute at the same level as
> > Type, Properties, Metadata)
> 
> IMO we don't want to go down the path of retry-loops in Heat, or scheduled
> self-healing. We should just allow the user to trigger a stack update from
> a failed state (CREATE_FAILED, or UPDATE_FAILED), and then they can define
> their own policy on when recovery happens by triggering a stack update.
> 
> This is basically what's described for discussion here:
> http://summit.openstack.org/cfp/details/95
> 
> I personally think the scheduled self-healing is a bad idea, but the
> convergence (a

[openstack-dev] Glance Client tool - ask for a blueprint review

2013-10-18 Thread GROSZ, Maty (Maty)
Hey *,

I have initiated a new blueprint, which can be viewed here:
https://blueprints.launchpad.net/python-glanceclient/+spec/make-schema-api-calls-configurable
I am waiting for your comments.

Thanks,

Maty.

Maty Grosz
Alcatel-Lucent
APIs Functional Owner, R&D
CLOUDBAND BUSINESS UNIT
16 Atir Yeda St. Kfar-Saba 44643, ISRAEL
T: +972 (0) 9 7933078
F: +972 (0) 9 7933700
maty.gr...@alcatel-lucent.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Metrics] Improving the data about contributor/affiliation/time

2013-10-18 Thread Thierry Carrez
Stefano Maffulli wrote:
> Since it seems commonly accepted that all developers need to be members
> of the Foundation, and that Foundation members need to state their
> affiliation when they join and keep such data current when it changes, I
> think the Foundation is in a good place to provide the authoritative
> data for all projects to use.

Note that "affiliation" in the Foundation membership sense may
completely differ from the company you are currently working for. Those
are different things.

For example, the bylaws say contractors are "affiliated" to any company
they worked for which gave them more than $60K of revenue over the last
12 months. They might be contributing patches on behalf of a new
company, but until it gives them 60K$ worth, they are not "affiliated"
to that company. Contractors can also be affiliated to more than one
company at a time, if they are lucky. Other rules affect board members
of multiple companies.

Affiliation is defined by rules in the bylaws to ensure director
diversity. It's not "the company you're currently working for", even if
in most cases it looks like it.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler] A new blueprint for Nova-scheduler: Policy-based Scheduler

2013-10-18 Thread Khanh-Toan Tran
I like what you proposed in the blueprint. I totally agree that nova-scheduler 
needs
finer granularity in its usage of filters and weighers. Our objective is thus 
very
similar.

Our approach is a little different. Since flavors are choices of clients, and
aggregates are selected during host selection (which comes after filters), we
choose to separate the policies from flavors and aggregates and put them into
a Policy Repository (a database or a simple file). The Policy-based Scheduler
then looks at the Repository first to know which policy applies to which
target (aggregates, tenants, etc.).
It is an extensible architecture: it allows customizing policies and plugging
in other solutions easily. The policy may be as simple as applying, like in
your proposal, a filter (policy -> (filter + aggregate)), a weigher, a
combination of them, or a completely new driver, say a new scheduling
solution.
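A toy version of the repository lookup described above, assuming a policy
record simply names the filters and weighers to apply for a target (all
identifiers here are hypothetical):

```python
# Hypothetical policy repository: (target type, target id) -> policy.
POLICY_REPO = {
    ('aggregate', 'ssd-hosts'): {'filters': ['RamFilter', 'DiskFilter'],
                                 'weighers': ['RamWeigher']},
    ('tenant', 'gold-tier'):    {'filters': ['AvailabilityZoneFilter'],
                                 'weighers': ['MetricsWeigher']},
}

# Global policy applied when no local or tenant-specific one matches.
DEFAULT_POLICY = {'filters': ['ComputeFilter'], 'weighers': []}

def lookup_policy(target_type, target_id):
    """Return the policy applying to a target, falling back to the
    global default. Because the repository is data, not code, the
    admin can change placement policy at runtime without restarting
    nova-scheduler."""
    return POLICY_REPO.get((target_type, target_id), DEFAULT_POLICY)
```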

Currently we're working on an implementation of the blueprint which allows
only admins to set up policies, but I also like the idea of letting clients
state their preferences (e.g. preferred availability zone, anti-affinity, a
choice between silver-class or gold-class service). It is a question of
philosophy.

Best regards,

Toan

Global archi:
https://docs.google.com/document/d/1gr4Pb1ErXymxN9QXR4G_jVjLqNOg2ij9oA0JrLwMVRA

-------- Original Message --------
Subject: Re: [openstack-dev] [nova][scheduler] A new blueprint for
Nova-scheduler: Policy-based Scheduler
Date: Wed, 16 Oct 2013 14:38:38 +0300
From: Alex Glikson
Reply-To: OpenStack Development Mailing List

To: OpenStack Development Mailing List


This sounds very similar to
https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers

We worked on it in Havana, learned a lot from feedbacks during the review
cycle, and hopefully will finalize the details at the summit and will be
able to continue & finish the implementation in Icehouse. Would be great
to collaborate.

Regards,
Alex





From:   Khanh-Toan Tran 
To: openstack-dev@lists.openstack.org,
Date:   16/10/2013 01:42 PM
Subject:[openstack-dev] [nova][scheduler] A new blueprint for
Nova-scheduler: Policy-based Scheduler



Dear all,

I've registered a new blueprint for nova-scheduler. The purpose of the
blueprint is to propose a new scheduler that is based on policy:

   https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler

With current Filter_Scheduler, admin cannot change his placement policy
without restarting nova-scheduler. Neither can he define local policy for
a group of resources (say, an aggregate), or a particular client. Thus we
propose this scheduler to provide admin with the capability of
defining/changing his placement policy in runtime. The placement policy
can be global (concerning all resources), local (concerning a group of
resources), or tenant-specific.

Please don't hesitate to contact us for discussion, all your comments are
welcomed!

Best regards,

Khanh-Toan TRAN
Cloudwatt
Email: khanh-toan.tran[at]cloudwatt.com
892 Rue Yves Kermen
92100 BOULOGNE-BILLANCOURT
FRANCE

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's Up Doc? Oct 17th 2013

2013-10-18 Thread Thierry Carrez
Anne Gentle wrote:
> Special release day edition of What's Up Doc -- I want to take a minute
> to recognize this milestone and give a huge thanks to everyone who made
> it possible to release docs today. A sincere thank you, pat on the back,
> and special recognition to the top 10:
> Andreas Jaeger, Diane Fleming, (me), Tom Fifield, Christian
> Berendt, Sean Roberts, Stephen Gordon, Summer Long, Lorin Hochstein,
> and Nermina Miller. Way to rock the docs!

Kudos to the docs team for meeting the impossible deadline against all
odds! Let's all see how we can make it less stressful for you all next
time.

Congrats!

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] debug output truncated

2013-10-18 Thread Thierry Carrez
Snider, Tim wrote:
> I have swift version 1.9.1-dev loaded. Debug output listing the 1st curl 
> command is truncated. Is there any way to get the full command that was 
> issued displayed? Has it been corrected in a later version? 
> [...]

Hey Tim,

This is a development mailing-list where the future of OpenStack is
discussed, support questions on already-released versions are off-topic
and create noise.

You should post to the general mailing-list instead, where more people
should be able to see and reply to your question.

For more details on our mailing-lists:
https://wiki.openstack.org/wiki/Mailing_Lists

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Blueprint for retry function with idempotency in Heat

2013-10-18 Thread Steven Hardy
On Fri, Oct 18, 2013 at 12:13:45PM +1300, Steve Baker wrote:
> On 10/18/2013 01:54 AM, Mitsuru Kanabuchi wrote:
> > Hello Mr. Clint,
> >
> > Thank you for your comment and prioritization.
> > Thank you again for your prioritization.
> >
> >> I took the liberty of targeting your blueprint at icehouse. If you don't
> >> think it is likely to get done in icehouse, please raise that with us at
> >> the weekly meeting if you can and we can remove it from the list.
> > Basically, this blueprint is targeted at the Icehouse release.
> >
> > However, the schedule depends on the following blueprint:
> >   https://blueprints.launchpad.net/nova/+spec/idempotentcy-client-token
> >
> > We're going to start the Heat implementation after ClientToken is
> > implemented. I think ClientToken is a necessary function for this
> > blueprint, and an important function for other callers!
> Can there not be a default retry implementation which deletes any
> ERRORed resource and attempts the operation again? Then specific
> resources can switch to ClientToken as they become available.

Yes, I think this is the way to go - have logic in every resource's
handle_update (which would probably be common with check_create_complete)
that checks the status of the underlying physical resource, and if it's
not in the expected status, we replace it.

This probably needs to be a new flag or API operation, as it clearly has
the possibility to be more destructive than a normal update (may delete
resources which have not changed in the template, but are in a bad state)
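That status-check-and-replace logic could look roughly like this inside a
resource's update path (a sketch against an assumed minimal resource
interface, not Heat's actual plugin API):

```python
def handle_update_with_recovery(resource):
    """Sketch: during an update, verify the physical resource is
    healthy; if it is in ERROR, delete and recreate it rather than
    assuming an unchanged template means there is nothing to do.

    `resource` is assumed to expose the small interface used below.
    Because this path can delete resources whose template definition
    did not change, it should be guarded by an explicit flag or a
    dedicated API operation, as discussed above.
    """
    status = resource.get_physical_status()   # e.g. poll nova/cinder
    if status == 'ERROR':
        resource.delete_physical()
        resource.create_physical()
        return 'REPLACED'
    return 'OK'
```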

> > On Wed, 16 Oct 2013 23:32:22 -0700
> > Clint Byrum  wrote:
> >
> >> Excerpts from Mitsuru Kanabuchi's message of 2013-10-16 04:47:08 -0700:
> >>> Hi all,
> >>>
> >>> We proposed a blueprint that supports an API retry function with
> >>> idempotency for Heat.
> >>> Please review the blueprint.
> >>>
> >>>   
> >>> https://blueprints.launchpad.net/heat/+spec/support-retry-with-idempotency
> >>>
> >> This looks great. It addresses some of what I've struggled with while
> >> thinking of how to handle the retry problem.
> >>
> >> I went ahead and linked bug #1160052 to the blueprint, as it is one that
> >> I've been trying to get a solution for.
> >>
> >> I took the liberty of targeting your blueprint at icehouse. If you don't
> >> think it is likely to get done in icehouse, please raise that with us at
> >> the weekly meeting if you can and we can remove it from the list.
> >>
> >> Note that there is another related blueprint here:
> >>
> >> https://blueprints.launchpad.net/heat/+spec/retry-failed-update
> >>
> >>
> 
> Has any thought been given to where the policy should be specified for
> how many retries to attempt?
> 
> Maybe sensible defaults should be defined in the python resources, and a
> new resource attribute can allow an override in the template on a
> per-resource basis (I'm referring to an attribute at the same level as
> Type, Properties, Metadata)

IMO we don't want to go down the path of retry-loops in Heat, or scheduled
self-healing. We should just allow the user to trigger a stack update from
a failed state (CREATE_FAILED, or UPDATE_FAILED), and then they can define
their own policy on when recovery happens by triggering a stack update.

This is basically what's described for discussion here:
http://summit.openstack.org/cfp/details/95

I personally think the scheduled self-healing is a bad idea, but the
convergence (as a special type of stack update) is a good one.

For automatic recovery, we should instead be looking at triggering things
via Ceilometer alarms, so we can move towards removing all periodic task
stuff from Heat (because it doesn't scale, and it presents major issues
when scaling out)

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Setting host routes in a subnet on Neutron

2013-10-18 Thread Robert Collins
On 18 October 2013 22:18, Dionysis Grigoropoulos  wrote:
> Hello all,
>
> I've started implementing the Neutron API in Python for Synnefo [1], but I've
> hit a bump. Specifically, I'm troubled by the way Neutron seems to handle host
> routes in subnet definitions.
>
> I'm experimenting with the implementation of the API in Neutron from Grizzly,
> with a PackStack-based installation.
>
> For starters, when creating a subnet and setting host routes, it's possible to
> set the nexthop as an IP address outside of the subnet CIDR.
>
> For example, when defining a subnet, I pass the following JSON to Neutron:
>
> {
> "cidr": "192.168.28.0/24",
> "host_routes": [
> {
> "destination": "100.100.100.0/24",
> "nexthop": "8.8.8.8"
> }
> ],
> "ip_version": 4,
> "network_id": "f52b51b6-3749-4306-bc76-97802fb3f48e"
> }
>
> I can set "8.8.8.8" as the gateway for network "100.100.100.0/24",
> although 8.8.8.8 does not belong in the range of the subnet's CIDR
> 192.168.28.0/24.
>
> Is there an obvious usecase for allowing this, that I'm missing at the moment?

It seems unusual but not invalid.

Consider that I could supply two host routes.
One to 8.8.8.8 on something in the host's subnet.
One to another network via 8.8.8.8.

So that should work. Note then that the route to 8.8.8.8 might be
delivered via a dynamic protocol (IS-IS or whatever) and it seems
fairly clear neutron shouldn't reject the route via 8.8.8.8.
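Robert's two-hop example, expressed as the host_routes list a client could
pass (addresses follow the earlier JSON; the /32 route and the gateway address
are illustrative):

```python
# First route makes 8.8.8.8 itself reachable via a gateway that *is* on
# the 192.168.28.0/24 subnet; the second then uses 8.8.8.8 as the
# nexthop for another network. This is why an off-subnet nexthop is
# unusual but not invalid, and neutron shouldn't reject it.
host_routes = [
    {"destination": "8.8.8.8/32",       "nexthop": "192.168.28.1"},
    {"destination": "100.100.100.0/24", "nexthop": "8.8.8.8"},
]
```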

-Rob

> Moreover, it's possible to set the destination CIDR for a host route to
> 0.0.0.0/0. If I understand correctly, this would set a host route for
> 0.0.0.0, effectively changing the default gateway for the system.
> However, a default gateway is *already* defined for the subnet, via the
> "gateway" field. Shouldn't there be some sort of verification that these
> two values coincide, or just disallow setting host routes for 0.0.0.0/0
> anyway?


I've no particular opinion on the 0.0.0.0 question.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Setting host routes in a subnet on Neutron

2013-10-18 Thread Dionysis Grigoropoulos
Hello all,

I've started implementing the Neutron API in Python for Synnefo [1], but I've 
hit a bump. Specifically, I'm troubled by the way Neutron seems to handle host 
routes in subnet definitions.

I'm experimenting with the implementation of the API in Neutron from Grizzly, 
with a PackStack-based installation.

For starters, when creating a subnet and setting host routes, it's possible to 
set the nexthop as an IP address outside of the subnet CIDR. 

For example, when defining a subnet, I pass the following JSON to Neutron:

{
"cidr": "192.168.28.0/24", 
"host_routes": [
{
"destination": "100.100.100.0/24", 
"nexthop": "8.8.8.8"
}
], 
"ip_version": 4, 
"network_id": "f52b51b6-3749-4306-bc76-97802fb3f48e"
}

I can set "8.8.8.8" as the gateway for network "100.100.100.0/24",
although 8.8.8.8 does not belong in the range of the subnet's CIDR
192.168.28.0/24.

Is there an obvious usecase for allowing this, that I'm missing at the moment?

Moreover, it's possible to set the destination CIDR for a host route to
0.0.0.0/0. If I understand correctly, this would set a host route for
0.0.0.0, effectively changing the default gateway for the system.
However, a default gateway is *already* defined for the subnet, via the
"gateway" field. Shouldn't there be some sort of verification that these
two values coincide, or just disallow setting host routes for 0.0.0.0/0
anyway?

Thanks in advance,
Dionysis Grigoropoulos

[1] https://www.synnefo.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Building on Debian: Havana unit tests at build time report

2013-10-18 Thread Thomas Goirand
On 10/18/2013 02:06 AM, Clint Byrum wrote:
> A link to instructions on setting up a wheezy box for this testing would
> be helpful.

Just install a minimal Wheezy machine, add my repositories (using the
Jenkins one at "*.pkgs.enovance.com"), then do "apt-get install
openstack-toaster". I'll be trying to provide a new preseed script so
that just running the script will be enough. I have that already, though
it needs to be updated for Havana. I'll post instructions when it is ready.

>> No unit test failure / error (in both Sid and Wheezy). However, Heat
>> doesn't include a -P option in ./run_tests.sh, and insists on running
>> the PEP8 checks, which fails because Sid has pyflakes 0.7.3, and Heat
>> wants 0.7.2:
>>
> 
> run_tests.sh is there for _your_ convenience. And by you I mean
> packagers and others who want to run the tests with system python.

And it's been very helpful, though there's room for improvement.

> IMO you'd be better off adding a --no-venv option to tox to run the
> prescribed commands with system python, or even just parsing tox.ini to
> do that yourself. You'll find that the commands in tox.ini are better
> maintained, since they are gated.
> 
> Anyway, I'd gladly accept a patch to add -P.

I wonder: are the different run_tests.sh scripts maintained in Oslo? It
feels strange that we have different options for different OpenStack
projects (like the -P and -N options missing sometimes).

On the nit-picking side, an option to disable color output would
be great (it's ugly to read [32mOK  0.03 ...). Different
projects have different behaviors in this regard (for example, Glance
always displays in colors, while Cinder doesn't).

>> Also, heat uses /usr/bin/coverage. Please switch to python -m coverage
>> (IMO, the best way...), or check if /usr/bin/coverage and if not, use
>> /usr/bin/python-coverage.
> 
> I appreciate this message, however, the bug tracker is over there ->
> https://bugs.launchpad.net/heat/+filebug

ACK: https://bugs.launchpad.net/heat/+bug/1241330

Thomas


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Data Processing ("Savanna") 0.3 release

2013-10-18 Thread Sergey Lukjanov
Hi everyone,

I’m glad to announce the 0.3 release of OpenStack Data Processing (“Savanna”). 
It’s targeted to work with the Havana release of OpenStack. There are a lot of 
fixed bugs and implemented blueprints, including EDP, Neutron support, the HDP 
plugin, etc.; more info at Launchpad [1].

Release notes with the list of key features: 
https://wiki.openstack.org/wiki/Savanna/ReleaseNotes/0.3

The next major development cycle is aligned with OpenStack's, so it'll be a 
6-month dev cycle, codenamed “Icehouse”. Scope, technical details and all 
other things for it will be discussed at the OpenStack Summit, Nov 5-8 in Hong 
Kong [2].

I would like to thank the Savanna team and everyone who helped us make the 0.3 
release possible!

[1] https://launchpad.net/savanna/0.3/0.3
[2] http://www.openstack.org/summit/openstack-summit-hong-kong-2013/

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Technical Committee - October 2013 election results

2013-10-18 Thread Thierry Carrez
Hello everyone,

Voting is complete and the results are in:

The following people are elected to one-year seats:
* Monty Taylor
* Russell Bryant
* Anne Gentle
* Mark McLoughlin
* Doug Hellmann
* Sean Dague

The following people are elected to six-month seats:
* James E. Blair
* Michael Still
* John Griffith
* Mark McClain
* Robert Collins

They join Vish and myself (elected six months ago to one-year seats) to
form the 13-member Technical Committee for the Icehouse cycle.

Detailed results:
http://www.cs.cornell.edu/w8/~andru/cgi-perl/civs/results.pl?id=E_5ef3f04b3c641f3b

More information about this election:
https://wiki.openstack.org/wiki/TC_Elections_Fall_2013

Congratulations to the new members, and thanks to everyone who participated.

-- 
Thierry Carrez (ttx)
Election official

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev