Re: [openstack-dev] We need a new version of hacking for Icehouse, or provide compatibility with oslo.sphinx in oslosphinx

2014-03-21 Thread Thomas Goirand
On 03/22/2014 12:58 AM, Joe Gordon wrote:
> 
> So it sounds like we need:
> 
> * Hacking 0.8.1 to fix the oslo.sphinx -> oslosphinx issue for Icehouse.
> Since we cap hacking versions at 0.9 [1] this will get used in Icehouse.
> * Hacking 0.9 to release all the new hacking goodness. This will be
> targeted for use in Juno.

I agree with this plan.

Thomas




[openstack-dev] [OpenStack-dev][NOVA][VCDriver][live-migration] VCDriver live migration problem

2014-03-21 Thread Jay Lau
Hi,

Currently we cannot do live migration with the VCDriver in nova. Live
migration is a really important feature, so is there any plan to fix this?

I noticed that there is already a bug tracking this, but there seems to have
been no progress since last November: https://bugs.launchpad.net/nova/+bug/1192192

I'm just bringing this problem up to see if there is any plan to fix it.
After some investigation, I think this might deserve to be a blueprint
rather than a bug.

We may need to resolve issues for the following cases:
1) How to live migrate with only one nova compute? (one nova compute can
manage multiple clusters, and there can be multiple hosts in one cluster)
2) Support live migration between clusters
3) Support live migration between resource pools
4) Support live migration between hosts
5) Support live migration between cluster and host
6) Support live migration between cluster and resource pool
7) Support live migration between resource pool and host
8) Possibly more cases.

Please share your comments, and correct me if anything above is not correct.

-- 
Thanks,

Jay


[openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-21 Thread Alejandro Cabrera
+1.

Malini is dedicated to making Marconi and OpenStack a healthier, better
place. I am very happy to see Malini being proposed for core. I trust
that she'll do wonders for the project and will help drive interaction
with the larger OpenStack ecosystem. :)

> -----Original Message-----
> From: Flavio Percoco [mailto:flavio at redhat.com]
> Sent: Friday, March 21, 2014 11:18 AM
> To: openstack-dev at lists.openstack.org
> Subject: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the 
> marconi-core team
>
> Greetings,
>
> I'd like to propose adding Malini Kamalambal to Marconi's core. Malini has
> been an outstanding contributor for a long time. She's taken care of
> Marconi's tests, benchmarks, gate integration, tempest support and many
> other things. She's also actively participated in the mailing list
> discussions, contributed thoughtful reviews and taken part in the
> project's meetings since she first joined the project.
>
> Folks in favor or against please explicitly +1 / -1 the proposal.
>
> Thanks Malini, it's an honor to have you in the team.
>
> --
> @flaper87
> Flavio Percoco


Re: [openstack-dev] [Openstack-docs] What's Up Doc? Mar 21, 2014

2014-03-21 Thread Lana Brindley

On 22 Mar 2014, at 5:18 am, Anne Gentle  wrote:



> 
> 
> On April 16, the APAC OpenStack docs writers will hold a face-to-face 
> meeting. Check out the latest docs team meeting for details or contact Lana 
> Brindley. [4]

Just a quick note that this has moved to 2 April, to avoid a conflict. Thanks :)

L

> 
> [4] 
> http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-03-19-03.01.log.html
> ___
> Openstack-docs mailing list
> openstack-d...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs

Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia





Re: [openstack-dev] [Neutron] Using Python-Neutronclient from Python - docstrings needed?

2014-03-21 Thread Rajdeep Dua
Sean,
If you can point me to the project file in GitHub which needs to be modified,
I will include these docs.

Thanks
Rajdeep



On Sunday, February 9, 2014 9:04 PM, "Collins, Sean" 
 wrote:
 
Do you have plans to submit these back upstream? It would be a great first 
step; perhaps we could add these as examples underneath the JSON 
request/response in http://api.openstack.org/api-ref-networking.html



Sean M. Collins



 
From: Rajdeep Dua [dua_rajd...@yahoo.com]
Sent: Saturday, February 08, 2014 11:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Using Python-Neutronclient from Python - 
docstrings needed?


Sean,
We have written a few docs for writing these samples

http://python-api-guide.cfapps.io/content/neutron.html


You can find the source here: https://github.com/rajdeepd/openstack-samples

Thanks
Rajdeep



On Sunday, February 9, 2014 12:57 AM, "Collins, Sean" 
 wrote:

Hi,

I was writing a small script yesterday to parse a list of IP blocks and
create security groups and rules, by using python-neutronclient.

To be honest, it was very difficult - even though I have actually
written extensions to Python-Neutronclient for the QoS API. 

For those that are trying to use the client from inside their code,
they end up getting zero help as to how to actually call any of the
functions, and what parameters they take. 


    >>> neutron = client.Client('2.0', auth_url=os.environ['OS_AUTH_URL'],
    ...                            tenant_id=os.environ['OS_TENANT_ID'],
    ...                            username=os.environ['OS_USERNAME'],
    ...                            password=os.environ['OS_PASSWORD'])
    >>> help(neutron)

  |  create_credential = 
  |  
  |  create_firewall = 
  |  
  |  create_firewall_policy = 
  |  
  |  create_firewall_rule = 
  |  
  |  create_floatingip = 
  |  
  |  create_health_monitor = 
  |  
  |  create_ikepolicy = 
  |  
  |  create_ipsec_site_connection = 
  |  
  |  create_ipsecpolicy = 
  |  
  |  create_member = 
  |  
  |  create_metering_label = 


Since there was nothing there, I decided to go check the source of
python-neutronclient and see if there are any examples.

https://github.com/openstack/python-neutronclient/blob/master/doc/source/index.rst

If you read closely enough, you'll find out that each function takes a
dictionary that looks very similar to the request/response examples
listed in the API documentation. So, I went over and checked it out.

http://docs.openstack.org/api/openstack-network/2.0/content/POST_security-groups-v2.0_createSecGroup_v2.0_security-groups_security-groups-ext.html

From there, I was able to remember that each of these functions takes a
single argument - a dictionary that mimics the same structure you see in
the API documentation - and then it was just some experimentation to get
the structure right.
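
To make the pattern concrete, a minimal sketch of the security group case
could look something like the following (illustrative only - the field names
are whatever the API reference lists, and the credentials mirror the
interactive session above):

    import os

    from neutronclient.v2_0 import client

    # Build the client with the same credentials as the session above.
    neutron = client.Client(username=os.environ['OS_USERNAME'],
                            password=os.environ['OS_PASSWORD'],
                            tenant_id=os.environ['OS_TENANT_ID'],
                            auth_url=os.environ['OS_AUTH_URL'])

    # Each create_* call takes a single dict whose structure mirrors the
    # JSON request body shown in the API reference.
    sg_body = {'security_group': {'name': 'web',
                                  'description': 'Allow inbound HTTP'}}
    sg = neutron.create_security_group(sg_body)
    sg_id = sg['security_group']['id']

    rule_body = {'security_group_rule': {'security_group_id': sg_id,
                                         'direction': 'ingress',
                                         'protocol': 'tcp',
                                         'port_range_min': 80,
                                         'port_range_max': 80,
                                         'remote_ip_prefix': '0.0.0.0/0'}}
    neutron.create_security_group_rule(rule_body)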

Honestly it wasn't easy to remember all this stuff, since it had been a
couple of months since I had worked with python-neutronclient, and that
work had been from inside the library itself.

This was my first experience using it "on the outside" and it was pretty
tough - so I'm going to try and look into how we can improve the
docstrings for the client object, to make it a bit easier to figure out.

Thoughts?

-- 
Sean M. Collins


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-21 Thread Steve Gordon
- Original Message -
> We recently discussed the idea of using gerrit to review blueprint
> specifications [1].  There was a lot of support for the idea so we have
> proceeded with putting this together before the start of the Juno
> development cycle.
> 
> We now have a new project set up, openstack/nova-specs.  You submit
> changes to it just like any other project in gerrit.  Find the README
> and a template for specifications here:
> 
>   http://git.openstack.org/cgit/openstack/nova-specs/tree/README.rst
> 
>   http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst

Adding the documentation team - the above is the template for nova blueprints 
under the new process. At the time of writing, the documentation impact section 
reads:

"""
Documentation Impact


What is the impact on the docs team of this change? Some changes might require
donating resources to the docs team to have the documentation updated. Don't
repeat details discussed above, but please reference them here.
"""

Under the current procedure, documentation impact is only really directly 
addressed when the code itself is committed: the DocImpact tag in the 
commit message causes a documentation bug to be raised via automation. The 
above addition to the blueprint template offers a good opportunity to start 
thinking about the documentation impact, and articulating it, much earlier in 
the process*.

I'm wondering if we shouldn't provide some more guidance on what a good 
documentation impact assessment would look like, though. I know Anne previously 
articulated some thoughts on this here:

http://justwriteclick.com/2013/09/17/openstack-docimpact-flag-walk-through/

TL;DR:

* Who would use the feature?
* Why use the feature?
* What is the exact usage for the feature?
* Does the feature also have permissions/policies attached? 
* If it is a configuration option, which flag grouping should it go into? 

Do these questions, or some approximation of them, belong in the template? Or 
can we do better? Interested in your thoughts :). On a separate note, a 
specific type of documentation I have often bemoaned not having a Launchpad 
field for is the release note. Is this something separate, or does it belong in 
documentation impact? A good release note answers most if not all of the above 
questions but is also short and concise.

Thanks,

Steve



[openstack-dev] Jenkins test logs and their retention period

2014-03-21 Thread Clark Boylan
Hello everyone,

Back at the Portland summit the Infra team committed to archiving six months
of test logs for OpenStack. Since then we have managed to do just that.
However, more recently the growth rate of those logs has continued to climb
beyond what is a currently sustainable level.

For reasons, we currently store logs on a filesystem backed by cinder
volumes. Rackspace limits the size and number of volumes attached to a
single host meaning the upper bound on the log archive filesystem is ~12TB
and we are almost there. You can see real numbers and pretty graphs on our
cacti server [0].

Long term we are trying to move to putting all of the logs in swift, but it
turns out there are some use case issues we need to sort out around that
before we can do so (but this is being worked on so should happen). Until
that day arrives we need to work on logging more smartly, and if we can't do
that we will have to reduce the log retention period.

So what can you do? Well, it appears that our log files may need a diet. I
have listed the worst offenders below (after a small sampling; there may be
more) and it would be great if we could go through them with a fine-toothed
comb and figure out whether we are logging actually useful data. The great
thing about doing this is that it will make life better for deployers of
OpenStack too.

Some initial checking indicates a lot of this noise may be related to
ceilometer. It looks like it is logging AMQP stuff frequently and inflating
the logs of individual services as it polls them.
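
(As a stopgap, the chattiest third-party loggers can simply be raised to
WARNING - a minimal sketch in plain Python logging is below; oslo's
default_log_levels config option expresses the same idea for the services:)

    import logging

    # Quiet the noisiest libraries while the services keep logging at DEBUG.
    for noisy in ('amqp', 'amqplib', 'qpid.messaging', 'sqlalchemy.engine'):
        logging.getLogger(noisy).setLevel(logging.WARNING)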

Offending files from tempest tests:
screen-n-cond.txt.gz 7.3M
screen-ceilometer-collector.txt.gz 6.0M
screen-n-api.txt.gz 3.7M
screen-n-cpu.txt.gz 3.6M
tempest.txt.gz 2.7M
screen-ceilometer-anotification.txt.gz 1.9M
subunit_log.txt.gz 1.5M
screen-g-api.txt.gz 1.4M
screen-ceilometer-acentral.txt.gz 1.4M
screen-n-net.txt.gz 1.4M
from: 
http://logs.openstack.org/52/81252/2/gate/gate-tempest-dsvm-full/488bc4e/logs/?C=S;O=D

Unittest offenders:
Nova subunit_log.txt.gz 14M
Neutron subunit_log.txt.gz 7.8M
Keystone subunit_log.txt.gz 4.8M

Note all of the above files are compressed with gzip -9 and the filesizes
above reflect compressed file sizes.

Debug logs are important to you guys when dealing with Jenkins results. We
want your feedback on how we can make this better for everyone.

[0] 
http://cacti.openstack.org/cacti/graph.php?action=view&local_graph_id=717&rra_id=all

Thank you,
Clark Boylan



Re: [openstack-dev] We need a new version of hacking for Icehouse, or provide compatibility with oslo.sphinx in oslosphinx

2014-03-21 Thread Joe Gordon
On Fri, Mar 21, 2014 at 1:01 PM, Sergey Lukjanov wrote:

> ++ for having 0.8.1 with oslosphinx del fixed.
>
>
Hacking 0.8.1 has just been released.

http://git.openstack.org/cgit/openstack-dev/hacking/tag/?id=0.8.1



> P.S. The 0.9.0 release will find some style issues in mostly all
> projects I think, so, it's better to release it after Icehouse release
> or at least RC1.
>
> On Fri, Mar 21, 2014 at 8:58 PM, Joe Gordon  wrote:
> >
> >
> >
> > On Fri, Mar 21, 2014 at 8:36 AM, Doug Hellmann <
> doug.hellm...@dreamhost.com>
> > wrote:
> >>
> >> There is quite a list of un-released changes to hacking:
> >>
> >> * Make H202 check honor pep8 #noqa comment
> >> * Updated from global requirements
> >> * Updated from global requirements
> >> * Switch over to oslosphinx
> >> * HACKING.rst: Fix odd indentation in an example code
> >> * Remove tox locale overrides
> >> * Updated from global requirements
> >> * Clarify H403 message
> >> * More portable way to detect modules for H302
> >> * Fix python 3 incompatibility in _get_import_type
> >> * Trigger warnings for raw and unicode docstrings
> >> * Enhance H233 rule
> >> * Add check for removed modules in Python 3
> >> * Add Python3 deprecated assert* to HACKING.rst
> >> * Turn Python3 section into a list
> >> * Re-Add section on assertRaises(Exception
> >> * Cleanup HACKING.rst
> >> * Move hacking guide to root directory
> >> * Fix typo in package summary
> >> * Add H904: don't wrap lines using a backslash
> >> * checking for metaclass to be Python 3.x compatible
> >> * Remove unnecessary headers
> >> * Add -U to pip install command in tox.ini
> >> * Fix typos of comment in module core
> >> * Updated from global requirements
> >> * Add a check for file with only comments
> >> * Enforce grouping like imports together
> >> * Add noqa support for H201 (bare except)
> >> * Enforce import grouping
> >> * Clean up how test env variables are parsed
> >> * Fix the escape character
> >> * Remove vim modeline sample
> >> * Add a check for newline after docstring summary
> >>
> >> It looks like it might be time for a new release anyway, especially if
> it
> >> resolves the packaging issue you describe.
> >
> >
> >
> > I think two new releases are needed. I have been holding off cutting the
> > next hacking release until we are closer to Juno. Since the next release
> > will include new rules I didn't want to distract anyone from focusing on
> > stabilizing Icehouse.
> >
> > So it sounds like we need:
> >
> > * Hacking 0.8.1 to fix the oslo.sphinx -> oslosphinx issue for Icehouse.
> > Since we cap hacking versions at 0.9 [1] this will get used in Icehouse.
> > * Hacking 0.9 to release all the new hacking goodness. This will be
> targeted
> > for use in Juno.
> >
> > [1] https://review.openstack.org/#/c/81356/
> >
> >
> > If this sounds good, I will cut 0.8.1 this afternoon.
> >
> >>
> >> As far as the symlink, I think that's a potentially bad idea. It's only
> >> going to encourage the continued use of oslo.sphinx. Since the package
> is
> >> only needed to build the documentation, and not to actually use the
> tool, I
> >> don't think we need the symlink in place, do we?
> >>
> >> Doug
> >>
> >>
> >> On Fri, Mar 21, 2014 at 6:17 AM, Thomas Goirand 
> wrote:
> >>>
> >>> Hi,
> >>>
> >>> The current version of python-hacking wants python-oslo.sphinx, but
> >>> we're moving to python-oslosphinx. In Debian, I made python-oslo.sphinx
> >>> as a transitional empty package that only depends on python-oslosphinx.
> >>> As a consequence, python-hacking needs to be updated to use
> >>> python-oslosphinx, otherwise it won't have available build-dependencies.
> >>>
> >
> >
> > Thank you for bringing this to our attention, I wonder how we can detect
> > this in our CI system in the future to prevent this.
> >
> >>>
> >>> I was also thinking about providing a symlink from oslo/sphinx to
> >>> oslosphinx. Maybe it'd be nice to have this directly in oslosphinx?
> >>>
> >>> Thoughts anyone?
> >>>
> >>> Cheers,
> >>>
> >>> Thomas
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-21 Thread Stan Lagun
Zane,

I appreciate your explanations on Heat/HOT. This really makes sense.
I didn't mean to say that MuranoPL is better for Heat. Actually HOT is good
for Heat's mission. I completely acknowledge it.
I've tried to avoid comparing the languages, and I'm sorry if it felt that
way. That would not be productive, as I'm not proposing to replace HOT with
MuranoPL (although I believe that certain elements of MuranoPL syntax could
be contributed to HOT and would be a valuable addition there). Also, people
tend to protect what they have developed and invested in, and to be fair that
is what we did in this thread to a great extent.

What I'm trying to achieve is that you and the rest of the Heat team
understand why it was designed the way it is. I don't feel that Murano can
become a full-fledged member of the OpenStack ecosystem without a blessing
from the Heat team. And it would be even better if we agreed on a common
design, joined our efforts and contributed to each other for the sake of the
Orchestration program.

I'm sorry for the long emails written in not-so-good English, and I appreciate
your patience in reading and answering them.

Having said that, let me step back and explain our design decisions.

Cloud administrators are usually technical people who are capable of
learning HOT and writing YAML templates. They know the exact configuration of
their cloud (what services are available, what version of OpenStack the cloud
is running) and generally understand how OpenStack works. They also know the
software they intend to install. If such an administrator wants to install
Drupal, he knows exactly that he needs a HOT template describing a Fedora VM
with Apache + PHP + MySQL + Drupal itself. It is not a problem for him to
write such a HOT template.

Note that such a template would be designed for one very particular
configuration. There are hundreds of combinations that could be used to
install that Drupal - RHEL/Windows/etc. instead of Fedora, nginx/IIS/etc.
instead of Apache, FastCGI instead of mod_php, PostgreSQL instead of MySQL.
You may choose to have all the software on a single VM, or one VM for the
database and another for Drupal. There are also constraints on those
combinations. For example, you cannot have Fedora + IIS on the same VM. You
cannot have Apache and Drupal on different VMs.

So a HOT template represents a fixed combination of those software
components. HOT may have input parameters like "username" or "dbImageName",
but the overall structure of the template is fixed. You cannot have a
template that chooses whether to use Windows or Linux based on a parameter
value. You cannot write a HOT template that accepts the number of instances
it is allowed to create and then decides what to install on each of them.
This is simply not needed by Heat users.

With Murano the picture is the opposite. A typical Murano user is someone
who bought an account from a cloud hosting vendor (cloud operator) and wants
to run some software in the cloud. He may not even be aware that it is
OpenStack. He knows nothing about programming in general or Heat in
particular. He doesn't want to write YAML. He may not know exactly how
Drupal is installed or what components it consists of.

So what he does is go to his cloud (Murano) dashboard, browse through the
application catalog, find Drupal and drag it onto his environment board
(think of a Visio-style designer). He can stop at this point, click the
"deploy" button, and the system will deploy Drupal. In other words, the
system (or, better to say, the cloud operator or application developer)
decides what set of components is going to be installed (say, one Fedora VM
for MySQL and one CentOS VM for Apache-PHP-Drupal). But the user may decide
he wants to customize his environment. He digs down and sees that Drupal
requires a database instance and that the default is MySQL. He clicks a
button to see what other options are available for that role.

In Heat, the HOT developer is the user. But in Murano these are completely
different roles. There are developers who write application definitions
(that is, DSL code) and there are end users who compose environments from
those applications (components). Application developers may have nothing to
do with the particular cloud their application is deployed on. For the
Drupal application, the developer knows that Drupal can be run with MySQL or
PostgreSQL. But there may be many compatible implementations of those
DBMSes - Galera MySQL, TroveMySQL, MMM MySQL, etc. So to get the list of
components that can be placed in the database role, Murano needs to look at
all applications in the Application Catalog and find which of them are
compatible with MySQL or PostgreSQL, so that the user can choose the
implementation that better suits his needs (trading performance for
high availability, etc.).

The user can go deeper and decide that he wants that MySQL instance (which
can be one or more VMs depending on the implementation) to be shared between
Drupal and another application in that environment (say WordPress). He can
go even deeper, to the VM level, and decide that he wants to have WordPress,

[openstack-dev] requirements repository core reviewer updates

2014-03-21 Thread Doug Hellmann
In the last project meeting, we discussed updating the list of core
reviewers on the global requirements project. The review stats for the last
90 days on the project show that several current core reviewers haven't
been active, so as a first step before adding new cores I propose that we
make sure everyone who is currently core is still interested in
participating.

The current list of cores is visible in gerrit:
https://review.openstack.org/#/admin/groups/131,members

I generated a set of review stats for the last 90 days using
git://git.openstack.org/openstack-infra/reviewstats and posted the results at
http://paste.openstack.org/show/74046/.

We had a few reviewers with 0-1 reviews in the last 90 days:

Dan Prince
Dave Walker
Gabriel Hurley
Joe Heck
Eric Windisch

If any of you wish to remain on the core reviewer list during Juno, speak
up. Otherwise we'll purge the list around the time of the dependency freeze
(Thierry, let me know if you had different timing in mind for that).

Doug


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-21 Thread John Griffith
On Mon, Mar 17, 2014 at 9:32 PM, Zhangleiqiang (Trump) <
zhangleiqi...@huawei.com> wrote:

> > From: Vishvananda Ishaya [mailto:vishvana...@gmail.com]
> > Sent: Tuesday, March 18, 2014 2:28 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [nova][cinder] non-persistent storage(after
> > stopping VM, data will be rollback automatically), do you think we shoud
> > introduce this feature?
> >
> >
> > On Mar 17, 2014, at 4:34 AM, Yuzhou (C)  wrote:
> >
> > > Hi Duncan Thomas,
> > >
> > > Maybe the statement about approval process is not very exact. In
> fact in
> > my mail, I mean:
> > > In the enterprise private cloud, if beyond the quota, you want to
> create a new
> > VM ,that needs to wait for approval process.
> > >
> > >
> > > @stackers,
> > >
> > > I think the following two use cases show why non-persistent disk is
> useful:
> > >
> > > 1.Non-persistent VDI:
> > > When users access a non-persistent desktop, none of their settings
> or
> > data is saved once they log out. At the end of a session,
> > > the desktop reverts back to its original state and the user
> receives a fresh
> > image the next time he logs in.
> > > 1). Image manageability, Since non-persistent desktops are built
> from a
> > master image, it's easier for administrators to patch and update the
> image,
> > back it up quickly and deploy company-wide applications to all end users.
> > > 2). Greater security, Users can't alter desktop settings or
> install their own
> > applications, making the image more secure.
> > > 3). Less storage.
> > >
> > > 2.As the use case mentioned several days ago by zhangleiqiang:
> > >
> > > "Let's take a virtual machine which hosts a web service, but it is
> primarily
> > a read-only web site with content that rarely changes. This VM has three
> disks.
> > Disk 1 contains the Guest OS and web application (e.g.Apache).
> Disk 2
> > contains the web pages for the web site. Disk 3 contains all the logging
> activity.
> > > In this case, disk 1 (OS & app) are dependent (default)
> settings and
> > is backed up nightly. Disk 2 is independent non-persistent (not backed
> up, and
> > any changes to these pages will be discarded). Disk 3 is  independent
> > persistent (not backed up, but any changes are persisted to the disk).
> > > If updates are needed to the web site's pages, disk 2 must be
> > taken out of independent non-persistent mode temporarily to allow the
> > changes to be made.
> > > Now let's say that this site gets hacked, and the pages are
> > doctored with something which is not very nice. A simple reboot of this
> host will
> > discard the changes made to the web pages on disk 2, but will persist
>   the
> > logs on disk 3 so that a root cause analysis can be carried out."
> > >
> > > Hope to get more suggestions about non-persistent disk!
> >
> >
> > Making the disk rollback on reboot seems like an unexpected side-effect
> we
> > should avoid. Rolling back the system to a known state is a useful
> feature, but
> > this should be an explicit api command, not a side-effect of rebooting
> the
> > machine, IMHO.
>
> I think there is some misunderstanding about the non-persistent disk: the
> non-persistent disk will only roll back if the instance is shut down and
> started again, and will persist the data if it is soft-rebooted.
>

I think your intent is understood here; however, I have to agree with others
that it's a use case that really is already provided for, and in fact is
pretty much the nature of an elastic cloud to begin with.

I also want to highlight the comment by Vish about the confusion and
unhappy users we'll have if we suddenly change the behavior of reboot.
Certainly this could be an option, but IMHO just because you *can* create
an option in an API doesn't always mean that you should.

I feel that we already provide the necessary steps to do what you're asking
here, and if a provider does in fact restrict things like creating instances,
then, just as others said, that is sort of against the whole point of having
a cloud in the first place. This might be a great thing for them to implement
as their own custom extension, but it doesn't seem to fit with the existing
core project IMO.


> Non-persistent disks do have use cases. Using an explicit API command can
> achieve this, but I think there is some work that needs to be done before
> booting the instance or after shutting it down, including:
> 1. For a cinder volume, create a snapshot; for the libvirt ephemeral image
> backend, create a new image
> 2. Update the attached volume info for the instance
> 3. Delete the cinder snapshot or libvirt ephemeral image, and update the
> volume/image info for the instance again
>
> Should this work be done by users manually, or by some "upper system"? Or
> should non-persistence be set as a metadata/property of the volume/image
> and handled by Nova?
>
>
>
> > Vish
> >
> > >
> > > Thanks.
> > >
> > > Zhou 

Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-21 Thread Doug Hellmann
On Fri, Mar 21, 2014 at 5:13 PM, Joe Gordon  wrote:

>
>
>
> On Fri, Mar 21, 2014 at 8:58 AM, Doug Hellmann <
> doug.hellm...@dreamhost.com> wrote:
>
>>
>>
>>
>> On Fri, Mar 21, 2014 at 7:04 AM, Sean Dague  wrote:
>>
>>> On 03/20/2014 06:18 PM, Joe Gordon wrote:
>>> >
>>> >
>>> >
>>> > On Thu, Mar 20, 2014 at 3:03 PM, Alexei Kornienko
>>> > mailto:alexei.kornie...@gmail.com>>
>>> wrote:
>>> >
>>> > Hello,
>>> >
>>> > We've done some profiling and results are quite interesting:
>>> > during 1,5 hour ceilometer inserted 59755 events (59755 calls to
>>> > record_metering_data)
>>> > this calls resulted in total 2591573 SQL queries.
>>> >
>>> > And the most interesting part is that 291569 queries were ROLLBACK
>>> > queries.
>>> > We do around 5 rollbacks to record a single event!
>>> >
>>> > I guess it means that MySQL backend is currently totally unusable
>>> in
>>> > production environment.
>>> >
>>> >
>>> > It should be noticed that SQLAlchemy is horrible for performance, in
>>> > nova we usually see sqlalchemy overheads of well over 10x (time
>>> > nova.db.api call vs the time MySQL measures when slow log is recording
>>> > everything).
>>>
>>> That's not really a fair assessment. Python object inflation takes time.
>>> I do get that there is SQLA overhead here, but even if you trimmed it
>>> out you would not get the the mysql query time.
>>>
>>> That being said, having Ceilometer's write path be highly tuned and not
>>> use SQLA (and written for every back end natively) is probably
>>> appropriate.
>>>
>>
>> I have been working to get Mike Bayer (author of SQLAlchemy) to the
>> summit in Atlanta. He is interested in working with us to improve
>> SQLAlchemy, so if we have specific performance or feature issues like this,
>> it would be good to make a list. If we have enough, maybe we can set aside
>> a session in the Oslo track, otherwise we can at least have some hallway
>> conversations.
>>
>
>
> That would be really amazing. Is he on IRC, so we can get the ball rolling?
>

I'll ask him to join #openstack-dev if he is.

Doug



>
>
>>
>> Doug
>>
>>
>>
>>>
>>> -Sean
>>>
>>> --
>>> Sean Dague
>>> Samsung Research America
>>> s...@dague.net / sean.da...@samsung.com
>>> http://dague.net
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-21 Thread Alexei Kornienko

Hello,

Please see some comments inline.

Best Regards,
Alexei Kornienko

On 03/21/2014 11:11 PM, Joe Gordon wrote:




On Fri, Mar 21, 2014 at 4:04 AM, Sean Dague > wrote:


On 03/20/2014 06:18 PM, Joe Gordon wrote:
>
>
>
> On Thu, Mar 20, 2014 at 3:03 PM, Alexei Kornienko
> mailto:alexei.kornie...@gmail.com>
>> wrote:
>
> Hello,
>
> We've done some profiling and results are quite interesting:
> during 1,5 hour ceilometer inserted 59755 events (59755 calls to
> record_metering_data)
> this calls resulted in total 2591573 SQL queries.
>
> And the most interesting part is that 291569 queries were
ROLLBACK
> queries.
> We do around 5 rollbacks to record a single event!
>
> I guess it means that MySQL backend is currently totally
unusable in
> production environment.
>
>
> It should be noticed that SQLAlchemy is horrible for performance, in
> nova we usually see sqlalchemy overheads of well over 10x (time
> nova.db.api call vs the time MySQL measures when slow log is
recording
> everything).

That's not really a fair assessment. Python object inflation takes
time.
I do get that there is SQLA overhead here, but even if you trimmed it
out you would not get the the mysql query time.


To give an example from nova:

doing a nova list with no servers:

stack@devstack:~/devstack$ nova --timing list

| GET 
http://10.0.0.16:8774/v2/a82ededa9a934b93a7184d06f302d745/servers/detail 
| 0.0817470550537 |


So nova command takes 0.0817470550537 seconds.

Inside the nova logs (when putting a timer around all nova.db.api 
calls [1] ), nova.db.api.instance_get_all_by_filters takes 0.06 seconds:


2014-03-21 20:58:46.760 DEBUG nova.db.api 
[req-91879f86-7665-4943-8953-41c92c42c030 demo demo] 
'instance_get_all_by_filters' 0.06 seconds timed 
/mnt/stack/nova/nova/db/api.py:1940


But the sql slow log reports the same query takes only 0.001006 
seconds with a lock_time of 0.000269 for a total of 0.00127 seconds.


# Query_time: 0.001006  Lock_time: 0.000269 Rows_sent: 0 
 Rows_examined: 0



So in this case only 2% of the time 
that nova.db.api.instance_get_all_by_filters takes is spent inside of 
mysql. Or to put it differently, 
nova.db.api.instance_get_all_by_filters is 47 times slower than the 
raw DB call underneath.


Yes I agree that turning raw sql data into python objects should 
take time, but I just don't think it should take 98% of the time.
If you open the actual code of nova.db.api.instance_get_all_by_filters -
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L1817
- you will find that the Python code is actually doing lots of things:
1) setting up join conditions
2) creating query filters
3) doing some heavy matching, with loops in exact_filter, regex_filter and
tag_filter
This code won't go away with python objects since it's related to business
logic. I think that it's quite hypocritical to say that the problem is
"turning raw sql data into python objects".




[1] 
https://github.com/jogo/nova/commit/7743ee366bbf8746f1c0f634f29ebf73bff16ea1


That being said, having Ceilometer's write path be highly tuned
and not
use SQLA (and written for every back end natively) is probably
appropriate.


While I like this idea, they lose free PostgreSQL support by dropping 
SQLA. But that is a solvable problem.



-Sean

--
Sean Dague
Samsung Research America
s...@dague.net  / sean.da...@samsung.com

http://dague.net




Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-21 Thread Joe Gordon
On Fri, Mar 21, 2014 at 8:58 AM, Doug Hellmann
wrote:

>
>
>
> On Fri, Mar 21, 2014 at 7:04 AM, Sean Dague  wrote:
>
>> On 03/20/2014 06:18 PM, Joe Gordon wrote:
>> >
>> >
>> >
>> > On Thu, Mar 20, 2014 at 3:03 PM, Alexei Kornienko
>> > mailto:alexei.kornie...@gmail.com>> wrote:
>> >
>> > Hello,
>> >
>> > We've done some profiling and results are quite interesting:
>> > during 1,5 hour ceilometer inserted 59755 events (59755 calls to
>> > record_metering_data)
>> > this calls resulted in total 2591573 SQL queries.
>> >
>> > And the most interesting part is that 291569 queries were ROLLBACK
>> > queries.
>> > We do around 5 rollbacks to record a single event!
>> >
>> > I guess it means that MySQL backend is currently totally unusable in
>> > production environment.
>> >
>> >
>> > It should be noticed that SQLAlchemy is horrible for performance, in
>> > nova we usually see sqlalchemy overheads of well over 10x (time
>> > nova.db.api call vs the time MySQL measures when slow log is recording
>> > everything).
>>
>> That's not really a fair assessment. Python object inflation takes time.
>> I do get that there is SQLA overhead here, but even if you trimmed it
>> out you would not get the the mysql query time.
>>
>> That being said, having Ceilometer's write path be highly tuned and not
>> use SQLA (and written for every back end natively) is probably
>> appropriate.
>>
>
> I have been working to get Mike Bayer (author of SQLAlchemy) to the summit
> in Atlanta. He is interested in working with us to improve SQLAlchemy, so
> if we have specific performance or feature issues like this, it would be
> good to make a list. If we have enough, maybe we can set aside a session in
> the Oslo track, otherwise we can at least have some hallway conversations.
>


That would be really amazing. Is he on IRC, so we can get the ball rolling?


>
> Doug
>
>
>
>>
>> -Sean
>>
>> --
>> Sean Dague
>> Samsung Research America
>> s...@dague.net / sean.da...@samsung.com
>> http://dague.net
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-21 Thread Joe Gordon
On Fri, Mar 21, 2014 at 4:04 AM, Sean Dague  wrote:

> On 03/20/2014 06:18 PM, Joe Gordon wrote:
> >
> >
> >
> > On Thu, Mar 20, 2014 at 3:03 PM, Alexei Kornienko
> > mailto:alexei.kornie...@gmail.com>> wrote:
> >
> > Hello,
> >
> > We've done some profiling and results are quite interesting:
> > during 1,5 hour ceilometer inserted 59755 events (59755 calls to
> > record_metering_data)
> > this calls resulted in total 2591573 SQL queries.
> >
> > And the most interesting part is that 291569 queries were ROLLBACK
> > queries.
> > We do around 5 rollbacks to record a single event!
> >
> > I guess it means that MySQL backend is currently totally unusable in
> > production environment.
> >
> >
> > It should be noticed that SQLAlchemy is horrible for performance, in
> > nova we usually see sqlalchemy overheads of well over 10x (time
> > nova.db.api call vs the time MySQL measures when slow log is recording
> > everything).
>
> That's not really a fair assessment. Python object inflation takes time.
> I do get that there is SQLA overhead here, but even if you trimmed it
> out you would not get the mysql query time.
>
>
To give an example from nova:

doing a nova list with no servers:

stack@devstack:~/devstack$ nova --timing list

| GET
http://10.0.0.16:8774/v2/a82ededa9a934b93a7184d06f302d745/servers/detail |
0.0817470550537 |

So nova command takes 0.0817470550537 seconds.

Inside the nova logs (when putting a timer around all nova.db.api calls [1]
), nova.db.api.instance_get_all_by_filters takes 0.06 seconds:

2014-03-21 20:58:46.760 DEBUG nova.db.api
[req-91879f86-7665-4943-8953-41c92c42c030 demo demo]
'instance_get_all_by_filters' 0.06 seconds timed
/mnt/stack/nova/nova/db/api.py:1940

But the sql slow log reports the same query takes only 0.001006 seconds
with a lock_time of 0.000269 for a total of 0.00127 seconds.

# Query_time: 0.001006  Lock_time: 0.000269 Rows_sent: 0
 Rows_examined: 0


So in this case only 2% of the time
that nova.db.api.instance_get_all_by_filters takes is spent inside of
mysql. Or to put it differently, nova.db.api.instance_get_all_by_filters is
47 times slower than the raw DB call underneath.

Yes, I agree that turning raw SQL data into Python objects should take
time, but I just don't think it should take 98% of the time.

[1]
https://github.com/jogo/nova/commit/7743ee366bbf8746f1c0f634f29ebf73bff16ea1
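
(For reference, the timer in [1] boils down to wrapping each nova.db.api call
with something like the following - a simplified sketch rather than the actual
patch:)

    import functools
    import time

    def timed(func):
        """Report roughly how long a DB API call takes (sketch of [1])."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                print("'%s' %.2f seconds" % (func.__name__,
                                             time.time() - start))
        return wrapper

    # Applied to a DB API function, every call then reports its wall-clock
    # time, which is what is compared against the MySQL slow log above, e.g.
    # instance_get_all_by_filters = timed(instance_get_all_by_filters)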

> That being said, having Ceilometer's write path be highly tuned and not
> use SQLA (and written for every back end natively) is probably appropriate.
>

While I like this idea, they lose free PostgreSQL support by dropping
SQLA. But that is a solvable problem.


>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-21 Thread David Kranz

On 03/21/2014 02:18 PM, Chris Behrens wrote:


FWIW, I'm fine with any of the options posted. But I'm curious about 
the precedent that reverting would create. It essentially sounds like 
if we release a version with an API bug, the bug is no longer a bug in 
the API and the bug becomes a bug in the documentation. The only way 
to 'fix' the API then would be to rev it. Is that an accurate 
representation and is that desirable? Or do we just say we take these 
on a case-by-case basis?


- Chris
It has to be on a case-by-case basis. Obviously if fixing a security bug 
required an API change we would make it. But as the ecosystem around 
the OpenStack APIs continues to grow, there should be a higher and higher 
bar based on what is gained by the change. In my view, we are well past 
the point where the value of a change like this one would justify 
violating the API stability guidelines.


There was a related discussion about getting more eyes on changes that 
propose to be excepted from the API stability guidelines here 
http://lists.openstack.org/pipermail/openstack-dev/2014-January/023254.html


 -David



On Mar 21, 2014, at 10:34 AM, David Kranz > wrote:



On 03/21/2014 05:04 AM, Christopher Yeoh wrote:

On Thu, 20 Mar 2014 15:45:11 -0700
Dan Smith mailto:d...@danplanet.com>> wrote:

I know that our primary delivery mechanism is releases right now, and
so if we decide to revert before this gets into a release, that's
cool. However, I think we need to be looking at CD as a very important
use-case and I don't want to leave those folks out in the cold.


I don't want to cause issues for the CD people, but perhaps it won't be
too disruptive for them (some direct feedback would be handy). The
initial backwards incompatible change did not result in any bug reports
coming back to us at all. If there were lots of users using it I think
we could have expected some complaints as they would have had to adapt
their programs to no longer manually add the flavor access (otherwise
that would fail). It is of course possible that new programs written in
the meantime would rely on the new behaviour.

I think (please correct me if I'm wrong) the public CD clouds don't
expose that part of API to their users so the fallout could be quite
limited. Some opinions from those who do CD for private clouds would be
very useful. I'll send an email to openstack-operators asking what
people there believe the impact would be but at the moment I'm thinking
that revert is the way we should go.


Could we consider a middle road? What if we made the extension
silently tolerate an add-myself operation to a flavor, (potentially
only) right after create? Yes, that's another change, but it means
that old clients (like horizon) will continue to work, and new
clients (which expect to automatically get access) will continue to
work. We can document in the release notes that we made the change to
match our docs, and that anyone that *depends* on the (admittedly
weird) behavior of the old broken extension, where a user doesn't
retain access to flavors they create, may need to tweak their client
to remove themselves after create.

My concern is that we'd be digging ourselves an even deeper hole with
that approach. That for some reason we don't really understand at the
moment, people have programs which rely on adding flavor access to a
tenant which is already on the access list being rejected rather than
silently accepted. And I'm not sure its the behavior from flavor access
that we actually want.

But we certainly don't want to end up in the situation of trying to
work out how to rollback two backwards incompatible API changes.

Chris
Nope.  IMO we should just accept that an incompatible change was made 
that should not have been, revert it, and move on. I hope that saying 
our code base is going to support CD does not mean that any 
incompatible change that slips through our very limited gate cannot 
be reverted. October was a while back but I'm not sure what principle 
we would use to draw the line. I am also not sure why this is phrased 
as a CD vs. not issue. Are the *users* of a system that happens to be 
managed using CD thought to be more tolerant of their code breaking?


Perhaps it would be a good time to review
https://wiki.openstack.org/wiki/Governance/Approved/APIStability and
the details of https://wiki.openstack.org/wiki/APIChangeGuidelines to
make sure they still reflect the will of the TC and our community.


-David


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-21 Thread Matt Van Winkle
>From: Kyle Mestery 
>mailto:mest...@noironetworks.com>>
>Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>mailto:openstack-dev@lists.openstack.org>>
>Date: Friday, March 21, 2014 2:49 PM
>To: "OpenStack Development Mailing List (not for usage questions)" 
>mailto:openstack-dev@lists.openstack.org>>
>Subject: Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

>This is the part I'm keenly interested in Russell. Once you have some feedback 
>on things here,
>this model of using gerrit for review is something I'd like to see other 
>projects use going forward
>as well.

For what it's worth, as a person who's 99% on the operator side, I used the 
proposed template as my first foray into:


  *   Getting things set up with Geritt
  *   Submitting a change for review
  *   Getting myself set up to be aware of the nova-specs project for future 
Blueprint reviews

All in all, aside from me getting my head wrapped around the commit message 
guidelines, it went pretty smoothly. I've also gained a lot of insight from the 
blueprints already submitted (though to Russell's credit, they are trying to 
push back a little for a few days until the template is good and baked).

I think the overall model has a lot of promise. I'm just waiting on word from 
Russell and company before saying a lot more on the operators' list. From my 
experience, however, it's a great way to lower the barrier to entry for 
"operators" and the like on the "code" side of OpenStack – one of the key 
outputs from the mini summit!

Thanks again for pushing this along!  Let me know when you are ready for a 
bigger deal to be made outside the dev lists.

Thanks!
Matt


Re: [openstack-dev] [Mistral][TaskFlow] Long running actions

2014-03-21 Thread Joshua Harlow
Will advise soon; out sick with a not-so-fun case of poison oak. Will reply next 
week (hopefully) when I'm less incapacitated...

Sent from my really tiny device...

On Mar 21, 2014, at 3:24 AM, "Renat Akhmerov" 
mailto:rakhme...@mirantis.com>> wrote:

Valid concerns. It would be great to get Joshua involved in this discussion. If 
it’s possible to do in TaskFlow he could advise on how exactly.

Renat Akhmerov
@ Mirantis Inc.



On 21 Mar 2014, at 16:23, Stan Lagun 
mailto:sla...@mirantis.com>> wrote:

Don't forget HA issues. Mistral can be restarted at any moment and needs to be 
able to proceed, on another instance, from the place where it was interrupted. In 
theory this can be addressed by TaskFlow, but I'm not sure it can be done without 
a complete redesign of it.


On Fri, Mar 21, 2014 at 8:33 AM, W Chan 
mailto:m4d.co...@gmail.com>> wrote:
Can the long running task be handled by putting the target task in the workflow 
in a persisted state until either an event triggers it or a timeout occurs?  An 
event (human approval or a trigger from an external system) sent to the transport 
will rejuvenate the task.  The timeout is configurable by the end user up to a 
certain time limit set by the Mistral admin.

Based on the TaskFlow examples, it seems like the engine instance managing the 
workflow will be in memory until the flow is completed.  Unless there are other 
options to schedule tasks in TaskFlow, if we have too many of these workflows 
with long running tasks, it seems like it'll become a memory issue for Mistral...


On Thu, Mar 20, 2014 at 3:07 PM, Dmitri Zimine 
mailto:d...@stackstorm.com>> wrote:

For the 'asynchronous manner' discussion see http://tinyurl.com/n3v9lt8; I'm 
still not sure why u would want to make is_sync/is_async a primitive concept in 
a workflow system, shouldn't this be only up to the entity running the workflow 
to decide? Why is a task allowed to be sync/async, that has major side-effects 
for state-persistence, resumption (and to me is a incorrect abstraction to 
provide) and general workflow execution control, I'd be very careful with this 
(which is why I am hesitant to add it without much much more discussion).

Let's remove the confusion caused by "async". All tasks [may] run async from 
the engine standpoint, agreed.

"Long running tasks" - that's it.

Examples: wait_5_days, run_hadoop_job, take_human_input.
The Task doesn't do the job: it delegates to an external system. The flow 
execution needs to wait (5 days passed, hadoop job finished with data x, user 
inputs y), and then continue with the received results.

The requirement is to survive a restart of any WF component without losing the 
state of the long running operation.

Does TaskFlow already have a way to do it? Or ongoing ideas, considerations? If 
yes let's review. Else let's brainstorm together.

I agree,
that has major side-effects for state-persistence, resumption (and to me is a 
incorrect abstraction to provide) and general workflow execution control, I'd 
be very careful with this
But these requirements come from customers' use cases: wait_5_days - lifecycle 
management workflows; long running external systems - Murano requirements; user 
input - workflows for operations automation with control gate checks, provisions 
which require 'approval' steps, etc.

DZ>






--
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com


Re: [openstack-dev] We need a new version of hacking for Icehouse, or provide compatibility with oslo.sphinx in oslosphinx

2014-03-21 Thread Sergey Lukjanov
++ for having 0.8.1 with oslosphinx del fixed.

P.S. The 0.9.0 release will find some style issues in almost all
projects I think, so it's better to release it after the Icehouse release
or at least after RC1.

On Fri, Mar 21, 2014 at 8:58 PM, Joe Gordon  wrote:
>
>
>
> On Fri, Mar 21, 2014 at 8:36 AM, Doug Hellmann 
> wrote:
>>
>> There is quite a list of un-released changes to hacking:
>>
>> * Make H202 check honor pep8 #noqa comment
>> * Updated from global requirements
>> * Updated from global requirements
>> * Switch over to oslosphinx
>> * HACKING.rst: Fix odd indentation in an example code
>> * Remove tox locale overrides
>> * Updated from global requirements
>> * Clarify H403 message
>> * More portable way to detect modules for H302
>> * Fix python 3 incompatibility in _get_import_type
>> * Trigger warnings for raw and unicode docstrings
>> * Enhance H233 rule
>> * Add check for removed modules in Python 3
>> * Add Python3 deprecated assert* to HACKING.rst
>> * Turn Python3 section into a list
>> * Re-Add section on assertRaises(Exception
>> * Cleanup HACKING.rst
>> * Move hacking guide to root directory
>> * Fix typo in package summary
>> * Add H904: don't wrap lines using a backslash
>> * checking for metaclass to be Python 3.x compatible
>> * Remove unnecessary headers
>> * Add -U to pip install command in tox.ini
>> * Fix typos of comment in module core
>> * Updated from global requirements
>> * Add a check for file with only comments
>> * Enforce grouping like imports together
>> * Add noqa support for H201 (bare except)
>> * Enforce import grouping
>> * Clean up how test env variables are parsed
>> * Fix the escape character
>> * Remove vim modeline sample
>> * Add a check for newline after docstring summary
>>
>> It looks like it might be time for a new release anyway, especially if it
>> resolves the packaging issue you describe.
>
>
>
> I think two new releases are needed. I have been holding off cutting the
> next hacking release until we are closer to Juno. Since the next release
> will include new rules I didn't want to distract anyone from focusing on
> stabilizing Icehouse.
>
> So it sounds like we need:
>
> * Hacking 0.8.1 to fix the oslo.sphinx -> oslosphinx issue for Icehouse. Since
> we cap hacking versions at 0.9 [1] this will get used in Icehouse.
> * Hacking 0.9 to release all the new hacking goodness. This will be targeted
> for use in Juno.
>
> [1] https://review.openstack.org/#/c/81356/
>
>
> If this sounds good, I will cut 0.8.1 this afternoon.
>
>>
>> As far as the symlink, I think that's a potentially bad idea. It's only
>> going to encourage the continued use of oslo.sphinx. Since the package is
>> only needed to build the documentation, and not to actually use the tool, I
>> don't think we need the symlink in place, do we?
>>
>> Doug
>>
>>
>> On Fri, Mar 21, 2014 at 6:17 AM, Thomas Goirand  wrote:
>>>
>>> Hi,
>>>
>>> The current version of python-hacking wants python-oslo.sphinx, but
>>> we're moving to python-oslosphinx. In Debian, I made python-oslo.sphinx
>>> as a transitional empty package that only depends on python-oslosphinx. As
>>> a consequence, python-hacking needs to be updated to use
>>> python-oslosphinx, otherwise it won't have available build-dependencies.
>>>
>
>
> Thank you for bringing this to our attention, I wonder how we can detect
> this in our CI system in the future to prevent this.
>
>>>
>>> I was also thinking about providing a symlink from oslo/sphinx to
>>> oslosphinx. Maybe it'd be nice to have this directly in oslosphinx?
>>>
>>> Thoughts anyone?
>>>
>>> Cheers,
>>>
>>> Thomas
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-21 Thread Kyle Mestery
On Fri, Mar 21, 2014 at 2:42 PM, Russell Bryant  wrote:

> On 03/21/2014 03:16 PM, Tim Bell wrote:
> >
> > I am a strong advocate of the Blueprint-on-Blueprints process we
> discussed in the operator mini-summit so that experienced cloud
> administrators can give input before lots of code is written (
> https://etherpad.openstack.org/p/operators-feedback-mar14) but we need to
> be aware that these people with real life experience of the impact of
> changes on production clouds may not be working with gerrit day-to-day.
>
> I certainly understand that folks may not be in gerrit day-to-day.  I
> imagine even fewer are in launchpad day-to-day.  :-)
>
> > There has been some excellent work by the documentation team in the past
> year to make it easy for new contributors to work on improvements to the
> documentation which also helps to introduce the tools/processes to the
> novices.
>
> Do you have any pointers to docs stuff you mentioned that we might be
> able to build on?
>
> > Can we find a way to keep the bar low for the review of blueprints while
> at the same time making sure we engage the full community spectrum in the
> future direction of OpenStack ?
>
> I think we can confidently say that the change to use gerrit for the
> review design specs lowers the bar for participation by everyone
> interested.  It's an actual review system, where before we had no system
> for organizing feedback and iterating on specs.  That's really the key.
>
> Once we have some active reviews going, it will be a bit easier to
> provide clear examples of what the reviews will look like.  We can use
> that to help show how to get involved and provide feedback.
>
> This is the part I'm keenly interested in, Russell. Once you have some
feedback on things here, this model of using gerrit for review is something
I'd like to see other projects use going forward as well. I think each
project has its own system for blueprint review and approval, but if we can
leverage the work you're doing in nova with other projects to keep them as
consistent as possible, all the better.

Thanks,
Kyle


> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-21 Thread Ozgur Akan
+2


On Fri, Mar 21, 2014 at 12:15 PM, Balaji Iyer wrote:

> +1
>
> On 3/21/14, 11:35 AM, "Amit Gandhi"  wrote:
>
> >+1
> >
> >On 3/21/14, 11:17 AM, "Flavio Percoco"  wrote:
> >
> >>Greetings,
> >>
> >>I'd like to propose adding Malini Kamalambal to Marconi's core. Malini
> >>has been an outstanding contributor for a long time. She's taken care
> >>of Marconi's tests, benchmarks, gate integration, tempest support and
> >>way more other things. She's also actively participated in the mailing
> >>list discussions, she's contributed with thoughtful reviews and
> >>participated in the project's meeting since she first joined the
> >>project.
> >>
> >>Folks in favor or against please explicitly +1 / -1 the proposal.
> >>
> >>Thanks Malini, it's an honor to have you in the team.
> >>
> >>--
> >>@flaper87
> >>Flavio Percoco
> >
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-21 Thread Rochelle.RochelleGrober

> From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]

> 
> We are talking about different levels of testing,
> 
> 1. Unit tests - which everybody agrees should be in the individual
> project
> itself
> 2. System Tests - 'System' referring to (& limited to), all the
> components
> that make up the project. These are also the functional tests for the
> project.
> 3. Integration Tests - This is to verify that the OS components
> interact
> well and don't break other components -Keystone being the most obvious
> example. This is where I see getting the maximum mileage out of
> Tempest.
> 
> I see value in projects taking ownership of the System Tests - because
> if
> the project is not 'functionally ready', it is not ready to integrate
> with
> other components of Openstack.
> But for this approach to be successful, projects should have diversity
> in
> the team composition - we need more testers who focus on creating these
> tests.
> This will keep the teams honest in their quality standards.

+1000  I love your approach to this.  You are right.  Functional tests for a 
project (tests that exist in a deployed environment but exercise the intricacies 
of just that project) aren't there for most projects, but really should be.  And 
these tests should be exercised against new code before the code enters the 
gerrit/Jenkins stream. But, as Malini points out, it's at most a dream for most 
projects, because test developers just aren't part of most project teams.


> As long as individual projects cannot guarantee functional test
> coverage,
> we will need more tests in Tempest.
> But that will shift focus away from Integration Testing, which can be
> done
> ONLY in Tempest.

+1  This is also an important point.  If functional testing belonged to the 
projects, then most of these tests would be run before a tempest test was ever 
run and would not need to be part of the integrated tests, except as a subset 
that demonstrates the functioning integration with other projects.

> 
> Regardless of whatever we end up deciding, it will be good to have
> these
> discussions sooner than later.
> This will help at least the new projects to move in the right
> direction.

Maybe a summit topic?  How do we push functional testing into the project level 
development?

--Rocky

> 
> -Malini
> 
> 
> 
> 
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-21 Thread Russell Bryant
On 03/21/2014 03:16 PM, Tim Bell wrote:
> 
> I am a strong advocate of the Blueprint-on-Blueprints process we discussed in 
> the operator mini-summit so that experienced cloud administrators can give 
> input before lots of code is written 
> (https://etherpad.openstack.org/p/operators-feedback-mar14) but we need to be 
> aware that these people with real life experience of the impact of changes on 
> production clouds may not be working with gerrit day-to-day. 

I certainly understand that folks may not be in gerrit day-to-day.  I
imagine even fewer are in launchpad day-to-day.  :-)

> There has been some excellent work by the documentation team in the past year 
> to make it easy for new contributors to work on improvements to the 
> documentation which also helps to introduce the tools/processes to the 
> novices.

Do you have any pointers to docs stuff you mentioned that we might be
able to build on?

> Can we find a way to keep the bar low for the review of blueprints while at 
> the same time making sure we engage the full community spectrum in the future 
> direction of OpenStack ?

I think we can confidently say that the change to use gerrit for the
review design specs lowers the bar for participation by everyone
interested.  It's an actual review system, where before we had no system
for organizing feedback and iterating on specs.  That's really the key.

Once we have some active reviews going, it will be a bit easier to
provide clear examples of what the reviews will look like.  We can use
that to help show how to get involved and provide feedback.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-21 Thread Joe Gordon
On Fri, Mar 21, 2014 at 12:16 PM, Tim Bell  wrote:

>
> I am a strong advocate of the Blueprint-on-Blueprints process we discussed
> in the operator mini-summit so that experienced cloud administrators can
> give input before lots of code is written (
> https://etherpad.openstack.org/p/operators-feedback-mar14) but we need to
> be aware that these people with real life experience of the impact of
> changes on production clouds may not be working with gerrit day-to-day.
>
> There has been some excellent work by the documentation team in the past
> year to make it easy for new contributors to work on improvements to the
> documentation which also helps to introduce the tools/processes to the
> novices.
>
> Can we find a way to keep the bar low for the review of blueprints while
> at the same time making sure we engage the full community spectrum in the
> future direction of OpenStack ?
>
>
A lot of work has gone into giving code review a very low barrier to entry,
and the nova-specs repo uses the same process.

Going through the etherpad list I think we hit almost everything.


   1. Operator Review of new features - Blueprint on Blueprints (aka BOB) -
      aim: apply a consistent operational view to all the things
      - Specific information is now put into blueprints; see the template at
        http://git.openstack.org/cgit/openstack/nova-specs/tree/template.rst

   2. Blueprint alerts
      - Use gerrit to watch the nova-specs repo using
        https://review.openstack.org/#/settings/projects. This will let you get
        emails every time a new BP is posted.

   3. Operators in design summit sessions
      - Not sure what the blockers are on this one. nova-specs doesn't
        directly relate to the summit.

   4. BluePrint Review Process (gerrit?)
      - https://review.openstack.org/#/q/status:open+project:openstack/nova-specs,n,z

Tim
>
> > -Original Message-
> > From: Stefano Maffulli [mailto:stef...@openstack.org]
> > Sent: 21 March 2014 19:55
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Nova] Updates to Juno blueprint review
> process
> >
> > On 03/20/2014 03:50 PM, Jay Lau wrote:
> > > It is better that we can have some diagram workflow just like
> > > Gerrit_Workflow  to
> > > show the new process.
> >
> > Indeed, I think it would help.
> >
> > While I'm here, and for the records, I think that creating a new
> workflow 'temporarily' only until we have Storyboard usable, is a
> > *huge* mistake. It seems to me that you're ignoring or at least
> underestimating the amount of *people* that will need to be
> > retrained, the amount of documentation that need to be fixed/adjusted.
> And the confusion that this will create on the 'long tail'
> > developers.
> >
> > A change like this, done with a couple of announcements on a mailing
> list and a few mentions on IRC is not enough to steer the ~400
> > developers who may be affected by this change. And then we'll have to
> manage the change again when we switch to Storyboard. If I
> > were you, I'd focus on getting storyboard ready to use asap, instead.
> >
> > There, I said it, and I'm now going back to my cave.
> >
> > .stef
> >
> > --
> > Ask and answer questions on https://ask.openstack.org
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-21 Thread Duncan Thomas
On 17 March 2014 11:34, Yuzhou (C)  wrote:
> Hi Duncan Thomas,
>
> Maybe the statement about approval process is not very exact. In fact 
> in my mail, I mean:
> In the enterprise private cloud, if beyond the quota, you want to create a 
> new VM ,that needs to wait for approval process.

I'm still failing to understand something here. If you're over your
quota, you need a bigger quota. The entire idea of cloud is that
resources are created and deleted on demand, not perfectly laid out in
advance and never changed. If you're over quota, you need to reduce
your workload or get more quota. That is the cloud answer. Putting in
weird behaviours that make no sense unless you're working at the very
edge of your quota is not a path I think we want to go down.

I've said it in other threads, but it bears repeating: Every new API,
method, function we add comes at a significant and growing maintenance
and testing cost. We need to evaluate new features carefully, and if
something can trivially be done by stringing together a couple of
existing primitives (like this case) we probably don't want to go add
new behaviours, particularly if the only advantage of the new
behaviour is that it enables you to do something when you're very
tight for quota.

If a private cloud has a billing model with perverse incentives, then
fix the billing model. If many/most of your users are constantly
running at the edge of their quota, particularly while doing
development or evaluation, you probably want to rethink your cloud
strategy - you might be able to paper over one crack but you are going
to find there are a million others.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Preparing for 2013.2.3 -- branches freeze March 27th

2014-03-21 Thread Adam Gandelman
Hi All-

We'll be freezing the stable/havana branches for integrated projects this
Thursday March 27th in preparation for the 2013.2.3 stable release on
Thursday April 3rd.  You can view the current queue of proposed patches
on gerrit [1].  I'd like to request all interested parties review current
bugs affecting Havana and help ensure any relevant fixes be proposed
soon and merged by Thursday, or notify the stable-maint team of
anything critical that may land late.

Thanks,
Adam

[1] https://review.openstack.org/#/q/status:open+branch:stable/havana,n,z
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Offset support in REST API pagination

2014-03-21 Thread Duncan Thomas
On 18 March 2014 18:30, Steven Kaufer  wrote:
> I realize that if only one solution had to be chosen, then limit/marker
> would always win this war. But why can't both be supported?

One reason is that every line of extra code has a testing and
maintenance cost, so the real question isn't 'why shouldn't we add it'
but 'is this feature worth the effort'? The combination of filtering
and pagination was recently found to be fundamentally broken in cinder
(since fixed), and it took some significant time for this to be
noticed. The more options you have, the more likely you are to have
this sort of combinatorial bug.

At the very least, anybody submitting this feature to cinder is going
to find themselves writing some /very/ comprehensive unit tests, for
both methods of pagination, with and without filtering, if they want
to see it merged.
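
For reference, a minimal sketch of the limit/marker pattern under discussion
(the endpoint, token handling and the 'volumes'/'id' field names below are
illustrative assumptions, not an exact client call):

  # Illustrative only: walk a collection with limit/marker instead of offset.
  import requests

  def list_all(endpoint, token, page_size=50):
      items, marker = [], None
      while True:
          params = {'limit': page_size}
          if marker is not None:
              params['marker'] = marker          # id of the last item we saw
          resp = requests.get(endpoint + '/volumes', params=params,
                              headers={'X-Auth-Token': token})
          page = resp.json().get('volumes', [])
          items.extend(page)
          if len(page) < page_size:              # short page means we're done
              return items
          marker = page[-1]['id']                # resume after this item

Unlike an offset, the marker stays stable when rows are inserted or deleted
ahead of it, which is part of why it combines more predictably with filtering.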

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Mar 21, 2014

2014-03-21 Thread Anne Gentle
1. In review and merged this past week:

Lots of work to get the latest configuration reference groups for glance,
cinder, neutron, ceilometer, and trove. Thanks Gauvain for the efforts! You
can read more about the latest in his mailing list post, Config reference
news. [1]

Adding to our group of core reviewers definitely sped up our review
turnaround; thanks so much to our latest members and their reviewing efforts. We
merged over 50 patches in openstack-manuals alone.

What a great response to the copy-edits from O'Reilly, thank you all who
made those edits and reviewed them. Handoff today! Over 40 patches went
into the operations-guide repository in the last week. Great work.

Across all the doc repositories we are reviewing and merging about 10 a
day. Follow along to be a star doc reviewer by entering this in the Search
on review.openstack.org. (project:openstack/openstack-manuals OR
project:openstack/api-site OR project:openstack/object-api OR
project:openstack/image-api OR project:openstack/identity-api OR
project:openstack/compute-api OR project:openstack/volume-api OR
project:openstack/netconn-api OR project:openstack/operations-guide OR
project:openstack/database-api)

2. High priority doc work:

Much appreciation to Matt Kassawara for creating architecture diagrams for
the Install Guide for Icehouse. The install guide remains a high priority
but we are still debugging packages to get a working install on distros.
For Ubuntu, security groups are not working. For RHEL/ RDO, Matt is testing
-3 (-2 had many database woes).

Matt also modified the docs in response to the UTF-8 configuration need for
MySQL at https://review.openstack.org/#/c/80799/.

3. Doc work going on that I know of:

I love what Edgar Magana has been doing lately to ping neutron developers
to ask them to write docs for a feature they've marked with DocImpact.
Great work, and good thinking. I know we'd like for each project to have
some point person looking out for docs, and Edgar has been a great model.
He talks about docs at each team meeting, answers questions, triages doc
bugs related to neutron, and seeks the people that will know the most about
a feature to write docs for it. Edgar's the model for project doc lead.

4. New incoming doc requests:

Release notes will be next and I've already asked for hints and tips on how
to gather up what's new in Icehouse. Steve Gordon has a sneak peek at [2].

5. Doc tools updates:

Andreas Jaeger just let us know that the translation jobs for api-site,
openstack-manuals, and operations-guide are now working properly. This is
great news! Read more at [3] to find out about upstream-translation jobs
and propose-translation jobs.

I've handed out two more Oxygen licenses so we're halfway through the set,
and it's good because it means that people are writing docs! Keep it up,
we're in the home stretch.

6. Other doc news:
Our next monthly Google Hangout is scheduled for March 26 at 21:00 UTC. The
first 10 to join get voice, and anyone can participate as a live audience
with chat enabled.

The Design Summit submission site is now active at
http://summit.openstack.org/.  I've created placeholders for two topics
we'll want to discuss to plan for Juno: Install guides and continuous
publishing and automation for the docs site. We should have room for at
least three more proposals and our timeslots are Wed-Friday. I know there
will be a proposal about making docs onboarding easier as well. I'm sure we
can come up with an API docs topic too.  Looking forward to it!

Nathan Kinder is working on bringing all the OpenStack Security Notes to
life in a github repo to be organized under the Documentation program.
https://review.openstack.org/#/c/73157/ The plan is to then publish the
Security Notes in a collection as an Appendix in the OpenStack Security
Guide.

On April 16, the APAC OpenStack docs writers will hold a face-to-face
meeting. Check out the latest docs team meeting for details or contact Lana
Brindley. [4]

[1]
http://lists.openstack.org/pipermail/openstack-docs/2014-March/004011.html
[2]
http://redhatstackblog.redhat.com/2014/03/11/an-icehouse-sneak-peek-openstack-compute-nova/
[3]
http://lists.openstack.org/pipermail/openstack-docs/2014-March/004126.html
[4]
http://eavesdrop.openstack.org/meetings/docteam/2014/docteam.2014-03-19-03.01.log.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-21 Thread Tim Bell

I am a strong advocate of the Blueprint-on-Blueprints process we discussed in 
the operator mini-summit so that experienced cloud administrators can give 
input before lots of code is written 
(https://etherpad.openstack.org/p/operators-feedback-mar14) but we need to be 
aware that these people with real life experience of the impact of changes on 
production clouds may not be working with gerrit day-to-day. 

There has been some excellent work by the documentation team in the past year 
to make it easy for new contributors to work on improvements to the 
documentation which also helps to introduce the tools/processes to the novices.

Can we find a way to keep the bar low for the review of blueprints while at the 
same time making sure we engage the full community spectrum in the future 
direction of OpenStack ?

Tim

> -Original Message-
> From: Stefano Maffulli [mailto:stef...@openstack.org]
> Sent: 21 March 2014 19:55
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Updates to Juno blueprint review process
> 
> On 03/20/2014 03:50 PM, Jay Lau wrote:
> > It is better that we can have some diagram workflow just like
> > Gerrit_Workflow  to
> > show the new process.
> 
> Indeed, I think it would help.
> 
> While I'm here, and for the records, I think that creating a new workflow 
> 'temporarily' only until we have Storyboard usable, is a
> *huge* mistake. It seems to me that you're ignoring or at least 
> underestimating the amount of *people* that will need to be
> retrained, the amount of documentation that need to be fixed/adjusted. And 
> the confusion that this will create on the 'long tail'
> developers.
> 
> A change like this, done with a couple of announcements on a mailing list and 
> a few mentions on IRC is not enough to steer the ~400
> developers who may be affected by this change. And then we'll have to manage 
> the change again when we switch to Storyboard. If I
> were you, I'd focus on getting storyboard ready to use asap, instead.
> 
> There, I said it, and I'm now going back to my cave.
> 
> .stef
> 
> --
> Ask and answer questions on https://ask.openstack.org
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device Mapping

2014-03-21 Thread Duncan Thomas
In general, abstracting the offload of snapshot, backup, etc. to a SAN
is exactly the job of cinder.

RDM has, in the general cloud case, a bunch of security issues (raw
sector reads outside of what is being presented, firmware updates,
etc.) that need to be looked at carefully.

On 18 March 2014 09:33, Zhangleiqiang (Trump)  wrote:
>> From: Huang Zhiteng [mailto:winsto...@gmail.com]
>> Sent: Tuesday, March 18, 2014 4:40 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
>> Mapping
>>
>> On Tue, Mar 18, 2014 at 11:01 AM, Zhangleiqiang (Trump)
>>  wrote:
>> >> From: Huang Zhiteng [mailto:winsto...@gmail.com]
>> >> Sent: Tuesday, March 18, 2014 10:32 AM
>> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> Subject: Re: [openstack-dev] [Nova][Cinder] Feature about Raw Device
>> >> Mapping
>> >>
>> >> On Tue, Mar 18, 2014 at 9:40 AM, Zhangleiqiang (Trump)
>> >>  wrote:
>> >> > Hi, stackers:
>> >> >
>> >> > With RDM, the storage logical unit number (LUN) can be
>> >> > directly
>> >> connected to a instance from the storage area network (SAN).
>> >> >
>> >> > For most data center applications, including Databases, CRM
>> >> > and
>> >> ERP applications, RDM can be used for configurations involving
>> >> clustering between instances, between physical hosts and instances or
>> >> where SAN-aware applications are running inside a instance.
>> >> If 'clustering' here refers to things like cluster file system, which
>> >> requires LUNs to be connected to multiple instances at the same time.
>> >> And since you mentioned Cinder, I suppose the LUNs (volumes) are
>> >> managed by Cinder, then you have an extra dependency for multi-attach
>> >> feature:
>> https://blueprints.launchpad.net/cinder/+spec/multi-attach-volume.
>> >
>> > Yes.  "Clustering" include Oracle RAC, MSCS, etc. If they want to work in
>> instance-based cloud environment, RDM and multi-attached-volumes are both
>> needed.
>> >
>> > But RDM is not only used for clustering, and haven't dependency for
>> multi-attach-volume.
>>
>> Set clustering use case and performance improvement aside, what other
>> benefits/use cases can RDM bring/be useful for?
>
> Thanks for your reply.
>
> The advantages of raw device mapping all come from its capability to pass
> SCSI commands through to the device, and the most common use cases are the
> clustering and performance improvements mentioned above.
>
> Besides these two scenarios, there is another use case: running SAN-aware
> applications inside instances, such as:
> 1. SAN management apps
> 2. Apps which can offload device-related work, such as snapshots, backups,
> etc., to the SAN.
>
>
>> >
>> >> > RDM, which permits the use of existing SAN commands, is
>> >> generally used to improve performance in I/O-intensive applications
>> >> and block locking. Physical mode provides access to most hardware
>> >> functions of the storage system that is mapped.
>> >> It seems to me that the performance benefit mostly from virtio-scsi,
>> >> which is just an virtual disk interface, thus should also benefit all
>> >> virtual disk use cases not just raw device mapping.
>> >> >
>> >> > For libvirt driver, RDM feature can be enabled through the "lun"
>> >> device connected to a "virtio-scsi" controller:
>> >> >
>> >> > <disk type='block' device='lun'>
>> >> >   <source dev='/dev/mapper/360022a11ecba5db427db0023'/>
>> >> >   <target dev='sda' bus='scsi'/>
>> >> > </disk>
>> >> >
>> >> > <controller type='scsi' model='virtio-scsi'/>
>> >> >
>> >> > Currently, the related work in OpenStack is as follows:
>> >> > 1. block-device-mapping-v2 extension has already support
>> >> > the
>> >> "lun" device with "scsi" bus type listed above, but cannot make the
>> >> disk use "virtio-scsi" controller instead of default "lsi" scsi 
>> >> controller.
>> >> > 2. libvirt-virtio-scsi-driver BP ([1]) whose milestone
>> >> > target is
>> >> icehouse-3 is aim to support generate a virtio-scsi controller when
>> >> using an image with "virtio-scsi" property, but it seems not to take
>> >> boot-from-volume and attach-rdm-volume into account.
>> >> >
>> >> > I think it is meaningful if we provide the whole support
>> >> > for RDM
>> >> feature in OpenStack.
>> >> >
>> >> > Any thoughts? Welcome any advices.
>> >> >
>> >> >
>> >> > [1]
>> >> > https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-scsi-dri
>> >> > ver
>> >> > --
>> >> > zhangleiqiang (Trump)
>> >> >
>> >> > Best Regards
>> >> >
>> >> > ___
>> >> > OpenStack-dev mailing list
>> >> > OpenStack-dev@lists.openstack.org
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >>
>> >>
>> >> --
>> >> Regards
>> >> Huang Zhiteng
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.o

Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-21 Thread Russell Bryant
On 03/21/2014 02:55 PM, Stefano Maffulli wrote:
> On 03/20/2014 03:50 PM, Jay Lau wrote:
>> It is better that we can have some diagram workflow just like
>> Gerrit_Workflow  to
>> show the new process.
> 
> Indeed, I think it would help.
> 
> While I'm here, and for the records, I think that creating a new
> workflow 'temporarily' only until we have Storyboard usable, is a *huge*
> mistake. It seems to me that you're ignoring or at least underestimating
> the amount of *people* that will need to be retrained, the amount of
> documentation that need to be fixed/adjusted. And the confusion that
> this will create on the 'long tail' developers.
> 
> A change like this, done with a couple of announcements on a mailing
> list and a few mentions on IRC is not enough to steer the ~400
> developers who may be affected by this change. And then we'll have to
> manage the change again when we switch to Storyboard. If I were you, I'd
> focus on getting storyboard ready to use asap, instead.
> 
> There, I said it, and I'm now going back to my cave.

I think the current process and system are *so* broken that we can't
afford to wait.  Further, after talking to Thierry, it seems quite
likely that we would continue using this exact process, even with
Storyboard.  Storyboard isn't a review tool and won't solve all of the
project's problems.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Updates to Juno blueprint review process

2014-03-21 Thread Stefano Maffulli
On 03/20/2014 03:50 PM, Jay Lau wrote:
> It is better that we can have some diagram workflow just like
> Gerrit_Workflow  to
> show the new process.

Indeed, I think it would help.

While I'm here, and for the record, I think that creating a new
workflow 'temporarily', only until we have Storyboard usable, is a *huge*
mistake. It seems to me that you're ignoring or at least underestimating
the number of *people* that will need to be retrained, the amount of
documentation that needs to be fixed/adjusted, and the confusion that
this will create among the 'long tail' of developers.

A change like this, done with a couple of announcements on a mailing
list and a few mentions on IRC is not enough to steer the ~400
developers who may be affected by this change. And then we'll have to
manage the change again when we switch to Storyboard. If I were you, I'd
focus on getting storyboard ready to use asap, instead.

There, I said it, and I'm now going back to my cave.

.stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano][Heat] MuranoPL questions?

2014-03-21 Thread Zane Bitter
I completely agree with Georgy, but you raised some questions about Heat 
that I want to answer in the interests of spreading knowledge about how 
Heat works. A heavily-snipped response follows...


On 21/03/14 05:11, Stan Lagun wrote:

3. Despite HOT being more secure on the surface, it is not necessarily so
in reality. There is a Python class behind each entry in the resources
section of a HOT template. That Python code is run with root privileges
and is not guaranteed to be safe. People make mistakes, forget to validate
parameters, make incorrect assumptions, etc. Even if the code is proven
to be secure, every single commit can introduce a security breach. And no
testing system can detect this.


Quite right, I should acknowledge that it would be crazy to assume that 
HOT is secure just because it is not a programming language, and I do 
not make that assumption. (Indeed, YAML itself has been the subject of 
many security problems, though afaik not in the safe mode that we use in 
Heat.) Thanks for pointing out that I was not clear.



The operator can install whatever plugins they want.

They do, but that is a bad solution. The reason is that plugins can
introduce additional resource types but they cannot modify existing
code. Most of the time cloud operators need to customize existing
resources' logic for their needs rather than rewriting it from scratch.
And they want their changes to be opaque to end-users. Imagine that a
cloud operator needs to get permission from his proprietary quota
management system for each VM spawned. If he created a custom
MyInstance resource type, end-users could bypass it by using the standard
Instance resource rather than the custom one. Patching existing Python code
is not good either, in that the operator then needs to maintain a private
fork of Heat and has trouble with CD, upgrades to newer versions, etc.


It's not as bad as you think. All of the things you mentioned were 
explicit design goals of the plugin system. If you install a plug-in 
resource with the same type as a built-in resource then it replaces the 
built-in one. And of course you can inherit from the existing plugin to 
customise it.


So in this example, the operator would create a plugin like this:

  from heat.engine.resources import server
  from my.package import do_my_proprietary_quota_thing

  class MyServer(server.Server):
      def handle_create(self):
          do_my_proprietary_quota_thing()
          return super(MyServer, self).handle_create()

  def resource_mapping():
      return {'OS::Nova::Server': MyServer}

and drop it in /usr/lib/heat. As you can see, this is a simple 
customisation (10 lines of code), completely opaque to end users 
(OS::Nova::Server is replaced), and highly unlikely to be broken by any 
changes in Heat (we consider the public APIs of 
heat.engine.resource.Resource as a contract with existing plugins that 
we can't break, at least without a lot of notice).


(I'm ignoring here that if this is needed for _every_ server, it makes 
no sense to do it in Heat, unless you don't expose the Nova API to users 
at all.)



Besides plugin system is not secure because plugins run with the
privileges of Heat engine and while I may trust Heat developers
(community) but not necessary trust 3rd party proprietary plugin.


I'm not sure who 'I' means in this context? As an end-user, you have no 
way of auditing what code your cloud provider is running in general.




What if he wants auto-scaling to be based on input from his existing
Nagios infrastructure rather than Ceilometer?


This is supported already in autoscaling. Ceilometer just hits a URL
for an alarm, but you don't have to configure it this way. Anything
can hit the URL.

And this is a good example for our general approach - we provide a
way that works using built-in OpenStack services and a hook that
allows you to customise it with your own service, running on your
own machine (whether that be an actual machine or an OpenStack
Compute server). What we *don't* do is provide a way to upload your
own code that we then execute for you as some sort of secondary
Compute service.


1. Anything can hit the URL, but it is the auto-scaling resource that creates
Ceilometer alarms. And what should I do to make it create Nagios alarms,
for example?


That's incorrect, autoscaling doesn't create any alarms. You create an 
alarm explicitly using the Ceilometer API, or using an 
OS::Ceilometer::Alarm resource in Heat. Or not, if you want to use some 
other source for alarms. You connect them together by getting the 
alarm_url attribute from the autoscaling policy resource and passing it 
to the Ceilometer alarm, but you could also allow any alarm source you 
care to use to hit that URL.


  [Ceilometer]  [Heat]
  Metrics ---> Alarm - - - - -> Policy ---> Scaling Group
 ^
  (webhook)
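
For example (purely illustrative; the URL below is a made-up placeholder that
you would read from the policy's alarm_url attribute), any external alarm
source can hit that webhook like so:

  # Any alarm source (a Nagios event handler, a cron job, plain curl) can
  # trigger the scaling policy by POSTing to its pre-signed webhook URL.
  import requests

  ALARM_URL = "https://heat.example.com:8000/v1/signal/..."  # hypothetical

  def on_nagios_alert():
      # An empty POST is enough; authorization is carried in the pre-signed URL.
      requests.post(ALARM_URL)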

A second option is that you can a

Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-21 Thread Chris Behrens

FWIW, I'm fine with any of the options posted. But I'm curious about the
precedent that reverting would create. It essentially sounds like if we
release a version with an API bug, the bug is no longer a bug in the API and
instead becomes a bug in the documentation. The only way to 'fix' the API then
would be to rev it. Is that an accurate representation, and is it desirable?
Or do we just say we take these on a case-by-case basis?

- Chris


On Mar 21, 2014, at 10:34 AM, David Kranz  wrote:

> On 03/21/2014 05:04 AM, Christopher Yeoh wrote:
>> On Thu, 20 Mar 2014 15:45:11 -0700
>> Dan Smith  wrote:
>>> I know that our primary delivery mechanism is releases right now, and
>>> so if we decide to revert before this gets into a release, that's
>>> cool. However, I think we need to be looking at CD as a very important
>>> use-case and I don't want to leave those folks out in the cold.
>>> 
>> I don't want to cause issues for the CD people, but perhaps it won't be
>> too disruptive for them (some direct feedback would be handy). The
>> initial backwards incompatible change did not result in any bug reports
>> coming back to us at all. If there were lots of users using it I think
>> we could have expected some complaints as they would have had to adapt
>> their programs to no longer manually add the flavor access (otherwise
>> that would fail). It is of course possible that new programs written in
>> the meantime would rely on the new behaviour.
>> 
>> I think (please correct me if I'm wrong) the public CD clouds don't
>> expose that part of API to their users so the fallout could be quite
>> limited. Some opinions from those who do CD for private clouds would be
>> very useful. I'll send an email to openstack-operators asking what
>> people there believe the impact would be but at the moment I'm thinking
>> that revert is the way we should go.
>> 
>>> Could we consider a middle road? What if we made the extension
>>> silently tolerate an add-myself operation to a flavor, (potentially
>>> only) right after create? Yes, that's another change, but it means
>>> that old clients (like horizon) will continue to work, and new
>>> clients (which expect to automatically get access) will continue to
>>> work. We can document in the release notes that we made the change to
>>> match our docs, and that anyone that *depends* on the (admittedly
>>> weird) behavior of the old broken extension, where a user doesn't
>>> retain access to flavors they create, may need to tweak their client
>>> to remove themselves after create.
>> My concern is that we'd be digging ourselves an even deeper hole with
>> that approach. That for some reason we don't really understand at the
>> moment, people have programs which rely on adding flavor access to a
>> tenant which is already on the access list being rejected rather than
>> silently accepted. And I'm not sure its the behavior from flavor access
>> that we actually want.
>> 
>> But we certainly don't want to end up in the situation of trying to
>> work out how to rollback two backwards incompatible API changes.
>> 
>> Chris
> Nope.  IMO we should just accept that an incompatible change was made that 
> should not have been, revert it, and move on. I hope that saying our code 
> base is going to support CD does not mean that any incompatible change that 
> slips through our very limited gate cannot be reverted. October was a while 
> back but I'm not sure what principle we would use to draw the line. I am also 
> not sure why this is phrased as a CD vs. not issue. Are the *users* of a 
> system that happens to be managed using CD thought to be more tolerant of 
> their code breaking?
> 
> Perhaps it would be a good time to review 
> https://wiki.openstack.org/wiki/Governance/Approved/APIStability and the 
> details of https://wiki.openstack.org/wiki/APIChangeGuidelines to make sure 
> they still reflect the will of the TC and our community.
> 
> -David
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [depfreeze] Dependency freeze coming up (EOD Tuesday March 25)

2014-03-21 Thread Joe Gordon
On Fri, Mar 21, 2014 at 10:23 AM, Thierry Carrez wrote:

> Hi everyone,
>
> As we get closer to tagging the first Icehouse release candidates, we'll
> be freezing dependencies on Tuesday, March 25 after the 21:00 UTC
> release meeting. The rationale behind this freeze is to facilitate the
> work of packagers (especially distribution packagers) so that they can
> be ready soon after our own release date.
>

There are still two outstanding trove dependencies that are currently used
in trove but not in global requirements. It would be nice to get this
sorted out before the freeze so we can turn
https://review.openstack.org/#/c/80690/ on.

mockito https://review.openstack.org/#/c/80850/
wsgi_intercept https://review.openstack.org/#/c/80851/


>
> Here is the process we'll follow. When DepFreeze hits, the
> openstack/requirements core reviewers will temporarily stop accepting
> changes to the master branch of the repository. Exceptions will of
> course be considered: if you think your change is valid and necessary
> for a successful release, please post a thread on openstack-dev to
> motivate it ([depfreeze] is a good subject prefix to reuse).
>
> This master freeze will last until all the integrated projects
> milestone-proposed branches are cut (i.e. when all the RC1s will have
> been tagged). At that point we'll create a milestone-proposed branch for
> openstack/requirements itself and unfreeze the master branch for Juno
> improvements.
>
> Thanks,
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Network Tagging Blueprint

2014-03-21 Thread Vinay Bannai
Salvatore,

Thanks for your comments.
I did have a conversation with Kyle and Mark (at the ODL) regarding this
feature. Any suggestions on how to make it "technology-agnostic" would
be appreciated.

Vinay


On Fri, Mar 21, 2014 at 5:54 AM, Salvatore Orlando wrote:

> Hi Vinay,
>
> I left a few comments on the specification document.
> While I understand this is functional for the VPC use case, there might be
> applications also outside of the VPC.
> My only concern is that, at least in the examples in the document, this
> appears to be violating a bit the tenet of neutron being
> "technology-agnostic".
> I am however confident that it should be doable to find a way to work
> around it, or have a discussion identifying the cases where instead it's
> advisable to expose the underlying technology.
>
> From a general perspective, I have not been following closely the
> discussion on VPC; I hope to find time to catch up.
> However, I recall seeing a blueprint for using nova-api as endpoint for
> network operations as well; is that still the current direction?
>
> Salvatore
>
>
> On 21 March 2014 07:01, Vinay Bannai  wrote:
>
>> Hello Folks,
>>
>> Please see a blueprint that we (eBay Inc) would like to propose for the
>> Juno summit. This blueprint addresses the feature of network tagging
>> allowing one to tag network resources with key value pairs as explained in
>> the specification URL. We at eBay have a version of this feature
>> implemented and deployed in our production network. This blueprint
>> formalizes the feature definition with enhancements to address more generic
>> use cases. I have enabled comments and would like to hear opinions and
>> feedback.
>>
>> The document will be updated with REST URL and resource modeling over the
>> weekend for those interested in the details.
>>
>>
>> https://docs.google.com/document/d/1ZqW7qeyHTm9AQt28GUdfv46ui9mz09UQNvjXiewOAys/edit#
>>
>>
>> Regards
>>
>> --
>> Vinay Bannai
>> eBay Inc
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Vinay Bannai
Email: vban...@gmail.com
Google Voice: 415 938 7576
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A ramdisk agent

2014-03-21 Thread Jay Faulkner
On 3/21/14, 10:18 AM, Vladimir Kozhukalov wrote:
> And here is scheme
> https://drive.google.com/a/mirantis.com/file/d/0B-Olcp4mLLbvRks0eEhvMXNPM3M/edit?usp=sharing
>

Vladimir, can you recreate this drawing in a format that doesn't require
an additional browser plugin? Thanks.

-Jay





signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-21 Thread David Kranz

On 03/21/2014 05:04 AM, Christopher Yeoh wrote:

On Thu, 20 Mar 2014 15:45:11 -0700
Dan Smith  wrote:

I know that our primary delivery mechanism is releases right now, and
so if we decide to revert before this gets into a release, that's
cool. However, I think we need to be looking at CD as a very important
use-case and I don't want to leave those folks out in the cold.


I don't want to cause issues for the CD people, but perhaps it won't be
too disruptive for them (some direct feedback would be handy). The
initial backwards incompatible change did not result in any bug reports
coming back to us at all. If there were lots of users using it I think
we could have expected some complaints as they would have had to adapt
their programs to no longer manually add the flavor access (otherwise
that would fail). It is of course possible that new programs written in
the meantime would rely on the new behaviour.

I think (please correct me if I'm wrong) the public CD clouds don't
expose that part of API to their users so the fallout could be quite
limited. Some opinions from those who do CD for private clouds would be
very useful. I'll send an email to openstack-operators asking what
people there believe the impact would be but at the moment I'm thinking
that revert is the way we should go.


Could we consider a middle road? What if we made the extension
silently tolerate an add-myself operation to a flavor, (potentially
only) right after create? Yes, that's another change, but it means
that old clients (like horizon) will continue to work, and new
clients (which expect to automatically get access) will continue to
work. We can document in the release notes that we made the change to
match our docs, and that anyone that *depends* on the (admittedly
weird) behavior of the old broken extension, where a user doesn't
retain access to flavors they create, may need to tweak their client
to remove themselves after create.

My concern is that we'd be digging ourselves an even deeper hole with
that approach. That for some reason we don't really understand at the
moment, people have programs which rely on adding flavor access to a
tenant which is already on the access list being rejected rather than
silently accepted. And I'm not sure its the behavior from flavor access
that we actually want.

But we certainly don't want to end up in the situation of trying to
work out how to rollback two backwards incompatible API changes.

Chris
Nope.  IMO we should just accept that an incompatible change was made 
that should not have been, revert it, and move on. I hope that saying 
our code base is going to support CD does not mean that any incompatible 
change that slips through our very limited gate cannot be reverted. 
October was a while back but I'm not sure what principle we would use to 
draw the line. I am also not sure why this is phrased as a CD vs. not 
issue. Are the *users* of a system that happens to be managed using CD 
thought to be more tolerant of their code breaking?


Perhaps it would be a good time to review 
https://wiki.openstack.org/wiki/Governance/Approved/APIStability and the 
details of https://wiki.openstack.org/wiki/APIChangeGuidelines to make 
sure they still reflect the will of the TC and our community.


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [depfreeze] Dependency freeze coming up (EOD Tuesday March 25)

2014-03-21 Thread Thierry Carrez
Hi everyone,

As we get closer to tagging the first Icehouse release candidates, we'll
be freezing dependencies on Tuesday, March 25 after the 21:00 UTC
release meeting. The rationale behind this freeze is to facilitate the
work of packagers (especially distribution packagers) so that they can
be ready soon after our own release date.

Here is the process we'll follow. When DepFreeze hits, the
openstack/requirements core reviewers will temporarily stop accepting
changes to the master branch of the repository. Exceptions will of
course be considered: if you think your change is valid and necessary
for a successful release, please post a thread on openstack-dev to
motivate it ([depfreeze] is a good subject prefix to reuse).

This master freeze will last until all the integrated projects
milestone-proposed branches are cut (i.e. when all the RC1s will have
been tagged). At that point we'll create a milestone-proposed branch for
openstack/requirements itself and unfreeze the master branch for Juno
improvements.

Thanks,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnetodb] Using gevent in MagnetoDB. OpenStack standards and approaches

2014-03-21 Thread Dmitriy Ukhlov
Doug and Ryan,

Thank you for your opinions! It is very important for us to gather as much
experience as we can so that we can pass OpenStack incubation painlessly.
I would also be glad to see you, and everyone else interested in the MagnetoDB
project, at the MagnetoDB design session "MagnetoDB, key-value storage.
OpenStack usecases" at the OpenStack Juno Design Summit.


On Wed, Mar 19, 2014 at 3:49 PM, Ryan Petrello
wrote:

> Dmitriy,
>
> Gunicorn + gevent + pecan play nicely together, and they're a combination
> I've used to good success in the past.  Pecan even comes with some helpers
> for integrating with gunicorn:
>
> $ gunicorn_pecan pecan_config.py -k gevent -w4
>
> http://pecan.readthedocs.org/en/latest/deployment.html?highlight=gunicorn#gunicorn
>
> ---
> Ryan Petrello
> Senior Developer, DreamHost
> ryan.petre...@dreamhost.com
>
> On Mar 18, 2014, at 2:51 PM, Dmitriy Ukhlov  wrote:
>
> > Hello openstackers,
> >
> > We are working on MagnetoDB project and trying our best to follow
> OpenStack standards.
> >
> > So, MagnetoDB is aimed to be high performance scalable OpenStack based
> WSGI application which provide interface to high available distributed
> reliable key-value storage. We investigated best practices and separated
> the next points:
> >   * to avoid problems with GIL our application should be executed in
> single thread mode with non-blocking IO (using greenlets or another python
> specific approaches to rich this)
> >   * to make MagnetoDB scalable it is necessary to make MagnetoDB
> stateless. It allows us run a lot of independent MagnetoDB processes and
> switch all requests flow between them:
> >   * at single node to load all CPU's cores
> >   * at the different nodes for horizontal scalability
> >   * use Cassandra as most reliable and mature distributed key-value
> storage
> >   * use datastax python-driver as most modern cassandra python
> client which supports newest CQL3 and Cassandra native binary protocol
> features set
> >
> > So, considering this points The next technologies was chosen:
> >   * gevent as one of the fastest non-blocking single-thread WSGI
> server. It is based on greenlet library and supports monkey patching of
> standard threading library. It is necessary because of datastax python
> driver uses threading library and it's backlog has task to add gevent
> backlog. (We patched python-driver ourselves to enable this feature as
> temporary solution and waiting for new python-driver releases). It makes
> gevent more interesting to use than other analogs (like eventlet for
> example)
> >   * gunicorn as WSGI server which is able to run a few worker
> processes and master process for workers managing and routing request
> between them. Also it has integration with gevent   and can run
> gevent based workers. We also analyzed analogues, such as uWSGI. It looks
> like more faster but unfortunately we didn't manage to work uWSGI in multi
> process mode with MagnetoDB application.
> >
> > Also I want to add that currently oslo wsgi framework is used for
> organizing request routing. I know that current OpenStack trend is to
> migrate WSGI services to Pecan wsgi framework. Maybe is it reasonable for
> MagnetoDB too.
> >
> > We would like to hear your opinions about the libraries and approaches
> we have chosen and would appreciate you help and support in order to find
> the best balance between performance, developer friendness  and OpenStack
> standards.
> > --
> > Best regards,
> > Dmitriy Ukhlov
> > Mirantis Inc.
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Dmitriy Ukhlov
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A ramdisk agent

2014-03-21 Thread Vladimir Kozhukalov
And here is scheme
https://drive.google.com/a/mirantis.com/file/d/0B-Olcp4mLLbvRks0eEhvMXNPM3M/edit?usp=sharing

Vladimir Kozhukalov


On Fri, Mar 21, 2014 at 9:16 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Guys,
>
> I've read comments from JoshNang here
> https://etherpad.openstack.org/p/IronicPythonAgent. And it looks like we
> are still not on the same page about architecture of agent. I'd like us to
> avoid having hard coded logic in agent at all. If we need, then let's
> implement it as a driver. I mean it would be great to have all pieces of
> functionality as and only as drivers exposed via REST.
>
> For example, I have simple granular drivers, let say "power state" driver
> and "disk setup" driver and I'd like it to be possible to call them
> independently (granularly) outside of any kind of flow, outside of
> "prepare" or "deploy" or any other stages. There is no reason to put
> granular actions inside "deploy" or "prepare" stages w/o exposing them
> directly via REST.
>
> On the other hand, we obviously need to have flows (sequences of granular
> tasks), but I'd like to see them implemented as drivers as well. We can
> have canned flows (hard coded sequences like "prepare" and "deploy") as
> well as fully data driven generic flow driver. And we obviously need to get
> rid of "modes" so as to have just a plain bunch of drivers which are able
> to call their neighbors if necessary.
>
> Below are some examples of canned flow, generic flow and granular drivers:
>
> Canned flow driver url:  /prepare
> Data: {"key1": "value1", ...}
> Implementation:
> def flow(data):
>   ext_mgr.map(lambda ext: ext.name == "raid_config", lambda ext:
> ext.obj(data))
>   ext_mgr.map(lambda ext: ext.name == "deploy", lambda ext: ext.obj(data))
>   
>
> Canned flow driver url: /deploy
> Data: {"key11": "value11", ...}
> 
>
> Generic flow driver url: /flow
> Data: [
> {"driver": "prepare", "data": {"key1": "value1", ...}},
> {"driver": "deploy", "data": {"key11": "value11", ...}},
> {"driver": "power", "data": "reboot"}
> ]
> Implementation:
> def flow(data):
>   for d in data:
>  ext_mgr.map(lambda ext: ext.name == d["driver"], lambda ext:
> ext.obj(d))
>
>
> Granular driver url: /power
> Data: {"key111": "value111", ...}
> Implementation:
> ext_mgr.map(lambda ext: ext.name == "power", lambda ext: ext.obj(data))
>
> What do you guys think of having just a plain (not tree-like) bunch of
> drivers?
>
>
> Vladimir Kozhukalov
>
>
> On Mon, Mar 10, 2014 at 1:02 AM, Ryan Petrello <
> ryan.petre...@dreamhost.com> wrote:
>
>> FYI, the API scaffolding isn't actually released yet, though I'm planning
>> on making a pecan release with this in the next week or two.
>>
>> ---
>> Ryan Petrello
>> Senior Developer, DreamHost
>> ryan.petre...@dreamhost.com
>>
>> On Mar 9, 2014, at 12:10 PM, Devananda van der Veen <
>> devananda@gmail.com> wrote:
>>
>> > For those looking at Pecan/WSME'fying the agent, some scaffolding was
>> recently added to Pecan which may interest you.
>> >
>> > https://review.openstack.org/#/c/78682/
>> >
>> >
>> > -Deva
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] A ramdisk agent

2014-03-21 Thread Vladimir Kozhukalov
Guys,

I've read comments from JoshNang here
https://etherpad.openstack.org/p/IronicPythonAgent. And it looks like we
are still not on the same page about the architecture of the agent. I'd like us
to avoid having hard-coded logic in the agent at all. If we need it, then let's
implement it as a driver. I mean it would be great to have all pieces of
functionality implemented as drivers, and only as drivers, exposed via REST.

For example, I have simple granular drivers, let's say a "power state" driver
and a "disk setup" driver, and I'd like it to be possible to call them
independently (granularly) outside of any kind of flow, outside of
"prepare" or "deploy" or any other stages. There is no reason to put
granular actions inside "deploy" or "prepare" stages without exposing them
directly via REST.

On the other hand, we obviously need to have flows (sequences of granular
tasks), but I'd like to see them implemented as drivers as well. We can
have canned flows (hard-coded sequences like "prepare" and "deploy") as
well as a fully data-driven generic flow driver. And we obviously need to get
rid of "modes" so as to have just a plain bunch of drivers which are able
to call their neighbors if necessary.

Below are some examples of canned flow, generic flow and granular drivers:

Canned flow driver url:  /prepare
Data: {"key1": "value1", ...}
Implementation:
def flow(data):
  ext_mgr.map(lambda ext: ext.name == "raid_config", lambda ext:
ext.obj(data))
  ext_mgr.map(lambda ext: ext.name == "deploy", lambda ext: ext.obj(data))
  

Canned flow driver url: /deploy
Data: {"key11": "value11", ...}


Generic flow driver url: /flow
Data: [
{"driver": "prepare", "data": {"key1": "value1", ...}},
{"driver": "deploy", "data": {"key11": "value11", ...}},
{"driver": "power", "data": "reboot"}
]
Implementation:
def flow(data):
  for d in data:
 ext_mgr.map(lambda ext: ext.name == d["driver"], lambda ext:
ext.obj(d))


Granular driver url: /power
Data: {"key111": "value111", ...}
Implementation:
ext_mgr.map(lambda ext: ext.name == "power", lambda ext: ext.obj(data))

What do you guys think of having just a plain (not tree-like) bunch of
drivers?
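
To make the "plain bunch of drivers" idea a bit more concrete, here is a purely
illustrative sketch (the registry and decorator below are assumptions for
discussion, not agent code): every driver, including a flow, lives in one flat
registry keyed by the name it would be exposed under via REST.

    DRIVERS = {}

    def driver(name):
        # register a callable under a flat name (the name doubles as its REST url)
        def register(func):
            DRIVERS[name] = func
            return func
        return register

    @driver('power')
    def power(data):
        # apply the requested power state, e.g. data == 'reboot'
        print('power: %s' % data)

    @driver('flow')
    def flow(data):
        # a flow is just another driver that calls its neighbors;
        # data is a list of {"driver": ..., "data": ...} steps
        for step in data:
            DRIVERS[step['driver']](step['data'])

    flow([{'driver': 'power', 'data': 'reboot'}])

No tree and no modes, just one bunch of drivers that can call each other by name.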


Vladimir Kozhukalov


On Mon, Mar 10, 2014 at 1:02 AM, Ryan Petrello
wrote:

> FYI, the API scaffolding isn't actually released yet, though I'm planning
> on making a pecan release with this in the next week or two.
>
> ---
> Ryan Petrello
> Senior Developer, DreamHost
> ryan.petre...@dreamhost.com
>
> On Mar 9, 2014, at 12:10 PM, Devananda van der Veen <
> devananda@gmail.com> wrote:
>
> > For those looking at Pecan/WSME'fying the agent, some scaffolding was
> recently added to Pecan which may interest you.
> >
> > https://review.openstack.org/#/c/78682/
> >
> >
> > -Deva
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-21 Thread Joe Gordon
On Fri, Mar 21, 2014 at 2:04 AM, Christopher Yeoh  wrote:

> On Thu, 20 Mar 2014 15:45:11 -0700
> Dan Smith  wrote:
> >
> > I know that our primary delivery mechanism is releases right now, and
> > so if we decide to revert before this gets into a release, that's
> > cool. However, I think we need to be looking at CD as a very important
> > use-case and I don't want to leave those folks out in the cold.
> >
>
> I don't want to cause issues for the CD people, but perhaps it won't be
> too disruptive for them (some direct feedback would be handy). The
> initial backwards incompatible change did not result in any bug reports
> coming back to us at all. If there were lots of users using it I think
> we could have expected some complaints as they would have had to adapt
> their programs to no longer manually add the flavor access (otherwise
> that would fail). It is of course possible that new programs written in
> the meantime would rely on the new behaviour.
>
> I think (please correct me if I'm wrong) the public CD clouds don't
> expose that part of API to their users so the fallout could be quite
> limited. Some opinions from those who do CD for private clouds would be
> very useful. I'll send an email to openstack-operators asking what
> people there believe the impact would be but at the moment I'm thinking
> that revert is the way we should go.
>
> > Could we consider a middle road? What if we made the extension
> > silently tolerate an add-myself operation to a flavor, (potentially
> > only) right after create? Yes, that's another change, but it means
> > that old clients (like horizon) will continue to work, and new
> > clients (which expect to automatically get access) will continue to
> > work. We can document in the release notes that we made the change to
> > match our docs, and that anyone that *depends* on the (admittedly
> > weird) behavior of the old broken extension, where a user doesn't
> > retain access to flavors they create, may need to tweak their client
> > to remove themselves after create.
>

> My concern is that we'd be digging ourselves an even deeper hole with
> that approach. That for some reason we don't really understand at the
> moment, people have programs which rely on adding flavor access to a
> tenant which is already on the access list being rejected rather than
> silently accepted. And I'm not sure its the behavior from flavor access
> that we actually want.
>
>
I agree this sounds like we are just digging the hole deeper.


> But we certainly don't want to end up in the situation of trying to
> work out how to rollback two backwards incompatible API changes.
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] instances stuck with task_state of REBOOTING

2014-03-21 Thread Chris Friesen
On 03/21/2014 08:41 AM, Solly Ross wrote:
> Well, if messages are getting dropped on the floor due to communication 
> issues, that's not a good thing.
> If you have time, could you determine why the messages are getting dropped on 
> the floor?  We shouldn't be
> doing things that require both the controller and compute nodes until we have 
> a connection.

Currently when doing a reboot we set the REBOOTING task_state in the database 
in compute-api and then send an RPC cast. That seems awfully risky given that 
if that message gets lost or the call fails for any reason we could end up 
stuck in the REBOOTING state forever.  I think it might make sense to have the 
power state audit clear the REBOOTING state if appropriate, but others with 
more experience should make that call.
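
To sketch the kind of audit I mean (illustrative only, not nova code; the
timeout value and attribute names are assumptions):

    import datetime

    REBOOT_TIMEOUT = datetime.timedelta(minutes=10)   # assumed value

    def clear_stale_reboots(instances, now=None):
        # 'instances': any iterable of records with 'task_state' and
        # 'updated_at' attributes, standing in for what the periodic
        # power state audit already iterates over
        now = now or datetime.datetime.utcnow()
        for inst in instances:
            stuck = (inst.task_state == 'rebooting' and
                     now - inst.updated_at > REBOOT_TIMEOUT)
            if stuck:
                inst.task_state = None   # let a later reboot request through

Run from the existing periodic audit, something along those lines would at
least put an upper bound on how long an instance can sit in REBOOTING.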


The timeline that I have looks like this.  We had some buggy code that sent all 
the instances for a reboot when the controller came up.  The first two are in 
the controller logs below, and these are the ones that failed.


controller: (running everything but nova-compute)
nova-api log:

/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:23.712 8187 INFO 
nova.compute.api [req-a84e25bd-85b4-478c-a845-7e8034df3ab2 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4] API::reboot reboot_type=SOFT
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:23.898 8187 INFO 
nova.osapi_compute.wsgi.server [req-a84e25bd-85b4-478c-a845-7e8034df3ab2 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 "POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4/action
 HTTP/1.1" status: 202 len: 185 time: 0.2299521
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:25.152 8128 INFO 
nova.compute.api [req-429feb82-a50d-4bf0-a9a4-bca036e55356 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
17169e6d-6693-4e95-9900-ba250dad5a39] API::reboot reboot_type=SOFT
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:25.273 8128 INFO 
nova.osapi_compute.wsgi.server [req-429feb82-a50d-4bf0-a9a4-bca036e55356 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 "POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/17169e6d-6693-4e95-9900-ba250dad5a39/action
 HTTP/1.1" status: 202 len: 185 time: 0.1583798

After this there are other reboot requests for the other instances, and those 
ones passed.


Interestingly, we later see this
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:45.476 8134 INFO 
nova.compute.api [req-2e0b67a0-0cd9-471f-b115-e4f07436f1c4 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4] API::reboot reboot_type=SOFT
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:45.477 8134 INFO 
nova.osapi_compute.wsgi.server [req-2e0b67a0-0cd9-471f-b115-e4f07436f1c4 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 "POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/c967e4ef-8cf4-4fac-8aab-c5ea5c3c3bb4/action
 HTTP/1.1" status: 409 len: 303 time: 0.1177511
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:48.831 8143 INFO 
nova.compute.api [req-afeb680b-91fd-4446-b4d8-fd264541369d 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] [instance: 
17169e6d-6693-4e95-9900-ba250dad5a39] API::reboot reboot_type=SOFT
/var/log/nova/nova-api.log.2.gz:2014-03-20 11:33:48.832 8143 INFO 
nova.osapi_compute.wsgi.server [req-afeb680b-91fd-4446-b4d8-fd264541369d 
8162b2e247704e218ed13094889a5244 48c9875f2edb4a36bbe598effbe835cf] 
192.168.204.195 "POST 
/v2/48c9875f2edb4a36bbe598effbe835cf/servers/17169e6d-6693-4e95-9900-ba250dad5a39/action
 HTTP/1.1" status: 409 len: 303 time: 0.0366399


Presumably the 409 responses are because nova thinks that these instances are 
currently rebooting.



compute:
2014-03-20 11:33:14.213 12229 INFO nova.openstack.common.rpc.common [-] 
Reconnecting to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:14.225 12229 INFO nova.openstack.common.rpc.common [-] 
Reconnecting to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:14.244 12229 INFO nova.openstack.common.rpc.common [-] 
Connected to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:14.246 12229 INFO nova.openstack.common.rpc.common [-] 
Connected to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:26.234 12229 INFO nova.openstack.common.rpc.common [-] 
Reconnecting to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:26.277 12229 INFO nova.openstack.common.rpc.common [-] 
Connected to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:29.240 12229 INFO nova.openstack.common.rpc.common [-] 
Reconnecting to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:29.276 12229 INFO nova.openstack.common.rpc.common [-] 
Connected to AMQP server on 192.168.204.2:5672
2014-03-20 11:33:35.871 12229 INFO nova.compute.manager 
[req-a10b008b-c9d0-4f31-8acb-e42fb43b64fe 8162b2e247704e218ed13094889a5244 
4

Re: [openstack-dev] We need a new version of hacking for Icehouse, or provide compatibility with oslo.sphinx in oslosphinx

2014-03-21 Thread Joe Gordon
On Fri, Mar 21, 2014 at 8:36 AM, Doug Hellmann
wrote:

> There is quite a list of un-released changes to hacking:
>
> * Make H202 check honor pep8 #noqa comment
> * Updated from global requirements
> * Updated from global requirements
> * Switch over to oslosphinx
> * HACKING.rst: Fix odd indentation in an example code
> * Remove tox locale overrides
> * Updated from global requirements
> * Clarify H403 message
> * More portable way to detect modules for H302
> * Fix python 3 incompatibility in _get_import_type
> * Trigger warnings for raw and unicode docstrings
> * Enhance H233 rule
> * Add check for removed modules in Python 3
> * Add Python3 deprecated assert* to HACKING.rst
> * Turn Python3 section into a list
> * Re-Add section on assertRaises(Exception
> * Cleanup HACKING.rst
> * Move hacking guide to root directory
> * Fix typo in package summary
> * Add H904: don't wrap lines using a backslash
> * checking for metaclass to be Python 3.x compatible
> * Remove unnecessary headers
> * Add -U to pip install command in tox.ini
> * Fix typos of comment in module core
> * Updated from global requirements
> * Add a check for file with only comments
> * Enforce grouping like imports together
> * Add noqa support for H201 (bare except)
> * Enforce import grouping
> * Clean up how test env variables are parsed
> * Fix the escape character
> * Remove vim modeline sample
> * Add a check for newline after docstring summary
>
> It looks like it might be time for a new release anyway, especially if it
> resolves the packaging issue you describe.
>


I think two new releases are needed. I have been holding off cutting the
next hacking release until we are closer to Juno. Since the next release
will include new rules I didn't want to distract anyone from focusing on
stabilizing Icehouse.

So it sounds like we need:

* Hacking 0.8.1 to fix the oslo.sphinx -> oslosphinx issue for Icehouse.
Since we cap hacking versions at 0.9 [1] this will get used in Icehouse.
* Hacking 0.9 to release all the new hacking goodness. This will be
targeted for use in Juno.

[1] https://review.openstack.org/#/c/81356/
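
For reference, the cap itself is just a requirement specifier in the projects'
test-requirements.txt; assuming the bounds proposed in that review, it would
look something like:

    hacking>=0.8.0,<0.9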


If this sounds good, I will cut 0.8.1 this afternoon.


> As far as the symlink, I think that's a potentially bad idea. It's only
> going to encourage the continued use of oslo.sphinx. Since the package is
> only needed to build the documentation, and not to actually use the tool, I
> don't think we need the symlink in place, do we?
>
> Doug
>
>
> On Fri, Mar 21, 2014 at 6:17 AM, Thomas Goirand  wrote:
>
>> Hi,
>>
>> The current version of python-hacking wants python-oslo.sphinx, but
>> we're moving to python-oslosphinx. In Debian, I made python-oslo.sphinx
>> as a transition empty package that only depends on python-oslosphinx. As
>> a consequence, python-hacking needs to be updated to use
>> python-oslosphinx; otherwise it won't have its build-dependencies available.
>>
>>

Thank you for bringing this to our attention. I wonder how we can detect
this in our CI system in the future to prevent it.


>  I was also thinking about providing a symlink from oslo/sphinx to
>> oslosphinx. Maybe it'd be nice to have this directly in oslosphinx?
>>
>> Thoughts anyone?
>>
>> Cheers,
>>
>> Thomas
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-21 Thread Davanum Srinivas
FYI, Latest list of projects:

Implement a re-usable shared library for VMware(oslo.vmware) Masaru Nomura
A pre-caching system for OpenStack Anastasis Andronidis
Proposal for Implementing an application-level FWaaS driver (Zorp) Dániel Csubák
Openstack-OSLO :Add a New Backend to Oslo.Cache sai krishna
Implement a Fuzz testing framework that can be run on Tempest
Manishanker Talusani
How to detect network anomalies from telemetry data within Openstack mst89
Cross-services Scheduler project Artem Shepelev
OpenStack/Marconi: Py3k support Nataliia
Develop a benchmarking suite and new storage backends to OpenStack
Marconi Prashanth Raghu
Adding Redis as a Storage Backend to OpenStack Marconi Chenchong Qin
Developing Benchmarks for Virtual Machines of OpenStack with Rally
Tzanetos Balitsaris
Add a new storage backend to the OpenStack Message Queuing Service
Victoria Martínez de la Cruz

Thanks,
dims

On Thu, Mar 20, 2014 at 9:05 AM, Davanum Srinivas  wrote:
> Team,
>
> Here's what i see in the system so far.
>
> Mentors:
> ybudupi
> blint
> boris_42
> coroner
> cppcabrera
> sriramhere
> arnaudleg
> greghaynes
> hughsaunders
> julim
> ddutta (Organization Administrator)
> dims (Organization Administrator)
>
> Projects:
> Cross-services Scheduler project. Artem Shepelev Artem Shepelev
> Proposal for Implementing an application-level FWaaS driver (Zorp) Dániel 
> Csubák
> Openstack-OSLO :Add a New Backend to Oslo.Cache sai krishna
> How to detect network anomalies from telemetry data within Openstack mst89
> OpenStack/Marconi: Py3k support Nataliia
> Develop a benchmarking suite and new storage backends to OpenStack
> Marconi Prashanth Raghu
> Developing Benchmarks for Virtual Machines of OpenStack with Rally
> Tzanetos Balitsaris
>
> Mentors, if you don't see your id, please send me a connection request
> in the google gsoc site.
> Students, if you don't see your proposal above, please hop onto
> #openstack-gsoc and #gsoc and get help making sure we can see it.
>
> thanks,
> dims
>
> On Thu, Mar 20, 2014 at 8:41 AM, Artem Shepelev
>  wrote:
>> On 03/18/2014 07:19 PM, Davanum Srinivas wrote:
>>
>> Dear Students,
>>
>> Student application deadline is on Friday, March 21 [1]
>>
>> Once you finish the application process on the Google GSoC site.
>> Please reply back to this thread to confirm that all the materials are
>> ready to review.
>>
>> thanks,
>> dims
>>
>> [1] http://www.google-melange.com/gsoc/events/google/gsoc2014
>>
>> Hello!
>>
>> I have just sumbitted my proposal.
> If there are any questions or offers, I'm ready to listen to them and do what I
> can.
>>
>> P.S.: For other students and mentors: there is no possibility to edit
>> proposals after the deadline (March 21, 19.00 UTC)
>>
>> --
>> Best regards,
>> Artem Shepelev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Davanum Srinivas :: http://davanum.wordpress.com



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-21 Thread Ben Nemec
On 2014-03-21 10:57, Derek Higgins wrote:
> On 14/03/14 20:16, Ben Nemec wrote:
>> On 2014-03-13 11:12, James Slagle wrote:
>>> On Thu, Mar 13, 2014 at 2:51 AM, Robert Collins
>>>  wrote:
 So we already have pretty high requirements - its basically a 16G
 workstation as minimum.

 Specifically to test the full story:
  - a seed VM
  - an undercloud VM (bm deploy infra)
  - 1 overcloud control VM
  - 2 overcloud hypervisor VMs
 
5 VMs with 2+G RAM each.

 To test the overcloud alone against the seed we save 1 VM, to skip the
 overcloud we save 3.

 However, as HA matures we're about to add 4 more VMs: we need a HA
 control plane for both the under and overclouds:
  - a seed VM
  - 3 undercloud VMs (HA bm deploy infra)
  - 3 overcloud control VMs (HA)
  - 2 overcloud hypervisor VMs
 
9 VMs with 2+G RAM each == 18GB

 What should we do about this?

 A few thoughts to kick start discussion:
  - use Ironic to test across multiple machines (involves tunnelling
 brbm across machines, fairly easy)
  - shrink the VM sizes (causes thrashing)
  - tell folk to toughen up and get bigger machines (ahahahahaha, no)
  - make the default configuration inline the hypervisors on the
 overcloud with the control plane:
- a seed VM
- 3 undercloud VMs (HA bm deploy infra)
- 3 overcloud all-in-one VMs (HA)
   
  7 VMs with 2+G RAM each == 14GB


 I think its important that we exercise features like HA and live
 migration regularly by developers, so I'm quite keen to have a fairly
 solid systematic answer that will let us catch things like bad
 firewall rules on the control node preventing network tunnelling
 etc... e.g. we benefit the more things are split out like scale
 deployments are. OTOH testing the micro-cloud that folk may start with
 is also a really good idea
>>>
>>>
>>> The idea I was thinking was to make a testenv host available to
>>> tripleo atc's. Or, perhaps make it a bit more locked down and only
>>> available to a new group of tripleo folk, existing somewhere between
>>> the privileges of tripleo atc's and tripleo-cd-admins.  We could
>>> document how you use the cloud (Red Hat's or HP's) rack to start up a
>>> instance to run devtest on one of the compute hosts, request and lock
>>> yourself a testenv environment on one of the testenv hosts, etc.
>>> Basically, how our CI works. Although I think we'd want different
>>> testenv hosts for development vs what runs the CI, and would need to
>>> make sure everything was locked down appropriately security-wise.
>>>
>>> Some other ideas:
>>>
>>> - Allow an option to get rid of the seed VM, or make it so that you
>>> can shut it down after the Undercloud is up. This only really gets rid
>>> of 1 VM though, so it doesn't buy you much nor solve any long term
>>> problem.
>>>
>>> - Make it easier to see how you'd use virsh against any libvirt host
>>> you might have lying around.  We already have the setting exposed, but
>>> make it a bit more public and call it out more in the docs. I've
>>> actually never tried it myself, but have been meaning to.
>>>
>>> - I'm really reaching now, and this may be entirely unrealistic :),
>>> butsomehow use the fake baremetal driver and expose a mechanism to
>>> let the developer specify the already setup undercloud/overcloud
>>> environment ahead of time.
>>> For example:
>>> * Build your undercloud images with the vm element since you won't be
>>> PXE booting it
>>> * Upload your images to a public cloud, and boot instances for them.
>>> * Use this new mechanism when you run devtest (presumably running from
>>> another instance in the same cloud)  to say "I'm using the fake
>>> baremetal driver, and here are the  IP's of the undercloud instances".
>>> * Repeat steps for the overcloud (e.g., configure undercloud to use
>>> fake baremetal driver, etc).
>>> * Maybe it's not the fake baremetal driver, and instead a new driver
>>> that is a noop for the pxe stuff, and the power_on implementation
>>> powers on the cloud instances.
>>> * Obviously if your aim is to test the pxe and disk deploy process
>>> itself, this wouldn't work for you.
>>> * Presumably said public cloud is OpenStack, so we've also achieved
>>> another layer of "On OpenStack".
>>
>> I actually spent quite a while looking into something like this last
>> option when I first started on TripleO, because I had only one big
>> server locally and it was running my OpenStack installation.  I was
>> hoping to use it for my TripleO instances, and even went so far as to
>> add support for OpenStack to the virtual power driver in baremetal.  I
>> was never completely successful, but I did work through a number of
>> problems:
>>
>> 1. Neutron didn't like allowing the DHCP/PXE traffic to let my seed
>> serve to the undercloud.  I was able to get around this b

Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-21 Thread Malini Kamalambal


On 3/21/14 12:01 PM, "David Kranz"  wrote:

>On 03/20/2014 04:19 PM, Rochelle.RochelleGrober wrote:
>>
>>> -Original Message-
>>> From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]
>>> Sent: Thursday, March 20, 2014 12:13 PM
>>>
>>> 'project specific functional testing' in the Marconi context is
>>> treating
>>> Marconi as a complete system, making Marconi API calls & verifying the
>>> response - just like an end user would, but without keystone. If one of
>>> these tests fail, it is because there is a bug in the Marconi code ,
>>> and
>>> not because its interaction with Keystone caused it to fail.
>>>
>>> "That being said there are certain cases where having a project
>>> specific
>>> functional test makes sense. For example swift has a functional test
>>> job
>>> that
>>> starts swift in devstack. But, those things are normally handled on a
>>> per
>>> case
>>> basis. In general if the project is meant to be part of the larger
>>> OpenStack
>>> ecosystem then Tempest is the place to put functional testing. That way
>>> you know
>>> it works with all of the other components. The thing is in openstack
>>> what
>>> seems
>>> like a project isolated functional test almost always involves another
>>> project
>>> in real use cases. (for example keystone auth with api requests)
>>>
>>> "
>>>
>>> One of the concerns we heard in the review was 'having the functional
>>> tests elsewhere (I.e within the project itself) does not count and they
>>> have to be in Tempest'.
>>> This has made us as a team wonder if we should migrate all our
>>> functional
>>> tests to Tempest.
>>> But from Matt's response, I think it is reasonable to continue in our
>>> current path & have the functional tests in Marconi coexist  along with
>>> the tests in Tempest.
>>>
>> I think that what is being asked, really is that the functional tests
>>could be a single set of tests that would become a part of the tempest
>>repository and that these tests would have an ENV variable as part of
>>the configuration that would allow either "no Keystone" or "Keystone" or
>>some such, if that is the only configuration issue that separates
>>running the tests isolated vs. integrated.  The functional tests need to
>>be as much as possible a single set of tests to reduce duplication and
>>remove the likelihood of two sets getting out of sync with each
>>other/development.  If they only run in the integrated environment,
>>that's ok, but if you want to run them isolated to make debugging
>>easier, then it should be a configuration option and a separate test job.
>>
>> So, if my assumptions are correct, QA only requires functional tests
>>for integrated runs, but if the project QAs/Devs want to run isolated
>>for dev and devtest purposes, more power to them.  Just keep it a single
>>set of functional tests and put them in the Tempest repository so that
>>if a failure happens, anyone can find the test and do the debug work
>>without digging into a separate project repository.
>>
>> Hopefully, the tests as designed could easily take a new configuration
>>directive and a short bit of work with OS QA will get the integrated FTs
>>working as well as the isolated ones.
>>
>> --Rocky
>This issue has been much debated. There are some active members of our
>community who believe that all the functional tests should live outside
>of tempest in the projects, albeit with the same idea that such tests
>could be run either as part of today's "real" tempest runs or mocked in
>various ways to allow component isolation or better performance. Maru
>Newby posted a patch with an example of one way to do this but I think
>it expired and I don't have a pointer.
>
>IMO there are valid arguments on both sides, but I hope every one could
>agree that functional tests should not be arbitrarily split between
>projects and tempest as they are now. The Tempest README states a desire
>for "complete coverage of the OpenStack API" but Tempest is not close to
>that. We have been discussing and then ignoring this issue for some time
>but I think the recent action to say that Tempest will be used to
>determine if something can use the OpenStack trademark will force more
>completeness on tempest (more tests, that is). I think we need to
>resolve this issue but it won't be easy and modifying existing api tests
>to be more flexible will be a lot of work. But at least new projects
>could get on the right path sooner.
>
>  -David
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

We are talking about different levels of testing:

1. Unit tests - which everybody agrees should be in the individual project
itself
2. System Tests - 'System' referring to (and limited to) all the components
that make up the project. These are also the functional tests for the
project.
3. Integration Tests - This is to verify that the OS components interact
well and

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Yuriy Taraday
On Fri, Mar 21, 2014 at 2:01 PM, Thierry Carrez wrote:

> Yuriy Taraday wrote:
> > Benchmark included showed on my machine these numbers (average over 100
> > iterations):
> >
> > Running 'ip a':
> >   ip a :   4.565ms
> >  sudo ip a :  13.744ms
> >sudo rootwrap conf ip a : 102.571ms
> > daemon.run('ip a') :   8.973ms
> > Running 'ip netns exec bench_ns ip a':
> >   sudo ip netns exec bench_ns ip a : 162.098ms
> > sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
> >  daemon.run('ip netns exec bench_ns ip a') : 129.876ms
> >
> > So it looks like running daemon is actually faster than running "sudo".
>
> That's pretty good! However I fear that the extremely simplistic filter
> rule file you fed on the benchmark is affecting numbers. Could you post
> results from a realistic setup (like same command, but with all the
> filter files normally found on a devstack host ?)
>

I don't have a devstack host at hand, but I gathered all filters from Nova,
Cinder and Neutron and got this:
method  :min   avg   max   dev
   ip a :   3.741ms   4.443ms   7.356ms 500.660us
  sudo ip a :  11.165ms  13.739ms  32.326ms   2.643ms
sudo rootwrap conf ip a : 100.814ms 125.701ms 169.048ms  16.265ms
 daemon.run('ip a') :   6.032ms   8.895ms 172.287ms  16.521ms

Then I switched back to one file and got:
method  :min   avg   max   dev
   ip a :   4.176ms   4.976ms  22.910ms   1.821ms
  sudo ip a :  13.240ms  14.730ms  21.793ms   1.382ms
sudo rootwrap conf ip a :  79.834ms 104.586ms 145.070ms  15.063ms
 daemon.run('ip a') :   5.062ms   8.427ms 160.799ms  15.493ms

There is a difference, but it looks like it comes from config file
parsing, not from applying the filters themselves.
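
In case anyone wants to reproduce rough numbers like these, the measurement is
essentially just timing full command invocations; a simplified stand-in for the
real benchmark (the command lines are illustrative) would be:

    import os
    import subprocess
    import time

    def bench(cmd, runs=100):
        # time full command invocations, in milliseconds
        devnull = open(os.devnull, 'w')
        samples = []
        for _ in range(runs):
            start = time.time()
            subprocess.call(cmd, stdout=devnull, stderr=devnull)
            samples.append((time.time() - start) * 1000.0)
        devnull.close()
        return min(samples), sum(samples) / len(samples), max(samples)

    print(bench(['ip', 'a']))
    print(bench(['sudo', 'ip', 'a']))

The daemon case wins because the client talks to an already-authorized,
long-lived process, so the sudo and rootwrap start-up costs are paid only once.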

-- 

Kind regards, Yuriy.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-21 Thread Dan Prince


- Original Message -
> From: "Robert Collins" 
> To: "OpenStack Development Mailing List" 
> Sent: Thursday, March 13, 2014 5:51:30 AM
> Subject: [openstack-dev] [TripleO] test environment requirements
> 
> So we already have pretty high requirements - its basically a 16G
> workstation as minimum.
> 
> Specifically to test the full story:
>  - a seed VM
>  - an undercloud VM (bm deploy infra)
>  - 1 overcloud control VM
>  - 2 overcloud hypervisor VMs
> 
>5 VMs with 2+G RAM each.
> 
> To test the overcloud alone against the seed we save 1 VM, to skip the
> overcloud we save 3.
> 
> However, as HA matures we're about to add 4 more VMs: we need a HA
> control plane for both the under and overclouds:
>  - a seed VM
>  - 3 undercloud VMs (HA bm deploy infra)
>  - 3 overcloud control VMs (HA)
>  - 2 overcloud hypervisor VMs
> 
>9 VMs with 2+G RAM each == 18GB
> 
> What should we do about this?
> 
> A few thoughts to kick start discussion:
>  - use Ironic to test across multiple machines (involves tunnelling
> brbm across machines, fairly easy)
>  - shrink the VM sizes (causes thrashing)
>  - tell folk to toughen up and get bigger machines (ahahahahaha, no)
>  - make the default configuration inline the hypervisors on the
> overcloud with the control plane:
>- a seed VM
>- 3 undercloud VMs (HA bm deploy infra)
>- 3 overcloud all-in-one VMs (HA)
>   
>  7 VMs with 2+G RAM each == 14GB
> 
> 
> I think its important that we exercise features like HA and live
> migration regularly by developers, so I'm quite keen to have a fairly
> solid systematic answer that will let us catch things like bad
> firewall rules on the control node preventing network tunnelling
> etc...

I'm all for supporting HA development and testing within devtest. I'm *against* 
forcing it on all users as a default.

I can imagine wanting to cut corners and have flexible configurations on both 
ends (undercloud and overcloud). I may, for example, deploy a single all-in-one 
undercloud when I'm testing overcloud HA, or vice versa.

I think I'm one of the few (if not the only) developers who use almost 
exclusively baremetal (besides the seed VM) when testing/developing TripleO. Forcing 
users who want to do this to have 6-7 real machines is a bit much, I think. 
Arguably wasteful, even. By requiring more machines to run through devtest you 
actually make it harder for people to test it on real hardware, which is usually 
harder to come by. Given that deployment on real bare metal is sort of the point of 
TripleO, I'd very much like to see more developers using it rather than fewer.

So by all means let's support HA... but let's do it in a way that is configurable 
(i.e. not forcing people to waste resources).

Dan

> e.g. we benefit the more things are split out like scale
> deployments are. OTOH testing the micro-cloud that folk may start with
> is also a really good idea
> 
> -Rob
> 
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-21 Thread Derek Higgins
On 14/03/14 20:16, Ben Nemec wrote:
> On 2014-03-13 11:12, James Slagle wrote:
>> On Thu, Mar 13, 2014 at 2:51 AM, Robert Collins
>>  wrote:
>>> So we already have pretty high requirements - its basically a 16G
>>> workstation as minimum.
>>>
>>> Specifically to test the full story:
>>>  - a seed VM
>>>  - an undercloud VM (bm deploy infra)
>>>  - 1 overcloud control VM
>>>  - 2 overcloud hypervisor VMs
>>> 
>>>5 VMs with 2+G RAM each.
>>>
>>> To test the overcloud alone against the seed we save 1 VM, to skip the
>>> overcloud we save 3.
>>>
>>> However, as HA matures we're about to add 4 more VMs: we need a HA
>>> control plane for both the under and overclouds:
>>>  - a seed VM
>>>  - 3 undercloud VMs (HA bm deploy infra)
>>>  - 3 overcloud control VMs (HA)
>>>  - 2 overcloud hypervisor VMs
>>> 
>>>9 VMs with 2+G RAM each == 18GB
>>>
>>> What should we do about this?
>>>
>>> A few thoughts to kick start discussion:
>>>  - use Ironic to test across multiple machines (involves tunnelling
>>> brbm across machines, fairly easy)
>>>  - shrink the VM sizes (causes thrashing)
>>>  - tell folk to toughen up and get bigger machines (ahahahahaha, no)
>>>  - make the default configuration inline the hypervisors on the
>>> overcloud with the control plane:
>>>- a seed VM
>>>- 3 undercloud VMs (HA bm deploy infra)
>>>- 3 overcloud all-in-one VMs (HA)
>>>   
>>>  7 VMs with 2+G RAM each == 14GB
>>>
>>>
>>> I think its important that we exercise features like HA and live
>>> migration regularly by developers, so I'm quite keen to have a fairly
>>> solid systematic answer that will let us catch things like bad
>>> firewall rules on the control node preventing network tunnelling
>>> etc... e.g. we benefit the more things are split out like scale
>>> deployments are. OTOH testing the micro-cloud that folk may start with
>>> is also a really good idea
>>
>>
>> The idea I was thinking was to make a testenv host available to
>> tripleo atc's. Or, perhaps make it a bit more locked down and only
>> available to a new group of tripleo folk, existing somewhere between
>> the privileges of tripleo atc's and tripleo-cd-admins.  We could
>> document how you use the cloud (Red Hat's or HP's) rack to start up a
>> instance to run devtest on one of the compute hosts, request and lock
>> yourself a testenv environment on one of the testenv hosts, etc.
>> Basically, how our CI works. Although I think we'd want different
>> testenv hosts for development vs what runs the CI, and would need to
>> make sure everything was locked down appropriately security-wise.
>>
>> Some other ideas:
>>
>> - Allow an option to get rid of the seed VM, or make it so that you
>> can shut it down after the Undercloud is up. This only really gets rid
>> of 1 VM though, so it doesn't buy you much nor solve any long term
>> problem.
>>
>> - Make it easier to see how you'd use virsh against any libvirt host
>> you might have lying around.  We already have the setting exposed, but
>> make it a bit more public and call it out more in the docs. I've
>> actually never tried it myself, but have been meaning to.
>>
>> - I'm really reaching now, and this may be entirely unrealistic :),
>> butsomehow use the fake baremetal driver and expose a mechanism to
>> let the developer specify the already setup undercloud/overcloud
>> environment ahead of time.
>> For example:
>> * Build your undercloud images with the vm element since you won't be
>> PXE booting it
>> * Upload your images to a public cloud, and boot instances for them.
>> * Use this new mechanism when you run devtest (presumably running from
>> another instance in the same cloud)  to say "I'm using the fake
>> baremetal driver, and here are the  IP's of the undercloud instances".
>> * Repeat steps for the overcloud (e.g., configure undercloud to use
>> fake baremetal driver, etc).
>> * Maybe it's not the fake baremetal driver, and instead a new driver
>> that is a noop for the pxe stuff, and the power_on implementation
>> powers on the cloud instances.
>> * Obviously if your aim is to test the pxe and disk deploy process
>> itself, this wouldn't work for you.
>> * Presumably said public cloud is OpenStack, so we've also achieved
>> another layer of "On OpenStack".
> 
> I actually spent quite a while looking into something like this last
> option when I first started on TripleO, because I had only one big
> server locally and it was running my OpenStack installation.  I was
> hoping to use it for my TripleO instances, and even went so far as to
> add support for OpenStack to the virtual power driver in baremetal.  I
> was never completely successful, but I did work through a number of
> problems:
> 
> 1. Neutron didn't like allowing the DHCP/PXE traffic to let my seed
> serve to the undercloud.  I was able to get around this by using flat
> networking with a local bridge on the OpenStack system, but I'm not sure
> if that's going to be possible on most public cloud provi

Re: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-21 Thread Balaji Iyer
+1

On 3/21/14, 11:35 AM, "Amit Gandhi"  wrote:

>+1
>
>On 3/21/14, 11:17 AM, "Flavio Percoco"  wrote:
>
>>Greetings,
>>
>>I'd like to propose adding Malini Kamalambal to Marconi's core. Malini
>>has been an outstanding contributor for a long time. She's taken care
>>of Marconi's tests, benchmarks, gate integration, tempest support and
>>way more other things. She's also actively participated in the mailing
>>list discussions, she's contributed with thoughtful reviews and
>>participated in the project's meeting since she first joined the
>>project.
>>
>>Folks in favor or against please explicitly +1 / -1 the proposal.
>>
>>Thanks Malini, it's an honor to have you in the team.
>>
>>-- 
>>@flaper87
>>Flavio Percoco
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-21 Thread David Kranz

On 03/20/2014 04:19 PM, Rochelle.RochelleGrober wrote:



-Original Message-
From: Malini Kamalambal [mailto:malini.kamalam...@rackspace.com]
Sent: Thursday, March 20, 2014 12:13 PM

'project specific functional testing' in the Marconi context is
treating
Marconi as a complete system, making Marconi API calls & verifying the
response - just like an end user would, but without keystone. If one of
these tests fail, it is because there is a bug in the Marconi code ,
and
not because its interaction with Keystone caused it to fail.

"That being said there are certain cases where having a project
specific
functional test makes sense. For example swift has a functional test
job
that
starts swift in devstack. But, those things are normally handled on a
per
case
basis. In general if the project is meant to be part of the larger
OpenStack
ecosystem then Tempest is the place to put functional testing. That way
you know
it works with all of the other components. The thing is in openstack
what
seems
like a project isolated functional test almost always involves another
project
in real use cases. (for example keystone auth with api requests)

"

One of the concerns we heard in the review was 'having the functional
tests elsewhere (I.e within the project itself) does not count and they
have to be in Tempest'.
This has made us as a team wonder if we should migrate all our
functional
tests to Tempest.
But from Matt's response, I think it is reasonable to continue in our
current path & have the functional tests in Marconi coexist  along with
the tests in Tempest.


I think that what is being asked, really is that the functional tests could be a single set of 
tests that would become a part of the tempest repository and that these tests would have an ENV 
variable as part of the configuration that would allow either "no Keystone" or 
"Keystone" or some such, if that is the only configuration issue that separates running 
the tests isolated vs. integrated.  The functional tests need to be as much as possible a single 
set of tests to reduce duplication and remove the likelihood of two sets getting out of sync with 
each other/development.  If they only run in the integrated environment, that's ok, but if you want 
to run them isolated to make debugging easier, then it should be a configuration option and a 
separate test job.

So, if my assumptions are correct, QA only requires functional tests for 
integrated runs, but if the project QAs/Devs want to run isolated for dev and 
devtest purposes, more power to them.  Just keep it a single set of functional 
tests and put them in the Tempest repository so that if a failure happens, 
anyone can find the test and do the debug work without digging into a separate 
project repository.

Hopefully, the tests as designed could easily take a new configuration 
directive and a short bit of work with OS QA will get the integrated FTs 
working as well as the isolated ones.

--Rocky
This issue has been much debated. There are some active members of our 
community who believe that all the functional tests should live outside 
of tempest in the projects, albeit with the same idea that such tests 
could be run either as part of today's "real" tempest runs or mocked in 
various ways to allow component isolation or better performance. Maru 
Newby posted a patch with an example of one way to do this but I think 
it expired and I don't have a pointer.


IMO there are valid arguments on both sides, but I hope every one could 
agree that functional tests should not be arbitrarily split between 
projects and tempest as they are now. The Tempest README states a desire 
for "complete coverage of the OpenStack API" but Tempest is not close to 
that. We have been discussing and then ignoring this issue for some time 
but I think the recent action to say that Tempest will be used to 
determine if something can use the OpenStack trademark will force more 
completeness on tempest (more tests, that is). I think we need to 
resolve this issue but it won't be easy and modifying existing api tests 
to be more flexible will be a lot of work. But at least new projects 
could get on the right path sooner.


 -David


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-21 Thread Andronidis Anastasios
Hello, just finished with my proposal too.

Let me know if there is something missing.

GL to all!

Anastasios Andronidis

On 21 Μαρ 2014, at 2:27 μ.μ., Victoria Martínez de la Cruz 
 wrote:

> Hi all,
> 
> I just submitted my proposal to Melange and I'm finishing my student page in 
> OpenStack wiki. Let me know if there is something missing.
> 
> Let's the waiting begin, good luck for all!
> 
> Thanks,
> 
> Victoria
> 
> 
> 2014-03-20 18:38 GMT-03:00 Chenchong Qin :
> Hi!
> 
> I've submitted my proposal to the GSoC site. And my Project Detail Page will 
> be ready soon. Looking forward to your comments! 
> 
> Thanks!
> 
> 
> 
> 
> 
> Chenchong
> 
> Dear Students,
> 
> Student application deadline is on Friday, March 21 [1]
> 
> Once you finish the application process on the Google GSoC site.
> Please reply back to this thread to confirm that all the materials are
> ready to review.
> 
> thanks,
> dims
> 
> [1] http://www.google-melange.com/gsoc/events/google/gsoc2014
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-21 Thread Victoria Martínez de la Cruz
+1. I also want to add that she is an excellent mentor; she helped me a lot
when I first joined Marconi. Thanks Malini, you are great!


2014-03-21 12:35 GMT-03:00 Amit Gandhi :

> +1
>
> On 3/21/14, 11:17 AM, "Flavio Percoco"  wrote:
>
> >Greetings,
> >
> >I'd like to propose adding Malini Kamalambal to Marconi's core. Malini
> >has been an outstanding contributor for a long time. She's taken care
> >of Marconi's tests, benchmarks, gate integration, tempest support and
> >way more other things. She's also actively participated in the mailing
> >list discussions, she's contributed with thoughtful reviews and
> >participated in the project's meeting since she first joined the
> >project.
> >
> >Folks in favor or against please explicitly +1 / -1 the proposal.
> >
> >Thanks Malini, it's an honor to have you in the team.
> >
> >--
> >@flaper87
> >Flavio Percoco
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-21 Thread Derek Higgins
On 13/03/14 09:51, Robert Collins wrote:
> So we already have pretty high requirements - its basically a 16G
> workstation as minimum.
> 
> Specifically to test the full story:
>  - a seed VM
>  - an undercloud VM (bm deploy infra)
>  - 1 overcloud control VM
>  - 2 overcloud hypervisor VMs
> 
>5 VMs with 2+G RAM each.
> 
> To test the overcloud alone against the seed we save 1 VM, to skip the
> overcloud we save 3.
> 
> However, as HA matures we're about to add 4 more VMs: we need a HA
> control plane for both the under and overclouds:
>  - a seed VM
>  - 3 undercloud VMs (HA bm deploy infra)
>  - 3 overcloud control VMs (HA)
>  - 2 overcloud hypervisor VMs
> 
>9 VMs with 2+G RAM each == 18GB
> 
> What should we do about this?
> 
> A few thoughts to kick start discussion:
>  - use Ironic to test across multiple machines (involves tunnelling
> brbm across machines, fairly easy)
>  - shrink the VM sizes (causes thrashing)
>  - tell folk to toughen up and get bigger machines (ahahahahaha, no)
>  - make the default configuration inline the hypervisors on the
> overcloud with the control plane:
>- a seed VM
>- 3 undercloud VMs (HA bm deploy infra)
>- 3 overcloud all-in-one VMs (HA)
>   
>  7 VMs with 2+G RAM each == 14GB
> 
> 
> I think its important that we exercise features like HA and live
> migration regularly by developers, so I'm quite keen to have a fairly
> solid systematic answer that will let us catch things like bad
> firewall rules on the control node preventing network tunnelling
> etc... e.g. we benefit the more things are split out like scale
> deployments are. OTOH testing the micro-cloud that folk may start with
> is also a really good idea

I'd vote for an optional (non-default) inline cloud setup like what you
mention above, and maybe a non-HA setup as well. This would give a lower
entry bar to people who only want to worry about a specific component.
We would then need to cover all supported setups in CI (adding to
capacity needs), and of course we then wouldn't have everybody
exercising HA, but it may be necessary to encourage uptake.

> 
> -Rob
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-21 Thread Doug Hellmann
On Fri, Mar 21, 2014 at 7:04 AM, Sean Dague  wrote:

> On 03/20/2014 06:18 PM, Joe Gordon wrote:
> >
> >
> >
> > On Thu, Mar 20, 2014 at 3:03 PM, Alexei Kornienko
> > mailto:alexei.kornie...@gmail.com>> wrote:
> >
> > Hello,
> >
> > We've done some profiling and the results are quite interesting:
> > during 1.5 hours ceilometer inserted 59755 events (59755 calls to
> > record_metering_data)
> > and these calls resulted in a total of 2591573 SQL queries.
> >
> > And the most interesting part is that 291569 queries were ROLLBACK
> > queries.
> > We do around 5 rollbacks to record a single event!
> >
> > I guess it means that MySQL backend is currently totally unusable in
> > production environment.
> >
> >
> > It should be noted that SQLAlchemy is horrible for performance; in
> > nova we usually see SQLAlchemy overheads of well over 10x (the time of a
> > nova.db.api call vs. the time MySQL measures when the slow log is recording
> > everything).
>
> That's not really a fair assessment. Python object inflation takes time.
> I do get that there is SQLA overhead here, but even if you trimmed it
> out you would not get down to the mysql query time.
>
> That being said, having Ceilometer's write path be highly tuned and not
> use SQLA (and written for every back end natively) is probably appropriate.
>

I have been working to get Mike Bayer (author of SQLAlchemy) to the summit
in Atlanta. He is interested in working with us to improve SQLAlchemy, so
if we have specific performance or feature issues like this, it would be
good to make a list. If we have enough, maybe we can set aside a session
in the Oslo track, otherwise we can at least have some hallway
conversations.
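
As one concrete item for that list: per-call statement and rollback counts like
the ones quoted above can be gathered with SQLAlchemy's engine events. A minimal
sketch (the engine URL is a stand-in for the real connection):

    import sqlalchemy
    from sqlalchemy import event

    engine = sqlalchemy.create_engine('sqlite://')
    counts = {'statements': 0, 'rollbacks': 0}

    @event.listens_for(engine, 'before_cursor_execute')
    def count_statement(conn, cursor, statement, parameters,
                        context, executemany):
        counts['statements'] += 1

    @event.listens_for(engine, 'rollback')
    def count_rollback(conn):
        counts['rollbacks'] += 1

Counts collected that way per record_metering_data() call would make the
conversation with Mike much more focused.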

Doug



>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] test environment requirements

2014-03-21 Thread Derek Higgins
On 13/03/14 16:12, James Slagle wrote:
> On Thu, Mar 13, 2014 at 2:51 AM, Robert Collins
>  wrote:
>> So we already have pretty high requirements - its basically a 16G
>> workstation as minimum.
>>
>> Specifically to test the full story:
>>  - a seed VM
>>  - an undercloud VM (bm deploy infra)
>>  - 1 overcloud control VM
>>  - 2 overcloud hypervisor VMs
>> 
>>5 VMs with 2+G RAM each.
>>
>> To test the overcloud alone against the seed we save 1 VM, to skip the
>> overcloud we save 3.
>>
>> However, as HA matures we're about to add 4 more VMs: we need a HA
>> control plane for both the under and overclouds:
>>  - a seed VM
>>  - 3 undercloud VMs (HA bm deploy infra)
>>  - 3 overcloud control VMs (HA)
>>  - 2 overcloud hypervisor VMs
>> 
>>9 VMs with 2+G RAM each == 18GB
>>
>> What should we do about this?
>>
>> A few thoughts to kick start discussion:
>>  - use Ironic to test across multiple machines (involves tunnelling
>> brbm across machines, fairly easy)
>>  - shrink the VM sizes (causes thrashing)
>>  - tell folk to toughen up and get bigger machines (ahahahahaha, no)
>>  - make the default configuration inline the hypervisors on the
>> overcloud with the control plane:
>>- a seed VM
>>- 3 undercloud VMs (HA bm deploy infra)
>>- 3 overcloud all-in-one VMs (HA)
>>   
>>  7 VMs with 2+G RAM each == 14GB
>>
>>
>> I think its important that we exercise features like HA and live
>> migration regularly by developers, so I'm quite keen to have a fairly
>> solid systematic answer that will let us catch things like bad
>> firewall rules on the control node preventing network tunnelling
>> etc... e.g. we benefit the more things are split out like scale
>> deployments are. OTOH testing the micro-cloud that folk may start with
>> is also a really good idea
> 
> 
> The idea I was thinking was to make a testenv host available to
> tripleo atc's. Or, perhaps make it a bit more locked down and only
> available to a new group of tripleo folk, existing somewhere between
> the privileges of tripleo atc's and tripleo-cd-admins.  We could
> document how you use the cloud (Red Hat's or HP's) rack to start up a
> instance to run devtest on one of the compute hosts, request and lock
> yourself a testenv environment on one of the testenv hosts, etc.
> Basically, how our CI works. Although I think we'd want different
> testenv hosts for development vs what runs the CI, and would need to
> make sure everything was locked down appropriately security-wise.

I like this idea and I think it could work; my only concern is the extra
capacity we would need to pull it off. At the moment we are probably
falling short on capacity to do what we want for CI, so adding to this
would make the situation worse (how much worse I don't know). So unless
we get to the point where we have spare hardware doing nothing, I think
it's a non-runner.

> 
> Some other ideas:
> 
> - Allow an option to get rid of the seed VM, or make it so that you
> can shut it down after the Undercloud is up. This only really gets rid
> of 1 VM though, so it doesn't buy you much nor solve any long term
> problem.
> 
> - Make it easier to see how you'd use virsh against any libvirt host
> you might have lying around.  We already have the setting exposed, but
> make it a bit more public and call it out more in the docs. I've
> actually never tried it myself, but have been meaning to.

this could work as an option

> 
> - I'm really reaching now, and this may be entirely unrealistic :),
> butsomehow use the fake baremetal driver and expose a mechanism to
> let the developer specify the already setup undercloud/overcloud
> environment ahead of time.
> For example:
> * Build your undercloud images with the vm element since you won't be
> PXE booting it
> * Upload your images to a public cloud, and boot instances for them.
> * Use this new mechanism when you run devtest (presumably running from
> another instance in the same cloud)  to say "I'm using the fake
> baremetal driver, and here are the  IP's of the undercloud instances".
> * Repeat steps for the overcloud (e.g., configure undercloud to use
> fake baremetal driver, etc).
> * Maybe it's not the fake baremetal driver, and instead a new driver
> that is a noop for the pxe stuff, and the power_on implementation
> powers on the cloud instances.
> * Obviously if your aim is to test the pxe and disk deploy process
> itself, this wouldn't work for you.
> * Presumably said public cloud is OpenStack, so we've also achieved
> another layer of "On OpenStack".
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-21 Thread Allan Metts
+1

Allan Metts
@ametts


-Original Message-
From: Flavio Percoco [mailto:fla...@redhat.com] 
Sent: Friday, March 21, 2014 11:18 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the 
marconi-core team

Greetings,

I'd like to propose adding Malini Kamalambal to Marconi's core. Malini has been 
an outstanding contributor for a long time. She's taken care of Marconi's 
tests, benchmarks, gate integration, tempest support and way more other things. 
She's also actively participated in the mailing list discussions, she's 
contributed with thoughtful reviews and participated in the project's meeting 
since she first joined the project.

Folks in favor or against please explicitly +1 / -1 the proposal.

Thanks Malini, it's an honor to have you in the team.

--
@flaper87
Flavio Percoco
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-21 Thread Kurt Griffiths
+1 million. I’ve been super impressed by Malini’s work and thoughtful
comments on multiple occasions.

On 3/21/14, 10:35 AM, "Amit Gandhi"  wrote:

>+1
>
>On 3/21/14, 11:17 AM, "Flavio Percoco"  wrote:
>
>>Greetings,
>>
>>I'd like to propose adding Malini Kamalambal to Marconi's core. Malini
>>has been an outstanding contributor for a long time. She's taken care
>>of Marconi's tests, benchmarks, gate integration, tempest support and
>>way more other things. She's also actively participated in the mailing
>>list discussions, she's contributed with thoughtful reviews and
>>participated in the project's meeting since she first joined the
>>project.
>>
>>Folks in favor or against please explicitly +1 / -1 the proposal.
>>
>>Thanks Malini, it's an honor to have you in the team.
>>
>>-- 
>>@flaper87
>>Flavio Percoco
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-21 Thread Sriram Madapusi Vasudevan
+1

Best,
Sriram Madapusi Vasudevan



From: Amit Gandhi [amit.gan...@rackspace.com]
Sent: Friday, March 21, 2014 10:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the 
marconi-core team

+1

On 3/21/14, 11:17 AM, "Flavio Percoco"  wrote:

>Greetings,
>
>I'd like to propose adding Malini Kamalambal to Marconi's core. Malini
>has been an outstanding contributor for a long time. She's taken care
>of Marconi's tests, benchmarks, gate integration, tempest support and
>way more other things. She's also actively participated in the mailing
>list discussions, she's contributed with thoughtful reviews and
>participated in the project's meeting since she first joined the
>project.
>
>Folks in favor or against please explicitly +1 / -1 the proposal.
>
>Thanks Malini, it's an honor to have you in the team.
>
>--
>@flaper87
>Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Climate] Meeting minutes

2014-03-21 Thread Dina Belova
Hello stackers!

Thanks everyone who was attending our meeting :)

Meeting minutes are:

Minutes:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-21-15.00.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-21-15.00.txt
Log:
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-03-21-15.00.log.html


Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] We need a new version of hacking for Icehouse, or provide compatibility with oslo.sphinx in oslosphinx

2014-03-21 Thread Doug Hellmann
There is quite a list of un-released changes to hacking:

* Make H202 check honor pep8 #noqa comment
* Updated from global requirements
* Updated from global requirements
* Switch over to oslosphinx
* HACKING.rst: Fix odd indentation in an example code
* Remove tox locale overrides
* Updated from global requirements
* Clarify H403 message
* More portable way to detect modules for H302
* Fix python 3 incompatibility in _get_import_type
* Trigger warnings for raw and unicode docstrings
* Enhance H233 rule
* Add check for removed modules in Python 3
* Add Python3 deprecated assert* to HACKING.rst
* Turn Python3 section into a list
* Re-Add section on assertRaises(Exception
* Cleanup HACKING.rst
* Move hacking guide to root directory
* Fix typo in package summary
* Add H904: don't wrap lines using a backslash
* checking for metaclass to be Python 3.x compatible
* Remove unnecessary headers
* Add -U to pip install command in tox.ini
* Fix typos of comment in module core
* Updated from global requirements
* Add a check for file with only comments
* Enforce grouping like imports together
* Add noqa support for H201 (bare except)
* Enforce import grouping
* Clean up how test env variables are parsed
* Fix the escape character
* Remove vim modeline sample
* Add a check for newline after docstring summary

It looks like it might be time for a new release anyway, especially if it
resolves the packaging issue you describe.

As for the symlink, I think that's a potentially bad idea. It's only
going to encourage the continued use of oslo.sphinx. Since the package is
only needed to build the documentation, and not to actually use the tool, I
don't think we need the symlink in place, do we?
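
For what it's worth, the kind of transitional shim being discussed here
(whether done as a symlink or as a tiny Python package) might look roughly
like the sketch below. This is only an illustration of the idea -- not the
actual oslosphinx packaging -- and the module path is an assumption:

# hypothetical contents of a transitional oslo/sphinx/__init__.py shim
import warnings

warnings.warn(
    "oslo.sphinx has been renamed to oslosphinx; please update your doc "
    "requirements and sphinx conf.py to use 'oslosphinx' instead.",
    DeprecationWarning,
)

from oslosphinx import *  # noqa -- re-export whatever the real package provides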

Doug


On Fri, Mar 21, 2014 at 6:17 AM, Thomas Goirand  wrote:

> Hi,
>
> The current version of python-hacking wants python-oslo.sphinx, but
> we're moving to python-oslosphinx. In Debian, I made python-oslo.sphinx
> as a transition empty package that only depends on python-oslosphinx. As
> a consequence, python-hacking needs to be updated to use
> python-oslosphinx, otherwise it won't have its build-dependencies available.
>
> I was also thinking about providing a symlink from oslo/sphinx to
> oslosphinx. Maybe it'd be nice to have this directly in oslosphinx?
>
> Thoughts anyone?
>
> Cheers,
>
> Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-21 Thread Amit Gandhi
+1

On 3/21/14, 11:17 AM, "Flavio Percoco"  wrote:

>Greetings,
>
>I'd like to propose adding Malini Kamalambal to Marconi's core. Malini
>has been an outstanding contributor for a long time. She's taken care
>of Marconi's tests, benchmarks, gate integration, tempest support and
>way more other things. She's also actively participated in the mailing
>list discussions, she's contributed with thoughtful reviews and
>participated in the project's meeting since she first joined the
>project.
>
>Folks in favor or against please explicitly +1 / -1 the proposal.
>
>Thanks Malini, it's an honor to have you in the team.
>
>-- 
>@flaper87
>Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Marconi] Proposal to add Malini Kamalambal to the marconi-core team

2014-03-21 Thread Flavio Percoco

Greetings,

I'd like to propose adding Malini Kamalambal to Marconi's core. Malini
has been an outstanding contributor for a long time. She's taken care
of Marconi's tests, benchmarks, gate integration, tempest support and
way more other things. She's also actively participated in the mailing
list discussions, she's contributed with thoughtful reviews and
participated in the project's meeting since she first joined the
project.

Folks in favor or against please explicitly +1 / -1 the proposal.

Thanks Malini, it's an honor to have you in the team.

--
@flaper87
Flavio Percoco


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-qa] Graduation Requirements + Scope of Tempest

2014-03-21 Thread Malini Kamalambal
I have an etherpad started to document QA requirements
https://etherpad.openstack.org/p/Tempest-Graduation-Criteria
I hope Sean and the rest of QA team can add their thoughts here.
I am also looking for inputs from the Sahara team, while the path to
graduation is still fresh in their minds.
We can solidify this after the release work is done.

Meanwhile, the Marconi team has taken the message on board about engaging in
active conversations with the upstream community.
We look forward to being part of these conversations & becoming active
contributors in other areas.

On 3/20/14 4:17 PM, "Sean Dague"  wrote:

>I will agree that the TC language is not as strong as it should be (and
>really should be clarified, but I don't think that's going to happen
>until the release is looking solid). Honestly, though, I think Sahara is
>a good example here of the level that we expect. They have actively
>engaged with upstream infra and used it to the full extent possible.
>Then wrote additional tooling to do even more and report 3rd party in on
>their changes.
>
>It's also worth noting they got there by actively participating in the
>upstream community and conversations, and clearly making upstream test
>integration a top priority for the cycle.
>
>On 03/20/2014 03:13 PM, Malini Kamalambal wrote:
>> Thanks Matt for your response !! It has clarified some of the 'cloudy
>> areas' ;)
>> 
>> 
>> "So having only looked at the Marconi ML thread and not the actual TC
>> meeting
>> minutes I might be missing the whole picture. But, from what I saw when
>>I
>> looked
>> at both a marconi commit and a tempest commit is that there is no gating
>> marconi
>> devstack-gate job on marconi commits. It's only non-voting in the check
>> pipeline.
>> Additionally, there isn't a non-voting job on tempest or devstack-gate.
>>For
>> example, look at how savanna has it's tempest jobs setup and this is
>>what
>> marconi
>> needs to have."
>> 
>> 
>> I am not dismissing the fact that Marconi does not have a voting gate
>>job.
>> All we have currently is a non-voting check job in Marconi, and
>> experimental jobs in Tempest & devstack-gate.
>> So we definitely have more work to do there, starting with making the
>> devstack-tempest job voting in Marconi.
>> 
>> But what caught me by surprise is that 'the extent of gate testing -
>> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/queuing/test_queues.py'
>> was brought up as a concern.
>> We have always had an extensive functional test suite in Marconi, which
>> is run at the gate against a marconi-server that the test spins up (this was
>> implemented before we had devstack integration in place. It is in our
>> plans to run the functional test suite in Marconi against devstack
>> Marconi).
>> 
>> But my point is not to do a postmortem on everything that happened, but
>> rather to come up with an objective list of items that I (or anybody who
>> wants to meet the Tempest criteria for graduation - no way meant to imply
>> that Tempest work stops after graduation ;) ) need to complete. I started a
>> wiki page to document what would be good enough graduation criteria
>> from the QA perspective.
>> 
>> https://etherpad.openstack.org/p/Tempest-Graduation-Criteria
>> 
>> 
>> It will help if the QA team can add their thoughts to this.
>> This can maybe become a recommendation to the TC on what the QA
>>graduation
>> requirements are.
>> 
>> "What do you mean by project specific functional testing? What makes
>> debugging
>> a marconi failure in a tempest gate job any more involved than
>>debugging a
>> nova or neutron failure? Part of the point of having an integrated gate
>>is
>> saying that the project works well with all the others in OpenStack. IMO
>> that's
>> not just in project functionality but also in community. When there is
>>an
>> issue
>> with a gate job everyone comes together to work on it. For example if
>>you
>> have
>> a keystone patch that breaks a marconi test in check there is open
>> communication
>> about what happened and how to fix it."
>> 
>> 'project specific functional testing' in the Marconi context is treating
>> Marconi as a complete system, making Marconi API calls & verifying the
>> response - just like an end user would, but without keystone. If one of
>> these tests fails, it is because there is a bug in the Marconi code, and
>> not because its interaction with Keystone caused it to fail.
>> 
>> "That being said there are certain cases where having a project specific
>> functional test makes sense. For example swift has a functional test job
>> that
>> starts swift in devstack. But, those things are normally handled on a
>>per
>> case
>> basis. In general if the project is meant to be part of the larger
>> OpenStack
>> ecosystem then Tempest is the place to put functional testing. That way
>> you know
>> it works with all of the other components. The thing is in openstack
>>what
>> seems
>> like a project isolated functional test almost always involves another
>> p

Re: [openstack-dev] [Marconi][TC] Withdraw graduation request

2014-03-21 Thread Sean Dague
On 03/21/2014 10:49 AM, Monty Taylor wrote:
> On 03/20/2014 07:40 PM, Kurt Griffiths wrote:
>>> I'd also like to thank the team and the overall community. The team
>>> for its hard work during the last cycle and the community for being
>>> there
>>> and providing such important feedback in this process.
>>
>> +1, thanks again everyone for participating in the discussions and
>> driving towards a constructive outcome.
> 
> ++

Agreed, +1. It's great to be part of a community where we can have
rigorous and difficult debates and it basically remains civil and
constructive.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi][TC] Withdraw graduation request

2014-03-21 Thread Monty Taylor

On 03/20/2014 07:40 PM, Kurt Griffiths wrote:

I'd also like to thank the team and the overall community. The team
for its hard work during the last cycle and the community for being there
and providing such important feedback in this process.


+1, thanks again everyone for participating in the discussions and
driving towards a constructive outcome.


++

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi][TC] Withdraw graduation request

2014-03-21 Thread Malini Kamalambal
Flavio has very well summarized the Marconi team's thoughts.
We appreciate all the frank opinions in the discussions that are going on.
Keep those thoughts coming!

We look forward to building a better Marconi and a stronger community.

On 3/20/14 7:30 PM, "Clint Byrum"  wrote:

>Jay said it all. I look forward to Marconi's graduation day and I see
>good things. :)
>
>Excerpts from Jay Pipes's message of 2014-03-20 16:18:37 -0700:
>> This is a very mature stance and well-written email. Thanks, Flavio and
>> all of the Marconi team for having thick skin and responding to the
>> various issues professionally.
>> 
>> Cheers,
>> -jay
>> 
>> On Thu, 2014-03-20 at 23:59 +0100, Flavio Percoco wrote:
>> > Greetings,
>> > 
>> > I'm sending this email on behalf of Marconi's team.
>> > 
>> > As you already know, we submitted our graduation request a couple of
>> > weeks ago and the meeting was held on Tuesday, March 18th. During the
>> > meeting very important questions and issues were raised that made
>> > us think and analyse our current situation and re-think about what
>> > the best for OpenStack and Marconi would be in this very moment.
>> > 
>> > After some considerations, we've reached the conclusion that this is
>> > probably not the right time for this project to graduate and that
>> > it'll be fruitful for the project and the OpenStack community if we
>> > take another development cycle before coming out from incubation. Here
>> > are some things we took under consideration:
>> > 
>> > 1. It's still not clear to the overall community what the goals of
>> > the project are. It is not fair for Marconi as a project nor for
>> > OpenStack as a community to move forward with this integration when
>> > there are still open questions about the project goals.
>> > 
>> > 2. Some critical issues came out of our attempt to have a gate job.
>> > For the team, the project and the community this is a very critical
>> > point. We've managed to have the gate working but we're still not
>> > happy with the results.
>> > 
>> > 3. The drivers currently supported by the project don't cover some
>> > important cases related to deploying it. One of them solves a
>> > licensing issue but introduces a scale issue whereas the other one
>> > solves the scale issue and introduces a licensing issue. Moreover,
>> > these drivers have created quite a bit of confusion with regard to what
>> > the project's goals are, too.
>> > 
>> > 4. We've seen the value - and believe in it - of OpenStack's
>> > incubation period. During this period, the project has gained
>> > maturity in its API, supported drivers and integration with the
>> > overall community.
>> > 
>> > 5. Several important questions were brought up in the recent ML
>> > discussions. These questions take time, effort but also represent a
>> > key point in the support, development and integration of the project
>> > with the rest of OpenStack. We'd like to dedicate to this questions
>> > the time they deserve.
>> > 
>> > 6. There are still some open questions in the OpenStack community
>> > related to the graduation requirements and the required supported
>> > technologies of integrated projects.
>> > 
>> > Based on the aforementioned points, the team would like to withdraw
>> > the graduation request and remain an incubated project for one
>> > more development cycle.
>> > 
>> > During the upcoming months, the team will focus on solving the issues
>> > that arose as part of last Tuesday's meeting. If possible, we would
>> > like to request a meeting where we can discuss with the TC - and
>> > whoever wants to participate - a set of *most pressing issues* that
>> > should be solved before requesting another graduation meeting. The
>> > team will be focused on solving those issues and other issues down
>> > that road.
>> > 
>> > Although the team believes in the project's technical maturity, we
>>think
>> > this is what is best for OpenStack and the project itself
>> > community-wise. The open questions are way too important for the team
>> > and the community and they shouldn't be ignored nor rushed.
>> > 
>> > I'd also like to thank the team and the overall community. The team
>> > for its hard work during the last cycle and the community for being
>>there
>> > and providing such important feedback in this process. We look forward
>> > to see Marconi graduating from incubation.
>> > 
>> > Bests,
>> > Marconi's team.
>> > 
>> > 
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] instances stuck with task_state of REBOOTING

2014-03-21 Thread Solly Ross
Well, if messages are getting dropped on the floor due to communication issues, 
that's not a good thing.
If you have time, could you determine why the messages are getting dropped on 
the floor?  We shouldn't be
doing things that require both the controller and compute nodes until we have a 
connection.

Best Regards,
Solly Ross

- Original Message -
From: "Chris Friesen" 
To: openstack-dev@lists.openstack.org
Sent: Thursday, March 20, 2014 2:59:55 PM
Subject: Re: [openstack-dev] [nova] instances stuck with task_state of  
REBOOTING

On 03/20/2014 12:29 PM, Chris Friesen wrote:

> The fact that there are no success or error logs in nova-compute.log
> makes me wonder if we somehow got stuck in self.driver.reboot().
>
> Also, I'm kind of wondering what would happen if nova-compute was
> running reboot_instance() and we rebooted the controller at the same
> time.  reboot_instance() could time out trying to update the instance
> with the the new power state and a task_state of None.  Later on in
> _sync_power_states() we would update the power_state, but nothing would
> update the task_state.  I don't think this is what happened to us though
> since I'd expect to see logs of the timeout.

Actually, looking at the logs a bit more carefully it appears that what 
happened is something like this:

We reboot the controllers.
Right after they come back up something calls compute.api.API.reboot()
That sets instance.task_state = task_states.REBOOTING and then calls 
instance.save() to update the database.
Then it calls self.compute_rpcapi.reboot_instance() which does an rpc cast.
That message gets dropped on the floor due to communication issues 
between the controller and the compute.
Now we're stuck with a task_state of REBOOTING.


I think that both of the RPC message loss scenarios are valid with 
current nova code, so we really do need an audit to clean up after this 
sort of thing.
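
To make the kind of audit being suggested here concrete, below is a rough
sketch of a periodic cleanup that resets instances stuck in REBOOTING past a
timeout. The data structures are stand-ins for illustration only (this is not
actual nova code), and a real implementation would query the database and
deal with races:

from datetime import datetime, timedelta


class FakeInstance(object):
    """Stand-in for an instance record (illustration only)."""
    def __init__(self, uuid, task_state, updated_at):
        self.uuid = uuid
        self.task_state = task_state
        self.updated_at = updated_at


def reset_stuck_reboots(instances, timeout=timedelta(minutes=10), now=None):
    """Clear REBOOTING task_states that have not progressed within timeout."""
    now = now or datetime.utcnow()
    cleared = []
    for inst in instances:
        if (inst.task_state == 'rebooting' and
                now - inst.updated_at > timeout):
            inst.task_state = None   # let the user retry the reboot
            cleared.append(inst.uuid)
    return cleared


instances = [
    FakeInstance('a', 'rebooting', datetime.utcnow() - timedelta(hours=2)),
    FakeInstance('b', 'rebooting', datetime.utcnow()),
    FakeInstance('c', None, datetime.utcnow()),
]
print(reset_stuck_reboots(instances))   # ['a']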

Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] User mailing lists for OpenStack projects

2014-03-21 Thread Jason Dunsmore
Here is the mailing list for openstack usage questions (for all
projects):
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


On Thu, Mar 20 2014, Shaunak Kashyap wrote:

> Hi folks,
>
> I am relatively new to OpenStack development as one of the developers
> on the unified PHP SDK for OpenStack [1].
>
> We were recently discussing about a mailing list for the users of this
> SDK (as opposed to it’s contributors who will use openstack-dev@). The
> purpose of such as mailing list would be for users of the SDK to
> communicate with the contributors as well as each other. Of course,
> there would be other avenues for such communication as well (IRC, for
> instance).
>
> Specifically, we would like to know whether existing OpenStack
> projects have mailing lists for their users and, if so, where they are
> being hosted.
>
> Thanks,
>
> Shaunak
>
> [1] https://wiki.openstack.org/wiki/OpenStack-SDK-PHP
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Resource dependencies

2014-03-21 Thread Jason Dunsmore
This is what you're looking for:
http://docs.openstack.org/developer/heat/glossary.html#term-dependency

On Thu, Mar 20 2014, Shaunak Kashyap wrote:

> Hi,
>
> In a Heat template, what does it mean for a resource to depend on
> another resource? As in, what is the impact of creating a dependency?
>
> I read
> http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#resources-section
> and found this definition of the “depends_on” attribute:
>
>> This optional attribute allows for specifying dependencies of the
> current resource on one or more other resources. Please refer to
> section hot_spec_resources_dependencies for details.
>
>
> Unfortunately, I can’t seem to find the referenced
> “hot_spec_resources_dependencies” section anywhere.
>
> Thank you,
>
> Shaunak
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-21 Thread Chris Friesen
This is sort of off on a tangent, but one of the things that made this a
problem was the fact that if someone creates a private flavor and then tries
to add access, the second flavor-access call will fail because the tenant is
already on the access list.


Something I was wondering...why do we fail the second call?

It would make sense to me to just return success without actually doing 
anything since at the end of the operation the tenant has the access 
they requested.


Of course, that'd be another API change...  :)

Chris
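
For illustration, a minimal sketch of the idempotent behaviour described
above, where a second add-access call for the same tenant simply succeeds
instead of failing. Toy data structures only; this is not the actual nova
flavor-access implementation:

class Flavor(object):
    def __init__(self, name, is_public=False):
        self.name = name
        self.is_public = is_public
        self.access = set()          # tenant ids allowed to use the flavor


def add_flavor_access(flavor, tenant_id):
    """Grant access; treat 'already has access' as success, not an error."""
    if tenant_id in flavor.access:
        return flavor.access         # idempotent: nothing to do
    flavor.access.add(tenant_id)
    return flavor.access


f = Flavor('private-flavor')
add_flavor_access(f, 'tenant-a')     # first call adds the tenant
add_flavor_access(f, 'tenant-a')     # second call is a harmless no-op
print(sorted(f.access))              # ['tenant-a']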

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Resource dependencies

2014-03-21 Thread Shaunak Kashyap
Thanks for the explanation, Thomas. Appreciate it!

Shaunak

On Mar 21, 2014, at 12:37 AM, Thomas Spatzier  
wrote:

> Shaunak Kashyap  wrote on 21/03/2014
> 05:26:50:
> 
>> From: Shaunak Kashyap 
>> To: "openstack-dev@lists.openstack.org"
> 
>> Date: 21/03/2014 05:29
>> Subject: [openstack-dev] [Heat] Resource dependencies
>> 
>> Hi,
>> 
>> In a Heat template, what does it mean for a resource to depend on
>> another resource? As in, what is the impact of creating a dependency?
> 
> When a resource depends on another resource, this means that the Heat
> engine will only start processing this resource as soon as the other
> resource it depends on has been created. If a resource depends on multiple
> resources, all those other resources have to be created before processing
> the dependent resource.
> 
>> 
>> I read http://docs.openstack.org/developer/heat/template_guide/
>> hot_spec.html#resources-section and found this definition of the
>> “depends_on” attribute:
>> 
>>> This optional attribute allows for specifying dependencies of the
>> current resource on one or more other resources. Please refer to
>> section hot_spec_resources_dependencies for details.
>> 
>> 
>> Unfortunately, I can’t seem to find the referenced
>> “hot_spec_resources_dependencies” section anywhere.
> 
> I just checked the source in github and the section is there:
> 
> https://github.com/openstack/heat/blob/master/doc/source/template_guide/hot_spec.rst#L452
> 
> It only looks like the wrong heading markup is used. Nice spotting
> actually; I will fix it.
> 
>> 
>> Thank you,
>> 
>> Shaunak
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Thierry Carrez
Sean Dague wrote:
> Sounds great. One of the things I hope happens with this is a look at
> some places rootwrap is used with such an open policy that it's
> completely moot. For instance the nova-cpu policy includes tee & dd with
> no arg limiting (which has been that way forever from my look in git
> annotate)
> 
> Which is basically game over.

n-cpu is not the only component where the use of rootwrap doesn't
actually provide additional security... I'll leave it as an exercise to the
reader to find the other ones :)

> So in the nova-cpu case I really think we should remove rootwrap as it's
> got to do so many things as root that being a limited user really isn't
> an option.

The original idea was to have the framework in place to address those
issues: notice abusive commands in filter definitions, and either find a
way to filter them in an efficient way (the way we addressed the kill
calls for example), or adapt the code so that it doesn't need such
commands (like, say, removing file injection altogether).

The trick is, despite multiple sessions on the subject (one at every
summit since the dawn of time) this big review/fix effort hasn't
magically happened :) In some cases we even regressed (re-addition of
blind 'cat' CommandFilter while we have a specific ReadFileFilter).

I still think we are in a better starting place forcing those calls
through inefficient rootwrap rules -- at least we know which those calls
are and we have the framework ready to help in further restricting them
(RegExpFilter anyone ?). But the issue is the current rootwrap gives a
false sense of security. People just add filter rules for their commands
and call their security work done. It's *not* done. It's a continuing
process to make sure you don't have insecure rules, improve them or
rewrite the code so that it doesn't need them. Most CommandFilter rules
can be abused, and they still represent something like 95% of the
filters :) I'm not sure how to better communicate that rootwrap is not
the end, it's just the beginning.
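
To make the distinction concrete, here is a toy sketch (not the real rootwrap
code) of why an unrestricted CommandFilter-style rule on something like dd is
effectively game over, while a regex-restricted rule at least pins the
arguments to what the caller is supposed to do:

import re


def command_filter(allowed_cmd, userargs):
    """Allow any invocation of allowed_cmd, regardless of its arguments."""
    return bool(userargs) and userargs[0] == allowed_cmd


def regexp_filter(allowed_cmd, arg_patterns, userargs):
    """Allow allowed_cmd only when every argument matches its pattern."""
    if not userargs or userargs[0] != allowed_cmd:
        return False
    args = userargs[1:]
    if len(args) != len(arg_patterns):
        return False
    return all(re.match(p + r'\Z', a) for p, a in zip(arg_patterns, args))


# an open 'dd' rule happily approves reading arbitrary files as root:
print(command_filter('dd', ['dd', 'if=/etc/shadow', 'of=/tmp/leak']))      # True

# a regex rule pinned to the expected paths does not:
patterns = [r'if=/dev/zero', r'of=/dev/mapper/[a-zA-Z0-9_-]+']
print(regexp_filter('dd', patterns,
                    ['dd', 'if=/etc/shadow', 'of=/tmp/leak']))             # False
print(regexp_filter('dd', patterns,
                    ['dd', 'if=/dev/zero', 'of=/dev/mapper/vg0-lv0']))     # True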

As a final note, the best solution is not "better rootwrap filters"; the
best solution is solid design that doesn't require running anything as
root. So components without run_as_root calls should really stay that
way. And components with a couple of rootwrap rules should seriously
look into removing the need for them.

Cheers,

-- 
Thierry Carrez (ttx)



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [3rd party testing] Requiring a utc timestamp on 3rd party test logs

2014-03-21 Thread Anita Kuno
At the beginning of March, I had an entertaining evening/morning with
Michael Still in which we tried to figure out when a 3rd party testing
system ran a specific test. I was days away from a refreshing vacation
and he was working with morning brain. The conversation went on way too
long and as much as I enjoy a good chat about timezones and timestamps,
I think there is a better way to do this. [0]

I proposed a patch to the 3rd party testing requirements adding a UTC
timestamp for 3rd party testing logs, as part of the required
environment details. [1]
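
For third-party CI operators wondering what this looks like in practice, here
is a minimal Python sketch that makes log lines carry UTC timestamps. It is
purely illustrative and not part of the proposed requirement wording:

import logging
import time

handler = logging.StreamHandler()
formatter = logging.Formatter(
    '%(asctime)s.%(msecs)03dZ %(levelname)s %(message)s',
    datefmt='%Y-%m-%dT%H:%M:%S')
formatter.converter = time.gmtime    # timestamps in UTC rather than local time
handler.setFormatter(formatter)

log = logging.getLogger('third-party-ci')
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info('starting test run for change 77376')
# e.g. 2014-03-21T18:02:11.123Z INFO starting test run for change 77376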

Jeremy Stanley has rightly pointed out that this addition to
requirements could benefit from some feedback from reviewers and
developers who consume these logs to ensure that this is a useful
requirement not a dust catcher.

So please comment, preferably on the patch, or you are welcome to comment
on this thread to indicate whether you feel this requirement is worth
including.

Given the nature of some other conversations regarding 3rd party testing
in the past, I have decided to put a time limit of 7 days on this thread
so that opening the conversation does not become an obstacle to merging
the patch in case there are no responses.

Thank you,
Anita.

[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2014-03-01.log
timestamp 2014-03-01T23:50:56 for the beginning of the timestamp
conversation
[1] https://review.openstack.org/#/c/77376/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Moving tripleo-ci towards the gate

2014-03-21 Thread Derek Higgins
Hi All,
   I'm trying to get a handle on what needs to happen before getting
tripleo-ci (toci) into the gate. I realize this may take some time, but
I'm trying to map out how to get to the end goal of putting multi-node
tripleo-based deployments in the gate, which should cover a lot of use
cases that devstack-gate doesn't. Here are some of the stages I think we
need to achieve before being in the gate, along with some questions where
people may be able to fill in the blanks.

Stage 1: check - tripleo projects
   This is what we currently have running: 5 separate jobs running
non-voting checks against tripleo projects.

Stage 2 (a). reliability
   Obviously keeping both the results and the CI system reliable is a must
and we should always aim towards 0% false test results, but is there, for
example, a number of false negatives that would be acceptable to infra?
What are the numbers on the gate at the moment? Should we aim to match
those at the very least (maybe we already have)? And for how long do we
need to maintain those levels before considering the system proven?

Stage 2 (b). speedup
   How long can the longest jobs take? We have plans in place to speed
up our current jobs but what should the target be?

3. More Capacity
   I'm going to talk about RAM here as it's probably the resource where
we will hit our infrastructure limits first.
   Each time a suite of toci jobs is kicked off we currently kick off 5
jobs (which will double once Fedora is added[1])
   In total these jobs spawn 15 VMs consuming 80G of RAM (it's actually
120G to work around a bug we should soon have fixed [2]); we also
have plans that will reduce this 80G further, but let's stick with it for
the moment.
   Some of these jobs complete after about 30 minutes, but let's say our
target is an overall average of 45 minutes.

   With Fedora that means each run will tie up 160G for 45 minutes. Or
160G can provide us with 32 runs (each including 10 jobs) per day.

   So to kick off 500 (I made this number up) runs per day, we would need
   (500 / 32.0) * 160G = 2500G of RAM

   We then need to double this number to allow for redundancy, so that's
5000G of RAM.
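
As a quick sanity check on the arithmetic above, the same numbers in a few
lines of Python (the 500 runs/day figure is the made-up demand number from
this mail):

jobs_per_run = 10          # 5 jobs today, doubling once Fedora is added
gb_per_run = 160           # RAM tied up by one full run of toci jobs
run_minutes = 45           # target overall average job time
runs_per_day_per_160g = (24 * 60) // run_minutes    # 32 runs/day per 160G

target_runs_per_day = 500
ram_needed = (target_runs_per_day / float(runs_per_day_per_160g)) * gb_per_run
print(ram_needed)          # 2500.0 GB
print(ram_needed * 2)      # 5000.0 GB once doubled for redundancy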

   We probably have about 3/4 of this available to us at the moment but
it's not evenly balanced between the 2 clouds, so we're not covered from a
redundancy point of view.

   So we need more hardware (either by expanding the clouds we have or
adding new clouds). I'd like us to start a separate effort to map out
exactly what our medium-term goals should be, including
   o jobs we want to run
   o how long we expect each of them to take
   o how much ram each one would take
   so that we can roughly put together an idea of what our HW
requirements will be.

4. check - all openstack projects
   Once we're happy we have the required capacity I think we can then
move to check on all openstack projects

5. voting check - all projects
   Once everybody is happy with the reliability I think we can move to
voting checks.

6. gate on all openstack projects
   And then finally when everything else lines up I think we can be
added to the gate

A) Gating with Ironic
  I bring this up because there was some confusion about Ironic's status
in the gate at a recent tripleo meeting [3]: when can tripleo's Ironic
jobs be part of the gate?

Any thoughts? Am I way off with any of my assumptions? Is my maths correct?

thanks,
Derek.

[1] https://review.openstack.org/#/q/status:open+topic:add-f20-jobs,n,z
[2] https://bugs.launchpad.net/diskimage-builder/+bug/1289582
[3]
http://eavesdrop.openstack.org/meetings/tripleo/2014/tripleo.2014-03-11-19.01.log.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack/GSoC

2014-03-21 Thread Victoria Martínez de la Cruz
Hi all,

I just submitted my proposal to Melange and I'm finishing my student page
on the OpenStack wiki. Let me know if there is something missing.

Let the waiting begin; good luck to all!

Thanks,

Victoria


2014-03-20 18:38 GMT-03:00 Chenchong Qin :

> Hi!
>
> I've submitted my proposal to the GSoC site. And my Project Detail Page
> will be ready soon. Looking forward to your comments!
>
> Thanks!
>
> Chenchong
>
>  Dear Students,
>>
>> Student application deadline is on Friday, March 21 [1]
>>
>> Once you finish the application process on the Google GSoC site.
>> Please reply back to this thread to confirm that all the materials are
>> ready to review.
>>
>> thanks,
>> dims
>>
>> [1] http://www.google-melange.com/gsoc/events/google/gsoc2014
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS 2.1.0 is available but not the Neutron ARP responder

2014-03-21 Thread Kyle Mestery
Getting this type of functional testing into the gate would be pretty
phenomenal.
Thanks for your continued efforts here Mathieu! If there is anything I can
do to
help here, let me know. One other concern here is that the infra team may
have
issues running a version of OVS which isn't packaged into Ubuntu/CentOS.
Keep
that in mind as well.

Édouard, I look forward to your blog; please share it here once you've
written it!

Thanks,
Kyle



On Fri, Mar 21, 2014 at 6:15 AM, Édouard Thuleau  wrote:

> Thanks Mathieu for your support and work on CI to enable multi-node.
>
> I wrote a blog post about how to run a devstack development environment with
> LXC.
> I hope it will be published next week.
>
> Just adding a pointer: OVS has supported network namespaces for two years
> now [1].
>
> [1]
> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=2a4999f3f33467f4fa22ed6e5b06350615fb2dac
>
> Regards,
> Édouard.
>
>
> On Fri, Mar 21, 2014 at 11:31 AM, Mathieu Rohon 
> wrote:
>
>> Hi edouard,
>>
>> thanks for the information. I would love to see your patch getting
>> merged to have l2-population MD fully functional with an OVS based
>> deployment. Moreover, this patch has a minimal impact on neutron,
>> since the code is used only if l2-population MD is used in the ML2
>> plugin.
>>
>> markmcclain was concerned that no functional testing is done, but
>> L2-population MD needs mutlinode deployment to be tested. A deployment
>> based on a single VM won't create overlay tunnels, which is a
>> mandatory technology to have l2-population activated.
>> The OpenStack CI is not able, for the moment, to run jobs based on
>> multi-node deployment. We proposed an evolution of devstack to have a
>> multinode deployment based on a single VM which launch compute nodes
>> in LXC containers [1], but this evolution has been refused by
>> OpenStack CI since there are other ways to run a multinode setup with
>> devstack, and LXC containers are not compatible with iscsi and probably
>> ovs [2][3].
>>
>> One way to have functional test for this feature would be to deploy
>> 3rd party testing environment, but it would be a pity to have to
>> maintain a 3rd party to test some functionalities which are not based
>> on 3rd party equipment. So we are currently learning about the
>> OpenStack CI tools to propose some evolutions to have a multinode setup
>> inside the gate [4]. There are a lot of ways to implement it
>> (node-pools evolution, usage of tripleO, of Heat [5]), and we don't
>> know which one would be the easiest, and so the one we have to work on
>> to have the multinode feature available ASAP.
>>
>> This feature looks very important for Neutron, at least to test
>> overlay tunneling. I think it's very important for nova too, to test
>> live-migration.
>>
>>
>> [1]https://blueprints.launchpad.net/devstack/+spec/lxc-computes
>> [2]https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
>> [3]
>> http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-18-19.01.log.html
>> [4]
>> https://www.mail-archive.com/openstack-infra@lists.openstack.org/msg00968.html
>> [5]
>> http://lists.openstack.org/pipermail/openstack-infra/2013-July/000128.html
>>
>> On Fri, Mar 21, 2014 at 10:08 AM, Édouard Thuleau 
>> wrote:
>> > Hi,
>> >
>> > Just to inform you that the new OVS release 2.1.0 was done yesterday
>> [1].
>> > This release contains new features and significant performance
>> improvements
>> > [2].
>> >
>> > And among those new features, one [3] is used to add a local ARP responder
>> > with the OVS agent and the ML2 plugin with the l2-pop MD [4]. Perhaps it's
>> > time to reconsider that review?
>> >
>> > [1] https://www.mail-archive.com/discuss@openvswitch.org/msg09251.html
>> > [2] http://openvswitch.org/releases/NEWS-2.1.0
>> > [3]
>> >
>> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=f6c8a6b163af343c66aea54953553d84863835f7
>> > [4] https://review.openstack.org/#/c/49227/
>> >
>> > Regards,
>> > Édouard.
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Network Tagging Blueprint

2014-03-21 Thread Salvatore Orlando
Hi Vinay,

I left a few comments on the specification document.
While I understand this is functional for the VPC use case, there might also
be applications outside of VPC.
My only concern is that, at least in the examples in the document, this
appears to be violating a bit the tenet of neutron being
"technology-agnostic".
I am however confident that it should be doable to find a way to work
around it, or have a discussion identifying the cases where instead it's
advisable to expose the underlying technology.

From a general perspective, I have not been following the VPC discussion
closely; I hope to find time to catch up.
However, I recall seeing a blueprint for using nova-api as the endpoint for
network operations as well; is that still the current direction?

Salvatore


On 21 March 2014 07:01, Vinay Bannai  wrote:

> Hello Folks,
>
> Please see a blueprint that we (eBay Inc) would like to propose for the
> Juno summit. This blueprint addresses the feature of network tagging
> allowing one to tag network resources with key value pairs as explained in
> the specification URL. We at eBay have a version of this feature
> implemented and deployed in our production network. This blueprint
> formalizes the feature definition with enhancements to address more generic
> use cases. I have enabled comments and would like to hear opinions and
> feedback.
>
> The document will be updated with REST URL and resource modeling over the
> weekend for those interested in the details.
>
>
> https://docs.google.com/document/d/1ZqW7qeyHTm9AQt28GUdfv46ui9mz09UQNvjXiewOAys/edit#
>
>
> Regards
>
> --
> Vinay Bannai
> eBay Inc
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Sean Dague
On 03/21/2014 05:42 AM, Thierry Carrez wrote:
> Yuriy Taraday wrote:
>> On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo > > wrote:
>>>If this coupled to neutron in a way that it can be accepted for
>>> Icehouse (we're killing a performance bug), or that at least it can
>>> be y backported, you'd be covering both the short & long term needs.
>>
>> As I said on the meeting I plan to provide change request to Neutron
>> with some integration with this patch.
>> I'm also going to engage people involved in rootwrap about my change
>> request.
> 
> Temporarily removing my rootwrap maintainer hat and putting on my
> OpenStack release manager hat: as you probably know we are well into
> Icehouse feature freeze at this point, and there is no way I would
> consider such a significant change for inclusion in the Icehouse release
> at this point.
> 
> The work on both the daemon and the shedskin stuff is very promising,
> but the nature of this beast makes it necessary to undergo a lot of
> testing and security audits before it can be accepted. Not exactly
> something I'd consider 4 weeks before a final release.
> 
> Frankly, this issue has been on the table forever and this is just the
> wrong timing to rush a new implementation to fix it.
> 
> I filed a rootwrap session for the Juno Design summit -- ideally we'll
> have various solutions ready by then and we'd make the final choice for
> early integration in Juno, leaving plenty of time to catch the weird
> regressions (or security holes) that it may cause.

Sounds great. One of the things I hope happens with this is a look at
some places rootwrap is used with such an open policy that it's
completely moot. For instance the nova-cpu policy includes tee & dd with
no arg limiting (which has been that way forever from my look in git
annotate)

Which is basically game over.

So in the nova-cpu case I really think we should remove rootwrap as it's
got to do so many things as root that being a limited user really isn't
an option.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-21 Thread Boris Pavlovic
Sean,


Absolutely agree with you.
It's not the same to execute a query and get plain text as it is to execute
a query and get back a hierarchy of Python objects.

Plus I disagree when I hear that SQLAlchemy is slow. It's slow when you are
using it wrong.

For example, in the Nova scheduler [1] we were fetching 3 full tables with a
JOIN, which produces far more data from the DB (in bytes and rows) than just
making 3 separate selects and then joining the results by hand.

We should stop using the following phrases:
1) python is slow
2) mysql is slow
3) sqlalchemy is slow
4) hardware is slow [2]

And start using these phrases:
1) The algorithms that we are using are bad
2) The architectural solutions that we are using are bad

And start thinking about how to improve them.
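
To illustrate the point about one wide JOIN versus a few narrow selects, here
is a small self-contained SQLAlchemy sketch with toy models (not the real
Nova schema):

from sqlalchemy import create_engine, Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import joinedload, relationship, sessionmaker

Base = declarative_base()


class Host(Base):
    __tablename__ = 'hosts'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))
    instances = relationship('Instance', backref='host')


class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    host_id = Column(Integer, ForeignKey('hosts.id'))
    name = Column(String(64))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Host(name='compute1',
                 instances=[Instance(name='vm-%d' % i) for i in range(3)]))
session.commit()

# option 1: one wide JOINed result set; every instance row repeats the host
# columns and the ORM inflates the whole thing into objects in one go
hosts = session.query(Host).options(joinedload(Host.instances)).all()

# option 2: two narrow selects, stitched together by hand in Python
hosts = session.query(Host).all()
instances = session.query(Instance).filter(
    Instance.host_id.in_([h.id for h in hosts])).all()
by_host = {}
for inst in instances:
    by_host.setdefault(inst.host_id, []).append(inst)
print(dict((h.name, [i.name for i in by_host.get(h.id, [])]) for h in hosts))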


[1] https://review.openstack.org/#/c/43151/
[2] http://en.wikipedia.org/wiki/Buran_(spacecraft)

Best regards,
Boris Pavlovic



On Fri, Mar 21, 2014 at 3:04 PM, Sean Dague  wrote:

> On 03/20/2014 06:18 PM, Joe Gordon wrote:
> >
> >
> >
> > On Thu, Mar 20, 2014 at 3:03 PM, Alexei Kornienko
> > mailto:alexei.kornie...@gmail.com>> wrote:
> >
> > Hello,
> >
> > We've done some profiling and results are quite interesting:
> > during 1.5 hours ceilometer inserted 59755 events (59755 calls to
> > record_metering_data);
> > these calls resulted in a total of 2591573 SQL queries.
> >
> > And the most interesting part is that 291569 queries were ROLLBACK
> > queries.
> > We do around 5 rollbacks to record a single event!
> >
> > I guess it means that MySQL backend is currently totally unusable in
> > production environment.
> >
> >
> > It should be noticed that SQLAlchemy is horrible for performance, in
> > nova we usually see sqlalchemy overheads of well over 10x (time
> > nova.db.api call vs the time MySQL measures when slow log is recording
> > everything).
>
> That's not really a fair assessment. Python object inflation takes time.
> I do get that there is SQLA overhead here, but even if you trimmed it
> out you would not get down to the mysql query time.
>
> That being said, having Ceilometer's write path be highly tuned and not
> use SQLA (and written for every back end natively) is probably appropriate.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-21 Thread Alex Xu

On 2014-03-21 17:04, Christopher Yeoh wrote:

On Thu, 20 Mar 2014 15:45:11 -0700
Dan Smith  wrote:

I know that our primary delivery mechanism is releases right now, and
so if we decide to revert before this gets into a release, that's
cool. However, I think we need to be looking at CD as a very important
use-case and I don't want to leave those folks out in the cold.


I don't want to cause issues for the CD people, but perhaps it won't be
too disruptive for them (some direct feedback would be handy). The
initial backwards incompatible change did not result in any bug reports
coming back to us at all. If there were lots of users using it I think
we could have expected some complaints as they would have had to adapt
their programs to no longer manually add the flavor access (otherwise
that would fail). It is of course possible that new programs written in
the meantime would rely on the new behaviour.

I think (please correct me if I'm wrong) the public CD clouds don't
expose that part of the API to their users, so the fallout could be quite
limited. Some opinions from those who do CD for private clouds would be
very useful. I'll send an email to openstack-operators asking what
people there believe the impact would be but at the moment I'm thinking
that revert is the way we should go.


Could we consider a middle road? What if we made the extension
silently tolerate an add-myself operation to a flavor, (potentially
only) right after create? Yes, that's another change, but it means
that old clients (like horizon) will continue to work, and new
clients (which expect to automatically get access) will continue to
work. We can document in the release notes that we made the change to
match our docs, and that anyone that *depends* on the (admittedly
weird) behavior of the old broken extension, where a user doesn't
retain access to flavors they create, may need to tweak their client
to remove themselves after create.

My concern is that we'd be digging ourselves an even deeper hole with
that approach. That for some reason we don't really understand at the
moment, people have programs which rely on adding flavor access to a
tenant which is already on the access list being rejected rather than
silently accepted. And I'm not sure its the behavior from flavor access
that we actually want.

But we certainly don't want to end up in the situation of trying to
work out how to rollback two backwards incompatible API changes.


I vote to revert also. If we promise API stability before a release, that
means we can't make any mistakes in review. We should think carefully about
whether we promise something before release.

If we really want to keep this, there is another road: add an extension
for this change, just like the extend_quotas extension, disabled by default.
If any deployment depends on that change, the admin can enable it.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS 2.1.0 is available but not the Neutron ARP responder

2014-03-21 Thread Édouard Thuleau
Thanks Mathieu for your support and work on CI to enable multi-node.

I wrote a blog post about how to run a devstack development environment with
LXC.
I hope it will be published next week.

Just adding a pointer: OVS has supported network namespaces for two years
now [1].

[1]
http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=2a4999f3f33467f4fa22ed6e5b06350615fb2dac

Regards,
Édouard.


On Fri, Mar 21, 2014 at 11:31 AM, Mathieu Rohon wrote:

> Hi edouard,
>
> thanks for the information. I would love to see your patch getting
> merged to have l2-population MD fully functional with an OVS based
> deployment. Moreover, this patch has a minimal impact on neutron,
> since the code is used only if l2-population MD is used in the ML2
> plugin.
>
> markmcclain was concerned that no functional testing is done, but
> L2-population MD needs mutlinode deployment to be tested. A deployment
> based on a single VM won't create overlay tunnels, which is a
> mandatory technology to have l2-population activated.
> The OpenStack CI is not able, for the moment, to run jobs based on
> multi-node deployment. We proposed an evolution of devstack to have a
> multinode deployment based on a single VM which launch compute nodes
> in LXC containers [1], but this evolution has been refused by
> OpenStack CI since there are other ways to run a multinode setup with
> devstack, and LXC containers are not compatible with iscsi and probably
> ovs [2][3].
>
> One way to have functional test for this feature would be to deploy
> 3rd party testing environment, but it would be a pity to have to
> maintain a 3rd party to test some functionalities which are not based
> on 3rd party equipment. So we are currently learning about the
> OpenStack CI tools to propose some evolutions to have a multinode setup
> inside the gate [4]. There are a lot of ways to implement it
> (node-pools evolution, usage of tripleO, of Heat [5]), and we don't
> know which one would be the easiest, and so the one we have to work on
> to have the multinode feature available ASAP.
>
> This feature looks very important for Neutron, at least to test
> overlay tunneling. I think it's very important for nova too, to test
> live-migration.
>
>
> [1]https://blueprints.launchpad.net/devstack/+spec/lxc-computes
> [2]https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
> [3]
> http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-18-19.01.log.html
> [4]
> https://www.mail-archive.com/openstack-infra@lists.openstack.org/msg00968.html
> [5]
> http://lists.openstack.org/pipermail/openstack-infra/2013-July/000128.html
>
> On Fri, Mar 21, 2014 at 10:08 AM, Édouard Thuleau 
> wrote:
> > Hi,
> >
> > Just to inform you that the new OVS release 2.1.0 was done yesterday [1].
> > This release contains new features and significant performance
> improvements
> > [2].
> >
> > And among those new features, one [3] is used to add a local ARP responder
> > with the OVS agent and the ML2 plugin with the l2-pop MD [4]. Perhaps it's
> > time to reconsider that review?
> >
> > [1] https://www.mail-archive.com/discuss@openvswitch.org/msg09251.html
> > [2] http://openvswitch.org/releases/NEWS-2.1.0
> > [3]
> >
> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=f6c8a6b163af343c66aea54953553d84863835f7
> > [4] https://review.openstack.org/#/c/49227/
> >
> > Regards,
> > Édouard.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] panel not getting registered.

2014-03-21 Thread Roel Van Nyen
Hi,

I'm trying to create a custom dashboard with a panel.

I've created the dashboard with a panel under
/openstack_dashboard/dashboards/.
Then I've added this to settings.py under INSTALLED_APPS:

'openstack_dashboard.dashboards.mycustomdashboard',

However, if I try to connect to horizon I get:

Traceback (most recent call last):
  File "/usr/lib/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
  File
"/home/vannyenr/horizon-stable/.venv/local/lib/python2.7/site-packages/django/contrib/staticfiles/handlers.py",
line 67, in __call__
return self.application(environ, start_response)
  File
"/home/vannyenr/horizon-stable/.venv/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py",
line 187, in __call__
self.load_middleware()
  File
"/home/vannyenr/horizon-stable/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py",
line 49, in load_middleware
mw_instance = mw_class()
  File
"/home/vannyenr/horizon-stable/.venv/local/lib/python2.7/site-packages/django/middleware/locale.py",
line 24, in __init__
for url_pattern in get_resolver(None).url_patterns:
  File
"/home/vannyenr/horizon-stable/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py",
line 346, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns",
self.urlconf_module)
  File
"/home/vannyenr/horizon-stable/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py",
line 341, in urlconf_module
self._urlconf_module = import_module(self.urlconf_name)
  File
"/home/vannyenr/horizon-stable/.venv/local/lib/python2.7/site-packages/django/utils/importlib.py",
line 40, in import_module
__import__(name)
  File "/home/vannyenr/horizon-stable/openstack_dashboard/urls.py", line
38, in 
url(r'', include(horizon.urls))
  File
"/home/vannyenr/horizon-stable/.venv/local/lib/python2.7/site-packages/django/conf/urls/__init__.py",
line 27, in include
patterns = getattr(urlconf_module, 'urlpatterns', urlconf_module)
  File
"/home/vannyenr/horizon-stable/.venv/local/lib/python2.7/site-packages/django/utils/functional.py",
line 213, in inner
self._setup()
  File
"/home/vannyenr/horizon-stable/.venv/local/lib/python2.7/site-packages/django/utils/functional.py",
line 298, in _setup
self._wrapped = self._setupfunc()
  File "/home/vannyenr/horizon-stable/horizon/base.py", line 733, in
url_patterns
return self._urls()[0]
  File "/home/vannyenr/horizon-stable/horizon/base.py", line 767, in _urls
url(r'^%s/' % dash.slug, include(dash._decorated_urls)))
  File "/home/vannyenr/horizon-stable/horizon/base.py", line 468, in
_decorated_urls
% self.default_panel)
NotRegistered: The default panel "mycustompanel" is not registered.

However, in my panel code there is
dashboard.mycustomdashboard.register(mycustompanel).
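
For reference, the two modules follow the standard pattern from the docs;
roughly this (a simplified sketch using the placeholder names from above,
not my exact code):

# openstack_dashboard/dashboards/mycustomdashboard/dashboard.py
import horizon


class MyCustomDashboard(horizon.Dashboard):
    name = "My Custom Dashboard"
    slug = "mycustomdashboard"
    panels = ('mycustompanel',)          # panels to load
    default_panel = 'mycustompanel'      # must match a registered panel slug


horizon.register(MyCustomDashboard)


# openstack_dashboard/dashboards/mycustomdashboard/mycustompanel/panel.py
import horizon

from openstack_dashboard.dashboards.mycustomdashboard import dashboard


class MyCustomPanel(horizon.Panel):
    name = "My Custom Panel"
    slug = "mycustompanel"


# Register the panel against the dashboard class
dashboard.MyCustomDashboard.register(MyCustomPanel)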

What am I doing wrong ?

Cheers,
Roel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][QA][Tempest][Infra] Ceilometer tempest testing in gate

2014-03-21 Thread Sean Dague
On 03/20/2014 06:18 PM, Joe Gordon wrote:
> 
> 
> 
> On Thu, Mar 20, 2014 at 3:03 PM, Alexei Kornienko
> <alexei.kornie...@gmail.com> wrote:
> 
> Hello,
> 
> We've done some profiling and results are quite interesting:
> during 1,5 hour ceilometer inserted 59755 events (59755 calls to
> record_metering_data)
> these calls resulted in a total of 2591573 SQL queries.
> 
> And the most interesting part is that 291569 queries were ROLLBACK
> queries.
> We do around 5 rollbacks to record a single event!
> 
> I guess it means that the MySQL backend is currently totally unusable in
> a production environment.
> 
> 
> It should be noted that SQLAlchemy is horrible for performance; in
> nova we usually see sqlalchemy overheads of well over 10x (time
> nova.db.api call vs the time MySQL measures when slow log is recording
> everything).

That's not really a fair assessment. Python object inflation takes time.
I do get that there is SQLA overhead here, but even if you trimmed it
out you would not get down to the raw MySQL query time.

That being said, having Ceilometer's write path be highly tuned and not
use SQLA (and written for every back end natively) is probably appropriate.
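
As a rough illustration of the direction (a sketch with a made-up schema,
not Ceilometer's actual models or write path): even before dropping SQLA
entirely, batching the inserts through SQLAlchemy Core instead of going
object-by-object through the ORM session removes most of the per-record
round trips.

import sqlalchemy as sa

metadata = sa.MetaData()
# Made-up table purely for illustration
samples = sa.Table(
    'samples', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('meter', sa.String(255)),
    sa.Column('volume', sa.Float),
)

# sqlite in-memory just to keep the sketch self-contained and runnable
engine = sa.create_engine('sqlite://')
metadata.create_all(engine)

rows = [{'meter': 'cpu_util', 'volume': float(i)} for i in range(1000)]

# One multi-row INSERT in one transaction, instead of an INSERT (plus
# savepoint/rollback bookkeeping) per object.
with engine.begin() as conn:
    conn.execute(samples.insert(), rows)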

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Backwards incompatible API changes

2014-03-21 Thread Thierry Carrez
Christopher Yeoh wrote:
> I don't want to cause issues for the CD people, but perhaps it won't be
> too disruptive for them (some direct feedback would be handy). The
> initial backwards incompatible change did not result in any bug reports
> coming back to us at all. If there were lots of users using it I think
> we could have expected some complaints as they would have had to adapt
> their programs to no longer manually add the flavor access (otherwise
> that would fail). It is of course possible that new programs written in
> the meantime would rely on the new behaviour.
> 
> I think (please correct me if I'm wrong) the public CD clouds don't
> expose that part of API to their users so the fallout could be quite
> limited. Some opinions from those who do CD for private clouds would be
> very useful. I'll send an email to openstack-operators asking what
> people there believe the impact would be but at the moment I'm thinking
> that revert is the way we should go.
> 
>> Could we consider a middle road? What if we made the extension
>> silently tolerate an add-myself operation to a flavor, (potentially
>> only) right after create? Yes, that's another change, but it means
>> that old clients (like horizon) will continue to work, and new
>> clients (which expect to automatically get access) will continue to
>> work. We can document in the release notes that we made the change to
>> match our docs, and that anyone that *depends* on the (admittedly
>> weird) behavior of the old broken extension, where a user doesn't
>> retain access to flavors they create, may need to tweak their client
>> to remove themselves after create.
> 
> My concern is that we'd be digging ourselves an even deeper hole with
> that approach. That for some reason we don't really understand at the
> moment, people have programs which rely on adding flavor access to a
> tenant which is already on the access list being rejected rather than
> silently accepted. And I'm not sure its the behavior from flavor access
> that we actually want.
> 
> But we certainly don't want to end up in the situation of trying to
> work out how to rollback two backwards incompatible API changes.

My vote still goes to reverting, for all the reasons Chris just laid out.
I could live with the middle road though... My main concern is to avoid
breaking release followers with an issue we detected pre-release.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Miguel Angel Ajo



On 03/21/2014 11:01 AM, Thierry Carrez wrote:

Yuriy Taraday wrote:

Benchmark included showed on my machine these numbers (average over 100
iterations):

Running 'ip a':
   ip a :   4.565ms
  sudo ip a :  13.744ms
sudo rootwrap conf ip a : 102.571ms
 daemon.run('ip a') :   8.973ms
Running 'ip netns exec bench_ns ip a':
   sudo ip netns exec bench_ns ip a : 162.098ms
 sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
  daemon.run('ip netns exec bench_ns ip a') : 129.876ms

So it looks like running daemon is actually faster than running "sudo".


That's pretty good! However I fear that the extremely simplistic filter
rule file you fed on the benchmark is affecting numbers. Could you post
results from a realistic setup (like same command, but with all the
filter files normally found on a devstack host ?)

Thanks,




That's a good point; to have a fair comparison to the C-translated one,
I ran it with all the rootwrap filters provided in Havana, but I will
rerun the benchmark if that changed.

Anyway, I don't think there should be a huge difference; the worst
part was the startup.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] OVS 2.1.0 is available but not the Neutron ARP responder

2014-03-21 Thread Mathieu Rohon
Hi edouard,

thanks for the information. I would love to see your patch getting
merged to have l2-population MD fully functional with an OVS based
deployment. Moreover, this patch has a minimal impact on neutron,
since the code is used only if l2-population MD is used in the ML2
plugin.

markmcclain was concerned that no functional testing is done, but
L2-population MD needs a multinode deployment to be tested. A deployment
based on a single VM won't create overlay tunnels, which is a
mandatory technology to have l2-population activated.
The OpenStack CI is not able, for the moment, to run jobs based on
multi-node deployments. We proposed an evolution of devstack to have a
multinode deployment based on a single VM which launches compute nodes
in LXC containers [1], but this evolution has been refused by
OpenStack CI since there are other ways to run a multinode setup with
devstack, and LXC containers are not compatible with iscsi and probably
ovs [2][3].

One way to have functional tests for this feature would be to deploy a
3rd-party testing environment, but it would be a pity to have to
maintain a 3rd party to test some functionality which is not based
on 3rd-party equipment. So we are currently learning about the
OpenStack CI tools to propose some evolutions to have a multinode setup
inside the gate [4]. There are a lot of ways to implement it
(node-pools evolution, usage of TripleO, of Heat [5]), and we don't
know which one would be the easiest, and so which one we have to work on
to have the multinode feature available ASAP.

This feature looks very important for Neutron, at least to test
overlay tunneling. I think it's very important for nova too, to test
live-migration.


[1]https://blueprints.launchpad.net/devstack/+spec/lxc-computes
[2]https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1226855
[3]http://eavesdrop.openstack.org/meetings/infra/2014/infra.2014-02-18-19.01.log.html
[4]https://www.mail-archive.com/openstack-infra@lists.openstack.org/msg00968.html
[5]http://lists.openstack.org/pipermail/openstack-infra/2013-July/000128.html

On Fri, Mar 21, 2014 at 10:08 AM, Édouard Thuleau  wrote:
> Hi,
>
> Just to inform you that the new OVS release 2.1.0 came out yesterday [1].
> This release contains new features and significant performance improvements
> [2].
>
> And among those new features, one [3] was used to add a local ARP responder
> to the OVS agent with the ML2 plugin and the l2-pop MD [4]. Perhaps it's time
> to reconsider that review?
>
> [1] https://www.mail-archive.com/discuss@openvswitch.org/msg09251.html
> [2] http://openvswitch.org/releases/NEWS-2.1.0
> [3]
> http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=f6c8a6b163af343c66aea54953553d84863835f7
> [4] https://review.openstack.org/#/c/49227/
>
> Regards,
> Édouard.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Miguel Angel Ajo



On 03/21/2014 10:42 AM, Thierry Carrez wrote:

Yuriy Taraday wrote:

On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo <majop...@redhat.com> wrote:

If this coupled to neutron in a way that it can be accepted for
 Icehouse (we're killing a performance bug), or that at least it can
 be y backported, you'd be covering both the short & long term needs.


As I said on the meeting I plan to provide change request to Neutron
with some integration with this patch.
I'm also going to engage people involved in rootwrap about my change
request.


Temporarily removing my rootwrap maintainer hat and putting on my
OpenStack release manager hat: as you probably know we are well into
Icehouse feature freeze at this point, and there is no way I would
consider such a significant change for inclusion in the Icehouse release
at this point.

The work on both the daemon and the shedskin stuff is very promising,
but the nature of this beast makes it necessary to undergo a lot of
testing and security audits before it can be accepted. Not exactly
something I'd consider 4 weeks before a final release.

Frankly, this issue has been on the table forever and this is just the
wrong timing to rush a new implementation to fix it.



Thierry, that sounds reasonable to me. Even if this is a bug that we're
trying to kill (and not a new feature), the regressions and security
problems the change could come with totally justify that reasoning.

I'd be satisfied if the implementation Yuriy is preparing could be done
in a way that:

1) The traditional sudo/rootwrap functionality is preserved
2) It can be backported to Icehouse/Havana if it works as we expect
   and the security looks reasonable.


1: would allow falling back to a C/translated implementation, which
   looks like it will be more expensive to develop & maintain for the
   same/very similar performance results.

2: would fix our short-term problem with Icehouse.


I filed a rootwrap session for the Juno Design summit -- ideally we'll
have various solutions ready by then and we'd make the final choice for
early integration in Juno, leaving plenty of time to catch the weird
regressions (or security holes) that it may cause.



Best,
Miguel Ángel.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][PCI] problem about PCI SRIOV

2014-03-21 Thread Gouzongmei
Hi,

I have a problem when reading the wiki below, which is based on the latest 
SRIOV design.
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support#API_interface

My problem is about the "PCI SRIOV with tagged flavor" part.
In "pci_information =  { { 'device_id': "8086", 'vendor_id': "000[1-2]" }, { 
'e.physical_network': 'X' } }" , I'm confused what is the "e.physical_network", 
if it means a network resource, why we need to filter the assignable nics by a 
network resource?
Can you please tell me more about the "physical_network" here, thanks a lot.
In "{'e.physical_netowrk':'X', 'count': 1 }", I think the "count" means the 
count of virtual nics a SRIOV nic can support, is that right?

In the last step, while booting a VM with a virtual NIC, the command is "nova
boot  mytest  --flavor m1.tiny  --image=cirros-0.3.1-x86_64-uec  --nic
net-id=network_X  pci_flavor= '1:phyX_NIC;'".
I noticed that "pci_flavor" is given even though the m1.tiny flavor is already
specified; will "pci_flavor" be separated from the normal flavor in the next
step?

Thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi][TC] Withdraw graduation request

2014-03-21 Thread Thierry Carrez
Flavio Percoco wrote:
> [...]
> 4. We've seen the value - and believe in it - of OpenStack's
> incubation period. During this period, the project has gained
> maturity in its API, supported drivers and integration with the
> overall community.
> [...]

Thanks Flavio, I think that's the right choice.

Not graduating should not be seen as a failure. In this precise case, I
think timing was a bit short, and the TC significantly raised the bar
*during* the cycle by codifying clearer and stricter graduation
requirements.

The discussion around graduation also revealed communication issues:
late questions about Marconi's design and scope, or misunderstandings on
QA requirements. Incubation is not just a technical thing, it's also a
social thing: it should be considered complete once you feel at home
with the rest of the OpenStack integrated community (and the other way
around).

I hope an additional cycle of incubation will give you more time to
explain and promote Marconi in the wider community, and engage more
directly with the other programs. In this cycle where we constantly
raised the graduation bar, I think Sahara managed to keep up against all
odds because of Sergey's "presence" in horizontal programs like Infra.

Let's all make the best of this additional incubation cycle to all get
more comfortable with each other :)

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Local vs. Scalable Engine

2014-03-21 Thread Renat Akhmerov
Alright, thanks Winson!

Team, please review.

Renat Akhmerov
@ Mirantis Inc.



On 21 Mar 2014, at 06:43, W Chan  wrote:

> I submitted a rough draft for review @ 
> https://review.openstack.org/#/c/81941/.  Instead of using the pecan hook, I 
> added a class property for the transport in the abstract engine class.  On 
> the pecan app setup, I passed the shared transport to the engine on load.  
> Please provide feedback.  Thanks.
> 
> 
> On Mon, Mar 17, 2014 at 9:37 AM, Ryan Petrello  
> wrote:
> Changing the configuration object at runtime is not thread-safe.  If you want 
> to share objects with controllers, I’d suggest checking out Pecan’s hook 
> functionality.
> 
> http://pecan.readthedocs.org/en/latest/hooks.html#implementating-a-pecan-hook
> 
> e.g.,
> 
> class SpecialContextHook(object):
> 
>     def __init__(self, some_obj):
>         self.some_obj = some_obj
> 
>     def before(self, state):
>         # In any pecan controller, `pecan.request` is a thread-local
>         # webob.Request instance, allowing you to access
>         # `pecan.request.context['foo']` in your controllers.  In this
>         # example, self.some_obj could be just about anything - a Python
>         # primitive, or an instance of some class
>         state.request.context = {
>             'foo': self.some_obj
>         }
> 
> ...
> 
> wsgi_app = pecan.Pecan(
>     my_package.controllers.root.RootController(),
>     hooks=[SpecialContextHook(SomeObj(1, 2, 3))]
> )
> 
> ---
> Ryan Petrello
> Senior Developer, DreamHost
> ryan.petre...@dreamhost.com
> 
> On Mar 14, 2014, at 8:53 AM, Renat Akhmerov  wrote:
> 
> > Take a look at method get_pecan_config() in mistral/api/app.py. It’s where 
> > you can pass any parameters into pecan app (see a dictionary ‘cfg_dict’ 
> > initialization). They can be then accessed via pecan.conf as described 
> > here: 
> > http://pecan.readthedocs.org/en/latest/configuration.html#application-configuration.
> >  If I understood the problem correctly this should be helpful.
> >
> > Renat Akhmerov
> > @ Mirantis Inc.
> >
> >
> >
> > On 14 Mar 2014, at 05:14, Dmitri Zimine  wrote:
> >
> >> We have access to all configuration parameters in the context of api.py. 
> >> May be you don't pass it but just instantiate it where you need it? Or I 
> >> may misunderstand what you're trying to do...
> >>
> >> DZ>
> >>
> >> PS: can you generate and update mistral.config.example to include new oslo 
> >> messaging options? I forgot to mention it on review on time.
> >>
> >>
> >> On Mar 13, 2014, at 11:15 AM, W Chan  wrote:
> >>
> >>> On the transport variable, the problem I see isn't with passing the 
> >>> variable to the engine and executor.  It's passing the transport into the 
> >>> API layer.  The API layer is a pecan app and I currently don't see a way 
> >>> where the transport variable can be passed to it directly.  I'm looking 
> >>> at 
> >>> https://github.com/stackforge/mistral/blob/master/mistral/cmd/api.py#L50 
> >>> and 
> >>> https://github.com/stackforge/mistral/blob/master/mistral/api/app.py#L44. 
> >>>  Do you have any suggestion?  Thanks.
> >>>
> >>>
> >>> On Thu, Mar 13, 2014 at 1:30 AM, Renat Akhmerov  
> >>> wrote:
> >>>
> >>> On 13 Mar 2014, at 10:40, W Chan  wrote:
> >>>
> • I can write a method in base test to start local executor.  I will 
>  do that as a separate bp.
> >>> Ok.
> >>>
> • After the engine is made standalone, the API will communicate to 
>  the engine and the engine to the executor via the oslo.messaging 
>  transport.  This means that for the "local" option, we need to start all 
>  three components (API, engine, and executor) on the same process.  If 
>  the long term goal as you stated above is to use separate launchers for 
>  these components, this means that the API launcher needs to duplicate 
>  all the logic to launch the engine and the executor. Hence, my proposal 
>  here is to move the logic to launch the components into a common module 
>  and either have a single generic launch script that launch specific 
>  components based on the CLI options or have separate launch scripts that 
>  reference the appropriate launch function from the common module.
> >>> Ok, I see your point. Then I would suggest we have one script which we 
> >>> could use to run all the components (any subset of of them). So for those 
> >>> components we specified when launching the script we use this local 
> >>> transport. Btw, scheduler eventually should become a standalone component 
> >>> too, so we have 4 components.
> >>>
> • The RPC client/server in oslo.messaging do not determine the 
>  transport.  The transport is determine via oslo.config and then given 
>  explicitly to the RPC client/server.  
>  https://github.com/stackforge/mistral/blob/master/mistral/engine/scalable/engine.py#L31
>   and 
>  https://github.com/stackforge/mistral/blob/master/mistral/cmd/task_executor.py#L63
>   are examples 

[openstack-dev] Taskflow retry controllers - how to?

2014-03-21 Thread Anastasia Karpinska
Hi,

I want to let you know about Retry. This is a new feature in Taskflow.
Retry allows executing a flow or a part of a flow several times if it
fails. Retry is very similar to a task: it can be executed or reverted, and it
accepts and returns values in a similar way to a Task. It also has an
'on_failure' method that allows handling a flow failure and deciding whether
to revert the whole flow or to revert only the failed part and retry it again.
Retry has a 'history' parameter; this is a history of all previous results
returned by this retry and the flow failures raised on each previous try.

Here is an example of a Retry that retries the flow if some error has been
raised by a task and reverts the flow if CriticalError has been raised.

class SimpleRetry(Retry):

    def on_failure(self, history, *args, **kwargs):
        # History is a list of tuples:
        #     (retry_result, {'task_name': misc.Failure})
        # retry_result is the result returned by the retry execute method,
        # 'task_name' is the name of the failed task in the current subflow,
        # and misc.Failure is an object that wraps the raised exception.
        # This code fetches a dictionary of the latest errors.
        last_errors = history[-1][1]

        # Check the errors' types
        for task_name, failure in last_errors.items():
            # Revert the flow if CriticalError has occurred
            if failure.check(CriticalError):
                return REVERT
        # Retry the flow if any other error occurred
        return RETRY

    def execute(self, history, *args, **kwargs):
        attempt = len(history) + 1
        print "Retrying the flow, attempt %s" % attempt
        return attempt

    def revert(self, history, *args, **kwargs):
        print "Reverting the flow"


Here is an example of a flow with a Retry:

flow = linear_flow.Flow("my_flow",
                        retry=SimpleRetry("my_retry")).add(Task1(), Task2())

In case of a Task1 or Task2 failure the SimpleRetry.on_failure method will be
called. If the retry returns RETRY then Task1 and Task2 will be reverted, and
the SimpleRetry and the tasks will be executed again; otherwise the whole flow
will be reverted.
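
To actually run it, a minimal sketch (Task1, Task2 and SimpleRetry as defined
above; the engine is what calls on_failure() on a task failure and acts on
the returned RETRY/REVERT decision):

from taskflow import engines
from taskflow.patterns import linear_flow

# Same flow as above, with the retry controller attached
flow = linear_flow.Flow("my_flow",
                        retry=SimpleRetry("my_retry")).add(Task1(), Task2())

# Run it; failed attempts are retried or reverted per SimpleRetry.on_failure
engines.run(flow)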

There are some predefined retries that can be easily used in most common
cases. You can find the base Retry class and all predefined retries in the
taskflow.retry package.

An example of Retry usage can be found in the taskflow.examples.retry_flow.py file.

Wiki page is here https://wiki.openstack.org/wiki/TaskFlow/Retry


Thanks!

Anastasia
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] We need a new version of hacking for Icehouse, or provide compatibility with oslo.sphinx in oslosphinx

2014-03-21 Thread Thomas Goirand
Hi,

The current version of python-hacking wants python-oslo.sphinx, but
we're moving to python-oslosphinx. In Debian, I made python-oslo.sphinx
a transitional empty package that only depends on python-oslosphinx. As
a consequence, python-hacking needs to be updated to use
python-oslosphinx, otherwise it won't have its build-dependencies available.

I was also thinking about providing a symlink from oslo/sphinx to
oslosphinx. Maybe it'd be nice to have this directly in oslosphinx?

Thoughts anyone?

Cheers,

Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][TaskFlow] Long running actions

2014-03-21 Thread Renat Akhmerov
Valid concerns. It would be great to get Joshua involved in this discussion. If 
it’s possible to do in TaskFlow he could advise on how exactly.

Renat Akhmerov
@ Mirantis Inc.



On 21 Mar 2014, at 16:23, Stan Lagun  wrote:

> Don't forget HA issues. Mistral can be restarted at any moment and needs to
> be able to proceed, on another instance, from the place where it was
> interrupted. In theory it can be addressed by TaskFlow but I'm not sure it
> can be done without a complete redesign of it
> 
> 
> On Fri, Mar 21, 2014 at 8:33 AM, W Chan  wrote:
> Can the long running task be handled by putting the target task in the 
> workflow in a persisted state until either an event triggers it or timeout 
> occurs?  An event (human approval or trigger from an external system) sent to 
> the transport will rejuvenate the task.  The timeout is configurable by the 
> end user up to a certain time limit set by the mistral admin.  
> 
> Based on the TaskFlow examples, it seems like the engine instance managing 
> the workflow will be in memory until the flow is completed.  Unless there's 
> other options to schedule tasks in TaskFlow, if we have too many of these 
> workflows with long running tasks, seems like it'll become a memory issue for 
> mistral...
> 
> 
> On Thu, Mar 20, 2014 at 3:07 PM, Dmitri Zimine  wrote:
> 
>> For the 'asynchronous manner' discussion see http://tinyurl.com/n3v9lt8; I'm 
>> still not sure why u would want to make is_sync/is_async a primitive concept 
>> in a workflow system, shouldn't this be only up to the entity running the 
>> workflow to decide? Why is a task allowed to be sync/async, that has major 
>> side-effects for state-persistence, resumption (and to me is a incorrect 
>> abstraction to provide) and general workflow execution control, I'd be very 
>> careful with this (which is why I am hesitant to add it without much much 
>> more discussion).
> 
> 
> Let's remove the confusion caused by "async". All tasks [may] run async from 
> the engine standpoint, agreed. 
> 
> "Long running tasks" - that's it.
> 
> Examples: wait_5_days, run_hadoop_job, take_human_input. 
> The Task doesn't do the job: it delegates to an external system. The flow 
> execution needs to wait (5 days passed, hadoop job finished with data x,
> user inputs y), and then continue with the received results.
> 
> The requirement is to survive a restart of any WF component without losing
> the state of the long running operation.
> 
> Does TaskFlow already have a way to do it? Or ongoing ideas, considerations? 
> If yes let's review. Else let's brainstorm together. 
> 
> I agree,
>> that has major side-effects for state-persistence, resumption (and to me is 
>> a incorrect abstraction to provide) and general workflow execution control, 
>> I'd be very careful with this
> 
> But these requirements come from customers' use cases: wait_5_days - 
> lifecycle management workflow, long running external system - Murano 
> requirements, user input - workflow for operation automations with control 
> gate checks, provisions which require 'approval' steps, etc. 
> 
> DZ> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Sincerely yours
> Stanislav (Stan) Lagun
> Senior Developer
> Mirantis
> 35b/3, Vorontsovskaya St.
> Moscow, Russia
> Skype: stanlagun
> www.mirantis.com
> sla...@mirantis.com
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Thierry Carrez
Yuriy Taraday wrote:
> Benchmark included showed on my machine these numbers (average over 100
> iterations):
> 
> Running 'ip a':
>   ip a :   4.565ms
>  sudo ip a :  13.744ms
>sudo rootwrap conf ip a : 102.571ms
> daemon.run('ip a') :   8.973ms
> Running 'ip netns exec bench_ns ip a':
>   sudo ip netns exec bench_ns ip a : 162.098ms
> sudo rootwrap conf ip netns exec bench_ns ip a : 268.115ms
>  daemon.run('ip netns exec bench_ns ip a') : 129.876ms
> 
> So it looks like running daemon is actually faster than running "sudo".

That's pretty good! However I fear that the extremely simplistic filter
rule file you fed on the benchmark is affecting numbers. Could you post
results from a realistic setup (like same command, but with all the
filter files normally found on a devstack host ?)
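
For a quick sanity check on another box, something as simple as this would
do (a rough sketch only, not the benchmark script attached to the review):

import subprocess
import time


def bench(cmd, iterations=100):
    # Average wall-clock time per invocation, in milliseconds
    start = time.time()
    for _ in range(iterations):
        subprocess.check_output(cmd, shell=True)
    return (time.time() - start) / iterations * 1000.0


for cmd in ('ip a',
            'sudo ip a',
            'sudo rootwrap /etc/neutron/rootwrap.conf ip a'):
    print('%50s : %8.3f ms' % (cmd, bench(cmd)))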

Thanks,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-21 Thread Thierry Carrez
Yuriy Taraday wrote:
> On Thu, Mar 20, 2014 at 5:41 PM, Miguel Angel Ajo wrote:
>>If this coupled to neutron in a way that it can be accepted for
>> Icehouse (we're killing a performance bug), or that at least it can
>> be y backported, you'd be covering both the short & long term needs.
> 
> As I said on the meeting I plan to provide change request to Neutron
> with some integration with this patch.
> I'm also going to engage people involved in rootwrap about my change
> request.

Temporarily removing my rootwrap maintainer hat and putting on my
OpenStack release manager hat: as you probably know we are well into
Icehouse feature freeze at this point, and there is no way I would
consider such a significant change for inclusion in the Icehouse release
at this point.

The work on both the daemon and the shedskin stuff is very promising,
but the nature of this beast makes it necessary to undergo a lot of
testing and security audits before it can be accepted. Not exactly
something I'd consider 4 weeks before a final release.

Frankly, this issue has been on the table forever and this is just the
wrong timing to rush a new implementation to fix it.

I filed a rootwrap session for the Juno Design summit -- ideally we'll
have various solutions ready by then and we'd make the final choice for
early integration in Juno, leaving plenty of time to catch the weird
regressions (or security holes) that it may cause.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral][TaskFlow] Long running actions

2014-03-21 Thread Stan Lagun
Don't forget HA issues. Mistral can be restarted at any moment and needs to
be able to proceed, on another instance, from the place where it was
interrupted. In theory it can be addressed by TaskFlow but I'm not sure it
can be done without a complete redesign of it


On Fri, Mar 21, 2014 at 8:33 AM, W Chan  wrote:

> Can the long running task be handled by putting the target task in the
> workflow in a persisted state until either an event triggers it or timeout
> occurs?  An event (human approval or trigger from an external system) sent
> to the transport will rejuvenate the task.  The timeout is configurable by
> the end user up to a certain time limit set by the mistral admin.
>
> Based on the TaskFlow examples, it seems like the engine instance managing
> the workflow will be in memory until the flow is completed.  Unless there's
> other options to schedule tasks in TaskFlow, if we have too many of these
> workflows with long running tasks, seems like it'll become a memory issue
> for mistral...
>
>
> On Thu, Mar 20, 2014 at 3:07 PM, Dmitri Zimine  wrote:
>
>>
>> For the 'asynchronous manner' discussion see http://tinyurl.com/n3v9lt8;
>> I'm still not sure why u would want to make is_sync/is_async a primitive
>> concept in a workflow system, shouldn't this be only up to the entity
>> running the workflow to decide? Why is a task allowed to be sync/async,
>> that has major side-effects for state-persistence, resumption (and to me is
>> a incorrect abstraction to provide) and general workflow execution control,
>> I'd be very careful with this (which is why I am hesitant to add it without
>> much much more discussion).
>>
>>
>> Let's remove the confusion caused by "async". All tasks [may] run async
>> from the engine standpoint, agreed.
>>
>> "Long running tasks" - that's it.
>>
>> Examples: wait_5_days, run_hadoop_job, take_human_input.
>> The Task doesn't do the job: it delegates to an external system. The flow
>> execution needs to wait (5 days passed, hadoop job finished with data x,
>> user inputs y), and then continue with the received results.
>>
>> The requirement is to survive a restart of any WF component without
>> losing the state of the long running operation.
>>
>> Does TaskFlow already have a way to do it? Or ongoing ideas,
>> considerations? If yes let's review. Else let's brainstorm together.
>>
>> I agree,
>>
>> that has major side-effects for state-persistence, resumption (and to me
>> is a incorrect abstraction to provide) and general workflow execution
>> control, I'd be very careful with this
>>
>> But these requirements come from customers' use cases: wait_5_days -
>> lifecycle management workflow, long running external system - Murano
>> requirements, user input - workflow for operation automations with control
>> gate checks, provisions which require 'approval' steps, etc.
>>
>> DZ>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Sincerely yours
Stanislav (Stan) Lagun
Senior Developer
Mirantis
35b/3, Vorontsovskaya St.
Moscow, Russia
Skype: stanlagun
www.mirantis.com
sla...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] OVS 2.1.0 is available but not the Neutron ARP responder

2014-03-21 Thread Édouard Thuleau
Hi,

Just to inform you that the new OVS release 2.1.0 came out yesterday [1].
This release contains new features and significant performance improvements
[2].

And among those new features, one [3] was used to add a local ARP responder
to the OVS agent with the ML2 plugin and the l2-pop MD [4]. Perhaps it's time
to reconsider that review?

[1] https://www.mail-archive.com/discuss@openvswitch.org/msg09251.html
[2] http://openvswitch.org/releases/NEWS-2.1.0
[3]
http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=commitdiff;h=f6c8a6b163af343c66aea54953553d84863835f7
[4] https://review.openstack.org/#/c/49227/

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >