Re: [openstack-dev] [Neutron][ML2]

2014-03-05 Thread Aaron Rosen
Hi Nader,

Devstack's default plugin is ML2. Usually you wouldn't 'inherit' one plugin
in another. I'm guessing you probably want to write a driver that ML2 can
use, though it's hard to tell from the information you've provided what
you're trying to do.
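For illustration, a very rough mechanism driver skeleton (assuming the
Icehouse module layout; the class name and method bodies are purely
illustrative) might look like:

    from neutron.plugins.ml2 import driver_api as api

    class MyMechanismDriver(api.MechanismDriver):
        """Illustrative skeleton only -- nothing here is real backend code."""

        def initialize(self):
            # One-time setup, e.g. read config or connect to a backend controller.
            pass

        def create_port_postcommit(self, context):
            # Called after the port is committed to the Neutron DB;
            # push the new port to the backend here.
            pass

Such a driver is then typically registered under the
neutron.ml2.mechanism_drivers entry point and enabled through the
mechanism_drivers option in ml2_conf.ini, rather than by replacing
core_plugin.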

Best,

Aaron


On Wed, Mar 5, 2014 at 10:42 PM, Nader Lahouti wrote:

> Hi All,
>
> I have a question regarding ML2 plugin in neutron:
> My understanding is that, 'Ml2Plugin' is the default core_plugin for
> neutron ML2. We can use either the default plugin or our own plugin (i.e.
> my_ml2_core_plugin that can be inherited from Ml2Plugin) and use it as
> core_plugin.
>
> Is my understanding correct?
>
>
> Regards,
> Nader.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][wsme][pecan] Need help for ceilometer havana alarm issue

2014-03-05 Thread Mehdi Abaakouk
Hi,

On Thu, Mar 06, 2014 at 10:44:25AM +0800, ZhiQiang Fan wrote:
> I already checked the stable/havana and master branches via devstack; the
> problem is still in havana, but the master branch is not affected.
> 
> I think it is important to fix it for havana too, since some high-level
> applications may depend on the returned faultstring. Currently, I'm not
> sure whether the master branch fixed it in the pecan or wsme module, or in
> ceilometer itself.
> 
> Is there anyone who can help with this problem?

This is a duplicate bug of https://bugs.launchpad.net/ceilometer/+bug/1260398

This one has already been fixed. I have marked havana as affected, so we
can consider it if we cut a new havana version.

Feel free to prepare the backport.

Regards,

-- 
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-05 Thread Ladislav Smola

On 03/06/2014 04:47 AM, Jason Rist wrote:

On Wed 05 Mar 2014 03:36:22 PM MST, Lyle, David wrote:

I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his reviews 
very insightful and more importantly have come to rely on their quality. He has 
contributed to several areas in Horizon and he understands the code base well.  
Radomir is also very active in tuskar-ui both contributing and reviewing.

David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

As someone who benefits from his insightful reviews, I second the
nomination.


I agree, Radomir has been doing excellent reviews and patches in both 
projects.



--
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
+1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-05 Thread Tihomir Trifonov
+1


On Thu, Mar 6, 2014 at 5:47 AM, Jason Rist  wrote:

> On Wed 05 Mar 2014 03:36:22 PM MST, Lyle, David wrote:
> > I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his
> reviews very insightful and more importantly have come to rely on their
> quality. He has contributed to several areas in Horizon and he understands
> the code base well.  Radomir is also very active in tuskar-ui both
> contributing and reviewing.
> >
> > David
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> As someone who benefits from his insightful reviews, I second the
> nomination.
>
> --
> Jason E. Rist
> Senior Software Engineer
> OpenStack Management UI
> Red Hat, Inc.
> +1.720.256.3933
> Freenode: jrist
> github/identi.ca: knowncitizen
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Tihomir Trifonov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread IWAMOTO Toshihiro
At Wed, 05 Mar 2014 15:42:54 +0100,
Miguel Angel Ajo wrote:
> 3) I also find 10 minutes a long time to set up 192 networks/basic tenant
> structures. I wonder if that time could be reduced by converting
> system process calls into system library calls (I know we don't have
> libraries for iproute, iptables?, and many other things... but it's a
> problem that's probably worth looking at.)

Try benchmarking

   $ sudo ip netns exec qfoobar /bin/echo

Network namespace switching costs almost as much as a rootwrap
execution, IIRC.

Execution coalescing is not enough in this case and we would need to
change how Neutron issues commands, IMO.
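For a quick (admittedly unscientific) comparison, a small timing sketch
along these lines can show the per-call overhead ('qfoobar' is just a
placeholder namespace name):

    import subprocess
    import time

    def avg_runtime(cmd, runs=50):
        # Average wall-clock time of running the command 'runs' times.
        start = time.time()
        for _ in range(runs):
            subprocess.call(cmd)
        return (time.time() - start) / runs

    plain = avg_runtime(['sudo', '/bin/echo'])
    netns = avg_runtime(['sudo', 'ip', 'netns', 'exec', 'qfoobar', '/bin/echo'])
    print('plain: %.4fs, netns exec: %.4fs, overhead: %.4fs'
          % (plain, netns, netns - plain))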




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][ML2]

2014-03-05 Thread Nader Lahouti
Hi All,

I have a question regarding ML2 plugin in neutron:
My understanding is that 'Ml2Plugin' is the default core_plugin for
neutron ML2. We can use either the default plugin or our own plugin (i.e.
my_ml2_core_plugin, which can inherit from Ml2Plugin) and use it as
core_plugin.

Is my understanding correct?


Regards,
Nader.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Adds PCI support for the V3 API (just one patch in novaclient)

2014-03-05 Thread Michael Still
On Thu, Mar 6, 2014 at 12:26 PM, Tian, Shuangtai
 wrote:
>
> Hi,
>
> I would like to make a request for FFE for one patch in novaclient for PCI
> V3 API : https://review.openstack.org/#/c/75324/

[snip]

> BTW the PCI Patches in V2 will defer to Juno.

I'm confused. If this isn't landing in v2 in icehouse I'm not sure we
should do a FFE for v3. I don't think right at this moment we want to
be encouraging users to user v3 so why does waiting matter?

Michael

-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Cinder] Feature about volume delete protection

2014-03-05 Thread Zhangleiqiang
Hi all,

Currently OpenStack provides a delete-volume function to the user,
but it seems there is no protection against an accidental delete operation.

As we know, the data in a volume may be very important and valuable,
so it would be better to give the user a way to avoid deleting a volume by mistake.

For example:
We can provide a safe delete for the volume.
The user can specify how long the volume will be kept before it is actually
deleted when issuing the delete.
Before the volume is actually deleted, the user can cancel the delete
operation and recover the volume.
After the specified time, the volume is actually deleted by the system.
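To make the idea concrete, a purely hypothetical client-side sketch (none of
these calls or parameters exist today; they only illustrate the proposal)
could look like:

    # Hypothetical API, only illustrating the proposed deferred delete.
    client.volumes.delete(volume_id, deferred='24h')  # enters a 'pending delete' state
    client.volumes.restore(volume_id)                 # user cancels within the window
    # If no restore happens within 24h, the system purges the volume for real.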

Any thoughts? Any advice is welcome.

Best regards to you.


--
zhangleiqiang

Best Regards



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] DSL model vs. DB model, renaming

2014-03-05 Thread Renat Akhmerov
Alright, good input Manas, appreciate.

My comments are below...

On 06 Mar 2014, at 10:47, Manas Kelshikar  wrote:
> Do we have better ideas on how to work with DSL? A good mental exercise here 
> would be to imagine that we have more than one DSL, not only YAML but say 
> XML. How would it change the picture?
> [Manas] As long as we form an abstraction between the DSL format, i.e.
> YAML/XML, and its consumption, we will be able to move between various formats.
> (Wishful) My personal preference is to not even have the DSL show up anywhere in
> Mistral code apart from taking it as input and transforming it into this
> first-level specification model - I know this is not the current state.

Totally agree with your point. That’s what we’re trying to achieve.
> How can we clearly distinguish between these two models so that it wouldn’t 
> be confusing?
> Do we have a better naming in mind?
> [Manas] I think we all would agree that the best approach is to have precise 
> naming.
> 
> I see your point of de-normalizing the dsl data into respective db model 
> objects.
> 
> In a previous email I suggested using *Spec. I will try to build on this -
> 1. Everything specified via the YAML input is a specification or definition 
> or template. Therefore I suggest we suffix all these types with 
> Spec/Definition/Template. So TaskSpec/TaskDefinition/TaskTemplate etc.. As 
> per the latest change these are TaskData ... ActionData. 

After all the time I spent thinking about it I would choose Spec suffix since 
it’s short and expresses the intention well enough. In conjunction with 
“workbook” package name it would look very nice (basically we get specification 
of workbook which is what we’re talking about, right?)

So if you agree then let’s change to TaskSpec, ActionSpec etc. Nikolay, sorry 
for making you change this patch again and again :) But it’s really important 
and going to have a long-term effect at the entire system.

> 2. As per current impl the YAML is stored as a key-value in the DB. This is 
> fine since that is front-ended by objects that Nikolay has introduced. e.g. 
> TaskData, ActionData etc.

Yep, right. The only thing I would suggest is to avoid DB fields like 
“task_dsl” like we have now. The alternative could be “task_spec”.

> 3. As per my thinking a model object that ends up in the DB or a model object 
> that is in memory all can reside in the same module. I view persistence as an 
> orthogonal concern so no real reason to distinguish the module names of the 
> two set of models. If we do choose to distinguish as per latest change i.e. 
> mistral/workbook that works too.

Sorry, I believe I wasn’t clear enough on this thing. I think we shouldn’t have 
these two models in the same package since what I meant by “DB model” is 
actually “execution” and “task” that carry workflow runtime information and 
refer to a particular execution (we could also call it “session”). So my point 
is that these are fundamentally different types of models. The best analogy 
that comes to my mind is the relationship “class -> instance” where in our case 
“class = Specification" (TaskSpec etc.) and “instance = Execution/Task”. Does 
it make any sense?
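A tiny sketch of that distinction (field names are invented, purely to
illustrate the "class -> instance" analogy):

    class TaskSpec(object):
        """Immutable description parsed from the workbook DSL (the 'class')."""
        def __init__(self, name, action_name, parameters):
            self.name = name
            self.action_name = action_name
            self.parameters = parameters

    class Task(object):
        """Runtime state of one task within one execution (the 'instance')."""
        def __init__(self, spec, execution_id):
            self.spec = spec                  # refers back to a TaskSpec
            self.execution_id = execution_id  # ties it to a particular run
            self.state = 'IDLE'               # IDLE -> RUNNING -> SUCCESS/ERROR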

> @Nikolay - I am generally ok with the approach. I hope that this helps 
> clarify my thinking and perception. Please ask more questions.
> 
> Overall I like the approach of formalizing the 2 models. I am ok with current 
> state of the review and have laid out my preferences.

I like the current state of this patch. The only thing I would do is renaming 
“Data” to “Spec”.

Thank you.

Renat Akhmerov
@ Mirantis Inc.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-05 Thread Georgy Okrokvertskhov
Hi Steve,

Thank you for sharing your thoughts. I believe that is the kind of feedback
we were trying to get from the TC. The current definition of a program
actually suggests the scenario you described. A new project would appear
under the Orchestration umbrella. Let's say there would be two projects: one
is Heat and another is Workflow (no specific name here, probably some part of
Murano). The program would have one PTL (the current Heat PTL) and two
separate code teams, one for each project. That was our understanding of
what we want; I am not sure this was stressed enough at the TC meeting.
There was no intention to add anything to Heat. Not at all. We just discussed
the possibility of splitting the current Murano App Catalog into two parts.
The catalog part would go to the Catalog program, so the App Catalog code
lands near the Glance project and integrates with it, as Glance will store
application packages for the Murano App Catalog service. The second part of
Murano, related to environment processing (deployment, life cycle management,
events), would go to the Orchestration program as a new project, like Murano
workflows or Murano environment control or anything else.

As I mentioned in one of the previous e-mails, we already discussed
workflows & Heat with the Heat team before the HK summit. We understand very
well that workflows will not fit in Heat, and we fully understand the
reasons why.

I think the good result of the last TC meeting was the official mandate to
discuss alignment and integration between the Glance, Heat, Murano and
probably other projects. Right now we are considering the following:
1) Continue the discussion around the Catalog program mission and how the
Murano App Catalog will fit into this program.
2) Start a conversation with the Heat team in two directions:
  a) TOSCA and its implementation. How Murano can extend TOSCA and how
TOSCA can help Murano to define an application package. Murano should reuse
as much as possible from TOSCA to implement this open standard.
  b) Define the alignment between Heat and Murano. How workflows can
coexist with HOT. What will be the best way to develop both Heat and
Workflows within the Orchestration program.
3) Explore the application space for OpenStack. As Thierry mentioned at the
TC meeting, there are concerns that it is probably too early for OpenStack
to take a new step up the stack.

Thanks,
Georgy


On Wed, Mar 5, 2014 at 7:47 PM, Steven Dake  wrote:

> On 03/05/2014 02:16 AM, Thomas Spatzier wrote:
>
>> Georgy Okrokvertskhov  wrote on 05/03/2014
>> 00:32:08:
>>
>>  From: Georgy Okrokvertskhov 
>>> To: "OpenStack Development Mailing List (not for usage questions)"
>>> 
>>> Date: 05/03/2014 00:34
>>> Subject: Re: [openstack-dev] Incubation Request: Murano
>>>
>>> Hi Thomas, Zane,
>>>
>>> Thank you for bringing TOSCA to the discussion. I think this is
>>> important topic as it will help to find better alignment or even
>>> future merge of Murano DSL and Heat templates. Murano DSL uses YAML
>>> representation too, so we can easily merge or use constructions from
>>> Heat and probably any other YAML-based TOSCA formats.
>>>
>>> I will be glad to join TOSCA TC. Is there any formal process for that?
>>>
>> The first part is that your company must be a member of OASIS. If that is
>> the case, I think you can simply go to the TC page [1] and click a button
>> to join the TC. If your company is not yet a member, you could get in
>> touch
>> with the TC chairs Paul Lipton and Simon Moser and ask for the best next
>> steps. We recently had people from GigaSpaces join the TC, and since they
>> are also doing very TOSCA aligned implementation in Cloudify, their input
>> will probably help a lot to advance TOSCA.
>>
>>  I also would like to use this opportunity and start conversation
>>> with Heat team about Heat roadmap and feature set. As Thomas
>>> mentioned in his previous e-mail TOSCA topology story is quite
>>> covered by HOT. At the same time there are entities like Plans which
>>> are covered by Murano. We had discussion about bringing workflows to
>>> Heat engine before HK summit and it looks like that Heat team has no
>>> plans to bring workflows into Heat. That is actually why we
>>> mentioned Orchestration program as a potential place for Murano DSL
>>> as Heat+Murano together will cover everything which is defined by TOSCA.
>>>
>> I remember the discussions about whether to bring workflows into Heat or
>> not. My personal opinion is that workflows are probably out of the scope
>> of
>> Heat (i.e. everything but the derived orchestration flows the Heat engine
>> implements). So there could well be a layer on-top of Heat that lets Heat
>> deal with all topology-related declarative business and adds
>> workflow-based
>> orchestration around it. TOSCA could be a way to describe the respective
>> overarching models and then hand the different processing tasks to the
>> right engine to deal with it.
>>
My general take is that workflow would fit in the Orchestration program, but
not be integrated into the heat repo specifically. It would be a different
repo, managed by the same orchestration program, just as we have
heat-cfntools and other repositories.

Re: [openstack-dev] [tempest] who builds other part of test environment

2014-03-05 Thread Gareth
Fixed!

The reason is that .gitignore contains 'build/' but the pep8 check doesn't
ignore it, so many errors are reported for the files under build/. After
removing the build dir, everything runs well.
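An alternative to removing the directory would be to exclude it from the
check in tox.ini (assuming rally runs pep8 via flake8), for example:

    [flake8]
    exclude = .git,.venv,.tox,dist,doc,*egg,build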


On Thu, Mar 6, 2014 at 11:55 AM, Gareth  wrote:

> The difference is that: in my local environment, running "tox -epep8"
> failed but succeeded in Jenkins. So I guess the tox.ini may be different.
>
>
> On Thu, Mar 6, 2014 at 11:38 AM, Joshua Hesketh <
> joshua.hesk...@rackspace.com> wrote:
>
>>  Hi Gareth,
>>
>> Tempest shouldn't touch the test that you are looking at. The Jenkins job
>> should execute using the tox.ini in the rally repository so that part
>> should be the same as your local environment. This is the relevant part
>> loaded onto the Jenkins slaves:
>> https://github.com/openstack-infra/config/blob/master/modules/jenkins/files/slave_scripts/run-pep8.sh
>> .
>>
>> What is the specific difference you were concerned with?
>>
>> Cheers,
>> Josh
>>
>> Rackspace Australia
>>
>> On 3/6/14 1:33 PM, Gareth wrote:
>>
>> Hi
>>
>>  Here a test result:
>> http://logs.openstack.org/25/78325/3/check/gate-rally-pep8/323b39c/console.html
>> and its result is different from my local environment. So I want to
>> check some details of official test environment, for example
>> /home/jenkins/workspace/gate-rally-pep8/tox.ini.
>>
>>  I guess it is in tempest repo but it isn't. I didn't find any test
>> tox.ini file in tempest repo. So it should be hosted in another repo. Which
>> is that one?
>>
>>  thanks
>>
>>  --
>> Gareth
>>
>>  *Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
>> *OpenStack contributor, kun_huang@freenode*
>> *My promise: if you find any spelling or grammar mistakes in my email
>> from Mar 1 2013, notify me *
>> *and I'll donate $1 or ¥1 to an open organization you specify.*
>>
>>
>> ___
>> OpenStack-dev mailing 
>> listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Gareth
>
> *Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
> *OpenStack contributor, kun_huang@freenode*
> *My promise: if you find any spelling or grammar mistakes in my email from
> Mar 1 2013, notify me *
> *and I'll donate $1 or ¥1 to an open organization you specify.*
>



-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] who builds other part of test environment

2014-03-05 Thread Gareth
The difference is that: in my local environment, running "tox -epep8"
failed but succeeded in Jenkins. So I guess the tox.ini may be different.


On Thu, Mar 6, 2014 at 11:38 AM, Joshua Hesketh <
joshua.hesk...@rackspace.com> wrote:

>  Hi Gareth,
>
> Tempest shouldn't touch the test that you are looking at. The Jenkins job
> should execute using the tox.ini in the rally repository so that part
> should be the same as your local environment. This is the relevant part
> loaded onto the Jenkins slaves:
> https://github.com/openstack-infra/config/blob/master/modules/jenkins/files/slave_scripts/run-pep8.sh
> .
>
> What is the specific difference you were concerned with?
>
> Cheers,
> Josh
>
> Rackspace Australia
>
> On 3/6/14 1:33 PM, Gareth wrote:
>
> Hi
>
>  Here a test result:
> http://logs.openstack.org/25/78325/3/check/gate-rally-pep8/323b39c/console.html
> and its result is different from my local environment. So I want to
> check some details of official test environment, for example
> /home/jenkins/workspace/gate-rally-pep8/tox.ini.
>
>  I guess it is in tempest repo but it isn't. I didn't find any test
> tox.ini file in tempest repo. So it should be hosted in another repo. Which
> is that one?
>
>  thanks
>
>  --
> Gareth
>
>  *Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
> *OpenStack contributor, kun_huang@freenode*
> *My promise: if you find any spelling or grammar mistakes in my email from
> Mar 1 2013, notify me *
> *and I'll donate $1 or ¥1 to an open organization you specify.*
>
>
> ___
> OpenStack-dev mailing 
> listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] DSL model vs. DB model, renaming

2014-03-05 Thread Manas Kelshikar
Renat,

Thanks for the detailed explanation. It is quite instructive and helps
frame the question accurately by providing the necessary context.

   - Is the approach itself reasonable?

[Manas] I think so. A repeatable workflow requires this sort of separation,
i.e. a model in which to save the specification as supplied by the user and
a model in which to save the execution state - current and past.

   - Do we have better ideas on how to work with DSL? A good mental
   exercise here would be to imagine that we have more than one DSL, not only
   YAML but say XML. How would it change the picture?

[Manas] As long as we form an abstraction between the DSL format, i.e.
YAML/XML, and its consumption, we will be able to move between various
formats. (Wishful) My personal preference is to not even have the DSL show up
anywhere in Mistral code apart from taking it as input and transforming it
into this first-level specification model - I know this is not the current state.

   - How can we clearly distinguish between these two models so that it
   wouldn't be confusing?
   - Do we have a better naming in mind?

[Manas] I think we all would agree that the best approach is to have
precise naming.

I see your point of de-normalizing the dsl data into respective db model
objects.

In a previous email I suggested using *Spec. I will try to build on this -
1. Everything specified via the YAML input is a specification or definition
or template. Therefore I suggest we suffix all these types with
Spec/Definition/Template. So TaskSpec/TaskDefinition/TaskTemplate etc.. As
per the latest change these are TaskData ... ActionData.
2. As per current impl the YAML is stored as a key-value in the DB. This is
fine since that is front-ended by objects that Nikolay has introduced. e.g.
TaskData, ActionData etc.
3. As per my thinking a model object that ends up in the DB or a model
object that is in memory all can reside in the same module. I view
persistence as an orthogonal concern so no real reason to distinguish the
module names of the two set of models. If we do choose to distinguish as
per latest change i.e. mistral/workbook that works too.

@Nikolay - I am generally ok with the approach. I hope that this helps
clarify my thinking and perception. Please ask more questions.

Overall I like the approach of formalizing the 2 models. I am ok with
current state of the review and have laid out my preferences.

Thanks,
Manas

On Wed, Mar 5, 2014 at 3:39 AM, Nikolay Makhotkin
wrote:

> I think today and I have a good name for package (instead of
> 'mistral/model')
>
> What do you think about naming it 'mistral/workbook'? I.e., it means that
> it contains modules for working with the workbook representation - tasks,
> services, actions and workflow.
>
> This way we are able to get rid of any confusion.
>
>
> Best Regards,
> Nikolay
>
>
> On Wed, Mar 5, 2014 at 8:50 AM, Renat Akhmerov wrote:
>
>> I think we forgot to point to the commit itself. Here it is:
>> https://review.openstack.org/#/c/77126/
>>
>> Manas, can you please provide more details on your suggestion?
>>
>> For now let me just describe the background of Nikolay's question.
>>
>> Basically, we are talking about how we are working with data inside
>> Mistral. So far, for example, if a user sent a request to Mistral "start
>> workflow" then Mistral would do the following:
>>
>>- Get workbook DSL (YAML) from the DB (given that it's been already
>>persisted earlier).
>>- Load it into a dictionary-like structure using standard 'yaml'
>>library.
>>- Based on this dictionary-like structure create all necessary DB
>>objects to track the state of workflow execution objects and individual
>>tasks.
>>- Perform all the necessary logic in engine and so on. The important
>>thing here is that DB objects contain corresponding DSL snippets as they
>>are described in DSL (e.g. tasks have property "task_dsl") to reduce the
>>complexity of relational model that we have in DB. Otherwise it would be
>>really complicated and most of the queries would contain lots of joins. 
>> The
>>example of non-trivial relation in DSL is "task"->"action
>>name"->"service"->"service actions"->"action", as you can see it would be
>>hard to navigate to action in the DB from a task if our relational model
>>matches to what we have in DSL. this approach leads to the problem of
>>addressing any dsl properties using hardcoded strings which are spread
>>across the code and that brings lots of pain when doing refactoring, when
>>trying to understand the structure of the model we describe in DSL, it
>>doesn't allow to do validation easily and so on.
>>
>>
>> So far we've called what we have in the DB the "model", and we've called the
>> dictionary structure coming from DSL just "dsl". So if we got a part of the
>> structure related to a task we would call it "dsl_task".
>>
>> So what Nikolay is doing now is he's reworking the approach how we work
>> with DSL. Now we as

Re: [openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-05 Thread Jason Rist
On Wed 05 Mar 2014 03:36:22 PM MST, Lyle, David wrote:
> I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his reviews 
> very insightful and more importantly have come to rely on their quality. He 
> has contributed to several areas in Horizon and he understands the code base 
> well.  Radomir is also very active in tuskar-ui both contributing and 
> reviewing.
>
> David
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

As someone who benefits from his insightful reviews, I second the 
nomination.

--
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
+1.720.256.3933
Freenode: jrist
github/identi.ca: knowncitizen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Incubation Request: Murano

2014-03-05 Thread Steven Dake

On 03/05/2014 02:16 AM, Thomas Spatzier wrote:

Georgy Okrokvertskhov  wrote on 05/03/2014
00:32:08:


From: Georgy Okrokvertskhov 
To: "OpenStack Development Mailing List (not for usage questions)"

Date: 05/03/2014 00:34
Subject: Re: [openstack-dev] Incubation Request: Murano

Hi Thomas, Zane,

Thank you for bringing TOSCA to the discussion. I think this is
important topic as it will help to find better alignment or even
future merge of Murano DSL and Heat templates. Murano DSL uses YAML
representation too, so we can easily merge use constructions from
Heat and probably any other YAML based TOSCA formats.

I will be glad to join TOSCA TC. Is there any formal process for that?

The first part is that your company must be a member of OASIS. If that is
the case, I think you can simply go to the TC page [1] and click a button
to join the TC. If your company is not yet a member, you could get in touch
with the TC chairs Paul Lipton and Simon Moser and ask for the best next
steps. We recently had people from GigaSpaces join the TC, and since they
are also doing very TOSCA aligned implementation in Cloudify, their input
will probably help a lot to advance TOSCA.


I also would like to use this opportunity and start conversation
with Heat team about Heat roadmap and feature set. As Thomas
mentioned in his previous e-mail TOSCA topology story is quite
covered by HOT. At the same time there are entities like Plans which
are covered by Murano. We had discussion about bringing workflows to
Heat engine before HK summit and it looks like that Heat team has no
plans to bring workflows into Heat. That is actually why we
mentioned Orchestration program as a potential place for Murano DSL
as Heat+Murano together will cover everything which is defined by TOSCA.

I remember the discussions about whether to bring workflows into Heat or
not. My personal opinion is that workflows are probably out of the scope of
Heat (i.e. everything but the derived orchestration flows the Heat engine
implements). So there could well be a layer on-top of Heat that lets Heat
deal with all topology-related declarative business and adds workflow-based
orchestration around it. TOSCA could be a way to describe the respective
overarching models and then hand the different processing tasks to the
right engine to deal with it.
My general take is that workflow would fit in the Orchestration program, but 
not be integrated into the heat repo specifically.  It would be a 
different repo, managed by the same orchestration program, just as we 
have heat-cfntools and other repositories.  Figuring out who makes up the 
core team responsible for the program's individual repositories is the 
most difficult aspect of making such a merge.  For example, I would not 
want a bunch of folks from Murano +2/+A'ing heat-specific repos until they 
understood the code base in detail, or at least the broad architecture.  I 
think the same thing applies in reverse from the Murano perspective.  
Ideally, folks that are core on a specific program would need to figure out 
how to broadly review each repo (meaning the heat devs would have to come 
up to speed on murano and murano devs would have to come up to speed on 
heat).  Learning a new code base is a big commitment for an already 
overtaxed core team.


I believe expanding our scope in this way would require TC approval.

The main reason I don't want workflow in the heat repo specifically is 
because it adds complication to Heat itself.  We want Heat to be one 
nice tidy small set of code that does one thing really well. This makes 
it easy to improve, easy to deploy, and easy to learn!


These reasons are why, for example, we are continuing to push the 
autoscaling implementation out of Heat and into a separate repository 
over the next 1 to 2 cycles.  This, on the other hand, won't be an 
expansion of scope of the Orchestration program, because we already do 
autoscaling; we just want to make it more consumable.


Regards,
-steve



I think the TOSCA initiative can be a great place to collaborate. I
think it will then be possible to use the Simplified TOSCA format for
application descriptions, as TOSCA is intended to provide such
descriptions.

Is there a team driving the TOSCA implementation in the OpenStack
community? I feel that such a team is necessary.

We started to implement a TOSCA YAML to HOT converter and our team member
Sahdev (IRC spzala) has recently submitted code for a new stackforge
project [2]. This is very initial, but could be a point to collaborate.

[1] https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=tosca
[2] https://github.com/stackforge/heat-translator

Regards,
Thomas


Thanks
Georgy

On Tue, Mar 4, 2014 at 2:36 PM, Thomas Spatzier


wrote:

Excerpt from Zane Bitter's message on 04/03/2014 23:16:21:

From: Zane Bitter 
To: openstack-dev@lists.openstack.org
Date: 04/03/2014 23:20
Subject: Re: [openstack-dev] Incubation Request: Murano

On 04/03/14 00:04, Georgy Okrokvertskhov w

Re: [openstack-dev] [tempest] who builds other part of test environment

2014-03-05 Thread Joshua Hesketh

Hi Gareth,

Tempest shouldn't touch the test that you are looking at. The Jenkins 
job should execute using the tox.ini in the rally repository so that 
part should be the same as your local environment. This is the relevant 
part loaded onto the Jenkins slaves: 
https://github.com/openstack-infra/config/blob/master/modules/jenkins/files/slave_scripts/run-pep8.sh.


What is the specific difference you were concerned with?

Cheers,
Josh

Rackspace Australia

On 3/6/14 1:33 PM, Gareth wrote:

Hi

Here is a test result: 
http://logs.openstack.org/25/78325/3/check/gate-rally-pep8/323b39c/console.html 
and its result is different from my local environment. So I want to 
check some details of the official test environment, for example 
/home/jenkins/workspace/gate-rally-pep8/tox.ini.


I guess it is in tempest repo but it isn't. I didn't find any test 
tox.ini file in tempest repo. So it should be hosted in another repo. 
Which is that one?


thanks

--
Gareth

/Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball/
/OpenStack contributor, kun_huang@freenode/
/My promise: if you find any spelling or grammar mistakes in my email 
from Mar 1 2013, notify me /

/and I'll donate $1 or ¥1 to an open organization you specify./


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][nova] Ownership and path to schema definitions

2014-03-05 Thread Christopher Yeoh
On Tue, 04 Mar 2014 13:31:07 -0500
David Kranz  wrote:
> I think it would be a good time to have at least an initial
> discussion about the requirements for theses schemas and where they
> will live. The next step in tempest around this is to replace the
> existing negative test files with auto-gen versions, and most of the
> work in doing that is to define the schemas.
> 
> The tempest framework needs to know the http method, url part,
> expected error codes, and payload description. I believe only the
> last is covered by the current nova schema definitions, with the
> others being some kind of attribute or data associated with the
> method that is doing the validation. Ideally the information being
> used to do the validation could be auto-converted to a more general
> schema that could be used by tempest. I'm interested in what folks
> have to say about this and especially from the folks who are core
> members of both nova and tempest. See below for one example (note
> that the tempest generator does not yet handle "pattern").


So as you've seen, a lot of what is wanted for the tempest framework is
already implicitly known within the method context, which is why it's not
explicitly stated again in the schema. I haven't actually thought
about it a lot, but I suspect the expected-errors decorator is
something that would fit just as well in the validation framework,
however.

Some of the other stuff, such as the url part, descriptions etc., not so
much, as it would be purely duplicate information that would get out of date.
However, for documentation auto-generation it is something that we also
want to have available in an automated fashion.  I did a bit of
exploration early in Icehouse into generating this within the context of
the api samples tests, where we have access to this sort of information, and
I think together we'd have all the info we need; I'm just not sure mashing
them together is the right way to do it.

And from the documentation point of view, we need to have a bit of a
think about whether doc strings on methods should be the canonical way
we produce descriptive information about API methods. On one hand it's
appealing; on the other hand, they tend to be not very useful or very
focused on Nova internals. But we could get much better at it.

Short version - yea I think we want to get to the point where tempest
doesn't generate these manually. But I'm not sure about how we
should do it.
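As a small illustration of how either side can consume such a definition,
the request body can be checked against the nova schema quoted below using
the jsonschema library (a sketch only):

    import jsonschema

    # The get_console_output request schema, as quoted below.
    get_console_output = {
        'type': 'object',
        'properties': {
            'get_console_output': {
                'type': 'object',
                'properties': {
                    'length': {
                        'type': ['integer', 'string'],
                        'minimum': 0,
                        'pattern': '^[0-9]+$',
                    },
                },
                'additionalProperties': False,
            },
        },
        'required': ['get_console_output'],
        'additionalProperties': False,
    }

    # Passes: a well-formed request body.
    jsonschema.validate({'get_console_output': {'length': 10}},
                        get_console_output)
    # Raises jsonschema.ValidationError: unknown top-level property.
    jsonschema.validate({'bogus': {}}, get_console_output)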

Chris

> 
>   -David
> 
>  From nova:
> 
> get_console_output = {
>  'type': 'object',
>  'properties': {
>  'get_console_output': {
>  'type': 'object',
>  'properties': {
>  'length': {
>  'type': ['integer', 'string'],
>  'minimum': 0,
>  'pattern': '^[0-9]+$',
>  },
>  },
>  'additionalProperties': False,
>  },
>  },
>  'required': ['get_console_output'],
>  'additionalProperties': False,
> }
> 
>  From tempest:
> 
> {
>  "name": "get-console-output",
>  "http-method": "POST",
>  "url": "servers/%s/action",
>  "resources": [
>  {"name":"server", "expected_result": 404}
>  ],
>  "json-schema": {
>  "type": "object",
>  "properties": {
>  "os-getConsoleOutput": {
>  "type": "object",
>  "properties": {
>  "length": {
>  "type": ["integer", "string"],
>  "minimum": 0
>  }
>  }
>  }
>  },
>  "additionalProperties": false
>  }
> }
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][nova] Ownership and path to schema definitions

2014-03-05 Thread Kenichi Oomichi

Hi David,

That is an interesting idea, I'd like to join in :-)

> -Original Message-
> From: David Kranz [mailto:dkr...@redhat.com]
> Sent: Wednesday, March 05, 2014 3:31 AM
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] [qa][nova] Ownership and path to schema definitions
> 
> Given that
> 
> 1. there is an ongoing api discussion in which using json schemas is an
> important part
> 2. tempest now has a schema-based auto-generate feature for negative tests
> 
> I think it would be a good time to have at least an initial discussion
> about the requirements for theses schemas and where they will live.
> The next step in tempest around this is to replace the existing negative
> test files with auto-gen versions, and most of the work in doing that
> is to define the schemas.
> 
> The tempest framework needs to know the http method, url part, expected
> error codes, and payload description. I believe only the last is covered
> by the current nova schema definitions, with the others being some kind
> of attribute or data associated with the method that is doing the
> validation.

Now the schema definitions of Nova cover the *request* payload description only.
If Tempest is going to verify the *response* payload in many cases (I hope so),
it would be good to add response schema definitions to Tempest.


> Ideally the information being used to do the validation
> could be auto-converted to a more general schema that could be used by
> tempest. I'm interested in what folks have to say about this and
> especially from the folks who are core members of both nova and tempest.
> See below for one example (note that the tempest generator does not yet
> handle "pattern").
> 
>   -David
> 
>  From nova:
> 
> get_console_output = {
>  'type': 'object',
>  'properties': {
>  'get_console_output': {
>  'type': 'object',
>  'properties': {
>  'length': {
>  'type': ['integer', 'string'],
>  'minimum': 0,
>  'pattern': '^[0-9]+$',
>  },
>  },
>  'additionalProperties': False,
>  },
>  },
>  'required': ['get_console_output'],
>  'additionalProperties': False,
> }
> 
>  From tempest:
> 
> {
>  "name": "get-console-output",
>  "http-method": "POST",
>  "url": "servers/%s/action",
>  "resources": [
>  {"name":"server", "expected_result": 404}
>  ],

The above "name", "http-method" and "url" would be useful for API
documentation also. So I feel it would be great for Tempest and
API doc if Nova's schemas contain their info. The above "resources"
also would be useful if we can define status code of successful access.


Thanks
Ken'ichi Ohmichi

---

>  "json-schema": {
>  "type": "object",
>  "properties": {
>  "os-getConsoleOutput": {
>  "type": "object",
>  "properties": {
>  "length": {
>  "type": ["integer", "string"],
>  "minimum": 0
>  }
>  }
>  }
>  },
>  "additionalProperties": false
>  }
> }
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Crack at a "Real life" workflow

2014-03-05 Thread Joe Gordon
On Mar 5, 2014 6:58 PM, "Dmitri Zimine"  wrote:
>
> Folks,
>
> I took a crack at using our DSL to build a real-world workflow.
> Just to see how it feels to write it. And how it compares with
alternative tools.
>
> This one automates a page from OpenStack operation guide:
http://docs.openstack.org/trunk/openstack-ops/content/maintenance.html#planned_maintenance_compute_node

>
> Here it is https://gist.github.com/dzimine/9380941
> or here http://paste.openstack.org/show/72741/
>
> I have a bunch of comments, implicit assumptions, and questions which
came to mind while writing it. Want your and other people's opinions on it.
>
> But gist and paste don't let you annotate lines!!! :(
>
> Maybe we can put it on the review board, even with no intention to check
in, to use for discussion?

How is this different than chef/puppet/ansible ...?

Forgive me if that has already been answered, I didn't see an answer to
that under the FAQ at https://wiki.openstack.org/wiki/Mistral

>
> Any interest?
>
> DZ>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [LBaaS] API spec for SSL Support

2014-03-05 Thread Youcef Laribi
Hi Anand,

I don't think it's fully documented in the API spec yet, but there is a
patchset being reviewed in gerrit that shows how the API would look
(the LbaasSSLDBMixin class):

https://review.openstack.org/#/c/74031/5/neutron/db/loadbalancer/lbaas_ssl_db.py

Thanks,
Youcef

From: Palanisamy, Anand [mailto:apalanis...@paypal.com]
Sent: Wednesday, March 05, 2014 5:26 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [LBaaS] API spec for SSL Support

Hi All,

Please let us know if we have the blueprint or the proposal for the LBaaS SSL 
API specification. We see only the workflow documented here 
https://wiki.openstack.org/wiki/Neutron/LBaaS/SSL.

Thanks
Anand

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Crack at a "Real life" workflow

2014-03-05 Thread Dmitri Zimine
Folks, 

I took a crack at using our DSL to build a real-world workflow. 
Just to see how it feels to write it. And how it compares with alternative 
tools. 

This one automates a page from OpenStack operation guide: 
http://docs.openstack.org/trunk/openstack-ops/content/maintenance.html#planned_maintenance_compute_node
 

Here it is https://gist.github.com/dzimine/9380941
or here http://paste.openstack.org/show/72741/

I have a bunch of comments, implicit assumptions, and questions which came to 
mind while writing it. Want your and other people's opinions on it. 

But gist and paste don't let you annotate lines!!! :(

Maybe we can put it on the review board, even with no intention to check in,
to use for discussion?

Any interest?

DZ> ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][wsme][pecan] Need help for ceilometer havana alarm issue

2014-03-05 Thread ZhiQiang Fan
I already checked the stable/havana and master branches via devstack; the
problem is still in havana, but the master branch is not affected.

I think it is important to fix it for havana too, since some high-level
applications may depend on the returned faultstring. Currently, I'm not
sure whether the master branch fixed it in the pecan or wsme module, or in
ceilometer itself.

Is there anyone who can help with this problem?

thanks


On Tue, Feb 18, 2014 at 9:09 AM, ZhiQiang Fan  wrote:

> Hi,
>
> When I tried to figure out the root cause of bug [1], I found that once a
> wsme.exc.ClientSideError is triggered when creating an alarm, assume its
> faultstring is x, then the next http request that triggers an
> EntityNotFound (Exception) will get an http response with a faultstring
> equal to x.
>
> I traced the calling stack with a lot of logging, and the last log I got
> is in wsmeext.pecan.wsexpose, which shows that the dict containing the
> faultstring is correct; it seems the problem occurs when formatting the
> http response.
>
> env info:
> os: sles 11 sp3, ubuntu 12.04.3
> ceilometer: 2013.2.2, 2013.2.1
> wsme: cannot know, checked the egginfo and __init__ but got nothing
> pecan: forget...
>
> Please help, any information will be appreciated.
>
> Thanks!
>
> [1]: https://bugs.launchpad.net/ceilometer/+bug/1280036
>



-- 
blog: zqfan.github.com
git: github.com/zqfan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tempest] who builds other part of test environment

2014-03-05 Thread Gareth
Hi

Here a test result:
http://logs.openstack.org/25/78325/3/check/gate-rally-pep8/323b39c/console.htmland
its result is different from in my local environment. So I want to
check some details of official test environment, for example
/home/jenkins/workspace/gate-rally-pep8/tox.ini.

I guess it is in tempest repo but it isn't. I didn't find any test tox.ini
file in tempest repo. So it should be hosted in another repo. Which is that
one?

thanks

-- 
Gareth

*Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball*
*OpenStack contributor, kun_huang@freenode*
*My promise: if you find any spelling or grammar mistakes in my email from
Mar 1 2013, notify me *
*and I'll donate $1 or ¥1 to an open organization you specify.*
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-05 Thread Kenichi Oomichi

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, March 05, 2014 10:52 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API
> 
> On Wed, 2014-03-05 at 05:43 +, Kenichi Oomichi wrote:
> > > -Original Message-
> > > From: Dan Smith [mailto:d...@danplanet.com]
> > > Sent: Wednesday, March 05, 2014 9:09 AM
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API
> > >
> > > > What I'd like to do next is work through a new proposal that includes
> > > > keeping both v2 and v3, but with a new added focus of minimizing the
> > > > cost.  This should include a path away from the dual code bases and to
> > > > something like the "v2.1" proposal.
> > >
> > > I think that the most we can hope for is consensus on _something_. So,
> > > the thing that I'm hoping would mostly satisfy the largest number of
> > > people is:
> > >
> > > - Leaving v2 and v3 as they are today in the tree, and with v3 still
> > >   marked experimental for the moment
> > > - We start on a v2 proxy to v3, with the first goal of fully
> > >   implementing the v2 API on top of v3, as judged by tempest
> > > - We define the criteria for removing the current v2 code and marking
> > >   the v3 code supported as:
> > >  - The v2 proxy passes tempest
> > >  - The v2 proxy has sign-off from some major deployers as something
> > >they would be comfortable using in place of the existing v2 code
> > >  - The v2 proxy seems to us to be lower maintenance and otherwise
> > >preferable to either keeping both, breaking all our users, deleting
> > >v3 entirely, etc
> >
> > Thanks, Dan.
> > The above criteria is reasonable to me.
> >
> > Now Tempest does not check API responses in many cases.
> > For example, Tempest does not check what API attributes("flavor", "image",
> > etc.) should be included in the response body of "create a server" API.
> > So we need to improve Tempest coverage from this viewpoint for verifying
> > any backward incompatibility does not happen on v2.1 API.
> > We started this improvement for Tempest and have proposed some patches
> > for it now.
> 
> Kenichi-san, you may also want to check out this ML post from David
> Kranz:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/028920.html

Hi Jay-san,

Thank you for pointing it out. That is a good point :-)
I will join in David's idea.

Thanks
Ken'ichi Ohmichi


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Tempest] Coverage improvement for Nova API tests

2014-03-05 Thread Kenichi Oomichi

Hi,

I am working on Nova API tests in Tempest, and I'd like to ask for
opinions about the approach.

Tempest generates a request and sends it to each RESTful API.
Then Tempest receives a response from the API and verifies the
response.
From the viewpoint of backward compatibility, it is important
to keep the formats of the request and the response stable.
Now Tempest already verifies the request format because Tempest
generates it. However, it does not verify the response format in
many test cases, so I'd like to improve that by adding verification
code.
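For instance, a response-side check could look something like this sketch
(the attribute list is illustrative only, not a real Tempest schema):

    import jsonschema

    # Illustrative response schema for "show server"; attributes are examples.
    server_response = {
        'type': 'object',
        'properties': {
            'id': {'type': 'string'},
            'status': {'type': 'string'},
            'OS-EXT-SRV-ATTR:host': {'type': ['string', 'null']},
        },
        'required': ['id', 'status'],
    }

    def verify_show_server_response(body):
        # Fails if an expected attribute is missing or has the wrong type.
        jsonschema.validate(body['server'], server_response)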

But now I am facing a problem, and I'd like to propose a workaround
for it.

Problem:
  The deserialized body of an XML response seems broken in some cases.
  In one case, some API attributes disappear from the deserialized body.
  In another case, some API attribute names differ from the JSON
  response. For example, the JSON one is 'OS-EXT-SRV-ATTR:host' but
  the XML one is '{http://docs.openstack.org/compute/ext/extended
  _status/api/v1.1}host'. I guess these are deserializer bugs in Tempest,
  but I'm not sure yet.

Possible solution:
  The best way is to fix all of them, but I think that needs a lot
  of effort. So I'd like to propose skipping the additional verification
  code in the case of XML tests. A sample is line 151 of
   
https://review.openstack.org/#/c/77517/16/tempest/api/compute/admin/test_servers.py

  Now the XML format has been marked as "deprecated" in the Nova v2 API [1],
  and the XML client would be removed from Tempest in the Juno cycle. In
  addition, I guess there are a lot of problems of this kind, because I hit
  the above problems while adding verification code for only 2 APIs. So now
  I feel the 'fix everything' approach is overkill.


Thanks
Ken'ichi Ohmichi

---
[1]: https://review.openstack.org/#/c/75439/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] FFE for vmdk-storage-policy-volume-type

2014-03-05 Thread Subramanian
Hi,

https://blueprints.launchpad.net/cinder/+spec/vmdk-storage-policy-volume-type
.

This is a blueprint that I have been working on since Dec 2013, and as far
as I remember it was targeted to icehouse-3. Just today I noticed that it was
moved to "future", so it must have fallen through the cracks for core
reviewers. Is there a chance that this can still make it into icehouse?
Given that the change is fairly isolated to the vmdk driver, and that the
code across the 4 patches [1] that implement this blueprint has been fairly
well reviewed, can I request an FFE for this one?

Thanks,
Subbu

[1]
https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/vmdk-storage-policy-volume-type,n,z
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE Request: Adds PCI support for the V3 API (just one patch in novaclient)

2014-03-05 Thread Tian, Shuangtai

Hi,

I would like to make a request for FFE for one patch in novaclient for PCI V3 
API : https://review.openstack.org/#/c/75324/
Though the V3 API will not be released in Icehouse, all the PCI patches for
the V3 API have been merged, and this is the last one for V3.
I think some people may use V3 and PCI passthrough, and I hope all of the
functionality can be used in V3 in Icehouse.
This patch got one +2 from Kevin L. Mitchell, but I updated it because of the
comments.

The PCI patches in V3(merged):
Addressed by: https://review.openstack.org/51135
Extends V3 servers api for pci support
Addressed by: https://review.openstack.org/52376
Extends V3 os-hypervisor api for pci support
Addressed by: https://review.openstack.org/52377
Add V3 api for pci support

BTW the PCI Patches in V2 will defer to Juno.
The Blueprint https://blueprints.launchpad.net/nova/+spec/pci-api-support

Best regards,
Tian, Shuangtai

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] a key-value storage service for OpenStack. The pilot implementation is available now

2014-03-05 Thread Ilya Sviridov
Hello openstackers,

I'm excited to share that we have finished work on the pilot implementation
of MagnetoDB, a key-value storage service for OpenStack.

During pilot development we have reached the following goals:

   - evaluated python cassandra clients maturity

   - evaluated python web stack maturity to handle high availability and high
     load scenarios

   - found a number of performance bottlenecks and analyzed the approaches to
     address them

   - drafted and checked service architecture

   - drafted and checked deployment procedures


The API implementation is compatible with the AWS DynamoDB API, and the pilot
version already supports the basic operations with tables and items. We
tested with the boto library and the Java AWS SDK, and it works seamlessly
with both.
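For example, a minimal boto sketch against a MagnetoDB endpoint might look
like this (host, port and credentials are placeholders for a particular
deployment):

    from boto.dynamodb2.layer1 import DynamoDBConnection

    conn = DynamoDBConnection(
        host='magnetodb.example.com',   # placeholder endpoint
        port=8480,                      # placeholder port
        aws_access_key_id='dummy',
        aws_secret_access_key='dummy',
        is_secure=False)

    print(conn.list_tables())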

Currently we are working on a RESTful API that will follow OpenStack tenets,
in addition to the current AWS DynamoDB API.

We have chosen Cassandra for the pilot as the most suitable storage for the
service functionality. However, the cost of ownership and administration of
an additional type of software can be a determining factor in choosing a
solution. That is why backend database pluggability is important.

Currently we are evaluating HBase as one of the alternatives, since
Hadoop-powered analytics often co-exist with OpenStack installations or work
on top of them, as with Savanna.

You can find more details on MagnetoDB along with the screencast on
Mirantis blog [1].

We will be publishing more details on each area of the findings during the
course of the next few weeks.

Any questions and ideas are very welcome. For those who are interested in
contributing, you can always find us on #magnetodb.

Links

[1]
http://www.mirantis.com/blog/introducing-magnetodb-nosql-database-service-openstack/

[2] https://github.com/stackforge/magnetodb

[3] https://wiki.openstack.org/wiki/MagnetoDB

[4] https://launchpad.net/magnetodb

With best regards,
Ilya Sviridov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Health monitoring and statistics for complex LB configurations.

2014-03-05 Thread John Dewey
On Wednesday, March 5, 2014 at 12:41 PM, Eugene Nikanorov wrote:
> Hi community,
> 
> Other interesting questions were raised during the object model discussion 
> about how pool statistics and health monitoring should be used in the case of 
> multiple vips sharing one pool. 
> 
> Right now we can query statistics for the pool, and some data like in/out 
> bytes and request count will be returned.
> If we had several vips sharing the pool, what kind of statistics would make 
> sense for the user?
> The options are:
> 
> 1) aggregated statistics for the pool, e.g. statistics of all requests that 
> has hit the pool through any VIP
> 2) per-vip statistics for the pool.
> 
> 
> 

Would it be crazy to offer both?  We can return stats for each pool associated 
with the VIP as you described below.  However, we could also offer an aggregated 
section for those interested.

IMO, having stats broken out per-pool seems more helpful than only aggregated, 
while both would be ideal.

John
> 
> Depending on the answer, the statistics workflow will be different.
> 
> The good option of getting the statistics and health status could be to query 
> it through the vip and get it for the whole logical instance, e.g. a call 
> like: 
>  lb-vip-statistics-get --vip-id vip_id
> the would result in json that returns statistics for every pool associated 
> with the vip, plus operational status of all members for the pools associated 
> with that VIP.
> 
> Looking forward to your feedback.
> 
> Thanks,
> Eugene.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org (mailto:OpenStack-dev@lists.openstack.org)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Health monitoring and statistics for complex LB configurations.

2014-03-05 Thread Youcef Laribi
Hi Eugene,

Having an aggregate call to get all of the stats and statuses is good, but we 
should also keep the ability to retrieve statistics or the status of individual 
resources IMHO.

Thanks
Youcef

From: Eugene Nikanorov [mailto:enikano...@mirantis.com]
Sent: Wednesday, March 05, 2014 12:42 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Neutron][LBaaS] Health monitoring and statistics for 
complex LB configurations.

Hi community,

Some interesting questions were raised during the object model discussion about 
how pool statistics and health monitoring should be used in the case of multiple 
vips sharing one pool.

Right now we can query statistics for the pool, and some data like in/out bytes 
and request count will be returned.
If we had several vips sharing the pool, what kind of statistics would make 
sense for the user?
The options are:

1) aggregated statistics for the pool, e.g. statistics of all requests that have 
hit the pool through any VIP
2) per-vip statistics for the pool.

Depending on the answer, the statistics workflow will be different.

A good option for getting the statistics and health status could be to query 
it through the vip and get it for the whole logical instance, e.g. a call like:
 lb-vip-statistics-get --vip-id vip_id
that would result in JSON that returns statistics for every pool associated with 
the vip, plus the operational status of all members for the pools associated with 
that VIP.

Looking forward to your feedback.

Thanks,
Eugene.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for meeting (tomorrow) at 2000 UTC

2014-03-05 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in
#openstack-meeting on Thursdays, 2000 UTC. The next meeting is tomorrow,
2014-03-06!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow
Docs: http://docs.openstack.org/developer/taskflow

## Agenda (30-60 mins):

- Discuss any action items from the last meeting.
- Any open reviews/questions/discussion needed for 0.2
- Integration progress, help, furthering integration efforts.
- Possibly discuss worker capability discovery.
- Discuss any other potential new use-cases for said library.
- Discuss any other ideas, reviews needing help, questions and
answers (and more!).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] WARNING: ... This application has not enabled MySQL traditional mode, which means silent data corruption may occur - real issue?

2014-03-05 Thread Brant Knudson
On Tue, Mar 4, 2014 at 1:24 PM, Joe Gordon  wrote:

> On Tue, Mar 4, 2014 at 11:08 AM, Ben Nemec  wrote:
> > On 2014-03-04 12:51, Sean Dague wrote:
> >>
> >> On 03/04/2014 01:27 PM, Ben Nemec wrote:
> >>>
> >>> This warning should be gone by default once
> >>>
> >>>
> https://github.com/openstack/oslo-incubator/commit/dda24eb4a815914c29e801ad0176630786db8734
> >>> gets synced.  I believe there is work underway by the db team to get
> >>> that done.
> >>>
> >>> Note that the reason it will be gone is that we're changing the default
> >>> oslo db mode to traditional, so if we have any code that would have
> >>> triggered silent data corruption it's now going to be not so silent.
> >>>
> >>> -Ben
> >>
> >>
> >> Ok, but we're at the i3 freeze. So is there a db patch set up for every
> >> service to sync that, and FFE ready to let this land?
> >>
> >> Because otherwise I'm very afraid this is going to get trapped as 1/2
> >> implemented, which would be terrible for the release.
> >>
> >> So basically, who is driving these patches out to the projects?
> >>
> >> -Sean
> >
> >
> > I'm not sure.  We're tracking the sync work here:
> > https://etherpad.openstack.org/p/Icehouse-nova-oslo-sync but it just
> says
> > the db team is working on it.
> >
> > Adding Joe and Doug since I think they know more about what's going on
> with
> > this.
>
> https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS#L100
>
> >
> > If we can't get db synced, it's basically a bit flip to turn on
> traditional
> > mode in the projects that are seeing this message right now.  I'd rather
> not
> > since we want to drop support for that in favor of the general sql_mode
> > option, but it can certainly be done if necessary.
> >
> > -Ben
>
>
I proposed a sync to Keystone to get rid of the warning[1]. But when I
tried it, keystone is now logging an INFO for every query telling me what
the mode is, which is way too spammy[2].

[1] https://review.openstack.org/#/c/78429/
[2] https://bugs.launchpad.net/oslo/+bug/1288443

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Nominating Radomir Dopieralski to Horizon Core

2014-03-05 Thread Lyle, David
I'd like to nominate Radomir Dopieralski to Horizon Core.  I find his reviews 
very insightful and, more importantly, have come to rely on their quality. He has 
contributed to several areas in Horizon and he understands the code base well.  
Radomir is also very active in tuskar-ui both contributing and reviewing.

David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Feature freeze + Icehouse-3 milestone candidates available

2014-03-05 Thread Martinx - ジェームズ
AWESOME!!

If I may ask, did the IPv6 bits get included in this milestone?! I'm very
anxious to start testing it with Ubuntu 14.04 plus the official devel
packages from Canonical.

Thanks a lot!!

Best,
Thiago

On 5 March 2014 17:46, Thierry Carrez  wrote:

> Hi everyone,
>
> We just hit feature freeze, so please do not approve changes that add
> features or new configuration options unless those have been granted a
> feature freeze exception.
>
> This is also string freeze, so you should avoid changing translatable
> strings. If you have to modify a translatable string, you should give a
> heads-up to the I18N team.
>
> Milestone-proposed branches were created for Horizon, Keystone, Glance,
> Nova, Neutron, Cinder, Heat and Trove in preparation for the
> icehouse-3 milestone publication tomorrow.
>
> Ceilometer should follow in an hour.
>
> You can find candidate tarballs at:
> http://tarballs.openstack.org/horizon/horizon-milestone-proposed.tar.gz
> http://tarballs.openstack.org/keystone/keystone-milestone-proposed.tar.gz
> http://tarballs.openstack.org/glance/glance-milestone-proposed.tar.gz
> http://tarballs.openstack.org/nova/nova-milestone-proposed.tar.gz
> http://tarballs.openstack.org/neutron/neutron-milestone-proposed.tar.gz
> http://tarballs.openstack.org/cinder/cinder-milestone-proposed.tar.gz
> http://tarballs.openstack.org/heat/heat-milestone-proposed.tar.gz
> http://tarballs.openstack.org/trove/trove-milestone-proposed.tar.gz
>
> You can also access the milestone-proposed branches directly at:
> https://github.com/openstack/horizon/tree/milestone-proposed
> https://github.com/openstack/keystone/tree/milestone-proposed
> https://github.com/openstack/glance/tree/milestone-proposed
> https://github.com/openstack/nova/tree/milestone-proposed
> https://github.com/openstack/neutron/tree/milestone-proposed
> https://github.com/openstack/cinder/tree/milestone-proposed
> https://github.com/openstack/heat/tree/milestone-proposed
> https://github.com/openstack/trove/tree/milestone-proposed
>
> Regards,
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Solver Scheduler: Complex constraint based resource placement

2014-03-05 Thread Joe Gordon
On Wed, Mar 5, 2014 at 12:40 PM, Yathiraj Udupi (yudupi)
 wrote:
> Hi,
>
> We would like to make a request for FFE for the Solver Scheduler work.  A
> lot of work has gone into it since Sep'13, and the first patch has gone
> through several iteration after some reviews.   The first patch -
> https://review.openstack.org/#/c/46588/ introduces the main solver scheduler
> driver, and a reference solver implementation, and the subsequent patches
> that are already added provide the pluggable solver, and individual support
> for adding constraints, costs, etc.
>
> First Patch: https://review.openstack.org/#/c/46588/
> Second patch with enhanced support for pluggable constraints and costs: -
> https://review.openstack.org/#/c/70654/
> Subsequent patches add the constraints and the costs.
> BP: https://blueprints.launchpad.net/nova/+spec/solver-scheduler
> Core sponsor:  Joe Gordon
>
> John Garbutt expressed concerns in Blueprint whiteboard regarding the
> configuration values, existing filters,etc and I noticed that you have
> un-approved this BP.
> John, I will discuss with you in detail over IRC.
> But briefly,  the plan is not many new configuration values will be added,
> just the ones to specify the solver to use, and the pluggable constraints,
> and costs to use, with the weights for the costs. (these are mainly part of
> the second patch -
>  https://review.openstack.org/#/c/70654/ )
>
> The plan is to gradually support the concepts for the existing filters as
> the constraints that are accepted by our Solver Scheduler.   Depending on
> the constraints and the costs chosen, the final scheduling will be done by
> solving the problem as an optimization problem.
>
> Please reconsider this blueprint, and allow a FFE.

I will not be sponsoring this for FFE. I think it's a little late to
land something in such a core piece of code.  And I am more interested
in spending my bandwidth fixing bugs and getting things ready for RC1.

>
> Thanks,
> Yathi.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Solver Scheduler: Complex constraint based resource placement

2014-03-05 Thread Yathiraj Udupi (yudupi)
Sorry,  This is request for FFE.
I meant Approver of the BP was Joe Gordon below.. not sponsor, probably the 
wrong word.

Thanks
Yathi.



On 3/5/14, 12:40 PM, "Yathiraj Udupi (yudupi)" <yud...@cisco.com> wrote:

Hi,

We would like to make a request for FFE for the Solver Scheduler work.  A lot 
of work has gone into it since Sep’13, and the first patch has gone through 
several iterations after some reviews.   The first patch - 
https://review.openstack.org/#/c/46588/ introduces the main solver scheduler 
driver, and a reference solver implementation, and the subsequent patches that 
are already added provide the pluggable solver, and individual support for 
adding constraints, costs, etc.

First Patch: https://review.openstack.org/#/c/46588/
Second patch with enhanced support for pluggable constraints and costs: -  
https://review.openstack.org/#/c/70654/
Subsequent patches add the constraints and the costs.
BP: https://blueprints.launchpad.net/nova/+spec/solver-scheduler
Core sponsor:  Joe Gordon

John Garbutt expressed concerns in the Blueprint whiteboard regarding the 
configuration values, existing filters, etc., and I noticed that you have 
un-approved this BP.
John, I will discuss with you in detail over IRC.
But briefly, the plan is that not many new configuration values will be added, just 
the ones to specify the solver to use, and the pluggable constraints and costs 
to use, with the weights for the costs. (these are mainly part of the second 
patch -
 https://review.openstack.org/#/c/70654/ )

The plan is to gradually support the concepts for the existing filters as the 
constraints that are accepted by our Solver Scheduler.   Depending on the 
constraints and the costs chosen, the final scheduling will be done by solving 
the problem as an optimization problem.

Please reconsider this blueprint, and allow a FFE.

Thanks,
Yathi.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Solver Scheduler: Complex constraint based resource placement

2014-03-05 Thread Sean Dague
On 03/05/2014 03:40 PM, Yathiraj Udupi (yudupi) wrote:
> Hi, 
> 
> We would like to make a request for FFE for the Solver Scheduler work.
>  A lot of work has gone into it since Sep’13, and the first patch has
> gone through several iteration after some reviews.   The first patch
> - https://review.openstack.org/#/c/46588/ introduces the main solver
> scheduler driver, and a reference solver implementation, and the
> subsequent patches that are already added provide the pluggable solver,
> and individual support for adding constraints, costs, etc. 
> 
> First Patch: https://review.openstack.org/#/c/46588/ 
> Second patch with enhanced support for pluggable constraints and costs:
> -  https://review.openstack.org/#/c/70654/
> 
> Subsequent patches add the constraints and the costs. 
> BP: https://blueprints.launchpad.net/nova/+spec/solver-scheduler 
> Core sponsor:  Joe Gordon
> 
> John Garbutt expressed concerns in Blueprint whiteboard regarding the
> configuration values, existing filters,etc and I noticed that you have
> un-approved this BP. 
> John, I will discuss with you in detail over IRC. 
> But briefly,  the plan is not many new configuration values will be
> added, just the ones to specify the solver to use, and the pluggable
> constraints, and costs to use, with the weights for the costs. (these
> are mainly part of the second patch -
>  https://review.openstack.org/#/c/70654/
>  )
> 
> The plan is to gradually support the concepts for the existing filters
> as the constraints that are accepted by our Solver Scheduler.  
> Depending on the constraints and the costs chosen, the final scheduling
> will be done by solving the problem as an optimization problem. 
> 
> Please reconsider this blueprint, and allow a FFE. 
> 
> Thanks,
> Yathi. 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

This does not seem small or low risk in any way. And the blueprint is
not currently approved.

Also, as far as I can tell you never actually talked with Joe Gordon
about him supporting the FFE.

-2

This needs to wait for Juno.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-05 Thread Russell Bryant
On 03/05/2014 04:52 PM, Everett Toews wrote:
> On Mar 5, 2014, at 8:16 AM, Russell Bryant  wrote:
> 
>> I think SDK support is critical for the success of v3 long term.  I
>> expect most people are using the APIs through one of the major SDKs, so
>> v3 won't take off until that happens.  I think our top priority in Nova
>> to help ensure this happens is to provide top notch documentation on the
>> v3 API, as well as all of the differences between v2 and v3.
> 
> Yes. Thank you.
> 
> And the earlier we can see the first parts of this documentation, both the 
> differences between v2 and v3 and the final version, the better. If we can 
> give you feedback on early versions of the docs, the whole thing will go much 
> more smoothly.
> 
> You can find us in #openstack-sdks on IRC.

Sounds good.

I know there is at least this wiki page started to document the changes,
but I believe there is still a lot missing.

https://wiki.openstack.org/wiki/NovaAPIv2tov3

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][FWaaS] FFE request: fwaas-service-insertion

2014-03-05 Thread Rajesh Mohan
Hi All,

I would like to request FFE for the following patch

https://review.openstack.org/#/c/62599/

The design and the patch have gone through many reviews. We have reached out
to folks working on other advanced services as well.

This will be a first good step towards true service integration with
Neutron. Would also allow for innovative service integration.

Nachi and Sumit looked at this patch closely and are happy. Akihiro also
gave useful comments, and I have addressed all of them.

Please consider this patch for merge in I3.

Thanks,
-Rajesh Mohan
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Concrete Proposal for Keeping V2 API

2014-03-05 Thread Everett Toews
On Mar 5, 2014, at 8:16 AM, Russell Bryant  wrote:

> I think SDK support is critical for the success of v3 long term.  I
> expect most people are using the APIs through one of the major SDKs, so
> v3 won't take off until that happens.  I think our top priority in Nova
> to help ensure this happens is to provide top notch documentation on the
> v3 API, as well as all of the differences between v2 and v3.

Yes. Thank you.

And the earlier we can see the first parts of this documentation, both the 
differences between v2 and v3 and the final version, the better. If we can give 
you feedback on early versions of the docs, the whole thing will go much more 
smoothly.

You can find us in #openstack-sdks on IRC.

Cheers,
Everett
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up Doc? Mar 5 2014

2014-03-05 Thread Anne Gentle
Here in Austin we are gearing up for the onslaught of visitors for the SXSW
festival next week. My first visit there was in 2003, and yes I had to look
it up in my blog,
http://justwriteclick.com/2006/03/14/a-trip-report-from-sxsw-interactive-2003/for
those who are into retro tech. :) I'll be there early next week so if
you're in town say hi!

1. In review and merged this past week:
Operations Guide dropped to production! More details and schedule below.

Just about a month until a release candidate is cut, March 27th starts the
release candidates. See
https://wiki.openstack.org/wiki/Icehouse_Release_Schedule for all the
glorious release dates. This week is Feature Freeze.

Great work going on to keep up with configuration options and good work
making the Configuration Reference more polished and "leaner" -- for true
reference. Summer Long has been going gangbusters on ensuring ALL
configuration files have examples in the documentation -- great work, Summer.

We also revised the configuring live migration instructions to be more
careful with access.

Lots of cleanup in the training manuals as well.

2. High priority doc work:

The subteam on install doc is still working hard, diagrams are being drawn,
architectures are being made, servers are being booted. It's all good. I
should probably have one of their reps give the report! Come to one of the
weekly docs team meetings for details and to find ways to help.

I'd love to find a documentation mentor for an intern with the Outreach
Program for Women. Please add Documentation ideas to the wiki page at:
https://wiki.openstack.org/wiki/OutreachProgramForWomen/Ideas.

3. Doc work going on that I know of:

The Operations Guide went to production yesterday! Here's their schedule
(which shouldn't affect us, just letting you all know how these things
work). I'll be entering edits that are markup cleanup most likely. Our
single window of time to add  icehouse info in the Working with Roadmaps
appendix is 3/7-3/18. If you're curious, the book is in the latest O'Reilly
collaborative authoring system, Atlas2, backed by github and they convert
the DocBook to asciidoc on the fly. You can still update the master branch
at any time, and I'll be hand-picking and hand-editing as needed.

-Intake (3): 3/4-3/6
-Copyedit (5): 3/7-3/13
-AU review (3): 3/14-3/18 (Window closes)
-Enter edits (5): 3/19-3/25
-QC1/index (6): 3/26-4/2
-Enter edits (5): 4/3-4/9
-QC2 (2): 4/10-4/11
-Enter edits: 4/14-4/15
-final O'Reilly check: 4/16
-to print: 4/17

4. New incoming doc requests:

We are definitely behind in triaging doc bugs -- 48 new bugs, some of which
may be from DocImpact flags so until the code merges we won't need to
address. Even so, with over 400 open bugs we know we need help keeping up
with DocImpact flags. Please, if you have some time, take a look through doc
bugs you could pick up.

Also the install doc group is logging bugs for improving the install guide,
so that accounts for about 10 doc bugs.

5. Doc tools updates:
The 1.14.1 version of the clouddocs-maven-plugin went out this week with
multiple improvements, many to the API Reference listing page to help with
unique anchor URLs, Google Analytics, and an automatically generated TOC
(rather than hand-coding the list of APIs to display). Read more at
https://github.com/stackforge/clouddocs-maven-plugin.

6. Other doc news:

Not to bury some very exciting news, but we've adjusted our list of
doc-core members. We hope to keep reviews moving through. We managed to get
over 100 doc patches in the last week and I'm hopeful the additional
reviewers will help push our numbers even higher! I'm very happy with the
strength of the doc core team and appreciate all the participants we've had
in this release period.

Welcome to:
Gauvain Pocentek
Lana Brindley
Summer Long
Shilla Saebi
Matt Kassawara

Also, the Design Summit proposals for blueprint discussions in the
documentation track should open up tomorrow or Friday, 3/7. Please let me
know what you'd like to discuss at the Summit in Atlanta. The deadline for
the Travel Support program was Monday and we'll be making our decisions
soon with notification by 3/24.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Feature freeze + Icehouse-3 milestone candidates available

2014-03-05 Thread Thierry Carrez
Hi everyone,

We just hit feature freeze, so please do not approve changes that add
features or new configuration options unless those have been granted a
feature freeze exception.

This is also string freeze, so you should avoid changing translatable
strings. If you have to modify a translatable string, you should give a
heads-up to the I18N team.

Milestone-proposed branches were created for Horizon, Keystone, Glance,
Nova, Neutron, Cinder, Heat and Trove in preparation for the
icehouse-3 milestone publication tomorrow.

Ceilometer should follow in an hour.

You can find candidate tarballs at:
http://tarballs.openstack.org/horizon/horizon-milestone-proposed.tar.gz
http://tarballs.openstack.org/keystone/keystone-milestone-proposed.tar.gz
http://tarballs.openstack.org/glance/glance-milestone-proposed.tar.gz
http://tarballs.openstack.org/nova/nova-milestone-proposed.tar.gz
http://tarballs.openstack.org/neutron/neutron-milestone-proposed.tar.gz
http://tarballs.openstack.org/cinder/cinder-milestone-proposed.tar.gz
http://tarballs.openstack.org/heat/heat-milestone-proposed.tar.gz
http://tarballs.openstack.org/trove/trove-milestone-proposed.tar.gz

You can also access the milestone-proposed branches directly at:
https://github.com/openstack/horizon/tree/milestone-proposed
https://github.com/openstack/keystone/tree/milestone-proposed
https://github.com/openstack/glance/tree/milestone-proposed
https://github.com/openstack/nova/tree/milestone-proposed
https://github.com/openstack/neutron/tree/milestone-proposed
https://github.com/openstack/cinder/tree/milestone-proposed
https://github.com/openstack/heat/tree/milestone-proposed
https://github.com/openstack/trove/tree/milestone-proposed

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Health monitoring and statistics for complex LB configurations.

2014-03-05 Thread Eugene Nikanorov
Hi community,

Some interesting questions were raised during the object model discussion
about how pool statistics and health monitoring should be used in the case of
multiple vips sharing one pool.

Right now we can query statistics for the pool, and some data like in/out
bytes and request count will be returned.
If we had several vips sharing the pool, what kind of statistics would make
sense for the user?
The options are:

1) aggregated statistics for the pool, e.g. statistics of all requests that
have hit the pool through any VIP
2) per-vip statistics for the pool.

Depending on the answer, the statistics workflow will be different.

A good option for getting the statistics and health status could be to
query it through the vip and get it for the whole logical instance, e.g. a
call like:
 lb-vip-statistics-get --vip-id vip_id
that would result in JSON that returns statistics for every pool associated
with the vip, plus the operational status of all members for the pools
associated with that VIP.
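
To make the options concrete, here is a rough sketch of what a per-pool
response to such a call might look like (field names are illustrative only,
not a proposed schema):

    {
        "vip_id": "vip-uuid",
        "pools": [
            {
                "pool_id": "pool-uuid",
                "bytes_in": 1048576,
                "bytes_out": 8388608,
                "total_connections": 1200,
                "members": [
                    {"member_id": "member-uuid-1", "status": "ACTIVE"},
                    {"member_id": "member-uuid-2", "status": "INACTIVE"}
                ]
            }
        ]
    }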

Looking forward to your feedback.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE Request: Solver Scheduler: Complex constraint based resource placement

2014-03-05 Thread Yathiraj Udupi (yudupi)
Hi,

We would like to make a request for FFE for the Solver Scheduler work.  A lot 
of work has gone into it since Sep’13, and the first patch has gone through 
several iterations after some reviews.   The first patch - 
https://review.openstack.org/#/c/46588/ introduces the main solver scheduler 
driver, and a reference solver implementation, and the subsequent patches that 
are already added provide the pluggable solver, and individual support for 
adding constraints, costs, etc.

First Patch: https://review.openstack.org/#/c/46588/
Second patch with enhanced support for pluggable constraints and costs: -  
https://review.openstack.org/#/c/70654/
Subsequent patches add the constraints and the costs.
BP: https://blueprints.launchpad.net/nova/+spec/solver-scheduler
Core sponsor:  Joe Gordon

John Garbutt expressed concerns in the Blueprint whiteboard regarding the 
configuration values, existing filters, etc., and I noticed that you have 
un-approved this BP.
John, I will discuss with you in detail over IRC.
But briefly, the plan is that not many new configuration values will be added, just 
the ones to specify the solver to use, and the pluggable constraints and costs 
to use, with the weights for the costs. (these are mainly part of the second 
patch -
 https://review.openstack.org/#/c/70654/ )

The plan is to gradually support the concepts for the existing filters as the 
constraints that are accepted by our Solver Scheduler.   Depending on the 
constraints and the costs chosen, the final scheduling will be done by solving 
the problem as an optimization problem.
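
As a purely illustrative sketch of the idea (a toy brute-force solver, not the
actual SolverScheduler code): placement can be expressed as choosing a host for
each instance subject to capacity constraints while minimizing a cost function.
All names and numbers below are made up.

    from itertools import product

    hosts = {'host1': {'free_ram': 4096, 'cost': 1.0},
             'host2': {'free_ram': 2048, 'cost': 0.5}}
    instances = {'vm1': 2048, 'vm2': 2048}   # requested RAM per instance

    def feasible(assignment):
        # Constraint: RAM requested on each host must fit into its free RAM.
        used = dict.fromkeys(hosts, 0)
        for vm, host in assignment.items():
            used[host] += instances[vm]
        return all(used[h] <= hosts[h]['free_ram'] for h in hosts)

    def cost(assignment):
        # Cost: sum of the per-host cost weights of the chosen hosts.
        return sum(hosts[h]['cost'] for h in assignment.values())

    candidates = (dict(zip(instances, combo))
                  for combo in product(hosts, repeat=len(instances)))
    best = min(candidates,
               key=lambda a: cost(a) if feasible(a) else float('inf'))
    print(best)  # a minimum-cost feasible placement

A real solver would of course delegate this to an LP/constraint solver instead
of brute force, with filters expressed as constraints and weighers as cost terms.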

Please reconsider this blueprint, and allow a FFE.

Thanks,
Yathi.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly meeting & object model discussion IMPORTANT

2014-03-05 Thread Eugene Nikanorov
Hi neutron & lbaas folks,

Let's meet tomorrow, Thursday, 06 at 14-00 on #openstack-meeting to
continue discussing the object model.

We have discussed the proposals at hand with Samuel Bercovici, and currently
there are two main proposals that we are evaluating.
Both of them allow adding the two major features that initially led us to do
the whole object model redesign:
1) neutron port (ip address) reuse by multiple vips pointing to the same
pool.
Use case: http and https protocols for the same pool
2) multiple pools per vip via L7 rules.

Approach #1 (which I'm advocating) is #3 here:
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion

Approach #2 (Sam's proposal):
https://docs.google.com/a/mirantis.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r

In short, the difference between the two is in how neutron port reuse is
achieved:
- Proposal #1 uses the VIP object to keep the neutron port (ip address) and
Listener objects
to represent different tcp ports and protocols.
- Proposal #2 uses the VIP object only; neutron port reuse is achieved by
creating another VIP with the vip_id of the VIP whose port is going to be
shared.
Both proposals suggest making VIP a root object (e.g. the object to which
different bindings are applied)
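
For illustration, here is a rough sketch of how an "http + https on one IP, one
pool" configuration might be laid out under each proposal (purely illustrative
resource layouts, not the actual API):

    # Proposal #1: one VIP owns the neutron port; listeners carry the protocols.
    vip = {"id": "vip-1", "address": "10.0.0.5",
           "listeners": [{"protocol": "HTTP",  "port": 80,  "pool_id": "pool-1"},
                         {"protocol": "HTTPS", "port": 443, "pool_id": "pool-1"}]}

    # Proposal #2: two VIP objects; the second reuses the first VIP's port by
    # referencing it through vip_id.
    vip_http  = {"id": "vip-1", "address": "10.0.0.5", "protocol": "HTTP",
                 "port": 80, "pool_id": "pool-1"}
    vip_https = {"id": "vip-2", "vip_id": "vip-1", "protocol": "HTTPS",
                 "port": 443, "pool_id": "pool-1"}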

Those two proposals have the following advantages and disadvantages:
Proposal #1:
 - the logical instance has 1 root object (VIP), which gives API clarity and
an implementation advantage.
The following operations will have clear semantics: changing the SLA for the
logical balancer, plugging into a different network, changing operational
status, etc.
E.g. many kinds of update operations applied to the root object (VIP)
affect the whole child configuration.
 - Introducing another resource (listener) is a disadvantage (although
backward compatibility could be preserved)

Proposal #2:
 - Keeping the existing set of resources, which might be an advantage for some
consumers.
 - As a disadvantage, I see several root objects that are implicitly bound to
the same logical configuration.
That creates small, subtle inconsistencies in the API that are better avoided
(IMO):
 when updating certain VIP parameters like IP address or subnet, that
leads to changed parameters of another VIP that shares the neutron port.
That is a direct consequence of having several 'root objects' within one
logical configuration (non-hierarchical)

Technically both proposals are fine to me.
Practically I prefer #1 over #2 because IMO it leads to a clearer API.

Please look at those proposals, think about the differences, your
preference, and any concerns you have about these two. We're going to
dedicate the meeting to that.

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday March 6th at 22:00UTC

2014-03-05 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, March 6th at 22:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

17:00 EST
07:00 JST
08:30 ACDT
23:00 CET
16:00 CST
14:00 PST

-Matt Treinish

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-05 Thread Qin Zhao
Hi Joe,
For example, I used to use a private cloud system which calculated charges
bi-weekly, and its charging formula looked like "Total_charge =
Instance_number*C1 + Total_instance_duration*C2 + Image_number*C3 +
Volume_number*C4".  The Instance/Image/Volume numbers are the counts of
those objects that the user created within those two weeks. It also has
quotas to limit total image size and total volume size. That formula is not
very exact, but you can see that it regards each of my 'create' operations
as a 'ticket', and will charge for all those tickets, plus the instance
duration fee. In order to reduce my department's expenses, I am asked
not to create instances very frequently, and not to create too many images
and volumes. The image quota is not very big, and I would never be permitted
to exceed it, since that requires additional dollars.
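
As a worked illustration of that formula (all constants and counts below are
made up, purely to show the arithmetic):

    # Hypothetical bi-weekly charge following the formula above.
    C1, C2, C3, C4 = 5.0, 0.01, 2.0, 1.0      # made-up unit prices
    instances, duration_hours, images, volumes = 3, 200, 4, 2
    total_charge = (instances * C1 + duration_hours * C2 +
                    images * C3 + volumes * C4)
    print(total_charge)  # 15.0 + 2.0 + 8.0 + 2.0 = 27.0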


On Thu, Mar 6, 2014 at 1:33 AM, Joe Gordon  wrote:

> On Wed, Mar 5, 2014 at 8:59 AM, Qin Zhao  wrote:
> > Hi Joe,
> > If we assume the user is willing to create a new instance, the workflow
> you
> > are saying is exactly correct. However, what I am assuming is that the
> user
> > is NOT willing to create a new instance. If Nova can revert the existing
> > instance, instead of creating a new one, it will become the alternative
> way
> > utilized by those users who are not allowed to create a new instance.
> > Both paths lead to the target. I think we can not assume all the people
> > should walk through path one and should not walk through path two. Maybe
> > creating new instance or adjusting the quota is very easy in your point
> of
> > view. However, the real use case is often limited by business process.
> So I
> > think we may need to consider that some users can not or are not allowed
> to
> > creating the new instance under specific circumstances.
> >
>
> What sort of circumstances would prevent someone from deleting and
> recreating an instance?
>
> >
> > On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon 
> wrote:
> >>
> >> On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao  wrote:
> >> > Hi Joe, my meaning is that cloud users may not hope to create new
> >> > instances
> >> > or new images, because those actions may require additional approval
> and
> >> > additional charging. Or, due to instance/image quota limits, they can
> >> > not do
> >> > that. Anyway, from user's perspective, saving and reverting the
> existing
> >> > instance will be preferred sometimes. Creating a new instance will be
> >> > another story.
> >> >
> >>
> >> Are you saying some users may not be able to create an instance at
> >> all? If so why not just control that via quotas.
> >>
> >> Assuming the user has the power to rights and quota to create one
> >> instance and one snapshot, your proposed idea is only slightly
> >> different then the current workflow.
> >>
> >> Currently one would:
> >> 1) Create instance
> >> 2) Snapshot instance
> >> 3) Use instance / break instance
> >> 4) delete instance
> >> 5) boot new instance from snapshot
> >> 6) goto step 3
> >>
> >> From what I gather you are saying that instead of 4/5 you want the
> >> user to be able to just reboot the instance. I don't think such a
> >> subtle change in behavior is worth a whole new API extension.
> >>
> >> >
> >> > On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon 
> >> > wrote:
> >> >>
> >> >> On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao  wrote:
> >> >> > I think the current snapshot implementation can be a solution
> >> >> > sometimes,
> >> >> > but
> >> >> > it is NOT exact same as user's expectation. For example, a new
> >> >> > blueprint
> >> >> > is
> >> >> > created last week,
> >> >> >
> https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
> >> >> > which
> >> >> > seems a little similar with this discussion. I feel the user is
> >> >> > requesting
> >> >> > Nova to create in-place snapshot (not a new image), in order to
> >> >> > revert
> >> >> > the
> >> >> > instance to a certain state. This capability should be very useful
> >> >> > when
> >> >> > testing new software or system settings. It seems a short-term
> >> >> > temporary
> >> >> > snapshot associated with a running instance for Nova. Creating a
> new
> >> >> > instance is not that convenient, and may be not feasible for the
> >> >> > user,
> >> >> > especially if he or she is using public cloud.
> >> >> >
> >> >>
> >> >> Why isn't it easy to create a new instance from a snapshot?
> >> >>
> >> >> >
> >> >> > On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
> >> >> >  wrote:
> >> >> >>
> >> >> >> >>> Why reboot an instance? What is wrong with deleting it and
> >> >> >> >>> create a
> >> >> >> >>> new one?
> >> >> >>
> >> >> >> You generally use non-persistent disk mode when you are testing
> new
> >> >> >> software or experimenting with settings.   If something goes wrong
> >> >> >> just
> >> >> >> reboot and you are back to clean state and start over again.I
> >> >> >> feel
> >> >> >> it's
> >> >> >> convenient to handle this with 

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Solly Ross
Has anyone tried compiling rootwrap under Cython?  Even with non-optimized 
libraries,
Cython sometimes sees speedups.

Best Regards,
Solly Ross

- Original Message -
From: "Vishvananda Ishaya" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, March 5, 2014 1:13:33 PM
Subject: Re: [openstack-dev] [neutron][rootwrap] Performance considerations,
sudo?


On Mar 5, 2014, at 6:42 AM, Miguel Angel Ajo  wrote:

> 
>Hello,
> 
>Recently, I found a serious issue about network-nodes startup time,
> neutron-rootwrap eats a lot of cpu cycles, much more than the processes it's 
> wrapping itself.
> 
>On a database with 1 public network, 192 private networks, 192 routers, 
> and 192 nano VMs, with OVS plugin:
> 
> 
> Network node setup time (rootwrap): 24 minutes
> Network node setup time (sudo): 10 minutes
> 
> 
>   That's the time since you reboot a network node, until all namespaces
> and services are restored.
> 
> 
>   If you see appendix "1", this extra 14min overhead, matches with the fact 
> that rootwrap needs 0.3s to start, and launch a system command (once 
> filtered).
> 
>14minutes =  840 s.
>(840s. / 192 resources)/0.3s ~= 15 operations / resource(qdhcp+qrouter) 
> (iptables, ovs port creation & tagging, starting child processes, etc..)
> 
>   The overhead comes from python startup time + rootwrap loading.
> 
>   I suppose that rootwrap was designed for a lower number of system calls 
> (nova?).
> 
>   And, I understand what rootwrap provides, a level of filtering that sudo 
> cannot offer. But it raises some questions:
> 
> 1) Is anyone actually using rootwrap in production?
> 
> 2) What alternatives can we think about to improve this situation.
> 
>   0) already being done: coalescing system calls. But I'm unsure that's 
> enough. (if we coalesce 15 calls to 3 on this system we get: 192*3*0.3/60 ~=3 
> minutes overhead on a 10min operation).
> 
>   a) Rewriting rules into sudo (to the extent that it's possible), and live 
> with that.
>   b) How secure is neutron about command injection to that point? How much is 
> user input filtered on the API calls?
>   c) Even if "b" is ok , I suppose that if the DB gets compromised, that 
> could lead to command injection.
> 
>   d) Re-writing rootwrap into C (it's 600 python LOCs now).


This seems like the best choice to me. It shouldn’t be that much work for a 
proficient C coder. Obviously it will need to be audited for buffer overflow 
issues etc, but the code should be small enough to make this doable with high 
confidence.

Vish

> 
>   e) Doing the command filtering at neutron-side, as a library and live with 
> sudo with simple filtering. (we kill the python/rootwrap startup overhead).
> 
> 3) I also find 10 minutes a long time to setup 192 networks/basic tenant 
> structures, I wonder if that time could be reduced by conversion
> of system process calls into system library calls (I know we don't have
> libraries for iproute, iptables?, and many other things... but it's a
> problem that's probably worth looking at.)
> 
> Best,
> Miguel Ángel Ajo.
> 
> 
> Appendix:
> 
> [1] Analyzing overhead:
> 
> [root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
> [root@rhos4-neutron2 ~]# gcc test.c -o test
> [root@rhos4-neutron2 ~]# time test  # to time process invocation on this 
> machine
> 
> real0m0.000s
> user0m0.000s
> sys0m0.000s
> 
> 
> [root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'
> 
> real0m0.032s
> user0m0.010s
> sys0m0.019s
> 
> 
> [root@rhos4-neutron2 ~]# time python -c'import sys;sys.exit(0)'
> 
> real0m0.057s
> user0m0.016s
> sys0m0.011s
> 
> [root@rhos4-neutron2 ~]# time neutron-rootwrap --help
> /usr/bin/neutron-rootwrap: No command specified
> 
> real0m0.309s
> user0m0.128s
> sys0m0.037s
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Status of multi-attach-volume work

2014-03-05 Thread Mike

On 03/05/2014 05:37 AM, Zhi Yan Liu wrote:

Hi,

We decided the multi-attach feature must be implemented as an extension to
core functionality in Cinder, but currently we do not have clear
extension support in Cinder; IMO that is the biggest blocker now. The
other issues have been listed at
https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume#Comments_and_Discussion
as well. Probably we could get more inputs from Cinder cores.

thanks,
zhiyan

On Wed, Mar 5, 2014 at 8:19 PM, Niklas Widell
 wrote:

Hi
What is the current status of the work around multi-attach-volume [1]? We
have some cluster related use cases that would benefit from being able to
attach a volume from several instances.

[1] https://wiki.openstack.org/wiki/Cinder/blueprints/multi-attach-volume

Best regards
Niklas Widell
Ericsson AB


As discussed in previous IRC meetings, this is not blocked by new ideas 
with extensions. We've decided there's so much involved with the changes 
that it didn't make sense to block the progress of others. Zhi, I've spoken 
to you personally about how you can continue your work as normal. Feel 
free to reach out to me on IRC (user thingee) if you need help.


-Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-05 Thread Tracy Jones
Russell - I also believe that danbp and alaski said they would sponsor 


On Mar 5, 2014, at 10:59 AM, Russell Bryant  wrote:

> On 03/05/2014 12:27 PM, Andrew Laski wrote:
>> On 03/05/14 at 07:37am, Tracy Jones wrote:
>>> Hi - Please consider the image cache aging BP for FFE
>>> (https://review.openstack.org/#/c/56416/)
>>> 
>>> This is the last of several patches (already merged) that implement
>>> image cache cleanup for the vmware driver.  This patch solves a
>>> significant customer pain point as it removes unused images from their
>>> datastore.  Without this patch their datastore can become
>>> unnecessarily full.  In addition to the customer benefit from this
>>> patch it
>>> 
>>> 1.  has a turn off switch
>>> 2.  is fully contained within the vmware driver
>>> 3.  has gone through functional testing with our internal QA team
>>> 
>>> ndipanov has been good enough to say he will review the patch, so we
>>> would ask for one additional core sponsor for this FFE.
>> 
>> Looking over the blueprint and outstanding review it seems that this is
>> a fairly low risk change, so I am willing to sponsor this bp as well.
> 
> Nikola, can you confirm if you're willing to sponsor (review) this?
> 
> -- 
> Russell Bryant
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: ISO support for the VMware driver

2014-03-05 Thread Russell Bryant
On 03/05/2014 10:34 AM, Gary Kotton wrote:
> Hi,
> Unfortunately we did not get the ISO support approved by the deadline.
> If possible can we please get the FFE.
> 
> The feature is completed and has been tested extensively internally. The
> feature is very low risk and has huge value for users. In short a user
> is able to upload a iso to glance then boot from that iso.
> 
> BP: https://blueprints.launchpad.net/openstack/?searchtext=vmware-iso-boot
> Code: https://review.openstack.org/#/c/63084/ and 
> https://review.openstack.org/#/c/77965/
> Sponsors: John Garbutt and Nikola Dipanov
> 
> One of the things that we are planning on improving in Juno is the way
> that the Vmops code is arranged and organized. We will soon be posting a
> wiki for ideas to be discussed. That will enable use to make additions
> like this a lot simpler in the future. But sadly that is not part of the
> scope at the moment.

John and Nikola, can you confirm your sponsorship of this one?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-05 Thread Russell Bryant
On 03/05/2014 12:27 PM, Andrew Laski wrote:
> On 03/05/14 at 07:37am, Tracy Jones wrote:
>> Hi - Please consider the image cache aging BP for FFE
>> (https://review.openstack.org/#/c/56416/)
>>
>> This is the last of several patches (already merged) that implement
>> image cache cleanup for the vmware driver.  This patch solves a
>> significant customer pain point as it removes unused images from their
>> datastore.  Without this patch their datastore can become
>> unnecessarily full.  In addition to the customer benefit from this
>> patch it
>>
>> 1.  has a turn off switch
>> 2.  is fully contained within the vmware driver
>> 3.  has gone through functional testing with our internal QA team
>>
>> ndipanov has been good enough to say he will review the patch, so we
>> would ask for one additional core sponsor for this FFE.
> 
> Looking over the blueprint and outstanding review it seems that this is
> a fairly low risk change, so I am willing to sponsor this bp as well.

Nikola, can you confirm if you're willing to sponsor (review) this?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE request: clean-up-legacy-block-device-mapping

2014-03-05 Thread Russell Bryant
On 03/05/2014 12:18 PM, Andrew Laski wrote:
> On 03/05/14 at 09:05am, Dan Smith wrote:
>>> Why accept it?
>>>
>>> * It's low risk but needed refactoring that will make the code that has
>>> been a source of occasional bugs.
>>> * It is very low risk internal refactoring that uses code that has been
>>> in tree for some time now (BDM objects).
>>> * It has seen it's fair share of reviews
>>
>> Yeah, this has been conflict-heavy for a long time. If it hadn't been, I
>> think it'd be merged by now.
>>
>> The bulk of this is done, the bits remaining have seen a *lot* of real
>> review. I'm happy to commit to reviewing this, since I've already done
>> so many times :)
> 
> I will also commit to reviewing this as I have reviewed much of it already.

Ok great, consider it approved.  It really needs to get in this week
though, with an absolute hard deadline of this coming Tuesday.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack GSoC] Chenchong Ask for Mentoring on "Implement a cross-services scheduler" Project

2014-03-05 Thread Yathiraj Udupi (yudupi)
Hi Chenchong, Fang,

I am glad that you have expressed interest in this project for GSoC.  It is a 
big project, I agree, in terms of its scope, but it is good to start with smaller 
goals.
It will be interesting to see what incremental things can be added to the 
current Nova scheduler to achieve cross-services scheduling.
Solver Scheduler (https://blueprints.launchpad.net/nova/+spec/solver-scheduler) 
 has been pushed to Juno, and that BP has a goal of providing a generic 
framework for expressing scheduling problem as a constraint optimization 
problem, and hence can take different forms of constraints and cost metrics 
including the cross-services aspects.
So it is good to not limit your ideas with respect to Solver Scheduler BP, but 
in general also think of what additional stuff can be added to the current 
Filter Scheduler as well.

For GSoC, I don't think you should worry about the feature freeze for now.  You 
can propose ideas in this theme for GSoC, and we can eventually get them upstream 
to be merged into Nova/Gantt.

The cross service scheduling BP for Filter Scheduler enhancements  is here - 
https://blueprints.launchpad.net/nova/+spec/cross-service-filter-scheduling  We 
can probably use this for additional filter scheduler enhancements.

Thanks,
Yathi.




On 3/5/14, 10:33 AM, "Chenchong Qin" <qinchench...@gmail.com> wrote:

Hi

Sorry for not cc'ing openstack-dev at first (I haven't gotten familiar with OpenStack's 
GSoC
customs... but it's quite a different flavor compared with my last mentoring 
org). I just
sent it to the possible mentors. But it turns out that openstack-dev gives lots 
of
benefit. :)

I noticed that Fang also has interest in this idea. That strengthened my 
belief
that it's a great idea/project.

Russell and dims showed their concerns that the project as described is far too 
large
to be implemented in one GSoC term. In fact, I hold the same concern, so I
asked the possible mentors about it at the end of my last mail.

This project appears to have a "big name". But when we dig into the details of the 
project
description, it seems that the project is about implementing a nova scheduler 
that
can take information from storage and network components into consideration and
can make decisions based on global information. Besides, Sylvain also mentioned
that it's now in the FeatureFreeze period. So, I think maybe we can move this 
project
from the Gantt section to the Nova section (with the consent of the original project 
proposers),
and further specify the contents of the project to make it an enhancement or a 
new
feature/option to nova's current scheduler.

Thanks all your help and Sylvain's reminder on #openstack-meeting!

Regards!

Chenchong


-- Forwarded message --
From: Chenchong Qin <qinchench...@gmail.com>
Date: Wed, Mar 5, 2014 at 10:28 PM
Subject: [OpenStack GSoC] Chenchong Ask for Mentoring on "Implement a 
cross-services scheduler" Project
To: yud...@cisco.com, dedu...@cisco.com


Hi, Yathi and Debo

I'm a master student from China who got a great interest in the "Implement a 
cross-services scheduler"
project you put in the Gantt section of OpenStack's GSoC 2014 idea list. I'm 
taking the liberty of asking
you as my mentor for applying this project.

My name is Chenchong Qin. I'm now in my second year as a master student of 
Computer Science at
University of Chinese Academy of Sciences. My research interests mainly focus 
on Computer Network
and Cloud Computing. I participated in GSoC 2013 to develop a rate control API 
that is 802.11n features
aware for FreeBSD (project 
homepage).
 I've been following OpenStack closely since last year and
have done some work related to network policy migration. I'm familiar with 
C/C++ and Python, and have
also written some little tools and simulation programs with Python.

When I first saw your idea of implementing a cross-services scheduler, I 
determined that it's a necessary
and meaningful proposal. I participated in a research project on channel 
scheduling in a distributed MIMO
system last year. From that project, I learned that without global information, 
any scheduling mechanisms
seemed feeble. I‘ve read the blueprints you wrote and I highly agree with you 
that the scheduler should be
able to leverage global information from multiple components like Nova, Cinder, 
and Neutron to make the
placement decisions. I'm willing to help with the SolverScheduler blueprint 
both during this GSoC project
and after.

And, I also got a question here. According to the project description, "This 
project will help to build a
cross-services scheduler that can interact with storage and network services to 
make decisions". So, our
cross-services scheduler is now just a nova scheduler that can interact with 
storage and network components
to make decisions, but not a universal scheduler that c

[openstack-dev] Fwd: [OpenStack GSoC] Chenchong Ask for Mentoring on "Implement a cross-services scheduler" Project

2014-03-05 Thread Chenchong Qin
Hi

Sorry for not cc'ing openstack-dev at first (I haven't gotten familiar with
OpenStack's GSoC
customs... but it's quite a different flavor compared with my last mentoring
org). I just
sent it to the possible mentors. But it turns out that openstack-dev gives
lots of
benefit. :)

I noticed that Fang also has interest in this idea. That strengthened
my belief
that it's a great idea/project.

Russell and dims showed their concerns that the project as described is far
too large
to be implemented in one GSoC term. In fact, I hold the same concern,
so I
asked the possible mentors about it at the end of my last mail.

This project appears to have a "big name". But when we dig into the details of
the project
description, it seems that the project is about implementing a nova
scheduler that
can take information from storage and network components into consideration
and
can make decisions based on global information. Besides, Sylvain also
mentioned
that it's now in the FeatureFreeze period. So, I think maybe we can move this
project
from the Gantt section to the Nova section (with the consent of the original
project proposers),
and further specify the contents of the project to make it an enhancement or
a new
feature/option to nova's current scheduler.

Thanks all your help and Sylvain's reminder on #openstack-meeting!

Regards!

Chenchong


-- Forwarded message --
From: Chenchong Qin 
Date: Wed, Mar 5, 2014 at 10:28 PM
Subject: [OpenStack GSoC] Chenchong Ask for Mentoring on "Implement a
cross-services scheduler" Project
To: yud...@cisco.com, dedu...@cisco.com


Hi, Yathi and Debo

I'm a master student from China who got a great interest in the "Implement
a cross-services scheduler"
project you put in the Gantt section of OpenStack's GSoC 2014 idea list.
I'm taking the liberty of asking
you as my mentor for applying this project.

My name is Chenchong Qin. I'm now in my second year as a master student of
Computer Science at
University of Chinese Academy of Sciences. My research interests mainly
focus on Computer Network
and Cloud Computing. I participated in GSoC 2013 to develop a rate control
API that is 802.11n features
aware for FreeBSD (project
homepage).
I've been following closely with OpenStack since last year and
have done some work related to network policy migration. I'm familiar with
C/C++ and Python, and have
also written some small tools and simulation programs in Python.

When I first saw your idea of implementing a cross-services scheduler, I
determined that it's a necessary
and meaningful proposal. I participated in a research project on channel
scheduling in a distributed MIMO
system last year. From that project, I learned that without global
information, any scheduling mechanisms
seemed feeble. I've read the blueprints you wrote and I highly agree with
you that the scheduler should be
able to leverage global information from multiple components like Nova,
Cinder, and Neutron to make the
placement decisions. I'm willing to help with the SolverScheduler blueprint
both during this GSoC project
and after.

And, I also got a question here. According to the project description,
"This project will help to build a
cross-services scheduler that can interact with storage and network
services to make decisions". So, our
cross-services scheduler is now just a nova scheduler that can interact
with storage and network component
to make decisions, but not a universal scheduler that can be used by other
components. Did I make it right?

Looking forward to hearing from you.

Thanks and regards!

Chenchong
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Disaster Recovery for OpenStack - community interest for Juno and beyond - meeting notes and next steps

2014-03-05 Thread Ronen Kat
Thank you to the participants who joined the kick-off meeting for work 
in the community toward Disaster Recovery for OpenStack.
We captured the meeting notes on the Etherpad - see 
https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders 


Per the consensus in the meeting, we will schedule meetings toward the next 
summit.
Next meeting: March 19 12pm - 1pm ET (phone call-in)
Call in numbers are available at 
https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2
 

Passcode: 6406941

Everyone is invited!

Ronen,

- Forwarded by Ronen Kat/Haifa/IBM on 05/03/2014 08:05 PM -

From:   Ronen Kat/Haifa/IBM
To: openstack-dev@lists.openstack.org, 
Date:   04/03/2014 01:16 PM
Subject:Disaster Recovery for OpenStack - call for stakeholders


Hello,

At the Hong Kong summit, there was a lot of interest around OpenStack 
support for Disaster Recovery, including a design summit session, an 
un-conference session and a break-out session.
In addition we set up a Wiki for OpenStack disaster recovery - see 
https://wiki.openstack.org/wiki/DisasterRecovery 
The first step was enabling volume replication in Cinder, which has 
started in the Icehouse development cycle and will continue into Juno.

Toward the Juno summit and development cycle we would like to send out a 
"call for disaster recovery stakeholders", looking to:
* Create a list of use-cases and scenarios for disaster recovery with 
OpenStack
* Find interested parties who wish to contribute features and code to 
advance disaster recovery in OpenStack
* Plan for needed discussions at the Juno summit

To coordinate such efforts, I would like to invite you to a conference 
call on Wednesday March 5 at 12pm ET to work together on coordinating 
actions for the Juno summit (an invitation is attached).
We will record minutes of the call at - 
https://etherpad.openstack.org/p/juno-disaster-recovery-call-for-stakeholders 
(link also available from the disaster recovery wiki page).
If you are unable to join but are interested, please register yourself and 
share your thoughts.



Call in numbers are available at 
https://www.teleconference.att.com/servlet/glbAccess?process=1&accessCode=6406941&accessNumber=1809417783#C2
 

Passcode: 6406941

Regards,
__
Ronen I. Kat, PhD
Storage Research
IBM Research - Haifa
Phone: +972.3.7689493
Email: ronen...@il.ibm.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Vishvananda Ishaya

On Mar 5, 2014, at 6:42 AM, Miguel Angel Ajo  wrote:

> 
>Hello,
> 
>Recently, I found a serious issue about network-nodes startup time,
> neutron-rootwrap eats a lot of cpu cycles, much more than the processes it's 
> wrapping itself.
> 
>On a database with 1 public network, 192 private networks, 192 routers, 
> and 192 nano VMs, with OVS plugin:
> 
> 
> Network node setup time (rootwrap): 24 minutes
> Network node setup time (sudo): 10 minutes
> 
> 
>   That's the time since you reboot a network node, until all namespaces
> and services are restored.
> 
> 
>   If you see appendix "1", this extra 14min overhead, matches with the fact 
> that rootwrap needs 0.3s to start, and launch a system command (once 
> filtered).
> 
>14minutes =  840 s.
>(840s. / 192 resources)/0.3s ~= 15 operations / resource(qdhcp+qrouter) 
> (iptables, ovs port creation & tagging, starting child processes, etc..)
> 
>   The overhead comes from python startup time + rootwrap loading.
> 
>   I suppose that rootwrap was designed for lower amount of system calls 
> (nova?).
> 
>   And, I understand what rootwrap provides, a level of filtering that sudo 
> cannot offer. But it raises some question:
> 
> 1) It's actually someone using rootwrap in production?
> 
> 2) What alternatives can we think about to improve this situation.
> 
>   0) already being done: coalescing system calls. But I'm unsure that's 
> enough. (if we coalesce 15 calls to 3 on this system we get: 192*3*0.3/60 ~=3 
> minutes overhead on a 10min operation).
> 
>   a) Rewriting rules into sudo (to the extent that it's possible), and live 
> with that.
>   b) How secure is neutron about command injection to that point? How much is 
> user input filtered on the API calls?
>   c) Even if "b" is ok , I suppose that if the DB gets compromised, that 
> could lead to command injection.
> 
>   d) Re-writing rootwrap into C (it's 600 python LOCs now).


This seems like the best choice to me. It shouldn’t be that much work for a 
proficient C coder. Obviously it will need to be audited for buffer overflow 
issues etc, but the code should be small enough to make this doable with high 
confidence.

Vish

> 
>   e) Doing the command filtering at neutron-side, as a library and live with 
> sudo with simple filtering. (we kill the python/rootwrap startup overhead).
> 
> 3) I also find 10 minutes a long time to setup 192 networks/basic tenant 
> structures, I wonder if that time could be reduced by conversion
> of system process calls into system library calls (I know we don't have
> libraries for iproute, iptables?, and many other things... but it's a
> problem that's probably worth looking at.)
> 
> Best,
> Miguel Ángel Ajo.
> 
> 
> Appendix:
> 
> [1] Analyzing overhead:
> 
> [root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
> [root@rhos4-neutron2 ~]# gcc test.c -o test
> [root@rhos4-neutron2 ~]# time test  # to time process invocation on this 
> machine
> 
> real0m0.000s
> user0m0.000s
> sys0m0.000s
> 
> 
> [root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'
> 
> real0m0.032s
> user0m0.010s
> sys0m0.019s
> 
> 
> [root@rhos4-neutron2 ~]# time python -c'import sys;sys.exit(0)'
> 
> real0m0.057s
> user0m0.016s
> sys0m0.011s
> 
> [root@rhos4-neutron2 ~]# time neutron-rootwrap --help
> /usr/bin/neutron-rootwrap: No command specified
> 
> real0m0.309s
> user0m0.128s
> sys0m0.037s
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting Mar 6 1800 UTC

2014-03-05 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in
#openstack-meeting-alt channel.

Agenda: 
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_March.2C_6

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20140306T18

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Feature Freeze

2014-03-05 Thread Devananda van der Veen
All,

Feature freeze for Ironic is now in effect, and the icehouse-3
milestone-proposed branch has been created:

http://git.openstack.org/cgit/openstack/ironic/log/?h=milestone-proposed

I have bumped to the next cycle any blueprints which were targeted to
Icehouse but not yet completed, and temporarily blocked code reviews
related to new features. I will unblock those reviews when Juno opens. The
following blueprints were affected:

https://blueprints.launchpad.net/ironic/+spec/serial-console-access
https://blueprints.launchpad.net/ironic/+spec/migration-from-nova
https://blueprints.launchpad.net/ironic/+spec/windows-disk-image-support
https://blueprints.launchpad.net/ironic/+spec/ironic-ilo-power-driver
https://blueprints.launchpad.net/ironic/+spec/ironic-ilo-virtualmedia-driver

Icehouse release candidates will be tagged near the end of March [*]. Until
then, I would like everyone to focus on CI by means of integration with
TripleO and devstack, and fixing bugs and improving stability. We should
not change either the REST or Driver APIs unless absolutely necessary. I am
targeting bugs which I believe are necessary for the Icehouse release to
the RC1 milestone; that list can be seen here:

https://launchpad.net/ironic/+milestone/icehouse-rc1

If you believe a bug should be targeted to icehouse, please raise it with a
member of the core team in #openstack-ironic on irc.freenode.net. Code
reviews for non-RC-targeted bugs may be blocked, or the bug should be
targeted to the RC so we can track ongoing work.


Thanks!
Devananda

[*] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Roman Podoliaka
Hi all,

So yeah, we could restore the option and put creation of a slave
engine instance to EngineFacade class, but I don't think we want this.

The only reason why slave connections aren't implemented e.g. in
SQLAlchemy is that SQLAlchemy, as a library, can't decide for you how
those engines should be used: do you have an ACTIVE-ACTIVE setup or
ACTIVE-PASSIVE, to which database reads/writes must go, and so on. The
same is true for oslo.db.

Nova is the only project that uses the slave_connection option, and it was
kind of broken: the nova bare metal driver uses a separate database and
there was no way to use a slave db connection for it.

So due to the lack of consistency in the use of slave connections, IMO this
should be left up to the application to decide how to use it. And we
provide the EngineFacade helper already. So I'd just say: create an
EngineFacade instance for a slave connection explicitly, if you want
it to be used like it is used in Nova right now.
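
(For illustration only, a rough sketch of that explicit approach; the
import path and connection URLs below are placeholders, not oslo.db API:)

    from myproject.openstack.common.db.sqlalchemy import session as db_session

    _MASTER_FACADE = db_session.EngineFacade('mysql://user:pass@master/mydb')
    _SLAVE_FACADE = db_session.EngineFacade('mysql://user:pass@slave/mydb')

    def get_session(use_slave=False, **kwargs):
        # the application, not oslo.db, decides which engine a query goes to
        facade = _SLAVE_FACADE if use_slave else _MASTER_FACADE
        return facade.get_session(**kwargs)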

Thanks,
Roman

On Wed, Mar 5, 2014 at 8:35 AM, Doug Hellmann
 wrote:
>
>
>
> On Wed, Mar 5, 2014 at 10:43 AM, Alexei Kornienko
>  wrote:
>>
>> Hello Darren,
>>
>> This option is removed since oslo.db will no longer manage engine objects
>> on its own. Since it will not store engines it cannot handle query
>> dispatching.
>>
>> Every project that wants to use slave_connection will have to implement
>> this logic (creation of the slave engine and query dispatching) on its own.
>
>
> If we are going to have multiple projects using that feature, we will have
> to restore it to oslo.db. Just because the primary API won't manage global
> objects doesn't mean we can't have a secondary API that does.
>
> Doug
>
>
>>
>>
>> Regards,
>>
>>
>> On 03/05/2014 05:18 PM, Darren Birkett wrote:
>>
>> Hi,
>>
>> I'm wondering why in this commit:
>>
>>
>> https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f
>>
>> ...the slave_connection option was removed.  It seems like a useful option
>> to have, even if a lot of projects weren't yet using it.
>>
>> Darren
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Service VM: irc discussion?

2014-03-05 Thread Isaku Yamahata
Since I received some mails privately, I'd like to start weekly IRC meeting.
The first meeting will be

  Tuesdays 23:00UTC from March 11, 2014
  #openstack-meeting
  https://wiki.openstack.org/wiki/Meetings/ServiceVM
  If you have topics to discuss, please add to the page.

Sorry if the time is inconvenient for you. The schedule will also be
discussed, and the meeting time would be changed from the 2nd one.

Thanks,

On Mon, Feb 10, 2014 at 03:11:43PM +0900,
Isaku Yamahata  wrote:

> As the first patch for service vm framework is ready for review[1][2],
> it would be a good idea to have IRC meeting.
> Anyone interested in it? How about schedule?
> 
> Schedule candidate
> Monday  22:00UTC-, 23:00UTC-
> Tuesday 22:00UTC-, 23:00UTC-
> (Although the slot of the advanced services meeting [3] can be reused,
>  it doesn't work for me because my timezone is UTC+9.)
> 
> topics for the meeting:
> - discussion/review on the patch
> - next steps
> - other open issues?
> 
> [1] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
> [2] https://review.openstack.org/#/c/56892/
> [3] https://wiki.openstack.org/wiki/Meetings/AdvancedServices
> -- 
> Isaku Yamahata 

-- 
Isaku Yamahata 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-03-05 Thread Samuel Bercovici
Hi,

In 
https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit?usp=sharing
 referenced by the Wiki, I have added the section that address the items raised 
on the last irc meeting.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Wednesday, February 26, 2014 7:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici; Eugene Nikanorov (enikano...@mirantis.com); Evgeny 
Fedoruk; Avishay Balderman
Subject: RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi,

I have added to the wiki page: 
https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion#1.1_Turning_existing_model_to_logical_model
 that points to a document that includes the current model + L7 + SSL.
Please review.

Regards,
-Sam.


From: Samuel Bercovici
Sent: Monday, February 24, 2014 7:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici
Subject: RE: [openstack-dev] [Neutron][LBaaS] Object Model discussion

Hi,

I also agree that the model should be pure logical.
I think that the existing model is almost correct, but the pool should be made 
purely logical. This means that the vip <-> pool relationship also needs to 
become any-to-any.
Eugene has rightfully pointed out that the current "state" management will not 
handle such a relationship well.
To me this means that the "state" management is broken and not the model.
I will propose an update to the state management in the next few days.

Regards,
-Sam.




From: Mark McClain [mailto:mmccl...@yahoo-inc.com]
Sent: Monday, February 24, 2014 6:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion


On Feb 21, 2014, at 1:29 PM, Jay Pipes 
mailto:jaypi...@gmail.com>> wrote:

I disagree on this point. I believe that the more implementation details
bleed into the API, the harder the API is to evolve and improve, and the
less flexible the API becomes.

I'd personally love to see the next version of the LBaaS API be a
complete breakaway from any implementation specifics and refocus itself
to be a control plane API that is written from the perspective of the
*user* of a load balancing service, not the perspective of developers of
load balancer products.

I agree with Jay.  The API needs to be user-centric and free of 
implementation details.  One of the concerns I've voiced in some of the IRC 
discussions is that too many implementation details are exposed to the user.

mark
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Joe Gordon
On Wed, Mar 5, 2014 at 8:51 AM, Miguel Angel Ajo Pelayo
 wrote:
>
>
> - Original Message -
>> Miguel Angel Ajo wrote:
>> > [...]
>> >The overhead comes from python startup time + rootwrap loading.
>> >
>> >I suppose that rootwrap was designed for lower amount of system calls
>> > (nova?).
>>
>> Yes, it was not really designed to escalate rights on hundreds of
>> separate shell commands in a row.
>>
>> >And, I understand what rootwrap provides, a level of filtering that
>> > sudo cannot offer. But it raises some question:
>> >
>> > 1) It's actually someone using rootwrap in production?
>> >
>> > 2) What alternatives can we think about to improve this situation.
>> >
>> >0) already being done: coalescing system calls. But I'm unsure that's
>> > enough. (if we coalesce 15 calls to 3 on this system we get:
>> > 192*3*0.3/60 ~=3 minutes overhead on a 10min operation).
>> >
>> >a) Rewriting rules into sudo (to the extent that it's possible), and
>> > live with that.
>>
>> We used to use sudo and a sudoers file. The rules were poorly written,
>> and there is just so much you can check in a sudoers file. But the main
>> issue was that the sudoers file lived in packaging
>> (distribution-dependent), and was not maintained in sync with the code.
>> Rootwrap let us to maintain the rules (filters) in sync with the code
>> calling them.
>
> Yes, from security & maintenance, it was a smart decision. I'm thinking
> of automatically converting rootwrap rules to sudoers, but that's very
> limited, especially for the ip netns exec ... case.
>
>
>> To work around perf issues, you still have the option of running with a
>> wildcard sudoer file (and root_wrapper = sudo). That's about as safe as
>> running with a badly-written or badly-maintained sudo rules anyway.
>
> That's what I used for my "benchmark". I just wonder how possible it
> is to get command injection into neutron, via API or DB.
>
>>
>> > [...]
>> >d) Re-writing rootwrap into C (it's 600 python LOCs now).
>>
>> (d2) would be to explore running rootwrap under Pypy. Testing that is on
>> my TODO list, but $OTHERSTUFF got into the way. Feel free to explore
>> that option.
>
> I tried it on my system just now; it takes more time to boot up. The PyPy JIT
> is awesome at runtime, but it seems that boot time is slower.

That is the wrong pypy! there are some pypy core devs lurking on this
ML so they may correct some of these details but:

It turns out python has a really big startup overhead:

jogo@lappy:~$ time echo true
true

real0m0.000s
user0m0.000s
sys 0m0.000s

jogo@lappy:~$ time python -c "print True"
True

real0m0.022s
user0m0.013s
sys 0m0.009s

And I am not surprised pypy isn't much better, pypy works better with
longer running programs.

But pypy isn't just one thing its two parts:

"In common parlance, PyPy has been used to mean two things. The first
is the RPython translation toolchain, which is a framework for
generating dynamic programming language implementations. And the
second is one particular implementation that is so generated - an
implementation of the Python programming language written in Python
itself. It is designed to be flexible and easy to experiment with."

So the idea is to rewrite rootwrap in RPython and use the RPython
translation toolchain to convert rootwrap into C. That way we keep the
source code in a language more friendly to OpenStack devs, and we
hopefully avoid the overhead associated with starting python up.
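
(For illustration only, a minimal sketch of what an RPython "target" module
looks like; the filtering logic here is hypothetical, not actual rootwrap code:)

    # rootwrap_target.py -- translate with: rpython rootwrap_target.py
    # RPython is a restricted subset of Python 2, so the source stays readable
    # while the toolchain turns it into a native binary with no interpreter
    # startup cost.

    def entry_point(argv):
        # real command filtering would go here
        print "filtered and executed: %s" % " ".join(argv[1:])
        return 0

    def target(driver, args):
        # entry hook expected by the RPython translation toolchain
        return entry_point, None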

>
> I also played a little with shedskin (py->c++ converter), but it
> doesn't support all the python libraries, dynamic typing, or parameter 
> unpacking.
>
> That could be another approach, writing a simplified rootwrap in python, and
> have it automatically converted to C++.
>
> f) haleyb on IRC is pointing me to another approach Carl Baldwin is
> pushing https://review.openstack.org/#/c/67490/ towards command execution
> coalescing.
>
>
>>
>> >e) Doing the command filtering at neutron-side, as a library and live
>> > with sudo with simple filtering. (we kill the python/rootwrap startup
>> > overhead).
>>
>> That's as safe as running with a wildcard sudoers file (neutron user can
>> escalate to root). Which may just be acceptable in /some/ scenarios.
>
> I think it can be safer, (from the command injection point of view).
>
>>
>> --
>> Thierry Carrez (ttx)
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Victor Sergeyev
Hello All.

We plan to have common database code in the oslo.db library. So we decided to
let end applications cope with engines, not oslo.db. For example, see the
work with the slave engine in Nova [1]. There is also a patch to oslo with more
details - [2]

Also, Darren, please tell us a bit about your use case.

[1]
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L95
[2] https://review.openstack.org/#/c/68684/


On Wed, Mar 5, 2014 at 6:35 PM, Doug Hellmann
wrote:

>
>
>
> On Wed, Mar 5, 2014 at 10:43 AM, Alexei Kornienko <
> alexei.kornie...@gmail.com> wrote:
>
>>  Hello Darren,
>>
>> This option is removed since oslo.db will no longer manage engine objects
>> on its own. Since it will not store engines it cannot handle query
>> dispatching.
>>
>> Every project that wants to use slave_connection will have to implement
>> this logic (creation of the slave engine and query dispatching) on its own.
>>
>
> If we are going to have multiple projects using that feature, we will have
> to restore it to oslo.db. Just because the primary API won't manage global
> objects doesn't mean we can't have a secondary API that does.
>
> Doug
>
>
>
>>
>> Regards,
>>
>>
>> On 03/05/2014 05:18 PM, Darren Birkett wrote:
>>
>> Hi,
>>
>>  I'm wondering why in this commit:
>>
>>
>> https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f
>>
>>  ...the slave_connection option was removed.  It seems like a useful
>> option to have, even if a lot of projects weren't yet using it.
>>
>>  Darren
>>
>>
>> ___
>> OpenStack-dev mailing 
>> listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-05 Thread Joe Gordon
On Wed, Mar 5, 2014 at 8:59 AM, Qin Zhao  wrote:
> Hi Joe,
> If we assume the user is willing to create a new instance, the workflow you
> are saying is exactly correct. However, what I am assuming is that the user
> is NOT willing to create a new instance. If Nova can revert the existing
> instance, instead of creating a new one, it will become the alternative way
> utilized by those users who are not allowed to create a new instance.
> Both paths lead to the target. I think we can not assume all the people
> should walk through path one and should not walk through path two. Maybe
> creating a new instance or adjusting the quota is very easy from your point of
> view. However, the real use case is often limited by business process. So I
> think we may need to consider that some users cannot or are not allowed to
> create a new instance under specific circumstances.
>

What sort of circumstances would prevent someone from deleting and
recreating an instance?

>
> On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon  wrote:
>>
>> On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao  wrote:
>> > Hi Joe, my meaning is that cloud users may not hope to create new
>> > instances
>> > or new images, because those actions may require additional approval and
>> > additional charging. Or, due to instance/image quota limits, they can
>> > not do
>> > that. Anyway, from user's perspective, saving and reverting the existing
>> > instance will be preferred sometimes. Creating a new instance will be
>> > another story.
>> >
>>
>> Are you saying some users may not be able to create an instance at
>> all? If so why not just control that via quotas.
>>
>> Assuming the user has the power to rights and quota to create one
>> instance and one snapshot, your proposed idea is only slightly
>> different then the current workflow.
>>
>> Currently one would:
>> 1) Create instance
>> 2) Snapshot instance
>> 3) Use instance / break instance
>> 4) delete instance
>> 5) boot new instance from snapshot
>> 6) goto step 3
>>
>> From what I gather you are saying that instead of 4/5 you want the
>> user to be able to just reboot the instance. I don't think such a
>> subtle change in behavior is worth a whole new API extension.
>>
>> >
>> > On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon 
>> > wrote:
>> >>
>> >> On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao  wrote:
>> >> > I think the current snapshot implementation can be a solution
>> >> > sometimes,
>> >> > but
>> >> > it is NOT exact same as user's expectation. For example, a new
>> >> > blueprint
>> >> > is
>> >> > created last week,
>> >> > https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
>> >> > which
>> >> > seems a little similar with this discussion. I feel the user is
>> >> > requesting
>> >> > Nova to create in-place snapshot (not a new image), in order to
>> >> > revert
>> >> > the
>> >> > instance to a certain state. This capability should be very useful
>> >> > when
>> >> > testing new software or system settings. It seems a short-term
>> >> > temporary
>> >> > snapshot associated with a running instance for Nova. Creating a new
>> >> > instance is not that convenient, and may be not feasible for the
>> >> > user,
>> >> > especially if he or she is using public cloud.
>> >> >
>> >>
>> >> Why isn't it easy to create a new instance from a snapshot?
>> >>
>> >> >
>> >> > On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
>> >> >  wrote:
>> >> >>
>> >> >> >>> Why reboot an instance? What is wrong with deleting it and
>> >> >> >>> create a
>> >> >> >>> new one?
>> >> >>
>> >> >> You generally use non-persistent disk mode when you are testing new
>> >> >> software or experimenting with settings.   If something goes wrong
>> >> >> just
>> >> >> reboot and you are back to clean state and start over again.I
>> >> >> feel
>> >> >> it's
>> >> >> convenient to handle this with just a reboot rather than recreating
>> >> >> the
>> >> >> instance.
>> >> >>
>> >> >> Thanks,
>> >> >> Divakar
>> >> >>
>> >> >> -Original Message-
>> >> >> From: Joe Gordon [mailto:joe.gord...@gmail.com]
>> >> >> Sent: Tuesday, March 04, 2014 10:41 AM
>> >> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> >> Subject: Re: [openstack-dev] [nova][cinder] non-persistent
>> >> >> storage(after
>> >> >> stopping VM, data will be rollback automatically), do you think we
>> >> >> shoud
>> >> >> introduce this feature?
>> >> >> Importance: High
>> >> >>
>> >> >> On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang
>> >> >> 
>> >> >> wrote:
>> >> >> >>
>> >> >> >> This sounds like ephemeral storage plus snapshots.  You build a
>> >> >> >> base
>> >> >> >> image, snapshot it then boot from the snapshot.
>> >> >> >
>> >> >> >
>> >> >> > Non-persistent storage/disk is useful for sandbox-like
>> >> >> > environment,
>> >> >> > and
>> >> >> > this feature has already exists in VMWare ESX from version 4.1.
>> >> >> > The
>> >> >> > implementation of ESX is the same as what you said, boot from
>> >> >> 

Re: [openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-05 Thread Andrew Laski

On 03/05/14 at 07:37am, Tracy Jones wrote:

Hi - Please consider the image cache aging BP for FFE 
(https://review.openstack.org/#/c/56416/)

This is the last of several patches (already merged) that implement image cache 
cleanup for the vmware driver.  This patch solves a significant customer pain 
point as it removes unused images from their datastore.  Without this patch 
their datastore can become unnecessarily full.  In addition to the customer 
benefit from this patch it

1.  has a turn-off switch
2.  is fully contained within the vmware driver
3.  has gone through functional testing with our internal QA team

ndipanov has been good enough to say he will review the patch, so we would ask 
for one additional core sponsor for this FFE.


Looking over the blueprint and outstanding review it seems that this is 
a fairly low risk change, so I am willing to sponsor this bp as well.




Thanks

Tracy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE request: clean-up-legacy-block-device-mapping

2014-03-05 Thread Andrew Laski

On 03/05/14 at 09:05am, Dan Smith wrote:

Why accept it?

* It's low risk but needed refactoring that will improve code that has
been a source of occasional bugs.
* It is very low risk internal refactoring that uses code that has been
in tree for some time now (BDM objects).
* It has seen its fair share of reviews


Yeah, this has been conflict-heavy for a long time. If it hadn't been, I
think it'd be merged by now.

The bulk of this is done, the bits remaining have seen a *lot* of real
review. I'm happy to commit to reviewing this, since I've already done
so many times :)


I will also commit to reviewing this as I have reviewed much of it 
already.




--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE request: clean-up-legacy-block-device-mapping

2014-03-05 Thread Dan Smith
> Why accept it?
> 
> * It's low risk but needed refactoring that will improve code that has
> been a source of occasional bugs.
> * It is very low risk internal refactoring that uses code that has been
> in tree for some time now (BDM objects).
> * It has seen its fair share of reviews

Yeah, this has been conflict-heavy for a long time. If it hadn't been, I
think it'd be merged by now.

The bulk of this is done, the bits remaining have seen a *lot* of real
review. I'm happy to commit to reviewing this, since I've already done
so many times :)

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-05 Thread Qin Zhao
Hi Joe,
If we assume the user is willing to create a new instance, the workflow you
are saying is exactly correct. However, what I am assuming is that the user
is NOT willing to create a new instance. If Nova can revert the existing
instance, instead of creating a new one, it will become the alternative way
utilized by those users who are not allowed to create a new instance.
Both paths lead to the target. I think we can not assume all the people
should walk through path one and should not walk through path two. Maybe
creating a new instance or adjusting the quota is very easy from your point of
view. However, the real use case is often limited by business process. So I
think we may need to consider that some users cannot or are not allowed to
create a new instance under specific circumstances.


On Thu, Mar 6, 2014 at 12:02 AM, Joe Gordon  wrote:

> On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao  wrote:
> > Hi Joe, my meaning is that cloud users may not hope to create new
> instances
> > or new images, because those actions may require additional approval and
> > additional charging. Or, due to instance/image quota limits, they can
> not do
> > that. Anyway, from user's perspective, saving and reverting the existing
> > instance will be preferred sometimes. Creating a new instance will be
> > another story.
> >
>
> Are you saying some users may not be able to create an instance at
> all? If so why not just control that via quotas.
>
> Assuming the user has the power to rights and quota to create one
> instance and one snapshot, your proposed idea is only slightly
> different then the current workflow.
>
> Currently one would:
> 1) Create instance
> 2) Snapshot instance
> 3) Use instance / break instance
> 4) delete instance
> 5) boot new instance from snapshot
> 6) goto step 3
>
> From what I gather you are saying that instead of 4/5 you want the
> user to be able to just reboot the instance. I don't think such a
> subtle change in behavior is worth a whole new API extension.
>
> >
> > On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon 
> wrote:
> >>
> >> On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao  wrote:
> >> > I think the current snapshot implementation can be a solution
> sometimes,
> >> > but
> >> > it is NOT exact same as user's expectation. For example, a new
> blueprint
> >> > is
> >> > created last week,
> >> > https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
> >> > which
> >> > seems a little similar with this discussion. I feel the user is
> >> > requesting
> >> > Nova to create in-place snapshot (not a new image), in order to revert
> >> > the
> >> > instance to a certain state. This capability should be very useful
> when
> >> > testing new software or system settings. It seems a short-term
> temporary
> >> > snapshot associated with a running instance for Nova. Creating a new
> >> > instance is not that convenient, and may be not feasible for the user,
> >> > especially if he or she is using public cloud.
> >> >
> >>
> >> Why isn't it easy to create a new instance from a snapshot?
> >>
> >> >
> >> > On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
> >> >  wrote:
> >> >>
> >> >> >>> Why reboot an instance? What is wrong with deleting it and
> create a
> >> >> >>> new one?
> >> >>
> >> >> You generally use non-persistent disk mode when you are testing new
> >> >> software or experimenting with settings.   If something goes wrong
> just
> >> >> reboot and you are back to clean state and start over again.I
> feel
> >> >> it's
> >> >> convenient to handle this with just a reboot rather than recreating
> the
> >> >> instance.
> >> >>
> >> >> Thanks,
> >> >> Divakar
> >> >>
> >> >> -Original Message-
> >> >> From: Joe Gordon [mailto:joe.gord...@gmail.com]
> >> >> Sent: Tuesday, March 04, 2014 10:41 AM
> >> >> To: OpenStack Development Mailing List (not for usage questions)
> >> >> Subject: Re: [openstack-dev] [nova][cinder] non-persistent
> >> >> storage(after
> >> >> stopping VM, data will be rollback automatically), do you think we
> >> >> shoud
> >> >> introduce this feature?
> >> >> Importance: High
> >> >>
> >> >> On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang
> >> >> 
> >> >> wrote:
> >> >> >>
> >> >> >> This sounds like ephemeral storage plus snapshots.  You build a
> base
> >> >> >> image, snapshot it then boot from the snapshot.
> >> >> >
> >> >> >
> >> >> > Non-persistent storage/disk is useful for sandbox-like environment,
> >> >> > and
> >> >> > this feature has already exists in VMWare ESX from version 4.1. The
> >> >> > implementation of ESX is the same as what you said, boot from
> >> >> > snapshot of
> >> >> > the disk/volume, but it will also *automatically* delete the
> >> >> > transient
> >> >> > snapshot after the instance reboots or shutdowns. I think the whole
> >> >> > procedure may be controlled by OpenStack other than user's manual
> >> >> > operations.
> >> >>
> >> >> Why reboot an instance? What is wrong with deleting it and create a
> new
> >> >>

Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Rick Jones

On 03/05/2014 06:42 AM, Miguel Angel Ajo wrote:


 Hello,

 Recently, I found a serious issue about network-nodes startup time,
neutron-rootwrap eats a lot of cpu cycles, much more than the processes
it's wrapping itself.

 On a database with 1 public network, 192 private networks, 192
routers, and 192 nano VMs, with OVS plugin:


Network node setup time (rootwrap): 24 minutes
Network node setup time (sudo): 10 minutes


I've not been looking at rootwrap, but have been looking at sudo and ip. 
(Using some scripts which create "fake routers" so I could look without 
any of this icky OpenStack stuff in the way :) ) The Ubuntu 12.04 
versions of each at least will enumerate all the interfaces on the 
system, even though they don't need to.


There was already an upstream change to 'ip' that eliminates the 
unnecessary enumeration.  In the last few weeks an enhancement went into 
the upstream sudo that allows one to configure sudo to not do the same 
thing.   Down in the low(ish) three figures of interfaces it may not be 
a Big Deal (tm) but as one starts to go beyond that...


commit f0124b0f0aa0e5b9288114eb8e6ff9b4f8c33ec8
Author: Stephen Hemminger 
Date:   Thu Mar 28 15:17:47 2013 -0700

ip: remove unnecessary ll_init_map

Don't call ll_init_map on modify operations
Saves significant overhead with 1000's of devices.

http://www.sudo.ws/pipermail/sudo-workers/2014-January/000826.html

Whether your environment already has the 'ip' change I don't know, but 
odds are probably pretty good it doesn't have the sudo enhancement.



That's the time since you reboot a network node, until all namespaces
and services are restored.


So, that includes the time for the system to go down and reboot, not 
just the time it takes to rebuild once rebuilding starts?



If you see appendix "1", this extra 14min overhead, matches with the
fact that rootwrap needs 0.3s to start, and launch a system command
(once filtered).

 14minutes =  840 s.
 (840s. / 192 resources)/0.3s ~= 15 operations /
resource(qdhcp+qrouter) (iptables, ovs port creation & tagging, starting
child processes, etc..)

The overhead comes from python startup time + rootwrap loading.


How much of the time is python startup time?  I assume that would be all 
the "find this lib, find that lib" stuff one sees in a system call 
trace?  I saw a boatload of that at one point but didn't quite feel like 
wading into that at the time.



I suppose that rootwrap was designed for lower amount of system
calls (nova?).


And/or a smaller environment perhaps.


And, I understand what rootwrap provides, a level of filtering that
sudo cannot offer. But it raises some question:

1) It's actually someone using rootwrap in production?

2) What alternatives can we think about to improve this situation.

0) already being done: coalescing system calls. But I'm unsure
that's enough. (if we coalesce 15 calls to 3 on this system we get:
192*3*0.3/60 ~=3 minutes overhead on a 10min operation).


It may not be sufficient, but it is (IMO) certainly necessary.  It will 
make any work that minimizes or eliminates the overhead of rootwrap look 
that much better.



a) Rewriting rules into sudo (to the extent that it's possible), and
live with that.
b) How secure is neutron about command injection to that point? How
much is user input filtered on the API calls?
c) Even if "b" is ok , I suppose that if the DB gets compromised,
that could lead to command injection.

d) Re-writing rootwrap into C (it's 600 python LOCs now).

e) Doing the command filtering at neutron-side, as a library and
live with sudo with simple filtering. (we kill the python/rootwrap
startup overhead).

3) I also find 10 minutes a long time to setup 192 networks/basic tenant
structures, I wonder if that time could be reduced by conversion
of system process calls into system library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at.)


Certainly going back and forth creating short-lived processes is at 
least anti-social and perhaps ever so slightly upsetting to the process 
scheduler.  Particularly "at scale."  The/a problem is though that the 
Linux networking folks have been somewhat reticent about creating 
libraries (at least any that they would end-up supporting) because they 
have a concern it will lock-in interfaces and reduce their freedom of 
movement.


happy benchmarking,

rick jones
the fastest procedure call is the one you never make



Best,
Miguel Ángel Ajo.


Appendix:

[1] Analyzing overhead:

[root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
[root@rhos4-neutron2 ~]# gcc test.c -o test
[root@rhos4-neutron2 ~]# time test  # to time process invocation on
this machine

real0m0.000s
user0m0.000s
sys0m0.000s


[root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'

real0m0.032s
user0m0.010s
sys0m0.019s


[r

[openstack-dev] [Nova] FFE request: clean-up-legacy-block-device-mapping

2014-03-05 Thread Nikola Đipanov
Hi folks,

This did not make it in fully.  Outstanding patches are:

https://review.openstack.org/#/c/71064/
https://review.openstack.org/#/c/71065/
https://review.openstack.org/#/c/71067/
https://review.openstack.org/#/c/71067/
https://review.openstack.org/#/c/71479/
https://review.openstack.org/#/c/72341/
https://review.openstack.org/#/c/72346/

Why accept it?

* It's low risk but needed refactoring that will improve code that has
been a source of occasional bugs.
* It is very low risk internal refactoring that uses code that has been
in tree for some time now (BDM objects).
* It has seen its fair share of reviews

In addition I'd like to ask for the following patch that is based on the
above also be considered:

https://review.openstack.org/#/c/72797/

It is part of periodic-tasks-to-db-slave, a very useful BP, and was
blocked waiting for my work to land.

Thanks for consideration.

Regards,

Nikola

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Do you think we should introduce the online-extend feature to cinder ?

2014-03-05 Thread Paul Marshall
Hey, 

Sorry I missed this thread a couple of days ago. I am working on a first-pass 
of this and hope to have something soon. So far I've mostly focused on getting 
OpenVZ and the HP LH SAN driver working for online extend. I've had trouble 
with libvirt+kvm+lvm so I'd love some help there if you have ideas about how to 
get them working. For example, in a devstack VM the only way I can get the 
iSCSI target to show the new size (after an lvextend) is to delete and recreate 
the target, something jgriffiths said he doesn't want to support ;-). I also 
haven't dived into any of those other limits you mentioned (nfs_used_ratio, 
etc.). Feel free to ping me on IRC (pdmars).

Paul


On Mar 3, 2014, at 8:50 PM, Zhangleiqiang  wrote:

> @john.griffith. Thanks for your information.
>  
> I have read the BP you mentioned ([1]) and have some rough thoughts about it.
>  
> As far as I know, the corresponding online-extend command for libvirt is 
> “blockresize”, and for Qemu, the implementation differs among disk formats.
>  
> For the regular qcow2/raw disk file, qemu will take charge of the 
> drain_all_io and truncate_disk actions, but for raw block device, qemu will 
> only check if the *Actual* size of the device is larger than current size.
>  
> I think the former needs more consideration: because the extend work is done 
> by libvirt, Nova may need to do this first and then notify Cinder. But if we 
> take the allocation limits of different cinder backend drivers (such as quota, 
> nfs_used_ratio, nfs_oversub_ratio, etc.) into account, the workflow will be 
> more complicated.
>  
> This scenario is not covered by Item 3 of the BP ([1]), as it cannot 
> simply “just work” or be notified by the compute node/libvirt after the volume 
> is extended.
>  
> These regular qcow2/raw disk files are normally stored in file-system-based 
> storage; maybe the Manila project is more appropriate for this scenario?
>  
>  
> Thanks.
>  
>  
> [1]: 
> https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
>  
> --
> zhangleiqiang
>  
> Best Regards
>  
> From: John Griffith [mailto:john.griff...@solidfire.com] 
> Sent: Tuesday, March 04, 2014 1:05 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Luohao (brian)
> Subject: Re: [openstack-dev] [Cinder] Do you think we should introduce the 
> online-extend feature to cinder ?
>  
>  
>  
> 
> On Mon, Mar 3, 2014 at 2:01 AM, Zhangleiqiang  
> wrote:
> Hi, stackers:
> 
> Libvirt/qemu have supported online-extend for multiple disk formats, 
> including qcow2, sparse, etc. But Cinder only support offline-extend volumes 
> currently.
> 
> Offline-extend volume will force the instance to be shutoff or the volume 
> to be detached. I think it will be useful if we introduce the online-extend 
> feature to cinder, especially for the file system based driver, e.g. nfs, 
> glusterfs, etc.
> 
> Is there any other suggestions?
> 
> Thanks.
> 
> 
> --
> zhangleiqiang
> 
> Best Regards
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>  
> Hi Zhangleiqiang,
>  
> So yes, there's a rough BP for this here: [1], and some of the folks from the 
> Trove team (pdmars on IRC) have actually started to dive into this.  Last I 
> checked with him there were some sticking points on the Nova side but we 
> should synch up with Paul, it's been a couple weeks since I've last caught up 
> with him.
>  
> Thanks,
> John
> [1]: 
> https://blueprints.launchpad.net/cinder/+spec/inuse-extend-volume-extension
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Miguel Angel Ajo Pelayo


- Original Message -
> Miguel Angel Ajo wrote:
> > [...]
> >The overhead comes from python startup time + rootwrap loading.
> > 
> >I suppose that rootwrap was designed for lower amount of system calls
> > (nova?).
> 
> Yes, it was not really designed to escalate rights on hundreds of
> separate shell commands in a row.
> 
> >And, I understand what rootwrap provides, a level of filtering that
> > sudo cannot offer. But it raises some question:
> > 
> > 1) It's actually someone using rootwrap in production?
> > 
> > 2) What alternatives can we think about to improve this situation.
> > 
> >0) already being done: coalescing system calls. But I'm unsure that's
> > enough. (if we coalesce 15 calls to 3 on this system we get:
> > 192*3*0.3/60 ~=3 minutes overhead on a 10min operation).
> > 
> >a) Rewriting rules into sudo (to the extent that it's possible), and
> > live with that.
> 
> We used to use sudo and a sudoers file. The rules were poorly written,
> and there is just so much you can check in a sudoers file. But the main
> issue was that the sudoers file lived in packaging
> (distribution-dependent), and was not maintained in sync with the code.
> Rootwrap let us to maintain the rules (filters) in sync with the code
> calling them.

Yes, from security & maintenance, it was a smart decision. I'm thinking
of automatically converting rootwrap rules to sudoers, but that's very 
limited, especially for the ip netns exec ... case.


> To work around perf issues, you still have the option of running with a
> wildcard sudoer file (and root_wrapper = sudo). That's about as safe as
> running with a badly-written or badly-maintained sudo rules anyway.

That's what I used for my "benchmark". I just wonder how possible it
is to get command injection into neutron, via API or DB.

> 
> > [...]
> >d) Re-writing rootwrap into C (it's 600 python LOCs now).
> 
> (d2) would be to explore running rootwrap under Pypy. Testing that is on
> my TODO list, but $OTHERSTUFF got into the way. Feel free to explore
> that option.

I tried it on my system just now; it takes more time to boot up. The PyPy JIT 
is awesome at runtime, but it seems that boot time is slower.

I also played a little with shedskin (py->c++ converter), but it 
doesn't support all the python libraries, dynamic typing, or parameter 
unpacking.

That could be another approach, writing a simplified rootwrap in python, and
have it automatically converted to C++.

f) haleyb on IRC is pointing me to another approach Carl Baldwin is
pushing https://review.openstack.org/#/c/67490/ towards command execution 
coalescing.


> 
> >e) Doing the command filtering at neutron-side, as a library and live
> > with sudo with simple filtering. (we kill the python/rootwrap startup
> > overhead).
> 
> That's as safe as running with a wildcard sudoers file (neutron user can
> escalate to root). Which may just be acceptable in /some/ scenarios.

I think it can be safer, (from the command injection point of view).

> 
> --
> Thierry Carrez (ttx)
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-03-05 Thread Tim Hinrichs
Hi Gokul, 

Thanks for working out how all these policy initiatives relate to each other. 
I'll be spending some time diving into the ones I hadn't heard about. 

I made some additional comments about Congress below. 

Tim 

- Original Message -

From: "Jay Lau"  
To: "OpenStack Development Mailing List (not for usage questions)" 
 
Sent: Wednesday, March 5, 2014 7:31:55 AM 
Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage compute/storage resource 

Hi Gokul, 




2014-03-05 3:30 GMT+08:00 Gokul Kandiraju < gokul4o...@gmail.com > : 





Dear All, 

We are working on a framework where we want to monitor the system and take 
certain actions when specific events or situations occur. Here are two examples 
of ‘different’ situations: 

Example 1: A VM’s-Owner and N/W’s-owner are different ==> this could mean a 
violation ==> we need to take some action 

Example 2: A simple policy such as (VM-migrate of all VMs on possible node 
failure) OR (a more complex Energy Policy that may involve optimization). 

Both these examples need monitoring and actions to be taken when certain events 
happen (or through polling). However, the first one falls into the Compliance 
domain with Boolean conditions getting evaluated while the second one may 
require a more richer set of expression allowing for sequences or algorithms. 





So far, based on this discussion, it seems that these are *separate* 
initiatives in the community. I understand the Congress project to be in 
the domain of Boolean conditions (used for Compliance, etc.), whereas the 
Runtime-Policies (Jay's proposal) are in the domain where policies can be 
expressed as rules and algorithms with higher-level goals. Is this 
understanding correct? 

Also, looking at all the mails, this is what I am reading: 

1. Congress -- Focused on Compliance [ is that correct? ] (Boolean constraints 
and logic) 



[Tim] Your characterization of boolean constraints for Congress is probably a 
good one. Congress won't be solving optimization/numeric problems any time soon 
if ever. However, I could imagine that down the road we could tell Congress 
here's the policy (optimization or Boolean) that we want to enforce, and it 
would carve off say the Load-balancing part of the policy and send it to the 
Runtime-Policies component; or it would carve off the placement policy and send 
it to the SolverScheduler. Not saying I know how to do this today, but that's 
always been part of the goal for Congress: to have a central point for admins 
to control the global policy being enforced throughout the datacenter/cloud. 

The other delta here is that the Congress policy language is general-purpose, 
so there's not a list of policy types that it will handle (Load Balancing, 
Placement, Energy). That generality comes with a price: that Congress must rely 
on other enforcement points, such as the ones below, to handle complicated 
policy enforcement problems. 







2. Runtime-Policies --  -- Focused on Runtime policies for Load 
Balancing, Availability, Energy, etc. (sequences of actions, rules, algorithms) 



[Jay] Yes, exactly. 







3. SolverScheduler -- Focused on Placement [ static or runtime ] and will be 
invoked by the (above) policy engines 




4. Gantt – Focused on (Holistic) Scheduling 



[Jay] For 3 and 4, I was always thinking Gantt is doing something for 
implementing SolverScheduler, not sure if run time policy can be included. 







5. Neat -- seems to be a special case of Runtime-Policies (policies based on 
Load) 



Would this be correct understanding? We need to understand this to contribute 
to the right project. :) 



Thanks! 

-Gokul 


On Fri, Feb 28, 2014 at 5:46 PM, Jay Lau < jay.lau@gmail.com > wrote: 



Hi Yathiraj and Tim, 

Really appreciate your comments here ;-) 

I will prepare some detailed slides or documents before summit and we can have 
a review then. It would be great if OpenStack can provide "DRS" features. 

Thanks, 

Jay 



2014-03-01 6:00 GMT+08:00 Tim Hinrichs < thinri...@vmware.com > : 



Hi Jay, 

I think the Solver Scheduler is a better fit for your needs than Congress 
because you know what kinds of constraints and enforcement you want. I'm not 
sure this topic deserves an entire design session--maybe just talking a bit at 
the summit would suffice (I *think* I'll be attending). 

Tim 

- Original Message - 
| From: "Jay Lau" < jay.lau@gmail.com > 
| To: "OpenStack Development Mailing List (not for usage questions)" < 
openstack-dev@lists.openstack.org > 
| Sent: Wednesday, February 26, 2014 6:30:54 PM 
| Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for 
OpenStack run time policy to manage 
| compute/storage resource 
| 
| 
| 
| 
| 
| 
| Hi Tim, 
| 
| I'm not sure if we can put resource monitor and adjust to 
| solver-scheduler (Gantt), but I have proposed this to Gantt design 
| [1], you can refer to [1] and search "jay-lau-513". 
| 
| IMH

[openstack-dev] FFE Request: Freescale SDN ML2 Mechanism Driver

2014-03-05 Thread trinath.soman...@freescale.com
Hi Mark,


We have the codebase and the 3rd Party CI setup in place for review.


Freescale CI is currently in non-voting status.


Kindly please consider the Blueprint and the codebase for FFE (icehouse 
release).


Blueprint: 
https://blueprints.launchpad.net/neutron/+spec/fsl-sdn-os-mech-driver and


Code base: https://review.openstack.org/#/c/78092/


Kindly please do the needful.


Thanking you


-

Trinath Somanchi

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Doug Hellmann
On Wed, Mar 5, 2014 at 10:43 AM, Alexei Kornienko <
alexei.kornie...@gmail.com> wrote:

>  Hello Darren,
>
> This option is removed since oslo.db will no longer manage engine objects
> on it's own. Since it will not store engines it cannot handle query
> dispatching.
>
> Every project that wan't to use slave_connection will have to implement
> this logic (creation of the slave engine and query dispatching) on it's own.
>

If we are going to have multiple projects using that feature, we will have
to restore it to oslo.db. Just because the primary API won't manage global
objects doesn't mean we can't have a secondary API that does.

Doug



>
> Regards,
>
>
> On 03/05/2014 05:18 PM, Darren Birkett wrote:
>
> Hi,
>
>  I'm wondering why in this commit:
>
>
> https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f
>
>  ...the slave_connection option was removed.  It seems like a useful
> option to have, even if a lot of projects weren't yet using it.
>
>  Darren
>
>
> ___
> OpenStack-dev mailing 
> listOpenStack-dev@lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-05 Thread Irena Berezovsky
Hi Robert,
I think what you mentioned can be achieved by calling into a specific MD method
from SriovAgentMechanismDriverBase.try_to_bind_segment_for_agent, maybe
something like 'get_vif_details', before it calls context.set_binding.
Would you mind continuing the discussion on the gerrit review 
https://review.openstack.org/#/c/74464/ ?
I think it will be easier to follow up on the comments and decisions there.
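
For illustration only, the kind of hook being discussed might look roughly
like the sketch below. This is not the code from the patch; the base class is
the one from the WIP above, and get_vif_details() and the other names are
assumptions made up for the example:

from neutron.plugins.ml2 import driver_api as api


class ExampleSriovMechDriver(SriovAgentMechanismDriverBase):
    # Sketch: compute vif_details per port at bind time, right before
    # context.set_binding() is called, instead of once at driver init.

    def try_to_bind_segment_for_agent(self, context, segment, agent):
        if not self.check_segment_for_agent(segment, agent):
            return False
        # Hypothetical hook so each MD can add per-port details (vlan, etc.)
        vif_details = self.get_vif_details(context, segment)
        context.set_binding(segment[api.ID], self.vif_type, vif_details)
        return True

    def get_vif_details(self, context, segment):
        # Derive the vlan from the bound segment; purely illustrative.
        return {'vlan': segment.get(api.SEGMENTATION_ID)}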

Thanks,
Irena

-Original Message-
From: Robert Li (baoli) [mailto:ba...@cisco.com] 
Sent: Wednesday, March 05, 2014 6:10 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions); Sandhya Dasu (sadasu); Robert Kukura; Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of 
ports

Hi Irena,

The main reason for me to do it that way is how vif_details should be set up in 
our case. Do you need vlan in vif_details? The behavior in the existing base 
classes is that vif_details is set at driver init time. In our case, it needs 
to be set up during bind_port().

thanks,
Robert


On 3/5/14 7:37 AM, "Irena Berezovsky"  wrote:

>Hi Robert, Sandhya,
>I have pushed the reference implementation 
>SriovAgentMechanismDriverBase as part the following WIP:
>https://review.openstack.org/#/c/74464/
>
>The code is in mech_agent.py, and very simple code for 
>mech_sriov_nic_switch.py.
>
>Please take a look and review.
>
>BR,
>Irena
>
>-Original Message-
>From: Irena Berezovsky [mailto:ire...@mellanox.com]
>Sent: Wednesday, March 05, 2014 9:04 AM
>To: Robert Li (baoli); Sandhya Dasu (sadasu); OpenStack Development 
>Mailing List (not for usage questions); Robert Kukura; Brian Bowen
>(brbowen)
>Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV 
>binding of ports
>
>Hi Robert,
>Seems to me that many code lines are duplicated following your proposal.
>For agent based MDs, I would prefer to inherit from 
>SimpleAgentMechanismDriverBase and add there verify method for 
>supported_pci_vendor_info. Specific MD will pass the list of supported 
>pci_vendor_info list. The  'try_to_bind_segment_for_agent' method will 
>call 'supported_pci_vendor_info', and if supported continue with 
>binding flow.
>Maybe instead of a decorator method, it should be just an utility method?
>I think that the check for supported vnic_type and pci_vendor info 
>support, should be done in order to see if MD should bind the port. If 
>the answer is Yes, no more checks are required.
>
>Coming back to the question I asked earlier, for non-agent MD, how 
>would you deal with updates after port is bound, like 'admin_state_up' changes?
>I'll try to push some reference code later today.
>
>BR,
>Irena
>
>-Original Message-
>From: Robert Li (baoli) [mailto:ba...@cisco.com]
>Sent: Wednesday, March 05, 2014 4:46 AM
>To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for 
>usage questions); Irena Berezovsky; Robert Kukura; Brian Bowen 
>(brbowen)
>Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV 
>binding of ports
>
>Hi Sandhya,
>
>I agree with you except that I think that the class should inherit from 
>MechanismDriver. I took a crack at it, and here is what I got:
>
># Copyright (c) 2014 OpenStack Foundation # All Rights Reserved.
>#
>#Licensed under the Apache License, Version 2.0 (the "License"); you
>may
>#not use this file except in compliance with the License. You may
>obtain
>#a copy of the License at
>#
># http://www.apache.org/licenses/LICENSE-2.0
>#
>#Unless required by applicable law or agreed to in writing, software
>#distributed under the License is distributed on an "AS IS" BASIS,
>WITHOUT
>#WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See
>the
>#License for the specific language governing permissions and
>limitations
>#under the License.
>
>from abc import ABCMeta, abstractmethod
>
>import functools
>import six
>
>from neutron.extensions import portbindings from 
>neutron.openstack.common import log from neutron.plugins.ml2 import 
>driver_api as api
>
>LOG = log.getLogger(__name__)
>
>
>DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
>portbindings.VNIC_MACVTAP]
>
>def check_vnic_type_and_vendor_info(f):
>@functools.wraps(f)
>def wrapper(self, context):
>vnic_type = context.current.get(portbindings.VNIC_TYPE,
>portbindings.VNIC_NORMAL)
>if vnic_type not in self.supported_vnic_types:
>LOG.debug(_("%(func_name)s: skipped due to unsupported "
>"vnic_type: %(vnic_type)s"),
>  {'func_name': f.func_name, 'vnic_type': vnic_type})
>return
>
>if self.supported_pci_vendor_info:
>profile = context.current.get(portbindings.PROFILE, {})
>if not profile:
>LOG.debug(_("%s: Missing profile in port binding"),
>

Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-05 Thread Robert Li (baoli)
Hi Sean,

Sorry for your frustration. I actually provided the comments about the two
LLAs in the review (see patch set 1). If the intent for these changes is
to allow RAs from legitimate sources only, I'm afraid that that goal won't
be reached with them. I may be completely wrong, but so far I haven't been
convinced yet. 
 

thanks,
Robert



On 3/5/14 10:21 AM, "Collins, Sean" 
wrote:

>Hi Robert,
>
>I'm reaching out to you off-list for this:
>
>On Wed, Mar 05, 2014 at 09:48:46AM EST, Robert Li (baoli) wrote:
>> As a result of this change, it will end up having two LLA addresses in
>>the
>> router's qr interface. It would have made more sense if the LLA will be
>> replacing the qr interface's automatically generated LLA address.
>
>Was this not what you intended, when you -1'd the security group patch
>because you were not able to create gateways for Neutron subnets with a
>LLA address? I am a little frustrated because we scrambled to create a
>patch so you would remove your -1, then now your suggesting we abandon
>it?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Ben Nemec
This has actually come up before, too: 
http://lists.openstack.org/pipermail/openstack-dev/2013-July/012539.html


-Ben

On 2014-03-05 08:42, Miguel Angel Ajo wrote:

Hello,

Recently, I found a serious issue with network-node startup time:
neutron-rootwrap eats a lot of cpu cycles, much more than the
processes it's wrapping itself.

On a database with 1 public network, 192 private networks, 192
routers, and 192 nano VMs, with OVS plugin:


Network node setup time (rootwrap): 24 minutes
Network node setup time (sudo): 10 minutes


   That's the time since you reboot a network node, until all namespaces
and services are restored.


   If you see appendix "1", this extra 14 min overhead matches
the fact that rootwrap needs 0.3 s to start and launch a system
command (once filtered).

14 minutes = 840 s.
(840 s / 192 resources) / 0.3 s ~= 15 operations per
resource (qdhcp+qrouter) (iptables, ovs port creation & tagging,
starting child processes, etc.)

   The overhead comes from python startup time + rootwrap loading.

   I suppose that rootwrap was designed for a lower number of system
calls (nova?).

   And I understand what rootwrap provides, a level of filtering that
sudo cannot offer. But it raises some questions:

1) Is anyone actually using rootwrap in production?

2) What alternatives can we think about to improve this situation?

   0) already being done: coalescing system calls. But I'm unsure
that's enough. (if we coalesce 15 calls to 3 on this system we get:
192*3*0.3/60 ~= 3 minutes overhead on a 10min operation).

   a) Rewriting rules into sudo (to the extent that it's possible),
and live with that.
   b) How secure is neutron about command injection to that point? How
much is user input filtered on the API calls?
   c) Even if "b" is OK, I suppose that if the DB gets compromised,
that could lead to command injection.

   d) Re-writing rootwrap into C (it's 600 python LOCs now).

   e) Doing the command filtering at neutron-side, as a library and
live with sudo with simple filtering. (we kill the python/rootwrap
startup overhead).

3) I also find 10 minutes a long time to set up 192 networks/basic
tenant structures; I wonder if that time could be reduced by converting
system process calls into system library calls (I know we don't have
libraries for iproute, iptables?, and many other things... but it's a
problem that's probably worth looking at.)

Best,
Miguel Ángel Ajo.


Appendix:

[1] Analyzing overhead:

[root@rhos4-neutron2 ~]# echo "int main() { return 0; }" > test.c
[root@rhos4-neutron2 ~]# gcc test.c -o test
[root@rhos4-neutron2 ~]# time test  # to time process invocation
on this machine

real    0m0.000s
user    0m0.000s
sys     0m0.000s


[root@rhos4-neutron2 ~]# time sudo bash -c 'exit 0'

real    0m0.032s
user    0m0.010s
sys     0m0.019s


[root@rhos4-neutron2 ~]# time python -c'import sys;sys.exit(0)'

real    0m0.057s
user    0m0.016s
sys     0m0.011s

[root@rhos4-neutron2 ~]# time neutron-rootwrap --help
/usr/bin/neutron-rootwrap: No command specified

real    0m0.309s
user    0m0.128s
sys     0m0.037s
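
For what it's worth, the back-of-the-envelope numbers above can be reproduced
with a trivial script; the per-call cost and call counts are assumptions taken
from the measurements in this thread:

# Rough estimate of the rootwrap startup overhead described above.
resources = 192            # qdhcp + qrouter resources to restore
calls_per_resource = 15    # wrapped system commands per resource
rootwrap_cost = 0.3        # seconds of python + rootwrap startup per call

overhead = resources * calls_per_resource * rootwrap_cost
print("total overhead: %.1f minutes" % (overhead / 60.0))
# -> 14.4 minutes, matching the 24 min vs 10 min gap observed above

coalesced = resources * 3 * rootwrap_cost   # if coalesced to 3 calls
print("coalesced overhead: %.1f minutes" % (coalesced / 60.0))
# -> ~2.9 minutes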

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-03-05 Thread Joe Gordon
On Wed, Mar 5, 2014 at 5:53 AM, Julien Danjou  wrote:
> On Tue, Mar 04 2014, Joe Gordon wrote:
>
>> So since tools/config/check_uptodate.sh is oslo code, I assumed this
>> issue falls into the domain of oslo-incubator.
>>
>> Until this gets resolved nova is considering
>> https://review.openstack.org/#/c/78028/
>
> Removing tools/config/oslo.config.generator.rc would have a been a
> better trade-off I think.
>

Perhaps, although the previous consensus in this thread seemed to be
that we generally don't want to include auto-generated files like the
sample config in git. Removing the check altogether also solves the case
where a patch in trunk adds a new config option and causes all
subsequent patches in that repo to fail.

> --
> Julien Danjou
> // Free Software hacker
> // http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV binding of ports

2014-03-05 Thread Robert Li (baoli)
Hi Irena,

The main reason for me to do it that way is how vif_details should be
set up in our case. Do you need vlan in vif_details? The behavior in the
existing base classes is that vif_details is set at driver init time.
In our case, it needs to be set up during bind_port().

thanks,
Robert


On 3/5/14 7:37 AM, "Irena Berezovsky"  wrote:

>Hi Robert, Sandhya,
>I have pushed the reference implementation SriovAgentMechanismDriverBase
>as part the following WIP:
>https://review.openstack.org/#/c/74464/
>
>The code is in mech_agent.py, and very simple code for
>mech_sriov_nic_switch.py.
>
>Please take a look and review.
>
>BR,
>Irena
>
>-Original Message-
>From: Irena Berezovsky [mailto:ire...@mellanox.com]
>Sent: Wednesday, March 05, 2014 9:04 AM
>To: Robert Li (baoli); Sandhya Dasu (sadasu); OpenStack Development
>Mailing List (not for usage questions); Robert Kukura; Brian Bowen
>(brbowen)
>Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
>binding of ports
>
>Hi Robert,
>Seems to me that many code lines are duplicated following your proposal.
>For agent based MDs, I would prefer to inherit from
>SimpleAgentMechanismDriverBase and add there verify method for
>supported_pci_vendor_info. Specific MD will pass the list of supported
>pci_vendor_info list. The  'try_to_bind_segment_for_agent' method will
>call 'supported_pci_vendor_info', and if supported continue with binding
>flow. 
>Maybe instead of a decorator method, it should be just an utility method?
>I think that the check for supported vnic_type and pci_vendor info
>support, should be done in order to see if MD should bind the port. If
>the answer is Yes, no more checks are required.
>
>Coming back to the question I asked earlier, for non-agent MD, how would
>you deal with updates after port is bound, like 'admin_state_up' changes?
>I'll try to push some reference code later today.
>
>BR,
>Irena
>
>-Original Message-
>From: Robert Li (baoli) [mailto:ba...@cisco.com]
>Sent: Wednesday, March 05, 2014 4:46 AM
>To: Sandhya Dasu (sadasu); OpenStack Development Mailing List (not for
>usage questions); Irena Berezovsky; Robert Kukura; Brian Bowen (brbowen)
>Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV
>binding of ports
>
>Hi Sandhya,
>
>I agree with you except that I think that the class should inherit from
>MechanismDriver. I took a crack at it, and here is what I got:
>
># Copyright (c) 2014 OpenStack Foundation # All Rights Reserved.
>#
>#Licensed under the Apache License, Version 2.0 (the "License"); you
>may
>#not use this file except in compliance with the License. You may
>obtain
>#a copy of the License at
>#
># http://www.apache.org/licenses/LICENSE-2.0
>#
>#Unless required by applicable law or agreed to in writing, software
>#distributed under the License is distributed on an "AS IS" BASIS,
>WITHOUT
>#WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See
>the
>#License for the specific language governing permissions and
>limitations
>#under the License.
>
>from abc import ABCMeta, abstractmethod
>
>import functools
>import six
>
>from neutron.extensions import portbindings from neutron.openstack.common
>import log from neutron.plugins.ml2 import driver_api as api
>
>LOG = log.getLogger(__name__)
>
>
>DEFAULT_VNIC_TYPES_SUPPORTED = [portbindings.VNIC_DIRECT,
>portbindings.VNIC_MACVTAP]
>
>def check_vnic_type_and_vendor_info(f):
>@functools.wraps(f)
>def wrapper(self, context):
>vnic_type = context.current.get(portbindings.VNIC_TYPE,
>portbindings.VNIC_NORMAL)
>if vnic_type not in self.supported_vnic_types:
>LOG.debug(_("%(func_name)s: skipped due to unsupported "
>"vnic_type: %(vnic_type)s"),
>  {'func_name': f.func_name, 'vnic_type': vnic_type})
>return
>
>if self.supported_pci_vendor_info:
>profile = context.current.get(portbindings.PROFILE, {})
>if not profile:
>LOG.debug(_("%s: Missing profile in port binding"),
>  f.func_name)
>return
>pci_vendor_info = profile.get('pci_vendor_info')
>if not pci_vendor_info:
>LOG.debug(_("%s: Missing pci vendor info in profile"),
>  f.func_name)
>return
>if pci_vendor_info not in self.supported_pci_vendor_info:
>LOG.debug(_("%(func_name)s: unsupported pci vendor "
>"info: %(info)s"),
>  {'func_name': f.func_name, 'info':
>pci_vendor_info})
>return
>f(self, context)
>return wrapper
>
>@six.add_metaclass(ABCMeta)
>class SriovMechanismDriverBase(api.MechanismDriver):
>"""Base class for drivers that supports SR-IOV
>
>The SriovMechanismDriverBase provides

Re: [openstack-dev] [nova][cinder] non-persistent storage(after stopping VM, data will be rollback automatically), do you think we shoud introduce this feature?

2014-03-05 Thread Joe Gordon
On Tue, Mar 4, 2014 at 6:21 PM, Qin Zhao  wrote:
> Hi Joe, my meaning is that cloud users may not hope to create new instances
> or new images, because those actions may require additional approval and
> additional charging. Or, due to instance/image quota limits, they can not do
> that. Anyway, from user's perspective, saving and reverting the existing
> instance will be preferred sometimes. Creating a new instance will be
> another story.
>

Are you saying some users may not be able to create an instance at
all? If so, why not just control that via quotas?

Assuming the user has the rights and quota to create one
instance and one snapshot, your proposed idea is only slightly
different than the current workflow.

Currently one would:
1) Create instance
2) Snapshot instance
3) Use instance / break instance
4) delete instance
5) boot new instance from snapshot
6) goto step 3

From what I gather you are saying that instead of 4/5 you want the
user to be able to just reboot the instance. I don't think such a
subtle change in behavior is worth a whole new API extension.
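
For reference, steps 1-5 above map onto a handful of client calls; a minimal
sketch with python-novaclient, where the credentials, image and flavor values
are placeholders:

# Sketch of the existing snapshot/restore workflow; placeholders throughout.
from novaclient.v1_1 import client

nova = client.Client('user', 'password', 'tenant',
                     'http://keystone:5000/v2.0')

# 1) create instance, 2) snapshot it
server = nova.servers.create(name='sandbox', image='<image-id>',
                             flavor='<flavor-id>')
snapshot_id = nova.servers.create_image(server, 'sandbox-baseline')

# 3) use / break the instance ...

# 4) delete it and 5) boot a fresh copy from the snapshot
nova.servers.delete(server)
restored = nova.servers.create(name='sandbox', image=snapshot_id,
                               flavor='<flavor-id>')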

>
> On Wed, Mar 5, 2014 at 3:20 AM, Joe Gordon  wrote:
>>
>> On Tue, Mar 4, 2014 at 1:06 AM, Qin Zhao  wrote:
>> > I think the current snapshot implementation can be a solution sometimes,
>> > but
>> > it is NOT exact same as user's expectation. For example, a new blueprint
>> > is
>> > created last week,
>> > https://blueprints.launchpad.net/nova/+spec/driver-specific-snapshot,
>> > which
>> > seems a little similar with this discussion. I feel the user is
>> > requesting
>> > Nova to create in-place snapshot (not a new image), in order to revert
>> > the
>> > instance to a certain state. This capability should be very useful when
>> > testing new software or system settings. It seems a short-term temporary
>> > snapshot associated with a running instance for Nova. Creating a new
>> > instance is not that convenient, and may be not feasible for the user,
>> > especially if he or she is using public cloud.
>> >
>>
>> Why isn't it easy to create a new instance from a snapshot?
>>
>> >
>> > On Tue, Mar 4, 2014 at 1:32 PM, Nandavar, Divakar Padiyar
>> >  wrote:
>> >>
>> >> >>> Why reboot an instance? What is wrong with deleting it and create a
>> >> >>> new one?
>> >>
>> >> You generally use non-persistent disk mode when you are testing new
>> >> software or experimenting with settings.   If something goes wrong just
>> >> reboot and you are back to clean state and start over again.I feel
>> >> it's
>> >> convenient to handle this with just a reboot rather than recreating the
>> >> instance.
>> >>
>> >> Thanks,
>> >> Divakar
>> >>
>> >> -Original Message-
>> >> From: Joe Gordon [mailto:joe.gord...@gmail.com]
>> >> Sent: Tuesday, March 04, 2014 10:41 AM
>> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> Subject: Re: [openstack-dev] [nova][cinder] non-persistent
>> >> storage(after
>> >> stopping VM, data will be rollback automatically), do you think we
>> >> shoud
>> >> introduce this feature?
>> >> Importance: High
>> >>
>> >> On Mon, Mar 3, 2014 at 8:13 PM, Zhangleiqiang
>> >> 
>> >> wrote:
>> >> >>
>> >> >> This sounds like ephemeral storage plus snapshots.  You build a base
>> >> >> image, snapshot it then boot from the snapshot.
>> >> >
>> >> >
>> >> > Non-persistent storage/disk is useful for sandbox-like environment,
>> >> > and
>> >> > this feature has already exists in VMWare ESX from version 4.1. The
>> >> > implementation of ESX is the same as what you said, boot from
>> >> > snapshot of
>> >> > the disk/volume, but it will also *automatically* delete the
>> >> > transient
>> >> > snapshot after the instance reboots or shutdowns. I think the whole
>> >> > procedure may be controlled by OpenStack other than user's manual
>> >> > operations.
>> >>
>> >> Why reboot an instance? What is wrong with deleting it and create a new
>> >> one?
>> >>
>> >> >
>> >> > As far as I know, libvirt already defines the corresponding
>> >> > 
>> >> > element in domain xml for non-persistent disk ( [1] ), but it cannot
>> >> > specify
>> >> > the location of the transient snapshot. Although qemu-kvm has
>> >> > provided
>> >> > support for this feature by the "-snapshot" command argument, which
>> >> > will
>> >> > create the transient snapshot under /tmp directory, the qemu driver
>> >> > of
>> >> > libvirt don't support  element currently.
>> >> >
>> >> > I think the steps of creating and deleting transient snapshot may be
>> >> > better to done by Nova/Cinder other than waiting for the 
>> >> > support
>> >> > added to libvirt, as the location of transient snapshot should
>> >> > specified by
>> >> > Nova.
>> >> >
>> >> >
>> >> > [1] http://libvirt.org/formatdomain.html#elementsDisks
>> >> > --
>> >> > zhangleiqiang
>> >> >
>> >> > Best Regards
>> >> >
>> >> >
>> >> >> -Original Message-
>> >> >> From: Joe Gordon [mailto:joe.gord...@gmail.com]
>> >> >> Sent: Tuesday, March 04, 2014 11:26 AM
>>

Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal for OpenStack run time policy to manage compute/storage resource

2014-03-05 Thread Jay Lau
Hi Gokul,




2014-03-05 3:30 GMT+08:00 Gokul Kandiraju :

>
>
> Dear All,
>
>
>
> We are working on a framework where we want to monitor the system and take
> certain actions when specific events or situations occur. Here are two
> examples of 'different' situations:
>
>
>
>Example 1: A VM's-Owner and N/W's-owner are different ==> this could
> mean a violation ==> we need to take some action
>
>Example 2: A simple policy such as (VM-migrate of all VMs on possible
> node failure) OR (a more complex Energy Policy that may involve
> optimization).
>
>
>
> Both these examples need monitoring and actions to be taken when certain
> events happen (or through polling). However, the first one falls into the
> Compliance domain with Boolean conditions getting evaluated while the
> second one may require a more richer set of expression allowing for
> sequences or algorithms.
>
>  So far, based on this discussion, it seems that these are *separate*
> initiatives in the community. I am understanding the Congress project to be
> in the domain of Boolean conditions (used for Compliance, etc.) where as
> the Run-time-policies (Jay's proposal) where policies can be expressed as
> rules, algorithms with higher-level goals. Is this understanding correct?
>
> Also, looking at all the mails, this is what I am reading:
>
>
>
>  1. Congress -- Focused on Compliance [ is that correct? ] (Boolean
> constraints and logic)
>
>
>
>  2. Runtime-Policies --  -- Focused on Runtime policies
> for Load Balancing, Availability, Energy, etc. (sequences of actions,
> rules, algorithms)
>
[Jay] Yes, exactly.

>
>
>  3. SolverScheduler -- Focused on Placement [ static or runtime ] and
> will be invoked by the (above) policy engines
>
>
>
>  4. Gantt - Focused on (Holistic) Scheduling
>
  [Jay] For 3 and 4, I was always thinking Gantt is doing something for
implementing SolverScheduler, not sure if run time policy can be included.

>
>
>  5. Neat -- seems to be a special case of Runtime-Policies  (policies
> based on Load)
>
>
>
> Would this be correct understanding?  We need to understand this to
> contribute to the right project. :)
>
>
>
> Thanks!
>
> -Gokul
>
>
>
> On Fri, Feb 28, 2014 at 5:46 PM, Jay Lau  wrote:
>
>> Hi Yathiraj and Tim,
>>
>> Really appreciate your comments here ;-)
>>
>> I will prepare some detailed slides or documents before summit and we can
>> have a review then. It would be great if OpenStack can provide "DRS"
>> features.
>>
>> Thanks,
>>
>> Jay
>>
>>
>>
>> 2014-03-01 6:00 GMT+08:00 Tim Hinrichs :
>>
>> Hi Jay,
>>>
>>> I think the Solver Scheduler is a better fit for your needs than
>>> Congress because you know what kinds of constraints and enforcement you
>>> want.  I'm not sure this topic deserves an entire design session--maybe
>>> just talking a bit at the summit would suffice (I *think* I'll be
>>> attending).
>>>
>>> Tim
>>>
>>> - Original Message -
>>> | From: "Jay Lau" 
>>> | To: "OpenStack Development Mailing List (not for usage questions)" <
>>> openstack-dev@lists.openstack.org>
>>> | Sent: Wednesday, February 26, 2014 6:30:54 PM
>>> | Subject: Re: [openstack-dev] [OpenStack][Runtime Policy] A proposal
>>> for OpenStack run time policy to manage
>>> | compute/storage resource
>>> |
>>> |
>>> |
>>> |
>>> |
>>> |
>>> | Hi Tim,
>>> |
>>> | I'm not sure if we can put resource monitor and adjust to
>>> | solver-scheduler (Gantt), but I have proposed this to Gantt design
>>> | [1], you can refer to [1] and search "jay-lau-513".
>>> |
>>> | IMHO, Congress does monitoring and also take actions, but the actions
>>> | seems mainly for adjusting single VM network or storage. It did not
>>> | consider migrating VM according to hypervisor load.
>>> |
>>> | Not sure if this topic deserved to be a design session for the coming
>>> | summit, but I will try to propose.
>>> |
>>> |
>>> |
>>> |
>>> | [1] https://etherpad.openstack.org/p/icehouse-external-scheduler
>>> |
>>> |
>>> |
>>> | Thanks,
>>> |
>>> |
>>> | Jay
>>> |
>>> |
>>> |
>>> | 2014-02-27 1:48 GMT+08:00 Tim Hinrichs < thinri...@vmware.com > :
>>> |
>>> |
>>> | Hi Jay and Sylvain,
>>> |
>>> | The solver-scheduler sounds like a good fit to me as well. It clearly
>>> | provisions resources in accordance with policy. Does it monitor
>>> | those resources and adjust them if the system falls out of
>>> | compliance with the policy?
>>> |
>>> | I mentioned Congress for two reasons. (i) It does monitoring. (ii)
>>> | There was mention of compute, networking, and storage, and I
>>> | couldn't tell if the idea was for policy that spans OS components or
>>> | not. Congress was designed for policies spanning OS components.
>>> |
>>> |
>>> | Tim
>>> |
>>> | - Original Message -
>>> |
>>> | | From: "Jay Lau" < jay.lau@gmail.com >
>>> | | To: "OpenStack Development Mailing List (not for usage questions)"
>>> | | < openstack-dev@lists.openstack.org >
>>> |
>>> |
>>> | | Sent: Tuesday, February 25, 2014 10:13:14 PM
>>> | | Subje

Re: [openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Alexei Kornienko

Hello Darren,

This option was removed since oslo.db will no longer manage engine 
objects on its own. Since it will not store engines, it cannot handle 
query dispatching.


Every project that wants to use slave_connection will have to implement 
this logic (creation of the slave engine and query dispatching) on its own.
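
For projects that do need it, a minimal sketch of what that per-project logic
amounts to with plain SQLAlchemy (the connection URLs are placeholders):

# Sketch of per-project slave_connection handling; URLs are placeholders.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

master = create_engine('mysql://user:pass@master-db/nova')
slave = create_engine('mysql://user:pass@slave-db/nova')

MasterSession = sessionmaker(bind=master)
SlaveSession = sessionmaker(bind=slave)


def get_session(use_slave=False):
    # Dispatch read-only work to the slave engine when asked to.
    return SlaveSession() if use_slave else MasterSession()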


Regards,

On 03/05/2014 05:18 PM, Darren Birkett wrote:

Hi,

I'm wondering why in this commit:

https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f

...the slave_connection option was removed.  It seems like a useful 
option to have, even if a lot of projects weren't yet using it.


Darren


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE Request: Image Cache Aging

2014-03-05 Thread Tracy Jones
Hi - Please consider the image cache aging BP for FFE 
(https://review.openstack.org/#/c/56416/)

This is the last of several patches (already merged) that implement image cache 
cleanup for the vmware driver.  This patch solves a significant customer pain 
point as it removes unused images from their datastore.  Without this patch 
their datastore can become unnecessarily full.  In addition to the customer 
benefit from this patch it

1.  has a turn-off switch 
2.  is fully contained within the VMware driver
3.  has gone through functional testing with our internal QA team 

ndipanov has been good enough to say he will review the patch, so we would ask 
for one additional core sponsor for this FFE.

Thanks

Tracy
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-05 Thread Thierry Carrez
Anne Gentle wrote:
> It feels like it should be part of a scheduler or reservation program
> but we don't have one today. We also don't have a workflow, planning, or
> capacity management program, all of which these use cases could fall under. 
> 
> (I should know this but) What are the options when a program doesn't
> exist already? Am I actually struggling with a scope expansion beyond
> infrastructure definitions? I'd like some more discussion by next week's
> TC meeting.

When a project files for incubation and covers a new scope, they also
file for a new program to go with it.

https://wiki.openstack.org/wiki/Governance/NewProjects

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE Request: ISO support for the VMware driver

2014-03-05 Thread Gary Kotton
Hi,
Unfortunately we did not get the ISO support approved by the deadline. If 
possible, can we please get an FFE?

The feature is complete and has been tested extensively internally. The 
feature is very low risk and has huge value for users. In short, a user is able 
to upload an ISO to glance and then boot from that ISO.

BP: https://blueprints.launchpad.net/openstack/?searchtext=vmware-iso-boot
Code: https://review.openstack.org/#/c/63084/ and 
https://review.openstack.org/#/c/77965/
Sponsors: John Garbutt and Nikola Dipanov

One of the things that we are planning on improving in Juno is the way that the 
Vmops code is arranged and organized. We will soon be posting a wiki for ideas 
to be discussed. That will enable us to make additions like this a lot simpler 
in the future. But sadly that is not part of the scope at the moment.

Thanks in advance
Gary
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Climate Incubation Application

2014-03-05 Thread Thierry Carrez
Dina Belova wrote:
> I think your idea is really interesting. I mean, that thought “Gantt -
> where to schedule, Climate - when to schedule” is quite understandable
> and good looking.

Would Climate also be usable to support functionality like Spot
Instances? "Schedule when the spot price falls under X"?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSOC][Gantt]Cross-services Scheduler

2014-03-05 Thread Davanum Srinivas
Hi Fang,

Agree with Russell. Also, please update the wiki with your information
https://wiki.openstack.org/wiki/GSoC2014 and with information about
the mentor/ideas (if you have not already done so). You
can reach out to folks on the #openstack-gsoc and #openstack-nova IRC
channels as well.

thanks,
dims

On Wed, Mar 5, 2014 at 10:12 AM, Russell Bryant  wrote:
> On 03/05/2014 09:59 AM, 方祯 wrote:
>> Hi:
>> I'm Fang Zhen, an M.S student from China. My current research work is on
>> scheduling policy on cloud computing. I have been following the
>> openstack for about 2 years.I always thought of picking a blueprint and
>> implementing it with the community's guidance.Luckily, open-stack
>> participates GSOC this year and is is impossible for me to implement
>> "Cross-services Scheduler" of Openstack-Gantt project.And also, I'm sure
>> that I can continue to help to  openstack after GSOC.
>
> Thanks for your interest in OpenStack!
>
> I think the project as you've described it is far too large to be able
> to implement in one GSoC term.  If you're interested in scheduling,
> perhaps we can come up with a specific enhancement to Nova's current
> scheduler that would be more achievable in the time allotted.  I want to
> make sure we're setting you up for success, and I think helping scope
> the project is a big early part of that.
>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Flavors per datastore

2014-03-05 Thread Denis Makogon
Hey, Daniel.

A datastore has a set of flavors that are allowed to be used when
provisioning an instance with the given datastore.
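
Put differently, the association hangs a list of permitted flavors off each
datastore (version); purely illustratively, with made-up names:

# Illustrative only: datastore versions and flavor names are made up.
datastore_flavors = {
    'mysql-5.6': ['m1.small', 'm1.medium', 'm1.large'],
    'cassandra-2.0': ['m1.large', 'm1.xlarge'],
}


def allowed_flavors(datastore_version):
    # Flavors permitted when provisioning an instance of this datastore.
    return datastore_flavors.get(datastore_version, [])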


Best regards,
Denis Makogon


On Wed, Mar 5, 2014 at 5:28 PM, Daniel Salinas  wrote:

> After reading this I feel it requires me to ask the question:
>
> Do flavors have datastores or do datastores have flavors?
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Flavors per datastore

2014-03-05 Thread Daniel Salinas
After reading this I feel it requires me to ask the question:

Do flavors have datastores or do datastores have flavors?

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo-incubator] removal of slave_connection from db.sqlalchemy.session

2014-03-05 Thread Darren Birkett
Hi,

I'm wondering why in this commit:

https://github.com/openstack/oslo-incubator/commit/630d3959b9d001ca18bd2ed1cf757f2eb44a336f

...the slave_connection option was removed.  It seems like a useful option
to have, even if a lot of projects weren't yet using it.

Darren
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Blueprint cinder-rbd-driver-qos

2014-03-05 Thread git harry
Hi,

https://blueprints.launchpad.net/cinder/+spec/cinder-rbd-driver-qos

I've been looking at this blueprint with a view to contributing to it, assuming 
I can take it. I am unclear as to whether or not it is still valid. I can see 
that it was registered around a year ago and it appears the functionality is 
essentially already supported by using multiple backends.

Looking at the existing drivers that have qos support, it appears IOPS etc. are 
available for control/customisation. As I understand it, Ceph has no qos-type 
control built in, and creating pools using different hardware is as granular as 
it gets. The two don't quite seem comparable to me, so I was hoping to get some 
feedback, as to whether or not this is still useful/appropriate, before 
attempting to do any work.

Thanks,
git-harry 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

2014-03-05 Thread Thierry Carrez
Miguel Angel Ajo wrote:
> [...]
>The overhead comes from python startup time + rootwrap loading.
> 
>I suppose that rootwrap was designed for lower amount of system calls
> (nova?).

Yes, it was not really designed to escalate rights on hundreds of
separate shell commands in a row.

>And, I understand what rootwrap provides, a level of filtering that
> sudo cannot offer. But it raises some question:
> 
> 1) It's actually someone using rootwrap in production?
> 
> 2) What alternatives can we think about to improve this situation.
> 
>0) already being done: coalescing system calls. But I'm unsure that's
> enough. (if we coalesce 15 calls to 3 on this system we get:
> 192*3*0.3/60 ~=3 minutes overhead on a 10min operation).
> 
>a) Rewriting rules into sudo (to the extent that it's possible), and
> live with that.

We used to use sudo and a sudoers file. The rules were poorly written,
and there is just so much you can check in a sudoers file. But the main
issue was that the sudoers file lived in packaging
(distribution-dependent), and was not maintained in sync with the code.
Rootwrap lets us maintain the rules (filters) in sync with the code
calling them.

To work around perf issues, you still have the option of running with a
wildcard sudoers file (and root_wrapper = sudo). That's about as safe as
running with badly-written or badly-maintained sudo rules anyway.

> [...]
>d) Re-writing rootwrap into C (it's 600 python LOCs now).

(d2) would be to explore running rootwrap under Pypy. Testing that is on
my TODO list, but $OTHERSTUFF got in the way. Feel free to explore
that option.

>e) Doing the command filtering at neutron-side, as a library and live
> with sudo with simple filtering. (we kill the python/rootwrap startup
> overhead).

That's as safe as running with a wildcard sudoers file (neutron user can
escalate to root). Which may just be acceptable in /some/ scenarios.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-05 Thread Collins, Sean
Hi Robert,

I'm reaching out to you off-list for this:

On Wed, Mar 05, 2014 at 09:48:46AM EST, Robert Li (baoli) wrote:
> As a result of this change, it will end up having two LLA addresses in the
> router's qr interface. It would have made more sense if the LLA will be
> replacing the qr interface's automatically generated LLA address.

Was this not what you intended, when you -1'd the security group patch
because you were not able to create gateways for Neutron subnets with a
LLA address? I am a little frustrated because we scrambled to create a
patch so you would remove your -1, and now you're suggesting we abandon
it?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSOC][Gantt]Cross-services Scheduler

2014-03-05 Thread Sylvain Bauza

Hi Fang,

The Gantt subteam holds weekly meetings every Tuesday at 1500 UTC in the 
#openstack-meeting IRC channel, where we discuss the steps for 
forklifting the Nova scheduler into a separate service.
As we are now in the FeatureFreeze period, there are no patches targeted to 
be merged before the next (Juno) summit, but there is opportunity for 
discussion anyway.


Thanks,
-Sylvain

On 05/03/2014 15:59, 方祯 wrote:

Hi:
I'm Fang Zhen, an M.S student from China. My current research work is 
on scheduling policy on cloud computing. I have been following the 
openstack for about 2 years.I always thought of picking a blueprint 
and implementing it with the community's guidance.Luckily, open-stack 
participates GSOC this year and is is impossible for me to implement 
"Cross-services Scheduler" of Openstack-Gantt project.And also, I'm 
sure that I can continue to help to  openstack after GSOC.


About me:
I'm a M.S student from XiDian University. I'm good at c and python 
programming and I'm familiar with git, python development.I have 
participated in developing several web server with python web 
framework and implentmented some python scripts to use openstack.I 
have read the docs and papers about the the project and the guide.md 
 of openstack development. But as a newbie to 
OpenStack dev, it would be very great if somebody could give me some 
guidance on staring with the project;)


Thanks and Regards,
fangzhen
GitHub : https://github.com/fz1989


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [GSOC][Gantt]Cross-services Scheduler

2014-03-05 Thread Russell Bryant
On 03/05/2014 09:59 AM, 方祯 wrote:
> Hi:
> I'm Fang Zhen, an M.S student from China. My current research work is on
> scheduling policy on cloud computing. I have been following the
> openstack for about 2 years.I always thought of picking a blueprint and
> implementing it with the community's guidance.Luckily, open-stack
> participates GSOC this year and is is impossible for me to implement
> "Cross-services Scheduler" of Openstack-Gantt project.And also, I'm sure
> that I can continue to help to  openstack after GSOC.

Thanks for your interest in OpenStack!

I think the project as you've described it is far too large to be able
to implement in one GSoC term.  If you're interested in scheduling,
perhaps we can come up with a specific enhancement to Nova's current
scheduler that would be more achievable in the time allotted.  I want to
make sure we're setting you up for success, and I think helping scope
the project is a big early part of that.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Feature Freeze Exceptions for Icehouse

2014-03-05 Thread Russell Bryant
Nova is now feature frozen for the Icehouse release.  Patches for
blueprints not already merged will need a feature freeze exception (FFE)
to be considered for Icehouse.

If you would like to request a FFE, please do so on the openstack-dev
mailing list with a prefix of "[Nova] FFE Request: ".

In addition to evaluating the request in terms of risks and benefits, I
would like to require that every FFE be sponsored by two members of
nova-core.  This is to ensure that there are reviewers willing to review
the code in a timely manner so that we can exclusively focus on bug
fixes as soon as possible.

Thanks!

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [GSOC][Gantt]Cross-services Scheduler

2014-03-05 Thread 方祯
Hi:
I'm Fang Zhen, an M.S. student from China. My current research work is on
scheduling policy in cloud computing. I have been following OpenStack
for about 2 years. I have always thought of picking a blueprint and implementing
it with the community's guidance. Luckily, OpenStack participates in GSoC this
year, and it is possible for me to implement the "Cross-services Scheduler" of
the OpenStack Gantt project. Also, I'm sure that I can continue to help
OpenStack after GSoC.

About me:
I'm an M.S. student from XiDian University. I'm good at C and Python
programming and I'm familiar with git and Python development. I have
participated in developing several web servers with Python web frameworks and
implemented some Python scripts to use OpenStack. I have read the docs and
papers about the project and the guide.md of OpenStack development. But
as a newbie to OpenStack dev, it would be great if somebody could give
me some guidance on starting with the project ;)

Thanks and Regards,
fangzhen
GitHub : https://github.com/fz1989
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Change in openstack/neutron[master]: Permit ICMPv6 RAs only from known routers

2014-03-05 Thread Robert Li (baoli)
Hi Sean,

See embedded comments...

Thanks,
Robert

On 3/4/14 3:25 PM, "Collins, Sean"  wrote:

>On Tue, Mar 04, 2014 at 02:08:03PM EST, Robert Li (baoli) wrote:
>> Hi Xu Han & Sean,
>> 
>> Is this code going to be committed as it is? Based on this morning's
>> discussion, I thought that the IP address used to install the RA rule
>> comes from the qr-xxx interface's LLA address. I think that I'm
>>confused.
>
>Xu Han has a better grasp on the query than I do, but I'm going to try
>and take a crack at explaining the code as I read through it. Here's
>some sample data from the Neutron database - built using
>vagrant_devstack. 
>
>https://gist.github.com/sc68cal/568d6119eecad753d696
>
>I don't have V6 addresses working in vagrant_devstack just yet, but for
>the sake of discourse I'm going to use it as an example.
>
>If you look at the queries he's building in 72252 - he's querying all
>the ports on the network, that are q_const.DEVICE_OWNER_ROUTER_INTF
>("network:router_interface"). The IP of those ports are added to the list
>of IPs.
>
>Then a second query is done to find the port connected from the router
>to the gateway, q_const.DEVICE_OWNER_ROUTER_GW
>('network:router_gateway'). Those IPs are then appended to the list of
>IPs.
>
>Finally, the last query adds the IPs of the gateway for each subnet
>in the network.
>
>So, ICMPv6 traffic from ports that are either:
>
>A) A gateway device
>B) A router
>C) The subnet's gateway

My understanding is that the RA (if enabled) will be sent to the router
interface (the qr interface). Therefore, the RA's source IP will be an LLA
from the qr interface.
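
Conceptually, what the change is trying to enforce can be sketched as follows;
this is not the actual iptables_manager code from the patch, and the example
LLA is made up:

# Sketch: accept ICMPv6 router advertisements (type 134) only from the
# link-local addresses of known router ports, drop RAs from anywhere else.
def ra_rules(known_router_llas):
    rules = []
    for lla in known_router_llas:
        rules.append('-p icmpv6 --icmpv6-type 134 -s %s/128 -j RETURN' % lla)
    rules.append('-p icmpv6 --icmpv6-type 134 -j DROP')
    return rules

# e.g. ra_rules(['fe80::f816:3eff:fe12:3456'])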

> 
>
>Will be passed through to an instance.
>
>Now, please take note that I have *not* discussed what *kind* of IP
>address will be picked up. We intend for it to be a Link Local address,
>but that will be/is addressed in other patch sets.
>
>> Also this bug: Allow LLA as router interface of IPv6 subnet
>> https://review.openstack.org/76125 was created due to comments to 72252.
>> If We don't need to create a new LLA for the gateway IP, is the fix
>>still
>> needed? 
>
>Yes - we still need this patch - because that code path is how we are
>able to create ports on routers that are a link local address.

As a result of this change, it will end up having two LLA addresses on the
router's qr interface. It would have made more sense if the LLA were to
replace the qr interface's automatically generated LLA address.

>
>
>This is at least my understanding of our progress so far, but I'm not
>perfect - Xu Han will probably have the last word.
>
>-- 
>Sean M. Collins


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

