Re: [openstack-dev] [ironic bare metal installation issue]

2014-06-03 Thread Clint Byrum
Excerpts from 严超's message of 2014-06-03 21:23:25 -0700:
> Hi, All:
> I've deployed my ironic following this link:
> http://ma.ttwagner.com/bare-metal-deploys-with-devstack-and-ironic/ , and all
> steps are completed.
> Now node-show on one of my nodes reports provision_state as active. But why is
> this node still in the installation state, as shown below?
>  [image: inline image 1]


Ironic has done all that it can for the machine: it has deployed the kernel
and ramdisk from the image, and Ironic has no real way to check that
the deploy succeeds. It is on the same level as checking to see if your
VM actually boots after KVM has been spawned.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [solved] How to mock the LOG inside cinder driver

2014-06-03 Thread Deepak Shetty
The below issue was resolved (thanks to akerr on IRC).
It seems assert_called_once() is not a real function of mock and doesn't work
as expected. I needed to use assertTrue(mock_func.called), and that's working
for me.
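
For reference, a minimal sketch of the pitfall (illustrative; with the mock
library of this era, where assert_called_once() does not exist as a real
method):

import mock

logger = mock.Mock()
logger.warning("something happened")

# Pitfall: assert_called_once() is not a real Mock method here, so
# attribute auto-creation kicks in -- this just creates and calls a child
# mock and silently "passes" even though debug() was never invoked.
logger.debug.assert_called_once()

# A real assertion method does fail as expected:
# logger.debug.assert_called_once_with()   # raises AssertionError

# The workaround from this thread: check the `called` flag directly.
assert logger.warning.called
assert not logger.debug.called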

thanx,
deepak


On Tue, Jun 3, 2014 at 9:46 PM, Deepak Shetty  wrote:

>  Hi, what's the right way to mock the LOG variable inside the
> driver? I am mocking mock.patch.object(glusterfs, 'LOG') as mock_logger
>  and then doing...
>  mock_logger.warning.assert_called_once() - which passes and is
> expected to pass per my code
>  but
>  mock_logger.debug.assert_called_once() - should fail, but this
> also passes!
>  any idea why?
>
> I feel that I am not mocking the LOG inside the driver correctly.
>
> I also tried
>mock.patch.object(glusterfs.LOG, 'warning'),
> mock.patch.object(glusterfs.LOG, 'debug')
> as mock_logger_warn and mock_logger_debug respectively
>
> But here too,
> .debug and .warning both pass, while the expected result is for
> .warning to pass and .debug to fail.
>
> So somehow I am unable to mock LOG properly
>
> thanx,
> deepak
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [Cinder] How to mock the LOG inside cinder driver

2014-06-03 Thread Deepak Shetty
Wrongly sent to Joshua only, hence forwarding to the list.

--

Joshua,
  If my code logs warning, error, or debug messages based on different
exceptions or conditions, it's good to test them and have a unit test around it
so that we can catch scenarios where we modified code that ideally should
have just logged a warning but wrongly logged a debug/error. That's my only
intention here.


On Tue, Jun 3, 2014 at 11:54 PM, Joshua Harlow 
wrote:

>  Why is mocking the LOG object useful/being used?
>
>  Testing functionality which depends on LOG triggers/calls imho is bad
> practice (and usually means something needs to be refactored).
>
>  LOG statements, and calls should be expected to move/be removed *often*
> so testing functionality in tests with them seems like the wrong approach.
>
>  My 2 cents.
>
>   From: Deepak Shetty 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, June 3, 2014 at 9:16 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [Cinder] How to mock the LOG inside cinder driver
>
>   Hi, what's the right way to mock the LOG variable inside
> the driver? I am mocking mock.patch.object(glusterfs, 'LOG') as mock_logger
>  and then doing...
>  mock_logger.warning.assert_called_once() - which passes and is
> expected to pass per my code
>  but
>  mock_logger.debug.assert_called_once() - should fail, but this
> also passes!
>  any idea why?
>
>  I feel that I am not mocking the LOG inside the driver correctly.
>
> I also tried
>mock.patch.object(glusterfs.LOG, 'warning'),
> mock.patch.object(glusterfs.LOG, 'debug')
>  as mock_logger_warn and mock_logger_debug respectively
>
>  But here too,
> .debug and .warning both pass, while the expected result is for
> .warning to pass and .debug to fail.
>
>  So somehow I am unable to mock LOG properly
>
> thanx,
> deepak
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic]Need to know the process to make changes to nova ironic virt driver

2014-06-03 Thread Faizan Barmawer
Hi All,

I am currently looking at UEFI support for ironic driver blueprints and
writing a design spec for the same.
https://blueprints.launchpad.net/ironic/+spec/uefi-boot-for-ironic
https://blueprints.launchpad.net/ironic/+spec/uefi-gpt-support

I anticipate changes in the nova ironic virt driver, and maybe adding
ironic-specific filters in this space for the above feature.

Just wanted to check with you folks whether we need to file a separate
blueprint in nova to make those changes, or whether we can add the changes
in the nova virt driver placed in the ironic tree itself?

Please clarify.

Thanks,
Faizan Barmawer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic bare metal installation issue]

2014-06-03 Thread 严超
Hi, All:
I've deployed my ironic following this link:
http://ma.ttwagner.com/bare-metal-deploys-with-devstack-and-ironic/ , and all
steps are completed.
Now node-show on one of my nodes reports provision_state as active. But why is
this node still in the installation state, as shown below?
 [image: inline image 1]



Best Regards!

Chao Yan
--
My twitter: Andy Yan @yanchao727
My Weibo: http://weibo.com/herewearenow
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] nova-compute deadlock

2014-06-03 Thread Zhu Zhu
Hi Qin Zhao,

Thanks for raising this issue and the analysis. According to the issue
description and the scenario in which it happens
(https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720),
if that's the case, the issue is very likely to happen when multiple
concurrent KVM instance spawns (with both config drive and data injection
enabled) are triggered.
As in the libvirt/driver.py _create_image method, right after making the ISO
with "cdb.make_drive", the driver will attempt "data injection", which will
launch libguestfs in another thread.

It looks like there were also a couple of libguestfs hang issues on Launchpad,
linked below. I am not sure if libguestfs itself can have some mechanism to
free/close the fds inherited from the parent process instead of requiring an
explicit call to tear down. Maybe open a defect against libguestfs to see what
their thoughts are?

https://bugs.launchpad.net/nova/+bug/1286256
https://bugs.launchpad.net/nova/+bug/1270304 
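
To make the fd-inheritance hazard discussed above concrete, here is a minimal
standalone sketch (just the mechanism, not the nova/libguestfs code itself):

import os
import subprocess

# A pipe created in the parent, standing in for the pipes that eventlet
# or libguestfs create internally.
r, w = os.pipe()

# If the child were spawned with close_fds=False it would inherit `w`,
# and the read end below would never see EOF while the child lives --
# exactly the kind of stuck-forever wait behind the reported deadlock.
child = subprocess.Popen(['sleep', '60'], close_fds=True)

os.close(w)
# With close_fds=True the child holds no copy of `w`, so this returns
# EOF immediately instead of blocking for up to 60 seconds.
assert os.read(r, 1) == b''
os.close(r)
child.terminate()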



Zhu Zhu
Best Regards
 
From: Qin Zhao
Date: 2014-05-31 01:25
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Nova] nova-compute deadlock
Hi all,

When I run Icehouse code, I encountered a strange problem. The nova-compute
service becomes stuck when I boot instances. I reported this bug in
https://bugs.launchpad.net/nova/+bug/1313477.

After thinking about it for several days, I feel I know its root cause. This
bug should be a deadlock problem caused by pipe fd leaking. I drew a diagram
to illustrate the problem.
https://docs.google.com/drawings/d/1pItX9urLd6fmjws3BVovXQvRg_qMdTHS-0JhYfSkkVc/pub?w=960&h=720

However, I have not found a very good solution to prevent this deadlock. This
problem is related to the Python runtime, libguestfs, and eventlet. The
situation is a little complicated. Is there any expert who can help me look
for a solution? I will appreciate your help!

-- 
Qin Zhao
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-03 Thread ChangBo Guo
Jay, thanks for raising this.
+1 for this.
A related question about the CPU and RAM allocation ratios: shall we apply
them when getting hypervisor information with the command "nova hypervisor-show
${hypervisor-name}"?
The output looks like:
| memory_mb      | 15824  |
| memory_mb_used | 1024   |
| running_vms    | 1      |
| service_host   | node-6 |
| service_id     | 39     |
| vcpus          | 4      |
| vcpus_used     | 1      |

vcpus is showing the number of physical CPUs; I think that's not correct.
Any thoughts?
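
To make the distinction concrete, a toy sketch (16.0 is nova's default
cpu_allocation_ratio):

physical_vcpus = 4          # what `vcpus` reports today
cpu_allocation_ratio = 16.0

# Schedulable capacity after overcommit -- arguably the more useful number:
print(physical_vcpus * cpu_allocation_ratio)  # 64.0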


2014-06-03 21:29 GMT+08:00 Jay Pipes :

> Hi Stackers,
>
> tl;dr
> =
>
> Move CPU and RAM allocation ratio definition out of the Nova scheduler and
> into the resource tracker. Remove the calculations for overcommit out of
> the core_filter and ram_filter scheduler pieces.
>
> Details
> ===
>
> Currently, in the Nova code base, the thing that controls whether or not
> the scheduler places an instance on a compute host that is already "full"
> (in terms of memory or vCPU usage) is a pair of configuration options*
> called cpu_allocation_ratio and ram_allocation_ratio.
>
> These configuration options are defined in, respectively,
> nova/scheduler/filters/core_filter.py and nova/scheduler/filters/ram_
> filter.py.
>
> Every time an instance is launched, the scheduler loops through a
> collection of host state structures that contain resource consumption
> figures for each compute node. For each compute host, the core_filter and
> ram_filter's host_passes() method is called. In the host_passes() method,
> the host's reported total amount of CPU or RAM is multiplied by this
> configuration option, and the reported used amount of CPU or RAM is then
> subtracted from the product. If the result is greater than or equal to the
> number of vCPUs needed by the instance being launched, True is returned and
> the host continues to be considered during scheduling decisions.
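
A minimal sketch of that check, to make the math concrete (illustrative names
and the default 16.0 CPU ratio; not the actual filter code):

def host_passes(total_vcpus, used_vcpus, requested_vcpus,
                cpu_allocation_ratio=16.0):
    # Advertised capacity is the physical total scaled by the ratio.
    limit = total_vcpus * cpu_allocation_ratio
    return limit - used_vcpus >= requested_vcpus

# A host with 4 physical cores and 60 vCPUs already allocated still
# accepts a 2-vCPU instance at the default 16.0 ratio (limit = 64)...
assert host_passes(4, 60, 2)
# ...but not once 63 vCPUs are allocated.
assert not host_passes(4, 63, 2)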
>
> I propose we move the definition of the allocation ratios out of the
> scheduler entirely, as well as the calculation of the total amount of
> resources each compute node contains. The resource tracker is the most
> appropriate place to define these configuration options, as the resource
> tracker is what is responsible for keeping track of total and used resource
> amounts for all compute nodes.
>
> Benefits:
>
>  * Allocation ratios determine the amount of resources that a compute node
> advertises. The resource tracker is what determines the amount of resources
> that each compute node has, and how much of a particular type of resource
> have been used on a compute node. It therefore makes sense to put
> calculations and definition of allocation ratios where they naturally
> belong.
>  * The scheduler currently needlessly re-calculates total resource amounts
> on every call to the scheduler. This isn't necessary. The total resource
> amounts don't change unless a configuration option is changed on a
> compute node (or host aggregate), and this calculation can be done more
> efficiently once in the resource tracker.
>  * Move more logic out of the scheduler
>  * With the move to an extensible resource tracker, we can more easily
> evolve to defining all resource-related options in the same place (instead
> of in different filter files in the scheduler...)
>
> Thoughts?
>
> Best,
> -jay
>
> * Host aggregates may also have a separate allocation ratio that overrides
> any configuration setting that a particular host may have
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] nova-compute rpc version

2014-06-03 Thread Matt Riedemann



On 6/3/2014 8:15 AM, abhishek jain wrote:

Hi Russell

Thanks
I'm able to solve it now by switching to the havana release on both the
controller node and the compute node.







On Tue, Jun 3, 2014 at 10:56 AM, abhishek jain <ashujain9...@gmail.com> wrote:

Hi Russell

Below are the details...

controller node...

nova --version
2.17.0.122

nova-compute  --version
2014.2

compute node.

nova --version
2.17.0.122

nova-compute --version
2013.2.4

Can you help me with what I need to change in order to achieve the
desired functionality?



Thanks


On Tue, Jun 3, 2014 at 2:16 AM, Russell Bryant <rbry...@redhat.com> wrote:

On 06/02/2014 08:20 AM, abhishek jain wrote:
 > Hi
 >
 > I'm getting the following error in nova-compute logs when trying
to boot a VM from the controller node onto the compute node ...
 >
 >  Specified RPC version, 3.23, not supported
 >
 > Please help regarding this.

It sounds like you're using an older nova-compute with newer
controller
services (without the configuration to allow a live upgrade). Check the
versions of Nova services you have running.

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



See the 2nd to last bullet here:

https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse#Upgrade_Notes_2
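
For reference, that bullet describes pinning the compute RPC API version in
nova.conf on the upgraded control services while older computes are still
running; roughly along these lines (check the release notes for the exact
option value):

[upgrade_levels]
compute = icehouse-compat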

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance

2014-06-03 Thread Dmitry Borodaenko
Here's a fix that increases haproxy server timeout for Horizon to 48h:
https://review.openstack.org/#/c/97645/

I've marked the bug as Incomplete for now: we need confirmation that
only Horizon is affected and Glance isn't. Please try the current
version of the fix; if gateway timeouts disappear, it will confirm
that the timeout value for Glance doesn't need to be changed.

Thanks,
-DmitryB



On Tue, Jun 3, 2014 at 11:14 AM, Evgeny Kozhemyakin
 wrote:
> Tizy Ninan wrote:
>> When uploading images with large filesize (more than 1 GB) from dashboard,
>> after upload is done the dashboard is showing "504 Gateway Timeout". What
>
> Anyway we've launched a bug for fuel, thank you for the notice.
> https://bugs.launchpad.net/fuel/+bug/1326082
>
> --
> Regards,
> Evgeny Kozhemyakin (EVK-RIPE)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Dmitry Borodaenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] dealing with M:N relashionships for Pools and Listeners

2014-06-03 Thread Eichberger, German
Hi,

From deep below in the e-mail chain:
Same here. Cascade-deleting of shared objects should not be allowed in any case.

Being able to delete all lbs and related constructs after a customer leaves,
and/or for tests, is a pretty important requirement for us. It does not
necessarily have to be accomplished by a cascading delete on the user api (we
could use an admin api for that), but it is important in our data model to
avoid constraint violations when we want to clean everything out…

I am still with Jorge that sharing of objects in whatever form might confuse
customers, who will then use up costly customer support time, and hence is not
entirely in the interest of us public cloud providers. The status discussion
is another example of that…

German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Friday, May 30, 2014 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] dealing with M:N relashionships 
for Pools and Listeners

Hi y'all!

Re-responses inline:

On Fri, May 30, 2014 at 8:25 AM, Brandon Logan
<brandon.lo...@rackspace.com> wrote:

> Where can a user check the success of the update?
>
>
>
>
> Depending on the object... either the status of the child object
> itself or all of its affected parent(s). Since we're allowing reusing
> of the pool object, when getting the status of a pool, maybe it makes
> sense to produce a list showing the status of all the pool's members,
> as well as the update status of all the listeners using the pool?
This is confusing to me.  Will there be a separate provisioning status
field on the loadbalancer and just a generic status on the child
objects?  I get the idea of a pool having a status the reflects the
state of all of its members.  Is that what you mean by status of a child
object?

It seems to me that we could use the 'generic status' field on the load 
balancer to show provisioning status as well. :/  Is there a compelling reason 
we couldn't do this? (Sam?)

And yes, I think that's what I mean with one addition. For example:

If I have Listener A and B which use pool X which has members M and N...  if I 
set member 'M' to be 'ADMIN_STATE_DISABLED', then what I would expect to see, 
if I ask for the status of pool X immediately after this change is:
* An array showing N is 'UP' and 'M' is in state 'ADMIN_STATE_DISABLED' and
* An array showing that listeners 'A' and 'B' are in 'PENDING_UPDATE' state (or 
something similar).

I would also expect listeners 'A' and 'B' to go back to 'UP' state shortly 
thereafter.

Does this make sense?

Note that there is a problem with my suggestion: What does the status of a 
member mean when the member is referenced indirectly by several listeners?  
(For example, listener A could see member N as being UP, whereas listener B 
could see member N as being DOWN.)  Should member statuses also be an array 
from the perspective of each listener? (in other words, we'd have a 
two-dimensional array here.)

If we do this, then perhaps the right thing to do is just list the pool members'
statuses in the context of the listeners.  In other words, if we're reporting
this way, then given the same scenario above, if we set member 'M' to be
'ADMIN_STATE_DISABLED', asking for the status of pool X immediately after
this change would show:
* (Possibly?) an array for each listener status showing them as 'PENDING_UPDATE'
* An array for member statuses which contain:
** An array which shows member N is 'UP' for listener 'A' and 'DOWN' for 
listener 'B'
** An array which shows member M is 'PENDING_DISABLED' for both listener 'A' 
and 'B'

...and then shortly thereafter we would see member M's status for each listener 
change to 'DISABLED' at the same time the listeners' statuses change to 'UP'.

So... this second way of looking at it is less intuitive to me, though it is 
probably more correct. Isn't object re-use fun?


>
> ·Operation status/state – this refers to information
> returning from the load balancing back end / driver
>
> o  How is member status that failed health monitor reflected,
> on which LBaaS object and how can a user understand the
> failure?
>
>
> Assuming you're not talking about an alert which would be generated by
> a back-end load balancer and get routed to some notification system...
> I think you should be able to get the status of a member by just
> checking the member status directly (ie.  GET /members/[UUID]) or, if
> people like my suggestion above, by checking the status of the pool to
> which the member belongs (ie. GET /pools/[UUID]).
>
>
> ·Administrator state management
>
> o  How is a change in admin_state on member, pool, listener
> get managed
>
>
> I'm thinking that disabling members, pools, and listeners should
> propagate to all parent objects. (For example, disabling a member
> should propagate to all affected pools and listeners.)

Re: [openstack-dev] [Neutron][LBaaS] Requirements around statistics and billing

2014-06-03 Thread Eichberger, German
Hi Stephen,

We would like all those numbers as well ☺

Additionally, we measure:

* When a lb instance was created, deleted, etc.

* For monitoring we “ping” a load balancer's health check and report/act
on the results.

* For users' troubleshooting we make the haproxy logs available, which
contain connection information like from, to, duration, protocol, and status
(though we have frequently been told that this is not really useful for
debugging…) and of course having that more gui-fied would be neat.

German



From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, May 27, 2014 8:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Requirements around statistics and 
billing

Hi folks!

We have yet to have any kind of meaningful discussion on this list around load
balancer stats (which, I presume, include data that will eventually need to
be consumed by a billing system). I'd like to get the discussion started here,
as this will have significant meaning for how we both make this data available
to users and how we implement back-end systems to be able to provide this data.

So!  What kinds of data are people looking for, as far as load balancer
statistics?

For our part, as an absolute minimum we need the following per loadbalancer + 
listener combination:

* Total bytes transferred in for a given period
* Total bytes transferred out for a given period

Our product and billing people I'm sure would like the following as well:

* Some kind of peak connections / second data (95th percentile or average over 
a period, etc.)
* Total connections for a given period
* Total HTTP / HTTPS requests served for a given period

And the people who work on UIs and put together dashboards would like:

* Current requests / second (average for last X seconds, either on-demand, or 
simply dumped regularly).
* Current In/Out bytes throughput

And our monitoring people would like this:

* Errors / second
* Current connections / second and bytes throughput secant slope (i.e. like a
derivative but easier to calculate from digital data) for the last X seconds
(i.e. detecting massive spikes or drops in traffic, potentially useful for
detecting a problem before it becomes critical); see the sketch below
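
To make the secant-slope idea concrete, a toy sketch (not a proposed API):

def secant_slope(total_t0, total_t1, dt_seconds):
    """Average change per second between two cumulative counter samples."""
    return (total_t1 - total_t0) / float(dt_seconds)

# e.g. total connection counts sampled 10 seconds apart:
print(secant_slope(120000, 150000, 10))  # 3000.0 connections/sec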

And some of our users would like all of the above data per pool, and not just 
for loadbalancer + listener. Some would also like to see it per member (though 
I'm less inclined to make this part of our standard).

I'm also interested in hearing vendor capabilities here, as it doesn't make 
sense to design stats that most can't implement, and I imagine vendors also 
have valuable data on what their customers ask for / what stats are most useful 
in troubleshooting.

What other statistics data for load balancing are meaningful and hopefully not 
too arduous to calculate? What other data are your users asking for or 
accustomed to seeing?

Thanks,
Stephen

--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Tests for Custom roles in keystone v3

2014-06-03 Thread Frittoli, Andrea (HP Cloud)
Hi Ajaya,

 

Thanks for the offer to help :)

 

Are you talking about tempest tests or in-tree keystone tests?

 

Verifying custom roles can be challenging via API-only driven tests such as
tempest, as it requires having the policies configured accordingly in the
cloud under test (i.e. devstack).

It should be possible to prepare support for custom roles in the policy at
deployment time. If tempest is what you're aiming at, it would be good if you
could file a bp to describe what kind of use cases you have in mind, and why
you'd like to run them in tempest.

 

As these would be keystone-only tests, I wonder if they would be a better fit
as unit / functional tests in the keystone tree? This approach would give you
more flexibility in changing the policies.

 

If you are interested in contributing to tempest tests in the keystone area, 
below are some ideas.

 

A few bp which are related to tempest and keystone identity API v3:

-  Refactor tempest so that it may run consuming identity v3 only (or 
greater, when available) [1]

-  Setup dsvm tests which rely on identity v3 only (including 
intra-service communication) [2]

-  Cross domain testing: write tests to verify the impact of the domain 
scope on keystone itself and on the services [3]

-  Tempest without admin account (David Kranz’s blueprint): run tempest 
tests without the need of an “admin” account [4]

 

You're very welcome to contribute to any of those. [3] and [4] are still in the 
design phase.

 

The non-admin blueprint is loosely related to custom roles: it raised the 
question of how to run as many tests as possible without the need of an 
identity-admin account, which in certain deployments may not be available to 
the person running the tests.

The concept of domain introduced in identity v3 may be helpful here, as a 
domain admin could be able to have full control within the boundaries of the 
domain.  

That can be true for keystone, as long as the roles are defined and the policy
in keystone is configured correctly.

 

For services, I believe there is no combination of custom roles / service
policies that will allow achieving this – as an example use case, allowing
the domain admin to list all the VMs, images, containers and networks defined
within projects that belong to the domain. I believe that for this to be
possible we'll have to wait for hierarchical multi-tenancy in every
project. [5]

 

Andrea

 

p.s.

Please use the openstack-dev list, openstack-qa is only used for reporting of 
periodic job test results. 

 

[1] 
https://github.com/openstack/qa-specs/blob/master/specs/multi-keystone-api-version-tests.rst

[2] https://github.com/openstack/qa-specs/blob/master/specs/keystone-v3-jobs.rst

[3] 
http://docs-draft.openstack.org/98/83898/5/check/gate-qa-specs-docs/4372c5f/doc/build/html/specs/cross-domain-testing.html
 

[4] 
http://docs-draft.openstack.org/67/86967/6/check/gate-qa-specs-docs/d0c8170/doc/build/html/specs/run-without-admin.html
 

[5] 
https://etherpad.openstack.org/p/juno-cross-project-hierarchical-multitenancy 

 

From: Ajaya Agrawal [mailto:ajku@gmail.com] 
Sent: 03 June 2014 20:38
To: openstack-qa
Subject: [openstack-qa] Tests for Custom roles in keystone v3

 

Hi,

 

Is someone writing tests for custom roles and policies in keystone v3? For
example, one could create a role called project_admin who would be allowed to
create/delete users in his project only.
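
For illustration only, such a role might be wired up with keystone policy
entries along these lines (identity:create_user / identity:delete_user are
existing keystone policy targets; the project_admin role is hypothetical, and
true same-project scoping needs more than what is shown here):

{
    "identity:create_user": "role:project_admin",
    "identity:delete_user": "role:project_admin"
}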

 

Andrea, Sean said in irc that you are working on this thing. Would you like to 
have one more pair of hands on this? :)




Cheers,

Ajaya



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] Tempest + Rally: first success

2014-06-03 Thread om prakash pandey
Thanks Andrey! Please see the logs below (environment-specific output has
been snipped):

2014-06-04 02:32:07.303 1939 DEBUG rally.cmd.cliutils [-] INFO logs from
urllib3 and requests module are hide. run
/usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py:137
2014-06-04 02:32:07.363 1939 INFO rally.orchestrator.api [-] Starting
verification of deployment: 9c023039-211e-4794-84df-4ada68c656dd
2014-06-04 02:32:07.378 1939 INFO
rally.verification.verifiers.tempest.tempest [-] Verification
d596caf4-feb7-455c-832e-b6b77b1dcb9c | Starting:  Run verification.
2014-06-04 02:32:07.378 1939 DEBUG
rally.verification.verifiers.tempest.tempest [-] Tempest config file:
/home/om/.rally/tempest/for-deployment-9c023039-211e-4794-84df-4ada68c656dd/tempest.conf
 generate_config_file
/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py:90
2014-06-04 02:32:07.379 1939 INFO
rally.verification.verifiers.tempest.tempest [-] Starting: Creation of
configuration file for tempest.
2014-06-04 02:32:07.385 1939 DEBUG keystoneclient.session [-] REQ: curl -i
-X POST
---
2014-06-04 02:32:09.003 1939 DEBUG glanceclient.common.http [-]
HTTP/1.1 200 OK
content-length: 9276
via: 1.1 xyzcloud.com
server: Apache/2.4.9 (Ubuntu)
connection: close
date: Tue, 03 Jun 2014 21:02:08 GMT
content-type: application/json; charset=UTF-8
x-openstack-request-id: req-5fe73b11-85c7-49b7-bf31-34a44ceaaf6b

2014-06-04 02:32:13.399 1939 DEBUG neutronclient.client [-]
REQ: curl -i
https://xyzcloud.com:443//v2.0/subnets.json?network_id=13d63b58-57c9-4ba2-ae63-733836257636
-X GET -H "X-Auth-Token: bc604fb2a258429f99ed8940064fb1cb" -H
"Content-Type: application/json" -H "Accept: application/json" -H
"User-Agent: python-neutronclient"
 http_log_req
/usr/local/lib/python2.7/dist-packages/neutronclient/common/utils.py:173
2014-06-04 02:32:14.068 1939 DEBUG neutronclient.client [-] RESP:{'date':
'Tue, 03 Jun 2014 21:02:14 GMT', 'status': '200', 'content-length': '15',
'content-type': 'application/json; charset=UTF-8', 'content-location': '
https://xyzcloud.com:443//v2.0/subnets.json?network_id=13d63b58-57c9-4ba2-ae63-733836257636'}
{"subnets": []}
 http_log_resp
/usr/local/lib/python2.7/dist-packages/neutronclient/common/utils.py:179
2014-06-04 02:32:14.071 1939 CRITICAL rally [-] IndexError: list index out
of range
2014-06-04 02:32:14.071 1939 TRACE rally Traceback (most recent call last):
2014-06-04 02:32:14.071 1939 TRACE rally   File "/usr/local/bin/rally",
line 10, in 
2014-06-04 02:32:14.071 1939 TRACE rally sys.exit(main())
2014-06-04 02:32:14.071 1939 TRACE rally   File
"/usr/local/lib/python2.7/dist-packages/rally/cmd/main.py", line 44, in main
2014-06-04 02:32:14.071 1939 TRACE rally return cliutils.run(sys.argv,
categories)
2014-06-04 02:32:14.071 1939 TRACE rally   File
"/usr/local/lib/python2.7/dist-packages/rally/cmd/cliutils.py", line 193,
in run
2014-06-04 02:32:14.071 1939 TRACE rally ret = fn(*fn_args, **fn_kwargs)
2014-06-04 02:32:14.071 1939 TRACE rally   File "", line 2, in start
2014-06-04 02:32:14.071 1939 TRACE rally   File
"/usr/local/lib/python2.7/dist-packages/rally/cmd/envutils.py", line 63, in
default_from_global
2014-06-04 02:32:14.071 1939 TRACE rally return f(*args, **kwargs)
2014-06-04 02:32:14.071 1939 TRACE rally   File
"/usr/local/lib/python2.7/dist-packages/rally/cmd/commands/verify.py", line
58, in start
2014-06-04 02:32:14.071 1939 TRACE rally api.verify(deploy_id,
set_name, regex)
2014-06-04 02:32:14.071 1939 TRACE rally   File
"/usr/local/lib/python2.7/dist-packages/rally/orchestrator/api.py", line
150, in verify
2014-06-04 02:32:14.071 1939 TRACE rally
verifier.verify(set_name=set_name, regex=regex)
2014-06-04 02:32:14.071 1939 TRACE rally   File
"/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py",
line 271, in verify
2014-06-04 02:32:14.071 1939 TRACE rally
self._prepare_and_run(set_name, regex)
2014-06-04 02:32:14.071 1939 TRACE rally   File
"/usr/local/lib/python2.7/dist-packages/rally/utils.py", line 162, in
wrapper
2014-06-04 02:32:14.071 1939 TRACE rally result = f(self, *args,
**kwargs)
2014-06-04 02:32:14.071 1939 TRACE rally   File
"/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py",
line 153, in _prepare_and_run
2014-06-04 02:32:14.071 1939 TRACE rally self.generate_config_file()
2014-06-04 02:32:14.071 1939 TRACE rally   File
"/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tempest/tempest.py",
line 95, in generate_config_file
2014-06-04 02:32:14.071 1939 TRACE rally conf =
config.TempestConf(self.deploy_id).generate()
2014-06-04 02:32:14.071 1939 TRACE rally   File
"/usr/local/lib/python2.7/dist-packages/rally/verification/verifiers/tem

Re: [openstack-dev] [Neutron] test configuration for ml2/ovs L2 and L3 agents

2014-06-03 Thread Carl Baldwin
Chuck,

I accidentally uploaded my local.conf changes to gerrit [1].  I
immediately abandoned them so that reviewers wouldn't waste time
thinking I was trying to get changes upstream.  But, since they're up
there now, you could take a look.

I am currently running a multi-node devstack on a couple of cloud VMs
with these changes.

Carl

[1] https://review.openstack.org/#/c/96972/
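
For anyone else setting this up, a minimal compute-node local.conf sketch in
the spirit of the devstack multi-node docs (addresses and the exact service
list are placeholders to adapt):

[[local|localrc]]
HOST_IP=192.168.1.11          # this compute node
SERVICE_HOST=192.168.1.10     # the controller node
MYSQL_HOST=$SERVICE_HOST
RABBIT_HOST=$SERVICE_HOST
GLANCE_HOSTPORT=$SERVICE_HOST:9292
ENABLED_SERVICES=n-cpu,q-agt,n-api-meta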

On Tue, Jun 3, 2014 at 9:23 AM, Carlino, Chuck  wrote:
> Hi all,
>
> I'm struggling a bit to get a test set up working for L2/L3 work (ml2/ovs).  
> I've been trying multi-host devstack (just controller node for now), and I 
> must be missing something important because n-sch bombs out.  Single node 
> devstack works fine, but it's not very useful for L2/L3.
>
> Any suggestions, or maybe someone has some local.conf files they'd care to 
> share?
>
> Many thanks,
> Chuck
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Use of AngularJS

2014-06-03 Thread Matthias Runge
On Tue, Jun 03, 2014 at 05:14:16PM +, Musso, Veronica A wrote:
> Great, thanks Matthias!
> 
> Then, if django-angular is approved for Fedora, do we need to wait for Ubuntu 
> packages? Or can it be used?
> 
> Thanks!
> Veronica

I can not speak for other distributions. My 2ct here:

you'd need a package when a release is being made. If your favourite
distro provides e.g. packages for each snapshot, you'd need a package
around June 12th.
This is no different for django-angular than for any other dependency.

So I'd say: we don't need to wait here with an implementation for any
distribution.
-- 
Matthias Runge 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-03 Thread Martinez, Christian
Hi Liz,
The designs look really cool, and I think that we should consider a couple of
things (more related to the alarm implementation in Ceilometer):

* There are combined alarms, which are a combination of two or more 
alarms. We need to see how they work and how we can show/modify them (or even 
if we want to show them)

* Currently, the alarms don't have a severity field. What would be the
intention of having one? Is it to be able to filter by "alarm severity"? Is it
to have a way to distinguish the "not-so-critical" alarms from the ones that
are critical?

* The alarms have a "list of actions" to be executed based on their 
current state. I think that the intention of that feature was to create alarms 
that could manage and trigger different actions based on their "alarm state". 
For instance, if an alarm is created but doesn't have enough data to be 
evaluated, the state is "insufficient data", and you can add actions to be 
triggered when this happens, for instance writing a LOG file or calling an URL. 
Maybe we could use this functionality that to notify the user whenever an alarm 
is triggered and we also should consider that when creating or updating the 
alarms as well.

More related to Alarms in general :

* What are the ideas around the alarm notifications? I saw that your
intention is to have some sort of "g+ notifications", but what about other
solutions/options, like email (using Mistral, perhaps?) or logs. What do you
guys think about that?

* The alarms could also be created by the users. I would add that
CRUD functionality on the alarms tab in the overview section as well.
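
For illustration, an alarm with per-state actions can be created along these
lines with the ceilometer CLI of this era (values are just an example; the
log:// action simply writes to the ceilometer log):

ceilometer alarm-threshold-create --name cpu_high --meter-name cpu_util \
  --threshold 70.0 --comparison-operator gt --statistic avg \
  --period 600 --evaluation-periods 3 \
  --alarm-action 'log://' --insufficient-data-action 'log://'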

Hope it helps

Regards,
H
From: Liz Blanchard [mailto:lsure...@redhat.com]
Sent: Tuesday, June 3, 2014 3:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

Hi All,

I've recently put together a set of wireframes[1] around Alarm Management that 
would support the following blueprint:
https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page

If you have a chance it would be great to hear any feedback that folks have on 
this direction moving forward with Alarms.

Best,
Liz

[1] 
http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-05-30.pdf
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Implementing new LBaaS API

2014-06-03 Thread Brandon Logan
This is an LBaaS topic, but I'd like to get some Neutron core members to
give their opinions on this matter, so I've directed this to Neutron
proper.

The design for the new API and object model for LBaaS needs to be locked
down before the hackathon in a couple of weeks, and there are some
questions that need to be answered.  It is pretty urgent to come to a
decision and to get a clear strategy defined, so we can actually write
real code during the hackathon instead of wasting some of that valuable
time discussing this.


Implementation must be backwards compatible

There are 2 ways that have come up on how to do this:

1) New API and object model are created in the same extension and plugin
as the old.  Any API requests structured for the old API will be
translated/adapted into the new object model.
PROS:
-Only one extension and plugin
-Mostly true backwards compatibility
-Do not have to rename unchanged resources and models
CONS:
-May end up being confusing to an end-user.
-Separation of old api and new api is less clear
-Deprecating and removing old api and object model will take a bit more
work
-This is basically API versioning the wrong way

2) A new extension and plugin are created for the new API and object
model.  Each API would live side by side.  New API would need to have
different names for resources and object models from Old API resources
and object models.
PROS:
-Clean demarcation point between old and new
-No translation layer needed
-Do not need to modify existing API and object model, no new bugs
-Drivers do not need to be immediately modified
-Easy to deprecate and remove old API and object model later
CONS:
-Separate extensions and object model will be confusing to end-users
-Code reuse by copy paste since old extension and plugin will be
deprecated and removed.
-This is basically API versioning the wrong way

Now if #2 is chosen to be feasible and acceptable then there are a
number of ways to actually do that.  I won't bring those up until a
clear decision is made on which strategy above is the most acceptable.

Thanks,
Brandon






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][ML2] Modular agent architecture

2014-06-03 Thread Mathieu Rohon
Hi Mohammad,

What I meant in my email is totally in line with your proposal. My dataplane
driver is your resource driver, whereas my controlplane driver is your
agent driver!
I totally agree that the real challenge is defining a common abstract class
for every resource driver.
My proposal was to bind a port to a resource driver, so that we can have
several resource drivers on the same agent. This seems to be the goal of the
method ResourceDriver.port_bound() in [3], am I wrong?
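
To make that concrete, a minimal sketch of the kind of abstract resource-driver
interface being discussed (only port_bound comes from the etherpad [3]; the
rest is illustrative):

import abc

import six


@six.add_metaclass(abc.ABCMeta)
class ResourceDriver(object):
    """Illustrative base class for dataplane/resource drivers."""

    @abc.abstractmethod
    def port_bound(self, port):
        """Realize connectivity for a port on the underlying technology."""


class OVSResourceDriver(ResourceDriver):
    def port_bound(self, port):
        pass  # e.g. plug into br-int and set up flows


class LinuxBridgeResourceDriver(ResourceDriver):
    def port_bound(self, port):
        pass  # e.g. attach the tap device to the right bridge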

Thanks for the etherpad, I will try to participate through it.

Mathieu


On Sat, May 31, 2014 at 5:10 AM, Mohammad Banikazemi  wrote:

> Hi Mathieu,
>
> Thanks for the email. As discussed during the ML2 IRC meeting [2], we have
> not decided on a design. That is why we do not have a spec for review yet.
> The idea is that we spend a bit more time and figure out the details and
> try out some possible options before we go ahead with the spec. So new
> comments/suggestions are much appreciated.
>
> In addition to having different drivers, we want to reduce the code
> replication across current agents. I am wondering if, with what you are
> proposing as dataplane drivers, we will end up having different
> drivers which look like the current agents, and not deal with reducing
> code replication across agents. If this is not a correct assessment, could
> you describe how we can avoid code replication across agents/drivers?
>
> Let me briefly explain what I have outlined in [3] (also mentioned in
> [2]). We are thinking of having drivers for each extension or probably
> better said each functionality. So we can have a base l2 connectivity
> driver, an l2pop driver, a sg driver (not to be confused with sq drivers),
> so on so forth. I think in your email you are referring to these drivers
> (or something close to them) as Extension drivers. In [3] they are called
> Agent Drivers.
>
> Then we have the Resource Drivers which will be essentially used for
> realizing these features depending on the technology/resource being used
> (e.g., using OVS switches, or Linux Bridges, or some other technology).
> The main reason for using such an organization is to be able to have
> different agent drivers utilize the same resource and reuse code. The
> challenge is figuring out the api for such a driver. Any thoughts on this?
>
> Mohammad
>
> [3] https://etherpad.openstack.org/p/modular-l2-agent-outline
>
>
>
> From: Mathieu Rohon 
> To: OpenStack Development Mailing List ,
> Mohammad Banikazemi/Watson/IBM@IBMUS,
> Date: 05/30/2014 06:25 AM
> Subject: [openstack-dev][Neutron][ML2] Modular agent architecture
> --
>
>
>
> Hi all,
>
> Modular agent seems to have to choose between two types of architecture [1].
>
> As I understood during the last ML2 meeting [2], the Extension driver
> approach seems to be the most reasonable choice.
> But I think that those two approaches are complementary: Extension
> drivers will deal with RPC callbacks from the plugin, whereas Agent
> drivers will deal with controlling the underlying technology to
> interpret those callbacks.
>
> It looks like a controlplane/dataplane architecture. Could we have a
> control plane manager on which each Extension driver should register
> (and register the callbacks it is listening for), and a data plane manager,
> on which each dataplane controller will register (ofagent, ovs, LB..),
> and which implements a common abstract class?
> A port will be managed by only one dataplane controller, and when a
> control plane driver wants to apply a modification on a port, it will
> retrieve the correct dataplane controller for this port in order to
> call one of the abstracted methods to modify the dataplane.
>
>
> [1]
> https://wiki.openstack.org/wiki/Neutron/ModularL2Agent#Possible_Directions
> [2]
> http://eavesdrop.openstack.org/meetings/networking_ml2/2014/networking_ml2.2014-05-28-16.02.log.html
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-03 Thread Liz Blanchard
Hi All,

I’ve recently put together a set of wireframes[1] around Alarm Management that 
would support the following blueprint:
https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page

If you have a chance it would be great to hear any feedback that folks have on 
this direction moving forward with Alarms.

Best,
Liz

[1] 
http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-05-30.pdf
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] How to mock the LOG inside cinder driver

2014-06-03 Thread Joshua Harlow
Why is mocking the LOG object useful/being used?

Testing functionality which depends on LOG triggers/calls imho is bad practice 
(and usually means something needs to be refactored).

LOG statements, and calls should be expected to move/be removed often so 
testing functionality in tests with them seems like the wrong approach.

My 2 cents.

From: Deepak Shetty <dpkshe...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Tuesday, June 3, 2014 at 9:16 AM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Cinder] How to mock the LOG inside cinder driver

 Hi, what's the right way to mock the LOG variable inside the driver?
I am mocking mock.patch.object(glusterfs, 'LOG') as mock_logger
 and then doing...
 mock_logger.warning.assert_called_once() - which passes and is
expected to pass per my code
 but
 mock_logger.debug.assert_called_once() - should fail, but this also
passes!
 any idea why?

I feel that I am not mocking the LOG inside the driver correctly.

I also tried
   mock.patch.object(glusterfs.LOG, 'warning'),
mock.patch.object(glusterfs.LOG, 'debug')
as mock_logger_warn and mock_logger_debug respectively

But here too,
.debug and .warning both pass, while the expected result is for .warning to
pass and .debug to fail.

So somehow I am unable to mock LOG properly

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance

2014-06-03 Thread Evgeny Kozhemyakin
Tizy Ninan wrote:
> When uploading images with large filesize (more than 1 GB) from dashboard,
> after upload is done the dashboard is showing "504 Gateway Timeout". What

Anyway we've filed a bug for fuel, thank you for the notice.
https://bugs.launchpad.net/fuel/+bug/1326082

-- 
Regards,
Evgeny Kozhemyakin (EVK-RIPE)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] How to mock the LOG inside cinder driver

2014-06-03 Thread Mike Perez
On 21:46 Tue 03 Jun , Deepak Shetty wrote:
>  Hi, whats the right way to mock the LOG variable inside the
> driver ? I am mocking mock.patch.object(glusterfs, 'LOG') as mock_logger

Please provide a paste[1] of the patch.

[1] - http://paste.openstack.org

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] summit wrap-up: Future of EDP

2014-06-03 Thread Sergey Lukjanov
Cool, thank you for wrapping up design summit session!

On Tue, Jun 3, 2014 at 7:24 PM, Trevor McKay  wrote:
> Hi folks,
>
>   Here is a summary of priorities from Summit and some action items for
> the high priority issues.  The link to the pad is here:
>
> https://etherpad.openstack.org/p/juno-summit-sahara-edp
>
>   We really did not have any leftover questions from summit, but we need
> investigation and development work in several areas.  Please respond with any
> comments/questions, and feel free to work on action items :)
>
> High priority
> Fix hive support
> minimal EDP for spark via spark plugin (may be possible w Oozie)
> Design pluggable job model and investigate Spark / Storm integration
>
> Medium priority
> Error reporting improvements
>
> Low priority
> Raw Oozie workflows
> coordinated jobs
> preparation tags for workflows
> files and archives tags (need clear use cases)
> streamline copying of job binaries from swift to hdfs (dscp)
>
> Action items for high priority issues:
> Hive:
> We need to flesh out existing blueprints:
> 
> https://blueprints.launchpad.net/sahara/+spec/hive-vanilla2-support
> 
> https://blueprints.launchpad.net/sahara/+spec/hive-integration-tests
> Additional blueprint needed for swift support in Hive
> Hive is not fully implemented in HDP plugin
>
> Investigate Spark job execution via Oozie
> Underway.  Produce a blueprint after initial investigation.
>
> Design a pluggable job model
> We need to review current EDP and think about how operations can be
> abstracted.  For instance, what are the essential operations on a job,
> and where has the Oozie implementation leaked knowledge into the EDP 
> code?
>
> Can we develop an abstraction that maps to other models in a 
> believable way?
>storm, spark, scalding, others
>
> How will the UI deal with a pluggable job model assuming we have one?
>
>
>
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Review dashboard update

2014-06-03 Thread Dmitry Tantsur
Hi everyone!

It's hard to stop polishing things, and today I got an updated review
dashboard. Its sources are merged into Sean Dague's repository [1], so I
expect this to be the final version. Thank you everyone for the numerous
comments and suggestions, especially Ruby Loo.

Here is nice link to it: http://perm.ly/ironic-review-dashboard

Major changes since previous edition:
- "My Patches Requiring Attention" section - all your patches that are
either WIP or have any -1.
- "Needs Reverify" - approved changes that failed Jenkins verification
- Added last section with changes that either WIP or got -1 from Jenkins
(all other sections do not include these).
- Specs section show also WIP specs

I know someone requested a dashboard with the IPA subproject highlighted - I
can do such things on a case-by-case basis - ping me on IRC.

Hope this will be helpful :)

Dmitry.

[1] https://github.com/sdague/gerrit-dash-creator


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] Use of AngularJS

2014-06-03 Thread Musso, Veronica A
Great, thanks Matthias!

Then, if django-angular is approved for Fedora, do we need to wait for Ubuntu 
packages? Or can it be used?

Thanks!
Veronica





Date: Tue, 3 Jun 2014 08:32:43 +0200
From: Matthias Runge 
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] [Horizon] Use of AngularJS
Message-ID: <20140603063243.gb28...@turing.berg.ol>
Content-Type: text/plain; charset=us-ascii

On Tue, Jun 03, 2014 at 07:49:04AM +0200, Radomir Dopieralski wrote:
> On 06/02/2014 05:13 PM, Adam Nelson wrote:
> > I think that you would use the PyPI version anyway:
> > 
> > https://pypi.python.org/pypi/django-angular/0.7.2
> > 
> > That's how most of the other Python dependencies work, even in the
> > distribution packages.
> 
> That is not true. As all components of OpenStack, Horizon has to be
> packaged at the end of the cycle, with all of its dependencies.
> 

I already packaged python-django-angular for Fedora (and EPEL), it's
just waiting for review [1]. 

From a distro standpoint, every dependency needs to be packaged, and
this is not limited to Horizon dependencies.
On the other side, we don't break each time, when someone releases a new
setuptools or keystoneclient to pypi.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1099473
-- 
Matthias Runge 


*

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Name proposals

2014-06-03 Thread Radomir Dopieralski
We decided that we need to pick the name for the splitting of Horizon
properly. From now up to the next meeting on June 10 we will be
collecting name proposals at:

https://etherpad.openstack.org/p/horizon-name-proposals

After that, until next meeting on June 17, we will be voting for
the proposed names. In case the most popular name is impossible to use
(due to trademark issues), we will use the next most popular. In case of
a tie, we will pick randomly.
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-03 Thread Chris Friesen

On 06/03/2014 07:29 AM, Jay Pipes wrote:

Hi Stackers,

tl;dr
=

Move CPU and RAM allocation ratio definition out of the Nova scheduler
and into the resource tracker. Remove the calculations for overcommit
out of the core_filter and ram_filter scheduler pieces.


Makes sense to me.

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] How to mock the LOG inside cinder driver

2014-06-03 Thread Deepak Shetty
 Hi, what's the right way to mock the LOG variable inside the
driver? I am mocking mock.patch.object(glusterfs, 'LOG') as mock_logger
 and then doing...
 mock_logger.warning.assert_called_once() - which passes and is
expected to pass per my code
 but
 mock_logger.debug.assert_called_once() - should fail, but this
also passes!
 any idea why?

I feel that I am not mocking the LOG inside the driver correctly.

I also tried
   mock.patch.object(glusterfs.LOG, 'warning'),
mock.patch.object(glusterfs.LOG, 'debug')
as mock_logger_warn and mock_logger_debug respectively

But here too,
.debug and .warning both pass, while the expected result is for .warning
to pass and .debug to fail.

So somehow I am unable to mock LOG properly

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-03 Thread Scott Devoid
>
> It may be useful to have an API query which tells you all the numbers you
> may need - real hardware values, values after using the configured
> overcommit ratios and currently used values.
>

+1 to an exposed admin-API for host resource state and calculations,
especially if this allowed you to dynamically change the ratios.


On Tue, Jun 3, 2014 at 10:20 AM, Jesse Pretorius 
wrote:

> On 3 June 2014 15:29, Jay Pipes  wrote:
>
>> Move CPU and RAM allocation ratio definition out of the Nova scheduler
>> and into the resource tracker. Remove the calculations for overcommit out
>> of the core_filter and ram_filter scheduler pieces.
>
>
> Makes sense to me.
>
> I especially like the idea of being able to have different allocation
> ratios for host aggregates.
>
> It may be useful to have an API query which tells you all the numbers you
> may need - real hardware values, values after using the configured
> overcommit ratios and currently used values.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] test configuration for ml2/ovs L2 and L3 agents

2014-06-03 Thread Carlino, Chuck
Hi all,

I'm struggling a bit to get a test set up working for L2/L3 work (ml2/ovs).  
I've been trying multi-host devstack (just controller node for now), and I must 
be missing something important because n-sch bombs out.  Single node devstack 
works fine, but it's not very useful for L2/L3.

Any suggestions, or maybe someone has some local.conf files they'd care to 
share?

Many thanks,
Chuck

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] summit wrap-up: Future of EDP

2014-06-03 Thread Trevor McKay
Hi folks,

  Here is a summary of priorities from Summit and some action items for
the high priority issues.  The link to the pad is here:

https://etherpad.openstack.org/p/juno-summit-sahara-edp

  We really did not have any leftover questions from summit, but we need
investigation and development work in several areas.  Please respond with any 
comments/questions, and feel free to work on action items :)

High priority
Fix hive support
minimal EDP for spark via spark plugin (may be possible w Oozie)
Design pluggable job model and investigate Spark / Storm integration

Medium priority
Error reporting improvements

Low priority
Raw Oozie workflows
coordinated jobs
preparation tags for workflows
files and archives tags (need clear use cases)
streamline copying of job binaries from swift to hdfs (dscp)

Action items for high priority issues:
Hive:
We need to flesh out existing blueprints:
https://blueprints.launchpad.net/sahara/+spec/hive-vanilla2-support
https://blueprints.launchpad.net/sahara/+spec/hive-integration-tests
Additional blueprint needed for swift support in Hive
Hive is not fully implemented in HDP plugin

Investigate Spark job execution via Oozie
Underway.  Produce a blueprint after initial investigation.

Design a pluggable job model
We need to review current EDP and think about how operations can be
abstracted.  For instance, what are the essential operations on a job,
and where has the Oozie implementation leaked knowledge into the EDP 
code?

Can we develop an abstraction that maps to other models in a believable 
way?
   storm, spark, scalding, others

How will the UI deal with a pluggable job model assuming we have one?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-03 Thread Jesse Pretorius
On 3 June 2014 15:29, Jay Pipes  wrote:

> Move CPU and RAM allocation ratio definition out of the Nova scheduler and
> into the resource tracker. Remove the calculations for overcommit out of
> the core_filter and ram_filter scheduler pieces.


Makes sense to me.

I especially like the idea of being able to have different allocation
ratios for host aggregates.

It may be useful to have an API query which tells you all the numbers you
may need - real hardware values, values after using the configured
overcommit ratios and currently used values.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting

2014-06-03 Thread Peter Pouliot
Hi All,

Multiple individuals are travelling this week. This week's meeting will need
to be cancelled as a result.

We will resume with the usual schedule next week.

p

Peter J. Pouliot CISSP
Sr. SDET OpenStack
Microsoft
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] scheduler sub-group meeting agenda 6/3

2014-06-03 Thread Khanh-Toan Tran
The slides of the Atlanta presentation are here:

https://drive.google.com/file/d/0B598PxJUvPrwMXpWYUtWOGRTckE

They contain our vision of scheduling, which lays the groundwork for the
integration with Tetris and Congress.

> -Message d'origine-
> De : Khanh-Toan Tran [mailto:khanh-toan.t...@cloudwatt.com]
> Envoyé : mardi 3 juin 2014 16:05
> À : OpenStack Development Mailing List (not for usage questions)
> Objet : Re: [openstack-dev] [gantt] scheduler sub-group meeting agenda
6/3
>
> Dear all,
>
> If we have time, I would like to take your attention to my new patch:
> Policy-based Scheduling engine
>
> https://review.openstack.org/#/c/97503/
>
> This patch implements Policy-Based Scheduler blueprint:
>
> https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler
>
> I presented its prototype at Atlanta summit:
>
> http://openstacksummitmay2014atlanta.sched.org/event/b4313b37de4645079
> e3d5
> 506b1d725df#.U43VqPl_tm4
>
> It's a pity that the video of the demo is not yet available on OpenStack
channel.
> We've contacted the foundation on this topic.
>
> Best regards,
>
> Toan
>
> > -Message d'origine-
> > De : Dugger, Donald D [mailto:donald.d.dug...@intel.com]
> > Envoyé : mardi 3 juin 2014 04:38
> > À : OpenStack Development Mailing List (not for usage questions) Objet
> > : [openstack-dev] [gantt] scheduler sub-group meeting agenda 6/3
> >
> > 1) Forklift (tasks & status)
> > 2) No-db scheduler discussion (BP ref -
> https://review.openstack.org/#/c/92128/
> > )
> > 3) Opens
> >
> > --
> > Don Dugger
> > "Censeo Toto nos in Kansa esse decisse." - D. Gale
> > Ph: 303/443-3786
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Marconi] Adopt Spec

2014-06-03 Thread Kurt Griffiths
I think it becomes more useful the larger your team. With a smaller team it is 
easier to keep everyone on the same page just through the mailing list and IRC. 
As for where to document design decisions, the trick there is more one of being 
diligent about capturing and recording the why of every decision made in 
discussions and such; gerrit review history can help with that, but it isn’t 
free.

If we’d like to give the specs process a try, I think we could do an experiment 
in j-2 with a single bp. Depending on how that goes, we may do more in the K 
cycle. What does everyone think?

From: Malini Kamalambal <malini.kamalam...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 at 2:45 PM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

+1 – Requiring specs for every blueprint is going to make the development 
process very cumbersome, and will take us back to waterfall days.
I like how the Marconi team operates now, with design decisions being made in 
IRC/team meetings.
So Spec might become more of an overhead than add value, given how our team 
functions.

'If' we agree to use Specs, we should use them only for the blueprints that
make sense.
For example, the unit test decoupling that we are working on now – this one 
will be a good candidate to use specs, since there is a lot of back and forth 
going on how to do this.
On the other hand something like Tempest Integration for Marconi will not 
warrant a spec, since it is pretty straightforward what needs to be done.
In the past we have had discussions around where to document certain design 
decisions (e.g. Which endpoint/verb is the best fit for pop operation?)
Maybe spec is the place for these?

We should leave it to the implementor to decide if the bp warrants a spec or
not, and what should be in the spec.


From: Kurt Griffiths <kurt.griffi...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 1:33 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Marconi] Adopt Spec

I’ve been in roles where enormous amounts of time were spent on writing specs, 
and in roles where specs where non-existent. Like most things, I’ve become 
convinced that success lies in moderation between the two extremes.

I think it would make sense for big specs, but I want to be careful we use it 
judiciously so that we don’t simply apply more process for the sake of more 
process. It is tempting to spend too much time recording every little detail in 
a spec, when that time could be better spent in regular communication between 
team members and with customers, and on iterating the code (short iterations 
between demo/testing, so you ensure you are on staying on track and can address 
design problems early, often).

IMO, specs are best used more as summaries, containing useful big-picture 
ideas, diagrams, and specific “memory pegs” to help us remember what was 
discussed and decided, and calling out specific “promises” for future 
conversations where certain design points are TBD.

From: Malini Kamalambal <malini.kamalam...@rackspace.com>
Reply-To: OpenStack Dev <openstack-dev@lists.openstack.org>
Date: Monday, June 2, 2014 at 9:51 AM
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [Marconi] Adopt Spec

Hello all,

We are seeing more & more design questions in #openstack-marconi.
It will be a good idea to formalize our design process a bit more & start using 
spec.
We are kind of late to the party, so we already have a lot of precedent ahead 
of us.

Thoughts?

Malini

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Introducing task oriented workflows

2014-06-03 Thread Andrew Laski


On 05/22/2014 08:16 PM, Nachi Ueno wrote:

Hi Salvatore

Thank you for your posting this.

IMO, this topic shouldn't be limited to Neutron only.
Users want a consistent API across OpenStack projects, right?

In Nova, a server has a task_state, so Neutron should do it the same way.


We're moving away from the simple task_state field in Nova towards a 
more comprehensive task model.  See 
https://review.openstack.org/#/c/86938/ for the nova-spec around this.

2014-05-22 15:34 GMT-07:00 Salvatore Orlando :

As most of you probably know already, this is one of the topics discussed
during the Juno summit [1].
I would like to kick off the discussion in order to move towards a concrete
design.

Preamble: Considering the meat that's already on the plate for Juno, I'm not
advocating that whatever comes out of this discussion should be put on the
Juno roadmap. However, preparation (or yak shaving) activities that should
be identified as pre-requisite might happen during the Juno time frame
assuming that they won't interfere with other critical or high priority
activities.
This is also a very long post; the TL;DR summary is that I would like to
explore task-oriented communication with the backend and how it should be
reflected in the API - gauging how the community feels about this, and
collecting feedback regarding design, constructs, and related
tools/techniques/technologies.

At the summit a broad range of items were discussed during the session, and
most of them have been reported in the etherpad [1].

First, I think it would be good to clarify whether we're advocating a
task-based API, a workflow-oriented operation processing, or both.

--> About a task-based API

In a task-based API, most PUT/POST API operations would return tasks rather
than neutron resources, and users of the API will interact directly with
tasks.
I put an example in [2] to avoid cluttering this post with too much text.
As the API operation simply launches a task - the database state won't be
updated until the task is completed.

Needless to say, this would be a radical change to Neutron's API; it should
be carefully evaluated and not considered for the v2 API.
Even if it is easily recognisable that this approach has a few benefits, I
don't think this will improve usability of the API at all. Indeed this will
limit the ability of operating on a resource while a task is in execution on
it, and will also require neutron API users to change the paradigm they use
to interact with the API; not to mention the fact that it would look
weird if neutron is the only API endpoint in Openstack operating in this
way.
For the Neutron API, I think that its operations should still be
manipulating the database state, and possibly return immediately after that
(*) - a task, or to better say a workflow will then be started, executed
asynchronously, and update the resource status on completion.

--> On workflow-oriented operations

The benefits of it when it comes to easily controlling operations and
ensuring consistency in case of failures are obvious. For what is worth, I
have been experimenting introducing this kind of capability in the NSX
plugin in the past few months. I've been using celery as a task queue, and
writing the task management code from scratch - only to realize that the
same features I was implementing are already supported by taskflow.

I think that all parts of Neutron API can greatly benefit from introducing a
flow-based approach.
Some examples:
- pre/post commit operations in the ML2 plugin can be orchestrated a lot
better as a workflow, articulating operations on the various drivers in a
graph
- operation spanning multiple plugins (eg: add router interface) could be
simplified using clearly defined tasks for the L2 and L3 parts
- it would be finally possible to properly manage resources' "operational
status", as well as knowing whether the actual configuration of the backend
matches the database configuration
- synchronous plugins might be converted into asynchronous thus improving
their API throughput

Now, the caveats:
- during the sessions it was correctly pointed out that special care is
required with multiple producers (ie: api servers) as workflows should be
always executed in the correct order
- it is probably be advisable to serialize workflows operating on the same
resource; this might lead to unexpected situations (potentially to
deadlocks) with workflows operating on multiple resources
- if the API is asynchronous, and multiple workflows might be queued or in
execution at a given time, rolling back the DB operation on failures is
probably not advisable (it would not be advisable anyway in any asynchronous
framework). If the API instead stays synchronous the revert action for a
failed task might also restore the db state for a resource; but I think that
keeping the API synchronous misses the point of this whole work a bit - feel
free to show your disagreement here!
- some neutron workflows are actually initiated by agents; this i

[openstack-dev] [NFV] Sub-team Meeting Reminder - Wednesday June 4 @ 1400 utc

2014-06-03 Thread Steve Gordon
Hi all,

Just a reminder that the first post-summit meeting of the sub-team is scheduled 
for Wednesday June 4 @ 1400 UTC in #openstack-meeting. 

Agenda:

First meeting!
Meet and greet
Review Mission
Review our current blueprint list and fill in anything we're not tracking 
yet
Review use case prioritization
Discuss tracking approaches:
Use cases
Blueprints
Bugs

The agenda is also available at https://wiki.openstack.org/wiki/Meetings/NFV 
for editing.

Thanks!

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] RE: Compute capabilities filter

2014-06-03 Thread Maldonado, Facundo N
Hi all,

I have a patch for this bug ready for review: 
https://review.openstack.org/#/c/89844/
I also want to know your thought about Santiago's question.
Some time ago I found this bug (https://review.openstack.org/#/c/62088/) where 
the decision of deprecate the use
of instance_type_extra_specs configuration option was made 
(https://review.openstack.org/#/c/62088/).
As far as I understand, it only deprecate the ability to setup additional 
capabilities to each compute node thru the config file, not the filter. Am I 
right?
Thanks,
Facundo

From: Baldassin, Santiago B [mailto:santiago.b.baldas...@intel.com]
Sent: Monday, April 14, 2014 11:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Compute capabilities filter [openstack] [nova]

Hey folks,

I have a question regarding the ComputeCapabilitiesFilter. Such filter is not 
currently working https://bugs.launchpad.net/nova/+bug/1279719 and I'd like to 
know if, the filter is not working due to the mentioned bug but it should work 
or if there's any plan to deprecate that filter or to replace it with the 
AggregateInstanceExtraSpecsFilter

Thanks

Santiago B. Baldassin
ASDC Argentina
Software Development Center
Email: santiago.b.baldas...@intel.com
P Save a tree. Print only when necessary.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gantt] scheduler sub-group meeting agenda 6/3

2014-06-03 Thread Khanh-Toan Tran
Dear all,

If we have time, I would like to take your attention to my new patch:
Policy-based Scheduling engine

https://review.openstack.org/#/c/97503/

This patch implements Policy-Based Scheduler blueprint:

https://blueprints.launchpad.net/nova/+spec/policy-based-scheduler

I presented its prototype at Atlanta summit:

http://openstacksummitmay2014atlanta.sched.org/event/b4313b37de4645079e3d5
506b1d725df#.U43VqPl_tm4

It's a pity that the video of the demo is not yet available on OpenStack
channel. We've contacted the foundation on this topic.

Best regards,

Toan

> -Message d'origine-
> De : Dugger, Donald D [mailto:donald.d.dug...@intel.com]
> Envoyé : mardi 3 juin 2014 04:38
> À : OpenStack Development Mailing List (not for usage questions)
> Objet : [openstack-dev] [gantt] scheduler sub-group meeting agenda 6/3
>
> 1) Forklift (tasks & status)
> 2) No-db scheduler discussion (BP ref -
https://review.openstack.org/#/c/92128/
> )
> 3) Opens
>
> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-03 Thread Carl Baldwin
How does ovs handle tcp flows?  Does it include stateful tracking of tcp --
as your wording below implies -- or does it do stateless inspection of
returning tcp packets?  It appears it is the latter.  This isn't the same
as providing a stateful ESTABLISHED feature.  Many users may not fully
understand the differences.

One of the most basic use cases, pinging an outside IP address from inside
a nova instance, would not work without connection tracking under the
default security groups, which don't allow ingress except RELATED and
ESTABLISHED traffic.  This may surprise many.

Carl
 Hi all,

 In the Neutron weekly meeting today[0], we discussed the
ovs-firewall-driver blueprint[1]. Moving forward, OVS features today will
give us "80%" of the iptables security groups behavior. Specifically, OVS
lacks connection tracking so it won’t have a RELATED feature or stateful
rules for non-TCP flows. (OVS connection tracking is currently under
development, to be released by 2015[2]). To make the “20%" difference more
explicit to the operator and end user, we have proposed feature
configuration to provide security group rules API validation that would
validate based on connection tracking ability, for example.

 Several ideas floated up during the chat today, I wanted to expand the
discussion to the mailing list for further debate. Some ideas include:
- marking ovs-firewall-driver as experimental in Juno
- What does it mean to be marked as “experimental”?
- performance improvements under a new OVS firewall driver untested so far
(vthapar is working on this)
- incomplete implementation will cause confusion, educational burden
- debugging OVS is new to users compared to debugging old iptables
- waiting for upstream OVS to implement (OpenStack K- or even L- cycle)

 In my humble opinion, merging the blueprint for Juno will provide us a
viable, more performant security groups implementation than what we have
available today.

 Amir


 [0]
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-06-02-21.01.log.html
[1] https://review.openstack.org/#/c/89712/
[2] http://openvswitch.org/pipermail/dev/2014-May/040567.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Glance

2014-06-03 Thread Evgeny Kozhemyakin
Hi,

Tizy Ninan wrote :
> We have an openstack deployment (Havana on CentOS) in HA mode with
> nova-network service deployed using Mirantis Fuel v4.0 .
> When uploading images with large filesize (more than 1 GB) from dashboard,
> after upload is done the dashboard is showing "504 Gateway Timeout". What
> could be the problem? Can anyone please help me on resolving this issue?

Your problem is not a glance or dashboard bug; it's caused by the
timeouts configured for haproxy.
It doesn't have any impact on your deployment; the images still upload
successfully. But you can adjust these parameters in
/etc/haproxy/haproxy.conf on your nodes. There are several of them; I think
you need the server timeout.
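
For example (illustrative timeout values only -- tune them to how long
your largest image uploads take):

    defaults
        timeout client  10m
        timeout server  10m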


ps:
> This is a discussion that definitely belongs on the users list:
sorry for replying here

-- 
Regards,
Evgeny Kozhemyakin (EVK-RIPE)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] Tempest + Rally: first success

2014-06-03 Thread Andrey Kurilin
Hey, Om!
Can you launch Rally in debug mode and share logs?
> rally -vd verify start --set image


On Tue, Jun 3, 2014 at 3:49 PM, om prakash pandey 
wrote:

> Hi Andrey,
>
> Thanks a ton for putting together this blog on using tempest + rally.
>
> I followed all the steps listed and managed to get tempest successfully
> installed.
>
> However, I was not able to proceed beyond and couldn't manage to run
> tempest even once. I am getting the below error:
>
> om@desktop2:~/rally$ rally verify start --set image
> Command failed, please check log for more info
> 2014-06-03 18:14:56.029 8331 CRITICAL rally [-] IndexError: list index out
> of range
>
> What did I mess up while following the blog?
>
> Regards
> Om
>
>
> On Sat, May 31, 2014 at 3:46 AM, Andrey Kurilin 
> wrote:
>
>> Hi stackers,
>>
>> I would like to share with you great news.
>> We all know that it's quite hard to use Tempest out of gates, especially
>> when you are going to benchmark different clouds, run just part of tests
>> and would like to store somewhere results. As all this stuff doesn't belong
>> to Tempest, we decided to make it in Rally.
>>
>> More details about how to use Tempest in one click in my tutorial:
>> http://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
>>
>> --
>> Best regards,
>> Andrey Kurilin.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Andrey Kurilin.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OSSG][OSSN] Cinder wipe fails in an insecure manner on Grizzly

2014-06-03 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Cinder wipe fails in an insecure manner on Grizzly
- ---

### Summary ###
A configuration error can prevent the secure erase of volumes in Cinder
on Grizzly, potentially allowing a user to recover another user’s data.

### Affected Services / Software ###
Cinder, Grizzly

### Discussion ###
In Cinder on Grizzly, a configurable method to perform a secure erase of
volumes was added. In the event of a misconfiguration no secure erase
will be performed.

The default code path in Cinder’s clear_volume() method, which is taken
in the event of a configuration error, results in no wiping of the
volume - even in the event that the user had flagged the volume for
wiping.

This is the same behaviour as if the volume_clear = ‘none’ option was
selected. This could let an attacker recover data from a volume that was
intended to be securely erased. Examples of possible incorrect
configuration options include values that would appear to result in a
secure erase, for example “volume_clear = true” or “volume_clear =
yes”.

In the event of a misconfiguration resulting in this issue, the message
“Error unrecognized volume_clear option” should be present in log
files.

### Recommended Actions ###
- - Create and clear a volume (cinder create --display_name erasetest 10;
cinder delete erasetest)
- - Review log files for the above error message (grep “Error unrecognized
volume_clear option” )
- - Review configuration files to ensure that the valid options ‘zero’ or
‘shred’ are specified.


### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0016
Original LaunchPad Bug : https://bugs.launchpad.net/cinder/+bug/1322766
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTjc5hAAoJEJa+6E7Ri+EVm6EH/i0IseGxSHb0il1ryDUu56K7
GwX0P72pBQ90BGaJdaLR0t/w68o9hZXFmGJxVZk/8nq0cI+FriEXa8QDCuNwWe2X
vgJ4YoqlvD9jy2V5MUV/WaP99QBnCVClj9Gr0h21YzFJe+mvyAFLKY8HMbhrxUgv
dkhtYUodDQnjSNjVO6s5hzsCYDjti78aPnzgiP2Y7bsHrOkVgRy4a1qt281btPWd
ZklXviqvvO2hI1ZSsH5JkjzLTD3THN260TIkIrVThUOm0TK3iC3JOu+f+FoTOXGg
gHXR0DyIoVldqtn1Nmcd4OY/Wx9bav6jPyPPhfcAAsbbipCzUY/WtRe9pm/gJI0=
=W3y3
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-03 Thread Jay Pipes

Hi Stackers,

tl;dr
=

Move CPU and RAM allocation ratio definition out of the Nova scheduler 
and into the resource tracker. Remove the calculations for overcommit 
out of the core_filter and ram_filter scheduler pieces.


Details
===

Currently, in the Nova code base, the thing that controls whether or not 
the scheduler places an instance on a compute host that is already 
"full" (in terms of memory or vCPU usage) is a pair of configuration 
options* called cpu_allocation_ratio and ram_allocation_ratio.


These configuration options are defined in, respectively, 
nova/scheduler/filters/core_filter.py and 
nova/scheduler/filters/ram_filter.py.


Every time an instance is launched, the scheduler loops through a 
collection of host state structures that contain resource consumption 
figures for each compute node. For each compute host, the core_filter 
and ram_filter's host_passes() method is called. In the host_passes() 
method, the host's reported total amount of CPU or RAM is multiplied by 
this configuration option, and the reported used amount of CPU or RAM is 
then subtracted from that product. If the result is greater than or 
equal to the amount of that resource needed by the instance being launched, True 
is returned and the host continues to be considered during scheduling 
decisions.
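
A rough sketch of that calculation (simplified and paraphrased, not the
literal nova filter code; 16.0 is the current default for
cpu_allocation_ratio):

    def host_passes(vcpus_total, vcpus_used, vcpus_requested,
                    cpu_allocation_ratio=16.0):
        # Advertised capacity is inflated by the overcommit ratio,
        # then the already-consumed vCPUs are subtracted.
        limit = vcpus_total * cpu_allocation_ratio
        return limit - vcpus_used >= vcpus_requested

    # e.g. an 8-core host with the default ratio holds up to 128 vCPUs:
    host_passes(8, 120, 8)   # True  - still fits
    host_passes(8, 121, 8)   # False - over the overcommit limit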


I propose we move the definition of the allocation ratios out of the 
scheduler entirely, as well as the calculation of the total amount of 
resources each compute node contains. The resource tracker is the most 
appropriate place to define these configuration options, as the resource 
tracker is what is responsible for keeping track of total and used 
resource amounts for all compute nodes.


Benefits:

 * Allocation ratios determine the amount of resources that a compute 
node advertises. The resource tracker is what determines the amount of 
resources that each compute node has, and how much of a particular type 
of resource have been used on a compute node. It therefore makes sense 
to put calculations and definition of allocation ratios where they 
naturally belong.
 * The scheduler currently needlessly re-calculates total resource 
amounts on every call to the scheduler. This isn't necessary. The total 
resource amounts don't change unless a configuration option is 
changed on a compute node (or host aggregate), and this calculation can 
be done more efficiently once in the resource tracker.

 * Move more logic out of the scheduler
 * With the move to an extensible resource tracker, we can more easily 
evolve to defining all resource-related options in the same place 
(instead of in different filter files in the scheduler...)


Thoughts?

Best,
-jay

* Host aggregates may also have a separate allocation ratio that 
overrides any configuration setting that a particular host may have


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Introducing task oriented workflows

2014-06-03 Thread Sam Harwell
When considering user interfaces, the choice of task and/or status reporting 
methods can have a big impact on the ability to communicate with the user. In 
particular, status properties (e.g. in the manner of compute V2 servers) 
prevent user interfaces from associating the result of an operation with a 
description of an executed action. Even though a REST API is theoretically a 
set of synchronous operations to transform basic data resources, in reality 
users are initiating actions against their account that do not reach their 
final state immediately.

In designing an API, the ability to provide users with relevant information 
about decisions and actions they take is of utmost importance. Since a separate 
task representation (e.g. glance) does support providing users with information 
about the ongoing and final result of specific actions they perform, where a 
status field does not, we will eventually need to use a task representation in 
order to properly support our users.

Also, the specific detail of whether a resource supports more than one 
asynchronous operation concurrently (or supports queueing of task operations) 
is not applicable to this decision. Cloud resources are inherently a 
distributed system, and individual clients are not able to determine which 
status is associated with particular actions. For example, consider the 
following:

1. Client A request operation X be performed
2. Operation X completes successfully
3. Client B requests operation Y be performed
4. Operation Y results in the resource entering an error state
5. Client A checks the status of the resource

In this scenario, Client A is unable to report to the user which operation 
resulted in the resource entering its current error state. If it attempts to 
report the information according to the information available to it, the user 
would be under the impression that Operation X caused the resource to enter the 
error state, with clearly negative impacts on their ability to understand the 
problem(s) encountered and steps they should take to resolve the situation 
during their use of the API.

Please keep in mind that this message is not related to particular 
implementation, storage mechanism, or the manner in which clients communicate 
with the server. I am simply pointing out that the needs of end users can only 
be properly met by ensuring that particular information is available through 
the API their applications are using. This is (or should be) the primary driver 
for design decisions made during the creation of each API.

Thank you,
Sam Harwell

-Original Message-
From: Robert Collins [mailto:robe...@robertcollins.net] 
Sent: Tuesday, June 03, 2014 4:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Introducing task oriented workflows

On 23 May 2014 10:34, Salvatore Orlando  wrote:
> As most of you probably know already, this is one of the topics discussed
> during the Juno summit [1].
> I would like to kick off the discussion in order to move towards a concrete
> design.
>
> Preamble: Considering the meat that's already on the plate for Juno, I'm not
> advocating that whatever comes out of this discussion should be put on the
> Juno roadmap. However, preparation (or yak shaving) activities that should
> be identified as pre-requisite might happen during the Juno time frame
> assuming that they won't interfere with other critical or high priority
> activities.
> This is also a very long post; the TL;DR summary is that I would like to
> explore task-oriented communication with the backend and how it should be
> reflected in the API - gauging how the community feels about this, and
> collecting feedback regarding design, constructs, and related
> tools/techniques/technologies.

Hi, thanks for writing this up.

A few thoughts:

 - if there can be only one task on a resource at a time, you're
essentially forcing all other clients to poll for task completion
before coming back to do *their* change. It's kind of a pathological
edge case of no in-flight-conflicts :).
 - Please please please don't embed polling into the design - use
webhooks or something similar so that each client (be that Nova,
Ironic, Horizon or what-have-you) can get a push response when the
thing they want to happen has happened.
 - I'd think very very carefully about whether you're actually
modelling /tasks/ or whether tasks are the implementation and really
the core issue is modelling the desired vs obtained resource state
 - Ironic has a debate going on right now about very much the same
problem - the latency involved in some API tasks, and whether the API
should return when complete, or when the work is guaranteed to start,
or even immediately and maybe the work isn't guaranteed to start.

My feeling is that we need to balance ease and correctness of
implementation, ease (and efficiency/correctness) of use, and
robustness - an entirely non-blocking API might end up being the polling
nightmare of nightmares if not done carefully, for instance.

Re: [openstack-dev] nova-compute rpc version

2014-06-03 Thread abhishek jain
Hi Russell

Thanks
I was able to solve it by switching to the Havana release on both the
controller node and the compute node.

On Tue, Jun 3, 2014 at 10:56 AM, abhishek jain 
wrote:

> Hi Russell
>
> Below are the details...
>
> controller node...
>
> nova --version
> 2.17.0.122
>
> nova-compute  --version
> 2014.2
>
> compute node.
>
> nova --version
> 2.17.0.122
>
> nova-compute --version
> 2013.2.4
>
> Can you help me with what I need to change in order to achieve the desired
> functionality.
>
>
>
> Thanks
>
>
> On Tue, Jun 3, 2014 at 2:16 AM, Russell Bryant  wrote:
>
>> On 06/02/2014 08:20 AM, abhishek jain wrote:
>> > Hi
>> >
>> > I'm getting the following error in nova-compute logs when trying to boot a VM
>> from controller node onto compute node ...
>> >
>> >  Specified RPC version, 3.23, not supported
>> >
>> > Please help regarding this.
>>
>> It sounds like you're using an older nova-compute with newer controller
>> services (without the configuration to allow a live upgrade).  Check the
>> versions of Nova services you have running.
>>
>> --
>> Russell Bryant
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] request for review

2014-06-03 Thread Kyle Mestery
On Tue, Jun 3, 2014 at 12:49 AM, YAMAMOTO Takashi
 wrote:
> can anyone please review this small fix for ofagent?
> https://review.openstack.org/#/c/88224/
> it's unfortunate that a simple fix like this takes months to be merged.
>
Done, looks like Nachi beat me to it but it's in the check queue
again.

> YAMAMOTO Takashi
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Port leak when instance fails to boot normally

2014-06-03 Thread Matthew Gilliard
Bug report: https://bugs.launchpad.net/nova/+bug/1324934

TL;DR version of the bug report:  A dropped connection during nova->neutron
call to create a port will result in nova deciding that the instance can't
be booted, and therefore terminating it.  Nova calls get_ports() from
neutron during this termination, but an empty list is returned so no ports
are deleted.  However, the port actually is created, and associated to the
now-deleted instance.

I suspect there is a race condition between create_port and get_ports in
neutron, which could be fixed by using
@lockutils.synchronized(instance_uuid) on those two methods.
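
A rough sketch of what I mean (hypothetical lock names, not actual
neutron code; since the decorator form needs a static lock name, a
context manager keyed on the instance uuid is probably closer to what is
needed):

    from neutron.openstack.common import lockutils

    def create_port(context, port):
        instance_uuid = port['port']['device_id']
        with lockutils.lock('port-ops-%s' % instance_uuid):
            pass  # ... existing create_port body ...

    def get_ports(context, filters=None):
        instance_uuid = (filters or {}).get('device_id', [None])[0]
        with lockutils.lock('port-ops-%s' % instance_uuid):
            pass  # ... existing get_ports body ...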

I'm not very familiar with the neutron codebase though, and would
appreciate some feedback or discussion on how likely people think my
diagnosis and suggested fix are, and whether similar problems have been
found in the past.

  Thanks,

Matthew Gilliard
(HP Cloud)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [compute] Server parameters for Create Server

2014-06-03 Thread Sam Harwell
I'm having trouble determining which parameters may be included in the Create 
Server request. In particular, I'm interested in the JSON properties which are 
supported by a base installation of Compute V2.

The documentation on the following page is not clear:
http://docs.openstack.org/api/openstack-compute/2/content/POST_createServer__v2__tenant_id__servers_CreateServers.html

The examples in that documentation include properties like max_count and 
min_count that I could not find a description for, and security_groups which 
appears to be a property added by an extension as opposed to being part of the 
base implementation of Create Server.

The separation of every JSON property according to the location where it is 
defined (base OpenStack installation, OpenStack-defined extension, or 
vendor-specific extension) is a key design aspect of the SDK I am working on. 
How can I determine this information according to the documentation?

Thank you,
Sam Harwell
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Configurable DB Plugins - BP flushed out

2014-06-03 Thread boden

Guys,
In the BP meeting yesterday we briefly discussed the 'Configurable DB 
Plugins' BP: 
https://blueprints.launchpad.net/trove/+spec/configurable-db-plugins


It was clear I needed to hash this one out in greater detail to 
optimally discuss the feature with the group - I've done that and you 
can find the details here: 
https://wiki.openstack.org/wiki/Trove/ConfigurableDBPlugins


Given I will be traveling next week, I won't be at the BP meeting. 
Therefore I look forward to any feedback / comments / etc via the BP 
wiki or email. Otherwise this one will have to wait until the BP meeting 
on the 16th.


Thanks


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Barbican] KMIP support

2014-06-03 Thread Nathan Reller
> I was wondering about the progress of KMIP support in Barbican?

As John pointed out, JHU/APL is working on adding KMIP support to Barbican.
We submitted the first CR to add a Secret Store interface into Barbican.
The next step is to add a KMIP implementation of the Secret Store.

> Is this waiting on an open python KMIP support?

We are working in parallel to add KMIP support to Barbican and to release
an open source version of a Python KMIP library. We would like to have both
out by Juno.

> Also, is the “OpenStack KMIP Client” ever going to be a thing?
> (https://wiki.openstack.org/wiki/KMIPclient)

That work was not proposed by us, so I can't comment on the status of that.
Right now our path forward is to support Barbican by adding a KMIP Secret
Store.

-Nate

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rally] Tempest + Rally: first success

2014-06-03 Thread om prakash pandey
Hi Andrey,

Thanks a ton for putting together this blog on using tempest + rally.

I followed all the steps listed and managed to get tempest successfully
installed.

However, I was not able to proceed beyond and couldn't manage to run
tempest even once. I am getting the below error:

om@desktop2:~/rally$ rally verify start --set image
Command failed, please check log for more info
2014-06-03 18:14:56.029 8331 CRITICAL rally [-] IndexError: list index out
of range

What did I mess up while following the blog?

Regards
Om


On Sat, May 31, 2014 at 3:46 AM, Andrey Kurilin 
wrote:

> Hi stackers,
>
> I would like to share with you great news.
> We all know that it's quite hard to use Tempest out of gates, especially
> when you are going to benchmark different clouds, run just part of tests
> and would like to store somewhere results. As all this stuff doesn't belong
> to Tempest, we decided to make it in Rally.
>
> More details about how to use Tempest in one click in my tutorial:
> http://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/
>
> --
> Best regards,
> Andrey Kurilin.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-06-03 Thread Jaromir Coufal


Yeah, Robert is right. I think everybody who is approved by their 
employer (or expects to be approved) should enter their constraints. 
So, realistically, enter the dates when you would be able to attend.


Personal view: after Aug 11th we will have only about 3 weeks until feature 
freeze, which is very late I would say. I would rather target the meetup 
for July.


-- Jarda



On 2014/02/06 22:52, Robert Collins wrote:

I think you should add the constraints you have.

Realistically though, not everyone will be there, and that's fine. There
are some folk we'll need there (e.g. I suspect I'm one of those, but
maybe not!)

My constraints are:
- need to be in Sydney for the 1st-5th, remembering there is an
international date line between NC and Sydney.
- need to be home for most of a week between the mid cycle meetup and
PyCon AU (family).

So I can do anytime from Aug 11th on, or anytime ending before or on
July the 25th - and I'm going to put that in the etherpad now :0

-Rob



On 3 June 2014 04:51, Ben Nemec  wrote:

On 05/30/2014 06:58 AM, Jaromir Coufal wrote:

On 2014/30/05 10:00, Thomas Spatzier wrote:

Excerpt from Zane Bitter's message on 29/05/2014 20:57:10:


From: Zane Bitter 
To: openstack-dev@lists.openstack.org
Date: 29/05/2014 20:59
Subject: Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle
collaborative meetup



BTW one timing option I haven't seen mentioned is to follow Pycon-AU's
model of running e.g. Friday-Tuesday (July 25-29). I know nobody wants
to be stuck in Raleigh, NC on a weekend (I've lived there, I understand
;), but for folks who have a long ways to travel it's one weekend lost
instead of two.


+1 - excellent idea!


It looks like there is interest in these dates, so I added a 3rd option
to the etherpad [0].

Once more, I would like to ask potential attendees to put yourselves
down for the dates which would work for you.

-- Jarda

[0] https://etherpad.openstack.org/p/juno-midcycle-meetup


Just to clarify, I should add my name to the list if I _can_ make it to
a given proposal, even if I don't know for sure that I will be going?

I don't know what the travel situation is yet so I can't commit to being
there on any dates, but I can certainly say which dates would work for
me if I can make it.

-Ben


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance]: V2 api support for download_image policy?

2014-06-03 Thread stuart . mclaren

Hi Abishek,

"download_image" *should* apply to v2 in the same way as v1 I think.

If that's not what you're seeing can you enter a bug (with details) in 
launchpad?

https://bugs.launchpad.net/glance

Typically you'd mark it with 'private security' but 'public security'
makes sense now.

Thanks!

-Stuart


Hi All,

Can anyone let me know whether is download_image policy applicable only for V1 
api and not for V2 api?

Thanks & Regards,

Abhishek

__
Disclaimer:This email and any attachments are sent in strictest confidence for 
the sole use of the addressee and may contain legally privileged, confidential, 
and proprietary data.
If you are not the intended recipient, please advise the sender by replying 
promptly to this email and then delete and destroy this email and any 
attachments without any further use,
copying or forwarding


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-03 Thread CARVER, PAUL

Amir Sadoughi wrote:

>Specifically, OVS lacks connection tracking so it won't have a RELATED feature 
>or stateful rules
>for non-TCP flows. (OVS connection tracking is currently under development, to 
>be released by 2015

It definitely needs a big obvious warning label on this. A stateless firewall 
hasn't been acceptable in serious
security environments for at least a decade. "Real" firewalls do things like 
TCP sequence number validation
to ensure that someone isn't hi-jacking an existing connection and TCP flag 
validation to make sure that someone
isn't "fuzzing" by sending invalid combinations of flags in order to uncover 
bugs in servers behind the firewall.


>- debugging OVS is new to users compared to debugging old iptables

This one is very important in my opinion. There absolutely needs to be a 
section in the documentation
on displaying and interpreting the rules generated by Neutron. I'm pretty sure 
that if you tell anyone
with Linux admin experience that Neutron security groups are iptables based, 
they should be able to
figure their way around iptables -L or iptables -S without much help.

If they haven't touched iptables in a while, five minutes reading "man 
iptables" should be enough
for them to figure out the important options and they can readily see the 
relationship between
what they put in a security group and what shows up in the iptables chain. I 
don't think there's
anywhere near that ease of use on how to list the OvS ruleset for a VM and see 
how it corresponds
to the Neutron security group.


Finally, logging of packets (including both dropped and permitted connections) 
is mandatory in many
environments. Does OvS have the ability to do the necessary logging? Although 
Neutron
security groups don't currently enable logging, the capabilities are present in 
the underlying
iptables and can be enabled with some work. If OvS doesn't support logging of 
connections then
this feature definitely needs to be clearly marked as "not a firewall 
substitute" so that admins
are clearly informed that they still need a "real" firewall for audit 
compliance and may only
consider OvS based Neutron security groups as an additional layer of protection 
behind the
"real" firewall.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Libvirt Sub-Team Meeting: Call for agenda June 3rd, 2014

2014-06-03 Thread Daniel P. Berrange
If anyone has agenda items they'd like to discuss in today's Nova Libvirt
sub-team meeting[1] please add them to the etherpad:

   https://etherpad.openstack.org/p/nova-libvirt-meeting-agenda

Regards,
Daniel

[1] https://wiki.openstack.org/wiki/Meetings/Libvirt
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] How to conditionally modify attributes in CreateNetwork class.

2014-06-03 Thread Timur Sufiev
Hello, Nader!

As for the `contributes` attribute, you could override the `contribute(self,
data, context)` method in your descendant of `workflows.Step`, which by
default simply iterates over all keys in `contributes`.

Alternatively, you could use an even more flexible approach (which also fits for
`default_steps`): define in your `workflows.Step` descendants methods
`contributes(self)` and `default_steps(self)` (with the conditional
logic you need) and then decorate them with @property.
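
A minimal sketch of the @property approach (the condition and the extra
field are hypothetical; this assumes the base classes read these
attributes from step/workflow instances):

    from horizon import workflows

    # CreateSubnetInfoAction, CreateNetworkInfo and CreateSubnetDetail
    # are the classes from the existing workflows.py quoted below.

    def my_feature_enabled():  # hypothetical condition
        return True

    class CreateSubnetInfo(workflows.Step):
        action_class = CreateSubnetInfoAction

        @property
        def contributes(self):
            fields = ("with_subnet", "subnet_name", "cidr",
                      "ip_version", "gateway_ip", "no_gateway")
            if my_feature_enabled():
                fields += ("extra_field",)  # hypothetical extra key
            return fields

    class CreateNetwork(workflows.Workflow):
        slug = "create_network"

        @property
        def default_steps(self):
            steps = (CreateNetworkInfo, CreateSubnetInfo)
            if my_feature_enabled():
                steps += (CreateSubnetDetail,)
            return steps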

On Fri, May 30, 2014 at 10:15 AM, Nader Lahouti  wrote:
> Hi All,
>
> Currently in the
> horizon/openstack_dashboard/dashboards/project/networks/workflows.py in
> classes such as CreateNetwork, CreateNetworkInfo and CreateSubnetInfo, the
> contributes or default_steps as shown below are fixed. Is it possible to add
> entries to those attributes conditionally?
>
> 156class CreateSubnetInfo(workflows.Step):
> 157action_class = CreateSubnetInfoAction
> 158contributes = ("with_subnet", "subnet_name", "cidr",
> 159   "ip_version", "gateway_ip", "no_gateway")
> 160
>
> 262class CreateNetwork(workflows.Workflow):
> 263slug = "create_network"
> 264name = _("Create Network")
> 265finalize_button_name = _("Create")
> 266success_message = _('Created network "%s".')
> 267failure_message = _('Unable to create network "%s".')
> 268default_steps = (CreateNetworkInfo,
> 269 CreateSubnetInfo,
> 270 CreateSubnetDetail)
>
> Thanks for your input.
>
> Nader.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] 2014.1.1 preparation

2014-06-03 Thread Sergey Lukjanov
Okay, it makes sense, I've updated the etherpad -
https://etherpad.openstack.org/p/sahara-2014.1.1

Here is the chain of backports for 2014.1.1 -
https://review.openstack.org/#/q/topic:sahara-2014.1.1,n,z

Reviews appreciated; all changes are cherry-picked and only one conflict
was in https://review.openstack.org/#/c/97458/1/sahara/swift/swift_helper.py,cm
due to the multi-region support addition.

Thanks.

On Tue, Jun 3, 2014 at 1:48 PM, Dmitry Mescheryakov
 wrote:
> I agree with Andrew and actually think that we do need to have
> https://review.openstack.org/#/c/87573 (Fix running EDP job on
> transient cluster) fixed in stable branch.
>
> We also might want to add https://review.openstack.org/#/c/93322/
> (Create trusts for admin user with correct tenant name). This is
> another fix for transient clusters, but it is not even merged into
> master branch yet.
>
> Thanks,
>
> Dmitry
>
> 2014-06-03 13:27 GMT+04:00 Sergey Lukjanov :
>> Here is etherpad to track preparation -
>> https://etherpad.openstack.org/p/sahara-2014.1.1
>>
>> On Tue, Jun 3, 2014 at 10:08 AM, Sergey Lukjanov  
>> wrote:
>>> /me proposing to backport:
>>>
>>> Docs:
>>>
>>> https://review.openstack.org/#/c/87531/ Change IRC channel name to
>>> #openstack-sahara
>>> https://review.openstack.org/#/c/96621/ Added validate_edp method to
>>> Plugin SPI doc
>>> https://review.openstack.org/#/c/89647/ Updated architecture diagram in docs
>>>
>>> EDP:
>>>
>>> https://review.openstack.org/#/c/93564/ 
>>> https://review.openstack.org/#/c/93564/
>>>
>>> On Tue, Jun 3, 2014 at 10:03 AM, Sergey Lukjanov  
>>> wrote:
 Hey folks,

 this Thu, June 5 is the date for 2014.1.1 release. We already have
 some back ported patches to the stable/icehouse branch, so, the
 question is do we need some more patches to back port? Please, propose
 them here.

 2014.1 - stable/icehouse diff:
 https://github.com/openstack/sahara/compare/2014.1...stable/icehouse

 Thanks.

 --
 Sincerely yours,
 Sergey Lukjanov
 Sahara Technical Lead
 (OpenStack Data Processing)
 Principal Software Engineer
 Mirantis Inc.
>>>
>>>
>>>
>>> --
>>> Sincerely yours,
>>> Sergey Lukjanov
>>> Sahara Technical Lead
>>> (OpenStack Data Processing)
>>> Principal Software Engineer
>>> Mirantis Inc.
>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] 2014.1.1 preparation

2014-06-03 Thread Dmitry Mescheryakov
I agree with Andrew and actually think that we do need to have
https://review.openstack.org/#/c/87573 (Fix running EDP job on
transient cluster) fixed in stable branch.

We also might want to add https://review.openstack.org/#/c/93322/
(Create trusts for admin user with correct tenant name). This is
another fix for transient clusters, but it is not even merged into
master branch yet.

Thanks,

Dmitry

2014-06-03 13:27 GMT+04:00 Sergey Lukjanov :
> Here is etherpad to track preparation -
> https://etherpad.openstack.org/p/sahara-2014.1.1
>
> On Tue, Jun 3, 2014 at 10:08 AM, Sergey Lukjanov  
> wrote:
>> /me proposing to backport:
>>
>> Docs:
>>
>> https://review.openstack.org/#/c/87531/ Change IRC channel name to
>> #openstack-sahara
>> https://review.openstack.org/#/c/96621/ Added validate_edp method to
>> Plugin SPI doc
>> https://review.openstack.org/#/c/89647/ Updated architecture diagram in docs
>>
>> EDP:
>>
>> https://review.openstack.org/#/c/93564/ 
>> https://review.openstack.org/#/c/93564/
>>
>> On Tue, Jun 3, 2014 at 10:03 AM, Sergey Lukjanov  
>> wrote:
>>> Hey folks,
>>>
>>> this Thu, June 5 is the date for 2014.1.1 release. We already have
>>> some back ported patches to the stable/icehouse branch, so, the
>>> question is do we need some more patches to back port? Please, propose
>>> them here.
>>>
>>> 2014.1 - stable/icehouse diff:
>>> https://github.com/openstack/sahara/compare/2014.1...stable/icehouse
>>>
>>> Thanks.
>>>
>>> --
>>> Sincerely yours,
>>> Sergey Lukjanov
>>> Sahara Technical Lead
>>> (OpenStack Data Processing)
>>> Principal Software Engineer
>>> Mirantis Inc.
>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Introducing task oriented workflows

2014-06-03 Thread Robert Collins
On 23 May 2014 10:34, Salvatore Orlando  wrote:
> As most of you probably know already, this is one of the topics discussed
> during the Juno summit [1].
> I would like to kick off the discussion in order to move towards a concrete
> design.
>
> Preamble: Considering the meat that's already on the plate for Juno, I'm not
> advocating that whatever comes out of this discussion should be put on the
> Juno roadmap. However, preparation (or yak shaving) activities that should
> be identified as pre-requisite might happen during the Juno time frame
> assuming that they won't interfere with other critical or high priority
> activities.
> This is also a very long post; the TL;DR summary is that I would like to
> explore task-oriented communication with the backend and how it should be
> reflected in the API - gauging how the community feels about this, and
> collecting feedback regarding design, constructs, and related
> tools/techniques/technologies.

Hi, thanks for writing this up.

A few thoughts:

 - if there can be only one task on a resource at a time, you're
essentially forcing all other clients to poll for task completion
before coming back to do *their* change. It's kind of a pathological
edge case of no in-flight-conflicts :).
 - Please please please don't embed polling into the design - use
webhooks or something similar so that each client (be that Nova,
Ironic, Horizon or what-have-you) can get a push response when the
thing they want to happen has happened.
 - I'd think very very carefully about whether you're actually
modelling /tasks/ or whether tasks are the implementation and really
the core issue is modelling the desired vs obtained resource state
 - Ironic has a debate going on right now about very much the same
problem - the latency involved in some API tasks, and whether the API
should return when complete, when the work is guaranteed to start,
or even immediately, when the work isn't even guaranteed to start.

My feeling is that we need to balance ease and correctness of
implementation, ease (and efficiency/correctness) of use, and
robustness - an entirely non-blocking API might end up being the
polling nightmare of nightmares if not done carefully, for instance.
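
To make the polling-vs-push point concrete, here is a minimal client-side
sketch; note the /tasks resource and the callback_url attribute are
hypothetical names for illustration, not an existing Neutron API:

    import json
    import time

    import requests  # any HTTP client would do

    NEUTRON = "http://neutron.example.com:9696/v2.0"  # placeholder endpoint

    def wait_by_polling(task_id, interval=2):
        # Classic polling: every interested client burns requests (and
        # adds up to 'interval' seconds of latency) until the task settles.
        while True:
            task = requests.get("%s/tasks/%s" % (NEUTRON, task_id)).json()["task"]
            if task["state"] in ("COMPLETED", "FAILED"):
                return task
            time.sleep(interval)

    def create_port_with_callback(port_body, callback_url):
        # Push style: the caller registers a URL and the server POSTs the
        # final task state to it - no busy loop on the client side.
        body = {"port": port_body, "callback_url": callback_url}
        resp = requests.post("%s/ports" % NEUTRON,
                             data=json.dumps(body),
                             headers={"Content-Type": "application/json"})
        return resp.json()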

-Rob



Re: [openstack-dev] [sahara] 2014.1.1 preparation

2014-06-03 Thread Sergey Lukjanov
Here is etherpad to track preparation -
https://etherpad.openstack.org/p/sahara-2014.1.1

On Tue, Jun 3, 2014 at 10:08 AM, Sergey Lukjanov  wrote:
> /me proposing to backport:
>
> Docs:
>
> https://review.openstack.org/#/c/87531/ Change IRC channel name to
> #openstack-sahara
> https://review.openstack.org/#/c/96621/ Added validate_edp method to
> Plugin SPI doc
> https://review.openstack.org/#/c/89647/ Updated architecture diagram in docs
>
> EDP:
>
> https://review.openstack.org/#/c/93564/ 
>
> On Tue, Jun 3, 2014 at 10:03 AM, Sergey Lukjanov  
> wrote:
>> Hey folks,
>>
>> this Thu, June 5 is the date for 2014.1.1 release. We already have
>> some back ported patches to the stable/icehouse branch, so, the
>> question is do we need some more patches to back port? Please, propose
>> them here.
>>
>> 2014.1 - stable/icehouse diff:
>> https://github.com/openstack/sahara/compare/2014.1...stable/icehouse
>>
>> Thanks.
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Sahara Technical Lead
>> (OpenStack Data Processing)
>> Principal Software Engineer
>> Mirantis Inc.
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [Neutron] Introducing task oriented workflows

2014-06-03 Thread Hirofumi Ichihara
Hi, Salvatore

> It is totally correct that most Neutron resources have a sloppy status 
> management. Mostly because, as already pointed out, the 'status' for most 
> resource was conceived to be a 'network fabric' status rather than a resource 
> synchronisation status.
Exactly; I reckon that neutron needs a resource synchronization status.

> As it emerged from previous posts in this thread, I reckon we have three 
> choices:
> 1) Add a new attribute for describing "configuration" state. For instance 
> this will have values such as PENDING_UPDATE, PENDING_DELETE, IN_SYNC, 
> OUT_OF_SYNC, etc.
> 2) Merge status and configuration statuses in a single attribute. This will 
> probably be simpler from a client perspective, but there are open 
> questions such as whether a resource for which a task is in progress and is 
> down should be reported as 'down' or 'pending_update'.
> 3) Not use any new flags, and use tasks to describe whether there are 
> operations in progress on a resource.
> The status attribute will describe exclusively the 'fabric' status of a 
> resources; however tasks will be exposed through the API - and a resource in 
> sync will be a resource with no PENDING or FAILED task active on it.
Good suggestions.
I reckon that choice (3) is a discussion about a new API, while choices (1)
and (2) are about the current API. It would not be good for the problem of
the current API to keep lingering into the future, so they should be
discussed individually, and the fabric status should be improved via (1) or
(2). When (3) is achieved, if neutron still has the same fabric status
problem, users may be confused about the difference between resource status
and task status. Additionally, to be exact, a task shows not the resource
status but the API process status.

I reckon we should improve the fabric status first, then add tasks to
neutron. Also, I think (2) is good, because there is the precedent of the
LBaaS model.
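
To make the difference concrete, here is a tiny sketch of option (1) versus
option (2); all names are made up for illustration, not an existing neutron
schema:

    FABRIC_STATES = ("ACTIVE", "DOWN", "ERROR")
    SYNC_STATES = ("IN_SYNC", "PENDING_CREATE", "PENDING_UPDATE",
                   "PENDING_DELETE", "OUT_OF_SYNC")

    class PortView(object):
        """Option (1): two orthogonal attributes on the resource."""

        def __init__(self, fabric_status, sync_status):
            assert fabric_status in FABRIC_STATES
            assert sync_status in SYNC_STATES
            self.fabric_status = fabric_status  # is it operationally up?
            self.sync_status = sync_status      # does the backend match the DB?

        def merged_status(self):
            """Option (2): collapse both into one client-facing value.

            Note the ambiguity: a port that is DOWN while an update is in
            flight reports PENDING_UPDATE here, hiding the fabric state -
            exactly the open question in this thread.
            """
            if self.sync_status != "IN_SYNC":
                return self.sync_status
            return self.fabric_status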

thanks,
Hirofumi

-
Ichihara Hirofumi
NTT Software Innovation Center
Tel: 0422-59-2843  Fax: 0422-59-2699
Email: ichihara.hirof...@lab.ntt.co.jp
-


On 2014/05/30, at 17:57, Salvatore Orlando  wrote:

> Hi Hirofumi,
> 
> I reckon this has been immediately recognised as a long term effort.
> However, I just want to clarify that by "long term" I don't mean pushing it 
> back until we get to the next release cycle and we realize we are in the same 
> place where we are today!
> 
> It is totally correct that most Neutron resources have a sloppy status 
> management. Mostly because, as already pointed out, the 'status' for most 
> resource was conceived to be a 'network fabric' status rather than a resource 
> synchronisation status.
> 
> As it emerged from previous posts in this thread, I reckon we have three 
> choices:
> 1) Add a new attribute for describing "configuration" state. For instance 
> this will have values such as PENDING_UPDATE, PENDING_DELETE, IN_SYNC, 
> OUT_OF_SYNC, etc.
> 2) Merge status and configuration statuses in a single attribute. This will 
> probably be simpler from a client perspective, but there are open 
> questions such as whether a resource for which a task is in progress and is 
> down should be reported as 'down' or 'pending_update'.
> 3) Not use any new flags, and use tasks to describe whether there are 
> operations in progress on a resource.
> The status attribute will describe exclusively the 'fabric' status of a 
> resources; however tasks will be exposed through the API - and a resource in 
> sync will be a resource with no PENDING or FAILED task active on it.
> 
> The above are just options at the moment; I tend to lean toward the latter, 
> but it would be great to have your feedback.
> 
> Salvatore
> 
> 
> 
> On 28 May 2014 11:20, Hirofumi Ichihara  
> wrote:
> Hi, Salvatore
> 
> I think neutron needs task management too.
> 
> IMO, the problem of neutron resource status should be discussed individually.
> Task management enables neutron to roll back an API operation, delete the
> trash of a resource, and retry an API operation within one API process.
> Of course, we can use tasks to correct inconsistencies between the neutron
> DB (resource status) and the actual resource configuration.
> But we should add resource status management to some resources before tasks.
> For example, LBaaS has resource status management[1].
> That neutron routers and ports don't manage status is a basic problem.
> 
>> For instance a port is "UP" if it's been wired by the OVS agent; it often 
>> does not tell us whether the actual resource configuration is exactly the 
>> desired one in the database. For instance, if the ovs agent fails to apply 
>> security groups to a port, the port stays "ACTIVE" and the user might never 
>> know there was an error and the actual state diverged from the desired one.
> So we should solve this problem with resource status management, as LBaaS
> does, rather than with tasks.
> 
> I don't deny tasks, but we need to discuss tasks for the long term, I hop

[openstack-dev] [Glance]: V2 api support for download_image policy?

2014-06-03 Thread Kekane, Abhishek
Hi All,

Can anyone let me know whether the download_image policy is applicable only to
the V1 API and not to the V2 API?
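
For reference, the knob I am talking about is the one set in glance's
policy.json; a minimal sketch (with "role:admin" just as an example rule,
and the empty default meaning unrestricted) would be:

    {
        "default": "",
        "download_image": "role:admin"
    }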

Thanks & Regards,

Abhishek



Re: [openstack-dev] [Horizon] Use of AngularJS

2014-06-03 Thread Jiri Tomasek

On 05/29/2014 05:30 PM, Musso, Veronica A wrote:

Hello,

During the last Summit the use of AngularJS in Horizon was discussed, and
there is an intention to make better use of it in the dashboards.
I think this blueprint could help:
https://blueprints.launchpad.net/horizon/+spec/django-angular-integration,
since it proposes the integration of Django-Angular
(http://django-angular.readthedocs.org/en/latest/index.html).
I would like to know the community's opinion about it, since I could then
start its implementation.

Thanks!

Best Regards,
Verónica Musso

Thanks for bringing this up. We have discussed including this lib
before, and I think using its features would be beneficial. I'll have a
broader look at it.




[openstack-dev] [Cinder] DRBD integration as volume driver

2014-06-03 Thread Philipp Marek
Hi everybody,

at the Juno Design Summit we held a presentation about using DRBD 9 
within OpenStack.

Here's an overview of the situation; I apologize in advance that the 
mail got a bit long, but I think it makes sense to capture all that 
information in a single piece.



 WHAT WE HAVE


Design Summit notes:
https://etherpad.openstack.org/p/juno-cinder-DRBD


As promised, we've got a proof-of-concept implementation for the simplest 
case, using DRBD to access data on all nodes - the "DRBDmanage volume 
driver" as per the Etherpad notes (see the github link below).


As both DRBD 9 and DRBDmanage are still in heavy development, there are 
quite a few rough edges; in case anyone's interested in setting that up 
on some testsystem, I can offer RPMs and DEBs of "drbd-utils" and 
"drbdmanage", and for the DRBD 9 kernel module for a small set of kernel 
versions:

Ubuntu 12.04        3.8.0-34-generic
RHEL6 (& compat)    2.6.32_431.11.2.el6.x86_64

If there's consensus that some specific kernel version should be used 
for testing instead I can try to build packages for that, too.


There's a cinder git clone with our changes at
https://github.com/phmarek/cinder
so that all developments can be discussed easily.
(Should I use some branch in github.com/OpenStack/Cinder instead?)
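
To give a rough idea of the shape without digging through the clone: the
driver fills in the standard Cinder volume driver surface. A heavily trimmed
sketch follows; the drbdmanage calls below are placeholders, not the actual
DBUS API:

    from cinder.volume import driver

    class DrbdManageDriver(driver.VolumeDriver):
        """Trimmed illustration only - the real code lives in the clone above."""

        VERSION = '0.1-poc'

        def do_setup(self, context):
            # connect to the local drbdmanage daemon (DBUS in the PoC)
            self.dm = self._connect_drbdmanage()  # placeholder helper

        def create_volume(self, volume):
            # ask drbdmanage to allocate and replicate the backing device
            self.dm.create_resource(volume['name'], volume['size'])

        def delete_volume(self, volume):
            self.dm.remove_resource(volume['name'])

        def create_snapshot(self, snapshot):
            # open question from below: one node, a subset, or all of them?
            self.dm.snapshot(snapshot['volume_name'], snapshot['name'])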



 FUTURE PLANS


The (/our) plans are:

 * LINBIT will continue DRBD 9 and DRBDmanage development,
   so that these get production-ready ASAP.
   Note: DRBDmanage is heavily influenced by outside
   requirements, eg. OpenStack Cinder Consistency Groups...
   So the sooner we're aware of such needs the better;
   I'd like to avoid changing the DBUS api multiple times ;)

 * LINBIT continues to work on the DRBD Cinder volume driver,
   as this is 

 * LINBIT starts to work to provide DRBD 9 integration
   between the LVM and iSCSI layer.
   That needs the Replication API to be more or less finished.

There are a few dependencies, though ... please see below.


All help - ideas, comments (both for design and code), all feedback, 
and, last but not least, patches or pull requests - are *really* 
welcome, of course.

(For real-time communication I'm available in the #openstack-cinder 
channel too, mostly during European working hours; I'm "flip\d+".)



 WHAT WE NEED


Now, while I filled out the CLA, I haven't read through all the 
documentation regarding Processes & Workflow yet ... and that'll take 
some time, I gather.


Furthermore, on the technical side there's a lot to discuss, too;
eg. regarding snapshots there are quite a few things to decide.

 * Should snapshots be taken on _one_ of the storage nodes,
 * on some subset of them, or
 * on all of them?

I'm not sure whether the same redundancy that's defined for the volume 
is wanted for the snapshots, too.
(I guess one usecase that should be possible is to take at least one 
snapshot of the volume in _each_ data center?)


Please note that having volume groups would be good-to-have (if not 
essential) for a DRBD integration, because only then DRBD could ensure 
data integrity *across* volumes (by using a single resource for all of 
them).
See also 
https://etherpad.openstack.org/p/juno-cinder-cinder-consistency-groups; 
basically, the volume driver just needs to get an 
additional value "associate into this group".



 EULA


Now, there'll be quite a few things I forgot to mention, or that I'm 
simply missing. Please bear with me, I'm fairly new to OpenStack.


So ... ideas, comments, other feedback?


Regards,

Phil



Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-03 Thread Salvatore Orlando
I would like to understand how we got to this 80%/20% distinction.
In other terms, it seems conntrack's RELATED feature won't be supported
for non-TCP traffic. What about the ESTABLISHED feature? The blueprint
spec refers to tcp_flags=ack.
Or will that be supported through the source port matching extension which
is being promoted?
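
For concreteness, this is my understanding of the two mechanisms being
compared - the conntrack rule iptables uses today, and the ACK-bit
approximation (OVS >= 2.1 tcp_flags syntax). Both lines are illustrative,
not the exact generated rules, and <per-port-chain> is a placeholder:

    # iptables today: true connection tracking, covers RELATED as well
    iptables -A <per-port-chain> -m state --state RELATED,ESTABLISHED -j RETURN

    # OVS without conntrack: "established" TCP is inferred from the ACK bit;
    # nothing equivalent exists for RELATED or for non-TCP protocols
    ovs-ofctl add-flow br-int "priority=100,tcp,tcp_flags=+ack,actions=normal"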

More comments inline.

On 3 June 2014 01:22, Amir Sadoughi  wrote:

>  Hi all,
>
>  In the Neutron weekly meeting today[0], we discussed the
> ovs-firewall-driver blueprint[1]. Moving forward, OVS features today will
> give us "80%" of the iptables security groups behavior. Specifically, OVS
> lacks connection tracking so it won’t have a RELATED feature or stateful
> rules for non-TCP flows. (OVS connection tracking is currently under
> development, to be released by 2015[2]). To make the "20%" difference more
> explicit to the operator and end user, we have proposed feature
> configuration to provide security group rules API validation that would
> validate based on connection tracking ability, for example.
>

I am still generally skeptical of API changes which surface backend details
in user-facing APIs. I understand why you are proposing this however, and I
think it would be good to first get an assessment of the benefits brought
by such a change before making a call on changing API behaviour to reflect
the security group implementation on the backend.


>
>  Several ideas floated up during the chat today, I wanted to expand the
> discussion to the mailing list for further debate. Some ideas include:
> - marking ovs-firewall-driver as experimental in Juno
> - What does it mean to be marked as “experimental”?
>

In this case "experimental" would be a way to say "not 100% functional". You
would not expect a public service provider to expose neutron APIs backed by
this driver, but it could be used in private deployments where the missing
features are not a concern.

> - performance improvements under a new OVS firewall driver untested so far
> (vthapar is working on this)
>

From the last comment in your post it seems you already have proof of the
performance improvement; perhaps you can add that to the "Performance
Impact" section of the spec.


> - incomplete implementation will cause confusion, educational burden
>

It's more about technical debt in my opinion, but this is not necessarily
the case.


> - debugging OVS is new to users compared to debugging old iptables
>

This won't be a concern as long as we have good documentation to back the
implementation.
Since Neutron is usually sloppy with documentation, though, it is a concern.


> - waiting for upstream OVS to implement (OpenStack K- or even L- cycle)
>
>  In my humble opinion, merging the blueprint for Juno will provide us a
> viable, more performant security groups implementation than what we have
> available today.
>

>  Amir
>
>
>  [0]
> http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-06-02-21.01.log.html
> [1] https://review.openstack.org/#/c/89712/
> [2] http://openvswitch.org/pipermail/dev/2014-May/040567.html
>


[openstack-dev] [ServiceVM] IRC meeting minutes June 3, 2014 5:00(AM)UTC-)

2014-06-03 Thread Isaku Yamahata
Here is the meeting minutes of the meeting.

ServiceVM/device manager
meeting minutes on June 3, 2014:
  https://wiki.openstack.org/wiki/Meetings/ServiceVM

next meeting:
  June 10, 2014 5:00AM UTC (Tuesday)

agreement:
- include NFV conformance into the servicevm project
  => will continue discussing nomenclature on gerrit (tacker-specs)
- we have to define the relationship between NFV team and servicevm team
- consolidate floating implementations

Action Items:
- everyone: add your name/bio to the contributor list of the incubation page
- yamahata create tacker-specs repo in stackforge for further discussion
  on terminology
- yamahata update draft to include NFV conformance
- s3wong look into vif creation/network connection
- everyone review incubation page

Detailed logs:
  
http://eavesdrop.openstack.org/meetings/servicevm_device_manager/2014/servicevm_device_manager.2014-06-03-05.04.html
  
http://eavesdrop.openstack.org/meetings/servicevm_device_manager/2014/servicevm_device_manager.2014-06-03-05.04.log.html

thanks,
-- 
Isaku Yamahata 
