Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-23 Thread Tim Bell
Angus,

There are two groups which may be relevant regarding ‘consumers’ of Heat


-Application eco system working group at 
https://wiki.openstack.org/wiki/Application_Ecosystem_Working_Group

-API working group at https://wiki.openstack.org/wiki/API_Working_Group

There are some discussions planned as part of the breakouts in the Kilo design 
summit (http://kilodesignsummit.sched.org/)

So, there are frameworks in place and we would welcome volunteers to help 
advance these in a consistent way across the OpenStack programs.

Tim

From: Angus Salkeld [mailto:asalk...@mirantis.com]
Sent: 24 October 2014 08:16
To: Stefano Maffulli
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] How can we get more feedback from users?

On Fri, Oct 24, 2014 at 4:00 PM, Stefano Maffulli 
mailto:stef...@openstack.org>> wrote:
Hi Angus,

quite a noble intent, one that requires lots of attempts like this you
have started.

On 10/23/2014 09:32 PM, Angus Salkeld wrote:
> I have felt some grumblings about usability issues with Heat
> templates/client/etc..
> and wanted a way that users could come and give us feedback easily (low
> barrier). I started an etherpad
> (https://etherpad.openstack.org/p/heat-useablity-improvements) - the
> first win is it is spelt wrong :-O

:)

> We now have some great feedback there in a very short time, most of this
> we should be able to solve.
>
> This lead me to think, "should OpenStack have a more general mechanism
> for users to provide feedback". The idea is this is not for bugs or
> support, but for users to express pain points, requests for features and
> docs/howtos.

One place to start is to pay attention to what happens on the operators
mailing list. Posting this message there would probably help since lots
of users hang out there.

In Paris there will be another operators mini-summit, the fourth IIRC,
one every 3 months more or less (I can't find the details at the moment,
I assume they'll be published soon -- Ideas are being collected on
https://etherpad.openstack.org/p/PAR-ops-meetup).

Thanks for those pointers, we very interested in feedback from operators, but
in this case I am talking more about end users not operators (people that 
actually use our API).
-Angus

Another effort to close this 'feedback loop' is the new working group
temporarily named 'influencers' that will meet in Paris for the first
time:
https://openstacksummitnovember2014paris.sched.org/event/268a9853812c22ca8d0636b9d8f0c831

It's great to see lots of efforts going in the same direction. Keep 'em
coming.

/stef

--
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-23 Thread Angus Salkeld
On Fri, Oct 24, 2014 at 4:00 PM, Stefano Maffulli 
wrote:

> Hi Angus,
>
> quite a noble intent, one that requires lots of attempts like this you
> have started.
>
> On 10/23/2014 09:32 PM, Angus Salkeld wrote:
> > I have felt some grumblings about usability issues with Heat
> > templates/client/etc..
> > and wanted a way that users could come and give us feedback easily (low
> > barrier). I started an etherpad
> > (https://etherpad.openstack.org/p/heat-useablity-improvements) - the
> > first win is it is spelt wrong :-O
>
> :)
>
> > We now have some great feedback there in a very short time, most of this
> > we should be able to solve.
> >
> > This lead me to think, "should OpenStack have a more general mechanism
> > for users to provide feedback". The idea is this is not for bugs or
> > support, but for users to express pain points, requests for features and
> > docs/howtos.
>
> One place to start is to pay attention to what happens on the operators
> mailing list. Posting this message there would probably help since lots
> of users hang out there.
>
> In Paris there will be another operators mini-summit, the fourth IIRC,
> one every 3 months more or less (I can't find the details at the moment,
> I assume they'll be published soon -- Ideas are being collected on
> https://etherpad.openstack.org/p/PAR-ops-meetup).
>
>
Thanks for those pointers, we very interested in feedback from operators,
but
in this case I am talking more about end users not operators (people that
actually use our API).

-Angus


> Another effort to close this 'feedback loop' is the new working group
> temporarily named 'influencers' that will meet in Paris for the first
> time:
>
> https://openstacksummitnovember2014paris.sched.org/event/268a9853812c22ca8d0636b9d8f0c831
>
> It's great to see lots of efforts going in the same direction. Keep 'em
> coming.
>
> /stef
>
> --
> Ask and answer questions on https://ask.openstack.org
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] How can we get more feedback from users?

2014-10-23 Thread Stefano Maffulli
Hi Angus,

quite a noble intent, one that requires lots of attempts like this you
have started.

On 10/23/2014 09:32 PM, Angus Salkeld wrote:
> I have felt some grumblings about usability issues with Heat
> templates/client/etc..
> and wanted a way that users could come and give us feedback easily (low
> barrier). I started an etherpad
> (https://etherpad.openstack.org/p/heat-useablity-improvements) - the
> first win is it is spelt wrong :-O

:)

> We now have some great feedback there in a very short time, most of this
> we should be able to solve.
>
> This lead me to think, "should OpenStack have a more general mechanism
> for users to provide feedback". The idea is this is not for bugs or
> support, but for users to express pain points, requests for features and
> docs/howtos.

One place to start is to pay attention to what happens on the operators
mailing list. Posting this message there would probably help since lots
of users hang out there.

In Paris there will be another operators mini-summit, the fourth IIRC,
one every 3 months more or less (I can't find the details at the moment,
I assume they'll be published soon -- Ideas are being collected on
https://etherpad.openstack.org/p/PAR-ops-meetup).

Another effort to close this 'feedback loop' is the new working group
temporarily named 'influencers' that will meet in Paris for the first
time:
https://openstacksummitnovember2014paris.sched.org/event/268a9853812c22ca8d0636b9d8f0c831

It's great to see lots of efforts going in the same direction. Keep 'em
coming.

/stef

-- 
Ask and answer questions on https://ask.openstack.org

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] How can we get more feedback from users?

2014-10-23 Thread Angus Salkeld
Hi all

I have felt some grumblings about usability issues with Heat
templates/client/etc..
and wanted a way that users could come and give us feedback easily (low
barrier). I started an etherpad (
https://etherpad.openstack.org/p/heat-useablity-improvements) - the first
win is it is spelt wrong :-O

We now have some great feedback there in a very short time, most of this we
should be able to solve.

This lead me to think, "should OpenStack have a more general mechanism for
users to provide feedback". The idea is this is not for bugs or support,
but for users to express pain points, requests for features and docs/howtos.

It's not easy to improve your software unless you are listening to your
users.

Ideas?

-Angus
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-23 Thread Brian Haley
On 10/23/14 6:22 AM, Elena Ezhova wrote:
> Hi!
> 
> I am working on a bug "ping still working once connected even after
> related security group rule is
> deleted" (https://bugs.launchpad.net/neutron/+bug/1335375). The gist of
> the problem is the following: when we delete a security group rule the
> corresponding rule in iptables is also deleted, but the connection, that
> was allowed by that rule, is not being destroyed.
> The reason for such behavior is that in iptables we have the following
> structure of a chain that filters input packets for an interface of an
> istance:


Like Miguel said, there's no easy way to identify this on the compute
node since neither the MAC nor the interface are going to be in the
conntrack command output.  And you don't want to drop the wrong tenant's
connections.

Just wondering, if you remove the conntrack entries using the IP/port
from the router namespace does it drop the connection?  Or will it just
start working again on the next packet?  Doesn't work for VM to VM
packets, but those packets are probably less interesting.  It's just my
first guess.

-Brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Networking API "Create network" missing Request parameters

2014-10-23 Thread Anne Gentle
On Thu, Oct 23, 2014 at 6:12 PM, Mathieu Gagné  wrote:

> On 2014-10-23 7:00 PM, Danny Choi (dannchoi) wrote:
>
>>
>> In neutron, user with “admin” role can specify the provider network
>> parameters when creating a network.
>>
>> —provider:network_type
>> —provider:physical_network
>> —provider:segmentation_id
>>
>> localadmin@qa4:~/devstack$ neutron net-create test-network
>> --provider:network_type vlan --provider:physical_network physnet1
>> --provider:segmentation_id 400
>>
>> However, the Networking API v2.0
>> (http://developer.openstack.org/api-ref-networking-v2.html) “Create
>> network”
>> does not list them as Request parameters.
>>
>> Is this a print error?
>>
>>
> I see them under the "Networks multiple provider extension (networks)"
> section. [1]
>
> Open the detail for "Create network with multiple segment mappings" to see
> them.
>
> Is this what you were looking for?
>
> [1] http://developer.openstack.org/api-ref-networking-v2.
> html#network_multi_provider-ext
>
>
We have a couple of doc bugs on this:

https://bugs.launchpad.net/openstack-api-site/+bug/1373418

https://bugs.launchpad.net/openstack-api-site/+bug/1373423

Hope that helps -- please triage those bugs if you find out more.

Thanks,
Anne


> --
> Mathieu
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Summit] proposed item for the crossproject and/ or Nova meetings in the Design summit

2014-10-23 Thread Jay Pipes

On 10/23/2014 07:57 PM, Elzur, Uri wrote:

Today, OpenStack makes placement decision mainly based on Compute
demands (Scheduler is part of Nova). It also uses some info provided
about platform’s Compute capabilities. But for a given application
(consists of some VMs, some Network appliances, some storage etc),
Nova/Scheduler has no way to figure out relative placement of Network
devices (virtual appliances, SFC) and/or Storage devices (which is also
network born in many cases) in reference to the Compute elements. This
makes it harder to provide SLA, support certain policies (e.g. HA or
keeping all of these elements within a physical boundary of your choice,
or within a given network physical boundary and guarantee storage
proximity, for example. It also makes it harder to optimize resource
utilization level, which increases the cost and may cause Openstack to
be less competitive on TCO.

Another aspect of the issue, is that in order, to lower the cost per
unit of compute (or said better per unit of Application), it is
essential to pack tighter. This increases infrastructure utilization but
also makes interference a more important phenomenon (aka Nosy neighbor).
SLA requests, SLA guarantees and placement based on ability to provide
desired SLA are required.

We’d like to suggest moving a bit faster on making OpenStack a more
compelling stack for Compute/Network/Storage, capable of supporting
Telco/NFV and other usage models, and creating the foundation for
providing very low cost platform, more competitive with large cloud
deployment.


How do you suggest moving faster?

Also, when you say things like "more competitive with large cloud 
deployment" you need to tell us what you are comparing OpenStack to, and 
what cost factors you are using. Otherwise, it's just a statement with 
no context.



The concern is that any scheduler change will take long time. Folks
closer to the Scheduler work, have already pointed out we first need to
stabilize the API between Nova and the Scheduler, before we can talk
about a split (e.g. Gantt). So it may take till  late in 2016 (best
case?), to get this kind of broader Application level functionality in
the OpenStack scheduler .


I'm not entirely sure where late in 2016 comes from? Could you elaborate?


We’d like to bring it up in the coming design summit. Where do you think
it needs to be discussed: cross project tack? Scheduler discussion? Other?

I’ve just added a proposed item 17.1 to the
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics

1.

2.“present Application’s Network and Storage requirements, coupled with
infrastructure capabilities and status (e.g. up/dn


This is the kind of thing that was nixed as an idea last go around with 
the "nic-state-aware-scheduler":


https://review.openstack.org/#/c/87978/

You are coupling service state monitoring with placement decisions, and 
by doing so, you will limit the scale of the system considerably. We 
need improvements to our service state monitoring, for sure, including 
the ability to have much more fine-grained definition of what a service 
is. But I am 100% against adding the concept of checking service state 
*during* placement decisions.


Service state monitoring (it's called the servicegroup API in Nova) can 
and should notify the scheduler of important changes to the state of 
resource providers, but I'm opposed to making changes to the scheduler 
that would essentially make a placement decision and then immediately go 
and check a link for UP/DOWN state before "finalizing" the claim of 
resources on the resource provider.



, utilization levels) and placement policy (e.g. proximity, HA)


I understand proximity (affinity/anti-affinity), but what does HA have 
to do with placement policy? Could you elaborate a bit more on that?


> to get optimized placement

decisions accounting for all application elements (VMs, virt Network
appliances, Storage) vs. Compute only”


Yep. These are all simply inputs to the scheduler's placement decision 
engine. We need:


 a) A way of providing these inputs to the launch request without 
polluting a cloud user's view of the cloud -- remember we do NOT want 
users of the Nova API to essentially need to understand the exact layout 
of the cloud provider's datacenter. That's definitively anti-cloudy :) 
So, we need a way of providing generic inputs to the scheduler that the 
scheduler can translate into specific inputs because the scheduler would 
know the layout of the datacenter...


 b) Simple condition engine that would be able to understand the inputs 
(requested proximity to a storage cluster used by applications running 
in the instance, for example) with information the scheduler can query 
for about the topology of the datacenter's network and storage.


Work on b) involves the following foundational blueprints:

https://review.openstack.org/#/c/127609/
https://review.openstack.org/#/c/127610/
https://review.openstack.org/#/c/127612/

Looking forward t

[openstack-dev] Deprecation of Python 2.6 CI Testing

2014-10-23 Thread Clark Boylan
Hello,

At the Atlanta summit there was a session on removing python2.6
testing/support from the OpenStack Kilo release [0]. The Infra team is
working on enacting this change in the near future.

The way that this will work is python26 jobs will be removed from
running on master and feature branches of projects that have
stable/icehouse and/or stable/juno branches. The python26 jobs will
still continue to run against the stable branches. Any project that is a
library consumed by stable releases but does not have stable branches
will have python26 run against that project's master branch. This is
necessary to ensure we don't break backward compatibility with stable
releases.

This essentially boils down to: no python26 jobs against server project
master branches, but python26 jobs continue to run against stable
branches. Python-*client and oslo projects[1] will continue to have
python26 jobs run against their master branches. All other projects will
have python26 jobs completely removed (including stackforge).

If you are a project slated to have python26 removed and would prefer to
continue testing python26 that is doable, but we ask that you propose a
change atop the removal change [2] that adds python26 back to your
project. This way it is clear through git history and review that this
is a desired state. Also, this serves as a warning to the future where
we will drop all python26 jobs when stable/juno is no longer supported.
At that point we will stop building slaves capable of running python26
jobs.

Rough timeline for making these changes is early next week for OpenStack
projects. Then at the end of November (November 30th) we will make the 
changes to stackforge. This should give us plenty of time to work out 
which stackforge projects wish to continue testing python26.

[0] https://etherpad.openstack.org/p/juno-cross-project-future-of-python
[1]
http://lists.openstack.org/pipermail/openstack-dev/2014-October/048999.html
[2] https://review.openstack.org/129434

Let me or the Infra team know if you have any questions,
Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Jeremy Stanley
On 2014-10-23 17:18:04 -0400 (-0400), Doug Hellmann wrote:
> I think we have to actually wait for M, don’t we (K & L represents
> 1 year where J is supported, M is the first release where J is not
> supported and 2.6 can be fully dropped).
[...]

Roughly speaking, probably. It's more accurate to say we need to
keep it until stable/juno reaches end of support, which won't
necessarily coincide exactly with any particular release cycle
ending (it will instead coincide with whenever the stable branch
management team decides the final 2014.2.x point release is, which I
don't think has been settled quite yet).
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [Summit] proposed item for the crossproject and/ or Nova meetings in the Design summit

2014-10-23 Thread Elzur, Uri
Today, OpenStack makes placement decision mainly based on Compute demands 
(Scheduler is part of Nova). It also uses some info provided about platform's 
Compute capabilities. But for a given application (consists of some VMs, some 
Network appliances, some storage etc), Nova/Scheduler has no way to figure out 
relative placement of Network devices (virtual appliances, SFC) and/or Storage 
devices (which is also network born in many cases) in reference to the Compute 
elements. This makes it harder to provide SLA, support certain policies (e.g. 
HA or keeping all of these elements within a physical boundary of your choice, 
or within a given network physical boundary and guarantee storage proximity, 
for example. It also makes it harder to optimize resource utilization level, 
which increases the cost and may cause Openstack to be less competitive on TCO.

Another aspect of the issue, is that in order, to lower the cost per unit of 
compute (or said better per unit of Application), it is essential to pack 
tighter. This increases infrastructure utilization but also makes interference 
a more important phenomenon (aka Nosy neighbor). SLA requests, SLA guarantees 
and placement based on ability to provide desired SLA are required.

We'd like to suggest moving a bit faster on making OpenStack a more compelling 
stack for Compute/Network/Storage, capable of supporting Telco/NFV and other 
usage models, and creating the foundation for providing very low cost platform, 
more competitive with large cloud deployment.

The concern is that any scheduler change will take long time. Folks closer to 
the Scheduler work, have already pointed out we first need to stabilize the API 
between Nova and the Scheduler, before we can talk about a split (e.g. Gantt). 
So it may take till  late in 2016 (best case?), to get this kind of broader 
Application level functionality in the OpenStack scheduler .

We'd like to bring it up in the coming design summit. Where do you think it 
needs to be discussed: cross project tack? Scheduler discussion? Other?

I've just added a proposed item 17.1 to the 
https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
1.
2.   "present Application's Network and Storage requirements, coupled with 
infrastructure capabilities and status (e.g. up/dn, utilization levels) and 
placement policy (e.g. proximity, HA) to get optimized placement decisions 
accounting for all application elements (VMs, virt Network appliances, Storage) 
vs. Compute only"


Thx

Uri ("Oo-Ree")
C: 949-378-7568
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Networking API "Create network" missing Request parameters

2014-10-23 Thread Mathieu Gagné

On 2014-10-23 7:00 PM, Danny Choi (dannchoi) wrote:


In neutron, user with “admin” role can specify the provider network
parameters when creating a network.

—provider:network_type
—provider:physical_network
—provider:segmentation_id

localadmin@qa4:~/devstack$ neutron net-create test-network
--provider:network_type vlan --provider:physical_network physnet1
--provider:segmentation_id 400

However, the Networking API v2.0
(http://developer.openstack.org/api-ref-networking-v2.html) “Create network”
does not list them as Request parameters.

Is this a print error?



I see them under the "Networks multiple provider extension (networks)" 
section. [1]


Open the detail for "Create network with multiple segment mappings" to 
see them.


Is this what you were looking for?

[1] 
http://developer.openstack.org/api-ref-networking-v2.html#network_multi_provider-ext


--
Mathieu

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Networking API "Create network" missing Request parameters

2014-10-23 Thread Danny Choi (dannchoi)
Hi,

In neutron, user with “admin” role can specify the provider network parameters 
when creating a network.

—provider:network_type
—provider:physical_network
—provider:segmentation_id


localadmin@qa4:~/devstack$ neutron net-create test-network 
--provider:network_type vlan --provider:physical_network physnet1 
--provider:segmentation_id 400

Created a new network:

+---+--+

| Field | Value|

+---+--+

| admin_state_up| True |

| id| 389caa09-da54-4713-b869-12f7389cb9c6 |

| name  | test-network |

| provider:network_type | vlan |

| provider:physical_network | physnet1 |

| provider:segmentation_id  | 400  |

| router:external   | False|

| shared| False|

| status| ACTIVE   |

| subnets   |  |

| tenant_id | 92edf0cd20bf4085bb9dbe1b9084aadb |

+---+--+

However, the Networking API v2.0 
(http://developer.openstack.org/api-ref-networking-v2.html) “Create network”
does not list them as Request parameters.

Is this a print error?

Thanks,
Danny
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread Chris Friesen

On 10/23/2014 04:24 PM, Preston L. Bannister wrote:

On Thu, Oct 23, 2014 at 3:04 PM, John Griffith mailto:john.griffi...@gmail.com>> wrote:

The debate about whether to wipe LV's pretty much massively
depends on the intelligence of the underlying store. If the
lower level storage never returns accidental information ...
explicit zeroes are not needed.

On Thu, Oct 23, 2014 at 3:44 PM, Preston L. Bannister
mailto:pres...@bannister.us>> wrote:


Yes, that is pretty much the key.

Does LVM let you read physical blocks that have never been
written? Or zero out virgin segments on read? If not, then "dd"
of zeroes is a way of doing the right thing (if *very* expensive).

Yeah... so that's the crux of the issue on LVM (Thick).  It's quite
possible for a new LV to be allocated from the VG and a block from a
previous LV can be allocated.  So in essence if somebody were to sit
there in a cloud env and just create volumes and read the blocks
over and over and over they could gather some previous or other
tenants data (or pieces of it at any rate).  It's def the "right"
thing to do if you're in an env where you need some level of
security between tenants.  There are other ways to solve it of
course but this is what we've got.



Has anyone raised this issue with the LVM folk? Returning zeros on
unwritten blocks would require a bit of extra bookkeeping, but a lot
more efficient overall.


For Cinder volumes, I think that if you have new enough versions of 
everything you can specify "lvm_type = thin" and it will use thin 
provisioning.  Among other things this should improve snapshot 
performance and also avoid the need to explicitly wipe on delete (since 
the next user of the storage will be provided zeros for a read of any 
page it hasn't written).


As far as I know this is not supported for ephemeral storage.

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements

2014-10-23 Thread Jorge Miramontes
Hey German/Susanne,

To continue our conversation from our IRC meeting could you all provide
more insight into you usage requirements? Also, I'd like to clarify a few
points related to using logging.

I am advocating that logs be used for multiple purposes, including
billing. Billing requirements are different that connection logging
requirements. However, connection logging is a very accurate mechanism to
capture billable metrics and thus, is related. My vision for this is
something like the following:

- Capture logs in a scalable way (i.e. capture logs and put them on a
separate scalable store somewhere so that it doesn't affect the amphora).
- Every X amount of time (every hour, for example) process the logs and
send them on their merry way to cielometer or whatever service an operator
will be using for billing purposes.
- Keep logs for some configurable amount of time. This could be anything
from indefinitely to not at all. Rackspace is planing on keeping them for
a certain period of time for the following reasons:

A) We have connection logging as a planned feature. If a customer turns
on the connection logging feature for their load balancer it will already
have a history. One important aspect of this is that customers (at least
ours) tend to turn on logging after they realize they need it (usually
after a tragic lb event). By already capturing the logs I'm sure customers
will be extremely happy to see that there are already X days worth of logs
they can immediately sift through.
B) Operators and their support teams can leverage logs when providing
service to their customers. This is huge for finding issues and resolving
them quickly.
C) Albeit a minor point, building support for logs from the get-go
mitigates capacity management uncertainty. My example earlier was the
extreme case of every customer turning on logging at the same time. While
unlikely, I would hate to manage that!

I agree that there are other ways to capture billing metrics but, from my
experience, those tend to be more complex than what I am advocating and
without the added benefits listed above. An understanding of HP's desires
on this matter will hopefully get this to a point where we can start
working on a spec.

Cheers,
--Jorge

P.S. Real-time stats is a different beast and I envision there being an
API call that returns "real-time" data such as this ==>
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.


From:  , German 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)"

Date:  Wednesday, October 22, 2014 2:41 PM
To:  "OpenStack Development Mailing List (not for usage questions)"

Subject:  Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements


>Hi Jorge,
> 
>Good discussion so far + glad to have you back
>J
> 
>I am not a big fan of using logs for billing information since ultimately
>(at least at HP) we need to pump it into ceilometer. So I am envisioning
>either the
> amphora (via a proxy) to pump it straight into that system or we collect
>it on the controller and pump it from there.
> 
>Allowing/enabling logging creates some requirements on the hardware,
>mainly, that they can handle the IO coming from logging. Some operators
>might choose to
> hook up very cheap and non performing disks which might not be able to
>deal with the log traffic. So I would suggest that there is some rate
>limiting on the log output to help with that.
>
> 
>Thanks,
>German
> 
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>
>Sent: Wednesday, October 22, 2014 6:51 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements
>
>
> 
>Hey Stephen (and Robert),
>
> 
>
>For real-time usage I was thinking something similar to what you are
>proposing. Using logs for this would be overkill IMO so your suggestions
>were what I was
> thinking of starting with.
>
> 
>
>As far as storing logs is concerned I was definitely thinking of
>offloading these onto separate storage devices. Robert, I totally hear
>you on the scalability
> part as our current LBaaS setup generates TB of request logs. I'll start
>planning out a spec and then I'll let everyone chime in there. I just
>wanted to get a general feel for the ideas I had mentioned. I'll also
>bring it up in today's meeting.
>
> 
>
>Cheers,
>
>--Jorge
>
>
>
>
> 
>
>From:
>Stephen Balukoff 
>Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>
>Date: Wednesday, October 22, 2014 4:04 AM
>To: "OpenStack Development Mailing List (not for usage questions)"
>
>Subject: Re: [openstack-dev] [Neutron][LBaaS][Octavia] Usage Requirements
>
> 
>
>>Hi Jorge!
>>
>> 
>>
>>Welcome back, eh! You've been missed.
>>
>> 
>>
>>Anyway, I just wanted to say that your proposal sounds great to me, and
>>it's good to finally be closer to having concrete requirements for
>>logging, eh. Once this
>> discussion is nearing a conclusion, 

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread Preston L. Bannister
On Thu, Oct 23, 2014 at 3:04 PM, John Griffith 
wrote:

The debate about whether to wipe LV's pretty much massively depends on the
>> intelligence of the underlying store. If the lower level storage never
>> returns accidental information ... explicit zeroes are not needed.
>>
>

> On Thu, Oct 23, 2014 at 3:44 PM, Preston L. Bannister <
> pres...@bannister.us> wrote:
>

>> Yes, that is pretty much the key.
>>
>> Does LVM let you read physical blocks that have never been written? Or
>> zero out virgin segments on read? If not, then "dd" of zeroes is a way of
>> doing the right thing (if *very* expensive).
>>
>
> Yeah... so that's the crux of the issue on LVM (Thick).  It's quite
> possible for a new LV to be allocated from the VG and a block from a
> previous LV can be allocated.  So in essence if somebody were to sit there
> in a cloud env and just create volumes and read the blocks over and over
> and over they could gather some previous or other tenants data (or pieces
> of it at any rate).  It's def the "right" thing to do if you're in an env
> where you need some level of security between tenants.  There are other
> ways to solve it of course but this is what we've got.
>


Has anyone raised this issue with the LVM folk? Returning zeros on
unwritten blocks would require a bit of extra bookkeeping, but a lot more
efficient overall.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread John Griffith
On Thu, Oct 23, 2014 at 3:44 PM, Preston L. Bannister 
wrote:

>
> On Thu, Oct 23, 2014 at 7:51 AM, John Griffith 
> wrote:
>>
>> On Thu, Oct 23, 2014 at 8:50 AM, John Griffith 
>> wrote:
>>>
>>> On Thu, Oct 23, 2014 at 1:30 AM, Preston L. Bannister <
>>> pres...@bannister.us> wrote:
>>>
 John,

 As a (new) OpenStack developer, I just discovered the
 "CINDER_SECURE_DELETE" option.

>>>
>> OHHH... Most importantly, I almost forgot.  Welcome!!!
>>
>
> Thanks! (I think...)
>
:)

>
>
>
>
>> It doesn't suck as bad as you might have thought or some of the other
>>> respondents on this thread seem to think.  There's certainly room for
>>> improvement and growth but it hasn't been completely ignored on the Cinder
>>> side.
>>>
>>
> To be clear, I am fairly impressed with what has gone into OpenStack as a
> whole. Given the breadth, complexity, and growth ... not everything is
> going to be perfect (yet?).
>
> So ... not trying to disparage past work, but noting what does not seem
> right. (Also know I could easily be missing something.)
>
Sure, I didn't mean anything by that at all, and certainly didn't take it
that way.

>
>
>
>
>
>> The debate about whether to wipe LV's pretty much massively depends on
 the intelligence of the underlying store. If the lower level storage never
 returns accidental information ... explicit zeroes are not needed.

>>>
> Yes, that is pretty much the key.
>
> Does LVM let you read physical blocks that have never been written? Or
> zero out virgin segments on read? If not, then "dd" of zeroes is a way of
> doing the right thing (if *very* expensive).
>

Yeah... so that's the crux of the issue on LVM (Thick).  It's quite
possible for a new LV to be allocated from the VG and a block from a
previous LV can be allocated.  So in essence if somebody were to sit there
in a cloud env and just create volumes and read the blocks over and over
and over they could gather some previous or other tenants data (or pieces
of it at any rate).  It's def the "right" thing to do if you're in an env
where you need some level of security between tenants.  There are other
ways to solve it of course but this is what we've got.

>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [stable] Tool to aid in scalability problems mitigation.

2014-10-23 Thread Salvatore Orlando
Hi Miguel,

while we'd need to hear from the stable team, I think it's not such a bad
idea to make this tool available to users of pre-juno openstack releases.
As far as upstream repos are concerned, I don't know if this tool violates
the criteria for stable branches. Even if it would be a rather large change
for stable/icehouse, it is pretty much orthogonal to the existing code, so
it could be ok. However, please note that stable/havana has now reached its
EOL, so there will be no more stable release for it.

The orthogonal nature of this tool however also make the case for making it
widely available on pypi. I think it should be ok to describe the
scalability issue in the official OpenStack Icehouse docs and point out to
this tool for mitigation.

Salvatore

On 23 October 2014 14:03, Miguel Angel Ajo Pelayo 
wrote:

>
>
> Recently, we have identified clients with problems due to the
> bad scalability of security groups in Havana and Icehouse, that
> was addressed during juno here [1] [2]
>
> This situation is identified by blinking agents (going UP/DOWN),
> high AMQP load, nigh neutron-server load, and timeout from openvswitch
> agents when trying to contact neutron-server
> "security_group_rules_for_devices".
>
> Doing a [1] backport involves many dependent patches related
> to the general RPC refactor in neutron (which modifies all plugins),
> and subsequent ones fixing a few bugs. Sounds risky to me. [2] Introduces
> new features and it's dependent on features which aren't available on
> all systems.
>
> To remediate this on production systems, I wrote a quick tool
> to help on reporting security groups and mitigating the problem
> by writing almost-equivalent rules [3].
>
> We believe this tool would be better available to the wider community,
> and under better review and testing, and, since it doesn't modify any
> behavior
> or actual code in neutron, I'd like to propose it for inclusion into, at
> least,
> Icehouse stable branch where it's more relevant.
>
> I know the usual way is to go master->Juno->Icehouse, but at this
> moment
> the tool is only interesting for Icehouse (and Havana), although I believe
> it could be extended to cleanup orphaned resources, or any other cleanup
> tasks, in that case it could make sense to be available for K->J->I.
>
> As a reference, I'm leaving links to outputs from the tool [4][5]
>
> Looking forward to get some feedback,
> Miguel Ángel.
>
>
> [1] https://review.openstack.org/#/c/111876/ security group rpc refactor
> [2] https://review.openstack.org/#/c/111877/ ipset support
> [3] https://github.com/mangelajo/neutrontool
> [4] http://paste.openstack.org/show/123519/
> [5] http://paste.openstack.org/show/123525/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Kevin L. Mitchell
On Thu, 2014-10-23 at 17:19 -0400, Doug Hellmann wrote:
> I’m not aware of any Oslo code that presents a problem for those
> plugins. We wouldn’t want to cause a problem, but as you say, we don’t
> have anywhere to test 2.4 code. Do you know if the Xen driver uses any
> of the Oslo code?

I missed the [oslo] tag in the subject line and was thinking generally;
so no, none of the Xen plugins use anything from oslo, because of the
need to support 2.4.
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-23 Thread Ian Wells
There are two categories of problems:

1. some networks don't pass VLAN tagged traffic, and it's impossible to
detect this from the API
2. it's not possible to pass traffic from multiple networks to one port on
one machine as (e.g.) VLAN tagged traffic

(1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else
addresses this, particularly in the case that one VM is emitting tagged
packets that another one should receive and Openstack knows nothing about
what's going on.

We should get this in, and ideally in quickly and in a simple form where it
simply tells you if a network is capable of passing tagged traffic.  In
general, this is possible to calculate but a bit tricky in ML2 - anything
using the OVS mechanism driver won't pass VLAN traffic, anything using
VLANs should probably also claim it doesn't pass VLAN traffic (though
actually it depends a little on the switch), and combinations of L3 tunnels
plus Linuxbridge seem to pass VLAN traffic just fine.  Beyond that, it's
got a backward compatibility mode, so it's possible to ensure that any
plugin that doesn't implement VLAN reporting is still behaving correctly
per the specification.

(2) is addressed by several blueprints, and these have overlapping ideas
that all solve the problem.  I would summarise the possibilities as follows:

A. Racha's L2 gateway blueprint,
https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension, which
(at its simplest, though it's had features added on and is somewhat
OVS-specific in its detail) acts as a concentrator to multiplex multiple
networks onto one as a trunk.  This is a very simple approach and doesn't
attempt to resolve any of the hairier questions like making DHCP work as
you might want it to on the ports attached to the trunk network.
B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/,
which is more limited in that it refers only to external connections.
C. Erik's VLAN port blueprint,
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which tries
to solve the addressing problem mentioned above by having ports within
ports (much as, on the VM side, interfaces passing trunk traffic tend to
have subinterfaces that deal with the traffic streams).
D. Not a blueprint, but an idea I've come across: create a network that is
a collection of other networks, each 'subnetwork' being a VLAN in the
network trunk.
E. Kyle's very old blueprint,
https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api -
where we attach a port, not a network, to multiple networks.  Probably
doesn't work with appliances.

I would recommend we try and find a solution that works with both external
hardware and internal networks.  (B) is only a partial solution.

Considering the others, note that (C) and (D) add significant complexity to
the data model, independently of the benefits they bring.  (A) adds one new
functional block to networking (similar to today's routers, or even today's
Nova instances).

Finally, I suggest we consider the most prominent use case for multiplexing
networks.  This seems to be condensing traffic from many networks to either
a service VM or a service appliance.  It's useful, but not essential, to
have Neutron control the addresses on the trunk port subinterfaces.

So, that said, I personally favour (A) is the simplest way to solve our
current needs, and I recommend paring (A) right down to its basics: a block
that has access ports that we tag with a VLAN ID, and one trunk port that
has all of the access networks multiplexed onto it.  This is a slightly
dangerous block, in that you can actually set up forwarding blocks with it,
and that's a concern; but it's a simple service block like a router, it's
very, very simple to implement, and it solves our immediate problems so
that we can make forward progress.  It also doesn't affect the other
solutions significantly, so someone could implement (C) or (D) or (E) in
the future.
-- 
Ian.


On 23 October 2014 02:13, Alan Kavanagh  wrote:

> +1 many thanks to Kyle for putting this as a priority, its most welcome.
> /Alan
>
> -Original Message-
> From: Erik Moe [mailto:erik@ericsson.com]
> Sent: October-22-14 5:01 PM
> To: Steve Gordon; OpenStack Development Mailing List (not for usage
> questions)
> Cc: iawe...@cisco.com
> Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
>
> Hi,
>
> Great that we can have more focus on this. I'll attend the meeting on
> Monday and also attend the summit, looking forward to these discussions.
>
> Thanks,
> Erik
>
>
> -Original Message-
> From: Steve Gordon [mailto:sgor...@redhat.com]
> Sent: den 22 oktober 2014 16:29
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Erik Moe; iawe...@cisco.com; calum.lou...@metaswitch.com
> Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
> - Original Message -
> > From: "Kyle Mestery" 
> > To: "OpenStack Development Mail

Re: [openstack-dev] [Horizon] [Devstack]

2014-10-23 Thread Gabriel Hurley
All in all this is been a long time coming. The cookie-based option was useful 
as a batteries-included, simplest-case scenario. Moving to SQLite is a 
reasonable second choice since most systems Horizon might be deployed on 
support sqlite out of the box.

I would make a couple notes:


1)  If you’re going to store very large amounts of data in the session, 
then session cleanup is going to become an important issue to prevent excessive 
data growth from old sessions.

2)  SQLite is far worse to go into production with than cookie-based 
sessions (which are far from perfect). The more we can do to ensure people 
don’t make that mistake, the better.

3)  There should be a clear deprecation for cookie-based sessions. Don’t 
just drop them in a single release, as tempting as it is.

Otherwise, seems good to me.


-  Gabriel

From: David Lyle [mailto:dkly...@gmail.com]
Sent: Thursday, October 23, 2014 2:44 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Horizon] [Devstack]

In order to help ease an ongoing struggle with session size limit issues, 
Horizon is planning on changing the default session store from signed cookie to 
simple server side session storage using sqlite. The size limit for cookie 
based sessions is 4K and when this value is overrun, the result is truncation 
of the session data in the cookie or a complete lack of session data updates.

Operators will have the flexibility to replace the sqlite backend with the DB 
of their choice, or memcached.

We gain: support for non-trivial service catalogs, support for higher number of 
regions, space for holding multiple tokens (domain scoped and project scoped), 
better support for PKI and PKIZ tokens, and frees up cookie space for user 
preferences.

The drawbacks are we lose HA as a default, a slightly more complicated 
configuration. Once the cookie size limit is removed, cookie based storage 
would no longer be supported.

Additionally, this will require a few config changes to devstack to configure 
the session store DB and clean it up periodically.

Concerns?

David


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Error in ssh key pair log in

2014-10-23 Thread Patil, Tushar
Hi Khayam,

Read below warning message carefully.

Open /home/openstack/.ssh/known_hosts file from where you are trying to connect 
to the VM, delete line #1 and try it again.

TP

From: Khayam Gondal mailto:khayam.gon...@gmail.com>>
Date: Thursday, October 23, 2014 at 2:32 AM
To: "openst...@lists.openstack.org" 
mailto:openst...@lists.openstack.org>>, 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [Openstack] Error in ssh key pair log in


I am trying to login into VM from host using ssh key pair instead of password. 
I have created VM using keypair khayamkey and than tried to login into vm using 
following command

ssh -l tux -i khayamkey.pem 10.3.24.56

where tux is username for VM, but I got following error

WARNING: REMOTE 
HOST IDENTIFICATION HAS CHANGED! 

IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!Someone could be 
eavesdropping on you right now (man-in-the-middle attack)!It is also possible 
that a host key has just been changed.The fingerprint for the RSA key sent by 
the remote host is52:5c:47:33:dd:d0:7a:cd:0e:78:8d:9b:66:d8:74:a3.Please 
contact your system administrator.Add correct host key in 
/home/openstack/.ssh/known_hosts to get rid of this message.Offending RSA key 
in /home/openstack/.ssh/known_hosts:1
  remove with: ssh-keygen -f "/home/openstack/.ssh/known_hosts" -R 10.3.24.56
RSA host key for 10.3.24.56 has changed and you have requested strict 
checking.Host key verification failed.

P.S: I know if I run ssh-keygen -f "/home/openstack/.ssh/known_hosts" -R 
10.3.24.56 problem can be solved but than I have to provide password to log in 
to VM, but my goal is to use keypairs NOT password.

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread Preston L. Bannister
On Thu, Oct 23, 2014 at 7:51 AM, John Griffith 
wrote:
>
> On Thu, Oct 23, 2014 at 8:50 AM, John Griffith 
> wrote:
>>
>> On Thu, Oct 23, 2014 at 1:30 AM, Preston L. Bannister <
>> pres...@bannister.us> wrote:
>>
>>> John,
>>>
>>> As a (new) OpenStack developer, I just discovered the
>>> "CINDER_SECURE_DELETE" option.
>>>
>>
> OHHH... Most importantly, I almost forgot.  Welcome!!!
>

Thanks! (I think...)




> It doesn't suck as bad as you might have thought or some of the other
>> respondents on this thread seem to think.  There's certainly room for
>> improvement and growth but it hasn't been completely ignored on the Cinder
>> side.
>>
>
To be clear, I am fairly impressed with what has gone into OpenStack as a
whole. Given the breadth, complexity, and growth ... not everything is
going to be perfect (yet?).

So ... not trying to disparage past work, but noting what does not seem
right. (Also know I could easily be missing something.)





> The debate about whether to wipe LV's pretty much massively depends on the
>>> intelligence of the underlying store. If the lower level storage never
>>> returns accidental information ... explicit zeroes are not needed.
>>>
>>
Yes, that is pretty much the key.

Does LVM let you read physical blocks that have never been written? Or zero
out virgin segments on read? If not, then "dd" of zeroes is a way of doing
the right thing (if *very* expensive).
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] [Devstack]

2014-10-23 Thread David Lyle
In order to help ease an ongoing struggle with session size limit issues,
Horizon is planning on changing the default session store from signed
cookie to simple server side session storage using sqlite. The size limit
for cookie based sessions is 4K and when this value is overrun, the result
is truncation of the session data in the cookie or a complete lack of
session data updates.

Operators will have the flexibility to replace the sqlite backend with the
DB of their choice, or memcached.

We gain: support for non-trivial service catalogs, support for higher
number of regions, space for holding multiple tokens (domain scoped and
project scoped), better support for PKI and PKIZ tokens, and frees up
cookie space for user preferences.

The drawbacks are we lose HA as a default, a slightly more complicated
configuration. Once the cookie size limit is removed, cookie based storage
would no longer be supported.

Additionally, this will require a few config changes to devstack to
configure the session store DB and clean it up periodically.

Concerns?

David
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [poppy] Summit Design Session Planning

2014-10-23 Thread Amit Gandhi
Hi

The summit planning etherpad [0] is now available to continue discussion topics 
for the Poppy design session in Paris (to be held on Tuesday Nov 4th, at 2pm)

The planning etherpad will be kept open until next Thursday, during which we 
will finalize what will be discussed during the Poppy Design session[1].  The 
Poppy team started the planning discussion at todays weekly Poppy meeting[2].

One of the initial design session topics we plan to discuss at the summit is 
how Poppy can provision CDN services over Swift Containers.

I would like to invite any Swift developers who are attending the Kilo Summit 
to attend the Poppy design session, so that we can discuss in detail how this 
feature would work and any issues we would need to consider.


For more information on Poppy (CDN), and the Design Session, please visit the 
Poppy wiki page [3]

[0] https://etherpad.openstack.org/p/poppy-design-session-paris
[1] 
http://kilodesignsummit.sched.org/event/5c9eed173199565ce840100e37ebd754#.VElwU4exE1d
[2] https://wiki.openstack.org/wiki/Meetings/Poppy
[3] https://wiki.openstack.org/wiki/Poppy

Thanks,

Amit Gandhi
Rackspace.

@amitgandhinz on Freenode
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Doug Hellmann

On Oct 23, 2014, at 12:30 PM, Kevin L. Mitchell  
wrote:

> On Thu, 2014-10-23 at 18:56 +0300, Andrey Kurilin wrote:
>> Just a joke: Can we drop supporting Python 2.6, when several project
>> still have hooks for Python 2.4?
>> 
>> https://github.com/openstack/python-novaclient/blob/master/novaclient/exceptions.py#L195-L203
>> https://github.com/openstack/python-cinderclient/blob/master/cinderclient/exceptions.py#L147-L155
> 
> It may have been intended as a joke, but it's worth pointing out that
> the Xen plugins for nova (at least) have to be compatible with Python
> 2.4, because they run on the Xenserver, which has an antiquated Python
> installed :)
> 
> As for the clients, we could probably drop that segment now; it's not
> like we *test* against 2.4, right?  :)

I’m not aware of any Oslo code that presents a problem for those plugins. We 
wouldn’t want to cause a problem, but as you say, we don’t have anywhere to 
test 2.4 code. Do you know if the Xen driver uses any of the Oslo code?

Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Doug Hellmann

On Oct 23, 2014, at 2:56 AM, Flavio Percoco  wrote:

> On 10/22/2014 08:15 PM, Doug Hellmann wrote:
>> The application projects are dropping python 2.6 support during Kilo, and 
>> I’ve had several people ask recently about what this means for Oslo. Because 
>> we create libraries that will be used by stable versions of projects that 
>> still need to run on 2.6, we are going to need to maintain support for 2.6 
>> in Oslo until Juno is no longer supported, at least for some of our 
>> projects. After Juno’s support period ends we can look again at dropping 2.6 
>> support in all of the projects.
>> 
>> 
>> I think these rules cover all of the cases we have:
>> 
>> 1. Any Oslo library in use by an API client that is used by a supported 
>> stable branch (Icehouse and Juno) needs to keep 2.6 support.
>> 
>> 2. If a client library needs a library we graduate from this point forward, 
>> we will need to ensure that library supports 2.6.
>> 
>> 3. Any Oslo library used directly by a supported stable branch of an 
>> application needs to keep 2.6 support.
>> 
>> 4. Any Oslo library graduated during Kilo can drop 2.6 support, unless one 
>> of the previous rules applies.
>> 
>> 5. The stable/icehouse and stable/juno branches of the incubator need to 
>> retain 2.6 support for as long as those versions are supported.
>> 
>> 6. The master branch of the incubator needs to retain 2.6 support until we 
>> graduate all of the modules that will go into libraries used by clients.
>> 
>> 
>> A few examples:
>> 
>> - oslo.utils was graduated during Juno and is used by some of the client 
>> libraries, so it needs to maintain python 2.6 support.
>> 
>> - oslo.config was graduated several releases ago and is used directly by the 
>> stable branches of the server projects, so it needs to maintain python 2.6 
>> support.
>> 
>> - oslo.log is being graduated in Kilo and is not yet in use by any projects, 
>> so it does not need python 2.6 support.
>> 
>> - oslo.cliutils and oslo.apiclient are on the list to graduate in Kilo, but 
>> both are used by client projects, so they need to keep python 2.6 support. 
>> At that point we can evaluate the code that remains in the incubator and see 
>> if we’re ready to turn of 2.6 support there.
>> 
>> 
>> Let me know if you have questions about any specific cases not listed in the 
>> examples.
> 
> The rules look ok to me but I'm a bit worried that we might miss
> something in the process due to all these rules being in place. Would it
> be simpler to just say we'll keep py2.6 support in oslo for Kilo and
> drop it in Igloo (or L?) ?

I think we have to actually wait for M, don’t we (K & L represents 1 year where 
J is supported, M is the first release where J is not supported and 2.6 can be 
fully dropped).

But to your point of keeping it simple and saying we support 2.6 in all of Oslo 
until no stable branches use it, that could work. I think in practice we’re not 
in any hurry to drop the 2.6 tests from existing Oslo libs, and we just won’t 
add them to new ones, which gives us basically the same result.

Doug

> 
> Once Igloo development begins, Kilo will be stable (without py2.6
> support except for Oslo) and Juno will be in security maintenance (with
> py2.6 support).
> 
> I guess the TL;DR of what I'm proposing is to keep 2.6 support in oslo
> until we move the rest of the projects just to keep the process simpler.
> Probably longer but hopefully simpler.
> 
> I'm sure I'm missing something so please, correct me here.
> Flavio
> 
> 
> -- 
> @flaper87
> Flavio Percoco
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-23 Thread Doug Hellmann

On Oct 23, 2014, at 6:27 AM, Chris Dent  wrote:

> 
> I've proposed a spec to Ceilometer
> 
>   https://review.openstack.org/#/c/129669/
> 
> for a suite of declarative HTTP tests that would be runnable both in
> gate check jobs and in local dev environments.
> 
> There's been some discussion that this may be generally applicable
> and could be best served by a generic tool. My original assertion
> was "let's make something work and then see if people like it" but I
> thought I also better check with the larger world:
> 
> * Is this a good idea?
> 
> * Do other projects have similar ideas in progress?
> 
> * Is this concept something for which a generic tool should be
>  created _prior_ to implementation in an individual project?
> 
> * Is there prior art? What's a good format?

WebTest isn’t quite what you’re talking about, but does provide a way to talk 
to a WSGI app from within a test suite rather simply. Can you expand a little 
on why “declarative” tests are better suited for this than the more usual sorts 
of tests we write?

I definitely don’t think the ceilometer team should build something completely 
new for this without a lot more detail in the spec about which projects on PyPI 
were evaluated and rejected as not meeting the requirements. If we do need/want 
something like this I would expect it to be built within the QA program. I 
don’t know if it’s appropriate to put it in tempestlib or if we need a 
completely new tool.

Doug

> 
> Thanks.
> 
> -- 
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-23 Thread Vishvananda Ishaya
If you exec conntrack inside the namespace with ip netns exec does it still 
show both connections?

Vish

On Oct 23, 2014, at 3:22 AM, Elena Ezhova  wrote:

> Hi!
> 
> I am working on a bug "ping still working once connected even after related 
> security group rule is deleted" 
> (https://bugs.launchpad.net/neutron/+bug/1335375). The gist of the problem is 
> the following: when we delete a security group rule the corresponding rule in 
> iptables is also deleted, but the connection, that was allowed by that rule, 
> is not being destroyed.
> The reason for such behavior is that in iptables we have the following 
> structure of a chain that filters input packets for an interface of an 
> istance:
> 
> Chain neutron-openvswi-i830fa99f-3 (1 references)
>  pkts bytes target prot opt in out source   
> destination 
> 0 0 DROP   all  --  *  *   0.0.0.0/00.0.0.0/0 
>state INVALID /* Drop packets that are not associated with a 
> state. */
> 0 0 RETURN all  --  *  *   0.0.0.0/00.0.0.0/0 
>state RELATED,ESTABLISHED /* Direct packets associated with a 
> known session to the RETURN chain. */
> 0 0 RETURN udp  --  *  *   10.0.0.3 0.0.0.0/0 
>udp spt:67 dpt:68
> 0 0 RETURN all  --  *  *   0.0.0.0/00.0.0.0/0 
>match-set IPv43a0d3610-8b38-43f2-8 src
> 0 0 RETURN tcp  --  *  *   0.0.0.0/00.0.0.0/0 
>tcp dpt:22  < rule that allows ssh on port 22  
>   
> 184 RETURN icmp --  *  *   0.0.0.0/00.0.0.0/0 
>   
> 0 0 neutron-openvswi-sg-fallback  all  --  *  *   0.0.0.0/0   
>  0.0.0.0/0/* Send unmatched traffic to the fallback 
> chain. */
> 
> So, if we delete rule that allows tcp on port 22, then all connections that 
> are already established won't be closed, because all packets would satisfy 
> the rule: 
> 0 0 RETURN all  --  *  *   0.0.0.0/00.0.0.0/0 
>state RELATED,ESTABLISHED /* Direct packets associated with a known 
> session to the RETURN chain. */
> 
> I seek advice on the way how to deal with the problem. There are a couple of 
> ideas how to do it (more or less realistic):
> Kill the connection using conntrack
>   The problem here is that it is sometimes impossible to tell which 
> connection should be killed. For example there may be two instances running 
> in different namespaces that have the same ip addresses. As a compute doesn't 
> know anything about namespaces, it cannot distinguish between the two 
> seemingly identical connections: 
>  $ sudo conntrack -L  | grep "10.0.0.5"
>  tcp  6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60723 
> dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723 [ASSURED] mark=0 use=1
>  tcp  6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60729 
> dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729 [ASSURED] mark=0 use=1
> 
> I wonder whether there is any way to search for a connection by destination 
> MAC?
> Delete iptables rule that directs packets associated with a known session to 
> the RETURN chain
>It will force all packets to go through the full chain each time 
> and this will definitely make the connection close. But this will strongly 
> affect the performance. Probably there may be created a timeout after which 
> this rule will be restored, but it is uncertain how long should it be.
> 
> Please share your thoughts on how it would be better to handle it.
> 
> Thanks in advance,
> Elena
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Lightning talks during the Design Summit!

2014-10-23 Thread Kyle Mestery
As discussed during the neutron-drivers meeting this week [1], we've
going to use one of the Neutron 40 minute design summit slots for
lightning talks. The basic idea is we will have 6 lightning talks,
each 5 minutes long. We will force a 5 minute hard limit here. We'll
do the lightning talk round first thing Thursday morning.

To submit a lightning talk, please add it to the etherpad linked here
[2]. I'll be collecting ideas until after the Neutron meeting on
Monday, 10-27-2014. At that point, I'll take all the ideas and add
them into a Survey Monkey form and we'll vote for which talks people
want to see. The top 6 talks will get a lightning talk slot.

I'm hoping the lightning talks allow people to discuss some ideas
which didn't get summit time, and allow for even new contributors to
discuss their ideas face to face with folks.

Thanks!
Kyle

[1] 
http://eavesdrop.openstack.org/meetings/neutron_drivers/2014/neutron_drivers.2014-10-22-15.02.log.html
[2] https://etherpad.openstack.org/p/neutron-kilo-lightning-talks

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pluggable framework in Fuel: first prototype ready

2014-10-23 Thread Dmitry Borodaenko
Preventing plugin developers from implementing their own installer is
a pro, not a con, you've already listed one reason in cons against
install scripts inside plugin tarball: if we centralize plugin
installation and management logic in fuel, we can change it once for
all plugins and don't have to worry about old plugins using an
obsolete installer.

I think priorities here should be 1) easy of plugin development; and
2) ease of use. Pluggable architecture won't do us much good if we end
up being the only ones being able to use it efficiently. Adding a
little more complexity to fuelclient to allow moving a lot of fuel
complexity from core to plugins is a good tradeoff.


On Thu, Oct 23, 2014 at 8:32 AM, Evgeniy L  wrote:
> Hi Mike,
>
> I would like to add a bit more details about current implementation and how
> it can be done.
>
> Implement installation as a scripts inside of tar ball:
> Cons:
> * install script is really simple right now, but it will be much more
> complicated
> ** it requires to implement logic where we can ask user for login/password
> ** use some config, where we will be able to get endpoints, like where is
> keystone, nailgun
> ** validate that it's possible to install plugin on the current version of
> master
> ** handle error cases (to make installation process more atomic)
> * it will be impossible to deprecate the installation logic/method, because
> it's on the plugin's side
>   and you cannot change a plugin which user downloaded some times ago, when
> we get
>   plugin manager, we probably would like user to use plugin manager, instead
> of some scripts
> * plugin installation process is not so simple as it could be (untar, cd
> plugin, ./install)
>
> Pros:
> * plugin developer can change installation scripts (I'm not sure if it's a
> pros)
>
> Add installation to fuel client:
> Cons:
> * requires changes in fuel client, which are not good for fuel client by
> design (fuel client
>   should be able to work remotely from user's machine), current
> implementation requires
>   local operations on files, it will be changed in the future releases, so
> fuel-client will
>   be able to do it via api, also we can determine if it's not master node by
> /etc/fuel/version.yaml
>   and show the user a message which says that in the current version it's
> not possible
>   to install the plugin remotely
> * plugin developer won't be able to change installation process (I'm not
> sure if it's a cons)
>
> Pros:
> * it's easier for user to install the plugin `fuel --install-plugin
> plugin_name-1.0.1.fpb'
> * all of the authentication logic already implemented in fuel client
> * fuel client uses config with endpoints which is generated by puppet
> * it will be easier to deprecate previous installation approach, we can just
> install new
>   fuel client on the master which uses api
>
> Personally I like the second approach, and I think we should try to
> implement it,
> when we get time.
>
> Thanks,
>
> On Thu, Oct 23, 2014 at 3:02 PM, Mike Scherbakov 
> wrote:
>>>
>>> I feel like we should not require user to unpack the plugin before
>>> installing it. Moreover, we may chose to distribute plugins in our own
>>> format, which we may potentially change later. E.g. "lbaas-v2.0.fp". I'd
>>> rather stick with two actions:
>>>
>>> Assembly (externally): fpb --build 
>>>
>>> Installation (on master node): fuel --install-plugin 
>>>
>>>  I like the idea of putting plugin installation functionality in fuel
>>> client, which is installed
>>> on master node.
>>> But in the current version plugin installation requires files operations
>>> on the master,
>>> as result we can have problems if user's fuel-client is installed on
>>> another env.
>>
>>
>> I suggest to keep it simple for now as we have the issue mentioned by
>> Evgeny: fuel client is supposed to work from other nodes, and we will need
>> additional verification code in there. Also, to make it smooth, we will have
>> to end up with a few more checks - like what if tarball is broken, what if
>> we can't find install script in it, etc.
>> I'd suggest to run it simple for 6.0, and then we will see how it's being
>> used and what other limitations / issues we have around plugin installation
>> and usage. We can consider to make this functionality as part of fuel client
>> a bit later.
>>
>> Thanks,
>>
>> On Tue, Oct 21, 2014 at 6:57 PM, Vitaly Kramskikh
>>  wrote:
>>>
>>> Hi,
>>>
>>> As for a separate section for plugins, I think we should not force it and
>>> leave this decision to a plugin developer, so he can create just a single
>>> checkbox or a section of the settings tab or a separate tab depending on
>>> plugin functionality. Plugins should be able to modify arbitrary release
>>> fields. For example, if Ceph was a plugin, it should be able to extend
>>> wizard config to add new options to Storage pane. If vCenter was a plugin,
>>> it should be able to set maximum amount of Compute nodes to 0.
>>>
>>> 2014-10-20 21:21 GMT+07:00 Evgeniy L :
>>

Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Vadivel Poonathan
On Thu, Oct 23, 2014 at 9:49 AM, Edgar Magana 
wrote:

>  I forgot to mention that I can help to coordinate the creation and
> maintenance of the wiki for non-upstreamed drivers for Neutron.
>
>>[vad] Edgar, that would be nice!... but not sure whether it has to wait
till the outcome of the design discussion on this topic in the upcoming
summit??!...

Thanks,
Vad
--


> We need to be sure that we DO NOT confuse users with the current
> information here:
> https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers
>
>  I have been maintaining that wiki and I would like to keep just for
> upstreamed vendor-specific plugins/drivers.
>
>  Edgar
>
>   From: Edgar Magana 
> Date: Thursday, October 23, 2014 at 9:46 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>, Kyle Mestery 
>
> Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update
> about new vendor plugin, but without code in repository?
>
>   I second Anne’s and Kyle comments. Actually, I like very much the wiki
> part to provide some visibility for out-of-tree plugins/drivers but not
> into the official documentation.
>
>  Thanks,
>
>  Edgar
>
>   From: Anne Gentle 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, October 23, 2014 at 8:51 AM
> To: Kyle Mestery 
> Cc: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update
> about new vendor plugin, but without code in repository?
>
>
>
> On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery 
> wrote:
>
>> Vad:
>>
>> The third-party CI is required for your upstream driver. I think
>> what's different from my reading of this thread is the question of
>> what is the requirement to have a driver listed in the upstream
>> documentation which is not in the upstream codebase. To my knowledge,
>> we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
>> documentation to document drivers which are themselves not upstream.
>> When we split out the drivers which are currently upstream in neutron
>> into a separate repo, they will still be upstream. So my opinion here
>> is that if your driver is not upstream, it shouldn't be in the
>> upstream documentation. But I'd like to hear others opinions as well.
>>
>>
>  This is my sense as well.
>
>  The hypervisor drivers are documented on the wiki, sometimes they're
> in-tree, sometimes they're not, but the state of testing is documented on
> the wiki. I think we could take this approach for network and storage
> drivers as well.
>
>  https://wiki.openstack.org/wiki/HypervisorSupportMatrix
>
>  Anne
>
>
>> Thanks,
>> Kyle
>>
>> On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
>>   wrote:
>> > Kyle,
>> > Gentle reminder... when you get a chance!..
>> >
>> > Anne,
>> > In case, if i need to send it to different group or email-id to reach
>> Kyle
>> > Mestery, pls. let me know. Thanks for your help.
>> >
>> > Regards,
>> > Vad
>> > --
>> >
>> >
>> > On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
>> >  wrote:
>> >>
>> >> Hi Kyle,
>> >>
>> >> Can you pls. comment on this discussion and confirm the requirements
>> for
>> >> getting out-of-tree mechanism_driver listed in the supported
>> plugin/driver
>> >> list of the Openstack Neutron docs.
>> >>
>> >> Thanks,
>> >> Vad
>> >> --
>> >>
>> >> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle 
>> wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
>> >>>  wrote:
>> 
>>  Hi,
>> 
>>   On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton <
>> blak...@gmail.com>
>>   wrote:
>>  >
>>  > I think you will probably have to wait until after the summit
>> so
>>  > we can
>>  > see the direction that will be taken with the rest of the
>> in-tree
>>  > drivers/plugins. It seems like we are moving towards removing
>> all
>>  > of them so
>>  > we would definitely need a solution to documenting out-of-tree
>>  > drivers as
>>  > you suggested.
>> 
>>  [Vad] while i 'm waiting for the conclusion on this subject, i 'm
>> trying
>>  to setup the third-party CI/Test system and meet its requirements to
>> get my
>>  mechanism_driver listed in the Kilo's documentation, in parallel.
>> 
>>  Couple of questions/confirmations before i proceed further on this
>>  direction...
>> 
>>  1) Is there anything more required other than the third-party CI/Test
>>  requirements ??.. like should I still need to go-through the entire
>>  development process of submit/review/approval of the blue-print and
>> code of
>>  my ML2 driver which was already developed and in-use?...
>> 
>> >>>
>> >>> The neutron PTL Kyle Mestery can answer if there are any additional
>> >>> requirements.
>> >>>
>> 
>>  2

Re: [openstack-dev] [Nova] questions on object/db usage

2014-10-23 Thread Dan Smith
>When I fix some bugs, I found that some code in
> nova/compute/api.py
>   sometimes we use db ,sometimes we use objects do we have
> any criteria for it? I knew we can't access db in compute layer code,
> how about others ? prefer object or db direct access? thanks

Prefer objects, and any remaining db.* usage anywhere (other than the
object code itself) is not only a candidate for cleanup, it's much
appreciated :)

--Dan



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Kyle Mestery
On Thu, Oct 23, 2014 at 12:35 PM, Vadivel Poonathan
 wrote:
> Hi Kyle and Anne,
>
> Thanks for the clarifications... understood and it makes sense.
>
> However, per my understanding, the drivers (aka plugins) are meant to be
> developed and supported by third-party vendors, outside of the OpenStack
> community, and they are supposed to work as plug-n-play... they are not part
> of the core OpenStack development, nor any of its components. If that is the
> case, then why should OpenStack community include and maintain them as part
> of it, for every release?...  Wouldnt it be enough to limit the scope with
> the plugin framework and built-in drivers such as LinuxBridge or OVS etc?...
> not extending to commercial vendors?...  (It is just a curious question,
> forgive me if i missed something and correct me!).
>
You haven't misunderstood anything, we're in the process of splitting
these things out, and this will be a prime focus of the Neutron design
summit track at the upcoming summit.

Thanks,
Kyle

> At the same time, IMHO, there must be some reference or a page within the
> scope of OpenStack documentation (not necessarily the core docs, but some
> wiki page or reference link or so - as Anne suggested) to mention the list
> of the drivers/plugins supported as of given release and may be an external
> link to know more details about the driver, if the link is provided by
> respective vendor.
>
>
> Anyway, besides my opinion, the wiki page similar to hypervisor driver would
> be good for now atleast, until the direction/policy level decision is made
> to maintain out-of-tree plugins/drivers.
>
>
> Thanks,
> Vad
> --
>
>
>
>
> On Thu, Oct 23, 2014 at 9:46 AM, Edgar Magana 
> wrote:
>>
>> I second Anne’s and Kyle comments. Actually, I like very much the wiki
>> part to provide some visibility for out-of-tree plugins/drivers but not into
>> the official documentation.
>>
>> Thanks,
>>
>> Edgar
>>
>> From: Anne Gentle 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Thursday, October 23, 2014 at 8:51 AM
>> To: Kyle Mestery 
>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update
>> about new vendor plugin, but without code in repository?
>>
>>
>>
>> On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery 
>> wrote:
>>>
>>> Vad:
>>>
>>> The third-party CI is required for your upstream driver. I think
>>> what's different from my reading of this thread is the question of
>>> what is the requirement to have a driver listed in the upstream
>>> documentation which is not in the upstream codebase. To my knowledge,
>>> we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
>>> documentation to document drivers which are themselves not upstream.
>>> When we split out the drivers which are currently upstream in neutron
>>> into a separate repo, they will still be upstream. So my opinion here
>>> is that if your driver is not upstream, it shouldn't be in the
>>> upstream documentation. But I'd like to hear others opinions as well.
>>>
>>
>> This is my sense as well.
>>
>> The hypervisor drivers are documented on the wiki, sometimes they're
>> in-tree, sometimes they're not, but the state of testing is documented on
>> the wiki. I think we could take this approach for network and storage
>> drivers as well.
>>
>> https://wiki.openstack.org/wiki/HypervisorSupportMatrix
>>
>> Anne
>>
>>>
>>> Thanks,
>>> Kyle
>>>
>>> On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
>>>  wrote:
>>> > Kyle,
>>> > Gentle reminder... when you get a chance!..
>>> >
>>> > Anne,
>>> > In case, if i need to send it to different group or email-id to reach
>>> > Kyle
>>> > Mestery, pls. let me know. Thanks for your help.
>>> >
>>> > Regards,
>>> > Vad
>>> > --
>>> >
>>> >
>>> > On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
>>> >  wrote:
>>> >>
>>> >> Hi Kyle,
>>> >>
>>> >> Can you pls. comment on this discussion and confirm the requirements
>>> >> for
>>> >> getting out-of-tree mechanism_driver listed in the supported
>>> >> plugin/driver
>>> >> list of the Openstack Neutron docs.
>>> >>
>>> >> Thanks,
>>> >> Vad
>>> >> --
>>> >>
>>> >> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle 
>>> >> wrote:
>>> >>>
>>> >>>
>>> >>>
>>> >>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
>>> >>>  wrote:
>>> 
>>>  Hi,
>>> 
>>>   On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton
>>>   
>>>   wrote:
>>>  >
>>>  > I think you will probably have to wait until after the summit
>>>  > so
>>>  > we can
>>>  > see the direction that will be taken with the rest of the
>>>  > in-tree
>>>  > drivers/plugins. It seems like we are moving towards removing
>>>  > all
>>>  > of them so
>>>  > we would definitely need a solution to documenting out-of-tree
>>>  > drivers as
>>>  > you suggested.
>>> 
>>> >

Re: [openstack-dev] [Nova] Cells conversation starter

2014-10-23 Thread Andrew Laski


On 10/22/2014 08:11 PM, Sam Morrison wrote:

On 23 Oct 2014, at 5:55 am, Andrew Laski  wrote:


While I agree that N is a bit interesting, I have seen N=3 in production

[central API]-->[state/region1]-->[state/region DC1]
\->[state/region DC2]
   -->[state/region2 DC]
   -->[state/region3 DC]
   -->[state/region4 DC]

I would be curious to hear any information about how this is working out.  Does 
everything that works for N=2 work when N=3?  Are there fixes that needed to be 
added to make this work?  Why do it this way rather than bring [state/region 
DC1] and [state/region DC2] up a level?

We (NeCTAR) have 3 tiers, our current setup has one parent, 6 children then 3 
of the children have 2 grandchildren each. All compute nodes are at the lowest 
level.

Everything works fine and we haven’t needed to do any modifications.

We run in a 3 tier system because it matches how our infrastructure is 
logically laid out, but I don’t see a problem in just having a 2 tier system 
and getting rid of the middle man.


There's no reason an N-tier system where N > 2 shouldn't be feasible, 
but it's not going to be tested in this initial effort. So while we will 
try not to break it, it's hard to guarantee that. That's why my 
preference would be to remove that code and build up an N-tier system in 
conjunction with testing later.  But with a clear user of this 
functionality I don't think that's an option.




Sam


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] Convergence prototyping

2014-10-23 Thread Zane Bitter

Hi folks,
I've been looking at the convergence stuff, and become a bit concerned 
that we're more or less flying blind (or at least I have been) in trying 
to figure out the design, and also that some of the first implementation 
efforts seem to be around the stuff that is _most_ expensive to change 
(e.g. database schemata).


What we really want is to experiment on stuff that is cheap to change 
with a view to figuring out the big picture without having to iterate on 
the expensive stuff. To that end, I started last week to write a little 
prototype system to demonstrate the concepts of convergence. (Note that 
none of this code is intended to end up in Heat!) You can find the code 
here:


https://github.com/zaneb/heat-convergence-prototype

Note that this is a *very* early prototype. At the moment it can create 
resources, and not much else. I plan to continue working on it to 
implement updates and so forth. My hope is that we can develop a test 
framework and scenarios around this that can eventually be transplanted 
into Heat's functional tests. So the prototype code is throwaway, but 
the tests we might write against it in future should be useful.


I'd like to encourage anyone who needs to figure out any part of the 
design of convergence to fork the repo and try out some alternatives - 
it should be very lightweight to do so. I will also entertain pull 
requests (though I see my branch primarily as a vehicle for my own 
learning at this early stage, so if you want to go in a different 
direction it may be best to do so on your own branch), and the issue 
tracker is enabled if there is something you want to track.


I have learned a bunch of stuff already:

* The proposed spec for persisting the dependency graph 
(https://review.openstack.org/#/c/123749/1) is really well done. Kudos 
to Anant and the other folks who had input to it. I have left comments 
based on what I learned so far from trying it out.



* We should isolate the problem of merging two branches of execution 
(i.e. knowing when to trigger a check on one resource that depends on 
multiple others). Either in a library (like taskflow) or just a separate 
database table (like my current prototype). Baking it into the 
orchestration algorithms (e.g. by marking nodes in the dependency graph) 
would be a colossal mistake IMHO.



* Our overarching plan is backwards.

There are two quite separable parts to this architecture - the worker 
and the observer. Up until now, we have been assuming that implementing 
the observer would be the first step. Originally we thought that this 
would give us the best incremental benefits. At the mid-cycle meetup we 
came to the conclusion that there were actually no real incremental 
benefits to be had until everything was close to completion. I am now of 
the opinion that we had it exactly backwards - the observer 
implementation should come last. That will allow us to deliver 
incremental benefits from the observer sooner.


The problem with the observer is that it requires new plugins. (That 
sucks BTW, because a lot of the value of Heat is in having all of these 
tested, working plugins. I'd love it if we could take the opportunity to 
design a plugin framework such that plugins would require much less 
custom code, but it looks like a really hard job.) Basically this means 
that convergence would be stalled until we could rewrite all the 
plugins. I think it's much better to implement a first stage that can 
work with existing plugins *or* the new ones we'll eventually have with 
the observer. That allows us to get some benefits soon and further 
incremental benefits as we convert plugins one at a time. It should also 
mean a transition period (possibly with a performance penalty) for 
existing plugin authors, and for things like HARestarter (can we please 
please deprecate it now?).


So the two phases I'm proposing are:
 1. (Workers) Distribute tasks for individual resources among workers; 
implement update-during-update (no more locking).
 2. (Observers) Compare against real-world values instead of template 
values to determine when updates are needed. Make use of notifications 
and such.


I believe it's quite realistic to aim to get #1 done for Kilo. There 
could also be a phase 1.5, where we use the existing stack-check 
mechanism to detect the most egregious divergences between template and 
reality (e.g. whole resource is missing should be easy-ish). I think 
this means that we could have a feasible Autoscaling API for Kilo if 
folks step up to work on it - and in any case now is the time to start 
on that to avoid it being delayed more than it needs to be based purely 
on the availability of underlying features. That's why I proposed a 
session on Autoscaling for the design summit.



* This thing is going to _hammer_ the database

The advantage is that we'll be able to spread the access across an 
arbitrary number of workers, but it's still going to be brutal because 
there's only one dat

[openstack-dev] [Keystone] python-keystoneclient release 0.11.2

2014-10-23 Thread Morgan Fainberg
The Keystone team has released python-keystoneclient 0.11.2 [1]. This version 
includes a number of bug fixes.

Details of new features and bug fixes included in the 0.11.2 release of 
python-keystoneclient can be found on the milestone information page [2].


[1] https://pypi.python.org/pypi/python-keystoneclient/0.11.2
[2] https://launchpad.net/python-keystoneclient/+milestone/0.11.2
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Vadivel Poonathan
Hi Kyle and Anne,

Thanks for the clarifications... understood and it makes sense.

However, per my understanding, the drivers (aka plugins) are meant to be
developed and supported by third-party vendors, outside of the OpenStack
community, and they are supposed to work as plug-n-play... they are not
part of the core OpenStack development, nor any of its components. If that
is the case, then why should OpenStack community include and maintain them
as part of it, for every release?...  Wouldnt it be enough to limit the
scope with the plugin framework and built-in drivers such as LinuxBridge or
OVS etc?... not extending to commercial vendors?...  (It is just a curious
question, forgive me if i missed something and correct me!).

At the same time, IMHO, there must be some reference or a page within the
scope of OpenStack documentation (not necessarily the core docs, but some
wiki page or reference link or so - as Anne suggested) to mention the list
of the drivers/plugins supported as of given release and may be an external
link to know more details about the driver, if the link is provided by
respective vendor.


Anyway, besides my opinion, the wiki page similar to hypervisor driver
would be good for now atleast, until the direction/policy level decision is
made to maintain out-of-tree plugins/drivers.


Thanks,
Vad
--




On Thu, Oct 23, 2014 at 9:46 AM, Edgar Magana 
wrote:

>  I second Anne’s and Kyle comments. Actually, I like very much the wiki
> part to provide some visibility for out-of-tree plugins/drivers but not
> into the official documentation.
>
>  Thanks,
>
>  Edgar
>
>   From: Anne Gentle 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, October 23, 2014 at 8:51 AM
> To: Kyle Mestery 
> Cc: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update
> about new vendor plugin, but without code in repository?
>
>
>
> On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery 
> wrote:
>
>> Vad:
>>
>> The third-party CI is required for your upstream driver. I think
>> what's different from my reading of this thread is the question of
>> what is the requirement to have a driver listed in the upstream
>> documentation which is not in the upstream codebase. To my knowledge,
>> we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
>> documentation to document drivers which are themselves not upstream.
>> When we split out the drivers which are currently upstream in neutron
>> into a separate repo, they will still be upstream. So my opinion here
>> is that if your driver is not upstream, it shouldn't be in the
>> upstream documentation. But I'd like to hear others opinions as well.
>>
>>
>  This is my sense as well.
>
>  The hypervisor drivers are documented on the wiki, sometimes they're
> in-tree, sometimes they're not, but the state of testing is documented on
> the wiki. I think we could take this approach for network and storage
> drivers as well.
>
>  https://wiki.openstack.org/wiki/HypervisorSupportMatrix
>
>  Anne
>
>
>> Thanks,
>> Kyle
>>
>> On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
>>   wrote:
>> > Kyle,
>> > Gentle reminder... when you get a chance!..
>> >
>> > Anne,
>> > In case, if i need to send it to different group or email-id to reach
>> Kyle
>> > Mestery, pls. let me know. Thanks for your help.
>> >
>> > Regards,
>> > Vad
>> > --
>> >
>> >
>> > On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
>> >  wrote:
>> >>
>> >> Hi Kyle,
>> >>
>> >> Can you pls. comment on this discussion and confirm the requirements
>> for
>> >> getting out-of-tree mechanism_driver listed in the supported
>> plugin/driver
>> >> list of the Openstack Neutron docs.
>> >>
>> >> Thanks,
>> >> Vad
>> >> --
>> >>
>> >> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle 
>> wrote:
>> >>>
>> >>>
>> >>>
>> >>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
>> >>>  wrote:
>> 
>>  Hi,
>> 
>>   On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton <
>> blak...@gmail.com>
>>   wrote:
>>  >
>>  > I think you will probably have to wait until after the summit
>> so
>>  > we can
>>  > see the direction that will be taken with the rest of the
>> in-tree
>>  > drivers/plugins. It seems like we are moving towards removing
>> all
>>  > of them so
>>  > we would definitely need a solution to documenting out-of-tree
>>  > drivers as
>>  > you suggested.
>> 
>>  [Vad] while i 'm waiting for the conclusion on this subject, i 'm
>> trying
>>  to setup the third-party CI/Test system and meet its requirements to
>> get my
>>  mechanism_driver listed in the Kilo's documentation, in parallel.
>> 
>>  Couple of questions/confirmations before i proceed further on this
>>  direction...
>> 
>>  1) Is there anyth

Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Edgar Magana
I forgot to mention that I can help to coordinate the creation and maintenance 
of the wiki for non-upstreamed drivers for Neutron.
We need to be sure that we DO NOT confuse users with the current information 
here:
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers

I have been maintaining that wiki and I would like to keep just for upstreamed 
vendor-specific plugins/drivers.

Edgar

From: Edgar Magana mailto:edgar.mag...@workday.com>>
Date: Thursday, October 23, 2014 at 9:46 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>, 
Kyle Mestery mailto:mest...@mestery.com>>
Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update about 
new vendor plugin, but without code in repository?

I second Anne’s and Kyle comments. Actually, I like very much the wiki part to 
provide some visibility for out-of-tree plugins/drivers but not into the 
official documentation.

Thanks,

Edgar

From: Anne Gentle mailto:a...@openstack.org>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 23, 2014 at 8:51 AM
To: Kyle Mestery mailto:mest...@mestery.com>>
Cc: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update about 
new vendor plugin, but without code in repository?



On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery 
mailto:mest...@mestery.com>> wrote:
Vad:

The third-party CI is required for your upstream driver. I think
what's different from my reading of this thread is the question of
what is the requirement to have a driver listed in the upstream
documentation which is not in the upstream codebase. To my knowledge,
we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
documentation to document drivers which are themselves not upstream.
When we split out the drivers which are currently upstream in neutron
into a separate repo, they will still be upstream. So my opinion here
is that if your driver is not upstream, it shouldn't be in the
upstream documentation. But I'd like to hear others opinions as well.


This is my sense as well.

The hypervisor drivers are documented on the wiki, sometimes they're in-tree, 
sometimes they're not, but the state of testing is documented on the wiki. I 
think we could take this approach for network and storage drivers as well.

https://wiki.openstack.org/wiki/HypervisorSupportMatrix

Anne

Thanks,
Kyle

On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
mailto:vadivel.openst...@gmail.com>> wrote:
> Kyle,
> Gentle reminder... when you get a chance!..
>
> Anne,
> In case, if i need to send it to different group or email-id to reach Kyle
> Mestery, pls. let me know. Thanks for your help.
>
> Regards,
> Vad
> --
>
>
> On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
> mailto:vadivel.openst...@gmail.com>> wrote:
>>
>> Hi Kyle,
>>
>> Can you pls. comment on this discussion and confirm the requirements for
>> getting out-of-tree mechanism_driver listed in the supported plugin/driver
>> list of the Openstack Neutron docs.
>>
>> Thanks,
>> Vad
>> --
>>
>> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle 
>> mailto:a...@openstack.org>> wrote:
>>>
>>>
>>>
>>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
>>> mailto:vadivel.openst...@gmail.com>> wrote:

 Hi,

  On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton 
  mailto:blak...@gmail.com>>
  wrote:
 >
 > I think you will probably have to wait until after the summit so
 > we can
 > see the direction that will be taken with the rest of the in-tree
 > drivers/plugins. It seems like we are moving towards removing all
 > of them so
 > we would definitely need a solution to documenting out-of-tree
 > drivers as
 > you suggested.

 [Vad] while i 'm waiting for the conclusion on this subject, i 'm trying
 to setup the third-party CI/Test system and meet its requirements to get my
 mechanism_driver listed in the Kilo's documentation, in parallel.

 Couple of questions/confirmations before i proceed further on this
 direction...

 1) Is there anything more required other than the third-party CI/Test
 requirements ??.. like should I still need to go-through the entire
 development process of submit/review/approval of the blue-print and code of
 my ML2 driver which was already developed and in-use?...

>>>
>>> The neutron PTL Kyle Mestery can answer if there are any additional
>>> requirements.
>>>

 2) Who is the authority to clarify and confirm the above (and how do i
 contact them)?...
>>>
>>>
>>> Elections just completed, and the newly elected PTL is Kyle Mestery,
>>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html.
>>>


 Thanks again for y

Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Edgar Magana
I second Anne’s and Kyle comments. Actually, I like very much the wiki part to 
provide some visibility for out-of-tree plugins/drivers but not into the 
official documentation.

Thanks,

Edgar

From: Anne Gentle mailto:a...@openstack.org>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, October 23, 2014 at 8:51 AM
To: Kyle Mestery mailto:mest...@mestery.com>>
Cc: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update about 
new vendor plugin, but without code in repository?



On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery 
mailto:mest...@mestery.com>> wrote:
Vad:

The third-party CI is required for your upstream driver. I think
what's different from my reading of this thread is the question of
what is the requirement to have a driver listed in the upstream
documentation which is not in the upstream codebase. To my knowledge,
we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
documentation to document drivers which are themselves not upstream.
When we split out the drivers which are currently upstream in neutron
into a separate repo, they will still be upstream. So my opinion here
is that if your driver is not upstream, it shouldn't be in the
upstream documentation. But I'd like to hear others opinions as well.


This is my sense as well.

The hypervisor drivers are documented on the wiki, sometimes they're in-tree, 
sometimes they're not, but the state of testing is documented on the wiki. I 
think we could take this approach for network and storage drivers as well.

https://wiki.openstack.org/wiki/HypervisorSupportMatrix

Anne

Thanks,
Kyle

On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
mailto:vadivel.openst...@gmail.com>> wrote:
> Kyle,
> Gentle reminder... when you get a chance!..
>
> Anne,
> In case, if i need to send it to different group or email-id to reach Kyle
> Mestery, pls. let me know. Thanks for your help.
>
> Regards,
> Vad
> --
>
>
> On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
> mailto:vadivel.openst...@gmail.com>> wrote:
>>
>> Hi Kyle,
>>
>> Can you pls. comment on this discussion and confirm the requirements for
>> getting out-of-tree mechanism_driver listed in the supported plugin/driver
>> list of the Openstack Neutron docs.
>>
>> Thanks,
>> Vad
>> --
>>
>> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle 
>> mailto:a...@openstack.org>> wrote:
>>>
>>>
>>>
>>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
>>> mailto:vadivel.openst...@gmail.com>> wrote:

 Hi,

  On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton 
  mailto:blak...@gmail.com>>
  wrote:
 >
 > I think you will probably have to wait until after the summit so
 > we can
 > see the direction that will be taken with the rest of the in-tree
 > drivers/plugins. It seems like we are moving towards removing all
 > of them so
 > we would definitely need a solution to documenting out-of-tree
 > drivers as
 > you suggested.

 [Vad] while i 'm waiting for the conclusion on this subject, i 'm trying
 to setup the third-party CI/Test system and meet its requirements to get my
 mechanism_driver listed in the Kilo's documentation, in parallel.

 Couple of questions/confirmations before i proceed further on this
 direction...

 1) Is there anything more required other than the third-party CI/Test
 requirements ??.. like should I still need to go-through the entire
 development process of submit/review/approval of the blue-print and code of
 my ML2 driver which was already developed and in-use?...

>>>
>>> The neutron PTL Kyle Mestery can answer if there are any additional
>>> requirements.
>>>

 2) Who is the authority to clarify and confirm the above (and how do i
 contact them)?...
>>>
>>>
>>> Elections just completed, and the newly elected PTL is Kyle Mestery,
>>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html.
>>>


 Thanks again for your inputs...

 Regards,
 Vad
 --

 On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle 
 mailto:a...@openstack.org>> wrote:
>
>
>
> On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan
> mailto:vadivel.openst...@gmail.com>> wrote:
>>
>> Agreed on the requirements of test results to qualify the vendor
>> plugin to be listed in the upstream docs.
>> Is there any procedure/infrastructure currently available for this
>> purpose?..
>> Pls. fwd any link/pointers on those info.
>>
>
> Here's a link to the third-party testing setup information.
>
> http://ci.openstack.org/third_party.html
>
> Feel free to keep asking questions as you dig deeper.
> Thanks,
> Anne
>
>>

Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Kevin L. Mitchell
On Thu, 2014-10-23 at 18:56 +0300, Andrey Kurilin wrote:
> Just a joke: Can we drop supporting Python 2.6, when several project
> still have hooks for Python 2.4?
> 
> https://github.com/openstack/python-novaclient/blob/master/novaclient/exceptions.py#L195-L203
> https://github.com/openstack/python-cinderclient/blob/master/cinderclient/exceptions.py#L147-L155

It may have been intended as a joke, but it's worth pointing out that
the Xen plugins for nova (at least) have to be compatible with Python
2.4, because they run on the Xenserver, which has an antiquated Python
installed :)

As for the clients, we could probably drop that segment now; it's not
like we *test* against 2.4, right?  :)
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Andrey Kurilin
Just a joke: Can we drop supporting Python 2.6, when several project still
have hooks for Python 2.4?

https://github.com/openstack/python-novaclient/blob/master/novaclient/exceptions.py#L195-L203
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/exceptions.py#L147-L155

On Wed, Oct 22, 2014 at 9:15 PM, Doug Hellmann 
wrote:

> The application projects are dropping python 2.6 support during Kilo, and
> I’ve had several people ask recently about what this means for Oslo.
> Because we create libraries that will be used by stable versions of
> projects that still need to run on 2.6, we are going to need to maintain
> support for 2.6 in Oslo until Juno is no longer supported, at least for
> some of our projects. After Juno’s support period ends we can look again at
> dropping 2.6 support in all of the projects.
>
>
> I think these rules cover all of the cases we have:
>
> 1. Any Oslo library in use by an API client that is used by a supported
> stable branch (Icehouse and Juno) needs to keep 2.6 support.
>
> 2. If a client library needs a library we graduate from this point
> forward, we will need to ensure that library supports 2.6.
>
> 3. Any Oslo library used directly by a supported stable branch of an
> application needs to keep 2.6 support.
>
> 4. Any Oslo library graduated during Kilo can drop 2.6 support, unless one
> of the previous rules applies.
>
> 5. The stable/icehouse and stable/juno branches of the incubator need to
> retain 2.6 support for as long as those versions are supported.
>
> 6. The master branch of the incubator needs to retain 2.6 support until we
> graduate all of the modules that will go into libraries used by clients.
>
>
> A few examples:
>
> - oslo.utils was graduated during Juno and is used by some of the client
> libraries, so it needs to maintain python 2.6 support.
>
> - oslo.config was graduated several releases ago and is used directly by
> the stable branches of the server projects, so it needs to maintain python
> 2.6 support.
>
> - oslo.log is being graduated in Kilo and is not yet in use by any
> projects, so it does not need python 2.6 support.
>
> - oslo.cliutils and oslo.apiclient are on the list to graduate in Kilo,
> but both are used by client projects, so they need to keep python 2.6
> support. At that point we can evaluate the code that remains in the
> incubator and see if we’re ready to turn of 2.6 support there.
>
>
> Let me know if you have questions about any specific cases not listed in
> the examples.
>
> Doug
>
> PS - Thanks to fungi and clarkb for helping work out the rules above.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Anne Gentle
On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery  wrote:

> Vad:
>
> The third-party CI is required for your upstream driver. I think
> what's different from my reading of this thread is the question of
> what is the requirement to have a driver listed in the upstream
> documentation which is not in the upstream codebase. To my knowledge,
> we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
> documentation to document drivers which are themselves not upstream.
> When we split out the drivers which are currently upstream in neutron
> into a separate repo, they will still be upstream. So my opinion here
> is that if your driver is not upstream, it shouldn't be in the
> upstream documentation. But I'd like to hear others opinions as well.
>
>
This is my sense as well.

The hypervisor drivers are documented on the wiki, sometimes they're
in-tree, sometimes they're not, but the state of testing is documented on
the wiki. I think we could take this approach for network and storage
drivers as well.

https://wiki.openstack.org/wiki/HypervisorSupportMatrix

Anne


> Thanks,
> Kyle
>
> On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
>  wrote:
> > Kyle,
> > Gentle reminder... when you get a chance!..
> >
> > Anne,
> > In case, if i need to send it to different group or email-id to reach
> Kyle
> > Mestery, pls. let me know. Thanks for your help.
> >
> > Regards,
> > Vad
> > --
> >
> >
> > On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
> >  wrote:
> >>
> >> Hi Kyle,
> >>
> >> Can you pls. comment on this discussion and confirm the requirements for
> >> getting out-of-tree mechanism_driver listed in the supported
> plugin/driver
> >> list of the Openstack Neutron docs.
> >>
> >> Thanks,
> >> Vad
> >> --
> >>
> >> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle 
> wrote:
> >>>
> >>>
> >>>
> >>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
> >>>  wrote:
> 
>  Hi,
> 
>   On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton  >
>   wrote:
>  >
>  > I think you will probably have to wait until after the summit so
>  > we can
>  > see the direction that will be taken with the rest of the
> in-tree
>  > drivers/plugins. It seems like we are moving towards removing
> all
>  > of them so
>  > we would definitely need a solution to documenting out-of-tree
>  > drivers as
>  > you suggested.
> 
>  [Vad] while i 'm waiting for the conclusion on this subject, i 'm
> trying
>  to setup the third-party CI/Test system and meet its requirements to
> get my
>  mechanism_driver listed in the Kilo's documentation, in parallel.
> 
>  Couple of questions/confirmations before i proceed further on this
>  direction...
> 
>  1) Is there anything more required other than the third-party CI/Test
>  requirements ??.. like should I still need to go-through the entire
>  development process of submit/review/approval of the blue-print and
> code of
>  my ML2 driver which was already developed and in-use?...
> 
> >>>
> >>> The neutron PTL Kyle Mestery can answer if there are any additional
> >>> requirements.
> >>>
> 
>  2) Who is the authority to clarify and confirm the above (and how do i
>  contact them)?...
> >>>
> >>>
> >>> Elections just completed, and the newly elected PTL is Kyle Mestery,
> >>>
> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html.
> >>>
> 
> 
>  Thanks again for your inputs...
> 
>  Regards,
>  Vad
>  --
> 
>  On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle 
> wrote:
> >
> >
> >
> > On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan
> >  wrote:
> >>
> >> Agreed on the requirements of test results to qualify the vendor
> >> plugin to be listed in the upstream docs.
> >> Is there any procedure/infrastructure currently available for this
> >> purpose?..
> >> Pls. fwd any link/pointers on those info.
> >>
> >
> > Here's a link to the third-party testing setup information.
> >
> > http://ci.openstack.org/third_party.html
> >
> > Feel free to keep asking questions as you dig deeper.
> > Thanks,
> > Anne
> >
> >>
> >> Thanks,
> >> Vad
> >> --
> >>
> >> On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki  >
> >> wrote:
> >>>
> >>> I agree with Kevin and Kyle. Even if we decided to use separate
> tree
> >>> for neutron
> >>> plugins and drivers, they still will be regarded as part of the
> >>> upstream.
> >>> These plugins/drivers need to prove they are well integrated with
> >>> Neutron master
> >>> in some way and gating integration proves it is well tested and
> >>> integrated.
> >>> I believe it is a reasonable assumption and requirement that a
> vendor
> >>> plugin/driver
> >>> is listed in the upstream docs. This is a same kind of question

Re: [openstack-dev] [Fuel] Fuel standards

2014-10-23 Thread Anton Zemlyanov
I have another example, nailgun and UI are bundled in FuelWeb being quite
independent components. Nailgun is python REST API, while UI is HTML/CSS/JS
+ libs. I also support the idea making CLI a separate project, it is
similar to FuelWeb UI, it uses the same REST API. Fuelclient lib is also
good idea, REPL can be separated from command execution logic.

Multiple simple components are usually easier to maintain, bigger
components tend to become complex and tightly coupled.

I also fully support standards of naming files and directories, although it
relates to Python stuff mostly.

Anton Zemlyanov


> 1) Standard for an architecture.
> Most of OpenStack services are split into several independent parts
> (raughly service-api, serivce-engine, python-serivceclient) and those parts
> interact with each other via REST and AMQP. python-serivceclient is usually
> located in a separate repository. Do we actually need to do the same for
> Fuel? According to fuelclient it means it should be moved into a separate
> repository. Fortunately, it already uses REST API for interacting with
> nailgun. But it should be possible to use it not only as a CLI tool, but
> also as a library.
>
> 2) Standard for project directory structure (directory names for api, db
> models,  drivers, cli related code, plugins, common code, etc.)
> Do we actually need to standardize a directory structure?
>
> 3) Standard for third party libraries
> As far as Fuel is a deployment tool for OpenStack, let's make a decision
> about using OpenStack components wherever it is possible.
> 3.1) oslo.config for configuring.
> 3.2) oslo.db for database layer
> 3.3) oslo.messaging for AMQP layer
> 3.4) cliff for CLI (should we refactor fuelclient so as to make based on
> cliff?)
> 3.5) oslo.log for logging
> 3.6) stevedore for plugins
> etc.
> What about third party components which are not OpenStack related? What
> could be the requirements for an arbitrary PyPi package?
>
> 4) Standard for testing.
> It requires a separate discussion.
>
> 5) Standard for documentation.
> It requires a separate discussion.
>
>
> Vladimir Kozhukalov
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] kilo design session

2014-10-23 Thread Tim Hinrichs
Works for me. 

Tim

P. S. Pardon the brevity. Sent from my mobile. 

> On Oct 22, 2014, at 5:01 PM, "Sean Roberts"  wrote:
> 
> We are scheduled for Monday, 03 Nov, 14:30 - 16:00. I have a conflict with 
> the “Meet the Influencers” talk that runs from 14:30-18:30, plus the GBP 
> session is on Tuesday, 04 Nov, 12:05-12:45. I was thinking we would want to 
> co-located the Congress and GBP talks as much as possible.
> 
> The BOSH team has the Tuesday, 04 Nov, 16:40-18:10 slot and wants to switch. 
> 
> Does this switch work for everyone?
> 
> Maybe we can get some space in one of the pods or cross-project workshops on 
> Tuesday between the GBP and the potential Congress session to make it even 
> more better.
> 
> ~sean
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] questions on object/db usage

2014-10-23 Thread Chen CH Ji

Hi
   When I fix some bugs, I found that some code in
nova/compute/api.py

  sometimes we use db ,sometimes we use objects do we have any
criteria for it? I knew we can't access db in compute layer code, how about
others ? prefer object or db direct access? thanks

def service_delete(self, context, service_id):
"""Deletes the specified service."""
objects.Service.get_by_id(context, service_id).destroy()

def instance_get_all_by_host(self, context, host_name):
"""Return all instances on the given host."""
return self.db.instance_get_all_by_host(context, host_name)

def compute_node_get_all(self, context):
return self.db.compute_node_get_all(context)

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pluggable framework in Fuel: first prototype ready

2014-10-23 Thread Evgeniy L
Hi Mike,

I would like to add a bit more details about current implementation and how
it can be done.

*Implement installation as a scripts inside of tar ball:*
Cons:
* install script is really simple right now, but it will be much more
complicated
** it requires to implement logic where we can ask user for login/password
** use some config, where we will be able to get endpoints, like where is
keystone, nailgun
** validate that it's possible to install plugin on the current version of
master
** handle error cases (to make installation process more atomic)
* it will be impossible to deprecate the installation logic/method, because
it's on the plugin's side
  and you cannot change a plugin which user downloaded some times ago, when
we get
  plugin manager, we probably would like user to use plugin manager,
instead of some scripts
* plugin installation process is not so simple as it could be (untar, cd
plugin, ./install)

Pros:
* plugin developer can change installation scripts (I'm not sure if it's a
pros)

*Add installation to fuel client:*
Cons:
* requires changes in fuel client, which are not good for fuel client by
design (fuel client
  should be able to work remotely from user's machine), current
implementation requires
  local operations on files, it will be changed in the future releases, so
fuel-client will
  be able to do it via api, also we can determine if it's not master node
by /etc/fuel/version.yaml
  and show the user a message which says that in the current version it's
not possible
  to install the plugin remotely
* plugin developer won't be able to change installation process (I'm not
sure if it's a cons)

Pros:
* it's easier for user to install the plugin `fuel --install-plugin
plugin_name-1.0.1.fpb'
* all of the authentication logic already implemented in fuel client
* fuel client uses config with endpoints which is generated by puppet
* it will be easier to deprecate previous installation approach, we can
just install new
  fuel client on the master which uses api

Personally I like the second approach, and I think we should try to
implement it,
when we get time.

Thanks,

On Thu, Oct 23, 2014 at 3:02 PM, Mike Scherbakov 
wrote:

>
>>1. I feel like we should not require user to unpack the plugin before
>>installing it. Moreover, we may chose to distribute plugins in our own
>>format, which we may potentially change later. E.g. "lbaas-v2.0.fp". I'd
>>rather stick with two actions:
>>
>>
>>- Assembly (externally): fpb --build 
>>
>>
>>- Installation (on master node): fuel --install-plugin 
>>
>>  I like the idea of putting plugin installation functionality in fuel client,
>> which is installed
>> on master node.
>> But in the current version plugin installation requires files operations
>> on the master,
>> as result we can have problems if user's fuel-client is installed on
>> another env.
>
>
> I suggest to keep it simple for now as we have the issue mentioned by
> Evgeny: fuel client is supposed to work from other nodes, and we will need
> additional verification code in there. Also, to make it smooth, we will
> have to end up with a few more checks - like what if tarball is broken,
> what if we can't find install script in it, etc.
> I'd suggest to run it simple for 6.0, and then we will see how it's being
> used and what other limitations / issues we have around plugin installation
> and usage. We can consider to make this functionality as part of fuel
> client a bit later.
>
> Thanks,
>
> On Tue, Oct 21, 2014 at 6:57 PM, Vitaly Kramskikh  > wrote:
>
>> Hi,
>>
>> As for a separate section for plugins, I think we should not force it and
>> leave this decision to a plugin developer, so he can create just a single
>> checkbox or a section of the settings tab or a separate tab depending on
>> plugin functionality. Plugins should be able to modify arbitrary release
>> fields. For example, if Ceph was a plugin, it should be able to extend
>> wizard config to add new options to Storage pane. If vCenter was a plugin,
>> it should be able to set maximum amount of Compute nodes to 0.
>>
>> 2014-10-20 21:21 GMT+07:00 Evgeniy L :
>>
>>> Hi guys,
>>>
>>> *Romans' questions:*
>>>
>>> >> I feel like we should not require user to unpack the plugin before
>>> installing it.
>>> >> Moreover, we may chose to distribute plugins in our own format, which
>>> we
>>> >> may potentially change later. E.g. "lbaas-v2.0.fp".
>>>
>>> I like the idea of putting plugin installation functionality in fuel
>>> client, which is installed
>>> on master node.
>>> But in the current version plugin installation requires files operations
>>> on the master,
>>> as result we can have problems if user's fuel-client is installed on
>>> another env.
>>> What we can do is to try to determine where fuel-client is installed, if
>>> it's master
>>> node, we can perform installation, if it isn't master node, we can show
>>> user the
>>> message, that in the current version remote plugin installation is not
>>

Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Kyle Mestery
Vad:

The third-party CI is required for your upstream driver. I think
what's different from my reading of this thread is the question of
what is the requirement to have a driver listed in the upstream
documentation which is not in the upstream codebase. To my knowledge,
we haven't done this. Thus, IMHO, we should NOT be utilizing upstream
documentation to document drivers which are themselves not upstream.
When we split out the drivers which are currently upstream in neutron
into a separate repo, they will still be upstream. So my opinion here
is that if your driver is not upstream, it shouldn't be in the
upstream documentation. But I'd like to hear others opinions as well.

Thanks,
Kyle

On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan
 wrote:
> Kyle,
> Gentle reminder... when you get a chance!..
>
> Anne,
> In case, if i need to send it to different group or email-id to reach Kyle
> Mestery, pls. let me know. Thanks for your help.
>
> Regards,
> Vad
> --
>
>
> On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan
>  wrote:
>>
>> Hi Kyle,
>>
>> Can you pls. comment on this discussion and confirm the requirements for
>> getting out-of-tree mechanism_driver listed in the supported plugin/driver
>> list of the Openstack Neutron docs.
>>
>> Thanks,
>> Vad
>> --
>>
>> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle  wrote:
>>>
>>>
>>>
>>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan
>>>  wrote:

 Hi,

  On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton 
  wrote:
 >
 > I think you will probably have to wait until after the summit so
 > we can
 > see the direction that will be taken with the rest of the in-tree
 > drivers/plugins. It seems like we are moving towards removing all
 > of them so
 > we would definitely need a solution to documenting out-of-tree
 > drivers as
 > you suggested.

 [Vad] while i 'm waiting for the conclusion on this subject, i 'm trying
 to setup the third-party CI/Test system and meet its requirements to get my
 mechanism_driver listed in the Kilo's documentation, in parallel.

 Couple of questions/confirmations before i proceed further on this
 direction...

 1) Is there anything more required other than the third-party CI/Test
 requirements ??.. like should I still need to go-through the entire
 development process of submit/review/approval of the blue-print and code of
 my ML2 driver which was already developed and in-use?...

>>>
>>> The neutron PTL Kyle Mestery can answer if there are any additional
>>> requirements.
>>>

 2) Who is the authority to clarify and confirm the above (and how do i
 contact them)?...
>>>
>>>
>>> Elections just completed, and the newly elected PTL is Kyle Mestery,
>>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html.
>>>


 Thanks again for your inputs...

 Regards,
 Vad
 --

 On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle  wrote:
>
>
>
> On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan
>  wrote:
>>
>> Agreed on the requirements of test results to qualify the vendor
>> plugin to be listed in the upstream docs.
>> Is there any procedure/infrastructure currently available for this
>> purpose?..
>> Pls. fwd any link/pointers on those info.
>>
>
> Here's a link to the third-party testing setup information.
>
> http://ci.openstack.org/third_party.html
>
> Feel free to keep asking questions as you dig deeper.
> Thanks,
> Anne
>
>>
>> Thanks,
>> Vad
>> --
>>
>> On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki 
>> wrote:
>>>
>>> I agree with Kevin and Kyle. Even if we decided to use separate tree
>>> for neutron
>>> plugins and drivers, they still will be regarded as part of the
>>> upstream.
>>> These plugins/drivers need to prove they are well integrated with
>>> Neutron master
>>> in some way and gating integration proves it is well tested and
>>> integrated.
>>> I believe it is a reasonable assumption and requirement that a vendor
>>> plugin/driver
>>> is listed in the upstream docs. This is a same kind of question as
>>> what vendor plugins
>>> are tested and worth documented in the upstream docs.
>>> I hope you work with the neutron team and run the third party
>>> requirements.
>>>
>>> Thanks,
>>> Akihiro
>>>
>>> On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery 
>>> wrote:
>>> > On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton 
>>> > wrote:
>>> >>>The OpenStack dev and docs team dont have to worry about
>>> >>> gating/publishing/maintaining the vendor specific
>>> >>> plugins/drivers.
>>> >>
>>> >> I disagree about the gating part. If a vendor wants to have a link
>>> >> that
>>> >> shows they are compatible

Re: [openstack-dev] [neutron] [oslo.db] model_query() future and neutron specifics

2014-10-23 Thread Kyle Mestery
On Mon, Oct 20, 2014 at 2:44 PM, Mike Bayer  wrote:
> As I’ve established oslo.db blueprints which will roll out new SQLAlchemy 
> connectivity patterns for consuming applications within both API [1] and 
> tests [2], one of the next big areas I’m to focus on is that of querying.   
> If one looks at how SQLAlchemy ORM queries are composed across Openstack, the 
> most prominent feature one finds is the prevalent use of the model_query() 
> initiation function.This is a function that is implemented in a specific 
> way for each consuming application; its purpose is to act as a factory for 
> new Query objects, starting from the point of acquiring a Session, starting 
> up the Query against a selected model, and then augmenting that Query right 
> off with criteria derived from the given application context, typically 
> oriented around the widespread use of so-called “soft-delete” columns, as 
> well as a few other fixed criteria.
>
> There’s a few issues with model_query() that I will be looking to solve, 
> starting with the proposal of a new blueprint.   Key issues include that it 
> will need some changes to interact with my new connectivity specification, it 
> may need a big change in how it is invoked in order to work with some new 
> querying features I also plan on proposing at some point (see 
> https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Baked_Queries), and 
> also it’s current form in some cases tends to slightly discourage the 
> construction of appropriate queries.
>
> In order to propose a new system for model_query(), I have to do a survey of 
> how this function is implemented and used across projects.  Which is why we 
> find me talking about Neutron today - Neutron’s model_query() system is a 
> much more significant construct compared to that of all other projects.   It 
> is interesting because it makes clear some use cases that SQLAlchemy may very 
> well be able to help with.  It also seems to me that in its current form it 
> leads to SQL queries that are poorly formed - as I see this, on one hand we 
> can blame the structure of neutron’s model_query() for how this occurs, but 
> on the other, we can blame SQLAlchemy for not providing more tools oriented 
> towards what Neutron is trying to do.   The use case Neutron has here is very 
> common throughout many Python applications, but as yet I’ve not had the 
> opportunity to address this kind of pattern in a comprehensive way.
>
> I first sketched out my concerns on a Neutron issue 
> https://bugs.launchpad.net/neutron/+bug/1380823, however I was encouraged to 
> move it over to the mailing list.
>
> Specifically with Neutron’s model_query(), we're talking here about the 
> plugin architecture in neutron/db/common_db_mixin.py, where the 
> register_model_query_hook() method presents a way of applying modifiers to 
> queries. This system appears to be used by: db/external_net_db.py, 
> plugins/ml2/plugin.py, db/portbindings_db.py, 
> plugins/metaplugin/meta_neutron_plugin.py.
>
> What the use of the hook has in common in these cases is that a LEFT OUTER 
> JOIN is applied to the Query early on, in anticipation of either the 
> filter_hook or result_filters being applied to the query, but only 
> *possibly*, and then even within those hooks as supplied, again only 
> *possibly*. It's these two "*possiblies*" that leads to the use of LEFT OUTER 
> JOIN - this extra table is present in the query's FROM clause, but if we 
> decide we don't need to filter on it, the idea is that it's just a left outer 
> join, which will not change the primary result if not added to what’s being 
> filtered. And even, in the case of external_net_db.py, maybe we even add a 
> criteria "WHERE  IS NULL", that is doing a "not contains" off 
> of this left outer join.
>
> The result is that we can get a query like this:
>
> SELECT a.* FROM a LEFT OUTER JOIN b ON a.id=b.aid WHERE b.id IS NOT NULL
>
> this can happen for example if using External_net_db_mixin, the outerjoin to 
> ExternalNetwork is created, _network_filter_hook applies 
> "expr.or_(ExternalNetwork.network_id != expr.null())", and that's it.
>
> The database will usually have a much easier time if this query is expressed 
> correctly [3]:
>
>SELECT a.* FROM a INNER JOIN b ON a.id=b.aid
>
> the reason this bugs me is because the SQL output is being compromised as a 
> result of how the plugin system is organized. Preferable would be a system 
> where the plugins are either organized into fewer functions that perform all 
> the checking at once, or if the plugin system had more granularity to know 
> that it needs to apply an optional JOIN or not.   My thoughts for new 
> SQLAlchemy/oslo.db features are being driven largely by Neutron’s use case 
> here.
>
> Towards my goal of proposing a better system of model_query(), along with 
> Neutron’s heavy use of generically added criteria, I’ve put some thoughts 
> down on a new SQLAlchemy feature which would also be backported 

[openstack-dev] [Fuel] Fuel standards

2014-10-23 Thread Vladimir Kozhukalov
All,

Recently we launched a couple new Fuel related projects
(fuel_plugin_builder, fuel_agent, fuel_upgrade, etc.). Those projects are
written in python and they use different approaches to organizing CLI,
configuration, different third party libraries, etc. Besides, we have some
old Fuel projects which are also not standardized.

The idea is to have a set of standards for all Fuel related projects about
architecture in general, third party libraries, API, user interface,
documentation, etc. When I take a look at any OpenStack project I usually
know a priori how project's code is organized. For example, cli is likely
based on python cliff library, configuration is based on oslo.config,
database layer is based of oslo.db and so on.

Let's do the same for Fuel. Frankly, I'd say we could take OpenStack
standards as is and use them for Fuel. But maybe there are other opinions.
Let's discuss this and decide what to do. Do we actually need those
standards at all?

Just to keep the scope narrow let's consider fuelclient project as an
example. If we decide something about it, we can then try to spread those
decisions on other Fuel related projects.

0) Standard for projects naming.
Currently most of Fuel projects are named like fuel-whatever or even
whatever? Is it ok? Or maybe we need some formal rules for naming. For
example, all OpenStack clients are named python-someclient. Do we need to
rename fuelclient into python-fuelclient?

1) Standard for an architecture.
Most of OpenStack services are split into several independent parts
(raughly service-api, serivce-engine, python-serivceclient) and those parts
interact with each other via REST and AMQP. python-serivceclient is usually
located in a separate repository. Do we actually need to do the same for
Fuel? According to fuelclient it means it should be moved into a separate
repository. Fortunately, it already uses REST API for interacting with
nailgun. But it should be possible to use it not only as a CLI tool, but
also as a library.

2) Standard for project directory structure (directory names for api, db
models,  drivers, cli related code, plugins, common code, etc.)
Do we actually need to standardize a directory structure?

3) Standard for third party libraries
As far as Fuel is a deployment tool for OpenStack, let's make a decision
about using OpenStack components wherever it is possible.
3.1) oslo.config for configuring.
3.2) oslo.db for database layer
3.3) oslo.messaging for AMQP layer
3.4) cliff for CLI (should we refactor fuelclient so as to make based on
cliff?)
3.5) oslo.log for logging
3.6) stevedore for plugins
etc.
What about third party components which are not OpenStack related? What
could be the requirements for an arbitrary PyPi package?

4) Standard for testing.
It requires a separate discussion.

5) Standard for documentation.
It requires a separate discussion.


Vladimir Kozhukalov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread John Griffith
On Thu, Oct 23, 2014 at 8:50 AM, John Griffith 
wrote:

>
>
> On Thu, Oct 23, 2014 at 1:30 AM, Preston L. Bannister <
> pres...@bannister.us> wrote:
>
>> John,
>>
>> As a (new) OpenStack developer, I just discovered the
>> "CINDER_SECURE_DELETE" option.
>>
>
OHHH... Most importantly, I almost forgot.  Welcome!!!

>
>> As an *implicit* default, I entirely approve.  Production OpenStack
>> installations should *absolutely* insure there is no information leakage
>> from one instance to the next.
>>
>> As an *explicit* default, I am not so sure. Low-end storage requires you
>> do this explicitly. High-end storage can insure information never leaks.
>> Counting on high level storage can make the upper levels more efficient,
>> can be a good thing.
>>
>
> Not entirely sure of the distinction intended as far as
> implicit/explicit... but one other thing I should probably point out; this
> ONLY applies to the LVM driver, maybe that's what you're getting at.  Would
> be better probably to advertise as an LVM Driver option (easy enough to do
> in the config options help message).
>
> Anyway, I just wanted to point to some of the options like using io-nice,
> clear-size, blkio cgroups, bps_limit..
>
> It doesn't suck as bad as you might have thought or some of the other
> respondents on this thread seem to think.  There's certainly room for
> improvement and growth but it hasn't been completely ignored on the Cinder
> side.
>
>
>>
>> The debate about whether to wipe LV's pretty much massively depends on
>> the intelligence of the underlying store. If the lower level storage never
>> returns accidental information ... explicit zeroes are not needed.
>>
>>
>>
>> On Wed, Oct 22, 2014 at 11:15 PM, John Griffith > > wrote:
>>
>>>
>>>
>>> On Tue, Oct 21, 2014 at 9:17 AM, Duncan Thomas 
>>> wrote:
>>>
 For LVM-thin I believe it is already disabled? It is only really
 needed on LVM-thick, where the returning zeros behaviour is not done.

 On 21 October 2014 08:29, Avishay Traeger 
 wrote:
 > I would say that wipe-on-delete is not necessary in most deployments.
 >
 > Most storage backends exhibit the following behavior:
 > 1. Delete volume A that has data on physical sectors 1-10
 > 2. Create new volume B
 > 3. Read from volume B before writing, which happens to map to physical
 > sector 5 - backend should return zeroes here, and not data from
 volume A
 >
 > In case the backend doesn't provide this rather standard behavior,
 data must
 > be wiped immediately.  Otherwise, the only risk is physical security,
 and if
 > that's not adequate, customers shouldn't be storing all their data
 there
 > regardless.  You could also run a periodic job to wipe deleted
 volumes to
 > reduce the window of vulnerability, without making delete_volume take
 a
 > ridiculously long time.
 >
 > Encryption is a good option as well, and of course it protects the
 data
 > before deletion as well (as long as your keys are protected...)
 >
 > Bottom line - I too think the default in devstack should be to
 disable this
 > option, and think we should consider making the default False in
 Cinder
 > itself.  This isn't the first time someone has asked why volume
 deletion
 > takes 20 minutes...
 >
 > As for queuing backup operations and managing bandwidth for various
 > operations, ideally this would be done with a holistic view, so that
 for
 > example Cinder operations won't interfere with Nova, or different Nova
 > operations won't interfere with each other, but that is probably far
 down
 > the road.
 >
 > Thanks,
 > Avishay
 >
 >
 > On Tue, Oct 21, 2014 at 9:16 AM, Chris Friesen <
 chris.frie...@windriver.com>
 > wrote:
 >>
 >> On 10/19/2014 09:33 AM, Avishay Traeger wrote:
 >>>
 >>> Hi Preston,
 >>> Replies to some of your cinder-related questions:
 >>> 1. Creating a snapshot isn't usually an I/O intensive operation.
 Are
 >>> you seeing I/O spike or CPU?  If you're seeing CPU load, I've seen
 the
 >>> CPU usage of cinder-api spike sometimes - not sure why.
 >>> 2. The 'dd' processes that you see are Cinder wiping the volumes
 during
 >>> deletion.  You can either disable this in cinder.conf, or you can
 use a
 >>> relatively new option to manage the bandwidth used for this.
 >>>
 >>> IMHO, deployments should be optimized to not do very long/intensive
 >>> management operations - for example, use backends with efficient
 >>> snapshots, use CoW operations wherever possible rather than copying
 full
 >>> volumes/images, disabling wipe on delete, etc.
 >>
 >>
 >> In a public-cloud environment I don't think it's reasonable to
 disable
 >> wipe-on-delete.
 >>
 >> Arguably it would be better to use encryption instead of
 wipe-on-

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread John Griffith
On Thu, Oct 23, 2014 at 1:30 AM, Preston L. Bannister 
wrote:

> John,
>
> As a (new) OpenStack developer, I just discovered the
> "CINDER_SECURE_DELETE" option.
>
> As an *implicit* default, I entirely approve.  Production OpenStack
> installations should *absolutely* insure there is no information leakage
> from one instance to the next.
>
> As an *explicit* default, I am not so sure. Low-end storage requires you
> do this explicitly. High-end storage can insure information never leaks.
> Counting on high level storage can make the upper levels more efficient,
> can be a good thing.
>

Not entirely sure of the distinction intended as far as
implicit/explicit... but one other thing I should probably point out; this
ONLY applies to the LVM driver, maybe that's what you're getting at.  Would
be better probably to advertise as an LVM Driver option (easy enough to do
in the config options help message).

Anyway, I just wanted to point to some of the options like using io-nice,
clear-size, blkio cgroups, bps_limit..

It doesn't suck as bad as you might have thought or some of the other
respondents on this thread seem to think.  There's certainly room for
improvement and growth but it hasn't been completely ignored on the Cinder
side.


>
> The debate about whether to wipe LV's pretty much massively depends on the
> intelligence of the underlying store. If the lower level storage never
> returns accidental information ... explicit zeroes are not needed.
>
>
>
> On Wed, Oct 22, 2014 at 11:15 PM, John Griffith 
> wrote:
>
>>
>>
>> On Tue, Oct 21, 2014 at 9:17 AM, Duncan Thomas 
>> wrote:
>>
>>> For LVM-thin I believe it is already disabled? It is only really
>>> needed on LVM-thick, where the returning zeros behaviour is not done.
>>>
>>> On 21 October 2014 08:29, Avishay Traeger 
>>> wrote:
>>> > I would say that wipe-on-delete is not necessary in most deployments.
>>> >
>>> > Most storage backends exhibit the following behavior:
>>> > 1. Delete volume A that has data on physical sectors 1-10
>>> > 2. Create new volume B
>>> > 3. Read from volume B before writing, which happens to map to physical
>>> > sector 5 - backend should return zeroes here, and not data from volume
>>> A
>>> >
>>> > In case the backend doesn't provide this rather standard behavior,
>>> data must
>>> > be wiped immediately.  Otherwise, the only risk is physical security,
>>> and if
>>> > that's not adequate, customers shouldn't be storing all their data
>>> there
>>> > regardless.  You could also run a periodic job to wipe deleted volumes
>>> to
>>> > reduce the window of vulnerability, without making delete_volume take a
>>> > ridiculously long time.
>>> >
>>> > Encryption is a good option as well, and of course it protects the data
>>> > before deletion as well (as long as your keys are protected...)
>>> >
>>> > Bottom line - I too think the default in devstack should be to disable
>>> this
>>> > option, and think we should consider making the default False in Cinder
>>> > itself.  This isn't the first time someone has asked why volume
>>> deletion
>>> > takes 20 minutes...
>>> >
>>> > As for queuing backup operations and managing bandwidth for various
>>> > operations, ideally this would be done with a holistic view, so that
>>> for
>>> > example Cinder operations won't interfere with Nova, or different Nova
>>> > operations won't interfere with each other, but that is probably far
>>> down
>>> > the road.
>>> >
>>> > Thanks,
>>> > Avishay
>>> >
>>> >
>>> > On Tue, Oct 21, 2014 at 9:16 AM, Chris Friesen <
>>> chris.frie...@windriver.com>
>>> > wrote:
>>> >>
>>> >> On 10/19/2014 09:33 AM, Avishay Traeger wrote:
>>> >>>
>>> >>> Hi Preston,
>>> >>> Replies to some of your cinder-related questions:
>>> >>> 1. Creating a snapshot isn't usually an I/O intensive operation.  Are
>>> >>> you seeing I/O spike or CPU?  If you're seeing CPU load, I've seen
>>> the
>>> >>> CPU usage of cinder-api spike sometimes - not sure why.
>>> >>> 2. The 'dd' processes that you see are Cinder wiping the volumes
>>> during
>>> >>> deletion.  You can either disable this in cinder.conf, or you can
>>> use a
>>> >>> relatively new option to manage the bandwidth used for this.
>>> >>>
>>> >>> IMHO, deployments should be optimized to not do very long/intensive
>>> >>> management operations - for example, use backends with efficient
>>> >>> snapshots, use CoW operations wherever possible rather than copying
>>> full
>>> >>> volumes/images, disabling wipe on delete, etc.
>>> >>
>>> >>
>>> >> In a public-cloud environment I don't think it's reasonable to disable
>>> >> wipe-on-delete.
>>> >>
>>> >> Arguably it would be better to use encryption instead of
>>> wipe-on-delete.
>>> >> When done with the backing store, just throw away the key and it'll be
>>> >> secure enough for most purposes.
>>> >>
>>> >> Chris
>>> >>
>>> >>
>>> >>
>>> >> ___
>>> >> OpenStack-dev mailing list
>>> >> OpenStack-dev@lists.openstack.org
>>> >> h

Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-10-23 Thread Vadivel Poonathan
Kyle,
Gentle reminder... when you get a chance!..

Anne,
In case, if i need to send it to different group or email-id to reach Kyle
Mestery, pls. let me know. Thanks for your help.

Regards,
Vad
--


On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan <
vadivel.openst...@gmail.com> wrote:

> Hi Kyle,
>
> Can you pls. comment on this discussion and confirm the requirements for
> getting out-of-tree mechanism_driver listed in the supported plugin/driver
> list of the Openstack Neutron docs.
>
> Thanks,
> Vad
> --
>
> On Mon, Oct 20, 2014 at 12:48 PM, Anne Gentle  wrote:
>
>>
>>
>> On Mon, Oct 20, 2014 at 2:42 PM, Vadivel Poonathan <
>> vadivel.openst...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> * On Fri, Oct 10, 2014 at 7:36 PM, Kevin Benton >> > wrote:>> I think you will probably have to
>>> wait until after the summit so we can> see the direction that will be
>>> taken with the rest of the in-tree> drivers/plugins. It seems like we
>>> are moving towards removing all of them so> we would definitely need a
>>> solution to documenting out-of-tree drivers as> you suggested.*
>>>
>>> [Vad] while i 'm waiting for the conclusion on this subject, i 'm trying
>>> to setup the third-party CI/Test system and meet its requirements to get my
>>> mechanism_driver listed in the Kilo's documentation, in parallel.
>>>
>>> Couple of questions/confirmations before i proceed further on this
>>> direction...
>>>
>>> 1) Is there anything more required other than the third-party CI/Test
>>> requirements ??.. like should I still need to go-through the entire
>>> development process of submit/review/approval of the blue-print and code of
>>> my ML2 driver which was already developed and in-use?...
>>>
>>>
>> The neutron PTL Kyle Mestery can answer if there are any additional
>> requirements.
>>
>>
>>> 2) Who is the authority to clarify and confirm the above (and how do i
>>> contact them)?...
>>>
>>
>> Elections just completed, and the newly elected PTL is Kyle Mestery,
>> http://lists.openstack.org/pipermail/openstack-dev/2014-March/031433.html
>> .
>>
>>
>>>
>>> Thanks again for your inputs...
>>>
>>> Regards,
>>> Vad
>>> --
>>>
>>> On Tue, Oct 14, 2014 at 3:17 PM, Anne Gentle  wrote:
>>>


 On Tue, Oct 14, 2014 at 5:14 PM, Vadivel Poonathan <
 vadivel.openst...@gmail.com> wrote:

> Agreed on the requirements of test results to qualify the vendor
> plugin to be listed in the upstream docs.
> Is there any procedure/infrastructure currently available for this
> purpose?..
> Pls. fwd any link/pointers on those info.
>
>
 Here's a link to the third-party testing setup information.

 http://ci.openstack.org/third_party.html

 Feel free to keep asking questions as you dig deeper.
 Thanks,
 Anne


> Thanks,
> Vad
> --
>
> On Mon, Oct 13, 2014 at 10:25 PM, Akihiro Motoki 
> wrote:
>
>> I agree with Kevin and Kyle. Even if we decided to use separate tree
>> for neutron
>> plugins and drivers, they still will be regarded as part of the
>> upstream.
>> These plugins/drivers need to prove they are well integrated with
>> Neutron master
>> in some way and gating integration proves it is well tested and
>> integrated.
>> I believe it is a reasonable assumption and requirement that a vendor
>> plugin/driver
>> is listed in the upstream docs. This is a same kind of question as
>> what vendor plugins
>> are tested and worth documented in the upstream docs.
>> I hope you work with the neutron team and run the third party
>> requirements.
>>
>> Thanks,
>> Akihiro
>>
>> On Tue, Oct 14, 2014 at 10:09 AM, Kyle Mestery 
>> wrote:
>> > On Mon, Oct 13, 2014 at 6:44 PM, Kevin Benton 
>> wrote:
>> >>>The OpenStack dev and docs team dont have to worry about
>> >>> gating/publishing/maintaining the vendor specific plugins/drivers.
>> >>
>> >> I disagree about the gating part. If a vendor wants to have a link
>> that
>> >> shows they are compatible with openstack, they should be reporting
>> test
>> >> results on all patches. A link to a vendor driver in the docs
>> should signify
>> >> some form of testing that the community is comfortable with.
>> >>
>> > I agree with Kevin here. If you want to play upstream, in whatever
>> > form that takes by the end of Kilo, you have to work with the
>> existing
>> > third-party requirements and team to take advantage of being a part
>> of
>> > things like upstream docs.
>> >
>> > Thanks,
>> > Kyle
>> >
>> >> On Mon, Oct 13, 2014 at 11:33 AM, Vadivel Poonathan
>> >>  wrote:
>> >>>
>> >>> Hi,
>> >>>
>> >>> If the plan is to move ALL existing vendor specific
>> plugins/drivers
>> >>> out-of-tree, then having a place-holder within the OpenStack
>> domain woul

Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Flavio Percoco
On 10/23/2014 08:56 AM, Flavio Percoco wrote:
> On 10/22/2014 08:15 PM, Doug Hellmann wrote:
>> The application projects are dropping python 2.6 support during Kilo, and 
>> I’ve had several people ask recently about what this means for Oslo. Because 
>> we create libraries that will be used by stable versions of projects that 
>> still need to run on 2.6, we are going to need to maintain support for 2.6 
>> in Oslo until Juno is no longer supported, at least for some of our 
>> projects. After Juno’s support period ends we can look again at dropping 2.6 
>> support in all of the projects.
>>
>>
>> I think these rules cover all of the cases we have:
>>
>> 1. Any Oslo library in use by an API client that is used by a supported 
>> stable branch (Icehouse and Juno) needs to keep 2.6 support.
>>
>> 2. If a client library needs a library we graduate from this point forward, 
>> we will need to ensure that library supports 2.6.
>>
>> 3. Any Oslo library used directly by a supported stable branch of an 
>> application needs to keep 2.6 support.
>>
>> 4. Any Oslo library graduated during Kilo can drop 2.6 support, unless one 
>> of the previous rules applies.
>>
>> 5. The stable/icehouse and stable/juno branches of the incubator need to 
>> retain 2.6 support for as long as those versions are supported.
>>
>> 6. The master branch of the incubator needs to retain 2.6 support until we 
>> graduate all of the modules that will go into libraries used by clients.
>>
>>
>> A few examples:
>>
>> - oslo.utils was graduated during Juno and is used by some of the client 
>> libraries, so it needs to maintain python 2.6 support.
>>
>> - oslo.config was graduated several releases ago and is used directly by the 
>> stable branches of the server projects, so it needs to maintain python 2.6 
>> support.
>>
>> - oslo.log is being graduated in Kilo and is not yet in use by any projects, 
>> so it does not need python 2.6 support.
>>
>> - oslo.cliutils and oslo.apiclient are on the list to graduate in Kilo, but 
>> both are used by client projects, so they need to keep python 2.6 support. 
>> At that point we can evaluate the code that remains in the incubator and see 
>> if we’re ready to turn of 2.6 support there.
>>
>>
>> Let me know if you have questions about any specific cases not listed in the 
>> examples.
> 
> The rules look ok to me but I'm a bit worried that we might miss
> something in the process due to all these rules being in place. Would it
> be simpler to just say we'll keep py2.6 support in oslo for Kilo and
> drop it in Igloo (or L?) ?
> 
> Once Igloo development begins, Kilo will be stable (without py2.6
> support except for Oslo) and Juno will be in security maintenance (with
> py2.6 support).

OMFG, did I really say Igloo? I should really consider taking a break.
Anyway, just read Igloo as the L release.

Seriously, WTF?
Flavio

> 
> I guess the TL;DR of what I'm proposing is to keep 2.6 support in oslo
> until we move the rest of the projects just to keep the process simpler.
> Probably longer but hopefully simpler.
> 
> I'm sure I'm missing something so please, correct me here.
> Flavio
> 
> 


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] making Daneyon Hansen core

2014-10-23 Thread Jeff Peeler

On 10/22/2014 11:04 AM, Steven Dake wrote:

A few weeks ago in IRC we discussed the criteria for joining the core
team in Kolla.  I believe Daneyon has met all of these requirements by
reviewing patches along with the rest of the core team and providing
valuable comments, as well as implementing neutron and helping get
nova-networking implementation rolling.

Please vote +1 or -1 if your kolla core.  Recall a -1 is a veto.  It
takes 3 votes.  This email counts as one vote ;)


definitely +1


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread Adam Young

On 10/09/2014 03:36 PM, Duncan Thomas wrote:

On 9 October 2014 07:49, henry hly  wrote:

Hi Joshua,

...in fact hierarchical scale
depends on square of single child scale. If a single child can deal
with 00's to 000's, cascading on it would then deal with 00,000's.

That is faulty logic - maybe the cascading solution needs to deal with
global quota and other aggregations that will rapidly break down your


There should not be Global quota in a cascading deployment.  If I own a 
cloud, I should manage my own Quota.


Keystone needs to be able to merge the authorization data across 
multiple OpenStack instances.  I have a spec proposal for this:


https://review.openstack.org/#/c/123782/

There are many issues to be resolved due to the "organic growth" nature 
of OpenStack deployments.  We see a recuring pattern where people need 
to span across multiple deployments, and not just for Bursting.


Quota then becomes essential:  it is the way of limiting what a user can 
do in one deployment ,separate from what they could do in a different 
one.  The quotas really reflect the contract between the user and the 
deployment.




scaling factor, or maybe there are few such problems can the cascade
part can scale way better than the underlying part. They are two
totally different scaling cases, so and suggestion that they are
anything other than an unknown multiplier is bogus.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] proposed summit session topics

2014-10-23 Thread Julien Danjou
On Wed, Oct 22 2014, Doug Hellmann wrote:

> 2014-11-05 11:00  - Oslo graduation schedule 
> 2014-11-05 11:50  - oslo.messaging 
> 2014-11-05 13:50  - A Common Quota Management Library 
> 2014-11-06 11:50  - taskflow 
> 2014-11-06 13:40  - Using alpha versioning for Oslo libraries 
> 2014-11-06 16:30  - Python 3 support in Oslo 
> 2014-11-06 17:20  - Moving Oslo away from namespace packages 
>
> That should allow the QA and Infra teams to participate in the versioning and
> packaging discussions, Salvatore to be present for the quota library session
> (and lead it, I hope), and the eNovance guys who also work on ceilometer to be
> there for the Python 3 session.
>
> If you know you have a conflict with one of these times, let me know
> and I’ll see if we can juggle a little.

LGTM!

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread joehuang
Hi, Phil,

I am sorry that no enough information for you to understand the cascading in 
the document.  If we can talk f2f, I can explain much more in detail. But in 
short, I will give a simplified picture how a virtual machine will be booted:

The general process to boot a VM is like this:
Nova API -> Nova Scheduler -> Nova Compute( manager ) -> Nova Compute( Libvirt 
driver ) -> Nova Compute( KVM )

After OpenStack cascading is introduced, the process is a little difference and 
can be divided into two parts:
1. inside cascading OpenStack: Nova API -> Nova Scheduler -> Nova Proxy ->
2, inside cascaded OpenStack: Nova API -> Nova Scheduler -> Nova Compute( 
manager ) -> Nova Compute( Libvirt driver ) -> Nova Compute( KVM )
 
After schedule a Nova-Proxy, the instance object will be persisted in the DB in 
the cascading layer. And VM query to the cloud will be answered by the  
cascading Nova API from cascading layer DB. No need to touch cascaded Nova. 
(it's not bad thing to persist the data in the cascading layer, quota control, 
system healing and consistency correction, fast user experience, etc...)

All VM generation in the cascaded OpenStack has nothing different with the 
process of general VM boot process, and is a asynchronous process from the 
cascading layer.

How the scheduler in the cascading layer to select proper Nova Proxy. The 
answer is that if hosts the cascaded Nova was added to AZ1 (AZ: availability 
zone in short), then the Nova proxy (it's a host in the cascading layer) will 
also be added to AZ1 in the cascading layer, and this nova proxy will be 
configured to send all request to the endpoint of the regarding cascaded Nova. 
And scheduler will be configured to use availability zone filter only, we know 
all VM boot  request has AZ parameter, that's the key for scheduling in the 
cascading layer.  Host Aggregate could be done in the same way.

After Nova-proxy receive the RPC message from the Nova-scheduler, it will not 
work like libvirt driver to boot a VM in the local host, it'll pickup all 
request parameter and call python client, to send the restful nova-boot request 
to the cascaded Nova.

How the flavor will be synchronized to the cascaded Nova? The flavor will be 
synchronized to the cascaded Nova only if the flavor does not exist in the 
cascaded Nova, or the flavor is recently updated but not synchronized to the 
cascaded Nova. Because the VM boot request has been answered after scheduling, 
so all the things done in the nova-proxy is asynchronous operation just like a 
VM booted in a host, it'll take seconds to minutes in general host, but in the 
cascading, some API calling will be done by nova-proxy to cascaded Nova, or 
cascaded Cinder & Neutron. 

I wrote a few blogs to explain something in detail, but I am too busy, and not 
able to write all things we have done in the PoC. [ 1 ]

[1] blog about cascading:  http://www.linkedin.com/today/author/23841540

Best Regards

Chaoyi Huang


From: Day, Phil [philip@hp.com]
Sent: 23 October 2014 19:24
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi,

> -Original Message-
> From: joehuang [mailto:joehu...@huawei.com]
> Sent: 23 October 2014 09:59
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
> cascading
>
> Hi,
>
> Because I am not able to find a meeting room to have deep diving OpenStack
> cascading before design summit. You are welcome to have a f2f conversation
> about the cascading before design summit. I planned to stay at Paris from
> Oct.30 to Nov.8, if you have any doubt or question, please feel free to
> contact me. All the conversation is for clarification / idea exchange purpose,
> not for any secret agreement purpose. It is necessary before design summit,
> for design summit session, it's only 40 minutes, if all 40 minutes are spent 
> on
> basic question and clarification, then no valuable conclusion can be drawn in
> the meeting. So I want to work as client-server mode, anyone who is
> interested in talking cascading with me, just tell me when he will come to the
> hotel where I stay at Paris, then a chat could be made to reduce
> misunderstanding, get more clear picture, and focus on what need to be
> discussed and consensuses during the design summit session.
>
Sure, I'll certainly try and find some time to meet and talk.


> >>>"It kind of feels to me that if we just concentrated on the part of this 
> >>>that
> is working out how to distribute/federate Neutron then we'd have a solution
> that could be mapped as easily cells and/or regions - and I wonder if then
> why really need yet another aggregation concept ?"
>
> My answer is that it seems to be feasible but can not meet the muti-site
> cloud demand (that's the drive force for cascading):
> 1) larg

[openstack-dev] [neutron] [stable] Tool to aid in scalability problems mitigation.

2014-10-23 Thread Miguel Angel Ajo Pelayo


Recently, we have identified clients with problems due to the 
bad scalability of security groups in Havana and Icehouse, that 
was addressed during juno here [1] [2]

This situation is identified by blinking agents (going UP/DOWN),
high AMQP load, nigh neutron-server load, and timeout from openvswitch
agents when trying to contact neutron-server "security_group_rules_for_devices".

Doing a [1] backport involves many dependent patches related 
to the general RPC refactor in neutron (which modifies all plugins), 
and subsequent ones fixing a few bugs. Sounds risky to me. [2] Introduces 
new features and it's dependent on features which aren't available on 
all systems.

To remediate this on production systems, I wrote a quick tool
to help on reporting security groups and mitigating the problem
by writing almost-equivalent rules [3]. 

We believe this tool would be better available to the wider community,
and under better review and testing, and, since it doesn't modify any behavior 
or actual code in neutron, I'd like to propose it for inclusion into, at least, 
Icehouse stable branch where it's more relevant.

I know the usual way is to go master->Juno->Icehouse, but at this moment
the tool is only interesting for Icehouse (and Havana), although I believe 
it could be extended to cleanup orphaned resources, or any other cleanup 
tasks, in that case it could make sense to be available for K->J->I.
 
As a reference, I'm leaving links to outputs from the tool [4][5]
  
Looking forward to get some feedback,
Miguel Ángel.


[1] https://review.openstack.org/#/c/111876/ security group rpc refactor
[2] https://review.openstack.org/#/c/111877/ ipset support
[3] https://github.com/mangelajo/neutrontool
[4] http://paste.openstack.org/show/123519/
[5] http://paste.openstack.org/show/123525/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-23 Thread Miguel Angel Ajo Pelayo

Hi!

  This is an interesting topic, I don't know if there's any way to
target connection tracker rules by MAC, but that'd be the ideal solution.

  I also understand the RETURN for RELATED,ESTABLISHED is there for
performance reasons, and removing it would lead to longer table evaluation,
and degraded packet throughput.

  Temporarily removing this entry doesn't seem like a good solution
to me as we can't really know how long do we need to remove this rule to
induce the connection to close at both ends (it will only close if any
new activity happens, and timeout is exhausted afterwards).


  Also, I'm not sure if removing all the conntrack rules that match the
certain filter would be OK enough, as it may only lead to full reevaluation
of rules for the next packet of the cleared connections (may be I'm missing 
some corner detail, which could be).


Best regards,
Miguel Ángel.



- Original Message - 

> Hi!

> I am working on a bug " ping still working once connected even after related
> security group rule is deleted" (
> https://bugs.launchpad.net/neutron/+bug/1335375 ). The gist of the problem
> is the following: when we delete a security group rule the corresponding
> rule in iptables is also deleted, but the connection, that was allowed by
> that rule, is not being destroyed.
> The reason for such behavior is that in iptables we have the following
> structure of a chain that filters input packets for an interface of an
> istance:

> Chain neutron-openvswi-i830fa99f-3 (1 references)
> pkts bytes target prot opt in out source destination
> 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID /* Drop packets that
> are not associated with a state. */
> 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /* Direct
> packets associated with a known session to the RETURN chain. */
> 0 0 RETURN udp -- * * 10.0.0.3 0.0.0.0/0 udp spt:67 dpt:68
> 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 match-set IPv43a0d3610-8b38-43f2-8
> src
> 0 0 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 < rule that allows
> ssh on port 22
> 1 84 RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0
> 0 0 neutron-openvswi-sg-fallback all -- * * 0.0.0.0/0 0.0.0.0/0 /* Send
> unmatched traffic to the fallback chain. */

> So, if we delete rule that allows tcp on port 22, then all connections that
> are already established won't be closed, because all packets would satisfy
> the rule:
> 0 0 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED /* Direct
> packets associated with a known session to the RETURN chain. */

> I seek advice on the way how to deal with the problem. There are a couple of
> ideas how to do it (more or less realistic):

> * Kill the connection using conntrack

> The problem here is that it is sometimes impossible to tell which connection
> should be killed. For example there may be two instances running in
> different namespaces that have the same ip addresses. As a compute doesn't
> know anything about namespaces, it cannot distinguish between the two
> seemingly identical connections:
> $ sudo conntrack -L | grep "10.0.0.5"
> tcp 6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60723 dport=22
> src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723 [ASSURED] mark=0 use=1
> tcp 6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5 sport=60729 dport=22
> src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729 [ASSURED] mark=0 use=1

> I wonder whether there is any way to search for a connection by destination
> MAC?

> * Delete iptables rule that directs packets associated with a known session
> to the RETURN chain

> It will force all packets to go through the full chain each time and this
> will definitely make the connection close. But this will strongly affect the
> performance. Probably there may be created a timeout after which this rule
> will be restored, but it is uncertain how long should it be.

> Please share your thoughts on how it would be better to handle it.

> Thanks in advance,
> Elena

> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread Day, Phil
Hi,

> -Original Message-
> From: joehuang [mailto:joehu...@huawei.com]
> Sent: 23 October 2014 09:59
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
> cascading
> 
> Hi,
> 
> Because I am not able to find a meeting room to have deep diving OpenStack
> cascading before design summit. You are welcome to have a f2f conversation
> about the cascading before design summit. I planned to stay at Paris from
> Oct.30 to Nov.8, if you have any doubt or question, please feel free to
> contact me. All the conversation is for clarification / idea exchange purpose,
> not for any secret agreement purpose. It is necessary before design summit,
> for design summit session, it's only 40 minutes, if all 40 minutes are spent 
> on
> basic question and clarification, then no valuable conclusion can be drawn in
> the meeting. So I want to work as client-server mode, anyone who is
> interested in talking cascading with me, just tell me when he will come to the
> hotel where I stay at Paris, then a chat could be made to reduce
> misunderstanding, get more clear picture, and focus on what need to be
> discussed and consensuses during the design summit session.
> 
Sure, I'll certainly try and find some time to meet and talk.


> >>>"It kind of feels to me that if we just concentrated on the part of this 
> >>>that
> is working out how to distribute/federate Neutron then we'd have a solution
> that could be mapped as easily cells and/or regions - and I wonder if then
> why really need yet another aggregation concept ?"
> 
> My answer is that it seems to be feasible but can not meet the muti-site
> cloud demand (that's the drive force for cascading):
> 1) large cloud operator ask multi-vendor to build the distributed but unified
> multi-site cloud together and each vendor has his own OpenStack based
> solution. If shared Nova/Cinder with federated Neutron used, the cross data
> center integration through RPC message for multi-vendor infrastrcuture is
> very difficult, and no clear responsibility boundry, it leads to difficulty 
> for
> trouble shooting, upgrade, etc.

So if the scope of what you're doing to is to provide a single API across 
multiple clouds that are being built and operated independently then I'm not 
sure how you can impose enough consistency to guarantee any operations.What 
if one of those clouds has Nova AZs configured, and your using (from what I 
understand AZs to try and route to a specific cloud) ?   How do you get image 
and flavor consistency across the clouds ?

I picked up on the Network aspect because that seems to be something you've 
covered in some depth here 
https://docs.google.com/presentation/d/1wIqWgbZBS_EotaERV18xYYA99CXeAa4tv6v_3VlD2ik/edit?pli=1#slide=id.g390a1cf23_2_149
 so I'd assumed it was an intrinsic part of your proposal.  Now I'm even less 
clear on the scope of what you're trying to achieve ;-( 

If this is a federation layer for in effect arbitrary Openstack clouds then it 
kind of feels like it can't be anything other than an aggregator of queries 
(list the VMs in all of the clouds you know about, and show the results in one 
output).   If you have to make API calls into many clouds (when only one of 
them may have any results) then that feels like it would be a performance 
issue.  If you're going to cache the results somehow then in effect you needs 
the Cells approach for propogating up results, which means the sub-clouds have 
to be co-operating.

Maybe I missed it somewhere, but is there a clear write-up of the restrictions 
/ expectations of sub-clouds to work in this model ?

Kind Regards
Phil

> 2) restful API /CLI is required for each site to make the cloud always 
> workable
> and manageable. If shared Nova/Cinder with federated Neutron, then some
> data center is not able to expose restful API/CLI for management purpose.
> 3) the unified cloud need to expose open and standard api. If shared Nova /
> Cinder with federated Neutron, this point can be arhieved.
> 
> Best Regards
> 
> Chaoyi Huang ( joehuang )
> 
> -Original Message-
> From: henry hly [mailto:henry4...@gmail.com]
> Sent: Thursday, October 23, 2014 3:13 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
> cascading
> 
> Hi Phil,
> 
> Thanks for your feedback, and patience of this long history reading :) See
> comments inline.
> 
> On Wed, Oct 22, 2014 at 5:59 PM, Day, Phil  wrote:
> >> -Original Message-
> >> From: henry hly [mailto:henry4...@gmail.com]
> >> Sent: 08 October 2014 09:16
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by
> >> OpenStack cascading
> >>
> >> Hi,
> >>
> >> Good questions: why not just keeping multiple endpoints, and leaving
> >> orchestration effort in the client si

Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread Duncan Thomas
On 23 October 2014 08:30, Preston L. Bannister  wrote:
> John,
>
> As a (new) OpenStack developer, I just discovered the "CINDER_SECURE_DELETE"
> option.
>
> As an *implicit* default, I entirely approve.  Production OpenStack
> installations should *absolutely* insure there is no information leakage
> from one instance to the next.
>
> As an *explicit* default, I am not so sure. Low-end storage requires you do
> this explicitly. High-end storage can insure information never leaks.
> Counting on high level storage can make the upper levels more efficient, can
> be a good thing.
>
> The debate about whether to wipe LV's pretty much massively depends on the
> intelligence of the underlying store. If the lower level storage never
> returns accidental information ... explicit zeroes are not needed.

The security requirements regarding wiping are totally and utterly
site dependent - some places care and are happy to pay the cost (some
even using an entirely pointless multi-write scrub out of historically
rooted paranoia) where as some don't care in the slightest. LVM thin
that John mentioned is no worse or better than most 'smart' arrays -
unless you happen to hit a bug, it won't return previous info.

That's a good default, if your site needs better then there are lots
of config options to go looking into for a whole variety of things,
and you should probably be doing your own security audits of the code
base and other deep analysis, as well as reading and contributing to
the security guide.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Pluggable framework in Fuel: first prototype ready

2014-10-23 Thread Mike Scherbakov
>
>
>1. I feel like we should not require user to unpack the plugin before
>installing it. Moreover, we may chose to distribute plugins in our own
>format, which we may potentially change later. E.g. "lbaas-v2.0.fp". I'd
>rather stick with two actions:
>
>
>- Assembly (externally): fpb --build 
>
>
>- Installation (on master node): fuel --install-plugin 
>
>  I like the idea of putting plugin installation functionality in fuel client,
> which is installed
> on master node.
> But in the current version plugin installation requires files operations
> on the master,
> as result we can have problems if user's fuel-client is installed on
> another env.


I suggest to keep it simple for now as we have the issue mentioned by
Evgeny: fuel client is supposed to work from other nodes, and we will need
additional verification code in there. Also, to make it smooth, we will
have to end up with a few more checks - like what if tarball is broken,
what if we can't find install script in it, etc.
I'd suggest to run it simple for 6.0, and then we will see how it's being
used and what other limitations / issues we have around plugin installation
and usage. We can consider to make this functionality as part of fuel
client a bit later.

Thanks,

On Tue, Oct 21, 2014 at 6:57 PM, Vitaly Kramskikh 
wrote:

> Hi,
>
> As for a separate section for plugins, I think we should not force it and
> leave this decision to a plugin developer, so he can create just a single
> checkbox or a section of the settings tab or a separate tab depending on
> plugin functionality. Plugins should be able to modify arbitrary release
> fields. For example, if Ceph was a plugin, it should be able to extend
> wizard config to add new options to Storage pane. If vCenter was a plugin,
> it should be able to set maximum amount of Compute nodes to 0.
>
> 2014-10-20 21:21 GMT+07:00 Evgeniy L :
>
>> Hi guys,
>>
>> *Romans' questions:*
>>
>> >> I feel like we should not require user to unpack the plugin before
>> installing it.
>> >> Moreover, we may chose to distribute plugins in our own format, which
>> we
>> >> may potentially change later. E.g. "lbaas-v2.0.fp".
>>
>> I like the idea of putting plugin installation functionality in fuel
>> client, which is installed
>> on master node.
>> But in the current version plugin installation requires files operations
>> on the master,
>> as result we can have problems if user's fuel-client is installed on
>> another env.
>> What we can do is to try to determine where fuel-client is installed, if
>> it's master
>> node, we can perform installation, if it isn't master node, we can show
>> user the
>> message, that in the current version remote plugin installation is not
>> supported.
>> In the next versions if we implement plugin manager (which is separate
>> service
>> for plugins management) we will be able to do it remotely.
>>
>> >> How are we planning to distribute fuel plugin builder and its updates?
>>
>>
>> Yes, as Mike mentioned our plan is to release it on PyPi which is python
>> packages
>> repository, so any developer will be able to run `pip install fpb` and
>> get the tool.
>>
>> >> What happens if an error occurs during plugin installation?
>>
>> Plugins installation process is very simple, our plan is to have some
>> kind of transaction,
>> to make it atomic.
>>
>> 1. register plugin via API
>> 2. copy the files
>>
>> In case of error on the 1st step, we can do nothing, in case of error on
>> the 2nd step,
>> remove files if there are any, and delete a plugin via rest api. And show
>> user a message.
>>
>> >> What happens if an error occurs during plugin execution?
>>
>> In the first iteration we are going to interrupt deployment if there are
>> any errors for plugin's
>> tasks, also we are thinking how to improve it, for example we wanted to
>> provide a special
>> flag for each task, like fail_deploument_on_error, and only if it's true,
>> we fail deployment in
>> case of failed task. But it can be tricky to implement, it requires to
>> change the current
>> orchestrator/nailgun error handling logic. So, I'm not sure if we can
>> implement this logic in
>> the first release.
>>
>> Regarding to meaningful error messages, yes, we want to show the
>> user, which plugin
>> causes the error.
>>
>> >> Shall we consider a separate place in UI (tab) for plugins?
>>
>> +1 to Mike's answer
>>
>> >> When are we planning to focus on the 2 plugins which were identified
>> as must-haves
>> >> for 6.0? Cinder & LBaaS
>>
>> For Cinder we are going to implement plugin which configures GlusterFS as
>> cinder backend,
>> so, if user has installed GlusterFS cluster, we can configure our cinder
>> to work with it,
>> I want to mention that we don't install GlusterFS nodes, we just
>> configure cinder to work
>> with user's GlusterFS cluster.
>> Stanislaw B. already did some scripts which configures cinder to work
>> with GlusterFS, so
>> we are on testing stage.
>>
>> Regarding to LBaaS, Stanislaw 

[openstack-dev] [Ceilometer] [qa] [oslo] Declarative HTTP Tests

2014-10-23 Thread Chris Dent


I've proposed a spec to Ceilometer

   https://review.openstack.org/#/c/129669/

for a suite of declarative HTTP tests that would be runnable both in
gate check jobs and in local dev environments.

There's been some discussion that this may be generally applicable
and could be best served by a generic tool. My original assertion
was "let's make something work and then see if people like it" but I
thought I also better check with the larger world:

* Is this a good idea?

* Do other projects have similar ideas in progress?

* Is this concept something for which a generic tool should be
  created _prior_ to implementation in an individual project?

* Is there prior art? What's a good format?

Thanks.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Killing connection after security group rule deletion

2014-10-23 Thread Elena Ezhova
Hi!

I am working on a bug "ping still working once connected even after related
security group rule is deleted" (
https://bugs.launchpad.net/neutron/+bug/1335375). The gist of the problem
is the following: when we delete a security group rule the corresponding
rule in iptables is also deleted, but the connection, that was allowed by
that rule, is not being destroyed.
The reason for such behavior is that in iptables we have the following
structure of a chain that filters input packets for an interface of an
istance:

Chain neutron-openvswi-i830fa99f-3 (1 references)
 pkts bytes target prot opt in out source
destination
0 0 DROP   all  --  *  *   0.0.0.0/0
0.0.0.0/0state INVALID /* Drop packets that are not associated
with a state. */
0 0 RETURN all  --  *  *   0.0.0.0/0
0.0.0.0/0state RELATED,ESTABLISHED /* Direct packets associated
with a known session to the RETURN chain. */
0 0 RETURN udp  --  *  *   10.0.0.3
0.0.0.0/0udp spt:67 dpt:68
0 0 RETURN all  --  *  *   0.0.0.0/0
0.0.0.0/0match-set IPv43a0d3610-8b38-43f2-8 src
0 0 RETURN tcp  --  *  *   0.0.0.0/0
0.0.0.0/0tcp dpt:22  < rule that allows ssh on port 22

184 RETURN icmp --  *  *   0.0.0.0/0
0.0.0.0/0
0 0 neutron-openvswi-sg-fallback  all  --  *  *   0.0.0.0/0
   0.0.0.0/0/* Send unmatched traffic to the fallback
chain. */

So, if we delete rule that allows tcp on port 22, then all connections that
are already established won't be closed, because all packets would satisfy
the rule:
0 0 RETURN all  --  *  *   0.0.0.0/00.0.0.0/0
 state RELATED,ESTABLISHED /* Direct packets associated with a
known session to the RETURN chain. */

I seek advice on the way how to deal with the problem. There are a couple
of ideas how to do it (more or less realistic):

   - Kill the connection using conntrack

  The problem here is that it is sometimes impossible to tell which
connection should be killed. For example there may be two instances running
in different namespaces that have the same ip addresses. As a compute
doesn't know anything about namespaces, it cannot distinguish between the
two seemingly identical connections:
 $ sudo conntrack -L  | grep "10.0.0.5"
 tcp  6 431954 ESTABLISHED src=10.0.0.3 dst=10.0.0.5
sport=60723 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60723
[ASSURED] mark=0 use=1
 tcp  6 431976 ESTABLISHED src=10.0.0.3 dst=10.0.0.5
sport=60729 dport=22 src=10.0.0.5 dst=10.0.0.3 sport=22 dport=60729
[ASSURED] mark=0 use=1

I wonder whether there is any way to search for a connection by destination
MAC?

   - Delete iptables rule that directs packets associated with a known
   session to the RETURN chain

   It will force all packets to go through the full chain each time
and this will definitely make the connection close. But this will strongly
affect the performance. Probably there may be created a timeout after which
this rule will be restored, but it is uncertain how long should it be.

Please share your thoughts on how it would be better to handle it.

Thanks in advance,
Elena
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Backup of information about nodes.

2014-10-23 Thread Tomasz Napierala

On 22 Oct 2014, at 21:03, Adam Lawson  wrote:

> What is current best practice to restore a failed Fuel node?

It’s documented here:
http://docs.mirantis.com/openstack/fuel/fuel-5.1/operations.html#restoring-fuel-master

Regards,
-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] Error in ssh key pair log in

2014-10-23 Thread Khayam Gondal
I am trying to login into VM from host using ssh key pair instead of
password. I have created VM using keypair *khayamkey* and than tried to
login into vm using following command

ssh -l tux -i khayamkey.pem 10.3.24.56

where *tux* is username for VM, but I got following error


WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!

IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!Someone could be
eavesdropping on you right now (man-in-the-middle attack)!It is also
possible that a host key has just been changed.The fingerprint for the
RSA key sent by the remote host
is52:5c:47:33:dd:d0:7a:cd:0e:78:8d:9b:66:d8:74:a3.Please contact your
system administrator.Add correct host key in
/home/openstack/.ssh/known_hosts to get rid of this message.Offending
RSA key in /home/openstack/.ssh/known_hosts:1
  remove with: ssh-keygen -f "/home/openstack/.ssh/known_hosts" -R 10.3.24.56
RSA host key for 10.3.24.56 has changed and you have requested strict
checking.Host key verification failed.

P.S: I know if I run ssh-keygen -f "/home/openstack/.ssh/known_hosts" -R
10.3.24.56 problem can be solved but than I have to provide password to log
in to VM, but my goal is to use keypairs NOT password.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Fuel-library][CI] New voting gates for provision and master node smoke checks

2014-10-23 Thread Bogdan Dobrelya
Hello.
We have only one voting gate for changes submitted to fuel-library which
checks deployment of Openstack nodes with Puppet manifests from
fuel-library.
But we must to check for two more important potential 'smoke sources':

1) Master node build:
  * nailgun::*_only classes - for docker builds must be smoke tested
as well as modules for Openstack nodes.
  * repo consistency - either packages used by Puppet manifests
was included in repos at master node or not)

2) Nodes provisioning
  * consistency checks for packages shipped with ISO as well - if some
package was missed and node required it at provision stage,
gate would have shown that.

Would like to see comments and ideas from our DevOps and QA teams, please.

-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Travels tips for the Paris summit

2014-10-23 Thread Maish Saidel-Keesing
Thanks for creating the page.

I have added a section to it with information about Kosher restaurants
as well.

Well done and thank you for the invaluable information

Maish
On 14/10/2014 20:02, Anita Kuno wrote:
> On 10/14/2014 12:40 PM, Sylvain Bauza wrote:
>> Le 14/10/2014 18:29, Anita Kuno a écrit :
>>> On 10/14/2014 11:35 AM, Adrien Cunin wrote:
 Hi everyone,

 Inspired by the travels tips published for the HK summit, the
 French OpenStack user group wrote a similar wiki page for Paris:

 https://wiki.openstack.org/wiki/Summit/Kilo/Travel_Tips

 Also note that if you want some local informations or want to talk
 about user groups during the summit we will have a booth in the
 market place expo hall (location: E47).

 Adrien, On behalf of OpenStack-fr



 ___ OpenStack-dev
 mailing list OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>> This is awesome, thanks Adrien.
>>>
>>> I have a request. Is there any way to expand the Food section to
>>> include how to find vegetarian restaurants? Any help here appreciated.
>> Well, this is a tough question. We usually make use of TripAdvisor or
>> other French noting websites for finding good places to eat, but some
>> small restaurant don't provide this kind of information. There is no
>> official requirement to provide these details for example.
>>
>> What I can suggest is when looking at the menu (this is mandatory to put
>> it outside of the restaurant) and check for the word 'Végétarien'.
>>
>> Will amend the wiki tho with these details.
>>
>> -Sylvain
> Thanks Sylvain, I appreciate the pointers. Will wander around and look
> at menus outside restaurants. Not hard to do since I love wandering
> around the streets of Paris, so easy to walk, nice wide sidewalks.
>
> I'll also check back on the wikipage after you have edited.
>
> Thank you!
> Anita.
>>> Thanks so much for creating this wikipage,
>>> Anita.
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Maish Saidel-Keesing


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

2014-10-23 Thread Alan Kavanagh
+1 many thanks to Kyle for putting this as a priority, its most welcome.
/Alan

-Original Message-
From: Erik Moe [mailto:erik@ericsson.com] 
Sent: October-22-14 5:01 PM
To: Steve Gordon; OpenStack Development Mailing List (not for usage questions)
Cc: iawe...@cisco.com
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints


Hi,

Great that we can have more focus on this. I'll attend the meeting on Monday 
and also attend the summit, looking forward to these discussions.

Thanks,
Erik


-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: den 22 oktober 2014 16:29
To: OpenStack Development Mailing List (not for usage questions)
Cc: Erik Moe; iawe...@cisco.com; calum.lou...@metaswitch.com
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

- Original Message -
> From: "Kyle Mestery" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> There are currently at least two BPs registered for VLAN trunk support 
> to VMs in neutron-specs [1] [2]. This is clearly something that I'd 
> like to see us land in Kilo, as it enables a bunch of things for the 
> NFV use cases. I'm going to propose that we talk about this at an 
> upcoming Neutron meeting [3]. Given the rotating schedule of this 
> meeting, and the fact the Summit is fast approaching, I'm going to 
> propose we allocate a bit of time in next Monday's meeting to discuss 
> this. It's likely we can continue this discussion F2F in Paris as 
> well, but getting a head start would be good.
> 
> Thanks,
> Kyle
> 
> [1] https://review.openstack.org/#/c/94612/
> [2] https://review.openstack.org/#/c/97714
> [3] https://wiki.openstack.org/wiki/Network/Meetings

Hi Kyle,

Thanks for raising this, it would be great to have a converged plan for 
addressing this use case [1] for Kilo. I plan to attend the Neutron meeting and 
I've CC'd Erik, Ian, and Calum to make sure they are aware as well.

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread joehuang
Hi,

Because I am not able to find a meeting room to have deep diving OpenStack 
cascading before design summit. You are welcome to have a f2f conversation 
about the cascading before design summit. I planned to stay at Paris from 
Oct.30 to Nov.8, if you have any doubt or question, please feel free to contact 
me. All the conversation is for clarification / idea exchange purpose, not for 
any secret agreement purpose. It is necessary before design summit, for design 
summit session, it's only 40 minutes, if all 40 minutes are spent on basic 
question and clarification, then no valuable conclusion can be drawn in the 
meeting. So I want to work as client-server mode, anyone who is interested in 
talking cascading with me, just tell me when he will come to the hotel where I 
stay at Paris, then a chat could be made to reduce misunderstanding, get more 
clear picture, and focus on what need to be discussed and consensuses during 
the design summit session. 

>>>"It kind of feels to me that if we just concentrated on the part of this 
>>>that is working out how to distribute/federate Neutron then we'd have a 
>>>solution that could be mapped as easily cells and/or regions - and I wonder 
>>>if then why really need yet another aggregation concept ?"

My answer is that it seems to be feasible but can not meet the muti-site cloud 
demand (that's the drive force for cascading): 
1) large cloud operator ask multi-vendor to build the distributed but unified 
multi-site cloud together and each vendor has his own OpenStack based solution. 
If shared Nova/Cinder with federated Neutron used, the cross data center 
integration through RPC message for multi-vendor infrastrcuture is very 
difficult, and no clear responsibility boundry, it leads to difficulty for 
trouble shooting, upgrade, etc.
2) restful API /CLI is required for each site to make the cloud always workable 
and manageable. If shared Nova/Cinder with federated Neutron, then some data 
center is not able to expose restful API/CLI for management purpose.
3) the unified cloud need to expose open and standard api. If shared Nova / 
Cinder with federated Neutron, this point can be arhieved.

Best Regards

Chaoyi Huang ( joehuang )

-Original Message-
From: henry hly [mailto:henry4...@gmail.com] 
Sent: Thursday, October 23, 2014 3:13 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack 
cascading

Hi Phil,

Thanks for your feedback, and patience of this long history reading :) See 
comments inline.

On Wed, Oct 22, 2014 at 5:59 PM, Day, Phil  wrote:
>> -Original Message-
>> From: henry hly [mailto:henry4...@gmail.com]
>> Sent: 08 October 2014 09:16
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by 
>> OpenStack cascading
>>
>> Hi,
>>
>> Good questions: why not just keeping multiple endpoints, and leaving 
>> orchestration effort in the client side?
>>
>> From feedback of some large data center operators, they want the 
>> cloud exposed to tenant as a single region with multiple AZs, while 
>> each AZ may be distributed in different/same locations, very similar with AZ 
>> concept of AWS.
>> And the OpenStack API is indispensable for the cloud for eco-system 
>> friendly.
>>
>> The cascading is mainly doing one thing: map each standalone child 
>> Openstack to AZs in the parent Openstack, hide separated child 
>> endpoints, thus converge them into a single standard OS-API endpoint.
>>
>> One of the obvious benefit doing so is the networking: we can create 
>> a single Router/LB, with subnet/port member from different child, 
>> just like in a single OpenStack instance. Without the parent 
>> OpenStack working as the aggregation layer, it is not so easy to do 
>> so. Explicit VPN endpoint may be required in each child.
>>
> I've read through the thread and the various links, and to me this still 
> sounds an awful lot like having multiple regions in Keystone.
>
> First of all I think we're in danger of getting badly mixed up in terminology 
> here around AZs which is an awfully overloaded term - esp when we make 
> comparisons to AWS AZs.  Whether we think the current Openstack usage of 
> these terms or not, lets at least stick to how they are currently defined and 
> used in Openstack:
>
> AZs - A scheduling concept in Nova and Cinder.Simply provides some 
> isolation schemantic about a compute host or storage server.  Nothing to do 
> with explicit physical or geographical location, although some degree of that 
> (separate racks, power, etc) is usually implied.
>
> Regions - A keystone concept for a collection of Openstack Endpoints.   They 
> may be distinct (a completely isolated set of Openstack service) or overlap 
> (some shared services).  Openstack clients support explicit user selection of 
> a region.
>
> Cells - A scalability / fault-isolation concept 

Re: [openstack-dev] [rally][users]: Synchronizing between multiple scenario instances.

2014-10-23 Thread Behzad Dastur (bdastur)
Hi Boris,
I am still getting my feet wet with rally so some concepts are new, and did not 
quite get your statement regarding the different load generators. I am 
presuming you are referring to the Scenario runner and the different “types” of 
runs.

What I was looking at is the runner, where we specify the type, times and 
concurrency.  We could have an additional field(s) which would specify the 
synchronization property.

Essentially, what I have found most useful in the cases where we run 
scenarios/tests in parallel;  is some sort of “barrier”, where at a certain 
point in the run you want all the parallel tasks to reach a specific point 
before continuing.

Also, I am also considering cases where synchronization is needed within a 
single benchmark case, where the same benchmark scenario:
creates some vms, performs some tasks, deletes the vm

Just for simplicity as a POC, I tried something with shared memory 
(multiprocessing.Value), which looks something like this:

class Barrier(object):
__init__(self, concurrency)
self.shmem = multiprocessing.Value(‘I’, concurrency)
self.lock = multiprocessing.Lock()

def wait_at_barrier ():
   while self.shmem.value > 0:
   time.sleep(1)
   return

def decrement_shm_concurrency_cnt ():
 with self.lock:
 self.shmem.value -=  1


And from the scenario, it can be called as:

scenario:
 -- do some action –
  barrobj.decrement_shm_concurrency_cnt()
 sync_helper.wait_at_barrier()
-- do some action –   <-- all processes will do this action at almost the same 
time.

I would be happy to discuss more to get a good common solution.

regards,
Behzad





From: bo...@pavlovic.ru [mailto:bo...@pavlovic.ru] On Behalf Of Boris Pavlovic
Sent: Tuesday, October 21, 2014 3:23 PM
To: Behzad Dastur (bdastur)
Cc: OpenStack Development Mailing List (not for usage questions); Pradeep 
Chandrasekar (pradeech); John Wei-I Wu (weiwu)
Subject: Re: [openstack-dev] [rally][users]: Synchronizing between multiple 
scenario instances.

Behzad,

Unfortunately at this point there is no support of locking between scenarios.


It will be quite tricky for implementation, because we have different load 
generators, and we will need to find
common solutions for all of them.

If you have any ideas about how to implement it in such way, I will be more 
than happy to get this in upstream.


One of the way that I see is to having some kind of chain-of-benchmarks:

1) Like first benchmark is running N VMs
2) Second benchmarking is doing something with all those benchmarks
3) Third benchmark is deleting all these VMs

(where chain elements are atomic actions)

Probably this will be better long term solution.
Only thing that we should understand is how to store those results and how to 
display them.


If you would like to help with this let's start discussing it, in some kind of 
google docs.

Thoughts?


Best regards,
Boris Pavlovic


On Wed, Oct 22, 2014 at 2:13 AM, Behzad Dastur (bdastur) 
mailto:bdas...@cisco.com>> wrote:
Does rally provide any synchronization mechanism to synchronize between 
multiple scenario, when running in parallel? Rally spawns multiple processes, 
with each process running the scenario.  We need a way to synchronize between 
these to start a perf test operation at the same time.


regards,
Behzad


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Are disk-intensive operations managed ... or not?

2014-10-23 Thread Preston L. Bannister
John,

As a (new) OpenStack developer, I just discovered the
"CINDER_SECURE_DELETE" option.

As an *implicit* default, I entirely approve.  Production OpenStack
installations should *absolutely* insure there is no information leakage
from one instance to the next.

As an *explicit* default, I am not so sure. Low-end storage requires you do
this explicitly. High-end storage can insure information never leaks.
Counting on high level storage can make the upper levels more efficient,
can be a good thing.

The debate about whether to wipe LV's pretty much massively depends on the
intelligence of the underlying store. If the lower level storage never
returns accidental information ... explicit zeroes are not needed.



On Wed, Oct 22, 2014 at 11:15 PM, John Griffith 
wrote:

>
>
> On Tue, Oct 21, 2014 at 9:17 AM, Duncan Thomas 
> wrote:
>
>> For LVM-thin I believe it is already disabled? It is only really
>> needed on LVM-thick, where the returning zeros behaviour is not done.
>>
>> On 21 October 2014 08:29, Avishay Traeger 
>> wrote:
>> > I would say that wipe-on-delete is not necessary in most deployments.
>> >
>> > Most storage backends exhibit the following behavior:
>> > 1. Delete volume A that has data on physical sectors 1-10
>> > 2. Create new volume B
>> > 3. Read from volume B before writing, which happens to map to physical
>> > sector 5 - backend should return zeroes here, and not data from volume A
>> >
>> > In case the backend doesn't provide this rather standard behavior, data
>> must
>> > be wiped immediately.  Otherwise, the only risk is physical security,
>> and if
>> > that's not adequate, customers shouldn't be storing all their data there
>> > regardless.  You could also run a periodic job to wipe deleted volumes
>> to
>> > reduce the window of vulnerability, without making delete_volume take a
>> > ridiculously long time.
>> >
>> > Encryption is a good option as well, and of course it protects the data
>> > before deletion as well (as long as your keys are protected...)
>> >
>> > Bottom line - I too think the default in devstack should be to disable
>> this
>> > option, and think we should consider making the default False in Cinder
>> > itself.  This isn't the first time someone has asked why volume deletion
>> > takes 20 minutes...
>> >
>> > As for queuing backup operations and managing bandwidth for various
>> > operations, ideally this would be done with a holistic view, so that for
>> > example Cinder operations won't interfere with Nova, or different Nova
>> > operations won't interfere with each other, but that is probably far
>> down
>> > the road.
>> >
>> > Thanks,
>> > Avishay
>> >
>> >
>> > On Tue, Oct 21, 2014 at 9:16 AM, Chris Friesen <
>> chris.frie...@windriver.com>
>> > wrote:
>> >>
>> >> On 10/19/2014 09:33 AM, Avishay Traeger wrote:
>> >>>
>> >>> Hi Preston,
>> >>> Replies to some of your cinder-related questions:
>> >>> 1. Creating a snapshot isn't usually an I/O intensive operation.  Are
>> >>> you seeing I/O spike or CPU?  If you're seeing CPU load, I've seen the
>> >>> CPU usage of cinder-api spike sometimes - not sure why.
>> >>> 2. The 'dd' processes that you see are Cinder wiping the volumes
>> during
>> >>> deletion.  You can either disable this in cinder.conf, or you can use
>> a
>> >>> relatively new option to manage the bandwidth used for this.
>> >>>
>> >>> IMHO, deployments should be optimized to not do very long/intensive
>> >>> management operations - for example, use backends with efficient
>> >>> snapshots, use CoW operations wherever possible rather than copying
>> full
>> >>> volumes/images, disabling wipe on delete, etc.
>> >>
>> >>
>> >> In a public-cloud environment I don't think it's reasonable to disable
>> >> wipe-on-delete.
>> >>
>> >> Arguably it would be better to use encryption instead of
>> wipe-on-delete.
>> >> When done with the backing store, just throw away the key and it'll be
>> >> secure enough for most purposes.
>> >>
>> >> Chris
>> >>
>> >>
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>>
>> --
>> Duncan Thomas
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> We disable this in the Gates "CINDER_SECURE_DELETE=False"
>
> ThinLVM (which hopefully will be default upon release of Kilo) doesn't
> need it because internally it returns zeros when reading unallocated blocks
> so it's a non-issue.
>
> The debate of to wipe LV's or not to is a long running issue.  The default
> behavior in Cinder is to leave it enable and IMHO

Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack cascading

2014-10-23 Thread henry hly
Hi Phil,

Thanks for your feedback, and patience of this long history reading :)
See comments inline.

On Wed, Oct 22, 2014 at 5:59 PM, Day, Phil  wrote:
>> -Original Message-
>> From: henry hly [mailto:henry4...@gmail.com]
>> Sent: 08 October 2014 09:16
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [all] [tc] Multi-clouds integration by OpenStack
>> cascading
>>
>> Hi,
>>
>> Good questions: why not just keeping multiple endpoints, and leaving
>> orchestration effort in the client side?
>>
>> From feedback of some large data center operators, they want the cloud
>> exposed to tenant as a single region with multiple AZs, while each AZ may be
>> distributed in different/same locations, very similar with AZ concept of AWS.
>> And the OpenStack API is indispensable for the cloud for eco-system
>> friendly.
>>
>> The cascading is mainly doing one thing: map each standalone child
>> Openstack to AZs in the parent Openstack, hide separated child endpoints,
>> thus converge them into a single standard OS-API endpoint.
>>
>> One of the obvious benefit doing so is the networking: we can create a single
>> Router/LB, with subnet/port member from different child, just like in a 
>> single
>> OpenStack instance. Without the parent OpenStack working as the
>> aggregation layer, it is not so easy to do so. Explicit VPN endpoint may be
>> required in each child.
>>
> I've read through the thread and the various links, and to me this still 
> sounds an awful lot like having multiple regions in Keystone.
>
> First of all I think we're in danger of getting badly mixed up in terminology 
> here around AZs which is an awfully overloaded term - esp when we make 
> comparisons to AWS AZs.  Whether we think the current Openstack usage of 
> these terms or not, lets at least stick to how they are currently defined and 
> used in Openstack:
>
> AZs - A scheduling concept in Nova and Cinder.Simply provides some 
> isolation schemantic about a compute host or storage server.  Nothing to do 
> with explicit physical or geographical location, although some degree of that 
> (separate racks, power, etc) is usually implied.
>
> Regions - A keystone concept for a collection of Openstack Endpoints.   They 
> may be distinct (a completely isolated set of Openstack service) or overlap 
> (some shared services).  Openstack clients support explicit user selection of 
> a region.
>
> Cells - A scalability / fault-isolation concept within Nova.  Because Cells 
> aspires to provide all Nova features transparently across cells this kind or 
> acts like multiple regions where only the Nova service is distinct 
> (Networking has to be common, Glance has to be common or at least federated 
> in a transparent way, etc).   The difference from regions is that the user 
> doesn’t have to make an explicit region choice - they get a single Nova URL 
> for all cells.   From what I remember Cells originally started out also using 
> the existing APIs as the way to connect the Cells together, but had to move 
> away from that because of the performance overhead of going through multiple 
> layers.
>
>

Agree, it's very clear now. However isolation is not all about
hardware and facility fault, REST API is preferred in terms of system
level isolation despite the theoretical protocol serialization
overhead.

>
> Now with Cascading it seems that we're pretty much building on the Regions 
> concept, wrapping it behind a single set of endpoints for user convenience, 
> overloading the term AZ

Sorry not very certain of the meaning "overloading". It's just a
configuration choice by admin in the wrapper Openstack. As you
mentioned, there is no explicit definition of what a AZ should be, so
Cascading select to map it to a child Openstack. Surely we could use
another concept or invent new concept instead of AZ, but AZ is the
most appropriate one because it share the same semantic of "isolation"
with those child.

> to re-expose those sets of services to allow the user to choose between them 
> (doesn't this kind of negate the advantage of not having to specify the 
> region in the client- is that really such a bit deal for users ?) , and doing 
> something to provide a sort of federated Neutron service - because as we all 
> know the hard part in all of this is how you handle the Networking.
>
> It kind of feels to me that if we just concentrated on the part of this that 
> is working out how to distribute/federate Neutron then we'd have a solution 
> that could be mapped as easily cells and/or regions - and I wonder if then 
> why really need yet another aggregation concept ?
>

I agree that it's not so huge a gap between cascading AZ and
standalone endpoints for Nova and Cinder. However, wrapping is
strongly needed by customer feedback for Neutron, especially for those
who operate multiple internally connected DC. They don't like to force
tenants to create multiple route domain, connected with explicit

Re: [openstack-dev] [Glance][Cinder] The sorry state of cinder's driver in Glance

2014-10-23 Thread Flavio Percoco
On 10/22/2014 04:46 PM, Zhi Yan Liu wrote:
> Replied in inline.
> 
> On Wed, Oct 22, 2014 at 9:33 PM, Flavio Percoco  wrote:
>> On 10/22/2014 02:30 PM, Zhi Yan Liu wrote:
>>> Greetings,
>>>
>>> On Wed, Oct 22, 2014 at 4:56 PM, Flavio Percoco  wrote:
 Greetings,

 Back in Havana a, partially-implemented[0][1], Cinder driver was merged
 in Glance to provide an easier and hopefully more consistent interaction
 between glance, cinder and nova when it comes to manage volume images
 and booting from volumes.
>>>
>>> With my idea, it not only for VM provisioning and consuming feature
>>> but also for implementing a consistent and unified block storage
>>> backend for image store.  For historical reasons, we have implemented
>>> a lot of duplicated block storage drivers between glance and cinder,
>>> IMO, cinder could regard as a full-functional block storage backend
>>> from OpenStack's perspective (I mean it contains both data and control
>>> plane), glance could just leverage cinder as a unified block storage
>>> backend. Essentially, Glance has two kind of drivers, block storage
>>> driver and object storage driver (e.g. swift and s3 driver),  from
>>> above opinion, I consider to give glance a cinder driver is very
>>> sensible, it could provide a unified and consistent way to access
>>> different kind of block backend instead of implement duplicated
>>> drivers in both projects.
>>
>> Let me see if I got this right. You're suggesting to have a cinder
>> driver in Glance so we can basically remove the
>> 'create-volume-from-image' functionality from Cinder. is this right?
>>
> 
> I don't think we need to remove any feature as an existing/reasonable
> use case from end user's perspective, 'create-volume-from-image' is a
> useful function and need to keep as-is to me, but I consider we can do
> some changes for internal implementation if we have cinder driver for
> glance, e.g. for this use case, if glance store image as a volume
> already then cinder can create volume effectively - to leverage such
> capability from backend storage, I think this case just like ceph
> current way on this situation (so a duplication example again).
> 
>>> I see some people like to see implementing similar drivers in
>>> different projects again and again, but at least I think this is a
>>> hurtless and beneficial feature/driver.
>>
>> It's not as harmless as it seems. There are many users confused as to
>> what the use case of this driver is. For example, should users create
>> volumes from images? or should the create images that are then stored in
>> a volume? What's the difference?
> 
> I'm not sure I understood all concerns from those folks, but for your
> examples, one key reason I think is that they still think it in
> technical way to much. I mean create-image-from-volume and
> create-volume-from-image are useful and reasonable _use case_ from end
> user's perspective because volume and image are totally different
> concept for end user in cloud context (at least, in OpenStack
> context), the benefits/purpose of leverage cinder store/driver in
> glance is not to change those concepts and existing use case for end
> user/operator but to try to help us implement those feature
> efficiently in glance and cinder inside, IMO, including low the
> duplication as much as possible which as I mentioned before. So, in
> short, I see the impact of this idea is on _implementation_ level,
> instead on the exposed _use case_ level.

While I agree it has a major impact in the implementation of things, I
still think it has an impact from an use-case perspective, even an
operations perspective.

For example, If I were to deploy Glance on top of Cinder, I need to
first make Cinder accessible from Glance, which it might not be. This is
probably not a big deal. However, I also need to figure out things like:
How should I expose this to my users? Do they need it? or should I keep
it internal?

Furthermore, I'd need to answer questions like: Now that images can be
created in volumes, I need to take that into account when doing my size
planning.

Not saying these are blockers for this feature but I just want to make
clear that from a users perspective, it's not as simple as enabling a
new driver in Glance.


>> Technically, the answer is probably none, but from a deployment and
>> usability perspective, there's a huge difference that needs to be
>> considered.
> 
> According to my above explanations, IMO, this driver/idea couldn't
> (and shouldn't) break existing concept and use case for end
> user/operator, but if I still miss something pls let me know.

According to the use-cases explained in this thread (also in the emails
from John and Mathieu) this is something that'd be good having. I'm
looking forward to seeing the driver completed.

As John mentioned in his email, we should probably sync again in K-1 to
see if there's been some progress on the bricks side and the other
things this driver depends on. If there hasn'

Re: [openstack-dev] [oslo] python 2.6 support for oslo libraries

2014-10-23 Thread Andreas Jaeger
Doug, thanks for writing this up.

Looking at your list, I created a patch and only changed oslo.log:

https://review.openstack.org/130444

Please double check that I didn't miss anything,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev