[Openstack] [openstack-dev][cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-04 Thread Vilobh Meshram
Hi All,

For people who missed the design summit session on Delimiter, the
cross-project quota enforcement library, here is a gist of what we
discussed. The etherpad [1] captures the details.

1. Delimiter will not be responsible for rate-limiting.
2. Delimiter will not maintain data for the projects.
3. Delimiter will not have the concept of reservations.
4. Delimiter will fetch information for project quotas from respective
projects.
5. Delimiter will consolidate utility code for quota-related issues in a
common place. For example, companies X, Y, and Z might have different
scripts to fix quota issues; Delimiter can be a single home for them, and
the scripts can be generalized to suit everyone's needs.
6. The details of the project hierarchy are maintained in Keystone, but
Delimiter, while calculating available/free resources, will take into
consideration whether the project has a flat or nested hierarchy.
7. Delimiter will rely on the concept of a generation-id to guarantee
sequencing. A generation-id gives a point-in-time view of resource usage in
a project. Projects consuming Delimiter will need to provide this
information while checking or consuming quota (a toy sketch of the idea
follows this list). At present Nova [3] has the concept of a generation-id.
8. Spec [5] will be modified based on the design summit discussion.
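
To make point 7 concrete, here is a self-contained toy sketch of the
generation-id idea (the class and method names are illustrative, not
Delimiter's actual API): read a point-in-time usage view tagged with a
generation, then consume atomically only if that generation is still
current, retrying on conflict.

# Self-contained toy, not Delimiter's real API: it only illustrates the
# generation-id idea -- read usage tagged with a generation, then consume
# atomically only if that generation is still current.

class GenerationConflict(Exception):
    pass


class OverQuota(Exception):
    pass


class ToyQuotaEngine(object):
    """In-memory stand-in for a per-project resource counter."""

    def __init__(self, hard_limit):
        self.hard_limit = hard_limit
        self.used = 0
        self.generation_id = 0

    def get_usage(self):
        # Point-in-time view of usage, tagged with the current generation.
        return {'used': self.used,
                'hard_limit': self.hard_limit,
                'generation_id': self.generation_id}

    def consume(self, amount, generation_id):
        # Optimistic concurrency: fail if usage moved since the read.
        if generation_id != self.generation_id:
            raise GenerationConflict()
        self.used += amount
        self.generation_id += 1


def consume_with_retry(engine, amount, retries=3):
    for _ in range(retries):
        usage = engine.get_usage()
        if usage['used'] + amount > usage['hard_limit']:
            raise OverQuota("request would exceed the hard limit")
        try:
            return engine.consume(amount, usage['generation_id'])
        except GenerationConflict:
            continue   # someone consumed in between; re-read and retry
    raise GenerationConflict("gave up after %d retries" % retries)


engine = ToyQuotaEngine(hard_limit=5)
consume_with_retry(engine, 3)          # fits: 3 <= 5
try:
    consume_with_retry(engine, 3)      # 3 + 3 > 5
except OverQuota as exc:
    print(exc)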

If you want to contribute to Delimiter, please join #openstack-quota.

We have meetings every Tuesday at 17:00 UTC. Please join us!

I am in the process of setting up a new repo for Delimiter. The launchpad
page[4] is up.


Thanks!

-Vilobh

[1] Etherpad : https://etherpad.openstack.org/p/newton-quota-library
[2] Slides :
http://www.slideshare.net/vilobh/delimiter-openstack-cross-project-quota-library-proposal


[3] https://review.openstack.org/#/c/283253/
[4] https://launchpad.net/delimiter
[5] Spec : https://review.openstack.org/#/c/284454


[Openstack] [openstack-dev] [cross-project] [all] Quotas and the need for reservation

2016-04-05 Thread Vilobh Meshram
Hi All,

As part of the cross-project quota library effort [1] there has been a
discussion around whether we need reservations or should get rid of
reservations altogether.

Reservations are believed to help by setting aside a set of resources up
front, thereby preventing subsequent requests (serial or parallel) from
exceeding quota when the original request has already brought the project
to its quota limits.
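
For context, a toy illustration (not any project's real code) of the race
that motivates reservations: two parallel requests both read usage, both
see room under the limit, and both commit, ending up over quota.

import threading
import time

hard_limit = 10
used = 9
results = []


def naive_request(amount=1):
    global used
    if used + amount <= hard_limit:      # check ...
        time.sleep(0.01)                 # widen the race window for the demo
        used += amount                   # ... then act
        results.append('granted')
    else:
        results.append('rejected')


threads = [threading.Thread(target=naive_request) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Typically prints 11 ['granted', 'granted']: both requests passed the check
# before either one recorded its usage, so the project is now over quota.
print(used, results)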

Questions :-
1. Does reservation, in its current state as used by Nova, Cinder, and
Neutron, help solve the above problem?

2. Is it consistent and reliable? Even with reservations, can we run into
inconsistent behaviour?

3. Do we really need it?

Since we could not come to a conclusion, and since this is a key decision,
I thought I would take insights from the community on which approach they
think is better, and why.

-Vilobh

[1] https://review.openstack.org/#/c/284454/


Re: [Openstack] [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-15 Thread Vilobh Meshram
IMHO, for Magnum and nested quota we need more discussion before
proceeding, because:

1. The main intent of hierarchical multi-tenancy is creating a hierarchy
of projects (so that it is easier for the cloud provider to manage
different projects), with the nested quota driver being able to validate
and impose those restrictions.
2. The tenancy boundary in Magnum is the bay. Bays offer both management
and security isolation between multiple tenants.
3. In Magnum there is no intent to share a single bay between multiple
tenants.

So I would like to have a discussion on whether the nested quota approach
fits our/Magnum's design, and on how the resources would be distributed in
the hierarchy. I will include it in our Magnum weekly meeting agenda.

I have, in fact, drafted a blueprint for it some time back [1].

I am a huge supporter of hierarchical projects and nested quota approaches
(as they, if done correctly, IMHO minimize the admin pain of managing
quotas); I just wanted to see a cleaner way we can get this done for
Magnum.

JFYI, I am the primary author of the Cinder nested quota driver [2] and
co-author of the Nova nested quota driver [3], so I am familiar with the
approach taken in both.

Thoughts ?

-Vilobh

[1]  Magnum Nested Quota :
https://blueprints.launchpad.net/magnum/+spec/nested-quota-magnum
[2] Cinder Nested Quota Driver : https://review.openstack.org/#/c/205369/
[3] Nova Nested Quota Driver : https://review.openstack.org/#/c/242626/

On Tue, Dec 15, 2015 at 10:10 AM, Tim Bell  wrote:

> Thanks… it is really important from the user experience that we keep the
> nested quota implementations in sync so we don’t have different semantics.
>
>
>
> Tim
>
>
>
> *From:* Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* 15 December 2015 18:44
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-...@lists.openstack.org>
> *Cc:* OpenStack Mailing List (not for usage questions) <
> openstack@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
>
>
>
> Vilobh,
>
>
>
> Thanks for advancing this important topic. I took a look at what Tim
> referenced how Nova is implementing nested quotas, and it seems to me
> that’s something we could fold in as well to our design. Do you agree?
>
>
>
> Adrian
>
>
>
> On Dec 14, 2015, at 10:22 PM, Tim Bell  wrote:
>
>
>
> Can we have nested project quotas in from the beginning ? Nested projects
> are in Keystone V3 from Kilo onwards and retrofitting this is hard work.
>
>
>
> For details, see the Nova functions at
> https://review.openstack.org/#/c/242626/. Cinder now also has similar
> functions.
>
>
>
> Tim
>
>
>
> *From:* Vilobh Meshram [mailto:vilobhmeshram.openst...@gmail.com
> ]
> *Sent:* 15 December 2015 01:59
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-...@lists.openstack.org>; OpenStack Mailing List (not for usage
> questions) 
> *Subject:* [openstack-dev] [openstack][magnum] Quota for Magnum Resources
>
>
>
> Hi All,
>
>
>
> Currently, it is possible to create unlimited number of resource like
> bay/pod/service/. In Magnum, there should be a limitation for user or
> project to create Magnum resource,
> and the limitation should be configurable[1].
>
>
>
> I proposed following design :-
>
>
>
> 1. Introduce new table magnum.quotas
>
> +------------+--------------+------+-----+---------+----------------+
> | Field      | Type         | Null | Key | Default | Extra          |
> +------------+--------------+------+-----+---------+----------------+
> | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
> | created_at | datetime     | YES  |     | NULL    |                |
> | updated_at | datetime     | YES  |     | NULL    |                |
> | deleted_at | datetime     | YES  |     | NULL    |                |
> | project_id | varchar(255) | YES  | MUL | NULL    |                |
> | resource   | varchar(255) | NO   |     | NULL    |                |
> | hard_limit | int(11)      | YES  |     | NULL    |                |
> | deleted    | int(11)      | YES  |     | NULL    |                |
> +------------+--------------+------+-----+---------+----------------+
>
> resource can be Bay, Pod, Containers, etc.
>
>
>
> 2. API controller for quota will be created to make sure basic CLI
> commands work.
>
> quota-show, quota-delete, quota-create, quota-update
>
> 3. When the admin specifies a quota of X number of resources to be created
> the code should abide by that. For example if hard limit for Bay is 5 (i.e.
> a project can have maximum 5 Bay's) if a user in a proj

[Openstack] [openstack-dev][openstack][magnum] Quota for Magnum Resources

2015-12-14 Thread Vilobh Meshram
Hi All,

Currently, it is possible to create an unlimited number of resources like
bays/pods/services. In Magnum, there should be a limit on how many Magnum
resources a user or project can create, and that limit should be
configurable [1].

I propose the following design:

1. Introduce new table magnum.quotas
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| created_at | datetime     | YES  |     | NULL    |                |
| updated_at | datetime     | YES  |     | NULL    |                |
| deleted_at | datetime     | YES  |     | NULL    |                |
| project_id | varchar(255) | YES  | MUL | NULL    |                |
| resource   | varchar(255) | NO   |     | NULL    |                |
| hard_limit | int(11)      | YES  |     | NULL    |                |
| deleted    | int(11)      | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+

resource can be Bay, Pod, Containers, etc.


2. An API controller for quotas will be created to make sure the basic CLI
commands work:

quota-show, quota-delete, quota-create, quota-update

3. When the admin specifies a quota of X resources, the code should abide
by that. For example, if the hard limit for bays is 5 (i.e. a project can
have a maximum of 5 bays) and a user in that project tries to exceed the
hard limit, the request won't be allowed. The same goes for other
resources. (A rough sketch of the table model and this check follows the
list.)

4. Please note the quota validation only works for resources created via
Magnum. I could not think of a way for Magnum to know whether a
COE-specific utility created a resource in the background. One way could be
to compare what is stored in magnum.quotas with the information about the
actual resources created for a particular bay in k8s/the COE.

5. Introduce a config variable to set quota values.
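
To make points 1 and 3 concrete, here is a rough sketch (SQLAlchemy model
plus a hard-limit check). The class, helper, and exception names are
illustrative, not actual Magnum code, and the count of resources already in
use is assumed to be computed elsewhere, e.g. by counting the project's
existing bays.

# Illustrative sketch only -- the model, helper and exception names below
# are hypothetical, not Magnum's actual code.
from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class MagnumQuota(Base):
    """Maps to the proposed magnum.quotas table."""
    __tablename__ = 'quotas'

    id = Column(Integer, primary_key=True, autoincrement=True)
    created_at = Column(DateTime)
    updated_at = Column(DateTime)
    deleted_at = Column(DateTime)
    deleted = Column(Integer)
    project_id = Column(String(255), index=True)
    resource = Column(String(255), nullable=False)   # e.g. 'bay', 'pod'
    hard_limit = Column(Integer)


class QuotaExceeded(Exception):
    pass


def check_quota(session, project_id, resource, count_in_use, requested=1):
    """Reject the request if it would push the project past hard_limit.

    count_in_use is assumed to be computed elsewhere, e.g. by counting the
    project's existing bays.
    """
    quota = (session.query(MagnumQuota)
             .filter_by(project_id=project_id, resource=resource, deleted=0)
             .first())
    if quota is None or quota.hard_limit is None:
        return   # no quota row -> unlimited, in this toy sketch
    if count_in_use + requested > quota.hard_limit:
        raise QuotaExceeded("%s quota exceeded for project %s"
                            % (resource, project_id))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(MagnumQuota(project_id='p1', resource='bay',
                        hard_limit=5, deleted=0))
session.commit()

check_quota(session, 'p1', 'bay', count_in_use=4)   # 4 + 1 <= 5 -> allowed
try:
    check_quota(session, 'p1', 'bay', count_in_use=5)
except QuotaExceeded as exc:
    print(exc)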

If everyone agrees, I will start the changes by introducing quota
restrictions on bay creation.

Thoughts ??


-Vilobh

[1] https://blueprints.launchpad.net/magnum/+spec/resource-quota


Re: [Openstack] [openstack-dev] [nova]New Quota Subteam on Nova

2015-12-01 Thread Vilobh Meshram
I am highly supportive of the idea of a Nova Quota sub-team for something
as complex as quotas, as it helps us move quickly on reviews and changes.

Agree with John: a test framework for quotas will be helpful and can be
one of the first tasks the Nova Quota sub-team focuses on, as it will lay
the foundation for determining whether the bugs mentioned here
http://bit.ly/1Pbr8YL are valid or not.
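
On the test-framework point, a generic sketch of the shape such a stress
test could take: hammer the quota path from many workers in parallel, then
assert that usage never exceeds the hard limit. This is illustrative only,
not nova's test code.

import concurrent.futures
import threading

HARD_LIMIT = 10


class QuotaUnderTest(object):
    """Stand-in for the real quota code the stress test would exercise."""

    def __init__(self):
        self.used = 0
        self._lock = threading.Lock()

    def reserve(self, amount=1):
        with self._lock:
            if self.used + amount > HARD_LIMIT:
                return False
            self.used += amount
            return True


def test_parallel_reservations(workers=32, attempts=100):
    quota = QuotaUnderTest()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(quota.reserve)
                   for _ in range(workers * attempts)]
        granted = sum(1 for f in futures if f.result())
    # The invariants such a stress test cares about:
    assert quota.used <= HARD_LIMIT
    assert granted == quota.used
    return granted


print(test_parallel_reservations())   # 10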

Having worked in the area of quotas for a while now, introducing features
like the Cinder nested quota driver [1][2], I strongly feel that something
like a Nova Quota sub-team will definitely help. I mention the Cinder quota
driver because it was accepted at the Mitaka design summit that the Nova
nested quota driver [3] would pursue the route taken by Cinder. Nested
quota is just one part of the quota subsystem, and working in a small team
helped us iterate quickly on the nested quota patches [4][5][6][7], so IMHO
forming a Nova quota sub-team will help.

Melanie,

If you can share the details of the bug Joe mentioned for reproducing
quota bugs locally, it would be helpful.

-Vilobh (irc: vilobhmm)

[1] Code : https://review.openstack.org/#/c/205369/
[2] Blueprint :
https://blueprints.launchpad.net/cinder/+spec/cinder-nested-quota-driver
[3] Nova Nested Quota Spec : https://review.openstack.org/#/c/209969/
[4] https://review.openstack.org/#/c/242568/
[5] https://review.openstack.org/#/c/242500/
[6] https://review.openstack.org/#/c/242514/
[7] https://review.openstack.org/#/c/242626/


On Mon, Nov 30, 2015 at 10:59 AM, melanie witt  wrote:

> On Nov 26, 2015, at 9:36, John Garbutt  wrote:
>
> > A suggestion in the past, that I like, is creating a nova functional
> > test that stress tests the quota code.
> >
> > Hopefully that will be able to help reproduce the error.
> > That should help prove if any proposed fix actually works.
>
> +1, I think it's wise to get some data on the current state of quotas
> before choosing a redesign. IIRC, Joe Gordon described a test scenario he
> used to use to reproduce quota bugs locally, in one of the launchpad bugs.
> If we could automate something like that, we could use it to demonstrate
> how quotas currently behave during parallel requests and try things like
> disabling reservations. I also like the idea of being able to verify the
> effects of proposed fixes.
>
> -melanie (irc: melwitt)
>
>
>
>
>
>
>
>


[Openstack] [openstack] [openstack-dev] [nova] Instance snapshot creation locally and Negative values returned by Resource Tracker

2015-11-03 Thread Vilobh Meshram
Hi All,

I see negative values being returned by the resource tracker, which is
surprising, since enough capacity is available on the hypervisor (as seen
through the df -ha output [0]). In my setup I have configured nova.conf to
create instance snapshots locally and I *don't have* the disk filter
enabled.

Local instance snapshot means the snapshot creation (and conversion from
RAW=>QCOW2) happens on the Hypervisor where the instance was created. After
the conversion the snapshot is uploaded to Glance and deleted from the
Hypervisor.

Questions are :-

1. compute_nodes['free_disk_gb'] is not in sync with the actual free disk
capacity for that partition (as seen by df -ha) [0] (see /home).

This is because the resource tracker is returning negative values for
free_disk_gb [1], and that is because the value of
resources['local_gb_used'] is greater than resources['local_gb']. The value
of resources['local_gb_used'] should ideally be the local gigabytes
actually used on the hypervisor (787G [0]) but is in fact the local
gigabytes allocated on the hypervisor (3525G [0]). Allocated is the sum of
the used capacity on the hypervisor plus the space consumed by the
instances spawned on it (and their size depends on which flavor each VM was
spawned with). Because of [2] the used space on the hypervisor is discarded
and only the space consumed by the instances on the HV is taken into
consideration. (A small worked example follows these questions.)

Was there a specific reason to do so, specifically [2], i.e. resetting the
value of resources['local_gb_used']?

2. Is seeing negative values for compute_nodes['free_disk_gb'] and
compute_nodes['disk_available_least'] a normal pattern ? When can we expect
to see them ?

3. Let's say in future I plan to enable the disk filter; the scheduler
logic will then make sure not to pick this hypervisor if it is nearing full
consumption (considering it might need enough space for snapshot creation
and, later, scratch space for the snapshot conversion from RAW => QCOW2).
Will that help so that the resource tracker does not return negative
values? Is there a recommended overcommit ratio for this scenario, where
you happen to create/convert the snapshot locally before uploading it to
Glance?

4. How will multiple snapshot requests for instances on the same hypervisor
be handled? By the time a request reaches the compute node it has no clear
idea about the free capacity on the HV, which might leave instances
unusable. Will something of this sort [3] help? How do people using local
snapshots handle it right now?
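
To make question 1 concrete, a small worked example. The partition size
below is assumed purely for illustration; the 787G (used) and 3525G
(allocated) figures are the ones from [0].

# Worked example only; the partition size is assumed for illustration, the
# real values are in the df -ha output at [0].
local_gb = 2000          # assumed total size of the instances partition (GB)
used_per_df_gb = 787     # space actually used, as df reports it ([0])
allocated_gb = 3525      # disk allocated to the instances on this HV ([0])

# What free_disk_gb would be if local_gb_used tracked space actually used:
print(local_gb - used_per_df_gb)    # 1213 GB free

# What it becomes when [2] resets local_gb_used to the allocated space:
print(local_gb - allocated_gb)      # -1525, i.e. the negative value observed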

-Vilobh

[0] http://paste.openstack.org/show/477926/
[1]
https://github.com/openstack/nova/blob/stable/liberty/nova/compute/resource_tracker.py#L576
[2]
https://github.com/openstack/nova/blob/stable/liberty/nova/compute/resource_tracker.py#L853
[3] https://review.openstack.org/#/c/208078/


[Openstack] [openstack-dev] [nova] Servicegroup refactoring for the Control Plane - Mitaka

2015-09-23 Thread Vilobh Meshram
Hi All,

In Liberty, spec [1] was approved with the conclusion that nova.services
data be stored and managed by the respective driver backend selected by
CONF.servicegroup_driver (which can be DB/Zookeeper/Memcache).

When this spec was proposed again for Mitaka [3], the idea that came up is
that the nova.services data will remain in the nova database itself, and
the servicegroup Zookeeper/Memcache drivers will be used solely for the
liveness (up/down-ness) of the service. The idea is to use the best of both
worlds: a few operations, for example getting the min/max for a service id,
can be quicker as a DB query than against the ZK/Memcache backends, while
the ZK driver is worthwhile for state management as it minimizes the burden
on the nova DB of storing the additional *periodic* (depending on
service_down_time) liveness information.

Please note that [4] depends on [3], and a conclusion on [3] can pave the
way forward for [4] (similarly, [1] was a dependency for [2]). A detailed
document [5] covers all the possible options obtained from different
permutations of the various drivers (db/zk/mc). Once we have a conclusion
on one of the approaches proposed in [5], I will update spec [3] to reflect
these changes.

So in short

*Accepted in Liberty [1][2]:*
[1] Service information is stored in the respective backend configured by
CONF.servicegroup_driver, and all the interfaces that access service
information go through the servicegroup layer.
[2] Add Tooz-specific drivers, e.g. replace the existing nova servicegroup
Zookeeper driver with a new Zookeeper driver backed by the Tooz Zookeeper
driver.

*Proposal for Mitaka [3][4]:*
[3] Service information is stored in nova.services (the nova database) and
liveness information is managed by CONF.servicegroup_driver
(DB/Zookeeper/Memcache).
[4] Stick with what was accepted for [2]; only the scope will be decided
based on whether we go with [1] (as accepted for Liberty) or [3] (as
proposed for Mitaka).
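
For readers less familiar with the servicegroup layer, a stripped-down
sketch of the split being proposed: the driver only answers "is this
service alive?", while the service records themselves stay in
nova.services. The interface below is simplified and illustrative, not
nova's exact driver API.

# Simplified, illustrative sketch of the liveness-only responsibility.
import abc
import time


class ServiceGroupDriver(abc.ABC):

    @abc.abstractmethod
    def join(self, member, group):
        """Advertise that this service is alive (heartbeat, ephemeral node, ...)."""

    @abc.abstractmethod
    def is_up(self, member, group):
        """Return True if the member has reported in recently enough."""


class ToyDBDriver(ServiceGroupDriver):
    """DB-style liveness: 'up' means 'heartbeated within service_down_time'."""

    def __init__(self, service_down_time=60):
        self.service_down_time = service_down_time
        self._last_seen = {}

    def join(self, member, group):
        self._last_seen[(group, member)] = time.time()

    def is_up(self, member, group):
        last = self._last_seen.get((group, member))
        return last is not None and (time.time() - last) < self.service_down_time


# A ZK-backed driver would implement join() as creating an ephemeral znode
# and is_up() as checking that the znode still exists, keeping that churn out
# of the nova DB while nova.services keeps holding the service records.
driver = ToyDBDriver()
driver.join('compute-01', 'compute')
print(driver.is_up('compute-01', 'compute'))   # True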


- Vilobh

[1] Servicegroup foundational refactoring for Control Plane *(Liberty)* -
https://review.openstack.org/#/c/190322/

[2] Add tooz service group driver* (Liberty)* -
https://review.openstack.org/#/c/138607/

[3] Servicegroup foundational refactoring for Control Plane *(Mitaka)* -
https://review.openstack.org/#/c/222423/

[4] Add tooz service group driver *(Mitaka) *-
https://review.openstack.org/#/c/222422/

[5] *Various options and their impact*:
https://etherpad.openstack.org/p/servicegroup-refactoring


[Openstack] [openstack-dev] [Magnum] Obtain the objects from the bay endpoint

2015-08-11 Thread Vilobh Meshram
Hi All,

As discussed in today's Magnum weekly meeting, I have shown interest in
working on [1].

Problem:

Currently, objects (pod/rc/service) are read from the database. In order
for native clients to work, they must be read from the REST bay endpoint.
To support native clients, we must have a single source of truth for the
state of the system, not two as is the case today.

sdake and I discussed this on IRC, and we plan to propose the following
solution:

Approach to solve the problem:

A] The READ path needs to change:

1. For python clients:

python-magnum client -> REST API -> conductor -> rest-endpoint-k8s-api handler

(today this is: python-magnum client -> REST API -> DB)

2. For native clients:

native client -> rest-endpoint-k8s-api

B] WRITE operations need to happen via the rest endpoint instead of the
conductor.

C] Another requirement that needs to be satisfied is that the data returned
by Magnum should be the same whether it was created by a native client or
by the python-magnum client.

The fix will make sure all of the above conditions are met.
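
A very rough sketch, with hypothetical names, of what the changed READ path
in A] could look like on the handler side: the pod list comes from the
bay's own API endpoint (k8s here), not from the Magnum database.

# Hypothetical sketch only -- the handler, attribute and URL below are
# illustrative, not Magnum's real code. The point is the direction of the
# read: the bay's native API is the single source of truth, not the DB.
import requests


class K8sReadHandler(object):

    def __init__(self, timeout=10):
        self.timeout = timeout

    def pod_list(self, bay):
        # bay.api_address is assumed to point at the bay's k8s API server.
        url = '%s/api/v1/pods' % bay.api_address
        resp = requests.get(url, timeout=self.timeout)
        resp.raise_for_status()
        # Whatever k8s reports *is* the state; nothing is read from the
        # Magnum DB, so native clients and the python-magnum client see the
        # same objects.
        return resp.json().get('items', [])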

Need your input on the proposed approach.


-Vilobh

[1] https://blueprints.launchpad.net/magnum/+spec/objects-from-bay


Re: [Openstack] [openstack-dev][cinder] Nested Quota Driver and policy.json changes

2015-07-21 Thread Vilobh Meshram
Hi,

While developing the nested quota driver for Cinder, the following
restrictions apply when performing show/update/delete:

1. show: Only a user who is an admin of the project, an admin of the parent
project, or an admin of the root project should be able to view the quota
of a leaf project.

2. update: Only an admin of the parent project or an admin of the root
project should be able to perform an update.

3. delete: Only an admin of the parent project or an admin of the root
project should be able to perform a delete.

In order to get the parent information or the child list in a nested
hierarchy, calls need to be made to Keystone. So, as part of these changes,
do we want to introduce two new roles in Cinder, one for project_admin and
one for root_admin, so that the token can be scoped at the project/root
level and only the operations permissible at the respective level, as
described above, are allowed?

For example  :-

A
 |
B
 |
C

cinder quota-update C (should only be permissible from B or A)

This can be achieved either by:
1. Introducing a project_admin or cloud_admin rule in policy.json and later
populating [1] with the respective target [2][3]. This minimizes code
changes and gives operators the freedom to modify policy.json and tune the
behaviour accordingly.
2. Not introducing these two roles in policy.json and instead handling this
with additional logic in code; but with this option we can go at most one
level up the hierarchy, since fetching further parents would require a
Keystone call.
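
Either way, both options boil down to the same check. A minimal sketch,
assuming the parent chain of the target project has already been fetched
from Keystone; the function, role names, and parameters below are
illustrative, not the driver's actual code.

# Illustrative only. Given the hierarchy A -> B -> C, an update on C's quota
# should be allowed only to an admin of one of C's ancestors (B or A), or to
# a root/cloud admin.

def is_update_allowed(caller_project_id, caller_roles,
                      target_parent_ids, root_project_id):
    """target_parent_ids: ancestor project ids of the target, nearest first,
    assumed to have been fetched from Keystone by the caller."""
    if 'cloud_admin' in caller_roles and caller_project_id == root_project_id:
        return True                     # root admin may touch any project
    if 'project_admin' in caller_roles:
        return caller_project_id in target_parent_ids  # admin of an ancestor
    return False


# A -> B -> C: an admin scoped to B may update C's quota ...
print(is_update_allowed('B', ['project_admin'], ['B', 'A'], 'A'))  # True
# ... but an admin scoped to C itself (or any non-ancestor) may not.
print(is_update_allowed('C', ['project_admin'], ['B', 'A'], 'A'))  # False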

I need opinions on which option would be more helpful in the long term.

-Vilobh
[1]
https://github.com/openstack/cinder/blob/master/cinder/api/contrib/quotas.py#L33
[2]
https://github.com/openstack/cinder/blob/master/cinder/api/extensions.py#L379
[3]
https://github.com/openstack/cinder/blob/master/cinder/api/contrib/quotas.py#L109


[Openstack] [openstack] [openstack-dev] [cinder] Problems with existing Cinder Database Quota Driver

2015-05-12 Thread Vilobh Meshram
Hi All,

I am working on the nested quota driver for Cinder [1] and, as part of
that effort, trying to clean up some of the existing quota-related issues
we have in Cinder. One of the obvious ones we saw recently is [2], where
usage and reservation quotas were being deleted on quota deletion.

It would be great if you could mention anything you are currently
experiencing with Cinder quotas, so that I can help make them better by
addressing those issues.

-Vilobh
[1] https://review.openstack.org/#/c/173141/
[2] https://bugs.launchpad.net/cinder/+bug/1410034


Re: [Openstack] [openstack-dev] [openstack][nova] Does anyone use Zookeeper, Memcache Nova ServiceGroup Driver ?

2015-04-28 Thread Vilobh Meshram
Attila,

Thanks for the details.

Why is the current ZK driver not good?

Apart from their slowness, are the MC and ZK drivers reliable enough?

Let's say I have more than 1000 computes; would you still suggest going
with the DB servicegroup driver?

>The sg drivers were introduced to eliminate 100 updates/sec at 1000 hosts,
>but they caused all services to be fetched from the DB even if, at the
>given code part, you just need the alive services.

I couldn't quite follow this comment: the current implementation has
get_all and service_is_up calls, so why is it still fetching all compute
nodes rather than only the ones for which service_is_up is true?

-Vilobh

On Tue, Apr 28, 2015 at 12:34 AM, Attila Fazekas 
wrote:

> How many compute nodes do you want to manage ?
>
> If it is less than ~1000, you do not need to care.
> If you have more, just use an SSD with a good write IOPS value.
>
> MySQL can actually be fast with enough memory and a good SSD.
> Even faster than [1].
>
> ZK as a technology is good; the current nova driver is not. Not recommended.
> The current MC driver does a lot of TCP ping-pong for every node;
> it can be slower than the SQL approach.
>
> IMHO, at a high compute node count you would face scheduler latency
> issues sooner than sg driver issues. (It is not Log(N) :()
>
> The sg drivers were introduced to eliminate 100 updates/sec at 1000 hosts,
> but they caused all services to be fetched from the DB even if, at the
> given code part, you just need the alive services.
>
>
> [1]
> http://www.percona.com/blog/2013/10/18/innodb-scalability-issues-tables-without-primary-keys/
>
> - Original Message -
> > From: "Vilobh Meshram" 
> > To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-...@lists.openstack.org>, "OpenStack
> > Mailing List (not for usage questions)" 
> > Sent: Tuesday, April 28, 2015 1:21:58 AM
> > Subject: [openstack-dev] [openstack][nova] Does anyone use Zookeeper,
> Memcache Nova ServiceGroup Driver ?
> >
> > Hi,
> >
> > Does anyone use Zookeeper[1], Memcache[2] Nova ServiceGroup Driver ?
> >
> > If yes how has been your experience with it. It was noticed that most of
> the
> > deployment try to use the default Database driver[3]. Any experiences
> with
> > Zookeeper, Memcache driver will be helpful.
> >
> > -Vilobh
> >
> > [1]
> >
> https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/zk.py
> > [2]
> >
> https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/mc.py
> > [3]
> >
> https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py
> >
> >
>


[Openstack] [openstack-dev][openstack][nova] Does anyone use Zookeeper, Memcache Nova ServiceGroup Driver ?

2015-04-27 Thread Vilobh Meshram
Hi,

Does anyone use Zookeeper[1], Memcache[2] Nova ServiceGroup Driver ?

If yes, how has your experience been with them? It was noticed that most
deployments tend to use the default database driver [3]. Any experiences
with the Zookeeper or Memcache drivers will be helpful.

-Vilobh

[1]
https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/zk.py
[2]
https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/mc.py
[3]
https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/db.py


Re: [Openstack] [Cinder] Cinder State Machine - Kilo Design Summit Talk - November 5

2014-11-06 Thread Vilobh Meshram
Hi Josh,

I have updated https://etherpad.openstack.org/p/cinder-enforcement-of-states -
Cinder Enforcement of States - with the write-up (in an easily
understandable way) for dynamic state diagram generation, depending on the
way the flow has been laid out in the code.

With this in place, the comments Mike Perez had regarding more cleanup in
states.py should go away. As of now I have just cleaned up all the
duplicate code.

I am working on the read-modify-update part and should propose a patch in
a few hours, once I fix the failing test.

Thanks,
Vilobh
From: Vilobh Meshram
To: "openstack@lists.openstack.org"
Cc: Vilobh Meshram
Sent: Tuesday, November 4, 2014 2:55 PM
Subject: [Cinder] Cinder State Machine - Kilo Design Summit Talk - November 5
The following etherpad links will be used for the talk:

https://etherpad.openstack.org/p/cinder-state-machine-and-rolling-upgrades -
Cinder State Machine and Rolling upgrades

https://etherpad.openstack.org/p/cinder-enforcement-of-states -
Cinder Enforcement of States

Thanks,
Vilobh




Re: [Openstack] [Cinder] Cinder State Machine - Kilo Design Summit Talk - November 5

2014-11-06 Thread Vilobh Meshram
I have updated https://etherpad.openstack.org/p/cinder-enforcement-of-states -
Cinder Enforcement of States - with the write-up (in an easily
understandable way) for dynamic state diagram generation, depending on the
way the flow has been laid out in the code.

Interested people, please have a look.

Thanks,
Vilobh

 

From: Vilobh Meshram
To: "openstack@lists.openstack.org"
Cc: Vilobh Meshram
Sent: Tuesday, November 4, 2014 2:55 PM
Subject: [Cinder] Cinder State Machine - Kilo Design Summit Talk - November 5
The following etherpad links will be used for the talk:

https://etherpad.openstack.org/p/cinder-state-machine-and-rolling-upgrades -
Cinder State Machine and Rolling upgrades

https://etherpad.openstack.org/p/cinder-enforcement-of-states -
Cinder Enforcement of States

Thanks,
Vilobh



[Openstack] [Cinder] Cinder State Machine - Kilo Design Summit Talk - November 5

2014-11-04 Thread Vilobh Meshram

The following etherpad links will be used for the talk:

https://etherpad.openstack.org/p/cinder-state-machine-and-rolling-upgrades -
Cinder State Machine and Rolling upgrades

https://etherpad.openstack.org/p/cinder-enforcement-of-states -
Cinder Enforcement of States

Thanks,
Vilobh

