Tihomir,

Your comments in the patch were actually the clearest to me about the ease of 
customizing without requiring upstream changes, and they really made me think 
more about your points.

Here are a few bullet points for consideration.


  *   Will we take on auto-discovery of API extensions in two places (Python for 
legacy and JS for new)?
  *   As teams move towards deprecating / modifying APIs, who will be 
responsible for ensuring the JS libraries stay up to date and for keeping tabs 
on every single project?  Right now in Glance, for example, they are working on 
some fixes to the v2 API (soon to become v2.3) that will allow them to 
deprecate v1 so that Nova can migrate off of v1.  Part of this includes making 
simultaneous improvements to the client library so that the switch can happen 
more transparently to client users. This is testing and maintenance the service 
team already takes on.
  *   The service API documentation almost always lags (specs help now), so the 
service team takes on the burden of exposing a programmatic access path that is 
tested and easily consumable via the Python clients, which removes some 
guesswork from using the service.
  *   This is going to be an incremental approach with legacy support 
requirements anyway, I think.  So, the need to incorporate Python-side changes 
won't just go away.
  *   A tangent that needs to be considered IMO, since I'm working on some 
Elasticsearch things right now: which of these would be better if we introduce 
a server-side caching mechanism, or a new source of data such as Elasticsearch, 
to improve performance?
     *   Could the client simply control whether the cache is used via a header 
(e.g. Cache-Control: no-cache), with the server side using the cache 
appropriately in either case? A rough sketch of what I mean follows this list.
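
None of this is real Horizon code; it's just a sketch assuming Django's cache 
framework and the existing @rest_utils.ajax() decorator, with an illustrative 
cache key and timeout:

from django.core.cache import cache
from django.views import generic

from openstack_dashboard import api
from openstack_dashboard.api.rest import utils as rest_utils


class Project(generic.View):
    @rest_utils.ajax()
    def get(self, request, id):
        """Get a single project, honoring the client's Cache-Control header."""
        cache_key = 'project:%s' % id
        if 'no-cache' not in request.META.get('HTTP_CACHE_CONTROL', ''):
            cached = cache.get(cache_key)
            if cached is not None:
                return cached
        project = api.keystone.tenant_get(request, id).to_dict()
        cache.set(cache_key, project, timeout=30)
        return project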

I'm not sure I fully understood your example about Cinder.  Was it the 
cinderclient that held up delivery of that Horizon support, or the Cinder API, 
or both?  If the API isn't in, then it would hold up delivery of the feature in 
any case. If it is just about delivering new functionality, wouldn't all that is 
required in Richard's approach be to drop in a new file of decorated classes / 
functions from his utility with the APIs you want? None of the API calls have 
anything to do with how your view actually replaces the upstream view.  These 
are all just about accessing the data.

Finally, I mentioned the following in the patch, related to your example below 
about the client making two calls to do an update, but I wanted to raise it here 
to see whether that approach was purposeful (I don't know the history):

>> Do we really need the lines:
>>
>> project = api.keystone.tenant_get(request, id)
>> kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)
I agree that if you already have all the data, it is really bad to have to make 
another call. I do think there is room for discussing the reasoning, though.
As far as I can tell, they do this so that if you are updating an entity, you 
have to be very specific about the fields you are changing. I actually see this 
as a potentially protective measure against data loss and a sometimes very 
nice-to-have feature. It perhaps was intended to *help* guard against race 
conditions *sometimes*.

Here's an example: Admin user Joe has a Domain open and stares at it for 15 
minutes while he updates the description. Admin user Bob is asked to go ahead 
and enable it. He opens the record, edits it, and then saves it. Joe finishes 
perfecting the description and saves it. The net result is that the Domain is 
enabled and the description gets updated. Last man in still wins if he updates 
the same fields, but if they update different fields then both of their changes 
take effect without stomping on each other. Whether that is good or bad may 
depend on the situation…
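
To illustrate, here is a sketch of the idea only (not the actual 
_tenant_kwargs_from_DATA implementation):

# Only the fields the client explicitly sent get overlaid onto the current
# record; unspecified fields keep their server-side values.
UPDATABLE_FIELDS = ('name', 'description', 'domain_id', 'enabled')


def tenant_kwargs_from_data(data, **defaults):
    """Build kwargs containing only the fields present in the request body."""
    kwargs = dict(defaults)
    kwargs.update((k, v) for k, v in data.items() if k in UPDATABLE_FIELDS)
    return kwargs

# put() then reads the current project and updates just those fields:
#     project = api.keystone.tenant_get(request, id)
#     kwargs = tenant_kwargs_from_data(request.DATA, enabled=None)
#     api.keystone.tenant_update(request, project, **kwargs)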

From: Tihomir Trifonov <t.trifo...@gmail.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Thursday, December 11, 2014 at 7:53 AM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [horizon] REST and Django

Client just needs to know which URL to hit in order to invoke a certain API, 
and does not need to know the procedure name or parameters ordering.


That's where the difference is. I think the client has to know the procedure 
name and parameters. Otherwise we have a translation-factory pattern that 
converts one naming convention to another, and you won't be able to call any 
service API if there is no code in the middleware to translate it to the 
service API procedure name and parameters. To avoid this, we can use a 
transparent proxy model: a direct mapping of a client call to the service API 
naming, which can be done if the client invokes the methods with the names in 
the service API, so that the middleware will just pass parameters and will not 
translate. Instead of:


updating user data:

    <client: POST /user/ >   =>    <middleware: convert to /keystone/update/ >  
 =>   <keystone: update>

we may use:

    <client: POST /keystone/{ver:=x.0}/{method:=update} >   =>   <middleware: 
just forward to getattr(clients[ver], method)(**kwargs) >   =>   <keystone: 
update>


The idea here is that if we have a Keystone 4.0 client, we will have to just 
add it to the clients[] list and nothing more is required at the middleware 
level. Just create the frontend code to use the new Keystone 4.0 methods. 
Otherwise we will have to add all the new/different signatures of 4.0 versus 
2.0/3.0 in the middleware in order to use Keystone 4.0.
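
A rough sketch of how that could look (the URL pattern, view name, and CLIENTS 
registry below are only illustrative, not existing Horizon code):

from django.views import generic

from openstack_dashboard import api
from openstack_dashboard.api.rest import utils as rest_utils

# Hypothetical per-version client registry.
CLIENTS = {
    '2.0': api.keystone,
    '3.0': api.keystone,
}


class KeystoneProxy(generic.View):
    """Handle POST /keystone/<ver>/<method> by forwarding it untranslated."""

    @rest_utils.ajax()
    def post(self, request, ver, method):
        client = CLIENTS[ver]
        # No translation layer: the JSON body's keys become the kwargs of the
        # service API method named in the URL.
        return getattr(client, method)(request, **request.DATA)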

There is also a great example of using a pluggable/new feature in Horizon. Do 
you remember the volume types support patch? The patch was pending in Gerrit 
for a few months: first waiting for the Cinder support for volume types to go 
upstream, then waiting a few more weeks for review. I am not sure, but as far 
as I remember, the Horizon patch even missed a release milestone and was 
introduced in the next release.

If we have a transparent middleware, this will no longer be an issue. As long 
as someone has written the frontend modules (which should be easy to add and 
customize), and they install the required version of the service API, they 
will not need an updated Horizon to start using the feature. Maybe I am not the 
right person to give examples here, but how many of you have had some kind of 
Horizon customization locally merged/patched in your local distros/setups 
until the patch was pushed upstream?

I will say it again. Nova, Keystone, Cinder, Glance, etc. already have stable 
public APIs. Why do we want to add the translation middleware and introduce 
another level of REST API? This layer will often hide new features added to 
the service APIs and will delay their appearance in Horizon. That's simply not 
needed. I believe it is possible to just wrap the authentication in the 
middleware REST layer, but not to translate anything like RPC 
methods/parameters.


And one more example:

@rest_utils.ajax()
def put(self, request, id):
    """Update a single project.

    The POST data should be an application/json object containing the
    parameters to update: "name" (string), "description" (string),
    "domain_id" (string) and "enabled" (boolean, defaults to true).
    Additional, undefined parameters may also be provided, but you'll have
    to look deep into keystone to figure out what they might be.

    This method returns HTTP 204 (no content) on success.
    """
    project = api.keystone.tenant_get(request, id)
    kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)
    api.keystone.tenant_update(request, project, **kwargs)

Do we really need the lines:

project = api.keystone.tenant_get(request, id)
kwargs = _tenant_kwargs_from_DATA(request.DATA, enabled=None)

? Since we update the project on the client, it is obvious that we have 
already fetched the project data. So we can simply send:


POST /keystone/3.0/tenant_update

Content-Type: application/json

{"id": cached.id<http://cached.id>, "domain_id": cached.domain_id, "name": "new 
name", "description": "new description", "enabled": cached.enabled}

Fewer requests, faster application.




On Wed, Dec 10, 2014 at 8:39 PM, Thai Q Tran <tqt...@us.ibm.com> wrote:

I think we're arguing for the same thing, but maybe with a slightly different 
approach. I think we can both agree that a middle layer is required, whether we
intend to use it as a proxy or REST endpoints. Regardless of the approach, the 
client needs to relay what API it wants to invoke, and you can do that either 
via RPC or REST. I personally prefer the REST approach because it shields the 
client. Client just needs to know which URL to hit in order to invoke a certain 
API, and does not need to know the procedure name or parameters ordering. 
Having said all of that, I do believe we should keep it as thin as possible. I 
do like the idea of having separate classes for different API versions. What we 
have today is a thin REST layer that acts like a proxy. You hit a certain URL, 
and the middle layer forwards the API invocation. The only exception to this 
rule is support for batch deletions.

-----Tihomir Trifonov <t.trifo...@gmail.com> wrote: -----
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
From: Tihomir Trifonov <t.trifo...@gmail.com>
Date: 12/10/2014 03:04AM

Subject: Re: [openstack-dev] [horizon] REST and Django

Richard, thanks for the reply,


I agree that the given example is not real REST. But we already have REST 
APIs: Keystone, Nova, Cinder, Glance, Neutron, etc. So what do we plan to do 
here? Add a new REST layer to communicate with other REST APIs? Do we really 
need a Frontend-REST-REST architecture? My opinion is that we don't need 
another REST layer, as we currently are trying to move away from the Django 
layer, which is the same thing: another processing layer. Whether we call it a 
REST proxy or whatever, it doesn't need to be real REST, just an aggregation 
proxy that combines and forwards some requests while adding minimal processing 
overhead. What makes sense to me is to keep the authentication in this layer 
as it is now: push a cookie to the frontend, while the REST layer extracts the 
auth tokens from the session storage and prepares the auth context for the 
REST API request to the OpenStack services. This way we will not expose the 
tokens to the JS frontend, and will have strict control over the 
authentication. The frontend will just send data requests; they will be 
wrapped with the auth context and forwarded.
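
As a rough sketch of wrapping only the authentication (the view, URL shape, and 
the 'image' service lookup below are illustrative assumptions only):

import requests
from django.views import generic

from openstack_dashboard.api import base
from openstack_dashboard.api.rest import utils as rest_utils


class ImageProxy(generic.View):
    @rest_utils.ajax()
    def get(self, request, path):
        # The token never reaches the JS frontend; it is pulled from the
        # server-side session identified by the cookie.
        token = request.user.token.id
        endpoint = base.url_for(request, 'image')  # from the service catalog
        resp = requests.get('%s/%s' % (endpoint.rstrip('/'), path),
                            headers={'X-Auth-Token': token})
        return resp.json()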

Regarding the existing issues with versions in the API: for me the existing 
approach is wrong. All these fixes were made as workarounds. What should have 
been done is to create abstractions for each version and to use a separate 
class for each version. This was partially done for the keystoneclient in 
api/keystone.py, but not for the forms/views, where we still have if-else for 
versions. What I suggest here is to have different (concrete) views/forms for 
each version, and to use them according to the context. If the Keystone backend 
is v2.0, then in the frontend use a keystone2() object, otherwise use a 
keystone3() object. This of course needs some more coding, but is much cleaner 
in terms of customization and testing. For me the current hacks with 
'if keystone.version == 3.0' are wrong on many levels. And this can be solved 
now. The problem until now was that we had one frontend that had to be backed 
by different versions of backend components. Now we can have different 
frontends that map to a specific backend. That's how I understand the power of 
Angular with its views and directives. That's where I see the real benefit of 
using a full-featured frontend. Also imagine how much easier it will then be to 
deprecate a component version, compared to what we need to do now for the same.
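
For example, something along these lines (the class names, method bodies, and 
the VERSIONS lookup are only illustrative):

from openstack_dashboard import api


class KeystoneV2(object):
    version = 2

    def tenant_update(self, request, project, **kwargs):
        kwargs.pop('domain_id', None)  # v2 has no domains
        return api.keystone.tenant_update(request, project, **kwargs)


class KeystoneV3(KeystoneV2):
    version = 3

    def tenant_update(self, request, project, **kwargs):
        # v3 projects belong to domains, so domain_id passes straight through.
        return api.keystone.tenant_update(request, project, **kwargs)


def keystone_backend(request):
    """Pick the concrete class for the deployed Keystone version."""
    if api.keystone.VERSIONS.active >= 3:  # assumed existing version helper
        return KeystoneV3()
    return KeystoneV2()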

Otherwise we just rewrite the current Django middleware as another Django REST 
middleware and don't change anything; we don't fix the problems, we just move 
them to another place.

I still think that in Paris we talked about a new generation of the Dashboard, 
a different approach to building the frontend for OpenStack. What I heard 
there from users/operators of Horizon was that it was extremely hard to add 
customizations and new features to the Dashboard, as all of these needed to go 
through upstream changes and then wait until the next release cycle to land. Do 
we still want to address these concerns, and how? Please correct me if I got 
things wrong.


On Wed, Dec 10, 2014 at 11:56 AM, Richard Jones <r1chardj0...@gmail.com> wrote:
Sorry I didn't respond to this earlier today, I had intended to.

What you're describing isn't REST, and the principles of REST are what have 
been guiding the design of the new API so far. I see a lot of value in using 
REST approaches, mostly around clarity of the interface.

While the idea of a very thin proxy seemed like a great idea at one point, my 
conversations at the summit convinced me that there was value in both using the 
client interfaces present in the openstack_dashboard/api code base (since they 
abstract away many issues in the apis including across versions) and also value 
in us being able to clean up (for example, using "project_id" rather than 
"project" in the user API we've already implemented) and extend those 
interfaces (to allow batched operations).

We want to be careful about what we expose in Horizon to the JS clients through 
this API. That necessitates some amount of code in Horizon. About half of the 
current API for keystone represents that control (the other half is docstrings :)


     Richard


On Tue Dec 09 2014 at 9:37:47 PM Tihomir Trifonov <t.trifo...@gmail.com> wrote:
Sorry for the late reply, just a few thoughts on the matter.

IMO the REST middleware should be as thin as possible. And I mean thin in terms 
of processing - it should not do pre/post processing of the requests, but just 
unpack/pack. So here is an example:

instead of making AJAX calls that contain instructions:

POST --json --data {"action": "delete", "data": [ {"name": "item1"}, {"name": 
"item2"}, {"name": "item3" ]}

I think a better approach is just to pack/unpack batch commands, and leave 
execution to the frontend/backend and not middleware:


POST --json --data {"
​batch
":
​[
{​
"
​
action"
​ : "delete"​
,
​"payload": ​
{"name": "item1"}
​,
{​
"
​
action"
​ : "delete"​
,
​
"payload":
​
{"name": "item
​2
"}
​,
{​
"
​
action"
​ : "delete"​
,
​
"payload":
​
{"name": "item
​3
"}
​ ] ​
​
​
}

The idea is that the middleware should not know the actual data. It should 
ideally just unpack the data:

responses = []
for cmd in request.POST['batch']:
    responses.append(getattr(controller, cmd['action'])(**cmd['payload']))
return responses


And the frontend (JS) will just send batches of simple commands, and will 
receive a list of responses for each command in the batch. The error handling 
will be done in the frontend (JS) as well.

For the more complex example of 'put()' where we have dependent objects:

project = api.keystone.tenant_get(request, id)
kwargs = self._tenant_kwargs_from_DATA(request.DATA, enabled=None)
api.keystone.tenant_update(request, project, **kwargs)


In practice the project data should already be present in the frontend 
(assuming that we already loaded it to render the project form/view), so

POST --json --data {"
​batch
":
​[
{​
"
​
action"
​ : "tenant_update"​
,
​"payload": ​
{"project": js_project_object.id<http://js_project_object.id>, "name": "some 
name", "prop1": "some prop", "prop2": "other prop, etc."}
​
​ ] ​
​
​
}​

So in general we don't need to recreate the full state on each REST call if we 
make the frontend a full-featured application. This way the frontend will 
construct the object, hold the cached value, send just the needed requests as 
single ones or in batches, receive the response from the API backend, and 
render the results. The whole processing logic will live in the frontend (JS), 
while the middleware will just act as a proxy (un/packer). This way we will 
maintain the logic only in the frontend, and will not need to duplicate some of 
it in the middleware.




On Tue, Dec 2, 2014 at 4:45 PM, Adam Young <ayo...@redhat.com> wrote:
On 12/02/2014 12:39 AM, Richard Jones wrote:
On Mon Dec 01 2014 at 4:18:42 PM Thai Q Tran <tqt...@us.ibm.com> wrote:

I agree that keeping the API layer thin would be ideal. I should add that 
having discrete API calls would allow dynamic population of tables. However, I 
will make a case where it might be necessary to add additional APIs. Consider 
that you want to delete 3 items in a given table.

If you do this on the client side, you would need to perform: n * (1 API 
request + 1 AJAX request)
If you have some logic on the server side that batches delete actions: n * (1 
API request) + 1 AJAX request

Consider the following:
n = 1, client = 2 trips, server = 2 trips
n = 3, client = 6 trips, server = 4 trips
n = 10, client = 20 trips, server = 11 trips
n = 100, client = 200 trips, server 101 trips

As you can see, this does not scale very well.... something to consider...

This is not something Horizon can fix.  Horizon can make matters worse, but 
cannot make things better.

If you want to delete 3 users, Horizon still needs to make 3 distinct calls 
to Keystone.

To fix this, we need either batch calls or a standard way to do multiples of 
the same operation.

The unified API effort is the right place to drive this.







Yep, though in the above cases the client is still going to be hanging, waiting 
for those server-backend calls, with no feedback until it's all done. I would 
hope that the client-server call overhead is minimal, but I guess that's 
probably wishful thinking when in the land of random Internet users hitting 
some provider's Horizon :)

So yeah, having mulled it over myself I agree that it's useful to have batch 
operations implemented in the POST handler, the most common operation being 
DELETE.
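
Roughly something like this (the view, URL, and {"items": [...]} payload shape 
are illustrative only, not a final design):

from django.views import generic

from openstack_dashboard import api
from openstack_dashboard.api.rest import utils as rest_utils


class Users(generic.View):
    @rest_utils.ajax()
    def delete(self, request):
        """Delete several users in one browser round trip.

        Horizon still makes one Keystone call per id, but the client only
        pays for a single AJAX request.
        """
        for user_id in request.DATA.get('items', []):
            api.keystone.user_delete(request, user_id)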

Maybe one day we could transition to a batch call with user feedback using a 
websocket connection.


     Richard


From: Richard Jones <r1chardj0...@gmail.com>
To: "Tripp, Travis S" <travis.tr...@hp.com>, OpenStack List 
<openstack-dev@lists.openstack.org>
Date: 11/27/2014 05:38 PM
Subject: Re: [openstack-dev] [horizon] REST and Django

________________________________



On Fri Nov 28 2014 at 5:58:00 AM Tripp, Travis S <travis.tr...@hp.com> wrote:

Hi Richard,

You are right, we should put this out on the main ML, so I'm copying the thread 
out to there.  ML: FYI, this started when some impromptu IRC discussions about 
a specific patch led into an impromptu Google Hangout discussion with all the 
people on the thread below.

Thanks Travis!



As I mentioned in the review [1], Thai and I were mainly discussing the 
possible performance implications of network hops from the client to the 
Horizon server and whether or not any aggregation should occur server side.  In 
other words, some views require several APIs to be queried before any data can 
be displayed, and it would eliminate some extra network requests from client to 
server if some of the data was first collected on the server side across 
service APIs.  For example, the launch instance wizard will need to collect 
data from quite a few APIs before even the first step is displayed (I've listed 
those out in the blueprint [2]).

The flip side to that (as you also pointed out) is that if we keep the APIs 
fine-grained, then the wizard will be able to optimize in one place the calls 
for data as it is needed. For example, the first step may only need half of the 
API calls. It also could lead to perceived performance increases just from the 
wizard making calls for different data independently and displaying each piece 
as soon as it can.

Indeed, looking at the current launch wizard code it seems like you wouldn't 
need to load all that data for the wizard to be displayed, since only some 
subset of it would be necessary to display any given panel of the wizard.



I tend to lean towards your POV of starting with discrete API calls and 
letting the client optimize calls.  If there are performance problems or other 
reasons, then doing data aggregation on the server side could be considered at 
that point.

I'm glad to hear it. I'm a fan of optimising when necessary, and not beforehand 
:)



Of course if anybody is able to do some performance testing between the two 
approaches then that could affect the direction taken.

I would certainly like to see us take some measurements when performance issues 
pop up. Optimising without solid metrics is a bad idea :)


    Richard


[1] 
https://review.openstack.org/#/c/136676/8/openstack_dashboard/api/rest/urls.py
[2] https://blueprints.launchpad.net/horizon/+spec/launch-instance-redesign

-Travis

From: Richard Jones <r1chardj0...@gmail.com>
Date: Wednesday, November 26, 2014 at 11:55 PM
To: Travis Tripp <travis.tr...@hp.com>, Thai Q Tran/Silicon Valley/IBM 
<tqt...@us.ibm.com>, David Lyle <dkly...@gmail.com>, Maxime Vidori 
<maxime.vid...@enovance.com>, "Wroblewski, Szymon" 
<szymon.wroblew...@intel.com>, "Wood, Matthew David (HP Cloud - Horizon)" 
<matt.w...@hp.com>, "Chen, Shaoquan" <sean.ch...@hp.com>, "Farina, Matt (HP 
Cloud)" <matthew.far...@hp.com>, Cindy Lu/Silicon Valley/IBM <c...@us.ibm.com>, 
Justin Pomeroy/Rochester/IBM <jpom...@us.ibm.com>, Neill Cox 
<neill....@ingenious.com.au>
Subject: Re: REST and Django

I'm not sure whether this is the appropriate place to discuss this, or whether 
I should be posting to the list under [Horizon], but I think we need to have a 
clear idea of what goes in the REST API and what goes in the client (angular) 
code.

In my mind, the thinner the REST API the better. Indeed if we can get away with 
proxying requests through without touching any *client code, that would be 
great.

Coding additional logic into the REST API means that a developer would need to 
look in two places, instead of one, to determine what was happening for a 
particular call. If we keep it thin then the API presented to the client 
developer is very, very similar to the API presented by the services. Minimum 
surprise.

Your thoughts?


     Richard


On Wed Nov 26 2014 at 2:40:52 PM Richard Jones <r1chardj0...@gmail.com> wrote:

Thanks for the great summary, Travis.

I've completed the work I pledged this morning, so now the REST API change set 
has:

- no rest framework dependency
- AJAX scaffolding in openstack_dashboard.api.rest.utils
- code in openstack_dashboard/api/rest/
- renamed the API from "identity" to "keystone" to be consistent
- added a sample of testing, mostly for my own sanity to check things were 
working

https://review.openstack.org/#/c/136676


      Richard

On Wed Nov 26 2014 at 12:18:25 PM Tripp, Travis S <travis.tr...@hp.com> wrote:

Hello all,

Great discussion on the REST URLs today! I think that we are on track to come 
to a common REST API usage pattern.  To provide a quick summary:

We all agreed that going to a straight REST pattern rather than going through 
tables was a good idea. We discussed either using direct get / post in Django 
views, like what Max originally used [1][2] and Thai also started [3] with the 
identity table rework, or going with djangorestframework [5], like what Richard 
was prototyping with [4].

The main things we would use from Django REST Framework were built-in JSON 
serialization (avoiding boilerplate), better exception handling, and some 
request wrapping.  However, we all weren't sure about the need for a full new 
framework just for that. At the end of the conversation, we decided that it was 
a cleaner approach, but Richard would see if he could provide some utility code 
to do that much for us without requiring the full framework.  David voiced that 
he doesn't want us building out a whole framework on our own either.

So, Richard will do some investigation during his day today and get back to us. 
 Whatever the case, we'll get a patch into Horizon for the base dependency 
(framework or Richard's utilities) that both Thai's work and the launch 
instance work are dependent upon.  We'll build REST-style APIs using the same 
pattern.  We will likely put the REST APIs in 
horizon/openstack_dashboard/api/rest/.

[1] 
https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/keypair.py
[2] 
https://review.openstack.org/#/c/133178/1/openstack_dashboard/workflow/launch.py
[3] 
https://review.openstack.org/#/c/133767/8/openstack_dashboard/dashboards/identity/users/views.py
[4] 
https://review.openstack.org/#/c/136676/4/openstack_dashboard/rest_api/identity.py
[5] http://www.django-rest-framework.org/

Thanks,

Travis
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
