Re: [openstack-dev] [Fuel] Compatibility of fuel plugins and fuel versions

2016-03-11 Thread Patrick Petit
On 11 March 2016 at 15:45:57, Simon Pasquier (spasqu...@mirantis.com) wrote:
Thanks for kicking off the discussion!

On Thu, Mar 10, 2016 at 8:30 AM, Mike Scherbakov  
wrote:
Hi folks,
in order to decide whether we need to support example plugins, and whether we 
actually need them at all [1], I'd suggest discussing some more general questions about plugins first.

My thoughts:
1) It is not good that plugins created for Fuel 8 won't even install on Fuel 9. 
By default, we should assume that a plugin will work on a newer version of Fuel. 
However, for a proper user experience, I suggest creating a meta-field 
"validated_against", where the plugin developer would list the versions of Fuel 
the plugin has been tested with. Let's say it was tested against 7.0 and 8.0. If 
a user installs the plugin on Fuel 9, I'd suggest showing a warning about the 
risks and the fact that the plugin has not been tested against 9. We should not 
block installation on 9, though.

From a plugin developer's standpoint, this point doesn't worry me too much. 
It's fairly easy to hack the metadata.yaml file to support a newer release 
of Fuel, and I suspect that some users already do this.
And I think that it is good that plugin developers explicitly advertise which 
Fuel versions the plugin supports.
That being said, I get the need to have something more automatic for CI and QA 
purposes. What about having some kind of flag/option (in the Nailgun API?) that 
would allow the installation of a plugin even if it is marked as not compatible 
with the current release?
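For concreteness, here is a sketch of what this could look like in a plugin's 
metadata.yaml. The "validated_against" key is the field proposed in this thread, 
not part of today's plugin schema, and the other values are illustrative:

name: my-plugin
title: My Plugin
version: 1.0.0
# Today's hard gate: the Fuel releases the plugin declares support for.
fuel_version: ['7.0', '8.0']
# Proposed soft gate: the releases the plugin was actually tested against.
# Installing on, say, Fuel 9.0 would emit a warning instead of failing.
validated_against: ['7.0', '8.0']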

 

2) We need to keep the pluggable interface backward compatible for a few 
releases, so that a plugin developer can use pluggable interface version x, 
which was supported in Fuel 6.1. If we still support it, that would mean (see 
next point) compatibility of this plugin with 6.1, 7.0, 8.0 and 9.0. If we want 
to deprecate a pluggable interface version, we should announce it and follow the 
standard deprecation process.


+1 and more.
From my past experience, this is a major issue that complicates plugin 
maintenance. I understand that it is sometimes necessary to make breaking 
changes, but at the very least they should be advertised in advance and to a 
wide audience. Not all plugin developers monitor the Fuel reviews to track these 
changes...
 

3) A plugin's ability to work with multiple releases of Fuel (multi-release 
support). If the if/else clauses needed to support multiple releases are fairly 
minimal, say less than 10% of the LOC, I'd suggest supporting this, simply 
because it will be easier for plugin developers to maintain their plugin code 
(no code duplication, a single repo for multiple releases).

From my experience (and assuming that framework compatibility isn't broken), 
this is usually what happens. You need a few if clauses to deal with the 
differences between releases N and N+1 but this is manageable.
+1

Probably the single most important thing needed to be able to upgrade a deployed 
environment with a new plugin version!
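To illustrate the "few if clauses" approach, a plugin deployment task can branch 
on the target release inside its shell command. Everything below (the role name, 
the release marker file, the flags) is made up for the example:

# tasks.yaml (sketch): one task body serving two Fuel releases.
- role: ['my-plugin-role']
  stage: post_deployment
  type: shell
  parameters:
    timeout: 600
    cmd: |
      # hypothetical release marker; a real plugin would derive this from
      # whatever the target node actually exposes
      release=$(cat /etc/fuel_release 2>/dev/null)
      if [ "$release" = "8.0" ]; then
        ./configure.sh --with-new-layout   # handle the N vs N+1 difference
      else
        ./configure.sh
      fi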


 

Thoughts?

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-March/088211.html
--
Mike Scherbakov
#mihgen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing logs from Fuel Web UI and Nailgun

2016-03-11 Thread Patrick Petit

On 11 March 2016 at 12:51:40, Igor Kalnitsky (ikalnit...@mirantis.com) wrote:

Patrick, 

Sorry, but I meant another question. I thought that the LMA plugin has to 
be installed in some environment before we can start using it. Is that the 
case? If so, it means we can't use it for the master node until some 
environment is deployed. 
Right. This is the chicken and egg problem I mentioned earlier...

But this is not a “problem” specific to Fuel. My take on this is that ops 
management tooling (logging, monitoring) should be installed out-of-band before 
any OpenStack deployment. In fact, in real-world usage, we frequently get 
requests to have the monitoring and logging services of StackLight installed 
permanently across multiple environments. So one approach would be to make the 
StackLight backend services the first bits of software installed by Fuel (if not 
already there), then reconfigure Fuel to hook into those services, and only then 
enter the regular OpenStack provisioning mode.



On Fri, Mar 11, 2016 at 12:52 PM, Patrick Petit  wrote: 
> 
> On 11 March 2016 at 11:34:32, Igor Kalnitsky (ikalnit...@mirantis.com) 
> wrote: 
> 
> Hey Roman, 
> 
> Thank you for bringing this up. +1 from my side, especially taking 
> into account the patch where we tried to solve the logrotated logs problem 
> [1]. It's complex and unmaintainable, as is the existing 
> logview code in Nailgun. 
> 
> Patrick, Simon, 
> 
> Does the LMA plugin support logs from the master node? Or is it designed to 
> watch environment logs only? 
> 
> No, it’s not designed specifically for environment logs. It can be adapted to 
> any log format. 
> 
> You would just need to write a parser, as you would with Logstash when logs 
> are not standard. 
> 
> Patrick 
> 
> 
> 
> Thanks, 
> Igor 
> 
> 
> [1]: https://review.openstack.org/#/c/243240/ 
> 
> On Fri, Mar 11, 2016 at 11:53 AM, Patrick Petit  wrote: 
>> Fuelers, 
>> 
>> As Simon said, we already have a log centralisation solution for MOS 
>> delivered as Fuel plugins, known as the StackLight / LMA toolset. So 
>> objectively, there is no need to have log management in Nailgun anymore. 
>> To 
>> go one step further we suggested several times to have a StackLight agent 
>> installed on the Fuel master node to also collect and centralise those 
>> logs. 
>> There is a little bit of a chicken and egg problem to resolve but I think 
>> it 
>> is worth a try to have that nailed down in the roadmap for Fuel 10. 
>> Cheers 
>> - Patrick 
>> 
>> 
>> On 11 March 2016 at 10:07:28, Simon Pasquier (spasqu...@mirantis.com) 
>> wrote: 
>> 
>> Hello Roman, 
>> 
>> On Fri, Mar 11, 2016 at 9:57 AM, Roman Prykhodchenko  
>> wrote: 
>>> 
>>> Fuelers, 
>>> 
>>> I remember we’ve discussed this topic in the corridors before, but I’d like 
>>> to bring that discussion to a more official format. 
>>> 
>>> Let me state a few reasons to do this: 
>>> 
>>> - Log management code in Nailgun is overcomplicated 
>>> - Working with logs on large-scale deployments is barely possible given the 
>>> current representation 
>>> - Due to the overcomplexity and ineffectiveness of the code we keep getting 
>>> recurring bugs like [1], which eat tons of time to resolve. 
>>> - There are much better specialized tools, say Logstash [2], that can 
>>> deal 
>>> with logs much more effectively. 
>>> 
>>> 
>>> There may be more reasons, but I think even the ones already mentioned are 
>>> enough to consider the following proposal: 
>>> 
>>> - Remove Logs tab from Fuel Web UI 
>>> - Remove logs support from Nailgun 
>>> - Create a mechanism that allows configuring different log management 
>>> software, say Logstash, Loggly, etc. 
>>> 
>>> - Choose a default software to install and provide a plugin for it out of 
>>> the box 
>> 
>> 
>> This is what the LMA/StackLight plugins [1][2] are meant for. No need to 
>> develop anything new. 
>> 
>> And I'm +1 with the removal of log management from Fuel. As you said, it 
>> can't scale... 
>> 
>> [1] http://fuel-plugin-lma-collector.readthedocs.org/en/latest/ 
>> [2] http://fuel-plugin-elasticsearch-kibana.readthedocs.org/en/latest/ 
>> 
>> 
>>> 
>>> 
>>> 
>>> References 
>>> 1. https://bugs.launchpad.net/fuel/+bug/1553170 
>>> 2. https://www.elastic.co/products/logstash 
>>> 
>>> 
>>> - romcheg 
>>> 
>>> 
>>> 

Re: [openstack-dev] [Fuel] Removing logs from Fuel Web UI and Nailgun

2016-03-11 Thread Patrick Petit

On 11 March 2016 at 11:34:32, Igor Kalnitsky (ikalnit...@mirantis.com) wrote:

Hey Roman,

Thank you for bringing this up. +1 from my side, especially taking
into account the patch where we tried to solve the logrotated logs problem
[1]. It's complex and unmaintainable, as is the existing
logview code in Nailgun.

Patrick, Simon,

Does the LMA plugin support logs from the master node? Or is it designed to
watch environment logs only?
No, it’s not designed specifically for environment logs. It can be adapted to 
any log format.

You would just need to write a parser, as you would with Logstash when logs are 
not standard.

Patrick



Thanks,
Igor


[1]: https://review.openstack.org/#/c/243240/

On Fri, Mar 11, 2016 at 11:53 AM, Patrick Petit  wrote:
> Fuelers,
>
> As Simon said, we already have a log centralisation solution for MOS
> delivered as Fuel plugins, known as the StackLight / LMA toolset. So
> objectively, there is no need to have log management in Nailgun anymore. To
> go one step further we suggested several times to have a StackLight agent
> installed on the Fuel master node to also collect and centralise those logs.
> There is a little bit of a chicken and egg problem to resolve but I think it
> is worth a try to have that nailed down in the roadmap for Fuel 10.
> Cheers
> - Patrick
>
>
> On 11 March 2016 at 10:07:28, Simon Pasquier (spasqu...@mirantis.com) wrote:
>
> Hello Roman,
>
> On Fri, Mar 11, 2016 at 9:57 AM, Roman Prykhodchenko  wrote:
>>
>> Fuelers,
>>
>> I remember we’ve discussed this topic in the corridors before, but I’d like
>> to bring that discussion to a more official format.
>>
>> Let me state a few reasons to do this:
>>
>> - Log management code in Nailgun is overcomplicated
>> - Working with logs on large-scale deployments is barely possible given the
>> current representation
>> - Due to the overcomplexity and ineffectiveness of the code we keep getting
>> recurring bugs like [1], which eat tons of time to resolve.
>> - There are much better specialized tools, say Logstash [2], that can deal
>> with logs much more effectively.
>>
>>
>> There may be more reasons, but I think even the ones already mentioned are
>> enough to consider the following proposal:
>>
>> - Remove Logs tab from Fuel Web UI
>> - Remove logs support from Nailgun
>> - Create a mechanism that allows configuring different log management
>> software, say Logstash, Loggly, etc.
>>
>> - Choose a default software to install and provide a plugin for it out of
>> the box
>
>
> This is what the LMA/StackLight plugins [1][2] are meant for. No need to
> develop anything new.
>
> And I'm +1 with the removal of log management from Fuel. As you said, it
> can't scale...
>
> [1] http://fuel-plugin-lma-collector.readthedocs.org/en/latest/
> [2] http://fuel-plugin-elasticsearch-kibana.readthedocs.org/en/latest/
>
>
>>
>>
>>
>> References
>> 1. https://bugs.launchpad.net/fuel/+bug/1553170
>> 2. https://www.elastic.co/products/logstash
>>
>>
>> - romcheg
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Removing logs from Fuel Web UI and Nailgun

2016-03-11 Thread Patrick Petit
Fuelers,

As Simon said, we already have a log centralisation solution for MOS delivered 
as Fuel plugins, known as the StackLight / LMA toolset. So objectively, there 
is no need to have log management in Nailgun anymore. To go one step further we 
suggested several times to have a StackLight agent installed on the Fuel master 
node to also collect and centralise those logs. There is a little bit of a 
chicken and egg problem to resolve but I think it is worth a try to have that 
nailed down in the roadmap for Fuel 10.
Cheers
 - Patrick  
 
On 11 March 2016 at 10:07:28, Simon Pasquier (spasqu...@mirantis.com) wrote:

Hello Roman,

On Fri, Mar 11, 2016 at 9:57 AM, Roman Prykhodchenko  wrote:
Fuelers,

I remember we’ve discussed this topic in the corridors before, but I’d like to 
bring that discussion to a more official format.

Let me state a few reasons to do this:

- Log management code in Nailgun is overcomplicated
- Working with logs on large-scale deployments is barely possible given the 
current representation
- Due to the overcomplexity and ineffectiveness of the code we keep getting 
recurring bugs like [1], which eat tons of time to resolve.
- There are much better specialized tools, say Logstash [2], that can deal with 
logs much more effectively.


There may be more reasons, but I think even the ones already mentioned are 
enough to consider the following proposal:

- Remove Logs tab from Fuel Web UI
- Remove logs support from Nailgun
- Create a mechanism that allows configuring different log management software, 
say Logstash, Loggly, etc. 
- Choose a default software to install and provide a plugin for it out of the box

This is what the LMA/StackLight plugins [1][2] are meant for. No need to 
develop anything new.

And I'm +1 with the removal of log management from Fuel. As you said, it can't 
scale...

[1] http://fuel-plugin-lma-collector.readthedocs.org/en/latest/
[2] http://fuel-plugin-elasticsearch-kibana.readthedocs.org/en/latest/

 


References
1.  https://bugs.launchpad.net/fuel/+bug/1553170
2. https://www.elastic.co/products/logstash


- romcheg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Heka v ELK stack logistics

2016-01-15 Thread Patrick Petit
On 15 Jan 2016 at 14:43:39, Michał Jastrzębski (inc...@gmail.com) wrote:
Yeah that's true. We did all of the OpenStack services but we didn't 
implement the infra around it yet. I'd guess most services can log either 
to stdout or to a file, and both sources should be accessible by Heka. 
Also, I'd be surprised if Heka doesn't have a syslog driver? That 
should be one of the first:) Maybe worth writing one? I wanted an excuse 
to write some golang;) 
Well, writing a plugin in Go would require rebuilding Heka.

The beauty is that you don’t have to do that.

Just write a decoder plugin in Lua, as is already done for rsyslog…
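For illustration, a minimal Lua sandbox decoder could look like the sketch 
below. The log pattern and field names are assumptions made up for the example; 
Heka's actual rsyslog decoder builds an lpeg grammar from the rsyslog template 
string instead:

-- Sketch of a Heka Lua sandbox decoder for syslog-shaped lines.
local patt = "^<(%d+)>(%a%a%a [%d ]%d %d%d:%d%d:%d%d) (%S+) ([^:]+): (.*)$"

function process_message()
    local pri, ts, host, prog, text = string.match(read_message("Payload"), patt)
    if not pri then return -1 end  -- not syslog-shaped: report a decode failure

    -- ts is left unparsed here; a real decoder would convert it into the
    -- message Timestamp (nanoseconds since epoch).
    inject_message({
        Type     = "syslog",
        Payload  = text,
        Severity = tonumber(pri) % 8,  -- syslog PRI is facility * 8 + severity
        Hostname = host,
        Fields   = { programname = prog },
    })
    return 0
end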

Cheers,

Patrick



Regards, 
Michal 

On 15 January 2016 at 06:42, Eric LEMOINE  wrote: 
> On Fri, Jan 15, 2016 at 11:57 AM, Michal Rostecki 
>  wrote: 
>> On 01/15/2016 11:14 AM, Simon Pasquier wrote: 
>>> 
>>> My 2 cents on RabbitMQ logging... 
>>> 
>>> On Fri, Jan 15, 2016 at 8:39 AM, Michal Rostecki wrote: 
>>> 
>>> I'd suggest to check the similar options in RabbitMQ and other 
>>> non-OpenStack components. 
>>> 
>>> 
>>> AFAICT RabbitMQ can't log to syslog anyway. But you have the option to make 
>>> RabbitMQ log to stdout [1]. 
>>> BR, 
>>> Simon. 
>>> [1] http://www.superpumpup.com/docker-rabbitmq-stdout 
>>> 
>> 
>> That's OK for Heka/Mesos/k8s approach. 
>> 
>> Just for the curiosity, 
>> @inc0: so we don't receive any logs from RabbitMQ in the current rsyslog 
>> approach? 
> 
> 
> /var/lib/docker/volumes/rsyslog/_data is where logs are stored, and 
> you'll see that there is no file for RabbitMQ. This is related to 
> RabbitMQ not logging to syslog. So our impression is that Kolla 
> doesn't collect RabbitMQ logs at all today. I guess this should be 
> fixed. 
> 
> __ 
> OpenStack Development Mailing List (not for usage questions) 
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Introduction of Heka in Kolla

2016-01-13 Thread Patrick Petit

On 12 Jan 2016 at 13:24:26, Kwasniewska, Alicja (alicja.kwasniew...@intel.com) 
wrote:

Unfortunately I do not have any experience working with or testing Heka, so it’s 
hard for me to compare its performance vs Logstash's. However, I’ve read that 
Heka possesses a lot of advantages over Logstash in this area.



But which version of Logstash did you test? One guy from the Logstash community 
said that: “The next release of logstash (1.2.0 is in beta) has a 3.5x 
improvement in event throughput. For numbers: on my workstation at home (6 vcpu 
on virtualbox, host OS windows, 8 GB ram, host cpu is FX-8150) - with logstash 
1.1.13, I can process roughly 31,000 events/sec parsing apache logs. With 
logstash 1.2.0.beta1, I can process 102,000 events/sec.”



You also said that Heka is a unified data processing tool, but do we need this? Heka 
seems to address stream processing needs, while Logstash focuses mainly on 
processing logs. We want to create a central logging service, and Logstash was 
created especially for it and seems to work well for this application.


I think you are touching on a key point here. Our thinking is that Heka does at 
least as well as Logstash at collecting and parsing logs, with a smaller 
footprint and higher performance, but it can do more, as you noticed. This is 
exactly why we came to use that tool in the first place and like it, hence the 
motivation for proposing it here. It’s not a handicap but an asset, because you 
can choose to do more if you want to and so avoid the sprawl of tools doing 
different things. Consider the prospect of transforming logs matching a 
particular pattern into metric messages (e.g. average HTTP response time, HTTP 
5xx error counts, error rates, ...) that you could send to a time-series 
database like InfluxDB… Wouldn't that be cool? I am not saying that you couldn't 
do it with Logstash, but with Heka it can be distributed across the hosts and is 
much easier to implement because of the stream-processing design. That’s a big plus.
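As a sketch of that idea (the field name, message type, and output shape are 
assumptions for the example, not taken from actual StackLight code), a Heka 
sandbox filter deriving an HTTP 5xx counter from parsed access logs could look 
like this:

-- Sketch of a Heka sandbox filter turning parsed access logs into a metric.
-- Assumes an upstream decoder populated Fields[http_status].
local count_5xx = 0

function process_message()
    local status = tonumber(read_message("Fields[http_status]")) or 0
    if status >= 500 and status < 600 then
        count_5xx = count_5xx + 1
    end
    return 0
end

-- Called on every ticker interval: emit the current count and reset the window.
function timer_event(ns)
    inject_message({
        Type    = "metric",
        Payload = tostring(count_5xx),
        Fields  = { name = "http_5xx_count", value = count_5xx },
    })
    count_5xx = 0
end

An output plugin (with a suitable encoder) would then forward such metric 
messages to InfluxDB or any other time-series backend.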

One obvious thing is that Logstash is better known, more popular, and better 
tested. Maybe it has some performance disadvantages, but at least we know what 
we can expect from it. Also, it has more pre-built plugins and a lot of usage 
examples, while Heka doesn’t have many of them yet and is nowhere near the range 
of plugins and integrations provided by Logstash.

I tend to disagree with that. You may think that Heka has fewer plugins 
out-of-the-box, but in practice it has all the plugins needed to cover a variety 
of use cases, I would say even beyond Logstash, thanks to Heka’s approach of 
decoupling protocol (input and output) plugins from 
deserialisation/serialisation (decoder/encoder) plugins. You can slice and dice 
combinations of those plugins, and if you need to support a new message format 
it suffices to implement a decoder or an encoder in Lua, using any combination 
of protocols including HTTP, TCP, UDP, AMQP, Kafka, statsd, … What more would 
you need?





Regarding adding plugins, I’ve read that in order to add Go plugins, the 
binary has to be recompiled, which is a little bit frustrating (static linking: 
to wire in new plugins, you have to recompile). On the other hand, Lua plugins 
do not require it, but the question is whether Lua plugins are sufficient. Or 
maybe adding Go plugins is not so bad?


We are using Heka to address a much broader spectrum of use cases and 
functionalities (some very sophisticated), but as it is not the subject of 
this conversation I will not expand on that; suffice it to say we never found 
the need to write a plugin in Go. Lua and its associated libraries have always 
been sufficient to address our needs.  

You also said that you didn’t test Heka with Docker, right? But do you have 
any experience in setting up Heka in a Docker container? I saw that Heka 
0.8.0 implemented new Docker features (including Dockerfiles to generate 
Heka Docker containers for both development and deployment), but did you test 
them? If you didn’t, we can’t be sure whether there are any issues with them.



Moreover, you will have to write your own Dockerfile for Heka that inherits from 
the Kolla base image (as we discussed during the last meeting, we would like to 
have our own images); you won’t be able to inherit from ianneub/heka:0.10 as 
specified in the link that you sent: 
http://www.ianneubert.com/wp/2015/03/03/how-to-use-heka-docker-and-tutum/.



There are also some issues with the DockerInput module, which you want to use. 
For example, splitters are not available in DockerInput 
(https://github.com/mozilla-services/heka/issues/1643). I can’t say whether that 
will affect us, but we also don’t know which new issues may arise during the 
first tests, as none of us has ever tried Heka in and with Docker.



I am not attached to any specific solution; I’m just not sure whether Heka 
won’t surprise us with something hard to solve, configure, etc.

Well I guess that’s a fact of life we (especially in I

Re: [openstack-dev] [Fuel][Plugins] Plugin deployment questions

2015-10-21 Thread Patrick Petit
On 21 Oct 2015 at 12:21:57, Igor Kalnitsky (ikalnit...@mirantis.com) wrote:
We can make bidirectional dependencies, just like our deployment tasks do.  


Just to make sure we are on the same page…
We don’t want to be in a situation where a role needs to know about its 
reverse dependencies.
Dependencies are always expressed in one direction, right?

And, btw, standalone-* roles may have a restriction that at least one  
node is required. I think it's OK for the plugin case, since if you  
don't want to use it, you just disable it.  
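To make sure we mean the same thing, here is a purely illustrative sketch of 
what such a one-directional requirement could look like in role metadata. This 
is not the current Nailgun schema, just Dmitriy's proposal from the quoted 
thread below ("compute: requires: controller > 1") spelled out:

# Illustrative only: the dependent role declares what it needs, so the
# controller role never has to know about its dependants.
compute:
  name: "Compute"
  requires:
    - role: controller
      min: 1
      message: "Compute requires at least one deployed controller."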

On Wed, Oct 21, 2015 at 1:15 PM, Dmitriy Shulyak  wrote: 
 
> But it will lead to situations where certain plugins, like  
> standalone_rabbitmq/standalone_mysql, will need to overwrite settings on *all*  
> dependent roles, and that might be a problem. Because how will a plugin  
> developer be able to know what those roles are?  
>  
> On Wed, Oct 21, 2015 at 1:01 PM, Igor Kalnitsky   
> wrote:  
>>  
>> Hi Dmitry,  
>>  
>> > Insert required metadata into roles that rely on other roles; for  
>> > compute it will be something like:  
>> >  
>> > compute:  
>> > requires: controller > 1  
>>  
>> Yeah, that's actually what I was thinking about when I wrote:  
>>  
>> > Or should we improve it somehow so it would work for some nodes  
>> > and be ignored for others?  
>>  
>> So I'm +1 for extending our meta information with such dependencies.  
>>  
>> Sincerely,  
>> Igor  
>>  
>> On Wed, Oct 21, 2015 at 12:51 PM, Dmitriy Shulyak   
>> wrote:  
>> > Hi,  
>> >  
>> >> Can we ignore the problem above and remove this limitation? Or should  
>> >> we improve it somehow so it would work for some nodes and be  
>> >> ignored for others?  
>> >  
>> > I think that this validation needs to be accomplished in a different way:  
>> > we don't need 1 controller for the sake of 1 controller;  
>> > 1 controller is a dependency of compute/cinder/other roles. So from my pov  
>> > there are at least 2 options:  
>> >  
>> > 1. Use task dependencies, and prevent deployment in case some tasks  
>> > rely on the controller.  
>> > But the implementation might be complicated.  
>> >  
>> > 2. Insert required metadata into roles that rely on other roles; for  
>> > compute it will be something like:  
>> > compute:  
>> > requires: controller > 1  
>> > We actually have a DSL for declaring such things; we just need to specify  
>> > these requirements from the other side.  
>> >  
>> > But in the 2nd case we will still need to use tricks, like the one provided  
>> > by Matt, for certain plugins. So maybe we should spend the time and do the 1st.  
>> >  
>> >  
>> > __ 
>> >  
>> > OpenStack Development Mailing List (not for usage questions)  
>> > Unsubscribe:  
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
>> >  
>>  
>> __  
>> OpenStack Development Mailing List (not for usage questions)  
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
>  
>  
>  
> __  
> OpenStack Development Mailing List (not for usage questions)  
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
>  

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Plugin deployment questions

2015-10-20 Thread Patrick Petit
Hi Matthew,

That’s useful.
Thanks

On 20 Oct 2015 at 11:22:07, Matthew Mosesohn (mmoses...@mirantis.com) wrote:
Hi Patrick,

During the 7.0 development cycle we made a lot of enhancements to what 
environment characteristics can be modified through a plugin. One item that 
plugins cannot directly modify is the default Fuel roles and their metadata. 
That having been said, there is an open-ended post_install.sh script you can 
use for your plugin to "hack" this value. I know of one project that currently 
disables the requirement for the controller role in a deployment. This may be 
helpful for testing a given standalone role that doesn't depend on a controller.

Here's a link to the script: http://paste.openstack.org/show/476821/
Note that this doesn't reflect the "enabled" status of a plugin. It will set the 
controller minimum count to 0 for all environments. That won't break them; it 
just removes the restriction.

Best Regards,
Matthew Mosesohn

On Mon, Oct 19, 2015 at 3:29 PM, Dmitry Mescheryakov 
 wrote:
Hello folks,

I second Patrick's idea. In our case we would like to install a standalone 
RabbitMQ cluster with the Fuel reference architecture to perform destructive 
tests on it. The requirement to install a controller is an excessive burden in 
that case.

Thanks,

Dmitry

2015-10-19 13:44 GMT+03:00 Patrick Petit :
Hi There,

There are situations where we’d like to deploy only Fuel plugins in an 
environment.
That’s typically the case with the Elasticsearch and InfluxDB plugins of the LMA 
tools. Currently it’s not possible because you need to have at least one 
controller. What exactly imposes that limitation? How hard would it be to have 
it removed?

Thanks
Patrick

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugins] Plugin deployment questions

2015-10-19 Thread Patrick Petit
Hi There,

There are situations where we’d like to deploy only Fuel plugins in an 
environment.
That’s typically the case with the Elasticsearch and InfluxDB plugins of the LMA 
tools. Currently it’s not possible because you need to have at least one 
controller. What exactly imposes that limitation? How hard would it be to have 
it removed?

Thanks
Patrick
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Feedback

2015-07-29 Thread Patrick Petit
On 29 Jul 2015 at 14:41:48, Sheena Gregson (sgreg...@mirantis.com) wrote:
Hey Sergii –

 

I don’t know if I agree with the statement that it’s bad practice to mix core 
and plugin functionality.  From a user standpoint, if I’m trying to deploy 
something like Contrail, I would like to see all of my Networking configuration 
options together (including the Contrail plugin) so that I can make an 
intelligent selection in the context of networking.

 

Agreed


When plugins are not related to a specific space, I personally as a user would 
expect to see a generic “Plugins” grouping in the Settings tab to reduce 
sub-group proliferation (I probably don’t need a sub-group for every plugin).

 

I know that in conversations with Patrick (cc’d for input) he has mentioned 
wanting to have the plugins define the space they should be displayed in, as 
well, including spaces where core component settings are made.

 

Absolutely. I think the plugin paradigm should be considered more of an 
implementation artefact than a logical grouping of functionality. I think that 
what we need is a mechanism by which plugins are free to make that logical 
grouping of settings in a way that is meaningful and consistent from an 
end-user standpoint.


I agree that name validation could probably be improved – the names right now 
correspond either to the plugin name or to the name of the section that existed 
in the previous version.  This initial iteration breaks down subgroups but does 
not change any of the section naming conventions or do anything else to make 
the Settings space more manageable.

 

Sheena

 

From: Sergii Golovatiuk [mailto:sgolovat...@mirantis.com]
Sent: Wednesday, July 29, 2015 5:24 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Fuel][Plugins] Feedback

 

Sheena, I still have concerns regarding #3. I am sending an attachment showing 
how it's implemented. Firstly, it's bad practice to mix core and plugin 
functionality. Also, we do not validate names. When there are several plugins 
it's very hard to find all of them.

I am giving a sketch of how it should be, IMO.



--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

 

On Tue, Jul 28, 2015 at 6:25 PM, Sheena Gregson  wrote:

Hey Sergii –

 

This is excellent feedback, thank you for taking the time to provide your 
thoughts.

 

#1 I agree that the documentation lag is challenging – I’m not sure how to best 
address this.  We could potentially prioritize updates to the Plugin SDK for 
soon-to-be-released features ahead of the standard release notes and user guide 
updates to ensure that plugin developers have access to this information 
earlier?  A number of the docs team members will be getting together in late 
August to discuss how to improve documentation, I will add this as a topic if 
we don’t feel there is good resolution on the mailing list.

+Alexander/Evgeny to cc for their input

 

#3 Settings tab is getting a facelift in 7.0 and there are now subgroups in the 
tab which should make it significantly easier for a user to find plugin 
settings.  Each plugin will create a new sub-group in the Settings tab, like 
Access (and others) in the screenshot below.

 



 

I don’t have any insight on the GitHub issues, so I will wait for others to 
weigh in on your concerns there.

 

Sheena

 

From: Sergii Golovatiuk [mailto:sgolovat...@mirantis.com]
Sent: Tuesday, July 28, 2015 9:51 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [Fuel][Plugins] Feedback

 

Hi,

I have started digging into plugins recently. There are many positive things, 
though I would like to point out some problem areas.

1. Documentation

a. It doesn't include the features of 7.0. There are many outstanding features, 
and I needed to ping the developers to ask how these features work. It means 
that it's almost impossible to develop plugins for upcoming releases. An 
external developer needs to wait for documentation, which creates a lag between 
the Fuel release and the plugin release.

b. In [1] the statement about 'For Ubuntu 12.04.2 LTS' should be extended to 
14.04. Also, we don't need to include the patch version, as 12.04.2 is 
equivalent to 12.04.

c. There is no documentation on how to install fpb from the GitHub master 
branch. It's very useful for developers who want to use the latest version. We 
should add something.

2. The GitHub repository [2] is messed up

a. We are making the same mistake of putting everything into one basket. There 
should be 2 repositories: one for examples and one for fpb. What's the point of 
keeping fpb in a subdirectory and the examples on top? This breaks a couple of 
things.

b. I cannot install fpb with a simple

pip install git+https://

Instead I am forced to do

git clone https://

cd fuel-plugins

pip install .

 

c. There are no tags; I can see only stable/6.0. 

d. There are no tests to improve code quality: pep8, flake8, code coverage. 

e. The repository doesn't follow community standards.
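Regarding 2.b: if the only blocker is that setup.py lives in a subdirectory, 
recent pip releases can install straight from a subdirectory of a Git repo. The 
egg and subdirectory names below are assumptions about the repo layout:

pip install "git+https://<repo-url>#egg=fuel-plugin-builder&subdirectory=fuel_plugin_builder"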

 

3. S

Re: [openstack-dev] [Fuel][Plugins] Feedback

2015-07-28 Thread Patrick Petit
Hi

Additional comments inside.
Thanks
Patrick

On 28 Jul 2015 at 18:33:34, Sheena Gregson (sgreg...@mirantis.com) wrote:
Hey Sergii –

 

This is excellent feedback, thank you for taking the time to provide your 
thoughts.

 

#1 I agree that the documentation lag is challenging – I’m not sure how to best 
address this.  We could potentially prioritize updates to the Plugin SDK for 
soon-to-be-released features ahead of the standard release notes and user guide 
updates to ensure that plugin developers have access to this information 
earlier?  A number of the docs team members will be getting together in late 
August to discuss how to improve documentation, I will add this as a topic if 
we don’t feel there is good resolution on the mailing list.

+Alexander/Evgeny to cc for their input

+1. Yes, that’s a huge impediment! I'm struggling with the same issue myself, 
since we are supposed to release plugins at about the same time as the new 
Plugin SDK is released in Fuel. 

It’s also true that the plugin documentation lacks information about how to 
build fpb.
 

#3 Settings tab is getting a facelift in 7.0 and there are now subgroups in the 
tab which should make it significantly easier for a user to find plugin 
settings.  Each plugin will create a new sub-group in the Settings tab, like 
Access (and others) in the screenshot below.

That’s certainly a very significant improvement compared to the previous 
version. But, as already stated in a retrospective meeting, going forward we’ll 
need an even more flexible way to link plugins with settings, in that settings 
could be made common to multiple plugins. I am thinking of a more logical 
grouping (by feature category) independent of the underlying plugin breakdown. 
For example, we could have an LMA monitoring settings category common to all 
LMA-related plugins. This should be less confusing for users and avoid 
duplicated settings. Hope this makes sense…


 



 

I don’t have any insight on the GitHub issues, so I will wait for others to 
weigh in on your concerns there.

 

Sheena

 

From: Sergii Golovatiuk [mailto:sgolovat...@mirantis.com]
Sent: Tuesday, July 28, 2015 9:51 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [Fuel][Plugins] Feedback

 

Hi,

I have started digging into plugins recently. There are many positive things, 
though I would like to point out some problem areas.

1. Documentation

a. It doesn't include the features of 7.0. There are many outstanding features, 
and I needed to ping the developers to ask how these features work. It means 
that it's almost impossible to develop plugins for upcoming releases. An 
external developer needs to wait for documentation, which creates a lag between 
the Fuel release and the plugin release.

b. In [1] the statement about 'For Ubuntu 12.04.2 LTS' should be extended to 
14.04. Also, we don't need to include the patch version, as 12.04.2 is 
equivalent to 12.04.

c. There is no documentation on how to install fpb from the GitHub master 
branch. It's very useful for developers who want to use the latest version. We 
should add something.

2. The GitHub repository [2] is messed up

a. We are making the same mistake of putting everything into one basket. There 
should be 2 repositories: one for examples and one for fpb. What's the point of 
keeping fpb in a subdirectory and the examples on top? This breaks a couple of 
things.

b. I cannot install fpb with a simple

pip install git+https://

Instead I am forced to do

git clone https://

cd fuel-plugins

pip install .

 

c. There are no tags; I can see only stable/6.0.

d. There are no tests to improve code quality: pep8, flake8, code coverage.

e. The repository doesn't follow community standards.

 

3. Settings tab

When a plugin is installed, it's very hard to find. In the Settings tab it's 
somewhere between A and Z.

How is the user supposed to find it? There should be a separator between core 
features and plugins. The user must be able to easily find, configure, and 
enable/disable them.

P.S. I am asking everyone to add their own concerns so we'll be able to make a 
plan for how to address them.

Thank you in advance.


[1] https://wiki.openstack.org/wiki/Fuel/Plugins#Installation
[2] https://github.com/stackforge/fuel-plugins
--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Plugins on separate launchpad projects

2015-07-27 Thread Patrick Petit

On 26 Jul 2015 at 20:25:43, Sheena Gregson (sgreg...@mirantis.com) wrote:

Patrick –

 

Are you suggesting one project for all Fuel plugins, or individual projects for 
each plugin?  I believe it is the former, which I prefer – but I wanted to 
check.

Sheena,

I meant one individual project for each plugin, or one individual project for 
several plugins when it makes sense to regroup them under one umbrella, like the 
LMA toolchain, as stated earlier.


 

Sheena

 

From: Patrick Petit [mailto:ppe...@mirantis.com]
Sent: Saturday, July 25, 2015 12:25 PM
To: Igor Kalnitsky; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Fuel][Plugins] Plugins on separate launchpad 
projects

 

Igor, thanks for your comments. Please see below.

Patrick

On 25 Jul 2015 at 13:08:24, Igor Kalnitsky (ikalnit...@mirantis.com) wrote:

Hello Patrick,

Thank you for raising this topic. I think that it'd be nice to create
separate projects for Fuel plugins if it wasn't done yet.

Yes, there is a Launchpad project for Fuel plugins, although it’s currently not 
possible to create blueprints in that project.

But that’s not what I meant. I meant dedicated projects for each Fuel plugin, or 
for a group of Fuel plugins if desired.

For example, a project for the LMA series of Fuel plugins.

Fuel
plugins have different release cycles and do not share a core group. So
it makes a lot of sense to me to create separate projects.

Correct. We are on the same page.




Otherwise, I have no idea how to work with LP's milestones since again
- plugins have different release cycles.

Thanks,
Igor

On Fri, Jul 24, 2015 at 8:27 PM, Patrick Petit  wrote:
> Hi There,
>
> I have been thinking that it would make a lot of sense to have separate
> launchpad projects for Fuel plugins.
>
> The main benefits I foresee….
>
> - The Fuel project will be less of a bottleneck for bug triage, and it should
> be more effective to have team members do the bug triage. After all, they are
> best placed to make the required judgement call.
> - A feature can be spread across multiple plugins, as is the case with the
> LMA toolchain, and so it would be better to have a separate project to
> regroup them.
> - It is counter-intuitive and awkward to create blueprints for plugins in the
> Fuel project itself, in addition to making it cluttered with stuff that is
> unrelated to Fuel.
>
> Can you please tell me what’s your thinking about this?
> Thanks
> Patrick
>
> --
> Patrick Petit
> Mirantis France
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Plugins on separate launchpad projects

2015-07-25 Thread Patrick Petit
Igor, thanks for your comments. Please see below.
Patrick
On 25 Jul 2015 at 13:08:24, Igor Kalnitsky (ikalnit...@mirantis.com) wrote:

Hello Patrick,

Thank you for raising this topic. I think that it'd be nice to create
separate projects for Fuel plugins if it wasn't done yet.
Yes, there is a Launchpad project for Fuel plugins, although it’s currently not 
possible to create blueprints in that project.

But that’s not what I meant. I meant dedicated projects for each Fuel plugin, or 
for a group of Fuel plugins if desired.

For example, a project for the LMA series of Fuel plugins.

Fuel
plugins have different release cycles and do not share a core group. So
it makes a lot of sense to me to create separate projects.

Correct. We are on the same page.

Otherwise, I have no idea how to work with LP's milestones since again
- plugins have different release cycles.

Thanks,
Igor

On Fri, Jul 24, 2015 at 8:27 PM, Patrick Petit  wrote:
> Hi There,
>
> I have been thinking that it would make a lot of sense to have separate
> launchpad projects for Fuel plugins.
>
> The main benefits I foresee….
>
> - The Fuel project will be less of a bottleneck for bug triage, and it should
> be more effective to have team members do the bug triage. After all, they are
> best placed to make the required judgement call.
> - A feature can be spread across multiple plugins, as is the case with the
> LMA toolchain, and so it would be better to have a separate project to
> regroup them.
> - It is counter-intuitive and awkward to create blueprints for plugins in the
> Fuel project itself, in addition to making it cluttered with stuff that is
> unrelated to Fuel.
>
> Can you please tell me what’s your thinking about this?
> Thanks
> Patrick
>
> --
> Patrick Petit
> Mirantis France
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugins] Plugins on separate launchpad projects

2015-07-24 Thread Patrick Petit
Hi There,

I have been thinking that it would make a lot of sense to have separate 
launchpad projects for Fuel plugins.

The main benefits I foresee….

- The Fuel project will be less of a bottleneck for bug triage, and it should be 
more effective to have team members do the bug triage. After all, they are best 
placed to make the required judgement call.
- A feature can be spread across multiple plugins, as is the case with the LMA 
toolchain, and so it would be better to have a separate project to regroup 
them. 
- It is counter-intuitive and awkward to create blueprints for plugins in the 
Fuel project itself, in addition to making it cluttered with stuff that is 
unrelated to Fuel.

Can you please tell me what’s your thinking about this?
Thanks
Patrick

-- 
Patrick Petit
Mirantis France

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer][Gnocchi] question on integration with time-series databases

2015-06-18 Thread Patrick Petit
On 18 Jun 2015 at 04:44:18, gordon chung (g...@live.ca) wrote:


On 17/06/2015 12:57 PM, Chris Dent wrote: 
> On Tue, 16 Jun 2015, Simon Pasquier wrote: 
> 
>> I'm still struggling to see how these optimizations would be implemented 
>> since the current Gnocchi design has separate backends for indexing and 
>> storage which means that datapoints (id + timestamp + value) and metric 
>> metadata (tenant_id, instance_id, server group, ...) are stored into 
>> different places. I'd be interested to hear from the Gnocchi team how 
>> this 
>> is going to be tackled. For instance, does it imply modifications or 
>> extensions to the existing Gnocchi API? 
> 
> I think there's three things to keep in mind: 
> 
> a) The plan is to figure it out and make it work well, "production 
> ready" even. That will require some iteration. At the moment the 
> overlap between InfluxDB python driver maturity and someone-to-do-the- 
> work is not great. When it is I'm sure the full variety of 
> optimizations will be explored, with actual working code and test 
> cases. 

just curious but what bugs are we waiting on for the influxdb driver? 
i'm hoping Paul Dix has prioritised them? 

> 
> b) Gnocchi has separate _interfaces_ for indexing and storage. This 
> is not the same as having separate _backends_[1]. If it turns out 
> that the right way to get InfluxDB working is for it to be the 
> same backend to the two separate interfaces then that will be 
> okay. 

i'll straddle the middle line here and say i think we need to wait for a 
viable driver before we can start making the appropriate adjustments. 
having said that, i think once we have the gaps resolved, i think we 
should make all effort to conform to the rules of the db (whether it is 
influxdb, kairosdb, opentsdb). we faced a similar issue with the 
previous data storage design where we generically applied a design for 
one driver across all drivers and that led to terribly inefficient 
design everywhere. 
I'd like to emphasise that using the same backend for both the data-point 
time-series and the identification of the resources linked to those time-series 
is not only the right way, it is the mandatory way. The most salient reason is 
that we must not force other applications consuming time-series produced through 
Gnocchi to use anything other than the time-series backend's native API. 
Operators who want to use InfluxDB, OpenTSDB or something else as their 
time-series backend do it for a reason. The choice of an API that best suits 
their needs is key to that decision. It is also a question of effectiveness. 
There are plenty of applications out there, like Grafana, that plug into those 
time-series databases out-of-the-box. I don’t think we want to force those 
applications to use the Gnocchi API instead.

 - Patrick



> 
> c) The future is unknown and the present is not made of stone. There 
> could be modifications and extensions to the existing stuff. We 
> don't know. Yet. 
> 
> [1] Yes the existing implementations use SQL for the indexer and 
> various subclasses of the carbonara abstraction as two backends 
> for the two interfaces. That's an accident of history not a design 
> requirement. 

-- 
gord 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-25 Thread Patrick Petit

I think this fits into something that I want for optimizing
os-collect-config as well (our in-instance Heat-aware agent). That is
a way for us to wait for notification of changes to Metadata without
polling.
Interesting... If I understand correctly, that's kind of a replacement for 
cfn-hup... Do you have a blueprint pointer or something more 
specific? While I see the benefits of it, in-instance notifications 
are not really what we are looking for. We are looking for a 
notification service that exposes an API whereby listeners can 
register for Heat notifications. AWS Alarming / CloudFormation has 
that. Why not Ceilometer / Heat? That would be extremely valuable for 
those who build PaaS-like solutions on top of Heat. To say it bluntly, 
I'd like to suggest we explore ways to integrate Heat with Marconi.


Yeah, I am trying to do a PoC of this now. I'll let you know how
it goes.

I am trying to implement the following:

heat_template_version: 2013-05-23
parameters:
  key_name:
type: String
  flavor:
type: String
default: m1.small
  image:
type: String
default: fedora-19-i386-heat-cfntools
resources:
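  # Note: OS::Marconi::QueueServer and OS::Heat::OrderedConfig below are
  # proof-of-concept resource types for this experiment, not released Heat
  # resources.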
  config_server:
type: OS::Marconi::QueueServer
properties:
  image: {get_param: image}
  flavor: {get_param: flavor}
  key_name: {get_param: key_name}

  configA:
type: OS::Heat::OrderedConfig
properties:
  marconi_server: {get_attr: [config_server, url]}
  hosted_on: {get_resource: serv1}
  script: |
#!/bin/bash
logger "1. hello from marconi"

  configB:
type: OS::Heat::OrderedConfig
properties:
  marconi_server: {get_attr: [config_server, url]}
  hosted_on: {get_resource: serv1}
  depends_on: {get_resource: configA}
  script: |
#!/bin/bash
logger "2. hello from marconi"

  serv1:
type: OS::Nova::Server
properties:
  image: {get_param: image}
  flavor: {get_param: flavor}
  key_name: {get_param: key_name}
  user_data: |
#!/bin/sh
# poll /v1/queues/{hostname}/messages
# apply config
# post a response message with any outputs
# delete request message

The idea here is that each "OS::Heat::OrderedConfig" does the
following:
- creates the Marconi queues <server>.{request,response}
- sends the config to the request queue
- the VM then posts responses to the response queue

- each vm/server has a "config" queue (named based on "hosted_on")
- we can get attributes/outputs from the vm
- we can depend_on other config resources
- you can monitor progress externally via marconi
- you can have properties other than "script" (puppet/chef/..)
- you could/should have a Marconi server running in your infrastructure
  (I am doing it in a VM for ease of testing).

Is this the kind of thing you are after?


No.
The kind of thing I am talking about is this:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-notificationconfiguration.html
http://docs.aws.amazon.com/AutoScaling/latest/APIReference/API_DescribeAutoScalingNotificationTypes.html

This, I figured, could be supported in Heat with the use of Marconi's 
notifications.
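For illustration, such a resource might look as follows in HOT. The resource 
type and its properties are invented for this sketch (nothing like it exists in 
Heat today), mirroring the AWS NotificationConfiguration shape with a Marconi 
queue in place of an SNS topic:

  events_queue:
    type: OS::Marconi::Queue        # hypothetical queue resource
    properties:
      name: my-stack-events

  notifications:
    type: OS::Heat::NotificationConfiguration   # hypothetical type
    properties:
      queue: {get_resource: events_queue}
      notification_types:
        - autoscaling:EC2_INSTANCE_LAUNCH
        - autoscaling:EC2_INSTANCE_TERMINATE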


Thanks
Patrick


-Angus



The second one would be to support a new type of AWS::IAM::User (perhaps 
OS::IAM::User) resource whereby one could pass Keystone credentials to 
be able to specify Ceilometer alarms based on an application's specific 
metrics (a.k.a. KPIs).


It would likely be OS::Keystone::User, and AFAIK this is on the list of
de-AWS-ification things.
Great! As I said, it's a blocker for us and we would really like to see 
it accepted for Icehouse.


I hope this is making sense to you and can serve as a basis for further
discussions and refinements.


Really great feedback Patrick, thanks again for sharing!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Patrick Petit
Cloud Computing Principal Architect, Innovative Products
Bull, Architect of an Open World TM
Tél : +33 (0)4 76 29 70 31
Mobile : +33 (0)6 85 22 06 39
http://www.bull.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Patrick Petit
Cloud Computing Principal Architect, Innovative Products
Bull, Architect of an Open World TM
Tél : +33 (0)4 76 29 70 31
Mobile : +33 (0)6 85 22 06 39
http://www.bull.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [OpenStackfr] Organisation, mailing-list...

2013-10-25 Thread Patrick Petit


FYI,
Patrick

 Original Message 
Subject:[OpenStackfr] Organisation, mailing-list...
Date:   Fri, 25 Oct 2013 10:20:26 +0200
From:   Jonathan Le Lous 
To: 



Hello everyone,

The French OpenStack community is moving forward with new, more technical 
meetups, informal get-togethers, articles... All of it is organized in a 
community-driven way :-)

To take part, just a quick reminder: we now have several mailing lists for 
discussing OpenStack in France and/or helping organize the community:

- The France mailing list, to stay informed: 
https://wiki.openstack.org/wiki/OpenStackUsersGroup#France

- The Organisation mailing list, for those who want to take concrete action 
around running the community: http://listes.openstack.fr/listinfo/organisation

Spread the word!

See you soon!
Freely,
Jonathan

Jonathan Le Lous

Board member of April (http://www.april.org)

fr.linkedin.com/in/jonathanlelous/

Blog: http://blog.itnservice.net/




Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-24 Thread Patrick Petit

Sorry, I clicked the 'send' button too quickly.

On 10/24/13 11:54 AM, Patrick Petit wrote:

Hi Clint,
Thank you! I have few replies/questions in-line.
Cheers,
Patrick
On 10/23/13 8:36 PM, Clint Byrum wrote:

Excerpts from Patrick Petit's message of 2013-10-23 10:58:22 -0700:

Dear Steve and All,

If I may add to this already busy thread to share our experience with
using Heat in large and complex software deployments.


Thanks for sharing Patrick, I have a few replies in-line.


I work on a project which precisely provides additional value at the
articulation point between resource orchestration automation and
configuration management. We rely on Heat and chef-solo respectively for
these base management functions. On top of this, we have developed an
event-driven workflow to manage the life-cycles of complex software
stacks whose primary purpose is to support middleware components as
opposed to end-user apps. Our use cases are peculiar in the sense that
software setup (install, config, contextualization) is not a one-time
operation but a continuous thing that can happen at any time in the
life-span of a stack. Users can deploy (and undeploy) apps long after
the stack is created. Auto-scaling may also result in an asynchronous
apps deployment. More about this later. The framework we have designed
works well for us. It clearly refers to a PaaS-like environment which I
understand is not the topic of the HOT software configuration
proposal(s), and that's absolutely fine with us. However, the question
for us is whether the separation of software config from resources would
make our life easier or not. I think the answer is definitely yes, but
on the condition that the DSL extension preserves almost everything from
the expressiveness of the resource element. In practice, I think that a
strict separation between resource and component will be hard to achieve
because we'll always need a little bit of application-specific content
in the resources. Take for example the case of the SecurityGroups. The
ports open in a SecurityGroup are application specific.

Components can only be made up of the things that are common to all users
of said component. Also components would, if I understand the concept
correctly, just be for things that are at the sub-resource level.
Security groups and open ports would be across multiple resources, and
thus would be separately specified from your app's component (though it
might be useful to allow components to export static values so that the
port list can be referred to along with the app component).

Okay, got it. If that's the case then that would work.
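
To make that concrete, here is a minimal sketch of what exporting a port
list from a component could look like. This assumes a hypothetical
component DSL: the 'components' section, the 'exports' key, and the
'get_export' intrinsic are all illustrative, not an agreed syntax:

    components:
      myapp:
        type: chef            # hypothetical component type
        exports:
          ports: [8080, 8443] # static values exported by the component

    resources:
      app_security_group:
        type: AWS::EC2::SecurityGroup
        properties:
          GroupDescription: ports opened for myapp
          SecurityGroupIngress:
            - IpProtocol: tcp
              FromPort: { get_export: [myapp, ports, 0] }  # hypothetical
              ToPort: { get_export: [myapp, ports, 0] }
              CidrIp: 0.0.0.0/0

This keeps the application-specific port list in one place (the
component) while the security group remains a plain resource.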



Then, designing a Chef or Puppet component type may be harder than it
looks at first glance. Speaking of our use cases, we still need a little
bit of scripting in the instance's user-data block to set up a working
chef-solo environment. For example, we run librarian-chef prior to
starting chef-solo to resolve the cookbook dependencies. A cookbook can
present itself as a downloadable tarball, but it's not always the case. A
chef component type would have to support getting a cookbook from a
public or private git repo (maybe subversion), handle situations where
there is one cookbook per repo or multiple cookbooks per repo, let the
user choose a particular branch or label, provide ssh keys if it's a
private repo, and so forth. We support all of these scenarios and so we
can provide more detailed requirements if needed.

Correct me if I'm wrong though, all of those scenarios are just
variations on standard inputs into chef. So the chef component really
just has to allow a way to feed data to chef.


That's correct. It boils down to correctly specifying all the constraints
that apply to deploying a cookbook in an instance from its component
description.
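
As an illustration of those constraints, a chef component description
would need to carry at least the following inputs. This is only a sketch
under the same hypothetical component DSL as above; every key name here
is made up for the example:

    components:
      slurm_head:
        type: chef                 # hypothetical component type
        cookbook:
          source: git@git.example.com:ops/slurm-cookbooks.git  # private repo
          revision: stable/1.2     # branch, tag, or label to check out
          path: cookbooks/slurm    # several cookbooks may live in one repo
          ssh_key: { get_param: deploy_key }  # only needed for private repos
        run_list: [ 'recipe[slurm::head]' ]
        attributes:
          slurm:
            controller: true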



I am not sure adding component relations like the 'depends-on' would
really help us since it is the job of config management to handle
software dependencies. Also, it doesn't address the issue of circular
dependencies. Circular dependencies occur in complex software stack
deployments. Example: when we set up a Slurm virtual cluster, both the
head node and compute nodes depend on one another to complete their
configuration, and so they would wait for each other indefinitely if we
were to rely on the 'depends-on'. In addition, I think it's critical to
distinguish between configuration parameters which are known ahead of
time, like a db name or user name and password, versus contextualization
parameters which are known after the fact, generally when the instance is
created. Typically those contextualization parameters are IP addresses,
but not only. The fact that packages x,y,z have been properly installed
and services a,b,c successfully started is contextualization information
(a.k.a. facts) which may be indicative that other components can move on
to the next setup stage.


The system converges toward the desirable
end-state through running idempotent recipes. This is our approach. The
first configuration phase handles parametrization, which in general
brings an instance to the CREATE_COMPLETE state. A second phase follows to
handle contextualization at the stack level. As a matter of fact, a new
contextualization should be triggered every time an instance enters or
leaves the CREATE_COMPLETE state, which may happen at any time with
auto-scaling. In that phase, circular dependencies can be resolved
because all contextualization data can be compiled globally. Notice that
Heat doesn't provide a purpose-built resource or service like Chef's
data-bag for the storage and retrieval of metadata. This is a gap which IMO
should be addressed in the proposal. Currently, we use a kludge, which is
to create a fake AWS::AutoScaling::LaunchConfiguration resource to store
contextualization data in the metadata section of that resource.
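
For illustration, the kludge looks roughly like this in a CFN-style
template. The resource name and the metadata keys below are ours for the
example; the property values are dummies since no scaling group ever
launches anything from this configuration:

    ContextStore:
      Type: AWS::AutoScaling::LaunchConfiguration
      Properties:
        # Mandatory properties; placeholders only, never used to launch.
        ImageId: dummy-image
        InstanceType: m1.small
      Metadata:
        # Stack-wide contextualization data, polled by the in-instance
        # agents of the other resources.
        slurm:
          head_node_ip: 192.0.2.10
          compute_nodes: [ 192.0.2.11, 192.0.2.12 ]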


That is what we use in TripleO as well:

http://git.openstack.org/cgit/openstack/tripleo-heat-templates/tree/overcloud-source.yaml#n143

We are not doing any updating of that from within our servers though.
That is an interesting further use of the capability.
Right. The problem with that is... that it's a kludge ;-) It obscures the
readability of the code because the resource is used for an unintended purpose.

Aside from the HOT software configuration proposal(s), there are two
critical enhancements in Heat that would make software life-cycle
management much easier. In fact, they are actual blockers for us.

The first one would be to support asynchronous notifications when an
instance is created or deleted as a result of an auto-scaling decision.
As stated earlier, contextualization needs to apply in a stack every
time an instance enters or leaves the CREATE_COMPLETE state. I am not
referring to a Ceilometer notification but a Heat notification that can
be consumed by a Heat client.


I think this fits into something that I want for optimizing
os-collect-config as well (our in-instance Heat-aware agent). That is
a way for us to wait for notification of changes to Metadata without
polling.
Interesting... If I understand correctly, that's kind of a replacement for
cfn-hup... Do you have a blueprint pointer or something more specific?
While I see the benefits of it, in-instance notifications are not really
what we are looking for. We are looking for a notification service that
exposes an API whereby listeners can register for Heat notifications.
AWS Alarming / CloudFormation has that. Why not Ceilometer / Heat? That
would be extremely valuable for those who build PaaS-like solutions
above Heat. To say it bluntly, I'd like to suggest we explore ways to
integrate Heat with Marconi.
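
To give an idea of what we are after, here is a sketch of the kind of
message a listener could pull from a Marconi queue if Heat published its
life-cycle transitions. The event type and field names are purely
illustrative; nothing like this exists today:

    event_type: heat.resource.state_change   # hypothetical
    stack_id: <stack uuid>
    resource_name: compute_node_3
    group: slurm_compute_group
    new_state: CREATE_COMPLETE
    timestamp: 2013-10-24T10:12:00Z

A PaaS-like controller would subscribe to such a queue and re-run the
contextualization phase whenever an instance enters or leaves
CREATE_COMPLETE.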



The second one would be to support a new type of AWS::IAM::User (perhaps
OS::IAM::User) resource whereby one could pass Keystone credentials to
be able to specify Ceilometer alarms based on application-specific
metrics (a.k.a. KPIs).


It would likely be OS::Keystone::User, and AFAIK this is on the list of
de-AWS-ification things.
Great! As I said, it's a blocker for us and we would really like to see it
accepted for Icehouse.
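
In template form, what we have in mind is along these lines. The
OS::Keystone::User resource does not exist yet (that is the ask); the
alarm side uses the OS::Ceilometer::Alarm resource, and the meter name is
an application-specific example of ours:

    resources:
      kpi_user:
        type: OS::Keystone::User        # hypothetical resource
        properties:
          name: kpi-publisher

      kpi_alarm:
        type: OS::Ceilometer::Alarm
        properties:
          meter_name: myapp.jobs.queued  # application-specific KPI
          statistic: avg
          period: 60
          evaluation_periods: 3
          threshold: 100
          comparison_operator: gt
          alarm_actions:
            - { get_attr: [ scale_up_policy, alarm_url ] }

The instances would use the kpi_user credentials to publish the
myapp.jobs.queued samples to Ceilometer.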



I hope this is making sense to you and can serve as a basis for further
discussions and refinements.


Really great feedback Patrick, thanks again for sharing!



Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-24 Thread Patrick Petit

Hello Stan,
Please see comments inline.
Cheers,
Patrick
On 10/23/13 8:33 PM, Stan Lagun wrote:

Hi Patrick,

Thank you for such a great post! This is very close to the vision I've
tried to propose earlier on the software orchestration thread, and I'm glad
other people care about the same issues. However, the problem
with PaaS-like approaches is that they currently sit at a little
bit higher abstraction layer than Heat is intended to be. Typical Heat
users are more DevOps people than those who enjoy
PaaS-related solutions. Going in that direction would require some major
paradigm shift for Heat, which I think is unnecessary.
Okay. But don't get me wrong. I am not militating for embarking
PaaS-like capabilities into Heat. Far from it. There are two basic
reasons for that. There are too many ways of approaching the PaaS
endeavor, and that would kill innovation for those who are trying to
build value on top of OpenStack/Heat like ourselves. Even though we are
DevOps, the intent is that our users don't have to be, since we provide
them with built-in middleware stacks covering some verticals
(high-performance computing related) that power users can leverage
out-of-the-box to deploy their own apps. So, I guess what I intended to
say is: let's try to keep it lean. Do not over-engineer this thing with
nuts and bolts all over the place, because Heat is and will be
increasingly used in completely unexpected ways.


I believe there is a place in the OpenStack software-orchestration
ecosystem for layers that would sit on top of Heat and provide more
high-level services for software composition and dependency management.
Heat is not aimed to be software-everything. I would suggest you
take a look at the Murano project as it is very, very close to what you
want to achieve, and like every open-source project it needs community
contributions. And I believe that it is the place in the OpenStack
ecosystem where your experience would be most valuable and appreciated,
as well as your contributions.
Thank you for the invitation! We also welcome you to work with us on the
XLcloud project, which is also an open-source Apache v2 project. Java-based
though. Nobody is perfect ;-). More seriously, we are thinking of moving
the code to github and applying for incubation, eventually making the
OpenStack community bigger and richer by bringing in the Java
community :-)


The code
http://gitorious.ow2.org/xlcloud
A beginning of user documentation can be found here:
https://129.184.11.121:8443/display/XGM/XLcloud+Guides+and+Manuals+Home




On Wed, Oct 23, 2013 at 9:58 PM, Patrick Petit wrote:


Dear Steve and All,

If I may add to this already busy thread to share our
experience with using Heat in large and complex software deployments.

I work on a project which precisely provides additional value at
the articulation point between resource orchestration automation
and configuration management. We rely on Heat and chef-solo
respectively for these base management functions. On top of this,
we have developed an event-driven workflow to manage the
life-cycles of complex software stacks whose primary purpose is to
support middleware components as opposed to end-user apps. Our use
cases are peculiar in the sense that software setup (install,
config, contextualization) is not a one-time operation but a
continuous thing that can happen at any time in the life-span of a stack.
Users can deploy (and undeploy) apps long after the stack is
created. Auto-scaling may also result in an asynchronous apps
deployment. More about this later. The framework we have designed
works well for us. It clearly refers to a PaaS-like environment
which I understand is not the topic of the HOT software
configuration proposal(s) and that's absolutely fine with us.
However, the question for us is whether the separation of software
config from resources would make our life easier or not. I think
the answer is definitely yes, but on the condition that the DSL
extension preserves almost everything from the expressiveness of
the resource element. In practice, I think that a strict
separation between resource and component will be hard to achieve
because we'll always need a little bit of application-specific content
in the resources. Take for example the case of the SecurityGroups.
The ports open in a SecurityGroup are application specific.

Then, designing a Chef or Puppet component type may be harder than
it looks at first glance. Speaking of our use cases we still need
a little bit of scripting in the instance's user-data block to
setup a working chef-solo environment. For example, we run
librarian-chef prior to starting chef-solo to resolve the cookbook
dependencies. A cookbook can present itself as a downloadable
tarball but

Re: [openstack-dev] [Heat] HOT Software configuration proposal

2013-10-23 Thread Patrick Petit
There are two critical enhancements in Heat that would make software
life-cycle management much easier. In fact, they are actual blockers for us.


The first one would be to support asynchronous notifications when an 
instance is created or deleted as a result of an auto-scaling decision. 
As stated earlier, contextualization needs to apply in a stack every 
time an instance enters or leaves the CREATE_COMPLETE state. I am not
referring to a Ceilometer notification but a Heat notification that can 
be consumed by a Heat client.


The second one would be to support a new type of AWS::IAM::User (perhaps 
OS::IAM::User) resource whereby one could pass Keystone credentials to 
be able to specify Ceilometer alarms based on application-specific
metrics (a.k.a. KPIs).


I hope this is making sense to you and can serve as a basis for further 
discussions and refinements.


Cheers,
Patrick

On 10/16/13 12:48 AM, Steve Baker wrote:
I've just written some proposals to address Heat's HOT software 
configuration needs, and I'd like to use this thread to get some feedback:

https://wiki.openstack.org/wiki/Heat/Blueprints/hot-software-config
https://wiki.openstack.org/wiki/Heat/Blueprints/native-tools-bootstrap-config

Please read the proposals and reply to the list with any comments or 
suggestions.


We can spend some time discussing software configuration at tomorrow's 
Heat meeting, but I fully expect we'll still be in the discussion 
phase at Hong Kong.


cheers




Re: [openstack-dev] [Climate] Questions and comments

2013-10-09 Thread Patrick Petit

On 10/9/13 6:53 AM, Mike Spreitzer wrote:
Yes, that helps.  Please, guys, do not interpret my questions as 
hostility, I really am just trying to understand.  I think there is 
some overlap between your concerns and mine, and I hope we can work 
together.
No problem at all. We don't see a sign of hostility at all. Potential
collaboration and understanding is really how we perceive your questions...


Sticking to the physical reservations for the moment, let me ask for a 
little more explicit detail.  In your outline below, late in the game
you write "the actual reservation is performed by the lease manager 
plugin".  Is that the point in time when something (the lease manager 
plugin, in fact) decides which hosts will be used to satisfy the 
reservation?
Yes. The reservation service should return only a Pcloud uuid that is 
empty. The description of host capabilities and extra-specs is only 
defined as metadata of the Pcloud at this point.
Or is that decided up-front when the reservation is made?  I do not 
understand how the lease manager plugin can make this decision on its 
own, isn't the nova scheduler also deciding how to use hosts?  Why 
isn't there a problem due to two independent allocators making 
allocations of the same resources (the system's hosts)?
The way we are designing it excludes race conditions between the Nova
scheduler and the lease manager plugin for host reservations, because the
lease manager plugin will use a private pool of hosts for reservation
(the reservation pool) that is not shared with the Nova scheduler. In our
view, this is not a convenience design artifact but a purpose. It is
because what we'd like to achieve really is energy-efficiency management
based on a reservation backlog, and possibly dynamic management of host
resources between the reservation pool and the multi-tenant pool. A
Climate scheduler filter in Nova will do the triage, filtering out those
hosts that belong to the reservation pool and hosts that are reserved in
an active lease. Another (longer-term) goal behind this (actually the
primary justification for the reservation pool) is that the lease
manager plugin could turn machines off to save electricity when the
reservation backlog allows it, and consequently turn them back on
when a lease kicks in if that's needed. We anticipate that the resource
management algorithms / heuristics behind that behavior are non-trivial,
but we believe it would be hardly achievable without a reservation
backlog and some form of capacity management capabilities left open to
the provider. In particular, things become much trickier when it comes
to deciding what to do with the reserved hosts when a lease ends. We
foresee a few options:


1) Forcibly kill the instances running on reserved hosts and move them 
back to the reservation pool for the next lease to come
2) Keep the instances running on the reserved hosts and move them to an 
intermediary "recycling pool" until all the instances die, at which point
the hosts that are released from duty can return to the
reservation pool. Cases 1 and 2 could optionally be augmented by a grace
period
3) Keep the instances running on the reserved hosts and move them to the 
multi-tenant pool. Then, it'll be up to the operator to repopulate the 
reservation pool using free hosts. That would require administrative tasks
like disabling hosts, instance migrations, ... in other words certainly
a pain if not fully automated.


So, you noticed that all this relies very much on manipulating host
aggregates, metadata and filtering behind the scenes. That's one way of
implementing the whole-host-reservation feature based on the tools we
have at our disposal today. Would a substantial refactoring of Nova and
the scheduler be a better way to go? Is it worth it? We don't
know. We anyway have zero visibility on that.
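
To make the pool mechanics a bit more tangible, here is a sketch of the
aggregate layout we have in mind. The aggregate names and metadata keys
are ours for the example, not an agreed convention:

    # Hosts are partitioned between two host aggregates.
    aggregates:
      - name: climate-reservation-pool
        metadata: { climate:pool: reserved }  # hidden from normal scheduling
        hosts: [ node-01, node-02 ]
      - name: multi-tenant-pool
        metadata: { climate:pool: shared }
        hosts: [ node-03, node-04 ]

The Climate scheduler filter in Nova would reject any host whose
aggregate carries the reserved marker, unless the request comes with the
lease that owns it.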


HTH,
Patrick


Thanks,
Mike

Patrick Petit  wrote on 10/07/2013 07:02:36 AM:

> Hi Mike,
>
> There are actually more facets to this. Sorry if it's a little
> confusing :-( Climate's original blueprint
> https://wiki.openstack.org/wiki/Blueprint-nova-planned-resource-reservation-api
> was about physical host reservation only. The typical use case
> being: "I want to reserve x number of hosts that match the
> capabilities expressed in the reservation request". The lease is
> populated with reservations which at this point are only capacity
> descriptors. The reservation becomes active only when the lease
> starts at a specified time and for a specified duration. The lease
> manager plugin in charge of the physical reservation has a planning
> of reservations that allows Climate to grant a lease only if the
> requested capacity is available at that time. Once the lease becomes
> active, the user can request instances to be created on the reserved
> hosts using a lease

Re: [openstack-dev] [Climate] Questions and comments

2013-10-07 Thread Patrick Petit
… future, or focused on
the immediate future?  If a bag of resources (including their backing
capacities) is reserved for a period that starts more than a little
while in the future, what is done with that backing capacity in the
meantime?


I see use cases for immediate future reservations; do you have use 
cases for more distant reservations?


Thanks,
Mike




Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-13 Thread Patrick Petit

Hi Dina,
Sounds great! Speaking on behalf of Francois, feel free to proceed with
the points below. I don't think he would have issues with that. We'll close
the loop when he returns. BTW, did you get a chance to take a look at 
Haizea's design and implementation?

Thanks
Patrick
On 8/13/13 3:08 PM, Dina Belova wrote:


Patrick, we are really glad we've found the way to deal with both use 
cases.



As for your patches, those under review and those already merged, we
are thinking about the following actions to commit:



1) Oslo was merged, but it is a little bit of an old variant (with setup and
version modules that are not really used now because of the new per-project
approach). So we (Mirantis) can update it as a first step.


2) We need to implement a comfortable-to-use DB layer to allow the use of
different DB types (SQL and NoSQL as well), so that's the second step.
Here we'll also create new abstractions like lease and physical or
virtual reservations (I think we can implement it before the end of
August).



3) After that we'll have the opportunity to modify Francois' patches
for the physical hosts reservation so that they become part of our new
common vision.



Thank you.



On Tue, Aug 13, 2013 at 4:23 PM, Patrick Petit wrote:


Hi Nikolay,
Please see comments inline.
Thanks
Patrick

On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:


Hi, again!


Patrick, I'll try to explain why we believe in some base
actions like instance starting/deleting in Climate. We are
thinking about the following workflow (that will be quite
comfortable and user friendly, and now we have more than one
customer who really wants it):


1) User goes to the OpenStack dashboard and asks Heat to reserve
several stacks.


2) Heat goes to the Climate and creates all needed leases. Also
Heat reserves all resources for these stacks.


3) When the time comes, the user goes to the OpenStack cloud and here we
think he wants to see already working stacks (ideal version) or
(at least) already started ones. If not, the user will have to go to the
Dashboard and wake up all the stacks he or she reserved. This
means several actions that may be done for the user
automatically, because they will need to be done no matter
what the aim for these stacks is - if the user reserves them, he /
she needs them.


We understand, that there are situations when these actions may
be done by some other system (like some hypothetical Jenkins).
But if we speak about users, this will be useful. We also
understand that this default way of behavior should be
implemented in some kind of long-term life cycle management
system (which is not Heat), but we have none in OpenStack
now. Because the best way to implement it is to use Convection,
which is only a proposal now...


That's why we think that for behavior like "the user just
reserves resources and then does anything he / she wants to",
physical leases are the better variant, where the user may reserve several
nodes and use them in different ways. For virtual reservations
it will be better to start / delete them as the default way (for
something unusual Heat may be used and modified).


Okay. So let's bootstrap it this way then. There will be two
different ways the reservation service will deal with reservations
depending on whether it's physical or virtual. All things being
equal, the future will tell how things settle. We will focus on the
physical host reservation side of things. I think having this
contradictory debate helped us understand each other's use cases
and the requirements that the initial design has to cope with.
Francois, who already submitted a bunch of code for review, will not
return from vacation until the end of August. So things on our
side are a little on standby until he returns, but it might
help if you could take a look at it. I suggest you start with your
vision and we will iterate from there. Is that okay with you?




Do you think that this workflow is useful too, and if so can you
propose another implementation variant for it?


Thank you.




On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit wrote:

On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:

Hello, Patrick!

We have several reasons to think that for the virtual
resources this possibility is interesting. If we speak about
physical resources, the user may use them in different ways,
that's why it is impossible to include base actions with
them in the reservation service. But speaking about virtual
reservations, let's imagine
the user wants to reserve a virtual machine. He knows
everything about it - its parameters, flavor and 

Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-13 Thread Patrick Petit

Hi Nikolay,
Please see comments inline.
Thanks
Patrick
On 8/12/13 5:28 PM, Nikolay Starodubtsev wrote:


Hi, again!


Partick, I’ll try to explain why do we belive in some base actions 
like instance starting/deleting in Climate. We are thinking about the 
following workflow (that will be quite comfortable and user friendly, 
and now we have more than one customer who really want it):



1) User goes to the OpenStack dashboard and asks Heat to reserve 
several stacks.



2) Heat goes to the Climate and creates all needed leases. Also Heat 
reserves all resources for these stacks.



3) When the time comes, the user goes to the OpenStack cloud and here we think
he wants to see already working stacks (ideal version) or (at least)
already started ones. If not, the user will have to go to the Dashboard and wake
up all the stacks he or she reserved. This means several actions that
may be done for the user automatically, because they will need to
be done no matter what the aim for these stacks is - if the user reserves
them, he / she needs them.



We understand, that there are situations when these actions may be 
done by some other system (like some hypothetical Jenkins). But if we 
speak about users, this will be useful. We also understand that this 
default way of behavior should be implemented in some kind of long-term
life cycle management system (which is not Heat), but we have
none in OpenStack now. Because the best way to implement it is to
use Convection, which is only a proposal now...



That’s why we think that for the behavior like “user just reserves 
resources and then does anything he / she wants to” physical leases 
are better variant, when user may reserve several nodes and use it in 
different ways. For the virtual reservations it will be better to 
start / delete them as a default way (for something unusual Heat may 
be used and modified).


Okay. So let's bootstrap it this way then. There will be two different
ways the reservation service will deal with reservations depending on
whether it's physical or virtual. All things being equal, the future will
tell how things settle. We will focus on the physical host reservation
side of things. I think having this contradictory debate helped us
understand each other's use cases and the requirements that the initial
design has to cope with. Francois, who already submitted a bunch of code
for review, will not return from vacation until the end of August. So
things on our side are a little on standby until he returns, but it
might help if you could take a look at it. I suggest you start with your
vision and we will iterate from there. Is that okay with you?




Do you think that this workflow is useful too, and if so can you
propose another implementation variant for it?



Thank you.




On Mon, Aug 12, 2013 at 1:55 PM, Patrick Petit wrote:


On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:

Hello, Patrick!

We have several reasons to think that for the virtual resources
this possibility is interesting. If we speak about physical
resources, the user may use them in different ways, that's why it
is impossible to include base actions with them in the
reservation service. But speaking about virtual reservations,
let's imagine the user wants to reserve a virtual machine. He knows
everything about it - its parameters, flavor and time to be
leased for. Really, in this case the user wants to have an already
working (or at least starting to work) reserved virtual machine,
and it would be great to include this opportunity in the
reservation service.
We are thinking about base actions for the virtual reservations
that will be supported by Climate, like boot/delete for instance,
create/delete for volume and create/delete for the stacks. The
same will be with volumes, IPs, etc. As for more complicated
behaviour, it may be implemented in Heat. This will make
reservations simpler to use for the end users.

Don't you think so?

Well yes and no. It really depends upon what you put behind
those lease actions. The view I am trying to sustain is separation
of duties to keep the service simple, ubiquitous and
non-prescriptive of a certain kind of usage pattern. In other words,
keep Climate for reservation of capacity (physical or virtual),
Heat for orchestration, and so forth. Consider for example the
case of reservation as a non-technical act, but rather as a
business enabler for wholesale activities. You don't need, and
probably don't want, to start or stop any resource there. I do not
deny that there are cases where it is desirable, but then how
reservations are used and composed together at the end of the day
mainly depends on exogenous factors which couldn't be anticipated
because they are driven by the business.

And so, rather than coupling reservations with wired

Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-12 Thread Patrick Petit

On 8/9/13 9:06 PM, Scott Devoid wrote:

Hi Nikolay and Patrick, thanks for your replies.

Virtual vs. Physical Resources
Ok, now I realize what you meant by "virtual resources," e.g.
instances, volumes, networks... resources provided by existing
OpenStack schedulers. In this case "physical resources" are actually
more "removed" since there are no interfaces to them in the user-level
OpenStack APIs. If you make a physical reservation on "this rack of
machines right here", how do you supply this reservation information
to nova-scheduler? Probably via scheduler hints + an availability zone
or host aggregates. At which point you're really defining an instance
reservation that includes explicit scheduler hints. Am I missing
something?


Hi Scott!
No, you don't. At least, that's how I see things working for host
reservation. In fact, it is already partially addressed in Havana with
https://wiki.openstack.org/wiki/WholeHostAllocation. What's missing is
the ability to automate the creation and release of those pools based on a
lease schedule.

Thanks
Patrick

Eviction:
Nikolay, to your point that we might evict something that was already 
paid for: in the design I have in mind, this would only happen if the 
policies set up by the operator caused one reservation to be weighted 
higher than another reservation. Maybe because one client paid more? 
The point is that this would be configurable and the sensible default 
is to not evict anything.



On Fri, Aug 9, 2013 at 8:05 AM, Nikolay Starodubtsev wrote:


Hello, Patrick!

We have several reasons to think that for the virtual resources
this possibility is interesting. If we speak about physical
resources, the user may use them in different ways, that's why it
is impossible to include base actions with them in the reservation
service. But speaking about virtual reservations, let's imagine the
user wants to reserve a virtual machine. He knows everything about
it - its parameters, flavor and time to be leased for. Really, in
this case the user wants to have an already working (or at least starting
to work) reserved virtual machine, and it would be great to include
this opportunity in the reservation service. We are thinking about
base actions for the virtual reservations that will be supported
by Climate, like boot/delete for instance, create/delete for
volume and create/delete for stacks. The same will be with
volumes, IPs, etc. As for more complicated behaviour, it may be
implemented in Heat. This will make reservations simpler to use
for the end users.

Don't you think so?

P.S. Also we remember the problem you mentioned some letters
ago - how to guarantee that the user will have an already working and
prepared host / VM / stack / etc. by the time the lease actually
starts, not just "lease begins and the preparing process begins too".
We are working on it now.


On Thu, Aug 8, 2013 at 8:18 PM, Patrick Petit wrote:

Hi Nikolay,

Relying on Heat for orchestration is obviously the right thing
to do. But there is still something in your design approach
that I am having difficulties to comprehend since the
beginning. Why do you keep thinking that orchestration and
reservation should be treated together? That's adding
unnecessary complexity IMHO. I just don't get it. Wouldn't it
be much simpler and sufficient to say that there are pools of
reserved resources you create through the reservation service.
Those pools could be of different types i.e. host, instance,
volume, network,.., whatever if that's really needed. Those
pools are identified by a unique id that you pass along when
the resource is created. That's it. You know, the AWS
reservation service doesn't even care about referencing a
reservation when an instance is created. The association
between the two just happens behind the scene. That would work
in all scenarios, manual, automatic, whatever... So, why do
you care so much about this in a first place?
Thanks,
Patrick

On 8/7/13 3:35 PM, Nikolay Starodubtsev wrote:

Patrick, responding to your comments:

1) Dina mentioned "start automatically" and "start manually"
only as examples of how these policies might look. It
doesn't seem to be a correct approach to put orchestration
functionality (that belongs to Heat) in Climate. That's why
now we can implement the basics like starting a Heat stack, and
for more complex actions we may later utilize something like the
Convection (Task-as-a-Service) project.

2) If we agree th

Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-12 Thread Patrick Petit

On 8/9/13 3:05 PM, Nikolay Starodubtsev wrote:

Hello, Patrick!

We have several reasons to think that for the virtual resources this
possibility is interesting. If we speak about physical resources, the user
may use them in different ways, that's why it is impossible to
include base actions with them in the reservation service. But
speaking about virtual reservations, let's imagine the user wants to
reserve a virtual machine. He knows everything about it - its
parameters, flavor and time to be leased for. Really, in this case the
user wants to have an already working (or at least starting to work)
reserved virtual machine, and it would be great to include this
opportunity in the reservation service.
We are thinking about base actions for the virtual reservations that 
will be supported by Climate, like boot/delete for instance, 
create/delete for volume and create/delete for the stacks. The same 
will be with volumes, IPs, etc. As for more complicated behaviour, it 
may be implemented in Heat. This will make reservations simpler to use 
for the end users.


Don't you think so?
Well yes and no. It really depends upon what you put behind those
lease actions. The view I am trying to sustain is separation of duties
to keep the service simple, ubiquitous and non-prescriptive of a certain
kind of usage pattern. In other words, keep Climate for reservation of
capacity (physical or virtual), Heat for orchestration, and so forth.
Consider for example the case of reservation as a non-technical act
but rather as a business enabler for wholesale activities. You don't need,
and probably don't want, to start or stop any resource there. I do not
deny that there are cases where it is desirable, but then how
reservations are used and composed together at the end of the day mainly
depends on exogenous factors which couldn't be anticipated because they
are driven by the business.


And so, rather than coupling reservations with wired resource 
instantiation actions, I would rather couple them with notifications 
that everybody can subscribe to (as opposed to the Resource Manager 
only) which would let users decide what to do with the life-cycle 
events. The what to do may very well be what you advocate i.e. start a 
full stack of reserved and interwoven resources, or at the other end of 
the spectrum, do nothing at all. This approach IMO would keep things 
more open.


P.S. Also we remember the problem you mentioned some letters ago
- how to guarantee that the user will have an already working and prepared
host / VM / stack / etc. by the time the lease actually starts, not just
"lease begins and the preparing process begins too". We are working on it now.
Yes. I think I was explicitly referring to host instantiation, also
because there is no support for that in the Nova API. Climate should support
some kind of "reservation kick-in heads-up" notification whereby the
provider and/or some automated provisioning tools could do the heavy
lifting work of bringing physical hosts online before a host
reservation lease starts. I think it doesn't have to be rocket science
either. It's probably sufficient to make Climate fire up a notification
that says "Lease starting in x seconds", x being an offset value against
T0 that could be defined by the operator based on heuristics. A
dedicated (e.g. IPMI) module of the Resource Manager for host
reservation would subscribe as a listener to those events.
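
A sketch of such a heads-up notification, with made-up field names since
nothing like this exists yet:

    event_type: climate.lease.starting_soon   # hypothetical
    lease_id: <lease uuid>
    lease_start: 2013-08-20T09:00:00Z
    offset_seconds: 900   # the operator-defined x against T0

The IPMI module would listen for these events and power the reserved
hosts back on during the offset window.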



On Thu, Aug 8, 2013 at 8:18 PM, Patrick Petit wrote:


Hi Nikolay,

Relying on Heat for orchestration is obviously the right thing to
do. But there is still something in your design approach that I am
having difficulties to comprehend since the beginning. Why do you
keep thinking that orchestration and reservation should be treated
together? That's adding unnecessary complexity IMHO. I just don't
get it. Wouldn't it be much simpler and sufficient to say that
there are pools of reserved resources you create through the
reservation service. Those pools could be of different types i.e.
host, instance, volume, network,.., whatever if that's really
needed. Those pools are identified by a unique id that you pass
along when the resource is created. That's it. You know, the AWS
reservation service doesn't even care about referencing a
reservation when an instance is created. The association between
the two just happens behind the scene. That would work in all
scenarios, manual, automatic, whatever... So, why do you care so
much about this in a first place?
Thanks,
Patrick

On 8/7/13 3:35 PM, Nikolay Starodubtsev wrote:

Patrick, responding to your comments:

1) Dina mentioned "start automatically" and "start manually" only
as examples of how these policies might look. It doesn't …

Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-08 Thread Patrick Petit

Hi Nikolay,

Relying on Heat for orchestration is obviously the right thing to do. 
But there is still something in your design approach that I have had
difficulty comprehending since the beginning. Why do you keep thinking
that orchestration and reservation should be treated together? That's
adding unnecessary complexity IMHO. I just don't get it. Wouldn't it be
much simpler and sufficient to say that there are pools of reserved
resources you create through the reservation service? Those pools could
be of different types, i.e. host, instance, volume, network,.., whatever,
if that's really needed. Those pools are identified by a unique id that
you pass along when the resource is created. That's it. You know, the
AWS reservation service doesn't even care about referencing a
reservation when an instance is created. The association between the two
just happens behind the scenes. That would work in all scenarios, manual,
automatic, whatever... So, why do you care so much about this in the first
place?

Thanks,
Patrick
On 8/7/13 3:35 PM, Nikolay Starodubtsev wrote:

Patrick, responding to your comments:

1) Dina mentioned "start automatically" and "start manually" only as
examples of how these policies might look. It doesn't seem to be a
correct approach to put orchestration functionality (that belongs to
Heat) in Climate. That's why now we can implement the basics like
starting a Heat stack, and for more complex actions we may later utilize
something like the Convection (Task-as-a-Service) project.


2) If we agree that Heat is the main consumer of
Reservation-as-a-Service, we can agree that a lease may be created
according to one of the following scenarios (but not multiple):
- a Heat stack (with requirements on the stack's contents) as a resource
to be reserved
- some amount of physical hosts (random ones or filtered based on
certain characteristics).

- some amount of individual VMs OR Volumes OR IPs

3) Heat might be the main consumer of virtual reservations. If not, 
Heat will require development efforts in order to support:

- reservation of a stack
- waking up a reserved stack
- performing all the usual orchestration work

We will support reservation of individual instances/volumes/IPs etc., but
the use case of "giving the user an already working group of connected VMs,
volumes, networks" seems to be the most interesting one.
As for Heat autoscaling, reservation of the maximum number of instances set
in the Heat template (not the minimum value) has to be implemented in
Heat. Some open questions remain though - like updating the Heat stack
when the user changes the template to support a higher max number of running
instances.


4) As a user, I would of course want to have it already working,
running any configured hosts/stacks/etc. by the time the lease starts. But
in reality we can't predict how much time the preparation process
should take for every single use case. So if you have an idea how this
should be implemented, it would be great if you shared your opinion.





Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-07 Thread Patrick Petit

Hi Scott,

Thanks for your inputs. Please see some comments below.
BR,
Patrick
On 8/6/13 6:58 PM, Scott Devoid wrote:

Some thoughts:

0. Should Climate also address the need for an eviction service? That 
is, a service that can weight incoming requests and existing resource 
allocations using some set of policies and evict existing resource
allocations to make room for the higher-weighted request. Eviction is
necessary if you want to implement a Spot-like service. And if you 
want Climate reservations that do not tie physical resources to the 
reservation, this is also required to ensure that requests against the 
reservation succeed. (Note that even if you do tie physical resources 
as in whole-host reservations, an eviction service can help when 
physical resources fail.)
Good point. We probably don't want to tie physical resources to a
reservation until the lease becomes active.


1. +1 Let end users continue to use existing APIs for resources and 
extend those interfaces with reservation attributes. Climate should 
only handle reservation crud and tracking.


2a. As an operator, I want the power to define reservations in terms 
of host capacity / flavor, min duration, max duration... and limit 
what kind of reservation requests can come in. Basically define 
"reservation flavors" and let users submit requests as instances of 
one "reservation flavor". If you let the end user define all of these 
parameters I will be rejecting a lot of reservation requests.
Sure, however it is unclear what the state of reflection is about
creating host flavor types and extending Nova and its API to support that
case...? Meanwhile, I think the approach proposed in
https://wiki.openstack.org/wiki/WholeHostAllocation to use pre-defined 
metadata in aggregates should work for categorizing host reservation 
flavors.


2b. What's the point of an "immediate lease"? This should be 
equivalent to making the request against Nova directly, right? Perhaps
there's a rationale for this w.r.t. billing? Otherwise I'm not sure
what utility this kind of reservation provides?
Well, Amazon uses it as a business enabler for wholesale activities.
From the end-user standpoint it ensures that the resources are available
for the duration of the lease. I think it is useful when your cloud has 
limited capacity with capacity contenders.


2c. Automatic vs. manual reservation approval:

What a user wants to know is whether a reservation can be granted
in a all-or-nothing manner at the time he is asking the lease.


This is a very hard problem to solve: you have to model resource 
availability (MTTF, MTBF), resource demand (how full are we going to 
be), and bake in explicit policies (this tenant gets priority) to 
automatically grant / deny such reservations. Having reservations go 
through a manual request -> operator approval system is extremely 
simple and allows operators to tackle the automated case as they need to.
I agree, but I think what Dina was referring to when speaking of
automatic vs manual reservation is the ability to express whether the
resource is started automatically or not by the reservation service. My
point was to say that reservation and instantiation are two different
and separate things, and so the specification of post-lease actions
should not be restricted to that, if only because a reservation
that is not started automatically by the reservation service could still
be started automatically by someone else, like auto-scaling.


All I need is a tool that lets a tenant spawn a single critical 
instance even when another tenant is running an application that's 
constantly trying to grab as many instances as it can get.
3. This will add a lot of complexity, particularly if you want to 
tackle #0.


5. (NEW) Note that Amazon's reserved instances feature doesn't tie 
reservations against specific instances. Effectively you purchase 
discount coupons to be applied at the end of the billing cycle. I am 
not sure how Amazon handles tenants with multiple reservations at 
different utilization levels (prioritize heavy -> light?).
Amazon knows how to handle a tenant's dedicated instances with
reservations in the context of VPC. Not sure either how, or if, it works
at all when mixed with prioritization levels. That's tough!


~ Scott


On Tue, Aug 6, 2013 at 6:12 AM, Patrick Petit wrote:


Hi Dina and All,
Please see comments inline. We can drill down on the specifics
off-line if that's more practical.
Thanks in advance,
Patrick

On 8/5/13 3:19 PM, Dina Belova wrote:


Hello, everyone!


Patrick, Julien, thank you so much for your comments. As for the
points Patrick mentioned in his letter, I'll describe our vision
for them below.


1) Patrick, thank you for the idea! I think it would be great to
add not only 'po

Re: [openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-06 Thread Patrick Petit
It's probably okay to do it in two separate
steps 1) create the lease, 2) add the reservation (although it seems
problematic in the case of an immediate lease), but the actual host
reservation request should include a cardinality factor so that if the
user wants to reserve x number of hosts in one chunk he can do it. The
reservation service would respond yes or no depending on the three
possible lease terms (immediate, best-effort and scheduled), along with
the operator's specific reservation policies, which have yet to be made
configurable one way or another. To be discussed...
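
For the record, a host reservation request with a cardinality factor
could look like this. The API shape below is purely a sketch for
discussion, not an agreed format:

    lease:
      name: hpc-campaign
      start_date: 2013-09-02T08:00:00Z
      end_date: 2013-09-06T18:00:00Z
      mode: scheduled            # immediate | best-effort | scheduled
      reservations:
        - resource_type: physical:host
          count: 16              # the cardinality factor
          capabilities:
            memory_mb: '>= 65536'
            vcpus: '>= 16'

The service would then answer yes or no for the whole chunk of 16 hosts
at lease creation time.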



3) We completely agree with you! Our 'nested reservation' vision was
created only to give the user the opportunity of checking the reservation
status of complex virtual resources (stacks) by having an opportunity
to check the status of all their 'nested' components, like VMs, networks,
etc. This can be done as well by using just Heat without a reservation
service. Now we are thinking about reservation as the reservation of
an OpenStack resource that has an ID in the OpenStack service DB, no
matter how complex it is (VM, network, floating IP, stack, etc.)


I am not sure I am getting this...? All I wanted to say is that
orchestration is a pretty big deal, and my recommendation is not to do
any of this at all in the reservation service but to rely on Heat instead
when possible. I understand you seem to agree with this... Also, I am
not sure how you can do stack reservations on the basis of a Heat
template when it has auto-scaling groups.



4) We were thinking about the Reservation Scheduler as a service that
controls the lease life cycle (starting, ending, making user
notifications, etc.) and communicates with the Reservation Manager via
RPC. The Reservation Manager can send user notifications about an
approaching lease end using Ceilometer (this question has to be
researched). As for the time needed to set up a physical reservation or a
complex virtual one, I think it would be better for the user to amortize
it within the lease period, because for physical resources it depends a
lot on the hardware, and for virtual ones on hardware, network and the
geographic location of the DCs.


Do you mean make the user aware of the provisioning lead time in the
lease schedule? How do you suggest they know how to account for that? In
practice, a lease is a contract, and so the reservations must be
available at the exact time the lease becomes effective.



Thank you,

DIna.



On Mon, Aug 5, 2013 at 1:22 PM, Julien Danjou wrote:


On Fri, Aug 02 2013, Patrick Petit wrote:

> 3. The proposal specifies that a lease can contain a combo of different
>    resource type reservations (instances, volumes, hosts, Heat
>    stacks, ...) that can even be nested and that the reservation
>    service will somehow orchestrate their deployment when the lease
>    kicks in. In my opinion, many use cases (at least ours) do not
>    warrant that level of complexity and so, if that's something
>    that is needed to support your use cases, then it should be delivered
>    as a module that can be loaded optionally in the system. Our preferred
>    approach is to use Heat for deployment orchestration.

I agree that this is not something Climate should be in charge of. If the
user wants to reserve a set of services and deploy them automatically,
Climate should provide the lease and Heat the deployment orchestration.
Also, for example, it may be good to be able to reserve automatically
the right amount of resources needed to deploy a Heat stack via
Climate.

--
Julien Danjou
// Free Software hacker / freelance consultant
// http://julien.danjou.info





--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.






[openstack-dev] [climate] Mirantis proposal to extend Climate to support virtual resources reservation

2013-08-02 Thread Patrick Petit
… that the Reservation Manager may want to
   query the Reservation Scheduler to check the state of the ongoing
   leases and scheduled leases, as opposed to just being notified when a
   lease starts and ends. That's because typically, in the case of
   physical host reservation, the Reservation Manager must anticipate
   (account for) the time it takes to bootstrap and provision the hosts
   before the lease starts.

I think it's probably enough as a starting point. I propose we iterate
on this first and see where it takes us.

Best regards,
Patrick



[openstack-dev] [Heat] autoscaling question

2013-06-21 Thread Patrick Petit

Dear All,

I'd like to have some confirmation about the mechanism that is going to 
be used to inform Heat's clients about instance create and destroy in an 
auto-scaling group. I am referring to the wiki page at 
https://wiki.openstack.org/wiki/Heat/AutoScaling.


I assume, but I may be wrong, that the same eventing mechanism as the
one being used for stack creation will be used...


An instance create in an auto-scaling group will generate an IN_PROGRESS
event for the instance being created, followed by CREATE_COMPLETE or
CREATE_FAILED based on the value returned by cfn-signal. Similarly, an
instance destroy will generate a DELETE_IN_PROGRESS event for the
instance being destroyed, followed by a DELETE_COMPLETE or DELETE_FAILED
in case the instance can't be destroyed in the group.


Adding a group id in the event details will be helpful to figure out 
what group the instance belongs to.
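
For instance, an event record for a scale-up could then look roughly like
this; the group_id detail is the addition being requested here, so this
is only a sketch with illustrative values:

    resource_name: instance-2
    resource_type: AWS::EC2::Instance
    resource_status: CREATE_COMPLETE
    resource_status_reason: state changed
    event_details:
      group_id: web_server_group   # which auto-scaling group it belongs to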


Thanks in advance for the clarification.
Patrick
