[openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-18 Thread Lakshminaraya Renganarayana

Hi,

In the last OpenStack Heat meeting there was good interest in proposals for
cross-vm synchronization and communication, and I had mentioned the
prototype I have built. I had also promised to post an outline of
the prototype ... here it is. I might have missed some details; please feel
free to ask or comment, and I would be happy to explain more.
---
Goal of the prototype: Enable cross-vm synchronization and communication
using a high-level declarative description (no wait-conditions), with chef as
the CM tool.

Design rationale / choices of the prototype (note that these were made just
for the prototype and I am not proposing them to be the choices for
Heat/HOT):

D1: No new construct in Heat template
=> use metadata sections
D2: No extensions to core Heat engine
=> use a pre-processor that will produce a Heat template that the
standard Heat engine can consume
D3: Do not require chef recipes to be modified
=> use a convention of accessing inputs/outputs from chef node[][]
=> use Ruby meta-programming to intercept reads/writes to node[][] and
forward the values
D4: Use a standard distributed coordinator (don't reinvent)
=> use Zookeeper as a coordinator and as a global data space for
communication

Overall, the flow is the following:
1. User specifies a Heat template with details about software config and
dependences in the metadata section of resources (see step S1 below).
2. A pre-processor consumes this augmented Heat template and produces
another Heat template with user-data sections containing cloud-init scripts,
and also sets up a Zookeeper instance with enough information to coordinate
between the resources at runtime to realize the dependences and
synchronization (see step S2).
3. The generated Heat template is fed into the standard Heat engine for
deployment. After the VMs are created, the cloud-init script kicks in: it
installs chef-solo and then starts the execution of the roles specified in
the metadata section. During this execution of the recipes the coordination
is realized (see steps S2 and S3 below).

Implementation scheme:
S1. Use the metadata section of each resource to describe (see attached
example):
- a list of roles
- inputs to and outputs from each role and their mapping to resource
attrs (any attr)
- convention: these inputs/outputs are passed through chef node attrs
node[][]
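
To make the convention concrete, here is a rough sketch of the metadata for
one resource, shown as the Python dict the pre-processor would see after
parsing the template; the role names, node attribute keys, and the referenced
resource are hypothetical, not taken from the actual prototype:

```python
# Hypothetical metadata for a resource "app_server", shown as the Python
# dict the pre-processor would see after parsing the template. The role
# names, node attribute keys, and the referenced resource "db_server" are
# illustrative only.
app_server_metadata = {
    "roles": ["base", "app"],  # chef roles to run, in this order
    "inputs": {
        # chef node attr that a recipe reads  <-  where the value comes from
        "node[app][db_host]": {"resource": "db_server", "attr": "PrivateIp"},
        "node[app][db_pass]": {"resource": "db_server",
                               "attr": "node[db][password]"},
    },
    "outputs": {
        # chef node attr that a recipe writes  ->  name it is published under
        "node[app][endpoint_url]": "app_server.endpoint_url",
    },
}
```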

S2. Dependence analysis and cloud-init script generation

Dependence analysis:
- resolve every reference that can be statically resolved using
Heat's functions (this step just uses Heat's current dependence analysis --
thanks to Zane Bitter for helping me understand this)
- flag all unresolved references as values to be resolved at run-time and
communicated via the coordinator
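
A minimal sketch of this classification step, assuming the metadata shape
sketched above (the real pre-processor reuses Heat's own dependence analysis;
this only illustrates the static vs. run-time split):

```python
def classify_references(resources):
    """Split each resource's inputs into those Heat can resolve with its own
    functions (e.g. an attribute of another resource, available via GetAtt)
    and those that are only produced at run time by a recipe and must flow
    through the coordinator.

    `resources` maps resource name -> a metadata dict shaped like the
    hypothetical example above.
    """
    static, runtime = {}, {}
    for name, meta in resources.items():
        for node_attr, source in meta.get("inputs", {}).items():
            # Assumed convention: if the source names a chef node attribute,
            # the value is produced by a recipe and cannot be resolved
            # statically; otherwise it is a resource attribute Heat knows.
            target = runtime if source["attr"].startswith("node[") else static
            target.setdefault(name, {})[node_attr] = source
    return static, runtime
```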

Use cloud-init in user-data sections:
- automatically generate a script that bootstraps chef and runs
the roles/recipes in the order specified in the metadata section
- generate dependence info for Zookeeper to coordinate at runtime
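
Roughly, the generated user-data could be assembled along these lines; the
bootstrap commands, file paths, and environment-file convention are
illustrative assumptions, not the prototype's actual output:

```python
import json

def make_user_data(resource_name, roles, zk_endpoint):
    """Assemble a cloud-init user-data script that bootstraps chef-solo and
    runs the given roles in order. Every command below is an illustrative
    assumption, not the prototype's actual output."""
    node_json = json.dumps({"run_list": ["role[%s]" % r for r in roles]})
    return "\n".join([
        "#!/bin/bash",
        "set -e",
        # install chef-solo (installation method is an assumption)
        "curl -L https://www.opscode.com/chef/install.sh | bash",
        # record where the coordinator lives, for the interception layer
        "echo 'ZK_ENDPOINT=%s' > /etc/coordination.env" % zk_endpoint,
        "echo 'RESOURCE_NAME=%s' >> /etc/coordination.env" % resource_name,
        # write the run list derived from the metadata section, then converge
        "mkdir -p /etc/chef",
        "cat > /etc/chef/node.json <<'EOF'\n%s\nEOF" % node_json,
        "chef-solo -j /etc/chef/node.json",
    ])
```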

S3. Coordinate synchronization and communication at run-time
- intercept reads and writes to node[][]
- if it is a remote read, get it from Zookeeper
- execution blocks until the value is available
- if a write is for a value required by a remote resource, write the
value to Zookeeper
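
In the prototype this interception is done in Ruby inside chef; purely as a
conceptual sketch, the Zookeeper side of it could look like this in Python,
using kazoo as one possible client and an assumed /values/<resource>/<attr>
path convention:

```python
import time
from kazoo.client import KazooClient  # assumption: any Zookeeper client would do

class Coordinator(object):
    """Blocking reads and writes against a shared Zookeeper namespace.
    The /values/<resource>/<attr> path layout is an assumed convention."""

    def __init__(self, hosts="127.0.0.1:2181", prefix="/values"):
        self.zk = KazooClient(hosts=hosts)
        self.zk.start()
        self.prefix = prefix

    def _path(self, resource, attr):
        return "%s/%s/%s" % (self.prefix, resource, attr)

    def read(self, resource, attr, poll=1.0):
        """Block until the remote value has been published, then return it."""
        path = self._path(resource, attr)
        while not self.zk.exists(path):  # polling keeps the sketch short;
            time.sleep(poll)             # a watch would avoid the busy wait
        data, _stat = self.zk.get(path)
        return data.decode("utf-8")

    def write(self, resource, attr, value):
        """Publish a locally produced value that a remote resource needs."""
        path = self._path(resource, attr)
        data = value.encode("utf-8")
        if self.zk.exists(path):
            self.zk.set(path, data)
        else:
            self.zk.create(path, data, makepath=True)
```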

The prototype is implemented in Python and Ruby is used for chef
interception.

There are alternatives for many of the choices I have made for the
prototype:
- Zookeeper can be replaced with any other service that provides a
data space and distributed coordination
- chef can be replaced by any other CM tool (a little bit of design /
convention is needed for other CM tools because of the interception used in
the prototype to catch reads/writes to node[][])
- the whole dependence analysis can be integrated into Heat's
dependence analyzer
- the component construct proposed recently (by Steve Baker) for
HOT/Heat can be used to specify much of what is specified using the
metadata sections in this prototype.

I am interested in using my experience with this prototype to contribute to
HOT/Heat's cross-vm synchronization and communication design and code.  I
look forward to your comments.

Thanks,
LN


Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-18 Thread Lakshminaraya Renganarayana
Just wanted to add a couple of clarifications:

1. The cross-vm dependences are captured via the reads/writes of attributes
in resources and in software components (described in metadata sections).

2. These dependences are then realized via blocking reads and writes to
Zookeeper, which provides the cross-vm synchronization and communication of
values between the resources.

Thanks,
LN



Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Thomas Spatzier
Hi Lakshmi,

you mentioned an example in your original post, but I did not find it. Can
you add the example?


Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Stan Lagun
Hi Lakshminarayanan,

Seems like a solid plan.
I'm probably wrong here, but isn't this too tied to chef? I believe the
solution should be equally suitable for chef, puppet, SaltStack, Murano, or
the case where all I need is just plain bash script execution. It may be
difficult to intercept a script's reads the way it is possible with chef's
node[][]. In Murano we have a generic agent that can integrate all such
deployment platforms using a common syntax. The agent specification can be
found here: https://wiki.openstack.org/wiki/Murano/UnifiedAgent; it may be
helpful, or at least a source of design ideas.

I'm very positive about adopting such a solution in Heat. There would be a
significant amount of work to abstract all the underlying technologies (chef,
Zookeeper, etc.) so that they become pluggable and replaceable without
introducing hard-coded dependencies for Heat, and to bring everything to a
production quality level. We could collaborate on bringing such a solution to
Heat if it is accepted by Heat's core team and community.




Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Lakshminaraya Renganarayana

Hi Stan,

Thanks for the comments. As you have observed, the prototype that I have
built is tied to Chef. I just wanted to describe it here for reference,
not as a proposal for the general implementation. What I would like to
work on is a more general solution that is agnostic to (or works with any)
underlying CM tool (such as chef, puppet, saltstack, murano, etc.).

Regarding identifying reads/writes: I was thinking that we could come up
with a general syntax + semantics for explicitly defining the reads/writes
of Heat components. I think we can extend Steve Baker's recent proposal to
include inputs/outputs in software component definitions. Your
experience with the Unified Agent would be valuable for this. I would be
happy to collaborate with you!
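
Purely as a strawman for that discussion, a CM-tool-agnostic software
component with explicit inputs/outputs might be declared along these lines
(shown as a parsed Python structure; the field names are invented for
illustration and are not taken from Steve Baker's proposal):

```python
# Strawman only: a CM-tool-agnostic software component with declared
# inputs/outputs, shown as a parsed Python structure. The field names are
# invented for illustration and are not taken from Steve Baker's proposal.
db_client_component = {
    "name": "configure_db_client",
    "type": "chef",  # could equally be puppet, salt, a plain script, ...
    "definition": "role[db_client]",
    "inputs": [
        {"name": "db_host", "source": "db_server.first_address"},
        {"name": "db_password", "source": "db_setup.generated_password"},
    ],
    "outputs": [
        {"name": "connection_string"},  # published for other components
    ],
}
```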

Thanks,
LN



Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Lakshminaraya Renganarayana

Thomas Spatzier  wrote on 10/21/2013 08:29:47
AM:

> you mentioned an example in your original post, but I did not find it.
> Can you add the example?

Hi Thomas,

Here is the example I used earlier:

For example, consider a two-VM app, with VMs vmA and vmB, and a set of
software components (ai's and bi's) to be installed on them:

vmA = base-vmA + a1 + a2 + a3
vmB = base-vmB + b1 + b2 + b3

Let us say that software component b1 of vmB requires a config value produced
by software component a1 of vmA. How do we declaratively model this
dependence? Clearly, modeling a dependence between just base-vmA and base-vmB
is not enough. However, defining a dependence between the whole of vmA and
vmB is too coarse. It would be ideal to be able to define a dependence at the
granularity of software components, i.e., vmB.b1 depends on vmA.a1. Of
course, it would also be good to capture what value is passed between vmB.b1
and vmA.a1, so that the communication can be facilitated by the orchestration
engine.
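
Using the hypothetical metadata shape from the original post, this dependence
might be declared roughly as follows (all names are illustrative only):

```python
# Hypothetical declaration of the vmA/vmB example, in the same parsed form
# used in the original post: b1 on vmB consumes a value that a1 on vmA
# produces. All names are illustrative.
vmA_metadata = {
    "roles": ["a1", "a2", "a3"],
    "outputs": {"node[a1][service_url]": "vmA.a1.service_url"},
}

vmB_metadata = {
    "roles": ["b1", "b2", "b3"],
    "inputs": {
        # the b1 recipe blocks at run time until vmA's a1 recipe has
        # written this value
        "node[b1][service_url]": {"resource": "vmA",
                                  "attr": "node[a1][service_url]"},
    },
}
```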


Thanks,
LN


Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-21 Thread Steven Hardy
On Fri, Oct 18, 2013 at 02:45:01PM -0400, Lakshminaraya Renganarayana wrote:

> The prototype is implemented in Python and Ruby is used for chef
> interception.

Where can we find the code?



Re: [openstack-dev] [Heat] A prototype for cross-vm synchronization and communication

2013-10-22 Thread Lakshminaraya Renganarayana

Hi Steven,

Steven Hardy  wrote on 10/21/2013 11:27:43 AM:
>
> On Fri, Oct 18, 2013 at 02:45:01PM -0400, Lakshminaraya Renganarayana
wrote:
> 
> > The prototype is implemented in Python and Ruby is used for chef
> > interception.
>
> Where can we find the code?

What part of the code are you interested in? The Python pre-processor part
or the Ruby chef interceptor part? I need to get clearance from IBM to post
it in a public Git repository. I am guessing it might be easy to get
clearance for the pre-processor code and a bit harder for the chef
interceptor code.

BTW, will you be attending the OpenStack summit in Hong Kong? I am planning
to, and I can show you a demo of this pre-processor there (if the IBM
clearance takes too long).

Thanks,
LN