Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Pipes

On 01/18/2015 11:02 PM, Steven Dake wrote:

On 01/18/2015 07:59 PM, Jay Pipes wrote:

On 01/18/2015 11:11 AM, Steven Dake wrote:

On 01/18/2015 06:39 AM, Jay Lau wrote:

Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the
network? The current native docker support did not have any logic for
network management; are we going to leverage neutron or nova-network
just like nova-docker for this?

We can just use flannel for both of these use cases.  One way to approach
using flannel is to expect that docker networks will always be set up
the same way, connecting into a flannel network.


Note that the README on the Magnum GH repository states that one of
the features of Magnum is its use of Neutron:

"Integration with Neutron for k8s multi-tenancy network security."

Is this not true?


Jay,

We do integrate today with Neutron for multi-tenant network security.
Flannel runs on top of Neutron networks using vxlan. Neutron provides
multi-tenant security; Flannel provides container networking.  Together,
they solve the multi-tenant container networking problem in a secure way.
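
To make the flannel-over-Neutron split concrete, here is a minimal sketch of how flannel is commonly configured: it reads its settings from a well-known etcd key, and its encapsulated traffic then rides the Neutron tenant network underneath. The etcd endpoint, the CIDR, and the backend choice below are illustrative assumptions, not Magnum's actual provisioning code:

```python
# Sketch: seed flannel's network configuration in etcd so containers on
# every bay node share one overlay network.  /coreos.com/network/config is
# flannel's documented config key; the host and CIDR are hypothetical.
import json

import etcd  # python-etcd

client = etcd.Client(host='10.0.0.2', port=4001)  # etcd on the bay master (assumed)

flannel_config = {
    "Network": "10.100.0.0/16",  # overlay range, carved into per-node subnets
    "Backend": {"Type": "udp"},  # flannel's default backend (assumed here);
                                 # the Neutron tenant network underneath uses
                                 # vxlan, per the discussion above
}
client.write('/coreos.com/network/config', json.dumps(flannel_config))
```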


Gotcha. That makes sense, now.


It's a shame these two technologies can't be merged at this time, but we
will roll with it until someone invents an integration.


2) For k8s, swarm, we can leverage the scheduler in those container
management tools, but what about docker native support? How do we handle
resource scheduling for native docker containers?


I am not clear on how to handle native Docker scheduling if a bay has
more than one node.  I keep hoping someone in the community will propose
something that doesn't introduce an agent dependency in the OS.
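
Elsewhere in this thread, Jay Lau proposes a nova/cinder-style scheduler for this gap. For readers unfamiliar with that pattern, here is a hedged sketch of the filter-then-weigh approach it implies; the Node class and resource fields are invented for illustration and are not Magnum code:

```python
# Hypothetical filter-then-weigh scheduler in the nova/cinder style:
# drop nodes that cannot host the container, then pick the best survivor.
class Node(object):
    def __init__(self, hostname, free_memory_mb, container_count):
        self.hostname = hostname
        self.free_memory_mb = free_memory_mb
        self.container_count = container_count

def ram_filter(node, request):
    """Keep only nodes with enough free memory for the container."""
    return node.free_memory_mb >= request['memory_mb']

def least_loaded_weigher(node):
    """Prefer nodes running the fewest containers (spread placement)."""
    return -node.container_count

def schedule(nodes, request, filters=(ram_filter,), weigher=least_loaded_weigher):
    candidates = [n for n in nodes if all(f(n, request) for f in filters)]
    if not candidates:
        raise RuntimeError('no node in the bay satisfies the request')
    return max(candidates, key=weigher)

bay = [Node('node-1', 2048, 7), Node('node-2', 4096, 3)]
print(schedule(bay, {'memory_mb': 1024}).hostname)  # -> node-2
```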


So, perhaps because I've not been able to find any documentation for
Magnum besides the README (the link to the developer docs is a 404), I
have quite a bit of confusion around what value Magnum brings to the
OpenStack ecosystem versus a tenant just installing Kubernetes on one
or more of their VMs and managing container resources using k8s directly.


Agreed, documentation is sparse at this point.  The only thing we really
have at this time is the developer guide here:
https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst

Installing Kubernetes in one or more of their VMs would also work.  In
fact, you can do this easily today with larsks' heat-kubernetes Heat
templates, which we shamelessly borrowed; they work without Magnum at all.
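
For the curious, a rough sketch of that "without Magnum at all" path using python-heatclient follows. The endpoint, token, template filename, and parameter names are assumptions; check the heat-kubernetes repository for the real ones:

```python
# Sketch: launch a Kubernetes cluster by feeding a heat-kubernetes style
# template straight to Heat.  All names below are placeholders.
from heatclient.client import Client

heat = Client('1',
              endpoint='<heat-endpoint-url>',  # assumed
              token='<keystone-token>')        # assumed

with open('kubecluster.yaml') as f:  # template filename assumed
    template = f.read()

heat.stacks.create(
    stack_name='k8s',
    template=template,
    parameters={                         # parameter names assumed
        'ssh_key_name': 'mykey',
        'external_network_id': '<neutron-net-uuid>',
        'number_of_minions': 2,
    })
```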

We do intend to offer bare metal deployment of kubernetes as well, which
should offer a significant I/O performance advantage; performance is,
after all, what cloud services are all about.
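
To make the bare-metal path concrete: the idea is simply that the same container uOS image boots on an Ironic-backed Nova flavor instead of a virtual one. A hedged sketch with python-novaclient, where the credentials, image UUID, flavor, and key names are all placeholders:

```python
# Sketch: boot a container uOS on a bare-metal node through Nova's Ironic
# driver.  Credentials and resource names are placeholders, not Magnum's.
from novaclient import client

nova = client.Client('2', 'user', 'password', 'project',
                     'http://controller:5000/v2.0')  # assumed credentials

flavor = nova.flavors.find(name='baremetal')  # Ironic-backed flavor (assumed name)
nova.servers.create(name='k8s-node-0',
                    image='<glance-image-uuid>',  # uOS image, placeholder
                    flavor=flavor,
                    key_name='mykey')
```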

Of course someone could just deploy kubernetes themselves on bare metal,
but there isn't, at this time, an integrated tool that provides a
"Kubernetes-installation-as-a-service" endpoint.  Magnum does that job
today on master.  I suspect it can and will do more as we get past our
two-month mark of development ;)


Ha! No worries, Steven. :) Heck, I have enough trouble just keeping up 
with the firehose of information about new container-related stuff that 
I'm well impressed with the progress that the container team has made so 
far. I just wish I had ten more hours a day to read and research more on 
the topic!



Is the goal of Magnum to basically be like Trove is for databases and
be a Kubernetes-installation-as-a-Service endpoint?


I believe that is how the project vision started out.  I'm not clear on
the long term roadmap - I suspect there is a lot more value that can be
added in.  Some of these things, like manually or automatically scaling
the infrastructure, show some of our future plans.  I'd appreciate your
suggestions.


Well, when I wrap my brain around more of the container technology, I 
will certainly try and provide some feedback! :)


Best,
-jay


Thanks in advance for more info on the project. I'm genuinely curious.



Always a pleasure,
-steve


Best,
-jay



Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Steven Dake

On 01/18/2015 07:59 PM, Jay Pipes wrote:

On 01/18/2015 11:11 AM, Steven Dake wrote:

On 01/18/2015 06:39 AM, Jay Lau wrote:

Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the
network? The current native docker support did not have any logic for
network management; are we going to leverage neutron or nova-network
just like nova-docker for this?

We can just use flannel for both of these use cases.  One way to approach
using flannel is to expect that docker networks will always be set up
the same way, connecting into a flannel network.


Note that the README on the Magnum GH repository states that one of 
the features of Magnum is its use of Neutron:


"Integration with Neutron for k8s multi-tenancy network security."

Is this not true?


Jay,

We do integrate today with Neutron for multi-tenant network security.  
Flannel runs on top of Neutron networks using vxlan. Neutron provides 
multi-tenant security; Flannel provides container networking.  Together, 
they solve the multi-tenant container networking problem in a secure way.


It's a shame these two technologies can't be merged at this time, but we 
will roll with it until someone invents an integration.



2) For k8s, swarm, we can leverage the scheduler in those container
management tools, but what about docker native support? How do we handle
resource scheduling for native docker containers?


I am not clear on how to handle native Docker scheduling if a bay has
more than one node.  I keep hoping someone in the community will propose
something that doesn't introduce an agent dependency in the OS.


So, perhaps because I've not been able to find any documentation for 
Magnum besides the README (the link to the developer docs is a 404), I 
have quite a bit of confusion around what value Magnum brings to the 
OpenStack ecosystem versus a tenant just installing Kubernetes on one 
or more of their VMs and managing container resources using k8s directly.


Agreed, documentation is sparse at this point.  The only thing we really 
have at this time is the developer guide here:

https://github.com/stackforge/magnum/blob/master/doc/source/dev/dev-quickstart.rst

Installing Kubernetes in one or more of their VMs would also work.  In 
fact, you can do this easily today with larsks' heat-kubernetes Heat 
templates, which we shamelessly borrowed; they work without Magnum at all.


We do intend to offer bare metal deployment of kubernetes as well, which 
should offer a significant I/O performance advantage; performance is, 
after all, what cloud services are all about.


Of course someone could just deploy kubernetes themselves on bare metal, 
but there isn't, at this time, an integrated tool that provides a 
"Kubernetes-installation-as-a-service" endpoint.  Magnum does that job 
today on master.  I suspect it can and will do more as we get past our 
two-month mark of development ;)



Is the goal of Magnum to basically be like Trove is for databases and 
be a Kubernetes-installation-as-a-Service endpoint?


I believe that is how the project vision started out.  I'm not clear on 
the long term roadmap - I suspect there is a lot more value that can be 
added in.  Some of these things, like manually or automatically scaling 
the infrastructure, show some of our future plans.  I'd appreciate your 
suggestions.



Thanks in advance for more info on the project. I'm genuinely curious.



Always a pleasure,
-steve


Best,
-jay



Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Lau
Steven, I just filed two bps to track all the discussions about network and
scheduler support for native docker; we can have more discussion there.

https://blueprints.launchpad.net/magnum/+spec/native-docker-network
https://blueprints.launchpad.net/magnum/+spec/magnum-scheduler-for-docker

Another thing I want to discuss is still networking: currently, Magnum only
supports Neutron; what about nova-network support?

2015-01-19 0:39 GMT+08:00 Steven Dake :

>  On 01/18/2015 09:23 AM, Jay Lau wrote:
>
> Thanks Steven, more questions/comments in line.
>
> 2015-01-19 0:11 GMT+08:00 Steven Dake :
>
>>  On 01/18/2015 06:39 AM, Jay Lau wrote:
>>
>>   Thanks Steven, just some questions/comments here:
>>
>>  1) For native docker support, do we have some project to handle the
>> network? The current native docker support did not have any logic for
>> network management; are we going to leverage neutron or nova-network just
>> like nova-docker for this?
>>
>>  We can just use flannel for both of these use cases.  One way to approach
>> using flannel is to expect that docker networks will always be set up
>> the same way, connecting into a flannel network.
>>
> What about introducing neutron/nova-network support for native docker
> containers just like nova-docker?
>
>>
>>
> Does that mean introducing an agent on the uOS?  I'd rather not have
> agents, since all of these uOS systems have wonky filesystem layouts and
> there is not an easy way to customize them, with dib for example.
>
> 2) For k8s, swarm, we can leverage the scheduler in those container
>> management tools, but what about docker native support? How do we handle
>> resource scheduling for native docker containers?
>>
>>   I am not clear on how to handle native Docker scheduling if a bay has
>> more than one node.  I keep hoping someone in the community will propose
>> something that doesn't introduce an agent dependency in the OS.
>>
> My thinking is this: add a new scheduler just like what nova/cinder have
> now, and then we can migrate to gantt once it becomes mature. Comments?
>
>
> Cool, that WFM.  Too bad we can't just use gantt out the gate.
>
> Regards
> -steve
>
>
>
>> Regards
>> -steve
>>
>>
>>  Thanks!
>>
>> 2015-01-18 8:51 GMT+08:00 Steven Dake :
>>
>>> Hi folks and especially Magnum Core,
>>>
>>> Magnum Milestone #1 should be released early this coming week.  I wanted to
>>> kick off discussions around milestone #2 since Milestone #1 development is
>>> mostly wrapped up.
>>>
>>> The milestone #2 blueprints:
>>> https://blueprints.launchpad.net/magnum/milestone-2
>>>
>>> The overall goal of Milestone #1 was to make Magnum usable for
>>> developers.  The overall goal of Milestone #2 is to make Magnum usable by
>>> operators and their customers.  To do this we are implementing blueprints
>>> like multi-tenant, horizontal-scale, and the introduction of coreOS in
>>> addition to Fedora Atomic as a Container uOS.  We also plan to
>>> introduce some updates to allow bays to be more scalable.  We want bays to
>>> scale to more nodes manually (short term), as well as automatically (longer
>>> term).  Finally we want to tidy up some of the nit-picky things about
>>> Magnum that none of the core developers really like at the moment.  One
>>> example is the magnum-bay-status blueprint which will prevent the creation
>>> of pods/services/replicationcontrollers until a bay has completed
>>> orchestration via Heat.  Our final significant blueprint for milestone #2
>>> is the ability to launch our supported uOS on bare metal using Nova's
>>> Ironic plugin and the baremetal flavor.  As always, we want to improve our
>>> unit testing from what is now 70% to ~80% in the next milestone.
>>>
>>> Please have a look at the blueprints and feel free to comment on this
>>> thread or in the blueprints directly.  If you would like to see different
>>> blueprints tackled during milestone #2 that feedback is welcome, or if you
>>> think the core team[1] is on the right track, we welcome positive kudos too.
>>>
>>> If you would like to see what we tackled in Milestone #1, the code
>>> should be tagged and ready to run Tuesday January 20th.  Master should work
>>> well enough now, and the developer quickstart guide is mostly correct.
>>>
>>> The Milestone #1 blueprints are here for comparison's sake:
>>> https://blueprints.launchpad.net/magnum/milestone-1
>>>
>>> Regards,
>>> -steve
>>>
>>>
>>> [1] https://review.openstack.org/#/admin/groups/473,members
>>>
>>>
>>
>>
>>
>> --
>>   Thanks,
>>
>>  Jay Lau (Guangya Liu)
>>
>>

Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Pipes

On 01/18/2015 11:11 AM, Steven Dake wrote:

On 01/18/2015 06:39 AM, Jay Lau wrote:

Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the
network? The current native docker support did not have any logic for
network management; are we going to leverage neutron or nova-network
just like nova-docker for this?

We can just use flannel for both of these use cases.  One way to approach
using flannel is to expect that docker networks will always be set up
the same way, connecting into a flannel network.


Note that the README on the Magnum GH repository states that one of the 
features of Magnum is its use of Neutron:


"Integration with Neutron for k8s multi-tenancy network security."

Is this not true?


2) For k8s, swarm, we can leverage the scheduler in those container
management tools, but what about docker native support? How do we handle
resource scheduling for native docker containers?


I am not clear on how to handle native Docker scheduling if a bay has
more than one node.  I keep hoping someone in the community will propose
something that doesn't introduce an agent dependency in the OS.


So, perhaps because I've not been able to find any documentation for 
Magnum besides the README (the link to the developer docs is a 404), I have 
quite a bit of confusion around what value Magnum brings to the 
OpenStack ecosystem versus a tenant just installing Kubernetes on one or 
more of their VMs and managing container resources using k8s directly.


Is the goal of Magnum to basically be like Trove is for databases and be 
a Kubernetes-installation-as-a-Service endpoint?


Thanks in advance for more info on the project. I'm genuinely curious.

Best,
-jay



Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Steven Dake

On 01/18/2015 09:23 AM, Jay Lau wrote:

Thanks Steven, more questions/comments in line.

2015-01-19 0:11 GMT+08:00 Steven Dake :


On 01/18/2015 06:39 AM, Jay Lau wrote:

Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle
the network? The current native docker support did not have any
logic for network management; are we going to leverage neutron or
nova-network just like nova-docker for this?

We can just use flannel for both of these use cases.  One way to
approach using flannel is to expect that docker networks will
always be set up the same way, connecting into a flannel network.

What about introducing neutron/nova-network support for native docker 
containers just like nova-docker?





Does that mean introducing an agent on the uOS?  I'd rather not have 
agents, since all of these uOS systems have wonky filesystem layouts and 
there is not an easy way to customize them, with dib for example.



2) For k8s, swarm, we can leverage the scheduler in those
container management tools, but what about docker native support?
How do we handle resource scheduling for native docker containers?


I am not clear on how to handle native Docker scheduling if a bay
has more than one node.  I keep hoping someone in the community
will propose something that doesn't introduce an agent dependency
in the OS.

My thinking is this: add a new scheduler just like what nova/cinder 
have now, and then we can migrate to gantt once it becomes mature. 
Comments?


Cool, that WFM.  Too bad we can't just use gantt out the gate.

Regards
-steve



Regards
-steve



Thanks!

2015-01-18 8:51 GMT+08:00 Steven Dake :

Hi folks and especially Magnum Core,

Magnum Milestone #1 should be released early this coming week.
I wanted to kick off discussions around milestone #2 since
Milestone #1 development is mostly wrapped up.

The milestone #2 blueprints:
https://blueprints.launchpad.net/magnum/milestone-2

The overall goal of Milestone #1 was to make Magnum usable
for developers.  The overall goal of Milestone #2 is to make
Magnum usable by operators and their customers.  To do this
we are implementing blueprints like multi-tenant,
horizontal-scale, and the introduction of coreOS in addition
to Fedora Atomic as a Container uOS.  We also plan to
introduce some updates to allow bays to be more scalable.  We
want bays to scale to more nodes manually (short term), as
well as automatically (longer term).  Finally we want to tidy
up some of the nit-picky things about Magnum that none of the
core developers really like at the moment.  One example is
the magnum-bay-status blueprint which will prevent the
creation of pods/services/replicationcontrollers until a bay
has completed orchestration via Heat. Our final significant
blueprint for milestone #2 is the ability to launch our
supported uOS on bare metal using Nova's Ironic plugin and
the baremetal flavor.  As always, we want to improve our unit
testing from what is now 70% to ~80% in the next milestone.

Please have a look at the blueprints and feel free to comment
on this thread or in the blueprints directly.  If you would
like to see different blueprints tackled during milestone #2
that feedback is welcome, or if you think the core team[1] is
on the right track, we welcome positive kudos too.

If you would like to see what we tackled in Milestone #1, the
code should be tagged and ready to run Tuesday January 20th.
Master should work well enough now, and the developer
quickstart guide is mostly correct.

The Milestone #1 blueprints are here for comparison's sake:
https://blueprints.launchpad.net/magnum/milestone-1

Regards,
-steve


[1] https://review.openstack.org/#/admin/groups/473,members






-- 
Thanks,


Jay Lau (Guangya Liu)



Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Lau
Thanks Steven, more questions/comments in line.

2015-01-19 0:11 GMT+08:00 Steven Dake :

>  On 01/18/2015 06:39 AM, Jay Lau wrote:
>
>   Thanks Steven, just some questions/comments here:
>
>  1) For native docker support, do we have some project to handle the
> network? The current native docker support did not have any logic for
> network management; are we going to leverage neutron or nova-network just
> like nova-docker for this?
>
> We can just use flannel for both of these use cases.  One way to approach
> using flannel is to expect that docker networks will always be set up
> the same way, connecting into a flannel network.
>
What about introducing neutron/nova-network support for native docker
containers just like nova-docker?

>
>  2) For k8s, swarm, we can leverage the scheduler in those container
> management tools, but what about docker native support? How do we handle
> resource scheduling for native docker containers?
>
>   I am not clear on how to handle native Docker scheduling if a bay has
> more than one node.  I keep hoping someone in the community will propose
> something that doesn't introduce an agent dependency in the OS.
>
My thinking is this: add a new scheduler just like what nova/cinder have
now, and then we can migrate to gantt once it becomes mature. Comments?

>
> Regards
> -steve
>
>
>  Thanks!
>
> 2015-01-18 8:51 GMT+08:00 Steven Dake :
>
>> Hi folks and especially Magnum Core,
>>
>> Magnum Milestone #1 should be released early this coming week.  I wanted to
>> kick off discussions around milestone #2 since Milestone #1 development is
>> mostly wrapped up.
>>
>> The milestone #2 blueprints:
>> https://blueprints.launchpad.net/magnum/milestone-2
>>
>> The overall goal of Milestone #1 was to make Magnum usable for
>> developers.  The overall goal of Milestone #2 is to make Magnum usable by
>> operators and their customers.  To do this we are implementing blueprints
>> like multi-tenant, horizontal-scale, and the introduction of coreOS in
>> addition to Fedora Atomic as a Container uOS.  We also plan to
>> introduce some updates to allow bays to be more scalable.  We want bays to
>> scale to more nodes manually (short term), as well as automatically (longer
>> term).  Finally we want to tidy up some of the nit-picky things about
>> Magnum that none of the core developers really like at the moment.  One
>> example is the magnum-bay-status blueprint which will prevent the creation
>> of pods/services/replicationcontrollers until a bay has completed
>> orchestration via Heat.  Our final significant blueprint for milestone #2
>> is the ability to launch our supported uOS on bare metal using Nova's
>> Ironic plugin and the baremetal flavor.  As always, we want to improve our
>> unit testing from what is now 70% to ~80% in the next milestone.
>>
>> Please have a look at the blueprints and feel free to comment on this
>> thread or in the blueprints directly.  If you would like to see different
>> blueprints tackled during milestone #2 that feedback is welcome, or if you
>> think the core team[1] is on the right track, we welcome positive kudos too.
>>
>> If you would like to see what we tackled in Milestone #1, the code should
>> be tagged and ready to run Tuesday January 20th.  Master should work well
>> enough now, and the developer quickstart guide is mostly correct.
>>
>> The Milestone #1 blueprints are here for comparison's sake:
>> https://blueprints.launchpad.net/magnum/milestone-1
>>
>> Regards,
>> -steve
>>
>>
>> [1] https://review.openstack.org/#/admin/groups/473,members
>>
>
>
>
> --
>   Thanks,
>
>  Jay Lau (Guangya Liu)
>
>


-- 
Thanks,

Jay Lau (Guangya Liu)


Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Steven Dake

On 01/18/2015 06:39 AM, Jay Lau wrote:

Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the 
network? The current native docker support did not have any logic for 
network management; are we going to leverage neutron or nova-network 
just like nova-docker for this?
We can just use flannel for both of these use cases.  One way to approach 
using flannel is to expect that docker networks will always be set up 
the same way, connecting into a flannel network.


2) For k8s, swarm, we can leverage the scheduler in those container 
management tools, but what about docker native support? How do we handle 
resource scheduling for native docker containers?


I am not clear on how to handle native Docker scheduling if a bay has 
more than one node.  I keep hoping someone in the community will propose 
something that doesn't introduce an agent dependency in the OS.


Regards
-steve


Thanks!

2015-01-18 8:51 GMT+08:00 Steven Dake :


Hi folks and especially Magnum Core,

Magnum Milestone #1 should be released early this coming week. I
wanted to kick off discussions around milestone #2 since Milestone
#1 development is mostly wrapped up.

The milestone #2 blueprints:
https://blueprints.launchpad.net/magnum/milestone-2

The overall goal of Milestone #1 was to make Magnum usable for
developers.  The overall goal of Milestone #2 is to make Magnum
usable by operators and their customers.  To do this we are
implementing blueprints like multi-tenant, horizontal-scale, and
the introduction of coreOS in addition to Fedora Atomic as a
Container uOS.  We also plan to introduce some updates to
allow bays to be more scalable. We want bays to scale to more
nodes manually (short term), as well as automatically (longer
term).  Finally we want to tidy up some of the nit-picky things
about Magnum that none of the core developers really like at the
moment.  One example is the magnum-bay-status blueprint which will
prevent the creation of pods/services/replicationcontrollers until
a bay has completed orchestration via Heat.  Our final significant
blueprint for milestone #2 is the ability to launch our supported
uOS on bare metal using Nova's Ironic plugin and the baremetal
flavor.  As always, we want to improve our unit testing from what
is now 70% to ~80% in the next milestone.

Please have a look at the blueprints and feel free to comment on
this thread or in the blueprints directly.  If you would like to
see different blueprints tackled during milestone #2 that feedback
is welcome, or if you think the core team[1] is on the right
track, we welcome positive kudos too.

If you would like to see what we tackled in Milestone #1, the code
should be tagged and ready to run Tuesday January 20th.  Master
should work well enough now, and the developer quickstart guide is
mostly correct.

The Milestone #1 blueprints are here for comparison's sake:
https://blueprints.launchpad.net/magnum/milestone-1

Regards,
-steve


[1] https://review.openstack.org/#/admin/groups/473,members





--
Thanks,

Jay Lau (Guangya Liu)




Re: [openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-18 Thread Jay Lau
Thanks Steven, just some questions/comments here:

1) For native docker support, do we have some project to handle the
network? The current native docker support did not have any logic for
network management; are we going to leverage neutron or nova-network just
like nova-docker for this?
2) For k8s, swarm, we can leverage the scheduler in those container
management tools, but what about docker native support? How do we handle
resource scheduling for native docker containers?

Thanks!

2015-01-18 8:51 GMT+08:00 Steven Dake :

> Hi folks and especially Magnum Core,
>
> Magnum Milestone #1 should be released early this coming week.  I wanted to
> kick off discussions around milestone #2 since Milestone #1 development is
> mostly wrapped up.
>
> The milestone #2 blueprints:
> https://blueprints.launchpad.net/magnum/milestone-2
>
> The overall goal of Milestone #1 was to make Magnum usable for
> developers.  The overall goal of Milestone #2 is to make Magnum usable by
> operators and their customers.  To do this we are implementing blueprints
> like multi-tenant, horizontal-scale, and the introduction of coreOS in
> addition to Fedora Atomic as a Container uOS.  We also plan to
> introduce some updates to allow bays to be more scalable.  We want bays to
> scale to more nodes manually (short term), as well as automatically (longer
> term).  Finally we want to tidy up some of the nit-picky things about
> Magnum that none of the core developers really like at the moment.  One
> example is the magnum-bay-status blueprint which will prevent the creation
> of pods/services/replicationcontrollers until a bay has completed
> orchestration via Heat.  Our final significant blueprint for milestone #2
> is the ability to launch our supported uOS on bare metal using Nova's
> Ironic plugin and the baremetal flavor.  As always, we want to improve our
> unit testing from what is now 70% to ~80% in the next milestone.
>
> Please have a look at the blueprints and feel free to comment on this
> thread or in the blueprints directly.  If you would like to see different
> blueprints tackled during milestone #2 that feedback is welcome, or if you
> think the core team[1] is on the right track, we welcome positive kudos too.
>
> If you would like to see what we tackled in Milestone #1, the code should
> be tagged and ready to run Tuesday January 20th.  Master should work well
> enough now, and the developer quickstart guide is mostly correct.
>
> The Milestone #1 blueprints are here for comparison's sake:
> https://blueprints.launchpad.net/magnum/milestone-1
>
> Regards,
> -steve
>
>
> [1] https://review.openstack.org/#/admin/groups/473,members
>
>



-- 
Thanks,

Jay Lau (Guangya Liu)


[openstack-dev] [magnum][nova][ironic] Magnum Milestone #2 blueprints - request for comments

2015-01-17 Thread Steven Dake

Hi folks and especially Magnum Core,

Magnum Milestone #1 should be released early this coming week.  I wanted to 
kick off discussions around milestone #2 since Milestone #1 development 
is mostly wrapped up.


The milestone #2 blueprints:
https://blueprints.launchpad.net/magnum/milestone-2

The overall goal of Milestone #1 was to make Magnum usable for 
developers.  The overall goal of Milestone #2 is to make Magnum usable 
by operators and their customers.  To do this we are implementing 
blueprints like multi-tenant, horizontal-scale, and the introduction of 
coreOS in addition to Fedora Atomic as a Container uOS.  We also 
plan to introduce some updates to allow bays to be more scalable.  We 
want bays to scale to more nodes manually (short term), as well as 
automatically (longer term).  Finally we want to tidy up some of the 
nit-picky things about Magnum that none of the core developers really 
like at the moment.  One example is the magnum-bay-status blueprint 
which will prevent the creation of pods/services/replicationcontrollers 
until a bay has completed orchestration via Heat.  Our final significant 
blueprint for milestone #2 is the ability to launch our supported uOS on 
bare metal using Nova's Ironic plugin and the baremetal flavor.  As 
always, we want to improve our unit testing from what is now 70% to ~80% 
in the next milestone.
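
As a sketch of what the magnum-bay-status gate mentioned above might look like: allow no pod/service/replicationcontroller creation until the bay's Heat stack has finished orchestrating. The heat client handle and the bay attribute names are assumptions, not the blueprint's actual design:

```python
# Hypothetical guard: refuse container-resource creation until the bay's
# Heat stack reaches CREATE_COMPLETE.  bay.stack_id/bay.uuid are assumed.
def assert_bay_ready(heat_client, bay):
    """Raise if the bay's Heat orchestration has not completed yet."""
    stack = heat_client.stacks.get(bay.stack_id)
    if stack.stack_status != 'CREATE_COMPLETE':
        raise RuntimeError('bay %s is not ready (stack status: %s)'
                           % (bay.uuid, stack.stack_status))
```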


Please have a look at the blueprints and feel free to comment on this 
thread or in the blueprints directly.  If you would like to see 
different blueprints tackled during milestone #2 that feedback is 
welcome, or if you think the core team[1] is on the right track, we 
welcome positive kudos too.


If you would like to see what we tackled in Milestone #1, the code 
should be tagged and ready to run Tuesday January 20th.  Master should 
work well enough now, and the developer quickstart guide is mostly correct.


The Milestone #1 blueprints are here for comparison's sake:
https://blueprints.launchpad.net/magnum/milestone-1

Regards,
-steve


[1] https://review.openstack.org/#/admin/groups/473,members
