Wouldn't that limit the ability to share and optimize resources, and increase the number of operators needed (since each COE/bay would need its own set of operators managing it)?

If all tenants are in a single OpenStack cloud, and under, say, a single company, then there isn't much need for management isolation (in fact I think that feature is actually an anti-feature in a case like this), especially since that management is already handled by Keystone and its project/tenant and user associations.

Security isolation I get, but if the COE is already multi-tenant aware, and that multi-tenancy is connected into the OpenStack tenancy model, then it seems like that point is moot?

I get that the current tenancy boundary is the bay (aka the COE, right?), but is that changeable? Is everyone OK with that? It seems oddly matched to a company like Yahoo, or another private cloud, where one COE would I think be preferred and tenancy should go inside of it; versus an eggshell-like solution that seems like it would create more management and operability pain (now each Yahoo internal group that creates a bay/COE needs to figure out how to operate it, and resources can't be shared and/or orchestrated across bays; hmmm, seems like not fully using a COE for what it can do?).

Just my random thoughts, not sure how much is fixed in stone.

-Josh

Adrian Otto wrote:
Joshua,

The tenancy boundary in Magnum is the bay. You can place whatever
single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
Swarm). This allows you to use native tools to interact with the COE in
that bay, rather than using an OpenStack-specific client. If you want to
use the OpenStack client to create bays, pods, and containers, you
can do that today. You also have the choice, for example, to run kubectl
against your Kubernetes bay, if you so desire.
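To make that concrete, here is a minimal sketch of the OpenStack-client
path: creating a bay over the Magnum REST API with a Keystone token,
then switching to native tooling. The endpoint URL, payload fields, and
UUID placeholders are my assumptions for illustration, not verbatim
from the Magnum docs.

# Hypothetical sketch: create a bay via the Magnum REST API using a
# Keystone token, then interact with the resulting COE natively.
import requests

MAGNUM_URL = "http://magnum.example.com:9511/v1"  # assumed endpoint
TOKEN = "gAAAA..."  # a scoped Keystone token obtained elsewhere

# Create a Kubernetes bay from an existing baymodel.
resp = requests.post(
    MAGNUM_URL + "/bays",
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    json={"name": "k8s-bay", "baymodel_id": "<baymodel-uuid>",
          "node_count": 2},
)
resp.raise_for_status()

# Once the bay is ACTIVE, native tools talk to it directly, e.g.:
#   kubectl --server=https://<bay-api-address> get pods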

Bays offer both management and security isolation between multiple
tenants. There is no intent to share a single bay between multiple
tenants. In your use case, you would simply create two bays, one for
each of the yahoo-mail.XX tenants. I am not convinced that having an
uber-tenant makes sense.

Adrian

On Sep 30, 2015, at 1:13 PM, Joshua Harlow <harlo...@outlook.com> wrote:

Adrian Otto wrote:
Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The area where we have disagreement is whether the features offered for
the k8s COE should be mirrored in other COEs. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among the various COE
options (Bay Types). I see this as a relatively low-value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of a Docker Bay, you can simply
point a docker-compose client at it, and that will work fine.
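As a rough illustration of that last point (with docker-py as the
native client; the bay address and the lack of TLS setup below are
assumptions, not Magnum specifics):

# Hypothetical sketch: point a native Docker client at a Docker bay's
# API endpoint, the same endpoint docker-compose would use via
# DOCKER_HOST. The address is a placeholder.
import docker

client = docker.Client(base_url="tcp://<bay-api-address>:2376")

# Any native Docker operation now runs against the bay.
container = client.create_container(image="nginx", name="web")
client.start(container=container.get("Id"))
print(client.containers())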


So, an interesting question: how is tenancy going to work? Will
there be a Keystone tenancy <-> COE tenancy adapter? From my
understanding a whole bay (COE?) is owned by a tenant, which is great
for tenants that want to ~experiment~ with a COE, but seems disjoint
from the end goal of an integrated COE where the tenancy model of both
Keystone and the COE is either the same or is adapted via some adapter
layer.

For example:

1) Bay that is connected to uber-tenant 'yahoo'

1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
...

All that tenancy information is in Keystone, not replicated/synced
into the COE (or into some other COE-specific disjoint system).
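(To make the adapter idea concrete: a hypothetical sketch that mirrors
Keystone projects into Kubernetes namespaces, so COE tenancy follows
OpenStack tenancy. This is not an existing Magnum feature; the
one-namespace-per-project mapping, endpoint, and credentials are all
assumptions.)

# Hypothetical adapter sketch: sync Keystone projects into Kubernetes
# namespaces. Not an existing Magnum feature; illustration only.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client as ks_client
from kubernetes import client as k8s_client, config

auth = v3.Password(auth_url="http://keystone.example.com:5000/v3",
                   username="admin", password="secret",  # assumed creds
                   project_name="admin",
                   user_domain_id="default", project_domain_id="default")
keystone = ks_client.Client(session=session.Session(auth=auth))

config.load_kube_config()  # credentials for the bay's k8s API
core = k8s_client.CoreV1Api()

existing = {ns.metadata.name for ns in core.list_namespace().items}
for project in keystone.projects.list():
    # k8s namespace names must be DNS labels, so sanitize the project
    # name (e.g. 'yahoo-mail.us' -> 'yahoo-mail-us').
    name = project.name.lower().replace(".", "-")
    if name not in existing:
        core.create_namespace(k8s_client.V1Namespace(
            metadata=k8s_client.V1ObjectMeta(name=name)))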

Thoughts?

This one becomes especially hard if said COE(s) don't even have a
tenancy model in the first place :-/

Thanks,

Adrian

On Sep 30, 2015, at 8:58 AM, Devdatta Kulkarni
<devdatta.kulka...@rackspace.com> wrote:

+1 Hongbin.

From the perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE-specific credentials. That
way, container resources will become first-class components that
can be monitored using Ceilometer, access-controlled using
Keystone, and managed from within Horizon.
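A minimal sketch of that flow, where a scoped Keystone token (rather
than a COE credential) authorizes the container deployment; the Magnum
endpoint and payload fields are assumptions for illustration:

# Hypothetical sketch: deploy a container on a bay using only a
# Keystone token, no COE-specific credentials.
import requests
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url="http://keystone.example.com:5000/v3",
                   username="solum", password="secret",  # assumed creds
                   project_name="solum-apps",
                   user_domain_id="default", project_domain_id="default")
token = session.Session(auth=auth).get_token()  # scoped Keystone token

resp = requests.post(
    "http://magnum.example.com:9511/v1/containers",  # assumed endpoint
    headers={"X-Auth-Token": token},
    json={"name": "app-1", "image": "solum/app:latest",
          "bay_uuid": "<bay-uuid>"},
)
resp.raise_for_status()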

Regards,
Devdatta


From: Hongbin Lu <hongbin...@huawei.com>
Sent: Wednesday, September 30, 2015 9:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see COE deployment as a helper to achieve
that promise, rather than the main goal.

Best regards,
Hongbin


From: Jay Lau [mailto:jay.lau....@gmail.com]
Sent: September-29-15 10:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?



+1 to Egor. I think the final goal of Magnum is container as a
service, not COE deployment as a service. ;-)

Especially since we are also working on the Magnum UI: the Magnum UI
should expose some interfaces that enable end users to create
container applications, not just COE deployments.

I hope that Magnum can be treated as another "Nova" which is
focused on container service. I know it is difficult to unify all
of the concepts in the different COEs (k8s has pod, service, and rc;
swarm only has container; nova only has VM and PM with different
hypervisors), but this deserves some deep dive and thinking to see
how we can move forward.





On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz <e...@walmartlabs.com> wrote:

definitely ;), but there are some thoughts on Tom's email.

I agree that we shouldn't reinvent APIs, but I don't think Magnum
should only focus on deployment (I feel we will become another
Puppet/Chef/Ansible module if we do :)). I believe our goal should be
to seamlessly integrate Kub/Mesos/Swarm into the OpenStack ecosystem
(Neutron/Cinder/Barbican/etc), even if we need to step into the
Kub/Mesos/Swarm communities for that.

— Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Tuesday, September 29, 2015 at 08:44
To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen
(danehans) <daneh...@cisco.com> wrote:


+1

From: Tom Cammann <tom.camm...@hpe.com>
Reply-To: openstack-dev@lists.openstack.org
Date: Tuesday, September 29, 2015 at 2:22 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

My thinking over the last couple of months has been to
completely deprecate the COE-specific APIs such as pod/service/rc
and container.

As we now support Mesos, Kubernetes and Docker Swarm, it's going to
be very difficult, and probably a wasted effort, to try to
consolidate their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:

Would it make sense to ask the opposite of Wanghua's question: should
pod/service/rc be deprecated if the user can easily get to the k8s
API? Even if we want to orchestrate these in a Heat template, the
corresponding Heat resources can just interface with k8s instead of
Magnum.

Ton Ngo


From: Egor Guz <e...@walmartlabs.com>
To: openstack-dev@lists.openstack.org
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?



Also, I believe docker-compose is just a command-line tool which
doesn't have any API or scheduling features. But during the last
DockerCon hackathon, PayPal folks implemented a docker-compose
executor for Mesos (https://github.com/mohitsoni/compose-executor)
which can give you a pod-like experience.

— Egor

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Date: Monday, September 28, 2015 at 22:03
To: "OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do follow your logic, but docker-compose only needs the docker
API to operate. We are intentionally avoiding re-inventing the
wheel. Our goal is not to replace Docker Swarm (or other existing
systems), but to complement them. We want to offer users of
Docker the richness of native APIs and supporting tools. This way
they will not need to compromise features or wait longer for us to
implement each new feature as it is added. Keep in mind that our
pod, service, and replication controller resources pre-date this
philosophy. If we started out with the current approach, those
would not exist in Magnum.

Thanks,

Adrian

On Sep 28, 2015, at 8:32 PM, 王华
<wanghua.hum...@gmail.com> wrote:

Hi folks,

Magnum now exposes service, pod, etc. to users in the Kubernetes COE,
but exposes only container in the Swarm COE. As I understand it, Swarm
is only a container scheduler, which is like Nova in OpenStack. Docker
Compose is an orchestration program, which is like Heat in OpenStack.
k8s is the combination of scheduler and orchestration. So I think it
is better to also expose the compose APIs to users, which are at the
same level as k8s.


Regards,
Wanghua









--
Thanks,
Jay Lau (Guangya Liu)





