I think it’s fair to assert that our generic scheduling interface should be 
based on Gantt. When Gantt reaches a maturity point where it’s appropriate to 
leverage it for container use cases, we should definitely consider switching 
to it. Along the way, we should remain engaged in Gantt design decisions to 
provide input.

In the short term we want a solution that works nicely for our Docker handler, 
because that’s an obvious functionality gap. The k8s handler already has a 
scheduler, so it can remain unchanged. Let’s not fall into the trap of 
over-engineering the scheduler; that can be very tempting but yields limited 
value.

My suggestion is that we focus on the right solution for the Docker backend for 
now, keeping in mind that we want a general-purpose scheduler in the future 
that could be adapted to work with a variety of container backends.

I want to recognize that Andrew’s thoughts are well considered: they aim to 
avoid rework and to remain agnostic about container backends. Further, I think 
resource scheduling is the sort of problem domain that lends itself well to a 
common solution with numerous use cases. If you look at the various schedulers 
that exist today, there are lots of similarities. We will find a multitude of 
scheduling algorithms, but probably not uniquely innovative scheduling 
interfaces. The interface to a scheduler will be relatively simple, and we 
could afford to collaborate a bit with the Gantt team to get solid ideas on the 
table for that. Let’s table that pursuit for now, and re-engage at our Midcycle 
meetup to explore the topic further. In the meantime, I’d like us to iterate 
on a suitable point solution for the Docker backend. A final iteration of that 
work may be to yank it completely and replace it with a common scheduler at a 
later point. I’m willing to accept that tradeoff for a quick delivery of a 
Docker-specific scheduler that we can learn from and iterate on.
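
To make the “relatively simple interface” point concrete, here is a rough 
sketch of the kind of contract I have in mind. To be clear, none of these 
names exist in Magnum today; this is purely illustrative:

# Illustrative sketch only -- none of these class or field names exist in
# Magnum today; the host and spec fields are assumptions for the example.
import abc
import random


class NoValidHost(Exception):
    pass


class Scheduler(abc.ABC):
    """The entire contract: given a container spec and candidate hosts,
    pick one host, or raise if none fit."""

    @abc.abstractmethod
    def select_host(self, container_spec, hosts):
        pass


class SimpleDockerScheduler(Scheduler):
    """A point solution for the Docker backend: filter, then choose."""

    def select_host(self, container_spec, hosts):
        # Filter pass: keep hosts with enough free memory for the request.
        fits = [h for h in hosts
                if h['free_memory_mb'] >= container_spec.get('memory_mb', 0)]
        if not fits:
            raise NoValidHost()
        # Trivial placement policy; a Nova-style weigher could replace
        # this later without changing the interface.
        return random.choice(fits)

If Gantt eventually becomes the common scheduler, swapping it in means 
replacing one implementation behind one method, which is exactly the tradeoff 
I’m describing above.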

Cheers,

Adrian

On Feb 9, 2015, at 10:57 PM, Jay Lau <jay.lau....@gmail.com> wrote:

Thanks Steve, I just want to discuss this a bit more. Per Andrew's comments, 
we need a generic scheduling interface, but if our focus is native Docker, is 
this still needed? Thanks!

2015-02-10 14:52 GMT+08:00 Steven Dake (stdake) <std...@cisco.com>:


From: Jay Lau <jay.lau....@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, February 9, 2015 at 11:31 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum

Steve,

So you mean we should focus on the Docker and k8s schedulers? I was a bit 
confused: why do we need to care about k8s? The k8s cluster is created by 
Heat, and once it is up, k8s has its own scheduler for creating 
pods/services/RCs.

So it seems we only need to care about scheduling for native Docker and the 
Ironic bay. Comments?

Ya, the scheduler only matters for native Docker.  An Ironic bay can run k8s 
or docker+swarm or something similar.

But yup, I understand your point.


Thanks!

2015-02-10 12:32 GMT+08:00 Steven Dake (stdake) <std...@cisco.com>:


From: Joe Gordon <joe.gord...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, February 9, 2015 at 6:41 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Scheduling for Magnum



On Mon, Feb 9, 2015 at 6:00 AM, Steven Dake (stdake) 
<std...@cisco.com> wrote:


On 2/9/15, 3:02 AM, "Thierry Carrez" <thie...@openstack.org> wrote:

>Adrian Otto wrote:
>> [...]
>> We have multiple options for solving this challenge. Here are a few:
>>
>> 1) Cherry-pick scheduler code from Nova, which already has a working
>>filter scheduler design.
>> 2) Integrate swarmd to leverage its scheduler[2].
>> 3) Wait for Gantt, when the Nova scheduler is moved out of Nova.
>>This is expected to happen about a year from now, possibly sooner.
>> 4) Write our own filter scheduler, inspired by Nova.
>
>I haven't looked enough into Swarm to answer that question myself, but
>how much would #2 tie Magnum to Docker containers?
>
>There is value for Magnum to support other container engines / formats
>(think Rocket/Appc) in the long run, so we should avoid early design
>choices that would prevent such support in the future.

Thierry,
Magnum has a bay object type which represents the underlying cluster
architecture used.  This could be Kubernetes, raw Docker, swarmd, or some
future invention.  This way Magnum can grow independently of the
underlying technology and still provide a satisfactory user experience amid
the chaos that is the container development world :)
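
Concretely, a bay ends up being little more than a typed handle that
dispatches to a backend-specific handler.  Here is a rough sketch of that
idea; the names are hypothetical, not Magnum's actual data model:

# Hypothetical sketch of the bay dispatch idea; not Magnum's actual model.


class KubernetesHandler(object):
    """k8s bays delegate container scheduling to k8s itself."""

    def create_container(self, spec):
        raise NotImplementedError('left to the k8s scheduler')


class NativeDockerHandler(object):
    """Native Docker bays need Magnum to pick a host."""

    def create_container(self, spec):
        raise NotImplementedError('this is where our scheduler plugs in')


BAY_HANDLERS = {
    'kubernetes': KubernetesHandler,
    'docker': NativeDockerHandler,
    # a future 'swarm' or 'rocket' handler slots in here
}


def handler_for(bay_type):
    try:
        return BAY_HANDLERS[bay_type]()
    except KeyError:
        raise ValueError('unsupported bay type: %s' % bay_type)

A new cluster architecture then only adds an entry to that table; nothing
above it has to change.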

While I don't disagree with anything said here, this does sound a lot like 
https://xkcd.com/927/


Andrew had suggested offering a unified standard user experience and API.  I 
think that matches the 927 comic pretty well.  Instead, I think we should 
offer each type of system through APIs that are similar in nature but expose 
the native features of that system.  In other words, we will offer OpenStack 
integration across the various parts of the container landscape.

We should strive to be conservative and pragmatic in our systems support, and 
only support container schedulers and container managers that have become 
strongly emergent systems.  At this point that means Docker and Kubernetes.  
Mesos might fit that definition as well.  Swarmd and Rocket are not yet 
strongly emergent, but they show promise of becoming so.  As a result, they 
are clearly systems we should be thinking about for our roadmap.  All of these 
systems present very similar operational models.

At some point competition will choke off new system design, placing an upper 
bound on the number of systems we have to deal with.

Regards
-steve



We will absolutely support relevant container technology, likely through
new bay formats (which are really just Heat templates).

Regards
-steve

>
>--
>Thierry Carrez (ttx)
>








--
Thanks,

Jay Lau (Guangya Liu)





--
Thanks,

Jay Lau (Guangya Liu)

